Description
What is the problem this feature will solve?
When the Node.js service is under very high load, multiple connections are processed at the same time in one worker (we currently use the cluster module in our project). We set `maxConnections` to limit each worker's connections. But we found that when a new request hits the `maxConnections` limit, the request is retried on other workers. Could we have an option so that, when a new request hits the limit, we simply drop the request instead of retrying it on other workers? Since the system is under very high load, the other workers may also be very busy at that moment. Here is an example on v22.7.0:
```js
const cluster = require('cluster');
const http = require('http');
const process = require('process');

if (cluster.isPrimary) {
  console.log(`Master ${process.pid} is running.\n`);
  for (let i = 0; i < 1; i++) {
    cluster.fork();
  }
} else {
  const server = http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  });
  server.maxConnections = 0;
  server.listen(8000, () => {
    console.log(`Worker ${process.pid} started`);
  });
}
```

What is the feature you are proposing to solve the problem?
For example, add an option such as `--maxconnections-drop-request` to Node.js's command-line options, applied at startup.
What alternatives have you considered?
No response