Multi-core Node.js using the cluster module

Node.js is single-threaded, which means that it does not take full advantage of all the cores the processor has. However, Node provides us with a module to delegate tasks to child processes: the cluster module. With this module we can make Node create copies of the program in other processes, where every child can share the same socket (listen on the same port), something that could not be done if we simply executed the processes separately (we would get the error EADDRINUSE).

Let us see the first example:
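
A minimal sketch of what this example could look like (the 4-worker count and the use of cluster.worker.id come from the description below; the exact log message is just illustrative):

var cluster = require('cluster');

if (cluster.isMaster) {
  // Master: create 4 worker processes
  for (var i = 0; i < 4; i++) {
    cluster.fork();
  }
} else {
  // Worker: print the ID assigned to this worker
  console.log('I am worker ' + cluster.worker.id);
}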

The cluster module works by creating processes called “workers”. To create them, we use cluster.fork(). In this example, the master process creates 4 workers. If we are in a worker process, cluster.isMaster is false and cluster.isWorker is true, and the other way around if we are in the master process. Therefore, the else branch of the code is only executed by workers. At the end, we show the worker’s ID using cluster.worker.id; cluster.worker is an object that carries information about the worker that is executing the code (if we are in the master process, we can access all the workers through cluster.workers).

As was previously mentioned, all the workers can listen to the same port, so let us see an example of an http server:
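
A sketch of such a server could be the following (every worker listens on port 8000, as described below; the response text is just one possible choice):

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  // Master: create 4 workers
  for (var i = 0; i < 4; i++) {
    cluster.fork();
  }
} else {
  // Worker: every worker listens on the same port
  http.createServer(function (req, res) {
    res.end('Answered by worker ' + cluster.worker.id);
  }).listen(8000);
}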

In this example, all the workers are listening on port 8000. If we access localhost:8000 we will obtain the ID of the worker that answered our request. If we refresh the page, the ID may change, showing that a different process answered the request.

Restarting dead processes

From the master process we can listen for certain events. One of them is ‘exit’, which is emitted when a child process dies. Using this event, we can restart a process as soon as it goes down and thus keep all of them permanently active.

Here we have the fragment of the master section with the addition we mentioned:
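
A sketch of that fragment, assuming the same 4 workers as before, could be:

if (cluster.isMaster) {
  for (var i = 0; i < 4; i++) {
    cluster.fork();
  }

  // When a worker dies, log it and start a replacement
  cluster.on('exit', function (worker, code, signal) {
    console.log('Worker ' + worker.id + ' died, starting a new one');
    cluster.fork();
  });
}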

If we want to try it, we can use kill <pid> (taskkill /PID <pid> in Windows) to end individual workers and see how the master simply starts a new one.
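
To know which PID to kill, each worker could, for example, print its own PID when it starts (this log line is not part of the original example, just a convenience):

console.log('Worker ' + cluster.worker.id + ' running with PID ' + process.pid);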

Sending messages between the master and the workers

In order to send messages from the master to a worker, we use worker.send({}), and to send them from a worker to the master, we use cluster.worker.send (or process.send, which is equivalent). In order to receive messages, we listen for the ‘message’ event (in a similar way to the ‘exit’ event we saw before).

In the following example, workers send information about their memory use to the master process; later, the master catches that event and shows it in the console.
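
A sketch of such an example could look like this (the shape of the message, an object with id and memory fields, is just one possible choice):

var cluster = require('cluster');

if (cluster.isMaster) {
  for (var i = 0; i < 4; i++) {
    var worker = cluster.fork();

    // Master: receive the report from each worker and print it
    worker.on('message', function (msg) {
      console.log('Memory usage of worker ' + msg.id + ':', msg.memory);
    });
  }
} else {
  // Worker: send its memory usage to the master
  process.send({
    id: cluster.worker.id,
    memory: process.memoryUsage()
  });
}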


Benchmark

For the benchmark to make sense, we have to add some CPU-intensive work; otherwise we would not see much improvement, since the I/O operations are non-blocking anyway. For this reason, the server was modified with a loop that burns CPU time on every request:

for (var x = 0; x < 9999999; x++) {}


Then, using loadtest (or any similar load-testing tool) we sent 1000 requests, simulating 100 concurrent clients:

$ loadtest -c 100 -n 1000 http://localhost:8000


The results we obtained were:
1 worker:

4 workers:

Using 4 workers we obtained a speedup of 2.65× in requests per second compared with using only one.

Further information

The official documentation (https://nodejs.org/api/cluster.html) is very clear and should be the main source of information.

