In this second part of the article, we will look into events, the EventEmitter, libuv, streams and pipes. This will give us a better understanding of how NodeJS handles requests and responses, and it will also help explain how the non-blocking I/O architecture of NodeJS is achieved.
NodeJS has two types of events:

  1. System Events
  2. Custom Events

System Events are lower-level events related to the Operating System (e.g. opening and reading files) while Custom Events are higher-level and are linked to the NodeJS server (e.g. receiving requests from the browser).
System Events come from the C library called libuv whereas Custom Events are written in JavaScript. Let's start with the Custom Events.

Custom Events in NodeJS
First, we need to look into event listeners. The concept of listeners should be familiar to anyone who has done some JavaScript coding for the browser. Usually this pertains to mouse events like click or mouseover and involves the built-in addEventListener() method, as in the sketch below. You can also handle events with callback functions, for example when asynchronous code runs once a future event occurs, as in AJAX, where the server sends a response to your request after some backend processing. There are also custom events created by JavaScript libraries like jQuery, AngularJS or Backbone.
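
For comparison, a minimal browser-style listener looks like this (this runs in the browser, not in NodeJS):

// browser code: log a message whenever the document is clicked
document.addEventListener('click', function() {
    console.log('The document was clicked');
});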

In order to implement a custom event we first create a custom emitter function with a property called events. This events property is an object that maps each event name to an array of listener functions. Next we create a method on the custom emitter's prototype called addListener (similar to JavaScript's addEventListener). All this method does is add a listener function to the array for the given event name. The second step involves creating the trigger or emit method that will invoke the listener functions, similar to jQuery's trigger() method.
This method is implemented by looking up the array for the given event name, looping through it and invoking each function it contains. Recall that we added functions to these arrays earlier.

// a simple custom event emitter
function MyEmitter() {
    // maps each event name to an array of listener functions
    this.events = {};
}

// register a listener function for the given event type
MyEmitter.prototype.addListener = function(type, listener) {
    this.events[type] = this.events[type] || [];
    this.events[type].push(listener);
};

// invoke every listener registered for the given event type
MyEmitter.prototype.trigger = function(type) {
    if (this.events[type]) {
        this.events[type].forEach(function(listener) {
            listener();
        });
    }
};

Next we will see how the custom emitter is used in a simple example:

var myEmtr = new MyEmitter();

myEmtr.addListener('loaded', function() {
    console.log('Yes it is loaded!');
});

myEmtr.trigger('loaded');

You can find the JavaScript event handling functionality in node > lib > events.js in the NodeJS GitHub repository. The libuv event wrappers can be found in node > src > fs_event_wrap.cc.
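
For comparison, the same 'loaded' example can be written with NodeJS's built-in EventEmitter from lib/events.js, where on() and emit() play the roles of our addListener and trigger:

var EventEmitter = require('events').EventEmitter;

var emitter = new EventEmitter();

emitter.on('loaded', function() {
    console.log('Yes it is loaded!');
});

emitter.emit('loaded');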

System Events from libuv
Within libuv there is a queue of events and an event loop which checks the queue for changes. Whenever an event is completed it is added to the queue by the Operating System, and when the event loop reaches it the loop notifies the V8 engine via a callback function.
In the libuv GitHub repository we can see the actual event loop code by following the path src > win > core.c. Search for uv_run, and within that function there is a while loop:

while (r != 0 && loop->stop_flag == 0) {
...
...
}
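
The effect of this loop can be observed from JavaScript: a callback queued for a completed event only runs after the currently executing code has finished. A minimal sketch:

// the callback is placed on the event queue and is only picked up
// by the event loop after the synchronous code below has finished
setTimeout(function() {
    console.log('second: picked up from the event queue');
}, 0);

console.log('first: currently executing code');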


Streams, Buffers and Pipes
In this section we will look into some important applications of the NodeJS EventEmitter. Let’s begin by defining a few fundamental terms and concepts.

Buffers
A Buffer is a temporary storage space for binary data that is being moved from one memory address to another.

Streams
A stream is a sequence of binary data that is transferred and delivered chunk by chunk over a period of time. The separate chunks are eventually combined into one whole piece of data.

Character encoding
Binary data is what computers understand at the lowest level, but humans cannot read a string of 1s and 0s. When developers write code in JavaScript, they use human-readable characters. Character encoding is the means by which those characters are transformed into binary data. JavaScript itself does not handle character encoding since it has no built-in support for dealing with raw bits or binary data (N.B. typed arrays, standardized in ECMAScript 2015, introduced this feature). However, a web server needs to deal with internet data streams, and NodeJS adds this capability to JavaScript through the Buffer class.
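
As a quick preview of the Buffer class covered next, here is utf8 encoding in action, turning characters into bytes and back:

var encoded = Buffer.from('ABC', 'utf8');

console.log(encoded);                   // <Buffer 41 42 43> - the utf8 byte values
console.log(encoded.toString('utf8'));  // 'ABC' - decoded back into characters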

NodeJS Buffer Class
The Buffer class in NodeJS allows JavaScript developers to manipulate binary data. This class makes it possible to interact with the octet streams found in TCP and file system operations.
In the code example below, we create a Buffer object and write some data into it. Buffer.from takes two arguments: a string and an encoding, which defaults to utf8 (it replaces the older, now deprecated new Buffer(...) constructor). Once the buffer is created its size cannot be changed; writing more data simply overwrites the previous values, and it will not expand or increase the buffer size.

// Buffer.from replaces the deprecated `new Buffer(...)` constructor
var myBuffer = Buffer.from('Portfour', 'utf8');

console.log(myBuffer);            // <Buffer 50 6f 72 74 66 6f 75 72>
console.log(myBuffer.toString()); // 'Portfour'

// write() starts at offset 0 and overwrites the existing bytes
myBuffer.write('xx');

console.log(myBuffer.toString()); // 'xxrtfour'



Reading Files using fs
The fs module source code can be found in the NodeJS GitHub repository under node > lib > fs.js. It contains a readSync function which takes a buffer as one of its arguments. This module allows JavaScript to access files on your computer's hard drive. This is a big deal because JavaScript in the browser is not allowed to do this for security reasons!

fs.readSync = function(fd, buffer, offset, length, position) {
  if (length === 0) {
    return 0;
  }
  ...

A simple file reading demo is shown in the code snippet below:

var fs = require('fs');

// synchronous read: blocks until the whole file is in memory
var readme = fs.readFileSync(__dirname + '/mytext.txt', 'utf8');
console.log(readme);

// asynchronous read: the callback is invoked once the file has been read
fs.readFile(__dirname + '/mytext.txt', 'utf8', function(error, data) {
    console.log(data);
});

console.log('finished');



The first method, readFileSync, is synchronous, which means NodeJS waits (blocks) until the file is completely loaded before moving on. The second method, readFile, is asynchronous and takes a callback function as an argument. This callback function is invoked after the file has been completely loaded.
If you run the code above, when will the "finished" text be displayed in the console: before or after the readFile callback runs its console.log()?
The correct answer is before. Since readFile executes asynchronously, NodeJS does not wait for the file to be read; it executes the last console.log first and then comes back to the callback.
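
Assuming mytext.txt contains the single line "Hello world", the console output would look like this:

Hello world    <- from readFileSync, which blocks until the file is read
finished       <- the last synchronous statement runs next
Hello world    <- from the readFile callback, once the file has been read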

Streams
The NodeJS Streams module can be found in the GitHub project using this path: node > lib > stream.js.
Within the first few lines you will find the snippet below:

module.exports = Stream;
...
const EE = require('events').EventEmitter;
...
util.inherits(Stream, EE);

The call to util.inherits shows that Stream inherits from EventEmitter. This means that Stream also has access to all EventEmitter methods and it can perform event listener tasks. NodeJS supports various types of streams such as Readable, Writable and Duplex; a Duplex stream is both readable and writable.

var fs = require('fs');

// read the file in chunks of up to 16KB (the highWaterMark option)
var readable = fs.createReadStream(__dirname + '/readable.txt', {encoding: 'utf8', highWaterMark: 16 * 1024});

var writable = fs.createWriteStream(__dirname + '/newtext.txt');

// the 'data' event fires once for each chunk read from the file
readable.on('data', function(chunk) {
    console.log(chunk.length);
    writable.write(chunk);
});



Pipes
Pipes basically involve reading a chunk of binary data from a readable (or duplex) stream and writing it to a writable (or duplex) stream. In the code example above, the readable.on(...) event listener implements this process by hand. NodeJS has a built-in pipe function that makes this process more efficient and performant. You can find it in node > lib > _stream_readable.js

Readable.prototype.pipe = function(dest, pipeOpts) {...}
...
src.on('data', ondata);
function ondata(chunk) {...}
...
return dest;

As a result, using pipe, the above code can be simplified:

var fs = require('fs');

var readable = fs.createReadStream(__dirname + '/readable.txt');
var writable = fs.createWriteStream(__dirname + '/newtext.txt');

// pipe reads chunks from readable and writes them to writable for us
readable.pipe(writable);
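
Since pipe returns the destination stream (the return dest shown above), pipes can also be chained. A small sketch using NodeJS's built-in zlib module, whose gzip stream is a Duplex (Transform) stream, to compress a file while copying it:

var fs = require('fs');
var zlib = require('zlib');

fs.createReadStream(__dirname + '/readable.txt')
    .pipe(zlib.createGzip())    // a Transform stream: both readable and writable
    .pipe(fs.createWriteStream(__dirname + '/readable.txt.gz'));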



Front-end web development has become much more complex over the past few years, due to the multitude of JavaScript libraries and frameworks involved. Any JavaScript developer who has worked on large-scale projects has probably used one of the task runners freely available online. These task runners allow developers to automate repetitive tasks and to manage the large number of files such projects create.

Grunt and Gulp are two of the most common examples. Both involve reading files on your hard drive and writing to new or existing files. Gulp, itself a NodeJS module, uses pipes extensively: it reads JavaScript (or LESS, SASS and HTML) files, passes them through various transformations provided by plugin modules and finally writes the results to other files. In the code example below the concat(...) function comes from the gulp-concat plugin and is used to combine all JavaScript files from the lib folder into a single all.js file.

// assumes the gulp, gulp-concat, gulp-less and gulp-minify-css
// packages are installed locally
var gulp = require('gulp');
var concat = require('gulp-concat');
var less = require('gulp-less');
var minifyCSS = require('gulp-minify-css');

// combine all JavaScript files from lib into a single all.js file
gulp.task('scripts', function() {
  return gulp.src('./lib/*.js')
    .pipe(concat('all.js'))
    .pipe(gulp.dest('./dist/'));
});

// compile the LESS templates and minify the resulting CSS
gulp.task('css', function() {
  return gulp.src('client/templates/*.less')
    .pipe(less())
    .pipe(minifyCSS())
    .pipe(gulp.dest('build/css'));
});
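
Assuming these tasks live in a gulpfile.js, they can then be run from the command line with gulp scripts and gulp css (with the gulp CLI installed).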