How to build and test your Rest API with Node.js, Express and Mocha


Heads up!

The blog has moved!
If you are interested in reading new posts, the new URL to bookmark is


If you need to build an API, want it done quickly and simply, and love JavaScript, then this article is for you!
I’m going to show a simple, quick and effective way to build and unit test your API and your routes, using Node.js, Express and a few amazing test libraries: Mocha, SuperTest and should.js that integrate very well together in a natural fashion.
The API style is Rest, that is, it leverages URLs and HTTP verbs, but I won’t go into much detail in this tutorial.
If you have ever used technologies such as WCF, you will be amazed at how much quicker it is to build your API with Node: there’s virtually no machinery to put in place, no service references to update, no WSDL… it’s pretty much just logic to implement!
Of course, when you want to design your API around a Rest concept, it is very important to think about the access paths to your resources: they NEED to make sense and to be well structured, otherwise you just end up walking down the good old RPC path.
For a nice intro to this concept, see
By the way, in this tutorial I won’t cover advanced topics such as securing the API through HMAC or OAuth solutions.

From words to code:

A nice and free dev environment where you can start coding is cloud9, as it offers everything you need and maybe more: a text editor with syntax highlighting, a Node.js environment, a debugger (!), a shell, cloud storage, and it connects to your GitHub repository… because you have a GitHub repository, don’t you? We don’t need more than that to write our first API, so let’s start!

Ok, we want to write an HTTP API, so we obviously want our API to be accessible via some sort of web server, but as you may already know, Node.js doesn’t come with its own web server.
Instead it comes with some built-in facilities; on top of these facilities lies Express.js, the web server of choice for this tutorial. Express.js describes itself as a web application framework for node, and I think you should use Express too, as it is a solid and well tested piece of software, easy to set up and test.

After reading some tutorials, I came up with a structure of 4 short files that set up and run the API:

  1. config.js exports an object that contains all configuration keys and values, pretty much like web.config in an Asp.Net web app. Along the same lines, we can also think about having a config-debug.js and a config-release.js. We’ll see later why.

  2. routes.js exports a setup function that takes care of declaring the routes and the handlers that will manage each request.

  3. server.js, where we configure Express and export a start function that takes care of setting up the routes and starting the web server.

  4. index.js is the entry point of our API. It uses pretty much everything we have defined so far to start the logger (my logger of choice is winston), to connect to a database, if present, and to finally start the Express server.



As you can see, setup expects two parameters: app is the Express app and handlers is an object that maps to a set of functions that handle user requests. Each of these functions accepts the request and response objects as parameters: e.g.

function handleCreateAccountRequest(req, res) { … }


The first part is just some standard Express setup + Express logging.
Now we can see clearly what the parameters for routes.setup are: app is just the Express instance and handlers contains two objects that point to the different handlers that will be able to handle API requests.
Finally, after declaring and exporting the start function, this module exports the Express instance, which will be used in the tests.


As mentioned, index.js is the entry point; that means our API will be executed by running the command

$ node index.js

We can easily configure the API logger using winston and the configuration file; in this case we connect to MongoDB using mongoose, a fantastic tool, and then we start the server using the freshly exported start function.
That’s it, you don’t need anything else to set up your API along with a solid logger and a database connection.

Ok I have an API but… how do I test it?

The test framework of choice for this tutorial is Mocha, along with SuperTest for HTTP assertions and Should.js for BDD style assertions. Mocha is a great choice because it makes async tests natural and fluent.

The purpose here is to test our routes (integration tests, really) and to build the tests so that we will be able to read the tests and their results as if they were plain English sentences!
Let’s create a folder named “test” where we will place our test files and our mocha.opts configuration file where we can specify the output style for our tests.
Finally we can create our unit tests for the routes in a file called “test/routes.js”.

The structure of a test with Mocha is simple and verbose:

So if you try and read the test, it would come out something like:
Describe Routing, describe Account, it should return error trying to save duplicate username, it should correctly update an existing account, and so on.
To add more tests, simply add more describe or it functions, and that’s it!

To execute the tests, start your api with

$ node index.js

and then in another shell run

$ mocha

as by default mocha will run everything in /test off of your main project.

If you want to run your test by typing

$ npm test

then all you have to do is create a makefile; it can even be as simple as the following:
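For example, assuming mocha is installed locally under node_modules (with a global install the recipe would simply be mocha):

```makefile
test:
	@./node_modules/.bin/mocha

.PHONY: test
```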

then add the following lines to your package.json file:

"scripts": {
    "test": "make test"
}
and that’s the result (note that in the screenshot I have more tests, as it’s taken from an actual project of mine):

Mocha test output with BDD style

That’s it, pretty easy, isn’t it?

Javascript promises and why jQuery’s implementation is broken


Introduction to Javascript promises

Callbacks: a classic approach to async

Callbacks are Javascript’s classic approach to collaborative asynchronous programming.
A callback is a function object that is passed to another function as a parameter and that must later be invoked under certain circumstances: for example, when an asynchronous function successfully completes a task, it invokes the callback function to give control back to the function that was previously executing, signaling that the task has completed.
Callbacks are easy to use, but they make the code less readable and messier, especially if you have a few of them chained one after another using anonymous functions:

Small example

function invokingFunction() {
    // some stuff
    asyncFunction(function (data) { // the first callback function
        anotherAsyncFunction(function () { // the second callback function
            // more stuff
        });
    });
}
This pattern can lead to what is known as the “pyramid of doom”, especially when using jQuery’s mouse event handlers combined with async operations like $.get or $.post.

Javascript promises: the specification

To fix this and other problems (as we’ll see) with callback-style code, a specification has been proposed, known under the name CommonJS Promises/A. Let’s see what it says:

A promise represents the eventual value returned from the single completion of an operation. A promise may be in one of the three states, unfulfilled, fulfilled, and failed. The promise may only move from unfulfilled to fulfilled, or unfulfilled to failed. Once a promise is fulfilled or failed, the promise’s value MUST not be changed, just as a values in JavaScript, primitives and object identities, can not change (although objects themselves may always be mutable even if their identity isn’t). The immutable characteristic of promises are important for avoiding side-effects from listeners that can create unanticipated changes in behavior and allows promises to be passed to other functions without affecting the caller, in same way that primitives can be passed to functions without any concern that the caller’s variable will be modified by the callee.
A promise is defined as an object that has a function as the value for the property ‘then’:
then(fulfilledHandler, errorHandler, progressHandler)
Adds a fulfilledHandler, errorHandler, and progressHandler to be called for completion of a promise. The fulfilledHandler is called when the promise is fulfilled. The errorHandler is called when a promise fails. The progressHandler is called for progress events. All arguments are optional and non-function values are ignored. The progressHandler is not only an optional argument, but progress events are purely optional. Promise implementors are not required to ever call a progressHandler (the progressHandler may be ignored), this parameter exists so that implementors may call it if they have progress events to report.
This function should return a new promise that is fulfilled when the given fulfilledHandler or errorHandler callback is finished. This allows promise operations to be chained together. The value returned from the callback handler is the fulfillment value for the returned promise. If the callback throws an error, the returned promise will be moved to failed state.

It’s very easy to find blog articles and tutorials online, especially around the jQuery Deferred object, and almost all of them show how to do callback aggregation using the “then” function to attach callbacks to a promise, whether for success or for errors (or even to signal that an operation has made some progress). When the promise transitions state, the callbacks are called; it’s as simple as that.
After reading a lot, I thought I knew enough about promises, but then I stumbled upon this page by Domenic Denicola, titled “You’re Missing the Point of Promises”, and after reading it I really had the feeling I was missing it entirely!

What promises are really about

As the previously linked page states, Javascript promises are not just about aggregating callbacks: they are mostly about bringing a few of the biggest benefits of synchronous functions to async code!

  1. function composition: chainable async invocations
  2. error bubbling: if at some point in the async chain of invocations an exception is produced, the exception bypasses all further invocations until a catch clause can handle it (otherwise we have an uncaught exception that breaks our web app)
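These two properties can be sketched with a compliant “then” chain. Native Promises (or any Promises/A-compliant library, such as Q) behave this way; the values below are invented for illustration:

```javascript
// Function composition and error bubbling with a compliant "then" chain
var caught;

var chain = Promise.resolve({ name: 'alice' })  // an already fulfilled promise
    .then(function (user) {
        // something goes wrong while handling the fulfilled value
        throw new Error('boom');
    })
    .then(function () {
        // bypassed: fulfillment handlers are skipped once the chain is rejected
        console.log('never reached');
    })
    .then(null, function (err) {
        // the rejection bubbles down to the first rejection handler,
        // exactly like an exception reaching a try/catch
        caught = err.message;
    });
```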

To quote Domenic:

The point of promises is to give us back functional composition and error bubbling in the async world. They do this by saying that your functions should return a promise, which can do one of two things:

  • Become fulfilled by a value
  • Become rejected with an exception

And, if you have a correctly implemented then function that follows Promises/A, then fulfillment and rejection will compose just like their synchronous counterparts, with fulfillments flowing up a compositional chain, but being interrupted at any time by a rejection that is only handled by someone who declares they are ready to handle it.

That is, promises have their foundation in this “then” function: if it is broken, then the whole mechanism is broken. And that is exactly what is happening with jQuery’s implementation; let’s see why with the help of an explanatory (I hope!) code example.

Why jQuery promises are broken

The problem with jQuery’s implementation (up until version 1.9) is that it doesn’t respect the second part of the specification, “This function should return a new promise…”, that is, “then” doesn’t return a new promise object when executing one of the handlers (either the fulfillment, the rejection or the progress handler).

This means we cannot do function composition, as we don’t have a “then” function to chain to, and we won’t have error bubbling, due to the broken chain: the two most important points of this spec.
What we are left with is just callback aggregation.

JsFiddle examples

The following fiddles show a simple chain of async functions.
I’m simulating the case where the original promise is fulfilled, the fulfillment handler is invoked, gets the data and then throws an exception in response to it. The exception should be handled by the first rejection handler down the chain.

The first fiddle does not work as expected: the rejection handler is never invoked and the error bubbles up to the app level, breaking it. Below I show the console reporting the uncaught error:

Uncaught errors

The next fiddle behaves as expected, with the rejection handler correctly invoked. The way to quickly “fix” the broken implementation is to wrap the handler with a new Deferred object that will return a fulfilled/rejected promise, which can later be used for chaining, for example. Below we see the console showing no uncaught errors.

No errors


As we have seen, until at least version 1.9.0, jQuery can’t do promises properly out of the box, but there are several alternative libraries on the market, such as Q, rsvp.js and others, that adhere completely to the specification.


Promises are the present and the future of Javascript asynchronous operations: they make for elegant and readable code and, more importantly, they allow function composition and error bubbling, making async programming more similar to the synchronous style and thus making the life of a developer a little bit easier!
I said that Promises are the future of Javascript async programming because Harmony, the next version of Javascript, will allow for great stuff combining promises with Generators. To get a sneak peek at how powerful these two concepts can be when used together, point your browsers to !


Again, credits to Domenic Denicola for writing this post and to everyone who commented and posted examples that helped me understand, notably user jdiamond!

Javascript array performance oddities




I’ve recently attended a talk about writing efficient Javascript code, given by a Google engineer on the V8 team, obviously with a focus on V8, and I’ve started to read a lot on the topic.
What everyone seems to agree on is that there is still a wide performance gap between the different Javascript engines, and sometimes there are odd behaviours that will puzzle you, like the one I’m going to show below.
During the talk I was really impressed by the huge performance gaps between different approaches to array manipulation, most of the time due to differences in code that looked so trivial!
Before diving in, let’s have a super quick introduction to javascript arrays.

Small overview about Javascript arrays

(Extract from
Arrays are one of JavaScript’s core data structures.
Arrays are a special case of objects and they inherit from Object.prototype, which also explains why typeof([]) == “object”.
The keys of the object are positive integers and in addition the length property is always updated to contain the largest index + 1.

Arrays are very useful and used in contexts like games or mathematical simulations for example, where we may need to store and manipulate a lot of objects and we need to do it in a very short amount of time.

State of the art

The general consensus about V8 (and more generally about Javascript engines) includes a few useful tips to speed up array manipulation:

  1. If you want to use an array data structure, then treat it as an array: do not mix the types of the data stored. For example, in V8, when we first populate the array with data, the JIT creates a HiddenClass object that tracks element types; if at a certain point we change the types, for example by storing a string instead of a number, V8 will have to forget it all and restart, therefore hurting performance.
  2. Pre-allocate “small” arrays (pre-allocating means specifying the array length) in hot portions of the code, using the constructor “var array = new Array(num)” or setting the array length if declared with [].
  3. Do NOT pre-allocate big arrays (e.g. > 64K elements) to their maximum size; instead, grow as you go.
  4. It’s best to use WebGL typed arrays (Uint32Array, Float32Array, Float64Array etc.)
  5. Use contiguous keys starting at 0 for arrays.
  6. Don’t delete elements in arrays, especially numeric arrays.
  7. Don’t load uninitialized or deleted elements.
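Tips 2 to 4 can be sketched in code as follows (the sizes are illustrative):

```javascript
var small = new Array(1000);   // pre-allocate a small array via the constructor
var alt = [];
alt.length = 1000;             // ...or by setting length on a [] literal

var big = [];                  // big arrays (> 64K elements): grow as you go
for (var i = 0; i < 65000; i++) {
    big.push(i);
}

var typed = new Float64Array(1000);  // typed array for purely numeric data
```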

I was quite intrigued by the possible performance increase obtainable by simply pre-allocating an array by specifying its size (var array = []; array.length = 1000; or simply var array = new Array(1000)), so I created a test suite on JsPerf to test these assumptions, and it turns out that Chrome doesn’t really behave as expected, despite being the fastest browser out there anyway.

Test setup:

I set up a two-part test, as IE 9 doesn’t support typed arrays (and I don’t have access to IE 10 yet).
The first part tests the performance difference between pre-allocating and not pre-allocating arrays, and also the difference between array declaration styles:
var arr = []
var arr = new Array()
The second part tests typed arrays.

Test part 1

Test results: Chrome is the winner

The figures tell us that Chrome is by far the fastest browser in this test, with Chrome v26 being ~3x faster than Firefox v17.0.
Let’s now take a look at the assumptions 1 and 2 stated above and see if they are still true when put to the test:
As expected from assumption 1, Chrome is indeed at least 3x faster when pre-allocating a small array (with 1000 items), whereas on Firefox and IE 9 there isn’t any significant difference, with Firefox v17 being ~4x faster than IE 9!
On the browsers tested so far it makes no big difference to use the alternative syntax
var arr = new Array(l) vs var arr = [l], although using the constructor is slightly faster on Chrome (especially on Chrome v26) and Firefox.

What about arrays with more than 64k items?
Unsurprisingly, Chrome behaves differently from the other two browsers: IE and Firefox do not show any remarkable difference between pre-allocated and non-initialised arrays with a size of 65000 items. Chrome instead, as expected from assumption 2, performs ~3x faster when the array is not initialised and we use the var arr = []; initialisation code.
But much to my surprise this is not the best performing case: in fact this test reveals that pre-allocating a big array using the constructor var arr = new Array(65000) makes Chrome ~3x faster!
Therefore we cannot say assumption 2 is always valid, as we see here that using the Array constructor makes a lot of difference.
The reason is not clear to me, given that the non-initialised cases (with new Array() and []) perform closely.
And we are not done with the surprises yet!

Test part 2
In fact, taking a look at the test about typed arrays, defined as follows,

l = window.growFactor;
var arr = new Uint32Array(l);
for (var i = 0; i < l; ++i) {
    arr[i] = 0;
}
we see again Firefox and Chrome performing in significantly different ways: the former benefits a lot from using a typed array (as expected), increasing performance by 2.5x, whereas Chrome sees its performance halved! As a result, Firefox’s peak performance almost matches Chrome’s.
Again, it is not clear to me why, I just think this is an oddity that is worth sharing.

We have seen that browsers sometimes perform in unexpected ways (see Chrome with typed arrays), and that their performance may differ widely from one another in things that look trivial and meaningless.
Therefore, if your Javascript application uses arrays and needs to squeeze every drop of performance out of the engine, it may be worth spending some time running tests, as the performance increase can be very significant!


PongR part 3: client side




In part 1 of “PongR – my first experience with HTML5 multiplayer gaming” , I talked about the spirit of this project, the system architecture, technologies used and the design of the game (remember the server authoritative model?)

In part 2 I talked about the server side code and the server loops

This is part 3 and I will talk about the client side code, the client loops and how we draw basic (no sprites, only basic shapes) images on a canvas.

Code and Demo

As always, the code is fully published on github [] and a (boring!) demo is available at . If you don’t have anyone to play with, just open two tabs and enjoy!

Project structure and technologies used

As a quick recap, this project has been realised using Asp.Net MVC, HTML5 (Canvas), Javascript, C#, SignalR for client/server real-time communication and qUnit for javascript unit testing.

PongR is a multiplayer HTML5 game, and this means that the client itself is doing a good amount of work.
The engine is replicated both on the server and on the client, as we saw in the first part of this article when I talked about the server authoritative model + client side prediction, which forces us to add logic to the client.
On the server this logic is built into a class named “Engine”, which we will cover in more detail later on.
On the client this logic is built into a Javascript module named PongR.

Javascript  code structure and unit testing

Most of the Javascript client side code is contained in a folder called Js, whose structure is shown below:

The module PongR lives in PongR.js. This module exports a public prototype with two public functions, needed to set up an instance of the game and to connect to the server.

pongR.PublicPrototype.createInstance = function (width, height, username) { … }
pongR.PublicPrototype.connect = function () { … }

Furthermore, the public prototype also exports a set of functions that have been unit tested using qUnit [].
qUnit is very easy to use and the way I set it up is very neat: you can find all the details in this awesome article.
Basically what you get is an easy way to share certain pieces of the test layout in an MVC3 application, with the possibility to have all the links to start your different tests grouped into one HTML page, so you can start them with just one click.
The downside of doing things this way is that the functions we want to unit test need to be visible from outside the module. In order not to pollute the public prototype too much, the module exports a UnitTestPrototype object, which contains the references to the functions that I want to unit test.

var pongR = {
    PublicPrototype: { UnitTestPrototype: {} }
};
All the unit tests are contained into PongR_test.js.
A quick example:

And when run, this is what you get:

Finally, I’m going to talk about RequestAnimationFrameShim.js later in the article, when I’ll focus on the Canvas element.


Other than what we have already seen, this module defines several View Models, related to the objects that need to be simulated. Below we can see a basic diagram of the objects and their connections:

The main object here is clearly Game, as it contains the whole state of a match.
Settings is another interesting object: it holds a few configuration variables that are mostly used for client prediction and interpolation (I will cover this topic later in the post).
These models are widely used by the client logic: for example, the physics update loop directly modifies them at every simulation step.
And this leads us to the client side update loops.

Client side update loops

Let’s restart the flow and see in more detail what is happening:
when a client enters their username and posts the form, the system redirects the client to the page where the actual game will be displayed. This page loads all the necessary Javascript libraries and modules, as well as PongR, and then runs this piece of code on the document onready event:

createInstance sets up the basic configuration of the game and creates the variable that will host the SignalR object, pongR.pongRHub, and all the related callbacks.
Once the SignalR object has been correctly populated, we can invoke the .connect() function on PongR, which starts the SignalR connection; on success we invoke the joined() function, which is where the server will process the client.
We need to have something after start() because in the server side handler the round-trip state is not yet available.
When 2 players are connected and waiting, the server sends an init message to both clients that is handled by the client by the following callback:

This code initialises a new Game; the canvas the game will be drawn on; an empty list of server updates that will be received throughout the game; the default delta time, set to 0.015 s (15 ms), which corresponds to ~66 fps; and a keyboard object, a slightly modified version of the THREEx keyboard library that I edited to serve my purposes here. It then draws the initial scene (the field, the players and the ball).
After completing initialisation, we perform a 3-second countdown so that the match doesn’t start all of a sudden while the players are not yet ready.
At the end of the countdown the startGame function is invoked.
This function is very important because it starts the two client loops responsible for handling the game inputs and rendering.

function startGame() {
    startPhysicsLoop();
    startAnimation();
}

Client side loops

Client physics update loop

Exactly as the server runs a physics update loop, the client runs a similar loop.
This loop interacts with and directly modifies the View Models that I described earlier.

function startPhysicsLoop() {
    physicsLoopId = window.setInterval(updatePhysics, 15);
}

This loop runs every 15 ms and is responsible for:

  • updating the delta time at each round, used to compute the movements of the players
  • updating the position of the client
  • updating the position of the ball
  • checking for possible collisions between the ball and the other objects of the game. If a collision is detected, then the position and the direction of the ball are updated as well.
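The steps above can be sketched as follows; the state shape, field size and speeds are invented for illustration and are not taken from the actual PongR code:

```javascript
// A self-contained sketch of one physics step
function updatePhysics(game, deltaTime) {
    // convert each player's pending "up"/"down" commands into movement
    game.players.forEach(function (player) {
        player.commands.forEach(function (command) {
            player.y += (command === 'up' ? -1 : 1) * player.speed * deltaTime;
        });
        player.commands.length = 0;  // commands consumed at this step
    });
    // move the ball along its current direction
    game.ball.x += game.ball.vx * deltaTime;
    game.ball.y += game.ball.vy * deltaTime;
    // a minimal collision check: bounce off the top and bottom walls
    if (game.ball.y < 0 || game.ball.y > game.height) {
        game.ball.vy = -game.ball.vy;
    }
}
```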

Despite the fact that the source I used to create this project puts the collision-checking code into the client update loop, I moved it inside the physics update loop for simplicity. This is obviously not an ideal solution if you want to play sounds on collisions, for example, given that the sound should be played by the update loop.

Client update loop

This loop, unlike the physics loop, is scheduled using a function recently introduced into modern browsers, RequestAnimationFrame.

function startAnimation() {
    requestAnimationFrameRequestId = window.requestAnimationFrame(startUpdateLoop);
}

You can read about this function in detail here and here.
Basically, instead of using setTimeout or setInterval, we tell the browser that we wish to perform an animation and request that the browser schedule a repaint of the window for the next animation frame. The reasons why requestAnimationFrame is better than the old style setTimeout and setInterval for drawing purposes are clearly stated in the above link, but I think it’s important to quote them here:

The browser can optimize concurrent animations together into a single reflow and repaint cycle, leading to higher fidelity animation. For example, JS-based animations synchronized with CSS transitions or SVG SMIL. Plus, if you’re running the animation loop in a tab that’s not visible, the browser won’t keep it running, which means less CPU, GPU, and memory usage, leading to much longer battery life.

Because this function has been introduced only recently, some browsers may still not support it, and that’s the reason for the RequestAnimationFrameShim.js file that we saw at the beginning. It’s a piece of code found on Paul Irish’s blog article mentioned above, so credits go to him.
Let’s see the code:

Initially I check that we are not in a post-goal condition, because after a goal a 3-second countdown is performed and we don’t want to update any of our internal data structures during this time.
If we are not, then we can simulate a new step of the game.

At every step the canvas must be redrawn, therefore I can safely clear it.
As this game is fairly simple, this won’t hurt performance; otherwise it would have been better to have multiple specialised overlapping canvases, where for example one could hold the background, which never changes and therefore doesn’t need to be cleared and redrawn at each step, and so forth…
Furthermore, I need to process client inputs (if any) and update accordingly a meta-structure that contains a list of commands (“up”/”down”). This meta-structure will then be used by the client physics loop and converted into movements.
Every input processed at each loop is stored in a list of commands and assigned a sequence number that will be used when the server acknowledges already-processed inputs. For example, if the server acknowledges input 15, then we can safely remove from the buffer all inputs with a sequence number equal to or lower than 15, and reapply all not-yet-acknowledged inputs (remember the client side prediction model?).
Every input is packed and immediately sent to the server to be processed.
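The acknowledgement mechanism just described can be sketched like this; the names and the buffer structure are my own illustration, not the exact PongR code:

```javascript
var inputsBuffer = [];
var sequenceNumber = 0;

// every processed input is stored with an increasing sequence number
function storeInput(commands) {
    sequenceNumber += 1;
    inputsBuffer.push({ sequence: sequenceNumber, commands: commands });
}

// when the server acknowledges input N, drop everything up to N;
// what remains is re-applied on top of the authoritative server
// state (client side prediction)
function acknowledge(ackedSequence) {
    inputsBuffer = inputsBuffer.filter(function (input) {
        return input.sequence > ackedSequence;
    });
    return inputsBuffer;
}
```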

if (!pongR.settings.naive_approach) {
    interpolateClientMovements();
}

The function interpolateClientMovements is a bit tricky, and you can read the code yourself for a better understanding (or, even better, check the blog article I took this from). Basically, it interpolates the opponent’s positions over time so that at each redraw its movements appear more continuous and natural to the player’s eyes.
Imagine that the opponent is currently at position (50,50) and then we finally receive its new position, (50,100): if we naively assigned this new position to the opponent, what we would see on screen is a big leap from the last frame, and that’s obviously something we don’t want.
I have to say that my implementation is not working that well at the moment, but the idea is there.
Finally, after having handled all inputs, I can draw the new frame by drawing each object on the screen.

Server authoritative model in practice

Each time the server runs its own update loop, the clients receive an update packet.
Each client is then responsible for processing it and updating its internal structures, so that the next time the update loops run they will see an updated snapshot of the game, as dictated by the server.
The function responsible for handling all of this is the SignalR callback updateGame.
Let’s see it in detail:

As I mentioned in part 1 of this blog article, it’s the server’s responsibility to simulate game state changing events, and a goal event is one of this kind!
This should clarify the meaning of the first lines of code: the client only knows that a goal happened because the score changed!
Then, based on this condition, we need to perform two different tasks:

  1. If one of the two players scored, we need to update the scores (both internally and on the screen), reset the positions of all the objects drawn on the screen, reset the state of the keyboard object (we don’t care about keystrokes pressed after a goal!) and finally perform a countdown that will start the match once again.
  2. Otherwise
    1. We need to apply client side prediction to update our client position, re-processing all those inputs which have not yet been acked by the server
    2. If we are not using a naive approach, we do not directly update the other player’s position; we simply update some of its internal properties and then push the just-received update packet into the updates buffer, so that it will be used in the update loop for interpolation
    3. We update the information related to the ball object

It is interesting to note that the updates buffer is not infinite: we limit it to cover 1 second of server updates.

Final considerations and next step

This was my first attempt at creating a game and adding multiplayer didn’t make the task easier for sure!
The graphics are really basic, the game itself is not that entertaining, and the network lag clearly affects the user experience, mostly due to unoptimised networking code and to the impossibility of using WebSockets (AppHarbor doesn't support them yet); nevertheless it was a lot of fun and I learned plenty while working on this project.
I have to say that offering the clients a seamless game over the wire has probably been the hardest part of it, and I’m sure there are things which are not working like they should in my code.
Also, I think Asp.Net MVC doesn't offer a best-in-class experience for building this sort of web app (as expected), whereas Node.Js seems a much better fit because of its event-driven nature: if you think about it, almost everything that happens in a game can be seen as an event.
Last but not least, using a single codebase and a single language can greatly help to speed up the process.
I couldn’t cover everything I wanted to in these three posts, they are already monster size, so I encourage you to clone the repository and dig into the code to find everything out (like for example delta-time implementation and time synchronisation).
In the near future I would like to rewrite this project entirely in Javascript using Node.Js, enhancing the user experience by improving the graphics with sprites, making the game more fun (e.g. the possibility to use bonuses), adding sounds, upgrading SignalR to 1.0 etc…


That’s all about PongR, I hope this can help someone


ChatR: just another chat application using SignalR


Heads up!

The blog has moved!
If you are interested in reading new posts, the new URL to bookmark is


Update (30 November, 2012): David Fowler (SignalR co-creator) has made ChatR into an example application downloadable via Nuget to show SignalR beginners how to quickly build a chat web app.
He forked the project on GitHub and updated the code base to the latest version of the library, 1.0Alpha2, and then sent me a pull request that I accepted and merged into the master branch.
Therefore the code that you now see in the latest commit on the master branch is slightly different from the one shown in this post, but that's not a problem: if you really want to, just go back through the commit history to find exactly what is shown here; otherwise, just dive into the new code!

Update (17 November, 2012): this blog article relates to version 0.5.2. These days the SignalR team is working on version 1.0 and has released an Alpha. There are quite a lot of significant changes in the API, but the underlying concepts still apply.
I hope to be able to post an update article when 1.0 is released.

There’s a lot of buzz these days around SignalR ( and more in general around real time communication techniques for the web (
There are many scenarios where real time can be embedded in web applications in a way that is actually useful for the user, but the easiest one to understand is the good old chat 🙂
That’s why today I’m going to write about…


A chat web application! Yes, I know it's not the most original example, but it's still a good starting point to exercise the fundamentals of the framework.
The user will be able to choose a username and then join the (single room) chat, where he/she will be able to chat with the others and see a list of currently connected users.
This list will obviously update itself every time someone joins or leaves the chat.

Source code and demo

As always the code for this tutorial is published on GitHub at
A working demo is deployed on the fantastic AppHarbor at  (if you want to run a test, simply load up new tabs… SignalR is not relying on Sessions so it will be safe). 


This project relies on Asp.Net MVC 3 for the general architecture and the basic client-server communication.
As anticipated, I’ll be using SignalR for real time client-server communication, Twitter’s Bootstrap for basic UI layout and CSS and Knockout Js ( to automatize the view.

Before starting, I'd like to spend a few words on SignalR: it's growing into a solid library for production code and it's dead easy to use ( once you grasp the fundamentals (which I'll cover later on). Not only does it get the job done, it also sports an awesome async client-server communication pattern that makes it really easy to invoke methods from Js to C# and vice versa (there is even someone who proposed using it for general purpose communication
If you are interested in understanding more about the internals, I suggest you clone the repository on GitHub ( and read something like


This application will feature a 1-click process to join the chat:

Step 1: landing page with a form to enter username
Step 2: chat page

The landing page is really simple, just a few text elements and an HTML form with an input text and a submit button that will trigger a HTML POST handled by the server via a MVC action.


I’m going to use 2 objects as Domain models and 1 repository object to store users currently connected to the system.
Let’s see them briefly:
ChatUser has two properties, Id and Username. The former uniquely identifies a client during a session and is assigned by SignalR.
ChatMessage is super simple and has a minimal set of properties: username (I could have used the ChatUser object, but since I'm displaying just a username and not the user id, I chose to transfer the minimum set of information), content and timestamp.
InMemoryRepository implements the Singleton pattern and helps clients retrieve the list of users.
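The originals are C# classes, but their shape is simple enough to sketch in JavaScript for illustration (the names mirror the descriptions above; this is not the actual ChatR code):

```javascript
// Rough JavaScript illustration of the three server-side types described
// above (the real ones are C# classes in the ChatR project).
function ChatUser(id, username) {
    this.Id = id;             // assigned by SignalR, unique per connection
    this.Username = username;
}

function ChatMessage(username, content) {
    this.Username = username; // just the username, not the whole ChatUser
    this.Content = content;
    this.Timestamp = new Date();
}

// Singleton repository holding the currently connected users.
var InMemoryRepository = (function () {
    var instance;
    function create() {
        var users = [];
        return {
            add: function (user) { users.push(user); },
            remove: function (id) {
                users = users.filter(function (u) { return u.Id !== id; });
            },
            getUsers: function () { return users.slice(); }
        };
    }
    return {
        getInstance: function () {
            if (!instance) { instance = create(); }
            return instance;
        }
    };
})();
```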

Basic UI structure

Super simple and clean: a title bar at the top of the page holding the name of the app, a container on the left holding the usernames of currently connected users, a container side by side with the previous one holding the chat messages, and below it an input box to enter a message, with a submit button.
All the client-server communication round-trips here are handled by SignalR.
To dynamically update the UI, I’ve used Knockout Js and its data-binding declarations.

Javascript namespace and Knockout view models

I chose to create a separate Js file to declare a namespace and the Knockout viewmodels.
Therefore I created a separate folder called “Js” and a javascript file called chatR.js.
The code is pretty straightforward, as you can see:

The rest of the javascript code is inside chat.cshtml as I needed to access the Model binding engine to retrieve the username chosen by the user.


Server side code

Let’s start with the server side part of SignalR.
Do you remember what I mentioned earlier in the post, that this library sports a great communication pattern that makes it really easy to call C# methods from Javascript and vice versa? Well, the C# methods that we can call from the client side are those declared (as public) in the so-called Hub.
What is a Hub? Well, a Hub is an abstraction that, to cite the Wiki,  “provides a higher level RPC framework over a PersistentConnection. If you have different types of messages that you want to send between server and client then hubs is recommended so you don’t have to do your own dispatching”.
SignalR offers two options to build a RTC app, leaving to the developer the choice between two built-in objects: Persistent Connections and Hubs.
It took me a while to fully understand the differences between these two, and why you'd choose one over the other.
Basically, Hubs allow you to dispatch whatever object you want to the server, taking care of JSON serialization and deserialization; they integrate model binding and are easier to set up, as you won't have to write additional code to manually register extra routes (the SignalR-specific routes). (You may want to read this question on SO, with words from David Fowler, creator of SignalR:
I think it’s pretty safe to say that if you are writing a common web application or .Net application, you’ll be fine with Hubs.

N.B It’s imporant to note that Hubs are created on a “per request” basis, so no static stuff in it should be declared.

To start with, create a Hubs folder and create a new class that derives from Hub.
Here I called mine “ChatHub” and I defined the methods that will be used by the clients:
GetConnectedUsers: retrieves the list of currently connected users
Joined: fired when a new client joins the chat. We add the user to the repository and notify all the clients (even the newly joined) that this new client connected.
Send: broadcasts the message to all the clients (plus some extra work, like formatting links and YouTube urls embedded in the messages, showing them as active links and an embedded video player respectively. This part is mostly taken from the Jabbr codebase, thanks again David Fowler 🙂 )

Here’s the code:

As you can see, there’s something more: this class implements an interface called IDisconnect, part of SignalR, which allows us to handle the Disconnect() event fired when a user leaves the application. Again what we do here is remove the user from the repository and notify the other clients of the event.
There is another interface, called IConnected, that allows us to catch connection events like Connect and Reconnect, but I chose not to use it because unfortunately (at least in Connect) the round-trip state (see is not available, so I cannot access the state set on the Hub, such as the username set by the client that caused the event to fire.
Later on I’ll explain in more detail with code.
This is also the reason why I have a Joined event handler: I need an event handler, fired after the Connect event, where I can access the round-trip state. Just for the record, Jabbr uses the same philosophy and doesn't implement IConnected either.
Finally, as you can see, almost all of the methods have a void return type, except one: GetConnectedUsers has an ICollection<ChatUser> return type, which means that only the client invoking the method will get the data back; furthermore the library will handle the JSON serialization and deserialization of the collection.
Alternatively I could have used Caller.callback and set a void return type to the method.

Client side code

On the client side, the first important thing to note is the order of the script references:
1) jQuery 2) jQuery.signalR 3) <script src="signalr/hubs" type="text/javascript"></script>

The last one is important because navigating to /signalr/hubs will dynamically generate the script based on the hubs declared on the server. Each hub on the server will become a property on the client side $.connection, e.g. $.connection.myHub.

Let’s see now how to hook up the client and the server:

It’s important to note that all Hub names (methods and class name) are in  camelCase, so for example
var chatHub = $.connection.chatHub;   is creating the proxy to invoke server code.

The next statement,
chatHub.username = currentUser.username;
is setting the round-trip state on the Hub, in this case I’m setting the username so that it will be accessible from the server and I’ll be able to add this user to the list of currently connected clients.

Next we have the client-side event handlers, invoked by the server (remember? Clients.onMessageReceived(message); )

We also apply knockout bindings so that the UI will start updating itself as the ViewModels change.
Finally we have the code block to start the SignalR connection and invoke two Hubs method:

// Step 1: Start the connection
// Step 2: Get all currently connected users
// Step 3: Join the chat and notify all the clients (me included) that there is a new user connected
$.connection.hub.start()
    .done(function () {
        chatHub.getConnectedUsers()
            .done(function (connectedUsers) {
                ko.utils.arrayForEach(connectedUsers, function (item) {
                    users.contacts.push(new chatR.user(item.Username, item.Id));
                });
            }).done(function () {
                chatHub.joined();
            });
    });

The $.connection.hub.start method provides two callbacks, done and fail.
So in the code above I'm saying that if the connection is successfully established, we then try to retrieve the list of connected users, and if that worked too, we notify all the clients that this new client connected, via chatHub.joined().
At this point the round-trip state will be available and therefore we will be able to register a new User, along with its connection ID and username.

One final note: by default DateTime is serialized by SignalR using the ISO 8601 format (, which is fine to parse for Chrome and recent browsers but, guess what, not for IE 9 and older versions. That's why I had to search the web for a nice and quick solution, which you can find in the source code on GitHub.
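For the curious, a fallback of that kind boils down to parsing the ISO 8601 string by hand and building the date through Date.UTC; a minimal sketch (not the exact snippet used in the repository):

```javascript
// Minimal ISO 8601 parser for browsers whose Date constructor cannot
// handle strings like "2012-11-17T10:30:00Z" (a sketch of the idea,
// not the exact fix used in ChatR).
function parseISO8601(str) {
    var m = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:\.\d+)?Z$/.exec(str);
    if (!m) { return new Date(str); } // fall back to the native parser
    // Month is zero-based in Date.UTC.
    return new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]));
}
```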

Useful links

ChatR on GitHub:
ChatR on AppHarbor:
SignalR Wiki:
Hubs vs Persistent Connections:
Useful tutorial:
Jabbr on GitHub:

My WAT? moment using javascript function parseInt()



I know a good developer should always read the manual, but sometimes somebody is just brave enough to start coding without caring to read it all… and that's how I ended up having my "WAT?" moment using the function parseInt().

I was trying to create a Date object from JSON-serialized data sent by the server, using parseInt() to transform the day, the month and the year into ints. For several data points the process was successful, whereas for just a few it was not, and I could not understand why… until I pointed the Chrome debugger to the right line of code and found out that parseInt has a funny behaviour if we do not specify the radix.

Let me show you what I mean: if you type each of the following lines into the Chrome developer console, the result you'll get is what comes after the "->"

parseInt("01") -> 1
parseInt("02") -> 2
parseInt("03") -> 3
parseInt("04") -> 4
parseInt("05") -> 5
parseInt("06") -> 6
parseInt("07") -> 7
parseInt("08") -> 0
parseInt("09") -> 0
It turns out that
If the input string begins with “0”, radix is eight (octal). This feature is non-standard, and some implementations deliberately do not support it (instead using the radix 10).  For this reason always specify a radix when using parseInt. (
Ok, I got it: always specify the radix, like parseInt("08", 10), to avoid unexpected results.
I just have one last doubt… how in the world is 9 mod 8 equal to 0? Even Google confirms that's wrong (try for yourself typing 9 mod 8 in the search box)! The answer is that no modulo is involved at all: "8" and "9" are simply not valid octal digits, so parsing stops right after the leading "0", and what's left is just 0.
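To recap, here is the safe usage side by side with the octal gotcha (on a modern engine, where the implicit octal detection is gone, only an explicit radix of 8 reproduces the old behaviour):

```javascript
// Always pass the radix to parseInt to avoid implementation-dependent
// octal interpretation of strings with a leading "0".
parseInt("08", 10); // -> 8 on every engine
parseInt("07", 8);  // -> 7: "07" is a valid octal number
parseInt("08", 8);  // -> 0: parsing stops at "8", not a valid octal digit
parseInt("8", 8);   // -> NaN: no valid digit was parsed at all
parseInt("ff", 16); // -> 255
```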

Image voting system using jQuery, Bootstrap and Asp.Net MVC 3, or how to create a jQuery plugin



I was working on a project where one of the requirements was that users had to select a set of images, out of a bunch of them, to be further reviewed and commented on.
While working on the implementation, I had a sudden idea: I’m going to write for my blog a simple image voting system packed as a jQuery plugin! And so iLikeIt! was born.
The link between the project and the blog article has been Twitter’s Bootstrap, and precisely the Carousel component, as I was using it to iterate between the images.

Live demo and source code

I've always wanted to be able to offer live demos of my web applications, and I finally found a fantastic (free) app hosting service, AppHarbor, which also integrates perfectly with GitHub: when I push my changes to GitHub, it automatically pushes them to AppHarbor and the new version gets immediately and automatically deployed on the server!
Click to reach the live demo of iLikeIt!
As always my code is hosted on Github and is free to fork, download etc… (link).

iLikeIt! Screenshot

iLikeIt! screenshot

What iLikeIt! is

iLikeIt! is a simple image voting system that can be used as a classic jQuery plugin, built on Bootstrap and jQuery (of course), with Asp.Net MVC 3 as the web framework handling server-side computation, even though the server side is really the least important part here.
The vote is visually expressed with classic “stars” icons, and is 5-stars based.

UI and UX

The idea is to keep the page simple, focusing only on the images and the voting system.
The image rotation is based on a carousel-styled component; to cast a vote, a user simply has to mouse over the image and the voting panel will fade in in the middle of it, allowing the user to rate the image by clicking on a star.
After the user has cast his vote, the system displays the average rating for the image and won't allow the user to change his vote (at least not until the page is refreshed… hey, I told you this is a simple system, but you're free to modify it by cloning and adapting the source code to your needs!).
I know mouse over is not great these days because of the mobile experience, and that's why I will try to cover this topic (and fix the design) in one of my next posts.

How to

The HTML part

I wanted the plugin to be dead simple to use, with a jQuery classic approach such as $(“…”).myPlugin().
In the following Gist you can see the HTML code of the View that is needed to use iLikeIt!

The main idea is that from the HTML point of view, all that is needed is to setup the Bootstrap’s Carousel component filled in of all the images you need (if you don’t know about Bootstrap and how to setup a Carousel, please read this ).
It is very important to specify the IDs of the images, as they will be posted to the server to identify the image being voted!
Then, at the bottom of the page, in the script portion, we need to initialize the plugin simply writing

As you can see I’m passing an object called options to the plugin, which contains the URL used for Ajax POST actions to the server.

The javascript part

The basic idea is to display a small div in the middle of the carousel, between the controls for sliding the images, where we show the star icons and the rating description, so that voting is easy for the user.
Here is the js file, which I’ll be covering in more detail immediately after:

The code is contained within an anonymous closure, thus granting privacy and state throughout the lifetime of the application. You may notice that at the top there is a small bit of code commented out; that is because at the beginning I had decided to use the Module pattern ( and, but then I chose to simplify the usage of the plugin and changed my mind in favour of the anonymous closure.
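For readers unfamiliar with the construct, an anonymous closure is just an immediately-invoked function expression; here is a generic sketch (not the plugin's actual code) showing how it grants privacy and state:

```javascript
// Generic anonymous closure (IIFE): state declared inside stays private
// for the lifetime of the application, reachable only through the
// functions we choose to expose.
var counter = (function () {
    var count = 0; // private: inaccessible from outside the closure

    return {
        increment: function () { count += 1; return count; },
        current: function () { return count; }
    };
})();
```

counter.increment() works as expected, but there is no way to touch count directly from the outside; that is exactly the isolation the plugin relies on.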
The usage of iLikeIt! as a jQuery plugin is possible because of the following code block

where I add iLikeIt to the set of functions that is possible to access using jQuery “$” notation.
When called, this function creates and injects into the Carousel the HTML that displays the voting panel, and then hooks up all the events that are needed: mouseenter for the voting panel; mouseover, mouseout and click for the star icons.
Whenever a user moves the mouse over a star icon, a mouseover event will be triggered and handled by the function displayRating

which retrieves the active image (using the "active" class attribute that Bootstrap places on the current carousel item) and checks whether the image has already been voted on (more on that later). If it hasn't, this function replaces all the necessary empty icons with the full star icon, thus providing visual feedback to the user, along with a short text description of the current vote. To make this easy I gave the star icons ascending IDs: the first, which corresponds to vote "1", has ID = "1", and so on up to 5.
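Conceptually, that numbering trick reduces the star-filling step to a tiny function like the following (a simplification: the real displayRating swaps icon classes in the DOM rather than returning IDs):

```javascript
// Given the ID of the hovered star (1..5), return the IDs of the stars
// that should show the full icon. This mirrors the ascending-ID trick
// described above; the actual plugin manipulates CSS classes instead.
function starsToFill(hoveredId) {
    var ids = [];
    for (var i = 1; i <= hoveredId; i++) {
        ids.push(i);
    }
    return ids;
}
```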
When the user has made up his mind, he can click on the icon that best expresses his opinion of the image, and the "click" mouse event will be triggered and handled by the function registerVote

This function performs an Ajax POST to the server, sending out the ID of the image and the string representation of the vote.
On success, we need to mark this image as "already voted", and to do this I'm using the jQuery function data(), calling .data("vote", msg) on the image,
where msg holds the average rating that is sent back with the server response; this value is then displayed to the user by calling the preloadRating function.
I think this covers pretty much everything that is important.
The server side code is really short and not significant, but should you want to take a look at that, you can download the source.


We have seen how to build a simple image voting system as a jQuery plugin, and how to use the anonymous closure construct to isolate and organize your javascript code.

Have fun,