Javascript promises and why jQuery's implementation is broken


Introduction to Javascript promises

Callbacks: a classic approach to async

Callbacks are Javascript's classic approach to collaborative asynchronous programming.
A callback is a function object that is passed to another function as a parameter and that must later be invoked under some circumstances: for example, when an asynchronous function successfully completes a task, it invokes the callback function to give control back to the function that was previously executing, signaling that the task has completed.
Callbacks are easy to use, but they make the code less readable and messier, especially if you have a few of them one after another using anonymous functions:

Small example

function invokingFunction() {
    // some stuff
    asyncFunction(function(data) { // the first callback function
        anotherAsyncFunction(function() { // the second callback function
            // more stuff
        });
    });
}
This pattern can lead to what is known as the “pyramid of doom”, especially when using jQuery’s mouse event handlers combined with async operations like $.get or $.post.
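For contrast, here is a sketch of how the same flow could look with promises; asyncStep1 and asyncStep2 are hypothetical stand-ins for the async functions above, and today's native Promise plays the role of a Promises/A-style library:

```javascript
// Hypothetical async steps, each returning a promise instead of
// taking a callback (native Promise used as a stand-in library).
function asyncStep1() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve("data from step 1"); }, 10);
  });
}

function asyncStep2(data) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(data + " -> step 2"); }, 10);
  });
}

// The chain stays flat: no pyramid of nested anonymous functions.
asyncStep1()
  .then(asyncStep2)
  .then(function (result) {
    console.log(result); // "data from step 1 -> step 2"
  });
```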

Javascript promises: the specification

To fix this and other problems (as we'll see) with callback-style code, a specification has been proposed, known under the name CommonJS Promises/A. Let's see what it says:

A promise represents the eventual value returned from the single completion of an operation. A promise may be in one of the three states, unfulfilled, fulfilled, and failed. The promise may only move from unfulfilled to fulfilled, or unfulfilled to failed. Once a promise is fulfilled or failed, the promise’s value MUST not be changed, just as values in JavaScript, primitives and object identities, can not change (although objects themselves may always be mutable even if their identity isn’t). The immutable characteristic of promises is important for avoiding side-effects from listeners that can create unanticipated changes in behavior and allows promises to be passed to other functions without affecting the caller, in the same way that primitives can be passed to functions without any concern that the caller’s variable will be modified by the callee.
A promise is defined as an object that has a function as the value for the property ‘then’:
then(fulfilledHandler, errorHandler, progressHandler)
Adds a fulfilledHandler, errorHandler, and progressHandler to be called for completion of a promise. The fulfilledHandler is called when the promise is fulfilled. The errorHandler is called when a promise fails. The progressHandler is called for progress events. All arguments are optional and non-function values are ignored. The progressHandler is not only an optional argument, but progress events are purely optional. Promise implementors are not required to ever call a progressHandler (the progressHandler may be ignored), this parameter exists so that implementors may call it if they have progress events to report.
This function should return a new promise that is fulfilled when the given fulfilledHandler or errorHandler callback is finished. This allows promise operations to be chained together. The value returned from the callback handler is the fulfillment value for the returned promise. If the callback throws an error, the returned promise will be moved to failed state.
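As a sketch of those two rules (using today's native Promise, whose "then" follows the same chaining contract): the handler's return value fulfills the new promise, and a thrown error moves it to the failed state.

```javascript
var p = Promise.resolve(2);

// The value returned from the handler is the fulfillment value
// of the NEW promise returned by "then"...
p.then(function (n) { return n * 21; })
 .then(function (n) { console.log(n); }); // 42

// ...and if the handler throws, the new promise is moved to the
// failed (rejected) state.
p.then(function () { throw new Error("boom"); })
 .then(null, function (err) { console.log(err.message); }); // "boom"
```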

It’s very easy to find blog articles and tutorials online, especially around the jQuery Deferred object, and almost all of them show how to do callback aggregation using the “then” function to attach callbacks to a promise, whether for success or for errors (or even to signal that an operation has made some progress). When the promise transitions state, the callbacks are called; it's as simple as that.
After reading a lot, I thought I knew enough about promises, but then I stumbled upon a page by Domenic Denicola, titled “You’re Missing the Point of Promises”, and after reading it I really had the feeling I was missing it entirely!

What promises are really about

As the previously linked page states, Javascript promises are not just about aggregating callbacks: they are mostly about bringing a few of the biggest benefits of synchronous functions into async code!

  1. function composition: chainable async invocations
  2. error bubbling: if at some point in the async chain of invocations an exception is thrown, it bypasses all further invocations until a handler can catch it (otherwise we have an uncaught exception that breaks our web app)
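A small sketch of point 2, error bubbling (native Promise used here as a compliant implementation): the exception skips every fulfillment handler until a rejection handler declares it can deal with it.

```javascript
Promise.resolve("start")
  .then(function () { throw new Error("async failure"); })
  .then(function () { console.log("never runs"); })        // bypassed
  .then(function () { console.log("never runs either"); }) // bypassed
  .then(null, function (err) {
    console.log("caught: " + err.message); // "caught: async failure"
  });
```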

To quote Domenic:

The point of promises is to give us back functional composition and error bubbling in the async world. They do this by saying that your functions should return a promise, which can do one of two things:

  • Become fulfilled by a value
  • Become rejected with an exception

And, if you have a correctly implemented then function that follows Promises/A, then fulfillment and rejection will compose just like their synchronous counterparts, with fulfillments flowing up a compositional chain, but being interrupted at any time by a rejection that is only handled by someone who declares they are ready to handle it.

That is, promises have their foundation in this “then” function: if it is broken, the whole mechanism is broken. And that is exactly what happens with jQuery’s implementation; let’s see why, with the help of an explicative (I hope!) code example.

Why jQuery promises are broken

The problem with jQuery’s implementation (up until version 1.9) is that it doesn’t respect the second part of the specification, “This function should return a new promise…”: “then” doesn’t return a new promise object when executing one of the handlers (the fulfillment, rejection, or progress handler).

This means we cannot do function composition, as we don’t have a “then” function to chain to, and we won’t have error bubbling, due to the broken chain: the two most important points of this spec.
What we are left with is just callback aggregation.

JsFiddle examples

The following fiddles show a simple chain of async functions.
I’m simulating the case where the original promise is fulfilled, the fulfillment handler is invoked, gets the data and then throws an exception in response to it. The exception should be handled by the first rejection handler down the chain.
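In code, the scenario looks roughly like this (the names are hypothetical, and today's native Promise stands in for a compliant Promises/A implementation):

```javascript
// Hypothetical async fetch whose promise is fulfilled with some data.
function getData() {
  return Promise.resolve({ status: "bad" });
}

getData()
  .then(function (data) {
    // The fulfillment handler inspects the data and throws.
    throw new Error("unexpected status: " + data.status);
  })
  .then(null, function (err) {
    // With a compliant "then", the first rejection handler down the
    // chain receives the exception.
    console.log("handled: " + err.message);
  });
```

Under jQuery's implementation the thrown exception escapes the chain instead, which is exactly what the first fiddle demonstrates.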

The first fiddle does not work as expected: the rejection handler is never invoked and the error bubbles up to the app level, breaking it. Below I show the console reporting the uncaught error:

Uncaught errors


The next fiddle behaves as expected, with the rejection handler correctly invoked. The way to quickly “fix” the broken implementation is to wrap the handler with a new Deferred object that will return a fulfilled/rejected promise, which can later be used for chaining, for example. Below we see the console showing no uncaught errors.

No errors

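The idea behind the fix can be sketched in plain Javascript (wrapHandler is a hypothetical helper, and native Promise plays the role jQuery's $.Deferred plays in the fiddle):

```javascript
// Wrap a handler so that its return value, or the error it throws,
// drives a NEW promise; "then" chains built on wrapped handlers get
// composition and error bubbling back.
function wrapHandler(handler) {
  return function (value) {
    return new Promise(function (resolve, reject) {
      try {
        resolve(handler(value)); // returned value fulfills the new promise
      } catch (err) {
        reject(err);             // a throw becomes a rejection
      }
    });
  };
}

var doubled = wrapHandler(function (n) { return n * 2; });
doubled(21).then(function (v) { console.log(v); }); // 42
```

In the fiddle the same effect is obtained with a new $.Deferred() that is resolved or rejected inside the handler, and whose promise() is returned for chaining.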


As we have seen, until at least version 1.9.0, jQuery can’t do promises properly out of the box, but there are several alternative libraries on the market, such as Q, rsvp.js and others, that adhere completely to the specification.


Promises are the present and the future of Javascript asynchronous operations: they lead to elegant and readable code and, more importantly, they allow function composition and error bubbling, making async code more similar to the synchronous programming style, thus making the life of a developer a little bit easier!
I said that promises are the future of Javascript async programming because Harmony, the next version of Javascript, will allow for great stuff combining promises with Generators. To see a sneak peek of how powerful these two concepts can be when used together, point your browser to !
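To give a taste (a sketch, not the exact syntax Harmony will ship): a tiny runner can pause a generator at each yielded promise and resume it with the fulfilled value, so async code reads almost like sync code. The run helper below is hypothetical; libraries such as Q provide similar utilities.

```javascript
// Minimal generator-runner sketch: each yielded promise suspends the
// generator; its fulfillment value is fed back in via next().
// (Error propagation through gen.throw() is omitted for brevity.)
function run(genFn) {
  var gen = genFn();
  function step(value) {
    var next = gen.next(value);
    if (next.done) return Promise.resolve(next.value);
    return Promise.resolve(next.value).then(step);
  }
  return step();
}

run(function* () {
  var a = yield Promise.resolve(20); // reads like sync code...
  var b = yield Promise.resolve(22); // ...but each step is async
  return a + b;
}).then(function (total) {
  console.log(total); // 42
});
```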


Again, credits to Domenic Denicola for writing his post, and to everyone who commented and posted examples that helped me understand, notably user jdiamond!


Javascript array performance oddities



I’ve recently attended a talk by a Google engineer on the V8 team about writing efficient Javascript code with an eye on performance, obviously with a focus on V8, and I’ve started to read a lot on the topic.
What everyone seems to agree on is that there is still a widespread performance gap between the different Javascript engines, and sometimes there are odd behaviours that will puzzle you, like the one I’m going to show below.
During the talk I was really impressed by the huge performance gaps between different approaches to array manipulation, most of the time due to differences in code that looked so trivial!
Before diving in, let’s have a super quick introduction to Javascript arrays.

Small overview about Javascript arrays

(Extract from )
Arrays are one of JavaScript’s core data structures.
Arrays are a special case of objects: they inherit from Object.prototype, which also explains why typeof([]) == "object".
The keys of the object are non-negative integers, and in addition the length property is always updated to contain the largest index + 1.
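The length invariant can be seen directly in the console:

```javascript
var arr = [];
arr[0] = "a";
arr[4] = "b";            // the largest index is now 4...
console.log(arr.length); // ...so length is 4 + 1 = 5
console.log(typeof arr); // "object": arrays are specialised objects
```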

Arrays are very useful, for example in contexts like games or mathematical simulations, where we may need to store and manipulate a lot of objects in a very short amount of time.

State of the art

The general consensus about V8 (and more generally about Javascript engines) includes a few useful tips to speed up array manipulation:

  1. If you want to use an array data structure, then treat it as an array: do not mix the types of the data stored. For example, in V8, when we first populate the array with data the JIT creates a HiddenClass object that tracks element types; if at a certain point we change the types, for example by storing a string instead of a number, V8 will have to forget it all and restart, thereby hurting performance.
  2. pre-allocate “small” arrays (pre-allocating means specifying the array length) in hot portions of the code, using the constructor “var array = new Array(num)” or setting the array length if declared with []
  3. do NOT pre-allocate big arrays (e.g. > 64K elements) to their maximum size; instead, grow as you go
  4. it’s best to use WebGL typed arrays (Uint32Array, Float32Array, Float64Array etc.)
  5. use contiguous keys starting at 0 for arrays
  6. don’t delete elements in arrays, especially numeric arrays
  7. don’t load uninitialized or deleted elements
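A few of the tips above, sketched in code (the comments refer to the numbered list; the variable names are just for illustration):

```javascript
// (2) pre-allocate a "small" array, either with the constructor...
var small = new Array(1000);
for (var i = 0; i < small.length; i++) small[i] = i;

// ...or by setting length on an array declared with []
var small2 = [];
small2.length = 1000;

// (1) keep element types uniform: storing a string into a numeric
// array forces the engine to generalise its element storage.
// small[0] = "oops"; // avoid

// (3) grow big arrays (> 64K elements) as you go instead
var big = [];
for (var j = 0; j < 100000; j++) big.push(j);

// (6)/(7) avoid "delete big[0]" and avoid reading holes: both can
// push the array into a slower, dictionary-like representation.
```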

I was quite intrigued by the possible performance increase obtainable by simply pre-allocating an array by specifying its size (var array = []; array.length = 1000; or simply var array = new Array(1000)), so I created a test suite on JsPerf to test these assumptions, and it turns out that Chrome doesn’t really behave as expected, despite being the fastest browser out there anyway.

Test setup:

I set up a two-part test, as IE 9 doesn’t support typed arrays (and I don’t have access to IE 10 yet).
The first part tests the performance differences between pre-allocating and not pre-allocating arrays, and also the difference between the two array declaration styles:
var arr = []
var arr = new Array()
The second part tests typed arrays.

Test part 1

Test results: Chrome is the winner


The figures tell us that Chrome is by far the fastest browser in this test, with Chrome v26 being ~3x faster than Firefox v17.0.
Let’s now take a look at assumptions 1 and 2 stated above and see if they still hold when put to the test:
As expected from assumption 1, Chrome is indeed at least 3x faster when pre-allocating a small array (with 1000 items), whereas on Firefox and IE 9 there isn’t any significant difference, with Firefox v17 being ~4x faster than IE 9!
On the browsers tested so far it makes no big difference using the alternative syntax var arr = new Array(l) vs var arr = [l], although using the constructor is slightly faster on Chrome (especially Chrome v26) and Firefox.

What about arrays with more than 64K items?
Unsurprisingly, Chrome behaves differently from the other two browsers: IE and Firefox do not show any remarkable difference between pre-allocated and non-initialised arrays with a size of 65000 items. Chrome instead, as expected from assumption 2, performs ~3x faster when the array is not initialised and we are using the var arr = []; initialisation code.
But, much to my surprise, this is not the best performing case: the test reveals that when pre-allocating a big array using the constructor var arr = new Array(65000), Chrome is ~3x faster!
Therefore we cannot say assumption 2 is always valid, as we see here that using the Array constructor makes a lot of difference.
The reason is not clear to me, given that the non-initialised cases (with new Array() and []) perform closely.
And we are not done with the surprises yet!

Test part 2
In fact, taking a look at the test about typed arrays, defined here as follows,

var l = window.growFactor;
var arr = new Uint32Array(l);
for (var i = 0; i < l; ++i) {
    arr[i] = 0;
}
again shows Firefox and Chrome performing in significantly different ways: the former benefits a lot from using a typed array (as expected), increasing performance by 2.5x, whereas Chrome sees its performance halved! As a result, Firefox’s performance peak almost matches Chrome’s.
Again, it is not clear to me why, I just think this is an oddity that is worth sharing.

We have seen that sometimes browsers perform in unexpected ways (see Chrome with typed arrays), and that their performance may differ greatly from one to another in things that look trivial and meaningless.
Therefore, if your Javascript application uses arrays and needs to squeeze every drop of performance out of the engine, it may be worth spending some time running some tests, as the performance increase can be very significant!