Converting Node.js Promises into Deferreds

Dealing with node.js callbacks is unpleasant. If you don't agree, it's probably best to skip this post.

Here is one way to convert node.js callbacks into deferreds / promises.

Step 1 - install q

npm install q

Q is a JavaScript deferreds library with support for node.js.

Step 2 - use deferreds

Consider the following couchdb client code.

var cradle = require('cradle');

module.exports = {

  get: function (key, callback) {
    var db = new(cradle.Connection)().database('listagram');
    db.get(key, function (err, doc) {
      if (err) return callback(err);
      callback(null, doc);
    });
  }

};

The essence of this code (wrapping cradle's get) is hopelessly obscured by node.js ceremony. Using Q we can convert this function to return a promise.


var Q = require('q');
var cradle = require('cradle');

module.exports = {

  get: function (key) {
    var deferred = Q.defer();
    var db = new(cradle.Connection)().database('listagram');
    db.get(key, function (err, doc) {
      if (err) {
        deferred.reject(new Error(err));
      } else {
        deferred.resolve(doc);
      }
    });
    return deferred.promise;
  }

};

You may notice that some boilerplate is required within every callback to configure the deferred. We can eliminate it with Q's makeNodeResolver() function.


var Q = require('q');
var cradle = require('cradle');

module.exports = {

  get: function (key) {
    var deferred = Q.defer();
    var db = new(cradle.Connection)().database('listagram');
    db.get(key, deferred.makeNodeResolver());
    return deferred.promise;
  }

};

Step 3 - Work with the promise

Now, instead of passing a callback to our get function (continuation-passing style), we attach functions to the returned promise. If we don't want to consume the promise immediately we can instead return it up the call stack or save the reference for later.


var getWrapper = require('./getwrapper');

getWrapper.get('thisismykey').then(function (doc) {
  // handle returned document
}, function (err) {
  // handle error
});
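Because get returns a promise it also composes: each then returns a new promise whose value is whatever the previous handler returned. Here is a self-contained sketch of that chaining, using a native Promise as a stand-in for the cradle-backed get above (the shape is the same; with Q you would write .fail rather than .catch):

```javascript
// Stand-in for getWrapper.get() so this sketch is self-contained;
// it resolves with a fake document instead of hitting CouchDB.
function get(key) {
  return Promise.resolve({ title: 'doc for ' + key });
}

get('thisismykey')
  .then(function (doc) {
    return doc.title;            // each handler's return value becomes
  })                             // the value of the next promise
  .then(function (title) {
    console.log(title);
  })
  .catch(function (err) {        // with Q: .fail(function (err) { ... })
    console.error(err);
  });
```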


Node.js Dependency Injection

I strongly dislike this style of dependency injection for testing in dynamic languages. The two solutions usually given are:

  1. monkey patch node's core fs module
  2. replace node's module require function

In either case you need to manage the lifetime of your monkey patch: manually keep a reference to the original function and restore it at the correct moment. And this is not a one-off; you have to do it for every single dependency.

I prefer to be explicit about dependencies. Here is one way to manually provide dependencies for testing and allow node to automatically fall back to using real dependencies when stubs are not supplied.


Underscorec - Pre-compile underscore templates

Handlebars has a great pre-compilation system for taking an organised set of templates and pre-compiling them into a single JavaScript file. Compilation is the slowest part of template rendering, so server-side pre-compilation makes a lot of sense.

If you don't want to take a dependency on Handlebars, you can use the basic _.template() function within underscore.js. Underscore's templating is pleasantly lightweight and sufficient for simple needs.

To bring the brilliant Handlebars pre-compilation to underscore templates I created underscorec, my second npm package. Here is the readme:
 

underscorec

Command line precompilation for underscore.js templates.

Example

Given a file system like this:

views/
  layout.us
  home/
    index.us
    blah.us
  admin/
    dashboard.us

The following command:

underscorec views/ output.js

will compile the four underscore templates into the file output.js. The views are attached to a global templates object and named according to their path:

  • templates['layout']
  • templates['home/index']
  • templates['home/blah']
  • templates['admin/dashboard']
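Conceptually, output.js attaches one compiled function per template to a global templates object. A sketch of the idea (not underscorec's literal output):

```javascript
// Sketch of what precompilation produces: each .us file becomes a
// plain JavaScript function keyed by its path, so no template
// compilation happens in the browser.
var templates = {};

templates['home/index'] = function (data) {
  return '<h1>' + data.title + '</h1>';
};

// Rendering is then just a function call:
var html = templates['home/index']({ title: 'Hello' });
console.log(html); // <h1>Hello</h1>
```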

Testing

mocha --compilers coffee:coffee-script test/fs_tests.coffee


Compile and concatenate CoffeeScript files

The following is a simple bash script that searches a directory hierarchy for CoffeeScript files, concatenates them and compiles them into a single JavaScript file.

First recursively print the paths to all CoffeeScript files:

find . -name "*.coffee" 

Then print their contents:

find . -name "*.coffee" -exec cat {} \; 

Then pipe the concatenated CoffeeScript into the CoffeeScript compiler. The output is sent to stdout:

find . -name "*.coffee" -exec cat {} \; | coffee -sc 

Finally, redirect stdout to a file:

find . -name "*.coffee" -exec cat {} \; | coffee -sc > compiled.js 

Original gist


JavaScript Allonge


Programming books form a spectrum. At one end are the titles ending with 'in 24 hours' or 'for dummies'. At the other end is JavaScript Allonge. The joy of JavaScript Allonge is its theoretical examination of JavaScript and programming. I won't claim that this is unique but it is certainly rare. When was the last time a programming book made you think? When was the last time a programming book taught you something truly novel?

JavaScript Allonge begins with the basics in detail: values, expressions and function application. It's not long before it dives deep into functional programming propaganda, introducing the JavaScript version of currying, maybe and combinators. You can expect deep coverage of value vs reference types, functions, closure, binding and rebinding, this, function decorators, classes, inheritance, and mixins.

If you enjoy JavaScript Allonge then you will love its sister book CoffeeScript Ristretto. Many of the concepts covered by these books present better in CoffeeScript due to some of its extra features and functional programming orientation.

JavaScript Allonge is well suited to readers who are looking for something a little different to the mainstream and who are not allergic to functional programming. It contains a refresher on JavaScript fundamentals but also covers advanced topics. Highly recommended.


Adding key/value pairs to trello cards (and other apis)

One of the limitations of Trello is that you can't add extra structured data to cards to enable things such as the production of burndown charts.

It is possible to add key/value data to Trello cards (or any JSON API) using the following method.

  1. Access the API via a proxy
  2. Add key value data to string fields using some kind of table format:
    1. | key1 | a |
    2. | key2 | b |
  3. Within the proxy recursively search text properties for key value data. When found, parse it and add it to the parent object.

The following card data:

{
  cards: {
    abcd1234: {
      title: "card 1",
      description: "This is the first card.
                    | key1 | a |
                    | key2 | b |"
    }
  }
}

is converted to:

{
  cards: {
    abcd1234: {
      title: "card 1",
      description: "This is the first card.",
      key1: "a",
      key2: "b"
    }
  }
}

Note that the key value text has been removed and the data has been added to the parent object (card abcd1234).

Having promoted the data to JSON properties, it is now easy to map it to a table structure and use Excel to create pretty visualisations.
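The promotion step can be sketched as a recursive walk over the API response (this is a sketch, not the proxy's actual implementation): scan string properties for `| key | value |` rows, strip them from the text, and attach the pairs to the parent object.

```javascript
// Matches one "| key | value |" table row.
var ROW = /^\s*\|\s*([^|]+?)\s*\|\s*([^|]+?)\s*\|\s*$/;

function promoteKeyValues(node) {
  if (node === null || typeof node !== 'object') return node;
  Object.keys(node).forEach(function (prop) {
    var value = node[prop];
    if (typeof value === 'string') {
      var kept = [];
      value.split('\n').forEach(function (line) {
        var m = ROW.exec(line);
        if (m) node[m[1]] = m[2];   // promote the pair to the parent
        else kept.push(line);       // keep ordinary text
      });
      node[prop] = kept.join('\n').trim();
    } else {
      promoteKeyValues(value);      // recurse into nested objects/arrays
    }
  });
  return node;
}

var card = promoteKeyValues({
  title: 'card 1',
  description: 'This is the first card.\n| key1 | a |\n| key2 | b |'
});
// card.key1 === 'a', card.key2 === 'b',
// card.description === 'This is the first card.'
```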

 


Why Functional Programming Matters

If you are remotely interested in functional programming then I recommend Why Functional Programming Matters by John Hughes. If you want a shorter and more meme-y slide presentation then take a look at my presentation from dddbrisbane.


Spiderfi.sh video demo

Since my last update I have deployed a working sample client-side web application that is configured to render server-side when requested by the Google crawler. The following video shows how the site functions when JavaScript is disabled and then when the user agent is changed to mimic the Google crawler.  

The video begins by demonstrating a sample client-side web application. The content of the pages is loaded via ajax. Then JavaScript is disabled and the site is demonstrated again. Nothing works. This simulates what the site looks like to the Google crawler without spiderfi.sh. Finally, the user agent is set to mimic the Google crawler (Googlebot). When the page is reloaded the content is present as part of the page, even with JavaScript still disabled. Google can now index the site!

Honeypot - demonstration of Spiderfi.sh static rendering of a client-side web application from Liam McLennan on Vimeo.

 


Keeping the web searchable

For the past few months I have been working on a way to make client-side web applications indexable by search engines. At this point the core server-side rendering of client-side UI is working and publicly available (although not very reliable or user friendly).

To help me test I built a demo site that loads its content asynchronously via ajax. I can then use curl, or a browser, to request a rendered version of the page.

curl spiderfi.sh

The response includes the original headers from the target site:

< HTTP/1.1 200 OK
< X-Powered-By: Express
< server: nginx/1.1.19
< date: Fri, 05 Oct 2012 20:59:00 GMT
< content-type: text/html; charset: UTF-8
< transfer-encoding: chunked
< connection: keep-alive
< set-cookie: sid=xMlJXAeW3RxHR4KYSXjcdFEsUpqsCLvrLFKQnNer3Sj3sO4dgiaT2JgNwlgMpBtoLNBDN0jrkxnXKr8T7x90HkfZ3HtxHUcHLcYfoTm3dhstxLOm5RYMNqdwicBZMPtL; path=/; expires=Fri, 19 Oct 2012 20:58:58 GMT
< Content-Length: 2699
< ETag: "-1005630087"

And the content includes the dynamically loaded content.

For a more extreme example, the url http://honeypot.withouttheloop.com/page/handlebarsejs loads content from the server with a two second delay. When the page first loads it is a blank frame, then the content of the article appears two seconds later. When spiderfi.sh is asked to render this url it waits for the ajax load and renders the complete page, including the dynamic content:

  <div class="content"><article class="markdown-body entry-content" itemprop="mainContentOfPage"><p>Byte order mark (BOM) is a unicode character that signals the byte order of a UTF text file. If displayed in a text editor it most often appears as <code></code>. </p>

<p>Sometimes, when using <a href="http://handlebarsjs.com/">Handlebars.js</a> to produce markup you will see extraneous whitespace inserted into the DOM immediately prior to your rendered template. In the chrome developer tool, this extra whitespace appears as <code>" "</code>. In firebug it appears as EF BB BF.</p>

<p>This problem seems to occur when using the <a href="http://handlebarsjs.com/precompilation.html">handlebars.js precompilation feature</a> to precompile templates on the server and combine them into a single script. The precompiler doesn't remove the BOM marks when compiling the templates so they end up in the DOM, messing with your layout. </p>

<blockquote>
<p>The solution is to make sure that the template files do not include BOMs. You can use Notepad++, Sublime Text or any good text editor to save a file as UTF-8 without BOM. </p>
</blockquote></article>
</div>

What's Next?

Now that server-side rendering is done the next step is to add an interceptor to the test site. When the test site receives a request from Google the interceptor will forward the request to spiderfi.sh. That will get the dynamic content indexed by Google and prove that this whole system works.


Spiderfi.sh: Search engine optimization for modern web applications

 The title says it all. My latest project is a service that makes client-side web applications (SPAs) indexable by search engines. This is necessary because client-side web applications generate their user interface in the browser, which means that search engines only see an empty page. 

It is early days, but you can preview the service already. Go to http://spiderfi.sh and open a JavaScript console such as Firebug or Chrome developer tools. Run buildUrl('<url of a client-side web application>') to get a url in the correct format.

generating a url

 

With that url you can use curl or a web browser to ask spiderfi.sh to render a page of a client-side web application.

In the following example I asked spiderfi.sh to render http://honeypot.withouttheloop.com, which is a test site with a dynamically rendered UI (as you can see if you inspect the source).  

rendered

Obviously the image and css assets are missing but that's ok. The search engine spiders will not make requests to spiderfi.sh directly. They will make requests to the client-side web application, which will reverse proxy the request to spiderfi.sh.