The Node.js Philosophy


This is part one of a four-part series leading up to Node Knockout.

Distilling a philosophy for coding in a particular language (or environment) takes time, trial and error, and above all a group of extremely talented and dedicated professionals. Most of the ideas central to this philosophy are not new to the practice of Software Engineering or the study of Computer Science; they are simply the choices that appear most often in the community, motivated by choices made in node.js core. Wherever possible, sources and references will be supplied to support the arguments made here.

You may first ask: "has there been enough time?" Node.js is still very much in its infancy, as Isaac Schlueter (author of npm) put it so eloquently at NodeConf earlier this year; node.js has barely been around for two years. It may therefore be premature to write this, but there has been immense growth in both node.js core and the ecosystem surrounding it. Nodejitsu has been intimately involved in this growth and thus, in the opinion of the author, enough time has passed for a few granules of truth to trickle through.


There surely has been a large amount of trial and error: as of this writing there are over 3,200 different modules on npm (the node package manager), many of which are failed attempts at one problem or another. And last but not least, there is of course an immensely talented group of professionals working both on node.js core itself and on the important modules that grow the node.js ecosystem.

Looking back over the evolution of the past two years, there is truly one central tenet to the development of node.js core, set out by Ryan Dahl (the creator of node.js), that has defined the node.js philosophy this article is attempting to capture: node.js core should be kept as small as possible. This is very clear on both the node.js user and development mailing lists, where feature additions are frequently declined because they simply "do not belong in core."

The focus on a very small, performant, cross-platform compatible core was the impetus for the vast number of modules available through npm today. This is because many commonly requested features:

  • "Framework": templating, routing, cookies, etc.
  • Asynchronous flow control
  • Pluggable middleware

were not available in node.js core. The need for these features from developers eager to use node.js allowed talented and prolific module authors to emerge and fill the void. This continually expanding and changing ecosystem of modules is in many ways the defining factor behind the node.js philosophy: experimentation with small kernels of functionality that rely on loosely coupled components.
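
As a concrete sketch of features arriving from userland rather than core, consider the express module from npm (this example assumes the express 2.x-era API; the route and port are illustrative):

    // routing, templating, and cookie handling come from an npm module,
    // not from node.js core (sketch assumes express 2.x)
    var express = require('express');
    var app = express.createServer();

    app.get('/', function (req, res) {
      res.send('framework features from userland, not core');
    });

    app.listen(8080);

Node.js core supplies only the HTTP machinery underneath; everything "framework"-shaped here lives in a module that can be installed, replaced, or forked independently.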


Experimentation

We are no strangers to mad science at Nodejitsu, but at its core, why is experimentation so central to the node.js community? With no large existing codebase solving common web application development problems, combined with JavaScript's minimalistic language design, many modules emerged to solve the same problems. Revisiting established problems caused many developers to ask: "are the accepted approaches to solving the problem at hand necessarily the best?" This focus on experimentation has shown clear benefits: socket.io, dnode, and the recent redesign of npm are clear examples. I think Mikeal Rogers (organizer of NodeConf) put it best: "if you think node.js needs less modules you're $%#@!^$ crazy."


Small Kernels of Functionality

One of the benefits of having a well-established package manager like npm (especially now that it is not possible to have version conflicts between node.js modules) is that a module developer can really focus on building the best tool for an individual task. In other words, why package two things together when you can break them apart into two kernels of functionality, one depending on the other? Exactly: there is no reason to bundle them unless your source code distribution (e.g. your package manager) is deficient in some way.
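
A hedged illustration of how npm makes those version conflicts impossible: each installed module carries its own private node_modules directory, and require() resolves to the nearest copy. The module names below are hypothetical:

    app/
      node_modules/
        foo/                  depends on baz@0.1.x
          node_modules/
            baz/              baz 0.1.0, visible only to foo
        bar/                  depends on baz@0.2.x
          node_modules/
            baz/              baz 0.2.0, visible only to bar

Inside foo, require('baz') resolves to foo's own copy, so foo and bar can depend on incompatible versions of baz without ever colliding.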

Another way to think about this is to consider a phrase I often use to describe node.js core: "close to the metal." For example, there are not many abstractions or conventions between the developer and the raw HTTP stream. Each step along the way is a small kernel of functionality (the net module, the http module, etc.). This low-level core has clearly shaped the approach of the community at large.
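
A minimal example using nothing but core shows what "close to the metal" means in practice (the port and response text are arbitrary):

    // only node.js core: the http module hands you the raw request and
    // response streams, with no routing or templating layered in between
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('handled ' + req.method + ' ' + req.url + '\n');
    }).listen(8080);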


Loosely Coupled Components

Coupling is poison to large codebases. It allows developers to hide improperly thought-out abstractions and bad APIs by strictly defining and limiting the consumer to a fixed surface area. The idea of "loosely coupling" your components is that you can easily switch one out for another. This is something Rails borrowed from Merb when they merged into Rails 3.0, and something I am very glad is ingrained in the approach taken by node.js developers. Early experiments in tight coupling, such as Connect, have not been repeated in later approaches and (in the case of Connect) have been deemed "harmful" by their original authors; in this case, Tim Caswell.
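
As a hedged sketch of what loose coupling looks like in practice (the render contract and function names here are illustrative, not any real module's API), the trick is to depend on a tiny interface rather than a concrete library:

    var http = require('http');

    // the handler depends only on a render(template, data, callback) contract
    function makeHandler(render) {
      return function (req, res) {
        render('index', { url: req.url }, function (err, html) {
          res.writeHead(err ? 500 : 200, { 'Content-Type': 'text/html' });
          res.end(err ? 'render error' : html);
        });
      };
    }

    // any engine honoring the contract can be swapped in without
    // touching makeHandler
    function plainRender(template, data, callback) {
      callback(null, '<h1>' + template + ' at ' + data.url + '</h1>');
    }

    http.createServer(makeHandler(plainRender)).listen(8080);

Swapping template engines is then a one-line change at the composition point, not a rewrite of the consumer.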

The argument in favor of loosely coupled components becoming the de facto standard in node.js is (again) precipitated by npm. Without a strong, easy-to-use package manager with well-defined semantics, module authors would have little incentive to decouple components and release them separately.


The Road Ahead

Node.js has not yet reached 1.0, but it would be surprising for the core tenets of the philosophy to change. It is a philosophy that has led to important early-stage design choices within node.js, npm, and high-profile modules, and it continues to make node.js the technology to watch. It is this philosophy that yields the vibrant ecosystem, the continued flow of interesting projects and modules, and the intelligent debate around core functionality.