Dockerizing a Node.js Web App (Again)
*tl;dr: You might have seen or read the Docker tutorial Dockerizing a Node.js web app; this is another take on how you might go about doing just that if you need a more flexible approach than the one you can find in the Docker docs (how are they not called docks?!).*
Head here to check out the sample Dockerfile.
## Ahoy!

I’ve been working on a small side-project/really-small startup called Charityware. It’s been a ton of fun and I’ve learned lots and lots…and lots. It’s (still) mainly a learning exercise, but it could turn out to be profitable – only time will tell.
Anyways, one of the technologies I’ve chosen to use is Docker. It’s supposedly The Future™ and you’ve probably been seeing everyone ‘dockerize’, well, everything. And with pretty good reason – Docker is pretty great, in the main.
There are lots of pros to using a container-focused infrastructure, and it might be worth writing at length about Docker, but for now these few from some Red Hat documentation will suffice:
- Rapid application deployment
- Portability across machines
- Version control and component reuse
- Sharing of images/dockerfiles
- Lightweight footprint and minimal overhead
- Simplified maintenance
## Let’s Get Going
In working on Charityware, I ended up making Amazon AWS my platform of choice. I looked at Heroku, EngineYard, Nodejitsu, Google App Engine, and others, but AWS’s reliability, feature offering, flexibility, and pricing ended up winning out. Initially, I went with the Elastic Beanstalk nodejs-focused offering on AWS. Elastic Beanstalk is essentially just a coordinated collection of AWS resources, so there’s no dark magic going on.
At first, I was only able to use the nodejs-focused EB configuration. This was great, but I had to specify quite a few node-specific commands and customizing dependencies on the instances that get spun up was really difficult.
I needed a setup with the following requirements:
- understandable and inspectable build steps
- intelligent caching of resources if at all possible
- general dependency freedom
- flexibility to use whatever version of node I wanted (I don’t want to wait for a vendor to update a version when there’s a security update)
Eventually, after trying several different approaches and several different technologies, I found out AWS EB had recently started supporting single- and multi-container Docker setups.
After lots more learning, wisdom-gaining, and making mistakes (and not in that order), I finally feel like I have a stable, flexible approach to building and deploying the Charityware API. Below is an example `Dockerfile` that is pretty close to how we build our node apps with Docker.
## Breaking It Down
Most of the `Dockerfile` should be readable enough to understand, but I’ll break each step down a little further:
This is the base image we’ll pull from; I found that I didn’t really need all that ubuntu brought to the table, so I ended up going with the slightly smaller `debian:jessie` base image.
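The corresponding first line of the Dockerfile is just:

```dockerfile
FROM debian:jessie
```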
```dockerfile
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
```
We’ll need to do some `source`-ing to get nvm working properly, so we replace the default shell with bash.
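A quick illustration of why (a sketch; `/tmp/env.sh` is just a throwaway file): `source` is a bash builtin, while Debian’s default `/bin/sh` (dash) only understands the POSIX `.` form:

```shell
# Create a throwaway script to source
echo 'export FOO=bar' > /tmp/env.sh

# Works under bash: 'source' is a bash builtin
bash -c 'source /tmp/env.sh && echo $FOO'

# POSIX sh (dash on Debian) needs the '.' form instead:
sh -c '. /tmp/env.sh && echo $FOO'
```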
```dockerfile
ENV appDir /var/www/app/current
```
We set some environment variables we’ll use later (just one in this case).
```dockerfile
RUN apt-get install -y -q --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get -y autoclean
```
Install all the dependencies we’ll need for our app and clean up after APT.
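The package list itself is omitted above; a minimal sketch of what that step might look like (the exact packages are an assumption on my part — `curl` and `ca-certificates` are at least needed for the nvm step below):

```dockerfile
RUN apt-get update \
    && apt-get install -y -q --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get -y autoclean
```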
```dockerfile
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 0.12.7
```
Set some more environment variables so we can easily choose which version of node or iojs we want.
```dockerfile
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.26.0/install.sh | bash \
    && source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
```
This is the key part where nvm comes in and works its magic. We fetch the install script, run it, and source the resulting `nvm.sh`. Then, once nvm is available to us, we install, alias, and start using the version of node we want. One key thing to note here: don’t rely on the creationix install script staying the same or even continuing to exist. I ran into this the other week and have since moved to hosting the install script myself to avoid a drifting external resource breaking the build.
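One way to do that, sketched here with an assumed `scripts/nvm-install.sh` path checked into the repo:

```dockerfile
# Pin the install script by vendoring a copy into the repo,
# instead of curling a URL that may drift or disappear
ADD scripts/nvm-install.sh /tmp/nvm-install.sh
RUN bash /tmp/nvm-install.sh \
    && source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
```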
```dockerfile
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
```
We need to set up our `PATH` correctly so we can access `node`, `npm`, and any globally installed modules without sourcing nvm every time.
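The mechanism here is plain `PATH` lookup order; a quick illustration, using a hypothetical `/tmp/fakebin` as a stand-in for nvm’s bin directory:

```shell
# A fake 'node' binary standing in for $NVM_DIR/versions/node/.../bin/node
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho custom-node\n' > /tmp/fakebin/node
chmod +x /tmp/fakebin/node

# Prepending the directory to PATH makes its binaries win lookup,
# which is exactly what the ENV PATH line above does for nvm's node
PATH=/tmp/fakebin:$PATH node   # prints "custom-node"
```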
```dockerfile
RUN mkdir -p /var/www/app/current
```
Almost done! We’re now setting up the `WORKDIR` so Docker knows where to run our app-specific commands in a bit.
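The `WORKDIR` instruction itself isn’t shown in the snippet above; it would plausibly reuse the `appDir` variable we set earlier:

```dockerfile
WORKDIR ${appDir}
```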
```dockerfile
ADD package.json ./
RUN npm i --production
```
We add just our `package.json` before adding the rest of our app files. This lets Docker cache the dependency install, because it will only rebuild that layer when `package.json` has changed, not on every build.
```dockerfile
ADD . /var/www/app/current
RUN service nginx restart
CMD ["pm2", "start", "processes.json"]
```
Nearly there! Now we need to add the rest of our app files to the `WORKDIR`, restart nginx or any other services that need restarting (optional), `EXPOSE` the right port, and start our app!
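The `EXPOSE` instruction isn’t shown in the snippet above; a sketch (3000 is an assumed example — use whatever port your app actually listens on):

```dockerfile
# Port 3000 is an assumed value for illustration
EXPOSE 3000
```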
There you have it. This approach satisfied all of my requirements for a build system and has greatly improved build speed, flexibility, reliability, and my understanding of the process. I hope this helps you in some small way. Feedback, fixes, suggestions all welcome!
Red Hat Release Notes 7.2, “Advantages of Using Docker”