This article builds on the prior article: Koa.
Now that we have a simple application running and testable locally, we need a way to package and deploy it to other machines (e.g. Development and Production). We could install Node.js on a machine and copy the code there, but we would have to make sure it was the right version of Node.js, make sure the packages we depend on (e.g. Koa) were installed, and update all of these things whenever we upgrade them. This process is error-prone and time-consuming.
Java partially solved this problem with the WAR file: a zip file with a special directory structure that stored the application code as well as its dependencies. Docker takes this idea further and packages an entire “image”, or lightweight virtual machine, as a single item. Whereas a WAR file packages only the application code, a docker image captures both the application code and the server code. This image can be stored in a repository (a.k.a. a docker registry) just once and then pulled down and run (as a docker “container”) in multiple environments, such as another developer’s laptop or a Test environment. To use a programming comparison, an image is like a class, and a container is like an instance of that class. Because the image is consistent across environments, it greatly cuts down on surprises resulting from variations in deployment environments.
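To make that class/instance comparison concrete, here is a tiny JavaScript sketch (the names are purely illustrative, not part of any Docker API): one definition can be instantiated many times, just as one image can be run as many containers.

```javascript
// One class definition -- like one image stored in a registry.
class HelloService {
  constructor(environment) {
    this.environment = environment;
  }
  greet() {
    return `hello from ${this.environment}`;
  }
}

// Many instances of that one class -- like many containers run from one image.
const devInstance = new HelloService('development');
const prodInstance = new HelloService('production');

console.log(devInstance.greet());  // hello from development
console.log(prodInstance.greet()); // hello from production
```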
Start by creating a file named Dockerfile in the root of the project with these contents:
# Start from this image on https://hub.docker.com (the public docker repo).
# This gives us a known starting point to configure from for each build.
FROM node:10-alpine

# Let docker know that our app is listening on port 3000 when it runs.
EXPOSE 3000

# Set the current directory so we don't have to put '/app' everywhere.
WORKDIR /app

# Copy these files from our local workspace into the container (they will end up in /app).
COPY package*.json ./

# Install npm packages. This is exactly the same as running it on our local
# workstation, but it runs inside the container and installs packages there.
RUN npm install

# Copy everything else (i.e. code) into the container from our local workspace.
COPY . .

# Run our test cases; if any fail, this fails the docker build command (non-zero exit code).
RUN npm test

# Set the startup command to npm start so when our container runs we start up the server.
# This is way easier than configuring some sort of system daemon.
CMD [ "npm", "start" ]
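The RUN npm test and CMD [ "npm", "start" ] steps assume that package.json defines start and test scripts. A minimal sketch of the relevant section (the script contents here are assumptions; match them to your own project):

{
  "scripts": {
    "start": "node server.js",
    "test": "mocha"
  }
}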
We don’t want to copy in the node_modules directory from our workstation, since packages will be installed within the container. To prevent the COPY . . command from picking it up, create a .dockerignore file with this line:

node_modules
That’s all there is to it. Run this command to build the image:
docker build --tag node-ref .
The image can be started locally by running:
docker run --publish 3000:3000 node-ref
You should be able to make a request to http://localhost:3000/hello and see the same response as earlier. Press control-C twice to exit.
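For day-to-day use, a few more docker commands are handy. This is a sketch (the container name node-ref-app is an assumption, and these commands require a running Docker daemon):

```shell
# Run in the background instead of tying up the terminal
docker run --detach --publish 3000:3000 --name node-ref-app node-ref

# Check that the endpoint responds
curl http://localhost:3000/hello

# Tail the server's output
docker logs --follow node-ref-app

# Stop and remove the container when done
docker stop node-ref-app
docker rm node-ref-app
```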
In this article, we learned how to containerize our microservice.
Bonus: If you’re used to virtual machines being super large files that are gigabytes in size, you’ll be happy to know that the docker image we just created is only 73.8MB.
Table of Contents
- Unit Testing
- Docker (this post)
- Code Pipeline
- Application Load Balancer
- Put Product
- Smoke Testing
- List Products
- Get Product
- Patch Product
- History Tracking
- Change Events
If you have questions or feedback on this series, contact the authors at firstname.lastname@example.org.