
Node Reference - Conclusion



This article builds on the prior article: The “Join” Problem.


In this series, we walked through the considerations of building a production-ready microservice. More important than the specific tools we used (e.g., Node.js, AWS) were the questions we asked. You may choose different tools and libraries for your applications, but it is important to understand why each tool is chosen over its alternatives.

Additionally, a focus on well-tested, quality code and simple design kept us focused on delivering value.

By quality, we mean not only quality in the traditional sense of working software, but also quality measured by how readable the code base is to future developers. Remember that when using infrastructure as code, it is just as important that the infrastructure be as simple and maintainable as the rest of the code base.

One of the most valuable advantages of building smaller, more focused pieces of software is that these pieces are naturally simpler and more consistent over time:

  • Exchanging one library for another can be consistently done across the entire code base at once.
  • Upgrades are smaller in scope and, thus, become approachable rather than daunting.
  • Because simple microservices are often housed in small source code repositories, any member of the team can more easily grasp what each microservice is responsible for and can, therefore, reasonably understand the code flow of the entire application.
  • Even rewriting the entire service from the ground up (say, to switch languages) is possible in a reasonable amount of time.

We believe the service built during this series has all the core pieces necessary to be production ready. This does not mean there isn’t room for improvement.
We have had countless conversations about whether a particular topic was worth discussing or whether it would overcomplicate things. Here are some additional topics that might be useful to explore or expand upon in your implementation:

  • We did not cover building a user interface for products. You could build a Single Page Application or take a more traditional multi-page approach. Whichever approach you choose, we recommend keeping the user interface in a separate code base from the service. This allows the interface to be built with different tooling, deployed on its own schedule, and in general keeps technical concerns separated. It also helps verify that the services you write are consumable from a client's perspective.
  • If you choose to implement a Single Page Application it is worth noting that this does not mean the entire organization has to have one “single” project driving its entire user interface. All of the downsides of a large monolithic application (namely lack of maintainability and flexibility) are just as much a concern in the browser. Be careful that the organization does not accidentally build an application that becomes too big and fragile to maintain. A paradigm like Micro Frontends or simply linking between independent applications that share some common CSS or agree on styling decisions can help mitigate this.
  • Additional security should be implemented in your production AWS account. For example, we recommend enabling VPC Flow Logs and monitoring threats across all of your AWS accounts with Amazon GuardDuty from a centralized security account. You can also mitigate DDoS attacks by placing AWS Shield in front of your service. Secrets such as OAuth tokens, or the GitHub Personal Access Token used in our CodePipeline article, could instead be managed by AWS Secrets Manager. Furthermore, IAM policies should grant only the least privilege required; IAM best practices should be studied and implemented.
  • Additional performance gains can be achieved as the need arises. For example, instead of using an AWS.DynamoDB.DocumentClient.scan in listProducts.js and filtering out the soft-deleted products, we could leverage a sparse index.
  • AWS resource tagging is an important way to understand service ownership as well as cost allocation across an organization. Almost all resources can and should be tagged with a common set of tags, which enables organizations to categorize resources by purpose, owner, and environment. The sooner your teams agree on and adopt a standard set of tags, the better.
  • Behavior-Driven Development tools like Cucumber.js allow the business to more closely see the scenarios that are being tested. This can be very valuable in situations where many complicated business processes exist.
  • Jest is another unit testing framework that is gaining popularity and takes an "all-inclusive" approach.
  • Test data generation libraries like Chance can aid in writing unit tests. Tests can become more readable when random data is used for parameters that are required by the implementation but not relevant to the specific test, or when what is being tested is the flow of data rather than its exact value. Be careful that random test data does not turn the test case into another implementation of the same logic being tested. For example, if the expectation is that a date be formatted in a particular way and the test uses a random date as input, then the test case is forced to format the date itself in order to assert the correct output. Such a test verifies consistency rather than correctness.
  • As the services are built out and consumed by more clients, documentation becomes increasingly valuable. Specifications like Swagger can provide a standard way to document endpoints that can be consumed automatically, and there are even libraries that aid in auto-generating the Swagger file so that it does not have to be maintained by hand. Documentation should be measured by its usefulness to readers: if it is vastly out of date or inaccurate, it may be less useful than if it didn't exist at all. For this reason, try to keep API documentation in source control so that it can be reviewed and merged alongside the functionality it documents.
  • Our implementation of CodePipeline took the liberty of deploying the pipeline, development environment, and production environment to the same AWS account. The best practice is to separate production and non-production environments into separate accounts, because an account is the default security boundary in AWS. With the introduction of AWS Organizations, it makes sense to create an account not only for each environment but even for each department or business unit within the organization.
  • Testing microservice interactions can be accomplished via Consumer-Driven Contract (CDC) testing with frameworks like Pact.
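As a sketch of the tagging recommendation, a common tag set can be attached directly to resources in a CloudFormation template. The tag keys and values below are illustrative only, not a prescribed standard:

```yaml
# Hypothetical common tag set applied to a resource in a
# CloudFormation template (keys and values are examples only).
Resources:
  ProductsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      Tags:
        - Key: owner
          Value: products-team
        - Key: environment
          Value: production
        - Key: purpose
          Value: product-catalog
```

Because the tags live in the template, they are versioned and reviewed along with the rest of the infrastructure code.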
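To show what endpoint documentation looks like in practice, here is a minimal Swagger 2.0 fragment for a hypothetical product-listing endpoint; the path and schema are illustrative:

```yaml
# Minimal Swagger 2.0 sketch for a hypothetical endpoint.
swagger: "2.0"
info:
  title: Products Service
  version: "1.0"
paths:
  /products:
    get:
      summary: List active products
      produces:
        - application/json
      responses:
        "200":
          description: A list of products
          schema:
            type: array
            items:
              type: object
              properties:
                id:
                  type: string
                name:
                  type: string
```

Kept in source control alongside the service, a file like this can be reviewed and merged with the functionality it documents.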
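The sparse-index optimization mentioned above can be sketched as follows. This is a minimal, dependency-free illustration of the pattern, not the series' actual code: the attribute name `activeFlag`, the index name `active-products-index`, and the table name are all hypothetical.

```javascript
// DynamoDB global secondary indexes only contain items that have the
// index's key attributes. Omitting the key attribute from soft-deleted
// items therefore keeps them out of an index keyed on that attribute,
// so listing active products becomes a query instead of a filtered scan.

function markDeleted(product) {
  // Removing the sparse key attribute drops the item from the GSI.
  const { activeFlag, ...rest } = product;
  return { ...rest, deletedAt: new Date().toISOString() };
}

// Query parameters for listing only active products via the GSI,
// instead of a full-table scan with a filter expression.
function listActiveParams(tableName) {
  return {
    TableName: tableName,
    IndexName: 'active-products-index', // hypothetical GSI name
    KeyConditionExpression: 'activeFlag = :a',
    ExpressionAttributeValues: { ':a': 'true' }
  };
}

const product = { id: '123', name: 'Widget', activeFlag: 'true' };
const deleted = markDeleted(product);
console.log('activeFlag' in deleted); // false – item leaves the sparse index
```

A soft-deleted item keeps all of its other attributes, so it can still be fetched by primary key; it simply stops appearing in the "active" index.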
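The point about random test data can be illustrated with a small sketch. A library like Chance provides generators such as `chance.name()`; to keep this example dependency-free, a tiny `Math.random`-based substitute stands in for it, and `buildGreeting` is a hypothetical function under test.

```javascript
// Stand-in for a test data generator like Chance; in a real test suite
// you would use something like `new Chance().name()` instead.
const randomString = () => Math.random().toString(36).slice(2, 10);

// Hypothetical function under test: the assertion cares about the flow
// of `name` through the function, not any particular value of it.
function buildGreeting(name) {
  return `Hello, ${name}!`;
}

// Using a random input documents that the exact value is irrelevant;
// the test asserts only that the input flows into the output.
const name = randomString();
const greeting = buildGreeting(name);
console.log(greeting.includes(name)); // true
```

Note that the assertion checks containment rather than reimplementing the formatting, which avoids the consistency-instead-of-correctness trap described above.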

We hope you found this series valuable. Again, we welcome feedback on the series. Because the series is published online, it's never too late to improve any of the articles, so please share your ideas for improvement with the authors at

If you would like help with your team adopting any of these concepts, our consultancy provides coaching services, so feel free to contact us. If you're an organization wanting your products built in the manner we've described, just ask; we offer full project teams to get the job done!

About the Authors

Paul Rowe studied computer science at Western Illinois University. He started working with Source Allies in 2007 and has over a decade of software development consulting experience in the Health Care and Agriculture markets. Paul is versed in a variety of Node.js, Java, and AWS tech stacks, and experiments with new ideas on GitHub.

Matt Vincent studied industrial engineering at the University of Iowa. He founded Source Allies in 2002. Matt is an AWS Certified Solutions Architect & DevOps Engineer with a specialty certification in Data Analytics.

