Welcome back to the Node.js reference architecture series. In this article, we’ll take a look at continuous integration/continuous delivery (CI/CD) in the Node.js landscape and discuss the guidelines recommended by the Node.js reference architecture team.
Follow the series:
- Part 1: Overview of the Node.js reference architecture
- Part 2: Logging in Node.js
- Part 3: Code consistency in Node.js
- Part 4: GraphQL in Node.js
- Part 5: Building good containers
- Part 6: Choosing web frameworks
- Part 7: Code coverage
- Part 8: TypeScript
- Part 9: Securing Node.js applications
- Part 10: Accessibility
- Part 11: Typical development workflows
- Part 12: npm development
- Part 13: Problem determination
- Part 14: Testing
- Part 15: Transaction handling
- Part 16: Load balancing, threading, and scaling in Node.js
- Part 17: CI/CD best practices in Node.js
The team's discussion of CI/CD practices was based on the common developer workflows section of the Node.js reference architecture. We found that most deployment targets end up as a container image, so most of the guidance in this post relates to building and testing code that results in a container image.
During the team's discussions, we determined that two flows were most frequently used together: testing on code check-in and the use of a container pipeline.
If you are familiar with GitHub and GitHub Actions, this recommendation and workflow should come as no surprise. When a pull request (PR) is made against a main branch in a source code repository, initial testing is automatically kicked off using services like Travis CI or the aforementioned GitHub Actions.
This level of testing usually starts with running unit tests and code quality checks. You can view the team's recommendations on the Node.js components to use in the Code consistency, Testing, and Code coverage sections of the reference architecture.
Check-in testing is often configured to run on a number of Node.js versions in parallel. The team recommends that you test at least on the Long Term Support (LTS) version of Node.js that you currently deploy with, along with later LTS versions, in order to enable future upgrades.
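As a sketch, a GitHub Actions workflow implementing this kind of check-in testing across several Node.js versions might look like the following. The LTS versions shown and the `npm` script names are assumptions; adjust them to your project:

```yaml
# .github/workflows/ci.yml -- runs on every pull request against main
name: check-in tests

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Test on the LTS line you deploy with plus a later LTS
        # to prepare for future upgrades (versions are examples).
        node-version: [18.x, 20.x]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run lint   # code quality checks
      - run: npm test       # unit tests
```

The matrix runs the same job once per listed version, so a failure on a newer LTS surfaces upgrade problems before you need to migrate.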
It should be noted that while check-in testing may not run on the image that flows through the container pipeline, the team tries to ensure that it runs in the same environment and on the same architecture as the container pipeline uses. The team also recommends the same for local testing done by developers.
While the development team may use tools like Source-to-Image, Docker, or Podman to build and test an image locally, those locally built images are not usually what gets deployed. Instead, the images are built as part of a container pipeline.
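For local testing, a developer might build and run a container with Podman using a Containerfile along the lines of this minimal sketch; the base image tag, file layout, and entry point here are illustrative assumptions:

```dockerfile
# Containerfile -- minimal Node.js image sketch for local testing
FROM registry.access.redhat.com/ubi9/nodejs-20

WORKDIR /opt/app-root/src

# Install production dependencies first so this layer is cached
# when only application source changes.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 8080
CMD ["node", "server.js"]
```

Locally this could be exercised with something like `podman build -t myapp:dev .` followed by `podman run -p 8080:8080 myapp:dev`, while the pipeline remains the source of the images that are actually promoted.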
This workflow uses common tools such as Jenkins and Tekton to create pipelines that build and test images as they are promoted from development to production. Once updates have passed the initial check-in testing, the container pipeline kicks off. In the team's experience, the pipeline may support a number of stages or environments, including the following:
- **Development**: This environment often mirrors the main branch of each component. The image for the Node.js component is typically built here and pushed to an internal registry; later steps in the pipeline use a tagged version of that image from the registry.
- **Staging**: This stage uses tagged versions of each component that are known to work together. The team has also used multiple staging environments, either to mirror a production environment or to target future production configurations.
- **Pre-production**: This is an optional environment that allows customers to sign off on images before they reach production. Many teams use staging instead of a separate pre-production environment, but that choice is up to the development team.
- **Production**: This is the environment that hosts the customer-facing services and deployments.

As the image for the Node.js component is built and promoted through the pipeline stages, it is important that all environment-related configuration is externalized so that environment-specific values can be provided in each environment. This includes, for example, configuration for how to connect to databases and other services, the Node.js run configuration (production/development), and any other environment-specific settings.
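A common way to externalize environment-specific configuration in a Node.js component is to read it from environment variables in a single module, so the same image can run unchanged in every stage. The variable names and defaults below are illustrative, not prescribed by the reference architecture:

```javascript
// config.js -- gather all environment-specific settings in one place
// so the same container image works in dev, staging, and production.
'use strict';

// Fail fast when a setting with no safe default is missing.
function required(name) {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const config = {
  // NODE_ENV controls framework behavior (e.g., Express caching).
  env: process.env.NODE_ENV || 'development',
  port: Number(process.env.PORT || 8080),
  // Connection details differ per pipeline stage, so they must
  // never be baked into the image.
  databaseUrl: process.env.DATABASE_URL || 'postgres://localhost:5432/dev',
};

module.exports = { config, required };
```

Each stage of the pipeline then supplies its own values (for example, via Kubernetes ConfigMaps and Secrets) rather than rebuilding the image.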
While a Node.js developer may not need to set up the CI/CD pipeline themselves, a good understanding of their organization's pipeline is often valuable: it clarifies what configuration needs to be externalized in the code they write, and it helps when investigating problems that occur in different environments in the pipeline.
Security checks are an important part of the CI/CD workflow. Typically, the team deploys code and image scans in the code check-in tests and/or the container pipeline. Running scans in the code check-in tests helps developers validate that they have resolved any reported issues before the container pipeline runs.
Tools like Mend.io, Snyk, and those built into GitHub have been used for scanning in the check-in phase. Often a number of different scans are required to cover all of the important aspects, including scans of the following:
- application dependencies for vulnerabilities
- OS packages for vulnerabilities
- source code using static analysis
- container images for best practices (not running as root, etc.)
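As an illustrative sketch, some of these scans can be wired into the same check-in workflow. The following hypothetical GitHub Actions job covers dependency scanning with `npm audit` and static analysis with GitHub's CodeQL action; the severity threshold and step layout are assumptions:

```yaml
# Additional job for the check-in workflow: security scans
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20.x
      - run: npm ci
      # Scan application dependencies for known vulnerabilities,
      # failing the build on high-severity findings.
      - run: npm audit --audit-level=high
      # Static analysis of the source code.
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript
      - uses: github/codeql-action/analyze@v3
```

OS package and container image scans typically run against the built image in the container pipeline instead, since the image does not exist at check-in time.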
Looking to the future
The team periodically looks at new workflows for the CI/CD process, and something that has been announced recently is the Red Hat Trusted Software Supply Chain. It is still early in its life, but the team will be looking into how this offering could fit into our guidance.
We plan to cover new topics regularly as part of the Node.js reference architecture series. Until the next installment, we invite you to visit the Node.js reference architecture repository on GitHub, where you can view our work. To learn more about what Red Hat is up to on the Node.js front, check out our Node.js topic page.