kpat.io
June 22, 2019

CI/CD Pipeline

Continuous Integration (CI) and Continuous Delivery (CD) are essential for delivering software quickly, but getting ramped up can be quite cumbersome. This post aims to provide a jump start.

First things first, we have to define the two terms.

Continuous Integration is the practice of automatically running tests, inspecting code, and building project artifacts on every change, creating a continuous feedback loop about the state of the project. It verifies that changes can be released without causing a hassle.

Continuous Delivery is the practice of continuously producing working, reliable software increments that are ready to ship (without actually deploying them).

Let’s start building.

The Pipeline

Optimally, the pipeline is triggered after each commit to the version control system (VCS) to provide developers with instant feedback about the current project status. The following is a non-exhaustive list of steps to consider in a pipeline

  • Check out the project from the VCS
  • Install dependencies
  • Static code testing
  • Dynamic code testing
  • Building the deployment artifact
  • Deployment to a non-production environment

In addition to checking the quality of the code, these steps also ensure that the process is repeatable and reproducible in a generic environment.

Tip: If one of the steps fails along the way, it is crucial that the pipeline aborts and feedback is reported to the developers. A failing pipeline should be the top priority of the development team.
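On a plain shell runner, this fail-fast behavior boils down to set -e. A minimal sketch, with placeholder step commands:

```shell
# Write a minimal pipeline script; `set -e` aborts the run at the first
# failing step, so later steps never execute.
cat > pipeline.sh <<'EOF'
#!/bin/sh
set -e
echo "step: checkout"      # e.g. git clone ...
echo "step: dependencies"  # e.g. npm install
false                      # a failing step, e.g. a lint violation
echo "step: build"         # never reached
EOF

sh pipeline.sh || echo "pipeline aborted with status $?"
```

Dedicated CI systems give you the same per-stage semantics, plus reporting on top.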

Let’s go through these steps one by one.

Checkout & Dependencies

Here are examples for the first two steps

# Step one: fetch the code from the repository
$ git clone ssh://git@example.com/path/to/repo.git
# for node dependencies:
$ npm install
# for golang dependencies:
$ dep ensure

The important thing to note here is that checking out the code requires appropriate access to the repository. It is common practice to issue an access token to the CI server for that purpose. Alternatively, an SSH key pair can be generated on the CI server and its public key registered with the repository host.
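For the SSH route, a dedicated passphrase-less key pair can be generated on the CI server. A sketch, with hypothetical file name and comment:

```shell
# Generate an ed25519 key pair with an empty passphrase (-N "").
# The private key stays on the CI server; the public key is registered
# with the repository host as a (read-only) deploy key.
ssh-keygen -t ed25519 -N "" -f ./ci_deploy_key -C "ci@example.com"
cat ./ci_deploy_key.pub
```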

Static Code Testing

Static properties of the code base can be used as a first quality gate. There are various static code analysis methods

  • Linting - Make sure that the source code conforms to the coding standards and doesn’t raise any compiler warnings
  • Mess detection - Measure how “messy” the code is
  • Dependency analysis - Ensure that the code follows loose coupling
  • Security checking - Detect security vulnerabilities early on

Golang, for example, features the golint command

$ golint -set_exit_status $(go list ./... | grep -v /vendor/)

By default golint always exits with status zero, even when it finds issues; the -set_exit_status flag makes it return a non-zero exit code instead, thus aborting the pipeline if the code doesn’t conform to the defined guidelines.

Dynamic Testing

In contrast to static tests, dynamic tests provide feedback about the runtime functionality of the code base.

Tip: In order to be effective, dynamic tests should run in under 10 minutes; the feedback loop should be kept short. Long-running tests should be scheduled separately.

Golang includes the test subcommand. It is executed as follows

$ go test $(go list ./... | grep -v /vendor/)

Similar to the linting step, this command returns a non-zero exit code upon failure and prints the failing tests to the console.
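One way to keep slow tests out of the fast pipeline run is Go's built-in -short flag: tests guard expensive work with testing.Short() and skip themselves. A self-contained sketch (the throwaway module name is made up):

```shell
# Set up a throwaway module containing one slow test.
mkdir -p /tmp/short-demo && cd /tmp/short-demo
printf 'module example.com/short-demo\n\ngo 1.16\n' > go.mod
cat > slow_test.go <<'EOF'
package demo

import "testing"

func TestSlowIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping slow test in -short mode")
	}
	t.Fatal("pretend this takes many minutes")
}
EOF

# The fast pipeline run passes because the slow test skips itself:
go test -short ./...
```

The separately scheduled run then simply drops the -short flag.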

Tip: Developers should have access to the pipeline logs

Building the Artifact for Deployment

Different programming environments produce different build artifacts. For Golang this is a compiled static binary; for PHP or Python it could be the zipped source code with all dependencies included. Nowadays, the artifact is most commonly a tagged Docker image.

Tip: The build step should not be executed on the production system. This way the production system is not exposed to development tooling vulnerabilities.

Golang features the go build command. It compiles the code, together with its dependencies, into a single static binary. Here’s an example

$ go build -ldflags "-extldflags '-static'" -o bin/server.bin

This builds the Go binary for the same execution environment as the CI server; if the target differs, the build can be cross-compiled by setting CGO_ENABLED=0, GOOS, and GOARCH accordingly. In a second step, the binary can be included in a Docker image

FROM alpine:3.8
WORKDIR /app
COPY bin/server.bin server.bin
CMD ["./server.bin"]

The image can then be built, tagged, and deployed to the appropriate environment.

Conclusion

The following is an incomplete list of best practices to consider when creating your own pipeline.

  • Trigger the pipeline with every commit
  • Feedback from the pipeline should be fast
    • Fail early - abort the pipeline if a single stage fails
    • Treat 10 minutes as a hard upper limit
    • If you have tests that take too long, consider optimizing them or running them separately
  • If the pipeline fails, it’s the utmost priority of the team to fix it
  • The developers must have access to the pipeline logs
  • Build artifacts on the CI server instead of the production system
  • Keep the deployment lean and fast