Continuous Integration Deployment

Rivers Agile Software Engineer, Stephen Teodori, wraps up his Continuous Integration series by detailing the deployment process in GitHub

In the previous articles of this series, we covered how to easily run unit tests when working on your feature branch. Now, we’ll look at how these checks integrate into your team environment and how to deploy changes from GitHub.

Ensuring due diligence

Let’s start by looking at how this all comes together when a pull request is involved, mostly because I think it’s a cool process. Here is the “checks” section of a PR. You can see that two checks are listed; however, only one was triggered by the PR. The first check, “Run unit tests for feature branch”, ran when I pushed my changes to the branch I was working on. The pull request’s checks section will also include the most recent run of any workflow that ran for the branch you are trying to merge, so we can see in the PR that the latest run of the unit tests passed.

What’s the second unit test for? Well, the second workflow, “Unit test for pull request”, isn’t there just because it rhymes. These tests run on the temporary PR branch that represents what the code will look like after the merge. So, let’s say whoever was working on the feature branch failed to pull the most recent version of master before making the pull request. Their branch unit tests would all pass; however, the PR unit tests (triggered by pull_request) could still fail. Ideally, you’ll configure your repo to require that a branch is up to date before it can be merged, but this is a nice fail-safe.
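To make the distinction concrete, here is a rough sketch of the two triggers; the branch names are assumptions, not necessarily the exact ones in the repo:

# Sketch: trigger for "Run unit tests for feature branch"
on:
  push:
    branches-ignore: [ master ]

# Sketch: trigger for "Unit test for pull request", which runs
# against the temporary merge result of the PR
on:
  pull_request:
    branches: [ master ]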

Don’t forget: once your workflows have run at least once, you can go into your repository’s branch protection settings and require that these workflows complete successfully before a pull request can be merged.

Packaging your code

Now that we’ve merged our pull request, we want to deploy the new version of our project. Normally, we’d get the latest copy of the code locally, run a package/publish command, and (if it isn’t part of your publishing tool) upload the files somewhere on the internet. None of that is especially difficult, and there’s plenty of good software to assist with the process. That said, we’ve just created this nice, clean build environment. Wouldn’t we want to package our production code in that same clean environment?

Let’s do just that. In the repo, I’ve created an AWS Lambda project that I creatively called Lambda. We’re going to publish that project to a local folder and then upload it to another location on the web. We’ve created a new workflow to handle these steps. You can see that it triggers when a change is pushed to the master branch, which is exactly what happens when you merge a pull request. As soon as a pull request is merged, our workflow will begin running. However, be aware that it won’t be included in the pull request’s list of workflows since it wasn’t actually part of the PR.
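As a sketch, the trigger for this deployment workflow might read as follows, assuming master is the default branch as it is throughout this series:

on:
  push:
    branches: [ master ]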

In the workflow, you’ll see a bunch of environment variables set using the env: keyword. We use these to ensure consistent values and to provide a central place to call out important configuration settings. In our scripts, we reference them with the syntax ${VARNAME}; however, the exact reference syntax can differ based on the type of shell you use and where you input the reference.
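To illustrate, here is a hypothetical env: block consistent with the commands below; the bucket name is a placeholder, and the other values are inferred from the publish output path shown later:

env:
  LAMBDA_FOLDER: Lambda            # project folder name
  CONFIG: Release                  # build configuration
  TARGET_RUNTIME: rhel.7.2-x64     # runtime identifier passed to dotnet publish
  S3_BUCKET: my-deployment-bucket  # placeholder: substitute your real bucket

Note that ${VARNAME} works inside a bash run: script; in workflow YAML expressions, you would write ${{ env.VARNAME }} instead.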

For the actual logic we’re going to focus on the steps named “Package Lambda” and “Zip lambda package”. In the “Package Lambda” step, we manually execute dotnet’s publish command targeting our Lambda project:

dotnet publish "./${LAMBDA_FOLDER}/" -c ${CONFIG} -r ${TARGET_RUNTIME} --no-self-contained

If you test this command locally, you’ll see that it outputs a published folder in /Lambda/bin/Release/netcoreapp3.1/rhel.7.2-x64/publish. Your folder may be different if you use different publish settings, but you’ll need to know the folder path to retrieve those files in a moment.

Currently, we have a folder with a bunch of loose files in it. Let’s tidy up by creating a zip of those files. In the “Zip lambda package” step, you can see we use the zip command that we’d expect to find pre-installed in our Linux environment. This command zips up our published files and stores the archive in the current directory, where we can grab it easily in a moment.

zip -j "${LAMBDA_FOLDER}.zip" ./${LAMBDA_FOLDER}/bin/${CONFIG}/*/${TARGET_RUNTIME}/publish/*.*

We now have our zipped-up program that . . . will be destroyed as soon as the workflow finishes. So, we should probably send it somewhere for safekeeping.

Uploading your output to storage

Since this is an AWS Lambda, we’re going to store it in an AWS S3 bucket. To simplify that process, I’d like to use the AWS CLI tools for uploading files. We’ll also need a way to authenticate to AWS without putting our credentials in the workflow file.

Let’s start with preparing the AWS CLI tools because, conveniently, we don’t actually have to do anything. Depending on what image you use in the runs-on entry, GitHub will provide some resources out of the box that you are likely to need. For ubuntu-latest, this includes the AWS CLI tools.
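If you’d like to confirm that for yourself, an optional sanity-check step is all it takes; this is just a sketch:

jobs:
  deploy:
    runs-on: ubuntu-latest   # GitHub's hosted Ubuntu image ships with the AWS CLI
    steps:
      - name: Check AWS CLI version   # optional sanity check
        run: aws --version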

Next, we need to authenticate with AWS. We perform our authentication in the “Configure AWS Credentials” step using the aws-actions/configure-aws-credentials@v1 marketplace action. In this step, you’ll see some new syntax: specifically, ${{ secrets.AWS_ACCESS_KEY_ID }}. You can read more about creating and using secrets in GitHub’s documentation. In short, secrets allow you to store and use sensitive information (relatively) safely.
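Based on that action’s documented inputs, the step looks roughly like this; the region value is an assumption, so use whichever region your resources live in:

- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1   # assumption: match this to your bucket's region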

I said “relatively”, but secrets are a feature you can and should use. Understanding the risks, mitigating potential damage, and weighing the alternatives is a complex discussion to have with your team. Any time you store credentials or provide access to someone, you should be thinking about minimizing risk, minimizing access, and maximizing your control.

Getting back to the topic at hand, we’ve stored several secrets in our repository. Storing them in our repository lets the entire team use those credentials while only allowing a repo administrator to actually edit them. The administrator can share, rescind, or update access for the entire team with minimal effort.

At this point, we are authenticated with AWS and any AWS CLI commands will automatically use our current authentication, so let’s go ahead and upload our file! The upload will be handled in the “Upload Lambda zip to S3 bucket” step. Again, we find that it’s a simple command:

aws s3 cp "${LAMBDA_FOLDER}.zip" s3://${S3_BUCKET}/

We just tell the AWS CLI that we want to copy the zip file in our current folder up to the bucket we specify. It uses the credentials we established in the previous step and executes without a problem. Our zipped-up program is now safely in our S3 bucket. From there, we can use the AWS web interface to load the new code into our Lambda, or, if this were a desktop application, we could upload the output to our public download directory.
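For reference, here is how the three deployment steps fit together as a job body; the step names come from the article, and the commands are the ones we’ve already walked through:

- name: Package Lambda
  run: dotnet publish "./${LAMBDA_FOLDER}/" -c ${CONFIG} -r ${TARGET_RUNTIME} --no-self-contained

- name: Zip lambda package
  run: zip -j "${LAMBDA_FOLDER}.zip" ./${LAMBDA_FOLDER}/bin/${CONFIG}/*/${TARGET_RUNTIME}/publish/*.*

- name: Upload Lambda zip to S3 bucket
  run: aws s3 cp "${LAMBDA_FOLDER}.zip" s3://${S3_BUCKET}/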

Deploying GitHub packages

As a second example of deploying changes, we’re going to take a look at uploading a GitHub package. In our case, we’ve created a Validator package that uses the same validation logic performed by the web service. The idea is that an app that uses our web service can verify input before sending the request; we (hopefully) reduce traffic load, and the user doesn’t have to wait just to learn about an obvious error.

The logic for this step can be found in “Publish nuget package”. There, you’ll find a simple package deployment command using the dotnet SDK tools:

dotnet nuget push ./${PACKAGE_FOLDER}/bin/${CONFIG}/${PACKAGE_NAME_TEMPLATE} -k ${{ secrets.GITHUB_TOKEN }} -s https://nuget.pkg.github.com/stephenteodori/ --skip-duplicate --no-symbols true

GitHub also provides documentation for uploading a package with a variety of technologies beyond just dotnet.

With more time I’d create something a bit . . . smarter. In essence, this command will find all the nupkg files in the validator project output and upload them to GitHub’s package management web service. I say “all” the nupkg files, but we only expect one to exist. Ideally, we’d have a secondary way of verifying the package’s version number (which is part of the .nupkg filename) and point directly to the one file we expect to exist. However, this is simpler and the risk of problems is very low since we always start in a completely clean file structure.
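To sketch that “smarter” idea: a hypothetical version-pinned push might look like the following, where PACKAGE_VERSION and the Validator filename are illustrative assumptions rather than part of the actual workflow:

# Hypothetical: resolve the version explicitly, then push exactly one file
PACKAGE_VERSION=1.2.3   # assumption: read this from the .csproj or a VERSION file
dotnet nuget push "./${PACKAGE_FOLDER}/bin/${CONFIG}/Validator.${PACKAGE_VERSION}.nupkg" -k ${{ secrets.GITHUB_TOKEN }} -s https://nuget.pkg.github.com/stephenteodori/ --skip-duplicate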

Another important item to point out is the reference to secrets.GITHUB_TOKEN. Unlike the other secrets, this value is pre-populated by GitHub and provides basic access to the repository. It acts similarly to a personal access token, but you don’t have to create it manually. You can read more about what access it provides in GitHub’s documentation.

Every great journey begins with a single step

With that, we have our simple testing and deployment process. From here, you can see how it fits into your development workflow and add further refinement. Simple as it is, maintaining this process naturally keeps your project’s dependencies clean, ensures that unit tests run whenever changes are made, and opens up opportunities for further automation by keeping manual steps out of the testing and deployment workflows.

We hope you’ve found these articles instructive and thank you for taking the time to learn more about Continuous Integration. While CI has become pretty mainstream in software development, remember we’re always here to help answer any development questions you may have. Read the previous editions of our CI blog article series or reach out for ideas on how to implement Continuous Integration and streamline your development process. Contact us today.