Amazon Web Services Cloud Development Kit

Setup and Deployment in AWS Cloud Development Kit

Rivers Agile Lead Software Engineer, Jimi Tallon, wraps up his blog series with a deep dive into deployment within Amazon’s software development framework.

Introduction

The goal of this blog article is to discuss and break down the technical details of Amazon Web Services Cloud Development Kit. If you wish to follow along, it helps to have a base understanding of the following:

  • The command-line interface (CLI) for your operating system
  • TypeScript
  • Code source control
  • Web services

Moreover, if a refresher is needed on the general details of CDK or if you landed on this blog article first, please refer to Part 1 for your convenience.

Setup

To start creating our project, we’ll need to fulfill some prerequisites in our setup. These prerequisites require certain installation and configuration steps. Additionally, credentials are needed, and they are separate from the AWS Console login. Without these credentials, the AWS CLI will not be able to connect to the AWS account from the command line. The noted credentials are called access keys and can only be downloaded once. If these keys are misplaced, forgotten, or no longer needed, they can be revoked at any time and new ones can be generated. You can generate these access keys under your user within the AWS IAM (Identity & Access Management) service.

The first installation step will be to install the AWS CLI (Command Line Interface). The AWS CLI is dependent on Python (currently version 3.x) and will help to tie your specific Amazon Console account to the command line tool. Select the instructions for your particular operating system from this article by Amazon to install the CLI and dependencies.

Once the installation is complete, open a Command Prompt/Terminal window and type [aws configure]. You will then be prompted to enter your AWS Access Key ID, AWS Secret Access Key, default region name, and default output format. It is recommended to enter at least the first two fields. If possible, also enter the default region as this can save time depending on your AWS cloud infrastructure. Proper credentials must exist within the AWS Web Console in order to proceed.

After configuration (still at the command prompt/terminal window), execute the command [aws s3 ls] to list the S3 buckets available in the default region. If the configuration is incorrect or your credentials are not set up properly, you will see an error.
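Put together, the configuration and verification steps look like this at the terminal (the key and region values shown are placeholders, not real credentials):

```shell
# Interactive credential setup; values shown are placeholders
aws configure
#   AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
#   AWS Secret Access Key [None]: <your-secret-access-key>
#   Default region name [None]: us-east-1
#   Default output format [None]: json

# Sanity check: lists the S3 buckets visible to these credentials
aws s3 ls
```

If the `aws s3 ls` call errors out, revisit the access keys in the IAM console before moving on.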

The last prerequisite for the sample project is to install Node.js. The quickest way is to follow these instructions. The installation of this program should install the command-line executables [node] and [npm] (node package manager). It is also recommended (but outside of the scope of this blog article) to use nvm to make sure environments are isolated and particular Node.js packages are installed for specific project needs.

Now, we can install the AWS CDK. It is a Node.js package on npm, so we use an npm command to perform the installation. In the command prompt/terminal window, execute the command [npm install -g aws-cdk]. The -g command-line option tells npm to install the aws-cdk package globally instead of just at a project level (i.e., the relative directory where the command is taking place). To test whether AWS CDK is set up properly, execute the command [cdk --version]. If the version information is displayed (e.g., 1.108.1 (build ae24d8a)), you are ready to go.
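As a quick recap, the install-and-verify commands are (the version in the comment is just an example; yours will likely differ):

```shell
# Install the CDK toolkit globally so the `cdk` command is on your PATH
npm install -g aws-cdk

# Confirm the installation; prints a version string such as 1.108.1 (build ae24d8a)
cdk --version
```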

To summarize these steps, below is a checklist of what you should have done:

  1. Install AWS CLI
    • Dependent on Python
  2. Configure AWS CLI
  3. Install Node.js
  4. Install AWS CDK
  5. Verify AWS CDK installation

In the event that any issues/errors were encountered that were not covered in this document, please refer to AWS CDK documentation for additional troubleshooting information.

Project

With setup out of the way, we can now begin project creation. To start, create a directory on the Command Prompt/Terminal window via mkdir test-cdk. Then navigate into the new directory (cd test-cdk). Once there, use the AWS CDK to scaffold a new project with the command cdk init app --language typescript. This command creates all necessary files and folders to get the project ready to go. Note: the command-line parameter --language typescript specifies the language the CDK code will be written in. Moreover, TypeScript should be configured as a development dependency in the package.json file along with other related dependencies.
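The project-creation commands, in order:

```shell
# Create and enter the project directory
mkdir test-cdk
cd test-cdk

# Scaffold a new CDK app, with stacks written in TypeScript
cdk init app --language typescript
```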

There are 3 initial scaffolding areas to be concerned with when you begin working with CDK: bin, lib, and package.json. The bin folder is your starting point to define and initialize any stacks that you are ready to deploy. If you created a stack (but have not instantiated it here via code), then you will not see it listed (e.g., via the cdk ls command) for deployment. If you create a new stack and need to deploy it, bin is the place where you create the instance of the class along with any relevant parameters. The lib folder is where you define your stacks along with any relevant supporting TypeScript classes. The final scaffolding area is package.json. This is where you can install CDK or other relevant packages. For a full list of packages, please refer to the AWS CDK API documentation.
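As a sketch of the bin entry point’s role, assuming the stack class and file names used later in this article (WebStack in lib/web-stack.ts and DeployStack in lib/deploy-stack.ts; the exact generated file name will vary with your project name):

```typescript
#!/usr/bin/env node
// bin/test-cdk.ts -- instantiate every stack you want available for deployment
import * as cdk from '@aws-cdk/core';
import { WebStack } from '../lib/web-stack';
import { DeployStack } from '../lib/deploy-stack';

const app = new cdk.App();

// A stack only shows up in `cdk ls` (and becomes deployable) once it is
// instantiated here, along with any relevant parameters.
new WebStack(app, 'WebStack');
new DeployStack(app, 'DeployStack');
```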

Stacks

As discussed in Part 1 of the blog article, we need to create stacks. These stacks are “middle men” that ultimately generate the YAML structure understood by AWS CloudFormation. The first stack will host the web service; the web service will be a Node.js application. The second stack will build and deploy code to the web service, triggered automatically from a CodeCommit repository. Details of the stack will include the AWS CI/CD CodePipeline features CodeBuild and CodeDeploy. As noted in the Project section, the stack classes will be created within the lib folder of our project scaffolding. As a best practice, it’s ideal to keep each stack as small a unit of definition as possible; for this project example, we’ll be grouping related resources into just these two stacks for simplicity’s and readability’s sake.

Since we are now into the code part of this article, a supplemental link is provided. Not all code logic will be discussed within this blog article, but key syntax will be. As discussed in Part 1, the resources we need to create are:

  • Isolated network (Stack 1)
  • Web service (Stack 1)
  • Security (Stack 1)
  • Build of web service code (Stack 2)
  • Deployment of web service code (Stack 2)

In TypeScript, every stack will extend the Stack class defined in the base CDK package @aws-cdk/core.
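A minimal CDK v1 stack skeleton (matching the @aws-cdk/core package noted above) looks roughly like this:

```typescript
// lib/web-stack.ts -- every stack extends Stack from @aws-cdk/core
import * as cdk from '@aws-cdk/core';

export class WebStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // resource definitions (VPC, security group, EC2 instance, ...) go here
  }
}
```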

Web Service Stack (Stack 1)

Stack 1 will be our web service stack. In this stack, we’ll be defining our network, web service, and some security. Let’s start in order, but first we need to create our TypeScript file. Following the file naming convention that was automatically generated in our scaffolding, we’ll be using kebab-case (e.g., primary-word-secondary-word). The stack 1 file will be called web-stack.ts. With stack file creation out of the way, defining our components can begin. For the network component, we create an instance of the Vpc class. In AWS terms, this means a Virtual Private Cloud (VPC), which is an isolated network for our resources. const vpc = new ec2.Vpc(..) is the partial syntax used.

Within the VPC, we need to provide at least one subnet and the base CIDR notation which, in this project example, will be 192.168.1.0/24. A security group component is required to be defined in order to create an EC2 instance (i.e., web service). Security groups can be thought of as what is allowed in and out from a network perspective. For the example project security group, we’ll be allowing all outbound network traffic, plus inbound rules for ports 22 (SSH, so we can remotely connect to our service) and 80 (standard web protocol). const securityGroup = new ec2.SecurityGroup(..) is the partial syntax used. securityGroup.addIngressRule(..) is the partial syntax for adding ports 80 and 22.
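Filling in those partial snippets, the network and security group might look like the sketch below (construct IDs such as 'TestVpc' and the single public subnet are illustrative assumptions, not taken verbatim from the example project):

```typescript
// At the top of lib/web-stack.ts (CDK v1 scoped packages assumed)
import * as ec2 from '@aws-cdk/aws-ec2';

// Inside the WebStack constructor:
// Isolated network using the base CIDR range from this example
const vpc = new ec2.Vpc(this, 'TestVpc', {
  cidr: '192.168.1.0/24',
  subnetConfiguration: [
    { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
  ],
});

// Allow everything outbound; only SSH (22) and HTTP (80) inbound
const securityGroup = new ec2.SecurityGroup(this, 'WebSecurityGroup', {
  vpc,
  allowAllOutbound: true,
});
securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(22), 'SSH');
securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(80), 'HTTP');
```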
Because it is known that deployment will be occurring from CodeDeploy, it’s important to configure permissions so the EC2 instance will be allowed to read AWS S3 buckets (i.e., cloud file storage). A role will be created and attached to the EC2 instance. const role = new iam.Role(..) is the syntax used to accomplish this.

Our last component will be our web server (i.e., EC2 instance). To keep cost at a minimum for this example project, the EC2 instance will be of type t2.micro which is free tier eligible. const ec2Instance = new ec2.Instance(..) is the partial syntax used. The ec2Instance TypeScript class object will be tied to all respective components defined before (i.e., Vpc, SecurityGroup, Role). An additional shell script will also be tied to the EC2 instance upon creation. This is known as the user-data. The shell script will be defined in ./lib/data/user-data.sh and will create an Nginx server with a generic HTML page to start with.
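The role and instance definitions can be sketched along the same lines (the AmazonS3ReadOnlyAccess managed policy and the Amazon Linux image are illustrative choices; check the example project source for the exact configuration):

```typescript
import * as ec2 from '@aws-cdk/aws-ec2';
import * as iam from '@aws-cdk/aws-iam';
import * as fs from 'fs';

// Inside the WebStack constructor:
// Role granting the instance read access to S3, where CodeDeploy artifacts live
const role = new iam.Role(this, 'WebServerRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
});
role.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonS3ReadOnlyAccess'),
);

// user-data script that sets up Nginx (and the CodeDeploy agent) on first boot
const userData = ec2.UserData.custom(
  fs.readFileSync('./lib/data/user-data.sh', 'utf8'),
);

// Free-tier-eligible instance tied to the VPC, security group, and role
const ec2Instance = new ec2.Instance(this, 'WebServer', {
  vpc,
  securityGroup,
  role,
  userData,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
  machineImage: ec2.MachineImage.latestAmazonLinux(),
});
```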

Build and Deploy Stack (Stack 2)

Stack 2 is our build and deployment stack. Using the kebab-case naming convention, the TypeScript file will be called deploy-stack.ts. The CI/CD process in AWS has most of its functionality encapsulated in a project known as a pipeline. A pipeline can define where the source code exists (e.g., GitHub or CodeCommit), how to build it, and where to deploy the built code. const project = new pipeline.Pipeline(..) is the partial syntax used to create an AWS CodePipeline project. Stages are defined within the pipeline as a programmatic array, and each stage is given a name for customization. The order in which they are declared in code determines the order in which the stages run. Artifacts from the build process are stored in AWS S3. Remember how in Stack 1 we needed to allow read access for deployment? This is why. The syntax to declare an artifact (stored in that bucket) is const output = new pipeline.Artifact("Output").
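A sketch of the source side of the pipeline, under the same CDK v1 assumptions (repository and construct names are illustrative; the build and deploy stages are elided here):

```typescript
import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as actions from '@aws-cdk/aws-codepipeline-actions';

// Inside the DeployStack constructor:
// Repository holding the Node.js "hello world" application
const repository = new codecommit.Repository(this, 'AppRepository', {
  repositoryName: 'test-cdk-app',
});

// Stage output artifacts are stored behind the scenes in an S3 bucket
const output = new codepipeline.Artifact('Output');

const project = new codepipeline.Pipeline(this, 'DeployPipeline', {
  stages: [
    {
      stageName: 'Source',
      actions: [
        new actions.CodeCommitSourceAction({
          actionName: 'Source',
          repository,
          branch: 'master',
          output,
        }),
      ],
    },
    // Build (CodeBuild) and Deploy (CodeDeploy) stages follow in declared order
  ],
});
```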

So, how does the code get pushed? The most common scenario is when a developer checks in code to a branch that the defined pipeline is watching. In our example, the “master” branch is our target. This event of checking in code is known as a trigger. The master branch will be stored in AWS CodeCommit and contains code for a simple Node.js “hello world” application (supplemented as a zip file in the CDK project). For simplicity, we’ll be creating the CodeCommit repository via the syntax const repository = new code.Repository(..) and will supply it to a stage within the pipeline (as a CodeCommitSourceAction). There are EC2 tag filters that must be specified in order to indicate which virtual machines to deploy to; this is noted in the example project source code. Now, every time a developer modifies the Node.js application and checks in code to the master branch, the pipeline will automatically build and deploy it. Also note that there is a special service, the CodeDeploy agent, that must be installed on each EC2 instance matching the tag query. This local EC2 service is installed via the user-data.sh script.

Additional deployment configuration can be specified in a file called appspec.yml which needs to exist in the base directory source code for the node application. The important information to specify is ‘where does the code live’ relative to ‘what was built’. Using source and destination directory YAML properties allows for proper mapping of this information. There are life cycle events that can be triggered to specify particular scripts that need to run on the server. One example could be a file saying the server is down for maintenance when the first part of the life cycle begins and then another to bring up services via the node command to resume server operations.
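A minimal appspec.yml along these lines might look as follows (the destination path and hook script names are illustrative assumptions, not taken from the example project):

```yaml
version: 0.0
os: linux
files:
  # Map 'what was built' (source, relative to the artifact root)
  # to 'where the code lives' on the server (destination)
  - source: /
    destination: /var/www/app
hooks:
  # Life cycle events: e.g., take the server down first, then bring it back up
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
```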

Bootstrapping

In order to deploy our newly built YAML and supporting files, we need an environment to put them in. But how do we send them to AWS? Bootstrapping via an AWS CDK CLI command creates a reusable S3 staging bucket for exactly this purpose. This command is to be run only once and is required for a proper deployment.

Note that the local CDK project references this bucket the next time a deployment occurs (discussed in the next section), and the bucket’s information and data are consumed by AWS CloudFormation.

To create this environment, type cdk bootstrap --toolkit-stack-name Test. Providing a toolkit stack name can be tedious, but it allows for granular CDK project organization. For example, imagine you have more than one CDK project that impacts other parts of your cloud infrastructure; it may make sense to separate their staging buckets depending on needs. Otherwise, you can accept the default cdk-stagingbucket-{random id} name generated by the AWS CDK system instead of test-stagingbucket-{random id}, in which case the --toolkit-stack-name parameter can be omitted.
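The two bootstrap variants described above:

```shell
# One-time bootstrap with a custom toolkit stack name
cdk bootstrap --toolkit-stack-name Test

# Or accept the default toolkit stack and staging bucket name
cdk bootstrap
```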

Deployment

The final step to deploy our stacks to CloudFormation is running cdk deploy {StackName} --toolkit-stack-name Test. If you only have one stack, you can omit the stack name. Otherwise, for this project example, use WebStack or DeployStack to deploy each respectively. To view a list of the stacks in the app, use the cdk ls CLI syntax. As discussed in Bootstrapping, for organization purposes we defined our CDK deployment bucket as Test, which can be seen in the --toolkit-stack-name CLI parameter. The progress of the CloudFormation transaction can be seen in the Command Prompt/Terminal window or via the AWS Web Console in CloudFormation. Also, AWS CDK is smart with its logic to either create or update resources depending on how the CloudFormation structure was modified. It’s not perfect, but it does a good enough job of surfacing any issues that may occur. This state information is kept mostly in 2 places: the cdk.out directory of your local CDK project and the toolkit stack deployment bucket (Test in our example).
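The deployment commands for this example project:

```shell
# Deploy each stack through the custom toolkit stack created at bootstrap
cdk deploy WebStack --toolkit-stack-name Test
cdk deploy DeployStack --toolkit-stack-name Test

# List the stacks defined in the CDK app
cdk ls
```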

With the state information stored, if only small modifications are made, only the components affected will be updated within the CloudFormation transaction. If any part of the CloudFormation transaction fails or has issues, it will roll back to the best of its ability. The rollback is usually complete, but in certain cases, like S3 bucket creation, manual cleanup is required.

Cleanup

As part of the CDK deployment, you can also destroy what you have created. The main portion that usually requires manual cleanup is any S3 buckets that have been created (so pay close attention!). To clean up stack creation work, use the syntax of cdk destroy WebStack –toolkit-stack-name Test to clean up stack 1 and cdk destroy DeployStack –toolkit-stack-name Test to clean up stack 2. Cleanup is important so you don’t get charged for any unneeded items within your AWS Cloud account.

Final Thoughts

The example project is, by no means, a perfect way of doing things in CDK, but it IS a way to illustrate how to organize thoughts and begin to build infrastructure into the AWS cloud. The simpler each stack is kept, the more it helps cement reusability and long-term support for your cloud infrastructure. Moreover, configuration stored in JSON or YAML can enrich the CDK project via certain fields (e.g., certain EC2 instance types). There is also secure configuration that can be accomplished using the AWS Secrets Manager service.

Additionally, specific deployment environments can be grouped appropriately and named dynamically with custom TypeScript helper classes, which can allow for faster development and ensure consistency between environments (e.g., Dev, QA, Prod). From the CodePipeline, it would be wise to send notifications, perhaps through AWS SNS, to make sure the team is aware when Node.js application changes are pushed. Hopefully after reviewing this blog article, the power of AWS CDK is apparent and some new cloud ideas can begin to bubble up. This article concludes our Amazon Web Services Cloud Development Kit blog series, but if you still have questions about implementing AWS or CDK, don’t hesitate to reach out. Our team can offer solutions to help streamline your development process – contact us today.

External Resources