0 Replies Latest reply on Oct 3, 2017 3:19 AM by Nicolekristen

    Continuous Infrastructure Delivery Pipeline with AWS CodePipeline, CodeBuild and Terraform

    Nicolekristen

      overview.png

      This explores how to build low-maintenance Continuous Delivery pipelines for Terraform, by using AWS building blocks CloudFormation, CodePipeline and CodeBuild.

       

      CloudFormation

       

      CloudFormation is the built-in solution for Infrastructure-as-Code (IaC) in AWS. It is usually a good choice because it offers a low-maintenance, easy-to-start solution. On the other hand, it can have some drawbacks depending on the use case or the usage level. Here are some points that come up regularly:

       

      AWS-only: CloudFormation has no native support for third-party services. It does support custom resources, but those are usually awkward to write and maintain. I would use them only as a last resort.

       

      Not all AWS services/features supported: The usual AWS feature release process is that a component team (e.g. EC2) releases a new feature, but the CloudFormation part is missing (the CloudFormation team at AWS is apparently a separate team with its own roadmap). And since CloudFormation is not open source, we cannot add the missing functionality ourselves.

       

      No imports of existing resources: AWS resources created outside of CloudFormation cannot be "imported" into a stack. This would be helpful, for example, when resources were set up manually earlier (perhaps because CloudFormation did not support them yet).

       

      Terraform to the rescue!

       

      Terraform is an IaC tool from HashiCorp, similar to CloudFormation, but with a broader usage range and greater flexibility.

       

      Terraform has several advantages over CloudFormation. Here are some of them:

       

      Open source: Terraform is open source so you can patch it and send changes upstream to make it better. This is great because anyone can, for example, add new services or features, or fix bugs. It’s not uncommon that Terraform is even faster than CloudFormation with implementing new AWS features.

       

      Supports a broad range of services, not only AWS: This enables automating bigger ecosystems spanning e.g. multiple clouds or providers. In CloudFormation one would have to fall back to awkward custom resources. A particular use case is provisioning databases and users of a MySQL server.

       

      Data sources: While CloudFormation has only "imports" and some intrinsic functions to look up values (e.g. from existing resources), Terraform provides a wide range of data sources (just have a look at this impressive list).

       

      Imports: Terraform can import existing resources (if supported by the resource type)! As mentioned, this comes in handy when working with a brownfield infrastructure, e.g. manually created resources.
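      To illustrate, an import invocation could look like the following sketch. The resource address aws_s3_bucket.legacy and the bucket name my-existing-bucket are assumptions for illustration, not from the original setup; actually running the command requires the terraform CLI, valid AWS credentials, and a matching resource block in your .tf files, so here we only construct and print it:

```shell
# Hypothetical sketch: adopting a manually created S3 bucket into Terraform state.
# Assumes a matching resource block already exists in your .tf files, e.g.:
#
#   resource "aws_s3_bucket" "legacy" {
#     bucket = "my-existing-bucket"
#   }
#
# terraform import then attaches the real bucket to that resource address.
# The command is only printed here, since running it needs the CLI and credentials:
import_cmd='terraform import aws_s3_bucket.legacy my-existing-bucket'
echo "$import_cmd"
```

      After a successful import, Terraform manages the bucket like any resource it created itself; a subsequent terraform plan should show no pending changes if the resource block matches reality.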

       

      (Some) Downsides of Terraform

       

      Terraform is not a managed service, so the maintenance burden is on the user side. That means we as users have to install, upgrade, maintain, and debug it ourselves (instead of focusing on our own products).

       

      Another important point is that Terraform uses "state files" to maintain the state of the infrastructure it created. The files are the holy grail of Terraform, and tampering with them can get you into real trouble, e.g. bringing your infrastructure into an undefined state. The user has to come up with a solution for keeping those state files in a synchronized and central location (luckily, Terraform provides remote state handling; I will get back to this in a moment). CloudFormation also maintains the state of the resources it created, but AWS takes care of the state storage!

       

      Last but not least, Terraform currently does not take care of locking, so two concurrent Terraform runs could lead to unintended results (this will change soon).

       

      Putting it all together

       

      So how can we leverage the described advantages of Terraform while still minimizing its operational overhead and costs?

       

      Serverless delivery pipelines

       

      First of all, we should use a Continuous Delivery Pipeline: Every change in the source code triggers a run of the pipeline consisting of several steps, e.g. running tests and finally applying/deploying the changes. AWS offers a service called CodePipeline to create and run these pipelines. It’s a fully managed service, no servers or containers to manage (a.k.a “serverless”).

       

      Executing Terraform

       

      Remember that we need to create a safe environment in which to execute Terraform, one that is consistent and auditable (so NOT your workstation!!).

       

      To execute Terraform, we will use AWS CodeBuild, which can be invoked as an action within a CodePipeline. The CodePipeline implicitly takes care of Terraform state file locking, as it does not allow a single action to run multiple times concurrently. Like CodePipeline, CodeBuild itself is fully managed, and it follows a pay-by-use model (you pay for each minute of build resources consumed).

       

      CodeBuild is instructed by a YAML configuration, similar to e.g. TravisCI (I explored some more details in an earlier post). Here is what a Terraform execution could look like:

       

      version: 0.1
      phases:
        install:
          commands:
            - yum -y install jq
            - curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq 'to_entries | [ .[] | select(.key | (contains("Expiration") or contains("RoleArn"))  | not) ] |  map(if .key == "AccessKeyId" then . + {"key":"AWS_ACCESS_KEY_ID"} else . end) | map(if .key == "SecretAccessKey" then . + {"key":"AWS_SECRET_ACCESS_KEY"} else . end) | map(if .key == "Token" then . + {"key":"AWS_SESSION_TOKEN"} else . end) | map("export \(.key)=\(.value)") | .[]' -r > /tmp/aws_cred_export.txt # work around https://github.com/hashicorp/terraform/issues/8746
            - cd /tmp && curl -o terraform.zip https://releases.hashicorp.com/terraform/${TerraformVersion}/terraform_${TerraformVersion}_linux_amd64.zip && echo "${TerraformSha256} terraform.zip" | sha256sum -c --quiet && unzip terraform.zip && mv terraform /usr/bin
        build:
          commands:
            - source /tmp/aws_cred_export.txt && terraform remote config -backend=s3 -backend-config="bucket=${TerraformStateBucket}" -backend-config="key=terraform.tfstate"
            - source /tmp/aws_cred_export.txt && terraform apply

       

      First, in the install phase, the tool jq is installed; it is used for a small workaround I had to write to get the AWS credentials from the metadata service, as Terraform does not support this yet. After retrieving the AWS credentials for later use, Terraform is downloaded, checksum-verified, and installed (they have no Linux repositories).
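      The transformation that the jq one-liner performs can be sketched in isolation: rename the interesting credential keys to the standard AWS_* environment variable names, drop Expiration and RoleArn, and emit export lines. The following minimal stand-in uses fabricated sample values and plain sed instead of jq, so it runs anywhere:

```shell
# Simulate the JSON the container credentials endpoint would return
# (all values are fabricated for illustration):
cat > /tmp/creds.json <<'EOF'
{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"wJalrExampleKey","Token":"FQoGZXExampleToken","Expiration":"2017-10-03T03:19:00Z","RoleArn":"arn:aws:iam::123456789012:role/example"}
EOF

# Map the three credential keys to the standard AWS_* names and emit
# "export KEY=VALUE" lines; Expiration and RoleArn simply never match:
{
  sed -n 's/.*"AccessKeyId":"\([^"]*\)".*/export AWS_ACCESS_KEY_ID=\1/p' /tmp/creds.json
  sed -n 's/.*"SecretAccessKey":"\([^"]*\)".*/export AWS_SECRET_ACCESS_KEY=\1/p' /tmp/creds.json
  sed -n 's/.*"Token":"\([^"]*\)".*/export AWS_SESSION_TOKEN=\1/p' /tmp/creds.json
} > /tmp/aws_cred_export.txt

cat /tmp/aws_cred_export.txt
```

      The real buildspec reads the JSON live from 169.254.170.2 using $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI; the sed version above only mirrors the shape of the output file.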

       

      In the build phase, first the Terraform state file location is set up. As mentioned earlier, it is possible to use S3 buckets as a state file location, so we tell Terraform to store it there.

       

      You may have noticed the source /tmp/aws_cred_export.txt command. It simply takes care of setting the AWS credential environment variables before executing Terraform. This is necessary because CodeBuild does not retain environment variables set in previous commands.
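      That behavior can be reproduced locally: each buildspec command runs in its own shell, so a plain export is lost before the next command starts. A minimal sketch (DEMO_TOKEN and /tmp/demo_env.txt are illustrative names) simulates two separate commands with two subshells:

```shell
# Each CodeBuild command runs in a fresh shell, so a plain export is lost:
sh -c 'export DEMO_TOKEN=abc123'
sh -c 'echo "without sourcing: [$DEMO_TOKEN]"'   # DEMO_TOKEN is not set here

# The workaround: write the exports to a file once, then source that file
# at the start of every later command, as the buildspec does:
sh -c 'echo "export DEMO_TOKEN=abc123" > /tmp/demo_env.txt'
sh -c '. /tmp/demo_env.txt && echo "with sourcing: [$DEMO_TOKEN]"'
```

      This is exactly why both build commands in the buildspec are prefixed with source /tmp/aws_cred_export.txt rather than exporting the credentials once in the install phase.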

       

      Last but not least, terraform apply is called, which takes all .tf files and converges the infrastructure against this description.

       

      Pipeline as Code

       

      The delivery pipeline used as an example in this article is available as an AWS CloudFormation template, which means that it is codified and reproducible. Yes, that also means that CloudFormation is used to generate a delivery pipeline which will, in turn, call Terraform. And that we did not have to touch any servers, VMs or containers.

       

      You can try out the CloudFormation one-button template here:

      cloudformation-launch-stack.png

       

      You need a GitHub repository containing one or more .tf files, which will in turn get executed by the pipeline and Terraform.

       

      Once the CloudFormation stack has been created, the CodePipeline will run initially:

      pipeline.png

      The InvokeTerraformAction will call CodeBuild, which looks like this:

       

      codebuild.png

       

      Stronger together

       

      The real power of both Terraform and CloudFormation comes to light when we combine them, as we can actually use the best of both worlds. This will be the topic of an upcoming blog post.