Running in the “cloud” is more than a buzzword: it can lead to real financial savings for your company and happier devops. Like all things, though, you have to do it correctly.

In this blog post, we will discuss the mistakes that we made when we started, how we’ve improved our process and how Packer helps us with that.

What We Did Wrong

Since the beginning of our company, ThreatSim has been hosted in Amazon Web Services. We weren’t taking advantage of the full suite of services - we really only used the Elastic Compute Cloud to run virtual servers and the Simple Storage Service to host our static assets.

We liked the scalability and the reliability of those services, but always had the feeling that we weren’t doing things the “Cloud Way”. For instance, we had long running servers and used tools like Capistrano to deploy our applications and Ansible to manage our infrastructure. This “worked” and kept us going for the first few years of our existence.

But something about it didn’t feel right. Somewhere, Werner Vogels was silently judging us. We were ignoring the Elastic in Elastic Compute Cloud.

Put simply, we were in the Amazon Cloud but we acted like we had a data center full of physical machines, treating our EC2 instances as if they were bare metal servers.

Other concerns we had included:

  1. We couldn’t be highly available because we ran only one instance of each type of server.
  2. Updating system packages required scheduled maintenance windows where we planned the least impactful upgrade path for our systems.
  3. We wasted money by not being able to scale elastically when traffic and load increased, so we ran extra servers just to be safe.
  4. When Amazon had an outage in our availability zone, we went down completely, with no real disaster recovery plan.
  5. We were slowly but surely falling behind the latest versions of Ubuntu. As a security company, that was absolutely untenable.

So, we sat down and discussed how we were going to become highly available, scalable, and easily upgradeable while making use of the best that AWS had to offer. Initially, we wanted to leverage our existing investment in tools like Capistrano and Ansible, but we kept running into serious constraints with them.

A Ship Arrives

We have been long-term users of Codeship for our continuous integration and build/test pipeline. They offer a great service at a reasonable price with amazing support. We are also avid subscribers to their technical blog.

Fortuitously for us, at the same time we were discussing solutions to our “problems”, they posted a really interesting series of articles on immutable infrastructure (which admittedly stood on the shoulders of articles by Chad Fowler, amongst others).

Go over and read their blogs and then come back here. We’ll wait.

Welcome Back

After discussing immutable infrastructure internally, we decided that we wanted to take the plunge and move our application to use AWS properly (read: immutably).

Early on, we had separated our application by function, so conceptually the hardest part was already done. For example, we had web servers for our different web applications and worker servers for our different job-related applications. We had defined roles for our servers and kept all functionality separated.

Since we were moving to immutable components, we needed a central place to store state. We made our application layers more rigid so that state stayed where it was supposed to, and we no longer saved anything other than logs to the filesystem (and even those were pushed to a remote log aggregation service).

The main advantage when it comes to state in immutable infrastructure is that it is siloed. The boundaries between layers storing state and the layers that are ephemeral are clearly drawn and no leakage can possibly happen between those layers. There simply is no way to mix state into different components when you can’t expect them to be up and running the next minute. - Codeship
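
On the logging side, shipping everything off the box can be as small as a single rsyslog drop-in. A minimal sketch, assuming rsyslog (the Ubuntu default) and a hypothetical aggregation endpoint at logs.example.com:

#!/bin/sh -x
# Sketch only: logs.example.com is a placeholder for your log aggregation service
echo '*.* @@logs.example.com:514' > /etc/rsyslog.d/90-remote.conf
service rsyslog restart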

After transitioning to Amazon’s Relational Database Service (using the Key Management Service to securely encrypt our data), we started building out our Amazon Machine Image library.
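
As an aside for anyone making the same move: standing up an encrypted RDS instance is a single AWS CLI call. A rough sketch, with placeholder names and a hypothetical KMS key alias (not our actual configuration):

# Sketch only: identifier, engine, class, credentials and key alias are placeholders
aws rds create-db-instance \
  --db-instance-identifier app-db \
  --db-instance-class db.m3.medium \
  --engine postgres \
  --allocated-storage 100 \
  --master-username appuser \
  --master-user-password 'change-me' \
  --storage-encrypted \
  --kms-key-id alias/app-db-key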

Since we have server types that are somewhat similar (such as multiple types of web servers), we decided to create a base Packer JSON template and build specialized servers from that AMI. Our “base_ami” template starts from the latest Ubuntu Cloud Image and configures the users and packages (build-essential, curl, Ruby, etc.) that we need on every instance.
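
If you want to look up the most recent Ubuntu Cloud Image id yourself rather than grabbing it from Canonical’s AMI locator, the AWS CLI can do it. A rough sketch - the release filter (Trusty, current as of this writing) is an assumption, so adjust it to whatever release you target:

# 099720109477 is Canonical's account; the name filter pins the release and image type
aws ec2 describe-images \
  --region us-east-1 \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*" \
  --query 'Images | sort_by(@, &CreationDate) | [-1].ImageId' \
  --output text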

We make heavy use of the shell provisioner in Packer. To make our shell scripts easier to manage, we broke the functionality into multiple scripts. We also realized after some time that running the build on the largest compute instance sped the process up dramatically and led to lower overall costs. Just because you build on a c4.4xlarge doesn’t mean you can’t run on a smaller instance in production.

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-fb4cfd90",
    "instance_type": "c4.4xlarge",
    "ssh_username": "ubuntu",
    "ami_name": "base_ami"
  }],
  "provisioners": [
    {
      "type": "shell",
      "scripts": [
        "scripts/base_ami/install_packages.sh",
        "scripts/base_ami/install_aws_tools.sh",
        "scripts/base_ami/add_users.sh"]
    }
  ]
}

Our install_packages.sh looks something like this:

#!/bin/sh -x
# Refresh the package index, then install the base packages we want on every instance
apt-get -y update
apt-get -y install build-essential curl ntp git mcrypt secure-delete
...

And our install_aws_tools.sh looks like this:

#!/bin/sh -x
apt-get -y install unzip cloud-utils axel
wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
unzip awscli-bundle.zip
...

We build that Packer template:

packer build --var-file=production.json base_ami.json
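
The production.json referenced above is just a Packer variable file supplying the variables declared at the top of the template. At a minimum, it looks something like this (values redacted):

{
  "aws_access_key": "AKIA...",
  "aws_secret_key": "..."
}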

When it completes, we are given the id of the built AMI, which we will use in the next stage:

==> amazon-ebs: Creating the AMI: base_ami
    amazon-ebs: AMI: ami-abcdefgh
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Modifying attributes on AMI (ami-abcdefgh)...
    amazon-ebs: Modifying: description
==> amazon-ebs: Adding tags to AMI (ami-abcdefgh)...
    amazon-ebs: Adding tag: "source_ami": "ami-fb4cfd90"
    amazon-ebs: Adding tag: "name": "Base AMI (Built 2015-09-11T17:28:27Z)"
==> amazon-ebs: Tagging snapshot: snap-abcdefgh
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:

us-east-1: ami-abcdefgh

After we build that AMI with Packer, we pass the id from its output to the next, more specialized templates. For example, we have a template that is used as the base for all of our web servers: we want each of them to have the same version of nginx installed and a similar base nginx configuration for security purposes.

We use the id of our base AMI in place of the Ubuntu Cloud AMI id.

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-abcdefgh",
    "instance_type": "c4.4xlarge",
    "ssh_username": "ubuntu",
    "ami_name": "nginx_ami"
  }],
  "provisioners": [
    {
      "type": "file",
      "source": "conf/upstart/nginx.upstart",
      "destination": "/var/nginx.upstart"
    },
    {
      "type": "shell",
      "scripts": [
        "scripts/nginx_ami/install_nginx.sh",
        "scripts/nginx_ami/make_server_directory.sh"]
    }
  ]
}

When we build that, we are given the id for an AMI that has all of our base packages and nginx configured and ready to serve. From this AMI, we will build our custom servers for our web applications.
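
Building the specialized image works exactly like the base build. Assuming the template above is saved as nginx_ami.json, the command looks just like before:

packer build --var-file=production.json nginx_ami.json

If you would rather not edit the template every time the base AMI is rebuilt, you can also pull source_ami out into a template variable and pass it on the command line with -var 'source_ami=ami-abcdefgh'.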

In future posts, we’ll discuss how we get our servers to set themselves up with configuration details for their specific roles. Thanks for taking the time to read this and we hope it helps.