At IT Galaxy 2018, PQR's annual customer event, I gave a presentation on Infrastructure as Code (IaC). Various topics and IaC solutions were discussed in this presentation, such as Azure Resource Manager Templates, AWS CloudFormation and Terraform.
In this article I will dive deeper into Terraform and discuss the Terraform configuration I demoed at IT Galaxy: an AWS multi-Availability Zone (highly available) website. The following AWS concepts are part of the Terraform configuration we will look into: EC2 instances, Load Balancers, Launch Configurations, Auto Scaling Groups, Scaling Policies, S3 object storage and IAM roles. I will also explain some of the Terraform basics in this blog post.
Short introduction to Terraform
Terraform is an open source command line tool that enables you to orchestrate infrastructure deployment using so-called Terraform configurations. A Terraform configuration is a set of text files that describe the infrastructure and set variables. These text files are written in the Terraform format or in JSON. The Terraform format is more human readable and is the preferred format.
Terraform files have the .tf extension, but you can also use .tfvars to store your variables. Terraform will read all .tf and .tfvars files in a directory; it’s also possible to use a subdirectory structure and include files as modules.
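To give an idea of what this looks like, here is a minimal, illustrative sketch of a Terraform configuration; the variable name and region below are just examples, not part of the configuration discussed later in this article:

# vars.tf – declare an input variable with a default value
variable "aws_region" {
  description = "The AWS region to deploy to"
  default     = "eu-central-1"
}

# main.tf – configure the AWS provider plugin
provider "aws" {
  region = "${var.aws_region}"
}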
Terraform supports different virtualization, cloud and configuration management platforms through the concept of provider plugins. Terraform is an infrastructure orchestration solution and is quite often used as part of a DevOps toolchain.
Terraform accepts different commands; the most important ones are:
- init – Initializes the environment and downloads the required provider plugin. A provider plugin is an independent binary that is used to connect to an endpoint such as Azure, AWS or vSphere;
- plan – Generates an execution plan and displays an overview of the components that will be created;
- apply – Applies the execution plan to the endpoint. Apply can be used for the initial deployment as well as for updates (the same goes for plan);
- destroy – Destroys the deployment.
Now that you know a few Terraform commands, let’s have a look at the Terraform configuration I presented and demoed at IT Galaxy 2018.
Designing a highly available website on AWS
To demo the power of Terraform, I deployed a highly available, multi-Availability Zone website on AWS using Terraform. To get an idea of the architecture of this website/application, I have included the design in this article:
The basic idea behind this application is:
- Create a launch configuration, which specifies the AMI and instance size and includes a script for copying the website files from S3 to a webserver instance. Access to the S3 bucket is managed through an IAM role (more on this later);
- Create an auto scaling group, including a scaling policy. The auto scaling group includes a minimum of 2 webservers;
- Create a load balancer that is linked to the auto scaling group;
- Create two security groups and link them respectively to the load balancer and the deployed virtual machines (a sketch of this wiring follows below).
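As an illustration of how the security group wiring could look in Terraform (the resource names, ports and rules below are assumptions on my part; see the GitHub repository for the exact configuration):

# Security group for the load balancer: allow HTTP from anywhere
resource "aws_security_group" "elb" {
  name = "demo-elb-sg"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Security group for the webserver instances: only allow HTTP traffic
# coming from the load balancer security group (egress omitted for brevity)
resource "aws_security_group" "web" {
  name = "demo-web-sg"

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = ["${aws_security_group.elb.id}"]
  }
}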
The configuration is saved in the following files:
- main.tf – Main configuration;
- outputs.tf – Contains the output variables;
- vars.tf – Contains the (non secret) variables;
- terraform.tfvars – Contains the secret variables, such as your AWS key and secret (see the example below).
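As an illustration of how the secret variables can be kept out of the main configuration (the variable names below are assumptions; check the repository for the actual names):

# vars.tf – declare the secret variables without assigning a value
variable "aws_access_key" {}
variable "aws_secret_key" {}

# terraform.tfvars – assign the values locally; keep this file out of version control
aws_access_key = "AKIA...your-access-key..."
aws_secret_key = "...your-secret-key..."

Because terraform.tfvars is read automatically, these values are picked up without passing them on the command line.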
All the files are available in a repository on GitHub. Take a look at the Terraform files to get an idea of the exact configuration. The Terraform format is human readable, so you will understand what is going on. Below I will provide some additional details on how the website is deployed.
Deploying the website files from an S3 bucket
In this example the website files are available on S3, the object storage service provided by Amazon. The website that is deployed in this example is a copy of the IT Galaxy 2018 website. It is a static website, so no server side processing is required. An important step in the deployment is to copy the website files from S3 to the /var/www/html directory on a deployed instance. The website instances are stateless in this setup, so the webserver and the required website files are deployed over and over again as part of the instance deployment. To get everything up and running the “User data” option is used, which is accessible in Terraform through the user_data argument. The contents of user_data are:
user_data = <<-EOF
            #!/bin/bash
            sudo yum update -y
            sudo yum install httpd -y
            sudo service httpd start
            sudo chkconfig httpd on
            aws s3 cp "${var.s3_bucket}" /var/www/html/ --recursive
            hostname -f >> /var/www/html/index.html
            EOF
The first few lines do a basic installation and configuration of Apache. The line starting with “aws” uses the AWS CLI to copy the files from S3 to /var/www/html. You might be wondering why this VM has access to this S3 bucket. This is managed through an IAM role that is linked to the VM instance. The required IAM role is shown in the screenshot below:
The linked policy only allows read-only access to the S3 bucket where the website files are stored:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::<<bucket-name>>" ] }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::<<bucket-name>>/*" ] } ] }
Now the deployed VM instance can copy the website files to its local hard drive. The IAM role is linked to the launch configuration through the iam_instance_profile argument in Terraform.
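In Terraform, this wiring could look roughly like the sketch below; the resource names and the policy file name are illustrative, the repository contains the actual definitions:

# IAM role that the EC2 instances are allowed to assume
resource "aws_iam_role" "web" {
  name = "demo-web-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

# Attach the read-only S3 policy shown above to the role
resource "aws_iam_role_policy" "s3_read" {
  name   = "demo-s3-read"
  role   = "${aws_iam_role.web.id}"
  policy = "${file("s3-read-policy.json")}"
}

# Instance profile that wraps the role; this is what the launch
# configuration references through iam_instance_profile
resource "aws_iam_instance_profile" "web" {
  name = "demo-web-profile"
  role = "${aws_iam_role.web.name}"
}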
The high availability setup
The high availability setup in this example leverages the following AWS concepts: launch configuration, auto scaling group, auto scaling policy and the classic load balancer. See the main.tf file for the configuration of these constructs.
The launch configuration describes the VM (based on an AMI, Amazon Machine Image) that is included in the auto scaling group. The launch configuration uses the AMI with ID “ami-5652ce39”: an Amazon Linux, HVM (Hardware Virtual Machine), SSD EBS-backed image in the Frankfurt region, deployed with instance size t2.nano.
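A sketch of what the launch configuration could look like in Terraform (the resource names are illustrative; the AMI ID and instance size match the values above):

# Launch configuration: the blueprint for every webserver instance
resource "aws_launch_configuration" "web" {
  image_id             = "ami-5652ce39"
  instance_type        = "t2.nano"
  security_groups      = ["${aws_security_group.web.id}"]
  iam_instance_profile = "${aws_iam_instance_profile.web.name}"
  user_data            = "..." # the bootstrap script shown earlier

  lifecycle {
    create_before_destroy = true
  }
}

The create_before_destroy lifecycle setting makes sure a replacement launch configuration is created before the old one is removed, which avoids dependency issues when the configuration changes.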
The auto scaling group (ASG) has a minimum size of 2 VMs and a maximum size of 10 VMs. The ASG is linked to the classic load balancer that is deployed by Terraform. The auto scaling policy is a “Target Tracking Scaling” policy that is configured to maintain a 70 percent average CPU load.
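A rough sketch of how these resources could be tied together in Terraform (names, availability zones and ports are illustrative; see the repository for the actual configuration):

# Classic load balancer, spread across two availability zones in Frankfurt
resource "aws_elb" "web" {
  name               = "demo-web-elb"
  availability_zones = ["eu-central-1a", "eu-central-1b"]
  security_groups    = ["${aws_security_group.elb.id}"]

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = 80
    instance_protocol = "http"
  }
}

# Auto scaling group: 2 to 10 instances, registered with the load balancer
resource "aws_autoscaling_group" "web" {
  launch_configuration = "${aws_launch_configuration.web.id}"
  availability_zones   = ["eu-central-1a", "eu-central-1b"]
  min_size             = 2
  max_size             = 10
  load_balancers       = ["${aws_elb.web.name}"]
}

# Target tracking scaling policy: keep the average CPU load around 70 percent
resource "aws_autoscaling_policy" "cpu" {
  name                   = "demo-cpu-target"
  autoscaling_group_name = "${aws_autoscaling_group.web.name}"
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 70.0
  }
}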
Deploying the website with Terraform
A simple
terraform apply
will create an execution plan and deploy the website/application to AWS. Terraform will report the public URL of the load balancer, so you can access the website.
After an update to the .tf files, you can run terraform apply again. Terraform will automatically determine the steps to take and change the configuration or redeploy some of the components. After you’ve finished, you can delete the application with a simple terraform destroy.
That is it for today; I hope this was helpful. Please leave a comment below if you have any feedback.