Terraform: Cloud Made Easy (Part 1 Of 4)
Jordan Taylor


Terraform, by HashiCorp, is an automation tool that can build, change, and version cloud and on-premises infrastructure simply and effectively. Terraform manages your infrastructure as code. Terraform is your organization's ticket into the cloud.

This blog is the first of a four-part series that aims to give readers enough experience and knowledge of Terraform to go forth and adopt it all the way through to production, deploying all of your mission-critical applications. More advanced techniques are introduced as the series progresses. At a high level, here are the outcomes of the series by release:

  • Release 1: Directory Structure and Basic Compute in AWS
  • Release 2: Basic Compute in Microsoft Azure, Google Cloud & Rackspace
  • Release 3: Networking Infrastructure in AWS, Microsoft Azure, Google Cloud & Rackspace
  • Release 4: Terraform Provisioning with Puppet, Chef & Ansible

To accompany each blog post, a code release consisting of demos and instructions will be published to help readers absorb as much knowledge from the series as possible. Fork it, re-use it; they’re your learning materials!

Terraform Release 1: Repository Release Notes

This blog provides a view into the best practices to follow when setting up your Terraform repository, covering both the how and the why. Setting up your Terraform repository so that it allows orchestration into many different cloud providers, enables separate (safe) management of compute and networking infrastructure, and gives you an extremely reusable setup is paramount. It fully unlocks the magic of Terraform: a tool that can build a platform of hundreds of nodes configured to your desired state in minutes, while also giving you a control panel to maintain that infrastructure into the future.

Directory Structure

From here on, this is a step-by-step overview of how a Terraform repository should be structured and why. The base of the Terraform repository contains sub-directories named after the cloud providers that you wish to deploy into using Terraform.

Within each of the cloud provider sub-directories you would then have two further sub-directories: environments and modules.
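A minimal sketch of that layout looks like this (the provider directories shown are illustrative, not a prescription from the accompanying repository):

    terraform-repo/
    ├── aws/
    │   ├── environments/
    │   └── modules/
    ├── azure/
    │   ├── environments/
    │   └── modules/
    └── google/
        ├── environments/
        └── modules/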

Modules

Modules are your reusable pieces of code for both compute and networking. They have variable values abstracted from them completely to allow maximum reusability.

A typical compute node module would consist of the following three files:

  • ${module_name}.tf representing your main class, containing the actual instance resource declarations and any resources related to them (security groups, for example)

  • output.tf declaring the outputs that will be provided by the module, representing variables that can be accessed by other modules called within an environment.
  • variables.tf declaring all of the variables to be passed into the module's resources. This file is responsible for pulling through the environment-specific variable values that are passed to the module, ready to be used.

If a module consists of a large number of resources, it is logical to split the main class out into separate .tf files representing those resources; output.tf and variables.tf remain. A typical networking module follows the same pattern as a compute node module, with the same three important files, unless it too contains a large number of resource declarations.
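As a concrete illustration, a minimal compute module might look like the sketch below. The resource names, variables and outputs here are assumptions for the sake of example, not taken from the accompanying repository:

    # modules/demo_node/demo_node.tf -- hypothetical main class
    resource "aws_security_group" "demo_node" {
      name   = "${var.name}-sg"
      vpc_id = "${var.vpc_id}"
    }

    resource "aws_instance" "demo_node" {
      count                  = "${var.instance_count}"
      ami                    = "${var.ami}"
      instance_type          = "${var.instance_type}"
      subnet_id              = "${var.subnet_id}"
      vpc_security_group_ids = ["${aws_security_group.demo_node.id}"]
    }

    # modules/demo_node/variables.tf -- values arrive from the environment
    variable "name"           {}
    variable "vpc_id"         {}
    variable "subnet_id"      {}
    variable "ami"            {}
    variable "instance_type"  { default = "t2.micro" }
    variable "instance_count" { default = 1 }

    # modules/demo_node/output.tf -- values exposed to other modules
    output "instance_ids" {
      value = ["${aws_instance.demo_node.*.id}"]
    }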

Environments

Now for the organization-specific setup: environments. This setup forms an example set of environments: dev, test, preprod and prod.

Each environment should have separate subdirectories for compute and networking.

This is because the compute and networking sub-directories each have their own Terraform state file (terraform.tfstate), meaning that they are managed by Terraform separately and are decoupled from each other. In other words, compute can be created, destroyed and recreated a thousand times on top of networking infrastructure that was only created once.
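A single environment therefore ends up looking something like this (file names such as vpc.tf are illustrative), with each sub-directory holding its own state:

    environments/dev/
    ├── compute/
    │   ├── demo_nodes.tf
    │   ├── terraform.tfvars
    │   └── terraform.tfstate
    └── networking/
        ├── vpc.tf
        ├── terraform.tfvars
        └── terraform.tfstate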

Storing this terraform.tfstate file is critical to the management of your infrastructure from the point of creation, so how and where you store it is a key decision that depends on your organization's use case.
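As one illustrative option (not prescribed by this series), the state file can be kept remotely, for example in an S3 bucket; the bucket, key and region below are placeholders:

    # environments/dev/networking/backend.tf -- illustrative remote state only
    terraform {
      backend "s3" {
        bucket = "my-terraform-state"
        key    = "dev/networking/terraform.tfstate"
        region = "eu-west-1"
      }
    }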

A compute sub-directory of an environment consists of a series of .tf classes that reference the modules to be used in that environment (in this case, demo_nodes.tf), including the code that passes tailored variables through to the modules. Running ‘terraform apply’ from this directory creates the resources described, and running ‘terraform destroy’ destroys them. Before running ‘terraform apply’, a ‘terraform get’ must be executed in order to pull in the required modules.
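A sketch of what such a class and workflow might look like, assuming the hypothetical demo_node module above (the module path and variable names are assumptions):

    # environments/dev/compute/demo_nodes.tf -- calls the reusable module
    module "demo_nodes" {
      source         = "../../../modules/demo_node"
      name           = "dev-demo"
      ami            = "${var.ami}"
      instance_type  = "${var.instance_type}"
      instance_count = "${var.instance_count}"
      vpc_id         = "${var.vpc_id}"
      subnet_id      = "${var.subnet_id}"
    }

    # typical workflow from the compute directory
    cd environments/dev/compute
    terraform get       # pull in the referenced modules
    terraform plan      # preview the changes
    terraform apply     # create the resources
    terraform destroy   # tear them down again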

The terraform.tfvars file is where the hard-coded variables for the environment are entered. For compute, examples include the number of replications of a module you desire, the AMI you wish a module to be built on, or the VPC to place your compute into.
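For the compute sketch above, that file might contain something like the following (all values are placeholders):

    # environments/dev/compute/terraform.tfvars -- illustrative values only
    ami            = "ami-0123456789abcdef0"
    instance_type  = "t2.micro"
    instance_count = 2
    vpc_id         = "vpc-11111111"
    subnet_id      = "subnet-22222222"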

It’s also worth noting that Terraform picks up Terraform-specific environment variables, so hard-coding them into terraform.tfvars may not be needed. For a variable called ‘x’, running ‘export TF_VAR_x=1234’ means Terraform will use the value 1234 for ‘x’.
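For example (the variable must still be declared in variables.tf):

    export TF_VAR_x=1234   # overrides any value for "x" in terraform.tfvars
    terraform plan         # var.x now resolves to 1234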

Networking environments have exactly the same setup as described for compute; the only difference is the content of the .tf classes and the variables that are passed.

This directory has its own terraform.tfstate file, meaning that it is itself the control panel for manipulating the dev networking infrastructure. Dev compute will also have its own terraform.tfstate, and this pattern is replicated across all environments.

That’s it for Part 1, but remember to take a look at the repository, read the READMEs and follow the demos to build some resources into AWS. Next up: Basic Compute in Microsoft Azure, Google Cloud & Rackspace.
