Terraform – Modules

Deploying Re-usable Code

In my series on Terraform, from the basics <link> to the more advanced topics, we’re going to cover modules.  It is very helpful to have an understanding of how state files work first, so refer to my other blog on state files and start there.

This blog continues with the more advanced topics and talks about using modules in your Terraform deployments.  To start, watch my video on Channel9:

https://channel9.msdn.com/Shows/DevOps-Lab/Terraform-Modules-deploying-reusable-code

Modules – Organizing your code

In the video I group my Azure resources together; I want to create containers in which to organize my code.  You can absolutely put all your resources into one ‘main.tf’ file, but that can get very long and complex, especially if you’re deploying dependencies and larger, more complex environments.  This series focuses on how we can make our code better organized (even for something as simple as readability) and re-usable.

In the previous series, we had a single folder with a simple Terraform deployment; it included our ‘main.tf’ file, our variables, and our outputs.  That is a great setup for a simple deployment and for learning how to structure Terraform.  Now we want to move towards an enterprise-level solution that makes your code repeatable, a definitive goal of Infrastructure as Code (IaC).

In the video I deployed a single environment into Azure, which included a vNET, a subnet and a virtual machine scale set.  The folder structure lets us segment out each Azure resource into its own folder, and I call each module from my ‘main.tf’ file.  Our ‘main.tf’ file acts as the starting point for executing our code.  When building out a new resource we use a ‘resource’ block; when calling a module, we use a ‘module’ block that points at the source folder:
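A minimal sketch of what that root ‘main.tf’ can look like. The module names, folder paths and variables here are illustrative, not the exact demo code:

```hcl
# Root main.tf — a resource is declared with a "resource" block,
# while a module is called with a "module" block pointing at a source folder.

provider "azurerm" {
  features {}
}

# A resource built directly in the root module
resource "azurerm_resource_group" "rg" {
  name     = "rg-vmss-demo"
  location = var.location
}

# Child modules called from their source folders
module "network" {
  source              = "./network"
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location
}

module "vmss" {
  source              = "./vmss"
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location
  subnet_id           = module.network.subnet_id
  capacity            = var.capacity
}
```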

Folder Structure

Terraform is declarative: it reads the ‘main.tf’ file (along with the other ‘*.tf’ files), builds a dependency graph, and then creates each resource or module in the right order. Hence, if we put all our resources, backend configuration and outputs into our ‘main.tf’ file, it becomes a very complicated and unwieldy beast. Let’s look at the folder structure that I deployed in my demo. If we drill into the entire folder structure of the environment, we can look at what each file does:
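For reference, a typical layout along these lines (folder and module names are illustrative):

```
.
├── main.tf            # entry point: provider, backend, module calls
├── variables.tf       # root-level input variables
├── outputs.tf         # values surfaced after apply
├── terraform.tfvars   # concrete values for this deployment
├── network
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── vmss
    ├── main.tf
    ├── variables.tf
    └── outputs.tf
```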

‘variables.tf’ – Defines the variables that each configuration expects; I have put a variables file into each module

‘outputs.tf’ – Defines the values from the deployment that we want to retrieve from the root module.  The ‘output’ block lets us retrieve specific references from the state file, i.e. the id or name of a subnet
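As a sketch (names assumed for illustration, not the exact demo code), a module’s ‘variables.tf’ and ‘outputs.tf’ might look like:

```hcl
# network/variables.tf — inputs this module expects from its caller
variable "resource_group_name" {
  type = string
}

variable "location" {
  type    = string
  default = "eastus"
}

# network/outputs.tf — values the root module can read back from state,
# e.g. the subnet id that the scale set module needs
output "subnet_id" {
  value = azurerm_subnet.internal.id
}
```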

Input Variables

Looking at my folder structure, I used input variables in my top-level folder.  This is another great benefit of using modules: input variables let us change values to scale a deployment or target different environments.

Looking at my ‘terraform.tfvars’ file, I declare specific variables that are applied to my deployment. In the video I change the capacity of the virtual machine scale set from 5 to 25.  Once the change is applied, Azure is quick to deploy these instances (remember, this all depends on datacentre capacity).
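A sketch of such a ‘terraform.tfvars’ file, with variable names assumed for illustration:

```hcl
# terraform.tfvars — concrete values applied at plan/apply time.
# Changing capacity from 5 to 25 rescales the scale set on the next apply.
location = "eastus"
capacity = 25
```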

The use case here is simple: deploy some networking and a virtual machine scale set, then scale up (or down) based on project requirements.  This is an easy, repeatable solution that gives your organisation on-demand cloud resources.

Another use case is deploying this same environment into another Azure region.  All we have to do is change our ‘location’ variable.  We can then deploy an exact replica of our environment into another region, i.e. East US, UK South, etc.
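To replicate into another region, only the ‘location’ value needs to change; for example (variable names assumed, as before):

```hcl
# terraform.tfvars for a UK South replica — everything else stays identical
location = "uksouth"
capacity = 25
```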

The structure of modules allows us to modify a single set of resources from one file, in a clean and well-structured manner.  I could add more resources to my deployment by amending any of the module files, or even add additional modules for other Azure resources.  The use of modules makes segmenting your resources that much easier in larger deployments.

I have customers that use modules in other ways, depending on the size of their deployments.  They may group resource groups or vNETs into a module instead of grouping by type of Azure resource.  This will come down to the architecture of your environment, what you’re trying to achieve, and what makes the most sense for your organisation.

Customers always ask, ‘how should we write our modules?’

There are some things to think about in your design: what are the dependencies of your resources? Modules are a great way to map those dependencies. Do all the resources share the same lifecycle? If yes, then modules could be the best fit.  If not, read on about workspaces.

Workspaces

The above example is great for a single environment whose resources share one lifecycle and can be managed together: created, deleted and modified as a unit.  But let’s say you have a requirement to deploy the above infrastructure and replicate it into identical environments for testing and development.

The use case here is that we need identical environments for our application in which we can run development and testing, as well as deploy into production.  Workspaces allow us to create a state file per environment, so we can manage each environment independently while keeping identical, like-for-like deployments of our infrastructure, with the added ability to modify networking, capacity, etc. in only one environment if needed.
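One sketch of this idea (not the demo code): Terraform exposes the current workspace name as ‘terraform.workspace’, which a single configuration can use to vary names and sizing per environment. The sizing map and resource names below are assumptions for illustration:

```hcl
locals {
  env = terraform.workspace # "dev", "test" or "prod"

  # Illustrative per-environment sizing
  capacities = {
    dev  = 2
    test = 5
    prod = 25
  }
}

# Resource names carry the workspace name, so each environment
# gets its own resource group from the same configuration
resource "azurerm_resource_group" "rg" {
  name     = "rg-app-${local.env}"
  location = var.location
}

module "vmss" {
  source   = "./vmss"
  capacity = lookup(local.capacities, local.env, 2)
}
```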

To implement workspaces, we rely heavily on our state file and the backend.  By default, when you run Terraform, the persistent data is stored in your backend under a ‘default’ workspace, which is why you have one state associated with your configuration. In Azure the backend can support multiple named workspaces, meaning we still have only one backend, but distinct instances of the configuration’s state within it.

We will deploy 3 workspaces (you can create 2 or more depending on your requirements) that deploy the modules from the last section. In effect, we’ll have dev, test and production environments with identical resources: a virtual machine scale set with networking in Azure.  Using the code above, we use the exact same module configuration; the difference is that we have built a folder for ‘dev’, ‘test’ and ‘prod’, each containing its own ‘*.tfvars’ file.

In each environment folder I have declared the variables for that environment in its ‘*.tfvars’ file:
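As an illustration (variable names assumed), the three files might differ only in a couple of values:

```hcl
# dev/dev.tfvars
location = "eastus"
capacity = 2

# test/test.tfvars
location = "eastus"
capacity = 5

# prod/prod.tfvars
location = "eastus"
capacity = 25
```

Each is then applied with its own var file, e.g. ‘terraform plan -var-file=dev/dev.tfvars’.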

I have changed the capacity in each environment just to show that each is managed independently of the others via its state file. The state files are hosted in Azure, under the same storage account. For this demo I appended the environment name to each state file name:

As you can see in the blob storage, you can also place them into distinct folders; the visual is just giving you options, and either way is acceptable.

In my previous blog we discussed ‘backend’ files and securing your state file. For simplicity of the demo, I reference my storage access key in my ‘backend’ file, but ideally you should pull your access key from Azure Key Vault. Please note: making your storage access key visible is not best practice, nor secure in an enterprise environment.  I use this ‘backend’ file to tell Terraform where to find our state files, and I add a suffix to each state file name for the environment it applies to.  As you’ll see above, each state file ends with a ‘dev’, ‘test’ or ‘prod’ suffix so we can clearly differentiate which environment it references.
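A sketch of such an azurerm ‘backend’ block; the resource group, storage account and key names here are placeholders, not the demo’s real values:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstatedemo"
    container_name       = "tfstate"
    key                  = "vmss-demo.prod.tfstate"
    # The access key is best supplied via Key Vault or an environment
    # variable rather than committed in this file.
  }
}
```

When you use ‘terraform workspace’, the azurerm backend also keeps a separate state blob per named workspace automatically.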

Deploying Your code into Workspaces

Now that we’ve set up our folder structure and configured our storage in Azure, we need to create our workspaces using the ‘terraform workspace’ commands. Remember, when you run ‘terraform plan’ it assumes you’re using the ‘default’ workspace, so we need to create our own.

After you run ‘terraform init’, create the new workspace:

$ terraform workspace new test
Created and switched to workspace "test"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

Once that workspace is created, you can run ‘terraform plan’ and you’ll be working in that workspace.  Terraform will not see any existing resources from the default or any other workspace.  You can then go and create new workspaces for ‘dev’ and ‘prod’.  All the resources that you deploy still exist but cannot be managed from another workspace.

To switch between workspaces, run ‘terraform workspace select <name>’:

$ terraform workspace select test

or...

$ terraform workspace select prod

You can easily navigate between your workspaces without affecting the others.  You can change resources or destroy a workspace that is no longer needed.  This makes it easier to manage our 3 environments independently; once we’re done testing our code in our ‘dev’ environment, we can destroy it.  This enables us to deploy or destroy resources as needed.

All of the code used in this demo is available on my GitHub repo: https://github.com/scubaninja/Ch9Demo-Terraform-Modules

Please note: The code was written as v0.12 was released, it has not been updated for the latest syntax.

I hope this guide has been useful; please send across any questions here or on Twitter. I have covered other advanced topics on Terraform, and links for them are below:

Terraform and Azure DevOps – Delivering CI/CD deployments – Link Coming Soon!

Terraform and Github Actions – Delivering code from your repo – Link Coming soon!

Terraform State Files – Scaling and Securing your Deployments

4 thoughts on “Terraform – Modules”

  1. Hi April,
This is a great explanation in simple understandable terms, learned a lot. I just wanted to know how to maintain this life-cycle and also switching to any workspace during AZ Devops pipeline. Can you please put up a blog for the same, using modules and workspaces which can be integrated into AZ Devops pipeline?
    Thanks,
    Vinay


  2. I’m kind of getting this, slowly. Some quick questions:
    1) How do you get VSCode to show the references (in your screenshots)
    2). You have resourcegroup name variable, but you don’t specify it in any tfvars. Do you get prompted for that name when you plan/apply?


  3. Hi Mark,
    1. I have the TF extension installed, so it allows me to review syntax and reference my code
    2. The TFVARS are in the environment folders, the RG names should pull from the application-Env names

    Give me a shout if you need more

