30 April 2020

Deploying IaC using Terraform

TERRAFORM is used to automate the deployment of infrastructure across multiple providers, in both public and private clouds. Provisioning infrastructure through 'software' to achieve 'consistent' and 'predictable' environments is Infrastructure as Code.

               IaC - in simple words, a replacement for SOPs, with automation on top.

Core concepts to achieve this:
  • Defined in code: IaC should be defined in code, whether in the form of JSON, YAML, or HCL.
  • Stored in source control: the code should be stored in a version control repository such as GitHub.
  • Declarative vs. imperative: in the imperative approach, I tell the software each and every step it needs to do the job. In the declarative approach, the software already has a predefined routine and works out what to do from a few references. Terraform is an example of a declarative approach to deploying IaC.
  • Idempotency & consistency: once a job is done, and I again receive a request to do the same job, it is the idempotent behaviour of Terraform not to repeat the steps already performed; instead it reports that the current configuration matches the desired one, so no changes need to be made. In a non-idempotent world, each time that job arrives, the same steps are repeated again and again to fulfil a requirement that is already in place.
  • Push vs. pull: Terraform works on the push principle, where it pushes the configuration to its target.
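To make the declarative and idempotent points concrete, here is a minimal, hypothetical HCL fragment (the AMI id is a placeholder, not one used in this exercise):

```hcl
# Declarative: we describe the desired end state, not the steps to get there.
# Terraform diffs this against the real infrastructure; if they already match,
# a second "terraform apply" simply reports "No changes." (idempotency).
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t2.micro"
}
```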
The key benefit here is that everything is documented in code, which helps you understand your infrastructure in more detail.
Key Terraform components
In this exercise, I demonstrate how you can quickly deploy a t2.micro instance of Amazon Linux, without logging into the AWS console, just by writing a Terraform plan.
To begin with, you need to fulfil a few prerequisites:
  • terraform client to run terraform commands
  • IAM user with AWS CLI access
Note: at the time of writing this article I used Terraform version 0.13.4, so you may see some resource deprecation later on.

Install the terraform client
$ wget https://releases.hashicorp.com/terraform/0.13.4/terraform_0.13.4_linux_amd64.zip
$ unzip terraform_0.13.4_linux_amd64.zip
$ mv terraform /usr/sbin/

Next, create a Terraform config file with .tf as its extension. Here are the key blocks that Terraform uses to define IaC:

#PROVIDER - providers such as AWS and Google are declared here
#VARIABLES - input variables can be declared here
#DATA - data from the provider is collected here in the form of data sources
#RESOURCE - information about resources from the provider is fed in here
#OUTPUT - data is output when apply is called
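Put together, a skeletal .tf file using these five block types might look like the sketch below. The names and values here are placeholders only; the real blocks for this exercise are developed later in the article.

```hcl
# PROVIDER
provider "aws" {
  region = var.region
}

# VARIABLES
variable "region" {
  default = "us-east-1"
}

# DATA
data "aws_ami" "example" {
  most_recent = true
  owners      = ["amazon"]
}

# RESOURCE
resource "aws_instance" "example" {
  ami           = data.aws_ami.example.id
  instance_type = "t2.micro"
}

# OUTPUT
output "instance_id" {
  value = aws_instance.example.id
}
```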

Defining variables in Terraform can be achieved in multiple ways: you can create an external file with a *.tfvars extension, create a variables.tf file, or include them in your main.tf file to persist variable values. For example, a terraform.tfvars file:

aws_access_key = "AKIA5O3G54mp13OBSE4RA"
aws_secret_key = "+bh/vVqo54mp13Erxv7YlrSs/sdRwN9ZzeKDtAjCP"
key_name = "tfkeypair"
private_key_path = "/home/user/tfkeypair.pem"

So in this exercise, I attempt to deploy a "t2.micro" Amazon EC2 instance with Nginx up and running on it.
In the end, your Terraform configuration file structure may look like this, where *.tfplan & *.tfstate are the key files for your IaC.
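For instance, assuming the configuration is split across the files named in this exercise, the working directory might contain:

```
.
├── variables.tf
├── provider.tf
├── main.tf
├── outputs.tf
├── terraform.tfvars
├── ami.tfplan
└── terraform.tfstate
```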

Creating a terraform configuration file


First, we define the set of variables that are used throughout the configuration. I have defined key pairs so that we can SSH into our AWS instance, along with a default region where my instance will be deployed.
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "key_name" {}
variable "private_key_path" {}
variable "region" {
  default = "us-east-1"
}


In the provider file we define our provider and feed in the key details declared in our variables section, using the syntax var.variableName.
provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.region
}
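As an aside, hard-coding credentials in a .tfvars file risks leaking them into source control. The AWS provider can also pick up credentials from the standard AWS environment variables, in which case access_key and secret_key can be omitted from the provider block. A sketch, with placeholder values:

```shell
# Placeholder values - substitute your own IAM credentials.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
```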


In the data source block, we pull data from the provider. In this exercise, we use Amazon as the provider and an Amazon Linux AMI for our EC2 instance.
data "aws_ami" "aws-linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-hvm*"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}


In this section we can define more than one resource. Here I have used the default VPC, so that it will not be deleted when the instance is destroyed. Next, we define a security group, opening ports 80 & 22 so that we can SSH into our instance, which is going to run Nginx in this example; for that, we need to supply the VPC id so that the security group is created in the right VPC.
resource "aws_default_vpc" "default" {
}

resource "aws_security_group" "allow_ssh" {
  name        = "nginx_demo"
  description = "allow ports for nginx demo"
  vpc_id      = aws_default_vpc.default.id

  # to allow traffic from outside to inside
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # to allow traffic from inside to outside, i.e. from instance to internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# in this block we are actually defining our instance,
# which will run nginx, with t2.micro as the instance type
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.aws-linux.id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  # since we are doing SSH, we need to define a connection in the
  # resource block so that terraform understands where to connect
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(var.private_key_path)
  }

  # to remotely execute commands on the instance
  provisioner "remote-exec" {
    inline = [
      "sudo yum install nginx -y",
      "sudo service nginx start"
    ]
  }
}


This block gives you the output of your configuration once it is applied:
output "aws_instance_public_dns" {
  value = aws_instance.nginx.public_dns
}


Now, to deploy the above configuration, the Terraform deployment process follows a cycle:

Initialization > Planning > Application > Destruction

$ terraform init

This initializes the Terraform configuration, checks whether the provider modules/plugins are already available, and downloads them if not.

$ terraform fmt      // checks the formatting of all the config files
$ terraform validate // further validates your config
$ terraform plan -out ami.tfplan // writing out the plan lets you reuse it

Terraform looks for configuration files in the current working directory, loads any variables it finds in the variables file, and stores out the plan.

$ terraform apply "ami.tfplan" // a saved plan is applied without an approval prompt

This performs the configuration you created as code, applies it to the provider, and does the magic. If, while applying the plan, Terraform doesn't like something in your config and gives you an error, you need to correct it and regenerate ami.tfplan.

Test your configuration by hitting the URL produced by the outputs.tf file.

You can also validate from your AWS console, where you will see the new instance running.

Now, if you don't want the configuration to stay active and cost you money, you can destroy it:

$ terraform destroy --auto-approve

Lastly, running this from your config folder destroys everything corresponding to the configuration you applied.

