
03 April 2020

Deploying IaC using Terraform

Terraform is used to automate infrastructure deployment across multiple providers, in both public and private clouds. Infrastructure as Code is the provisioning of infrastructure through 'software' to achieve 'consistent' and 'predictable' environments.

               IaC - in simple words, a replacement for SOPs, with automation on top.

Core concepts to achieve this:
  • Defined in code: IaC should be defined in code, whether in the form of JSON, YAML, or HCL.
  • Stored in source control: the code should be stored in a version control repository such as GitHub.
  • Declarative & imperative: in the imperative approach I tell the software each and every step it needs to do the job. In the declarative approach the software already has a predefined routine and works out what to do from the references I give it. Terraform is an example of the declarative approach to deploying IaC.
  • Idempotent & consistent: once a job is done, and the same job is requested again, Terraform's idempotent behavior means it will not repeat the steps; instead it reports that the current configuration already matches the desired one, so no changes need to be made. In a non-idempotent world, the same steps would be repeated each time the job comes in, even though the requirement is already in place.
  • Push & pull: Terraform works on the push principle, where it pushes the configuration to its target.
The key benefit here: everything is documented in code, which helps you understand your infrastructure in more detail.
Key Terraform components
In this exercise I demonstrate how you can quickly deploy a t2.micro Amazon Linux instance, without logging in to the AWS console, just by writing a Terraform plan.
To begin with, you need to fulfill a prerequisite:
                          
                             "you should have an IAM user with AWS CLI access"

and then create a Terraform configuration file with a .tf extension. Below are a few blocks which Terraform uses to define things:

#VARIABLES - input variables can be declared here
#PROVIDER - providers such as AWS or Google are declared here
#DATA - data from the provider is collected here in the form of data sources
#RESOURCE - information about resources from the provider is defined here
#OUTPUT - data is output when apply is called

Defining variables in Terraform can be achieved in multiple ways; here I define the variables in an external file.

To persist variable values, create a file and assign the variables within it. Create a file named terraform.tfvars with the following contents:

aws_access_key = "AKIA5O3GDEMOVOBSE4RA"
aws_secret_key = "+bh/vVqoDEMOErxv7YlrSs/sdRwN9ZzeKDtAjCP"
key_name = "tfkeypair"
private_key_path = "C:\\tfkeypair.pem"

Terraform automatically reads variable values from any file in the configuration directory that matches terraform.tfvars or *.auto.tfvars.
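For example, a value could equally be set from a separately named auto-loaded file; a minimal sketch, assuming a hypothetical file named region.auto.tfvars:

# region.auto.tfvars - loaded automatically because of the .auto.tfvars suffix
region = "us-east-1"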

So in this exercise I deploy a t2.micro Amazon EC2 instance with nginx up and running on it.

At the end, your Terraform configuration file structure will look like this:


Let's start composing a Terraform configuration file.

#VARIABLES

First we define the set of variables that are used during the configuration. I have defined the key pair so that we can SSH to our AWS instance, along with the default region where my instance will be deployed.

#VARIABLES
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "key_name" {} variable "private_key_path" {} variable "region" { default = "us-east-1" }


#PROVIDER

Here we define our provider and feed in the key details declared in the variables section, using the syntax var.variableName.

#PROVIDER
provider "aws" {
access_key = var.aws_access_key secret_key = var.aws_secret_key region = var.region }


#DATA

Here in the data source block we pull data from the provider; in this exercise we use Amazon as the provider and an Amazon Linux AMI for our EC2 instance.

#DATA
data "aws_ami" "aws-linux" {
most_recent = true owners = ["amazon"] filter { name = "name" values = ["amzn-ami-hvm*"] } filter { name = "root-device-type" values = ["ebs"] } filter { name = "virtualization-type" values = ["hvm"] } }

#RESOURCE

In this block we can define more than one resource. Here I use the default VPC, so that it will not be deleted when the instance is destroyed. Next we define a security group, opening ports 22 and 80 so that we can SSH to our instance and reach the nginx it will run; the security group needs the VPC id so that it can be created in that VPC.

#RESOURCE
resource "aws_default_vpc" "default" {
}

resource "aws_security_group" "allow_ssh" {
  name        = "nginx_demo"
  description = "allow ports for nginx demo"
  vpc_id      = aws_default_vpc.default.id

  # to allow traffic from outside to inside
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # to allow traffic from inside to outside, i.e. from instance to internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}


# in this block we define our actual instance: an nginx host with t2.micro as the instance type
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.aws-linux.id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  # since we are using SSH, we define a connection in the resource block so that Terraform knows where to connect
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(var.private_key_path)
  }

  # since we want to execute commands remotely on the instance
  provisioner "remote-exec" {
    inline = [
      "sudo yum install nginx -y",
      "sudo service nginx start"
    ]
  }
}

#OUTPUT

This block gives you the output of your configuration.

#OUTPUT
output "aws_instance_public_dns" {
value = aws_instance.nginx.public_dns }

#END

Now, to deploy the above configuration: the Terraform deployment process follows a cycle.

Initialization 
Planning
Application 
Destruction

$ terraform init
This initializes the Terraform configuration, checks whether the provider modules/plugins are already available, and downloads them if not.

$ terraform plan -out ami.tfplan
This looks for configuration files in the current directory, loads any variables found in the terraform.tfvars file, and writes the plan out to ami.tfplan.

$ terraform apply "ami.tfplan" 
This takes the configuration you created as code, applies it to the provider, and does the magic.
Now test the configuration by hitting the public DNS name from the output and you should see the nginx welcome page; you can also validate the new instance from your AWS console.


Now, if you don't want the configuration to stay active and cost you money, you can destroy it.

$ terraform destroy
Run this from your configuration folder to tear down the configuration you applied; it destroys everything corresponding to your config.


Br,
Punit


15 September 2018

Understanding network concepts as they relate to AWS

Creating a Virtual Private Cloud aka VPC

A VPC is a virtual network dedicated to your AWS account, into which you can launch AWS resources such as EC2 instances.

While creating a VPC you must specify a range of IPv4 addresses in the form of a CIDR block.

CIDR stands for Classless Inter-Domain Routing,
a set of Internet Protocol (IP) standards used to create unique identifiers for networks and the individual devices in them.

The IP addresses allow particular information packets to be sent to specific computers. Shortly after the introduction of CIDR, technicians found it difficult to track and label IP addresses, so a notation system was developed to make the process more efficient and standardized. That system is known as CIDR notation.

For example, defining a CIDR block:

/32 represents the number of bits in the mask

CIDR                              Subnet Mask                                Total IPs
  /32                             255.255.255.255                                1

--------------
10.0.0.0/26

Start with 10.0.0.0

Formula: 2^(32-26) = 2^6 = 64, i.e. 64 IPs in this block

End with 10.0.0.63

So out of 64 IPs we can subdivide the block into 4 subnets of 16 IPs each,

i.e. each /28 = 16 IPs

1st subnet range :  10.0.0.0  - 10.0.0.15
2nd subnet range: 10.0.0.16 - 10.0.0.31
3rd subnet range: 10.0.0.32 - 10.0.0.47
4th subnet range: 10.0.0.48 - 10.0.0.63

Let's say we have created 2 private & 2 public subnets.

Out of the 16 IPs in each subnet, only 11 will be available to use, whereas 5 are reserved for internal use (the first 4 and the last 1).
--------------
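The same /26-to-/28 split can be computed in Terraform with the built-in cidrsubnet() function; a minimal sketch (local names are illustrative only):

# splitting 10.0.0.0/26 into four /28 subnets
locals {
  vpc_cidr = "10.0.0.0/26"
  # cidrsubnet(prefix, newbits, netnum): adding 2 bits turns a /26 into a /28
  subnet_cidrs = [for i in range(4) : cidrsubnet(local.vpc_cidr, 2, i)]
  # result: ["10.0.0.0/28", "10.0.0.16/28", "10.0.0.32/28", "10.0.0.48/28"]
}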
  • with every VPC a main route table is created by default
  • 1 subnet can have only one route table
  • but 1 route table can be associated with multiple subnets
  • only 1 IGW can be attached to a VPC
  • you always need to keep the NAT gateway in a public subnet; it handles all Internet-bound traffic
For NACL inbound rules in your VPC: the rule with the smaller number gets higher priority and overrides rules with higher numbers.

Network ACLs aka Firewall for VPC

(you can limit the inbound/outbound traffic to your subnets by applying rules)

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
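A minimal Terraform sketch of such a network ACL, reusing the default VPC resource from the Terraform post above (the rules shown are illustrative only); note how the lower rule number wins:

resource "aws_network_acl" "demo" {
  vpc_id = aws_default_vpc.default.id   # assumes the aws_default_vpc resource defined earlier

  # rule 100 is evaluated before rule 200, so HTTP is allowed
  ingress {
    rule_no    = 100
    protocol   = "tcp"
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 80
    to_port    = 80
  }

  # everything else is denied by this higher-numbered rule
  ingress {
    rule_no    = 200
    protocol   = "-1"
    action     = "deny"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
}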

Creating Virtual private network aka VPN

To set up a VPN we need to create two gateways:

a customer gateway, representing the on-prem end, which specifies the public IP of your router
a virtual private gateway, representing the cloud end of the tunnel; both of them are then used to create a VPN connection.
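A minimal Terraform sketch of those two gateways and the connection between them (the router IP and ASN are placeholders, and the virtual private gateway is attached to the default VPC from the earlier post):

resource "aws_customer_gateway" "onprem" {
  bgp_asn    = 65000             # placeholder ASN of the on-prem router
  ip_address = "203.0.113.10"    # placeholder public IP of the on-prem router
  type       = "ipsec.1"
}

resource "aws_vpn_gateway" "cloud" {
  vpc_id = aws_default_vpc.default.id
}

resource "aws_vpn_connection" "site_to_site" {
  customer_gateway_id = aws_customer_gateway.onprem.id
  vpn_gateway_id      = aws_vpn_gateway.cloud.id
  type                = "ipsec.1"
  static_routes_only  = true
}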

NAT vs ELB

The direction of traffic is:

traffic that goes from private instances > outside world goes via the NAT gateway ~ forward proxy
traffic that comes from the outside world > private instances comes via the ELB ~ reverse proxy

NAT should always be placed in a public SUBNET ~ cannot span more than 1 subnet

ELB can be placed across multiple SUBNETs ~ can span subnets
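A minimal Terraform sketch of that NAT placement, assuming hypothetical aws_subnet.public and aws_subnet.private resources already exist: the NAT gateway sits in the public subnet and the private subnet's default route points at it.

resource "aws_eip" "nat" {
  vpc = true                              # Elastic IP for the NAT gateway
}

resource "aws_nat_gateway" "natgw" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id    # NAT gateway must live in a public subnet
}

resource "aws_route_table" "private" {
  vpc_id = aws_default_vpc.default.id

  # send all Internet-bound traffic from the private subnet through the NAT gateway
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.natgw.id
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}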

ELB Application LB (app level LB) & Classic LB (n/w layer LB)

- private
- public
(depending on the subnet you put it in, it will be private/public)

ELB is a managed service ~ distributes incoming traffic from the internet ~ does health checks; if any instance is unhealthy it will not forward traffic to it

Types of LB

1. Classic LB : obsolete now

2. Application LB : Layer 7 PDNTSPA

  • Supports HTTP & HTTPS
  • port filtering is possible thanks to security groups
  • headers may be modified
  • SSL offloading
  • path-based routing & different routing logic
  • you need a target group (of instances) to route traffic to

3. N/W LB : Layer 4 PDN

  • supports TCP 80/8080
  • incoming traffic
  • absence of security groups
  • no header modification
  • no routing logic can be applied here
  • static IP is possible

Now you can send traffic to a target group that is on-prem and not on AWS by setting the target type to IP.
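A minimal Terraform sketch of such a target group with target type IP (the on-prem address is a placeholder and must be reachable over VPN or Direct Connect):

resource "aws_lb_target_group" "onprem" {
  name        = "onprem-targets"
  port        = 80
  protocol    = "TCP"
  target_type = "ip"                     # register IP addresses instead of instance IDs
  vpc_id      = aws_default_vpc.default.id
}

resource "aws_lb_target_group_attachment" "onprem_host" {
  target_group_arn  = aws_lb_target_group.onprem.arn
  target_id         = "10.10.0.5"        # placeholder on-prem IP
  port              = 80
  availability_zone = "all"              # required for targets outside the VPC
}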


keep refreshing more to come...

29 August 2018

Quick guide on Amazon Web Services for beginners


Amazon has bundled the entire 7 layers of the OSI model architecture in the form of cloud services, aka web services.

There are 3 basic types of cloud services:

Compute: EC2, PaaS Elastic Beanstalk, FaaS Lambda, Auto Scaling

Storage: S3, Glacier (used for archiving), Elastic Block Store, Elastic File System

Networking: VPC, CloudFront, Route 53 for DNS

Auto Scaling is particularly useful: thanks to its auto-provisioning property, it helps with increased demand as well as reduced demand.

What is geo targeting in CloudFront?
It works on the principle of caching and is handled globally, providing data to the user from the nearest server (the URL remains the same; you can modify and customize the content).
In geo targeting, CloudFront detects the country code and forwards it to the origin server; the origin server then sends the region-specific content to the cache server, where it is stored, and users get content defined specifically for their region/country.

How do you upgrade or downgrade a system with near zero downtime?
- Launch another system in parallel, possibly with bigger EC2 capacity
- Install all the software/packages needed
- Launch the instance and test locally
- If it works, swap the IPs; if using Route 53, update the records and it will send traffic to the new servers with
zero downtime
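A minimal Terraform sketch of that Route 53 swap (the zone ID, record name and IP are placeholders): re-pointing the A record at the new, already-tested server shifts traffic without downtime.

resource "aws_route53_record" "app" {
  zone_id = "Z1234567890ABC"        # placeholder hosted zone ID
  name    = "app.example.com"       # placeholder record name
  type    = "A"
  ttl     = 60
  records = ["198.51.100.20"]       # IP of the new, already-tested instance
}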

What is an Amazon S3 bucket?
An Amazon S3 bucket is a public cloud storage resource available in AWS Simple Storage Service (S3), an object storage offering.

S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

An S3 user first creates a bucket in the AWS region of choice and gives it a globally unique name. AWS recommends that customers choose regions geographically close to them to reduce latency and costs.

Once the bucket has been created, the user then selects a tier for the data, with different S3 tiers having different levels of redundancy, prices and accessibility. One bucket can store objects from different S3 storage tiers.

Users then specify access privileges for the objects stored in a bucket via IAM mechanisms, bucket policies and access control lists.

Users can interact with an S3 bucket via the AWS Management Console, the AWS Command Line Interface or application programming interfaces (APIs).

There is no limit to the number of objects a user can store in a bucket, though buckets cannot be nested inside other buckets.
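As a minimal Terraform sketch of creating such a bucket (the name is a placeholder and must be globally unique):

resource "aws_s3_bucket" "demo" {
  bucket = "my-globally-unique-demo-bucket"   # placeholder, must be globally unique
  acl    = "private"                          # access can also be managed via bucket policies/IAM

  tags = {
    Environment = "Dev"
  }
}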

What is Amazon CloudWatch?
A single place from which you can track all your infrastructure logs.

What if a provisioned service is not available in a region/country?
Not all services are available in all regions; availability depends on demand for the services and on your requirements. Always pick the nearest region to serve your customers, else you will face high latency.

What is Amazon Elastic Container Service? (a minimal cluster sketch follows the list below)
- It is highly scalable.
- It is a high-performance container management service.
- It allows you to run applications on managed clusters of EC2 instances.
- It can be used to launch or stop container-enabled applications.
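A minimal Terraform sketch of an ECS cluster and a task definition (names and container image are illustrative only):

resource "aws_ecs_cluster" "demo" {
  name = "demo-cluster"                 # placeholder cluster name
}

resource "aws_ecs_task_definition" "web" {
  family                = "demo-web"
  container_definitions = jsonencode([
    {
      name      = "nginx"
      image     = "nginx:latest"        # placeholder container image
      cpu       = 128
      memory    = 128
      essential = true
    }
  ])
}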

Some useful services when trying to achieve CI/CD:

CodeCommit: the source repository (alternatives: S3 bucket, GitHub) | used for version control
CodeDeploy: to deploy a sample/custom deployment on EC2 instances
CodePipeline: a service that builds, tests & deploys your code
  • for continuous deployment we need to create/enable versioning
  • configure | Set-AWSCredentials for the user by providing the access key and secret key
                                                                  AccessKey AKIAIKOAWUJQB75WLFLA
                                                                  SecretKey XHNKW8EixLu4fBVjL+KKj5wSjohG4ypipKlfR2/E

How to configure AWS Tools for PowerShell (if working on Windows): download from here

- Services > IAM > Users > Create a user > Security credentials > Create access key > Download the file (*.csv)
then launch AWS PowerShell or run aws configure and give:

- Access key
- Secret Key
- Region

(input the keys from the downloaded .csv file, and the region depending on your geographical location)


How to use CodeCommit: used for version control and a useful tool for developers doing CI/CD

The first thing is to get AWS credentials for your AWS environment:
- services > IAM  > Users > codecommit

Now configure your credentials for CodeCommit:

# cd C:\Program Files (x86)\AWS Tools\CodeCommit
# C:\Program Files (x86)\AWS Tools\CodeCommit> .\git-credential-AWSS4.exe -p codecommit

Now create a repository:

- services > codecommit > create a repo(MyRepo) > cloneURL via Https
# git clone 'https-clone-url'  (other developers do the same)
# git config user.email 'mailId'
# git config user.name 'name'
   (start working)
# notepad index.html
# git status
# git add index.html
# git status
# git commit -m 'initial commit'
# git push origin master (it will connect via https url and push the file to MyRepo)
# git log

How to use CodeDeploy to deploy an app: to automate deployments and add new features continuously


The first thing is to set up a CodeDeploy role for the instance:
> services > IAM > roles > newRole > EC2 > AmazonEC2RoleforAWSCodeDeploy > CDInstanceRole
Now create another role for the service:
> services > IAM > roles > newRole > codeDeploy > AWSCodeDeployRole > CDServiceRole
Now go to:
> services > EC2 > LaunchInstance >
Now create an application:
> services > codeDeploy > create App > custom Deployment > skipWalkThrough > GiveDetails > App-DemoApp Group-DemoAppInstance > Amazon EC2 Instance > Key-Name Value-Dev > DeploymentConfig-OneAtATime > role-CDServiceRole > createApplication

How to use CodePipeline: used to deploy code directly from S3/GitHub/a CodeCommit repo
> services > codePipeline > create > name-pipeline > source-gitHub > connect > Add Repo-aws-codeDeploy-Linux > branch-Master > buildProvider- noBuild > deploymentProvider-AWS CodeDeploy >  App-DemoApp Group-DemoAppInstance > roleName-AWS-CodePipeline-Service > create

How to use CloudFormation to set up a Jenkins server: using the jenkins-server template
> services > cloudFormation > CreateNewStack > upload the template > stackName-Jenkins > microInstance > dropdownList > IPrange-0.0.0.0/0 > acknowledge > complete
Now you can see a new EC2 instance being created, running as the Jenkins server and ready to use.

Importantly, how do you connect to your EC2 Linux instance from Windows?
For that you need PuTTY and PuTTYgen (since PuTTY won't recognize the keypair.pem provided by AWS),
so you need to convert keypair.pem to keypair.ppk using PuTTYgen:
> launch-puttygen > Load-*.pem > savePrivateKey
> launch-putty > hostname-aws-instance-publicName > Data-autoLogin-ec2-user > SSH > Auth > supply generated *.ppk file > open session

Now unlock Jenkins with: sudo cat /var/lib/jenkins/secrets/initialAdminPassword
-------------------------------------
Installing Docker on an AWS EC2 instance
# sudo yum update -y
# sudo amazon-linux-extras install docker
# sudo service docker start
# sudo usermod -a -G docker ec2-user (adding ec2-user to the docker group)
-------------------------------------

Br,
Punit