29 August 2018

Quick guide on Amazon Web Services for beginners

Amazon was one of the first companies to bundle the whole infrastructure stack — compute, storage, database and networking — into on-demand web services, built on top of its compute capabilities. At the time of writing this article there are more than 90 services in AWS.

There are four core foundation elements:

Compute: EC2, Elastic Beanstalk (PaaS), Lambda (FaaS), Auto Scaling
Storage: S3 (object storage), Glacier (used for archiving), Elastic Block Store, Elastic File System
Database: DBaaS (RDS, e.g. MySQL), custom databases on EC2
Network: VPC, CloudFront, Route 53 for DNS, API Gateway, Direct Connect

Auto Scaling deserves special mention: because it provisions capacity automatically, it can add instances on increased demand and remove them again when demand drops.
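As a rough sketch of what that looks like with the AWS CLI (all names, the subnet ID and the launch template are placeholders, not values from this article):

```shell
# Create an Auto Scaling group from an existing launch template
# (my-template, my-asg and subnet-0abc1234 are placeholders).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template LaunchTemplateName=my-template \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier subnet-0abc1234

# Target-tracking policy: scale in/out to keep average CPU around 50%.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'
```

With a target-tracking policy the group both grows and shrinks on its own, which is exactly the "increased demand as well as reduced demand" behaviour described above.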

here are some topic which may help you to start with AWS journey

What is the difference between EC2 and Elastic Beanstalk?
With an EC2 instance you manually launch the instance and tell the system what kind of OS, memory/CPU and other resources you want to spin up. With Beanstalk you describe your requirements and the system spins up all the suitable and eligible resources for you.
Example: if you have a .NET application, you tell the system what you are deploying and it launches all the app and database instances the application needs to work.
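The contrast can be sketched with the CLI — the first command is the manual EC2 route, the others use the EB CLI (application/environment names, the AMI ID and the exact platform string are placeholders; Beanstalk's platform names vary by region and date):

```shell
# Manual EC2: you pick the AMI, instance type, etc. yourself.
aws ec2 run-instances --image-id ami-0abc1234 --instance-type t2.micro --count 1

# Elastic Beanstalk: describe the app; AWS provisions the matching stack.
eb init my-dotnet-app --platform ".NET on Windows Server" --region us-east-1
eb create my-dotnet-env
```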

What is an EBS Volume?
An Elastic Block Store (EBS) volume is a network drive you can attach to your instances while they are running. It allows your instances to persist data, even after their termination. A volume can be mounted to only one instance at a time, and it is bound to a specific Availability Zone, i.e. you cannot attach an EBS volume in one zone to an instance in another zone; instead you use a method called a snapshot.
Think of them as a "USB stick", but attached at the network level.
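A minimal sketch of that lifecycle with the AWS CLI (volume, instance and snapshot IDs and the AZs are placeholders):

```shell
# Create an 8 GiB volume in one AZ and attach it to an instance there.
aws ec2 create-volume --availability-zone us-east-1a --size 8 --volume-type gp2
aws ec2 attach-volume --volume-id vol-0abc1234 --instance-id i-0abc1234 --device /dev/sdf

# To move the data to another AZ, snapshot the volume
# and recreate it from the snapshot in the target zone.
aws ec2 create-snapshot --volume-id vol-0abc1234 --description "migrate to us-east-1b"
aws ec2 create-volume --availability-zone us-east-1b --snapshot-id snap-0abc1234
```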

What is geo targeting in CloudFront?
It works on the principle of caching, handled globally, serving data to users from the nearest edge server (the URL remains the same while you modify and customize the content).
With geo targeting, CloudFront detects the viewer's country code and forwards it to the origin server. The origin then sends country-specific content to the cache server, where it is stored for the configured TTL, so users receive content — for example images — defined specifically for their region/country.
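From the origin's point of view this arrives as an HTTP header; you can simulate it with curl (the hostname and path are placeholders — in a real setup the distribution must be configured to forward the CloudFront-Viewer-Country header to the origin):

```shell
# Pretend to be CloudFront forwarding an Indian viewer's country code;
# the origin can branch on this header and return region-specific content.
curl -H "CloudFront-Viewer-Country: IN" https://origin.example.com/banner.jpg
```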

How do you upgrade or downgrade a system with near-zero downtime?
- Launch another system in parallel, possibly with bigger EC2 capacity
- Install all the software/packages needed
- Start the instance and test it locally
- If it works, swap the IPs: if you are using Route 53, update the records and it will send traffic to the new servers with zero downtime
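The final "swap" step can be sketched as a single Route 53 record update (hosted-zone ID, record name and IP address are placeholders):

```shell
# Repoint the A record at the new server; once the change propagates
# (bounded by the record's TTL), traffic flows to the new instance.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0ABC1234EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```

A short TTL (set before the migration) keeps the cutover window small.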

What is Amazon S3 bucket ?
An Amazon S3 bucket is a public cloud storage resource in AWS's Simple Storage Service (S3), an object storage offering.
S3 buckets are similar to file folders: they store objects, which consist of data and its descriptive metadata.
An S3 user first creates a bucket in an AWS region of choice and gives it a globally unique name. AWS recommends that customers choose regions geographically close to them to reduce latency and costs.
Once the bucket has been created, the user then selects a tier for the data, with different S3 tiers having different levels of redundancy, prices and accessibility. One bucket can store objects from different S3 storage tiers.
The user then specifies access privileges for the objects stored in a bucket via IAM mechanisms, bucket policies and access control lists.
Users can interact with an S3 bucket via the AWS Management Console, the AWS CLI or application programming interfaces (APIs).
There is no limit to the number of objects a user can store in a bucket, though buckets cannot exist inside other buckets.
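A minimal bucket workflow with the AWS CLI might look like this (the bucket name is a placeholder and must be globally unique):

```shell
aws s3 mb s3://my-unique-bucket-name --region us-east-1   # create a bucket
aws s3 cp index.html s3://my-unique-bucket-name/          # upload an object
aws s3 ls s3://my-unique-bucket-name/                     # list its objects
```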

What is Amazon CloudWatch? 
A monitoring service that lets you track metrics and logs from all your infrastructure in one place.
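Beyond viewing logs, CloudWatch can alert on metrics. A hedged sketch (the instance ID and SNS topic ARN are placeholders):

```shell
# Alarm when an instance's average CPU stays above 80% for two
# consecutive 5-minute periods, notifying an SNS topic.
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0abc1234 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```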

What if a provisioned service is not available in a region/country?
Not all services are available in all regions; availability varies by service. Depending on your requirements, always pick the region nearest to your customers, otherwise you will face high latency.

What is Amazon Elastic Container Service?
- It is highly scalable.
- It is a high-performance container management service.
- It allows you to run applications on managed clusters of EC2 instances.
- It can be used to launch or stop container-enabled applications.
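The basic launch flow can be sketched with the CLI (cluster/service/task names and the task definition file are placeholders):

```shell
# Create a cluster, register a task definition, then run it as a service.
aws ecs create-cluster --cluster-name demo-cluster
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs create-service --cluster demo-cluster --service-name web \
  --task-definition demo-task --desired-count 2
```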

Some useful services when trying to achieve CI/CD:

CodeCommit: a source repository (alternatives: an S3 bucket or GitHub) | used for version control
CodeDeploy: deploys a sample/custom deployment to EC2 instances
CodePipeline: a service that builds, tests & deploys your code
  • for continuous deployment we need to create/enable versioning
  • configure | Set-AWSCredentials for the user by providing an access key and secret key (placeholders shown below — never publish real keys)
                                                                  AccessKey AKIAIOSFODNN7EXAMPLE
                                                                  SecretKey wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
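Enabling versioning on the source bucket is a one-liner with the CLI (the bucket name is a placeholder):

```shell
# CodePipeline needs versioning on an S3 source bucket so each
# uploaded revision can trigger and be tracked by the pipeline.
aws s3api put-bucket-versioning \
  --bucket my-pipeline-source \
  --versioning-configuration Status=Enabled
```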

How to configure AWS Tools for PowerShell (if working on Windows) — download the installer from the AWS website

- Services > IAM > Users > Create a user > Security Credentials > Create Access Key > Download the file (*.csv)
Then launch AWS PowerShell, or run aws configure, and provide:

- Access key
- Secret Key
- Region

Enter the keys from the downloaded .csv file, and choose the region matching your geographical location.
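If you use the AWS CLI instead of PowerShell, the same setup looks like this (the key values shown are AWS's documented example placeholders):

```shell
aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: us-east-1
# Default output format [None]: json
```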

How to use CodeCommit: used for version control and a useful CI/CD tool for developers

The first thing is to get AWS credentials for your AWS environment:
- services > IAM  > Users > codecommit

Now configure your credentials for CodeCommit:

$ cd 'C:\Program Files (x86)\AWS Tools\CodeCommit'
$ .\git-credential-AWSS4.exe -p codecommit

create a Repository
  • services > codecommit > create a repo(MyRepo) > cloneURL via Https
$ git clone 'https-clone-url'  (other developers all do the same)
$ git config user.email 'mailId'
$ git config user.name 'name'
   (start working)
$ notepad index.html
$ git status
$ git add index.html
$ git status
$ git commit -m 'initial commit'
$ git push origin master (it will connect via https url and push the file to MyRepo)
$ git log

How to use CodeDeploy to deploy an app: automates deployments so new features can be added continuously

The first thing is to set up a CodeDeploy role for the instance,
then create another role for the service.
Go to:
  • services > EC2 > LaunchInstance >
create an application
  • services > codeDeploy > create App > custom Deployment > skipWalkThrough > GiveDetails > App-DemoApp Group-DemoAppInstance > Amazon EC2 Instance > Key-Name Value-Dev > DeploymentConfig-OneAtATime > role-CDServiceRole > createApplication
How to use CodePipeline: used to deploy code directly from S3/GitHub/a CodeCommit repo
  • services > codePipeline > create > name-pipeline > source-gitHub > connect > Add Repo-aws-codeDeploy-Linux > branch-Master > buildProvider- noBuild > deploymentProvider-AWS CodeDeploy >  App-DemoApp Group-DemoAppInstance > roleName-AWS-CodePipeline-Service > create
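The console walkthrough above can also be driven from the CLI; a hedged sketch of triggering a CodeDeploy deployment (the bucket and bundle key are placeholders; DemoApp/DemoAppInstance match the names used above):

```shell
# Deploy a zipped revision from S3 to the DemoAppInstance deployment group.
aws deploy create-deployment \
  --application-name DemoApp \
  --deployment-group-name DemoAppInstance \
  --s3-location bucket=my-deploy-bucket,key=app.zip,bundleType=zip
```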

How to use CloudFormation to set up a Jenkins server: using a jenkins-server template
  • services > cloudFormation > CreateNewStack > upload the template > stackName-Jenkins > microInstance > dropdownList > IPrange- > acknowledge > complete
You will now see a new EC2 instance created and running as the Jenkins server, ready to use.
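The same stack can be created from the CLI (the template file and key-pair parameter are placeholders — the parameters your template actually requires may differ):

```shell
# Create the Jenkins stack from a local template file.
aws cloudformation create-stack \
  --stack-name Jenkins \
  --template-body file://jenkins-server.template \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair

# Poll until the status reaches CREATE_COMPLETE.
aws cloudformation describe-stacks --stack-name Jenkins
```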

Importantly, how do you connect to your EC2 Linux instance from Windows?
For that you need PuTTY and PuTTYgen (since PuTTY won't recognize the keypair.pem provided by AWS),
so you need to convert keypair.pem to keypair.ppk using PuTTYgen:
launch PuTTYgen > Load *.pem > Save private key
launch PuTTY > hostname: the instance's public DNS name > Data > auto-login username: ec2-user > SSH > Auth > supply the generated *.ppk file > open session
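If you have an OpenSSH client (macOS, Linux, or newer Windows builds), you can skip the PuTTY conversion entirely and use the .pem file directly (hostname is a placeholder):

```shell
# The key file must not be world-readable or ssh will refuse it.
chmod 400 keypair.pem
ssh -i keypair.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com
```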

now unlock Jenkins by: sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Installing Docker on an AWS EC2 instance:
$ sudo yum update -y
$ sudo amazon-linux-extras install docker
$ sudo service docker start
$ sudo usermod -a -G docker ec2-user (adds ec2-user to the docker group; log out and back in for it to take effect)