
29 August 2018

Quick guide on Amazon Web Services for beginners


Amazon has bundled, in the form of cloud services (aka web services), offerings that span the entire 7 layers of the OSI model architecture.

There are 3 basic types of cloud services:

Compute: EC2, Elastic Beanstalk (PaaS), Lambda (FaaS), Auto Scaling

Storage: S3, Glacier (used for archiving), Elastic Block Store, Elastic File System

Networking: VPC, CloudFront, Route 53 for DNS

Auto Scaling in particular is very useful: thanks to its auto-provisioning property, it can add capacity when demand increases and remove it again when demand drops.
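A minimal sketch of this with the AWS CLI (the group name, launch configuration name, availability zone and sizes are just example values):

# aws autoscaling create-auto-scaling-group --auto-scaling-group-name demo-asg --launch-configuration-name demo-lc --min-size 1 --max-size 4 --desired-capacity 2 --availability-zones us-east-1a   (capacity can now grow to 4 instances on demand and shrink back to 1)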

What is geo targeting in CloudFront?
It works on the principle of caching and is handled globally, so content is served to the user from the nearest edge server (the URL remains the same, while you can modify and customize the content).
With geo targeting, CloudFront detects the viewer's country code and forwards it to the origin server; the origin then returns the country-specific content, which is cached at the edge, so users get content (for example images) tailored to their region/country.
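Under the hood the origin simply reads the CloudFront-Viewer-Country header that CloudFront adds to the forwarded request (the header has to be whitelisted in the cache behaviour). A quick way to simulate what the origin sees, against a hypothetical origin URL:

# curl -H 'CloudFront-Viewer-Country: IN' https://origin.example.com/banner.jpg   (the origin can respond with the India-specific banner for this request)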

How do you upgrade or downgrade a system with near-zero downtime?
- Launch another system in parallel, possibly with bigger EC2 capacity
- Install all the software/packages needed
- Launch the instance and test it locally
- If it works, swap the IPs: if using Route 53, update the record set (see the sketch below) and it will send traffic to the new servers with zero downtime
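For the Route 53 step, a rough example with the AWS CLI (the hosted zone ID, record name and IP are placeholders):

# aws route53 change-resource-record-sets --hosted-zone-id Z0EXAMPLE12345 --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"www.example.com","Type":"A","TTL":60,"ResourceRecords":[{"Value":"203.0.113.25"}]}}]}'   (UPSERT points www.example.com at the new server's IP)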

What is an Amazon S3 bucket?
An Amazon S3 bucket is a public cloud storage resource in AWS's Simple Storage Service (S3), an object storage offering.

S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

An S3 user first creates a bucket in the AWS region of choice and gives it a globally unique name. AWS recommends that customers choose regions geographically close to them to reduce latency and costs.

Once the bucket has been created, the user then selects a tier for the data, with different S3 tiers having different levels of redundancy, prices and accessibility. One bucket can store objects from different S3 storage tiers.

The user then specifies access privileges for the objects stored in the bucket, via IAM mechanisms, bucket policies and access control lists.

Users can interact with an S3 bucket via the AWS Management Console, the AWS Command Line Interface or application programming interfaces (APIs).

There is no limit to the number of objects a user can store in a bucket, though buckets cannot exist inside other buckets.
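A small sketch of the same flow from the AWS CLI (bucket name and file are only examples):

# aws s3 mb s3://my-unique-bucket-demo --region us-east-1   (create a bucket with a globally unique name in the chosen region)
# aws s3 cp index.html s3://my-unique-bucket-demo/ --storage-class STANDARD_IA   (upload an object into a specific storage tier)
# aws s3 ls s3://my-unique-bucket-demo   (list the objects in the bucket)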

What is Amazon CloudWatch?
A monitoring service where you can track metrics and infrastructure logs in one place.
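For example, pulling EC2 CPU metrics from the CLI (the instance ID and time window are placeholders):

# aws cloudwatch get-metric-statistics --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=InstanceId,Value=i-0abc123example --statistics Average --period 300 --start-time 2018-08-29T00:00:00Z --end-time 2018-08-29T01:00:00Z   (average CPU per 5-minute period for one instance)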

What if a provisioned service is not available in a region/country?
Not all services are available in all regions; it depends on the service and on your requirements. Always pick the region nearest to your customers, otherwise you will face high latency.

What is Amazon Elastic Container Service?
- It is highly scalable.
- It is a high-performance container management service.
- It allows you to run applications on managed clusters of EC2 instances.
- It can be used to launch or stop container-enabled applications.
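A minimal sketch with the AWS CLI (the cluster name is just an example):

# aws ecs create-cluster --cluster-name demo-cluster   (create an empty cluster to register EC2 container instances into)
# aws ecs list-container-instances --cluster demo-cluster
# aws ecs list-services --cluster demo-cluster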

Some useful services when trying to achieve CI/CD:

CodeCommit: the source repository (alternatives: an S3 bucket or GitHub) | used for version control
CodeDeploy: to deploy a sample/custom deployment onto EC2 instances
CodePipeline: a service that builds, tests and deploys your code
  • for continuous deployment we need to create/enable versioning on the source S3 bucket
  • configure | Set-AWSCredentials for the user by providing an access key and secret key
                                                                  AccessKey <your-access-key>
                                                                  SecretKey <your-secret-key>
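Enabling versioning on the source bucket can also be done from the CLI (bucket name is a placeholder):

# aws s3api put-bucket-versioning --bucket my-pipeline-source-bucket --versioning-configuration Status=Enabled
# aws s3api get-bucket-versioning --bucket my-pipeline-source-bucket   (confirm it now reports Status: Enabled)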

How to configure AWS PowerShell (if working on Windows): first download and install the AWS Tools for PowerShell.

- services > IAM > Users > Create a user > Security credentials > Create access key > Download the file (*.csv)
Then launch AWS PowerShell (or run aws configure) and provide:

- Access key
- Secret Key
- Region

(input the keys from the downloaded .csv file, and a region depending on your geographical location)
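If you go the aws configure route, the prompts look roughly like this (the region shown is only an example):

# aws configure
AWS Access Key ID [None]: <access key from the .csv>
AWS Secret Access Key [None]: <secret key from the .csv>
Default region name [None]: us-east-1
Default output format [None]: json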


How to use CodeCommit: used for version control and a useful tool for developers doing CI/CD

The first thing is to get AWS credentials for your AWS environment
- services > IAM  > Users > codecommit

now configure your credentials for codecommit

# cd "C:\Program Files (x86)\AWS Tools\CodeCommit"
# .\git-credential-AWSS4.exe -p codecommit
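If you are not on Windows (or prefer the AWS CLI), a possible alternative is the CLI credential helper, assuming the AWS CLI is already configured:

# git config --global credential.helper '!aws codecommit credential-helper $@'
# git config --global credential.UseHttpPath true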

now create a Repository

- services > codecommit > create a repo(MyRepo) > cloneURL via Https
# git clone 'https-clone-url'  (other developers do the same)
# git config user.email 'mailId'
# git config user.name 'name'
   (start working)
# notepad index.html
# git status
# git add index.html
# git status
# git commit -m 'initial commit'
# git push origin master (it will connect via https url and push the file to MyRepo)
# git log
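The repository itself can also be created from the CLI instead of the console (the name and description are just examples):

# aws codecommit create-repository --repository-name MyRepo --repository-description 'demo repo'   (returns the cloneUrlHttp to use with git clone)
# aws codecommit list-repositories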

How to use CodeDeploy to deploy an app: to automate deployments and add new features continuously


First, set up a CodeDeploy role for the instance
> services > IAM > roles > newRole > EC2 > AmazonEC2RoleforAWSCodeDeploy > CDInstanceRole
now create another role for service
> services > IAM > roles > newRole > codeDeploy > AWSCodeDeployRole > CDServiceRole
now go to
> services > EC2 > LaunchInstance >
now create an application
> services > codeDeploy > create App > custom Deployment > skipWalkThrough > GiveDetails > App-DemoApp Group-DemoAppInstance > Amazon EC2 Instance > Key-Name Value-Dev > DeploymentConfig-OneAtATime > role-CDServiceRole > createApplication
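Once the application and deployment group exist, a deployment can also be pushed from the CLI (the bucket and key below are placeholders for your revision bundle):

# aws deploy create-deployment --application-name DemoApp --deployment-group-name DemoAppInstance --s3-location bucket=my-deploy-bucket,key=DemoApp.zip,bundleType=zip
# aws deploy list-deployments --application-name DemoApp --deployment-group-name DemoAppInstance   (check the deployment status)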

How to use CodePipeline: used to deploy code directly from an S3/GitHub/CodeCommit repo
> services > codePipeline > create > name-pipeline > source-gitHub > connect > Add Repo-aws-codeDeploy-Linux > branch-Master > buildProvider- noBuild > deploymentProvider-AWS CodeDeploy >  App-DemoApp Group-DemoAppInstance > roleName-AWS-CodePipeline-Service > create
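To trigger or inspect the pipeline from the CLI (using the pipeline name created above):

# aws codepipeline start-pipeline-execution --name pipeline
# aws codepipeline get-pipeline-state --name pipeline   (shows the status of the Source and Deploy stages)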

How to use CloudFormation to setup Jenkins Server: using jenkins-server template 
> services > cloudFormation > CreateNewStack > upload the template > stackName-Jenkins > microInstance > dropdownList > IPrange-0.0.0.0/0 > acknowledge > complete
Now you should see a new EC2 instance being created, running as the Jenkins server and ready to use
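The same stack can be created from the CLI, assuming you have the Jenkins template saved locally (the file name is a placeholder, and depending on the template you may also need parameters such as the key pair name and IP range):

# aws cloudformation create-stack --stack-name Jenkins --template-body file://jenkins-server.template
# aws cloudformation describe-stacks --stack-name Jenkins   (wait for CREATE_COMPLETE)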

Importantly, how do you connect to your EC2 Linux instance when working from Windows?
For that you need PuTTY and PuTTYgen (since PuTTY won't recognize the keypair.pem provided by AWS),
so you need to convert keypair.pem to keypair.ppk using PuTTYgen
> launch-puttygen > Load-*.pem > savePrivateKey
> launch-putty > hostname-aws-instance-publicName > Data-autoLogin-ec2-user > SSH > Auth > supply the generated *.ppk file > open session
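If you have an OpenSSH client available instead of PuTTY, the same connection is simply (the public DNS name is a placeholder):

# chmod 400 keypair.pem
# ssh -i keypair.pem ec2-user@<instance-public-dns>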

now unlock Jenkins by: sudo cat /var/lib/jenkins/secrets/initialAdminPassword
-------------------------------------
Installing docker on AWS-EC2-Instance
# sudo yum update -y
# sudo amazon-linux-extras install docker
# sudo service docker start
# sudo usermod -a -G docker ec2-user (adding ec2-user to docker group)
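A quick check that the install worked (log out and back in first so the group change takes effect):

# docker info
# docker run hello-world   (pulls and runs a test container)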
-------------------------------------

Br,
Punit