
11 February 2018

JENKINS: a Continuous Integration Tool

Getting started with Jenkins is a three-step process:

- Install Jenkins
- Download the required plugins
- Configure the plugins & create project

Installing Jenkins

$ wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
$ sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
$ sudo apt-get update
$ sudo apt-get install jenkins
The above steps will start Jenkins automatically.

Starting Jenkins when configured as a service

$ sudo systemctl start jenkins
$ sudo systemctl status jenkins

Upgrade Jenkins

$ sudo apt-get update
$ sudo apt-get install jenkins

If your /etc/init.d/jenkins script fails to start Jenkins (usually because port 8080 is already in use),
edit /etc/default/jenkins and change HTTP_PORT=8080 to HTTP_PORT=8081
or
$ cd /var/share/jenkins/
$ java -jar jenkins.war (this sets up a new Jenkins instance from the beginning)
I prefer the command below, which restarts the existing Jenkins without any delay:
$ service jenkins restart
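
If you prefer not to edit the file by hand, the same port change can be scripted; a minimal sketch, assuming the stock /etc/default/jenkins with HTTP_PORT=8080:

$ sudo sed -i 's/^HTTP_PORT=8080/HTTP_PORT=8081/' /etc/default/jenkins
$ sudo service jenkins restart
$ curl -I http://localhost:8081/    # expect an HTTP response from Jenkins on the new port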

Running a Docker container of Jenkins

$ docker pull punitporwal07/jenkins
$ docker run -d -p 8081:8080 -v jenkins-data:/software/jenkins punitporwal07/jenkins:tag
(to understand this command in detail, see the Docker post below)

Useful Plugins

You can install plugins from the Manage Plugins tab, or push them from the back end into the plugins directory (see the sketch after this list).
- Build pipeline: to chain multiple jobs
- Delivery pipeline: visualizes delivery pipelines (upstream/downstream)
- Weblogic deployer: used to deploy a jar/war/ear to any WebLogic target
- Deploy to container: to deploy a war/ear to a Tomcat/GlassFish container
- Role strategy: allows you to assign roles to different users of Jenkins
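
A minimal sketch of the back-end approach mentioned above; the plugin file name and the Jenkins home location (/var/lib/jenkins) are assumptions, adjust them to your setup:

$ wget https://updates.jenkins.io/latest/build-pipeline-plugin.hpi
$ sudo cp build-pipeline-plugin.hpi /var/lib/jenkins/plugins/
$ sudo chown jenkins:jenkins /var/lib/jenkins/plugins/build-pipeline-plugin.hpi
$ sudo service jenkins restart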

Automate deployment on Tomcat using Jenkins pipeline

(using benefits.war as an example on Tomcat 8.x for Linux)
- install the Deploy to container plugin and restart Jenkins for the changes to take effect
- create a new project/workspace & select the post-build action: Deploy war/ear to a container
- add properties as below:
   - war/ear files: **/*.war
   - context path: benefits.war (you need to push this war file into your workspace)
   - select the container from the drop-down list: Tomcat 8.x
   - add credentials: tomcat/tomcat (provided you have added this user in conf/tomcat-users.xml with all the required roles; see the check after this list)
   - Tomcat URL: http://localhost:8080/
- apply/save your project and build it to validate the result.
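
A quick way to confirm that the tomcat/tomcat user has the roles the plugin needs (for Tomcat 7.x/8.x that is the manager-script role); a sketch assuming the default port and the example credentials above:

$ grep manager-script $CATALINA_HOME/conf/tomcat-users.xml        # the role and the user should both appear
$ curl -u tomcat:tomcat http://localhost:8080/manager/text/list   # should list deployed apps, not a 401/403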

Automate deployment on WebLogic using Jenkins pipeline

 (using benefits.war as an example on WebLogic 10.3.6)
- install the Weblogic deployer plugin and restart Jenkins for the changes to take effect
- configure the plugin
- create a new project/workspace
- add the post-build action: Deploy the artifact to any weblogic environment (if no configuration has been set, the plugin will display an error message, otherwise it will open a new window)
- add properties as below:
   - Task Name: give any task name
   - Environment: from the drop-down list select your AdminServer (provided you have created configuration.xml and added it to the Weblogic deployer plugin)
   - Name: the name used by the WebLogic server to display the deployed component
   - Base directory of deployment: give the path to your deployment.war, or push it to your workspace and leave it blank
   - Built resource to deploy: give your deployment.war name
   - Targets: give the target name
- apply/save your project and build it to validate the result.

k/r,
P

23 November 2017

Docker: Containerization Tool

Docker allows you to encapsulate your application, operating system and hardware configuration into a single unit to run it anywhere.

It's all about applications, and every application requires tons of infrastructure, which is a massive waste of resources since each application utilizes only a small percentage of it: dedicating a physical machine, RAM and CPU to one application results in a heavy loss of cost. Hence hypervisors/virtualization came into the picture, where we use shared resources on top of a single physical machine and create multiple VMs to utilize more of it, but that is still not perfect.
Docker is the solution to the above problem: it can containerize your requirement & works on the principle of layered images.

Working with Docker is as simple as three steps:
  • Install Docker-engine
  • Pull the image from HUB/docker-registry
  • Run image as a container/service
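A minimal sketch of these three steps on Ubuntu, using the public hello-world image as an assumed example:

$ sudo apt-get install -y docker.io    # 1. install the docker engine
$ docker pull hello-world              # 2. pull an image from the Docker Hub
$ docker run hello-world               # 3. run the image as a container
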
How containers evolved over virtualization
- In the virtualization era you need to maintain a guest OS on top of the host OS in the form of a VM, which boots up in a minute or so,
  whereas containers bypass the guest OS and share the host kernel, so they boot up in a fraction of a second.
- Containers are not replacing virtualization; they are just the next step in the evolution (more advanced).

What is docker?
Docker is a containerization platform that bundles your application and all its dependencies together in the form of an image, which you then run as a service called a container, ensuring that your application will work in any environment, be it Dev/Test/Prod.

Points to remember
  • docker images are read-only templates & are used to run containers
  • docker images are the build component of docker
  • there is always a base image on which you layer up your requirement
  • containers are the actual running instances of images
  • we always create images and run containers using those images
  • we can pull images from a docker hub/registry, which can be public or private
  • the docker daemon runs on the host machine
  • docker0 is not a normal interface | it's a bridge | a virtual switch | that links multiple containers
  • docker images are registered in a docker registry & stored in the docker hub
  • docker hub is docker's own cloud repository (for sharing & caring of images)
Essence of docker: if you are new to any technology and want to work on it, get its image from the docker hub, configure it, work on it, destroy it; then you can move the same image to another environment and run it as-is out there.
                          
                      
Key attributes of the kernel used by containers
  • namespaces (PID, net, mount, user) provide isolation
  • cgroups (control groups) limit and account for resources
  • capabilities (assign privileges to container users)
  • but each container shares the common kernel
How communication happens between the docker client & the docker daemon
  • REST API
  • UNIX socket (/var/run/docker.sock)
  • TCP
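A quick way to see this in action, assuming curl is available and your user can read the daemon's UNIX socket:

$ curl --unix-socket /var/run/docker.sock http://localhost/version   # talk to the daemon over its socket
$ docker -H tcp://127.0.0.1:2375 info                                # the same over TCP, if the daemon listens on it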
Dockerfile supports the following instructions

FROM       image:tag AS name
ADD        ["src",... "dest"]
COPY       /src/ dest/
ENV        ORACLE_HOME=/software/Oracle/
EXPOSE     port [port/protocol]
LABEL      multi.label1="value1" multi.label2="value2" other="value3"
STOPSIGNAL signal
USER       myuser
VOLUME     /myvolume
WORKDIR    /locationof/directory/
RUN        write your shell command
CMD        ["executable","param1","param2"]
ENTRYPOINT ["executable","param1","param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)

Some arguments you can use while running any docker image
$ docker run -it --privileged image:tag
--privileged gives all capabilities to the container and lifts all the limitations enforced by the OS/device; you can even run docker inside docker with it.

Installing docker-engine onto any Ubuntu system

$ sudo apt-get update -y && sudo apt-get install -y docker.io

this will install the docker engine as a linux service. Check the engine status by running service docker status; if it is running you are good to play with docker now, else start the docker engine by running service docker start

Check the docker details installed on your system by running any of these commands

$ docker -v | docker version | docker info

Docker needs root to work, for the creation of namespaces/cgroups/etc.,


so you need to add your local user to the docker group (verify the docker group exists in /etc/group) and add your user as:

$ sudo gpasswd -a red docker

then restart your session; alternatively, add your user to the docker group manually:

$ vi /etc/group 

append your user to the docker group and start using docker with your user
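
To confirm the change took effect (red is just the example user from the command above):

$ groups red     # the docker group should now be listed
$ docker ps      # should work without sudo once you have logged in again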

Basic commands 

Function                                   Command
pull a docker image                        docker pull reponame/imagename:tag
run an image                               docker run <parameters> imagename:tag
list docker images                         docker images
list running containers                    docker ps
list all containers, even stopped ones     docker ps -a
build an image                             docker build -t imagename:tag .
remove n containers in one command         docker rm $(docker ps -a -q)
remove n images in one command             docker rmi $(docker images -a -q)
reset the docker system                    docker system prune
create a volume                            docker volume create
run using a mount point                    docker run -it -p 8001-8006:7001-7006 --mount type=bind,source=/software/,target=/software/docker/data/ registry.docker/weblogic12213:191004
run using a named volume                   docker run -it -p 8001-8006:7001-7006 -v data:/software/ registry.docker/weblogic1036:191004
create a network                           docker network create --driver bridge --subnet=192.168.0.0/20 --gateway=192.168.0.2 mynetwork
run on a network                           docker run -it -p 8001-8006:7001-7006 --network=mynetwork registry.docker/weblogic1036:191004
for more on networking                     see: networking in docker 

Setting up Jenkins Via Docker on a Linux machine

Open a terminal window and run (provided Docker is already installed):
$ docker pull punitporwal07/jenkins
$ docker run -d -p 9090:8080 -v jenkins-data:/var/jenkins_home punitporwal07/jenkins

docker run : the base command to run any docker container
-d : run the container in detached mode (in the background) and print the container ID
-p : port mapping from the container to your local setup, as -p host-port:container-port
-v : maps the jenkins-data volume to the /var/jenkins_home directory inside the container, so Jenkins data persists on your file system
punitporwal07/jenkins : docker will pull this image from the Docker Hub

it will take 2-3 minutes to come up and then print:

INFO: Jenkins is fully up and running

to access the Jenkins console ( http://localhost:9090 ) for the first time you need to provide the admin password, to make sure it is set up by an admin only; the installation process will print this admin password, something like:

e72fb538166943269e96d5071895f31c

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

since we are running Jenkins inside docker as a detached container, you can use:
$ docker logs <container-id> to collect the jenkins logs
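
For example, to fetch the initial admin password and follow the logs from the host (the container ID is whatever docker ps reports for your Jenkins container):

$ docker ps                                     # note the ID of the jenkins container
$ docker exec <container-id> cat /var/jenkins_home/secrets/initialAdminPassword
$ docker logs -f <container-id>                 # follow the jenkins logs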

if you select the option to install the recommended plugins, which are the most useful ones, Jenkins will install them by default.



Best practice to write a Dockerfile
The best practice is to start a container first and run, one by one, all the instructions you are planning to put in the Dockerfile. Once they succeed, put them in your Dockerfile; this avoids building n images from your Dockerfile again and again and saves image layers as well.
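
A minimal sketch of that workflow, assuming an Ubuntu base image and curl as the package you want to bake in:

$ docker run -it ubuntu:16.04 /bin/bash
root@container:/# apt-get update && apt-get install -y curl   # try each instruction by hand
root@container:/# exit
$ # once the commands succeed, copy them into RUN instructions in your Dockerfile and build once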

Writing a docker File: ( FROM COPY RUN CMD)

a Container runs on level of images:
            base image
            layer1 image
            layer2 image

Dockerfiles are simple text files with a command on each line.
To define a base image we use the instruction FROM 

Creating a Dockerfile
  • The first line of the Dockerfile should be FROM nginx:1.11-alpine (it is better to use an exact version rather than latest, as latest can deviate from your desired version)
  • COPY allows you to copy files from the directory containing the Dockerfile into the container's image. This is extremely useful for source code and assets that you want deployed inside your container.
  • RUN allows you to execute any command as you would at a command prompt, for example installing application packages or running a build command. The results of a RUN are persisted to the image, so it's important not to leave any unnecessary or temporary files on disk, as these will be included in the image; note that each RUN creates a new image layer.
  • CMD is used to execute a single command as soon as the container launches (see the sketch after this list).
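
A minimal sketch tying these four instructions together, written as a shell heredoc so it can be pasted straight into a terminal; the index.html file, the image tag and the port mapping are assumptions for the nginx example above:

$ cat > Dockerfile <<'EOF'
FROM nginx:1.11-alpine
COPY index.html /usr/share/nginx/html/index.html
RUN  chmod 644 /usr/share/nginx/html/index.html
CMD  ["nginx", "-g", "daemon off;"]
EOF
$ docker build -t mynginx:0.1 .
$ docker run -d -p 8080:80 mynginx:0.1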

Life of a docker Image
write a Dockerfile > build the image > tag the image > push it to registry > pull it back to any system > run the image 

vi Dockerfile: 

FROM baseLayer:version
MAINTAINER xxx@xx.com
RUN install
CMD special commands/instructions

$ docker build -t imagename:tag .
$ docker tag 4a34imageidgfg43 punixxorwal07/image:tag
$ docker push punixxorwal07/image:tag
$ docker pull punixxorwal07/image:tag
$ docker run -it -p yourPort:imagePort punixxorwal07/image:tag

How to Upload/Push your image to registry

after building your image (docker build -t imageName:tag .) do the following:

step1- login to your docker registry
$ docker login --username=punitporwal --email=punixxorwal@xxxx.com

list your images
$ docker images

step2- tag your image for registry
$ docker tag b9cc1bcac0fd reponame/punitporwal07/helloworld:0.1

step3- push your image to registry
$ docker push reponame/punitporwal07/helloworld:0.1

your image is now available and open to the world; by default your images are public.

repeat the same steps if you wish to make any changes to your docker image: make the changes, tag the new image, and push it to your docker hub


Volumes in Docker

First of all, create a volume for your docker container using the command

$ docker volume create myVolume
$ docker volume ls 
DRIVER              VOLUME NAME
local               2f14a4803f8081a1af30c0d531c41684d756a9bcbfee3334ba4c33247fc90265
local               21d7149ec1b8fcdc2c6725f614ec3d2a5da5286139a6acc0896012b404188876
local               myVolume

thereafter, use the volume feature in the following ways:
we can define volumes in one container and the same can be shared across multiple containers

to define in container 1
$ docker run -it -v /volume1 --name voltainer centos /bin/bash

to use that volume in another container
$ docker run -it --volumes-from=voltainer centos /bin/bash

we can mount volumes into a container from the docker engine host
$ docker run -v /data:/data
$ docker run --volume mydata:/mnt/mqm

     /volumeofYourHost/:/volumeofContainer/

to define a volume in a Dockerfile
VOLUME /data (but we cannot bind a host directory to the container this way; only the docker run command can do that)
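
To see where a volume actually lives on the host and what a container has mounted (myVolume and voltainer are the examples created above):

$ docker volume inspect myVolume                 # shows the Mountpoint on the docker host
$ docker inspect -f '{{ .Mounts }}' voltainer    # lists the mounts of a container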


PORT MAPPING

when you publish a port you are mapping a port exposed in your image to a port on your docker host for the newly launched container, use:
$ docker run -d -p 5001:80 --name=mycontainername myimagename

when you want to change the protocol from the default (tcp) to udp, use:
$ docker run -d -p 5001:80/udp --name=mycontainername myimagename

let's say you want to bind your container port to a specific IP address of your docker host, use:
$ docker run -d -p 192.168.0.100:5002:80 --name=mycontainername myimagename

when you want to map all the ports exposed in your Dockerfile to high random available ports, use:
$ docker run -d -P --name=mycontainername3 myimagename

to expose a port range, use:
$ docker run -it -p 61000-61006:61000-61006 myimagename:myimagetag
                  also you can use EXPOSE 61000-61006 in your Dockerfile

to check port mappings, use:
$ docker port mycontainername


DOCKER DAEMON LOGGING

first of all stop the docker service
$ service docker stop
$ docker -d -l debug &   (on newer releases the daemon binary is dockerd, e.g. dockerd --log-level=debug &)
-d here runs the daemon
-l sets the log level
& gives us our terminal back
or
$ vi /etc/default/docker
add the log level
DOCKER_OPTS="--log-level=fatal"
then restart the docker daemon
$ service docker start


Br
Punit

16 February 2018

Configuring weblogic deployer plugin in Jenkins

Configure the WebLogic deployer plugin to deploy a war/ear on a WebLogic managed server
- install the plugin (from Manage Jenkins -> Manage Plugins -> Available -> download & install)
- restart Jenkins for the changes to take effect
- go to Manage Jenkins
- go to Configure System
- scroll down to the WebLogic deployment plugin
- give details as below:
        - additional path: /software/bea/jenkins/wlfullclient.jar (path to your wlfullclient.jar file; weblogic.jar is deprecated now and Oracle recommends using wlfullclient.jar from 10.3 onwards. This jar can be created by running java -jar wljarbuilder.jar from $WL_HOME/server/lib; see the sketch below and the Oracle doc for more.)
        - configuration file: /software/bea/jenkins/configuration.xml (path to your configuration.xml file)
        - apply/save
- sample configuration.xml file is as below
modify the highlighted tags as per your local configuration
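
A minimal sketch of building wlfullclient.jar as referenced above; the WL_HOME value is only an assumed example, adjust it to your WebLogic 10.3.x installation:

$ export WL_HOME=/software/bea/wlserver_10.3
$ cd $WL_HOME/server/lib
$ java -jar wljarbuilder.jar                     # generates wlfullclient.jar in this directory
$ cp wlfullclient.jar /software/bea/jenkins/     # the path given to the plugin's additional path field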
--
Punit

29 August 2018

Quick guide on Amazon Web Services for beginners

Amazon was the first company to come up with the idea of bundling all seven layers of the OSI model in the form of services, aka web services, built on compute capabilities. At the time of writing this article there are more than 90 services in AWS.

There are four core foundation elements:

Compute: EC2, PaaS (Elastic Beanstalk), FaaS (Lambda), Auto Scaling
Storage: S3, Glacier (used for archiving), elastic object/block storage, Elastic File System
Database: DBaaS, custom DB, MySQL
Network: VPC, CloudFront, Route53 for DNS, API Gateway, Direct Connect

Auto Scaling in particular is great: thanks to its auto-provisioning property it can help with increased demand as well as with reduced demand.

Here are some topics which may help you start your AWS journey.

What is the difference b/w EC2 and Elastic beanstalk ?
with an EC2 instance you manually launch the instance and tell the system what kind of OS, memory/CPU and other resources you want to spin up, whereas with Beanstalk you tell the system about your requirement and the system spins up all the suitable and eligible resources for you.
ex: if you have a .NET application, you tell the system and it will launch all the app and DB instances required for a .NET application to work.

What is an EBS Volume?
An Elastic Block Store volume is a network drive you can attach to your instances while they are running. It allows your instances to persist data, even after their termination. EBS volumes can only be mounted to one instance at a time & they are bound to a specific availability zone, i.e. you cannot attach an EBS volume present in one zone to an instance in another zone; instead you use a method called a snapshot.
Think of them as a "USB stick", but attached at the network level.

What is geo targeting in CloudFront?
It works on the principle of caching and is handled globally, serving data to users from the nearest edge server (the URL remains the same; you can modify and customize the content).
With geo targeting, CloudFront detects the country code and forwards it to the origin server; the origin server then sends the country-specific content to the cache server, where it is stored, and users get content (for example images) tailored to their region/country.

How do you upgrade or downgrade a system with near-zero downtime?
- Launch another system in parallel, perhaps with a bigger EC2 capacity 
- Install all the software/packages needed 
- Launch the instance and test locally
- If it works, swap the IPs; if using Route 53, update the IPs and it will send traffic to the new servers with 
zero downtime

What is Amazon S3 bucket ?
An Amazon S3 bucket is a public cloud storage resource backed by AWS formally known as Simple Storage Service (S3), an object storage offering.
S3 buckets are similar to file folders, store objects, which consist of data and its descriptive metadata.
An S3 user first creates a bucket in an AWS region of choice and gives it a globally unique name. AWS recommends that customers choose regions geographically close to them to reduce latency and costs.
Once the bucket has been created, the user then selects a tier for the data, with different S3 tiers having different levels of redundancy, prices and accessibility. One bucket can store objects from different S3 storage tiers.
Users then specify access privileges for the objects stored in a bucket via IAM mechanisms, bucket policies and access control lists.
User can interact with an S3 bucket via the AWS Management Console, AWS CLI or application programming interfaces (APIs).
There is no limit to the number of objects a user can store in a bucket, though buckets cannot exist inside other buckets.

What is Amazon CloudWatch? 
A service where you can track all your infrastructure metrics and logs in one place.

What if a provisioned service is not available in a region/country?
Not all services are available in all regions; it depends on the demand for the services and on your requirements. Always pick the region nearest to your customers, else you will face high latency.

What is Amazon Elastic Container Service?
- It is highly scalable.
- It is a high-performance container management service.
- It allows you to run applications on managed clusters of EC2 instances.
- It can be used to launch or stop container-enabled applications.

some useful services when trying to achieve CI/CD:

CodeCommit: a source repository (like an S3 bucket or GitHub) | used for version control
CodeDeploy: to deploy a sample/custom deployment onto EC2 instances
CodePipeline: a service that builds, tests & deploys your code
  • for continuous deployment we need to create/enable versioning
  • configure | Set-AWSCredentials for the user by providing an access key and secret key, for example:
                                                                  AccessKey AKIAIKOAWUJQB75WLFLA
                                                                  SecretKey XHNKW8EixLu4fBVjL+KKj5wSjohG4ypipKlfR2/E

How to configure AWS Tools for PowerShell (if working on Windows); download it from the AWS website

- Services > IAM > Users > create a user > Security Credentials > create Access Key > download the file (*.csv)
then launch AWS PowerShell or run AWS Configure and give:

- Access key
- Secret Key
- Region

input the keys from the downloaded .csv file, and a region depending on your geographical location (see the sketch below)
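
A minimal sketch of the CLI route; the region shown is just an example, and the keys come from the downloaded .csv file:

$ aws configure
AWS Access Key ID [None]: <access key from the .csv>
AWS Secret Access Key [None]: <secret key from the .csv>
Default region name [None]: eu-west-2
Default output format [None]: json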


How to use CodeCommit: used for version control and a useful tool for developers for CI/CD

The first thing is to get AWS credentials for your AWS environment
- services > IAM  > Users > codecommit

now configure your credentials for codecommit

$ cd C:\Program Files (x86)\AWS Tools\CodeCommit
$ .\git-credential-AWSS4.exe -p codecommit

create a Repository
  • services > codecommit > create a repo(MyRepo) > cloneURL via Https
$ git clone 'https-clone-url'  (other developers all do the same)
$ git config user.email 'mailId'
$ git config user.name 'name'
   (start working)
$ notepad index.html
$ git status
$ git add index.html
$ git status
$ git commit -m 'initial commit'
$ git push origin master (it will connect via https url and push the file to MyRepo)
$ git log

How to use CodeDeploy to deploy an app: to automate deployments and add new features continuously

The first thing is to set up a CodeDeploy role for the instance,
then create another role for the service,
then go to
  • services > EC2 > LaunchInstance >
create an application
  • services > codeDeploy > create App > custom Deployment > skipWalkThrough > GiveDetails > App-DemoApp Group-DemoAppInstance > Amazon EC2 Instance > Key-Name Value-Dev > DeploymentConfig-OneAtATime > role-CDServiceRole > createApplication
How to use CodePipeline: used to deploy code direct from S3/GitHub/codeCommitRepo
  • services > codePipeline > create > name-pipeline > source-gitHub > connect > Add Repo-aws-codeDeploy-Linux > branch-Master > buildProvider- noBuild > deploymentProvider-AWS CodeDeploy >  App-DemoApp Group-DemoAppInstance > roleName-AWS-CodePipeline-Service > create

How to use CloudFormation to setup Jenkins Server: using jenkins-server template 
  • services > cloudFormation > CreateNewStack > upload the template > stackName-Jenkins > microInstance > dropdownList > IPrange-0.0.0.0/0 > acknowledge > complete
now you can see a new EC2 instance being created, running as the Jenkins server and ready to use

Importantly, how do you connect to your EC2 Linux instance from Windows?
For that you need PuTTY and PuTTYgen (since PuTTY won't recognize the keypair.pem provided by AWS), 
so you need to convert keypair.pem to keypair.ppk using PuTTYgen
launch puttygen > Load *.pem > save private key
launch putty > hostname: aws-instance-publicName > Data > auto-login: ec2-user > SSH > Auth > supply the generated *.ppk file > open session

now unlock Jenkins by: sudo cat /var/lib/jenkins/secrets/initialAdminPassword
-------------------------------------
Installing docker on AWS-EC2-Instance
$ sudo yum update -y
$ sudo amazon-linux-extras install docker
$ sudo service docker start
$ sudo usermod -a -G docker ec2-user   (adding ec2-user to the docker group)
-------------------------------------

k/r
P

24 February 2018

DevOps

Primary Objective: 

To get the changes into live as quickly as possible while minimizing the risks in software quality assurance and compliance.

What are the top DevOps tools?

- Git
- Jenkins
- Ansible/Chef/Puppet
- Selenium
- Nagios
- Docker

How do DevOps tools work together ?

In an organisation where everything gets automated for seamless delivery, the generic logical flow can be:
  1. Developers write the code, and the source code is managed by a Version Control System tool like Git; developers push the code to a Git repository, and any change made to the code is committed to this repository.
  2. Jenkins then pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven.
  3. A configuration management tool like Ansible/Puppet deploys this code & provisions the test environment, and then Jenkins releases the code on the test environment, where testing is done using tools like Selenium.
  4. Once the code is tested, Jenkins sends it for deployment to the production server (even the production server is provisioned & maintained by tools like Ansible/Puppet).
  5. After deployment the application is continuously monitored by a tool like Nagios.
  6. Docker containers provide a quick environment to test the build features. 

k/r,
P