
23 November 2017

Docker: Containerization Tool

Docker allows you to encapsulate your application, its dependencies and its runtime configuration into a single unit that you can run anywhere.

It's all about applications, and every application requires tons of infrastructure, which is a massive waste of resources since a typical application utilizes only a small percentage of the physical machine, RAM and CPU it is given. Hypervisors/virtualization came into the picture to address this: we share the resources of a single physical machine and create multiple VMs on top of it to utilize more of it, but that is still not perfect.
Docker is the solution to the remaining overhead; it containerizes your requirement and works on the principle of layered images.

Working with Docker is as simple as three steps:
  • Install the Docker engine
  • Pull an image from Docker Hub or a registry
  • Run the image as a container/service
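as a rough illustration of the three steps (assuming Ubuntu, and using the public nginx image as a stand-in for any image):

$ sudo apt-get install -y docker.io        # step 1: install the engine
$ docker pull nginx:1.21                   # step 2: pull an image from Docker Hub
$ docker run -d -p 8080:80 nginx:1.21      # step 3: run it as a container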
How containers evolved over virtualization
- In the virtual era you maintain a guest OS on top of the host OS, and each VM takes minutes to boot.
- Containers bypass the guest OS entirely and boot in a fraction of a second.
- Containerization is not replacing virtualization; it is just the next step in the evolution (more advanced).

What is docker?
Docker is a containerization platform that bundles up your application and all its dependencies together in the form of an image, which you then run as a service called a container, ensuring that your application works the same in any environment, be it Dev, Test or Prod.

Points to remember
  • docker images are read-only templates used to run containers
  • docker images are the build component of docker
  • there is always a base image on which you layer up your requirement
  • containers are the actual running instances of images
  • we always create images and run containers using images
  • we can pull images from docker hub/a registry, which can be public or private
  • the docker daemon runs on the host machine
  • docker0 is not a normal interface | it's a bridge | a virtual switch | that links multiple containers
  • docker images are registered in a docker registry & stored in docker hub
  • docker hub is docker's own cloud repository (for sharing & caring of images)
Essence of docker: if you are new to a technology and want to work with it, get its image from docker hub, configure it, work on it, destroy it; then you can move the same image to another environment and run it as-is out there.
                          
                      
Key attributes of the kernel used by containers
  • Namespaces (PID, net, mount, user) provide isolation
  • cgroups (control groups) limit and account for resources
  • capabilities (assign privileges to container users)
  • but every container shares the common host kernel
How does communication happen between the docker client & docker daemon?
  • via a REST API
  • over a UNIX socket (/var/run/docker.sock)
  • or over TCP
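you can watch the REST API in action by querying the daemon directly over its UNIX socket (assuming curl 7.40+ for --unix-socket support):

$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json    # same data as 'docker ps'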
A Dockerfile supports the following instructions

FROM       image:tag AS name
ADD        ["src",... "dest"]
COPY       /src/ dest/
ENV        ORACLE_HOME=/software/Oracle/
EXPOSE     port, [port/protocol]
LABEL      multi.label1="value1" multi.label2="value2" other="value3"
STOPSIGNAL signal
USER       myuser
VOLUME     /myvolume
WORKDIR    /locationof/directory/
RUN        write your shell command
CMD        ["executable","param1","param2"]
ENTRYPOINT ["executable","param1","param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)
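putting a few of these instructions together, a minimal sketch of a Dockerfile (app.sh and the paths are hypothetical):

FROM ubuntu:16.04
LABEL maintainer="you@example.com"
ENV APP_HOME=/opt/app
WORKDIR /opt/app
COPY app.sh .
RUN chmod +x app.sh
EXPOSE 8080
CMD ["./app.sh"]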

Some arguments you can use while running any docker image
$ docker run -it --privileged image:tag
--privileged gives all capabilities to the container and lifts all the limitations enforced by the OS/devices; you can even run docker inside docker with it.

Installing docker-engine onto any Ubuntu system

$ sudo apt-get update -y && sudo apt-get install -y docker.io

this will install docker-engine as a linux service. Check the engine status by running service docker status; if it's running you are good to play with docker now, else start the docker engine by running service docker start

check the docker details installed on your system by running any of these commands

$ docker -v | docker version | docker info

Docker needs root to create namespaces/cgroups/etc.,


so you need to add your local user to the docker group (verify the docker group exists in /etc/group) and add your user as:

$ sudo gpasswd -a red docker

then restart your session. Alternatively, add your user to the docker group by hand:

$ vi /etc/group 

append your user to the docker group line and start using docker with your user

Basic commands
Function                              Command
pull a docker image                   docker pull reponame/imagename:tag
run an image                          docker run parameters imagename:tag
list docker images                    docker images
list running containers               docker ps
list all containers (even stopped)    docker ps -a
build an image                        docker build -t imagename:tag .
remove n containers in one command    docker rm $(docker ps -a -q)
remove n images in one command        docker rmi $(docker images -a -q)
reset the docker system               docker system prune
create a volume                       docker volume create
run with a bind mount                 docker run -it -p 8001-8006:7001-7006 --mount type=bind,source=/software/,target=/software/docker/data/ registry.docker/weblogic12213:191004
run with a named volume               docker run -it -p 8001-8006:7001-7006 -v data:/software/ registry.docker/weblogic1036:191004
create a network                      docker network create --driver bridge --subnet=192.168.0.0/20 --gateway=192.168.0.2 mynetwork
run on a network                      docker run -it -p 8001-8006:7001-7006 --network=mynetwork registry.docker/weblogic1036:191004
for more on networking                click here: networking in docker

Setting up Jenkins Via Docker on a Linux machine

Open a terminal window and run (provided Docker is already installed):
$ docker pull punitporwal07/jenkins
$ docker run -d -p 9090:8080 -v jenkins-data:/var/jenkins_home punitporwal07/jenkins

docker run : default command to run any docker container
-d : run the container in detached mode (in the background) and print the container ID
-p : port mapping from the image to your local setup, -p host-port:container-port
-v : map the Jenkins data in /var/jenkins_home/ to a directory/volume on your file system
punitporwal07/jenkins : docker will pull this image from Docker Hub

it will process for 2-3 mins and then prompt:

INFO: Jenkins is fully up and running

to access the jenkins console ( http://localhost:9090 ) for the first time, you need to provide the admin password, to make sure it is installed by an admin only. It prompts for the admin password during the installation process, something like:

e72fb538166943269e96d5071895f31c

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

since we are running Jenkins inside docker as a detached container, you can use:
$ docker logs <container-id> to collect jenkins logs

if you select the recommended plugins, which are the most useful ones, Jenkins will install them by default:



Best practice for writing a Dockerfile
the best practice is to build a container first and run, one by one, all the instructions that you plan to put in the Dockerfile. Once they succeed, put them in your Dockerfile; this will save you from building n images from your Dockerfile again and again, and will save image layers as well. for instance, see the sketch below.
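a sketch of that workflow (the packages shown are only examples):

$ docker run -it ubuntu:16.04 bash                              # start a throwaway container
root@container:/# apt-get update && apt-get install -y curl    # try each planned instruction by hand
root@container:/# exit

once the commands succeed, copy them into RUN instructions in your Dockerfile.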

Writing a docker File: ( FROM COPY RUN CMD)

a container runs on layers of images:
            base image
            layer1 image
            layer2 image

Dockerfiles are simple text files with a command on each line.
To define a base image we use the instruction FROM 

Creating a Dockerfile
  • The first line of the Dockerfile should be FROM nginx:1.11-alpine (it is better to use an exact version rather than latest, as latest can drift away from your desired version)
  • COPY allows you to copy files from the directory containing the Dockerfile to the container's image. This is extremely useful for source code and assets that you want to be deployed inside your container.
  • RUN allows you to execute any command as you would at a command prompt, for example installing different application packages or running a build command. The results of the RUN are persisted to the image, so it's important not to leave any unnecessary or temporary files on the disk, as these will be included in the image; also note that each RUN creates a new image layer
  • CMD is used to execute a single command as soon as the container launches

Life of a docker Image
write a Dockerfile > build the image > tag the image > push it to registry > pull it back to any system > run the image 

vi Dockerfile: 

FROM baseLayer:version
MAINTAINER xxx@xx.com
RUN install
CMD special commands/instructions

$ docker build -t imagename:tag .
$ docker tag 4a34imageidgfg43 punixxorwal07/image:tag
$ docker push punixxorwal07/image:tag
$ docker pull punixxorwal07/image:tag
$ docker run -it -p yourPort:imagePort punixxorwal07/image:tag

How to Upload/Push your image to registry

after building your image (docker build -t imageName:tag .) do the following:

step1- login to your docker registry
$ docker login --username=punitporwal --email=punixxorwal@xxxx.com

list your images
$ docker images

step2- tag your image for registry
$ docker tag b9cc1bcac0fd reponame/punitporwal07/helloworld:0.1

step3- push your image to registry
$ docker push reponame/punitporwal07/helloworld:0.1

your image is now available and open to the world; by default your image is public.

repeat the same steps whenever you wish to change your docker image: make the changes, tag the new image, and push it to your docker hub


Volumes in Docker

first of all, create a volume for your docker container using the command:

$ docker volume create myVolume
$ docker volume ls 
DRIVER              VOLUME NAME
local               2f14a4803f8081a1af30c0d531c41684d756a9bcbfee3334ba4c33247fc90265
local               21d7149ec1b8fcdc2c6725f614ec3d2a5da5286139a6acc0896012b404188876
local               myVolume

thereafter, use the following ways to use the volume feature.
we can define volumes in one container and share the same across multiple containers

to define in container 1
$ docker run -it -v /volume1 --name voltainer centos /bin/bash

to mount the same volume in another container
$ docker run -it --volumes-from=voltainer centos /bin/bash

we can mount volumes into a container from the docker engine host
$ docker run -v /data:/data
$ docker run --volume mydata:/mnt/mqm

     /volumeofYourHost/:/volumeofContainer/

to define in a Dockerfile
VOLUME /data (but we cannot bind a host directory to the volume this way; only the docker run command can do that)


PORT MAPPING

when you expose a port from the Dockerfile, that means you are mapping a port defined in your image to your newly launched container; use:
$ docker run -d -p 5001:80 --name=mycontainername myimagename

when you want to change the protocol from the default (tcp) to udp, use:
$ docker run -d -p 5001:80/udp --name=mycontainername myimagename

let's say you want to expose your image port on a specific IP address of your docker host; use:
$ docker run -d -p 192.168.0.100:5002:80 --name=mycontainername myimagename

when you want to map multiple ports exposed in your Dockerfile to high random available ports, use:
$ docker run -d -P --name=mycontainername3 myimagename

to expose a port range, use:
$ docker run -it -p 61000-61006:61000-61006 myimagename:myimagetag
                  also you can use EXPOSE 61000-61006 in your Dockerfile

to check port mappings, use:
$ docker port mycontainername


DOCKER DAEMON LOGGING

first of all stop the docker service
$ service docker stop
$ docker -d -l debug &
-d here is for the daemon
-l sets the log level
& gets our terminal back
or
$ vi /etc/default/docker
add the log level:
DOCKER_OPTS="--log-level=fatal"
then restart the docker daemon
$ service docker start
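on newer engines you can also set the level persistently in /etc/docker/daemon.json instead of DOCKER_OPTS (a sketch; create the file if it does not exist):

$ cat /etc/docker/daemon.json
{
  "log-level": "debug"
}
$ service docker restart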


Br
Punit

20 January 2019

All about Docker swarm

There is always a requirement to run every individual service with failover and load balancing. When it comes to container services, docker swarm comes into the picture.
Docker swarm is a cluster of docker engines and provides a container orchestration framework.
  • it comprises managers and workers
  • managers also act as workers
  • only one manager is the leader; the other managers act as backups
  • as a pre-requisite, your docker version should be 1.12+
To initiate docker swarm

$ docker swarm init --advertise-addr :2377 --listen-addr managerIP:swarmListenPort

2377: the default port for swarm
--advertise-addr lets the swarm manager use a specific IP:PORT. Here I am running this on an ec2 instance as manager1 (in case your host has multiple IPs, it is best practice to use a specific one for all swarm-related stuff)


[root@ip-172-31-22-15 ec2-user]# docker swarm init --advertise-addr 172.31.22.15:2377 --listen-addr 172.31.22.15:2377
Swarm initialized: current node (icuih1r0n8juo8xigkceniu3j) is now a manager.
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-15z6ejowo...63dn550as-7998mw9sxnh3ig 172.31.22.15:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@ip-172-31-22-15 ec2-user]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
icuih1r0n8juo8xigkceniu3j *  docker    Ready   Active        Leader


the highlighted command is the exact command you need to run on any worker/manager that you want to join to this swarm; it includes a token


[root@ip-172-31-22-15 ec2-user]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-15z6ejowow...63dn550as-9wiyb3pyiviqik 172.31.22.15:2377


[root@ip-172-31-22-15 ec2-user]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-15z6ejowow...63dn550as-7998mw9sxnh3ig 172.31.22.15:2377

following the above commands, to join the leader as a worker/manager, launch another ec2 instance (or any host with docker 1.12+ on it) and run:


$ docker swarm join --token SWMTKN-1-15z6ejowow53...63dn550as-9wiyb3pyiviqik 172.31.22.15:2377


from the leader node you will see all the workers/managers that have joined your swarm:


[root@ip-172-31-22-15 ec2-user]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
1ndqsslh7fpquc7fi35leig54    worker4   Ready   Active
1qh4aat24nts5izo3cgsboy77    worker5   Ready   Active
25nwmw5eg7a5ms4ch93aw0k03    worker3   Ready   Active
icuih1r0n8juo8xigkceniu3j *  manager1  Ready   Active        Leader
5pm9f2pzr8ndijqkkblkgqbsf    worker2   Ready   Active
9yq4lcmfg0382p39euk8lj9p4    worker1   Ready   Active

 $ docker info will give you detailed info on your swarm
[root@ip-172-31-22-15 ec2-user]# docker info
Containers: 12
Running: 0
Paused: 0
Stopped: 12
Images: 1
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 54
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: icuih1r0n8juo8xigkceniu3j
Is Manager: true
ClusterID: hpvfpcevwt8144bj65yk744q8
Managers: 1
Nodes: 6
Orchestration:
.
..
Node Address: 10.91.20.119
Manager Addresses:
10.91.20.119:2377
......
..


now let's create a SERVICE and run it on the docker swarm
(the whole idea of setting up this orchestration layer is that we don't need to worry about where our app is running; it will just be up the whole time)


$ docker service create | scale | ls | ps | inspect | rm

ex:
$ docker network create -d overlay pp-net
$ docker service scale Name=7        (or: docker service update --replicas 7 Name)
$ docker service ps Name

red@docker:/software/docker-images$ docker service create --name myswarmapp -p 9090:80 punitporwal07/apache
rvzrpe4szt0vdyqte7g7tfshs



by doing this, any time you hit the exposed service port on any host/IP in the swarm, it will serve your application, even from hosts where no container for it is running (the container may be running only on leader/manager1).

accessing the service now:


NOTE: after advertising a listen address to docker swarm, you may get an error the next time you try to initialize the docker daemon (if you are using a dynamic IP).


/var/lib/docker/swarm/docker-state.json
/var/lib/docker/swarm/state.json

these files will hold the old IP, and the docker daemon will fail to initialize with:

ERRO[0001] cluster exited with error: failed to listen on remote API
address: listen tcp 10.91.20.119:2377: bind: cannot assign requested address
FATA[0001] Error creating cluster component: swarm component could
not be started: failed to listen on remote API address: listen tcp
10.91.20.119:2377: bind: cannot assign requested address


change the IP and initialize it again

$ service docker restart

k/r,
P

21 January 2019

what is docker-compose

When you wish to run multiple services together as a single unit, docker-compose is the tool for you; it allows you to run multiple services, microservice-style, by defining them in a single configuration file.
  • docker compose is a docker tool for defining and running multi-container docker applications.
  • docker compose allows us to define all the services in a configuration file, and with one command it spins up all the containers we need.
  • it uses yaml files to configure application services (docker-compose.yml)
  • it uses a single command to start or stop all the services (docker-compose up & docker-compose down)
  • it can scale up services whenever required.
by default this tool is installed automatically when you are on Windows or Mac with docker v1.12+

but if you are on Linux, try this command given on github for docker-compose


$ curl -L https://github.com/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` \
  -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose


alternatively you can find the latest version available here at github

docker-compose.yml prototype will look like:

version:
services:
  servicename:
    image:
networks:
volumes:


version: first things first, define the version of docker-compose you are using; there is no restriction against using the latest version of compose, so I have used '3' here

version: '3'

services: the service definition contains the configuration that is applied to each container started for that service, much like passing command-line parameters to docker run

---
version: '3'
services:
  webserver:
    image: punitporwal07/apache
    ports:
    - "9090:80"
  database:
    image: mysql
    ports:
    - "4041:3306"
    environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_USER=user
    - MYSQL_PASSWORD=password
    - MYSQL_DATABASE=demodb
...

so instead of defining items in a docker run command, we can now define them more easily in the configuration file, with just a little bit of syntax

now launch the services using the simple command docker-compose up, and it will spin up mysql and apache for you in a fraction of a minute.
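the day-to-day lifecycle then looks like this (run from the directory holding your docker-compose.yml):

$ docker-compose up -d     # start all services in the background
$ docker-compose ps        # list the service containers
$ docker-compose logs -f   # follow logs of all services
$ docker-compose down      # stop and remove containers & networks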

Br,
Punit

24 January 2018

Networking in docker

Docker works on the principle of running containers as services. When you run a container it has its own attributes like namespace, ip-address, port etc. These attributes are allocated to the container by the docker daemon at run time. There are ways to control this behaviour, like creating namespaces of your choice at launch time.

In the same way, when it comes to ip-addresses you can create your own docker network, which can give a static ip to your container or its underlying service.

docker comes with 5 kinds of networking drivers:

bridge : when you want to communicate between standalone containers.
overlay : to connect multiple Docker daemons together and enable swarm services to communicate with each other.
host : For standalone containers, remove network isolation between the container and the Docker host. 
macvlan : allows you to assign a MAC address to a container, making it appear as a physical device on your network.
none : disables all networking.

bridge is the default driver used when you launch a container as a service.

How you can create your own docker network as per requirement

the syntax to create a network is : 

$ docker network create --options networkname

few widely used options are:

--driver drivername
--subnet=subnetrange/x
--gateway=anIPfromdefinedsubnet

for example, assigning a static IP out of your CIDR block:


$ docker network create --driver overlay --subnet=192.168.0.0/26 --gateway=192.168.0.1 my-network


additionally you can use this created network for your container at the time of its launch

for example:


$ docker run --publish 61000:1414 --publish 61001:9443 --net my-network --ip 192.168.0.3 --detach --env
  MY_APP_PASSWORD=password punitporwal07/apache:2.2.29


this way your container will be allocated a static IP within your defined subnet range.
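to verify the allocation, inspect the network; the Containers section of the output lists each attached container with the IP it received:

$ docker network inspect my-network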

k/r
P

24 February 2018

Kubernetes: Orchestration framework for containers

Kubernetes is an open-source tool donated by Google after running it internally for over 10 years as Borg. It is a platform to work with containers and an orchestration framework for Docker containers which gives you: deployment, scaling, monitoring







K8s helps in moving from host-centric infrastructure to container-centric infrastructure

In the virtualization world the atomic unit of scheduling is the VM; in docker it is the Container, and in Kubernetes it is the Pod

keys of kubernetes
- we describe our application requirements in k8s yamls
- it exposes containers as services to the outside world
- Kubernetes follows a client-server architecture
- in k8s we enforce desired state management via a manifest.yaml file; here we feed the cluster services the desired state to run in our infrastructure
- on the other side we have workers. A worker is a container host with a kubelet process running, which is responsible for communicating with the k8s cluster services

**Kubernetes rule: a pod cannot be exposed directly; it has to be exposed via a service**

Containers > pods > deployments

For example, you can have two services − one containing nginx and mongoDB, and another containing nginx and redis. Each service has an IP or service point which other applications can connect to. Kubernetes is then used to manage these services.


Resources in kubernetes

minion − the node on which all the services run. You can have many minions running at one point in time. Each minion hosts one or more PODs.

Pod − pods are mortal; a pod is the smallest unit of deployment in the k8s object model and is like hosting a service. Each POD can host a different set of Docker containers. A proxy is then used to control the exposure of these services to the outside world. You normally don't create pods yourself; they are created by replicaSets.

ReplicaSet − replicaSets are created by deployments; these deployments contain the declaration of the containers you want to run in the cluster, like image/tag, env variables, data volumes.
Kubernetes has several components in its architecture.

Labels − use labels in your deployment manifest to target specific pods; that means only pods with the specific labels defined in your deploy manifest will be manipulated.

etcd − k8s objects are persisted here. This component is a highly available key-value store used for storing shared configuration and service discovery; the various applications can connect to the services via the discovery service.

kube-apiserver − an API server used to orchestrate the Docker containers.
kube-controller-manager − used to control the Kubernetes services.
kube-scheduler − used to schedule the containers on hosts.
Kubelet − used to control the launching of containers via manifest files on the worker host (it talks to the k8s cluster).
kube-proxy − used to provide network proxy services to the outside world.
Flannel − a back-end network which is required for the containers.

Advanced resources

context - a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl: all kubectl commands run against that cluster.
ConfigMap - an API object that lets you store configuration for your other objects or applications, such as connection strings, analytics keys, and service URLs, and mount it in volumes or use it as environment variables.
sidecar - just a container that runs in the same Pod as the application container; because it shares the same volume and network as the main container, it can "help" or enhance how the application operates. Common examples of sidecar containers are log shippers, log watchers, and monitoring agents, aka utility containers.

helm − helm is a package manager for k8s which allows you to package, configure & deploy applications & services to a k8s cluster.
helm chart − helm packages are called charts, which consist of a few YAML configs and some templates that are cooked into k8s manifest files.
helm chart repository − packaged charts are made available and can be downloaded from chart repos.

Mandatory Fields while writing a manifest file
In the manifest file for any kubernetes object you want to create, you'll need to set values for the following fields:
apiVersion - which version of the Kubernetes API you're using to create this object. for more on apiVersions see this > Different api versions to use in your manifest file
kind - What kind of object you want to create.
metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
spec - What state you desire for the object.
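a minimal manifest carrying all four fields might look like this (the pod name and image are placeholders):

---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:1.17
...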

Service in kubernetes
There are four ways to make a service accessible externally in a kubernetes cluster
  • Nodeport: a deployment that needs to be exposed as a service to the outside world can be configured with the NodePort type. In this method, when the deployment is exposed, the cluster opens a random port in the default range 30000-32767 on every node (hence the name) and redirects traffic received on that port to the underlying service endpoint generated when you exposed your deployment. (the combination of NodeIP + port is the NodePort) access your app/svc as http://public-node-ip:nodePort
  • clusterIP: the default and most basic type, which gives the service its own IP that is only reachable within the cluster.
  • Loadbalancer: an extension of the NodePort type; this makes the service accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on. The load balancer redirects traffic to the node port across all the nodes; clients connect to the service through the load balancer's IP.
  • Ingress: a radically different mechanism for exposing multiple services through a single IP address. It operates at the HTTP level (network layer 7) and can thus offer more features than layer-4 services can.
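as a sketch, a NodePort service manifest tying this together (assumes pods labelled app: myapp listening on 8080; all names are placeholders):

---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80           # service port inside the cluster
    targetPort: 8080   # container port on the pods
    nodePort: 30080    # port opened on every node (30000-32767)
...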
Network in kubernetes

Kubernetes' default ethernet bridge is called cbr0, like docker0 in docker.

3 fundamental requirements of the k8s networking model:
  • All the containers can communicate with each other directly without NAT.
  • All the nodes can communicate with all containers (and vice versa) without NAT.
  • The IP that a container sees itself as is the same IP that others see it as.
Pod networks
implemented by CNI plugins
the pod network is big and flat
there is an IP per pod
every pod can talk to any other pod

Node networks
all nodes need to be able to talk to each other
kubelet <-> API server
every node on the n/w runs the kube-proxy & kubelet processes
this n/w is not implemented by k8s.

Service networks
the IP of your service is not tied to any interface
kube-proxy in IPVS mode creates a dummy interface on the service n/w, called kube-ipvs0,
whereas kube-proxy in IPTABLES mode does not.

Storage in kubernetes
there are three types of access mode:
RWO : ReadWriteOnce - only one node in the cluster can mount this volume read-write
RWX : ReadWriteMany - all nodes in the cluster can read & write data from this volume
ROX : ReadOnlyMany  - all nodes in the cluster can only read data from this volume

Not all volume types support all modes

to claim the storage, 3 properties have to match between the PersistentVolume & PersistentVolumeClaim:

1. accessMode
2. storageClassName
3. capacity 

have a look at the sample persistentVolume & persistentVolumeClaim below to understand a storage manifest.
After you create the persistentVolume & persistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
until then the pv is not claimed by any pvc, and thus stays available, waiting for a pvc to claim it.
after you deploy the persistentVolume (pv) & persistentVolumeClaim (pvc) you can assign them to your running pod, as shown after the two manifests below

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd
  capacity:
    storage: 10Gi
  hostPath:
    path: "/mnt/mydata"
...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
...
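and a sketch of a pod consuming the claim above (the image and mount path are placeholders):

---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:1.17
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc     # must match the PVC name above
...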

Deployment in kubernetes

Deployment is all about scaling and updating your release. You deploy your containers inside pods and scale them using a replicaSet. Updating the replicaSet alone will not do a rolling update; we need to add a strategy to the deployment manifest to get the job done
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 1

an ideal deployment manifest will look like deployment.yml
it is the deployment manifest you update every time you want to scale your application (tune the number of replicas) or update the app (modify the image version or anything else); just tweak the deployment manifest and it will redeploy your pods, communicating with the apiServer
$ kubectl apply -f deployment.yml



Autoscaling in kubernetes
when demand goes up, spin up more pods, but not via replicas this time; the horizontal pod autoscaler is the answer

IF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 4
...

THEN
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
...
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeploy
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
...
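the same autoscaler can also be created imperatively, which is handy for a quick test:

$ kubectl autoscale deployment mydeploy --min=1 --max=10 --cpu-percent=50
$ kubectl get hpa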

Launching kubernetes as a single node cluster locally

Minikube is the tool that allows you to launch k8s locally. Minikube runs a single-node k8s cluster inside a VM on your local machine.
before that, install kubectl
install minikube on Linux:
use this script to launch a k8s VM locally and interact with the minikube cluster: install-minikube.sh
basic minikube commands
Function                              Command
verify kubectl can talk to cluster    kubectl config current-context (should return minikube)
stop the cluster                      minikube stop
delete the node                       minikube delete
start a version-specific kube node    minikube start --vm-driver=none --kubernetes-version="v1.6.0"
check node info                       kubectl get nodes
kubernetes cluster info               kubectl cluster-info
kubectl binary for windows            kubectl.exe
minikube 64-bit installer             minikube-installer.exe

Launching Kubernetes-Cluster on Google Cloud Platform

Presuming you hold an active account with GCP, follow:

Go to Navigation menu--> Kubernetes engine --> clusters

provide all the details as per your requirement, like zone, number of CPUs, OS, size of the cluster (the number of nodes/minions does not include the master, as that's taken care of by the platform behind the scenes) and create

or we have a command-line option to create the cluster:

$ gcloud container --project "gcp-gke-lab-7778" clusters create "cluster-1" \
--zone "asia-south1-a" --username "admin" --cluster-version "1.14.10-gke.0" \
--machine-type "f1-micro" --image-type "COS" --disk-type "pd-standard" --disk-size "100" \
--scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--num-nodes "3" --network "default" --subnetwork "default" --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard \
--no-enable-autoupgrade --no-enable-autorepair
                                                                 
a GKE cluster looks like: GCP-ClusterInfo | GCP-ClusterNode | GCP-3NodeCluster (console screenshots)

In GCP command line:

$ gcloud container clusters list
$ gcloud container clusters get-credentials cluster-1 --zone asia-south1-a --project psyched-metrics-208409
    this will configure kubectl command-line access










Launching a K8S-cluster locally (1 master and 2 nodes)

Note:
not all versions of docker support kubernetes; you need to install a compatible version when needed

Pre-reqs:
docker      -  container runtime
kubelet     -  k8s node agent that runs on all nodes in your cluster and starts pods and containers
kubeadm  -  admin tool that bootstraps the cluster
kubectl     -  command line util to talk to your cluster
CNI          -  installs support for container networking (Container Network Interface)

check if your Linux is in permissive mode:
$ getenforce
   should return Permissive

Command to setup
$ apt-get update && apt-get install -y apt-transport-https
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

if it fails with a PGP key error, try the following
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 6A030B21BA07F4FB

alternate way
if you fail to add the k8s repository, add it manually
$ vi /etc/apt/sources.list.d/kubernetes.list (for Ubuntu)
$ vi /etc/yum.repos.d/kubernetes.repo (for RHEL/CentOS)
     add--> deb http://apt.kubernetes.io/ kubernetes-xenial main
or use this REPO
$ apt-get update (for Ubuntu)
$ yum update (for RHEL/CentOS)
$ apt-get install docker.io kubeadm kubectl kubelet kubernetes-cni (for Ubuntu)
$ yum install docker kubeadm kubectl kubelet kubernetes-cni --disableexcludes=kubernetes (for RHEL/CentOS)
$ systemctl start docker kubelet && systemctl enable docker kubelet

Cluster maintainance 
$ kubectl drain NodeName > moves the node to SchedulingDisabled state
$ kubectl uncordon NodeName > makes the node schedulable again

Uninstall k8s-cluster
$ kubeadm reset 
$ sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*

Deploy k8s cluster specifying pod network via kubeadm
$ kubeadm init --apiserver-advertise-address=MasterIP  --pod-network-cidr=192.168.0.0/16 

If it fails with a lower docker version, update docker:
docker.io (used for older versions, 1.10.x)
docker-engine (used before 1.13.x)
docker-ce (used for higher versions, since 17.03)
$ apt-get install docker-engine 

if it fails with [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
$ systemctl enable kubelet.service

If it fails with [ERROR Swap]: running with swap on is not supported. Please disable swap.
$ swapoff -a

if it fails with [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2,
use the command with the flag --ignore-preflight-errors=NumCPU
this will skip the check; note this is OK to use in dev/test only, not in production.

Run again 
$ kubeadm init --apiserver-advertise-address=MasterIP --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU

and result will be like:














now grab the three commands from the output and run them as a regular user, so as to configure your account on the master to have admin access to the API server from a non-privileged account

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get nodes
$ kubectl get pods --all-namespaces


if your normal user is not a sudoer then do this:
$ vi /etc/sudoers
              add following entry somewhere like:
                root ALL=(ALL) ALL
                red ALL=(ALL) NOPASSWD:ALL

if kubectl commands still fail with the error below:
The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
consider checking the kubelet status by running the command below; it should be active and running
$ sudo systemctl status kubelet
    if it is inactive,
check the swap status; if swap is enabled, disable it (sudo swapoff -a) and restart the kubelet service

the status remains Pending until we create a pod network

to add a pod network: you can install only one pod network per cluster; use calico, weave, flannel or any other CNI provider
$ kubectl apply --filename https://git.io/weave-kube
$ kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

if you fail to deploy the pod network, you might need to do the following:
  • sudo swapoff -av
  • sudo systemctl restart kubelet
  • sudo reboot now

now check the node status again; you will see them Ready & Running

now time to run minions

go to Node2 & Node3 and run the command given by the k8s cluster when it was initialized

Ensure you have fulfilled the pre-reqs (docker/kubectl/kubeadm/kubelet/kubernetes-cni)

$ kubeadm join 192.168.0.104:6443 --token zo6fd9.j26yrdb9qlu1190n --discovery-token-ca-cert-hash sha256:c165160bd18b89ab7219ec5bd5a60cfca24887ee816c257b84451c9feaf0e05a

if it fails while joining the cluster with [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
provision your nodes with the following command
$ echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
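to make this setting survive reboots, you can persist it via sysctl (as the kubeadm docs suggest):

$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system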

at times kubectl fails to give output for a command and results in the error:
Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
you may have some proxy problems, try running following command:
$ unset http_proxy
$ unset https_proxy
and repeat your kubectl call

check the status from any node and you will see a master & workers
once you do a deployment, pods will be spread across the workers

kubectl helpful commands
Function                                 Command
initialize cluster                       kubeadm init --apiserver-advertise-address=MASTERIP --pod-network-cidr=192.168.0.0/16
verify k8s cluster-info                  kubectl cluster-info
show IP addresses                        ip a s
reset cluster                            kubeadm reset -f && rm -rf /etc/kubernetes/
delete tunl0 iface                       modprobe -r ipip
deregister a node from cluster           kubectl drain nodeName
                                         kubectl drain nodeName --ignore-daemonsets --delete-local-data
                                         kubectl delete node nodeName
listing namespaces                       kubectl get namespace
setting namespace preference             kubectl config set-context --current --namespace=<namespace-name>
validate current namespace               kubectl config view --minify | grep namespace
investigate any object                   kubectl describe node/deployment/svc <objectName>
investigate kubelet service              sudo journalctl -u kubelet
exposing deployment as service           kubectl expose deployment my-deployment --type=NodePort --name=my-service
                                         kubectl expose deploy my-deployment --port=9443 --target-port=61002 --name=my-service --type=LoadBalancer
scaling your deployment                  kubectl scale --current-replicas=3 --replicas=4 deployment/my-deployment
all possible attributes of an object     kubectl explain pod --recursive
wide details of running pods             kubectl get pods -o wide
delete a pod forcefully                  kubectl delete pod mypodName --grace-period=0 --force --namespace myNamespace
delete bulk resources from a namespace   kubectl delete --all po/podName -n myNamespace
open a bash terminal in pod app          kubectl exec -it app -- bash
create a yaml manifest without           kubectl create deploy web --image=nginx --dry-run -o yaml > web.yaml
sending it to the cluster
edit deployment web at runtime           kubectl edit deploy/web

Br
Punit