
30 July 2019

PROBES - Health check mechanism for applications running inside a Pod's container in Kubernetes

Kubernetes provides a health-checking mechanism, called a PROBE, to verify whether a container inside a pod is working or not.
Kubernetes offers two types of health checks, both performed by the kubelet.

Liveness probe
k8s checks the status of the container via the liveness probe.
If the liveness probe fails, the container is subjected to its restart policy.

Readiness Probe
The readiness probe checks whether your application is ready to serve requests.
If the readiness probe fails, the pod's IP is removed from the endpoints list of the service.

We can define a liveness probe with three types of actions that the kubelet performs on a container:
  • Executes a command inside the container (exec)
  • Checks the state of a particular port on the container (tcpSocket)
  • Performs a GET request against the container's IP (httpGet)

# Define a liveness command
  livenessProbe:
    exec:
      command:
      - sh
      - -c
      - /tmp/; sleep 10; rm /tmp/; sleep 600
    initialDelaySeconds: 10
    periodSeconds: 5

# Define a liveness HTTP request
  livenessProbe:
    httpGet:
      path: /healthz
      port: 10254
    initialDelaySeconds: 5
    periodSeconds: 3
# Define a TCP liveness probe
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20

Readiness probes are configured similarly to liveness probes.
The only difference is that you use the readinessProbe field 
instead of the livenessProbe field.

# Define a readiness probe
  readinessProbe:
    exec:
      command:
      - sh
      - /tmp/
    initialDelaySeconds: 5
    periodSeconds: 5

Configure Probes
Probes have a number of fields that you can use to control the behavior of liveness and readiness checks more precisely:

initialDelaySeconds: Number of seconds after the container starts before liveness or readiness probes are initiated.
Defaults to 0 seconds. Minimum value is 0.
periodSeconds: How often (in seconds) to perform the probe.
Default to 10 seconds. Minimum value is 1.
timeoutSeconds: Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed.
Defaults to 1. Must be 1 for liveness. Minimum value is 1.
failureThreshold: Number of consecutive failures after which the probe gives up. For a liveness probe, giving up means restarting the container; in the case of a readiness probe, the Pod is marked Unready.
Defaults to 3. Minimum value is 1.
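Putting these fields together, a fully tuned liveness probe might look like the following sketch (the endpoint, port and values are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz           # illustrative endpoint
    port: 8080               # illustrative port
  initialDelaySeconds: 10    # wait 10s after container start
  periodSeconds: 5           # probe every 5s
  timeoutSeconds: 2          # fail a single probe attempt after 2s
  successThreshold: 1        # must be 1 for liveness
  failureThreshold: 3        # restart after 3 consecutive failures
```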

# example of Nginx deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  labels:
    app: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: app1
          image: punitporwal07/apache4ingress:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3

httpGet has additional fields that can be set:
path: Path to access on the HTTP server.
port: Name or number of the port to access the container. Number must be in the range 1 to 65535.
host: Hostname to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP
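For instance, a probe using these extra httpGet fields might look like the following sketch (the header, port and scheme are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8443
    scheme: HTTPS            # defaults to HTTP
    httpHeaders:
    - name: Custom-Header    # illustrative custom header
      value: Awesome
  initialDelaySeconds: 5
  periodSeconds: 3
```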

keep probing!

02 June 2019

Deploying application on Kubernetes cluster

In today's world, containers have become a prominent deployment mechanism. Let's see how you can deploy your containerized application to a k8s cluster backed by a cloud provider. First, create an image using any container service; your application should adhere to all the essential requirements to get into a container. Then you build your application image.
For example: if I have to deploy my application on Tomcat inside a Docker image, then every time I launch my Docker image it will launch my Tomcat instance, deploy my application bundled in it, and run as a container service, which I can further deploy on my docker-swarm or Kubernetes cluster.
NOTE: You need a cloud provider or other controller that knows how to allocate an IP and route traffic into the nodes, as GKE does in GCP. If you are on-prem, k8s has no idea what infrastructure exists on your network, so you need to use the NodePort approach to generate endpoints or set up your own container n/w.
To achieve this, your application should be encapsulated in a Docker image with all the required resources. You can always scale your deployment and perform versioning of your deployment.
When you deploy an image, it should be available in an image registry so that it can be referenced in your deployment manifest. Then you expose your application to the outside world using the Service resource of k8s. You can use any cloud provider's ready-made container platform, like GKE, EKS, or AKS, to launch a k8s cluster and deploy containers directly onto it.

in short:
  1. Package your app into a container image (docker build)
  2. Run the container locally on your machine (optional)
  3. Upload the image to a registry/hub (docker push)
  4. Create a container cluster (cluster init)
  5. Deploy your app to the cluster (deployment.yaml)
  6. Expose your app to the Internet (service.yaml)
  7. Scale up your deployment (kubectl scale --replicas)
  8. Deploy a new version of your app
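Steps 5 and 6 of the list above boil down to two small manifests. A minimal sketch (names, image and ports are illustrative) could be:

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: punitporwal07/myapp:0.1
        ports:
        - containerPort: 8080
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  selector:
    app: myapp      # routes traffic to pods carrying this label
  ports:
  - port: 8000      # port exposed by the service
    targetPort: 8080  # port the container listens on
```

Apply both with kubectl apply -f deployment.yaml -f service.yaml.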

# creating a k8s-cluster in GKE using CLI, else follow this to setup a cluster locally
$ gcloud container clusters create mycluster --num-nodes=2 --zone=us-central1-c

# get the instance status
$ gcloud compute instances list


# running a container-image on a k8s-cluster as kind deployment
$ kubectl run hello --image=punitporwal07/myapp:0.1

# To see the Pod created by the Deployment, run the following command
$ kubectl get pods

At times you will see a failure status, such as ImagePullBackOff, after pulling the image or while fetching the pod status.

There are mainly three possible reasons behind such failures:
  1. The image tag is incorrect
  2. The image doesn't exist (or is in a different registry)
  3. Kubernetes doesn't have permissions to pull that image
CrashLoopBackOff, on the other hand, tells us that Kubernetes is trying to launch your pod, but one or more of its containers is crashing or getting killed.

Let's describe the pod to get some more information:

$ kubectl describe pod <podName>   # e.g. ans-7974b8cc6b-dvsgz

Deleting a deployment/container from a K8s cluster:

First of all, you need to get the structure of your deployment resources:

$ kubectl get all
$ kubectl delete deployment.apps/<..>

# since it has replicaSets enabled, it keeps spinning up another pod for your deployment,
# so you need to delete the deployment in order to stop this cycle
$ kubectl get deployments
$ kubectl delete deployment myapp

# and redeploy a fresh image
$ kubectl run myapp --replicas=3 --image=punitporwal07/myapp:0.1 --port=8000

# expose your application to the internet in the form of a service
$ kubectl expose deployment myapp --type=LoadBalancer \
   --port 8000 --target-port 8080 --name=myservice
$ kubectl get service

Once your service is exposed, you will see that k8s allocates an external IP to your service.

NOTE: instead of deleting a GKE cluster to save cost, it is recommended to resize it to 0 with the following command
$ gcloud container clusters resize mycluster --size=0 --zone=us-central1-c

Then scale it back up later by running it with a non-zero value for the size flag.


06 March 2019

Different api versions to use in your manifest file

According to Kubernetes: the API server exposes an HTTP API that lets end users, different parts of your cluster, and external components communicate with one another. The Kubernetes API lets you query and manipulate the state of objects in Kubernetes (for example: Pods, Namespaces, ConfigMaps, and Events).

                                  APIs are gateway to your kubernetes cluster


API versions with ‘alpha’ in their name are early candidates for new functionality coming into Kubernetes. These may contain bugs and are not guaranteed to work in the future.

‘beta’ in the API version name means that testing has progressed past alpha level, and that the feature will eventually be included in Kubernetes. Although the way it works might change, and the way objects are defined may change completely, the feature itself is highly likely to make it into Kubernetes in some form.

API versions that contain neither ‘alpha’ nor ‘beta’ in their name (for example, v1) are stable. They are safe to use.

24 February 2018

Kubernetes: Orchestration framework for containers

Kubernetes is an open-source tool from Google, released after more than 10 years of experience running its internal predecessor, Borg. It is a platform to work with containers and an orchestration framework for Docker containers which gives you: deployment, scaling, monitoring

K8s helps in moving from host-centric infrastructure to container-centric infrastructure

In the virtualization world the atomic unit of scheduling is the VM; in Docker it is the container, and in Kubernetes it is the Pod

keys of kubernetes
- We describe our application requirements in k8s YAMLs.
- It exposes containers as services to the outside world.
- Kubernetes follows a client-server architecture.
- In K8s we enforce desired-state management via a manifest.yaml file, which we feed to the cluster services to run our infrastructure in the desired state.
- On the other side we have workers. A worker is a container host and has a kubelet process running, which is responsible for communicating with the K8s cluster services.

**Kubernetes rule: a pod cannot be exposed directly, it has to be exposed via a service**

deployments > pods > containers

For example, you can have two services − one service containing nginx and mongoDB, and another containing nginx and redis. Each service can have an IP or service point which other applications can connect to. Kubernetes is then used to manage these services.

Resources in kubernetes

minion − the node on which all the services run. You can have many minions running at one point in time. Each minion will host one or more Pods.

Pod − Pods are mortal, and a Pod is the smallest unit of deployment in the K8s object model; it is like hosting a service. Each Pod can host a different set of Docker containers. The proxy is then used to control the exposing of these services to the outside world. Although you can create Pods directly, they are usually created and managed for you by ReplicaSets.

ReplicaSet − ReplicaSets are created by Deployments; these Deployments contain the declaration of the containers you want to run in the cluster, like image/tag, environment variables, and data volumes.
Kubernetes has several components in its architecture.

DaemonSet -  ensures that all Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
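A minimal DaemonSet manifest might look like this sketch (the log-agent name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluent/fluentd:v1.16   # illustrative image; one copy runs on every node
```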

Labels − use labels in your deployment manifest to target specific pods; only pods with the matching labels will be manipulated, depending on the labels you have defined in your deployment manifest.

etcd − k8s objects are persisted here. This component is a highly available key-value store used for storing shared configuration and service discovery. Here the various applications will be able to connect to the services via the discovery service.

kube-apiserver − This is an API which can be used to orchestrate the Docker containers.
kube-controller-manager − This is used to control the Kubernetes services.
kube-scheduler − This is used to schedule the containers on hosts.
Kubelet − This is used to control the launching of containers via manifest files from the worker host (it talks to the K8s cluster).
kube-proxy − This is used to provide network proxy services to the outside world. 
Flannel − This is a back-end network which is required for the containers. 

Advanced resources

context - a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl: all kubectl commands run against that cluster.
ConfigMap - an API object that lets you store configuration for other objects or applications, such as connection strings, analytics keys, and service URLs, and further mount it in volumes or use it as environment variables.
sidecar - just a container that runs in the same Pod as the application container; because it shares the same volume and network as the main container, it can “help” or enhance how the application operates. Common examples of sidecar containers are log shippers, log watchers, and monitoring agents, aka utility containers.
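As a sketch, a Pod with a log-shipper sidecar sharing an emptyDir volume with the main container could look like this (all names, images and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}          # shared scratch volume, lives as long as the Pod
  containers:
  - name: app             # main application container writing logs
    image: punitporwal07/myapp:0.1
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper     # sidecar reading the same volume
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```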

helm − helm is a package manager for k8s which allows you to package, configure & deploy applications & services to a k8s-cluster.
helm chart − helm packages are called charts, which consist of a few YAML configs and some templates that are rendered into k8s manifest files.
helm chart repository − these packaged charts are made available and can be downloaded from chart repos.

Mandatory Fields while writing a manifest file
In the manifest file for any Kubernetes object you want to create, you’ll need to set values for the following fields:
apiVersion - Which version of the Kubernetes API you’re using to create this object. For more on apiVersions see this >  Different api versions to use in your manifest file
kind - What kind of object you want to create.
metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
spec - What state you desire for the object.
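A bare Pod manifest shows all four mandatory fields in the smallest possible form (name and image are illustrative):

```yaml
apiVersion: v1      # which API version you're using
kind: Pod           # what kind of object you want to create
metadata:
  name: mypod       # uniquely identifies the object
spec:               # desired state of the object
  containers:
  - name: web
    image: nginx
```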

Service in kubernetes
There are four ways to make a service accessible externally in kubernetes cluster
  • Nodeport: a deployment that needs to be exposed as a service to the outside world can be configured with the NodePort type. In this method, when the deployment is exposed, each cluster node opens a random port in the default range 30000-32767 on the node itself (hence the name) and redirects traffic received on that port to the underlying service endpoint generated when you exposed your deployment. (The combination of NodeIP + Port is the NodePort.) You access your app/svc as http://public-node-ip:nodePort
  • clusterIP: the default and most basic type, which gives the service its own IP that is only reachable from within the cluster.
  • Loadbalancer:  an extension of the NodePort type—This makes the service accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on. The load balancer redirects traffic to the node port across all the nodes. Clients connect to the service through the load balancer’s IP.
  • Ingress resource, a radically different mechanism for exposing multiple services through a single IP address. It operates at the HTTP level (network layer 7) and can thus offer more features than layer 4 services can.
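As an example, a NodePort service manifest could look like this sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: webserver    # targets pods carrying this label
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 8080  # container port
    nodePort: 30080   # must fall in 30000-32767; omit to auto-assign
```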
Network in kubernetes

Kubernetes' default bridge is called cbr0, just as Docker has docker0.

3 fundamental requirements of the k8s networking model:
  • All the containers can communicate with each other directly without NAT.
  • All the nodes can communicate with all containers (and vice versa) without NAT.
  • The IP that a container sees itself as is the same IP that others see it as.
Pod Networks
Implemented by CNI plugins
The pod network is big and flat
Each Pod gets its own IP
Every pod can talk to any other pod

Node Networks
All nodes need to be able to talk to each other
kubelet <-> API Server
Every node on the n/w has the kube-proxy & kubelet processes running
The node n/w itself is not implemented by k8s.

Service Networks
The IP of your service is not tied to any interface.
Kube-proxy in IPVS mode creates a dummy interface on the service n/w, called kube-ipvs0,
whereas kube-proxy in IPTABLES mode does not.

Storage in kubernetes
there are three types of access mode:
RWO : ReadWriteOnce - the volume can be mounted read-write by a single node, so only pods on that node can write to it
RWX : ReadWriteMany - the volume can be mounted read-write by many nodes, so all pods in the cluster can access data on it
ROX : ReadOnlyMany  - the volume can be mounted read-only by many nodes, so all pods in the cluster can only read data from it

Not all volume types support all modes

To claim the storage, 3 properties have to match between the PersistentVolume & PersistentVolumeClaim:

1. accessMode
2. storageClassName
3. capacity 

Have a look at the sample persistentVolume & persistentVolumeClaim below to understand a storage manifest.
After you create the persistentVolume & persistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
Until then, the pv is not claimed by any pvc and thus is available, waiting for a pvc to claim it.
After you deploy the persistentVolume (pv) & persistentVolumeClaim (pvc), you can assign it to your running pod.

A minimal pv & pvc pair may look like this (the names, storage class, size and hostPath type are illustrative):

# persistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/mydata"
---
# persistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Deployment in kubernetes

Deployment is all about scaling and updating your release. You deploy your containers inside pods and scale them using a ReplicaSet. Updating the ReplicaSet alone will not do a rolling update; we need to add a strategy to the deployment manifest to get the job done.

an ideal deployment manifest will look like deployment.yml
It's the deployment manifest you need to update every time you want to change your application: tune your number of replicas to scale, or modify your image version to update the app. Just tweak the deployment manifest and it will redeploy your pods by communicating with the apiServer:
$ kubectl apply -f deployment.yml

Autoscaling in kubernetes
When demand goes up, spin up more Pods, but not via replicas this time; the Horizontal Pod Autoscaler is the answer.
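A minimal HorizontalPodAutoscaler sketch using the autoscaling/v2 API (the target deployment name, replica bounds and CPU threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:           # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add Pods when average CPU exceeds 70%
```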



kubernetes cheatsheet
# kubectl autocomplete
$ echo "source <(kubectl completion bash)" >> ~/.bashrc

# initialize cluster
$ kubeadm init --apiserver-advertise-address=MASTERIP --pod-network-cidr=

# verify k8s cluster-info
$ kubectl cluster-info

# IP address show
$ ip a s

# reset cluster
$ kubeadm reset -f && rm -rf /etc/kubernetes/

# delete tunl0 iface
$ modprobe -r ipip

# deregister a node from cluster (scheduling disabled)
$ kubectl drain nodeName
$ kubectl drain nodeName --ignore-daemonsets --delete-local-data --force
$ kubectl delete node nodeName

# re-enable scheduling on a node
$ kubectl uncordon nodeName

# listing namespaces
$ kubectl get namespace

# setting namespace preference
$ kubectl config set-context --current --namespace=<namespace-name>

# validate current namespace
$ kubectl config view --minify | grep namespace

# investigate any object
$ kubectl describe node/deployment/svc <objectName>

# investigate kubelet service
$ sudo journalctl -u kubelet

# exposing deployment as service
$ kubectl expose deployment my-deployment --type=NodePort --name=my-service
$ kubectl expose deploy my-deployment --port=9443 --target-port=61002 --name=my-service --type=LoadBalancer

# scaling your deployment
$ kubectl scale --current-replicas=3 --replicas=4 deployment/my-deployment
$ kubectl scale deployment/my-deployment --replicas=2 -n my-namespace

# all possible attributes of an object
$ kubectl explain pod --recursive

# wide details of running pods
$ kubectl get pods -o wide

# delete a pod forcefully
$ kubectl delete pod mypodName --grace-period=0 --force --namespace myNamespace

# delete bulk resources from a namespace
$ kubectl delete --all po -n myNamespace

# open a bash terminal in pod app
$ kubectl exec -it app -- bash

# create a yaml manifest without sending it to the cluster
$ kubectl create deploy web --image=nginx --dry-run -o yaml > web.yaml

# edit deployment web at runtime
$ kubectl edit deploy/web