
30 July 2019

PROBES - Health check mechanism of application running inside Pod's container in Kubernetes

Kubernetes provides a health-checking mechanism, called a probe, to verify whether a container inside a Pod is working or not.
Kubernetes gives two types of health checks, performed by the kubelet.

Liveness probe
k8s checks the status of the container via the liveness probe.
If the liveness probe fails, the container is subjected to its restart policy.

Readiness Probe
The readiness probe checks whether your application is ready to serve requests.
If the readiness probe fails, the Pod's IP is removed from the endpoint list of the Service.

We can define a liveness probe with three types of actions that the kubelet performs on a container:
  • Executes a command inside the container
  • Checks the state of a particular port on the container
  • Performs an HTTP GET request against the container's IP
Define a liveness command
livenessProbe:
  exec:
    command:
    - sh
    - /tmp/status.sh
  initialDelaySeconds: 10
  periodSeconds: 5


Define a liveness HTTP request 
livenessProbe:
  httpGet:
    path: /healthz
    port: 10254
  initialDelaySeconds: 5
  periodSeconds: 3


Define a TCP liveness probe
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20


Readiness probes are configured similarly to liveness probes.
The only difference is that you use the readinessProbe field instead of the livenessProbe field.

Define a readiness probe
readinessProbe:
  exec:
    command:
    - sh
    - /tmp/status_check.sh
  initialDelaySeconds: 5
  periodSeconds: 5

Configure Probes
Probes have a number of fields that you can use to control the behavior of liveness and readiness checks more precisely:

initialDelaySeconds: Number of seconds after the container starts before liveness or readiness probes are initiated.
Defaults to 0 seconds. Minimum value is 0.
periodSeconds: How often (in seconds) to perform the probe.
Default to 10 seconds. Minimum value is 1.
timeoutSeconds: Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed.
Defaults to 1. Must be 1 for liveness. Minimum value is 1.
failureThreshold: Number of consecutive failures after which the probe is considered failed. For a liveness probe, the container is restarted; for a readiness probe, the Pod is marked Unready.
Defaults to 3. Minimum value is 1.
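
Putting these fields together, a fully tuned liveness probe might look like the following sketch (the /healthz path and port 8080 are illustrative assumptions, not fixed values):

```yaml
livenessProbe:
  httpGet:
    path: /healthz           # assumed health endpoint of the app
    port: 8080               # assumed container port
  initialDelaySeconds: 15    # wait 15s after the container starts
  periodSeconds: 10          # probe every 10s
  timeoutSeconds: 2          # fail the attempt if no response within 2s
  successThreshold: 1        # must be 1 for liveness probes
  failureThreshold: 3        # restart the container after 3 consecutive failures
```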

ex: webserver Deployment with probes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  labels:
    app: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: app1
          image: punitporwal07/apache4ingress:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3

httpGet has additional fields that can be set:
path: Path to access on the HTTP server.
port: Name or number of the port to access the container. Number must be in the range 1 to 65535.
host: Hostname to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP
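
As a sketch, a probe using these extra httpGet fields might look like this (the HTTPS port and the header name/value are illustrative assumptions):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8443
    scheme: HTTPS            # probe over TLS instead of the default HTTP
    httpHeaders:
    - name: Custom-Header    # hypothetical header the app expects
      value: Awesome
```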

k/r
P

02 June 2019

Deploying application on Kubernetes cluster

You deploy applications on a Kubernetes cluster in the form of Docker images and run their containers inside Pods to serve as live applications.

Your application should be encapsulated in a docker image with all the required resources.
You can always scale your deployment and perform versioning of your deployment.

When you deploy an image, it should be available on Docker Hub or in a Docker registry, from where it can be referenced in a YAML file. Then you expose your application to the outside world using the Service object of k8s. Alternatively, you can use Google Kubernetes Engine to launch a k8s cluster and deploy containers directly onto it.

in short:
  1. Package your app into a Docker image (docker build)
  2. Run the container locally on your machine (optional)
  3. Upload the image to a registry/hub (docker push)
  4. Create a container cluster (cluster init)
  5. Deploy your app to the cluster (deployment.yaml)
  6. Expose your app to the Internet (service.yaml)
  7. Scale up your deployment (kubectl scale --replicas)
  8. Deploy a new version of your app
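The first three steps above can be sketched as follows (a sketch, assuming a Dockerfile in the current directory; the image name punitporwal07/myapp:0.1 is taken from the examples below):

```shell
docker build -t punitporwal07/myapp:0.1 .          # 1. package the app into an image
docker run -p 8000:8000 punitporwal07/myapp:0.1    # 2. run the container locally (optional)
docker push punitporwal07/myapp:0.1                # 3. upload the image to Docker Hub
```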
creating a k8s-cluster in GKE using CLI
$ gcloud container clusters create mycluster --num-nodes=2 --zone=us-central1-c

get the instance status:
$ gcloud compute instances list

running a container image on the k8s cluster as a Deployment
$ kubectl run hello --image=punitporwal07/myapp:0.1

To see the Pod created by the Deployment, run the following command:
$ kubectl get pods

At times, after pulling the image or while fetching the pod status, you may see a different status:

ErrImagePull
ImagePullBackOff
CrashLoopBackOff
Running

There are mainly three possible reasons behind such failures:
  1. The image tag is incorrect
  2. The image doesn't exist (or is in a different registry)
  3. Kubernetes doesn't have permissions to pull that image
whereas CrashLoopBackOff tells us that Kubernetes is trying to launch this Pod, but one or more of the containers is crashing or getting killed.

Let's describe the pod to get some more information:

$ kubectl describe pod ans-7974b8cc6b-dvsgz
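
If describe alone does not explain a CrashLoopBackOff, the container logs usually do; as a sketch, the pod name below is a placeholder for your own:

```shell
kubectl logs <pod-name>              # logs of the current container
kubectl logs <pod-name> --previous   # logs of the last crashed container instance
```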

Deleting deployment/container from K8s cluster :

First of all, you need to get the structure of your deployment resources:

$ kubectl get all
$ kubectl delete deployment.apps/<..>


But since the Deployment manages a ReplicaSet, it keeps spinning up another Pod for your deployment, so you need to delete the Deployment itself in order to stop this cycle:
$ kubectl get deployments
$ kubectl delete deployment myapp

and redeploy a fresh image
$ kubectl run myapp --replicas=3 --image=punitporwal07/myapp:0.1 --port=8000

expose your application to the internet in the form of a service
$ kubectl expose deployment myapp --type=LoadBalancer --port 8000 --target-port 8080 --name=myservice
$ kubectl get service
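
The kubectl expose command above is roughly equivalent to applying a Service manifest along these lines (a sketch; the selector assumes the run=myapp label that kubectl run applies to its pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer       # asks the cloud provider for an external IP
  selector:
    run: myapp             # label applied by kubectl run
  ports:
  - port: 8000             # port the service listens on
    targetPort: 8080       # port the container accepts traffic on
```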

Once your service is exposed, you will see that k8s allocates an external IP to your service.
NOTE: instead of deleting a GKE cluster, to save cost it is recommended to resize it to 0 with the following command:

$ gcloud container clusters resize mycluster --size=0 --zone=us-central1-c

Then scale it back up later by running it with a non-zero value for the size flag.

Br,
Punit