
17 November 2020

What is a Pod in the Kubernetes world?

Pods are mortal, and the Pod is the smallest unit of deployment in the K8s object model; think of it as hosting a service.
Each Pod can host a different set of containers. The proxy is then used to control how these services are exposed to the outside world. You normally don't create bare Pods yourself; they are created by ReplicaSets or StatefulSets. Whenever a Pod runs, it pulls a container image from the registry (if not available locally) and deploys the container within itself. A Pod can have more than one container.
  • each Pod has only one IP, irrespective of the number of containers
  • all containers in a Pod share the IP, cgroups, namespaces, localhost adapter, and volumes
  • every Pod can interact directly with other Pods via the Pod network (inter-Pod communication)
There are two types of communication for Pods:
  • Inter-pod - Pods talk to each other over the Pod network
  • Intra-pod - every container in a Pod can interact with the others via the shared localhost interface, as in the sketch below
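To make intra-pod communication concrete, here is a minimal sketch of a two-container Pod (names and images are illustrative, not from the original post) where the second container reaches the first over the shared localhost interface:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: two-containers
    spec:
      containers:
      - name: web                  # serves on port 80 inside the pod
        image: nginx:1.21
      - name: sidecar              # polls web over the shared localhost
        image: busybox:1.35
        command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
    ...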
                                   




    Let's see what a Pod manifest looks like; to run it in a cluster, save it as pod.yaml

    ---
    apiVersion: v1
    kind: Pod
    metadata: 
      name: myapp
      labels:
        zones: prod
        version: v1
    spec:
      containers:
      - name: app-container
        image: punitporwal07/myapp:0.1
        ports:
        - containerPort: 8000
    ...
    $ kubectl create -f pod.yaml
    $ kubectl get pods 
    $ kubectl describe pods (checks status)


    NOTE: we don't work directly on Pods,
    so we use a ReplicationController to manage the containers inside a Pod, which enforces the desired state.
    Get your sample replicationController.yml.
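    A minimal sketch of such a replicationController.yml (the replica count and app: myapp label are assumptions for illustration, reusing the container from pod.yaml above):

    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: myapp-rc
    spec:
      replicas: 3                  # desired state: keep three pod copies alive
      selector:
        app: myapp                 # manage pods carrying this label
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: app-container
            image: punitporwal07/myapp:0.1
            ports:
            - containerPort: 8000
    ...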

    Now the big question is: how do we access our Pods? A Service is the answer.
    1. accessing from outside the cluster (browser, client)
    2. accessing from inside the cluster (how Pods interact with each other)
    Services nail both of the above.

    Every service gets a name and an IP which are STABLE, meaning the name and the IP will never change throughout its life.
    Services are REST objects in K8s. A service stands in front of Pods so that the outside world can reach the Pods via the service. A service never changes, meaning its IP, DNS name, and ports are reliable, unlike Pods, which are unreliable by nature.

    A service uses labels to identify its Pods and act on them.
    Now, since Pods are mortal and come and go, how does a service know which Pods are alive? It is the Endpoints object that dynamically maintains the list of available Pods and lets the service know which Pods are active.



    Accessing a replication-controller Pod and exposing it on a different port (create/get/describe):

    $ kubectl create -f svc.yml
    $ kubectl get svc
    $ kubectl expose rc myapp-rc --name=myapp-svc --target-port=8000 --type=NodePort (this exposes the service myapp-svc)
    $ kubectl describe svc myapp-svc (describes the service with all its meaningful attributes like port, namespace, labels etc.)

    K8S Deployments:
    A Deployment is about updates and rollbacks; it is a superset of the ReplicationController, and you access the deployed Pods via a Service.

    For example, deployment.yml may look like:


    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata: 
      name: myapp-deploy
    spec:
      replicas: 5
      selector:
        matchLabels:
          app: prod
      template:
        metadata: 
          labels:
            app: prod
        spec:
          containers:
          - name: myapp-container
            image: punitporwal07/myapp:0.1
            ports:
            - containerPort: 8000
    ...

    At last, be declarative!
    $ kubectl create -f <manifest.yaml>
    check into source control > make changes to the same file > apply the change with
    $ kubectl apply -f <manifest.yaml> --record

    k/r
    P

    24 February 2018

    Kubernetes: Orchestration framework for containers

    Kubernetes is an open-source tool donated by Google after running it internally for over 10 years as Borg. It is a platform to work with containers: an orchestration framework for Docker containers which gives you deployment, scaling, and monitoring.







    K8s helps in moving from a host-centric infrastructure to a container-centric infrastructure.

    In the virtualization world the atomic unit of scheduling is the VM; in Docker it is the container; and in Kubernetes it is the Pod.

    keys of Kubernetes
    - we describe our application requirements in K8s YAMLs
    - it exposes containers as services to the outside world
    - Kubernetes follows a client-server architecture
    - in K8s we enforce desired-state management via a manifest.yaml file; here we feed the cluster services the desired state to run in our infrastructure
    - on the other side we have the worker; a worker is a container host with a kubelet process running, which is responsible for communicating with the K8s cluster services

    **Kubernetes rule says- pod cannot be directly exposed it has to be via service**

    Containers > pods > deployments

    For example, you can have two services − one containing nginx and mongoDB, and another containing nginx and redis. Each service gets an IP or service point that other applications can connect to. Kubernetes is then used to manage these services.


    Resources in kubernetes

    minion − the node on which all the services run. You can have many minions running at one point in time. Each minion will host one or more Pods.

    Pod − Pods are mortal, and the Pod is the smallest unit of deployment in the K8s object model; think of it as hosting a service. Each Pod can host a different set of Docker containers. The proxy is then used to control how these services are exposed to the outside world. You normally don't create bare Pods yourself; they are created by ReplicaSets.

    ReplicaSet − ReplicaSets are created by Deployments; these Deployments contain the declaration of the containers you want to run in the cluster, like image/tag, environment variables, and data volumes.
    Kubernetes has several components in its architecture.

    Labels − use labels in your deployment manifest to target specific Pods; only Pods with matching labels will be manipulated, depending on the labels you have defined in your deploy manifest.

    etcd − where K8s objects are persisted. This component is a highly available key-value store used for storing shared configuration and service discovery. Here the various applications are able to connect to the services via the discovery service.

    kube-apiserver − the API server used to orchestrate the Docker containers.
    kube-controller-manager − used to control the Kubernetes services.
    kube-scheduler − used to schedule the containers on hosts.
    kubelet − used to control the launching of containers via manifest files on the worker host (it talks to the K8s cluster).
    kube-proxy − used to provide network proxy services to the outside world.
    Flannel − a back-end network which is required for the containers.

    Advanced resources

    context − a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl: all kubectl commands run against that cluster.
    ConfigMap − an API object that lets you store configuration for your other objects or applications, such as connection strings, analytics keys, and service URLs, and further mount it in volumes or consume it as environment variables (see the sketch after this list).
    sidecar − simply a container that runs in the same Pod as the application container; because it shares the same volumes and network as the main container, it can "help" or enhance how the application operates. Common examples of sidecar containers are log shippers, log watchers, and monitoring agents, aka utility containers.

    helm − a package manager for K8s which allows you to package, configure, and deploy applications and services to a K8s cluster.
    helm chart − helm packages are called charts, which consist of a few YAML configs and some templates that are cooked into K8s manifest files.
    helm chart repository − these packaged charts are made available for download from chart repositories.
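    As a sketch for the ConfigMap entry above (names and values are purely illustrative):

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      SERVICE_URL: "http://myapp-svc:8000"
      ANALYTICS_KEY: "dummy-key"
    ...
    # consumed from a pod spec's container section as:
    #   envFrom:
    #   - configMapRef:
    #       name: app-config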

    Mandatory fields while writing a manifest file
    In the manifest file for any Kubernetes object you want to create, you'll need to set values for the following fields (a skeleton follows the list):
    apiVersion - which version of the Kubernetes API you're using to create this object. For more on API versions see this > Different api versions to use in your manifest file
    kind - what kind of object you want to create.
    metadata - data that helps uniquely identify the object, including a name string, UID, and optional namespace.
    spec - what state you desire for the object.
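    Putting the four fields together, a bare-bones skeleton (placeholder values only, not a runnable app) looks like:

    ---
    apiVersion: v1        # which API version you are talking to
    kind: Pod             # what kind of object to create
    metadata:
      name: example       # uniquely identifies the object
      namespace: default  # optional
    spec: {}              # desired state; object-specific fields go here
    ...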

    Service in kubernetes
    There are four ways to make a service accessible externally in a Kubernetes cluster (a NodePort sketch follows the list):
    • NodePort: a deployment that needs to be exposed as a service to the outside world can be configured with the NodePort type. In this method, when the deployment is exposed, each cluster node opens a random port in the default range 30000-32767 on the node itself (hence the name) and redirects traffic received on that port to the underlying service endpoint generated when you exposed your deployment. (The combination of NodeIP + port is the NodePort.) You access your app/svc as http://public-node-ip:nodePort
    • ClusterIP: the default and most basic type, which gives the service its own IP that is only reachable from within the cluster.
    • LoadBalancer: an extension of the NodePort type. This makes the service accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on. The load balancer redirects traffic to the node port across all the nodes. Clients connect to the service through the load balancer's IP.
    • Ingress: a resource with a radically different mechanism for exposing multiple services through a single IP address. It operates at the HTTP level (network layer 7) and can thus offer more features than layer-4 services can.
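    A minimal NodePort sketch tying the list above together (name, label, and port numbers are illustrative):

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport
    spec:
      type: NodePort
      selector:
        app: web                   # assumed pod label
      ports:
      - port: 80                   # cluster-internal service port
        targetPort: 8000           # container port on the pods
        nodePort: 30080            # must fall within 30000-32767
    ...
    # reachable at http://<public-node-ip>:30080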
    Network in kubernetes

    Kubernetes' default bridge interface is called cbr0, just as Docker has docker0.

    Three fundamental requirements in the K8s networking model:
    • all containers can communicate with each other directly without NAT
    • all nodes can communicate with all containers (and vice versa) without NAT
    • the IP that a container sees itself as is the same IP that others see it as
    Pod networks
    implemented by CNI plugins
    the pod network is big and flat
    you get one IP per Pod
    every Pod can talk to any other Pod

    Node networks
    all nodes need to be able to talk to each other
    kubelet <-> API server
    every node on the network has two processes running: kube-proxy and kubelet
    this network is not implemented by K8s

    Service networks
    the IP of your service is not tied to any interface
    kube-proxy in IPVS mode creates a dummy interface on the service network, called kube-ipvs0,
    whereas kube-proxy in IPTABLES mode does not

    Storage in kubernetes
    there are three types of access mode:
    RWO : ReadWriteOnce  - the volume can be mounted read-write by a single node
    RWX : ReadWriteMany  - all pods in the cluster can read and write data on this volume
    ROX : ReadOnlyMany   - all pods in the cluster can only read data from this volume

    Not all volume types support all modes.

    To claim the storage, three properties have to match between the PersistentVolume & PersistentVolumeClaim:

    1. accessModes
    2. storageClassName
    3. capacity

    Have a look at the sample persistentVolume & persistentVolumeClaim below to understand a storage manifest.
    After you create the persistentVolume & persistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
    Until then the PV is not claimed by any PVC and thus stays Available, waiting for a PVC to claim it.
    After you deploy the persistentVolume (pv) & persistentVolumeClaim (pvc), you can assign the claim to a running Pod (see the Pod sketch after the manifests).

    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ssd
      capacity:
        storage: 10Gi
      hostPath:
        path: "/mnt/mydata"
    ...
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ssd
      resources:
        requests:
          storage: 10Gi
    ...
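    Once the PV and PVC above are bound, a Pod can mount the claim like this (pod name, image, and mount path are illustrative):

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-pvc
    spec:
      containers:
      - name: app
        image: nginx:1.21
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # where the volume appears in the container
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc                     # the claim defined above
    ...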

    Deployment in kubernetes

    Deployment is all about scaling and updating your release. You deploy your containers inside Pods and scale them using a ReplicaSet. Merely updating the ReplicaSet will not do a rolling update; we need to add a strategy to the deployment manifest to get the job done:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%
        maxSurge: 1

    An ideal deployment manifest will look like deployment.yml.
    It is the deployment manifest you update every time you want to change your application: to scale, tune the number of replicas; to update the app, modify the image version; just tweak the deployment manifest and it will redeploy your Pods by communicating with the apiServer.
    $ kubectl apply -f deployment.yml
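    Since Deployments are about updates and rollbacks, the standard rollout commands (using the myapp-deploy name from the manifest above) are worth keeping at hand:

    $ kubectl rollout status deployment/myapp-deploy    # watch an update progress
    $ kubectl rollout history deployment/myapp-deploy   # list recorded revisions
    $ kubectl rollout undo deployment/myapp-deploy      # roll back to the previous revision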



    Autoscaling in kubernetes
    When demand goes up, spin up more Pods, but not via replicas this time; the HorizontalPodAutoscaler is the answer.

    IF
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mydeploy
    spec:
      replicas: 4
    ...

    THEN
    ---
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    ...
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: mydeploy
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
    ...
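    The same autoscaler can also be created imperatively; a one-liner equivalent to the manifest above:

    $ kubectl autoscale deployment mydeploy --min=1 --max=10 --cpu-percent=50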

    Launching kubernetes as a single node cluster locally

    Minikube is the tool that allows you to launch K8s locally. Minikube runs a single-node K8s cluster inside a VM on your local machine.
    Before you start, install kubectl.
    Install minikube on Linux:
    use this script to launch the K8s VM locally and interact with the Minikube cluster: install-minikube.sh
    basic minikube commands
    Function                                  Command
    verify kubectl can talk to the cluster    kubectl config current-context (should return minikube)
    stop the cluster                          minikube stop
    delete the node                           minikube delete
    start a version-specific kube node        minikube start --vm-driver=none --kubernetes-version="v1.6.0"
    check node info                           kubectl get nodes
    kubernetes cluster info                   kubectl cluster-info
    kubectl binary for windows                kubectl.exe
    minikube 64-bit installer                 minikube-installer.exe

    Launching Kubernetes-Cluster on Google Cloud Platform

    Presuming you hold an active GCP account, follow:

    Go to Navigation menu --> Kubernetes engine --> clusters

    Provide all the details as per your requirements, like zone, number of CPUs, OS, and size of the cluster (the number of nodes/minions does not include the master, as that is taken care of by the platform behind the scenes), and create.

    Alternatively, there is a command-line option to create the cluster:

    $ gcloud container --project "gcp-gke-lab-7778" clusters create "cluster-1" \
    --zone "asia-south1-a" --username "admin" --cluster-version "1.14.10-gke.0" \
    --machine-type "f1-micro" --image-type "COS" --disk-type "pd-standard" --disk-size "100" \
    --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --num-nodes "3" --network "default" --subnetwork "default" --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard \
    --no-enable-autoupgrade --no-enable-autorepair
                                                                     
    A GKE cluster looks like: (screenshots: GCP-ClusterInfo, GCP-ClusterNode, GCP-3NodeCluster)

    In GCP command line:

    $ gcloud container clusters list
    $ gcloud container clusters get-credentials cluster-1 --zone asia-south1-a --project psyched-metrics-208409
        this will configure kubectl command-line access










    Launching K8S-cluster locally (1 Master and 2 Nodes)

    Note:
    not all versions of Docker support Kubernetes; you need to install a compatible version

    Pre-reqs:
    docker    - the container runtime
    kubelet   - the K8s node agent that runs on all nodes in your cluster and starts pods and containers
    kubeadm   - the admin tool that bootstraps the cluster
    kubectl   - the command-line util to talk to your cluster
    CNI       - installs support for container networking (Container Network Interface)

    check if your Linux is in permissive mode:
    $ getenforce
       should return Permissive

    Commands to set up:
    $ apt-get update && apt-get install -y apt-transport-https
    $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    $ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
    EOF

    if it fails with a PGP key error, try the following:
    $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 6A030B21BA07F4FB

    alternate way
    if you fail to add the k8s repository, add it manually:
    $ vi /etc/apt/sources.list.d/kubernetes.list (for Ubuntu)
    $ vi /etc/yum.repos.d/kubernetes.repo (for RHEL/CentOS)
         add --> deb http://apt.kubernetes.io/ kubernetes-xenial main
    or use this REPO
    $ apt-get update (for Ubuntu)
    $ yum update (for RHEL/CentOS)
    $ apt-get install docker.io kubeadm kubectl kubelet kubernetes-cni (for Ubuntu)
    $ yum install docker kubeadm kubectl kubelet kubernetes-cni --disableexcludes=kubernetes (for RHEL/CentOS)
    $ systemctl start docker kubelet && systemctl enable docker kubelet

    Cluster maintenance
    $ kubectl drain NodeName > moves your node to the SchedulingDisabled state
    $ kubectl uncordon NodeName > makes the node schedulable again

    Uninstall k8s-cluster
    $ kubeadm reset 
    $ sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*

    Deploy k8s cluster specifying pod network via kubeadm
    $ kubeadm init --apiserver-advertise-address=MasterIP  --pod-network-cidr=192.168.0.0/16 

    If it fails due to a lower Docker version, update Docker:
    docker.io (used for older versions, 1.10.x)
    docker-engine (used before 1.13.x)
    docker-ce (used for higher versions, since 17.03)
    $ apt-get install docker-engine

    if it fails with [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    $ systemctl enable kubelet.service

    if it fails with [ERROR Swap]: running with swap on is not supported. Please disable swap.
    $ swapoff -a

    if it fails with [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
    use the command with the flag --ignore-preflight-errors=NumCPU
    this will skip the check. Please note this is OK to use in dev/test only, not in production.

    Run again 
    $ kubeadm init --apiserver-advertise-address=MasterIP --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU

    and the result will be like: (kubeadm init output screenshot)
    now grab the three commands from the output and run them as a regular user, to configure your account on the master with admin access to the API server from a non-privileged account

    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

    $ kubectl get nodes
    $ kubectl get pods --all-namespaces


    if your normal user is not a sudoer then do this:
    $ vi /etc/sudoers
                  add entries like the following:
                    root ALL=(ALL) ALL
                    red ALL=(ALL) NOPASSWD:ALL

    if kubectl commands still fail with the error below:
    The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
    consider checking the kubelet status by running the command below; it should be active and running
    $ sudo systemctl status kubelet
        if it is inactive,
    check the swap status; if swap is enabled, disable it (sudo swapoff -a) and restart the kubelet service

    the status remains Pending until we create the pod network

    to add a pod network: you can install only one pod network per cluster; use calico, weave, flannel, or any other CNI provider
    $ kubectl apply --filename https://git.io/weave-kube
    $ kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

    if at all you fail to deploy the pod network, you might need to do the following:
    • sudo swapoff -av
    • sudo systemctl restart kubelet
    • sudo reboot now

    now check the node status again; you will see them Ready & Running

    now it is time to run the minions

    go to Node2 & Node3 and run the command given by the K8s cluster when it was initialized

    Ensure you have fulfilled the pre-reqs (docker/kubectl/kubeadm/kubelet/kubernetes-cni)

    $ kubeadm join 192.168.0.104:6443 --token zo6fd9.j26yrdb9qlu1190n --discovery-token-ca-cert-hash sha256:c165160bd18b89ab7219ec5bd5a60cfca24887ee816c257b84451c9feaf0e05a

    if it fails while joining the cluster with [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
    provision your nodes with the following command:
    $ echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
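    To make this setting survive a reboot, a common approach (the file name k8s.conf is conventional, not from the original post) is:

    $ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    $ sudo sysctl --system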

    at times kubectl fails to give output for any command and results in the error:
    Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    you may have a proxy problem; try running the following commands:
    $ unset http_proxy
    $ unset https_proxy
    and repeat your kubectl call

    check the status from any node and you will see a master & workers;
    once you do a deployment, Pods will be spread across the workers

    kubectl helpful commands
    Function                                 Command
    initialize cluster                       kubeadm init --apiserver-advertise-address=MASTERIP --pod-network-cidr=192.168.0.0/16
    verify k8s cluster-info                  kubectl cluster-info
    IP address show                          ip a s
    reset cluster                            kubeadm reset -f && rm -rf /etc/kubernetes/
    delete tunl0 iface                       modprobe -r ipip
    deregister a node from cluster           kubectl drain nodeName
                                             kubectl drain nodeName --ignore-daemonsets --delete-local-data
                                             kubectl delete node nodeName
    listing namespaces                       kubectl get namespace
    setting namespace preference             kubectl config set-context --current --namespace=<namespace-name>
    validate current namespace               kubectl config view --minify | grep namespace
    investigate any object                   kubectl describe node/deployment/svc <objectName>
    investigate kubelet service              sudo journalctl -u kubelet
    exposing deployment as service           kubectl expose deployment my-deployment --type=NodePort --name=my-service
                                             kubectl expose deploy my-deployment --port=9443 --target-port=61002 --name=my-service --type=LoadBalancer
    scaling your deployment                  kubectl scale --current-replicas=3 --replicas=4 deployment/my-deployment
    all possible attributes of an obj        kubectl explain pod --recursive
    wide details of running pods             kubectl get pods -o wide
    delete a pod forcefully                  kubectl delete pod mypodName --grace-period=0 --force --namespace myNamespace
    delete bulk rsrc from a namespace        kubectl delete --all po -n myNamespace
    open a bash terminal in pod app          kubectl exec -it app -- bash
    create a yaml manifest without           kubectl create deploy web --image=nginx --dry-run -o yaml > web.yaml
    sending it to the cluster
    edit deployment web at runtime           kubectl edit deploy/web

    Br
    Punit