
02 June 2019

Deploying an application on a Kubernetes cluster

In today's world, containers have become a prominent deployment mechanism. Let's see how you can deploy your containerized application to a k8s cluster backed by a cloud provider. First, create an image using any container service; your application should adhere to all the essential requirements to run inside a container. Then build your application image.
For example: if I have to deploy my application on Tomcat inside a Docker image, then every time I launch that image it will start my Tomcat instance, deploy the application bundled in it, and run as a container service, which I can further deploy on my docker-swarm or Kubernetes cluster.
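As a minimal sketch of such an image (myapp.war is a placeholder for your own build artifact; the official tomcat base image is assumed):

# Dockerfile - bundle a webapp into Tomcat so the container starts Tomcat with the app deployed
FROM tomcat:9.0
COPY myapp.war /usr/local/tomcat/webapps/   # myapp.war is a placeholder artifact
EXPOSE 8080
CMD ["catalina.sh", "run"]

Build and test it locally with docker build -t punitporwal07/myapp:0.1 . and docker run -p 8080:8080 punitporwal07/myapp:0.1 before pushing it to a registry.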
NOTE: you need a cloud provider or another controller that knows how to allocate an IP and route traffic into the nodes, as GKE does in GCP. If you are on-prem, k8s has no idea what infrastructure exists on your network, so you need to use the NodePort approach to generate endpoints or set up your own container network.
To achieve this, your application should be encapsulated in a Docker image with all the required resources. You can always scale your deployment and perform versioning of your deployment.
When you deploy an image it should be available in a registry so that it can be referenced in your deployment manifest. Then you expose your application to the outside world using the Service resource of k8s. You can use any cloud provider's ready-made container platform, like GKE, EKS, or AKS, to launch a k8s cluster and deploy containers directly onto it.

in short:
  1. Package your app into a container image (docker build)
  2. Run the container locally on your machine (optional)
  3. Upload the image to a registry/hub (docker push)
  4. Create a container cluster (cluster init)
  5. Deploy your app to the cluster (deployment.yaml)
  6. Expose your app to the Internet (service.yaml)
  7. Scale up your deployment (kubectl scale --replicas)
  8. Deploy a new version of your app

# creating a k8s-cluster in GKE using CLI, else follow this to set up a cluster locally
$ gcloud container clusters create mycluster --num-nodes=2 --zone=us-central1-c

# get the instance status
$ gcloud compute instances list


# running a container-image on a k8s-cluster as a deployment
$ kubectl run hello --image=punitporwal07/myapp:0.1

# to see the Pod created by the Deployment, run the following command
$ kubectl get pods


At times you will see a different status after pulling the image or while fetching the pod status:

ErrImagePull
ImagePullBackOff
CrashLoopBackOff
Running
There are mainly three possible reasons behind such image-pull failures:
  1. The image tag is incorrect
  2. The image doesn't exist (or is in a different registry)
  3. Kubernetes doesn't have permissions to pull that image
whereas CrashLoopBackOff tells us that Kubernetes is trying to launch your pod, but one or more of its containers is crashing or getting killed.

Let's describe the pod to get some more information:

$ kubectl describe pod ans-7974b8cc6b-dvsgz   # replace with your pod's name

Deleting deployment/container from K8s cluster :

First of all, you need to get the structure of your deployment resources:

$ kubectl get all
$ kubectl delete deployment.apps/<..>

# since it has replicasets enabled, it keeps spinning up another pod for your deployment,
# so you need to delete the deployment in order to stop its cycle
$ kubectl get deployments
$ kubectl delete deployment myapp

# and redeploy a fresh image
$ kubectl run myapp --replicas=3 --image=punitporwal07/myapp:0.1 --port=8000

# expose your application to the internet in the form of a service
$ kubectl expose deployment myapp --type=LoadBalancer \
   --port 8000 --target-port 8080 --name=myservice
$ kubectl get service


Once your service is exposed, you will see that k8s allocates an external IP to your service.

NOTE: instead of deleting a GKE cluster, to save cost it is recommended to resize it to 0 with the following command:
$ gcloud container clusters resize mycluster --size=0 --zone=us-central1-c

Then scale it back up later by running it with a non-zero value for the size flag.
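For example, scaling the same cluster back up (assuming the cluster name and zone used above):

$ gcloud container clusters resize mycluster --size=2 --zone=us-central1-c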

Br,
Punit

24 February 2018

Kubernetes: Orchestration framework for containers

Kubernetes is an open-source tool, released by Google after running its predecessor, Borg, internally for over 10 years. It is a platform to work with containers and an orchestration framework for Docker containers which gives you: deployment, scaling, monitoring.

K8s helps in moving from host-centric infrastructure to container-centric infrastructure

In the virtualization world the atomic unit of scheduling is the VM; in Docker it is the container, and in Kubernetes it is the Pod.

Keys of Kubernetes:
- We describe our application requirements in k8s YAMLs.
- It exposes containers as services to the outside world.
- Kubernetes follows a client-server architecture.
- In k8s we enforce desired-state management via a manifest.yaml file, in which we feed the cluster services the desired state of our infrastructure.
- On the other side we have workers. A worker is a container host, and it has a kubelet process running which is responsible for communicating with the k8s cluster services.

**Kubernetes rule: a Pod cannot be exposed directly; it has to be exposed via a Service.**

deployments > pods > containers

For example, you can have two services: one service would contain nginx and mongoDB, and another service would contain nginx and redis. Each service has an IP or service point which can be connected to by other applications. Kubernetes is then used to manage these services.


Resources in kubernetes

Minion − the node on which all the services run. You can have many minions running at one point in time. Each minion will host one or more Pods.

Pod − Pods are mortal and the smallest unit of deployment in the k8s object model; a Pod is like hosting a service. Each Pod can host a different set of Docker containers. The proxy is then used to control the exposure of these services to the outside world. You typically do not create Pods yourself; they are created by ReplicaSets.

ReplicaSet − ReplicaSets are created by Deployments; these Deployments contain the declaration of the containers you want to run in the cluster, like image/tag, environment variables, and data volumes.

Kubernetes has several components in its architecture.

DaemonSet -  ensures that all Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Labels − use labels in your deployment manifest to target specific pods; only pods with the specific labels you have defined in your deployment manifest will be manipulated.

etcd − k8s objects are persisted here. This component is a highly available key-value store used for storing shared configuration and service discovery. Here the various applications are able to connect to the services via the discovery service.

kube-apiserver − the API server, used to orchestrate the Docker containers.
kube-controller-manager − used to control the Kubernetes services.
kube-scheduler − used to schedule the containers on hosts.
kubelet − used to control the launching of containers via manifest files from the worker host (it talks to the k8s cluster).
kube-proxy − used to provide network proxy services to the outside world.
Flannel − a back-end network which is required for the containers.

Advanced resources

context - a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl: all kubectl commands run against that cluster.
ConfigMap - an API object that lets you store configuration for your other objects or applications − connection strings, analytics keys, service URLs − and mount it in volumes or consume it as environment variables (see the sketch after this list).
sidecar - just a container that runs in the same Pod as the application container; because it shares the same volume and network as the main container, it can "help" or enhance how the application operates. Common examples of sidecar containers are log shippers, log watchers, and monitoring agents; aka a utility container.
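As a minimal hedged sketch of a ConfigMap (the names and values here are illustrative, not from a real project), stored and then consumed as an environment variable:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config             # illustrative name
data:
  SERVICE_URL: "http://analytics:8080"
...

# consumed inside a Pod/Deployment container spec:
#   env:
#     - name: SERVICE_URL
#       valueFrom:
#         configMapKeyRef:
#           name: app-config
#           key: SERVICE_URL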

helm − helm is a package manager for k8s which allows you to package, configure & deploy applications & services to a k8s-cluster.
helm chart − helm packages are called charts, which consist of a few YAML configs and some templates that are rendered into k8s manifest files.
helm chart repository − packaged charts are made available and can be downloaded from chart repos.

Mandatory Fields while writing a manifest file
In the manifest file for any Kubernetes object you want to create, you'll need to set values for the following fields (a minimal example follows this list):
apiVersion - which version of the Kubernetes API you're using to create this object. For more on apiVersions, see this > Different api versions to use in your manifest file
kind - What kind of object you want to create.
metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
spec - What state you desire for the object.
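Putting the four fields together, a minimal hedged example (reusing the image from the earlier post; the names are illustrative):

---
apiVersion: v1                       # API version
kind: Pod                            # kind of object
metadata:
  name: myapp-pod                    # unique name within the namespace
spec:
  containers:                        # desired state: one container
    - name: myapp
      image: punitporwal07/myapp:0.1
...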

Service in kubernetes
There are four ways to make a service accessible externally in a Kubernetes cluster:
  • NodePort: a deployment that needs to be exposed as a service to the outside world can be configured with the NodePort type. In this method, when the deployment is exposed, each cluster node opens a random port in the default range 30000-32767 on the node itself (hence the name) and redirects traffic received on that port to the underlying service endpoint generated when you exposed your deployment. (The combination of NodeIP + port is the NodePort.) You access your app/svc as http://public-node-ip:nodePort (see the sketch after this list).
  • ClusterIP: the default and most basic type, which gives the service its own IP that is only reachable within the cluster.
  • LoadBalancer: an extension of the NodePort type − this makes the service accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on. The load balancer redirects traffic to the node port across all the nodes. Clients connect to the service through the load balancer's IP.
  • Ingress: a radically different mechanism for exposing multiple services through a single IP address. It operates at the HTTP level (network layer 7) and can thus offer more features than layer-4 services can.
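A minimal hedged sketch of a NodePort service (ports reused from the expose example earlier; the selector label is an assumption and must match your pod labels):

---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
  selector:
    app: myapp                # must match the labels on your pods
  ports:
    - port: 8000              # service port inside the cluster
      targetPort: 8080        # container port
      nodePort: 30080         # optional; must be in 30000-32767, auto-assigned if omitted
...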
Network in kubernetes

Kubernetes' default bridge is called cbr0, like docker0 in Docker.

There are 3 fundamental requirements in the k8s networking model (a quick check follows this list):
  • All the containers can communicate with each other directly without NAT.
  • All the nodes can communicate with all containers (and vice versa) without NAT.
  • The IP that a container sees itself as is the same IP that others see it as.
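A quick hedged way to verify the first requirement (assuming a pod named mypod and an image that ships ping):

$ kubectl get pods -o wide                         # note each pod's IP
$ kubectl exec -it mypod -- ping <other-pod-ip>    # pod-to-pod, no NAT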
Pods Networks
Implemented by CNI plugins
pod network is big and flat
you have one IP per Pod
every pod can talk to any other pod

Nodes Networks
All nodes need to be able to talk
kubelet <-> API Server
every node on the n/w has the kube-proxy & kubelet processes running
this n/w is not implemented by k8s.

Service Networks
the IP of your service is not tied to any interface
kube-proxy in IPVS mode creates a dummy interface on the service n/w, called kube-ipvs0,
whereas kube-proxy in IPTABLES mode does not.

Storage in kubernetes
there are three types of access mode:
RWO : ReadWriteOnce - only one pod in the cluster can access this volume
RWX : ReadWriteMany - all pods in the cluster can access data from this volume
ROX : ReadOnlyMany  - all pods in the cluster can only read data from this volume

Not all volume types support all modes.

To claim the storage, 3 properties have to match between the PersistentVolume & PersistentVolumeClaim:

1. accessMode
2. storageClassName
3. capacity 

Have a look at the sample persistentVolume & persistentVolumeClaim below to understand a storage manifest.
After you create the persistentVolume & persistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
Until then the pv is not claimed by any pvc and thus stays available, waiting for a pvc to claim it.
After you deploy the persistentVolume (pv) & persistentVolumeClaim (pvc), you can assign them to your running pod using the kinds below (a pod consuming the claim is sketched after the manifests).

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  capacity:
    storage: 10Gi
  hostPath:
    path: "/mnt/mydata"
...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
...
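To complete the picture, a minimal hedged sketch of a pod consuming the pvc above (pod, container, and mount names are illustrative):

---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myapp
      image: punitporwal07/myapp:0.1
      volumeMounts:
        - mountPath: "/data"         # where the volume appears inside the container
          name: myvolume
  volumes:
    - name: myvolume
      persistentVolumeClaim:
        claimName: pvc               # the claim created above
...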

Deployment in kubernetes

Deployment is all about scaling and updating your release. You deploy your containers inside pods and scale them using a ReplicaSet. Merely updating the ReplicaSet will not do a rolling update; we need to add a strategy to the deployment manifest to get the job done:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 1

An ideal deployment manifest will look like deployment.yml (a sketch follows below).
It is the deployment manifest you need to update every time you want to scale your application: tune your number of replicas, or to update the app modify your image version; just tweak the deployment manifest and it will redeploy your pods, communicating with the apiServer:
$ kubectl apply -f deployment.yml
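For illustration, a minimal hedged deployment.yml along these lines (labels, replica count, and ports are assumptions, reusing the image from earlier):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: punitporwal07/myapp:0.1
          ports:
            - containerPort: 8080
...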



Autoscaling in kubernetes
when demand goes up, spin up more Pods, but not via replicas this time; the Horizontal Pod Autoscaler is the answer

IF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 4
...

THEN
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
...
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeploy
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
...
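The same autoscaler can also be created imperatively, which is handy for a quick test:

$ kubectl autoscale deployment mydeploy --min=1 --max=10 --cpu-percent=50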

kubernetes cheatsheet

# kubectl autocomplete
$ echo "source <(kubectl completion bash)" >> ~/.bashrc

# initialize cluster
$ kubeadm init --apiserver-advertise-address=MASTERIP --pod-network-cidr=192.168.0.0/16

# verify k8s cluster-info
$ kubectl cluster-info

# IP address show
$ ip a s

# reset cluster
$ kubeadm reset -f && rm -rf /etc/kubernetes/

# delete tunl0 iface
$ modprobe -r ipip

# deregister a node from cluster (scheduling disabled)
$ kubectl drain nodeName
$ kubectl drain nodeName --ignore-daemonsets --delete-local-data --force
$ kubectl delete node nodeName

# scheduling enabled again
$ kubectl uncordon nodeName

# listing namespaces
$ kubectl get namespace

# setting namespace preference
$ kubectl config set-context --current --namespace=<namespace-name>

# validate current namespace
$ kubectl config view --minify | grep namespace

# investigate any object
$ kubectl describe node/deployment/svc <objectName>

# investigate kubelet service
$ sudo journalctl -u kubelet

# exposing deployment as service
$ kubectl expose deployment my-deployment --type=NodePort --name=my-service
$ kubectl expose deploy my-deployment --port=9443 --target-port=61002 --name=my-service --type=LoadBalancer

# scaling your deployment
$ kubectl scale --current-replicas=3 --replicas=4 deployment/my-deployment
$ kubectl scale deployment/my-deployment --replicas=2 -n my-namespace

# all possible attributes of an object
$ kubectl explain pod --recursive

# wide details of running pods
$ kubectl get pods -o wide

# delete a pod forcefully
$ kubectl delete pod mypodName --grace-period=0 --force --namespace myNamespace

# delete bulk resources from a namespace
$ kubectl delete --all po/podName -n myNamespace

# open a bash terminal in pod 'app'
$ kubectl exec -it app -- bash

# create a yaml manifest without sending it to the cluster
$ kubectl create deploy web --image=nginx --dry-run -o yaml > web.yaml

# edit deployment web at runtime
$ kubectl edit deploy/web

Br
Punit

11 February 2018

JENKINS: a continuous integration tool

Getting started with Jenkins is a 3-step process:

- Install Jenkins
- Download the required plugins
- Configure the plugins & create project

# Installing Jenkins
$ wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
$ sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ \
  > /etc/apt/sources.list.d/jenkins.list'
$ sudo apt-get update
$ sudo apt-get install jenkins
the above steps will automatically start Jenkins

# Starting Jenkins when configured as a service
$ sudo systemctl start jenkins
$ sudo systemctl status jenkins

# Upgrade Jenkins
$ sudo apt-get update
$ sudo apt-get install jenkins

# If your /etc/init.d/jenkins file fails to start jenkins,
edit /etc/default/jenkins to replace HTTP_PORT=8080 with HTTP_PORT=8081
or
$ cd /var/share/jenkins/
$ java -jar jenkins.war // this will set up a new jenkins from the beginning

# I prefer to use as below, this will start old jenkins without any delay
$ service jenkins restart

# Running docker container of Jenkins
$ docker pull punitporwal07/jenkins
$ docker run -d -p 8081:8080 -v jenkins-data:/software/jenkins punitporwal07/jenkins:tag
  to understand this command in brief check here: Docker


Useful Plugins
you can install them from the Manage Plugins tab, or push them from the back-end into the plugins dir
- Build pipeline: to chain multiple jobs
- Delivery pipeline: this will visualize deliver pipelines (upstream/downstream)
- Weblogic deployer: this is used to deploy a jar/war/ear to any weblogic target
- Deploy to container: to deploy war/ear to a tomcat/glassfish container
- Roles strategy: this plugin allows you to assign roles to different users of jenkins

Automate deployment on Tomcat using Jenkins pipeline
(benefits.war as example on tomcat 8.x for Linux)
- install Deploy to container plugin, restart jenkins to reflect changes
- create new project/workspace & select post build action as: Deploy war/ear to a container
- add properties as below:-
   - war/ear files: **/*.war
   - context path: benefits.war (provided you need to push this war file into your workspace)
   - select container from drop-down list: tomcat 8.x
   - Add credentials: tomcat/tomcat (provided you have added this user in conf/tomcat-user.xml with all the required roles)
   - Tomcat URL: http://localhost:8080/
- apply/save your project and build it to validate the result.

Automate deployment on Weblogic using jenkins pipeline
(benefits.war as example on Weblogic 10.3.6)
- install Weblogic deployer plugin, restart jenkins to reflect changes
- configure the plugin,
- create new project/workspace
- Add post build action as: Deploy the artifact to any weblogic environment (if no configuration has been set, the plugin will display an error message, else it will open up a new window)
- add properties as below:-
   - Task Name: give any task name
   - Environment: from drop down list select your AdminServer ( provided you have created configuration.xml and added it to Weblogic deployer Plugin)
   - Name: The name used by WebLogic server to display the deployed component
   - Base directory of deployment : give path to your deployment.war or push it to your workspace and leave it blank
   - Built resource to deploy: give your deployment.war name
   - Targets: give target name
- Apply/save your project and build it to validate the result.

k/r,
P

29 August 2018

Quick guide on Amazon Web Services for beginners

Amazon was the first company to come up with the idea of bundling all 7 layers of the OSI model in the form of services, aka web services, built on compute capabilities. At the time of writing this article there are more than 90 services in AWS.

There are 4 core foundation elements:

Compute: EC2, PaaS Elastic Beanstalk, FaaS Lambda, Auto Scaling
Storage: S3, Glacier (used to archive), elastic object/block storage, Elastic File System
Database: DBaaS, custom DB, MySQL
Network: VPC, CloudFront, Route53 for DNS, API Gateway, Direct Connect

Auto Scaling is especially useful: thanks to its auto-provisioning, it can help with increased demand as well as scale down on reduced demand.

Here are some topics which may help you start your AWS journey.

What is the difference b/w EC2 and Elastic beanstalk ?
With an EC2 instance you manually launch an instance and tell the system what kind of OS, memory/CPU, and other resources you want to spin up, whereas with Beanstalk you tell the system your requirements and it spins up all suitable and eligible resources for you.
ex: if you have a .NET application, you tell the system and it will launch all the app and DB instances required for a .NET application to work.

What is an EBS Volume?
An Elastic Block Store volume is a network drive you can attach to your instances while they are running. It allows your instances to persist data even after their termination. EBS volumes can only be mounted to one instance at a time and are bound to a specific availability zone, i.e. you cannot attach an EBS volume in one zone to an instance in another zone; instead you use a method called a snapshot.
Think of them as a "USB stick" but attached at network level.

What is Geo targeting in cloud front ?
It works on the principle of caching and is handled globally, providing data to users from the nearest server (the URL remains the same; you can modify and customize the content).
In geo targeting, CloudFront detects the country code and forwards it to the origin server; the origin server then sends the region-specific content to the cache server, where it is stored, and from then on users get content defined specifically for their region/country.

how do you upgrade or downgrade a system with near zero downtime ?
- Launch another system in parallel, perhaps with bigger EC2 capacity
- Install all the software/packages needed
- Launch the instance and test locally
- If it works, swap the IPs; if using Route 53, update the IPs and it will send traffic to the new servers with 0 downtime

What is Amazon S3 bucket ?
An Amazon S3 bucket is a public cloud storage resource backed by AWS's Simple Storage Service (S3), an object storage offering.
S3 buckets are similar to file folders, store objects, which consist of data and its descriptive metadata.
An S3 user first creates a bucket in an AWS region of choice and gives it a globally unique name. AWS recommends that customers choose regions geographically close to them to reduce latency and costs.
Once the bucket has been created, the user then selects a tier for the data, with different S3 tiers having different levels of redundancy, prices and accessibility. One bucket can store objects from different S3 storage tiers.
The user then specifies access privileges for the objects stored in a bucket via IAM mechanisms, bucket policies, and access control lists.
User can interact with an S3 bucket via the AWS Management Console, AWS CLI or application programming interfaces (APIs).
There is no limit to the number of objects a user can store in a bucket, though buckets cannot exist inside other buckets.
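As a quick hedged sketch of that flow with the AWS CLI (the bucket name is a placeholder and must be globally unique):

$ aws s3 mb s3://my-unique-bucket-name --region eu-west-2    # create a bucket
$ aws s3 cp index.html s3://my-unique-bucket-name/           # upload an object
$ aws s3 ls s3://my-unique-bucket-name/                      # list objects in the bucket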

What is Amazon CloudWatch? 
A service from which you can track all your infrastructure logs in one place.

What if provisioned service is not available in region/country ?
Not all services are available in all regions; it depends on the popularity of the service and on requirements. Always find the nearest region to serve your customers, else you will face high latency.

what is Amazon Elastic container service ?
- It is highly scalable.
- It is a high-performance container-management service.
- It allows you to run applications on managed clusters of EC2 instances.
- It can be used to launch or stop container-enabled applications.

some useful services when trying to achieve CI/CD:

CodeCommit: the source repository (an S3 bucket or GitHub also work) | used for version control
CodeDeploy: to deploy a sample/custom deployment to EC2 instances
CodePipeline: service that deploys, builds & tests your code
  • for continuous deployment we need to create/enable versioning
  • configure | Set-AWSCredentials for the user by providing an AccessKey and SecretKey
                                                                  AccessKey <your-access-key>
                                                                  SecretKey <your-secret-key>

How to configure AWS PowerShell (if working on Windows): download it from here

- services > IAM > Users > Create a user > security Credentials > create Access Key > Download the File (*.csv)
then Launch AWS PowerShell or AWS Configure and give:

- Access key
- Secret Key
- Region

Input the keys you get from the downloaded .csv file, and the region depending on your geographical location.
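On Linux/macOS the equivalent is aws configure, which prompts for the same values (the keys below are placeholders from your .csv):

$ aws configure
AWS Access Key ID [None]: <access-key-from-csv>
AWS Secret Access Key [None]: <secret-key-from-csv>
Default region name [None]: eu-west-2
Default output format [None]: json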


How to use CodeCommit: used for version control, and a useful tool for developers for CI/CD

First thing is to get AWSCredentials for your AWS environments
- services > IAM  > Users > codecommit

now configure your credentials for codecommit

$ cd C:\Program Files (x86)\AWS Tools\CodeCommit
$ C:\Program Files (x86)\AWS Tools\CodeCommit> .\git-credential-AWSS4.exe -p codecommit

create a Repository
  • services > codecommit > create a repo(MyRepo) > cloneURL via Https
$ git clone 'https-clone-url'  (other developers do the same)
$ git config user.email 'mailId'
$ git config user.name 'name'
   (start working)
$ notepad index.html
$ git status
$ git add index.html
$ git status
$ git commit -m 'initial commit'
$ git push origin master (it will connect via https url and push the file to MyRepo)
$ git log

How to use CodeDeploy to deploy an app: automate deployments and add new features continuously

The first thing is to set up a CodeDeploy role for the instance,
then create another role for the service.
go to
  • services > EC2 > LaunchInstance >
create an application
  • services > codeDeploy > create App > custom Deployment > skipWalkThrough > GiveDetails > App-DemoApp Group-DemoAppInstance > Amazon EC2 Instance > Key-Name Value-Dev > DeploymentConfig-OneAtATime > role-CDServiceRole > createApplication
How to use CodePipeline: used to deploy code direct from S3/GitHub/codeCommitRepo
  • services > codePipeline > create > name-pipeline > source-gitHub > connect > Add Repo-aws-codeDeploy-Linux > branch-Master > buildProvider- noBuild > deploymentProvider-AWS CodeDeploy >  App-DemoApp Group-DemoAppInstance > roleName-AWS-CodePipeline-Service > create

How to use CloudFormation to setup Jenkins Server: using jenkins-server template 
  • services > cloudFormation > CreateNewStack > upload the template > stackName-Jenkins > microInstance > dropdownList > IPrange-0.0.0.0/0 > acknowledge > complete
now you can see a new EC2 instance created and running as the Jenkins server, ready to use

Importantly, how do you connect to your EC2 Linux instance from Windows?
For that you need PuTTY and PuTTYgen (since PuTTY won't recognize the keypair.pem provided by AWS),
so you need to convert keypair.pem to keypair.ppk using PuTTYgen:
launch-puttygen > Load-*.pem > savePrivateKey
launch-putty > hostname-aws-instance-publicName > Data-autoLogin-ec2-user > SSH > Auth > supply generated *.ppk file > open session

now unlock Jenkins by: sudo cat /var/lib/jenkins/secrets/initialAdminPassword
-------------------------------------
Installing docker on AWS-EC2-Instance
#sudo yum update -y
#sudo amazon-linux-extras install docker
#sudo service docker start
#sudo usermod -a -G docker ec2-user (adding ec2-user to docker group)
-------------------------------------

k/r
P

16 March 2020

RANCHER - A cluster management platform

Rancher is a 100% free and open-source software platform that enables enterprises to run containers in production. It is a complete software stack for teams adopting containers, with the capability to import your k8s-cluster no matter where it comes from. It is a multi-cluster, multi-tenancy tool. It addresses the operational and security challenges of managing multiple k8s-clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.
Resources like Istio, pipeline, prometheus, grafana are integrated with Rancher.

Getting started with Rancher via Docker:

$ docker pull rancher/rancher
$ docker run -d --restart=unless-stopped -p 61090:80 -p 61091:443 \
  -v /software/bea/rancher:/var/lib/rancher --privileged rancher/rancher:latest

Access the Rancher console by hitting https://localhost:61091/
Follow the welcome instructions on the screen and it will land you on the Global clusters screen.
Once your setup is complete, start adding any of your k8s-clusters.

in this example I am going to add my vanilla k8s-cluster which is running on-prem.

navigate to Add-cluster > other cluster > give a name to your cluster 
Rancher will generate commands for you to import your cluster, which you need to run in your cluster CLI as below; this will deploy the required resources:

$ kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user [USER_ACCOUNT]
$ kubectl apply -f https://localhost:61091/v3/import/s8dkk7demo6fp2f6qffhmlkr.yaml

# if you get a certificate related error try running on insecure channel
$ curl --insecure -sfL https://localhost:61091/v3/import/s8dkk7demo6fp2f6qffhmlkr.yaml | \
kubectl apply -f -

# the following resources will get created
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-4c43de3 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created


You will be able to see your cluster added into Rancher now.
keep reading.. lot more to come...

k/r
P

18 August 2019

Rundeck - Runbook Automation tool

Rundeck is an open-source tool that helps you automate & schedule your operational jobs. It provides a number of features, like scheduling jobs, automating execution of Ansible playbooks, and notifying you about the status of your jobs; email notification is my favourite.
Configuring Rundeck is straightforward: you can install Rundeck as a service on your Linux host or run it as a Docker image.

quick setup
$ wget http://repo.rundeck.org/latest.rpm
$ rpm -Uvh latest.rpm
$ yum install rundeck java
$ service rundeckd start
$ service rundeckd status
 rundeckd.service - SYSV: rundeckd, providing rundeckd
   Loaded: loaded (/etc/rc.d/init.d/rundeckd; bad; vendor preset: disabled)
   Active: active (running) since Mon 2020-08-17 13:23:14 BST; 20h ago
$ tail -f /var/log/rundeck/service.log
[2020-08-14T09:02:28,539] INFO  rundeckapp.BootStrap - Rundeck is ACTIVE: executions can be run.
[2020-08-14T09:02:28,635] WARN  rundeckapp.BootStrap - [Development Mode] Usage of H2 database is recommended only for development and testing
[2020-08-14T09:02:28,899] INFO  rundeckapp.BootStrap - Rundeck startup finished in 646ms
[2020-08-14T09:02:28,991] INFO  rundeckapp.Application - Started Application in 25.616 seconds (JVM running for 28.068)
Grails application running at http://localhost:4440 in environment: production

quick setup as a docker Image and config customization
$ docker pull rundeck/rundeck

# to change the default port (4440) if it is blocked, modify the three files below
$ vi /etc/rundeck/profile
$ vi /etc/rundeck/framework.properties
$ vi /etc/rundeck/rundeck-config.properties

# changing the default password of rundeck
$ cd /etc/rundeck/
edit realm.properties and change the admin values to something new

# adding a new user
$ cd /etc/rundeck/
$ sudo vi realm.properties
(add following lines next to admin:admin,user,admin line)
        user1: user1pass,user,admin,architect,deploy,build
   where user,admin,architect,deploy,build are different roles we can assign to user1


Now login to the rundeck console with admin access and navigate to

settings > Access Control > + Create ACL Policy

add the following two scopes to give, as an example, read access to user user1

# Project scope
description: user1 with read access to projects.
context:
  project: '.*'
for:
  resource:
    - equals:
        kind: job
      allow: [read] # allow to read jobs
    - equals:
        kind: node
      allow: [read] # allow to read node sources
    - equals:
        kind: event
      allow: [read]
  job:
    - allow: [read] # allow read of all jobs
  adhoc:
    - deny: [run] # don't allow adhoc execution
  node:
    - allow: [run] # allow run on nodes with the tag 'mytag'

by:
  group: admin

---
# Application scope
description: application level ACL.
context:
  application: 'rundeck'
for:
  resource:
    - equals:
        kind: project
      allow: [read]
    - equals:
        kind: system
      allow: [read]
    - equals:
        kind: system_acl
      allow: [read]
    - equals:
        kind: user
      allow: [admin]
  project:
    - match:
        name: '.*'
      allow: [read]

by:
  group: admin

happy rundecking!