
30 April 2020

Architecting hybrid infrastructure with Anthos

A multi-cloud service platform
Anthos is based on Kubernetes. It runs on-premises, supports multi-cloud and hybrid-cloud environments, and is fully managed by Google. It lets you write your workloads once and run them anywhere, gives you the flexibility to move on-prem apps to the cloud when you are ready, and lets you keep using the technology you already use while improving security.

"fun fact - Anthos means flower in Greek. Flowers grow on-premises, but they need rain from the cloud to flourish"

Anthos technology stack
  • Kubernetes Engine
  • GKE on-prem
  • Anthos Config Management
  • Migrate for Anthos

Let's take an example to understand this more clearly. These days every company is trying to move its infrastructure from on-premises to the cloud. Developing and modifying your application directly in the cloud is not a good idea; it is better if the same application can first be modified and validated in your on-premises infrastructure, gaining some of the benefits of the cloud while you still maintain it on-prem. You may also want the leverage to extend your on-premises environment into the cloud. For both of these needs, Anthos does the work for you.
  • It provides a platform to manage applications in a hybrid cloud environment.
  • It helps manage hybrid infrastructure through one single control plane.
which benefits you in:
  • Write once, deploy in any cloud.
  • Consistency across environments.
  • Increased workload mobility.
  • Avoiding vendor lock-in.
  • A technology stack that runs in data centers, next to the enterprise workloads organisations currently run on-premises.
General Idea

In Anthos you set up an admin workstation, which includes an admin cluster as well as user clusters, so it's like a cluster within a cluster. The admin cluster takes care of the user clusters, which eventually means the admin control plane takes care of the user control planes. The diagram will help you understand this more clearly.

Consider the Kubernetes master as the control plane in Anthos:

  • The admin control plane handles all administrative API calls to and from GKE on-prem.
  • Use gkectl to create, manage, and delete clusters.


You will set up the admin workstation as part of the on-prem installation of GKE:

  • It automates deployment on top of vSphere and is shipped as a virtual appliance.
  • Simple CLI installation with a local master.
  • DHCP or static IP allocation support.
  • Integration with an existing private or public container registry.
GKE on-prem Networking

You have two modes in GKE on-prem networking:

1. Island mode - pod IP addresses are not routable in the data center, i.e. your on-prem services cannot reach the pods directly; instead you need to use endpoints, as we do in Kubernetes (expose an endpoint to reach the pods).
2. Flat IP mode - in this mode you can reach the pods directly, and it allows you to set up routing tables.
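In island mode, the usual way to make pods reachable from outside the cluster is to put a Service in front of them. A minimal sketch (the name, label, and ports here are all hypothetical): a NodePort Service exposes the pods on a port of every node's IP, which is routable from the data center even though the pod IPs are not.

```yaml
# Hypothetical NodePort Service for island mode: on-prem clients reach the
# pods via <any-node-ip>:30080, since node IPs are routable but pod IPs are not.
apiVersion: v1
kind: Service
metadata:
  name: frontend-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: frontend           # assumed pod label
  ports:
    - port: 80              # Service port inside the cluster
      targetPort: 8080      # assumed container port
      nodePort: 30080       # port opened on every node
```

Clients inside the cluster still use the Service's cluster IP; only external on-prem traffic needs the node port.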
Data plane hybrid connectivity
There are 3 ways to connect a GKE on-prem cluster to Google's network (on-prem to cloud):
1. Cloud VPN - private IP access over the internet, with static or dynamic routing over BGP.
2. Partner Interconnect - private IP access through a partner; data does not traverse the public internet.
3. Dedicated Interconnect - private IP access over a direct physical connection to Google's network, for 10 Gbps connections and above.
So, economically, you need to calculate which way you should connect to Google's network, considering latency and region placement.

In my lab I exercised this using Qwiklabs, provided by the Google Cloud training team.

Search for "AHYBRID020 Managing Hybrid Clusters using Kubernetes Engine" and launch the lab.
Get the credentials and log in to the Google Cloud console.

After getting in, verify the project you have been assigned by Qwiklabs,
        ex: qwiklabs-gcp-00-a08abc03add9

By default you will see a Kubernetes cluster up and running; navigate to Kubernetes Engine > Clusters.
Activate the Cloud Shell (which comes from a VM provided by GCP, holding all the necessary packages you need to exercise Anthos).
Test the set of commands provided by the lab, such as:
$ gcloud auth list (to list the active accounts)
$ gcloud config list project (to list the project ID)
Now the first thing is to enable the API services:
        $ gcloud services enable \
             cloudresourcemanager.googleapis.com \
             container.googleapis.com \
             gkeconnect.googleapis.com \
             gkehub.googleapis.com \
             serviceusage.googleapis.com
On successful completion you'll see: Operation "operations/acf.60730f01-d4a2-4eaa-8dbc-5aab27d0fd3e" finished successfully.
If an error occurs at this step, it means Anthos API access is not properly enabled.
Now download the files from the GitHub repository:
        $ git clone -b workshop-v1 https://github.com/GoogleCloudPlatform/anthos-workshop.git anthos-workshop
        $ cd anthos-workshop
Connect Cloud Shell to your cluster:
        $ source ./common/connect-kops-remote.sh
This creates a remote cluster that is detached from Anthos for now; later we access this cluster and register it to the GKE Hub.
GKE Hub: a centralized dashboard that allows you to view and manage all of your Kubernetes clusters from one central location. The clusters can come from anywhere - your on-prem environment, another cloud, or Google.
Switch kubectl context to remote
        $ kubectx remote 
kubectx is a tool that switches the context used by the kubectl command.

        $ kubectl get nodes
Verify the remote cluster; you'll see two worker nodes and a master node.
Now you need to grant the service account access so it can register clusters:
        $ export PROJECT=$(gcloud config get-value project)
        $ export GKE_SA_CREDS=$WORK_DIR/anthos-connect-creds.json
        $ gcloud projects add-iam-policy-binding $PROJECT \
          --member="serviceAccount:$PROJECT@$PROJECT.iam.gserviceaccount.com" \
          --role="roles/gkehub.connect"
This policy binding grants the service account access to the gkehub.connect API.
Generate a private key file for the service account:
        $ gcloud iam service-accounts keys create $GKE_SA_CREDS \
          --iam-account=$PROJECT@$PROJECT.iam.gserviceaccount.com
Now, finally, register the remote cluster using gcloud, which creates the membership and installs the Connect agent. But first, export the remote cluster name:
        $ export REMOTE_CLUSTER_NAME="remote"
        $ gcloud container hub memberships register $REMOTE_CLUSTER_NAME \
          --context=$REMOTE_CLUSTER_NAME \
          --service-account-key-file=$GKE_SA_CREDS \
          --project=$PROJECT
After this you are able to see your remote cluster on the GKE Hub.
Refresh the Clusters page to see the remote cluster; you need to log in to it before it is fully connected.
        $ kubectx remote
        $ export KSA=remote-admin-sa 
        $ kubectl create serviceaccount $KSA
This creates the KSA (Kubernetes service account) used to log in to the remote cluster.

Assign the cluster-admin ClusterRole:
        $ kubectl create clusterrolebinding ksa-admin-binding \
     --clusterrole cluster-admin \
     --serviceaccount default:$KSA
Extract token 
        $ printf "\n$(kubectl describe secret $KSA | sed -ne 's/^token: *//p')\n\n"
Copy the extracted token and log in to the remote cluster using the token option. This way you now have access to your remote cluster and its metadata via the GKE Hub.


03 April 2020

Deploying IaC using Terraform

Terraform is used to automate infrastructure deployment across multiple providers, in both public and private clouds. Provisioning infrastructure through software to achieve consistent and predictable environments is Infrastructure as Code.

               IaC - in simple words, a replacement for SOPs, with automation on top.

Core concepts to achieve this:
  • Defined in code: IaC should be defined in code, whether in the form of JSON, YAML, or HCL.
  • Stored in source control: the code should be stored in a version control repository such as GitHub.
  • Declarative vs. imperative: in the imperative approach, I tell the software every single step it needs to do the job. In the declarative approach, the software already has an idea of, or a predefined routine for, what it is going to do, taking some references as input. Terraform is an example of the declarative approach to deploying IaC.
  • Idempotent and consistent: once a job is done, and I ask for the same job again, Terraform's idempotent behavior is to not repeat the steps; instead it reports that the current state already matches the desired one, so no changes need to be made. In a non-idempotent world, the same steps are repeated each time the job comes in, even though the result is already in place.
  • Push vs. pull: Terraform works on the push principle, pushing the configuration to its target.
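Idempotence is easy to see with a tiny shell analogy, outside Terraform entirely (the paths are arbitrary): `mkdir -p` states a desired end state and can be repeated safely, while plain `mkdir` is a step that fails once the state already exists.

```shell
# Idempotent: "this directory should exist" - repeat runs change nothing, no error.
mkdir -p /tmp/iac-demo
mkdir -p /tmp/iac-demo

# Non-idempotent: "create this directory" - the second run fails
# because the step has already been applied.
mkdir /tmp/iac-demo-step 2>/dev/null || echo "step already applied"
mkdir /tmp/iac-demo-step 2>/dev/null || echo "step already applied"
```

Terraform behaves like the first form: `terraform apply` on an unchanged configuration reports no changes instead of re-running the steps.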
The key benefit here is that everything is documented in code, which helps you understand your infrastructure in much more detail.
Key Terraform components
In this exercise I demonstrate how you can quickly deploy a t2.micro instance of Amazon Linux, without logging in to the AWS console, just by writing a Terraform plan.
To begin with, you need to fulfill a prerequisite:
                             "you should have an IAM user with AWS CLI access"

and form a Terraform configuration file with the .tf extension. Below are the block types Terraform uses to define things:

#VARIABLES - input variables can be declared here
#PROVIDER - providers like AWS or Google can be declared here
#DATA - data from the provider is collected here in the form of data sources
#RESOURCE - information about the provider's resources is fed in here
#OUTPUT - data is output when apply is called
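Put together, a minimal .tf file touching all five block types looks something like this sketch (the AMI filter and names are illustrative, not the exact file built in this exercise):

```hcl
# VARIABLES - an input variable with a default
variable "region" {
  default = "us-east-1"
}

# PROVIDER - which cloud to talk to
provider "aws" {
  region = var.region
}

# DATA - query the provider, here for the latest Amazon-owned AMI
data "aws_ami" "linux" {
  most_recent = true
  owners      = ["amazon"]
}

# RESOURCE - the thing to create, built from the data source
resource "aws_instance" "demo" {
  ami           = data.aws_ami.linux.id
  instance_type = "t2.micro"
}

# OUTPUT - printed when apply finishes
output "instance_id" {
  value = aws_instance.demo.id
}
```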

Defining variables in Terraform can be achieved in multiple ways; here I define variables in an external file.

To persist variable values, create a file and assign variables within this file. Create a file named terraform.tfvars with the following contents:

aws_access_key = "AKIA5O3GDEMOVOBSE4RA"
aws_secret_key = "+bh/vVqoDEMOErxv7YlrSs/sdRwN9ZzeKDtAjCP"
key_name = "tfkeypair"
private_key_path = "C:\\tfkeypair.pem"

Terraform reads variables from every file in the config directory that matches terraform.tfvars or *.auto.tfvars.
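Besides *.tfvars files, Terraform also reads environment variables of the form TF_VAR_<name> and ad-hoc -var flags; a quick sketch (the values are illustrative):

```shell
# Environment variables: TF_VAR_region feeds the input variable "region".
export TF_VAR_region="us-east-1"
export TF_VAR_key_name="tfkeypair"
echo "$TF_VAR_region"   # -> us-east-1

# One-off override on the command line (shown here, not run):
#   terraform plan -var="region=us-east-2" -out ami.tfplan
```

Command-line -var values take precedence over tfvars files, and environment variables have the lowest precedence.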

So in this exercise I deploy a t2.micro Amazon EC2 instance with nginx up and running on it.

And at the end your Terraform configuration file structure will look like this:

Let's start composing a Terraform configuration file.


First we define the set of variables used during the configuration. I have defined key pairs so that we can SSH to our AWS instance, plus a default region where my instance will be deployed.

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "key_name" {}
variable "private_key_path" {}
variable "region" {
  default = "us-east-1"
}


Here we define our provider and feed in the key details defined in our variables section, with the syntax var.variableName.

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.region
}


Here in the data source block we pull data from the provider; in this exercise the provider is Amazon, and we use a Linux AMI for our EC2 instance.

data "aws_ami" "aws-linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-hvm*"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}


In this block we can define more than one resource. Here I have used the default VPC, so that it will not be deleted when the instance is destroyed. Next we define a security group so that we can SSH to our instance, which is going to run nginx in this example, opening ports 80 and 22; for that we need to pass the VPC id so the security group is created in the right VPC.

resource "aws_default_vpc" "default" {
}

resource "aws_security_group" "allow_ssh" {
  name        = "nginx_demo"
  description = "allow ports for nginx demo"
  vpc_id      = aws_default_vpc.default.id

  # to allow traffic from outside to inside
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # assumed: open to any source for this demo
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # assumed: open to any source for this demo
  }

  # to allow traffic from inside to outside i.e. from instance to internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# in this block we are actually defining our instance, which will run nginx, with t2.micro as the instance type
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.aws-linux.id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  # since we SSH in, we define a connection in the resource block,
  # so that terraform understands where to connect
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(var.private_key_path)
  }

  # since we want to remotely execute commands
  provisioner "remote-exec" {
    inline = [
      "sudo yum install nginx -y",
      "sudo service nginx start"
    ]
  }
}


this block will help to give you the output of your configuration

output "aws_instance_public_dns" {
  value = aws_instance.nginx.public_dns
}


Now, to deploy the above configuration, the Terraform deployment process follows a cycle:


$ terraform init
This initializes the Terraform configuration, checks whether the provider modules/plugins are already available, and downloads them if not, as shown below.

$ terraform plan -out ami.tfplan
This looks for configuration files in the current working directory, loads any variables found in a terraform.tfvars file, and stores the plan, as shown below.

$ terraform apply "ami.tfplan" 
This takes the configuration you created as code, applies it to the provider, and does the magic.
Now test the configuration by hitting the URL highlighted above, and you will see the result:
And validate from your AWS console, where you will see this:

Now, if you don't want the configuration to stay active and cost you money, you can destroy it:

$ terraform destroy
Run this from your config folder: it destroys everything corresponding to the configuration you applied.