25 March 2020

Architecting hybrid infrastructure with Anthos

A multi-cloud service platform
Anthos is based on Kubernetes. It is fully managed by Google, runs on-premises, and supports multi-cloud and hybrid cloud environments. It lets you write your workloads once and run them anywhere, gives you the flexibility to move on-prem apps to the cloud when you are ready, and lets you keep using the technology you already have while improving security.

"Fun fact - Anthos means flower in Greek. Flowers grow on-premises, but they need rain from the cloud to flourish."

Anthos Technology stack
Google Kubernetes Engine (GKE)
GKE on-prem
Anthos Config Management
Migrate for Anthos

Let's take an example to understand this more clearly. Today, almost every company is trying to move its infrastructure from on-premises to the cloud. But it is rarely a good idea to develop and modify your application directly in the cloud. It is better if the same application that will eventually run in the cloud can be modified in your on-prem infrastructure first, so you gain some of the benefits of working cloud-style while still maintaining everything on-prem. With Anthos you have the leverage to extend your on-prem environment into your cloud environment and work more effectively.
  • It provides a platform to manage applications in a hybrid cloud environment.
  • It helps you manage hybrid infrastructure through one single control plane.
This benefits you in:
  • Write once, deploy in any cloud.
  • Consistency across environments.
  • Increased workload mobility.
  • Avoiding vendor lock-in.
  • A technology stack that runs in data centers, next to the enterprise workloads organisations currently run on-premises.
General Idea

In Anthos you set up an admin workstation, which includes an admin cluster as well as user clusters, so it's like a cluster within a cluster. The admin cluster takes care of the user clusters, which eventually means the admin control plane takes care of the user control planes. The diagram will help you understand this more clearly.

Consider that the Kubernetes master is the control plane in Anthos:

  • The admin control plane handles all administrative API calls to and from GKE on-prem.
  • Use gkectl to create, manage, and delete clusters.
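
As a rough sketch, the gkectl lifecycle looks something like the following. This is a hedged example, not the exact procedure: the subcommand names come from the GKE on-prem documentation, but the flags vary by version, and the config file name here is an assumption.

```shell
# Hypothetical sketch -- check your GKE on-prem version's docs for exact flags.
# Validate the cluster configuration file (file name is an assumption):
gkectl check-config --config create-config.yaml

# Push node OS images and other prerequisites into vSphere:
gkectl prepare --config create-config.yaml

# Create the admin and user clusters described in the config:
gkectl create cluster --config create-config.yaml

# Later, tear a cluster down again:
gkectl delete cluster --config create-config.yaml
```

All of these run from the admin workstation described below.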


You will set up the admin workstation as part of the on-prem installation of GKE:

  • It automates deployment on top of vSphere and is shipped as a virtual appliance.
  • Simple CLI installation with a local master.
  • DHCP or static IP allocation support.
  • Integration with an existing private or public container registry.
GKE on-prem Networking

You have two modes in GKE on-prem networking:

1. Island mode - Pod IP addresses are not routable in the data center, i.e. your on-prem services cannot reach the pods directly; instead you need to use endpoints, as in vanilla Kubernetes (expose a Service endpoint to reach the pods).
2. Flat mode - In this mode pod IPs are routable, so you can reach your pods directly and set up routing tables accordingly.
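
To make the island-mode point concrete, here is a minimal sketch of exposing pods through a Service endpoint. The deployment name `web` is hypothetical; any workload in the cluster works the same way.

```shell
# Island mode: pod IPs are not routable from the data center network,
# so put a Service in front of the pods. A NodePort Service makes them
# reachable from on-prem via any node IP on the allocated port.
kubectl expose deployment web --name=web-svc --port=80 --type=NodePort

# Look up the allocated node port (PORT(S) column, e.g. 80:3xxxx/TCP):
kubectl get service web-svc
```

In flat mode you could skip the NodePort and route to the pod CIDR directly.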
Data plane hybrid connectivity
There are three ways to connect a GKE on-prem cluster to Google's network (on-prem to cloud):
1. Cloud VPN - private IP access over the internet, with static or dynamic routing over BGP.
2. Partner Interconnect - private IP access through a partner; data does not traverse the public internet.
3. Dedicated Interconnect - private IP access over a direct physical connection to Google's network, for 10 Gbps connections and above.
So you need to calculate economically which way you should connect to Google's network, considering latency and the distance between regions.

In my lab I exercised this using Qwiklabs, provided by the Google Cloud training team.

Search for "AHYBRID020 Managing Hybrid Clusters using Kubernetes Engine" and launch the lab.
Get the credentials and log in to the Google Cloud console.

After getting in, verify the project you have been assigned by Qwiklabs,
        ex: qwiklabs-gcp-00-a08abc03add9

By default you will see a Kubernetes cluster up and running; navigate to Kubernetes Engine > Clusters.
Activate Cloud Shell (a VM provided by GCP that holds all the necessary packages you need for this Anthos exercise).

# test the set of commands provided by the lab, such as
$ gcloud auth list            # list the active accounts
$ gcloud config list project  # list the project ID

# now the first thing is to enable the API services
$ gcloud services enable \
    cloudresourcemanager.googleapis.com \
    container.googleapis.com \
    gkeconnect.googleapis.com \
    gkehub.googleapis.com \
    serviceusage.googleapis.com \
    anthos.googleapis.com

# on successful completion you'll see
  Operation "operations/acf.60730f01-d4a2-4eaa-8dbc-5aab27d0fd3e" finished successfully.
# if an error occurs when executing this step,
  it means Anthos API access is not properly enabled.
# now download the files from the GitHub repository
$ git clone -b workshop-v1 \
  https://github.com/GoogleCloudPlatform/anthos-workshop.git anthos-workshop
$ cd anthos-workshop
# connect Cloud Shell to your cluster
$ source ./common/connect-kops-remote.sh 
  this will create a remote cluster which is detached from Anthos for now;
  later we will access this cluster and register it with GKE Hub

 GKE Hub: a centralized dashboard that allows you to view and manage all of your Kubernetes clusters from one central location. The clusters can come from anywhere: your on-prem environment, another cloud, or Google Cloud itself.
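
For instance, once clusters are registered you can inspect their memberships from Cloud Shell. This is a sketch using the `gcloud container hub` command group available at the time of the lab; newer gcloud releases expose the same functionality under `gcloud container fleet memberships`.

```shell
# list every cluster registered with the hub in the current project
gcloud container hub memberships list

# inspect a single membership, e.g. the "remote" cluster registered below
gcloud container hub memberships describe remote
```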

# Switch kubectl context to remote
$ kubectx remote 
kubectx is a tool that switches the context (the cluster configuration) used by the kubectl command.

# verify the remote cluster; you'll have two worker nodes and one master node
$ kubectl get nodes
# now you need to grant the service account access to register clusters
$ export PROJECT=$(gcloud config get-value project)
$ export GKE_SA_CREDS=$WORK_DIR/anthos-connect-creds.json
$ gcloud projects add-iam-policy-binding $PROJECT \
  --member="serviceAccount:$PROJECT@$PROJECT.iam.gserviceaccount.com" \
  --role="roles/gkehub.connect"
so this policy binding grants the service account access to the gkehub.connect role
# generate private key file for service account
$ gcloud iam service-accounts keys create $GKE_SA_CREDS \
  --iam-account=$PROJECT@$PROJECT.iam.gserviceaccount.com
# now finally register the remote cluster using gcloud, which creates
  the membership and installs the Connect agent; but first export the remote cluster variable
$ export REMOTE_CLUSTER_NAME="remote"
$ gcloud container hub memberships register $REMOTE_CLUSTER_NAME \
  --context=$REMOTE_CLUSTER_NAME \
  --service-account-key-file=$GKE_SA_CREDS
 which means you are now able to see your remote cluster on GKE Hub
# refresh the Clusters page to see the remote cluster,
  but you need to log in to it before it is fully connected.
$ kubectx remote
$ export KSA=remote-admin-sa 
# create a KSA (Kubernetes service account) to log in to the remote cluster
$ kubectl create serviceaccount $KSA

# assigning cluster-admin ClusterRole
$ kubectl create clusterrolebinding ksa-admin-binding \
  --clusterrole cluster-admin \
  --serviceaccount default:$KSA
# Extract token 
$ printf "\n$(kubectl describe secret $KSA | sed -ne 's/^token: *//p')\n\n"

Copy the extracted token and log in to the remote cluster using the token option. This way you now have access to your remote cluster and its metadata via GKE Hub.

