03 April 2020

Google Cloud Platform cheatsheet

As a cloud provider, Google offers a wide range of services for building and automating your infrastructure, so you can deliver your application to production seamlessly and securely.
This post lists the gcloud commands you can apply in your Google Cloud operations.

GCP comprises 3 core components:

Network  - Moving
Compute  - Processing
Storage  - Remembering

GCP Basics

Check version & settings:
  gcloud version
  gcloud info
  gcloud components list
Init profile:
  gcloud init
List services / enabled services:
  gcloud services list
  gcloud services list --enabled
Upgrade local SDK:
  gcloud components update
  gcloud components update --version 219.0.1
List all SQL instances:
  gcloud sql instances list
List all zones:
  gcloud compute zones list

Project configs

List projects:
  gcloud projects list
  gcloud config list
Show project info:
  gcloud compute project-info describe
Switch project:
  gcloud config set project <project-id>
Create project & set as default:
  gcloud projects create mygcp-project-777 --name mygcp-project --set-as-default
Set a default project:
  gcloud config set core/project mygcp-project-777
Set default compute region & zone:
  gcloud config set compute/region europe-west6
  gcloud config set compute/zone europe-west6-a

Bucket Basics (gsutil is gcloud's storage CLI)

List all buckets and files:
  gsutil ls
  gsutil ls -lh gs://<bucket-name>
Download file:
  gsutil cp gs://<bucket-name>/<dir-path>/package-1.1.tgz .
Upload file:
  gsutil cp <filename> gs://<bucket-name>/<directory>/
Cat file:
  gsutil cat gs://<bucket-name>/<filepath>
Delete file:
  gsutil rm gs://<bucket-name>/<filepath>
Move file:
  gsutil mv <src-filepath> gs://<bucket-name>/<directory>/<dest-filepath>
Copy folder:
  gsutil cp -r ./conf gs://<bucket-name>/
Show disk usage:
  gsutil du -h gs://<bucket-name>/<directory>
Create bucket:
  gsutil mb gs://<bucket-name>
Make all files readable:
  gsutil -m acl set -R -a public-read gs://<bucket-name>/
Config auth:
  gsutil config -a
Grant bucket access:
  gsutil iam ch user:pporwal@gmail.com:objectCreator,objectViewer gs://<bucket-name>
Remove bucket access:
  gsutil iam ch -d user:pporwal@gmail.com:objectCreator,objectViewer gs://<bucket-name>
Calculate file sha1sum:
  gsha1sum syslog-migration-10.0.2.tgz
  shasum syslog-migration-10.0.2.tgz
gsutil help:
  gsutil help
  gsutil help options

Images & Containers

List all images:
  gcloud compute images list
List all container clusters:
  gcloud container clusters list
Set kubectl context:
  gcloud container clusters get-credentials <cluster-name>

GKE

Set the active account:
  gcloud config set account <ACCOUNT>
Set kubectl context:
  gcloud container clusters get-credentials <cluster-name>
Change region:
  gcloud config set compute/region us-west1
Change zone:
  gcloud config set compute/zone us-west1-b
List all container clusters:
  gcloud container clusters list

IAM

Authenticate client:
  gcloud auth activate-service-account --key-file <key-file>
List credentialed accounts:
  gcloud auth list
Set the active account:
  gcloud config set account <ACCOUNT>
Auth to GCP container registry:
  gcloud auth configure-docker
Print token for active account:
  gcloud auth print-access-token
  gcloud auth print-refresh-token
Revoke generated credentials:
  gcloud auth revoke
  gcloud auth application-default revoke

Compute Instances

List all instances:
  gcloud compute instances list
  gcloud compute instance-templates list
Show instance info:
  gcloud compute instances describe "<instance-name>" --project "<project-name>" --zone "us-west2-a"
Stop an instance:
  gcloud compute instances stop myinstance
Start an instance:
  gcloud compute instances start myinstance
Create an instance:
  gcloud compute instances create vm1 --image image1 --tags test --zone "<zone>" --machine-type f1-micro
SSH to instance:
  gcloud compute ssh --project "<project-name>" --zone "<zone-name>" "<instance-name>"
Download files:
  gcloud compute copy-files example-instance:~/REMOTE-DIR ~/LOCAL-DIR --zone us-central1-a
Upload files:
  gcloud compute copy-files ~/LOCAL-FILE-1 example-instance:~/REMOTE-DIR --zone us-central1-a

Compute Volumes/Disks

List all disks:
  gcloud compute disks list
List all disk types:
  gcloud compute disk-types list
List all snapshots:
  gcloud compute snapshots list
Create snapshot:
  gcloud compute disks snapshot <disk-name> --snapshot-names <snapshot-name> --zone $zone

Compute Network

List all networks:
  gcloud compute networks list
Detail of one network:
  gcloud compute networks describe <network-name> --format json
Create network with auto subnets:
  gcloud compute networks create <network-name>
Create a subnet in a network:
  gcloud compute networks subnets create subnet1 --network my-vpc --range 192.168.0.0/24
Reserve a static IP:
  gcloud compute addresses create vpn-1-static-ip --region us-west2
List all IP addresses:
  gcloud compute addresses list
Describe an IP address:
  gcloud compute addresses describe <ip-name> --region us-central1
List all routes:
  gcloud compute routes list

DNS

List all record-sets in a zone:
  gcloud dns record-sets list --zone my_zone
List first 10 DNS records:
  gcloud dns record-sets list --zone my_zone --limit=10

Compute Firewall

List all firewall rules:
  gcloud compute firewall-rules list
List all forwarding rules:
  gcloud compute forwarding-rules list
Describe one firewall rule:
  gcloud compute firewall-rules describe <rule-name>
Create a firewall rule:
  gcloud compute firewall-rules create my-rule --network default --allow tcp:9200,tcp:3306
Update a firewall rule:
  gcloud compute firewall-rules update my-rule --allow tcp:9200,tcp:9300

Compute Services

List my backend services:
  gcloud compute backend-services list
List all my health check endpoints:
  gcloud compute http-health-checks list
List all URL maps:
  gcloud compute url-maps list

Some points to remember about VPC:

There are two modes of VPC:
1. Auto mode
2. Custom mode

To create a VPC, the Compute Engine API must be enabled.
A VPC network is global, whereas subnets are regional.
By default, an auto-mode VPC has one subnet in every region.
The default network comes with 4 pre-populated firewall rules:
    rule 1 allows ICMP (ping)
    rule 2 allows internal traffic within the network's CIDR ranges
    rule 3 allows TCP:3389 (RDP)
    rule 4 allows TCP:22 (SSH)
All of the above are ingress rules.
Firewall rules are global and can be applied per instance via network tags or service accounts.
By default, a VPC blocks all incoming traffic and allows all outgoing traffic.
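The custom-mode workflow implied above can be sketched end-to-end; the network name, region, and range below are illustrative, not from a real project:

```shell
# Create a custom-mode VPC (no subnets are created automatically)
gcloud compute networks create my-custom-vpc --subnet-mode=custom

# Add one regional subnet explicitly
gcloud compute networks subnets create my-subnet \
  --network my-custom-vpc --region europe-west6 --range 10.0.0.0/24

# Ingress is denied by default, so open SSH explicitly
gcloud compute firewall-rules create my-custom-vpc-allow-ssh \
  --network my-custom-vpc --allow tcp:22
```

These commands need an authenticated gcloud session and an active project, so treat them as a template rather than something to paste verbatim.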

To automatically create a subnet in every region, create the network in auto mode:

$ gcloud compute networks create my-vpc --subnet-mode=auto

Auto-mode subnets each get a /20 CIDR range (e.g. 10.128.0.0/20).
Get all subnets of a VPC network

$ gcloud compute networks subnets list --filter="network:my-vpc"


Create a compute instance with a specific machine type
$ gcloud compute instances create i1 --machine-type=n1-standard-2


Default machine type is n1-standard-1 (1 CPU, 3.75 GB RAM)
Instance name argument can be repeated to create multiple instances
Create a compute instance in a specific VPC network and subnet

$ gcloud compute instances create i1 --network my-vpc --subnet my-subnet-1

Default VPC network is default

If --network is set to a VPC network with “custom” subnet mode, then --subnet must also be specified
Instance name argument can be repeated to create multiple instances

Create a compute instance with a specific OS image

$ gcloud compute instances create i1 --image-family ubuntu-1804-lts --image-project ubuntu-os-cloud


Default image family is debian-9.
Use either --image-family (uses the latest image of this family) or --image (a concrete image).
--image-project serves as a namespace for --image and --image-family (multiple projects may have images or image families with the same name).

List all available images (including projects and families) with:
$ gcloud compute images list

Get the VPC network and subnet of a compute instance

$ gcloud compute instances describe i1 --format "value(networkInterfaces.network)" | sed 's|.*/||'
$ gcloud compute instances describe i1 --format "value(networkInterfaces.subnetwork)" | sed 's|.*/||'
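The trailing `sed 's|.*/||'` strips everything up to the last slash, reducing the full resource URL that gcloud returns to the bare resource name. A standalone illustration (the URL is a made-up example):

```shell
# A full self-link, as returned by --format "value(...)"
url="https://www.googleapis.com/compute/v1/projects/my-project/global/networks/my-vpc"

# Greedy .*/ matches up to the last slash, leaving only the final path segment
echo "$url" | sed 's|.*/||'   # prints: my-vpc
```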


Get the names of all compute instances
$ gcloud compute instances list --format="value(name)"

Can be used, for example, for deleting all existing compute instances:
$ gcloud compute instances delete $(gcloud compute instances list --format="value(name)")

Allow ingress traffic to a VPC network
$ gcloud compute firewall-rules create my-vpc-allow-ssh-icmp --network my-vpc --allow tcp:22,icmp --source-ranges 0.0.0.0/0

0.0.0.0/0 is the default for --source-ranges and could be omitted.

This allows incoming ICMP and SSH (TCP port 22) traffic to any instances in the VPC network from any source (e.g. from the public Internet).

After creating this firewall rule, you're able to:
Ping instances in the VPC network:
$ ping <EXTERNAL_IP>
SSH to instances in the VPC network:
$ gcloud compute ssh i1

Note that a newly created VPC network has no firewall rules applied and instances cannot be reached at all (not even from inside the VPC network). 
You have to create firewall rules to make compute instances reachable.
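For instance, to also allow instances inside the VPC network to reach each other, you might add an internal allow rule; the 192.168.0.0/24 source range below is an assumed subnet range, not from the original text:

```shell
# Allow all internal traffic between instances in the assumed subnet range
gcloud compute firewall-rules create my-vpc-allow-internal \
  --network my-vpc \
  --allow tcp,udp,icmp \
  --source-ranges 192.168.0.0/24
```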
Create a regional static IP address
$ gcloud compute addresses create addr-1 --region=europe-west6

Regional IP addresses can be attached to compute instances, regional load balancers, etc. in the same region as the IP address.
The name argument can be repeated to create multiple addresses

One of --global or --region must be specified.
Create a global static IP address
$ gcloud compute addresses create addr-1 --global

Global IP addresses can only be attached to global HTTPS, SSL proxy, and TCP proxy load balancers.
The name argument can be repeated to create multiple addresses.

One of --global or --region must be specified.

keep clouding!!

25 March 2020

Architecting hybrid infrastructure with Anthos

A multi-cloud service platform
Anthos is an application management platform that provides a consistent development and operations experience for cloud and on-prem environments. Anthos is based on Kubernetes: it runs on-premises and supports multi- and hybrid-cloud environments, fully managed by Google. It lets you write your workloads once and run them anywhere, gives you the flexibility to move on-prem apps to the cloud when you are ready, and lets you keep using the technology you already have while improving security.

"fun fact - Anthos means flower in Greek. Flower grow on-premise but they need rain from the cloud to flourish"

The Anthos technology stack looks like:
Kubernetes engine
GKE on-prem
Anthos Config Management
Istio
Migrate for Anthos
Marketplace

Let's take an example to understand this more clearly. Today, every company is trying to move its infrastructure from on-premise to the cloud. Developing and modifying your application directly in the cloud is not a good idea; it is better to modify the same application in your on-prem infrastructure first, while already gaining some of the benefits of working in the cloud, and then move it when it is ready. With Anthos you have the leverage to extend your on-prem environment into the cloud and work more effectively.
  • It provides a platform to manage applications in a hybrid cloud environment 
  • It helps to manage hybrid infrastructure by using one single control plane.
This benefits you with:
  • Write once, deploy in any cloud.
  • Consistency across environments.
  • Increased workload mobility.
  • Avoiding vendor lock-in.
  • A technology stack that runs in data centres, next to the enterprise workloads organisations currently run on-premise.
General Idea

In Anthos you set up an admin workstation that includes an admin cluster as well as a user cluster, so it's like a cluster within a cluster. The admin cluster takes care of the user cluster, which eventually means the admin control plane takes care of the user control plane.

[Diagram: the admin control plane managing the user cluster's control plane]

  • The admin control plane handles all administrative API calls to and from GKE on-prem.
  • Use gkectl to create, manage, and delete clusters.

Installation
You will set up an admin workstation as part of the on-prem installation of GKE


  • It automates deployment on top of vSphere, shipped as a virtual appliance.
  • Simple CLI installation with a local master.
  • DHCP or static IP allocation support.
  • Integration with existing private or public container registries.

GKE on-prem Networking

You have two modes in GKE on-prem networking:

1. Island mode - Pod IP addresses are not routable in the data centre, i.e. your on-prem services cannot reach your pods directly; instead you need to use endpoints, as we do in k8s (expose a Service endpoint to reach pods).
2. Flat IP mode - In this mode you can reach your pods directly, and it allows you to set up a routing table.
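In island mode, for example, a workload is reached through a Service rather than by pod IP; a minimal sketch with an illustrative deployment name:

```shell
# Expose a deployment through a Service so it is reachable from
# outside the pod network (the deployment name is illustrative)
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

# Callers use the Service's IP, not the pod IPs
kubectl get service my-app
```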
Data plane hybrid connectivity

There are 3 ways to connect a GKE on-prem cluster to Google's network (on-prem to cloud):
1. Cloud VPN - private IP access over the internet, with static or dynamic routing over BGP.
2. Partner Interconnect - private IP access through a partner; data does not traverse the public internet.
3. Dedicated Interconnect - private IP access over a direct physical connection to Google's network, for 10 Gbps connections and above.
So you need to weigh the economics of each way of connecting to Google's network, considering latency and region placement.

Exercise

In my lab, I exercised this using Qwiklabs, provided by the Google Cloud training team.

Search for "AHYBRID020 Managing Hybrid Clusters using Kubernetes Engine" and launch the lab.
Get the credentials and log in to the Google Cloud console.

After getting in, verify the project you have been assigned by Qwiklabs,
        e.g. qwiklabs-gcp-00-a08abc03add9

By default you will see a Kubernetes cluster up and running; navigate to Kubernetes Engine > Clusters.
Activate Cloud Shell (a VM provided by GCP holding all the necessary packages you need to exercise Anthos).



# test the set of commands provided by the lab, such as
$ gcloud auth list            # list the active accounts
$ gcloud config list project  # list the project id

# now the first thing is to enable the API services
$ gcloud services enable \
  cloudresourcemanager.googleapis.com \
  container.googleapis.com \
  gkeconnect.googleapis.com \
  gkehub.googleapis.com \
  serviceusage.googleapis.com \
  anthos.googleapis.com

# on successful completion you'll see
  Operation "operations/acf.60730f01-d4a2-4eaa-8dbc-5aab27d0fd3e" finished successfully.
  If an error occurs at this step,
  it means Anthos API access is not properly enabled.
				
# now download files from github repository
$ git clone -b workshop-v1 \
  https://github.com/GoogleCloudPlatform/anthos-workshop.git anthos-workshop
$ cd anthos-workshop
				
# connect Cloud Shell to your cluster
$ source ./common/connect-kops-remote.sh
  This will create a remote cluster which is detached from Anthos for now;
  later we access this cluster and register it to GKE Hub.
	

GKE Hub: a centralized dashboard that allows you to view and manage all of your Kubernetes clusters from one location. A cluster can come from anywhere: your on-prem environment, another cloud, or Google itself.


# Switch kubectl context to remote
$ kubectx remote 
kubectx is a tool that switches the context used by the kubectl command.
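If kubectx is not installed, the same switch can be done with plain kubectl (a sketch assuming a context named `remote` already exists in your kubeconfig):

```shell
kubectl config get-contexts        # list the contexts in your kubeconfig
kubectl config use-context remote  # equivalent to `kubectx remote`
kubectl config current-context     # verify which context is active
```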

# verify the remote cluster; you'll have two worker nodes and one master node
$ kubectl get nodes
			
# now you need to grant the service account access to register clusters
$ export PROJECT=$(gcloud config get-value project)
$ export GKE_SA_CREDS=$WORK_DIR/anthos-connect-creds.json
$ gcloud projects add-iam-policy-binding $PROJECT \
  --member="serviceAccount:$PROJECT@$PROJECT.iam.gserviceaccount.com" \
  --role="roles/gkehub.connect"
So this policy binding grants the service account access to the gkehub.connect APIs.
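To confirm the binding landed, you can filter the project's IAM policy; this check is my addition, not part of the lab:

```shell
# List members that hold roles/gkehub.connect on the project
gcloud projects get-iam-policy $PROJECT \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/gkehub.connect" \
  --format="table(bindings.members)"
```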
				
# generate private key file for service account
$ gcloud iam service-accounts keys create $GKE_SA_CREDS \
  --iam-account=$PROJECT@$PROJECT.iam.gserviceaccount.com \
  --project=$PROJECT
				
# now finally register the remote cluster using gcloud, which creates
  the membership and installs the Connect agent; but first export the remote cluster variable
$ export REMOTE_CLUSTER_NAME="remote"
$ gcloud container hub memberships register $REMOTE_CLUSTER_NAME \
  --context=$REMOTE_CLUSTER_NAME \
  --service-account-key-file=$GKE_SA_CREDS \
  --project=$PROJECT			
 which means you are now able to see your remote cluster in GKE Hub
				
# Refresh the cluster page to see the remote cluster,
  but you need to log in to it before it is fully connected.
$ kubectx remote
$ export KSA=remote-admin-sa 
# creating KSA to login into remote cluster
$ kubectl create serviceaccount $KSA

# assigning cluster-admin ClusterRole
$ kubectl create clusterrolebinding ksa-admin-binding \
  --clusterrole cluster-admin \
  --serviceaccount default:$KSA
				  
# Extract token 
$ printf "\n$(kubectl describe secret $KSA | sed -ne 's/^token: *//p')\n\n"


Copy the extracted token and log in to the remote cluster using the token option. This way you now have access to your remote cluster and its metadata via GKE Hub.


Br,
Punit