Tuesday, June 16, 2020

Free Google Certifications during lockdown

Google has recently rolled out free courses for beginners and professionals to learn new skills. Here are 5 courses from Google that you can enroll in.

1. Cloud DevJam:

Cloud DevJam is a program specially crafted for professionals to upskill with the latest cloud technologies. Cloud DevJam is a series of multiple tracks, each one full of engaging learning activities like hackathons, multiple-choice questions, hands-on labs, webinars and Google Cloud certifications. Whether you are new to the cloud or building on an existing skillset, you can find a learning path tailored to your role or interest by joining Cloud DevJam.

2. Data Engineering with GCP Certificate:

The program offers the skills you need to build a career in data science. The content is a combination of presentations, demos and labs. It will enable you to make data-driven decisions by collecting, transforming, and publishing data. You will also have an opportunity to practice key job skills, including designing, building and running data processing systems, and operationalising machine learning models. The course may take 4 months to complete if you invest 4 hours per week. Candidates are required to have basic proficiency with a common query language such as SQL.

3. Reliable Google Cloud Infrastructure: Design and Process:

This course on Coursera aims to equip students with design patterns for building reliable cloud solutions. The course is a combination of presentations and activities. The 8-hour course is part of a larger series: it is a continuation of the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine courses. Students are expected to have hands-on experience with the technologies covered in either of those courses.

4. Google Analytics Certification: Become Certified & Earn More:

This certification course on Udemy is designed to help you attract clients and improve your marketing skills. No special skills are required to take this course. It is a short course that you can complete in just 2 hours.

5. Google Cloud Platform Fundamentals: Core Infrastructure:

This is an extensive course that introduces you to important concepts and terminology for working with GCP. You will learn about the computing and storage services available in Google Cloud Platform, including Google App Engine, Google Compute Engine, Google Kubernetes Engine, Google Cloud Storage, Google Cloud SQL, and BigQuery. You will also learn about important resource and policy management tools, such as the Google Cloud Resource Manager hierarchy and Google Cloud Identity and Access Management. It will take approximately 12 hours to complete this course.

Monday, June 15, 2020

Kubernetes Journey: An interesting case study

Kubernetes Stories 🚀🚀🚀


Kubernetes is like teenage sex; many people just talk about it a lot more than they actually do it.

Presenting 10 mind-blowing stories that should help you understand real-world Kubernetes usage.


● Box’s Kubernetes Journey: An interesting case study:)

A few years ago at Box, it was taking up to six months to build a new micro-service.
Fast forward to today, it takes only a couple of days.


How did they manage to speed up?
Two key factors made it possible,
1. Kubernetes technology
2. DevOps practices

Founded in 2005, Box started as a monolithic PHP application that grew over time to millions of
lines of code. The monolithic nature of the application led them to build very tightly coupled
designs, and this tight coupling was getting in their way. It kept them from innovating as
quickly as they wanted to.
Bugs in one part of the application would require them to roll back the entire application. With
so many engineers working on the same code base of millions of lines, bugs were not
uncommon. It became increasingly hard to ship features, or even bug fixes, on time. So they
looked for a solution and decided to go with the micro-services approach. But then they
started to face another set of problems....

That's where Kubernetes came in:)

A 250 Year Old Bank's Cloud-Native Kubernetes Journey:


For them, it all started with using containers, and they faced problems in the initial stages. Since
it is a bank (financial sector), they face more challenges with compliance and governance, and the
priority was on security. As there are many tools on the cloud-native landscape, it was confusing
for them to choose which tool for what; they didn't want each developer team selecting different
tools and ending up with a catastrophic separation from the others, along with licensing issues.

So they needed to come up with clear guidelines for developers on which cloud features they
could consume easily, before moving to a cloud-native approach, in order to create a uniform way
of working. They also came up with a plan for a team that had previously worked on tools and
processes to share knowledge and best practices. They created a team called 'Stratus.'
The mission of 'Stratus' is to enable development teams to quickly deliver secure and
high-quality software by providing them with easy-to-use platforms, security, portability across
clouds at the enterprise level, and reusable software components.

How was 'Pokemon Go' able to scale so efficiently? 

The answer is Kubernetes.


500+ million downloads and 20+ million daily active users. That's HUGE.

Pokemon Go's engineers never thought their user base would grow so exponentially, surpassing
all expectations within such a short time. Even the servers couldn't handle that much traffic.

The Challenge:
Horizontal scaling was one side of it, but Pokemon Go also faced a severe challenge with
vertical scaling because of the real-time activity of millions of users worldwide. Niantic was not
prepared for this.

The Solution:
The magic of containers. The application logic for the game ran on Google Container Engine (GKE)
powered by the open-source Kubernetes project.

Niantic chose GKE for its ability to orchestrate their container cluster at planetary scale, freeing its
team to focus on deploying live changes for their players. In this way, Niantic used Google Cloud to
turn Pokémon GO into a service for millions of players that continuously adapts and improves. This
gave them more time to concentrate on building the game's application logic and new features rather
than worrying about scaling.

“Going Viral” is not always easy to predict but you can always have Kubernetes in your tech stack.


Shopify engineering team's journey to building their own PaaS with Kubernetes 


Shopify was one of the pioneering large-scale users of Docker in production.
They ran 100% of their production traffic in hundreds of containers. The Shopify engineering team
saw the real value of containerisation and aspired to introduce a real orchestration layer.
They started looking at orchestration solutions, and the technology behind Kubernetes fascinated
them.

It all started in 2016, when all the engineers were happy running services everywhere with a simple
stack that included Chef, Docker, AWS, and Heroku. But just like any other company in the
growth phase, Shopify encountered challenges when the Canadian e-commerce company
saw 80k+ requests per second during peak demand. Wohooo:)

Many processes were not scalable, and they needed a quick solution. The Shopify team recognized
that they needed to increase their focus on tested infrastructure and automation that works as
expected, every time.

The Shopify engineering team believed in three principles: provide a 'paved road,' 'hide complexity,'
and 'self-serve.'


Italy's biggest traditional bank is embracing Kubernetes.  

A conventional bank running its real business on such a young technology?
Are you kidding me?

Nope, I am not kidding. Italy's banking group, Intesa Sanpaolo, has made this transition.

These are banks that still run their ATM networks on 30-year-old mainframe technology, so
embracing the hottest trend and tech is nearly unbelievable. Even though ING, the banking and
financial corporation, changed the way banks were seen by upgrading itself with Kubernetes and
DevOps practices very early in the game, there was still a stigma around adopting Kubernetes in
highly regulated and controlled environments like healthcare, banking, etc.

The bank's engineering team came up with a strategy in 2018 to throw away the old way
of thinking and started embracing technologies like micro-services and container architecture,
and migrating from monolithic to multi-tier applications. It was transforming itself into a software
company. Unbelievable.

Today the bank runs more than 3,000 applications. Of those, more than 120 are now running in
production using the new micro-services architecture, including two of the 10 most business-critical
for the bank.


Kubernetes success story at Pinterest  

Pinterest serves over 250 million monthly active users and over 10 billion recommendations every
single day. That is huge. (The numbers might have changed now)


As they knew these numbers were going to grow day by day, they began to feel the pain of
scalability and performance issues.

Their initial strategy was to move their workload from EC2 instances to Docker containers; hence
they first moved their services to Docker to free up engineering time spent on Puppet and to have an
immutable infrastructure.
And then the next strategy was to move to Kubernetes:)

Now they can take ideas from ideation to production in a matter of minutes, whereas earlier it
used to take hours or even days. They have cut down a lot of overhead cost by utilising
Kubernetes and have removed a lot of manual work, without making engineers worry about the
underlying infrastructure.


 ● Airbnb's Kubernetes story

Airbnb's transition from a monolithic to a microservices architecture is pretty amazing. They
needed to scale continuous delivery horizontally, and the goal was to make continuous delivery
available to the company's 1,000 or so engineers so they could add new services. Airbnb
adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over
250 critical services (at a frequency of about 500 deploys per day on average).


The New York Times Kubernetes story

Today the majority of their customer-facing applications are running on Kubernetes.
What an amazing story:)
The biggest impact has been speeding up deployment and productivity. Legacy deployments
that took up to 45 minutes are now pushed in just a few.

It's also given developers more freedom and fewer bottlenecks.

The New York Times has gone from a ticket-based system for requesting resources and weekly
deploy schedules to allowing developers to push updates independently.


Kubernetes at Reddit

Reddit is one of the busiest sites in the world.
Kubernetes forms the core of Reddit's internal Infrastructure.

For many years, the Reddit infrastructure team followed traditional ways of provisioning and
configuring. However, it wasn't long before they saw huge drawbacks and failures from
doing things the old way.
They moved to 'Kubernetes.'


 ● Tinder’s Kubernetes story:

Due to high traffic volume, Tinder's engineering team faced challenges of scale and stability. What
did they do? Yes, the answer is Kubernetes.

Tinder's engineering team solved interesting challenges to migrate 200 services and run a
Kubernetes cluster at scale, totalling 1,000 nodes, 15,000 pods, and 48,000 running containers.

Was that easy? No way. However, they had to do it for smooth business operations going
forward.

One of their Engineering leaders said, "As we on-boarded more and more services to Kubernetes,
we found ourselves running a DNS service that was answering 250,000 requests per second."

Fantastic culture: Tinder's entire engineering organisation now has knowledge and experience in
how to containerise and deploy applications on Kubernetes.



Try This Simple 5-Step Kubernetes CI/CD Process

Step 1. Develop your micro-service. This can be a .war or .jar file.

Step 2. Create a Docker Framework using Tomcat and Java-8 on Ubuntu as a base image.

Step 3. Create the Docker image for the micro-service by adding the .war/.jar file to the Docker
Framework.

Step 4. Create a Helm chart for the micro-service.

Step 5. Deploy the micro-service to the Kubernetes cluster using the Helm Chart.
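
To make Step 4 concrete, here is a minimal sketch of what the Helm chart's values.yaml might look like for such a micro-service. The image name, tag, and port below are hypothetical placeholders, not values from the original post - adjust them for your own registry and service.

```yaml
# values.yaml - hypothetical values for the micro-service chart
# (repository, tag, and port are placeholders; adjust for your registry/service)
replicaCount: 2

image:
  repository: myregistry/my-service   # placeholder image built in Step 3
  tag: "1.0.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080        # port exposed by the Tomcat container
```

With a chart like this in place, Step 5 reduces to a single command such as `helm install my-service ./my-service-chart`.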

Try: http://bit.ly/CICDPROCESS







Thanks:) :) :)

Thursday, June 11, 2020

Complete Guide to Setting Up Kubernetes



Installing Kubernetes Using Kubeadm

===========================================
#
Topic: Setting up Control-plane/Master node
#############################################
1)
# Install container runtime - Docker
Follow this source.
Source:
https://docs.docker.com/install/linux/docker-ce/ubuntu/
2)
# Install kubeadm
Follow this source or below commands.
Source:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Commands:
- sudo apt-get update
- sudo apt-get install -y apt-transport-https curl
- Add repository keys to download tools:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
- Add repository:
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
- sudo apt-get update
- sudo apt-get install -y kubelet kubeadm kubectl
- sudo apt-mark hold kubelet kubeadm kubectl
3) 
# Initialize a new Kubernetes cluster
sudo kubeadm init
NOTE: Save this output as it's required to add worker nodes.
NOTE:
If you get a 'swap on' error, run the commands below:
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
4)
# As per the instructions from the 'kubeadm init' command, run these commands to make
kubectl work for your non-root user.
mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5)
# Verify that the cluster initialized successfully
$ kubectl get nodes
O/P:
NAME    STATUS     ROLES    AGE     VERSION
node1   NotReady   master   2m43s   v1.12.1
6)
# Run the following kubectl command to find out why the cluster STATUS is showing NotReady.
- This command shows all Pods in all namespaces, including the system Pods in the
kube-system namespace.
- As we can see, none of the coredns Pods are running.
- This is what is preventing the cluster from entering the Ready state, and it is happening
because we haven't created the Pod network yet.
$ kubectl get pods --all-namespaces
O/P:
NAMESPACE     NAME             READY   STATUS              RESTARTS   AGE
kube-system   coredns-...vt    0/1     ContainerCreating   0          8m33s
kube-system   coredns-...xw    0/1     ContainerCreating   0          8m33s
kube-system   etcd...          1/1     Running             0          7m46s
kube-system   kube-api...      1/1     Running             0          7m36s
7)
# Create the Pod network. You must install a Pod network add-on so that your Pods can
communicate with each other. (As per the kubeadm init output)
$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
8)
# Check that the Master's STATUS has changed from 'NotReady' to 'Ready'
$ kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   3m51s   v1.12.1
GREAT - all DNS system pods are now working and the cluster is ready.

Now that the cluster is up and running, it’s time to add some nodes.

#

Topic: Worker Node Setup & Joining to the cluster:
#############################################
1)
# Create a worker node machine on GCP / AWS or another cloud platform.
2)
# Install kubeadm
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

3)
# Install container runtime
https://docs.docker.com/install/linux/docker-ce/ubuntu/
4)
# To bootstrap the Kubernetes worker node and join it to the cluster, run the command below
taken from the 'kubeadm init' output. The join command configures the local kubelet and
registers the node with the cluster automatically.
sudo kubeadm join 192.168.254.129:6443 --token lc1x37.v1dt857dfny4pszd \
--discovery-token-ca-cert-hash sha256:c4b751ebfefc8ea842f654daab07504ad033873eee87d137c1eb95720c2db8de
# Verify the node joined (run below on the Control-plane node)
$ kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
control-plane   Ready    master   26m     v1.16.3
worker-node1    Ready    <none>   3m18s   v1.16.3
$ kubectl get nodes -o wide
--> this displays IP, OS, Kernel and more details about all Nodes

#
Project-1 [Nginx]
#############################################
Deploying/Creating a pod
#############################################
#
1.) Create Pod manifest file
$ mkdir nginx
$ vim pod.yaml
pod.yaml
=========
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80

pod.yaml - Manifest file description:
----------------------
- Straight away we can see four top-level fields.
• .apiVersion
• .kind
• .metadata
• .spec
--> .apiVersion:
- Tells the API server which version of the Kubernetes API is being used to create the
object (a Pod object in this case)
- Pods are currently in the core v1 API group
--> .kind:
- Tells us the kind of object being deployed. In this case we are creating a Pod object.
- It tells the control plane what type of object is being defined.
--> .metadata:
- This section has two sub-sections: name and labels.
- You name the Pod using the "name" key.
- Using labels, we can identify a particular Pod.
--> .spec:
- This is where we specify details about the containers that will run in the Pod.
- In this section we specify the container name, image, ports, etc.
#
2.) Creating a Pod
- Check that all Nodes are Ready before creating a Pod
$ kubectl get nodes
- This POSTs the manifest file to the API server, which deploys/creates a Pod from it
$ kubectl apply -f pod.yaml
Note: Your Pod is scheduled to a healthy node in the cluster and
is monitored by the local kubelet process on that node.
- Check that the Pod is created
$ kubectl get pods
$ kubectl get pods --watch (monitor the status continuously)
# Introspecting Running Pods
- Get the IP and worker node of the Pod
$ kubectl get pod -o wide
- Get a full copy of the Pod manifest from the cluster store. The desired state is under
.spec and the observed state under .status
$ kubectl get pod -o yaml

- Another great Kubernetes introspection command. It shows the object's lifecycle events.
$ kubectl describe pod nginx-pod
- Access the nginx server application running in the Pod from the Control-plane node
$ curl http://10.44.0.1:80
- You can also log into the Pod's container to get more information.
$ kubectl exec -it nginx-pod -- /bin/bash
Note: Let's add some content and reload our nginx application
- $ echo "Gamut Gurus Technologies" > /usr/share/nginx/html/index.html
- Fetch the nginx application again
$ curl 10.44.0.1
- Log into a specific container, in case you have a multi-container Pod,
using the --container or -c option.
$ kubectl exec -it nginx-pod --container nginx-container -- /bin/bash
#
3.) Deleting a Pod
$ kubectl get pods
$ kubectl delete pods nginx-pod
$ kubectl delete -f pod.yaml
# Misc:
4.) Get all node IPs in the Kubernetes cluster
$ kubectl get nodes -o wide
NOTE:
The kubelet takes the PodSpec and is responsible for pulling all images and starting all
containers in the Pod.
What Next?
- If a Pod fails, it is not automatically rescheduled. Because of this, we usually deploy
Pods via higher-level objects such as Deployments.
- These add things like "scalability" (scale up/down), "self-healing", "rolling updates" and
"rollbacks", and make Kubernetes so powerful.

#
Project-2 [Nginx]
#############################################
Creating Deployments & Services
#############################################
- Pods don’t self-heal, they don’t scale, and they don’t allow for easy updates

- Deployments add all of these:
- "scale" (scale up/down)
- "self-heal"
- "rolling updates"
- "rollbacks"
- That's why we almost always deploy Pods via 'Deployments'
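
As a sketch, here is what a minimal deploy.yml for the nginx example might look like. The names, labels, and replica count below are illustrative assumptions, chosen to match the Pod manifest from Project-1:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                 # scale up/down by changing this number
  selector:
    matchLabels:
      app: nginx              # must match the Pod template labels below
  template:                   # Pod template - same fields as a standalone Pod manifest
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
```

If any of the 3 Pods fails, the Deployment's controller replaces it automatically - that is the self-healing behaviour described above.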
#
Creating Deployments
--------------------
# List all nodes in K8s cluster
$ kubectl get nodes
# List all pods in K8s cluster
$ kubectl get pods
$ kubectl get pods --watch
# Create the Deployment
$ kubectl create -f deploy.yml
Note:
NodePort range: 30000 - 32767
In GCP, open this range under Networking --> VPC network --> Firewall rules
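
The port range noted above is the NodePort range. A minimal NodePort Service for exposing the nginx Pods might look like the sketch below; the Service name, selector label, and nodePort value are illustrative assumptions (a nodePort must fall within 30000-32767):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx              # must match the labels on the Deployment's Pods
  ports:
  - port: 80                # Service port inside the cluster
    targetPort: 80          # containerPort of the nginx Pods
    nodePort: 30080         # within 30000-32767; open this port in the firewall
```

After `kubectl apply -f service.yml`, the application is reachable at http://<any-node-ip>:30080.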