Credits: ashleymcnamara/gophers

If you’ve been anywhere near the IT industry, you’ve very likely heard the term containers🚢. The adoption of containers is growing exponentially because they are lightweight, portable, and remarkably fast. Nowadays, container-based deployments have gained the upper hand over VMs in larger environments.

→ Containers provide isolated runtime environments: allocated resources are presented exclusively to each container, and alterations inside one container don’t affect the others.

→ Such environments are more efficient to run and manage, and they consume fewer resources.

With great power comes great responsibility✌️. Imagine you have been using containers in production and your application starts getting massive traffic. You need to scale the application. How will you do it? How will you wire containers together on short notice? How will you monitor all these containers and manage health checks😥? You need an orchestration platform to scale and manage your containers.

This is where tools like Kubernetes☸️ come into play.

What exactly is Kubernetes?

Kubernetes (K8s) is an open-source project originally developed by Google and now hosted by the Cloud Native Computing Foundation. It has become one of the most popular container orchestration (simply put, container management) tools around; it lets you deploy and manage fault-tolerant, resource-efficient containerized applications at scale.

→ In simple words, you bundle groups of containers together, and Kubernetes helps you manage those containers easily and effectively. It can span on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is now the undisputed platform for hosting cloud-native applications that require rapid scaling. In practice, K8s is most often used with Docker. It provides:

  • Service Topology: routing traffic based on the cluster’s topology.
  • Service Discovery and Load Balancing: service discovery is the process of figuring out how to connect to a service, and Kubernetes has its own mechanism for it. It gives pods (basically groups of one or more containers) unique IP addresses, plus a single stable IP for a set of pods, enabling load balancing across them.
  • Storage Orchestration: mount the storage system of your choice, be it public or private cloud storage, NFS, and many more...
  • Horizontal Scaling: scale your application up or down with commands, the UI, or auto-scaling, seamlessly!
  • Automated Rollouts and Rollbacks: Kubernetes monitors application health checks and reconciles the actual state with the desired deployment state, ensuring near-zero downtime. It performs rollouts, and rollbacks, for you.
  • Self Healing: Kubernetes continuously works toward the desired state, restarting, replacing, and rescheduling containers; only ready-to-serve containers are advertised to clients.
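Several of the features above can be sketched in a single manifest. This is a minimal, hypothetical example (the names `web-app`, the `nginx` image, and all values are illustrative, not from the article): a Deployment gives you self-healing replicas and automated rolling updates, while a Service provides the single stable IP that load-balances across the pods.

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 replicas running (self-healing)
# and replaces pods gradually on updates (automated rollouts/rollbacks).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # keep most pods serving during a rollout
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.21  # any container image
          ports:
            - containerPort: 80
          readinessProbe:    # only ready pods receive traffic
            httpGet:
              path: /
              port: 80
---
# A Service gives the pod set one stable IP and load-balances across the pods.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
```

Applying this with `kubectl apply -f` declares the desired state; Kubernetes does the rest.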

For a successful open-source project, the community is as important as great code. K8s has a thriving community across the world, with more than 3,000 active contributors (according to the August 2019 report).

How Does Kubernetes Work?

→ A working Kubernetes deployment is called a cluster. A cluster mainly consists of a master node and several worker nodes. The master node maintains the desired state, while the worker nodes actually run the application workloads.

→ If a worker node goes down, Kubernetes starts new pods on a functioning worker node, which keeps container management simple. Your work involves configuring Kubernetes and defining nodes, pods, and the containers within them; Kubernetes handles orchestrating the containers.
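The desired-state reconciliation described above also powers autoscaling. As a hedged sketch (the `web-app` target name and all thresholds are assumptions for illustration), a HorizontalPodAutoscaler tells Kubernetes the replica range you want, and the control plane continuously adjusts the actual count to match:

```yaml
# Hypothetical HorizontalPodAutoscaler: Kubernetes reconciles the actual
# replica count against this declared desired state.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app          # assumes a Deployment with this name exists
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```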

Overview of a basic Kubernetes cluster

Some common terms for a better understanding of K8s:

  • Node: the machine that runs the workloads
  • Pod: the smallest deployable unit, a group of one or more containers deployed on a node. Pods abstract networking and storage away from the underlying containers.
  • Kubelet: the agent responsible for maintaining a set of pods on a node, ensuring the defined containers are started and running.
  • etcd: a consistent, highly available key-value store that holds the cluster's data.
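To make the pod abstraction concrete, here is a minimal, hypothetical Pod manifest (the names, images, and paths are illustrative assumptions): two containers share the pod's network namespace and a scratch volume, exactly the kind of network/storage abstraction mentioned above.

```yaml
# Hypothetical Pod: two containers share the pod's network and a volume.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod           # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}         # scratch storage shared by both containers
  containers:
    - name: web
      image: nginx:1.21
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers live in one pod, the sidecar writes a file that nginx serves, and they could also reach each other over `localhost`.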

The adoption of Kubernetes as the go-to platform for hosting production-grade applications keeps increasing. Big brands like The New York Times, HBO, Reddit, Airbnb, Pinterest, and Pokémon all have their own K8s stories to tell, and many more are on their way to joining them.

Kubernetes meets the real world: Airbnb’s story😄

→ Airbnb is an online marketplace for renting homes and booking experiences. Airbnb’s transition from a monolithic to a microservices architecture is quite commendable. The organization needed to scale horizontally to ensure continuous delivery while growing by adding new services, so that its team of over 1,000 engineers could ship faster.

→ Airbnb adopted Kubernetes to support its developer teams, configuring and deploying over 250 critical production services on it. Airbnb managed to scale its microservice environment to over 20,000 deployments per week (all environments, all apps).


→ Initially, Airbnb’s configuration was manual, rigid, and not very evolved. They then moved to configuration management with Chef on the monolith, but with a very complex hierarchy of services, inherited configuration did not work as expected: Chef recipes were modified frequently, and a change could take down other services on convergence. Finally, Airbnb moved to Kubernetes, which automated the orchestration of their containerized setup.

More specifically, Kubernetes’ declarative approach proved more resilient, and its scheduling led to cost optimization. It also brought all the features and advantages of Docker, making the environment more granular. Most importantly, YAML configurations are very human-readable and provide a hassle-free development experience.

→ Kube-gen, an internal Airbnb tool for K8s, takes service parameters (defined in a single YAML file) and generates the complete Kubernetes service manifests with all the necessary configuration.

The outcome of this shift was impressive: Kubernetes let Airbnb add a layer of abstraction over its containers and set up an automated management workflow. Today, almost 50% of Airbnb’s workloads run on K8s ☸

For more information and insights, go through the keynote below 👇:

Solutions like Kubernetes are buzzing with the spirit of DevOps

Getting Ready for the Kubernetes-driven future🤩:

Container-based microservices applications are the future, and Kubernetes is their platform. It has reached a level of maturity that organizations can depend on to thrive in the competition. That’s why the big three cloud providers have all launched managed versions of K8s: EKS by AWS, GKE by GCP, and AKS by Azure. Red Hat OpenShift is another Kubernetes distribution that should not be neglected.


With its latest release in Dec. 2020, K8s deprecated dockershim (a component of the kubelet) in favor of container runtimes that implement the Container Runtime Interface (CRI) created for K8s. We have yet to witness how implementations of runtimes like CRI-O take over.

I really appreciate your time and attention in reading this piece. I’ll be grateful to have connections like you on LinkedIn 🧑‍💼

In a continuous process of Technical Writing. Gathering, Organizing, Crafting the things that make sense.