$ kubectl get cluster -o architecture
If you look closely at the Kubernetes logo, you'll notice it is a ship's helm: a nod to the idea of steering and orchestrating containers within a complex system.
The complete process of automatically deploying and managing applications is known as container orchestration.
Kubernetes provides you with an orchestrating platform through which you can perform these tasks smoothly.
Let's dive deep into the ocean of cluster together.🏊🏻
Kubernetes architecture consists of two main components: the Master Node (control plane) and the Worker Nodes.
The Master Node is responsible for managing the overall state of the Kubernetes cluster, while the Worker Node runs the containerized applications.
Think of a factory: there is one control room and many working floors. The master (control) plane and the worker nodes play those roles, respectively.
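If you have a cluster handy, you can see both planes for yourself. The commands below assume a kubeadm-style setup, where the control-plane components run as static pods in the kube-system namespace:

```shell
# List the nodes and their roles (control-plane vs worker)
$ kubectl get nodes

# On kubeadm clusters, the control-plane components themselves
# (kube-apiserver, etcd, kube-scheduler, kube-controller-manager)
# show up as pods in kube-system:
$ kubectl get pods -n kube-system
```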
$ kubectl run nginx --replicas=2 --image=nginx
When you enter the "$ kubectl run" command shown above, the request first goes to the API server, which resides on the control plane.
The API server then runs its three checks (authentication, authorization, and admission control), and if everything passes, it writes the object to etcd, which stores the cluster state in key-value format.
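To make those three checks concrete: authentication asks "who are you?", authorization asks "are you allowed to do this?", and admission control applies policies to the object itself. A small sketch, assuming a kubeadm cluster where you can reach etcd; the exact etcd paths and TLS flags vary by setup:

```shell
# Probe the authorization step from the client side:
$ kubectl auth can-i create pods --namespace default

# Everything the API server accepts is persisted in etcd under the
# /registry prefix, e.g. /registry/pods/<namespace>/<pod-name>.
# (etcdctl needs the cluster's TLS certs; those flags are omitted here.)
$ etcdctl get /registry/pods/default --prefix --keys-only
```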
Then the API server sends a success response back to the client: "pod/nginx created".
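A side note for newer clusters: since kubectl v1.18, "kubectl run" creates only a single bare pod, and the --replicas flag has been removed. The modern equivalent of the command above is a Deployment, created either with "$ kubectl create deployment nginx --image=nginx --replicas=2" or declaratively (labels here are illustrative):

```yaml
# nginx-deployment.yaml - declarative equivalent of the command above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```

Apply it with "$ kubectl apply -f nginx-deployment.yaml".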
Behind the scenes
This is how everything works under the hood👇
Let's see step by step.
The control-plane components (the Controller Manager, the Scheduler, and others) continuously watch the API server; etcd itself is only ever read and written through the API server.
The Controller Manager (c-m), watching the API server, notices that two replicas of a pod are requested, so it asks the ReplicaSet controller to meet that desired state.☝️
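You can watch this reconciliation loop in action. A small experiment, assuming the nginx workload from earlier is running:

```shell
# The ReplicaSet the controller created on our behalf:
$ kubectl get replicaset

# Delete one pod, and the ReplicaSet controller immediately
# spins up a replacement to restore the desired replica count:
$ kubectl delete pod <one-of-the-nginx-pods>
$ kubectl get pods --watch
```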
The Scheduler, also watching the kube-apiserver, detects (through the API server) the new pods the ReplicaSet controller has recorded in etcd, which are not yet assigned to any node.
In the above☝️diagram, the scheduler is selecting the best-fit node for the requested pod.
It checks the pod's requirements: CPU, memory, storage, affinity and anti-affinity rules, and taints & tolerations.
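These requirements come straight from the pod spec. A sketch of where each knob lives (all values are illustrative):

```yaml
# Scheduling-related fields the scheduler looks at (illustrative values)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sched-demo
spec:
  containers:
    - name: nginx
      image: nginx
      resources:
        requests:          # filtering: nodes without this much free CPU/memory are ruled out
          cpu: "250m"
          memory: "128Mi"
  nodeSelector:            # simplest affinity: only nodes carrying this label qualify
    disktype: ssd
  tolerations:             # lets the pod land on nodes that carry a matching taint
    - key: "dedicated"
      operator: "Equal"
      value: "web"
      effect: "NoSchedule"
```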
Once a node is selected, the API server sends the pod specification to the kubelet (the agent that resides on the worker node) so that it can create the pod.
If we go deeper into the kubelet's work, many more things happen under the hood. Let's see.
In the above☝️diagram, the kubelet is interacting with containerd, which in turn manages the containers down at the kernel level.
Have one more look at the main architecture diagram: there we have CRI and CNI, both open standards (CRI comes from Kubernetes itself, CNI from the broader CNCF ecosystem). We are using containerd as the CRI implementation.
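If you can SSH into a worker node, crictl speaks CRI directly to containerd; a quick sketch (crictl must be installed and pointed at containerd's socket, whose path varies by distro):

```shell
# Pod sandboxes the kubelet asked the runtime to create:
$ crictl pods

# Containers and images, as containerd sees them:
$ crictl ps
$ crictl images
```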
CNI uses different networking plugins, such as Calico, Flannel, Weave, and others, to provide network connectivity and isolation for pods.
Meanwhile, the CNI plugin does its job of setting up everything related to networking (joining containers to the network, assigning IP addresses, and routing traffic between containers and to the external world) so that communication works.
Note: Container runtime invokes the CNI plugin when a container is added/deleted for it to do the necessary network configurations.
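For the curious, a CNI network configuration is just a JSON file that the runtime hands to the plugin. A minimal sketch using the reference bridge and host-local plugins; the network name and subnet are illustrative, and real files live in /etc/cni/net.d/:

```json
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }
  ]
}
```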
After all of these steps complete, containerd initiates the creation of the container.
Throughout this procedure, numerous other components are involved, each of which holds enough value to warrant a dedicated blog post.
Different types of controllers
OCI Runtime Specification
Scheduler internals: filtering, scoring, selection
Image layer snapshotter
Thanks for reading the blog. Please let us know if a blog is needed for the above topics. Feel free to hit me up for any AWS/DevOps/Open Source-related discussions. 👋
Manoj Kumar — LinkedIn.
Poonam Pawar — LinkedIn