What is Kubernetes and How It Works

To better understand what Kubernetes is and why it is so important, it helps to have some knowledge of containers. If you need an introduction to containers, we recommend our previous article: What Are Containers? An Introduction to Containerization.

As we discussed in that article, a containerized deployment requires a platform that can manage containers efficiently.

To be more specific, you need a container orchestrator that will automate the tasks required to successfully run your containerized applications. Although there are several container orchestrators, one of the most popular is Kubernetes.

What is Kubernetes?

Kubernetes, also known as k8s, is an open-source container orchestration system that allows you to automatically manage containerized workloads and services. This means that Kubernetes can help you automatically deploy, scale, and manage your containerized applications. Kubernetes was initially developed at Google and open-sourced in 2014; Google later donated the project to the Cloud Native Computing Foundation (CNCF), founded in 2015.

Kubernetes offers several important features. Automatic load balancing distributes traffic so that no single server is flooded with requests. Container lifecycle management means that when containers need to be started, suspended, or shut down, Kubernetes takes care of it; and if containers fail or malfunction, Kubernetes can replace them automatically.
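One common way to get this self-healing behavior is a Deployment, which tells Kubernetes to keep a fixed number of identical Pods running; if one fails, Kubernetes starts a replacement. A minimal sketch (the name web-app and the image are illustrative, not from this article):

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 replicas of this Pod
# running at all times, replacing any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
```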

In addition, Kubernetes can dynamically scale; if you need to scale up or down your application based on demand, Kubernetes helps with that as well. In other words, Kubernetes ensures that your applications always work as expected.
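Dynamic scaling can be expressed declaratively, for example with a HorizontalPodAutoscaler. The sketch below assumes a Deployment named web-app already exists; the name, replica range, and CPU threshold are all illustrative:

```yaml
# Hypothetical autoscaler: scales the "web-app" Deployment between
# 2 and 10 replicas, targeting 80% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```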

Kubernetes Components

With a Kubernetes deployment, you get a Kubernetes Cluster. This cluster is a combination of several components that work together to successfully run your containerized applications. To understand a little more about Kubernetes and what a Kubernetes cluster is, we will review some of the components that make up a cluster.


Pods

A Pod is a computing unit that can host one or more containers, which can share resources such as storage and network. This means that a Pod can run a single container, but it can also run several containers that need to work together. In other words, Kubernetes uses Pods to manage and interact with containers. A Pod can request CPU and memory resources, depending on the tasks it needs to perform.
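For illustration, a minimal Pod manifest with CPU and memory requests might look like the sketch below (the name, image, and amounts are hypothetical):

```yaml
# Hypothetical single-container Pod with resource requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:        # minimum resources the scheduler reserves
          cpu: "250m"
          memory: "128Mi"
        limits:          # hard caps the container may not exceed
          cpu: "500m"
          memory: "256Mi"
```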

Pods are tied to the machines where they are created and remain there until they are destroyed. However, you can run replicas of the same Pod on different machines.

Kubernetes uses a DaemonSet to ensure that all or some of your machines run a copy of a Pod. As you add machines to your cluster, Pods are added to those machines. Similarly, if a machine is removed, the DaemonSet ensures that the pods associated with that machine are also destroyed.
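As a sketch, a DaemonSet that runs one log-collection Pod on every machine could look like this (the log-agent name and the image are placeholders, not something this article prescribes):

```yaml
# Hypothetical DaemonSet: one copy of this Pod runs on each machine
# in the cluster; Pods are added or removed as machines join or leave.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluentd:v1.16
```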

A characteristic of Pods worth noting is that they are ephemeral, meaning they have a short lifetime. When a Pod fails, Kubernetes can replace it with a new Pod without interrupting the workflow.

Given that Pods are ephemeral, you can lose files when a container fails, or you might encounter issues when trying to share files among containers working on the same Pod. A solution to this is the usage of volumes and persistent volumes. If you want to learn more about volumes and Kubernetes storage, make sure to check out our next article: How to Choose the Best Kubernetes Storage?
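As a simple illustration of sharing files between containers in the same Pod, the sketch below uses an emptyDir volume; note that an emptyDir lives only as long as its Pod, so durable data still needs a persistent volume (all names are illustrative):

```yaml
# Hypothetical Pod: two containers share files through one volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod
spec:
  volumes:
    - name: shared
      emptyDir: {}       # scratch space, deleted with the Pod
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 5; done"]
      volumeMounts:
        - name: shared
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /data/log.txt"]
      volumeMounts:
        - name: shared
          mountPath: /data
```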


Nodes

Nodes are worker machines, also called worker nodes, which can be physical or virtual. A Node usually hosts several Pods and runs all the services needed to run those Pods. In short, Nodes are in charge of running your containerized applications. To give you more information about Nodes, we will review two components found on every Node: the kubelet and the kube-proxy.

Please note that a third component, the container runtime, must also be installed on every Node to run Pods; however, we won’t be discussing that component in this article.


kubelet

The kubelet is an agent that runs on each Node and ensures that the containers described in Pod specifications are running and healthy. The kubelet is in charge of communicating with the Control Plane (discussed in the next section): if Pods fail, the kubelet follows the instructions from the Control Plane and can create or destroy a Pod accordingly. The kubelet also reports the Node’s health to the Control Plane.


kube-proxy

The kube-proxy is a network proxy that runs on each Node and routes network traffic to the Pods on that Node. The kube-proxy enables network communication both inside and outside of a Kubernetes cluster.

Suppose you have a Pod that runs a web page and a Pod that runs your database. If the web page needs to communicate with the database, the kube-proxy is in charge of making that communication possible.
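In practice, the web page would typically reach the database through a Service, a stable address that kube-proxy routes to the matching Pods. A sketch under assumed names (the database label and the PostgreSQL port are illustrative):

```yaml
# Hypothetical Service: gives database Pods one stable name and port.
# kube-proxy programs the routing rules that forward this traffic
# to whichever Pods currently carry the matching label.
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database      # matches Pods labeled app: database
  ports:
    - port: 5432       # port clients connect to
      targetPort: 5432 # port the container listens on
```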

Control Plane

The control plane contains several components that are in charge of controlling and managing all the Kubernetes services required to run and deploy an application successfully. The control plane manages all the Pods and ensures that the desired state is achieved. The control plane also makes sure all the resource requirements are met.

We will briefly discuss the components in the control plane, but you can find more information in the Control Plane Components section in the Kubernetes documentation.

  • kube-apiserver: exposes the Kubernetes API and allows you to interact with your cluster.
  • etcd: a consistent, highly available key-value store that holds all cluster data used by Kubernetes.
  • kube-scheduler: decides where to run each Pod, selecting a Node based on its resource availability and other constraints.
  • kube-controller-manager: responsible for running the controllers in your cluster. Controllers watch the cluster and make or request changes to move it toward the desired state.
  • cloud-controller-manager: allows you to connect your Kubernetes cluster to cloud providers.

All these components work together to simplify a containerized deployment and ensure it is successful. This, in part, is how Kubernetes solves the problems you could face with a containerized deployment. However, we did not discuss storage for Kubernetes.

As we mentioned before, Pods are ephemeral, and you could lose data if they are destroyed. For that reason, you need a reliable software storage solution to keep your data safe so that whenever a Pod fails, the new Pods can resume the task being performed by the faulty Pod.

Also, if you need or are planning to deploy stateful applications, you need somewhere to store your data. This is where you will need a storage solution that works well with Kubernetes to provide your Pods with Persistent Volumes. If you would like to learn more about storage for Kubernetes and how to take Kubernetes to the next level with stateful applications, make sure to read our next article: How to Choose the Best Kubernetes Storage?

Learn More

Talk to Us

We are here to answer all of your questions about how Quobyte can benefit your organization.

Are you ready to chat? Want a live demo?

Talk to us