
What happens when one of your Kubernetes nodes fails?

We already know that Kubernetes is the leading orchestration platform for container-based applications: it automates the deployment and scaling of these apps and streamlines maintenance operations. It coordinates a highly available cluster of computers that are connected to work as a single unit, and its abstractions let you deploy containerized applications to the cluster without tying them to specific individual machines. A Kubernetes cluster usually comprises multiple nodes, so what happens when one of those nodes fails? In this post, we are going to look at how Kubernetes handles this scenario.

A quick recap of the architecture

  1. Any Kubernetes cluster (example below) has two types of resources:
    1. The Master, which controls the cluster
    2. Nodes, the workers that run applications

      Image – Kubernetes cluster
  2. The Master coordinates all activities in your cluster, such as scheduling applications, maintaining applications’ desired state, scaling applications, and rolling out new updates.
  3. Each Node can be a VM or a physical computer that serves as a worker machine in a cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker.
  4. When any applications need to be deployed on Kubernetes, the master issues a command to start the application containers. The master schedules the containers to run on the cluster’s nodes.
  5. The nodes communicate with the master using the Kubernetes API, which the master exposes. End-users can also use the Kubernetes API directly to interact with the cluster.
Image – Kubernetes Architecture (Source: Kubernetes.io)
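On a running cluster, you can see these pieces for yourself with standard kubectl commands (the node name below is a placeholder):

```shell
# List all nodes with their status, roles, and kubelet versions
kubectl get nodes -o wide

# Inspect one node in detail: conditions, capacity, and the pods scheduled on it
kubectl describe node <node-name>
```

The `describe` output includes the node's Conditions section (Ready, MemoryPressure, DiskPressure, etc.), which is exactly what the Master consults when deciding whether a node is healthy.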

Master components provide the cluster’s control plane. The Kubernetes control plane consists of the following processes running on your cluster:

  • The Kubernetes Master is a collection of three processes:
    • API Server – exposes the Kubernetes API; it is the front end of the Kubernetes control plane.
    • Controller-Manager – runs controllers, which handle routine tasks in the cluster.
    • Scheduler – watches for newly created pods that have no node assigned and selects a node for them to run on.
  • Each individual non-master node on the cluster runs the following:
    • Kubelet – the agent that manages the node and communicates with the Kubernetes Master
    • kube-proxy – a network proxy that implements Kubernetes networking services on each node
    • A container runtime, such as Docker

Master components make global decisions about the cluster (for example, scheduling applications) and detect and respond to cluster events.
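On most distributions, the control-plane processes listed above run as pods themselves and can be inspected directly (component names vary slightly between distributions):

```shell
# Control-plane components typically live in the kube-system namespace
kubectl get pods -n kube-system

# Narrow down to a specific component, e.g. the scheduler
kubectl get pods -n kube-system -l component=kube-scheduler
```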

Now that we have a good understanding of the Kubernetes architecture, let us look at what happens when one of the nodes fails.

What happens when one of your Kubernetes nodes fails?

This section details what happens during a node failure and what is expected during the recovery.

  1. About 1 minute after the node failure, kubectl get nodes will report a NotReady state for that node.
  2. In about 5 minutes, the states of all the pods running on the NotReady node change to either Unknown or NodeLost. This is based on the pod eviction timeout setting, whose default duration is five minutes.
  3. Irrespective of the workload type (StatefulSet or Deployment), Kubernetes will automatically evict the pods on the failed node and then try to recreate new ones with the old volumes.
  4. If the node comes back online within 5–6 minutes of the failure, Kubernetes will restart the pods, unmount the volumes, and re-mount them.
  5. If an evicted pod gets stuck in the Terminating state and the attached volumes cannot be released/reused, the newly created pod(s) will get stuck in the ContainerCreating state. There are two options now:
    1. Forcefully delete the stuck pods manually, or
    2. Wait about another 6 minutes for Kubernetes to delete the VolumeAttachment objects associated with the pod, detach the volume from the lost node, and finally allow it to be used by the new pod(s).
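The manual option above can be carried out with kubectl (pod name is a placeholder; use force deletion with care, since the kubelet on the lost node can no longer confirm that the containers have actually stopped):

```shell
# Forcefully remove a pod stuck in Terminating
kubectl delete pod <pod-name> --grace-period=0 --force

# Check for lingering VolumeAttachment objects that block the new pod
kubectl get volumeattachments
```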

In summary, if the failed node recovers later, Kubernetes will restart the terminating pods, detach the volumes, wait for the old VolumeAttachment cleanup, and reuse (re-attach and re-mount) the volumes. Typically these steps take about 1–7 minutes.
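If the default five-minute eviction delay is too long for a given workload, it can be shortened per pod with tolerations for the NoExecute taints that Kubernetes places on failed nodes. A minimal pod-spec fragment, with illustrative values:

```yaml
# Fragment of a pod spec: evict this pod after 60 seconds (instead of the
# default 300) when its node becomes not-ready or unreachable.
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 60
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 60
```

Note that lowering tolerationSeconds trades faster failover for more pod churn during brief network blips, so pick a value that matches how tolerant the application is to restarts.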
