
What happens when one of your Kubernetes nodes fails?

We already know that Kubernetes is the No. 1 orchestration platform for container-based applications: it automates the deployment and scaling of these apps and streamlines maintenance operations. It coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them to individual machines. A Kubernetes cluster usually comprises multiple nodes, so what happens when one of those nodes fails? In this post, we look at how Kubernetes handles this scenario.

A quick recap of the architecture

  1. Any Kubernetes cluster (example below) has two types of resources:
    1. The Master, which controls the cluster
    2. The Nodes, the worker machines that run applications

      Image – Kubernetes cluster

  2. The Master coordinates all activities in your cluster, such as scheduling applications, maintaining applications’ desired state, scaling applications, and rolling out new updates.
  3. Each Node can be a VM or a physical computer that serves as a worker machine in a cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker.
  4. When an application needs to be deployed on Kubernetes, the master issues a command to start the application containers and schedules them to run on the cluster’s nodes.
  5. The nodes communicate with the master using the Kubernetes API, which the master exposes. End-users can also use the Kubernetes API directly to interact with the cluster.
Image – Kubernetes Architecture (Source: Kubernetes.io)

Master components provide the cluster’s control plane. The Kubernetes control plane consists of the following processes running on your cluster:

  • The Kubernetes Master is a collection of three processes:
    • API Server, which exposes the Kubernetes API; it is the front end of the Kubernetes control plane.
    • Controller Manager, which runs controllers that handle routine tasks in the cluster.
    • Scheduler, which watches for newly created pods that have no node assigned and selects a node for them to run on.
  • Each individual non-master node in the cluster runs:
    • Kubelet, the agent that communicates with the Kubernetes Master.
    • kube-proxy, a network proxy that implements Kubernetes networking services on each node.
    • A container runtime, such as Docker.

Master components make global decisions about the cluster (for example, scheduling applications) and detect and respond to cluster events.
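To make the scheduling flow from step 4 concrete, here is a minimal Deployment manifest. The name and image below are purely illustrative; the point is that the manifest declares desired state, and the scheduler, not the user, picks a worker node for each pod replica:

```yaml
# Illustrative Deployment manifest (names and image are hypothetical).
# The API server records the desired state (2 replicas); the scheduler
# then assigns each pod to a suitable node in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image works here
```

Applying this with kubectl apply -f asks the master for two replicas; the controller manager and scheduler then keep that desired state satisfied, which is exactly the machinery that kicks in when a node fails.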

Now that we have a good understanding of the Kubernetes architecture, let us look at what happens when one of the nodes fails.

What happens when one of your Kubernetes nodes fails?

This section details what happens during a node failure and what is expected during the recovery.

  1. Within about 1 minute of the node failure, kubectl get nodes reports the node in NotReady state.
  2. In about 5 minutes, the states of all the pods running on the NotReady node change to either Unknown or NodeLost. This is governed by the pod eviction timeout setting, which defaults to five minutes.
  3. Irrespective of the workload type (StatefulSet or Deployment), Kubernetes automatically evicts the pods on the failed node and then tries to recreate new ones with the old volumes.
  4. If the node comes back online within 5–6 minutes of the failure, Kubernetes restarts the pods, unmounts the volumes, and re-mounts them.
  5. If an evicted pod gets stuck in the Terminating state and the attached volumes cannot be released or reused, the newly created pod(s) get stuck in the ContainerCreating state. There are two options at this point:
    1. Forcefully delete the stuck pods manually, or
    2. Wait: Kubernetes takes about another 6 minutes to delete the VolumeAttachment objects associated with the pod, then finally detaches the volume from the lost node and allows it to be used by the new pod(s).
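The timeline above can be sketched as a small Python function. This is an illustrative model, not Kubernetes source code: the constants are simply the upstream defaults for the node monitor grace period (40 seconds until NotReady) and the pod eviction timeout (5 minutes), and real clusters may tune both.

```python
# Illustrative model of the default node-failure timeline.
# Timings are the upstream kube-controller-manager defaults;
# real clusters may override them.

NODE_MONITOR_GRACE_PERIOD = 40   # seconds until the node is marked NotReady
POD_EVICTION_TIMEOUT = 300      # further seconds until its pods are evicted


def node_failure_state(seconds_since_last_heartbeat: int) -> str:
    """Return the approximate cluster view of a node at a given time."""
    if seconds_since_last_heartbeat < NODE_MONITOR_GRACE_PERIOD:
        return "Ready"  # heartbeats recently seen, node considered healthy
    if seconds_since_last_heartbeat < NODE_MONITOR_GRACE_PERIOD + POD_EVICTION_TIMEOUT:
        return "NotReady"  # node flagged, pods still bound to it
    return "NotReady, pods evicted"  # pods are being rescheduled elsewhere
```

For example, at 10 seconds the node still looks Ready, at 2 minutes it is NotReady but its pods are untouched, and past roughly 6 minutes the eviction described in step 3 has begun.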

In summary, if the failed node recovers later, Kubernetes restarts the terminating pods, detaches the volumes, waits for the old VolumeAttachment cleanup, and then reuses (re-attaches and re-mounts) the volumes. Typically these steps take about one to seven minutes.
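The five-minute window in step 2 can also be tuned per pod. Since taint-based evictions became the default, Kubernetes automatically adds node.kubernetes.io/unreachable and node.kubernetes.io/not-ready tolerations with tolerationSeconds: 300 to pods; overriding them in the pod spec changes how long a pod stays bound to a lost node. A sketch (the 60-second value is just an example):

```yaml
# Pod-spec fragment: override the default 5-minute eviction window.
# tolerationSeconds controls how long the pod tolerates a failed node
# before the control plane evicts it.
tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60   # evict after 1 minute instead of 5
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
```

Shorter values mean faster failover for stateless workloads; for pods with attached volumes, the VolumeAttachment cleanup described in step 5 still applies.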

Published by
Karthik
