What is Service Mesh & why do we need it? + Linkerd Tutorial

In the microservice ecosystem, cross-cutting concerns such as service discovery, service-to-service and origin-to-service security, observability, and resiliency are usually handled by shared assets such as an API gateway or ESB. As the number of microservices grows in size and complexity, these shared assets can become harder to understand and manage.

The service mesh technique addresses these challenges by implementing these cross-cutting capabilities as configuration, deployed as code. A service mesh provides an array of network proxies that run alongside the containers. Each proxy serves as a gateway for every interaction, both between containers and between servers: the proxy accepts the connection and spreads the load across the mesh. The service mesh thus serves as a dedicated infrastructure layer for handling service-to-service communication.

A service mesh offers consistent discovery, security, tracing, monitoring, and failure handling without the need for a shared asset such as an API gateway or ESB. So if you have a service mesh on your cluster, you can achieve all of the following without making changes to your application code.

  • Automatic load balancing
  • Fine-grained control of traffic behavior with routing rules, retries, failovers, etc.
  • Pluggable policy layer
  • Configuration API supporting access controls, rate limits, and quotas
  • Service discovery
  • Service monitoring with automatic metrics, logs, and traces for all traffic
  • Secure service-to-service communication

In the service mesh model, each microservice has a companion sidecar proxy. The sidecar attaches to the parent application and provides supporting features for it. The sidecar also shares the same life cycle as the parent application: it is created and retired alongside the parent.

Image – Side Car Pattern

Key Use Cases for Service Mesh

  • Service discovery: Service mesh provides service-level visibility and telemetry, which helps enterprises with service inventory information and dependency analysis.
  • Operational reliability: Metrics data from the service mesh lets you see how your services are performing, for example, how long they take to respond to service requests and how many resources they are using. This data is useful for detecting issues and correcting them.
  • Traffic governance: With a service mesh, you can configure the mesh network to enforce fine-grained traffic management policies without going back and changing the application. This includes all ingress and egress traffic to and from the mesh.
  • Access control: With a service mesh, you can assign policies so that a service request is granted only based on the location it originated from, or only if the requester passes a health check.
  • Secure service-to-service communications: You can enforce mutual TLS for service-to-service communications for all your services in the mesh. You can also enforce service-level authentication using either TLS or JSON Web Tokens.

Currently, service meshes are offered by providers such as Linkerd, Istio, and Conduit. A service mesh is ideal for multi-cloud scenarios since it offers a single abstraction layer that obscures the specifics of the underlying cloud. Enterprises can set policies with the service mesh and have them enforced across different cloud providers.

In the next section, we will look at how to implement the Linkerd service mesh for the sample application we have used before, i.e., an Nginx deployment.

Linkerd Service Mesh

Linkerd is a service sidecar and service mesh for Kubernetes and other frameworks. The Linkerd sidecar is attached to the parent application and provides supporting features for it. It also shares the same life cycle as the parent application: it is created and retired alongside the parent.

Applications and services often require related functionality, such as monitoring, logging, configuration, and networking services. Linkerd makes running your service easier and safer by giving you runtime debugging, observability, reliability, and security, all without requiring any changes to your code.


Linkerd has three basic components: (1) User Interface (both command-line and web-based options are available), (2) data plane, and (3) control plane.

Image – Linkerd Architecture

Key Components

  • User Interface comprises a CLI (linkerd) and a web UI. The CLI runs on your local machine; the web UI is hosted by the control plane.
  • Control plane is composed of a number of services that run on your cluster and drive the behavior of the data plane. It is also responsible for aggregating telemetry data from the data plane proxies.
  • Data plane comprises ultralight, transparent proxies that are deployed in front of each service. These proxies automatically handle all traffic to and from the service.

In the next steps, we will download and install Linkerd and deploy the sample app.

If you’re looking for a quick start on basic Kubernetes concepts, please refer to the earlier posts on Kubernetes and how to create, deploy, and roll out updates to a cluster.

Step #1: Validate Kubernetes Version

Check that you’re running a Kubernetes cluster version 1.9 or later using the kubectl version command.

Image – Validate Kubernetes version
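The version check can be run as follows, assuming kubectl is already configured to talk to your cluster:

```shell
# Print client and server versions; the server (cluster) version
# reported here should be 1.9 or later for Linkerd.
kubectl version --short
```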

Step #2: Install Linkerd CLI

We will be using the CLI to interact with the Linkerd control plane. Download the CLI onto your local machine using the curl command.

You can also download the CLI directly via the Linkerd releases page.

Image – Linkerd CLI Installation
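A sketch of the CLI install using Linkerd's documented installer script:

```shell
# Download and run the official Linkerd CLI installer script.
curl -sL https://run.linkerd.io/install | sh

# The installer places the binary under ~/.linkerd2/bin;
# add it to your PATH for the current shell session.
export PATH=$PATH:$HOME/.linkerd2/bin
```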

Verify that the CLI is installed and running correctly using the linkerd command.

Image – Verify Linkerd Installation
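For example:

```shell
# Print the CLI version; the server version will show as unavailable
# until the control plane is installed on the cluster.
linkerd version
```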

Step #3: Validate the Kubernetes cluster

To ensure that the Linkerd control plane will install correctly, we are going to run a pre-check to validate that everything is configured correctly.

Image – Pre check Kubernetes cluster
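The pre-install validation uses the CLI's built-in check:

```shell
# Validate that the cluster is configured correctly for Linkerd
# before installing the control plane.
linkerd check --pre
```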

Step #4: Install Linkerd on the Kubernetes cluster

We are going to install the Linkerd control plane into its own namespace using the linkerd install command. Post-installation, the Linkerd control plane resources will be added to your cluster and start running immediately.

Image – Linkerd Installation
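The install works by rendering Kubernetes manifests and piping them to kubectl:

```shell
# Render the control-plane manifests and apply them to the cluster;
# Linkerd installs into its own "linkerd" namespace.
linkerd install | kubectl apply -f -
```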

Post-installation, run linkerd check to verify that everything is OK.

Image – Linkerd Validation (1)
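For example:

```shell
# Wait for the control plane to come up and verify each component.
linkerd check
```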

Post-validation, you should see an [ok] status for all the items.

Image – Linkerd Validation (2)

Step #5: View Control plane components

We have installed the control plane and it is running. To view the components of the control plane, use the kubectl command.

Image – Control Plane components
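For example:

```shell
# List the control-plane deployments running in the linkerd namespace.
kubectl -n linkerd get deploy
```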

You can also view the Linkerd dashboard by running the linkerd dashboard command.

Image – Launch Linkerd Dashboard
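For example:

```shell
# Proxy the dashboard to localhost and open it in a browser;
# run in the background to keep the shell free.
linkerd dashboard &
```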

To view traffic, use the linkerd -n linkerd top deploy/web command.

Image – Linkerd Traffic
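The command from this step, annotated:

```shell
# Show a live, top-like view of traffic (path, request rate, latency)
# for the web deployment in the linkerd namespace.
linkerd -n linkerd top deploy/web
```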

Congrats! We have successfully installed and configured the Linkerd components.

The next step is to set up a sample application and check the metrics.

Step #6: Deploy the sample app

We are going to use the Nginx web app as a sample; to install it, run the kubectl apply command.

Image – Deploy Sample application
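The exact manifest isn't shown here; a minimal sketch of such a deployment, with illustrative namespace and deployment names, might look like this:

```shell
# Create a namespace and a minimal Nginx deployment
# (names are illustrative; substitute your own manifest).
kubectl create namespace nginx-deployment
kubectl create deployment nginx --image=nginx -n nginx-deployment
```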

Now that the application is installed, the next step is to inject Linkerd into the app by piping the linkerd inject and kubectl apply commands together. Kubernetes will execute a rolling deployment and update each pod with the data plane’s proxies, all without any downtime.

Image – Inject Linkerd to application
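A sketch of the inject pipeline, assuming an Nginx deployment named nginx in an nginx-deployment namespace (illustrative names):

```shell
# Fetch the live deployment spec, add the Linkerd proxy sidecar
# to it, and re-apply; Kubernetes rolls the pods without downtime.
kubectl get deploy nginx -n nginx-deployment -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```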

If you’ve noticed, we have added Linkerd to the existing services without touching the original YAML.

To view high-level stats about the app, you can run the linkerd -n nginx-deployment stat deploy command.
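For example, using the same illustrative namespace name as above:

```shell
# Golden metrics (success rate, requests/second, latency percentiles)
# for the deployments in the nginx-deployment namespace.
linkerd -n nginx-deployment stat deploy
```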

The Linkerd dashboard provides a high-level view of what is happening with your services in real time. It can be used to view the “golden” metrics (success rate, requests/second, and latency), visualize service dependencies, and understand the health of specific service routes. To view detailed metrics, you can use Grafana, which is part of the Linkerd control plane and provides actionable dashboards for your services out of the box. It is possible to see high-level metrics and dig down into the details, even at the pod level.

Image – Sample Grafana Dashboard – Top Line Metrics

Today, we have learned how to install Linkerd and its components. We have also deployed a sample service and viewed its traffic and metrics.

Like this post? Don’t forget to share it!

