Implementing secure containers using gVisor+Docker tutorial

Linux containers have been around since the early 2000s, and the core building blocks were merged into the Linux kernel around 2007. Because of their small footprint and portability, the same hardware can support far more containers than virtual machines, dramatically reducing infrastructure costs and allowing applications to be deployed faster. But due to usability issues, containers didn't attract widespread interest until Docker arrived in 2013.

Unlike hypervisor virtualization (e.g., Xen, Hyper-V), where virtual machines run on physical hardware via an intermediation layer (the hypervisor), containers run as ordinary userspace processes on top of the operating system's kernel. That makes them very lightweight and fast.

Containers have also sparked interest in microservice architecture, a design pattern in which complex applications are broken down into smaller, composable services that work together.

With the increasing adoption of containers and microservices in the enterprise, new risks come along as well. For example, if any one container breaks out of its isolation, it can allow unauthorized access to other containers, the host, or even the wider data center, affecting all the containers hosted on the same host OS.

To mitigate these risks, we are going to take a look at various isolation approaches, and specifically at Google's gVisor, a sandbox that provides secure isolation for containers. gVisor also integrates with the Docker and Kubernetes container platforms, making it simple to run sandboxed containers in production environments.

With this context in place, let's check out how to implement sandboxed containers.

Cross-posted from: New Stack

Roundup of Container isolation mechanisms

#1. Machine-level virtualization exposes virtualized hardware to a guest kernel via a Virtual Machine Monitor (VMM). Running containers in distinct virtual machines can provide strong isolation, compatibility, and performance, but it often requires additional proxies and agents, and it may come with a larger resource footprint and slower start-up times.

Image – Machine Level Virtualization

Image – Comparison between Conventional Platform vs Machine Level Virtualization Enabled Platform

KVM is one of the best-known examples of machine-level virtualization. More recently, Amazon launched Firecracker, a lightweight virtual machine monitor built on top of KVM. AWS Lambda and Fargate use Firecracker extensively for provisioning and running the secure sandboxes that execute customer functions.

Image – KVM Virtualization infrastructure

Another notable project based on KVM is Kata Containers, which leverages lightweight virtual machines that integrate seamlessly with the container ecosystem, including Docker and Kubernetes.
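As a quick illustration (assuming Kata Containers has already been installed and registered with Docker under the runtime name kata-runtime, which can vary by installation), selecting it for a container looks much like selecting any other OCI runtime:

docker run --rm --runtime=kata-runtime hello-world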

#2. Rule-based execution, for example via seccomp filters, allows the specification of a fine-grained security policy for an application or container. In practice, however, it can be extremely difficult to reliably define such a policy for an arbitrary application, which makes this approach challenging to apply in all scenarios.

Image – Rule-Based Execution

To use seccomp with Docker, Docker must be built with seccomp support and the kernel must be configured with CONFIG_SECCOMP enabled. To check whether your kernel supports seccomp and has it enabled, run:

grep CONFIG_SECCOMP= /boot/config-$(uname -r)
Image – Check if seccomp is enabled
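If seccomp is enabled, the command should print a line such as CONFIG_SECCOMP=y (the exact output depends on your kernel build).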

By default, Docker runs containers with its default seccomp profile; to override it, pass the --security-opt option to docker run. For example, the following explicitly specifies a policy:

$ docker run --rm \
             -it \
             --security-opt seccomp=/usr/local/profile.json \
             hello-world

The default seccomp profile disables around 44 system calls out of 300+ for running containers. It is moderately protective while providing wide application compatibility. The default profile can be found in the Docker (Moby) source repository.

A profile such as profile.json whitelists specific system calls and denies access to all other system calls.
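As an illustrative sketch only (this is not Docker's actual default profile, and the whitelisted syscall names below are arbitrary examples that would need to match what your application really uses), a minimal whitelist-style profile denies everything by default and allows a handful of calls:

sudo tee /usr/local/profile.json <<'EOF'
{
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {
            "names": ["read", "write", "open", "close", "exit", "exit_group", "futex", "rt_sigreturn"],
            "action": "SCMP_ACT_ALLOW"
        }
    ]
}
EOF

A container started with --security-opt seccomp=/usr/local/profile.json would then only be able to make the listed calls; real applications typically need a much longer list.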

In the next section, we will look at Google's gVisor approach to container isolation.

Introducing gVisor

gVisor is a lightweight user-space kernel, written in Go, that implements a substantial portion of the Linux system call surface. By implementing the Linux system surface itself, it provides isolation between the host and the application. It also includes an Open Container Initiative (OCI) runtime called runsc, so the isolation boundary between the application and the host kernel is maintained.

It intercepts all application system calls and acts as the guest kernel, without the need for translation through virtualized hardware. gVisor does not simply redirect application system calls to the host kernel; instead, it implements most kernel primitives (signals, file systems, futexes, pipes, memory management, etc.) and builds complete system call handlers on top of these primitives.

Image – gVisor Kernel

Unlike the mechanisms above, gVisor provides a strong isolation boundary by intercepting application system calls and acting as the guest kernel, all while running in user space. And unlike a VM, which requires a fixed set of resources at creation time, gVisor can accommodate changing resources over time, just as normal Linux processes do.

Although gVisor implements a large portion of the Linux surface and is broadly compatible, there are unimplemented features and bugs. Please file a bug on the gVisor issue tracker if you run into issues.

How to implement sandboxed containers (for Docker applications)

The first step is to download the runsc container runtime from the latest nightly build. After downloading the binary, check it against the SHA512 checksum file:

wget https://storage.googleapis.com/gvisor/releases/nightly/latest/runsc
wget https://storage.googleapis.com/gvisor/releases/nightly/latest/runsc.sha512
sha512sum -c runsc.sha512
chmod a+x runsc
sudo mv runsc /usr/local/bin
Image – runsc gVisor Docker runtime

The next step is to configure Docker to use runsc by adding a runtime entry to the Docker daemon configuration (/etc/docker/daemon.json).

Image – Docker configuration for runsc
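In text form, the runtime entry looks like the following (assuming runsc was installed to /usr/local/bin/runsc as above; note that this command overwrites the file, so if you already have a daemon.json, merge the runtimes entry into it instead):

sudo tee /etc/docker/daemon.json <<'EOF'
{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc"
        }
    }
}
EOF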

Restart the Docker daemon after making the changes.
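On hosts that use systemd (an assumption; adjust for your init system), this can be done with:

sudo systemctl restart docker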

Now that the gVisor configuration is complete, we can test it by running the hello-world container:

docker run --runtime=runsc hello-world

Image – Run hello world container using runsc (gVisor)
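As an optional sanity check (based on gVisor's documentation), you can confirm that the sandboxed kernel is handling system calls by reading the kernel log from inside a runsc container; it shows gVisor's own boot messages rather than the host's:

docker run --rm --runtime=runsc alpine dmesg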

Let us now try running an httpd server on gVisor. Here, a container named test-apache-app uses the httpd image with the gVisor runtime.

Image – Run httpd server on gVisor
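The exact command from the screenshot is not reproduced here, but a typical invocation would look something like the following (the published port 8080 is an assumption; adjust as needed):

docker run -d --runtime=runsc --name test-apache-app -p 8080:80 httpd

You can then verify the server is reachable from the host with curl http://localhost:8080.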

The runsc runtime can also run sandboxed pods in a Kubernetes cluster through the use of either the cri-o or cri-containerd projects, which convert messages from the Kubelet into OCI runtime commands.
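As a rough sketch (assuming a cluster whose nodes already have their CRI runtime configured with a runsc handler, and a Kubernetes version that supports the RuntimeClass API), pods can then opt into the sandbox via a RuntimeClass:

cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF

A pod selects it by setting runtimeClassName: gvisor in its spec.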

Congrats! We have learned how to implement sandboxed containers using gVisor.

