All You Need To Know About Kong API Gateway + tutorial

Kong is an open-source API gateway (sometimes described as an API Gateway or Microservices Abstraction Layer) that sits in front of your microservices and provides load balancing, logging, authentication, rate limiting, transformations, and more through plugins. In this post, we take a look at its key features and walk through samples of how to configure services, routes, and plugins.

Kong can be deployed in a variety of configurations, either as an edge API gateway or as an internal API proxy. OpenResty, through its NGINX modules, provides a strong and performant foundation, with Lua plugins for extensibility. Kong can use either PostgreSQL for single-region deployments or Cassandra for multi-region configurations.


Kong’s high performance, its API-first approach (which enables automation of its configuration), and its ease of deployment as a container make it a good fit for almost any project, be it web, mobile, or IoT (Internet of Things).

Image – Kong API Gateway

Key Features

  • Cloud-Native: Platform agnostic, Kong can run from bare metal to Kubernetes.
  • Dynamic Load Balancing: Load balance traffic across multiple upstream services (see the sketch after this list).
  • Circuit-Breaker: Intelligent tracking of unhealthy upstream services.
  • Health Checks: Active and passive monitoring of your upstream services.
  • Service Discovery: Resolve SRV records in third-party DNS resolvers like Consul.
  • Serverless: Invoke and secure AWS Lambda or OpenWhisk functions directly from Kong.
  • OAuth2.0: Easily add OAuth2.0 authentication to your APIs.
  • REST API: Kong can be operated with its RESTful API for maximum flexibility.
  • Geo-Replicated: Configs are always up-to-date across different regions.
  • Failure Detection & Recovery: Kong is unaffected if one of your Cassandra nodes goes down.
  • Clustering: All Kong nodes auto-join the cluster keeping their config updated across nodes.
  • Scalability: Distributed by nature, Kong scales horizontally by simply adding nodes.
  • Performance: Kong handles the load with ease by scaling and using NGINX at the core.
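Several of these features, load balancing in particular, are configured through the same Admin API used later in this tutorial. As a minimal sketch (the hostnames service-a and service-b are placeholders, not containers created in this post), an upstream with two targets can be wired up and attached to a Service like this:

# Create an upstream, a virtual hostname Kong will load balance across
curl -i -X POST http://localhost:8001/upstreams \
  --data 'name=example-upstream'

# Register two backend targets behind the upstream
curl -i -X POST http://localhost:8001/upstreams/example-upstream/targets \
  --data 'target=service-a:80' \
  --data 'weight=100'

curl -i -X POST http://localhost:8001/upstreams/example-upstream/targets \
  --data 'target=service-b:80' \
  --data 'weight=100'

# Point a Service at the upstream name instead of a single host
curl -i -X POST http://localhost:8001/services \
  --data 'name=balanced-service' \
  --data 'host=example-upstream'

Requests proxied through balanced-service are then spread across the registered targets according to their weights.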

Kong is an open-source project and is widely used in production at companies ranging from startups to the Global 5000.

For large organizations, check out Kong Enterprise. Kong is sponsored by Mashape, which also provides an enterprise offering that integrates Kong with its proprietary API analytics and developer portal tools.

In the next sections, we will look at how to run Kong in Docker and then add a service, enable a plugin, and consume the API.

Running Kong in Docker

For the below steps, I’m going to use Docker commands to create the network, containers, etc. Alternatively, you can use the Docker Compose template located here.

#1. Create Docker Network

Create a network so that the Kong and database containers can discover and communicate with each other. Use the below command to create a new network:

docker network create kong-net

Image – Create new network
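Optionally, you can verify the network was created before moving on:

docker network ls --filter name=kong-net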

#2. Start Database Container

Kong supports both PostgreSQL and Cassandra; here I’m going to use PostgreSQL.

Use the below command to start the DB container:

docker run -d --name kong-database \
               --network=kong-net \
               -p 5432:5432 \
               -e "POSTGRES_USER=kong" \
               -e "POSTGRES_DB=kong" \
               postgres:9.6

Image – Start Database Container
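One caveat: newer builds of the official postgres image refuse to initialize unless a superuser password (or an explicit trust setting) is provided. If the container exits immediately, the same command with an explicit password, here "kong" as an arbitrary choice, should work:

docker run -d --name kong-database \
               --network=kong-net \
               -p 5432:5432 \
               -e "POSTGRES_USER=kong" \
               -e "POSTGRES_DB=kong" \
               -e "POSTGRES_PASSWORD=kong" \
               postgres:9.6

If you set a password this way, also pass -e "KONG_PG_PASSWORD=kong" to the Kong containers in steps 3 and 4 so they can authenticate against the database.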

Now that the database is up, our next step is to prepare it for running Kong.

Image – Postgres DB Container is up

#3. Prepare Database

Use the below command to prepare the database for Kong:

docker run --rm \
     --network=kong-net \
     -e "KONG_DATABASE=postgres" \
     -e "KONG_PG_HOST=kong-database" \
     -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
     kong:latest kong migrations bootstrap

Image – Prepare database for Kong
Image – DB Migration completed
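Note that kong migrations bootstrap targets a fresh, empty database like the one we just started. If you were instead pointing Kong at a database that already holds an older Kong schema, the upgrade path uses kong migrations up (followed by kong migrations finish on Kong 1.x and later) with the same environment variables, for example:

docker run --rm \
     --network=kong-net \
     -e "KONG_DATABASE=postgres" \
     -e "KONG_PG_HOST=kong-database" \
     kong:latest kong migrations up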

When the migrations have run and the database is ready, start a Kong container that will connect to the database container.

#4. Start Kong

Use the below command to start the Kong container:

docker run -d --name kong \
     --network=kong-net \
     -e "KONG_DATABASE=postgres" \
     -e "KONG_PG_HOST=kong-database" \
     -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
     -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
     -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
     -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
     -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
     -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
     -p 8000:8000 \
     -p 8443:8443 \
     -p 8001:8001 \
     -p 8444:8444 \
     kong:latest
Image – Start Kong container

Check if Kong is up and running using the below command:

docker ps

Image – Check if Kong container is up and running

Access the Kong Admin API using the curl command to check that all admin components are up and running.

curl -i http://localhost:8001/
Image – Use curl command to check if Kong admin components are up

In the next section, we’ll be adding a Service to Kong.

#5. Add a New Service

Before we can route any traffic, we first need to add a Service. For this example, we will create a new Service pointing to the Mockbin API. Mockbin is an “echo”-type public service that returns the requests it receives back to the requester as responses.

curl -i -X POST \
  --url http://localhost:8001/services/ \
  --data 'name=example-service' \
  --data 'url=http://mockbin.org'

Image – Adding service to Kong
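As an optional sanity check, you can read the Service back from the Admin API to confirm it was stored as expected:

curl -i http://localhost:8001/services/example-service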

Before we can start making requests against the Service, we need to add a Route to it. Routes specify how (and if) requests are sent to their Services after they reach Kong. A single Service can have many Routes.

#6. Add a Route for the Service

Use the below command to add a Route for the Service we created:

curl -i -X POST \
  --url http://localhost:8001/services/example-service/routes \
  --data 'hosts[]=example.com'
Image – Add Route to the service we created

Verify that requests are forwarded through Kong using the following command:

curl -i -X GET \
  --url http://localhost:8000/ \
  --header 'Host: example.com'
Image – Check if requests are forwarded through Kong
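Since a single Service can have many Routes, you could also expose the same Service on a path instead of a Host header. The following is just an illustrative sketch; /mock is an arbitrary path, and because Routes strip the matched path prefix by default (strip_path=true), Kong forwards the remainder of the request on to Mockbin:

curl -i -X POST \
  --url http://localhost:8001/services/example-service/routes \
  --data 'paths[]=/mock'

curl -i --url http://localhost:8000/mock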

#7. Add Rate Limiting Plugin

Kong supports various plugins; here we are going to add Rate Limiting to limit how many HTTP requests a consumer can make in a given period of seconds, minutes, hours, days, months, or years.

Use the below command to add rate limiting with a limit of 100 requests per minute.

curl -i -X POST \
--url http://localhost:8001/services/example-service/plugins/ \
--data 'name=rate-limiting' \
--data 'config.minute=100'
Image – Add Rate Limiting Plugin

#8. Make a Request as a Consumer

Use the below command to issue requests to the service we have created.

curl -i -X GET \
  --url http://localhost:8000/ \
  --header 'Host: example.com'


Here I have used the curl command to issue requests, but this Service can also be accessed externally by any consumer.
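To see the rate-limiting plugin from step #7 in action, issue more than 100 requests within a single minute; once the limit is exhausted, Kong should stop proxying and respond with HTTP 429 until the window resets. A quick shell loop for this check (response header names can vary slightly between Kong versions):

for i in $(seq 1 105); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    --url http://localhost:8000/ \
    --header 'Host: example.com'
done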

Congrats! In this post, we have learned how to run Kong in Docker, add a new Service and Route, enable a plugin, and consume the API.

