Introduction – What is an API Gateway?
An API Gateway is a service that acts as the single entry point into an application from the outside world. It is responsible for request routing, API composition, authentication, and other cross-cutting functions. It encapsulates the internal application architecture and services, and exposes an API to its clients.
Main features and benefits of using an API gateway:
- Less coupling – When using an API gateway, the clients (mobile applications, web clients, etc.) do not need to know anything about the internal architecture of the backend application, nor any of its internal routes or services. All client requests are received and routed internally by the API gateway; hence, the client only needs to know the API gateway's address.
- Fewer requests and round trips – For example, an API gateway can retrieve data from multiple services based on a single HTTP request.
- Better security – Keeping the application's internal architecture and services hidden from the outside world reduces the potential attack surface of the application.
- Simpler communication for the clients – The gateway can translate from a standard public API protocol to any other protocol used internally by the application's services.
- And more.
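As a toy sketch of the "fewer requests and round trips" point above (plain shell, not a real gateway; the two functions stand in for hypothetical internal services):

```shell
# Toy illustration of API composition: the gateway answers a single client
# request by calling two internal services and merging the results.
users_service()  { echo '"user":"alice"'; }
orders_service() { echo '"orders":3'; }

gateway_profile() {
  # One client round trip, two internal calls
  echo "{$(users_service),$(orders_service)}"
}

gateway_profile   # -> {"user":"alice","orders":3}
```

The client makes one call to the gateway instead of two calls to separate services.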
Drawbacks of using an API gateway
- Increases application complexity – It is another service that needs to be built, maintained, and properly set up and configured.
- May increase response time, as it introduces another network hop.
- May become a bottleneck, if not scaled properly.
- Creates an additional potential point of failure for the application.
Implementation
We will create a new solution containing two simple web API projects that we will expose via the API gateway, and use Envoy as the gateway to do all the routing work for us.
Creating the solution
Create a new solution to hold our web APIs:
dotnet new sln --name apigateway
Next, create two web API projects:
dotnet new webapi --name users -o ./users-service
dotnet new webapi --name orders -o ./orders-service
Next, we need to add the new projects to the solution:
dotnet sln add .\users-service
dotnet sln add .\orders-service
Creating the API controllers
Create a new controller in users-service as follows:
[ApiController]
[Route("[controller]")]
public class UsersController : ControllerBase
{
    [HttpGet]
    public ActionResult<string> Get()
    {
        return "Hello from users service";
    }
}
Create a new controller in orders-service. Note that with the [Route("[controller]")] attribute, the controller's name determines the route, so we name it OrdersController to serve /orders (the prefix we will configure in Envoy later):

[ApiController]
[Route("[controller]")]
public class OrdersController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        return Ok("Hello from Orders service");
    }
}
HTTPS Certificates
We need to create and assign HTTPS certificates to each of the web API projects. Use the dotnet CLI to create a new certificate for each of the API projects:
dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\orders.pfx -p pa55w0rd!
dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\users.pfx -p pa55w0rd!
dotnet dev-certs https --trust
The password used here is only for the sake of learning, and certainly should not be used in real production scenarios.
We can verify that the certificates were created by navigating to %USERPROFILE%\.aspnet\https and locating the .pfx files there.
Next, we need to create a user-secrets ID for each of the projects. To do this, generate a new GUID for each project and add it to the .csproj file, under any property group:
<UserSecretsId>orders-6FBC2E53-9DE4-4E1D-B05E-32CB0BF4BC40</UserSecretsId>
Set the user secret using the command line (run it from within each project's folder):
dotnet user-secrets set "Kestrel:Certificates:Development:Password" "pa55w0rd!"
It is important to notice that we used Development as the certificate type, which means we need to ensure we set the proper ASPNETCORE_ENVIRONMENT variable for each service using the certificate in the docker-compose file.
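Under the hood, dotnet user-secrets stores the value in a secrets.json file inside the profile's UserSecrets folder, in a subfolder named after the UserSecretsId (e.g. %APPDATA%\Microsoft\UserSecrets\orders-6FBC2E53-9DE4-4E1D-B05E-32CB0BF4BC40\secrets.json for the orders project above). Its contents would look roughly like this:

```json
{
  "Kestrel:Certificates:Development:Password": "pa55w0rd!"
}
```

This is also why the docker-compose file later mounts the UserSecrets folder into each container: the services read the certificate password from there at startup.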
Docker setup
Create a Dockerfile for each of the projects:
Orders service API project:
# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /src
COPY ["orders-service/orders.csproj", "orders-service/"]
COPY ./orders-service/NuGet.Config ./
RUN dotnet restore "./orders-service/orders.csproj" --interactive --configfile ./NuGet.Config
COPY . .
RUN dotnet build "./orders-service/orders.csproj" -c Release -o /app/build
FROM build-env AS publish
RUN dotnet publish "orders-service/orders.csproj" -c Release -o /app/publish
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "orders.dll"]
Users service:
# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /src
COPY ["users-service/users.csproj", "users-service/"]
COPY ./users-service/NuGet.Config ./
RUN dotnet restore "./users-service/users.csproj" --interactive --configfile ./NuGet.Config
COPY . .
RUN dotnet build "./users-service/users.csproj" -c Release -o /app/build
FROM build-env AS publish
RUN dotnet publish "users-service/users.csproj" -c Release -o /app/publish
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "users.dll"]
Once that is done, we need to create a `docker-compose` file, to spin up both services and the `Envoy` proxy as well.
version: '3'
services:
  envoygateway:
    container_name: envoy
    image: envoyproxy/envoy:v1.21.0
    ports:
      - "9901:9901"
      - "10000:10000"
      - "10001:10001"
    networks:
      - crossbridge
    volumes:
      - ./Envoy/envoy.yaml:/etc/envoy/envoy.yaml
      - ./Envoy/https.crt:/etc/ssl/certs/https.crt
      - ./Envoy/key.pem:/etc/ssl/certs/key.pem
  ordersapi:
    container_name: orders
    build:
      context: ./
      dockerfile: ./orders-service/Dockerfile
    ports:
      - "8080:80"
      - "8081:443"
    environment:
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_HTTPS_PORT: "8081"
      ASPNETCORE_ENVIRONMENT: "Development"
    networks:
      - crossbridge
    volumes:
      - ${APPDATA}\microsoft\UserSecrets\:/root/.microsoft/usersecrets
      - ${USERPROFILE}\.aspnet\https:/root/.aspnet/https/
  usersapi:
    container_name: users
    build:
      context: ./
      dockerfile: ./users-service/Dockerfile
    # Note: the host ports must differ from the orders service ones,
    # otherwise docker-compose will fail with a port conflict
    ports:
      - "8082:80"
      - "8083:443"
    environment:
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_HTTPS_PORT: "8083"
      ASPNETCORE_ENVIRONMENT: "Development"
    networks:
      - crossbridge
    volumes:
      - ${APPDATA}\microsoft\UserSecrets\:/root/.microsoft/usersecrets
      - ${USERPROFILE}\.aspnet\https:/root/.aspnet/https/
networks:
  crossbridge:
Envoy Setup
Next, we need to configure Envoy and tell it how to route incoming HTTP requests to the matching services.
Create a new envoy.yaml file inside the Envoy folder, and add the following contents to it:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 9901
static_resources:
  listeners:
  - name: listener_0_clients
    address:
      socket_address:
        address: 0.0.0.0
        # We will be using port 10001 for HTTPS
        # (port 10000 can be used for a non-secure HTTP listener instead)
        port_value: 10001
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          # Used for statistics and metrics purposes
          stat_prefix: ingress_http
          access_log:
          - name: envoy.access_loggers.stdout
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              # Match any domain
              domains: ["*"]
              # An array of routes we want to define routing for
              routes:
              - match:
                  # The pattern we look for in the request URL to do the routing
                  prefix: "/users"
                route:
                  # How to rewrite the request URL when routing
                  prefix_rewrite: "/users"
                  # Which cluster to send the request to.
                  # A cluster has an upstream, which is basically an application accepting requests from clients
                  cluster: users_cluster
              - match:
                  prefix: "/orders"
                route:
                  prefix_rewrite: "/orders"
                  cluster: orders_cluster
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: "/etc/ssl/certs/https.crt"
              private_key:
                filename: "/etc/ssl/certs/key.pem"
  clusters:
  - name: users_cluster
    type: LOGICAL_DNS
    # Comment out the following line to test on v6 networks
    dns_lookup_family: V4_ONLY
    load_assignment:
      cluster_name: users_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                # The address to forward the request to.
                # This can be the service name from the docker-compose file, or a standard URL
                address: usersapi
                # This is the container port the upstream listens on (80 for non-secure traffic);
                # it is not the host port from docker-compose (e.g. 8081)
                port_value: 80
  - name: orders_cluster
    type: LOGICAL_DNS
    # Comment out the following line to test on v6 networks
    dns_lookup_family: V4_ONLY
    load_assignment:
      cluster_name: orders_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: ordersapi
                port_value: 80
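As a toy illustration of the route table above (plain shell, not Envoy itself): each request path is matched against the route prefixes in order, rewritten, and handed to the matching cluster. Since prefix_rewrite here equals the matched prefix, the path is forwarded unchanged.

```shell
# Toy model of the routing table configured above: match the request path
# against each route prefix in order and print the target cluster plus the
# (unchanged) rewritten path.
route() {
  path="$1"
  case "$path" in
    /users*)  echo "users_cluster $path" ;;
    /orders*) echo "orders_cluster $path" ;;
    *)        echo "no_route" ;;
  esac
}

route /users          # -> users_cluster /users
route /orders/42      # -> orders_cluster /orders/42
route /inventory      # -> no_route (no matching prefix; Envoy would return 404)
```

This also shows why the orders controller must be named OrdersController: a request to /inventory matches no configured prefix and never reaches the service.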
Enabling HTTPS support in Envoy
To enable HTTPS support in Envoy, we need to create certificates for it to use. To do so, we will generate the certificates using WSL (Windows Subsystem for Linux) and the OpenSSL tool.
Let’s go through the needed steps:
- Open WSL from your Windows machine.
- Open Explorer at the current path by typing: explorer.exe .
- Create a new file called https.config and open it in your text editor (VS Code, for example).
- Add the following contents to the file:
[ req ]
default_bits = 2048
default_md = sha256
default_keyfile = key.pem
prompt = no
encrypt_key = no
distinguished_name = req_distinguished_name
req_extensions = v3_req
x509_extensions = v3_req
[ req_distinguished_name ]
commonName = "localhost"
[ v3_req ]
subjectAltName = DNS:localhost
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = critical, 1.3.6.1.5.5.7.3.1, 1.3.6.1.5.5.7.3.2
- Open the WSL command line again, and use the following command to generate the private key (key.pem) and a certificate signing request (csr.pem):
openssl req -config https.config -new -out csr.pem
- Now run the following command to generate the self-signed certificate based on the key we just created:
openssl x509 -req -days 365 -extfile https.config -extensions v3_req -in csr.pem -signkey key.pem -out https.crt
- Copy the https.crt and key.pem files to a dedicated location (e.g. the Envoy folder of our solution).
- Update the Envoy section of the docker-compose.yaml file to include mounted volumes that contain the certificates:
  - ./Envoy/https.crt:/etc/ssl/certs/https.crt
  - ./Envoy/key.pem:/etc/ssl/certs/key.pem
- In addition, in docker-compose.yaml, add a new port mapping to be used for secure (HTTPS) traffic:
  - "10001:10001"
- Next, we need to inject the newly created certificates into Envoy, in the envoy.yaml file. Add a transport_socket section to the listener's filter chain, at the same level as the filters entry:
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    common_tls_context:
      tls_certificates:
      - certificate_chain:
          filename: "/etc/ssl/certs/https.crt"
        private_key:
          filename: "/etc/ssl/certs/key.pem"
- Install the certificate (https.crt) on our machine, into the trusted root certification authorities store.
- That's it for configuring Envoy.
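For convenience, the certificate-generation steps above can be combined into a single script. This is a sketch that assumes the openssl CLI is available; it writes https.config, key.pem, csr.pem and https.crt into the current directory:

```shell
# One-shot version of the certificate-generation steps above.
# Write the OpenSSL request configuration from the article.
cat > https.config <<'EOF'
[ req ]
default_bits = 2048
default_md = sha256
default_keyfile = key.pem
prompt = no
encrypt_key = no
distinguished_name = req_distinguished_name
req_extensions = v3_req
x509_extensions = v3_req

[ req_distinguished_name ]
commonName = "localhost"

[ v3_req ]
subjectAltName = DNS:localhost
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = critical, 1.3.6.1.5.5.7.3.1, 1.3.6.1.5.5.7.3.2
EOF

# Generate the private key (key.pem) and certificate signing request (csr.pem)
openssl req -config https.config -new -out csr.pem

# Self-sign the request to produce the certificate Envoy will serve
openssl x509 -req -days 365 -extfile https.config -extensions v3_req \
  -in csr.pem -signkey key.pem -out https.crt

# Quick sanity check: print the certificate subject
openssl x509 -in https.crt -noout -subject
```

Because prompt = no and encrypt_key = no are set in the config, the commands run non-interactively and produce an unencrypted key, which is what the Envoy volume mounts above expect.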
Done
Now that we have everything set up, we can access our API services using Envoy as the API gateway.
After running docker-compose up --build, a request to https://localhost:10001/users is routed by Envoy to the users service, and a request to https://localhost:10001/orders is routed to the orders service.