A local dev setup with nginx TLS pass-through

A common principle for avoiding, or at least minimizing the chances of, “It works on my machine :(” awkwardness is to reduce config/setup differences between the dev and prod environments as much as possible. E.g. testing locally on a dev box with https://www-dev.fake.io is much better than http://localhost:8080. If you don’t agree, stop reading. 🙂

A scenario

So we have three web apps/services that need to talk to each other to work end to end. In production they are https://sso.fake.io, https://api.fake.io, and https://www.fake.io. On a dev box, we want them to run and be accessible as https://sso-dev.fake.io, https://api-dev.fake.io, and https://www-dev.fake.io. From the user-interaction point of view, the only difference between dev and prod is the -dev suffix in the domain names.

The problem

Behind the scenes, though, all three web services run on the same box: specifically, in the same host network namespace, on different ports.

  • That means we need a reverse proxy that fans out requests based on the domain name
  • The proxy needs to be super lightweight in terms of both setup effort and performance. We want minimal dev effort for maximum dev/prod similarity
  • We need to pass TLS through, as the three web services already serve HTTPS themselves

A solution

The idea looks like this:

Linux box

To reach the lightweight goal, the natural choice is to run an Nginx reverse proxy as a container. Nginx is super light and fast, and a single command launches it as a Docker container, so the dev effort is minimal. In fact it is zero if this command is part of the inner-loop script.

docker run --name dev-nginx --mount type=bind,source="${PWD}/docker/dev-nginx/nginx_linux.conf",target=/etc/nginx/nginx.conf --network host -d nginx:alpine
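To make that inner-loop script safely re-runnable, the launch can be wrapped so a stale container from a previous run doesn’t block the `docker run`. A minimal sketch, assuming the container name and config path from the command above (the helper function name is ours):

```shell
# Sketch of an idempotent inner-loop helper: remove any stale container,
# then start a fresh dev-nginx in host network mode.
start_dev_nginx() {
    docker rm -f dev-nginx 2>/dev/null || true
    docker run --name dev-nginx \
        --mount type=bind,source="${PWD}/docker/dev-nginx/nginx_linux.conf",target=/etc/nginx/nginx.conf \
        --network host -d nginx:alpine
}
```

Calling `start_dev_nginx` twice in a row then simply restarts the proxy instead of failing on a name conflict.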

For a Linux dev box, with the great --network host option, things are easy: Nginx lives in the same host network namespace as the three web services, so no explicit port mapping is needed. Inside Nginx, the upstream web services can be reached via the loopback address (127.0.0.1) as well.

stream {
    map $ssl_preread_server_name $name {
        sso-dev.fake.io sso_backend;
        api-dev.fake.io api_backend;
        www-dev.fake.io www_backend;
    }

    upstream sso_backend {
        server 127.0.0.1:8443;  # example port; host.docker.internal:8443 on win/mac
    }

    upstream api_backend {
        server 127.0.0.1:9443;  # example port; host.docker.internal:9443 on win/mac
    }

    upstream www_backend {
        server 127.0.0.1:10443; # example port; host.docker.internal:10443 on win/mac
    }

    server {
        listen      443;
        proxy_pass  $name;
        ssl_preread on;
    }
}

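Once the proxy is up, the domain-based fan-out can be smoke-tested without even touching /etc/hosts, by pinning name resolution directly in curl. A sketch, wrapped in a function so it only runs when you have a live proxy (the -k flag is there because dev certificates are typically self-signed):

```shell
# Sketch: verify SNI-based routing by forcing each -dev name to resolve
# to the proxy on 127.0.0.1, without editing /etc/hosts. -k skips
# certificate verification for self-signed dev certs.
smoke_test_proxy() {
    for name in sso-dev.fake.io api-dev.fake.io www-dev.fake.io; do
        curl -sk --resolve "${name}:443:127.0.0.1" "https://${name}/" \
            -o /dev/null -w "${name} -> HTTP %{http_code}\n"
    done
}
```

Each line of output shows which backend answered for which name.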
For win/mac users, however, because Docker runs inside a virtual machine, there isn’t the luxury of host network mode. So port mapping (-p 443:443) is required between the host and the container. Additionally, for the Nginx container to talk to the web services in the host network namespace, one has to use the magic domain name host.docker.internal instead of the nice loopback address.

win/mac box
docker run --name dev-nginx --mount type=bind,source="${PWD}/docker/dev-nginx/nginx.conf",target=/etc/nginx/nginx.conf -p 80:80 -p 443:443 -d nginx:alpine
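Correspondingly, the nginx.conf for win/mac differs from the Linux one only in the upstream addresses: the services are reached via host.docker.internal rather than loopback. A sketch of one upstream (the port is an illustrative assumption):

```nginx
# win/mac variant: reach host-namespaced services via host.docker.internal.
# The port is an example; use whatever each service actually listens on.
upstream sso_backend {
    server host.docker.internal:8443;
}
```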

Nginx supports domain-name-based virtual servers like a no-brainer. More importantly, Nginx can pass TLS through easily with the stream modules: ngx_stream_core_module has been available since 1.9.0, and the $ssl_preread_server_name variable used above comes from ngx_stream_ssl_preread_module, available since 1.11.5.
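Whether a given nginx build has the stream modules compiled in can be checked from its configure flags, which `nginx -V` prints (to stderr). A sketch, assuming the nginx:alpine image from the launch command, wrapped in a function so it only runs where Docker is available:

```shell
# Sketch: inspect the image's compile-time flags for stream support.
# `nginx -V` writes its output to stderr, hence the 2>&1.
check_stream_support() {
    docker run --rm nginx:alpine nginx -V 2>&1 | grep -q -- '--with-stream' \
        && echo "stream support: yes"
}
```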

Lastly, map the -dev names to the loopback address in /etc/hosts, and voilà: on your local dev box, you have a setup super close to the production experience, from the users’ perspective.

>_ cat /etc/hosts
127.0.0.1    sso-dev.fake.io api-dev.fake.io www-dev.fake.io
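The hosts-file edit itself can be scripted idempotently. A sketch that demos against a temp file so it is safe to run as-is; point HOSTS_FILE at /etc/hosts (and run with sudo) for the real thing:

```shell
# Sketch: add the -dev names to a hosts file only if they're not already
# there. Demoed against a temp file; set HOSTS_FILE=/etc/hosts (with sudo)
# for the real edit.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
DEV_LINE="127.0.0.1 sso-dev.fake.io api-dev.fake.io www-dev.fake.io"
# Append only if absent, so repeated runs are no-ops.
grep -qF 'sso-dev.fake.io' "$HOSTS_FILE" || printf '%s\n' "$DEV_LINE" >> "$HOSTS_FILE"
grep -qF 'sso-dev.fake.io' "$HOSTS_FILE" || printf '%s\n' "$DEV_LINE" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```

Running it twice, as above, still leaves exactly one entry in the file.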

Hope this helps.
