Setting up a VPS with multiple secure websites running on Docker with Nginx as a reverse proxy


So you’ve stumbled upon this article because you couldn’t find a single source that describes the full process. Welcome to the club. This is a journey, and a guide as much for myself, as I’ll inevitably forget how to do this a couple of months after writing it. This guide does not claim to be the best or a 100% secure way to do this. It is *a* way to do it: use at your own risk and adjust to your needs.

This guide uses

  • Ubuntu 22.04 VPS
  • Docker
  • Nginx Docker image
  • jwilder/nginx-proxy Docker image
  • nginxproxy/acme-companion Docker image
  • WordPress Docker image (:php8.2-fpm)
  • MySQL Docker image

Part 1: Docker

Follow this official guide.

In case the link no longer works, the short version is as follows.

Out with the old:

$ for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done

Prepare for the new:

$ sudo apt-get update

$ sudo apt-get install ca-certificates curl gnupg

$ sudo install -m 0755 -d /etc/apt/keyrings

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

$ sudo chmod a+r /etc/apt/keyrings/docker.gpg

$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

In with the new:

$ sudo apt-get update

$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Check the result:

$ sudo docker run hello-world

Running docker with sudo might not be the best idea, so consider:

$ sudo usermod -a -G docker username

where username is your actual username. Since an incorrectly executed usermod can mess up your user, try logging in from a new terminal and executing:

$ groups

You should see docker added to your group list. You can now close the old terminal and continue in the new one. If you could not log in from a new terminal, don’t close the old one before resolving the usermod mess.
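To save yourself squinting at the groups output, here is a small sanity check. The in_group helper is my own convenience function, not part of Docker:

```shell
# Succeeds when the given user is a member of the given group.
in_group() {
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

if in_group "$(id -un)" docker; then
    echo "docker group active -- try: docker run hello-world (no sudo)"
else
    echo "not yet -- log out and back in, then re-check"
fi
```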

Part 2: Nginx reverse proxy with TLS

Here are the official docs for jwilder/nginx-proxy and nginxproxy/acme-companion but I’ve taken some detours as I focus on using docker compose.

We’ll be using /var/www/ as the base dir for our web apps. Also be mindful of file and directory permissions if errors occur.

So let us

$ mkdir /var/www/nginx-proxy

$ cd /var/www/nginx-proxy

$ nano docker-compose.yml

Which in turn should contain something like this

version: "3.5"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: always
    environment:
      - [email protected]
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - ./config/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - nginx-proxy

  acme-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    restart: always
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro

networks:
  nginx-proxy:
    external: true
    driver: bridge
    name: nginx-proxy

volumes:
  certs:
  conf:
  html:
  acme:
  vhost:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "${PWD}/vhosts.d"

The example above could use some notes

  1. restart: always – we want this container to always be up.
  2. Do change your e-mail.
  3. Use both ports 80 and 443 as we’ll be setting up Letsencrypt.
  4. certs, vhost and html volumes will be shared with acme-companion.
  5. ./config/client_max_body_size.conf will be mapped locally as we want our server to handle larger file uploads.
  6. acme-companion is our automatic TLS certificate provider via Letsencrypt.
  7. NGINX_PROXY_CONTAINER=nginx-proxy if you change Nginx proxy container’s name, please adjust it here as well.
  8. We also define an nginx-proxy external network which will contain all the containers this reverse proxy will be able to point to. 
  9. The vhost volume will be locally mounted for later per-vhost configuration options.
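Point 9 in practice: with the vhost volume bind-mounted to ./vhosts.d, you can drop a file named after a domain into that directory to override settings for that one site. A hypothetical example (the domain and limit are placeholders; the file name must match the VIRTUAL_HOST of the site):

```nginx
# vhosts.d/mywebsite.com -- included inside this vhost's server block
# by nginx-proxy; raises the upload limit for just this one site:
client_max_body_size 64m;
```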

Let’s continue with creating the ./config/client_max_body_size.conf file

$ mkdir config

$ echo 'client_max_body_size 16m;' > ./config/client_max_body_size.conf

Feel free to change the 16 MB limit to something that suits you better. Relevant Nginx docs.

We also need to create the vhosts.d/default file

$ mkdir vhosts.d

$ nano vhosts.d/default

And put these contents in:

## Start of configuration add by letsencrypt container
location ^~ /.well-known/acme-challenge/ {
    auth_basic off;
    auth_request off;
    allow all;
    root /usr/share/nginx/html;
    try_files $uri =404;
}
## End of configuration add by letsencrypt container

$ docker network create nginx-proxy

$ docker compose up -d

Now, if you visit your server’s IP in a browser, you should see a 503 Service Temporarily Unavailable. That’s good: our reverse proxy is working. While this is fun and all, we likely want it to proxy somewhere, though.
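You can also check from the VPS shell without a browser. The http_status helper below is my own sketch, not part of nginx-proxy; it just prints the HTTP status code curl sees:

```shell
# Print only the HTTP status code for a URL ("000" means no HTTP reply).
http_status() {
    curl -s -o /dev/null -w '%{http_code}' "$1"
}

# On the VPS, with nginx-proxy up but no backends attached yet:
# http_status http://localhost    # expect 503
```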

Part 3: A WordPress website

While this doesn’t need to be WordPress — it just has to be a Docker container — WP is conveniently packaged into one, so I’ll use it as an example. There are, of course, the official docs, which are a useful reference; be mindful of the environment variables.

We’re done with /var/www/nginx-proxy for now, so let’s:

$ mkdir /var/www/mywebsite.com

$ cd /var/www/mywebsite.com

$ nano docker-compose.yml

And let’s paste the following there

version: '3.5'

services:
  nginx:
    image: nginx:stable
    restart: always
    expose:
      - 80
    environment:
      VIRTUAL_HOST: mywebsite.com,www.mywebsite.com
      LETSENCRYPT_HOST: mywebsite.com,www.mywebsite.com
    depends_on:
      - wordpress
    volumes:
      - ./wordpress:/var/www/html
      - ./nginx:/etc/nginx/conf.d
    networks:
      - nginx-proxy
      - backend

  wordpress:
    build:
      context: .
      dockerfile: Dockerfile-wordpress-with-phpredis
    image: wordpress:php8.2-fpm
    restart: always
    expose:
      - 9000
    environment:
      WORDPRESS_DB_HOST: mywebsite-com-db
      WORDPRESS_DB_USER: wordpress-user
      WORDPRESS_DB_PASSWORD: super-secret-password
      WORDPRESS_DB_NAME: a-database-name
      WORDPRESS_REDIS_HOST: mywebsite-redis
    depends_on:
      - mywebsite-com-db
    volumes:
      - ./wordpress:/var/www/html
      - ./php-uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    networks:
      - backend

  mywebsite-com-db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_DATABASE: a-database-name
      MYSQL_USER: wordpress-user
      MYSQL_PASSWORD: super-secret-password
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend

  mywebsite-redis:
    image: redis:latest
    expose:
      - 6379
    restart: always
    networks:
      - backend

networks:
  backend:
    driver: bridge
  nginx-proxy:
    external: true
    driver: bridge

volumes:
  db:

~~ Now let’s unpack this massive blob as there are aspects that require attention ~~

Notice the sleight of hand and a departure from the official docs: we don’t run WordPress directly from the official image. Unfortunately, it comes without the phpredis extension, but we are not savages, so we should add it.

$ nano Dockerfile-wordpress-with-phpredis

FROM wordpress:php8.2-fpm

RUN pecl install -f redis \
    &&  rm -rf /tmp/pear \
    &&  docker-php-ext-enable redis

Let us not forget to connect our nginx container to the WordPress container.

$ mkdir nginx

$ nano nginx/wordpress.conf

server {
    listen 80;
    server_name localhost;
    root /var/www/html;

    index index.php index.html index.htm;
    client_max_body_size 16M;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
We should also increase PHP’s maximum size for file uploads. Keeping the size similar or equal to Nginx’s client_max_body_size 16M; is a good idea.

$ nano php-uploads.ini

upload_max_filesize = 16M
post_max_size = 16M
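A quick way to confirm the two PHP limits stay in sync with nginx’s client_max_body_size (a sketch; run it next to the php-uploads.ini you just created):

```shell
# List both limits; they should read 16M, matching nginx.
grep -E '^(upload_max_filesize|post_max_size)' php-uploads.ini

# Once the container is running you can also verify inside it:
# docker compose exec wordpress php -r 'echo ini_get("upload_max_filesize"), PHP_EOL;'
```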

So far this should work if you’re deploying a new website. In case you’re deploying one from a backup, there are additional steps required to make it work properly. Maybe another article?

And now we’re ready for

$ docker compose build wordpress

$ docker compose up -d

It will take a while to spin up, and it will also take a while to obtain a certificate, so feel free to take a 5–10 minute break or so.
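Once DNS points at the VPS and the break is over, you can inspect the certificate the proxy is actually serving (a hedged sketch; replace mywebsite.com with your own domain):

```shell
# Show issuer and validity dates of the certificate served on port 443.
# Once issuance succeeds, the issuer should be a Let's Encrypt CA.
echo | openssl s_client -connect mywebsite.com:443 -servername mywebsite.com 2>/dev/null \
    | openssl x509 -noout -issuer -dates
```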

As Redis also needs configuration be sure to add this somewhere in the wp-config.php.

/**
 * Redis config
 */

define( 'WP_REDIS_HOST', getenv_docker('WORDPRESS_REDIS_HOST', 'localhost') );
define( 'WP_REDIS_PORT', getenv_docker('WORDPRESS_REDIS_PORT', 6379) );
define( 'WP_REDIS_PASSWORD', getenv_docker('WORDPRESS_REDIS_PASSWORD', '') );
define( 'WP_REDIS_TIMEOUT', getenv_docker('WORDPRESS_REDIS_TIMEOUT', 1) );
define( 'WP_REDIS_READ_TIMEOUT', getenv_docker('WORDPRESS_REDIS_READ_TIMEOUT', 1) );
define( 'WP_REDIS_DATABASE', getenv_docker('WORDPRESS_REDIS_DATABASE', 0) );

Don’t forget to install the Redis Object Cache plugin and activate Redis in the WordPress admin panel.
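To confirm Redis is actually reachable before blaming the plugin, you can ping it through compose. The redis_ok helper is my own wrapper around the expected PONG reply; the service name mywebsite-redis matches WORDPRESS_REDIS_HOST above, so adjust it if yours differs:

```shell
# Succeeds only when the reply is exactly PONG.
redis_ok() {
    [ "$(printf '%s' "$1" | tr -d '\r\n')" = "PONG" ]
}

# On the VPS, from /var/www/mywebsite.com:
# redis_ok "$(docker compose exec -T mywebsite-redis redis-cli ping)" \
#     && echo "redis reachable"
```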

Part 4: A Laravel application

In this case, a Laravel application running on Laradock.

I’ll eventually expand the article.