In this post, we'll set up a Ghost blog on a small Google Cloud Platform VM. The setup presented here uses Nginx as a reverse proxy in front of Ghost and Certbot for obtaining and renewing SSL certificates from Let's Encrypt. We'll run Ghost, Nginx, and Certbot each in their own Docker container.

The instructions in this blog post stand on the shoulders of Run your blog with Ghost, Docker and LetsEncrypt by Alex Ellis, Build SSL HTTPS Website Using Docker by Abner Chou, New Blog! Ghost on Google Cloud free-tier compute instance by Stuart Clarkson, and, last but not least, the documentation of the individual tools.

Ghost is an open-source content management system. It seems to strike a good balance between being customizable and easy to use, which is why I wanted to give it a try for my own blog. I used an f1-micro instance (1 vCPU, 0.6 GB memory) obtained from the GCP free tier offering, but the steps I describe below should work the same way on VMs obtained from other cloud providers. All the resources we need to set up here come for free, except that you may want to spend a few bucks to buy a domain if you don't have one already. The reason I went for a containerized solution that runs all tools in Docker was that it felt like a hassle-free and clean way to set things up, with the option to roll back if I ran into performance issues (hasn't happened so far).

To get the blog up and running, we'll first use Nginx with a minimal configuration in conjunction with Certbot to acquire an SSL certificate for our site from Let's Encrypt. After that's done, we'll reconfigure Nginx to act as reverse proxy for Ghost. Finally, we'll put a simple backup scheme in place. Altogether, we'll need to perform the following steps:

  1. Provision the VM and add a DNS record pointing to its IP.
  2. Obtain the SSL certificate for Nginx with Certbot.
  3. Launch the Ghost server behind Nginx.
  4. Set up a backup scheme.

In the end, we'll have created the following files:

user@vm:~$ tree -aI content ghost-blog
ghost-blog
├── .env
├── ghost
│   └── docker-compose.yml
├── letsencrypt-manager
│   ├── cli.ini
│   └── docker-compose.yml
└── nginx-letsencrypt
    ├── docker-compose.yml
    └── nginx.conf.template

You can find all the code in the accompanying repository, along with a minimal set of instructions that assume you've already provisioned your VM.

1. Provisioning a VM and DNS record

First, we'll need a server to host our blog. We'll provision an f1-micro instance on GCP Compute Engine using the web console and assign it a static IP. You'll need to sign up for a GCP account if you haven't already.

In the GCP console, go to Compute Engine > VM Instances > Create New Instance. You'll have to choose a name for your instance, a region and an instance type. For the instance name, anything goes. If you don't want to be charged for the instance, make sure you choose us-west1 (Oregon), us-central1 (Iowa) or us-east1 (South Carolina) as region and f1-micro as instance type (refer to the section Compute Engine in the GCP Always Free Usage Limits docs).


Also, it's a good idea to allow HTTP and HTTPS traffic in the firewall settings.


We'll configure the gcloud CLI to get SSH access to the VM. According to the GCP documentation, the gcloud CLI's ssh command is the recommended way to gain shell access to GCP VMs. Install the gcloud CLI using the package manager of your OS. Afterwards, connect it to your account and configure SSH access; gcloud init will walk you through the relevant steps.

user@local:~$ gcloud init

Find the <project id> on your home dashboard in the web console.

You should now be able to ssh into the instance with the gcloud ssh command.

user@local:~$ gcloud compute ssh <instance_name>

The <instance_name> is the name you chose when provisioning the instance.

Once connected, install Docker and docker-compose on the VM. You may have to install the latest Docker from the official source rather than from the distribution's repositories.

user@vm:~$ sudo apt-get remove docker docker-engine
user@vm:~$ curl -sSL https://get.docker.com | sh
user@vm:~$ sudo usermod -aG docker $USER

The docker-compose version in the official Debian repositories is a few versions behind the most recent stable one, so let's get the most recent one from GitHub (to make sure you really get the most recent version, check their releases page on GitHub).

user@vm:~$ sudo curl -L https://github.com/docker/compose/releases/download/<version>/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
user@vm:~$ sudo chmod +x /usr/local/bin/docker-compose

Next, we'll add an A record to our domain's DNS settings that points the blog's domain name to the static IP. I did that on the DNS settings page of my domain name registrar's web interface. Unless you have a more sophisticated way of managing your domains, you might want to do the same. Following the example of others, I used the blog subdomain of a domain I had previously registered (most registrars, e.g. Namecheap, document how to achieve this).

2. Obtaining an SSL certificate with Certbot

We're now ready to set things up for obtaining an SSL certificate for our blog's domain name. We'll use Certbot to obtain a Let's Encrypt certificate for the blog domain. Certbot is the ACME client that the Let's Encrypt initiative recommends for users with shell access.

user@vm:~/ghost-blog$ cat nginx-letsencrypt/nginx.conf.template
events {}

http {
  include snippets/letsencryptauth.conf;
  include snippets/sslconfig.conf;

  server {
    listen 443 ssl;
    server_name $DOMAIN;

    # We haven't created the certificate files yet
    # ssl_certificate     /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains" always;

    location / {
      # Just return 200 OK for now so we can quickly check that the
      # certificate works, as we'll only start up Ghost later
      return 200;
      # proxy_pass http://127.0.0.1:2368;
      # proxy_set_header X-Real-IP         $remote_addr;
      # proxy_set_header Host              $http_host;
      # proxy_set_header X-Forwarded-Proto https;
      # proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    }
  }
}
Before requesting the certificate with Certbot for the first time, we have to comment out the lines that specify the ssl_certificate and ssl_certificate_key; Nginx would complain otherwise. The same is true for the proxy_pass directive that proxies traffic to the Ghost server, as it's not running yet. We'll configure Nginx to simply return 200 OK for any request for now.

We'll interpolate the domain name using an environment variable, DOMAIN, that we'll pass into the Nginx container. You can export it manually before running docker-compose or place it inside a file called .env in the directory from which you run docker-compose (the ghost-blog project's root would make a lot of sense).

user@vm:~/ghost-blog$ cat nginx-letsencrypt/docker-compose.yml
nginx:
  image: bringnow/nginx-letsencrypt
  volumes:
    - ./nginx.conf.template:/etc/nginx/nginx.conf.template
    - /etc/letsencrypt:/etc/letsencrypt
    - /var/acme-webroot:/var/acme-webroot
    - /srv/docker/nginx/dhparam:/etc/nginx/dhparam
  ports:
    - "80:80"
    - "443:443"
  entrypoint: sh -c "envsubst '$$DOMAIN' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
  net: "host"
user@vm:~/ghost-blog$ cat .env
DOMAIN=blog.example.com

You may have noticed that this nginx.conf.template includes two configuration snippets, letsencryptauth.conf and sslconfig.conf. These two snippets come with the nginx-letsencrypt docker image.

When we request the certificate, the certificate authority (Let's Encrypt) will ask Certbot to solve a challenge: Certbot has to serve a file under a certain path that ends in a random string. We don't have to configure Nginx for this manually, as the letsencryptauth.conf snippet takes care of it for us.

Let's inspect the letsencryptauth.conf snippet.

user@vm:~/ghost-blog$ docker-compose -f nginx-letsencrypt/docker-compose.yml exec nginx \
              cat /etc/nginx/snippets/letsencryptauth.conf
server {
  listen 80;

  location /.well-known/acme-challenge {
      alias /var/acme-webroot/.well-known/acme-challenge;
      location ~ /.well-known/acme-challenge/(.*) {
          add_header Content-Type application/jose+json;
      }
  }

  location / {
    return 301 https://$host$request_uri;
  }
}
We can see that it configures Nginx to listen on port 80 and serve the ACME challenge files. All other traffic is redirected to HTTPS on port 443.
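To see the webroot mechanism in isolation, here's a toy sketch using plain file operations (no Nginx or real ACME involved; the temporary directory stands in for /var/acme-webroot and the token name is made up):

```shell
# Certbot drops the challenge token as a file under the shared webroot...
webroot=$(mktemp -d)
mkdir -p "$webroot/.well-known/acme-challenge"
echo "token-contents" > "$webroot/.well-known/acme-challenge/some-random-token"
# ...and Nginx's alias directive maps GET /.well-known/acme-challenge/some-random-token
# onto exactly this file, so Let's Encrypt can fetch it over port 80:
cat "$webroot/.well-known/acme-challenge/some-random-token"
# prints: token-contents
```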

The sslconfig.conf snippet configures directives for SSL traffic that enable an A+ rating on the Qualys SSL Server Test, as does the Strict-Transport-Security header in the server block. These settings won't take effect right now, as the config doesn't have SSL enabled yet, but we include them here because we'll need them later and they don't interfere with the rest.

Let's create the host directories we map to container volumes and start up Nginx. Note that in order to achieve an A+ rating, one must also use 4096-bit DH parameters. You can create them before startup to reduce the startup time of the Docker container itself. Generating the parameters will take a long time, up to an hour, so it's a good idea to start the process with nohup so that it doesn't get killed when your ssh connection times out.

user@vm:~/ghost-blog$ sudo mkdir -p /etc/letsencrypt /srv/docker/nginx/dhparam /var/acme-webroot
user@vm:~/ghost-blog$ nohup sudo openssl dhparam -out /srv/docker/nginx/dhparam/dhparam.pem -5 4096 &
user@vm:~/ghost-blog$ docker-compose -f nginx-letsencrypt/docker-compose.yml up -d

Why do we want to map /etc/letsencrypt, /srv/docker/nginx/dhparam and /var/acme-webroot?
We need to map /etc/letsencrypt so that the certificate doesn't get lost across container restarts.
We want to map /srv/docker/nginx/dhparam to create the DH parameters once and for all outside of the container.
And finally, /var/acme-webroot needs to be accessible by both Nginx and Certbot, so that Nginx can serve the challenge files that Certbot puts there.

Before we proceed, let's perform a few sanity checks of the Nginx configuration.

user@vm:~/ghost-blog$ docker-compose -f nginx-letsencrypt/docker-compose.yml exec nginx nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
user@vm:~/ghost-blog$ curl http://<your domain>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
user@vm:~/ghost-blog$ curl https://<your domain>
  <head><title>525 Origin SSL Handshake Error</title></head>
  <body bgcolor="white">
    <center><h1>525 Origin SSL Handshake Error</h1></center><hr>
user@vm:~/ghost-blog$ docker-compose -f nginx-letsencrypt/docker-compose.yml logs
nginx_1 | <IP of your VM> - - [28/Mar/2020:13:36:03 +0000] "GET / HTTP/1.1" 301 186 "-" "curl/7.47.0"
nginx_1 | <IP of your VM> - - [28/Mar/2020:13:37:31 +0000] "GET / HTTP/1.1" 301 186 "-" "curl/7.52.1"
nginx_1 | 2020/03/28 13:48:09 [error] 8#8: *2 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: <client IP>, server: <your domain>

To interact with Certbot, I used a dockerized wrapper around Certbot called letsencrypt-manager that I found via Abner Chou's Build SSL HTTPS Website Using Docker (note that the actively maintained version on GitHub has changed since he wrote the post).

The letsencrypt-manager comes with a docker-compose file that specifies two services, cli and cron.

user@vm:~/ghost-blog$ cat letsencrypt-manager/docker-compose.yml
cli:
  image: neinkeinkaffee/letsencrypt-manager:latest
  environment:
    - LE_RSA_KEY_SIZE=4096
    - LE_EMAIL=<your email>
  volumes:
    - /etc/letsencrypt:/etc/letsencrypt
    - /var/lib/letsencrypt:/var/lib/letsencrypt
    - /var/acme-webroot:/var/acme-webroot
    - ./cli.ini:/root/.config/letsencrypt/cli.ini

cron:
  image: neinkeinkaffee/letsencrypt-manager:latest
  environment:
    - LE_RSA_KEY_SIZE=4096
    - LE_EMAIL=<your email>
  volumes:
    - /etc/letsencrypt:/etc/letsencrypt
    - /var/lib/letsencrypt:/var/lib/letsencrypt
    - /var/acme-webroot:/var/acme-webroot
    - ./cli.ini:/root/.config/letsencrypt/cli.ini
  command: cron-auto-renewal
  restart: always

One minor thing about letsencrypt-manager strikes me as slightly inconsistent: it expects some Certbot parameters to be passed as environment variables (e.g. LE_RSA_KEY_SIZE and LE_EMAIL), while it expects to find most others in the Certbot config file, which per default is called cli.ini. The way letsencrypt-manager wraps Certbot calls in the Docker entrypoint script prevents us from passing them as command line parameters when requesting our certificate, so we have to mount the cli.ini into the container.

user@vm:~/ghost-blog$ cat letsencrypt-manager/cli.ini
text = True
agree-tos = True

authenticator = webroot
webroot-path = /var/acme-webroot

We'll first run the cli service's add command, which calls Certbot and generates certificate files for our subdomain.

user@vm:~/ghost-blog$ sudo mkdir /var/lib/letsencrypt
user@vm:~/ghost-blog$ docker-compose -f letsencrypt-manager/docker-compose.yml run cli add
Adding domain ""...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for
Using the webroot path /var/acme-webroot for all unmatched domains.
Waiting for verification...
Cleaning up challenges

 - Congratulations! Your certificate and chain have been saved at:
   Your key file has been saved at:
   Your cert will expire on 2019-12-16. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:
   Donating to EFF:          

The domain you enter here is your blog domain, i.e. the subdomain that maps to your VM's IP. If Certbot displays a success message similar to the one above, everything should have worked, and we should now have our certificate files in a subdirectory of /etc/letsencrypt. As we mapped that host directory into the Nginx container as well, we can use the certificate there: uncomment the ssl_certificate and ssl_certificate_key lines in the nginx.conf.template.
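To double-check what Certbot produced, you can print the certificate's expiry date with openssl. On the VM you'd run this against /etc/letsencrypt/live/<your domain>/fullchain.pem (probably with sudo); the sketch below generates a throwaway self-signed certificate so the command can be tried anywhere:

```shell
# Create a throwaway self-signed cert standing in for fullchain.pem
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
        -days 90 -nodes -subj "/CN=blog.example.com" 2>/dev/null
# Print the expiry date; the renewal cron job we set up below should
# renew the real certificate well before this date
openssl x509 -enddate -noout -in /tmp/demo-cert.pem
# prints a line like: notAfter=Jun 26 12:00:00 2020 GMT
```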

Then let's restart the Nginx container.

user@vm:~/ghost-blog$ docker-compose -f nginx-letsencrypt/docker-compose.yml restart

Nginx should now return 200 OK when getting requests from curl, and opening your subdomain in the browser should give a blank page.

user@vm:~/ghost-blog$ curl -v https://<your domain> # => 200 OK


We can now start up the cron service in detached mode. It starts the cron daemon inside the container and configures a crontab that tries to renew the certificate every Sunday at 4:18 am.

user@vm:~/ghost-blog$ docker-compose -f letsencrypt-manager/docker-compose.yml run -d cron

3. Launching the Ghost server behind Nginx

With the certificate files in place we can now reconfigure Nginx to act as reverse proxy for Ghost and start up the Ghost server.

Stuart Clarkson recommends creating some swap space to mitigate the fact that the free f1-micro instance only has 0.6 GB RAM instead of the recommended 1 GB. We'll do that now.

user@vm:~/ghost-blog$ sudo -s
root@vm:/home/user/ghost-blog# dd if=/dev/zero of=/var/swap bs=1k count=1024k
root@vm:/home/user/ghost-blog# mkswap /var/swap
root@vm:/home/user/ghost-blog# swapon /var/swap
root@vm:/home/user/ghost-blog# echo '/var/swap swap swap defaults 0 0' >> /etc/fstab
root@vm:/home/user/ghost-blog# exit
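You can verify that the swap space is active by looking at /proc/meminfo and /proc/swaps; after the steps above, SwapTotal should report roughly 1 GB:

```shell
# SwapTotal sums all active swap space (about 1048576 kB after the dd above)
grep SwapTotal /proc/meminfo
# /proc/swaps lists each active swap file or device, including /var/swap
cat /proc/swaps
```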

Next, we'll modify the Nginx configuration to forward all traffic to the Ghost server, replacing the return 200 directive with a proxy_pass statement.

user@vm:~/ghost-blog$ cat nginx-letsencrypt/nginx.conf.template

events {}

http {
  include snippets/letsencryptauth.conf;
  include snippets/sslconfig.conf;

  server {
    listen 443 ssl;
    server_name $DOMAIN;

    ssl_certificate     /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains" always;

    location / {
        proxy_pass http://127.0.0.1:2368;  # Ghost's default port
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    }
  }
}
Now restart Nginx once more and then, finally, let's start up the Ghost container.

user@vm:~/ghost-blog$ cat ghost/docker-compose.yml
ghost:
  image: ghost:2.31.0-alpine
  ports:
    - "2368:2368"  # Ghost listens on 2368 by default
  environment:
    - url=https://$DOMAIN
  volumes:
    - ./content:/var/lib/ghost/content
  restart: always
user@vm:~/ghost-blog$ docker-compose -f nginx-letsencrypt/docker-compose.yml restart
user@vm:~/ghost-blog$ docker-compose -f ghost/docker-compose.yml up -d

When you now visit your blog domain, Ghost should greet you with a blog in the standard theme, populated with some getting-started content.


If that doesn't happen, try digging into the container logs.

user@vm:~/ghost-blog$ docker-compose -f ghost/docker-compose.yml logs
[2019-09-17 08:12:21] INFO Ghost is running in production...
[2019-09-17 08:12:21] INFO Your site is now available on https://<your domain>
[2019-09-17 08:12:21] INFO Ctrl+C to shut down
[2019-09-17 08:12:21] INFO Ghost boot 15.164s

Note that Ghost's official Docker image uses SQLite as its database. It takes up little memory and is easy to back up. While we could also run a MySQL database in a separate container, the included SQLite is performant enough. According to the section Appropriate Uses For SQLite in the SQLite documentation, you can expect SQLite to work well as the database engine "for most low to medium traffic websites (which is to say, most websites)"; they give 100K hits/day as a very conservative upper bound. With Nginx caching enabled, your blog would be able to handle far more load should that ever become a requirement.
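Because the whole database is a single file under content/data, you can also take a consistent snapshot of it while Ghost is running using sqlite3's .backup command instead of copying the raw file. Here's a sketch on a throwaway database (on the VM, the database file sits under ghost/content/data; check that directory for its exact name):

```shell
# Build a throwaway database standing in for Ghost's content/data database file
sqlite3 /tmp/demo.db "CREATE TABLE posts (title TEXT); INSERT INTO posts VALUES ('hello world');"
# .backup takes an online, consistent snapshot even while writers are active
sqlite3 /tmp/demo.db ".backup /tmp/demo-backup.db"
# The snapshot is a fully usable database
sqlite3 /tmp/demo-backup.db "SELECT title FROM posts;"
# prints: hello world
```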

4. Setting up a backup scheme

It's a good idea to put some kind of backup scheme in place. The most low-tech scheme I could come up with was to sync the Ghost content folder with my Google Drive. You may want to store the backup somewhere else, e.g. in AWS S3 or with yet another provider, but here's how I sync the content folder with my Google Drive for backup.

user@vm:~/ghost-blog$ wget <url of a gdrive release binary> \
           -O gdrive \
           && chmod +x gdrive \
           && sudo mv gdrive /usr/local/bin
user@vm:~/ghost-blog$ gdrive list # => visit url, approve permissions, copy verification token into prompt
user@vm:~/ghost-blog$ gdrive mkdir ghost_content
user@vm:~/ghost-blog$ fileid=$(gdrive list -q "name contains 'ghost_content'" | tail -1 | cut -f 1 -d " ")
user@vm:~/ghost-blog$ gdrive sync upload ghost/content $fileid

If the gdrive sync command worked, you can put the last two lines into a script (say, ghost-blog/backup.sh) and add it to your crontab to back up your blog content, say, once per day.

user@vm:~/ghost-blog$ (crontab -l 2>/dev/null; echo "0 23 * * * $(which sh) ghost-blog/backup.sh") | crontab -

That's the same as opening your crontab by issuing crontab -e and then adding the line 0 23 * * * /bin/sh <your home dir>/ghost-blog/backup.sh manually.
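For completeness, here's a sketch of what that backup script could look like; the name backup.sh, the use of /bin/sh, and running it from the home directory are my assumptions, and the gdrive calls are the ones from above:

```shell
# Make sure the target directory exists (a no-op on the VM, where it already does)
mkdir -p ghost-blog
# Write the backup script; it re-resolves the Drive folder id on every run
cat > ghost-blog/backup.sh <<'EOF'
#!/bin/sh
fileid=$(gdrive list -q "name contains 'ghost_content'" | tail -1 | cut -f 1 -d " ")
gdrive sync upload ghost-blog/ghost/content "$fileid"
EOF
chmod +x ghost-blog/backup.sh
```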

It's an even better idea to also test whether you can recreate your blog from the backup. For the above way of backing up by syncing the whole content folder to your Google Drive, you would probably do something similar to the following.

user@vm:~/ghost-blog$ mv ghost/content ghost/content.bkp
user@vm:~/ghost-blog$ fileid=$(gdrive list -q "name contains 'ghost_content'" | tail -1 | cut -f 1 -d " ")
user@vm:~/ghost-blog$ gdrive download --recursive --path backup $fileid
user@vm:~/ghost-blog$ mv backup/ghost_content ghost/content
user@vm:~/ghost-blog$ docker-compose -f ghost/docker-compose.yml restart

Happy blogging!