Wagtail on Cloud Run

1. Introduction


Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.

It also natively interfaces with many other parts of the Google Cloud ecosystem, including Cloud SQL for managed databases, Cloud Storage for unified object storage, and Secret Manager for managing secrets.

Wagtail is an open source content management system (CMS) built on top of Django. Django is a high-level Python web framework.

In this tutorial, you will use these components to deploy a small Wagtail project.

What you'll learn

  • How to use the Cloud Shell
  • How to create a Cloud SQL database
  • How to create a Cloud Storage bucket
  • How to create Secret Manager secrets
  • How to connect Google Cloud components to a Cloud Run service
  • How to use the Google Container Registry
  • How to deploy to Cloud Run
  • How to run builds, migrations, and deployments in Cloud Build

2. Setup and requirements

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.




  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs, and you can update it at any time.
  • The Project ID must be unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (typically identified as PROJECT_ID). If you don't like the generated one, you can generate another random one, or try your own and see if it's available; it is "frozen" once the project is created.
  • There is a third value, a Project Number which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console in order to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, follow any "clean-up" instructions found at the end of the codelab. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Google Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

Activate Cloud Shell

  1. From the Cloud Console, click Activate Cloud Shell.


If you've never started Cloud Shell before, you're presented with an intermediate screen (below the fold) describing what it is. If that's the case, click Continue (and you won't ever see it again).


It should only take a few moments to provision and connect to Cloud Shell.


This virtual machine is loaded with all the development tools you need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with simply a browser or your Chromebook.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.

  1. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  2. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project

Command output

project = <PROJECT_ID>

If the project is not set correctly, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

3. Enable the Cloud APIs

From Cloud Shell, enable the Cloud APIs for the components that will be used:

gcloud services enable \
  run.googleapis.com \
  sql-component.googleapis.com \
  sqladmin.googleapis.com \
  compute.googleapis.com \
  cloudbuild.googleapis.com \
  secretmanager.googleapis.com
You may encounter a dialog where gcloud requests your credentials. This is normal; authorize the request (this will happen once per Cloud Shell session).

This operation may take a few moments to complete.

Once completed, a success message similar to this one should appear:

Operation "operations/acf.cc11852d-40af-47ad-9d59-477a12847c9e" finished successfully.

4. Create a template project

You'll use the default Wagtail project template as your sample Wagtail project. To do this, you'll temporarily install Wagtail to generate the template.

To create this template project, use Cloud Shell to create a new directory named wagtail-cloudrun and navigate to it:

mkdir ~/wagtail-cloudrun
cd ~/wagtail-cloudrun

Then, install Wagtail into a temporary virtual environment:

virtualenv venv
source venv/bin/activate
pip install wagtail

Then, create a new template project in the current folder:

wagtail start myproject .

You'll now have a template Wagtail project in the current folder:

ls -F
Dockerfile  home/  manage.py*  myproject/  requirements.txt  search/ venv/

You can now exit and remove your temporary virtual environment:

deactivate
rm -rf venv

From here, Wagtail will be called within the container.

5. Create the backing services

You'll now create your backing services: a Cloud SQL database, a Cloud Storage bucket, and a number of Secret Manager values.

Securing the values of the passwords used in deployment is important to the security of any project, and ensures that no one accidentally puts passwords where they don't belong (for example, directly in settings files, or typed directly into your terminal where they could be retrieved from history.)

First, set two base environment variables, one for the project ID:

PROJECT_ID=$(gcloud config get-value core/project)

And one for the region. This tutorial assumes us-central1; substitute another region if you prefer:

REGION=us-central1

Create the database

Now, create a Cloud SQL instance:

gcloud sql instances create myinstance --project $PROJECT_ID \
  --database-version POSTGRES_13 --tier db-f1-micro --region $REGION

This operation may take a few minutes to complete.

Then in that instance, create a database:

gcloud sql databases create mydatabase --instance myinstance

Then in that same instance, create a user:

DJPASS="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"
gcloud sql users create djuser --instance myinstance --password $DJPASS
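The shell pipeline above (read random bytes, keep alphanumerics, fold to 30 characters) can be sketched equivalently in Python using the standard library's secrets module; this is an illustrative equivalent, not one of the tutorial's commands:

```python
# Generate a 30-character alphanumeric password, equivalent to the
# /dev/urandom | tr | fold | head shell pipeline used above.
import secrets
import string

alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(30))

print(len(password))  # 30
```

Either approach yields a password safe to pass to gcloud sql users create.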

Create the storage bucket

Finally, set a name for a Cloud Storage bucket (bucket names must be globally unique; prefixing with the project ID is one way to achieve this), and create the bucket:

GS_BUCKET_NAME=${PROJECT_ID}-media
gsutil mb -l ${REGION} gs://${GS_BUCKET_NAME}

Since objects stored in the bucket will have a different origin (a bucket URL rather than a Cloud Run URL), you need to configure the Cross Origin Resource Sharing (CORS) settings.

Create a new file called cors.json, with the following contents:

touch cors.json
cloudshell edit cors.json


[
    {
      "origin": ["*"],
      "responseHeader": ["Content-Type"],
      "method": ["GET"],
      "maxAgeSeconds": 3600
    }
]

Apply this CORS configuration to the newly created storage bucket:

gsutil cors set cors.json gs://$GS_BUCKET_NAME
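If you want to sanity-check the file before applying it, the configuration must parse as a JSON list of CORS rule objects. A quick stdlib sketch, using an inline copy of the file's contents:

```python
# Parse a CORS configuration and confirm its shape: a JSON list of rule
# objects, each with origin, method, responseHeader, and maxAgeSeconds.
import json

cors_text = """
[
    {
      "origin": ["*"],
      "responseHeader": ["Content-Type"],
      "method": ["GET"],
      "maxAgeSeconds": 3600
    }
]
"""

rules = json.loads(cors_text)
print(rules[0]["maxAgeSeconds"])  # 3600
```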

Store configuration as secret

Having set up the backing services, you'll now store these values in a file protected using Secret Manager.

Secret Manager allows you to store, manage, and access secrets as binary blobs or text strings. It works well for storing configuration information such as database passwords, API keys, or TLS certificates needed by an application at runtime.

First, create a file with the values for the database connection string, media bucket, a secret key for Django (used for cryptographic signing of sessions and tokens), and to enable debugging:

echo DATABASE_URL=\"postgres://djuser:${DJPASS}@//cloudsql/${PROJECT_ID}:${REGION}:myinstance/mydatabase\" > .env

echo GS_BUCKET_NAME=\"${GS_BUCKET_NAME}\" >> .env

echo SECRET_KEY=\"$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 50 | head -n 1)\" >> .env

echo DEBUG=\"True\" >> .env
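The .env file built above is a series of KEY="value" lines. As a simplified sketch of how such a payload is consumed at runtime (django-environ does the real parsing once the file is stored in Secret Manager; the values below are hypothetical):

```python
# Parse KEY="value" lines into a dict, mirroring in simplified form what
# django-environ does with the settings payload fetched from Secret Manager.
import io

payload = io.StringIO(
    'GS_BUCKET_NAME="my-project-media"\n'
    'DEBUG="True"\n'
)

settings = {}
for line in payload:
    key, _, value = line.strip().partition("=")
    settings[key] = value.strip('"')

print(settings["DEBUG"])  # True
```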

Then, create a secret called application_settings, using that file as the secret:

gcloud secrets create application_settings --data-file .env

Allow the Cloud Run service account to access this secret:

export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDRUN=${PROJECTNUM}-compute@developer.gserviceaccount.com

gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${CLOUDRUN} --role roles/secretmanager.secretAccessor

Confirm the secret has been created by listing its versions:

gcloud secrets versions list application_settings

After confirming the secret has been created, remove the local file:

rm .env

6. Configure your project

The template project that you previously created now needs some alterations. These changes will reduce the complexity of the template settings configurations that come with Wagtail, and also integrate Wagtail with the backing services you previously created.

Configure settings

Find the generated base.py settings file, and rename it to basesettings.py in the main myproject folder:

mv myproject/settings/base.py myproject/basesettings.py

Next, use the Cloud Shell web editor to create a new settings.py file, and add the following contents:

touch myproject/settings.py
cloudshell edit myproject/settings.py


import io
import os

import environ
import google.auth
from google.cloud import secretmanager as sm

# Import the original settings from each template
from .basesettings import *

try:
    from .local import *
except ImportError:
    pass

# Pull django-environ settings file, stored in Secret Manager
SETTINGS_NAME = "application_settings"

_, project = google.auth.default()
client = sm.SecretManagerServiceClient()
name = f"projects/{project}/secrets/{SETTINGS_NAME}/versions/latest"
payload = client.access_secret_version(name=name).payload.data.decode("UTF-8")

env = environ.Env()
env.read_env(io.StringIO(payload))

# Setting this value from django-environ
SECRET_KEY = env("SECRET_KEY")

# Allow all hosts to access Django site
ALLOWED_HOSTS = ["*"]

# Default false. True allows default landing pages to be visible
DEBUG = env("DEBUG")

# Set this value from django-environ
DATABASES = {"default": env.db()}

INSTALLED_APPS += ["storages"]  # for django-storages
if "myproject" not in INSTALLED_APPS:
    INSTALLED_APPS += ["myproject"]  # for custom data migration

# Define static storage via django-storages[google]
DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_BUCKET_NAME = env("GS_BUCKET_NAME")
GS_DEFAULT_ACL = "publicRead"

Take the time to note the commentary added about each configuration.

Note that you may see linting errors on this file. This is expected. Cloud Shell does not have context of the requirements for this project, and thus may report invalid imports, and unused imports.
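The DATABASE_URL consumed by env.db() above packs the user, password, Cloud SQL socket path, and database name into one string. A stdlib illustration of those pieces, with hypothetical values (django-environ performs a similar split to build the DATABASES setting):

```python
# Split a Cloud SQL-style DATABASE_URL into its components.
from urllib.parse import urlsplit

url = "postgres://djuser:s3cret@//cloudsql/my-project:us-central1:myinstance/mydatabase"
parts = urlsplit(url)

print(parts.scheme)                   # postgres
print(parts.username)                 # djuser
print(parts.path.rsplit("/", 1)[-1])  # mydatabase
```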

Then, remove the old settings folder.

rm -rf myproject/settings/

You will then have two settings files: one from Wagtail, and one you just created that builds from these settings:

ls myproject/*settings*
myproject/basesettings.py  myproject/settings.py

Finally, open manage.py, and update the configuration to tell Wagtail to point to the new settings.py file.

cloudshell edit manage.py

manage.py line (before)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.dev")

manage.py line (after)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

Make the same configuration change for the myproject/wsgi.py file:

cloudshell edit myproject/wsgi.py

myproject/wsgi.py line (before)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.dev")

myproject/wsgi.py line (after)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

In the next section, you will replace the Wagtail template Dockerfile file. Remove the automatically created Dockerfile:

rm Dockerfile

Python dependencies

Locate the requirements.txt file, and append the packages the rest of this tutorial depends on (gunicorn for serving, psycopg2-binary for PostgreSQL, plus the libraries imported in settings.py):

cloudshell edit requirements.txt

requirements.txt (append)

gunicorn
psycopg2-binary
django-storages[google]
django-environ
google-auth
google-cloud-secret-manager
7. Containerize your app and upload it to Container Registry

Container Registry is a private container image registry that runs on Google Cloud. You'll use it to store your containerized project.

To containerize the template project, first create a new file named Dockerfile in the top level of your project (in the same directory as manage.py), and copy the following content:


touch Dockerfile
cloudshell edit Dockerfile


# Use an official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.9-slim

ENV APP_HOME /app
WORKDIR $APP_HOME

# Install dependencies.
COPY requirements.txt .
RUN pip install -U pip && pip install -r requirements.txt

# Copy local code to the container image.
COPY . .

# Service must listen to $PORT environment variable.
# This default value facilitates local development.
ENV PORT 8080

# Setting this ensures print statements and log messages
# promptly appear in Cloud Logging.
ENV PYTHONUNBUFFERED TRUE

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application
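The Dockerfile's comment suggests scaling gunicorn workers with the available cores. A small sketch of that heuristic (note that os.cpu_count() reflects the local machine, while a Cloud Run service's CPU allocation is configured on the service, so treat this as illustrative only):

```python
# Derive a gunicorn worker count from the visible CPU cores, defaulting to 1.
import os

workers = max(1, os.cpu_count() or 1)
print(workers >= 1)  # True
```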

Now, build your container image using Cloud Build, by running the following command from the directory containing the Dockerfile:

gcloud builds submit --tag gcr.io/$PROJECT_ID/wagtail-cloudrun

Once pushed to the registry, you'll see a SUCCESS message containing the image name. The image is stored in Container Registry and can be re-used if desired.

You can list all the container images associated with your current project using this command:

gcloud container images list

8. Run the migration steps

To create the database schema in your Cloud SQL database and populate your Cloud Storage bucket with your static assets, you need to run migrate and collectstatic.

These base Django migration commands need to be run within the context of your built container with access to your database.

You will also need to run createsuperuser to create an administrator account to log into the Django admin.

Allow access to components

For this step, we're going to use Cloud Build to run Django commands, so Cloud Build will need access to the Django configuration stored in Secret Manager.

As you did earlier, set the IAM policy to explicitly allow the Cloud Build service account access to the secret settings:

export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDBUILD=${PROJECTNUM}@cloudbuild.gserviceaccount.com

gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor

Additionally, allow Cloud Build to connect to Cloud SQL in order to apply the database migrations:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member serviceAccount:${CLOUDBUILD} --role roles/cloudsql.client

Create your Django superuser

To create the superuser, you're going to use a data migration. This migration needs to be created in the migrations folder under myproject.

Firstly, create the base folder structure:

mkdir myproject/migrations
touch myproject/migrations/__init__.py

Then, create the new migration, copying the following contents:

touch myproject/migrations/0001_createsuperuser.py
cloudshell edit myproject/migrations/0001_createsuperuser.py


from django.db import migrations

import google.auth
from google.cloud import secretmanager as sm


def createsuperuser(apps, schema_editor):

    # Retrieve the admin password from Secret Manager
    _, project = google.auth.default()
    client = sm.SecretManagerServiceClient()
    name = f"projects/{project}/secrets/admin_password/versions/latest"
    admin_password = client.access_secret_version(name=name).payload.data.decode("UTF-8")

    # Create a new superuser using the acquired password
    from django.contrib.auth.models import User
    User.objects.create_superuser("admin", password=admin_password)


class Migration(migrations.Migration):

    initial = True

    dependencies = []

    operations = [migrations.RunPython(createsuperuser)]

Now back in the terminal, create the admin_password secret within Secret Manager, and only allow it to be seen by Cloud Build:

gcloud secrets create admin_password --replication-policy automatic

admin_password="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"

echo -n "${admin_password}" | gcloud secrets versions add admin_password --data-file=-

gcloud secrets add-iam-policy-binding admin_password \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor

Create the migration configuration

Next, create the following Cloud Build configuration file:

touch cloudmigrate.yaml
cloudshell edit cloudmigrate.yaml


steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/${PROJECT_ID}/wagtail-cloudrun", "."]

- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/${PROJECT_ID}/wagtail-cloudrun"]

- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/$PROJECT_ID/wagtail-cloudrun",
         "-s", "${PROJECT_ID}:${_REGION}:myinstance",
         "--", "python", "manage.py", "migrate"]

- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/$PROJECT_ID/wagtail-cloudrun",
         "-s", "${PROJECT_ID}:${_REGION}:myinstance",
         "--", "python", "manage.py", "collectstatic", "--no-input"]
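Cloud Build expands ${PROJECT_ID} and the user-defined ${_REGION} substitution before running each step. The effect on the Cloud SQL instance argument can be sketched with stdlib string.Template as a stand-in (hypothetical project and region values):

```python
# Mimic Cloud Build substitution expansion for the -s instance argument.
from string import Template

template = Template("${PROJECT_ID}:${_REGION}:myinstance")
arg = template.substitute(PROJECT_ID="my-project", _REGION="us-central1")

print(arg)  # my-project:us-central1:myinstance
```

This is why the build command below passes --substitutions _REGION=$REGION.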

Run the migration

Finally, run all the initial migrations through Cloud Build:

gcloud builds submit --config cloudmigrate.yaml \
    --substitutions _REGION=$REGION

9. Deploy to Cloud Run

With the backing services created and populated, you can now create the Cloud Run service to access them.

The initial deployment of your containerized application to Cloud Run is created using the following command:

gcloud run deploy wagtail-cloudrun --platform managed --region $REGION \
  --image gcr.io/$PROJECT_ID/wagtail-cloudrun \
  --add-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --allow-unauthenticated

Wait a few moments until the deployment is complete. On success, the command line displays the service URL:

Service [wagtail-cloudrun] revision [wagtail-cloudrun-00001-...] has been deployed and is serving 100 percent of traffic.
Service URL: https://wagtail-cloudrun-...-uc.a.run.app

You can also retrieve the service URL with this command:

gcloud run services describe wagtail-cloudrun \
  --platform managed \
  --region $REGION  \
  --format "value(status.url)"

You can now visit your deployed service by opening the service URL in a web browser.


You can also log into the Django admin interface (add /admin to the URL) with the username "admin" and the admin password, which you can retrieve using the following command:

gcloud secrets versions access latest --secret admin_password && echo ""



Deploying again

If you want to make any changes to your Wagtail project, you'll need to build your image again:

gcloud builds submit --tag gcr.io/$PROJECT_ID/wagtail-cloudrun

Should your change include static or database alterations, be sure to run your migrations as well:

gcloud builds submit --config cloudmigrate.yaml \
  --substitutions _REGION=$REGION

Finally, re-deploy:

gcloud run deploy wagtail-cloudrun --platform managed --region $REGION \
  --image gcr.io/$PROJECT_ID/wagtail-cloudrun

10. Congratulations!

You have just deployed a complex project to Cloud Run!

  • Cloud Run automatically and horizontally scales your container image to handle the received requests, then scales down when demand decreases. You only pay for the CPU, memory, and networking consumed during request handling.
  • Cloud SQL allows you to provision a managed PostgreSQL instance that is maintained automatically for you, and integrates natively into many Google Cloud systems.
  • Cloud Storage provides object storage that Django can access seamlessly through django-storages.
  • Secret Manager allows you to store secrets, and have them accessible by certain parts of Google Cloud and not others.

Clean up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  • In the Cloud Console, go to the Manage resources page.
  • In the project list, select your project then click Delete.
  • In the dialog, type the project ID and then click Shut down to delete the project.

Learn more