Wagtail on Cloud Run

1. Introduction

Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.

It also natively interfaces with many other parts of the Google Cloud ecosystem, including Cloud SQL for managed databases, Cloud Storage for unified object storage, and Secret Manager for managing secrets.

Wagtail is an open source content management system (CMS) built on top of Django. Django is a high-level Python web framework.

In this tutorial, you will use these components to deploy a small Wagtail project.

Note: This codelab was last verified with Wagtail 5.2.2, which supports Django 5.

What you'll learn

  • How to use the Cloud Shell
  • How to create a Cloud SQL database
  • How to create a Cloud Storage bucket
  • How to create Secret Manager secrets
  • How to use Secrets from different Google Cloud services
  • How to connect Google Cloud components to a Cloud Run service
  • How to use Artifact Registry to store built containers
  • How to deploy to Cloud Run
  • How to run database schema migrations in Cloud Build

2. Setup and requirements

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.

  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
  • The Project ID is unique across all Google Cloud projects and immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one, or try your own and see if it's available. Whatever you choose, it can't be changed after this step and remains for the duration of the project.
  • For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources and avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.

Google Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

Activate Cloud Shell

  1. From the Cloud Console, click Activate Cloud Shell.

If this is your first time starting Cloud Shell, you're presented with an intermediate screen describing what it is. If so, click Continue.

It should only take a few moments to provision and connect to Cloud Shell.

This virtual machine is loaded with all the development tools needed. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with a browser.

Once connected to Cloud Shell, you should see that you are authenticated and that the project is set to your project ID.

  2. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  3. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If the project is not set correctly, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

3. Enable the Cloud APIs

From Cloud Shell, enable the Cloud APIs for the components that will be used:

gcloud services enable \
  run.googleapis.com \
  sql-component.googleapis.com \
  sqladmin.googleapis.com \
  compute.googleapis.com \
  cloudbuild.googleapis.com \
  secretmanager.googleapis.com \
  artifactregistry.googleapis.com

Since this is the first time you're calling APIs from gcloud, you'll be asked to authorize using your credentials to make this request. This will happen once per Cloud Shell session.

This operation may take a few moments to complete.

Once completed, a success message similar to this one should appear:

Operation "operations/acf.cc11852d-40af-47ad-9d59-477a12847c9e" finished successfully.

4. Create a template project

You'll use the default Wagtail project template as your sample Wagtail project. To do this, you'll temporarily install Wagtail to generate the template.

To create this template project, use Cloud Shell to create a new directory named wagtail-cloudrun and navigate to it:

mkdir ~/wagtail-cloudrun
cd ~/wagtail-cloudrun

Then, install Wagtail into a temporary virtual environment:

virtualenv venv
source venv/bin/activate
pip install wagtail

Then, create a new template project in the current folder:

wagtail start myproject .

You'll now have a template Wagtail project in the current folder:

ls -F
Dockerfile  home/  manage.py*  myproject/  requirements.txt  search/ venv/

You can now exit and remove your temporary virtual environment:

deactivate
rm -rf venv

From this point on, Wagtail will run from within the container.

5. Create the backing services

You'll now create your backing services: a dedicated service account, an Artifact Registry repository, a Cloud SQL database, a Cloud Storage bucket, and a number of Secret Manager values.

Securing the values of the passwords used in deployment is important to the security of any project, and ensures that no one accidentally puts passwords where they don't belong (for example, directly in settings files, or typed into your terminal where they could be retrieved from history).

To begin, set two base environment variables, one for the Project ID:

PROJECT_ID=$(gcloud config get-value core/project)

And one for the region:

REGION=us-central1

Create a service account

To limit the access the service will have to other parts of Google Cloud, create a dedicated service account:

gcloud iam service-accounts create cloudrun-serviceaccount

You will reference this account by its email in future sections of this codelab. Set that value in an environment variable:

SERVICE_ACCOUNT=$(gcloud iam service-accounts list \
    --filter cloudrun-serviceaccount --format "value(email)")
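
Confirm the variable contains the service account's email address:

echo $SERVICE_ACCOUNT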

Create an Artifact Registry repository

To store the built container image, create a Docker repository in Artifact Registry in your chosen region:

gcloud artifacts repositories create containers --repository-format docker --location $REGION

You will reference this repository in future sections of this codelab. Set its path in an environment variable:

ARTIFACT_REGISTRY=${REGION}-docker.pkg.dev/${PROJECT_ID}/containers
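
To confirm the repository was created, you can list the repositories in your region:

gcloud artifacts repositories list --location $REGION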

Create the database

Create a Cloud SQL instance:

gcloud sql instances create myinstance --project $PROJECT_ID \
  --database-version POSTGRES_14 --tier db-f1-micro --region $REGION

This operation may take a few minutes to complete.

In that instance, create a database:

gcloud sql databases create mydatabase --instance myinstance

In that same instance, create a user:

DJPASS="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"
gcloud sql users create djuser --instance myinstance --password $DJPASS

Grant the service account permission to connect to the instance:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:${SERVICE_ACCOUNT} \
    --role roles/cloudsql.client

Create the storage bucket

Create a Cloud Storage bucket (note that the name must be globally unique):

GS_BUCKET_NAME=${PROJECT_ID}-media
gcloud storage buckets create gs://${GS_BUCKET_NAME} --location ${REGION} 

Grant permissions for the service account to administer the bucket:

gcloud storage buckets add-iam-policy-binding gs://${GS_BUCKET_NAME} \
    --member serviceAccount:${SERVICE_ACCOUNT} \
    --role roles/storage.admin

Since objects stored in the bucket will have a different origin (a bucket URL rather than a Cloud Run URL), you need to configure the Cross Origin Resource Sharing (CORS) settings.

Create a new file called cors.json, with the following contents:

touch cors.json
cloudshell edit cors.json

cors.json

[
    {
      "origin": ["*"],
      "responseHeader": ["Content-Type"],
      "method": ["GET"],
      "maxAgeSeconds": 3600
    }
]

Apply this CORS configuration to the newly created storage bucket:

gsutil cors set cors.json gs://$GS_BUCKET_NAME
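
You can confirm the configuration was applied:

gsutil cors get gs://$GS_BUCKET_NAME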

Store configuration as secret

Having set up the backing services, you'll now store their configuration values in a secret protected by Secret Manager.

Secret Manager allows you to store, manage, and access secrets as binary blobs or text strings. It works well for storing configuration information such as database passwords, API keys, or TLS certificates needed by an application at runtime.

First, create a file with the values for the database connection string, the media bucket, a secret key for Django (used for cryptographic signing of sessions and tokens), and a flag to enable debugging:

echo DATABASE_URL=\"postgres://djuser:${DJPASS}@//cloudsql/${PROJECT_ID}:${REGION}:myinstance/mydatabase\" > .env

echo GS_BUCKET_NAME=\"${GS_BUCKET_NAME}\" >> .env

echo SECRET_KEY=\"$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 50 | head -n 1)\" >> .env

echo DEBUG=True >> .env
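
The resulting .env file will look similar to the following, with your own generated values in place of the placeholders:

DATABASE_URL="postgres://djuser:<DJPASS>@//cloudsql/<PROJECT_ID>:us-central1:myinstance/mydatabase"
GS_BUCKET_NAME="<PROJECT_ID>-media"
SECRET_KEY="<50 random characters>"
DEBUG=True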

Then, create a secret called application_settings, using that file as the secret:

gcloud secrets create application_settings --data-file .env

Allow the service account access to this secret:

gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${SERVICE_ACCOUNT} --role roles/secretmanager.secretAccessor

Confirm the secret has been created by listing its versions:

gcloud secrets versions list application_settings

After confirming the secret has been created, remove the local file:

rm .env
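
If you need to inspect these settings later, you can retrieve them directly from Secret Manager:

gcloud secrets versions access latest --secret application_settings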

6. Configure your application

The template project that you previously created now needs some alterations. These changes will reduce the complexity of the settings configuration that comes with Wagtail, and integrate Wagtail with the backing services you previously created.

Configure settings

Find the generated base.py settings file, and rename it to basesettings.py in the main myproject folder:

mv myproject/settings/base.py myproject/basesettings.py

Using the Cloud Shell web editor, create a new settings.py file, with the following code:

touch myproject/settings.py
cloudshell edit myproject/settings.py

myproject/settings.py

import io
import os
from urllib.parse import urlparse

import environ

# Import the original settings from each template
from .basesettings import *

# Load the settings from the environment variable
env = environ.Env()
env.read_env(io.StringIO(os.environ.get("APPLICATION_SETTINGS", None)))

# Setting this value from django-environ
SECRET_KEY = env("SECRET_KEY")

# Ensure myproject is added to the installed applications
if "myproject" not in INSTALLED_APPS:
    INSTALLED_APPS.append("myproject")

# If defined, add service URLs to Django security settings
CLOUDRUN_SERVICE_URLS = env("CLOUDRUN_SERVICE_URLS", default=None)
if CLOUDRUN_SERVICE_URLS:
    CSRF_TRUSTED_ORIGINS = env("CLOUDRUN_SERVICE_URLS").split(",")
    # Remove the scheme from URLs for ALLOWED_HOSTS
    ALLOWED_HOSTS = [urlparse(url).netloc for url in CSRF_TRUSTED_ORIGINS]
else:
    ALLOWED_HOSTS = ["*"]

# Default False. True allows default landing pages to be visible
DEBUG = env.bool("DEBUG", default=False)

# Set this value from django-environ
DATABASES = {"default": env.db()}

# Change database settings if using the Cloud SQL Auth Proxy
if os.getenv("USE_CLOUD_SQL_AUTH_PROXY", None):
    DATABASES["default"]["HOST"] = "127.0.0.1"
    DATABASES["default"]["PORT"] = 5432

# Define static storage via django-storages[google]
GS_BUCKET_NAME = env("GS_BUCKET_NAME")
STATICFILES_DIRS = []
GS_DEFAULT_ACL = "publicRead"
STORAGES = {
    "default": {
        "BACKEND": "storages.backends.gcloud.GoogleCloudStorage",
    },
    "staticfiles": {
        "BACKEND": "storages.backends.gcloud.GoogleCloudStorage",
    },
}

Take the time to read the commentary added about each configuration.

Note that you may see linting errors on this file. This is expected: Cloud Shell does not have the dependencies for this project installed, so it may report invalid or unused imports.
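
To make this flow concrete, here is a minimal standalone sketch (not part of the project) of how django-environ parses the APPLICATION_SETTINGS environment variable that Cloud Run will later inject from Secret Manager. The example values are placeholders:

import io
import os

import environ

# Simulate the environment variable Cloud Run injects from Secret Manager.
os.environ["APPLICATION_SETTINGS"] = 'DEBUG=True\nGS_BUCKET_NAME="my-bucket"'

env = environ.Env()
# read_env accepts a file-like object, so the secret's contents can be
# parsed directly without writing them to disk.
env.read_env(io.StringIO(os.environ["APPLICATION_SETTINGS"]))

print(env.bool("DEBUG"))      # True
print(env("GS_BUCKET_NAME"))  # my-bucket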

Then, remove the old settings folder:

rm -rf myproject/settings/

You will then have two settings files: one generated by Wagtail, and the one you just created that builds on those settings:

ls myproject/*settings*
myproject/basesettings.py  myproject/settings.py

Finally, open manage.py, and update the DJANGO_SETTINGS_MODULE configuration to point to the new top-level settings.py file:

cloudshell edit manage.py

manage.py line (before)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.dev")

manage.py line (after)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

Make the same configuration change for the myproject/wsgi.py file:

cloudshell edit myproject/wsgi.py

myproject/wsgi.py line (before)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.dev")

myproject/wsgi.py line (after)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

Remove the automatically created Dockerfile:

rm Dockerfile

Python dependencies

Locate the requirements.txt file, and append the following packages: gunicorn to serve the application in production, psycopg2-binary to connect to PostgreSQL, django-storages[google] for the Cloud Storage backend, and django-environ for environment-based configuration:

cloudshell edit requirements.txt

requirements.txt (append)

gunicorn
psycopg2-binary
django-storages[google]
django-environ

Define your application image

Cloud Run will run any container that conforms to the Cloud Run Container Contract. This tutorial omits a Dockerfile and instead uses Cloud Native Buildpacks, which assist in building containers for common languages, including Python.

This tutorial customizes the Procfile that Buildpacks use to start the web application.

To containerize the template project, first create a new file named Procfile in the top level of your project (in the same directory as manage.py), and copy the following content. Note that Cloud Run sets the PORT environment variable on the container, which gunicorn binds to here:

touch Procfile
cloudshell edit Procfile

Procfile

web: gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application

7. Configure, build, and run the migration steps

To create the database schema in your Cloud SQL database and populate your Cloud Storage bucket with your static assets, you need to run migrate and collectstatic.

These base Django migration commands need to be run within the context of your built container image with access to your database.

You will also need to run createsuperuser to create an administrator account to log into the Django admin.

To do this, you will use Cloud Run Jobs to perform these tasks. Cloud Run jobs allow you to run processes that have a defined ending, making them ideal for administration tasks.

Define your Django superuser password

To create the superuser, you'll use the non-interactive version of the createsuperuser command. When run with --noinput, this command reads the password from the specially named DJANGO_SUPERUSER_PASSWORD environment variable instead of prompting for it.

Create a new secret, using a randomly generated password:

echo -n $(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1) | gcloud secrets create django_superuser_password --data-file=-

Allow your service account to access this secret:

gcloud secrets add-iam-policy-binding django_superuser_password \
  --member serviceAccount:${SERVICE_ACCOUNT} \
  --role roles/secretmanager.secretAccessor

Update your Procfile

To help with the clarity of your Cloud Run jobs, create shortcuts in your Procfile by appending the following entrypoints:

migrate: python manage.py migrate && python manage.py collectstatic --noinput --clear
createuser: python manage.py createsuperuser --username admin --email noop@example.com --noinput

You should now have three entries: the default web entrypoint, the migrate entrypoint to apply database migrations, and the createuser entrypoint to run the createsuperuser command.
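
Your complete Procfile should now look like this:

Procfile

web: gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application
migrate: python manage.py migrate && python manage.py collectstatic --noinput --clear
createuser: python manage.py createsuperuser --username admin --email noop@example.com --noinput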

Build your application image

With your Procfile updates in place, build the image:

gcloud builds submit --pack image=${ARTIFACT_REGISTRY}/myimage

Create Cloud Run jobs

Now that the image exists, you can create Cloud Run jobs using it.

These jobs use the image previously built, but with different --command values. Each value maps to an entrypoint in the Procfile: Buildpacks register every Procfile entry as a named process that can be selected as the container's command.

Create a job for the migration:

gcloud run jobs create migrate \
  --region $REGION \
  --image ${ARTIFACT_REGISTRY}/myimage \
  --set-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --set-secrets APPLICATION_SETTINGS=application_settings:latest \
  --service-account $SERVICE_ACCOUNT \
  --command migrate

Create a job for the user creation:

gcloud run jobs create createuser \
  --region $REGION \
  --image ${ARTIFACT_REGISTRY}/myimage \
  --set-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --set-secrets APPLICATION_SETTINGS=application_settings:latest \
  --set-secrets DJANGO_SUPERUSER_PASSWORD=django_superuser_password:latest \
  --service-account $SERVICE_ACCOUNT \
  --command createuser

Execute Cloud Run jobs

With the job configurations in place, run the migrations:

gcloud run jobs execute migrate --region $REGION --wait

Ensure this command output says the execution "successfully completed".

You will run this command later when you make updates to your application.

With the database set up, create the admin user by running the job:

gcloud run jobs execute createuser --region $REGION --wait

Ensure this command output says the execution "successfully completed".

You will not have to run this command again.

8. Deploy to Cloud Run

With the backing services created and populated, you can now create the Cloud Run service to access them.

Create the initial deployment of your containerized application to Cloud Run using the following command:

gcloud run deploy wagtail-cloudrun \
  --region $REGION \
  --image ${ARTIFACT_REGISTRY}/myimage \
  --set-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --set-secrets APPLICATION_SETTINGS=application_settings:latest \
  --service-account $SERVICE_ACCOUNT \
  --allow-unauthenticated

Wait a few moments until the deployment is complete. On success, the command line displays the service URL:

Service [wagtail-cloudrun] revision [wagtail-cloudrun-00001-...] has been deployed and is serving 100 percent of traffic.
Service URL: https://wagtail-cloudrun-...run.app

You can now visit your deployed container by opening this URL in a web browser.

9. Accessing the Django Admin

Updating CSRF settings

Django includes protections against Cross-Site Request Forgery (CSRF). Any time a form is submitted on your Django site, including logging into the Django admin, the CSRF_TRUSTED_ORIGINS setting is checked. If it doesn't include the origin of the request, Django returns an error.

In the myproject/settings.py file, if the CLOUDRUN_SERVICE_URLS environment variable is defined, its value is used for both the CSRF_TRUSTED_ORIGINS and ALLOWED_HOSTS settings. While defining ALLOWED_HOSTS isn't mandatory, it's good practice to add it, since it's already required for CSRF_TRUSTED_ORIGINS.

Because you need your service URL, this configuration can't be added until after your first deployment.

You will have to update your service to add this value. It could be added to the application_settings secret, or set directly as an environment variable.

The implementation below takes advantage of gcloud formatting and escaping: the ^##^ prefix in --update-env-vars changes the list delimiter from a comma to ##, so the comma-separated service URLs are kept together as a single value.

Retrieve your service URL:

CLOUDRUN_SERVICE_URLS=$(gcloud run services describe wagtail-cloudrun \
  --region $REGION  \
  --format "value(metadata.annotations[\"run.googleapis.com/urls\"])" | tr -d '"[]')
echo $CLOUDRUN_SERVICE_URLS

Set this value as an environment variable on your Cloud Run service:

gcloud run services update wagtail-cloudrun \
  --region $REGION \
  --update-env-vars "^##^CLOUDRUN_SERVICE_URLS=$CLOUDRUN_SERVICE_URLS"

Logging into the Django Admin

To access the Django admin interface, append /admin to your service URL.

To log in, use the username "admin" and retrieve the password with the following command:

gcloud secrets versions access latest --secret django_superuser_password && echo ""

10. Developing your application

As you develop your application, you will want to test it locally. To do that, you will need to connect either to your Cloud SQL ("production") database or to a local ("test") database.

Connect to your production database

You can connect to your Cloud SQL instance by using the Cloud SQL Auth Proxy. This application creates a connection from your local machine to the database, listening on 127.0.0.1 by default, which is why your settings.py overrides the database host when USE_CLOUD_SQL_AUTH_PROXY is set.

Once you have installed the Cloud SQL Auth Proxy, follow these steps:

# Create a virtualenv
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

# Copy the application settings to your local machine
gcloud secrets versions access latest --secret application_settings > temp_settings

# Run the Cloud SQL Auth Proxy
./cloud-sql-proxy ${PROJECT_ID}:${REGION}:myinstance

# In a new tab, start the local web server using these new settings
USE_CLOUD_SQL_AUTH_PROXY=true APPLICATION_SETTINGS=$(cat temp_settings) python manage.py runserver

Ensure you remove the temp_settings file after you have finished your work.

Connect to a local SQLite database

Alternatively, you can use a local database when developing your application. Django supports both PostgreSQL and SQLite; PostgreSQL has some features that SQLite lacks, but in many cases the functionality is identical.

To set up SQLite, update your application settings to point to a local database, and then apply your schema migrations.

To set up this method:

# Create a virtualenv
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

# Copy the application settings to your local machine
gcloud secrets versions access latest --secret application_settings > temp_settings

# Edit temp_settings so that DATABASE_URL points to a local sqlite file,
# for example:
#   DATABASE_URL=sqlite:////tmp/my-tmp-sqlite.db

# Export the updated settings as an environment variable
export APPLICATION_SETTINGS=$(cat temp_settings)

# Apply migrations to the local database
python manage.py migrate

# Start the local web server
python manage.py runserver
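
Note that this local SQLite database starts empty, so the admin account you created in Cloud SQL does not exist here. If you want to log into the local admin, create a superuser for this database as well:

python manage.py createsuperuser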

Ensure you remove the temp_settings file after you have finished your work.

Creating migrations

When making changes to your database models, you may need to generate Django's migration files by running python manage.py makemigrations.

You can run this command after setting up the production or test database connection. Alternatively, since makemigrations inspects your models without connecting to a database, you can generate the migration files by supplying empty settings:

SECRET_KEY="" DATABASE_URL="" GS_BUCKET_NAME="" python manage.py makemigrations

Applying application updates

To apply changes to your application, you will need to:

  • build your changes into a new image,
  • apply any database or static migrations, and then
  • update your Cloud Run service to use the new image.

To build your image:

gcloud builds submit --pack image=${ARTIFACT_REGISTRY}/myimage

If you have any migrations to apply, run the Cloud Run job:

gcloud run jobs execute migrate --region $REGION --wait

To update your service with the new image:

gcloud run services update wagtail-cloudrun \
  --region $REGION \
  --image ${ARTIFACT_REGISTRY}/myimage

11. Congratulations!

You have just deployed a complex project to Cloud Run!

  • Cloud Run automatically and horizontally scales your container image to handle the received requests, then scales down when demand decreases. You only pay for the CPU, memory, and networking consumed during request handling.
  • Cloud SQL allows you to provision a managed PostgreSQL instance that is maintained automatically for you, and integrates natively into many Google Cloud systems.
  • Cloud Storage provides object storage that Django can use seamlessly through django-storages.
  • Secret Manager lets you store secrets and expose them only to the parts of Google Cloud that need them.

Clean up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  • In the Cloud Console, go to the Manage resources page.
  • In the project list, select your project then click Delete.
  • In the dialog, type the project ID and then click Shut down to delete the project.
