Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.

It also natively interfaces with many other parts of the Google Cloud ecosystem, including Cloud SQL for managed databases, Cloud Storage for unified object storage, and Secret Manager for managing secrets.

In this tutorial, you will use these components to deploy a small Django project.

What you'll learn

  * How to create backing services for a Django project: a Cloud SQL database, a Cloud Storage bucket, and Secret Manager secrets
  * How to containerize that project and deploy it to Cloud Run

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. (If you don't already have a Gmail or G Suite account, you must create one.)

Remember the project ID: it's a name that must be unique across all Google Cloud projects. It will be referred to later in this codelab as PROJECT_ID.

  2. Next, you'll need to enable billing in Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running.

New users of Google Cloud are eligible for a $300 free trial.

Google Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

Activate Cloud Shell

  1. From the Cloud Console, click Activate Cloud Shell.

If you've never started Cloud Shell before, you'll be presented with an intermediate screen (below the fold) describing what it is. If that's the case, click Continue (and you won't ever see it again).

It should only take a few moments to provision and connect to Cloud Shell.

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with just a browser or your Chromebook.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.

  2. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  3. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:

gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If it is not, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

From Cloud Shell, enable the Cloud APIs for the components that will be used:

gcloud services enable \
  run.googleapis.com \
  sql-component.googleapis.com \
  sqladmin.googleapis.com \
  compute.googleapis.com \
  cloudbuild.googleapis.com \
  secretmanager.googleapis.com

This should produce a successful message similar to this one:

Operation "operations/acf.cc11852d-40af-47ad-9d59-477a12847c9e" finished successfully.

You'll use the default Django project template as your sample Django project.

To create this template project, use Cloud Shell to create a new directory named django-cloudrun and navigate to it:

mkdir ~/django-cloudrun
cd ~/django-cloudrun

Then, temporarily install Django into your local environment:

python3 -m pip install --user Django

Then, create a new template project:

python3 -m django startproject myproject .

You'll now have a file called manage.py, and a folder called myproject which will contain a number of files, including a settings.py.

.
├── manage.py
└── myproject
    ├── __init__.py
    ├── asgi.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py
1 directory, 6 files

You can now uninstall Django from your Cloud Shell environment:

pip3 uninstall Django -y

From here, Django will be called within the container.

You'll now create your backing services: a Cloud SQL database, a Cloud Storage bucket, and a number of Secret Manager values.

Securing the values of the passwords used in deployment is important to the security of any project, and ensures that no one accidentally puts passwords where they don't belong (for example, directly in settings files, or typed directly into your terminal where they could be retrieved from history.)

First, set two base environment variables, one for the project ID:

PROJECT_ID=$(gcloud config get-value core/project)

And one for the region:

REGION=us-central1

Create the database

Now, create a Cloud SQL instance:

gcloud sql instances create myinstance --project $PROJECT_ID \
  --database-version POSTGRES_11 --tier db-f1-micro --region $REGION

This operation may take a few minutes to complete.

Then in that instance, create a database:

gcloud sql databases create mydatabase --instance myinstance

Then in that same instance, create a user:

DJPASS="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"
gcloud sql users create djuser --instance myinstance --password $DJPASS
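The shell pipeline above draws random bytes from /dev/urandom, keeps only alphanumeric characters, and truncates the result to 30 characters. As a reference point, here is a minimal Python sketch of the same idea using the standard secrets module (the function name is illustrative, not part of the codelab):

```python
import secrets
import string

# Alphanumeric alphabet, matching the tr -dc 'a-zA-Z0-9' filter above.
ALPHABET = string.ascii_letters + string.digits

def generate_password(length=30):
    """Generate a random alphanumeric password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
```

The secrets module uses a cryptographically strong random source, which is the property that matters for a database password.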

Create the storage bucket

Finally, create a Cloud Storage bucket (noting the name must be globally unique):

GS_BUCKET_NAME=${PROJECT_ID}-media

gsutil mb -l ${REGION} gs://${GS_BUCKET_NAME}

Store configuration as secret

Having set up the backing services, you'll now store these values in a file protected using Secret Manager.

Secret Manager allows you to store, manage, and access secrets as binary blobs or text strings. It works well for storing configuration information such as database passwords, API keys, or TLS certificates needed by an application at runtime.

First, create a file with the values for the database connection string, media bucket, and a secret key for Django (used for cryptographic signing of sessions and tokens):

echo DATABASE_URL=\"postgres://djuser:${DJPASS}@//cloudsql/${PROJECT_ID}:${REGION}:myinstance/mydatabase\" > .env

echo GS_BUCKET_NAME=\"${GS_BUCKET_NAME}\" >> .env

echo SECRET_KEY=\"$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 50 | head -n 1)\" >> .env
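The resulting .env file holds one KEY="value" pair per line, which django-environ will later parse. A rough sketch of that format and of how such a file parses into a settings mapping (all values below are placeholders, not real secrets):

```python
# Placeholder .env content in the same shape as the file built above.
env_content = (
    'DATABASE_URL="postgres://djuser:example-pass@//cloudsql/my-project:us-central1:myinstance/mydatabase"\n'
    'GS_BUCKET_NAME="my-project-media"\n'
    'SECRET_KEY="not-a-real-key"\n'
)

def parse_env(text):
    """Parse KEY="value" lines into a dict, stripping surrounding quotes."""
    result = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip().strip('"')
    return result

settings = parse_env(env_content)
```

django-environ's real parser handles more edge cases (comments, export prefixes, interpolation), but the file format is this simple.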

Then, create a secret called django_settings, using that file as the secret:

gcloud secrets create django_settings --replication-policy automatic

gcloud secrets versions add django_settings --data-file .env

Allow the Cloud Run service account to access this secret:

export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDRUN=${PROJECTNUM}-compute@developer.gserviceaccount.com

gcloud secrets add-iam-policy-binding django_settings \
  --member serviceAccount:${CLOUDRUN} --role roles/secretmanager.secretAccessor

Confirm the secret has been created by listing the secrets:

gcloud secrets list

After confirming the secret has been created, remove the local file:

rm .env

Given the backing services you just created, you'll need to make some changes to the template project to suit them. This includes using django-environ to read your configuration settings from environment variables, which you'll seed with the values you defined as secrets.

Find the generated settings.py file, and using one of your preferred command line editors (nano, vim, or emacs) or the Cloud Shell web editor (click on the "Launch code editor" pen-shaped icon), open the file and replace the entire file's contents with the following:

myproject/settings.py

import os

import environ
import google.auth
from google.cloud import secretmanager_v1beta1 as sm

# Import settings with django-environ
env = environ.Env()

# Import settings from Secret Manager if no local .env file exists
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
env_file = os.path.join(BASE_DIR, ".env")

if not os.path.isfile(env_file):
    _, project = google.auth.default()

    if project:
        client = sm.SecretManagerServiceClient()
        path = client.secret_version_path(project, "django_settings", "latest")
        payload = client.access_secret_version(path).payload.data.decode("UTF-8")

        with open(env_file, "w") as f:
            f.write(payload)

env.read_env(env_file)

# Pull value from environment
SECRET_KEY = env("SECRET_KEY")

# Allow Django to load from any domain
ALLOWED_HOSTS = ["*"]

# Enabling debugging shows the 'Install success' rocketship page
DEBUG = True

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "myproject",  # for a later data migration
    "storages",  # for django-storages
]

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
]

ROOT_URLCONF = "myproject.urls"

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]

WSGI_APPLICATION = "myproject.wsgi.application"

# Use django-environ to define the connection string
DATABASES = {"default": env.db()}

AUTH_PASSWORD_VALIDATORS = [
    {"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",},
    {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",},
    {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",},
    {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",},
]

LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_L10N = True
USE_TZ = True

# Define static storage via django-storages[google]
GS_BUCKET_NAME = env("GS_BUCKET_NAME", None)

DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_DEFAULT_ACL = "publicRead"

Take the time to note the commentary added about each configuration. Any sections without comments are directly taken from the templated settings.py file without modification.
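One line worth unpacking is DATABASES = {"default": env.db()}: django-environ reads the DATABASE_URL environment variable and expands it into the dict Django expects. A simplified sketch of that parsing for a conventional connection URL (the Cloud SQL form used in this codelab additionally encodes a Unix socket path as the host; the function here is illustrative):

```python
from urllib.parse import unquote, urlsplit

def parse_database_url(url):
    """Split a postgres:// URL into the pieces of a Django DATABASES entry."""
    parts = urlsplit(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "USER": unquote(parts.username or ""),
        "PASSWORD": unquote(parts.password or ""),
        "HOST": parts.hostname or "",
        "PORT": parts.port,
        "NAME": parts.path.lstrip("/"),
    }

config = parse_database_url("postgres://djuser:pass@dbhost:5432/mydatabase")
```

The real env.db() supports many more schemes and options, but this is the essential transformation.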

Finally, create a new file called requirements.txt in the top level of your project (where manage.py sits) with the following packages:

requirements.txt

asgiref==3.2.7
cachetools==4.1.0
certifi==2020.4.5.1
chardet==3.0.4
Django==3.0.6
django-environ==0.4.5
django-storages==1.9.1
google-api-core==1.17.0
google-auth==1.14.3
google-cloud-core==1.3.0
google-cloud-secret-manager==0.2.0
google-cloud-storage==1.28.1
google-resumable-media==0.5.0
googleapis-common-protos==1.51.0
grpc-google-iam-v1==0.12.3
grpcio==1.28.1
gunicorn==20.0.4
idna==2.9
protobuf==3.11.3
psycopg2-binary==2.8.5
pyasn1==0.4.8
pyasn1-modules==0.2.8
pytz==2020.1
requests==2.23.0
rsa==4.0
six==1.14.0
sqlparse==0.3.1
urllib3==1.25.9

Container Registry is a private container image registry that runs on Google Cloud. You'll use it to store your containerized project.

To containerize the template project, first create a new file named Dockerfile in the top level of your project (in the same directory as manage.py), and copy the following content:

Dockerfile

# Use an official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8-slim

ENV APP_HOME /app
WORKDIR $APP_HOME

# Install dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy local code to the container image.
COPY . .

# Service must listen to $PORT environment variable.
# This default value facilitates local development.
ENV PORT 8080

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application
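The PORT and worker settings above follow Cloud Run's container contract: the platform injects the PORT environment variable at runtime, and the ENV line only sets a default for local development. A small Python sketch of that logic (the function names are illustrative, not part of the codelab's code):

```python
import multiprocessing

def resolve_port(environ):
    """Return the port the service should bind to, defaulting to 8080."""
    return int(environ.get("PORT", "8080"))

def worker_count(cores):
    """One gunicorn worker per available core, with a floor of one."""
    return max(1, cores)

port = resolve_port({})                              # 8080 when PORT is unset
workers = worker_count(multiprocessing.cpu_count())  # scale with the host
```

The Dockerfile pins one worker with eight threads, which suits small Cloud Run instances; on larger instances the comment's advice is to raise the worker count toward the core count.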

Your file structure should now look like this:

.
├── Dockerfile
├── manage.py
├── myproject
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── requirements.txt

1 directory, 8 files

Now, build your container image using Cloud Build, by running the following command from the directory containing the Dockerfile:

gcloud builds submit --tag gcr.io/$PROJECT_ID/django-cloudrun

Once pushed to the registry, you'll see a SUCCESS message containing the image name. The image is stored in Container Registry and can be re-used if desired.

You can list all the container images associated with your current project using this command:

gcloud container images list

To create the database schema in your Cloud SQL database and populate your Cloud Storage bucket with your media assets, you need to run migrate and collectstatic.

These base Django migration commands need to be run within the context of your built container with access to your database.

You will also need to run createsuperuser to create an administrator account to log into the Django admin.

Allow access to components

For this step, we're going to use Cloud Build to run Django commands, so Cloud Build will need access to the Django configuration stored in Secret Manager.

As earlier, set the IAM policy to explicitly allow the Cloud Build service account access to the secret settings:

export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDBUILD=${PROJECTNUM}@cloudbuild.gserviceaccount.com

gcloud secrets add-iam-policy-binding django_settings \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor

Additionally, allow Cloud Build to connect to Cloud SQL in order to apply the database migrations:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member serviceAccount:${CLOUDBUILD} --role roles/cloudsql.client

Create your Django superuser

To create the superuser, you're going to use a data migration.

Firstly, create a new folder within your project for the migration:

mkdir myproject/migrations
touch myproject/migrations/__init__.py
touch myproject/migrations/0001_createsuperuser.py

Then, in the new 0001_createsuperuser.py file, copy the following contents:

myproject/migrations/0001_createsuperuser.py

from django.db import migrations

import google.auth
from google.cloud import secretmanager_v1beta1 as sm


def createsuperuser(apps, schema_editor):

    # Retrieve secret from Secret Manager 
    _, project = google.auth.default()
    client = sm.SecretManagerServiceClient()
    path = client.secret_version_path(project, "admin_password", "latest")
    admin_password = client.access_secret_version(path).payload.data.decode("UTF-8")

    # Create a new user using acquired password
    from django.contrib.auth.models import User
    User.objects.create_superuser("admin", password=admin_password)


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.RunPython(createsuperuser)
    ]

Now back in the terminal, create the admin_password secret within Secret Manager, and allow it to be accessed only by Cloud Build:

gcloud secrets create admin_password --replication-policy automatic

admin_password="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"

echo -n "${admin_password}" | gcloud secrets versions add admin_password --data-file=-

gcloud secrets add-iam-policy-binding admin_password \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor

Create the migration configuration

Next, create the following Cloud Build configuration file, in the top level of your project (where manage.py sits):

cloudmigrate.yaml

steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/${PROJECT_ID}/django-cloudrun", "."]

- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/${PROJECT_ID}/django-cloudrun"]

- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/$PROJECT_ID/django-cloudrun",
         "-s", "${PROJECT_ID}:${_REGION}:myinstance",
         "--", "python", "manage.py", "migrate"]

- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/$PROJECT_ID/django-cloudrun",
         "-s", "${PROJECT_ID}:${_REGION}:myinstance",
         "--", "python", "manage.py", "collectstatic", "--no-input"]

Run the migration

Finally, run all the initial migrations through Cloud Build:

gcloud builds submit --config cloudmigrate.yaml \
    --substitutions _REGION=$REGION

With the backing services created and populated, you can now create the Cloud Run service to access them.

The initial deployment of your containerized application to Cloud Run is created using the following command:

gcloud run deploy django-cloudrun --platform managed --region $REGION \
  --image gcr.io/$PROJECT_ID/django-cloudrun \
  --add-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --allow-unauthenticated

Wait a few moments until the deployment is complete. On success, the command line displays the service URL:

Service [django-cloudrun] revision [django-cloudrun-...] has been deployed
and is serving traffic at https://django-cloudrun-...-uc.a.run.app

You can also retrieve the service URL with this command:

gcloud run services describe django-cloudrun \
  --platform managed \
  --region $REGION  \
  --format "value(status.url)"

You can now visit your deployed service by opening this URL in a web browser.

You can also log into the Django admin interface (add /admin to the URL) with the username "admin" and the admin password, which you can retrieve using the following command:

gcloud secrets versions access latest --secret admin_password

Deploying again

If you want to make any changes to your Django project, you'll need to build your image again:

gcloud builds submit --tag gcr.io/$PROJECT_ID/django-cloudrun

Should your change include static or database alterations, be sure to run your migrations as well:

gcloud builds submit --config cloudmigrate.yaml \
    --substitutions _REGION=$REGION

Finally, re-deploy:

gcloud run deploy django-cloudrun --platform managed --region $REGION \
  --image gcr.io/$PROJECT_ID/django-cloudrun

You have just deployed a complex project to Cloud Run!

Clean up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial, the simplest option is to delete the project you created for this codelab, which removes all the resources within it.

Learn more