1. Introduction
Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.
It also natively interfaces with many other parts of the Google Cloud ecosystem, including Cloud SQL for managed databases, Cloud Storage for unified object storage, and Secret Manager for managing secrets.
Django is a high-level Python web framework.
In this tutorial, you will use these components to deploy a small Django project.
What you'll learn
- How to use the Cloud Shell
- How to create a Cloud SQL database
- How to create a Cloud Storage bucket
- How to create Secret Manager secrets
- How to use Secrets from different Google Cloud services
- How to connect Google Cloud components to a Cloud Run service
- How to use Container Registry to store built containers
- How to deploy to Cloud Run
- How to run database schema migrations in Cloud Build
2. Setup and requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can update it at any time.
- The Project ID is unique across all Google Cloud projects and immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you may generate another random one. Alternatively, you can try your own and see if it's available. It cannot be changed after this step and will remain for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, you can delete the resources you created or delete the whole project. New users of Google Cloud are eligible for the $300 USD Free Trial program.
Google Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.
Activate Cloud Shell
- From the Cloud Console, click Activate Cloud Shell.
If you've never started Cloud Shell before, you're presented with an intermediate screen describing what it is. If that's the case, click Continue (you won't see it again).
It should only take a few moments to provision and connect to Cloud Shell.
This virtual machine is loaded with all the development tools you need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with just a browser or a Chromebook.
Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.
- Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list
Command output
Credentialed Accounts

ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
- Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project
Command output
[core]
project = <PROJECT_ID>
If the project is not set correctly, you can set it with this command:
gcloud config set project <PROJECT_ID>
Command output
Updated property [core/project].
3. Enable the Cloud APIs
From Cloud Shell, enable the Cloud APIs for the components that will be used:
gcloud services enable \
  run.googleapis.com \
  sql-component.googleapis.com \
  sqladmin.googleapis.com \
  compute.googleapis.com \
  cloudbuild.googleapis.com \
  secretmanager.googleapis.com \
  artifactregistry.googleapis.com
Since this is the first time you're calling APIs from gcloud, you'll be asked to authorize using your credentials to make this request. This will happen once per Cloud Shell session.
This operation may take a few moments to complete.
Once completed, a success message similar to this one should appear:
Operation "operations/acf.cc11852d-40af-47ad-9d59-477a12847c9e" finished successfully.
4. Create a template project
You'll use the default Django project template as your sample Django project.
To create this template project, use Cloud Shell to create a new directory named django-cloudrun and navigate to it:
mkdir ~/django-cloudrun
cd ~/django-cloudrun
Then, install Django into a temporary virtual environment:
virtualenv venv
source venv/bin/activate
pip install Django
Save the list of installed packages to requirements.txt:
pip freeze > requirements.txt
This list should include Django and its dependencies: sqlparse and asgiref.
Then, create a new template project:
django-admin startproject myproject .
You'll get a new file called manage.py, and a new folder called myproject which will contain a number of files, including a settings.py.
Confirm that the contents of your top-level folder are as expected:
ls -F
manage.py myproject/ requirements.txt venv/
Confirm that the contents of the myproject folder are as expected:
ls -F myproject/
__init__.py asgi.py settings.py urls.py wsgi.py
You can now exit and remove your temporary virtual environment:
deactivate
rm -rf venv
From here, Django will be called within the container.
5. Create the backing services
You'll now create your backing services: a dedicated service account, a Cloud SQL database, a Cloud Storage bucket, and a number of Secret Manager values.
Securing the values of the passwords used in deployment is important to the security of any project, and ensures that no one accidentally puts passwords where they don't belong (for example, directly in settings files, or typed into a terminal where they could be retrieved from history).
To begin, set two base environment variables, one for the Project ID:
PROJECT_ID=$(gcloud config get-value core/project)
And one for the region:
REGION=us-central1
Create a service account
To limit the access the service will have to other parts of Google Cloud, create a dedicated service account:
gcloud iam service-accounts create cloudrun-serviceaccount
You will reference this account by its email address in later sections of this codelab. Set that value in an environment variable:
SERVICE_ACCOUNT=$(gcloud iam service-accounts list \
  --filter cloudrun-serviceaccount --format "value(email)")
Create the database
Create a Cloud SQL instance:
gcloud sql instances create myinstance --project $PROJECT_ID \
  --database-version POSTGRES_13 --tier db-f1-micro --region $REGION
This operation may take a few minutes to complete.
In that instance, create a database:
gcloud sql databases create mydatabase --instance myinstance
In that same instance, create a user:
DJPASS="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"
gcloud sql users create djuser --instance myinstance --password $DJPASS
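As an aside, the /dev/urandom pipeline above generates a 30-character alphanumeric password. The same result can be produced in Python with the standard-library secrets module — a sketch for illustration, not part of this tutorial's code:

```python
import secrets
import string


def generate_password(length: int = 30) -> str:
    """Generate a random alphanumeric password, like the shell pipeline above."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


pw = generate_password()
```

Unlike the random module, secrets draws from a cryptographically secure source, which is what you want for credentials.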
Grant the service account permission to connect to the instance:
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:${SERVICE_ACCOUNT} \
  --role roles/cloudsql.client
Create the storage bucket
Finally, create a Cloud Storage bucket (note that the name must be globally unique):
GS_BUCKET_NAME=${PROJECT_ID}-media
gsutil mb -l ${REGION} gs://${GS_BUCKET_NAME}
Store configuration as a secret
Having set up the backing services, you'll now store their configuration values in a file protected by Secret Manager.
Secret Manager allows you to store, manage, and access secrets as binary blobs or text strings. It works well for storing configuration information such as database passwords, API keys, or TLS certificates needed by an application at runtime.
First, create a file with the values for the database connection string, media bucket, a secret key for Django (used for cryptographic signing of sessions and tokens), and to enable debugging:
echo DATABASE_URL=\"postgres://djuser:${DJPASS}@//cloudsql/${PROJECT_ID}:${REGION}:myinstance/mydatabase\" > .env
echo GS_BUCKET_NAME=\"${GS_BUCKET_NAME}\" >> .env
echo SECRET_KEY=\"$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 50 | head -n 1)\" >> .env
echo DEBUG=\"True\" >> .env
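For reference, each line of the .env file is a simple KEY="value" pair. Here is a minimal sketch of how such a payload splits into settings — django-environ's read_env performs the real parsing, with more edge cases handled; the sample values below are hypothetical:

```python
import io


def parse_env(text: str) -> dict:
    """Split KEY="value" lines into a dict, skipping blanks and comments."""
    values = {}
    for line in io.StringIO(text):
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key] = value.strip('"')
    return values


# Hypothetical sample payload, mirroring the file created above.
sample = 'GS_BUCKET_NAME="my-project-media"\nDEBUG="True"\n'
settings = parse_env(sample)
```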
Then, create a secret called application_settings, using that file as the secret:
gcloud secrets create application_settings --data-file .env
Allow the service account access to this secret:
gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${SERVICE_ACCOUNT} --role roles/secretmanager.secretAccessor
Confirm the secret has been created by listing the secrets:
gcloud secrets versions list application_settings
After confirming the secret has been created, remove the local file:
rm .env
6. Configure your application
Given the backing services you just created, you'll need to make some changes to the template project to suit them.
This includes introducing django-environ to use environment variables as your configuration settings, which you'll seed with the values you defined as secrets. To implement this, you'll extend the template settings. You will also need to add additional Python dependencies.
Configure settings
Find the generated settings.py file, and rename it to basesettings.py:
mv myproject/settings.py myproject/basesettings.py
Next, create a new settings.py, then use the Cloud Shell web editor to open it and replace the entire file's contents with the following:
touch myproject/settings.py
cloudshell edit myproject/settings.py
myproject/settings.py
import io
import os
from urllib.parse import urlparse

import environ

# Import the original settings from each template
from .basesettings import *

# Load the settings from the environment variable
env = environ.Env()
env.read_env(io.StringIO(os.environ.get("APPLICATION_SETTINGS", None)))

# Setting this value from django-environ
SECRET_KEY = env("SECRET_KEY")

# If defined, add service URL to Django security settings
CLOUDRUN_SERVICE_URL = env("CLOUDRUN_SERVICE_URL", default=None)
if CLOUDRUN_SERVICE_URL:
    ALLOWED_HOSTS = [urlparse(CLOUDRUN_SERVICE_URL).netloc]
    CSRF_TRUSTED_ORIGINS = [CLOUDRUN_SERVICE_URL]
else:
    ALLOWED_HOSTS = ["*"]

# Default false. True allows default landing pages to be visible
DEBUG = env("DEBUG", default=False)

# Set this value from django-environ
DATABASES = {"default": env.db()}

# Change database settings if using the Cloud SQL Auth Proxy
if os.getenv("USE_CLOUD_SQL_AUTH_PROXY", None):
    DATABASES["default"]["HOST"] = "127.0.0.1"
    DATABASES["default"]["PORT"] = 5432

if "myproject" not in INSTALLED_APPS:
    INSTALLED_APPS += ["myproject"]  # for custom data migration

# Define static storage via django-storages[google]
GS_BUCKET_NAME = env("GS_BUCKET_NAME")
STATICFILES_DIRS = []
DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_DEFAULT_ACL = "publicRead"
Take the time to read the commentary about each configuration value.
Note that you may see linting errors on this file. This is expected: Cloud Shell does not have the context of this project's requirements, and so may report invalid or unused imports.
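To see what the security settings in settings.py compute, here is a standalone sketch of the ALLOWED_HOSTS logic using a hypothetical service URL (yours is assigned at deploy time):

```python
from urllib.parse import urlparse

# Hypothetical Cloud Run service URL.
service_url = "https://django-cloudrun-abc123-uc.a.run.app"

# ALLOWED_HOSTS needs the bare hostname; CSRF_TRUSTED_ORIGINS needs the full origin.
allowed_hosts = [urlparse(service_url).netloc]
csrf_trusted_origins = [service_url]
```

This is why settings.py stores the full URL in CSRF_TRUSTED_ORIGINS but only the netloc in ALLOWED_HOSTS.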
Python dependencies
Locate the requirements.txt file, and append the following packages:
cloudshell edit requirements.txt
requirements.txt (append)
gunicorn==20.1.0
psycopg2-binary==2.9.5
django-storages[google]==1.12.3
django-environ==0.8.1
Define your application image
Cloud Run will run any container as long as it conforms to the Cloud Run Container Contract. This tutorial opts not to include a Dockerfile, instead using Cloud Native Buildpacks.
Buildpacks assist in building containers for common languages, including Python. To build a Python container, the only alteration to the code is to define the command to start the web service.
To containerize the template project, first create a new file named Procfile in the top level of your project (in the same directory as manage.py), and copy in the following content:
touch Procfile
cloudshell edit Procfile
Procfile
web: gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application
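Cloud Run injects the port to listen on via the PORT environment variable, which the $PORT reference in the Procfile picks up. A sketch of the equivalent lookup in Python — get_bind_address is an illustrative helper, not part of this project:

```python
import os


def get_bind_address(default_port: int = 8080) -> str:
    """Build a gunicorn-style bind address: Cloud Run sets PORT; fall back locally."""
    port = int(os.environ.get("PORT", str(default_port)))
    return f"0.0.0.0:{port}"
```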
7. Configure, build, and run migration steps
To create the database schema in your Cloud SQL database and populate your Cloud Storage bucket with your static assets, you need to run migrate and collectstatic.
These base Django migration commands need to be run within the context of your built container image with access to your database.
You will also need to run createsuperuser to create an administrator account for logging into the Django admin.
Allow access to components
For this step, we're going to use Cloud Build to run Django commands, so Cloud Build will need access to the Django configuration stored in Secret Manager.
As earlier, set the IAM policy to explicitly allow the Cloud Build service account access to the secret settings:
export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDBUILD=${PROJECTNUM}@cloudbuild.gserviceaccount.com
gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor
Additionally, allow Cloud Build to connect to Cloud SQL in order to apply the database migrations:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:${CLOUDBUILD} --role roles/cloudsql.client
Create your Django superuser
To create the superuser, you're going to use a data migration. This migration needs to be created in the migrations folder under myproject.
First, create the base folder structure:
mkdir myproject/migrations
touch myproject/migrations/__init__.py
Then, create the new migration, copying the following contents:
touch myproject/migrations/0001_createsuperuser.py
cloudshell edit myproject/migrations/0001_createsuperuser.py
myproject/migrations/0001_createsuperuser.py
import os

from django.contrib.auth.models import User
from django.db import migrations


def createsuperuser(apps, schema_editor):
    admin_password = os.environ["ADMIN_PASSWORD"]
    User.objects.create_superuser("admin", password=admin_password)


class Migration(migrations.Migration):

    initial = True

    dependencies = []

    operations = [migrations.RunPython(createsuperuser)]
Now, back in the terminal, create the admin_password value in Secret Manager, and allow it to be accessed by Cloud Build:
admin_password="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"
echo -n "${admin_password}" | gcloud secrets create admin_password --data-file=-
gcloud secrets add-iam-policy-binding admin_password \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor
Create the migration configuration
Create a migration configuration file that Cloud Build will use to run the database and static migration commands:
touch migrate.yaml
cloudshell edit migrate.yaml
migrate.yaml
steps:
  # This step creates a new image, adding the Cloud SQL Auth Proxy
  # to allow Cloud Build to connect securely to Cloud SQL.
  - id: "docker-layer"
    name: "gcr.io/cloud-builders/docker"
    entrypoint: bash
    args:
      - "-c"
      - "echo \"FROM ${_IMAGE_NAME}\nCOPY --from=gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy /cloudsql/cloud_sql_proxy\" > Dockerfile-proxy && docker build -f Dockerfile-proxy -t ${_IMAGE_NAME}-proxy ."

  # This step runs the Django migration commands using the image built in the previous step.
  # It starts the Cloud SQL Auth Proxy as a background process, then runs the Django commands.
  - id: "migrate"
    name: "${_IMAGE_NAME}-proxy"
    env:
      - USE_CLOUD_SQL_AUTH_PROXY=true
    secretEnv:
      - APPLICATION_SETTINGS
      - ADMIN_PASSWORD
    entrypoint: launcher
    args:
      - "bash"
      - "-c"
      - "(/cloudsql/cloud_sql_proxy -instances=${_INSTANCE_CONNECTION_NAME}=tcp:5432 & sleep 2) &&
        python3 manage.py migrate &&
        python3 manage.py collectstatic --noinput"

substitutions:
  _INSTANCE_CONNECTION_NAME: "${PROJECT_ID}:${_REGION}:myinstance"
  _IMAGE_NAME: "gcr.io/${PROJECT_ID}/myimage"
  _REGION: us-central1

availableSecrets:
  secretManager:
    - versionName: projects/${PROJECT_ID}/secrets/application_settings/versions/latest
      env: APPLICATION_SETTINGS
    - versionName: projects/${PROJECT_ID}/secrets/admin_password/versions/latest
      env: ADMIN_PASSWORD

options:
  dynamicSubstitutions: true
Build your application image
You can now build your image, which will be named myimage:
gcloud builds submit --pack image=gcr.io/${PROJECT_ID}/myimage
Run the migration
With the configurations in place, run the migrations:
gcloud builds submit --config migrate.yaml
Note: if you chose a region other than "us-central1", specify that value in your command:
gcloud builds submit --config migrate.yaml \
  --substitutions _REGION=$REGION
8. Deploy to Cloud Run
With the backing services created and populated, you can now create the Cloud Run service to access them.
Deploy the service to Cloud Run, using the image you built earlier, with the following command:
gcloud run deploy django-cloudrun \
  --platform managed \
  --region $REGION \
  --image gcr.io/${PROJECT_ID}/myimage \
  --set-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --set-secrets APPLICATION_SETTINGS=application_settings:latest \
  --service-account $SERVICE_ACCOUNT \
  --allow-unauthenticated
On success, the command line displays the service URL:
Service [django-cloudrun] revision [django-cloudrun-00001-...] has been deployed
and is serving 100 percent of traffic.
Service URL: https://django-cloudrun-...-uc.a.run.app
You can also retrieve the service URL with this command:
CLOUDRUN_SERVICE_URL=$(gcloud run services describe django-cloudrun \
  --platform managed \
  --region $REGION \
  --format "value(status.url)")
echo $CLOUDRUN_SERVICE_URL
You can now visit your deployed service by opening this URL in a web browser.
9. Accessing the Django Admin
One of the main features of Django is its interactive admin.
Updating CSRF settings
Django includes protections against Cross-Site Request Forgery (CSRF). Any time a form is submitted on your Django site, including logging into the Django admin, the Trusted Origins setting is checked. If it doesn't match the origin of the request, Django returns an error.
In the myproject/settings.py file, if the CLOUDRUN_SERVICE_URL environment variable is defined, it's used in the CSRF_TRUSTED_ORIGINS and ALLOWED_HOSTS settings. While defining ALLOWED_HOSTS isn't mandatory, it's good practice to add it since it's already required for CSRF_TRUSTED_ORIGINS.
Because you need your service URL, this configuration can't be added until after your first deployment.
You'll have to make a new version of the application settings secret in order to add this setting:
# Save a copy of the secret to your machine
gcloud secrets versions access latest --secret application_settings > temp_settings

# Append the service URL
echo CLOUDRUN_SERVICE_URL=${CLOUDRUN_SERVICE_URL} >> temp_settings

# Create a new version of the secret
gcloud secrets versions add application_settings --data-file temp_settings

# Delete your local copy
rm temp_settings
Now, re-deploy the service:
gcloud run services update django-cloudrun \
  --platform managed \
  --region $REGION \
  --image gcr.io/${PROJECT_ID}/myimage
The service has been configured to use the most recent version of the application settings secret, so updating the service will force it to use the new secret version.
Logging into the Django Admin
To access the Django admin interface, append /admin to your service URL.
Now log in with the username "admin" and retrieve your password using the following command:
gcloud secrets versions access latest --secret admin_password && echo ""
10. Applying application updates
If you want to make changes to your Django site, you will need to:
- build your changes into a new image,
- apply any database or static migrations, and then
- update your Cloud Run service to use the new image.
To build your image:
gcloud builds submit --pack image=gcr.io/${PROJECT_ID}/myimage
To apply database and static migrations:
gcloud builds submit --config migrate.yaml
To update your service:
gcloud run services update django-cloudrun \
  --platform managed \
  --region $REGION \
  --image gcr.io/${PROJECT_ID}/myimage
Running makemigrations
When making changes to your database models, you may need to generate Django's migration files by running python manage.py makemigrations.
There are several ways this can be added to your workflow:
Within Cloud Build
Changing migrate.yaml to include this command will generate the migrations before they're applied, but the changes will not be generated on your local machine.
On your local machine, against the Cloud SQL Database
You can run the Cloud SQL Auth Proxy on your local machine, connect to your deployed database, and run the one-off command. This runs commands similar to those in migrate.yaml, but on your local machine.
Once you have installed the Cloud SQL Auth Proxy, follow these steps:
# Create a virtualenv
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

# Copy the application settings to your local machine
gcloud secrets versions access latest --secret application_settings > temp_settings

# Run the Cloud SQL Auth Proxy
./cloud_sql_proxy --instances=${PROJECT_ID}:${REGION}:myinstance=tcp:5432

# In a new tab, run commands using these local settings and setting the proxy flag
USE_CLOUD_SQL_AUTH_PROXY=true APPLICATION_SETTINGS=$(cat temp_settings) python manage.py makemigrations

# Disable and remove your virtualenv when complete
deactivate
rm -rf venv
On your local machine, against an SQLite database
You can override the database settings and use a local SQLite file as a local database to generate the migration files. This may not work for all changes given the differences between PostgreSQL and SQLite databases.
To apply this method:
# Create a virtualenv
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

# Copy the application settings to your local machine
gcloud secrets versions access latest --secret application_settings > temp_settings

# Edit the DATABASE_URL setting in temp_settings to use a local sqlite file.
# For example: DATABASE_URL=sqlite:////tmp/my-tmp-sqlite.db

# Run commands using these local settings
APPLICATION_SETTINGS=$(cat temp_settings) python manage.py makemigrations

# Disable and remove your virtualenv when complete
deactivate
rm -rf venv
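For reference, a DATABASE_URL of sqlite:////tmp/my-tmp-sqlite.db resolves to a standard Django DATABASES setting along these lines — a sketch of what env.db() is expected to produce, based on dj-database-url conventions, not code from this project:

```python
# What the SQLite DATABASE_URL above is expected to resolve to (assumption
# based on django-environ/dj-database-url conventions).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "/tmp/my-tmp-sqlite.db",
    }
}
```

Because the engine changes from PostgreSQL to SQLite, some model field behaviors differ, which is why this approach may not work for all migrations.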
Applying migrations
Once your migrations have been generated, you can apply them by building your application container image and running the Cloud Build migration command, as specified above.
11. Congratulations!
You have just deployed a complex project to Cloud Run!
- Cloud Run automatically and horizontally scales your container image to handle the received requests, then scales down when demand decreases. You only pay for the CPU, memory, and networking consumed during request handling.
- Cloud SQL allows you to provision a managed PostgreSQL instance that is maintained automatically for you, and integrates natively into many Google Cloud systems.
- Cloud Storage provides object storage that Django can access seamlessly through django-storages.
- Secret Manager allows you to store secrets, and have them accessible by certain parts of Google Cloud and not others.
Clean up
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:
- In the Cloud Console, go to the Manage resources page.
- In the project list, select your project then click Delete.
- In the dialog, type the project ID and then click Shut down to delete the project.
Learn more
- Django on Cloud Run: https://cloud.google.com/python/django/run
- Hello Cloud Run with Python: https://codelabs.developers.google.com/codelabs/cloud-run-hello-python3
- Python on Google Cloud: https://cloud.google.com/python
- Google Cloud Python client: https://github.com/googleapis/google-cloud-python