In modern software development, an application that is highly available and can quickly scale to meet demand is highly desirable. In this tutorial you will learn how to use Google Cloud SQL and Google Kubernetes Engine to give a simple application these traits.

What is Cloud SQL?

Cloud SQL is a fully-managed database service that makes it easy to set up, maintain, and administer your relational PostgreSQL and MySQL databases in the cloud. Cloud SQL offers high performance, vertical scalability, and convenience. Hosted on Google Cloud Platform, Cloud SQL provides a database infrastructure for applications running anywhere.

What is Google Kubernetes Engine (GKE)?

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.

Google Kubernetes Engine (GKE) is a managed environment for deploying containerized applications. It brings our latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate your time to market.

Why a meme generator?

Memes are everywhere on the web. The application you will deploy is a tiny Python app that generates custom memes from the available templates, and you can also upload your own template.

What will you build?

The meme generator will look like the image below:

At the end of this codelab you will know how to:

Create a Cloud SQL PostgreSQL instance with High Availability
Connect an application to a Cloud SQL instance using the Cloud SQL Proxy
Containerize an application and store the image in the Google Container Registry
Deploy the application to Google Kubernetes Engine

Your Project ID

Throughout this tutorial you will need your Project ID on multiple occasions. To find it, open the Google Cloud Dashboard:

Copy the ID and save it for later use.
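If you prefer the command line, gcloud can also print the currently configured project from any shell:

gcloud config get-value project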

Activating APIs

To proceed, it is necessary to enable some APIs in the console. To enable an API, use the following instructions:

In the Products & services menu on the left, click on APIs and Services > Library.

Type in the name of the API you are searching for in the search bar:

You may see several results. Click the API you would like to enable, and then click the Enable button (if the button says "Manage", the API is already enabled). After a few minutes, a page will appear showing the API's usage. You can use the sidebar on the left to go back to the library and activate more APIs.

You will need to make sure that the following APIs are enabled for this lab:

Cloud SQL Admin API
Kubernetes Engine API
Container Registry API
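If you would rather skip the console, the same APIs can be enabled in one step from the command line (the service names below are the standard identifiers for the three APIs this lab uses):

gcloud services enable sqladmin.googleapis.com \
container.googleapis.com containerregistry.googleapis.com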

Opening the Shell Environment

The Google Cloud Shell is a shell environment that is accessible to all Google Cloud users. This environment is already equipped with several useful utilities such as git, python, gcloud, and kubectl.

To access your Cloud Shell, log into the Google Cloud Console and click on the Shell Console button in the top right corner of the screen:

If you see a pop-up, click "START CLOUD SHELL" and wait a few minutes for your environment to be created.

Editing Files from Google Cloud Shell

Throughout this Codelab, you will need to edit several files.

If you are comfortable with CLI interactions, the Cloud Shell environment comes equipped with nano, vi, and vim.

If you prefer to use a graphical interface, the Cloud Shell also has a built-in editor. In the top right corner of the shell window, click the icon shaped like a pencil. This will open the editor. Clicking on a file will allow you to modify it and save changes.

Creating a Cloud SQL Instance

Inside your Cloud Shell, use the following command to create a PostgreSQL Cloud SQL instance.

gcloud sql instances create memegen-db --gce-zone us-central1-f \
--database-version POSTGRES_9_6 --memory 4GiB --cpu 2

There is a lot going on with this command, so let's examine it:

The base command is gcloud sql instances create memegen-db, which creates a new Cloud SQL instance with the INSTANCE_ID memegen-db. If you wish, you can substitute a different instance name.

The --gce-zone flag specifies that the instance should be created in the us-central1-f zone.

The flag --database-version POSTGRES_9_6 specifies that you want the instance to run PostgreSQL. The --memory and --cpu flags set the instance's memory and number of vCPUs.

Once your instance has been created, the command-line interface will print a confirmation.

Next, it's time to create the database where the application will store its data. Run the following gcloud command, replacing [DATABASE_NAME] with a name such as memegen and [INSTANCE_ID] with your instance name (memegen-db in this case):

gcloud sql databases create [DATABASE_NAME] --instance=[INSTANCE_ID]
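You can confirm the database was created by listing the databases on your instance:

gcloud sql databases list --instance=[INSTANCE_ID]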


Finally, we need to set the password for the postgres user. We will use this user to connect our application to the database:

gcloud sql users set-password postgres \
   --instance [INSTANCE_ID] --password [PASSWORD]

High Availability

A Cloud SQL instance configured for high availability is called a regional instance. A regional instance is located in two zones within the configured region; if it is unable to serve data from its primary zone, it fails over and continues to serve data from its secondary zone.

If an instance configured for high availability experiences an outage or becomes unresponsive, Cloud SQL automatically switches to serving data from the secondary zone. This is called a failover.

When is failover triggered?

Failover is triggered when one of the following scenarios occurs:

The zone where the primary instance is located experiences an outage.
The primary instance is unresponsive for approximately 60 seconds.

In either case, the instance must be in a normal operating state (not stopped or undergoing maintenance). Failover can also be started manually.

Activating High Availability

You can configure an instance for high availability when you create the instance, or you can enable high availability on an existing instance.

Click on the "SQL" menu item from the menu. A list with all your instances will be available. Notice that the column "High Availability" has a link written Add. Click on this link for your freshly created instance:

A popup will show up for you to confirm your action, click on "Enable". This step may take several minutes to run. You can refresh the page and see that High Availability is being applied to your instance.

You can also enable High Availability at instance creation time by expanding "Show configuration options":

Then on the "Configuration options" section, click on "Enable auto backups and High Availability" and change the "Availability" option from "Single zone" to "High availability (regional)":

Forcing failover to secondary zone

When you force a failover, the instance fails over to its secondary zone and is unavailable to serve data for a few minutes. You can trigger the failover in either of two ways.

Using the Google Cloud Shell

gcloud sql instances failover [INSTANCE_ID]
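The failover runs as an operation on the instance, so you can follow its progress from the shell as well:

gcloud sql operations list --instance [INSTANCE_ID] --limit 5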

Using the Google Cloud Console

Click on your instance to display its details. At the top right there is a "Failover" option. Click on it.

A popup will show up asking to confirm the failover by typing the instance name. Type it and click on "Trigger Failover". A warning message will appear on the top of the screen:

Connecting with the Cloud SQL Proxy

The Cloud SQL Proxy is a safe and secure way to connect to your Cloud SQL instance, no matter your location. The proxy listens on a local port on your machine and acts as an application endpoint for connections to your Cloud SQL instance. In this section, we will set up a connection to your database with the Cloud SQL Proxy and verify that your app can connect to it successfully.

Install the Proxy

Download the Cloud SQL Proxy and make it executable with the following commands:

wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy

The first command downloads the cloud_sql_proxy binary, and the second makes it executable.
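You can quickly confirm the download worked by printing the proxy's version:

./cloud_sql_proxy -version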

Create Proxy Credentials

In order to connect to your Cloud SQL instance, the proxy needs to be provided with some kind of credentials. The best way to do this is by creating a service account and granting it permission to connect to your Cloud SQL instance. Service accounts are an easy and secure way to authenticate your applications.

A service account is a special type of Google account that belongs to your project. Instead of using an individual user's credentials, a service account can be used to authenticate your application. Service accounts should have only the required permissions; this limits the damage caused if an account is compromised. You can create multiple private keys per service account, which can be used for authentication when connecting to Google Cloud. You can find more information about service accounts in the documentation.

Creating a Service Account for the Proxy

Create a new service account with the following command:

gcloud iam service-accounts create proxy-user --display-name "proxy-user"

Verify the email of the service account, which will be used in the following steps:

gcloud iam service-accounts list

Next, grant your service account the Cloud SQL Client role. This will allow the account to connect to your Cloud SQL instance through the proxy:

gcloud projects add-iam-policy-binding [PROJECT_ID] --member \
serviceAccount:[SERVICE_ACCOUNT_EMAIL] --role roles/cloudsql.client

Finally, create a file called key.json that will be used to authenticate with your service account:

gcloud iam service-accounts keys create key.json --iam-account [SERVICE_ACCOUNT_EMAIL]

Start Cloud SQL Proxy

For this step you will need the INSTANCE_CONNECTION_NAME, which you can get in two ways. One is to run the following command in the Cloud Shell:

gcloud sql instances describe memegen-db | grep connectionName

The other way is to open your instance's details page and look at the "Connect to this instance" section.
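If you want to capture just the value from the CLI (for example, to paste into later commands), gcloud's --format flag can extract it directly:

gcloud sql instances describe memegen-db --format 'value(connectionName)'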

In the previous step you created a service account; now you can use it to connect the proxy to your Cloud SQL instance:

./cloud_sql_proxy -instances=[INSTANCE_CONNECTION_NAME]=tcp:5432 -credential_file=key.json &

The proxy will run in the background until the process is killed.
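If you want to verify the tunnel before starting the app, Cloud Shell comes with the psql client installed; connect through the proxy's local port using the postgres user and the database you created earlier:

psql "host=127.0.0.1 port=5432 user=postgres dbname=[DATABASE_NAME]"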

Run your App Locally

Inside your Google Cloud Shell, start by cloning the application:

git clone https://github.com/GoogleCloudPlatform/gmemegen.git

Next, change directory into the project you just cloned:

cd gmemegen

Next, we need to set up a virtual environment for our application. A virtual environment allows us to safely install the app's requirements without affecting any system files. First, set up and activate a new virtual environment with the following commands:

virtualenv -p /usr/bin/python3 env
source env/bin/activate

Next, install the requirements while your virtualenv is active:

pip install -r app/requirements.txt

Start the application with the following command, making sure to replace DB_USER and DB_PASS with your database user and password:

python app/main.py --db-user [DB_USER] --db-pass [DB_PASS]


The application should now be running on 127.0.0.1:8080 of your Cloud Shell. Click the "Web Preview" button at the top right of your Cloud Shell window to view the application as if it were running on your own machine.

A new tab should open up and you will see this:
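If the preview does not load, a quick sanity check is to hit the port directly from a second Cloud Shell session:

curl http://127.0.0.1:8080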

Press Ctrl+C when you are finished to stop the application, but leave the proxy running in the background for the next step.

Containerizing the Application

Next, we need to place our application inside a container. Containers allow us to deploy applications in isolated, reproducible environments that can then be managed by Kubernetes.

Build your Container

To build the container image, use the following command from inside the folder containing the Dockerfile:

docker build . -t gmemegen


The base command is "docker build .", which builds a container image from the Dockerfile in the specified folder. The -t flag tags the built container image with the gmemegen tag.

You can run an instance of your containerized application with the following command. Before running it, make sure to replace [DATABASE_USER], [DATABASE_PASSWORD], and [DATABASE_NAME] with your database username, password, and database name, respectively:

docker run --net="host" -d --rm --name runtime \
-e "DB_USER=[DATABASE_USER]" -e "DB_PASS=[DATABASE_PASSWORD]" \
-e "DB_NAME=[DATABASE_NAME]" gmemegen

The base command is docker run gmemegen, which creates a container based on the image gmemegen.

The --net="host" flag signals to use the host's network interface, which allows the container to connect to the Cloud SQL proxy. The -e flag allows for environment variables to be run inside the container, in this case it's passed DB_USER and DB_PASS.

The -d flag runs the container in 'detached' mode, letting it run in the background, while the --rm flag removes the container once it is stopped.

Finally, the --name flag names the container runtime, which gives us a convenient handle to stop it later.

Use the Web Preview to verify the containerized version of your application is still working.
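If the preview fails, the container's logs are a good first place to look:

docker logs runtime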

When finished, stop both the container runtime and the Cloud SQL Proxy with the following commands:

docker stop runtime
killall cloud_sql_proxy

Upload your Container to the Google Container Registry

The Google Container Registry (also known as gcr.io) is a fast and secure way to store your container images. By uploading your container image to gcr.io, you will be able to access it from your Google Kubernetes Engine deployments.

Before accessing the Google Container Registry, we need to configure docker to connect to GCR with our credentials:

docker-credential-gcr configure-docker

Next, re-tag the image with the correct gcr.io URL:

docker tag gmemegen gcr.io/[PROJECT_ID]/gmemegen

Finally, push the Docker image to GCR:

docker push gcr.io/[PROJECT_ID]/gmemegen
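You can also verify the push from the command line:

gcloud container images list --repository gcr.io/[PROJECT_ID]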

You can check your images in the Google Container Registry from the dashboard menu: "Tools > Container Registry > Images":

You should see your image as it is shown below:

Understanding Kubernetes Terminology

In Kubernetes, there are a number of concepts and terms you need to understand to make use of the technology.

A node is an independent machine, either a VM or a bare-metal server. In GKE, nodes are Google Compute Engine instances.

A pod is a group of one or more containers with shared storage and network, and a specification for how to run the containers. For this lab, we will create a pod that contains two containers: one for our application, and one for the Cloud SQL Proxy.

A deployment is a controller that manages which pods are running on which nodes. A deployment describes the desired state of the pods, and Kubernetes will create pods and replicas to achieve this state.

Create a new Kubernetes Cluster

Next, we need to create a Kubernetes cluster. In Kubernetes, a cluster is a collection of nodes that run your workloads. In addition, Google Kubernetes Engine (GKE) provides out-of-the-box auto-scaling if you need to resize your cluster on demand.

Use this gcloud command to create a new cluster:

gcloud container clusters create my-cluster --zone us-central1-f \
--machine-type=n1-standard-2  --enable-autorepair \
--enable-autoscaling --max-nodes=10 --min-nodes=1

The base command is gcloud container clusters create my-cluster, which creates a new cluster named 'my-cluster'.

The --zone flag specifies that we want our cluster to run in the us-central1-f zone. For the best performance, this should be the same zone as our Cloud SQL instance (or as close as possible). The --machine-type flag selects n1-standard-2 machines for the cluster's nodes.

The --enable-autorepair flag enables GKE's Node Auto-Repair feature, which periodically checks the health of the nodes running in your cluster. If it detects a node in poor health, it drains the node of its workloads and recreates it.

Finally, we enable the Cluster Autoscaler feature with the --enable-autoscaling flag. This allows GKE to automatically resize your cluster based on the current workload. The --max-nodes and --min-nodes flags set limits on how many nodes the cluster can contain.

Connecting with Kubectl

Kubernetes clusters are controlled with the kubectl command. Using gcloud, it is easy to connect kubectl to a GKE cluster.

Verify that your cluster was successfully created with the following command:

gcloud container clusters list


If created successfully, it will be listed in the output. Use the following command to authenticate kubectl:

gcloud container clusters get-credentials my-cluster \
--zone us-central1-f
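Once authenticated, a quick way to confirm that kubectl can reach your cluster is to list its nodes:

kubectl get nodes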

Create Kubernetes Secrets

In order for our containers to access the proxy connection and the database, we need to create secrets that Kubernetes will supply to them. By creating secrets instead of hardcoding values, credentials can be exchanged and rotated without having to rebuild the container.

We can turn the key.json file into a Kubernetes secret, which will then be used to connect the proxy from within a container:

kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=../key.json

We can also create secrets directly from literal values. With the following command, we will create a secret containing the credentials the application needs to log into the database. Replace [DB_USER], [DB_PASS], and [DB_NAME] with your database username, password, and database name:

kubectl create secret generic cloudsql-db-credentials \
    --from-literal=username=[DB_USER] \
    --from-literal=password=[DB_PASS] \
    --from-literal=dbname=[DB_NAME]
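You can verify that both secrets were stored with:

kubectl get secrets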

Describe your Deployment

Next, we need to create a deployment that describes how we want our pods to run. For our use case, we want each pod to consist of two containers: gmemegen and cloudsql-proxy. The application will run in the gmemegen container and will connect to the Cloud SQL instance through what is called a 'sidecar' container.

Open gmemegen_deployment.yaml and look under the spec section; this is where we describe the layout of our deployment. There are two important subsections: containers and volumes.

Main Container

Our first step is to describe our main container. The first container is called gmemegen, and is described in the containers section as follows:

        - name: gmemegen
          image: gcr.io/[PROJECT_ID]/gmemegen
          ports:
            - containerPort: 8080
          # Set env variables used for Postgres Connection
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: dbname

As you can see, it creates a container called gmemegen from the image located at gcr.io/[PROJECT_ID]/gmemegen, listening on port 8080. You will need to replace [PROJECT_ID] with the correct name of your project.

We also pass along the cloudsql-db-credentials secret created earlier by mapping it into the DB_USER, DB_PASS, and DB_NAME environment variables. Our application has been coded to use these variables when creating the connection to our database.

Sidecar Container

Next, we want to describe the sidecar container. This container will contain our proxy, and allow the main container to connect to the Cloud SQL instance. The second container should be described like this:

        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: my-secrets-volume
              mountPath: /secrets/cloudsql
              readOnly: true

If you inspect the description, you will see the command argument is almost the same as the command we used to start the proxy earlier. You will need to update <INSTANCE_CONNECTION_NAME> to point to your Cloud SQL instance.

In Kubernetes, a volume is persistent storage. To make use of something in a volume, you mount it to a directory in a container. In this case, we have mounted the my-secrets-volume volume at /secrets/cloudsql. The proxy looks in this directory for its credential file.

In order to mount a volume, you must also describe it and its contents. This is done in the volumes section, which precedes containers. This section should look like this:

     volumes:
        - name: my-secrets-volume
          secret:
            secretName: cloudsql-instance-credentials


This section turns our cloudsql-instance-credentials secret into a volume called my-secrets-volume. This is the volume mounted into our sidecar proxy, as described above.

Create your Deployment

Now that we've described our deployment, it's very easy to deploy it onto the cluster:

kubectl create -f gmemegen_deployment.yaml

After a few minutes, you can get the status of your pod with the following command:

kubectl get pods

If everything is set up correctly, you should see 2/2 in the READY column, indicating that both containers inside your pod are running correctly.
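If the pod is not healthy, you can inspect each container individually; the container names below are the ones defined in the deployment file:

kubectl logs [POD_NAME] -c gmemegen
kubectl logs [POD_NAME] -c cloudsql-proxy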

Next, you need to create a service that exposes your deployment to the outside world. We want to forward port 80 to our container's port 8080, so we create a LoadBalancer with the following command:

kubectl expose deployment gmemegen --type "LoadBalancer" --port 80 --target-port 8080

After a few minutes, you can describe the service to get the LoadBalancer Ingress:

kubectl describe services gmemegen
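Alternatively, you can watch the service until the external IP is assigned:

kubectl get service gmemegen --watch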


Your service should be ready at http://[LoadBalancer Ingress]:80. Navigate to your URL and make some memes!

You have successfully launched an application attached to a PostgreSQL Server with High Availability!