1. Overview
This series of codelabs (self-paced, hands-on tutorials) aims to help Google App Engine (standard environment) developers modernize their apps. The codelabs guide users through a series of migrations, primarily moving away from legacy bundled services. The net effect is to make apps more portable, giving developers more options and flexibility. Some of these options include migrating to a standalone Cloud service, updating apps to the latest App Engine runtimes, and switching to sister serverless platforms like Cloud Functions or Cloud Run, or to other compute products.
The purpose of this codelab is to show Python 2 App Engine developers how to migrate from App Engine Memcache to Cloud Memorystore (for Redis). There is also an implicit migration from App Engine ndb to Cloud NDB, but that's primarily covered in the Module 2 codelab; check it out for more step-by-step information.
You'll learn how to
- Set up a Cloud Memorystore instance (from the Google Cloud Console or the gcloud tool)
- Set up a Cloud Serverless VPC connector (from the Cloud Console or the gcloud tool)
- Migrate from App Engine Memcache to Cloud Memorystore
- Implement caching with Cloud Memorystore in a sample app
- Migrate from App Engine ndb to Cloud NDB
What you'll need
- A Google Cloud project with an active billing account (this is not a free codelab)
- Basic Python skills
- Working knowledge of common Linux commands
- Basic knowledge of developing and deploying App Engine apps
- A working Module 12 App Engine sample app. Complete the Module 12 codelab (recommended) or copy and use the Module 12 Python 2 app from the GitHub repo. You can also start from the Python 3 version, but this codelab focuses on our early adopters.
2. Background
This codelab demonstrates how to migrate a sample app from App Engine Memcache (and NDB) to Cloud Memorystore (and Cloud NDB). This process involves replacing dependencies on App Engine bundled services, making your apps more portable. You can choose to either stay on App Engine or consider moving to any of the alternatives described earlier.
This migration requires more effort compared to the others in this series. The recommended replacement for App Engine Memcache is Cloud Memorystore, a fully-managed cloud-based caching service. Memorystore supports a pair of popular open source caching engines, Redis and Memcached. This migration module uses Cloud Memorystore for Redis. You can learn more in the Memorystore and Redis overview.
Because Memorystore requires a running server, Cloud VPC is also needed. Specifically, a Serverless VPC connector must be created so the App Engine app can connect to the Memorystore instance via its private IP address. When you've completed this exercise, you will have updated the app so that it behaves as before but uses Cloud Memorystore as its caching service, replacing App Engine Memcache.
This tutorial begins with the Module 12 sample app in Python 2, followed by an additional, optional, minor upgrade to Python 3. If you're already familiar with accessing App Engine bundled services from Python 3 via the Python 3 App Engine SDK, you can start with the Python 3 version of the Module 12 sample app instead. Doing so entails removing use of the SDK, since Memorystore is not an App Engine bundled service. Learning how to use the Python 3 App Engine SDK is out of scope for this tutorial.
This tutorial features the following key steps:
- Setup/prework
- Set up caching services
- Update configuration files
- Update main application
3. Setup/prework
Prepare Cloud project
We recommend reusing the same project you used to complete the Module 12 codelab. Alternatively, you can create a brand new project or reuse another existing project. Every codelab in this series has a "START" (the baseline code to start from) and a "FINISH" (the migrated app). The FINISH code is provided so you can compare your solution with ours in case you have issues, and you can always roll back to START and start over if something goes wrong. These checkpoints are designed to ensure you're successful in learning how to perform the migrations.
Whichever Cloud project you use, be sure it has an active billing account. Also ensure that App Engine is enabled. Review and be sure you understand the general cost implications of doing these tutorials. Unlike others in this series, however, this codelab uses Cloud resources that do not have a free tier, so some costs will be incurred to complete the exercise. More specific cost information is provided along the way, together with recommendations for reduced usage, including instructions at the end on releasing resources to minimize billing charges.
Get baseline sample app
From the baseline Module 12 code we're STARTing from, this codelab walks you through the migration step-by-step. When complete, you'll arrive at a working Module 13 app closely resembling the code in one of the FINISH folders. Here are those resources:
- START: Module 12 Python 2 (mod12) or Python 3 (mod12b) app
- FINISH: Module 13 Python 2 (mod13a) or Python 3 (mod13b) app
- Entire migration repo (clone or download ZIP)
The START folder should contain the following files:
$ ls
README.md    app.yaml    main.py    requirements.txt    templates
If you're starting from the Python 2 version, there will also be an appengine_config.py file and possibly a lib folder if you completed the Module 12 codelab.
(Re)Deploy Module 12 app
Your remaining prework steps:
- Re-familiarize yourself with the gcloud command-line tool (if necessary)
- (Re)deploy the Module 12 code to App Engine (if necessary)
Python 2 users should delete and re-install the lib folder with these commands:
rm -rf ./lib; pip install -t lib -r requirements.txt
Now everyone (Python 2 and 3 users) should upload the code to App Engine with this command:
gcloud app deploy
Once successfully deployed, confirm the app looks and functions just like the app in Module 12, a web app that tracks visits, caching them for the same user for an hour:
Because the most recent visits are cached, page refreshes should load fairly quickly.
4. Set up caching services
Cloud Memorystore is not serverless. An instance is required; in this case one running Redis. Unlike Memcache, Memorystore is a standalone Cloud product and does not have a free tier, so be sure to check Memorystore for Redis pricing information before proceeding. To minimize costs for this exercise, we recommend the least amount of resources to operate: a Basic service tier and a 1 GB capacity.
In addition to a Memorystore instance, a Serverless VPC connector must be created so App Engine can connect to that instance. To minimize VPC costs, opt for the smallest instance type (f1-micro) and the fewest number of instances to request (we suggest a minimum of 2 and a maximum of 3). Also check out the VPC pricing information page.
We repeat these recommendations for reducing costs as we lead you through creating each required resource. Furthermore, when you create Memorystore and VPC resources in the Cloud Console, you'll see the pricing calculator for each product in the upper-right corner, giving you a monthly cost estimate (see illustration below). Those values automatically adjust if you change your options. This is roughly what you should expect to see:
Both resources are required, and it doesn't matter which one you create first. If you create the Memorystore instance first, your App Engine app can't reach it without a serverless VPC connector. Likewise, if you make the VPC connector first, there's nothing on that VPC network for your App Engine app to talk to. This tutorial has you creating the Memorystore instance first followed by the VPC connector.
Once both resources are online, you are going to add the relevant information to app.yaml so your app can access the cache. You can also reference the Python 2 or Python 3 guides in the official documentation. The data caching guide on the Cloud NDB migration page (Python 2 or Python 3) is also worth referencing.
Create a Cloud Memorystore instance
Because Cloud Memorystore has no free tier, we recommend allocating the least amount of resources to complete the codelab. You can keep costs to a minimum by using these settings:
- Select the lowest service tier: Basic (console default: "Standard", gcloud default: "Basic").
- Choose the least amount of storage: 1 GB (console default: 16 GB, gcloud default: 1 GB).
- Typically the newest versions of any software require the greatest amount of resources, but selecting the oldest version is probably not recommended either. The second-latest version is currently Redis version 5.0 (console default: 6.x).
With those settings in mind, the next section will lead you through creating the instance from the Cloud Console. If you prefer to do it from the command-line, skip ahead.
From the Cloud Console
Go to the Cloud Memorystore page in the Cloud Console (you may be prompted for billing information). If you haven't enabled Memorystore yet, you will be prompted to do so:
Once you enable it (and possibly along with billing), you'll arrive at the Memorystore dashboard. This is where you can see all instances created in your project. The project shown below doesn't have any, so that's why you see, "No rows to display". To create a Memorystore instance, click "Create instance" at the top:
This page features a form to complete with your desired settings to create the Memorystore instance:
To keep costs down for the sample app, follow the recommendations covered earlier. After you've made your selections, click Create. The creation process takes several minutes. When it finishes, copy the instance's IP address and port number so that you can add them to app.yaml.
From command-line
While it is visually informative to create Memorystore instances from the Cloud Console, some prefer the command-line. Be sure to have gcloud installed and initialized before moving ahead.
As with the Cloud Console, Cloud Memorystore for Redis must be enabled. Issue the gcloud services enable redis.googleapis.com command and wait for it to complete, like this example:
$ gcloud services enable redis.googleapis.com
Operation "operations/acat.p2-aaa-bbb-ccc-ddd-eee-ffffff" finished successfully.
If the service has already been enabled, running the command (again) has no (negative) side effects. With the service enabled, let's create a Memorystore instance. That command looks like this:
gcloud redis instances create NAME --redis-version VERSION \
    --region REGION --project PROJECT_ID
Choose a name for your Memorystore instance; this lab uses "demo-ms" as the name along with a project ID of "my-project". This sample app's region is us-central1 (same as us-central), but you may use one closer to you if latency is a concern. Both the Memorystore instance and VPC connector must be created in the same region as the App Engine app. You can select any Redis version you prefer, but we are using version 5 as recommended earlier. Given those settings, this is the command you'd issue (along with associated output):
$ gcloud redis instances create demo-ms --region us-central1 \
    --redis-version redis_5_0 --project my-project
Create request issued for: [demo-ms]
Waiting for operation [projects/my-project/locations/us-central1/operations/operation-xxxx] to complete...done.
Created instance [demo-ms].
Unlike the Cloud Console defaults, gcloud defaults to minimal resources, which is why neither the service tier nor the amount of storage needed to be specified in that command. Creating a Memorystore instance takes several minutes, and when it's done, note the instance's IP address and port number, as they will be added to app.yaml soon.
Confirm instance created
From Cloud Console or command-line
Whether you created your instance from the Cloud Console or command-line, you can confirm it's available and ready for use with this command:
gcloud redis instances list --region REGION
Here's the command for checking instances in region us-central1 along with the expected output showing the instance we just created:
$ gcloud redis instances list --region us-central1
INSTANCE_NAME  VERSION    REGION       TIER   SIZE_GB  HOST         PORT  NETWORK  RESERVED_IP     STATUS  CREATE_TIME
demo-ms        REDIS_5_0  us-central1  BASIC  1        10.aa.bb.cc  6379  default  10.aa.bb.dd/29  READY   2022-01-28T09:24:45
When asked for the instance information or to configure your app, be sure to use HOST and PORT (not RESERVED_IP). The Cloud Memorystore dashboard in the Cloud Console should now display that instance:
From Compute Engine virtual machine
If you have a Compute Engine virtual machine (VM), you can also send direct commands to your Memorystore instance from a VM to confirm it's working. Be aware that a VM may have associated costs.
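To illustrate, here is a minimal sketch (not part of the original codelab) of checking connectivity with the Python redis client from a VM on the same authorized VPC network; the host and port below are placeholders for your instance's HOST and PORT values:
import redis

# Placeholder values: substitute your Memorystore instance's HOST and PORT
# (as reported by "gcloud redis instances list").
client = redis.Redis(host='10.0.0.3', port=6379)

print(client.ping())              # True if the instance is reachable
client.set('healthcheck', 'ok')   # write a test key
print(client.get('healthcheck'))  # b'ok'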
Create serverless VPC connector
As with Cloud Memorystore, you can create the serverless Cloud VPC connector in the Cloud Console or on the command-line. Similarly, Cloud VPC has no free tier, so we recommend allocating the least amount of resources needed to complete the codelab to keep costs to a minimum, which can be achieved with these settings:
- Select the lowest maximum number of instances: 3 (console & gcloud default: 10)
- Choose the lowest-cost machine type: f1-micro (console default: e2-micro, no gcloud default)
The next section will lead you through creating the connector from the Cloud Console using the above Cloud VPC settings. If you prefer to do it from the command-line, skip to the next section.
From Cloud Console
Go to the Cloud Networking "Serverless VPC access" page in the Cloud Console (you may be prompted for billing information). If you haven't enabled VPC yet, you will be prompted to do so:
Once you enable it (and possibly along with billing), you'll arrive at the Serverless VPC access dashboard. The dashboard displays all of the VPC connectors created in your project. This project doesn't have any, so that's why it says, "No rows to display". To create a serverless VPC connector, click "Create Connector" at the top:
Now you are brought to a form to complete with your desired settings to create the VPC connector:
Choose the appropriate settings for your own applications. For our sample app with minimal needs, we want to keep costs down, so follow the recommendations covered earlier. Once you've made your selections, click "Create". Like creating the Memorystore instance, requisitioning a VPC connector will also take a few minutes to complete.
From command-line
As with creating Memorystore instances, you must enable Cloud VPC access first by issuing this command and observing the expected output:
$ gcloud services enable vpcaccess.googleapis.com
Operation "operations/acf.p2-aaa-bbb-ccc-ddd-eee-ffffff" finished successfully.
With Cloud VPC enabled, a VPC connector is created with a command that looks like this:
gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --range 10.8.0.0/28 --region REGION --project PROJECT_ID
You need to pick a name for your connector as well as an unused /28 CIDR block for its starting IP address. Assuming our project is my-project, a VPC connector named demo-vpc, min instances 2 (the default) and max instances 3, f1-micro instances, region us-central1, and an IPv4 CIDR block of 10.8.0.0/28 (recommended in the Cloud Console), this is the command you'd execute now along with its expected output:
$ gcloud compute networks vpc-access connectors create demo-vpc \
    --max-instances 3 --range 10.8.0.0/28 --machine-type f1-micro \
    --region us-central1 --project my-project
Create request issued for: [demo-vpc]
Waiting for operation [projects/my-project/locations/us-central1/operations/xxx] to complete...done.
Created connector [demo-vpc].
The values you don't see above, such as min instances of 2, a network named default, etc., are the defaults. Creating a VPC connector also takes several minutes to complete.
Confirm connector created
Once the process has completed, issue the following gcloud command (assuming region us-central1) to confirm that the connector has been created and is ready for use:
$ gcloud compute networks vpc-access connectors list --region us-central1
CONNECTOR_ID  REGION       NETWORK  IP_CIDR_RANGE  SUBNET  SUBNET_PROJECT  MIN_THROUGHPUT  MAX_THROUGHPUT  STATE
demo-vpc      us-central1  default  10.8.0.0/28                            200             300             READY
Similarly, the Cloud Console's Serverless VPC connector dashboard should now display the connector you just created:
You'll be adding the Cloud project ID, the VPC connector name, and the region to app.yaml.
Now that we've created the caching resources needed for our app, let's move onto the changes required in our application, starting with the configuration files. The work you've accomplished here provides the Cloud Memorystore instance's IP address and port number as well as the VPC connector information for the configuration changes ahead in the next section.
5. Update configuration files
The first step is to make all necessary updates to the configuration files. The main goal of this codelab is to help Python 2 users migrate; each section below also follows up with information on further porting to Python 3.
Update requirements.txt
In the Module 12 requirements.txt, we only had Flask. Now let's add packages to support Cloud Memorystore as well as Cloud NDB. Since we're using Cloud Memorystore for Redis, it suffices to use the standard Redis client for Python (redis). (There is no Cloud Memorystore client library per se.)
- Add google-cloud-ndb
- Add redis
The following diagram illustrates the changes you should make to requirements.txt:
We recommend the latest versions of each library, so specific version numbers have been omitted. If any incompatibilities occur, version numbers will be added to requirements.txt in the repo as necessary.
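For reference, since the Module 12 file contained only Flask, the updated requirements.txt should end up looking roughly like this (unpinned; the repo may add version numbers if incompatibilities arise):
flask
google-cloud-ndb
redis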
Simplify app.yaml
New sections to add
The Python 2 App Engine runtime requires specific third-party packages when using Cloud APIs like Cloud NDB, namely grpcio and setuptools. Python 2 users must list built-in libraries like these, along with an available version, in app.yaml. If you don't have a libraries section yet, create one and add both libraries like the following:
libraries:
- name: grpcio
  version: 1.0.0
- name: setuptools
  version: 36.6.0
When migrating your app, it may already have a libraries section. If it does, and either grpcio or setuptools is missing, just add them to your existing libraries section.
Next, our sample app needs the Cloud Memorystore instance and VPC connector information, so add the following two new sections to app.yaml regardless of which Python runtime you're using:
env_variables:
  REDIS_HOST: 'YOUR_REDIS_HOST'
  REDIS_PORT: 'YOUR_REDIS_PORT'

vpc_access_connector:
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
That's it as far as the required updates go. Your updated app.yaml should now look like this:
runtime: python27
threadsafe: yes
api_version: 1

handlers:
- url: /.*
  script: main.app

libraries:
- name: grpcio
  version: 1.0.0
- name: setuptools
  version: 36.6.0

env_variables:
  REDIS_HOST: 'YOUR_REDIS_HOST'
  REDIS_PORT: 'YOUR_REDIS_PORT'

vpc_access_connector:
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR
Below is a "before and after" illustrating the updates you should apply to app.yaml
:
*Python 3 differences
This section is optional and applies only if you're porting to Python 3. To do that, there are a number of changes to make to your Python 2 configuration. Skip this section if you're not upgrading at this time.
Neither threadsafe nor api_version is used by the Python 3 runtime, so delete both settings. The latest App Engine runtime supports neither built-in third-party libraries nor the copying of non-built-in libraries; the only requirement for third-party packages is to list them in requirements.txt. As a result, the entire libraries section of app.yaml can be deleted.
Next, the Python 3 runtime requires web frameworks that do their own routing, which is why we showed developers how to migrate from webapp2 to Flask in Module 1. As a result, all script handlers must be changed to auto. Since this app doesn't serve any static files, it's "pointless" to list handlers that are all auto, so the entire handlers section can be removed as well. Your new, abbreviated app.yaml tweaked for Python 3 should be shortened to look like this:
runtime: python39

env_variables:
  REDIS_HOST: 'YOUR_REDIS_HOST'
  REDIS_PORT: 'YOUR_REDIS_PORT'

vpc_access_connector:
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR
Summarizing the differences in app.yaml when porting to Python 3:
- Delete threadsafe and api_version settings
- Delete libraries section
- Delete handlers section (or just script handlers if your app serves static files)
Replace the values
The values in the new sections for Memorystore and the VPC connector are just placeholders. Replace those capitalized values (YOUR_REDIS_HOST, YOUR_REDIS_PORT, PROJECT_ID, REGION, CONNECTOR_NAME) with the values saved from when you created those resources earlier. With regards to your Memorystore instance, be sure to use HOST (not RESERVED_IP) and PORT. Here is a quick command-line way to get the HOST and PORT, assuming an instance name of demo-ms and a REGION of us-central1:
$ gcloud redis instances describe demo-ms --region us-central1 \
    --format "value(host,port)"
10.251.161.51 6379
If our example Redis instance IP address were 10.10.10.10 using port 6379 in our project my-project located in region us-central1 with a VPC connector name of demo-vpc, these sections in app.yaml would look like this:
env_variables:
  REDIS_HOST: '10.10.10.10'
  REDIS_PORT: '6379'

vpc_access_connector:
  name: projects/my-project/locations/us-central1/connectors/demo-vpc
Create or update appengine_config.py
Add support for built-in third-party libraries
Just like what we did with app.yaml earlier, we need to support grpcio and setuptools. In this case, we need to modify appengine_config.py to support built-in third-party libraries. If this seems familiar, it's because this was also required back in Module 2 when migrating from App Engine ndb to Cloud NDB. The exact change required is to add the lib folder to the setuptools.pkg_resources working set:
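If you don't already have this file from the Module 2 codelab, here is a minimal sketch of what the updated appengine_config.py typically looks like, assuming your third-party packages are installed in the lib folder:
import pkg_resources
from google.appengine.ext import vendor

# Folder containing the third-party libraries installed with "pip install -t lib".
PATH = 'lib'

# Add the folder to the app's vendored (copied) library path.
vendor.add(PATH)

# Add the same folder to the pkg_resources working set so its package
# distributions can be found.
pkg_resources.working_set.add_entry(PATH)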
*Python 3 differences
This section is optional and only if you're porting to Python 3. One of the welcome App Engine second-generation changes is that copying (sometimes called "vendoring") of (non-built-in) third-party packages and referencing built-in third-party packages in app.yaml are no longer necessary, meaning you can delete the entire appengine_config.py file.
6. Update application files
There is only one application file, main.py, so all changes in this section affect just that file. We've provided a pictorial representation of the changes we're going to make to migrate this application to Cloud Memorystore. It's for illustrative purposes only and not meant for you to analyze closely. All the work is in the changes we make to the code.
Let's tackle these one section at a time, starting at the top.
Update imports
The import section in main.py for Module 12 uses App Engine Memcache and App Engine ndb; here are those imports:
- BEFORE:
from flask import Flask, render_template, request
from google.appengine.api import memcache
from google.appengine.ext import ndb
Switching to Memorystore requires reading environment variables, meaning we need the Python os module as well as redis, the Python Redis client. Since Redis can't cache Python objects, we need to marshal the most recent visits list using pickle, so import that too. (One benefit of Memcache is that object serialization happens automatically, whereas Memorystore is a bit more "DIY.") Finally, upgrade from App Engine ndb to Cloud NDB by replacing google.appengine.ext.ndb with google.cloud.ndb. After these changes, your imports should now look like this:
- AFTER:
import os
import pickle
from flask import Flask, render_template, request
from google.cloud import ndb
import redis
Update initialization
Module 12 initialization consists of instantiating the Flask application object app and setting a constant for an hour's worth of caching:
- BEFORE:
app = Flask(__name__)
HOUR = 3600
Use of Cloud APIs requires a client, so instantiate a Cloud NDB client right after Flask. Next, get the IP address and port number for the Memorystore instance from the environment variables you set in app.yaml. Armed with that information, instantiate a Redis client. Here is what your code looks like after those updates:
- AFTER:
app = Flask(__name__)
ds_client = ndb.Client()
HOUR = 3600
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
REDIS_PORT = os.environ.get('REDIS_PORT', '6379')
REDIS = redis.Redis(host=REDIS_HOST, port=REDIS_PORT)
*Python 3 migration
This section is optional and applies only if you're starting from the Python 3 version of the Module 12 app. If so, there are several required changes related to imports and initialization.
First, because Memcache is an App Engine bundled service, its use in a Python 3 app requires the App Engine SDK, specifically wrapping the WSGI application (as well as other necessary configuration):
- BEFORE:
from flask import Flask, render_template, request
from google.appengine.api import memcache, wrap_wsgi_app
from google.appengine.ext import ndb
app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)
HOUR = 3600
Since we're migrating to Cloud Memorystore (not an App Engine bundled service like Memcache), the SDK usage must be removed. This is straightforward: simply delete the entire line that imports both memcache and wrap_wsgi_app, and also delete the line calling wrap_wsgi_app(). These updates leave this part of the app (actually, the entire app) identical to the Python 2 version.
- AFTER:
import os
import pickle
from flask import Flask, render_template, request
from google.cloud import ndb
import redis
app = Flask(__name__)
ds_client = ndb.Client()
HOUR = 3600
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
REDIS_PORT = os.environ.get('REDIS_PORT', '6379')
REDIS = redis.Redis(host=REDIS_HOST, port=REDIS_PORT)
Finally, remove use of the SDK from app.yaml (delete the line: app_engine_apis: true) and requirements.txt (delete the line: appengine-python-standard).
Migrate to Cloud Memorystore (and Cloud NDB)
Cloud NDB's data model is intended to be compatible with App Engine ndb's, meaning the definition of Visit objects stays the same. Mimicking the Module 2 migration to Cloud NDB, all Datastore calls in store_visit() and fetch_visits() are wrapped in a new with block (because use of the Cloud NDB context manager is required). Here are those calls before that change:
- BEFORE:
def store_visit(remote_addr, user_agent):
'create new Visit entity in Datastore'
Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
def fetch_visits(limit):
'get most recent visits'
return Visit.query().order(-Visit.timestamp).fetch(limit)
Add a with ds_client.context() block to both functions, and put the Datastore calls inside (and indented). In this case, no changes are necessary for the calls themselves:
- AFTER:
def store_visit(remote_addr, user_agent):
'create new Visit entity in Datastore'
with ds_client.context():
Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
def fetch_visits(limit):
'get most recent visits'
with ds_client.context():
return Visit.query().order(-Visit.timestamp).fetch(limit)
Next, let's look at the caching changes. Here is the main handler, root(), from Module 12:
- BEFORE:
@app.route('/')
def root():
'main application (GET) handler'
# check for (hour-)cached visits
ip_addr, usr_agt = request.remote_addr, request.user_agent
visitor = '{}: {}'.format(ip_addr, usr_agt)
visits = memcache.get('visits')
# register visit & run DB query if cache empty or new visitor
if not visits or visits[0].visitor != visitor:
store_visit(ip_addr, usr_agt)
visits = list(fetch_visits(10))
memcache.set('visits', visits, HOUR) # set() not add()
return render_template('index.html', visits=visits)
Redis has "get" and "set" calls, just like Memcache. All we do is swap the respective client libraries, right? Almost. As mentioned earlier, we can't cache a Python list with Redis (because it needs to be serialized first, something Memcache takes care of automatically), so in the set()
call, "pickle" the visits into a string with pickle.dumps()
. Similarly, when retrieving visits from the cache, you need to unpickle it with pickle.loads()
right after the get()
. Here is the main handler after implementing those changes:
- AFTER:
@app.route('/')
def root():
'main application (GET) handler'
# check for (hour-)cached visits
ip_addr, usr_agt = request.remote_addr, request.user_agent
visitor = '{}: {}'.format(ip_addr, usr_agt)
rsp = REDIS.get('visits')
visits = pickle.loads(rsp) if rsp else None
# register visit & run DB query if cache empty or new visitor
if not visits or visits[0].visitor != visitor:
store_visit(ip_addr, usr_agt)
visits = list(fetch_visits(10))
REDIS.set('visits', pickle.dumps(visits), ex=HOUR)
return render_template('index.html', visits=visits)
This concludes the changes required in main.py, converting the sample app's use of Memcache to Cloud Memorystore. What about the HTML template and porting to Python 3?
Update HTML template file and port to Python 3?
Surprise! There's nothing to do here as the application was designed to run on both Python 2 and 3 without any code changes or compatibility libraries. You'll find main.py identical across the mod13a (2.x) and mod13b (3.x) "FINISH" folders. The same goes for requirements.txt, aside from any differences in version numbers (if used). Because the user interface remains unchanged, there are no updates to templates/index.html either.
Everything necessary to run this app on Python 3 App Engine was completed earlier in configuration: unnecessary directives were removed from app.yaml, and both appengine_config.py and the lib folder were deleted as they're unused in Python 3.
7. Summary/Cleanup
Deploy application
The last check is always to deploy the sample app. Python 2 developers: delete and reinstall lib with the commands below. (If you have both Python 2 and 3 installed on your system, you may need to explicitly run pip2 instead.)
rm -rf ./lib
pip install -t lib -r requirements.txt
Both Python 2 and 3 developers should now deploy their apps with:
gcloud app deploy
As you merely rewired things under the hood for a completely different caching service, the app itself should operate identically to your Module 12 app:
This step completes the codelab. We invite you to compare your updated sample app to either of the Module 13 folders, mod13a (Python 2) or mod13b (Python 3).
Clean up and/or disable app
In this tutorial, you used these four Cloud products:
- App Engine
- Cloud Datastore
- Cloud Memorystore
- Cloud VPC
Below are directions for releasing these resources to avoid or minimize billing charges.
Shut down Memorystore instance and VPC connector
These are the products without a free tier, so you're incurring billing right now. If you don't shut down your Cloud project (see next section), you must delete both your Memorystore instance as well as the VPC connector to stop the billing. Similar to when you created these resources, you can also release them either from the Cloud Console or the command-line.
From Cloud Console
To delete the Memorystore instance, go back to the Memorystore dashboard and click on the instance ID:
Once on that instance's details page, click on "Delete" and confirm:
To delete the Serverless VPC connector, go to its dashboard and select the checkbox next to the connector you wish to delete, then click on "Delete" and confirm:
From command-line
The following pair of gcloud commands delete both the Memorystore instance and the VPC connector, respectively:
gcloud redis instances delete INSTANCE --region REGION
gcloud compute networks vpc-access connectors delete CONNECTOR --region REGION
If you haven't set your project ID with gcloud config set project, you may have to provide --project PROJECT_ID. If your Memorystore instance is called demo-ms and your VPC connector demo-vpc, and both are in region us-central1, issue the following pair of commands and confirm:
$ gcloud redis instances delete demo-ms --region us-central1
You are about to delete instance [demo-ms] in [us-central1].
Any associated data will be lost.

Do you want to continue (Y/n)?

Delete request issued for: [demo-ms]
Waiting for operation [projects/PROJECT/locations/REGION/operations/operation-aaaaa-bbbbb-ccccc-ddddd] to complete...done.
Deleted instance [demo-ms].
$
$ gcloud compute networks vpc-access connectors delete demo-vpc --region us-central1
You are about to delete connector [demo-vpc] in [us-central1].
Any associated data will be lost.

Do you want to continue (Y/n)?

Delete request issued for: [demo-vpc]
Waiting for operation [projects/PROJECT/locations/REGION/operations/aaaaa-bbbb-cccc-dddd-eeeee] to complete...done.
Deleted connector [demo-vpc].
Each request takes a few minutes to run. These steps are optional if you choose to shut down your entire Cloud project; however, you will still incur billing until the shutdown process has completed. More on project shutdown in the next section.
Disable App Engine app and reduce Datastore usage or shut down Cloud project
If you're not ready to go to the next tutorial yet or don't wish to process requests, disable your app to avoid incurring App Engine charges. When you're ready to move to the next codelab, you can re-enable it. While your app is disabled, it won't get any traffic to incur charges; however, Datastore usage may be billable if it exceeds the free quota, so delete enough data to fall under that limit.
On the other hand, if you're not going to continue with migrations and want to delete everything completely, shut down your Cloud project.
Next steps
Beyond this tutorial, other migration modules to consider include:
- Module 2: migrate from App Engine ndb to Cloud NDB
- Modules 7-9: migrate from App Engine push tasks (taskqueue) to Cloud Tasks
- Module 11: migrate from App Engine to Cloud Functions
- Migrate from App Engine to Cloud Run: see Module 4 if you're familiar with Docker, or Module 5 if you don't do containers or Dockerfiles
- Modules 15-16: migrate from App Engine Blobstore to Google Cloud Storage (forthcoming)
8. Additional resources
App Engine migration module codelabs issues/feedback
If you find any issues with this codelab, please search for your issue first before filing a new one. Links to search and create new issues:
Migration resources
Links to the repo folders for Module 12 (START) and Module 13 (FINISH) can be found in the table below. They can also be accessed from the repo for all App Engine codelab migrations, which you can clone or download as a ZIP file.
Codelab | Python 2 | Python 3
Module 12 (START) | mod12 | mod12b
Module 13 (FINISH) | mod13a | mod13b
Online resources
Below are online resources which may be relevant for this tutorial:
App Engine
- App Engine documentation
- Python 2 App Engine (standard environment) runtime
- Python 3 App Engine (standard environment) runtime
- Differences between Python 2 & 3 App Engine (standard environment) runtimes
- Python 2 to 3 App Engine (standard environment) migration guide
- App Engine pricing and quotas information
App Engine ndb and Cloud NDB
- App Engine ndb overview
- App Engine ndb Datastore usage
- Google Cloud NDB docs
- Google Cloud NDB repo
- Cloud Datastore pricing information
App Engine Memcache and Cloud Memorystore
- App Engine Memcache overview
- Python 2 App Engine memcache reference
- Python 3 App Engine memcache reference
- App Engine memcache to Cloud Memorystore migration guide
- Cloud Memorystore for Redis documentation
- Cloud Memorystore for Redis pricing information
- Cloud Memorystore supported Redis versions
- Cloud Memorystore home page
- Create new Memorystore instance in Cloud Console
- Python Redis client home page
- Python Redis client library documentation
Cloud VPC
- Google Cloud VPC docs
- Google Cloud VPC home page
- Cloud VPC pricing information
- Create new Serverless VPC connector in Cloud Console
Other Cloud information
- Python on Google Cloud Platform
- Google Cloud Python client libraries
- Google Cloud "Always Free" tier
- Google Cloud SDK (gcloud command-line tool)
- All Google Cloud documentation
License
This work is licensed under a Creative Commons Attribution 2.0 Generic License.