1. Overview
The Serverless Migration Station series of codelabs (self-paced, hands-on tutorials) and related videos aim to help Google Cloud serverless developers modernize their applications by guiding them through one or more migrations, primarily away from legacy services. Doing so makes your apps more portable and gives you more options and flexibility, enabling you to integrate with and access a wider range of Cloud products and more easily upgrade to newer language releases. While initially focused on the earliest Cloud users, primarily App Engine (standard environment) developers, this series is broad enough to include other serverless platforms like Cloud Functions and Cloud Run, or elsewhere if applicable.
The purpose of this codelab is to show Python 2 App Engine developers how to migrate from App Engine Task Queue pull tasks to Cloud Pub/Sub. There is also an implicit migration from App Engine NDB to Cloud NDB for Datastore access (primarily covered in Module 2) as well as an upgrade to Python 3.
In Module 18, you learn how to add the use of pull tasks to your app. In this module, you take the finished Module 18 app and migrate that usage to Cloud Pub/Sub. Those using Task Queues for push tasks should instead migrate to Cloud Tasks and refer to Modules 7-9.
You'll learn how to
- Replace the use of App Engine Task Queue (pull tasks) with Cloud Pub/Sub
- Replace the use of App Engine NDB with Cloud NDB (also see Module 2)
- Port the app to Python 3
What you'll need
- A Google Cloud Platform project with an active GCP billing account
- Basic Python skills
- Working knowledge of common Linux commands
- Basic knowledge of developing and deploying App Engine apps
- A working Module 18 App Engine sample app
2. Background
App Engine Task Queue supports both push and pull tasks. To improve application portability, Google Cloud recommends migrating from legacy bundled services like Task Queue to other Cloud standalone or 3rd-party equivalent services.
- Task Queue push task users should migrate to Cloud Tasks.
- Task Queue pull task users should migrate to Cloud Pub/Sub.
Migration Modules 7-9 cover push task migration while Modules 18-19 focus on pull task migration. While Cloud Tasks matches Task Queue push tasks more closely, Pub/Sub is not as close an analog to Task Queue pull tasks.
Pub/Sub has more features than the pull functionality provided by Task Queue. For example, Pub/Sub also has push functionality; however, Cloud Tasks is the closer match for Task Queue push tasks, so Pub/Sub push is not covered by any of the Migration Modules. This Module 19 codelab demonstrates switching the queuing mechanism from Task Queue pull queues to Pub/Sub as well as migrating from App Engine NDB to Cloud NDB for Datastore access, repeating the Module 2 migration.
While the Module 18 code is "advertised" as a Python 2 sample app, the source itself is Python 2 and 3 compatible, and it remains that way even after migrating to Cloud Pub/Sub (and Cloud NDB) here in Module 19.
This tutorial features the following steps:
- Setup/Prework
- Create Pub/Sub resources
- Update configuration
- Modify application code
3. Setup/Prework
This section explains how to:
- Set up your Cloud project
- Get baseline sample app
- (Re)Deploy and validate baseline app
- Enable new Google Cloud services/APIs
These steps ensure you're starting with working code and that it's ready for migration to Cloud services.
1. Setup project
If you completed the Module 18 codelab, reuse that same project (and code). Alternatively, create a brand new project or reuse another existing project. Ensure the project has an active billing account and an enabled App Engine app. Find your project ID and keep it handy; use it whenever you encounter the PROJECT_ID variable in this codelab.
2. Get baseline sample app
One of the prerequisites is a working Module 18 App Engine app, so either complete its codelab (recommended; link above) or copy the Module 18 code from the repo. Whether you use yours or ours, this is where we'll begin ("START"). This codelab walks you through the migration, concluding with code that resembles what's in the Module 19 repo folder ("FINISH").
- START: Module 18 folder (Python 2)
- FINISH: Module 19 folder (Python 2 and 3)
- Entire repo (to clone or download ZIP file)
Regardless of which Module 18 app you use, the folder should look like the below, possibly with a lib folder as well:
$ ls
README.md               appengine_config.py     queue.yaml              templates
app.yaml                main.py                 requirements.txt
3. (Re)Deploy and validate baseline app
Execute the following steps to deploy the Module 18 app:
- Delete the lib folder if there is one and run pip install -t lib -r requirements.txt to repopulate lib. You may need to use pip2 instead if you have both Python 2 and 3 installed on your development machine.
- Ensure you've installed and initialized the gcloud command-line tool and reviewed its usage.
- (optional) Set your Cloud project with gcloud config set project PROJECT_ID if you don't want to enter the PROJECT_ID with each gcloud command you issue.
- Deploy the sample app with gcloud app deploy.
- Confirm the app runs as expected without issue. If you completed the Module 18 codelab, the app displays the top visitors along with the most recent visits (illustrated below). If not, there may not be any visitor counts to display.
Before migrating the Module 18 sample app, you must first enable the Cloud services that the modified app will use.
4. Enable new Google Cloud services/APIs
The old app used App Engine bundled services which don't require additional setup, but standalone Cloud services do, and the updated app will employ both Cloud Pub/Sub and Cloud Datastore (via the Cloud NDB client library). App Engine and both Cloud APIs have "Always Free" tier quotas, and so long as you stay under those limits, you shouldn't incur charges completing this tutorial. Cloud APIs can be enabled from either the Cloud Console or from the command-line, depending on your preference.
From the Cloud Console
Go to the API Manager's Library page (for the correct project) in the Cloud Console, and search for the Cloud Datastore and Cloud Pub/Sub APIs using the search bar in the middle of the page:
Click the Enable button for each API separately—you may be prompted for billing information. For example, this is the Cloud Pub/Sub API Library page:
From the command-line
While it is visually informative to enable APIs from the console, some prefer the command-line. Issue the gcloud services enable pubsub.googleapis.com datastore.googleapis.com command to enable both APIs at the same time:
$ gcloud services enable pubsub.googleapis.com datastore.googleapis.com
Operation "operations/acat.p2-aaa-bbb-ccc-ddd-eee-ffffff" finished successfully.
You may be prompted for billing information. If you wish to enable other Cloud APIs and want to know their URIs, they can be found at the bottom of each API's library page. For example, observe pubsub.googleapis.com as the "Service name" at the bottom of the Pub/Sub page just above.
After the steps are complete, your project will be able to access the APIs. Now it's time to update the application to use those APIs.
4. Create Pub/Sub resources
Recapping the sequence order of the Task Queue workflow from Module 18:
- Module 18 used the queue.yaml file to create a pull queue named pullq.
- The app adds tasks to the pull queue to track visitors.
- Tasks are eventually processed by a worker, leased for a finite amount of time (an hour).
- Tasks are executed to tally recent visitor counts.
- Tasks are deleted from the queue upon completion.
You are going to replicate a similar workflow with Pub/Sub. The next section introduces basic Pub/Sub terminology, with three different ways to create the necessary Pub/Sub resources.
App Engine Task Queue (pull) vs. Cloud Pub/Sub terminology
Switching to Pub/Sub requires a slight adjustment to your vocabulary. Listed below are the primary categories along with relevant terms from both products. Also review the migration guide which features similar comparisons.
- Queuing data structure: With Task Queue, the data goes into pull queues; with Pub/Sub, data goes into topics.
- Units of queued data: What Task Queue calls pull tasks, Pub/Sub calls messages.
- Data processors: With Task Queue, workers access pull tasks; with Pub/Sub, subscribers receive messages via subscriptions.
- Data extraction: Leasing a pull task is the same as pulling a message from a topic (via a subscription).
- Clean-up/completion: Deleting a Task Queue task from a pull queue when you're done is analogous to acknowledging a Pub/Sub message.
Although the queuing product changes, the workflow remains relatively similar:
- Rather than a pull queue, the app uses a topic named pullq.
- Rather than adding tasks to a pull queue, the app sends messages to a topic (pullq).
- Rather than a worker leasing tasks from the pull queue, a subscriber named worker pulls messages from the pullq topic.
- The app processes message payloads, incrementing visitor counts in Datastore.
- Rather than deleting tasks from the pull queue, the app acknowledges the processed messages.
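To make this mapping concrete, here is a minimal, illustrative sketch (not the sample app itself) showing how each Task Queue pull operation roughly corresponds to a Pub/Sub call. The pullq and worker names and the *_path() variables mirror those used later in this codelab; 'PROJECT_ID' is a placeholder for your project ID.
# Illustrative mapping only; the full main.py changes appear later in this codelab.
from google.cloud import pubsub

ppc_client = pubsub.PublisherClient()       # publishes messages to topics
psc_client = pubsub.SubscriberClient()      # pulls messages via subscriptions
TOP_PATH = ppc_client.topic_path('PROJECT_ID', 'pullq')             # placeholder project ID
SUB_PATH = psc_client.subscription_path('PROJECT_ID', 'worker')

data = '192.0.2.1'   # sample visitor IP payload

# Task Queue: QUEUE.add(taskqueue.Task(payload=data, method='PULL'))
ppc_client.publish(TOP_PATH, data.encode('utf-8'))                  # Pub/Sub: publish a message

# Task Queue: tasks = QUEUE.lease_tasks(HOUR, TASKS)
rsp = psc_client.pull(subscription=SUB_PATH, max_messages=1000)     # Pub/Sub: pull messages

# Task Queue: QUEUE.delete_tasks(tasks)
ack_ids = [m.ack_id for m in rsp.received_messages]
if ack_ids:
    psc_client.acknowledge(subscription=SUB_PATH, ack_ids=ack_ids)  # Pub/Sub: acknowledge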
With Task Queue, the setup involves creating the pull queue. With Pub/Sub, setting up requires creating both a topic and a subscription. In Module 18, we processed queue.yaml outside of app execution; now the same has to be done with Pub/Sub.
There are three options for creating topics and subscriptions:
- From the Cloud console
- From the command-line, or
- From code (short Python script)
Pick one of the options below and follow the corresponding instructions to create your Pub/Sub resources.
From the Cloud console
To create a topic from the Cloud Console, follow these steps:
- Go to the Cloud console Pub/Sub Topics page.
- Click Create topic at the top; a new dialog window opens (see image below).
- In the Topic ID field, enter pullq.
- Unselect all checked options and select Google-managed encryption key.
- Click the Create topic button.
This is what the topic creation dialog looks like:
Now that you have a topic, a subscription for that topic must be created:
- Go to the Cloud console Pub/Sub Subscriptions page.
- Click Create subscription at the top (see image below).
- Enter worker in the Subscription ID field.
- Pick pullq from the Select a Cloud Pub/Sub topic pulldown, noting its "fully-qualified pathname," for example, projects/PROJECT_ID/topics/pullq.
- For Delivery type, select Pull.
- Leave all other options as-is and click the Create button.
This is what the subscription creation screen looks like:
You can also create a subscription from the Topics page—this "shortcut" might be useful for you in helping associate topics with subscriptions. To learn more about creating subscriptions, see the documentation.
From the command-line
Pub/Sub users can create topics and subscriptions with the commands gcloud pubsub topics create TOPIC_ID and gcloud pubsub subscriptions create SUBSCRIPTION_ID --topic=TOPIC_ID, respectively. Executing these with a TOPIC_ID of pullq and a SUBSCRIPTION_ID of worker results in the following output for project PROJECT_ID:
$ gcloud pubsub topics create pullq
Created topic [projects/PROJECT_ID/topics/pullq].
$ gcloud pubsub subscriptions create worker --topic=pullq
Created subscription [projects/PROJECT_ID/subscriptions/worker].
Also see this page in the Quickstart documentation. Using the command-line might simplify workflows where topics and subscriptions are created on a regular basis, and such commands can be used in shell scripts for this purpose.
From code (short Python script)
Another way to automate creating topics and subscriptions is by using the Pub/Sub API in source code. Below is the code for the maker.py script in the Module 19 repo folder.
from __future__ import print_function
import google.auth
from google.api_core import exceptions
from google.cloud import pubsub
_, PROJECT_ID = google.auth.default()
TOPIC = 'pullq'
SBSCR = 'worker'
ppc_client = pubsub.PublisherClient()
psc_client = pubsub.SubscriberClient()
TOP_PATH = ppc_client.topic_path(PROJECT_ID, TOPIC)
SUB_PATH = psc_client.subscription_path(PROJECT_ID, SBSCR)
def make_top():
    try:
        top = ppc_client.create_topic(name=TOP_PATH)
        print('Created topic %r (%s)' % (TOPIC, top.name))
    except exceptions.AlreadyExists:
        print('Topic %r already exists at %r' % (TOPIC, TOP_PATH))

def make_sub():
    try:
        sub = psc_client.create_subscription(name=SUB_PATH, topic=TOP_PATH)
        print('Subscription created %r (%s)' % (SBSCR, sub.name))
    except exceptions.AlreadyExists:
        print('Subscription %r already exists at %r' % (SBSCR, SUB_PATH))
    try:
        psc_client.close()
    except AttributeError:  # special Py2 handler for grpcio<1.12.0
        pass

make_top()
make_sub()
Executing this script results in the expected output (provided there are no errors):
$ python3 maker.py
Created topic 'pullq' (projects/PROJECT_ID/topics/pullq)
Subscription created 'worker' (projects/PROJECT_ID/subscriptions/worker)
Calling the API to create already-existing resources results in a google.api_core.exceptions.AlreadyExists exception thrown by the client library, handled gracefully by the script:
$ python3 maker.py
Topic 'pullq' already exists at 'projects/PROJECT_ID/topics/pullq'
Subscription 'worker' already exists at 'projects/PROJECT_ID/subscriptions/worker'
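If you'd like to confirm programmatically that both resources exist, the short helper below is one way to do it. This is a hypothetical verification sketch, not part of the Module 19 sample; it assumes the same pullq topic and worker subscription names and relies on the client libraries' get_topic()/get_subscription() calls raising NotFound for missing resources.
# Hypothetical verification helper; not part of the Module 19 sample app.
import google.auth
from google.api_core import exceptions
from google.cloud import pubsub

_, PROJECT_ID = google.auth.default()
ppc_client = pubsub.PublisherClient()
psc_client = pubsub.SubscriberClient()
TOP_PATH = ppc_client.topic_path(PROJECT_ID, 'pullq')
SUB_PATH = psc_client.subscription_path(PROJECT_ID, 'worker')

try:
    ppc_client.get_topic(topic=TOP_PATH)                  # raises NotFound if missing
    print('Topic OK:', TOP_PATH)
except exceptions.NotFound:
    print('MISSING topic:', TOP_PATH)

try:
    psc_client.get_subscription(subscription=SUB_PATH)    # raises NotFound if missing
    print('Subscription OK:', SUB_PATH)
except exceptions.NotFound:
    print('MISSING subscription:', SUB_PATH)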
If you're new to Pub/Sub, see the Pub/Sub architecture white paper for additional insight.
5. Update configuration
Updates in configuration include changing various configuration files as well as creating the equivalent of App Engine pull queues within the Cloud Pub/Sub ecosystem (covered in the previous section).
Delete queue.yaml
We're moving away from Task Queue entirely, so delete queue.yaml because Pub/Sub doesn't use this file. Rather than creating a pull queue, you will create a Pub/Sub topic (and subscription).
requirements.txt
Append both google-cloud-ndb and google-cloud-pubsub to requirements.txt so they join flask from Module 18. Your updated Module 19 requirements.txt should now look like this:
flask
google-cloud-ndb
google-cloud-pubsub
This requirements.txt file doesn't feature any version numbers, meaning the latest versions are selected. If any incompatibilities arise, follow the standard practice of using version numbers to lock in working versions for an app.
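For example, a pinned requirements.txt might look something like the following; the version numbers shown are purely illustrative assumptions, so pin whichever versions you have actually tested:
flask==2.0.2
google-cloud-ndb==1.11.1
google-cloud-pubsub==2.9.0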
app.yaml
The changes to app.yaml differ depending on whether you're staying with Python 2 or upgrading to Python 3.
Python 2
The update to requirements.txt above adds the use of Google Cloud client libraries. These require additional support from App Engine, namely a couple of built-in libraries, setuptools and grpcio. Use of built-in libraries requires a libraries section in app.yaml and library version numbers, or "latest" for the latest available on App Engine servers. The Module 18 app.yaml does not yet have one of those sections:
BEFORE:
runtime: python27
threadsafe: yes
api_version: 1
handlers:
- url: /.*
script: main.app
Add a libraries section to app.yaml with entries for both setuptools and grpcio, selecting their latest versions. Also add a placeholder runtime entry for Python 3, commented out, along with a current 3.x release, for example, 3.10 at the time of this writing. With these changes, app.yaml now looks like this:
AFTER:
#runtime: python310
runtime: python27
threadsafe: yes
api_version: 1
handlers:
- url: /.*
script: main.app
libraries:
- name: setuptools
version: latest
- name: grpcio
version: latest
Python 3
For Python 3 users and app.yaml, it's all about removing things. In this section, you'll delete the handlers section and the threadsafe and api_version directives, and you won't create a libraries section.
Second generation runtimes do not provide built-in 3rd-party libraries, so a libraries section is not needed in app.yaml. Furthermore, copying (sometimes known as vendoring or self-bundling) non-built-in 3rd-party packages is no longer required. You only need to list the 3rd-party libraries your app uses in requirements.txt.
The handlers section in app.yaml is for specifying application (script) and static file handlers. Since the Python 3 runtime requires web frameworks to perform their own routing, all script handlers must be changed to auto. If your app (like Module 18's) does not serve static files, all routes would then be auto, making them irrelevant. As a result, the handlers section isn't needed either, so delete that.
Finally, neither the threadsafe nor api_version directives are used in Python 3, so delete those as well. The bottom line is that you should delete all sections of app.yaml so that only the runtime directive remains, specifying a modern version of Python 3, for example, 3.10. Here is what app.yaml looks like before and after these updates:
BEFORE:
runtime: python27
threadsafe: yes
api_version: 1
handlers:
- url: /.*
script: main.app
AFTER:
runtime: python310
For those who are not ready to delete everything from their app.yaml for Python 3, we've provided an app3.yaml alternative file in the Module 19 repo folder. If you wish to use that instead for deployments, be sure to append this filename to the end of your command: gcloud app deploy app3.yaml (otherwise, it will default to deploying your app with the Python 2 app.yaml file that you left unchanged).
appengine_config.py
If you're upgrading to Python 3, there's no need for appengine_config.py, so delete it. It's not necessary because 3rd-party library support only requires listing those libraries in requirements.txt. Python 2 users, read on.
The Module 18 appengine_config.py has the appropriate code to support 3rd-party libraries, for example, Flask and the Cloud client libraries just added to requirements.txt:
BEFORE:
from google.appengine.ext import vendor
# Set PATH to your libraries folder.
PATH = 'lib'
# Add libraries installed in the PATH folder.
vendor.add(PATH)
However, this code alone does not suffice to support the just-added built-in libraries (setuptools, grpcio). A few more lines are needed, so update appengine_config.py so it looks like this:
AFTER:
import pkg_resources
from google.appengine.ext import vendor
# Set PATH to your libraries folder.
PATH = 'lib'
# Add libraries installed in the PATH folder.
vendor.add(PATH)
# Add libraries to pkg_resources working set to find the distribution.
pkg_resources.working_set.add_entry(PATH)
More details on changes required to support Cloud client libraries can be found in the migrating bundled services documentation.
Other configuration updates
If you have a lib folder, delete it. If you're a Python 2 user, replenish the lib folder by issuing the following command:
pip install -t lib -r requirements.txt # or pip2
If you have both Python 2 and 3 installed on your development system, you may need to use pip2 instead of pip.
6. Modify application code
This section features updates to the main application file, main.py, replacing the use of App Engine Task Queue pull queues with Cloud Pub/Sub. There are no changes to the web template, templates/index.html. Both apps should operate identically, displaying the same data.
Update imports and initialization
There are several updates to imports and initialization:
- For the imports, replace App Engine NDB and Task Queue with Cloud NDB and Pub/Sub.
- Rename pullq from a QUEUE name to a TOPIC name.
- With pull tasks, the worker leased them for an hour, but with Pub/Sub, timeouts are measured on a per-message basis, so delete the HOUR constant.
- Cloud APIs require the use of an API client, so initialize those for Cloud NDB and Cloud Pub/Sub, with the latter providing clients for both topics and subscriptions.
- Pub/Sub requires the Cloud project ID, so import and get it from google.auth.default().
- Pub/Sub requires "fully-qualified pathnames" for topics and subscriptions, so create those using the *_path() convenience functions.
Below are the imports and initialization from Module 18, followed by how the sections should look after implementing the changes above, with most of the new code being various Pub/Sub resources:
BEFORE:
from flask import Flask, render_template, request
from google.appengine.api import taskqueue
from google.appengine.ext import ndb
HOUR = 3600
LIMIT = 10
TASKS = 1000
QNAME = 'pullq'
QUEUE = taskqueue.Queue(QNAME)
app = Flask(__name__)
AFTER:
from flask import Flask, render_template, request
import google.auth
from google.cloud import ndb, pubsub
LIMIT = 10
TASKS = 1000
TOPIC = 'pullq'
SBSCR = 'worker'
app = Flask(__name__)
ds_client = ndb.Client()
ppc_client = pubsub.PublisherClient()
psc_client = pubsub.SubscriberClient()
_, PROJECT_ID = google.auth.default()
TOP_PATH = ppc_client.topic_path(PROJECT_ID, TOPIC)
SUB_PATH = psc_client.subscription_path(PROJECT_ID, SBSCR)
Visit data model updates
The Visit data model doesn't change. Datastore access requires explicit use of the Cloud NDB API client's context manager, ds_client.context(). In code, this means you wrap the Datastore calls in both store_visit() and fetch_visits() inside Python with blocks. This update is identical to what is covered in Module 2.
The most relevant change for Pub/Sub is to replace the enqueuing of a Task Queue pull task with the publishing of a Pub/Sub message to the pullq topic. Below is the code before and after making these updates:
BEFORE:
class Visit(ndb.Model):
    'Visit entity registers visitor IP address & timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'create new Visit in Datastore and queue request to bump visitor count'
    Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
    QUEUE.add(taskqueue.Task(payload=remote_addr, method='PULL'))

def fetch_visits(limit):
    'get most recent visits'
    return Visit.query().order(-Visit.timestamp).fetch(limit)
AFTER:
class Visit(ndb.Model):
    'Visit entity registers visitor IP address & timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'create new Visit in Datastore and queue request to bump visitor count'
    with ds_client.context():
        Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
    ppc_client.publish(TOP_PATH, remote_addr.encode('utf-8'))

def fetch_visits(limit):
    'get most recent visits'
    with ds_client.context():
        return Visit.query().order(-Visit.timestamp).fetch(limit)
VisitorCount data model updates
The VisitorCount data model doesn't change, and neither does fetch_counts(), except for wrapping its Datastore query inside a with block, as illustrated below:
BEFORE:
class VisitorCount(ndb.Model):
    visitor = ndb.StringProperty(repeated=False, required=True)
    counter = ndb.IntegerProperty()

def fetch_counts(limit):
    'get top visitors'
    return VisitorCount.query().order(-VisitorCount.counter).fetch(limit)
AFTER:
class VisitorCount(ndb.Model):
    visitor = ndb.StringProperty(repeated=False, required=True)
    counter = ndb.IntegerProperty()

def fetch_counts(limit):
    'get top visitors'
    with ds_client.context():
        return VisitorCount.query().order(-VisitorCount.counter).fetch(limit)
Update worker code
The worker code is updated to replace NDB with Cloud NDB and Task Queue with Pub/Sub, but its workflow remains the same.
- Wrap Datastore calls in the Cloud NDB context manager's with block.
- Task Queue cleanup involves deleting all the tasks from the pull queue. With Pub/Sub, "acknowledgement IDs" are collected in acks and then deleted/acknowledged at the end.
- Task Queue pull tasks are leased in a similar way that Pub/Sub messages are pulled. While deletion of pull tasks is done with the task objects themselves, Pub/Sub messages are deleted via their acknowledgement IDs.
- Pub/Sub message payloads require bytes (not Python strings), so there's some UTF-8 encoding and decoding when publishing to and pulling messages from a topic, respectively.
Replace log_visitors() with the updated code below, implementing the changes just described:
BEFORE:
@app.route('/log')
def log_visitors():
    'worker processes recent visitor counts and updates them in Datastore'
    # tally recent visitor counts from queue then delete those tasks
    tallies = {}
    tasks = QUEUE.lease_tasks(HOUR, TASKS)
    for task in tasks:
        visitor = task.payload
        tallies[visitor] = tallies.get(visitor, 0) + 1
    if tasks:
        QUEUE.delete_tasks(tasks)
    # increment those counts in Datastore and return
    for visitor in tallies:
        counter = VisitorCount.query(VisitorCount.visitor == visitor).get()
        if not counter:
            counter = VisitorCount(visitor=visitor, counter=0)
            counter.put()
        counter.counter += tallies[visitor]
        counter.put()
    return 'DONE (with %d task[s] logging %d visitor[s])\r\n' % (
            len(tasks), len(tallies))
AFTER:
@app.route('/log')
def log_visitors():
    'worker processes recent visitor counts and updates them in Datastore'
    # tally recent visitor counts from queue then delete those tasks
    tallies = {}
    acks = set()
    rsp = psc_client.pull(subscription=SUB_PATH, max_messages=TASKS)
    msgs = rsp.received_messages
    for rcvd_msg in msgs:
        acks.add(rcvd_msg.ack_id)
        visitor = rcvd_msg.message.data.decode('utf-8')
        tallies[visitor] = tallies.get(visitor, 0) + 1
    if acks:
        psc_client.acknowledge(subscription=SUB_PATH, ack_ids=acks)
    try:
        psc_client.close()
    except AttributeError:  # special handler for grpcio<1.12.0
        pass
    # increment those counts in Datastore and return
    if tallies:
        with ds_client.context():
            for visitor in tallies:
                counter = VisitorCount.query(VisitorCount.visitor == visitor).get()
                if not counter:
                    counter = VisitorCount(visitor=visitor, counter=0)
                    counter.put()
                counter.counter += tallies[visitor]
                counter.put()
    return 'DONE (with %d task[s] logging %d visitor[s])\r\n' % (
            len(msgs), len(tallies))
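One behavioral difference from the old one-hour lease: pulled Pub/Sub messages must be acknowledged before the subscription's acknowledgement deadline expires, or they are redelivered. If a batch needs more processing time, the deadline can be extended per message. Below is a minimal, hypothetical sketch (not part of the sample app) using the subscriber client's modify_ack_deadline(); the 60-second value and 'PROJECT_ID' placeholder are assumptions for illustration.
# Hypothetical deadline-extension sketch; not part of the Module 19 sample app.
from google.cloud import pubsub

psc_client = pubsub.SubscriberClient()
SUB_PATH = psc_client.subscription_path('PROJECT_ID', 'worker')   # placeholder project ID

rsp = psc_client.pull(subscription=SUB_PATH, max_messages=1000)
ack_ids = [m.ack_id for m in rsp.received_messages]
if ack_ids:
    # Give this batch another 60 seconds (illustrative) before Pub/Sub redelivers it.
    psc_client.modify_ack_deadline(
        subscription=SUB_PATH,
        ack_ids=ack_ids,
        ack_deadline_seconds=60,
    )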
There are no changes to the main application handler root(). No changes are needed in the HTML template file, templates/index.html, either, so this wraps up all the necessary updates. Congratulations on arriving at your new Module 19 application using Cloud Pub/Sub!
7. Summary/Cleanup
Deploy your app to verify that it works as intended and that the output reflects your changes. Also run the worker to process the visitor counts. After validating the app, perform any clean-up steps and consider next steps.
Deploy and verify application
Ensure you've already created the pullq topic and worker subscription. If that has been completed and your sample app is ready to go, deploy it with gcloud app deploy. The output should be identical to the Module 18 app's, except that you've successfully replaced the entire underlying queuing mechanism:
The web frontend of the app verifies that this part of the application works. While it successfully queries for and displays the top visitors and most recent visits, recall that the app also registers this visit and publishes a message (the Pub/Sub equivalent of the old pull task) to add this visitor to the overall count. That message is now waiting in the subscription to be processed.
You can execute the worker with an App Engine backend service, a cron job, by browsing to /log, or by issuing a command-line HTTP request. Here's one sample execution and output of calling the worker code with curl (substitute your PROJECT_ID):
$ curl https://PROJECT_ID.appspot.com/log
DONE (with 1 task[s] logging 1 visitor[s])
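If you prefer Python to curl, the same check can be scripted; this snippet is just an illustrative alternative (substitute your PROJECT_ID) and is not part of the sample app:
# Illustrative alternative to the curl command above.
import requests

resp = requests.get('https://PROJECT_ID.appspot.com/log')   # substitute your PROJECT_ID
print(resp.status_code, resp.text)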
The updated count will then be reflected on the next website visit. That's it!
Clean up
General
If you are done for now, we recommend you disable your App Engine app to avoid incurring billing. However, if you wish to test or experiment some more, the App Engine platform has a free quota, so as long as you don't exceed that usage tier, you shouldn't be charged. That covers compute, but there may also be charges for relevant App Engine services, so check its pricing page for more information. If this migration involves other Cloud services, those are billed separately. In either case, if applicable, see the "Specific to this codelab" section below.
For full disclosure, deploying to a Google Cloud serverless compute platform like App Engine incurs minor build and storage costs. Cloud Build has its own free quota, as does Cloud Storage. Storage of your app's built image uses up some of that quota. However, you might live in a region that does not have such a free tier, so be aware of your storage usage to minimize potential costs. Specific Cloud Storage "folders" you should review include:
console.cloud.google.com/storage/browser/LOC.artifacts.PROJECT_ID.appspot.com/containers/images
console.cloud.google.com/storage/browser/staging.PROJECT_ID.appspot.com
- The storage links above depend on your PROJECT_ID and *LOC*ation, for example, "us" if your app is hosted in the USA.
On the other hand, if you're not going to continue with this application or other related migration codelabs and want to delete everything completely, shut down your project.
Specific to this codelab
The services listed below are unique to this codelab. Refer to each product's documentation for more information:
- Different components of Cloud Pub/Sub have a free tier; determine your overall usage to get a better idea of cost implications and see its pricing page for more details.
- The App Engine Datastore service is provided by Cloud Datastore (Cloud Firestore in Datastore mode) which also has a free tier; see its pricing page for more information.
Next steps
Beyond this tutorial, other migration modules to consider that focus on moving away from the legacy bundled services include:
- Module 2: migrate from App Engine ndb to Cloud NDB
- Modules 7-9: migrate from App Engine Task Queue (push tasks) to Cloud Tasks
- Modules 12-13: migrate from App Engine Memcache to Cloud Memorystore
- Modules 15-16: migrate from App Engine Blobstore to Cloud Storage
App Engine is no longer the only serverless platform in Google Cloud. If you have a small App Engine app or one with limited functionality and wish to turn it into a standalone microservice, or you want to break up a monolithic app into multiple reusable components, these are good reasons to consider moving to Cloud Functions. If containerization has become part of your application development workflow, particularly if it consists of a CI/CD (continuous integration/continuous delivery or deployment) pipeline, consider migrating to Cloud Run. These scenarios are covered by the following modules:
- Migrate from App Engine to Cloud Functions: see Module 11
- Migrate from App Engine to Cloud Run: see Module 4 to containerize your app with Docker, or Module 5 to do it without containers, Docker knowledge, or Dockerfiles
Switching to another serverless platform is optional, and we recommend considering the best options for your apps and use cases before making any changes.
Regardless of which migration module you consider next, all Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. The repo's README also provides guidance on which migrations to consider and any relevant "order" of Migration Modules.
8. Additional resources
Listed below are additional resources for developers further exploring this or related Migration Modules, as well as related products. They include places to provide feedback on this content, links to the code, and various pieces of documentation you may find useful.
Codelabs issues/feedback
If you find any issues with this codelab, please search for your issue first before filing. Links to search and create new issues:
Migration resources
Links to the repo folders for Module 18 (START) and Module 19 (FINISH) can be found in the table below.
Codelab | Python 2 | Python 3
Module 18 (START) | Module 18 repo folder | (n/a)
Module 19 (FINISH; this codelab) | Module 19 repo folder | (same as Python 2 except use app3.yaml unless you updated app.yaml as covered above)
Online references
Below are resources relevant for this tutorial:
App Engine Task Queue
- App Engine Task Queue overview
- App Engine Task Queue pull queues overview
- App Engine Task Queue pull queue full sample app
- Creating Task Queue pull queues
- Google I/O 2011 pull queue launch video (Votelator sample app)
- queue.yaml reference
- queue.yaml vs. Cloud Tasks
- Pull queues to Pub/Sub migration guide
Cloud Pub/Sub
- Cloud Pub/Sub product page
- Using Pub/Sub client libraries
- Pub/Sub Python client library samples
- Pub/Sub Python client library documentation
- Create & manage Pub/Sub topics
- Pub/Sub topic naming guidelines
- Create & manage Pub/Sub subscriptions
- App Engine (Flexible) sample app (deployable to Standard too; Python 3)
- Repo for sample app above
- Pub/Sub pull subscriptions
- Pub/Sub push subscriptions
- App Engine Pub/Sub push sample app (Python 3)
- App Engine Pub/Sub push sample app repo
- Pub/Sub pricing information
- Cloud Tasks or Cloud Pub/Sub? (push vs. pull)
App Engine NDB and Cloud NDB (Datastore)
- App Engine NDB docs
- App Engine NDB repo
- Google Cloud NDB docs
- Google Cloud NDB repo
- Cloud Datastore pricing information
App Engine platform
- App Engine documentation
- Python 2 App Engine (standard environment) runtime
- Using App Engine built-in libraries on Python 2 App Engine
- Python 3 App Engine (standard environment) runtime
- Differences between Python 2 & 3 App Engine (standard environment) runtimes
- Python 2 to 3 App Engine (standard environment) migration guide
- App Engine pricing and quotas information
- Second generation App Engine platform launch (2018)
- Comparing first & second generation platforms
- Long-term support for legacy runtimes
- Documentation migration samples
- Community-contributed migration samples
Other Cloud information
- Python on Google Cloud Platform
- Google Cloud Python client libraries
- Google Cloud "Always Free" tier
- Google Cloud SDK (gcloud command-line tool)
- All Google Cloud documentation
Videos
- Serverless Migration Station
- Serverless Expeditions
- Subscribe to Google Cloud Tech
- Subscribe to Google Developers
License
This work is licensed under a Creative Commons Attribution 2.0 Generic License.