1. Overview
This codelab imagines a possible enterprise workflow: image archiving, analysis, and report generation. Imagine your organization having a series of images taking up space on a constrained resource. You want to archive that data, analyze those images, and most importantly, generate a report summarizing the archived locations plus the results of the analysis, collated and ready for consumption by management. Google Cloud provides the tools to make this happen, utilizing APIs from two of its product lines, Google Workspace (previously, G Suite or Google Apps) and Google Cloud (previously, GCP).
In our scenario, the business user will have images on Google Drive. It makes sense to back those up to "colder," cheaper storage, such as the storage classes available from Google Cloud Storage. Google Cloud Vision allows developers to easily integrate vision detection features within applications, including object and landmark detection, optical character recognition (OCR), etc. Finally, a Google Sheets spreadsheet is a useful visualization tool for summarizing all of this for your boss.
After completing this codelab to build a solution that leverages all of Google Cloud, we hope you'll be inspired to build something even more impactful for your organization or your customers'.
What you'll learn
- How to use Cloud Shell
- How to authenticate API requests
- How to install the Google APIs client library for Python
- How to enable Google APIs
- How to download files from Google Drive
- How to upload objects/blobs to Cloud Storage
- How to analyze data with Cloud Vision
- How to write rows to Google Sheets
What you'll need
- A Google account (Google Workspace accounts may require administrator approval)
- A Google Cloud project with an active Google Cloud billing account
- Familiarity with operating system terminal/shell commands
- Basic skills in Python (2 or 3), but you can use any supported language
Having experience with the four Google Cloud products listed above would be helpful but not required. If time allows for you to become familiar with them separately first, you're welcome to do codelabs for each before tackling the exercise here:
- Google Drive (Using the Google Workspace APIs) intro (Python)
- Using Cloud Vision with Python (Python)
- Build customized reporting tools with the Sheets API (JS/Node)
- Upload objects to Google Cloud Storage (no coding required)
2. Setup and Requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can update it at any time.
- The Project ID must be unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (it is typically identified as PROJECT_ID). If you don't like the generated ID, you may generate another random one. Alternatively, you can try your own and see if it's available. It cannot be changed after this step and will remain for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, you can delete the resources you created or delete the whole project. New users of Google Cloud are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While you can develop code locally on your laptop, a secondary goal of this codelab is to teach you how to use the Google Cloud Shell, a command-line environment running in the cloud via your modern web browser.
Activate Cloud Shell
- From the Cloud Console, click Activate Cloud Shell.
If you've never started Cloud Shell before, you're presented with an intermediate screen (below the fold) describing what it is. If that's the case, click Continue (and you won't ever see it again). Here's what that one-time screen looks like:
It should only take a few moments to provision and connect to Cloud Shell.
This virtual machine is loaded with all the development tools you need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with simply a browser or your Chromebook.
Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.
- Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list
Command output
          Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
- Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project
Command output
[core]
project = <PROJECT_ID>
If it is not, you can set it with this command:
gcloud config set project <PROJECT_ID>
Command output
Updated property [core/project].
3. Confirm Python environment
This codelab requires you to use the Python language (although many languages are supported by the Google APIs client libraries, so feel free to build something equivalent in your favorite development tool and simply use the Python as pseudocode). In particular, this codelab supports Python 2 and 3, but we recommend moving to 3.x as soon as possible.
The Cloud Shell is a convenient tool available for users directly from the Cloud Console and doesn't require a local development environment, so this tutorial can be done completely in the cloud with a web browser. More specifically for this codelab, the Cloud Shell has already pre-installed both versions of Python.
The Cloud Shell also has IPython installed: it is a higher-level interactive Python interpreter which we recommend, especially if you are part of the data science or machine learning community. If you are, IPython is the default interpreter for Jupyter Notebooks as well as Colab, Jupyter Notebooks hosted by Google Research.
IPython favors a Python 3 interpreter first but falls back to Python 2 if 3.x isn't available. IPython can be accessed from the Cloud Shell but can also be installed in a local development environment. Example output of starting ipython will look like this (exit with ^D [Ctrl-d] and accept the offer to exit):
$ ipython
Python 3.7.3 (default, Mar 4 2020, 23:11:43)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]:
If IPython isn't your preference, use of a standard Python interactive interpreter (either the Cloud Shell or your local development environment) is perfectly acceptable (also exit with ^D):
$ python
Python 2.7.13 (default, Sep 26 2018, 18:42:22)
[GCC 6.3.0 20170516] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

$ python3
Python 3.7.3 (default, Mar 10 2020, 02:33:39)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
The codelab also assumes you have the pip installation tool (Python package manager and dependency resolver). It comes bundled with versions 2.7.9+ or 3.4+. If you have an older Python version, see this guide for installation instructions. Depending on your permissions, you may need sudo or superuser access, but generally this isn't the case. You can also explicitly use pip2 or pip3 to execute pip for specific Python versions.
The remainder of the codelab assumes you're using Python 3—specific instructions will be provided for Python 2 if they differ significantly from 3.x.
[optional] Create and use virtual environments
This section is optional and only really required for those who must use a virtual environment for this codelab (per the warning sidebar above). If you only have Python 3 on your computer, you can simply issue this command to create a virtualenv called my_env (you can choose another name if desired):
virtualenv my_env
However, if you have both Python 2 & 3 on your computer, we recommend you install a Python 3 virtualenv, which you can do with the -p flag like this:
virtualenv -p python3 my_env
Enter your newly created virtualenv by "activating" it like this:
source my_env/bin/activate
Confirm you're in the environment by observing your shell prompt is now preceded with your environment name, i.e.,
(my_env) $
Now you should be able to pip install any required packages, execute code within this environment, etc. Another benefit is that if you completely mess it up, or get into a situation where your Python installation is corrupted, you can blow away this entire environment without affecting the rest of your system.
4. Install the Google APIs client library for Python
This codelab requires the Google APIs client library for Python, so either it's a simple install process or you may not have to do anything at all.
We earlier recommended you consider using Cloud Shell for convenience. You can complete the entire tutorial from a web browser in the cloud. Another reason to use Cloud Shell is that many popular development tools and necessary libraries are already pre-installed.
*Install client libraries
(optional) This can be skipped if you're using Cloud Shell or a local environment where you've already installed the client libraries. You only need to do this if you're developing locally and haven't installed them (or aren't sure you have). The easiest way is to use pip (or pip3) to do the install (including updating pip itself if necessary):
pip install -U pip google-api-python-client oauth2client
Confirm installation
The pip command above installs the client library as well as any packages it depends on. Whether you're using Cloud Shell or your own environment, verify the client library is installed by importing the necessary packages and confirming there are no import errors (or any output):
python3 -c "import googleapiclient, httplib2, oauth2client"
If you use Python 2 instead (from Cloud Shell), you'll get a warning that support for it has been deprecated:
*******************************************************************************
Python 2 is deprecated. Upgrade to Python 3 as soon as possible.
See https://cloud.google.com/python/docs/python2-sunset

To suppress this warning, create an empty ~/.cloudshell/no-python-warning file.
The command will automatically proceed in seconds or on any key.
*******************************************************************************
Once you can run that import "test" command successfully (no errors/output), you're ready to start talking to Google APIs!
Summary
As this is an intermediate codelab, the assumption is that you already have experience with creating & using projects in the console. If you're new to Google APIs, and Google Workspace APIs specifically, try the Google Workspace APIs introductory codelab first. Additionally, if you know how to create (or reuse existing) user account (not service account) credentials, drop the client_secret.json file into your work directory, skip the next module, and jump to "Enable Google APIs."
5. *Authorize API requests (user authorization)
This section can be skipped if you've already created user account authorization credentials and are familiar with the process. Note that user account authorization is different from service account authorization, which uses a different technique, so please continue below.
Intro to authorization (plus some authentication)
In order to make requests to the APIs, your application needs to have the proper authorization. Authentication, a similar word, describes login credentials—you authenticate yourself when logging into your Google account with a login & password. Once authenticated, the next question is whether you—or rather, your code—are authorized to access data, such as blob files on Cloud Storage or a user's personal files on Google Drive.
Google APIs support several types of authorization, but the one most common for G Suite API users is user authorization since the example application in this codelab accesses data belonging to end-users. Those end-users must grant permission for your app to access their data. This means your code must obtain user account OAuth2 credentials.
To get OAuth2 credentials for user authorization, go back to the API manager and select the "Credentials" tab on the left-nav:
When you get there, you'll see all your credentials in three separate sections:
The first is for API keys, the second for OAuth 2.0 client IDs, and the last for OAuth2 service accounts—we're using the one in the middle.
Creating credentials
From the Credentials page, click on the + Create Credentials button at the top, which then gives you a dialog where you'd choose "OAuth client ID:"
On the next screen, you have 2 actions: configuring your app's authorization "consent screen" and choosing the application type:
If you have not set a consent screen, you will see a warning in the console and will need to do so now. (Skip these next steps if your consent screen has already been set up.)
OAuth consent screen
Click on "Configure consent screen" where you select an "External" app (or "Internal" if you're a G Suite customer):
Note that for the purposes of this exercise, it doesn't matter which you pick because you're not publishing your codelab sample. Most people will select "External" to be taken to a more complex screen, but you really only need to complete the "Application name" field at the top:
The only thing you need at this time is an application name, so pick one that reflects the codelab you're doing, then click Save.
Creating OAuth client ID (user acct auth)
Now go back to the Credentials tab to create an OAuth2 client ID. Here you'll see a variety of OAuth client IDs you can create:
We're developing a command-line tool, which is Other, so choose that then click the Create button. Choose a client ID name reflecting the app you're creating or simply take the default name, which is usually, "Other client N".
Saving your credentials
- A dialog with the new credentials appears; click OK to close
- Back on the Credentials page, scroll down to the "OAuth2 Client IDs" section, then find and click the download icon at the far right of your newly-created client ID.
- This opens a dialog to save a file named client_secret-LONG-HASH-STRING.apps.googleusercontent.com.json, likely to your Downloads folder. We recommend shortening it to an easier name like client_secret.json (which is what the sample app uses), then saving it to the directory/folder where you'll be creating the sample app in this codelab (an optional sanity check of this file follows below).
Summary
Now you're ready to enable the Google APIs employed in this codelab. Also, for the application name in the OAuth consent screen, we picked "Vision API demo", so expect to see it in some of the forthcoming screenshots.
6. Enable Google APIs
This codelab uses four (4) Google APIs, a pair from Google Cloud (Cloud Storage and Cloud Vision) and another pair from Google Workspace (Google Drive and Google Sheets). Below are general instructions for enabling Google APIs. Once you know how to enable one API, the others are similar.
Regardless of which Google API you want to use in your application, it must be enabled first. APIs can be enabled from the command line or from the Cloud Console. The process of enabling APIs is identical, so once you enable one API, you can enable the others in the same way.
Option 1: gcloud command-line interface (Cloud Shell or local environment)
While enabling APIs from the Cloud Console is more common, some developers prefer doing everything from the command line. To do so, you need to look up an API's "service name." It looks like a URL: SERVICE_NAME.googleapis.com. You can find these in the Supported products chart, or you can programmatically query for them with the Google Discovery API, as sketched below.
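For example, here's a minimal sketch (assuming the Python client library installed in an earlier step) that queries the public Discovery API for service names; treat it as illustrative rather than part of this codelab's sample app:
# Illustrative only: list Google API "service names" via the public Discovery
# API. No credentials are needed, so a plain HTTP client is passed explicitly.
from googleapiclient import discovery
from httplib2 import Http

svc = discovery.build('discovery', 'v1', http=Http())
for api in svc.apis().list(preferred=True).execute().get('items', []):
    print('%s (%s)' % (api['name'], api.get('title', '')))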
Armed with this information, using Cloud Shell (or your local development environment with the gcloud command-line tool installed), you can enable an API or service as follows:
gcloud services enable SERVICE_NAME.googleapis.com
Example 1: Enable the Cloud Vision API
gcloud services enable vision.googleapis.com
Example 2: Enable the Google App Engine serverless compute platform
gcloud services enable appengine.googleapis.com
Example 3: Enable multiple APIs with one request. For example, if this codelab has viewers deploying an app using the Cloud Translation API to App Engine, Cloud Functions, and Cloud Run, the command line would be:
gcloud services enable appengine.googleapis.com cloudfunctions.googleapis.com artifactregistry.googleapis.com run.googleapis.com translate.googleapis.com
This command enables App Engine, Cloud Functions, Cloud Run, and the Cloud Translation API. Furthermore, it enables Artifact Registry because that's where the Cloud Build system must register container images in order to deploy them to Cloud Run.
There are also a few commands that let you query for the APIs available to enable, or for the APIs that have already been enabled for your project.
Example 4: Query for Google APIs available to enable for your project
gcloud services list --available --filter="name:googleapis.com"
Example 5: Query for Google APIs enabled for your project
gcloud services list
For more information on the above commands, see the enabling and disabling services and listing services documentation.
Option 2: Cloud Console
You can also enable the Google APIs in the API Manager. From the Cloud Console, go to API Manager. On this dashboard page, you'll see some traffic information for your app, graphs showing application requests, errors generated by your app, and your app's response times:
Below these charts is a list of Google APIs enabled for your project:
To enable (or disable) APIs, click Enable APIs and Services at the top:
Alternatively, go to the left-navigation bar and select APIs & Services → Library:
Either way, you'll arrive at the API Library page:
Enter an API name to search for and see matching results:
Select the API you're seeking to enable and click the Enable button:
The process of enabling all APIs is similar, regardless of which Google API you wish to use.
Cost
Many Google APIs can be used without fees; however, there are costs when using most Google Cloud products and APIs. When enabling Cloud APIs, you may be asked for an active billing account. However, some Google Cloud products feature an "Always Free" tier, which you have to exceed in order to incur billing charges.
New Google Cloud users qualify for the Free Trial, currently $300 USD good for the first 90 days. Codelabs generally don't incur much, if any, billing, so we suggest you hold off on the Free Trial until you're really ready to give it a test drive, especially since it's a one-time offer. The Free Tier quotas don't expire and apply regardless of whether you use the Free Trial.
Users should reference the pricing information for any API before enabling it (example: the Cloud Vision API pricing page), especially noting whether it has a free tier, and if so, what it is. So long as you stay within specified daily or monthly limits in aggregate, you should not incur any charges. Pricing and free tiers vary between Google product group APIs. Examples:
- Google Cloud — each product is billed differently and generally pay-per-use; see free tier information above.
- Google Maps — features a suite of APIs and offers users an overall $200USD free monthly credit.
- Google Workspace (formerly G Suite) APIs — provides usage (up to certain limits) covered by a Google Workspace monthly subscription fee, so there's no direct billing for use of APIs for applications like Gmail, Google Drive, Calendar, Docs, Sheets, or Slides.
Different Google products are billed differently, so be sure to reference the appropriate documentation for that information.
Summary
Now that Cloud Vision has been enabled, turn on the other three APIs (Google Drive, Cloud Storage, Google Sheets) in the same way. From Cloud Shell, use gcloud services enable, or from the Cloud Console:
- Go back to the API Library
- Start a search by typing a few letters of its name
- Select the desired API, and
- Enable
Lather, rinse, and repeat. For Cloud Storage, there are several choices: choose the "Google Cloud Storage JSON API". The Cloud Storage API will also expect an active billing account.
7. Step 0: Setup imports & authorization code
This is the beginning of a medium-sized piece of code, so loosely following agile practices helps ensure a common, stable, and working piece of infrastructure before tackling the main application. Double-check that client_secret.json is available in your current directory, and either start up ipython and enter the following code snippet, or save it to analyze_gsimg.py and run it from the shell (the latter is preferred because we'll continue to add to the code sample):
from __future__ import print_function
from googleapiclient import discovery, http
from httplib2 import Http
from oauth2client import file, client, tools
# process credentials for OAuth2 tokens
SCOPES = 'https://www.googleapis.com/auth/drive.readonly'
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
creds = tools.run_flow(flow, store)
# create API service endpoints
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
This core component includes code blocks for module/package imports, processing user auth credentials, and creating API service endpoints. The key pieces of the code you should review:
- Importing the print() function makes this sample Python 2-3 compatible, and the Google library imports bring in all of the tools necessary to communicate with Google APIs.
- The SCOPES variable represents the permissions to request from the user—there's only one for now: the permission to read data from their Google Drive.
- The remainder of the credentials processing code reads in cached OAuth2 tokens, possibly updating to a new access token with the refresh token if the original access token has expired.
- If no tokens have been created or retrieving a valid access token failed for another reason, the user must go through the OAuth2 3-legged flow (3LO): create the dialog with the requested permissions and prompt the user to accept. Once they do, the app continues; otherwise tools.run_flow() throws an exception and execution halts.
- Once the user grants permission, an HTTP client is created to communicate with the server, and all requests are signed with the user's credentials for security. Then a service endpoint to the Google Drive API (version 3) is created with that HTTP client and assigned to DRIVE.
Running the application
The first time you execute the script, it won't have the authorization to access the user's files on Drive (yours). The output looks like this with execution paused:
$ python3 ./analyze_gsimg.py
/usr/local/lib/python3.6/site-packages/oauth2client/_helpers.py:255: UserWarning: Cannot access storage.json: No such file or directory
  warnings.warn(_MISSING_FILE_MESSAGE.format(filename))

Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?client_id=LONG-STRING.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.readonly&access_type=offline&response_type=code

If your browser is on a different machine then exit and re-run this
application with the command-line parameter

  --noauth_local_webserver
If you're running from the Cloud Shell, skip ahead to the "From Cloud Shell" section then scroll back to review the relevant screens in "From local development environment" when appropriate.
From local development environment
The command-line script is paused as a browser window opens. You may get a scary-looking warning page that looks like this:
This is a legitimate concern, as you're trying to run an app that accesses user data. Since this is just a demo app, and you're the developer, hopefully you trust yourself enough to proceed. To understand this better, put yourself in your user's shoes: you're being asked to allow someone else's code to access your data. If you intend to publish an app like this, you'll go through the verification process so your users won't see this screen.
After clicking the "go to ‘unsafe' app" link, you'll get an OAuth2 permissions dialog that looks something like the below—we're always improving our user interface so don't worry if it's not an exact match:
The OAuth2 flow dialog reflects the permissions the developer is requesting (via the SCOPES variable). In this case, it's the ability to view and download from the user's Google Drive. In application code, these permission scopes appear as URIs, but they're translated into the language specified by the user's locale. Here the user must give explicit authorization for the requested permission(s); otherwise an exception is thrown and the script does not proceed further.
You may even get one more dialog asking for your confirmation:
NOTE: Some users keep multiple web browsers logged into different accounts, so this authorization request may go to the wrong browser tab/window, and you may have to cut-n-paste the link for this request into a browser that's logged in with the correct account.
From Cloud Shell
From Cloud Shell, no browser window pops up, leaving you stuck. Realize the diagnostic message at the bottom was meant for you:
If your browser is on a different machine then exit and re-run this application with the command-line parameter --noauth_local_webserver
You'll have to press ^C (Ctrl-C, or another keypress that halts script execution) and re-run it from your shell with the extra flag. When you run it this way, you'll get the following output instead:
$ python3 analyze_gsimg.py --noauth_local_webserver
/usr/local/lib/python3.7/site-packages/oauth2client/_helpers.py:255: UserWarning: Cannot access storage.json: No such file or directory
  warnings.warn(_MISSING_FILE_MESSAGE.format(filename))

Go to the following link in your browser:

    https://accounts.google.com/o/oauth2/auth?client_id=LONG-STRING.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.readonly&access_type=offline&response_type=code

Enter verification code:
(Ignore the warning because we know storage.json hasn't been created yet.) Following the instructions in another browser tab with that URL, you'll get an experience nearly identical to what was described above for local development environments (see screenshots above). At the end is one final screen with the verification code to enter in the Cloud Shell:
Copy and paste this code into the terminal window.
Summary
Other than, "Authentication successful
", don't expect any additional output. Recall this is just the setup... you haven't done anything yet. What you have done is successfully begun your journey to something more likely to execute correctly the first time. (The best part is you were only prompted for authorization once; all successive executions skip it because your permissions have been cached.) Now let's make the code do some real work resulting in actual output.
Troubleshooting
If you get an error instead of no output, it may be due to one or more causes.
8. Step 1: Download image from Google Drive
In the previous step, we recommended creating the code as analyze_gsimg.py and editing from there. It's also possible to cut-n-paste everything directly into iPython or a standard Python shell; however, it's more cumbersome, as we're going to continue building the app piece by piece.
Assume your app has been authorized and the API service endpoint created. In your code, it's represented by the DRIVE variable. Now let's find an image file on your Google Drive and set its name in a variable called FILE. Enter that plus the following drive_get_img() function just below the code from Step 0:
FILE = 'YOUR_IMG_ON_DRIVE' # fill-in with name of your Drive file
def drive_get_img(fname):
'download file from Drive and return file info & binary if found'
# search for file on Google Drive
rsp = DRIVE.files().list(q="name='%s'" % fname,
fields='files(id,name,mimeType,modifiedTime)'
).execute().get('files', [])
# download binary & return file info if found, else return None
if rsp:
target = rsp[0] # use first matching file
fileId = target['id']
fname = target['name']
mtype = target['mimeType']
binary = DRIVE.files().get_media(fileId=fileId).execute()
return fname, mtype, target['modifiedTime'], binary
The Drive files() collection has a list() method which performs a query (the q parameter) for the specified file. The fields parameter is used to specify which return values you're interested in—why bother getting everything back and slowing things down if you don't care about the other values? If you're new to field masks for filtering API return values, check out this blog post & video. Otherwise execute the query and grab the files attribute returned, defaulting to an empty list if there are no matches.
If there are no results, the rest of the function is skipped and None is returned (implicitly). Otherwise grab the first matching response (rsp[0]) and return the filename, its MIMEtype, the last modification timestamp, and finally, its binary payload, retrieved by the get_media() method (via its file ID), also in the files() collection. (Method names may differ slightly in other language client libraries.)
The final part is the "main" body driving the entire application:
if __name__ == '__main__':
# download img file & info from Drive
rsp = drive_get_img(FILE)
if rsp:
fname, mtype, ftime, data = rsp
print('Downloaded %r (%s, %s, size: %d)' % (fname, mtype, ftime, len(data)))
else:
        print('ERROR: Cannot download %r from Drive' % FILE)
Assuming an image named section-work-card-img_2x.jpg on Drive and set as FILE, upon successful script execution you should see output confirming it was able to read the file from Drive (but it is not saved to your computer):
$ python3 analyze_gsimg.py
Downloaded 'section-work-card-img_2x.jpg' (image/jpeg, 2020-02-27T09:27:22.095Z, size: 27781)
Troubleshooting
If you don't get the successful output shown above, it may be due to one or more causes.
Summary
In this section, you learned how (in 2 separate API calls) to connect to the Drive API, query for a specific file, then download it. The business use-case: archive your Drive data and perhaps analyze it, such as with Google Cloud tools. The code for your app at this stage should match what's in the repo at step1-drive/analyze_gsimg.py.
Read more about downloading files on Google Drive here or check out this blog post & video. This part of the codelab is nearly identical to the entire intro to Google Workspace APIs codelab—instead of downloading a file, it displays the first 100 files/folders on a user's Google Drive and uses a more restrictive scope.
9. Step 2: Archive file to Cloud Storage
The next step is to add support for Google Cloud Storage. For this we need to import another Python package, io. Ensure the top section of your imports now looks like this:
from __future__ import print_function
import io
In addition to the Drive filename, we need some information on where to store this file on Cloud Storage, specifically the name of the "bucket" you're going to put it in and any "parent folder" prefix(es). More on this in a moment:
FILE = 'YOUR_IMG_ON_DRIVE'
BUCKET = 'YOUR_BUCKET_NAME'
PARENT = '' # YOUR IMG FILE PREFIX
A word on buckets: Cloud Storage provides amorphous blob storage. When uploading files there, it doesn't understand the concept of file types, extensions, etc., the way Google Drive does. They're just "blobs" to Cloud Storage. Furthermore, there's no concept of folders or subdirectories in Cloud Storage. Yes, you can have slashes (/) in filenames to represent the abstraction of multiple sub-folders, but at the end of the day, all your blobs go into a bucket, and "/"s are just characters in their filenames. Check out the bucket and object naming conventions page for more info.
Step 1 above requested the Drive read-only scope. At the time, that's all you needed. Now, upload (read-write) permission to Cloud Storage is required. Change SCOPES from a single string variable to an array (Python tuple [or list]) of permission scopes so it looks like this:
SCOPES = (
'https://www.googleapis.com/auth/drive.readonly',
'https://www.googleapis.com/auth/devstorage.full_control',
)
Now create a service endpoint to Cloud Storage right below the one for Drive. Note we slightly altered the call to reuse the same HTTP client object as there's no need to make a new one when it can be a shared resource.
# create API service endpoints
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
GCS = discovery.build('storage', 'v1', http=HTTP)
Now add this function (after drive_get_img()) which uploads to Cloud Storage:
def gcs_blob_upload(fname, bucket, media, mimetype):
'upload an object to a Google Cloud Storage bucket'
# build blob metadata and upload via GCS API
body = {'name': fname, 'uploadType': 'multipart', 'contentType': mimetype}
return GCS.objects().insert(bucket=bucket, body=body,
media_body=http.MediaIoBaseUpload(io.BytesIO(media), mimetype),
fields='bucket,name').execute()
The objects().insert() call requires the bucket name, file metadata, and the binary blob itself. To filter out the return values, the fields variable requests just the bucket and object names returned from the API. To learn more about these field masks on API read requests, check out this post & video.
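For reference, with fields='bucket,name' the trimmed response is just a two-key dictionary. A hypothetical usage sketch (the data and mtype variables come from the Drive download step; the bucket and folder names mirror the example used later in this step):
# Hypothetical usage: the trimmed insert() response is a small dict such as
#   {'bucket': 'vision-demo', 'name': 'analyzed_imgs/some-image.jpg'}
# which is handy for building a gs:// or console link afterwards.
rsp = gcs_blob_upload('analyzed_imgs/some-image.jpg', 'vision-demo', data, mtype)
if rsp:
    print('gs://%s/%s' % (rsp['bucket'], rsp['name']))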
Now integrate the use of gcs_blob_upload() into the main application:
# upload file to GCS
gcsname = '%s/%s'% (PARENT, fname)
rsp = gcs_blob_upload(gcsname, BUCKET, data, mtype)
if rsp:
print('Uploaded %r to GCS bucket %r' % (rsp['name'], rsp['bucket']))
else:
print('ERROR: Cannot upload %r to Cloud Storage' % gcsname)
The gcsname variable merges any "parent subdirectory" name(s) with the filename itself, and when prefixed with the bucket name, gives the impression you're archiving the file at "/bucket/parent.../filename". Slip this chunk right after the first print() function, just above the else clause, so the entire "main" looks like this:
if __name__ == '__main__':
# download img file & info from Drive
rsp = drive_get_img(FILE)
if rsp:
fname, mtype, ftime, data = rsp
print('Downloaded %r (%s, %s, size: %d)' % (fname, mtype, ftime, len(data)))
# upload file to GCS
gcsname = '%s/%s'% (PARENT, fname)
rsp = gcs_blob_upload(gcsname, BUCKET, data, mtype)
if rsp:
print('Uploaded %r to GCS bucket %r' % (rsp['name'], rsp['bucket']))
else:
print('ERROR: Cannot upload %r to Cloud Storage' % gcsname)
else:
        print('ERROR: Cannot download %r from Drive' % FILE)
Let's say we specify a bucket named "vision-demo" with "analyzed_imgs" as a "parent subdirectory". Once you set those variables and run the script again, section-work-card-img_2x.jpg will be downloaded from Drive then uploaded to Cloud Storage, right? NOT!
$ python3 analyze_gsimg.py
Downloaded 'section-work-card-img_2x.jpg' (image/jpeg, 2020-02-27T09:27:22.095Z, size: 27781)
Traceback (most recent call last):
  File "analyze_gsimg.py", line 85, in <module>
    io.BytesIO(data), mimetype=mtype), mtype)
  File "analyze_gsimg.py", line 72, in gcs_blob_upload
    media_body=media, fields='bucket,name').execute()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 134, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/googleapiclient/http.py", line 898, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://storage.googleapis.com/upload/storage/v1/b/PROJECT_ID/o?fields=bucket%2Cname&alt=json&uploadType=multipart returned "Insufficient Permission">
Look carefully: while the Drive download succeeded, the upload to Cloud Storage failed. Why?
The reason is that when we authorized this application originally for Step 1, we only authorized read-only access to Google Drive. While we added the read-write scope for Cloud Storage, we never prompted the user to authorize that access. To make it work, we need to blow away the storage.json file, which is missing this scope, and re-run.
After you re-authorize (confirm this by looking inside storage.json and seeing both scopes there), your output will then be as expected:
$ python3 analyze_gsimg.py
. . .
Authentication successful.
Downloaded 'section-work-card-img_2x.jpg' (image/jpeg, 2020-02-27T09:27:22.095Z, size: 27781)
Uploaded 'analyzed_imgs/section-work-card-img_2x.jpg' to GCS bucket 'vision-demo'
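If you're curious how to peek at the cached token programmatically, the snippet below reads storage.json. The file layout is an oauth2client implementation detail, so the exact 'scopes' field name is an assumption that may vary between library versions:
# Peek at the cached OAuth2 token (oauth2client implementation detail; the
# 'scopes' field name is an assumption, not a contract).
import json

with open('storage.json') as f:
    cached = json.load(f)
print(cached.get('scopes', '(no "scopes" field found)'))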
Summary
This is a big deal, showing you, in relatively few lines of code, how to transfer files between two cloud-based storage systems. The business use-case here is to back up a possibly constrained resource to "colder," cheaper storage, as mentioned earlier. Cloud Storage offers different storage classes depending on whether you access your data regularly, monthly, quarterly, or annually.
Of course, developers do ask us from time to time why both Google Drive and Cloud Storage exist. After all, aren't they both file storage in the cloud? That's why we made this video. Your code at this stage should match what's in the repo at step2-gcs/analyze_gsimg.py.
10. Step 3: Analyze with Cloud Vision
While we now know you can move data between Google Cloud and Google Workspace, we haven't done any analysis yet, so it's time to send the image to Cloud Vision for label annotation, a.k.a. object detection. To do so, we need to Base64-encode the data, which means another Python module, base64. Ensure your top import section now looks like this:
from __future__ import print_function
import base64
import io
By default, the Vision API returns all the labels it finds. To keep things consistent, let's request just the top 5 (adjustable by the user of course). We'll use a constant variable TOP for this; add it under all the other constants:
FILE = 'YOUR_IMG_ON_DRIVE'
BUCKET = 'YOUR_BUCKET_NAME'
PARENT = '' # YOUR IMG FILE PREFIX
TOP = 5 # TOP # of VISION LABELS TO SAVE
As with earlier steps, we need another permission scope, this time for the Vision API. Update SCOPES with its string:
SCOPES = (
'https://www.googleapis.com/auth/drive.readonly',
'https://www.googleapis.com/auth/devstorage.full_control',
'https://www.googleapis.com/auth/cloud-vision',
)
Now create a service endpoint to Cloud Vision so it lines up with the others like this:
# create API service endpoints
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
GCS = discovery.build('storage', 'v1', http=HTTP)
VISION = discovery.build('vision', 'v1', http=HTTP)
Now add this function that sends the image payload to Cloud Vision:
def vision_label_img(img, top):
'send image to Vision API for label annotation'
# build image metadata and call Vision API to process
body = {'requests': [{
'image': {'content': img},
'features': [{'type': 'LABEL_DETECTION', 'maxResults': top}],
}]}
rsp = VISION.images().annotate(body=body).execute().get('responses', [{}])[0]
# return top labels for image as CSV for Sheet (row)
if 'labelAnnotations' in rsp:
return ', '.join('(%.2f%%) %s' % (
label['score']*100., label['description']) \
for label in rsp['labelAnnotations'])
The images().annotate() call requires the data plus the desired API features. The top-5 label cap is part of the payload too (but completely optional). If the call is successful, the payload returns the top 5 labels of objects found in the image plus a confidence score that each object is in the image. (If no response comes back, assign an empty Python dictionary so the following if statement doesn't fail.) This function simply collates that data into a CSV string for eventual use in our report.
These 5 lines that call vision_label_img() should be placed right after the successful upload to Cloud Storage:
# process w/Vision
rsp = vision_label_img(base64.b64encode(data).decode('utf-8'), TOP)
if rsp:
print('Top %d labels from Vision API: %s' % (TOP, rsp))
else:
print('ERROR: Vision API cannot analyze %r' % fname)
With that addition, the entire main driver should look like this:
if __name__ == '__main__':
# download img file & info from Drive
rsp = drive_get_img(FILE)
if rsp:
fname, mtype, ftime, data = rsp
print('Downloaded %r (%s, %s, size: %d)' % (fname, mtype, ftime, len(data)))
# upload file to GCS
gcsname = '%s/%s'% (PARENT, fname)
rsp = gcs_blob_upload(gcsname, BUCKET, data, mtype)
if rsp:
print('Uploaded %r to GCS bucket %r' % (rsp['name'], rsp['bucket']))
# process w/Vision
rsp = vision_label_img(base64.b64encode(data).decode('utf-8'), TOP)
if rsp:
print('Top %d labels from Vision API: %s' % (TOP, rsp))
else:
print('ERROR: Vision API cannot analyze %r' % fname)
else:
print('ERROR: Cannot upload %r to Cloud Storage' % gcsname)
else:
        print('ERROR: Cannot download %r from Drive' % FILE)
Deleting storage.json to refresh the scopes and re-running the updated application should result in output similar to the following, noting the addition of Cloud Vision analysis:
$ python3 analyze_gsimg.py
. . .
Authentication successful.
Downloaded 'section-work-card-img_2x.jpg' (image/jpeg, 2020-02-27T09:27:22.095Z, size: 27781)
Uploaded 'analyzed_imgs/section-work-card-img_2x.jpg' to GCS bucket 'vision-demo'
Top 5 labels from Vision API: (89.94%) Sitting, (86.09%) Interior design, (82.08%) Furniture, (81.52%) Table, (80.85%) Room
Summary
Not everyone has the machine learning expertise to create and train their own ML models to analyze their data. The Google Cloud team has made available some of Google's pre-trained models for general use and put them behind APIs, helping democratize AI & ML for everyone.
If you're a developer and can call an API, you can use machine learning. Cloud Vision is just one of the API services you can use to analyze your data. Learn about the others here. Your code should now match what's in the repo at step3-vision/analyze_gsimg.py.
11. Step 4: Generate a report with Google Sheets
At this point, you've been able to archive corporate data and analyze it, but what's lacking is a summary of this work. Let's organize all results into a single report you can hand to your boss. What's more presentable to management than a spreadsheet?
No additional imports are needed for the Google Sheets API, and the only new piece of information needed is the file ID of an existing spreadsheet already formatted and awaiting a new row of data, hence the SHEET constant. We recommend you create a new spreadsheet that looks similar to the following:
The URL for that spreadsheet will look like the following: https://docs.google.com/spreadsheets/d/FILE_ID/edit. Grab that FILE_ID and assign it as a string to SHEET.
We also snuck in a tiny function named k_ize(), which converts bytes to kilobytes, defined as a Python lambda since it's a simple one-liner. Both of these, integrated with the other constants, look like this:
k_ize = lambda b: '%6.2fK' % (b/1000.) # bytes to kBs
FILE = 'YOUR_IMG_ON_DRIVE'
BUCKET = 'YOUR_BUCKET_NAME'
PARENT = '' # YOUR IMG FILE PREFIX
SHEET = 'YOUR_SHEET_ID'
TOP = 5 # TOP # of VISION LABELS TO SAVE
As with earlier steps, we need another permission scope, this time read-write for the Sheets API. SCOPES now has all 4 needed:
SCOPES = (
'https://www.googleapis.com/auth/drive.readonly',
'https://www.googleapis.com/auth/devstorage.full_control',
'https://www.googleapis.com/auth/cloud-vision',
'https://www.googleapis.com/auth/spreadsheets',
)
Now create a service endpoint to Google Sheets near the others, so it looks like this:
# create API service endpoints
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
GCS = discovery.build('storage', 'v1', http=HTTP)
VISION = discovery.build('vision', 'v1', http=HTTP)
SHEETS = discovery.build('sheets', 'v4', http=HTTP)
The functionality of sheet_append_row() is straightforward: take a row of data and a Sheet's ID, then add that row to that Sheet:
def sheet_append_row(sheet, row):
'append row to a Google Sheet, return #cells added'
# call Sheets API to write row to Sheet (via its ID)
rsp = SHEETS.spreadsheets().values().append(
spreadsheetId=sheet, range='Sheet1',
valueInputOption='USER_ENTERED', body={'values': [row]}
).execute()
if rsp:
return rsp.get('updates').get('updatedCells')
The spreadsheets().values().append() call requires the Sheet's file ID, a range of cells, how the data should be entered, and the data itself. The file ID is straightforward; the range of cells is given in A1 notation. A range of "Sheet1" means the entire Sheet—this signals to the API to append the row after all the data in the Sheet. There is a pair of choices for how the data should be added to the Sheet: "RAW" (enter the string data verbatim) or "USER_ENTERED" (write the data as if a user entered it on their keyboard with the Google Sheets application, preserving any cell formatting features), as illustrated below.
If the call is successful, the return value doesn't really have anything super useful, so we opted for getting the number of cells updated by the API request. Below is the code that calls that function:
# push results to Sheet, get cells-saved count
fsize = k_ize(len(data))
row = [PARENT,
'=HYPERLINK("storage.cloud.google.com/%s/%s", "%s")' % (
BUCKET, gcsname, fname), mtype, ftime, fsize, rsp
]
rsp = sheet_append_row(SHEET, row)
if rsp:
print('Updated %d cells in Google Sheet' % rsp)
else:
print('ERROR: Cannot write row to Google Sheets')
The Google Sheet has columns representing data such as any parent "subdirectory," the location of the archived file on Cloud Storage (bucket + filename), the file's MIMEtype, the file size (originally in bytes, but converted to kilobytes with k_ize()), and the Cloud Vision labels string. Also note the archived location is a hyperlink so your manager can click through to confirm it's been backed up safely.
Adding the block of code above right after displaying the results from Cloud Vision, the main portion driving the app is now complete, although structurally a bit complex:
if __name__ == '__main__':
# download img file & info from Drive
rsp = drive_get_img(FILE)
if rsp:
fname, mtype, ftime, data = rsp
print('Downloaded %r (%s, %s, size: %d)' % (fname, mtype, ftime, len(data)))
# upload file to GCS
gcsname = '%s/%s'% (PARENT, fname)
rsp = gcs_blob_upload(gcsname, BUCKET, data, mtype)
if rsp:
print('Uploaded %r to GCS bucket %r' % (rsp['name'], rsp['bucket']))
# process w/Vision
            rsp = vision_label_img(base64.b64encode(data).decode('utf-8'), TOP)
if rsp:
print('Top %d labels from Vision API: %s' % (TOP, rsp))
# push results to Sheet, get cells-saved count
fsize = k_ize(len(data))
row = [PARENT,
'=HYPERLINK("storage.cloud.google.com/%s/%s", "%s")' % (
BUCKET, gcsname, fname), mtype, ftime, fsize, rsp
]
rsp = sheet_append_row(SHEET, row)
if rsp:
print('Updated %d cells in Google Sheet' % rsp)
else:
print('ERROR: Cannot write row to Google Sheets')
else:
print('ERROR: Vision API cannot analyze %r' % fname)
else:
print('ERROR: Cannot upload %r to Cloud Storage' % gcsname)
else:
        print('ERROR: Cannot download %r from Drive' % FILE)
Deleting storage.json one last time and re-running the updated application should result in output similar to the following, noting the new Google Sheets line at the end:
$ python3 analyze_gsimg.py
. . .
Authentication successful.
Downloaded 'section-work-card-img_2x.jpg' (image/jpeg, 2020-02-27T09:27:22.095Z, size: 27781)
Uploaded 'analyzed_imgs/section-work-card-img_2x.jpg' to GCS bucket 'vision-demo'
Top 5 labels from Vision API: (89.94%) Sitting, (86.09%) Interior design, (82.08%) Furniture, (81.52%) Table, (80.85%) Room
Updated 6 cells in Google Sheet
The extra line of output, while useful, is better visualized by taking a peek at the updated Google Sheet, with the last line (row 7 in the example below) added to the data set already present:
Summary
In the first 3 steps of this tutorial, you connected with Google Workspace and Google Cloud APIs to move data and to analyze it, representing 80% of all the work. However, at the end of the day, none of this means anything if you can't present to management all you've accomplished. To better visualize the results, summarizing them all in a generated report speaks volumes.
To further enhance the usefulness of the analysis, in addition to writing the results into a spreadsheet, one possible enhancement would be to index these top 5 labels for each image so that an internal database can be built, allowing authorized employees to query for images by search term, but we leave that as an exercise for the reader.
For now, our results are in a Sheet and accessible to management. The code for your app at this stage should match what's in the repo at step4-sheets/analyze_gsimg.py. The final step is to clean up the code and turn it into a usable script.
12. *Final step: refactor
(optional) It's good to have a working app; however, can we improve it? Yes, especially the main application, which seems like a jumbled mess. Let's put that into its own function and drive it with user input rather than fixed constants. We'll do that with the argparse module. Furthermore, let's launch a web browser tab to display the Sheet once we've written our row of data to it. This is doable with the webbrowser module. Weave these imports with the others so the top imports look like this:
from __future__ import print_function
import argparse
import base64
import io
import webbrowser
To be able to use this code in other applications, we need the ability to suppress the output, so let's add a DEBUG flag to make that happen, adding this line to the end of the constants section near the top:
DEBUG = False
Now, about the main body. As we were building this sample, you should've begun to feel "uncomfortable" as our code adds another level of nesting with each service added. If you felt that way, you're not alone, as this adds to code complexity as described in this Google Testing Blog post.
Following this best practice, let's reorganize the main part of the app into a function and return at each "break point" instead of nesting (returning None if any step fails and True if all succeed):
def main(fname, bucket, sheet_id, folder, top, debug):
'"main()" drives process from image download through report generation'
# download img file & info from Drive
rsp = drive_get_img(fname)
if not rsp:
return
fname, mtype, ftime, data = rsp
if debug:
print('Downloaded %r (%s, %s, size: %d)' % (fname, mtype, ftime, len(data)))
# upload file to GCS
gcsname = '%s/%s'% (folder, fname)
rsp = gcs_blob_upload(gcsname, bucket, data, mtype)
if not rsp:
return
if debug:
print('Uploaded %r to GCS bucket %r' % (rsp['name'], rsp['bucket']))
# process w/Vision
    rsp = vision_label_img(base64.b64encode(data).decode('utf-8'), top)
if not rsp:
return
if debug:
print('Top %d labels from Vision API: %s' % (top, rsp))
# push results to Sheet, get cells-saved count
fsize = k_ize(len(data))
row = [folder,
'=HYPERLINK("storage.cloud.google.com/%s/%s", "%s")' % (
bucket, gcsname, fname), mtype, ftime, fsize, rsp
]
rsp = sheet_append_row(sheet_id, row)
if not rsp:
return
if debug:
print('Added %d cells to Google Sheet' % rsp)
return True
It's neater and cleaner, leaving behind that deeply nested if-else chain along with reducing code complexity as described above. The last piece of the puzzle is to create a "real" main driver, allowing for user customization and minimizing output (unless desired):
if __name__ == '__main__':
# args: [-hv] [-i imgfile] [-b bucket] [-f folder] [-s Sheet ID] [-t top labels]
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--imgfile", action="store_true",
default=FILE, help="image file filename")
parser.add_argument("-b", "--bucket_id", action="store_true",
default=BUCKET, help="Google Cloud Storage bucket name")
parser.add_argument("-f", "--folder", action="store_true",
default=PARENT, help="Google Cloud Storage image folder")
parser.add_argument("-s", "--sheet_id", action="store_true",
default=SHEET, help="Google Sheet Drive file ID (44-char str)")
parser.add_argument("-t", "--viz_top", action="store_true",
default=TOP, help="return top N (default %d) Vision API labels" % TOP)
parser.add_argument("-v", "--verbose", action="store_true",
default=DEBUG, help="verbose display output")
args = parser.parse_args()
print('Processing file %r... please wait' % args.imgfile)
rsp = main(args.imgfile, args.bucket_id,
args.sheet_id, args.folder, args.viz_top, args.verbose)
if rsp:
sheet_url = 'https://docs.google.com/spreadsheets/d/%s/edit' % args.sheet_id
print('DONE: opening web browser to it, or see %s' % sheet_url)
webbrowser.open(sheet_url, new=1, autoraise=True)
else:
print('ERROR: could not process %r' % args.imgfile)
If all steps are successful, the script launches a web browser to the spreadsheet specified where the new data row was added.
Summary
No need to delete storage.json since no scope changes occurred. Re-running the updated application reveals a new browser window opened to the modified Sheet and fewer lines of output; issuing a -h option shows users their options, including -v to restore the now-suppressed lines of output seen earlier:
$ python3 analyze_gsimg.py
Processing file 'section-work-card-img_2x.jpg'... please wait
DONE: opening web browser to it, or see https://docs.google.com/spreadsheets/d/SHEET_ID/edit

$ python3 analyze_gsimg.py -h
usage: analyze_gsimg.py [-h] [-i IMGFILE] [-b BUCKET_ID] [-f FOLDER]
                        [-s SHEET_ID] [-t VIZ_TOP] [-v]

optional arguments:
  -h, --help            show this help message and exit
  -i IMGFILE, --imgfile IMGFILE
                        image file filename
  -b BUCKET_ID, --bucket_id BUCKET_ID
                        Google Cloud Storage bucket name
  -f FOLDER, --folder FOLDER
                        Google Cloud Storage image folder
  -s SHEET_ID, --sheet_id SHEET_ID
                        Google Sheet Drive file ID (44-char str)
  -t VIZ_TOP, --viz_top VIZ_TOP
                        return top N (default 5) Vision API labels
  -v, --verbose         verbose display output
The other options let users choose different Drive file names, Cloud Storage "subdirectory" and bucket names, the top "N" results from Cloud Vision, and spreadsheet (Sheets) file IDs. With these last updates, the final version of your code should now match what's in the repo at final/analyze_gsimg.py, as well as here, in its entirety:
'''
analyze_gsimg.py - analyze Google Workspace image processing workflow

Download image from Google Drive, archive to Google Cloud Storage, send
to Google Cloud Vision for processing, add results row to Google Sheet.
'''

from __future__ import print_function
import argparse
import base64
import io
import webbrowser

from googleapiclient import discovery, http
from httplib2 import Http
from oauth2client import file, client, tools

k_ize = lambda b: '%6.2fK' % (b/1000.)  # bytes to kBs
FILE = 'YOUR_IMG_ON_DRIVE'
BUCKET = 'YOUR_BUCKET_NAME'
PARENT = ''     # YOUR IMG FILE PREFIX
SHEET = 'YOUR_SHEET_ID'
TOP = 5         # TOP # of VISION LABELS TO SAVE
DEBUG = False

# process credentials for OAuth2 tokens
SCOPES = (
    'https://www.googleapis.com/auth/drive.readonly',
    'https://www.googleapis.com/auth/devstorage.full_control',
    'https://www.googleapis.com/auth/cloud-vision',
    'https://www.googleapis.com/auth/spreadsheets',
)
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)

# create API service endpoints
HTTP = creds.authorize(Http())
DRIVE  = discovery.build('drive',   'v3', http=HTTP)
GCS    = discovery.build('storage', 'v1', http=HTTP)
VISION = discovery.build('vision',  'v1', http=HTTP)
SHEETS = discovery.build('sheets',  'v4', http=HTTP)


def drive_get_img(fname):
    'download file from Drive and return file info & binary if found'

    # search for file on Google Drive
    rsp = DRIVE.files().list(q="name='%s'" % fname,
            fields='files(id,name,mimeType,modifiedTime)'
    ).execute().get('files', [])

    # download binary & return file info if found, else return None
    if rsp:
        target = rsp[0]  # use first matching file
        fileId = target['id']
        fname = target['name']
        mtype = target['mimeType']
        binary = DRIVE.files().get_media(fileId=fileId).execute()
        return fname, mtype, target['modifiedTime'], binary


def gcs_blob_upload(fname, bucket, media, mimetype):
    'upload an object to a Google Cloud Storage bucket'

    # build blob metadata and upload via GCS API
    body = {'name': fname, 'uploadType': 'multipart', 'contentType': mimetype}
    return GCS.objects().insert(bucket=bucket, body=body,
            media_body=http.MediaIoBaseUpload(io.BytesIO(media), mimetype),
            fields='bucket,name').execute()


def vision_label_img(img, top):
    'send image to Vision API for label annotation'

    # build image metadata and call Vision API to process
    body = {'requests': [{
                'image':     {'content': img},
                'features': [{'type': 'LABEL_DETECTION', 'maxResults': top}],
    }]}
    rsp = VISION.images().annotate(body=body).execute().get('responses', [{}])[0]

    # return top labels for image as CSV for Sheet (row)
    if 'labelAnnotations' in rsp:
        return ', '.join('(%.2f%%) %s' % (
                label['score']*100., label['description']) \
                for label in rsp['labelAnnotations'])


def sheet_append_row(sheet, row):
    'append row to a Google Sheet, return #cells added'

    # call Sheets API to write row to Sheet (via its ID)
    rsp = SHEETS.spreadsheets().values().append(
            spreadsheetId=sheet, range='Sheet1',
            valueInputOption='USER_ENTERED', body={'values': [row]}
    ).execute()
    if rsp:
        return rsp.get('updates').get('updatedCells')


def main(fname, bucket, sheet_id, folder, top, debug):
    '"main()" drives process from image download through report generation'

    # download img file & info from Drive
    rsp = drive_get_img(fname)
    if not rsp:
        return
    fname, mtype, ftime, data = rsp
    if debug:
        print('Downloaded %r (%s, %s, size: %d)' % (fname, mtype, ftime, len(data)))

    # upload file to GCS
    gcsname = '%s/%s' % (folder, fname)
    rsp = gcs_blob_upload(gcsname, bucket, data, mtype)
    if not rsp:
        return
    if debug:
        print('Uploaded %r to GCS bucket %r' % (rsp['name'], rsp['bucket']))

    # process w/Vision
    rsp = vision_label_img(base64.b64encode(data).decode('utf-8'), top)
    if not rsp:
        return
    if debug:
        print('Top %d labels from Vision API: %s' % (top, rsp))

    # push results to Sheet, get cells-saved count
    fsize = k_ize(len(data))
    row = [folder,
            '=HYPERLINK("storage.cloud.google.com/%s/%s", "%s")' % (
            bucket, gcsname, fname), mtype, ftime, fsize, rsp
    ]
    rsp = sheet_append_row(sheet_id, row)
    if not rsp:
        return
    if debug:
        print('Added %d cells to Google Sheet' % rsp)
    return True


if __name__ == '__main__':
    # args: [-hv] [-i imgfile] [-b bucket] [-f folder] [-s Sheet ID] [-t top labels]
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", "--imgfile", action="store_true",
            default=FILE, help="image file filename")
    parser.add_argument("-b", "--bucket_id", action="store_true",
            default=BUCKET, help="Google Cloud Storage bucket name")
    parser.add_argument("-f", "--folder", action="store_true",
            default=PARENT, help="Google Cloud Storage image folder")
    parser.add_argument("-s", "--sheet_id", action="store_true",
            default=SHEET, help="Google Sheet Drive file ID (44-char str)")
    parser.add_argument("-t", "--viz_top", action="store_true",
            default=TOP, help="return top N (default %d) Vision API labels" % TOP)
    parser.add_argument("-v", "--verbose", action="store_true",
            default=DEBUG, help="verbose display output")
    args = parser.parse_args()

    print('Processing file %r... please wait' % args.imgfile)
    rsp = main(args.imgfile, args.bucket_id,
            args.sheet_id, args.folder, args.viz_top, args.verbose)
    if rsp:
        sheet_url = 'https://docs.google.com/spreadsheets/d/%s/edit' % args.sheet_id
        print('DONE: opening web browser to it, or see %s' % sheet_url)
        webbrowser.open(sheet_url, new=1, autoraise=True)
    else:
        print('ERROR: could not process %r' % args.imgfile)
We will make every attempt to keep this tutorial's contents up-to-date, but there will be occasions where the repo will have the most recent version of the code.
13. Congratulations!
This was one of the longer codelabs, and you made it all the way through. You tackled a possible enterprise scenario with roughly 130 lines of Python, leveraging Google Cloud and Google Workspace and moving data between them to build a working solution. Feel free to explore the open source repo for all versions of this app (more info below).
Clean up
- Use of Google Cloud APIs is not free, while Google Workspace APIs are covered by your monthly Google Workspace subscription fee (consumer Gmail users pay nothing), so there's no API clean-up/turndown required for Google Workspace users. For Google Cloud, you can go to your Cloud Console dashboard and check the Billing "card" for estimated charges.
- For Cloud Vision, you're allowed a fixed number of API calls per month for free. As long as you stay under those limits, there's no need to shut anything down or disable/delete your project. More information on the Vision API's billing and free quota can be found on its pricing page.
- Some Cloud Storage users receive a free amount of storage per month. If the images you archive in this codelab do not push you over that quota, you will not incur any charges. More information on GCS billing and free quota can be found on its pricing page. You can view and easily delete blobs from the Cloud Storage browser, or from code, as shown in the sketch after this list.
- Your use of Google Drive may also have a storage quota, and if you exceed it (or are close to it), you may actually consider using the tool you built in this codelab to archive those images to Cloud Storage to give yourself more space on Drive. More information on Google Drive storage can be found on the appropriate pricing page for Google Workspace Basic users or Gmail/consumer users.
While most Google Workspace Business and Enterprise plans have unlimited storage, your Drive folders could still become cluttered or overwhelming, and the app you built in this tutorial is a great way to archive extraneous files and clean up your Google Drive.
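If you prefer to delete the archived copies from code rather than the Cloud Storage browser, here is a minimal sketch that reuses the GCS service endpoint already built in analyze_gsimg.py. The helper name gcs_cleanup is ours (it's not part of the official sample), and it assumes you pass the same BUCKET and PARENT values used by the script:

# Sketch only: remove the objects this codelab archived, using the same
# Storage JSON API service (GCS) built in analyze_gsimg.py.
def gcs_cleanup(bucket=BUCKET, prefix=PARENT):
    'list blobs under the prefix and delete them (use with care)'
    rsp = GCS.objects().list(bucket=bucket, prefix=prefix,
            fields='items(name)').execute()
    for obj in rsp.get('items', []):
        print('Deleting gs://%s/%s' % (bucket, obj['name']))
        GCS.objects().delete(bucket=bucket, object=obj['name']).execute()

Because the script already requests the devstorage.full_control scope, the same authorized service can perform the deletes.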
Alternate versions
While final/analyze_gsimg.py is the "last" official version you work on in this tutorial, it's not the end of the story. One issue with the final version of the app is that it uses the older auth libraries, which have been deprecated. We chose this path because, at the time of this writing, the newer auth libraries did not yet support two key elements: OAuth token storage management and thread safety.
Current (newer) auth libraries
However, at some point the older auth libraries will no longer be supported, so we encourage you to review versions that use the newer (current) auth libraries in the repo's alt folder, even if they aren't threadsafe (but you can build your own solution that is). Look for files with *newauth* in their names.
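To give a flavor of the difference, here is a rough sketch (not the exact code from the alt folder) of how the OAuth boilerplate at the top of the script might look with the newer google-auth and google-auth-oauthlib packages; the token.json filename is our assumption, since these libraries leave token persistence up to you:

# Sketch only: replacing the oauth2client boilerplate with google-auth &
# google-auth-oauthlib (token storage is now your responsibility).
import os
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient import discovery

creds = None
if os.path.exists('token.json'):    # previously saved user token (assumed filename)
    creds = Credentials.from_authorized_user_file('token.json', SCOPES)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())    # silently refresh an expired token
    else:                           # run the OAuth consent flow in a browser
        flow = InstalledAppFlow.from_client_secrets_file('client_secret.json', SCOPES)
        creds = flow.run_local_server(port=0)
    with open('token.json', 'w') as token:    # persist the token manually
        token.write(creds.to_json())

DRIVE = discovery.build('drive', 'v3', credentials=creds)

The rest of the application stays the same; only the way credentials are obtained and passed to discovery.build() changes.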
Google Cloud product client libraries
Google Cloud recommends that all developers use the product client libraries when accessing Google Cloud APIs. Unfortunately, non-Cloud Google APIs don't have such libraries at this time, so using the lower-level libraries throughout keeps API usage consistent and improves readability. As with the recommendation above, alternative versions using the Google Cloud product client libraries are available in the repo's alt folder for you to review. Look for files with *-gcp* in their names.
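As a rough sketch (not the exact alt-folder code), here's what the Cloud Storage upload and Vision labeling steps might look like with the google-cloud-storage and google-cloud-vision product client libraries (assuming google-cloud-vision 2.x or newer). Note that these libraries authenticate with their own credentials, such as Application Default Credentials, rather than the user OAuth flow used elsewhere in this tutorial:

# Sketch only: Cloud Storage & Vision steps with the product client libraries.
from google.cloud import storage, vision

def gcs_blob_upload_gcp(fname, bucket_name, media, mimetype):
    'upload bytes to a Cloud Storage bucket via google-cloud-storage'
    bucket = storage.Client().bucket(bucket_name)
    blob = bucket.blob(fname)
    blob.upload_from_string(media, content_type=mimetype)
    return {'bucket': bucket_name, 'name': fname}

def vision_label_img_gcp(img_bytes, top):
    'label an image (raw bytes, no base64 needed) via google-cloud-vision'
    client = vision.ImageAnnotatorClient()
    rsp = client.label_detection(image=vision.Image(content=img_bytes))
    labels = rsp.label_annotations[:top]
    return ', '.join('(%.2f%%) %s' % (label.score*100., label.description)
            for label in labels)

One practical difference worth noticing: the product client library accepts raw image bytes, whereas the lower-level REST interface used in analyze_gsimg.py requires the payload to be base64-encoded.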
Service account authorization
When working purely in the cloud, there generally isn't a human user nor (human) user-owned data involved, which is why service accounts and service account authorization are primarily used with Google Cloud. Google Workspace documents, on the other hand, are generally owned by (human) users, which is why this tutorial uses user account authorization. That doesn't mean it's not possible to use Google Workspace APIs with service accounts: as long as those accounts have the appropriate access level, they can certainly be used in applications. As above, alternative versions using service account authorization are available in the repo's alt folder for you to review. Look for files with *-svc* in their names.
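For a taste of what that looks like, here is a rough sketch (again, not the exact alt-folder code) of building the same service endpoints from a service account key via google-auth; the key filename and the delegated user address are hypothetical:

# Sketch only: service account ("svc acct") authorization with google-auth.
from google.oauth2 import service_account
from googleapiclient import discovery

creds = service_account.Credentials.from_service_account_file(
        'svc_acct_key.json', scopes=SCOPES)    # hypothetical key filename
# With domain-wide delegation, a Workspace svc acct can act on a user's behalf:
# creds = creds.with_subject('user@example.com')

DRIVE  = discovery.build('drive',  'v3', credentials=creds)
SHEETS = discovery.build('sheets', 'v4', credentials=creds)

Remember that a plain service account only sees its own Drive and Sheets data; to reach a user's documents it either needs those files shared with it or domain-wide delegation configured by a Workspace administrator.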
Alternative version catalog
Below, you'll find all alternative versions of final/analyze_gsimg.py, each having one or more of the properties above. In each version's filename, look for:
- "
oldauth
" for versions using the older auth libraries (in addition tofinal/analyze_gsimg.py
) - "
newauth
" for those using the current/newer auth libraries - "
gcp
" for those using Google Cloud product client libraries, i.e., google-cloud-storage, etc. - "
svc
" for those using a service account ("svc acct") auth instead of a user account
Here are all the versions:
| Filename | Description |
| final/analyze_gsimg.py | The primary sample; uses the older auth libraries |
| alt/analyze_gsimg-newauth.py | Same as the primary sample, but uses the newer auth libraries |
| alt/analyze_gsimg-oldauth-gcp.py | Same as the primary sample, but uses the Google Cloud product client libraries |
| alt/analyze_gsimg-newauth-gcp.py | Same as the primary sample, but uses the newer auth libraries and the Google Cloud product client libraries |
| alt/analyze_gsimg-oldauth-svc.py | Same as the primary sample, but uses service account ("svc acct") auth instead of a user account |
| alt/analyze_gsimg-newauth-svc.py | Same as the primary sample, but uses the newer auth libraries and svc acct auth |
| alt/analyze_gsimg-oldauth-gcp-svc.py | Same as the primary sample, but uses the Google Cloud product client libraries and svc acct auth |
| alt/analyze_gsimg-newauth-gcp-svc.py | Same as the primary sample, but uses the newer auth libraries, the Google Cloud product client libraries, and svc acct auth |
Coupled with the original final/analyze_gsimg.py, you have all possible combinations of the final solution, regardless of your Google API development environment, and can choose the one which best suits your needs. Also see alt/README.md for a similar explanation.
Additional Study
Below are a few ideas for taking this exercise a step or two further, expanding the problem set the current solution can handle:
- (multiple images in folders) Instead of processing a single image, suppose you had entire Google Drive folders of images to run through the same pipeline (see the sketch after this list).
- (multiple images in ZIP files) Instead of a folder of images, how about ZIP archives containing image files? If using Python, consider the zipfile module.
- (analyze Vision labels) Cluster similar images together, perhaps start by looking for the most common labels, then the 2nd most common, and so on.
- (create charts) Following up on #3, generate charts with the Sheets API based on the Vision API analysis and categorization.
- (categorize documents) Instead of analyzing images with the Cloud Vision API, let's say you have PDF files to categorize with the Cloud Natural Language API. Using your solutions above, these PDFs can be in Drive folders or ZIP archives on Drive.
- (create presentations) Use the Slides API to generate a slide deck from the contents of the Google Sheet report. For inspiration, check out this blog post & video on generating slides from spreadsheet data.
- (export as PDF) Export the spreadsheet and/or slide deck as PDF; however, this isn't a feature of either the Sheets or Slides APIs. Hint: Google Drive API. Extra credit: merge both the Sheets and Slides PDFs into one master PDF with tools like Ghostscript (Linux, Windows) or Combine PDF Pages.action (Mac OS X).
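For the first idea above, a minimal sketch of the Drive query you might start from is shown here; drive_list_folder_imgs and FOLDER_ID are hypothetical names, and DRIVE is the same service endpoint built in analyze_gsimg.py:

# Sketch only: list every image in a Drive folder so each one can be fed
# through the existing download/archive/label/report pipeline.
def drive_list_folder_imgs(folder_id):
    'return id/name/mimeType/modifiedTime for each image in a Drive folder'
    query = "'%s' in parents and mimeType contains 'image/'" % folder_id
    rsp = DRIVE.files().list(q=query,
            fields='files(id,name,mimeType,modifiedTime)').execute()
    return rsp.get('files', [])

Each returned entry could then be handed to the existing download, upload, labeling, and Sheets steps, adding one report row per image.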
Learn More
Codelabs
- Intro to Google Workspace APIs (Google Drive API) (Python)
- Using Cloud Vision with Python (Python)
- Build customized reporting tools (Google Sheets API) (JS/Node)
- Upload objects to Google Cloud Storage (no coding required)
General
Google Workspace
- Google Drive API home page
- Google Sheets API home page
- Google Workspace developer overview & documentation
Google Cloud
- Google Cloud Storage home page
- Google Cloud Vision home page & live demo
- Cloud Vision API documentation
- Vision API image labeling docs
- Python on Google Cloud
- Google Cloud product client libraries
- Google Cloud documentation
License
This work is licensed under a Creative Commons Attribution 2.0 Generic License.