1. Overview
The goal of this codelab is to gain experience with "serverless" services offered by Google Cloud Platform:
- Cloud Functions — to deploy small units of business logic in the shape of functions that react to various events (Pub/Sub messages, new files in Cloud Storage, HTTP requests, and more),
- App Engine — to deploy and serve web apps, web APIs, mobile backends, and static assets, with fast scale-up and scale-down capabilities,
- Cloud Run — to deploy and scale containers, which can embed any language, runtime, or library.
Along the way, you'll discover how to take advantage of these serverless services to deploy and scale Web and REST APIs, while picking up some good RESTful design principles.
In this workshop, we'll create a bookshelf explorer consisting of:
- A Cloud Function: to import the initial dataset of books available in our library, in the Cloud Firestore document database,
- A Cloud Run container: that will expose a REST API over the content of our database,
- An App Engine web frontend: to browse through the list of books, by calling our REST API.
Here is what the web frontend will look like at the end of this codelab:
What you'll learn
- Cloud Functions
- Cloud Firestore
- Cloud Run
- App Engine
2. Setup and requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
- The Project ID is unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one, or try your own and see if it's available. It can't be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.
3. Prepare the environment and enable cloud APIs
In order to use the various services we will need throughout this project, we will enable a few APIs. We will do so by launching the following command in Cloud Shell:
$ gcloud services enable \
    appengine.googleapis.com \
    cloudbuild.googleapis.com \
    cloudfunctions.googleapis.com \
    compute.googleapis.com \
    firestore.googleapis.com \
    run.googleapis.com
After some time, you should see the operation finish successfully:
Operation "operations/acf.5c5ef4f6-f734-455d-b2f0-ee70b5a17322" finished successfully.
We will also set up an environment variable that we will need along the way: the cloud region where we will deploy our function, app, and container:
$ export REGION=europe-west3
As we will store data in the Cloud Firestore database, we will need to create the database:
$ gcloud app create --region=${REGION}
$ gcloud firestore databases create --location=${REGION}
Later on in this codelab, when implementing the REST API, we will need to sort and filter through the data. For that purpose, we will create three indexes:
$ gcloud firestore indexes composite create --collection-group=books \
    --field-config field-path=language,order=ascending \
    --field-config field-path=updated,order=descending
$ gcloud firestore indexes composite create --collection-group=books \
    --field-config field-path=author,order=ascending \
    --field-config field-path=updated,order=descending
$ gcloud firestore indexes composite create --collection-group=books \
    --field-config field-path=author,order=ascending \
    --field-config field-path=language,order=ascending \
    --field-config field-path=updated,order=descending
Those three indexes correspond to the searches we will do by author, by language, or by both at once, while maintaining ordering in the collection via the updated field.
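To see why these indexes matter, here is a minimal Node.js sketch (mirroring the query code we'll write later for the Cloud Run service) of a query that filters on one field while ordering on another:
const Firestore = require('@google-cloud/firestore');
const firestore = new Firestore();

// Filtering on "language" while ordering on "updated" requires
// the (language, updated) composite index created above.
async function booksByLanguage(language) {
  return firestore.collection('books')
      .where('language', '==', language)
      .orderBy('updated', 'desc')
      .get();
}
Without a matching composite index, Firestore rejects such a query with a FAILED_PRECONDITION error containing a link to create the missing index.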
4. Get the code
Get the code from the following Github repository:
$ git clone https://github.com/glaforge/serverless-web-apis
The application code is written in Node.js.
You will have the following folder structure that's relevant for this lab:
serverless-web-apis
├── data
│   └── books.json
├── function-import
│   ├── index.js
│   └── package.json
├── run-crud
│   ├── index.js
│   ├── package.json
│   └── Dockerfile
└── appengine-frontend
    ├── public
    │   ├── css/style.css
    │   ├── html/index.html
    │   └── js/app.js
    ├── index.js
    ├── package.json
    └── app.yaml
These are the relevant folders:
- data — contains sample data: a list of 100 books.
- function-import — a function offering an endpoint to import the sample data.
- run-crud — a container exposing a Web API to access the book data stored in Cloud Firestore.
- appengine-frontend — an App Engine web application displaying a simple read-only frontend to browse through the list of books.
5. Sample book library data
In the data folder, we have a books.json file that contains a list of a hundred books, probably worth reading. This JSON document is an array containing JSON objects. Let's have a look at the shape of the data we will ingest via a Cloud Function:
[
{
"isbn": "9780435272463",
"author": "Chinua Achebe",
"language": "English",
"pages": 209,
"title": "Things Fall Apart",
"year": 1958
},
{
"isbn": "9781414251196",
"author": "Hans Christian Andersen",
"language": "Danish",
"pages": 784,
"title": "Fairy tales",
"year": 1836
},
...
]
All our book entries in this array contain the following information:
- isbn — the ISBN-13 code identifying the book,
- author — the name of the author of the book,
- language — the spoken language in which the book is written,
- pages — the number of pages in the book,
- title — the title of the book,
- year — the year the book was published.
6. A function endpoint to import sample book data
In this first section, we will implement the endpoint that will be used to import sample book data. We'll use Cloud Functions for this purpose.
Explore the code
Let's start by looking at the package.json
file:
{
"name": "function-import",
"description": "Import sample book data",
"license": "Apache-2.0",
"dependencies": {
"@google-cloud/firestore": "^4.9.9"
},
"devDependencies": {
"@google-cloud/functions-framework": "^3.1.0"
},
"scripts": {
"start": "npx @google-cloud/functions-framework --target=parseBooks"
}
}
In the runtime dependencies, we only need the @google-cloud/firestore NPM module to access the database and store our book data. Under the hood, the Cloud Functions runtime also provides the Express web framework, so we don't need to declare it as a dependency.
In the development dependencies, we declare the Functions Framework (@google-cloud/functions-framework), the runtime framework used to invoke your functions. It's an open source framework that you can also use locally on your machine (in our case, inside Cloud Shell) to run functions without redeploying after each change, thus improving the development feedback loop.
To install the dependencies, use the install command:
$ npm install
The start script uses the Functions Framework to give you a command for running the function locally, with the following instruction:
$ npm start
You can then use curl to interact with the function (the Cloud Shell web preview is of limited use here, since it only issues HTTP GET requests, while our function accepts only POST requests).
Let's now have a look at the index.js file that contains the logic of our book data import function:
const functions = require('@google-cloud/functions-framework');
const Firestore = require('@google-cloud/firestore');

const firestore = new Firestore();
const bookStore = firestore.collection('books');
We require the Functions Framework (needed for the functions.http() registration below) and instantiate the Firestore module, pointing at the books collection (similar to a table in relational databases).
functions.http('parseBooks', async (req, resp) => {
if (req.method !== "POST") {
resp.status(405).send({error: "Only method POST allowed"});
return;
}
if (req.headers['content-type'] !== "application/json") {
resp.status(406).send({error: "Only application/json accepted"});
return;
}
...
})
We register the parseBooks JavaScript function with the Functions Framework. This is the entry point we will declare when deploying the function later on.
The next couple of instructions check that:
- we only accept HTTP POST requests, and otherwise return a 405 status code to indicate that other HTTP methods are not allowed,
- we only accept application/json payloads, and otherwise send a 406 status code to indicate that this is not an acceptable payload format.
const books = req.body;
const writeBatch = firestore.batch();
for (const book of books) {
const doc = bookStore.doc(book.isbn);
writeBatch.set(doc, {
title: book.title,
author: book.author,
language: book.language,
pages: book.pages,
year: book.year,
updated: Firestore.Timestamp.now()
});
}
Then, we retrieve the JSON payload via the body of the request, and prepare a Firestore batch operation to store all the books in bulk. We iterate over the JSON array of book details, going through the isbn, title, author, language, pages, and year fields. The ISBN code of the book serves as its primary key or identifier.
try {
await writeBatch.commit();
console.log("Saved books in Firestore");
} catch (e) {
console.error("Error saving books:", e);
resp.status(400).send({error: "Error saving books"});
return;
};
resp.status(202).send({status: "OK"});
Now that the batch of data is ready, we can commit the operation. If the storage operation fails, we return a 400 status code to signal the failure. Otherwise, we return an OK response with a 202 status code, indicating that the bulk save request was accepted.
Running and testing the import function
Before running the code, we will install the dependencies with:
$ npm install
To run the function locally, thanks to the Functions Framework, we'll use the start script command we defined in package.json:
$ npm start

> start
> npx @google-cloud/functions-framework --target=parseBooks

Serving function...
Function: parseBooks
URL: http://localhost:8080/
To send an HTTP POST request to your local function, you can run:
$ curl -d "@../data/books.json" \
    -H "Content-Type: application/json" \
    http://localhost:8080/
When launching this command, you should see the following output, confirming that the books were imported successfully:
{"status":"OK"}
You can also go to the Cloud Console UI to check that the data is indeed stored in Firestore:
In the above screenshot, we can see the books collection created, the list of book documents identified by their ISBN codes, and the details of a particular book entry on the right.
Deploying the function in the cloud
To deploy the function to Cloud Functions, we will use the following command in the function-import directory:
$ gcloud functions deploy bulk-import \
    --gen2 \
    --trigger-http \
    --runtime=nodejs20 \
    --allow-unauthenticated \
    --max-instances=30 \
    --region=${REGION} \
    --source=. \
    --entry-point=parseBooks
We deploy the function under the symbolic name bulk-import. It is triggered via HTTP requests and uses the Node.js 20 runtime. We deploy it publicly (ideally, we should secure that endpoint), cap it at 30 instances, and specify the region where the function should reside. Finally, we point at the sources in the local directory and use parseBooks (the registered JavaScript function) as the entry point.
After a couple of minutes or less, the function is deployed in the cloud. In the Cloud Console UI, you should see the function appear:
In the deployment output, you should be able to see the URL of your function; for a 2nd generation function, this is a Cloud Run-style URL (ending in run.app). And of course, you can also find this HTTP trigger URL in the Cloud Console UI, in the trigger tab:
You can also retrieve the URL via the command line with gcloud:
$ export BULK_IMPORT_URL=$(gcloud functions describe bulk-import \
    --gen2 \
    --region=$REGION \
    --format='value(serviceConfig.uri)')
$ echo $BULK_IMPORT_URL
We store it in the BULK_IMPORT_URL environment variable, so that we can reuse it when testing our deployed function.
Testing the deployed function
To test the deployed function, we'll use a curl command similar to the one we used earlier against the local function. The sole change is the URL:
$ curl -d "@../data/books.json" \
    -H "Content-Type: application/json" \
    $BULK_IMPORT_URL
Again, if successful, it should return the following output:
{"status":"OK"}
Now that our import function is deployed and our sample data uploaded, it's time to develop the REST API exposing this dataset.
7. The REST API contract
Although we're not defining a formal API contract (using, for example, the OpenAPI specification), we're going to have a look at the various endpoints of our REST API.
The API exchanges book JSON objects, consisting of:
- isbn (optional) — a 13-character String representing a valid ISBN code,
- author — a non-empty String with the name of the author of the book,
- language — a non-empty String containing the language the book was written in,
- pages — a positive Integer for the page count of the book,
- title — a non-empty String with the title of the book,
- year — an Integer value for the year of publication of the book.
Example book payload:
{
"isbn": "9780435272463",
"author": "Chinua Achebe",
"language": "English",
"pages": 209,
"title": "Things Fall Apart",
"year": 1958
}
GET /books
Get the list of all books, potentially filtered by author and/or language, and paginated by windows of 10 results at a time.
Body payload: none.
Query parameters:
- author (optional) — filters the book list by author,
- language (optional) — filters the book list by language,
- page (optional, default = 0) — indicates the rank of the page of results to return.
Returns: a JSON array of book objects.
Status codes:
- 200 — when the request succeeds in fetching the list of books,
- 400 — if an error occurs.
POST /books and POST /books/{isbn}
Post a new book payload, either with an isbn path parameter (in which case the isbn code is not needed in the book payload) or without (in which case the isbn code must be present in the book payload).
Body payload: a book object.
Query parameters: none.
Returns: nothing.
Status codes:
- 201 — when the book is stored successfully,
- 406 — if the isbn code is invalid,
- 400 — if an error occurs.
GET /books/{isbn}
Retrieves a book from the library, identified by its isbn code, passed as a path parameter.
Body payload: none.
Query parameters: none.
Returns: a book JSON object, or an error object if the book does not exist.
Status codes:
- 200 — if the book is found in the database,
- 400 — if an error occurs,
- 404 — if the book couldn't be found,
- 406 — if the isbn code is invalid.
PUT /books/{isbn}
Updates an existing book, identified by its isbn passed as a path parameter.
Body payload: a book object. Only the fields that need an update have to be passed; the others are optional.
Query parameters: none.
Returns: the updated book.
Status codes:
- 200 — when the book is updated successfully,
- 400 — if an error occurs,
- 406 — if the isbn code is invalid.
DELETE /books/{isbn}
Deletes an existing book, identified by its isbn passed as a path parameter.
Body payload: none.
Query parameters: none.
Returns: nothing.
Status codes:
- 204 — when the book is deleted successfully,
- 400 — if an error occurs.
8. Deploy and expose a REST API in a container
Explore the code
Dockerfile
Let's start by looking at the Dockerfile, which is responsible for containerizing our application code:
FROM node:20-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
CMD [ "node", "index.js" ]
We are using a Node.js 20 "slim" image. We work in the /usr/src/app directory. We copy the package.json file (detailed below) that defines our dependencies, among other things, install those dependencies with npm install, and then copy the source code. Last, we indicate how this application should be run, with the node index.js command.
package.json
Next, we can have a look at the package.json file:
{
"name": "run-crud",
"description": "CRUD operations over book data",
"license": "Apache-2.0",
"engines": {
"node": ">= 20.0.0"
},
"dependencies": {
"@google-cloud/firestore": "^4.9.9",
"cors": "^2.8.5",
"express": "^4.17.1",
"isbn3": "^1.1.10"
},
"scripts": {
"start": "node index.js"
}
}
We specify that we want to use Node.js 20, matching the Dockerfile.
Our web API application depends on:
- the Firestore NPM module to access the book data in the database,
- the cors library to handle CORS (Cross-Origin Resource Sharing) requests, as our REST API will be invoked from the client-side code of our App Engine web frontend,
- the Express framework, which will be our web framework for designing our API,
- and the isbn3 module, which helps with validating book ISBN codes.
We also specify the start script, which will come in handy for starting the application locally, for development and testing purposes.
index.js
Let's move on to the meat of the code, with an in-depth look at index.js:
const Firestore = require('@google-cloud/firestore');
const firestore = new Firestore();
const bookStore = firestore.collection('books');
We require the Firestore module, and reference the books collection, where our book data is stored.
const express = require('express');
const app = express();
const bodyParser = require('body-parser');
app.use(bodyParser.json());
const querystring = require('querystring');
const cors = require('cors');
app.use(cors({
exposedHeaders: ['Content-Length', 'Content-Type', 'Link'],
}));
We are using Express as our web framework to implement our REST API, and the body-parser module to parse the JSON payloads exchanged with the API.
The querystring module is helpful to manipulate URLs. We'll use it when creating Link headers for pagination purposes (more on this later).
Then we configure the cors module. We explicitly list the headers we want exposed via CORS, as most are usually stripped away; here, we want to keep the usual content length and type, as well as the Link header that we'll set for pagination.
const ISBN = require('isbn3');
function isbnOK(isbn, res) {
const parsedIsbn = ISBN.parse(isbn);
if (!parsedIsbn) {
res.status(406)
.send({error: `Invalid ISBN: ${isbn}`});
return false;
}
return parsedIsbn;
}
We use the isbn3 NPM module to parse and validate ISBN codes, and we write a small utility function that parses an ISBN code and, if it is invalid, sets a 406 status code on the response.
GET /books
Let's have a look at the GET /books endpoint, piece by piece:
app.get('/books', async (req, res) => {
try {
var query = new Firestore().collection('books');
if (!!req.query.author) {
console.log(`Filtering by author: ${req.query.author}`);
query = query.where("author", "==", req.query.author);
}
if (!!req.query.language) {
console.log(`Filtering by language: ${req.query.language}`);
query = query.where("language", "==", req.query.language);
}
const page = parseInt(req.query.page) || 0;
// - - ✄ - - ✄ - - ✄ - - ✄ - - ✄ - -
} catch (e) {
console.error('Failed to fetch books', e);
res.status(400)
.send({error: `Impossible to fetch books: ${e.message}`});
}
});
We get ready to query the database by preparing a query, which depends on the optional query parameters to filter by author and/or language. We also return the book list in chunks of 10 books (the PAGE_SIZE constant used below).
If there's an error along the way, while fetching the books, we return an error with a 400 status code.
Let's zoom into the snipped portion of that endpoint:
const snapshot = await query
.orderBy('updated', 'desc')
.limit(PAGE_SIZE)
.offset(PAGE_SIZE * page)
.get();
const books = [];
if (snapshot.empty) {
console.log('No book found');
} else {
snapshot.forEach(doc => {
const {title, author, pages, year, language, ...otherFields} = doc.data();
const book = {isbn: doc.id, title, author, pages, year, language};
books.push(book);
});
}
In the previous section, we filtered by author and language; in this section, we sort the list of books by last updated date (most recently updated first). We also paginate the results, by defining a limit (the number of elements to return) and an offset (the starting point from which to return the next batch of books).
We execute the query, get the snapshot of the data, and put those results in a JavaScript array that will be returned at the end of the function.
Let's finish the explanation of this endpoint by looking at a good practice: using the Link header to define URI links to the first, previous, next, or last pages of data (in our case, we'll only provide previous and next).
var links = {};
if (page > 0) {
const prevQuery = querystring.stringify({...req.query, page: page - 1});
links.prev = `${req.path}${prevQuery != '' ? `?${prevQuery}` : ''}`;
}
if (snapshot.docs.length === PAGE_SIZE) {
const nextQuery = querystring.stringify({...req.query, page: page + 1});
links.next = `${req.path}${nextQuery != '' ? `?${nextQuery}` : ''}`;
}
if (Object.keys(links).length > 0) {
res.links(links);
}
res.status(200).send(books);
The logic may seem a bit complex at first, but we simply add a prev link if we're not on the first page of data, and a next link if the current page is full (i.e. it contains the maximum number of books, as defined by the PAGE_SIZE constant, which suggests another page may follow). We then use Express's res.links() helper to create the header with the right syntax.
For your information, the link header will look something like this:
link: </books?page=1>; rel="prev", </books?page=3>; rel="next"
POST /books and POST /books/:isbn
Both endpoints are here to create a new book. One passes the ISBN code in the book payload, whereas the other passes it as a path parameter. Either way, both call our createBook() function:
async function createBook(isbn, req, res) {
const parsedIsbn = isbnOK(isbn, res);
if (!parsedIsbn) return;
const {title, author, pages, year, language} = req.body;
try {
const docRef = bookStore.doc(parsedIsbn.isbn13);
await docRef.set({
title, author, pages, year, language,
updated: Firestore.Timestamp.now()
});
console.log(`Saved book ${parsedIsbn.isbn13}`);
res.status(201)
.location(`/books/${parsedIsbn.isbn13}`)
.send({status: `Book ${parsedIsbn.isbn13} created`});
} catch (e) {
console.error(`Failed to save book ${parsedIsbn.isbn13}`, e);
res.status(400)
.send({error: `Impossible to create book ${parsedIsbn.isbn13}: ${e.message}`});
}
}
We check that the isbn code is valid, and otherwise return from the function (isbnOK() has already set a 406 status code on the response). We retrieve the book fields from the payload passed in the body of the request, then store the book details in Firestore, returning 201 on success and 400 on failure.
When returning successfully, we also set the Location header, to tell the API client where to find the newly created resource. The header will look as follows:
Location: /books/9781234567898
GET /books/:isbn
Let's fetch a book, identified via its ISBN, from Firestore.
app.get('/books/:isbn', async (req, res) => {
const parsedIsbn = isbnOK(req.params.isbn, res);
if (!parsedIsbn) return;
try {
const docRef = bookStore.doc(parsedIsbn.isbn13);
const docSnapshot = await docRef.get();
if (!docSnapshot.exists) {
console.log(`Book not found ${parsedIsbn.isbn13}`)
res.status(404)
.send({error: `Could not find book ${parsedIsbn.isbn13}`});
return;
}
console.log(`Fetched book ${parsedIsbn.isbn13}`, docSnapshot.data());
const {title, author, pages, year, language, ...otherFields} = docSnapshot.data();
const book = {isbn: parsedIsbn.isbn13, title, author, pages, year, language};
res.status(200).send(book);
} catch (e) {
console.error(`Failed to fetch book ${parsedIsbn.isbn13}`, e);
res.status(400)
.send({error: `Impossible to fetch book ${parsedIsbn.isbn13}: ${e.message}`});
}
});
As always, we check that the ISBN is valid. Then we query Firestore to retrieve the book. The exists property of the document snapshot is handy to know whether a book was indeed found; if not, we send back an error with a 404 Not Found status code. Otherwise, we retrieve the book data and create a JSON object representing the book to be returned.
PUT /books/:isbn
We're using the PUT method to update an existing book.
app.put('/books/:isbn', async (req, res) => {
const parsedIsbn = isbnOK(req.params.isbn, res);
if (!parsedIsbn) return;
try {
const docRef = bookStore.doc(parsedIsbn.isbn13);
await docRef.set({
...req.body,
updated: Firestore.Timestamp.now()
}, {merge: true});
console.log(`Updated book ${parsedIsbn.isbn13}`);
    res.status(200)
.location(`/books/${parsedIsbn.isbn13}`)
.send({status: `Book ${parsedIsbn.isbn13} updated`});
} catch (e) {
console.error(`Failed to update book ${parsedIsbn.isbn13}`, e);
res.status(400)
.send({error: `Impossible to update book ${parsedIsbn.isbn13}: ${e.message}`});
}
});
We update the updated date/time field to remember when we last modified that record. We use the {merge: true} option, which merges the incoming fields into the existing document; without it, set() would replace the whole document, and only the fields present in the payload would remain, erasing fields from the initial creation or previous updates.
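To make the difference concrete, here is a small hypothetical sketch, using a throwaway demo document that is not part of the codelab's code:
const Firestore = require('@google-cloud/firestore');
const firestore = new Firestore();

async function demoMerge() {
  const doc = firestore.collection('books').doc('demo');
  await doc.set({title: 'A title', pages: 123});

  // With {merge: true}, only "title" is overwritten; "pages" survives:
  await doc.set({title: 'New title'}, {merge: true});
  // Document is now {title: 'New title', pages: 123}.

  // Without {merge: true}, the same call would replace the whole document,
  // leaving only {title: 'New title'} and erasing "pages".
}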
We also set the Location header to point at the URI of the book.
DELETE /books/:isbn
Deleting books is pretty straightforward: we just call the delete() method on the document reference, and return a 204 status code, as we're not returning any content.
app.delete('/books/:isbn', async (req, res) => {
const parsedIsbn = isbnOK(req.params.isbn, res);
if (!parsedIsbn) return;
try {
const docRef = bookStore.doc(parsedIsbn.isbn13);
await docRef.delete();
console.log(`Book ${parsedIsbn.isbn13} was deleted`);
res.status(204).end();
} catch (e) {
console.error(`Failed to delete book ${parsedIsbn.isbn13}`, e);
res.status(400)
.send({error: `Impossible to delete book ${parsedIsbn.isbn13}: ${e.message}`});
}
});
Start the Express / Node server
Last but not least, we start the server, listening on port 8080 by default:
const port = process.env.PORT || 8080;
app.listen(port, () => {
console.log(`Books Web API service: listening on port ${port}`);
console.log(`Node ${process.version}`);
});
Running the application locally
To run the application locally, we'll first install the dependencies with:
$ npm install
And we can then start with:
$ npm start
The server will start on localhost, listening on port 8080 by default.
It's also possible to build a Docker container, and run the container image as well, with the following commands:
$ docker build -t crud-web-api .
$ docker run --rm -p 8080:8080 -it crud-web-api
Running within Docker is also a great way to double-check that the containerization of our application will work the same way when we build it in the cloud with Cloud Build.
Testing the API
Regardless of how we run the REST API code (directly via Node or through a Docker container image), we're now able to run a few queries against it.
- Create a new book (ISBN in the body payload):
$ curl -XPOST -d '{"isbn":"9782070368228","title":"Book","author":"me","pages":123,"year":2021,"language":"French"}' \
    -H "Content-Type: application/json" \
    http://localhost:8080/books
- Create a new book (ISBN in a path parameter):
$ curl -XPOST -d '{"title":"Book","author":"me","pages":123,"year":2021,"language":"French"}' \
    -H "Content-Type: application/json" \
    http://localhost:8080/books/9782070368228
- Delete a book (the one we created):
$ curl -XDELETE http://localhost:8080/books/9782070368228
- Retrieve a book by ISBN:
$ curl http://localhost:8080/books/9780140449136
$ curl http://localhost:8080/books/9782070360536
- Update an existing book by changing just its title:
$ curl -XPUT \
    -d '{"title":"Book"}' \
    -H "Content-Type: application/json" \
    http://localhost:8080/books/9780003701203
- Retrieve the list of books (the first 10):
$ curl http://localhost:8080/books
- Find the books written by a particular author:
$ curl http://localhost:8080/books?author=Virginia+Woolf
- List the books written in English:
$ curl http://localhost:8080/books?language=English
- Load the 4th page of books:
$ curl http://localhost:8080/books?page=3
We can also combine the author, language, and page query parameters to refine our search.
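For example, assuming the composite index covering both author and language (created at the beginning of the codelab) is ready, the following request combines two filters:
$ curl "http://localhost:8080/books?author=Virginia+Woolf&language=English"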
Building and deploying the containerized REST API
As we're happy that the REST API works according to plan, it's the right moment for deploying it in the Cloud, on Cloud Run!
We're going to do it in two steps:
- First, by building the container image with Cloud Build, with the following command:
$ gcloud builds submit \
    --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/crud-web-api
- Then, by deploying the service with this second command:
$ gcloud run deploy run-crud \
    --image gcr.io/${GOOGLE_CLOUD_PROJECT}/crud-web-api \
    --allow-unauthenticated \
    --region=${REGION} \
    --platform=managed
With the first command, Cloud Build builds the container image and stores it in Container Registry. The second command creates a Cloud Run service from that image, deployed in our chosen region.
We can double check in the Cloud Console UI that our Cloud Run service now appears in the list:
One last step we'll do here, is to retrieve the URL of the freshly deployed Cloud Run service, thanks to the following command:
$ export RUN_CRUD_SERVICE_URL=$(gcloud run services describe run-crud \
    --region=${REGION} \
    --platform=managed \
    --format='value(status.url)')
We will need the URL of our Cloud Run REST API in the next section, as our App Engine frontend code will interact with the API.
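As a quick smoke test, we can already hit the deployed API with that URL; it should return the JSON array of books we imported earlier:
$ curl $RUN_CRUD_SERVICE_URL/books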
9. Host a web app to browse the library
The last piece of the puzzle to add some glitter to this project, is to provide a web frontend that will interact with our REST API. For that purpose, we'll use Google App Engine, with some client JavaScript code that will call the API via AJAX requests (using the client-side Fetch API).
Our application, although deployed on the Node.js App Engine runtime, is mostly made of static resources! There isn't much backend code, as most of the user interaction happens in the browser via client-side JavaScript. We won't use any fancy frontend JavaScript framework; we'll just write some "vanilla" JavaScript, with a few Web Components for the UI from the Shoelace web component library:
- a select box to select the language of the book:
- a card component to display the details about a particular book (including a barcode to represent the ISBN of the book, using the JsBarcode library):
- and a button to load more books from the database:
When combining all those visual components together, the resulting web page to browse our library will look as follows:
The app.yaml configuration file
Let's start diving into the code base of this App Engine application by looking at its app.yaml configuration file. This file is specific to App Engine, and lets us configure things like environment variables and the various "handlers" of the application, or specify that some resources are static assets, which are served by App Engine's built-in CDN.
runtime: nodejs20
env_variables:
RUN_CRUD_SERVICE_URL: CHANGE_ME
handlers:
- url: /js
static_dir: public/js
- url: /css
static_dir: public/css
- url: /img
static_dir: public/img
- url: /(.+\.html)
static_files: public/html/\1
upload: public/(.+\.html)
- url: /
static_files: public/html/index.html
upload: public/html/index\.html
- url: /.*
secure: always
script: auto
We specify that our application is a Node.js one, and that we want to use version 20.
Then we define an environment variable that points at our Cloud Run service URL. We'll need to update the CHANGE_ME placeholder with the correct URL (see below on how to change this).
After that, we define various handlers. The first three point at the HTML, CSS, and JavaScript client-side code, under the public/ folder and its sub-folders. The fourth one indicates that the root URL of our App Engine application should serve the index.html page; that way, we won't see an index.html suffix in the URL when accessing the root of the website. The last one is the default handler, routing all other URLs (/.*) to our Node.js application (i.e. the "dynamic" part of the application, in contrast to the static assets described above).
Let's update the Web API URL of the Cloud Run service now.
In the appengine-frontend/ directory, run the following command to update the environment variable so it points at the URL of our Cloud Run-based REST API:
$ sed -i -e "s|CHANGE_ME|${RUN_CRUD_SERVICE_URL}|" app.yaml
Or manually replace the CHANGE_ME string in app.yaml with the correct URL:
env_variables:
RUN_CRUD_SERVICE_URL: CHANGE_ME
The Node.js package.json file
{
"name": "appengine-frontend",
"description": "Web frontend",
"license": "Apache-2.0",
"main": "index.js",
"engines": {
"node": "^14.0.0"
},
"dependencies": {
"express": "^4.17.1",
"isbn3": "^1.1.10"
},
"devDependencies": {
"nodemon": "^2.0.7"
},
"scripts": {
"start": "node index.js",
"dev": "nodemon --watch server --inspect index.js"
}
}
We stress again that we want to run this application using Node.js 20. We depend on the Express framework, as well as the isbn3 NPM module for validating books' ISBN codes.
In the development dependencies, we use the nodemon module to monitor file changes. We could run our application locally with npm start, make some changes to the code, stop the app with ^C, and relaunch it, but that's a bit tedious. Instead, we can use the following command to have the application automatically reloaded / restarted upon changes:
$ npm run dev
The index.js Node.js code
const express = require('express');
const app = express();
app.use(express.static('public'));
const bodyParser = require('body-parser');
app.use(bodyParser.json());
We require the Express web framework. We specify that the public directory contains static assets that can be served (at least when running locally in development mode) by the static middleware. Lastly, we require body-parser to parse our JSON payloads.
Let's have a look at the couple of routes that we have defined:
app.get('/', async (req, res) => {
res.redirect('/html/index.html');
});
app.get('/webapi', async (req, res) => {
res.send(process.env.RUN_CRUD_SERVICE_URL);
});
The first one, matching /, redirects to the index.html page in our public/html directory. In development mode, we're not running within the App Engine runtime, so App Engine's URL routing doesn't take place; instead, we simply redirect the root URL to the HTML file.
The second endpoint we define, /webapi, returns the URL of our Cloud Run REST API. That way, the client-side JavaScript code knows where to call to get the list of books.
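You can verify this endpoint while running locally; it should simply print the Cloud Run URL configured in app.yaml:
$ curl http://localhost:8080/webapi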
const port = process.env.PORT || 8080;
app.listen(port, () => {
console.log(`Book library web frontend: listening on port ${port}`);
console.log(`Node ${process.version}`);
console.log(`Web API endpoint ${process.env.RUN_CRUD_SERVICE_URL}`);
});
To finish, we are running the Express web app and listening on port 8080 by default.
The index.html page
We won't look at every line of this long HTML page. Instead, let's highlight some key lines.
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@shoelace-style/shoelace@2.0.0-beta.37/dist/themes/base.css">
<script type="module" src="https://cdn.jsdelivr.net/npm/@shoelace-style/shoelace@2.0.0-beta.37/dist/shoelace.js"></script>
<script src="https://cdn.jsdelivr.net/npm/jsbarcode@3.11.0/dist/barcodes/JsBarcode.ean-upc.min.js"></script>
<script src="/js/app.js"></script>
<link rel="stylesheet" type="text/css" href="/css/style.css">
The first two lines import the Shoelace web component library (its stylesheet and its script).
The next line imports the JsBarcode library, to create the barcodes of the book ISBN codes.
The last lines import our own JavaScript code and CSS stylesheet, located in our public/ subdirectories.
In the body of the HTML page, we use the Shoelace components with their custom element tags, like:
<sl-icon name="book-half"></sl-icon>
...
<sl-select id="language-select" placeholder="Select a language..." clearable>
<sl-menu-item value="English">English</sl-menu-item>
<sl-menu-item value="French">French</sl-menu-item>
...
</sl-select>
...
<sl-button id="more-button" type="primary" size="large">
More books...
</sl-button>
...
And we also use HTML templates and their slot filling capability to represent a book. We'll create copies of that template to populate the list of books, and replace the values in the slots with the details of the books:
<template id="book-card">
<sl-card class="card-overview">
...
<slot name="author">Author</slot>
...
</sl-card>
</template>
Enough HTML; we're almost done reviewing the code. One last meaty part remains: the app.js client-side JavaScript code that interacts with our REST API.
The app.js client-side JavaScript code
We start with a top-level event listener that waits for the DOM content to be loaded:
document.addEventListener("DOMContentLoaded", async function(event) {
...
});
Once it's ready, we can set up some key constants and variables:
const serverUrlResponse = await fetch('/webapi');
const serverUrl = await serverUrlResponse.text();
console.log('Web API endpoint:', serverUrl);
const server = serverUrl + '/books';
var page = 0;
var language = '';
First, we fetch the URL of our REST API from the /webapi endpoint; our App Engine Node code serves it from the environment variable we set initially in app.yaml. Thanks to that endpoint, called from the client-side JavaScript code, we don't have to hardcode the REST API URL in our frontend code.
We also define page and language variables, which we'll use to keep track of pagination and language filtering.
const moreButton = document.getElementById('more-button');
moreButton.addEventListener('sl-focus', event => {
console.log('Button clicked');
moreButton.blur();
appendMoreBooks(server, page++, language);
});
We add an event handler on the button that loads books. When it is clicked, it calls the appendMoreBooks() function.
const langSelect = document.getElementById('language-select');
langSelect.addEventListener('sl-change', event => {
page = 0;
language = event.srcElement.value;
document.getElementById('library').replaceChildren();
console.log(`Language selected: "${language}"`);
appendMoreBooks(server, page++, language);
});
Similarly for the select box: we add an event handler to be notified of changes in the language selection. As with the button, we then call the appendMoreBooks() function, passing the REST API URL, the current page, and the language selection.
So let's have a look at that function that fetches and appends books:
async function appendMoreBooks(server, page, language) {
const searchUrl = new URL(server);
if (!!page) searchUrl.searchParams.append('page', page);
if (!!language) searchUrl.searchParams.append('language', language);
const response = await fetch(searchUrl.href);
const books = await response.json();
...
}
Above, we craft the exact URL to use to call the REST API. There are three query parameters we could normally specify, but here in this UI, we only use two:
- page — an integer indicating the current page in the pagination of books,
- language — a language string to filter by written language.
We then use the Fetch API to retrieve the JSON array containing our book details.
const linkHeader = response.headers.get('Link')
console.log('Link', linkHeader);
if (!!linkHeader && linkHeader.indexOf('rel="next"') > -1) {
console.log('Show more button');
document.getElementById('buttons').style.display = 'block';
} else {
console.log('Hide more button');
document.getElementById('buttons').style.display = 'none';
}
Depending on whether the Link header is present in the response, we show or hide the [More books...] button: the presence of a next URL in the Link header is a hint that there are more books still to load.
const library = document.getElementById('library');
const template = document.getElementById('book-card');
for (let book of books) {
const bookCard = template.content.cloneNode(true);
bookCard.querySelector('slot[name=title]').innerText = book.title;
bookCard.querySelector('slot[name=language]').innerText = book.language;
bookCard.querySelector('slot[name=author]').innerText = book.author;
bookCard.querySelector('slot[name=year]').innerText = book.year;
bookCard.querySelector('slot[name=pages]').innerText = book.pages;
const img = document.createElement('img');
img.setAttribute('id', book.isbn);
img.setAttribute('class', 'img-barcode-' + book.isbn)
bookCard.querySelector('slot[name=barcode]').appendChild(img);
library.appendChild(bookCard);
...
}
}
In the above section of the function, for each book returned by the REST API, we're going to clone the template with some web components representing a book, and we're populating the slots of the template with the details of the book.
JsBarcode('.img-barcode-' + book.isbn).EAN13(book.isbn, {fontSize: 18, textMargin: 0, height: 60}).render();
To make the ISBN code a bit prettier, we use the JsBarcode library to create a nice barcode like on the back cover of real books!
Running and testing the application locally
Enough code for now, it's time to see the application in action. First, we'll do so locally, within Cloud Shell, before deploying for real.
We install the NPM modules needed by our application with:
$ npm install
And we either run the app with the usual:
$ npm start
Or, with auto-reloading upon changes thanks to nodemon:
$ npm run dev
The application is now running locally, and we can access it from the browser at http://localhost:8080.
Deploying the App Engine application
Now that we're confident our application runs fine locally, it's time to deploy it on App Engine.
In order to deploy the application, let's launch the following command:
$ gcloud app deploy -q
After about a minute, the application should be deployed.
The application will be available at a URL of the form https://${GOOGLE_CLOUD_PROJECT}.appspot.com.
Exploring the UI of our App Engine web application
Now you can:
- Click the [More books...] button to load more books.
- Select a particular language to see only books in that language.
- Clear the selection with the little cross in the select box to come back to the full list of books.
10. Clean up (optional)
If you don't intend to keep the app, you can clean up resources to save costs and to be an overall good cloud citizen by deleting the whole project:
$ gcloud projects delete ${GOOGLE_CLOUD_PROJECT}
11. Congratulations!
We created a set of services with Cloud Functions, App Engine, and Cloud Run, exposing various Web API endpoints and a web frontend to store, update, and browse a library of books, following some good design patterns for REST API development along the way.
What we've covered
- Cloud Functions
- Cloud Firestore
- Cloud Run
- App Engine
Going further
If you want to further explore this concrete example and expand it, here's a list of things you might want to investigate:
- Take advantage of API Gateway to provide a common API façade to the data import function and REST API container, to add features like handling API keys to access the API, or define rate limitations for API consumers.
- Deploy the Swagger-UI node module in the App Engine application to document and offer a test playground for the REST API.
- On the frontend, beyond the existing browsing capability, add extra screens to edit the data and create new book entries. Also, since we are using the Cloud Firestore database, leverage its real-time features to update the displayed book data as changes are made.