1. Overview
In this first codelab, you will store pictures in a bucket. This will generate a file creation event that will be handled by a service deployed in Cloud Run. The service will call the Vision API to analyse the image and save the results in a datastore.
What you'll learn
- Cloud Storage
- Cloud Run
- Cloud Vision API
- Cloud Firestore
2. Setup and Requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
- The Project ID is unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID; see the command sketch after this list). If you don't like the generated ID, you can generate another random one, or try your own and see if it's available. It can't be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.
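If you ever need to look up your Project ID later, one convenient way (assuming you're working in Cloud Shell, where the project is already configured) is to ask gcloud for it:
gcloud config get-value project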
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.
3. Enable APIs
For this lab, you will be using Cloud Run, Cloud Build, and the Vision API, but first they need to be enabled, either in the Cloud Console or with gcloud.
To enable the Vision API in the Cloud Console, search for Cloud Vision API in the search bar:
You will land on the Cloud Vision API page:
Click the ENABLE button.
Alternatively, you can also enable it from Cloud Shell using the gcloud command line tool.
Inside Cloud Shell, run the following command:
gcloud services enable vision.googleapis.com
You should see the operation finish successfully:
Operation "operations/acf.12dba18b-106f-4fd2-942d-fea80ecc5c1c" finished successfully.
Enable Cloud Run and Cloud Build as well:
gcloud services enable cloudbuild.googleapis.com \
  run.googleapis.com
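Optionally, you can double-check that the APIs are now enabled; one quick way is to filter the list of enabled services (the grep pattern here is just an example):
gcloud services list --enabled | grep -E 'vision|run|cloudbuild'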
4. Create the bucket (console)
Create a storage bucket for the pictures. You can do this from the Google Cloud console (console.cloud.google.com) or with the gsutil command line tool from Cloud Shell or your local development environment.
Navigate to Storage
From the "hamburger" (☰) menu, navigate to the Storage
page.
Name your bucket
Click on the CREATE BUCKET button.
Click CONTINUE.
Choose Location
Create a multi-regional bucket in the region of your choice (here Europe).
Click CONTINUE.
Choose default storage class
Choose the Standard storage class for your data.
Click CONTINUE.
Set Access Control
As you will be working with publicly accessible images, you want all the pictures stored in this bucket to have the same uniform access control.
Choose the Uniform access control option.
Click CONTINUE.
Set Protection/Encryption
Keep the default (Google-managed key), as you won't use your own encryption keys.
Click CREATE to finalize the bucket creation.
Add allUsers as storage viewer
Go to the Permissions tab:
Add an allUsers member to the bucket, with a role of Storage > Storage Object Viewer, as follows:
Click SAVE.
5. Create the bucket (gsutil)
You can also use the gsutil command line tool in Cloud Shell to create buckets.
In Cloud Shell, set a variable for the unique bucket name. Cloud Shell already has GOOGLE_CLOUD_PROJECT set to your unique project ID. You can append that to the bucket name.
For example:
export BUCKET_PICTURES=uploaded-pictures-${GOOGLE_CLOUD_PROJECT}
Create a standard multi-region bucket in Europe:
gsutil mb -l EU gs://${BUCKET_PICTURES}
Ensure uniform bucket level access:
gsutil uniformbucketlevelaccess set on gs://${BUCKET_PICTURES}
Make the bucket public:
gsutil iam ch allUsers:objectViewer gs://${BUCKET_PICTURES}
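Optionally, to confirm that the allUsers binding was applied, you can inspect the bucket's IAM policy:
gsutil iam get gs://${BUCKET_PICTURES}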
If you go to the Cloud Storage section of the console, you should have a public uploaded-pictures bucket:
Test that you can upload pictures to the bucket and that the uploaded pictures are publicly available, as explained in the next step.
6. Test public access to the bucket
Going back to the storage browser, you'll see your bucket in the list, with "Public" access (including a warning sign reminding you that anyone has access to the content of that bucket).
Your bucket is now ready to receive pictures.
If you click on the bucket name, you'll see the bucket details.
There, you can try the Upload files button to test that you can add a picture to the bucket. A file chooser popup will ask you to select a file. Once selected, it'll be uploaded to your bucket, and you will again see the public access that has been automatically attributed to this new file.
Alongside the Public access label, you will also see a little link icon. Clicking it takes your browser to the public URL of that image, which will be of the form:
https://storage.googleapis.com/BUCKET_NAME/PICTURE_FILE.png
where BUCKET_NAME is the globally unique name you chose for your bucket, followed by the file name of your picture.
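You can also check this from the command line; for instance, assuming you uploaded a file named my-picture.png (a hypothetical name), a HEAD request against the public URL should return an HTTP 200 status:
curl -I https://storage.googleapis.com/BUCKET_NAME/my-picture.png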
By clicking the checkbox next to the picture name, the DELETE button will be enabled, and you can delete this first image.
7. Prepare the database
You will store the information about the picture returned by the Vision API in Cloud Firestore, a fast, fully managed, serverless, cloud-native NoSQL document database. Prepare your database by going to the Firestore section of the Cloud Console:
Two options are offered: Native mode or Datastore mode. Use Native mode, which offers extra features like offline support and real-time synchronization.
Click on SELECT NATIVE MODE.
Pick a multi-region (here Europe, ideally the same location as your Cloud Run service and storage bucket).
Click the CREATE DATABASE button.
Once the database is created, you should see the following:
Create a new collection by clicking the + START COLLECTION button.
Name the collection pictures.
You don't need to create a document. You'll add them programmatically as new pictures are stored in Cloud Storage and analysed by the Vision API.
Click Save.
Firestore creates a first default document in the newly created collection; you can safely delete it, as it doesn't contain any useful information:
The documents created programmatically in this collection will contain the following fields:
- name (string): the file name of the uploaded picture, which is also the key of the document
- labels (array of strings): the labels of items recognised by the Vision API
- color (string): the hexadecimal code of the dominant color (e.g. #ab12ef)
- created (date): the timestamp of when this image's metadata was stored
- thumbnail (boolean): an optional field that will be present and be true if a thumbnail image has been generated for this picture
As we will be querying Firestore for pictures that have thumbnails available, sorted by creation date, we need to create a composite index.
You can create the index with the following command in Cloud Shell:
gcloud firestore indexes composite create \
--collection-group=pictures \
--field-config field-path=thumbnail,order=descending \
--field-config field-path=created,order=descending
Alternatively, you can do it from the Cloud Console by clicking Indexes in the navigation column on the left, and then creating a composite index as shown below:
Click Create. Index creation can take a few minutes.
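While you wait, you can check the index status from Cloud Shell; it will show READY once the index is usable:
gcloud firestore indexes composite list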
8. Clone the code
Clone the code, if you haven't already done so in the previous codelab:
git clone https://github.com/GoogleCloudPlatform/serverless-photosharing-workshop
You can then go to the directory containing the service to start building the lab:
cd serverless-photosharing-workshop/services/image-analysis/java
You will have the following file layout for the service:
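If you'd like to inspect that layout from Cloud Shell, a simple way (ignoring any build output that may already exist) is:
find . -type f -not -path './target/*'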
9. Explore the service code
Start by looking at how the Java client libraries are enabled in the pom.xml using a BOM.
Open the pom.xml file, which lists the dependencies of our Java app; the focus is on the usage of the Vision, Cloud Storage and Firestore APIs:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.2.0-M3</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>services</groupId>
<artifactId>image-analysis</artifactId>
<version>0.0.1</version>
<name>image-analysis</name>
<description>Spring App for Image Analysis</description>
<properties>
<java.version>17</java.version>
<maven.compiler.target>17</maven.compiler.target>
<maven.compiler.source>17</maven.compiler.source>
<spring-cloud.version>2023.0.0-M2</spring-cloud.version>
<testcontainers.version>1.19.1</testcontainers.version>
</properties>
...
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>libraries-bom</artifactId>
<version>26.24.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
...
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-web</artifactId>
</dependency>
<dependency>
<groupId>com.google.cloud.functions</groupId>
<artifactId>functions-framework-api</artifactId>
<version>1.1.0</version>
<type>jar</type>
</dependency>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-firestore</artifactId>
</dependency>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-vision</artifactId>
</dependency>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-storage</artifactId>
</dependency>
The functionality is implemented in the EventController class. Each time a new image is uploaded to the bucket, the service receives a notification to process:
@RestController
public class EventController {
private static final Logger logger = Logger.getLogger(EventController.class.getName());
private static final List<String> requiredFields = Arrays.asList("ce-id", "ce-source", "ce-type", "ce-specversion");
@RequestMapping(value = "/", method = RequestMethod.POST)
public ResponseEntity<String> receiveMessage(
@RequestBody Map<String, Object> body, @RequestHeader Map<String, String> headers) throws IOException, InterruptedException, ExecutionException {
...
}
The code proceeds to validate the CloudEvents headers:
System.out.println("Header elements");
for (String field : requiredFields) {
if (headers.get(field) == null) {
String msg = String.format("Missing expected header: %s.", field);
System.out.println(msg);
return new ResponseEntity<String>(msg, HttpStatus.BAD_REQUEST);
} else {
System.out.println(field + " : " + headers.get(field));
}
}
System.out.println("Body elements");
for (String bodyField : body.keySet()) {
System.out.println(bodyField + " : " + body.get(bodyField));
}
if (headers.get("ce-subject") == null) {
String msg = "Missing expected header: ce-subject.";
System.out.println(msg);
return new ResponseEntity<String>(msg, HttpStatus.BAD_REQUEST);
}
A request can now be built; the code prepares one such request to be sent to the Vision API:
try (ImageAnnotatorClient vision = ImageAnnotatorClient.create()) {
List<AnnotateImageRequest> requests = new ArrayList<>();
ImageSource imageSource = ImageSource.newBuilder()
.setGcsImageUri("gs://" + bucketName + "/" + fileName)
.build();
Image image = Image.newBuilder()
.setSource(imageSource)
.build();
Feature featureLabel = Feature.newBuilder()
.setType(Type.LABEL_DETECTION)
.build();
Feature featureImageProps = Feature.newBuilder()
.setType(Type.IMAGE_PROPERTIES)
.build();
Feature featureSafeSearch = Feature.newBuilder()
.setType(Type.SAFE_SEARCH_DETECTION)
.build();
AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
.addFeatures(featureLabel)
.addFeatures(featureImageProps)
.addFeatures(featureSafeSearch)
.setImage(image)
.build();
requests.add(request);
We're asking for 3 key capabilities of the Vision API:
- Label detection: to understand what's in those pictures
- Image properties: to give interesting attributes of the picture (we're interested in the dominant color of the picture)
- Safe search: to know if the image is safe to show (it shouldn't contain adult / medical / racy / violent content)
At this point, we can make the call to the Vision API:
...
logger.info("Calling the Vision API...");
BatchAnnotateImagesResponse result = vision.batchAnnotateImages(requests);
List<AnnotateImageResponse> responses = result.getResponsesList();
...
For reference, here's what the response from the Vision API looks like:
{
"faceAnnotations": [],
"landmarkAnnotations": [],
"logoAnnotations": [],
"labelAnnotations": [
{
"locations": [],
"properties": [],
"mid": "/m/01yrx",
"locale": "",
"description": "Cat",
"score": 0.9959855675697327,
"confidence": 0,
"topicality": 0.9959855675697327,
"boundingPoly": null
},
✄ - - - ✄
],
"textAnnotations": [],
"localizedObjectAnnotations": [],
"safeSearchAnnotation": {
"adult": "VERY_UNLIKELY",
"spoof": "UNLIKELY",
"medical": "VERY_UNLIKELY",
"violence": "VERY_UNLIKELY",
"racy": "VERY_UNLIKELY",
"adultConfidence": 0,
"spoofConfidence": 0,
"medicalConfidence": 0,
"violenceConfidence": 0,
"racyConfidence": 0,
"nsfwConfidence": 0
},
"imagePropertiesAnnotation": {
"dominantColors": {
"colors": [
{
"color": {
"red": 203,
"green": 201,
"blue": 201,
"alpha": null
},
"score": 0.4175916016101837,
"pixelFraction": 0.44456374645233154
},
✄ - - - ✄
]
}
},
"error": null,
"cropHintsAnnotation": {
"cropHints": [
{
"boundingPoly": {
"vertices": [
{ "x": 0, "y": 118 },
{ "x": 1177, "y": 118 },
{ "x": 1177, "y": 783 },
{ "x": 0, "y": 783 }
],
"normalizedVertices": []
},
"confidence": 0.41695669293403625,
"importanceFraction": 1
}
]
},
"fullTextAnnotation": null,
"webDetection": null,
"productSearchResults": null,
"context": null
}
If no error is returned, we can move on; hence these checks:
if (responses.size() == 0) {
    String msg = "No response received from Vision API.";
    logger.info(msg);
    return new ResponseEntity<String>(msg, HttpStatus.BAD_REQUEST);
}

AnnotateImageResponse response = responses.get(0);
if (response.hasError()) {
    String msg = "Error: " + response.getError().getMessage();
    logger.info(msg);
    return new ResponseEntity<String>(msg, HttpStatus.BAD_REQUEST);
}
We are going to get the labels of the things, categories or themes recognised in the picture:
List<String> labels = response.getLabelAnnotationsList().stream()
.map(annotation -> annotation.getDescription())
.collect(Collectors.toList());
logger.info("Annotations found:");
for (String label: labels) {
logger.info("- " + label);
}
We're interested in knowing the dominant color of the picture:
String mainColor = "#FFFFFF";
ImageProperties imgProps = response.getImagePropertiesAnnotation();
if (imgProps.hasDominantColors()) {
DominantColorsAnnotation colorsAnn = imgProps.getDominantColors();
ColorInfo colorInfo = colorsAnn.getColors(0);
mainColor = rgbHex(
colorInfo.getColor().getRed(),
colorInfo.getColor().getGreen(),
colorInfo.getColor().getBlue());
logger.info("Color: " + mainColor);
}
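The rgbHex helper used above is not shown in this snippet; a minimal sketch of what such a helper could look like (an assumption, simply formatting the float RGB components returned by the Vision API as a hex string) is:
private static String rgbHex(float red, float green, float blue) {
    // Format each component (0-255) as two hexadecimal digits, e.g. #ab12ef
    return String.format("#%02x%02x%02x", (int) red, (int) green, (int) blue);
}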
Let's check if the picture is safe to show:
boolean isSafe = false;
if (response.hasSafeSearchAnnotation()) {
SafeSearchAnnotation safeSearch = response.getSafeSearchAnnotation();
isSafe = Stream.of(
safeSearch.getAdult(), safeSearch.getMedical(), safeSearch.getRacy(),
safeSearch.getSpoof(), safeSearch.getViolence())
.allMatch( likelihood ->
likelihood != Likelihood.LIKELY && likelihood != Likelihood.VERY_LIKELY
);
logger.info("Safe? " + isSafe);
}
We're checking the adult / spoof / medical / violence / racy flags to make sure none of them is LIKELY or VERY_LIKELY.
If the result of the safe search is okay, we can store metadata in Firestore:
// Saving result to Firestore
if (isSafe) {
ApiFuture<WriteResult> writeResult =
eventService.storeImage(fileName, labels,
mainColor);
logger.info("Picture metadata saved in Firestore at " +
writeResult.get().getUpdateTime());
}
...
public ApiFuture<WriteResult> storeImage(String fileName,
List<String> labels,
String mainColor) {
FirestoreOptions firestoreOptions = FirestoreOptions.getDefaultInstance();
Firestore pictureStore = firestoreOptions.getService();
DocumentReference doc = pictureStore.collection("pictures").document(fileName);
Map<String, Object> data = new HashMap<>();
data.put("labels", labels);
data.put("color", mainColor);
data.put("created", new Date());
return doc.set(data, SetOptions.merge());
}
10. Build App Images with GraalVM
In this optional step, you will build a JIT-based app image, then a native Java app image, using GraalVM.
To run the build, you will need to ensure that you have an appropriate JDK and the native-image builder installed and configured. There are several options available.
To start, download the GraalVM 22.3.x Community Edition and follow the instructions on the GraalVM installation page.
This process can be greatly simplified with the help of SDKMAN!
To install the appropriate JDK distribution with SDKMAN!, start by using the install command:
sdk install java 17.0.8-graal
Instruct SDKMAN! to use this version, for both JIT and AOT builds:
sdk use java 17.0.8-graal
In Cloud Shell, for your convenience, you can install GraalVM and the native-image utility with these simple commands:
# download GraalVM
wget https://download.oracle.com/graalvm/17/latest/graalvm-jdk-17_linux-x64_bin.tar.gz
tar -xzf graalvm-jdk-17_linux-x64_bin.tar.gz
ls -lart

# configure Java 17 and GraalVM for Java 17
# note the name of the latest GraalVM version, as unpacked by the tar command
echo Existing JVM: $JAVA_HOME
cd graalvm-jdk-17.0.8+9.1
export JAVA_HOME=$PWD
cd bin
export PATH=$PWD:$PATH
echo JAVA HOME: $JAVA_HOME
echo PATH: $PATH
cd ../..

# validate the version with
java -version

# observe
Java(TM) SE Runtime Environment Oracle GraalVM 17.0.8+9.1 (build 17.0.8+9-LTS-jvmci-23.0-b14)
Java HotSpot(TM) 64-Bit Server VM Oracle GraalVM 17.0.8+9.1 (build 17.0.8+9-LTS-jvmci-23.0-b14, mixed mode, sharing)
First, set the GCP project environment variables:
export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)
You can then go to the directory containing the service to start building the lab:
cd serverless-photosharing-workshop/services/image-analysis/java
Build the JIT application image:
./mvnw package
Observe the build log in the terminal:
...
[INFO] Results:
[INFO]
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- maven-jar-plugin:3.3.0:jar (default-jar) @ image-analysis ---
[INFO] Building jar: /home/user/serverless-photosharing-workshop/services/image-analysis/java/target/image-analysis-0.0.1.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:3.2.0-M3:repackage (repackage) @ image-analysis ---
[INFO] Replacing main artifact /home/user/serverless-photosharing-workshop/services/image-analysis/java/target/image-analysis-0.0.1.jar with repackaged archive, adding nested dependencies in BOOT-INF/.
[INFO] The original artifact has been renamed to /home/user/serverless-photosharing-workshop/services/image-analysis/java/target/image-analysis-0.0.1.jar.original
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  15.335 s
[INFO] Finished at: 2023-10-10T19:33:25Z
[INFO] ------------------------------------------------------------------------
Build the native (AOT-compiled) image:
./mvnw native:compile -Pnative
Observe the build log in the terminal, including the native image build logs:
Note that the build takes quite a bit longer, depending on the machine you are testing on.
...
[2/7] Performing analysis...  [*********]                                          (124.5s @ 4.53GB)
  29,732 (93.19%) of 31,905 classes reachable
  60,161 (70.30%) of 85,577 fields reachable
 261,973 (67.29%) of 389,319 methods reachable
   2,940 classes, 2,297 fields, and 97,421 methods registered for reflection
      81 classes, 90 fields, and 62 methods registered for JNI access
       4 native libraries: dl, pthread, rt, z
[3/7] Building universe...                                                           (11.7s @ 4.67GB)
[4/7] Parsing methods...      [***]                                                   (6.1s @ 5.91GB)
[5/7] Inlining methods...     [****]                                                  (4.5s @ 4.39GB)
[6/7] Compiling methods...    [******]                                               (35.3s @ 4.60GB)
[7/7] Creating image...                                                              (12.9s @ 4.61GB)
  80.08MB (47.43%) for code area:   190,483 compilation units
  73.81MB (43.72%) for image heap:  660,125 objects and 189 resources
  14.95MB ( 8.86%) for other data
 168.84MB in total
------------------------------------------------------------------------------------------------------------------------
Top 10 packages in code area:                               Top 10 object types in image heap:
   2.66MB com.google.cloud.vision.v1p4beta1                   18.51MB byte[] for code metadata
   2.60MB com.google.cloud.vision.v1                           9.27MB java.lang.Class
   2.49MB com.google.protobuf                                  7.34MB byte[] for reflection metadata
   2.40MB com.google.cloud.vision.v1p3beta1                    6.35MB byte[] for java.lang.String
   2.17MB com.google.storage.v2                                5.72MB java.lang.String
   2.12MB com.google.firestore.v1                              4.46MB byte[] for embedded resources
   1.64MB sun.security.ssl                                     4.30MB c.oracle.svm.core.reflect.SubstrateMethodAccessor
   1.51MB i.g.xds.shaded.io.envoyproxy.envoy.config.core.v3    4.27MB byte[] for general heap data
   1.47MB com.google.cloud.vision.v1p2beta1                    2.50MB com.oracle.svm.core.hub.DynamicHubCompanion
   1.34MB i.g.x.shaded.io.envoyproxy.envoy.config.route.v3     1.17MB java.lang.Object[]
  58.34MB for 977 more packages                                9.19MB for 4667 more object types
------------------------------------------------------------------------------------------------------------------------
                13.5s (5.7% of total time) in 75 GCs | Peak RSS: 9.44GB | CPU load: 6.13
------------------------------------------------------------------------------------------------------------------------
Produced artifacts:
 /home/user/serverless-photosharing-workshop/services/image-analysis/java/target/image-analysis (executable)
 /home/user/serverless-photosharing-workshop/services/image-analysis/java/target/image-analysis.build_artifacts.txt (txt)
========================================================================================================================
Finished generating '/home/user/serverless-photosharing-workshop/services/image-analysis/java/target/image-analysis' in 3m 57s.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  04:28 min
[INFO] Finished at: 2023-10-10T19:53:30Z
[INFO] ------------------------------------------------------------------------
11. Build and Publish Container Images
Let's build the container image in two different versions: one as a JIT image and the other as a native Java image.
First, set the GCP project environment variables:
export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)
Build the JIT image:
./mvnw spring-boot:build-image -Pjit
Observe the build log in the terminal:
[INFO]     [creator]     Timer: Saving docker.io/library/image-analysis-maven-jit:latest... started at 2023-10-10T20:00:31Z
[INFO]     [creator]     *** Images (4c84122a1826):
[INFO]     [creator]           docker.io/library/image-analysis-maven-jit:latest
[INFO]     [creator]     Timer: Saving docker.io/library/image-analysis-maven-jit:latest... ran for 6.975913605s and ended at 2023-10-10T20:00:38Z
[INFO]     [creator]     Timer: Exporter ran for 8.068588001s and ended at 2023-10-10T20:00:38Z
[INFO]     [creator]     Timer: Cache started at 2023-10-10T20:00:38Z
[INFO]     [creator]     Reusing cache layer 'paketo-buildpacks/syft:syft'
[INFO]     [creator]     Adding cache layer 'buildpacksio/lifecycle:cache.sbom'
[INFO]     [creator]     Timer: Cache ran for 200.449002ms and ended at 2023-10-10T20:00:38Z
[INFO]
[INFO] Successfully built image 'docker.io/library/image-analysis-maven-jit:latest'
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  43.887 s
[INFO] Finished at: 2023-10-10T20:00:39Z
[INFO] ------------------------------------------------------------------------
Build the AOT (native) image:
./mvnw spring-boot:build-image -Pnative
Observe the build log in the terminal, including the native image build logs.
Note:
- The build takes quite a bit longer, depending on the machine you are testing on.
- The images can be further compressed with UPX; however, this has a small negative impact on start-up performance, so this build does not use UPX. It is always a slight trade-off.
...
[INFO]     [creator]     Saving docker.io/library/image-analysis-maven-native:latest...
[INFO]     [creator]     *** Images (13167702674e):
[INFO]     [creator]           docker.io/library/image-analysis-maven-native:latest
[INFO]     [creator]     Adding cache layer 'paketo-buildpacks/bellsoft-liberica:native-image-svm'
[INFO]     [creator]     Adding cache layer 'paketo-buildpacks/syft:syft'
[INFO]     [creator]     Adding cache layer 'paketo-buildpacks/native-image:native-image'
[INFO]     [creator]     Adding cache layer 'buildpacksio/lifecycle:cache.sbom'
[INFO]
[INFO] Successfully built image 'docker.io/library/image-analysis-maven-native:latest'
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  03:37 min
[INFO] Finished at: 2023-10-10T20:05:16Z
[INFO] ------------------------------------------------------------------------
Validate that the images have been built:
docker images | grep image-analysis
Tag and push the two images to GCR:
# JIT image
docker tag image-analysis-maven-jit gcr.io/${GOOGLE_CLOUD_PROJECT}/image-analysis-maven-jit
docker push gcr.io/${GOOGLE_CLOUD_PROJECT}/image-analysis-maven-jit

# Native(AOT) image
docker tag image-analysis-maven-native gcr.io/${GOOGLE_CLOUD_PROJECT}/image-analysis-maven-native
docker push gcr.io/${GOOGLE_CLOUD_PROJECT}/image-analysis-maven-native
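Optionally, you can confirm that both images are now available in Container Registry:
gcloud container images list --repository=gcr.io/${GOOGLE_CLOUD_PROJECT}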
12. Deploy to Cloud Run
Time to deploy the service.
You will deploy the service twice, once using the JIT image and once using the AOT (native) image. Both service deployments will process the same image from the bucket in parallel, for comparison purposes.
First, set the GCP project environment variables:
export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)

gcloud config set project ${GOOGLE_CLOUD_PROJECT}
gcloud config set run/region europe-west1
gcloud config set run/platform managed
gcloud config set eventarc/location europe-west1
Deploy the JIT image and observe the deployment log in the console:
gcloud run deploy image-analysis-jit \
     --image gcr.io/${GOOGLE_CLOUD_PROJECT}/image-analysis-maven-jit \
     --region europe-west1 \
     --memory 2Gi --allow-unauthenticated

...
Deploying container to Cloud Run service [image-analysis-jit] in project [...] region [europe-west1]
✓ Deploying... Done.
  ✓ Creating Revision...
  ✓ Routing traffic...
  ✓ Setting IAM Policy...
Done.
Service [image-analysis-jit] revision [image-analysis-jvm-00009-huc] has been deployed and is serving 100 percent of traffic.
Service URL: https://image-analysis-jit-...-ew.a.run.app
Deploy the Native image and observe the deployment log in the console:
gcloud run deploy image-analysis-native \
     --image gcr.io/${GOOGLE_CLOUD_PROJECT}/image-analysis-maven-native \
     --region europe-west1 \
     --memory 2Gi --allow-unauthenticated

...
Deploying container to Cloud Run service [image-analysis-native] in project [...] region [europe-west1]
✓ Deploying... Done.
  ✓ Creating Revision...
  ✓ Routing traffic...
  ✓ Setting IAM Policy...
Done.
Service [image-analysis-native] revision [image-analysis-native-00005-ben] has been deployed and is serving 100 percent of traffic.
Service URL: https://image-analysis-native-...-ew.a.run.app
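To confirm that both services are up and see their URLs, you can list them:
gcloud run services list --region europe-west1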
13. Setup Eventarc Triggers
Eventarc offers a standardized solution to manage the flow of state changes, called events, between decoupled microservices. When triggered, Eventarc routes these events through Pub/Sub subscriptions to various destinations while managing delivery, security, authorization, observability, and error handling for you.
You can create an Eventarc trigger so that your Cloud Run service receives notifications of a specified event or set of events. By specifying filters for the trigger, you can configure the routing of the event, including the event source and the target Cloud Run service.
First, set the GCP project environment variables:
export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)

gcloud config set project ${GOOGLE_CLOUD_PROJECT}
gcloud config set run/region europe-west1
gcloud config set run/platform managed
gcloud config set eventarc/location europe-west1
Grant pubsub.publisher to the Cloud Storage service account:
SERVICE_ACCOUNT="$(gsutil kms serviceaccount -p ${GOOGLE_CLOUD_PROJECT})"

gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
    --member="serviceAccount:${SERVICE_ACCOUNT}" \
    --role='roles/pubsub.publisher'
Set up Eventarc triggers for both the JIT and native service images to process the image. The triggers run as the default compute service account, so first capture your project number:
PROJECT_NUMBER=$(gcloud projects describe ${GOOGLE_CLOUD_PROJECT} --format='value(projectNumber)')

gcloud eventarc triggers create image-analysis-jit-trigger \
     --destination-run-service=image-analysis-jit \
     --destination-run-region=europe-west1 \
     --location=eu \
     --event-filters="type=google.cloud.storage.object.v1.finalized" \
     --event-filters="bucket=uploaded-pictures-${GOOGLE_CLOUD_PROJECT}" \
     --service-account=${PROJECT_NUMBER}-compute@developer.gserviceaccount.com

gcloud eventarc triggers create image-analysis-native-trigger \
     --destination-run-service=image-analysis-native \
     --destination-run-region=europe-west1 \
     --location=eu \
     --event-filters="type=google.cloud.storage.object.v1.finalized" \
     --event-filters="bucket=uploaded-pictures-${GOOGLE_CLOUD_PROJECT}" \
     --service-account=${PROJECT_NUMBER}-compute@developer.gserviceaccount.com
Observe that the two triggers have been created:
gcloud eventarc triggers list --location=eu
14. Test Service Versions
Once the service deployments are successful, you will post a picture to Cloud Storage, check that the services were invoked, see what the Vision API returns, and verify that metadata is stored in Firestore.
Navigate back to Cloud Storage and click on the bucket created at the beginning of the lab:
Once on the bucket details page, click the Upload files button to upload a picture.
For example, a GeekHour.jpeg image is provided with your codebase under /services/image-analysis/java. Select an image and press the Open button:
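Alternatively, you can upload the sample image from Cloud Shell (a sketch, assuming your current directory contains GeekHour.jpeg and that BUCKET_PICTURES is still set from earlier):
gsutil cp GeekHour.jpeg gs://${BUCKET_PICTURES}/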
You can now check the execution of the services, starting with image-analysis-jit, followed by image-analysis-native.
From the "hamburger" (☰) menu, navigate to the Cloud Run > image-analysis-jit
service.
Click on Logs and observe the output:
And indeed, in the list of logs, you can see that the JIT service image-analysis-jit was invoked.
The logs indicate the start and end of the service execution. In between, we can see the entries produced by our service's log statements at INFO level. We see:
- The details of the event triggering our function,
- The raw results from the Vision API call,
- The labels that were found in the picture we uploaded,
- The dominant colors information,
- Whether the picture is safe to show,
- And finally, that the picture's metadata has been stored in Firestore.
You will repeat the process for the image-analysis-native service.
From the "hamburger" (☰) menu, navigate to the Cloud Run > image-analysis-native
service.
Click on Logs and observe the output:
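If you prefer the command line, you can also fetch recent log entries with Cloud Logging (a sketch; adjust the service name and limit as needed):
gcloud logging read 'resource.type=cloud_run_revision AND resource.labels.service_name=image-analysis-native' --limit=20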
Now observe whether the image metadata has been stored in Firestore.
Again from the "hamburger" (☰) menu, go to the Firestore
section. In the Data
subsection (shown by default), you should see the pictures
collection with a new document added, corresponding to the picture you just uploaded:
15. Clean up (Optional)
If you don't intend to continue with the other labs in the series, you can clean up resources to save costs and to be an overall good cloud citizen. You can clean up resources individually as follows.
Delete the bucket:
gsutil rb gs://${BUCKET_PICTURES}
Delete the Cloud Run services and the Eventarc triggers:
gcloud run services delete image-analysis-jit --region europe-west1 --quiet
gcloud run services delete image-analysis-native --region europe-west1 --quiet
gcloud eventarc triggers delete image-analysis-jit-trigger --location=eu --quiet
gcloud eventarc triggers delete image-analysis-native-trigger --location=eu --quiet
Delete the Firestore collection by selecting Delete collection from the collection's menu:
Alternatively, you can delete the whole project:
gcloud projects delete ${GOOGLE_CLOUD_PROJECT}
16. Congratulations!
Congratulations! You've successfully implemented the first key service of the project!
What we've covered
- Cloud Storage
- Cloud Run
- Cloud Vision API
- Cloud Firestore
- Native Java Images