Host and scale a web app in Google Cloud with Compute Engine

There are many ways to deploy websites in Google Cloud, with each solution offering different features, capabilities, and levels of control. Compute Engine offers a deep level of control over the infrastructure used to run a website, but it also requires a little more operational management compared to solutions like Google Kubernetes Engine, App Engine, or others. With Compute Engine, you have fine-grained control over aspects of the infrastructure, including the virtual machines, load balancer, and more. In this codelab, you'll deploy a sample app, the Fancy Store's ecommerce website, to show how a website can be deployed and scaled easily with Compute Engine.

What you'll learn

At the end of the codelab, you'll have instances inside managed instance groups to provide autohealing, load balancing, autoscaling, and rolling updates for your website.

Prerequisites

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. (If you don't already have a Gmail or G Suite account, you must create one.)

Remember the project ID, a name that is unique across all Google Cloud projects. It will be referred to later in this codelab as PROJECT_ID.

  2. Next, you'll need to enable billing in Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost much, if anything at all. Be sure to follow the instructions in the "Cleaning up" section, which advises you how to shut down resources so that you don't incur billing beyond this tutorial. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Enable Compute Engine API

Next, you need to enable the Compute Engine API. Enabling an API requires you to accept the terms of service and billing responsibility for the API.

In Cloud Shell, execute the following to enable the Compute Engine API:

gcloud services enable compute.googleapis.com

Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you'll use Cloud Shell, a command line environment running in the Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

  1. To activate Cloud Shell from the Cloud Console, click Activate Cloud Shell. It should take only a few moments to provision and connect to the environment.


Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If, for some reason, the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check out what ID you used in the setup steps or look it up in the Cloud Console dashboard.

Cloud Shell also sets some environment variables by default, which may be useful as you run future commands.

echo $GOOGLE_CLOUD_PROJECT

Command output

<PROJECT_ID>
  2. Finally, set the default zone and project configuration.
gcloud config set compute/zone us-central1-f

You can choose a variety of different zones. For more information, see Regions & Zones.

Create Cloud Storage bucket

You'll use a Cloud Storage bucket to house your built code, as well as your startup scripts. In Cloud Shell, execute the following command to create a new Cloud Storage bucket:

gsutil mb gs://fancy-store-$DEVSHELL_PROJECT_ID

You'll use Fancy Store's existing ecommerce website based on the monolith-to-microservices repository as the basis for your website. You'll clone the source code from your repository so that you can focus on the aspects of deploying to Compute Engine. Later, you'll perform a small update to the code to demonstrate the simplicity of updates on Compute Engine.

You can automatically clone the code repository into the project, as well as open Cloud Shell and the built-in code editor, through the following link: Open in Cloud Shell.

Alternatively, you can manually clone the repository with the commands below inside Cloud Shell:

cd ~
git clone https://github.com/googlecodelabs/monolith-to-microservices.git
cd ~/monolith-to-microservices

At the Cloud Shell command prompt, run the initial build of the code to allow the app to run locally. It may take a few minutes for the script to run.

./setup.sh

Do your due diligence and test your app. Run the following command to start your web server:

cd microservices
npm start

Output:

Products microservice listening on port 8082!
Frontend microservice listening on port 8080!
Orders microservice listening on port 8081!

Preview your app by clicking the web preview icon and selecting "Preview on port 8080."

That should open a new window where you can see the frontend of the Fancy Store in action!

You can close this window after viewing the website. To stop the web server process, press Control+C (Command+C on Macintosh) in the terminal window.

Now that you have your working developer environment, you can deploy some Compute Engine instances! In the following steps, you will:

  1. Create a startup script to configure instances.
  2. Clone source code and upload it to Cloud Storage.
  3. Deploy a Compute Engine instance to host the backend microservices.
  4. Reconfigure the frontend code to utilize the backend microservices instance.
  5. Deploy a Compute Engine instance to host the frontend microservice.
  6. Configure the network to allow communication.

Create startup script

A startup script instructs the instance what to do each time it starts, so that instances are configured automatically.

Click the pencil icon in the Cloud Shell ribbon to open the code editor.

Navigate to the monolith-to-microservices folder. Click on File > New File and create a file called startup-script.sh.

In the new file, paste the following code, some of which you will edit after you paste it:

#!/bin/bash

# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &

# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor psmisc

# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v8.12.0/node-v8.12.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm

# Get the application source code from the Google Cloud Storage bucket.
mkdir /fancy-store
gsutil -m cp -r gs://fancy-store-[DEVSHELL_PROJECT_ID]/monolith-to-microservices/microservices/* /fancy-store/

# Install app dependencies.
cd /fancy-store/
npm install

# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /fancy-store

# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/fancy-store
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF

supervisorctl reread
supervisorctl update

Now, in the code editor, find the text [DEVSHELL_PROJECT_ID] and replace it with the output from the following command:

echo $DEVSHELL_PROJECT_ID

Example output:

my-gce-codelab-253520

The line of code in startup-script.sh should now be similar to the following:

gs://fancy-store-my-gce-codelab-253520/monolith-to-microservices/microservices/* /fancy-store/
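If you'd rather script the substitution than edit by hand, a single sed command can make the same replacement. This is a self-contained sketch demonstrating it on the line shown above with the example project ID; in practice you'd point `sed -i` at the real file with your own project ID.

```shell
# Demonstrate the [DEVSHELL_PROJECT_ID] replacement as a sed one-liner.
# The project ID below is the example value from above, not yours.
DEVSHELL_PROJECT_ID=my-gce-codelab-253520
line='gs://fancy-store-[DEVSHELL_PROJECT_ID]/monolith-to-microservices/microservices/* /fancy-store/'
updated=$(printf '%s' "$line" | sed "s/\[DEVSHELL_PROJECT_ID\]/${DEVSHELL_PROJECT_ID}/")
echo "$updated"
```

Against the real file, the equivalent would be `sed -i "s/\[DEVSHELL_PROJECT_ID\]/${DEVSHELL_PROJECT_ID}/g" ~/monolith-to-microservices/startup-script.sh`.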

The startup script performs the following tasks:

  • Installation of the Logging agent, which automatically collects logs from syslog
  • Installation of Node.js and Supervisor, which runs the app as a daemon
  • Cloning of the app's source code from the Cloud Storage bucket and installation of dependencies
  • Configuration of Supervisor, which runs the app, ensures that the app is restarted if it unexpectedly exits or is stopped by an admin or process, and sends the app's stdout and stderr to syslog for the Logging agent to collect

Now copy the created startup-script.sh file into your previously created Cloud Storage bucket:

gsutil cp ~/monolith-to-microservices/startup-script.sh gs://fancy-store-$DEVSHELL_PROJECT_ID

It's now accessible at https://storage.googleapis.com/[BUCKET_NAME]/startup-script.sh, where [BUCKET_NAME] represents the name of the Cloud Storage bucket. By default, the file is viewable only by authorized users and service accounts, so it's inaccessible through a web browser. Compute Engine instances automatically have access to it through their service accounts.

Copy code into Cloud Storage bucket

When instances launch, they pull code from the Cloud Storage bucket, so you can store some configuration variables in the .env file of the code.

Copy the cloned code into the Cloud Storage bucket:

cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/

Deploy backend instance

The first instance that you will deploy will be the backend instance, which will house the orders and products microservices.

Execute the following command in Cloud Shell to create an f1-micro instance that is configured to use your previously created startup script and tagged as a backend instance so that you can apply specific firewall rules to it later:

gcloud compute instances create backend \
    --machine-type=f1-micro \
    --image=debian-9-stretch-v20190905 \
    --image-project=debian-cloud \
    --tags=backend \
    --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh

Configure connection to backend

Before you deploy the frontend of the app, you need to update the configuration to point to the backend that you deployed.

Retrieve the external IP address of the backend, which appears in the EXTERNAL_IP column for the backend instance in the output of the following command:

gcloud compute instances list

Example output:

NAME     ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
backend  us-central1-a  f1-micro                   10.128.0.2   34.68.223.88  RUNNING
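Rather than copying the address by hand, you can parse it out of the list output. A sketch, shown against the sample output above; note that the empty PREEMPTIBLE column collapses in whitespace-separated output, so EXTERNAL_IP ends up as the second-to-last field of the instance row.

```shell
# Pull the EXTERNAL_IP for the "backend" row out of
# `gcloud compute instances list` style output (sample from above).
sample='NAME     ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
backend  us-central1-a  f1-micro                   10.128.0.2   34.68.223.88  RUNNING'
# EXTERNAL_IP is the second-to-last whitespace-separated field.
BACKEND_ADDRESS=$(printf '%s\n' "$sample" | awk '$1 == "backend" {print $(NF-1)}')
echo "$BACKEND_ADDRESS"
```

Alternatively, `gcloud compute instances describe backend --format='get(networkInterfaces[0].accessConfigs[0].natIP)'` prints the external address directly.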

In Cloud Shell's code editor, navigate to the folder monolith-to-microservices > react-app. From the Code Editor menu, select View > Toggle Hidden Files to see the .env file.

Edit the .env file to point to the external IP address of the backend. [BACKEND_ADDRESS] below represents the external IP address of the backend instance determined from the previous command in the gcloud tool.

REACT_APP_ORDERS_URL=http://[BACKEND_ADDRESS]:8081/api/orders
REACT_APP_PRODUCTS_URL=http://[BACKEND_ADDRESS]:8082/api/products

Save the file.

Use the following command to rebuild react-app, which will update the frontend code:

cd ~/monolith-to-microservices/react-app
npm install && npm run-script build

Copy the app code into the Cloud Storage bucket:

cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/

Deploy frontend instance

Now that the code is configured, deploy the frontend instance with a command similar to the previous one. This instance is tagged as "frontend" for firewall purposes.

gcloud compute instances create frontend \
    --machine-type=f1-micro \
    --image=debian-9-stretch-v20190905 \
    --image-project=debian-cloud \
    --tags=frontend \
    --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh 

Configure network

Create firewall rules to allow access to port 8080 for the frontend, and to ports 8081 and 8082 for the backend. The firewall rules use the tags assigned during instance creation to target the app's instances.

gcloud compute firewall-rules create fw-fe \
    --allow tcp:8080 \
    --target-tags=frontend
gcloud compute firewall-rules create fw-be \
    --allow tcp:8081-8082 \
    --target-tags=backend

The website should now be functional. Determine the external IP address of the frontend by looking for the EXTERNAL_IP of the frontend instance:

gcloud compute instances list

Example output:

NAME      ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
backend   us-central1-a  f1-micro                   10.128.0.2   104.198.235.171  RUNNING
frontend  us-central1-a  f1-micro                   10.128.0.3   34.69.141.9      RUNNING

It may take a couple of minutes for the instance to start and be configured. Execute the following to monitor the app's readiness, where [EXTERNAL_IP] is the frontend instance's external IP address:

watch -n 5 curl http://[EXTERNAL_IP]:8080 

Once curl returns the HTML of the website instead of an error, the website should be ready. Press Control+C (Command+C on Macintosh) at the command prompt to cancel the watch command.

Browse to http://[FRONTEND_ADDRESS]:8080 with a new web browser tab to access the website, where [FRONTEND_ADDRESS] is the EXTERNAL_IP determined above.

Try navigating to the Products and Orders pages, which should also work.

To allow your application to scale, you'll create managed instance groups that use the frontend and backend instances as the basis for instance templates.

A managed instance group contains identical instances that you can manage as a single entity in a single zone. Managed instance groups maintain high availability of your apps by proactively keeping your instances available, that is, in the RUNNING state. You'll use managed instance groups for your frontend and backend instances to provide autohealing, load balancing, autoscaling, and rolling updates.

Create instance template from source instance

Before you can create a managed instance group, you need to create an instance template that will be the foundation for the group. Instance templates allow you to define the machine type, boot disk image or container image, network, and other instance properties to use when creating new virtual machine (VM) instances. You can use instance templates to create instances in a managed instance group or even to create individual instances.

To create the instance template, use the existing instances that you created.

First, you must stop both instances.

gcloud compute instances stop frontend
gcloud compute instances stop backend

Now, create the instance template from the source instances.

gcloud compute instance-templates create fancy-fe \
    --source-instance=frontend
gcloud compute instance-templates create fancy-be \
    --source-instance=backend

Confirm that the instance templates were created:

gcloud compute instance-templates list

Example output:

NAME      MACHINE_TYPE  PREEMPTIBLE  CREATION_TIMESTAMP
fancy-be  f1-micro                   2019-09-12T07:52:57.544-07:00
fancy-fe  f1-micro                   2019-09-12T07:52:48.238-07:00

Create managed instance group

You'll create two managed instance groups, one for the frontend and one for the backend. The managed instance groups use the previously created instance templates and are configured to start with two instances each. The instances are automatically named based on the specified base-instance-name, with random characters appended.

gcloud compute instance-groups managed create fancy-fe-mig \
    --base-instance-name fancy-fe \
    --size 2 \
    --template fancy-fe
gcloud compute instance-groups managed create fancy-be-mig \
    --base-instance-name fancy-be \
    --size 2 \
    --template fancy-be

For your application, the frontend microservice runs on port 8080, and the backend microservices run on port 8081 for orders and port 8082 for products. Given that these are nonstandard ports, you'll specify named ports to identify them. Named ports are key:value pair metadata representing the service name and the port that it's running on. Named ports can be assigned to an instance group, which indicates that the service is available on all instances in the group. That information is used by the load balancer, which you'll configure later.

gcloud compute instance-groups set-named-ports fancy-fe-mig \
    --named-ports frontend:8080
gcloud compute instance-groups set-named-ports fancy-be-mig \
    --named-ports orders:8081,products:8082

Configure autohealing

To improve the availability of the app itself and verify that it's responding, you can configure an autohealing policy for the managed instance groups.

An autohealing policy relies on an app-based health check to verify that an app is responding as expected. Checking that an app responds is more precise than simply verifying that an instance is in a RUNNING state, which is the default behavior.

Create health checks for the frontend and backend that repair an instance if it reports as unhealthy three consecutive times:

gcloud compute health-checks create http fancy-fe-hc \
    --port 8080 \
    --check-interval 30s \
    --healthy-threshold 1 \
    --timeout 10s \
    --unhealthy-threshold 3
gcloud compute health-checks create http fancy-be-hc \
    --port 8081 \
    --request-path=/api/orders \
    --check-interval 30s \
    --healthy-threshold 1 \
    --timeout 10s \
    --unhealthy-threshold 3
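The threshold flags above work as counters: an instance is marked unhealthy only after three consecutive failed probes (--unhealthy-threshold 3), and a single successful probe marks it healthy again (--healthy-threshold 1). A toy shell sketch of that counting logic, with a made-up probe sequence for illustration:

```shell
# Toy model of the health-check thresholds: 3 consecutive failures
# mark the instance UNHEALTHY; one success resets the counter.
probes="ok ok fail fail ok fail fail fail"   # hypothetical probe results
state=HEALTHY
fails=0
for p in $probes; do
  if [ "$p" = "fail" ]; then
    fails=$((fails + 1))
    if [ "$fails" -ge 3 ]; then state=UNHEALTHY; fi
  else
    fails=0
    state=HEALTHY
  fi
done
echo "$state"
```

With this sequence, the two early failures are reset by a success, and only the final run of three consecutive failures flips the state.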

Create a firewall rule to allow the health check probes to connect to the microservices on ports 8080 and 8081:

gcloud compute firewall-rules create allow-health-check \
    --allow tcp:8080-8081 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --network default

Apply the health checks to their respective services:

gcloud compute instance-groups managed update fancy-fe-mig \
    --health-check fancy-fe-hc \
    --initial-delay 300
gcloud compute instance-groups managed update fancy-be-mig \
    --health-check fancy-be-hc \
    --initial-delay 300

Continue with the codelab to allow some time for autohealing to monitor the instances in the group. Later, you'll simulate a failure to test the autohealing.

To complement your managed instance groups, you'll use HTTP(S) Load Balancing to serve traffic to the frontend and backend microservices, using mappings to send traffic to the proper backend services based on path rules. That exposes a single, load-balanced IP address for all services.

For more information about the load balancing options available in Google Cloud, see Overview of Load Balancing.

Create HTTP(S) Load Balancing

Google Cloud offers many different types of load balancing, but you'll use HTTP(S) Load Balancing for your traffic. HTTP(S) Load Balancing is structured as follows:

  1. A forwarding rule directs incoming requests to a target HTTP proxy.
  2. The target HTTP proxy checks each request against a URL map to determine the appropriate backend service for the request.
  3. The backend service directs each request to an appropriate backend based on serving capacity, zone, and instance health of its attached backends. The health of each backend instance is verified using an HTTP health check. If the backend service is configured to use an HTTPS or HTTP/2 health check, then the request will be encrypted on its way to the backend instance.
  4. Sessions between the load balancer and the instance can use the HTTP, HTTPS, or HTTP/2 protocol. If you use HTTPS or HTTP/2, then each instance in the backend services must have an SSL certificate.

Create health checks that will be used to determine which instances are capable of serving traffic for each service.

gcloud compute http-health-checks create fancy-fe-frontend-hc \
  --request-path / \
  --port 8080
gcloud compute http-health-checks create fancy-be-orders-hc \
  --request-path /api/orders \
  --port 8081
gcloud compute http-health-checks create fancy-be-products-hc \
  --request-path /api/products \
  --port 8082

Create backend services that are the target for load-balanced traffic. The backend services will use the health checks and named ports that you created.

gcloud compute backend-services create fancy-fe-frontend \
  --http-health-checks fancy-fe-frontend-hc \
  --port-name frontend \
  --global
gcloud compute backend-services create fancy-be-orders \
  --http-health-checks fancy-be-orders-hc \
  --port-name orders \
  --global
gcloud compute backend-services create fancy-be-products \
  --http-health-checks fancy-be-products-hc \
  --port-name products \
  --global

Add your managed instance groups as backends to the backend services.

gcloud compute backend-services add-backend fancy-fe-frontend \
  --instance-group fancy-fe-mig \
  --instance-group-zone us-central1-f \
  --global
gcloud compute backend-services add-backend fancy-be-orders \
  --instance-group fancy-be-mig \
  --instance-group-zone us-central1-f \
  --global
gcloud compute backend-services add-backend fancy-be-products \
  --instance-group fancy-be-mig \
  --instance-group-zone us-central1-f \
  --global

Create a URL map. The URL map defines which URLs are directed to which backend services.

gcloud compute url-maps create fancy-map \
  --default-service fancy-fe-frontend

Create a path matcher to allow the /api/orders and /api/products paths to route to their respective services.

gcloud compute url-maps add-path-matcher fancy-map \
   --default-service fancy-fe-frontend \
   --path-matcher-name orders \
   --path-rules "/api/orders=fancy-be-orders,/api/products=fancy-be-products"
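Conceptually, the URL map above is a lookup from request path to backend service: the two API paths go to the backend services, and anything else falls through to the default service. A toy sketch of that routing decision (the real matching happens inside the load balancer, not in your code):

```shell
# Toy model of the URL map: the API paths route to the backend
# services; any other path falls through to the default service.
route() {
  case "$1" in
    /api/orders)   echo fancy-be-orders ;;
    /api/products) echo fancy-be-products ;;
    *)             echo fancy-fe-frontend ;;
  esac
}
route /api/products   # routes to the products backend service
route /               # falls through to the frontend service
```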

Create the proxy that ties to the created URL map.

gcloud compute target-http-proxies create fancy-proxy \
  --url-map fancy-map

Create a global forwarding rule that ties a public IP address and port to the proxy.

gcloud compute forwarding-rules create fancy-http-rule \
  --global \
  --target-http-proxy fancy-proxy \
  --ports 80

Update configuration

Now that you have a new static IP address, you need to update the code on the frontend to point to the new address instead of the ephemeral address used earlier that pointed to the backend instance.

In Cloud Shell, change to the react-app folder, which houses the .env file that holds the configuration.

cd ~/monolith-to-microservices/react-app/

Find the IP address for the load balancer:

gcloud compute forwarding-rules list --global

Example output:

NAME                    REGION  IP_ADDRESS     IP_PROTOCOL  TARGET
fancy-http-rule          34.102.237.51  TCP          fancy-proxy
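As with the instance list earlier, you can script this lookup instead of copying the address by hand. A sketch against the sample output above; the REGION column is empty for a global rule, so the IP address collapses into the second whitespace-separated field.

```shell
# Extract the load balancer IP from `gcloud compute forwarding-rules
# list --global` style output (sample from above).
sample='NAME                    REGION  IP_ADDRESS     IP_PROTOCOL  TARGET
fancy-http-rule          34.102.237.51  TCP          fancy-proxy'
LB_IP=$(printf '%s\n' "$sample" | awk '$1 == "fancy-http-rule" {print $2}')
echo "$LB_IP"
```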

Edit the .env file with your preferred text editor (such as GNU nano) to point to the public IP address of the load balancer. [LB_IP] below represents the load balancer's external IP address determined with the previous command.

REACT_APP_ORDERS_URL=http://[LB_IP]/api/orders
REACT_APP_PRODUCTS_URL=http://[LB_IP]/api/products

Rebuild react-app, which will update the frontend code.

cd ~/monolith-to-microservices/react-app
npm install && npm run-script build

Copy the application code into the Cloud Storage bucket.

cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/

Update the frontend instances

Now you want the frontend instances in the managed instance group to pull the new code. Your instances pull the code at startup, so you can issue a rolling restart command.

gcloud compute instance-groups managed rolling-action restart fancy-fe-mig \
    --max-unavailable 100%

Test the website

Wait approximately 30 seconds after issuing the rolling-action restart command to give the instances time to be processed. Then, check the status of the managed instance group until instances appear in the list.

watch -n 5 gcloud compute instance-groups list-instances fancy-fe-mig

After items appear in the list, exit the watch command by pressing Control+C (Command+C on Macintosh).

Confirm that the service is listed as healthy.

watch -n 5 gcloud compute backend-services get-health fancy-fe-frontend --global

Example output:

---
backend: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instanceGroups/fancy-fe-mig
status:
  healthStatus:
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-x151
    ipAddress: 10.128.0.7
    port: 8080
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-cgrt
    ipAddress: 10.128.0.11
    port: 8080
  kind: compute#backendServiceGroupHealth

Once both instances report HEALTHY, exit the watch command by pressing Control+C (Command+C on Macintosh).

The application will then be accessible via http://[LB_IP], where [LB_IP] is the IP_ADDRESS specified for the load balancer, which can be found with the following command:

gcloud compute forwarding-rules list --global

So far, you've created two managed instance groups, each with two instances. The configuration is fully functional, but it's static regardless of load. Next, you'll create an autoscaling policy based on utilization to automatically scale each managed instance group.

Automatically resize by utilization

To create the autoscaling policy, execute the following commands in Cloud Shell. They create autoscalers on the managed instance groups that automatically add instances when load-balancing utilization is above 60% and remove instances when it falls below 60%.

gcloud compute instance-groups managed set-autoscaling \
  fancy-fe-mig \
  --max-num-replicas 5 \
  --target-load-balancing-utilization 0.60
gcloud compute instance-groups managed set-autoscaling \
  fancy-be-mig \
  --max-num-replicas 5 \
  --target-load-balancing-utilization 0.60

Enable content-delivery network

Another feature that can help with scaling is Cloud CDN, a content-delivery network service that provides caching for the frontend. To enable it, execute the following command on your frontend service:

gcloud compute backend-services update fancy-fe-frontend \
    --enable-cdn --global

Now, when a user requests content from the load balancer, the request arrives at a Google frontend, which first looks in the Cloud CDN cache for a response to the user's request. If the frontend finds a cached response, then it sends the cached response to the user. That's called a cache hit.

Otherwise, if the frontend can't find a cached response for the request, then it makes a request directly to the backend. If the response to that request is cacheable, then the frontend stores the response in the Cloud CDN cache so that the cache can be used for subsequent requests.
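The hit/miss behavior described above can be pictured as a simple key-value cache sitting in front of the backend. A toy shell sketch of the lookup-then-fill flow, using a scratch directory as the cache and a hypothetical asset path:

```shell
# Toy model of the CDN flow: the first request for a key is a MISS
# and fills the cache; a repeat request for the same key is a HIT.
cache_dir=$(mktemp -d)
lookup() {
  key=$(printf '%s' "$1" | tr '/' '_')
  if [ -f "$cache_dir/$key" ]; then
    echo HIT
  else
    echo MISS
    : > "$cache_dir/$key"   # store the response for next time
  fi
}
first=$(lookup /static/logo.png)    # cache is empty: MISS
second=$(lookup /static/logo.png)   # same key again: HIT
echo "$first $second"
```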

Updating instance template

Existing instance templates are not editable. However, because your instances are stateless and all configuration is done through the startup script, you only need to change the instance template when you want to change its settings or the core image itself. Now, you'll make a simple change to use a larger machine type and push that out.

You'll update the frontend instance, which acts as the basis for the instance template, create a new template from the updated source instance, roll out the new template to the managed instance group, and confirm that the group's instances use the new machine type.

You'll modify the machine type of your instance template, switching from the f1-micro standard machine type to a custom machine type with 4 vCPUs and 3840 MB of memory.

In Cloud Shell, run the following command to modify the machine type of the frontend instance:

gcloud compute instances set-machine-type frontend --machine-type custom-4-3840

Create the new instance template:

gcloud compute instance-templates create fancy-fe-new \
    --source-instance=frontend \
    --source-instance-zone=us-central1-f

Roll out the updated instance template to the managed instance group:

gcloud compute instance-groups managed rolling-action start-update fancy-fe-mig \
    --version template=fancy-fe-new

Monitor the status of the update:

watch -n 2 gcloud compute instance-groups managed list-instances fancy-fe-mig

Once more than one instance reaches the RUNNING status with ACTION set to None and INSTANCE_TEMPLATE set to the new template name (fancy-fe-new), copy the name of one of the listed machines for use in the next command.

Press Control+C (Command+C on Macintosh) to exit the watch process.

Run the following to see if the virtual machine is using the new machine type (custom-4-3840), where [VM_NAME] is the newly created instance:

gcloud compute instances describe [VM_NAME] | grep machineType

Expected example output:

machineType: https://www.googleapis.com/compute/v1/projects/project-name/zones/us-central1-f/machineTypes/custom-4-3840
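The machineType value is a full resource URL; the short type name is its last path segment, which you can peel off with basename if that's all you need. A quick sketch using the sample URL above:

```shell
# The machine type's short name is the last path segment of the
# resource URL returned by `gcloud compute instances describe`.
machine_type_url='https://www.googleapis.com/compute/v1/projects/project-name/zones/us-central1-f/machineTypes/custom-4-3840'
machine_type=$(basename "$machine_type_url")
echo "$machine_type"
```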

Make changes to the website

Your marketing team has asked you to change the homepage for your site. They think it should be more informative about who your company is and what you actually sell. In this section, you'll add some text to the homepage to make the marketing team happy! It looks like one of your developers already created the changes with the file name index.js.new. You can copy that file to index.js, and your changes should be reflected. Follow the instructions below to make the appropriate changes.

Run the following commands to copy the updated file to the correct file name and then print its contents to verify the changes:

cd ~/monolith-to-microservices/react-app/src/pages/Home
mv index.js.new index.js
cat ~/monolith-to-microservices/react-app/src/pages/Home/index.js

The resulting code should look like this:

/*
Copyright 2019 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

import React from "react";
import { makeStyles } from "@material-ui/core/styles";
import Paper from "@material-ui/core/Paper";
import Typography from "@material-ui/core/Typography";
const useStyles = makeStyles(theme => ({
  root: {
    flexGrow: 1
  },
  paper: {
    width: "800px",
    margin: "0 auto",
    padding: theme.spacing(3, 2)
  }
}));
export default function Home() {
  const classes = useStyles();
  return (
    <div className={classes.root}>
      <Paper className={classes.paper}>
        <Typography variant="h5">
          Fancy Fashion &amp; Style Online
        </Typography>
        <br />
        <Typography variant="body1">
          Tired of mainstream fashion ideas, popular trends and societal norms?
          This line of lifestyle products will help you catch up with the Fancy trend and express your personal style.
          Start shopping Fancy items now!
        </Typography>
      </Paper>
    </div>
  );
}

You updated the React components, but you need to build the React app to generate the static files. Run the following command to build the React app and copy it into the monolith public directory:

cd ~/monolith-to-microservices/react-app
npm install && npm run-script build

Then, push the code to your Cloud Storage bucket again.

cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/

Push changes with rolling updates

You can now force all instances to restart to pull the update.

gcloud compute instance-groups managed rolling-action restart fancy-fe-mig \
    --max-unavailable=100%
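Note that --max-unavailable=100% restarts every instance at once, which causes a brief outage; that's acceptable here because this is a codelab. For a production site you would typically roll through instances a few at a time so the load balancer always has healthy backends. A sketch of that variant (the flag value here is illustrative):

```shell
# Restart one instance at a time so healthy backends remain in
# rotation behind the load balancer throughout the rollout.
gcloud compute instance-groups managed rolling-action restart fancy-fe-mig \
    --max-unavailable=1
```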

Wait approximately 30 seconds after issuing the rolling-action restart command to give the instances time to be processed, and then check the status of the managed instance group until instances appear in the list:

watch -n 5 gcloud compute instance-groups list-instances fancy-fe-mig

Once items appear in the list, exit the watch command by pressing Control+C.

Run the following to confirm the service is listed as healthy:

watch -n 5 gcloud compute backend-services get-health fancy-fe-frontend --global

Example output:

---
backend: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instanceGroups/fancy-fe-mig
status:
  healthStatus:
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-x151
    ipAddress: 10.128.0.7
    port: 8080
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-cgrt
    ipAddress: 10.128.0.11
    port: 8080
  kind: compute#backendServiceGroupHealth

After all instances report HEALTHY, exit the watch command by pressing Control+C.
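If you'd rather script this check than watch it interactively, you can count the HEALTHY entries in the command's output. A minimal sketch, run here against a saved sample of the YAML above — in practice you would pipe in the output of `gcloud compute backend-services get-health fancy-fe-frontend --global` instead:

```shell
# Save a sample of the get-health output (normally piped from gcloud).
cat > /tmp/health.yaml <<'EOF'
---
backend: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instanceGroups/fancy-fe-mig
status:
  healthStatus:
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-x151
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-cgrt
  kind: compute#backendServiceGroupHealth
EOF

# Count backends reporting HEALTHY.
grep -c 'healthState: HEALTHY' /tmp/health.yaml   # → 2
```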

To invalidate the cached content within the content-delivery network and ensure that fresh content is displayed, run the following:

gcloud compute url-maps invalidate-cdn-cache fancy-map \
    --path "/*"

Browse to the website via http://[LB_IP], where [LB_IP] is the IP_ADDRESS of the load balancer, which you can find with the following command:

gcloud compute forwarding-rules list --global
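Rather than copying the address by hand, you could capture it into a shell variable with gcloud's --format flag. A sketch, assuming the forwarding rule created earlier in the codelab is the only global one in the project:

```shell
# Capture the load balancer's external IP (assumes a single
# global forwarding rule exists in this project).
LB_IP=$(gcloud compute forwarding-rules list --global \
    --format='value(IPAddress)')
echo "http://$LB_IP"
```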

The new website changes should now be visible.

b081b8e885bf0723.png

Simulate failure

To confirm that the health check works, log in to an instance and stop the services. To find an instance name, execute the following:

gcloud compute instance-groups list-instances fancy-fe-mig

From there, secure shell into one of the instances, where [INSTANCE_NAME] is one of the instances from the list:

gcloud compute ssh [INSTANCE_NAME]

In the instance, use supervisorctl to stop the app.

sudo supervisorctl stop nodeapp; sudo killall node

Exit the instance.

exit

Monitor the repair operations.

watch -n 5 gcloud compute operations list \
--filter='operationType~compute.instances.repair.*'

Look for the following example output:

NAME                                                  TYPE                                       TARGET                                 HTTP_STATUS  STATUS  TIMESTAMP
repair-1568314034627-5925f90ee238d-fe645bf0-7becce15  compute.instances.repair.recreateInstance  us-central1-a/instances/fancy-fe-1vqq  200          DONE    2019-09-12T11:47:14.627-07:00

After you see the repair operation, press Control+C to exit the watch command. At this point, the managed instance group recreates the instance to repair it.

Clean up

When you're finished, the easiest way to clean up everything you created is to delete the project. Deleting the project removes the load balancer, instances, templates, and everything else created during the codelab, ensuring that no unexpected recurring charges occur. Execute the following in Cloud Shell, where PROJECT_ID is the full project ID, not only the project name.

gcloud projects delete [PROJECT_ID]

Confirm deletion by entering "Y" when prompted.

You deployed, scaled, and updated your website on Compute Engine. You're now experienced with Compute Engine, managed instance groups, load balancing, and health checks!