In this lab, you'll set up a managed instance group with a web server on each instance, configure autoscaling and load balancing, and test scaling and balancing under load. You'll use an HTTP load balancer to scale instances based on network traffic, distribute load across availability zones, and set up a firewall rule that allows ingress HTTP traffic.

What you will build

What you'll learn

What you'll need

Go to

Sign in as the owner of a free-trial GCP account or as a user with project owner access to a billing-enabled project.

Create an instance template as follows to specify how each autoscaled instance is created:

#! /bin/bash
# Install Apache and serve a page that identifies this instance by hostname.
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello from $(hostname)</h1></body></html>
EOF
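The template can also be created from the command line. A minimal gcloud sketch, assuming the startup script above is saved as startup-script.sh and using placeholder names for the template, machine type, and firewall rule (these require a GCP project and are not meant to run as-is):

```shell
# Create the instance template; all names and the machine type are
# placeholders -- adjust to match your project.
gcloud compute instance-templates create apache-template \
    --machine-type=e2-small \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=http-server \
    --metadata-from-file=startup-script=startup-script.sh

# Firewall rule allowing ingress HTTP traffic to tagged instances.
gcloud compute firewall-rules create allow-http \
    --allow=tcp:80 \
    --target-tags=http-server
```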

Create a managed instance group that uses the template and sets an autoscaling policy:
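As a sketch, the same group can be created with gcloud. The group name, region, and replica limits below are assumptions; the autoscaling signal here is load-balancing utilization, which pairs with the 10-RPS-per-instance backend cap used later in this lab:

```shell
# Regional managed instance group built from the template above.
gcloud compute instance-groups managed create apache-group \
    --region=us-central1 \
    --template=apache-template \
    --size=2

# Autoscale on load-balancer utilization (limits are placeholders).
gcloud compute instance-groups managed set-autoscaling apache-group \
    --region=us-central1 \
    --min-num-replicas=2 \
    --max-num-replicas=6 \
    --target-load-balancing-utilization=0.8
```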

Now you'll put the group behind a load balancer and test it under load.

Go to Networking > Load balancing
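The console steps above can be approximated with gcloud as well. This is a sketch with assumed resource names; the RATE balancing mode with a 10 RPS-per-instance cap is what drives autoscaling in this lab:

```shell
# Health check and backend service for the instance group.
gcloud compute health-checks create http apache-health-check --port=80

gcloud compute backend-services create apache-service \
    --protocol=HTTP \
    --health-checks=apache-health-check \
    --global

# Attach the group, capped at 10 requests/second per instance.
gcloud compute backend-services add-backend apache-service \
    --instance-group=apache-group \
    --instance-group-region=us-central1 \
    --balancing-mode=RATE \
    --max-rate-per-instance=10 \
    --global

# URL map, proxy, and global frontend on port 80.
gcloud compute url-maps create apache-lb --default-service=apache-service
gcloud compute target-http-proxies create apache-proxy --url-map=apache-lb
gcloud compute forwarding-rules create apache-frontend \
    --global \
    --target-http-proxy=apache-proxy \
    --ports=80
```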

Test the load balancer to make sure it's working

Return to the Instance Groups page and refresh it until you see the load balancer show up under In use by on the right.

Keep refreshing until a yellow triangle appears indicating that the group hasn't yet received queries from the load balancer. This will change once traffic is flowing.

Return to the Load Balancing page and click the Frontends tab to get your load balancer's external IP address.
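If you created the frontend from the CLI, the external IP can also be read with gcloud (the forwarding-rule name is an assumption):

```shell
# Print the load balancer's external IP address.
gcloud compute forwarding-rules describe apache-frontend \
    --global \
    --format='value(IPAddress)'
```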

Wait a minute or two for the instance and web server to start up.

Send a request from your laptop's web browser to the load balancer's external IP. You should get a response from one of the instances. Note its hostname's last 4 characters.

Send another request to get a response from the other host (note the last 4 characters are different).
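Instead of refreshing a browser, you can hit the load balancer repeatedly from Cloud Shell and count responses per backend hostname. A sketch (the IP is a placeholder; substitute your frontend's external IP):

```shell
# Send 20 requests and tally which backend served each one.
LB_IP=203.0.113.10   # placeholder -- use your load balancer's IP
for i in $(seq 1 20); do
  curl -s "http://${LB_IP}/" | grep -o 'Hello from [^<]*'
done | sort | uniq -c
```

With two healthy backends you should see both hostnames in the tally, roughly evenly split.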

Test the autoscaler and load balancer under load

For this section, you'll use a load testing tool called 'hey'.

Open your Cloud Shell and enter the following command to install it:

go get -u github.com/rakyll/hey

Enter the following command to send 10 requests per second to the load balancer, substituting your load balancer's external IP (-c 10 concurrent workers at -q 1 request per second each, for -n 12000 requests total):

hey -n 12000 -c 10 -q 1 http://<load-balancer-IP>

Let this command keep sending requests for at least 5 minutes (ideally 10).

After it's been running for about 2 minutes, return to the Load Balancing page to view your traffic.

Click the load balancer's link

Click the Monitoring tab

Select apache-service from the Backend drop-down to see traffic split 50/50 between the two instances (one per zone), each receiving about 5 RPS. Since each instance receives only 5 RPS and the limit is 10, no new instances are created.

Look at the bottom to see how many instances are running in each zone and the specific RPS. On the far left, note your front end location (North America in this case).

Return to the Instance Groups page and click your instance group's link to see the instances (no change yet, they're only receiving 5 RPS).

Let the system cool for 10 minutes.

Then run the command again, doubling both the rate (to 20 RPS) and the total number of requests:

hey -n 24000 -c 10 -q 2 http://<load-balancer-IP>

After 2 minutes, return to the load balancer's monitoring page.

Initially, both original instances showed orange bar status on the bottom-right and were maxing out at 10+ RPS (one of the instances was a bit flaky), so a second instance was spun up in the us-central1-b zone (note 2 of 2 instances healthy in the bottom-middle). The us-central1-b instances now show green bar status on the bottom-right, indicating half utilization per instance (about 5 RPS each). But the spread across zones isn't optimal: the instance in us-central1-c is still maxing out.

So after about 4 minutes, a fourth instance was spun up, and to distribute load across zones it was started in a third zone, us-central1-f (shown below):

Now the traffic load is split evenly across all four instances (see the green bar charts bottom-right in the screenshot above). But four instances aren't really necessary to serve 20 RPS with a 10 RPS limit per instance, so the system eventually scaled down to three instances, one per zone, as shown on the instance group's page, with about 7 RPS per instance (RPS not shown).
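The settle point of three instances matches simple back-of-the-envelope arithmetic: the number of instances needed is the total offered rate divided by the usable capacity per instance, rounded up. A sketch, assuming the 10 RPS-per-instance cap from this lab and a hypothetical 0.8 utilization target (so roughly 8 RPS of usable capacity each):

```shell
# ceil(20 / 8) = 3 instances at ~7 RPS each, matching what the
# group settles on. The 0.8 utilization target is an assumption.
total_rps=20
usable_rps=8   # 10 RPS cap * 0.8 utilization target
needed=$(( (total_rps + usable_rps - 1) / usable_rps ))
echo "$needed"
```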

Go to the Load balancing page and click the load balancer's link.

Click the trash can icon to delete the load balancer.

Click the Backends tab.

Select the backend and click Delete.

Go to the Instance groups page.

Select the instance group and click Delete.

Go to the Instance templates page, select your template, and click Delete.
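If you built the resources with gcloud, the cleanup can also be scripted. A sketch using the assumed names from earlier (delete in this order, since each resource depends on the next):

```shell
# Tear down the load balancer, group, template, and firewall rule.
gcloud compute forwarding-rules delete apache-frontend --global --quiet
gcloud compute target-http-proxies delete apache-proxy --quiet
gcloud compute url-maps delete apache-lb --quiet
gcloud compute backend-services delete apache-service --global --quiet
gcloud compute health-checks delete apache-health-check --quiet
gcloud compute instance-groups managed delete apache-group \
    --region=us-central1 --quiet
gcloud compute instance-templates delete apache-template --quiet
gcloud compute firewall-rules delete allow-http --quiet
```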

Congratulations! Now you've seen how to scale and load balance web applications on instances.