Istio is an open source framework for connecting, securing, and managing microservices, including services running on Google Kubernetes Engine (GKE). It lets you create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.

You add Istio support to services by deploying a special sidecar proxy to each of your application's Pods. The proxy intercepts all network communication between microservices and is configured and managed using Istio's control plane functionality.

This codelab shows you how to install and configure Istio on Kubernetes Engine, deploy an Istio-enabled multi-service application, and dynamically change request routing.

Self-paced environment setup

If you don't already have a Google Account (Gmail or Google Apps), you must create one. Sign in to the Google Cloud Platform console (console.cloud.google.com) and create a new project:

Remember the project ID: it must be unique across all Google Cloud projects, so an ID that is already taken will not work for you. It will be referred to later in this codelab as PROJECT_ID.

Next, you'll need to enable billing in the Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running (see "cleanup" section at the end of this document).

New users of Google Cloud Platform are eligible for a $300 free trial.

Google Cloud Shell

While Google Cloud and Kubernetes can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5 GB home directory and runs on Google Cloud, greatly improving network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

To activate Google Cloud Shell, click the Cloud Shell icon on the top right-hand side of the console, then click the "Start Cloud Shell" button. It should only take a few moments to provision and connect to the environment.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

project = <PROJECT_ID>

Cloud Shell also sets some environment variables by default, which may be useful as you run future commands.
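For example, Cloud Shell exports the active project ID in an environment variable. A quick check (assuming the DEVSHELL_PROJECT_ID variable is available in your session):

echo $DEVSHELL_PROJECT_ID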




If for some reason the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check which ID you used in the setup steps, or look it up in the Cloud Console dashboard.


IMPORTANT: Finally, set the default compute zone:

gcloud config set compute/zone us-central1-f

You can choose a variety of different zones. Learn more in the Regions & Zones documentation.

You need to make sure that you have the Kubernetes Engine API enabled:

gcloud services enable container.googleapis.com
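You can optionally confirm from the command line that the API is now enabled. A quick check (not part of the original steps) is to list the enabled services and filter for it:

gcloud services list --enabled | grep container.googleapis.com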

Choose a region for your cluster using the following command:

gcloud compute regions list

Set your region to one from the list. For example:

gcloud config set compute/region us-central1
To create a new cluster with the Istio add-on enabled and mutual TLS between sidecars enforced by default, run this command:

gcloud beta container clusters create hello-istio --project=$PROJECT_ID \
    --addons=Istio --istio-config=auth=MTLS_STRICT \
    --cluster-version=latest \
    --machine-type=n1-standard-2 \
    --num-nodes=4

Wait a few moments while your cluster is set up for you. It will be visible in the Kubernetes Engine section of the Google Cloud Platform console.
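If you'd rather check from the command line than the console, you can list the cluster and its status with gcloud (a quick check, not part of the original steps):

gcloud container clusters list --filter="name=hello-istio"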

Once the cluster is created, click the "Connect" button next to it in the console, copy the command shown, and run it in Cloud Shell. This ensures that kubectl is set up to access the cluster.
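The Connect dialog gives you a gcloud get-credentials command. It should look roughly like this (assuming the hello-istio cluster and the us-central1-f zone used above):

gcloud container clusters get-credentials hello-istio --zone us-central1-f --project $PROJECT_ID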

At the end of the cluster creation, an istio-system namespace will have been created along with the required RBAC permissions, and the Istio control plane components will have been deployed:

First, ensure the following Kubernetes services are deployed:

istio-pilot, istio-ingressgateway, istio-egressgateway, istio-telemetry, istio-policy, istio-citadel, prometheus and istio-sidecar-injector.

kubectl get svc -n istio-system

Your output should look like this:

NAME                       TYPE           EXTERNAL-IP   PORT(S)
istio-citadel              ClusterIP      <none>        8060/TCP,9093/TCP
istio-egressgateway        ClusterIP      <none>        80/TCP,443/TCP
istio-ingressgateway       LoadBalancer   <pending>     80:31380/TCP,443:31390/TCP,31400:31400/TCP
istio-pilot                ClusterIP      <none>        15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP
istio-policy               ClusterIP      <none>        9091/TCP,15004/TCP,9093/TCP
istio-statsd-prom-bridge   ClusterIP      <none>        9102/TCP,9125/UDP
istio-telemetry            ClusterIP      <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP
prometheus                 ClusterIP      <none>        9090/TCP
istio-sidecar-injector     ClusterIP      <none>        443/TCP

Next, make sure that the corresponding Kubernetes pods are deployed and all containers are up and running: istio-pilot-*, istio-ingressgateway-*, istio-egressgateway-*, istio-policy-*, istio-telemetry-*, istio-citadel-*, prometheus-* and istio-sidecar-injector-*.

kubectl get pods -n istio-system

When all the pods are running, you can proceed.

NAME                                     READY     STATUS 
istio-citadel-7bdc7775c7-22dxq             1/1       Running
istio-egressgateway-78dd788b6d-ld4qx       1/1       Running
istio-ingressgateway-7dd84b68d6-smqbt      1/1       Running
istio-pilot-d5bbc5c59-sv6ml                2/2       Running
istio-policy-64595c6fff-sqbz7              2/2       Running
istio-sidecar-injector-dbd67c88d-4jxqj     1/1       Running
istio-statsd-prom-bridge-949999c4c-fbfzg   1/1       Running
istio-telemetry-cfb674b6c-kk98w            2/2       Running
prometheus-86cb6dd77c-z2tmq                1/1       Running

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo.

Let's first download the sample. The Istio release page offers download artifacts for several OSs. In our case, we can use a convenient command to download and extract a specific release:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.0 sh -

The installation directory contains sample applications in samples/. You will find the source code and all the other files used in this example in your Istio samples/bookinfo directory.
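Before running the commands below, change into the extracted release directory (assuming the download unpacked into ./istio-1.0.0 in your current directory):

cd istio-1.0.0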

This is a simple mock bookstore application made up of four microservices - all managed using Istio. Each microservice is written in a different language, to demonstrate how you can use Istio in a multi-language environment, without any changes to code.

The microservices are:

- productpage: calls the details and reviews microservices to populate the page (Python).
- details: contains book information (Ruby).
- reviews: contains book reviews and calls the ratings microservice (Java).
- ratings: contains book ranking information that accompanies a book review (Node.js).

There are 3 versions of the reviews microservice:

- v1 doesn't call the ratings service, so it shows no stars.
- v2 calls the ratings service and displays each rating as 1 to 5 black stars.
- v3 calls the ratings service and displays each rating as 1 to 5 red stars.

The end-to-end architecture of the application is therefore: productpage calls details and reviews, reviews (v2 and v3) calls ratings, and every request flows through an Envoy sidecar.

First, have a look at the YAML which describes the bookinfo application:

less samples/bookinfo/platform/kube/bookinfo.yaml

Note that these are standard Kubernetes Deployments and Services for the Bookinfo application; there is nothing Istio-specific here at all. No application changes are needed to start making use of Istio functionality. When we configure and run the services, Envoy sidecars will be injected automatically into each Pod of the service.

For that to work, we need to enable sidecar injection for the namespace ('default') that we will use for our microservices. We do that by applying a label:

kubectl label namespace default istio-injection=enabled

You can verify that the label was successfully applied:

kubectl get namespace -L istio-injection

NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    34m       enabled
istio-system   Active    32m
kube-public    Active    34m
kube-system    Active    34m

Now we can simply deploy the services to the default namespace with kubectl:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Look at one of the pods. You will see that it now contains a second container, the Istio sidecar, along with all of the necessary configuration:

kubectl get pod

NAME                              READY     STATUS    RESTARTS   AGE
details-v1-64b86cd49-jqq4g        2/2       Running   0          46s
productpage-v1-84f77f8747-6vg6l   0/2       Pending   0          45s
ratings-v1-5f46655b57-h4zfw       2/2       Running   0          46s
reviews-v1-ff6bdb95b-hqm89        2/2       Running   0          46s
reviews-v2-5799558d68-6wsz6       0/2       Pending   0          45s
reviews-v3-58ff7d665b-rjpbn       0/2       Pending   0          45s

kubectl describe pod details-v1-64b86cd49-jqq4g
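To see just the container names rather than the full describe output, you can also use a jsonpath query (a quick check, not part of the original steps; substitute your own Pod name). The output should list both the application container and istio-proxy:

kubectl get pod details-v1-64b86cd49-jqq4g -o jsonpath='{.spec.containers[*].name}'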

To allow 'ingress' traffic to reach the mesh we need to create a 'Gateway' (which configures a load balancer) and a 'VirtualService' (which controls the forwarding of traffic from the gateway to our services). You can read more about gateways in the Istio documentation. To create them:

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
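You can confirm that the gateway and its virtual service were created with a quick check (both resources are defined in bookinfo-gateway.yaml):

kubectl get gateway,virtualservice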

Finally, confirm that the application has been deployed correctly by running the following commands:

kubectl get services
kubectl get pods

When all the pods have been created, you should see five services and six pods:

NAME          EXTERNAL-IP   PORT(S)
details       <none>        9080/TCP
kubernetes    <none>        443/TCP
productpage   <none>        9080/TCP
ratings       <none>        9080/TCP
reviews       <none>        9080/TCP

NAME                              READY     STATUS    RESTARTS 
details-v1-1520924117-48z17       2/2       Running   0        
productpage-v1-560495357-jk1lz    2/2       Running   0        
ratings-v1-734492171-rnr5l        2/2       Running   0        
reviews-v1-874083890-f0qf0        2/2       Running   0        
reviews-v2-1343845940-b34q5       2/2       Running   0        
reviews-v3-1813607990-8ch52       2/2       Running   0        

Congratulations: you have deployed an Istio-enabled application. Next, let's see the application in use.

Now that it's deployed, let's see the BookInfo application in action. First, you need to get the external IP of the gateway:

kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           EXTERNAL-IP         PORT(S)
istio-ingressgateway   LoadBalancer   <your gateway IP>   80:31380/TCP,443:31390/TCP,31400:31400/TCP

Copy the EXTERNAL-IP value and use it to set the GATEWAY_URL environment variable:

export GATEWAY_URL=<your gateway IP>
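Alternatively, you can set it in one step with a jsonpath query against the ingress gateway service (a sketch that assumes the load balancer has already been assigned an external IP):

export GATEWAY_URL=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')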

Once you have the address, check that the BookInfo app is running with curl:

curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

Check that you get an HTTP 200 response.

You can now point your browser to http://<your gateway IP>/productpage to view the BookInfo web page.

Refresh the page several times. Notice how you see three different versions of the reviews on the product page? As described earlier, there are three different book review services, which are called in round-robin style, showing black stars, red stars, or no stars at all. This is normal Kubernetes load-balancing behavior.

We can use Istio to do something different — to control which users are routed to which version of the services.

The BookInfo sample deploys three versions of the reviews microservice. When you accessed the application several times, you will have noticed that the output sometimes contains star ratings and sometimes does not. This is because, without an explicit default version set, Istio routes requests to all available versions of a service in round-robin fashion.

Routes control how requests are routed within an Istio service mesh. Requests can be routed based on the source and destination, HTTP paths and header fields, and weights associated with individual service versions.

Before you can use Istio to control the Bookinfo version routing, you need to define the available versions, called subsets, in destination rules. Run the following command to create default destination rules for the Bookinfo services:

kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml

destinationrule "productpage" created
destinationrule "reviews" created
destinationrule "ratings" created
destinationrule "details" created  

Static routing

First, let's add rules to make traffic go to v1 of each service.

Verify that you don't have any routes for the services yet apart from the one that allows the gateway to route to the top-level 'productpage' service:

kubectl get virtualservices

NAME          AGE                                          
bookinfo      2m   

We will create a VirtualService for each microservice. A VirtualService defines the rules that control how requests for the service are routed. Each rule corresponds to one or more request destination hosts. In our case we are routing to other services within our mesh so we can use the internal mesh name (e.g. 'reviews') as the host.

Here's how a rule can route all traffic for the reviews virtual service to Pods running v1 of that service, as identified by Kubernetes labels:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

The rule refers to a subset called v1, which is defined for the underlying reviews service instances as part of a DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
As can be seen above, a subset specifies one or more labels that identify version-specific instances. Because the VirtualService above specifies the subset v1, it will only send traffic to Pods carrying the label version: v1.

Bookinfo includes a sample with rules for all four services. Let's install it:

kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Note that earlier we used the 'mtls' version of the destination rules file because our cluster enforces mutual TLS. That file includes TLS traffic policies so that service-to-service communication between the Envoy sidecars is encrypted. This all happens without changes to application code.
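If you want to see the policy that was applied, you can read it back from one of the DestinationRules (a quick check, assuming the reviews rule from destination-rule-all-mtls.yaml sets trafficPolicy.tls.mode; the command should print ISTIO_MUTUAL):

kubectl get destinationrule reviews -o jsonpath='{.spec.trafficPolicy.tls.mode}'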

Confirm that the four new routes were created; together with the bookinfo route created earlier, there should be five in total. You can add -o yaml to view the actual configuration.

kubectl get virtualservices

Also check the corresponding DestinationRules and their subset definitions:

kubectl get destinationrules

Go back to the Bookinfo application (http://$GATEWAY_URL/productpage) in your browser and refresh a few times. Do you see any stars? You should see the book reviews with no rating stars, since reviews:v1 does not call the ratings service.
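You can also check this from Cloud Shell instead of the browser. A rough sketch (it assumes the product page renders rating stars with the glyphicon-star CSS class, so a count of 0 means no stars were shown):

for i in $(seq 1 5); do
  curl -s http://$GATEWAY_URL/productpage | grep -c "glyphicon-star"
done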

Dynamic routing

Because the mesh operates at Layer 7, we can use HTTP attributes (such as paths, headers, or cookies) to decide how to route a request.

We can route certain users to a service by applying a regex to a header (e.g. cookie) like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

Create the route:

kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

View it in the list, or add -o yaml to see the full output.

kubectl get virtualservices reviews

We now have a way to route some requests to use the reviews:v2 service. Can you guess how? (Hint: no passwords are needed) See how the page behaviour changes if you are logged in as no-one, 'jason', or 'kylie'.

Once the v2 version has been canary tested to our satisfaction by jason or another subset of our users, we can use Istio to progressively send more and more traffic to our new service.

Let's try that by sending 50% of the traffic to v3 using weight-based version routing. (v3 of the service shows red stars.) Replace the reviews route:

kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml

Confirm that the route was replaced:

kubectl get virtualservice reviews -o yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

Because the Envoy proxy sidecar applies routing decisions per request, you may need to refresh your browser many times before you see the effect. Over a significant amount of traffic, the split will approach 50%. Send some extra traffic to the service like this:

watch -n 0.2 curl -o /dev/null -s -w "%{http_code}" http://$GATEWAY_URL/productpage

Now refresh the product page in your browser; you should see red star ratings about 50% of the time.
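If you'd rather measure the split from Cloud Shell, here is a rough sketch (it assumes v3 renders its stars with color="red" in the page HTML) that labels each response and tallies the results over 20 requests:

for i in $(seq 1 20); do
  curl -s http://$GATEWAY_URL/productpage | grep -q 'color="red"' && echo "v3 (red stars)" || echo "other"
done | sort | uniq -c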

In a normal canary rollout you would start with a much smaller increment and gradually increase the traffic by raising the weight for v3. For now, let's send 100% of the traffic to v3:

kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml

Now when you refresh your browser you should see the red stars 100% of the time.

For now, let's clean up the routing rules (don't worry if you see an error):

kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Congratulations; you've reached the end of the Istio 'Hello World'. For now, you can uninstall Istio and delete your cluster; watch this space, because very soon you will be able to continue straight to our Istio 201 codelab.

The Istio site contains guides and samples with fully working examples that you can experiment with.

Here's how to remove the Bookinfo application and uninstall Istio:

kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl delete -f install/kubernetes/istio-demo-auth.yaml

In addition to uninstalling Istio, you can also delete the Kubernetes cluster created in the setup phase (to save on cost and to be a good Cloud citizen):

gcloud container clusters delete hello-istio

Note: If you get a "Not Found" error, it may be because the default zone is not set in the gcloud command-line tool. Running the delete command with an explicit --zone flag (for example, --zone us-central1-f) should work.

The following clusters will be deleted.
 - [hello-istio] in [us-central1-f]
Do you want to continue (Y/n)?  Y
Deleting cluster hello-istio...done.                                                                                                                                                                                            
Deleted [].