Istio is an open source framework for connecting, securing, and managing microservices, including services running on Google Kubernetes Engine (GKE). It lets you create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.

You add Istio support to services by deploying a special sidecar proxy to each of your application's Pods. The proxy intercepts all network communication between microservices and is configured and managed using Istio's control plane functionality.

This codelab shows you how to install and configure Istio on Kubernetes Engine, deploy an Istio-enabled multi-service application, and dynamically change request routing.

Self-paced environment setup

If you don't already have a Google Account (Gmail or Google Apps), you must create one. Sign in to the Google Cloud Platform Console (console.cloud.google.com) and create a new project:

Remember the project ID, a unique name across all Google Cloud projects (an ID that is already in use will not work for you, sorry!). It will be referred to later in this codelab as PROJECT_ID.

Next, you'll need to enable billing in the Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running (see "cleanup" section at the end of this document).

New users of Google Cloud Platform are eligible for a $300 free trial.

Google Cloud Shell

While Google Cloud and Kubernetes can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

To activate Google Cloud Shell, from the developer console simply click the button on the top right-hand side (it should only take a few moments to provision and connect to the environment):

Then accept the terms of service and click the "Start Cloud Shell" link:

Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If for some reason the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check out what ID you used in the setup steps or look it up in the console dashboard:

IMPORTANT: Finally, set the default zone and project configuration:

gcloud config set compute/zone us-central1-f

You can choose a variety of different zones. Learn more in the Regions & Zones documentation.
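If you want to pick a different zone, you can list the available ones (on a brand-new project this may prompt you to enable the Compute Engine API):

gcloud compute zones list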

For this Istio codelab you need a Kubernetes Engine cluster running a recent Kubernetes version, with at least four nodes, and the Kubernetes Engine API enabled. The commands below create a cluster that meets these requirements.

Be sure you didn't miss the step to set your default zone, especially if you're using Cloud Shell!

You'll need to make sure that you have the Kubernetes Engine API enabled:

gcloud services enable container.googleapis.com

To create a new cluster that meets these requirements, run the following command:

gcloud container clusters create hello-istio \
    --cluster-version=latest \
    --num-nodes 4

Wait a few moments while your cluster is set up for you. It will be visible in the Kubernetes Engine section of the Google Cloud Platform console.
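One way to confirm that the cluster is up and that kubectl is already pointing at it (cluster creation configures the credentials for you; if it didn't, run gcloud container clusters get-credentials hello-istio):

gcloud container clusters list
kubectl get nodes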

Now, grant admin permissions in the cluster to the current gcloud user. You need these permissions to create the necessary RBAC rules for Istio.

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)

Now, let's install Istio. Istio's control plane is installed in its own Kubernetes istio-system namespace, and can manage microservices from all other namespaces. The installation includes Istio core components, tools, and samples.

Downloading Istio

The Istio release page offers download artifacts for several OSs. In our case, we can use a convenient command to download and extract the 0.8.0 release automatically:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=0.8.0 sh -

The installation directory contains the Kubernetes installation YAML files (under install/), sample applications including BookInfo (under samples/), and the istioctl client binary (under bin/), all of which we'll use in this codelab.

Change to the istio directory:

cd ./istio-*

Add the istioctl client to your PATH:

export PATH=$PWD/bin:$PATH
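As a quick sanity check that the client is now on your PATH, print its version:

istioctl version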

Installing the core components

Let's now install Istio's core components. We will install Istio with the optional Istio Auth components, which enable mutual TLS authentication between the sidecars:

kubectl apply -f install/kubernetes/istio-demo-auth.yaml

This creates the istio-system namespace along with the required RBAC permissions, and deploys the five primary Istio control plane components.

You should see about 70 lines of console output like this:

namespace "istio-system" created
configmap "istio-statsd-prom-bridge" created
/* ... */
service "tracing" created
mutatingwebhookconfiguration "istio-sidecar-injector" created

First, ensure the following Kubernetes services are deployed:

istio-pilot, istio-ingressgateway, istio-egressgateway, istio-telemetry, istio-policy, istio-citadel, prometheus and istio-sidecar-injector.

kubectl get svc -n istio-system

Your output should look like this:

NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)
istio-citadel              ClusterIP      30.0.0.119   <none>          8060/TCP,9093/TCP
istio-egressgateway        ClusterIP      30.0.0.11    <none>          80/TCP,443/TCP
istio-ingressgateway       LoadBalancer   30.0.0.39    9.111.255.245   80:31380/TCP,443:31390/TCP,31400:31400/TCP
istio-pilot                ClusterIP      30.0.0.136   <none>          15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP
istio-policy               ClusterIP      30.0.0.242   <none>          9091/TCP,15004/TCP,9093/TCP
istio-statsd-prom-bridge   ClusterIP      30.0.0.111   <none>          9102/TCP,9125/UDP
istio-telemetry            ClusterIP      30.0.0.246   <none>          9091/TCP,15004/TCP,9093/TCP,42422/TCP
prometheus                 ClusterIP      30.0.0.253   <none>          9090/TCP
istio-sidecar-injector     ClusterIP      10.23.242.122   <none>       443/TCP

Next, make sure that the corresponding Kubernetes pods are deployed and all containers are up and running: istio-pilot-*, istio-ingressgateway-*, istio-egressgateway-*, istio-policy-*, istio-telemetry-*, istio-citadel-*, prometheus-* and istio-sidecar-injector-*.

kubectl get pods -n istio-system

When all the pods are running, you can proceed.

NAME                                     READY     STATUS 
istio-citadel-7bdc7775c7-22dxq             1/1       Running
istio-egressgateway-78dd788b6d-ld4qx       1/1       Running
istio-ingressgateway-7dd84b68d6-smqbt      1/1       Running
istio-pilot-d5bbc5c59-sv6ml                2/2       Running
istio-policy-64595c6fff-sqbz7              2/2       Running
istio-sidecar-injector-dbd67c88d-4jxqj     1/1       Running
istio-statsd-prom-bridge-949999c4c-fbfzg   1/1       Running
istio-telemetry-cfb674b6c-kk98w            2/2       Running
prometheus-86cb6dd77c-z2tmq                1/1       Running
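If some pods are still starting, one way to keep an eye on them is to watch the namespace until everything shows Running (press Ctrl+C to stop watching):

kubectl get pods -n istio-system -w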

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo. This is a simple mock bookstore application made up of four microservices, all managed using Istio. Each microservice is written in a different language, to demonstrate how you can use Istio in a multi-language environment without any changes to code.

The microservices are productpage (the front-end page, which calls details and reviews to populate the page), details (book information), reviews (book reviews, which calls ratings), and ratings (book ranking information that accompanies each review).

There are 3 versions of the reviews microservice: v1 doesn't call the ratings service, v2 calls it and displays each rating as black stars, and v3 calls it and displays each rating as red stars.

The end-to-end architecture of the application is thus:

You will find the source code and all the other files used in this example in your Istio samples/bookinfo directory.

First, have a look at the YAML which describes the bookinfo application:

less samples/bookinfo/kube/bookinfo.yaml

Note that these are standard Kubernetes Deployments and Services for the Bookinfo application; there is nothing Istio-specific here at all. No application changes are needed to start making use of Istio functionality. When we configure and run the services, Envoy sidecars will be automatically injected into each Pod for the service.

For that to work, we need to enable sidecar injection for the namespace ('default') that we will use for our microservices. We do that by applying a label:

kubectl label namespace default istio-injection=enabled

You can verify that the label was successfully applied:

kubectl get namespace -L istio-injection

NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    34m       enabled
istio-system   Active    32m
kube-public    Active    34m
kube-system    Active    34m

Now we can simply deploy the services to the default namespace with kubectl:

kubectl apply -f samples/bookinfo/kube/bookinfo.yaml

Look at one of the pods. You will see that it now contains a second container, the Istio sidecar, along with all of the necessary configuration:

kubectl get pod

NAME                              READY     STATUS    RESTARTS   AGE
details-v1-64b86cd49-jqq4g        2/2       Running   0          46s
productpage-v1-84f77f8747-6vg6l   0/2       Pending   0          45s
ratings-v1-5f46655b57-h4zfw       2/2       Running   0          46s
reviews-v1-ff6bdb95b-hqm89        2/2       Running   0          46s
reviews-v2-5799558d68-6wsz6       0/2       Pending   0          45s
reviews-v3-58ff7d665b-rjpbn       0/2       Pending   0          45s

kubectl describe pod details-v1-64b86cd49-jqq4g
...
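Alternatively, you can print just the container names for the pod (the pod hash in your cluster will differ); the injected sidecar appears as istio-proxy:

kubectl get pod details-v1-64b86cd49-jqq4g -o jsonpath='{.spec.containers[*].name}'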

To allow ingress traffic to reach the mesh we need to create a Gateway (to configure a load balancer) and a VirtualService (which controls the forwarding of traffic from the gateway to our services). You can read more about gateways in the Istio documentation. To create Istio configuration we use the istioctl command.
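Before applying it, you can have a look at the Gateway and VirtualService definitions in the sample file:

less samples/bookinfo/routing/bookinfo-gateway.yaml

Then create them: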

istioctl create -f samples/bookinfo/routing/bookinfo-gateway.yaml

Finally, confirm that the application has been deployed correctly by running the following commands:

kubectl get services
kubectl get pods

When all the pods have been created, you should see five services and six pods:

NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)   
details                    10.0.0.31    <none>        9080/TCP  
kubernetes                 10.0.0.1     <none>        443/TCP   
productpage                10.0.0.120   <none>        9080/TCP  
ratings                    10.0.0.15    <none>        9080/TCP  
reviews                    10.0.0.170   <none>        9080/TCP  

NAME                              READY     STATUS    RESTARTS 
details-v1-1520924117-48z17       2/2       Running   0        
productpage-v1-560495357-jk1lz    2/2       Running   0        
ratings-v1-734492171-rnr5l        2/2       Running   0        
reviews-v1-874083890-f0qf0        2/2       Running   0        
reviews-v2-1343845940-b34q5       2/2       Running   0        
reviews-v3-1813607990-8ch52       2/2       Running   0        

Congratulations: you have deployed an Istio-enabled application. Next, let's see the application in use.

Now that it's deployed, let's see the BookInfo application in action. First, you need to get the external IP of the gateway:

kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
istio-ingressgateway   LoadBalancer   10.23.251.44   35.204.239.131   80:31380/TCP,443:31390/TCP,31400:31400/TCP

Copy the EXTERNAL-IP value and set it as the GATEWAY_URL environment variable:

export GATEWAY_URL=<your gateway IP>
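If you prefer to capture the address programmatically, a jsonpath query does the same thing (assuming the load balancer has already been assigned an external IP):

export GATEWAY_URL=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $GATEWAY_URL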

Once you have the address and port, check that the BookInfo app is running with curl:

curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

Check that you get an HTTP 200 response.

You can now point your browser to http://<your gateway IP>/productpage to view the BookInfo web page.

Refresh the page several times and notice how three different versions of reviews are shown on the product page. If you refer back to the application architecture, you will see we have three different book review services, which are called in a round-robin style, showing black stars, red stars, or no stars at all. This is the normal Kubernetes load-balancing behavior.

We can use Istio to do something different — to control which users are routed to which version of the services.

The BookInfo sample deploys three versions of the reviews microservice. When you accessed the application several times, you will have noticed that the output sometimes contains star ratings and sometimes it does not. This is because without an explicit default version set, Istio will route requests to all available versions of a service, in a round-robin fashion.
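You can also watch the round-robin behaviour from the command line. This is only a rough sketch: it assumes the star icons in the productpage HTML are rendered with the Bootstrap glyphicon-star class, so if the count is always 0, just rely on your browser instead:

for i in $(seq 1 6); do
  curl -s http://$GATEWAY_URL/productpage | grep -o "glyphicon-star" | wc -l
done

The count of star icons per response should alternate between zero (reviews v1) and non-zero (v2 and v3).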

Routes control how requests are routed within an Istio service mesh. Requests can be routed based on the source and destination, HTTP paths and header fields, and weights associated with individual service versions.

You use the istioctl command line tool to control routing.

Static routing

First, let's add rules to make traffic go to v1 of each service.

Verify that you don't have any routes for the services yet, apart from the one that allows the gateway to route to the top-level 'productpage' service:

istioctl get virtualservices

NAME          KIND                                          NAMESPACE
bookinfo      VirtualService.networking.istio.io.v1alpha3   default

We will create a VirtualService for each microservice. A VirtualService defines the rules that control how requests for the service are routed. Each rule corresponds to one or more request destination hosts. In our case we are routing to other services within our mesh, so we can use the internal mesh name (e.g. 'reviews') as the host.

Here's how a rule can route all traffic for a reviews virtual service to Pods running v1 of that service, as identified by Kubernetes labels.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

The rule refers to a subset called v1, which is defined for the underlying reviews service instances as part of a DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1

As you can see above, a subset specifies one or more labels that identify version-specific instances. Because the VirtualService above specifies the subset called v1, it will only send traffic to Pods with the label version: v1.

Bookinfo includes a sample with rules for all four services. Let's install it:

istioctl create -f samples/bookinfo/routing/route-rule-all-v1-mtls.yaml

Note that we used the 'mtls' version of the file because we installed the optional Istio auth components. The file includes TLS traffic policies so that service-to-service communication between the Envoy sidecars is encrypted. This all happens without changes to application code.

Confirm that the four new routes were created; together with the bookinfo gateway route, there should be five virtual services in total. You can add -o yaml to view the actual configuration.

istioctl get virtualservices

Also check the corresponding DestinationRules and their subset definitions:

istioctl get destinationrules

Go back to the Bookinfo application (http://$GATEWAY_URL/productpage) in your browser. Refresh a few times. Do you see any stars? You should see the book review with no rating stars, as reviews:v1 does not access the ratings service.

Dynamic routing

Because the mesh operates at Layer 7, we can use HTTP attributes (such as paths, headers, or cookies) to decide how to route a request.

We can route certain users to a service by applying a regex to a header (e.g. cookie) like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
    route:
    - destination:
        host: reviews
        subset: v2  
  - route:
    - destination:
        host: reviews
        subset: v1

Apply the route (we use istioctl replace because this updates the reviews VirtualService created earlier):

istioctl replace -f samples/bookinfo/routing/route-rule-reviews-test-v2.yaml

View it in the list, or add -o yaml to see the full output.

istioctl get virtualservices reviews

We now have a way to route some requests to use the reviews:v2 service. Can you guess how? (Hint: no passwords are needed) See how the page behaviour changes if you are logged in as no-one, 'jason', or 'kylie'.
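You can also exercise the match from the command line by sending the session cookie the rule looks for (this assumes the sample file matches on the cookie, as in the example rule above, and reuses the star-count trick from earlier to compare responses):

curl -s -H "Cookie: user=jason" http://$GATEWAY_URL/productpage | grep -o "glyphicon-star" | wc -l
curl -s http://$GATEWAY_URL/productpage | grep -o "glyphicon-star" | wc -l

With the rule in place, the first request should be served by reviews:v2 (black stars) and the second by reviews:v1 (no stars).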

Once the v2 version has been canary tested to our satisfaction by jason or another subset of our users, we can use Istio to progressively send more and more traffic to our new service.

Let's try that by sending 50% of the traffic to v3 using weight-based version routing. v3 of the service shows red stars. Replace the reviews route:

istioctl replace -f samples/bookinfo/routing/route-rule-reviews-50-v3.yaml

Confirm that the route was replaced:

istioctl get virtualservice reviews -o yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

Because routing is implemented in the Envoy proxy sidecars, you may need to refresh your browser many times before you see the effect; with significant traffic there will be a 50% split. Send some extra traffic to the service like this:

watch -n 0.2 curl -o /dev/null -s -w "%{http_code}" http://$GATEWAY_URL/productpage

Refresh the productpage in your browser and you should see red star ratings about 50% of the time.

In a normal canary rollout you would use much smaller increments, gradually increasing the amount of traffic by progressively raising the weighting for v3 (see the sketch after the next step). Now let's send 100% of the traffic to v3:

istioctl replace -f samples/bookinfo/routing/route-rule-reviews-v3.yaml

Now when you refresh your browser you should see the red stars 100% of the time.
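For illustration only (you don't need to apply it now), here is a rough sketch of what an intermediate canary step might look like: a 90/10 split written to a scratch file (the file name is made up) and applied with istioctl replace, just like the other route changes:

cat > reviews-canary-10.yaml <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v3
      weight: 10
EOF
istioctl replace -f reviews-canary-10.yaml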

For now, let's clean up the routing rules (don't worry if you see an error):

istioctl delete -f samples/bookinfo/routing/route-rule-reviews-test-v2.yaml
istioctl delete -f samples/bookinfo/routing/route-rule-all-v1-mtls.yaml
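You can confirm that only the bookinfo gateway route remains:

istioctl get virtualservices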

Congratulations; you've reached the end of the Istio 'Hello World'. For now, you can uninstall Istio and delete your cluster; watch this space, because very soon you will be able to continue straight to our Istio 201 codelab.

The Istio site contains guides and samples with fully working examples for Istio that you can experiment with.

Here's how to uninstall Istio.

kubectl delete -f samples/bookinfo/routing/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/kube/bookinfo.yaml
kubectl delete -f install/kubernetes/istio-demo-auth.yaml

In addition to uninstalling Istio, you can also delete the Kubernetes cluster created in the setup phase (to save on cost and to be a good Cloud citizen):

gcloud container clusters delete hello-istio
The following clusters will be deleted.
 - [hello-istio] in [us-central1-f]
Do you want to continue (Y/n)?  Y
Deleting cluster hello-istio...done.                                                                                                                                                                                            
Deleted [https://container.googleapis.com/v1/projects/codelab-test/zones/us-central1-f/clusters/hello-istio].