ASP.NET Core is an open-source and cross-platform framework for building modern cloud-based and internet-connected applications using the C# programming language.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Istio is an open framework for connecting, securing, managing and monitoring services.

In this first part of the lab, you deploy a simple ASP.NET Core app to Kubernetes running on Google Kubernetes Engine (GKE) and configure it to be managed by Istio.

In the second part of the lab, you further explore features of Istio such as metrics, tracing, dynamic traffic management, fault injection, and more.

What you'll learn

- How to package a simple ASP.NET Core app as a Docker container
- How to create a Kubernetes cluster on Google Kubernetes Engine (GKE)
- How to install Istio on the cluster and deploy the app behind it

What you'll need

- A Google Cloud Platform project with billing enabled
- A browser, such as Chrome


Self-paced environment setup

If you don't already have a Google Account (Gmail or Google Apps), you must create one. Sign in to the Google Cloud Platform console (console.cloud.google.com) and create a new project.

Remember the project ID, a name that is unique across all Google Cloud projects. It will be referred to later in this codelab as PROJECT_ID.

Next, you'll need to enable billing in the Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running (see "cleanup" section at the end of this document).

New users of Google Cloud Platform are eligible for a $300 free trial.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you use Google Cloud Shell, a command line environment running in Google Cloud.

Activate Google Cloud Shell

From the GCP Console, click the Cloud Shell icon on the top-right toolbar:

Then click "Start Cloud Shell":

It should only take a few moments to provision and connect to the environment:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this lab can be done with just a browser or a Google Chromebook.

Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

Run the following command in the cloud shell to confirm that you are authenticated:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

Run the following command to confirm that gcloud is configured with your project:

gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If it is not, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

In the Cloud Shell prompt, the dotnet command-line tool is already installed. Verify its version as follows:

dotnet --version

Next, create a new skeleton ASP.NET Core web app.

dotnet new mvc -o HelloWorldAspNetCore

This creates the project and restores its dependencies. You should see a message similar to the following:

Restore completed in 11.44 sec for HelloWorldAspNetCore.csproj.

Restore succeeded.

We're almost ready to run our app. Navigate to the app folder.

cd HelloWorldAspNetCore

Finally, run the app.

dotnet run --urls=http://localhost:8080

The application starts listening on port 8080:

Hosting environment: Production
Content root path: /home/atameldev/HelloWorldAspNetCore
Now listening on: http://[::]:8080
Application started. Press Ctrl+C to shut down.

To verify that the app is running, click on the web preview button on the top right and select 'Preview on port 8080'.

You'll see the default ASP.NET Core welcome page in a new tab.

Once you've verified that the app is running, press Ctrl+C to shut it down.

Now, publish the app with the dotnet publish command to get a DLL that can be deployed along with its dependencies.

dotnet publish -c Release

Running publish displays some messages, ending with the path of the successfully published DLL:

...
HelloWorldAspNetCore -> /home/atameldev/HelloWorldAspNetCore/bin/Release/netcoreapp2.1/HelloWorldAspNetCore.dll

Navigate to the publish folder for the next step.

cd bin/Release/netcoreapp2.1/publish/
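
If you're curious, list the folder contents; you should see the published DLL alongside its dependency manifests and configuration files (the exact listing may vary by SDK version):

ls

Command output (abbreviated)

appsettings.json
HelloWorldAspNetCore.deps.json
HelloWorldAspNetCore.dll
HelloWorldAspNetCore.runtimeconfig.json
web.config
wwwroot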

Next, prepare your app to run as a container. The first step is to define the container and its contents.

In the publish directory, create a Dockerfile to define the Docker image.

touch Dockerfile

Add the following to the Dockerfile using your favorite editor (vim, nano, emacs, or Cloud Shell's code editor).

# Base image with the ASP.NET Core 2.1 runtime preinstalled
FROM gcr.io/google-appengine/aspnetcore:2.1
# Copy the published app into the image
ADD ./ /app
# Listen on the port defined by the PORT environment variable (8080 here)
ENV ASPNETCORE_URLS=http://*:${PORT}
# Run the app's DLL from /app on container start
WORKDIR /app
ENTRYPOINT ["dotnet", "HelloWorldAspNetCore.dll"]

The Dockerfile builds on the official Google App Engine image for ASP.NET Core apps, which is already configured to run .NET Core apps, and adds the published app files to the image.

One important configuration included in your Dockerfile is the port on which the app listens for incoming traffic (8080). This is accomplished by setting the ASPNETCORE_URLS environment variable, which ASP.NET Core apps use to determine which port to listen on.
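
If you want to see this mechanism in action before building the image, you can run the published DLL directly from this folder with the variable set by hand (a quick optional check; 9090 is an arbitrary free port, and Ctrl+C stops the app):

ASPNETCORE_URLS=http://localhost:9090 dotnet HelloWorldAspNetCore.dll

The app should report that it is listening on http://localhost:9090.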

Save this Dockerfile. We will build the image next, but before that, let's set the PROJECT_ID environment variable:

export PROJECT_ID=$(gcloud config get-value core/project)

Test that it is set as follows:

echo ${PROJECT_ID}
yourproject-XXXX

Now, let's build the image:

docker build -t gcr.io/${PROJECT_ID}/hello-dotnet:v1 .

Once this completes (it'll take some time to download and extract everything), you can see the image is built and saved locally:

docker images
REPOSITORY                             TAG   
gcr.io/yourproject-XXXX/hello-dotnet   v1            
gcr.io/google-appengine/aspnetcore     2.1

Test the image locally with the following command, which runs a Docker container as a daemon on port 8080 from your newly created container image:

docker run -d -p 8080:8080 gcr.io/${PROJECT_ID}/hello-dotnet:v1

You can see the container running:

docker ps
CONTAINER ID        IMAGE 
fb9c45661d87        gcr.io/yourproject-XXXX/hello-dotnet:v1
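
If the container doesn't behave as expected, the app's console output is the first place to look. You can fetch it with docker logs, using the CONTAINER ID from the output above:

docker logs fb9c45661d87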

Once again, take advantage of the web preview feature of Cloud Shell:

You should see the default ASP.NET Core webpage in a new tab.

Once you verify that the app is running fine locally in a Docker container, stop the running container. First, get the container ID. In this example, the app was running as container fb9c45661d87:

docker ps

CONTAINER ID        IMAGE                             
fb9c45661d87        gcr.io/PROJECT_ID/hello-dotnet:v1

Stop the container.

docker stop fb9c45661d87
fb9c45661d87

Now that the image works as intended, you can push it to Google Container Registry, a private repository for your Docker images accessible from every Google Cloud project (and also from outside Google Cloud Platform):

gcloud docker -- push gcr.io/${PROJECT_ID}/hello-dotnet:v1

If all goes well, after a little while you should see the container image listed in the Container Registry section of the web console. At this point you have a project-wide Docker image available, which Kubernetes can access and orchestrate as you'll see in a few minutes.
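
You can also verify the push from the command line. gcloud can list the images stored in your project's registry; the output should look similar to this:

gcloud container images list

NAME
gcr.io/yourproject-XXXX/hello-dotnet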

If you're curious, you can navigate through the container images as they are stored in Google Cloud Storage by following this link: https://console.cloud.google.com/storage/browser/ (the full resulting link should be of this form: https://console.cloud.google.com/project/PROJECT_ID/storage/browser/).

OK, you are now ready to create your GKE cluster. Before that, navigate to the Google Kubernetes Engine section of the web console and wait for the system to initialize (it should only take a few seconds).

A cluster consists of a Kubernetes master API server managed by Google and a set of worker nodes. The worker nodes are Compute Engine virtual machines.

Let's use the gcloud CLI from your Cloud Shell session to create a cluster. Adjust the zone if needed (see the list of available zones in the GCP documentation). The command will take a few minutes to complete:

gcloud container clusters create hello-dotnet-cluster --cluster-version=latest --num-nodes 4 --zone europe-west1-b

In the end, you should see the cluster created.

Creating cluster hello-dotnet-cluster...done.
Created [https://container.googleapis.com/v1/projects/dotnet-atamel/zones/europe-west1-b/clusters/hello-dotnet-cluster].
kubeconfig entry generated for hello-dotnet-cluster.
NAME                  ZONE            MASTER_VERSION  
hello-dotnet-cluster  europe-west1-b  1.10.7-gke.6

You should now have a fully-functioning Kubernetes cluster powered by Google Kubernetes Engine:
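
As a quick check, you can list the worker nodes from Cloud Shell. kubectl was configured automatically when the cluster was created (note the kubeconfig entry in the output above), and you should see four Compute Engine VMs in Ready status:

kubectl get nodes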

Grant admin permissions

Before installing Istio, grant admin permissions in the cluster to the current gcloud user. You need these permissions to create the necessary Role-based access control (RBAC) rules for Istio.

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)

Now, you're ready to install Istio. Istio's control plane is installed in its own Kubernetes istio-system namespace, and can manage microservices across all other namespaces. The installation includes Istio core components, tools, and samples.

Download Istio

The Istio release page offers download artifacts for several OSs. In this case, you can use a convenient command to download and extract a specific release:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.2 sh -

Install the core components

Next, you install Istio's core components. You install Istio with the optional Istio Auth components, which enable mutual TLS authentication between the sidecars. Navigate to the Istio directory and run:

cd istio-1.0.2
kubectl apply -f install/kubernetes/istio-demo-auth.yaml

This creates the istio-system namespace along with the required RBAC permissions, and deploys the primary Istio control plane components: Pilot, Mixer (policy and telemetry), Citadel, the sidecar injector, and the ingress and egress gateways.

You should see approximately 70 lines of console output like this:

namespace "istio-system" created
configmap "istio-statsd-prom-bridge" created
...
service "tracing" created
mutatingwebhookconfiguration "istio-sidecar-injector" created

Automatic sidecar injection

To start using Istio, you don't need to make any changes to the application. When you configure and run the services, Envoy sidecars are automatically injected into each pod for the service.

For that to work, you need to enable sidecar injection for the namespace ('default') that you use for your microservices. You do that by applying a label:

kubectl label namespace default istio-injection=enabled --overwrite

To verify that the label was successfully applied, run the following command:

kubectl get namespace -L istio-injection

The output confirms that sidecar injection is enabled for the default namespace:

NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    34m       enabled
istio-system   Active    32m
kube-public    Active    34m
kube-system    Active    34m

Now verify the installation. First, ensure that the following Kubernetes services are deployed:

kubectl get svc -n istio-system

Your output should look like the following:

NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)
istio-citadel              ClusterIP      30.0.0.119   <none>          8060/TCP,9093/TCP
istio-egressgateway        ClusterIP      30.0.0.11    <none>          80/TCP,443/TCP
istio-ingressgateway       LoadBalancer   30.0.0.39    9.111.255.245   80:31380/TCP,443:31390/TCP,31400:31400/TCP
istio-pilot                ClusterIP      30.0.0.136   <none>          15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP
istio-policy               ClusterIP      30.0.0.242   <none>          9091/TCP,15004/TCP,9093/TCP
istio-statsd-prom-bridge   ClusterIP      30.0.0.111   <none>          9102/TCP,9125/UDP
istio-telemetry            ClusterIP      30.0.0.246   <none>          9091/TCP,15004/TCP,9093/TCP,42422/TCP
prometheus                 ClusterIP      30.0.0.253   <none>          9090/TCP
istio-sidecar-injector     ClusterIP      10.23.242.122   <none>        443/TCP

Next, make sure that the corresponding Kubernetes pods are deployed and all containers are up and running:

kubectl get pods -n istio-system

After all the pods are marked as running, you can proceed. You might have some post-install and cleanup pods marked as completed instead of running, and that's OK.

NAME                                     READY     STATUS 
istio-citadel-7bdc7775c7-22dxq             1/1       Running
istio-egressgateway-78dd788b6d-ld4qx       1/1       Running
istio-ingressgateway-7dd84b68d6-smqbt      1/1       Running
istio-pilot-d5bbc5c59-sv6ml                2/2       Running
istio-policy-64595c6fff-sqbz7              2/2       Running
istio-sidecar-injector-dbd67c88d-4jxqj     1/1       Running
istio-statsd-prom-bridge-949999c4c-fbfzg   1/1       Running
istio-telemetry-cfb674b6c-kk98w            2/2       Running
prometheus-86cb6dd77c-z2tmq                1/1       Running

Now that you've verified that Istio is installed and running, you can deploy the ASP.NET Core app.

Deployment and Service

First, create an aspnetcore.yaml file using your favorite editor (vim, nano, emacs, or Cloud Shell's code editor) and define the Kubernetes Deployment and Service for the app:

apiVersion: v1
kind: Service
metadata:
  name: aspnetcore-service
  labels:
    app: aspnetcore
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: aspnetcore
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aspnetcore-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: aspnetcore
        version: v1
    spec:
      containers:
      - name: aspnetcore
        image: gcr.io/YOUR-PROJECT-ID/hello-dotnet:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

The contents of the file are a standard Kubernetes Deployment and Service to deploy the application and don't contain anything Istio-specific. Make sure you replace YOUR-PROJECT-ID in the image field with your actual project ID. Also note that the Service port is named http; Istio relies on these port-name prefixes to determine which protocol a service speaks.

Deploy the services to the default namespace with kubectl:

kubectl apply -f aspnetcore.yaml
service "aspnetcore-service" created
deployment.extensions "aspnetcore-v1" created

Verify that pods are running:

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
aspnetcore-v1-6cf64748-mddb   2/2       Running   0          34s
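
Notice the 2/2 in the READY column: each pod runs your app container plus the automatically injected Envoy sidecar. You can confirm this by printing the container names in the pod (a quick check that uses a label selector, so the generated pod name doesn't matter):

kubectl get pods -l app=aspnetcore -o jsonpath='{.items[0].spec.containers[*].name}'

Command output

aspnetcore istio-proxy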

Gateway and VirtualService

To allow ingress traffic to reach the mesh you need to create a Gateway and a VirtualService.

A Gateway configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application. A VirtualService defines the rules that control how requests for a service are routed within an Istio service mesh.

Create an aspnetcore-gateway.yaml file to define the Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: aspnetcore-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Create an aspnetcore-virtualservice.yaml file to define the VirtualService. The destination host, aspnetcore-service, refers to the Kubernetes Service you created earlier:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: aspnetcore-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - aspnetcore-gateway
  http:
  - route:
    - destination:
        host: aspnetcore-service

Run kubectl to deploy the Gateway:

kubectl apply -f aspnetcore-gateway.yaml

The command produces the following output:

gateway.networking.istio.io "aspnetcore-gateway" created

Next, run the following command to deploy the VirtualService:

kubectl apply -f aspnetcore-virtualservice.yaml

The command produces the following output:

virtualservice.networking.istio.io "aspnetcore-virtualservice" created

Verify that everything is running:

kubectl get gateway
NAME                        AGE
aspnetcore-gateway          28s

kubectl get virtualservice
NAME                        AGE
aspnetcore-virtualservice   33s

Congratulations! You have just deployed an Istio-enabled application.

You can finally see the application in action. You need the external IP and port of the gateway; the IP is listed under EXTERNAL-IP:

kubectl get svc istio-ingressgateway -n istio-system

Export the external IP and port to a GATEWAY_URL variable:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

Use curl to test the app. The service should respond with HTTP status code 200:

curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/

Alternatively, open a browser and navigate to http://<GATEWAY_URL> (run echo $GATEWAY_URL to print the value) to view the app:

You just deployed a simple ASP.NET Core app to Kubernetes running on Google Kubernetes Engine (GKE) and configured it to be managed by Istio.

You might be wondering "What's the benefit of Istio?". That's a great question. So far, there's no advantage to having Istio manage this app. In the second part of the lab, we will further explore features of Istio, such as metrics, tracing, dynamic traffic management, service visualization, and fault injection.

Next Steps

Continue with the second part of the lab to explore Istio's metrics, tracing, dynamic traffic management, and fault injection features in more depth.

License

This work is licensed under a Creative Commons Attribution 2.0 Generic License.

If you're not continuing to the second part of the lab, you can delete the app and uninstall Istio, or you can simply delete the Kubernetes cluster.

Delete the app

To delete the app:

kubectl delete -f aspnetcore-gateway.yaml
kubectl delete -f aspnetcore-virtualservice.yaml
kubectl delete -f aspnetcore.yaml

To confirm that the app is gone:

kubectl get gateway 
kubectl get virtualservices 
kubectl get pods

Uninstall Istio

To delete Istio:

kubectl delete -f install/kubernetes/istio-demo-auth.yaml

To confirm that Istio is gone:

kubectl get pods -n istio-system

Delete Kubernetes cluster

gcloud container clusters delete hello-dotnet-cluster --zone europe-west1-b