Deploy and Update a .NET Core app in Google Kubernetes Engine

1. Overview

Microsoft .NET Core is an open-source and cross-platform version of .NET that can natively run in containers. .NET Core is available on GitHub and is maintained by Microsoft and the .NET community. This lab deploys a containerized .NET Core app into Google Kubernetes Engine (GKE).

This lab follows a typical development pattern where applications are developed in a developer's local environment and then deployed to production. In the first part of the lab, an example .NET Core app is validated using a container running in Cloud Shell. Once validated, the app is deployed on Kubernetes using GKE. The lab includes the steps to create a GKE cluster.

In the second part of the lab, a minor change is made to the app so that it shows the hostname of the container running that app instance. The updated application is then validated in Cloud Shell, and the deployment is updated to use the new version. The following illustration shows the sequence of activities in this lab:

Demo sequence diagram

Costs

If you run this lab exactly as written, normal costs apply for the Google Cloud services used, such as Google Kubernetes Engine and Container Registry.

2. Setup and Requirements

Prerequisites

To complete this lab, a Google Cloud account and project are required. For more detailed instructions on how to create a new project, refer to this Codelab.

This lab makes use of Docker running in Cloud Shell, which is available through the Google Cloud Console and comes preconfigured with many useful tools, such as gcloud and Docker. To access Cloud Shell, click the Cloud Shell icon in the top right of the console; it opens in the bottom pane of the console window.

Cloud Shell

Alternate configuration options for GKE cluster (optional)

This lab requires a Kubernetes cluster. In the next section, a GKE cluster with a simple configuration is created. This section shows some gcloud commands that reveal alternate configuration options to use when building a Kubernetes cluster with GKE, such as different machine types, zones, and even GPUs (accelerators); an illustrative command follows the list below.

  • List machine types: gcloud compute machine-types list
  • List GPUs (accelerators): gcloud compute accelerator-types list
  • List compute zones: gcloud compute zones list
  • Get help on any gcloud command by appending --help, for example: gcloud container clusters --help
    • For details on creating a Kubernetes cluster: gcloud container clusters create --help
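
As a purely illustrative example (the cluster name, zone, and machine type here are placeholders, not values used elsewhere in this lab), these options can be combined when creating a cluster:

# Hypothetical: a cluster with an alternate zone and machine type
gcloud container clusters create example-cluster \
  --zone us-central1-a \
  --machine-type e2-standard-4 \
  --num-nodes 2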

For a complete list of configuration options for GKE, see this document.

Prepare to create the Kubernetes cluster

In Cloud Shell, it's necessary to set some environment variables and configure the gcloud client. This is accomplished with the following commands.

export PROJECT_ID=YOUR_PROJECT_ID
export DEFAULT_ZONE=us-central1-c

gcloud config set project ${PROJECT_ID}
gcloud config set compute/zone ${DEFAULT_ZONE}
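
If desired, verify that the settings took effect by displaying the active gcloud configuration:

gcloud config list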

Create a GKE cluster

Since this lab deploys the .NET Core app on Kubernetes, it's necessary to create a cluster. Use the following command to create a new Kubernetes cluster in Google Cloud using GKE.

gcloud container clusters create dotnet-cluster \
  --zone ${DEFAULT_ZONE} \
  --num-nodes=1 \
  --node-locations=${DEFAULT_ZONE},us-central1-b \
  --enable-stackdriver-kubernetes \
  --machine-type=n1-standard-1 \
  --workload-pool=${PROJECT_ID}.svc.id.goog \
  --enable-ip-alias
  • --num-nodes is the number of nodes to create per zone and can be scaled later
  • --node-locations is a comma-separated list of zones. In this case, the zone set in the environment variable above and us-central1-b are used
    • NOTE: This list can't contain duplicates
  • --workload-pool establishes Workload Identity so GKE workloads can access Google Cloud services

While the cluster is building, the following is displayed:

Creating cluster dotnet-cluster in us-central1-c... Cluster is being deployed...⠼
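
Cluster creation can take several minutes. From another Cloud Shell tab, the provisioning status can also be checked with:

gcloud container clusters list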

Configure kubectl

The kubectl CLI is the primary way of interacting with a Kubernetes cluster. In order to use it with the new cluster that was just created, it needs to be configured to authenticate against the cluster. This is done with the following command.

$ gcloud container clusters get-credentials dotnet-cluster --zone ${DEFAULT_ZONE}
Fetching cluster endpoint and auth data.
kubeconfig entry generated for dotnet-cluster.

It should now be possible to use kubectl to interact with the cluster.

$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE     VERSION
gke-dotnet-cluster-default-pool-02c9dcb9-fgxj   Ready    <none>   2m15s   v1.16.13-gke.401
gke-dotnet-cluster-default-pool-ed09d7b7-xdx9   Ready    <none>   2m24s   v1.16.13-gke.401

3. Test locally and confirm desired functionality

This lab uses the mcr.microsoft.com/dotnet/samples and mcr.microsoft.com/dotnet/samples:aspnetapp container images from the official .NET repository on Docker Hub.

Run container locally to verify functionality

In Cloud Shell, verify that Docker is up and running properly and that the .NET container works as expected by running the following Docker command:

$ docker run --rm mcr.microsoft.com/dotnet/samples

      Hello from .NET!
      __________________
                        \
                        \
                            ....
                            ....'
                            ....
                          ..........
                      .............'..'..
                  ................'..'.....
                .......'..........'..'..'....
                ........'..........'..'..'.....
              .'....'..'..........'..'.......'.
              .'..................'...   ......
              .  ......'.........         .....
              .                           ......
              ..    .            ..        ......
            ....       .                 .......
            ......  .......          ............
              ................  ......................
              ........................'................
            ......................'..'......    .......
          .........................'..'.....       .......
      ........    ..'.............'..'....      ..........
    ..'..'...      ...............'.......      ..........
    ...'......     ...... ..........  ......         .......
  ...........   .......              ........        ......
  .......        '...'.'.              '.'.'.'         ....
  .......       .....'..               ..'.....
    ..       ..........               ..'........
            ............               ..............
          .............               '..............
          ...........'..              .'.'............
        ...............              .'.'.............
        .............'..               ..'..'...........
        ...............                 .'..............
        .........                        ..............
          .....
  
Environment:
.NET 5.0.1-servicing.20575.16
Linux 5.4.58-07649-ge120df5deade #1 SMP PREEMPT Wed Aug 26 04:56:33 PDT 2020

Confirm web app functionality

A sample web application can also be validated in Cloud Shell. The docker run command below creates a new container from the aspnetapp sample image; the container listens on port 80, which is mapped to port 8080 on localhost. Remember that localhost in this case is the Cloud Shell instance.

$ docker run -it --rm -p 8080:80 --name aspnetcore_sample mcr.microsoft.com/dotnet/samples:aspnetapp
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {64a3ed06-35f7-4d95-9554-8efd38f8b5d3} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

Since this is a web app, it needs to be viewed and validated in a web browser. The next section shows how to do that in Cloud Shell using Web Preview.
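
Optionally, before switching to a browser, the endpoint can be spot-checked from a second Cloud Shell tab with curl:

# Should return the beginning of the sample app's HTML
curl -s http://localhost:8080/ | head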

4. Access services from cloud shell using "Web Preview"

Cloud Shell offers Web Preview, a feature that makes it possible to use a browser to interact with processes running in the Cloud Shell instance.

Use "Web Preview" to view apps in Cloud Shell

In Cloud Shell, click the web preview button and choose "Preview on port 8080" (or whatever port Web Preview is set to use).

Cloud Shell

That will open a browser window with an address like this:

https://8080-cs-754738286554-default.us-central1.cloudshell.dev/?authuser=0

View the .NET sample application using Web Preview

The sample app that was started in the last step can now be viewed by starting Web Preview and loading the provided URL. It should look something like this:

Screenshot of .NET app V1

5. Deploy to Kubernetes

Build the YAML file and apply

The next step requires a YAML file describing two Kubernetes resources: a Deployment and a Service. Create a file named dotnet-app.yaml in Cloud Shell and add the following contents to it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-deployment
  labels:
    app: dotnetapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dotnetapp
  template:
    metadata:
      labels:
        app: dotnetapp
    spec:
      containers:
      - name: dotnet
        image: mcr.microsoft.com/dotnet/samples:aspnetapp
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dotnet-service
spec:
  selector:
    app: dotnetapp
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80

Now use kubectl to apply this file to Kubernetes.

$ kubectl apply -f dotnet-app.yaml
deployment.apps/dotnet-deployment created
service/dotnet-service created

Notice the messages that indicate the desired resources have been created.

Explore the resulting resources

We can use the kubectl CLI to examine the resources that were created above. First, let's look at the Deployment and confirm that the new deployment is there.

$ kubectl get deployment
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
dotnet-deployment   3/3     3            3           80s

Next have a look at the ReplicaSets. There should be a ReplicaSet created by the above deployment.

$ kubectl get replicaset
NAME                           DESIRED   CURRENT   READY   AGE
dotnet-deployment-5c9d4cc4b9   3         3         3       111s

Finally, have a look at the Pods. The Deployment specified three replicas, so the command below should show three instances. The -o wide option is added so that the nodes where those instances are running are also shown.

$ kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP          NODE                                            NOMINATED NODE   READINESS GATES
dotnet-deployment-5c9d4cc4b9-cspqd   1/1     Running   0          2m25s   10.16.0.8   gke-dotnet-cluster-default-pool-ed09d7b7-xdx9   <none>           <none>
dotnet-deployment-5c9d4cc4b9-httw6   1/1     Running   0          2m25s   10.16.1.7   gke-dotnet-cluster-default-pool-02c9dcb9-fgxj   <none>           <none>
dotnet-deployment-5c9d4cc4b9-vvdln   1/1     Running   0          2m25s   10.16.0.7   gke-dotnet-cluster-default-pool-ed09d7b7-xdx9   <none>           <none>

Review the Service resource

A Service resource in Kubernetes acts as a load balancer. Its endpoints are determined by labels on Pods. This means that as soon as new Pods with matching labels are added to the Deployment, for example by the kubectl scale deployment operation used later in this lab, they are immediately available to requests handled by that Service.
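
Because the Service selects endpoints by label, the Pods backing it can be listed with the same selector used in the YAML file:

kubectl get pods -l app=dotnetapp -o wide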

The following command should show the Service resource.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
dotnet-service   ClusterIP   10.20.9.124   <none>        8080/TCP   2m50s
...

It's possible to see more details about the Service with the following command.

$ kubectl describe svc dotnet-service
Name:              dotnet-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=dotnetapp
Type:              ClusterIP
IP:                10.20.9.124
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.16.0.7:80,10.16.0.8:80,10.16.1.7:80
Session Affinity:  None
Events:            <none>

Notice that the Service is of type ClusterIP. This means that any Pod within the cluster can resolve the Service name, dotnet-service, to its IP address. Requests sent to the Service are load balanced across all instances (Pods). The Endpoints value above shows the IPs of the Pods currently available for this Service. Compare these to the IPs of the Pods in the earlier output.
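
To see this name resolution in action, a throwaway Pod can query the Service by name from inside the cluster. This is a minimal sketch; the busybox image and the Pod name dns-test are illustrative choices, not part of the lab's resources:

# Runs a one-off Pod, fetches the app through the Service, then deletes the Pod
kubectl run dns-test -it --rm --restart=Never --image=busybox \
  -- wget -qO- http://dotnet-service:8080/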

Verify the running app

At this point the application is live and ready for user requests. In order to access it, use a proxy. The following command creates a local proxy that accepts requests on port 8080 and passes them to the Kubernetes cluster.

$ kubectl proxy --port 8080
Starting to serve on 127.0.0.1:8080

Now use Web Preview in Cloud Shell to access the web application.

Add the following to the URL generated by Web Preview: /api/v1/namespaces/default/services/dotnet-service:8080/proxy/. That will end up looking something like this:

https://8080-cs-473655782854-default.us-central1.cloudshell.dev/api/v1/namespaces/default/services/dotnet-service:8080/proxy/
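
While kubectl proxy is running, the same proxied path can also be checked from a second Cloud Shell tab with curl:

curl -s http://localhost:8080/api/v1/namespaces/default/services/dotnet-service:8080/proxy/ | head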

Congratulations on deploying a .NET Core app on Google Kubernetes Engine. Next we will make a change to the app and redeploy.

6. Modify the app

In this section, the application will be modified to show the host on which the instance is running. This will make it possible to confirm that load balancing is working and that the available Pods are responding as expected.

Get the source code

git clone https://github.com/dotnet/dotnet-docker.git
cd dotnet-docker/samples/aspnetapp/

Update the app to include the host name

Open aspnetapp/Pages/Index.cshtml in an editor (for example vi aspnetapp/Pages/Index.cshtml) and add the following row to the table in the page:

    <tr>
        <td>Host</td>
        <td>@Environment.MachineName</td>
    </tr>

Build a new container image and test locally

Build the new container image with the updated code.

docker build --pull -t aspnetapp:alpine -f Dockerfile.alpine-x64 .

As before, test the new application locally:

$ docker run --rm -it -p 8080:80 aspnetapp:alpine
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {f71feb13-8eae-4552-b4f2-654435fff7f8} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

As before, the app can be accessed using Web Preview. This time the Host parameter should be visible, as shown here:

Cloud Shell

Open a new tab in Cloud Shell and run docker ps to see that the container ID matches the Host value shown above.

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
ab85ce11aecd        aspnetapp:alpine    "./aspnetapp"       2 minutes ago       Up 2 minutes        0.0.0.0:8080->80/tcp   relaxed_northcutt

Tag and push the image so it's available to Kubernetes

The image needs to be tagged and pushed in order for Kubernetes to be able to pull it. Start by listing the container images and identifying the desired image.

$ docker image list
REPOSITORY                                         TAG                 IMAGE ID            CREATED             SIZE
aspnetapp                                          alpine              95b4267bb6d0        6 days ago          110MB

Next, tag that image and push it to Google Container Registry. Using the IMAGE ID above, that will look like this:

docker tag 95b4267bb6d0 gcr.io/${PROJECT_ID}/aspnetapp:alpine
docker push gcr.io/${PROJECT_ID}/aspnetapp:alpine
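
If the push fails with an authorization error (Cloud Shell normally preconfigures Docker credentials for Google Cloud), access to gcr.io can be set up with the following command and the push retried:

gcloud auth configure-docker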

7. Redeploy the updated application

Edit the YAML file

Change back into the directory where the file dotnet-app.yaml is saved. Find the following line in the YAML file:

        image: mcr.microsoft.com/dotnet/samples:aspnetapp

This needs to be changed to reference the container image that was created and pushed into gcr.io above.

        image: gcr.io/PROJECT_ID/aspnetapp:alpine

Don't forget to modify it to use your PROJECT_ID. It should look something like this when you're done:

        image: gcr.io/myproject/aspnetapp:alpine
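
Alternatively, the substitution can be made in one step with sed. This sketch assumes the PROJECT_ID environment variable set earlier is still defined in the current shell:

sed -i "s|mcr.microsoft.com/dotnet/samples:aspnetapp|gcr.io/${PROJECT_ID}/aspnetapp:alpine|" dotnet-app.yaml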

Apply the updated YAML file

$ kubectl apply -f dotnet-app.yaml
deployment.apps/dotnet-deployment configured
service/dotnet-service unchanged

Notice that the Deployment resource shows configured and the Service resource shows unchanged. The updated Pods can be seen as before with the command kubectl get pod, but this time the -w flag is added, which watches changes as they happen.

$ kubectl get pod -w
NAME                                 READY   STATUS              RESTARTS   AGE
dotnet-deployment-5c9d4cc4b9-cspqd   1/1     Running             0          34m
dotnet-deployment-5c9d4cc4b9-httw6   1/1     Running             0          34m
dotnet-deployment-5c9d4cc4b9-vvdln   1/1     Running             0          34m
dotnet-deployment-85f6446977-tmbdq   0/1     ContainerCreating   0          4s
dotnet-deployment-85f6446977-tmbdq   1/1     Running             0          5s
dotnet-deployment-5c9d4cc4b9-vvdln   1/1     Terminating         0          34m
dotnet-deployment-85f6446977-lcc58   0/1     Pending             0          0s
dotnet-deployment-85f6446977-lcc58   0/1     Pending             0          0s
dotnet-deployment-85f6446977-lcc58   0/1     ContainerCreating   0          0s
dotnet-deployment-5c9d4cc4b9-vvdln   0/1     Terminating         0          34m
dotnet-deployment-85f6446977-lcc58   1/1     Running             0          6s
dotnet-deployment-5c9d4cc4b9-cspqd   1/1     Terminating         0          34m
dotnet-deployment-85f6446977-hw24v   0/1     Pending             0          0s
dotnet-deployment-85f6446977-hw24v   0/1     Pending             0          0s
dotnet-deployment-5c9d4cc4b9-cspqd   0/1     Terminating         0          34m
dotnet-deployment-5c9d4cc4b9-vvdln   0/1     Terminating         0          34m
dotnet-deployment-5c9d4cc4b9-vvdln   0/1     Terminating         0          34m
dotnet-deployment-85f6446977-hw24v   0/1     Pending             0          2s
dotnet-deployment-85f6446977-hw24v   0/1     ContainerCreating   0          2s
dotnet-deployment-5c9d4cc4b9-cspqd   0/1     Terminating         0          34m
dotnet-deployment-5c9d4cc4b9-cspqd   0/1     Terminating         0          34m
dotnet-deployment-85f6446977-hw24v   1/1     Running             0          3s
dotnet-deployment-5c9d4cc4b9-httw6   1/1     Terminating         0          34m
dotnet-deployment-5c9d4cc4b9-httw6   0/1     Terminating         0          34m

The above output shows the rolling update as it happens. First, new containers are started, and when they are running, the old containers are terminated.
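
The rollout can also be monitored with a single command that blocks until the update is complete:

kubectl rollout status deployment/dotnet-deployment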

Verify the running app

At this point the application is updated and ready for user requests. As before, it can be accessed using a proxy.

$ kubectl proxy --port 8080
Starting to serve on 127.0.0.1:8080

Now use Web Preview in Cloud Shell to access the web application.

Add the following to the URL generated by Web Preview: /api/v1/namespaces/default/services/dotnet-service:8080/proxy/. That will end up looking something like this:

https://8080-cs-473655782854-default.us-central1.cloudshell.dev/api/v1/namespaces/default/services/dotnet-service:8080/proxy/

Confirm the Kubernetes Service is distributing load

Refresh this URL several times and notice that the Host changes as the requests are load balanced across different Pods by the Service. Compare the Host values to the list of Pods from above to see that all Pods are receiving traffic.

Scale up instances

Scaling apps in Kubernetes is easy. The following command scales the deployment up to 6 instances of the application.

$ kubectl scale deployment dotnet-deployment --replicas 6
deployment.apps/dotnet-deployment scaled

The new Pods and their current state can be viewed with this command:

kubectl get pod -w
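
Once the new Pods are Running, confirm the Deployment's replica count; the READY column should show 6/6:

kubectl get deployment dotnet-deployment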

Notice that refreshing the same browser window shows that traffic is now being balanced across all the new Pods.

8. Congratulations!

In this lab, a .NET Core sample web application was validated in a developer environment and subsequently deployed to Kubernetes using GKE. The app was then modified to display the hostname of the container in which it was running. The Kubernetes deployment was then updated to the new version and the app was scaled up to demonstrate how load is distributed across additional instances.

To learn more about .NET and Kubernetes, consider these tutorials. They build on what was learned in this lab by introducing the Istio service mesh for more sophisticated routing and resilience patterns.

9. Clean up

In order to avoid unintended costs, use the following commands to delete the cluster and the container image that were created in this lab.

gcloud container clusters delete dotnet-cluster --zone ${DEFAULT_ZONE}
gcloud container images delete gcr.io/${PROJECT_ID}/aspnetapp:alpine
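
To confirm that nothing billable remains, list any remaining clusters and images:

gcloud container clusters list
gcloud container images list --repository=gcr.io/${PROJECT_ID}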