Rewriting or re-engineering existing applications to work on Kubernetes isn't always possible or feasible to do manually. Migrate for Anthos can help modernize your existing applications and get them running in Kubernetes. In this codelab, you'll migrate an existing web app hosted on Compute Engine to Kubernetes Engine using Migrate for Anthos.
What you'll learn
- How to deploy Migrate for Anthos on a Kubernetes cluster
- How to create a container in a stateful set from an existing Compute Engine instance
- How to deploy your container to Kubernetes and configure it with a load balancer
What you'll need
- A Google Cloud project with billing set up. If you don't have one, you'll need to create one.
This codelab can run completely on Google Cloud Platform without any local installation or configuration.
Before starting, make sure to enable the required APIs on your Google Cloud project:
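If you prefer the command line, the APIs can be enabled from Cloud Shell. As a sketch, assuming Compute Engine and Kubernetes Engine are the services this codelab needs:

```shell
# Enable the APIs this codelab relies on (adjust the list if your
# project needs additional services).
gcloud services enable \
    compute.googleapis.com \
    container.googleapis.com
```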
Create a Compute Instance Web Server
Let's create a compute instance that we'll use to host our initial nginx web server, along with the firewall rules that will allow us to view the web server's default landing page. There are a few ways we can do this, but for ease of use, we'll use Cloud Shell.
In Cloud Shell run the following:
gcloud compute instances create webserver --zone=us-central1-a && \
gcloud compute firewall-rules create default-allow-http --allow=tcp:80
The first half of this command creates a Compute Engine instance in the us-central1-a zone, while the second half creates a firewall rule named 'default-allow-http' that allows HTTP traffic into our network.
When the instance is successfully created, it will display a table with the instance's details. Take note of the External IP - we will need this to verify our web server is running later on.
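If you lose track of the external IP, gcloud can print it directly at any time:

```shell
# Print just the instance's external (NAT) IP address.
gcloud compute instances describe webserver \
    --zone=us-central1-a \
    --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
```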
Once the instance is up and running we can SSH into our instance from the Cloud Shell to install nginx and start the web server:
gcloud compute ssh --zone us-central1-a webserver
Once logged into our compute instance, install nginx:
sudo apt install nginx
Log out of the SSH session with the exit command.
Let's verify that our web server is running by entering the instance's external IP from earlier into our browser. You should see the default nginx welcome screen:
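The same check works from Cloud Shell with curl; EXTERNAL_IP below is a placeholder for your instance's external IP:

```shell
# Fetch the landing page and show its title; the default nginx
# page is titled "Welcome to nginx!".
curl -s http://EXTERNAL_IP | grep '<title>'
```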
This web server will serve as the legacy web app that we will migrate to Kubernetes using Migrate for Anthos.
Next, we'll create a GKE cluster, which is where we will ultimately migrate the Compute Engine web server. In Cloud Shell, run the following:
gcloud container clusters create my-gke-cluster \
    --zone us-central1-a \
    --cluster-version 1.13 \
    --machine-type n1-standard-4 \
    --image-type "UBUNTU" \
    --num-nodes 1 \
    --enable-stackdriver-kubernetes
Give this command a few minutes to complete. Once the cluster has been created, you'll receive some output with its details:
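Cloud Shell usually configures kubectl for a cluster it just created, but fetching credentials explicitly is harmless and lets us confirm the node is up:

```shell
# Point kubectl at the new cluster and confirm the node is Ready.
gcloud container clusters get-credentials my-gke-cluster --zone us-central1-a
kubectl get nodes
```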
Next, navigate to the GCP Marketplace to Deploy Migrate for Anthos:
On the Marketplace page for Migrate for Anthos, click Configure and, if prompted, select your project from the list. The following page will present a form with some default values entered. Ensure that the selected cluster is the one we just created and click Deploy:
Migrate for Anthos should now be deployed on our Kubernetes cluster. When it's finished deploying, you'll see a status of 'OK' on the Kubernetes Engine Applications page:
We've got a Kubernetes cluster running Migrate for Anthos, so now we can begin the migration process. In order to deploy our compute instance to a Kubernetes cluster, we'll shut down the Compute Engine instance so that we can take snapshots of its disks. Before moving on, note the instance ID, which we'll need later on:
gcloud compute instances describe webserver --zone us-central1-a | grep ^id
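As a convenience (not a required step), both IDs the migration script needs can be captured in shell variables instead of copied by hand:

```shell
# Stash the project ID and instance ID for the migration script.
PROJECT_ID=$(gcloud config get-value project)
INSTANCE_ID=$(gcloud compute instances describe webserver \
    --zone us-central1-a --format='get(id)')
echo "Project: $PROJECT_ID  Instance: $INSTANCE_ID"
```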
Let's shut down our compute instance:
gcloud compute instances stop webserver --zone us-central1-a
Now that the instance is stopped we're able to safely snapshot the disks by running the following script. Be sure to insert your project ID and your instance ID:
python3 /google/migrate/anthos/gce-to-gke/clone_vm_disks.py \
    -p <project-id> \
    -i <instance-id> \
    -z us-central1-a \
    -T us-central1-a \
    -A webserver-statefulset \
    -o containerized-webserver.yaml
With those flags, the script will:
- Verify your GCE instance is off
- Create a snapshot from each of your instance's disks
- Create a new disk from each snapshot
- Delete the snapshots it created
- Generate a YAML file in your current working directory for deploying a stateful set that will host your web server
The generated YAML file will provision a stateful set in our Kubernetes cluster, along with the persistent volume claims required to mount the copied disks to our web server container. We can apply these changes with
kubectl apply -f containerized-webserver.yaml
Check the status of the webserver-statefulset on the Workloads page:
It is normal for the status to read 'Pods are pending' for a few minutes after running kubectl apply. Move on once the status reads 'OK'.
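If you'd rather stay in the terminal, the same status can be checked with kubectl:

```shell
# Watch the stateful set come up; READY should eventually show 1/1.
kubectl get statefulset webserver-statefulset
kubectl get pods
```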
At this point, our Kubernetes cluster should be running our web server as a stateful set, but we'll also need to expose its container through a load balancer to access our web server via an external IP address. In Cloud Shell, create a new file named loadbalancer.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: webserver-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: webserver-statefulset
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
And now apply it with
kubectl apply -f loadbalancer.yaml
We can use kubectl to retrieve the external IP address of the webserver-loadbalancer service:
kubectl get services
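To print just the IP rather than the whole service table, a jsonpath query works (note the field may be empty for a minute while GCP provisions the load balancer):

```shell
# Extract only the load balancer's external IP address.
kubectl get service webserver-loadbalancer \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```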
If we enter the external IP address in our browser, we should get the same default nginx welcome screen from earlier:
We've done it! Our GCE webserver is now hosted on Kubernetes! Nice!
As a managed Kubernetes service, Kubernetes Engine is automatically instrumented for both logging and monitoring with Stackdriver. Let's check out some of the metrics Stackdriver captures for us automatically.
Click the Monitoring link on the products menu - accessing this for the first time from your project may take a few minutes while it sets up your workspace.
Once loaded, hover over Resources in the left pane and select "Kubernetes Engine NEW" from the menu.
Each row in the dashboard presented here represents a Kubernetes resource. You can switch between the infrastructure, workloads or services view with the links above the dashboard.
In Workloads view, expand 'my-gke-cluster' and drill down to default > webserver-statefulset > webserver-statefulset-0 > webserver-statefulset. Click on the webserver-statefulset container. Here you'll find some out-of-the-box metrics being captured by Stackdriver, including memory utilization and CPU utilization.
The charts displayed in this dashboard are ones we'll be able to use to create a custom dashboard.
Stackdriver lets us create custom dashboards that we can use to organize charts and graphs for any metric data available to us. Let's create a custom dashboard to provide an at-a-glance view of some of our web server's metrics.
On the left side pane, hover over Dashboards, then click Create Dashboard.
Now that we have our empty dashboard, we can add metrics that we want to keep an eye on. Let's give our Untitled Dashboard a useful name like ‘My Web Server Containers' and click ‘Add Chart' at the top right:
Remember the out-of-the-box metrics? Let's add a chart for the container CPU utilization. In the field for Chart Title, enter ‘CPU Utilization'. In the box for ‘Find resource type and metric', type request_utilization and select CPU request utilization from the filtered list. This selection will populate both the Resource type and Metric fields for us.
Next, we'll want to filter by our project_id (if we have multiple projects) and container_name. In the Filter box, type project_id, select it from the filtered list, and select your project in the Value field. We also need to filter by container_name. In the Filter box, type container_name, select it from the filtered list and select webserver-statefulset in the Value field. Click Save.
We now have a dashboard with our first chart.
With Stackdriver, we can set up alerts to notify us when any metrics hit any threshold values we specify. For example, we can have Stackdriver email us when the CPU utilization from the last step is above a certain threshold for a sustained amount of time, which may indicate a problem with our app. To demonstrate what these alerts look like, let's set up an uptime check and then simulate an outage.
From the left pane, select Uptime Checks and then Uptime Checks Overview:
As the Uptime Checks page suggests, let's set up our first uptime check. Click the Add Uptime Check button at the top right of the page.
In the following form, enter 'Endpoint Uptime' as the title and your load balancer's external IP address as the hostname.
Click Save and you will be prompted to create an accompanying Alert Policy:
Click Create Alert Policy.
Let's name this 'Endpoint Uptime Policy'. In the Configuration section, set 'Condition triggers if' to 'Any time series violates' and click Save.
We're not quite finished yet. Next, we'll specify a Notification Channel so that we're notified when our alert policy is violated. In the Notification Channel Type drop-down, select Email, then enter a valid email address.
Click Add Notification Channel. Finally, at the bottom of the form, name the policy ‘Web App Uptime' and click Save.
To see what an alert will look like, in your Cloud Console, open up your Cloud Shell once again. The following command will stop the nginx service running in our webserver pod:
kubectl exec -t webserver-statefulset-0 -- /bin/bash -c "nginx -s stop"
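To see the outage for yourself before the email arrives, you can probe the endpoint from Cloud Shell; EXTERNAL_IP below is a placeholder for your load balancer's IP:

```shell
# With nginx stopped, the request should time out or be refused.
curl -m 5 -s -o /dev/null -w '%{http_code}\n' http://EXTERNAL_IP \
    || echo "no response from web server"
```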
After a few minutes, you should receive an email alerting you of the outage:
Let's undo that. Back in our Cloud Shell, let's restart nginx:
kubectl exec -t webserver-statefulset-0 -- /bin/bash -c "nginx"
After a few minutes, you'll get another Stackdriver email, this time with better news than before:
Now that we've migrated from GCE to GKE with Migrate for Anthos, let's clean up our project of all the resources we've created.
Delete the Project
If you prefer, you can delete the entire project. In the GCP Console, go to the Cloud Resource Manager page:
In the project list, select the project we've been working in and click Delete. You'll be prompted to type in the project ID. Enter it and click Shut Down.
If you prefer to delete the different components one by one, proceed to the next section.
From your dashboard page click the settings icon at the top of the page and select Delete Dashboard.
From the Policies page, select Delete from the Actions menu on the right for each policy you created.
From the Uptime Checks page, select Delete from the Actions menu on the right of each check you created.
GCE and Kubernetes
Google Compute Engine instance
gcloud compute instances delete webserver --zone=us-central1-a
Kubernetes Cluster (includes Migrate for Anthos, stateful set, and load balancer service)
gcloud container clusters delete my-gke-cluster --zone=us-central1-a
The migration script created a new disk for our stateful set. Use the following to retrieve its name:
gcloud compute disks list --filter=webserver
Using your disk name in place of mine, delete it with:
gcloud compute disks delete vls-690d-webserver --zone=us-central1-a
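If the migration produced more than one disk, a small loop can delete them all in one pass; this is a sketch that assumes every generated disk name matches the webserver filter used above:

```shell
# Delete every disk left over from the migration.
for disk in $(gcloud compute disks list --filter=webserver --format='value(name)'); do
  gcloud compute disks delete "$disk" --zone=us-central1-a --quiet
done
```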
All cleaned up!
Way to go! You migrated your web server from a GCE instance to a Kubernetes cluster using Migrate for Anthos.
What we've covered
- We migrated a web server from GCE to a Kubernetes cluster using Migrate for Anthos
- We opened our stateful set web server to the world by exposing it via a Kubernetes load balancer service.
- We enabled Stackdriver and made a custom dashboard
- We configured an uptime check along with an alert policy to let us know when our web server goes down