In this Codelab you will learn how to:
Kubernetes is all about applications and in this codelab you will utilize the Kubernetes API to deploy, manage, and upgrade applications. In this part of the workshop you will use an example application called "app" to complete the labs.
Kubernetes is an open source project (available on kubernetes.io) which can run on many different environments, from laptops to high-availability multi-node clusters, from public clouds to on-premise deployments, from virtual machines to bare metal.
For the purpose of this codelab, using a managed environment such as Google Container Engine (a Google-hosted version of Kubernetes running on Compute Engine) will allow you to focus more on experiencing Kubernetes rather than setting up the underlying infrastructure.
If you don't already have a Google Account (Gmail or Google Apps), you must create one. Sign in to the Google Cloud Platform console (console.cloud.google.com) and create a new project:
Remember the project ID, a unique name across all Google Cloud projects (the name above has already been taken and will not work for you, sorry!). It will be referred to later in this codelab as PROJECT_ID.
Next, you'll need to enable billing in the Developers Console in order to use Google Cloud resources and enable the Container Engine API.
Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running (see "cleanup" section at the end of this document). Google Container Engine pricing is documented here.
New users of Google Cloud Platform are eligible for a $300 free trial.
While Google Cloud and Kubernetes can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.
This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).
To activate Google Cloud Shell, from the developer console simply click the button on the top right-hand side (it should only take a few moments to provision and connect to the environment):
Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:
gcloud auth list
Credentialed accounts: - <myaccount>@<mydomain>.com (active)
gcloud config list project
[core] project = <PROJECT_ID>
If for some reason the project is not set, simply issue the following command:
gcloud config set project <PROJECT_ID>
Looking for your PROJECT_ID? Check out what ID you used in the setup steps or look it up in the console dashboard:
IMPORTANT: Finally, set the default zone and project configuration:
gcloud config set compute/zone us-central1-f
You can choose a variety of different zones. Learn more in the Regions & Zones documentation.
In this course we'll be using a hosted version of Kubernetes, called Google Container Engine. Follow this link to enable the Container Engine API.
In the cloud shell environment type the following command to set the zone:
$ gcloud config set compute/zone us-central1-b
After the zone is set, we'll start up a cluster for use in this codelab.
$ gcloud container clusters create io
Clone the GitHub repository from the command line:
$ git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git
$ cd orchestrate-with-kubernetes/kubernetes
The sample has the following layout:
deployments/  /* Deployment manifests */
  ...
nginx/        /* nginx config files */
  ...
pods/         /* Pod manifests */
  ...
services/     /* Services manifests */
  ...
tls/          /* TLS certificates */
  ...
cleanup.sh    /* Cleanup script */
Now that you have the code -- it's time to give Kubernetes a try!
The easiest way to get started with Kubernetes is to use the kubectl run command.
Let's use the kubectl run command to launch a single instance of the nginx container:
$ kubectl run nginx --image=nginx:1.10.0
And you see Kubernetes has created what is called a deployment -- we'll explain more about deployments later, but for now all you need to know is that deployments keep our pods up and running even when the nodes they run on fail.
In Kubernetes, all containers run in what's called a pod. Use the kubectl get pods command to view the running nginx container.
$ kubectl get pods
Now that the nginx container is running we can expose it outside of Kubernetes using the kubectl expose command.
$ kubectl expose deployment nginx --port 80 --type LoadBalancer
So what just happened? Behind the scenes Kubernetes created an external Load Balancer with a public IP address attached to it. Any client who hits that public IP address will be routed to the pods behind the service. In this case that would be the nginx pod.
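Under the hood, kubectl expose generated a Service object for you. As a rough sketch (assuming kubectl run labeled the nginx pods with its default `run=nginx` label), the equivalent manifest would look something like this; it is illustrative, not one of the codelab's files:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  # Selector assumed from kubectl run's default "run=nginx" label
  selector:
    run: nginx
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  # LoadBalancer asks the cloud provider for an external load balancer
  # with a public IP that routes to the matching pods
  type: LoadBalancer
```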
If we list our services now...
$ kubectl get services
We'll see that we have an External IP that we can use to hit the nginx container remotely.
$ curl http://<External IP>:80
And there you go! Kubernetes supports an easy to use workflow out of the box using the kubectl run and expose commands.
Now that you've seen a quick tour of Kubernetes, it's time to dive into each of the components and abstractions.
At the core of Kubernetes is the Pod.
Pods represent a logical application.
Pods represent and hold a collection of one or more containers. Generally, if you have multiple containers with a hard dependency on each other, they would be packaged inside of a single pod.
In our example you can see that we have a pod that contains the monolith and nginx containers.
Pods also have Volumes. Volumes are data disks that live as long as the pods lives -- and can be used by the containers in that pod. This is possible because pods provide a shared namespace for their contents. This means that the two containers inside of our example pod can communicate with each other. And they also share the attached volumes.
Pods also share a network namespace. This means that each pod gets one IP address, shared by all of its containers.
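To make this concrete, here is a hypothetical manifest (a sketch, not one of the codelab's files) for a pod like the example above, with two containers sharing a volume and the pod's single IP:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    # Both containers share the pod's network namespace (one IP)
    # and can mount the same volume.
    - name: monolith
      image: kelseyhightower/monolith:1.0.0
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: nginx
      image: nginx:1.10.0
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    # An emptyDir volume lives exactly as long as the pod does
    - name: shared-data
      emptyDir: {}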
Let's take a deeper dive into pods now.
Pods can be created using pod configuration files. Let's take a moment to explore the monolith pod configuration file:
$ cat pods/monolith.yaml
apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      image: kelseyhightower/monolith:1.0.0
      args:
        - "-http=0.0.0.0:80"
        - "-health=0.0.0.0:81"
        - "-secret=secret"
      ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
      resources:
        limits:
          cpu: 0.2
          memory: "10Mi"
There are a few things to notice here. You'll see that our pod is made up of one container (the monolith). You can also see that we're passing a few arguments to our container when it starts up. Lastly, we're opening up port 80 for http traffic.
Create the monolith pod using kubectl:
$ kubectl create -f pods/monolith.yaml
Let's examine our pods. Use the kubectl get pods command to list all pods running in the default namespace.
$ kubectl get pods
Once the pod is running, use the kubectl describe command to get more information about the monolith pod.
$ kubectl describe pods monolith
You'll see a lot of information about the monolith pod, including the Pod IP address and the event log. This information will come in handy when troubleshooting.
As you can see, Kubernetes makes it easy to create pods by describing them in configuration files and to view information about them when they are running. At this point you have the ability to create all the pods your deployment requires!
Pods are allocated a private IP address by default and cannot be reached outside of the cluster. Use the kubectl port-forward command to map a local port to a port inside the monolith pod.
Use two terminals. One to run the kubectl port-forward command, and the other to issue curl commands.
Run the following command to set up port-forwarding:
$ kubectl port-forward monolith 10080:80
Now we can start talking to our pod using curl:
$ curl http://127.0.0.1:10080
Yes! We got a very friendly "hello" back from our container. Now let's see what happens when we hit a secure endpoint.
$ curl http://127.0.0.1:10080/secure
Uh oh. Let's try logging in to get an auth token back from our Monolith. At the login prompt, use the super-secret password "password" to login.
$ curl -u user http://127.0.0.1:10080/login
Logging in caused a JWT token to be printed out. We'll copy the token and use it to hit our secure endpoint with curl.
$ TOKEN=$(curl http://127.0.0.1:10080/login -u user|jq -r '.token')
$ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure
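That one-liner depends only on the shape of the login response: a JSON object with a token field. As a standalone sketch (no cluster needed, and the token below is made up), the extraction can be reproduced against a mocked response; sed stands in for jq here only so the sketch runs without extra dependencies:

```shell
# Mocked /login response; in the codelab this comes from curl.
RESPONSE='{"token":"fake.jwt.token"}'

# Pull out the token field (the codelab uses jq -r '.token' for this).
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"token" *: *"\([^"]*\)".*/\1/p')

# The token is then presented as a Bearer credential:
printf 'Authorization: Bearer %s\n' "$TOKEN"
# → Authorization: Bearer fake.jwt.token
```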
At this point we should get a response back from our application, letting us know everything is right in the world again!
Use the kubectl logs command to view the logs for the monolith Pod.
$ kubectl logs monolith
Let's open another terminal and use the -f flag to get a stream of the logs happening in real-time!
$ kubectl logs -f monolith
Now if you use curl to interact with the monolith, you can see the logs updating (back in the terminal running kubectl logs -f).
$ curl http://127.0.0.1:10080
We can use the kubectl exec command to run an interactive shell inside the monolith Pod. This can come in handy when you want to troubleshoot from within a container:
$ kubectl exec monolith --stdin --tty -c monolith /bin/sh
For example, once we have a shell into the monolith container we can test external connectivity using the ping command:
# ping -c 3 google.com
When you're done with the interactive shell, be sure to log out.
As you can see, interacting with pods is as easy as using the kubectl command. If you need to hit a container remotely or get a login shell, Kubernetes provides everything you need to get up and going.
Pods aren't meant to be persistent. They can be stopped or started for many reasons -- like failed liveness or readiness checks -- and this leads to a problem:
What happens if we want to communicate with a set of Pods? When they get restarted they might have a different IP address.
That's where Services come in.
Services provide stable endpoints for Pods.
Services use labels to determine what Pods they operate on. If Pods have the correct labels, they are automatically picked up and exposed by our services.
The level of access a service provides to a set of pods depends on the Service's type. Currently there are three types:
ClusterIP (internal) -- the default type means that this Service is only visible inside of the cluster,
NodePort gives each node in the cluster an externally accessible IP and
LoadBalancer adds a load balancer from the cloud provider which forwards traffic from the service to Nodes within it.
It's time for you to learn how to:
Before we can create our services, let's first create a secure pod that can handle https traffic.
If you've changed directories, make sure you return to the ~/orchestrate-with-kubernetes/kubernetes directory:
$ cd ~/orchestrate-with-kubernetes/kubernetes
Explore the monolith service configuration file:
$ cat pods/secure-monolith.yaml
Create the secure-monolith pod and its configuration data:
$ kubectl create secret generic tls-certs --from-file tls/
$ kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf
$ kubectl create -f pods/secure-monolith.yaml
Now that we have a secure pod, it's time to expose the secure-monolith Pod externally and to do that we'll create a Kubernetes service.
Explore the monolith service configuration file:
$ cat services/monolith.yaml
kind: Service
apiVersion: v1
metadata:
  name: "monolith"
spec:
  selector:
    app: "monolith"
    secure: "enabled"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
      nodePort: 31000
  type: NodePort
Things to note:
1. we've got a selector which is used to automatically find and expose any pods with the labels "app=monolith" and "secure=enabled"
2. we expose a nodePort here because this is how we'll forward external traffic from port 31000 to nginx (on port 443)
Use the kubectl create command to create the monolith service from the monolith service configuration file:
$ kubectl create -f services/monolith.yaml
You have exposed your service on an external port on all nodes in your cluster.
If you want to expose this service to the external internet, you may need to
set up firewall rules for the service port(s) (tcp:31000) to serve traffic.
See http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md for more details.
service "monolith" created
This output is telling you that you're using a port to expose the service. This means that it's possible to have port collisions if another app tries to bind to port 31000 on one of your servers.
Normally, Kubernetes would handle this port assignment for us -- in this codelab we chose one so that it's easier to configure health checks, later on.
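For comparison, a hypothetical ports section that lets Kubernetes pick the port instead (a sketch, not one of the codelab's files): when nodePort is omitted, Kubernetes assigns a free port from its NodePort range (30000-32767 by default), which avoids collisions but means you have to look the port up afterwards.

```yaml
spec:
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
      # nodePort omitted: Kubernetes assigns one from 30000-32767
  type: NodePort
```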
Use the gcloud compute firewall-rules command to allow traffic to the monolith service on the exposed nodeport:
$ gcloud compute firewall-rules create allow-monolith-nodeport \
    --allow=tcp:31000
Now that everything is set up, we should be able to hit the secure-monolith service from outside the cluster without using port forwarding. First, let's get an IP address for one of our nodes, and then try hitting the secure-monolith service using curl.
$ gcloud compute instances list
$ curl -k https://<EXTERNAL_IP>:31000
Uh oh! That timed out. What's going wrong?
Currently the monolith service does not have any endpoints. One way to troubleshoot an issue like this is to use the kubectl get pods command with a label query.
$ kubectl get pods -l "app=monolith"
We can see that we have quite a few pods running with the monolith label.
But what about "app=monolith" and "secure=enabled"?
$ kubectl get pods -l "app=monolith,secure=enabled"
Notice this label query does not print any results.
It seems like we need to add the "secure=enabled" label to the secure-monolith pod.
We can use the kubectl label command to add the missing secure=enabled label to the secure-monolith Pod. Afterwards, we can check and see that our labels have been updated.
$ kubectl label pods secure-monolith 'secure=enabled'
$ kubectl get pods secure-monolith --show-labels
Now that our pods are correctly labeled, let's view the list of endpoints on the monolith service:
$ kubectl describe services monolith | grep Endpoints
And we have one!
Let's test this out by hitting one of our nodes again.
$ gcloud compute instances list
$ curl -k https://<EXTERNAL_IP>:31000
Bam! Houston, we have contact.
The goal of this codelab is to get you ready for scaling and managing containers in production.
And that's where Deployments come in. Deployments are a declarative way to ensure that the number of Pods running is equal to the desired number of Pods, specified by the user.
The main benefit of Deployments is in abstracting away the low level details of managing Pods. Behind the scenes Deployments use Replica Sets to manage starting and stopping the Pods. If Pods need to be updated or scaled, the Deployment will handle that. Deployments also handle restarting Pods if they happen to go down for some reason.
Let's look at a quick example:
Pods are tied to the lifetime of the Node they are created on. In the picture above, Node3 went down (taking a Pod with it). Instead of manually creating a new Pod and finding a Node for it, our Deployment created a new Pod and started it on Node2.
And that's pretty cool!
It's time to combine everything we learned about Pods and Services to break up the monolith application from earlier into smaller Services using Deployments.
We're going to break our monolith app into three separate pieces:
We are ready to create deployments, one for each service. Afterwards, we'll define internal services for the auth and hello deployments and an external service for the frontend deployment. Once we're finished we'll be able to interact with our microservices just like we did with our Monolith -- but now each piece will be able to be scaled and deployed independently!
Let's get started by examining the auth deployment configuration file.
$ cat deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
        - name: auth
          image: "kelseyhightower/auth:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
...
Notice how our deployment is creating 1 replica and we're using version 1.0.0 of the auth container.
When we run the kubectl create command to create the auth deployment it will make one pod that conforms to the data in the Deployment manifest. This means we can scale the number of Pods by changing the number specified in the Replicas field.
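For example, a hypothetical edit to the auth manifest that would scale it to three pods just changes that one field (the rest of the file stays as shown above; you would then apply the change, e.g. with kubectl apply):

```yaml
spec:
  replicas: 3   # was 1; the Deployment starts two more auth pods
```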
Anyway, let's go ahead and create our deployment object:
$ kubectl create -f deployments/auth.yaml
It's time to create a service for our auth deployment. You've already seen service manifest files, so I won't go into the details here. Use the kubectl create command to create the auth service.
$ kubectl create -f services/auth.yaml
Now we'll do the same thing to create and expose the hello Deployment:
$ kubectl create -f deployments/hello.yaml
$ kubectl create -f services/hello.yaml
And one more time to create and expose the frontend Deployment.
$ kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
$ kubectl create -f deployments/frontend.yaml
$ kubectl create -f services/frontend.yaml
Interact with the frontend by grabbing its external IP and then curling it.
$ kubectl get services frontend
$ curl -k https://<EXTERNAL-IP>
And we get our hello response back!
Congratulations! You've deployed a multi-service application using Kubernetes. The skills you've learned here will allow you to deploy complex applications on Kubernetes using a collection of deployments and services.
Time for some cleaning of the resources used (to save on cost and to be a good cloud citizen).
We've included a cleanup script to simplify this. Be sure to check out what the script is doing.
$ cat cleanup.sh
$ chmod +x cleanup.sh
$ ./cleanup.sh
$ gcloud container clusters delete io
Of course, you can also delete the entire project, but you would lose any billing setup you have done (disabling project billing first is required). Additionally, deleting a project will only take effect after the current billing cycle ends.
This concludes this simple getting started codelab with Kubernetes.
We've only scratched the surface of this technology and we encourage you to explore further with your own pods, replication controllers, and services but also to check out liveness probes (health checks) and consider using the Kubernetes API directly.
Here are some follow-up steps: