1. Welcome
Thanks for visiting the Knative codelab by Google. This introduction to Knative is designed to give you an idea of what Knative does, how you use the Knative API to deploy applications, and how it relates to Kubernetes, within 1-2 hours. This codelab requires beginner-level hands-on experience with Kubernetes, such as concepts like Deployments and Pods, and using the "kubectl" command-line tool.
What you will learn
- How to create a Kubernetes cluster on Google Kubernetes Engine (GKE)
- How to install Knative on a Kubernetes cluster
- How to deploy a web application from source to Knative
- How to autoscale applications from 0-to-1, 1-to-N, and back to 0
- The Knative Serving API types and the relationships between them
- How to roll out new versions (blue/green deployments) with the Knative Serving API
- The Knative Build API and why it's useful
- How to build and push container images inside a Kubernetes cluster
- How to use custom builders and build templates
2. Getting set up
You can follow this codelab on either:
- Google Cloud Shell (recommended): in-browser shell, comes with tools installed
- your laptop (follow the instructions below)
Start with Google Cloud Platform
- Log in to your Google Cloud Platform account.
- Go to Google Cloud Console and click "Select a project":
- Make a note of the "ID" of the project somewhere, then click on the project to choose it:
Option 1: Use Google Cloud Shell (recommended)
Cloud Shell provides a command-line shell inside your browser with the tools you need installed and automatically authenticated to your Google Cloud Platform account. (If you don't wish to run this exercise on Cloud Shell, skip to the next section.)
Go to Cloud Console and click "Activate Cloud Shell" on the top right toolbar:
Some quick tips that can make it easier to use Cloud Shell:
1. Detach the shell into a new window.
2. Use the file editor: click the pencil icon on the top right to launch an in-browser file editor. You will find this useful, as we will copy code snippets into files.
3. Start new tabs if you need more than one terminal prompt.
4. Make the text larger: the default font size on Cloud Shell can be too small to read. Use Ctrl-+ on Linux/Windows or ⌘-+ on macOS.
Option 2: Set up your laptop (not recommended)
If you feel more comfortable using your own workstation environment than Cloud Shell, set up the following tools:
- Install gcloud: (pre-installed on Cloud Shell) follow the instructions to install gcloud on your platform. We will use this to create a Kubernetes cluster.
- Install kubectl: (pre-installed on Cloud Shell) run the following command to install it:
gcloud components install kubectl
Run the following command to authenticate gcloud. It will ask you to log in with your Google account. Then, choose the pre-created project (seen above) as the default project. (You can skip configuring a compute zone):
gcloud init
- Install curl: pre-installed on most Linux/macOS systems. You probably have it already.
3. Create a Kubernetes cluster
Knative is installed as a set of custom APIs and controllers on Kubernetes. You can easily create a managed Kubernetes cluster with Google Kubernetes Engine (GKE) and have Google operate the cluster and its autoscaling for you.
First, enable Google APIs required to use Kubernetes Engine and Google Container Registry:
gcloud services enable \
  cloudapis.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com
The following command will create a Kubernetes cluster:
- named "knative",
- in us-central1-b zone,
- with latest Kubernetes version available,
- machine type is "n1-standard-4" (4 CPU cores, 15 GB memory),
- with 3 initial nodes, that autoscale to minimum 1, and maximum 5:
gcloud container clusters create knative \
  --zone=us-central1-b \
  --cluster-version=latest \
  --num-nodes=3 \
  --machine-type=n1-standard-4 \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --enable-autorepair \
  --scopes=service-control,service-management,compute-rw,storage-ro,cloud-platform,logging-write,monitoring-write,pubsub,datastore
(This may take around 5 minutes. You can watch the cluster being created at Cloud Console.)
After the Kubernetes cluster is created, gcloud automatically configures kubectl with the credentials of the cluster. You should be able to use kubectl with your new cluster now.
Run the following command to list Kubernetes nodes of your cluster (they should show status "Ready"):
kubectl get nodes
Then, give your account the cluster-admin role on the cluster (necessary to install Knative):
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)
Now you have a fully-provisioned Kubernetes cluster running in GCP, and you're ready to install Knative on it!
4. Introduction to Knative
What is Knative?
Knative is a set of open-source components and custom APIs installed on Kubernetes.
Knative makes it possible to:
- Deploy and serve applications with a higher-level and easier to understand API. These applications automatically scale from zero-to-N, and back to zero, based on requests.
- Build and package your application code inside the cluster.
- Deliver events to your application. You can define custom event sources and declare subscriptions between event buses and your applications.
This is why Knative provides a developer experience similar to serverless platforms.
You can read the documentation at https://github.com/knative/docs.
Knative is still Kubernetes
If you deployed applications with Kubernetes before, Knative will feel familiar to you. You will still write YAML manifest files and deploy container images on a Kubernetes cluster.
5. Who is Knative for?
Knative APIs
Kubernetes offers a feature called Custom Resource Definitions (CRDs). With CRDs, third party Kubernetes controllers like Istio or Knative can install more APIs into Kubernetes.
Knative consists of three families of custom resource APIs:
- Knative Serving: Set of APIs that help you host applications that serve traffic. Provides features like custom routing and autoscaling.
- Knative Build: Set of APIs that allow you to execute builds (arbitrary transformations on source code) inside the cluster. For example, you can use Knative Build to compile an app into a container image, then push the image to a registry.
- Knative Eventing: Set of APIs that let you declare event sources and event delivery to your applications. (Not covered in this codelab due to time constraints.)
Together, the Knative Serving, Build and Eventing APIs provide a common set of middleware for Kubernetes applications. We will use these APIs to build and run applications.
Is Knative for me?
Knative serves two main audiences:
1. I want to deploy to Kubernetes more easily:
- Knative makes it easy to declare an application that auto-scales, without worrying about container parameters like CPU and memory, or concerns like activation/deactivation.
- You can go from code in a repository to an app running on Knative very easily.
2. I want to build my own PaaS/FaaS on Kubernetes:
- You can use these Knative components and APIs to build a custom deployment platform that looks like Heroku or AWS Lambda at your company.
- Knative Serving has many valuable "plumbing" components, like the autoscaler, request-based activation, and telemetry.
- Knative Build lets you declare transformations on the source code, like converting functions to apps, and apps to containers.
- You don't have to reinvent the wheel, can reuse plumbing components offered by Knative.
Knative principles
- Knative is native to Kubernetes (APIs are hosted on Kubernetes, deployment unit is container images)
- You can install/use parts of Knative independently (e.g. only Knative Build, to do in-cluster builds)
- Knative components are pluggable (e.g. don't like the autoscaler? write your own)
6. Installing Knative
Knative is a set of custom Kubernetes API registrations (a.k.a. Custom Resource Definitions, CRDs) and controllers running on a Kubernetes cluster.
The following instructions will install Knative v0.2.1 to the Kubernetes cluster. (Refer to the latest documentation for more up-to-date installation instructions.)
- Install Istio: Knative uses Istio for configuring networking and request-based routing.
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/istio-1.0.2/istio.yaml
- Activate Istio on the "default" Kubernetes namespace: this automatically injects an Istio proxy sidecar container into all pods deployed to the "default" namespace.
kubectl label namespace default istio-injection=enabled
- Wait until Istio installation is complete (all pods become "Running" or "Completed").
kubectl get pods --namespace=istio-system
- Install Knative Build & Serving:
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
- Wait until the Knative Serving & Build installation is complete (all pods become "Running" or "Completed"); you may need to run these commands a few times:
kubectl get pods --namespace=knative-serving
kubectl get pods --namespace=knative-build
Knative is now installed on your cluster!
7. Your first Knative application
To run an application with Knative on a Kubernetes cluster and expose it to the public internet, you need:
- an application packaged as a container image
- a Knative Service manifest file
Service definition
To expose an application on Knative, you need to define a Service object. (This is different from the Kubernetes Service type, which helps you set up load balancing for Pods.)
Save the following into a file named helloworld.yaml:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: "helloworld"
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: "gcr.io/knative-samples/helloworld-go"
env:
- name: "TARGET"
value: "world"
This Knative Service example uses the container image gcr.io/knative-samples/helloworld-go, which is a Go web application listening on port 8080 (currently the port number required by Knative).
Deploy it:
kubectl apply -f helloworld.yaml
Verify it's deployed by querying "ksvc" (Knative Service) objects:
$ kubectl get ksvc
NAME         CREATED AT
helloworld   32s
Make a request
External requests to Knative applications in a cluster go through a single public load balancer called knative-ingressgateway, which has a public IP address.
Find the public IP address of the gateway (make a note of the EXTERNAL-IP field in the output) by running:
kubectl get service --namespace=istio-system knative-ingressgateway
Find the hostname of the application:
kubectl get ksvc helloworld --output jsonpath='{.status.domain}'
The hostname of the application should be helloworld.default.example.com.
Now, use curl to make the first request to this application (replace IP_ADDRESS below with the gateway's external IP address you found earlier):
curl -H "Host: helloworld.default.example.com" http://IP_ADDRESS
Hello world!
After you make a request to the helloworld Service, you will see that a Pod is created on the Kubernetes cluster to serve the request. Query the list of Pods deployed:
kubectl get pods
NAME                                     READY   STATUS    AGE
helloworld-00001-deployment-58b8b4d79b   3/3     Running   1m
You've just deployed a very simple working application to Kubernetes with Knative! The next section explains what happened under the covers.
8. Introduction to Knative Serving API
When you deploy the helloworld Service to Knative, it creates three kinds of objects: Configuration, Route, and Revision:
kubectl get configuration,revision,route
NAME                                           CREATED AT
configuration.serving.knative.dev/helloworld   28m

NAME                                            CREATED AT
revision.serving.knative.dev/helloworld-00001   28m

NAME                                   CREATED AT
route.serving.knative.dev/helloworld   28m
Here's what each of these Serving APIs do:
| Service | Describes an application on Knative. |
| Revision | Read-only snapshot of an application's image and other settings (created by Configuration). |
| Configuration | Created by Service (from its revisionTemplate); creates a new Revision each time it is updated. |
| Route | Configures how the traffic coming to the Service should be split between Revisions. |
The relationship between them: a Service owns a Configuration and a Route; the Configuration creates Revisions, and the Route splits traffic between those Revisions.
You can read more about how these objects work if you are interested.
9. Serving multiple versions simultaneously
The helloworld Service had a spec.runLatest field, which serves all the traffic to the latest revision created from the Service's revisionTemplate field.
To test out the effects of a new version of your application, you will need to run multiple versions of your applications and route a portion of your traffic to the new "canary" version you are testing. This practice is called "blue-green deployment".
Knative Serving offers the Revision API, which tracks the changes to application configuration, and the Route API, which lets you split the traffic between multiple revisions.
In this exercise you will:
- Deploy a "blue" Service version in runLatest mode.
- Update the Service with the "green" configuration and change the mode to release to split traffic between two revisions.
Deploying the v1
To try out a blue-green deployment, first you will need to deploy a "blue" version.
Services in runLatest mode send all the traffic to the Revision specified in the Service manifest. In the earlier helloworld example, you used a service in runLatest mode.
First, deploy the v1 (blue) version of the Service in runLatest mode by saving the manifest to a file named v1.yaml, and apply it to the cluster:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: canary
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: gcr.io/knative-samples/knative-route-demo:blue
env:
- name: T_VERSION
value: "blue"
kubectl apply -f v1.yaml
Query the deployed revision name (it should be canary-00001):
kubectl get revisions
NAME           CREATED AT
canary-00001   39s
Make a request and observe the blue version by replacing IP_ADDRESS below with the gateway's IP address (the first request may take some time to complete as it starts the Pod):
curl -H "Host: canary.default.example.com" http://IP_ADDRESS
...
<div class="blue">App v1</div>
...
Deploying the v2
The Knative Service API has a release mode that lets you roll out changes to new revisions with traffic splitting.
Make a copy of v1.yaml named v2.yaml:
cp v1.yaml v2.yaml
Make the following changes to v2.yaml:
- change the runLatest mode to release
- change blue to green in the "image" and "env" fields
- add a revisions field with the [current, current+1] revision names
- specify a rolloutPercent field, routing 20% of traffic to the candidate ("green") revision
The resulting v2.yaml should look like the following snippet. Save and apply it to the cluster:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: canary
spec:
release:
revisions: ["canary-00001", "canary-00002"] # [current, candidate]
rolloutPercent: 20 # 20% to green revision
configuration:
revisionTemplate:
spec:
container:
image: gcr.io/knative-samples/knative-route-demo:green
env:
- name: T_VERSION
value: "green"
kubectl apply -f v2.yaml
You should now see the new revision created, while the old one is still around:
kubectl get revisions
NAME           CREATED AT
canary-00001   6m
canary-00002   3m
Now, make a few requests and observe that the response is served from the new "green" version roughly 20% of the time (replace IP_ADDRESS below):
while true; do curl -s -H "Host: canary.default.example.com" http://IP_ADDRESS | grep -E 'blue|green'; done
<div class="blue">App v1</div>
<div class="blue">App v1</div>
<div class="green">App v2</div>
<div class="blue">App v1</div>
<div class="blue">App v1</div>
<div class="green">App v2</div>
<div class="blue">App v1</div>
<div class="blue">App v1</div>
<div class="green">App v2</div>
<div class="blue">App v1</div>
...
The rolloutPercent determines what portion of the traffic the candidate revision gets. If you set this field to 0, the candidate revision will not get any traffic.
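As a quick sanity check on the split, a rolloutPercent of 20 means the candidate should receive about one fifth of the traffic. A minimal sketch in plain shell arithmetic (the request count of 1000 is just a hypothetical number for illustration):

```shell
rollout=20      # rolloutPercent value from v2.yaml
requests=1000   # hypothetical total request count
# Expected number of requests routed to the candidate ("green") revision
echo $(( requests * rollout / 100 ))   # prints 200
```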
If you want to play with the percentages, you can edit v2.yaml and re-apply it to the cluster.
With the Service configured in release mode, you can also connect to specific revisions through their dedicated addresses:
- current.canary.default.example.com
- candidate.canary.default.example.com
- latest.canary.default.example.com (the most recently deployed Revision, even if it is not specified in the revisions field)
After the Service is configured with the release mode, you should see the Route object configured with the traffic splitting (20% to "candidate", 80% to "current"):
kubectl describe route canary
...
Status:
  Traffic:
    Name:           current
    Percent:        80
    Revision Name:  canary-00001
    Name:           candidate
    Percent:        20
    Revision Name:  canary-00002
    Name:           latest
    Percent:        0
    Revision Name:  canary-00002
As you roll out changes to the Service, you need to find the new Revision name each time and specify it in the revisions field as the candidate.
Great job! You just used the Knative Serving API to create a blue-green deployment.
Recap
The Knative Service object has a release mode that lets you manage the lifecycle of Revisions and configure Routes to split traffic between old and new deployments.
10. Autoscaling applications with Knative
In this example, you will deploy an application, send some artificial request load to it, watch Knative scale up the number of Pods serving the traffic, and look at the monitoring dashboard to see why the autoscaling happened.
Deploy the application
The following manifest describes an application on Knative where we can configure how long each request takes. Save it to autoscale-go.yaml, and apply it to the cluster:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: autoscale-go
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: "gcr.io/knative-samples/autoscale-go:0.1"
kubectl apply -f autoscale-go.yaml
Now, find the public IP address of the Knative gateway and save it to the IP_ADDRESS variable in your shell:
IP_ADDRESS="$(kubectl get service knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}")"
Make a request to this application to verify you can connect (note that the response indicates the request took 1 second):
curl --header "Host: autoscale-go.default.example.com" \
  "http://${IP_ADDRESS?}?sleep=1000"
Launch monitoring dashboard
By default, the Knative installation includes Prometheus to collect metrics about requests, applications and autoscaling, and exports these metrics to Grafana dashboards for viewing.
To connect to the Grafana dashboard on your cluster, open a new separate terminal and keep the following command running:
kubectl port-forward --namespace knative-monitoring \
  $(kubectl get pods --namespace knative-monitoring \
  --selector=app=grafana --output=jsonpath="{.items..metadata.name}") \
  8080:3000
This command exposes the Grafana server on localhost:8080 on the Cloud Shell machine. To access it from your browser, click "Preview on port 8080" in Cloud Shell. This will launch a new browser tab connected to the Grafana dashboard.
To view the autoscaling dashboard, follow the steps:
- Click "Home" on top right to view dashboards.
- Choose "Knative Serving - Scaling Debugging" dashboard.
- Click the time settings on the top right, choose "last 5 minutes", then choose "refresh every 10 seconds", then click Apply.
- In the main panel, choose "Configuration" as "autoscale-go".
- Expand the autoscaler metrics.
- You should be seeing graphs for "Pod Counts" and "Observed Concurrency".
Keep this dashboard window and the kubectl port-forward command running; next, you will send some request load to the application.
Triggering autoscaling
In this step, we will send some artificial load through a load generator. Download the load generator named hey using the go tool:
go get github.com/rakyll/hey
Now, use hey to send 150,000 requests (with 500 requests in parallel), each taking 1 second. Leave this command running, as it will take a while to complete:
hey -host autoscale-go.default.example.com -c 500 -n 150000 \
  "http://${IP_ADDRESS?}?sleep=1000"
Meanwhile, open a new terminal window and keep an eye on the number of pods.
watch kubectl get pods

NAME                                            READY   STATUS    RESTARTS   AGE
autoscale-go-00002-deployment-6988bf78b-5j8tm   3/3     Running   0          4m
autoscale-go-00002-deployment-6988bf78b-m9v6b   3/3     Running   0          2m
autoscale-go-00002-deployment-6988bf78b-np8jc   3/3     Running   0          2m
autoscale-go-00002-deployment-6988bf78b-npg9n   3/3     Running   0          2m
autoscale-go-00002-deployment-6988bf78b-sp6rx   3/3     Running   0          2m
Knative Serving, by default, has a concurrent-requests target of 100 per Pod. Sending 500 concurrent requests causes the autoscaler to determine that it needs 5 Pods to satisfy this load.
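The arithmetic behind this is a simple ceiling division of the observed concurrency by the per-Pod target. A rough sketch in shell (this mirrors the idea, not the autoscaler's exact algorithm, which averages concurrency over time windows):

```shell
target=100       # Knative's default per-Pod concurrency target
concurrent=500   # concurrent requests generated by hey
# Ceiling division: Pods needed to keep each Pod at or under the target
echo $(( (concurrent + target - 1) / target ))   # prints 5
```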
Go back to the Grafana dashboard and observe that the number of Pods has increased from 1 to 5.
Similarly, on the Grafana dashboard, you can see that the observed concurrency level briefly peaks and, as Knative creates more Pods, comes back down below 100 (the default concurrency target).
You can close the Grafana window, and stop the hey and kubectl port-forward commands after observing the autoscaling.
11. Introduction to Knative Build
Knative Build APIs and components let you run custom tasks on your application's source code. Examples of these actions could be:
- compiling a program (e.g. go build)
- building a Docker image (docker build)
- pushing a Docker image (docker push)
Each build "step" is a container image, and you can choose the order in which these steps are executed.
With Knative Build:
- you can declare build operations sequentially, just like in any other build system (such as Travis CI or Jenkins)
- each build operation runs as a container, from a container image you specify
- each of these containers runs one after another in the same Kubernetes Pod
- since they are in the same Pod, you can share state between steps
This way, Knative allows you to:
- bring your own build process
- execute builds inside the cluster as containers
- (you can build docker images inside Kubernetes Pods!)
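The shared-state idea above can be illustrated with a local shell sketch: two sequential "steps" operating on the same workspace directory, much like Knative Build steps share the Pod's /workspace volume (this is only a simulation; real build steps are separate containers in one Pod):

```shell
# Simulate two sequential build steps sharing one workspace directory.
workspace=$(mktemp -d)

# "Step 1": fetch sources (writing a file stands in for a git clone)
echo "package main" > "$workspace/main.go"

# "Step 2": sees the file written by step 1, because the workspace persists
cat "$workspace/main.go"   # prints: package main
```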
Build API
The Knative Build API introduces the following types:
- BuildTemplate: declares a set of ordered build steps.
- Build: represents a single build job. It can either list the "steps" or refer to a BuildTemplate.
Authentication to source code repositories
Knative Build can fetch code from public repos without any configuration.
For private repos, Knative Build can authenticate to and fetch code from Git repositories using:
- SSH (with an SSH private key), or
- HTTP (basic auth with a username and password)
To authenticate, you just need to create a Secret and a ServiceAccount (Kubernetes objects) in a specific way. (We will not cover this topic.)
12. Your first Knative Build
In this section, we will build an application using code from a public Git repository into a container image, then push the resulting image to a container registry.
To start a build job on Knative, you need to submit a Build object.
View the source code
We will build the helloworld-go application you deployed earlier. The source of this application is available here. Specifically, we will build:
- the serving/samples/helloworld-go/ directory in the repository
- at the branch named v0.1.x (this could also be a git commit hash)
Design the build steps
Normally, we would want to have two steps in this build process:
- Build a container image from the provided Dockerfile.
- Push the built container image to a registry.
To help with this, Kaniko is a tool (conveniently packaged as a container image) that builds a Dockerfile and pushes the resulting image to Google Container Registry (GCR) without having to set up the docker command-line tool with authentication.
We will be using the Kaniko container image as a build step to build the helloworld-go application container, and push the resulting container image.
So the only step we will run is the gcr.io/kaniko-project/executor image, with the arguments:
- --dockerfile=/workspace/Dockerfile
- --destination=gcr.io/<your-project-id>/helloworld-go:v1
In Knative Build, the /workspace directory is special: the downloaded source code is placed in the /workspace directory, and this directory is persisted between the steps of a Build.
Create a Build
The following manifest declares a 1-step Build with the Kaniko executor image to build and push the image to Google Container Registry (GCR).
Save the following file to example-build.yaml, replace <your-project-id>, then apply it to the cluster:
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
name: example-build
spec:
source:
git:
url: "https://github.com/knative/docs.git"
revision: "v0.1.x"
subPath: "serving/samples/helloworld-go/"
steps:
- name: build-and-push
image: "gcr.io/kaniko-project/executor:v0.6.0"
args:
- "--dockerfile=/workspace/Dockerfile"
- "--destination=gcr.io/<your-project-id>/helloworld-go:v1"
kubectl apply -f example-build.yaml
Viewing Build progress
After the Build is submitted to Kubernetes, a Pod will be created to execute the build steps:
kubectl get pods
NAME                  READY   STATUS     RESTARTS   AGE
example-build-2vj4r   0/1     Init:2/3   0          15s
Knative uses the init containers feature of Kubernetes, which provides a way to run containers in a Kubernetes Pod one after another (instead of starting them at the same time).
This Pod will have three init containers:
- build-step-credential-initializer: initializes Git credentials (none, in this case)
- build-step-git-source: fetches the git repository into the /workspace directory
- build-step-build-and-push: runs the Kaniko build step specified above
To view the logs from the Kaniko executor step ("build-and-push"), run the following command with the correct Pod name from above:
kubectl logs --follow --container=build-step-build-and-push example-build-2vj4r
This build may take up to 5 minutes. You should see logs like the following:
INFO[0000] Downloading base image golang
...
2018/12/04 19:01:58 pushed blob sha256:ff2ee7cb646b994a7581534a94c8b248f03d251dc744224878c1b5dbd06150ec
2018/12/04 19:01:58 gcr.io/foo-bar/helloworld-go:v1: digest: sha256:9de6a6212ac4e3a7d412bf6cda4414b617eeeaa5da17257eb5019f73eff9d597 size: 592
After the build is complete, you should see the Pod showing with Completed status:
kubectl get pods
NAME                  READY   STATUS      RESTARTS   AGE
example-build-2vj4r   0/1     Completed   0          10m
Hooray! You just used Knative Build and Kaniko to build a Docker image from a remote source, and pushed that image to Google Container Registry.
Next, we'll create a generic build template from this build step, to reuse it in other Builds.
13. Configuring reusable Build Templates
In the previous section, you defined a Build object with a steps field detailing how the build should be executed.
Often, you need to build the same application many times; creating a template out of the steps lets you reuse the same build process for different Build objects.
Build Templates
The BuildTemplate API in Knative lets you reuse a build process many times by codifying the steps and allowing you to parametrize arguments. There are two types of build templates:
- BuildTemplate: scoped to the Kubernetes namespace it is applied to
- ClusterBuildTemplate: available cluster-wide to all Kubernetes namespaces
To create a build template from the Kaniko build step used in the previous section, with the image name required as the IMAGE parameter, you can define a build template as follows.
Save this as kaniko-build-template.yaml, and apply it to the cluster:
apiVersion: build.knative.dev/v1alpha1
kind: BuildTemplate
metadata:
name: kaniko
spec:
parameters:
- name: IMAGE
description: name of the image to be tagged and pushed
steps:
- name: build-and-push
image: "gcr.io/kaniko-project/executor:v0.6.0"
args: ["--destination=${IMAGE}"]
kubectl apply -f kaniko-build-template.yaml
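The ${IMAGE} reference in the template behaves like simple string substitution: when a Build supplies an IMAGE argument, its value replaces the placeholder in the step's args. A rough local analogy using shell variable expansion (Knative performs the substitution server-side; the image name here is hypothetical):

```shell
# Hypothetical IMAGE value, as a Build would pass in its arguments
IMAGE="gcr.io/my-project/helloworld-go:v2"
# The template's step args after substitution
echo "--destination=${IMAGE}"   # prints: --destination=gcr.io/my-project/helloworld-go:v2
```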
Referencing the BuildTemplate
In the Build object used in the previous exercise, we will change the steps field to a template field referencing the kaniko BuildTemplate you just created. We will also pass the IMAGE parameter's value to the template.
The new Build manifest should look like this. Save it to templatized-build.yaml, update <your-project-id>, and apply it to the cluster:
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
name: templatized-build
spec:
source:
git:
url: "https://github.com/knative/docs.git"
revision: "v0.1.x"
subPath: "serving/samples/helloworld-go/"
template:
name: kaniko
arguments:
- name: IMAGE
value: "gcr.io/<your-project-id>/helloworld-go:v2"
kubectl apply -f templatized-build.yaml
Note the spec.template field and how the BuildTemplate is specified with per-Build arguments.
You can again use kubectl get pods and kubectl logs to monitor the progress and status of the Build:
kubectl get pods
kubectl logs -c build-step-build-and-push templatized-build-XXX
Other Knative Build features
These exercises only cover a small subset of what Knative Build can do. Other features include:
- Build Sources: In addition to git, you can also specify Google Cloud Storage buckets, or any container image as the source code input to the build.
- Private repositories: You can authenticate to private git repositories by providing basic auth credentials (username/password) or SSH keys through a Kubernetes ServiceAccount object, and specifying that ServiceAccount on the Build.
- Mounting Kubernetes Volumes: You can use Kubernetes volume types to mount extra disks or data into the build step container. This enables use cases like:
  - reading data from an external disk (e.g. a gcePersistentDisk volume)
  - mounting a Secret (containing a file, or a username/password) during a build step (secret volume)
  - providing a temporary cache directory preserved between build steps (emptyDir volume)
14. What's Next
Congratulations! You have completed the Knative codelab by Google.
Knative is a fairly new project, released in July 2018. Most parts of the project such as the API and the documentation are changing very frequently. To stay up to date, join one of the community forums below and visit the documentation for the most recent instructions.
We have not covered the Knative Eventing APIs during this codelab. If you are interested in getting events from external sources and having them delivered to your applications, read about it and play with it if you have time.
Clean up (optional)
You don't need to clean up if you're using a temporary account provided for this codelab.
By deleting your Kubernetes cluster, you delete your Knative installation and the resources associated with your cluster:
gcloud container clusters delete knative --zone=us-central1-b
Take action
- Tweet about your experience at this codelab.
- Register for GKE Serverless (Knative) add-on early-access
- Read the Knative documentation
- Join the slack.knative.dev Slack channel
- Follow @KnativeProject on twitter
- Participate in the Knative community process and working groups