1. Overview
In this lab, you'll learn about Skaffold, an open-source tool by Google that simplifies and automates container-oriented development. Skaffold supports all phases of the application delivery process, and has special features for speeding up the development inner loop. Skaffold is leveraged by continuous deployment services such as Cloud Deploy.
What you will learn
- Skaffold basics and inner development loop pipeline
- Understand the skaffold.yaml format
- Hot reloading and file syncing
- Use cases: Continuous Development, Continuous Integration with GitOps and Continuous Delivery
- Practice with Profiles, multi-configuration and unit testing
- Use Cloud Code as an interface to access Skaffold features
2. Setup and Requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can update it at any time.
- The Project ID must be unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (it is typically identified as PROJECT_ID). If you don't like the generated ID, you may generate another random one. Alternatively, you can try your own and see if it's available. It cannot be changed after this step and will remain for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, you can delete the resources you created or delete the whole project. New users of Google Cloud are eligible for the $300 USD Free Trial program.
Starting Cloud Shell Editor
This lab was designed and tested for use with Google Cloud Shell Editor. To access the Editor:
- Access your Google Cloud project at https://console.cloud.google.com.
- In the top right corner, click on the Cloud Shell Editor icon:
- A new pane will open at the bottom of your window
- Click on the Open Editor button
- The editor will open with a file explorer on the left and the editor in the central area
- A terminal pane should also be available at the bottom of the screen
- If the terminal is NOT open, use the key combination Ctrl + ` (backtick) to open a new terminal window
Starting Minikube
For most of the sections in this lab, you will need a Minikube cluster running so you can deploy applications locally.
- Go to the embedded Terminal in Cloud Shell Editor, and start minikube
minikube start
- Wait for Minikube to finish starting up. The last line of the minikube start output should be:
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
The minikube cluster is now up and running and ready to host your applications.
Getting the Skaffold code examples
Cloud Shell already has a working Skaffold version installed for you to use. As you'll be using code examples from the Skaffold repo for this lab, you'll make sure you've got a version of the examples that matches it.
- Determine the version of Skaffold, which should be of the form "v1.38.0":
skaffold version
- Clone the Skaffold repository from GitHub with the corresponding version number from step 1, and go to the examples folder:
git clone --depth 1 --branch "$(skaffold version)" https://github.com/GoogleContainerTools/skaffold && cd skaffold/examples
- Make that folder your active workspace:
cloudshell open-workspace .
Cloud Shell Editor will reload and set the active workspace to the examples directory.
- If the Terminal closed as a result of Cloud Shell Editor refreshing, open it again by pressing Ctrl + ` (backtick)
- Set your active Google Cloud Project:
gcloud config set project <your-project-id>
The Skaffold project includes many good examples that show how to use all of Skaffold's features and help you learn Skaffold along the way. Because they cover different languages and workload types, they're perfect for what you're trying to achieve here, and you'll be using them throughout this lab.
3. Getting Started with Skaffold
Skaffold
Skaffold is a command-line tool that facilitates continuous development for Kubernetes-native applications. Skaffold handles the workflow for automatically building and pushing your container images as required, and then deploying your application. These steps can be used independently and provide building blocks for creating CI/CD pipelines.
Skaffold projects usually start using a single skaffold.yaml configuration to describe how to build and deploy the application.
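For reference, here is a minimal sketch of the shape such a file typically takes (the sections are described in detail in the next steps; the image name and manifest path below are illustrative, not part of the example you're about to open):

apiVersion: skaffold/v2beta28
kind: Config
build:
  artifacts:
  - image: my-app        # image to build (illustrative name)
    context: .           # directory containing its Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml         # raw Kubernetes manifests to apply
portForward:
- resourceType: deployment
  resourceName: my-app
  port: 8080
  localPort: 9000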
Understanding the skaffold.yaml format
To start working with Skaffold and understand the basic skaffold.yaml structure, you will focus on a microservices-based application written in Go.
When you're writing a skaffold.yaml file from Cloud Code, it assists you with authoring by providing schema-based validation, diagnostics, completions, and access to the documentation. In any case, you can find the full documentation for the skaffold.yaml file on the official Skaffold site.
- Navigate to the **microservices/** folder in the lower left panel in Cloud Shell Editor and **double-click on the skaffold.yaml file to open it**.
- Review the different sections of the file.
Build
This section declares the images you want to build, called artifacts in the Skaffold world.
build:
  artifacts:
  - image: leeroy-web
    context: leeroy-web
    requires:
    - image: base
      alias: BASE
  - image: leeroy-app
    context: leeroy-app
    requires:
    - image: base
      alias: BASE
  - image: base
    context: base
In this case, these artifacts are container images. There are two images to be built, leeroy-web and leeroy-app, plus a third image called base that is used as the base for building the other two.
You can see in the skaffold.yaml file that the requires: key indicates the two images being built require the base image.
Skaffold will build in the right order:
- It will build the base image first.
- Then, it will build the two other images that rely on base.
This base image pattern is useful when you want to share a common, consistent version of the language or platform across all the application images you build, or when your company requires a standard approach to concerns like authentication or logging. By packaging these requirements in the base image, you apply them everywhere, and Skaffold builds everything in the right order.
Notice the alias attribute inside the requires subsection, containing the word BASE. To explore what it means, navigate in the Cloud Shell Editor workspace to the microservices/leeroy-web folder and double-click on the Dockerfile to open it:
ARG BASE
FROM golang:1.15 as builder
COPY web.go .
# `skaffold debug` sets SKAFFOLD_GO_GCFLAGS to disable compiler optimizations
ARG SKAFFOLD_GO_GCFLAGS
RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -o /app .

FROM $BASE
COPY --from=builder /app .
Observe the first line: you're passing an argument into the Docker build process with ARG BASE. Then, you're using this argument in the FROM $BASE instruction at the bottom of the Dockerfile. This way the argument passes from the skaffold.yaml into the Dockerfile, where it becomes the FROM instruction.
One of the reasons for using a multi-stage Dockerfile here is that while it makes sense to use a full-blown image for building the application, it's leaner and more secure to use a minimal base image for running it (smaller size, faster startup, minimal attack surface). That's what's being done here by using GCP's distroless images to hold the compiled Go binary that will be served from the Kubernetes cluster when deployed.
Deploy
This section uses raw Kubernetes manifests. In this specific case it tells Skaffold to deploy the manifests using kubectl, and where to find them by using the "*" glob.
deploy:
  kubectl:
    manifests:
    - leeroy-web/kubernetes/*
    - leeroy-app/kubernetes/*
portForward
This section describes which resources you want to be port-forwarded and which ports you want them on. Using this, Skaffold will automatically port-forward them when you run skaffold dev.
portForward:
- resourceType: deployment
  resourceName: leeroy-web
  port: 8080
  localPort: 9000
- resourceType: deployment
  resourceName: leeroy-app
  port: http
  localPort: 9001
In this case, after the app comes up you'll be able to see the main page of this application on the local port forwarded from port 8080.
Skaffold-managed Inner Loop development
You'll use the previous file to start practicing with Skaffold:
- From the microservices folder, launch the **dev** command to execute the Skaffold-managed inner development loop pipeline:
cd $HOME/skaffold/examples/microservices
skaffold dev
- Notice the information displayed in the Output pane in Cloud Shell Editor. As declared in the skaffold.yaml file you reviewed before, Skaffold is:
- Building the images in the proper order
- Deploying the manifests in the Minikube cluster
- Exposing the services in the ports you declared
- Once Skaffold has completed deploying, Cmd + click (or Ctrl + click, depending on your OS) on the URL for the leeroy-web application that you see in the output:
[...]
Starting deploy...
 - deployment.apps/leeroy-web created
 - service/leeroy-app created
 - deployment.apps/leeroy-app created
Waiting for deployments to stabilize...
 - deployment/leeroy-web is ready. [1/2 deployment(s) still pending]
 - deployment/leeroy-app is ready.
Deployments stabilized in 1.148 second
Port forwarding deployment/leeroy-app in namespace default, remote port http -> http://127.0.0.1:9002
Port forwarding deployment/leeroy-web in namespace default, remote port 8080 -> http://127.0.0.1:9003
Press Ctrl+C to exit
Watching for changes...
[leeroy-app] 2022/04/13 15:31:11 leeroy app server ready
[leeroy-web] 2022/04/13 15:31:11 leeroy web server ready
- Observe how a new tab opens rendering the text served by the web application. While it's easy to get a browser to render text, it's quite hard to deploy that into a Kubernetes cluster. This is what Skaffold has just done for you.
By launching skaffold dev, you've had Skaffold automate the whole pipeline that runs whenever a file is saved, going through these steps:
- Building the images
- Pushing the images to a container registry
- Deploying to Kubernetes
- Port forwarding the app to your local machine
Hot reloading
With skaffold dev running, you can now change the application code and see how Skaffold automatically detects the change and redeploys everything:
- In the left pane of Cloud Shell Editor, navigate to leeroy-app > app.go. Locate the text printed by the handler function, replace it with something different, for example "Hello Skaffold", and save the file.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello Skaffold!!\n")
}

func main() {
	log.Print("leeroy app server ready")
	http.HandleFunc("/", handler)
	http.ListenAndServe(":50051", nil)
}
This rebuilds just the leeroy-app container and redeploys it. Under the hood, Skaffold is using Docker and taking advantage of things like layer caching to be as efficient as possible.
- Go to the tab where the main page of the application was loaded and refresh it. You will see the new text.
- Press Ctrl + C to cancel the skaffold dev command you just launched.
- Observe how Skaffold cleans up the deployments in the cluster. This is the default behavior; if you want to leave things running, you can pass the --cleanup=false flag to skip the automatic cleanup.
Whenever you change your code, Skaffold analyzes the Dockerfile to determine which files it needs to watch. So if the Dockerfile does something specific, like copying Go files or Go binaries, Skaffold will only rebuild when those files change. This provides a developer-friendly, fast feedback loop: you get almost immediate visibility from the moment you change the code to the moment you see the results.
The same steps you've just performed from the integrated Terminal can be done straight from Cloud Code by invoking Cloud Code: Run on Kubernetes from the Command Palette.
4. Understanding Skaffold Profiles & File Sync
To learn more about Skaffold profiles and file syncing, you'll now use an application written in TypeScript.
Because TypeScript code is transpiled and run by Node.js rather than compiled into a container binary, it lets you see more interesting ways Skaffold can simplify the inner development loop and keep things simple and fast.
Again, you will explore the Skaffold definition file to understand its structure. There are some differences compared to what you saw in the previous microservices example.
Navigate to the examples/typescript/ folder in Cloud Shell Editor and open the skaffold.yaml file. Follow along as the different sections of the file are described:
Build
The image part in the build section is quite basic. The artifact is just the image name and the context, which is the directory where the Dockerfile for that image lives.
build:
  artifacts:
  - image: node-typescript-example
    context: backend
Notice how the image name specified in the image: attribute is not a fully qualified registry name, just the image name. It needs to match the image name used in the Kubernetes manifests. Run the following command to print the part of the deployment manifest spec where the image name appears, to check it:
grep -B4 -A 4 'node-typescript-example' k8s/deployment.yaml
Output:
        app: node
    spec:
      containers:
      - name: node
        image: node-typescript-example
        ports:
        - containerPort: 3000
This way Skaffold knows that the image you build is the one that needs to go into the Kubernetes manifests. Since you're building with Docker locally and Minikube shares the local Docker daemon, the image is built locally and run straight from the Docker cache, which provides some speed improvements.
When moving from the inner development loop to the outer development loop using skaffold build or skaffold run, Skaffold lets you specify an image name prefix (like gcr.io/<your-project-name>) so that you can publish the image to a private registry and use it in Kubernetes from there.
Profiles
The profiles section of the skaffold.yaml file is where you define the different ways you want Skaffold to operate depending on the Skaffold command you're passing to it.
profiles:
- name: dev
  activation:
  - command: dev
In this example there's a dev profile defined, so when skaffold dev runs it will actually be running skaffold dev -p dev, where the -p option stands for profile. Skaffold automatically activates this particular profile, and whatever configuration is associated with it, when you run the plain skaffold dev command, because the activation attribute associates the profile called dev with the skaffold command dev. In practice, this gives you a way to catalog the procedures and workflows of your development team.
There's much more you can do with profiles. Later in this lab you'll revisit them and see what other configurations can be used with them.
File Syncing
The build section of this skaffold.yaml file uses Docker for building the image, and also includes a sync: section to configure file syncing to the container:
build:
  artifacts:
  - image: node-typescript-example
    context: backend
    docker:
      buildArgs:
        ENV: development
    sync:
      manual:
      # Sync all the TypeScript files that are in the src folder
      # with the container src folder
      - src: 'src/**/*.ts'
        dest: .
File syncing is very useful with interpreted or bytecode-based languages: instead of rebuilding the image from scratch every time you change a file, you copy the changed files from the local filesystem into the running container and just restart the running process. Skaffold can take care of this for you as well.
In this case, the skaffold.yaml file sets up a manual sync, where the source and destination paths are explicitly specified. Notice the support for wildcards, where ** matches all directories. In this example, Skaffold will sync any file with the .ts extension in any subdirectory under src/ into the source folder of the destination container. Skaffold also supports an infer mode for certain builders (such as docker) that determines the destination paths from the build configuration; a sketch is shown below.
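For reference, a hedged sketch of what an inferred sync for this artifact might look like (the glob pattern is illustrative and not part of this example):

build:
  artifacts:
  - image: node-typescript-example
    context: backend
    sync:
      infer:
      # let Skaffold derive the destination inside the container
      # from the Dockerfile COPY/ADD instructions
      - 'src/**/*.ts'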
Thanks to this syncing you'll see very fast updates, with none of the rebuilding and recompiling you saw before.
Launching the example
- Using the Cloud Code Integrated Terminal, move to the directory holding the code and Skaffold configuration:
cd $HOME/skaffold/examples/typescript
- Launch skaffold dev specifying the port-forward option:
skaffold dev --port-forward
This option tells Skaffold to port-forward the application for you, without the need to modify the skaffold.yaml file.
- Observe the output. Skaffold will tell you where the port forwarding is taking place once deployment has finished. Ctrl + click (or Cmd + click on macOS) on the URL with the forwarded port shown in the command output.
Observe the application running, showing a simple "Hello world" text in the browser.
Something to notice here is that although the skaffold.yaml does not contain a deploy section, the app has been successfully deployed in the Minikube cluster. If you look at the examples/typescript directory, you'll see there's a k8s directory containing the Kubernetes manifests that deploy the application. By default, Skaffold looks for manifests in this directory and uses them to deploy the app, so there's no need to declare them explicitly in the configuration file.
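In other words, omitting the deploy section here is roughly equivalent to declaring Skaffold's documented default explicitly (a sketch, assuming the default kubectl deployer and manifest glob):

deploy:
  kubectl:
    manifests:
    - k8s/*.yaml   # Skaffold's default manifest location when no deploy section is given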
Testing File Sync
You'll modify the application code to test the file syncing that Skaffold provides:
- From the Cloud Shell Editor File Explorer, go to the **examples/typescript/backend/src/** folder, open the **index.ts** file and change the **'Hello World!'** text in line 6 to **'Hello Skaffold'**.
import express, { Response } from 'express';
import { echo } from './utils';

const app = express()
const port = 3000

app.get('/', (_, res: Response) => res.send(echo('Hello Skaffold!')))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
- Save the file and observe in the console output how Skaffold picks up the file change immediately and transfers it to the running container in no time. If you now go back to the tab where the web page is rendered and refresh it, you will see the new text.
- Go back to where you ran skaffold dev and press Ctrl + C. Skaffold will automatically clean up all the deployed objects.
File Syncing with Hot Reloading
If you had an app configured for hot reloading (for example, a React application like the one included in the examples/react-reload folder), you would see the browser automatically detect the new file synced by Skaffold and reload, without you having to refresh the page manually. Although this depends on the framework you're programming in rather than on Skaffold, it makes the whole developer experience even more agile.
5. Revisiting profiles
You've briefly covered profiles in the previous section, where you practiced with a dev profile with an activation associating the profile with the execution of a particular Skaffold command. You will now learn more about Skaffold profiles using another example application and a GKE cluster running in your project.
Create a Cluster
- Create a GKE cluster in the cloud called staging:
gcloud container clusters create staging --zone europe-southwest1-b --async
The command launches the creation of the cluster and returns immediately, as it will take some minutes to complete. While this happens, proceed with the next steps of this section.
Review the available profiles
- Navigate to the examples/profiles/ directory in the lower left panel in Cloud Shell Editor and double-click on the skaffold.yaml file to open it.
- Have a look at the configuration. Given what you've learned so far, can you predict what's going to happen if you launch the **skaffold dev** command? Pause here and try to come up with the expected result before moving forward.
- Using the integrated terminal in Cloud Shell Editor, run skaffold dev in the profiles example folder:
cd $HOME/skaffold/examples/profiles
skaffold dev
- Open the Cloud Code - Kubernetes: Clusters view at the left of Cloud Shell Editor and expand the Pods folder in the minikube cluster. You can see there's one pod running called hello-service:
- Go back to the terminal where you see the pod's continuous output of the string [hello-service] HELLO and press Ctrl + C. This cleans up the deployed artifacts: you can see from the Terminal that Skaffold deletes the hello-service pod.
- Examine the skaffold.yaml file you opened before again. You can see there are two profiles defined in the profiles section of the document, minikube-profile and staging-profile:
profiles:
- name: minikube-profile
  # automatically activate this profile when current context is "minikube"
  activation:
  - kubeContext: minikube
  build:
    # only build and deploy "hello-service" on minikube profile
    artifacts:
    - image: skaffold-hello
      context: hello-service
  deploy:
    kubectl:
      manifests:
      - 'hello-service/*.yaml'
- name: staging-profile
  build:
    # build and deploy both services on "staging"
    artifacts:
    - image: skaffold-hello
      context: hello-service
    - image: skaffold-world
      context: world-service
  deploy:
    # use context "staging" for staging-profile
    kubeContext: staging
    kubectl:
      manifests:
      - '**/*.yaml'
What you see in minikube-profile corresponds to what just happened when you launched the skaffold dev command. This profile has an activation of type kubeContext, which means that each time Skaffold detects minikube as the active Kubernetes context, it automatically activates and uses this profile. The profile builds only the artifact corresponding to the hello-service/ folder (as indicated in the context attribute) and deploys the built image using kubectl with the manifests found under the hello-service directory.
In addition to minikube-profile, there's another profile in the skaffold.yaml file called staging-profile, which you will be using with the GKE cluster you created before.
Prepare GKE Contexts
Before trying this profile out, make sure your GKE cluster is up and running and ready to host the application deployment:
- Check that the GKE cluster called staging is up and running:
gcloud container clusters list
Output:
NAME: staging
LOCATION: europe-southwest1-b
MASTER_VERSION: 1.21.6-gke.1503
MASTER_IP: 34.76.29.50
MACHINE_TYPE: e2-medium
NODE_VERSION: 1.21.6-gke.1503
NUM_NODES: 3
STATUS: RUNNING
If the STATUS is PROVISIONING instead of RUNNING, wait a bit longer for the cluster to be created and check again with gcloud container clusters list until the cluster is RUNNING.
- Get the cluster credentials:
gcloud container clusters get-credentials staging --zone europe-southwest1-b
- Rename the GKE context to **staging**, as this is the context configured in the skaffold.yaml file for the profile you'll be using:
kubectx staging=$(kubectx -c)
Deploy with profile
Once the GKE cluster and the corresponding context are set up, you will try Skaffold with the staging-profile profile:
- Type the following command:
skaffold dev -p staging-profile \
    --default-repo="gcr.io/$GOOGLE_CLOUD_PROJECT"
Notice that you're passing an additional flag, --default-repo, to tell Skaffold the image repository to use with the remote GKE cluster, as opposed to the local registry you've been using with the local Minikube cluster. Since you may frequently use a Skaffold profile to deploy remotely with a specific context, Skaffold lets you configure the default repo for a given context with skaffold config set default-repo <repo URL>. This way, you won't have to pass the repo URL each time you activate a profile that uses a remote cluster; a sketch of the resulting configuration is shown below.
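For illustration, after running that command with the staging context active, Skaffold's global configuration file (~/.skaffold/config by default) would contain an entry along these lines (a sketch assuming this structure; substitute your own project ID):

kubeContexts:
- kube-context: staging
  default-repo: gcr.io/your-project-id   # illustrative project ID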
- Observe how Skaffold builds two artifacts, publishes them to the Container Registry configured in your project, and deploys all the Kubernetes manifests to the GKE cluster the staging context points to. Output:
[...]
Waiting for deployments to stabilize...
 - pods: creating container hello-service
 - pod/hello-service: creating container hello-service
 - pod/world-service: creating container world-service
 - pods is ready.
Deployments stabilized in 6.3 seconds
Press Ctrl+C to exit
Watching for changes...
[hello-service] HELLO
[hello-service] HELLO
[world-service] WORLD
[world-service] WORLD
[hello-service] HELLO
[world-service] WORLD
[hello-service] HELLO
[world-service] WORLD
This time, as configured in the skaffold.yaml file, Skaffold builds the two microservices hello-service and world-service, and processes all the YAML manifests in the folder to deploy the images to the remote cluster.
- Press Ctrl + ` (backtick) again to open a new Terminal window inside Cloud Shell Editor. From this new terminal, check the deployment of these two pods:
kubectl get pods
Output:
NAME            READY   STATUS    RESTARTS   AGE
hello-service   1/1     Running   0          13s
world-service   1/1     Running   0          13s
- Exit this terminal:
exit
- From the terminal where Skaffold is running, press Ctrl + C to trigger Skaffold's cleanup of the deployed artifacts.
- Re-enable minikube as the active context so the next sections of this lab deploy locally:
kubectx minikube
6. Using Multi Config and decoupling steps
Configuration Dependencies
In this section of the lab, you will practice with Skaffold multi-configuration, using a Skaffold feature called modules or configuration dependencies. This lets you, as a developer, work with applications broken up into different parts (backend, frontend, and so on) and different manifest configurations (Helm charts, Kubernetes manifests, and so on) by creating individual skaffold.yaml files for each of them and referencing those files from a common main skaffold.yaml configuration file:
- From your terminal, type:
cd $HOME/skaffold/examples/multi-config-microservices
cloudshell edit skaffold.yaml
- The skaffold.yaml file will open in the Cloud Shell Editor.
apiVersion: skaffold/v2beta28
kind: Config
requires:
- path: ./leeroy-app
- path: ./leeroy-web
This main skaffold.yaml file pulls in dependencies from two additional skaffold.yaml files under the requires section, using the path attribute to express the location of each dependency. Each of these corresponds to a specific part of the whole application, and Skaffold stitches them together to make everything happen.
- From the terminal, type the following to open the leeroy-app skaffold.yaml configuration file:
cloudshell edit leeroy-app/skaffold.yaml
Output:
apiVersion: skaffold/v2beta28
kind: Config
metadata:
  name: app-config
requires:
- path: ../base
build:
  artifacts:
  - image: leeroy-app
    requires:
    - image: base
      alias: BASE
deploy:
  kubectl:
    manifests:
    - kubernetes/*
portForward:
- resourceType: deployment
  resourceName: leeroy-app
  port: http
  localPort: 9001
- Do the same for the leeroy-web skaffold.yaml file:
cloudshell edit leeroy-web/skaffold.yaml
You will see the file is almost identical, with the only changes being the ports and app names.
- Run skaffold dev to see how Skaffold combines both configurations, making sure you're back on the minikube context:
kubectx minikube
skaffold dev
- Observe how Skaffold builds and deploys:
[...]
Starting deploy...
 - service/leeroy-app created
 - deployment.apps/leeroy-app created
 - deployment.apps/leeroy-web created
Waiting for deployments to stabilize...
 - deployment/leeroy-app is ready. [1/2 deployment(s) still pending]
 - deployment/leeroy-web is ready.
Deployments stabilized in 2.203 seconds
Port forwarding deployment/leeroy-web in namespace default, remote port 8080 -> http://127.0.0.1:9000
Port forwarding deployment/leeroy-app in namespace default, remote port http -> http://127.0.0.1:9001
Press Ctrl+C to exit
Watching for changes...
[leeroy-app] 2022/04/13 09:45:16 leeroy app server ready
[leeroy-web] 2022/04/13 09:45:16 leeroy web server ready
You can see how Skaffold has built and deployed both services in the minikube cluster, and forwarded them to the local ports declared in each of the skaffold.yaml files.
- Open a new terminal by pressing Ctrl + ` (backtick) and, in this new terminal, try the service endpoint:
curl localhost:9000
Output:
leeroooooy app!!
- Exit the second terminal you just opened
exit
- Navigate back to the terminal where Skaffold is running and press Ctrl + C to stop Skaffold and get the automatic cleanup.
Remote configuration dependencies
In this particular example, the main skaffold.yaml configuration file you reviewed in step 2 uses the path attribute to express its dependencies, but it could also express a remote configuration dependency on a Git repository using the git attribute. This way, you can have dependencies on other microservices or parts of the application that live inside or outside your repository. The requires block in the skaffold.yaml configuration file lets you pull these into your local development environment.
Follow the next steps to deploy a practical example of what this means:
- Using the terminal, move to the remote-multi-config-microservices/ folder and open the skaffold.yaml file there to explore its contents:
cd $HOME/skaffold/examples/remote-multi-config-microservices
cloudshell edit skaffold.yaml
- Review the contents of the skaffold.yaml file you just opened:
apiVersion: skaffold/v2beta28
kind: Config
requires:
- git:
    repo: https://github.com/GoogleContainerTools/skaffold
    ref: main
    path: examples/multi-config-microservices/leeroy-app
    sync: false
- git:
    repo: https://github.com/GoogleContainerTools/skaffold
    ref: main
    path: examples/multi-config-microservices/leeroy-web
    sync: false
Now, instead of filesystem paths, there are two Git dependencies from the same repository, GoogleContainerTools/skaffold. When processing the pipeline, Skaffold downloads each referenced repository (one copy per referenced branch) to its cache folder (~/.skaffold/repos by default) and uses them the same way as in the previous example.
- Test the pipeline with remote dependencies by launching the following command from the Terminal:
skaffold dev
- Open a new terminal by pressing Ctrl + ` (backtick) and, in this new terminal, try the service endpoint:
curl localhost:9000
Output:
leeroooooy app!!
Everything has worked the same as when you were pulling the dependencies from the local filesystem, but this time they came from external Git repositories.
- Exit the second terminal you just opened
exit
- Navigate back to the terminal where Skaffold is running and press Ctrl + C to stop Skaffold and get the automatic cleanup.
Decoupling the pipeline steps
Up to this point, you've always been executing the whole pipeline defined in the Skaffold configuration file with skaffold dev. Skaffold also lets you run specific steps on their own:
- Go to the Terminal in Cloud Shell Editor and run:
skaffold build --file-output=artifacts.json
This just executes the build section of the Skaffold pipeline and stores the result in the artifacts.json file that you specified with the --file-output flag.
- Examine the content of the artifacts.json file with the jq prettifier:
cat artifacts.json | jq
Output:
{ "builds": [ { "imageName": "base", "tag": "base:9a77fc7ffe20a16ab5ae7bc3e7a4eb0b09c16b1c54bd31b476fccd5bf40af68e" }, { "imageName": "leeroy-app", "tag": "leeroy-app:ba10442fb04ce62602026e35a3068c58e30cb80ff671b536dce10cf5169ac71b" }, { "imageName": "leeroy-web", "tag": "leeroy-web:95a8fd9ad29d3b5bad33c84b62ed09af1471628d90eaffaab34cbe5cee663e6f" } ] }
You'll see a list of the images built, with their names and specific tags. The interesting thing here is that you can now pass this file to wherever you might need it, such as a CI/CD process. For instance, imagine you commit your code to main and the CI pipeline runs skaffold build to generate the artifacts.json file containing the latest images for that particular commit. That file can then be passed to the CD process to deploy the right images into Kubernetes. You will simulate this in the following steps.
- Ask Skaffold to take this artifacts list and process it through the manifests defined in the skaffold.yaml files with the following command:
skaffold render --build-artifacts artifacts.json --digest-source=local > deploy.yaml
cloudshell edit deploy.yaml
This generates a YAML file containing the manifests you can now deploy to Kubernetes. Looking at the file, you'll see that in the deployment specs the image names have been replaced with the fully qualified tags generated by skaffold build; a sketch of what such a fragment might look like is shown below.
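For example, a fragment of the rendered deploy.yaml might look roughly like this (illustrative, using one of the tags from the artifacts.json shown earlier):

    spec:
      containers:
      - name: leeroy-app
        # tag produced by skaffold build, substituted by skaffold render
        image: leeroy-app:ba10442fb04ce62602026e35a3068c58e30cb80ff671b536dce10cf5169ac71b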
- Run skaffold apply against this file:
skaffold apply deploy.yaml
This deploys the artifacts to the minikube cluster you configured at the beginning of this lab, closing the loop on what skaffold dev did in a single step at the beginning of this section.
- You can check the deployment of the artifacts in the minikube cluster using kubectl:
kubectl get pods
7. Using Helm & Kustomize
In the previous section, you practiced with the skaffold build and skaffold render commands. These can take different options that control how the particular action is implemented. For example, skaffold build could use Kaniko instead of Docker to perform the actual build of the artifacts.
The same is true for skaffold render. Skaffold abstracts manifest management, and it can use tools like Helm, Kustomize, and kpt to render the manifests.
Using Helm
- Using the Terminal, go to the Helm deployment example folder and open the skaffold.yaml file:
cd $HOME/skaffold/examples/helm-deployment
cloudshell edit skaffold.yaml
- Focus on the file you've just opened in Cloud Shell Editor and zoom in on the section called deploy, where you can find a subsection called helm.
kind: Config
build:
  artifacts:
  - image: skaffold-helm
deploy:
  helm:
    releases:
    - name: skaffold-helm
      chartPath: charts
      artifactOverrides:
        image: skaffold-helm
Notice how the directory containing the Helm charts is captured by the chartPath attribute. chartPath can also reference remote dependencies, so the chart tarball is pulled in by Skaffold and then used. Skaffold abstracts how Helm is run and the typical flags you'd pass to it; for example, artifactOverrides binds Helm's image value key to the build artifact skaffold-helm, which you can see referenced in the last line of the charts/templates/deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: {{ .Values.image }}
- Launch the pipeline to see Skaffold in action rendering the manifests in the deployment step:
skaffold dev
Output:
Waiting for deployments to stabilize...
 - deployment/skaffold-helm is ready.
Deployments stabilized in 2.083 seconds
Press Ctrl+C to exit
Watching for changes...
[skaffold-helm] Hello world! 0
[skaffold-helm] Hello world! 1
[skaffold-helm] Hello world! 0
[skaffold-helm] Hello world! 1
[skaffold-helm] Hello world! 2
[skaffold-helm] Hello world! 2
[skaffold-helm] Hello world! 3
[skaffold-helm] Hello world! 3
- Press Ctrl + C to stop Skaffold.
Using Kustomize
- Using the Terminal, go to the Kustomize deployment example folder and open the skaffold.yaml file:
cd $HOME/skaffold/examples/skaffold-deployment
cloudshell edit skaffold.yaml
- The file you just opened is pretty simple: it just tells Skaffold to look for the standard kustomization.yaml Kustomize file and go with it:
Output:
apiVersion: skaffold/v2beta28
kind: Config
deploy:
  kustomize: {}
- Instead of using this file, you'll practice launching skaffold deploy with the option to pass a file other than the standard skaffold.yaml. Open skaffold-kustomize-args.yaml to inspect it:
cloudshell edit skaffold-kustomize-args.yaml
Output:
apiVersion: skaffold/v2beta28
kind: Config
deploy:
  kustomize:
    buildArgs:
    - "--load_restrictor none"
This file just tells Skaffold, via buildArgs, to pass the --load_restrictor none argument to Kustomize, which allows Kustomize to reference kustomization files outside of its root directory.
- Open the kustomization.yaml file as well to have a look at what kind of customizations will be applied by Kustomize via Skaffold:
cloudshell edit kustomization.yaml
Output:
resources:
- deployment.yaml
patches:
- patch.yaml
- Although understanding how Kustomize and its overlays work is out of the scope of this lab, open the files referenced in the previous file to get a grasp of what will be deployed:
cloudshell edit deployment.yaml patch.yaml
deployment.yaml declares a Kubernetes Deployment with 1 replica whose template uses a container image that is clearly not valid: not/a/valid/image. The patch.yaml file remediates that by applying a patch to the corresponding spec that provides a valid image name, its repository URL, and an entry command:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-test
spec:
  template:
    spec:
      containers:
      - name: kustomize-test
        image: index.docker.io/library/busybox
        command:
        - sleep
        - "3600"
- Launch Skaffold to apply the deployment using Kustomize:
skaffold deploy -f skaffold-kustomize-args.yaml
Output:
Tags used in deployment:
Starting deploy...
 - deployment.apps/kustomize-test created
Waiting for deployments to stabilize...
 - deployment/kustomize-test: creating container kustomize-test
 - pod/kustomize-test-785b6ccf79-ph7rs: creating container kustomize-test
 - deployment/kustomize-test is ready.
Deployments stabilized in 6.155 seconds
- Check that the deployment's pods are running in your minikube cluster:
kubectl get pods
Output:
NAME                              READY   STATUS    RESTARTS   AGE
kustomize-test-785b6ccf79-ph7rs   1/1     Running   0          50s
8. Running unit tests
Skaffold can also run your custom unit tests as part of your inner development loop, an important part of day-to-day development activities.
- Using the Terminal, go to the custom-tests example folder and open the skaffold.yaml file:
cd $HOME/skaffold/examples/custom-tests
cloudshell edit skaffold.yaml
- Observe the structure of the file:
apiVersion: skaffold/v2beta28
kind: Config
build:
  artifacts:
  - image: custom-test-example
test:
- image: custom-test-example
  custom:
  - command: ./test.sh
    timeoutSeconds: 60
    dependencies:
      paths:
      - "*_test.go"
      - "test.sh"
  - command: echo Hello world!!
    dependencies:
      command: echo [\"main_test.go\"]
deploy:
  kubectl:
    manifests:
    - k8s-*
For each of the images that gets built, Skaffold lets you define commands to run once the image is done building. In this case, it runs the test.sh script and the echo Hello world!! command locally on your machine. The typical use case for a Golang application, for example, would be to run go test; a sketch of what that might look like follows.
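As an illustration only (not part of this example), a custom test entry that runs go test directly might look like this, assuming Go is available wherever Skaffold runs:

test:
- image: custom-test-example
  custom:
  - command: go test ./... -count=1   # re-run the package tests whenever watched files change
    timeoutSeconds: 60
    dependencies:
      paths:
      - "**/*.go"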
The dependencies section inside each test entry just indicates which files to watch: changes to those files re-run the tests without triggering an image rebuild.
- Run skaffold dev to go through all the steps in the pipeline:
skaffold dev
Output:
Build [custom-test-example] succeeded
Starting test...
Testing images...
Running custom test command: "./test.sh" with timeout 60 s
go custom test
ok      github.com/GoogleContainerTools/skaffold/examples/custom-tests 0.004s
Command finished successfully.
Running custom test command: "echo Hello world!!"
Hello world!!
Command finished successfully.
Tags used in deployment:
 - custom-test-example -> custom-test-example:ae35b9bca27e59f6839ae6e6a459c17e38f24de6677e74b41c35a7bf95502198
Starting deploy...
 - pod/custom-test created
Waiting for deployments to stabilize...
 - pods is ready.
Deployments stabilized in 3.142 seconds
Press Ctrl+C to exit
Watching for changes...
[custom-test] Min of 42 and 42 is: 42
[custom-test] Min of 98 and 76 is: 76
[custom-test] Min of 47 and 45 is: 45
This builds the images, runs the tests defined in the skaffold.yaml file and, if they succeed, deploys to the Minikube cluster, streaming the application's STDOUT back to your terminal.
If the tests fail, Skaffold will keep the old version of the image and deployment of the new version won't take place.
- Press Ctrl + C to trigger a cleanup and exit the pipeline execution.
9. Optional - Using Buildpacks
Skaffold can be configured to create images in different ways. By default, it will look at a Dockerfile, but it can also use things like Kaniko to do dockerless, in-cluster builds, or use Buildpacks.
Buildpacks are an easy way to get a streamlined image build straight from source code, without writing a Dockerfile. They detect the type of app you have and how to handle the dependencies for the language or framework being used. Although Buildpacks per se are not the goal of this lab, you will practice with them to see how easily Skaffold manages them.
This example uses the Google Cloud Buildpacks, although any of the available buildpacks builders are supported.
- Using the Terminal, go to the buildpacks-python example folder and open the skaffold.yaml file:
file:
cd $HOME/skaffold/examples/buildpacks-python
cloudshell edit skaffold.yaml
- Observe the structure of the file:
apiVersion: skaffold/v2beta28
kind: Config
build:
  artifacts:
  - image: skaffold-buildpacks
    buildpacks:
      builder: "gcr.io/buildpacks/builder:v1"
      trustBuilder: true
profiles:
- name: gcb
  build:
    googleCloudBuild: {}
The buildpacks builder configuration in this file tells Skaffold how to build the image in this particular case. The builder image gcr.io/buildpacks/builder:v1 specified here contains all the logic that knows how to build an OCI-compliant image from source code.
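Because any Cloud Native Buildpacks-compatible builder can be used, you could swap in a different one; for example, a hedged sketch using the Paketo base builder (illustrative only, not part of this example):

build:
  artifacts:
  - image: skaffold-buildpacks
    buildpacks:
      builder: "paketobuildpacks/builder:base"   # any CNB-compatible builder image can be used here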
- Run skaffold dev to build the image with port forwarding enabled:
skaffold dev --port-forward
You'll see a lot more happening than with the regular Dockerfile build. Skaffold pulls the builder image, and then looks for the run image, a leaner image used to actually run the application.
- Observe the output of the previous command:
===> DETECTING
google.python.runtime    0.9.1
google.python.pip        0.9.2
google.config.entrypoint 0.9.0
google.utils.label       0.0.2
===> RESTORING
===> BUILDING
You'll see how Skaffold detects the kind of code we have here. After that, because this is Python, it will start installing dependencies with pip.
As this is the first time the application image is built, it will take more time than successive rebuilds. Also, although not present at the moment for Python specifically, there's integration for file synchronization for Go, Java and Node.js that won't require image rebuilds and will sync the right bits to the running container.
- Observe the output logs to see where your application has been deployed:
Starting deploy...
 - service/web created
 - deployment.apps/web created
Waiting for deployments to stabilize...
 - deployment/web is ready.
Deployments stabilized in 2.126 seconds
Port forwarding service/web in namespace default, remote port 8080 -> http://127.0.0.1:8080
Press Ctrl+C to exit
Watching for changes...
[web]  * Serving Flask app 'web.py' (lazy loading)
[web]  * Environment: production
[web]    WARNING: This is a development server. Do not use it in a production deployment.
[web]    Use a production WSGI server instead.
[web]  * Debug mode: off
[web]  * Running on all addresses (0.0.0.0)
[web]    WARNING: This is a development server. Do not use it in a production deployment.
[web]  * Running on http://127.0.0.1:8080
[web]  * Running on http://172.17.0.4:8080 (Press CTRL+C to quit)
- Cmd + click on the forwarded port to open the web page served by the container. You should see a Hello World text displayed.
- Close the tab, navigate back to the Terminal and press Ctrl + C to trigger a Skaffold cleanup.
10. Congratulations!
Congratulations, you finished the codelab!
What you've covered
- Reviewed Skaffold basics and inner development loop pipeline
- Examined the skaffold.yaml format
- Utilized hot reloading and file syncing
- Worked with Profiles and multi-configuration
- Used Cloud Code as an interface to access Skaffold features