In this lab, you learn how to use pipeline options and carry out Map and Reduce operations in Dataflow.

What you need

You must have completed Lab 0 and have the following:

What you learn

In this lab, you learn how to:

The goal of this lab is to learn how to write MapReduce operations using Dataflow.

Step 1

In Cloud Shell, clone the repository if you have not already done so:

git clone

Navigate to the folder containing the starter code for this lab:

cd training-data-analyst/courses/machine_learning/deepdive/04_features/dataflow

Step 2

View the source code for the pipeline by opening the Cloud Shell Code Editor and navigating to: training-data-analyst/courses/machine_learning/deepdive/04_features/dataflow/javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp/

Step 3

What getX() methods are present in the class MyOptions? ____________________

What is the default output prefix? _________________________________________

How is the variable outputPrefix in main() set? _____________________________
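For context while you answer these questions: a Beam pipeline typically declares its options as an interface extending PipelineOptions, with @Default annotations supplying fallback values, and main() builds them with PipelineOptionsFactory.fromArgs(args). The framework-free sketch below shows the same idea — a getter that returns either a command-line override or a default. The class and flag names are illustrative, though the /tmp/output default matches the file you will read later in this lab:

```java
// Framework-free analogy of Beam's PipelineOptions pattern.
// OptionsSketch and --outputPrefix are illustrative names; in Beam this
// behavior is generated from @Default annotations by PipelineOptionsFactory.
public class OptionsSketch {
    static String getOutputPrefix(String[] args) {
        for (String arg : args) {
            if (arg.startsWith("--outputPrefix=")) {
                return arg.substring("--outputPrefix=".length());
            }
        }
        return "/tmp/output"; // default used when no flag is given
    }

    public static void main(String[] args) {
        System.out.println(getOutputPrefix(args));
    }
}
```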

Step 4

What are the key steps in the pipeline? _____________________________________________________________________________

Which of these steps happen in parallel? ____________________________________

Which of these steps are aggregations? _____________________________________
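As a hint for reasoning about these questions: Map steps transform each element independently (and can therefore run in parallel), while a Reduce step groups elements by key and aggregates them. The framework-free Java sketch below shows that shape — mapping Java import lines to package names, then counting occurrences per package. The sample data and helper names are illustrative, not the lab's actual code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MapReduceSketch {
    // Map: extract the package name from an import statement.
    static String packageOf(String importLine) {
        String pkg = importLine
            .replace("import ", "")
            .replace(";", "")
            .trim();
        int lastDot = pkg.lastIndexOf('.');
        return lastDot > 0 ? pkg.substring(0, lastDot) : pkg;
    }

    static Map<String, Long> countPackages(List<String> lines) {
        return lines.stream()
            .filter(line -> line.startsWith("import ")) // Map steps: applied per element
            .map(MapReduceSketch::packageOf)
            .collect(Collectors.groupingBy(             // Reduce: group by key, then aggregate
                Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
            "import java.util.List;",
            "import java.util.Map;",
            "import com.example.Foo;",
            "public class Demo {}");
        System.out.println(countPackages(lines)); // e.g. {java.util=2, com.example=1}
    }
}
```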

Step 1

Copy and paste the following Maven command:

export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:$PATH
cd ~/training-data-analyst/courses/machine_learning/deepdive/04_features/dataflow/javahelp
mvn compile -e exec:java \

Step 2

Examine the output file:

cat /tmp/output.csv

Step 1

Change the output prefix from the default value:

mvn compile -e exec:java \

What will be the name of the new .csv file that is written out?
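Since the default prefix /tmp/output produced /tmp/output.csv in the previous step, the output file name appears to be the prefix with a .csv suffix appended. The tiny sketch below captures that relationship; the suffix-appending behavior is an assumption inferred from the default case, not taken from the lab's source:

```java
public class OutputNameSketch {
    // Assumption: the pipeline appends ".csv" to the user-supplied prefix,
    // consistent with the default prefix /tmp/output producing /tmp/output.csv.
    static String outputFile(String prefix) {
        return prefix + ".csv";
    }

    public static void main(String[] args) {
        System.out.println(outputFile("/tmp/output")); // prints /tmp/output.csv
    }
}
```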

Step 2

Note that we now have a new .csv file in the /tmp directory:

ls -lrt /tmp/*.csv

In this lab, you:

©Google, Inc. or its affiliates. All rights reserved. Do not distribute.