BigQuery UI Navigation and Data Exploration Codelab

1. Introduction

BigQuery is a serverless, highly scalable, and cost-effective data warehouse. Simply move your data into BigQuery and let us handle the hard work so you can focus on what truly matters: running your business. You can control access to both the project and your data based on your business needs, such as giving others the ability to view or query your data.

In this lab, you will discover the analytical possibilities of BigQuery. You'll learn how to import a dataset from a Google Cloud Storage bucket and get a grasp of the BigQuery UI by working with a retail banking dataset. Additionally, this lab will teach you how to use key BigQuery features that make your day-to-day analytics easier, such as exporting query results to a spreadsheet, viewing and running queries from your query history, viewing query performance, and creating views to be used by other teams and departments.

What you will learn

In this lab, you learn how to perform the following tasks:

  • Load new data into BigQuery
  • Navigate the BigQuery UI
  • Run queries in BigQuery
  • View query performance
  • Create views in BigQuery
  • Securely share datasets with others

2. Understanding the BigQuery UI

In this section you will learn how to navigate the BigQuery UI, view available datasets and run a simple query.

Loading the BigQuery UI

  1. Type "BigQuery" into the search bar at the top of the Google Cloud Platform Console.
  2. Select BigQuery from the options list. Be sure to select the option that has the BigQuery logo, the magnifying glass.

Viewing Datasets and Running Queries


  1. In the left pane, in the Resources section, click on your BigQuery project.
  2. Click on bq_demo to view the tables in that dataset.
  3. In the "Type to search" box, type "card" to see a list of tables and datasets that contain "card" in their name.
  4. Select the "card_transactions" table from the search results list.


  5. Click on the Details tab under the card_transactions pane to view the metadata for this table.
  6. Click on the Preview tab to see a preview of the table.

[Competitive Talking Point]: Integration with Google Data Catalog means that BigQuery metadata can be managed along with other data sources, such as data lakes or operational data sources. This is one example that shows that Google Cloud is not just a relational data warehouse; it is an entire analytical data platform.

  7. Click the magnifying glass icon to query the "card_transactions" table. An auto-generated query will populate the BigQuery query editor.
  8. Enter the query below to show the distinct merchants from the card_transactions table:
SELECT DISTINCT merchant FROM bq_demo.card_transactions LIMIT 1000
  9. Click the Run button to run the query.


3. Creating datasets and sharing views

Data sharing and governance are crucial, and both can be done intuitively in the BigQuery UI. In this section you will learn how to create a new dataset, populate it with a view, and share that dataset.

Viewing Query History

  1. Click "Query History" in the left pane of the GCP Console.
  2. Click Refresh in the Query History pane.
  3. Click the download arrow on the far right of a query to view its results.


Creating a new dataset

  1. Select [your project name] in the Resources pane of the BigQuery UI.
  2. Select "Create Dataset" from the project information pane.
  3. For Dataset ID, enter bq_demo_shared.
  4. Leave all other fields as defaults.
  5. Click "Create Dataset".


Creating Views

[Competitive Talking Point]: BigQuery is fully ANSI SQL compliant and supports both simple and complex multi-table joins and rich analytical functions. We have continuously released enhanced support for common SQL data types and functions used in traditional data warehouses to ease the migration process.

  1. Select "Compose New Query" at the top of the Query Editor pane.
  2. Insert the following code in the query editor:
WITH revenue_by_month AS (
    SELECT
        card.type AS card_type,
        FORMAT_DATE('%Y-%m', trans_date) AS revenue_date,
        SUM(amount) AS revenue
    FROM bq_demo.card_transactions
    JOIN bq_demo.card ON card_transactions.cc_number = card.card_number
    GROUP BY card_type, revenue_date
)
SELECT
    card_type,
    revenue_date,
    revenue AS monthly_rev,
    revenue - LAG(revenue) OVER (ORDER BY card_type, revenue_date ASC) AS rev_change
FROM revenue_by_month
ORDER BY card_type, revenue_date ASC;
  3. Click "Save View".
  4. Select your current project for Project Name.
  5. Select the newly created dataset, bq_demo_shared.
  6. For Table Name, enter rev_change_by_card_type.
  7. Click Save.
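The view's LAG window function computes each row's change from the prior month's revenue. As an illustration only (the data values below are invented; just the column roles mirror the view), the same sliding-window logic can be sketched in Python:

```python
def revenue_changes(rows):
    """Mimic LAG(revenue) OVER (ORDER BY ...): each row's change is its
    revenue minus the previous row's revenue; the first row has no prior
    value, so its change is None (NULL in SQL).

    rows: pre-sorted list of (card_type, revenue_date, revenue) tuples.
    """
    result = []
    prev_revenue = None
    for card_type, revenue_date, revenue in rows:
        change = None if prev_revenue is None else revenue - prev_revenue
        result.append((card_type, revenue_date, revenue, change))
        prev_revenue = revenue
    return result

# Invented sample data for one card type
rows = [
    ("gold", "2020-01", 1000),
    ("gold", "2020-02", 1250),
    ("gold", "2020-03", 1100),
]
for row in revenue_changes(rows):
    print(row)  # last element is the month-over-month change
```

Just as here the first tuple carries None, the first row of the view's result has a NULL rev_change, because LAG has no preceding row to read.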


Sharing Views and Datasets

  1. Select the "bq_demo_shared" dataset from the left Resources pane in the BigQuery UI.
  2. Click "Share Dataset" in the dataset information pane.
  3. Enter an email address.
  4. Select "BigQuery Data Viewer" from the Role dropdown menu.
  5. Click "Add".
  6. Click Done.


Explore Data in Sheets

[Competitive Talking Point]: Another benefit of BigQuery compared to its competitors is BI Engine. BI Engine can make BI-style summary queries return in less than a second through an in-memory caching engine. This is currently supported by Google Data Studio but will soon be available to accelerate all queries in BigQuery.

For example:

Snowflake relies on third-party BI tools for dashboards and data visualization, while GCP offers a range of integrated BI tools, including Connected Sheets, Data Studio, and Looker.

  1. Select the "rev_change_by_card_type" view from the left Resources pane in the BigQuery UI.
  2. Click on the magnifying glass to query the view.
  3. Enter the following query:

SELECT * FROM bq_demo_shared.rev_change_by_card_type

  4. Click Run.
  5. Click on the "Export" icon in the Results pane.
  6. Select "Explore Data with Sheets".


  1. Click "Start Analyzing".
  2. Select "Pivot Table".
  3. Select "New Sheet".
  4. Click "Create".
  5. Add "revenue_date" under the Rows section of the Pivot table editor located on the right of the Sheets window.
  6. Add "card_type" under the Columns section of the Pivot table editor.
  7. Add "monthly_rev" under the Values section of the Pivot table editor.
  8. Click Apply.


  9. Navigate to the top ribbon of the Sheets UI and select Insert Chart.
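The pivot table configured above reshapes the view's rows so that revenue_date runs down the rows, card_type across the columns, and monthly_rev fills the cells. As a rough sketch of that reshaping (sample values are invented; this only illustrates the transformation Sheets performs):

```python
from collections import defaultdict

def pivot(records):
    """Reshape (revenue_date, card_type, monthly_rev) records into
    {revenue_date: {card_type: monthly_rev}} -- dates as rows,
    card types as columns, revenue as cell values."""
    table = defaultdict(dict)
    for revenue_date, card_type, monthly_rev in records:
        table[revenue_date][card_type] = monthly_rev
    return dict(table)

# Invented sample records
records = [
    ("2020-01", "gold", 1000),
    ("2020-01", "classic", 400),
    ("2020-02", "gold", 1250),
]
print(pivot(records))
```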

4. Setup: Data Integration

In this section, you will learn how to create a new table and perform a JOIN with one of the many public datasets that Google Cloud has available.

[Competitive Talking Point]:

BigQuery has supported shared data sets for years. Customers in any project can query both public data sets and data sets in other projects that have been shared with them.

BigQuery can support data lakes in GCS through the use of external tables. In addition to bulk loading, BigQuery supports the ability to stream data into the database at rates upwards of hundreds of MB per second. Snowflake has no support for streaming data.

Importing Data to a new table

  1. In the Resources pane, select the bq_demo dataset.
  2. In the dataset information pane, select "Create Table".
  3. Select Google Cloud Storage for Source.
  4. In the file path text box:
  5. Select CSV for File Format.
  6. Enter "district" for Table Name.
  7. Select the checkbox for Auto detect schema.
  8. Click Create Table.

Querying Public Dataset

  1. In the query editor, enter the following query:
SELECT
    CAST(geo_id AS STRING) AS zip_code,
    ROUND(SAFE_DIVIDE(employed_pop, pop_16_over), 4) AS rate_employment,
    ROUND(SAFE_DIVIDE(bachelors_degree_or_higher_25_64, pop_25_64), 4) AS rate_bachelors_degree_or_higher_25_64
-- Public table name assumed from the column names; adjust if your lab uses a different ACS table
FROM `bigquery-public-data.census_bureau_acs.zip_codes_2017_5yr`
  2. Click Run.
  3. View the Results.
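SAFE_DIVIDE in the query above returns NULL instead of raising an error when the denominator is zero (for example, a zip code with no population aged 16 and over). A minimal Python equivalent of that behavior:

```python
def safe_divide(numerator, denominator):
    """Mimic BigQuery's SAFE_DIVIDE: return None (SQL NULL) instead of
    raising a division error when the denominator is zero."""
    if denominator == 0:
        return None
    return numerator / denominator

# Invented example figures: employed_pop / pop_16_over
print(round(safe_divide(4500, 6000), 4))  # 0.75
print(safe_divide(0, 0))                  # None
```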


  1. Now we will combine this public data with another query. Enter the following SQL code in the Query Editor:
WITH customer_counts AS (
    SELECT
        REGEXP_EXTRACT(address, "[0-9][0-9][0-9][0-9][0-9]") AS zip_code,
        COUNT(*) AS num_clients
    FROM bq_demo.client
    GROUP BY zip_code
)
SELECT
    CAST(geo_id AS STRING) AS zip_code,
    ROUND(SAFE_DIVIDE(employed_pop, pop_16_over), 4) AS rate_employment,
    num_clients
-- Public table name assumed from the column names; adjust if your lab uses a different ACS table
FROM `bigquery-public-data.census_bureau_acs.zip_codes_2017_5yr`
JOIN customer_counts ON customer_counts.zip_code = geo_id
ORDER BY num_clients DESC
  2. Click Run.
  3. View the Results.
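The REGEXP_EXTRACT call in customer_counts pulls the first five-digit run out of a free-text address column. Python's re module behaves the same way (the sample addresses below are made up):

```python
import re

def extract_zip(address):
    """Mimic REGEXP_EXTRACT(address, "[0-9][0-9][0-9][0-9][0-9]"):
    return the first run of five consecutive digits, or None (SQL NULL)
    when the address contains no such run."""
    match = re.search(r"[0-9]{5}", address)
    return match.group(0) if match else None

print(extract_zip("123 Main St, Springfield 62704"))  # 62704
print(extract_zip("PO Box 12, Smalltown"))            # None
```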


5. Capacity Management

Working with slots and reservations

BigQuery offers multiple pricing models to meet your needs. Most large customers primarily leverage flat-rate pricing for predictable costs with reserved capacity. For bursting beyond that baseline capacity, BigQuery offers flex slots, which allow you to grow into additional capacity on the fly and then automatically shrink back with no impact on running queries. BigQuery also has an on-demand, bytes-scanned model, which allows you to pay only for the queries you run.
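As a back-of-the-envelope sketch of how these models compare, the arithmetic is simple. The prices below are illustrative assumptions only, not current list prices; check the BigQuery pricing page for real figures:

```python
# Illustrative, assumed prices -- NOT current list prices.
ON_DEMAND_PER_TB = 5.00      # assumed $ per TB scanned (on-demand model)
FLEX_PER_SLOT_HOUR = 0.04    # assumed $ per slot per hour (flex slots)

def on_demand_cost(tb_scanned):
    """Cost of scanning tb_scanned terabytes under the on-demand model."""
    return tb_scanned * ON_DEMAND_PER_TB

def flex_cost(slots, hours):
    """Cost of holding a flex slot commitment of `slots` for `hours`."""
    return slots * hours * FLEX_PER_SLOT_HOUR

# 500 flex slots for one hour vs. a 10 TB on-demand scan:
print(flex_cost(500, 1))     # 20.0
print(on_demand_cost(10))    # 50.0
```

Under these assumed prices, an hour of 500 flex slots costs less than a single 10 TB on-demand scan, which is why bursty, scan-heavy workloads often favor reserved capacity.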

[Competitive Talking Point: Some competitors work exclusively on a fixed capacity model where customers have to allocate a virtual warehouse for each workload in their organization. In addition to a low-cost per-query model that makes it easy to get started with BigQuery, we support a flat rate capacity pricing model where idle capacity can be shared among a set of workloads.]

  1. Go to the Reservations tab.
  2. Click "Buy Slots".
  3. Select "Flex" as the duration.
  4. Select 500 slots.
  5. Confirm the purchase.
  6. Click View Slot Commitments.
  7. Click "Create Reservation".
  8. Use "demo" as the reservation name.
  9. Select United States as the location.
  10. Type 500 for slots (all available).
  11. Click Assignments.
  12. Pick the current project for the organization/project.
  13. Select "demo" for Reservation ID.
  14. Click Create.