Analyze Clinical Data using BigQuery and AI Platform Notebooks

1. Introduction


Last Updated: 2022-09-22

This codelab implements a pattern for accessing and analyzing healthcare data aggregated in BigQuery, using the BigQuery UI and AI Platform Notebooks. It illustrates data exploration of large healthcare datasets using familiar tools like Pandas and Matplotlib in a HIPAA-compliant AI Platform Notebooks environment. The "trick" is to do the first part of your aggregation in BigQuery, get back a Pandas DataFrame, and then work with the smaller DataFrame locally. AI Platform Notebooks provides a managed Jupyter experience, so you don't need to run notebook servers yourself. It is well integrated with other GCP services like BigQuery and Cloud Storage, which makes it quick and simple to start your data analytics and ML journey on Google Cloud Platform.

In this codelab, you will learn to:

  • Develop and test SQL queries using the BigQuery UI.
  • Create and launch an AI Platform Notebooks instance in GCP.
  • Execute SQL queries from the notebook and store query results in a Pandas DataFrame.
  • Create charts and graphs using Matplotlib.
  • Commit and push the notebook to a Cloud Source Repository in GCP.

What do you need to run this codelab?

  • You need access to a GCP Project.
  • You need to be assigned an Owner role for the GCP Project.
  • You need a healthcare dataset in BigQuery.

If you don't have a GCP Project, follow these steps to create a new GCP Project.

2. Project setup

For this codelab, we will use an existing dataset in BigQuery (hcls-testing-data.fhir_20k_patients_analytics). This dataset is pre-populated with synthetic healthcare data.

Get access to the synthetic dataset

  1. From the email address you are using to login to Cloud Console, send an email to hcls-solutions-external+subscribe@google.com requesting to join.
  2. You will receive an email with instructions on how to confirm the action.
  3. Use the option to respond to the email to join the group. DO NOT click the button in the email.
  4. Once you receive the confirmation email, you can proceed to the next step in the codelab.

Pin the project

  1. In GCP Console, select your project, then navigate to BigQuery.
  2. Click the +ADD DATA dropdown, and select "Pin a project" > "Enter project name".


  3. Enter the project name "hcls-testing-data", then click PIN. The BigQuery test dataset "fhir_20k_patients_analytics" is now available for use.


3. Develop queries using BigQuery UI

BigQuery UI setting

  1. Navigate to the BigQuery console by selecting BigQuery from the top-left-corner ("hamburger") GCP menu.
  2. In the BigQuery console, click More → Query settings and ensure that the Legacy SQL option is NOT checked (we will be using Standard SQL).


Build Queries

In the Query editor window, type the following query and click "Run" to execute it. Then, view the results in the "Query results" window.

QUERY PATIENTS

#standardSQL - Query Patients
SELECT
  id AS patient_id,
  name[safe_offset(0)].given AS given_name,
  name[safe_offset(0)].family AS family,
  telecom[safe_offset(0)].value AS phone,
  birthDate AS birth_date,
  deceased.dateTime AS deceased_datetime,
  Gender AS fhir_gender_code,
  Address[safe_offset(0)].line AS address1_line_1,
  Address[safe_offset(0)].city AS address1_city,
  Address[safe_offset(0)].state AS address1_state,
  Address[safe_offset(0)].postalCode AS address1_postalCode,
  Address[safe_offset(0)].country AS address1_country
FROM 
  `hcls-testing-data.fhir_20k_patients_analytics.Patient` AS Patient 
LIMIT 10
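The query above relies on `safe_offset(0)` to pull the first element of repeated FHIR fields such as `name`, `telecom`, and `address`. Unlike plain `OFFSET`, `SAFE_OFFSET` yields NULL instead of raising an error when the array is empty. In Python terms, it behaves roughly like this sketch (not BigQuery code, just an analogy):

```python
def safe_offset(arr, index):
    """Mimic BigQuery's array[SAFE_OFFSET(i)]: the element at index, or None if out of range."""
    return arr[index] if 0 <= index < len(arr) else None

# A patient may have zero, one, or several recorded names.
print(safe_offset(["Alice", "Ally"], 0))  # first recorded name
print(safe_offset([], 0))                 # no names recorded -> None, not an error
```

This is why the query returns rows (with NULLs) even for patients missing a phone number or address.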


QUERY PRACTITIONERS

#standardSQL - Query Practitioners
SELECT 
  id AS practitioner_id, 
  name[safe_offset(0)].given AS given_name,
  name[safe_offset(0)].family AS family_name, 
  gender 
FROM 
  `hcls-testing-data.fhir_20k_patients_analytics.Practitioner` 
LIMIT 10


QUERY ORGANIZATION

Change the organization id to match your dataset.

#standardSQL - Query Organization
SELECT
  id AS org_id,
  type[safe_offset(0)].text AS org_type,
  name AS org_name,
  address[safe_offset(0)].line AS org_addr,
  address[safe_offset(0)].city AS org_addr_city,
  address[safe_offset(0)].state AS org_addr_state,
  address[safe_offset(0)].postalCode AS org_addr_postalCode,
  address[safe_offset(0)].country AS org_addr_country
FROM 
  `hcls-testing-data.fhir_20k_patients_analytics.Organization` AS Organization
WHERE 
  id = "b81688f5-bd0e-3c99-963f-860d3e90ab5d"


QUERY ENCOUNTERS BY PATIENT

#standardSQL - Query Encounters by Patient
SELECT
  id AS encounter_id,
  period.start AS encounter_start,
  period.end AS encounter_end,
  status AS encounter_status,
  class.code AS encounter_type,
  subject.patientId as patient_id,
  participant[safe_offset(0)].individual.practitionerId AS practitioner_id,
  serviceProvider.organizationId as encounter_location_id,
  type[safe_OFFSET(0)].text AS encounter_reason
FROM
  `hcls-testing-data.fhir_20k_patients_analytics.Encounter` AS Encounter
WHERE
  subject.patientId = "900820eb-4166-4981-ae2d-b183a064ac18"
ORDER BY
  encounter_end


GET AVG LENGTH OF ENCOUNTERS BY ENCOUNTER TYPE

#standardSQL - Get Average length of Encounters by Encounter type 
SELECT
  class.code encounter_class,
  ROUND(AVG(TIMESTAMP_DIFF(TIMESTAMP(period.end), TIMESTAMP(period.start), MINUTE)),1) AS avg_minutes
FROM
  `hcls-testing-data.fhir_20k_patients_analytics.Encounter` AS Encounter
WHERE
  period.end >= period.start
GROUP BY
  1
ORDER BY
  2 DESC
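To see what this aggregation computes, here is the same calculation done locally in Pandas, on a few made-up encounter rows (the classes and timestamps below are invented for illustration):

```python
import pandas as pd

# Hypothetical rows standing in for the Encounter table.
df = pd.DataFrame({
    "encounter_class": ["ambulatory", "ambulatory", "inpatient"],
    "start": pd.to_datetime(["2020-01-01 09:00", "2020-01-02 09:00", "2020-01-03 08:00"]),
    "end":   pd.to_datetime(["2020-01-01 09:30", "2020-01-02 10:30", "2020-01-05 08:00"]),
})

# TIMESTAMP_DIFF(end, start, MINUTE), then AVG(...) per class, rounded to 1 decimal.
df["minutes"] = (df["end"] - df["start"]).dt.total_seconds() / 60
avg = df.groupby("encounter_class")["minutes"].mean().round(1).sort_values(ascending=False)
print(avg)
```

The point of the codelab's pattern is that BigQuery performs this GROUP BY over the full dataset, and only the small aggregated result comes back to the notebook.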


GET ALL PATIENTS WHO HAVE A1C RATE >= 6.5

# Query Patients who have A1C rate >= 6.5
SELECT 
  id AS observation_id,
  subject.patientId AS patient_id,
  context.encounterId AS encounter_id,
  value.quantity.value,
  value.quantity.unit,
  code.coding[safe_offset(0)].code,
  code.coding[safe_offset(0)].display AS description
FROM
  `hcls-testing-data.fhir_20k_patients_analytics.Observation` 
WHERE 
  code.text like '%A1c/Hemoglobin%' AND 
  value.quantity.value >= 6.5 AND 
  status = 'final'
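The WHERE clause combines a pattern match on the code text, a numeric threshold, and a status check. On a DataFrame of observation rows, the equivalent filter looks like this (the rows below are invented for illustration, not the real schema):

```python
import pandas as pd

# Hypothetical observation rows.
obs = pd.DataFrame({
    "code_text": ["A1c/Hemoglobin.total in Blood", "Cholesterol", "A1c/Hemoglobin.total in Blood"],
    "value":     [7.2, 180.0, 5.9],
    "status":    ["final", "final", "final"],
})

# Equivalent of: code.text LIKE '%A1c/Hemoglobin%' AND value >= 6.5 AND status = 'final'
high_a1c = obs[
    obs["code_text"].str.contains("A1c/Hemoglobin")
    & (obs["value"] >= 6.5)
    & (obs["status"] == "final")
]
print(high_a1c)
```

Only the first row survives the filter; the cholesterol row fails the code match, and the 5.9 reading falls below the threshold.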


4. Create AI Platform Notebooks instance

Follow the instructions in this link to create a new AI Platform Notebooks (JupyterLab) instance.

Please make sure to enable Compute Engine API.

You can choose "Create a new notebook with default options" or "Create a new notebook and specify your options".

5. Build a Data Analytics Notebook

Open AI Platform Notebooks instance

In this section, we will author a new Jupyter notebook from scratch.

  1. Open a notebook instance by navigating to the AI Platform Notebooks page in the Google Cloud Platform Console. GO TO THE AI PLATFORM NOTEBOOKS PAGE
  2. Select Open JupyterLab for the instance that you want to open.


  3. AI Platform Notebooks directs you to a URL for your notebook instance.


Create a notebook

  1. In JupyterLab, go to File -> New -> Notebook and select the "Python 3" kernel in the popup, or select "Python 3" under the Notebook section in the launcher window, to create an Untitled.ipynb notebook.


  2. Right-click Untitled.ipynb and rename the notebook to "fhir_data_from_bigquery.ipynb". Double-click to open it, build the queries, and save the notebook.
  3. You can download a notebook by right-clicking the *.ipynb file and selecting Download from the menu.


  4. You can also upload an existing notebook by clicking the "Up arrow" button.


Build and execute each code block in the notebook

Copy and execute each code block provided in this section one by one. To execute the code, click "Run" (the triangle icon).


Get length of stay for encounters in hours

from google.cloud import bigquery

client = bigquery.Client()

lengthofstay="""
SELECT
    class.code as encounter_class,
    period.start as start_timestamp,
    period.end as end_timestamp, 
    TIMESTAMP_DIFF(TIMESTAMP(period.end), TIMESTAMP(period.start), HOUR) 
        as length_of_stay_in_hours
FROM 
    `hcls-testing-data.fhir_20k_patients_analytics.Encounter`
WHERE
    period.end >= period.start
ORDER BY
    4 DESC
LIMIT 10
"""
df = client.query(lengthofstay).to_dataframe()
df.head()


Get Observations - Cholesterol values

observation="""
SELECT
  cc.code loinc_code,
  cc.display loinc_name,
  approx_quantiles(round(o.value.quantity.value,1),4) as quantiles,
  count(*) as num_obs
FROM
  `hcls-testing-data.fhir_20k_patients_analytics.Observation` o, o.code.coding cc
WHERE
  cc.system like '%loinc%' and lower(cc.display) like '%cholesterol%'
GROUP BY 1,2
ORDER BY 4 desc
"""
df2 = client.query(observation).to_dataframe()
df2.head()
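Note the comma join `FROM ... Observation o, o.code.coding cc`: in Standard SQL this is shorthand for CROSS JOIN UNNEST, flattening the repeated `coding` array into one row per code. Pandas' `explode` performs the analogous flattening; here is a toy illustration (the rows are invented, not the real schema):

```python
import pandas as pd

# Each observation carries an array of codings, as in FHIR.
obs = pd.DataFrame({
    "observation_id": [1, 2],
    "coding": [[{"system": "loinc", "display": "Cholesterol"}],
               [{"system": "loinc", "display": "Glucose"},
                {"system": "local", "display": "GLU"}]],
})

# Equivalent of the implicit CROSS JOIN UNNEST(o.code.coding): one row per coding entry.
flat = obs.explode("coding").reset_index(drop=True)
flat = pd.concat(
    [flat.drop(columns="coding"), pd.json_normalize(flat["coding"].tolist())],
    axis=1,
)
print(flat)
```

Observation 2 contributes two rows after flattening, one per coding entry, which is exactly what the SQL's `cc` alias iterates over.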


Get Approximate encounter quantiles

encounters="""
SELECT
  encounter_class,
  APPROX_QUANTILES(num_encounters, 4) num_encounters_quantiles
FROM (
  SELECT
    class.code encounter_class,
    subject.reference patient_id,
    COUNT(DISTINCT id) AS num_encounters
  FROM
    `hcls-testing-data.fhir_20k_patients_analytics.Encounter`
  GROUP BY
    1,2
  )
GROUP BY 1
ORDER BY 1
"""
df3 = client.query(encounters).to_dataframe()
df3.head()
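`APPROX_QUANTILES(x, 4)` returns an array of 5 values: the minimum, the three quartile boundaries, and the maximum (approximately, since BigQuery uses an approximate algorithm). On a small local sample, you can compute the exact analogue with NumPy; the per-patient encounter counts below are invented for illustration:

```python
import numpy as np

# Hypothetical per-patient encounter counts for one encounter class.
num_encounters = [1, 2, 2, 3, 4, 5, 8, 13]

# Exact counterpart of APPROX_QUANTILES(num_encounters, 4): min, Q1, median, Q3, max.
quantiles = np.percentile(num_encounters, [0, 25, 50, 75, 100])
print(quantiles)
```

NumPy's default linear interpolation may place the quartile boundaries slightly differently than BigQuery's sketch-based estimate, but the shape of the result (5 values bracketing the distribution) is the same.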


Get Average length of encounters in minutes

avgstay="""
SELECT
  class.code encounter_class,
  ROUND(AVG(TIMESTAMP_DIFF(TIMESTAMP(period.end), TIMESTAMP(period.start), MINUTE)),1) as avg_minutes
FROM
  `hcls-testing-data.fhir_20k_patients_analytics.Encounter`
WHERE
  period.end >= period.start
GROUP BY
  1
ORDER BY
  2 DESC
  """
df4 = client.query(avgstay).to_dataframe()
df4.head()


Get Encounters per patient

patientencounters="""
SELECT
  id AS encounter_id,
  period.start AS encounter_start,
  period.end AS encounter_end,
  status AS encounter_status,
  class.code AS encounter_type,
  subject.patientId as patient_id,
  participant[safe_offset(0)].individual.practitionerId AS practitioner_id,
  serviceProvider.organizationId as encounter_location_id,
  type[safe_OFFSET(0)].text AS encounter_reason
FROM
  `hcls-testing-data.fhir_20k_patients_analytics.Encounter` AS Encounter
WHERE
  subject.patientId = "900820eb-4166-4981-ae2d-b183a064ac18"
ORDER BY
  encounter_end
"""

df5 = client.query(patientencounters).to_dataframe()
df5.head()


Get Organizations

orgs="""
SELECT
  id AS org_id,
  type[safe_offset(0)].text AS org_type,
  name AS org_name,
  address[safe_offset(0)].line AS org_addr,
  address[safe_offset(0)].city AS org_addr_city,
  address[safe_offset(0)].state AS org_addr_state,
  address[safe_offset(0)].postalCode AS org_addr_postalCode,
  address[safe_offset(0)].country AS org_addr_country
FROM 
  `hcls-testing-data.fhir_20k_patients_analytics.Organization` AS Organization
WHERE 
  id = "b81688f5-bd0e-3c99-963f-860d3e90ab5d"
"""

df6 = client.query(orgs).to_dataframe()
df6.head()


Get Patients

patients="""
SELECT
  id AS patient_id,
  name[safe_offset(0)].given AS given_name,
  name[safe_offset(0)].family AS family,
  telecom[safe_offset(0)].value AS phone,
  birthDate AS birth_date,
  deceased.dateTime AS deceased_datetime,
  Gender AS fhir_gender_code,
  Address[safe_offset(0)].line AS address1_line_1,
  Address[safe_offset(0)].city AS address1_city,
  Address[safe_offset(0)].state AS address1_state,
  Address[safe_offset(0)].postalCode AS address1_postalCode,
  Address[safe_offset(0)].country AS address1_country
FROM 
  `hcls-testing-data.fhir_20k_patients_analytics.Patient` AS Patient 
LIMIT 10
"""

df7 = client.query(patients).to_dataframe()
df7.head()


6. Create Charts and Graphs in AI Platform Notebooks

Execute code cells in notebook "fhir_data_from_bigquery.ipynb" to draw a bar graph.

For example, get the average length of Encounters in minutes.

df4.plot(kind='bar', x='encounter_class', y='avg_minutes');
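If you want the chart reproducible outside the notebook, or saved to disk, the same plot can be produced directly with Matplotlib. The values below are invented stand-ins for the df4 results:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; inside a notebook this line is unnecessary
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for df4 (encounter class vs. average minutes).
df4 = pd.DataFrame({
    "encounter_class": ["inpatient", "emergency", "ambulatory"],
    "avg_minutes": [2880.0, 190.5, 42.3],
})

ax = df4.plot(kind="bar", x="encounter_class", y="avg_minutes", legend=False)
ax.set_ylabel("average minutes")
ax.figure.tight_layout()
ax.figure.savefig("avg_encounter_length.png")  # write the chart to a file
```

`DataFrame.plot` is a thin wrapper over Matplotlib, so the Axes it returns can be styled and saved like any other figure.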


7. Commit Notebook to Cloud Source Repository

  1. In the GCP Console, navigate to Source Repositories. If this is your first time using it, click Get started, then Create repository.


  2. On subsequent visits, navigate to Cloud Source Repositories in the GCP Console, and click +Add repository to create a new repository.


  3. Select "Create a new Repository", then click Continue.
  4. Provide the Repository name and Project name, then click Create.


  5. Select "Clone your repository to a local Git repository", then select Manually generated credentials.
  6. Follow the step 1 "Generate and store Git credentials" instructions. Copy the script that appears on your screen.


  7. Start a terminal session in Jupyter.


  8. Paste all commands from your "Configure Git" window into the Jupyter terminal.
  9. Copy the repository clone path from Cloud Source Repositories in the GCP Console.

  10. Paste this command in the JupyterLab terminal. The command will look like the following:
git clone https://source.developers.google.com/p/<your-project-name>/r/my-ai-notebooks
  11. The "my-ai-notebooks" folder is created in JupyterLab.


  12. Move your notebook (fhir_data_from_bigquery.ipynb) into the "my-ai-notebooks" folder.
  13. In the Jupyter terminal, change into the directory: "cd my-ai-notebooks".
  14. Stage your changes using the Jupyter terminal. Alternatively, you can use the Jupyter UI: right-click the files in the Untracked area and select Track to move them into the Tracked area (and vice versa); the Changed area contains the modified files.
git remote add my-ai-notebooks https://source.developers.google.com/p/<your-project-name>/r/my-ai-notebooks
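The stage/commit/push cycle in these steps is standard Git. As a self-contained illustration, the sequence below runs in a throwaway local repository with a placeholder identity and file name; in the codelab itself, the repository and credentials come from the steps above:

```shell
set -e
repo="$(mktemp -d)"          # throwaway directory for the illustration
cd "$repo"
git init -q

# Placeholder identity for the example only.
git config user.email "you@example.com"
git config user.name "Your Name"

# Stand-in for the notebook file you moved into my-ai-notebooks.
touch fhir_data_from_bigquery.ipynb

git add fhir_data_from_bigquery.ipynb     # stage the notebook
git commit -q -m "Add FHIR analytics notebook"
git log --oneline                         # shows the single commit
# "git push --all" would follow once a remote is configured, as in the codelab
```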


  15. Commit your changes using the Jupyter terminal, or the Jupyter UI (type in the message, then click the "Checked" button).
git commit -m "message goes here"
  16. Push your changes to the remote repository using the Jupyter terminal, or the Jupyter UI (click the "push committed changes" icon).
git push --all
  17. In the GCP Console, navigate to Source Repositories. Click my-ai-notebooks. Notice that "fhir_data_from_bigquery.ipynb" is now saved in the GCP Source Repository.


8. Cleanup

To avoid incurring charges to your Google Cloud Platform account for the resources used in this codelab, clean up the resources you created on GCP after you've finished the tutorial, so they won't take up your quota and you won't be billed for them in the future. The following sections describe how to delete or turn off these resources.

Deleting the BigQuery dataset

Follow these instructions to delete the BigQuery dataset you created as part of this tutorial. Alternatively, if you used the test dataset fhir_20k_patients_analytics, navigate to the BigQuery console and unpin the project hcls-testing-data.

Shutting down AI Platform Notebooks instance

Follow the instructions in Shut down a notebook instance | AI Platform Notebooks to shut down an AI Platform Notebooks instance.

Deleting the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the GCP Console, go to the Projects page. GO TO THE PROJECTS PAGE
  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, then click Shut down to delete the project.

9. Congratulations

Congratulations, you've successfully completed the codelab to access, query, and analyze FHIR-formatted healthcare data using BigQuery and AI Platform Notebooks.

  • You accessed a public BigQuery dataset in GCP.
  • You developed and tested SQL queries using the BigQuery UI.
  • You created and launched an AI Platform Notebooks instance.
  • You executed SQL queries in JupyterLab and stored the query results in a Pandas DataFrame.
  • You created charts and graphs using Matplotlib.
  • You committed and pushed your notebook to a Cloud Source Repository in GCP.

You now know the key steps required to start your Healthcare Data Analytics journey with BigQuery and AI Platform Notebooks on the Google Cloud Platform.

©Google, Inc. or its affiliates. All rights reserved. Do not distribute.