Vertex AI: Building a fraud detection model with AutoML

1. Overview

In this lab, you will use Vertex AI to train and serve a model with tabular data. This is the newest AI product offering on Google Cloud, and is currently in preview.

What you learn

You'll learn how to:

  • Upload a Managed Dataset to Vertex AI
  • Train a model with AutoML
  • Deploy your trained AutoML model to an endpoint and use that endpoint to get predictions

The total cost to run this lab on Google Cloud is about $22.

2. Intro to Vertex AI

This lab uses the newest AI product offering available on Google Cloud. Vertex AI integrates the ML offerings across Google Cloud into a seamless development experience. Previously, models trained with AutoML and custom models were accessible via separate services. The new offering combines both into a single API, along with other new products. You can also migrate existing projects to Vertex AI. If you have any feedback, please see the support page.

Vertex AI includes many different products to support end-to-end ML workflows. This lab will focus on the products highlighted below: AutoML for tabular data, Prediction, and Workbench.

Vertex product overview

3. Set up your environment

You'll need a Google Cloud Platform project with billing enabled to run this codelab. To create a project, follow the instructions here.

Step 1: Enable the Compute Engine API

Navigate to Compute Engine and select Enable if it isn't already enabled. You'll need this to create your notebook instance.

Step 2: Enable the Vertex AI API

Navigate to the Vertex AI section of your Cloud Console and click Enable Vertex AI API.

Vertex dashboard

Step 3: Create a Vertex AI Workbench instance

From the Vertex AI section of your Cloud Console, click on Workbench:

Vertex AI menu

From there, within user-managed Notebooks, click New Notebook:

Create new notebook

Then select the latest TensorFlow Enterprise (with LTS) instance type without GPUs:

TFE instance

Use the default options and then click Create.

Step 4: Open your notebook

Once the instance has been created, select Open JupyterLab:

Open Notebook

The data we'll use to train our model is from this Credit card fraud detection dataset. We'll use a version of this dataset made publicly available in BigQuery.

4. Create a Managed Dataset

In Vertex AI, you can create managed datasets for a variety of data types. You can then generate statistics on these datasets and use them to train models with AutoML or your own custom model code.

Step 1: Create a dataset

In the Vertex menu in your console, select Data sets:

Select Data sets

In this lab, we'll build a fraud detection model to determine whether a particular credit card transaction should be classified as fraudulent.

From the Data sets page, give the dataset a name, select Tabular and Regression/classification, and then Create the dataset:

Create dataset

There are a few options for importing data to Managed Datasets in Vertex:

  • Uploading a local file from your computer
  • Selecting files from Cloud Storage
  • Selecting data from BigQuery

Here we'll be uploading data from a public BigQuery table.

Step 2: Import data from BigQuery

Choose "Select a table or view from BigQuery" as your import method, and then copy the following into the BigQuery table box: bigquery-public-data.ml_datasets.ulb_fraud_detection. Then select Continue:

Import BQ data

You should see something like the following after importing your dataset:

Imported data

If you'd like, you can click Generate statistics to see additional info on this dataset, but that is not required before proceeding to the next step. This dataset contains real credit card transactions. Most of the column names have been obscured, which is why they are called V1, V2, etc.
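If you prefer to work in code, the Vertex AI SDK for Python can create the same managed dataset. The snippet below is a sketch rather than part of the lab's console flow; the project ID placeholder and display name are hypothetical, and it assumes the google-cloud-aiplatform package is installed (we install it in a later step):

from google.cloud import aiplatform

# Initialize the SDK with your project and region
aiplatform.init(project="YOUR-PROJECT-ID", location="us-central1")

# Create a tabular managed dataset from the public BigQuery table
dataset = aiplatform.TabularDataset.create(
    display_name="fraud-detection",  # hypothetical display name
    bq_source="bq://bigquery-public-data.ml_datasets.ulb_fraud_detection",
)

print(dataset.resource_name)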

5. Train a model with AutoML

With a managed dataset uploaded, we're ready to train a model with this data. We'll be training a classification model to predict whether or not a specific transaction is fraudulent. Vertex AI gives you two options for training models:

  • AutoML: Train high-quality models with minimal effort and ML expertise.
  • Custom training: Run your custom training applications in the cloud using one of Google Cloud's pre-built containers or use your own.

In this lab, we'll use AutoML for training.

Step 1: Kick off training job

From the dataset detail page where you left off in the previous step, select Train new model on the top right. Select Classification as the objective, leave AutoML selected for model training, and then click Continue:

Model training step 1

Give your model a name, or you can use the default. Under Target column select Class. This is an integer indicating whether or not a particular transaction was fraudulent (0 for non-fraud, 1 for fraud).

Then select Continue:

Model training step 2

In this step, scroll down and click to expand Advanced options. Since this dataset is heavily imbalanced (less than 1% of the data contains fraudulent transactions), choose the AUC PRC option, which maximizes the area under the precision-recall curve and prioritizes the less common class:

Advanced training options

Select Continue and then proceed to the last step (Compute and pricing). Here, enter 1 as the number of node hours for your budget and leave early stopping enabled. Training your AutoML model for 1 compute hour is typically a good start for understanding whether there is a relationship between the features and label you've selected. From there, you can modify your features and train for more time to improve model performance. Next, select Start training.

You'll get an email when your training job completes. Training will take slightly longer than an hour to account for time to spin up and tear down resources.
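For reference, here's roughly how the same training job could be kicked off with the Vertex AI SDK for Python. This is a sketch, not a required step: it assumes the dataset object from the earlier SDK snippet, the display names are hypothetical, and note that the budget is expressed in milli node hours (1000 = 1 node hour):

from google.cloud import aiplatform

# Configure an AutoML tabular classification job optimized for AUC PRC,
# which suits this heavily imbalanced dataset
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="fraud-automl",  # hypothetical job name
    optimization_prediction_type="classification",
    optimization_objective="maximize-au-prc",
)

# Train with a 1 node hour budget; early stopping is on by default
model = job.run(
    dataset=dataset,
    target_column="Class",
    budget_milli_node_hours=1000,
    model_display_name="fraud-model",  # hypothetical model name
)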

6. Explore model evaluation metrics

In this step we'll see how our model performed.

Once your model training job has completed, navigate to the Models tab in Vertex. Click on the model you just trained and take a look at the Evaluate tab. There are many evaluation metrics here - we'll focus on two: the Confusion Matrix and Feature Importance.

Step 1: Understand the confusion matrix

A confusion matrix tells us the percentage of examples from each class in our test set that our model predicted correctly. In the case of an imbalanced dataset like the one we're dealing with, this is a better measure of our model's performance than overall accuracy.

Remember that less than 1% of the examples in our dataset were fraudulent transactions, so if our model's overall accuracy is 99%, there's a good chance it's simply predicting the non-fraudulent class every time. That's why looking at our model's accuracy for each class is a better metric here.
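To make that concrete, here's a quick back-of-the-envelope check. The counts below are approximate figures for this dataset (about 284,807 transactions, 492 of them fraudulent), so treat the exact numbers as illustrative:

# A "model" that always predicts non-fraud looks highly accurate...
total_transactions = 284_807  # approximate dataset size
fraudulent = 492              # approximate fraud count (under 1%)

accuracy = (total_transactions - fraudulent) / total_transactions
print(f"Accuracy of always predicting non-fraud: {accuracy:.2%}")  # ~99.83%

# ...yet it catches none of the fraud, the class we actually care about
print(f"Recall on the fraudulent class: {0 / fraudulent:.0%}")  # 0%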

If you scroll down on the Evaluate tab, you should see a confusion matrix that looks something like this (exact percentages may vary):

Confusion matrix

The confusion matrix shows our initial model is able to classify 85% of the fraudulent examples in our test set correctly. This is pretty good, especially considering our significant dataset imbalance. Next we could try training our model for more compute hours to see if we can improve from this 85%.

Step 2: Look at feature importance

Below the confusion matrix, you should see a feature importance chart that looks like this:

Feature importance

This shows us the features that provided the biggest signal to our model when making predictions. Feature importance is one type of Explainable AI - a field that includes various methods for getting more insight into how an ML model makes predictions. The feature importance chart seen here is calculated as an aggregate by looking at all of our model's predictions on our test set. It shows us the most important features across a batch of examples.

This chart would be more exciting if most of the features in our dataset were not obscured. We might learn, for example, that the type of a transaction (transfer, deposit, etc.) was the biggest indicator of fraud.

In a real-world scenario, these feature importance values could help us improve our model and give us more confidence in its predictions. We might decide to remove the least important features next time we train a model, or to combine two of our more significant features into a feature cross to see if this improves model performance.

We're looking at feature importance across a batch here, but we can also get feature importance for individual predictions in Vertex AI. We'll see how to do that once we've deployed our model.

7. Deploying the model to an endpoint

Now that we have a trained model, the next step is to create an Endpoint in Vertex. A Model resource in Vertex can be deployed to multiple endpoints, and a single endpoint can serve multiple models with traffic split between them.

Step 1: Create an endpoint

On your model page, navigate to the Deploy and test tab and click Deploy to endpoint:

Deploy and test

Give your endpoint a name, like fraud_v1, leave Access set to Standard and click Continue.

Leave traffic splitting and machine type as the default settings, click Done and then Continue.

We won't use model monitoring for this endpoint, so you can leave that unselected and click Deploy. Your endpoint will take a few minutes to deploy. When deployment completes, you'll see a green check mark next to it:

Deployed endpoint

You're getting close! Now you're ready to get predictions on your deployed model.
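As with the earlier steps, the deployment can also be done from the Vertex AI SDK for Python. A sketch, assuming the placeholder values are replaced with your own project ID and the model ID shown on the Models page; AutoML tabular deployments require specifying a machine type:

from google.cloud import aiplatform

aiplatform.init(project="YOUR-PROJECT-ID", location="us-central1")

# Reference the trained AutoML model by its ID from the Models page
model = aiplatform.Model("YOUR-MODEL-ID")

# Deploy to a new endpoint; the model receives 100% of traffic by default
endpoint = model.deploy(
    deployed_model_display_name="fraud_v1",
    machine_type="n1-standard-4",  # an assumed machine type; others work too
)

print(endpoint.resource_name)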

8. Getting predictions on our deployed model

There are a few options for getting model predictions:

  • Vertex AI UI
  • Vertex AI API

We'll show both here.

Step 1: Get model predictions in the UI

On your model page where your endpoint is shown (where we left off in the last step), scroll down to the Test your model section:

Test model

Here, Vertex AI has chosen random values for each of our model's features that we can use to get a test prediction. You are welcome to change these values if you'd like. Scroll down to the bottom of the page and select Predict.

In the Prediction result section of the page, you should see your model's predicted confidence score for each class. A confidence score of 0.99 for class 0, for example, means that your model thinks this example has a 99% chance of being non-fraudulent.

Step 2: Get model predictions with the Vertex AI API

The UI is a great way to make sure your deployed endpoint is working as expected, but chances are you'll want to get predictions dynamically by calling the Vertex AI API. Here we'll use the Python SDK from the Vertex Workbench instance you created at the beginning of this lab.

Open that notebook instance, and open a Python 3 notebook from the Launcher:

Open notebook

In your notebook, run the following in a cell to install the Vertex SDK:

!pip3 install google-cloud-aiplatform --upgrade --user

Then add a cell in your notebook to import the SDK and create a reference to the endpoint you just deployed:

from google.cloud import aiplatform

# Reference the deployed endpoint by its full resource name
endpoint = aiplatform.Endpoint(
    endpoint_name="projects/YOUR-PROJECT-NUMBER/locations/us-central1/endpoints/YOUR-ENDPOINT-ID"
)

You'll need to replace two values in the endpoint_name string above with your project number and endpoint. You can find your project number by navigating to your project dashboard and getting the Project Number value.

You can find your endpoint ID in the endpoints section of the console here:

Find the endpoint ID
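Alternatively, you can list your endpoints with the SDK and read the ID off the resource name. A sketch, assuming aiplatform.init has been called with your project and region:

from google.cloud import aiplatform

aiplatform.init(project="YOUR-PROJECT-ID", location="us-central1")

# Each resource_name ends with the numeric endpoint ID
for e in aiplatform.Endpoint.list():
    print(e.display_name, e.resource_name)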

Finally, make a prediction to your endpoint by copying and running the code below in a new cell:

# Example transaction with a value for each of the model's input features
test_instance = {
    'Time': 80422,
    'Amount': 17.99,
    'V1': -0.24,
    'V2': -0.027,
    'V3': 0.064,
    'V4': -0.16,
    'V5': -0.152,
    'V6': -0.3,
    'V7': -0.03,
    'V8': -0.01,
    'V9': -0.13,
    'V10': -0.18,
    'V11': -0.16,
    'V12': 0.06,
    'V13': -0.11,
    'V14': 2.1,
    'V15': -0.07,
    'V16': -0.033,
    'V17': -0.14,
    'V18': -0.08,
    'V19': -0.062,
    'V20': -0.08,
    'V21': -0.06,
    'V22': -0.088,
    'V23': -0.03,
    'V24': 0.01,
    'V25': -0.04,
    'V26': -0.99,
    'V27': -0.13,
    'V28': 0.003
}

# Send the instance to the deployed endpoint and print the raw response
response = endpoint.predict([test_instance])

print('API response: ', response)

You should see a prediction around 0.67 for the 0 class, which means the model thinks there is a 67% chance this transaction is non-fraudulent.
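It's worth unpacking that response. For an AutoML tabular classification model, each entry in response.predictions should be a dict with parallel classes and scores lists (treat the exact key names as an assumption if your SDK version differs), which you can pair up like this:

# Pair each class label with its predicted confidence score
pred = response.predictions[0]
for label, score in zip(pred['classes'], pred['scores']):
    print(f"Class {label}: {score:.2f}")

And, as promised in the feature importance section, you can request per-prediction feature attributions from the same endpoint with explain, provided explanations are enabled on the deployed model (an assumption that holds for AutoML tabular models deployed as in this lab):

# Request the prediction along with per-feature attributions
explain_response = endpoint.explain(instances=[test_instance])

for explanation in explain_response.explanations:
    for attribution in explanation.attributions:
        # feature_attributions maps each input feature to its contribution
        print(attribution.feature_attributions)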

🎉 Congratulations! 🎉

You've learned how to use Vertex AI to:

  • Upload a managed dataset
  • Train and evaluate a model on tabular data using AutoML
  • Deploy the model to an endpoint
  • Get predictions on a model endpoint using the Vertex AI SDK for Python

To learn more about different parts of Vertex AI, check out the documentation.

9. Cleanup

If you'd like to continue using the notebook you created in this lab, it is recommended that you turn it off when not in use. From the Workbench UI in your Cloud Console, select the notebook and then select Stop.

If you'd like to delete the notebook entirely, click the Delete button in the top right.

To delete the endpoint you deployed, navigate to the Endpoints section of your Vertex AI console and undeploy the model from your endpoint:

Delete endpoint
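If you'd rather clean up from your notebook, the SDK can do the same thing. A sketch, reusing the endpoint reference created in the prediction step:

# Undeploy all models from the endpoint, then delete the endpoint itself
endpoint.undeploy_all()
endpoint.delete()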

To delete the storage bucket, use the Navigation menu in your Cloud Console to browse to Storage, select your bucket, and click Delete:

Delete storage