In this lab, you will use AI Platform Explanations to train and deploy a TensorFlow model for identifying fraudulent transactions. Fraud detection is a type of anomaly detection specific to financial services, and presents some interesting challenges for ML models: inherently imbalanced datasets and a need to explain a model's results.

What you learn

You'll learn how to:

- Account for an imbalanced dataset by downsampling the majority class
- Train and evaluate a TensorFlow boosted trees model for fraud detection
- Deploy the model to Cloud AI Platform with AI Explanations enabled
- Get predictions and feature attributions from the deployed model

The total cost to run this lab on Google Cloud is about $1.

Anomaly detection can be a good candidate for machine learning since it is often hard to write a series of rule-based statements to identify outliers in data. Fraud detection is a type of anomaly detection, and presents two interesting challenges when it comes to machine learning:

- Inherently imbalanced datasets: fraudulent transactions make up only a tiny fraction of all transactions
- A need to explain the model's results: why did it flag a particular transaction as fraudulent?

You'll need a Google Cloud Platform project with billing enabled to run this codelab. To create a project, follow the instructions here.

Step 1: Enable the Cloud AI Platform Models API

Navigate to the AI Platform Models section of your Cloud Console and click Enable if it isn't already enabled.

Step 2: Enable the Compute Engine API

Navigate to Compute Engine and select Enable if it isn't already enabled. You'll need this to create your notebook instance.

Step 3: Create an AI Platform Notebooks instance

Navigate to the AI Platform Notebooks section of your Cloud Console and click New Instance. Then select the latest TensorFlow Enterprise 1.x instance type without GPUs:

Use the default options and then click Create. Once the instance has been created, select Open JupyterLab:

When you open the instance, select Python 3 notebook from the launcher:

Step 4: Import Python packages

Create a new cell and import the libraries we'll be using in this codelab:

import itertools
import numpy as np
import pandas as pd
import tensorflow as tf
import json
import matplotlib.pyplot as plt

from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix

We'll be using this synthetically generated dataset* from Kaggle to train our model. The original dataset includes 6.3 million rows, 8k of which are fraudulent transactions - a mere 0.1% of the whole dataset!

Step 1: Download the Kaggle dataset and read with Pandas

We've made the Kaggle dataset available for you in Google Cloud Storage. You can download it by running the following gsutil command in your Jupyter notebook:

!gsutil cp gs://financial_fraud_detection/fraud_data_kaggle.csv .

Next, let's read the dataset as a Pandas DataFrame and preview it:

data = pd.read_csv('fraud_data_kaggle.csv')
data.head()

You should see something like this in the preview:

Step 2: Account for imbalanced data

As mentioned above, the dataset currently contains 99.9% non-fraudulent examples. If we train a model on the data as-is, chances are it will reach 99.9% accuracy simply by guessing that every transaction is not fraudulent, because 99.9% of the cases are non-fraudulent.
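As a quick check, you can print the class proportions yourself; the share of non-fraudulent rows is also the accuracy a model would get by always predicting "not fraud". A minimal optional snippet using the `data` DataFrame from above:

# Fraction of each class in the raw dataset (isFraud: 0 = not fraud, 1 = fraud)
print(data['isFraud'].value_counts(normalize=True))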

There are a few different approaches for dealing with imbalanced data. Here, we'll be using a technique called downsampling. Downsampling means using only a small percentage of the majority class in training. In this case, "non-fraud" is the majority class since it accounts for 99.9% of the data.

To downsample our dataset, we'll take all ~8k of the fraudulent examples and a random sample of ~32k of the non-fraud cases. This way the resulting dataset will be roughly 20% fraud cases, compared to the 0.1% we had before.

First, split the data into two DataFrames, one for fraud and one for non-fraud (we'll make use of this later in the codelab when we deploy our model):

fraud = data[data['isFraud'] == 1]
not_fraud = data[data['isFraud'] == 0]

Then, take a random sample of the non-fraud cases. We're sampling with frac=.005, i.e. 0.5% of the non-fraud rows, which gives us roughly a 20/80 split of fraud to non-fraud transactions. With that, you can put the data back together and shuffle. To simplify things we'll also remove a few columns that we won't be using for training:

# Take a random sample of non fraud rows
not_fraud_sample = not_fraud.sample(random_state=2, frac=.005)

# Put it back together and shuffle
df = pd.concat([not_fraud_sample,fraud])
df = shuffle(df, random_state=2)

# Remove a few columns (isFraud is the label column we'll use, not isFlaggedFraud)
df = df.drop(columns=['nameOrig', 'nameDest', 'isFlaggedFraud'])

# Preview the updated dataset
df.head()

Now we've got a much more balanced dataset. However, if we notice our model converging around ~80% accuracy, there's a good chance it's simply guessing "non-fraud" in every case.
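To confirm the new class balance, you can check the proportions in the downsampled DataFrame (a quick optional check; expect something close to 80/20):

df['isFraud'].value_counts(normalize=True)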

Step 3: Split the data into train and test sets

The last thing to do before building our model is splitting our data. We'll use an 80/20 train-test split:

train_test_split = int(len(df) * .8)

train_set = df[:train_test_split]
test_set = df[train_test_split:]

train_labels = train_set.pop('isFraud')
test_labels = test_set.pop('isFraud')

*E. A. Lopez-Rojas, A. Elmir, and S. Axelsson. "PaySim: A financial mobile money simulator for fraud detection." In: The 28th European Modeling and Simulation Symposium (EMSS), Larnaca, Cyprus, 2016.

Tree-based models have been shown to be effective for anomaly detection, and that's what we'll use here. We'll build our model using TensorFlow's Boosted Trees classifier. Many of the steps in this section are adapted from this great tutorial in the TensorFlow docs.

Step 1: Define feature columns

We'll be feeding data into our model using TensorFlow's feature column API. If you're new to TensorFlow, feature columns essentially answer these questions for your model:

- What type of data does each feature contain (numeric, categorical, etc.)?
- How should each feature be represented before it's passed to the model (for example, one-hot encoded)?

As you saw in our dataset preview above, we'll be using 7 features (pieces of data about a transaction) to train this model. 6 of them are numerical and one is categorical. The categorical feature is a string indicating the type of a transaction ("cash out", "transfer", etc.).

First, let's create lists with the name of each feature column:

fc = tf.feature_column
CATEGORICAL_COLUMNS = ['type']
NUMERIC_COLUMNS = ['step', 'amount', 'oldbalanceOrg', 'newbalanceOrig', 'oldbalanceDest', 'newbalanceDest']

Next, we'll use the following code to build our list of feature columns:

def one_hot_cat_column(feature_name, vocab):
    return tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocab))

feature_columns = []

# One-hot encode the categorical 'type' column
for feature_name in CATEGORICAL_COLUMNS:
    vocabulary = train_set[feature_name].unique()
    feature_columns.append(one_hot_cat_column(feature_name, vocabulary))

# Pass the numeric columns through as float32 values
for feature_name in NUMERIC_COLUMNS:
    feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype=tf.float32))

For this model we're one-hot encoding our categorical feature (transaction type). If you print out feature_columns you'll see that we've now got a reference for each input to our model with its corresponding name and data type.
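For a quicker look, you can print just the column names (an optional check; note that the one-hot encoded type column shows up as type_indicator, a name we'll see again when we export the model):

for col in feature_columns:
    print(col.name)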

Step 2: Define input functions

Where feature columns tell our model what data it can expect, input functions tell TensorFlow how that data should be passed to our model.

You can define input functions for training and evaluation with the following code:

NUM_EXAMPLES = len(train_labels)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
  def input_fn():
    dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
    if shuffle:
      dataset = dataset.shuffle(NUM_EXAMPLES)
    dataset = dataset.repeat(n_epochs)
    dataset = dataset.batch(NUM_EXAMPLES)
    return dataset
  return input_fn

# Define training and evaluation input functions
train_input_fn = make_input_fn(train_set, train_labels)
eval_input_fn = make_input_fn(test_set, test_labels, shuffle=False, n_epochs=1)

The from_tensor_slices method from the tf.data API lets us read data directly from a Pandas DataFrame. For training, it's important to shuffle your data, which we can do with the shuffle() method.

Our model output will be a single value ranging from 0 to 1, with 0 indicating a confident prediction of "not fraud" and 1 indicating a confident prediction of "fraud". An output of .75, for example, would mean that our model is 75% confident that a particular transaction was fraudulent.

Step 3: Train and evaluate the model

We can define our boosted tree model with one line of code. Since our data fits into memory, we're using one batch to keep things simple.

n_batches = 1
model = tf.estimator.BoostedTreesClassifier(feature_columns,
                                          n_batches_per_layer=n_batches)

With our input function and model defined, we're ready for training:

model.train(train_input_fn, max_steps=100)

Once training completes, let's evaluate our model on the test set to get some metrics on how it's performing:

result = model.evaluate(eval_input_fn)
print(pd.Series(result))

You should see accuracy and auc around 99%. Seems like our model is learning to identify fraud! But before we get too excited, let's do a quick test by sending it some test data and verifying that it labels the fraudulent transactions correctly:

pred_dicts = list(model.predict(eval_input_fn))
probabilities = pd.Series([pred['logistic'][0] for pred in pred_dicts])

for i,val in enumerate(probabilities[:30]):
  print('Predicted: ', round(val), 'Actual: ', test_labels.iloc[i])
  print()

Each row of output compares our model's prediction with the actual value for a particular transaction from our test set. The majority of examples here should be correct for your model.

Step 4: Create a confusion matrix

A confusion matrix is a nice way to visualize how our model performed across the test dataset. For each class, it will show us the percentage of test examples that our model predicted correctly and incorrectly. Scikit Learn has some utilities for creating and plotting confusion matrices, which we'll use here.

At the beginning of our notebook we imported the `confusion_matrix` utility. To use it, we'll first create a list of our model's predictions. Here we'll round the values returned from our model so that this list matches our list of ground truth labels:

y_pred = []

for i in probabilities.values:
  y_pred.append(int(round(i)))

Now we're ready to feed this into the confusion_matrix method, along with our ground truth labels:

cm = confusion_matrix(test_labels.values, y_pred)
print(cm)

This shows us the absolute numbers of our model's correct and incorrect predictions on our test set. The number on the top left shows how many examples from our test set our model correctly predicted as non-fraudulent. The number on the bottom right shows how many it correctly predicted as fraudulent (we care most about this number). You can see that it predicted the majority of samples correctly for each class.
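If you'd like to pull these counts out programmatically, the 2x2 matrix that scikit-learn returns can be unpacked as follows (a small optional snippet; rows are true labels, columns are predicted labels):

# cm layout: [[true negatives, false positives],
#             [false negatives, true positives]]
tn, fp, fn, tp = cm.ravel()
print('Correctly predicted non-fraud (TN):', tn)
print('Correctly predicted fraud (TP):', tp)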

To make this easier to visualize, we've adapted the plot_confusion_matrix function from the Scikit Learn docs. Define that function here:

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    if normalize:
        cm = np.round(cm.astype('float') / cm.sum(axis=1)[:, np.newaxis], 3)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

And create the plot by passing it the data from our model. We're setting normalize to True here so that the confusion matrix displays the correct and incorrect predictions as a fraction of each class rather than as absolute counts:

classes = ['not fraud', 'fraud']
plot_confusion_matrix(cm, classes, normalize=True)

You should see something like this (exact numbers will vary):

Here we can see that our model predicted around 99% of the 1,594 fraudulent transactions from our test set correctly.
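You can confirm that figure from the unnormalized matrix by computing recall for the fraud class, i.e. the fraction of actual fraud examples the model caught (an optional check):

# Recall for the fraud class: true positives / (true positives + false negatives)
fraud_recall = cm[1, 1] / cm[1].sum()
print('Fraud recall:', round(fraud_recall, 3))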

In order to deploy our model to Cloud AI Platform and make use of Explainable AI, we need to export it as a TensorFlow 1 SavedModel and save it in a Cloud Storage bucket.

Step 1: Create a Cloud Storage bucket for the model

Let's first define a few variables that we'll use throughout the rest of the codelab. Fill in the values below with the name of your Google Cloud project and the name of the Cloud Storage bucket you'd like to create (bucket names must be globally unique); we'll define the model and version names in a later step:

# Update these to your own GCP project and storage bucket names
GCP_PROJECT = 'your-gcp-project'
MODEL_BUCKET = 'gs://storage_bucket_name'

Now we're ready to create a storage bucket to store our exported TensorFlow model assets. We'll point AI Platform at this bucket when we deploy the model.

Run this gsutil command from within your notebook to create a bucket:

!gsutil mb $MODEL_BUCKET

Step 2: Export our TensorFlow model

To export our model in the SavedModel format, we need to first define a serving input function. If that sounds confusing, the serving input function answers these questions for our deployed model:

- What format of data will the deployed model accept from clients at prediction time?
- How should that data be transformed before it's passed to the model?

The two questions above explain why the ServingInputReceiver in the function below takes two arguments. In our case, we're not doing any server-side transformations of the data sent from the client, so both arguments are the same dictionary:

def json_serving_input_fn():
  inputs = {}
  for feat in feature_columns:
    if feat.name == "type_indicator":
      # The one-hot encoded 'type' feature is sent as a raw string at serving time
      inputs['type'] = tf.placeholder(shape=[None], name=feat.name, dtype=tf.string)
    else:
      inputs[feat.name] = tf.placeholder(shape=[None], name=feat.name, dtype=feat.dtype)
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

We can now export our Estimator directly to our Cloud Storage bucket:

export_path = model.export_saved_model(
    MODEL_BUCKET + '/explanations',
    serving_input_receiver_fn=json_serving_input_fn
).decode('utf-8')

Visit the Storage page in your console to confirm the model files were uploaded correctly. They should be in a subdirectory named with a timestamp that looks something like this:
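Alternatively, you can verify from within the notebook by listing the exported files with gsutil (export_path already points at the timestamped directory):

!gsutil ls $export_path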

Before we deploy our model, there are a few things we need to do to configure it for explainability on AI Platform.

Step 3: Inspect our model's TensorFlow graph

Let's make use of TensorFlow's handy saved_model_cli to inspect the input and output tensors in our model. We'll need this info to tell AI Explanations which tensors from our model we want to explain. Run the following command from your notebook:

!saved_model_cli show --dir $export_path --all

Here's a clipped example of what you should see in the output:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['amount'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: amount:0
    ...
    inputs['type'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: type_indicator:0
  The given SavedModel SignatureDef contains the following output(s):
    ...
    outputs['logits'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: boosted_trees/BoostedTreesPredict:0
    outputs['probabilities'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: boosted_trees/head/predictions/probabilities:0
  Method name is: tensorflow/serving/predict

There's a lot there. What are we looking at? The list of inputs is everything your served model expects from the client. For example, when you send JSON to the model it's expecting a key called amount with a float value, another key called type with a string value, and so on. The model provides a few different types of outputs. The one we'll use here is the logistic output tensor (elided from the clipped output above; its full name in the graph is boosted_trees/head/predictions/logistic, which you'll see referenced in the metadata file below). This is a one-element array our model returns, with a prediction value ranging from 0 to 1 - 0 being a "not fraud" prediction and 1 being "fraud".

When we generate predictions on our deployed model in a few steps, we'll want to explain predictions on this logistic tensor. What does that mean? Let's say we pass some transaction data to our model and the logistic tensor returns .8. That means the model thinks there's an 80% chance this transaction is fraudulent. We want the explanations service to tell us how much each of our model's inputs (amount, type, oldbalanceDest, etc.) contributed to that prediction of .8.

We tell AI Explanations which input and output tensors to explain by providing an explanation_metadata.json file in the same Cloud Storage bucket as our saved model. We're almost ready to create that. First, we need to choose our model's baseline inputs.

Step 4: Choose a baseline for explainability

Explainability helps us answer the question: "Why did our model think this was fraud?"

For tabular data, Cloud's Explainable AI service works by returning attribution values for each feature. These values indicate how much a particular feature affected the prediction. Let's say the amount of a particular transaction caused our model to increase its predicted fraud probability by 0.2%. You might be thinking "0.2% relative to what??". That brings us to the concept of a baseline.

The baseline for our model is essentially what it's comparing against. We choose a baseline value for each feature in our model, and the baseline prediction is then simply whatever the model predicts when every feature is set to its baseline value.

Choosing a baseline depends on the prediction task you're solving. For numerical features, it's common to use the median value of each feature in your dataset as the baseline. In the case of fraud detection, however, this isn't exactly what we want. We care most about explaining the cases when our model labels a transaction as fraudulent. That means the baseline case we want to compare against is non-fraudulent transactions.

To account for this, we'll use the median values of the non-fraudulent transactions in our dataset as the baseline. We can easily get the median in Pandas by running the following on the not_fraud dataset we created earlier:

not_fraud.median()

This tells us the median for each of the numerical features in our dataset. For the `type` string column, we'll use the most frequently occurring value from the not_fraud dataset. We can get that by running:

not_fraud['type'].value_counts()

In this case it's CASH_OUT.
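If you'd like to see all of the baseline values in one place before writing the metadata file, a small optional snippet like this collects them into a dictionary:

# Medians for the numeric features, most frequent value for the categorical 'type' feature
baselines = {col: float(not_fraud[col].median()) for col in NUMERIC_COLUMNS}
baselines['type'] = not_fraud['type'].value_counts().idxmax()
print(baselines)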

Step 5: Create the explanation_metadata.json file

With an understanding of our model's input and output tensors, along with the baselines we plan to use, we're ready to put all of this info together in an `explanation_metadata.json` file. Define the JSON for this file in your notebook:

explain_metadata = {
    "inputs": {
      "amount": {
        "input_tensor_name": "amount:0",
        "input_baselines": [not_fraud['amount'].median()]
      },
      "newbalanceDest": {
        "input_tensor_name": "newbalanceDest:0",
        "input_baselines": [not_fraud['newbalanceDest'].median()]
      },
      "newbalanceOrig": {
        "input_tensor_name": "newbalanceOrig:0",
        "input_baselines": [not_fraud['newbalanceOrig'].median()]
      },
      "oldbalanceDest": {
        "input_tensor_name": "oldbalanceDest:0",
        "input_baselines": [not_fraud['oldbalanceDest'].median()]
      },
      "oldbalanceOrg": {
        "input_tensor_name": "oldbalanceOrg:0",
        "input_baselines": [not_fraud['oldbalanceOrg'].median()]
      },
      "step": {
        "input_tensor_name": "step:0",
        "input_baselines": [not_fraud['step'].median()]
      },
      "type": {
        "input_tensor_name": "type_indicator:0",
        "input_baselines": ["CASH_OUT"]
      }
    },
    "outputs": {
      "prob": {
        "output_tensor_name": "boosted_trees/head/predictions/logistic:0"
      }
    },
  "framework": "tensorflow"
  }

Then write it out to a local file:

with open('explanation_metadata.json', 'w') as output_file:
  json.dump(explain_metadata, output_file)

Finally, copy that file to the same Cloud Storage bucket as your SavedModel assets:

!gsutil cp explanation_metadata.json $export_path

Step 1: Create the model

Let's start by defining some variables we'll use in our deployment commands:

MODEL = 'fraud_detection'
VERSION = 'v1'

We can create the model with the following gcloud command:

!gcloud ai-platform models create $MODEL

Step 2: Deploy the model

Now we're ready to deploy our first version of this model with gcloud. The version will take ~5-10 minutes to deploy:

!gcloud beta ai-platform versions create $VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method 'sampled-shapley' \
--num-paths 10

In the origin flag, we pass in the Cloud Storage location of our saved model and metadata file. AI Explanations has two different explanation methods available. Here we're using Sampled Shapley, since it works with non-differentiable models like this tree ensemble (the alternative, integrated gradients, requires a differentiable model). The num-paths parameter indicates the number of paths sampled for each input feature. Generally, the more complex the model, the more approximation steps are needed to reach reasonable convergence.

To confirm your model deployed correctly, run the following gcloud command:

!gcloud ai-platform versions describe $VERSION --model $MODEL

The state should be READY.

Step 1: Prepare test inputs for prediction

For the purposes of explainability, we care most about explaining the cases where our model predicts fraud. We'll send 5 test examples to our model that are all fraudulent transactions.

We'll use gcloud to get predictions, sending it a newline-delimited file with our test examples as JSON. Run the following code to get the indices of all of the fraud examples from our test set:

fraud_indices = []

for i,val in enumerate(test_labels):
    if val == 1:
        fraud_indices.append(i)

Next we'll write 5 examples to a data.txt file in the format our model is expecting:

num_test_examples = 5

# json.dump can't serialize NumPy scalar types directly, so convert
# them to native Python types first
def convert(o):
    if isinstance(o, np.generic): return o.item()
    raise TypeError

for i in range(num_test_examples):
    test_json = {}
    ex = test_set.iloc[fraud_indices[i]]
    keys = ex.keys().tolist()
    vals = ex.values.tolist()
    for idx in range(len(keys)):
        test_json[keys[idx]] = vals[idx]

    print(test_json)
    with open('data.txt', 'a') as outfile:
        json.dump(test_json, outfile, default=convert)
        outfile.write('\n')

To see what this file looks like, run:

!cat data.txt
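Each line of data.txt is a single JSON instance whose keys match the model's inputs. For illustration only, a line will look something like this (these particular values are made up):

{"step": 1, "type": "TRANSFER", "amount": 181.0, "oldbalanceOrg": 181.0, "newbalanceOrig": 0.0, "oldbalanceDest": 0.0, "newbalanceDest": 0.0}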

Step 2: Send test instances to our model

We can send these five examples to our model, and save the API response to a variable:

explanations = !gcloud beta ai-platform explain --model $MODEL --version $VERSION --json-instances='data.txt'
explain_dict = json.loads(explanations.s)

Remember that all of the test examples we're sending are fraudulent transactions, so our model should predict a value close to 1 for each of them.

Step 3: Analyze the explanation response

With the following code, we'll print our model's baseline prediction. This is what our model predicts when it's passed the baseline values we chose above (the median values across our non-fraudulent transactions). Because the baseline depends only on those values, not on the example being explained, it's the same for every input we send - that's why we only print it once, from the first example's response:

print('Model baseline for fraud cases: ', explain_dict['explanations'][0]['attributions_by_label'][0]['baseline_score'], '\n')

The baseline is around .019 (yours may vary slightly) or a 1.9% chance of fraud. This means every attribution value we look at below will be relative to this value.

Next, we'll loop through the explanation response for each of the 5 test examples. For each one we'll print the model's prediction score (a float ranging from 0 to 1) and plot a chart showing the features that contributed most to that prediction:

for i in explain_dict['explanations']:
    prediction_score = i['attributions_by_label'][0]['example_score']
    attributions = i['attributions_by_label'][0]['attributions']
    print('Model prediction:', prediction_score)
    fig, ax = plt.subplots()
    ax.barh(list(attributions.keys()), list(attributions.values()), align='center')
    plt.show()

You should see something like this:

In the first example, the account's initial balance before the transaction took place was the biggest indicator of fraud, pushing our model's prediction up from the baseline by more than 0.8.

In the second example, the amount of the transaction was the biggest indicator, followed by the step (in this dataset, a "step" represents a unit of time; 1 step is 1 hour). Attribution values can also be negative: our model was not quite as confident in this case (predicting an 89% chance of fraud) due to a negative attribution from the original balance feature.
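If you prefer numbers to charts, you can also print each example's attributions sorted by magnitude - a minimal optional variation on the loop above:

for i in explain_dict['explanations']:
    attrs = i['attributions_by_label'][0]['attributions']
    print('Model prediction:', i['attributions_by_label'][0]['example_score'])
    # Sort features by the absolute size of their attribution
    for feature, value in sorted(attrs.items(), key=lambda kv: abs(kv[1]), reverse=True):
        print('  {}: {:.4f}'.format(feature, value))
    print()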

There's a lot more you can do with Explainable AI on this model. Some ideas include:

- Looking at the attributions for transactions the model labels as non-fraudulent
- Experimenting with different baseline values and observing how the attributions change
- Increasing the num-paths parameter to see whether the attributions become more stable

If you'd like to continue using this notebook, it is recommended that you turn it off when not in use. From the Notebooks UI in your Cloud Console, select the notebook and then select Stop:

If you'd like to delete all resources you've created in this lab, simply delete the notebook instance instead of stopping it.

Using the Navigation menu in your Cloud Console, browse to Storage and delete the bucket you created to store your model assets.