Android Vendor Test Suite (VTS) consists of three products: VTS itself, the VTS-* extensions, and the Vendor Test Infrastructure (VTI).


VTS itself is the compliance test suite for the Android Vendor Interface (VINTF).

VINTF is a versioned, stable interface for Android vendor implementations. This concept was introduced in Android 8.0 (O) to improve the engineering productivity, launch velocity, security, and reliability of the Android device ecosystem.

VTS and VTS-* provide a set of test cases designed to directly test the components under VINTF, such as HALs (hardware abstraction layers) and the kernel.

VTS-* comprises optional non-functional tests and test-case development tools, both aimed at quality assurance.

The non-functional tests include performance tests (e.g., vts-performance) and fuzz tests (e.g., vts-fuzz). The test development tools include a HAL API call trace recording tool and a native code coverage measurement tool.


Vendor Test Infrastructure (VTI) is a set of cloud-based infrastructure services for Android device partners and the Open Source Software (OSS) ecosystem.

It allows partners to easily create a cloud-based continuous integration service for VTS tests.


Establishing a test environment

Recommended system environment:

To set up a testing environment:

  1. Install the Python development kit:
$ sudo apt-get install python-dev
  2. Install Protocol Buffer tools (for Python):
$ sudo apt-get install python-protobuf
$ sudo apt-get install protobuf-compiler
  3. Install Python virtual environment-related tools:
$ sudo apt-get install python-virtualenv
$ sudo apt-get install python-pip
  4. Connect the device to the host:
$ adb devices
$ adb shell
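Step 4 can be sanity-checked programmatically. The sketch below parses `adb devices` output and keeps only devices in the usable "device" state; the serial numbers in the sample are made-up examples:

```python
def list_device_serials(adb_devices_output):
    """Parse `adb devices` output into a list of usable device serials."""
    serials = []
    for line in adb_devices_output.strip().splitlines()[1:]:  # skip the header
        parts = line.split()
        # Only devices in the "device" state are usable; "unauthorized" or
        # "offline" entries need attention before running VTS.
        if len(parts) == 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials

sample = "List of devices attached\nHT6670300001\tdevice\nemulator-5554\toffline\n"
print(list_device_serials(sample))  # only the usable device remains
```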

Testing a patch

To test a patch:

  1. Build a VTS host-side package:
$ . build/
$ lunch aosp_arm64-userdebug
$ make vts -j
  2. Run the default VTS tests:
$ vts-tradefed
> run vts     // where vts is the test plan name

VTS plans

Available VTS test plans include:

> run vts
  For default VTS tests

> run vts-hal
  For default VTS HAL (hardware abstraction layer) tests

> run vts-kernel
  For default VTS kernel tests

To view a list of all plans, refer to /test/vts/tools/vts-tradefed/res/

VTS TradeFed Console Options

Available VTS TradeFed console options include:

> run vts -m <test module>
  Runs one specific test module.

> run vts -l INFO
  Prints detailed console logs.

> list invocations (or "l i" for short)
  Lists all invocation threads.

> run vts --primary-abi-only
  Runs a test plan on the primary ABI (e.g., ARM64) only.

> run vts --skip-all-system-status-check --skip-preconditions --primary-abi-only
  Shortens test execution time.

> run vts -s <device serial>
  Selects a device to use when multiple devices are connected.

> help
  Prints a help page that lists other console options.

For Windows Host

While building VTS on Windows is not supported, it is possible to run VTS on a Windows host machine with Python, Java, and ADB installed.

  1. Download links:
    Python 2.7
    ADB 1.0.39
    Install the required Python packages by using pip.
  2. Build VTS on Linux:
$ . build/
$ lunch aosp_arm64-userdebug
$ make vts -j
  3. Copy out/host/linux-x86/vts/ to your Windows host and extract it.
  4. Add adb.exe to PATH and run vts-tradefed_win.bat:
$ vts-tradefed_win.bat
> run vts     // where vts is the test plan name

All VTS, VTS-*, and VTI code is kept in the Android Open Source Project (AOSP). Download the AOSP source code by following the 'Downloading the Source' manual.

Write a Host-Side Python Test

We will extend the provided VTS HelloWorld codelab test. Before extending it, let's build and run it as-is.

$ make vts -j
$ vts-tradefed
> run vts -m VtsCodelabHelloWorldTest

If your VTS TradeFed console prints a result like the following (e.g., PASSED: 4), you can run VtsCodelabHelloWorldTest successfully on your device and are ready for this part of the codelab.

E/BuildInfo: Device build already contains a file for VIRTUALENVPATH in thread Invocation-<ID>
E/BuildInfo: Device build already contains a file for PYTHONPATH in thread Invocation-<ID>
I/VtsMultiDeviceTest: Setting test name as VtsCodelabHelloWorldTest
I/ConsoleReporter: [<ID>] Starting armeabi-v7a VtsCodelabHelloWorldTest with 2 tests
I/ConsoleReporter: [1/2 armeabi-v7a VtsCodelabHelloWorldTest <ID>] VtsCodelabHelloWorldTest#testEcho1 pass
I/ConsoleReporter: [2/2 armeabi-v7a VtsCodelabHelloWorldTest <ID>] VtsCodelabHelloWorldTest#testEcho2 pass
I/ConsoleReporter: [<ID>] armeabi-v7a VtsCodelabHelloWorldTest completed in 2s. 2 passed, 0 failed, 0 not executed
W/CompatibilityTest: Inaccurate runtime hint for armeabi-v7a VtsCodelabHelloWorldTest, expected 1m 0s was 19s
I/ResultReporter: Test Result: <omitted>/out/host/linux-x86/vts/android-vts/results/2017.04.21_11.27.07/test_result_failures.html
I/ResultReporter: Test Logs: <omitted>/out/host/linux-x86/vts/android-vts/logs/2017.04.21_11.27.07
I/ResultReporter: Invocation finished in 43s. PASSED: 4, FAILED: 0, MODULES: 2 of 2

It also shows where the test logs are kept (out/host/linux-x86/vts/android-vts/logs/2017.04.21_11.27.07) and where the XML report is stored (out/host/linux-x86/vts/android-vts/results/2017.04.21_11.27.07).

The VtsCodelabHelloWorldTest code is stored in <your AOSP repo's local home dir>/test/vts/testcases/codelab/hello_world/. That directory has the following four files: Android.mk, AndroidTest.xml, VtsCodelabHelloWorldTest.py, and __init__.py.

Let's look into each of the first three files.

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := VtsCodelabHelloWorldTest
VTS_CONFIG_SRC_DIR := testcases/codelab/hello_world
include test/vts/tools/build/

It specifies that the test module's build name is VtsCodelabHelloWorldTest and that its source code is kept in the testcases/codelab/hello_world directory. The last line includes the predefined VTS build rule.


<configuration description="Config for VTS CodeLab HelloWorld test case">
    <target_preparer class="com.android.compatibility.common.tradefed.targetprep.VtsFilePusher">
        <option name="push-group" value="HostDrivenTest.push" />
    </target_preparer>
    <target_preparer class="com.android.tradefed.targetprep.VtsPythonVirtualenvPreparer" />
    <test class="com.android.tradefed.testtype.VtsMultiDeviceTest">
        <option name="test-module-name" value="VtsCodelabHelloWorldTest"/>
        <option name="test-case-path" value="vts/testcases/codelab/hello_world/VtsCodelabHelloWorldTest" />
    </test>
</configuration>

This XML file tells VTS TradeFed how to prepare and run the VtsCodelabHelloWorldTest test. It uses two VTS TradeFed target preparers: VtsFilePusher and VtsPythonVirtualenvPreparer.

VtsFilePusher pushes all the files needed for a host-driven test. The actual list is defined in the HostDrivenTest.push file, which includes the VtsDriverHal.push and VtsDriverShell.push files; those files may in turn include other push files defined in the same directory. VtsPythonVirtualenvPreparer creates a Python virtual environment using Python 2.7 and installs all the default Python packages into it.

The actual test execution is specified by the VtsMultiDeviceTest class. The test-module-name option specifies the test module name, which is what we use in > run vts -m <Test Module Name> from a VTS TradeFed console, and the test-case-path option specifies the path of the test source file (without the .py extension). Note that the test module name can be no more than 43 characters long.

import logging

from vts.runners.host import asserts
from vts.runners.host import base_test
from vts.runners.host import const
from vts.runners.host import test_runner
from vts.utils.python.controllers import android_device


class VtsCodelabHelloWorldTest(base_test.BaseTestClass):
    """Two hello world test cases which use the shell driver."""

    def setUpClass(self):
        self.dut = self.registerController(android_device)[0]

    def testEcho1(self):
        """A simple testcase which sends a command."""
        self.dut.shell.InvokeTerminal("my_shell1")  # creates a remote shell instance.
        results = self.dut.shell.my_shell1.Execute("echo hello_world")  # runs a shell command.
        logging.info(str(results[const.STDOUT]))  # prints the stdout
        asserts.assertEqual(results[const.STDOUT][0].strip(), "hello_world")  # checks the stdout
        asserts.assertEqual(results[const.EXIT_CODE][0], 0)  # checks the exit code

    def testEcho2(self):
        """A simple testcase which sends two commands."""
        self.dut.shell.InvokeTerminal("my_shell2")
        my_shell = getattr(self.dut.shell, "my_shell2")
        results = my_shell.Execute(["echo hello", "echo world"])
        logging.info(str(results[const.STDOUT]))
        asserts.assertEqual(len(results[const.STDOUT]), 2)  # check the number of processed commands
        asserts.assertEqual(results[const.STDOUT][0].strip(), "hello")
        asserts.assertEqual(results[const.STDOUT][1].strip(), "world")
        asserts.assertEqual(results[const.EXIT_CODE][0], 0)
        asserts.assertEqual(results[const.EXIT_CODE][1], 0)


if __name__ == "__main__":
    test_runner.main()

This file contains the actual test source code. It has the test class VtsCodelabHelloWorldTest, which inherits from the BaseTestClass class. The class can override four default methods: setUpClass, called once at the beginning for setup; setUp, called before each test case; tearDown, called after each test case; and tearDownClass, called once at the end for cleanup. In this case, only setUpClass is overridden, and it simply gets a DUT (Device Under Test) instance.
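The call order of these four methods can be illustrated with a plain-Python sketch. FakeBaseTestClass and its runTests runner are hypothetical stand-ins for the real VTS classes, kept only to show the sequencing:

```python
# A minimal stand-in for BaseTestClass, only to illustrate the call order
# of setUpClass / setUp / test case / tearDown / tearDownClass.

calls = []

class FakeBaseTestClass(object):
    def setUpClass(self): pass
    def setUp(self): pass
    def tearDown(self): pass
    def tearDownClass(self): pass

    def runTests(self):
        # Simplified runner: the real VTS runner also handles logging,
        # result collection, and error handling.
        self.setUpClass()
        for name in sorted(dir(self)):
            if name.startswith("test"):   # test cases are "test"-prefixed methods
                self.setUp()
                getattr(self, name)()
                self.tearDown()
        self.tearDownClass()

class SampleTest(FakeBaseTestClass):
    def setUpClass(self): calls.append("setUpClass")
    def setUp(self): calls.append("setUp")
    def tearDown(self): calls.append("tearDown")
    def tearDownClass(self): calls.append("tearDownClass")
    def testA(self): calls.append("testA")
    def testB(self): calls.append("testB")

SampleTest().runTests()
print(calls)
```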

A test case is a method whose name has the prefix test (e.g., testEcho1 and testEcho2). The testEcho1 test case creates a remote shell instance (first line), sends the 'echo hello_world' shell command to the target device (second line), and verifies the results: the fourth line checks the stdout of the echo command, and the fifth line checks its exit code. The testEcho2 test case shows how to send multiple shell commands with one Python function call.

To extend this test module, let's add the following method to VtsCodelabHelloWorldTest class.

    def testListFiles(self):
        """A simple testcase which lists files."""
        self.dut.shell.InvokeTerminal("my_shell3")
        results = self.dut.shell.my_shell3.Execute("ls /data/local/tmp")
        logging.info(str(results[const.STDOUT]))
        asserts.assertEqual(results[const.EXIT_CODE][0], 0)

Then, run the following commands to test it:

$ make vts
$ vts-tradefed
> run vts -m VtsCodelabHelloWorldTest

You can check the test logs to see whether all files are correctly listed. That result can be validated by using the adb shell ls /data/local/tmp command.

Because it only used host-side Python code, we call this a host-side Python test.

Write a Target-Side C/C++ Binary Test

This part explains how to package a target-side binary or shell script as a VTS test by using the BinaryTest template. Let's assume your binary test module name is `vts_sample_binary_test`, which exits with 0 if the test passes. You can wrap the test easily with the VTS BinaryTest template by specifying the test module path and type in AndroidTest.xml:

<test class="com.android.tradefed.testtype.VtsMultiDeviceTest">
    <option name="test-module-name" value="VtsSampleBinaryTest" />
    <option name="binary-test-source" value="DATA/nativetest/vts_sample_binary_test" />
</test>

The `binary-test-source` option specifies where the binary is packaged in VTS; the BinaryTest template pushes the test binary to a default location on the device and deletes it after the test finishes.

You can also specify a test tag, which is often used to distinguish 32-bit tests from 64-bit tests.

<test class="com.android.tradefed.testtype.VtsMultiDeviceTest">
    <option name="test-module-name" value="VtsSampleBinaryTest" />
    <option name="binary-test-source" value="_32bit::DATA/nativetest/vts_sample_binary_test" />
    <option name="binary-test-source" value="_64bit::DATA/nativetest64/vts_sample_binary_test" />
</test>

An example test is available at $ANDROID_BUILD_TOP/test/vts/testcases/codelab/target_binary/.

Using a VTS template, you can quickly develop a VTS test for a specific objective. This part of codelab explains a few commonly used templates.

Wrap a target side GTest binary with GtestBinaryTest template

If your test binary is a GTest (Google Test), you may still use the BinaryTest template, which will treat the test module as a single test case in result reporting. You can specify the `gtest` binary test type so that individual test cases will be correctly parsed.

<test class="com.android.tradefed.testtype.VtsMultiDeviceTest">
    <option name="test-module-name" value="VtsSampleBinaryTest" />
    <option name="binary-test-source" value="_32bit::DATA/nativetest/vts_sample_binary_test" />
    <option name="binary-test-source" value="_64bit::DATA/nativetest64/vts_sample_binary_test" />
    <option name="binary-test-type" value="gtest" />
</test>

The GtestBinaryTest template first lists all the available test cases and then runs them one by one through shell commands with the --gtest_filter flag. This means each test case is executed in its own Linux process, so global static variables should not be shared across test cases.
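The list-then-filter flow can be sketched as follows. The parser and the NfcHidlTest listing are illustrative, and the real template additionally handles ABI bitness, timeouts, and result parsing:

```python
def parse_gtest_list(list_output):
    """Parse `--gtest_list_tests` output into fully qualified test names."""
    tests = []
    suite = None
    for line in list_output.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" "):      # suite line, e.g. "NfcHidlTest."
            suite = line.strip()
        else:                             # indented test case line
            tests.append(suite + line.strip())
    return tests

def filter_commands(binary, tests):
    """One shell command per test case, as the template runs them."""
    return ["%s --gtest_filter=%s" % (binary, t) for t in tests]

listing = """NfcHidlTest.
  OpenAndClose
  WriteData
"""
cmds = filter_commands("/data/local/tmp/vts_sample_binary_test",
                       parse_gtest_list(listing))
print(cmds)
```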

Wrap a target side HIDL HAL test binary with HalHidlGtest template

From Android 8.0 (O), the Hardware Interface Definition Language (HIDL) is used to specify HAL interfaces. With VTS, HIDL HAL testing can be done effectively because the VTS framework handles the non-conventional test steps transparently and provides various utilities that a HIDL HAL test case can use.

A target-side HIDL HAL test often needs setup steps such as disabling the Java framework, setting the SELinux mode, toggling between passthrough and binder modes, and checking the HAL service status.

Let's assume your test AndroidTest.xml looks like:

<test class="com.android.tradefed.testtype.VtsMultiDeviceTest">
    <option name="test-module-name" value="VtsHalMyHidlTargetTest"/>
    <option name="binary-test-source" value="..." />
</test>

The following option is needed to use the HIDL HAL gtest template.

<option name="binary-test-type" value="hal_hidl_gtest" />

You can now use one of the following four preconditions to describe when your HIDL HAL test should be run.

1. Option `precondition-hwbinder-service` is to specify a hardware binder service needed to run the test.

<option name="precondition-hwbinder-service" value="" />

2. Option `precondition-feature` is to specify the name of a `pm`-listable feature needed to run the test.

 <option name="precondition-feature" value="" />

3. Option `precondition-file-path-prefix` is to specify the path prefix of a file (e.g., shared library) needed to run the test.

 <option name="precondition-file-path-prefix" value="/*/lib*/hw/libmy." />

4. Option `precondition-lshal` is to specify the name of a `lshal`-listable feature needed to run the test.

 <option name="precondition-lshal" value="" />

Other options:

The option `skip-if-thermal-throttling` can be set to `true` if you want to skip a test when your target device suffers from thermal throttling:

 <option name="skip-if-thermal-throttling" value="true" />

Use a target side test runner for HIDL HAL

A target-side test runner is currently available for GTest and HIDL HAL tests.

A HIDL GTest extending from VtsHalHidlTargetTestBase allows the VTS framework to toggle between passthrough and binder modes for performance comparison.

The VTS HIDL target templates are located in the `VtsHalHidlTargetTestBase` module, which you may include through your Android.bp file in the following way:

 cc_test {
    name: "VtsHalHidlSampleTest",
    defaults: ["hidl_defaults"],
    srcs: ["SampleTest.cpp"],
    shared_libs: [
        // ...
    ],
    static_libs: ["VtsHalHidlTargetTestBase"],
 }

And in `SampleTest.cpp`:

#include <VtsHalHidlTargetTestBase.h>

class SampleTest : public ::testing::VtsHalHidlTargetTestBase {
  sp<IInterface> int_ = ::testing::VtsHalHidlTargetTestBase::getService<IInterface>();
};


`VtsHalHidlTargetCallbackBase` is another template in that runner. It offers utility functions such as WaitForCallback and NotifyFromCallback. A typical usage is as follows:

class CallbackArgs {
 public:
  ArgType1 arg1;
  ArgType2 arg2;
};

class MyCallback
    : public ::testing::VtsHalHidlTargetCallbackBase<CallbackArgs>,
      public CallbackInterface {
 public:
  void CallbackApi1(ArgType1 arg1) {
    CallbackArgs data;
    data.arg1 = arg1;
    NotifyFromCallback("CallbackApi1", data);
  }

  void CallbackApi2(ArgType2 arg2) {
    CallbackArgs data;
    data.arg2 = arg2;
    NotifyFromCallback("CallbackApi2", data);
  }
};

Test(MyTest) {
  auto result = cb_.WaitForCallback("CallbackApi1");
  // cb_ is an instance of MyCallback; result is an instance of
  // ::testing::VtsHalHidlTargetCallbackBase::WaitForCallbackResult.
  EXPECT_TRUE(result.no_timeout);  // Check that the wait did not time out.
  EXPECT_TRUE(result.args);  // Check that CallbackArgs was received (not
                             // nullptr). This is optional.
  // Here check the values of the args using the pointer result.args.
  result = cb_.WaitForCallback("CallbackApi2");
  // Here check the values of the args using the pointer result.args.

  // Additionally, a test can wait for one of multiple callbacks.
  // In this case, the wait returns when any of the callbacks in the
  // provided name list is called.
  result = cb_.WaitForCallbackAny(<vector_of_string>);
  // When vector_of_string is not provided, all callback functions are
  // monitored. The name of the callback function that was invoked is
  // stored in the returned result.
}

The source code may contain more detailed explanation on the APIs.

Customize your test configuration (Optional)

AndroidTest.xml file

Pre-test file pushes from host to device can be configured for `VtsFilePusher` in `AndroidTest.xml`.

By default, `AndroidTest.xml` pushes a group of files required to run the VTS framework, specified in `test/vts/tools/vts-tradefed/res/push_groups/HidlHalTest.push`. Individual file pushes can be defined with the "push" option inside `VtsFilePusher`. Please refer to the TradeFed documentation for more detail.

Python module dependencies can be specified as "dep-module" option for `VtsPythonVirtualenvPreparer` in `AndroidTest.xml`. This will trigger the runner to install or update the modules using pip before running tests.

    <target_preparer class="com.android.tradefed.targetprep.VtsPythonVirtualenvPreparer">
        <option name="dep-module" value="numpy" />
        <option name="dep-module" value="scipy" />
        <option name="dep-module" value="matplotlib" />
        <option name="dep-module" value="Pillow" />
    </target_preparer>

`VtsPythonVirtualenvPreparer` installs a set of packages including future, futures, enum, and protobuf by default. To add a dependency module, add `<option name="dep-module" value="<module name>" />` inside `VtsPythonVirtualenvPreparer` in `AndroidTest.xml`.

Test case config

Optionally, a .config file can be used to pass variables in JSON format to a test case.

To add a .config file, create one under your project directory using your project name:

$ vi test/vts/testcases/host/<your project directory>/<your project name>.config

Then edit its contents to:

{
    <key_1>: <value_1>,
    <key_2>: <value_2>
}

In your test case Python class, you can get the JSON values by using the self.getUserParams method.

For example:

    required_params = ["key_1"]
    self.getUserParams(required_params)
    logging.info("%s: %s", "key_1", self.key_1)
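The behavior of getUserParams can be approximated by the following hypothetical re-implementation, which copies each required key from the test's JSON config dict onto the test object as an attribute (FakeTest and ConfigError are illustrative stand-ins, not VTS classes):

```python
class ConfigError(Exception):
    """Raised when a required config key is missing."""

class FakeTest(object):
    def __init__(self, user_params):
        # user_params would come from the parsed .config JSON file.
        self.user_params = user_params

    def getUserParams(self, req_param_names):
        for name in req_param_names:
            if name not in self.user_params:
                raise ConfigError("Missing required param: %s" % name)
            # Each key becomes an attribute, e.g. self.key_1.
            setattr(self, name, self.user_params[name])

test = FakeTest({"key_1": "value_1", "key_2": 2})
test.getUserParams(["key_1"])
print(test.key_1)
```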

Finally, add the following option to the `VtsMultiDeviceTest` test class in `AndroidTest.xml`:

`<option name="test-config-path" value="vts/testcases/<your project directiry>/<your project name>.config" />`

Your config file will overwrite the following default JSON object:

{
    "test_bed": [
        {
            "name": "<your project name>",
            "AndroidDevice": "*"
        }
    ],
    "log_path": "/tmp/logs",
    "test_paths": ["./"]
}


The test plan to run VTS performance tests is vts-performance. The available test modules in the vts-performance test plan (listed in vts-performance.xml) are: BinderThroughputBenchmark, BinderPerformanceTest, HwBinderBinderizeThroughputTest, HwBinderBinderizePerformanceTest, HwBinderPassthroughThroughputTest, HwBinderPassthroughPerformanceTest, and FmqPerformanceTest.

The source code for performance tests is located at test/vts-testcase/performance.

Description of test modules

Performance Tests for Binder, HwBinder

Performance Test for Fast Message Queue (fmq)

Throughput Tests for Binder, HwBinder

How to run performance tests and interpret results

  1. Run the vts-performance test plan (or alternatively, an individual test module):
$ vts-tradefed
> run vts-performance
  2. Read host logs to see the performance measurements of failed tests.

For Binder, HwBinder performance tests, the test output has four columns:

  1. Benchmark: represents message data size (bytes)
  2. Time: roundtrip RPC latency in real time (ns)
  3. CPU: roundtrip RPC latency in CPU time (ns)
  4. Iterations: number of iterations per second
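A host-side sketch for turning such four-column rows into structured records might look like this; the benchmark names and numbers below are illustrative, not real measurements:

```python
def parse_benchmark_rows(stdout):
    """Parse rows like 'BM_sendVec_binder/4  39895 ns  4535 ns  17303'
    into dicts keyed by the four columns described above."""
    rows = []
    for line in stdout.splitlines():
        parts = line.split()
        # A data row has 6 tokens: name, time, "ns", cpu, "ns", iterations.
        if len(parts) == 6 and parts[2] == "ns" and parts[4] == "ns":
            rows.append({
                "benchmark": parts[0],
                "size_bytes": int(parts[0].split("/")[-1]),
                "time_ns": int(parts[1]),
                "cpu_ns": int(parts[3]),
                "iterations": int(parts[5]),
            })
    return rows

sample = """Benchmark                Time           CPU Iterations
BM_sendVec_binder/4      39895 ns      4535 ns      17303
BM_sendVec_binder/8      40022 ns      4606 ns      16895
"""
rows = parse_benchmark_rows(sample)
print(rows[0]["size_bytes"], rows[0]["time_ns"])
```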

Here are some examples of test outputs that have been formatted from host log:

Test module: HwBinderBinderizePerformanceTest for a 2016 Pixel XL device

Test module: HwBinderPassthroughPerformanceTest for a 2016 Pixel XL device

Test module: HwBinderBinderizeThroughputTest for a 2016 Pixel XL device

HAL API call latency profiling

This section explains how to enable API call latency profiling for your VTS HIDL HAL test.

1. Add profiler library to VTS

To enable profiling for your HAL testing, we need to add the corresponding profiler library to VTS; the name of the profiling library follows a set pattern derived from the HAL package name.

We will use the NFC HAL as a running example throughout this section.

2. Modify Your VTS Test Case

If you have not already, the Codelab for Host-Driven Tests gives an overview of how to write a VTS test case. This section assumes you have completed that codelab and have at least one VTS test case (either host-side or target-side) for which you would like to enable profiling.

2.1. Target-Side Tests

This subsection describes how to enable profiling for target-side tests. To enable profiling for host-side tests, follow the same steps, replacing target with host everywhere.

Copy an existing test directory

$ cd test/vts-testcase/hal/nfc/V1_0/
$ cp target target_profiling -rf

Note that nfc can be replaced by the name of your HAL, and V1_0 by your HAL version in the format V<MAJOR_VERSION>_<MINOR_VERSION>.

Then rename the test name from VtsHalNfcV1_0Target to VtsHalNfcV1_0TargetProfiling everywhere.

Add the following lines to the corresponding AndroidTest.xml file under the target_profiling directory to push the profiler libraries to the target:

<option name="push" value="DATA/lib/hal_profiling_lib->/data/local/tmp/32/"/>
<option name="push" value="DATA/lib64/hal_profiling_lib->/data/local/tmp/64/"/>

Note: if the HAL under test depends on another HAL (e.g., android.hardware.nfc@2.0 depends on android.hardware.nfc@1.0), we need to push the profiler library for the dependent HAL as well.

Add the following line to the corresponding AndroidTest.xml file under the target_profiling directory to enable profiling for the test:

<option name="enable-profiling" value="true" />

An example AndroidTest.xml file looks like:

<configuration description="Config for VTS VtsHalNfcV1_0TargetProfiling test cases">
    <target_preparer class="com.android.compatibility.common.tradefed.targetprep.VtsFilePusher">
        <option name="push-group" value="HalHidlTargetProfilingTest.push" />
        <option name="cleanup" value="true"/>
        <option name="push" value="DATA/lib/android.hardware.nfc@1.0-vts.profiler.so->/data/local/tmp/32/"/>
        <option name="push" value="DATA/lib64/android.hardware.nfc@1.0-vts.profiler.so->/data/local/tmp/64/"/>
    </target_preparer>
    <target_preparer class="com.android.tradefed.targetprep.VtsPythonVirtualenvPreparer" />
    <test class="com.android.tradefed.testtype.VtsMultiDeviceTest">
        <option name="test-module-name" value="VtsHalNfcV1_0TargetProfiling" />
        <option name="binary-test-source" value="_32bit::DATA/nativetest/VtsHalNfcV1_0TargetTest/VtsHalNfcV1_0TargetTest" />
        <option name="binary-test-source" value="_64bit::DATA/nativetest64/VtsHalNfcV1_0TargetTest/VtsHalNfcV1_0TargetTest" />
        <option name="binary-test-type" value="hal_hidl_gtest" />
        <option name="enable-profiling" value="true" />
        <option name="precondition-lshal" value="android.hardware.nfc@1.0"/>
        <option name="test-timeout" value="1m" />
    </test>
</configuration>

3. Schedule the profiling test

Add the following lines to vts-serving-staging-hal-hidl-profiling.xml

<option name="compatibility:include-filter" value="VtsHalProfilingTestName" />

4. Subscribe to notification alert emails

Please check notification page for the detailed instructions.

Basically, everything is now set up. Wait a day or so and then visit your VTS Dashboard; you should be able to add VtsHalNfcV1_0TargetProfiling to your favorites list.

That is all you need to do to subscribe to alert emails, which are sent if any notable performance degradations are found by your profiling tests.

Also, if you click VtsHalNfcV1_0TargetProfiling on the dashboard main page, the test result page shows up; its top-left side lists the APIs that have measured performance data.

5. Where to find the trace files?

All the trace files generated during the tests are by default stored under /tmp/vts-test-trace/.

To change the directory where the trace files are stored, create a config file, e.g., Test.config, under the test directory with:

{
    "profiling_trace_path": "path_to_store_the_trace_file"
}

and add the following lines to the corresponding AndroidTest.xml file:

<option name="save-trace-file-remote" value="true" />
<option name="test-config-path" value="path/to/your/test/Test.config" />

Custom profiling points and post-processing

1. Prerequisites

Let's assume you have created a performance benchmark binary that can run independently on the device, e.g., my_benchmark_test.

2. Integrate the benchmark as a VTS test

2.1. Add benchmark binary to VTS

To package the benchmark binary with VTS, add it to the vts_test_bin_packages variable.

2.2. Add VTS host side script

The host-side script controls the benchmark execution and the processing of the benchmark results. It typically contains the following major steps.

i. Register the device controller and invoke a shell on the target device.

def setUpClass(self):
    self.dut = self.registerController(android_device)[0]
    self.dut.shell.InvokeTerminal("one")

ii. Set up the command to run the benchmark on the device.

results = self.dut.shell.one.Execute([
    "%s" % path_to_my_benchmark_test,
])

Here path_to_my_benchmark_test represents the full path of the benchmark binary on the target device. The default path is /data/local/tmp/my_benchmark_test.

iii. Validate the benchmark test results.

asserts.assertFalse(
    any(results[const.EXIT_CODE]),
    "Benchmark test failed.")

iv. Parse the benchmark test results and upload the metrics to VTS web dashboard.

Depending on the output format of the test results, we need to parse the STDOUT content in the returned results into performance data points. Currently, VTS supports processing and displaying two performance data types. One is the timestamp sample, which records the start and end timestamps of a particular operation. The other is the vector data sample, which records a list of profiling data points along with data labels.

Taking the vector data sample as an example, suppose we have parsed the benchmark results into two vectors: one stores the performance data (e.g., the latency of each API call), and the other stores the corresponding data labels (e.g., the input size of each API call).

Call AddProfilingDataLabeledVector to upload the vector data sample to the VTS web dashboard as follows:

self.web.AddProfilingDataLabeledVector(
    "Benchmark name",
    labels,
    values)
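Putting steps ii through iv together, the parsing stage might look like the following sketch. The 'label: value' output format of the benchmark is an assumption for illustration, as is the commented-out upload call:

```python
def parse_latency_lines(stdout_lines):
    """Split lines like 'input_size_64: 1200' into parallel label and
    value vectors suitable for a labeled-vector upload."""
    labels, values = [], []
    for line in stdout_lines:
        if ":" not in line:
            continue  # skip headers or other non-data lines
        label, value = line.split(":", 1)
        labels.append(label.strip())
        values.append(int(value.strip()))
    return labels, values

labels, values = parse_latency_lines([
    "input_size_64: 1200",
    "input_size_128: 2345",
])
print(labels, values)
# With the two vectors in hand, the upload would be a single call, e.g.:
# self.web.AddProfilingDataLabeledVector("my_benchmark", labels, values)
```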

2.3. Configure the VTS test

Follow the same instructions in the Codelab for Host-Driven Tests ("Write a VTS Test" section) to create a host-side VTS test using the host-side script created in section 2.2.


Support for native coverage through VTS depends on a functioning instance of the VTS Dashboard, including integration with a build server and a Gerrit server. See the documentation for VTS Dashboard setup and configuration before proceeding.

Building a Device Image

The first step in measuring coverage is creating a device image that is instrumented for gcov coverage collection. This can be accomplished with a flag in the device manifest and a few build-time flags.

Let's add the following code segment to the file:

# Set if a device image has the VTS coverage instrumentation.
ifeq ($(NATIVE_COVERAGE),true)
  ...
endif

This will have no impact on the device when coverage is disabled at build time but will add a read-only device property in the case when coverage is enabled.

Next, we can build a device image. The continuous build server must be configured to execute the build command with two additional flags: NATIVE_COVERAGE and COVERAGE_PATHS. The former is a global flag to enable or disable coverage instrumentation; the latter specifies the comma-separated paths to the source code that should be instrumented for coverage.

As an example, consider measuring coverage on the NFC implementation. We can configure the build command as follows:

> make NATIVE_COVERAGE=true COVERAGE_PATHS="hardware/interfaces/nfc,system/nfc"

Modifying Your Test for Host-Driven HIDL HAL Tests

In most cases, no additional test configuration is needed to enable coverage.

By default, coverage processing is enabled on the target if it is coverage instrumented (as per the previous section) and the test is a target-side binary.

Host-driven tests have more flexibility for coverage measurement, as the host may request coverage files after each API call, at the end of a test case, or when all test cases have completed.

Measure coverage at the end of an API call

Coverage is available with the result of each API call. To add it to the dashboard for display, call self.coverage.SetCoverageData with the contents of result.raw_coverage_data. For example, in a test of the lights HAL, the following would gather coverage after an API call to set the light:

# host-driven API call to light HAL
result = self.dut.hal.light.set_light(None, gene)
self.coverage.SetCoverageData(result.raw_coverage_data)

Measure coverage at the end of a test case

After a test case has completed, coverage can be gathered independently of an API call. Coverage can be requested from the device under test (dut) with the method GetRawCodeCoverage. For example, at the end of a host-side NFC test case, coverage data is fetched using the call:


Measure coverage by pulling all coverage files from the device

For coarse coverage measurement (e.g., after running all of the tests), coverage can be requested by pulling any coverage-related output files from the device manually over ADB. The base test class provides a coverage feature to fetch and process the files.

self.coverage.SetCoverageData(dut=self.dut, isGlobal=True)

Configuring the Test for Coverage (optional)

The VTS framework automatically derives the information it needs to process the coverage data emitted from the device after test execution and to query Gerrit for the relevant source code. However, it relies on the assumption that there is a symmetry between git project names and the full Android source tree; specifically, the project name and the path to the project from the Android root may differ by at most one relative node in order for the VTS framework to identify the source code. For instance, the following two paths would both be identified as the project platform/test/vts:


On the other hand, a project with the path "android/platform/test/vts" would not be automatically matched with the project by name "platform/test/vts".
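A sketch of this matching heuristic, consistent with the two examples above (the function is hypothetical, not the actual VTS implementation):

```python
def matches_project(path_from_root, project_name):
    """Return True when the git project name plausibly matches a path
    from the Android root: the name may carry at most one extra leading
    node (e.g. a 'platform/' prefix) relative to the path."""
    path_parts = path_from_root.strip("/").split("/")
    name_parts = project_name.strip("/").split("/")
    return name_parts == path_parts or name_parts[1:] == path_parts

print(matches_project("test/vts", "platform/test/vts"))                   # True
print(matches_project("platform/test/vts", "platform/test/vts"))          # True
print(matches_project("android/platform/test/vts", "platform/test/vts"))  # False
```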

In cases where the project name differs significantly from the project's path from the Android root, a manual configuration must be specified in the test configuration JSON file. Specify a list of dictionaries, each containing the module name (i.e., the module name in the makefile for the binary or shared library) as well as the git project name and path for the source code included in the module. For example, add the following to the configuration JSON file:

"modules": [{
               "module_name": "<module name>",
               "git_project": {
                                  "name": "<git project name>",
                                  "path": "<path to git project root>"
                              }
           }]

For the lights HAL, the test configuration file would look like:

"modules": [{
        "module_name": "vendor/lib64/hw/lights.msm8994",
        "git_project": {
                           "name": "platform/hardware/qcom/display",
                           "path": "hardware/qcom/display"
                       }
    },
    {
        "module_name": "system/lib64/hw/android.hardware.light@2.0-impl",
        "git_project": {
                           "name": "platform/hardware/interfaces",
                           "path": "hardware/interfaces"
                       }
    }]

Running VTS

At test runtime, coverage will automatically be collected and processed by the VTS framework with no additional effort required. The processed coverage data will be uploaded to the VTS Dashboard along with the test results so that the source code can be visualized with a line-level coverage overlay.

Note that two external dependencies are necessary to support coverage:

  1. A Gerrit server with REST API must be available and configured to integrate with the VTS Dashboard. See the Dashboard setup for integration directions.
  2. A build artifact server with a REST API must be configured to integrate with the VTS runner. This allows the runner to fetch both a build-time coverage artifact from the building step above and the source version information for each git project within the repository at build time. The VTS framework expects a JSON file named "BUILD_INFO" which contains a dictionary of source project names to git revisions under the key "repo-dict".
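For illustration, a minimal BUILD_INFO file matching this expectation could be parsed as follows (the project name and revision shown are made up):

```python
import json

# Illustrative BUILD_INFO contents; a real file maps every git project
# in the repository to its revision at build time.
build_info_text = """
{
  "repo-dict": {
    "platform/hardware/interfaces": "0123456789abcdef0123456789abcdef01234567"
  }
}
"""

build_info = json.loads(build_info_text)
revisions = build_info["repo-dict"]  # project name -> git revision
assert "platform/hardware/interfaces" in revisions
```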

Visualizing Coverage

After completing the build step and the test execution step, the VTS Dashboard should display a test result with a link in the row labeled "Coverage". This will display a user interface similar to the one below.

Lines executed are highlighted in green, while lines not exercised by the test are highlighted in red. White lines are not executable lines of code, such as comments and structural components of the coding language.

Offline Coverage

If the user would like to measure coverage without integrating with the VTS Dashboard or a build server, offline coverage measurement is also possible. First, we build a local device image:

$ . build/
$ lunch <product name>-userdebug
$ make NATIVE_COVERAGE=true COVERAGE_PATHS="<list of paths to instrument with coverage>"

Next, we flash the device with the coverage-instrumented device image and run a VTS test. Note that you must manually force the HALs into same-process mode in order for the framework to extract coverage data.

Finally, we can process the files by pulling the output GCDA files from the device using adb and matching them with the source code and the GCNO files produced at build time. The file structure is symmetric across the Android source code, the out directory, and the data partition of the device.

These three can then be combined using a tool such as gcov or lcov to produce a local coverage report.
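Because the file structure is symmetric, pairing a pulled GCDA file with its build-time GCNO file is a simple path translation. A minimal sketch (the function name and directory paths are hypothetical):

```python
import os

def gcno_for_gcda(gcda_path, pulled_root, out_root):
    """Map a GCDA file pulled from the device to the matching GCNO file
    under the build out directory, relying on the symmetric layout
    described above. Sketch only; directory names are assumptions."""
    rel = os.path.relpath(gcda_path, pulled_root)   # e.g. "hw/lights.gcda"
    base, _ = os.path.splitext(rel)
    return os.path.join(out_root, base + ".gcno")

# The GCDA/GCNO pair shares a relative path across the two trees.
print(gcno_for_gcda("/tmp/coverage/hw/lights.gcda", "/tmp/coverage", "/tmp/out"))
```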

Background and FAQs


To measure coverage, the source file is divided into units called basic blocks, which may contain one or more lines of code. All code in the same basic block is accounted for together. Some lines of code (e.g., variable declarations) are not executable and thus belong to no basic block, while some lines actually compile to several executable instructions (e.g., shorthand conditional operators) and belong to more than one basic block.

The generated coverage report displays a color-coded source file with numerical annotations in the left margin. The row fill indicates whether or not a line of code was executed when the tests were run: green means it was covered, red means it was not. The corresponding numbers in the left margin indicate the number of times the line was executed.

Lines of code that are not colored and have no execution count in the margin are not executable instructions.


Why do some lines have no coverage information?

The line of code is not an executable instruction. For example, comments and structural coding language elements do not reflect instructions to the processor.

Why are some lines called more than expected?

Since some lines of code may belong to more than one basic block, they may appear to have been executed more often than expected. For example, an inline conditional statement may cause a line to belong to two basic blocks. Even if a line of code belongs to only one basic block, it may be displayed as having been executed more times than it actually was; this can occur when other lines in the same basic block were executed, increasing the execution count of the whole block.

What does HIDL HAL Interface Fuzzer do?

HIDL HAL Interface Fuzzer (interface fuzzer) is a fuzzer binary built using LLVM asan, sancov, and libFuzzer. It runs against a user-specified target HIDL HAL, calling HAL functions in random order with random inputs until a terminating condition is reached (e.g., a HAL crash, a sanitizer violation, or a timeout).

More information about asan, sancov, and libFuzzer.
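The overall strategy (random call order, random inputs, stop on a terminating condition) can be illustrated with a toy Python sketch. This is not the real libFuzzer-based vts_proto_fuzzer; all names here are made up:

```python
import random

def fuzz_interface(functions, make_input, iterations=100, seed=0):
    """Call interface functions in random order with random inputs until
    a terminating condition: here, an exception (standing in for a HAL
    crash or sanitizer violation) or the iteration limit."""
    rng = random.Random(seed)
    calls = 0
    try:
        for _ in range(iterations):
            fn = rng.choice(functions)
            fn(make_input(rng))
            calls += 1
    except Exception:
        pass  # a real fuzzer would log the failing call sequence
    return calls

# Toy "HAL" function that crashes on one particular input value.
def set_level(level):
    if level == 7:
        raise ValueError("crash")

calls = fuzz_interface([set_level], lambda rng: rng.randint(0, 10))
```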


All the code for the HIDL HAL interface fuzzer is already included; in other words, no additional test code needs to be written or compiled. Only configuration is needed to run the interface fuzzer against a targeted HAL.

As usual, you need a build file and an AndroidTest.xml to deploy the fuzz test as part of VTS.

Assume your test is named: VtsHalBluetoothV1_0IfaceFuzzer. Then AndroidTest.xml should look something like this:

<target_preparer class="">
    <option name="push-group" value="IfaceFuzzerTest.push"/>
</target_preparer>
<target_preparer class=""/>
<test class="">
    <option name="test-module-name" value="VtsHalBluetoothV1_0IfaceFuzzer"/>
    <option name="hal-hidl-package-name" value="android.hardware.bluetooth@1.0"/>
    <option name="test-case-path" value="vts/testcases/fuzz/template/iface_fuzzer_test/iface_fuzzer_test"/>
    <option name="test-timeout" value="3h"/>
</test>

This should look fairly standard. The only things to pay attention to are these three lines:

  1. This option specifies which files (listed in IfaceFuzzerTest.push) need to be pushed onto the device.
<option name="push-group" value="IfaceFuzzerTest.push"/>
  2. This option specifies the Bluetooth HAL as our fuzz target.
<option name="hal-hidl-package-name" value="android.hardware.bluetooth@1.0"/>
  3. This option specifies the host code used to deploy the fuzzer binary.
<option name="test-case-path" value="vts/testcases/fuzz/template/iface_fuzzer_test/iface_fuzzer_test"/>


To run the fuzzer, you need to compile VTS with the appropriate asan and sancov build options. From the Android source root directory, run:

$ SANITIZE_TARGET="address coverage"  make vts -j64
$ vts-tradefed run commandAndExit vts -l VERBOSE --module VtsHalBluetoothV1_0IfaceFuzzer

This will run the VtsHalBluetoothV1_0IfaceFuzzer test, print logs to the screen, and return to the shell.


You will have to rely on logs to identify a fuzzer failure. If the fuzzer encounters an error (e.g., a segfault or buffer overflow), you will see something like this in your log:

==15644==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000110 (pc 0x0077f8776e80 bp 0x007fe11bed90 sp 0x007fe11bed20 T0)
==15644==The signal is caused by a READ memory access.
==15644==Hint: address points to the zero page.
    #0 0x77f8776e7f  (/vendor/lib64/hw/
    #1 0x77f87747e3  (/vendor/lib64/hw/
    #2 0x77f87e384b  (/system/lib64/
    #3 0x79410ae4df  (/data/local/tmp/64/
    #4 0x794498c90f  (/data/local/tmp/64/
    #5 0x5f42e82ca3  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0xaca3)
    #6 0x5f42e8f08f  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0x1708f)
    #7 0x5f42e8f27b  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0x1727b)
    #8 0x5f42e900a7  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0x180a7)
    #9 0x5f42e90243  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0x18243)
    #10 0x5f42e88cff  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0x10cff)
    #11 0x5f42e8655f  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0xe55f)
    #12 0x7944aef5f3  (/system/lib64/
    #13 0x5f42e8029b  (/data/local/tmp/libfuzzer_test/vts_proto_fuzzer+0x829b)

AddressSanitizer cannot provide additional info.

This means that the fuzzer was able to trigger a segfault somewhere in the Bluetooth HAL implementation. Unfortunately, there is not yet a way to symbolize this stack trace.

However, the log will contain the last call sequence batch that triggered the failure.


Let's assume you have trace files for your test (e.g., from running the tests with profiling enabled; see the instructions about HAL API call latency profiling). The trace files should be stored under test/vts-testcase/hal-trace/<HAL_NAME>/<HAL_VERSION>/

where <HAL_NAME> is the name of your HAL and <HAL_VERSION> is the version of your HAL with format V<MAJOR_VERSION>_<MINOR_VERSION>.

We will use the Vibrator HAL as a running example throughout this section, so the traces are stored under test/vts-testcase/hal-trace/vibrator/V1_0/

Create a HIDL HAL replay test

Follow the same instructions in the Codelab for Host-Driven Tests to create a host-side VTS test named VtsHalVibratorV1_0TargetReplay.

Add the following line to the corresponding AndroidTest.xml under the test configuration to use the VTS replay test template.

<option name="binary-test-type" value="hal_hidl_replay_test" />

Add the following line to the corresponding AndroidTest.xml under the test configuration to add the trace file for replay.

<option name="hal-hidl-replay-test-trace-path" value="test/vts-testcase/hal-trace/vibrator/V1_0/vibrator.vts.trace" />

Note: if you want to replay multiple traces within the test, add each trace file using the above configuration.

Add the following line to the corresponding AndroidTest.xml under the test configuration to indicate the HIDL HAL package name for the test.

<option name="hal-hidl-package-name" value="android.hardware.vibrator@1.0" />

An example AndroidTest.xml for a replay test looks as follows:

<configuration description="Config for VTS VtsHalVibratorV1_0TargetReplay test cases">
    <target_preparer class="">
        <option name="abort-on-push-failure" value="false"/>
        <option name="push-group" value="HalHidlHostTest.push"/>
        <option name="cleanup" value="true" />
        <option name="push" value="spec/hardware/interfaces/vibrator/1.0/vts/Vibrator.vts->/data/local/tmp/spec/target.vts" />
        <option name="push" value="DATA/lib/>/data/local/tmp/32/"/>
        <option name="push" value="DATA/lib64/>/data/local/tmp/64/"/>
    </target_preparer>
    <target_preparer class=""/>
    <test class="">
        <option name="test-module-name" value="VtsHalVibratorV1_0TargetReplay"/>
        <option name="binary-test-type" value="hal_hidl_replay_test" />
        <option name="hal-hidl-replay-test-trace-path" value="test/vts-testcase/hal-trace/vibrator/V1_0/vibrator.vts.trace" />
        <option name="hal-hidl-package-name" value="android.hardware.vibrator@1.0" />
        <option name="test-timeout" value="2m"/>
    </test>
</configuration>

Schedule the replay test

Add the following line to vts-serving-staging-hal-hidl-replay.xml

<option name="compatibility:include-filter" value="VtsHalVibratorV1_0TargetReplay"/>

That's it. After a day or so, visit your VTS Dashboard; you should then be able to add VtsHalVibratorV1_0TargetReplay to your favorites list.

VtsVndkAbiTest ensures that the ABI of VNDK libraries is compatible with the generic system image. It compares the libraries with pre-generated dump files, which include symbols and virtual function tables.

Generate ABI dump

  1. Select a generic lunch target and build:
$ . build/
$ lunch aosp_arm64-userdebug
$ make -j30
  2. Compile vndk-vtable-dumper:
$ make vndk-vtable-dumper -j30
  3. Create a text file listing the library names, one library per line. In this example, the file name is vndk_list_26.txt.
  4. Run to generate dump files in the directory ./26. In addition to the text file, the script accepts library names as arguments.
$ ./ -o ./26 vndk_list_26.txt
$ ./ -o ./26
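As a sketch, the library-list file format described above (one library name per line) could be parsed like this; parse_library_list is a hypothetical helper, not part of the actual script:

```python
def parse_library_list(text):
    """Parse a library-list file of the form described above:
    one library name per line, blank lines ignored. Sketch only;
    the library names used below are placeholders."""
    return [line.strip() for line in text.splitlines() if line.strip()]

# e.g. the contents of a file like vndk_list_26.txt
assert parse_library_list("liba\n\nlibb\n") == ["liba", "libb"]
```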

By default, the script detects the target CPU architecture and searches ${ANDROID_PRODUCT_OUT}/system/lib[64] for the libraries. To run the script outside the Android source tree, the following environment variables and command-line options must be specified:

In order to configure the VTS Dashboard and notification service, several setup, configuration, and integration steps must be executed. Most parts of the VTS Dashboard are self-contained, but others depend on your own tool configurations. Begin by completing the first two sections, then proceed to the third section, which covers integration with existing services. Note that the last step is needed only if the VTS Dashboard will be used to display coverage from test execution time, and it requires strong domain knowledge of existing internal web services.

The code for the VTS Dashboard is located under test/vts/web/dashboard in Android O but will be migrated to test/vti/dashboard going forward. We will refer to DASHBOARD_TOP as whichever of these two locations applies to your Android version.

Configure a Google App Engine project and Deploy the VTS Dashboard

This section should only be done once, when the project is deployed to the cloud. If changes are made to the code under DASHBOARD_TOP, the VTS Dashboard needs to be re-deployed (step 5); otherwise this is a one-time setup. Note that there will most likely be only one VTS Dashboard instance in Google Cloud for an entire company, so one person or group should be selected to own the web project.

1. Create a Google App Engine Project

  1. Decide how many Google Compute Engine machines you'd like in your cluster to balance your cost and performance constraints

2. Configure the App Engine project on Google Cloud Console

  1. Add an authorized email sender address under App Engine > Settings > Application Settings > Email API authorized senders. This will be the email address used to send emails to users about test failures/fixes
  2. Create an OAuth 2.0 client ID under IAM & Admin > API Credentials
  3. Create a service account and key file under IAM & Admin > Service accounts

3. Prepare the Deployment Host

Install some dependencies needed on the host machine that will be deploying the project to the cloud.

  1. Install Java 8
  2. Install Google Cloud SDK
  3. Run the setup instructions to initialize gcloud and log in to the project you created in step 1.
  4. Install Maven

For more information about setting up the host and using Maven, refer to the App Engine documentation.

4. Specify Project Configurations

Fill out the project configuration file (DASHBOARD_TOP/pom.xml) with parameters from the previous steps.

  1. appengine.clientID -- OAuth 2.0 client ID for the App Engine project (identifies the App Engine project as an OAuth 2.0 client, from step 2.2)
  2. appengine.serviceClientID -- 'client_id' from the service account JSON file (from step 2.3)
  3. appengine.senderEmail -- email address from which to send alerts (from step 2.1)
  4. appengine.emailDomain -- email address domain to which emails will be sent (e.g., to limit emails to gmail accounts only)

5. Deploy the Project

To test the project locally using the App Engine development server, run the command "mvn clean appengine:devserver" from DASHBOARD_TOP.

To deploy to the cloud, run the command "mvn clean appengine:update" from DASHBOARD_TOP.

For additional documentation regarding Google Cloud App Engine setup, refer to the documentation for Java applications.

Configure the VTS Runner to Upload Results to the VTS Dashboard Service

After completing the first section, the web service is up and running. Next, the VTS test runner must be configured to upload the data to the correct place. Note that these changes matter for any machine running VTS that should report to the web, unlike the previous section, which was relevant only to the admin of the web service. The changes can be made locally for a per-machine configuration, or they can be checked into the source tree so that everyone running VTS can post data to the Dashboard. The following changes must be made to the file test/vts/tools/vts-tradefed/res/default/DefaultTestCase.config:

  1. Add a value for key "service_key_json_path". This is where the key file from step 2.3 is stored. It should be located in some network drive or a directory on the local machine. VTS needs to read the file when running tests in order to post data to the VTS Dashboard.
  2. Add a value for key "dashboard_post_command". This should be a command line for Windows/Linux (whichever platform is running VTS) that will post data to the dashboard. This may be a special command needed to pass through a proxy, or it may be a fairly standard built-in command. The only constraint is that the string '{path}' must appear somewhere in the command so that the VTS framework can insert the path to the serialized data tempfile into the command. Below is an example for posting to a website from a Linux host, where "<url>" is the fully specified URL of the VTS Dashboard instance.
  "service_key_json_path": "/networkdrive/vts/service_key.json",
  "dashboard_post_command": "wget --post-file='{path}' <url>/api/datastore"
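To illustrate the '{path}' constraint, here is a sketch of the substitution the framework performs (the helper name is hypothetical, and "<url>" is a placeholder as in the example above):

```python
# Example configured command, as in the config snippet above.
post_command = "wget --post-file='{path}' <url>/api/datastore"

def build_post_command(command_template, tempfile_path):
    """Insert the serialized-data tempfile path into the configured
    command wherever '{path}' appears. Sketch only."""
    return command_template.replace("{path}", tempfile_path)

cmd = build_post_command(post_command, "/tmp/vts_results.bin")
assert "--post-file='/tmp/vts_results.bin'" in cmd
```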

Results from continuous runs (i.e. with integer test and device build IDs run on a machine with access to the service JSON) are automatically visible on the dashboard.

Local runs with custom test and/or device builds may also be visible for debugging purposes if tests are run on a machine with access to the service key file. To view local runs on the dashboard, add the following to the end of the URL from the table summary page: '&unfiltered='. This will show all results uploaded to the VTS Dashboard without any filtering of the build IDs.
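For example (the base URL below is hypothetical; only the '&unfiltered=' parameter comes from the text above):

```python
# Hypothetical table summary URL for a local debugging run.
base_url = "https://<dashboard-host>/show_table?testName=SomeVtsTest"

# Appending the parameter disables build-ID filtering on the Dashboard.
debug_url = base_url + "&unfiltered="
print(debug_url)
```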

Integration For Display of Coverage Data (optional)

This section is important if the VTS Dashboard will be used to display coverage data from test runs; the Dashboard service does not store source code or artifacts from device builds, so VTS must be configured to integrate with existing services. Completing this process requires knowledge of the other tools used within the company, such as proxies, firewalls, and build server APIs. Fortunately, the Gerrit REST API is standard, so minimal configuration is needed to integrate it with the VTS Dashboard. The continuous build system, on the other hand, is not standardized, so integrating with it will require domain knowledge.

The steps below must be executed by the owner of the VTS Dashboard web service:

1. Specify Project Configurations

Fill out the additional fields in DASHBOARD_TOP/pom.xml and repeat step (5) from the first section:

  1. gerrit.uri -- base URL of the Gerrit REST API which contains the source compiled to a device image
  2. gerrit.scope -- set to the Gerrit OAuth 2.0 scope:

2. Configure CORS

Configure the Gerrit server to allow CORS (cross-origin resource sharing) with the VTS Dashboard web service. In order to overlay coverage data on top of the source code, the VTS Dashboard needs to query Gerrit for the source code; the proxy or firewall may block requests from other services unless they are added to a whitelist. In order to enable this, the Gerrit administrator will likely need the address of VTS Dashboard (as configured in step 1) and an OAuth 2.0 client ID (from step 2.2). If there aren't any limitations on CORS or if the services are hosted on the same domain, then no changes will be needed to allow communication between the services.

3. Configure the VTS Runner

Now that the web services are configured, the VTS runner must be configured on every machine running VTS to access the continuous build server. The following additional keys must be provided with values in test/vts/tools/vts-tradefed/res/default/DefaultTestCase.config:

  1. "build_server_scope" -- OAuth 2.0 scope of the REST build server
  2. "build_server_api_name" -- Name of the build server REST API
  3. "build_server_api_version" -- Version of the build server REST API

The VTS runner will now query the build server on every test run to retrieve two build artifacts:

  1. <product>-coverage-<build ID>.zip -- the ZIP file produced automatically in the out/dist directory when building a coverage-instrumented device image
  2. BUILD_INFO -- a JSON file describing the device build configuration. At minimum, the file must contain a dictionary of project names to commit identifiers under the key "repo-dict".

Monitoring (optional)

Google Stackdriver

The Google App Engine project can be configured with Stackdriver to verify the health of the web service. Refer to the Stackdriver documentation for more details on setting up a monitoring project.

Create a Simple Uptime Check

  1. Go to Stackdriver Monitoring console
  2. Go to Alerting > Uptime Checks in the top menu and then click Add Uptime Check. The New Uptime Check panel will be displayed.
  3. Fill in the following fields for the uptime check:
    Check type: HTTP
    Resource Type: Instance
    Applies To: Single, lamp-1-vm
    Leave the other fields with their default values.
  4. Click Test to verify the uptime check is working.
  5. Click Save.
  6. Fill out the configuration for notifications and click Save Policy.

Verify Checks and Notifications

To test the check and alert, go to the VM Instances page in Google Compute Engine, select an instance, and click Stop from the top menu. Wait up to five minutes for the next uptime check to fail. An email notification should be sent as configured in the previous steps notifying the administrator of a service outage.

To correct the outage, return to the VM Instances page, select the stopped instance, and click Start from the top menu.

Google Analytics

The VTS Dashboard supports integration with Google Analytics so that user behavior can be monitored and analyzed by a web administrator. Setting up and integrating with Analytics is simple. First, create an Analytics account and generate a tracking ID in the project settings. This may be included in an HTML/JavaScript code block similar to the following:

  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','','ga');

  ga('create', '<TRACKING ID>', 'auto');
  ga('send', 'pageview');

Note that the only value needed is the code defined in <TRACKING ID>. Next, this value must be supplied by the VTS Dashboard administrator in the file DASHBOARD_TOP/pom.xml. Provide the tracking ID in the tag


Once the project is re-deployed to Google App Engine, Analytics will begin to track access and usage patterns.

If you run into any problems with VTS, you can report an issue to the android-vts Google group. In order for the VTS team to address your issue effectively, please run the Python script (located under test/vts/script) and include the logged output in your report. The script prints important system platform information such as the Python and pip versions.

$ python test/vts/script/

Example output:

===== Current date and time =====
2017-05-23 10:37:35.117056
===== OS version =====
===== Python version =====
2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4]
===== Pip version =====
pip 9.0.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)
===== Virtualenv version =====
===== Target device info [] =====