This preview of the Android VTS (Vendor Test Suite) Lab v9.0 infrastructure provides instructions for interested users in the Android device ecosystem to set up an Android platform test lab.

Are you an Android platform tester or developer who has a few, or even dozens of, unused Android devices on your desk? If so, you can create a custom test plan and use those devices to run tests that are critical to your job.

Imagine no longer needing to order a set of dedicated Android devices for a centralized lab room or work with a lab infrastructure team to understand and set up the lab for your tests. This codelab shows you how to use the new Android Lab v9.0 infrastructure to set up your own lab using your desktop and existing devices. Such mini labs can streamline your VTS and CTS-on-GSI tests.

The open source Android Lab infrastructure consists of three key components:

Host controller

The host controller is a command-line tool that interacts with a cloud scheduler and manages a set of VTS and CTS-on-GSI test framework instances running on the same host node. The host controller can fetch build artifacts, flash devices, run tests, report progress, and upload test results.

Cloud scheduler

A cloud scheduler is a Google App Engine (GAE) project that fetches build information and test scheduling configs, and monitors devices and hosts in registered API labs. The cloud scheduler continuously schedules jobs on selected devices and sends infra alerts to registered mailing lists.

Test dashboard and notification

The VTS Dashboard is another GAE project that can show test results, code coverage, and performance data. The VTS Dashboard can also send test failure notifications to users.

If you want to build and operate an Android API lab, you need the hardware listed below. Otherwise, you can acquire Android devices and ship them to existing Android API labs around the world.

Host PC

Tested models:

Server rack

Device rack

Devices

  1. Log in (no actual account required)
  2. Connect to a Wi-Fi access point
  3. Developer options > Enable USB debugging > Accept the host's key for USB debugging
  4. Developer options > Enable OEM unlocking
$ adb reboot-bootloader
$ fastboot flashing unlock
$ fastboot flashing unlock_critical
# check the device screen to select and unlock
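To confirm the bootloader state before moving on, you can query it over fastboot (a quick sanity check; the unlocked variable is reported by most recent devices, but support varies by model):

$ fastboot getvar unlocked
# expect "unlocked: yes" once the unlock has been accepted on the device screen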

OS installation

OS setup

SSH package installation (directly from the host OS console)

A secure shell (SSH) server is required to enable a lab administrator to control host nodes remotely.

$ sudo apt-get install openssh-server
$ sudo apt-get install gsutil
$ sudo apt-get install curl
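To verify that the SSH server came up and is reachable, you can check the service locally and then connect from your workstation (a minimal sketch; the user and host values are placeholders):

$ sudo service ssh status
# from a remote workstation:
$ ssh <user>@<host IP or hostname>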

Configure the firewall (the steps below can be done from a remote workstation over SSH)

The firewall setting depends on the environment where the lab is being deployed.
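As one example, on an Ubuntu host that uses ufw you might allow inbound SSH and then enable the firewall (a sketch only; your lab's network policy may require different or additional rules):

$ sudo ufw allow ssh
$ sudo ufw enable
$ sudo ufw status verbose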

USB setup

$ sudo apt-get install android-tools-adb android-tools-fastboot
$ sudo usermod -aG plugdev $LOGNAME  # add your user to the plugdev group for USB device access

# for USB permission (adb and fastboot)
$ sudo curl --create-dirs -L -o /etc/udev/rules.d/51-android.rules <URI of an Android USB rules file>

$ sudo chmod a+r /etc/udev/rules.d/51-android.rules
$ sudo service udev restart

$ sudo reboot
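After the reboot, a quick way to confirm that the udev rules and plugdev group membership took effect is to attach a device with USB debugging enabled and list it:

$ adb devices
# the device should be listed as "device", not "unauthorized" or "no permissions"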

Android build environment setup

Set up the Android build environment based on the instructions here.

System setup

Install the Google Cloud (GCloud) SDK:

$ export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"

$ echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

$ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

$ sudo apt-get update && sudo apt-get install google-cloud-sdk

$ gcloud init
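Before continuing, you can confirm that the SDK is authenticated and pointed at the intended project:

$ gcloud auth list
$ gcloud config list project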

Install the Python SDK:

$ sudo apt-get install -y python-virtualenv
$ sudo apt-get install -y python-pip
$ sudo apt-get -y install python-protobuf
$ sudo apt-get -y install protobuf-compiler
$ sudo pip install --upgrade protobuf 
$ sudo pip install --upgrade pip
$ sudo pip install httplib2
$ sudo pip install apiclient
$ sudo pip install --upgrade google-api-python-client
$ sudo pip install future
$ sudo pip install futures
$ sudo pip install requests
$ sudo pip install selenium
$ cd ~/run      # working directory for the host controller
$ screen        # start a detachable terminal session
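Running the host controller inside screen keeps it alive if your SSH connection drops. Detach from the session with Ctrl-A followed by d, and reattach later:

$ screen -r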

Download

$ export API_LAB_BUCKET=<API_LAB_RELEASE_BUCKET>
$ gsutil ls gs://${API_LAB_BUCKET}/prod/android-vtslab-*.zip
$ sudo rm -rf android-vtslab*
$ gsutil cp gs://${API_LAB_BUCKET}/prod/android-vtslab-<selected version>.zip .
$ unzip android-vtslab-*.zip

Launch

$ cd android-vtslab/tools
$ export PATH=`pwd`/../bin:$PATH
$ which adb     # confirm the bundled adb is first in PATH
$ df            # check that the host has enough free disk space
$ ./run --vti=<CLOUD_SCHEDULER_INSTANCE>.appspot.com
> device --lease=true

Check http://<cloud scheduler instance>.appspot.com/device to see whether your devices are listed on the lab infra dashboard and used for scheduling.

Task schedule

Scheduling is driven by two configs: a lab config (the first block below, containing name, owner, and host entries) that registers the lab's hosts and devices, and a schedule config (the second block, beginning with manifest_branch) that defines which device builds, GSI builds, and test suites to run and how often.

name: "vtslab-<lab name>"
owner: "email@address.com"
host {
  hostname: "<hostname 1>"
  ip: "<ip 1>"
}
host {
  hostname: "<hostname 2>"
  ip: "<ip 2>"
  device {
    serial: "<serial 1>"
    product: "<product 1>"
  }
}

manifest_branch: "<BRANCH_NAME>"
pab_account_id: "<PAB_ACCOUNT_ID>"

build_target {
  name: "<device name 1>-user"
  require_signed_device_build: true

  test_schedule {
    test_name: "vts/vts"
    period: 2880
    priority: "low"
    device: "vtslab-<LAB_NAME_1>/<device name 1>"
    device: "vtslab-<LAB_NAME_2>/<device name 1>"
    shards: 2
    gsi_branch: "<GSI_BRANCH_NAME>"
    gsi_build_target: "aosp_arm64_ab-userdebug"
    gsi_pab_account_id: "<PAB_ACCOUNT_ID>"
    test_storage_type: BUILD_STORAGE_TYPE_GCS
    test_branch: "<VTS release's GCS bucket e.g., gs://...>"
    test_build_target: "arm_64"
    retry_count: 2
  }
}

build_target {
  name: "<device name 2>"

  test_schedule {
    test_name: "vts/vts"
    period: 720
    priority: "low"
    device: "vtslab-<LAB_NAME_1>/<device name 2>"
    shards: 2
    gsi_branch: "<GSI_BRANCH_NAME>"
    gsi_build_target: "aosp_arm64_ab-userdebug"
    gsi_pab_account_id: "<PAB_ACCOUNT_ID>"
    test_branch: "<TEST_BRANCH_NAME>"
    test_build_target: "test_suites_arm64"
    test_pab_account_id: "<PAB_ACCOUNT_ID>"
  }
}
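If these configs are stored as text-format protocol buffers, you can catch syntax errors before pushing a change by round-tripping a file through protoc (a sketch only; the schema file and message name below are assumptions, so substitute the ones your scheduler project defines):

$ protoc --proto_path=. --encode=ScheduleConfigMessage schedule.proto < schedule.config > /dev/null
# protoc prints a parse error and exits non-zero if the text format is invalid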

Expose build via Partner Android Build

If you want to download a required build artifact from Partner Android Build (PAB), please reach out to a point of contact at Android Partner Engineering.

In order to configure the VTS Dashboard and notification service, you need to complete several setup, configuration, and integration steps. Most parts of the VTS Dashboard are self-contained; the parts that are not depend on your own tool configurations.

Begin by completing the first section, Configure a Google App Engine project and deploy the VTS Dashboard. The second section, Configure the VTS runner to upload results to the VTS Dashboard, guides integration with existing services. The third section, Integration for display of coverage data, is needed only if the VTS Dashboard will be used to display coverage from test execution time and requires strong domain knowledge of existing internal web services. The last section, Monitoring, is needed only if an operator wants to monitor a deployed VTS Dashboard service.

The code for the VTS Dashboard is located under test/vti/dashboard, which was migrated from test/vts/web/dashboard after the Android 8.0 release. This codelab uses DASHBOARD_TOP to refer to the code's location, regardless of the Android version.

Configure a Google App Engine project and deploy the VTS Dashboard

This is a one-time setup, performed when the project is first deployed to the cloud. It does not need to be repeated unless the code under DASHBOARD_TOP changes, in which case the VTS Dashboard must be re-deployed following step 5.

Note: Most likely there will only be one VTS Dashboard instance in Google Cloud for an entire company, so one person or group should be selected to own the web project.

1. Create a Google App Engine project

Decide how many Google Compute Engine machines you would need in your cluster to balance your cost and performance constraints.

2. Configure the App Engine project on the Google Cloud Console

Using the Google Cloud Console UI, configure the project:

3. Prepare the deployment host

On the host machine that will deploy the project to the cloud, install the following dependencies:

  1. Install Java 8.
  2. Install Google Cloud SDK.
  3. Run the setup instructions to initialize gcloud and log in to the project you created in step 1.
  4. Install Maven. For more information about setting up the host and using Maven, refer to the App Engine documentation.

4. Specify project configurations

Fill out the project configuration file, DASHBOARD_TOP/pom.xml, with parameters from step 2.

5. Deploy the project

To test the project locally using the App Engine development server, from DASHBOARD_TOP run the command:

$ mvn clean appengine:devserver

To deploy the project to the cloud, from DASHBOARD_TOP run the command:

$ mvn clean appengine:update
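Once the deployment finishes, you can open the live service from the deployment host to spot-check it, assuming gcloud is initialized for the same project:

$ gcloud app browse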

For additional documentation regarding the Google Cloud App Engine setup, refer to Java on Google App Engine.

After completion of this first section, the web service should be up and running.

Configure the VTS runner to upload results to the VTS Dashboard

With the web service running, configure the VTS test runner to upload data to the correct place. Configuring the VTS test runner properly is important for any machine running VTS that should report to the web service. This section is important for the many VTS users who run VTS tests and upload test results to a VTS Dashboard, while the following section is only relevant to the admin of the web service. Changes to the VTS test runner can be made locally, for a per-machine configuration, or can be checked into the source tree so that every machine running VTS can post data to the Dashboard.

To configure the VTS test runner, make the following changes to test/vts/tools/vts-tradefed/res/default/DefaultTestCase.config:

{
  "service_key_json_path": "/networkdrive/vts/service_key.json",
  "dashboard_post_command": "wget --post-file=\'{path}\' <URL>/api/datastore"
}
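The {path} placeholder stands for the result file produced by the runner. To sanity-check the endpoint by hand, you can post an arbitrary file with the same command shape (a sketch; result.bin and the URL are stand-ins for your own values):

$ wget --post-file='result.bin' https://<your project>.appspot.com/api/datastore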

Results from continuous runs, which use integer test and device build IDs and run on a machine with access to the service key JSON, are automatically visible on the dashboard.

Local runs with custom test and/or device builds may also be visible for debugging purposes if tests are run on a machine with access to the service key file. To view local runs on the dashboard, add '&unfiltered=' to the end of the URL on the table summary page. This will show all results uploaded to the VTS Dashboard without performing any filtering of the build IDs.

Integration for display of coverage data (optional)

This section is important if the VTS Dashboard will be used to display coverage data from test runs.

The Dashboard service does not store source code or artifacts from device builds, so VTS must be configured to integrate with existing services. Completing this process will require knowledge of the other tools used within the company, such as proxies, firewalls, build server APIs, etc. The Gerrit REST API is standard, so minimal configuration is needed to integrate that with VTS Dashboard. However, the continuous build system is non-standard, so integrating will require domain knowledge.

The following steps must be done by the owner of the VTS Dashboard web service:

1. Specify project configurations

Fill out the following fields in DASHBOARD_TOP/pom.xml and redeploy the project following step 5 from the first section:

2. Configure cross-origin resource sharing

Configure the Gerrit server to allow cross-origin resource sharing (CORS) with the VTS Dashboard web service. In order to overlay coverage data on top of the source code, the VTS Dashboard needs to query Gerrit for the source code. The proxy or firewall may block requests from other services unless they are added to a whitelist. To enable the VTS Dashboard to query Gerrit, the Gerrit administrator will need the address of the VTS Dashboard, as configured in step 1, and an OAuth 2.0 client ID, from section 2.2. If there aren't any limitations on CORS, or if the services are hosted on the same domain, then no changes will be needed to allow communication between the services.

3. Configure the VTS Runner

After the web services are configured, configure the VTS runner on every machine running VTS to access the continuous build server. Provide the following keys with values in test/vts/tools/vts-tradefed/res/default/DefaultTestCase.config:

The VTS runner will query the build server on every test run to retrieve two build artifacts:

  1. <product>-coverage-<build ID>.zip -- the ZIP file produced automatically in the out/dist directory when building a coverage-instrumented device image.
  2. BUILD_INFO -- a JSON file describing the device build configuration. At minimum, the file must contain a dictionary of project names to commit identifiers under the key "repo-dict"; a minimal example follows.
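For reference, a minimal BUILD_INFO payload satisfying that requirement might look like the following (the project name and commit value are illustrative only):

{
  "repo-dict": {
    "platform/system/libhidl": "<commit id>"
  }
}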

Monitoring (optional)

Google Stackdriver

The Google App Engine project can be configured with Stackdriver to verify the health of the web service. Refer to Stackdriver Monitoring Documentation for details on setting up a monitoring project.

Create a Simple Uptime Check

  1. Go to the Stackdriver Monitoring console.
  2. Go to Alerting > Uptime Checks in the top menu and click Add Uptime Check. The Uptime Check panel will be displayed.
  3. Fill in the following fields for the uptime check, leaving all other fields with their default values:
       1. Check type: HTTP
       2. Resource type: Instance
       3. Applies to: Single, lamp-1-vm
  4. Click Test to verify the uptime check is working.
  5. Click Save.
  6. Fill out the configuration for notifications and click Save Policy.

Verify checks and notifications

To test the check and notifications, follow these steps (a command-line alternative is sketched after the list):

  1. Go to the VM Instances page in Google Compute Engine.
  2. Select an instance, and click Stop from the top menu.
  3. Wait up to five minutes for the next uptime check to fail. An email notification, as configured in the steps above, should be sent to notify the administrator of a service outage.
  4. Return to the VM Instances page.
  5. Select the stopped instance, and click Start from the top menu.

Google Analytics

The VTS Dashboard supports integration with Google Analytics so that a web administrator can monitor and analyze traffic. Setting up Analytics and integrating it with the VTS Dashboard is quick and simple:

  1. Create an Analytics account and generate a tracking ID in the project settings. Note that the only value needed is the code defined in <TRACKING_ID>. This may be included in an HTML/JavaScript code block similar to the following:
<script>
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

  ga('create', '<TRACKING_ID>', 'auto');
  ga('send', 'pageview');
</script>
  2. The VTS Dashboard administrator must supply the tracking ID value in DASHBOARD_TOP/pom.xml. Provide the tracking ID in the tag analytics.id:
<analytics.id><TRACKING_ID></analytics.id>
  3. Redeploy the project to Google App Engine, and Analytics will begin to track access and usage patterns.