In this codelab, you'll learn how to get started with testing for Android. To do this, you'll write features and tests for a modern real world Android application, a task app called, TO-DO Tasks, shown below:

What you'll learn

What you'll need

Download the Code

To get started, download the code:

Download Zip

Alternatively, you can clone the GitHub repository for the code:

$ git clone -b codelab2019 --single-branch

Run the sample app

Once you've downloaded the TO-DO Tasks app, open it in Android Studio and run it. It should compile, and you can explore the app. Its features include:

Testing Overview

If you're already well versed in testing concepts, feel free to skip to the next section.

The Testing Pyramid

When you made the first feature of your first app, you likely ran the code to verify that it worked as expected. This was a test, albeit a manual one. As you continued to add and update features, you probably kept running your code to verify it worked. But doing this manually every time is tiring, error-prone, and does not scale.

Luckily, computers are great at scaling and automation! What you'll learn in this codelab is how to create a variety of automated tests for your Android app. Automated tests are run by software and do not require you to manually operate the app to verify the code works. There are three attributes that are important to consider for automated testing:

Speed and fidelity are a trade-off: the faster a test, generally the lower its fidelity, and vice versa.

Using these attributes, you can break tests down into three categories:

The suggested proportion of these tests is often represented by a pyramid, with the vast majority of tests being unit tests.

You'll learn more about these tests when you write them in this codelab.

Testing terminology

You'll find a few other testing terms mentioned in this codelab.

Test Coverage: The percentage of your code that is executed by your tests. If you have 100 lines of code, and your tests run through 80 of them, then you have 80% coverage.

Test Driven Development (TDD): A school of programming thought that says instead of writing your feature code first, you write your tests first. Then you write your feature code with the goal of passing your tests. You can learn more about it here.

Test Doubles: Test doubles are objects that stand in for a real object, such as a networking class, when testing. You can swap in a fake networking class to provide speed and determinism at the expense of fidelity. Categories of test doubles include Fakes, Dummies, Mocks, and Spies. In this codelab, you will primarily use Fakes, and Mocks with Stubs (via Mockito). For more information, check out this post.
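To make the idea concrete, here is a minimal pure-Kotlin sketch of a fake. The names below are hypothetical, not classes from the TO-DO Tasks app:

```kotlin
// Hypothetical interface standing in for a real networking class.
interface UserService {
    fun fetchUserName(id: Int): String
}

// A fake: a working implementation backed by an in-memory map instead of
// real network requests. Fast and deterministic, at the expense of fidelity.
class FakeUserService : UserService {
    private val users = mutableMapOf(1 to "Ada", 2 to "Grace")
    override fun fetchUserName(id: Int): String =
        users[id] ?: error("No user with id $id")
}

fun main() {
    val service: UserService = FakeUserService()
    println(service.fetchUserName(1)) // prints "Ada"
}
```

Code that depends on `UserService` can be tested against `FakeUserService` without touching the network.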

Given, When, Then: Also known as "Arrange, Act, Assert." Each test will usually be written in three sections:
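To make the structure concrete, here's a minimal pure-Kotlin sketch of a test split into the three sections. The function and names are hypothetical, not from this codelab's app:

```kotlin
// A hypothetical function under test.
fun percentComplete(done: Int, total: Int): Float =
    if (total == 0) 0f else 100f * done / total

fun percentComplete_halfDone_returns50() {
    // Given (Arrange): 2 of 4 tasks are done
    val done = 2
    val total = 4

    // When (Act): the percentage is computed
    val result = percentComplete(done, total)

    // Then (Assert): the result is 50%
    check(result == 50f)
}

fun main() = percentComplete_halfDone_returns50()
```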

Architecting for Testing

In the last section, you learned about unit, integration, and end-to-end tests. In order to have healthy amounts of all three, it is important that your application is architected for testability.

For example, an extremely poorly-architected application might put all of its logic inside one method. This makes it difficult to test a single unit or feature, as every test must then go through the entire method, rather than just the part related to the unit or feature. A better approach would be to break down the application logic into multiple methods and classes, allowing each piece to be tested in isolation.

TO-DO Tasks follows the architecture from the Guide to App Architecture. If you're looking for a hands-on introduction to this architecture, you can do the Room with a View codelab.

The important features of a testable architecture are:

Each class should have a clearly defined purpose

Follow separation of concerns and ensure that each of your classes has a single responsibility. For example, following the Guide to App Architecture, fragments and activities are solely responsible for drawing views and hold no business logic.

Limit and be explicit about which classes know about other classes

Your classes should limit which other classes they interact with. In the Guide to App Architecture, for example, view models interact with the repository, and never directly with the data sources.

Use constructor injection

Support testability by allowing dependencies to be passed in rather than constructed internally. This allows you to easily swap out real implementations for test doubles.
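A minimal sketch of constructor injection (hypothetical names, not this project's classes):

```kotlin
// Hard to test: `class StatsPresenterBad { private val repo = RealRepository() }`
// constructs its dependency internally, so a test can't substitute it.

// Easy to test: the dependency is passed in through the constructor,
// so a test can hand in a test double instead of the real implementation.
interface Repository {
    fun taskCount(): Int
}

class StatsPresenter(private val repository: Repository) {
    fun summary(): String = "You have ${repository.taskCount()} tasks"
}

fun main() {
    // In a test, swap in a stub that returns a canned value.
    val stub = object : Repository {
        override fun taskCount() = 3
    }
    println(StatsPresenter(stub).summary()) // prints "You have 3 tasks"
}
```

In production code, the real repository is passed in at the same constructor; the class under test never knows the difference.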

Keep Android code out of the view model

Views, fragments, activities, and contexts are Android-specific code that usually requires running a test on a physical device or booting up an emulator to test with full fidelity. For that reason, and to avoid lifecycle leaks, we keep these classes in the UI layer and out of view models and business logic code. You'll learn more about the trade-offs of running tests on physical devices in the next section.

The Architecture of TO-DO Tasks

The application code is in the package that does not have anything in parentheses by its name; all other root packages you see contain code related to testing (which you'll learn about in the next section). Let's focus on the application code for now:

Following our goal of packaging by UI feature and having a shared data layer, we have the following packages:



The Add or edit a task screen: UI layer code for adding or editing a task.


The data layer: This deals with the data layer of the tasks. It contains the database, network and repository code.


The statistics screen: UI layer code for the statistics screen.


The task detail screen: UI layer code for a single task.


The tasks screen: UI layer code for the list of all tasks.


Utility classes: Shared classes used in various parts of the app, e.g. for handling file access.

Data layer

This app includes a simulated networking layer, in the remote package, and a database layer, in the local package. For simplicity, in this project we simulate the networking layer with just a HashMap and a delay rather than making real network requests.

The repository acts as a middleman between these two layers and is what returns data to the UI layer. The public interface for the repository is defined in TasksRepository.

UI layer

Each of the UI layer packages contains a fragment and a view model, along with any other classes that are required for the UI (such as an adapter for the task list). TaskActivity is the activity that contains all of the fragments.


Navigation for the app is controlled by the Navigation component. It is defined in the nav_graph.xml. Navigation is triggered in the view models using the Event class; the view models also determine what arguments to pass. The fragments observe the Events and do the actual navigation between screens.

Before you write your tests, let's see how we set up our environment for testing and run a test.

Where are the tests?

Source sets

This section assumes you are using the Project perspective in the Project pane. Switch over now if you're not.

You've already got a lot of tests, but where are they? In app, under src/ you should see these folders, which are known as source sets. Source sets are folders containing source code.

main is the default location for all of your actual source code. androidTest, test and sharedTest all contain your tests.

Local versus Instrumentation

The difference between androidTest, test, and sharedTest is how they are run.

When you're configuring and writing tests, a key choice is how those tests will run, because this affects the speed and fidelity of your tests. The source set where you put your tests is part of what determines how they are run. There's a divide between local tests (in the test source set) and instrumentation tests (in the androidTest source set):

Local tests: These tests are run locally on your development machine's JVM and do not require running on either an emulator or physical device. Because of this, they run fast, but their fidelity is lower. Unit tests are almost always local tests. Integration tests can be run locally or as instrumented tests. These tests are in the test source set.

Instrumentation tests: These tests run on real or emulated devices, so they reflect what will happen in the real world, but are also much slower. End-to-end tests are always instrumented tests that run on a real device. These tests are in the androidTest source set.

Robolectric and AndroidX Test

Two of the major libraries used for Android testing are related to this local versus instrumentation test divide:

Robolectric - Robolectric is a testing library that allows you to run Android framework and Android Jetpack code on the JVM. Essentially, Robolectric lets you run tests on the JVM that would otherwise have run as instrumentation tests on a real or virtual device, or that would have required you to mock out the Android framework yourself. Instead of you creating test doubles for Android framework code, Robolectric does this for you.

AndroidX Test - AndroidX Test is a general Android testing library that strives to bridge the gap between local testing and instrumentation testing. Its API is intended for use in both types of Android test, allowing the user in some cases to run the exact same test as both a local test on the JVM and as an instrumented test on an emulator or device. AndroidX benefits include:

Shared Tests

Using the AndroidX testing framework, we're able to make a source set for shared tests. This is what sharedTest is. This is not auto-generated by Android Studio for new projects - rather it's something we added.

sharedTest is a source set for tests that can be run either locally or as instrumentation tests. sharedTest is set up using this code in build.gradle(app) which adds the sharedTest source set to both test and androidTest:

android {
    sourceSets {
        String sharedTestDir = 'src/sharedTest/java'
        test {
            java.srcDir sharedTestDir
        }
        androidTest {
            java.srcDir sharedTestDir
        }
    }
}
In addition, there are a few other source sets in your app (mock and prod). These are used as a quick way to introduce a fake repository (via a class called ServiceLocator) when testing. The mechanism used to do this is described when you use them later in this codelab.

Run your first test

Let's run a test! In the test folder, find the statistics sub-package. There you'll find StatisticsUtilsTest.

Right click and select Run 'StatisticsUtilsTest'

The Run window will pop up and show the progress of running through every test in the class. Once done, you'll get a report of which tests passed and which failed. Green check marks indicate that all of these tests passed.

Good job! Next, you'll write your first test.

Unit tests verify the correct operation of small units of code. The scope of a unit test is kept as small as possible so that the code is tested exhaustively and failures give very fast feedback. Unit tests should run in milliseconds, and a big project will have thousands of them.

In this section we're going to verify the getActiveAndCompletedStats method, found in StatisticsUtils.kt.

internal fun getActiveAndCompletedStats(tasks: List<Task>?): StatsResult {

    return if (tasks == null || tasks.isEmpty()) {
        StatsResult(0f, 0f)
    } else {
        val totalTasks = tasks.size
        val numberOfActiveTasks = tasks.count { it.isActive }
        StatsResult(
            activeTasksPercent = 100f * numberOfActiveTasks / tasks.size,
            completedTasksPercent = 100f * (totalTasks - numberOfActiveTasks) / tasks.size
        )
    }
}

data class StatsResult(val activeTasksPercent: Float, val completedTasksPercent: Float)

This method returns a StatsResult instance with the percentage of each type of task, but if the list of tasks is empty or null, it returns zeros.

You'll find some unit tests in the StatisticsUtilsTest class. For example:

@Test
fun getActiveAndCompletedStats_both() {
    // Given 3 completed tasks and 2 active tasks
    val tasks = listOf(
        Task("title", "desc", isCompleted = true),
        Task("title", "desc", isCompleted = true),
        Task("title", "desc", isCompleted = true),
        Task("title", "desc", isCompleted = false),
        Task("title", "desc", isCompleted = false)
    )

    // When the list of tasks is computed
    val result = getActiveAndCompletedStats(tasks)

    // Then the result is 40-60
    assertThat(result.activeTasksPercent, `is`(40f))
    assertThat(result.completedTasksPercent, `is`(60f))
}
This is a JUnit 4 test that uses JUnit assertions to verify that a condition defined by matchers is true. The matchers are defined and combined with Hamcrest; this combination is widely used in the industry.

When this test is run, we verify that the calculation is correct for this particular case, in which we have both active and completed tasks. However, are we testing the whole method with it? A closer look at the method reveals that we're missing a whole branch: the case in which the tasks are null or the list is empty.

Often, figuring out whether we're covering all the cases is not so trivial. Luckily, tests can be run with coverage analysis, a tool that shows which lines are covered by the tests and which are not.

To run the test with coverage, just right-click on it and select the "with Coverage" option.

The test will run and a Coverage window will appear, showing some figures:

This matches what we were expecting! Our unit test is very narrowly scoped, covering only the StatisticsUtils file. However, it shows an 85% "Line" coverage on it. Click on the row to open the file.

A new coverage bar is displayed:

Green means that the test has covered the line. Red means it has not. As we anticipated, we are not testing the case in which tasks is null or the list is empty.

Next, you'll complete the getActiveAndCompletedStats_error test in which you'll test what happens when the method receives a null list of tasks.

    @Test
    fun getActiveAndCompletedStats_error() {
        // When there's an error loading stats
        val result = getActiveAndCompletedStats(null)

Now you need to verify that the result is what we expect: zeros in both active and completed tasks.

        // Then both active and completed tasks are 0
        assertThat(result.activeTasksPercent, `is`(0f))
        assertThat(result.completedTasksPercent, `is`(0f))
    }

Instead of running a single test with coverage, you can also run a whole test class with coverage and combine the results (actually, you can run any run configuration). Right click on the test class this time so coverage data of all tests are combined:

Android Studio might ask you if you want to add or replace active suites. This lets you add together coverage figures of different test runs. Choose Replace this time.

If you open up StatisticsUtils.kt, then you'll see the whole method has coverage:

Does this mean we are done? No, we're still missing a case: tasks.isEmpty().

Repeat the same test, this time with an emptyList():

@Test
fun getActiveAndCompletedStats_empty() {
    // When there are no tasks
    val result = getActiveAndCompletedStats(emptyList())

    // Then both active and completed tasks are 0
    assertThat(result.activeTasksPercent, `is`(0f))
    assertThat(result.completedTasksPercent, `is`(0f))
}

The app uses Kotlin coroutines for asynchronous operations. Since their release in 2018, the community has adopted them quickly, and they are a clear trend in Android development.

Asynchronous operations and concurrency are crucial in modern applications. Having a smooth user experience is not possible if your UI thread is busy with non-UI operations. Coroutines can help a lot with that, having an easier development model than the alternatives.

In the app, coroutines are launched from the ViewModel objects using the new viewModelScope (currently in alpha).

viewModelScope.launch {
    tasksRepository.getTask(taskId).let { result ->
        if (result is Success) {
            // ... handle the loaded task
        } else {
            // ... handle the error
        }
    }
}

Network and disk operations can be executed in parallel:

override suspend fun saveTask(task: Task) {
    // Do in memory cache update to keep the app UI up to date
    cacheAndPerform(task) {
        coroutineScope {
            launch { tasksRemoteDataSource.saveTask(it) }
            launch { tasksLocalDataSource.saveTask(it) }
        }
    }
}

These patterns are very convenient, but testing coroutines requires some work in our test fixture. In order to have repeatable, reliable tests, we need to undo everything we achieve with coroutines and flatten the operations into a synchronous sequence.


One of the first tools that you'll use from tests (both local and Android) is runBlockingTest. It starts a new coroutine, so it lets you call suspend functions, but it blocks until their completion. It's very common to use runBlockingTest from tests that call suspend functions. See an example in DefaultTasksRepositoryTest.
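The pattern looks like this in a sketch. It assumes kotlinx-coroutines-test and JUnit are on the test classpath; loadGreeting is a hypothetical suspend function, not code from this project:

```kotlin
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.delay
import kotlinx.coroutines.test.runBlockingTest
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical suspend function standing in for repository code.
suspend fun loadGreeting(): String {
    delay(1_000) // simulated network latency
    return "hello"
}

@ExperimentalCoroutinesApi
class GreetingTest {
    @Test
    fun loadGreeting_returnsHello() = runBlockingTest {
        // The test dispatcher auto-advances virtual time, so delay() is
        // skipped and the call completes immediately.
        assertEquals("hello", loadGreeting())
    }
}
```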


With this JUnit rule we replace the Main coroutine dispatcher with one from a TestCoroutineScope. This is going to force operations on the main dispatcher to be called synchronously, one after the other. See MainCoroutineRule.
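Such a rule commonly takes the following shape. The codelab ships its own MainCoroutineRule; this is a sketch of the typical pattern, not the project's exact file:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.test.TestCoroutineDispatcher
import kotlinx.coroutines.test.resetMain
import kotlinx.coroutines.test.setMain
import org.junit.rules.TestWatcher
import org.junit.runner.Description

@ExperimentalCoroutinesApi
class MainCoroutineRule(
    val dispatcher: TestCoroutineDispatcher = TestCoroutineDispatcher()
) : TestWatcher() {

    override fun starting(description: Description?) {
        super.starting(description)
        // From here on, anything launched on Dispatchers.Main (for example,
        // via viewModelScope) runs on this controllable test dispatcher.
        Dispatchers.setMain(dispatcher)
    }

    override fun finished(description: Description?) {
        super.finished(description)
        Dispatchers.resetMain()
        dispatcher.cleanupTestCoroutines()
    }
}
```

In a test class, it is applied with `@get:Rule var mainCoroutineRule = MainCoroutineRule()`.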

Architecture Components

Apart from preparing our tests for dealing with production code that uses coroutines, we also need to take into account a couple of Architecture Components that will affect our tests:


When testing Architecture Components, it's a good idea to add a JUnit rule to also flatten some calls made by them, to make the test deterministic. In our tests we use the InstantTaskExecutorRule which replaces the background executor used by the Architecture Components with one that executes each task synchronously.


The LiveData Architecture Component does not emit new values unless it's being observed. We use a mechanism in LiveDataTestUtil that gets its value in a safe way.
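The helper typically works by observing the LiveData just long enough to capture one value. The project's LiveDataTestUtil may differ in details; this is a common shape of such an extension function:

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.Observer
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

// Observe the LiveData, wait briefly for a value, then stop observing.
fun <T> LiveData<T>.getOrAwaitValue(timeoutSeconds: Long = 2): T {
    var value: T? = null
    val latch = CountDownLatch(1)
    val observer = object : Observer<T> {
        override fun onChanged(t: T) {
            value = t
            latch.countDown()
        }
    }
    observeForever(observer)
    try {
        // Fail loudly if the LiveData never emits, instead of hanging.
        if (!latch.await(timeoutSeconds, TimeUnit.SECONDS)) {
            throw TimeoutException("LiveData value was never set.")
        }
    } finally {
        removeObserver(observer)
    }
    @Suppress("UNCHECKED_CAST")
    return value as T
}
```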

Make your first test fail

One of the rules of test-driven development is that tests should fail first. This is to prevent tests that, no matter what the subject under test does, will always pass. With coroutines (and concurrency in general), this suggestion is even more important.

Open TaskDetailViewModelTest and look for the loadTask_loading test, which is empty.

In this test class, we're injecting a fake repository into the ViewModel so we don't have to deal with the database or network connections. It comes with a preloaded task:

@Before
fun setupViewModel() {
    tasksRepository = FakeRepository()

    taskDetailViewModel = TaskDetailViewModel(tasksRepository)
}

Let's look at how the ViewModel loads data first:

fun start(taskId: String?) {
    _dataLoading.value = true

    viewModelScope.launch {
        if (taskId != null) {
            tasksRepository.getTask(taskId, false).let { result ->
                if (result is Success) {
                    // ... expose the task to the UI
                } else {
                    // ... show an error
                }
            }
        }
        _dataLoading.value = false
    }
}
The first thing start does is set dataLoading to true. Then it launches a coroutine that calls the repository. When done, it sets dataLoading to false. This LiveData object controls the loading indicator on the screen.

In our test we're going to verify that the loading indicator is initially enabled and that it's disabled when loading finishes.

First, initialize the ViewModel:

@Test
fun loadTask_loading() {
    // Load the task in the viewmodel
    taskDetailViewModel.start(task.id)

Now, let's verify that the dataLoading LiveData is initially enabled and then disabled:

    // Progress indicator is shown
    assertThat(taskDetailViewModel.dataLoading.getOrAwaitValue(), `is`(true))

    // Progress indicator is hidden
    assertThat(taskDetailViewModel.dataLoading.getOrAwaitValue(), `is`(false))

This piece of code makes no sense! We're checking two different values for the same LiveData at the same time.

Run the test (by right-clicking on the method and clicking "Run") and you'll see that it fails:

We need a way to pause the execution of the coroutine in the ViewModel and verify the initial LiveData value before continuing. Luckily, we can do this with a TestCoroutineDispatcher, which is available through the MainCoroutineRule. This JUnit rule simply replaces the Main dispatcher (which would normally use Android's Main thread) with the test dispatcher. The Main dispatcher is used when you launch a coroutine from the viewModelScope.

With the TestCoroutineDispatcher in place, now we can call pauseDispatcher and resumeDispatcher at will. We'll start the test with a paused dispatcher and resume it after we've verified that the initial value of the dataLoading LiveData is correct:

@Test
fun loadTask_loading() {
    // Pause dispatcher so we can verify initial values
    mainCoroutineRule.pauseDispatcher()

    // Load the task in the viewmodel
    taskDetailViewModel.start(task.id)

    // Then progress indicator is shown
    assertThat(taskDetailViewModel.dataLoading.getOrAwaitValue(), `is`(true))

    // Execute pending coroutines actions
    mainCoroutineRule.resumeDispatcher()

    // Then progress indicator is hidden
    assertThat(taskDetailViewModel.dataLoading.getOrAwaitValue(), `is`(false))
}

Run the test again. Now the test passes!

In the middle of the testing pyramid we put the integration tests.

In this section, we'll focus on tests that verify fragments in isolation. These tests use the new FragmentScenario.

UI Testing on Android with Espresso

AndroidX Test contains the Espresso testing framework, which provides APIs to interact with Views. Espresso has a fluent, concise, and readable API for writing functional UI tests and was built to make UI testing as frictionless as possible, so you can focus on writing tests instead of dealing with unreliable, flaky ones.

View Matching and Assertions with Espresso

Espresso tests are written based on what a user might do while interacting with your app. The key concepts are locating and interacting with UI elements. The first step is to find a View you are interested in, then check its state or interact with it.

ViewMatchers select Views in the current view hierarchy. The most common ones are withId(...) (that finds Views with a specific ID) and withText(...) (that finds Views with a specific text), but there are many others, including matchers for state (selected, focused, enabled), content description and hierarchy (root and children), among others.

ViewActions are actions that can be performed on a View (for example click).

ViewAssertions are passed to the check(...) method of a ViewInteraction to verify the state of a View.
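Put together, a typical interaction reads almost like a sentence. The view ID and text below are illustrative, not guaranteed to match this app's layouts:

```kotlin
// Find a view by its (illustrative) id, interact with it, then assert.
onView(withId(R.id.add_task_fab))      // ViewMatcher: locate the view
    .perform(click())                  // ViewAction: interact with it

onView(withText("Title"))              // ViewMatcher: locate a view by text
    .check(matches(isDisplayed()))     // ViewAssertion: verify its state
```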

Set up Android Studio for Android instrumentation tests

Unlike local unit tests, instrumentation tests must be placed in the androidTest source set (app/src/androidTest/java/...). Don't worry - when you create an Android Studio project, this source set is created for you.

Source sets, along with Gradle configuration settings, can be combined to make a build variant. A build variant is a specific variation of the app's code and configuration for a purpose, such as testing. If you click on the Build Variants tab, you can see a list of build variants.

The mock and prod source sets are combined with main to produce mockDebug, prodDebug and prodRelease. These variants allow us to produce slightly different code for testing versus production:

mockDebug: Uses the mock source set to inject a fake remote data repository where tasks are stored. Check out the ServiceLocator class definition in mock to see how this is done.

prodDebug: Debuggable build that uses the prod source set to inject the ‘real' production data repository. Check out the ServiceLocator class definition in prod to see how this is done.

prodRelease: Shrunk (ProGuard-enabled) release build that uses the ‘real' data repository.

The first test we are writing is using the mockDebug variant - make sure to select it for our test artifact before continuing. (Different tests and directories are activated based on the variant you have selected!)

Testing navigation

Open TasksFragmentTest and look for clickAddTaskButton_navigateToAddEditFragment.

@Test
fun clickAddTaskButton_navigateToAddEditFragment() {
    // GIVEN - On the home screen
    // WHEN - Click on the "+" button
    // THEN - Verify that we navigate to the add screen
}

In this test we need to verify that clicking on the "+" button sends the user to the add/edit screen, but we don't need to actually go there. In this project we're using the Navigation Architecture Component, which uses a NavController to manage app navigation. There isn't a fake version of NavController provided for testing. We could make our own, but in this case it's perfectly reasonable to use a mocking framework, like Mockito.

We'll use it to create a mock:

val navController = mock(NavController::class.java)

We can associate our new mock with the view's NavController:

scenario.onFragment {
    Navigation.setViewNavController(it.view!!, navController)
}

Navigate to the add/edit screen using the Espresso API by clicking on the FAB:

onView(withId(R.id.add_task_fab)).perform(click())
And finally, check that the mock received the navigation call with the correct parameters:

verify(navController).navigate(
    TasksFragmentDirections.actionTasksFragmentToAddEditTaskFragment(
        null, getApplicationContext<Context>().getString(R.string.add_task)))

But before all this, we need to start the fragment somehow. We could start the TasksActivity but we're not doing an end-to-end test so we're not interested in the drawer, toolbar, etc. We just need to interact with the tasks fragment. With the FragmentScenario we can launch any fragment in a container (which is nothing but an empty activity):

val scenario = launchFragmentInContainer<TasksFragment>(Bundle(), R.style.AppTheme)

The final test should look like this:

@Test
fun clickAddTaskButton_navigateToAddEditFragment() {
    // Given a user on the home screen
    val scenario = launchFragmentInContainer<TasksFragment>(Bundle(), R.style.AppTheme)
    val navController = mock(NavController::class.java)
    scenario.onFragment {
        Navigation.setViewNavController(it.view!!, navController)
    }

    // When the FAB is clicked
    onView(withId(R.id.add_task_fab)).perform(click())

    // Then verify that we navigate to the add screen
    verify(navController).navigate(
        TasksFragmentDirections.actionTasksFragmentToAddEditTaskFragment(
            null, getApplicationContext<Context>().getString(R.string.add_task)))
}

If you right-click on the class name and run it, Android Studio will pick it up as instrumentation tests. Pick a device/emulator and you'll see the results:

These tests can also be run with Robolectric. You'll read how in the next step.

Testing data interactions

We want to verify that data coming from the data layer is correctly displayed and that actions from users end up calling the repository. We're going to create a test for the add/edit screen that saves a task.

Open AddEditTaskFragmentTest and look for validTask_isSaved. The test is already starting the fragment in isolation and saving a new task.

    @Test
    fun validTask_isSaved() {
        // GIVEN - On the "Add Task" screen.
        val navController = mock(NavController::class.java)
        // ...

        // WHEN - Valid title and description combination and click save
        // ...
You only have to complete the verification. The repository should have stored the task:

        // THEN - Verify that the repository saved the task
        val tasks = (repository.getTasksBlocking(true) as Result.Success).data
        assertEquals(1, tasks.size)
        assertEquals("title", tasks[0].title)
        assertEquals("description", tasks[0].description)
    }

Good job!

UI tests can be written for instrumented tests, for Robolectric or for both. All tests in the sharedTest/ source set can be run anywhere thanks to the Unified APIs (read blog post: Write Once, Run Everywhere Tests on Android).

Running Robolectric tests in Android Studio

If you run a test in sharedTest/ from Android Studio, it's considered an instrumented test (i.e. it will require a device/emulator). However, you can also run it in Robolectric from Android Studio:

For that, create a new Run Configuration:

Select "Android JUnit" type:

Choose the "app" module first:

And then choose the TasksFragmentTest class:

You can also pick a method, category, directory, etc.

Optionally, add "Robolectric" to the run configuration name to differentiate it from instrumented tests.

If you run the new configuration, you'll find that the first test takes longer than the rest:

We're using Robolectric to run the tests without an emulator or device, so it needs more time to start up. However, tests can be faster and more predictable since they don't need to rely on a device or emulator.

Running tests from command line

From the command line, running the test task will pick up all tests in test/ and sharedTest/, which will be run on your local machine. The ones in sharedTest/ will use Robolectric.

./gradlew test

On the other hand, if you run "connected" tests, all tests in androidTest/ and sharedTest/ will be run on a device/emulator:

./gradlew connectedMockDebug

Tests that verify the proper operation of your app using as many real components as possible are called end-to-end tests and they are usually UI tests. There are many reasons why you would write these tests:

End-to-end tests are slower and less isolated, so they are not run as often and can be more flaky than integration and unit tests. They're also less focused (meaning bugs are harder to track down), but these are the tests that give you the confidence that your app works as a whole.

Create a navigation test

In this new end-to-end test we're going to test a happy path:

Open TasksActivityTest. Let's take a look at the setup first. We do get a reference to the repository but this is not to verify anything, only to pre-load the repository with tasks to make the tests run faster.

@Before
fun resetState() {
    repository = ServiceLocator.provideTasksRepository(getApplicationContext())
    // ... pre-load the repository with tasks
}

In the setup there's also an important component: the Idling Resource. It's a way to tell Espresso when your app is in an idle state. This helps Espresso to synchronize your test actions, which makes tests significantly more reliable.

@Before
fun registerIdlingResource() {
    IdlingRegistry.getInstance().register(EspressoIdlingResource.countingIdlingResource)
}
If you open DefaultTasksRepository, you'll find that when getTasks is called, we mark the app as busy until the data is received. We use a counter in EspressoIdlingResource that tracks the number of operations in flight: if it's zero, the app is idle.

    EspressoIdlingResource.increment() // Set app as busy.

    val newTasks = fetchTasksFromRemoteOrLocal(forceUpdate)

    // Refresh the cache with the new tasks
    (newTasks as? Success)?.let { refreshCache(it.data) }

    EspressoIdlingResource.decrement() // Set app as idle.
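The idea behind that counter can be sketched in plain Kotlin. EspressoIdlingResource itself wraps Espresso's CountingIdlingResource; the class below is a simplified, hypothetical stand-in showing only the counting logic:

```kotlin
import java.util.concurrent.atomic.AtomicInteger

// Simplified stand-in for a counting idling resource: the app is "idle"
// only when no tracked operations are in flight.
class OperationCounter {
    private val inFlight = AtomicInteger(0)

    fun increment() { inFlight.incrementAndGet() }   // an operation started
    fun decrement() { inFlight.decrementAndGet() }   // an operation finished

    val isIdle: Boolean
        get() = inFlight.get() == 0                  // Espresso may proceed
}

fun main() {
    val counter = OperationCounter()
    counter.increment()
    println(counter.isIdle) // prints "false": a fetch is in flight
    counter.decrement()
    println(counter.isIdle) // prints "true": safe to run assertions
}
```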

The Data Binding library uses a mechanism to post messages which Espresso doesn't track yet. We use an Idling Resource to report idle status for all data binding layouts. See the DataBindingIdlingResource class.

Now let's create our end-to-end test. Look for the createTask test.

@Test
fun createTask() {
    // Start up Tasks screen
    // TODO

    // Click on the "+" button, add details, and save
    // TODO

    // Then verify task is displayed on screen
    // TODO
}

TasksFragment is the first screen in TasksActivity so we can simply start the activity from the test:

    // Start up Tasks screen
    val activityScenario = ActivityScenario.launch(TasksActivity::class.java)

In end-to-end tests we don't know about coroutines or ViewModels so we're simply going to perform Espresso actions and do Espresso checks:

    // Click on the "+" button, add details, and save
    onView(withId(R.id.add_task_fab)).perform(click())
    onView(withId(R.id.add_task_title)).perform(typeText("title"), closeSoftKeyboard())
    // ... type a description and click the save button

    // Then verify task is displayed on screen
    onView(withText("title")).check(matches(isDisplayed()))

And now the test passes:

Take a look at the rest of the tests in TasksActivityTest and in AppNavigationTest.


You have now learned the basics of testing on Android!

In this codelab we have covered testing from a few different angles - from functional unit tests to instrumented UI tests with Espresso and Robolectric. Developing an application from a test-driven point of view like we did here can help you define a clean architecture and establish clear communication between different components of your application.

What we've covered

Next Steps

Learn More