Generative AI text generation in Java with PaLM and LangChain4J

1. Introduction

Last Updated: 2023-11-27

What is Generative AI

Generative AI or generative artificial intelligence refers to the use of AI to create new content, like text, images, music, audio, and videos.

Generative AI is powered by foundation models (large AI models) that can multi-task and perform out-of-the-box tasks, including summarization, Q&A, classification, and more. Plus, with minimal training required, foundation models can be adapted for targeted use cases with very little example data.

How does Generative AI work?

Generative AI works by using an ML (Machine Learning) model to learn the patterns and relationships in a dataset of human-created content. It then uses the learned patterns to generate new content.

The most common way to train a generative AI model is to use supervised learning — the model is given a set of human-created content and corresponding labels. It then learns to generate content that is similar to the human-created content and labeled with the same labels.

What are common Generative AI applications?

Generative AI can process vast amounts of content and produce insights and answers via text, images, and other user-friendly formats. Generative AI can be used to:

  • Improve customer interactions through enhanced chat and search experiences
  • Explore vast amounts of unstructured data through conversational interfaces and summarizations
  • Assist with repetitive tasks like replying to requests for proposals (RFPs), localizing marketing content in five languages, checking customer contracts for compliance, and more

What Generative AI offerings does Google Cloud have?

With Vertex AI, interact with, customize, and embed foundation models into your applications — little to no ML expertise required. Access foundation models on Model Garden, tune models via a simple UI on Generative AI Studio, or use models in a data science notebook.

Vertex AI Search and Conversation offers developers the fastest way to build generative AI-powered search engines and chatbots.

And, Duet AI is your AI-powered collaborator available across Google Cloud and IDEs to help you get more done, faster.

What is this codelab focusing on?

This codelab focuses on the PaLM 2 Large Language Model (LLM), hosted on Google Cloud Vertex AI, the platform that encompasses all the machine learning products and services.

You will use Java to interact with the PaLM API, in conjunction with the LangChain4J LLM framework orchestrator. You'll go through different concrete examples to take advantage of the LLM for question answering, idea generation, entity and structured content extraction, and summarization.

Tell me more about the LangChain4J framework!

The LangChain4J framework is an open source library for integrating large language models in your Java applications, by orchestrating various components, such as the LLM itself, but also other tools like vector databases (for semantic searches), document loaders and splitters (to analyze documents and learn from them), output parsers, and more.


What you'll learn

  • How to set up a Java project to use PaLM and LangChain4J
  • How to make your first call to the PaLM text model to generate content and answer questions
  • How to extract useful information from unstructured content (entity or keyword extraction, output in JSON)
  • How to do content classification or sentiment analysis with few shots prompting

What you'll need

  • Knowledge of the Java programming language
  • A Google Cloud project
  • A browser, such as Chrome or Firefox

2. Setup and requirements

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
  • The Project ID is unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one, or try your own and see if it's available. It remains for the duration of the project.
  • For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Cloud Shell, a command line environment running in the Cloud.

Activate Cloud Shell

  1. From the Cloud Console, click Activate Cloud Shell.


If this is your first time starting Cloud Shell, you're presented with an intermediate screen describing what it is. If so, click Continue.


It should only take a few moments to provision and connect to Cloud Shell.


This virtual machine is loaded with all the development tools needed. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with a browser.

Once connected to Cloud Shell, you should see that you are authenticated and that the project is set to your project ID.

  2. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  3. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If the project is not set, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

3. Preparing your development environment

In this codelab, you're going to use the Cloud Shell terminal and code editor to develop your Java programs.

Enable Vertex AI APIs

  1. In the Google Cloud console, make sure your project name is displayed at the top of your Google Cloud console. If it's not, click Select a project to open the Project Selector, and select your intended project.
  2. If you aren't in the Vertex AI portion of the Google Cloud console, do the following:
     • In Search, enter Vertex AI, then press Return.
     • In the search results, click Vertex AI. The Vertex AI dashboard appears.
  3. Click Enable All Recommended APIs in the Vertex AI dashboard.

This will enable several APIs, but the most important one for this codelab is aiplatform.googleapis.com, which you can also enable on the command line, in the Cloud Shell terminal, by running the following command:

$ gcloud services enable aiplatform.googleapis.com

Creating the project structure with Gradle

In order to build your Java code examples, you'll be using the Gradle build tool and version 17 of Java. To set up your project with Gradle, in the Cloud Shell terminal, create a directory (here, palm-workshop) and run the gradle init command in that directory:

$ mkdir palm-workshop
$ cd palm-workshop

$ gradle init

Select type of project to generate:
  1: basic
  2: application
  3: library
  4: Gradle plugin
Enter selection (default: basic) [1..4] 2

Select implementation language:
  1: C++
  2: Groovy
  3: Java
  4: Kotlin
  5: Scala
  6: Swift
Enter selection (default: Java) [1..6] 3

Split functionality across multiple subprojects?:
  1: no - only one application project
  2: yes - application and library projects
Enter selection (default: no - only one application project) [1..2] 1

Select build script DSL:
  1: Groovy
  2: Kotlin
Enter selection (default: Groovy) [1..2] 1

Generate build using new APIs and behavior (some features may change in the next minor release)? (default: no) [yes, no] 

Select test framework:
  1: JUnit 4
  2: TestNG
  3: Spock
  4: JUnit Jupiter
Enter selection (default: JUnit Jupiter) [1..4] 4

Project name (default: palm-workshop): 
Source package (default: palm.workshop): 

> Task :init
Get more help with your project: https://docs.gradle.org/7.4/samples/sample_building_java_applications.html

BUILD SUCCESSFUL in 51s
2 actionable tasks: 2 executed

You will build an application (option 2), using the Java language (option 3), without subprojects (option 1), using the Groovy syntax for the build file (option 1), without the new build features (option no), and generating tests with JUnit Jupiter (option 4). For the project name you can use palm-workshop, and similarly for the source package you can use palm.workshop.

The project structure will look as follows:

├── gradle 
│   └── ...
├── gradlew 
├── gradlew.bat 
├── settings.gradle 
└── app
    ├── build.gradle 
    └── src
        ├── main
        │   └── java 
        │       └── palm
        │           └── workshop
        │               └── App.java
        └── test
            └── ...

Let's update the app/build.gradle file to add some needed dependencies. Remove the guava dependency if it is present, and replace it with the dependencies for the LangChain4J project, plus a logging library to avoid nagging messages about a missing logger:

dependencies {
    // Use JUnit Jupiter for testing.
    testImplementation 'org.junit.jupiter:junit-jupiter:5.8.1'

    // Logging library
    implementation 'org.slf4j:slf4j-jdk14:2.0.9'

    // This dependency is used by the application.
    implementation 'dev.langchain4j:langchain4j-vertex-ai:0.24.0'
    implementation 'dev.langchain4j:langchain4j:0.24.0'
}

There are two dependencies for LangChain4J:

  • one on the core project,
  • and one for the dedicated Vertex AI module.

In order to use Java 17 for compiling and running our programs, add the following block below the plugins {} block:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
    }
}

One more change to make: update the application block of app/build.gradle, to let users be able to override the main class to run on the command-line when invoking the build tool:

application {
    mainClass = providers.systemProperty('javaMainClass')
                         .orElse('palm.workshop.App')
}

To check that your build file is ready to run your application, you can run the default main class which prints a simple Hello World! message:

$ ./gradlew run -DjavaMainClass=palm.workshop.App

> Task :app:run
Hello World!

BUILD SUCCESSFUL in 3s
2 actionable tasks: 2 executed

Now you are ready to program with the PaLM large language text model, by using the LangChain4J project!

For reference, here's what the full app/build.gradle build file should look like now:

plugins {
    // Apply the application plugin to add support for building a CLI application in Java.
    id 'application'
}

java {
    toolchain {
        // Ensure we compile and run on Java 17
        languageVersion = JavaLanguageVersion.of(17)
    }
}

repositories {
    // Use Maven Central for resolving dependencies.
    mavenCentral()
}

dependencies {
    // Use JUnit Jupiter for testing.
    testImplementation 'org.junit.jupiter:junit-jupiter:5.8.1'

    // This dependency is used by the application.
    implementation 'dev.langchain4j:langchain4j-vertex-ai:0.24.0'
    implementation 'dev.langchain4j:langchain4j:0.24.0'
    implementation 'org.slf4j:slf4j-jdk14:2.0.9'
}

application {
    mainClass = providers.systemProperty('javaMainClass').orElse('palm.workshop.App')
}

tasks.named('test') {
    // Use JUnit Platform for unit tests.
    useJUnitPlatform()
}

4. Making your first call to PaLM's text model

Now that the project is properly set up, it is time to call the PaLM API.

Create a new class called TextPrompts.java in the app/src/main/java/palm/workshop directory (alongside the default App.java class), and type the following content:

package palm.workshop;

import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.vertexai.VertexAiLanguageModel;

public class TextPrompts {
    public static void main(String[] args) {
        VertexAiLanguageModel model = VertexAiLanguageModel.builder()
            .endpoint("us-central1-aiplatform.googleapis.com:443")
            .project("YOUR_PROJECT_ID")
            .location("us-central1")
            .publisher("google")
            .modelName("text-bison@001")
            .maxOutputTokens(500)
            .build();

        Response<String> response = model.generate("What are large language models?");

        System.out.println(response.content());
    }
}

In this first example, you need to import the Response class, and the Vertex AI language model for PaLM.

Next, in the main method, you're going to configure the language model, by using the builder for the VertexAiLanguageModel, to specify:

  • the endpoint,
  • the project,
  • the region,
  • the publisher,
  • and the name of the model (text-bison@001).

Now that the language model is ready, you can call the generate() method and pass your "prompt" (i.e. your question or instructions to send to the LLM). Here, you ask a simple question about what LLMs are. Feel free to change this prompt to try different questions or tasks.

To run this class, run the following command in the Cloud Shell terminal:

./gradlew run -DjavaMainClass=palm.workshop.TextPrompts

You should see an output similar to this one:

Large language models (LLMs) are artificial intelligence systems that can understand and generate human language. They are trained on massive datasets of text and code, and can learn to perform a wide variety of tasks, such as translating languages, writing different kinds of creative content, and answering your questions in an informative way.

LLMs are still under development, but they have the potential to revolutionize many industries. For example, they could be used to create more accurate and personalized customer service experiences, to help doctors diagnose and treat diseases, and to develop new forms of creative expression.

However, LLMs also raise a number of ethical concerns. For example, they could be used to create fake news and propaganda, to manipulate people's behavior, and to invade people's privacy. It is important to carefully consider the potential risks and benefits of LLMs before they are widely used.

Here are some of the key features of LLMs:

* They are trained on massive datasets of text and code.
* They can learn to perform a wide variety of tasks, such as translating languages, writing different kinds of creative content, and answering your questions in an informative way.
* They are still under development, but they have the potential to revolutionize many industries.
* They raise a number of ethical concerns, such as the potential for fake news, propaganda, and invasion of privacy.

The VertexAiLanguageModel builder lets you define optional parameters which already have some default values that you can override. Here are some examples:

  • .temperature(0.2) — to define how creative you want the response to be (0 for less creative, often more factual responses, while 1 yields more creative outputs)
  • .maxOutputTokens(50) — in the example, 500 tokens were requested (3 tokens are roughly equivalent to 4 words); adjust it depending on how long you want the generated answer to be
  • .topK(20) — to randomly select a word out of a maximum number of probable words for the text completion (from 1 to 40)
  • .topP(0.95) — to select the possible words whose total probability adds up to that floating point number (between 0 and 1)
  • .maxRetries(3) — in case you're running past the request quota, you can have the model retry the call, 3 times in this example

Large language models are very powerful: they can answer complex questions and handle a wide variety of interesting tasks. In the next section, we'll have a look at a useful one: extracting structured data from text.

5. Extracting information from unstructured text

In the previous section, you generated some text output. This is fine if you want to directly show this output to your end-users. But if you want to retrieve the data that is mentioned in this output, how do you extract that information from the unstructured text?

Let's say you want to extract the name and age of a person, given a biography or description of that person. You can instruct the large language model to generate JSON data structures by tweaking the prompt as follows (this is commonly called "prompt engineering"):

Extract the name and age of the person described below.

Return a JSON document with a "name" and an "age" property, 
following this structure: {"name": "John Doe", "age": 34}
Return only JSON, without any markdown markup surrounding it.

Here is the document describing the person:
---
Anna is a 23 year old artist based in Brooklyn, New York. She was 
born and raised in the suburbs of Chicago, where she developed a 
love for art at a young age. She attended the School of the Art 
Institute of Chicago, where she studied painting and drawing. 
After graduating, she moved to New York City to pursue her art career. 
Anna's work is inspired by her personal experiences and observations 
of the world around her. She often uses bright colors and bold lines 
to create vibrant and energetic paintings. Her work has been 
exhibited in galleries and museums in New York City and Chicago.
---

JSON: 

Modify the model.generate() call in the TextPrompts class to pass it the whole text prompt above:

Response<String> response = model.generate("""
    Extract the name and age of the person described below.
    Return a JSON document with a "name" and an "age" property, \
    following this structure: {"name": "John Doe", "age": 34}
    Return only JSON, without any markdown markup surrounding it.
    Here is the document describing the person:
    ---
    Anna is a 23 year old artist based in Brooklyn, New York. She was born and 
    raised in the suburbs of Chicago, where she developed a love for art at a 
    young age. She attended the School of the Art Institute of Chicago, where 
    she studied painting and drawing. After graduating, she moved to New York 
    City to pursue her art career. Anna's work is inspired by her personal 
    experiences and observations of the world around her. She often uses bright 
    colors and bold lines to create vibrant and energetic paintings. Her work 
    has been exhibited in galleries and museums in New York City and Chicago.    
    ---
    JSON: 
    """
);

If you run this prompt in your TextPrompts class, it should return the following JSON string, which you could parse with a JSON parser like the GSON library:

$ ./gradlew run -DjavaMainClass=palm.workshop.TextPrompts

> Task :app:run
{"name": "Anna", "age": 23}

BUILD SUCCESSFUL in 24s
2 actionable tasks: 1 executed, 1 up-to-date

Yes! Anna is 23!
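Since the Java standard library has no built-in JSON parser, a real application would hand the response to a library like Gson or Jackson. As a minimal, dependency-free sketch of the idea, the two fields can be pulled out of the known response shape with regular expressions (the Person record and parse helper below are invented for this illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: extract the "name" and "age" fields from the model's JSON
// answer with regular expressions. A real application should prefer a proper
// JSON library such as Gson or Jackson.
public class JsonExtractionSketch {

    record Person(String name, int age) {}

    static Person parse(String json) {
        Matcher name = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        Matcher age = Pattern.compile("\"age\"\\s*:\\s*(\\d+)").matcher(json);
        if (!name.find() || !age.find()) {
            throw new IllegalArgumentException("Unexpected JSON shape: " + json);
        }
        return new Person(name.group(1), Integer.parseInt(age.group(1)));
    }

    public static void main(String[] args) {
        // The JSON string returned by the model in the example above
        Person person = parse("{\"name\": \"Anna\", \"age\": 23}");
        System.out.println(person.name() + " is " + person.age()); // Anna is 23
    }
}
```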

6. Prompt templates and structured prompts

Beyond question answering

Large language models like PaLM are powerful at answering questions, but you can use them for many more tasks! For example, try the following prompts in Generative AI Studio (or by modifying the TextPrompts class). Replace the uppercase words with your own ideas, and examine the output:

  • Translation — "Translate the following sentence into French: YOUR_SENTENCE_HERE"
  • Summarization — "Provide a summary of the following document: PASTE_YOUR_DOC"
  • Creative generation — "Write a poem about TOPIC_OF_THE_POEM"
  • Programming — "How to write a Fibonacci function in PROGRAMMING_LANGUAGE?"

Prompt templates

If you tried the above prompts for translation, summarization, creative generation or programming tasks, you replaced the placeholder values with your own ideas. But instead of doing some string mangling, you can also take advantage of "prompt templates", which let you define those placeholder values, and fill in the blank afterwards with your data.

Let's have a look at a yummy and creative prompt, by replacing the whole content of the main() method with the following code:

VertexAiLanguageModel model = VertexAiLanguageModel.builder()
            .endpoint("us-central1-aiplatform.googleapis.com:443")
            .project("YOUR_PROJECT_ID")
            .location("us-central1")
            .publisher("google")
            .modelName("text-bison@001")
            .maxOutputTokens(300)
            .build();

PromptTemplate promptTemplate = PromptTemplate.from("""
    Create a recipe for a {{dish}} with the following ingredients: \
    {{ingredients}}, and give it a name.
    """
);

Map<String, Object> variables = new HashMap<>();
variables.put("dish", "dessert");
variables.put("ingredients", "strawberries, chocolate, whipped cream");

Prompt prompt = promptTemplate.apply(variables);

Response<String> response = model.generate(prompt);

System.out.println(response.content());

And by adding the following imports:

import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;

import java.util.HashMap;
import java.util.Map;

Then run the application again. The output should look something like what follows:

$ ./gradlew run -DjavaMainClass=palm.workshop.TextPrompts

> Task :app:run
**Strawberry Shortcake**

Ingredients:

* 1 pint strawberries, hulled and sliced
* 1/2 cup sugar
* 1/4 cup cornstarch
* 1/4 cup water
* 1 tablespoon lemon juice
* 1/2 cup heavy cream, whipped
* 1/4 cup confectioners' sugar
* 1/4 teaspoon vanilla extract
* 6 graham cracker squares, crushed

Instructions:

1. In a medium saucepan, combine the strawberries, sugar, cornstarch, water, and lemon juice. Bring to a boil over medium heat, stirring constantly. Reduce heat and simmer for 5 minutes, or until the sauce has thickened.
2. Remove from heat and let cool slightly.
3. In a large bowl, combine the whipped cream, confectioners' sugar, and vanilla extract. Beat until soft peaks form.
4. To assemble the shortcakes, place a graham cracker square on each of 6 dessert plates. Top with a scoop of whipped cream, then a spoonful of strawberry sauce. Repeat layers, ending with a graham cracker square.
5. Serve immediately.

**Tips:**

* For a more elegant presentation, you can use fresh strawberries instead of sliced strawberries.
* If you don't have time to make your own whipped cream, you can use store-bought whipped cream.

Delicious!

With prompt templates, you can feed the required parameters before calling the text generation method. This is a great way to pass data and customize prompts for different values provided by your users.

As the name of the class suggests, the PromptTemplate class creates a template prompt, and you can assign values to the placeholder elements by applying a map of placeholder names and values.
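Under the hood, the idea is simple: each {{placeholder}} in the template is substituted with the corresponding value from the map. As a rough, hand-rolled sketch of that mechanism (the TemplateSketch class and its apply helper are invented here, and skip the validation and escaping a real implementation needs):

```java
import java.util.Map;

// Conceptual sketch of what a prompt template does: replace each
// {{placeholder}} with the matching value from a map of variables.
// LangChain4J's PromptTemplate provides this (and more) out of the box.
public class TemplateSketch {

    static String apply(String template, Map<String, Object> variables) {
        String result = template;
        for (var entry : variables.entrySet()) {
            result = result.replace("{{" + entry.getKey() + "}}",
                                    String.valueOf(entry.getValue()));
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "Create a recipe for a {{dish}} with: {{ingredients}}.";
        String prompt = apply(template, Map.of(
            "dish", "dessert",
            "ingredients", "strawberries, chocolate, whipped cream"));
        System.out.println(prompt);
        // Create a recipe for a dessert with: strawberries, chocolate, whipped cream.
    }
}
```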

Structured prompts (OPTIONAL)

Another way to structure your prompts is with the @StructuredPrompt annotation, if you want to use a richer object-oriented approach. You annotate a class with this annotation, and its fields correspond to the placeholders defined in the prompt. Let's see that in action.

First, we'll need some new imports:

import java.util.Arrays;
import java.util.List;
import dev.langchain4j.model.input.structured.StructuredPrompt;
import dev.langchain4j.model.input.structured.StructuredPromptProcessor;

Then we can create an inner static class within our TextPrompts class that gathers the data needed to fill in the placeholders of the prompt described in the @StructuredPrompt annotation:

@StructuredPrompt("Create a recipe of a {{dish}} that can be prepared using only {{ingredients}}")
static class RecipeCreationPrompt {
    String dish;
    List<String> ingredients;
}

Then instantiate that new class, feed it the dish and ingredients of our recipe, and create and pass the prompt to the generate() method as before:

RecipeCreationPrompt createRecipePrompt = new RecipeCreationPrompt();
createRecipePrompt.dish = "salad";
createRecipePrompt.ingredients = Arrays.asList("cucumber", "tomato", "feta", "onion", "olives");
Prompt prompt = StructuredPromptProcessor.toPrompt(createRecipePrompt);

Response<String> response = model.generate(prompt);

Instead of filling the gaps through a map, you can use a Java object with fields that can be auto-completed by your IDE, in a more type-safe way.

Here's the whole code if you want to more easily paste those changes into your TextPrompts class:

package palm.workshop;

import java.util.Arrays;
import java.util.List;
import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.vertexai.VertexAiLanguageModel;
import dev.langchain4j.model.input.structured.StructuredPrompt;
import dev.langchain4j.model.input.structured.StructuredPromptProcessor;

public class TextPrompts {

    @StructuredPrompt("Create a recipe of a {{dish}} that can be prepared using only {{ingredients}}")
    static class RecipeCreationPrompt {
        String dish;
        List<String> ingredients;
    }
    public static void main(String[] args) {
        VertexAiLanguageModel model = VertexAiLanguageModel.builder()
            .endpoint("us-central1-aiplatform.googleapis.com:443")
            .project("YOUR_PROJECT_ID")
            .location("us-central1")
            .publisher("google")
            .modelName("text-bison@001")
            .maxOutputTokens(300)
            .build();

        RecipeCreationPrompt createRecipePrompt = new RecipeCreationPrompt();
        createRecipePrompt.dish = "salad";
        createRecipePrompt.ingredients = Arrays.asList("cucumber", "tomato", "feta", "onion", "olives");
        Prompt prompt = StructuredPromptProcessor.toPrompt(createRecipePrompt);

        Response<String> response = model.generate(prompt);
        
        System.out.println(response.content());
    }
}

7. Classifying text and analyzing sentiment

Similarly to what you learned in the previous section, you will discover another "prompt engineering" technique to make the PaLM model classify text or analyze sentiment. Let's talk about "few-shot prompting": a way to enhance your prompts with a few examples that help steer the language model in the direction you want, so it better understands your intent.

Let's rework our TextPrompts class to take advantage of prompt templates:

package palm.workshop;

import java.util.Map;

import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.vertexai.VertexAiLanguageModel;
import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;

public class TextPrompts {
    public static void main(String[] args) {
        VertexAiLanguageModel model = VertexAiLanguageModel.builder()
            .endpoint("us-central1-aiplatform.googleapis.com:443")
            .project("YOUR_PROJECT_ID")
            .location("us-central1")
            .publisher("google")
            .modelName("text-bison@001")
            .maxOutputTokens(10)
            .build();

        PromptTemplate promptTemplate = PromptTemplate.from("""
            Analyze the sentiment of the text below. Respond only with one word to describe the sentiment.

            INPUT: This is fantastic news!
            OUTPUT: POSITIVE

            INPUT: Pi is roughly equal to 3.14
            OUTPUT: NEUTRAL

            INPUT: I really disliked the pizza. Who would use pineapples as a pizza topping?
            OUTPUT: NEGATIVE

            INPUT: {{text}}
            OUTPUT: 
            """);

        Prompt prompt = promptTemplate.apply(
            Map.of("text", "I love strawberries!"));

        Response<String> response = model.generate(prompt);

        System.out.println(response.content());
    }
}

Notice the approach of offering a few examples of inputs and outputs in the prompt. These are the "few shots" that help the LLM follow the same structure. When the model then receives an input, it returns an output that matches the input/output pattern.

Running the program should return just the word POSITIVE, as strawberries are yummy too!

$ ./gradlew run -DjavaMainClass=palm.workshop.TextPrompts

> Task :app:run
POSITIVE

Sentiment analysis is also a content classification scenario. You can apply the same "few-shot prompting" approach to categorize different documents into different category buckets.
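The same few-shot structure generalizes to arbitrary category buckets. As a sketch, the examples section of such a classification prompt can be assembled programmatically from a list of labeled samples (the FewShotPromptSketch class and its buildPrompt helper are hypothetical, not part of LangChain4J; they simply reproduce the INPUT/OUTPUT pattern used above):

```java
import java.util.List;
import java.util.Map;

// Sketch: build a few-shot classification prompt from labeled examples,
// following the same INPUT/OUTPUT structure as the sentiment example.
public class FewShotPromptSketch {

    static String buildPrompt(String instruction,
                              List<Map.Entry<String, String>> examples,
                              String input) {
        StringBuilder sb = new StringBuilder(instruction).append("\n\n");
        for (var example : examples) {
            sb.append("INPUT: ").append(example.getKey()).append('\n')
              .append("OUTPUT: ").append(example.getValue()).append("\n\n");
        }
        // The trailing "OUTPUT: " invites the model to complete the pattern.
        sb.append("INPUT: ").append(input).append("\nOUTPUT: ");
        return sb.toString();
    }

    public static void main(String[] args) {
        String prompt = buildPrompt(
            "Classify the document below as SPORTS, BUSINESS or SCIENCE. " +
            "Respond with only one word.",
            List.of(Map.entry("The team won the championship game.", "SPORTS"),
                    Map.entry("The stock market rallied today.", "BUSINESS")),
            "Astronomers discovered a new exoplanet.");
        System.out.println(prompt);
    }
}
```

The resulting string can then be passed to model.generate() exactly like the hard-coded sentiment prompt, or wrapped in a PromptTemplate if the input text should remain a placeholder.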

8. Congratulations

Congratulations, you've successfully built your first Generative AI application in Java using LangChain4J and the PaLM API! You discovered along the way that large language models are pretty powerful and capable of handling various tasks like question/answering, data extraction, summarization, text classification, sentiment analysis, and more.

What's next?

Check out some of the following codelabs to go further with PaLM in Java:

Further reading

Reference docs