Workday Canvas

Annotations

Use Annotations to show the sources and edits of AI-extracted data as it moves from uploaded documents into Workday fields.

Overview

Annotations emerged from the Document-Driven Accounting (DDA) framework project as a way to clearly display data extracted from uploaded documents by AI. Annotations, and their related explanations, form a visual language that gives users full visibility into the origin of their data within Workday. The design builds on our foundation in Supplier Accounts, incorporates Evisort’s AI capabilities, and follows RAD’s AI standards to ensure every data point is both traceable and explainable. We are also factoring in tracking changes (Auditability) and empowering users to fix extracted data so the AI model improves over time (Correct & Learn), providing a robust framework that informs and explains without detracting from users’ ability to complete their jobs to be done.

Side-by-Side Layout showing Workday Fields on the left and the Document Viewer with extracted data on the right, connected by annotations.

Annotations show the connection between Workday fields and source documents.

DDA UX Framework

From Customer Contracts to Grants to Payment Remittances, the DDA project was deliberately researched and designed to span different use cases, ensuring the framework could extend consistently across the Workday platform. The DDA UX Framework emerged from this work, and annotations are just one piece of this larger user experience.

1. Intake - Bringing a document into Workday

Documents can be brought into Workday through a variety of means: a user can manually upload one or more files, or integrations with file-sharing systems and email inboxes can upload documents automatically.

2. Extract - Extracting information out of the documents

Data extraction times can vary, so informing the user and setting expectations is an important part of helping them incorporate this new pattern into their workflow.

3. Apply - Adding the extracted information to Workday

The data is matched to the appropriate form fields, and annotations draw clear distinctions about how that data has been applied, keeping customers in the loop so they can review the agent’s work.

4. Utilize - Making use of the extracted information

The extracted information can also be used to let users dive deeper into the document via Workday Assistant chat, and to empower the agent - through customer interaction, editing, and feedback - to continuously learn and improve.

Annotation Elements

Annotations provide a lightweight UI that can coexist with existing product areas across Workday’s platform. While annotations are the most prominent UI element in this framework, a few more pieces help provide the clarity and orientation needed to enhance our customers’ experience.

Most of these elements have been designed to be in lock-step with the pattern outlined in Outputs and Displays to ensure we’re presenting AI-generated content and guidance consistently across Workday.

Side-by-Side Layout

“Side-by-Side” refers to placing the Workday fields on the left and the Document Viewer, showing the extracted data, on the right. Annotations reference the source of the information and provide an interactive connection between Workday and that source; giving the user access to view that source, either by default or within easy reach, is an important piece of the puzzle.

Screenshot of a Customer Contract page in Workday showing the side-by-side layout of form fields on the left, and the document viewer on the right.

In the image above, a side-by-side document viewer within Customer Contracts allows the user to view the contract document file while editing the contract object in Workday. Users can hide the document viewer or undock it to a separate screen, but making the side-by-side layout the default view makes the connection to the origin of the extracted data clearer.
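Conceptually, each annotation ties a Workday field to the place in the document its value came from, which is what makes jumping from a field to its highlighted source possible. A minimal sketch of that data shape might look like the following; all names here are illustrative assumptions, not a real Workday schema or Canvas Kit API.

```typescript
// Illustrative data shape for an annotation: it links a Workday field to
// the highlighted source span in the uploaded document. Hypothetical names.
interface Annotation {
  index: number; // the number shown in the UI for visual scanning
  fieldId: string; // the Workday field the value was applied to
  value: string; // the extracted value
  source: {
    documentId: string;
    page: number; // where the highlight lives in the document viewer
    text: string; // the highlighted source text
  };
}

// Jumping from a field to its source is then a lookup by field id.
function sourceFor(annotations: Annotation[], fieldId: string) {
  return annotations.find((a) => a.fieldId === fieldId)?.source;
}
```

Keeping the field-to-source link as explicit data, rather than only a visual treatment, is what lets the annotation, the document highlight, and the QuickTip all stay in sync.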

Annotation States

Annotations draw visual attention to fields that have been populated by an AI agent, outside the supervision or actions of the user. We want to bring attention to the data, shifting the user’s mode from data input to data review. The annotation also connects the data to its source - with numbers allowing for visual scanning from field to document, and hover and click interactions baked in so a user can quickly jump to the connected highlighted data within its context.

The need for users to have transparency into the source of their data also means that a single style of annotation is not sufficient to explain the various ways data can be expressed within our form fields. Variations of the standard annotation style help draw attention to fields that may need increased user scrutiny. Expanding the visual language of the annotations helps add efficiency to the user’s review process, and serves to improve the explainability of the system.

A visual representation of various annotation states used to draw attention to fields populated by an AI agent, including a distinct style for conflicting data values that require user review.

In the image above, the AI agent detects two conflicting values that might apply to a single field and needs the human user to determine which value is correct for the given document. This field demands more attention, so a distinct annotation style alerts the user to the conflict and ensures the field is reviewed with appropriate scrutiny.
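A sketch of how the conflict variant might be derived from the extracted data, under the assumption that a field can carry multiple candidate values; these names are hypothetical and not part of Canvas Kit or any Workday API.

```typescript
// Hypothetical sketch: derive an annotation's visual state from the
// candidate values the agent extracted for a single field.
type AnnotationState = "standard" | "conflict";

function annotationState(candidates: string[]): AnnotationState {
  // Two or more distinct candidate values need human review, so the
  // annotation escalates to the conflict style until the user resolves it.
  return new Set(candidates).size > 1 ? "conflict" : "standard";
}
```

The key design point is that the escalation is data-driven: the system never silently picks one of the conflicting values, it changes the annotation style and waits for the user.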

Explainability QuickTip

While a variety of annotation states improves explainability, annotations by themselves are not quite enough. QuickTips that appear when an annotation is clicked allow the system to provide multiple layers of detail and feedback capability to the user.

These QuickTips allow this framework to more fully follow the guidance laid out in Explainable AI Design and should look and behave similarly to the Contextual Ingress pattern.

Screenshot of an Annotation QuickTip overlay, showing detailed information and feedback options related to an AI-populated field.

Types of Information found within Annotation QuickTips:

  1. Feature Orientation - An explanation of what is going on and what each annotation state means.
  2. Audit History - Showing what changed, who changed it, and what the previous value was.
  3. Ability to Revert - The connection to the source document is maintained, allowing the user to return to that value after edits are made.
  4. Correct and Learn - Informing users that our system is “smart” - that changes help improve the agent over time (implicit feedback), while also giving them the ability to provide explicit feedback through feedback forms built within the QuickTip.
  5. Review Tour - Expediting the user’s data review task by allowing them to navigate via previous/next buttons.
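The review tour in particular implies simple ordered navigation between annotated fields. A minimal sketch of that previous/next behavior, assuming the tour wraps around at the ends (an assumption, not documented behavior), might look like this; the names are illustrative only.

```typescript
// Hypothetical sketch of "review tour" navigation between annotated fields.
// Previous/next buttons step through annotations in order, wrapping around.
interface ReviewTour {
  annotationIds: string[];
  current: number; // index into annotationIds
}

function nextAnnotation(tour: ReviewTour): ReviewTour {
  return { ...tour, current: (tour.current + 1) % tour.annotationIds.length };
}

function previousAnnotation(tour: ReviewTour): ReviewTour {
  const n = tour.annotationIds.length;
  return { ...tour, current: (tour.current - 1 + n) % n };
}
```

Driving the tour from an ordered list of annotation ids keeps the QuickTip navigation consistent with the numbered annotations the user already sees on the form.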

Explainable AI Design Overview

Can't Find What You Need?

Check out our FAQ section, which may help you find the information you're looking for. For further information, contact the #ask-canvas-design or #ask-canvas-kit channels on Slack.
