Workday Canvas

Explainable AI Design

Helping users understand how AI operates and makes decisions.

Published

October 2025, by Mel Gillespie & Owen Derby with contributions from James Mulholland, Jason Vantomme, Robyn Oliver, Tom Cunningham, Tamara Janson, Michael Blume, and Jonathan Keyek

Last Updated

October 2025, by Owen Derby

Introduction

Explainability gives users the information they need to understand AI output in context, enabling them to make informed decisions. At Workday, we design explainability for users with a shared set of best practices to ensure we are consistently setting the right expectations, keeping users in control, and building trust over time - in line with our Human-AI Experience Guidelines.

These guidelines are designed for Workmates involved in the end-to-end definition of user-facing AI, including UX Designers, Developers, Product Managers, Researchers, and many other disciplines across the Product and Technology organization.

The depth and detail of “good” explanations of AI outputs can vary based on the user’s relationship to the technology and the specific context. At times, the reasoning behind an output may be knowable, but difficult to explain to a user. This document offers various strategies for crafting explanations so that people can responsibly act on the outcomes of our AI systems.

What’s in the Guidelines?

The Explainable AI Design guidelines provide specific framing around Explainability. You can use these guidelines to help structure early design conversations and experimentation before you invest in solutions. They also work as a heuristic or “pre-flight” checklist to assess your current UX to improve Explainability for users.

Each of the guidelines below outlines a best practice approach and includes a description of how to apply it practically, along with real-world examples.

  1. Transparent: Use explanations to disclose AI use, limitations, and underlying logic.
  2. Contextual: Customize explanations to specific user needs and contexts.
  3. Understandable: Make sure explanations are clear, consistent and support comprehension.
  4. Actionable: Explanations should guide next steps and drive progress in the task flow.
  5. Contestable: Design explanations that support user agency and oversight.
  6. Traceable: Design traceable explanations to support findability, collaboration and auditability.

How Do I Apply These Guidelines?

Once you understand the importance of explainability for human-centered design, you might ask, “How can I start to integrate its best practices into the experiences I’m designing?” These guidelines outline 6 essential approaches to clarifying how our technology works for users. Each guideline is developed from real and hypothetical AI system examples and validated by AI practitioners.

As you read through each of the guidelines and examples, ask yourself:

  • How is this guidance applicable to your specific product or users?
  • If relevant, what specific information would be most useful to your users?
  • Do users currently receive this information? If not, how can you add it?
  • Is the information clear and well-placed? If not, how can you improve its presentation?

Designers who are intentional about integrating AI explainability should do the following:

  1. Design for the User Journey: Integrate explanations seamlessly into user workflows, providing them precisely where and when they are most beneficial to the user.
  2. Build in Explainability: Design interfaces so that explanations are an integral part of their design, rather than a separate feature.
  3. Prompt Critical Review: Create intentional opportunities for users to pause and reflect on decisions.
  4. Design for Evolving Trust: Begin with visible XAI support, then review and scale back over time as user confidence grows.

1. Transparent

Use explanations to disclose AI use, limitations, and underlying logic. Use notices with restraint to avoid littering the interface with repetitive badges, sparkle icons, notices, and disclaimers.

1.1. Visibly Disclose AI Usage

Clearly label content and interactions that are AI-driven.

A section in Workday with the title "Connect with a Coworker" has an AI indicator label in the top-left corner. The indicator has a sparkle icon and the text "AI Generated."

Since AI-produced content can be indistinguishable from human-generated materials, disclosures are essential. They help users understand what they’re interacting with and empower them to make informed decisions.

1.2. Provide Visible AI Disclaimer

Include a clear statement that AI-generated content may be incorrect and requires human review.

Maintaining consistency in AI disclaimers is crucial for building and retaining user trust, as well as for avoiding confusion regarding AI’s involvement. Establish a clear policy for disclosing AI use, and apply it consistently across all platforms and content types.

This commitment is rooted in new and developing legal regulations and best-practice frameworks (e.g., the EU AI Act, New York City Local Law 144, and the NIST AI Risk Management Framework).

A screenshot of the Create Job Requisition task. A dialog with the title "Generate Job Description" is open in the center of the page and contains an AI generated job description. At the bottom-left of the modal, there is a disclaimer "This content was generated by AI. Review content before use."


In this example from the “Create Job Requisition” task, job descriptions are generated automatically using AI with a fine-tuned model built in partnership with the ML team. Displaying a “Notice” in use cases like this is critical for ensuring that our AI features inform individuals about their interaction with AI and/or the type of data being processed by AI.

Document: RAI Protocol

Figma: Ask Workday Starter Kit

1.3. Reveal System Functionality

Provide options to access menus or settings.

For users to trust AI, they need to understand not just what the AI does, but also how it works and how they can influence its behavior. Providing clear access to menus, settings, and controls reveals system functionality and is one of the best ways for users to learn how a system works.

A modal open in the center of the page. The modal has a title "Schedule Settings" and a description "Adjust sliders to customize how the schedule will be generated." It also contains several range inputs to control how a worker's schedule is generated. The inputs are labeled: "Schedule Consistency", "Labor Cost Minimization", "Preferred Weekly Hours", and "Preferred Days/Times."


Workday’s Time and Scheduling Hub presents the user with schedule settings to help them understand the different parameters by which a schedule is created. Showing users the settings in this way helps them create a more accurate mental model of product functionality.

Product Example: Edit setting modal

Product Example: Score card and Schedule settings

1.4. Provide Transparency

Provide transparency about how the system works.

Building trust with users means clearly surfacing the “what” that went into an AI-based suggestion, recommendation, match or prediction - such as the data points used and general information about the underlying logic.

A VNDLY job candidate view for a Senior Systems Engineer. The AI Recommendations tab is open with a list of recommended workers. Above the list is a light blue info highlight section with the title "Recommendations have been generated by HiredScore AI" and the text "HiredScore AI recommends candidates based on how well a candidate's experience, skills, education, and qualifications align with the specific requirements outlined in the job. AI content should be reviewed before use."

In this example from VNDLY, a clearly displayed explanation describes how the AI recommends candidates based on how well a candidate’s experience, skills, education, and qualifications align with the specific requirements outlined in the job description.

2. Contextual

Customize explanations to specific user needs and contexts. For situations with minimal impact, or quick transactional tasks, a comprehensive explanation may not be required; a simple one will suffice.

2.1. Tailor to the User

Make sure explanations are aligned to user needs and their decision making context.

A one-size-fits-all explanation rarely works. To be truly helpful, an explanation must be relevant to the user’s specific situation and what they are trying to accomplish. Tailoring explanations to the user means aligning the content, complexity, and presentation of an explanation with the user’s needs and their decision-making context. Customize the explanation’s depth and focus based on who is seeing it (e.g., manager, employee, administrator). Use the task context to frame the explanation. For example, if a user is requesting time off, explain the AI’s responses in terms of existing standardized business logic and HR policy. Work with your research partner to understand more about user context and apply it to the explanation.

Explainability profiles for front stage user types are currently being developed. They are intended to serve as foundational guides and a user lens to help:

  1. Define Requirements
    Determine the essential explainability needs for each user type.

  2. Ground Solutions in User Needs
    Make sure our work is tailored to their real-world context.

The first DRAFT profile for the Worker user type can be found here.

A screenshot of the Metrics page in the Configure Worker Pipeline workflow. In the center of the page is an open dialog with the title "Female Representation" and a description of that metric and how it is calculated.


In this example from “Configure Worker Pipeline,” a System Admin needs to clearly understand the information in the People Analytics Installer in order to configure People Analytics correctly. This complex task is made easier by providing an explanation that accounts for the user’s role, their current task, and their level of expertise, so that it is not just accurate, but also useful and actionable in their specific context.

Product Example: HelpText - Canvas

2.2. Leverage Both Local and Global Explainability

By visually anchoring explanations to their corresponding outputs (such as a match, recommendation, prediction or suggestion), you reinforce the relationship between a single result and its justification. This “local” approach minimizes cognitive effort and reduces the distance your user’s attention must travel, making the explanation easier to find and understand without interruption. This practice not only makes the AI’s logic clearer but also helps build user trust by demonstrating the rationale in a seamless, intuitive way.

It’s also important to consider “Global” or system level explanations. Global explanations clarify the broader logic or rules that govern the AI model’s overall operation. For example, you can use a global explanation to describe the general criteria the system uses to rank all candidates, providing users with a comprehensive understanding of the model’s behavior beyond a single instance.

“Local” Example - VNDLY

A VNDLY job candidate view for a Senior Systems Engineer. The Applicants tab is open with a list of applicants. A dialog is open in front of the first applicant, explaining what the HiredScore grade means and how it is generated. It also contains a disclaimer to review AI content before use.


In this example from VNDLY, the explanation is directly tied to a specific AI recommendation or grade. For example, explaining why this particular candidate received the grade, the grade scale - A, B, C, or D, and that the grade reflects how the candidate’s resume aligns with the job requirements.

“Global” Example - VNDLY

A VNDLY job candidate view for a Senior Systems Engineer. The AI Recommendations tab is open with a list of recommended workers. Above the list is a light blue info highlight section with the title "Recommendations have been generated by HiredScore AI" and the text "HiredScore AI recommends candidates based on how well a candidate's experience, skills, education, and qualifications align with the specific requirements outlined in the job. AI content should be reviewed before use."

In this example from VNDLY, a global explanation describes how the AI recommends candidates based on how well a candidate’s experience, skills, education, and qualifications align with the specific requirements outlined in the job description.

2.3. Use Progressive Disclosure

Surface the right level of explanation for the user’s immediate needs, providing the option to go deeper if they need to.

When aiming for both clarity and in-depth explanation, start with a concise, high-level overview. This initial summary should quickly build trust and improve understanding, while also providing an obvious route to more detailed reasoning. Offering this optional, deeper dive is essential for users who need to confirm unexpected results, audit the AI’s logic for compliance, or simply want a more technical grasp of how the system reached its conclusions. To ensure comprehension for users without extensive statistics or machine learning knowledge, drill-downs that present a lot of detailed information should primarily use data visualization. Intuitive visual formats, such as interactive charts, customizable graphs, or filtered views, can help simplify complex information and facilitate understanding of model behavior.
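As a rough sketch of this layering, an explanation can be authored once at several depths and revealed one level at a time; the types and names below are hypothetical, not a Canvas API.

```typescript
// Hypothetical sketch of progressive disclosure for explanations.
// None of these names are Workday Canvas APIs.
interface LayeredExplanation {
  summary: string;          // concise, high-level overview (always shown)
  reasoning?: string;       // deeper rationale, shown on request
  technicalDetail?: string; // deepest drill-down, e.g. a data table
}

// Return the layers the user has opted into, shallowest first.
function layersUpTo(e: LayeredExplanation, depth: 1 | 2 | 3): string[] {
  const all = [e.summary, e.reasoning, e.technicalDetail].filter(
    (layer): layer is string => layer !== undefined
  );
  return all.slice(0, depth);
}
```

A “View More” affordance would simply increase the depth, so the initial summary never competes with the drill-down for attention.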

A GIF of the People Analytics workflow. A modal is open in the center of the page with analytics and insights for female representation.

When a user selects the “View More” button on a Story Card in People Analytics, a modal with a more detailed view of the story is opened. The Top Driver tab on the modal contains a list of content and each list item can be expanded by selecting the arrow icon. Expanding each list item opens a card view with more detailed information, allowing the user to deep dive into more detail if they need to without overwhelming them with information.

Product Example: Expandable List within a Modal

XO Layout: Progressive Disclosure Select Layout

A demo of the Workday homepage with the Ask Workday sidepanel open on the right. The sidepanel has an explanation of why the worker's time off request was rejected and a list of sources to explain its reasoning.


(For illustration only - not actual product functionality) While requesting time off from “Ask Workday,” an explanation is provided as to why an employee’s request for time off has been declined. Users are provided a link to the source of the response, in this case the “FY26 Time Off Policy” that Ask Workday is using to drive the response. This provides an obvious route to more detailed information and builds trust in responses from the system.

3. Understandable

Make sure explanations are clear, consistent and support comprehension.

3.1. Use Non-technical Language

Make sure explanations use clear and consistent language that aligns to the AI persona guidelines.

This approach is critical for building trust; when users can easily understand the AI’s reasoning, the system feels more transparent and reliable. Consistent, clear language, suited to the user’s professional needs, also reinforces a helpful and approachable AI persona, preventing the user experience from feeling disjointed or intimidating. It reduces cognitive load, allowing users to focus on their goals rather than on deciphering complex terms.

Remember: Explanations of AI system outputs must cater to two distinct audiences: users who are responsible for acting on the system’s recommendations and users who are directly impacted by those outcomes. The first group requires explanations that foster trust and confidence in the system’s recommendations. Users affected by decisions need clear, straightforward reasons for outcomes, the ability to correct inaccuracies, and an understanding of how their interactions can lead to different results.

Product Example: Feature Highlight

Figma: Ask Workday Starter Kit

3.2 Provide Granularity Controls

Let users adjust the level of explanation detail to match their needs.

Allow users to adjust the level of detail in AI-generated explanations to match their needs. This provides them with different ways of viewing the same information, such as a high-level summary, a detailed textual breakdown, or the raw numerical data. Giving users this control acknowledges that different tasks and roles require different levels of insight, and explanation controls can also help simplify information in a way that minimizes confusion.

A screenshot of Workday's Financials dashboard.


This People Analytics example provides controls to update the visualization for “Cost of Goods Sold.” This supports a user who might need a broad overview to spot trends, while another might need a granular table to audit specific data points. By providing multiple views, the interface can serve a wider range of user needs and use cases.

3.3 Use Familiar Patterns

Use scalable patterns that support user familiarity over time.

By leveraging familiar, scalable design patterns we can reduce cognitive load. Users can focus on their tasks rather than on learning a new interface. This leads to a more efficient and less frustrating experience.

A screenshot of the Workday Change Job workflow. There is a select input labeled "Why are you making this change?" A Quick Tip dialog is open to the right of the input and provides more context for the available input options.


QuickTips are in-context help shown to end users to guide them in completing a task in Workday. A question mark icon beside a field indicates that more information is available if users have a question about the field. QuickTips are generally intended to support ease of use, error prevention, and documentation, helping users complete their tasks more efficiently and successfully. The pattern leverages the familiar “?” icon for ingress, which is ubiquitous in most software.

Pattern: QuickTips

Product Example: Additional Information

3.4 Make Explanations Accessible for All Users

An accessible experience must adapt to the different ways people expect to interact with it. Creating good explainability means meeting the needs of a diverse group of users. Users must be able to perceive, understand, and operate the information in the user interface no matter what input device or assistive technology they’re using.

  • Perceivable: Non-text explanations such as icons and data visualizations should have at least one accessible text alternative, such as an accessible label, a data table, or a spoken description where applicable, that conveys the same information. Instructions should not rely solely on sensory characteristics such as shape, color, size, visual location, orientation, or sound.
  • Understandable: AI explanations should be presented in plain language appropriate to the user’s level of expertise or domain understanding, ensuring they are easy to comprehend for non-specialists. Acronyms that users may not be familiar with should be spelled out the first time they are used.
  • Operable: A user should be able to navigate and interact with interactive explanations using various input methods, not just a mouse.

AI systems need to offer explanations in multiple formats (multimodality). This could include a combination of visual charts with corresponding audio descriptions and simplified, text-based summaries that can be easily read by screen readers.
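As a minimal sketch of one such format, a text alternative can be derived from the same data that drives a chart-based explanation, so screen-reader users receive equivalent information; the names below are illustrative assumptions, not a Canvas API.

```typescript
// Hypothetical sketch: derive a plain-text alternative from the data behind
// a chart-based explanation, so the same information reaches screen readers.
interface SeriesPoint {
  label: string;
  value: number;
}

function textAlternative(title: string, points: SeriesPoint[]): string {
  const series = points.map((p) => `${p.label}: ${p.value}`).join("; ");
  return `${title}. ${series}.`;
}
```

Because the alternative is generated from the underlying data rather than authored separately, it cannot drift out of sync with the visual.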

For more specific guidance please visit our accessibility guidelines on Canvas.

A screenshot of the People Analytics' Skills Overview page. It contains two charts: Top 10 Skills by Region (a heatmap) and Top 5 Skills in Demand (a column chart).


In this conceptual example from People Analytics, an explanation displayed as a visual heatmap has an accessible text alternative, such as a spoken description or a data table, that conveys the same information.

4. Actionable

Explanations should guide next steps and drive progress in the task flow. Avoid making suggestions when we don’t have appropriate context to guide users effectively.

4.1. Provide Contextual Next Steps

Use relevant nudges, such as contextual ingress, to help guide user behavior and usage.

These kinds of nudges are subtle prompts integrated into the user interface that guide users towards specific actions or behaviors at the moment they are most relevant. Unlike pop-ups or alerts, they feel like a natural part of the user flow. By providing suggested next steps we not only add explanations for users, but we also build on their mental model of what capabilities the product or feature provides. A nudge can be triggered by a variety of user actions such as hovering over an element, pausing on a particular field, asking a question via a text input or completing a specific step in a process.

A screenshot of the Workday homepage with the Ask Workday sidepanel open on the right. The sidepanel provides two suggested prompts at the bottom: "What's the daily limit for meals during travel?" and "What do I do if I lost my receipt?"


In this example, “Ask Workday” offers contextual nudges alongside search results and within the side panel.

Suggested Prompts: Ask Workday Starter Kit

Pattern: Contextual Ingress

4.2 Orient Users During Wait Times

Incorporate appropriate loading patterns to signal load times and progress.

Use load time to share with the user what data sources are being used - similar to when you do a search for flights or book a hotel online. For example:

  • “Generating based on… [Example in sentence case]”

You may also use loading times to reiterate what the user has requested. For example:

  • “Generating details for January Paycheck”
  • “Finding results for My Goals”
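As a sketch, both message shapes above can be produced by one small helper so wording stays consistent across features; all names here are illustrative, not Workday Canvas APIs.

```typescript
// Hypothetical sketch: centralize loading-message wording so it stays
// consistent across features. Not a Workday Canvas API.
type LoadingContext =
  | { kind: "source"; source: string }                  // which data is used
  | { kind: "request"; verb: string; subject: string }; // what was asked for

function loadingMessage(ctx: LoadingContext): string {
  switch (ctx.kind) {
    case "source":
      // e.g. "Generating based on Job Posting Title"
      return `Generating based on ${ctx.source}`;
    case "request":
      // e.g. "Finding results for My Goals"
      return `${ctx.verb} ${ctx.subject}`;
  }
}
```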

A screenshot of a dialog open in the center of a blank page. The dialog says "Generating based on Job Posting Title" and uses animated loading sparkles.

4.3 Reveal the System’s Process with Chain of Thought

For more complex Agent interaction utilize Chain of Thought (CoT) reasoning to transform abstract agent workflows into clear, step-by-step visualizations that show the user what the AI is doing “under the hood.”

This approach moves beyond simple loading by providing a live, observable record of an agent’s progress. By breaking down complex tasks into a sequence of logical steps, CoT helps users understand the agent’s plan, reveals data dependencies, and clarifies the flow of multi-agent or multi-threaded tasks. The addition of a “Modify plan” option can also act as feedback itself.
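A minimal sketch of such an observable plan might model each step with a status and let the user swap out the steps still to come; every type and function name below is hypothetical.

```typescript
// Hypothetical sketch of an observable agent plan: each step carries a
// status, and the user can replace the remaining steps ("Modify plan").
type StepStatus = "pending" | "running" | "done";

interface PlanStep {
  label: string;
  status: StepStatus;
}

function startPlan(labels: string[]): PlanStep[] {
  return labels.map((label) => ({ label, status: "pending" as StepStatus }));
}

// Finish the running step (if any) and start the next pending step.
function advance(plan: PlanStep[]): PlanStep[] {
  const next = plan.map((s) =>
    s.status === "running" ? { ...s, status: "done" as StepStatus } : s
  );
  const i = next.findIndex((s) => s.status === "pending");
  if (i >= 0) next[i] = { ...next[i], status: "running" };
  return next;
}

// "Modify plan": keep completed steps, replace everything still to come.
function modifyPlan(plan: PlanStep[], remaining: string[]): PlanStep[] {
  return [...plan.filter((s) => s.status === "done"), ...startPlan(remaining)];
}
```

Rendering this step list as it advances gives the user the live, observable record described above, and `modifyPlan` is where the user's edit doubles as feedback.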

A screenshot of Workday's Agent Workspace dashboard.


In this conceptual example, a user requests “a comprehensive analysis of Q3 financial results…” and the Agent responds by breaking the request down into the steps it’s going to take next, before asking the user to confirm or modify the plan.

A screenshot of Workday's Agent Workspace dashboard.


Once the user confirms the plan, the agent displays overall progress, revealing data sources and templated steps. The pattern defines the problem domain by breaking the task down into logical steps and explaining the existing knowledge the system can draw on to achieve the task. It also sequences logic (e.g., split-solve-combine) as it calls out the reasoning steps. This loading pattern also error-checks intermediate results, asking the user to continue with the planned actions or revise them, while displaying an estimated load time to set expectations around latency, system responsiveness, and wait times.

View Chain-of-Thought Demo

4.4 Provide Clear Resolution Pathways

For complex or failed interactions, explanations should guide the user toward an alternative action, escape hatch or provide a suggestion to escalate to human support. Users don’t want to be left at a dead-end, so plan for what to explain when things go wrong.

A screenshot of the Ask Workday chat where a worker asks "Why is my last paycheck different from the one before?"


In this example, a Worker checking their payslip is querying a change to their expected pay, and is unsatisfied with the response. Users who submit feedback that they’re unhappy with a response from “Ask Workday” can be given an option to create a case in Workday Help if they need further assistance. This creates a vital escalation point for users in need of human support.

Figma: Ask Workday Starter Kit

5. Contestable

Design explanations that support user agency and oversight.

5.1. Provide “What-if” Explanations

Show users how different inputs can change the outcome.

“What-if” explanations, also known as counterfactual explanations, build trust by demystifying the AI’s decision-making. By allowing users to ask “what if?”, they can directly see the changes required to alter an outcome. This visibility not only shows the direct link between their input and the model’s output but also helps them develop a better mental model of the system, understanding exactly where a threshold or decision boundary lies. This process reduces user over-reliance on the AI’s output by encouraging a more critical assessment of the result and its limitations.

A screenshot of a mortgage loan calculator in Workday.


In this example, a mortgage loan calculator from a bank allows users to enter a selection of variables (estimated property price, required mortgage amount, and duration of the loan) to allow the user to quickly see monthly repayment amounts. Allowing users to play with “what if” scenarios in this way can help them to quickly understand complexity and help them feel in control.
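Mechanically, a what-if control is just the same calculation rerun with one input overridden. The sketch below uses the standard amortization formula M = P·r / (1 − (1 + r)⁻ⁿ), with monthly rate r and n monthly payments; the interface names are illustrative.

```typescript
// Hypothetical sketch of a "what-if" (counterfactual) control for a loan
// calculator. Standard amortization formula: M = P*r / (1 - (1+r)^-n).
interface LoanInputs {
  principal: number;  // required mortgage amount
  annualRate: number; // e.g. 0.06 for 6%
  years: number;      // duration of the loan
}

function monthlyRepayment({ principal, annualRate, years }: LoanInputs): number {
  const r = annualRate / 12; // monthly rate
  const n = years * 12;      // number of monthly payments
  if (r === 0) return principal / n; // interest-free edge case
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}

// Counterfactual: same calculation with one input overridden, so the user
// can see exactly how the outcome shifts.
function whatIf(base: LoanInputs, change: Partial<LoanInputs>): number {
  return monthlyRepayment({ ...base, ...change });
}
```

Comparing, say, a 30-year and a 15-year term on the same loan lets the user locate the threshold at which the monthly payment stops being affordable.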

5.2 Empower User Agency Through Configurable Control

AI reshapes standardised workflows, necessitating designs that preserve and enhance user agency. Provide a range of intuitive controls that allow users to actively steer, iterate on, and guide AI outcomes. This involves designing interfaces where users can easily configure their AI experience, compare multiple AI-generated options, and clearly understand the similarities and differences between them. This approach ensures that users remain the ultimate decision-makers, fostering collaboration rather than passive acceptance.

A screenshot of Workday's AI Contract Analysis. A modal with the title "Proposed Resolution" is open in the center of the page. A "Configure Resolution" button at the bottom of the modal is highlighted.

An interstitial step designed for Workday AI Contract Analysis allows the user to see “as-is” vs. “to-be” states before committing. The process flow breaks down collaboration and tasks before asking the user to review and configure the proposed resolution, and eventually commit changes to the system.

5.3. Establish a Feedback Loop

Enable simple and clear methods for users to give feedback on explanations.

Help the user correct the AI system or adjust, reverse, and edit system suggestions to build user confidence and train the system to improve results. By letting users provide feedback, the system becomes more aligned with a user’s real-world context. This continuous feedback loop helps Workday’s AI systems adapt to evolving contexts and domain-specific issues product teams might not be aware of.
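A minimal sketch of the capture side of such a loop, assuming a simple rating vocabulary (the names are illustrative, not a Workday API):

```typescript
// Hypothetical sketch of capturing and aggregating feedback on explanations.
type Rating = "helpful" | "unclear" | "incorrect";

interface ExplanationFeedback {
  explanationId: string;
  rating: Rating;
  comment?: string; // optional free text with real-world context
}

// Aggregate ratings so product teams can spot frequently unclear explanations.
function tally(feedback: ExplanationFeedback[]): Record<Rating, number> {
  const counts: Record<Rating, number> = { helpful: 0, unclear: 0, incorrect: 0 };
  for (const f of feedback) counts[f.rating] += 1;
  return counts;
}
```

Keeping the rating vocabulary small and structured makes the signal easy to aggregate, while the optional comment preserves the domain context teams cannot anticipate.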

A screenshot of Workday's Payslip task with the Ask Workday sidepanel open on the right. The sidepanel has a section for the user to provide feedback.


In this example from “Ask Workday,” a user leverages the feedback feature to mark an explanation as “unclear.” In the future, this feature will support users being able to enter open text responses to describe real world context the product team may not be aware of.

A screenshot of Workday's Agent Workspace UI. There is a section in the middle of the page titled, "Ready to Begin Analysis." A "Modify Plan" button in that section is highlighted.


In this conceptual example, an Agent demonstrates a logical, step-by-step reasoning process and outlines a plan before prompting the user to initiate a task. The interaction allows the user to either proceed with the proposed actions as explained or modify the plan, thereby also serving as a form of evaluative feedback.

6. Traceable

Design traceable explanations to support findability, collaboration and auditability.

6.1. Connect to Underlying Data

Link explanations to the data sources or citations used.

Provide a way for users to directly access the data that informed the AI’s output. This could be a link or a button that allows the user to see a detailed view of the specific data points, records, or documents used in the explanation. This is crucial for data validation and for building user trust by demonstrating the source of the information.

A screenshot of Workday's Income Statement workflow.

This example from Discovery Boards includes “Drill Down” options, revealing the records that contributed to Maintenance and Supply actuals in the ledger, all within the application. This approach helps the user understand the system’s reasoning and allows them to investigate the data without having to export it.

A screenshot of Workday's Income Statement workflow.

Product Example: Financial Reporting Show Details

Figma: PEX admin specs

6.2 Maintain a Visible History Log

Provide users access to previous messages, decision-making steps and outputs.

When an AI feature is used to modify or create content, the changes should be logged and made visible to the user. This history log should not only show the final output but also the steps and inputs that led to that output. This is vital for collaboration and for auditing the decision-making process, especially when multiple users or systems interact with the AI.
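The log described above is essentially append-only: past entries are never rewritten, which is what makes it auditable. A minimal sketch (hypothetical names) might look like:

```typescript
// Hypothetical sketch of an append-only history log. Past entries are never
// mutated, which is what makes the log auditable.
interface LogEntry {
  actor: string;
  action: string;
  timestamp: number;
}

function append(log: readonly LogEntry[], entry: LogEntry): LogEntry[] {
  return [...log, entry]; // new log; the old one is untouched
}

// One display line per entry, oldest first.
function render(log: readonly LogEntry[]): string[] {
  return log.map((e) => `${e.actor}: ${e.action}`);
}
```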

A screenshot of Workday's Org Studio. A sidepanel on the right is highlighted. It contains a log of the task's activity.


In this example from Org Studio, an activity stream of actions taken by users and collaborators on a re-org project is retained.

A screenshot of Workday Slides with a version history log highlighted on the left side.


In this example from Workday Slides, a log of user activity is retained so that everyone collaborating on the project can see what actions were performed and who completed them, creating greater transparency.

A screenshot of Workday's home page with the Ask Workday sidepanel open to the right. Inside the sidepanel, there is a detailed log of conversation history.


In this Ask Workday example, the system retains a history of conversations (for a limited time), enabling users to easily resume previous interactions. This feature is particularly useful for explaining system output to others, as users can reference past conversations to demonstrate the source of information. For instance, an employee can revisit a conversation about maternity leave processes if they need to discuss it further with their manager.

Product Example: Version history


Looking for practical tools to help you begin exploring explainability?
Take a look at The What, Why and How of Explainability


Annotations Overview

Can't Find What You Need?

Check out our FAQ section, which may help you find the information you're looking for. For further information, contact the #ask-canvas-design or #ask-canvas-kit channels on Slack.
