Explainable AI Design
Helping users understand how AI operates and makes decisions.
Published
October 2025, by Mel Gillespie & Owen Derby with contributions from James Mulholland, Jason Vantomme, Robyn Oliver, Tom Cunningham, Tamara Janson, Michael Blume, and Jonathan Keyek
Last Updated
October 2025, by Owen Derby
Introduction
Explainability gives users the information they need to understand AI output in context, enabling them to make informed decisions. At Workday, we design explainability for users with a shared set of best practices to ensure we are consistently setting the right expectations, keeping users in control, and building trust over time - in line with our Human-AI Experience Guidelines.
These guidelines are designed for Workmates involved in the end-to-end definition of user-facing AI, including UX Designers, Developers, Product Managers, Researchers, and many other disciplines across the Product and Technology organization.
The depth and detail of “good” explanations of AI outputs can vary based on the user’s relationship to the technology and the specific context. At times, the reasoning behind an output may be knowable, but difficult to explain to a user. This document offers various strategies for crafting explanations so that people can responsibly act on the outcomes of our AI systems.
What’s in the Guidelines?
The Explainable AI Design guidelines provide specific framing around Explainability. You can use these guidelines to help structure early design conversations and experimentation before you invest in solutions. They also work as a heuristic or “pre-flight” checklist to assess your current UX to improve Explainability for users.
Each of the guidelines below outlines a best practice approach and includes a description of how to apply it practically, along with real-world examples.
- Transparent: Use explanations to disclose AI use, limitations, and underlying logic.
- Contextual: Customize explanations to specific user needs and contexts.
- Understandable: Make sure explanations are clear, consistent and support comprehension.
- Actionable: Explanations should guide next steps and drive progress in the task flow.
- Contestable: Design explanations that support user agency and oversight.
- Traceable: Design traceable explanations to support findability, collaboration and auditability.
How Do I Apply These Guidelines?
Once you understand the importance of explainability for human-centered design, you might ask, “How can I start to integrate its best practices into the experiences I’m designing?” These guidelines outline six essential approaches to clarifying how our technology works for users. Each guideline is developed from real and hypothetical AI system examples and validated by AI practitioners.
As you read through each of the guidelines and examples, ask yourself:
- How is this guidance applicable to your specific product or users?
- If relevant, what specific information would be most useful to your users?
- Do users currently receive this information? If they don’t, how can you add it?
- Is the information clear and well-placed? If not, how can you improve its presentation?
Designers who are intentional about integrating AI explainability should do the following:
- Design for the User Journey: Integrate explanations seamlessly into user workflows, providing them precisely where and when they are most beneficial to the user.
- Build in Explainability: Design interfaces so that explanations are an integral part of their design, rather than a separate feature.
- Prompt Critical Review: Create intentional opportunities for users to pause and reflect on decisions.
- Design for Evolving Trust: Begin with visible XAI support, then review and scale back over time as user confidence grows.
1. Transparent
Use explanations to disclose AI use, limitations, and underlying logic. Use notices with restraint to avoid littering the interface with repetitive badges, sparkle icons, notices, and disclaimers.
1.1. Visibly Disclose AI Usage
Clearly label content and interactions that are AI-driven.

Since AI-produced content can be indistinguishable from human-generated materials, disclosures are essential. They help users understand what they’re interacting with and empower them to make informed decisions.
1.2. Provide Visible AI Disclaimer
Include a clear statement that AI-generated content may be incorrect and requires human review.
Maintaining consistency in AI disclaimers is crucial for building and retaining user trust, as well as for avoiding confusion regarding AI’s involvement. Establish a clear policy for disclosing AI use, and apply it consistently across all platforms and content types.
This commitment is rooted in various relevant new and developing legal regulations and best-practice frameworks (e.g., EU AI Act, New York City Local Law 144, and NIST AI Risk Management Framework).

In this example from “Create a Job Description,” job descriptions are generated automatically using a fine-tuned model built in partnership with ML. Displaying a “Notice” in use cases like this is critical for ensuring that our AI features inform individuals about their interaction with AI and/or the type of data being processed by AI.
Document: RAI Protocol
Figma: Ask Workday Starter Kit
1.3. Reveal System Functionality
Provide options to access menus or settings.
For users to trust AI, they need to understand not just what the AI does, but also how it works and how they can influence its behaviour. Providing clear access to menus, settings, and controls reveals system functionality and is one of the best ways for users to learn how a system works.

Workday’s Time and Scheduling Hub presents the user with schedule settings to help them understand the different parameters by which a schedule is created. Showing users the settings in this way helps them create a more accurate mental model of product functionality.
Product Example: Edit setting modal
Product Example: Score card and Schedule settings
1.4. Provide Transparency
Provide transparency about how the system works.
Building trust with users means clearly surfacing the “what” that went into an AI-based suggestion, recommendation, match, or prediction - such as the data points used and general information about the underlying logic.

In this example from VNDLY, a clearly displayed explanation describes how the AI recommends candidates based on how well a candidate’s experience, skills, education, and qualifications align with the specific requirements outlined in the job description.
2. Contextual
Customize explanations to specific user needs and contexts. For situations with minimal impact, or quick transactional tasks, a comprehensive explanation may not be required; a simple one will suffice.
2.1. Tailor to the User
Make sure explanations are aligned to user needs and their decision making context.
A one-size-fits-all explanation rarely works. To be truly helpful, an explanation must be relevant to the user’s specific situation and what they are trying to accomplish. Tailoring explanations to the user means aligning the content, complexity, and presentation of an explanation with the user’s needs and their decision-making context. Customize the explanation’s depth and focus based on who is seeing it (e.g., manager, employee, administrator). Use the task context to frame the explanation. For example, if a user is requesting time off, explain the AI’s responses in terms of existing standardized business logic and HR policy. Work with your research partner to understand more about user context and apply it to the explanation.
Explainability profiles for front-stage user types are currently being developed. They are intended to serve as foundational guides and a user lens to help:
- Define Requirements: Determine the essential explainability needs for each user type.
- Ground Solutions in User Needs: Make sure our work is tailored to their real-world context.
The first DRAFT profile for the Worker user type can be found here.

In this example from “configure worker pipeline,” a System Admin needs to clearly understand the information in the People Analytics Installer in order to configure People Analytics correctly. This complex task is made easier by an explanation that accounts for the user’s role, their current task, and their level of expertise, so that it is not just accurate but also useful and actionable in their specific context.
Product Example: HelpText - Canvas
2.2. Leverage Both Local and Global Explainability
By visually anchoring explanations to their corresponding outputs (such as a match, recommendation, prediction or suggestion), you reinforce the relationship between a single result and its justification. This “local” approach minimizes cognitive effort and reduces the distance your user’s attention must travel, making the explanation easier to find and understand without interruption. This practice not only makes the AI’s logic clearer but also helps build user trust by demonstrating the rationale in a seamless, intuitive way.
It’s also important to consider “Global” or system level explanations. Global explanations clarify the broader logic or rules that govern the AI model’s overall operation. For example, you can use a global explanation to describe the general criteria the system uses to rank all candidates, providing users with a comprehensive understanding of the model’s behavior beyond a single instance.
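To make the distinction concrete, the sketch below models a local explanation (anchored to one output) and a global explanation (describing the system as a whole) as simple data shapes. It is illustrative only; the interfaces and field names are hypothetical, not a Workday API.

```typescript
// Hypothetical types for illustration only - not a Workday API.
interface LocalExplanation {
  outputId: string;                                   // the specific result being explained
  summary: string;                                    // e.g. "Grade A: strong match to the job requirements"
  contributingFactors: { factor: string; detail: string }[];
}

interface GlobalExplanation {
  scope: "model";                                     // applies to every output the model produces
  criteria: string[];                                 // the general factors the system considers
  description: string;                                // how those criteria are weighed overall
}

// A local explanation is tied to one recommendation...
const localExample: LocalExplanation = {
  outputId: "candidate-123",
  summary: "Grade A - this candidate's resume closely matches the job requirements.",
  contributingFactors: [
    { factor: "skills", detail: "8 of 9 required skills listed" },
    { factor: "experience", detail: "6 years in a comparable role" },
  ],
};

// ...while a global explanation describes the system's overall logic.
const globalExample: GlobalExplanation = {
  scope: "model",
  criteria: ["experience", "skills", "education", "qualifications"],
  description: "Candidates are graded A-D based on how well their profile aligns with the job description.",
};
```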
“Local” Example - VNDLY

In this example from VNDLY, the explanation is directly tied to a specific AI recommendation or grade - for example, explaining why this particular candidate received its grade, what the grade scale is (A, B, C, or D), and that the grade reflects how the candidate’s resume aligns with the job requirements.
“Global” Example - VNDLY

In this example from VNDLY, a global explanation describes how the AI recommends candidates based on how well a candidate’s experience, skills, education, and qualifications align with the specific requirements outlined in the job description.
2.3. Use Progressive Disclosure
Surface the right level of explanation for the user’s immediate needs, providing the option to go deeper if they need to.
When aiming for both clarity and in-depth explanation, start with a concise, high-level overview. This initial summary should quickly build trust and improve understanding, while also providing an obvious route to more detailed reasoning. Offering this optional, deeper dive is essential for users who need to confirm unexpected results, audit the AI’s logic for compliance, or simply want a more technical grasp of how the system reached its conclusions. To ensure comprehension for users without extensive statistics or machine learning knowledge, drill-downs that present a lot of detailed information should rely primarily on data visualization. Intuitive visual formats, such as interactive charts, customizable graphs, or filtered views, can help simplify complex information and facilitate understanding of model behavior.
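As a minimal sketch of this pattern, the hypothetical React component below renders a concise summary by default and only reveals the detailed reasoning when the user asks for it. The component and prop names are invented for illustration and are not actual Canvas components.

```tsx
// Hypothetical progressive-disclosure wrapper for an AI explanation.
import React, { useState } from "react";

interface ExplanationProps {
  summary: string;           // concise, high-level overview shown by default
  details: React.ReactNode;  // deeper reasoning, e.g. charts or factor breakdowns
}

export function ProgressiveExplanation({ summary, details }: ExplanationProps) {
  const [expanded, setExpanded] = useState(false);

  return (
    <section aria-label="AI explanation">
      <p>{summary}</p>
      {/* The deeper dive is optional and only rendered on request. */}
      <button aria-expanded={expanded} onClick={() => setExpanded(!expanded)}>
        {expanded ? "Hide details" : "View more"}
      </button>
      {expanded && <div>{details}</div>}
    </section>
  );
}
```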

When a user selects the “View More” button on a Story Card in People Analytics, a modal with a more detailed view of the story is opened. The Top Driver tab on the modal contains a list of content and each list item can be expanded by selecting the arrow icon. Expanding each list item opens a card view with more detailed information, allowing the user to deep dive into more detail if they need to without overwhelming them with information.
Product Example: Expandable List within a Modal
XO Layout: Progressive Disclosure Select Layout

(For illustration only - not actual product functionality) While requesting time off from “Ask Workday,” an explanation is provided as to why an employee’s request for time off has been declined. Users are given a link to the source of the response, in this case the “FY26 Time off Policy” that Ask Workday is using to drive the response. This provides an obvious route to more detailed information and builds trust in responses from the system.
3. Understandable
Make sure explanations are clear, consistent and support comprehension.
3.1. Use Non-technical Language
Make sure explanations use clear and consistent language that aligns to the AI persona guidelines.
This approach is critical for building trust; when users can easily understand the AI’s reasoning, the system feels more transparent and reliable. Consistent, clear language, suited to the user’s professional needs, also reinforces a helpful and approachable AI persona, preventing the user experience from feeling disjointed or intimidating. It reduces cognitive load, allowing users to focus on their goals rather than on deciphering complex terms.
Remember: Explanations of AI system outputs must cater to two distinct audiences: users who are responsible for acting on the system’s recommendations and users who are directly impacted by those outcomes. The first group requires explanations that foster trust and confidence in the system’s recommendations. Users affected by decisions need clear, straightforward reasons for outcomes, the ability to correct inaccuracies, and an understanding of how their interactions can lead to different results.
Product Example: Feature Highlight
Figma: Ask Workday StarterKit
3.2. Provide Granularity Controls
Let users adjust the level of explanation detail to match their needs.
Allow users to adjust the level of detail in AI-generated explanations to match their needs. This provides them with different ways of viewing the same information, such as a high-level summary, a detailed textual breakdown, or the raw numerical data. Giving users this control not only acknowledges that different tasks and roles require different levels of insight, but also helps simplify information in a way that minimizes confusion.
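A minimal sketch of a granularity control, assuming a simple three-level model (summary, detailed, raw). The function and copy below are hypothetical; the point is that the same underlying explanation is rendered at whichever level of detail the user chooses.

```typescript
// Hypothetical sketch: one explanation rendered at three levels of detail.
type DetailLevel = "summary" | "detailed" | "raw";

function renderExplanation(
  level: DetailLevel,
  driver: string,        // e.g. "Contractor spend"
  deltaPercent: number,  // contribution to the change
  rawRows: number[][]    // underlying figures for audit
): string {
  switch (level) {
    case "summary":
      return `${driver} is the top driver of this change.`;
    case "detailed":
      return `${driver} contributed ${deltaPercent}% of the change; expand to see the breakdown by region.`;
    case "raw":
      return JSON.stringify(rawRows); // the numbers themselves, for users who need to audit
  }
}
```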

This People Analytics example provides controls to update the visualization for “Cost of Goods Sold”. One user might need a broad overview to spot trends, while another might need a granular table to audit specific data points. By providing multiple views, the interface can serve a wider range of user needs and use cases.
3.3. Use Familiar Patterns
Use scalable patterns that support user familiarity over time.
By leveraging familiar, scalable design patterns we can reduce cognitive load. Users can focus on their tasks rather than on learning a new interface. This leads to a more efficient and less frustrating experience.

QuickTips are in-context help shown to end users to guide them in completing a task in Workday. A question mark icon beside a field indicates to end users that they can find more information if they have a query about that field. QuickTips are generally intended to support ease of use, error prevention, and documentation, helping users complete their tasks more efficiently and successfully. The pattern leverages a familiar “?” icon for ingress, which is ubiquitous in most software.
Pattern: QuickTips
Product Example: Additional Information
3.4. Make Explanations Accessible for All Users
An accessible experience must adapt to the different ways people expect to interact with it. Creating good explainability means meeting the needs of a diverse group of users. Users must be able to perceive, understand, and operate the information in the user interface no matter what input device or assistive technology they’re using.
- Perceivable: Non-text explanations such as icons and data visualizations should have at least one accessible text alternative - such as an accessible label, a data table, or a spoken description where applicable - that conveys the same information. Instructions should not rely solely on sensory characteristics such as shape, color, size, visual location, orientation, or sound.
- Understandable: AI explanations should be presented in plain language appropriate to the user’s level of expertise or domain understanding, ensuring they are easy to comprehend for non-specialists. Acronyms that users may not be familiar with should be spelled out the first time they are used.
- Operable: A user should be able to navigate and interact with interactive explanations using various input methods, not just a mouse.
AI systems need to offer explanations in multiple formats (multimodality). This could include a combination of visual charts with corresponding audio descriptions and simplified, text-based summaries that can be easily read by screen readers.
For more specific guidance please visit our accessibility guidelines on Canvas.

In this conceptual example from People Analytics, an explanation displayed as a visual heatmap has an accessible text alternative, such as a spoken description or a data table, that conveys the same information.
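A minimal sketch, assuming React, of how a visual explanation can carry its own text alternatives: the heatmap is exposed as an image with an accessible label, and the same values are provided as a data table. The component and markup are hypothetical, not actual product code.

```tsx
// Hypothetical markup: a visual explanation paired with accessible text alternatives.
import React from "react";

export function HeatmapExplanation({
  description,   // spoken/text summary of what the heatmap shows
  headers,
  rows,
}: {
  description: string;
  headers: string[];
  rows: string[][];
}) {
  return (
    <figure>
      {/* The visual itself is exposed as an image with a text alternative. */}
      <div role="img" aria-label={description} className="heatmap" />
      {/* The same information as a data table for screen readers and keyboard users. */}
      <table>
        <caption>{description}</caption>
        <thead>
          <tr>{headers.map((h) => <th key={h} scope="col">{h}</th>)}</tr>
        </thead>
        <tbody>
          {rows.map((cells, r) => (
            <tr key={r}>{cells.map((cell, c) => <td key={c}>{cell}</td>)}</tr>
          ))}
        </tbody>
      </table>
    </figure>
  );
}
```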
4. Actionable
Explanations should guide next steps and drive progress in the task flow. Avoid making suggestions when we don’t have appropriate context to guide users effectively.
4.1. Provide Contextual Next Steps
Use relevant nudges, such as contextual ingress to help guide user behaviour and usage.
These kinds of nudges are subtle prompts integrated into the user interface that guide users towards specific actions or behaviors at the moment they are most relevant. Unlike pop-ups or alerts, they feel like a natural part of the user flow. By providing suggested next steps we not only add explanations for users, but we also build on their mental model of what capabilities the product or feature provides. A nudge can be triggered by a variety of user actions such as hovering over an element, pausing on a particular field, asking a question via a text input or completing a specific step in a process.
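As a rough sketch, contextual nudges can be modeled as messages keyed to the interaction that triggers them, so that only the nudge matching what the user just did is surfaced. The trigger names and copy below are hypothetical.

```typescript
// Hypothetical sketch: mapping interaction events to contextual nudges.
type NudgeTrigger = "hover" | "field-pause" | "question-asked" | "step-completed";

interface Nudge {
  trigger: NudgeTrigger;
  message: string; // the suggested next step, shown in context
}

const nudges: Nudge[] = [
  { trigger: "field-pause", message: "Not sure what to enter? See how this field is used." },
  { trigger: "step-completed", message: "Next, you can compare this draft with last year's version." },
];

// Surface only the nudge that matches what the user just did.
function nudgeFor(event: NudgeTrigger): Nudge | undefined {
  return nudges.find((n) => n.trigger === event);
}
```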

In this example, “Ask Workday” suggests contextual nudges alongside search results and within the side panel.
Suggested Prompts: Ask Workday Starterkit
Pattern: Contextual Ingress
4.2. Orient Users During Wait Times
Incorporate appropriate loading patterns to signal load times and progress.
Use load time to share with the user what data sources are being used - similar to when you do a search for flights or book a hotel online. For example:
- “Generating based on… [Example in sentence case]”
You may also use loading times to reiterate what the user has requested, as sketched after these examples. For example:
- “Generating details for January Paycheck”
- “Finding results for My Goals”
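A minimal sketch of this copy pattern: the loading message restates the user’s request and, when known, the data sources being used. The function name and example strings are hypothetical.

```typescript
// Hypothetical sketch: loading copy that restates the request and its data sources.
function loadingMessage(requestSummary: string, sources: string[] = []): string {
  return sources.length > 0
    ? `Generating ${requestSummary} based on ${sources.join(", ")}…`
    : `Generating ${requestSummary}…`;
}

loadingMessage("details for January Paycheck", ["Payroll results", "FY26 pay policy"]);
// -> "Generating details for January Paycheck based on Payroll results, FY26 pay policy…"
```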

4.3. Reveal the System’s Process with Chain of Thought
For more complex agent interactions, utilize Chain of Thought (CoT) reasoning to transform abstract agent workflows into clear, step-by-step visualizations that show the user what the AI is doing “under the hood.”
This approach moves beyond simple loading by providing a live, observable record of an agent’s progress. By breaking down complex tasks into a sequence of logical steps, CoT helps users understand the agent’s plan, reveals data dependencies, and clarifies the flow of multi-agent or multi-threaded tasks. The addition of a “Modify plan” option can also act as feedback itself.
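One way to sketch this, with hypothetical type and field names, is to model the agent’s plan as a list of user-visible steps that carry their own data dependencies and status, plus a flag for the “Modify plan” affordance:

```typescript
// Hypothetical sketch of a Chain of Thought plan surfaced to the user.
type StepStatus = "pending" | "running" | "complete" | "needs-review";

interface PlanStep {
  label: string;          // e.g. "Gather Q3 revenue by region"
  dataSources: string[];  // which data the step depends on
  status: StepStatus;
}

interface AgentPlan {
  goal: string;
  steps: PlanStep[];
  userCanModify: boolean; // "Modify plan" doubles as a feedback mechanism
}

const plan: AgentPlan = {
  goal: "Comprehensive analysis of Q3 financial results",
  steps: [
    { label: "Collect Q3 actuals from the ledger", dataSources: ["General Ledger"], status: "complete" },
    { label: "Compare against Q3 plan and prior year", dataSources: ["Plans", "FY history"], status: "running" },
    { label: "Draft summary and flag anomalies for review", dataSources: [], status: "pending" },
  ],
  userCanModify: true,
};
```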

In this conceptual example, a user requests “a comprehensive analysis of Q3 financial results…” and the Agent responds by breaking this request down into the steps it will take next before asking the user to confirm or modify the plan.

Once the user confirms the plan, the agent displays overall progress, revealing data sources and templated steps. The pattern defines the problem domain by breaking a task down into logical steps and explaining the existing knowledge the system has to achieve the task. It also sequences logic (e.g., split-solve-combine) as it calls out the reasoning steps. This loading pattern also error-checks intermediate results, asking the user to continue with the planned actions or revise them, while displaying estimated load time to set expectations around latency, system responsiveness, and wait times.
4.4. Provide Clear Resolution Pathways
For complex or failed interactions, explanations should guide the user toward an alternative action, an escape hatch, or a suggestion to escalate to human support. Users don’t want to be left at a dead end, so plan for what to explain when things go wrong.

In this example, a Worker checking their payslip is querying a change to their expected pay, and is unsatisfied with the response. Users who submit feedback that they’re unhappy with a response from “Ask Workday” can be given an option to create a case in Workday Help if they need further assistance. This creates a vital escalation point for users in need of human support.
Figma: Ask Workday Starter Kit
5. Contestable
Design explanations that support user agency and oversight.
5.1. Provide “What-if” Explanations
Show users how different inputs can change the outcome.
“What-if” explanations, also known as Counterfactual Explanations, build trust by demystifying the AI’s decision-making. By allowing users to ask “what if?”, they can directly see the changes required to alter an outcome. This visibility not only shows the direct link between their input and the model’s output but also helps them develop a better mental model of the system, understanding exactly where a threshold or decision boundary lies. This process reduces user over-reliance on the AI’s output by encouraging a more critical assessment of the result and its limitations.

In this example, a mortgage loan calculator from a bank lets users enter a selection of variables (estimated property price, required mortgage amount, and duration of the loan) and quickly see monthly repayment amounts. Allowing users to play with “what if” scenarios in this way can help them quickly understand complexity and feel in control.
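Behind such a calculator sits the standard amortized-loan repayment formula, monthly = P × i / (1 − (1 + i)^−n), where P is the amount borrowed, i the monthly interest rate, and n the number of monthly payments. The sketch below, with illustrative figures, shows how varying one input at a time surfaces the “what-if” relationship.

```typescript
// Standard amortized-loan formula: monthly = P * i / (1 - (1 + i)^-n)
function monthlyRepayment(principal: number, annualRate: number, years: number): number {
  const i = annualRate / 12; // monthly interest rate
  const n = years * 12;      // number of monthly payments
  if (i === 0) return principal / n;
  return (principal * i) / (1 - Math.pow(1 + i, -n));
}

// Varying one input at a time makes the decision boundary tangible (illustrative figures):
monthlyRepayment(250_000, 0.045, 25); // ≈ 1,390 per month
monthlyRepayment(250_000, 0.045, 30); // ≈ 1,267 per month - longer term, lower payment
```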
5.2. Empower User Agency Through Configurable Control
AI reshapes standardised workflows, necessitating designs that preserve and enhance user agency. Provide a range of intuitive controls that allow users to actively steer, iterate on, and guide AI outcomes. This involves designing interfaces where users can easily configure their AI experience, compare multiple AI-generated options, and clearly understand the similarities and differences between them. This approach ensures that users remain the ultimate decision-makers, fostering collaboration rather than passive acceptance.

An interstitial step designed for Workday AI Contract Analysis allows the user to see “as-is” vs. “to-be” states before committing to a process flow. The flow breaks down collaboration and tasks, then asks the user to review and configure the proposed resolution before eventually committing changes to the system.
5.3. Establish a Feedback Loop
Enable simple and clear methods for users to give feedback on explanations.
Help the user correct the AI system or adjust, reverse, and edit system suggestions to build user confidence and train the system to improve results. By letting users provide feedback, the system becomes more aligned with a user’s real-world context. This continuous feedback loop helps Workday’s AI systems adapt to evolving contexts and domain-specific issues product teams might not be aware of.
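A minimal sketch of what such a feedback event might capture, using hypothetical field names: the explanation being rated, a simple rating, and optional open text for context the product team may lack.

```typescript
// Hypothetical sketch of a feedback event captured on an explanation.
type ExplanationRating = "helpful" | "unclear" | "incorrect";

interface ExplanationFeedback {
  explanationId: string;
  rating: ExplanationRating;
  comment?: string;   // optional open text describing real-world context
  submittedAt: string; // ISO timestamp, useful when auditing later corrections
}

const feedback: ExplanationFeedback = {
  explanationId: "pto-denial-2024-07",
  rating: "unclear",
  comment: "The policy section quoted doesn't apply to part-time workers.",
  submittedAt: new Date().toISOString(),
};
```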

In this example from “Ask Workday,” a user leverages the feedback feature to mark an explanation as “unclear.” In the future, this feature will let users enter open-text responses to describe real-world context the product team may not be aware of.

In this conceptual example, an Agent demonstrates a logical, step-by-step reasoning process and outlines a plan before prompting the user to initiate a task. The interaction allows the user to either proceed with the proposed actions as explained or modify the plan, thereby also serving as a form of evaluative feedback.
6. Traceable
Design traceable explanations to support findability, collaboration and auditability.
6.1. Connect to Underlying Data
Link explanations to the data sources or citations used.
Provide a way for users to directly access the data that informed the AI’s output. This could be a link or a button that allows the user to see a detailed view of the specific data points, records, or documents used in the explanation. This is crucial for data validation and for building user trust by demonstrating the source of the information.
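As an illustrative sketch, the output shown to the user can carry its citations as structured links back into the product, so a “view source” affordance is straightforward to render. The interfaces, statement, and URLs below are hypothetical.

```typescript
// Hypothetical sketch: an AI output that carries links back to its underlying sources.
interface SourceCitation {
  label: string; // e.g. "FY26 Time Off Policy"
  href: string;  // deep link to the record or document inside the product
}

interface ExplainedOutput {
  statement: string;           // the suggestion, prediction, or answer shown to the user
  citations: SourceCitation[]; // lets users drill into the data that informed it
}

const output: ExplainedOutput = {
  statement: "Maintenance and supply actuals increased 12% quarter over quarter.",
  citations: [
    { label: "Ledger drill-down: maintenance actuals", href: "/ledger/maintenance?quarter=Q3" },
    { label: "Ledger drill-down: supply actuals", href: "/ledger/supply?quarter=Q3" },
  ],
};
```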

This example from Discovery Boards includes “Drill Down” options, revealing the records that contributed to maintenance and supply actuals in the ledger, all within the application. This approach helps the user understand the system’s reasoning and allows them to investigate the data without having to export it.

Product Example: Financial Reporting Show Details
Figma: PEX admin specs
6.2. Maintain a Visible History Log
Provide users access to previous messages, decision-making steps and outputs.
When an AI feature is used to modify or create content, the changes should be logged and made visible to the user. This history log should not only show the final output but also the steps and inputs that led to that output. This is vital for collaboration and for auditing the decision-making process, especially when multiple users or systems interact with the AI.
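A minimal sketch of a history-log entry, with hypothetical fields: it records who acted (user or AI), what changed, and the inputs that led to the output, not just the final result.

```typescript
// Hypothetical sketch of a history-log entry for AI-assisted changes.
interface HistoryEntry {
  actor: "user" | "ai";  // who performed the step
  action: string;        // what changed, e.g. "Generated first draft of job description"
  inputSummary?: string; // the prompt or inputs that led to the output
  timestamp: string;
}

// The log captures inputs and intermediate steps so collaborators and auditors
// can reconstruct how a result was produced.
const activity: HistoryEntry[] = [
  { actor: "user", action: "Requested a draft job description", inputSummary: "Senior Payroll Analyst, hybrid", timestamp: "2025-10-01T09:12:00Z" },
  { actor: "ai", action: "Generated draft v1", timestamp: "2025-10-01T09:12:08Z" },
  { actor: "user", action: "Edited responsibilities section", timestamp: "2025-10-01T09:20:41Z" },
];
```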

In this example from Org Studio, an activity stream of actions taken by the users and collaborators on a re-org project is retained.

In this example from Workday Slides, a log of user activity is retained so that everyone collaborating on the project can see what actions were performed and who completed them, creating greater transparency.

In this Ask Workday example, the system retains a history of conversations (for a limited time), enabling users to easily resume previous interactions. This feature is particularly useful for explaining system output to others, as users can reference past conversations to demonstrate the source of information. For instance, an employee can revisit a conversation about maternity leave processes if they need to discuss it further with their manager.
Product Example: Version history
Looking for practical tools to help you begin exploring explainability?
Take a look at The What, Why and How of Explainability
Related Information
Why is Explainability Important?
Explainability is vital across every type of AI interaction, from embedded content to agentic flows. It isn’t a single output but a fundamental design imperative that sits across the entire user journey, providing the right information at the right time, to help users understand, trust, and use AI responsibly as a collaborative partner. In short, explainability:
- Builds Trust and Confidence
- Enhances User Understanding and Mental Models
- Facilitates Debugging and Error Correction (Contestability)
- Ensures Ethical AI and Regulatory Compliance
- Improves User Experience (UX) Fundamentals
Builds Trust and Confidence
- Reduces “Black Box” Syndrome: When an AI provides a recommendation or makes a decision without any justification, users are less likely to trust it. Explainability demystifies the AI, showing why it did what it did, which fosters confidence in its reliability and accuracy.
- Increases Adoption: Users are more willing to integrate AI into their workflows if they understand its outputs and limitations. If they don’t trust it, they won’t use it, regardless of its technical sophistication.
Enhances User Understanding and Mental Models
- Clarity and Insight: Explanations help users form accurate mental models of how the AI works. They learn what factors influence decisions, what the AI is good at, and where its weaknesses lie.
- Informed Decision-Making: When users understand the rationale behind an AI’s suggestion, they can make more informed decisions, either by accepting the AI’s output with confidence or by overriding it when they have additional context the AI lacks.
Facilitates Debugging and Error Correction (Contestability)
- Identifying Flaws: If an AI makes a mistake, an explanation can help users identify why the mistake occurred (e.g., faulty input data, an incorrect assumption). This is crucial for debugging and improving the AI over time.
- Enabling Override: Explanations empower users to contest or override an AI’s decision when they spot an error or have superior human judgment. This ensures that the user remains in control and can correct the system when necessary.
Ensures Ethical AI and Regulatory Compliance
- Fairness and Bias Detection: Explanations can expose potential biases in an AI’s training data or decision-making process, allowing those building AI and users to address fairness concerns.
- Accountability: For regulated industries (e.g., finance, healthcare), explainability is a legal or ethical requirement. It provides an audit trail and justification for AI-driven decisions, ensuring accountability.
Improves User Experience (UX) Fundamentals
- Reduced Cognitive Load: Well-designed explanations are presented in context, reducing the effort users need to expend to understand the AI’s output.
- Increased Efficiency: When users quickly grasp the AI’s reasoning, they can process information and complete tasks more efficiently.
- Personalization: Explainability can be tailored to different user roles and needs, providing the right level of detail at the right time.
Explainability
Providing users with the information they need to understand how an AI operates and makes decisions, thereby enabling them to make informed choices. Without explainability, AI can feel like a “black box,” frustrating users and leading to low adoption. Explainability transforms AI from a mysterious tool into a transparent, collaborative partner. It contributes to building long-term AI literacy, fostering user trust and confidence by demystifying the technology and its limitations.
Human-AI Experience Guidelines
A set of foundational guidelines that help with the design of AI systems at Workday to ensure they are trustworthy, user-controlled, and aligned with human needs.
Heuristic or “Pre-flight” Checklist
A heuristic is a practical, rule-of-thumb approach to problem-solving. In this context, the guidelines function as a ‘pre-flight’ checklist, a set of specific questions and best practices that can be used to assess an existing or proposed user experience (UX) and identify opportunities to improve explainability. This provides a structured, repeatable method for evaluating how effectively a product clarifies how its technology works for users.
Interpretability
The degree to which a human can directly understand how a model works or why it made a decision, based on its structure or behavior - it’s about how transparent and intuitive the model is by design.
Local Explainability
Local explainability refers to the practice of providing a specific, in-the-moment justification for a single AI prediction, recommendation, or output. This approach is anchored directly to a corresponding result, reinforcing the contextual relationship between the output and its underlying rationale for the user. The core question a local explanation answers is, “Why did the AI make this specific decision for this one instance?” The goal is to justify an individual result directly within the user’s workflow, minimizing cognitive effort and building trust on a case-by-case basis. From a technical perspective, local explainability is often achieved using post-hoc, model-agnostic techniques. This means the explanation method treats the AI model as a “black box” and works by probing its outputs without needing to understand its internal structure.
Global Explainability
Global explainability provides a system-level overview of an AI model’s overall behavior and logic. Rather than focusing on a single instance, it clarifies the general rules, feature importance, and decision-making criteria that govern the model’s operation as a whole. A global explanation answers the question, “How does the AI system generally work?” It provides a comprehensive understanding of the model’s behavior beyond a single instance. For instance, in the same candidate grading system, a global explanation would describe the general criteria the model uses to rank all candidates, such as the weighting of experience, skills, education, and qualifications as a whole. This type of explanation is vital for building a more accurate mental model of the system’s functionality.
For further reading on local and global explainability, please refer to Interpretable Machine Learning. Special thanks to Jason Vantomme.
Black-Box Models
A term for models whose inner workings are not easily understood, making it hard to see or understand how they make decisions (like a magic trick where you see the result but not the method).
Cognitive Overload
This occurs when users are presented with too much information at once, making it difficult to process the explanation and make a decision. It can lead to confusion and frustration.
Chain of Thought (CoT)
Chain of Thought is a method that breaks down complex AI reasoning into a clear, step-by-step visualization for the user. It moves beyond simple loading screens by providing a live, observable record of an AI agent’s progress, showing the user what the AI is doing “under the hood”. This approach makes an abstract workflow into a clear sequence of logical steps, which can reveal data dependencies and clarify the flow of complex tasks.
Counterfactual Explanations
Counterfactual explanations are a specific, powerful type of explanation that directly supports the goal of contestability. Instead of only explaining why a specific outcome occurred (“Why this?”), a counterfactual explanation shows a user how different inputs could change the outcome (“Why not something else?”). It can be used to describe the minimum conditions that would have had to be different for an alternative, preferred decision to be made. For example, a loan applicant who is rejected could be told that their loan would have been approved if their income had been $20,000 higher. This approach empowers users by giving them actionable information to potentially alter an outcome or improve their chances in the future.
Data Drift
This happens when the data a model encounters in the real world starts to differ from the data it was trained on. This can lead to a decrease in the model’s performance and accuracy over time.
Mental Model
A mental model is a user’s internal representation or understanding of how a product or system works. When an AI system’s behavior is unpredictable or unexplained, a user’s mental model may be inaccurate, leading to frustration and distrust. Good explainability helps users form an accurate mental model of how the AI works, allowing them to understand what factors influence decisions and what the system is good at and where its weaknesses lie. An accurate mental model enhances informed decision-making and reduces user over-reliance on the AI’s output.
Model Drift
When the model itself changes over time (e.g. through retraining or updates), causing its decisions or behaviour to shift from how it originally performed.
Notice
A mandatory alert within the user interface that explicitly informs the user that a feature is generated or powered by AI. This is a key requirement for transparency and regulatory compliance.
Post-Hoc Explanation
An explanation that is generated by AI after it has already made a decision. It’s important to be aware that these explanations can sometimes be simplified summaries and may not reflect the exact, complex reasoning the model used.
Progressive Disclosure
This is a classic user experience (UX) design pattern applied to the context of explainability. It involves initially showing users a concise, high-level overview of an AI’s reasoning to address their immediate needs, while providing an obvious and easy option to “go deeper” for more detailed reasoning. This prevents users from being overwhelmed with information they don’t need, but it still provides a clear path for those who require a more technical grasp of how the system reached its conclusions.
RAG (Retrieval Augmented Generation)
A technique where a model retrieves relevant information from external knowledge sources or data, and uses that information to generate more accurate and contextually relevant responses, rather than relying solely on its training data.
Transparency
In the context of AI, this means providing clear information about how a system works. This includes details on the data used, known limitations, and the logic behind outputs, which can help foster user trust.
Tuning
When developers adjust their machine learning algorithm based on feedback or errors to improve accuracy and performance.
Contestable Explanations
Contestability is a core design principle that ensures explanations support user agency and oversight. It involves designing a feedback loop that enables users to challenge, adjust, or override an AI’s decision. This is vital for building user confidence and for training the system to improve its results over time. By providing a means for users to correct the AI’s suggestions or give feedback on its explanations, the system becomes more aligned with real-world contexts and human judgment.
Can't Find What You Need?
Check out our FAQ section which may help you find the information you're looking for. For further information, contact the #ask-canvas-design or #ask-canvas-kit channels on Slack.