AI Experience Guidelines
12 guidelines to help structure early product conversations and experimentation across disciplines before you invest in solutions.
Published
February 2024, by Owen Derby (v1.0)
Build Meaningful Intelligent Experiences
Human-to-computer interaction guidelines have helped us create meaningful user experiences for more than 25 years. Now, that guidance needs to evolve to inform our approach to designing products infused with Artificial Intelligence. Our new Human-AI Experience guidelines aim to navigate a new paradigm in interaction design and implement best practices throughout a user’s experience with Workday’s AI.
From the first steps users take with our AI, to ensuring they feel in control during interactions, to supporting users over time, you can use these guidelines to help structure early product conversations and experimentation across disciplines before you invest in solutions.


1 – Set the Right Expectations
When to use? Before or during initial user interaction.
Why: Because people will expect our AI to be both smarter and dumber than it is, we need to make sure users know what AI can do, and where its limitations are.
1.1 – Help the User Understand What’s Possible
Make sure the user understands what they can do with AI within the product, task, or feature. An introductory explanation helps tell the user what to expect. We can further aid understanding by clearly showing preformulated user inputs, or by revealing options, menus, or settings that expose system functionality. Ensuring the user has a clear understanding of functionality before they begin avoids the risk of disappointment or even task abandonment.
1.2 – Set User Expectations
Users may trust or distrust AI system performance in unrealistic ways. Set realistic expectations about how well the system can perform, especially at the early or beta stages of release, where training data may be limited. Setting expectations early helps counter any bias toward, or aversion to, the system that users may bring with them.
1.3 – Make AI Functionality Easy to Access
Ensure users can clearly identify where, when, and how to invoke the AI system efficiently, especially when we introduce users to new functionality that they may be unfamiliar with.
1.4 – Elicit User Preferences
It’s hard to provide reliable personalised recommendations, suggestions or matches without user interaction. If you’re not sure of user preferences before they begin, start by soliciting them. Remember: designing for casual, intermediate and professional users with different levels of familiarity with Workday’s functionality will require different kinds of support.
Set the Right Expectations - examples
| Help the User Understand What’s Possible | Setting User Expectations | Making AI Functionality Easy to Access | Eliciting User Preferences |
|---|---|---|---|
| 1.1.1 - Use an introductory explanation | 1.2.1 - Make it clear that AI can make mistakes | 1.3.1 - Use a consistent visual affordance | 1.4.1 - Ask users at the start |
| 1.1.2 - Reveal system settings | 1.2.2 - Use UI content to explain how it works | 1.3.2 - Surface recommendations | 1.4.2 - Use data external to the user |
| 1.1.3 - Show examples of system output | 1.2.3 - Share system performance | 1.3.3 - Be context aware | 1.4.3 - Consider a collaborative filtering approach |
| 1.1.4 - Offer first-time-user tours | | | |
1.1.1 – Use an Introductory Explanation
Use introductory explanations to give context-specific guidance to users.

In this example from Workday’s Change Job task, a first-time user is shown help text explaining what to do in the context of the prompt or component they are interacting with.
1.1.2 – Reveal System Settings
Giving access to menus or settings that reveal system functionality helps the user understand what the system can do (see also 3.4.6).

Workday’s Time and Scheduling Hub presents the user with schedule settings to help them understand the different parameters by which a schedule is created. Showing users the settings in this way helps them create a more accurate mental model of product functionality.
1.1.3 – Show Examples of System Output
Showing a set of system outputs helps users understand what the system can do (see also 3.4.7).

Navigation apps suggest multiple routes with different benefits before the user selects ‘start’. Sharing sample outputs with the user at the start is a simple way to give new users a clear understanding of a wide array of functionality before they begin.
1.1.4 – Offer First-Time-User Tours
Introductory tours and explanations for first-time users help them discover non-obvious system capabilities. Plan to phase these tours out over time so the experience doesn’t become cumbersome for returning users.

In this example from Workday’s Career Hub, a first-time user is shown an introduction to functionality as part of a product tour. This can help users become familiar with new or unknown functionality quickly.
1.2.1 – Make it Clear That AI Can Make Mistakes
Labelling in the UI should reflect the accuracy of the system, i.e. the probability that the system may make mistakes.

The persistent disclaimer in this AI-infused text editor makes clear to users that generated text is experimental and may contain errors, encouraging them to review the content before using it.
1.2.2 – Use UI Content to Explain How it Works
Set expectations for how well the system can do what it can do by communicating how the system works in simple terms.

Apple Music’s Favorites Mix sets high expectations with its explanation. The UI text in this example lets the user know what to expect (this playlist contains music the user will love), that the mix will improve over time with use, and when it updates, all in 3 clear, concise sentences.
1.2.3 – Share System Performance
When users need to form more accurate expectations around system performance, display a statistical confidence score.

The travel website Kayak shares system performance information with users to help them better understand the accuracy of its airfare cost predictions.
1.3.1 – Use a Consistent Visual Affordance
Highlight AI features consistently, with the same visual treatment, so users can activate them at the click or tap of a button when needed.

The purple “sparkle” icon is part of Workday’s foundational AI Experience UI components. The sparkle icon is used within the bounds of a single application, alongside familiar components like contextual menus, helping users identify and access new AI functionality in a consistent way while building familiarity.
1.3.2 – Surface Recommendations
Surface recommendations, matches, or suggestions to the user without making them hunt around.

Workday’s Learning product provides ML-generated recommendations about learning courses the user may be interested in. Courses “based on your interests” are inferred from the user’s expressed interests and help surface self-directed learning content the user might not have been aware of.
1.3.3 – Be Context Aware
Reveal AI functionality at the right time: make the most of opportunities to present contextually relevant information to users.

Google Maps is context aware, providing notifications to users mid-route to help them understand how their chosen route, and their expected arrival time, might change.
1.4.1 – Ask Users at the Start
If the AI system has no knowledge of user preferences, it’s hard to make good suggestions or matches. Ask users about their preferences at the start.

As part of product onboarding, Workday’s Career Hub asks users to input skills that will drive their experience later on in the process, creating a richer, more relevant set of career matches, learning recommendations, and mentor suggestions.
1.4.2 – Use Data External to the User
Use external data sources to create diverse recommendations (e.g. most popular, editor’s picks, or based on your location), to complement matches, or when user preferences are not known.

Workday’s Learning product uses data external to the user, like most popular courses and courses popular in their location, to surface suggestions. This helps avoid a “cold start” when users first begin to use the product.
1.4.3 – Consider a Collaborative Filtering Approach
User data from across the Workday platform helps us create predictions that are specific to the user. In addition, we can also use information gleaned from many other users who have already used the product or service. This allows us to provide more reliable personalised recommendations, suggestions, and matches in situations where we have less user-specific data available, such as at the start of an interaction.

This example from Amazon shows how a collaborative filtering approach can be taken. A user purchasing 'Return of the Jedi' may also want to purchase other films from the series, based on other users having done so and the simple understanding that the films are part of a series.
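The core idea can be sketched as item-to-item co-occurrence counting: items that frequently appear in the same baskets as the current item are recommended first. This is a toy illustration only; the order data and function names are ours, not Amazon’s or Workday’s actual system.

```python
from collections import defaultdict

def co_purchase_counts(orders):
    """Count how often each pair of items appears in the same order."""
    counts = defaultdict(int)
    for order in orders:
        items = sorted(set(order))
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                counts[(a, b)] += 1
    return counts

def recommend(item, orders, top_n=3):
    """Rank items most often bought together with `item`."""
    counts = co_purchase_counts(orders)
    scores = defaultdict(int)
    for (a, b), n in counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [i for i, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:top_n]

# Hypothetical purchase histories from other users
orders = [
    ["Return of the Jedi", "A New Hope", "The Empire Strikes Back"],
    ["Return of the Jedi", "A New Hope"],
    ["A New Hope", "The Empire Strikes Back"],
]

recommend("Return of the Jedi", orders)
# returns ['A New Hope', 'The Empire Strikes Back']
```

Because the scores come from other users’ behaviour rather than the current user’s, this approach works even when we have little user-specific data, which is exactly the cold-start situation the guideline describes.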

2 – Ensure Users Feel in Control
When to use? During interaction.
Why: Providing users with ways to clearly understand, interact with, intervene or reverse AI outputs ensures users are ultimately the ones in control. AI is more empowering when it works with the user, not for the user.
2.1 – Support User Feedback
Help the user correct the AI system or adjust, reverse, and edit system suggestions to build user confidence and train the system to improve results.
2.2 – Explain Results
Explanations in the course of a user flow or during regular interaction can help users understand the system, identify issues and intervene as needed. Provide appropriate explanations of why the AI system behaved as it did through a consistent interface. Make sure explanations support different types of users, who will need different levels of detail, as well as different formats and fidelities of explanation.
2.3 – Communicate level of confidence
Users may not immediately understand that AI system outputs (decisions, suggestions, recommendations) are based on probability, so we need to communicate our confidence in the results we return to users. When inputs are clear and the answer is certain, communicate this to the user and, conversely, when the system is less sure, make sure to present these results appropriately.
2.4 – Avoid Errors Getting In The Way
AI systems will make mistakes, or be partially correct. To make it easier for users to get back on track, suggest quick fixes or ensure that they can easily edit the system’s output. Give users a simple way of dismissing or ignoring undesired AI system services, especially when the AI system is unsure of the user’s intent and what further actions to take.
Ensure Users Feel in Control - examples
2.1.1 – Support Feedback on System Outputs
Gathering feedback on the overall performance of the system can help to assess the accuracy of user-facing AI and help improve it for users over time. Giving feedback can also help build confidence with users.

Google Translate supports feedback on system outputs by offering a feedback mechanism. This not only ensures that users who spot inappropriate or inaccurate translations are heard, it also helps train the underlying translation functionality to become more accurate.
2.1.2 – Support Feedback on Selected Outputs
Provide specific feedback mechanisms that help report anything inappropriate. Allowing users to give feedback on specific outputs like highlighted text also allows users to note when particular objects on the page are unhelpful or don’t match their social or cultural norms.

Microsoft Word's Rewrite feature supports user feedback on sentence-level text suggestions, in the context of the generated text selected or highlighted by the user.
2.1.3 – Let Users Know How Their Feedback Might Be Used
Make sure users know how their feedback can impact the model and their own future experience (a more personalised experience, better model performance, giving the user a say).

Google Translate tells users how their feedback is going to be used (to improve translation quality) and that other users of Google Translate may see the user's anonymised feedback. Establishing a connection between user feedback and its impact on the model’s performance can be an important step in users understanding that they are the ones in control.
2.2.1 – Tell Users how Outputs are Derived
Notify the user about which inputs are used to derive predictions, recommendations, and other AI outputs. Establishing an accurate mental model of system capability with users can aid in product adoption.

In this example from the sleep tracking app Sleep Cycle, a notification explains how an AI-generated personal sleep quality score is assessed, establishing trust and transparency with users.
2.2.2 – Make System Outputs Editable
Making system outputs editable ensures the user feels in control. Systems that provide support for user intervention can help users identify an approach to solving a problem the model may not be aware of.

Navigation apps suggest multiple routes with different benefits, allowing the user to quickly edit their route if they’re aware of something the model may not be (like roadworks). Sharing this information with the user supports their understanding of the results and gives them confidence that they are ultimately in control.
2.2.3 – Use a Consistent Interface for Explanations
As we begin to build user trust in Workday’s AI, it is vital that we present a consistent UI to explain predictions, recommendations, and other AI output for users in order to reduce the amount of learning that users must do. A disjointed, inconsistent UI makes users re-learn the system and erodes trust.

In this example from Workday’s AI Job Requisitions task, explainability is handled with a consistent disclaimer that makes the user aware of the data sources used to generate the text above. This component is re-used across other similar applications throughout Workday to build familiarity.
2.2.4 – Use the Right Fidelity
Workday supports a broad range of users, so make sure to consider what kind of format and fidelity of explanations are appropriate for different users. Some users may need formal explanations (especially finance and HR professionals); others may need simple, casual explanations.

In this advanced example for professional users from Workday’s Demand Forecasting feature (part of the Time and Scheduling hub), match scores are added to worker recommendations. The score uses worker skills, preferences, and cost implications to drive an overall match score. This example uses a relatively detailed fidelity to augment a manager’s decision making process as they create forecasts and schedule time with workers to meet demand.
2.3.1 – Present Results with Appropriate Confidence
If AI system outputs vary in confidence, make sure to let the user know. Use straightforward, easy-to-decipher confidence ratings.

The accuracy score in this example pairs a colour indicator with a percentage to share the accuracy score of 90%, with additional text providing more context.
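One common way to pair a colour indicator with a percentage is to bucket the raw score into a small number of labelled bands. A minimal sketch; the thresholds, labels, and colour tokens here are illustrative assumptions, not Workday’s actual values.

```python
def confidence_presentation(score):
    """Map a 0-100 confidence score to a (label, colour token) pair.

    Thresholds are illustrative; tune them per feature based on how
    costly a wrong result is for the user.
    """
    if score >= 85:
        return "High confidence", "green"
    if score >= 60:
        return "Moderate confidence", "amber"
    return "Low confidence, review carefully", "red"

confidence_presentation(90)
# returns ('High confidence', 'green')
```

Bucketing into a few bands keeps the rating easy to decipher: users rarely benefit from distinguishing 87% from 91%, but they do benefit from knowing whether a result needs careful review.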
2.4.1 – Suggest Possible Fixes
If the system is able to, suggest a simple resolution for the user to follow and allow them to select from a small list of fixes.

Google Maps suggests possible fixes to a user’s route when it spots a delay on the originally suggested route, allowing them to quickly re-route or decline. Suggesting possible fixes in this way helps stop errors getting in the way for users.
2.4.2 – Make it Easy to Dismiss Unwanted Outputs
AI systems can make mistakes, so ensure users can quickly dismiss or permanently turn off unwanted outputs or notifications.

Google Drive makes it easy for users to dismiss incorrectly suggested “priority” files. A user can select “Not a helpful suggestion” to remove the file from the user's Priority page.
2.4.3 – Support Users Experimenting with Outputs
If possible, systems should provide support for user interaction with outputs. Allowing users to play with or experiment with system outputs as variables helps them ask “what if?” This allows users to see how they can change an outcome quickly.

In this example, a mortgage loan calculator from a bank allows users to enter a selection of variables (estimated property price, required mortgage amount, and duration of the loan) to allow the user to quickly see monthly repayment amounts. Allowing users to play with “what if” scenarios in this way can help them to quickly understand complexity and make them feel in control.
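The calculation behind such a “what if?” calculator is the standard amortisation formula for a fixed-rate repayment loan. A minimal sketch, assuming a fixed annual rate compounded monthly; the function name and rate are ours, not any particular bank’s.

```python
def monthly_repayment(principal, annual_rate, years):
    """Monthly payment for a fixed-rate repayment mortgage.

    Uses the standard amortisation formula:
    M = P * r(1+r)^n / ((1+r)^n - 1), with monthly rate r and n payments.
    """
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of monthly payments
    if r == 0:
        return principal / n  # interest-free edge case
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# "What if?" exploration: same loan, different durations
monthly_repayment(200_000, 0.06, 30)   # about 1199.10 per month
monthly_repayment(200_000, 0.06, 20)   # about 1432.86 per month
```

Because each variable (price, amount, duration) feeds one transparent formula, users can change an input and immediately see the outcome shift, which is precisely what makes this kind of experimentation feel empowering rather than opaque.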

3 – Build Trust With Users Over Time
When to use? Over time. Customise the user’s experience by learning from their actions over time.
Why: Trust is learned. Allowing time for users to learn how the system works, whilst the system improves alongside them, helps build greater confidence and trust. As our AI gets to know the user, we can automate more and need to ask for permission less.
3.1 – Remember Recent Interactions
AI can maintain short-term memory and retain deep context in a way that supercharges the experience and accelerates the speed and quality of workflow. Remember recent interactions and give the user efficient references to that memory.
3.2 – Learn From User Behaviour
Help create a personalised experience by learning and understanding user behaviour over time. By analysing their interactions, preferences, and patterns we can help drive improvements to their experience.
3.3 – Notify Users of Changes
When making major model updates that may change user experience, inform the user of how this might affect outcomes, recommendations, or system behaviour. Making the user aware of changes can manage their expectations and build trust.
3.4 – Create Human-Centered Transparency & Intelligibility
Explainability enables users to better understand the AI system’s reasoning, transparently. Both decision makers and those affected by decisions need to be accounted for. Decision makers will seek explanations that build their trust and confidence in the system’s recommendations; however, decision makers and users affected by decisions may have very different expectations when it comes to explaining an AI system’s output.
| Important | Types of Explainability |
|---|---|
| Different users will have higher or lower thresholds for complex explanations. Users affected by decisions need the reasons for their outcomes communicated in a simple and direct way; they need to be able to correct inaccuracies and understand how altering their interaction might give them a different result. | 1. Global or system-level explainability. 2. Explainability for users making decisions using AI. 3. Explainability for users affected by decisions. |
Build Trust With Users Over Time - examples
3.1.1 – Keep Recent User Interactions Visible and Easy to Access
If a user is carrying out a string of related tasks in an assistive experience, keeping recent actions nearby can help them retain deep context.

In this example from Workday’s AI Contract Analysis tool, the side panel remembers recent user interactions, keeping them visible as users work, and timestamps them to help users keep context.
3.1.2 – Reveal Recent Interactions at the Right Time
Build upon user inputs to reveal timely functionality, creating shortcuts within the natural flow of work via familiar controls. Be careful not to distract users attending to urgent or critical tasks.

Microsoft Outlook reveals the user's recent interactions at the right time. As they begin to add recipients to an email, the system suggests recipients that it can autofill.
3.1.3 – Surface Contextually-Relevant Suggestions
Aid user productivity with contextual assistance tied to recently performed interactions.

Google’s Duet AI feature for Gmail surfaces contextually-relevant suggestions to the user by summarising an email and then suggesting helpful next steps, like making a list of tasks to complete.
3.2.1 – Surface Suggestions and Hidden Connections
Help create references to ideas or documents held across multiple products and information sources. Preempt user needs by helping users discover useful functionality in unexpected ways.

Siri surfaces helpful suggestions that might otherwise be hidden by complexity, like reminding the user to call their mother based on a birthday date found in their contacts.
3.2.2 – Personalise Recommendations
Leverage previous user behaviour to help the system learn what to recommend.

Workday’s Learning product leverages previous behaviour (such as identifying skills the user would like to develop) to recommend Learning courses that they might not have seen otherwise.
3.2.3 – Help Automate Tasks
Consider what you could automate for the user based on what we already know about them, people who usually perform this task, or the job the user is doing.

Workday’s Change Job feature helps automate a task by suggesting recommended responses to fields as the user fills in the form.
3.3.1 – Make Sure Users Know About Changes
Changes to the models that drive user-facing AI can affect user experience. Make sure users are notified about any major changes or updates that may affect their experience.

In a notification, sleep tracking app Sleep Cycle informs the user about how a new feature, respiratory rate tracking, will affect their sleep cycle report.
3.3.2 – Be Specific About What’s Changed
Be clear what functionality has been changed and how those changes might affect system outputs or performance to allow the user time to adjust their expectations and build on trust.

Explainer text regarding the adaptive battery feature on Google’s mobile devices is shown to the user in this example, making clear how system outputs (your phone learning how to use apps over time) and performance (extended battery life and the possibility of delayed notifications) may be affected.
3.4.1 – Simplify for System or Global Explainability
Regulatory bodies play a vital role in ensuring decisions are made in a safe and equitable way. Regulations like the EU’s General Data Protection Regulation (GDPR) require high-level system information about AI training data.
Regulators checking the system to learn where it is likely to fail may not be able to consume a lot of complexity. They may be satisfied by seeing that the overall process and training data are free from bias and negative societal impact.
3.4.2 – Explainability for Decision Makers and Analysts
User context is crucial, especially for analysts and decision-makers. Provide clear explanations of conclusions, detailing the data behind decisions to enhance transparency, accountability, fairness, trust, and understanding.
3.4.3 – Explainability for Users Affected by Decisions
Users affected by decisions informed by AI need clear and concise explanations for decisions that affect them.
Communicate these reasons in a simple, fair, and direct manner. Ensure users can appeal decisions or correct inaccuracies. Providing a fair resolution for situations where unforeseen negative impacts occur is crucial for building trust. Special attention should be paid to vulnerable persons or groups.
3.4.4 – Provide Context-Specific Explanations
Explanations in context help users better understand and trust a single value, output, or action while supporting them to learn how the system behaves. Make sure to appropriately group the explanation close to the output being explained.

This example, from a Workday feature that allows users to generate a job description with the help of AI, informs the user about which sources are used to create the suggested job description.
3.4.5 – Explain how the System Works
Users may need a global explanation of how AI systems work to help them characterise how the system performs and how it makes decisions in general. Keep these explanations simple and avoid generalising or mysterious language.

3.4.6 – Reveal System Settings
Giving access to menus or settings that reveal system functionality can help the user understand what the system can do (see also 1.1.2).

Workday’s Time and Scheduling hub presents the user with schedule settings to help them understand the different parameters by which a schedule is created. Showing the user settings in this way helps them create a more accurate mental model of product functionality.
3.4.7 – Show Examples of System Output
Showing a set of system outputs helps users understand what the system can do (see also 1.1.3).

Navigation apps suggest multiple routes with different benefits. Sharing sample outputs with the user is a simple way to give users a clear understanding of a wide array of functionality before they begin.
Can't Find What You Need?
Check out our FAQ section, which may help you find the information you're looking for. For further information, contact the #ask-canvas-design or #ask-canvas-kit channels on Slack.