Legal Notice and Transparency
Design requirements for compliance with the EU AI Act.
Published
March 2026 by Owen Derby (with contributions from Gavin Renkin, Lindsay Porter, and Marshall Dear)
Overview
Transparency is a core principle for both user experience and legal compliance, especially under the EU AI Act, as we further integrate AI into the Workday platform. To maintain user trust and uphold Workday’s foundational Responsible AI Principles, all UX designers, product managers, and developers must follow the design requirements and guidelines detailed in this document. These mandates ensure that all of our AI features, whether classified as ‘AI Systems’ or producing ‘AI Generated Content,’ are fully compliant.
Key Compliance Dates
Compliance is an ongoing process with critical milestones:
- August 1, 2024: EU AI Act enters into force.
- February 2, 2025: AI Literacy programs and Prohibited AI restrictions.
- August 2, 2025: General Purpose AI (GPAI) model requirements.
- August 2, 2026: High-Risk AI and Transparency Risk AI requirements (Critical deadline for most Workday features currently in design).
Penalties for non-compliance include fines up to 3% of global gross revenue, regulatory investigations, and potential bans from operating in the EU market.
Two Key Legal Concepts
There are two situations in which we need to apply a legal notice.
1. AI System
This applies to any apps, tasks, pages, or processes that use data inference to generate outputs such as predictions, recommendations, or decisions, with the goal of achieving specific objectives. It covers systems that extend beyond simple data processing to incorporate learning, reasoning, or outcome modeling.
- Examples: Journal Insights, Cash App Insights, OCR for receipts, Semantic Search, HiredScore.
2. AI Generated Content
New content (text, audio, video, images, code) created by Generative AI algorithms based on training data and user prompts. Providing notice for these categories helps mitigate risks associated with impersonation and deception, and helps inform individuals how they should interpret AI outputs.
- Examples: Generated Job Descriptions, Collection Letters, Developer Co-pilot, Generated Worksheet formulas.
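The distinction between the two concepts can be sketched as a small helper that maps a feature's classification to the notice it requires. The names below (`FeatureClassification`, `requiredNotice`) are illustrative only and are not part of any Workday or Canvas API:

```typescript
// Hypothetical helper illustrating the two legal concepts above.
// Not a Workday API; names and shapes are assumptions for illustration.

type FeatureClassification = "ai-system" | "ai-generated-content";

type NoticeRequirement = {
  label: string;  // which notice/disclaimer applies
  timing: string; // when it must be surfaced
};

function requiredNotice(kind: FeatureClassification): NoticeRequirement {
  switch (kind) {
    case "ai-system":
      // Apps, tasks, pages, or processes that use data inference to
      // generate outputs (e.g., Journal Insights, Semantic Search, HiredScore).
      return {
        label: "AI System Interaction Notice",
        timing: "at the latest at the time of the first interaction",
      };
    case "ai-generated-content":
      // New text, audio, video, images, or code created by Generative AI
      // (e.g., Generated Job Descriptions, Developer Co-pilot).
      return {
        label: "AI Generated Content Disclaimer",
        timing: "attached whenever synthetic output is shown",
      };
  }
}
```

A single feature can fall into both categories (e.g., an AI-powered task that also renders generated text), in which case both notices apply.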
Guidelines for AI System Notices
Applicable to all risk tiers.
The EU AI Act requires that users are informed “at the latest at the time of the first interaction” that they are interacting with an AI system.
Interaction Notice Options
| Type and Purpose | Component and Label | Example |
|---|---|---|
| **AI System Interaction Notice.** Clear, distinguishable, and provided at the latest during the first interaction that a user has with an AI System (AI-powered feature). | Canvas Tooltip | ![]() |
| **Blanket Notice.** For use on pages with multiple AI features. Discovery work will be needed before implementation; not recommended for now. | Example: Footer | ![]() |
Best Practices for Placement
- Early Exposure: Add notice at the very beginning of a user’s flow.
- Contextual Relevance: Visually anchor the notice directly to the interaction point.
- The “First Interaction” Rule: Notice must be provided before or during the first point of engagement or exposure.
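One way a front end can enforce the "first interaction" rule is to track, per user and per feature, whether the notice has already been surfaced, and show it before the first engagement otherwise. The sketch below is illustrative only (the class and method names are hypothetical, not an existing Workday API); a real implementation would persist this state server-side and render the notice with the Canvas components above:

```typescript
// Illustrative sketch of "first interaction" gating.
// Hypothetical names; not a Workday or Canvas API.

class InteractionNoticeTracker {
  // Keyed by "<userId>:<featureId>"; a real system would persist this
  // rather than keep it in memory.
  private seen = new Set<string>();

  // Returns true if the AI System notice must be shown now, i.e. the
  // user has not yet been notified for this feature. Marks the notice
  // as shown, so it appears at or before the first interaction and is
  // not repeated on every subsequent use.
  mustShowNotice(userId: string, featureId: string): boolean {
    const key = `${userId}:${featureId}`;
    if (this.seen.has(key)) return false;
    this.seen.add(key);
    return true;
  }
}
```

Calling `mustShowNotice` when the AI entry point renders, rather than after the user acts, keeps the notice "before or during the first point of engagement."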

Guidelines for AI Generated Content Disclaimers
Applicable to synthetic outputs.
To be compliant, we must ensure that synthetic outputs are detectable as artificially generated. This builds trust and ensures users can distinguish between human- and machine-authored content.
Disclaimer Options
| Type and Purpose | Component and Label | Example |
|---|---|---|
| **AI Generated Content Disclaimer in a Tooltip.** Attach a visual anchor to new content (text, audio, video, images, code) created by Generative AI. | Canvas Tooltip | ![]() |
| **AI Generated Content Disclaimer.** To label an entire window or preview of new content (text, audio, video, images, code) created by Generative AI. | Canvas Modal | ![]() |
Chat Panels and Modals
- Modals: Place disclaimer text at the bottom left of the modal footer.
- Chat Panels: Anchor the disclaimer at the bottom of the conversational thread (e.g., Sana, AIX).

Cards
- Cards Framework: Use the standard Sparkle info icon in the top right of cards, or in a header section, to trigger a Canvas Tooltip.
Mobile Card Framework

Please refer to our Canvas Mobile Card Framework (alpha).
Desktop Cards

Please refer to Cards for Desktop Hubs.
Examples in Practice
Adoption Agent
- Notice Placement: A single “blanket notice” is anchored above the table header.
- Design Rationale: This labels the entire window of content rather than individual rows, preventing visual noise and repetitive labeling.
- Transparency: A Quicktip is used in this case because it can house more content than a tooltip. It clarifies exactly which elements on the page are AI-derived (e.g., “Generated Impact Analysis” and “Feature Summary”).
- Call to Action: Explicitly instructs users to “Review before use.”

Open Questions and Implementation
As we move into Phase 1, we are addressing:
- Compliance Sufficiency: Is a blanket approach sufficient, or should notice always be anchored to active engagement?
- Ownership: UX Design will maintain guidance and Canvas patterns, while collaboration with UI Platform is required for implementation options.
Further Reading and Support
- RAI Protocol: Developer Protocol - Notice
- RAI Protocol: Developer Protocol - Interpretability
- Explainability Framework: Workday’s UX Explainability Framework
Terminology Note: At Workday, “Interpretability” and “Explainability” are currently used interchangeably as we work toward a shared language.
Contacts
- UX Author: Owen Derby (Principal UX Designer)
- Responsible AI: Zachary Roberts (Senior Program Manager)
- Data and Privacy: Jason Hammon (Principal Program Manager)
- Mobile Cards: Gavin Renkin
- Hubs: Lindsay Porter
- Slack: #ai-design | Email: product.tech.legal@workday.com
Can't Find What You Need?
Check out our FAQ section, which may help you find the information you're looking for. For further information, contact the #ask-canvas-design or #ask-canvas-kit channels on Slack.



