AI Act Transparency: Article 50 Obligations, Deepfake Labeling, and the Code of Practice
TL;DR — Article 50 transparency essentials
- Article 50 applies to every chatbot, every generative AI tool, and every deepfake system — not just high-risk AI. If your product generates content or interacts with people, you are in scope.
- The four transparency obligations cover: AI interaction disclosure, machine-readable content marking, emotion recognition / biometric categorisation notice, and deepfake / AI-generated content labeling.
- The deadline is 2 August 2026. Transparency obligations become enforceable alongside high-risk system obligations.
- Non-compliance triggers fines of up to EUR 15 million or 3% of global annual turnover — the same tier as high-risk violations. See the penalties guide.
- The Code of Practice on AI content marking and labeling (draft published March 2026, final expected June 2026) sets the practical benchmark for compliance. It mandates C2PA content credentials, invisible watermarking, and metadata standards.
- Obligations are split between providers (who build AI systems) and deployers (who use them). Both have distinct responsibilities. Misidentifying your role is a compliance failure — see our provider vs deployer guide.
- GPAI model providers must enable downstream compliance by making their outputs compatible with content marking and labeling requirements.
Most organisations preparing for the EU AI Act focus on high-risk classification, conformity assessment, and risk management. That makes sense — the high-risk framework is the most complex part of the regulation. But there is a parallel set of obligations that applies far more broadly, to systems that most companies would never consider "high-risk": chatbots, image generators, text-to-speech tools, video synthesis platforms, and any AI system that interacts directly with people.
These are the transparency obligations under Article 50. They apply regardless of risk classification. A customer support chatbot that answers billing questions is in scope. A marketing tool that generates product photos is in scope. A content platform that publishes AI-written summaries is in scope. There is no minimum size threshold, no exemption for internal tools whose outputs reach external audiences, and no grace period beyond the 2 August 2026 enforcement date.
This guide covers what Article 50 requires, who must comply, how the Code of Practice translates legal text into technical standards, and the practical steps to get compliant before the deadline. For a broader introduction to the regulation, see our overview of the EU AI Act.
The four transparency obligations
Article 50 establishes four distinct transparency obligations. Each targets a different category of AI system and a different type of disclosure. The common thread is that people have a right to know when they are interacting with AI, and when content they encounter was generated or manipulated by AI.
The obligations apply to systems that fall under Article 50 regardless of whether they are also classified as high-risk under Annex III. A high-risk system that also functions as a chatbot must comply with both the high-risk framework and the transparency obligations.
1. AI interaction disclosure (chatbots and virtual assistants)
Article 50(1) requires that providers ensure their AI system is designed so that natural persons are informed they are interacting with an AI system — unless this is obvious from the circumstances and context of use. The disclosure must occur before or at the point of first interaction.
This obligation covers:
- Customer support chatbots (text-based, voice-based, or multimodal)
- Virtual assistants embedded in products or services
- AI-powered phone agents that handle inbound or outbound calls
- Interactive AI systems on websites, apps, or messaging platforms
The disclosure must be clear, timely, and intelligible. A buried terms-of-service clause does not satisfy Article 50(1). The standard is that an average user encountering the system should understand, before substantive interaction begins, that they are communicating with AI rather than a human.
The "obvious from circumstances" exception is interpreted narrowly. A robotic-sounding voice assistant on a smart speaker may qualify. A sophisticated chatbot that mimics human conversational patterns almost certainly does not. When in doubt, disclose.
Real-world example: A European bank deploys an AI chatbot on its website to handle account inquiries. The chatbot uses natural language and responds in first person ("I can help you with that"). Without a clear disclosure — such as a persistent banner stating "You are chatting with an AI assistant" — the bank violates Article 50(1). The chatbot's conversational fluency makes the AI nature non-obvious.
2. Machine-readable content marking (watermarks, metadata, C2PA)
Article 50(2) requires that providers of AI systems that generate synthetic audio, image, video, or text content ensure that the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated.
This is not about visible labels (that obligation falls under Article 50(4)). Article 50(2) is about technical marking embedded in the content itself — invisible to the end user but readable by automated detection tools, platforms, and downstream systems.
The marking must be:
- Effective: Resistant to trivial removal (e.g., simple re-encoding, screenshots, or format conversion should not strip the marking entirely)
- Interoperable: Compatible with broadly adopted standards
- Proportionate: The technical approach should be appropriate to the content type and distribution context
The Code of Practice specifies C2PA (Coalition for Content Provenance and Authenticity) content credentials as the primary standard for images, video, and audio. For text, metadata-based approaches are the expected pathway.
3. Emotion recognition and biometric categorisation notice
Article 50(3) requires that deployers of emotion recognition systems and biometric categorisation systems inform the natural persons exposed to them. The notice must cover the operation of the system and the categories of personal data being processed.
This obligation intersects with Article 5 prohibited practices. Emotion recognition in workplaces and educational institutions is banned outright. The Article 50(3) disclosure obligation applies to the narrow set of emotion recognition and biometric categorisation uses that remain lawful — primarily medical/safety applications and certain security contexts.
Deployers must also comply with GDPR data protection obligations, which means the transparency notice under Article 50(3) should be coordinated with the GDPR privacy notice. See our AI Act vs GDPR comparison for how these frameworks interact.
4. Deepfake and AI-generated text labeling
Article 50(4) requires that deployers of AI systems that generate or manipulate content constituting a deepfake disclose that the content has been artificially generated or manipulated. The same applies to AI-generated text published to inform the public on matters of public interest.
This is the visible, human-facing label — distinct from the machine-readable marking in Article 50(2). The label must be:
- Clear and distinguishable to the average person
- Placed in a manner that is accessible at the point of consumption (not hidden behind a click or buried in metadata)
- Effective for the medium (visual label for images/video, audible disclosure for audio, text notice for written content)
The definition of deepfake under Article 3 is broad: AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful. This captures:
- AI-generated product photos that depict realistic scenes
- Synthetic voice clones used in marketing or media
- AI-manipulated video used in advertising, journalism, or entertainment
- AI-generated profile images used in user-facing contexts
For AI-generated text, the labeling obligation applies specifically when the text is published with the purpose of informing the public on matters of public interest — unless the content has undergone a process of human editorial review or control and a natural or legal person holds editorial responsibility. This exception is significant for media companies (see Exceptions and edge cases).
Who must comply: providers vs deployers
Article 50 distributes obligations between providers and deployers. Understanding which obligations fall on which role is critical — especially because many organisations are both providers (of their own tools) and deployers (of third-party AI).
The provider's obligation is primarily design-level: build the system so that compliance is possible and straightforward. The deployer's obligation is operational: ensure transparency actually reaches the people affected.
For a detailed breakdown of all provider and deployer obligations across the AI Act, see the provider vs deployer comparison. Role definitions are set out in Article 3, and deployer-specific obligations are detailed in Article 26.
A critical consequence: if a provider fails to build disclosure or marking features into the system, the deployer may be unable to comply. In that case, the deployer should either demand compliance features from the provider, switch providers, or build supplementary disclosure mechanisms — because the deployer's obligation to disclose is not excused by the provider's failure to enable it.
The Code of Practice on marking and labeling
The EU Commission published its draft Code of Practice on AI content marking and labeling in March 2026, with the final version expected by June 2026. While the Code of Practice is technically voluntary, it functions as the practical compliance benchmark — regulators will use it to assess whether organisations have met their Article 50(2) obligations in good faith.
The Code of Practice addresses three technical areas:
Content credentials (C2PA)
The Code endorses the C2PA (Coalition for Content Provenance and Authenticity) standard as the primary interoperable framework for content provenance. C2PA content credentials are a cryptographically signed manifest embedded in or associated with a media file, recording:
- The origin of the content (which tool or system generated it)
- The actions performed (generation, editing, composition)
- The identity of the actor (the provider or deployer, not necessarily the end user)
- A tamper-evident hash linking the manifest to the content
C2PA credentials are already supported by Adobe, Microsoft, Google, OpenAI, and major camera manufacturers. The Code of Practice makes C2PA the de facto EU standard for images, video, and audio.
Invisible watermarking
For content that may be stripped of metadata (e.g., social media uploads, screenshots, format conversions), the Code of Practice recommends invisible watermarking as a secondary layer. Invisible watermarks are embedded in the signal domain of the content and survive common transformations.
The Code does not mandate a specific watermarking algorithm, but requires that the approach be (a toy illustration of signal-domain embedding follows the list):
- Robust against common transformations (compression, cropping, re-encoding, format conversion)
- Imperceptible to human viewers/listeners
- Detectable by standardised detection tools
- Carrying a minimum payload sufficient to identify AI origin
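To make the signal-domain idea concrete, here is a toy spread-spectrum watermark: a key-derived pseudorandom pattern is added to the pixel values and later detected by correlation. This is a sketch of the principle only; production systems embed in a transform domain with error-corrected payloads, and all names here are our own, not from any standard library.

```python
import numpy as np

def embed(img: np.ndarray, key: int, strength: float = 5.0) -> np.ndarray:
    """Add a key-derived pseudorandom +/-1 pattern to the pixel values."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=img.shape)
    return np.clip(img + strength * pattern, 0, 255)

def detect(img: np.ndarray, key: int, threshold: float = 2.5) -> bool:
    """Correlate against the same pattern: marked images score near
    `strength`, unmarked images score near zero."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=img.shape)
    score = float(np.mean((img - img.mean()) * pattern))
    return score > threshold

original = np.random.default_rng(0).uniform(0, 255, size=(128, 128))
marked = embed(original, key=42)
print(detect(marked, key=42), detect(original, key=42))  # True False
```

Because the pattern is spread across every pixel at low amplitude, it is imperceptible yet statistically detectable, and it degrades gracefully rather than vanishing under compression or cropping.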
Metadata requirements for text
Text content presents unique challenges because there is no visual or auditory signal to embed a watermark in. The Code of Practice takes a metadata-based approach for text (a structured-data sketch follows the list):
- AI-generated text should carry provenance metadata in the document, API response, or publishing infrastructure
- For web-published text, structured metadata (e.g., schema.org annotations, HTTP headers, or RSS feed tags) should indicate AI generation
- The Code acknowledges that statistical text watermarking (modifying token probabilities to embed a detectable signal) is still experimental and does not require it — but encourages research and future adoption
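As a sketch of what page-level structured data could look like, the snippet below emits a schema.org block flagging AI generation. schema.org has no settled "AI-generated" property yet, so naming the generating system as creator and reusing the IPTC digitalSourceType vocabulary are assumptions to revisit against the final Code of Practice.

```python
import json

page_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example AI-written summary",
    # Assumed conventions, not settled schema.org practice:
    "creator": {"@type": "SoftwareApplication", "name": "ExampleSummarizer/2.1"},
    "digitalSourceType": (
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    ),
}
print(f'<script type="application/ld+json">{json.dumps(page_metadata)}</script>')
```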
Technical implementation guide
Translating Article 50 and the Code of Practice into production systems requires concrete engineering work. Below is a practical breakdown by obligation.
Implementing chatbot disclosure (Article 50(1))
The disclosure must be unambiguous and appear before substantive interaction. Recommended patterns, with a code sketch after the list:
- Web chat: A persistent banner or first message from the system stating: "You are chatting with an AI assistant." The banner should remain visible throughout the conversation, not just in the first message.
- Voice assistants / AI phone agents: An audible disclosure at the start of the interaction: "This call is handled by an AI system." Repeat if the conversation is transferred or the context changes.
- Messaging platforms (WhatsApp, Slack, Teams integrations): Set the bot's display name and profile to clearly indicate AI nature (e.g., "Acme AI Assistant"). Add a first-message disclosure.
- Email: If AI generates or substantially drafts email responses, include a visible notice in the email body or signature: "This response was generated by an AI system."
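A minimal sketch of the web-chat pattern follows, assuming a generic message-list backend; llm_reply is a hypothetical stand-in for whatever model call you actually use. The persistent banner is a front-end concern, but the backend can guarantee that the first message a user sees is the disclosure.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant."

def llm_reply(history: list[dict]) -> str:
    return "I can help with your billing question."  # placeholder for the model call

def start_session() -> list[dict]:
    # The disclosure is the first message in every session, so the user is
    # informed before any substantive interaction (Article 50(1)).
    return [{"role": "assistant", "content": AI_DISCLOSURE}]

def reply(history: list[dict], user_msg: str) -> list[dict]:
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": llm_reply(history)})
    return history

session = reply(start_session(), "Why was I charged twice?")
```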
Implementing machine-readable content marking (Article 50(2))
For images and video:
- Integrate a C2PA SDK (e.g., the open-source c2pa-rs library or Adobe's Content Authenticity SDK) into your generation pipeline.
- At generation time, create a C2PA manifest specifying the AI tool, generation action, and provider identity (see the manifest sketch after this list).
- Embed the manifest in the output file (JPEG, PNG, WebP, MP4, WebM).
- As a secondary layer, apply an invisible watermark using a robust algorithm (e.g., spectral-domain embedding for images, spread-spectrum embedding for audio/video).
- Verify that the manifest and watermark survive your distribution pipeline (CDN processing, thumbnail generation, format conversion).
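As a sketch of the manifest step, the structure below follows the documented C2PA assertion format for declaring fully AI-generated media. Tool and version strings are illustrative; signing and embedding are then handled by the SDK, not assembled by hand.

```python
import json

def build_ai_manifest(tool: str, version: str) -> dict:
    """Assemble a C2PA manifest declaring fully AI-generated media,
    using the documented c2pa.actions assertion structure."""
    return {
        "claim_generator": f"{tool}/{version}",  # e.g. "ExampleImageGen/1.0"
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            # "created" + trainedAlgorithmicMedia is the
                            # standard declaration for AI-generated content
                            "action": "c2pa.created",
                            "digitalSourceType": (
                                "http://cv.iptc.org/newscodes/"
                                "digitalsourcetype/trainedAlgorithmicMedia"
                            ),
                        }
                    ]
                },
            }
        ],
    }

print(json.dumps(build_ai_manifest("ExampleImageGen", "1.0"), indent=2))
```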
For audio:
- Apply C2PA content credentials to the audio file.
- Embed an invisible audio watermark in the spectral domain.
- Test robustness against common transformations: MP3 compression, sample rate conversion, and partial clipping.
For text:
- Tag API responses with provenance metadata (e.g., an "X-AI-Generated: true" HTTP header or a "provenance" field in the JSON response); a minimal sketch follows the list.
- For published web content, add schema.org structured data or a meta tag indicating AI generation.
- In CMS workflows, store AI generation metadata alongside the content record so downstream systems can access provenance information.
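A minimal sketch of the API-response pattern using Flask is below; the header and field names mirror the illustrative ones above and are conventions to agree with your consumers, not names mandated by the Code.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def generate_summary(topic: str) -> str:
    return f"(model output about {topic})"  # placeholder for the model call

@app.post("/summaries")
def summaries():
    resp = jsonify({
        "text": generate_summary("example"),
        # Body-level provenance field for downstream systems
        "provenance": {"ai_generated": True, "generator": "ExampleLLM/1.0"},
    })
    # Header-level signal for intermediaries that never parse the body
    resp.headers["X-AI-Generated"] = "true"
    return resp
```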
Implementing deepfake and content labeling (Article 50(4))
The visible label must be accessible to the consumer of the content (an image-overlay sketch follows the list):
- Images: Overlay a visible label (e.g., "AI Generated" or "Created with AI") in a legible position. Alternatively, display the label in the UI adjacent to the image with a provenance icon linking to C2PA verification.
- Video: Display a persistent or recurring on-screen label during playback. For short-form video, a label in the first frame and at regular intervals.
- Audio: Provide an audible disclosure at the beginning and/or a visible label in the player interface.
- Text: Include a clear notice at the beginning or end of the content: "This article was generated by AI" or "This content was produced with the assistance of AI."
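For the image case, a minimal overlay sketch using Pillow is below; the wording, placement, and styling are assumptions to validate with your own legibility testing.

```python
from PIL import Image, ImageDraw

def label_image(path_in: str, path_out: str, text: str = "AI Generated") -> None:
    """Burn a visible label into the bottom-left corner, with a dark
    backing box so it stays legible on any background."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 30
    left, top, right, bottom = draw.textbbox((x, y), text)
    draw.rectangle((left - 6, top - 4, right + 6, bottom + 4), fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255))
    img.save(path_out)
```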
Exceptions and edge cases
Article 50 includes several exceptions that narrow the labeling and disclosure obligations in specific contexts.
Artistic, satirical, and creative works
Article 50(4) provides an exception for AI-generated content used for artistic, satirical, creative, or fictional purposes. In such cases, the deepfake labeling obligation is reduced: the disclosure need not be placed on or in the content itself, but must be disclosed "in an appropriate manner that does not hamper the display or enjoyment of the work."
In practice, this means a film using AI-generated visual effects or a satirical video using AI face-swapping may place the AI disclosure in the credits, metadata, or an accompanying description rather than as an on-screen label during playback. The exception does not eliminate the disclosure obligation — it relaxes the placement requirement.
The "obvious AI" exception
Article 50(1) exempts AI interaction disclosure where the AI nature is obvious from the circumstances and context of use to a reasonably well-informed, observant person. This exception is interpreted narrowly by the Commission's guidance.
Systems that likely qualify: simple rule-based phone menu systems ("Press 1 for billing"), clearly robotic voices with no human mimicry, or game NPCs in an obviously fictional context.
Systems that likely do not qualify: GPT-class chatbots with natural language fluency, AI voice agents trained on human voice clones, or any system where a reasonable person might believe they are interacting with a human.
Law enforcement exceptions
Article 50 provides limited exceptions for law enforcement and national security uses. Where disclosure would compromise an ongoing investigation, prevent the detection of criminal activity, or jeopardise national security, the transparency obligations may be deferred — but not permanently waived. Disclosure must occur "without undue delay" once the justification ceases.
Editorial review exception for AI-generated text
AI-generated text published to inform the public on matters of public interest must be labeled — unless the text has undergone a process of human editorial review or control and a natural or legal person holds editorial responsibility for the publication.
This exception is designed for media organisations. A news outlet that uses AI to draft articles but subjects every draft to editorial review, fact-checking, and human sign-off before publication is not required to label the output as AI-generated — provided the outlet holds editorial responsibility. The exception does not apply if the editorial review is superficial or automated.
Penalties for transparency violations
Violations of Article 50 transparency obligations fall under Tier 2 of the AI Act penalty structure, as set out in Article 99 (a calculation sketch follows the list):
- Up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher (for large organisations)
- Up to EUR 15 million or 3% of worldwide annual turnover, whichever is lower (for SMEs and startups — a proportionality adjustment)
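The "whichever is higher/lower" mechanics are easy to get backwards, so here is the calculation spelled out as a short sketch using the Tier 2 figures above:

```python
def tier2_cap_eur(turnover_eur: float, is_sme: bool = False) -> float:
    """Article 99 Tier 2 ceiling: EUR 15 million or 3% of worldwide
    annual turnover; the higher figure applies to large organisations,
    the lower to SMEs and startups."""
    fixed, proportional = 15_000_000.0, 0.03 * turnover_eur
    return min(fixed, proportional) if is_sme else max(fixed, proportional)

print(tier2_cap_eur(2_000_000_000))            # 60000000.0 (3% > EUR 15m)
print(tier2_cap_eur(20_000_000, is_sme=True))  # 600000.0   (3% < EUR 15m)
```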
Enforcement is handled by national market surveillance authorities in each EU Member State. The Commission's AI Office oversees GPAI-related obligations, but transparency enforcement at the deployment level is national.
Enforcement triggers include:
- Consumer complaints about undisclosed AI interactions or unlabeled deepfakes
- Proactive market surveillance (authorities testing chatbots, scanning platforms for unlabeled AI content)
- Platform reporting (hosting platforms flagging content that lacks required markings)
- Sector-specific regulators (financial, healthcare, or telecom regulators identifying transparency failures in their sectors)
The penalty level — identical to high-risk system violations — signals that the Commission treats transparency as a core obligation, not a secondary concern. Organisations that dismiss Article 50 as "just labeling" underestimate both the regulatory intent and the enforcement risk.
For the full penalty breakdown, including calculation methodology and GDPR comparison, see the EU AI Act penalties guide.
Real-world compliance scenarios
Scenario 1: SaaS company with a customer support chatbot
A B2B SaaS company deploys an AI chatbot on its website and in its mobile app to handle Tier 1 customer support inquiries (password resets, billing questions, feature explanations). The chatbot uses a fine-tuned large language model and responds in natural language.
Obligations:
- AI interaction disclosure (Art. 50(1)): The chatbot must clearly inform users they are interacting with AI before substantive conversation begins. A first-message disclosure ("I'm an AI assistant") plus a persistent UI indicator satisfies this.
- Machine-readable content marking (Art. 50(2)): If the chatbot generates text responses, the provider of the underlying LLM must implement provenance metadata. The SaaS company (as deployer) should verify that the LLM provider's API responses include provenance fields and preserve them through the delivery pipeline.
- Deepfake labeling (Art. 50(4)): Not applicable unless the chatbot generates images, audio, or video.
Real-world example: The SaaS company's chatbot responds to a user asking "Can I speak to a human?" by saying "Of course — let me transfer you." The transfer takes 30 seconds, during which the chatbot continues the conversation to gather context. The user believes they are already speaking with a human. This is a compliance failure — the handoff point must clearly re-establish whether the user is speaking with AI or a human.
Scenario 2: Marketing agency using AI-generated product images
A marketing agency uses an AI image generation tool to create product photos, lifestyle imagery, and social media content for its clients. The images depict realistic products in realistic settings.
Obligations:
- Machine-readable content marking (Art. 50(2)): The AI image generation tool (provider) must embed C2PA content credentials and invisible watermarks in every generated image. The agency (deployer) must not strip these markings.
- Deepfake labeling (Art. 50(4)): If the images resemble real-world scenes and could falsely appear authentic, the agency must label them as AI-generated at the point of publication. For social media posts, this means a visible caption or overlay. For website product galleries, a label adjacent to or on the image.
Real-world example: The agency generates a product photo of a handbag placed in a Parisian street scene. The image is photorealistic. Without labeling, a consumer might believe the photo was taken on location. The agency must label it — even though it is commercial content, the deepfake definition covers realistic AI-generated imagery that could falsely appear authentic. The artistic/satirical exception does not apply to commercial product photography.
Scenario 3: Media company publishing AI-written news summaries
A news aggregation platform uses AI to generate summaries of current events, published alongside links to source articles. The summaries are generated without human editorial review.
Obligations:
- Machine-readable content marking (Art. 50(2)): The text summaries must carry provenance metadata indicating AI generation. The platform should implement structured data (schema.org or equivalent) on summary pages.
- Deepfake labeling / AI text disclosure (Art. 50(4)): Because the summaries are published to inform the public on matters of public interest and are not subject to human editorial review, the platform must label them as AI-generated. A visible notice such as "This summary was generated by AI" at the top of each summary is required.
Real-world example: The platform argues that its summaries are "just rephrasing" source articles and therefore do not constitute AI-generated content requiring disclosure. This argument fails — Article 50(4) applies to AI-generated text published on public interest matters regardless of whether the underlying facts came from human sources. The generation process, not the originality of the facts, triggers the obligation.
How Article 50 interacts with GPAI obligations
General-purpose AI (GPAI) model providers — organisations that develop foundation models or large language models — have obligations under Article 51 and Article 53 that directly support Article 50 compliance downstream.
Specifically, GPAI providers must:
- Enable machine-readable content marking: GPAI models that generate synthetic content must be designed to allow downstream providers and deployers to mark outputs in compliance with Article 50(2). This means the GPAI model's API or output pipeline must support C2PA credential generation, watermark embedding, or metadata tagging.
- Provide technical documentation: GPAI providers must supply documentation sufficient for downstream providers to understand the model's capabilities, limitations, and output characteristics, including what marking and labeling mechanisms are available.
- Support transparency features: If a GPAI model powers a chatbot or interactive system, the model provider must ensure the system can be configured to deliver Article 50(1) disclosures.
The practical consequence is that organisations deploying third-party GPAI models (e.g., GPT, Claude, Gemini, Llama, Mistral) should verify that the model provider offers the following (a verification sketch follows the list):
- C2PA-compatible output marking for images, video, and audio
- Provenance metadata in API responses for text
- Configurable disclosure mechanisms for interactive use cases
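A quick deployer-side smoke test might look like the following; the endpoint, header, and field names are assumptions to replace with your provider's documented equivalents.

```python
import requests

def provenance_gaps(api_url: str, payload: dict) -> list[str]:
    """Flag missing Article 50(2) provenance signals in a text-generation
    API response. Header and field names are illustrative conventions."""
    gaps = []
    resp = requests.post(api_url, json=payload, timeout=30)
    resp.raise_for_status()
    if resp.headers.get("X-AI-Generated", "").lower() != "true":
        gaps.append("no X-AI-Generated response header")
    if "provenance" not in resp.json():
        gaps.append("no provenance field in JSON body")
    return gaps
```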
If the GPAI provider does not offer these features, the deployer must implement supplementary solutions or risk non-compliance. For a deeper analysis of GPAI obligations, see our GPAI obligations guide.
Step-by-step compliance checklist
Use this checklist alongside the full EU AI Act compliance checklist and the AI Act assessment tool to confirm your organisation's transparency posture.
1. Inventory all AI-powered touchpoints. Identify every system that interacts with natural persons (chatbots, voice assistants, recommendation interfaces) or generates content (text, images, audio, video). Include internal tools whose outputs reach external audiences.
2. Classify each touchpoint against the four Article 50 obligations. Map each system to the relevant obligation(s): AI interaction disclosure, machine-readable marking, emotion recognition notice, or deepfake/content labeling. Many systems trigger more than one.
3. Determine your role for each system. For each AI system, establish whether you are the provider, deployer, or both. Use the criteria in Article 3 and our provider vs deployer guide.
4. Audit existing disclosures. Review current chatbot interactions, generated content outputs, and user-facing notices. Identify gaps where AI interaction is not disclosed or AI-generated content is not labeled.
5. Verify machine-readable marking. For every AI system that generates images, video, audio, or text: confirm that C2PA content credentials, invisible watermarks, or provenance metadata are being embedded. Test that markings survive your distribution pipeline (a survival-test sketch follows this checklist).
6. Implement or upgrade chatbot disclosures. Add clear, persistent AI interaction notices to all chatbot and virtual assistant interfaces. Test with real users to confirm the disclosure is noticed and understood.
7. Implement visible content labels. For AI-generated or manipulated content subject to Article 50(4), add visible labels at the point of publication or distribution. Establish a consistent labeling format across channels.
8. Review GPAI provider compliance. If you use third-party GPAI models, request documentation of the provider's Article 50(2) marking features. Verify that content credentials or watermarks are available and active.
9. Update contracts and DPAs. Ensure contracts with AI providers include obligations to maintain transparency features, supply provenance tools, and notify you of changes to marking capabilities.
10. Document everything. Record your Article 50 compliance measures in your technical documentation or a standalone transparency compliance register. Documentation is your primary evidence in an enforcement action.
11. Train relevant teams. Marketing, customer support, product, and engineering teams must understand their Article 50 obligations. Create clear internal guidelines for when and how to disclose AI interaction and label AI-generated content.
12. Establish ongoing monitoring. Transparency compliance is not a one-time exercise. New AI tools, updated models, and changed distribution channels can introduce new obligations. Build Article 50 review into your regular compliance cycle.
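For checklist step 5, a pipeline survival test can be as simple as replaying a typical transformation and re-running detection. The sketch below re-encodes an image as JPEG, a common CDN step; detect stands in for whichever watermark or credential detector your tooling provides, so it is a parameter here, not a real API.

```python
from io import BytesIO
from PIL import Image

def survives_jpeg(img: Image.Image, detect, quality: int = 75) -> bool:
    """Re-encode as JPEG (a typical CDN transformation) and check whether
    the marking detector still fires. `detect` is supplied by your
    watermarking or C2PA tooling."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return bool(detect(Image.open(buf)))
```

Note that C2PA manifests are typically stripped by re-encoding, which is precisely why the Code of Practice treats invisible watermarking as the secondary layer; a test like this makes that gap visible before a regulator does.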
Frequently Asked Questions
Does Article 50 apply if my chatbot is clearly branded as "AI Assistant" in its name?
Possibly — but naming alone may not be sufficient. Article 50(1) requires that natural persons are informed they are interacting with AI. A name like "AI Assistant" is a strong indicator, but the Commission's guidance emphasises that the disclosure should be unambiguous and explicit at the point of interaction. Best practice: combine a clear name with a first-message disclosure and a persistent UI indicator. Do not rely solely on the product name.
Do I need to watermark AI-generated images used only internally?
Article 50(2) applies to providers of AI systems that generate content — the obligation triggers at the point of generation, not distribution. If the AI system is designed to generate synthetic images, the provider must embed machine-readable markings regardless of the intended use. Internal-only use does not exempt the provider. However, the deployer's Article 50(4) labeling obligation (visible disclosure) applies primarily to content that reaches natural persons or the public.
What if my AI generates text that a human then heavily edits before publication?
The editorial review exception in Article 50(4) applies. If the AI-generated text undergoes meaningful human editorial review and a natural or legal person holds editorial responsibility for the final publication, the text does not need to be labeled as AI-generated. "Meaningful" review means substantive fact-checking, rewriting, or editorial judgment — not a cursory read-through or automated spell-check.
How does Article 50 interact with GDPR transparency obligations?
They are complementary, not duplicative. GDPR Article 13 requires transparency about the processing of personal data — including informing data subjects about automated decision-making. AI Act Article 50 requires transparency about the AI nature of the system and its outputs, regardless of whether personal data is processed. Organisations should coordinate both sets of disclosures into a unified transparency approach. See our AI Act vs GDPR guide.
When does the "artistic works" exception apply to marketing content?
Narrowly. The artistic/satirical/creative exception in Article 50(4) applies to works with a genuinely artistic, satirical, or fictional purpose. Commercial marketing content — product images, advertising videos, promotional materials — does not qualify as artistic expression for the purpose of this exception. If you generate marketing assets with AI, you must label them.
Can I use a single "AI-generated content" disclaimer on my website instead of labeling each piece of content?
No. Article 50(4) requires labeling that is accessible at the point of consumption. A site-wide disclaimer in the footer or terms of service is insufficient. Each piece of AI-generated content subject to the labeling obligation must be individually identifiable as AI-generated — through an adjacent label, overlay, caption, or equivalent mechanism that a reasonable person would notice when encountering the specific content.