
Article 50 gets concrete — the European Commission publishes its transparency guidelines for the AI Act and gives you 26 days to weigh in

A brass embossing stamp on cream-colored paper with an empty embossed circle in the lower corner; beside it, a folded letter on wood and an oxblood wax stick with a brass ruler in cool northern light.

Three days after the omnibus deal, the operational ground rules follow: on 8 May 2026 the European Commission published the consultation draft of the AI Act's Article 50 transparency guidelines. The deadline for submissions runs until 3 June 2026, the obligation itself applies from 2 August 2026 — 86 days later.

What has changed? The Commission delivers the operational ground rules for the omnibus structural clarification of 7 May. Who is affected? Providers and deployers of interactive chatbots, generative AI outputs (text, image, audio, video), and biometric classification — effectively every German Mittelstand company with a Sylius, TYPO3, or Symfony stack and AI-supported content production. What should you read today? Classification, consultation deadline, effective date — in that order.

The 90-second summary

On 8 May 2026 the European Commission published the draft of the guidelines for implementing transparency obligations under Article 50 of the AI Act and opened public consultation. Submission deadline: 3 June 2026. Effective date of the obligation: 2 August 2026. Tightened transitional period (3 months instead of 6, per Omnibus VII of 7 May): 2 December 2026. On the same date, the omnibus bans the commercial distribution of "nudifier" tools — with no transitional period.

Four mandatory profiles from Article 50: (1) providers of interactive AI must make the AI nature recognizable; (2) providers of generative AI must mark outputs machine-readably; (3) deployers of deepfakes must disclose the synthetic nature; (4) deployers of emotion recognition or biometric categorization must inform individuals beforehand. Recommendation: classify now, submit a consultation response by 3 June, build the notification layer by 2 August, deploy the marking pipeline by 2 December.

What Article 50 requires — and what you have to classify today

What we got on 8 May

The European Commission consultation centers on the draft guidelines for Article 50 of the AI Act. The draft isn't a binding legal text; it's the operational translation of the four mandatory profiles from Article 50 into testable routines — machine-readable marking formats, notification-duty mechanics, deepfake thresholds, biometric information duties. De facto it becomes the benchmark against which supervisory authorities will assess compliance from 2 August onwards.

Three days earlier — on 7 May — the Council and Parliament agreed in trilogue on AI Omnibus VII: high-risk obligations pushed to 2027 and 2028, but transparency obligations brought forward (transition shortened from 6 to 3 months, basic obligation effective 2 August 2026, tightened marking deadline 2 December 2026). Anyone who read the omnibus as a "delay" and paused classification work is now reading today's consultation for what it is: confirmation that the transparency part is exactly not being delayed.

The four mandatory profiles under Article 50

Profile 1 — providers of interactive AI (Art. 50(1)). AI systems that interact with natural persons (chatbots, voice bots, avatars) must ensure that the affected person recognizes the AI nature — unless it's obvious from context. In the German Mittelstand this covers service bots, sales assistants, and automated reply mail.

Profile 2 — providers of generative AI (Art. 50(2)). Providers of text, image, audio, or video synthesis must mark outputs machine-readably. The guidelines consultation discusses the technical options: C2PA signatures, visible and invisible watermarks, metadata provenance, cryptographically verifiable manifests. Sylius merchants who publish AI product descriptions or AI images fall under this.

Profile 3 — deployers of deepfakes (Art. 50(4)). Anyone presenting a deepfake or AI-generated content of public interest to an audience must disclose its synthetic nature. The obligation doesn't apply to artistic, satirical, or fictional works (with caveats). Brands using AI-generated moving images in press communication are the primary addressees.

Profile 4 — deployers of biometric classification (Art. 50(5)). Emotion-recognition or biometric-categorization systems require prior information to affected individuals. Rarely seen directly in the Mittelstand, but increasingly relevant in recruitment and HR tooling.

Classification — where you stand today

Before you assess the consultation text, you need an honest self-classification. Four questions we ask first in every AI Act review:

  1. Does your stack generate text, images, audio, or video with AI and publish them without further human editing? If yes: Profile 2 — machine-readable marking from 2 August.
  2. Do your systems interact with persons directly in customer contact (live chat, voice bot, automated reply mail)? If yes: Profile 1 — make AI nature recognizable.
  3. Do you use deepfake-capable image or video output in marketing, press, or product communication? If yes: Profile 3 — visible disclosure.
  4. Do your systems process biometric data for classification purposes (mood, age, gender, presumed attributes)? If yes: Profile 4 — prior information.
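The four screening questions above can be sketched as a small helper — a minimal illustration, assuming a simple yes/no answer per question; the profile strings are labels for this sketch, not a legal assessment:

```python
# Sketch of the Article 50 self-classification checklist.
# Each boolean mirrors one of the four questions above; the returned
# strings are illustrative labels, not legal conclusions.

def classify(publishes_unedited_ai_output: bool,
             interacts_with_customers: bool,
             uses_deepfake_media: bool,
             biometric_classification: bool) -> list[str]:
    """Return the Article 50 profiles that appear to apply."""
    profiles = []
    if interacts_with_customers:
        profiles.append("Profile 1: make AI nature recognizable (Art. 50(1))")
    if publishes_unedited_ai_output:
        profiles.append("Profile 2: machine-readable marking (Art. 50(2))")
    if uses_deepfake_media:
        profiles.append("Profile 3: visible deepfake disclosure (Art. 50(4))")
    if biometric_classification:
        profiles.append("Profile 4: prior information (Art. 50(5))")
    return profiles

# Example: a Sylius shop with AI product descriptions and a live chat bot
print(classify(True, True, False, False))
```

The value of writing it down this way is that the answer set becomes an artifact you can attach to a consultation response or an audit file.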

In our advisory practice we see the German Mittelstand most often in Profiles 1 and 3. Profile 2 is underestimated — many Sylius merchants generate AI product descriptions and don't realize that this falls under Article 50 from August onwards.

What the consultation changes operationally

Three points stand out as particularly consultation-relevant:

First — technical marking options. The draft cites C2PA as the reference, but accepts watermarks, metadata, and cryptographically signed manifests alongside it. If you generate images in your stack, between now and December 2026 you can choose which marking lane to run. C2PA is technically the cleanest but operationally the most expensive; pure metadata provenance is cheaper but easier to strip.

Second — deepfake threshold. The draft requires marking for audio, image, and video; plain text outputs fall under 50(2), not 50(4). In practice that matters: an AI-generated press text is marking-obligated, an AI-generated press video is additionally disclosure-obligated — with a visible notice in the frame.

Third — transitional period and "nudifier" ban. The omnibus decision shortened the marking-obligation transition from six to three months. It takes effect on 2 December 2026. On the same date, the omnibus bans the commercial distribution of AI tools that generate nude images without consent. That ban is sharp, has no transitional period, and isn't part of the guidelines consultation — but it is anchored to the same effective date.

What we concretely recommend

By 3 June — if you're done with classification on day one: submit a consultation response. The Commission explicitly addresses the consultation to SMEs as well; a German Mittelstand voice writing today that a C2PA obligation needs six months of lead time in Sylius or TYPO3 workflows becomes part of the factual basis on which the final version is built.

By 2 August: build the notification layer for interactive AI systems into your TYPO3 or Sylius frontends. A banner, a first bot turn, a line in the cookie layer — technically trivial, bureaucratically uncomfortable if it still needs board approval in July.
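As a minimal sketch of such a notification layer for a chat backend — the disclosure text, session handling, and wrapper shape are all placeholders for whatever your stack actually uses:

```python
# Sketch: guarantee the first bot turn carries the Art. 50(1) notice.
# DISCLOSURE wording and the session-tracking mechanism are illustrative
# placeholders, not a prescribed format.

DISCLOSURE = ("Hinweis: Sie chatten mit einem KI-System. / "
              "Note: you are chatting with an AI system.")

class NoticeWrapper:
    def __init__(self, bot):
        self._bot = bot              # any callable: message -> answer
        self._disclosed = set()      # session ids that already saw the notice

    def reply(self, session_id: str, message: str) -> str:
        answer = self._bot(message)
        if session_id not in self._disclosed:
            self._disclosed.add(session_id)
            return f"{DISCLOSURE}\n\n{answer}"
        return answer
```

The point of wrapping at this layer is that the notice cannot be forgotten by an individual template or channel — every new session gets it on the first turn, regardless of which frontend initiated the chat.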

By 2 December: establish the marking pipeline for generative outputs. For German Mittelstand SMEs we recommend the hybrid path: C2PA for hero visuals, metadata provenance for long-tail assets.

Structurally: dovetail the Article 50 obligation with your NIS-2 registration. The BSI registration window has been closed since 6 March 2026; anyone qualifying as an "important" or "particularly important" entity who isn't registered has an open task that doubles up with the AI Act effective date in August.

What we deliberately don't recommend

We don't recommend waiting for the final guidelines. Between 3 June and 2 August there are exactly 60 days; the final version will be published in that window, but it won't take effect any earlier and won't get milder. Anyone starting classification and pipeline work today has time to polish in August — anyone waiting will be under pressure in July.

We equally don't recommend "handing marking off to the agency". The obligation falls on the provider or deployer — i.e. you, not your marketing agency. An agency can implement the marking technically; compliance responsibility stays with you. Contracts with agencies should explicitly cover the marking standard and burden of proof from August onwards.

Who is most affected

Three profiles from our advisory practice are acute today. Sylius and TYPO3 customers with AI-supported product or content production — online shops and publishers that auto-generate descriptions, newsletters, or image variants — fall under Profile 2 from 2 August. SMEs with voice bots or interactive chat systems on Symfony or Node backends sit under Profile 1, with the highest visibility to end customers. Brands with AI-generated moving images in press communication combine Profile 2 and 3 into a double workflow that marketing leads often haven't seen yet.

Conclusion

The 8 May consultation delivers the operational ground rules for the structural decision of 7 May. What was delayed are the high-risk obligations for Annex III systems. What was accelerated are the transparency obligations — exactly the ones that hit the broadest part of the German Mittelstand.

The question isn't whether the AI Act is relevant for your stack. It's whether you, today on 10 May 2026, have sorted your AI outputs along the four Article 50 profiles — or whether you'll try that for the first time on the night of 2 August.

Personal context and technical detail on C2PA, watermarks, and metadata provenance in TYPO3 and Sylius workflows: ole-hartwig.eu.

Frequently asked questions on Article 50 and the transparency consultation

We only use AI for internal drafting — does Article 50 still apply?

If a human edits and approves the AI output before publication, you are structurally on safe ground — the marking duty under 50(2) targets directly published generative outputs. As soon as text, images or audio go live without final editing (typically: product descriptions from a PIM workflow, newsletter variants from an A/B tool, AI-generated product images), the duty applies. The practical line is not "internal vs external" but "edited vs unedited".

C2PA, watermarks or metadata — what do you recommend for Sylius and TYPO3 stacks?

For most DACH SMEs we recommend a hybrid path: C2PA signatures for hero visuals and PR moving images (where provenance must be forensically defensible) and metadata provenance (IPTC/XMP DigitalSourceType tags) for long-tail assets such as product images. Pure watermarks without a metadata layer are technically inferior because they are lost on the first re-encode. In practice: run the C2PA path of a hero asset through the image pipeline and check what remains after the image resizer and CDN delivery — that is where theory meets reality.
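That theory-meets-reality check can start as something very small — a byte-level smoke test that asks whether the DigitalSourceType marker is still present in the asset the CDN actually delivers. A real verification would parse the XMP packet and the C2PA manifest properly; this sketch only greps the delivered bytes:

```python
# Sketch: smoke-test whether the provenance marker survived the pipeline.
# A production check would parse XMP and validate the C2PA manifest;
# this only searches the raw delivered bytes for the vocabulary URI tail.

MARKER = b"digitalsourcetype/trainedAlgorithmicMedia"

def marker_survives(delivered_asset: bytes) -> bool:
    """True if the IPTC DigitalSourceType URI is still in the bytes."""
    return MARKER in delivered_asset
```

Run it against the bytes you fetch back from the CDN edge, not the file you uploaded — resizers and re-encoders are exactly the stages where metadata silently disappears.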

What does submitting a consultation response cost — is it worth it for an SME?

A response takes two to four hours once your own classification is in place. The EU Commission actively reads SME submissions — not as a marketing slogan but demonstrably in the previous GPAI Code of Practice round. Writing that the C2PA duty requires a six-month lead in a Sylius workflow (image pipeline, CDN, asset manager, re-encode stages), or that the notice duty under 50(1) collides with German TMG provisions, makes you part of the evidence base. Staying silent now means staying silent when enforcement starts.

What fines apply if we miss the transparency duty?

Article 99 AI Act provides for fines of up to EUR 15 million or 3 percent of global annual turnover (whichever is higher) for breaches of the Article 50 transparency duties. For SMEs the realistic ceiling is significantly lower because the proportionality principle applies and supervisory authorities typically advise before penalising in the ramp-up phase. But anyone who in autumn 2026 still has no marking in their generative pipeline and does not disclose the AI nature of their chatbot has an open task that will be more expensive at the next audit than implementing the pipeline itself.

How does Article 50 interlock with NIS-2 and GDPR?

Three layers, one compliance stack: NIS-2 regulates security duties (BSI registration since 6 January 2026, registration window closed since 6 March 2026), GDPR regulates data duties (processing agreements, legal basis, data subject rights), Article 50 AI Act regulates the transparency duty in an AI context. They interlock: a chatbot running without an AI notice misses Article 50; if it also processes personal data without a clear legal basis, it additionally misses GDPR; if the underlying infrastructure should have been registered as an “important entity”, it misses NIS-2. The art is not running three projects but one compliance stack that addresses all three layers simultaneously.

What happens if the final guidelines turn out stricter than the draft?

The final version can move the draft in either direction — stricter or looser — or leave it unchanged. What it cannot do is move the application date backwards (only another Omnibus could). Building with C2PA and metadata provenance today hedges well against any tightening; betting on the minimum (watermark only, no manifest) risks a retrofit in autumn. Our recommendation is: hybrid path now, cross-check the final version in July/August, tighten the marking layer if needed — do not rebuild the whole pipeline.

Before the supervisory audit arrives: let's talk about classification.

We classify your AI applications against Article 50 — with a consultation-response template by 3 June.

You give us read access to your AI application landscape — we audit along the four Article 50 profiles (interactive AI, generative AI, deepfakes, emotion recognition/biometric categorization), deliver a classification table with notification-layer mapping, a consultation-response template for the 3 June 2026 deadline, and a marking-pipeline plan for generative outputs by 2 August 2026.

This is the operational routine behind DevSecOps as a Service and the External IT Department — AI Act classification as a running operations process, not a compliance sprint before the deadline.

Schedule an appointment directly