Three months until the EU AI Act: what mid-market companies should have in place by 2 August 2026

Four small wooden crates in a precise row on concrete, labelled with ascending risk levels, the rightmost sealed with oxblood-coloured wax; behind them a half-folded August calendar page, beside them a brass stamp, a magnifying glass and a leather notebook in cool northern light.

On 2 August 2026 the 24-month transition period for the central provisions of the EU AI Act ends. Anything not in place by then carries a real risk of fines from that day on. Three months, four guardrails, one sober roadmap.

TL;DR — the 90-second summary

Deadline: 2 August 2026 — end of the transition period for high-risk AI obligations.

Penalty range: up to €35 million or 7 % of global annual revenue — whichever is higher.

Four risk classes: prohibited, high-risk, limited, minimal — the use case is classified, not the tool.

Hidden obligation: AI literacy under Article 4 — in force since 2 February 2025, with no transition period.

Biggest lever: an honest inventory including shadow AI — everything else follows.

12-week roadmap: inventory (May) → classification & contracts (June–July) → training & steering group (July–August).

What is the problem?

Breaches of high-risk AI obligations carry penalties of up to €35 million or 7 % of worldwide annual turnover — whichever is higher — from 2 August 2026. To a German Mittelstand company that sounds abstract, but it is not. The obligations apply whether you develop AI yourself or merely use it. An existing ChatGPT Enterprise licence, an AI-driven applicant filter, a chatbot on the website, a copilot in accounting — all of these fall within the scope of the obligations on 2 August 2026.

Since the start of 2026 we have been running every AI project in AI Act mode in parallel. This is not bureaucratic enthusiasm, it is experience: anyone who bolts compliance on after go-live builds twice. In this post we put the deadline in context, describe the structural obligations and outline the four guardrails that carry the transition.

Why this deadline is a textbook case

The EU AI Act works differently from the GDPR. Instead of one broad data-protection obligation it splits AI systems into four risk classes — prohibited, high-risk, limited, minimal — and ties the obligations to the class. The classification itself is part of compliance. That means: for every AI system in use you must be able to say, in documented form, which class it falls into and why. A blanket "we use ChatGPT" will not survive an audit.

On top of that, the Article 4 AI-literacy obligation has been in effect since 2 February 2025. Anyone who has not addressed it systematically inside their organisation will, from August 2026, have to explain not only the high-risk obligations but also why the training obligation went unaddressed for well over a year.

Who is affected?

Three points that recur in client conversations — and that show why August 2026 is relevant not only to developers of high-risk AI.

Shadow AI is real

In nearly every Mittelstand company we have audited over the past months, employees use AI tools that the company has not formally introduced: private ChatGPT logins, browser plugins, free chatbots, AI assistants embedded in office software. Each of these is an open compliance question. Without inventory there is no register, without a register there is no classification, without classification there is no audit readiness.

Free ChatGPT versions are not a GDPR vehicle

Free ChatGPT versions are in most cases not GDPR-compliant for business use. Anyone using ChatGPT in a business context needs ChatGPT Business or Enterprise with a data-processing agreement and EU data residency. The same applies to comparable tools — Claude.ai, Copilot, Gemini. This shift from private to enterprise accounts has often not yet happened in the Mittelstand.

GDPR and AI Act interlock

The AI Act does not override the GDPR, it complements it. An AI application that processes personal data falls under both regimes. The data protection impact assessment becomes de facto a mandatory part of the AI risk analysis — which for most organisations means that two ongoing compliance processes now need a common frame.

Impact: the four risk classes and their obligations

The EU AI Act distinguishes four risk classes. Three of them carry practical obligations, one is simply prohibited. The following ordering is not the legal text — it is a reading aid for operational classification inside a Mittelstand company.

Prohibited (unacceptable risk)

Applications that pose an unacceptable risk to fundamental rights. This includes: social scoring by public authorities, manipulative systems that exploit cognitive weaknesses, real-time biometric identification in public spaces (with narrowly defined exceptions for law enforcement), emotion recognition in the workplace and in educational institutions, and untargeted scraping of facial images from the internet. Anyone running such an application has been required to stop since 2 February 2025.

High-risk

Applications in sensitive domains that pose a significant risk to health, safety or fundamental rights. Typical Mittelstand use cases: AI-supported pre-selection of applicants, consumer credit scoring, clinical decision support, and control of safety-critical processes.

Obligations from August 2026: documented risk management, technical documentation, human oversight, continuous monitoring, conformity assessment.

Limited risk

Applications with defined transparency obligations. This includes chatbots (notice that the user is interacting with AI), AI-generated content (labelling), deepfakes (disclosure) and emotion-recognition systems outside the prohibited contexts. Operational obligation: clear disclosure to the user.

Minimal risk

Applications without specific AI Act obligations. Most Mittelstand applications sit here: spam filters, simple recommendation engines in webshops, translation tools for internal documents, general writing assistance in accounting. Important: minimal risk releases you neither from GDPR obligations nor from the Article 4 AI-literacy obligation.

The borderline case is the use case, not the tool

The operational difficulty sits in the borderline cases. An AI-supported CRM that scores contacts is not automatically high-risk. But once it is used to pre-select applicants, the use case falls into the high-risk bracket — regardless of what the tool can actually do. The use case is classified, not the tool. This has operational consequences for contracts: a tool can fall into different risk classes depending on configuration and concrete usage context.

Mitigation and immediate actions — the four guardrails

For the next 12 weeks, four guardrails are enough if you push them through consistently. They are deliberately unspectacular and work in combination.

1. Inventory including shadow AI

A systematic survey of the business units, not of IT alone. Which tools are in use today, formally introduced or not, with which vendor, with which data flow direction. The output is a list that is honest — not the one in the slide deck. Practical lever: an anonymous survey in each department combined with the past 90 days of DNS logs for known AI-vendor domains.
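
A minimal sketch of the DNS-log half of that lever, assuming a dnsmasq-style query log; the log path and the vendor-domain list below are placeholders to adapt to your own resolver:

# Scan a resolver query log for known AI-vendor domains and count hits.
# Assumes dnsmasq-style lines ("query[A] api.openai.com from ...");
# the path and the domain list are placeholders, not a complete set.
import re
from collections import Counter

AI_DOMAINS = (
    "openai.com", "anthropic.com", "claude.ai", "gemini.google.com",
    "mistral.ai", "cohere.com", "together.ai", "perplexity.ai",
)

hits: Counter = Counter()
with open("dns-queries.log", encoding="utf-8") as log:  # placeholder path
    for line in log:
        for domain in AI_DOMAINS:
            # match the domain itself or any subdomain of it
            if re.search(rf"(^|[\s.]){re.escape(domain)}\b", line):
                hits[domain] += 1

for domain, count in hits.most_common():
    print(f"{count:>6}  {domain}")

The hit counts prove nothing on their own, but they tell you which departments to ask first in the anonymous survey.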

2. Classification with a conservative default

Determine the risk class for each tool — again: classification of the use case, not the tool. Where uncertain, classify conservatively. Correcting down later is easier than correcting up retroactively. Document high-risk applications and assign the obligation catalogue per class.
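
A sketch of what the conservative default can look like as a register record — the four class names follow the Act, everything else (the field names, the hypothetical AcmeCRM tool) is our own convention:

from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class UseCase:
    tool: str
    purpose: str
    justification: str = ""
    # conservative default: an unclassified use case counts as high-risk
    # until the documented justification says otherwise
    risk_class: RiskClass = RiskClass.HIGH

# same tool, two use cases, two classes — the use case is classified, not the tool
email_drafting = UseCase("AcmeCRM", "draft follow-up emails",
                         "no decisions about persons", RiskClass.MINIMAL)
applicant_scoring = UseCase("AcmeCRM", "pre-select applicants")  # stays HIGH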

3. AI literacy in modules, scoped to roles

A sales rep needs different content from a developer building an agent. Training plans in modules, with traceable attendance and a refresher cadence. This is not a one-off event, it is a routine. Sensibly coupled to GDPR training — the target groups overlap heavily.
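
A minimal sketch of such a module plan as plain configuration — the module names, role cuts and 12-month cadence are illustrative assumptions, not a legal standard:

# role-scoped training plan; every role shares a basics module,
# the rest follows what the role actually does with AI
TRAINING_PLAN = {
    "sales":       ("ai-basics", "prompt-hygiene", "customer-data-rules"),
    "hr":          ("ai-basics", "high-risk-awareness", "applicant-screening"),
    "accounting":  ("ai-basics", "copilot-usage", "gdpr-refresher"),
    "engineering": ("ai-basics", "agent-development", "model-risk"),
}
REFRESHER_MONTHS = 12  # cadence; attendance is logged per employee and module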

4. Governance with a lean steering group

An AI steering group of three to five people that meets quarterly and decides on new tools, new use cases and audit preparation. Responsibilities are assigned by name, not invoked ad hoc. Reviews run inside day-to-day operations, not as special events. A light tool-onboarding form (three questions: use case, data flow, assumed risk class) prevents shadow AI from growing back to the same size two quarters later.
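
The onboarding form is small enough to live as a validated record; a sketch under the assumption that an unanswered question blocks the request outright:

from dataclasses import dataclass

@dataclass
class OnboardingRequest:
    use_case: str            # what the tool will be used for, in one sentence
    data_flow: str           # which data goes in, which comes out, where it rests
    assumed_risk_class: str  # requester's assumption; the steering group confirms

    def __post_init__(self) -> None:
        # an empty answer is not a pending answer — it is a rejected request
        for name, value in vars(self).items():
            if not value.strip():
                raise ValueError(f"onboarding question unanswered: {name}")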

Detection and verification — how to find shadow AI

Before classification comes the inventory. Before the inventory comes honest detection. Five core questions that in practice surface 80 % of previously undetected AI usage:

Which tools are in use today — formally introduced or not?
Which accounts sit behind them — private logins or enterprise contracts?
Which data flows into them, and in which direction?
Which contracts and data-processing agreements cover them?
Who actively uses them, and who approved them?

A short quick-check sequence we run before every audit is a variant of this command across source repositories:

# search source trees for direct references to known AI vendors and model names
git grep -nE "(openai|anthropic|together\.ai|cohere|mistral|gemini-pro|claude-)" \
  -- '*.py' '*.ts' '*.js' '*.go' '*.php' '*.rb'

What surfaces goes onto the inventory list. What cannot be explained goes onto a second list — and that list does not feed the next inventory round, it feeds the next board meeting.

Operator recommendation

What should be operationally in place by 2 August 2026 — depending on where you stand today.

Cross-references to topics that come up in the same breath: the EU AI Act Article 50 briefing for transparency obligations, the LiteLLM/Flowise post for AI-gateway discipline, and the AI security audits post for the link between AI governance and release discipline.

Conclusion

Most audit preparations we currently support do not fail because of major investment, but because of a fuzzy inventory. Nobody knows for sure which tools are running in-house today, under which contracts, and who actively uses them. That is exactly where the biggest lever for the next three months sits. Anyone who completes a clean inventory in May has June and July for contracts and training — and goes into August calm.

The question is not whether you will be AI Act compliant by August. The question is whether on 3 August you can name every entry in your AI register with risk class, contract and responsible owner — and whether anything not in the register is in fact not running in-house.

A longer piece with a template for AI registers, a shadow-AI survey methodology and the coupling of AI-literacy and GDPR training is available (in German) at ole-hartwig.eu.

Frequently asked questions

How do the AI Act and NIS-2 sit alongside each other?

They overlap, but they don't collide. NIS-2 demands risk management, supply-chain security and security governance — that applies to AI systems in use as well. The AI Act adds specific obligations on classification, documentation and human oversight. Anyone already implementing NIS-2 can dock the AI Act into the existing risk-management path — that saves duplicate structures.

We have no high-risk application — is minimal administration enough?

Be careful with that. High-risk is narrowly defined, but the classification remains your obligation and, in case of doubt, your burden of proof. Applicant screening, consumer credit checks, clinical decision support and control of safety-critical processes are high-risk — even when the underlying tool is a “harmless” SaaS product. A conservative classification under uncertainty is cheaper in an audit than a late upward correction.

How much effort is the AI literacy obligation in practice?

Less than most teams assume, once the training plans are tailored to roles. For a mid-sized organisation we plan three to four modules of 60 to 90 minutes each, tailored to sales, HR, accounting and engineering. Documenting participation and the refresher cadence takes longer — without those, the obligation cannot be demonstrated in an audit.

What at minimum belongs in an AI register?

Per entry: purpose of the application, risk class and its justification, personal data involved, provider and contractual basis, fundamental-rights impact assessment, defined human oversight, internal owner. Anyone already keeping the GDPR record of processing activities can dock the AI register onto it — the overlap is large but not identical.
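
As a sketch, the same minimum set as one structured register entry — the field names are our own convention, not mandated by the Act:

from dataclasses import dataclass

@dataclass
class RegisterEntry:
    purpose: str             # purpose of the application
    risk_class: str          # prohibited / high-risk / limited / minimal
    risk_justification: str  # why this class — documented, not assumed
    personal_data: str       # categories of personal data involved, or "none"
    provider: str            # vendor of the tool
    contractual_basis: str   # contract and data-processing agreement reference
    fria_reference: str      # fundamental-rights impact assessment, if required
    human_oversight: str     # who can intervene, and how
    internal_owner: str      # accountable person inside the organisation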

We only use AI tools provided by someone else — are we still on the hook?

Yes. The EU AI Act distinguishes between the provider and the deployer (user) of an AI system. As a deployer you have obligations even if you did not build the system yourself — in particular the classification of your use cases, the AI literacy obligation under Article 4 and, for high-risk applications, human oversight and monitoring. Anyone using tools from Microsoft, OpenAI or others does not stand outside the Act.

Before August catches up with you — let's talk about your AI register.

We bring your AI register to audit readiness by 2 August.

You give us half a day plus access to your AI-tool contracts, DNS logs and business-unit contacts — we deliver an honest inventory including shadow AI, a conservative risk classification per use case, a 12-week roadmap to 2 August, and an audit-ready report you can take into the audit appointment.

This is the operational routine from DevSecOps as a Service and the Outsourced IT Department — AI Act compliance as AI-register discipline, not as a gut feeling in the board meeting.

Book a call directly