A no-mercy investigation into what the EU's flagship AI regulation fails to cover — from military exemptions to Palantir's sovereignty risks, CLOUD Act conflicts, worker surveillance, and the Digital Omnibus rollback now dismantling your rights from within.
The EU AI Act (Regulation (EU) 2024/1689), in force from 1 August 2024, is marketed as the world's first comprehensive AI regulatory framework. In reality, it is riddled with exemptions, self-certification loopholes, and enforcement vacuums — and is already being actively dismantled through the Digital Omnibus. This analysis maps every significant failure.
“The AI Act brands itself as the first-ever legislation that uniformly regulates AI across all sectors. This branding appears not to be one hundred per cent accurate.”
— TU Delft academic research on AI systems not covered by the AI Regulation, 2024
Every identified gap — sorted by severity and structural importance. Scroll further for in-depth analysis of the most consequential failures.
Critical: Article 2(3) fully exempts all AI systems used for military, defence, or national security — including autonomous weapons, AI targeting, and dual-use surveillance platforms. No oversight, no audit, no prohibition applies.
Critical: The Act says nothing about AI systems operating under US CLOUD Act or FISA 702 jurisdiction. Switzerland ended its Palantir contract over sovereignty risks. Germany's Constitutional Court ruled its police use unconstitutional. The EU AI Act is silent.
Critical: Most “high-risk” AI systems require only internal self-assessment. Article 111's grandfathering clause means pre-existing systems may never have to comply. Article 6(3) allows providers to self-exempt from the high-risk classification entirely.
High: No real-time public disclosure. No groundwater pollution rules. No binding consumption caps. The voluntary Climate Neutral Data Centre Pact is unenforceable. Global data centre water consumption is projected to reach 5 billion m³ annually by 2027, with EU facilities a growing share.
High: Models below the 10²⁵ FLOPs systemic-risk threshold are fully exempt if open-sourced (a back-of-the-envelope threshold check follows this list). Powerful models can be freely distributed, fine-tuned for surveillance or deepfakes, and deployed by foreign actors with zero AI Act obligations.
High: Standalone models not yet deployed in a specific application context fall into a regulatory void — neither clearly regulated as GPAI models nor as AI systems. Powerful models can be distributed commercially with zero obligations.
Structural: Mass biometric surveillance bans have three exceptions that effectively legalise them. None of the Act's prohibitions meaningfully apply in migration and border control. AI lie detectors at EU borders are legal. Banned systems can be legally exported.
Critical: Emotion recognition at work is banned — but algorithmic management, keystroke logging, continuous productivity monitoring, and AI scheduling remain largely unregulated. Internal employer tools may escape the Act's scope entirely.
High: The EU AI Office has a budget smaller than that of the UK's AI Safety Institute. It cannot independently investigate — only coordinate. National enforcement capacity is wildly uneven, creating regulatory arbitrage across Member States.
Structural: Fifteen months after the Act entered into force, the Commission proposed weakening it under Big Tech lobbying pressure (€151M/year). High-risk rules delayed to December 2027. The GDPR's personal data definition narrowed. Sensitive data allowed for AI training.
Critical: Articles 2(6) and 2(8) exempt personal non-professional use and scientific research. Combined with the open-source exemption, this creates a deployment pathway for powerful AI systems with absolutely zero obligations.
High: The AI Act, GDPR, DSA, DMA, and the Cyber Resilience Act have overlapping, inconsistent compliance requirements. This structurally advantages large US and Chinese AI incumbents over European SMEs that cannot afford the compliance infrastructure.
Structural: Palantir-class platforms are US-based, intelligence-connected, and operate across EU Member States with zero AI Act oversight.
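How easily a capable model slips under that open-source exemption is simple arithmetic. The Python sketch below applies the widely used heuristic that training a dense transformer costs roughly 6 FLOPs per parameter per training token; the heuristic and the example model are illustrative assumptions, not the Act's own calculation method.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOPs
# systemic-risk threshold (Article 51). The 6 * params * tokens
# heuristic is a common estimate for dense-transformer training
# compute, not the Act's own methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical example: a 70-billion-parameter model trained on
# 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs")                    # 6.3e+24 FLOPs
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False: exempt if open-sourced
```

Under this estimate such a model lands at roughly 6.3 × 10²⁴ FLOPs, below the threshold, and therefore carrying no AI Act obligations at all if released open source.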

The EU Commission itself warned in March 2026 that global data centre water consumption could hit 5 billion m³ annually by 2027. EU data centres are a growing fraction of this. Despite this alarming projection, binding water limits do not yet exist under EU law.
The Energy Efficiency Directive (EED) now requires Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE) reporting — but only to internal national databases, not in real time, not per facility, and not covering groundwater contamination or aquifer drawdown.
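The metrics themselves are trivial to compute, which underlines that per-facility, real-time disclosure is a policy choice rather than a technical hurdle. A minimal Python sketch, assuming the standard definitions (PUE as total facility energy over IT equipment energy; WUE as litres of water consumed per kWh of IT energy); the sample figures are invented for illustration:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def wue(water_litres: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water consumed per kWh of IT energy."""
    return water_litres / it_equipment_kwh

# Invented example figures for a single facility over one year.
total_kwh = 120_000_000   # all energy drawn by the facility
it_kwh = 100_000_000      # energy actually reaching IT equipment
water_l = 180_000_000     # cooling water consumed

print(f"PUE = {pue(total_kwh, it_kwh):.2f}")       # 1.20
print(f"WUE = {wue(water_l, it_kwh):.2f} L/kWh")   # 1.80
```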
Belgium hosts major data centres operated by Google, Microsoft, Amazon, and Meta — all under US CLOUD Act jurisdiction. Google's St-Ghislain facility uses river and groundwater for cooling. None are required to publish real-time water or energy data accessible to citizens. A data centre can sit on your commune's water table with no obligation to inform you.

The Act prohibits “real-time remote biometric identification in publicly accessible spaces for law enforcement.” But three exceptions allow Member States to authorise exactly this via national legislation: targeted searches for crime victims, terrorism prevention, and suspects in “serious crimes” — a list that includes fraud and drug trafficking.
Mass biometric surveillance for border control, commercial security, or workplace access is not covered by the ban at all. The prohibition only applies to real-time use for law enforcement purposes — a narrow definition.
On 19 November 2025 — barely 15 months after the AI Act entered into force — the European Commission proposed the Digital Omnibus: a sweeping package simultaneously amending the AI Act, GDPR, ePrivacy Directive, and other digital laws. The EDPB, EDRi, Amnesty International, and Corporate Europe Observatory have all called it a major rollback of EU digital protections.
High-risk AI compliance — biometric identification, judicial AI, hiring systems, credit scoring — delayed to December 2027. Another year-plus of unregulated operation for systems that directly affect your life.
Pseudonymised data would no longer necessarily count as personal data. Your browsing history, health proxies, location patterns — all pseudonymised — become available for AI training without consent under the new definition (the sketch after this list shows why pseudonymised is not anonymous).
Health data, political views, ethnic profiles, and biometric characteristics can be used to train AI models if processing is “residual or incidental” — a standard companies will maximally exploit.
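Why the pseudonymisation change matters is easiest to see in code. The Python sketch below shows that a stable pseudonym still links records across datasets into a single behavioural profile; the hashing scheme and the sample data are invented for illustration.

```python
import hashlib

def pseudonymise(user_id: str, salt: str = "fixed-salt") -> str:
    """Replace a direct identifier with a stable token. Stable tokens
    remain linkable across datasets, which is why pseudonymised data
    has traditionally still counted as personal data under the GDPR."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

browsing = {pseudonymise("alice@example.com"): ["pharmacy-site", "clinic-search"]}
location = {pseudonymise("alice@example.com"): ["Brussels", "Ghent"]}

# The same token appears in both datasets: the records can be joined
# back into one profile without ever storing the name.
token = pseudonymise("alice@example.com")
print(token in browsing and token in location)  # True
```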
A comprehensive reference of AI activity categories entirely absent from the Act or so inadequately addressed as to constitute regulatory voids.
| Category | AI Act Coverage | Real-World Risk to Citizens | Severity |
|---|---|---|---|
| Military AI / Autonomous Weapons | Fully exempt — Article 2(3) | Lethal autonomous systems, AI-guided targeting, with no human-in-the-loop requirement | Critical |
| Palantir-class Platforms | Not addressed anywhere in the Act | US CLOUD Act overrides EU data protection; EU governments cannot prevent US government access | Critical |
| CLOUD Act / FISA 702 Conflict | Not addressed; DPF inadequate | All EU citizen data on US cloud AI structurally accessible to US intelligence agencies | Critical |
| Water Contamination & Groundwater | EED covers WUE metrics; pollution not covered | Data centre cooling discharge affecting local aquifers and ecosystems near Belgian facilities | High |
| Open-Source Model Misuse | Exempt below 10²⁵ FLOPs threshold | Powerful models freely redistributable for surveillance, deepfakes, or targeted harassment | High |
| Internal Employer AI Tools | Partly high-risk but self-certified; internal tools may escape scope | Workers evaluated, dismissed, or denied promotion by opaque algorithms with no appeal | High |
| AI in Migration & Border Control | Blanket exemptions; prohibitions don't apply; non-public database | Discriminatory surveillance; AI lie detectors at borders; AI-assisted pushbacks | Critical |
| Predictive Network-Level Policing | Individual prediction banned; group/network analysis not covered | Pattern analysis tools used as de facto predictive policing with no oversight | High |
| Export of Banned AI Systems | Not covered — territorial application only | Systems banned in EU sold to authoritarian governments targeting the same diaspora populations | Critical |
| Schrems III Invalidation Scenario | No contingency planning in AI Act | If CJEU invalidates DPF, all EU institutions using US AI cloud face immediate legal exposure | Structural |
| Real-Time Water/Energy Disclosure | EED requires reporting but not real-time per facility | Citizens cannot access current data for the data centre on their doorstep affecting their water supply | High |
| Standalone AI Models Pre-Deployment | Regulatory void between model and system definitions | Powerful models trained and distributed without triggering any Act obligations at all | Structural |
| Autonomous Cyber Weapons | Not covered | State-level offensive cyber AI operates entirely outside any EU framework | Critical |
The concrete, everyday implications of the Act's structural failures — for your data, your job, your environment, and your democracy.
The EU AI Act is not a failure of intention. It is a case study in regulatory capture at scale. The original draft was stronger. Industry lobbying — documented at €151 million per year by 2025, a 33.6% increase in two years — systematically removed the teeth. Member state governments, led by those most dependent on US tech investment, pushed for exemptions that protect company interests over citizen rights. The US Trump administration directly pressured the EU not to “overregulate.”
What emerged is a law that bans practices characteristic of authoritarian states, practices EU governments were not planning to implement anyway; creates enough regulatory complexity to impose costs on European AI startups while entrenching large incumbents; exempts precisely the sectors where AI causes the most harm to citizens; and is now being actively weakened through the Digital Omnibus before a single enforcement action has been completed.
The Belgian citizen's relationship with AI as regulated by this Act in 2026 is this: you are protected from social scoring you were never at risk of, while being unprotected from the surveillance tools already deployed on your data, your job application, your water supply, and your border crossing. Your data is legally accessible to US intelligence agencies. Your local data centre does not have to publish real-time water or energy consumption data you can access. Palantir-type platforms can operate in your country's security infrastructure with no AI Act transparency requirements. And the Commission is proposing to make all of this worse.
“This is not the gold standard. It is a framework whose gaps are more consequential than its rules.”