Critical Analysis  ·  April 2026

The EU AI Act —
Every Gap.
Every Loophole.
Every Failure.

A no-mercy investigation into what the EU's flagship AI regulation fails to cover — from military exemptions to Palantir's sovereignty risks, CLOUD Act conflicts, worker surveillance, and the Digital Omnibus rollback now dismantling your rights from within.

Published: April 26, 2026
Gaps Identified: 12 Critical
Coverage: EU · Belgium · Citizens
Status: ⚠ Active Threat

Context

What the AI Act claims — and what it delivers

The EU AI Act (Regulation (EU) 2024/1689), in force from 1 August 2024, is marketed as the world's first comprehensive AI regulatory framework. In reality, it is riddled with exemptions, self-certification loopholes, and enforcement vacuums — and is already being actively dismantled through the Digital Omnibus. This analysis maps every significant failure.

“The AI Act brands itself as the first-ever legislation that uniformly regulates AI across all sectors. This branding appears not to be one hundred per cent accurate.”
— TU Delft academic research on AI systems not covered by the AI Regulation, 2024
12 Critical gaps identified
100% Military AI exempt from the Act
€151M Big Tech EU lobbying per year (2025)
+15 mo High-risk rules delayed by Omnibus
0 Enforcement actions completed (2026)

Structural Failures

The 12 Critical Gaps

Every identified gap — sorted by severity and structural importance. Scroll further for in-depth analysis of the most consequential failures.

Gap 01
Military & National Security Exemption

Article 2(3) fully exempts all AI systems used for military, defence, or national security — including autonomous weapons, AI targeting, and dual-use surveillance platforms. No oversight, no audit, no prohibition applies.

Critical
Gap 02
Palantir & Foreign Intelligence Platforms

The Act says nothing about AI systems operating under US CLOUD Act or FISA 702 jurisdiction. Switzerland ended its Palantir contract over sovereignty risks. Germany's Constitutional Court ruled its police use unconstitutional. The EU AI Act is silent.

Critical
Gap 03
Self-Certification Trap

Most “high-risk” AI systems require only internal self-assessment. Article 111's grandfathering clause means pre-existing systems may never comply. Article 6(3) allows providers to self-exempt from the high-risk classification entirely.

High
Gap 04
Energy & Water Transparency Void

No real-time public disclosure. No groundwater pollution rules. No binding consumption caps. The voluntary Climate Neutral Data Centre Pact is unenforceable. Data centres globally are projected to consume 5 billion m³ of water annually by 2027, with EU facilities a growing fraction of that total.

High
Gap 05
Open-Source Exemption Abuse

Models below the 10²⁵ FLOPs systemic risk threshold are fully exempt if open-sourced. Powerful models can be freely distributed, fine-tuned for surveillance or deepfakes, and deployed by foreign actors with zero AI Act obligations.

High
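The exemption line is easy to see in numbers. Using the common ≈6·N·D training-compute rule of thumb (an approximation from the scaling-law literature, not a method named in the Act), even a very large openly released model can land below the 10²⁵ FLOPs systemic-risk presumption. A minimal sketch with illustrative figures:

```python
# Rough training-compute estimate via the standard ~6 * N * D
# approximation: FLOPs ~= 6 * parameters * training tokens.
# The model size and token count below are illustrative assumptions,
# not figures taken from the AI Act itself.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs presumption under the AI Act

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, systemic risk: {flops >= SYSTEMIC_RISK_THRESHOLD}")
# 6 * 70e9 * 15e12 = 6.3e24, which is below the 1e25 threshold,
# so an open-source release would carry no systemic-risk obligations.
```

In other words, models at roughly the scale of today's largest open releases sit comfortably under the cut-off, and the exemption applies in full.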
Gap 06
GPAI / AI System Definitional Void

Standalone models not yet deployed in a specific application context fall into a regulatory void — neither clearly regulated as GPAI models nor as AI systems. Powerful models can be distributed commercially with zero obligations.

Structural
Gap 07
Law Enforcement & Migration Carve-Outs

The ban on mass biometric surveillance has three exceptions that effectively legalise it. None of the Act's prohibitions meaningfully apply in migration and border control. AI lie detectors at EU borders are legal. Banned systems can be legally exported.

Critical
Gap 08
Workplace AI Surveillance

Emotion recognition at work is banned — but algorithmic management, keystroke logging, continuous productivity monitoring, and AI scheduling remain largely unregulated. Internal employer tools may escape the Act's scope entirely.

High
Gap 09
Enforcement Vacuum

The EU AI Office has a budget smaller than the UK's AI Safety Institute. It cannot independently investigate — only coordinate. National enforcement capacity is wildly uneven, creating regulatory arbitrage across Member States.

Structural
Gap 10
The Digital Omnibus Rollback

15 months after the Act entered into force, the Commission proposed weakening it under Big Tech lobbying pressure (€151M/year). High-risk rules delayed to December 2027. GDPR personal data definition narrowed. Sensitive data allowed for AI training.

Critical
Gap 11
Research & Personal Use Loopholes

Articles 2(6) and 2(8) exempt personal non-professional use and scientific research. Combined with the open-source exemption, this creates a deployment pathway for powerful AI systems with absolutely zero obligations.

High
Gap 12
Regulatory Overlap Jungle

AI Act, GDPR, DSA, DMA, and the Cyber Resilience Act have overlapping, inconsistent compliance requirements. This structurally advantages large US and Chinese AI incumbents over European SMEs who cannot afford the compliance infrastructure.

Structural

Gap 02 — Deep Dive

The Palantir Problem

Palantir Technologies — Structural Sovereignty Risk

US-based, intelligence-connected, operating across EU member states with zero AI Act oversight

What Switzerland found (Feb 2026)

  • Data held by Palantir could be accessed by US government & intelligence services
  • Leaks from Palantir's systems “cannot be technically prevented”
  • The risk represents a genuine loss of national sovereignty
  • Switzerland ended its Palantir contract entirely in February 2026

What Germany's courts ruled

  • Federal Constitutional Court: Palantir police use was unconstitutional
  • Violated citizens' right to informational self-determination
  • Hamburg's authorising law nullified entirely by the ruling
  • Despite this, Merz government reportedly reviving acquisition discussions

The CLOUD Act conflict

  • US CLOUD Act (2018): requires US-based providers to hand over data they hold, wherever in the world it is stored
  • FISA 702: NSA collects EU citizen communications without individual court orders
  • GDPR Article 48 prohibits disclosure to foreign authorities without an agreement
  • EU-US Data Privacy Framework does not resolve CLOUD Act or FISA 702 conflicts

What the AI Act does about this

  • Nothing. The Act does not address US extraterritorial jurisdiction
  • No restriction on EU public institutions using US-jurisdiction AI platforms
  • No mandatory sovereignty audit of AI providers operating in the EU
  • No requirement for sensitive EU data to run on EU-jurisdictional infrastructure
Schrems III — The Third Invalidation Looms

EU-US data transfer frameworks have already been invalidated twice (Safe Harbour by Schrems I in 2015; Privacy Shield by Schrems II in 2020). An active CJEU appeal filed in October 2025 may trigger a third invalidation — leaving every EU institution using US cloud AI with no legal basis for those data transfers whatsoever.

Gap 04 — Deep Dive

Energy, Water & the Transparency Void


5 billion cubic metres per year — and no real-time public disclosure

The EU Commission itself warned in March 2026 that global data centre water consumption could hit 5 billion m³ by 2027. EU data centres are a growing fraction of this. Despite this alarming projection, binding water limits do not yet exist under EU law.

The Energy Efficiency Directive (EED) now requires Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE) reporting — but only to internal national databases, not in real time, not per facility, and not covering groundwater contamination or aquifer drawdown.
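The two EED metrics are simple ratios: Power Usage Effectiveness (total facility energy divided by IT equipment energy) and Water Usage Effectiveness (litres of site water per kWh of IT energy). A minimal sketch of the arithmetic, using invented meter readings:

```python
# PUE and WUE as commonly defined (The Green Grid metrics referenced
# by the EED reporting scheme):
#   PUE = total facility energy (kWh) / IT equipment energy (kWh)
#   WUE = site water use (litres)    / IT equipment energy (kWh)
# All readings below are invented for illustration.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_kwh

def wue(water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness, in litres per kWh of IT energy."""
    return water_litres / it_kwh

# Hypothetical annual readings for a single facility:
total_kwh = 120_000_000   # all energy entering the site
it_kwh = 100_000_000      # energy consumed by the servers alone
water_l = 180_000_000     # cooling water withdrawn

print(f"PUE = {pue(total_kwh, it_kwh):.2f}")      # 1.20
print(f"WUE = {wue(water_l, it_kwh):.2f} L/kWh")  # 1.80
```

Under the EED these ratios flow into national reporting databases; nothing in the Directive or the AI Act obliges an operator to publish them per facility, in real time, to the public.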

What's missing: No real-time per-facility public disclosure. No groundwater pollution rules. No binding water consumption caps. The Climate Neutral Data Centre Pact (water-neutral by 2030) is voluntary and unenforceable.
Belgium: What Sits on Your Water Table?

Belgium hosts major data centres operated by Google, Microsoft, Amazon, and Meta — all under US CLOUD Act jurisdiction. Google's St-Ghislain facility uses river and groundwater for cooling. None are required to publish real-time water or energy data accessible to citizens. A data centre can sit on your commune's water table with no obligation to inform you.

Gap 07 — Deep Dive

Biometric Surveillance & The Migration Carve-Out


“Banned” — with three exceptions that swallow the rule entirely

The Act prohibits “real-time remote biometric identification in publicly accessible spaces for law enforcement.” But three exceptions allow Member States to authorise exactly this via national legislation: targeted searches for crime victims, terrorism prevention, and suspects in “serious crimes” — a list that includes fraud and drug trafficking.

Mass biometric surveillance for border control, commercial security, or workplace access is not covered by the ban at all. The prohibition only applies to real-time use for law enforcement purposes — a narrow definition.

In migration and borders: AI lie detectors at EU borders are legal. Discriminatory risk assessment for asylum seekers is not banned. Key databases (Eurodac, SIS, ETIAS) are exempt from AI Act compliance until 2030. Deploying authorities' systems are listed in a non-publicly accessible section of the EU database.
Gap 10 — The Most Dangerous Development

The Digital Omnibus: Dismantling Rights From Within

On 19 November 2025 — barely 15 months after the AI Act entered into force — the European Commission proposed the Digital Omnibus: a sweeping package simultaneously amending the AI Act, GDPR, ePrivacy Directive, and other digital laws. The EDPB, EDRi, Amnesty International, and Corporate Europe Observatory have all called it a major rollback of EU digital protections.

Danger 01

High-Risk AI Rules Delayed Again

High-risk AI compliance — biometric identification, judicial AI, hiring systems, credit scoring — delayed to December 2027. Another year-plus of unregulated operation for systems that directly affect your life.

+15 months delay granted to high-risk AI operators
Danger 02

GDPR Personal Data Narrowed

Pseudonymised data would no longer necessarily count as personal data. Your browsing history, health proxies, location patterns — all pseudonymised — become available for AI training without consent under the new definition.

Scope of GDPR protection significantly reduced
Danger 03

Sensitive Data for AI Training

Health data, political views, ethnic profiles, and biometric characteristics can be used to train AI models if processing is “residual or incidental” — a standard companies will maximally exploit.

€151M Big Tech EU lobbying per year in 2025 (+33.6% vs 2023)
Amnesty International, April 2026

“The Commission's proposals to ‘simplify’ tech laws will roll back our rights in order to feed AI and undermine our rights online.” — The Digital Omnibus reflects key lobbying messaging from Google, Microsoft, and Meta, as documented by Corporate Europe Observatory and LobbyControl.


Complete Reference

What is not covered at all

A comprehensive reference of AI activity categories entirely absent from the Act or so inadequately addressed as to constitute regulatory voids.

Category | AI Act Coverage | Real-World Risk to Citizens | Severity
Military AI / Autonomous Weapons | Fully exempt — Article 2(3) | Lethal autonomous systems, AI-guided targeting, with no human-in-the-loop requirement | Critical
Palantir-class Platforms | Not addressed anywhere in the Act | US CLOUD Act overrides EU data protection; EU governments cannot prevent US government access | Critical
CLOUD Act / FISA 702 Conflict | Not addressed; DPF inadequate | All EU citizen data on US cloud AI structurally accessible to US intelligence agencies | Critical
Water Contamination & Groundwater | EED covers WUE metrics; pollution not covered | Data centre cooling discharge affecting local aquifers and ecosystems near Belgian facilities | High
Open-Source Model Misuse | Exempt below 10²⁵ FLOPs threshold | Powerful models freely redistributable for surveillance, deepfakes, or targeted harassment | High
Internal Employer AI Tools | Partly high-risk but self-certified; internal tools may escape scope | Workers evaluated, dismissed, or denied promotion by opaque algorithms with no appeal | High
AI in Migration & Border Control | Blanket exemptions; prohibitions don't apply; non-public database | Discriminatory surveillance; AI lie detectors at borders; AI-assisted pushbacks | Critical
Predictive Network-Level Policing | Individual prediction banned; group/network analysis not covered | Pattern analysis tools used as de facto predictive policing with no oversight | High
Export of Banned AI Systems | Not covered — territorial application only | Systems banned in EU sold to authoritarian governments targeting the same diaspora populations | Critical
Schrems III Invalidation Scenario | No contingency planning in AI Act | If CJEU invalidates DPF, all EU institutions using US AI cloud face immediate legal exposure | Structural
Real-Time Water/Energy Disclosure | EED requires reporting but not real-time per facility | Citizens cannot access current data for the data centre on their doorstep affecting their water supply | High
Standalone AI Models Pre-Deployment | Regulatory void between model and system definitions | Powerful models trained and distributed without triggering any Act obligations at all | Structural
Autonomous Cyber Weapons | Not covered | State-level offensive cyber AI operates entirely outside any EU framework | Critical

What This Means for You

Your rights as a Belgian EU citizen

The concrete, everyday implications of the Act's structural failures — for your data, your job, your environment, and your democracy.

Your Data at Google / AWS Belgium

  • Physically stored in Belgium — but legally subject to US CLOUD Act jurisdiction
  • A US court order can force the company to hand your data to US authorities
  • You receive no notification; your consent is not required or sought
  • This may violate GDPR Article 48 — but companies face an impossible compliance conflict
  • The EU AI Act says nothing about this structural vulnerability

Your Job Application & Workplace

  • AI hiring tools are “high-risk” — but self-assessed by the company that built them
  • You have no independent right to demand an audit of how the system scored you
  • AI-powered productivity monitoring and keystroke logging remain largely unregulated
  • Emotion recognition at work is banned; other forms of AI surveillance are not
  • Night work is now legal across Belgian sectors — combined with unregulated AI scheduling

Your Water & Local Environment

  • 69% of Europeans worry about data centres impacting their local water supply (2025 poll)
  • Data centres draw heavily from local aquifers and rivers for server cooling
  • The EED mandates reporting — but not real-time public access per facility
  • Groundwater drawdown and thermal discharge are not specifically regulated by the AI Act
  • No binding water consumption caps; only an unenforceable voluntary industry pledge

Your Democracy & Future Rights

  • The most powerful AI tools — surveillance, targeting, profiling — are the least regulated
  • A government can deploy AI mass surveillance under “national security” with no AI Act consequences
  • The Digital Omnibus is narrowing your GDPR rights while public attention is elsewhere
  • High-risk AI rules affecting credit, jobs, and services delayed to late 2027
  • Schrems III could invalidate data transfer frameworks — leaving a total legal vacuum

Final Assessment

The Architecture of Regulatory Capture

The EU AI Act is not a failure of intention. It is a case study in regulatory capture at scale. The original draft was stronger. Industry lobbying — documented at €151 million per year by 2025, a 33.6% increase in two years — systematically removed the teeth. Member state governments, led by those most dependent on US tech investment, pushed for exemptions that protect company interests over citizen rights. The US Trump administration directly pressured the EU not to "overregulate."

What emerged is a law that bans practices primarily harmful to authoritarian states that EU governments were not planning to implement anyway; creates enough regulatory complexity to impose costs on European AI startups while entrenching large incumbents; exempts precisely the sectors where AI causes the most harm to citizens; and is now being actively weakened through the Digital Omnibus before a single enforcement action has been completed.

The Belgian citizen's relationship with AI as regulated by this Act in 2026 is this: you are protected from social scoring you were never at risk of, while being unprotected from the surveillance tools already deployed on your data, your job application, your water supply, and your border crossing. Your data is legally accessible to US intelligence agencies. Your local data centre does not have to publish real-time water or energy consumption you can access. Palantir-type platforms can operate in your country's security infrastructure with no AI Act transparency requirements. And the Commission is proposing to make all of this worse.

“This is not the gold standard. It is a framework whose gaps are more consequential than its rules.”