Regulation (EU) 2024/1689 · In Force 1 Aug 2024 · Analysis: April 2026
The EU AI Act — Every Gap, Every Loophole, Every Failure
A Complete Critical Analysis with Belgian Citizen Impact
“The AI Act brands itself as the first-ever legislation that uniformly regulates AI across all sectors. This branding appears not to be one hundred per cent accurate.” — TU Delft academic research on AI Act exemptions, 2024 [^1]
The EU Artificial Intelligence Act applies a risk-based classification system: unacceptable risk (banned), high-risk (tightly regulated), limited risk (transparency obligations), and minimal risk (largely unregulated).
On paper, it bans manipulative AI, social scoring, most mass biometric surveillance, and emotion recognition at work and school. In reality, as this report documents, the law is riddled with exemptions, self-certification escape hatches, enforcement gaps, and jurisdictional blind spots, and it is already being actively dismantled from within by the European Commission through the Digital Omnibus proposals of November 2025.
It does not cover the military. It does not meaningfully bind foreign tech companies under CLOUD Act jurisdiction. It does not protect migrants and asylum seekers. It does not mandate real-time public disclosure of energy and water consumption by AI data centres. It does not protect workers from workplace surveillance. It does not address the structural dependency of EU institutions on US-owned AI platforms like Palantir. And it is being delayed and weakened under direct lobbying pressure from US tech companies and the Trump administration.
What follows is the most complete critical map of every significant weakness, gap, and failure in the EU AI Act — with direct analysis of what that means for you as a Belgian citizen.