
The AI Act was sold as Europe’s attempt to regulate powerful AI systems before they became too deeply embedded in society.

Now, before the most consequential obligations even fully apply, Europe is already softening, delaying and simplifying under pressure from competitiveness arguments.

Summary: Europe has not abandoned the AI Act, but it has blinked by delaying key high-risk AI obligations. This is a tactical win for industry and a real win against sexual deepfake abuse, but a strategic warning for public-interest AI governance. For society, the risk is a longer period where AI systems shape rights, opportunities and trust before accountability fully arrives.

by ChatGPT-5.5

The latest EU AI Act development is not a repeal, but it is a meaningful retreat. EU governments and European Parliament negotiators have reached a provisional agreement to simplify and delay parts of the AI Act, especially the rules for high-risk AI systems. The agreement still needs formal approval, but politically it signals a shift: Europe is moving from “we will regulate AI before it embeds itself everywhere” toward “we will regulate, but only once the compliance machinery is easier for industry to absorb.”

The headline change is the delay. High-risk AI obligations that were due to apply on 2 August 2026 will now be pushed back: to 2 December 2027 for stand-alone high-risk AI systems, and to 2 August 2028 for high-risk AI embedded in regulated products. These are not marginal systems. They include AI used in areas such as biometrics, critical infrastructure, law enforcement, employment, education and other contexts where errors, bias, opacity or automation can materially affect people’s rights and life chances.

There are also important carve-outs and simplifications. Machinery will be excluded from direct AI Act applicability where sectoral rules already apply, responding to concerns from major industrial players such as Siemens and ASML. The agreement also tries to reduce overlap between the AI Act and existing product-safety frameworks, and asks the Commission to provide guidance to reduce compliance burden for operators covered by sectoral legislation.

But the package is not simply deregulation. The negotiators added a new prohibition on AI systems that generate non-consensual sexual or intimate content, including child sexual abuse material. Reuters links this to recent controversies around sexually explicit deepfakes and “nudifier” apps. Mandatory marking or watermarking of AI-generated output is also moving forward, while the Commission has opened consultation on transparency guidelines requiring people to be informed when they interact with AI systems or are exposed to certain AI-generated or manipulated content.
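To make the marking obligation concrete, here is a minimal, purely illustrative sketch of what machine-readable disclosure of AI-generated content can look like. The schema and function names below are invented for this example; real deployments would follow Commission guidance and emerging standards such as C2PA rather than an ad-hoc format.

```python
import hashlib

def mark_ai_output(text: str, generator: str) -> dict:
    """Attach a hypothetical machine-readable AI-disclosure label to text."""
    return {
        "content": text,
        "ai_disclosure": {
            "generated_by": generator,
            # Hash binds the label to this exact content.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

def is_ai_marked(record: dict) -> bool:
    """Check whether a record carries an intact AI-disclosure label."""
    label = record.get("ai_disclosure")
    if not label:
        return False
    # Recompute the hash: a mismatch means the content was silently edited
    # after labelling, so the disclosure can no longer be trusted.
    digest = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return label["sha256"] == digest

record = mark_ai_output("A synthetic paragraph.", "example-model")
print(is_ai_marked(record))
```

The point of the sketch is the design choice, not the code: a useful disclosure label must travel with the content and be verifiable, so that downstream platforms can detect both unmarked AI output and labels that have been detached from what they originally described.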

So: is this a win or a loss?

ChatGPT’s honest answer is: a short-term win for business, a partial win for some victims of AI abuse, but a strategic loss for the original ambition of the EU AI Act.

It is a win for companies that were facing unclear standards, overlapping obligations, fragmented enforcement and rising compliance cost. There is a legitimate argument that rules should not become operational before the standards, guidance, templates, sandboxes and supervisory infrastructure are ready. Badly implemented regulation can produce box-ticking, legal uncertainty and a cottage industry of consultants, rather than meaningful protection. In that sense, some delay may be defensible.

It is also a win that the EU moved decisively on non-consensual sexual deepfakes. This is one of the clearest categories of AI harm: cheap, scalable humiliation and abuse, disproportionately targeting women, girls and children. A ban here is not “anti-innovation”; it is basic civil protection.

But the wider political signal is troubling. The AI Act was sold as Europe’s attempt to regulate powerful AI systems before they became too deeply embedded in society. Now, before the most consequential obligations even fully apply, Europe is already softening, delaying and simplifying under pressure from competitiveness arguments. That may be pragmatic, but it weakens the EU’s credibility as a rights-first regulator.

The core problem is the timing mismatch. AI deployment is accelerating now. AI systems are already being used in hiring, education, security, fraud detection, customer assessment, healthcare triage, public administration, content moderation, insurance and policing-adjacent contexts. Delaying high-risk obligations means society gets more deployment before it gets enforceable accountability. That creates a dangerous “governance gap”: the harms arrive in real time, while the protections arrive after the market has already normalized the systems.

The status quo is therefore not a clean victory or defeat. It is a compromise shaped by three forces: the EU’s desire to remain the global AI rule-setter; industry’s insistence that Europe is drowning itself in red tape; and the uncomfortable fact that public institutions are often slower than the technologies they are trying to govern. The result is a more business-calibrated AI Act.

For society as a whole, the consequences are significant.

First, citizens may face a longer period of exposure to high-risk AI without the full protection originally promised. That matters most for people who cannot easily opt out: job applicants, students, welfare recipients, patients, migrants, workers, children and people subject to biometric or automated security systems.

Second, trust may suffer. The EU has repeatedly claimed that trustworthy AI is its competitive advantage. But trust is not built by announcing rights and then postponing the mechanisms that make those rights real. If citizens see AI systems spreading faster than safeguards, they may conclude that regulation is performative.

Third, large incumbents may still benefit more than European start-ups. The simplification rhetoric is framed around helping European innovation, but global Big Tech also benefits from delays and looser compliance burdens. The companies with the largest data, compute, distribution channels and lobbying capacity are usually best positioned to exploit regulatory breathing space.

Fourth, private governance will matter more. If public law is delayed, then contracts, procurement rules, technical standards, audit rights, insurance requirements, internal governance and sector-specific professional norms become more important. For publishers, universities, healthcare providers, financial institutions and public bodies, the lesson is clear: do not wait for the AI Act to save you. Build your own provenance, documentation, testing, monitoring and accountability requirements now.

Fifth, AI regulation risks becoming permanently provisional. The greatest danger is not one delay. The danger is a pattern: every time enforcement approaches, industry argues that the rules are too burdensome, policymakers blink, and the compliance deadline moves again. If that happens, the AI Act becomes a political symbol rather than a hard governance instrument.

ChatGPT’s judgment: this is not yet a catastrophic loss, but it is a warning light.

It becomes a win only if the delay is used to build serious implementation capacity: clear standards, strong national authorities, meaningful AI Office supervision, practical guidance for companies, real testing infrastructure, enforceable transparency rules, and procurement requirements that reward responsible systems. It becomes a loss if the delay merely gives powerful companies more time to entrench risky systems while civil society, workers and consumers wait for protections that keep moving into the future.

The EU AI Act still matters. Even watered down, it remains one of the world’s most important AI regulatory frameworks. But its authority now depends less on the elegance of the legal text and more on whether Europe can resist the next round of pressure. The decisive question is no longer whether Europe can write AI rules. It is whether Europe can enforce them before AI becomes too embedded, too infrastructural and too politically expensive to discipline.