EU could redefine global AI norms by marrying technological excellence with democratic safeguards.
If it fails, it risks entrenching its role as a client of foreign tech giants, regulating from the sidelines while others set the pace.
Making Europe an AI Continent: Between Regulation, Competitiveness, and Sovereignty
by ChatGPT-4o
The European Union has long positioned itself as a global standard-setter in technology. With the adoption of the Artificial Intelligence Act in 2024, Europe became the first jurisdiction to craft a comprehensive legal framework for AI. Yet the European Parliamentary Research Service’s September 2025 briefing, Making Europe an AI Continent, makes it clear that regulation alone will not deliver leadership. The EU’s ambition to become a true “AI continent” requires not only trust and safeguards but also bold moves in investment, infrastructure, talent, and uptake.
A Vision Beyond Rules
The Commission’s April 2025 AI Continent Action Plan represents the pivot: a shift from focusing almost exclusively on setting rules toward driving competitiveness. Its five objectives—building large-scale data and computing infrastructure, improving access to data, fostering adoption in strategic sectors, strengthening skills and talent, and streamlining regulatory compliance—form a roadmap that stretches across technical, economic, and social domains.
Concrete measures such as the creation of AI factories (clusters of computing power, data, and talent) and the proposed AI gigafactories (scaled-up facilities integrating 100,000 chips) illustrate a willingness to invest in infrastructure. Parallel initiatives like the Apply AI strategy, the AI Skills Academy, and the forthcoming Cloud and AI Development Act aim to ensure that innovation and use can spread across Europe’s economy and public sector.
Persistent Structural Weaknesses
Despite these efforts, the EU remains far from technological leadership. The figures are sobering: in 2024, fewer than 14% of EU enterprises used AI, compared with 58% of US small businesses by mid-2025. Among SMEs the gap is even sharper: 11% adoption in Europe versus more than half in the United States. Public administrations are also behind, with only 27% of local and regional bodies reporting adoption.
Investment figures underscore the disparity. Between 2013 and 2024, US private AI investment reached $109.1 billion, dwarfing the EU’s combined $19.42 billion. The imbalance is visible in patents (Europe accounts for just 2.77% of AI patents worldwide), in model development (only three notable European models in 2024, all from Mistral AI), and in dependence on foreign cloud and semiconductor providers. Amazon, Google, and Microsoft still control nearly 70% of Europe’s cloud market; the largest EU-based provider holds less than 2%.
This dependence is not just economic. It exposes Europe to strategic vulnerabilities in sovereignty and security.
Geopolitical Competition
The external environment only sharpens the challenge. The United States under the Trump administration has moved decisively toward deregulation, with its America’s AI Action Plan stressing competitiveness and rejecting perceived “ideological bias.” China continues to push forward, despite export restrictions on advanced chips, by developing models such as DeepSeek-R1 at lower costs. The UK has aligned more closely with the US approach through the Tech Prosperity Deal and large American investments, while reframing its AI “Safety Institute” into a “Security Institute.”
In this context, the EU’s insistence on “trustworthy AI” carries symbolic power, but the Union risks being outpaced in the race for scale and speed. Partnerships with India, Japan, Singapore, Canada, and South Korea show recognition that Europe cannot go it alone, yet the gap with the US and China remains daunting.
Implementation and Governance
Turning the plan into reality requires multi-level governance. The Commission and the EuroHPC Joint Undertaking oversee infrastructure projects, but Member States control siting, co-funding, and national strategies. Local authorities influence permitting, housing, and energy supply, all critical factors for AI gigafactories. The Parliament’s role will be one of scrutiny, especially as upcoming legislation such as the Cloud and AI Development Act moves forward.
Stakeholders are divided. Industry bodies such as DIGITALEUROPE, EUTA, and ITI welcome simplification and greater investment but call for outcomes that move beyond R&D into real business scale. SME groups stress the skills gap as the main barrier to adoption. Civil society organizations like EDRi warn that simplification should not mean weakening the AI Act or sacrificing environmental and human-rights standards, particularly given the resource demands of data centres and mineral extraction.
Outlook: Can Europe Deliver?
The briefing’s conclusion is blunt: without much higher levels of investment, deeper capital market integration, and broader uptake, the EU will not achieve its ambition. The proposed Savings and Investments Union (SIU) is meant to mobilize citizens’ savings into European tech ventures, but its success depends on national governments ceding powers and on citizens’ willingness to channel funds toward high-risk innovation.
Upcoming milestones—publication of the Apply AI strategy and a European strategy for AI in science in October 2025, and a proposal for a “28th regime” to ease scaling across the single market—will test the EU’s resolve. Yet the deeper balancing act is unchanged: Europe must prove it can be both the guardian of rights and the engine of competitiveness.
If it succeeds, it could redefine global AI norms by marrying technological excellence with democratic safeguards. If it fails, it risks entrenching its role as a client of foreign tech giants, regulating from the sidelines while others set the pace.
