Artificial intelligence (AI) is revolutionizing the global economic landscape, significantly impacting business competitiveness and government strategies.
At a systemic level, AI is a strategic asset for economic growth, national security, and geopolitical influence. States, in particular, aim to integrate intelligent technologies into key sectors—healthcare, energy, transportation, and defense—to strengthen national competitiveness and create an environment conducive to innovation. This scenario has fueled intense global competition, especially since generative AI demonstrated its transformative power with the public release of ChatGPT at the end of 2022.
AI Regulation: Why It Matters
Due to its disruptive impact on the economy and society, artificial intelligence raises regulatory questions that go well beyond the (legitimate) concerns about automation and its implications for the labor market.
Unlike traditional technologies, AI systems can learn and adapt on their own, operating with increasing autonomy and making their behavior harder to control. This autonomy, combined with a degree of decision-making opacity (the so-called black box problem), raises important questions about responsibility, transparency, and trust, especially in critical sectors.
Moreover, the spread of generative AI has highlighted how models can inherit and amplify biases present in their training data, leading to questionable and sometimes discriminatory outputs. This phenomenon is at the heart of the debate on AI ethics.
Security and privacy implications are equally central, revolving around data protection and data sovereignty. What information is used to train models? Has it been collected legally and in compliance with existing regulations (e.g., the GDPR)? Has it been anonymized? And what happens to the data users enter as prompts? Where and how is it stored, analyzed, or reused? A lack of transparency on these points can compromise individuals' privacy, posing risks of improper or unauthorized use of personal information.
The Impact on Innovation and Regulatory Competition
While the need for clear and shared rules is undeniable, it is equally important to ensure that rigid regulation does not hinder innovation and internal competitiveness.
In a market that—despite numerous exceptions—remains largely globalized, imposing strict limits on the development and implementation of new technologies could have counterproductive effects, penalizing organizations operating in highly regulated environments while benefiting competitors with greater freedom. The challenge, therefore, is to strike a balance between protecting fundamental rights and values and fostering innovation and economic growth.
Given the interconnected nature of the modern world, finding this balance is no easy task. Although different regions adopt varying regulatory approaches, global connectivity makes AI accessible virtually everywhere (barring specific bans, such as Italy's temporary 2023 block on ChatGPT and its current block on DeepSeek), creating additional challenges.
This situation fosters regulatory competition, where individual states and supranational entities seek to attract investments and talent through more or less permissive regulations. In other words, a genuine race for AI supremacy has emerged over time, in which regulation serves as a strategic tool for achieving systemic competitive advantages.
Given AI's global nature, it would be reasonable to promote international cooperation to harmonize regulations and define shared standards. However, the current geopolitical scenario, characterized by complex relationships between the major players (the United States, China, and Europe), appears to be producing a fragmented patchwork of rules, difficult to manage and lacking unified global governance. Indeed, in recent years the notion of an AI Cold War between the US and China has appeared frequently in the press, with intense debate over how the two powers are vying for dominance in the sector.
A Look at AI Regulation Around the World: USA, EU, and China
The global regulatory landscape is far from uniform, with differing and often conflicting approaches across regions. Moreover, alongside regulations designed specifically for AI (e.g., the AI Act), there exists a vast body of law that, while not originally conceived with AI in mind, applies to it in full. The textbook example is Europe's GDPR.
United States: A Decentralized and Evolving Approach
The United States, home to many leading AI companies, has so far adopted a rather decentralized approach to AI regulation, with limited federal intervention (e.g., the CHIPS and Science Act). The situation is constantly evolving, however, and according to the Organisation for Economic Co-operation and Development (OECD), the US currently has 82 policies related to AI development and deployment.
Several American states have begun legislating on the matter. Notable examples include California, with the California AI Accountability Act, and Colorado, which recently passed the Colorado AI Act, the first comprehensive state-level AI law in the US, set to take effect in 2026. The law imposes specific obligations on developers and deployers of AI systems, with a particular focus on high-risk use cases.
European Union: The Centralization of the AI Act
The European Union has an ambitious goal: to create a harmonized and coherent regulatory framework that promotes reliable, efficient, and responsible AI that respects citizens' rights. The AI Act, which came into force on August 1, 2024, is the tool designed to achieve this objective.
The regulation follows a risk-based approach, classifying AI systems according to the level of risk they pose to people's rights and safety. Four risk levels are defined: unacceptable (prohibited outright), high, limited, and minimal. The tier a system falls into determines the obligations imposed on providers of AI systems (including general-purpose models) operating in the European single market.
This approach aims to balance the protection of citizens' rights and security with the need to foster innovation and competitiveness in the sector. High-risk AI systems are subject to stringent requirements for data governance, risk management, and technical standards, while lower-risk systems face fewer restrictions, primarily focused on transparency and information disclosure, avoiding excessive technical burdens. This allows for innovation while maintaining high safety and protection standards.
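To make the tiered logic concrete, the sketch below models it as a simple lookup from risk tier to obligations. This is a minimal, illustrative Python example, not drawn from the regulation's text: the tier names follow the Act's four categories, while the obligation lists are loose summaries of the kinds of duties attached to each tier.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restrictive."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g., social scoring)
    HIGH = "high"                  # e.g., AI used in hiring or credit scoring
    LIMITED = "limited"            # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"            # e.g., spam filters, AI in video games

# Loose summaries of the obligations attached to each tier; the actual
# requirements are spelled out in far greater detail in the regulation.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["placing on the EU market is prohibited"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance and quality controls",
        "technical documentation and logging",
        "human oversight and conformity assessment",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: ["no mandatory requirements (voluntary codes of conduct)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations a provider faces at a given tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(obligations_for(tier))}")
```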
The AI Act's implementation follows a roadmap with specific milestones. The rollout began on February 2, 2025, with the entry into force of the bans on prohibited AI practices, and continues in stages (obligations for providers of general-purpose AI models apply from August 2, 2025) until August 2, 2027, when the regulation becomes fully applicable.
China: Great Ambitions and a Progressive Approach
Recently, DeepSeek launched its R1 model, an LLM capable of shaking up a market previously dominated by American companies like OpenAI, Google, and Anthropic. DeepSeek stands out for its open-source approach and an architecture requiring fewer computational resources than competing models, showcasing China’s growing autonomy in AI.
China's AI development plan is highly ambitious, aiming to become the world’s leading AI innovation hub by 2030. To achieve this, the government has implemented various policies and regulations to control AI development, growth, and deployment while maintaining a centralized approach. However, China's regulatory model is characterized by phased implementation, with regulations introduced progressively and evolving into an increasingly holistic legal framework.
Among the most significant measures are those governing AI algorithms and the Interim Measures for the Management of Generative Artificial Intelligence Services, which took effect on August 15, 2023. The latter aim to monitor and control the use of generative AI, with an emphasis on managing risks related to harmful content.
Does European Regulation Hinder Competitiveness?
The European Union, in particular, operates within an inherently complex framework: its model requires balancing regulatory needs, economic interests, and often differing (if not diverging) strategic visions among member states. The push toward centralization and synergy, already evident in other digital domains (such as ViDA, the VAT in the Digital Age initiative that standardizes electronic invoicing and digital VAT reporting across Europe), reflects the EU legislator's intent to reduce internal fragmentation, which hinders innovation and hands global competitors a significant advantage.
At first glance, the AI Act strengthens Europe's leadership in AI regulation without doing much to consolidate its role as a global innovation powerhouse. Rather than acting as a brake, however, the regulation reflects a recognition that a common framework is essential to compete with players who entered the AI governance race earlier and with greater momentum.
Like any regulation, it may be imperfect, but so far the real obstacle has, for the most part, not been regulatory in nature: it has been the lack of a clear political strategy and of adequate investment. With the exception of Mistral AI, the French startup that has developed some of the world's most capable LLMs, Europe has struggled to secure a leading position in the sector, hampered by funding that pales in comparison to that of the overseas giants. The American Stargate project, with its announced $500 billion budget, is a tangible demonstration of this gap.
A positive signal in this direction, however, emerged from the recent AI Action Summit in Paris, which showcased a Europe ready to take on a leading role. Among the most significant announcements were InvestAI, a €200 billion initiative dedicated to AI development, and the establishment of four AI gigafactories to expand the EU's computational capacity. France, for its part, raised the stakes further with a €109 billion investment plan which, considering the relative sizes of the two economies, serves as a counterweight to the Stargate project.
Whether this political and economic momentum will translate into tangible results capable of closing the gap with today's leaders remains to be seen. Europe now has not only the opportunity to lead in AI governance, but also the possibility, and the responsibility, to become a leader in technological development as well.