The debate about the future of artificial intelligence (AI) is coming to a head. At the "AI Action Summit" in Paris, it became clear that the political differences between Europe and the USA could influence the entire race for the technology. While the EU is pursuing ambitious goals, the USA is warning against too much regulation. Who will take the global lead?

Europe on the hunt for AI leadership

The "AI Action Summit" was an exciting but also contentious meeting. The European Union, represented by Commission President Ursula von der Leyen, presented itself as confident about Europe's future as a center of AI development. She spoke of a "competition for leadership" and insisted that Europe was not yet out of the race: "Europe wants to become the leading continent in the development of artificial intelligence," said von der Leyen.

The EU wants to lay the groundwork for this with billions in investment in AI research and infrastructure. Particularly noteworthy is the "InvestAI" initiative, which is intended to mobilize 150 billion euros in private investment and to be topped up by the EU with a further 50 billion euros, for a total of around 200 billion euros. Whether the EU can actually take the lead in AI remains questionable, however. Critics argue that the bureaucratic burden in Europe is hampering the rapid development of the technology.

USA: "Too much regulation kills innovation"

The USA struck a different tone at this summit. In his speech, US Vice President J.D. Vance stressed the importance of the "free development" of AI and warned against overly strict regulation. "Excessive regulation could slow down progress in AI development," said Vance. The American position is clear: the USA sees itself as the global leader in AI development, and this status is not to be jeopardized.

The US refused to sign the summit's final declaration, signaling a clear rejection of far-reaching international oversight. Instead, Vance underlined the position that the US intends to "win" the AI competition - even without the EU in the equation.

The big question mark: How much regulation does AI need?

With its "AI Act", the EU has launched a legal framework for the regulation of AI. But there are concerns here too. Will Europe put the brakes on innovation with strict rules? Opinions differ in the debate about the right way to deal with the technology: the EU is focusing on user trust and security, while the USA is backing innovation without regulatory brakes.

Europe wants to ensure that AI does not become a tool of authoritarian regimes. This was a key issue that also came up repeatedly in the talks with China and the USA. In Europe, the view is that regulation is necessary to prevent abuse - whether in the form of censorship or violations of user privacy. But is this the right way to remain truly competitive?

Global AI declaration: key countries focus on sustainability - USA and UK remain on the sidelines

At the end of the AI Action Summit, 60 countries signed a declaration to promote "inclusive and sustainable artificial intelligence for people and the planet". Remarkably, both the USA and the UK declined to sign, despite their prominent roles at the previous AI Safety Summit. The UK stated that it would only sign agreements that were in its own interests. The signatory states committed to advancing AI in line with the Sustainable Development Goals, including by improving access to AI technologies, promoting open, ethical and safe AI, and developing environmentally friendly, resource-efficient technologies. They also want to create a platform for AI in the public interest, tackle digital inequality and strengthen the dialogue on the environmental impact of AI. Another goal is a network that examines the impact of AI on labor markets and education more closely.

The challenge of balance

The EU's big idea - the regulation of artificial intelligence - could do more harm than good if it is not implemented properly. It is no secret that Europe often lags behind the US when it comes to innovation. An overly bureaucratic approach could limit the scope for AI development and set Europe back in the race. On the other hand, the risks of completely unregulated AI cannot be ignored. International cooperation may be needed here to set ethical and safety standards without stifling the spirit of innovation. Europe must decide: do we really want to lead the AI world, or merely end up as the "laggard" that sets the rules?
