September 23, 2025 | 3 minutes

AI and Europe part 2: the playing field

The AI Act makes clear what is and is not allowed in Europe. In Part 1, we wrote about how the EU uses that law to control the behavior of AI, not the technology itself. The higher the risk, the stricter the conditions. Accountability for what AI does is central, but even the best-designed rules remain theory if practice has nowhere to run. You can draw up the playbook as tightly as you like; without a field, a ball, and shoes, all you can do is watch from the sidelines.

AI, in fact, requires more than principles. It requires computing power. Not in the abstract, but concretely: specialized chips, energy, data centers, cloud environments, and a digital infrastructure that connects it all. Without access to that physical engine, there is no training, no application, and no scale. And that is precisely where Europe is lagging behind.

A language model, medical image recognition, or a predictive algorithm does not work without millions of calculations per second. Those run on powerful hardware, in the right environment and with the right security. Even the everyday use of AI, think of a chatbot or a customer-service robot, depends on that underlying computing power. In Europe today, that infrastructure is often scarce, expensive, or dependent on others. That is a painful constraint, especially for public institutions and young companies.

In Part 1, I wrote that the AI Act demands a lot from organizations: explaining what your system does, accounting for where your data comes from, assessing whether your application falls into a risk category. But underneath all that lies a more fundamental question: can the system run in a European context at all?

Former ECB president and former Italian prime minister Mario Draghi warned earlier this year, in his report on European competitiveness and the business and investment climate, that without digital infrastructure the EU is simply not an equal player. He called the lack of computing power not a technical inconvenience but a strategic vulnerability. The message was clear: those who want to compete in AI must not only know what is allowed, but above all be able to act. That call is now being translated into policy through the Cloud and AI Development Act, part of the broader AI Continent strategy. The Act is meant to ensure a structural expansion of European computing power: a tripling of data center capacity within five to seven years, with an eye on sustainability, distribution, and speed. It also aims to make it possible to host public AI applications within Europe, without having to divert to platforms outside the EU's own jurisdiction.

So the law is about more than cloud. It touches on geopolitics, innovation, energy, and the preconditions for credible AI policy. Because as long as hospitals, governments, and startups lack affordable, reliable compute, responsible innovation quickly becomes a luxury item. At the same time, Brussels is trying to look ahead. In addition to traditional cloud capacity, the EU is investing in quantum technology, post-exascale computing, and hybrid infrastructures: high-tech in every sense. Not because everything will be different tomorrow, but because strategic technology requires forethought now. Those who want to secure their position should not wait until the game has already started.

What is at stake? The credibility of European AI policy, because rules without infrastructure are like promises without budgets. For startups, it determines whether you can test, train, and grow without having to move elsewhere. For governments, it means keeping a grip on where sensitive systems run. And for citizens, it means whether public technology is truly public.

The AI Act set the bar for responsible use. This Act should ensure that that bar can actually be met. The rules of the game are in place, but without a field, there is no game.