Category: News

The Black Sea AI Gigafactory and Romania’s Contribution to Europe’s AI Agenda

Europe is rapidly advancing the development of a continent-wide AI infrastructure, and the numbers clearly reflect the scale of this ambition. Following the launch of the AI Innovation Package in 2024, which broadened EuroHPC’s mandate to fully support AI model development, the EU has already established 13 AI Factories. In 2025, the Commission raised the stakes with InvestAI, a plan aimed at mobilizing EUR 200 billion, mostly private investment, of which up to EUR 20 billion represents the public contribution dedicated to gigafactories.

Together with the Apply AI strategy, Europe is approaching an ecosystem of at least 17 publicly or public-privately funded AI (giga)factories, on top of which private projects are emerging. NVIDIA alone has recently announced a roadmap for building 20 AI factories in Europe, including 5 gigafactories — bringing the company’s active projects on the continent to 22, with another 15 planned by 2030, according to the Centre for European Policy Studies (CEPS). (See the full list of NVIDIA’s projects in Europe here.)

At the local level, at the end of November, the Romanian government approved the memorandum mandating the Ministry of Energy and the Ministry of Finance to coordinate the implementation of the Black Sea AI Gigafactory project — an AI and supercomputing hub worth up to EUR 5 billion. The competition for European funding is open, and Romanian authorities are already preparing the official proposal to be submitted in March 2026.

What better moment to revisit the premises of the AI Gigafactory program and assess Romania’s plans and chances?

From AI Factories to AI Gigafactories: Objectives, Funding, and Perspectives

In February 2025, we reported that the President of the European Commission, Ursula von der Leyen, launched InvestAI, the initiative that includes a dedicated EUR 20 billion fund for building AI Gigafactories (AIGFs). The program aims to develop at least five such “gigafactories” — massive compute infrastructures conceived as an extension of the AI Factories program, capable of training models with hundreds of trillions of parameters by integrating over 100,000 AI chips and highly efficient, automated data centers.

Through InvestAI, together with contributions from member states, the EU seeks to provide researchers, startups, and industry with access to large-scale computing capacity, accelerating the development of advanced AI models and strengthening Europe’s technological sovereignty.

The estimated cost of an AI Gigafactory is EUR 3–5 billion, with funding shared between the public and private sectors. More specifically, public contributions can cover up to 35% of CAPEX, financed by the European Commission and the member states, while the remaining investment and 100% of OPEX come from private companies and investment funds. At the European level, funding dedicated to AI Gigafactories is therefore distributed as follows: 17% from the European Commission, 17% from EU member states, and 66% from the private sector.
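The funding split described above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch only: the 35% public CAPEX ceiling and the even Commission/member-state split come from the article, while the EUR 5 billion CAPEX is simply an example at the top of the stated EUR 3–5 billion range.

```python
# Illustrative sketch of the AI Gigafactory funding split described above.
# The 35% public CAPEX share (split evenly between the European Commission
# and member states) and the private remainder are taken from the article;
# the EUR 5 bn CAPEX figure is just an example value.

def gigafactory_funding(capex_eur_bn: float, public_share: float = 0.35):
    """Return (Commission, member states, private) CAPEX contributions in EUR bn."""
    public = capex_eur_bn * public_share
    commission = public / 2          # ~17% of total CAPEX
    member_states = public / 2       # ~17% of total CAPEX
    private = capex_eur_bn - public  # ~65-66% of CAPEX, plus 100% of OPEX
    return commission, member_states, private

ec, ms, priv = gigafactory_funding(5.0)
print(f"Commission: EUR {ec:.2f} bn, member states: EUR {ms:.2f} bn, "
      f"private: EUR {priv:.2f} bn")
```

For a EUR 5 billion project, this works out to roughly EUR 0.88 billion each from the Commission and member states, with about EUR 3.25 billion left to private investors.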

The AI Gigafactories program is the natural extension of the AI Factories initiative, funded through EuroHPC with nearly EUR 2 billion. AI Gigafactories will build on this model but at a far larger industrial scale, creating the infrastructure needed for training advanced AI models.

The Countries Entering the Race for AI Gigafactories. What Comes Next?

In 2024–2025, the European Commission launched a “call for expressions of interest” to assess market readiness. A total of 76 proposals were submitted, covering 60 sites across 16 member states and indicating total intended investments of more than EUR 230 billion. Given the scale and complexity of the submissions, the official call for AI Gigafactories (AIGFs) has recently been postponed to 2026, instead of late 2025 as originally planned.

Although the European Commission has not officially disclosed the member states that submitted expressions of interest by the 20 June 2025 deadline—citing confidentiality reasons—several countries have announced their participation publicly. Among them are Romania, Austria (Vienna), Czechia (Prague), Spain (Mora la Nova), as well as Germany and the Netherlands, according to Data Center Dynamics.

Respondents to the call include data center operators, telecom companies, energy providers, European and global technology partners, as well as investors. According to the Commission, respondents collectively expect to procure at least three million GPUs.

The call was launched to assess interest and create an informal registry of candidates. Between December 2025 and early 2026, the European Commission will engage with respondents to refine the proposals: technical details, location, specifications, funding structure, and sustainability plans.

  • On 4 December 2025, the European Commission signed a Memorandum of Understanding (MoU) with the European Investment Bank and the European Investment Fund to support the development and financing of AI Gigafactories.
  • What’s next? The official call for proposals will be launched in early 2026 (estimated submission deadline: March 2026).
  • AI Gigafactories are expected to become operational by the end of 2028.

What Are Romania’s Plans for the Black Sea AI Gigafactory?

The Black Sea AI Gigafactory project proposes installing more than 100,000 AI accelerators in two stages: Phase I in Cernavodă and Phase II in Doicești, locations chosen for their energy advantages and digital infrastructure. Powered by up to 1,500 MW of zero-emission energy, primarily nuclear, the gigafactory would position Romania as a strategic supercomputing hub in Europe.

Cernavodă provides direct access to nuclear power, along with excellent fiber connectivity and submarine cable links, while Doicești offers the advantage of an “industrial site with potential for SMR co-location, hybrid cooling, and integration into the national high-speed communications network” (source). The project also includes a strong cybersecurity component, supported by the expertise built around the presence of the European Cybersecurity Competence Centre (ECCC) in Bucharest.

The memorandum mentioned at the beginning of this article—approved by the government on 27 November—places the Ministry of Energy and the Ministry of Finance in charge, with support from the Authority for Digitalization of Romania, of coordinating preparation, negotiations, and the involvement of all key public, private, academic, and research stakeholders in bringing the Black Sea AI Gigafactory to Romania and implementing it.

What else does the memorandum reveal?

  • The Ministry of Energy becomes the main coordinator of the project and assumes primary responsibility for data center policy, leading the development and oversight of strategies in this field.
  • Romania could attract additional funding through European instruments such as the European Innovation Council Fund, TechEU Scale-up, the EIB Group’s “European Tech Champions Initiative,” or InvestEU.
  • The gigafactory will integrate advanced energy-efficiency technologies—from sustainable cooling and renewable energy via PPAs to heat reuse and grid-resilience measures—to meet national and EU climate objectives.
  • Selection criteria also include responsible water usage and the adoption of circular-economy principles.
  • Romania is running a major investment program in nuclear energy, including the refurbishment of Unit 1 at Cernavodă (extending its lifetime by another 30 years), the construction of Units 3 and 4, and the development of advanced nuclear technologies such as the NuScale SMR at Doicești.
  • The gigafactory will boost competitiveness by providing advanced compute capacity to industry, startups, SMEs, and researchers; create jobs; develop new skills; and, through new cross-border fiber corridors and trusted compute services, strengthen regional connectivity—complementing EuroHPC infrastructure and supporting interoperable collaboration between countries.

We are encouraged that, regardless of the European Commission’s decision after the March 2026 stage, Romania does not plan to pause the project. On the contrary: in the event of a negative response, authorities are prepared to recalibrate the plan, attract investors, and move forward with implementation, with technical support from the World Bank Group and other international financial institutions. Romania is not stepping back from its ambition to build a regional AI hub — and that may be the best industry news as we enter 2026!

Server racks storing AI datasets for simulation, training and predictive tasks

Data Center Cost in 2025-2026? Liquid Cooling Drives Budgets Even Higher

The global data center boom, fueled by the generous budgets of hyperscalers and major investors, is steadily pushing construction costs upward. Analyses show that developers are already facing the challenge of working with a “final price” that is impossible to fix from the start. Estimating data center budgets is becoming increasingly difficult as material prices fluctuate, critical equipment has delivery times exceeding a year, new technologies entail additional costs, and every project stage involves financial risks.

In early November, Turner & Townsend published the Data Centre Construction Cost Index 2025-2026, providing a clear view of the market’s actual state. This is currently the only index exclusively dedicated to data center construction costs, making it an essential reference for investors, developers, and operators. Its analysis helps us understand the directions the market is heading and the implications for future projects.

Liquid Cooling vs. Traditional Data Centers: Understanding the Cost Gap

According to the Turner & Townsend report, costs are rising, and the transition to liquid cooling systems involves significantly higher expenses compared to traditional (air-cooled) data centers. In numbers, the reality looks like this: construction costs for traditional cloud data centers increased by 5.5% in 2025 compared to last year. However, this growth is moderate compared to the 9% jump reported in 2024, indicating that the market is beginning to stabilize. The average construction sector inflation of 4.2% is felt less acutely in the data center segment as local supply chains develop, easing price pressures.

The situation is different for data centers using liquid cooling systems designed for AI workloads. In the U.S., construction costs for these facilities are, on average, 7–10% higher than those of traditional data centers with the same IT capacity. These high-density centers are more complex to build, integrate significantly more expensive technical and cooling systems, and see rapidly increasing demand in markets such as the U.S., the U.K., Europe, and East Asia, as companies seek to support increasingly intensive AI workloads.

Density Pays Off in the Long Run

The same Turner & Townsend report shows that, precisely because of their density, AI data centers can be a more cost-effective choice in the long term. They often feature more flexible designs, which can reduce project costs. Additionally, higher density allows for a smaller building footprint, and “mega campuses” designed to run AI models across multiple interconnected buildings bring significant economies of scale.

On the other hand, traditional cloud data centers require complex measures to ensure service continuity (technical redundancy measures), which considerably increase construction costs and, in the long term, operational expenses as well.

Developers’ Insights

How do developers view the 2025 situation? Nearly half of respondents reported construction cost increases of 6–15%, while 21% say costs have risen more than 15%. Additionally, under inflationary pressure, the majority (60%) anticipate further increases of 5–15% in 2026. About 21% are even more pessimistic, expecting inflation to exceed 15% in the coming year.

Major European capitals are climbing the ranks in data center construction costs, approaching the levels of large U.S. cities. Paris and Amsterdam reach $10.8 per watt, comparable to Portland, while Madrid and Dublin surpass U.S. cities such as Atlanta, Phoenix, and Columbus.

Check out the full Turner & Townsend report here.

Future-Proof Data Centers: Hybrid Designs for AI and Cloud Workloads

According to McKinsey, while AI training currently drives the size and scale of data centers, future facilities will be hybrid, combining training workloads, inference tasks, and cloud operations. These centers could surpass in size even the facilities considered large just two years ago.

Today, the time between requesting services and starting construction of a data center can range from 12 to 36 months, depending on type, design, size, and location. Optimizing the construction process can significantly shorten these timelines: for example, a U.S. architecture firm completed the design of a 929 m² data center in Colorado in just 30 days, using an Agentic AI platform – the first project of its kind entirely designed by artificial intelligence.

Such innovations could reduce construction time by 10–20% and deliver similar capital savings, potentially trimming up to $250 billion from the $1.7 trillion in global spending projected through 2030, according to McKinsey analysts.
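A quick check shows the McKinsey figures are internally consistent: $250 billion in savings sits comfortably inside the 10–20% range applied to the $1.7 trillion projection. The sketch below simply restates that arithmetic.

```python
# Rough consistency check of the McKinsey figures quoted above: what share
# of the projected $1.7 trillion in spending through 2030 would the stated
# $250 billion ceiling on savings represent?

projected_spend_bn = 1_700   # USD billion, global spend projected through 2030
max_savings_bn = 250         # USD billion, maximum savings cited

share = max_savings_bn / projected_spend_bn
print(f"Savings share: {share:.1%}")  # ~15%, within the 10-20% range cited
```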

They also recommend several key directions that could transform how data centers are designed and built, including:

  • Designs should allow for phased expansion, modularization, and off-site assembly.
  • Prefabricated and modular solutions currently account for 40–60% of data center components, with some projects using up to 80–85%. These solutions accelerate construction, reduce on-site labor, and improve quality.
  • Generative scheduling tools allow simulation of thousands of scenarios to optimize resources and work sequences, reducing delivery time by up to 20%.

Discover more in McKinsey’s full analysis.

In short, the data centers of tomorrow will get more expensive unless builders embrace innovative technologies and scalable, modular designs. These approaches help developers cut costs, accelerate delivery, and handle complexity more effectively, ensuring the infrastructure is ready for the rising demands of AI and cloud workloads.

From Myths to Metrics: What’s the Real Energy Footprint of the Data Center Industry?

How much power does the infrastructure behind our digital world really consume? From alarmist reports to optimistic forecasts, the energy use of data centers has become a true “gray zone.” Numbers, rumors, analyses, and “myths” abound: some claim that AI will double or even triple global electricity demand by 2030, while others argue that green technologies will balance it all out. The truth? It’s more nuanced than it seems—and lies somewhere in between. In the following lines, we’ll explore it through some of the most relevant recent reports and studies.

The Four-Scenario Perspective of the International Energy Agency (IEA)

According to an IEA report, in 2024 data centers consumed approximately 415 TWh, representing 1.5% of global electricity use. Over the past five years, their energy consumption has grown by 12% annually. The increasing adoption of artificial intelligence is expected to double consumption by 2030, reaching around 945 TWh, equivalent to 3% of global electricity use. Between 2024 and 2030, the energy consumption of data centers will grow by 15% per year, more than four times faster than the average growth across other sectors.
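The Base Case figures hang together arithmetically: compounding the 2024 baseline at roughly 15% per year for six years lands close to the 2030 projection. A minimal sketch, using only the numbers quoted above:

```python
# Arithmetic check of the IEA Base Case quoted above: 415 TWh in 2024,
# growing at ~15% per year through 2030.

base_2024_twh = 415
annual_growth = 0.15
years = 2030 - 2024

projection_2030 = base_2024_twh * (1 + annual_growth) ** years
print(f"2030 projection: {projection_2030:.0f} TWh")
# ~960 TWh -- in line with the "around 945 TWh" the report cites,
# consistent with growth of roughly (not exactly) 15% per year.
```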

AI servers will be the main driver of this growth, accounting for half of the total increase, while traditional servers will contribute 20%. The U.S. and China will generate nearly 80% of the additional demand, with increases of +240 TWh (+130%) and +175 TWh (+170%), respectively, followed by Europe (+45 TWh, +70%) and Japan (+15 TWh, +80%).

However, this represents only the Base Case scenario of the IEA report — the reference or most likely scenario considered by analysts. The agency also outlines three alternative scenarios, reflecting different possible industry evolutions:

  • The Lift-Off Case: accelerated AI adoption, more resilient supply chains, and greater flexibility in location and operations. Result: rapid expansion of data centers. Energy consumption: >1,700 TWh in 2035, +45% compared to the Base Case. Global share: 4.4% of total electricity consumption.
  • The High Efficiency Case: significant progress in energy efficiency at the software, hardware, and infrastructure levels. Result: the same digital/AI demand met with reduced energy use. Energy consumption: approximately 970 TWh in 2035, savings of >15% versus the Base Case. Global share: 2.6% of total electricity consumption.
  • The Headwinds Case: slower AI adoption, local bottlenecks, and strained supply chains leading to delayed data center expansion. Result: stable IT capacity after 2030, limited growth, and increasingly efficient equipment. Energy consumption: about 700 TWh in 2035, <2% of global electricity consumption.

An important note: A March 2025 report by 4E TCP (a program under the IEA), authored by Radu Dudău (President of Energy Policy Group) and Vlad Coroamă (founder of the Roegen Center for Sustainability in Switzerland), shows that in 2023 data centers consumed 300–380 TWh globally (excluding cryptocurrency activity), and by 2030 their consumption is estimated at 600–800 TWh, representing 1.8–2.4% of the global electricity demand. According to the authors, global consumption will not reach 900 TWh in five years due to AI, as this scenario would require investments of USD 9,000 billion in CAPEX alone, plus operational costs. The most realistic scenario involves investments between USD 2,000 and 3,000 billion over the next five years.

The same report also includes a table summarizing most studies estimating the global energy consumption of data centers, as well as a table with studies on the global energy consumption of AI. Read the full report here.

Deloitte and the Impact of GenAI on Energy Consumption

In 2025, data centers account for approximately 2% of global electricity use, equivalent to 536 TWh, according to Deloitte. However, the energy consumption of GenAI data centers is increasing much faster, which could lead to a doubling of global consumption by 2030, reaching around 1,065 TWh. Estimates vary depending on processing and AI efficiency: if optimizations fail to materialize, consumption could exceed 1,300 TWh. Critical consumption for essential components (GPU, CPU, storage, cooling, networking) could reach 96 GW globally by 2026, with AI operations potentially consuming over 40% of that total.

  • The annual energy consumption of AI data centers will reach 90 TWh in 2026, roughly ten times more than in 2022. In the first quarter of 2024, global net additional demand for AI reached ~2 GW, marking a 25% increase compared to the previous quarter.
  • According to Deloitte analysts, if just 5% of daily global searches used GenAI, the required energy would be 3.12 GWh per day and 1.14 TWh per year, equivalent to the electricity used by approximately 108,450 U.S. households.
  • Regionally, data centers will represent 6% of total U.S. electricity consumption in 2026 (≈260 TWh) — the same share as in China. In the United Kingdom, data center energy demand could increase sixfold over a decade, driven mainly by AI adoption.
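Deloitte's GenAI-search estimate annualizes cleanly and the household equivalence checks out. In the sketch below, the ~10.5 MWh average annual consumption per U.S. household is our own assumption, back-solved from Deloitte's figures (it matches typical EIA averages).

```python
# Sanity check of the Deloitte estimate above: 3.12 GWh/day of GenAI search
# energy, annualized and converted to U.S. household equivalents.
# ASSUMPTION: ~10,500 kWh/year per average U.S. household (not stated in
# the article; back-solved from Deloitte's own numbers).

daily_gwh = 3.12
annual_twh = daily_gwh * 365 / 1000            # GWh/day -> TWh/year

kwh_per_household = 10_500
households = annual_twh * 1e9 / kwh_per_household  # 1 TWh = 1e9 kWh

print(f"{annual_twh:.2f} TWh/year, roughly {households:,.0f} households")
# -> 1.14 TWh/year and ~108,000 households, matching the figures cited.
```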

Access the full Deloitte analysis here.

McKinsey’s Outlook with Europe in Focus

Demand for data centers in Europe is set to surge in the coming years, rising from 10 GW in 2024 to 35 GW by 2030, while the industry’s energy consumption will nearly triple, from 62 TWh to over 150 TWh. According to McKinsey, at this pace, data centers will account for 5% of Europe’s total electricity consumption, up from just 2% today, and will generate 15–25% of the region’s new net electricity demand. The average annual growth rate of electricity use will reach 13% CAGR, and most of the required energy will come from renewable sources, in line with the commitments made by major industry players. Find the full McKinsey report here.

Goldman Sachs Research: Global Data Center Power Demand to Increase by 165% by 2030

Goldman Sachs analysts estimate that global data center power demand will grow by 50% by 2027, and by up to 165% by the end of the decade, compared to 2023. More specifically, demand is expected to rise from approximately 55 GW in 2025 to 84 GW in 2027, reaching a total online capacity of around 122 GW by 2030. The share of AI in the total energy consumption of data centers will reach 27% in 2027, while cloud workloads will account for 50%, and traditional workloads for 23%. The power density of infrastructure will also increase from 162 kW/m² in 2025 to 176 kW/m² in 2027, driven by higher AI-related demand.
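The Goldman Sachs capacity figures imply growth rates that are easy to back out. The sketch below computes the compound annual growth rate between the quoted readings; the interpolation between years is ours, not Goldman's.

```python
# Implied annual growth rates from the Goldman Sachs capacity figures above:
# ~55 GW (2025), ~84 GW (2027), ~122 GW (2030). The CAGR calculation is our
# own back-of-the-envelope, not a figure from the report.

capacity_gw = {2025: 55, 2027: 84, 2030: 122}

def cagr(v0: float, v1: float, years: int) -> float:
    """Compound annual growth rate between two capacity readings."""
    return (v1 / v0) ** (1 / years) - 1

print(f"2025-2027: {cagr(capacity_gw[2025], capacity_gw[2027], 2):.1%}/year")
print(f"2027-2030: {cagr(capacity_gw[2027], capacity_gw[2030], 3):.1%}/year")
```

The implied pace is steep early (above 20% per year to 2027) and then moderates into the low teens through 2030.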

In Europe, over the next decade, the data center pipeline—estimated at around 170 GW—could generate a 10–15% increase in the region’s total energy consumption, equivalent to about one-third of its current total. Read the full Goldman Sachs Research analysis here.

How Is Data Center Energy Consumption Evolving at the National Level?

  • United States: Data centers used 176 TWh in 2023 (4.4% of total national electricity use), up from 58 TWh in 2014. Energy demand has tripled over the past decade and could reach between 325 and 580 TWh by 2028, representing 6.7–12% of the country’s total electricity consumption. (Source: The U.S. Department of Energy)
  • United Kingdom: Data centers use about 2.5% of the country’s electricity, equivalent to 5 TWh per year, and by 2030 their consumption could reach 22 TWh/year. (National Energy System Operator)
  • Ireland: Data centers account for roughly one-fifth (20%) of the country’s annual electricity consumption, and in the Dublin area, they represented almost half (48%) of the city’s total electricity use in 2023. (Commission for Regulation of Utilities)
  • Germany: The annual electricity consumption of data centers is about 20 TWh, projected to reach 38 TWh by 2037. Major network operators forecast that by 2045, it could rise further to 88 TWh.
  • Netherlands: Data centers consume 3.7 billion kWh, or 3.3% of the country’s electricity, equivalent to 13.32 PJ or 0.44% of the Netherlands’ total energy consumption (3,024 PJ). 90% of this energy is green. (Dutch Data Center Association)
  • France: Data center electricity consumption rose from 3 TWh in 2019 to 10 TWh in 2022, representing 2.2% of the country’s total electricity use. It is estimated that by 2050, consumption will increase by 74%, according to Aurora Energy Research.
  • Spain: According to consultancy firm DNV, data centers in Spain consumed over 6 TWh of energy in 2024, and this consumption is expected to increase to approximately 12 TWh by 2030 and reach 26 TWh by 2050, representing a growth of more than 300% compared to the current level.
  • Meanwhile, in Romania, data centers consumed approximately 1.5 million MWh in 2022 out of a total of around 50.518 million MWh used nationwide. In other words, this represents 2.3–2.5% of the national electricity consumption, up from 2.1% in 2019, according to estimates by Tema Energy.

Aside from these figures, it is also worth mentioning the Energy Efficiency Directive (revised in 2023), which estimates that in the EU, data center consumption will rise from 76.8 TWh in 2018 to 98.5 TWh in 2030 (+28%), representing approximately 3.2% of the Union’s total electricity demand.

Although it seems we cannot agree on a single set of figures or a single consumption scenario, it is becoming increasingly clear that as digitalization and AI technologies accelerate, data centers are playing a growing role in the national and global energy mix. Efficient energy management and investments in green energy are becoming essential for a sustainable future.

Interview with Victor Vevera, ICI Bucharest: What resources and services will RO AI Factory provide, and when will it become operational?

The EuroHPC Joint Undertaking (EuroHPC JU) network of AI Factories — high-performance computing facilities optimized for artificial intelligence — is a key component of Europe’s strategy to strengthen digital sovereignty and competitiveness. These centers aim to provide start-ups, SMEs, researchers, and industry players with access to AI-optimized supercomputers, data, talent, and expertise, enabling Europe to become a sustainable “AI continent.” So far, 19 AI Factory locations have been selected across 16 EU Member States, including Romania. The network is being developed gradually, connecting infrastructures, supercomputing centers, universities, start-ups, and industry into an integrated European ecosystem for AI innovation.

We spoke with Adrian Victor Vevera, General Director of the National Institute for Research and Development in Informatics – ICI Bucharest, about Romania’s first AI Factory project — its objectives, coordination, and implementation plans.

  1. What is ICI’s role within the consortium, and how are the partners involved?

The National Institute for Research and Development in Informatics – ICI Bucharest is the national coordinator of the RO AI Factory consortium and the host entity of the future AI-optimized supercomputer. Its role is to ensure the strategic, operational, and technical coordination of the entire project, as well as the integration of the infrastructure into the EuroHPC European ecosystem.

Consortium partners contribute complementarily: technical universities (the POLITEHNICA University of Bucharest and the Technical University of Cluj-Napoca) provide academic expertise and skills development; research institutes (INCDSB, ICIA) support the development of AI applications in areas such as life sciences and data modeling; while industry and association partners (Transilvania IT, CNIPMMR, RoDIH) ensure the connection with the private sector, start-ups, and SMEs. Together, the consortium functions as an integrated platform bringing together research, education, and innovation.

  2. Where will the AI Factory be located, and what infrastructure will it host?

The RO AI Factory will be hosted at the ICI Bucharest data center, which already features enterprise-grade infrastructure designed for critical systems operation. The dedicated space for the supercomputer is equipped to accommodate high-density hardware and includes redundant power supply, modern liquid- and air-cooling systems, climate control, monitoring, and physical security. The data center is connected via the RoEduNet network to the GÉANT European research infrastructure, with the possibility of bandwidth expansion. In parallel, ICI Bucharest is upgrading its power and cooling capacities to support the new AI infrastructure.

  3. What is the expected computing power of RO AI Factory?

The RO AI Factory supercomputer will be a EuroHPC-class system, optimized for artificial intelligence workloads, advanced modeling, and the training of foundation and large language models (LLMs). It will deliver an unprecedented computing capacity in Romania, with a theoretical peak performance exceeding 5 exaFLOPS in AI operations. This places it among the most advanced AI systems in Central and Eastern Europe, capable of training and deploying next-generation artificial intelligence models — including large-scale generative and multimodal models. Through this capability, RO AI Factory will accelerate scientific research, industrial digitalization, and the development of innovative solutions with major economic and societal impact.

  4. What is the budget, and what is the estimated operational period?

The total estimated budget of €50 million covers both the acquisition and implementation of the hardware and software infrastructure and its operation over a five-year period. The funds will be used to procure the supercomputer and supporting infrastructure and to cover operational expenses — energy, maintenance, staffing, and connectivity — for the full five years. The budget also includes funding for the development and provision of AI services, training programs, start-up support initiatives, and ecosystem-building actions, implemented over a three-year contractual period.

The project’s financial model incorporates long-term sustainability mechanisms through national co-funding and integration with the EuroHPC and Digital Europe programs, ensuring that the RO AI Factory continues to operate beyond the initial implementation phase.

  5. What team and human resources will be involved?

The RO AI Factory operational team will initially include over 50 specialists, such as system engineers, AI and HPC experts, researchers, software developers, trainers, and AI ethics and compliance specialists. Recruitment will be carried out from among the consortium institutions, academia, and the private sector, through dedicated training and talent-attraction programs.

Romania already has a strong base of professionals in IT and engineering sciences, and the project aims to directly contribute to reducing brain drain by providing a competitive research and innovation environment aligned with European standards.

  6. Who will benefit from the infrastructure, and how can they access it?

RO AI Factory is designed as an open and accessible infrastructure for a wide range of users: academic and research institutions, start-ups, SMEs, technology companies, and public institutions. Access to computing resources will be provided through calls for projects, proposals, or subscription-based models, depending on user category.

Private IT and related companies will be able to use the platform for training their own AI models, testing AI solutions, and optimizing commercial products. There will also be dedicated acceleration programs, mentorship sessions, and training workshops to help the private sector leverage the infrastructure and expertise available through RO AI Factory.

  7. What is the implementation timeline, and what are the main milestones?

The RO AI Factory project will follow a phased implementation plan over approximately three years:

  • Year 1: hardware procurement, site preparation, and installation of core systems.
  • Year 2: service integration, interconnection with national and European data networks, and the launch of user programs.
  • Year 3: full operationalization of the national AI ecosystem and expansion of services for business and public administration.

It is estimated that RO AI Factory will become fully operational by the end of 2027, strengthening Romania’s position as a regional hub for AI innovation in Central and Eastern Europe.

Platform Global 2025: Looking Ahead to the Next Decade in Data Centers

From September 7 to 9, Platform Global 2025 took place in Antibes, France—a landmark summit dedicated to leaders building and financing digital infrastructure worldwide. The event brought together C-level executives from the data center, cloud, edge, and networking sectors, as well as investors, technology leaders, policymakers, regulatory authorities, real estate experts, energy transition specialists, consultants, and analysts, providing a comprehensive forum to explore market trends and growth opportunities.

The program featured over 100 speakers, conferences, and networking sessions covering topics such as AI-driven development, land and energy strategies, long-term energy solutions, mergers and acquisitions, sustainable investments, global geopolitical risks, quantum computing, emerging markets, demand and leasing, cloud sovereignty, how inference is reshaping the edge landscape, and the latest lessons learned by hyperscalers in liquid cooling.

In short, the challenges and outlook for the data center industry can be summarized across four “fronts”: energy, policy (regulation), AI, and sustainability. Energy constraints in the FLAP-D region are driving projects to areas with large available energy blocks. On the policy side, increasingly strict regulations and local issues already emerging in some markets (restrictions and opposition from authorities or local communities) are putting pressure on developers. Large-scale AI deployment is primarily occurring in North America and China, while sustainability, although important, has not yet received the same level of attention outside of Europe.

Below is a summary of some of the most relevant ideas and conclusions for the data center industry from Platform Global 2025.

Investments & Market Growth

According to Nicola Hayes, Director at Platform Markets, global investments in data centers have increased by 53% this year. Tomas Peshkatari (Global Infrastructure Partners / BlackRock) explained that this demand will lead to a doubling of capacity over the next five years, with an annual growth rate of 22%. In addition, contracts have extended to 10–15 years, with investment returns of 8–9%. Against this backdrop, Wes Cummins from Applied Digital highlighted that the U.S. remains ahead of Europe, where finding land and energy for new projects is much more challenging, while China is rapidly building and scaling infrastructure. “The amount of capital we are spending now is enormous—it’s shocking even for me,” said Wes Cummins. Smaller companies like Nebius and Lambda Labs are also growing rapidly, adapting to increasing demand. Charles-Antoine Beynet (DataOne) added that while in the U.S. you can deploy 100 MW in a year, in countries like France, the UK, or Germany, the process takes much longer due to limited access to electricity and strict regulations.

Energy Challenges and Solutions

On the energy front, Nathan Luckey (Macquarie Group) explained that energy in Europe is greener but more prone to intermittency, which can lead to blackouts, as recently seen in Spain. Pablo Ruiz-Escribano (Schneider Electric) added that, although energy is available, access takes too long. Building new transmission lines can take up to 10 years, according to Neil Cresswell (Virtus), who noted that, for example, data center power in the UK doubles every five years—from 100 MW to 250 MW, to 500 MW. Richard Bienfait from Stack Emea highlighted that energy costs are much higher in Europe than in the U.S., raising the question, “Why build here?” Sean James (Microsoft) explained that data centers tend to develop in clusters in certain regions, requiring huge amounts of energy, and energy quality can be affected if a large load is suddenly shut down.

On the solutions side, Ash Evans from Google noted that lower-cost locations will emerge as AI monetization evolves, and that the tech giant prefers to control all MEP systems (mechanical, electrical, and plumbing), even though leasing remains an option. Solutions discussed include generators to relieve grid load, batteries (e.g., Tesla Megapacks – xAI DC), and specialized software to optimize energy usage. For example, at Google, on average, only 66% of contracted power is actually used, according to Ash Evans.

Data centers can accelerate energy access when standard grid connections are unavailable using several strategies. According to McKinsey, these include:

  • Selecting new locations with shorter connection times (e.g., emerging cities, provinces such as Aragon);
  • Leveraging existing infrastructure (e.g., abandoned industrial sites, former coal plants, industrial parks);
  • Building on-site generation capacity and microgrids (e.g., gas plants, small modular reactors);
  • Accelerating the development of energy capacity together with suppliers (e.g., PPAs, joint grid investments, restarting or extending the life of nuclear/gas/coal plants).

McKinsey’s analysis also shows that data center operators are increasingly exploring behind-the-meter (BTM) options to secure energy directly at the source, including:

  • Solar and wind: Highly variable due to intermittency, but aligned with sustainability goals; requires storage capacity and large land areas. A feasible long-term solution (after 2030) if storage costs decrease.
  • Hydropower: Seasonal variability (reduced output in dry season), limited construction opportunities near dams, and additional connectivity costs.
  • Natural gas: Controlled variability (can be turned off when not needed), proximity to the source is essential to minimize transport investment; long-term sustainability remains debatable, though cheaper Carbon Capture and Storage (CCS) technologies could offset this.
  • Nuclear (SMR): High reliability, zero carbon emissions, most promising BTM option, but realistic only after 2030.

Conclusion: BTM feasibility before 2030 is limited, depending on cost reductions in storage and CCS; in the long term, nuclear power is the most likely dominant solution.

Currently, the cheapest on-site microgrids from a Levelized Cost of Energy (LCOE) perspective are based on gas generators. However, in most cases, grid power remains cheaper than microgrids, with differences depending on technology and country. Additionally, partnerships between energy companies and data center operators are becoming more frequent.
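The LCOE comparison above can be made concrete with the standard formula: discounted lifetime cost divided by discounted lifetime energy output. Below is a minimal Python sketch; all figures are hypothetical and purely illustrative, not taken from the conference.

```python
def lcoe(capex, annual_opex, annual_energy_mwh, lifetime_years, discount_rate):
    """Levelized Cost of Energy: discounted lifetime cost / discounted lifetime output."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_energy_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy  # cost per MWh

# Hypothetical on-site gas microgrid: $50M capex, $8M/yr opex,
# 400,000 MWh/yr output, 20-year life, 8% discount rate.
gas = lcoe(capex=50_000_000, annual_opex=8_000_000,
           annual_energy_mwh=400_000, lifetime_years=20, discount_rate=0.08)
print(f"Gas microgrid LCOE: ~{gas:.1f} $/MWh")
```

Comparing such a figure against the local grid tariff per MWh is how the gas-microgrid-versus-grid trade-off in the paragraph is typically framed, and the answer varies by technology and country, as noted above.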

According to McKinsey, data center operators can partner with energy companies to generate an additional 2–4% in revenue. In practice, data centers become active participants in grid management, not just consumers, contributing to stability and energy efficiency. This can be achieved through several mechanisms:

  • Peak Shaving & Load Levelling: Adjusting energy consumption to reduce demand peaks that strain the grid, maximize solar energy capture, and efficiently use capacity during low-demand periods;
  • Frequency Regulation: Data centers can rapidly increase or decrease supply to maintain grid frequency (50/60 Hz) during unforeseen events;
  • Voltage Control: Managing reactive power to maintain near-constant voltage amid increasing volatility;
  • Price Arbitrage: Buying and selling energy based on hourly or minute-by-minute price variations, optimizing costs and avoiding additional peak-hour charges.
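As an illustration of the first mechanism, peak shaving with an on-site battery can be sketched in a few lines. The load profile, threshold, and battery sizes below are hypothetical, chosen only to show the clipping behavior.

```python
def peak_shave(load_mw, threshold_mw, battery_mwh, max_discharge_mw):
    """Clip grid draw at a threshold by discharging a battery during peaks
    and recharging it during off-peak hours (1-hour timesteps)."""
    soc = battery_mwh  # state of charge; start fully charged
    grid_draw = []
    for load in load_mw:
        if load > threshold_mw:
            # Peak hour: discharge to bring grid draw toward the threshold.
            discharge = min(load - threshold_mw, max_discharge_mw, soc)
            soc -= discharge
            grid_draw.append(load - discharge)
        else:
            # Off-peak hour: recharge, but never exceed the threshold.
            recharge = min(threshold_mw - load, battery_mwh - soc)
            soc += recharge
            grid_draw.append(load + recharge)
    return grid_draw

profile = [60, 80, 120, 140, 110, 70]  # MW per hour (hypothetical)
shaved = peak_shave(profile, threshold_mw=100, battery_mwh=50, max_discharge_mw=40)
print(shaved)  # peak drops from 140 MW to 110 MW
```

The recharge branch also levels the load upward during quiet hours, which is the “Load Levelling” half of the first bullet above.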

Locations, Regulations & Markets

In addition to immediate access to energy and fast construction permits, flexible data center design—allowing adaptation to future demand—is equally important. Eric Boonstra (Kevlinx Data Centers) emphasized that location choice is primarily dictated by clients. For example, cities like Brussels are currently under-supplied and will soon attract new data center investments, while established hubs such as Frankfurt and Amsterdam face energy-related challenges. Milan, although seemingly saturated, has the potential to surpass Amsterdam if the current growth trend continues.

Amine Kandil (N+One Datacenters) sees major future developments in North Africa, particularly Morocco, while Umberto Sordino (EAE) highlighted strong growth in the Nordic countries, Greece, India, and the Philippines. Dan Thomas (GreenScale) noted that the Nordics and Portugal offer cheap energy and the newest networks; although energy access times there remain long, they are expected to improve. Oliver Schiebel (hscale) stressed the importance of having multiple high-quality sites for clients to choose from, and Robert Bath (FoundDigital DS) added that 2N redundant power path designs will become increasingly common.

  • According to McKinsey, the geographic strategy of data centers in Europe is shifting due to rising AI demand and energy shortages in Tier 1 and Tier 2 cities. Development is moving toward Tier 3 cities, new provinces, and areas around Tier-1 hubs.
  • New locations must meet criteria such as energy cost and type, connectivity, latency, proximity to Availability Zones (AZs), and the level of support from the local ecosystem.
  • Typical time to access energy: over 5 years in Tier-1 cities, 3–5 years in secondary cities, and 2–3 years in emerging locations.

Design, Flexibility & Cooling Technology

According to experts at Platform Global 2025, although data centers are often developed based on demand, a significant portion is built speculatively due to the importance of time-to-market, making financial risks and design assumptions crucial. Safi Farooqui (Brookfield Asset Management) highlighted the importance of hybrid cooling systems and the construction of high-density data centers. Pablo Ruiz-Escribano (Schneider Electric) noted that the industry has changed completely over the last 30 years: data centers are now part of the grid, white space impacts grey space, and the talent shortage for design, construction, and operation remains a challenge.

Wes Cummins provided an example illustrating the growing importance of flexibility. The first Applied Digital data center used 100% liquid cooling and 25% air-based cooling, whereas now the company designs buildings with a flexible mix of up to 50% air-based cooling. Vincent Barro (Schneider Electric) added that DCIM systems are increasingly becoming true copilots, using AI to automate operations and optimize consumption. The Schneider representative also emphasized the growing importance of microgrid developments. Tom Kingham (CyrusOne) believes that in the near future, data centers will require 600V or even 800V systems, while Andy Hayes (Polar) spoke about the increasing demand for Neocloud and AI colocation, especially in the pharmaceutical industry.

AI Demand & Leasing

AI-focused data centers, such as neoclouds, are already being built at scale in Europe. The demand for AI is enormous and real—not just hype—and adoption rates are increasing rapidly, says Christina Mertens (Virtus). According to her, the way data center space is leased has also changed: from flexible contracts of a few years for a few racks or 1 MW in a large facility, leasing has evolved to entire floors or even entire sites being leased by a single client, with longer-term and more secure contracts. The more a data center is customized for a client, the stronger the relationship and commitment become.

  • Global demand for data centers is expected to triple, reaching over 180 GW.
  • By 2030, AI/ML and HPC workloads will account for 71% of the total, while GenAI will grow from 14% in 2025 to 40%.
  • According to McKinsey, although significant capacity increases are announced through 2030, supply may remain below estimated demand after 2027. In Europe, the gap could reach 10 GW.
  • This gap will put pressure on energy grids in key regions but will also create opportunities for new hubs and the entry of new market players.
  • McKinsey estimates that Europe will account for approximately 15% of global workload volume by 2030, much of it driven by AI inference services.

Platform Global 2025 showed that data centers are more than just infrastructure—they’re hubs of innovation, where energy, AI technologies, and smart development strategies will shape who leads the next digital wave.

Learning from Downtime: What Recent Data Center Outages Reveal

A data center being offline for just a few hours can mean losses of hundreds of thousands or even millions of dollars. The “Annual Outage Analysis 2025” report by the Uptime Institute highlights a paradox in the industry: although the overall frequency of outages and their reported severity have decreased for the fourth consecutive year, their financial and reputational impact is becoming increasingly severe. In 2024, more than half of the surveyed organizations (54%) reported that their most recent significant outage cost more than $100,000, and one in five reported losses of over $1 million.

What have the past few years really looked like for data center operators in terms of operational disruptions? Beyond charts and statistics, every percentage point hides real stories—high-profile incidents and major financial losses that demonstrate just how fragile the balance between availability and downtime can be.

Energy, the “Achilles’ Heel” of the Data Center Industry

Even though only 9% of incidents reported in 2024 were classified as serious or severe—the lowest level ever recorded by Uptime—power remains the “Achilles’ heel” of data centers, accounting for more than half (54%) of major-impact outages. The numbers become even more telling when placed alongside real-world cases—let’s look at a few concrete examples.

In October 2023, a failure in the electrical distribution system at a Microsoft data center in the Netherlands caused an outage of nearly two hours, after the switchover from the public grid to backup generators partially failed. The incident impacted key Azure services—from App Service and SQL DB to storage and virtual machines—and about 1% of racks lost power. Full recovery took until the evening, with some storage accounts affected for several hours, directly impacting customers and critical services that depended on them. Microsoft did not disclose details about the financial impact of this outage.

Read the article “Power cuts, a major challenge for data centers” to explore some of the solutions and preventive measures recommended for operators.

Cooling, Network, and IT – the Next Major Risk Factors

The Uptime Institute report also shows that, after power, cooling systems (13%), networks (12%), and IT systems (11%) follow as leading causes of outages, confirming that critical infrastructure remains vulnerable precisely at the points where it should be strongest.

We already know that heatwaves are not a data center operator’s best friend. Back in July 2022, Google’s and Oracle’s London data centers were hit by a record-breaking heatwave, with temperatures soaring above 40 °C, which caused failures in cooling systems. Oracle’s first announcement about the incident stated that “unreasonable temperatures” had affected its cloud and network equipment at its South London data center, causing outages throughout the day and impacting customers. Google, in turn, partially shut down cloud services for several hours as a protective measure to prevent equipment damage and prolonged downtime, affecting a small number of users and causing temporary unavailability for services such as WordPress web hosting in Europe.

A more unusual incident was recently shared by Rick Bentley, founder of Cloudastructure and Hydro Hash, which operates a hydro-powered crypto-mining data center. The incident occurred in Montana, USA, where the data center “froze solid overnight”. The problem, in this case, was the rapid temperature drop from -6 °C to -34 °C in less than 24 hours. Bentley emphasized that, although the team believed it was prepared, the combination of extreme cold with a power outage made the incident unavoidable.

Complex IT Infrastructures Mean More Frequent Outages

As mentioned earlier, in 2024 nearly a quarter of major-impact outages were caused by IT and network issues—a trend explained by the increasing complexity of infrastructures and the risks associated with misconfigurations. Uptime Institute data confirms this: the most common causes of IT-related outages are network and connectivity problems (30%), IT systems and software (23%), power outages (18%), third-party IT services such as public cloud or SaaS (8%), and cooling issues (7%).

A representative case is the July 20, 2025 incident involving Alaska Airlines, which shows that the damage is not only financial but also reputational. The U.S. airline suffered a critical hardware failure in its data centers, which led to the grounding of all flights for about three hours, between 8:00 p.m. and 11:00 p.m. PT. The issue disrupted core flight operations and also affected its subsidiary, Horizon Air. As a result, on July 21, FlightAware data showed that 7% of flights (66) were canceled, while another 12% (110) experienced delays, leading to crowded airports and confusion among passengers. The hardware failure was reportedly caused by a third-party component, with the company stating it was working with the vendor to resolve the issue.

Operational Outages Caused by Human Error on the Rise

In 2025, outages caused by human error increased by 10 percentage points compared to 2024, with the most common cause being failure to follow procedures—possibly amplified by the industry’s rapid growth and staff shortages. Investments in employee training and real-time operational support can help mitigate these risks. According to Uptime Institute, over the past three years the main causes of major human-error outages have been failure to follow procedures (58%), staff following incorrect processes (45%), understaffing (18%), insufficient preventive maintenance (16%), and omissions in data center design (14%).

A Final Note

As data center infrastructure becomes increasingly complex and interconnected, operational risks are diversifying and becoming more costly. Even infrastructure designed to be robust can be vulnerable to extreme conditions or configuration errors, underlining the importance of an integrated prevention strategy.

To reduce the risk of outages in data centers, several complementary measures are essential: redundant power systems (generators and UPSs) to ensure uninterrupted hardware operation; regular maintenance and testing, supported by monitoring and predictive analytics; failover to mirror sites for rapid traffic redirection; disaster recovery plans with checklists and regular drills; and staff training to minimize human error.
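Of the measures listed above, failover to a mirror site lends itself to a brief sketch: a health-check loop that returns the first responsive site and escalates when all mirrors fail. This is a minimal illustration under stated assumptions, not a production pattern; the probe function and site names are hypothetical.

```python
def pick_healthy_site(sites, probe):
    """Return the first site whose health probe succeeds; None if all fail."""
    for url in sites:
        try:
            if probe(url):
                return url
        except OSError:
            continue  # unreachable or timed out: try the next mirror
    return None  # all mirrors down: escalate to the disaster-recovery plan

def http_probe(url, timeout_s=2.0):
    """Probe a health endpoint over HTTP (a 200 response means healthy)."""
    import urllib.request
    with urllib.request.urlopen(url, timeout=timeout_s) as resp:
        return resp.status == 200

# Usage with a stub probe (no network needed): only "mirror-b" is up.
healthy = {"mirror-b"}
print(pick_healthy_site(["mirror-a", "mirror-b"], probe=lambda s: s in healthy))
# prints mirror-b
```

Real deployments do this with DNS failover or a global load balancer rather than application code, but the decision logic is the same: probe, prefer the primary, fall through the mirror list, and alert when nothing answers.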

AI Factories & Gigafactories Made in Europe: How the EU Is Building Its AI Ecosystem

Is Romania joining the AI race? In June, the Romanian Authority for Digitalization announced the country’s bid to host one of the most ambitious European projects in the field of artificial intelligence: Black Sea AI Gigafactory. The project could attract an investment of approximately EUR 5 billion and has the potential to transform Romania into a strategic hub for high-performance computing in Europe.

The Black Sea AI Gigafactory project involves the installation of over 100,000 AI accelerators across two locations: Cernavodă (Phase I) and Doicești (Phase II), which were mainly selected for their energy advantages. The infrastructure will be powered by a sustainable energy mix of up to 1,500 MW, based primarily on low-carbon nuclear energy.

However, this is just one of the opportunities generated by the EU’s AI Factory / AI Gigafactory initiative funded by the European Commission through the European High Performance Computing Joint Undertaking (EuroHPC JU) — the body responsible for developing and coordinating Europe’s high-performance computing infrastructure.

Next, we explore the other opportunities, components, and key elements included in the EU’s strategic approach to AI development.

A “CERN for AI”: What Is the AI Factory / AI Gigafactory Initiative?

AI is becoming a strategic priority for Europe, and through the AI Factory / AI Gigafactory initiative, the European Commission is taking decisive steps toward strengthening the EU’s position on the global tech map. The initiative is, in fact, a broad set of measures with complex objectives: reducing dependence on American and Chinese infrastructures and ensuring free access to computing power for startups, SMEs, and researchers. Instead of a Europe that consumes technology, a new Europe is emerging—one that creates, controls, and democratizes access to high-performance computing (HPC) resources, while fostering a competitive AI ecosystem across the continent.

Through the AI Continent Action Plan, the European Commission is accelerating the development of a strong AI infrastructure in Europe, based on three main pillars: AI Factories, AI Gigafactories, and the Cloud and AI Development Act.

AI Factories aim to train and optimize AI models, backed by a €10 billion budget for the 2021–2027 period. AI Gigafactories, four times more powerful than AI Factories, are designed for the development of complex AI models and benefit from €20 billion in funding through InvestAI. In parallel, the Cloud and AI Development Act seeks to boost research in sustainable infrastructure and attract investments, aiming to triple the EU’s data center capacity over the next 5–7 years.

Timeline and context:

January 2024 – AI Innovation Package: The EU adds AI Factories to its list of strategic priorities and launches investments via Horizon Europe and Digital Europe.

February 2025 – InvestAI, a €200 billion program for AI investments, is launched, with €20 billion allocated specifically for AI Gigafactories. Public-private partnerships are strongly encouraged.

April 2025 – AI Continent Action Plan: The EU sets out its roadmap for a network of AI Factories and energy-efficient AI Gigafactories, integrated into major data centers and inspired by the CERN model.

What Are the Short- and Medium-Term Objectives?

Between 2025 and 2026/2027, the European Union has set concrete steps to strengthen its AI infrastructure through the AI Factory / AI Gigafactory initiative:

  • At least 13 AI Factories will become operational.
  • Several AI Antennas—regional access points—will be launched to allow users to remotely connect to high-power AI resources.
  • At least 9 next-generation AI supercomputers will be acquired and installed, distributed across various EU regions.
  • Up to 5 AI Gigafactories will be created, each equipped with over 100,000 AI accelerators and designed for maximum performance: energy efficiency, high-speed networks, secure supply chains, and AI-driven automation.

The AI Factory Projects Have Been Selected. Which Countries Will Host Them?

So far, EuroHPC JU has selected 13 AI Factory projects in two phases, laying the foundation for the future of artificial intelligence in Europe.

In the first phase, in December 2024, the first seven consortia were designated, bringing together 15 EU member states (including Romania) and 2 participating countries. The AI Factories will be built in Finland, Germany, Greece, Italy, Luxembourg, Spain, and Sweden. The estimated investment is €1.5 billion (EU funding and national contributions).

Here are the selected projects:

  1. BSC AI Factory (Spain, Barcelona): An initiative led by Spain, Portugal, Turkey, and Romania. It develops AI infrastructure for industry, government, SMEs, and startups, with a focus on health, energy, and agriculture. The project includes an upgrade to the MareNostrum 5 supercomputer and an experimental platform for new technologies.
  2. IT4LIA (Italy, Bologna): AI infrastructure based on the LEONARDO supercomputer, supporting the agri-food, cybersecurity, and manufacturing sectors. Italy partners with Austria and Slovenia.
  3. LUMI AI Factory (Finland, Kajaani): Coordinated with five other Nordic and Eastern European countries, this project enables rapid AI solution development.
  4. L-AI Factory (Luxembourg, Bissen): Powered by the MeluXina-AI supercomputer, it supports sectors such as finance, space, and the green economy. It offers fast and customized support for companies, with a strong focus on startups and SMEs.
  5. MIMER (Sweden, Linköping): A mid-range AI supercomputer with cloud access and scalable storage for sensitive data. It focuses on medicine, materials, autonomous systems, and gaming, and develops AI models for structural biology and personalized medicine.
  6. HammerHAI (Germany, Stuttgart): Offers a scalable and secure AI platform for research and industry, with support for machine learning and hybrid HPC/AI. It helps companies access pre-trained AI models.
  7. Pharos (Greece, Athens): Uses the DAEDALUS supercomputer to address needs in healthcare, culture, and sustainability, offering end-to-end support for users (from dataset provision to AI model training).

In March 2025, six additional AI Factory projects were selected to be developed in Austria, Bulgaria, France, Germany, Poland, and Slovenia, as follows:

  1. AI:AT (Austria) – Hosted at TU Wien in Vienna, this AI Factory will provide companies with access to datasets, advanced AI models, and scalable computing infrastructure.
  2. BRAIN++ (Bulgaria) – Located in Sofia Tech Park, it will include the Discoverer++ supercomputer and a dedicated AI hub. The project targets support for startups, the development of large language models (LLMs) for the Bulgarian language, robotics, and AI-based space observation.
  3. AI2F (France) – Will leverage France’s existing supercomputing infrastructure, including Alice Recoque, an Exascale EuroHPC supercomputer set to go live in 2026. It aims to support the use of AI in healthcare, energy, agriculture, education, and more.
  4. JAIF (Germany) – Built around JUPITER, Europe’s first exascale supercomputer, located at Forschungszentrum Jülich. It will provide integrated access to AI resources and host an experimental platform for developing and testing AI models. The project will support key sectors such as health, energy, education, and culture, and will collaborate closely with AI2F.
  5. PIAST (Poland) – Backed by universities in Poznań, Toruń, and the Wielkopolska region, it will use the national supercomputer and the Piast quantum computer. It targets areas such as healthcare, cybersecurity, robotics, space, sustainability, and the public sector.
  6. SLAIF (Slovenia) – To be installed in a new data center near the Mariborski otok hydropower plant. Its goal is to foster AI innovation in both business and the public sector, offering training, technical support, and knowledge transfer across the region.

Good to Know:

  • May 2, 2025, was the deadline for submitting applications in the third round of AI Factory establishment.
  • In April 2025, EuroHPC JU launched a call for proposals for AI Factory Antennas, enabling EU member states to deploy national AI Factory services without investing in their own supercomputing infrastructure.

AI Gigafactories: Next Steps

Another key date was June 20, when the European Commission closed the call for expressions of interest regarding the establishment of AI Gigafactories. A total of 76 proposals were received from 16 member states, covering 60 different locations.

The purpose of the call was to gather early insights from industry leaders, private and public investors, as well as member states interested in shaping the future AI infrastructure in Europe. According to a press release, although these proposals are not considered official applications, they will help the European Commission and member states create a shortlist of potential candidates for developing world-class AI Gigafactory facilities.

Thus, the ground is being prepared for the official project call, scheduled to launch by the end of 2025, with implementation of the infrastructure most likely expected between 2026 and 2027.

Black Sea AI Gigafactory: What Makes Romania’s Proposal Unique?

Romania’s proposal, developed with the support of international experts from the World Bank, was submitted to the European Commission via EuroHPC JU in the form of a letter of intent to host the Black Sea AI Gigafactory.

The project is backed by a national consortium bringing together public, private, and academic sectors — including companies from the energy industry, consumer goods, advanced technologies, innovative startups, and research institutes. Authorities have announced plans to further expand and strengthen it.

Practically, Romania has expressed the desire to develop a “state-of-the-art AI infrastructure with a hybrid architecture, capable of serving both complex AI training and inference processes within a robust, secure, and sustainable operational framework.” As mentioned at the beginning of the article, this would involve installing over 100,000 AI accelerators in Cernavodă and Doicești, powered by 1,500 MW of nuclear energy.

  • Unique competitive advantages of the Romanian project: direct power supply from nuclear sources (Cernavodă), digital infrastructure connected to major European nodes via fiber optics and submarine cables, an industrial site with SMR co-location potential (Doicești), hybrid cooling systems, and integration into the national high-speed communications network.
  • Impact: strengthening Europe’s AI capacity, stimulating innovation in Central and Eastern Europe, supporting Ukraine’s digitalization, technologically integrating the Republic of Moldova, and expanding AI services to Serbia and Turkey.

The development of AI Factories and AI Gigafactories in the EU marks a crucial step towards consolidating European technological sovereignty, boosting long-term innovation and competitiveness in artificial intelligence. This evolution will also drive the data center sector, which is essential for supporting digital infrastructure. We hope Romania seizes this opportunity and plays its cards wisely through projects that highlight its strengths.

DataCenter Forum 2025: The Future of Digital Infrastructure is Being Built Now

Now in its seventh edition, the DataCenter Forum brought together approximately 700 participants and over 20 Romanian and international speakers on May 7, 2025, in Bucharest. Organized by Tema Energy – the local market leader in data center design and construction – the event highlighted the challenges and opportunities arising from the Artificial Intelligence revolution. From the explosion in computing power demand and energy consumption to massive investments in green energy driving energy stability, and Europe’s increased reliance on sovereign AI and cloud, speakers emphasized that we are at the beginning of a profound transformation that will shape the digital economy for decades to come.

Opening Keynote: The AI Revolution & Europe 4.0. Mihai Manole, Tema Energy: We Are Rapidly Reaching Unprecedented Levels of Computing Power and Energy Due to AI

Mihai Manole, Managing Partner of Tema Energy, opened this year’s edition with an overview of the AI revolution, comparing the industry’s dynamics between the US, Europe, and Romania. According to Manole, while record-breaking global investments in AI infrastructure are being announced – $500 billion in the US, over €200 billion in the EU, more than $100 billion in China, and $50 billion in the UK – Romania currently has 59 data centers and data rooms, most of them small to medium-sized, with a vacancy rate of around 54%, similar to major European markets (FLAP).

In terms of energy capacity, the US is building data centers totaling over 4,000 MW, China around 2,000 MW, and Europe surpasses 1,000 MW – compared to a global total of just 1,000 MW a few years ago. Romania, with 50% of its energy production from renewable sources and one of the most developed telecom networks in Southeastern Europe, has strong potential to attract significant international investments in this sector.

“Southeastern Europe has a relevant number of data centers, but installed power is low. On the other hand, we are at a very favorable moment for developing new data centers, and we are seeing very large green energy projects supported by the state or EU funds. Announced projects total 55 gigawatts from wind, solar, nuclear, and small modular reactors. These give Romania an advantage in energy stability and competitive pricing to attract international investor interest. What should Romania do to attract investments in this new data center and AI paradigm? First, authorities need to create a plan similar to what exists for attracting industrial and manufacturing companies. Because the new industry is IT and AI,” said Mihai Manole.

Keynote: Welcome to the AI Factory Era. Lessons Learned from Building the Largest AI Data Centers in the US – Wes Cummins, Applied Digital

The special guest of this edition was Wes Cummins – Founder and CEO of Applied Digital, one of the largest builders of AI data centers in the US, who successfully transitioned from crypto mining facilities to “AI factories.”

“To understand today’s level of tech adoption, consider this: ChatGPT, launched in November 2022, reached one million users in just five days. The fastest in any app’s history. Today, it has over 800 million users and around a billion queries per day. Google reached that daily query volume in 11 years. We’re talking two and a half years vs. 11. Also, if we look at the compute resources needed, Google uses 1,000 to 10,000 compute blocks per query, while ChatGPT uses between 300 billion and 1 trillion. These figures show the immense power required to run AI. This revolution will be built on data centers, much like the fiber-optic boom in the 1990s and early 2000s enabled the internet to become real,” said Wes Cummins.

According to him, the industry is facing challenges in efficiency, capacity, and innovation on a scale not seen in the last 30 years.

“There will be many opportunities for innovative companies willing to take bold risks. During major industry shifts, as we are seeing now, is when new leaders emerge. We are at the start of a major transformation, perhaps one we may never witness again in our lifetimes,” he added.

Panel: Southeast Europe’s Data Center Market – Between Challenges and Opportunities

The event’s first panel featured Florin Popa (Orange Business Director, Orange Romania), Ion Paraschiva (Chief Commissioner, Head of IT Systems Management, Romanian Police), Alexandros Bechrakis (Digital Realty Hellas), and Radu Brașoveanu (PPC Romania).

Ion Paraschiva presented the Romanian Police’s perspective, mentioning the construction of a container-type data center (chosen for its relocatability), developed with Tema Energy. This helped bypass bureaucratic obstacles and accelerate operations for the more than 45,000 police officers currently active.

Florin Popa from Orange emphasized the rapid evolution of the local market in the past 12–18 months, marked by growing demand for colocation and infrastructure-as-a-service (IaaS, PaaS), along with increased interest in edge computing as companies seek proximity to customers. From Orange’s viewpoint, local challenges are not yet centered around AI, but around traditional use cases: B2B computing, backup, and IT infrastructure. Data sovereignty compliance pressures are increasing, driving demand for local data centers.

“Energy consumption is rising, and the question is how fast it will rise and how we can support it – through transport and distribution networks and at the societal level. We’ll see how fast we can build these data centers and how we can support their massive compute needs. It’s critical to generate energy near data centers to avoid overloading the distribution grid. The key to all of this is digitalization,” added Radu Brașoveanu.

One of the panel’s conclusions, from Alexandros Bechrakis, was that regional markets are small and do not require massive investments, but rather well-directed ones.

“Romania, Greece, and other countries in our region are heading toward the third wave of development. In the next few years, we’ll see new centers built. Maybe not next year, but soon. At Digital Realty, we almost doubled our investment plan to nearly $8 billion, especially in US markets, but also in the FLAP region, as well as Greece, Romania, Spain, and Israel. We’re always seeking new markets, and all our European sites use 100% green energy,” he added.

Flash-chat: Trends Shaping the Data Centers of the Future – From Energy Efficiency to Edge Computing

The lineup of international speakers was rounded out by Mark Acton, one of the industry’s most prominent independent consultants. Acton has played a key role in shaping various industry standards and is a member of the European Commission committee responsible for the development of the EU Data Centre Code of Conduct – the main regulation for energy efficiency.

In the first Flash-chat of this year’s edition, Acton emphasized that data centers consume massive amounts of energy and that environmental responsibility is increasingly critical. According to him, Artificial Intelligence (AI) does not replace existing infrastructure—it augments it. As a result, the way we design data centers is changing.

“I wouldn’t call it a cold war, but it’s definitely an arms race,” said Acton. “Last week, Nvidia CEO Jensen Huang was asked whether China is lagging in AI, and he said no—in fact, they’re very close to the U.S. Clearly, we need to stay ahead and keep up with the pace of change. It’s a technological arms race, and I think the current geopolitical instability is prompting countries to focus more on their own regions. In Europe, I believe we’ll see more collaboration driven by these geopolitical shifts. Sovereign AI and sovereign cloud—our own technologies and data control—will become essential.”

Moreover, the focus is shifting from merely reducing the PUE (Power Usage Effectiveness) index to improving IT load efficiency—the real energy consumer in data centers. Acton also pointed out the need for greater public awareness. There’s a growing negative perception of data centers in the media, largely due to a lack of understanding of just how dependent society is on digital infrastructure. Users must become more aware of how their digital habits impact infrastructure and energy use.

Regarding regulation, Acton compared the data center industry to glassblowing—both require a lot of energy, and there’s often little clarity on how it’s used. Glassblowing took 15 years to be properly regulated; Europe is only just beginning the regulatory journey for data centers.

He also noted that established markets like FLAP-D (Frankfurt, London, Amsterdam, Paris, Dublin) are struggling to secure enough power, prompting a shift toward emerging markets. Romania has a real opportunity to join this map, with a favorable energy mix, high renewable potential, strong connectivity, and prospects for government support. Locations like Crete, Marseille, and Lisbon serve as inspirational examples.

Acton introduced the concept of “stranded energy” – excess energy generated by power plants that goes unused due to a lack of distribution infrastructure. He warned that updating Europe’s energy grids could take up to 20 years. Meanwhile, small modular reactor (SMR) technology could support future data center power needs, but public skepticism toward nuclear energy and a vague legal framework remain significant hurdles.

 

Panel – Ensuring Compliance in the Data Center Market: A Review of EU Regulations and Standards

Iolanda Saviuc (Scientific Officer at the Joint Research Centre, European Commission), Vanessa Moffat (representing the Data Centre Alliance), Nicola Hayes (CMO, Platform Markets Group), and Mark Acton discussed new EU regulations, including the EN 50600 standard and Delegated Regulation (EU) 2024/1364, whose adoption remains slow.

Panelists noted investors’ concerns over the complexity of new requirements but emphasized this as a moment of opportunity. By sharing data, engaging actively, and collaborating with European authorities, data center operators can shape future policies directly.

On the topic of incident reporting and the NIS 2 Directive, experts highlighted a major challenge: the data center industry has historically hidden security incidents. This lack of transparency undermines trust and the industry’s ability to learn and improve.

Saviuc and Moffat encouraged operators to get involved in EU stakeholder groups, challenge unrealistic proposals, and help ensure regulations reflect on-the-ground realities. Accurate data reporting is crucial to understanding the European data center landscape.

Acton stressed that the EU isn’t currently enforcing punitive measures—the goal is to introduce and promote best practices.

Flash-chat: Direct Liquid Cooling – The Future of Data Center Cooling

Philipp Guth, CTO of Rittal, spoke about how the German company is tackling the cooling challenges posed by AI. He highlighted two major trends transforming data center infrastructure: the explosion of large language models (LLMs) like ChatGPT and the rapid advances in chip manufacturing. New AI-optimized GPUs are incredibly powerful but also highly energy-intensive, with most of that energy converting directly into heat.

In this context, liquid cooling is no longer optional—it’s essential. Data centers need scalable, reliable cooling solutions capable of supporting AI workloads.

At the DataCenter Forum 2025 expo, Rittal showcased an innovative cooling system capable of delivering 1 MW of cooling capacity within the footprint of a standard rack. This solution is designed for both white and grey areas of data centers, with built-in scalability and redundancy—ideal for AI-ready infrastructures.

As demand for AI clusters grows, so does thermal density. Cooling systems must now offer both modularity and precision. Technologies like direct-to-chip cooling, which targets the heat source directly, provide superior thermal efficiency and reduce energy waste, Guth concluded.

 

Panel – The Tech Show Must Go On: Cutting-Edge Technologies Powering the Data Center Industry

This panel featured Flavia Chitanovici (Country Sales Manager, EnerSys Romania), Matteo Faccio (CTO, HiRef S.p.A.), George Dritsanos (Secure Power VP CEE, Schneider Electric), Igor Grdić (Regional Director Central Europe, Vertiv), Laurent Orvoën (Development and Innovation Sales Manager, Eneria), and Ramki Balasubramanian (International Sales – Technical, nVent).

Flavia Chitanovici outlined the main challenges facing critical infrastructure: pressure on power grids, the need for constant uptime, sustainability goals, and—above all—the skyrocketing energy demand driven by AI.

“Energy demand is growing so fast it’s outpacing infrastructure capacity. In 2024, data centers consumed around 415 TWh globally – 1.5% of total global electricity use. And that number is growing by up to 12% each year.”
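As a rough illustration of the growth rate quoted above, the compounding can be sketched in a few lines of Python. The 415 TWh baseline and the 12% annual rate come from the quote; the extrapolation itself is purely hypothetical, not a forecast:

```python
# Illustrative compound-growth projection of global data-center
# electricity use, using the ~415 TWh (2024) baseline and the
# ~12% annual growth rate quoted in the article. Hypothetical only.
def project_consumption(base_twh: float, growth_rate: float, years: int) -> float:
    """Compound the baseline consumption over a number of years."""
    return base_twh * (1 + growth_rate) ** years

for year in range(2024, 2031):
    print(f"{year}: ~{project_consumption(415.0, 0.12, year - 2024):.0f} TWh")
```

At this pace, consumption would roughly double within six to seven years, which is what makes the speaker’s point about infrastructure capacity so pressing.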

This shift calls for a new paradigm: from simple back-up systems to intelligent energy management that supports long-term sustainability and resilience.

George Dritsanos discussed Schneider Electric’s commitment to sustainability, showcasing their microgrid-based approach and diverse portfolio of integrated solutions, including advanced software and solar energy storage. These solutions help data centers become more energy-efficient, sustainable, and future-ready.

Igor Grdić shared Vertiv’s perspective, highlighting how AI factories and GPU-driven centers, especially those with Blackwell series GPUs, present a unique challenge. Unlike the steady-state nature of traditional cloud workloads, AI servers generate sharp fluctuations in power usage—varying by up to 20 MW within milliseconds in a 100 MW site. Vertiv’s systems are built to handle such volatility.

 

Flash-chat: Vodafone Managed Services for Data Storage and Management

Dinu Dragomir, Director of Vodafone Business Romania, highlighted the company’s commitment to supporting businesses of all sizes and public institutions. Since 2024, Vodafone has offered its own public cloud (VPS) and private cloud (Virtual Data Center), hosted in Vodafone-operated data centers. He also noted the completion of Romania’s first government cloud and discussed the company’s internship program, which helps young people start and grow their careers at Vodafone.

 

Panel – The Role of AI in Technology: How Artificial Intelligence is Changing the Way We Work and Live

Speakers included Gabriel Pavel (Regional Director, Fsas Technologies), Mihai Logofătu (Co-founder & CEO, Bittnet Group), Tudor Cosăceanu (Regional VP, UiPath), and Bogdan Tudor (Founder, StarTech Team and CEO, Class IT Group).

Gabriel Pavel announced that Fsas Technologies will launch an energy-efficient AI processor in 2026 aimed at helping the entire industry.

Bogdan Tudor revealed that, for the past eight years, his organization has solved about 80% of user support requests and 90% of tech incidents using AI-powered software bots that diagnose and solve issues autonomously.

Mihai Logofătu explained how Bittnet integrates AI across internal processes, from employee relations to investor communications. “The impact is real. This is an irreversible transformation that must be viewed in its entirety,” he said.

Tudor Cosăceanu demonstrated UiPath solutions like “Document Understanding,” which uses AI to read and interpret documents, including doctors’ handwritten notes.

 

Inspiration Moment with Virgil Stănescu

Virgil Stănescu—former national basketball team captain and five-time MVP of Romania—shared insights on how we measure success and performance in both sports and technology. He recounted a moment from a podcast with chess legend Garry Kasparov, who said, “Work is talent.” According to Stănescu, we train and work 90% of the time to perform 10% of the time—and passion for that 10% is what makes the difference.

 

Awards Ceremony – DataCenter Forum 2025

The event closed with awards recognizing the most notable contributions in Romania’s data center industry:

  • Cloud Services Provider of the Year: Vodafone Romania – accepted by Dinu Dragomir
  • Hyperscale Regional Development of the Year: Digital Realty – accepted by Alexandros Bechrakis
  • Digitalization of the Year – Public Institutions: General Directorate for Internal Protection, MAI – accepted by Florin Vizireanu
  • AI Infrastructure Solutions for Data Centers: Rittal – accepted by Marius Totolici
  • Press Contribution of the Year: Ziarul Financiar – accepted by Cristian Hostiuc

 

Conclusion

DataCenter Forum 2025 spotlighted the fast-paced transformation of the data center industry, driven by the AI revolution, rising power demands, and sustainability pressures. The surge in GPU clusters and the emergence of AI factories bring new challenges in cooling, energy consumption, and infrastructure stability. The industry is responding with tangible solutions: direct-to-chip liquid cooling, high-density modular systems, microgrids, and AI-powered operations. These technologies are reshaping data centers into smarter, more efficient, and more sustainable ecosystems—with a growing impact on both public and private digital transformation.

 

New reporting requirements for data center operators introduced by the Delegated Regulation (EU) 2024/1364

The IT&C sector is consuming more and more energy, and data centers are expected to account for 3.2% of the EU’s total electricity demand by 2030, a 28% increase compared to 2018, according to data cited by the European Commission. To adapt to this new reality, the Commission adopted in March 2024 the Delegated Regulation (EU) 2024/1364 on the first phase of the establishment of a common Union rating scheme for data centres. This regulation implements the Energy Efficiency Directive (EED) No. 2023/1791 and also lays the groundwork for benchmarking the sustainability of data centers in the European Union, based on a common measurement and calculation methodology.

The Delegated Regulation details the key energy performance indicators (“KPIs”) that data center operators with an IT&C energy demand of at least 500 kW must report to the “European database on data centers.” It also outlines the calculation methods, reporting procedures, and the extent to which this information will be made public.

What does this mean for data center operators? In this article, we answer some of their key questions.

  1. Who is required to report?

As previously mentioned, the Delegated Regulation applies to data center operators with an installed IT energy demand of at least 500 kW. These operators must report to the European database the information and key performance indicators (KPIs) specified in Annexes I and II of the Delegated Act for each data center they operate. The regulation also defines the types of data centers subject to these requirements:

  • “Enterprise data center”: A data center operated by a company solely for the purpose of providing and managing its own IT needs.
  • “Colocation data center”: A data center where one or more customers install and manage their own networks, servers, and storage equipment and services.
  • “Multi-tenant hosting data center”: A data center where one or more customers have access to networks, servers, and storage equipment on which they operate their own services and applications. In this case, both the IT equipment and the building’s support infrastructure are provided as a service by the data center operator.
  2. What do data center operators report?

The information and key performance indicators that must be reported to the European database are listed in the Annexes of the Delegated Act.

Annex I

First, there is general information about the reporting data center:

  • Data center name: The name used to identify and describe the data center
  • Owner and operator of the data center, including the name and contact details of both
  • Location of the data center: The local administrative unit code (LAU code) of the data center’s location.
  • Type of data center: Corresponds to the primary operation of the data center (see above).
  • Year and month of commissioning: The calendar year and month when the reporting data center began providing IT services.

Second, there is information about the operation of the reporting data center:

  • Redundancy level of the electrical infrastructure at high-voltage level/low-voltage (line) level/rack level.
  • Redundancy level of the cooling infrastructure at room level/rack level.

Annex II

Annex II lists the key performance indicators that must be monitored, collected, and reported to the European database, as well as the measurement methodologies.

Energy and sustainability indicators:

  • Installed IT power demand (‘PDIT’, in kW)
  • Data centre total floor area (‘SDC’, in square metres)
  • Data centre computer room floor area (‘SCR’, in square metres)
  • Total energy consumption (‘EDC’, in kWh) of the reporting data center
  • Total energy consumption of information technology equipment (‘EIT’, in kWh)
  • Electrical grid functions
  • Average battery capacity (‘CBtG’, in kW)
  • Total water input (‘WIN’, in cubic metres)
  • Total potable water input (‘WIN-POT’, in cubic metres)
  • Waste heat reused (‘EREUSE’, in kWh)
  • Average waste heat temperature (‘TWH’, in degrees Celsius)
  • Average setpoint information technology equipment intake air temperature (‘TIN’, in degrees Celsius)
  • Types of refrigerants used in cooling and air-conditioning equipment within the computer room floor area of the data center
  • Cooling degree days (‘CDD’, in degree-days)
  • Total renewable energy consumption (‘ERES-TOT’, in kWh)
  • Total renewable energy consumption from Guarantees of Origin (‘ERES-GOO’, in kWh)
  • Total renewable energy consumption from Power Purchasing Agreements (‘ERES-PPA’, in kWh)
  • Total renewable energy consumption from on-site renewables (‘ERES-OS’, in kWh)

ICT Capacity Indicators:

  • ICT capacity for servers (‘CSERV’)
  • ICT capacity for storage equipment (‘CSTOR’, in petabytes)

Data Traffic Indicators:

  • Incoming traffic bandwidth (‘BIN’, in gigabytes per second)
  • Outgoing traffic bandwidth (‘BOUT’, in gigabytes per second)
  • Incoming data traffic (‘TIN’, in exabytes)
  • Outgoing data traffic (‘TOUT’, in exabytes)

Annex III

Annex III lists the sustainability indicators that must be calculated for each data center based on the information and key performance indicators from Annexes I and II, along with the calculation methodologies.

  • Power Usage Effectiveness (PUE)

EDC and EIT, both as defined in Annex II, shall be used to calculate the PUE of a data centre: PUE = EDC/EIT;

  • Water Usage Effectiveness (WUE)

WIN and EIT, both as defined in Annex II (with EIT expressed in MWh), shall be used to calculate the WUE of a data centre: WUE = WIN/EIT;

  • Energy Reuse Factor (ERF)

EREUSE and EDC, both as defined in Annex II, shall be used to calculate the ERF of a data centre: ERF = EREUSE/EDC;

  • Renewable Energy Factor (REF)

ERES-TOT and EDC, both as defined in Annex II, shall be used to calculate the REF of a data centre: REF = ERES-TOT/EDC.
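Taken together, the four Annex III indicators are simple ratios of the Annex II quantities. A minimal Python sketch (variable names mirror the symbols above; the example figures are invented for illustration):

```python
# Sketch of the Annex III sustainability indicators, using the
# Annex II symbols described above (EDC, EIT, WIN, EREUSE, ERES-TOT).
# Illustrative only; consult the Regulation for the binding methodology.

def pue(e_dc_kwh: float, e_it_kwh: float) -> float:
    """Power Usage Effectiveness: total energy / IT energy."""
    return e_dc_kwh / e_it_kwh

def wue(w_in_m3: float, e_it_kwh: float) -> float:
    """Water Usage Effectiveness: water input (m^3) per MWh of IT energy."""
    return w_in_m3 / (e_it_kwh / 1000.0)  # EIT expressed in MWh

def erf(e_reuse_kwh: float, e_dc_kwh: float) -> float:
    """Energy Reuse Factor: reused waste heat / total energy."""
    return e_reuse_kwh / e_dc_kwh

def ref(e_res_tot_kwh: float, e_dc_kwh: float) -> float:
    """Renewable Energy Factor: renewable consumption / total energy."""
    return e_res_tot_kwh / e_dc_kwh

# Example with made-up annual figures for a mid-sized facility:
print(pue(12_000_000, 8_000_000))  # 1.5
```

A PUE of 1.5 means that for every kWh reaching the IT equipment, another 0.5 kWh is spent on cooling, power conversion, and other overheads.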

  3. Annex IV: Who has access to information and what data will be publicly available?

Annex IV lists the information available to the public in the European database on data centers. At both the national (member state) level and the European Union level, the following data will be made available:

  • Total number of reporting data centers
  • Distribution of data centers by size categories
  • Total installed power for information technology (PDIT)
  • Total energy consumption (EDC)
  • Total water consumption (WIN)
  • Average PUE – total, by type, and by size category
  • Average WUE – total, by type, and by size category
  • Average ERF – total, by type, and by size category
  • Average REF – total, by type, and by size category

Annex IV also lists the size categories for reporting data centers, based on the installed power for information technology of the data center:

  • Very small data center: 100-500 kW
  • Small data center: 500-1,000 kW
  • Medium data center: 1-2 MW
  • Large data center: 2-10 MW
  • Very large data center: > 10 MW
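For illustration, the size bands above can be expressed as a small helper function. Note that the 500 kW and 1,000 kW boundaries appear in two adjacent categories in the list, so this hypothetical sketch treats the bands as half-open intervals:

```python
# Hypothetical helper mapping installed IT power (PDIT, in kW) to the
# Annex IV size categories listed above. Boundary values that appear in
# two adjacent categories are assigned to the lower band here.
def size_category(p_dit_kw: float) -> str:
    if p_dit_kw < 100:
        return "below scheme categories"
    if p_dit_kw < 500:
        return "very small"
    if p_dit_kw < 1_000:
        return "small"
    if p_dit_kw < 2_000:
        return "medium"
    if p_dit_kw <= 10_000:
        return "large"
    return "very large"
```

Since the reporting obligation starts at 500 kW, the “very small” band covers facilities that fall under the rating scheme’s categories but below the reporting threshold.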

The Delegated Regulation allows only the disclosure of data from Annex IV, aggregated at the member state and EU level. The Commission and member states will maintain the confidentiality of information from Annexes I and II. If information is transmitted through national reporting systems, member state authorities will ensure its confidentiality. Furthermore, colocation data center operators may collect KPIs from Annex II from their clients through an anonymous internal mechanism.

  4. How to report and deadlines

The Regulation requires the reporting of data from the Annexes by September 15, 2024, then by May 15, 2025, and subsequently on an annual basis. Therefore, less than two months remain until the next reporting deadline. The communication of information will be made to the European database using a national reporting system that each member state must implement. The information and key performance indicators to be reported must cover the calendar year preceding the reporting year. If a data center has been operational for less than a year, the operator will report only for the respective operational period. Unfortunately, only a few of the member states that have transposed the EED into national legislation have established a national reporting system – among these are Germany and Austria.

Visit this link to read the full text of Delegated Regulation (EU) 2024/1364.

The Delegated Regulation is not the only sustainability regulation targeting the data center industry. For example, the Corporate Sustainability Reporting Directive (CSRD) requires organizations to report on ESG (Environmental, Social, and Governance) matters, including value-chain (Scope 3) greenhouse gas emissions. This means that data centers may be required to report their own emissions and provide customers with details about their carbon footprint. The Taxonomy Climate Delegated Act establishes criteria for evaluating economic activities that contribute to the European Union’s climate objectives.

The European Union is committed to achieving climate neutrality by 2050, and data centers are increasingly subject to sustainability regulations. We anticipate these regulations will allow the data center industry to leverage sustainability as a strategic advantage and a catalyst for innovation.

The UK is betting on AI. “AI Opportunities Action Plan 2025,” an example for European states

The whole world is buzzing with excitement over AI. While the U.S. and China are engaged in a veritable “chip war,” billion-dollar initiatives and investments are pouring in, with each country aiming for a share of the vast potential of Artificial Intelligence. Recently, Donald Trump announced the Stargate project, through which the U.S. will build AI infrastructure worth up to $500 billion. On the other hand, China announced an $8.2 billion investment fund, created through a public-private partnership, adding to the $13.8 billion fund launched last year.

In February 2025, Europe responded forcefully. During the “Artificial Intelligence Action Summit,” European Commission President Ursula von der Leyen launched InvestAI, an initiative through which the European Union will mobilize €200 billion for AI investments, including a €20 billion fund for the development of gigafactories, and the construction of 12 next-generation AI hubs.

Romania has entered this race timidly, launching its national AI strategy in 2024. Last year, the Technical University of Cluj-Napoca announced the construction of the first Artificial Intelligence research center in Romania, through an investment of over 105 million RON. Some countries, like the United Kingdom, recognize the profound implications of AI’s impact on humanity’s evolution and are mobilizing exemplarily. In January 2025, the UK took a decisive step in the global race for Artificial Intelligence by launching its national strategy, the “AI Opportunities Action Plan.” What lessons can Romania take from the United Kingdom’s example?

What is the “UK AI Opportunities Action Plan 2025”?

The strategy fully adopts the 50 recommendations made by Matt Clifford CBE (tech entrepreneur and chair of the UK’s Advanced Research and Invention Agency, ARIA). It focuses on infrastructure development, flexible regulation, and accelerating economic growth. With a €16.6 billion commitment from the private sector and the prospect of over 13,000 new jobs, the plan not only drives innovation but also strengthens the UK’s position as a strategic hub for AI. It complements the €29.7 billion in investments already announced in October 2024 at the International Investment Summit.

In the introductory section of the “Plan,” we learn that although the United Kingdom is the third-largest AI market globally and hosts an impressive number of talents and leading companies such as Google DeepMind, ARM, and Wayve, a new strategy is necessary. Why? The UK risks losing ground to the rapid AI advancements of the US and China. At the same time, “The risks from underinvesting [in AI] and underpreparing, though, seem much greater than the risks from the opposite”, the document states. The “AI Opportunities Action Plan” is divided into three sections, each corresponding to a government commitment. Below, we have summarized the concrete measures that the United Kingdom will take in the coming period.

  1. Investments in the foundation of AI

The United Kingdom needs world-class computing and data infrastructure, access to talent, and regulation. As a result, the government aims to ensure access to a sufficient number of data centers and computing power to support innovation and drive the development of future industries. To achieve this goal, it will invest in Sovereign AI compute (owned by or allocated to the public sector) to respond quickly to national priorities such as AI research and the support of critical services. At the same time, it will promote domestic computing (capabilities owned and operated by private companies), which will generate AI-based jobs and businesses. Additionally, it will develop international partnerships to access complementary resources (“international compute”) and support joint AI research.

In practice, the United Kingdom will develop, within six months, a long-term plan for the country’s AI infrastructure needs, supported by a 10-year investment commitment. Other measures include the following:

  • Starting in the next six months, the capacity of the AI Research Resource (AIRR) will be expanded at least 20-fold by 2030.
  • Sovereign computing resources will be strategically allocated through the appointment of “AIRR Program Directors” with extended autonomy, who will focus on specific missions.
  • “AI Growth Zones” (AIGZ) will be created to facilitate the rapid construction of AI data centers.
  • International partnerships will be established to increase the types of computing capabilities available to researchers and to stimulate collaborations in AI research.
  • As developers need access to quality data, the UK will unlock public and private datasets. In this regard, a National Data Library (NDL) will be created, which will identify datasets with the greatest economic and social potential. NDL will also create best practices for the secure publication of datasets that can be used to train AI.
  • Inefficient regulation could hinder AI adoption in key sectors. Regulatory authorities will be required to publish annual reports on how they have supported innovation and AI-driven growth in their fields.

In addition, the United Kingdom aims to train, retain, and attract the next generation of AI scientists and entrepreneurs. In the short term, this means training “tens of thousands of AI professionals by 2030,” including through “alternative routes” such as internships and employer-led professional training programs. The UK also aims to take inspiration from Singapore, which created a national online platform for AI skills development, and South Korea, which integrated digital literacy and AI into its education system.

  2. Accelerating AI adoption across the economy

The United Kingdom plans to make AI a central element of how services are delivered and productivity is achieved. At the same time, the government will focus on its role as a major user and client of AI services/products to support the adoption of new technologies in the private sector. To achieve these goals, the UK will adopt a flexible approach, “SCAN → PILOT → SCALE.”

  • SCAN means the government will constantly monitor the evolution of AI technologies and learn about their new use cases to integrate them effectively into its projects. In practice, this means appointing an AI specialist for each mission, creating a government team that analyzes the market, and collaborating with AI providers to understand and influence the development of future technologies.
  • PILOT refers to the rapid development of prototypes and the swift procurement of public services to launch pilot projects in high-impact areas. How? By creating a consistent framework of best practices that evolves over time for the development and procurement of AI technologies, rapid prototyping, and testing for key projects. Additionally, by providing an efficient experimentation environment with quick access to datasets, language models, and computing resources, and a fast, phased AI procurement process that facilitates the rapid funding of pilot projects and reduces bureaucracy as investments grow.
  • SCALE focuses on expanding and applying successful AI solutions across various fields so that they can help citizens and improve productivity and efficiency. To achieve this, the government will support successful pilot projects, fund them, and expand them nationwide. This process also includes national AI tenders to enable rapid adoption across different sectors.
  3. “Position the UK to be an AI maker, not an AI taker”

By 2029, the UK government estimates that AI will become a dominant factor in economic performance and national security, which is why it will support research and development of frontier AI capabilities, including in emerging fields such as AI for science, robotics, and Embodied AI. In this context, UK Sovereign AI will be created, a government-supported research lab that will collaborate with the private sector to maximize economic benefits. UK Sovereign AI will invest directly in companies, create AI Growth Zones, and form international partnerships. It will also ensure responsible access to the country’s most valuable datasets and research, support UK AI organizations in priority national projects, and attract external talent, including by recruiting promising founders or executive directors. Furthermore, it will facilitate deep collaboration with the national security community.

Take a look at the full text of the UK AI Opportunities Action Plan.

What can we learn from the UK?

The UK AI Opportunities Action Plan provides a remarkable example of an advanced AI strategy, similar to those of countries like Germany and France. Unlike those countries, however, the UK places a stronger emphasis on the practical details of implementing its national AI policy. Romania seems to have fallen into the same trap as Germany and France: since 2024, our country has had a national AI strategy, but it still lacks a clear implementation plan.

In this context, we see several directions to learn from. Countries with advanced AI strategies invest heavily in research, development, and the implementation of Artificial Intelligence technologies, establishing clear policies for their integration into essential areas such as education, healthcare, industry, and national security. These strategies promote strong partnerships between the public and private sectors (including internationally), attract talent and resources, and create a legal framework that supports innovation. Additionally, such an approach involves consistent investments in digital infrastructure and data management, facilitating access to cutting-edge technologies and maximizing the economic and social impact of AI.

Although we cannot match the UK in terms of AI investments, as we rely heavily on European funds (through programs such as the PNRR, Digital Europe, and Horizon Europe), Romania can leverage its strengths. These include a tradition of mathematics and computer science education, which has trained hundreds of thousands of experts working at the largest international tech companies, as well as high-speed internet infrastructure, essential for the development of the IT industry. Additionally, Romania benefits from lower labor and energy costs compared to other European countries, a stable energy network, and great potential in the green energy industry.