Category: News

Energy, batteries, and data centers: the integrated model through which Monsson aims to turn energy hubs into digital hubs

The data center industry is going through a moment similar to that of renewable energy two decades ago: an accelerating technology wave, a need for significant capital, and a regulatory framework that must keep pace with investment. In this context, Monsson is betting on an integrated model – renewable energy, battery energy storage (BESS), and co-located data centers – to put Romania on the European map of critical infrastructure for AI and cloud.

In this interview, Sebastian Enache, Head of M&A at Monsson, explains the lessons learned from the development of the renewables market, the role of storage in supporting large energy consumers, and the company's ambition to develop a scalable data center platform of up to 200 MW across several strategic regions of the country.

As a side note, Sebastian Enache will also take the stage at DataCenter Forum 2026, in a panel exploring the similarities and synergies between the energy sector and the data center industry.

 

DataCenter Forum: In renewable energy, Romania has gone through stages of maturation and standardization. Which lessons from the development of the renewables market do you think could be applied to the local data center industry, particularly with regard to permitting and grid integration?

Sebastian Enache, Head of M&A at Monsson: In renewable energy, Romania began building a strategic direction as early as 1999, and in 2003 the first wind turbines of large capacity for that time were installed. Although photovoltaic parks were also being discussed, solar technology was not yet efficient or competitive enough, so priority was given to wind, the most effective solution in that context. Almost 25 years later, both technologies have evolved significantly; they are standardized, bankable, and almost equally widespread.

The main lesson is that every technology wave has critical, strategic moments in which the regulatory framework, administrative capacity, and investors' courage make the difference between stagnation and acceleration. In the early 2000s, everyone was talking about renewables, but few had real experience, legislation was incomplete, and permitting processes were unclear. Only a handful of bold investors chose to enter a still-immature field, taking on significant risks.

Today I see an obvious parallel with the data center industry. If in renewables we are talking about large equipment with a lifespan of 20–40 years that must produce energy in a stable, predictable way, in the data center space we are dealing with critical infrastructure that is just as complex and designed for the long term, but which hosts technologies that change every 2–3 years. Servers can be replaced relatively quickly, but the energy, cooling, and connectivity infrastructure remains a long-term investment and requires legislative and operational predictability.

From the renewables experience, an essential lesson for data centers is the importance of a clear, coherent, and digitalized permitting process, as well as a transparent grid-integration strategy. A lack of coordination between investors, grid operators, and authorities can delay critical projects for years. Conversely, planning grid capacity in advance and defining priority development zones can accelerate investment and reduce risk.

I believe there is a natural convergence between the two industries. Data center professionals must work actively with the authorities, not only to highlight the importance of these investments, but also to help define a legislative framework adapted to technological realities. Rapid construction, efficient grid connection, and siting in suitable areas with robust energy infrastructure are critical if Romania is to capitalize on this new development wave, as it eventually did in renewable energy.


DCF: There is strong interest at Data Center Forum in sustainable investment. How do you see the role of energy storage (BESS) and renewable sources in supporting the operations of energy-intensive data centers – for AI or cloud, for example – in Romania and Europe?

Sebastian Enache: The interest in sustainable investment in the data center space is natural, especially given the accelerating consumption driven by AI and cloud. From my perspective, battery energy storage (BESS) and renewable sources will play an essential role in supporting these operations in Romania and across Europe.

BESS technology is relatively new at large scale, but over the past five years it has developed spectacularly, both technologically and economically. Whereas in the past integrating a storage system was not financially feasible for most projects, today almost all modern wind and solar parks include BESS, and in parallel we are seeing the development of stand-alone systems dedicated to ancillary services. BESS is no longer just an accessory; it is an element that provides the grid with flexibility, stability, and predictability.

In this context, the parallel with data centers is obvious. Until recently, BESS was not seen as an essential component of a data center, where the emphasis was on classic redundancy – diesel generators and UPS units. But as consumption grows and energy price volatility becomes a reality, storage systems will, in my view, become a standard component. Why? Because they allow cost optimization: buying energy at low prices during surplus periods and using it at peak times. At the same time, they provide an additional layer of redundancy and can support the direct integration of local renewable production.
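The cost-optimization mechanism described above (charging during surplus hours, discharging at peak) can be sketched as a simple one-day arbitrage estimate. All figures below – prices, battery capacity, round-trip efficiency – are hypothetical, not Monsson's numbers:

```python
# Illustrative sketch of battery price arbitrage for a large consumer.
# All figures (prices, capacity, efficiency) are hypothetical.

def daily_arbitrage_savings(hourly_prices, battery_mwh, efficiency=0.9):
    """Estimate savings from shifting battery_mwh of energy from the
    cheapest hour of the day to the most expensive one."""
    # Energy bought cheap, grossed up for round-trip losses ...
    charge_cost = battery_mwh / efficiency * min(hourly_prices)
    # ... displaces the same amount of energy at the peak price.
    avoided_cost = battery_mwh * max(hourly_prices)
    return avoided_cost - charge_cost

# Hypothetical day: 40 EUR/MWh off-peak, 120 EUR/MWh at peak, 100 MWh battery.
prices = [40] * 12 + [120] * 12
print(f"{daily_arbitrage_savings(prices, battery_mwh=100):.0f} EUR/day")
```

Even this toy calculation shows why the spread between off-peak and peak prices, net of round-trip losses, is what makes storage attractive for a constant, price-exposed load.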

Moreover, BESS makes it easier to connect a data center efficiently to a green energy mix, reducing the impact on the grid and increasing operational stability. For AI data centers, where power demand is high and constant, the combination of renewables, storage, and intelligent energy management can make the difference between a bankable project and one vulnerable to price and availability risks.

Monsson's vision is clear: a data center is better positioned next to a renewable energy hub integrated with storage capacity than as an isolated consumer on the grid. By matching the initial investment in production and storage to the data center's consumption profile, the overall return of the project can increase significantly – our estimates indicate a potential of over 25%. In this way, sustainability is not just an image objective; it becomes a real competitive advantage.


DCF: Monsson is known mainly for its renewable energy and storage projects. What led you to expand into data center infrastructure, and what synergies do you see between the two industries? (I am thinking of an integrated energy + batteries + data centers model.)

Sebastian Enache: True, Monsson is known mainly for developing renewable energy and storage projects, but the expansion into data center infrastructure is a natural evolution of our vision. As I mentioned earlier, our objective is to attract as much relevant energy consumption as possible into the proximity of our production hubs. We believe the future belongs to the integrated model – energy + batteries + data centers.

In essence, the secret is to bring consumption next to production. In the classic model, energy is produced in one area, transported over long distances, and consumed somewhere else, which means losses, high connection costs, and additional pressure on the grid. By siting data centers near wind and solar parks integrated with BESS systems, we can optimize the entire investment chain: we reduce transmission infrastructure costs, shorten implementation times, and create a far more efficient energy ecosystem.

For the data center developer, the advantage is strategic: access to competitive, predictable and, increasingly important, green energy. For us, as a developer of renewable and storage capacity, integrating a large, constant consumer such as a data center increases the project's financial stability and improves the return on investment. In practice, it is an additional optimization of the investment for all parties involved.

The synergies are also obvious from a technical perspective. Batteries make it possible to balance renewable production and adapt it to the data center's consumption profile. At the same time, the data center provides relatively stable consumption, which helps monetize the energy projects. In such a model, the national grid becomes a partner rather than the only solution, and the pressure on public infrastructure is reduced.

We see this integrated model as a competitive advantage for Romania. With Europe looking for locations to develop AI and cloud capacity, the availability of green energy at scale is an essential criterion. If we can offer investors an integrated package – land, renewable energy, storage, and fast connection solutions – then we can turn energy hubs into digital hubs.

For Monsson, this expansion does not mean leaving our area of expertise; it means leveraging it in a more complex and more efficient way. We believe the future of critical infrastructure is an integrated one, and the combination of energy, batteries, and data centers is a logical step in that direction.


DCF: Developing a 50 MW pilot data center in Romania is an ambitious undertaking. What are the main similarities and differences between the dynamics of investment in renewable energy and in data centers – in terms of regulation, financing, and scalability?

Sebastian Enache: Developing a 50 MW pilot data center is indeed an ambitious step, but from our perspective it rests on a solid foundation. We already have 50 MW of grid connection capacity through our hybrid project at Mireasa, where we operate 50 MW of wind, 35 MW of solar, and approximately 200 MWh of batteries. This combination gives us ideal conditions to support a data center with high consumption, securely and over the long term.

In terms of similarities, both renewables and data center investments are capital-intensive, depend on grid access, and require legislative and contractual predictability to be financeable. In both cases, investors look for a stable regulatory framework, visibility on revenues, and a clear development horizon.

The differences appear in the technology dynamics and the regulatory framework. Renewable energy today benefits from a mature legislative framework, whereas for data centers there is not yet specific legislation or clearly defined zones where development should be prioritized. That is why we want to work actively with the authorities to help create a predictable and competitive framework.

We are getting involved in this area to support the development of the local data center industry and, just as we were pioneers in large renewable energy projects, we want to be among the first to build an integrated model in this industry as well.


DCF: Is the 50 MW pilot project just the beginning, or is it a replicable model? What plans do you have to turn this initiative into a broader expansion platform in the region?

Sebastian Enache: The 50 MW pilot project is not an isolated undertaking but the beginning of a replicable model that we want to extend nationally and regionally. Our strategy targets data centers of between 50 MW and 200 MW in seven strategic areas of Romania, including Constanța, Satu Mare, and Arad. The choice of these locations is based on access to solid energy infrastructure as well as on geographic positioning and connectivity to transport networks and fiber optics.

Our objective is to build an integrated platform in which renewable energy, battery storage, and large consumption – represented by data centers – operate within an optimized ecosystem. The model developed at 50 MW can be scaled naturally to 100 or 200 MW using the same logic: local energy production, storage capacity, and infrastructure prepared for phased expansion.

We want to capitalize on Romania's strategic geopolitical position. Although most data centers are currently concentrated in Western Europe, we see clear signals that Central and Eastern Europe, including Romania, can become a relevant hub for AI, cloud, and digital infrastructure. Through this platform, we aim to position Romania as a central point on Europe's new digital map.


DCF: What are the biggest challenges you anticipate in integrating data centers with existing renewable energy and storage networks – technically, operationally, or in terms of regulation?

Sebastian Enache: Challenges exist and always will when we are talking about critical infrastructure at large scale. Integrating data centers with existing renewable energy and storage networks brings opportunities as well as technical, operational, and regulatory challenges.

Technically, the main challenge is grid capacity and managing energy flows in a stable, predictable way. Data centers, especially those dedicated to AI or cloud, have high, constant consumption, and the grid must be able to sustain these loads without creating imbalances. A major advantage, however, is that existing renewable projects are already connected and, in many cases, include BESS systems. This creates the conditions for co-locating data centers next to production and storage capacity, reducing pressure on public infrastructure and offering a more efficient solution than in many other countries.

Operationally, the challenge is the intelligent integration of intermittent renewable production with the data center's consumption profile. This is where batteries and advanced energy management systems come in, providing flexibility, redundancy, and cost optimization.

From a regulatory perspective, permitting remains a critical point. There is currently no dedicated framework that treats the development of data centers near energy hubs in an integrated way. Clarity, predictability, and accelerated procedures for strategic projects are needed.

As for us, we bring part of the solution: connection capacity already available, mature technology, committed investment, and land ready for development. The rest depends on collaboration with the authorities and on investors' appetite to seize this opportunity.

World Economic Forum Calls for Greener Data Centers

Data centers are experiencing steady growth in energy consumption, reaching approximately 415 TWh globally in 2024, with estimates of around 945 TWh by 2030 (IEA). Although these figures exceed the national consumption of some smaller countries, data centers are by no means the most energy-intensive industry. However, with the rapid proliferation of AI, interest in this topic is increasing significantly. Recent analyses by MIT have shown that a GenAI cluster can consume seven to eight times more energy than a typical workload. Moreover, beyond the training phase, the actual operation of a large-scale model (inference) can account for up to 90% of the total energy consumed over the model’s lifetime.

And yet, there are sustainable data centers that demonstrate how “overhead” energy (the consumption required for cooling, UPS systems, and other auxiliary infrastructure) can be reduced by up to 84%. Based on a recent analysis by the World Economic Forum, this article presents several strategies through which data centers can become “green” by adopting a unified, ecosystem-wide approach.

Sustainable Data Centers: Current Situation

An analysis published by the European Parliament estimates that a large-scale data center consumes as much electricity annually as 100,000 households. As a result, data centers can be comparable to small cities in terms of energy and water consumption.

For example, in the U.S., they could account for up to 12% of total electricity consumption by 2030 (McKinsey), and in Loudoun County, Virginia, a single cluster consumed 21% of the area’s total energy in 2023. Network reliability remains a critical issue: in Fairfax County, U.S., a minor disruption in 2024 forced 60 data centers to switch to backup generators.

In the EU, data centers represent roughly 3% of total electricity demand, but this percentage varies between countries, exceeding 20% in Ireland. By 2030, this share could reach 30%. In Ireland, some local authorities have already halted data center projects due to overloaded electrical grids and limited availability of green energy.

Beyond the massive energy consumption, data centers also put pressure on other resources. A medium-sized data center uses over 1 million liters of water daily, and global demand for raw materials for digital infrastructure and technologies is expected to grow sixfold, while demand for lithium could increase fortyfold (IEA). At the same time, electronic waste reaches 62 million tons per year, with most data centers recycling only a portion of their infrastructure.

Energy Efficiency Is the Foundation for Sustainable Data Centers

Energy efficiency is the cornerstone of sustainability in data centers. It starts with design and includes end-to-end measurement of consumption. Differences can be significant: the World Economic Forum shows that a data center operating with a PUE of around 1.1 can use up to 84% less overhead energy than one with a PUE close to 2.0, highlighting the importance of good design practices.
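The arithmetic behind such comparisons follows from the standard definition PUE = total facility energy / IT energy, so overhead energy is (PUE − 1) × IT energy. The sketch below uses a hypothetical IT load; the exact percentage depends on which reference PUE values are compared, which is why this toy result differs slightly from the WEF's 84% figure:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# Overhead (cooling, UPS, lighting, ...) is therefore (PUE - 1) x IT energy.

def overhead_mwh(it_energy_mwh, pue):
    return (pue - 1.0) * it_energy_mwh

it_load = 1000.0  # MWh of IT energy, hypothetical
efficient = overhead_mwh(it_load, pue=1.1)  # 100 MWh of overhead
legacy = overhead_mwh(it_load, pue=2.0)     # 1000 MWh of overhead
reduction = 1 - efficient / legacy
print(f"Overhead reduction: {reduction:.0%}")  # 90% with these reference PUEs
```

The key point survives any choice of inputs: at PUE 2.0 the facility burns as much energy on overhead as on computing, while at PUE 1.1 overhead shrinks to a tenth of the IT load.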

Optimizing infrastructure—from energy source and location to cooling technologies—delivers major savings. For example, water-based cooling solutions can reduce energy consumption by up to 40% and improve thermal efficiency by 3.5 times compared to air-based cooling.

The Importance of Workload Optimization

Workload optimization is rarely at the center of sustainability strategies, even though it offers one of the greatest opportunities to reduce energy consumption—especially with the acceleration of AI. Essentially, efficiency means eliminating waste before seeking compensatory solutions: how IT tasks are configured, distributed, and executed determines whether the infrastructure delivers real performance or just additional energy consumption. Here are some ideas to consider for sustainable data centers:

  • Virtualization and consolidation – running multiple applications on the same servers to reduce unused capacity.
  • Intelligent workload placement – allocating tasks to the most suitable and energy-efficient infrastructure.
  • Modernizing legacy systems – new architectures deliver higher performance with lower energy consumption.
  • Workload tuning – adjusting workloads to maximize the use of available hardware.
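As a rough illustration of the first two bullets, consolidation savings can be estimated with a simple linear server power model. The utilization levels and per-server power figures below are invented for the sketch:

```python
import math

# Rough estimate of energy saved by consolidating lightly used workloads
# onto fewer, better-utilized servers. All figures are hypothetical.

def consolidation_power_kw(n_workloads, avg_util, target_util,
                           idle_kw=0.2, peak_kw=0.5):
    """Power draw before/after consolidation, with a linear server
    power model: P = idle + utilization * (peak - idle)."""
    def power(util):
        return idle_kw + util * (peak_kw - idle_kw)
    before = n_workloads * power(avg_util)  # one server per workload
    servers_after = math.ceil(n_workloads * avg_util / target_util)
    after = servers_after * power(target_util)
    return before, after

# 100 workloads at 20% utilization, consolidated onto servers run at 70%:
before, after = consolidation_power_kw(100, avg_util=0.2, target_util=0.7)
print(f"{before:.1f} kW -> {after:.1f} kW")
```

The savings come mostly from eliminating idle power: a server at 20% utilization still draws a large fraction of its peak power, so fewer, busier machines waste far less.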

Circular Economy Strategies

Data centers still exploit too little of the potential of the circular economy, reusing or recycling only a fraction of their infrastructure. A sustainable approach involves interventions across the entire lifecycle of the infrastructure—from energy-efficient systems and advanced cooling solutions to services and technologies that extend equipment lifespan and reduce waste.

But the principles of circularity do not apply only to hardware—they also concern resources like water. Models such as GPT-3 can consume up to 500 ml of water for every 10–50 responses. At billions of interactions globally, this seemingly minor use adds up to a huge footprint.

  • Closed-loop cooling systems, wastewater recycling, and rainwater collection can reduce water consumption by 50–75%. 
  • Intelligent water management systems can further cut usage by up to 25% by using real-time sensors to monitor and adjust water consumption according to cooling demands, leveraging predictive optimization algorithms.
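The control logic in the second bullet can be sketched as a demand-following loop: flow is sized to the measured heat load rather than held at a fixed worst-case rate. The heat loads, the temperature-rise setpoint, and the fixed baseline below are all hypothetical:

```python
# Minimal sketch of sensor-driven cooling-water control. Flow follows the
# measured heat load instead of running at a fixed worst-case rate.
# Setpoints and readings are hypothetical.

def required_flow_lpm(heat_load_kw, delta_t_c=6.0):
    """Water flow (litres/minute) needed to remove heat_load_kw with a
    delta_t_c temperature rise: Q = m * c * dT, c = 4.186 kJ/(kg*C)."""
    kg_per_s = heat_load_kw / (4.186 * delta_t_c)
    return kg_per_s * 60.0  # 1 kg of water ~ 1 litre

# Fixed worst-case flow vs. demand-following flow over four periods:
loads_kw = [300, 450, 800, 600]          # hypothetical sensor readings
fixed = required_flow_lpm(800) * len(loads_kw)
adaptive = sum(required_flow_lpm(q) for q in loads_kw)
print(f"Flow saved: {1 - adaptive / fixed:.0%}")  # 33% with these readings
```

The predictive-optimization systems the WEF describes go further, anticipating load rather than merely reacting to it, but the saving mechanism is the same: never pumping more water than the current heat load requires.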

E-Waste Can't Be Ignored

In 2022, humanity generated 62 million tons of e-waste—82% more than in 2010. By 2030, this volume is expected to reach 82 million tons, growing five times faster than documented recycling, according to the fourth Global E-waste Monitor report. Operators of sustainable data centers can reduce e-waste and environmental impact by avoiding physical destruction of equipment and adopting more sustainable alternatives:

  • Data sanitization and reuse. Instead of physically destroying HDDs and other equipment, data center operators can completely erase sensitive data—either internally, if they have the necessary expertise, or through partnerships with specialized IT asset disposition (ITAD) firms. The sanitized equipment can then be refurbished, resold, or donated to non-profit organizations in underprivileged regions, giving it a second life and reducing environmental impact.
  • Efficient hardware lifecycle management. Adopting modular and easily repairable systems allows individual components—such as power supplies, memory, or processors—to be replaced or upgraded without discarding entire servers. This approach provides multiple benefits: it reduces the frequency of full replacements, optimizes operational costs, and significantly lowers waste volume.

Ultimately, the sustainability of a data center depends on precise consumption monitoring and a holistic approach to infrastructure. When facilities are treated as continuously evolving systems, operators can turn high energy consumption and resource waste into real opportunities for efficiency and ongoing optimization.

Romania on the Rise: Featured in the EUDCA 2026 Data Center Report

The European Data Centre Association (EUDCA) recently published the State of European Data Centres 2026 report, showing that Europe is investing heavily in its digital future: between 2026 and 2031, data centers are expected to attract €176 billion. While the FLAP-D region dominates the scene with over 6,600 MW of IT power in 2026, Central and Eastern Europe (CEE) is preparing to catch up, totaling 883 MW. Romania is mentioned three times in the report, ranking among the fastest-growing colocation markets in the region, confirming that the local digital infrastructure is entering a phase of expansion.

EUDCA 2026 Highlights Romania and CEE Growth

In Central and Eastern Europe (CEE), Switzerland currently holds the largest IT capacity in the colocation sector (302 MW in 2026, 464 MW in 2031), but Poland could surpass it, with estimated growth from 197 MW in 2024 to 511 MW in 2031 (CAGR 15%), according to EUDCA. Poland maintains an annual growth of 15% in the hyperscale sector as well.

Alongside Poland, Romania is among the markets with the highest annual growth (14%), with IT capacity projected to increase from 27 MW in 2025 to 66 MW in 2031. Both countries are only surpassed by Croatia (30% annual growth) in the region.

EUDCA notes that although Southern Europe records the fastest growth, with Spain, Italy, and Portugal benefiting from new (transatlantic and Mediterranean) cables, cloud expansion, and renewable energy, Central and Eastern Europe shows a more varied growth pattern, with Poland and Romania leading the increase while other markets develop more slowly.

Rapid Expansion and Record Investments

According to the report, the traditional colocation market is stabilizing, while large, scalable data centers are attracting huge investments. The FLAP-D region remains Europe’s largest data center cluster, but its growth is slowing due to energy shortages, lack of land, and difficulties in obtaining permits. Development is expanding toward Northern Europe, Southern Europe, and Tier-2 cities (medium and small).

Cities such as Madrid, Milan, Warsaw, Zurich, and Brussels are becoming international hubs. However, according to the EUDCA report, limited energy access is blocking many investments, and Europe risks being unable to triple its data center capacity by 2035 as planned.

Colocation has become the engine of Europe’s digital infrastructure, with IT capacity growth projected to exceed 23.8 GW by 2031. The 27 EU member states will contribute 17.8 GW, with most of the new capacity being scale colocation.

State of European Data Centres 2026

Key figures for 2026–2031 in Europe:
• €5–6 billion – estimated annual investments in the traditional colocation sector (retail/wholesale).
• €25–26 billion – estimated annual investments in large AI-dedicated data centers (“scale colocation”/AI hubs).
• 27% – projected annual growth of colocation data centers.

Major Socioeconomic Impact

Data centers are a pillar of the European economy: they attract billions of euros in investment and are expected to create over 300,000 new jobs. By 2031, the data center industry could support approximately 778,000 full-time employees across Europe. However, challenges remain – according to EUDCA market research, in 2025 the main obstacle for most operators in recruiting staff for data centers is the lack of specialized study programs.

The same report also notes that last year, the colocation sector contributed €53 billion to the EU GDP, and by 2031, the data center industry is expected to reach a total contribution of €137.5 billion, growing at 16.3% annually.

• At the community level, the data center sector contributes through infrastructure modernization and public-private partnerships. Increasingly, operators support green projects, such as reusing heat for urban heating networks or programs that help stabilize the electrical grid. In addition, they invest in renewable energy through long-term contracts (PPAs).

• Data centers also support digital inclusion, providing local connectivity and access to cloud services for SMEs. Examples include the 250 MW Pelagos campus in Gibraltar, which features a public leisure area, and the Microsoft community fund in Dublin (Clondalkin), which provides €100,000 annually for local digital education projects.

Sustainability for Greener Data Centers

In 2024, data centers in the European Union (colocation, hyperscale, and enterprise) consumed approximately 57.9 TWh, representing 2.1% of the EU’s total electricity consumption (the colocation sector accounts for 48% of this, or 27.6 TWh). The majority (over 90%) comes from renewable sources.

Operators primarily use Guarantees of Origin (GoOs) certificates, but long-term Power Purchase Agreements (PPAs) are gaining ground, covering 34% of total consumption. Increasingly, “high-impact GoOs” are being used, which support the construction of new solar and wind parks, align consumption with production, and integrate with the local grid, directly contributing to the development of renewable infrastructure in Europe.

Additionally, water usage is now widely monitored and measured, with most operators implementing conservation measures (e.g., optimizing cooling system controls, using recycled/non-potable water).

Key figures summarizing resource efficiency in European data centers:
• Colocation data centers have an average PUE of 1.36, hyperscale facilities are more efficient, while small enterprise facilities remain less efficient. The overall European average is 1.40.
• PUE values vary across Europe: Nordic and Baltic countries reach an average of 1.19, Northwest Europe – 1.45, and Southern Europe and Central & Eastern Europe record 1.57 and 1.54, respectively, influenced by climate, design, and infrastructure age.
• Average WUE (Water Usage Effectiveness) in colocation data centers: ~0.31 L/kWh
• 62% of operators use water-based cooling, though not necessarily at all sites.
• Germany introduced, through its national version of the Energy Efficiency Act, the first European requirement for heat reuse in new data centers.
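For intuition, the WUE figure above translates directly into daily water volumes, since WUE is defined as litres of water consumed per kWh of IT energy. The 10 MW site in the sketch below is hypothetical:

```python
# WUE (Water Usage Effectiveness) = litres of water per kWh of IT energy.
# Daily water use follows directly from the IT load.

def daily_water_litres(it_load_mw, wue_l_per_kwh):
    it_kwh_per_day = it_load_mw * 1000 * 24
    return it_kwh_per_day * wue_l_per_kwh

# A hypothetical 10 MW colocation site at the European average WUE of 0.31 L/kWh:
print(f"{daily_water_litres(10, 0.31):,.0f} litres/day")  # 74,400 litres/day
```

The same formula shows why siting and cooling design matter: at the Nordic-style air-side efficiencies implied by low WUE values, the daily volume drops proportionally.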

Steady Pace of Investments in Hyperscale Data Centers

In Europe, annual investments in data centers are expected to remain around €7 billion. Announcements from hyperscalers seem to exceed this figure because they also include IT and other related investments, with some expenditures occurring after 2031. Neocloud operators are gaining ground alongside established hyperscalers, differentiating themselves through greater architectural flexibility, higher density, and faster deployment times.

Investments in hyperscale campuses remain strong, with standard projects of 100–500 MW IT, while neocloud operators accelerate capacity delivery using pre-designed, liquid-cooling–ready modules.

Investments in hyperscale data center construction and installation (2024–2031) are projected as follows:
• 2025: Europe: €5.4 billion | EU-27: €4.1 billion
• 2026: Europe: €7.0 billion | EU-27: €5.6 billion
• 2027: Europe: €7.7 billion | EU-27: €6.2 billion

Despite regulatory, geopolitical, and cybersecurity challenges, Europe’s data center industry is accelerating, with growth showing no signs of slowing. Explore the full State of European Data Centres 2026 report for the complete insights.

Data Center Trends 2026: Industrialization Becomes Reality

The data center sector is ramping up investments starting in 2026, with a massive global impact expected: by 2030, around USD 3 trillion could be invested, according to Moody's and JLL. Europe is keeping pace: in 2026 alone, the EU may invest over EUR 50 billion, while Romania, entering the scene with the Black Sea AI Gigafactory, could attract around EUR 5 billion through this project.

The rules of the game are changing fast: speed, modularity, and optimization are becoming the new industry standards. Read on to discover the trends that will shape data centers this year.

Data Centers Enter Industrialization: Faster, Modular, Repeatable

 

Accenture analysts predict that in 2026, the data center sector will enter a new stage: industrialization, where speed and predictability of delivery are decisive. Ambitious timelines are now the standard, and those who build quickly gain market share, although this pace accelerates costs and puts pressure on supply chains.

Design standardization, modular construction (using standardized, off-site prefabricated units), and digital tools such as Building Information Modeling (BIM) allow for risk reduction and faster project completion, while long-term partnerships and mature supply chains become essential for success, according to Soben/Accenture.

Data centers are no longer built as one-off projects but as repeatable products. Modularization and platform-based design mean using prefabricated components, digital models, and automated design rules to build faster and more safely. Because more components are produced in factories rather than directly on-site, they can be pre-tested so that they arrive ready for assembly and free of defects.

 

  • In 2025, Uptime Intelligence identified 30 proposals for campuses over 1 GW (https://uptimeinstitute.com/resources/research-and-reports/five-data-center-predictions-for-2026) and nearly 100 projects of hundreds of MW globally, in addition to the 200 already existing.
  • Thus, gigawatt-scale campuses are emerging, including in Europe, which require the same type of design, energy, and cooling systems that can be easily replicated from one site to another.
  • Parametric MEP (Mechanical, Electrical, Plumbing) design allows for quick changes by adjusting values rather than redoing the entire project, enabling fast, standardized, and easily expandable construction.
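The parametric idea in the last bullet can be illustrated with a toy sizing model: derived quantities are recomputed from a few input parameters instead of being redrawn by hand. The sizing rules, module sizes, and design PUE below are invented for the sketch and are not from the Soben/Accenture material:

```python
import math
from dataclasses import dataclass

# Toy illustration of parametric sizing: change one input and every
# derived quantity is recomputed. All rules and figures are invented.

@dataclass
class HallParams:
    it_load_mw: float
    pue: float = 1.3           # hypothetical design PUE
    ups_redundancy: int = 2    # two independent feeds, purely illustrative

    @property
    def facility_load_mw(self) -> float:
        return self.it_load_mw * self.pue

    @property
    def cooling_load_mw(self) -> float:
        # Essentially all IT power ends up as heat to be rejected.
        return self.it_load_mw

    @property
    def ups_modules(self) -> int:
        # Hypothetical 1 MW UPS modules, duplicated per redundant feed.
        return math.ceil(self.it_load_mw) * self.ups_redundancy

# Doubling the IT load re-sizes the whole hall without redoing the design:
for mw in (10, 20):
    h = HallParams(it_load_mw=mw)
    print(mw, round(h.facility_load_mw, 1), h.cooling_load_mw, h.ups_modules)
```

Real parametric MEP tooling operates on full 3D models rather than a handful of scalars, but the principle is the same: the design rules live in the parameters, so an expansion is a value change, not a redesign.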

 

Edge Data Centers Take Off

 

Edge data centers are growing rapidly, closer to cities and industrial areas, driven by the fast adoption of 5G technology and IoT devices. According to Research and Markets, cited by Soben, the market is expected to grow from USD 15.4 billion in 2024 to USD 39.8 billion in 2030 (https://sobencc.com/news/data-centre-trends-2026/).

 The number of IoT connections is projected to increase from 19.9 billion in 2025 to 60.6 billion in 2034 (Statista), and real-time applications—telemedicine, autonomous vehicles, industrial automation—make modular edge data centers essential. Large operators are expanding their geographic presence to avoid energy grid congestion and to leverage local renewable energy sources. In 2026, location, deployment speed, and last-mile resilience therefore become key competitive advantages.

 

Power Grids Under Pressure: What Makes a Difference in 2026

 

Data center energy consumption is rising rapidly and could reach 1,050 TWh in 2026, driven by high demand for AI and GPUs, according to the International Energy Agency (IEA). In major data center markets and hubs in the U.S., Europe, and Asia, the average time to secure a connection to the public electricity grid exceeds four years. This situation puts pressure on grids, causing bottlenecks and delays, while operators are looking for hybrid and alternative solutions.

Immediate and hybrid solutions include on-site generation (https://www.cbre.com/insights/books/european-real-estate-market-outlook-2026/data-centres) using gas (+ renewable energy), gas turbines or reciprocating engines, microgrids, and “behind-the-meter” systems that rely on gas or fuel cells as the primary source, with the grid as backup. At the same time, operators combine green energy (solar, wind) with battery storage to increase reliability and delivery speed, according to CBRE.

  • Operators are even acquiring land near nuclear plants or repurposing former plants for local generation and storage, preparing for future technologies such as green hydrogen and small modular reactors (SMRs).
  • Additionally, they are collaborating more closely with utility providers from the feasibility stage and exploring solutions that allow the data center to become an active grid participant through large-scale storage and demand response programs.

While gas remains a practical short-term solution in the U.S., in Europe, the focus is on renewable energy and “private wire transmission” solutions (dedicated lines connecting a specific energy producer directly to a consumer). In EMEA, this mix can reduce costs by up to 40% (https://www.jll.com/en-us/insights/market-outlook/global-data-centers) compared to reliance on the public grid, according to a JLL report.

 

Cooling Technologies of the Year?

 

Liquid cooling is becoming the standard in AI data centers, with adoption rising from 14% in 2024 to 33% in 2025, and it is expected to reach 40% in 2026, according to TrendForce, cited by Accenture. In the near future, data centers will use a combination of air-based and liquid-based cooling, while water-intensive evaporative methods will be phased out due to sustainability concerns and water scarcity.

 New liquid cooling technologies include cold plates, direct-to-chip, microfluidic, immersion, and two-phase cooling. These solutions promise higher energy efficiency and reduced water consumption, with energy savings for cooling of up to 50–60%. Liquid cooling is therefore expected to become mainstream, not just for specialized applications, as rack densities exceed the limits of traditional cooling technologies.

 The focus in the coming years will be on standardization and interoperability, ensuring the integration of mixed cooling systems with power supply, monitoring, and operations so that efficiency and sustainability are maximized, regardless of the chosen cooling method.

 

 

AI Optimizes Design, Operation, and Performance of Data Centers

 

AI is transforming the data center industry, from design to operation and optimization. According to Gartner, investments in AI-optimized servers and infrastructure are expected to grow by 19% in 2026 (https://www.gartner.com/en/newsroom/press-releases/2025-10-22-gartner-forecasts-worldwide-it-spending-to-grow-9-point-8-percent-in-2026-exceeding-6-trillion-dollars-for-the-first-time), although supply constraints may limit short-term demand.

Automated tools such as BIM, machine learning algorithms, and AI-enabled equipment will increasingly optimize airflow, energy consumption, and cooling. Additionally, digital twins and integrated AI platforms, which centralize data, allow real-time testing and adjustments, maximizing efficiency and uptime, according to global consulting firm Black & White Engineering.

 At the same time, AI-based DCIM tools are becoming increasingly sophisticated, automating maintenance, predicting failures, and optimizing performance in real time, enhancing sustainability and operational efficiency. On the other hand, operators’ roles are evolving—they must interpret AI models, manage complex systems, and coordinate intelligent equipment equipped with IP interfaces.

 

Chips, Fiber Optics, and “New Electrification Topologies” (Uptime Intelligence)

 

On one hand, hyperscalers have already started developing their own AI chips to reduce dependence on Nvidia and increase operational efficiency. Google produces TPUs (Tensor Processing Units), Amazon is developing Trainium chips for internal use, and AMD is preparing to launch a new generation of GPUs in 2026.

In addition, technologies such as Hybrid Electrical/Optical Fabrics, Silicon Photonics, and Layer-1 Encryption enable data centers to handle large data volumes and unpredictable flows, near-instant connections (~10 µs), and secure sensitive data in transit.

 Uptime Intelligence also anticipates that increasing data center density will drive the emergence of new electrical technologies, including medium-voltage (MV, 11 kV+) distribution closer to IT equipment, the reintroduction of direct current (DC), and products such as new MV UPS systems, 800V DC UPS, and solid-state transformers. These solutions will reduce costs and complexity while improving overall energy performance, even in small and medium-sized data centers.

 

Vacancy Rates Fall, Construction Costs Rise

 

In Europe, data center vacancy rates continue to decline: after falling below 10% at the end of 2024, they are expected to reach a historic low of 6.5% by the end of 2026 (https://www.cbre.com/insights/books/european-real-estate-market-outlook-2026/data-centres), according to CBRE. Beyond hyperscalers, more companies—including those offering GPU-as-a-Service (GPUaaS)—are now seeking to lease tens of megawatts of capacity across different regions in Europe. Even if over 750 MW are added this year—the equivalent of France’s entire colocation market in 2025—it will not be enough to meet demand, driving up prices and the need for innovative solutions.

  • Construction costs have risen significantly in recent years. Between 2020 and 2025, the global average cost increased from USD 7.7 million to USD 10.7 million per MW, an annual growth of 7%, and for 2026, JLL estimates an additional 6% rise to USD 11.3 million per MW.
  • The main factors in site selection remain speed of network connectivity, followed by community support, latency, and proximity to clients; however, for larger projects, cost variations become increasingly relevant.
  • According to Soben/Accenture, while traditional cloud data centers cost between USD 8–10 million per MW, large-scale AI centers (GW+) can reach up to USD 17 million per MW. As a result, building larger data centers does not automatically reduce costs, as was previously assumed.
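As a quick sanity check on these cost figures, the implied growth rate can be reproduced with a few lines of arithmetic (an illustrative sketch using the rounded averages cited above, not data taken directly from the reports):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Global average build cost per MW (USD millions), per the JLL figures above
cost_2020, cost_2025 = 7.7, 10.7

growth = cagr(cost_2020, cost_2025, 5)
print(f"Implied annual growth 2020-2025: {growth:.1%}")  # ~6.8%, i.e. the ~7% cited

# Applying JLL's estimated 6% rise for 2026
cost_2026 = cost_2025 * 1.06
print(f"Estimated 2026 cost: USD {cost_2026:.1f}M per MW")  # 11.3, matching the report
```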

 

Supply Chain Risks: Rare Earth Elements (REEs)

 

As China restricts exports of rare earth elements, essential for data centers and connected infrastructure (e.g., fiber optic cables and permanent magnets), supply chain risks are intensifying. The IEA reports that in 2023–2024, China produced 60% of the world’s REEs (https://www.iea.org/reports/energy-technology-perspectives-2023/clean-energy-supply-chains-vulnerabilities) and 94% of all permanent magnets. Between 2023 and 2025, Beijing expanded restrictions to an increasing number of elements and products.

 This situation creates a major vulnerability for Europe, where data center projects risk being impacted by shortages of critical materials. In response, mining and refining projects are currently being developed in the U.S., Australia, Brazil, Tanzania, and India, while REE recycling—currently less than 1% of consumption—is expected to become a solution in the coming years to avoid supply bottlenecks.

 

Sustainability and Circularity

 

Sustainability has become a central principle in data center design, influencing everything from prioritizing low-carbon materials and modular construction to water management, heat reuse, and integration of renewable energy.

 In Europe, pressure from authorities, investors, and local communities is increasing: water-free or hybrid cooling technologies, recycled water use, low-carbon materials, BREEAM certifications (which assess overall building sustainability, including energy and water use), and transparent carbon reporting are all being required.

 Beyond PUE, other metrics are becoming increasingly important, such as CUE (carbon footprint), WUE (water usage), and life cycle assessments (LCA) of buildings. Compliance with these metrics can facilitate permitting and attract investment.
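These metrics have simple, widely used definitions (PUE and its siblings were standardized by The Green Grid): PUE divides total facility energy by IT energy, CUE divides carbon emissions by IT energy, and WUE divides water use by IT energy. A minimal sketch, with invented facility numbers purely for illustration:

```python
def pue(total_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_kwh / it_kwh

def cue(co2_kg, it_kwh):
    """Carbon Usage Effectiveness: kg CO2-eq per kWh of IT energy."""
    return co2_kg / it_kwh

def wue(water_liters, it_kwh):
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return water_liters / it_kwh

# Hypothetical annual figures for a small facility (not from any cited report)
it_energy = 8_000_000        # kWh delivered to IT equipment
total_energy = 10_400_000    # kWh drawn by the whole facility
emissions = 2_600_000        # kg CO2-eq
water = 14_000_000           # liters

print(f"PUE: {pue(total_energy, it_energy):.2f}")              # 1.30
print(f"CUE: {cue(emissions, it_energy):.3f} kgCO2/kWh")
print(f"WUE: {wue(water, it_energy):.2f} L/kWh")               # 1.75
```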

 Since 2024, the EU has required operators to report energy performance and utilize waste heat, and stricter rules are expected from 2026 under the new “EU Data Centre Energy Efficiency Package,” scheduled for the first half of the year.

  • Attention to local communities is also increasing. Microsoft recently announced its five-point “Community-First AI Infrastructure” plan, committing to building data centers responsibly, with a positive impact on the communities where they are developed.

 

Skills Shortage and the Need to Develop New Competencies

 

Demand for skilled trades in data center construction is skyrocketing, especially for electricians, plumbers, carpenters, and MEP specialists. In mature markets, contractors and their supply chains are already overstretched, while emerging markets will require support and time to adapt. The growth of AI data centers and high-density facilities accelerates the need for engineers specialized in mechanics, power, cooling technologies, AI infrastructure, electrical network interconnection, and ESG (Environmental, Social, and Governance) experts.

Companies are addressing this shortage through training programs and partnerships with universities, such as the Google STAR Program, grants for electrician training, and upskilling courses offered by providers like Schneider Electric University. A new trend is that Generation Z is increasingly drawn to construction careers, attracted by high salaries and the opportunity to contribute to the digital economy. Those who succeed in attracting, training, and retaining these talents will have a clear advantage in delivering and operating modern data centers.

Looking ahead to 2026, in brief? Data centers will be built faster, more modular, and more efficient, but challenges will also drive innovation and adaptation. For more information, check out the main reports linked throughout this article.

 

 

The Black Sea AI Gigafactory and Romania’s Contribution to Europe’s AI Agenda

Europe is rapidly advancing the development of a continent-wide AI infrastructure, and the numbers clearly reflect the scale of this ambition. Following the launch of the AI Innovation Package in 2024, which expanded EuroHPC’s mandate to fully support AI model development, the EU has already established 13 AI Factories. In 2025, the Commission raised the stakes through InvestAI, a plan aimed at mobilizing EUR 200 billion (mostly private investment), of which up to EUR 20 billion represents the public contribution dedicated to gigafactories.

Together with the Apply AI strategy, Europe is approaching an ecosystem of at least 17 publicly or public-privately funded AI (giga)factories, on top of which private projects are emerging. NVIDIA alone has recently announced a roadmap for building 20 AI factories in Europe, including 5 gigafactories — bringing the company’s active projects on the continent to 22, with another 15 planned by 2030, according to the Centre for European Policy Studies (CEPS). (See the full list of Nvidia’s projects in Europe here.)

At the local level, at the end of November, the Romanian government approved the memorandum mandating the Ministry of Energy and the Ministry of Finance to coordinate the implementation of the Black Sea AI Gigafactory project — an AI and supercomputing hub worth up to EUR 5 billion. The competition for European funding is open, and Romanian authorities are already preparing the official proposal to be submitted in March 2026.

What better moment to revisit the premises of the AI Gigafactory program and assess Romania’s plans and chances?

From AI Factories to AI Gigafactories: Objectives, Funding, and Perspectives

In February 2025, we reported that the President of the European Commission, Ursula von der Leyen, launched InvestAI, the initiative that includes a dedicated EUR 20 billion fund for building AI Gigafactories (AIGFs). The program aims to develop at least five such “gigafactories” — massive compute infrastructures conceived as an extension of the AI Factories program, capable of training models with hundreds of trillions of parameters by integrating over 100,000 AI chips and highly efficient, automated data centers.

Through InvestAI, together with contributions from member states, the EU seeks to provide researchers, startups, and industry with access to large-scale computing capacity, accelerating the development of advanced AI models and strengthening Europe’s technological sovereignty.

The estimated cost of an AI Gigafactory is EUR 3–5 billion, with funding shared between the public and private sectors. More specifically, public contributions can cover up to 35% of CAPEX, financed by the European Commission and the member states, while the remaining investment and 100% of OPEX come from private companies and investment funds. At the European level, funding dedicated to AI Gigafactories is therefore distributed as follows: 17% from the European Commission, 17% from EU member states, and 66% from the private sector.
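Applied to a project at the top of that cost range, the split works out roughly as follows (a back-of-the-envelope illustration of the percentages cited above; real funding structures will differ per project):

```python
capex = 5.0  # EUR billions: upper end of the estimated gigafactory cost

# Funding shares as cited at the European level
shares = {
    "European Commission": 0.17,
    "EU member states": 0.17,
    "Private sector": 0.66,
}

for source, share in shares.items():
    print(f"{source}: EUR {capex * share:.2f}bn ({share:.0%})")

# Public money (Commission + member states) stays within the 35% CAPEX cap
public = capex * (shares["European Commission"] + shares["EU member states"])
print(f"Total public contribution: EUR {public:.2f}bn (35% cap: EUR {capex * 0.35:.2f}bn)")
```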

The AI Gigafactories program is the natural extension of the AI Factories initiative, funded through EuroHPC with nearly EUR 2 billion. AI Gigafactories will build on this model but at a far larger industrial scale, creating the infrastructure needed for training advanced AI models.

The Countries Entering the Race for AI Gigafactories. What Comes Next?

In 2024–2025, the European Commission launched a “call for expressions of interest” to assess market readiness. A total of 76 proposals were submitted, covering 60 sites across 16 member states and indicating total intended investments of more than EUR 230 billion. Given the scale and complexity of the submissions, the official call for AI Gigafactories (AIGFs) has recently been postponed to 2026, instead of late 2025 as originally planned.

Although the European Commission has not officially disclosed the member states that submitted expressions of interest by the 20 June 2025 deadline—citing confidentiality reasons—several countries have announced their participation publicly. Among them are Romania, Austria (Vienna), Czechia (Prague), Spain (Mora la Nova), as well as Germany and the Netherlands, according to Data Center Dynamics.

Respondents to the call include data center operators, telecom companies, energy providers, European and global technology partners, as well as investors. According to the Commission, collectively, respondents expect to procure at least three million GPUs.

The call was launched to assess interest and create an informal registry of candidates. Between December 2025 and early 2026, the European Commission will engage with respondents to refine the proposals: technical details, location, specifications, funding structure, and sustainability plans.

  • On 4 December 2025, the European Commission signed a Memorandum of Understanding (MoU) with the European Investment Bank and the European Investment Fund to support the development and financing of AI Gigafactories.
  • What’s next? The official call for proposals will be launched in early 2026 (estimated submission deadline: March 2026).
  • AI Gigafactories are expected to become operational by the end of 2028.

What Are Romania’s Plans for the Black Sea AI Gigafactory?

The Black Sea AI Gigafactory project proposes installing more than 100,000 AI accelerators in two stages: Phase I in Cernavodă and Phase II in Doicești, locations chosen for their energy advantages and digital infrastructure. Powered by up to 1,500 MW of zero-emission energy, primarily nuclear, the gigafactory would position Romania as a strategic supercomputing hub in Europe.

Cernavodă provides direct access to nuclear power, along with excellent fiber connectivity and submarine cable links, while Doicești offers the advantage of an “industrial site with potential for SMR co-location, hybrid cooling, and integration into the national high-speed communications network” (source). The project also includes a strong cybersecurity component, supported by the expertise built around the presence of the European Cybersecurity Competence Centre (ECCC) in Bucharest.

The memorandum mentioned at the beginning of this article—approved by the government on 27 November—places the Ministry of Energy and the Ministry of Finance in charge, with support from the Authority for Digitalization of Romania, to coordinate preparation, negotiations, and the involvement of all key public, private, academic, and research stakeholders to bring and implement the Black Sea AI Gigafactory in Romania.

What else does the memorandum reveal?

  • The Ministry of Energy becomes the main coordinator of the project and assumes primary responsibility for data center policy, leading the development and oversight of strategies in this field.
  • Romania could attract additional funding through European instruments such as the European Innovation Council Fund, TechEU Scale-up, the EIB Group’s “European Tech Champions Initiative,” or InvestEU.
  • The gigafactory will integrate advanced energy-efficiency technologies—from sustainable cooling and renewable energy via PPAs to heat reuse and grid-resilience measures—to meet national and EU climate objectives.
  • Selection criteria also include responsible water usage and the adoption of circular-economy principles.
  • Romania is running a major investment program in nuclear energy, including the refurbishment of Unit 1 at Cernavodă (extending its lifetime by another 30 years), the construction of Units 3 and 4, and the development of advanced nuclear technologies such as the NuScale SMR at Doicești.
  • The gigafactory will boost competitiveness by providing advanced compute capacity to industry, startups, SMEs, and researchers; create jobs; develop new skills; and, through new cross-border fiber corridors and trusted compute services, strengthen regional connectivity—complementing EuroHPC infrastructure and supporting interoperable collaboration between countries.

We are encouraged that, regardless of the European Commission’s decision after the March 2026 stage, Romania does not plan to pause the project. On the contrary: in the event of a negative response, authorities are prepared to recalibrate the plan, attract investors, and move forward with implementation, with technical support from the World Bank Group and other international financial institutions. Romania is not stepping back from its ambition to build a regional AI hub — and that may be the best industry news as we enter 2026!

Server racks storing AI datasets for simulation, training and predictive tasks

Data Center Cost in 2025-2026? Liquid Cooling Drives Budgets Even Higher

The global data center boom, fueled by the generous budgets of hyperscalers and major investors, is steadily pushing construction costs upward. Analyses show that developers are already facing the challenge of working with a “final price” that is impossible to fix from the start. Estimating data center budgets is becoming increasingly difficult as material prices fluctuate, critical equipment has delivery times exceeding a year, new technologies entail additional costs, and every project stage involves financial risks.

In early November, Turner & Townsend published the Data Centre Construction Cost Index 2025-2026, providing a clear view of the market’s actual state. This is currently the only index exclusively dedicated to data center construction costs, making it an essential reference for investors, developers, and operators. Its analysis helps us understand the directions the market is heading and the implications for future projects.

Liquid Cooling vs. Traditional Data Centers: Understanding the Cost Gap

According to the Turner & Townsend report, costs are rising, and the transition to liquid cooling systems involves significantly higher expenses compared to traditional (air-cooled) data centers. In numbers, the reality looks like this: construction costs for traditional cloud data centers increased by 5.5% year over year in 2025. However, this growth is moderate compared to the 9% jump reported in 2024, indicating that the market is beginning to stabilize. The average construction sector inflation of 4.2% is felt less acutely in the data center segment as local supply chains develop, easing price pressures.

The situation is different for data centers using liquid cooling systems designed for AI workloads. In the U.S., construction costs for these facilities are, on average, 7–10% higher than those of traditional data centers with the same IT capacity. These high-density centers are more complex to build, integrate significantly more expensive technical and cooling systems, and see rapidly increasing demand in markets such as the U.S., the U.K., Europe, and East Asia, as companies seek to support increasingly intensive AI workloads.

Density Pays Off in the Long Run

The same Turner & Townsend report shows that, precisely because of their density, AI data centers can be a more cost-effective choice in the long term. They often feature more flexible designs, which can reduce project costs. Additionally, higher density allows for a smaller building footprint, and “mega campuses” designed to run AI models across multiple interconnected buildings bring significant economies of scale.

On the other hand, traditional cloud data centers require complex measures to ensure service continuity (technical redundancy measures), which considerably increase construction costs and, in the long term, operational expenses as well.

Developers’ Insights

How do developers view the 2025 situation? Nearly half of respondents reported construction cost increases of 6–15%, while 21% say costs have risen more than 15%. Additionally, under inflationary pressure, the majority (60%) anticipate further increases of 5–15% in 2026. About 21% are even more pessimistic, expecting inflation to exceed 15% in the coming year.

Major European capitals are climbing the ranks in data center construction costs, approaching the levels of large U.S. cities. Paris and Amsterdam reach $10.8 per watt, comparable to Portland, while Madrid and Dublin surpass U.S. cities such as Atlanta, Phoenix, and Columbus.

Check out the full Turner & Townsend report here.

Future-Proof Data Centers: Hybrid Designs for AI and Cloud Workloads

According to McKinsey, while AI training currently drives the size and scale of data centers, future facilities will be hybrid, combining training workloads, inference tasks, and cloud operations. Such facilities could be even larger than those considered big just two years ago.

Today, the time between requesting services and starting construction of a data center can range from 12 to 36 months, depending on type, design, size, and location. Optimizing the construction process can significantly shorten these timelines: for example, a U.S. architecture firm completed the design of a 929 m² data center in Colorado in just 30 days, using an Agentic AI platform – the first project of its kind entirely designed by artificial intelligence.

Such innovations could reduce construction time by 10–20% and deliver similar capital savings, potentially cutting global projected expenditures of $1.7 trillion by 2030 by up to $250 billion, according to McKinsey analysts.

They also recommend several key directions that could transform how data centers are designed and built, including:

  • Designs should allow for phased expansion, modularization, and off-site assembly.
  • Prefabricated and modular solutions currently account for 40–60% of data center components, with some projects using up to 80–85%. These solutions accelerate construction, reduce on-site labor, and improve quality.
  • Generative scheduling tools allow simulation of thousands of scenarios to optimize resources and work sequences, reducing delivery time by up to 20%.

Discover more in McKinsey’s full analysis.

In short, the data centers of tomorrow will get more expensive unless builders embrace innovative technologies and scalable, modular designs. These approaches help developers cut costs, accelerate delivery, and handle complexity more effectively, ensuring the infrastructure is ready for the rising demands of AI and cloud workloads.

From Myths to Metrics: What’s the Real Energy Footprint of the Data Center Industry?

How much power does the infrastructure behind our digital world really consume? From alarmist reports to optimistic forecasts, the energy use of data centers has become a true “gray zone.” Numbers, rumors, analyses, and “myths” abound: some claim that AI will double or even triple global electricity demand by 2030, while others argue that green technologies will balance it all out. The truth? It’s more nuanced than it seems—and lies somewhere in between. In the following lines, we’ll explore it through some of the most relevant recent reports and studies.

The Four-Scenario Perspective of the International Energy Agency (IEA)

According to an IEA report, in 2024 data centers consumed approximately 415 TWh, representing 1.5% of global electricity use. Over the past five years, their energy consumption has grown by 12% annually. The increasing adoption of artificial intelligence is expected to double consumption by 2030, reaching around 945 TWh, equivalent to 3% of global electricity use. Between 2024 and 2030, the energy consumption of data centers will grow by 15% per year, more than four times faster than the average growth across other sectors.
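The IEA figures above are internally consistent, as a short check shows (the inputs are the report’s rounded values, so compounding the rounded 15% rate lands slightly above the cited 945 TWh):

```python
base_2024 = 415    # TWh, IEA estimate for 2024
target_2030 = 945  # TWh, IEA Base Case for 2030

# Growth rate implied by the two endpoints over six years
implied_cagr = (target_2030 / base_2024) ** (1 / 6) - 1
print(f"Implied annual growth 2024-2030: {implied_cagr:.1%}")  # ~14.7%, the ~15%/year cited

# Compounding the rounded 15% rate forward overshoots only slightly
projection = base_2024 * 1.15 ** 6
print(f"415 TWh at 15%/year for 6 years: {projection:.0f} TWh")  # ~960 TWh vs. 945 cited
```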

AI servers will be the main driver of this growth, accounting for half of the total increase, while traditional servers will contribute 20%. The U.S. and China will generate nearly 80% of the additional demand, with increases of +240 TWh (+130%) and +175 TWh (+170%), respectively, followed by Europe (+45 TWh, +70%) and Japan (+15 TWh, +80%).

However, this represents only the Base Case scenario of the IEA report — the reference or most likely scenario considered by analysts. The agency also outlines three alternative scenarios, reflecting different possible industry evolutions:

  • The Lift-Off Case: accelerated AI adoption, more resilient supply chains, and greater flexibility in location and operations. Result: rapid expansion of data centers. Energy consumption: >1,700 TWh in 2035, +45% compared to the Base Case. Global share: 4.4% of total electricity consumption.
  • The High Efficiency Case: significant progress in energy efficiency at the software, hardware, and infrastructure levels. Result: the same digital/AI demand met with reduced energy use. Energy consumption: approximately 970 TWh in 2035, savings of >15% versus the Base Case. Global share: 2.6% of total electricity consumption.
  • The Headwinds Case: slower AI adoption, local bottlenecks, and strained supply chains leading to delayed data center expansion. Result: stable IT capacity after 2030, limited growth, and increasingly efficient equipment. Energy consumption: about 700 TWh in 2035, <2% of global electricity consumption.

An important note: A March 2025 report by 4E TCP (a program under the IEA), authored by Radu Dudău (President of Energy Policy Group) and Vlad Coroamă (founder of the Roegen Center for Sustainability in Switzerland), shows that in 2023 data centers consumed 300–380 TWh globally (excluding cryptocurrency activity), and by 2030 their consumption is estimated at 600–800 TWh, representing 1.8–2.4% of the global electricity demand. According to the authors, global consumption will not reach 900 TWh in five years due to AI, as this scenario would require investments of USD 9,000 billion in CAPEX alone, plus operational costs. The most realistic scenario involves investments between USD 2,000 and 3,000 billion over the next five years.

The same report also includes a table summarizing most studies estimating the global energy consumption of data centers, as well as a table with studies on the global energy consumption of AI. Read the full report here.

Deloitte and the Impact of GenAI on Energy Consumption

In 2025, data centers account for approximately 2% of global electricity use, equivalent to 536 TWh, according to Deloitte. However, the energy consumption of GenAI data centers is increasing much faster, which could lead to a doubling of global consumption by 2030, reaching around 1,065 TWh. Estimates vary depending on processing and AI efficiency: if optimizations fail to materialize, consumption could exceed 1,300 TWh. Critical consumption for essential components (GPU, CPU, storage, cooling, networking) could reach 96 GW globally by 2026, with AI operations potentially consuming over 40% of that total.

  • The annual energy consumption of AI data centers will reach 90 TWh in 2026, roughly ten times more than in 2022. In the first quarter of 2024, global net additional demand for AI reached ~2 GW, marking a 25% increase compared to the previous quarter.
  • According to Deloitte analysts, if just 5% of daily global searches used GenAI, the required energy would be 3.12 GWh per day and 1.14 TWh per year, equivalent to the electricity used by approximately 108,450 U.S. households.
  • Regionally, data centers will represent 6% of total U.S. electricity consumption in 2026 (≈260 TWh) — the same share as in China. In the United Kingdom, data center energy demand could increase sixfold over a decade, driven mainly by AI adoption.
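The search-query estimate in the second bullet can be reproduced directly; this is only an arithmetic check on the figures already cited, not new data:

```python
daily_gwh = 3.12  # GWh/day if 5% of daily global searches used GenAI (Deloitte)

# Annualize and convert GWh to TWh
annual_twh = daily_gwh * 365 / 1000
print(f"Annual energy: {annual_twh:.2f} TWh")  # 1.14 TWh, matching the figure above

households = 108_450  # U.S. households cited as the equivalent
per_household_mwh = annual_twh * 1e6 / households
print(f"Per household: {per_household_mwh:.1f} MWh/year")  # ~10.5 MWh, close to the U.S. average
```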

Access the full Deloitte analysis here.

McKinsey’s Outlook with Europe in Focus

Demand for data centers in Europe is set to surge in the coming years, rising from 10 GW in 2024 to 35 GW by 2030, while the industry’s energy consumption will nearly triple, from 62 TWh to over 150 TWh. According to McKinsey, at this pace, data centers will account for 5% of Europe’s total electricity consumption, up from just 2% today, and will generate 15–25% of the region’s new net electricity demand. The average annual growth rate of electricity use will reach 13% CAGR, and most of the required energy will come from renewable sources, in line with the commitments made by major industry players. Find the full McKinsey report here.

Goldman Sachs Research: Global Data Center Power Demand to Increase by 165% by 2030

Goldman Sachs analysts estimate that global data center power demand will grow by 50% by 2027, and by up to 165% by the end of the decade, compared to 2023. More specifically, demand is expected to rise from approximately 55 GW in 2025 to 84 GW in 2027, reaching a total online capacity of around 122 GW by 2030. The share of AI in the total energy consumption of data centers will reach 27% in 2027, while cloud workloads will account for 50%, and traditional workloads for 23%. The power density of infrastructure will also increase from 162 kW/m² in 2025 to 176 kW/m² in 2027, driven by higher AI-related demand.

In Europe, over the next decade, the data center pipeline—estimated at around 170 GW—could generate a 10–15% increase in the region’s total energy consumption, equivalent to about one-third of its current total. Read the full Goldman Sachs Research analysis here.

How Is Data Center Energy Consumption Evolving at the National Level?

  • United States: Data centers used 176 TWh in 2023 (4.4% of total national electricity use), up from 58 TWh in 2014. Energy demand has tripled over the past decade and could reach between 325 and 580 TWh by 2028, representing 6.7–12% of the country’s total electricity consumption. (Source: The U.S. Department of Energy)
  • United Kingdom: Data centers use about 2.5% of the country’s electricity, equivalent to 5 TWh per year, and by 2030 their consumption could reach 22 TWh/year. (National Energy System Operator)
  • Ireland: Data centers account for roughly one-fifth (20%) of annual electricity consumption, and in the Dublin area, they represented almost half (48%) of the city’s total electricity use in 2023. (Commission for Regulation of Utilities)
  • Germany: The annual electricity consumption of data centers is about 20 TWh, projected to reach 38 TWh by 2037. Major network operators forecast that by 2045, it could rise further to 88 TWh.
  • Netherlands: Data centers consume 3.7 billion kWh, or 3.3% of the country’s electricity, equivalent to 13.32 PJ or 0.44% of the Netherlands’ total energy consumption (3,024 PJ). 90% of this energy is green. (Dutch Data Center Association)
  • France: Data center electricity consumption rose from 3 TWh in 2019 to 10 TWh in 2022, representing 2.2% of the country’s total electricity use. It is estimated that by 2050, consumption will increase by 74%, according to Aurora Energy Research.
  • Spain: According to consultancy firm DNV, data centers in Spain consumed over 6 TWh of energy in 2024, and this consumption is expected to increase to approximately 12 TWh by 2030 and reach 26 TWh by 2050, representing a growth of more than 300% compared to the current level.
  • Meanwhile, in Romania, data centers consumed approximately 1.5 million MWh in 2022, out of a national total of around 50.518 million MWh. Tema Energy estimates their share at 2.3–2.5% of national electricity consumption, up from 2.1% in 2019.
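The mixed units in the Netherlands bullet (kWh, PJ, percentages) are mutually consistent; a quick conversion using 1 kWh = 3.6 MJ:

```python
# Consistency check for the Netherlands figures (1 kWh = 3.6 MJ).
kwh = 3.7e9               # data center consumption in kWh
pj = kwh * 3.6e6 / 1e15   # joules -> petajoules
share = pj / 3024         # share of total national energy use (3,024 PJ)
print(round(pj, 2))       # 13.32 PJ
print(f"{share:.2%}")     # 0.44%
```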

Aside from these figures, it is also worth mentioning the Energy Efficiency Directive (revised in 2023), which estimates that in the EU, data center consumption will rise from 76.8 TWh in 2018 to 98.5 TWh in 2030 (+28%), representing approximately 3.2% of the Union’s total electricity demand.

Although it seems we cannot agree on a single set of figures or a single consumption scenario, it is becoming increasingly clear that as digitalization and AI technologies accelerate, data centers are playing a growing role in the national and global energy mix. Efficient energy management and investments in green energy are becoming essential for a sustainable future.

Interview with Victor Vevera, ICI Bucharest: What resources and services will RO AI Factory provide, and when will it become operational?

The EuroHPC Joint Undertaking (EuroHPC JU) network of AI Factories — high-performance computing facilities optimized for artificial intelligence — is a key component of Europe’s strategy to strengthen digital sovereignty and competitiveness. These centers aim to provide start-ups, SMEs, researchers, and industry players with access to AI-optimized supercomputers, data, talent, and expertise, enabling Europe to become a sustainable “AI continent.” So far, 19 AI Factory locations have been selected across 16 EU Member States, including Romania. The network is being developed gradually, connecting infrastructures, supercomputing centers, universities, start-ups, and industry into an integrated European ecosystem for AI innovation.

We spoke with Adrian Victor Vevera, General Director of the National Institute for Research and Development in Informatics – ICI Bucharest, about Romania’s first AI Factory project — its objectives, coordination, and implementation plans.

  1. What is ICI’s role within the consortium, and how are the partners involved?

The National Institute for Research and Development in Informatics – ICI Bucharest is the national coordinator of the RO AI Factory consortium and the host entity of the future AI-optimized supercomputer. Its role is to ensure the strategic, operational, and technical coordination of the entire project, as well as the integration of the infrastructure into the EuroHPC European ecosystem.

Consortium partners contribute complementarily: technical universities (the POLITEHNICA University of Bucharest and the Technical University of Cluj-Napoca) provide academic expertise and skills development; research institutes (INCDSB, ICIA) support the development of AI applications in areas such as life sciences and data modeling; while industry and association partners (Transilvania IT, CNIPMMR, RoDIH) ensure the connection with the private sector, start-ups, and SMEs. Together, the consortium functions as an integrated platform bringing together research, education, and innovation.

  2. Where will the AI Factory be located, and what infrastructure will it host?

The RO AI Factory will be hosted at the ICI Bucharest data center, which already features enterprise-grade infrastructure designed for critical systems operation. The dedicated space for the supercomputer is equipped to accommodate high-density hardware and includes redundant power supply, modern liquid- and air-cooling systems, climate control, monitoring, and physical security. The data center is connected via the RoEduNet network to the GÉANT European research infrastructure, with the possibility of bandwidth expansion. In parallel, ICI Bucharest is upgrading its power and cooling capacities to support the new AI infrastructure.

  3. What is the expected computing power of RO AI Factory?

The RO AI Factory supercomputer will be a EuroHPC-class system, optimized for artificial intelligence workloads, advanced modeling, and the training of foundation and large language models (LLMs). It will deliver an unprecedented computing capacity in Romania, with a theoretical peak performance exceeding 5 exaFLOPS in AI operations. This places it among the most advanced AI systems in Central and Eastern Europe, capable of training and deploying next-generation artificial intelligence models — including large-scale generative and multimodal models. Through this capability, RO AI Factory will accelerate scientific research, industrial digitalization, and the development of innovative solutions with major economic and societal impact.

  4. What is the budget, and what is the estimated operational period?

The total estimated budget of €50 million covers both the acquisition and implementation of the hardware and software infrastructure and its operation over a five-year period. The funds will be used to procure the supercomputer and supporting infrastructure and to cover operational expenses — energy, maintenance, staffing, and connectivity — for the full five years. The budget also includes funding for the development and provision of AI services, training programs, start-up support initiatives, and ecosystem-building actions, implemented over a three-year contractual period.

The project’s financial model incorporates long-term sustainability mechanisms through national co-funding and integration with the EuroHPC and Digital Europe programs, ensuring that the RO AI Factory continues to operate beyond the initial implementation phase.

  5. What team and human resources will be involved?

The RO AI Factory operational team will initially include over 50 specialists, such as system engineers, AI and HPC experts, researchers, software developers, trainers, and AI ethics and compliance specialists. Recruitment will be carried out from among the consortium institutions, academia, and the private sector, through dedicated training and talent-attraction programs.

Romania already has a strong base of professionals in IT and engineering sciences, and the project aims to directly contribute to reducing brain drain by providing a competitive research and innovation environment aligned with European standards.

  6. Who will benefit from the infrastructure, and how can they access it?

RO AI Factory is designed as an open and accessible infrastructure for a wide range of users: academic and research institutions, start-ups, SMEs, technology companies, and public institutions. Access to computing resources will be provided through calls for projects, proposals, or subscription-based models, depending on user category.

Private IT and related companies will be able to use the platform for training their own AI models, testing AI solutions, and optimizing commercial products. There will also be dedicated acceleration programs, mentorship sessions, and training workshops to help the private sector leverage the infrastructure and expertise available through RO AI Factory.

  7. What is the implementation timeline, and what are the main milestones?

The RO AI Factory project will follow a phased implementation plan over approximately three years:

  • Year 1: hardware procurement, site preparation, and installation of core systems.
  • Year 2: service integration, interconnection with national and European data networks, and the launch of user programs.
  • Year 3: full operationalization of the national AI ecosystem and expansion of services for business and public administration.

It is estimated that RO AI Factory will become fully operational by the end of 2027, strengthening Romania’s position as a regional hub for AI innovation in Central and Eastern Europe.

Platform Global 2025: Looking Ahead to the Next Decade in Data Centers

From September 7 to 9, Platform Global 2025 took place in Antibes, France—a landmark summit dedicated to leaders building and financing digital infrastructure worldwide. The event brought together C-level executives from the data center, cloud, edge, and networking sectors, as well as investors, technology leaders, policymakers, regulatory authorities, real estate experts, energy transition specialists, consultants, and analysts, providing a comprehensive forum to explore market trends and growth opportunities.

The program featured over 100 speakers, conferences, and networking sessions covering topics such as AI-driven development, land and energy strategies, long-term energy solutions, mergers and acquisitions, sustainable investments, global geopolitical risks, quantum computing, emerging markets, demand and leasing, cloud sovereignty, how inference is reshaping the edge landscape, and the latest lessons learned by hyperscalers in liquid cooling.

In short, the challenges and outlook for the data center industry can be summarized across four “fronts”: energy, policy (regulation), AI, and sustainability. Energy constraints in the FLAP-D region are driving projects to areas with large available energy blocks. On the policy side, increasingly strict regulations and local issues already emerging in some markets (restrictions and opposition from authorities or local communities) are putting pressure on developers. Large-scale AI deployment is primarily occurring in North America and China, while sustainability remains important but, outside Europe, has not yet received the same level of attention.

Below is a summary of some of the most relevant ideas and conclusions for the data center industry from Platform Global 2025.

Investments & Market Growth

According to Nicola Hayes, Director at Platform Markets, global investments in data centers have increased by 53% this year. Tomas Peshkatari (Global Infrastructure Partners / BlackRock) explained that this demand will lead to a doubling of capacity over the next five years, with an annual growth rate of 22%. In addition, contracts have extended to 10–15 years, with investment returns of 8–9%. Against this backdrop, Wes Cummins from Applied Digital highlighted that the U.S. remains ahead of Europe, where finding land and energy for new projects is much more challenging, while China is rapidly building and scaling infrastructure. “The amount of capital we are spending now is enormous—it’s shocking even for me,” said Wes Cummins. Smaller companies like Nebius and Lambda Labs are also growing rapidly, adapting to increasing demand. Charles Antoine Beynet (DataOne) added that while in the U.S. you can deploy 100 MW in a year, in countries like France, the UK, or Germany, the process takes much longer due to limited access to electricity and strict regulations.

Energy Challenges and Solutions

On the energy front, Nathan Luckey (Macquarie Group) explained that energy in Europe is greener but more prone to intermittency, which can lead to blackouts, as recently seen in Spain. Pablo Ruiz-Escribano (Schneider Electric) added that, although energy is available, access takes too long. Building new transmission lines can take up to 10 years, according to Neil Cresswell (Virtus), who noted that, for example, data center power in the UK doubles every five years—from 100 MW to 250 MW, to 500 MW. Richard Bienfait from Stack Emea highlighted that energy costs are much higher in Europe than in the U.S., raising the question, “Why build here?” Sean James (Microsoft) explained that data centers tend to develop in clusters in certain regions, requiring huge amounts of energy, and energy quality can be affected if a large load is suddenly shut down.

On the solutions side, Ash Evans from Google noted that lower-cost locations will emerge as AI monetization evolves, and that the tech giant prefers to control all MEP systems (mechanical, electrical, and plumbing), even though leasing remains an option. Solutions discussed include generators to relieve grid load, batteries (e.g., Tesla Megapacks – xAI DC), and specialized software to optimize energy usage. For example, at Google, on average, only 66% of contracted power is actually used, according to Ash Evans.

Data centers can accelerate energy access when standard grid connections are unavailable using several strategies. According to McKinsey, these include:

  • Selecting new locations with shorter connection times (e.g., emerging cities, provinces such as Aragon);
  • Leveraging existing infrastructure (e.g., abandoned industrial sites, former coal plants, industrial parks);
  • Building on-site generation capacity and microgrids (e.g., gas plants, small modular reactors);
  • Accelerating the development of energy capacity together with suppliers (e.g., PPAs, joint grid investments, restarting or extending the life of nuclear/gas/coal plants).

McKinsey’s analysis also shows that data center operators are increasingly exploring behind-the-meter (BTM) options to secure energy directly at the source, including:

  • Solar and wind: Highly variable due to intermittency, but aligned with sustainability goals; requires storage capacity and large land areas. A feasible long-term solution (after 2030) if storage costs decrease.
  • Hydropower: Seasonal variability (reduced output in dry season), limited construction opportunities near dams, and additional connectivity costs.
  • Natural gas: Controlled variability (can be turned off when not needed), proximity to the source is essential to minimize transport investment; long-term sustainability remains debatable, though cheaper Carbon Capture and Storage (CCS) technologies could offset this.
  • Nuclear (SMR): High reliability, zero carbon emissions, most promising BTM option, but realistic only after 2030.

Conclusion: BTM feasibility before 2030 is limited, depending on cost reductions in storage and CCS; in the long term, nuclear power is the most likely dominant solution.

Currently, the cheapest on-site microgrids from a Levelized Cost of Energy (LCOE) perspective are based on gas generators. However, in most cases, grid power remains cheaper than microgrids, with differences depending on technology and country. Additionally, partnerships between energy companies and data center operators are becoming more frequent.
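The LCOE comparison above divides discounted lifetime costs by discounted lifetime energy output. A minimal sketch with purely illustrative numbers (the capex, opex, output, and discount rate below are assumptions, not figures from the article):

```python
# Minimal LCOE sketch: discounted lifetime costs / discounted lifetime output.
def lcoe(capex: float, annual_opex: float, annual_mwh: float,
         years: int, rate: float) -> float:
    costs = capex + sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + rate) ** t for t in range(1, years + 1))
    return costs / energy  # cost per MWh

# Hypothetical gas-generator microgrid over 15 years at an 8% discount rate
print(round(lcoe(9e6, 1.2e6, 70_000, 15, 0.08), 1))  # ~32 per MWh
```

The resulting figure can then be compared against the local grid tariff to decide whether a microgrid is worth building at a given site.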

According to McKinsey, data center operators can leverage energy company opportunities to generate an additional 2–4% revenue. Practically, data centers become active participants in grid management, not just consumers, contributing to stability and energy efficiency. This can be achieved through several mechanisms:

  • Peak Shaving & Load Levelling: Adjusting energy consumption to reduce demand peaks that strain the grid, maximize solar energy capture, and efficiently use capacity during low-demand periods;
  • Frequency Regulation: Data centers can rapidly increase or decrease supply to maintain grid frequency (50/60 Hz) during unforeseen events;
  • Voltage Control: Managing reactive power to maintain near-constant voltage amid increasing volatility;
  • Price Arbitrage: Buying and selling energy based on hourly or minute-by-minute price variations, optimizing costs and avoiding additional peak-hour charges.
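Of the four mechanisms, price arbitrage is the simplest to illustrate. A toy sketch with hypothetical hourly prices and battery parameters (none of these numbers come from the article, and real operators use far more sophisticated dispatch logic):

```python
# Toy battery price-arbitrage sketch: charge in cheap hours, sell at the peak.
prices = [40, 35, 30, 32, 60, 95, 110, 70]  # EUR/MWh, hypothetical hourly prices
capacity_mwh, power_mw, soc = 4.0, 2.0, 0.0

cheap, dear = sorted(prices)[2], sorted(prices)[-3]  # naive price thresholds
profit = 0.0
for p in prices:
    if p <= cheap and soc < capacity_mwh:   # charge when electricity is cheap
        e = min(power_mw, capacity_mwh - soc)
        soc += e
        profit -= e * p
    elif p >= dear and soc > 0:             # discharge into the price peak
        e = min(power_mw, soc)
        soc -= e
        profit += e * p
print(profit)  # 280.0 (buys 4 MWh at 30-35, sells at 95-110)
```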

Locations, Regulations & Markets

In addition to immediate access to energy and fast construction permits, flexible data center design—allowing adaptation to future demand—is equally important. Eric Boonstra (Kevlinx Data Centers) emphasized that location choice is primarily dictated by clients. For example, cities like Brussels are currently under-supplied and will soon attract new data center investments, while established hubs such as Frankfurt and Amsterdam face energy-related challenges. Milan, although seemingly saturated, has the potential to surpass Amsterdam if the current growth trend continues.

Amine Kandil (N+One Datacenters) sees major future developments in North Africa, particularly Morocco, while Umberto Sordino (EAE) highlighted strong growth in the Nordic countries, Greece, India, and the Philippines. Dan Thomas (GreenScale) noted that the Nordics and Portugal offer cheap energy and the newest networks; although energy access times remain long, they are expected to improve. Oliver Schiebel (hscale) stressed the importance of having multiple high-quality sites for clients to choose from, and Robert Bath (FoundDigital DS) added that 2N redundant power path designs will become increasingly common.

  • According to McKinsey, the geographic strategy of data centers in Europe is shifting due to rising AI demand and energy shortages in Tier 1 and Tier 2 cities. Development is moving toward Tier 3 cities, new provinces, and areas around Tier-1 hubs.
  • New locations must meet criteria such as energy cost and type, connectivity, latency, proximity to Availability Zones (AZs), and the level of support from the local ecosystem.
  • Typical time to access energy: over 5 years in Tier-1 cities, 3–5 years in secondary cities, and 2–3 years in emerging locations.

Design, Flexibility & Cooling Technology

According to experts at Platform Global 2025, although data centers are often developed based on demand, a significant portion is built speculatively due to the importance of time-to-market, making financial risks and design assumptions crucial. Safi Farooqui (Brookfield Asset Management) highlighted the importance of hybrid cooling systems and the construction of high-density data centers. Pablo Ruiz-Escribano (Schneider Electric) noted that the industry has changed completely over the last 30 years: data centers are now part of the grid, white space impacts grey space, and the talent shortage for design, construction, and operation remains a challenge.

Wes Cummins provided an example illustrating the growing importance of flexibility. The first Applied Digital data center used 100% liquid cooling and 25% air-based cooling, whereas now the company designs buildings with a flexible mix of up to 50% air-based cooling. Vincent Barro (Schneider Electric) added that DCIM systems are increasingly becoming true copilots, using AI to automate operations and optimize consumption. The Schneider representative also emphasized the growing importance of microgrid developments. Tom Kingham (CyrusOne) believes that in the near future, data centers will require 600V or even 800V systems, while Andy Hayes (Polar) spoke about the increasing demand for Neocloud and AI colocation, especially in the pharmaceutical industry.

AI Demand & Leasing

AI-focused data centers, such as Neocloud, are already being largely built in Europe. The demand for AI is enormous and real—not just hype—and adoption rates are increasing rapidly, says Christina Mertens (Virtus). According to her, the way data center space is leased has also changed: from flexible contracts of a few years for a few racks or 1 MW in a large facility, leasing has evolved to entire floors or even entire sites being leased by a single client, with longer-term and more secure contracts. The more a data center is customized for a client, the stronger the relationship and commitment become.

  • Global demand for data centers is expected to triple, reaching over 180 GW.
  • By 2030, AI/ML and HPC workloads will account for 71% of the total, while GenAI will grow from 14% in 2025 to 40%.
  • According to McKinsey, although significant capacity increases are announced through 2030, supply may remain below estimated demand after 2027. In Europe, the gap could reach 10 GW.
  • This gap will put pressure on energy grids in key regions but will also create opportunities for new hubs and the entry of new market players.
  • McKinsey estimates that Europe will account for approximately 15% of global workload volume by 2030, much of it driven by AI inference services.

Platform Global 2025 showed that data centers are more than just infrastructure—they’re hubs of innovation, where energy, AI technologies, and smart development strategies will shape who leads the next digital wave.

Learning from Downtime: What Recent Data Center Outages Reveal

A data center being offline for just a few hours can mean losses of hundreds of thousands or even millions of dollars. The “Annual Outage Analysis 2025” report by the Uptime Institute highlights a paradox in the industry: although the overall frequency of outages and reported severity have decreased for the fourth consecutive year, their financial and reputational impact is becoming increasingly severe. In 2024, more than half of the surveyed organizations (54%) reported that the cost of their most recent significant outage exceeded $100,000, and one in five reported losses of over $1 million.

What have the past few years really looked like for data center operators in terms of operational disruptions? Beyond charts and statistics, every percentage point hides real stories—high-profile incidents and major financial losses that demonstrate just how fragile the balance between availability and downtime can be.
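To put “a few hours offline” in perspective, each additional availability “nine” corresponds to a fixed annual downtime budget, which can be computed directly:

```python
# Annual downtime implied by common availability targets (8,760 h/year).
HOURS_PER_YEAR = 8760
for availability in (0.99, 0.999, 0.9999, 0.99999):
    minutes = HOURS_PER_YEAR * (1 - availability) * 60
    print(f"{availability:.3%} -> {minutes:.1f} min/year of allowed downtime")
```

At 99.99% availability, the entire annual budget is under an hour, so a single multi-hour incident blows through several years of budget at once.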

Energy, the “Achilles’ Heel” of the Data Center Industry

Even though only 9% of incidents reported in 2024 were classified as serious or severe—the lowest level ever recorded by Uptime—power remains the “Achilles’ heel” of data centers, accounting for more than half (54%) of major-impact outages. The numbers become even more telling when placed alongside real-world cases—let’s look at a few concrete examples.

In October 2023, a failure in the electrical distribution system at a Microsoft data center in the Netherlands caused an outage of nearly two hours, after the switchover from the public grid to backup generators partially failed. The incident impacted key Azure services—from App Service and SQL DB to storage and virtual machines—and about 1% of racks lost power. Full recovery took until the evening, with some storage accounts affected for several hours, directly impacting customers and critical services that depended on them. Microsoft did not disclose details about the financial impact of this outage.

Read the article “Power cuts, a major challenge for data centers” to explore some of the solutions and preventive measures recommended for operators.

Cooling, Network, and IT – the Next Major Risk Factors

The Uptime Institute report also shows that, after power, cooling systems (13%), networks (12%), and IT systems (11%) follow as leading causes of outages, confirming that critical infrastructure remains vulnerable precisely at the points where it should be strongest.

We already know that heatwaves are not a data center operator’s best friend. Back in July 2022, Google’s and Oracle’s London data centers were hit by a record-breaking heatwave, with temperatures soaring above 40 °C, which caused failures in cooling systems. Oracle’s first announcement about the incident stated that “unreasonable temperatures” had affected its cloud and network equipment at its South London data center, causing outages throughout the day and impacting customers. Google, in turn, partially shut down cloud services for several hours as a protective measure to prevent equipment damage and prolonged downtime, affecting a small number of users and causing temporary unavailability for services such as WordPress web hosting in Europe.

A more unusual incident was recently shared by Rick Bentley, founder of Cloudastructure and Hydro Hash, which operates a hydro-powered crypto-mining data center. This one occurred in Montana, USA, where the data center “froze solid overnight”. The problem, in this case, was the rapid temperature drop from -6 °C to -34 °C in less than 24 hours. Bentley emphasized that, although the team believed it was prepared, the combination of extreme cold with a power outage made the incident unavoidable.

Complex IT Infrastructures Mean More Frequent Outages

As mentioned earlier, in 2024 nearly a quarter of major-impact outages were caused by IT and network issues—a trend explained by the increasing complexity of infrastructures and the risks associated with misconfigurations. Uptime Institute data confirms this: the most common causes of IT-related outages are network and connectivity problems (30%), IT systems and software (23%), power outages (18%), third-party IT services such as public cloud or SaaS (8%), and cooling issues (7%).

A representative case is the incident on July 20, 2025, involving Alaska Airlines. This highlights that the damage is not only financial but also reputational. The U.S. airline suffered a critical hardware failure in its data centers, which led to the grounding of all flights for about three hours, between 8:00 p.m. and 11:00 p.m. PT. The issue disrupted core flight operations and also affected its subsidiary, Horizon Air. As a result, on July 21, FlightAware data showed that 7% of flights (66) were canceled, while another 12% (110) experienced delays, leading to crowded airports and confusion among passengers. The hardware failure was reportedly caused by a third-party component, with the company stating it was working with the vendor to resolve the issue.

Operational Outages Caused by Human Error on the Rise

In 2025, outages caused by human error increased by 10 percentage points compared to 2024, with the most common cause being failure to follow procedures—possibly amplified by the industry’s rapid growth and staff shortages. Investments in employee training and real-time operational support can help mitigate these risks. According to Uptime Institute, over the past three years, the main causes of major human errors have been, in addition to not following procedures (58%), staff following incorrect processes (45%), understaffing (18%), insufficient preventive maintenance (16%), and omissions in data center design (14%).

A Final Note

As data center infrastructure becomes increasingly complex and interconnected, operational risks are diversifying and becoming more costly. Even infrastructure designed to be robust can be vulnerable to extreme conditions or configuration errors, underlining the importance of an integrated prevention strategy.

To reduce the risk of outages in data centers, several complementary measures are essential: redundant power systems (generators and UPSs) to ensure uninterrupted hardware operation; regular maintenance and testing, supported by monitoring and predictive analytics; failover to mirror sites for rapid traffic redirection; disaster recovery plans with checklists and regular drills; and staff training to minimize human error.