Data centre investors and operators can address power supply and stability concerns by integrating renewable energy sources, advanced cooling technologies and energy-efficient architectures to ensure sustainable operations.
At the same time, investing in grid resilience, battery storage and strategic partnerships with utility providers will help meet the growing AI-driven demand while maintaining reliability and scalability.

Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, said: “The AI boom is creating an unprecedented demand for energy in the computing sector. Massive GPU installations require vast amounts of electricity to run and – critically – to keep cool. This isn’t just a concern from a business point of view, but from an environmental one. Maintaining uptime requires a huge operational expenditure on energy, while innovative approaches will be required to ensure the new technology doesn’t derail efforts to reduce IT infrastructure’s carbon footprint.
“Many data centre operators are making strides to optimise current systems with both of these needs in mind. They are investing in cutting-edge cooling technology and GPU hardware that reduces energy consumption, both of which can have a significant impact on overall usage. Larger providers have also been investing in renewable energy sources such as wind, solar and geothermal, powering their data centres through means other than fossil fuels. Another possible approach is to build data centres in colder locations, where the atmospheric chill can be used to help control the centre’s temperature.
“However, the sheer scale of AI energy usage means that these efforts on their own are unlikely to meet the demand. As a result, many hyperscale data centre operators are turning to higher-capacity innovations – particularly hydrogen fuel cells and nuclear power generation. Those might sound like outlandish solutions to this particular problem, especially as hydrogen continues to be a niche power source despite its widespread availability. However, executed correctly, both offer high-capacity, cleaner alternatives to oil or natural gas, making them worthy of serious attention. Nuclear power’s risk profile and political sensitivity do pose challenges, though, which may ultimately lead to slower adoption of this method across the industry as a whole.
“The other key area of development is in the efficiency of AI models themselves. The quicker and smarter their operation becomes, theoretically the lower their power consumption will be for any given task. AI developers and coders therefore also have a role to play in controlling energy usage – not to mention the potential for AI models themselves to analyse and identify possible solutions to the problem, given the right training and time.
“Ultimately, maintaining stability and uptime for AI installations while balancing the need to reduce harmful emissions is a real challenge. Achieving it will require collaboration across the IT and energy industries, and with governments. AI use cannot come at the cost of increased global heating or deforestation – alternatives to fossil fuels must be prioritised, along with R&D on energy use reduction.”
Micah Gertson, Associate Partner, Altman Solon:

There is no question that power is the bottleneck in data centre builds right now, especially for AI workloads. For greenfield builds, getting a new grid offtake agreement for 10+ MW can take years, depending on the utility. For brownfield DCs, supporting higher rack densities by upgrading the power distribution and cooling inside the data halls is complicated enough without disturbing your customers’ mission-critical IT infrastructure.
In the greenfield context, we have observed three key power strategies among our leading DC clients that help them benefit significantly from the AI wave:
- Prioritise power – Make power-banking as important as land-banking by adding on-the-ground power procurement teams to your land prospecting team. Knowing how to work with utilities in each country and geography is a granular task that is greatly improved by in-market presence. For larger international players, this likely means hiring tens of people dedicated to power development versus the few that many operators have today.
- Create a ‘menu’ of sites – DC tenants deploying IT infrastructure for AI move quickly and are themselves under significant time pressure to monetise their assets (mainly GPUs). Having a menu of 10+ MW site options for which you have developed power helps operators provide optionality for AI-driven tenants. Location doesn’t have to be perfect these days: suburban Milan or Madrid, or rural locations in the Nordics, can be very interesting to a tenant if it means they can deploy 10+ MW of GPUs in six to nine months.
- Extract value from powered land – As an investor, look to invest in operating platforms that have larger land and power banks and at least one scaled AI-driven DC tenant. Even if they won’t develop every site into an operating DC, we see an increasingly healthy market for selling powered land – so there are more ways than ever to monetise the power team’s efforts.
In the brownfield context, we find addressing AI workloads at scale to be much more opportunistic. The main strategy an enterprise colocation operator can employ is to set up some rows or cages of rear-door heat exchanger racks to offer to customers as a proof of concept. This kind of localised liquid cooling deployment limits the requirement for large-scale plumbing refits for liquid loops and CDUs, while allowing you to offer high-density racks of 50+ kW to take advantage of unutilised power capacity.
AI-driven DC demand is an exciting but also challenging opportunity to address. This segment is still nascent, and there is no silver bullet – which means there are likely good opportunities for most operators and investors.
Ted Oade, Director of Product Marketing at Spectra Logic:

The AI boom is still in its early stages, yet power availability and consumption have already become critical concerns. The rapid expansion of AI and data-driven applications has significantly increased energy demand, forcing data centre operators to balance power constraints, grid stability, and sustainability goals while meeting AI’s growing appetite for compute resources.
To capitalise on AI opportunities, data centres are shifting from traditional CPU-based servers to high-performance GPU servers, which consume up to 13 times more power in the same footprint. Meanwhile, utility providers struggle to scale their infrastructure – a slow and costly process. Some propose nuclear solutions, such as small modular reactors (SMRs), but the regulatory, environmental and social challenges make them unlikely near-term options.
The growing power demand of storage
Power consumption isn’t just about compute resources. Storage also plays a major role, as data centres rely heavily on high-performance flash and disk-based solutions that require constant power and cooling. As AI adoption accelerates, so does the volume of data – inputs, outputs, models and inference processes – all adding to the storage energy burden.
Innovators are developing power-reduction strategies such as AI-optimised infrastructure, liquid cooling, efficient software, on-site renewable energy and smart utility grids. However, implementing these solutions at scale requires significant time and investment.
Magnetic tape: A proven, low-power alternative
One readily available solution is magnetic tape storage, a trusted technology protecting about 20% of the world’s data for decades. Unlike hard disk drives (HDDs) and solid-state drives (SSDs), which require continuous power for operation, tape remains offline until needed, reducing lifetime energy consumption by approximately 87% compared to HDDs.
Tape also offers the lowest total cost of ownership (TCO) and can scale beyond an exabyte in a single system, making it an ideal solution for data centres looking to optimise efficiency.
Tiered storage for power optimisation
Data centre operators can significantly reduce power consumption by implementing tiered storage architectures that offload inactive or ‘cold’ data (which typically makes up around 80% of stored data) to tape. Meanwhile, frequently accessed ‘hot’ data remains on high-performance SSDs and HDDs. This approach ensures that AI workloads have immediate access to essential datasets while minimising overall energy demand.
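As a rough illustration of how such a tiering policy might be automated, the sketch below (Python, standard library only) demotes files that have not been accessed for a configurable number of days from a disk-backed ‘hot’ path to a tape-backed ‘cold’ path. The paths, the 90-day threshold and the assumption that the tape library is exposed as an ordinary mount point (for example via LTFS or an archive gateway) are illustrative assumptions, not a Spectra Logic implementation.

```python
# Illustrative age-based tiering sketch. The mount points, threshold and the
# idea that the tape tier appears as a mounted filesystem are assumptions.
import shutil
import time
from pathlib import Path

HOT_TIER = Path("/mnt/flash_pool")     # hot data on SSD/HDD (assumed mount)
COLD_TIER = Path("/mnt/tape_archive")  # tape-backed archive (assumed mount)
COLD_AFTER_DAYS = 90                   # files untouched this long count as 'cold'


def migrate_cold_data(now=None):
    """Move files not accessed within COLD_AFTER_DAYS to the cold tier."""
    if not HOT_TIER.is_dir():
        return 0
    cutoff = (now or time.time()) - COLD_AFTER_DAYS * 86_400
    moved = 0
    for path in HOT_TIER.rglob("*"):
        # Note: access times may be unreliable on filesystems mounted with noatime.
        if path.is_file() and path.stat().st_atime < cutoff:
            target = COLD_TIER / path.relative_to(HOT_TIER)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))
            moved += 1
    return moved


if __name__ == "__main__":
    print(f"Migrated {migrate_cold_data()} cold files to the tape tier")
```

In practice, operators would typically rely on hierarchical storage management or archive software integrated with their tape library rather than a standalone script, but the principle is the same: keep hot data on fast media and automatically demote cold data to the lowest-power tier.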
Tape technology continues to evolve, with upcoming advancements such as LTO-10 expected to deliver higher data density, faster streaming performance, and greater per-device capacity than the latest hard drives.
A smarter approach to AI-driven storage
To address power supply and stability concerns while meeting AI-driven demands, data centre operators must adopt strategic storage solutions that balance performance, cost and sustainability. Expanding the use of tape-based storage offers a low-energy, cost-effective, and reliable alternative to traditional disk-based storage, helping data centres optimise power use, enhance grid stability and support AI’s future.
By integrating tape into their storage strategies, operators can build more efficient, resilient infrastructure in an era of unprecedented data growth.
Gary Higgins, Director of Security and Risk, DeterTech:

Data centre floorspace has almost doubled in Europe in the past decade. The desire to offset high energy costs has also spurred the related construction of further on-site infrastructure such as solar farms and battery storage facilities. Such consistent and surging demand for power and capacity is of course music to the ears of investors and operators. However, it has also caught the attention of organised criminal gangs (OCGs).
At DeterTech we operate a crime intelligence portal for the police and critical infrastructure companies that helps track the movements and targets of these OCGs. Based on this, we can confidently state that data centres, particularly those under construction or undergoing significant upgrades, are highly vulnerable. In most cases the target is cable, which we’re seeing taken in eye-watering quantities.
For example, in 2024 there were at least 120 reported cable theft offences against UK solar farms and a staggering 150% year-on-year increase in reported thefts of more than 20km of cable at a time. Just as concerning is the high likelihood of sites being hit again within a matter of months – often just as quickly as the stolen cable can be replaced.
The result is unexpected delays, disruption and increased insurance premiums. That’s a problem for any operator, but it’s compounded for data centre operators, who typically schedule new builds and upgrades within strict timeframes to avoid disrupting key business periods. It is in any operator’s self-interest to revisit security plans and to implement additional stringent physical measures, especially during the construction and refurbishment phase.
First it is important to consider each site and its specific vulnerabilities. The construction and refurbishment phase is when planned permanent security solutions are most likely to be offline, less effective or poorly placed. Often, perimeter fencing can also easily be cut with an angle grinder. So now is the time to bring in temporary visually verified intruder detection systems.
Unlike unmonitored CCTV, these systems can actively deter theft and easily be relocated to the parts of the site where they can have the greatest impact. They can also trigger a rapid security or police response before offenders are able to steal cable and create power supply and stability issues. Furthermore, the number of devices can quickly be scaled up on sites that are considered particularly vulnerable or that have been recently hit. This changes the environment and significantly reduces the chances of repeat victimisation taking place.
Finally, the clearly signposted forensic marking of valuable items such as cable, equipment and machinery is also highly advised. SmartWater has a 100% conviction rate in contested court cases, meaning it not only aids recovery but also acts as a clear deterrent to criminal activity.