Are data centre designers heading for ‘judgement day’ over AI?


Onnec’s recent report highlights the importance of holistic design and the challenges that AI is set to bring, inevitably changing the rules for data centres.

Artificial Intelligence (AI) has hit the mainstream. New tools and applications, such as text and video generation, are growing in popularity and have the potential to supercharge business growth and productivity.

With 35% of global organisations adopting AI, and worldwide AI spend set to reach US$300 billion by 2026, we have reached a tipping point where adoption is becoming exponential. But to realise these AI ambitions, data centres will be under huge pressure to meet skyrocketing demand – operators will want to ensure business continuity while also gaining and retaining customers. AI has major implications for how the data centre of the future is designed, and in the rush to meet demand, it's important that operators tread carefully to avoid making an expensive mistake.

Onnec: AI is changing the rules for data centre design

Irreversible implications

AI has taken centre stage and is rewriting everything we know about data centre design. To cater for rising AI demand, operators have been completely rethinking their design approaches. In fact, some hyperscalers hit pause on projects until they better understood the requirements for AI workloads.

The key area under consideration is AI-compute. Data centres have traditionally relied on racks powered by Central Processing Units (CPUs). To cater for AI, however, Graphics Processing Units (GPUs) will be required, which consume more power, generate more heat and occupy more space.

This makes decisions about the CPU versus GPU allocation at the initial design stage crucial, as essential infrastructure such as power, cooling and cabling can be difficult to replace once built. To cater effectively to demand, operators need to get the CPU/GPU split right. To do that, they need a clearer picture of demand so they can establish whether existing designs, infrastructure and cabling can cope.
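To see why the split matters so much at the design stage, a rough back-of-envelope calculation helps. The sketch below is purely illustrative: the per-rack power figures and the estimated_it_load_kw helper are assumptions for the sake of the example, not figures from Onnec's report, but they show how quickly the total power (and therefore cooling) budget moves as the GPU share grows.

# Back-of-envelope estimate of hall power for different CPU/GPU rack splits.
# All figures are illustrative assumptions, not Onnec's numbers:
# a traditional CPU rack is assumed at ~8 kW, a dense GPU rack at ~40 kW.

CPU_RACK_KW = 8.0    # assumed average draw of a CPU-based rack
GPU_RACK_KW = 40.0   # assumed average draw of a GPU-based rack

def estimated_it_load_kw(total_racks: int, gpu_fraction: float) -> float:
    """Estimate total IT load for a hall given the share of racks allocated to GPUs."""
    gpu_racks = round(total_racks * gpu_fraction)
    cpu_racks = total_racks - gpu_racks
    return cpu_racks * CPU_RACK_KW + gpu_racks * GPU_RACK_KW

if __name__ == "__main__":
    total_racks = 200
    for gpu_fraction in (0.0, 0.25, 0.5):
        load = estimated_it_load_kw(total_racks, gpu_fraction)
        print(f"{int(gpu_fraction * 100):>3}% GPU racks -> ~{load:,.0f} kW IT load")

Under these assumed figures, shifting half the racks to GPUs roughly triples the hall's IT load, which illustrates why power, cooling and cabling provision are so hard to retrofit once a facility is built.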

Re-designing for AI

Understanding this split will give operators a much clearer view of design requirements and the infrastructure needed to meet them. AI will dramatically change design requirements in multiple areas, such as:

Power – AI-compute requires high-performance processors (GPUs and Data Processing Units, or DPUs) that draw far more power than traditional CPUs.

Cooling – With compute demands rising, racks will run hotter and denser. Liquid cooling is preferred for high-performance chips and can be more cost-effective, but air cooling will still have a role to play.

Networks – The popularity of AI will bring an explosion in network traffic between applications within data centres, between data centres and to end users. Network infrastructure will be under increased pressure, with much higher data throughput than ever before.

Cabling – Poor-quality cabling will struggle in intense, high-throughput environments. Cabling is the foundation of data centre connectivity and a critical component for AI-compute.

Cybersecurity – As well as physical infrastructure, data centre operators need to consider the shift to AI-assisted data centre operations. Designers must factor in countermeasures to ensure malicious actors cannot sabotage data centre operations.

To future-proof and cope with explosive demand, operators need to take a holistic design approach that considers all aspects of the data centre – from cabling to cooling. When developing future data centres, operators must be mindful that a small change in one area can create ripples that sabotage the performance of an entire data centre. For example, poor cabling can squander the value of high-performance data centre hardware.
