Data centre cooling in the hyperscale era

NGD campus

Phil Smith, Construction Director, Next Generation Data, gives Intelligent Data Centres an insight into stepping up to the challenges of rising power densities. 

The dawn of a new decade is seeing many more IT projects and applications consuming greater power per unit area. Power densities in the hyperscale era are rising: some racks are now pulling 60kW or more, and this trend will only continue with the growing demand for high performance computing (HPC) and for GPUs/IPUs supporting new technologies such as Artificial Intelligence.

Power and cooling are therefore top priorities for data centre operators. The availability of sufficient power, both now and in the future, is vital, yet it is already proving a stretch for the power capacity and local electricity distribution infrastructure of many older data centres and of those located in crowded metropolitan areas.

Putting super-efficient cooling and energy management systems in place is a must. For cooling there are various options: installing, for example, the very latest predictive systems and utilising nano-cooling technologies. However, these may only be viable for new, purpose-designed data centres rather than as retrofits in older ones. Harnessing climatically cooler locations which favour direct-air and evaporative techniques is another logical step, assuming such locations stack up in terms of accessibility, cost, security, power and connectivity.

Clearly, efficient cooling has always been critical to data centre resilience, as well as to optimising energy costs. But it now matters more than ever, even though next generation servers are capable of operating at higher temperatures than their predecessors.

HPC is a case in point. Would-be HPC customers are finding it difficult to identify colocation providers capable of delivering suitable environments, especially when it comes to powering and cooling these high-density, complex platforms.

Suitable colocation providers in the UK – and in many parts of Europe – are few and far between. The cooling required demands bespoke build and engineering skills, but many colocation providers are standardised and productised, and so are unused to deploying the specialist technologies involved.

HPC requires highly targeted cooling. Simple computer room air conditioning (CRAC) or free air cooling systems (such as swamp or adiabatic coolers) typically do not have the capabilities required. Furthermore, hot and cold aisle cooling systems are increasingly inadequate for addressing the heat created by larger HPC environments, which will require specialised and often custom-built cooling systems and procedures.

Cooling and energy management in practice 

Fit-for-purpose data centre facilities are actually becoming greener and ever more efficient in spite of rising compute demands. However, best practice calls for real-time analysis and monitoring to optimise cooling plant and maintain appropriate operating temperatures for IT assets, without compromising performance or uptime.

Central to this, and to maximising overall data centre energy efficiency, are integrated energy monitoring and management platforms. An advanced system will deliver significant savings through reduced power costs while minimising environmental impact.
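One headline metric such platforms track is PUE (Power Usage Effectiveness): total facility energy divided by IT equipment energy. The short Python sketch below is purely illustrative, using hypothetical meter readings rather than figures from any particular platform or site.

```python
# Illustrative only: how a monitoring platform might derive PUE from
# metered energy readings. The meter figures below are hypothetical.

def pue(total_facility_kwh: float, it_load_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    if it_load_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_load_kwh

# Example: 1,300 kWh drawn by the whole facility against 1,000 kWh of IT load
print(f"PUE = {pue(1300.0, 1000.0):.2f}")  # -> PUE = 1.30
```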

Abundant power and significant ongoing investment in state-of-the-art cooling and energy management technology ensure Next Generation Data's (NGD) hyperscale data centre in South Wales continues to meet and future-proof highly varied customer requirements, from standard 4kW rack solutions up to 60kW per rack and beyond, with resilience at a minimum of N+20%.
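As a rough illustration of what N+20% resilience implies for cooling capacity, the simplified sketch below sizes plant to at least 120% of the IT heat load. It is not NGD's sizing method, and the rack counts are hypothetical.

```python
# Simplified sizing sketch: cooling plant at a minimum of N+20%, i.e. at
# least 120% of the IT heat load. Rack counts and densities are hypothetical.

RESILIENCE_MARGIN = 0.20  # "N+20%" -> 20% headroom over the IT load (N)

def required_cooling_kw(racks: int, kw_per_rack: float) -> float:
    it_load_kw = racks * kw_per_rack
    return it_load_kw * (1 + RESILIENCE_MARGIN)

for density in (4, 20, 60):  # kW per rack, spanning the range quoted above
    print(f"10 racks at {density:>2}kW -> {required_cooling_kw(10, density):.0f}kW of cooling")
```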

The 750,000 sq ft colocation and cloud hosting facility was the first to receive a UK Government Climate Change Agreement (CCA) certification in 2014, making it exempt from carbon taxes. It was also the first in Europe to source 100% of its power from renewables, an initiative taken from day one over 10 years ago. Further, its considerable 180 MW direct-from-Supergrid power supply remains unique in Europe.

Stulz  

On NGD's 250,000 sq ft ground floor, comprising 31 separate data halls drawing a total of 32 MW, a Stulz GE system is installed. The indoor unit has two cooling components: a direct expansion (DX) cooling coil and a free cooling coil. It utilises outdoor air for free cooling in cooler months, when the outside ambient air temperature is below 20°C, with indirect transfer via a glycol-water solution maintaining the vapour seal integrity of the data centre.

The system automatically switches to free cooling mode, where dry cooler fans are allowed to run and cool the water to approximately 5°C above ambient temperature before it is pumped through the free cooling coil. In these cooler months, depending on water temperature and/or heat load demands, the water can be used in 'Mixed Mode'.

In this mode the water is directed through both proportionally controlled valves, enabling proportional free cooling and water-cooled DX cooling to work together. Crucially, 25% ethylene glycol is added to the water purely as an anti-freeze, to prevent the dry cooler from freezing when the outdoor ambient temperature drops below zero.

In warmer months, when the external ambient temperature is above 20°C, the system operates as a water-cooled DX system and the refrigeration compressor rejects heat into the water via a plate heat exchanger (PHX) condenser. The water is pumped to the Transtherm air blast cooler, where it is cooled and the heat rejected to air.
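The changeover logic described above can be summarised in a few lines of pseudo-control code. This is a simplified sketch, not Stulz's actual controls: the 20°C threshold and the 'approximately 5°C above ambient' water temperature come from the description, while the mixed-mode test comparing heat load against available free cooling capacity is an assumption made for illustration.

```python
# Simplified sketch of the Stulz GE changeover logic described above,
# not the unit's actual control firmware.

FREE_COOLING_LIMIT_C = 20.0  # above this ambient, free cooling is unavailable

def select_mode(ambient_c: float, heat_load_kw: float,
                free_cooling_capacity_kw: float) -> str:
    if ambient_c >= FREE_COOLING_LIMIT_C:
        # Warmer months: compressor rejects heat to water via the PHX condenser.
        return "water-cooled DX"
    if heat_load_kw <= free_cooling_capacity_kw:
        # Dry cooler fans cool the glycol-water loop to roughly ambient + 5°C.
        return "free cooling"
    # Cooler months, but the load exceeds what the free cooling coil can absorb:
    # both proportional valves open so free cooling and DX work together.
    return "mixed mode"

print(select_mode(ambient_c=12.0, heat_load_kw=350.0, free_cooling_capacity_kw=300.0))
```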

Vertiv 

Vertiv EFC 450 indirect free cooling units are used exclusively on NGD's 250,000 sq ft top floor, providing indirect free cooling, evaporative cooling and DX backup. There are 67 units providing 28.5 MW of cooling on an N+1 basis. These indirect free cooling units allow NGD to control the ingress of contaminants and humidity, ensuring sealed white space environments.

The system works in three modes. In winter operation, return air from the data centre is cooled by heat exchange with cold external air. There is no need to run the evaporative system, and the fan speed is controlled by the external air temperature.

In summer, the evaporative system must run in order to saturate the air; by saturating the air, its dry bulb temperature is reduced, enabling the unit to cool the data centre air even when external air temperatures are high. In the case of extreme external conditions, a DX system is available to provide additional cooling. The DX systems are sized to provide partial backup for the overall cooling load and are designed for maximum efficiency with minimum energy consumption.
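The three regimes can be sketched as follows. The temperature thresholds here are invented purely for illustration; the real units switch modes based on their own setpoints and sensor readings.

```python
# Illustrative only: the three operating regimes of the indirect free
# cooling units described above. Thresholds are assumptions, not Vertiv's.

def efc_operating_modes(external_c: float) -> list[str]:
    modes = ["indirect free cooling"]          # always: air-to-air heat exchange
    if external_c > 18.0:                      # assumed summer threshold
        modes.append("evaporative assist")     # saturate air to lower its dry bulb temperature
    if external_c > 35.0:                      # assumed extreme-condition threshold
        modes.append("partial DX top-up")      # DX sized for partial backup only
    return modes

for t in (5.0, 25.0, 38.0):
    print(f"{t:>4}°C external -> {', '.join(efc_operating_modes(t))}")
```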

Custom HPC cooling  

Where required, for specialist HPC and other high-density environments, NGD provides dedicated cooling solutions, such as the one designed for a major international insurance firm. The company required a 40kW rack configuration including direct liquid cooling to ensure an optimised PUE.

Working closely with the customer, NGD's engineering team designed, built and installed the HPC rack environment, including a bespoke direct liquid cooling system, in less than three weeks. The liquid cooling allows highly efficient heat removal and avoids on-board hot spots, thereby removing the problems of high temperatures without resorting to excessive air circulation, which is both expensive and very noisy.
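A back-of-envelope calculation shows why. Using Q = ṁ·cp·ΔT with an assumed 10°C coolant temperature rise (the figures below are illustrative, not the customer's design values), removing 40kW takes around three orders of magnitude less circulated volume with water than with air.

```python
# Back-of-envelope comparison: flow needed to carry 40kW away at an assumed
# 10°C coolant temperature rise, from Q = m_dot * cp * dT. Not design figures.

Q_W = 40_000.0   # heat load, W (40kW rack)
DT_C = 10.0      # assumed temperature rise across the rack, °C

# (specific heat J/kg·K, density kg/m³) at typical conditions
FLUIDS = {"air": (1005.0, 1.2), "water": (4186.0, 998.0)}

for name, (cp, rho) in FLUIDS.items():
    mass_flow = Q_W / (cp * DT_C)   # kg/s
    vol_flow = mass_flow / rho      # m³/s
    print(f"{name:>5}: {mass_flow:5.2f} kg/s  ≈ {vol_flow * 1000:8.2f} L/s")
```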
