Markus Gerber, Sr. Business Development Manager at nVent, discusses a simple framework for choosing the right cooling systems and how organisations can identify the ideal time to implement liquid cooling. He outlines some challenges data centres face in the cooling process and how they can be managed.
What is liquid cooling and why is it important to an organisation’s infrastructure?
Liquid cooling is not a new concept; it has existed for some time. The underlying principle of cooling involves transferring heat from one medium to another, or using a medium to dissipate the heat. In this case, we use liquid to remove the heat, and the primary motivation behind adopting liquid cooling is to bring the cooling closer to the heat source.
In recent years, we have seen a progression in cooling methods: from cooling entire rooms to cooling rack walls, then moving to single racks, and now the focus is shifting to cooling the IT equipment itself. The main aim is to achieve higher efficiency, better manage thermal concerns and promote sustainability within current installations.
How is immersion cooling different from direct liquid cooling and what are the differences in deployment?
The main difference lies in how much heat the liquid captures: the immersion cooling approach captures almost all of it. As a company, we focus on single-phase cooling media, meaning the liquid we employ maintains a consistent state; other solutions using dual-phase liquids also exist. We offer two specific approaches, called ‘direct-to-chip cold plate cooling’ and ‘precision chassis-based immersion cooling’, and both utilise the same infrastructure components as direct-to-chip.
In this system, we have a manifold and a cooling distribution unit responsible for filling the manifold and the hose connections. The manifold connects to the server or chassis, which is filled with a dielectric liquid to efficiently dissipate heat.
What is direct-to-chip (D2C) liquid cooling and why is it considered an easy-to-maintain and scalable system? What are some challenges data centres face in the cooling process and how can they solve these challenges?
With direct-to-chip, a heat pipe or cold plate is placed on the hottest parts of the equipment, such as the CPU and GPU, and can also be extended to other parts like memory and power supplies. Liquid flowing through this component removes heat from the server.
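To give a feel for how liquid removes heat from a server, the standard heat-balance relation Q = ṁ·c_p·ΔT can be used to estimate the coolant flow a cold plate loop needs. The sketch below is purely illustrative: the function name, the 1 kW heat load and the 10 K temperature rise are assumptions for the example, and the fluid properties are those of water, not any specific nVent coolant.

```python
def required_flow_lpm(heat_load_w, delta_t_k, c_p=4186.0, density=997.0):
    """Estimate the coolant flow (litres per minute) needed to absorb
    heat_load_w watts with a coolant temperature rise of delta_t_k kelvin.

    Defaults (c_p in J/kg/K, density in kg/m^3) assume water at ~25 degC;
    all figures here are illustrative, not vendor specifications.
    """
    mass_flow = heat_load_w / (c_p * delta_t_k)  # kg/s, from Q = m_dot * c_p * dT
    return mass_flow / density * 1000 * 60       # convert m^3/s to L/min

# Example: a hypothetical 1 kW server allowing a 10 K coolant rise
# needs roughly 1.4 L/min of water-like coolant.
print(round(required_flow_lpm(1000, 10), 2))
```

Even this rough estimate shows why direct-to-chip is attractive: a modest flow of liquid carries away heat that would otherwise require a large volume of moving air.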
Using this system in your building creates one challenge. In a standard server rack, power distribution units are placed at the rear to power individual pieces of equipment. When incorporating liquid cooling, however, a manifold is also needed to distribute liquid to the equipment. This means bringing liquid into the white space, even closer to the IT setup, and establishing a secondary fluid network inside the white space.
When is it an ideal time to implement liquid cooling and what steps should organisations take?
There is no perfect time to start. Organisations need to consider their IT demands, computing demands and future strategies. There are no real limitations to this technology and it can be integrated into existing white spaces without newly built facilities. You can start by designating a corner of your white space for direct-to-chip cooling, creating a proof of concept or introducing a cube in an existing data centre, based on your specific requirements.
Organisations should decide based on their computing loads; even traditional cooling methods are not bad. If you have low-density servers, sticking with traditional cooling may be reasonable given the available infrastructure and its cost-effectiveness. However, if you run mid- or high-end loads and anticipate future changes in requirements, exploring new data centre cooling methods becomes the logical choice. So, assess your IT demand and make informed decisions accordingly.
How can organisations identify the right cooling system and what infrastructural changes do they need to make?
It is important for every organisation to assess its IT requirements and plan appropriately from the outset. Although the technology is not new, people may need guidance. Some experts perceive introducing liquid into the white space as a risk, but with proper planning and the right components this risk can be mitigated. Existing installations already demonstrate that the approach is secure, sustainable and energy efficient.
Although it may require rethinking some established habits, the key lies in proper planning and collaboration, with all companies working together and considering the sustainability and energy efficiency of their systems. They need to bridge the gap between IT and facility infrastructure, making sure the direct-to-chip cooling infrastructure meets all the requirements on the server side. In a nutshell, the process revolves around assessment and design layout planning.
How does nVent help its customers to protect IT assets and maintain high availability (uptime) at minimal operational costs?
Our company’s slogan, ‘think global, act local’, reflects a main advantage we have. We are a global company with years of experience in immersion and direct-to-chip cooling, having worked extensively with hyperscale programmes in the past. Our approach now involves leveraging and standardising that engineering expertise, along with our manufacturing capabilities, quality standards and certifications. Currently, we are focused on standardising all these skills and expertise for enterprise use, whether for an on-premises data centre or a multi-tenant data centre site.
Our standardised components form a construction toolkit, allowing us to combine different components, with built-in redundancies, manufactured to the highest quality standards to create tailored solutions for each specific demand.
The rise of Machine Learning (ML) is driving the adoption of liquid cooling. What’s your take on this and how do you see this evolving in future?
Machine Learning is an integral part of AI. It currently plays a significant role in our business and positions us well in this domain by fuelling the development of new technologies, especially on the cooling side.
As I reflect on my journey with cooling, I realise how exciting it is to see the industry advancing. Machine Learning is a driver, and what drives it is the computing load these systems require. Computing loads that can no longer be cooled by air drive the adoption of liquid cooling, so new technologies like Machine Learning are pushing the market forward.
We all want to reduce the carbon footprint, and liquid cooling allows you to run at higher temperatures and become more sustainable, which, by extension, helps to drive the market when operated efficiently. Efficiency is crucial in operating these systems and that is a key takeaway for us. I am eagerly looking forward to more installations and advancements in this field.