Data centre design considerations and components critical to operational success


Meeting the design requirements of a data centre is non-negotiable: it determines the success and longevity of a facility and ensures speed to market. Abed Jishi, VP, Design, EMEA & APAC, Vantage Data Centers, provides insight into why the overall design objective is always to ensure predictability, so that customers can scale quickly and reliably in any geographic region.

Demand for high-quality hyperscale data centres across the globe has never been higher. Though the hyperscalers are leading the charge for more space, power and connectivity, large enterprise organisations are a further contributing factor. As with the hyperscalers, they recognise that high-quality, purpose-designed and built data centres offer many benefits versus the alternative of operating their own self-built and managed facilities.

Both hyperscalers and enterprise customers rightfully expect their data centre partners to ensure their technology and applications are powered, cooled, protected and connected when and how they want, irrespective of geographic location. Continuous IT availability through the provision of scalable, future-proofed power and resilient critical infrastructure is a prerequisite. So too are optimised energy efficiency, industry-leading PUEs (Power Usage Effectiveness) and compliance with environmental, security, quality and operational regulations.
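PUE is worth unpacking briefly: it is the ratio of total facility energy to the energy delivered to the IT equipment itself, so a value approaching 1.0 means almost no overhead. The sketch below illustrates the calculation with hypothetical figures, not data from any actual facility.

```python
# Illustrative PUE calculation (figures are hypothetical, not Vantage data).
# PUE = total facility energy / IT equipment energy; 1.0 is the theoretical ideal.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return Power Usage Effectiveness for a snapshot of facility loads."""
    return total_facility_kw / it_load_kw

# A facility drawing 12 MW in total to power a 10 MW IT load:
print(f"PUE = {pue(12_000, 10_000):.2f}")  # PUE = 1.20
```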

All of the above inform and shape the data centre design and development cycle, while also ensuring speed to market – without compromising the safety of staff and construction workers – and maximising sustainable construction and subsequent operation. At Vantage, we’ve evolved a replicable and scalable blueprint for bringing together all the elements needed to meet these demands: proven building technologies combined with experts certified to design, build, operate and maintain the infrastructure.

The design and development cycle effectively starts in tandem with the initial planning applications and discussions with local council authorities and utilities. A potential data centre location will undergo a thorough risk assessment covering the availability of suitable land, water, power and fibre. For example, a dedicated on-site high-voltage substation may be required, as was the case at our new Johannesburg campus; this first required our engineers to upgrade the pre-existing council power lines and modify the pylons.

Other risks include a potential site’s proximity to a flood plain or flight path, or its exposure to earthquakes. Geopolitical risk factors and the availability of construction workers will also need to be assessed and mitigated.

Critical components

Space and power

The move to high-density computing means the power-to-space ratio is increasingly critical: operators must deliver highly concentrated power to each rack within ever smaller footprints. Growing hyperscale cloud and High-Performance Computing (HPC) deployments are already driving rack densities to unprecedented levels.

To address these challenges, we are now providing up to 300 watts per square foot, allowing a standard data module to support an average of 8.3 kW of power to each rack. In addition, 4 MW of IT capacity is available within one data module. All electrical components are located outside the server room in dedicated containers. Apart from allowing more space for racks, this design approach facilitates mechanical and electrical (M&E) installation and ensures maximum reliability in any location, as all equipment is designed and premanufactured by world-class suppliers. Resilience is assured through N+2 redundancy for mechanical systems and distributed redundancy for electrical systems.
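As a rough back-of-the-envelope check (not a statement of any actual layout, which depends on aisle widths, containment and customer requirements), these figures can be combined to estimate racks per module and the gross floor area each rack’s power budget implies:

```python
# Back-of-the-envelope check on the density figures quoted above.
# Real layouts depend on aisles, containment and customer requirements.

MODULE_IT_CAPACITY_KW = 4_000   # 4 MW of IT capacity per data module
AVG_RACK_POWER_KW = 8.3         # average power delivered to each rack
POWER_DENSITY_W_PER_SQFT = 300  # watts per square foot of module space

racks_per_module = MODULE_IT_CAPACITY_KW / AVG_RACK_POWER_KW
gross_sqft_per_rack = (AVG_RACK_POWER_KW * 1_000) / POWER_DENSITY_W_PER_SQFT

print(f"~{racks_per_module:.0f} racks per module")           # ~482 racks
print(f"~{gross_sqft_per_rack:.1f} sq ft per rack (gross)")  # ~27.7 sq ft
```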

Cooling

Cooling efficiency has always been critical to data centre resilience and uptime, as well as to energy cost optimisation. But in the fight against climate change, making facilities as energy efficient as possible matters more now than ever. We design our facilities to use virtually no water for cooling. In areas where we have to use water because of climate conditions or customer requirements, we explore solutions that minimise any negative impact – sometimes even partnering with utilities to use recycled or reclaimed water.

In our experience, having utilised various cooling technologies over the years – including air-cooled chillers, water-cooled chillers, indirect heat exchange and liquid-to-the-rack cooling – leveraging an adaptive cooling architecture gives customers more choice. It allows customers the flexibility to deploy racks of up to 20 kW with traditional air-cooled technology and racks of more than 50 kW using liquid-to-rack cooling solutions. An integrated economiser capability reduces compressor energy based on the ambient temperature outside, so when the weather is favourable, cooling becomes ‘free’ and less resource-intensive.
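A minimal sketch of the economiser idea follows, assuming a hypothetical chilled-water supply setpoint and heat-exchanger approach temperature. Real plant controls blend free cooling and compressor duty far more gradually than this three-way rule, but the principle is the same: the colder it is outside, the less compressor energy is needed.

```python
# Simplified, hypothetical economiser decision. Setpoint and approach
# temperatures are assumptions for illustration, not real plant values.

SUPPLY_SETPOINT_C = 18.0  # assumed chilled-water supply temperature
APPROACH_C = 4.0          # assumed heat-exchanger approach temperature

def cooling_mode(ambient_c: float) -> str:
    """Pick a cooling mode from the outside ambient temperature."""
    if ambient_c <= SUPPLY_SETPOINT_C - APPROACH_C:
        return "free cooling (compressors off)"
    elif ambient_c <= SUPPLY_SETPOINT_C:
        return "partial economiser (compressors trimmed)"
    return "mechanical cooling (compressors on)"

for t in (8.0, 16.0, 30.0):
    print(f"{t:>5.1f} degC ambient -> {cooling_mode(t)}")
```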

The air-cooled chiller plant has N+2 component redundancy and is part of a closed-loop configuration that requires near-zero water recharge, supporting our goal of sustainable data centre operations. We’re also working towards pushing the medium for heat exchange closer and closer to the source – i.e., the chip – with the goal of an even more efficient and just as reliable cooling approach.

Highly efficient airflow with positive-pressure hot-aisle containment is achieved in part by locating the computer room air handling (CRAH) units outside each data module, on opposite sides. Again, this means more room for racks while allowing CRAH maintenance to be performed away from critical IT infrastructure.

Connectivity

A further important consideration is connectivity, which, along with space and power, directly influences how far a customer can expand at the same site in the future. The best data centres are hyperconnected, with a plethora of carrier and gateway options; the worst are isolated sheds with little connectivity.

Campus sites should therefore be chosen specifically to provide access to diverse carriers. To facilitate this, our campus design employs a modified ring topology for inter-building connectivity. First, carriers bring their fibre to the edge of the campus, entering through dedicated underground point-of-entry pathways separated by a minimum of 500 feet for safety. For added redundancy, carriers bring in a minimum of two laterals via diverse paths. From there, connections continue to entry points located within the data centres’ Meet-Me-Rooms (MMRs), in which customers can opt to install their own network equipment for cross-connecting to any carrier.
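As a toy illustration, the two diversity rules described above can be expressed as a sanity check over a carrier build. All names, entry points and distances below are hypothetical, invented purely to show the logic.

```python
# Toy sanity check of the diversity rules described above: each carrier
# should land at least two laterals on physically diverse entry paths, and
# those entry pathways should sit at least 500 feet apart. All names and
# figures below are hypothetical.

from itertools import combinations

MIN_DIVERSE_LATERALS = 2
MIN_POE_SEPARATION_FT = 500

# Measured separation between points of entry (hypothetical survey data).
poe_separation_ft = {frozenset({"poe_north", "poe_south"}): 650}

carrier_laterals = {
    "CarrierA": ["poe_north", "poe_south"],
    "CarrierB": ["poe_north", "poe_north"],  # not diverse: shares one entry path
}

for carrier, laterals in carrier_laterals.items():
    entries = set(laterals)
    diverse = len(entries) >= MIN_DIVERSE_LATERALS
    separated = all(
        poe_separation_ft.get(frozenset(pair), 0) >= MIN_POE_SEPARATION_FT
        for pair in combinations(sorted(entries), 2)
    )
    status = "OK" if diverse and separated else "FAIL - insufficient diversity"
    print(f"{carrier}: {status}")
```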

Security and operational management

At a minimum, physical security should take a multi-layered approach, starting with high-grade perimeter fencing and obstructions such as anti-ram bollards, along with thorough access control procedures for personnel and visitors, CCTV recording and highly trained security personnel. However, there is scope for continuous improvement.

For example, at Vantage, one of the areas in which we see potential is Artificial Intelligence (AI) for perimeter defence and intrusion detection. These solutions utilise cameras and other Internet of Things (IoT) sensors to detect and classify activity on campus, alerting security personnel if it is assessed to be a threat.

On the operational side, modern energy management and building management systems go a long way towards providing the comprehensive monitoring, measurement, alerting and reporting that operators and customers need. However, an operator must still conduct regular scheduled maintenance cycles, which requires trained engineers to be on hand 24/7. Initiatives such as predictive diagnostics and carrying sufficient on-site spares are further prerequisites in this department, enabling engineers to respond quickly to potential or actual equipment failures.
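One simple form that predictive diagnostics can take is flagging equipment whose sensor readings drift away from their recent baseline before a hard alarm threshold is reached. The sketch below is a minimal illustration of that idea; the sensor, readings and three-sigma threshold are all assumptions, and production systems use far richer models.

```python
# Minimal sketch of one predictive-diagnostics idea: flag equipment whose
# latest reading sits far outside its recent baseline, so an engineer can
# inspect it before failure. Readings and thresholds here are invented.

from statistics import mean, stdev

def needs_attention(readings: list[float], sigmas: float = 3.0) -> bool:
    """Flag the latest reading if it deviates sharply from the baseline."""
    baseline, latest = readings[:-1], readings[-1]
    mu, sd = mean(baseline), stdev(baseline)
    return abs(latest - mu) > sigmas * sd

bearing_temps_c = [41.2, 41.0, 41.5, 41.1, 41.3, 47.8]  # hypothetical trend
print(needs_attention(bearing_temps_c))  # True: schedule an inspection
```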

Being totally confident in the critical infrastructure also requires it to be rigorously tested. Some data centres have procedures to test their installations but still rely on simulating a total loss of incoming power. Ultimate proof comes with Black Testing where, under strictly controlled conditions, incoming mains power is isolated, allowing the UPS to take the full load before the emergency backup generators kick in.
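The essential ordering, and the key timing constraint that the UPS autonomy must cover the generator start-up gap, can be summarised in the hypothetical walk-through below. The figures and step descriptions are illustrative only; a real black test follows detailed method statements under strict supervision.

```python
# Minimal, hypothetical walk-through of a black-test sequence. This only
# captures the ordering of power sources and the key timing constraint;
# the autonomy and start-time figures are assumptions, not real values.

def black_test(ups_autonomy_s: float = 300.0, gen_start_s: float = 60.0) -> bool:
    """Return True if the UPS can bridge the gap until generators take load."""
    print("1. Isolate incoming mains power (breakers opened)")
    print("2. UPS batteries carry the full critical load")
    print(f"3. Generators start and accept load after ~{gen_start_s:.0f} s")
    bridged = gen_start_s < ups_autonomy_s  # UPS autonomy must cover the gap
    print("4. Load transferred to generators" if bridged else "4. TEST FAILED")
    print("5. Mains restored; load returns to the utility supply")
    return bridged

black_test()  # assumed: 5 minutes of UPS autonomy, 60 s generator start
```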

Sustainability

The industry’s move to net zero also clearly impacts data centre design and requires a holistic approach – one that considers the impact on carbon, energy, water, waste and the communities wherever data centres operate. For example, designing facilities to limit waste and lower their carbon footprint by making them as energy efficient as possible and running them on renewably sourced power, and exploring opportunities to reduce the carbon intensity of the fuels used for backup power generation.

Using building materials that have lower embodied carbon is a further initiative. Recently, in Virginia, Arizona and California, we used fibreglass-reinforced plastic (FRP) instead of steel for some equipment supports and pedestrian walkways, reducing embodied carbon by approximately 1,800 MTCO2e. We’re also exploring ways to export waste heat from our Zurich and Frankfurt campuses into the local district heat networks.

In summary, delivering sustainable, fit-for-purpose and highly resilient data centres requires a holistic design cycle centred on maximising certainty of outcome across all areas of the data centre. This demands a standardised approach that remains flexible enough to meet customer-specific needs. From a turnkey to a total build-to-suit scenario, the design objective is always to ensure predictability, so that customers can scale quickly and reliably in any geographic region.
