The past year has seen many of us working from home, spending far more time on streaming services and staying indoors. This has increased demand on data centres worldwide, driving up sales of equipment. Demand will also be driven by 5G data, Internet of Things (IoT) applications, the evolution of Edge Computing and local data centres, all of which will significantly shape this market in the coming years.
A key area for data centres is thermal management. Most data centres rely on air-conditioned rooms and large heat sinks for the individual components. However, in the future, this may not be feasible in all cases, especially in smaller Edge Computing sites. Power consumption is always a big concern for data centres, and hence we expect to see more passively cooled centres, leading to a more careful selection of thermal materials. Direct liquid cooling and even immersion cooling have seen growing interest in recent years, but regardless of the overall thermal strategy adopted, the considerations around thermal interface materials (TIMs) are crucial.
TIMs are required to transfer heat from the operating component to its heatsink. In a data centre, TIMs can be found on processors and chipsets on server boards, in various switch and supervisor components and in the power supplies, to name a few locations. Many operators have used, and continue to use, typical thermal greases as their TIM. While these offer good thermal conductivity and easy application, they are susceptible to pump-out and become brittle over time, limiting long-term thermal performance and requiring maintenance on the system. Alternative forms of TIMs, such as pads and phase change materials, are gaining traction, enabling even easier application and longer lifetimes.
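The thermal role of a TIM can be made concrete with a simple conduction estimate: the temperature drop across the interface layer is the dissipated power times the layer's thermal resistance. The sketch below illustrates this relationship; the CPU power, bond-line thickness, conductivity and die area are illustrative assumptions, not figures from the report.

```python
# Sketch: estimating the temperature drop across a TIM layer.
# All numeric values here are illustrative assumptions, not vendor data.

def tim_temperature_drop(power_w, thickness_m, conductivity_w_mk, area_m2):
    """Temperature drop (in K or degrees C) across a TIM layer:
    dT = P * t / (k * A), where t/(k*A) is the conduction resistance."""
    resistance_k_per_w = thickness_m / (conductivity_w_mk * area_m2)
    return power_w * resistance_k_per_w

# Example: a 150 W processor, 100 micron bond line,
# 5 W/m.K grease, 40 mm x 40 mm contact area.
dT = tim_temperature_drop(150, 100e-6, 5.0, 0.04 * 0.04)
print(round(dT, 2))  # 1.88
```

A thinner bond line or a higher-conductivity material shrinks this drop, which is why pump-out and drying of greases over time matters: both effects increase the effective resistance of the layer.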
Another key trend for data centres is the increasing power density. 1 kW per rack may have been considered a high power density in the past, but in 2018, the average was closer to 7 kW. However, for many large data centres, 15 kW per rack may be more typical, with some reaching 20 kW or more. A critical challenge with this increase is managing the heat generated. This is another key driver for higher performance and longer lifetime TIMs, a trend that will only continue to grow in importance.
The report from IDTechEx, Thermal Interface Materials 2021-2031: Technologies, Markets and Opportunities, considers the forms and compositions of TIMs, benchmarks commercial products and details new advanced materials. It also analyses current TIM applications in emerging markets as well as the key drivers and requirements in these areas such as electric vehicle batteries, data centres, LEDs, 4G and 5G infrastructure, smartphones, tablets and laptops. In addition, 10-year granular market forecasts are given for each of these segments in terms of application area and tonnage.
We hear from a number of experts to glean their thoughts on the data centre trends shaping the future of the industry and how business leaders should be adapting their approach.
Amr Alashaal, Regional Vice President – Middle East at A10 Networks: “We’ve become an application-centric society. The cloud has played a huge role as we move away from strictly on-premises data centres to a hybrid cloud and multi-cloud approach. And, of course, the COVID-19 pandemic has had a profound impact on how we are building, delivering and consuming our applications. Below are some trends that data centre managers should keep an eye on:
- With more employees working from home due to the pandemic, organisations will continue to see growing demand for cloud application delivery. For some, the pandemic is accelerating their plans to use public clouds and private clouds, while others will adjust their original plans to meet the growing demand.
- Migration to the cloud will continue to grow and over 90% of enterprises will have adopted a hybrid cloud strategy by the end of 2021.
- 83% of enterprise workloads were expected to be in the cloud in 2020. That number will grow to over 90% in 2021. This will drastically increase secure application delivery and load balancing needs.
- With the growth in the number and use of essential business apps for online work and shopping, organisations will spend more budget on web application security to prevent cyberattacks directed at vulnerable applications.
- Due to significant investment in data centre hardware, more organisations than expected will take a hybrid cloud approach, rather than going ‘cloud only’, for application delivery.
“The future of modern application environments will incorporate multiple clouds as well as on-premises systems. Companies increasingly need application traffic management capabilities that simplify deployments across these mixed or even all-cloud environments and provide centralised traffic inspection, visibility, analytics and integrated services.
“The need for secure application delivery solutions to maximise application availability, performance, security and customer experiences is a critical business requirement that spans on-premises systems and data centres as well as public, private and hybrid clouds. Some companies will need hardware appliance solutions that fit into their existing architectures and IT processes. Others are moving more rapidly to cloud-ready software solutions that can operate on virtual machines or bare metal servers and support new technologies like Kubernetes and other container solutions to improve portability, scalability and consistency across distributed environments. In today’s transitional, hybrid world, many companies will require both.
“The speed and degree to which they move to new technology models and infrastructures like the cloud and 5G is a complex business decision that will vary from company to company. Wherever a company is in its journey towards infrastructure modernisation, including cloud, multi-cloud and hybrid cloud, technology providers need to be there with solutions that will allow for optimal outcomes in protecting against evolving cyberthreats and ensuring that application traffic is managed for optimal performance, uptime and cost-efficiency.”
Duncan Clubb, Head of Digital Infrastructure Advisory at CBRE: “You’ve probably heard about ‘the Edge’; it is being mentioned all over the trade press and at all the conferences that used to be dedicated to data centres. But if you ask three people what the Edge is, you will get at least four answers. None of the definitions is necessarily wrong; they just come from varied viewpoints. Telecommunication companies have one view (at least), content delivery networks have another, cloud companies yet another and so on… The one common theme is about putting IT resources nearer to where they get used.
“The Edge is merely a location where you put some form of processing power – simply because you have to. This is an important distinction. We have spent the last few years centralising processing power in enterprise data centres and cloud services – it was supposed to be the most efficient way. So why are we looking at a distributed network of systems, with a grid of small data centres, apparently breaking the mould that was meant to represent best practice and deliver the most efficient IT infrastructure?
“The reason is simple – distributed infrastructure allows you to deploy a new class of application: the Edge-native application. Edge-native applications need a raft of special features that classic or cloud architectures cannot provide in all locations, including low network latency and high bandwidth. Latency – the delay between transmitting and receiving data on a network – is highly sensitive to distance, so it follows that the nearer you can put processing power to its users, the lower the latency. This is really important for some industrial or safety-critical apps, as well as AR/VR and gaming, but I would argue that it’s the bandwidth that is the most important factor at the Edge.
“Edge-native applications are still in their infancy, but one thing most have in common is that they generate or consume huge quantities of data. Much of that data is in the form of high-definition video feeds, and many Edge-native applications have been written to perform processing and analytics on multiple video streams. The use of Machine Learning or AI systems to analyse these huge data feeds is creating new requirements for compute resources to be provided at the Edge so backhaul networks do not become clogged. One example in the retail industry is the use of video and ML systems to automate shops so that they can be operated without cashiers or tills – it started with the Amazon shops, but now all the big retailers are moving this way too. These shops can use many tens or hundreds of high-definition (4K) cameras, each of which needs 15-32 Mbps of bandwidth. It’s not hard to see that data or video-intensive applications are going to drive the need for Edge Compute – a distributed data centre model that will enable the next generation of apps by providing local processing power. The good news for the data centre industry is that this will add to the enterprise and cloud core, not take away from it. If anything, Edge Compute will increase demand at the core.”
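The backhaul arithmetic behind this argument is easy to sketch. Using the 15-32 Mbps per-camera range quoted above, with the camera count and midpoint rate chosen purely as illustrative assumptions, the aggregate uplink a single automated shop would need without local processing looks like this:

```python
# Sketch: aggregate video bandwidth for a camera-heavy Edge site.
# Camera count and the ~24 Mbps midpoint are illustrative assumptions
# within the 15-32 Mbps per-4K-stream range cited in the text.

def aggregate_bandwidth_mbps(num_cameras, mbps_per_camera):
    """Total sustained uplink needed if all streams are backhauled."""
    return num_cameras * mbps_per_camera

total_mbps = aggregate_bandwidth_mbps(100, 24)
print(total_mbps)  # 2400 -> roughly 2.4 Gbps of sustained video
```

A sustained multi-gigabit feed per shop is exactly the load that local Edge processing (analysing streams on site and backhauling only events or metadata) is meant to avoid.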
Herman Chan, President, Sunbird Software: “The data centre industry is currently undergoing a multitude of transformations that are permanently changing how data centres are managed, but three trends are shaping up to have the largest impact.
- Sustainability. According to KPMG, 80% of all worldwide companies (and 90% of the largest companies) now report on their sustainability. 30 years ago, that figure was just 12%. Sustainability has become a primary concern for customers, stakeholders and government regulators as they seek to reduce environmental impact, maximise efficiency and lower operating expenses. While large-scale data centres and cloud operators are looking towards hyper-efficiency and zero carbon footprints in the coming decades, sustainability is now a concern for all data centre professionals, raising the bar for the entire industry. Data centre managers need to understand the impact on sustainability when making decisions about whether workloads will be done in owner-operated/colo data centres or in the cloud.
- Hybrid. IDC expects that by 2022, more than 90% of enterprises worldwide will rely on a mix of on-premises/dedicated private clouds, multiple public clouds and legacy platforms to meet their infrastructure needs. Yet, about 30% of organisations say that migration to the cloud is a key challenge. Data centre managers are faced with big decisions about where their workloads will take place. They need to weigh up many factors including cost, time-to-market, risk factors, security concerns and whether efficiency improvements would enable them to take on more workloads in owner-operated/colocation data centres. As organisations tailor their workloads and spending based on their own unique needs, both cloud computing and enterprise data centres are poised to play major roles in the IT mix for modern businesses.
- Decentralisation and Edge Computing. According to Gartner, more than 75% of all enterprise-generated data will be created and processed outside of the traditional data centre or cloud by 2025. Instead, much of this data will be handled at decentralised Edge sites. Modern infrastructure managers are faced with more sites, more remote assets and unique requirements that are causing them to rethink the management paradigm. Without the ability to go onsite, they still need to maximise utilisation of space and power resources, issue work orders to remote hands, monitor the health of all sites, track all assets and connections, and secure all sites and equipment. To achieve this, leading experts are developing comprehensive remote infrastructure management strategies supported by intelligent hardware, environment sensors, remote security equipment and remote monitoring, tracking and operations software.
“Data centre managers need to be able to adapt to these trends or risk falling behind. Industry experts are deploying second-generation DCIM software to turn the challenges of modern data centre management into opportunities to improve operations.”
Ashraf Yehia, Managing Director, Eaton Middle East: “For data centre operators, the Uninterruptible Power Supply (UPS) has long represented a critical safeguard against potentially damaging power anomalies, as well as vital battery backup to ensure Business Continuity during an unexpected power outage. Yet thanks to new technology, data centre UPSs now have the capability to achieve a dual benefit — transforming from a load on the grid to a value-generating asset.
“Most large-scale data centres have deployed substantial battery banks to provide adequate backup in the event of a blackout. Yet the reality is, these batteries sit unused the vast majority of the time because power outages occur infrequently. Operators in today’s hyperscale, multi-tenant and other large data centres now have the opportunity to leverage this underutilised asset, turning their UPS into a profit centre and supporting the grid as a distributed energy resource (DER).
“The bidirectional converter technology in UPS systems combined with battery systems offering a longer lifespan and higher cycle rates has created the potential for this evolution of the traditional UPS. While the data centre maintains control of its energy — choosing how much capacity to offer and when — it has the ability to convert the traditional power backup into an energy storage device, providing a range of benefits to operators seeking to lower energy bills and optimise consumption.
“With EnergyAware technology, UPSs can help organisations optimise energy costs and generate additional revenue as markets open up for grid services. Opportunities include providing peak shaving to help avoid or reduce demand charges, shifting energy consumption for time-of-use rate optimisation and providing frequency regulation to help grid operators meet explosive growth in demand. The technology also enables faster adoption of renewable energy into the power grid, improving the sustainability score for data centres.
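The peak-shaving opportunity described above comes down to simple demand-charge arithmetic: if the UPS battery supplies part of the site's billed peak, the operator pays the demand charge on a lower peak. The figures in this sketch (site peak, shaved load, tariff) are illustrative assumptions only, not Eaton data.

```python
# Sketch: estimating monthly demand-charge savings from UPS peak shaving.
# All tariff and load figures are illustrative assumptions.

def peak_shaving_savings(peak_kw, shaved_kw, demand_charge_per_kw):
    """Monthly saving when the UPS battery supplies `shaved_kw`
    of the site's billed peak demand."""
    billed_before = peak_kw * demand_charge_per_kw
    billed_after = (peak_kw - shaved_kw) * demand_charge_per_kw
    return billed_before - billed_after

# A 2 MW peak, 200 kW shaved by the UPS, $15/kW monthly demand charge.
print(peak_shaving_savings(2000, 200, 15.0))  # 3000.0 per month
```

The same battery can stack further revenue from frequency-regulation markets, which is why the "load to asset" framing in the text holds even for modest shaved capacity.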
“The implications of the EnergyAware UPS are significant, both for the grid and for data centre operators. The technology can be utilised to lower demand and peak time charges, as well as contribute to clean energy goals. Customers can use an existing asset to create a new revenue stream and lower energy costs while still providing a vital backup solution.”
David Friend, CEO and Founder of Wasabi: “In 2017, thousands of websites across South Korea went offline for weeks as a result of a ransomware attack on the web-hosting firm Nayana. Even after paying a then-record $1 million ransom, hundreds of customer websites could not be recovered and thousands more were left in business limbo as they waited for Nayana to go back online.
“Over the following months, it was revealed that Nayana had suffered a direct attack on its Linux servers at its data centre – one of the first publicised instances of a whole data centre being compromised by a ransomware attack. In the four years since the attack, the threat of ransomware has continued to grow for both data centre operators and enterprises in virtually every industry.
“The ransomware threat doesn’t just beget a deadweight loss in terms of either forcing data centre operators to pay out a ransom or allowing their data to be irrevocably lost. Suffering a ransomware attack, especially one that brings down client-side operations, means a tremendous loss in productivity and customer confidence. Given how lucrative this practice is for cybercriminals, and the ongoing rise of cryptocurrencies that can facilitate ransom extraction, this is a trend the data centre industry must adapt to.
“In the past, the typical entry point for ransomware was a single on-premises user who succumbed to a phishing email. However, with the increasing sophistication of hackers, it’s now possible for security lapses in a client or a technology partner to provide an entry point for a ransomware package to move laterally into a data centre.
“That means that hardening the data centre needs to be a top priority. Security experts generally emphasise the importance of keeping systems patched, but it’s important to recognise that hackers can be one step ahead of your software and firmware.
“That’s why data centre leaders need to push for the adoption of extensive analytics in their data centres to monitor traffic on servers and networks, so as to be able to spot unusual behaviour. This should be complemented by the use of white lists to restrict processes and applications allowed to run on servers, so as to prevent the deployment of many ransomware packages.
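The white-listing idea above can be illustrated with a minimal allow-list check: compare what is actually running against an approved set and flag anything unexpected. The process names and the allow-list here are hypothetical; in production this enforcement would sit in OS-level controls (e.g. AppLocker or SELinux policies) rather than a script.

```python
# Sketch: a minimal process allow-list check, illustrating white-listing.
# Process names and the allow-list are hypothetical examples.

ALLOWED_PROCESSES = {"nginx", "postgres", "sshd", "node_exporter"}

def flag_unexpected(running_processes):
    """Return the processes that are running but not on the allow-list."""
    return sorted(set(running_processes) - ALLOWED_PROCESSES)

observed = ["nginx", "postgres", "cryptominer", "sshd"]
print(flag_unexpected(observed))  # ['cryptominer']
```

Paired with the traffic analytics mentioned above, an allow-list turns detection from "spot the known-bad" into "alert on anything not known-good", which is far harder for a novel ransomware package to evade.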
“Another important thing for industry leaders to consider is ensuring that, alongside keeping their existing systems patched, they also avoid using obsolete systems in their data centres. Legacy hardware and software can present an easy access point for malware of all sorts, including ransomware and can risk compromising your entire data centre.”