Ehab Kanary, VP Sales, EMEA Emerging Markets at CommScope, discusses the complex demands placed upon data centre managers and why these demands are increasing the need for them to up their infrastructure game. Kanary draws attention to what’s driving this change across the data centre space, how network operators can address it and how they can prepare their facilities for tomorrow’s requirements.
As data centre applications become more resource-intensive and fluid, network managers must up their infrastructure game. The data centre environment is constantly changing, which should surprise absolutely nobody. But some changes are more profound than others and their long-term effects more disruptive.
To be clear, data centres — whether hyperscale, global-scale, multi-tenant or enterprise — aren’t the only ones affected by such fundamental changes. Everyone in the ecosystem must adapt, from designers, integrators and installers to OEM and infrastructure partners.
In the Middle East and North Africa region, for instance, the data centre market was valued at an estimated US$3.4 billion in 2022 and is expected to reach US$10.4 billion by 2028, according to RationalStat’s analysis.
Moreover, we are witnessing an increased focus on the region by international companies – Microsoft opened its first global data centre region in Qatar, while AWS launched another region in the UAE, allowing customers to run workloads and store data securely while serving end-users with lower latency. Investments and initiatives such as these bring new opportunities for a cloud-first economy.
We are witnessing the next great migration in speed, with the largest operators now transitioning to 400G applications and already planning the jump to 800G. So, what makes this latest leap significant? For one thing, the move to 400G then 800G and eventually 1.6T and 3.2T officially marks the beginning of the octal era, which brings with it some fundamental changes that will affect everyone.
What’s driving changes in data centre infrastructure
Increases in global data consumption and resource-intensive applications like Big Data, IoT, AI and Machine Learning are driving the need for more capacity and reduced latency within the data centre. At the switch level, faster, higher-capacity ASICs make this possible. The challenge for data centre managers is how to provision more ports at higher data rates and higher optical lane counts. Among other things, this requires thoughtful scaling with more flexible deployment options. Of course, all of this is happening in the context of a new reality that is forcing data centres to accomplish more with fewer resources (both physical and fiscal). The value of the physical layer infrastructure is largely dependent on how easy it is to deploy, reconfigure, manage and scale.
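To make the provisioning challenge concrete, the sketch below works through the arithmetic of distributing a switch ASIC’s capacity across front-panel ports. The 25.6 Tbps figure is an illustrative assumption (typical of recent-generation ASICs), not a number from the article:

```python
# Back-of-envelope sketch of switch port provisioning.
# The 25.6 Tbps ASIC capacity is a hypothetical, illustrative figure.

ASIC_CAPACITY_GBPS = 25_600  # assumed 25.6T switch ASIC

def port_count(port_speed_gbps: int) -> int:
    """Maximum number of front-panel ports the ASIC can serve at a given speed."""
    return ASIC_CAPACITY_GBPS // port_speed_gbps

for speed, lanes in [(400, 8), (200, 4), (100, 4)]:
    ports = port_count(speed)
    print(f"{speed}G ports: {ports} ({lanes} x {speed // lanes}G lanes each)")
```

The same silicon can surface as 64 ports of 400G (eight 50G lanes each) or 256 ports of 100G, which is why the choice of port speed and optical lane count shapes the cabling plan downstream.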
Identifying the criteria for a flexible, future-ready fibre platform
Several years ago, we began focusing on the next generation fibre platform. So, we asked our customers and partners: ‘Knowing what you know now — about network design, migration and installation challenges, and application requirements — how would you design your next-generation fibre platform?’ Their answers echoed the same themes: easier, more efficient migration to higher speeds; ultra-low-loss optical performance; faster deployment; and more flexible design options.
In synthesising the input and adding lessons learned from 40+ years of network design experience, we identified several critical design requirements necessary for addressing the changes affecting both our data centre customers and their design, installation and integration partners:
- The need for application-based building blocks
- Flexibility in distributing increased switch capacity
- Faster, simpler deployment and change management
Application-based building blocks
As a rule, application support is limited by the maximum number of I/O ports on the front of the switch. The key to maximising port efficiency lies in your ability to make the best use of the switch capacity. Traditional four-lane quad designs provided steady migration to 50G, 100G and 200G. But at 400G and above, the 12- and 24-fibre configurations used to support quad-based applications become less efficient, leaving significant capacity stranded at the switch port. This is where octal technology comes into play.
MPO breakouts become the most efficient multi-pair building block for trunk applications. Moving from quad-based deployments to octal configurations doubles the number of breakouts, enabling network managers to eliminate some switch layers. Moreover, today’s applications are being designed for 16-fibre cabling.
Yet not every data centre is ready to move away from its legacy 12- and 24-fibre deployments. Operators must still be able to support and manage these applications without wasting fibres or losing port counts. Efficient application-based building blocks are therefore needed for 8f, 12f and 24f configurations as well.
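A simplified fibre-count model illustrates why quad-era trunk sizes strand capacity under octal applications. The function and the 16-fibre application size below are assumptions for illustration (an eight-lane application using one transmit and one receive fibre per lane), not a description of any specific product:

```python
import math

def stranded_fibres(app_fibres: int, trunk_fibres: int) -> tuple[int, int]:
    """Trunks needed, and fibres left unused, when mapping an application
    onto fixed-size MPO trunks (simplified model for illustration)."""
    trunks = math.ceil(app_fibres / trunk_fibres)
    return trunks, trunks * trunk_fibres - app_fibres

APP_FIBRES = 16  # an octal application: 8 Tx + 8 Rx fibres
for size in (8, 12, 16, 24):
    trunks, stranded = stranded_fibres(APP_FIBRES, size)
    print(f"MPO-{size}: {trunks} trunk(s), {stranded} fibre(s) stranded")
```

Under this model, 12- and 24-fibre trunks each strand eight fibres per 16-fibre application, while 8- and 16-fibre building blocks divide evenly, which is the efficiency argument behind octal-aligned cabling.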
Another key requirement is a more flexible design that enables data centre managers and their design partners to quickly redistribute fibre capacity at the patch panel and adapt their networks to changes in resource allocation. One way to achieve this is to build modularity into the panel components so that point-of-delivery (POD) and network design architectures remain aligned.
In a traditional fibre platform design, components such as modules, cassettes and adapter packs are panel-specific. As a result, changing to components with a different configuration means swapping out the panel as well. The most obvious impact of this limitation is the extra time and cost of deploying both new components and new panels. Data centre customers must also contend with additional product ordering and inventory costs.
In contrast, a design in which all panel components are interchangeable and fit a single, common panel would enable designers and installers to reconfigure and deploy fibre capacity in less time and at lower cost. It would also enable data centre customers to streamline their infrastructure inventory and its associated costs.
Simplifying and accelerating fibre deployment and management
The final key criterion is the need to simplify and accelerate the routine tasks involved in deploying, upgrading and managing the fibre infrastructure. While panel and blade designs have offered incremental advances in functionality and design over the years, there is room for significant improvement.
Additionally, polarity management deserves mention. As fibre deployments grow more complex, ensuring that the transmit and receive paths remain aligned throughout the link becomes more difficult. In the worst case, maintaining polarity requires installers to flip modules or cable assemblies. Mistakes may not be identified until the link has been deployed, and resolving them adds time.
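The polarity problem can be sketched as a mapping from transmit positions at one end of a trunk to receive positions at the other. The sketch below is a deliberately simplified model of two common trunk wiring schemes (straight-through “Type A” and pair-flipped “Type B”, in the style of the TIA-568 polarity methods), not a complete treatment of the standard:

```python
# Simplified sketch of MPO trunk polarity (not a full TIA-568 model).
# A trunk maps fibre position i at one end to some position at the other.

def method_a(position: int, fibres: int = 12) -> int:
    """Type A trunk: straight-through, position 1 arrives at position 1."""
    return position

def method_b(position: int, fibres: int = 12) -> int:
    """Type B trunk: flipped, position 1 arrives at position 12."""
    return fibres + 1 - position

def link_ok(tx_positions, rx_positions, trunk) -> bool:
    """True if every transmit fibre lands on its expected receive fibre."""
    return all(trunk(tx) == rx for tx, rx in zip(tx_positions, rx_positions))

# An 8-fibre application carried over a 12-fibre trunk wired as Type B:
tx = [1, 2, 3, 4, 5, 6, 7, 8]
rx_expected = [12, 11, 10, 9, 8, 7, 6, 5]
print(link_ok(tx, rx_expected, method_b))  # prints True
```

The point of the model is that the expected receive positions differ per trunk type; if the panel modules at each end assume the wrong mapping, the link fails, and the error only surfaces once the whole channel is assembled.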
Collaborating to connect now and next
Data centres are rapidly evolving as data speeds and infrastructure complexity increase. This is especially true within hyperscale environments where lane speeds are accelerating to 400G, 800G and beyond and fibre counts across all layers of the network multiply.
It is therefore important that network managers, designers, integration professionals and installers continue to collaborate closely to help data centre operators maximise their existing infrastructure investments while preparing for future applications.