Schneider Electric expert on the importance of modernised DCIM systems

Hybrid IT infrastructure has become increasingly complex, and legacy DCIM software has not kept pace with that complexity and growth. We speak to Kevin Brown, Schneider Electric’s Senior Vice President of EcoStruxure Solutions, about the emerging trend of DCIM 3.0 and the importance of DCIM systems that adapt and evolve with the IT infrastructure landscape.

Kevin Brown, Schneider Electric’s Senior Vice President of EcoStruxure Solutions

What is the trend of DCIM 3.0 and how has this evolved from legacy platforms?

Infrastructure management is a concept that goes back to when PCs first started being used as servers in the 1980s. To support those PCs, we started building small UPSs and the software to manage them. That was the first version of infrastructure management, or DCIM, and we’ve always seen the infrastructure follow technology trends.

After the dot-com bubble burst in 2000, CIOs suddenly looked at those PC servers and decided they had to get them under control. They put them into racks and pulled them into data centres. The infrastructure followed, and that started the trend of the data centre creating new requirements for the software, with new considerations around how much space, power and cooling capacity was required.

In around 2008, the industry started thinking about PUE and the efficiency of the data centre infrastructure, and that’s really when the term DCIM first started being used. Looking at the current market, we believe we’re at the beginning of another 20-year cycle, as the pandemic highlighted a couple of new challenges for CIOs.

We’re trying to embrace this and ask how the infrastructure management software needs to evolve, as it has in its first two cycles, to meet the challenges that CIOs are starting to experience.

What obstacles are organisations with outdated IT infrastructure currently facing?

It’s not only about the outdated infrastructure, but also about where the infrastructure is located. If you ask most CIOs how many servers or assets they have on their network, you will receive vague answers. They’re not fully aware – but we believe that’s not going to be acceptable going forward.

Firstly, you must make sure all of these assets are resilient. In the DCIM 2.0 wave, the concern was whether the data centre was resilient and had tier certification, whereas now a CIO goes to colocation or the public cloud without fully understanding how resilient those environments are.

Secondly, cybersecurity is a new challenge because the attack surface is much bigger, with a 40% increase in firms reporting cybersecurity incidents. This challenge is not just about the data centre; it’s about a whole hybrid IT infrastructure that needs to be more resilient and secure.

How is the IT infrastructure landscape evolving and why is it important for DCIM systems to adapt and evolve with it?

There is a new, heightened awareness about resiliency in hybrid infrastructures. If my wiring closet goes down while I work from home, there’s a problem. Since the pandemic, CIOs have been dealing with a level of complexity that they didn’t have in the past.

One of the factors emerging is sustainability. CIOs are considering the questions they will be asked in future, such as: ‘What is the energy consumption of your IT?’ Or: ‘What is the carbon footprint of your IT and what are you doing to manage, measure and reduce it?’ About 60% of IT energy is consumed inside data centres and 40% outside. By 2035, those numbers are going to reverse, with 60% of the energy consumption outside of what we consider a data centre and 40% inside, which means more than half of IT energy consumption will be the direct responsibility of the CIO. Sustainability is something that DCIM 3.0 aims to address.

Why should open, vendor-agnostic software be seen as essential to data centres?

CIOs should have the ability to customise the software and integrate data into their own environment. There’s a recognition that we need data from IT asset management tools, building management systems, power management systems and even sustainability systems. With this data, we can answer the questions about infrastructure locations, firmware patches and security policies. It’s all part of an overall system – and that’s why becoming more open and vendor-agnostic is essential.

Why should security, resilience and sustainability all be of concern to data centre operators and how is this driving the evolution of the IT infrastructure landscape?

When you look at the CIO and their responsibility, there is a fundamental trend of the role becoming more involved in business and sustainability, forcing them to start asking questions about their partners such as those in colocation facilities. To answer these questions, they need the right tools.

Historically, data centres have relied on an SLA, and that’s not to say the SLA won’t still have a place in the future. But there’s going to have to be something more powerful behind it, to ensure an understanding of how facilities are operated and to provide a more granular level of data than in the past.

Why are Edge deployments now considered mission critical and why does this type of infrastructure require a new set of software capabilities?

We think of the Edge as anything that’s outside of the traditional data centre. It could be, for example, a wiring closet or server room. It requires a whole different toolset to manage these distributed environments, as it’s unlikely there are any dedicated staff on-site to deal with issues that arise, unlike in a classical data centre.

Consider a retail organisation with 2,000 geographically dispersed stores, for example: if something goes wrong, it’s a lot more challenging. How do you secure those locations – both physically and in terms of cybersecurity? You don’t have the same controls that you have in a classical data centre, but you still need to apply those best practices.

To do that, you need to use technology and take a proactive approach. We’ve done a lot of work developing algorithms to help predict failures on these systems but, even then, if a failure is predicted, who will go and address it? It might be that there are partners to work with. These distributed environments and hybrid IT infrastructures create a new set of challenges that we don’t yet have all the answers to, but it’s important to treat them in the same way as a classical data centre.

Can you introduce us to EcoStruxure and how it is modernising IT infrastructure?

EcoStruxure is an overall architecture across all our different businesses, but in my segment we’re focused on IT, and there are a few key areas where we’re driving improvements.

We’re highly focused on the challenges of what’s going on inside the white space for those colocation providers I mentioned. For example, how does a colocation provider manage the complexity of capacity constraints, cooling, power and all the classic things that DCIM 2.0 did, while also making that information more open and available to its clients? When you look at the CIO and some of the challenges of hybrid infrastructure, we know there are a couple of things that we want to do better.

We want to use AI and Machine Learning, as that’s a way to manage this complex infrastructure. We also want to provide more flexibility in deployment options and to be more modular in nature to deal with new capabilities and scale. Those are just a couple of things we’re driving to make these tools even more flexible and meet the challenges of this complex environment.

How is Schneider Electric ensuring a smooth process for the integration of EcoStruxure IT?

We recognise it’s not just about selling a package of software – it’s about ensuring the business outcome the customer is looking for.

We have our base set of technology and capabilities and then we need to identify what the customer really needs and how we can provide the integration and custom projects that meet those requirements.

Some colocation customers want to provide their own customers with customised reports and portals to give them a unique experience, and we need to help them with that.

We identified this a few years ago and invested in building a global team so we can provide these services in different geographic locations as customers expand and scale.

What makes our approach unique is that we recognise the need for integration and that we have global expertise – this is demonstrating success and we’re excited about the position we’re in.
