The top infrastructure challenges that restrict AI’s potential


Patrick Lastennet, Director of Marketing and Business at Interxion, A Digital Realty Company, examines the roadblocks facing the acceleration of AI innovation. He says it is important to develop a strong infrastructure strategy for AI deployments from the start.

The hunger for Artificial Intelligence (AI) is growing. Companies in every industry are exploring ways that AI can accelerate innovation and deliver a powerful competitive edge. However, designing AI infrastructure is complex and overwhelming and, as a result, 76% of businesses regard infrastructure as an obstacle to AI success.

Still, that’s no excuse to slow down progress. With more companies actively pursuing or at least trialling AI, those who wait will only fall further behind. A recent survey of IT decision-makers across eight European countries found that nearly two-thirds of enterprises (62%) are currently deploying or testing AI, while another 17% plan to use AI in 2020.

Respondents cited a number of infrastructure roadblocks that limit the deployment of AI at scale, from a lack of resources – capital, people and physical infrastructure – to an unclear company strategy that doesn’t take AI into consideration.

Since AI deployment is a slow build for many companies, a huge technological gap will form between those that reach the deployment stage and those yet to begin planning. Companies reluctant to invest in AI will miss the opportunity to gain a competitive advantage.

That’s why it’s important to develop a strong infrastructure strategy for AI deployments from the start. Here’s what to consider.

Roadblocks to success

Frequently, companies lead significant AI research and development without initial input from IT. As a result, teams produce shadow AI – AI infrastructure created under IT’s radar, which is difficult to operationalise and ultimately unproductive. Companies can avoid shadow AI by leading with an infrastructure strategy specifically optimised for AI.

The survey highlighted incalculable costs as the top issue (21%). From the need for new investment in people and equipment, to unforeseen costs along the winding road between AI design and deployment, to rapid innovation and shifting technology needs, AI investment is potentially massive and difficult to predict. Moreover, an internal disconnect between IT and AI development can result in low ROI if the company fails to deploy the technology.

The lack of in-house expert staff also poses a significant challenge. Companies typically need to bring on specialised developers, which can be costly and requires time for new hires to learn the business well enough to align AI design with organisational goals.

Inadequate IT equipment also blocks companies from envisioning how AI fits into their operation. According to the survey, many worry their current infrastructure is not optimised to support AI and fear their data centres have reached full capacity.

Roadblocks at the strategy phase are largely similar across industries but specific infrastructure decisions can vary based on industry. Legal or compliance requirements, such as GDPR, as well as the type of data and work processes involved, all factor into AI infrastructure decisions.
The study found that 39% of companies across industries use major public clouds – most often manufacturers looking for flexibility and high speed. Meanwhile, 29% of respondents prefer in-house solutions with support from consultants – most often financial, energy and healthcare companies that wish to keep their personally identifiable information (PII) under tight security and greater control.

Elements of successful AI infrastructure

With so many companies starting from ground zero, it’s imperative to nail down a clear strategy from the start, since rearchitecting later can cost a lot of time, money and resources. There are several boxes companies need to check to successfully enable AI at scale.

First, businesses need to ensure they have the right infrastructure to support the data acquisition and collection necessary to prepare datasets for AI workloads. In particular, attention must be paid to the effectiveness and cost of collecting data from the edge or cloud devices where AI inference runs. Ideally, this happens across multiple worldwide regions, over high-speed connectivity and with high availability. This means businesses need infrastructure supported by a network fabric that offers the following benefits:

• Proximity to AI data: 5G and fixed line core nodes in enterprise data centres bring AI data from devices in the field, offices and manufacturing facilities into regional interconnected data centres for processing along a multi-node architecture.
• Direct cloud access: Provides high-performance access to hyperscale cloud environments to support hybrid deployments of AI training or inference workloads.
• Geographic scale: By placing their infrastructure in multiple data centres located in strategic geographic regions, businesses enable cost-effective acquisition of data and high-performance delivery of AI workloads worldwide.
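The bandwidth point above can be made concrete with a back-of-the-envelope calculation. The sketch below estimates how long it would take to move a day’s worth of edge-device data into a regional data centre over links of different speeds; all figures (device count, data volume per device, link speeds, usable-bandwidth ratio) are hypothetical assumptions for illustration only.

```python
# Illustrative sketch: time to move a day's edge-device data into a
# regional data centre. All figures below are hypothetical assumptions.

def transfer_hours(total_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours needed to move `total_gb` over a `link_gbps` link,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    gigabits = total_gb * 8                 # convert GB to gigabits
    effective_gbps = link_gbps * efficiency # real-world throughput
    return gigabits / effective_gbps / 3600

# Hypothetical fleet: 10,000 devices each producing 2 GB/day = 20 TB/day
daily_gb = 10_000 * 2
print(f"1 Gbps WAN link:        {transfer_hours(daily_gb, 1):.1f} h/day")
print(f"10 Gbps direct connect: {transfer_hours(daily_gb, 10):.1f} h/day")
```

Under these assumptions, a 1 Gbps link needs more than 24 hours to move one day of data – the backlog grows forever – while a 10 Gbps direct connection clears it with room to spare, which is the case for high-speed connectivity into interconnected regional data centres.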

As businesses consider training AI/Deep Learning models, they must choose a data centre partner that can, over the long term, accommodate the power and cooling technologies needed for GPU-accelerated compute. This entails:

• High rack density: To support AI workloads, enterprises will need to get more computing power out of each rack in their data centre. That means much higher power density. In fact, most enterprises would need to scale their maximum density at least three times to support AI workloads – and prepare for even higher levels in the future.
• Size and scale: Key to realising the benefits of AI is doing it at scale. The ability to run GPU hardware at scale is what delivers the effect of large-scale computation.
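The density gap described above can be sketched with simple arithmetic. The figures below (typical enterprise rack budget, per-server draw for a GPU training server, servers per rack, overhead allowance) are hypothetical ballpark assumptions, not vendor specifications.

```python
# Illustrative sketch of the rack-density gap between a typical enterprise
# rack and a rack of GPU training servers. All figures are hypothetical.

TYPICAL_ENTERPRISE_RACK_KW = 6.0  # assumed common enterprise rack power budget
GPU_SERVER_KW = 6.5               # assumed draw of one multi-GPU server under load
SERVERS_PER_RACK = 4              # assumed servers packed into one rack

def required_rack_kw(server_kw: float, servers: int, overhead: float = 1.1) -> float:
    """Total rack power needed, with a 10% allowance for networking and fans."""
    return server_kw * servers * overhead

ai_rack_kw = required_rack_kw(GPU_SERVER_KW, SERVERS_PER_RACK)
print(f"AI rack draw:     {ai_rack_kw:.1f} kW")
print(f"Density multiple: {ai_rack_kw / TYPICAL_ENTERPRISE_RACK_KW:.1f}x typical")
```

Even with these modest assumptions, the GPU rack needs several times the power budget of a typical enterprise rack, consistent with the survey finding that most enterprises would have to scale maximum density at least three times.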

A realistic path to AI

Most on-premises enterprise data centres aren’t capable of handling that level of scale. Public cloud, meanwhile, offers the path of least resistance, but it isn’t always the best environment to train AI models at scale or deploy them in production due to either high costs or latency issues.

So, what’s the best way forward for companies that want to design an infrastructure to support AI workloads? Important lessons can be learned by examining how businesses that are already gaining value from AI have chosen to deploy their infrastructure.

Hyperscalers like Google, Amazon, Facebook and Microsoft successfully deploy AI at scale with their own core and edge infrastructure often deployed in highly connected, high-quality data centres. They use colocation heavily around the globe because they know it can support the scale, high-density and connectivity they need.

By leveraging the knowledge and experience of these AI leaders, enterprises will be able to chart their own destiny when it comes to AI.
