Many organisations are feeling the strain as they try to cope with modern enterprise storage infrastructures spread across a wide range of locations. Neil Stobart, Vice President, Global System Engineering at Cloudian, explains the issues businesses face and how to overcome them.
Storage infrastructure has evolved significantly in recent years. What are the main issues that this is presenting for modern businesses?
Like virtually all aspects of IT, the world of storage has developed at a rapid rate. The days of organisations relying on a single on-premises server with storage sitting in the backroom are long gone. Modern enterprise storage infrastructures may include hundreds or even thousands of interconnected users, applications, network connections and devices – all spread across a wide range of locations. What’s more, these infrastructures now commonly utilise a mixture of on-premises and public cloud storage. The result is a lack of visibility that makes these environments increasingly hard to monitor and manage efficiently.
It’s no surprise that many organisations are feeling the strain. Monitoring and managing this new generation of increasingly complex and geographically distributed storage infrastructure requires a huge amount of time and attention from already overstretched IT teams. Not only do they have to meet users’ ever-evolving capacity and bandwidth demands; they also need to stay on top of numerous operational factors to ensure services can be delivered without a hitch. Outages and disruptions need to be kept to the absolute minimum.
Just how problematic is the visibility challenge for organisations?
The short answer is that it can be extremely problematic. Without a comprehensive view of their storage infrastructure, businesses are less likely to spot potential issues or troublesome trends that could indicate serious problems further down the line. Factors such as network latency, excess capacity usage, malfunctioning components or rogue user behaviour can all point towards long-term problems such as security risks or misdirected spending. For example, a pronounced spike in data downloads could signal a data security problem, such as users failing to comply with security policies.
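As a rough illustration of the download-spike example, the check below flags a day whose download volume sits well above the historical norm. This is a generic statistical sketch, not how any particular monitoring product implements anomaly detection; the data and threshold are invented for the example.

```python
from statistics import mean, stdev

def is_download_spike(history_gb, today_gb, z=3.0):
    """Flag today's download volume if it exceeds the historical mean
    by more than `z` standard deviations (simple illustrative check)."""
    mu, sigma = mean(history_gb), stdev(history_gb)
    return today_gb > mu + z * sigma

# Hypothetical daily download volumes (GB) for one user over a week
history = [40, 38, 45, 42, 39, 41, 44]
is_download_spike(history, 120)  # a clear spike worth investigating
is_download_spike(history, 43)   # within the normal range
```

In practice a monitoring tool would apply a more robust baseline (seasonality, per-user profiles), but the principle of comparing current behaviour against an expected range is the same.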
Poor visibility also makes it significantly harder to optimise systems. Are users experiencing a good service? Are any nodes running low on capacity? Is the infrastructure optimally configured? These types of questions are much harder to answer accurately without full visibility across the entire storage infrastructure.
That’s why modern businesses need access to tools that provide a full, graphical view of their infrastructures. With such tools in place, they will be able to react to potential problems swiftly and proactively across all locations before they start to cause more serious issues, in turn enabling more intelligent management.
How does this link to issues around scalability?
The capacity and performance scalability of data storage systems must be key considerations for modern organisations – especially those that are heavy data users. Traditional ‘scale-up’ storage platforms have definite scalability limits, and once those limits are reached, the only option is to expand by adding separate, independent storage systems. This adds to the management overhead (i.e. more devices to manage) and splits datasets between devices, making it more difficult to locate and use data together as intended.
This presents a need for distributed storage systems with ‘scale-out’ architectures. Such systems allow for the use of standard commodity server hardware to build large scale clusters that provide the storage capacity and performance needed to meet today’s data explosion challenges. Datasets are stored in a single virtual namespace, even though the data itself is distributed across multiple devices, thereby removing the issue of data being tied to independent storage devices.
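One common way to present a single namespace over many devices is to hash each object key onto a ring of nodes, so data placement is deterministic but spread across the cluster. The sketch below is a generic consistent-hashing illustration under assumed node names; it is not a description of Cloudian's placement algorithm.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Map object keys to storage nodes so one logical namespace
    spans many physical devices (illustrative sketch only)."""

    def __init__(self, nodes, vnodes=100):
        # Each node gets many virtual points on the ring for even spread
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect_right(self._ring, (h,))
        return self._ring[idx % len(self._ring)][1]  # wrap around the ring

# Hypothetical three-node cluster; the caller only sees one namespace
ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("bucket1/object.dat")
```

The key property is that adding a node only remaps a fraction of the keys, which is what lets scale-out clusters grow without rebalancing everything.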
Distributed storage systems can also be dispersed across multiple geographical locations, with the management software providing flexibility to manage data consistency, availability, protection and Business Continuity requirements.
What benefits can businesses realise by increasing their understanding of and visibility into their cloud infrastructures?
Many businesses today employ hybrid cloud infrastructures, which can make it extremely difficult for IT teams to maintain full awareness of data usage across the entire environment and accurately monitor the cost of the public cloud component. On top of the basic cost of cloud storage, businesses must also account for network bandwidth charges and data egress fees – both of which can quickly creep up, especially in large organisations.
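To see how egress fees compound the headline storage cost, a back-of-the-envelope estimate helps. The rates below are illustrative placeholders, not any provider's actual pricing:

```python
def monthly_cloud_cost(stored_gb, egress_gb,
                       storage_rate=0.023, egress_rate=0.09):
    """Rough monthly bill: storage plus data-egress fees.
    Rates ($/GB) are made-up placeholders for illustration."""
    return stored_gb * storage_rate + egress_gb * egress_rate

# 10 TB stored, 2 TB downloaded: egress is nearly half the bill
monthly_cloud_cost(10_000, 2_000)
```

Even at modest per-gigabyte rates, a few terabytes of monthly egress can rival the storage charge itself, which is why tracking egress is as important as tracking capacity.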
This is where having access to a graphical, panoramic view of the entire storage infrastructure can make a major difference. This will enable businesses to get a detailed look at their usage by filtering for specific users, buckets or storage nodes, allowing them to quickly and proactively address any inefficiencies or anomalies before they drive costs upwards.
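The kind of filtered usage view described above can be thought of as grouping raw usage records by a chosen dimension. The sketch below shows the idea with invented record fields; it is a generic aggregation, not HyperIQ's data model or API.

```python
from collections import defaultdict

def usage_by(records, dimension, **filters):
    """Aggregate bytes stored per value of `dimension`
    (e.g. 'user', 'bucket', 'node'), optionally filtered
    by other fields. Field names here are hypothetical."""
    totals = defaultdict(int)
    for rec in records:
        if all(rec.get(k) == v for k, v in filters.items()):
            totals[rec[dimension]] += rec["bytes"]
    return dict(totals)

# Made-up usage records for illustration
records = [
    {"user": "alice", "bucket": "logs",  "node": "n1", "bytes": 500},
    {"user": "alice", "bucket": "media", "node": "n2", "bytes": 1200},
    {"user": "bob",   "bucket": "logs",  "node": "n1", "bytes": 300},
]
usage_by(records, "bucket")           # totals per bucket
usage_by(records, "user", node="n1")  # per-user totals on one node
```

Slicing the same records by user, bucket or node is what lets an administrator pinpoint where an anomaly or cost driver actually lives.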
Another key factor is security and compliance, with the enforcement of security policies now being critical to maintaining business reputation and success. The need to ensure compliance and avoid unauthorised access to valuable or sensitive data is particularly challenging in a distributed environment.
But with a full, unified view of their hybrid cloud infrastructure, businesses will be able to make the necessary adjustments and enforce their security and compliance policies when needed. For example, they could set up automated alerts to monitor LAN and/or WAN connections, data egress rates or user storage and access quotas. This would enable them to respond in a timely manner to fix and prevent any incidents that could compromise company policies.
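At its simplest, the alerting described above is a set of thresholds evaluated against current metrics. The following sketch uses invented metric names and limits to show the shape of such a check; it does not reflect any product's alerting interface.

```python
def check_thresholds(metrics, thresholds):
    """Return alert messages for any metric exceeding its limit.
    Metric names and limits are illustrative assumptions."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds limit {limit}")
    return alerts

# Hypothetical current readings and policy limits
metrics = {"egress_gb_per_day": 850, "user_quota_pct": 72}
thresholds = {"egress_gb_per_day": 500, "user_quota_pct": 90}
check_thresholds(metrics, thresholds)  # flags only the egress breach
```

Real systems layer notification routing and de-duplication on top, but the policy-as-threshold model is the common starting point.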
How does the recently launched Cloudian HyperIQ solution help businesses solve some of the storage infrastructure challenges they’re coming up against?
Cloudian HyperIQ is an observability and analytics solution that empowers businesses to go beyond basic monitoring and reporting. Through a single graphical interface, it provides continuous monitoring and real time insights into their storage infrastructures across on-premises and hybrid cloud environments, regardless of their geographical location. This makes it easier for businesses to track system operations and make informed decisions.
HyperIQ features three core capabilities: intelligent monitoring to reduce administrative overheads and lower operational costs; predictive maintenance to minimise outages and disruptions; and advanced user analytics to detect usage anomalies and enforce compliance policies. These are supplemented by monthly health checks that enable enhanced security and resource optimisation.
It also provides planning capabilities. These focus on predicting where the next I/O bottleneck will appear, tracking capacity utilisation to understand when to start planning upgrades, and analysing whether the network is over-utilised and impacting performance.
This all helps businesses overcome the visibility and complexity issues they are currently facing, allowing them to better tackle the challenges of increasingly intricate storage infrastructures. From reducing mean time to repair, to increasing availability and accelerating new deployments, businesses will be able to adapt more easily to workload demands and maintain peak performance.