Delivering security and resilience is key to the success of containers

Gijsbert Janssen van Doorn, Director Technical Marketing at Zerto, discusses some of the challenges of using containers, as well as the benefits that are driving their success.

Virtual Machines (VMs) have served IT teams and their organisations well for many years, providing a highly effective architecture that separates the operating system and applications from the underlying hardware. Among their many benefits, they optimise resource usage and provide high availability to any and all applications.

However, containers are now becoming an increasingly popular alternative to VMs. According to Gartner, by 2023, more than 70% of global organisations will be running more than two containerised applications in production, up from less than 20% in 2019.

Key to their growing adoption is that they give application developers the ability to package small, focused units of code into independent, portable modules that include only what is needed to run the application. This makes the development process extremely agile and fits neatly into the wider infrastructure trends shaping cloud-based enterprise application and IT strategy.

Digital Transformation has also been a big driver for companies to start using container technologies such as Kubernetes as part of a wider cloud-native strategy. The impact of the COVID-19 pandemic has further accelerated this trend, as more organisations see the benefits of changing and digitising their business models.

Container challenges

That’s not to say that the adoption of containers is without challenges. Finding the right expertise to design, manage and deploy Kubernetes can be difficult because it requires new disciplines and a lot of training to build the skills needed within enterprise IT teams.

Defining the right storage for Kubernetes can also create something of a headache. Kubernetes initially favoured stateless workloads, such as network services and processes that consume only CPU, network and memory and need no storage. Today's requirements, however, are rather more 'stateful': most applications deliver value by manipulating data that must be stored somewhere.
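
To illustrate the point, the minimal sketch below requests persistent storage for a stateful workload using the official Kubernetes Python client. The claim name, namespace, size and 'standard' storage class are illustrative placeholders rather than recommendations.

```python
# Minimal sketch: requesting persistent storage for a stateful workload
# via the official Kubernetes Python client. Names and sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()  # load credentials from the local kubeconfig

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",  # assumed storage class name
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Once bound, the claim can be mounted into a pod, giving the container
# durable storage that outlives any individual pod instance.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```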

Differences in architecture between VMs and containers also mean that the supporting ecosystem around the application and the containers needs to change. Monitoring, logging, security tooling and, especially, data protection all need to be reconsidered in order to support the container ecosystem effectively.

Most important of all, however, is the need to remain focused on applying the same levels of security and resilience previously delivered to VM-based applications. For example, top-tier, customer-facing applications are generally protected with multiple data protection and disaster recovery solutions, while lower-tier applications might be protected only with higher-RPO backups, or not protected at all. Although Kubernetes offers some limited capabilities, enterprises may find it falls short of delivering true end-to-end protection and resilience.

Containers also differ from mature virtual environments in that there are fewer ways to ensure new workloads are configured correctly for data protection. Even next-gen applications built with internal availability and resilience in mind often still lack an easy and simple way to recover from risks such as human error or malicious attack. Yet staying agile and recovering quickly without interruption are key to the successful and safe implementation of container-based application environments.

From an operational perspective, containers also require different tools, processes, knowledge and experience, and ultimately a different approach. Ideally, resilience and data protection should integrate with existing Kubernetes workflows to minimise any impact on a developer's day-to-day workload. At the same time, a lot of work needs to be done to make sure containers can be effective without abandoning the ecosystem of trusted and mature operational tooling and processes. Re-using existing solutions is a better alternative than investing time and money in training teams on a new set of tools.

Selecting the right data protection solution makes a substantial difference to an organisation's agility. Opting for non-native solutions from legacy backup and disaster recovery providers will only add time, resources and barriers to application development and delivery. A native solution, by contrast, can help drive a 'data protection as code' strategy, in which data protection and disaster recovery operations are integrated into the application development life cycle from the start and applications are born protected. As a result, organisations taking this approach can ensure the resilience of their applications without sacrificing the agility, speed and scale of containerised applications.
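
As a simplified illustration of what 'data protection as code' can look like in practice, the sketch below ships a protection policy with the application's own deployment manifest, so the workload is tagged for backup and disaster recovery the moment it is created. The annotation keys and the controller that would act on them are hypothetical and not tied to any particular product.

```python
# Illustrative only: declaring protection policy alongside the application
# manifest so the workload is "born protected". The annotation keys
# ("example.dr/policy", "example.dr/rpo") and the backup controller that
# would enforce them are hypothetical.
from kubernetes import client, config

config.load_kube_config()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "orders-service",
        "annotations": {
            "example.dr/policy": "tier-1",  # hypothetical protection tier
            "example.dr/rpo": "5m",         # hypothetical recovery point objective
        },
    },
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "orders"}},
        "template": {
            "metadata": {"labels": {"app": "orders"}},
            "spec": {
                "containers": [{"name": "orders", "image": "orders:1.0"}],
            },
        },
    },
}

# Because the policy is version-controlled with the manifest, protection is
# part of the development life cycle rather than an operational afterthought.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```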

Looking to the future

Infrastructure and operations teams should look for a platform that delivers the necessary availability and resilience without sacrificing the development speed of enterprise applications and services. This means being able to protect, recover and move their containers without adding more steps, tools and policies to the DevOps workload.

Containers open a world of agile development possibilities and are here to stay. But delivering data protection, disaster recovery and mobility for Kubernetes applications, whether on-premises or in the cloud, is a key part of the jigsaw. Those organisations that can easily protect, recover and move any Kubernetes application and its persistent data are well placed to deliver on the advantages offered by containers as part of their wider technology strategy.
