Four Container Management Considerations

Jannie Delucca


“A deployment method as much as a development method”

Container adoption has grown considerably over the past decade. Gartner recently predicted that by 2024 containers will be the default option for three quarters of new custom enterprise applications. This demand means that the total value of the container market is set to double by 2024.

The surge in interest, writes Martin Percival of Red Hat, is evidence of the benefits that container technology provides the enterprise.

Containers allow you to package and isolate applications with their entire runtime environment, which includes all of the necessary “back-end” software needed to run them, such as configuration files, dependencies, and libraries. This makes it easy to move the contained application between environments – from development to testing and production – while retaining full functionality. And since containers share an operating system kernel with other containers, this technology draws fewer resources than an equivalent virtual machine (VM) setup.
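In practice, this packaging is usually declared in an image build file that lists the runtime, dependencies, and configuration the application travels with. A minimal sketch (the base image, file names, and port below are illustrative assumptions, not from the article):

```dockerfile
# Illustrative sketch: package a Python web app with its full runtime environment.
FROM python:3.12-slim

WORKDIR /app

# Dependencies and libraries are baked into the image...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...along with the application code and its configuration files.
COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

Because everything the application needs is declared here, the resulting image behaves the same in development, testing, and production.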

The benefits of containers include faster application delivery, better life cycle management, smoother update cycles, and better integration between developers and operations teams. Additionally, they enable organisations to take full advantage of their burgeoning cloud computing infrastructure, especially the increasingly popular but complex environment of the hybrid cloud.

In practice, enterprises that choose to adopt containers should consider the following when converting to and managing a container environment.

1) Properly integrating containers into the data centre

First, ask yourself about the context you’re running containers in.

This is a simple question, but it points to the fact that while containers are extremely useful, they are not a panacea, but rather one part of your broader toolkit. In enterprise software development, you cannot regard any part of your software stack as being independent of everything else you’re running.

To properly make use of containers, you need to be able to integrate them with the rest of your stack and your IT infrastructure. You need to develop, implement, and maintain a plan to have containers fit within your security, authentication and networking services. This plan will be essential when it comes to scaling up your container infrastructure, which will see containers interact with many more parts of your IT stack.
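In a Kubernetes environment, for example, part of that plan can be expressed as policy objects that fit containers into your existing networking and security rules. A sketch of a network policy that only admits traffic to a back-end service from its own front end (all names, namespaces, and labels here are hypothetical):

```yaml
# Illustrative sketch: only pods labelled app=frontend may reach the backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Policies like this become more important as you scale up, when containers begin to interact with many more parts of the IT stack.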

2) Managing virtual machines

As mentioned above, one great advantage offered by containers is that they need fewer resources relative to an equivalent VM configuration. While VMs still have an important role in the enterprise environment when it comes to hosting the operating system for containers to run on, you can find you’re overusing VMs and creating a sprawling, complex, and resource-hungry environment that’s incredibly hard to manage and more prone to error. Organisations need to find a way to properly organise, plan and manage the VMs they’re using, while containerising some of their existing workloads.

3) Getting orchestration right

Container management requires a lot of planning. Organisations need to ensure multiple containers can work together at once, that those containers can be combined with non-containerised applications, and that they can communicate with resources across an organisation’s IT environment. Figuring out how your containers will interact with the rest of your environment is essential, especially when deploying containers in the context of a mix of different technologies and computing platforms.

Much of the legwork for this is done by an orchestration engine, which has three main tasks. Firstly, the orchestration engine brings multiple services and instances of these services together to form a single application. Secondly, the orchestration engine chooses where to place and launch these applications, attaching network, computing, storage and security profiles. Thirdly, the engine manages the state of the container cluster and the application nodes, monitoring how they are operating and talking to one another.
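In Kubernetes, for instance, those tasks are largely expressed declaratively: a Deployment states the desired number of service instances and their compute profile, and the engine places them, attaches networking, and keeps the cluster’s observed state in line with what was declared. A minimal sketch (the image name, labels, and resource figures are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: three instances of this service
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4   # hypothetical image
          ports:
            - containerPort: 8080
          resources:          # compute profile used in placement decisions
            requests:
              cpu: 250m
              memory: 256Mi
```

If a node fails or a container crashes, the engine notices the divergence from the declared state and reschedules instances elsewhere, which is the third task – state management – in action.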

The last several years have seen many earlier technical and logistical challenges of container orchestration put to bed, owing to the rise of Kubernetes, an open-source platform that automates many of the manual processes involved in container orchestration. Kubernetes has now become the de facto orchestration engine of choice, attracting support from across the enterprise community. For any organisation looking to fully implement a container-based infrastructure, it’s worth appraising how the rest of your tech stack can accommodate it.

4) Working with legacy systems

Legacy hardware and software are an enduring staple of many enterprise environments. Especially when you have a complex stack and organisation behind you, the question is often one of “how can we make full use of our legacy systems?”, rather than “how can we replace our legacy systems?”

While adopting containers can be disruptive to an enterprise, it doesn’t have to be. Containerisation should be considered as a deployment method just as much as it is a development method. Breaking down existing workloads into containers can improve the performance of your legacy systems, while also allowing your organisation to develop the newer cloud-native applications that containers are so useful for.
