Containers can help organizations achieve their business objectives quickly and efficiently. These software instances package up code and all its dependencies so that applications run quickly and reliably across multiple computing environments. This portability makes containers a great fit for microservices and other modern methodologies that support DevOps development processes.
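To make that packaging concrete, a container image is commonly defined in a Dockerfile that bundles the application with its runtime and libraries. The file below is a minimal sketch, assuming a hypothetical Python app with example files `app.py` and `requirements.txt`:

```dockerfile
# Minimal sketch: package a hypothetical app and its dependencies.
# app.py and requirements.txt are assumed example files.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer can be cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code
COPY app.py .

# The resulting image runs the same way on a laptop, a CI server or a cloud cluster
CMD ["python", "app.py"]
```

Because the image carries everything the application needs, building and running it produces the same behavior in any environment that can run containers, which is the portability the article describes.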
When the pandemic hit, organizations with agile development frameworks such as DevOps had an advantage over competitors that didn’t. The speed and agility that DevOps provides in creating new applications allowed them to adapt quickly to changing conditions. Those that weren’t able to employ DevOps had to develop new capabilities using less agile development processes, which generally take much longer. The success that organizations had with DevOps and containers helped solidify both as essential elements of an agile development mindset.
Using DevOps processes, a monolithic application that used to take three years to build can be broken down into a number of small applications, each of which can be developed much more quickly and easily. Features are developed and bugs are addressed rapidly, and new iterations of software are produced continuously.
As organizations become familiar with containers, they are moving them into more complex projects, such as distributed databases, data lakes and data flow systems that stream in signaling data from cameras and other devices. These sophisticated use cases make it even more important for organizations to think about how much technical debt they are willing to take on as they develop a containerized architecture.
Custom Containers, Off the Shelf or a Hybrid?
Organizations generally fall into two camps when it comes to containers. One group wants to derive the value from containers but doesn’t have a team of people to maintain a custom-built Kubernetes cluster. The other group has the capability to build its own cluster and staff to dedicate to ongoing maintenance.
For most organizations, the ROI calculation would place them in the first group, but a significant number attempt to operate somewhere in the middle. They typically opt for hybrid frameworks that consist of a fundamental platform with some customization.
For organizations considering the adoption of DevOps and containers, all of this leads to a critical question: Do you have the processes, the people and the supporting tools to efficiently maintain container frameworks and resources? Put another way, what is your tolerance for absorbing the technical debt that deep customization can incur?
Minimize Risk with a Strong Framework for Development
If the answer to that question is “zero tolerance,” the public cloud may be your best move. In the cloud, your organization can buy into containers and move forward. If you can’t move to the public cloud for some reason (compliance issues are a common example here), you have other options. Two important ones are Red Hat’s OpenShift and VMware’s Tanzu, container orchestration platforms that help organizations manage complex container environments.
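Both platforms build on Kubernetes, where workloads are described declaratively and the orchestrator keeps the cluster matching that description. As a minimal sketch, with a placeholder application and image name, a Deployment manifest looks like this:

```yaml
# Minimal sketch of a Kubernetes Deployment; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

The platform continuously reconciles the cluster toward this declared state (restarting failed containers, rescheduling them across nodes), which is the kind of ongoing management that OpenShift and Tanzu handle at scale.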
The factor that makes containers challenging — and projects more likely to fail — is the risk of assuming more technical debt than an organization is comfortable taking on. If you are able to manage your environment effectively and have codified your development process, containers are a powerful tool. However, as complex container environments grow, managing them becomes a greater challenge. If you don’t establish a framework (people, process, product) correctly, you may incur deep technical debt while receiving very little return.
Containers can be extremely powerful when you deploy them with processes and people capable of leveraging them for their intended purpose: rapid, iterative development and deployment of new features and capabilities.