What is containerization? Essential DevOps Tools for container management
Devopsity on 15-07-2022
Containerization means packaging software into units called containers: self-contained packages with the essential elements needed to run an application. Inside a container you can find the application's source code, configuration files, testing environments, and system libraries. And why use containers?
They make DevOps processes simpler and faster, for example in software development automation or automated testing. Containers can be easily reused and, since they come with batteries included, they will work in almost every DevOps environment. Because containers are far more lightweight than full-blown operating systems on virtual machines, they typically use far fewer resources, which allows greater density and better node resource utilization.
Efficient configuration management is especially important when the number of containers grows so large that no human could keep up with it. Although implementing configuration management is complex, can cause errors, and generally requires well-educated DevOps and development teams, it pays off: it provides not only flexibility and operational efficiency across the software development life cycle, but also high code quality and security.
Before we start learning about containers in depth, let's define Kubernetes, also known as K8s. Kubernetes is an open-source system for automating the deployment, scaling, and management of containers. The first version appeared in 2014 and was designed by three developers at Google.
It's very flexible, thanks to its API, and can work with various tools. Kubernetes is built around a handful of basic objects, including Pods, Services, Volumes, and Namespaces. Besides that, Kubernetes is often used for hosting microservices.
Kubernetes is the de facto standard technology, aiming to one day be the platform for everything Cloud Native, with the same interface on every possible private and public cloud provider. It may be overkill for a web app with a few components, but for anything more complex you will not go wrong considering K8s as the platform of choice. Kubernetes hosts all flavors of compute workload, from web and data, through AI and IoT, to massive multi-tenant SaaS platforms with widely different network topologies and architecture patterns. As an example K8s workload, imagine a microservice-structured SaaS e-learning platform running on multiple regionalized Kubernetes clusters, with complex dependencies and multiple language versions. Such unified hosting environments often span regions and cloud service providers, with multiple development stages and ad hoc preview environments. It's a typical picture in a late-stage startup or scale-up company, where the pace of innovation and the ability to run quick-and-dirty experiments are highly valued.
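To make the object model concrete, here is a minimal sketch of a Kubernetes Deployment manifest built as a plain Python dict with only the standard library. The name `web`, the image tag, and the port are illustrative, not part of any real cluster:

```python
import json

def make_deployment(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment manifest (apps/v1) as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # the selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }

manifest = make_deployment("web", "registry.example.com/web:1.0", replicas=3)
print(json.dumps(manifest, indent=2))
```

Serialized to JSON or YAML, an object of this shape is what you would feed to `kubectl apply -f -`.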
The second containerization platform is OpenShift, designed by Red Hat. It is built on top of Kubernetes (and historically Docker) as a cloud-based container platform, and is also considered a platform-as-a-service (PaaS). OpenShift supports and streamlines the deployment, continuous testing, and delivery of cloud applications. The main advantages of the system are centralized policy management, consistent security, built-in monitoring, and integration with many other DevOps tools, first of all Kubernetes.
As OpenShift is essentially a commercially hardened distribution of Kubernetes, with enhanced security standards and a batteries-included approach to side tooling, monitoring, and enterprise-grade policy management, it is a favorable choice for larger enterprises handling large and diversified workloads. OpenShift addresses essentially the same scenarios as Kubernetes, but usually in more mature corporate and financial environments where risk and change management are a high priority. Siloed corporate teams usually work at a slower pace and have a lower appetite for risky experimentation, so they prefer mature DevOps tools with a well-known name behind them, like Red Hat.
Apart from complex containerization software, DevOps teams can use simpler tools, either instead of or as a supplement to Kubernetes or Docker. One of them is Nomad, developed by HashiCorp. Nomad is a flexible scheduler and orchestrator for managing various applications on the same infrastructure, shipped as one versatile binary that can be deployed in many environments and provides the same user experience everywhere. It can be used for both container and non-container software projects and supports GPU workloads for machine learning and artificial intelligence.
Nomad, like most HashiCorp creations, aims to be a jack of all trades in the world of hosting workloads. If you have a decent-sized heterogeneous environment with a mix of bare metal, virtual machines, and containers to run, you will benefit from Nomad's ability to manage the lot at massive scale, giving you the same IaC language to use and a single pane of glass to watch it through.
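As a rough sketch of what a Nomad job looks like, the dict below follows the general shape of Nomad's JSON job specification (jobs are usually written in HCL, but the HTTP API accepts JSON). The job ID, datacenter name, image, and count are all illustrative, and the field set is far from exhaustive:

```python
import json

def make_nomad_job(job_id: str, image: str, count: int = 1) -> dict:
    """Build a minimal Nomad service job in the JSON job-spec shape."""
    return {
        "Job": {
            "ID": job_id,
            "Name": job_id,
            "Type": "service",
            "Datacenters": ["dc1"],          # illustrative datacenter name
            "TaskGroups": [{
                "Name": job_id,
                "Count": count,              # number of instances to schedule
                "Tasks": [{
                    "Name": job_id,
                    "Driver": "docker",      # Nomad also has non-container drivers
                    "Config": {"image": image},
                }],
            }],
        }
    }

job = make_nomad_job("web", "nginx:1.25", count=3)
print(json.dumps(job, indent=2))
```

The same scheduler that places this Docker task could place a raw executable or a VM on another node, which is exactly the heterogeneity argument above.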
Another DevOps tool produced by Red Hat is Podman, an open-source, Linux-based project which runs containers under the Open Container Initiative (OCI), an open governance structure for creating open industry standards in containerization. Podman is very often compared to Docker and Kubernetes and can, in fact, work with both of them. The difference is that Podman focuses on creating pods that group containers under the same name as single units. Developers can thus share resources between various containers of the same application inside a pod.
Typical use cases for Podman are small local microservice deployments or containerized pet projects where you want a solid standard and Cloud Native configuration management with security built in, but do not yet need a massively scalable solution.
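The pod workflow boils down to two real Podman commands: `podman pod create` and `podman run --pod`. The sketch below builds those command lines in Python; the pod name and images are illustrative, and the `dry_run` flag keeps the sketch runnable on machines without Podman installed:

```python
import subprocess

def pod_commands(pod: str, images: list[str], publish: str = "8080:80") -> list[list[str]]:
    """Build the Podman CLI calls that create a pod and attach containers."""
    # ports are published on the pod, not on individual containers
    cmds = [["podman", "pod", "create", "--name", pod, "-p", publish]]
    for i, image in enumerate(images):
        # each container joins the pod and shares its network namespace
        cmds.append(["podman", "run", "-d", "--pod", pod,
                     "--name", f"{pod}-ctr{i}", image])
    return cmds

def run_pod(pod: str, images: list[str], dry_run: bool = True) -> list[list[str]]:
    cmds = pod_commands(pod, images)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds

cmds = run_pod("webpod", ["nginx:alpine", "redis:alpine"])
```

With `dry_run=False` this would actually create a pod whose two containers reach each other over localhost, which is the resource sharing the paragraph above describes.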
All the top DevOps tools mentioned above are suitable for managing huge software development projects with a lot of containers. However, there are various other solutions for smaller applications. AWS ECS, Azure Container Instances, and DigitalOcean Apps are just examples of cloud provider-specific container management solutions. They usually fit well with small applications consisting of one or more containers with simple metrics-driven autoscaling and high availability concepts based on multi-region networking and traffic load-balancing.
As popular DevOps tools like AWS ECS or Azure Container Instances are designed around the best practices and infrastructure concepts of their specific vendors, from high availability to data security, they are a perfect fit for small and medium-sized containerized web applications without overly complex interdependencies. A good example would be a web application consisting of a PHP backend and a separate React frontend, with dependencies like a managed PostgreSQL database and a Redis server.
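For the PHP-plus-React example, the dict below sketches the general shape of an ECS task definition with two containers. The family name, images, and CPU/memory values are illustrative, and a real task definition would carry more fields:

```python
import json

def make_task_definition(family: str) -> dict:
    """Build a minimal two-container ECS task definition as a dict."""
    return {
        "family": family,
        "containerDefinitions": [
            {
                "name": "backend",
                "image": "php:8-apache",     # the PHP backend from the example
                "cpu": 256,                  # CPU units; illustrative sizing
                "memory": 512,               # MiB
                "essential": True,
                "portMappings": [{"containerPort": 80}],
            },
            {
                "name": "frontend",
                "image": "node:20-alpine",   # serves the React build; illustrative
                "cpu": 256,
                "memory": 512,
                "essential": True,
                "portMappings": [{"containerPort": 3000}],
            },
        ],
    }

td = make_task_definition("webapp")
print(json.dumps(td, indent=2))
```

A JSON document of this shape is what you would register with the `aws ecs register-task-definition` CLI command before wiring it to a service with autoscaling and a load balancer.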
You can also try serverless tools. Simple single-container applications do not require complex solutions anymore. If you don't need to handle huge traffic, or perhaps want to run a container on a schedule, then simple AWS Lambda or Google Cloud Functions could go a long way. These are usually employed as components in more complex data flows, event-driven environments, and back-end work, but could just as well serve core API purposes.
A typical candidate is a simple Python function or API service consumed directly by a client or other applications, with a small or moderate CPU and memory footprint and no long-running execution.
The key here is not to use serverless containerized workloads for applications with significant traffic, say more than a few million requests per month.
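The kind of small API function described above can be sketched as an AWS Lambda handler behind an API Gateway proxy integration. The event and response shapes follow that integration's contract; the greeting logic itself is just a placeholder:

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration."""
    # query string parameters may be absent, so fall back to an empty dict
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# local smoke test; in AWS, API Gateway supplies the event object
resp = handler({"queryStringParameters": {"name": "devops"}}, None)
```

At low or bursty traffic you pay only per invocation, which is the economics that makes this model attractive below the few-million-requests threshold.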