Introduction to Containers-as-a-Service
Recently, there has been a lot of buzz about containers in the cloud world. If you work in IT, you've probably witnessed the excitement around this technology. Many people feel it will entirely change how operating systems interact with both hardware and software in the cloud; others feel it's exactly what the cloud has needed all along to unlock its full potential.
So, what exactly is Containers-as-a-Service? How is it being leveraged in the cloud? Do its benefits warrant the buzz and excitement?
To comprehend the whole concept, you first need to trace its roots.
Where it All Started
Over the years, the IT industry has enjoyed many pivotal breakthroughs, all aimed at improving performance and service delivery. A significant number of these breakthroughs in the last ten years have centered on virtualization, with each new technology geared toward reducing time to value and boosting overall resource utilization. The public cloud, along with API-based administration and multi-tenancy, fueled progress toward these core goals, and with time users could effectively carve single cores out of physical machines for their workloads. As much as this was widely perceived as 'efficient', it created one significant problem: entire servers were virtualized even to execute simple processes. Could the stack be broken down further to grant users exactly the resources they needed without virtualizing whole machines?
Fortunately, motivated to build cheaper, faster software that could execute tasks at a much smaller scale, Google took up the challenge. They rallied their teams to abstract further and enable finer-grained control. To implement this, they built cgroups, added them to the Linux kernel, and used them to develop smaller, separate execution environments called containers. These were essentially simplified, isolated slices of the operating system that share the host kernel rather than fully virtualized machines, and Google used them to power all of its applications.
Within a couple of years, the technology was picked up by Docker, which additionally developed an interoperable format for containerized applications. Google is therefore the brains behind CaaS, while Docker evolved it into a far more adaptable format.
The Linkage With PaaS and IaaS
CaaS has introduced a whole new perspective by forming an intermediate layer between PaaS and IaaS, consequently changing the historical order of, and interaction between, the two.
Infrastructure as a Service has primarily aimed to grant users access to flexible raw assets. Platform as a Service, on the other hand, provides locked-down experiences optimized for specific use cases. Together with the operating system, they form the three logical server layers. While the former represents hardware assets, both physical and virtual, the latter delivers the application runtime. In simple terms, IaaS users get NICs, hard drives, CPUs and RAM, while PaaS centers on managed environments for Python, Ruby, Java, etc.
So, what do you do when you need a generic framework to efficiently handle processes at different scales? That's where CaaS comes in. Where PaaS delivers the process runtime and IaaS provides the underlying hardware, CaaS merges the two to grant you a flexible platform.
The Prime Benefits
CaaS has generated buzz because of its significant benefits, especially its increased efficiency compared to hypervisors in system-resource terms. It achieves this efficiency by dispensing with the duplicated hardware and guest operating system a virtual machine carries, leaving each instance with just the small portion of resources it actually needs to comfortably run its application. The rest of the hardware is freed for other simultaneous processes. Consequently, users utilize their servers more efficiently, running four to six times the number of applications compared to virtual machines.
Secondly, CaaS has largely simplified the deployment of apps by packaging them as single-command, registry-stored, singularly addressable deployable components. What makes this even better is that deployment can be executed remotely from anywhere.
The abstraction of the operating system through CaaS has also considerably shortened startup. Instead of waiting a minute or so for an entire machine to boot, your resources are available in roughly a twentieth of a second. This fundamentally improves process efficiency and speed.
CaaS has also had a real impact on open source software by improving composability. Previously, developers had to dedicate a lot of time and resources to installing and configuring nginx, Node.js, RabbitMQ, GlusterFS, Hadoop, MongoDB, memcached, MySQL and the like together in single boxes to provide platforms for their applications. Containers house these as compact, scripted applications, eliminating a considerable amount of that boilerplate, specialized, error-prone work and lowering the risk that comes with it.
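As a rough illustration of that composability, such a stack can be declared as a set of prebuilt containers, for example with Docker Compose; the service names, images and tags below are illustrative assumptions, not a canonical setup:

```yaml
# docker-compose.yml -- each service runs in its own container,
# pulled as a ready-made image instead of being hand-installed on one box.
# Names and version tags are examples only.
services:
  web:
    image: nginx:alpine        # reverse proxy in front of the app
    ports:
      - "8080:80"
    depends_on:
      - app
  app:
    image: node:18-alpine      # Node.js runtime for the application code
    working_dir: /app
    volumes:
      - ./app:/app
    command: node server.js
  db:
    image: mongo:6             # MongoDB, no manual install or configuration
  cache:
    image: memcached:alpine    # memcached as a drop-in service
```

A single `docker compose up` then brings the whole stack online, which is exactly the boilerplate reduction described above.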
Another core implication is cost savings during testing. On a standard virtual machine, testing is usually billed for a minimum of ten minutes to an hour of compute time. That translates to fairly low cost for a simple, single test. The problem comes when you regularly run hundreds or even thousands of tests, since costs shoot up severely. With containers, because you can run thousands of tests simultaneously on the same server, the cost of multiple tests stays close to that of a single one.
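The arithmetic behind that claim can be sketched in a few lines; the hourly rates and the ten-minute billing minimum here are illustrative assumptions, not any real provider's pricing:

```python
# Illustrative cost comparison: per-test VM billing vs. containers
# sharing one server. All rates below are made-up assumptions.

VM_RATE_PER_HOUR = 0.50       # hypothetical VM price per hour
VM_MIN_BILLED_MINUTES = 10    # assumed minimum billed per VM test run
SERVER_RATE_PER_HOUR = 0.50   # hypothetical price of one container host

def vm_cost(num_tests: int) -> float:
    """Each test gets its own VM and is billed at least the minimum."""
    return num_tests * (VM_MIN_BILLED_MINUTES / 60) * VM_RATE_PER_HOUR

def container_cost(num_tests: int, minutes_used: int = 10) -> float:
    """All tests run concurrently as containers on one shared server,
    so the bill covers only that server's time, whatever the test count."""
    return (minutes_used / 60) * SERVER_RATE_PER_HOUR

print(f"1 test on VMs:          ${vm_cost(1):.2f}")
print(f"1000 tests on VMs:      ${vm_cost(1000):.2f}")
print(f"1000 tests, containers: ${container_cost(1000):.2f}")
```

Under these assumptions, a thousand VM-billed tests cost a thousand times more than one test, while the container bill is unchanged, since the tests share one server's ten minutes.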
Finally, CaaS has powered faster, more efficient development by letting users run several containers on one computer. Although it's possible to maintain several virtual machines on a single machine, their number is always just a fraction of the number of containers the same machine can handle.
Author: Davis Porter
Image courtesy: Stuart Miles, freedigitalphotos.net