


April 29, 2020 By Ayush Singh, Global Manager
Linux Container Adoption in 2020 With A (Historical) Perspective

Brian Kernighan famously said “Don’t comment bad code—rewrite it.”

That is what Linda did for a good eight months. As a senior programmer, she did a ‘Brian Kernighan’ on all the code for a web app she had been working on, one that predicts movements in stock prices.

She had been leading a team of five DevOps engineers building an AI system for the web app. Using TensorFlow, development was in full swing, and with the libraries and software already defined, the team was bullish on finishing early and shifting the application from dev to deployment.

But before they even got that far, real issues began to erupt and the sky started falling down around her.

The handshake between the app scripts and the Python server was failing. Since only a limited testing environment had been set up before the deployment effort, the repair was going to be time-consuming, and it loomed large in Linda’s thoughts.

There were other problems that further aggravated the ones already discovered: the absence of written policies, the inability to fit into the existing network topology, and, on top of it all, storage requirements that were totally out of sync with the storage actually available. In short,

  • The CI/CD pipeline wasn’t flowing
  • The sysadmin notified the team that too much storage space was being used
  • The whole system started emitting errors, and the primary mission of code deployment was pushed to a date that was out of sight

The key takeaway from this kind of experience is that compatibility testing and validation should be a primary concern in any application that relies on many independent components.

Ideally in the world of appsec development, no stone should be left unturned – from system paths to versions. In Linda’s case, there was limited QA, insufficient organizational policy and controls, a minimal testing environment, no automation, poor documentation of code fixes and, of course, no plan B.

Such experiences feel like déjà vu to the many teams that fail to change their processes, for whom bringing an application live on the deployment server remains a daunting task. Linda’s five-member team can vouch for the veracity of this unpleasant fact.

Having said that, in 2020 we expect to see critical production workload deployments shift to containers, in an effort to optimize server capacity versus isolated standalone Linux environments. We’re supported in this prediction by the Cloud Native Computing Foundation’s recent survey report, which found that the use of containers in production rose an additional 15% from 2018 to 2019.

First, What is a Container?

What’s the definition of a container? A container is nothing but an isolated environment that shares the host operating system’s kernel rather than running its own full operating system.

What does a container do?

A container keeps individual environments separate from the host system and each other.

Doesn’t that sound like virtualization?

It does but there is a big catch. Virtualization, through Virtual machines or hypervisors, is based on emulation whereas containerization is based on shared operating systems.

Similar to partitions, Linux containers run isolated workloads. Each container has its own set of processes, filesystems and network stacks while sharing the host OS on the hardware, unlike VMs, which each carry their own copy of the OS.
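Because a container is just an ordinary process in its own namespaces and cgroups on the shared kernel, running inside one leaves traces that a process can inspect. As a minimal illustrative sketch (the marker file and the cgroup hints below are common runtime conventions, not guarantees), a Python process on Linux can guess whether it is containerized:

```python
from pathlib import Path

def likely_in_container() -> bool:
    """Heuristic check: containers are processes in namespaces/cgroups,
    not separate machines, so the runtime leaves visible traces."""
    # Docker conventionally drops a marker file into the container root.
    if Path("/.dockerenv").exists():
        return True
    # Under cgroups, PID 1's cgroup path often names the container runtime.
    try:
        cgroup = Path("/proc/1/cgroup").read_text()
    except OSError:
        return False
    return any(hint in cgroup for hint in ("docker", "kubepods", "lxc"))

print("inside a container:", likely_in_container())
```

On a bare host this prints `inside a container: False`; inside a typical Docker or Kubernetes workload it prints `True`.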

As  James Bottomley, former Parallels’ CTO,  described it, one can “leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application.”

Features (or benefits, rather) of Linux containers

  1. Less overhead – Containers consume fewer resources because they don’t depend on separate OS images.
  2. Easy on resources – With containers, you have the leeway to run many isolated instances on a single host OS, so overall you consume fewer system resources.
  3. Much smaller in size – You can generally expect a container to use somewhere around 10 MB (compared to VMs, where the OS alone can run into the GBs).
  4. Fast – VMs take several minutes to go live (booting the OS and triggering the apps to run), unlike containers, which start “just in time.” Moreover, because containers are just self-contained sandboxed environments, they don’t eat up resources.
  5. Portable – Containers come with portability and agility as a side benefit: deployed on one host environment, they are easily moved to another. Let’s go back to Linda. Suppose she built an application and hosts it on Ubuntu Linux 16.04. Because of some internal requirements, she now needs to move the app to an Enterprise Linux 7.4 server. With a different Linux distribution, she would normally have no choice but to create a new distribution package to allow the transfer. With containerization, Linda can easily shift the images from one distribution to the other, without any problems.

The Evolution of Containers – A Brief History of Container Technology

Rome was not built in a day, and the concept of isolated environments for production is, similarly, not an overnight story.

The Linux containers of 2020 were a long time coming. Today’s growing popularity of Linux containers, where processes are kept separate from the host system, has its roots in the 1970s and the origins of resource sharing and time division.

1979: Unix Chroot (change root) command

If we go back to an earlier time (when I was not yet born), we can see how Unix V7 paved the way in the late 1970s with the “chroot” system call. What did it do? It let each process have its root directory in a different place in the filesystem.
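As a rough sketch of what chroot gives you, here is Python’s wrapper around the chroot(2) system call on Linux. The jail directory path is hypothetical, and the call itself requires root privileges, so the demo is skipped otherwise:

```python
import os

def run_in_chroot(new_root: str, command: list[str]) -> None:
    """Fork a child whose idea of '/' is new_root, then exec a command."""
    pid = os.fork()
    if pid == 0:                        # child process
        os.chroot(new_root)             # relocate the root directory
        os.chdir("/")                   # land inside the new root
        os.execvp(command[0], command)  # never returns on success
    os.waitpid(pid, 0)                  # parent: wait for the child

# "/srv/jail" is a hypothetical directory prepared with its own /bin, /lib, etc.
if os.geteuid() == 0 and os.path.isdir("/srv/jail"):
    run_in_chroot("/srv/jail", ["/bin/ls", "/"])
else:
    print("chroot(2) requires root and a prepared jail; skipping the demo")
```

Inside the child, `/bin/ls /` lists only what was copied into the jail: processes can no longer reach files above `new_root`, which is exactly the isolation primitive later systems built on.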

2000: FreeBSD Jails – jail command introduced into the FreeBSD operating system.

And then a real change came in the form of FreeBSD Jails in 2000. Their unique advantage came from dividing a FreeBSD computer into smaller Jails (partitions of a computer) – basically small systems in themselves, each with its own IP address.

It was a significant improvement over chroot and had the advantage of process sandboxing.

2001: Linux VServer

Akin to FreeBSD Jails, Jacques Gélinas’ VServer project was another game changer. This was the time when containers as a technology found their home in Linux. Implemented as a patch to the Linux kernel, it allowed “running several general purpose Linux servers on a single box with a high degree of independence and security.”

2004: The year of Solaris Containers and the  “Local Zone”

The world saw the advent of Solaris containers in 2004. It came with ‘zones’ which were a type of containerization.

2005 to 2007 didn’t see much in the way of new disruptions, barring Google’s ‘process containers,’ which evolved into the control groups (cgroups) feature and found a home in Linux kernel 2.6.24. This led to the birth of what we know as LXC, i.e. Linux Containers.

Fast forward to the early 2010s, and a number of players (LXC, Warden, and LMCTFY) tried to leave their mark on the world of containers. Little did they know what the future was holding.

Year 2013 and the Docker breakthrough

What came as an open-source platform in March 2013, released by the company dotCloud (later renamed Docker, Inc.), proved to be the Renaissance in container technology.

Docker changed the container landscape through portable images and a user-friendly interface, making distribution of containers a breeze.

And as they say, the rest is history.

In the final analysis, Container Security is here to stay

It goes without saying that with the growing adoption rate of containers, we  could be seeing challenging times as far as security and management of containers is concerned.

But one thing is for sure – containers are pretty cool …and DevOps folks are beginning to drink the Kool-Aid.



K2’s Next Generation Application Workload Protection Platform meets today’s need for runtime security in an easy-to-use, easy-to-deploy solution.  K2’s unique deterministic security detects new attacks without relying on past attack knowledge, is lightweight, and adds under a millisecond of latency to the running application.  To aid in quick remediation of vulnerabilities, K2 also provides detailed attack telemetry, including the code module and line number of the code being attacked, while at the same time integrating with leading firewalls for real-time attacker blocking.

Change how you protect your applications.

Find out more about K2 today by requesting a demo, or get your free trial.





K2 Cyber Security delivers the Next Generation Application Security Platform to secure web applications and container workloads against sophisticated attacks in OWASP Top 10 and provides exploitable vulnerability detection during pre-production. K2’s Platform is deployed on production servers for runtime protection of applications and on pen-testing/pre-production/QA servers for interactive application security testing to identify the location of the vulnerable code. K2’s solution generates almost no false positives, eliminates breaches due to zero-day attacks, detects attacks missed by traditional security tools like Web Application Firewalls and host based EDR, finds missed exploitable vulnerabilities and dramatically reduces security cost. K2 Cyber Security is headquartered in the USA and provides cyber security solutions globally.


K2 Cyber Security, Inc.

2580 N. First Street, #130

San Jose, CA 95131