As I wrote earlier this year, there are trends in tech that are taking over the market. Some topics are only virtual discussions with no real-world impact on most IT organizations (nevertheless still interesting ones). But when I talk about taking over the market in this case, I mean real-world adoption with projects and revenue, not just slides about cool-sounding ideas that lie in a distant future.

Info Box From this post forward, I will publish some of my thoughts about technology and the dynamics of the overall tech market on sebastianbonk.com in English. Some articles and pages will be available in German and English, some only in one language. This step required a bunch of fundamental changes to the architecture of the still statically generated, Jekyll-based website. If you are interested in the technical details, please visit the Changelog page.

Kubernetes is among those topics. The developer community is talking heavily about it.1 Kubernetes gets a lot of attention from the media.2 Even the C-suite is picking up the word. From my perspective there are two main reasons worth considering and exploring. One is the rise of container technology (the obvious one). Containers brought portability to applications and made it possible to run them efficiently, including the tons of dependencies that come with modern software. And with the growing adoption of container technology emerged the need for a better management and orchestration solution – especially at scale.

The other reason is not as obvious. Hybrid cloud still hasn’t delivered what the marketing promised. Cloud Management Platforms (CMPs) never really took off. Most companies are using the cloud in a hybrid way.3 But in most companies, workloads are not automatically shifted between public clouds and private datacenters at all times to chase better prices or performance at another location. Instead, most companies leverage the power of multiple clouds by deciding for each workload/application which cloud provider is best suited for the job. Kubernetes has the power to finally make good on hybrid cloud’s promise of flexibility.

What I see additionally is that Kubernetes is also bridging the gap between the companies that only run applications “as they are” and those who develop software themselves for a living. Internal IT departments, and with them their CIOs, have long denied that there is such a thing as dependencies (and refused to build knowledge about this stuff). They mostly thought in terms of buying and managing applications like Salesforce, ServiceNow, O365, SAP, Microsoft Dynamics, etc. But the rise of self-developed applications showed them that the old tricks aren’t cutting it anymore.

And history tends to repeat itself: in the rise of Kubernetes I see great parallels to what happened with virtualization and VMware roughly 10 years ago.4 Back then, VMware paved the way for how we think about servers. Today nobody really considers installing software on bare metal (directly on a physical server) anymore. But VMware never invented server virtualization.

Virtualization dates back to the 1960s and the need to partition large mainframes.5 VMware just brought it across the chasm for the x86 architecture. (If you are a regular visitor to the blog, you see some recurring themes. Geoffrey Moore and his thinking about Crossing the Chasm is one of them.) One important thing to mention before you ask why there was any need to reinvent the wheel if virtualization and virtual machines (VMs) were already doing the trick: VMs have a big technical downside. You always need a full operating system, even when it is just a lightweight Linux distribution. Containers can be smaller because they need just one OS at the host level, which can be shared across multiple containers.

Back to the road to Kubernetes: if container orchestration is the new virtualization and VMware was the trailblazer – which companies are driving the adoption of k8s?

A brief history of containers

For a better understanding of what the fuss around Kubernetes is all about, let’s take a quick look back at the rise of container technology. We will encounter a bunch of popular companies that shaped the situation as it is today.

A Container in a nutshell

The idea of running multiple workloads or applications on the same machine (physical or virtual) is not new. Linux got the feature to logically separate tasks and programs on one host operating system with chroot6 – in the early 1980s. But what is a workload, or in this case, a container? Modern applications still need to be “built” from source code with all their necessary dependencies, like the other software libraries and required packages used in our software. For this we can use a container – one thing that contains everything we need. To run a container, we need a container image. The image works pretty much like a CD/DVD: the disc runs in most players that support the format. But discs and container images share the same shortcoming: it’s not possible to save data back to the container image, so there is a need for persistent storage outside of the container. In modern applications this is mostly a database. And these databases can be used in the cloud as-a-Service – without worrying about the underlying infrastructure. So, container images are ready-to-run applications that can be “put into play” on a wide range of players.

Docker

Staying with the theme of CDs and DVDs: container technology was developed by multiple companies, a little like Blu-ray and HD DVD. In the end, Docker7 mastered the technology best and shaped how we think about containers. But being good at the core container technology is only one part of the equation if you want to run containers at large scale.

Container Management Platforms

One container is helpful. Two to maybe ten elevate your application into the smooth world of cloud portability and improve your DevOps workflows with easy-to-create testing, staging, and production environments. More than 100 give you headaches. With 1,000 or even 10,000 it becomes a nightmare. It doesn’t really matter whether you have 100 different applications or a single heavily used application that needs to be spun up 100 times – you definitely need a container management solution. Now we are on the path to k8s. Let’s try to paint a timeline of how we got from the logical separation of workloads/apps to Kubernetes. (To be honest, I have read a lot of articles and heard the stories in live talks and YouTube videos. But it seems to me this could be a topic for historians to explore in much more detail.)
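To get a feel for why management becomes unavoidable at that scale, here is a deliberately tiny Python sketch of the reconciliation idea at the heart of such orchestrators: compare a desired state with the actual state and act on the difference. All names are illustrative assumptions, not actual Kubernetes code.

```python
# Minimal sketch of the "reconciliation loop" idea behind container
# orchestrators. All names here are hypothetical; this is not k8s code.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired replica counts with running ones and return
    the actions an orchestrator would have to take."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions.append(f"start {want - have} x {app}")
        elif have > want:
            actions.append(f"stop {have - want} x {app}")
    # Anything still running that is no longer desired gets stopped.
    for app, have in actual.items():
        if app not in desired:
            actions.append(f"stop {have} x {app}")
    return actions

# With ten containers you could do this by hand; with thousands of
# replicas a loop like this has to run continuously, on every node.
print(reconcile({"shop": 3, "api": 2}, {"shop": 1, "api": 2, "legacy": 4}))
```

The real systems discussed below do vastly more (scheduling, networking, health checks), but this compare-and-correct loop is the core pattern they all share.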

Google Borg

The infrastructure team at Google enabled their developers to deploy apps in a way that is agnostic to the underlying hardware. In simple terms: just throw the app onto the platform, and the solution built by the infrastructure team takes care of the rest. And when you go back in time a little, you find out that Google has been using container technology since 20068 and had mastered it by 2014 at the latest. That year the company was already spinning up 2 billion containers – per week.9 If you have such a cool solution, it comes with permission to go crazy with the naming. The system to run those containers was named Borg.10

Project Seven (of Nine)

In 2013 Google engineers came up with the idea to build a new system out of Borg. The permission for nerdy naming conventions was still in effect, and they went with the name “Seven of Nine”.11 When you plan to build a solution that has the potential to change infrastructure questions forever, you seem to have two options. First, make it proprietary software, invest a lot of your own money in R&D, license it, sell it, and hopefully make a lot of money in return. Or second, go in the opposite direction, make it open source, and let the community contribute to the project as well. Then you get iterations on the software more or less for free after providing the first version.

Kubernetes 1.0

After some convincing of its leadership team, Google decided on the latter option, released Kubernetes 1.0 in 2015, and made it the first project of the Cloud Native Computing Foundation in 2016.12 The Cloud Native Computing Foundation was set up by the Linux Foundation and a bunch of industry-leading companies (including Google) to promote the use of container technology.13 And why is Kubernetes now called k8s? The answer is astonishingly simple. It’s “k” + 8 other letters + “s”.
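The same abbreviation scheme (a so-called numeronym) gives us i18n for internationalization and a11y for accessibility. The whole rule fits in two lines of Python:

```python
def numeronym(word: str) -> str:
    """First letter + count of the letters in between + last letter."""
    return word[0] + str(len(word) - 2) + word[-1]

print(numeronym("kubernetes"))            # k8s
print(numeronym("internationalization"))  # i18n
```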

At roughly the same time something interesting took place: the battle between Docker Swarm and Kubernetes. As we learned earlier, Docker mastered the container technology – but not how to run containers in large infrastructures. The infrastructure solution from Docker is called Docker Swarm and is/was somewhat similar to Kubernetes. Brendan O'Leary from GitLab dubbed this battle the “dark ages”.14 To see what actually helped k8s win the container management battle, we need to take a look at the rest of the market.

Bosh and the Pivot(al) to k8s

The wish to give more power to developers (to improve the efficiency of the development process) and to run applications without questions about the underlying infrastructure was not exclusively Google’s – but it stayed close to Google’s heart. Top ex-Google engineers15 went to VMware16, at a time when Pivotal had not yet been spun out and brought back – this will get interesting later. And they thought it would be clever to name their project pretty similarly to what they had built at Google: the letters “Bo” stayed the same, and they just iterated the last two letters one step further (+1), from “rg” to “sh”.17

Now it can get a little confusing. One of the big use cases for Bosh was Cloud Foundry, a so-called PaaS offering that gives developers even better access:18 on Cloud Foundry, pre-defined apps like a web application could be spun up out of a service catalog. The container-like deployment was done by the two systems together, and the newly provisioned application was made directly available to the developer.

There are more parallels between the developments at Google/CNCF and at VMware. Cloud Foundry and Bosh were transferred from VMware to the newly set up Cloud Foundry Foundation. This non-profit organization is backed by the Linux Foundation, too, and all the solutions were made open source19 as well. The people who were critical to the work on Bosh and Cloud Foundry formed Pivotal20 – with the backing of EMC and VMware (which was controlled by EMC as well) – to focus on their work with open source software solutions.

But the folks at Pivotal realized that k8s was the hot thing and that they had to build their own version of Kubernetes: PKS (Pivotal Container Service). In the end, Pivotal was doing both: Bosh plus Cloud Foundry, and Kubernetes with PKS.

With the recent acquisition of Pivotal by VMware there is now (or again?) a vertical integration in place: hardware/servers from Dell/EMC, the still-necessary virtualization layer from VMware, and Pivotal Container Service (PKS). To understand how Dell suddenly got into this equation, we need to go back in time – again, and just a little bit. Everything started in 2016 when Dell bought EMC and used a clever merger strategy with a tracking stock play21 to get VMware for cheap and, at the same time, back to the public markets. In 2018, after two years of holding the stock and thereby avoiding the massive tax bill from VMware’s rise in valuation, Dell did another neat accounting trick and swapped the VMware tracking stock for Dell stock.22 And just a few months after that transaction was complete, they brought Pivotal back into the family. The final announcement of the acquisition went online on December 30 last year.23 And a lot of people in the industry just missed it. Good timing.

Now everything is back under one roof and called Tanzu.24 It’s a storyline that is easy to get lost in, but I guess at every step of the way it made some sense to the people involved at EMC/Dell/VMware/Pivotal.

If you asked Michael Dell – one of the largest shareholders of the now combined company – he knows perfectly well how the workloads of the future should be handled: use Dell EMC servers, leverage the virtualization power of VMware to provide clusters of servers, and then run PKS, now called Tanzu Kubernetes Grid, on top to deploy and run your applications. And if you additionally want to use a public cloud vendor, go and buy Tanzu Mission Control.

IBM + Red Hat

Now that we have a taste of what an on-premises k8s solution can look like, we can look for similarities in the market. Red Hat decided to side with the Google container stack and Kubernetes25 with the release of OpenShift 3.0.26 OpenShift is a derivative of Kubernetes, branded and improved by Red Hat. Red Hat is known in the industry for its strong support and service model. If your company wants to use the power of open source software but doesn’t have enough engineering power to run it on its own, and you want someone who has your back if something goes wrong, the solutions from Red Hat are a viable option.

Now we can go to the interesting part. IBM has their own servers (of course) and has built one of the early virtualization layers with IBM Power/PowerVM.

IBM bought Red Hat in 2019.27 And IBM Power can run Red Hat OpenShift28 as well. That looks pretty similar to what Dell/EMC/VMware and Pivotal are doing.

Cisco

Cisco recently announced at their Live! event in Barcelona a second bet on k8s:29 one with VMware (CCP) and one without VMware, based on the virtualization technology KVM (HyperFlex Application Platform, or HXAP). With the announcement of VMware buying Pivotal, it makes sense, right? All the playbooks look strangely familiar: own the hardware, control the virtualization layer as proven middleware, and run a derivative of k8s on top for the applications of the future.

Kubernetes in the public cloud

But the hardware companies are not the only ones fighting for Kubernetes market share. The public cloud vendors – from Alibaba and AWS to Google and Microsoft with Azure – offer their Kubernetes derivatives on an as-a-service basis. Select the amount of compute power and set up your Kubernetes clusters; everything else is provided by the chosen public cloud vendor. But you still need to configure everything inside your k8s, and even though best practices have emerged in the past weeks and months, it will take some time before we see a trickle-down effect from what the big players are already doing. Kelsey Hightower draws a cool comparison: we should be throwing our apps onto k8s like we send emails, without worrying about the underlying “compute power”.30 But if you tell a DevOps engineer today that Kubernetes is already as easy as email, you will certainly get a very special look.
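Even on a fully managed cluster, the objects you have to describe yourself remain non-trivial. As a rough illustration, here is the skeleton of a Deployment manifest built as a plain Python dictionary – the field names follow the Kubernetes API, but the app name and image are made-up placeholders:

```python
import json

# Skeleton of a Kubernetes Deployment. Even when the cloud vendor runs
# the cluster for you, this description of your workload is still on you.
# "my-app" and the image URL below are hypothetical placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-app"},
    "spec": {
        "replicas": 3,  # the desired state the cluster keeps reconciling
        "selector": {"matchLabels": {"app": "my-app"}},
        "template": {
            "metadata": {"labels": {"app": "my-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "my-app",
                        "image": "registry.example.com/my-app:1.0",
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

Multiply this by the Services, Ingresses, ConfigMaps, and Secrets a real application needs, and the “special look” mentioned above becomes understandable.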

Vanilla Kubernetes

Because Kubernetes is open source, you can just download it and install it on your own hardware running some distribution of Linux.31 But then you have to manage a pile of stuff on your own as well. The general sentiment among people who run IT architectures for enterprise companies is to wish these adventurous folks the best of luck: without service contracts, support, and dedicated service level agreements (SLAs) with fixed reaction times, nobody wants to take their chances promising uptime.

However confusing the road to this point was, one thing seems pretty clear to me: every IT vendor is betting heavily on the k8s trend. Alois Reitbauer from Dynatrace32 put it to me in January in a cool quote: “the platform wars are over”. I think he is right. But is it all “rainbows and ponies” with containers and the power of Kubernetes? Definitely not. To borrow from the Cynefin framework,33 containers are at least complicated, and k8s is definitely a complex riddle to solve.

A Keptn to steer the huge Kubernetes container ship

The container landscape is full of references to the maritime world and uses the symbolic language of sailing the seas. Containers are the workloads. And even though Kubernetes is the more readable version of κυβερνήτης – Greek for “helmsman” – the platform feels more like a large, expandable ship delivering tons of containers. So maybe there is a need for a real navigator on the bridge – a captain. Dynatrace has a department responsible for the adoption of cloud native technologies and for bringing the cultural shift of DevOps/DevSecOps/NoOps/AIOps to their customers (the distinction between all the *Ops variants definitely has the potential for another post). This team builds an out-of-the-box solution to get a head start into the world of k8s. The name of their solution is Keptn.

Full disclosure: we at avodaq work closely with the Dynatrace team responsible for Keptn. Their skills super-charged our journey to cloud native, and the collaboration gave us the foundation to deploy and maintain our own solutions on k8s as well.

The reason Dynatrace is betting hard on a cloud native world with its Autonomous Cloud Enablement (ACE) approach is more than an act of love for developers. The power of the Dynatrace platform (they don’t want to be seen as a pure APM solution anymore) is clear: Kubernetes is easily observable (with traces, logs, and metrics) and gives all users of platforms like Dynatrace the right insights. Expensive solutions need to shine in production. And with a successful adoption of k8s, accelerated by Keptn for an easier start into the complex world of Kubernetes, they can make (more) good on the promises of their OneAgent.

One core strength of tech – one that I deeply love and that gets me excited every time I think and write about it – is the pitching of promises. Every new technology can have the power to change and/or transform the world. I only recently learned the distinction between the two.34 I tend to believe in those promises. And k8s has strong fundamentals, from the technical track record to industry recognition, real-world adoption, and of course financial support from various vendors. Additionally, Kubernetes could finally pull off hybrid cloud for the digitally transformed world. As I pointed out earlier (in German), digital transformation is the automation of processes with software. More and more often those software solutions need to be written by developers and can’t just be bought from a vendor as-a-Service. These individually programmed applications need to run somewhere, and when they are engineered in a cloud native way, they are destined to run in a container on Kubernetes. Kubernetes has won, and this time it seems for real. The common abstraction layer for the hybrid cloud world is finally here. But who will win the Kubernetes on-premises wars? The question is just as important for the public cloud vendors (maybe another post). I’m excited to watch how this plays out.

  1. https://www.cbinsights.com/research/report/future-open-source/
  2. CB Insights Newsletter from December 3, 2019: https://us1.campaign-archive.com/?u=0c60818e26ecdbe423a10ad2f&id=d976a92587&e=1fc15c4b51
  3. https://www.idc.com/getdoc.jsp?containerId=prUS45625619
  4. https://trends.google.com/trends/explore?date=all&q=%2Fm%2F03bxqg9,%2Fm%2F02r297q,%2Fm%2F0272hgj
  5. https://www.probrand.co.uk/it-services/vmware-solutions/history-of-virtualisation
  6. https://en.wikipedia.org/wiki/Chroot
  7. https://www.docker.com
  8. https://queue.acm.org/detail.cfm?id=2898444
  9. https://speakerdeck.com/jbeda/containers-at-scale?slide=2
  10. https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/
  11. https://cloud.google.com/blog/products/gcp/from-google-to-the-world-the-kubernetes-origin-story
  12. https://www.cncf.io/announcement/2016/03/10/cloud-native-computing-foundation-accepts-kubernetes-as-first-hosted-project-technical-oversight-committee-elected/
  13. https://en.wikipedia.org/wiki/Cloud_Native_Computing_Foundation
  14. https://youtu.be/rq4GZ_GybN8?t=375 (from Minute 6:20)
  15. https://www.wired.com/2011/11/cloud-foundry/
  16. http://webcache.googleusercontent.com/search?q=cache:dJLElFu8jZAJ:https://tanzu.vmware.com/content/blog/comparing-bosh-ansible-chef-part-1&client=safari&hl=de&gl=de&strip=1&vwsrc=0 (Sorry for the text only cache reference. The original page is no longer available on the public VMware website, which makes sense.)
  17. https://twitter.com/marklucovsky/status/728950262593953792
  18. https://go.forrester.com/blogs/14-12-10-the_cloud_foundry_foundation_the_key_driver_of_a_breakthrough_in_paas_adoption/
  19. https://www.zdnet.com/article/open-source-paas-cloud-foundry-open-doors/
  20. https://en.wikipedia.org/wiki/Pivotal_Software
  21. https://www.bloomberg.com/opinion/articles/2015-10-13/dell-will-issue-a-lot-of-not-quite-stock-to-pay-for-emc
  22. https://www.reuters.com/article/us-dell-vmware-idUSKBN1JS11X
  23. https://www.vmware.com/company/news/releases/vmw-newsfeed.VMware-Completes-Acquisition-of-Pivotal.3b73174e-4485-4ff9-8c5d-56c54de6db86.html
  24. https://thenewstack.io/vmware-tanzu-cloud-foundation-4-further-blends-vms-and-kubernetes/
  25. https://www.eweek.com/cloud/red-hat-reimagines-openshift-3-paas-with-docker
  26. https://docs.openshift.com/enterprise/3.0/whats_new/ose_3_0_release_notes.html
  27. https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future
  28. https://blog.openshift.com/openshift-commons-briefing-openshift-on-ibm-power-with-manoj-kumar-ibm/
  29. https://newsroom.cisco.com/press-release-content?type=webcontent&articleId=2047267
  30. https://www.youtube.com/watch?v=9OHNejqXOoo&feature=youtu.be&t=297 (from minute 4:57)
  31. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  32. https://www.dynatrace.com/company/leadership/
  33. http://alumni.media.mit.edu/~brooks/storybiz/kurtz.pdf
  34. https://www.burrus.com/2017/07/change-transformation-know-difference/