Ignore previous directions 8: devopsdays
Autumn update
This is what it is looking like around here at the moment.
DevOpsDays London
I gave a talk at DevOpsDays London recently. It was a nice conference, and thanks to all the organisers for their work.
The video is here https://www.youtube.com/watch?v=eMU2mZgo99c
Below is my rough outline for the talk; it differs a bit from what I actually said!
Why did containers happen?
A few years ago, I spent a bunch of time answering questions from the FTC about Broadcom's acquisition of VMware. They wanted to know if containers were a competitor to virtual machines, as they were trying to understand the competitive landscape around VMware.
It reminded me of the first five years at Docker, where everyone wanted to compare containers with VMs. Were containers just lightweight VMs? Weren't containers just insecure, so people would go back to good old VMs?
The story I told the FTC was that these innovations had come out of different growth periods. VMs were there to help when organisations suddenly had a lot more computers to manage. These tended to be poorly managed, because installation was a manual process that took ages, and most had poor utilisation (under 15%). Consolidation saved money on hardware and on Windows server licences.
In the Linux world, this was somewhat less of an issue, as we were better at running multiple applications on the same server, although a lot of servers were still underutilised.
Containers, though, were there to solve a follow-on problem: not having too many computers, but having too many applications and needing a tool to manage them. Companies were hiring more and more developers, who were writing more and more applications. The PaaS company dotCloud was exposed to this, and created Docker to manage deployment of the applications on its platform. It wasn't the isolation that was important, it was the packaging.
That was my explanation anyway. For enterprises, though, containers were part of the move to the cloud; they didn't want to lift and shift inefficient VMs to the cloud. In the early days Microsoft used to call up our customers and say they could convert their data centre to an Azure one and they would never notice: all the VMs would be just the same, but running in Azure. Forcing a move to containers alongside the cloud was a way to force some modernisation, and in many cases a move from Windows to Linux.
Change budget
Docker was easy to adopt as it did not change very much about how you used software.
There was one key innovation, which was Docker Hub: a registry of shareable images. GitHub, but you can run it. VMs never really had this; the closest was perhaps Vagrant Cloud, but sharing does not work well with fully configured images (and they were huge). For something to be reusable by lots of people, it is no use it being in a fully configured state, with all the configuration of the exact use case applied. The less specific images are, the more widely they can be used. VM images became a bit more reusable with tools like cloud-init that removed some configuration, but they are still much more specific than finer grained components like container images. And VM images were big, and networks were slower. LLMs are bigger than VM images were, but that's another story.
As well as one innovation there was one forbidden thing: Docker made people rebuild images and redeploy, rather than updating in place. That worked because the scope was a single application, so it was more manageable. And maybe because we never told anyone you could update in production. I was always surprised no one invented a tool for FTPing into your container and updating the PHP. Immutability is a great thing with a lot of useful security properties, most of which haven't really been realised, but it simplified deployment, of which more later.
Docker also made Go credible as a programming language, and now pretty much all modern languages have a TLS stack as part of the standard library. Before Docker, YouTube was the main user of Go; now it is the fourth most popular language in containers, after Node, Java and Python.
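To make the standard-library point concrete, here is a minimal sketch (my illustration, not something from the talk) of TLS in Go with no external dependencies: an HTTPS client is one call, and an HTTPS server only needs a certificate and key, whose file names below are placeholders.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// HTTPS client: the TLS handshake is handled by crypto/tls,
	// which ships in the standard library.
	resp, err := http.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(resp.Status, len(body), "bytes")

	// HTTPS server: cert.pem and key.pem are placeholder paths
	// for a certificate and key you supply.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over TLS")
	})
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```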
Kubernetes
I remember in the early container days, before Kubernetes and then when Kubernetes was very new, people still thought container orchestration was about scheduling. We would have whole conference tracks about schedulers. But when you went to talk to the early users of Kubernetes, they were just trying to write deployment scripts.
Docker Swarm did not allow you to write deployment scripts. The security team had decided that the security model would be broken if you could deploy from within the cluster. For years the commercial product we sold had the worst deployment story you could imagine: pasting YAML files into a text box on a web page. The whole company culture really ignored deployment. But deployment was what everyone really wanted to do with Kubernetes, for years. We got real deployment tools, and deployment philosophies like GitOps.
Another thing people asked constantly in the early days was whether anyone would ever run databases in containers, or on Kubernetes. Somehow at about this time people started to ask why they were running databases themselves at all, and decided that if the downside was losing all your data and the upside was saving a little money, they would rather get a cloud provider to run the database after all. I do wonder how much of this was because container storage seems so ephemeral and easy to delete. Having containers be so simple to delete means that the chores of managing the lifecycle of things that have state are very different. And there were just a lot of choices in the storage stack: is it NFS or block storage or what?
What went wrong
The focus on deployment, and the complexity of Kubernetes, killed DevOps as it once was. As a lapsed ops person who moved back to development, I always loved the bringing-communities-together aspect of DevOps. But over time DevOps became just a backend role and a job title for people wrangling Kubernetes and other deployment technologies. Somehow it seems easier for people to relate to technology than to culture, and the technology started working against the culture.
Docker didn't really change development. For a while it looked like it might take over the role of Vagrant in building up local development environments, but although people at Docker made heroic efforts to make developing in containers nice, no one really does that, except kind of sort of in a cloud environment, and that's really closer to a remote Linux box. Python and Ruby cleaned up their virtual environment tooling, and if you really want reproducible local development environments you can use Nix. What people do with Docker is spin up a database or another service to develop or test against.
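As a rough sketch of that last pattern (again my illustration, not from the talk): start a throwaway Postgres with Docker and point your code or tests at it. The image tag, password and port below are arbitrary placeholder choices, and the driver could be any database driver.

```go
// Start a disposable database first, for example:
//   docker run --rm -d -p 5432:5432 -e POSTGRES_PASSWORD=secret --name devdb postgres:16
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver; any driver would do
)

func main() {
	// Connection details match the docker run flags above.
	db, err := sql.Open("postgres",
		"postgres://postgres:secret@localhost:5432/postgres?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Ping confirms the containerised database is reachable;
	// run your tests against it, then delete the container.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to throwaway Postgres")
}
```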
Application composition from open source components became the dominant way of constructing applications over the last decade. But this was largely supported by language package managers, which are all very different. We didn't end up with a universal build abstraction, and while immutability was great for helping you know what is running, the scale of dependencies and applications conspired to make this less useful than we thought.
Where are we now?
We started off with virtualisation being introduced because hardware was only being used at 15% of capacity. According to the 2024 Datadog report on the State of Cloud Costs, "83 percent of container costs are associated with idle resources". This really shows how much more accurately the technology we have built can measure wastage.
The compute we are wasting is at least 10x cheaper, but we now have automation to waste it at scale. Much of the usage of containers has been to drive applications for mobile phones, and those mobile phone CPUs, adapted as Arm servers, are being used to run the applications.
We have ended up with pockets of efficiency, where things are done at sufficient scale, and a long tail of inefficiency that has remained the same for a decade. AI has also shown that we can make huge improvements to applications if they are expensive enough, with the cost of inference falling at an extremely rapid rate.
Looking forward
The "Choose Boring Technology" essay was written in 2015, and containers back then were definitely not a boring technology, although the mentioned examples of not boring were Consul and MongoDB. Boring technologies were MySQL, Postgres, PHP, Python, Memcached, Squid and Cron. Now? ChatGPT told me that Docker is "mostly boring" while Kubernetes is "moving towards boring".
Choosing boring is becoming part of the culture now; it has taken a decade. Maybe AI has attracted all the change budget, combined with the end of the cloud native ZIRP startup era. LLMs are good at boring technology, being trained on our culture too.
If we want something else, we will have to add back a change budget.