The spike in interest we’re seeing in both microservices and containers is all about one thing: speed. With the broad availability of on-demand elastic cloud infrastructure, every company I talk to is in a race to deliver better software faster. Forrester calls this era the Age of the Customer. Every company is now competing to win, serve, and retain customers better - and customer experience is king.
Microservices help developers break up monolithic applications into smaller components. They can move away from all-at-once massive package deployments and break up apps into smaller, individual units that can be deployed separately. Smaller microservices can give apps more scalability, more resiliency and - most importantly - they can be updated, changed and redeployed faster. Some of the biggest public cloud applications run as microservices already.
Containers are a packaging strategy for microservices. Think of them more as process containers than virtual machines. They run as a process inside a shared operating system. A container typically only does one small job - validate a login or return a search result. Docker is a tool that describes those packages in a common format, and helps launch and run them. Linux containers have been around for a while, but their popularity in the public cloud has given rise to an exciting new ecosystem of companies building tools to make them easier to use, cluster and orchestrate them, run them in more places, and manage their lifecycles.
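That common format is just a short text file describing the package. A minimal sketch, assuming a hypothetical single-purpose Python service in a file called app.py (the names here are illustrations, not a real project):

```dockerfile
# Minimal Dockerfile for a hypothetical single-purpose service.
# The image does one small job - run app.py - and nothing else.
FROM python:3.11-slim

WORKDIR /app

# Copy only what the service needs into the image.
COPY app.py .

# The container runs a single process; when that process exits,
# the container exits. No guest OS, no hypervisor.
CMD ["python", "app.py"]
```

Building that file (`docker build -t search-svc .`) and running it (`docker run search-svc`) launches the service as an ordinary process inside a shared kernel - which is exactly why it behaves more like a process container than a virtual machine.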
Over the last two years, many different types of software vendors - from operating system to IT infrastructure companies - have all joined the container ecosystem. There’s already an industry organization - the Open Container Initiative - guiding the market and making sure everyone plays well together. IBM, HP, Microsoft, VMware, Google, Red Hat, CoreOS - these are just some of the major vendors racing to make containers as easy as possible for developers to use, to share, to protect, and to scale.
Yes, enterprises should care about containers and Docker, and the data already shows that they do. Forrester’s most recent developer survey data shows that about 4-5% of developers deploy apps into production containers today. That’s pretty low production adoption, but still strong for a technology no one was talking about two years ago. Roughly 30% of enterprise developers tell us they are actively exploring containers, though - that’s a big number this early in the game and tells me developers are driving enterprise interest.
Now, the vast majority of early adopters are companies building cloud-native apps. These are the built-to-scale, microservices apps born in the cloud and designed to run in the cloud. But over the next 3 years, we expect to see companies also exploring containers for more traditional stateful applications, like enterprise database workloads. That’s why so many virtualization vendors, storage and network software companies, and public cloud providers are rushing to create services to help enterprises run more and more types of apps in containers.
Think of this trend like a previous one about ten years ago. That’s when server virtualization started to get really popular. First, developers figured out how great virtualization was for running multiple apps on shared servers. Then IT Operations teams started to see how much this isolation could help them, too. But virtualization changed a lot about how infrastructure was used and managed. Virtual machines were suddenly mobile. They were easily duplicated and scaled. You could run a lot more of them on one box than you could before. And that caused a lot of challenges, but created a huge market of opportunity. We had to rethink how we allocated and connected storage to VMs. We had to rethink how network resources were assigned, shared, and how our approach to security had to change. This is happening all over again with containers.
Not too many enterprises have large-scale Docker deployments in production. The ones who do are typically using Docker for net new development or when they redesign an existing app, especially if they plan to deploy that app on an elastic cloud infrastructure. That could be either a public or private cloud.
But that’s going to change. Once you’ve identified an app that’s a good candidate for containers, you can start looking beyond just shorter development cycles and look at how containers can help you drive even more efficiency in your data center deployments than you can get with virtual machines alone. That’s the next frontier, and I expect we’ll get there pretty quickly. Why launch an entire VM, with a complete OS, which might take minutes, when you can launch a container in a running OS, which might take seconds or even less?
That’s where containers get interesting for the enterprise, and that’s why we’re seeing such a surge in new storage and networking technologies around containers. It’s easy to run a stateless app that only lives for seconds in a container, but if you’re going to run a database workload that needs persistent storage, and might need to share data with other containers, you need some powerful storage virtualization technology adapted for containers.
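The usual pattern for that stateful case today is a named volume that outlives any single container. A hedged sketch in Docker Compose syntax, assuming a hypothetical Postgres workload (the service and volume names are illustrations):

```yaml
# docker-compose.yml - hypothetical database workload with persistent storage.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustration only - use a secrets store in practice
    volumes:
      # Named volume: the data survives container restarts and redeploys.
      - dbdata:/var/lib/postgresql/data

volumes:
  dbdata:
```

The split is the interesting part: the container is disposable, the volume is not - and the storage layer underneath has to honor that split at container granularity, which is exactly what the new storage virtualization technologies are chasing.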
Absolutely. Think about what we needed to make virtual machines highly portable, efficient, and secure - then factor in the impact of a bunch of new, smaller, and maybe shorter-lived containers. There are two ways to look at the impact. You can run containers in a VM (and the big virtualization vendors are actively shrinking the footprint of their VMs to make that even easier) or you can run them on bare metal if you need that kind of performance.
In the first case, you can inherit some of the storage and network control features of the VM, but you still have a bunch of new processes vying for the same underlying infrastructure. In the second case - on bare metal - you’ll need to rethink how you present storage and networks to containers, how they share them, and especially, how data is persisted and managed.
Think about what happened when VM consolidation ratios started going up - more VMs per box - and when we started doing new things with VMs, like desktop virtualization. What we saw were crazy new IO requirements, and contention for storage and network resources that were hard to predict. Random I/O patterns, boot storms, and brittle network architectures all made dynamic virtualization harder - until we solved them with software-defined storage and software-defined networking.
We’re still solving those problems with cool new technology every day, and containers will only make software-defined infrastructure more important. Basically, the more dynamic your application architecture is, the more you need software-defined control over all of your infrastructure.
Today, most containers might just rely on simple overlay file systems, but it’s only a matter of time before your developers want containers that can provision their own storage, replicate it, snapshot it, and so on - all the advanced storage services IT Ops teams rely on to make VMs run so well.
I’d recommend I&O teams start by sitting down with their friends in application development to find out how containers can help accelerate the software development lifecycle. Containers are a great way for dev and test teams to cut some of the friction out of software delivery. You can have a dev define a container and pass it unchanged down through test - no more “doesn’t work on my machine.” Get involved early in the process, because you want to be ready from a production standpoint when apps are ready to actually be deployed in containers.
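The “pass it unchanged” handoff works because the image, not the source tree, becomes the deployment artifact. A minimal sketch, assuming a hypothetical Node.js app (the file names and versions are illustrations):

```dockerfile
# Hypothetical Dockerfile a developer hands to the test team unchanged.
# Everything is pinned, so dev and test run identical bytes.
FROM node:20-slim

WORKDIR /srv/app

# Dependencies are resolved at build time and frozen into the image,
# so "works on my machine" and "works in test" are the same machine.
COPY package.json package-lock.json ./
RUN npm ci

COPY . .
CMD ["node", "server.js"]
```

In practice the developer builds and tags the image once (for example, `docker build -t app:1.4.2 .`), pushes it to a shared registry, and the test team pulls and runs that exact image - nothing gets rebuilt between stages, so nothing can silently drift.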
Docker containers will almost certainly open up new ways to boost your server efficiency. Do you need a full VM for this app? Or does it take too long to launch an app with a full VM? Consider a container. Another thing for I&O teams: adopting containers can expand your deployment options. Especially in the cloud - you’ll be able to run a Dockerized app in nearly any cloud you want, pretty soon, in addition to your VMware or Microsoft or Red Hat virtualized infrastructure. But I would caution I&O teams to pay attention to HOW the app will run, how long it will live, who it needs to talk to, and what kind of storage it needs.
Since this market is evolving so rapidly, some vendor is probably already working on storage optimizations, or container security, or trusted networking…but you’ll need to stay close to your trusted vendors to stay on top of that. Like any open source community, the Docker community will experience some growing pains. Keeping the players straight will take some investment. But just like server virtualization a decade ago, the payoff will be well worth it.