
The intersection of DevOps, microservices, containers, and storage

by Rob Whiteley on August 3, 2015

In a recent post I wrote about the intersection of Docker and storage — one of the fastest growing areas of customer interest.

For this post I want to step back and look beyond Docker to the broader conversation around microservices, and make the case that DevOps needs to get more involved with how and where data is stored in a microservices architecture.

Microservices are a scalable alternative to monolithic apps, streamlining application development and distribution. This modern approach to software systems and application development represents a departure from the traditional way of designing apps. As Neal Ford, an architect at the agile development firm ThoughtWorks, so eloquently put it, “Microservices are the first post-DevOps revolution architecture.”

DevOps is built on core principles that include culture, automation, measurement, and sharing. An excellent survey run by PuppetLabs found that 63% of organizations are already adopting DevOps practices (see below and check out the full report here).

[Infographic: PuppetLabs DevOps survey results]

DevOps describes the mindset and the organizational structures that are necessary to deeply integrate development and operations, breaking down silos so apps can be brought to market more easily. It teaches IT ops to be more app-minded, while empowering developers to own more of the app lifecycle. Personally, I think DevOps is more about dev and less about ops, though. AWS taught us that if you make a process easy enough, the developer will do it themselves.

That’s where microservices come in. Microservices empower developers to develop, deploy, and operate their apps more efficiently.

What are microservices?

Microservices, as defined by Martin Fowler (co-author of the Agile Manifesto), are “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms.” Let’s be honest: this is not a new idea. Aspects of it have existed since Linux and UNIX became available. However, technological advances like cloud computing, containers, and the shift from systems of record to systems of engagement mean that everything old is new again.

Put simply, microservices work by breaking up traditional apps (which may include a vast number of extraneous parts) into components with APIs, separating them into individual services that can be distributed and replicated on a per-app basis. Think of it as dynamically assembling an app, rather than compiling it as one monolithic blob. Moreover, microservices simplify the development process so that developers are able to treat infrastructure as code, automating many of the operational functions that otherwise would be handled by IT.
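
To make this concrete, here is a minimal sketch (Python, standard library only) of what one such small service might look like. The service name, route, port, and data are illustrative and not taken from any real system:

    # A single microservice: one small, self-contained process that exposes
    # its data over a lightweight HTTP/JSON API. Other services call it over
    # the network instead of linking against it, so it can be containerized,
    # replicated, and redeployed independently of the rest of the app.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    INVENTORY = {"widgets": 42, "gadgets": 7}  # illustrative in-memory state

    class InventoryHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/inventory":
                body = json.dumps(INVENTORY).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Each service runs in its own process; an app is assembled from many
        # of these rather than compiled as one monolithic blob.
        HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()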

Microservices, containers, and persistent data storage

For the sake of argument, let’s equate microservices with containers, since microservices are typically deployed in containers. Google is at the forefront of microservices architecture, spinning up over two billion containers per week to achieve it. The beauty of microservices and containerization is that they give organizations that aren’t Google the ability to split an application into a set of smaller, interconnected services, enabling any organization to move at the speed of Google and the other internet giants.

So all’s well, right? Maybe not. Traditional enterprises differ from the likes of Google in a significant way: stateful applications.

Most enterprises have a set of apps that are stateful and require persistent storage, so finding a way to connect microservices and containerized apps to storage poses a challenge, and a bottleneck, in production environments. While app developers can spin up a container in just a few seconds, inefficient storage architectures can take hours or even days to provision and allocate the necessary storage space. Forrester estimates that 58% of organizations face this conundrum. See below for an excerpt, and you can read the full report here.

[Excerpt: Forrester storage provisioning statistics]

Docker data volumes provide their own form of rudimentary data management, which is adequate for web apps that are stateless or ephemeral in nature. But what about stateful apps? When it becomes necessary to spin up multiple instances of a larger software framework like Hadoop, corresponding storage capacity needs to be spun up for them. And that’s where Hedvig comes in.
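
To illustrate that rudimentary approach, here is a minimal sketch using the Docker SDK for Python (the docker package). It assumes a local Docker daemon is running; the image, volume, and container names are illustrative:

    # Rudimentary persistence with a Docker named volume: the data outlives
    # the container, but it still lives on a single host's disks rather than
    # in a scalable storage cluster.
    import docker

    client = docker.from_env()  # connects to the local Docker daemon

    # Create (or reuse) a named volume for the stateful service's data.
    client.volumes.create(name="pgdata")

    # Run a database container with the volume mounted at its data directory.
    client.containers.run(
        "postgres:9.4",
        detach=True,
        name="orders-db",
        environment={"POSTGRES_PASSWORD": "example"},
        volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )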

How Hedvig fits in a microservices architecture

[Diagram: where microservices, containers, storage, and DevOps intersect]

Hedvig pools data into a single, elastic cluster, ensuring that any microservice can readily access persistent data. The Hedvig Distributed Storage Platform has four microservices-friendly capabilities that developers and DevOps teams at our customers rely on:

  • Elasticity and rapid self-provisioning capabilities: Traditional storage is rigid. It can become a bottleneck that slows provisioning, especially if IT is guessing at how much capacity is required. Software-defined storage improves on this model by providing elastic storage that can be incrementally scaled as and when the app needs it.
    How Hedvig helps: Hedvig automates the provisioning process, allowing the app developer to include storage in the orchestration process for software stacks.

  • Easy integration via an API: The days of manual app development are long gone. Any modern storage solution must have a simple API to connect to whatever framework a developer may be using.
    How Hedvig helps: The Hedvig REST API provides a full suite of programmable calls that handle provisioning, monitoring, analytics, debugging, and configuration of the storage cluster (see the hypothetical sketch after this list).

  • Easy deployment and rollback: With large enterprises like Amazon rolling out software updates as quickly as every 11.6 seconds, it can be incredibly valuable to quickly roll back changes that have been recently deployed into production.
    How Hedvig helps: The Hedvig Distributed Storage Platform allows for an unlimited number of snapshots and clones that give developers access to every iteration of their application so that nothing is lost.

  • Scalability and performance: Microservices are dynamically spun up and require storage that keeps pace. Since app development is an extremely data-intensive task that is done for a relatively short time, a storage solution with extensive read/write capability is needed.
    How Hedvig helps: Hedvig sequentializes random I/O, making it extremely write-friendly. Hedvig also automatically scales from just a few nodes to hundreds and harnesses the power of the cluster for repairs and advanced storage capabilities.
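
As referenced in the API bullet above, here is a hypothetical sketch of what API-driven storage provisioning can look like from a deployment script. The base URL, endpoint path, and payload fields are illustrative placeholders, not the documented Hedvig API:

    # Hypothetical example: provision a volume from the storage cluster's REST
    # API as one step of app orchestration, before the stateful containers
    # that will use it are started. All names and fields here are made up.
    import requests

    STORAGE_API = "https://storage.example.com/api"  # placeholder base URL

    def provision_volume(name, size_gb, replicas=3):
        """Ask the storage cluster for a new volume and return its details."""
        resp = requests.post(
            "%s/volumes" % STORAGE_API,
            json={"name": name, "sizeGB": size_gb, "replicas": replicas},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        volume = provision_volume("orders-db-data", size_gb=100)
        print("provisioned:", volume)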

As microservices architectures are adopted beyond just the web giants, enterprises will eventually create stateful applications that require more persistent data storage. Software-defined storage is the only way you can do this at a reasonable cost. A monolithic approach is untenable.

Your goal is to deploy reusable microservices: to spin up more instances without the need to recode. However, for microservices to go mainstream, you’ll need new tools like software-defined storage that make bringing new services to market faster and more flexible.

To learn more, watch this on-demand webinar where experts from both Docker and Hedvig demonstrate how our combined solution helps DevOps teams create manageable container environments.


 

Rob Whiteley

Rob Whiteley is the VP of Marketing at Hedvig. He joins Hedvig from Riverbed and Forrester Research where he held a series of marketing and product leadership roles. Rob graduated from Tufts University with a BS in Computer Engineering.