Software-defined storage is the soul of OpenStack

by Rob Whiteley on November 17, 2014

OpenStack is a fascinating, polarizing topic in enterprise IT. Vendors and IT execs alike are flocking to the technology. It promises to democratize cloud computing, enabling companies to operate private clouds with the efficiency of public cloud providers. If you’re not familiar with OpenStack, I’m partial to this primer on CIO.com. If you are familiar, then you know some espouse it as a game-changer, while others caution that its DIY nature is too burdensome. The truth, as always, is somewhere in the middle.

Regardless of where you sit on the OpenStack debate, the adoption numbers are impressive. Consider these stats from a recent IDG report:

  • 46% of companies use OpenStack in production, up from 32% in 2013.
  • 27% of companies are using it for Dev/QA, down from 34% in 2013.
  • 27% of companies are kicking the tires (PoC), also down from 34% in 2013.

[Figure: OpenStack adoption, 2013 vs. 2014 (IDG)]

Bottom line: OpenStack is moving into production.

That’s the good news.

The bad news is that production environments are very different from test, dev, and QA. Namely, performance and reliability shoot to the top of the requirements list. And this is particularly acute with OpenStack storage. Here’s why.

A mentor of mine, David Wu, described storage as “the soul of computing; it is what provides the state, the context, for applications. It is what IT organizations persist, protect, and secure.” He goes on to talk about the rest of the IT stack, and it’s well worth a read for his technical vision.

I bring this up because storage is the soul of OpenStack. OK, in all fairness and in a slightly less dramatized fashion, it’s actually one of three major components (see image, courtesy of OpenStack). But it’s arguably the most important component as OpenStack goes into production environments. I say that because the compute and network components are actually stateless. They don’t serve or transmit anything of value if the data is lost, corrupted, or unavailable. And based on the customer conversations we’ve had, companies are quickly coming to this realization.

[Figure: OpenStack core components, courtesy of OpenStack]

Given that, we’ve seen a lot of demand to bring “enterprise grade” storage to the forefront of the OpenStack conversation. Here’s where things get a bit nuanced, though. OpenStack storage really only provides a set of components for interfacing with storage systems: Swift (object) and Cinder (block, with partial file) are predominant, with Manila (file) still in an experimental phase. Even with all these options, only object storage has been implemented with any success at scale (read this for a good object storage primer).
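
If you haven’t worked with these interfaces, here’s a minimal sketch of how applications and operators consume them, using the openstacksdk Python library. It assumes a clouds.yaml entry named "mycloud"; the volume, container, and object names are placeholders, and Manila (shared file systems) is omitted since it’s still experimental.

```python
import openstack

# Assumes a clouds.yaml entry named "mycloud" (placeholder).
conn = openstack.connect(cloud="mycloud")

# Block storage via Cinder: provision a 10 GB volume for a VM or database.
volume = conn.block_storage.create_volume(name="app-data", size=10)
conn.block_storage.wait_for_status(volume, status="available")

# Object storage via Swift: create a container and upload an object.
conn.object_store.create_container(name="backups")
conn.object_store.upload_object(
    container="backups",
    name="app-data-backup.tar.gz",
    data=b"backup payload",  # placeholder bytes; normally a file stream
)
```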

So why do you need all three flavors, then? As OpenStack enters production, its use cases expand. In this environment, recoding production applications is unacceptable, and the majority of enterprise apps were not built with object stores as their underlying data platform. Thus block and, to a lesser extent, file are cropping up more frequently as requirements to support the hundreds or even thousands of apps that may be deployed as OpenStack workloads.

Here’s a quick snapshot (pun intended) of OpenStack storage flavors, benefits, and use cases.

[Table: OpenStack storage flavors, benefits, and use cases]

In this respect, OpenStack is a microcosm of today’s storage challenges. You’ll need a mix of block, object, and file depending on your applications and workloads. Gone are the days of using just object or just block. But this means yet more islands of storage. More administrative interfaces. More operational inefficiencies. And, most importantly, this mix impedes self-provisioning and automation – the reason you started down the OpenStack road in the first place.

Just as I discussed with hyperconverged vs. hyperscale, this is where the right software-defined storage (SDS) platform comes in. SDS provides a single platform that can be presented as block, object, or file storage as necessary. It’s inherently scalable, utilizes commodity infrastructure, and is built with cloud-like workflows and APIs. In short, software-defined storage is built to be the soul of your OpenStack deployment.

But consider SDS options carefully. Like networking in the traditional hypervisor world, storage can be the bottleneck of your deployment. We’ve heard many stories about storage determining the success or failure of an OpenStack implementation.

When selecting an SDS platform, make sure it is:

  • Elastic. I’m not just talking about basic scale-out architectures. If your solution can only cluster and scale to a few dozen nodes, it isn’t going to be the storage fabric you need for your private cloud. It should scale seamlessly, without human intervention, to hundreds if not thousands of nodes.
  • High-performance. Many solutions are built on object store architectures because of the scalability requirement above. That’s fine, but when they are presented as block or file storage in OpenStack, they often become a performance bottleneck. Look for systems that deliver high performance across all three storage flavors.
  • Full-featured. You don’t want a platform that requires you to adopt new, foreign workflows. Nor do you want a solution that drops common enterprise storage features like QoS, dedupe, and compression. Instead, pick an SDS that natively integrates with OpenStack while still providing the functionality you’re used to in traditional architectures (see the sketch after this list for how such features typically surface in Cinder).
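
How do those enterprise features show up inside OpenStack without new workflows? The common pattern is Cinder volume types with extra specs that map to backend policies. Below is a minimal, hedged sketch using the openstacksdk Python library; the cloud name, type name, and the capabilities:* keys are hypothetical placeholders (real keys depend on the SDS driver you deploy), while volume_backend_name is the standard spec the Cinder scheduler matches against a backend defined in cinder.conf.

```python
import openstack

# Assumes a clouds.yaml entry named "mycloud" (placeholder).
conn = openstack.connect(cloud="mycloud")

# Expose a backend policy (dedupe + compression) as a Cinder volume type.
gold = conn.block_storage.create_type(
    name="gold-dedupe",
    extra_specs={
        "volume_backend_name": "sds-cluster",  # matches the backend name in cinder.conf
        "capabilities:dedupe": "true",         # hypothetical driver-specific spec
        "capabilities:compression": "true",    # hypothetical driver-specific spec
    },
)

# End users keep the same self-service workflow; the type selects the policy.
volume = conn.block_storage.create_volume(
    name="erp-data", size=100, volume_type=gold.name
)
```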

The good news is that software-defined storage platforms that do all three exist. If you are one of the thousands of businesses going down the OpenStack path, focus on these attributes to ensure storage is not the bottleneck.

Looking for more info? Download our OpenStack tool kit to learn how you can apply Hedvig software-defined storage to your OpenStack cloud.

Rob Whiteley

Rob Whiteley is the VP of Marketing at Hedvig. He joins Hedvig from Riverbed and Forrester Research where he held a series of marketing and product leadership roles. Rob graduated from Tufts University with a BS in Computer Engineering.