
How to avoid Cinder and Swift compromises in OpenStack storage

by Rob Whiteley on May 22, 2015

OpenStack is mainstream, but only for very large organizations.

At least that’s my conclusion after spending a week at OpenStack Summit up in Vancouver. It was a great event at a great location. This year felt markedly different from previous summits. The tone subtly shifted from evangelizing and educating to bragging. And I don’t mean that negatively. We were treated to great customer testimonials from Walmart, eBay, TD Bank Group, Google, CERN, NASA JPL, and others.

These lighthouse testimonials had one glaring fact in common: massive scale. OpenStack clearly works as a framework for companies building large-scale clouds. In that respect it’s a “mainstream” technology. However, these companies also have the time, energy, and resources to commit to OpenStack cloud building. Thus, I don’t think they accurately reflect reality -- as you’ll see in the data below.

“Main Street” companies -- the average mid-to-large sized enterprises -- have no such luxury. That’s why most of the implementations I discussed with attendees were in test/dev or experimental in nature. One Director of IT Infrastructure (at a $500M enterprise) summed it up best when he told me:

“Right now OpenStack is just too hard to implement. There are too many moving parts. Too many projects to stitch together. I just don’t have the staffing to do that. Give it another year, and hopefully it will be ready for us.”

Now I may be having biased conversations, but storage kept coming up over and over again as the one component making OpenStack so difficult. Between Cinder, Swift, and the emerging Manila project (there’s a table summarizing these here), OpenStack mirrors the storage conundrum at large. There are silos of storage needed for specific use cases and for specific capabilities.
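To make the silo point concrete, here’s a minimal sketch of what juggling just two of those projects looks like from a script: block storage goes through the Cinder API, object storage through a completely separate Swift connection. The endpoints and credentials are placeholders, but the split between the two clients is exactly the fragmentation attendees were describing.

    # Minimal sketch: block (Cinder) and object (Swift) storage are driven by
    # separate OpenStack services with separate clients. Endpoints and
    # credentials are placeholders.
    from keystoneauth1 import loading, session
    from cinderclient import client as cinder_client
    import swiftclient

    # Authenticate against Keystone once for the Cinder client.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Block storage: create a 10 GB Cinder volume.
    cinder = cinder_client.Client('2', session=sess)
    cinder.volumes.create(size=10, name='app-data')

    # Object storage: a separate Swift connection, container, and object.
    swift = swiftclient.Connection(
        authurl='http://controller:5000/v3',
        user='demo', key='secret', auth_version='3',
        os_options={'project_name': 'demo',
                    'user_domain_name': 'Default',
                    'project_domain_name': 'Default'})
    swift.put_container('backups')
    swift.put_object('backups', 'app-data.tar.gz', contents=b'...')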

The OpenStack Foundation’s own Superuser publication has some fantastic data that bears this out. As you can see below, 77% of OpenStack deployments have less than 100 TB of block storage, while 80% have less than 100 TB of object storage.

[Charts: OpenStack block storage adoption and OpenStack object storage adoption]

What’s the difference between Ceph and Swift?

The primary culprit behind the fragmentation and small deployment sizes is that organizations need to deploy a separate solution for each “flavor” of storage. This was further highlighted by Christian Huebner, a Cloud Architect at Mirantis, who led an excellent session at OpenStack Summit entitled Swift vs Ceph from an Architectural Standpoint.

In his presentation Christian outlined the performance and advanced storage features of Ceph (used predominantly for block storage) vs the multi-site, multi-region replication and scalability of Swift (used predominantly for object storage). He noted that many Mirantis customers are forced to choose one or the other, or to cobble together an architecture that combines both. Why? Because Ceph provides great storage within a data center, but lacks adequate capabilities for replicating across data centers.
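For a sense of what that Swift-style multi-site replication looks like in practice, container-to-container sync is one common mechanism: a container in one cluster is pointed at its counterpart in another cluster and the two share a sync key. Here’s a minimal sketch; the realm, cluster, and account names are placeholders an operator would define in container-sync-realms.conf on both sides.

    # Minimal sketch of Swift container sync between two clusters. The realm,
    # cluster, and account names are placeholders defined by the operator in
    # container-sync-realms.conf.
    import swiftclient

    conn = swiftclient.Connection(
        authurl='http://site-a-controller:5000/v3',
        user='demo', key='secret', auth_version='3',
        os_options={'project_name': 'demo',
                    'user_domain_name': 'Default',
                    'project_domain_name': 'Default'})

    # Tell the 'images' container in site A to sync its objects to the
    # matching container in site B. Both containers must share the sync key.
    conn.put_container('images', headers={
        'X-Container-Sync-To': '//realm_name/site_b/AUTH_demo/images',
        'X-Container-Sync-Key': 'shared-secret-key',
    })

Ceph, as Christian noted, has no comparable answer for spanning data centers, which is what pushes architects toward the hybrid designs he described.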

Below is a shot I took from the audience. Believe it or not, this is the simplified view of what Christian recommends to customers based on what’s currently available in OpenStack open source storage.

[Slide: Ceph and Swift architecture for OpenStack]

But there’s something else that Christian said during his speech that resonated with me: “Workarounds always end up costing you more in the end.”

Truer words have never been said when it comes to the current state of affairs in OpenStack storage. 

Yes, Ceph is “free.” Yes, you can implement Swift for “free.” However, the reality is that the underlying open source technologies require massive workarounds. The proposal above is the best hope you have, and it’s predicated on the community delivering certain Cinder and Swift enhancements, which could take years. In the meantime, manual configuration and integration mean enterprises have to spend significant time, energy, and money getting all of this up and running. Not to mention the lack of support and the risk inherent in such a fragile kludge.

How to optimize OpenStack storage

That’s why we took a completely different approach with the Hedvig Distributed Storage Platform and our OpenStack storage solution. Hedvig provides:

  • Block, file, and object all from the same platform via native Cinder and Swift integration.

  • The ability to set granular, per-volume (Cinder) or per-container (Swift) policies for capabilities like compression, dedupe, snaps, and clones (see the sketch after this list).

  • A distributed platform that’s been optimized for multi-site and multi-region replication. You can set a replication factor of one to six and choose a destination data center or cloud for each replica.
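To give a rough idea of what those per-volume policies look like from the Cinder side, the usual pattern is a volume type whose extra specs the backend driver interprets. The sketch below follows that pattern; the hedvig:* keys are illustrative placeholders rather than the driver’s documented option names.

    # Minimal sketch: expressing per-volume policies as a Cinder volume type.
    # The 'hedvig:*' extra-spec keys are illustrative placeholders, not
    # documented driver options.
    from keystoneauth1 import loading, session
    from cinderclient import client as cinder_client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)
    cinder = cinder_client.Client('2', session=sess)

    # A "gold" policy: dedupe and compression on, three replicas across sites.
    gold = cinder.volume_types.create('hedvig-gold')
    gold.set_keys({
        'volume_backend_name': 'hedvig',    # matches the cinder.conf backend
        'hedvig:dedup': 'true',             # placeholder key
        'hedvig:compression': 'true',       # placeholder key
        'hedvig:replication_factor': '3',   # placeholder key
    })

    # Every volume created with this type inherits the policy.
    cinder.volumes.create(size=100, name='db-data', volume_type='hedvig-gold')

The same idea applies on the Swift side, where policies attach per container rather than per volume.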

[Diagram: Hedvig for OpenStack]

Put simply, you could replace the entire workaround Christian showed in his slide above with a single, logical cluster that spans all six sites. It’s 7-8x faster than Ceph and provides even more multi-site, multi-region functionality than Swift.

Want to learn more about why Hedvig is modern storage for a modern cloud? Click below to get a solution brief, Ceph comparison, demo video, or to request your own custom demo to showcase our OpenStack integration.

Learn More

 

Rob Whiteley

Rob Whiteley is the VP of Marketing at Hedvig. He joins Hedvig from Riverbed and Forrester Research where he held a series of marketing and product leadership roles. Rob graduated from Tufts University with a BS in Computer Engineering.