Thoughts and Updates from the Hedvig Blog


Hyperconverged or hyperscale? Yes please.

by Rob Whiteley on February 10, 2015

There’s a lot of debate in the industry on the best way to deploy infrastructure in private and public cloud datacenters.

In the last few years, the latest model to gain traction is hyperconvergence.

As with any new concept, there are many different definitions floating around. For the sake of this post, I’ll use TechTarget’s definition:

Hyper-convergence (hyperconvergence) is a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking and virtualization resources and other technologies from scratch in a commodity hardware box supported by a single vendor.

You can also find a great hyperconvergence primer on Wikibon.

Hyperconverged solutions make sense. They provide simplicity and what I’ll call linearity, meaning you get predictable performance and capacity increments. And analysts predict this will be a big market. In fact, Nutanix worked with IDC to estimate a potential $50 billion market for hyperconvergence. Yes, that’s with a ‘b.’

To understand why, let’s backtrack a bit. Hyperconvergence gained traction with VDI as its marquee workload. ESG senior analyst Mark Bowker estimates in this article that as many as 60% of hyperconverged products initially sold were for VDI. Since then, hyperconvergence has gained momentum with workloads beyond VDI. Many organizations now see it as a basic, modular building block for enterprise virtualized workloads, especially apps like Exchange, SQL Server, and even Oracle databases.

But hyperconvergence is not a great fit for all environments. Choosing hyperconvergence means you’ll be scaling compute and storage in lockstep. This is the right choice when simplicity is the primary requirement, but not ideal for elastic workloads.
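The lockstep trade-off is easy to see with a little arithmetic. A quick sketch (the node sizes here are hypothetical, purely for illustration):

```python
# Hypothetical hyperconverged node: compute and storage are bundled,
# so adding capacity always adds both.
HC_NODE = {"cores": 16, "tb": 10}

def hyperconverged_scale_out(extra_tb):
    """Nodes needed to add extra_tb of storage -- compute comes along for the ride."""
    nodes = -(-extra_tb // HC_NODE["tb"])  # ceiling division
    return {"nodes": nodes,
            "added_tb": nodes * HC_NODE["tb"],
            "added_cores": nodes * HC_NODE["cores"]}

# To add 100 TB for a storage-heavy workload you also buy 160 cores,
# whether or not the workload needs them.
print(hyperconverged_scale_out(100))
# {'nodes': 10, 'added_tb': 100, 'added_cores': 160}
```

For a steady, predictable workload that extra compute is a feature, not a bug; for an elastic, storage-heavy one it’s stranded capacity.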

Enter hyperscale solutions.

Hyperscale is a game changer. As with traditional architectures, compute and storage are scaled independently. However, unlike traditional server/array architectures, hyperscale environments deploy racks of equipment using commodity, whitebox components. The goal of hyperscale, as described on Wikipedia, is “to scale appropriately as increased demand is added to the system.” This is the backbone of web goliaths like Google and Amazon, and was most notably popularized by Facebook with its Open Compute Project.

Hyperscale is a much more suitable storage architecture for elastic workloads, especially Hadoop, Cassandra, and other NoSQL flavors. It’s also a better platform for enterprises building out their own clouds with technologies like OpenStack or Docker.

So, as with most things, the answer is not either/or – it’s both. That’s where the right software-defined storage (SDS) comes in.

SDS solutions separate the “controller” from the “array.” The controller provides the necessary block, file, and object interfaces for the compute tier. The array manages the data, as well as providing advanced features like de-duplication, compression, and QoS.

[Figure: hyperconverged vs. hyperscale comparison]

SDS makes hyperconverged versus hyperscale a deployment choice, not an architectural constraint. If the “controller” and “array” components are deployed on the same physical node, then, voilà, you have a hyperconverged appliance. If you decouple the two components, then you have hyperscale. Moreover, you can actually mix-and-match both deployment options with the same platform, radically simplifying implementation and provisioning workflows. SDS gives you the flexibility to deploy hyperconvergence when necessary, but also democratizes hyperscale, making it available for mainstream enterprises, not just Internet giants.
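To make the placement idea concrete, here is a minimal sketch (the class and method names are invented for illustration, not any particular SDS product’s API). The same two software components yield either architecture depending only on where you run them:

```python
class Controller:
    """Presents block, file, and object interfaces to the compute tier."""
    def __init__(self, host):
        self.host = host

class Array:
    """Manages the data: deduplication, compression, QoS."""
    def __init__(self, host):
        self.host = host

def deployment_model(controller, array):
    # Identical software either way -- only the placement differs.
    if controller.host == array.host:
        return "hyperconverged"  # both components share one physical node
    return "hyperscale"          # compute and storage scale independently

print(deployment_model(Controller("node1"), Array("node1")))  # hyperconverged
print(deployment_model(Controller("node1"), Array("node9")))  # hyperscale
```

The point of the sketch: nothing in the software changes between the two models, which is why a single SDS platform can mix both in one environment.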

Datacenter architectures are never static. The best converged architecture today is not the best scale-out option for tomorrow. Instead, pick components that are designed with flexibility in mind. If you require simplicity, then pick hyperconverged; if you want elasticity, then go for hyperscale.

Don’t lock yourself into a datacenter architecture that fails to meet your needs 3-5 years out. You’ll end up back in this mess.


Instead, audit the workloads you have now and forecast the ones on the horizon. Pick an SDS platform that accommodates them all – you no longer have to choose one versus the other.

To learn more, download our free eBook: Hyperscale Storage for Dummies. It provides an overview of hyperconverged versus hyperscale solutions, the best use cases for each, and a handy tool to select which is best for you.


Rob Whiteley


Rob Whiteley is the VP of Marketing at Hedvig. He joins Hedvig from Riverbed and Forrester Research where he held a series of marketing and product leadership roles. Rob graduated from Tufts University with a BS in Computer Engineering.