Thoughts and Updates from the Hedvig Blog


Why software-defined storage makes 8 TB drives an economic reality

At Hedvig, we have the pleasure of being exposed to cutting edge storage trends. In fact, our recent glossary post is dedicated to demystifying this brave new world. But sometimes cutting edge can be simple. This occurred to me when I came across George Crump’s Storage Switzerland report on How to Use 8 TB Drives Safely in the Enterprise. You can download a PDF of this report here.

To explain why this is cutting edge, I have to back up a bit.

Operational overhead is the silent killer of storage. Here at Hedvig we call it “human latency.” It’s the amount of time it takes to provision storage capacity in today’s environments. According to a recent Forrester report (requires form submission), 58% of organizations measure this human latency in days, weeks, or even months. Only 14% can measure it in minutes -- the benchmark of a true cloud. Here's a figure from that report.


Thus, when calculating the ROI of a new storage system, it’s natural to focus on OPEX. The true economic impact comes from eliminating human latency through automation, self-service provisioning, thin provisioning, and the retirement of outdated management tasks.

But OPEX is a double-edged sword.

Yes, it means a software-defined storage (SDS) solution can be 60-70% less expensive than a hardware-defined counterpart. But many companies struggle to make that business case. These “soft costs” are real but often intangible. If you’re in a traditional enterprise environment, nothing beats a CAPEX business case. Trust me, I know. I experienced this at Riverbed. The flagship WAN optimization product increases employee productivity, improves user experience, and eases the setup of complex WANs. But then there is bandwidth reduction, which is good ol’ hard-cost savings: avoid a $15,000/month circuit upgrade, or go from three T1s down to one. When you can save up to 95% of your bandwidth, the business case is a no-brainer. Buy it for the CAPEX savings, love it for the OPEX savings.

As an emerging technology, software-defined storage has yet to achieve this no-brainer stage. It’s true that you can deploy on commodity infrastructure. And that’s compelling! To lower your cost-per-bit to store data (the equivalent of Riverbed lowering the cost-per-bit to transmit data), it makes sense to use off-the-shelf hardware. A solid SuperMicro server that costs $9,000 will give you 24 cores, 128 GB of RAM, 1.6 TB of SSD, and 20 TB of HDD. Deploy a three-node cluster and you’re looking at $27,000 street price. Try buying a monolithic array with that capacity, performance, and availability and you’re looking at $70,000 -- easy!

The question then becomes: Is the cost of the SDS license less than that delta? Start by calculating all the soft costs and operational improvements. Soon you’ll have a bulletproof business case that shows the TCO savings are significant. You just need a few variables like the average salary of a virtualization or storage admin; the average number of trouble tickets submitted; the length of time it takes to perform storage operations; etc. All very real costs that only increase as you scale.
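To make the comparison concrete, here is a minimal sketch of that business case. All the inputs are hypothetical placeholders (the SDS license price, admin rate, and hours saved are assumptions, not Hedvig figures); only the server and array prices come from the example above.

```python
# Illustrative TCO comparison: 3-node commodity SDS cluster vs. monolithic array.
# License cost, admin rate, and hours saved are hypothetical -- substitute your
# own quotes and labor figures.

def sds_business_case(
    node_price=9_000,          # street price per commodity server (from above)
    node_count=3,              # minimum cluster size
    sds_license=20_000,        # hypothetical SDS license cost
    array_price=70_000,        # comparable monolithic array (from above)
    admin_hourly_rate=60,      # hypothetical loaded cost of a storage admin
    hours_saved_per_year=200,  # hypothetical operational time recovered
):
    sds_capex = node_price * node_count + sds_license
    capex_delta = array_price - sds_capex          # the "delta" the license must beat
    annual_opex_savings = admin_hourly_rate * hours_saved_per_year
    return sds_capex, capex_delta, annual_opex_savings

capex, delta, opex = sds_business_case()
print(f"SDS CAPEX: ${capex:,}; CAPEX saved vs. array: ${delta:,}")
print(f"Estimated annual OPEX savings: ${opex:,}")
```

If the license plus hardware still lands below the array price, the CAPEX case closes on its own and the OPEX savings compound on top.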

But wouldn’t it be so much simpler if the CAPEX argument in and of itself was a no-brainer? Then that pesky CFO would sign off without hesitation and all that OPEX would just be icing on the proverbial modern infrastructure cake.

Well, we’re quickly approaching this reality. It will come in the form of large-capacity drives. First up will be 8 TB drives, but 10 and 20 TB drives are not that far off either.

Why 8 TB drives make SDS an economic reality

Let’s start by stating that 8 TB is not arbitrary. It’s the current sweet spot in terms of cost-per-bit (see table below).


[Table: total cost by drive size -- 1 TB, 2 TB, 4 TB, 6 TB, and 8 TB]

It’s also a useful and persuasive model for calculating CAPEX. 8 TB drives can be found for about $260, meaning a cost-per-bit of just $0.0000000000041. Ok, that’s hard to grasp, so let’s go with a more common metric. It equates to a cost-per-GB of $0.0325 or, if we calculate it in the other direction, a cost-per-PB of $32,500.


As an example, if we look at 2 TB drives, the average cost is closer to $100 (let’s use $90 to be conservative). That’s a cost-per-GB of $0.045 and a cost-per-petabyte of $45,000. That means 8 TB drives deliver nearly 28% savings right out of the box. Imagine what happens when 8 TB drives drop in price, or when 10 TB and 20 TB drives appear. Now we’re getting to 60% savings on CAPEX alone.
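The arithmetic above is simple enough to check in a few lines. This sketch uses decimal units (1 TB = 1,000 GB), as drive vendors do, and the ~$260 and ~$90 street prices quoted above.

```python
# Cost-per-capacity math from the drive prices in the text.

def cost_metrics(drive_price, drive_tb):
    gb = drive_tb * 1_000            # decimal TB -> GB
    per_gb = drive_price / gb
    per_pb = per_gb * 1_000_000      # GB -> PB
    return per_gb, per_pb

gb8, pb8 = cost_metrics(260, 8)      # 8 TB drive at ~$260
gb2, pb2 = cost_metrics(90, 2)       # 2 TB drive at ~$90 (conservative)

savings = 1 - pb8 / pb2
print(f"8 TB: ${gb8:.4f}/GB, ${pb8:,.0f}/PB")   # $0.0325/GB, $32,500/PB
print(f"2 TB: ${gb2:.4f}/GB, ${pb2:,.0f}/PB")   # $0.0450/GB, $45,000/PB
print(f"Savings per petabyte: {savings:.0%}")   # ~28%
```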

I know what you’re thinking. Great, you convinced me the economics of larger drives are compelling, but what’s that got to do with software-defined storage?

Why SDS makes 8 TB drives an economic reality

Quite simply, hardware-defined storage can’t use 8 TB drives. The main culprit is RAID. Repair times for these drives, regardless of the vendor, will be measured in weeks. Hardware solutions simply cannot accommodate larger drives without abandoning some of their fundamental architectural components.

Modern software-defined storage platforms don’t use RAID. Hedvig, for example, uses a distributed systems approach with replication to repair failed disks and nodes. You can read all about it in our technical whitepaper (also requires form submission). We can repair an 8 TB drive in minutes, depending on network and server configurations. In fact, this is the value of the Hedvig approach: the bigger the cluster, the faster the repair. This means that not only do we make 8 TB drives viable today, but we’ll make 10 TB and 20 TB drives viable as they become available.
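The “bigger cluster, faster repair” claim can be illustrated with a back-of-the-envelope model. This is my sketch, not Hedvig’s repair algorithm: it assumes a RAID rebuild funnels the whole drive through one spare disk at a throttled rate, while a distributed system fans the re-replication work out across the surviving nodes. The bandwidth figures are hypothetical.

```python
# Rough model: RAID rebuild vs. distributed re-replication of a failed drive.
# Bandwidth numbers are hypothetical illustrations, not measured values.

def raid_rebuild_hours(drive_tb, rebuild_mb_s=50):
    # One spare disk absorbs the entire rebuild at a throttled rate.
    return drive_tb * 1e6 / rebuild_mb_s / 3600

def distributed_repair_hours(drive_tb, nodes, per_node_mb_s=100):
    # Repair traffic is spread across the remaining (nodes - 1) peers.
    return drive_tb * 1e6 / (per_node_mb_s * (nodes - 1)) / 3600

print(f"RAID rebuild, 8 TB drive:    {raid_rebuild_hours(8):.1f} h")
print(f"10-node cluster, 8 TB drive: {distributed_repair_hours(8, 10):.1f} h")
print(f"50-node cluster, 8 TB drive: {distributed_repair_hours(8, 50):.1f} h")
```

Under these assumptions the repair time shrinks roughly linearly as nodes are added, which is why larger drives stop being a liability in a scale-out design.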

More importantly, software solutions like Hedvig enable you to procure the commodity server and drive components when they make sense. You’re basically riding Moore’s Law as optimally as possible. Running low on capacity? Then make sure your next order from SuperMicro is packed with 8 TB drives. 10 TB drives available? Then make sure next week’s SuperMicro order is packed with those! You get the idea.

I highly recommend George’s paper on 8 TB drives. We’re approaching the economic tipping point.


Commodity servers, commodity (high-capacity) drives, and software-defined storage are all maturing at the same time. As you plan your storage purchases, make sure the business case includes these higher-capacity drives and consider the no-brainer software-defined storage element. Oh, and solve your OPEX problems as a bonus.

Contact us today if you’d like a demo or help calculating your software-defined storage business case.

Request a Demo

Rob Whiteley


Rob Whiteley is the VP of Marketing at Hedvig. He joins Hedvig from Riverbed and Forrester Research where he held a series of marketing and product leadership roles. Rob graduated from Tufts University with a BS in Computer Engineering.