
2016 predictions: Rounding up 8 trends on software-defined storage

by Avinash Lakshman on December 29, 2015

Here at Hedvig we’re wrapping up a successful first year out of stealth. It’s been a great year of working with customers and evangelizing a new, modern approach to storage. It’s no surprise to us that IDC is predicting a 35 percent market growth rate for software-defined storage in the next three years. In fact, we think that may even be a bit conservative.

It’s natural at this time of year to not only reflect upon our accomplishments, but also to look forward to how this market will evolve. Customers are constantly bringing us new and interesting storage problems to solve. Our software is going places I would have never imagined just one year ago!

As a result, I've been honored to have SandHill, StorageNewsletter, and VMBlog all ask me to provide 2016 predictions. We've gathered those predictions into a single list of the top 8 trends we expect to see next year.


Prediction #1: Software-defined storage breaks out of test environments

So far, most SDS deployments have been in test/dev, VDI, or non-production IT environments. But with data growing 10x faster than storage budgets and the relentless need for IT to do more with less, we'll see SDS applied to traditional tier 1 and tier 2 environments in 2016. We expect a wholesale shift away from traditional arrays to software-defined solutions that lower the cost to own and operate storage by more than 60%.

Prediction #2: ARM climbs to five percent of the server chip market

ARM servers now make up about one percent of the market. Next year, that figure creeps to five percent. Although that share seems insignificant, it has huge downstream implications for data center costs and efficiency. Chip designer ARM Holdings PLC forecasts that ARM's share will grow to 20 percent by 2020, citing an average cost of $100-200/chip versus Intel's $600/chip as a big reason. We think the jump to five percent next year will be fueled by scale-out, software-defined infrastructure (server, storage, and compute) that needs to run on commodity servers. We'll even go so far as to say we'll see our first ARM-based software-defined storage deployment by the end of the year.
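
To put that chip math in perspective, here's a back-of-the-envelope comparison in Python. The fleet size is a made-up assumption; the per-chip prices are the figures quoted above, so treat the output as illustrative only.

```python
# Rough, illustrative comparison of CPU spend for a commodity scale-out cluster.
# All inputs are assumptions: the fleet size is hypothetical, and the chip prices
# are the rounded figures cited above ($100-200 for ARM, ~$600 for Intel).
NODES = 1000                # hypothetical number of commodity servers
ARM_CHIP_COST = 150         # midpoint of the quoted $100-200 range, in USD
INTEL_CHIP_COST = 600       # quoted Intel figure, in USD

arm_total = NODES * ARM_CHIP_COST
intel_total = NODES * INTEL_CHIP_COST
savings = intel_total - arm_total

print(f"ARM fleet:   ${arm_total:,}")
print(f"Intel fleet: ${intel_total:,}")
print(f"Difference:  ${savings:,} ({savings / intel_total:.0%} lower chip spend)")
```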

Prediction #3: Storage achieves a hyperconverged equilibrium

Hyperconverged solutions (especially appliances) are a good fit for small/medium businesses, remote offices, and back office apps like Exchange. But enterprises anxious to reduce complexity began deploying hyperconverged solutions everywhere, creating sub-optimal scaling and infrastructure economics. Expect to see a more rational, balanced approach to storage architectures in 2016. That means a small shift away from hyperconverged to hyperscale: a more traditional architecture where storage is decoupled from the compute tier. The difference? Hyperscale storage is an all-software tier that’s scale-out and deployed on commodity servers.

Prediction #4: Production adoption of containers plateaus at 50 percent

DevOps.com surveyed 285 organizations and found 38 percent are already using containers in production environments. The same survey found 65 percent of respondents plan to deploy containers in production in 2016. We think that's a bit optimistic, and believe production adoption will climb to around 50 percent. The reason? Bottlenecks. Infrastructure components like security, stateful data storage and monitoring are still not where they need to be for widespread production adoption. Although the broader ecosystem is tackling these issues, it will take 12-18 months for best practices and reference architectures to permeate the community.

Prediction #5: Enterprises stop the insanity and deploy containers on bare metal

Expect a shift in deployment modes and architectures as enterprise adoption of containers, and Docker in particular, climbs toward this 50 percent mark. Why? Because enterprises currently dipping their toes in the Docker waters commonly experiment with containers deployed inside of VMs. There are legitimate security and storage reasons for doing so, but it's not ideal. Expect enterprises to get more comfortable deploying containers in a true microservices architecture, with specific processes and app components running in separate Docker containers on bare metal.
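
As a minimal sketch of that bare-metal pattern, the snippet below uses the Docker SDK for Python to launch one app component as its own container directly on a host, with its state kept on a host path. The image, container name, and paths are illustrative assumptions, not a Hedvig reference architecture.

```python
# Minimal sketch: one app component per container, run straight on a bare-metal
# Docker host (pip install docker). Names, image, and paths are hypothetical.
import docker

client = docker.from_env()  # talk to the local Docker daemon on this host

container = client.containers.run(
    "redis:alpine",               # example image standing in for one app component
    name="orders-cache",          # hypothetical microservice name
    detach=True,
    # Stateful data lives on a host path (in practice, a volume backed by
    # software-defined storage) rather than inside a VM's virtual disk.
    volumes={"/data/orders-cache": {"bind": "/data", "mode": "rw"}},
)

print(container.short_id, container.status)
```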

Prediction #6: No more than 30 percent of companies deploy OpenStack in production

Containers and OpenStack are not mutually exclusive. In fact, customers tell us the data center of the future will be a mix of VMs and containers, with OpenStack as an orchestration layer atop both. But containers are the faster-growing technology. A 2015 Red Hat survey found 16 percent of organizations are using OpenStack in production, less than half the container adoption rate cited above. There's good momentum as companies get serious about private clouds, and we've previously predicted that the number of production deployments will roughly double. Large enterprises will lead the way; they stand the best chance of attracting the talent needed to successfully stand up and run an OpenStack cloud.

Prediction #7: Cloud storage goes hybrid

While public cloud storage can cost as little as a few pennies per gig per month, companies with more than a petabyte of cloud storage are finding it cheaper to deploy on-premises software-defined storage clusters. The cost of the data center, infrastructure, power, and cooling can be less than public clouds. But don’t take our word for it. Read customer LKAB’s experience running Hedvig on Cisco UCS hardware. We’ll see companies use intelligent cloud gateways and software-defined storage that support hybrid cloud. Hot and warm data will remain in private clouds, while older, colder data gets migrated to public clouds. The key will be solutions that are smart enough to do this dynamic migration automatically.
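
As a conceptual sketch of that kind of automatic tiering (not Hedvig's implementation), the Python below decides placement purely by how recently a data set was accessed. The 90-day cutoff and the tier names are assumptions you'd tune per workload.

```python
# Illustrative age-based tiering policy: recently used data stays in the private
# cloud; data untouched past a threshold is flagged for migration to public cloud.
from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=90)   # assumed cutoff for "cold" data

def placement(last_access, now=None):
    """Return which tier a data set should live in, based on last access time."""
    now = now or datetime.utcnow()
    return "public-cloud" if now - last_access > COLD_AFTER else "private-cloud"

# A backup untouched for six months would be migrated automatically...
print(placement(datetime.utcnow() - timedelta(days=180)))  # -> public-cloud
# ...while last week's hot data stays on-premises.
print(placement(datetime.utcnow() - timedelta(days=7)))    # -> private-cloud
```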

Prediction #8: Enterprise Hadoop becomes a "virtual-first" deployment

Several studies, including a great benchmark from VMware, found that virtualizing Hadoop actually improves performance. In 2016, companies will take Hadoop from tire-kicking and departmental experiments to production by adopting a "virtual-first" stance. With that stance, Hadoop can run more effectively on shared storage (studies show a performance decrease of eight percent or less). This approach delivers the economics of a central data lake, leveraging snapshots, clones, deduplication, and storage tiering, all of which are needed to make Hadoop production-ready.

Finishing the year with a bang

Let’s see how right or wrong we are on these predictions in 2016. Next year we'll do a fresh set of predictions as well as assess this batch. Whatever the case, we’re looking forward to a fantastic year at Hedvig. Until then, from all of us at Hedvig, we wish you a wonderful holiday season.

Happy New Year!

Can't wait that long? No worries! Click Get Started and we'll be in touch to help modernize your storage.


Avinash Lakshman

Avinash Lakshman is the CEO & Founder of Hedvig. Before starting Hedvig, Avinash built two large distributed systems: Amazon Dynamo and Apache Cassandra. As the pioneer of NoSQL systems, Avinash is passionate about using distributed systems to disrupt a storage space that hasn't seen any real innovation over the last decade.