
Pretender or Contender? What enterprises need in distributed systems

by Avinash Lakshman on August 31, 2016

Not long ago at the IT Press Tour that Hedvig hosted (credit to Wojciech Urbanek for the photo!), I went on a bit of a riff about what exactly constitutes the type of distributed system that's needed in a modern enterprise. And my definition of these distributed systems includes aspects above and beyond the standard, well-known definition.

This comes from my experience working on two formative systems: I coauthored Dynamo at Amazon and developed Apache Cassandra at Facebook.

We’ve had distributed systems for a while now. Today there are solutions that adhere to the standard definition; let’s call these the pretenders, because they fall short of what’s truly needed in modern enterprises. And then there are those with additional scalability and self-healing attributes; let’s call these the contenders for what enterprises really need. Using this extended definition of a distributed system, I thought it’d be useful to discuss the crucial differences between contenders and pretenders in our age of modern infrastructure.

In my view, some of the current definitions of a modern distributed system are too simplistic to be useful. For example, in Chapter 1.1 of Distributed Systems: Principles and Paradigms (2nd Edition), Andrew S. Tanenbaum and Maarten Van Steen define a distributed system as "a collection of independent computers that appears to its users as a single coherent system." That's a good starting point. Wikipedia expands on this with: "Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components."

Many systems adhere to this definition, but I'll make the assertion that they're the "pretenders" that don't meet the needs of today's fastest-growing enterprises.

Pretender systems: Run across multiple compute nodes, but aren’t scalable and self-healing

Relational databases such as Oracle, IBM DB2, Informix, Sybase, and others could be considered distributed because you could install multiple instances of them and network them together. However, such a setup scales mostly vertically, and while it’s possible to scale an RDBMS across multiple servers, it’s very difficult and time consuming. We’ve seen a lot of replication and duplication techniques evolve in the database world that help with distribution, but they don’t make these relational databases a distributed system. There are still master/slave dependencies, shared components, and consistency issues.

Contender systems: Run across multiple compute nodes, but with distributed metadata, shared-nothing architectures across multiple sites, and self-healing


There are obvious examples of distributed systems, like the internet, whose technology stretches back more than 40 years. Component systems like NTP and DNS are good examples of distributed systems. Of course, by design there is no single point of failure. We’ve seen other data-sharing applications like BitTorrent that are also somewhat distributed, given their peer-to-peer nature and the performance gains as more computers come online. But for me, the modern distributed systems we see in today’s data centers are emerging applications like Elasticsearch, Hadoop, and Spark.

What makes these contenders for what enterprises need in a distributed system today? In my experience, I’ve concluded there are three crucial components of any distributed system, whether it’s storage, compute, applications, networking, or any other piece of the modern infrastructure technology stack:

  1. Distributed metadata across all instances. True, in the past there were examples of applications that could run across multiple computers (the RDBMSs already mentioned), but they didn’t share metadata. I believe distributed systems need every instance of the software to be aware of every other instance, creating a meshwork of awareness. Even more important than “seeing” and “talking” to each other, these software instances actually understand what’s on each of the others. It’s not just parallelized; it’s networked in how it works. Critical to this is a robust substrate capable of passing metadata updates reliably. I would go so far as to argue that the #1 reason many modern systems are not what enterprises need is that they lack this scalable metadata – metadata must be partitioned and replicated across the cluster (a minimal sketch of this idea follows the list). Absent that, many systems are simply scale-out systems rather than distributed systems.
  2. A shared-nothing architecture. A shared-nothing architecture is also important: nodes aren’t dependent on other nodes to complete tasks. A single node can go down without creating a single point of failure. Even if a handful of nodes (or many) fail, a truly distributed system survives, albeit perhaps at a slightly degraded level of performance. In the past, a “distributed” system perhaps had two brains (paired active-active or active-passive so one could take over). But in modern distributed systems, if we have 500 nodes, then we have 500 brains, and all of them can talk to one another. Because each brain runs its own metadata and data processes, a single failed brain (node) doesn’t take down the entire system. I also believe these enterprise-grade distributed systems stretch a shared-nothing architecture across two or more sites simultaneously. For bonus points, those sites can and should be either private data centers or public clouds.
  3. Self-healing. If (and when) something does go wrong, the system should be able to rebuild itself and return to its ideal state. This is critical: if a system suffers an outage, say a node or three go down, it can rebuild what was on the failed nodes. It’s OK. The system continues to run until it “heals” and returns to full performance. Without that rebuild capability, the system runs only until a certain number of nodes fail, and then it goes down entirely. And this self-healing element of modern distributed systems only works because of the networked nature of distributed systems in the first place. What’s especially powerful about the Hedvig Distributed Storage Platform and other true distributed systems is that they harness the collective power of all the individual nodes to do the rebuild (see the second sketch after this list). The larger the cluster, the more nodes can participate and the faster the repairs.
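To make the first point a bit more concrete, here is a minimal sketch in Python of what partitioned, replicated metadata can look like. This is purely illustrative and not Hedvig's implementation: the MetadataRing class, the node names, and the replication factor are all hypothetical. The idea is that every metadata key maps to a small set of peer nodes on a consistent-hash ring, so no single brain has to hold (or serve) everything.

```python
import bisect
import hashlib


def _hash(key: str) -> int:
    # Hash a key (or node name) to a position on the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class MetadataRing:
    """Toy consistent-hash ring: each node owns a slice of the key space,
    and every metadata key lives on `replication_factor` distinct nodes."""

    def __init__(self, nodes, replication_factor=3):
        self.replication_factor = replication_factor
        # Place every node on the ring, sorted by hash position.
        self.ring = sorted((_hash(n), n) for n in nodes)

    def replicas_for(self, key: str):
        # Walk clockwise from the key's position, collecting distinct nodes.
        start = bisect.bisect(self.ring, (_hash(key),)) % len(self.ring)
        replicas = []
        for i in range(len(self.ring)):
            _, node = self.ring[(start + i) % len(self.ring)]
            if node not in replicas:
                replicas.append(node)
            if len(replicas) == self.replication_factor:
                break
        return replicas


if __name__ == "__main__":
    ring = MetadataRing(["node-%d" % i for i in range(1, 6)])
    # Every key maps to 3 of the 5 nodes; no single node holds all the metadata.
    print(ring.replicas_for("volume-42/block-7"))
```

Because each node owns only a slice of the ring, adding nodes spreads the metadata further rather than funneling it through a central server, which is the difference between a scale-out bolt-on and a genuinely distributed (and shared-nothing) metadata layer.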
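And for the self-healing point, here is an equally simplified sketch, again with hypothetical names and data structures, of how a cluster might plan repairs after a node dies: every partition that lost a replica gets re-copied from one of its surviving holders to a new node, and the read and write work is spread across the cluster so no single node becomes the repair bottleneck.

```python
from collections import defaultdict


def repair_plan(placement, failed_node, live_nodes):
    """placement: {partition: [nodes holding a replica]}.
    Returns {source_node: [(partition, target_node), ...]} describing how the
    surviving replicas collectively rebuild what the failed node held."""
    load = defaultdict(int)   # repair work already assigned to each node
    plan = defaultdict(list)
    for partition, holders in placement.items():
        if failed_node not in holders:
            continue          # this partition is unaffected by the failure
        survivors = [n for n in holders if n != failed_node and n in live_nodes]
        candidates = [n for n in live_nodes if n not in holders]
        if not survivors or not candidates:
            continue          # nothing to copy from, or nowhere to put it
        source = min(survivors, key=lambda n: load[n])   # spread the read work
        target = min(candidates, key=lambda n: load[n])  # spread the write work
        load[source] += 1
        load[target] += 1
        plan[source].append((partition, target))
    return dict(plan)


# Toy example: five nodes, three partitions replicated 3x, node-2 dies.
placement = {
    "p1": ["node-1", "node-2", "node-3"],
    "p2": ["node-2", "node-4", "node-5"],
    "p3": ["node-3", "node-4", "node-5"],   # untouched by the failure
}
print(repair_plan(placement, "node-2", ["node-1", "node-3", "node-4", "node-5"]))
```

In the toy example, node-1 rebuilds p1 onto node-4 while node-5 rebuilds p2 onto node-3, so the repairs proceed in parallel rather than through one coordinator; that collective-rebuild effect is why larger clusters heal faster.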

What enterprises need in a modern distributed system has evolved. There are clear differences between what I call the “pretender” distributed systems and the “contender” systems, of which Hedvig is a proud member. When I was at Amazon working on Dynamo, which is arguably the genesis of the whole NoSQL movement, we had to overcome the inherent limitations of SQL systems because they were not truly distributed. Dynamo helped give birth to the truly distributed system as applied to a database. And, now, at Hedvig we’ve carried that even further to enterprise storage. It’s an exciting place to be!

If you're technically inclined, watch this great series of whiteboard videos from Storage Field Day 10 that illustrate these concepts.


Avinash Lakshman

Avinash Lakshman is the CEO & Founder of Hedvig. Before starting Hedvig, Avinash built two large distributed systems: Amazon Dynamo and Apache Cassandra. As the pioneer of NoSQL systems, Avinash is passionate about using distributed systems to disrupt a storage space that hasn't seen any real innovation over the last decade.