
Why Hyperconverged Architectures Win

[fa icon="long-arrow-left"] Back to all posts

[fa icon="pencil'] Posted by Lewan Solutions [fa icon="calendar"] May 5, 2015

Much has been made recently by the likes of Nutanix, SimpliVity, Atlantis, and even VMware (vSAN, EVO|RAIL) about the benefits of hyper-converged architecture.

I thought I'd take a few moments and weigh in on why I think that these architectures will eventually win in the virtualized datacenter.

First, I encourage you to read my earlier blogs on the evolution of storage technology. From that background I'll make two statements: 1) physical storage hardware has not fundamentally changed, and 2) what differentiates one storage array vendor from another is not the hardware but the software their arrays run.

Bold statements, I know, but bear with me for a moment and let's agree that spinning disk is no longer evolving, and that all storage array vendors are basically using the same parts - x86 processors; Seagate, Fujitsu, or Western Digital hard disks; and Intel, Micron, SanDisk, or Samsung flash. What makes them unique is the way they put the parts together and the software that makes it all work.

This is most easily seen in the many storage companies whose physical product is really a Supermicro chassis (an x86 server) with a mix of components inside. We've seen this with Whiptail (Cisco), LeftHand (HP), Compellent (Dell), Nutanix, and many others. The power of this is evidenced by the fact that the first three were purchased by major server vendors and then transitioned to their own hardware. Why was this possible? Because the product was really about the software running on the servers and not about the hardware (servers, disks) itself.

Now, let's consider the economics of storage in the datacenter. The cheapest disks, and thus the cheapest storage, in the datacenter are those that go inside the servers. It's often a factor of 5-10x less expensive to put a given disk into a server than it is to put it into a dedicated storage array. This is because of the additional qualification, and in some cases custom firmware, applied to drives certified for the arrays, and the subsequent reduction in volume that comes from a given drive model only being deployed in a single vendor's gear. The net result is that drives for the arrays carry a premium price.
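To put a rough number on that multiplier, here is a quick back-of-the-envelope sketch in Python. The $150 base price and the 24-drive shelf are purely hypothetical assumptions for illustration; only the 5-10x factor comes from the paragraph above.

# Hypothetical illustration: only the 5-10x premium comes from the text above;
# the $150 base price and the 24-drive shelf are assumptions.
server_drive_cost = 150   # commodity drive dropped into a server
drives_per_shelf = 24     # drives in one hypothetical shelf

low, high = server_drive_cost * 5, server_drive_cost * 10
print(f"Per drive: ${server_drive_cost} in a server vs ${low}-${high} array-certified")
print(f"Per shelf: ${server_drive_cost * drives_per_shelf:,} vs ${low * drives_per_shelf:,}-${high * drives_per_shelf:,}")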

So, storage is about software, and hard disks in servers are cheaper. It makes sense, then, to bring the two together. We see this in products like VMware vSAN and Atlantis USX, which let you choose your own hardware and then add software to create storage.

The problem with a roll-your-own storage solution is that the burden of validating the configuration and components falls on you. Will it scale? Do you have the right controllers? Drivers? Firmware? What about the ratios of CPU, memory, disk, and flash? And of course there is the question of support if it doesn't all work together. If you want the flexibility to custom configure, the option is there. But it can be simpler if you want it to be.
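For a sense of what a validated configuration spares you from checking, here is a minimal sketch of the kind of ratio sanity tests involved. Every threshold and component figure below is a hypothetical assumption, not any vendor's actual compatibility rules.

# Sketch of the kind of ratio checks a vendor bakes into a validated configuration.
# All thresholds here are hypothetical assumptions, not real vendor rules.
def sanity_check(cpu_cores, ram_gb, hdd_tb, flash_tb):
    issues = []
    if ram_gb / cpu_cores < 4:
        issues.append("less than 4 GB of RAM per core leaves little headroom for the storage software")
    if flash_tb / hdd_tb < 0.10:
        issues.append("flash tier under 10% of raw capacity; caching will likely suffer")
    return issues or ["ratios look balanced"]

# A node with 16 cores, 128 GB RAM, 20 TB of disk, and 1.6 TB of flash
print(sanity_check(cpu_cores=16, ram_gb=128, hdd_tb=20, flash_tb=1.6))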

So here enters the hyper-converged appliance. The idea is that vendors combine commodity hardware in validated configurations with software to produce an integrated solution with a single point of contact for support. If a brick provides 5TB of capacity and you need 15TB, buy three bricks. Need more later? Add another brick. It's like Legos for your datacenter: just snap the bricks together.
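A quick sketch of that "snap together bricks" sizing math, using the 5TB-per-brick figure from the example above (real appliance capacities vary):

import math

def bricks_needed(required_tb, brick_capacity_tb=5):
    # How many identical bricks does it take to cover the required capacity?
    return math.ceil(required_tb / brick_capacity_tb)

print(bricks_needed(15))  # 3 bricks, as in the example above
print(bricks_needed(22))  # 5 bricks; growing later just means adding another brick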

This approach removes the need to independently size RAM, Disk, and CPU; it also removes the independent knowledge domains for storage and compute. It leverages the economy of scale of server components and provides the "easy button" for your server architecture, simplifying the install, configuration, and management of your infrastructure.

Software also has the ability to evolve very rapidly. Updates to existing deployments do not require new hardware.

To date, though, the economics of hyper-converged appliances have fallen short of delivering on that price point. While they use inexpensive commodity hardware, the software has been priced at a premium.

The potential is there, but the software has been sold in low volumes, with vendors emphasizing OpEx savings to justify the price. As competition in this space heats up and volumes grow, the software companies will be willing to sell for less and price points will come down.

This will drive down costs, eventually making legacy architectures cost prohibitive due to their use of proprietary (and thus low-volume) components. Traditional storage vendors whose products are built on commodity components will be more competitive, but being "just storage" will make their solutions more complicated to deploy, scale, and maintain. The more proprietary the hardware, the lower the volume and the higher the cost.

For these reasons - cost, complexity, and the ability of software to evolve - we will see hyper-converged, building block architectures eventually take over the datacenter. The change is upon us.

Are you ready to join the next wave? Reach out to your Lewan account executive and ask about next generation datacenters today. We're ready to help.
