Are All Flash Arrays ready for the Enterprise?

When flash exploded onto the scene, our industry gushed over its speed, which makes perfect sense considering that we’re talking about a medium that is potentially 10,000 times faster than its predecessor. With all that speed come only two potential drawbacks:

  1. The write endurance of the cells
  2. Cost

Every vendor out there has been working to address both points. Protecting the DWPD (Drive Writes Per Day) rating of the SSD to keep cells from being hammered too hard is a consistent outcome across vendors, even if each has its own unique way of getting there. Up until now, cost has been addressed through creative discounting and really aggressive ratios. Funny thing: the bulk of all capacity savings in the industry today still comes from Thin Provisioning, which isn’t really measured by a ratio. Still, the promises of cost-reducing ratios happen daily: “Need 100 TB, Mr. Customer? We’ll sell you 20 TB and guarantee you 5:1 data reduction!”
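To make that pitch concrete, here’s a quick back-of-the-envelope sketch; the drive size, DWPD rating, warranty length, and the 5:1 ratio are illustrative numbers of my own, not any vendor’s spec sheet:

```python
# Back-of-the-envelope math for SSD endurance and data-reduction pitches.
# All figures below are illustrative assumptions, not vendor specs.

def total_writes_tb(capacity_tb: float, dwpd: float, warranty_years: int) -> float:
    """Total data (TB) a drive is rated to absorb over its warranty."""
    return capacity_tb * dwpd * 365 * warranty_years

def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Usable capacity implied by a data-reduction guarantee."""
    return raw_tb * reduction_ratio

# A hypothetical 3.84 TB drive rated at 1 DWPD over a 7-year warranty:
print(total_writes_tb(3.84, 1, 7))    # ~9811 TB written before wear-out

# The classic sales pitch: sell 20 TB raw, promise 5:1 reduction.
print(effective_capacity_tb(20, 5))   # 100 TB "effective" -- if the ratio holds
```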

But do these conversations matter anymore? 

We’ve pretty much nailed wear endurance, and now that $/GB (dollars per gigabyte) is organically way down (thanks to the densities of the drives), do we need to continue talking about this?

Think about our own marketing now.

We’re not alone. More vendors are offering higher-density flash drives at prices that are as economically feasible as spinning disk, and that’s without data reduction ratios. You don’t have to search Google very long to find a couple of vendors who can deliver the latest and greatest flash technologies without worrying about ratios or wear endurance.

Okay so now that’s over with... What’s the next step? Where do flash arrays go from here?

We’ve reached that point where the two most salient drawbacks of flash are now off the table. Now it’s time to focus on the enterprise and ask ourselves what is required of flash arrays to take it over in full.

Historically, three things were always required to be considered a Tier-1 storage platform:

  1. Performance
  2. Resiliency
  3. Replication

If you’re an enterprise customer, you need the lowest latencies because you simply run the most diverse workloads. And because you measure seconds of interruption in tens of millions of dollars, NO outage is ever tolerated. By extension, you need to make sure your data is secured off-site to protect it from catastrophic outages.

So those three simple bullets make a ton of sense, but where do flash arrays stand on them? Performance? CHECK! Resiliency? Hmmmm… Are flash arrays more resilient than their predecessors? SSDs beat spinning disk on MTBF and overall failure rates; no argument there. But are the arrays themselves more resilient? The answer is NO. Two-node arrays are simply not enterprise. They can never be. BUT that doesn’t mean there isn’t more than one flash array that scales out to offer greater resiliency. In that way flash is no different from the world of spinning disk arrays, where there have really only ever been three or four Tier-1 platforms. So on that basis, let’s give flash a “CHECK” here.
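As a side note on why two controllers are such a cliff, here’s a minimal sketch that assumes load is spread evenly across controllers (a simplification of mine, not a vendor benchmark):

```python
# Why controller count matters for resiliency: the performance hit from
# losing one controller shrinks as the array scales out. This assumes an
# evenly balanced load across controllers.

def surviving_performance_pct(total_nodes: int, failed_nodes: int = 1) -> float:
    """Fraction of aggregate controller horsepower left after a failure."""
    return 100 * (total_nodes - failed_nodes) / total_nodes

for nodes in (2, 4, 8):
    print(f"{nodes}-node array after one controller failure: "
          f"{surviving_performance_pct(nodes):.1f}% of performance remains")
# 2-node: 50.0% -- every failure is a cliff
# 4-node: 75.0%
# 8-node: 87.5%
```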

That leaves us with Replication. Uh oh.

The picture is clear: flash as a medium may be ready for the enterprise, BUT not every array is.

Just because most people would never invest their business in a limited 2-node array unless forced to by budget constraints doesn’t mean I’m saying those arrays are useless. Not at all. What I’m saying is that there is a MAJOR difference between a flash array and an enterprise flash array. (Not a revolutionary, groundbreaking statement.)

More than ratios

The flash array conversation has centered on ratios since its inception, but a compelling ratio isn’t enough anymore. (Quite frankly, it never was.) How do ratios help you when you need to replicate your data? Or federate your data? Do they get you better integration with VMware? Or deliver a quality of service that lets you set thresholds on latency, where they’re really important? And what do you tell your business unit when you can’t protect your data? “Sorry we weren’t protected from a site failure, but hey, we got 3:1!”

The fact is the industry now has 3.84 TB SSDs, and sizes are still growing. Each jump in density further commoditizes the SSD and, by extension, renders the ratio conversation a thing of the past.
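Here’s the arithmetic behind that claim. The per-drive prices are invented purely for illustration; the point is the trend, not the exact dollars:

```python
# As drive densities climb while per-drive prices stay in the same
# ballpark, raw $/GB falls on its own -- no reduction ratio required.
# Prices below are invented placeholders, not real street prices.

drives = [
    ("400 GB",   400, 2000),   # (label, capacity in GB, price in USD)
    ("1.92 TB", 1920, 2500),
    ("3.84 TB", 3840, 3000),
]

for label, capacity_gb, price_usd in drives:
    print(f"{label:>8} SSD: ${price_usd / capacity_gb:.2f}/GB raw")
# 400 GB:  $5.00/GB
# 1.92 TB: $1.30/GB
# 3.84 TB: $0.78/GB
```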

Because not all vendor architectures can deliver on these larger sizes or newer technologies, those vendors continue to hang on to the old ones, with played-out arguments that focus on “eMLC vs. MLC” and cling to the conversation of ratios in an attempt to dazzle you with something that is no longer relevant. The sad thing is, those conversations expose exactly what’s wrong with their architectures, and they don’t even know it. Consider:

  1. Every vendor offering these larger, newer drives is also backing them with 7-year warranties. So what’s wrong with an architecture that can’t implement them and is still clinging to eMLC? How long will we let them sell this inefficiency as a feature?
  2. For those vendors who do offer MLC and 3D NAND, what’s taking them so long to get to the larger sizes? An even better question: why can a vendor add a 1.92 TB SSD and still not increase its overall raw capacity or addressable effective capacity? What’s wrong with that architecture that the totals don’t increase with the additions? (See the sketch after this list.)
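Here’s a minimal sketch of the ceiling described in point 2. The slot count and addressable-capacity limit are hypothetical numbers, not any specific vendor’s published maximums:

```python
# A hypothetical architecture with a fixed addressable-capacity ceiling:
# once the ceiling is hit, swapping in denser drives adds nothing.

MAX_ADDRESSABLE_TB = 240   # hypothetical per-array ceiling
DRIVE_SLOTS = 120          # hypothetical slot count

def usable_raw_tb(drive_tb: float) -> float:
    """Raw capacity actually reachable, capped by the architecture."""
    return min(DRIVE_SLOTS * drive_tb, MAX_ADDRESSABLE_TB)

print(usable_raw_tb(1.92))   # 230.4 TB -- below the ceiling, density helps
print(usable_raw_tb(3.84))   # 240 TB   -- ceiling hit; denser drives are wasted
```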

More importantly, if those architectures can’t address the ever-changing world of SSD technology and densities, then how on Earth can they expand their arrays to actually deliver Tier-1 functionality?

The HPE advantage

We definitely paved the way for this conversation. We have been consistently ahead of the rest of the industry when it comes to adopting new, denser, and more cost-effective SSD technologies:

  • We were 12-18 months ahead of the other storage vendors with 20% more capacity from the same sizes, offering 480GB & 920GB drives.
  • We were 6-18 months ahead of the other storage vendors with 1.92TB SSD.
  • We were 6-12 months ahead of everyone else with the 3.84TB SSD.

That aside, if a picture is worth a thousand words, then the picture that defines our native replication offering is worth ten times that much.

I’m officially putting the world on notice: HPE 3PAR StoreServ is the only true Tier-1 Enterprise All-Flash array.

Architecture matters, and ours has consistently given us the ability to be out in front of the competition. Having the most robust replication suite isn’t enough for us. Neither is having the fastest performance or, inarguably, the greatest level of resiliency. Unlike other arrays, we prioritize resiliency at every level, and then we take it a step further by delivering an optimized backup model that is still decades away for most other companies.

And we’re not done there. Our architecture is designed to handle multiple workloads effectively, eliminating the need to focus on one single workload as you would with a limited 2-node array. And we took it a step further: we offer the most robust QoS (Quality of Service) in the industry today:

The only QoS that offers latency as a threshold. 
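To illustrate the concept only (this is a hypothetical sketch of mine, not 3PAR Priority Optimization code), here’s roughly what a latency-threshold QoS loop looks like: when a priority workload’s observed latency crosses its ceiling, the array throttles lower-priority workloads until latency recovers.

```python
# Hypothetical latency-threshold QoS: throttle lower-priority workloads'
# IOPS cap whenever the priority tier's latency exceeds its ceiling.

LATENCY_CEILING_MS = 1.0   # hypothetical threshold for the priority tier
THROTTLE_STEP = 0.9        # shrink lower tiers' IOPS cap 10% per violation

def enforce_latency_goal(observed_latency_ms: float, lower_tier_iops_cap: int) -> int:
    """Return the new IOPS cap for lower-priority workloads."""
    if observed_latency_ms > LATENCY_CEILING_MS:
        return int(lower_tier_iops_cap * THROTTLE_STEP)  # clamp the neighbors
    return lower_tier_iops_cap                           # latency is healthy

cap = 50_000
for latency_ms in (0.6, 1.4, 1.8, 0.9):   # simulated latency samples
    cap = enforce_latency_goal(latency_ms, cap)
    print(f"latency {latency_ms} ms -> lower-tier IOPS cap {cap}")
```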

You’re welcome and have a wonderful and happy holiday!  Please remember to be safe and come 2016, as you’re ready to take the next steps with storage, keep in mind what being Tier-1 really means.

 - @StorageMuscle

