The All Flash Array - to Buy or Not to Buy

All flash storage arrays are all the rage. There are numerous relatively new entrants into the storage market seeking to upend the big-player status quo. All of my customers are being approached, and they’re asking their own questions. They’re planning ahead and trying to decide if they should go all flash. “Can I afford it?” “Is spinning disk dead?” “Do they have all the features I want?” “Will the vendor be here in three years?” And on and on. They all want to know if jumping into an all flash array is going to be a good investment. They’re hearing the siren song, but these are real dollars and they don’t want to make a bad bet.

In a discussion I first hit on Chad’s Blog regarding vendor negativity and FUD – which is in itself an interesting discussion, go read it – I read Mark Burgess' comment:

As you say in your post most non-enterprise customers want a “Swiss Army knife” of a product (i.e. VNX or FAS) and surely we are approaching a point whereby their flash capabilities are good enough for most organisations.

In the medium term does the all-flash array actually make sense?

The HDD is going to be with us for some time, and even when it does disappear there will still be different tiers of Solid State storage (i.e. performance and capacity optimised), so all arrays will need to continue to handle multiple drive tiers.

This applies equally to Pure as well so it would be good to get Vaughn’s comments on this.

I even wrote a blog on the subject at http://blog.snsltd.co.uk/does-the-all-flash-array-really-make-sense/

This comment and Mark’s more detailed post hit on what I’m seeing “in the field” with my customers here in LA. His argument is probably closest to my own current thinking on the subject: for most customer workloads, the performance requirements don’t demand flash. Those that do are a small subset, so as Mark says, “a small amount of flash makes sense.” In other words, for a lot of customers, a hybrid approach with flash and spinning disk is going to do the job.

As always, understand your workload!

The reality, of course, is that many customers have a hard time characterizing their workloads. The price of flash continues to plunge, so there’s a real sense that throwing low-latency, high-IOPS flash at the workload will end all performance concerns. Enterprise vendors are ideally looking for customers making a three-year investment. Customers, on the other hand, would more often like to squeeze five years out of their enterprise arrays. They don’t want to make a choice that paints them into a corner down the road.

Most of my customers are more capacity-bound than performance-bound. Of course they all have performance challenges, but for the most part they are dealing with capacity growth, so the $-per-GB metric matters more to them than $-per-IOPS and guaranteed low latency. The numbers skew in favor of spinning media, with SATA capacity disk now hitting the 6 TB mark in enterprise arrays. Flash will never approach the price of disk on a capacity basis.
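The $-per-GB versus $-per-IOPS trade-off is easy to sanity-check with back-of-the-envelope numbers. The sketch below uses purely illustrative prices and IOPS figures (not quotes from any vendor) to show why capacity-bound customers and performance-bound customers land on opposite sides of the flash question:

```python
# Back-of-the-envelope $/GB vs $/IOPS comparison.
# All prices and IOPS figures below are illustrative assumptions,
# not real vendor quotes -- substitute your own numbers.

def dollars_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

def dollars_per_iops(price_usd, iops):
    return price_usd / iops

# Assumed figures: a 6 TB 7.2K SATA capacity drive vs a 1 TB enterprise SSD.
sata_6tb  = {"price": 400.0,  "capacity_gb": 6000, "iops": 100}
flash_1tb = {"price": 2000.0, "capacity_gb": 1000, "iops": 50000}

for name, d in (("SATA 6TB", sata_6tb), ("Flash 1TB", flash_1tb)):
    print(f"{name}: ${dollars_per_gb(d['price'], d['capacity_gb']):.3f}/GB, "
          f"${dollars_per_iops(d['price'], d['iops']):.4f}/IOPS")
```

With these assumed numbers, disk wins $/GB by more than an order of magnitude while flash wins $/IOPS by several orders of magnitude, which is exactly why workload shape, not a single metric, drives the buying decision.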

Or will it? Wikibon makes the argument that on a raw cost basis we’re already at parity, and that prices will continue to plummet for storage as a whole. They argue that in six years’ time, data center storage costs will be 40 times lower than today! It’s a long, detailed piece of research and worth the read. They make some interesting points:
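It’s worth pausing on how aggressive “40 times lower in six years” actually is. The arithmetic below works out the sustained annual price decline that claim implies; it uses only the figures stated above:

```python
# Sanity-check the annual price decline implied by Wikibon's claim
# that storage costs will be 40x lower in six years.
# Pure arithmetic -- no data beyond the figures quoted in the post.

def implied_annual_decline(total_factor, years):
    """Annual fractional price drop needed for costs to fall by
    `total_factor` over `years` years: 1 - (1/total_factor)^(1/years)."""
    return 1 - (1 / total_factor) ** (1 / years)

rate = implied_annual_decline(40, 6)
print(f"Prices would need to fall about {rate:.1%} every year.")  # ~45.9%
```

A sustained ~46% per-year decline is well beyond historical HDD cost curves, which is part of why the claim invites skepticism.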

The attempts to mitigate the mechanical problems have been significant, but the impact of disk storage limitations on application design has also been stark. As Wikibon has pointed out in previous research, applications have been written and designed with small working sets that fit into small buffers. Write rates have been suppressed to meet the write buffer constraints. The number of database calls within a business transaction have been severely limited by the high variance of disk-based storage; this is necessary in order to reduce operational and application maintenance complexity and cost. Large applications are still designed as a series of modules with loosely coupled databases. Data warehouses and analytics are separated from operational transaction systems.

To paraphrase their overall argument a bit: the cost of flash and its capabilities will upend application architectures that have largely been designed around the inherent limitations of spinning media. Therefore the all-flash data center is just around the corner.

Does this sound a little like predictions of the death of tape and the mainframe?

Even the smallest of organizations are seeing massive growth in data capacity. Most of that data is idle, and it’s hard to see how buying flash to hold idle data will make economic sense. When you dig into the workloads of an SMB or mid-size enterprise customer, there’s a subset of really active data. Perhaps moving some or all of their VMware datastores to all flash right now makes sense. Or they have key databases that can benefit from throwing higher-performing hardware at the problem. But again, this gets those customers into a hybrid environment rather than a true “all-flash” data center.

Similarly, relatively large environments may have more active workload data that can and should live on flash alone today. As Wikibon points out, 10K and 15K SAS shipments have flattened. But the customers I am talking to still see flash only as a go-forward strategy for workloads that would have landed on those disks. They don’t see their entire data center going all flash any time soon. And keeping the features and environment they are comfortable with managing beats adding a new, separate storage silo to manage – even if the management of the next-generation array is arguably “easier”.

So what I see is some customers adding cache tiers to their existing storage environment. Some are adding flash shelves or separate controllers for very specific, typically new, applications such as VDI that are tailor-made for flash. Some may still find it less expensive to use legacy 10K media for a few more years, or even to mix flash and capacity SATA. This may be an entirely different discussion toward the end of 2015, but customers making decisions right now are finding benefits from mixing some flash storage into a larger environment dominated by spinning media.

One thing I do wonder is whether more research will go into power-saving technologies for capacity SATA disks. We don’t hear much about spin-down as a power-saving measure in enterprise arrays, for example. If engineering and development go into a capacity tier for flash that is automatically managed by the array, then perhaps Wikibon is right and six years down the road almost no one will be buying disks for data center storage. My customers and I will be watching with interest.

  • Hu Yoshida of HDS wrote back in July of 2014 that he thought the AFA market will eventually disappear in favor of hybrids.
  • Storage Swiss took issue with Hu’s conclusions. The article also speculates that there may eventually be tiers of flash.
  • Chris Evans is compiling an independent comparison of all of the AFAs he can find.
