IBM’s Software Defined approach helps organisations get the most value from data with something old and something new.
Flashing the cache?
A colleague told me that he had a conversation with a very large customer not too long ago, wherein they said that they were considering only buying flash arrays in the future. This particular client’s claims were that flash is so much more efficient that it pays for itself over time compared to other solutions. He also stated that if he architected to the highest need, the lowest would be taken care of just as well. Finally, he claimed that flash has recently become so much more competitive in price vs. spinning disk that cost shouldn’t be a factor. I smiled to myself and thought about how nice it would be to have a customer like this with an “unlimited” budget and the ability to over-architect without concern, but it did encourage me to do some research to substantiate these claims – or validate my scepticism.
The reality is that the claims pointed in the right direction but, unless your workload is heavily skewed towards performance rather than capacity, or your total data footprint is small, they are not entirely factual. Flash is very fast, and this customer was correct that flash could be used for nearly any workload, including archival, but it makes no sense at all to use it that way – only a very small percentage of enterprise data is performance-sensitive. Finally, when flash is typically 5x the cost of disk, and disk 5-10x the cost of tape for capacity (although much cheaper per IOPS, remember), the economics of flash simply don’t justify its use as the sole storage medium in a data centre.
Restoring some balance
Let’s step back a bit and examine the realities of all-flash arrays (AFAs). AFAs are quite a bit more efficient in power, cooling, and noise requirements simply because they are electrical rather than mechanical. This allows AFAs to be higher-density and less expensive to run. Flash is also ridiculously fast at handling transactional data – pushing into millions of transactions per second at incredibly low latency. That headroom means traditional array features like snapshots, deduplication, compression, replication, and encryption have a near-zero performance impact. And finally, when combined with software-defined storage (SDS) capabilities like IBM Spectrum, AFAs can participate in transparent tiering across storage types and platforms – something that used to be solely an in-array capability.
The net result is that the AFA has matured into a vital component of a well-designed data centre. Flash very clearly sits in the tier-0 segment of data centre design (high-performance, 2-10% of workloads). Flash plays a significant role in transactional workload acceleration, social media mining, security, real-time analytics, and early-stage growth technologies like AI and robotic process automation. However, flash just doesn’t make financial or workload sense for static services like file and archival, which comprise the overwhelming majority of stored data (in some cases up to 95% of the enterprise data load). In all fairness, some AFA vendors have added deduplication to their flash arrays to emulate archival capabilities and reduce the apparent cost per usable GB, but in the end it’s an interesting choice of expenditure: imagine needing a car for grocery shopping, so you buy a racing car with a trailer attached rather than an SUV or minivan. Just because you can doesn’t mean you should.
Could IBM have the answer?
Believe it or not, while most major storage manufacturers are “killing off” spinning disk, tape, of all things, is making something of a comeback. The idea of ridiculously inexpensive near-line storage has resurrected a product most predicted would be dead by now. IBM is one of the last remaining providers of physical tape libraries, and for good reason: flash costs roughly £2,000 per TB, while tape systems used in conjunction with the IBM Spectrum Archive platform come in at less than £45 per TB. That’s a massive difference in cost between tape, spinning disk, and flash – significant when you consider that 80-90% of data generated by enterprise workloads is never touched after its first 90 days of life. It’s important to note that much of this stored data has very high value to the business for compliance and, increasingly, data mining and analytics, but it simply doesn’t have the performance or even access requirements to justify placing it on disk.
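To make the economics concrete, here is a quick back-of-the-envelope calculation using the indicative per-TB prices above. The 100 TB estate size and the 85% cold-data split are illustrative assumptions (the mid-point of the 80-90% figure), not numbers from a real deployment:

```python
# Back-of-the-envelope tiering economics using the indicative prices above.
# The estate size and cold-data fraction are illustrative assumptions.

FLASH_COST_PER_TB = 2000   # £/TB, indicative flash price
TAPE_COST_PER_TB = 45      # £/TB, indicative tape (Spectrum Archive) price

total_tb = 100             # hypothetical estate size
cold_fraction = 0.85       # mid-point of the "80-90% untouched" range

all_flash = total_tb * FLASH_COST_PER_TB
tiered = (total_tb * (1 - cold_fraction) * FLASH_COST_PER_TB
          + total_tb * cold_fraction * TAPE_COST_PER_TB)

print(f"All-flash estate: £{all_flash:,.0f}")
print(f"Flash + tape:     £{tiered:,.0f}")
print(f"Saving:           £{all_flash - tiered:,.0f}")
```

Even on these rough assumptions, moving the cold majority of data to tape cuts the storage bill by more than 80% – money that can be redirected into the performance tier.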
The real key here is that AFAs provide the most value when used in a multi-tiered storage subsystem linked by a cross-platform SDS layer that shifts data according to business value and processing needs. Sure, an array with both spinning disk and flash in different trays can handle the needs of small businesses and tightly controlled workloads, but modern enterprise workloads require dedicated, purpose-built flash where the entire system is designed to handle massive concurrent random workloads with minimal latency. The real challenge is how to pay for it. Maybe it’s time to reconsider tape as an inexpensive way to store that majority of static data as part of an overall storage architecture plan – and re-invest the savings in focused, high-performance flash capability.
Want to know more?
If this article has piqued your interest, why not visit our dedicated page on IBM Flash Tape, where you can read more on how to get the best of both: http://www.flashtapestorage.co.uk/ – or, if you prefer, drop me a line at email@example.com.