Start with the Workload

August 11, 2014

My colleague Matt Watts offered up a provocative video blog post back in May entitled “The demise of tier 1, throwing ‘flash grenades’ at Fortress VMAX”, and an even more incendiary one more recently. Judging by the comments on that one, it touched a nerve with several ex-NetApp employees now working for small flash startups, who felt Matt was overly partisan and dismissive of their support and scale credentials. Actually I found it incendiary as well, but only in a Lynne Truss kind of way – ‘whom’ not ‘who’, dear boy. Congratulations to Matt, though, for following the blogging ethos of the great marketer Seth Godin – make it controversial or funny, both if possible.

Death to Tier 1?

Anyway, concentrating on the first post, Matt posits that flash will kill tier 1. While I agree with the direction he is taking, I have a slightly tangential view (well I would, wouldn’t I) of the ‘demise of tier 1’ statement. Personally I don’t see tier 1 actually dying off. It will certainly evolve; its economic footprint will diminish and its constituents will change. But tier 1 will continue to exist, and it will predominantly mean a combination of hybrid and All Flash Arrays.

Flash will indeed be one of the protagonists challenging the over-investment and over-provisioning in tier 1 that we see all the time, but so too will cloud. There is a seemingly inexorable headlong rush into cloud computing. Most IT departments have so far eschewed moving tier 1 applications wholesale into a cloud paradigm, but that is certainly the direction of travel. Moving processing from on-premises data centers into private clouds run by cloud service providers, or public clouds run by the hyperscalers, will certainly change the tier 1 conversation.

Start with the Workload

But maybe we are looking at this the wrong way around. I think we need to look at it from a top-down perspective. We need to start with the workload.

I wrote a simplistic post two years ago in which I tried to make the case that the choice of storage was the most important decision the CIO would ever make. I wholeheartedly apologize for that. I no longer think it. Sheer, partisan cant. Of course there are all sorts of very important decisions that a CIO has to take, and choosing the enterprise’s storage platform is unlikely to be the most important of them. My point back then was that most IT departments were moving from a siloed approach to application delivery to one founded on a shared virtualized architecture. The storage platform was therefore becoming more critical, with capabilities such as scalability, multi-tenancy and non-disruptive operations becoming necessities.

Today the discussion is more likely to start with a consideration of ‘workloads’. Only when the CIO and his or her lieutenants have fully understood the types of workloads and the expectations the business has for processing them can they begin to design, plan and implement to meet the service level objectives. Only when they have a clear view of that can they hope to decide whether workloads should be on-premises, near-the-cloud or in-the-cloud. Understanding the capacity vs latency dynamic is just the start. The data associated with a workload has a lifecycle and may need to move across tiers based on its value; tier one data today may not be tier one data tomorrow. Flash offers unparalleled low latency, but it comes at a significant cost. High-capacity spinning disk offers affordable scalability, but with a relative performance hit.

For some workloads there is a very obvious answer. Take finance and banking, for example. Swaps and derivatives processing, arbitrage trading or complex financial transacting in the securities markets are all good candidates for All-Flash storage: latency has to be as low as possible to take advantage of market movements, counterparty settlement windows and currency shifts. For other workloads it isn’t as clear cut. Database applications, for instance. High IOPS is clearly important for real-time transacting, but does all the data need tier one storage? Probably not. One global banking organization estimated it had over 3,000 Oracle instances, yet no more than 10% of them could be described as ‘business critical’ and needed the maximum IOPS and lowest latency. It was running all those instances across multiple so-called “tier 1 storage platforms”. In that situation, why would you provision such highly performant systems to cover the 90% of applications that were not deemed business critical? Surely a cheaper approach would serve them better.

In our experience All Flash Arrays are ideal for a highly important but limited set of workloads – for customers who want the lowest latency and highest IOPS possible. That’s where the NetApp EF550 is finding a lot of acceptance. For the vast majority of customers, however, it is the marriage of sophisticated data management with highly performant all-flash that delivers the best of both worlds. This is where the NetApp All Flash FAS comes in, with the #1 storage OS, Data ONTAP, and its rich data management features such as compression, deduplication, thin provisioning, snapshots and cloning. If capacity is an issue, a hybrid of SSD (flash) and HDD (spinning disk), combining the two media in either the E-Series or FAS systems, is a popular option as well.

The point is, if you start with the workload and understand the service level objectives for processing the application, together with the requirements for data stewardship and governance, then the discussion of tier 1 takes care of itself. And the whole market hype around flash becomes a rational discussion about the role flash plays in an overall data center architecture, rather than a bandwagon-jumping exercise.
