Avoiding the Doping Scandal in Storage Performance
For those who don’t know, my background is in Mathematics & Physics which, as a wise man once pointed out to me, is why I have OCD tendencies around numbers.
I like precision; I don’t like estimates or guesstimates, and I’m not a big fan of vendor spreadsheets that show how their technology will reduce your Capex or Opex and provide virtually immediate ROI, because we all know there are so many variables that they cannot possibly be particularly accurate.
If I followed these models to their logical conclusion, I could go in ever-decreasing circles until I had ultimate performance, at little cost, with no footprint, and it paid for itself before I’d bought it. Hooray for that!
Back in my precise world it’s important that we know what is realistically achievable, and more importantly what is achievable in specific environments with specific applications. One thing we have learned is that whilst all storage technology may look similar from the outside, it doesn’t always perform in a similar manner. One question I’m asked repeatedly is how to decide between vendor technologies and what the optimal solution is for customers.
The answer is not simple. There are many variables that can affect the performance of any storage environment, which is why, for a specific workload, one solution will work better than others against specific criteria. When sizing storage solutions we need to look at a multitude of variables:
- Performance requirements in terms of IOPS, Latency & Bandwidth
- Read / Write ratios
- Application usage
- Block size in use
- Typical file sizes
- Whether compression is applicable, and how well data may compress
- Deduplication and how well data can be deduplicated
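To make the variables above concrete, here is a minimal sketch of how they combine into a rough sizing calculation. Every figure (the IOPS target, read ratio, block size, and data-reduction ratios) is an illustrative assumption, not a measurement from any real environment:

```python
# Hypothetical workload profile -- all figures are illustrative
# assumptions, not measurements from any real environment.
iops_required = 50_000   # peak IOPS the application needs
read_ratio = 0.7         # read/write ratio: 70% reads, 30% writes
block_size_kib = 8       # typical block size in use for this workload

# Bandwidth implied by the IOPS target and block size (MiB/s):
bandwidth_mib_s = iops_required * block_size_kib / 1024

read_iops = iops_required * read_ratio
write_iops = iops_required - read_iops

# Physical capacity after (assumed) data reduction:
logical_tib = 100        # capacity the application asks for
compression_ratio = 1.5  # assumed: data compresses 1.5:1
dedup_ratio = 2.0        # assumed: data deduplicates 2:1
physical_tib = logical_tib / (compression_ratio * dedup_ratio)

print(f"Bandwidth: {bandwidth_mib_s:.0f} MiB/s")
print(f"Read/write IOPS: {read_iops:.0f} / {write_iops:.0f}")
print(f"Physical capacity needed: {physical_tib:.1f} TiB")
```

The point of the sketch is that these numbers interlock: change the block size or the data-reduction assumptions and the bandwidth and capacity requirements move with them, which is exactly why guessed inputs produce unreliable sizing.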
Now here comes the challenge: 64% of IT organisations don’t know their application storage I/O profiles and performance requirements, so they guess. The application owner may know the performance and capacity requirements reasonably well, but adds extra to accommodate growth and ‘just to be safe’. The IT department takes those requirements and adds some more for growth and ‘just to be safe’, because ultimately we cannot have a new storage subsystem which does not deliver the required performance.
This means performance planning can be guesswork, with substantial under- or, more likely, over-provisioning, and the unseen costs of troubleshooting and administration providing more significant overheads than should be necessary.
The ultimate result of this can be a solution which meets all the performance requirements but is inefficient in terms of cost and utilisation.
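The ‘just to be safe’ stacking described above compounds quickly. A minimal sketch, using assumed margins (the percentages are illustrative, not figures from any customer):

```python
# Illustrative only: each party pads the estimate "just to be safe".
actual_iops_needed = 20_000

app_owner_margin = 1.30  # assumed: app owner adds 30% for growth/safety
it_dept_margin = 1.25    # assumed: IT adds another 25% on top of that

provisioned = actual_iops_needed * app_owner_margin * it_dept_margin
over_provision_pct = (provisioned / actual_iops_needed - 1) * 100

print(f"Provisioned for {provisioned:.0f} IOPS "
      f"({over_provision_pct:.1f}% over the real requirement)")
```

With these assumed margins the platform ends up sized for 32,500 IOPS against a real requirement of 20,000, a 62.5% over-provision that is paid for in cost and utilisation.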
This is where Computacenter comes in; working closely with our latest partner, LoadDynamix, we can:
- ACQUIRE customer specific workloads and understand exactly the requirements
- MODEL workloads to understand the scale of solution required and ramp up workloads to find the tolerance of existing infrastructure
- GENERATE workloads against proposed storage platforms to ascertain optimal solution, and how many workloads can be supported on a platform
- ANALYSE the performance of proposed solutions with factual data, not vendor marketing figures
This approach provides an exact science for sizing the storage solution, and coupling it with Computacenter’s real-world experience ensures my OCD tendencies can be fully satisfied.
The Computacenter / LoadDynamix partnership announcement can be found here.
I like accuracy; working together with LoadDynamix we can achieve that not just for me, but more importantly for our customers and their users.
Coming Soon – Look out for the #BillAwards2015, announced in December; want to know who wins these prestigious awards? Follow me on Twitter @billmcgloin for all the answers.