Tag Archive | Storage

In a Minute: Software Defined Storage

As 2011 was the year we all talked about “Cloud”, closely followed by the “Big Data” wave of 2012, 2013 is shaping up nicely as the year of the “Software-Defined” entity, with multiple technologies being gathered under the “SDx” banner. Let’s have a brief look at what this means for the world of storage.

In the world of data we are used to constants: controllers that manage the configuration of the environment and the placement of data, disks grouped together using RAID to protect that data, and fixed algorithms presenting it to servers. In effect, when we wrote data we knew where it was going and could control its behaviour; we could replicate it, compress it, de-duplicate it and provide it with the performance level it needed, and when it needed less performance we simply moved it somewhere else – all controlled within the storage array itself.

Software Defined Storage changes this model; it can be thought of as a software layer put in place to control any disks attached to it. The storage services we are used to (snapshots, replication, de-dup, thin provisioning and so on) are then provided to the operating system from this layer. This control software will be capable of sitting on commodity server hardware – in effect becoming an appliance, initially at least – and will be able to control commodity disk storage.
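To make that a little more concrete, here is a minimal sketch (in Python, purely illustrative – the class and method names are my own invention, not any vendor’s product) of the idea: commodity disks pooled under a software control layer that delivers data services such as thin provisioning and snapshots, rather than relying on an array controller to do so.

```python
# Illustrative sketch only: a toy "software-defined" control layer that pools
# commodity disks and provides data services (thin provisioning, snapshots)
# in software. All names here are invented for the example.

class CommodityDisk:
    """Any block device the software layer can address, regardless of vendor."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb


class SoftwareDefinedPool:
    """The control layer: runs on a commodity server, not inside an array controller."""
    def __init__(self, disks):
        self.disks = disks
        self.volumes = {}     # volume name -> logical (promised) size in GB
        self.snapshots = {}   # volume name -> list of point-in-time copies

    @property
    def raw_capacity_gb(self):
        return sum(d.capacity_gb for d in self.disks)

    def create_thin_volume(self, name, logical_size_gb):
        # Thin provisioning: promise capacity now, consume physical space later.
        self.volumes[name] = logical_size_gb
        return name

    def snapshot(self, name):
        # Snapshot service delivered by the software layer, not the hardware.
        snaps = self.snapshots.setdefault(name, [])
        snaps.append(f"{name}-snap-{len(snaps)}")


# Mix disks from different vendors under one software control plane.
pool = SoftwareDefinedPool([CommodityDisk("vendorA-sas-01", 900),
                            CommodityDisk("vendorB-sata-01", 2000)])
pool.create_thin_volume("crm-data", logical_size_gb=5000)  # larger than the raw capacity
pool.snapshot("crm-data")
print(pool.raw_capacity_gb, pool.volumes, pool.snapshots)
```

The point is not the code itself but where it runs: the services live in a layer you can place on any server, sitting above whatever disks you happen to own.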

This is not quite the same as storage virtualisation, where a control plane manages a number of storage resources and pools them together into a single entity; rather, it separates out the management functionality, removing the need for dedicated storage controllers – the most expensive part of a data solution. One of the driving factors for the uptake of Software Defined Storage is therefore an obvious reduction in cost, along with the ability to provide data services regardless of the hardware you choose.

The challenge here is that data should be regarded differently to other aspects of the environment; data is permanent, packets traversing a network are not, and even the virtual server environment does not require any real form of permanence. Data must still exist, and exist in the same place, whether power has been present or not. We are now starting to see a generation of storage devices – note I was careful not to use the phrase arrays – that look capable of offering a Software Defined Storage service through the abstraction of the data and controller layers.

So what does this all mean for storage in the datacentre?

My main observation is that physical storage arrays will be with us for a long time to come and are not going away. However, the potential for disruption to this model is greater than ever before; the ability to use commodity storage and create the environment you want is compelling. With software now emerging that can take commodity hardware, often from several vendors simultaneously, and abstract the data layer, the challenge to the traditional large storage vendors becomes a real and present danger.

I believe the move towards the software defined storage environment will ultimately be more rapid, and see greater early adoption, than the proven concepts of server virtualisation did. It will cause disruption to many existing major vendors, but end-users will still require copious amounts of disk technology, so the major players will remain exactly that. Whilst some niche players may make it through, the big boys will still dominate the playground.

Data – The new Rock’n’Roll

“Data is the new oil”

“The most valuable currency in the world is not money, it’s information”

– A couple of great quotes written by people much more eloquent than me. However, I do have one of my own:

Data is the new rock’n’roll

Just as rock’n’roll transformed the music scene, the use, and future potential use, of information is dramatically changing the landscape of the data centre. Historically the storage array was effectively the drummer of the band: required, but sitting fairly quietly in the background, and whilst a vital component it was not necessarily the first thing people thought of when putting the band together. Even now, if you look at a picture of any band, the drummer is the one hanging about aimlessly in the background; try naming the drummer in any large, well-known band – it’s much harder than you think. And so it was with storage and data; the storage array would sit somewhere towards the back of the datacentre whilst the shiny servers were the visible component, and the items that got the most attention.

As we hit 2013 that all changes; the storage array is the Kylie of the datacentre – it’s the sexiest piece of equipment in there. And so it should be, given that upwards of 40% of a customer’s IT budget is spent simply on provisioning the capacity to house data.

At Computacenter, we’ve made a large investment in our Solution Centre. What sits in the front row now? Of course it’s the data arrays, with the latest technology from EMC, HP, HDS, IBM and NetApp all showcased. Why is it front row? Obviously because it’s the most important component of any solution nowadays. And of course, it looks sexy – or is that just me?

The storage array is now front and centre; it’s the first component to be designed when re-architecting an environment. Why? Simply because a customer’s data is their most valuable asset. It’s transforming the way people do business; it’s changing the way we interact with systems and even each other. Your data is now the lead singer in the band.

Data is the one thing that is getting attention within the business; it’s the one thing you have that makes the front pages of “Heat” magazine – Where’s it going? What’s it doing? Is it putting on weight? Is it on a diet? What clothes is it in? Should it be in rehab? But as the manager of the data (or the band) there is one simple question you want answered: how do I make money out of it?

And that, dear reader, is the $64,000 question. The good news is that it is becoming ever more possible to use your data as a revenue generation tool. We are only starting to see business value being generated from data; as 2013 progresses we will see some niche players mature (and possibly be acquired), we’ll see an increased push from the mainstream vendors, and we’ll start to see ways of manipulating and using data that we just couldn’t contemplate when the storage was simply providing the rhythm section.

Even converged systems, the boy bands of the decade, which perform in harmony, always have one singer who is better than the rest – well, he’s the data.

So: Compute, Networking and Software, the gauntlet is down. Data is the new rock god; it’s the Mick Jagger to your Charlie Watts. You want the crown back? Come and get it – but for now it’s all mine.

All the data architects out there can join me as I sing (with apologies to Liam & Noel) “…Tonight, I’m a rock’n’roll star!”

Cut Me – I Bleed Data

I decided to clean out my home office; I’d had enough of the 56K modems lying around and needed the space. What I didn’t expect was to find a museum of data storage concentrated in such a small space. I suspected at the time I wouldn’t need the 5.25” 720k floppy disks to upgrade to VMS v5.1 again, but who knows, maybe I should keep them – so I did, along with the 2,000-ish 1.44Mb floppy disks and assorted hard disks. Now, when I Google floppy disks, the first thing that appears is an explanation of what a floppy disk is – or rather, was.

Next I moved on to some more recent technology; surely I wouldn’t have to worry about throwing out USB memory sticks, would I? Having counted somewhere around 100 of the things lying around the house, I decided that maybe this was the time to admit I didn’t really need ten 64Mb sticks cluttering up space – after all, my shiny new 64Gb version is 1,000x bigger.

This got me thinking about the state of the data storage market, and the changes going on. Whilst the capacity of floppy disks rose slowly and fairly consistently, we have seen some spectacular changes in the wider storage marketplace. We got used to disk capacities doubling every two years, then this changed to 18 months; the 2Gb drives became 200Gb, then 400Gb, and suddenly the 1Tb drive had landed.

It was at this point we started to expect development to slow down – after all, as a wise Star Trek engineer once said, “you cannae change the laws of physics, Captain”. Well, you know what, Scotty, actually we can and did: 2Tb drives appeared, 3Tb are now not uncommon in datacentres, and 4Tb are available on Amazon.

Surely at some point disk drives have to stop evolving? Well, yes and no; they may stop evolving in their current form, but the requirement to store more and more data, and to hold it for longer and longer, goes on unabated. Hmmm, what do we do now?

Well, change the form, of course. When it comes to storing information, hard drives don’t hold a candle to DNA. Our genetic code packs billions of gigabytes into a single gram; a mere milligram of the molecule could encode the complete text of every book in the British Library and have plenty of room to spare. All of this has been mostly theoretical – until now. In a new study, researchers stored an entire genetics textbook in less than a picogram of DNA (one trillionth of a gram), an advance that could revolutionise our ability to store data.

Initially there may seem to be some problems with using DNA to store data. First, cells die – not a good way to safeguard your valuable information. They also naturally replicate, introducing changes over time that can alter the data (and whilst we accepted this on a floppy disk, it’s unthinkable now). To get around this challenge, a research team at Harvard created a DNA information-archiving system that uses no cells at all. Instead, an inkjet printer embeds short fragments of chemically synthesised DNA onto the surface of a tiny glass chip. To encode a digital file, researchers divide it into tiny blocks of data and convert these blocks not into the 1s and 0s of typical digital storage media, but into DNA’s four-letter alphabet of As, Cs, Gs and Ts. Each DNA fragment also contains a digital “barcode” that records its location in the original file. Reading the data back requires a DNA sequencer and a computer to reassemble all of the fragments in order and convert them back into digital format. The computer also corrects for errors; each block of data is replicated thousands of times, so that any chance glitch can be identified and fixed by comparing it to the other copies.
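For the curious, here is a very rough sketch of that encoding idea in Python. It is not the Harvard team’s actual scheme (their encoding, addressing and error correction are far more sophisticated); it simply shows the shape of the approach: split the file into blocks, tag each block with a barcode, map bits onto bases, replicate the fragments, and use a majority vote across the copies to repair glitches on the way back.

```python
# Simplified illustration of the idea described above, not the actual Harvard
# scheme: split a file into blocks, tag each block with a "barcode" (its offset),
# map bits onto DNA bases, and correct errors by majority vote across copies.
from collections import Counter

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes, block_size: int = 8, copies: int = 5):
    """Return a list of (barcode, dna_fragment) pairs, each block replicated."""
    fragments = []
    for barcode in range(0, len(data), block_size):
        block = data[barcode:barcode + block_size]
        bits = "".join(f"{byte:08b}" for byte in block)
        dna = "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))
        fragments.extend((barcode, dna) for _ in range(copies))  # redundancy
    return fragments

def decode(fragments):
    """Group copies by barcode, take the most common read, convert back to bytes."""
    by_barcode = {}
    for barcode, dna in fragments:
        by_barcode.setdefault(barcode, []).append(dna)
    data = bytearray()
    for barcode in sorted(by_barcode):
        dna, _ = Counter(by_barcode[barcode]).most_common(1)[0]  # majority vote
        bits = "".join(BASE_TO_BITS[base] for base in dna)
        data.extend(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return bytes(data)

original = b"Data is the new rock'n'roll"
assert decode(encode(original)) == original
```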

Using these methods they managed to encode a complete book, just under 6Mb in size, onto DNA. Obviously this comes at a price beyond the reach of customers for now, but given the rate at which the data storage market moves, who knows how we will upgrade our storage capacity in the future; it is estimated that double-stranded DNA could encode 10 Exabytes of data, or around 11,529,215,046,100Mb – that’s quite a lot of floppy disks.
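As a back-of-an-envelope check on that figure (assuming a binary Exabyte expressed in decimal megabytes, which seems to be how the number above was reached), the floppy-disk comparison works out roughly as follows:

```python
# Rough arithmetic only: 10 (binary) Exabytes expressed in decimal megabytes,
# then divided by the 1.44Mb capacity of a 3.5" floppy disk.
dna_capacity_mb = 10 * 2**60 / 10**6   # ~11,529,215,046,068 Mb
floppies = dna_capacity_mb / 1.44      # ~8 trillion floppy disks

print(f"{dna_capacity_mb:,.0f} Mb is about {floppies:,.0f} floppy disks")
```

Roughly eight trillion floppy disks, give or take – which rather puts my pile of 2,000 into perspective.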

So, now when you hear us data guys talking about “Big Data” and not being scared by the volume element, maybe you’ll understand why.

In a few years’ time, when you need to add an Exabyte or two to your data capacity, don’t worry – I’ve an armful right here.