
In a Minute – EMC World

EMC World 2013 took place at the Venetian Hotel and Sands Conference Centre on 6th-10th May 2013. Attended by over 12,000 staff, partners and customers, the event featured several product announcements and a range of upgrades to existing technology. The main points of interest were as follows:

  • ViPR (pronounced Viper) – the major announcement, and EMC’s entry into the world of Software Defined Storage. ViPR will (initially at least) be an appliance designed to abstract the control plane from the data plane. The control plane is effectively a storage hypervisor, managing the storage (the data plane) underneath it, which on day one will be EMC’s VNX, VMAX or Isilon plus NetApp arrays, with other vendors to follow; in future the data plane can be commodity storage. ViPR will offer pooled storage resources presenting Block, File and Object interfaces, along with simplified management and automation. First products are due to ship late 2013, so a final verdict is reserved until then. The initial release sounds very much like a Gen 1 product, so expect push back from other vendors, but the roadmap sounds fairly compelling and puts ViPR firmly in the “product to watch” category. Rumour has it that EMC believes this to be their best Gen 1 product yet released, and their future. A rough sketch of the control/data plane split appears after this list; a full review will follow in a separate post.
  • Pivotal – announced before EMC World but given a lot of focus here. Pivotal is a partnership between EMC and VMware, with GE investing heavily, designed for next-generation Cloud and Big Data applications. Pivotal splits into three areas: Data Fabrics, Application Fabrics and Cloud Fabrics. Pivotal One launches late 2013; again, one to watch.
  • XtremIO – available now in limited quantities, but a big focus. EMC’s All-Flash Array (AFA) provides much of the functionality expected of Enterprise-class arrays, combined with very high performance. Want to see one? Contact me, I’ve got one!
  • EMC Velocity Partner Program – the partner program changes to allow all partners to be “Business Partners” with specialities in relevant areas. Look out for Computacenter changing from one “Velocity Signature Solution Centre” logo to about 20 different Business Partner logos. Those PowerPoint slides suddenly got very busy.
  • Isilon upgrades – Isilon is proving to be an excellent acquisition for EMC. Look out for forthcoming enhancements including deduplication, auditing capability and integration with HDFS, combined with additional scalability. The required enhancements to the SyncIQ replication functionality are also being delivered.
  • SRM Enhancements – New suites of management products, sharing a common interface with ViPR. Let’s face it – these were needed.
  • Continuous Availability enhancements – combining VSPEX with VPLEX is designed to eliminate complexity in this area for relevant customers.
  • VNX upgrades are on the way, but still under NDA (if you are internal, ask me nicely).
  • BRS (Backup & Recovery Services) – enhancements to the Data Domain range and further development of Avamar technology mean this remains a focus area for both EMC and partners.
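
As flagged in the ViPR bullet above, the sketch below illustrates the control plane/data plane split in miniature. This is illustrative Python of my own devising: none of the class or method names correspond to ViPR’s actual APIs, and a real array driver would call each vendor’s management interface rather than returning a path. The point is simply that one management layer can drive several array types while I/O continues to flow directly to the arrays themselves.

```python
# Hypothetical sketch of a control-plane / data-plane split.
# Nothing here corresponds to a real ViPR API; the names are invented
# purely to illustrate one management layer driving many array types.

from abc import ABC, abstractmethod


class ArrayDriver(ABC):
    """Data-plane backend: the array that actually stores the bits."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Create a volume on this array and return its access path."""


class VNXDriver(ArrayDriver):
    def create_volume(self, name: str, size_gb: int) -> str:
        # A real driver would call the array's management API here.
        return f"/dev/vnx/{name}"


class IsilonDriver(ArrayDriver):
    def create_volume(self, name: str, size_gb: int) -> str:
        return f"/ifs/{name}"


class ControlPlane:
    """Management layer: owns configuration, never the I/O path."""

    def __init__(self) -> None:
        self._pools: dict[str, ArrayDriver] = {}

    def register_pool(self, pool: str, driver: ArrayDriver) -> None:
        self._pools[pool] = driver

    def provision(self, pool: str, name: str, size_gb: int) -> str:
        # The control plane chooses the backend; reads and writes then
        # go straight to the chosen array, bypassing this layer.
        return self._pools[pool].create_volume(name, size_gb)


cp = ControlPlane()
cp.register_pool("block-tier", VNXDriver())
cp.register_pool("file-tier", IsilonDriver())
print(cp.provision("block-tier", "db01", 500))  # -> /dev/vnx/db01
```

The design point to notice is that the control plane only ever touches configuration; once a volume exists, hosts talk to the array directly, which is why the data plane can eventually be commodity storage.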

Summary: EMC World remains one of the must-attend events in the industry. Whilst some of the announcements are of future products that are still work in progress, these do give an insight into the direction the company is going. Joe Tucci stated that EMC will remain true to its roots, but with an increasing investment in software-based products. EMC World proved a worthwhile investment in time.

In a Minute: Software Defined Storage

Just as 2011 was the year we talked about “Cloud”, closely followed by the “Big Data” wave of 2012, 2013 is shaping up nicely as the year of the “Software-Defined” entity, where multiple technologies are being covered by the “SDx” banner. Let’s have a brief look at what this means for the world of storage.

In the world of data we are used to constants: controllers that manage the configuration of the environment and the placement of data, disks grouped together using RAID to protect data, and the presentation of this data to servers using fixed algorithms. In effect, when we wrote data we knew where it was going and could control its behaviour; we could replicate it, compress it, de-duplicate it and provide it with the performance level it needed, and when it needed less performance we simply moved it somewhere else – all controlled within the storage array itself.

Software Defined Storage changes this model; it can be thought of as a software layer put in place to control any disks attached to it. The storage services we are used to (snapshots, replication, de-duplication, thin provisioning etc.) are then provided to the Operating System from this layer. This control software will be capable of sitting on commodity server hardware, in effect becoming an appliance (initially at least), and will be able to control commodity disk storage.
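
To make that layer idea concrete, here is a minimal sketch in Python, with entirely hypothetical names, of software providing two such services (synchronous replication and snapshots) above plain block stores that know nothing about either. A real implementation would manage the metadata, consistency and failure handling that this deliberately ignores.

```python
# Minimal sketch of storage services implemented in software above
# commodity disks. All names are hypothetical; a production system
# would use copy-on-write snapshots, not full block-map copies.


class CommodityDisk:
    """Any dumb block store: local disk, JBOD shelf, cloud volume."""

    def __init__(self) -> None:
        self.blocks: dict[int, bytes] = {}

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

    def read(self, lba: int) -> bytes:
        return self.blocks.get(lba, b"")


class SoftwareDefinedLayer:
    """Adds replication and snapshots above whatever disks it is given."""

    def __init__(self, primary: CommodityDisk, replica: CommodityDisk) -> None:
        self.primary = primary
        self.replica = replica
        self.snapshots: list[dict[int, bytes]] = []

    def write(self, lba: int, data: bytes) -> None:
        self.primary.write(lba, data)
        self.replica.write(lba, data)  # replication done in software

    def snapshot(self) -> int:
        # Point-in-time copy of the block map; returns a snapshot id.
        self.snapshots.append(dict(self.primary.blocks))
        return len(self.snapshots) - 1


layer = SoftwareDefinedLayer(CommodityDisk(), CommodityDisk())
layer.write(0, b"hello")
snap = layer.snapshot()
layer.write(0, b"changed")
print(layer.snapshots[snap][0])  # b'hello': snapshot unaffected by overwrite
```

Because the services live in this layer rather than in an array controller, the disks underneath can be swapped for anything that accepts reads and writes.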

Software Defined Storage is not quite the same as storage virtualisation, where a control plane manages a number of storage resources, pooling them together into a single entity; rather, it separates out the management functionality, removing the need for dedicated storage controllers – the most expensive part of a data solution. One of the driving factors for the uptake of Software Defined Storage is therefore an obvious reduction in cost, along with the ability to provide data services regardless of the hardware you choose.

The challenge here is that data should be regarded differently to other aspects of the environment: data is permanent, packets traversing a network are not, and even the virtual server environment does not require any real form of permanence. Data must still exist, and exist in the same place, whether power has been present or not. We are now starting to see a generation of storage devices (note I was careful not to use the phrase “arrays”) which look more capable of offering a Software Defined Storage service, through the abstraction of the data and controller layers.

So what does this all mean for storage in the datacentre?

My main observation is that physical storage arrays will be with us for a long time to come; they are not going away. However, the potential for disruption to this model is greater than ever before: the ability to take commodity-type storage and create the environment you want is compelling. With the emerging ability of software to take commodity hardware, often from several vendors simultaneously, and abstract the data layer, the challenge to the traditional large storage vendors becomes a real and present danger.

I believe the rate of change towards the software defined storage environment will ultimately be more rapid, and will see greater early adoption, than the proven concepts of server virtualisation did. It will cause disruption to many existing major vendors, but end-users will still require copious amounts of disk technology, so the major players will remain exactly that. Whilst some niche players may make it through, the big boys will still dominate the playground.