I remember the days when using awk, sed and grep on a log file was a really powerful way to extract useful data to help troubleshoot issues, or to better plan complex application deployment and management.
Now the amount of data generated by systems, applications and devices has proliferated to the extent that the old techniques can no longer extract the information we need from the systems we manage today.
A popular route for analysts is to download software onto their laptops to help with this challenge, and one of the more popular choices is visual analytics from Splunk. This personal need and learning have driven a “Shadow IT” style of adoption of the tool for Operational Intelligence in organisations. The Computacenter UK Infrastructure Operations team experienced some early success in this very manner. The initial benefits were amazing, but thought was needed on how to evaluate it as a corporate tool in order to drive operational efficiency and intelligence across the Global Managed Services business.
On consideration of using a traditional approach for a proof of concept and pilot phase, which would take weeks to plan and more time to execute, it was decided to try a different approach. Something more agile was needed in order to benefit from quick results, the ability to test the software, its ease of use, and the other business benefits it could drive.
So with a little gamification and the flexibility of our Solutions Center, a competition was conceived…
The competition ran for just four weeks between teams from across the Computacenter Group. The challenge was to use Splunk’s visual analytics tool to address a Managed Services business problem in that time, with just an afternoon’s worth of training. The teams were based in the UK, Germany, South Africa, Hungary and Spain, drawn from both Service Desk and Infrastructure Operations.
All participants were given an overview of the tool and, as mentioned, half a day’s training, run from the Solutions Center in Hatfield and broadcast to the other countries via live presentation and video feed. A central infrastructure was provided with the software pre-installed.
The results were amazing. All the participants were data analysts, so they knew exactly what they wanted to get out of their data and were able to visualise it in the short space of time given to them. With varying help from Splunk experts, all were able to create compelling, business-relevant dashboards in just four weeks, with very little training, and while still doing their operational day jobs.
The results have shown us the art of the possible, and we can now start further planning the use of this innovation-driving software.
Congratulations to anyone who spotted the above to be a quote from the third President of the USA, Thomas Jefferson; he may have said it in 1803, but the relevance remains today.
I’m no longer sure which generation I belong to. I come from an age when disk drives could be measured in Megabytes; nowadays we don’t talk in Gigabytes, and some of us don’t even talk in Terabytes any more. We know data is exploding and we know technology develops to cope with this; however, that’s evolution, not revolution.
I believe we are at the cusp of the next revolution in technology. To be the next big thing has to fundamentally change how we do things. It has to change how we look and think about our world; it has to be revolutionary.
It used to be that we got excited by individual pieces of technology: maybe our first laptop, maybe our first 1TB drive, maybe our first smartphone, which we just loved to hold and be seen with.
But whilst these may be considered revolutionary, they remain point solutions – they are single dimensional.
We’re moving into a multi-dimensional world of IT. We’re moving from single-dimensional solutions to multi-dimensional solutions:
- Where everything has an impact on everything else
- Where every piece of technology has to interact with everything else
That’s just in business. What about the personal world, where your smartphone has to interact with your car, which has to interact with your microwave, which has to interact with your television, so that when you get home everything is in its place? How do you choose? And more importantly, how do you control it all?
The problem with multi-dimensional solutions is that there are so many choices to be made. We are seeing the start of this wave now in the ‘Software Defined’ world, where it gets harder to identify the components of a solution – but really, why should we care anyway?
So what do we do in this multi-dimensional, software-defined world of IT?
- Should you ignore everyone and continue as you are, after all it works doesn’t it?
- Maybe putting everything in the cloud and consuming as a service is the answer
- Why not adopt all the new methods, be seen to be progressive but continue to do everything the same old way?
- What if you adopt every new solution out there and change all your processes to get all the benefit you’ve been promised? How much disruption would that cause? And what if it doesn’t deliver?
It is a minefield out there, and as with all minefields it’s always good to have someone with experience to guide you through it. This is where Computacenter come in.
Every generation has an obligation to renew, reinvent, re-establish, re-create structures and redefine its realities for itself. Get ready for the next generation.
“Data is the new oil”
“The most valuable currency in the world is not money, it’s information”
– A couple of great quotes written by people much more eloquent than me. However, I do have one of my own:
Data is the new rock’n’roll
Just as rock’n’roll transformed the music scene, the use, and future potential use, of information is dramatically changing the landscape of the datacentre. Historically the storage array was effectively the drummer of the band: required, but sitting fairly quietly in the background, and whilst a vital component it was not necessarily the first thing people thought of when putting the band together. Even now, if you look at a picture of any band, the drummer is the one hanging about aimlessly in the background; try naming the drummer in any large, well-known band – it’s much harder than you think. And so it was with storage and data; the storage array would sit somewhere towards the back of the datacentre whilst the shiny servers were the visible component, and the items that got the most attention.
As we hit 2013 that all changes; the storage array is the Kylie of the datacentre, it’s the sexiest piece of equipment in there. And so it should be given that upwards of 40% of a customer’s IT budget is spent simply on provisioning the capacity to house data.
At Computacenter, we’ve made a large investment in our Solution Centre. What sits in the front row now? Of course it’s the data arrays, with the latest technology from EMC, HP, HDS, IBM and NetApp all showcased. Why the front row? Obviously because it’s the most important component of any solution nowadays. And of course, it looks sexy – or is that just me?
The storage array is now front and centre, it’s the first component to be designed when re-architecting an environment. Why? Simply because a customer’s data is their most valuable asset, it’s transforming the way people do business; it’s changing the way we interact with systems and even each other, your data is now the lead singer in the band.
Data is the one thing that is getting attention within the business; it’s the one thing you have making the front pages of “Heat” magazine – Where’s it going? What’s it doing? Is it putting on weight? Is it on a diet? What clothes is it in? Should it be in rehab? But as the manager of the data (or the band) there is one simple question that you want answered; how do I make money out of it?
And that, dear reader, is the $64,000 question. The good news is that is becoming ever more possible to use your data as a revenue generation tool, we are only starting to see business value being generated from data, as 2013 progresses we will see some niche players mature (and possibly be acquired), we’ll see an increased push from the mainstream vendors and we’ll start to see ways of manipulating and using data that we just couldn’t contemplate when the storage was simply providing the rhythm section.
Even converged systems, the boy bands of the decade, which perform in harmony, always have one singer better than the rest – and he’s the data.
So: Compute, Networking, and Software, the gauntlet is down; Data is the new rock God, it’s the Mick Jagger to your Charlie Watts, you want the crown back? Come and get it, but for now it’s all mine.
All the data architects out there can join me as I sing (with apologies to Liam & Noel) “…Tonight, I’m a rock’n’roll star!”
As a follow up to my recent blog “Cut me – I bleed data”, where I looked at the potential for DNA storage, I thought I would look at how the human body can create data, and how it can be used for our benefit. We are all used to the concept of pedometers; where a small device carried on the person counts the numbers of steps we take in a day. I’m fairly sure all the devices I’ve tried are faulty as it must be more than 300 steps from home to car to office to desk to coffee shop, right? Walking 10,000 steps per day is good for your health apparently, so I may be a little bit short of my daily target.
However, a few things caught my eye recently; the first two are very similar – the Fitbit and the Nike FuelBand, which work in similar fashion and take the pedometer concept to the next level. These devices have the same basic aim: to encourage us to lead a healthy, active lifestyle, and to monitor our progress and feed back in a way that is of benefit to us. They can track our steps, distance travelled and calories consumed, and can measure whether we are climbing stairs. We can use the app provided on our smartphones, tablets or any other device to input the food we consume and track our goals graphically if we want.
Ever woken up tired in the morning, wanting just another five minutes? Well, the next interesting thing these devices can do is measure how we sleep and what our sleep patterns are; this can then be used to wake us gently in the correct sleep phase to ensure we are ready for the day. Without thinking about it, you are slowly building a database about yourself: we create the data and use the instrument to record it. And you wondered where all the growth of data you keep hearing about is coming from? Some of it is your fault, I’m afraid.
That’s all data generation we can control: we choose to wear the device, download the data wirelessly, stand on the wireless scales and transfer information about ourselves. But what about things we would like to control but are really not sure how to? What if we wanted to measure heart rate, brain activity, body temperature and hydration levels, and rather than having our own database we wanted to share it with our doctor or consultant? We’re not too far from reaching that stage.
A US-based company has piloted stretchable electronics products that can be put on things like shirts and shoes, worn as temporary tattoos or installed in the body, and that will be capable of measuring all the criteria above. Another company will begin a pilot programme in Britain for a “Digital Health Feedback System” that combines wearable technologies with microchips the size of a grain of sand that can ride on a pill right through you. Powered by your stomach fluids, the chip emits a signal that is picked up by an external sensor, capturing vital data. Another firm is looking at micro-needle sensors on skin patches as a way of deriving continuous information about the bloodstream.
The data generated by this technology could be used for Business Intelligence purposes in the healthcare market; it could be shared between you and your doctor, allowing proactive activity to improve the care offered, improve efficiencies and ultimately reduce costs. No more waiting seven days to see a doctor: your chosen device downloads data which can be shared with your practitioner, who in turn sends you an email recommending more exercise and more vegetables in your diet.
The ability to use anonymised data from a group of patients would allow healthcare providers to spot patterns across an entire population or specific geographies. For example, the need for continuous data on blood glucose levels, particularly for Type 1 diabetes patients, has become critical in the treatment of the disease, providing impetus for monitoring devices.
If this kind of information exists for a lot of people, it is arguably folly to not look for larger trends and patterns. And not just in things like your blood count, because overlays of age, educational level, geography and other demographic factors could yield valuable insights. The essence of the Big Data age is the diversity of data sets combined in novel ways.
These technologies could be used to get people with difficult-to-pin-down conditions like chronic fatigue to share information about themselves; this could include the biological data from devices, but also things like how well they slept, what they ate, and when they felt pain or tiredness. Collectively, this could provide evidence about how behaviour and biology conjure these states, and ultimately could lead to a solution to such problems.
So it’s not just businesses that can benefit from the analysis of data; individuals and the population at large are potential beneficiaries of the emerging ability of technology to provide analysis of seemingly random collections of data. As I hit the weekend I may not need a wearable electronic device to tell me my brain activity is slowing down or my hydration levels are increasing, but that won’t slow down the amount of data I’m able to generate on myself, and the contribution this data makes to my future health. Maybe I’ll be able to store my personal database on my own DNA – who knows?
I decided to clean out my home office; I’d had enough of the 56K modems lying around, and needed the space. What I didn’t expect was to find a museum of data storage concentrated in such a small space. I suspected at the time that I wouldn’t need the 5.25” 720K floppy disks to upgrade to VMS v5.1 again, but who knows, maybe I should keep them – so I did, along with the 2,000-ish 1.44MB floppy disks and assorted hard disks. Now when I Google floppy disks, the first thing that appears is an explanation of what a floppy disk is – or rather, was.
Next I moved on to some more recent technology; surely I wouldn’t have to worry about throwing out USB memory sticks, would I? Having counted somewhere around 100 of the things lying around the house, I decided that this was maybe the time to admit I didn’t really need 10x 64MB sticks cluttering up space; after all, my new shiny 64GB version is 1,000x bigger.
This got me thinking about the state of the data storage market, and the changes going on. Whilst the capacity of floppy disks rose slowly and fairly consistently, we have seen some spectacular changes in the storage marketplace. We got used to disk capacities doubling every two years; then this changed to 18 months; then the 2GB drives became 200GB, then 400GB, and suddenly the 1TB drive had landed.
It was at this time we started to expect development to slow down – after all, as a wise Star Trek engineer once said, “you cannae change the laws of physics, Captain”. Well, you know what, Scotty? Actually we can, and did: 2TB drives appeared, 3TB drives are now not uncommon in datacentres, and 4TB drives are available on Amazon.
Surely sometime disk drives have to stop evolving? Well, yes and no, they may stop evolving in their current form, but the requirements to store more and more data, and to hold it for longer and longer goes on unabated. Hmmm, what do we do now?
Well, change the form of course. When it comes to storing information, hard drives don’t hold a candle to DNA. Our genetic code packs billions of gigabytes into a single gram. A mere milligram of the molecule could encode the complete text of every book in the British Library and have plenty of room to spare. All of this has been mostly theoretical—until now. In a new study, researchers stored an entire genetics textbook in less than a picogram of DNA—one trillionth of a gram—an advance that could revolutionise our ability to store data.
Initially there may seem to be some problems with using DNA to store data. First, cells die – taking your valuable information with them. They also naturally replicate, introducing changes over time that can alter the data (and whilst we accepted this on a floppy disk, it’s unthinkable now). To get around this challenge, a research team at Harvard created a DNA information-archiving system that uses no cells at all. Instead, an inkjet printer embeds short fragments of chemically synthesised DNA onto the surface of a tiny glass chip. To encode a digital file, researchers divide it into tiny blocks of data and convert these not into the 1s and 0s of typical digital storage media, but into DNA’s four-letter alphabet of As, Cs, Gs and Ts. Each DNA fragment also contains a digital “barcode” that records its location in the original file. Reading the data requires a DNA sequencer and a computer to reassemble all of the fragments in order and convert them back into digital format. The computer also corrects for errors: each block of data is replicated thousands of times, so that any chance glitch can be identified and fixed by comparing it to the other copies.
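The block-and-barcode idea is simple enough to sketch in a few lines of Python. This is an illustrative toy only, built on my own assumptions: it packs 2 bits per base (the Harvard team’s actual encoding differed), uses a plain integer index to stand in for the DNA “barcode”, and corrects errors with a simple majority vote across replicated copies; all the names are mine.

```python
from collections import Counter

# Toy mapping: 2 bits per base (assumption, not the published scheme).
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode(data: bytes, block_bytes: int = 4) -> list[tuple[int, str]]:
    """Split data into blocks; return (barcode, dna_fragment) pairs."""
    bits = "".join(f"{b:08b}" for b in data)
    dna = "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))
    frag_len = block_bytes * 4  # 4 bases per byte at 2 bits per base
    n_frags = (len(dna) + frag_len - 1) // frag_len
    return [(i, dna[i * frag_len:(i + 1) * frag_len]) for i in range(n_frags)]

def decode(fragments: list[tuple[int, str]]) -> bytes:
    """Reassemble fragments by their barcode, then convert bases back to bytes."""
    dna = "".join(frag for _, frag in sorted(fragments))
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def consensus(copies: list[str]) -> str:
    """Majority vote across replicated copies of one fragment (error correction)."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

message = b"DNA storage"
assert decode(encode(message)) == message
assert consensus(["ACGT", "ACGA", "ACGT"]) == "ACGT"  # the stray A is outvoted
```

The barcode is what makes the scheme work at scale: sequencers read fragments in arbitrary order, so each one must carry its own address.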
By using these methods they managed to encode a complete book, just under 6MB in size, onto a single strand of DNA. Now, obviously this comes at a price beyond the reach of customers for now, but at the rate the data storage market moves, who knows how we will upgrade our storage capacity in the future; it is estimated that a double DNA strand could encode 10 Exabytes of data, or 11,529,215,046,100MB – that’s quite a lot of floppy disks.
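The arithmetic behind those figures is easy to sanity-check. One assumption of mine: reading “10 Exabytes” as 10 × 2^60 bytes and “MB” as 10^6 bytes is the mix of units that reproduces the quoted figure.

```python
# Back-of-envelope check on the capacity claim above.
# Assumption (mine): "10 Exabytes" = 10 * 2**60 bytes and "MB" = 10**6 bytes.
ten_exabytes = 10 * 2**60                     # bytes
in_megabytes = ten_exabytes // 10**6          # 11,529,215,046,068 MB
in_floppies = ten_exabytes / (1.44 * 10**6)   # 1.44MB floppies: about 8 trillion

print(f"{in_megabytes:,} MB, or about {in_floppies:,.0f} floppy disks")
```

So “quite a lot of floppy disks” works out at roughly eight trillion of them.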
So, now when you hear us data guys talking about “Big Data” and not being scared by the volume element, maybe you’ll understand why.
In a few years’ time, when you need to add an Exabyte or two to your data capacity, don’t worry – I’ve an armful right here.