
A New Wave of Notebook Innovation

Intel have for decades enjoyed near-total domination of the commercial PC market, providing the core components – CPUs and chipsets – to the OEMs, and to their credit they have continued to innovate. But even as they add features and improve the performance of their silicon platforms, enabling OEMs to make PC designs thinner and lighter, there’s a feeling that this alone doesn’t address the ever-increasing user-experience demands.

With the proliferation of consumer devices in the modern workplace (smartwatches, smartphones, tablets and so on) there’s a concern that so much choice can distract and even overwhelm users. These ‘skinny’ clients provide unrivalled connectivity, but the constant interruptions they bring leave little uninterrupted time for those who simply need to focus and concentrate on their business tasks. Research suggests that even in the face of so much choice, the notebook PC continues to be the main go-to business device. It is predicted that during 2020 the ‘Millennial’ generation will become the dominant demographic group in the workforce, so organisations like Intel need to ensure that the needs of this generation are addressed by their future technologies. Unsurprisingly, user experience and usability will play a big part.

User Experience Targets

Based on the User Experience targets above, I think it is safe to say that the notebook PC as a device is not going anywhere, but its usability and the experiences you get from it can be improved upon.

Intel recently released a high-level blueprint of how they and the PC OEMs are looking to deliver these experience improvements to users; it’s known in the industry as Project Athena.

Project Athena – Laptop Innovation Rooted in Human Understanding

It’s worth noting that Project Athena is a 2-3 year view, so it’s not about dropping in a ton of new technologies in one hit, but we are already seeing some encouraging progress.

Project Athena focuses on three main areas – Always Ready, Adaptive and Focus.

CPU and chipset efficiencies will continue to drive improvements in battery life, and inbuilt AI capabilities will also help. The much-heralded 5G and Wi-Fi 6 (802.11ax) standards, once they arrive en masse and have greater coverage, are expected not only to deliver faster speeds but to be much more robust and reliable, thanks to the increased spectrum they operate in.

The 2-in-1 form factor – a touchscreen notebook with a near-360-degree hinge – continues to gain share from the traditional clam-shell device, as it offers the widest range of user interaction: touch, pen and keyboard.

AI is going to play a part in improving areas such as voice recognition and enabling ‘do not disturb’ features that keep outside distractions to a minimum. By monitoring for a reduction in user interaction, the device can intelligently reduce or suspend the power states of those sub-systems that are not in use, saving power and improving battery life.
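
To make the idea concrete, a device-level policy might look something like the sketch below. This is purely illustrative – the subsystem names, thresholds and actions are invented for the example and are not taken from any Intel specification.

    import time

    # Illustrative idle policies: subsystem names, thresholds and actions
    # are invented for this sketch, not drawn from any Intel specification.
    IDLE_POLICIES = {
        "display": {"idle_secs": 30,  "action": "dim"},
        "wifi":    {"idle_secs": 120, "action": "low_power_poll"},
        "storage": {"idle_secs": 60,  "action": "spin_down"},
    }

    def apply_power_policy(last_input_time, subsystem_last_used, now=None):
        """Return (subsystem, action) pairs for parts with no recent use."""
        now = now or time.time()
        actions = []
        for name, policy in IDLE_POLICIES.items():
            idle_for = now - max(last_input_time,
                                 subsystem_last_used.get(name, 0.0))
            if idle_for > policy["idle_secs"]:
                actions.append((name, policy["action"]))
        return actions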

The target is a more tablet-like experience in a package that delivers enhanced performance, with features such as under one second from lid-up to login, built on Intel’s next-generation mobile CPU technologies. Persistent memory provided by Intel’s Optane technology also plays a big part in improving performance and reducing wait times.

Whilst Project Athena may be a 2-3 year vision, it’s far from being pure ‘vapour-ware’ today. HP Inc has recently announced the first-to-market Athena v1 commercial product with their Elite Dragonfly notebook.

All notebook PCs that conform to the Project Athena specification will carry a distinguishing Intel label.

The design criteria Intel sets for conformance to the Athena standard are expected to evolve as the supporting technologies develop. Version 1 is believed to be based roughly on the following targets –

  • Chassis Design – 15mm Z-height (17mm allowed this year under the right conditions)
  • <1 second from lid up to logon
  • No performance degradation when unplugged from power cord
  • >16 hours of battery in video playback mode
  • >9 hours continuous intensive browser usage
  • 4 hours of battery life from a 30-minute charge
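
For illustration, those targets are simple enough to capture as data. The sketch below encodes them in Python and checks a device’s measured figures against them – the structure and the check are mine, not an official Intel compliance tool, and the ‘no degradation on battery’ criterion is omitted as it isn’t a single number.

    # The Athena v1 targets from the list above, captured as data.
    # An illustrative sketch, not an official compliance tool.
    ATHENA_V1_TARGETS = {
        "max_z_height_mm": 15,           # 17mm allowed this year in some cases
        "max_lid_up_to_logon_secs": 1,
        "min_video_playback_hours": 16,
        "min_intensive_browsing_hours": 9,
        "min_hours_from_30min_charge": 4,
    }

    def meets_athena_v1(measured: dict) -> bool:
        """True if a device's measured figures hit every numeric target."""
        t = ATHENA_V1_TARGETS
        return (measured["z_height_mm"] <= t["max_z_height_mm"]
                and measured["lid_up_to_logon_secs"] <= t["max_lid_up_to_logon_secs"]
                and measured["video_playback_hours"] >= t["min_video_playback_hours"]
                and measured["intensive_browsing_hours"] >= t["min_intensive_browsing_hours"]
                and measured["hours_from_30min_charge"] >= t["min_hours_from_30min_charge"])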

The Benefits of Project Athena

With the goal of Project Athena being to drive the next wave of innovation into notebook PCs, we can expect to realise the following benefits and improvements over traditional notebook PCs –

  1. Improved productivity and User Experience
  2. Usability – All day battery-life with rapid charging and intelligent use of AI
  3. Connectivity – Provided by Thunderbolt 3, WiFi 6 and 5G 
  4. Performance – Latest CPU and next generation Optane storage
  5. Design – Thinner, lighter designs that feature multiple input methods
  6. User satisfaction – A more responsive ‘without delay’ user experience

What Next?

The other leading commercial PC OEMs are expected to follow HP Inc’s lead in releasing Athena v1 class devices, so I would urge you to take a look at the Dragonfly and judge it for yourself.

Contact your Computacenter Account Manager to find out ways we can help you understand more about Project Athena.

Data – The new Rock’n’Roll

“Data is the new oil”

“The most valuable currency in the world is not money, it’s information”

– A couple of great quotes written by people much more eloquent than me. However, I do have one of my own:

Data is the new rock’n’roll

Just as rock’n’roll transformed the music scene, the use, and future potential use, of information is dramatically changing the landscape of the datacentre. Historically the storage array was effectively the drummer of the band: required, but sitting fairly quietly in the background, and whilst a vital component it was not necessarily the first thing people thought of when putting the band together. Even now, if you look at a picture of any band, the drummer is the one hanging about aimlessly in the background; try naming the drummer in any large, well-known band – it’s much harder than you think. And so it was with storage and data: the storage array would sit somewhere towards the back of the datacentre whilst the shiny servers were the visible component, and the items that got the most attention.

As we hit 2013 that all changes; the storage array is the Kylie of the datacentre – the sexiest piece of equipment in there. And so it should be, given that upwards of 40% of a customer’s IT budget is spent simply on provisioning the capacity to house data.

At Computacenter, we’ve made a large investment in our Solution Centre. What sits in the front row now? Of course it’s the data arrays, with the latest technology from EMC, HP, HDS, IBM and NetApp all showcased. Why front row? Obviously because it’s the most important component of any solution nowadays. And of course, it looks sexy – or is that just me?

The storage array is now front and centre; it’s the first component to be designed when re-architecting an environment. Why? Simply because a customer’s data is their most valuable asset. It’s transforming the way people do business and changing the way we interact with systems and even each other; your data is now the lead singer in the band.

Data is the one thing getting attention within the business; it’s the one thing making the front pages of “Heat” magazine – Where’s it going? What’s it doing? Is it putting on weight? Is it on a diet? What clothes is it wearing? Should it be in rehab? But as the manager of the data (or the band) there is one simple question you want answered: how do I make money out of it?

And that, dear reader, is the $64,000 question. The good news is that it is becoming ever more possible to use your data as a revenue-generation tool. We are only starting to see business value being generated from data; as 2013 progresses we will see some niche players mature (and possibly be acquired), an increased push from the mainstream vendors, and ways of manipulating and using data that we just couldn’t contemplate when the storage was simply providing the rhythm section.

Even converged systems, the boy bands of the decade, which perform in harmony, always have one singer better than the rest – and he’s the data.

So: Compute, Networking, and Software, the gauntlet is down. Data is the new rock god; it’s the Mick Jagger to your Charlie Watts. You want the crown back? Come and get it – but for now it’s all mine.

All the data architects out there can join me as I sing (with apologies to Liam & Noel) “…Tonight, I’m a rock’n’roll star!”

How can a good enough network really be good enough?

Take a quick look at the currently popular enterprise networking infrastructure platforms and they all seem to share a similar predicament: almost without exception, the functionality is good, reliability levels are high and performance (in relevant terms) delivers against expectations.

The reasons for this rather stable state include a networking journey that embraced the pain of interoperability and standardisation many years ago, the common use of high-performance, off-the-shelf network processing ASICs (with a few notable vendor exceptions) and, until recently, no real need to change the status quo.

After numerous years of highly effective network solution design by extensively trained and highly talented network engineers – designs that embraced inherent technology limitations and extracted maximum performance – we now have our “good enough” networks. I reiterate that there are many great network engineers who underpin the largest enterprises in the world, make complex networking “just work” and deliver business outcome after outcome – helping in many cases to hide the fact that below the surface all is not as well as it may seem.

But surely, if you were given a blank sheet of paper and networking and security designs were architected with a clean view of the vendor landscape, plus tomorrow’s business outcomes as well as today’s, would you still design yesterday’s way? If the business outcomes of today and definitely tomorrow differ from the network usage of yesteryear, surely good enough can’t still be “good enough”.

A five-year-old network designed and configured for large volumes of directly connected servers with one-Gigabit interfaces surely won’t be good enough for a densely consolidated converged infrastructure requiring multiple ten-Gigabit network interfaces. Equally, a multi-layer network topology originally configured for hundreds or potentially thousands of physical servers, each with multiple physical network interfaces, has very different operational and performance characteristics to a distributed-switch, hypervisor-virtualised network layer.
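
To put rough numbers on that point, consider access-layer oversubscription – the ratio of potential server traffic to available uplink bandwidth. The port counts below are hypothetical, purely to illustrate the arithmetic.

    def oversubscription_ratio(server_ports, port_gbps, uplinks, uplink_gbps):
        """Potential server traffic divided by available uplink bandwidth."""
        return (server_ports * port_gbps) / (uplinks * uplink_gbps)

    # Yesterday: 48 one-Gigabit servers behind two ten-Gigabit uplinks.
    print(oversubscription_ratio(48, 1, 2, 10))   # 2.4:1 - workable

    # Today: the same chassis asked to serve 48 ten-Gigabit converged hosts.
    print(oversubscription_ratio(48, 10, 2, 10))  # 24:1 - nowhere near good enough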

The stage is set for good enough (or worse) networks to evolve in line with tomorrow’s application and business requirements. Software-defined networking (SDN), underpinned by the open standards aligned with the OpenFlow and OpenStack protocols and frameworks, may in time enable the granular levels of flexibility and capability required to personalise today’s “good enough” general-purpose networked infrastructure into outcome-specific networked topologies. This blog was set to discuss the well-crafted Cisco ONE strategy, which leverages the value delivered by OpenFlow and OpenStack and clearly positions a customer journey interfacing existing technologies with the emerging software network footprints, and equally the highly innovative HP VAN software-aligned network play, which weaves IMC and IRF tightly into those same open network software foundations to deliver tangible application-aligned networking.
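
For a flavour of what “software defined” looks like in practice, here is a minimal OpenFlow controller fragment written against the open-source Ryu framework – my choice for illustration, not one of the vendor strategies discussed above. It installs a lowest-priority “table-miss” rule that punts unmatched packets to the controller, which is where application-aligned forwarding decisions would then be made.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MinimalSwitch(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            # On switch connect, install a lowest-priority "table-miss"
            # flow that sends unmatched packets to the controller.
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            match = parser.OFPMatch()  # wildcard: match everything
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))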

But both of those great stories may now seem somewhat pale when compared to VMware’s shock acquisition of Nicira. Put simply, the world’s dominant x86 hypervisor vendor now includes a highly regarded SDN networking core that can be leveraged in numerous, as yet unannounced, ways that could potentially paint a new picture for enterprise networking. (I’ll save this for another blog.)

So “good enough networks” may in the not-too-distant future become a thing of the past. Will they ever be “perfect networks”? Unlikely, given the ever-changing nature of business and increasing levels of complexity – but could they become much more closely aligned with the levels of flexibility, adaptability and cost-effectiveness currently sought by enterprise network customers? “Quite possibly”…

And then they will be more than “Good enough”.

Until next time

Colin W

Twitter: @Colinwccuk