How can a good enough network really be good enough?
A quick look at the current popular enterprise networking infrastructure platforms reveals that they all share a similar predicament: almost without exception the functionality is good, reliability levels are high, and performance (in relevant terms) delivers against expectations.
The reasons for this rather stable state include a networking journey that embraced the pain of interoperability and standardisation many years ago, the common use of high-performance off-the-shelf network processing ASICs (with a few notable vendor exceptions) and, until recently, no real need to change the status quo.
After many years of highly effective network solution design by extensively trained and highly talented network engineers, who embraced inherent technology limitations and extracted maximum performance, we now have our “good enough” networks. I reiterate that there are many great network engineers who underpin the largest enterprises in the world, make complex networking “just work” and deliver business outcome after outcome, helping in many cases to hide the fact that, below the surface, all is not as well as it may seem.
But surely, if you were given a blank sheet of paper and networking and security designs were architected with a clean view of the vendor landscape, plus tomorrow’s business outcomes as well as today’s, would you still design yesterday’s way? If the business outcomes of today, and certainly of tomorrow, differ from the network usage patterns of yesteryear, then surely good enough can’t still be “good enough”.
A five-year-old network designed and configured for large volumes of directly connected servers with one-Gigabit interfaces surely won’t be good enough for a densely consolidated converged infrastructure requiring multiple ten-Gigabit network interfaces. Equally, a multi-layer network topology originally configured for hundreds, potentially thousands, of physical servers, each with multiple physical network interfaces, has very different operational and performance characteristics to a distributed-switch, hypervisor-virtualised network layer.
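To put that capacity shift in rough numbers, here is a back-of-the-envelope sketch. All the figures (server counts, NIC speeds, consolidation ratio) are illustrative assumptions, not measurements from any real estate:

```python
# Illustrative back-of-the-envelope comparison (all figures are assumptions).

# Yesterday: 40 physical servers, each with a single 1 Gbit/s interface.
legacy_servers = 40
legacy_nic_gbps = 1
legacy_aggregate_gbps = legacy_servers * legacy_nic_gbps  # 40 Gbit/s across 40 access ports

# Today: the same workloads consolidated onto 4 virtualised hosts,
# each with two 10 Gbit/s interfaces.
hosts = 4
nics_per_host = 2
nic_gbps = 10
consolidated_aggregate_gbps = hosts * nics_per_host * nic_gbps  # 80 Gbit/s through just 8 ports

print(legacy_aggregate_gbps, consolidated_aggregate_gbps)  # 40 80
```

Each access port in the consolidated design carries ten times the traffic of its predecessor, concentrated on a fraction of the ports, which is precisely the shift a five-year-old access-layer design never anticipated.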
The stage is set for good enough (or worse) networks to evolve in line with tomorrow’s application and business requirements. Software-defined networking (SDN), underpinned by the open standards aligned with the OpenFlow protocol and the OpenStack framework, may in time enable the granular flexibility and capability required to personalise today’s “good enough” general-purpose networked infrastructure into outcome-specific network topologies. This blog was set to discuss the well-crafted Cisco ONE strategy, which leverages the value delivered by OpenFlow and OpenStack and clearly positions a customer journey interfacing existing technologies with the emerging software networking footprint, and equally HP’s highly innovative VAN software-aligned network play, which weaves IMC and IRF tightly into those same open network software foundations to deliver tangible, application-aligned networking.
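As a flavour of what that granularity looks like, here is a minimal, purely illustrative sketch of the SDN idea: a controller holds match/action flow rules centrally, and the data plane simply looks them up per packet. The `FlowRule` and `Controller` names, fields and actions are hypothetical and are not tied to any vendor’s OpenFlow implementation:

```python
# Hypothetical sketch of an SDN-style flow table: a central controller holds
# match/action rules; the "switch" looks them up for each packet.

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict       # header fields to match, e.g. {"dst_port": 443}
    action: str       # what the data plane should do with matching traffic
    priority: int = 0

class Controller:
    def __init__(self):
        self.flow_table = []

    def install(self, rule: FlowRule):
        """Push a rule to the data plane, kept sorted highest priority first."""
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, packet: dict) -> str:
        """Return the action of the highest-priority rule matching the packet."""
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "drop"  # default policy: unmatched traffic is dropped

ctrl = Controller()
ctrl.install(FlowRule({"dst_port": 443}, "forward:core-uplink", priority=10))
ctrl.install(FlowRule({"vlan": 20}, "forward:storage-fabric", priority=5))

print(ctrl.lookup({"dst_port": 443, "vlan": 20}))  # forward:core-uplink
print(ctrl.lookup({"vlan": 20}))                   # forward:storage-fabric
print(ctrl.lookup({"dst_port": 80}))               # drop
```

The point of the sketch is the separation of concerns: policy (which traffic gets which treatment) lives in one programmable place, so an application-specific topology becomes a set of installed rules rather than a box-by-box reconfiguration exercise.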
But both of those great stories may now pale somewhat in comparison with VMware’s shock acquisition of Nicira. Put simply, the world’s dominant x86 hypervisor vendor now includes a highly regarded SDN networking core that can be leveraged in numerous, as yet unannounced, ways that could potentially paint a new picture for enterprise networking. (I’ll save that for another blog.)
So “good enough networks” may, in the not too distant future, become a thing of the past. Will they ever be “perfect networks”? Unlikely, given the ever-changing nature of business and increasing levels of complexity. But could they become much more closely aligned with the levels of flexibility, adaptability and cost-effectiveness currently sought by enterprise network customers? Quite possibly…
And then they will be more than “Good enough”.
Until next time