In the first part of my ‘Thinking Cloud’ series, I considered the shape of cloud computing and what it really means – suggesting that perhaps the cloud metaphor was being stretched a little too far. In this second part I return to ground closer to my normal stomping ground, and consider what impact the changes in Part I will have on the underlying infrastructure – how will broadband and the internet need to change to support “cloud thinking”?
There is a view that “next generation broadband” is simply an evolution of all that has gone before but I agree with David Brunnen in his recent blog that we aren’t looking at an upgrade – this change is much more fundamental than that.
Ten years ago Evans and Wurster wrote an excellent book, Blown to Bits, looking at some of the reasons the dot-com bubble burst and what strategies could help you survive and prosper. One of the concepts they discuss is how all markets involve a trade-off between what they call reach and richness – market reach, and the ability to customise and tailor a solution. All businesses sit somewhere along that line, with perhaps McDonald’s at one end, having global reach but little ability to offer you anything other than what’s on their globally fixed menu, and Savile Row tailors at the other, who would find it impossible to become global businesses but can offer you precisely what you want.
Applying this model to telecoms is quite telling – the curve is disjointed, with commodity broadband offerings at one end and corporate and wholesale products at the other, with few if any offerings in between.
At the consumer end sit xDSL products, where differentiation is limited to little more than brand and contention; at the other end are wholesale Ethernet products attached to MPLS clouds – and there is almost nothing in between. Not only is this unhelpful for customers and for large sections of the telecommunications industry, it is no longer necessary – next generation networks have the capability to break this model, if the market environment is allowed to change with it.
Back in the late 1990s I did some work on what we then called “3D networks”. Network technologies have traditionally been 2D, with a trade-off between geographical reach and speed – dial-up links can cover great distances but at slow speeds, while the new 100 Gbit Ethernet standard over copper looks set to cover just 10m. This was a useful model when considering copper-based networks, but it is somewhat pointless for fibre networks, since there is a variant of every standard from 100 Mbps to 100 Gbps able to cover 40km or more, with no signs of abating – the research into terabit Ethernet looks likely to follow the same trend. (Note the log scale in the chart below.)
The third dimension we considered was policy – the ability to tune and shape the environment to support network users. This becomes informative when contrasting existing infrastructure and next generation technologies – setting aside the net neutrality arguments for a moment. With the capability to deliver seemingly endless bandwidth but with a richness previously only available to the wholesale and large corporate markets, next generation broadband offerings fundamentally change the shape of the market.
The NICC is currently formalising the Active Line Access standard for NGA networks – at its core this will require the delivery of at least four VLANs to each customer, currently each with five qualities of service. That shifts broadband environments from monochromatic to a spectrum of 20 colours. With distance removed as a barrier, the broadband market will never be the same again.
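The “20 colours” arithmetic can be made concrete with a toy sketch – note that the VLAN and QoS labels below are invented for illustration, not taken from the NICC specification:

```python
# Toy illustration of the "20 colours" arithmetic: four VLANs per
# customer, each carrying five qualities of service. The labels are
# hypothetical; only the 4 x 5 structure comes from the text above.
from itertools import product

vlans = [f"VLAN-{i}" for i in range(1, 5)]                    # 4 virtual LANs
qos_classes = [f"QoS-{j}" for j in range(1, 6)]               # 5 service qualities

# Each (VLAN, QoS) pair is one distinct "colour" of broadband service.
colours = list(product(vlans, qos_classes))
print(len(colours))  # 4 x 5 = 20
```

Each combination could, in principle, be sold or delegated to a different service provider – which is exactly the point made below.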
How does this affect “cloud thinking”?
Today’s internet can be a pretty hostile environment for applications. So before we go any further, let me tackle the net neutrality debate head on. There is an interesting difference in sentiment between users of corporate networks and users of the internet. In the corporate world, finely tuned 3D networks are a good thing – they ensure applications run optimally and the business runs smoothly. On the internet, traffic shaping is seen as a severe curtailment and fundamentally wrong. Why?
Because internet service providers use policy-networking techniques to minimise the impact their customers have on their expensive networks. This is fundamentally bad, and I fall 100% behind the supporters of net neutrality – today.
However, there is a fundamental difference in NGA, or at least in the way the UK is approaching it. With at least four VLANs, your ISP can’t monopolise your access to the IP world – you have choice. Today, if a media company is as unhappy as you are about the way their video streams are tuned down, there is very little they can do about it – in an NGA world, they can simply choose to bypass the internet.
And this will happen – the world of service providers is about to become a whole lot richer. It will no longer be synonymous with internet service providers; they will be joined by games service providers, healthcare service providers, and a whole raft of others.
How do I know this isn’t just a pipe dream? Content delivery networks (CDNs) are rapidly becoming the biggest international transit companies, overtaking many traditional internet transit companies. The customers of CDNs are the same media and cloud computing companies that are affected by today’s ISP policies. These companies are seeing their brand value tarnished and their business opportunities curtailed – if they could bypass the internet and form a relationship with you directly, they would. NGA networks allow them to.
So just as we start to consider a thinking cloud, we can also begin to consider a network environment able to support and nurture it. In the past the network architect was often the expensive naysayer; with bandwidth no longer a barrier, and networks able to be conscious of users and their applications, network architects will become an essential part of the creative process.
In the next part I start to explore what happens within the original cloud as we see it soften its edges and become more nebulous and adaptable.