When I set out to distil my thoughts on a Thinking Cloud, three chunks looked like they would be enough, but by the end there were still loose ends remaining – the combination of a lot to say, a big subject and perhaps the limitations of the blog format. So here I want to draw together a few of those loose ends and attempt to extract some shape and character from the ramblings so far – hence the pseudo-science of phrenology in the title.
The full potential of cloud computing is immense and will bring about a bigger revolution than the development of the web – of that I have no doubt. Supporting this change is a cloud infrastructure which has also yet to fully develop. Today it means little more than storage and processing to support relatively simple applications within the web, but it will begin to draw in more powerful tools like distributed grid computing, parallelism and utility computing – we’ve seen mere hints of this with programmes like SETI@home and its screen-saver, which crunches massive amounts of astronomical data.
As the cloud becomes more conscious, with true cognitive powers to rival the memory it has today, the additional requirements will fall on the cloud network. It will no longer be acceptable for the cloud moniker to simply say “I’m so technical you don’t need to know what’s inside” – the network will need to take on many of the characteristics of the infrastructure and, more importantly, of the applications that also sit within the cloud. We saw in Part I what this may mean for applications; in Part II we saw that the capability to link the network to applications exists in 3D networking; and in Part III we looked at how the telecommunications market is beginning to change in ways which may unlock some of this power. But I hadn’t really delved into what that might mean as we move to true cloud networks.
Today’s networks are fairly rigid, and the descriptions we use reinforce this – the superhighway suggests massive solid routes offering mass transit which took many years to design and build. This is not so very far from the truth, and this is where the tension lies. Laying fibre-optic cables in the ground requires long-term utility business models with deep pockets, while cloud computing is rather more darting, changing and transitory. The two need to be reconciled if the underpinning networks are to release the true potential of a thinking cloud. And this creates a problem – proximity to reality.
Applications people find it easy to shape solutions around people because they have direct communication; database and server people find the conversation not too challenging because they have a good proxy through the developers; network people, however, are several layers down, trying to manage a necessarily shared domain with competing requirements. It’s easy to see why telecommunications is seen as somehow disconnected from people’s day-to-day reality and not always very supportive.
Networks are like ogres – according to Shrek, at least – they are made up of layers. While network engineers may wrestle with a whole stack of layers, commercially we have tended to focus on a very narrow subset which fuses together the rest. The business case for networks has always focussed on the degree of interleaving, or contention, which can be achieved. Assume, for example, that only one in five people will be online at any one time, and that only one in ten of those is generating traffic at any instant while the rest are reading the web page or email that has just arrived – then a 50:1 contention ratio works for people who browse the web.
This doesn’t work in a thinking cloud – in fact, it doesn’t really work today as streaming, which necessarily demands a 1:1 contention ratio, and more pervasive computing take hold. Much smarter, more dynamic measures need to evolve – and quickly. The 3D capabilities of next-generation networks provide part of the answer and forge the critical link between applications and the higher layers of the network, but they do not address the rather transitory and sometimes sudden demands for bandwidth and routes. New technologies are needed at the lower layers which offer the same flexibility we are beginning to see at layer two through standards like Active Line Access, and which will significantly increase the network’s ability to bend and stretch.
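To see why a static contention model breaks down, consider what happens when even a modest share of subscribers switch from browsing (shareable at 50:1) to streaming (1:1). The subscriber counts, line rate and contention figures below are illustrative assumptions chosen to make the arithmetic plain:

```python
def capacity_needed_mbps(subscribers: int, browse_share: float,
                         stream_share: float, line_rate_mbps: float = 10,
                         browse_contention: float = 50,
                         stream_contention: float = 1) -> float:
    """Backhaul capacity required for a mix of browsers and streamers.
    Each group's demand is its aggregate line rate divided by the
    contention ratio that traffic type can tolerate."""
    browsing = subscribers * browse_share * line_rate_mbps / browse_contention
    streaming = subscribers * stream_share * line_rate_mbps / stream_contention
    return browsing + streaming


# 1,000 subscribers on 10 Mbps lines, all browsing:
print(capacity_needed_mbps(1000, 1.0, 0.0))   # prints 200.0 (Mbps)

# Shift just 10% of them to streaming:
print(capacity_needed_mbps(1000, 0.9, 0.1))   # prints 1180.0 (Mbps)
```

A 10% shift in behaviour multiplies the required capacity nearly sixfold – which is the argument for networks that can re-shape themselves around demand rather than being engineered to a fixed contention figure.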
Some of these tools are beginning to emerge. Companies like InTune Networks are beginning to release solutions which can do with light what we are beginning to do with virtual networks: wavelengths on demand which are able to respond to the demands of people and the applications they use. In many ways what InTune are creating are “cloud switches”, where the switching fabric is dissolved into a cloud of light. Implementing these kinds of tools in the JON marketplace described in Part III would unleash wavelengths on demand just as the current plans release VLANs on demand. In the future, transparent optical cross-connects promise to dynamically connect whole fibres in the way that InTune is able to switch wavelengths. At that point the only rigid element of networks will be the ducts the cables pass through.
As optical technology develops, the concept of a cloud network will grow and become more supple. The result will be a network which can morph more easily around people and their demands, and which can optimise the capacity at each and every layer. This in turn means network managers will be able to sit around the board table and become a constructive part of the business cycle – no longer the group which is too removed from people (not a criticism, by the way) and has too many competing demands on its networks to be in a position to offer constructive support.
For countries which get this early, the impacts will be huge: new research opportunities in photonics and network design which pioneer new markets, and an economy with much of the rigidity removed, able to draw on the creativity of all its ideas.