Data Centre EMEA: Fabrics, TRILL, SPB, and Unnatural Acts: A History of Technology Evolution
I spent last week in Silicon Valley, which inevitably involves more than 10 hours squeezed into a fairly uncomfortable airline seat. The compensation is that it is a good chance to catch up with old friends, as the Sunday flights are known as the network express: you will struggle to avoid sitting next to a competitor. It is also a chance to catch up on some films, since trying to work with the previously mentioned competition sitting so close is very difficult. Usually the flight can deliver three movies and a comedy TV show before the host of repetitive announcements in the last hour makes watching anything impossible. This flight was into a headwind, which meant I waded through four films and a documentary.
The point of this travelogue is that the documentary covered the history of the "Traitorous Eight", the group of bright young engineers who left Shockley to found Fairchild Semiconductor and develop the silicon transistor. Among the group were Gordon Moore (he of Moore's Law fame) and Robert Noyce, who went on to found Intel. It made me wonder about a similar documentary on the group who left PARC and founded many of the networking companies we now know, and how different they would find the current environment.
I am not sure they would find things that different, although the fact that the technologies they developed to connect terminals to processors are now used to connect everything to everything else would probably be a surprise to them. Many of the systems we use were never designed for what we use them for now, but that was always the way.
Some time ago, into a world where the dominant networking technologies were Ethernet and Token Ring delivering a maximum of 16Mbps, came ATM, which not only offered bandwidth of 155Mbps and beyond but also promised the holy grail of voice integration. The challenge was that ATM was built around a connection-oriented stream of small fixed-size cells, whereas a traditional LAN was built around a connectionless model using large packets. To make ATM work like a LAN we had to make it do unnatural things, and so LAN emulation was born. At this point, imagine that suddenly everyone wants to fly everywhere but all we have are ships, so we set about making the ships fly.
Just when you thought things could not get any worse, the problem of how to make an ATM network look like a router arrived. This sent the brains within every network company into overdrive. All of a sudden we had a host of solutions from a wide variety of organisations, all unique and none of them a standard. A complete meltdown was on the cards as everyone tried to dance to the ATM tune. Just in time, one organisation developed the Layer 3 Ethernet switch, which was the most obvious solution. Overnight, all attempts to make ATM work like Ethernet ceased, the market for L3 switches was born, and the rest is history.
Fast forward 15 years and the building block of all networks is the L3 Ethernet switch. In the intervening years network design has become hierarchical and tiered. We are quickly approaching the point where, within the data centre, we need a new type of network. Server consolidation and mass virtualisation mean we need a simple any-to-any network. Ironically, we need something much more like a fully meshed ATM network, so the obvious first step is to try to make an Ethernet network look like a fabric. There are a number of methods available to do this, including TRILL and SPB, and various manufacturers are aligning behind each of them.
So, here we are again, trying to make technology do things it was never designed to do, and as before we will try to get it to do even more. We are heading inexorably towards the point where things become so complex that no two implementations or data centres will be the same. We have reached the point where someone needs to stand up, say "No, enough is enough", and come up with the right solution. Luckily someone has: this month Juniper Networks began shipping its QFabric solution for the data centre network. QFabric is an infrastructure that is high capacity, low latency and, above all, simple.
Someone once said that "ATM is like a duck: it can fly, walk and swim, but does none of them very well". When we tried turning it into a local area network, it was like trying to make a duck rebuild a gearbox. History shows us that making technology do what it was not designed for is a recipe for disaster.
So, will trying to make Ethernet switches act like a fabric be successful? Maybe, but only until customers try out Juniper Networks' QFabric.
So which way will you go: obvious simplicity or unnatural acts? Let me know.