Today’s traditional network infrastructure resembles a tree. In the data center, network infrastructures generally include an access layer, an aggregation (distribution) layer, a core layer, and an edge services layer. Connectivity fans out like branches from the core, through these multiple tiers of switching, to connect the myriad devices in the data center. This tree structure topology exists in almost every data center in the world. Where did this architecture come from?
It originated with the local area network.
The Ethernet switch was originally designed to solve the LAN problem. If you’ve ever worked with coaxial cables in a local area network, you’ll remember what a nightmare it was. The Ethernet switch was a major breakthrough, greatly simplifying network deployments and becoming the basic building block of all subsequent networks. The tree structure was created to provide the level of fan-out required to support and connect all the clients in the local area. At the top of the tree was the workgroup server; almost all network traffic moved “north and south” between the clients at the bottom of the tree and the workgroup server at the top. As the workgroup grew, the tree got bigger.
Eventually, as networks grew to incorporate remote locations and evolved from local to wide area networks, the workgroup servers were moved back into the data center, where they were easier to manage. At the same time, Ethernet became reliable enough, fast enough, and cheap enough to displace alternative technologies such as SNA, DECnet, and Token Ring in the data center network itself. When this happened, it became possible to take the same topology that had proved so successful in the local area network and apply it to the data center in a multi-tier tree structure. For the most part, this architecture worked fine in a client/server world, since most traffic ran north and south, between the server and the client.
However, with the advent of Service Oriented Architectures, or SOA-based applications, there was a fundamental change in the traffic patterns of the modern data center. As the web browser took hold, a piece of the client-side processing was pulled back into the data center while, at the same time, the server-side application was disaggregated. As a result, what was once a fairly monolithic application became a set of federated services interconnected by the network, enabling greater application scalability and flexibility. Not only did this dramatically increase the number of servers, it also fundamentally changed data flow patterns in the data center. Traffic that had once moved through an internal IPC mechanism, communicating through the memory of a single server, was now exposed as network traffic between servers. This was further exacerbated when storage was virtualized and traffic that had once been contained on an internal SCSI bus was now carried over the network.
Thus, whereas 95% of network traffic within the client/server data center was north-south, today as much as 80% of network traffic is now east-west. And while traffic out to a client interfacing with a human can tolerate a certain amount of latency, this is not true of east-west traffic. In fact, traffic between servers and between servers and storage is extremely sensitive to latency, which has a direct impact on the delivered behavior of applications. Add the fact that there has also been an exponential increase in data traffic on data center networks and it becomes clear that it is time to rethink the legacy tree structures.
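The latency argument can be made concrete by counting switch hops. Here is a minimal sketch (with hypothetical pod and leaf groupings, not any specific product's topology): in a classic three-tier tree, an east-west packet between servers in different pods must travel up through access, aggregation, and core switches and back down again, while in a two-tier leaf-spine fabric every server-to-server path crosses at most one spine.

```python
def tree_hops(src_pod, src_access, dst_pod, dst_access):
    """Switch hops between two servers in a three-tier tree.

    Assumed layout: servers attach to access switches, access switches
    group into pods under aggregation switches, and pods interconnect
    through the core.
    """
    if (src_pod, src_access) == (dst_pod, dst_access):
        return 1  # same access switch
    if src_pod == dst_pod:
        return 3  # access -> aggregation -> access
    return 5      # access -> aggregation -> core -> aggregation -> access


def fabric_hops(src_leaf, dst_leaf):
    """Switch hops in a two-tier leaf-spine fabric: at most leaf-spine-leaf."""
    return 1 if src_leaf == dst_leaf else 3


# Worst-case east-west path: 5 hops in the tree, never more than 3 in the fabric.
print(tree_hops(src_pod=0, src_access=0, dst_pod=1, dst_access=0))  # 5
print(fabric_hops(src_leaf=0, dst_leaf=7))                          # 3
```

The point of the sketch is that the tree's path length, and therefore its latency, varies with where the two servers happen to sit, while the fabric keeps every east-west path short and uniform.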
At Juniper, we believe that a fabric, not a tree structure, is the ideal topology for the data center. Check out this animation to learn more about how Juniper’s approach can help you build a more efficient data center and better support your business: