From the First Digital World Cup to Responsive Cloud Data Centers
Jul 1, 2014
As we move past the group stage of the 2014 FIFA World Cup Brazil, it is worth noting that this World Cup has set a new record for number of viewers worldwide, with a breakthrough performance in the United States and impressive progress in Europe, Asia and the rest of the Americas. Millions of football fans are using apps to place bets, stream live matches or highlights, and share content with their friends on social networks. In fact, nearly a third of viewers (30%) plan to watch the action unfold on a smartphone, tablet or smart TV.
For us techies, it is also notable that this World Cup is being touted as the most technologically sophisticated tournament ever, with products like the Adidas Smart Ball. Its built-in sensors monitor how hard the ball is struck, track its flight trajectory, and reveal impact points for penalties and corners. It syncs with the firm's miCoach app via Bluetooth and helps players learn and master various kicking and control skills during training.
The World Cup is a reflection of where the world is heading: an increasingly mobile population of people and things demanding always-on connectivity to share and receive information and content from anywhere, at any time, over any medium.
For information and infrastructure service providers, this means building highly available data center infrastructure that efficiently covers the areas they serve and gives them the flexibility to run their service applications anywhere across those data centers.
As a result, the new generation of cloud infrastructure is often built as a virtualized data center that spans multiple geographically distributed physical data center locations to pool and maximize global resources. Since the introduction of virtualized data centers, the ability to move workloads within and across physical data centers has been the Holy Grail of data center architecture design. The benefits are obvious: workload mobility enables flexible deployment of applications for high availability, disaster avoidance and recovery, and optimal distribution of server resources.
Existing data center technologies for workload mobility have been complex and disruptive. Moving a workload normally requires extensive planning and coordination, and can still result in application downtime. One of the challenges has been the data center network itself, which remains largely physical, rigid, and dependent on manual configuration.
Network virtualization, orchestration and automation enabled by Software Defined Networking technologies promise to make the network respond to dynamic changes in a much more agile manner. But not all SDN solutions are created equal: the ability to scale virtual networks across physical data centers varies greatly among SDN controllers.
Let’s take a step back and see how other things scale. As organizations grow, they scale through a hierarchical structure: first-line managers and workers handle detailed daily tasks at the micro level so that top executives can manage the whole organization at a macro level and think strategically about where it is heading. Similarly, if we look at the single largest-scale network in the world – the Internet – there has long been a differentiation between the devices at the edge of the network, close to the end users, and the devices at the core or backbone. The edge devices connect hosts and applications to the network, and they are built to provide rich features. The core devices connect routers and form a network of networks. They are designed to move packets in and out quickly and efficiently, largely thanks to the protocols they use: MPLS for efficient forwarding and BGP for route advertisement.
Since these protocols work so well in the wide area network to scale the Internet, why not use them, or similar principles, to scale software-defined networks? That is exactly why Pedro Marques, the founding engineer of Contrail Systems (now part of Juniper Networks), proposed an Internet-Draft to the IETF extending the BGP IP VPN model to serve as the signaling protocol for host-based overlay networks, along with an XMPP interface that bridges the software concepts familiar to end-points and those familiar to network equipment. Another advantage of using proven, scalable standards is that physical networks already understand these protocols, making it seamless to integrate virtual networks with physical networks such as the Internet or VPNs.
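The core of the BGP IP VPN model is route-target filtering: each virtual network exports routes tagged with route targets, and a receiving router imports only the routes whose targets match its own import policy. The following Python sketch illustrates that idea in miniature; the class names, addresses, and label values are illustrative assumptions, not Contrail's actual implementation.

```python
# Illustrative sketch of BGP IP VPN-style route-target filtering,
# the mechanism the overlay signaling model builds on.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VpnRoute:
    prefix: str                # tenant IP prefix, e.g. "10.1.1.0/24"
    next_hop: str              # physical address of the hosting server
    label: int                 # MPLS label (or VNI) selecting the tenant VRF
    route_targets: frozenset   # export targets attached by the advertiser

@dataclass
class VirtualRouter:
    import_targets: set
    routing_table: list = field(default_factory=list)

    def receive(self, route: VpnRoute) -> None:
        # Import only if the route's targets intersect our import policy,
        # just as a PE router filters BGP VPN routes by route target.
        if route.route_targets & self.import_targets:
            self.routing_table.append(route)

# A route exported by virtual network "red" from a server in one data center:
red_route = VpnRoute("10.1.1.0/24", "192.0.2.10", 100042,
                     frozenset({"target:64512:1"}))

# A vRouter elsewhere that participates in network "red" imports it:
red_vr = VirtualRouter(import_targets={"target:64512:1"})
red_vr.receive(red_route)

# A vRouter in network "blue" silently ignores it:
blue_vr = VirtualRouter(import_targets={"target:64512:2"})
blue_vr.receive(red_route)
```

Because import decisions are purely policy-driven, the same mechanism that isolates tenants within one data center also extends a virtual network across data centers: any vRouter with the matching import target receives the route, wherever it sits.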
Great minds think alike. Scott Shenker, a UC Berkeley professor widely regarded in the networking industry as one of the key thought leaders of SDN, believes that MPLS “got it right”. In his lecture at Stanford University, Software-Defined Networking at the Crossroads, he openly stated that SDN should incorporate MPLS: edge routers look at the full packet header and insert a label, and core routers switch based only on the label. This creates a clean network modularity that helps networks scale.
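That edge/core split can be sketched in a few lines: the edge classifies the full header once and pushes a label, after which the core forwards (and swaps) labels without ever touching the header again. The table contents and field names below are made up for illustration.

```python
# Toy model of MPLS-style modularity: rich classification at the edge,
# label-only switching in the core.

def edge_classify(packet: dict, label_map: dict) -> dict:
    """Edge router: inspect the full header, push a label."""
    label = label_map[(packet["dst"], packet["port"])]
    packet["label"] = label
    return packet

def core_switch(packet: dict, label_table: dict):
    """Core router: forward on the label alone, swapping it on the way out."""
    out_port, new_label = label_table[packet["label"]]
    packet["label"] = new_label   # label swap, as in MPLS
    return out_port, packet

# Hypothetical tables installed by a controller:
label_map = {("203.0.113.7", 443): 17}    # header fields -> ingress label
label_table = {17: ("port2", 23)}         # label -> (output port, new label)

pkt = {"src": "198.51.100.4", "dst": "203.0.113.7", "port": 443}
pkt = edge_classify(pkt, label_map)
out_port, pkt = core_switch(pkt, label_table)
# out_port == "port2", pkt["label"] == 23
```

The payoff is exactly the modularity Shenker describes: the core's forwarding state stays small and fast no matter how complex the edge policies become.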
Juniper Contrail was designed from the ground up to simplify the secure extension of virtual networks and policies across geographically distributed data centers, a key feature for enabling true workload mobility. Aside from following the principles discussed above, the Contrail Controller software follows a scale-out architecture in which its configuration, control and analytics modules can run on one or multiple nodes. The physically distributed nature of the Contrail Controller is a distinguishing feature. Because there can be multiple redundant instances of any node, operating in an active-active mode (as opposed to an active-standby mode), the system continues to operate without interruption when any node fails. When a node becomes overloaded, additional instances of that node type can be instantiated, after which the load is automatically redistributed. This prevents any single node from becoming a bottleneck and allows the system to manage very large-scale deployments – tens of thousands of servers.
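The active-active idea can be illustrated with a simple deterministic assignment scheme: every work item hashes to a live node, so a node failure automatically shifts its share to the survivors, and adding a node rebalances the load. This is a hedged sketch of the general pattern only; the hashing scheme and node names are assumptions, not Contrail's actual mechanism.

```python
# Minimal sketch of active-active scale-out: work is spread across all
# live instances, and membership changes redistribute it automatically.
import hashlib

class ActiveActivePool:
    def __init__(self, nodes):
        self.nodes = sorted(nodes)

    def owner(self, work_item: str) -> str:
        # Deterministically map each item to one of the live nodes.
        h = int(hashlib.sha256(work_item.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def fail(self, node: str) -> None:
        # No standby promotion needed: items just hash to survivors.
        self.nodes.remove(node)

    def add(self, node: str) -> None:
        # Scale out: a new instance joins and takes a share of the load.
        self.nodes = sorted(self.nodes + [node])

pool = ActiveActivePool(["control-1", "control-2", "control-3"])
owner_before = pool.owner("virtual-network-red")
pool.fail("control-2")
owner_after = pool.owner("virtual-network-red")  # still served by a live node
```

Contrast this with active-standby, where a failure triggers an explicit failover to an idle replica: in the active-active model there is no promotion step and no idle capacity, which is what lets the same design absorb both failures and growth.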
Yet the Contrail Controller is also logically centralized: it behaves as a single logical unit despite being implemented as a cluster of multiple nodes. This greatly simplifies the management of data center networks.