Last week Juniper was the lead sponsor at Cloud Net, London, co-hosted by the Open Networking Foundation. The Juniper team had several hours of conversations with service providers and ONF representatives discussing the role of the network in delivering cloud services, the current state of play, and the aspirations and potential of Software Defined Networking and OpenFlow. Below are my takeaways from the event.
There’s more to this than meets the eye
The concepts behind SDN in general and OpenFlow specifically are progressive and have great potential both within and outside the data center. Juniper has enabled OpenFlow through a Junos SDK application. OpenFlow aims to increase network functionality while lowering operating costs through simplified hardware, software, and management. Going forward, the reliability and scalability of the controller component will be key. We also need to understand the role of the controller and network devices, and their behaviour and communication paths, during start-up, network change and failure modes.
Then there is the question of centralised control. It can be argued that this approach limits scale and adds risk whereas a more federated approach to network management and device control could better orchestrate data flows and ensure the levels of security and performance required of today’s cloud networks end to end.
On-box or off-box? Is that really the question?
I felt a sense of a black or white context to some of the discussion; as if you had to be either for OpenFlow or against it. My personal view is that OpenFlow and the over-arching SDN concepts behind it have great merit. In fact, many of Juniper’s openness innovations of the past few years, such as the Junos operating system’s SDK, the Junos Space management SDK and API, and the EX Switching Virtual Chassis technology with eXternal Route Engine (XRE), are reflected in some of the SDN aspirations for ‘off box’ control and coordination. However, OpenFlow cannot be taken as a solution in isolation. An approach to ‘off box’ orchestration needs to identify the specific aspects of network operations best suited to centralised control and acknowledge those more suited to distributed and/or federated ‘on box’ functions. The combination of the two would be greater than the sum of its parts.
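To make the ‘off box’ control model concrete, here is a toy sketch of the split OpenFlow proposes: a centralised controller computes match/action rules and pushes them down to switches that forward purely from their installed flow tables, punting unknown traffic back to the controller. All of the names and the policy below are illustrative assumptions for this post, not the real OpenFlow protocol or any vendor API.

```python
class Switch:
    """A simple device: forwards only according to installed flow rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []              # list of (match, action) pairs

    def install_flow(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, packet):
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action             # e.g. an output port
        return "send-to-controller"       # table miss: punt the decision up


class Controller:
    """The centralised point that owns the forwarding decisions."""
    def __init__(self, switches):
        self.switches = switches

    def handle_miss(self, switch, packet):
        # Toy policy: traffic for 10.0.0.0/8 goes out port 1, else drop.
        action = "port-1" if packet["dst"].startswith("10.") else "drop"
        switch.install_flow({"dst": packet["dst"]}, action)
        return action


s1 = Switch("s1")
ctrl = Controller([s1])

pkt = {"dst": "10.0.0.5"}
first = s1.forward(pkt)                   # table miss: controller decides
if first == "send-to-controller":
    first = ctrl.handle_miss(s1, pkt)
second = s1.forward(pkt)                  # now handled locally on the switch
```

The sketch also shows where the questions raised above bite: every first packet of a new flow depends on the controller being reachable and responsive, which is exactly why controller reliability, scale, and behaviour during start-up and failure modes matter so much.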
Are we talking about the same thing?
Some of the discussions seemed to confuse the types of environments into which OpenFlow may be positioned. In response to a statement that OpenFlow may not be the answer to a specific problem, I heard the comment ‘if it’s good enough for Google’, as if this automatically gave weight to the counter-argument. As we know, Google has a specific data center and application environment that is very different to that of the enterprise or typical carrier. So much so, that if it is indeed ‘good enough for Google’, I would speculate it may not be appropriate for the service provider at all.
Why? In simple terms, Google operates a data center in which they run one application. This allows them to commoditise the hardware and provide resilience within the application. In a typical enterprise or service provider data center, where ‘off the shelf’ business applications are being run, the challenges and requirements of the underlying infrastructure in terms of performance, resilience and availability are very different. In short, an enterprise-grade data center is not the same as a commodity data center and we should acknowledge this in our thinking.
I wouldn’t start from here if I were you!
Juniper would be the first to agree that traditional networks are too costly, too complex and too difficult to manage. This is the basis of our drive for the new network. Many of the use cases put forward for OpenFlow purported to fix the complexities of the data center network through a management implementation, but assumed a traditional network topology as the starting point. The traditional data center network is the result of an historic approach to campus switch networks and is fundamentally flawed when it comes to supporting the largely ‘east-west’ traffic flows associated with virtualised data centers.
OpenFlow does indeed flatten and simplify the network from a control perspective, but this will not solve the application performance issues imposed by a 3-tier physical network on an elastic, virtualised environment, as discussed in the Nemertes report Containing Chaos: The Complexity Challenge (page 5).
The network MUST change. The network must be physically flattened; not just logically re-represented as if it were flat. With a flat physical network architecture the data center will be able to support true simplicity of operation, maximise application performance and abstract function from location (as opposed to traditional tree topologies that force functions such as storage and security to reside within the same ‘branch’ as the associated compute resources). It was great to have several opportunities to present Juniper’s QFabric architecture, which provides a massively simplified and flat network architecture to enable elastic cloud services.
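A back-of-the-envelope sketch illustrates why the physical topology matters for east-west traffic. The switch sizes below are assumptions chosen for illustration, not a description of any real deployment: in a 3-tier tree, the hop count between two servers grows with how far apart they sit in the hierarchy, while a flat fabric gives any-to-any reachability in a constant number of hops.

```python
def tree_hops(server_a, server_b, access_size=48, agg_size=4):
    """Link hops between two servers in an assumed 3-tier tree, where
    each access switch serves `access_size` servers and each
    aggregation switch serves `agg_size` access switches."""
    acc_a, acc_b = server_a // access_size, server_b // access_size
    if acc_a == acc_b:
        return 2          # server -> shared access switch -> server
    agg_a, agg_b = acc_a // agg_size, acc_b // agg_size
    if agg_a == agg_b:
        return 4          # up to the shared aggregation switch and back
    return 6              # all the way up through the core and back down


def fabric_hops(server_a, server_b):
    """In a flat, single-tier fabric, any server reaches any other
    through the fabric in the same small number of hops."""
    return 2
```

For example, `tree_hops(0, 1)` is 2 (same access switch), `tree_hops(0, 50)` is 4, and `tree_hops(0, 500)` is 6, while `fabric_hops` returns 2 for any pair. A virtual machine migrated to a distant rack in the tree pays the 6-hop penalty on every east-west exchange; in the flat fabric, placement no longer dictates performance, which is the point of abstracting function from location.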
I look forward to more sessions like this in the coming months to continue the discussion, and I hope the ONF will continue to work with vendors to understand the state of the art of networking so that OpenFlow can complement other progressive innovations in the networking industry.
Did you attend? What did you take away?
Head of Cloud and Managed Service Solutions Marketing, EMEA