The New Network
Explore Juniper’s vision for network innovation and how the company and industry are shaping the future with the New Network

Q&A With David Yen, EVP and GM of Juniper's Fabric and Switching Technologies Business Group

by BPLewisJNPR on 09-15-2010 02:20 PM

As you know, over the past year we’ve been advocating a New Network vision that includes new technologies, partnerships, and even a venture fund to move tomorrow’s network innovations forward at a more rapid pace.

 

Recently, I was able to sit down with David Yen, Executive VP and General Manager of Juniper’s Fabric and Switching Technologies Business Group, who shared his perspective on Juniper’s 3-2-1 architecture and what it means for the future of data centers. This is the first part of our conversation; I’ll be sharing the second part in the weeks ahead. Hope you enjoy David’s insight.

 

Q: It’s been a busy year at Juniper. As the head of our Fabric & Switching Technologies Business, can you sum up what Juniper has laid out for the public in this area?

A: One major theme we’ve been advocating since our ‘New Network’ launch in October 2009 is the realization that technology has evolved enormously in just about every facet of the IT industry – from the mega data center to the consumer. It's time for a new network to fulfill the promise of today's technology while fostering future innovation. And we believe the most important thing—the first step of this process, particularly as it applies to data centers—is to simplify the data center. Then you can control your cost. You can scale. And you have a much better foundation to automate the network.

 

Q: Let’s talk a bit about the data center. What has fundamentally changed there in the past few years?

A: For decades, the perennial challenge for data center managers has been to strike a balance between user experience and economics. With a new network, you actually have a chance to enhance both. As everybody has seen over the last decade or so, there have been very significant changes in the data center, particularly on the application, user services, and programming-style sides. Over the last 10 years, applications have evolved away from the popular client-server model toward the Web 2.0 style: you tend to touch a number of applications written by different people, possibly running on different servers.

 

With service-oriented architectures shortening time to market, people no longer develop their applications completely from scratch. Furthermore, modern applications have fundamentally changed the workload in the data center. In the client-server era, people typically used very capable UNIX servers to run particular application services. A given service might be rendered by various processes within the server, dealing with one or more databases also contained within, and controlled by, that server. So the network traffic in the data center primarily ran “north and south,” between the server and the client it served.

 

Now look at the data center architecture, which today is primarily a multi-layer Ethernet switching tree: the servers sit at the bottom, or “south,” of the tree, while clients come in from the top, so traffic runs north and south. With the advent of Web 2.0 and service-oriented architecture, along with increasingly capable x86 processors that commoditize the server hardware, plus the popular practice of designating a particular server to run a particular application, a typical application in the data center may now interact with several servers and deal with one or more databases. Each of these databases was probably created by different people for different purposes, and the application simply aggregates all of them to provide the service. As the workload grows, this tree gets bigger and more complex.

 

Q: So the data traffic patterns are shifting and data center networks have yet to catch up to this?

A: Well, the interesting thing here is that all the traffic that was previously contained within the server is now exposed outside of it, on the network between servers and storage. That puts a lot of new types of stress on the data center network. Furthermore, if you refer back to this tree-like structure, it's no longer just north-south traffic between the server and the client. Today, as much as 75 percent of network traffic in the data center involves interactions among servers and storage, so it becomes an “east-west” traffic pattern.

 

Factor in server consolidation, data center consolidation, new types of content service providers, and cloud computing, and the data center becomes bigger. When the data center gets bigger, interconnecting all of its resources becomes a huge undertaking. This change in traffic patterns, combined with the increase in data center resources, suddenly makes the tree-like structure data centers have relied on for the last 23 years very slow.

 

Q: Why is it too slow? 

A: Because every layer of switching introduces latency. With a tree-like structure and the need to connect more data center resources, layers are added to connect switches, which themselves are connecting server and storage resources. And with the tree-like structure, when sending packets from one server to another, you have to travel all the way up the tree and back down again—in other words, traffic has to travel north and south first in order to move east and west. For east-west traffic, this north-south path introduces extra hops and delays, which are undesirable. Imagine what that means in an equity-transaction environment where each delay or bit of latency has a financial impact.
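To make David’s point concrete, here is a minimal back-of-the-envelope sketch. It assumes (these are illustrative numbers, not Juniper figures) that servers hang off leaf switches, that two servers on different branches must travel up through every tier to a common ancestor and back down, and that each switch hop adds a uniform delay:

```python
# Illustrative sketch: east-west hop count in a multi-tier switching tree.
# Assumption: servers under different top-level branches must traverse
# every tier going "north," then every tier coming back "south."

def east_west_hops(tiers: int) -> int:
    """Switch hops between servers on different branches of a
    `tiers`-deep tree: up through each tier, then back down."""
    return 2 * tiers

def path_latency_us(tiers: int, per_hop_us: float = 10.0) -> float:
    """Total switching latency, using an assumed uniform per-hop delay."""
    return east_west_hops(tiers) * per_hop_us

for tiers in (1, 3, 5, 7):
    print(f"{tiers} tier(s): {east_west_hops(tiers)} hops, "
          f"~{path_latency_us(tiers):.0f} us of switching latency")
```

Even with a generic 10-microsecond per-hop figure, the five- and seven-tier trees David describes below multiply the east-west path latency several times over compared with a flat, one-tier fabric—which is exactly the delay that matters in an equity-transaction environment.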

 

If you look at the tree-like structure in the data center, when you have layers of these switching boxes composing the whole tree, it becomes extremely complicated. At most of the major enterprise data centers you visit today, you will frequently find at least four or five switching layers; we have seen customers with as many as seven. It becomes very complicated to manage. And furthermore, it is obviously very expensive, both financially and in terms of power.

 

So it’s with these drawbacks and challenges in mind that we looked at how to enhance the data center network architecture to improve the experience and economics for our customers.

 

In our next post, David and I will discuss Juniper’s take on the data center and how our approach is both different from our competitors and better for customers.

 
