The New Network
Explore Juniper’s vision for network innovation and how the company and industry are shaping the future with the new network

Server Virtualization and the Path to Enlightenment

by Juniper Employee 02-18-2011 08:26 AM - edited 02-18-2011 04:17 PM

The pace of change in the data center is brisk to say the least.  One of the most significant drivers of change is the broad adoption of server virtualization, which is designed to allow multiple applications to independently co-exist on the same physical server.  There have been many different approaches to server virtualization in the past:  “envelopes” in MVS (z/OS); Mainframe Domain Facility from Amdahl; Dynamic System Domains and Containers from Sun; and so forth.  Today, the preferred solution is to use hypervisors to encapsulate applications and their operating system instances inside a virtual machine (VM).


It may seem like hypervisors such as VMware’s ESX have sprung out of nowhere.  In fact, the hypervisor has been more than 45 years in the making and can be traced back to a 1964 R&D project at IBM’s Cambridge research facility running on a modified IBM System/360 Model 40 mainframe.  Initially known as CP-40 and later as CP/CMS, it was eventually released as IBM’s first fully supported hypervisor in 1972 under the name VM/370.  Although it remained in the shadow of MVS, VM/370 proved to be the OS that customers would not let IBM kill off.  Today, it is known as z/VM and runs on IBM’s z-series mainframes.


The “modern” history of hypervisors began when Mendel Rosenblum, an associate professor at Stanford, and a few of his students created a hypervisor for x86 servers as a graduate project.  Mendel then teamed up with his wife, Diane Greene, to start VMware.  In the beginning, the significant overhead required to run the hypervisor limited its use to test and development environments.  This changed when researchers at Cambridge University championed para-virtualization and open-sourced Xen.  Combined with the hardware support that Intel (VT-x) and AMD (AMD-V) baked into their processors, the required system overhead dropped to under 10% and the hypervisor exploded into the production world.
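That processor-level support is visible to software as CPU feature flags: Intel VT-x advertises itself as vmx, and AMD-V as svm.  As a minimal sketch (the helper name is mine, and it assumes the Linux-style /proc/cpuinfo format), you could detect it like this:

```python
def hw_virt_extension(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None, based on the
    flags line of a Linux-style /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like "flags : fpu vme ... vmx ..."
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real Linux host you would feed it the actual file:
# with open("/proc/cpuinfo") as f:
#     print(hw_virt_extension(f.read()))
```

If neither flag is present (or the extension is disabled in the BIOS), a hypervisor falls back to the much slower software techniques described above.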


Today there is a wealth of hypervisors to choose from: ESXi, Hyper-V, Xen and KVM on x86 servers, plus a set specific to various UNIX boxes and mainframes.  Thanks to this ubiquity, almost all companies have implemented some form of server virtualization.


At my previous employer, I was a VMware customer and had the opportunity to interact with a number of their customers.  What I noticed is that most businesses embrace server virtualization in three stages, what I call “the path to enlightenment.”  In the first stage, IT is seeking to tame server sprawl through server consolidation.  When a server runs a single application, average utilization of that physical server is generally 5%-8%.  Using VMs to isolate the applications from each other, multiple applications can co-exist on a single server, increasing hardware utilization to 25%-35% (or more if you are good or lucky).  This made it possible to actually reduce the number of servers, bucking the trends of the last several decades.  Stage one: Consolidation – saving capital costs.
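The consolidation arithmetic above is easy to sketch.  This toy calculation (the function name and the flat, additive-utilization assumption are mine, not from the post) estimates how many one-app-per-box servers a single virtualized host can absorb:

```python
def consolidation_ratio(per_app_util_pct, target_util_pct):
    """Rough number of single-application servers one virtualized host
    can absorb, assuming each app's load simply adds up.  Integer
    percentages keep the arithmetic exact."""
    return target_util_pct // per_app_util_pct

# A typical single-app server at ~6% utilization, consolidated onto a
# host driven to ~30% utilization:
print(consolidation_ratio(6, 30))   # 5 apps per host, a 5:1 reduction
```

Real capacity planning is messier (peak vs. average load, memory, I/O), but even this crude ratio shows why consolidation buckled the server-sprawl trend.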


During this initial stage, virtualized server pools are generally small and configurations are static, with VM migration limited to once or twice per year to facilitate maintenance.  The security model is simplistic with a very limited number of VLANs and zones implemented within the server pool.  For the most part, the virtualized applications are limited to non-critical apps.  In this first phase, the legacy data center network proves to be adequate.


At some point during the first stage, IT realizes there is a greater benefit than CAPEX savings – agility.  It begins when IT discovers that provisioning new virtual “servers” to meet the needs of the business groups can now be performed in hours rather than the weeks or months typically required for new physical servers.  Suddenly IT is a hero – they are exceeding their “customers’” expectations.  Now the business can move faster and IT can be more responsive.  New capabilities come online in less time.  Resources can be added quickly to respond to changes in demand, while applications that did not work out can be taken off-line and the resources easily reallocated.  Stage 2: Agility – for the infrastructure and the business.


Finally, as VMs become more dynamic, there is a third stage of enlightenment – resilience.  The ability to pick up and move an application safely and dynamically can also be used to build a more resilient infrastructure without having to resort to complex HA (high availability) clusters; now, HA can be delivered to all applications in the data center.  Stage 3: Resilience – keeping the business running.


As customers move into the second and third stages, the pools of virtualized servers grow in size, and they find that a single, larger resource pool is both more efficient and more agile than multiple smaller pools.  The environment becomes more dynamic, with VM migration becoming commonplace in order to facilitate workload balancing and resilience.  Many or even most of the applications become virtualized, including the critical apps.  It is at this stage we start to see big Oracle databases being virtualized — not because they will share the server with other apps but because they can now be easily moved to another server.  And finally, because of the number of applications, there needs to be a more sophisticated security model.  The number of VLANs and security zones implemented within the server pools grows dramatically.


It is at this point, as customers move from the consolidation stage to the agility and resilience stages, that they have an epiphany.  The legacy hierarchical network embedded in their data center is the single greatest impediment to achieving the promise of the virtualized data center.  And that, my friends, will be the subject of my next posting.

by Aboobacker sidique (anon) on 04-11-2011 09:34 PM

Hi,


I have finished my CCNP and now work at Telelogix on an Etisalat project in data centre operations.  Most of the customers here use Juniper routers, firewalls, and so on; they say Juniper is much cheaper than Cisco and provides the same functionality.  I realize I need to concentrate on Juniper devices as well, so could anybody kindly advise me on how to begin with a certification?

by Distinguished Expert on 04-12-2011 03:34 AM



The "Fast Track" program has a set of study materials for the exam that you can follow.  This includes an e-learning module with some exercises on the CLI.

The "Day One Fundamentals" series books will also help.

There is also a practice test on the education site where you can see how you are doing.

You can ask certification-specific questions in the forums here.

Steve Puluka BSEET
Senior Network Administrator
MCP 70-290 - Managing Server 2003
MCTS Windows 7
