If your organization is going through a data center transformation project, you are probably evaluating your options for switching infrastructure. As you design your network to support virtualized compute infrastructure and the rollout of new application deployments, the choice of switching platform becomes a critical decision. One of the most versatile switching platforms on the market is Juniper’s QFX Series of 10GbE/40GbE devices. I’ve had the opportunity to talk with many of our customers about the projects they are using QFX switches for and why they chose them over other options in the market. I’d like to share some of these examples with you.
For the past few years, enterprise organizations have been evaluating and implementing a series of new technologies to ensure application performance and increase productivity while keeping budgets under control. These technologies promise greater agility in rolling out new applications that deliver game-changing services; they help the organization understand the business and make timely, well-informed decisions; and they meet the organization’s changing needs as it adapts to moves, consolidations, and mergers.
With the rapid growth in the adoption of server virtualization, new requirements for securing the data center have emerged. Today’s data center contains a combination of physical and virtual servers, and with the advent of distributed applications, traffic often travels directly between virtual servers and might never be seen by physical security devices.
Network virtualization is a growing topic of interest, and for good reason: as networks scale to meet the challenges of cloud computing, they are running up against VLAN scaling limitations. Several network overlay technologies have been released that seek to address these scaling challenges and enable workload mobility. One of them is VXLAN, which its proponents say can meet the requirements for network virtualization. While it sounds good on the surface, it might create a few problems of its own. With VMworld happening this week in San Francisco, I’m sure network virtualization will be a hot topic, especially considering the VMware Nicira news, so I thought I’d comment on it and offer some thoughts and options.
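To put a number on the VLAN scaling limitation mentioned above: the 802.1Q VLAN ID is a 12-bit field (with two reserved values), while VXLAN (RFC 7348) carries a 24-bit VXLAN Network Identifier. A quick back-of-the-envelope comparison:

```python
# Segment-ID space: 802.1Q VLANs vs. VXLAN VNIs
vlan_id_bits = 12                      # 802.1Q VLAN ID field width
vxlan_vni_bits = 24                    # VXLAN Network Identifier width (RFC 7348)

vlan_segments = 2**vlan_id_bits - 2    # IDs 0 and 4095 are reserved -> 4094 usable
vxlan_segments = 2**vxlan_vni_bits     # roughly 16.7 million segments

print(vlan_segments, vxlan_segments)   # prints: 4094 16777216
```

That jump from roughly four thousand to roughly sixteen million tenant segments is the core scaling argument for overlays like VXLAN.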
Are you and your colleagues headed to VMworld next week? Stop by and meet the Juniper team at booth #1517, where we’ll be talking about the security and network architectures needed to move to an agile, virtualized data center. It promises to be an exciting show with all of the changes happening in network virtualization. I’m looking forward to some interesting keynotes and sessions, as well as catching up with friends in the industry.
With VMworld coming up, I’m reminded of a top-of-mind subject: virtual machine mobility. The reason for moving virtual machines is to better allocate server resources and maintain application performance, and it’s a useful technology that works well within the data center. We also hear a lot about the need to move virtual machines across the WAN, live, without losing sessions. This is known as Long Distance vMotion, or generically as long-distance live migration. It might sound like a good idea, but it gets complicated once you think outside the data center walls and across the WAN. It creates complexity in the network, because maintaining sessions requires the virtual machine to keep the same IP address and MAC address after the move. There are many proposed use cases for it, but is it such a good idea?
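The session-preservation constraint above can be made concrete. A TCP connection is identified by its 4-tuple of source IP, source port, destination IP, and destination port; if a live-migrated VM comes up with a new address, established connections no longer match and are dropped. A minimal sketch (the addresses and port numbers are invented for illustration):

```python
# A TCP session is identified by (src IP, src port, dst IP, dst port).
# If any element changes after a live migration, the session breaks.
def session_survives(tuple_before, tuple_after):
    """A session survives only if its 4-tuple is unchanged."""
    return tuple_before == tuple_after

before = ("10.0.0.5", 33200, "203.0.113.9", 443)        # VM in data center A
after_same_ip = ("10.0.0.5", 33200, "203.0.113.9", 443)  # L2 stretched: IP kept
after_new_ip = ("10.1.0.5", 33200, "203.0.113.9", 443)   # VM re-addressed in B

print(session_survives(before, after_same_ip))  # prints: True
print(session_survives(before, after_new_ip))   # prints: False
```

Keeping that 4-tuple intact across sites is exactly why long-distance live migration pushes people toward stretching Layer 2 over the WAN, with all the complexity that entails.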