Centralizing applications at large scale in consolidated data centers, along with the growing size of software-as-a-service deployments, creates the need to deploy thousands of 1GbE/10GbE ports connecting servers and storage devices. Service providers need a way to terminate application servers without building layers of separately managed networking devices. They also need to configure services from a central location and automate the provisioning of their network devices. This is driving demand for a device-based solution that can control thousands of network ports from a single point and interface with service orchestration systems.
My colleague Russell Skingsley has an interesting take on the effects of latency on virtualized application performance. The purpose of virtualization is to optimize resource utilization, and as Russell pointed out, this isn't just an academic conversation. For a cloud hosting provider, it's about revenue maximization: network latency directly affects virtualized application performance, and therefore the provider's revenue. Your choice of network infrastructure will impact your business, but it can be difficult to see how investing in high-performance networking will pay off. I will try to connect the dots.
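To make the latency-revenue link concrete, here is a back-of-the-envelope sketch (my own illustration with made-up numbers, not Russell's figures). It assumes a virtualized application whose front end makes several sequential calls to back-end VMs; each call pays the network round-trip time (RTT), so total response time, and therefore how many requests a worker can complete per second, scales with fabric latency.

```python
def response_time_ms(compute_ms: float, backend_calls: int, rtt_ms: float) -> float:
    """Total response time: application compute plus one RTT per sequential back-end call."""
    return compute_ms + backend_calls * rtt_ms

def requests_per_sec(compute_ms: float, backend_calls: int, rtt_ms: float) -> float:
    """Throughput of a single worker handling requests back to back."""
    return 1000.0 / response_time_ms(compute_ms, backend_calls, rtt_ms)

# Same hypothetical workload (2 ms compute, 10 sequential back-end calls)
# on two fabrics with different round-trip latency:
low_latency = requests_per_sec(2.0, 10, 0.01)   # 10 us RTT fabric
high_latency = requests_per_sec(2.0, 10, 0.5)   # 500 us RTT fabric

print(f"low-latency fabric:  {low_latency:.0f} req/s per worker")
print(f"high-latency fabric: {high_latency:.0f} req/s per worker")
```

With these assumed numbers, the lower-latency fabric completes more than three times as many requests per worker. Since a hosting provider bills for work done on the same hardware, that throughput gap translates directly into revenue.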