SDN and NFV Era

Network Requirements for Telco (NFV) Clouds

by Juniper Employee, 02-20-2017 01:02 PM (edited 02-25-2017 11:43 AM)

The network fabric is an integral element of the NFV Infrastructure (NFVI), as defined by the ETSI architectural framework for building NFV clouds. In my previous blog in this series, I mentioned that the architectural blocks required for building the network fabric for an NFV cloud are similar to those used by cloud providers for building their cloud offerings. Let’s examine what these are:

 

A Resilient, Scaled-Out, Open IP Fabric for the Underlay:

 

While Layer 2 technologies can certainly be leveraged to build small clouds, cloud providers have settled on Layer 3 designs to build scaled-out leaf-spine data center fabrics. IP-based fabrics offer better scalability, often scaling to thousands of racks. NFV solutions promise elasticity: the ability to turn up services (VNFs) on demand. Service providers building Telco Clouds may start with a small footprint, but they need an underlay design that allows for future growth, and IP fabrics are better suited to building scaled-out clouds. The QFX portfolio provides a rich suite of standards-based protocols with ECMP-based load balancing to build a scaled-out underlay.
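To make the scaling argument concrete, here is a minimal sketch of how leaf and spine port counts bound the size of a 3-stage leaf-spine fabric. The port counts and oversubscription target below are hypothetical examples, not QFX specifications; larger fabrics simply add spine stages or planes.

```python
# Rough sizing of a 3-stage leaf-spine (Clos) fabric.
# All port counts and the oversubscription target are illustrative
# assumptions, not the specifications of any particular switch.

def fabric_capacity(leaf_ports: int, uplinks_per_leaf: int,
                    spine_ports: int, oversub_target: float) -> dict:
    """Estimate racks and servers supported by a simple leaf-spine design."""
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf  # ports facing servers
    max_leaves = spine_ports                  # each spine port connects one leaf
    oversub = server_ports_per_leaf / uplinks_per_leaf  # assumes equal port speeds
    return {
        "max_racks": max_leaves,              # one-leaf-per-rack simplification
        "servers_per_rack": server_ports_per_leaf,
        "max_servers": max_leaves * server_ports_per_leaf,
        "oversubscription": oversub,
        "meets_target": oversub <= oversub_target,
    }

# Example: 32 server ports + 8 uplinks per leaf, 64-port spines, 4:1 target.
print(fabric_capacity(leaf_ports=40, uplinks_per_leaf=8,
                      spine_ports=64, oversub_target=4.0))
# -> 64 racks, 2048 servers at 4:1; adding a spine tier scales this further.
```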

 

Service providers currently deliver network functions on dedicated hardware appliances that offer carrier-grade reliability. As these services migrate to a virtualized delivery model, customers' demand for five-nines (99.999%) availability is not relaxed. Service providers who do not place enough importance on always-on service risk customer attrition and loss of revenue. While carrier-grade reliability places demands on the VNFs themselves and requires a holistic approach to delivery, the network plays a crucial role in offering an always-on service.

 

Tenant Isolation with Network Virtualization:

 

The NFV service delivery models allow multiple tenants to be hosted on the same NFVI (NFV Infrastructure). For example, with NFVIaaS, multiple service providers may be hosted on the same cloud infrastructure, each delivering their own NFV services. Even in a single-tenant environment, individual VNFs may need to be isolated from one another; network slicing, for example, is a key tenet of the emerging 5G radio access standard. In all of these scenarios it is paramount that the physical network can host virtualized network slices and that the traffic in each slice is completely isolated.
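A common way to realize this kind of isolation on a shared fabric is to give each tenant or slice its own overlay segment, for example an EVPN-VXLAN VNI with its own routing instance. The sketch below is a hypothetical illustration of that bookkeeping; the tenant names, VNI range, and route-target scheme are assumptions for illustration, not Juniper product behavior.

```python
# Minimal sketch: one overlay segment (VNI) and routing instance per
# tenant slice, so slices never share a broadcast domain or route table.
# Names, the VNI range, and the route-target scheme are illustrative.

from dataclasses import dataclass, field

@dataclass
class Slice:
    tenant: str
    name: str
    vni: int            # VXLAN network identifier carrying this slice's traffic
    route_target: str   # EVPN route target confining its routes to its own VRF

@dataclass
class FabricOverlay:
    next_vni: int = 10000
    slices: dict = field(default_factory=dict)

    def add_slice(self, tenant: str, name: str) -> Slice:
        """Allocate a dedicated VNI and route target for a tenant slice."""
        vni, self.next_vni = self.next_vni, self.next_vni + 1
        s = Slice(tenant, name, vni, route_target=f"target:65000:{vni}")
        self.slices[(tenant, name)] = s
        return s

overlay = FabricOverlay()
print(overlay.add_slice("operator-a", "vEPC"))   # VNI 10000
print(overlay.add_slice("operator-b", "vCPE"))   # VNI 10001
# Distinct VNIs and route targets per slice keep tenant traffic separated
# end to end across the shared physical fabric.
```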

 

Another important point: while the full agility of NFV is realized when VNFs are hosted on virtualized compute, we certainly see VNFs being hosted on bare-metal compute as well. In that scenario the edge of the network slice originates and terminates on the ToR switch connected to the bare-metal server, not on the vSwitch or vRouter residing on the hypervisor. A key requirement for the network fabric in an NFV deployment is therefore to provide seamless interconnectivity between VNFs hosted on virtualized and bare-metal compute.
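In overlay terms, the same slice has to be stitched at two kinds of tunnel endpoints: the hypervisor's vRouter or vSwitch for virtualized VNFs, and the ToR switch for bare-metal VNFs. Continuing the hypothetical bookkeeping above (device names and the single-VNI model are illustrative assumptions):

```python
# Minimal sketch: one slice (VNI) attached at two kinds of VTEPs, a
# hypervisor vRouter for a virtualized VNF and a ToR switch for a
# bare-metal VNF. Device names and the VNI value are illustrative.

from dataclasses import dataclass

@dataclass
class Attachment:
    vtep: str        # tunnel endpoint stitching the workload into the overlay
    vtep_kind: str   # "vrouter" (hypervisor) or "tor" (bare metal)
    vni: int         # overlay segment the workload is placed in

SLICE_VNI = 10000    # VNI allocated to this slice in the earlier sketch

attachments = [
    Attachment("vrouter-compute-12", "vrouter", SLICE_VNI),  # virtualized VNF
    Attachment("tor-rack-07", "tor", SLICE_VNI),             # bare-metal VNF
]

# Because both endpoints join the same VNI, virtualized and bare-metal VNFs
# in the slice reach each other over the fabric without leaving the overlay.
assert len({a.vni for a in attachments}) == 1
```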

 

Finally, the network should also support service chaining of VNFs so that customizable services can be built.

 

Enabler for VNF Migration:

 

Telco Clouds need to support the live migration of VNFs within a data center and across data centers. Workload migration may be initiated to balance the load on compute resources, in response to the failure of a compute resource, or during maintenance windows. While much of the onus for providing this capability resides with the VNFs and the management and orchestration layers of the Telco Cloud, the network fabric, comprising the leaf, spine, and edge routers, needs to be an enabler for this flexibility.

 

Programmability:

 

SDN and NFV are complementary technologies. The full potential of NFV relies on the ability to turn services up and down at the velocity of a few clicks on a web portal. Business or customer intent must translate into action in the network in an automated way, not rely on an army of network staff to provision the network. To achieve this, the network layer must:

 

  • Provide a comprehensive programmable interface: Every action (configuration or operational) that can be undertaken via the CLI should be available via a programmatic interface that facilitates automation. Moreover, these programmatic interfaces should be standards-based where standards exist. For example, to enable vendor-neutral automation of the network, the network gear must support the YANG models published by OpenConfig and the IETF (see the sketch after this list).
  • Be Open: The network layer should be built on open, standards-based technologies that allow easy integration with whichever open SDN controller the service provider chooses. Any proprietary solution that locks the service provider into a closed stack will prevent best-of-breed selection and will likely be more expensive in the long run.
  • Easily Integrate with Automation Platforms: Many open source efforts (and certainly a few startups) have emerged in recent years to provide platforms for automating networking resources, Ansible and SaltStack among them. The network should integrate easily with whatever best-of-breed automation regime the service provider decides to deploy.
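To illustrate what such a standards-based programmatic interface looks like in practice, here is a minimal sketch that uses the open-source ncclient library to pull interface state over NETCONF, filtered by the OpenConfig interfaces YANG model. The device address, credentials, and the assumption that the target switch supports the openconfig-interfaces model are hypothetical.

```python
# Minimal sketch: query interface state over NETCONF using the
# OpenConfig interfaces YANG model. Host, credentials, and model
# support on the target device are assumptions for illustration.

from ncclient import manager  # pip install ncclient

# Subtree filter rooted at the openconfig-interfaces model.
OC_INTERFACES_FILTER = """
<filter>
  <interfaces xmlns="http://openconfig.net/yang/interfaces"/>
</filter>
"""

def fetch_interfaces(host: str, user: str, password: str) -> str:
    """Return the device's interface data as an XML string."""
    with manager.connect(
        host=host,
        port=830,                 # standard NETCONF-over-SSH port
        username=user,
        password=password,
        hostkey_verify=False,     # lab-only shortcut; verify host keys in production
    ) as conn:
        reply = conn.get(filter=OC_INTERFACES_FILTER)
        return reply.data_xml

if __name__ == "__main__":
    print(fetch_interfaces("192.0.2.1", "admin", "admin-password"))
```

The same pattern extends to configuration changes (edit-config with a YANG-modeled payload), which is the kind of primitive that higher-level automation platforms build on.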

Hardened for the COLO:

 

Telco clouds require customer proximity and will be hosted in hundreds of colocation (COLO) facilities rather than in a few national or regional data centers. The network switches should be GR-63-CORE and GR-1089-CORE compliant and built to meet the stringent physical and electrical demands of Telco cloud deployments.

 

We would love to have a dialogue on how Juniper can help you with your NFV deployment needs. In my next blog, I will detail how the QFX portfolio of switches from Juniper can help you build scalable, open, elastic and programmable Telco clouds. 

 

For more information on Juniper QFX Series Switches, click here.

For more information on ETSI requirements for Network Infrastructure, click here.

For more information on NFV use cases defined by ETSI, click here.
