2015 is fast approaching, and as usual this is the time to reflect on the year past and look forward to the development and growth possibilities in the New Year.
2014 has been a fabulous year for SDN and NFV technologies, but there are still real doubts about how exactly the virtualization transformation can be achieved. To continue the momentum in 2015, especially in carrier and cable organizations, technology advancement is imperative, but equally important is a change of mindset, not only at the leadership level but also down to departments and individuals. It is time to get out of our comfort zone and adopt a new mindset on service provisioning and operations.
One question I am frequently asked is "What if we fail?" Here "failure" can mean two things. Most of the time, it means nobody wants to buy a service, so it does not generate good revenue or return for the business. Other times, it means infrastructure hardware failure, a risk potentially introduced by COTS hardware, which does not guarantee five-nines (99.999%) availability. Those are valid questions and concerns. Indeed, service providers have been working very hard to make sure that their infrastructure and services don't fail, both to meet reliability and SLA requirements and to get good returns on their investment. Let's focus on the first meaning in this blog; we will explore availability in my next blog.
Google is one of the most innovative companies in the world, and it has created many innovative products and services such as Google Earth, Google Wallet, and Google Shopping Express. But even Google has had high-profile failures; remember Google Wave and Google Video? That is because failure is a necessary part of the innovation process. From failure come learning, iteration, adaptation, and the building of new models that lead to potential success. Almost all innovations are the result of prior learning from failures.
The key is to control the impact of failure: identify it, move on quickly, and reuse resources to minimize sunk cost. This has been hard to do in service innovation among service providers, because a new service offering means buying new special-purpose hardware appliances, hooking them up in the lab, configuring them, running various tests, and then moving to production. It can take months to get a new service offering ready, and if the service does not gain traction, there is no repurposing of the hardware appliances, because they are designed to run only one or a few services. All the appliances you bought become sunk costs. No wonder it is expensive to innovate!
One of the reasons Network Function Virtualization is so groundbreaking is that it encourages innovation by minimizing the impact of failure. Instead of running services on special-purpose hardware appliances, NFV allows Virtualized Network Functions to run on Commercial Off-The-Shelf (COTS) hardware. The virtualization layer, including hypervisors and network virtualization technologies such as SDN, pools all the hardware resources and presents an abstract view to the applications, in this case the service software.
So what if a service fails in this infrastructure? You wind it down and eventually terminate all virtual machines running the service software, releasing all compute, storage, and network resources associated with that service back to the resource pools so that they can be used by other services. Better yet, if the service software is priced by subscription or usage, you can stop paying for the software almost immediately.
On the bright side, if your service is wildly successful, the virtualized infrastructure also allows the service to scale out very quickly and easily.
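The fail-fast economics of pooled infrastructure can be sketched in a few lines. This toy Python model is purely illustrative (the `ResourcePool` class and its methods are my own invention, not any real orchestrator's API): it shows how terminating a failed service returns its capacity to the shared pool, and how a successful service scales out by drawing more from that same pool, with no new hardware purchase in either case.

```python
# Hypothetical sketch of NFV-style resource pooling. The classes and
# numbers here are illustrative only, not a real orchestrator API.

class ResourcePool:
    """Models a shared pool of COTS compute capacity (in vCPUs)."""

    def __init__(self, total_vcpus):
        self.total_vcpus = total_vcpus
        self.allocated = {}  # service name -> vCPUs currently in use

    @property
    def free_vcpus(self):
        return self.total_vcpus - sum(self.allocated.values())

    def provision(self, service, vcpus):
        """Carve capacity out of the shared pool for a service's VMs."""
        if vcpus > self.free_vcpus:
            raise RuntimeError("insufficient capacity in pool")
        self.allocated[service] = self.allocated.get(service, 0) + vcpus

    def terminate(self, service):
        """Wind down a service: its capacity goes back to the pool."""
        return self.allocated.pop(service, 0)


pool = ResourcePool(total_vcpus=64)

# Launch a new service experiment on pooled capacity, no appliances bought.
pool.provision("video-optimizer", 16)

# The experiment fails in the market: terminate it and reclaim resources.
reclaimed = pool.terminate("video-optimizer")
print(reclaimed, pool.free_vcpus)  # 16 64 -- the full pool is back

# A successful service scales out by drawing more from the same pool.
pool.provision("firewall-vnf", 8)
pool.provision("firewall-vnf", 24)  # scale out: still no new hardware
print(pool.allocated["firewall-vnf"], pool.free_vcpus)  # 32 32
```

The point of the sketch is the contrast with appliances: a failed experiment's only lasting cost is the time it ran, because every vCPU it held is immediately reusable by the next service.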
Thomas J. Watson, the longtime leader of IBM, is famously credited with the quote: "The way to succeed is to double your failure rate." With NFV, service providers can fail fast, and succeed faster.