Physical Servers to Virtual Servers to—Now—Serverless
May 1, 2017
While participating in the AWS Summit in San Francisco a few weeks back, I was amazed at the number of new AWS services available for developers to build their cloud applications. I also noticed a trend toward “serverless” computing discussed at many of the AWS sessions, including talks by AWS CTO Werner Vogels and AWS CEO Andrew Jassy.
Clearly, AWS anticipates widespread adoption of serverless computing and wants to shift the paradigm of how enterprises develop their applications. The onus is now on enterprises to make a tradeoff: either they avoid the increasing lock-in imposed by cloud platforms (since this new paradigm means ceding more responsibility to non-standard cloud platforms), or they embrace it as an accelerator for feature velocity, allowing them to focus strategically on their differentiating application code.
After the Summit, I reflected on the journey from physical to virtual to, now, serverless computing. The implications of this shift in cloud architectures are astounding.
What is Serverless Computing?
Virtualization dramatically improved economics and dynamism by bringing infrastructure-as-a-service (IaaS) to enterprises: cloud providers pooled their infrastructure resources and offered compute, storage, and networking as a utility, bringing enterprise applications to the public cloud. Hyperscale cloud providers are innovating at light speed, moving from IaaS to platform-as-a-service (PaaS) to offer platform-level middleware services that remove the complexity of managing and maintaining the underlying infrastructure.
The main goal of PaaS is to make application development more agile for enterprises by absorbing common middleware services, creating greater value and stickiness for the providers' cloud platforms. Now they are taking yet another step with serverless computing, a phase designed to further decouple the enterprise application developer from the underlying infrastructure while allowing cloud providers to create even more stickiness for their platforms.
The prime enabler of serverless computing is the Function-as-a-Service (FaaS) capability, which scales and securely executes code in run-time containers in response to real-time events, with no underlying infrastructure to manage. Amazon has AWS Lambda (introduced in 2014); Microsoft has Azure Functions (2016); Google has Google Cloud Functions (2016); and IBM has Bluemix OpenWhisk (2016). In the serverless computing paradigm, developers build their applications out of trigger functions that respond to events. This relieves enterprise application developers of undifferentiated and tedious infrastructure complexity, allowing them to focus on their strategic asset: differentiated application code. As developers concentrate on these trigger functions, cloud providers deliver just the right amount of compute, storage, networking, security, high availability, and auto scaling, along with maintenance such as software and security patches. AWS also offers the AWS Step Functions service, introduced in December 2016, to compose AWS Lambda functions into pipelines for building complex applications. In essence, cloud providers want to hide the infrastructure from application developers and move them toward an event-driven serverless computing paradigm. Apart from the potentially disruptive technology shift, this model has positive economic implications for enterprises.
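To make the trigger-function idea concrete, here is a minimal sketch in the style of an AWS Lambda Python handler. The `(event, context)` signature matches Lambda's Python runtime; the S3-style event shape and the bucket/key names are illustrative assumptions, not taken from any real deployment.

```python
import json

def handler(event, context):
    """Invoked on demand, once per event; no server is provisioned or managed.

    The event shape below mimics a hypothetical S3 object-created
    notification, purely for illustration.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Differentiated application logic goes here; scaling, patching,
        # and availability are the cloud provider's concern.
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}
```

The function exists only while an event is being serviced; between events there is nothing running and nothing to pay for, which is the crux of the paradigm.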
In this serverless paradigm, enterprises don't have to deploy VMs/containers upfront to serve end-user events or requests. The trigger functions are fired up on demand, when needed, to serve an event; those trigger functions then disappear once the event is serviced. Since AWS hosting costs typically count as cost of goods sold (COGS) for many SaaS applications, this model reduces those costs, improving margins and/or driving down prices for end users. For many cloud-born enterprises building cloud-native applications, this paradigm makes a lot of sense, especially since achieving application feature velocity to capture market share is paramount.
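The economics above can be sketched with back-of-the-envelope arithmetic. The FaaS rates below are roughly AWS's published 2017 list prices ($0.20 per million requests plus $0.00001667 per GB-second of execution); the workload figures and the $0.10/hour always-on instance are illustrative assumptions.

```python
REQUEST_PRICE = 0.20 / 1_000_000   # $ per invocation (assumed 2017 list price)
GB_SECOND_PRICE = 0.00001667       # $ per GB-second of execution (assumed)

def monthly_faas_cost(invocations, avg_duration_s, memory_gb):
    """You pay only while a trigger function is actually running."""
    compute = invocations * avg_duration_s * memory_gb * GB_SECOND_PRICE
    requests = invocations * REQUEST_PRICE
    return compute + requests

# Hypothetical workload: 3M requests/month, 200 ms each, 512 MB of memory.
faas = monthly_faas_cost(3_000_000, 0.2, 0.5)

# Versus a hypothetical always-on VM billed 24x7 regardless of load.
vm = 0.10 * 24 * 30

print(f"FaaS: ${faas:.2f}/month vs always-on VM: ${vm:.2f}/month")
```

Under these assumed numbers the on-demand model costs a small fraction of the always-on instance; the gap narrows as utilization rises, which is why the fit is best for spiky, event-driven workloads.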
What are the Implications on Cloud Networking?
As this paradigm shift occurs at the higher layers, it is imperative to understand the implications for the underlying infrastructure layer. For cloud providers, revenue is generated at the higher layers; the infrastructure layer is merely a means to an end. However, these providers are well aware that the infrastructure and architectural choices they make are strategic differentiators for delivering the higher layers. This is evident from the high infrastructure spending (CAGR of 9.6%, according to Heavy Reading’s SDN and NFV Market Tracker, September 2016), as well as the secrecy in preserving details of their infrastructure architectures.
By hiding the infrastructure from higher-layer applications, cloud providers take on the burden of building a dynamic infrastructure that can be driven in a software-defined manner. As they absorb the security, scalability, and high availability of on-demand containers, along with their connectivity across multiple availability zones, the networking infrastructure interconnecting compute and storage nodes within and between data centers needs to be built on what I call “application-driven cloud networking” architectures. To bring dynamism to the infrastructure, these architectures must support the following characteristics:
Intent-based automation using programmable platforms to create an agile and responsive network layer that supports on-demand trigger functions whose traffic must flow within and across data centers.
Platforms that can feed real-time analytics to logically centralized SDN controllers, enabling them to make real-time decisions that meet the needs of the trigger functions.
Application-driven routing technologies like segment routing that enable applications to request application-specific, SLA-constrained paths that are approved by centralized SDN controllers, regardless of how trigger function pipelines are built across data centers.
The bottom line is that leading cloud providers are strategically pursuing serverless computing, creating a paradigm shift in how enterprises develop cloud-native applications. It also raises a question: can the networking layer deliver the needed capabilities fast enough for cloud providers to differentiate themselves by supporting higher-layer innovations?