The entire premise of multicloud is that infrastructure shouldn't reside on an island, connected to the rest of the network but still separate from it. Resources should be largely fungible, allowing users to access workloads running in different pools without a noticeable change in experience.
From a networking perspective, this means that networking teams must have a unified view of policy and control across the entire network: from the public and private cloud, to the cloud on-ramps, to the campus and branch.
End-to-end means multivendor
Within a single domain (data center, for instance), networks of even moderate size and complexity are multivendor. Diverse operating requirements and economic leverage have driven many enterprises to multivendor strategies. It’s hard to imagine that the pressure for multivendor doesn’t increase when the boundaries extend well beyond a single domain.
For most enterprises, the network evolves somewhat organically over time, driven by individual refresh and expansion events, each typically in support of a specific business need. Plotting out a multi-year journey that depends on a single supplier can be prohibitively difficult; factor in the changing technology landscape, and difficult quickly approaches impossible.
This means that any end-to-end networking endeavor will necessarily be multivendor. And that means that multicloud represents more than just a technology change for any enterprise pursuing a cloudy future.
Multicloud is an operational condition
For the past couple of decades, most technology changes in networking have been unlocked primarily through the procurement of new solutions. The migration to leaf-spine architectures in the data center, for example, was an exercise in replacing legacy switches running spanning tree with a newer crop of devices running layer-3 protocols. The SD-WAN movement is driving the turnover of branch gateways to support hybrid WAN.
But multicloud is about more than just new technology deployed as a device in the network. It is defined by the singular approach to managing diverse pools of resources. As a technological foundation, multicloud is more of an operational condition than a description of the underlying infrastructure.
Operationally, multicloud requires a unified means of managing policy and control across the data center, campus, branch and public cloud. This represents a degree of integration between devices in the network that has historically not existed.
More than APIs
In the networking world, we tend to think about multivendor primarily as a function of APIs. This is true when integration means communication or loose coordination between devices. If all that is required is the exchange of information, APIs and protocols are sufficient.
But multicloud exists over the top of the infrastructure. It's about policy and control, which must then be enforced in the private cloud, in the public cloud, or at the campus or branch gateway. When capabilities and constructs differ across those environments, there is a need for abstraction.
Put differently, if edge policy must be specified in the language of the underlying devices, operations are not truly unified. Even if that policy is inserted from a single pane of glass, the necessary contextualization means that the operational model is fractured. An operator has to be aware of where the control is being enforced.
Instead, we believe multicloud needs to provide a means of abstracting policy and control, using intent-based models to define policy that is then translated into the underlying device primitives required to enact it. So whether control is being executed in a public cloud VPC or on a physical device at some remote site, operators need only interact with a single means of expression.
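To make the idea concrete, here is a minimal sketch of intent-based translation. All class names, fields, and output formats are hypothetical illustrations, not Juniper's actual data model: a single vendor-neutral intent is rendered into two very different backend primitives, a cloud security-group rule and a device ACL line.

```python
# Hypothetical sketch of intent-based policy translation.
# Names, classes, and output formats are illustrative only,
# not any vendor's real API or configuration language.
from dataclasses import dataclass


@dataclass
class Intent:
    """Vendor-neutral policy: allow `protocol` traffic on `port` to a workload group."""
    name: str
    target_group: str
    protocol: str
    port: int


def to_cloud_sg_rule(intent: Intent) -> dict:
    # Rendered as a security-group rule for a public-cloud VPC backend.
    return {
        "description": intent.name,
        "direction": "ingress",
        "protocol": intent.protocol,
        "port_range": [intent.port, intent.port],
        "target_tag": intent.target_group,
    }


def to_device_acl(intent: Intent) -> str:
    # Rendered as a CLI-style ACL line for a physical gateway backend.
    return f"permit {intent.protocol} any group {intent.target_group} eq {intent.port}"


intent = Intent(name="allow-web-to-app", target_group="app-tier", protocol="tcp", port=443)
print(to_cloud_sg_rule(intent)["port_range"])  # [443, 443]
print(to_device_acl(intent))                   # permit tcp any group app-tier eq 443
```

The operator expresses the intent once; the per-backend rendering functions carry the contextualization, so no one has to think in the language of each underlying device.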
Juniper Networks’ product thesis
Because we believe strongly in the operational tenets of multicloud, we have driven our product portfolio with these operational requirements in mind.
In the underlay, be it physical or virtual, on-premises or in the public cloud, our portfolio runs Junos. Junos offers open management interfaces, leveraging protocols like NETCONF and gRPC to provide a consistent way of integrating Juniper devices into over-the-top orchestration solutions. Whether or not Juniper is deployed in that multicloud orchestration role, Juniper devices are well-suited to serve in the underlay, with a common, standards-based interface set for northbound integration.
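As an illustration of what standards-based northbound integration looks like on the wire, the sketch below constructs a NETCONF `<get-config>` RPC (RFC 6241) using only the Python standard library. In practice, a NETCONF client library would manage the SSH session and framing; this shows only the protocol payload an orchestrator would send, and the `message-id` value is arbitrary.

```python
# Sketch: constructing a standards-based NETCONF <get-config> RPC (RFC 6241).
# A real client library handles session setup and framing; this shows only
# the XML payload an over-the-top orchestrator would send northbound.
import xml.etree.ElementTree as ET

# NETCONF base namespace, defined by RFC 6241.
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"


def build_get_config(source: str = "running") -> str:
    """Build a <get-config> RPC asking for the given datastore ('running' etc.)."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
    get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
    src = ET.SubElement(get_config, f"{{{NC}}}source")
    ET.SubElement(src, f"{{{NC}}}{source}")
    return ET.tostring(rpc, encoding="unicode")


print(build_get_config())
```

Because the payload is defined by an IETF standard rather than a vendor CLI, the same request works against any NETCONF-capable device in the underlay, which is exactly the property a multivendor orchestration layer depends on.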
And the fact that Junos runs in the data center, campus, branch and public cloud means that enterprises can employ a unified management approach across all the places in the network required to make multicloud a reality.
In the orchestration, visibility and security layers, Contrail Enterprise Multicloud represents our core multicloud offering. We designed it explicitly for multivendor environments, understanding fundamentally that the path to multicloud cannot be blocked by a requirement to unify the underlying hardware under a single vendor.
For companies interested in playing defense (blocking insertion), it makes perfect strategic sense to operate in a closed environment. Being truly open carries a risk of displacement, so other vendors offer only the promise that open solutions will arrive in some future upgrade, even though enterprises will likely take years to phase out legacy systems completely.
But multicloud as multivendor cannot wait for a complete hardware turnover that could take five to seven years across the whole of the enterprise. While it will take time for enterprises to form new operational practices, the products to underpin those practices are available today and can be deployed in natural refresh cycles. As the network becomes more capable, it can support more of the operational transition. This thoughtful progression from legacy to cloud to multicloud lets enterprises leverage their existing infrastructure en route to a multicloud future.