Enterprise Cloud and Transformation
The consternation of disaggregation and integration
May 1, 2017

Most IT people are generally aware of disaggregation trends. The industry at large equates the decoupling of components to commoditization. And so the common refrain is that cheaper prices follow the disaggregation of parts of whatever IT stack is being discussed.
 
And while it is true that pricing tends to follow disaggregation, the actual implications of disaggregation are not usually talked about with much precision. 
 
A working definition
Let me start with a working definition for disaggregation. Think of disaggregation as nothing more than decoupling components that were previously tightly integrated. Important in this definition is that the why is not an integral part of disaggregation. There are several reasons to decouple things. 
 
As for common examples, the one everyone is familiar with is white box everything. In networking, the focus is white box switches, but IT has obviously seen white box servers already. And there is every reason to believe that white box routing will arrive before too long as well.
 
Why does disaggregation lead to pricing changes?
There are two reasons that pricing action follows disaggregation, and both have to do with decoupling how procurement happens.
 
First, a common product strategy is to use proprietary advantage to drive improved margins. Let’s imagine two components for the sake of this example: hardware and software. If the hardware is effectively commodity (functionally equivalent to other things in the market and easily interchangeable with those options), then the margins on the hardware would be low—let’s imagine in the 20-percent range. Meanwhile, if the software is proprietary and highly differentiated, it might be somewhere higher—gross margins on the order of 70%. 
 
If the two can be procured independently, then the margins for each will reflect their value and uniqueness. If, however, the two have to be procured as one bundle, the higher margin gets spread across both components. This allows the vendor to leverage the thing of unique value to prop up the part that is more commodity. 
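 
To make this arithmetic concrete, here is a minimal sketch in Python. The component costs and the margin percentages are illustrative assumptions, not figures from any particular vendor.

# Sketch of the bundling math: commodity hardware at roughly 20% gross margin,
# differentiated software at roughly 70%, sold separately or as one bundle.

def price_from_margin(cost: float, gross_margin: float) -> float:
    """Selling price that yields the given gross margin, where margin = (price - cost) / price."""
    return cost / (1.0 - gross_margin)

hw_cost, sw_cost = 800.0, 300.0                # assumed component costs
hw_price = price_from_margin(hw_cost, 0.20)    # commodity hardware priced at ~20% margin
sw_price = price_from_margin(sw_cost, 0.70)    # proprietary software priced at ~70% margin

bundle_price = hw_price + sw_price
blended_margin = (bundle_price - (hw_cost + sw_cost)) / bundle_price

print(f"hardware: {hw_price:.0f}, software: {sw_price:.0f}, bundle: {bundle_price:.0f}")
print(f"blended gross margin on the bundle: {blended_margin:.0%}")
# With only the bundle price visible, the ~45% blended margin hides the fact
# that the commodity half could be sourced elsewhere at commodity pricing.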
 
The second reason pricing tends to follow disaggregation is the way deals are negotiated. Imagine trading in a car as part of a new car purchase. The dealership is skilled at negotiations. They will try to get you to bundle everything into one negotiation. This is because they can let the complexity of the larger deal obfuscate the details of the sub-deals, which makes you less likely to negotiate fiercely to optimize individual outcomes.
 
The right way to handle this is to negotiate the value of the trade-in separately from the price of the new vehicle, because in a bundled negotiation you are operating without full information (what the trade-in is worth, what the dealer incentives are, and so on). Similarly, when the IT stack is disaggregated and hardware and software go through separate procurement processes, the buyer can negotiate each of those deals independently.
 
Pricing isn’t the only reason to disaggregate
While the industry tends to talk about disaggregation as a pricing thing, there are actually other reasons to decouple components. 
 
In a traditional layered architecture, the difference between decoupled and tightly integrated is one of interface boundaries. If a vendor is making coupled changes on both sides of an interface, it becomes difficult to separate the components. And even if you do separate the components, you have to manage versioned software running on each side, which effectively limits your ability to make independent decisions.
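 
As a rough illustration of that versioning burden, consider a hypothetical compatibility check like the one below. The component and version names are invented for the example; the point is simply that coupled changes on both sides of a boundary force someone to track which pairings actually work.

# Hypothetical qualification matrix for the two sides of one interface boundary.
COMPATIBLE = {
    # (hardware-side interface version, software-side interface version)
    ("asic-if 1.0", "nos-if 1.0"),
    ("asic-if 1.1", "nos-if 1.0"),
    ("asic-if 1.1", "nos-if 1.1"),
    # A coupled change shipped as 2.0 on both sides at once, so older pairings break.
    ("asic-if 2.0", "nos-if 2.0"),
}

def can_deploy(hw_if: str, sw_if: str) -> bool:
    """Return True only if this hardware/software pairing has been qualified."""
    return (hw_if, sw_if) in COMPATIBLE

print(can_deploy("asic-if 1.1", "nos-if 1.0"))  # True: the sides can be upgraded independently
print(can_deploy("asic-if 2.0", "nos-if 1.1"))  # False: the coupled change forces lockstep upgrades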
 
The reason for doing this is usually that new capabilities (either new features or expanded performance) require optimizing both sides of a boundary. This is most likely to happen when a product or market is nearer its inception. As products and markets mature, the consumer emphasis shifts from features and performance to price and convenience, which shifts the focus from difficult-to-build functionality to efforts that drive down cost and simplify deployment.
 
When a layered stack becomes mature, the boundaries will become more rigid. This simply has to happen if development on both sides of a boundary is to continue decoupled. And when this happens, the individual components can be revved much more quickly. So one of the big values of a disaggregated stack is increased velocity in the now decoupled components.
 
If you have any doubts here, consider the rate of innovation in applications during the mainframe era, the client-server era, and the current cloud era.
 
The role of interchangeability
If disaggregation leads to pricing and innovation advantages, an intriguing question is: does simply separating components necessarily lead to these advantages?
 
In a word, the answer is no. In the simplest case—the separation of hardware and software in networking—if the selection of either the hardware or the software limits the choice in the rest of the layers, the disaggregated purchasing benefits will not happen. The important attribute here is that within a layer, choices must be more or less interchangeable. 
 
Adding interchangeability as a core attribute in a disaggregated stack puts requirements on the interfaces between the components. For components to be interchangeable, those interfaces must be open. In this case, I use open to mean either open access (well-documented) or open standard. For example, white box switching relies primarily on the Broadcom SDK as a means of marrying the software to the hardware. That interface, while not a standard, is well-documented and stable. 
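 
As a loose sketch of what an open, stable boundary enables, consider the hypothetical interface below. The class and method names are invented for illustration; they are not the Broadcom SDK or any real vendor API.

from abc import ABC, abstractmethod

class SwitchSilicon(ABC):
    """The stable, documented boundary the network software programs against."""

    @abstractmethod
    def program_route(self, prefix: str, next_hop: str) -> None: ...

    @abstractmethod
    def port_count(self) -> int: ...

class VendorAChip(SwitchSilicon):
    def program_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor A SDK] route {prefix} -> {next_hop}")

    def port_count(self) -> int:
        return 32

class VendorBChip(SwitchSilicon):
    def program_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor B SDK] route {prefix} -> {next_hop}")

    def port_count(self) -> int:
        return 64

def bring_up(asic: SwitchSilicon) -> None:
    # The software side only touches the documented boundary, so the silicon
    # underneath can be swapped without reworking the network OS.
    asic.program_route("10.0.0.0/24", "192.0.2.1")
    print(f"ports available: {asic.port_count()}")

bring_up(VendorAChip())
bring_up(VendorBChip())

The design point is that interchangeability lives in the boundary, not in the components: as long as the interface stays open and stable, either side can be replaced or revved independently.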
 
Of course, as interfaces move from open access to open standard, the options for interchangeability will increase. If P4 is successful, for instance, then software could conceivably be built to support a variety of underlying switching silicon options, which would extend the pricing benefits from the overarching platform to the key components within that platform. 
 
The short-term myth of any-to-any
If the standardization of interfaces leads to broader interchangeability, then will we arrive at a point where customers can mix and match any component with any other component?
 
While this would give maximum leverage to consumers, the reality is that the systems are too complex (at least for now) for this to happen. Even within a reference hardware platform, differences in BIOS from supplier to supplier mean that the same switching and routing software behaves differently on different platforms. In a previous life, we spent weeks debugging an issue that basically shut off a port because of differences between hardware suppliers. The platforms were nominally identical, and yet the software didn't work.
 
That’s not to say that issues like these cannot be worked out, but it does mean that there is a heavy integration burden on whoever constructs the disaggregated stack and ultimately provides support to the end user. The implication here is that systems integration needs to remain a top-tier consideration for people working with disaggregated solutions. In part, this is why web-scale properties like Facebook procure at least some of their devices from integrators responsible for loading the software onto the hardware and testing that it works.
 
The bottom line
The discussion around disaggregation lacks nuance. There can be pricing relief, but that relief only comes if the underlying components are interchangeable. The extent to which they are interchangeable depends not only on how the boundaries between layers are architected but also on how consumers deploy functionality. If customers rely on snowflake configurations, they will find that their pricing leverage over vendors dwindles. The key here is going to be settling on a base set of functionality that meets business needs while not ruling out all but a few combinations of technology.
 
For architects pursuing a disaggregated world, this means that teams should spend time understanding how their choices help or hurt the objective. And there needs to be a very concerted effort to decide how integration will be handled so that the disaggregated stack does not become an unwieldy collection of disparate products that barely work together.
