Data center equipment criteria: Graceful evolution
Oct 16, 2017
As industries mature and their solutions become more hardened, evaluation criteria will naturally change. Data center equipment used to be judged on overall performance and a feature support matrix that included nerd knobs galore. With virtually every enterprise-grade switching solution having the same basic features now, how people think about data center equipment is changing.
And while price seems to be an obvious battlefield for the next wave of data center wars, the future ought to hinge more on graceful evolution.
How you get somewhere determines when you move beyond
I don’t know if this rises to a universal truism, but in most situations, how you get somewhere determines when you get to the next thing. Basically, if you get to where you want to be while also setting up for what’s next, your subsequent moves are more easily executed.
In sports, the best players are thinking a few moves ahead. They might adjust their speed or their body positioning to account for the next move, so that when the time comes, they can outmaneuver their defender and make the play.
In IT, the best architects will consider future architectural states and make current decisions based on where they know they need to go next. In this case, it’s less about outmaneuvering someone and more about trying to avoid the painful rip-and-replace model that is typically disastrous for budgets and fraught with risk.
Minimally, this means that architectural planning needs to be a series of layered moves, each building on the last. If your plans for a current buildout are largely one-and-done, you are probably not spending enough time setting up for the next move.
Applying this to scale
In some cases, the next move is about growth. If the business is successful, you will have to scale: more capacity, more users, more applications. Knowing that you have to scale, you might be drawn to switches that claim—not surprisingly—better scale.
But if you had planned for this growth, what would you be looking for?
In a word: graceful scaling. Essentially, how easy is it to add capacity? Can you add new devices without having to replace existing ones? Can you easily scale the leafs and the spine? How much does recabling come into play? Do management solutions need to be reconsidered?
And if the eventual scale is going to take you beyond certain break points in the architecture, what then? Do you have a path from standalone switches to layer 2 fabrics, to islands of layer 2 fabrics, to full-on IP fabrics? And how easy is it to transition between these?
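Questions like these can be made concrete with a little capacity math. The sketch below is illustrative only; the port counts, speeds, and topology assumptions (uplinks spread evenly across spines) are my own, not from any particular vendor's gear. It shows two numbers worth knowing before you buy: the leaf oversubscription ratio, and how many leafs a given spine tier can absorb before you hit a recabling or forklift event.

```python
# Illustrative leaf-spine capacity math. All port counts and speeds below
# are assumed example values, not recommendations.

def oversubscription(downlink_ports: int, downlink_gbps: int,
                     uplink_ports: int, uplink_gbps: int) -> float:
    """Ratio of server-facing bandwidth to fabric-facing bandwidth on a leaf."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

def max_leafs(spine_count: int, spine_ports: int, uplinks_per_leaf: int) -> int:
    """With leaf uplinks spread evenly across spines, each leaf consumes
    uplinks_per_leaf / spine_count ports on every spine. The spine port
    count therefore caps how many leafs the fabric holds before you must
    replace or add spines."""
    uplinks_per_spine = uplinks_per_leaf // spine_count
    return spine_ports // uplinks_per_spine

# Example: leafs with 48x25G down and 4x100G up; 4 spines with 32x100G each.
ratio = oversubscription(48, 25, 4, 100)   # 1200G down / 400G up = 3.0:1
leafs = max_leafs(4, 32, 4)                # 1 uplink per spine -> 32 leafs
```

Running the same arithmetic against the vendor's next-size-up spine tells you whether growth means adding devices or replacing them, which is exactly the graceful-versus-forklift distinction.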
I am not saying that you have to plan to be public cloud scale, but you should have a reasonable path to your foreseeable future, plus probably one additional unforeseen step.
Perhaps the evolutionary path your company is on is more focused on extracting operating expense from IT. The plan is to grow capacity at a faster clip than the operations team, leveraging automation and abstraction to make up the gap as the company adds more users and devices.
In this case, what does the transition look like for your NOC?
If your current environment is largely CLI-driven, you might consider how to layer in a small number of high-value workflows. Do these require a change in personnel, which is basically the people equivalent of a rip-and-replace? Or can you start with small automation steps and targeted integrations, building on the foundation to accommodate more sophisticated automation down the road?
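One way to picture a small, high-value first workflow is an audit that reads configs but pushes nothing, so the NOC builds trust in the tooling before any automated changes happen. The sketch below is a minimal example of that idea; the device names, config lines, and required-lines policy are all hypothetical, and a real version would pull configs from your devices rather than from literals.

```python
# A minimal sketch of one starter workflow: auditing device configs for
# required lines before attempting any automated pushes. Device names,
# configs, and the policy lines are hypothetical examples.

REQUIRED_LINES = {
    "ntp server 10.0.0.1",
    "logging host 10.0.0.2",
}

def audit(config_text: str) -> set:
    """Return the required lines missing from a device's config."""
    present = {line.strip() for line in config_text.splitlines()}
    return REQUIRED_LINES - present

configs = {
    "leaf01": "hostname leaf01\nntp server 10.0.0.1\nlogging host 10.0.0.2\n",
    "leaf02": "hostname leaf02\nntp server 10.0.0.1\n",
}

# Report per device: empty set means compliant.
report = {name: audit(cfg) for name, cfg in configs.items()}
```

A read-only workflow like this changes no personnel and no process, but it lays the foundation (inventory, config access, a policy expressed as data) that more sophisticated automation builds on later.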
And if the goal is to eventually evolve to a fully automated environment, to what extent do your hardware purchases support that vision? While you might not need streaming telemetry and structured data now, their absence could become a blocker to your automated future.
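The structured-data point is easy to see side by side. In the sketch below, the CLI text and the JSON payload are hypothetical stand-ins for vendor output; the contrast is that the scraped version depends on a regex that breaks the moment the vendor rewords the text, while the structured version is just stable keys.

```python
# Hypothetical vendor outputs: the same interface state, delivered two ways.
import json
import re

cli_output = "Interface Ethernet1 is up, 1024 packets input"

# Screen-scraping: brittle, coupled to the exact wording of the CLI text.
m = re.search(r"(\S+) is (\w+), (\d+) packets input", cli_output)
scraped = {"name": m.group(1), "state": m.group(2), "in_pkts": int(m.group(3))}

# Structured data (shape is an assumed example): stable keys, no regex.
telemetry = json.loads('{"name": "Ethernet1", "state": "up", "in_pkts": 1024}')

# Both yield the same record today -- but only one survives a CLI reword.
assert scraped == telemetry
```

Hardware that can only emit the first form forces every future automation layer to carry that parsing fragility with it.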
In other words, how graceful will your evolution be?
The future is not all equal
If we are being honest about the state of data center networking, we have to admit to ourselves that the bulk of the industry is simply not that far along its evolutionary path. But with strides being made in both compute and storage, the network is being exposed as a bottleneck to change.
But because so many people are still using gear whose average age is probably close to seven years, and operational practices that date back to the dot-com era, a lot of the newer technologies haven't really been rolled out in the bulk of mainline enterprises.
This means that a lot of teams simply don't have a ton of practical experience. This poses a real threat to forward progress, as it makes it difficult to sift through the literature and determine which solutions actually allow for a simpler transition to some future state.
The bottom line
If you are one of the 99% who work in a data center that is not necessarily steeped in cloud technologies, you probably ought to be planning your evolution. And in doing so, you can reasonably assume that your future is going to require some explicit position on security, automation, and scale.
The question is not how you deploy it the first time; it's how you evolve from there. Start with plotting out, in high-level terms, what your multi-year journey ought to look like. Be realistic about what you are going to do and not do. It serves you no purpose to have ambition that exceeds your ability to execute.
And then ask your suppliers what the transition looks like from what you need today to what you want tomorrow. It's not going to be a greenfield deployment; it's a transition. Understand which technologies persist, and how one approach yields to another. What gear has to be replaced over that time? Make sure you think through upper bounds and EOL plans. And make sure that the support and services to help migrate things like configuration and automation scripts are part of the consideration.
If you are thoughtful about your future move, how you make the next transition will determine when you are able to move beyond that.