Automation & Programmability
The Self-Driving Network™: Spare a Thought for Hardware
08.27.17

This is part VII in our multi-part blog series on The Self-Driving Network™. Find part VI here. Also, check out the newly unveiled 5-part video series that captures The Self-Driving Network story all the way from the genesis of the idea through to the ramifications for our industry.


Hardware in AI

OK, let me get this straight: software ate the world first. And now artificial intelligence (AI) is eating the world. Or maybe they're both eating each other? I can't remember. But poor hardware. It's so '80s. Or is it...?


Maybe I’m just being nostalgic, but the recent explosion in artificial intelligence activity in general, and machine learning in particular (not to mention crypto-currency mining), has actually put hardware back in the spotlight. Graphics processing units (GPUs) from Nvidia have stepped in to power many machine learning systems over the last several years. And although GPU performance trends (and Nvidia’s market cap) continue to defy gravity, Google’s AI demands are so extensive that it took the dramatic step of designing its own application-specific integrated circuits (ASICs) in-house.


Google’s tensor processing unit (TPU) is customized for the massive volumes of low-precision computation required for machine learning. Traditional computing is deterministic - you generally need software to run the same lines of code in the same way every time - but machine learning is probabilistic. If you’re trying to identify a hot dog photo, your algorithm may never reach 100% accuracy, but if it is correct 99.9% of the time, it probably exceeds human performance, which is good enough.
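To make the “low precision is good enough” point concrete, here is a minimal, hypothetical sketch in plain NumPy (nothing to do with Google’s actual TPU design or software). It quantizes float32 weights and activations down to 8-bit integers, does the multiply-accumulate in integer arithmetic, and compares the answer with the full-precision result - the two typically differ by only a small fraction:

```python
# Illustrative sketch only: int8 quantized inference vs. float32.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 128)).astype(np.float32)      # made-up layer weights
activations = rng.normal(size=(128,)).astype(np.float32)      # made-up input vector

def quantize(x):
    """Symmetric linear quantization of a float tensor to int8."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

qw, w_scale = quantize(weights)
qa, a_scale = quantize(activations)

# Multiply-accumulate in integers (accumulating in int32, as
# low-precision hardware does), then rescale back to float.
int_result = qw.astype(np.int32) @ qa.astype(np.int32)
approx = int_result * (w_scale * a_scale)

exact = weights @ activations
rel_error = np.abs(approx - exact).max() / np.abs(exact).max()
print(f"max relative error from 8-bit arithmetic: {rel_error:.4f}")
```

For a probabilistic task like image classification, that small numerical error rarely changes the predicted label, which is why hardware can trade precision for throughput and power efficiency.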

[Image: Google TPU]

While Moore’s Law plateaus, there’s still plenty of runway left in the evolution of processor designs optimized for artificial intelligence applications. For decades, researchers have built the algorithms behind neural networks (aka deep learning systems) based on the workings of the human brain. Now chip designers are also experimenting with ideas borrowed from biology.
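As a rough illustration of that brain-inspired abstraction, here is a hedged, minimal sketch of a single artificial neuron - a weighted sum of inputs pushed through a nonlinearity. The values are purely illustrative; real networks stack millions of these, which is exactly the workload GPUs and TPUs are built to grind through:

```python
# Illustrative sketch only: one artificial "neuron".
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, squashed by a sigmoid nonlinearity."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Made-up example values.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron(x, w, bias=0.2))   # an activation between 0 and 1
```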


Hardware in Networking

Custom hardware is intertwined with the advancements in artificial intelligence, but can we draw any analogies with hardware and networking? I think we can, although, like most analogies, you can push it too far. The truth is that it’s all about the right tool for the job. Juniper practices “extreme agnosticism,” and the infrastructure we build may contain our own highly specialized ASICs, merchant silicon/FPGAs, Intel x86 processors, or all of the above. We work with our customers to match the characteristics of the solution to the particular user application, and we constantly navigate the fundamental design tradeoffs between performance and flexibility.


Big network operators seem to be happy with Intel’s latest Xeon design, while white box networking hardware based on x86 processors (and, more recently, ARM-based CPUs) has worked pretty well for the hyper-scale cloud providers. But in certain domains of the network, cloud providers still employ hardware with packet forwarding engines optimized for the scale required to accommodate massive traffic growth (i.e., ASICs and FPGAs). Back to machine learning: keep in mind that Google continues to use GPUs and good old-fashioned CPUs for many machine learning workloads in addition to its in-house-designed TPUs. Again, it’s all about employing the right processor for the appropriate application.


We talk about Google infrastructure for everyone else (GIFEE) - the idea that “people are learning from what the web-scale guys are doing” and then adopting the technologies themselves. We’ve seen this trend with the broad rise of containers, and we’ll see it again with artificial intelligence (and FIFEE, AIFEE, MIFEE, ...).


There’s some truth to the hyperbolic claims that numerical computation algorithms are eating the world, but spare a thought for hardware. Can we agree that we will see exciting microprocessor innovation and experimentation continue for decades to come? Call it Hardware-Defined Networking.
