NextGen Networking

How to Really Read Latency Reports

by Juniper Employee ‎04-05-2011 10:06 AM - edited ‎04-13-2011 01:47 PM

My very favorite quote on speed and measurement comes from Lewis Carroll:


"Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"

(Through the Looking Glass)

This passage relates nicely to how vendors measure latency in their networking devices. When we read latency figures, we logically assume that the number represents the length of time it takes for a packet to enter the switch and come out the other side. True enough, as far as it goes. But there are many ways to measure the speed of a packet as it traverses a switch, and many questions that should be asked in order to really understand what was measured and how. An interesting new white paper that details these questions and provides additional insight into what “latency measurements” really mean can be found here.


As you’ve probably figured by now, there are numerous ways to present latency measurements to make them look as good—or as bad—as you like.  Now I’m going to teach you how to tell when someone is performing such a sleight of hand.  When presented with latency test result figures, look for these characteristics to determine whether the vendor is trying to hide something: 

  1. Traffic is only measured between two ports with just enough traffic to show the best case scenario.
  2. Only the minimum latency is shown; there is no data on maximum or average latency. Packet size is not disclosed; the report simply states X nanoseconds or microseconds without referencing packet size.
  3. Partial mesh test results are labeled as “mesh testing,” giving the impression the testing is full mesh when it isn’t.
  4. Multicast test results do not disclose how many groups were used.
  5. Only test results for less than 100% throughput are shown; results at 100% throughput are not disclosed.
  6. Latency results from a non-real world configuration are featured.
  7. LIFO (last in, first out) or FIFO (first in, first out) methodologies are not specified.  
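To see why reporting only the minimum (characteristic 2) is so misleading, consider a sketch like the following. The per-packet latencies here are invented for illustration and do not come from any real device or test report; the point is simply that a single good-looking minimum can coexist with a much uglier average and maximum.

```python
import statistics

# Hypothetical per-packet latencies (microseconds) from a single test run.
# These numbers are illustrative only, not measurements from any real switch.
latencies_us = [1.1, 1.2, 1.1, 1.3, 1.2, 9.8, 1.1, 1.2, 14.5, 1.3]

minimum = min(latencies_us)                 # the number a vendor might lead with
average = statistics.mean(latencies_us)     # roughly 3x the minimum here
maximum = max(latencies_us)                 # the tail a min-only report hides

print(f"min: {minimum:.1f} us")   # 1.1 us
print(f"avg: {average:.2f} us")   # 3.38 us
print(f"max: {maximum:.1f} us")   # 14.5 us
```

A report that quotes only the 1.1 µs figure is technically accurate yet conceals a worst case more than ten times larger, which is exactly the sleight of hand to watch for.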

Granted, that’s a lot to keep track of.  So how do you ensure that you get realistic test numbers from your vendor?  We recommend you ask the following questions when reviewing the test results to ensure you get a complete picture.  Make sure to ask for copies of the test reports as well.


  1. Does the number represent switch latency or packet latency?
  2. What size packets were used?
  3. What latency methodology was used (that is FIFO, LIFO, or LILO—last in, last out)?
  4. On how many ports was this measured?
  5. What was the testing topology (that is, port pair, partial mesh, or full mesh)?
  6. Under what load was this tested (that is, 10 percent, 25 percent, 50 percent, or 100 percent)?
  7. Is this the minimum, average, or maximum latency number?
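Questions 2 and 3 are linked: the gap between a FIFO-style measurement (first bit in to first bit out) and a LIFO-style one (last bit in to first bit out) is roughly the packet's serialization time on the wire, which grows with packet size. The arithmetic below is a simple back-of-the-envelope sketch (the 10 Gbps port speed is an assumed example, not vendor data):

```python
def serialization_time_us(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet's bits onto the link, in microseconds."""
    return packet_bytes * 8 / link_bps * 1e6

# Assuming a 10 Gbps port: the larger the packet, the bigger the gap
# between FIFO-style and LIFO-style latency numbers.
for size in (64, 512, 1518):
    t = serialization_time_us(size, 10e9)
    print(f"{size:>5}-byte packet at 10 Gbps: {t:.4f} us")
```

At 10 Gbps a 64-byte packet serializes in about 0.05 µs, while a 1518-byte packet takes about 1.21 µs, so a result quoted without packet size or methodology leaves over a microsecond of ambiguity in what was actually measured.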

Have fun evaluating.  Keep score of how many of the seven characteristics above you detect in the vendor reports.  Then ask questions 1-7 above and let us know what you find.


About the Author
  • Ellen Brigham is Director of Product Marketing at Juniper Networks responsible for Enterprise Domain Marketing. She has held multiple management positions in networking at Cisco Systems and Hewlett-Packard. Ellen holds a Masters Degree from Stanford University and is a member of the IEEE and the IEEE Robotics and Automation Society.