Broadcast TV Live Production using IP: Juniper takes part in landmark interop events
Jan 8, 2018
We recently took part in the IP Showcases at the International Broadcasting Convention (IBC 2017) in Amsterdam and the SMPTE Annual Technical Conference in Hollywood. The showcases demonstrated real-world IP interoperability based on the recently created SMPTE 2110 standards. Approximately 50 vendors took part in both events, so it was a great opportunity to show interop with a wide variety of IP-enabled media devices, including cameras, playout servers, production desks and multi-viewers.
Standards have existed for some time describing how to encapsulate a fully-formed TV channel, consisting of video, audio and metadata, into a single IP multicast stream. This is well suited to WAN transport. However, within the studio live production environment, it is desirable to send video, audio and metadata as separate IP multicast streams, known as “essences”. For example, an audio processing device is not interested in receiving video information, so it only needs to receive audio essences. SMPTE 2110 is essence-based for this reason. Essence flows are encapsulated in RTP packets, typically sent as IP multicast because multiple receivers usually need to receive the same flow. At the interop, uncompressed 1080i HD video streams were used throughout, each running at about 1.5 Gbps.
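As a back-of-the-envelope check on that 1.5 Gbps figure, here is a short calculation sketch. It assumes the standard 1080-line HD raster values (2200 x 1125 total samples per frame including blanking, 10-bit 4:2:2 sampling, 29.97 frames per second); these numbers are illustrative assumptions, not taken from the interop itself.

```python
# Approximate bit rate of an uncompressed 1080i HD stream.
total_width = 2200        # samples per line, including horizontal blanking
total_height = 1125       # lines per frame, including vertical blanking
bits_per_pixel = 20       # 10-bit 4:2:2: 10 bits luma + 10 bits shared chroma
frame_rate = 30000 / 1001 # 29.97 fps (one interlaced frame = two fields)

bitrate = total_width * total_height * bits_per_pixel * frame_rate
print(f"{bitrate / 1e9:.2f} Gbps")  # ~1.48 Gbps, consistent with the HD-SDI rate
```

Note that SMPTE 2110-20 actually carries only the active picture area, so the on-wire rate is somewhat lower; the full-raster figure above corresponds to the familiar 1.485 Gbps HD-SDI rate.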
Many of the media devices in the interop used SMPTE 2022-7 Seamless Protection Switching. In this scheme, a transmitter deliberately sends two copies of each packet (with different multicast group addresses) along diverse paths through the network. A receiver normally receives both copies. If there is a network failure or glitch, at most one of the two streams should be affected and the other stream will still be arriving intact. The receiver inspects the RTP sequence numbers of the incoming packets. If the packet with sequence number N is missing from one stream, the receiver substitutes the corresponding packet from the other stream, so no glitch occurs at the application layer. In practice, often two identical separate networks are built within the facility, each carrying one stream from each redundant pair – we used this approach in the IP Showcase.
Here is a diagram of the interop layout. Within each of the two redundant networks, OSPF was the routing protocol. IGMPv2 was used between receivers and the attached switch (QFX10002 in our case) and PIM sparse-mode was used between switches. A 100 GE link was used between switches, and a variety of link speeds were used between the switches and the media devices, ranging from 1 GE for audio devices to 100 GE for some of the multi-viewers.
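A design along these lines might be expressed in Junos configuration roughly as follows. This is a hedged sketch, not the configuration used at the showcase: the interface names are placeholders, area 0 is assumed, and the PIM rendezvous point configuration is omitted for brevity.

```
protocols {
    ospf {
        area 0.0.0.0 {
            interface et-0/0/0.0;          /* 100GE inter-switch link */
        }
    }
    pim {
        interface et-0/0/0.0 {
            mode sparse;                   /* PIM sparse-mode between switches */
        }
    }
    igmp {
        interface xe-0/0/10.0 {
            version 2;                     /* IGMPv2 toward attached media devices */
        }
    }
}
```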
PTP is used to achieve synchronization between essences, for example in order to achieve lip sync between an audio and video essence. As you probably know, Juniper has many years of PTP implementation expertise: we have our own in-house state-of-the-art servo algorithms and we actively participate in the ITU and IEEE PTP working groups. A particular PTP profile, SMPTE 2059-2, has been defined for use in media networks. In the IP Showcase, our QFX switch was operating in Boundary Clock mode with this profile. It was slaved to an upstream switch and was master for all of the directly attached media devices. We also took part in a separate standalone PTP demo, in order to show visitors the reduction in jitter seen when using PTP on the network devices.
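For readers unfamiliar with how PTP recovers the master's time, the core of the IEEE 1588 exchange can be sketched in a few lines. The timestamps and values below are made-up illustrative numbers; the formulas are the standard two-step ones and assume a symmetric path delay, which is the assumption PTP itself makes.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Basic IEEE 1588 offset/delay calculation (illustrative sketch).

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)
    Assumes the network delay is the same in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: a slave clock 1.5 time units ahead of the master,
# with a one-way path delay of 0.5 units.
offset, delay = ptp_offset_and_delay(t1=10.0, t2=12.0, t3=13.0, t4=12.0)
print(offset, delay)  # 1.5 0.5
```

A Boundary Clock, as the QFX ran here, terminates this exchange on its slave port and regenerates it on each master port, so jitter does not accumulate across hops.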
The IP Showcase ran successfully 24/7 for the whole duration of both events. It was great to see interop between such a wide variety of products, demonstrating the feasibility of IP in TV live production.