Data Center Technologists

Today, enterprise data centers face enormous pressure to provide increased bandwidth for Digital Transformation, Big Data and IoT applications. The growing adoption of virtualized workloads (VMs and containers) and the ongoing transition from on-premises to cloud services have put significant strain on spine-and-leaf architectures. Higher bandwidth at the access layer (10/40GbE) has driven the need for greater upstream bandwidth (100GbE).

 

To meet this need, Juniper is launching the QFX5110 switch – the latest addition to the QFX5100 family – which provides 100GbE uplinks to the aggregation layer along with features designed to optimize today’s agile data centers. Based on Broadcom’s Trident 2+ chipset, the QFX5110 will be available in two form factors:

 

  • QFX5110-48S: Compact, 1U 10GbE/100GbE data center access switch with 48 small form-factor pluggable plus (SFP+) transceiver ports and 4 QSFP28 ports, which can be configured as 4x40GbE or 4x100GbE ports.
  • QFX5110-32Q: Compact, 1U 40GbE/100GbE data center access and aggregation switch offering up to 32 QSFP+ ports, or 20 QSFP+ ports and 4 QSFP28 ports.
Read more...

Earlier this year, IDC analysts interviewed several Juniper Networks customers in Western Europe with the aim of quantifying the business value of Juniper Networks’ switching, routing and security solutions. IDC determined that the average ROI for customers using Juniper equipment was 349 percent over five years.

Read more...

Hero Status is Waiting

by Juniper Employee 11-03-2016 08:00 AM - edited 11-03-2016 12:33 PM

In recent weeks, we’ve made the case for automation pretty clear. We’ve shown you why you need to automate your network, how to make the business case for automation and how to develop a plan to get you there. Now it’s time to get the right tools and technology partner in place.

Read more...

MC-LAG is dead, Long live EVPN Multi-homing

by Juniper Employee 10-20-2016 05:34 PM - edited 01-12-2017 02:44 AM

Practically every day, it seems, someone asks me: “How can I configure MC-LAG with EVPN to provide multi-homing?”

 

The answer, I tell them, is simple: you don’t need to. EVPN is a superset of MC-LAG, and it natively integrates multi-homing. It’s the better, standards-based version of MC-LAG we’ve been waiting for.

 

EVPN, with either VXLAN or MPLS encapsulation, natively provides N-way multi-homing by configuring the same Ethernet Segment Identifier (ESI) on multiple devices. An ESI is configured per interface; all interfaces configured with the same ESI, on any device within the same EVPN domain, appear as part of the same L2 segment or LAG. On top of an ESI, you can also configure LACP to provide better fault detection.
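As a rough sketch of what this looks like in Junos (the interface name, ESI value and LACP system ID below are placeholders for illustration; the PEs attached to the same Ethernet segment would share the same ESI and LACP system ID):

    # 10-byte ESI shared by every PE attached to this Ethernet segment
    set interfaces ae0 esi 00:11:22:33:44:55:66:77:88:99
    # all-active mode lets every PE forward traffic for the segment
    set interfaces ae0 esi all-active
    # optional LACP on top of the ESI for faster fault detection
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01

The CE simply sees one LAG; it never needs to know there are multiple PEs behind it.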

Read more...

In keeping with our principle that “Your network should not get in the way of what your business wants to do,” this time we’ll show you how to map the targeted business processes to the network, and then how to develop a plan to begin automating your network and winning the approval of management.

Read more...

How to Get Junos “Speaking Whale” to Containers

by Moderator 09-28-2016 01:35 PM - edited 01-12-2017 02:37 AM

Let's look at how to set up Junos OS networking with Docker's MACVLAN networking mode and test container connectivity between hosts across various VLAN network segments.
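As a minimal sketch of the Docker side (the parent interface eth0.10, the subnet, the addresses and the network name are assumptions for illustration), a MACVLAN network bound to a VLAN sub-interface can be created and tested like this:

    # create a MACVLAN network tied to the VLAN 10 sub-interface of eth0
    docker network create -d macvlan \
        --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
        -o parent=eth0.10 vlan10_net

    # attach a test container and ping a host on the same VLAN segment
    docker run --rm -it --network vlan10_net alpine ping -c 3 192.168.10.20

On the Junos side, the matching VLAN would typically be trunked to the server-facing port.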

 

Now, when the apps team introduces Docker container workloads into your Juniper network environment, you can keep calm and speak whale; in other words, “Jjjjjjjjuuuuuuuu-nnnnnnooooooosssssssss.”


Read more...

In the first part of this blog, we discussed implementing virtual networks in OpenStack using the ML2 hierarchical port binding design. A virtual network implemented with hierarchical port binding is composed of multiple Layer 2 segments stitched together to form a single network. Such a network is implemented using a VXLAN-based core segment and VLAN-based dynamic segments at the edges.
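For a feel of what those segments look like as Neutron objects (the network and segment names, VLAN ID and physical network below are assumptions; with hierarchical port binding the edge segments are normally created dynamically by the mechanism driver), the standard OpenStack client can inspect and even create them manually:

    # list the segments that make up a multi-segment network
    openstack network segment list --network net1

    # manually add a VLAN-based edge segment to the same network
    openstack network segment create --network net1 \
        --network-type vlan --physical-network physnet1 \
        --segment 10 edge-segment-1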

 

In this second part, we will delve into the details of installing and configuring the ML2 EVPN VXLAN driver from Juniper Networks.
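To give a flavor of the configuration involved (a hedged sketch: the mechanism driver name juniper_evpn and all ranges below are placeholders, not the driver's real name or values, which come from the Juniper package and your environment), the relevant settings live in the ML2 plugin configuration, typically /etc/neutron/plugins/ml2/ml2_conf.ini:

    [ml2]
    type_drivers = vxlan,vlan
    tenant_network_types = vxlan
    # placeholder name; use the driver name shipped by the Juniper package
    mechanism_drivers = juniper_evpn,openvswitch

    [ml2_type_vxlan]
    # VNIs available for the VXLAN core segments
    vni_ranges = 5000:6000

    [ml2_type_vlan]
    # VLANs available for the dynamic edge segments
    network_vlan_ranges = physnet1:100:200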

Read more...

Implement EVPN VXLAN for your OpenStack cloud – Part 1

by Juniper Employee 09-26-2016 12:02 AM - edited 11-17-2016 09:13 PM

Neutron ML2 drivers are used to implement Layer 2 network connectivity between VM instances in OpenStack.

 

The Ethernet Virtual Private Network (EVPN) service provides Layer 2 connectivity between two endpoints by encapsulating Layer 2 packets inside a transport packet. The transport packet can be tunneled over a VXLAN or MPLS path.
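As a minimal sketch of what that encapsulation looks like on the fabric side of a Junos device (all values below are placeholders for illustration), a tenant VLAN can be mapped to a VXLAN VNI and advertised via EVPN roughly like this:

    # use VXLAN as the EVPN data-plane encapsulation
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    # source VTEP address and EVPN route attributes
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 10.0.0.1:1
    set switch-options vrf-target target:65000:1
    # stitch VLAN 100 to VNI 5100
    set vlans v100 vlan-id 100 vxlan vni 5100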

 

In this two-part blog, we will discuss how EVPN can be used to implement an L2 segment in a multi-segment OpenStack network. The first part contrasts the EVPN VXLAN driver with the native VXLAN support in OpenStack Neutron and describes the functionality of the EVPN driver in a multi-segment network; the second part covers the installation and configuration of the EVPN VXLAN driver for the Neutron ML2 plugin.


Read more...
