Data Centers and the Cloud are all the rage right now, and Juniper has been at the forefront of the Data Center revolution from the very beginning: early on with the introduction of the QFX and the much-maligned QFabric, and more recently with Virtual Chassis Fabric (VCF), various open architectures for building IP Clos fabrics, and even advanced features such as Junos Fusion for the Data Center, which collapses and simplifies the deployment and management of large numbers of Ethernet switches.
The folks at the Juniper Networks Technical Certification Program (JNTCP) have not been far behind, creating a Data Center track and releasing a new certification, the Juniper Networks Certified Professional Data Center (JNCIP-DC). The JNCIP-DC is currently rated as the fifth hottest Data Center certification by Tom’s IT Pro, an online resource tracking the demand for various industry certifications. I’ve been following the developments within the Data Center track for a while now, and you can imagine my delight when I saw the following a few months back on Juniper’s Certification portal:
Within the Juniper community there is an intense interest to learn more about the JNCIE-DC exam, especially by the many JNCIx certified individuals who are interested in adding one more notch on the veritable certification bedpost, myself included. Details have been sparse as the exam is still in development, but I’ve managed to speak to a few of my former colleagues within the JNTCP and they were kind enough to give me some details as to what we can expect on the exam.
At the time of this writing, here are the topics we can expect to be covered on the JNCIE-DC exam. Please note, these topics may change as the exam is still under development:
BGP for an IP Clos fabric configuration
Control plane protection
Basic SRX configuration (security zones/policies)
On-the-box script (i.e. event scripts)
Junos Space to manage Junos Devices
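To give a flavor of the first item on that list, here is a minimal sketch of an eBGP underlay on a single leaf in an IP Clos fabric. This is my own illustration, not material from the exam: the interface names, addresses, and ASNs are all hypothetical, and a common design is assumed where each device gets its own private ASN and exports its loopback into the underlay.

```
/* Hypothetical eBGP underlay on a leaf -- interfaces, addresses
   and ASNs are illustrative only */
set interfaces xe-0/0/0 unit 0 family inet address 10.0.1.1/31
set interfaces xe-0/0/1 unit 0 family inet address 10.0.2.1/31
set routing-options autonomous-system 65001
set policy-options policy-statement EXPORT-LO term 1 from protocol direct
set policy-options policy-statement EXPORT-LO term 1 from route-filter 192.168.100.0/24 orlonger
set policy-options policy-statement EXPORT-LO term 1 then accept
set protocols bgp group UNDERLAY type external
set protocols bgp group UNDERLAY export EXPORT-LO
set protocols bgp group UNDERLAY multipath multiple-as
set protocols bgp group UNDERLAY neighbor 10.0.1.0 peer-as 65101
set protocols bgp group UNDERLAY neighbor 10.0.2.0 peer-as 65102
```

The `multipath multiple-as` knob is what allows ECMP across the spines even though each spine advertises from a different ASN.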
Quite a few of these topics should come as no surprise. In fact, anybody who has taken the Data Center Switching (DCX) or the Advanced Data Center Switching (ADCX) classes will likely recognize quite a few of them — obvious things like VCF, EVPN/VXLAN, Data Center Interconnect and provisioning with ZTP.
One topic that jumps out at me that I did not expect to see is the basic SRX configuration. Given that this topic is not covered in any of the corresponding courseware materials, it will obviously require that the candidate spend some time outside of the standard curriculum learning basic security configuration on the SRX. Furthermore, items like control plane protection will at the very least require some exposure to current best practices for protecting the routing engine using mechanisms such as loopback filters and DDoS protection. The MX book might be a good reference here.
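By way of illustration, here is a rough sketch of both items: a basic SRX zone/policy setup and a loopback filter protecting the RE. Again, this is my own hypothetical example, not exam material; the zone names, prefixes, and policer values are illustrative, and a real PROTECT-RE filter would need terms for every legitimate control plane protocol.

```
/* Hypothetical SRX zones and a permit-outbound policy */
set security zones security-zone trust interfaces ge-0/0/0.0
set security zones security-zone trust host-inbound-traffic system-services ping
set security zones security-zone untrust interfaces ge-0/0/1.0
set security policies from-zone trust to-zone untrust policy PERMIT-OUT match source-address any
set security policies from-zone trust to-zone untrust policy PERMIT-OUT match destination-address any
set security policies from-zone trust to-zone untrust policy PERMIT-OUT match application any
set security policies from-zone trust to-zone untrust policy PERMIT-OUT then permit

/* Hypothetical (incomplete) RE-protection filter on lo0 */
set policy-options prefix-list BGP-PEERS 10.0.1.0/31
set firewall policer ICMP-POLICER if-exceeding bandwidth-limit 1m burst-size-limit 15k
set firewall policer ICMP-POLICER then discard
set firewall family inet filter PROTECT-RE term BGP from source-prefix-list BGP-PEERS
set firewall family inet filter PROTECT-RE term BGP from protocol tcp
set firewall family inet filter PROTECT-RE term BGP from port bgp
set firewall family inet filter PROTECT-RE term BGP then accept
set firewall family inet filter PROTECT-RE term ICMP from protocol icmp
set firewall family inet filter PROTECT-RE term ICMP then policer ICMP-POLICER
set firewall family inet filter PROTECT-RE term DENY then discard
set interfaces lo0 unit 0 family inet filter input PROTECT-RE
```

The ordering matters: anything not explicitly accepted before the final DENY term gets dropped, which is exactly how you can lock yourself out of a lab device if you forget a protocol.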
As for the topic of CoS, I would expect this to be very similar to the other JNCIE exam tracks in that a basic level of understanding of classification (Multifield and Behavior Aggregate), policing, scheduling, queue configuration, RED drop profiles and remarking would be required.
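Those pieces fit together roughly like this hypothetical sketch: a BA classifier on ingress, schedulers and a WRED drop profile on egress. All of the names, percentages, and code points below are illustrative assumptions on my part.

```
/* Hypothetical CoS sketch: DSCP BA classification, scheduling
   and a WRED drop profile -- names and values are illustrative */
set class-of-service classifiers dscp DC-CLASSIFIER forwarding-class best-effort loss-priority low code-points 000000
set class-of-service classifiers dscp DC-CLASSIFIER forwarding-class expedited-forwarding loss-priority low code-points 101110
set class-of-service drop-profiles DP-TCP interpolate fill-level [ 50 100 ] drop-probability [ 0 100 ]
set class-of-service schedulers SCH-BE transmit-rate percent 70
set class-of-service schedulers SCH-BE buffer-size percent 70
set class-of-service schedulers SCH-BE drop-profile-map loss-priority any protocol any drop-profile DP-TCP
set class-of-service schedulers SCH-EF transmit-rate percent 30
set class-of-service schedulers SCH-EF priority strict-high
set class-of-service scheduler-maps SMAP forwarding-class best-effort scheduler SCH-BE
set class-of-service scheduler-maps SMAP forwarding-class expedited-forwarding scheduler SCH-EF
set class-of-service interfaces xe-0/0/0 scheduler-map SMAP
set class-of-service interfaces xe-0/0/0 unit 0 classifiers dscp DC-CLASSIFIER
```

Multifield classification via firewall filters and DSCP rewrite rules would layer on top of the same forwarding classes.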
One item that certainly does not appear on the list of topics above is Contrail. We can all breathe a sigh of relief for the moment as I’ve been told that this will definitely NOT be on the exam. Although they do state that there will be overlays (as evidenced by the inclusion of BGP/IP Clos Fabrics and VTEP/VXLAN), all the overlays will be controller-less. I would expect this to possibly change down the road as Contrail matures and becomes more integral to the creation of overlays in the Data Center.
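For the curious, a controller-less EVPN/VXLAN overlay on a QFX leaf looks something along these lines. This is a hypothetical fragment of my own, layered over an underlay like the one sketched earlier; the loopback addresses, VNI, and route target are all made up for illustration.

```
/* Hypothetical controller-less EVPN/VXLAN overlay on a QFX leaf --
   loopbacks, VNI and route target are illustrative only */
set protocols bgp group OVERLAY type internal
set protocols bgp group OVERLAY local-address 192.168.100.1
set protocols bgp group OVERLAY family evpn signaling
set protocols bgp group OVERLAY neighbor 192.168.100.11
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list 5010
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.168.100.1:1
set switch-options vrf-target target:65000:1
set vlans v10 vlan-id 10
set vlans v10 vxlan vni 5010
```

The point of “controller-less” is visible here: EVPN route exchange over plain BGP does all the VTEP discovery and MAC learning that a controller like Contrail would otherwise orchestrate.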
One last thing to note is that I’ve been told that the labs in the DCX and ADCX courseware are very representative of the type of lab that will be used in the JNCIE-DC exam, so in terms of building out the topology this would be a great place to start. In addition, as there will be basic SRX on the exam, my suggestion for a decent lab topology would be the following:
2 MX devices – vMX should be a suitable alternative
2 SRX devices – vSRX should be a suitable alternative
Minimum of 3 QFX devices, preferably QFX5k or higher. The vQFX might be sufficient here, but I am unsure whether all features are supported or whether it can run as a spine switch in a spine-leaf configuration
2 EX4300s – mostly to be able to run as leaf nodes in a mixed-mode Virtual Chassis Fabric
Decent server to run Junos Space, and to store ZTP scripts, etc.
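On that last point, the server hosting your ZTP artifacts typically pairs a DHCP server with an FTP/TFTP/HTTP file store. As a hypothetical starting point, an ISC dhcpd snippet for Junos ZTP might look like the following; the addresses and file names are made up, and you should verify the option 43 sub-option codes against Juniper’s ZTP documentation for your platform.

```
# Hypothetical ISC dhcpd fragment for Junos ZTP --
# addresses, file names and sub-option codes are illustrative
option space NEW_OP;
option NEW_OP.config-file-name code 1 = text;
option NEW_OP.transfer-mode code 3 = text;
option NEW_OP-encapsulation code 43 = encapsulate NEW_OP;

subnet 172.16.0.0 netmask 255.255.255.0 {
  range 172.16.0.10 172.16.0.100;
  option tftp-server-name "172.16.0.1";
  option NEW_OP.config-file-name "ztp-base.conf";
  option NEW_OP.transfer-mode "tftp";
}
```

Having this working in the lab also gives you a natural place to stage event scripts and baseline configs for quick device resets between practice runs.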
SUGGESTED STUDY MATERIAL
In addition to building out the above lab topology, I would strongly suggest using the following materials in the course of study:
Stay tuned for some additional articles in the future where I will expand on the above, getting into more details into each of the topics and what I would recommend as additional study material. I will also further describe the topology I will be using as I begin my pursuit of this exciting new certification. Until then, happy labbing!