
Are Service Meshes the Next-gen SDN?

Posted June 19, 2017 (edited June 28, 2017)

 

June 28, 2017 update: more awesome background on service meshes, proxies and Istio in particular on yet another new SE Daily podcast with Istio engineers from Google.

June 26, 2017 update: For great background on service meshes (a relatively new concept) check out today’s podcast on SE Daily with the founder of Linkerd.

 

Whether you’re adopting a containers-as-a-service or functions-as-a-service stack, or both, in the new world of micro-services architectures, one thing that has grown in importance is the network, because:

 

  1. Micro-service application building blocks are decoupled from one another over the network. They re-integrate over the network through APIs exposed as remote procedure calls (RPC), which have evolved from the likes of CORBA and RMI, past web services with SOAP and REST, to newer methods like Apache Thrift and the even fresher gRPC: a new CNCF project donated by Google for secure, fast, HTTP/2-based RPC (see the sketch after this list). RPC has been around for a long time, but the network is now fast enough to use it as a general means of communication between application components. That lets us break down monoliths whose service modules were previously bundled together and coupled through tighter mechanisms: package includes, shared libraries, and, as some of us may remember, more esoteric IPC.

  2. Each micro-service building block scales out by instance replication. The front end of each micro-service is thus a load balancer, which is itself a network component, and beyond that, services need to discover their dependent services, which is generally done with DNS and a service-discovery mechanism.

  3. To boost processing scale-out and engineer better reliability, each micro-service instance is often itself decoupled from application state and its storage, and that state is saved over the network as well: for example, through an API into an object store, a database, a key/value store, a streaming queue or a message queue. There is also good ol’ disk, but that disk and its accompanying file system may themselves be virtual network-mounted volumes. The API- and RPC-accessible ways of storing state are themselves micro-service systems, and probably the best example of using disks, in fact. They also incorporate plenty of distributed-storage magic to deliver on whatever promises they make, and that magic is often performed over the network.
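To make #1 and #2 a bit more concrete, here is a minimal Python sketch (using grpcio) of one micro-service calling another: cluster DNS resolves the dependent service’s name, and the call itself rides on gRPC over HTTP/2. The service name, method path and payloads are hypothetical stand-ins; real code would use client stubs generated from a .proto file.

```python
# Minimal sketch: resolve a (hypothetical) Kubernetes service name via DNS,
# then make a unary RPC over gRPC/HTTP2 without generated stubs.
# Assumes grpcio is installed; the service, method and payload are illustrative.
import socket
import grpc

# #2: service discovery -- cluster DNS resolves the service name.
host = "payments.default.svc.cluster.local"   # hypothetical dependent service
addr = socket.getaddrinfo(host, 50051, proto=socket.IPPROTO_TCP)[0][4][0]

# #1: RPC over the network -- open an HTTP/2 channel and invoke a method.
channel = grpc.insecure_channel(f"{addr}:50051")
authorize = channel.unary_unary(
    "/payments.Payments/Authorize",           # hypothetical fully qualified method
    request_serializer=lambda b: b,           # real code would serialize protobuf messages
    response_deserializer=lambda b: b,
)
reply = authorize(b"order-1234", timeout=2.0) # the remote call looks like a local one
print(reply)
```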

Hopefully we’re all on the same page now as to why the network is important glue for micro-services. If this was familiar to you, then maybe you already know about cloud-native SDN solutions and service meshes too.

 

The idea and implementation of a service mesh is fairly new, and the topic is garnering a lot of attention because service meshes handle the main networking challenges listed above (especially #1 and #2), and much more in the case of new projects like the CNCF’s Linkerd and the newly launched Istio.

 

Since I’ve written about SDN for container stacks before, namely OpenContrail for Kubernetes and OpenShift, I’m not going to cover it very deeply here. Nor am I going to cover service meshes in much detail except to make comparisons. I’ll put some references below and throughout, and I’ve tried to organize the blog by compartmentalizing the comparisons, so you can skip the technical bits you don’t care about and dive into the ones that matter most to you.

 

So on to the fun! Let’s look at some of the use cases and features of service meshes and compare them to SDN solutions, mostly OpenContrail, so we can answer the question posed in the title: are service meshes the “next generation” of SDN?

AUTOMATING SDN AND SERVICE MESHES

 

First, let’s have a look at three general aspects of automation in the various contexts where SDN and service meshes are used: (1) programmability, (2) configuration and (3) installation.

 

1.     Programmability
When it comes to automating everything, programmability is a must. Good SDNs are untethered from the hardware side of networking, and many, like OpenContrail, offer a logically centralized control plane with an API. The two main service meshes introduced above do this too, and they follow an architectural pattern similar to SDNs: a centralized control plane with a distributed forwarding-plane agent. While Istio has a centralized control plane API, Linkerd is more distributed but offers an API through its Namerd counterpart. Most people would probably say that the two service meshes’ gRPC API is more modern and advantageous than the OpenContrail RESTful API, but then again OpenContrail’s API is very well built-out and tested compared to Istio’s still-primordial API functions.
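To illustrate the kind of programmability I mean, here is a small, hedged Python sketch that lists virtual networks through OpenContrail’s REST configuration API. The controller address is made up, and the default config-API port of 8082 is an assumption about a typical install; adjust both (and the response handling) for your own deployment.

```python
# Hedged sketch: query the OpenContrail REST config API for virtual networks.
# The controller IP is hypothetical and 8082 is assumed to be the config-API
# port of a default install -- verify both against your own deployment.
import requests

API = "http://10.0.0.10:8082"   # hypothetical OpenContrail API server address

resp = requests.get(f"{API}/virtual-networks", timeout=5)
resp.raise_for_status()

for vn in resp.json().get("virtual-networks", []):
    # Each entry carries a UUID and a fully qualified name (domain:project:network).
    print(vn.get("uuid"), ":".join(vn.get("fq_name", [])))
```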
 

2.     Configuration
A bigger difference than the API is how functionality is accessed. The service meshes take in YAML to declare configuration intent, which can be delivered through a CLI. I suppose most people would agree that’s an advantage over SDNs that don’t offer it (at least OpenContrail doesn’t today). As for a web-based interface, the service meshes do offer those, as do many SDNs. OpenContrail’s web interface is fairly sophisticated after five years of development, yet it still feels modern and enterprise-friendly.

Looking toward “network as code” trends, however, CLI and YAML are easier to treat as code and keep under version control than, say, OpenContrail’s API calls. In an OpenStack environment OpenContrail can be configured with YAML-based Heat templates, but that’s less relevant in a container-based Kubernetes or OpenShift world. In a K8s world, OpenContrail SDN configuration is annotated onto K8s objects; it’s intentionally simple, so it exposes only a fraction of OpenContrail’s functionality. It remains to be seen what will be done with K8s TPRs, ConfigMaps or some OpenContrail interpreter of its own.
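To give a flavor of the annotation approach, here is a hedged Python sketch that renders a Kubernetes Deployment manifest carrying a network annotation on its pod template. The annotation key and the domain:project:network value are illustrative assumptions in the spirit of the OpenContrail Kubernetes integration, not a documented contract; check the exact key against the version you run.

```python
# Hedged sketch: attach SDN intent to a Kubernetes object via an annotation.
# The annotation key and value format are assumptions for illustration only;
# confirm them against your OpenContrail/Kubernetes integration.
import yaml  # PyYAML

deployment = {
    "apiVersion": "apps/v1beta1",
    "kind": "Deployment",
    "metadata": {"name": "payments"},
    "spec": {
        "replicas": 3,
        "template": {
            "metadata": {
                "labels": {"app": "payments"},
                "annotations": {
                    # Assumed annotation: place these pods on a specific virtual network.
                    "opencontrail.org/network": "default-domain:prod:payments-net",
                },
            },
            "spec": {
                "containers": [{"name": "payments", "image": "example/payments:1.0"}],
            },
        },
    },
}

print(yaml.safe_dump(deployment, default_flow_style=False))
```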
 

3.     Installation
When it comes to getting going with Linkerd, having a company behind it, Buoyant, means anyone can get support, but getting through day one looks pretty manageable on one’s own anyway. Deployed with Kubernetes as a DaemonSet, it is straightforward to use out of the box.

Istio is brand new, but it already has Helm charts to deploy it quickly with Kubernetes thanks to our friends at Deis (@LachlanEvenson has done some amazing demo videos already; links below). Using Istio, on the other hand, means bundling its Envoy proxy into every Kubernetes pod as a sidecar. It’s an extra step, but it looks fairly painless with the kube-inject command. Sidecar vs. DaemonSet considerations aside, this bundling is doing some magic, and it’s important to understand it for debugging later.

When it comes to SDNs, they all differ with respect to deployment. OpenContrail is working on a Juniper-supported Helm chart for simple deployment, but in the meantime there are Ansible playbooks and other comparable configuration-management solutions offered by the community.

One thing OpenContrail has in common with the two service meshes is that it is deployed as containers. One difference is that OpenContrail’s forwarding agent on each node is both a user-space component and a kernel module (or is DPDK-based or SmartNIC-based). They’re containerized, but the kernel module’s container is only there for installation purposes, to bootstrap loading the module with insmod. You may feel ambivalent toward kernel modules: the kernel module obviously streamlines performance and integration with the networking stack, but the resources it uses are not container-based, and thus not resource-restricted, so resource management is different than for, say, a user-space sidecar process. Then again, this is the same deal as using kube-proxy or any iptables-based networking, which the OpenContrail vRouter replaces.

 

SDN AND SERVICE MESHES: CONSIDERATIONS IN DEVOPS

 

When reflecting on micro-services architectures, we must remember that the complexity doesn’t stop there. There is also the devops apparatus to manage the application through dev, test, staging and prod, and through continuous integration, delivery, and response. Let’s look at some of the considerations:

 

1.     Multi-tenancy / multi-environment
In a shared cluster, application code shouldn’t have to concern itself with operational contexts like which operator it belongs to or which application dev/test environment it runs in. To achieve this, we need isolation mechanisms. Kubernetes namespaces and RBAC help, but there is still more to do. I’ll quickly recap my understanding of the routing in OpenContrail and in service meshes so we can better dissect the considerations for context isolation.

<background>
OpenContrail for K8s recap: One common SDN approach to isolation is overlay networks. They allow us to create virtual networks that are separate from each other on the wire (different encapsulation markings) and often in the forwarding agent as well. This is indeed the case with OpenContrail, but OpenContrail also allows higher-level namespace-like wrappers called domains/tenants and projects. Domains are isolated from each other, projects within domains are isolated from each other, and virtual networks within projects are isolated from each other. This hierarchy maps nicely to isolate tenants and dev/test/staging/prod environments, and then we can use a virtual network to isolate every micro-service. To connect networks (optionally across domains and projects), a policy is created and applied to the networks that need connecting, and this policy can optionally specify direction, network names, ports, and service chains to insert (for example, a stateful firewall service).

The way these domains, projects, and networks are created for Kubernetes is based on annotations. OpenContrail maps namespaces to their own OpenContrail project or their own virtual network, so optionally all micro-services can be reachable to each other on one big network (similar to the default cluster behavior). There are security concerns with that, so OpenContrail can also enforce ACL rules, automating their creation from K8s object annotations as a way of isolating micro-services, or implement Kubernetes NetworkPolicy objects as OpenContrail security groups and rules. Another, newer kind of annotation on objects like K8s deployments, jobs or services would specify the OpenContrail domain, project, and virtual network of choice. Personally, I think the best approach is a hierarchy designed to match the structure of your devops teams and environments, making use of the OpenContrail model of segmentation by domain, project and network. This is in contrast (unfortunately) to the simpler yet more frequently used global default-deny rule and the ever-growing whitelist that ensues, turning your cluster into Swiss cheese. Have fun managing that :/

The overlay for SDN is at layers 2, 3 and 4, meaning that when a packet is received on the node, the vRouter (in OpenContrail’s case) receives the packet destined to it and looks at the inner header (the VXLAN ID or MPLS label) to determine the domain/tenant, project and network. Basically, that number identifies which routing table will be used as the lookup context for the inner destination address, and then (pending ACLs) the packet is handed off to the right container/pod interface (per CNI standards).

Service mesh background: The model of Istio’s Envoy and of Linkerd, insofar as they are used (which can be on a per-micro-service basis), is a layer-7 router and proxy sitting in front of your micro-services. All traffic is intercepted at this proxy and tunneled between nodes. Basically, it is also an overlay, just at a higher layer.

The overlay at layer 7 is conceptually the same as SDN overlays, except that the overlay protocol on the wire is generally HTTP or HTTP/2, or TLS carrying one of those. In the DaemonSet deployment mode of Linkerd, there is one IP address for the host and Linkerd proxies all traffic. It’s conceptually similar to the vRouter, except that in reality it only handles HTTP traffic on certain ports, not all traffic. Traffic is routed and destinations are resolved using the delegation-table (dtab) format inherited from Finagle. In the sidecar deployment model, for Linkerd or for Istio’s Envoy (which is always a sidecar), the proxy is actually in the same container network context as each micro-service because it is in the same pod. There are some iptables tricks they use to sit between your application and the network. In Istio Pilot (the control plane) and Envoy (the data plane), traffic routing and destination resolution is based primarily on the Kubernetes service name.
</background>

With that background, here are a few implications for multi-tenancy.

Let’s observe that in the SDN setup, the tenant, environment and application (network) classification happens in the kernel vRouter. With service mesh proxies, we still need a CNI solution to get the packets into the pod in the first place. In Linkerd, we need dtab routing rules that include tenant, environment and service; dtabs seem to give a manageable way to break this down. In the sidecar mode, more frequently used with Envoy, the pod in which traffic ends up most likely already has a K8s namespace associated with it, so we would map a tenant or environment outside of the Istio rules and, when it comes to Envoy, just focus on resolving the service name to a container and port.

It seems that OpenContrail has a good way here to match the hierarchy of separate teams and to separate out those RBAC and routing contexts. Linkerd dtabs are probably a more flexible way to create as many layers of routing interpretation as you want, but they may need stronger RBAC to allow splitting dtabs among team tenants for security and coordination. Istio doesn’t do much in the way of isolating tenants and environments at all. Maybe that is out of scope for it, which seems reasonable, since Envoy is always a sidecar container and you should have underlying multi-tenant networking anyway to get traffic into the sidecar’s pod.

One more point: service discovery is baked into the service mesh solutions, but it is still important in the SDN world, and systems that include DNS (OpenContrail does) can help manage name resolution in a multi-tenant way as well as provide IP address management (like bring-your-own IPs) across the environments you carve up. This is out of scope for service meshes, but across multiple teams and dev/test/staging/prod environments it may be desirable to use the same IP address management pools and subnets.

 

2.     Deployment and load balancing
When it comes to deployment and continuous delivery (CD), the fact that SDN is programmable helps, but service meshes have a clear advantage here because they’re designed with CD in mind.

To do blue-green deployments with SDN, it helps to have floating IP functionality. Basically, we can cut over to green (float a virtual IP to the new version of the micro-service) and safely float it back to blue if there’s an issue. As you continuously deliver or promote staging into the non-live deployment, you can still reach it with a different floating IP address. OpenContrail handles overlapping floating IPs to let you juggle this however you want.

Service mesh routing rules can achieve the same thing, but based on routing switch-overs at the HTTP level that point to, for example, a newer backend version. What service meshes further allow is gradual traffic roll-over, like this example: a small percentage of traffic at first and then all of it. That effectively gives you a canary deployment that is traffic-load-oriented, as opposed to a Kubernetes rolling upgrade or the Kubernetes deployment canary strategy, which gives you a canary that is instance-count based and relies on load balancing across instances to partition traffic.
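To see why a traffic-weighted canary differs from an instance-count canary, here is a toy Python sketch of the per-request weighted routing a mesh proxy performs. It is a concept demo only, not Istio’s or Linkerd’s actual rule format, and the backend names and weights are made up.

```python
# Toy sketch of traffic-weighted canary routing (concept only, not the Istio
# or Linkerd configuration format): each request is routed by weight,
# independently of how many instances back each version.
import random
from collections import Counter

ROUTES = [("reviews-v1", 90), ("reviews-v2", 10)]   # 10% canary by traffic share

def pick_backend(routes):
    total = sum(weight for _, weight in routes)
    point = random.uniform(0, total)
    for backend, weight in routes:
        point -= weight
        if point <= 0:
            return backend
    return routes[-1][0]

# Simulate 10,000 requests: roughly a 9,000/1,000 split regardless of replica counts.
print(Counter(pick_backend(ROUTES) for _ in range(10_000)))
```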

This brings us to load balancing. Balancing traffic between the instances of a micro-service happens, by default, with the K8s kube-proxy controller and its programming of iptables. There is a bit of a performance and scale advantage here in using OpenContrail’s vRouter, which uses its own ECMP load balancing and NAT instead of the kernel’s iptables.

Service meshes also handle this kind of load balancing. They support wider-ranging features, both in terms of load-balancing schemes, like EWMA, and in terms of criteria for ejecting an instance from the load-balancing pool, such as when it is too slow.
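As a rough illustration of latency-aware balancing in the EWMA spirit (Linkerd’s peak-EWMA balancer is considerably more sophisticated), here is a Python sketch that keeps an exponentially weighted moving average of each instance’s latency and prefers the lowest. The endpoints and smoothing factor are made up.

```python
# Rough sketch of EWMA-based instance selection (concept only). The endpoint
# addresses and the smoothing factor ALPHA are illustrative.
ALPHA = 0.3   # weight given to the newest latency sample

class EwmaBalancer:
    def __init__(self, endpoints):
        self.ewma = {ep: 0.0 for ep in endpoints}

    def choose(self):
        # Prefer the instance with the lowest smoothed latency.
        return min(self.ewma, key=self.ewma.get)

    def observe(self, endpoint, latency_ms):
        # Blend the new sample into the running average for that instance.
        self.ewma[endpoint] = ALPHA * latency_ms + (1 - ALPHA) * self.ewma[endpoint]

lb = EwmaBalancer(["10.1.0.4:8080", "10.1.0.5:8080", "10.1.0.6:8080"])
lb.observe("10.1.0.5:8080", 250.0)   # a slow reply pushes that instance down the list
lb.observe("10.1.0.4:8080", 20.0)
print(lb.choose())                   # picks the instance with the lowest smoothed latency
```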

Of course, service meshes also handle load balancing for ingress HTTP front-ending. Linkerd and Istio integrate with K8s Ingress as ingress controllers. While most SDNs don’t seem to offer this, OpenContrail does have a solution here based on HAProxy, the open source TCP proxy project. One difference is that OpenContrail does not yet support SSL/TLS, but there are also pluggable K8s alternatives like NGINX for pure software-defined load balancing.
 

3.     Reliability Engineering
Yes, I categorize SRE and continuous response under the DevOps umbrella. In this area, since service meshes are more application-aware, it’s no surprise that they do the most to further the cause of reliability.

When it comes to reliably optimizing and engineering performance, one point from above is that EWMA and similarly advanced load-balancing policies help avoid or eject slow instances, thus improving tail latency. A Buoyant article about performance addresses this in terms of latency directly. Envoy and Linkerd are, after all, TCP proxies, and unpacking and repacking a TCP stream is seen as notoriously slow if you’re coming from the networking world (I can attest to this personally, recalling one project I assisted with that did HTTP header injection for ad placement). Still, processors have come far, and Envoy and Linkerd are probably some of the fastest TCP proxies you can get. That said, there will always be sensitive folks who balk at inserting such latency. I thought it was enlightening that in the test conducted in the article cited above, they added more latency and more steps, but because they also added intelligence, they netted an overall latency improvement!

The consensus seems to be that service meshes solve more problems than they create, latency being the main one they create. Are they right for your particular case? As somebody highly quoted once said, “it depends.” As with DPI-based firewalls, these kinds of traffic-processing applications can have great latency and throughput with a given feature set or load, but wildly different performance once certain features are turned on or under heavier load. Not that it’s a fair comparison, but the lightweight, stateless processing that an SDN forwarding agent does will always be far faster than such proxies, especially when, as for OpenContrail, there are SmartNIC vendors implementing the vRouter in hardware.

Another area that needs more attention in terms of reliability is security. As soon as I think of a TCP proxy, my mind jumps to protecting against DoS attacks, because so much state is created to track each session. A nice way that service meshes address this is through the use of TLS. While Linkerd can support this, Istio makes it even easier thanks to the Istio Auth controller for key management. This is a great step toward not only securing traffic over the wire (which SDNs could do too, with IPsec etc.), but also providing strong identity-based AAA for each micro-service. It’s worth noting that these proxies can change the wire protocol to anything they’re configured for, regardless of how the application initiated the connection, so an HTTP request could be sent as HTTP/2 within TLS on the wire.

I’ll cap off this section by mentioning circuit breaking. I don’t know of any way an SDN solution could do this well without interpreting a lot of analytics and application-tracing information and feeding it back into the SDN’s API. Even if that is possible in theory, service meshes already do this today as a built-in feature, gracefully handling failures instead of letting them escalate.
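For anyone newer to the pattern, here is a minimal Python sketch of a circuit breaker: after a run of consecutive failures the circuit opens and calls fail fast for a cooldown period instead of piling more load onto a struggling dependency. The thresholds and timings are illustrative, and the meshes implement this far more robustly.

```python
# Minimal circuit-breaker sketch (thresholds and timings are illustrative):
# after too many consecutive failures, calls fail fast for a cooldown period
# instead of hammering an unhealthy dependency.
import time

class CircuitBreaker:
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # half-open: let one call probe the dependency
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0                # a success closes the circuit again
        return result
```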
 

4.     Testing and debugging
This is an important topic, but there’s not really an apples-to-apples comparison of features, so I’ll just hit the prominent points for each side separately.

Service meshes provide an application- and RPC-oriented view into the intercommunication in the mesh of micro-services. This information can be very useful for monitoring and ops visibility, and during debugging for tracing the communication path across an application. Linkerd integrates with Zipkin for tracing and with other tools for metrics, and, unlike language-specific tracing libraries, it works for applications written in any language.

Service meshes also provide per-request routing based on things like HTTP headers, which can be manipulated for testing. Istio additionally provides fault injection to simulate blunders in the application.
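Here is a toy Python sketch of those two ideas together: per-request routing on an HTTP header, plus fault injection that delays a slice of requests. It is a concept demo only, not Istio’s rule format; the header name, version labels and percentages are made up.

```python
# Toy sketch of per-request routing plus fault injection (concept only, not
# Istio's rule format). The header name, versions and delay percentage are
# illustrative.
import random
import time

def route(headers):
    # Per-request routing: a test header pins traffic to the canary version.
    if headers.get("x-test-user") == "qa":
        return "reviews-v2"
    return "reviews-v1"

def maybe_inject_delay(percent=10, delay_s=2.0):
    # Fault injection: slow down a fraction of requests to rehearse timeout handling.
    if random.uniform(0, 100) < percent:
        time.sleep(delay_s)

request_headers = {"x-test-user": "qa"}
maybe_inject_delay()
print(route(request_headers))   # -> reviews-v2 for the test user
```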

On the SDN side of things, solutions differ. OpenContrail is fairly mature in this space compared to the other CNI provider choices. OpenContrail can run packet capture and sniffers like Wireshark on demand, and its comprehensive analytics engines and visibility tools expose flow records and other traffic stats. Aside from debugging (at more of a network level), there are interesting security applications, such as auditing ACL deny logs. Finally, OpenContrail can tell you the end-to-end path of your traffic if it runs atop a physical network (not a cloud). All of this can potentially help debugging, but this kind of information is far more indirect with respect to the applications and is probably better suited to NetOps.

 

LEGACY AND OTHER INTERCONNECTION

 

Service meshes seem great in many ways, but one hitch to watch out for is how they allow or block your micro-services from connecting to legacy services or to any services that don’t have a proxy in front of them.

 

If you are storing state in S3 or making a call to a cloud service, that’s an external call. If you’re reaching back to a legacy application like an Oracle database, same deal. If you’re calling an RPC of another micro-service that isn’t on the service mesh (for example, it’s sitting in a virtual machine instead of a container), same again. And if your micro-service is supposed to deal with traffic that isn’t TCP, that too isn’t going to be handled through your service mesh (for example, DNS is UDP traffic, and ping is ICMP).

 

In the case of Istio, you can set up egress connectivity with a service alias, but that may require changes to the application, so a direct pass-through is perhaps the simpler option. Also, there are many variants of TCP traffic that are neither HTTP nor higher-level protocols riding on HTTP; common examples are SSH and mail protocols.

 

There is also the question of how service meshes will handle multiple IPs per pod and multiple network interfaces per pod once CNI allows them, which should be soon.

 

You most certainly have some of this communication in your applications that doesn’t quite fit the mesh. In these cases you not only need to plan how to allow this communication, but also how to do it securely, probably with an underlying SDN solution like OpenContrail that can span Kubernetes as well as OpenStack, VMware and metal.

 

WHAT DO YOU THINK?

 

Going back to the original question in the title: Are Service Meshes the Next-Gen SDN?

 

On one hand: yes! They’re eating a lot of the value that some SDNs provided by enabling micro-segmentation and security for RPC between micro-services. Service meshes are able to do this with improved TLS-based security and identity assignment for each micro-service. They also add advanced application-aware load balancing and fault handling that is otherwise hard to achieve without application analytics and code refactoring.

 

On the other hand: no! Service meshes sit atop CNI and container connectivity. They ride on top of SDN, so they’ll still need a solid foundation. Moreover, most teams will want multiple layers of security isolation when they can get the micro-segmentation and multi-tenancy that come with SDN solutions without any performance penalty. SDN solutions can also span connectivity across clusters, stacks and runtimes other than containers, and satisfy the latency-obsessed.

 

Either way, service meshes are a new, cool and shiny networking toy. They offer a lot of value beyond the networking and security value that they subsume, and I think we’ll soon see them in just about every micro-services architecture and stack.

 

MORE QUESTIONS…

 

Hopefully something in this never-ending blog makes you question SDN or service meshes. Share your thoughts or questions. The anti-pattern of technology forecasting is thinking we’re done, so here are some open questions:

 

  1. Should we mash up service meshes and fit Linkerd into the Istio framework as an alternative to Envoy? If so, why?
  2. Should we mash up OpenContrail and service meshes, and if so, how?

 

RESOURCES ON LEARNING ABOUT SERVICE MESHES

 

 

This article was originally posted at http://jameskelly.net/blog/2017/6/19/are-service-meshes-the-next-gen-sdn
