Are we headed toward a world of "nothing but cloud"? Earlier this year, I was involved in Juniper's cloud transformation, and during that process I noticed an interesting trend in the cloud space.
Juniper's cloud journey started with the question "Why cloud?" Burdened with a large and aging data center footprint, where costs were rising and services were slow to deliver, we wanted to see how we could leverage cloud services to better support the business. We went through a period of exploring the various cloud offerings from several cloud service providers, vetting different solutions to build up our capabilities, in both expertise and tool sets, to effectively consume and support cloud services. Our mindset at this time was still "Why cloud?", meaning on-prem remained our default for deployment, but we would consider a cloud solution when it made sense.
Why Not Cloud?
Once we had wrapped our arms around the many flavors of cloud services and built up our capabilities, we wanted to accelerate adoption. We flipped the question and changed our mindset to "Why not cloud?" Cloud became our default deployment method, and we would only consider hosting things on-prem if they could not go to the cloud. It took a lot of effort to reach a point where we were confident in the cloud solutions and in our own ability to effectively consume and support the environment.
Nothing But Cloud?
The number of cloud applications and services available today is undeniable, and offerings come in many different forms. Software companies are starting to offer cloud-based versions of their on-prem solutions. Traditional hardware companies, such as Juniper Networks, offer cloud security solutions like Sky ATP. And of course, there is a flood of companies that provide only cloud-based services.
The interesting trend I noticed is how companies are steering their customers toward cloud solutions. It's not surprising that new services, capabilities, and versions are often available only as cloud-based offerings. For example, if you go to Adobe's website today, you will only see a cloud solution for the latest version of their creative software. As companies continue to invest in this area, we will see more and more of them offering "nothing but cloud" solutions.
I don't mean to suggest that we will end up with 100% of our infrastructure in the cloud, but I do think these trends suggest that enterprises and consumers will have less choice over time. I recently participated as a guest speaker on a podcast where I went into this in more detail. Ultimately, this is about having the options and choices that best fit your unique needs, and it is something we should all keep an eye on.
For additional resources:
Juniper on Juniper: Cloud Transformation Case Study: http://www.juniper.net/us/en/company/case-studies/service-provider/juniper-cloud-transformation/inde...
Facebook Live Session with Tony
Juniper Cloud Journey Podcast with Tony: http://www.juniper.net/assets/podcasts/juniper-it-cloud-transformation.mp3
Software-defined networking (SDN) and Network Functions Virtualization (NFV) have revolutionized traditional communication network architectures and transformed the way communication service providers (CSPs) design their network infrastructure and services. SDN and NFV use standard virtualization technologies to virtualize entire classes of network functions, which can then be connected or chained together to create network services. Implementing the service provider cloud (a telco cloud, cable cloud, or mobile cloud architecture) is a strategic step for SPs to become competitive in the Digital Cohesion era.
A traditional telco service can be described as a set of solutions distributed over multiple domains. These domains, in turn, are supported by transport layers consisting of resources such as switches, routers, and firewalls. These resources perform complex and sophisticated interconnection functions in support of telco services. Functions often require multiple dedicated VPNs and VLANs, extensive load balancers, security, address translators, proxies, redundancy, latency control, policy-based routing, as well as carrier-grade QoS.
In traditional networks, transport layers are provided by dedicated site infrastructure incorporating purpose-built proprietary hardware. In the telco cloud and NFV environment, domains become a tenant of a federated cloud infrastructure where many transport layer functionalities can be replaced with VNFs.
The basic networking capabilities provided through VNFs range in complexity from basic routing to multi-network routing and forwarding. Advanced capabilities include virtual private cloud (VPC) and vBNG, as well as carrier-grade NAT, virtual IMS, virtual evolved packet core (vEPC), virtual service control gateway (vSCG), GiLAN services, and DPI and traffic detection-steering functions (DPI/TDSF).
The evolution of virtualization technologies, along with the Central Office Re-Architected as a Datacenter (CORD) movement, has expanded the boundary of the telco cloud beyond centralized data centers. Juniper's end-to-end NFV solution offers a flexible architecture that fully addresses the unique idiosyncrasies of multi-dimensional deployment models. The path toward NFV and the telco cloud will likely be evolutionary: CSPs will retain existing infrastructures while expanding selected new network infrastructures based on virtual capabilities and deployment models.
This gradual approach minimizes the risk of network migration, empowering the CSPs to completely control the pace of network evolution without disrupting established operating environments, while achieving substantial network efficiency, operational flexibility, CapEx savings, as well as a faster time to market for new network services.
Juniper Networks has an unwavering commitment to innovative transformation. With our field-proven SDN and NFV end-to-end solution as well as our experience from deployments with global CSPs, Juniper has been a collaborative partner for unlocking market opportunities while driving sustainable long-term competitive advantages. We are the leaders that have helped CSPs complete the shift into IP platforms, and we will continue to facilitate their transformation journey into NFV and beyond.
Download the "Architecture for Technology Transformation" Juniper white paper here, a guide to how service providers leverage SDN/NFV to empower change.
We live in exciting and disruptive times in which everything is shifting to the cloud. Carrier-grade networking has been the foundation of the internet we have come to know and love. There continues to be an insatiable need for network capacity, with global Ethernet traffic volume estimated to reach 17,000 Tbps by 2020. Service providers will remain under pressure to deliver rapid, low-latency, high-volume communication of information. This is especially important to enterprises, which are increasingly adopting cloud-managed enterprise connectivity to reduce costs while increasing control and flexibility.
Virtually every company in the connected world has either decided on, or is actively formulating, their cloud strategy. In this process, the network is taking on a new level of importance. As the underlying infrastructure evolves to support cloud, we at Juniper Networks believe a new standard of networking is here: cloud-grade networking.
Not Everyone Builds Public Clouds
Not everyone is building a large public cloud. However, we believe that just as there were lessons to learn from how carriers built networks for high reliability, and how enterprises built networks for easier control, there are lessons to be learned from those who are building public clouds.
In the IT world, there is a lot of adoration for the cloud. Whether it’s service agility, highly automated operations, or scale-out architectures, IT network architects have been closely watching what the web-scale companies have been doing for years. And while not every practice is directly transferable to other parts of IT, it is safe to say that cloud architects have changed how businesses everywhere think about and build their networks.
From Better Networks to Better Networking
Cloud adoption for Juniper customers isn't just about having more capable routers and switches. The real transformational ideas address how networks are operated, with a greater emphasis on management, automation, visibility, and continuous integration.
We no longer design networks assuming failures can be avoided. Instead, we address reliability from an architectural, rather than a device, perspective. Services must be resilient in the face of failures. When connectivity fails, regardless of whether it’s a hardware or a software issue, paths can be recalculated, redirected, and failed components simply replaced.
By moving to a simpler, more uniform set of building blocks, cloud architects have flipped the emphasis from noun to verb—from “networks” to “networking.”
Pillars of Cloud-Grade Networking
At Juniper, we believe that cloud-grade networking is built around four major ideas:
- Everywhere Networking: Put simply, cloud-grade networks must be able to run anywhere and everywhere—on any software, on any hardware, and in any cloud. Juniper calls this idea Everywhere Networking, and it refers specifically to the disaggregation and abstraction of networking services to put the focus on faster deployment of applications and elastic scaling rather than on the physical infrastructure.
- Self-Driving Network: Self-driving networks are the combination of telemetry, workflow automation, DevOps, and machine learning, creating an infrastructure that is responsive, adaptive, and ultimately predictive. The journey to a self-driving future starts from today’s human-driven environments to more event-driven operations, then layers in machine learning algorithms en route to a full self-driving experience.
- Software-Defined Secure Networks: Software-defined secure networking is the application of software to drive pervasive detection and enforcement deep into the network so that every IT component becomes an integral part of the security umbrella. Using a software-defined secure networking approach, security teams can maintain centralized policy and control, surfacing threat intelligence across the entire infrastructure so that it can be analyzed in real-time and enforced dynamically.
- Platform First: A platform first approach is the acknowledgment that the network is never the end goal. Companies—whether they are service providers, cloud providers, or enterprises—are building more than just a network; in fact, the network is merely an enabler for network services and applications. Every element within that network must ultimately be a platform - hardware is a platform for software, software is a platform for network functions, the network is a platform for services, and the cloud is a platform for applications.
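The Self-Driving Network pillar above describes a progression from human-driven to event-driven operations before machine learning is layered in. As a toy illustration of the event-driven step, the following Python sketch (the event types and action names are invented for illustration, not Juniper APIs) maps incoming telemetry events to automated actions:

```python
# Hypothetical event-to-action rules; a real system would carry far richer
# context (device state, topology, change windows) before acting.
RULES = {
    "LINK_DOWN":  "reroute-traffic",
    "FAN_FAILED": "open-maintenance-ticket",
}

def handle(event: dict) -> str:
    """Map an incoming telemetry event to an automated action."""
    return RULES.get(event["type"], "log-only")

print(handle({"type": "LINK_DOWN", "device": "mx-edge-1"}))  # reroute-traffic
print(handle({"type": "PSU_WARN", "device": "qfx-leaf-4"}))  # log-only
```

The self-driving journey is essentially about replacing this static rule table with models learned from telemetry.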
The Great Migration
There is a place that is better than where we are today. We know that future networks should be less complex, more software-defined, easier to procure and integrate, more open, more stable, and much more agile in order to adapt to the needs of the business.
Our innovation engine continues to run strong as we build upon our ideas to power the cloud transformation. Cloud-grade networking is about leveraging the strengths we have, complementing them with some of the most difference-making principles from the cloud, and ultimately democratizing the cloud so that networks of all shapes and sizes will win.
Source: Ovum Ethernet Services Forecast Report, 2015–2020
On June 20th, Juniper announced the concept of "Cloud-Grade Networking," which builds on carrier-grade reach and reliability and enterprise-grade control and usability to bring cloud-level agility and operational scale to networks everywhere.
One of the tenets of Cloud-Grade Networking is the ability to run anywhere and everywhere—on any software, on any hardware, in any cloud. Juniper calls this requirement Everywhere Networking, and it refers specifically to the disaggregation of the networking technology stack so that applications can run in any cloud, cloud workloads can run on any device, and software can run on any hardware.
Good Engineering Practice
I started my career in software engineering almost two decades ago. When I joined Juniper, I found it odd that “disaggregation” was part of the corporate lexicon, with a specific emphasis on the separation of the control plane from the data plane. Wasn't this form of disaggregation merely table stakes for modern routers, or software engineering for that matter?
Disaggregation is just extending the core tenets of modular design to the commercial side of the business. When you have a large development team, the only practical way to build a product is to create clear interface boundaries, then decouple the components so that teams can act semi-autonomously. The more disciplined the engineering team is, the more strictly those boundaries are enforced.
The new conversation about disaggregation centers on the fact that vendors are now exposing these boundaries to the end customer. The implication is that the interface boundaries have become hardened, mature, and standard enough that we can now let customers leverage them for their own use. I do recognize that, from the customer's perspective, this is a revolutionary concept. I can't help but smile when I think about it, because it feels like good engineering practice is finally emerging as the differentiator it should have been all along.
More than Economics
The central theme of most disaggregation discussions these days is how it will enable superior economic advantage. Separating hardware from software allows each layer to be procured independently. The classic scenario, in short form, is as follows: hardware devolves to the lowest common denominator, aka merchant silicon, resulting in huge savings for everyone.
However, while cost savings are important, I think reducing disaggregation to a mere cost-cutting technique misses the major lesson of the web-scale community, which is printing money because of top-line growth, not bottom-line optimization. The real value in disaggregation is architectural; that's what has separated the web-scale companies from the rest of the market. Architectural advantage equals business advantage, and that is what's driving the growth.
One of the defining tenets of Cloud-Grade Networking addresses the question of how networks are operated. Basically, the major cloud properties have all built extensive monitoring and management frameworks around their resources (not only network, but also compute and storage), and they use these tools to optimize the underlying infrastructure. For optimization to be possible, they require fine-grained control. Disaggregation helps ensure that individual components (not just systems but also subsystems) are controllable. A strong interface layer provides a stable way of integrating with the surrounding tooling.
Also, for this extensive operational machinery to work, you want to make sure the underlying network is as simple as possible. The key to simplicity is stomping out snowflakes; uniqueness is bad for highly automated environments. So the web-scale companies have standardized on individual building blocks. Disaggregation allows them to isolate these building blocks, effectively locking them in place while allowing them to make changes elsewhere.
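The "uniform building blocks" idea above can be made concrete with a small sketch. In the Python below (the component names and the health schema are invented for illustration), every building block exposes the same stable interface, so one piece of central tooling can manage heterogeneous components without per-device code:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    kind: str          # e.g. "router", "switch", "line-card"
    link_errors: int   # a single illustrative counter

    def health(self) -> dict:
        """Every component reports the same schema, regardless of kind."""
        return {"name": self.name, "kind": self.kind,
                "healthy": self.link_errors < 10}

def unhealthy(components) -> list:
    """Central tooling filters on the shared schema only."""
    return [c.health()["name"] for c in components
            if not c.health()["healthy"]]

inventory = [
    Component("spine-1", "switch", link_errors=0),
    Component("core-rtr-2", "router", link_errors=42),
    Component("lc-3/0", "line-card", link_errors=3),
]
print(unhealthy(inventory))  # -> ['core-rtr-2']
```

Because every block honors the same interface, swapping one component for another (or adding a new kind) requires no change to the monitoring loop, which is exactly the property that makes highly automated environments tractable.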
More than Hardware and Software
We deliberately chose the term "Everywhere Networking" because "disaggregation" has become so overused that it has been drained of any specific definition. When most people hear the word "disaggregation," they immediately think of hardware and software decoupling. While that is a good thing to have, it is only one aspect of disaggregation—one that is critical for moving away from the legacy architectures perpetuated by a carrier-grade and enterprise-grade mentality toward a new cloud-grade mindset.
For instance, the principles of Everywhere Networking apply to more than individual routers and switches; they can also be applied to large modular chassis. Line cards have historically been tightly coupled with the chassis design; with our June 20 announcement, we introduced a universal chassis by decoupling line card feature functionality from the platform itself. This means that, for the first time, a single Juniper chassis can be leveraged for data center spine routing, core routing, and (in the near future) edge routing applications simply by selecting the appropriate QFX Series, PTX Series, or MX Series line card. This unique engineering accomplishment is a form of Everywhere Networking: disaggregation in the hardware itself.
As we look at the rest of the technology stack, there are lots of opportunities for Everywhere Networking. Does the control plane need to be tightly coupled with the device? Can we disaggregate the chassis into smaller components by providing APIs to the underlying silicon? Should disaggregation only apply to merchant silicon?
The point here is that we need to take a much broader look at Everywhere Networking than just merchant silicon switches.
More than Breaking Things Apart
So far, our discussion has focused almost exclusively on how to break things apart; we haven’t spent enough time talking about how to bring them back together again. For every component that is developed and sold separately, there is a need to integrate it into a fully-operational solution.
Currently, the burden of achieving this integration falls largely on the major cloud providers. I should point out that once these providers settle on the disaggregated components, they buy them as integrated solutions through systems integrators. But if we want to democratize the cloud, we need to make integration easier for everyone.
This is a delicate balancing act. We don’t want a world where all components can be mixed and matched freely; this would effectively mean that everyone is running a snowflake instance, which makes things more unstable. We need to provide enough diversity to allow for meaningful choice, but not so much that everything stops working.
Juniper's approach here is to disaggregate by default as part of a robust engineering design, then be measured on how we integrate components. The end state simply cannot be more unstable than the starting state.
Commercializing Everywhere Networking
As one of the product evangelists at Juniper, I believe that everything starts with building great products, and this has never been truer. In the past, engineers everywhere could cut corners, knowing they could address the technical debt later because it was hidden beneath a broader product veneer.
In an Everywhere Networking world, this simply isn't the case any longer. In many ways, we believe that disaggregation puts a bit of architectural purity on display. And this allows Juniper to commercialize our engineering discipline—something we have never been shy about in the past.
The future of the service provider has two speeds: quick or non-existent. With connectivity services rapidly commoditizing and competition heating up, service providers have to develop new services and deliver them to market faster than their peers.
The key to all of this is the distributed telco cloud.
But the distributed telco cloud is more than a relabeling of the status quo. It is built around software: its foundation is virtualization, and its services are delivered using Network Functions Virtualization (NFV).
In fact, a recent IHS Markit report projects that the NFV market will reach $36 billion by 2021. This is a clear indication that service providers are not just experimenting with, but rapidly preparing for, a distributed cloud driven by virtualized infrastructure. Of course, an architectural shift like this comes with challenges.
Building and operating a distributed cloud is no easy task. Service providers have proven beyond a doubt that they can build out a network. But delivering dynamic services on a distributed cloud is more challenging. It involves multi-vendor software with cloud-tailored management and orchestration tools. This means that service providers must deliver new infrastructure, using new processes and even new skills that have traditionally not been part of the service provider DNA. Indeed, distributed cloud represents a transformation across all parts of the business and more than merely an infrastructure deployment. It’s a massive undertaking, requiring a financial and personnel commitment to complete the migration.
What service providers need is a simple and proven platform that can provide trusted services and help them overcome all of these challenges. Enter Juniper’s Contrail Cloud.
Leveraging more than two decades of experience providing solutions for the most demanding service provider environments imaginable, Juniper Networks has packaged the infrastructure and operational components to make distributed telco cloud a success for service providers everywhere. By simplifying the service provider cloud business, Contrail Cloud is providing the foundation for an always-on, highly-differentiated service offering.
Contrail Cloud is an integrated telco cloud platform built to run high-performance NFV with always-on reliability, enabling service providers to deliver innovative services with greater agility. Leveraging Juniper's industry expertise to build and operate the cloud simplifies and streamlines business operations, giving service providers an edge over their competition.
Contrail Cloud combines Contrail Networking with Red Hat’s industry-leading OpenStack solution to create a platform that brings together proven, dynamic cloud orchestration and high-scale connectivity. Furthermore, Contrail Cloud has built-in automation capability powered by machine learning to run cloud infrastructure and VNFs in the most optimal manner, remediating potential failures and guaranteeing service SLAs.
Contrail Cloud is a differentiating way to manage distributed cloud:
- With Juniper’s end-to-end support services, Contrail Cloud is more than a product. It represents a migration from legacy to cloud services, with support for technology, process, and people. This means the transition can be less difficult and take less time, allowing service providers to focus their energy where it’s best spent - on their customers.
- Built with industry leading components, such as Red Hat’s OpenStack Platform, Contrail Networking, and AppFormix, service providers can enhance the performance of their VNFs on a proven platform.
- AppFormix provides real-time monitoring with automated remedial action powered by machine learning, ensuring that service providers have always-on reliability for their services.
With Juniper’s enhanced Contrail Cloud, service providers can now realize the full benefits of a distributed telco cloud - maximizing revenue while minimizing time and cost. Where distributed telco cloud is concerned, Juniper has answered the question of how. The only question left is - what’s next?
To learn more about the technical details of Contrail Cloud, read Pratik Roychowdhury’s blog.
Want to see Contrail Networking and AppFormix in action? Try the sandbox.
You have probably heard about “cognitive computing,” a methodology that simulates human intelligence through the application of machine-based cognitive models. A subset of artificial intelligence (AI), cognitive computing mimics the cognitive behaviors of human “thinking” in order to achieve human-like insights within the machine world—in other words, making machines think like humans.
“Cognitive cloud infrastructure” follows a similar path, applying cognitive computing techniques to cloud infrastructure, enabling it to “think like humans” and become essentially self-driving.
As thinking humans, before we can apply intuitive reasoning to a given situation, we must build “perceptions” in order to develop awareness based on our experiences. I believe we are currently at the “perception stage” with machine learning techniques - building awareness using telemetry sensors at every layer within the cloud infrastructure. This is the first step towards achieving the cognitive “thinking stage” as technology progresses.
As cloud infrastructures are built to differentiate themselves in a hypercompetitive world, many innovative Web 2.0 companies are positioning their own migration to cognitive cloud infrastructure. In these cloud infrastructure environments, analytics controllers collect not only structured data but also dark, unstructured operational data. When this unstructured data is combined with policy and baselined data, new insights can be gained using machine learning techniques. Patterns and relationships can be developed from the multi-layer infrastructure data with contextual associations; this awareness is critical to the “perception” stage. As we move from the “perception” to the “thinking” stage, cognitive controllers can progress from perception to reasoning to driving the cloud infrastructure, learning in real-time to continue honing reasoning ability.
Data is plentiful in cloud infrastructures, and most modern cloud platforms support streaming telemetry at every layer. However, most of this data is not utilized to form insights that provide the contextual awareness required to make actionable decisions.
So, what are the characteristics of cognitive cloud infrastructure?
Intent-Based Global Policy
A model-driven policy that defines all of the multi-layer cloud resources is key to achieving intent-based global policy. In order for this policy framework to scale, the policy attachments and enforcement should be distributed as close to resources as possible.
This distributed approach reduces the big-data noise and sends only the relevant signals and data to the centralized cognitive controller, enabling higher-frequency collection for better learning and the scale needed to achieve the cognitive cloud.
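The local noise reduction described above can be sketched in a few lines of Python. In this toy example (the field names and thresholds are illustrative, not part of any real policy framework), each node filters its raw telemetry and forwards only samples that cross a policy threshold:

```python
def filter_signals(samples, cpu_limit=90.0, drop_limit=100):
    """Keep only the samples worth sending to the central controller."""
    return [s for s in samples
            if s["cpu"] > cpu_limit or s["pkt_drops"] > drop_limit]

# Raw per-node telemetry; most of it is uninteresting background data.
raw = [
    {"node": "leaf-1", "cpu": 12.0, "pkt_drops": 0},
    {"node": "leaf-1", "cpu": 97.5, "pkt_drops": 3},
    {"node": "leaf-2", "cpu": 35.0, "pkt_drops": 512},
]
relevant = filter_signals(raw)
print(len(relevant))  # 2 of 3 samples are forwarded
```

Pushing this filter to the edge is what lets the central controller sustain high-frequency collection without drowning in data.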
Contextual monitoring of related workloads and resources, regardless of their location, is critical for dynamic cloud environments. Also, distributed collection points with a centralized aggregation point can bring increased scale to hyper-scale cloud environments. As multi-cloud environments take shape, global multi-cloud monitoring is critical to achieving a seamless cognitive cloud.
Once you have an intent-based policy with multi-cloud monitoring in place, baselining with machine learning enables cognitive controllers to start building awareness and perceptions that allow predictive insights. These predictive models exploit patterns in contextual historical data, moving from the “perception” to the “thinking” stage as cognitive models and sensors evolve. These sensors can feed resource-related behavior data, policy-related descriptive data, and contextual resource interaction data that enable the cognitive models to learn from experience and employ reasoning and logic to create better cognitive clouds.
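A very simple form of the baselining described above can be sketched with a trailing window: learn the recent "normal" for a metric and flag points that deviate sharply from it. This Python sketch (the window size, threshold, and latency series are illustrative; production systems would use far richer models) marks samples more than k standard deviations from the trailing baseline:

```python
import statistics

def anomalies(series, window=5, k=3.0):
    """Return indices of points far outside the trailing baseline."""
    flagged = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mu = statistics.mean(base)
        sigma = statistics.pstdev(base) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mu) / sigma > k:
            flagged.append(i)
    return flagged

latency_ms = [10, 11, 10, 12, 11, 10, 11, 95, 11, 10]
print(anomalies(latency_ms))  # the spike at index 7 is flagged
```

This is the "perception" stage in miniature: the model first builds awareness of normal behavior, and only then can reasoning about deviations begin.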
As intent-based global policy, multi-cloud monitoring, and predictive analytics integrate with SDN controllers, we start to move toward the self-driving cloud by triggering a closed loop of control.
Juniper Networks, with programmable infrastructure platforms like Contrail Networking, and intelligent analytics platforms like AppFormix, is well-positioned to lead the journey to the cognitive cloud. As we move from automation to actionable insights based on awareness and reasoning, the cognitive cloud moves within our reach, driving real business outcomes in agility, operational simplicity, and an improved user experience.
Are you ready for the journey?
Ransomware has become so popular that a recent episode of Mr. Robot featured it as an attack vector against an evil corporation, presumably to defraud it.
So how can ransomware attack medium and large companies if data isn’t stored on one machine? Is it the same mechanism as the traditional distribution method? Are such attacks still feasible?
Traditional Malware Model
When ransomware targets Windows or macOS users, it is primarily distributed via browser exploits, social engineering, or malicious software updates.
The author then collects money by asking users to pay in a pseudo-anonymous currency, such as Bitcoin, to decrypt their data. This has been the case since 2011, when popular ransomware first started using cryptocurrency. In 2012, the more user-friendly Citadel toolkit started distributing ransomware. Locky made headlines early this year and was followed by several crypto-ransomware strains, including KeRanger, which made waves as the first major ransomware targeting macOS. In fact, we recently covered how various recent ransomware strains take over a user's computer.
Overall, traditional ransomware primarily targets the end-user and doesn’t spread much from user to user.
However, nothing stops the malware authors from using initial infections as a stepping stone to encrypt data stored in the cloud.
Holding data for ransom doesn't work if you have strong distributed backups, and the evolution of ransomware has seen increasingly sophisticated attempts to defeat weak ones. Beyond erasing backups on locally attached storage, modern ransomware can take the attack one step further and delete old versions of files in Dropbox, Box, Google Drive, and similar services. Although no successful ransomware has tried this approach yet, such attacks are possible and fairly easy to execute.
This is possible because if the user has the ability to delete these files, then malware that takes over the user's computer has that ability as well.
Taking your cloud for ransom
In many small to medium-sized companies, or in groups within large companies, the organizational structure among developers who have access to the cloud looks like this: many developers can access server instances and cloud storage, and some of them (call them dev-ops) have elevated privileges to access sensitive data.
Various kinds of sensitive information can be stored on Amazon S3, Google Cloud Storage, or Microsoft Azure. This storage is fault tolerant, and the cloud provider is responsible for maintaining the data. However, if files are modified, there is usually no backup other than on the local server instances.
An attacker can either compromise one of the developers’ machines or any server instance.
Once a developer's machine is compromised, anything that developer has access to is also available to the malware authors.
However, this is not yet the time for the ransomware to begin encryption; if it happens at this stage, the overall attack can be thwarted by revoking credentials. Instead, the malware can spread further by utilizing the SSH connections of a compromised server to reach the developer machines that access that server.
At this point, malware spreads to more developers and has access to cloud storage that many programmers have access to:
Finally, once machines that have elevated privileges (dev-ops in the picture) are compromised, even sensitive cloud-stored data that isn’t accessible to most developers is also compromised:
Now malware can rewrite stored data on all of the server instances and cloud storage with its encrypted counterpart. This would be followed by extortion and, if successful, decryption stages.
Why doesn’t this happen everywhere?
Although the approach is obvious and payoffs are big, enterprise-targeted ransomware is not yet very common.
It is difficult to pull off such an extensive attack, and it may require insight into the targeted organization’s development structure and digital data storage practices. Unlike a single-machine infection, an enterprise infection requires an approach that works without fault across multiple operating systems and remains stealthy; programmers are more likely than the general populace to run different operating systems and custom environments on their machines. C&C communication is much more challenging as well, since some of the servers may never have internet access. Finally, the decryption stage is difficult because it may be hard to establish which machines belong to which user and which users paid the ransom. If decryption is unreliable, victims are less likely to pay the ransom.
Why doesn’t this happen with source code?
Unlike binary data that is usually only stored in a few places, source code is stored on each and every developer machine. If any server is not affected (usually the case since some servers/users are offline during the attack), source code can be recovered.
Countermeasures for the enterprise
Cold offline backups are an obvious solution; however, some recent data would inevitably be lost.
Two-factor authentication will not stop such attacks, but it may make them more difficult.
Some initial attack vectors can be stopped with a network-based security solution or a local antivirus.
Developer culture needs to change to disallow pulling packages from unaudited public repositories.
Another solution is tiered developer privileges with no SSH access to sensitive server instances, possibly combined with duplicating data across different cloud providers. This step, however, is either labor intensive or pricey.
Partial data recovery is possible from non-compromised servers such as cache servers (memcached or Redis instances) or traditional databases. Data in databases is more difficult to reversibly encrypt, and dumping data from a remote database server is time consuming and resource intensive, so it would likely not go unnoticed.
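One concrete way to harden cloud storage along these lines is to enable object versioning and attach a policy that denies version deletion unless the caller authenticated with MFA. A sketch of such a bucket policy on AWS (the bucket name is hypothetical; the `aws:MultiFactorAuthPresent` condition key is a standard AWS IAM mechanism):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyVersionDeleteWithoutMFA",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteObjectVersion",
      "Resource": "arn:aws:s3:::example-sensitive-bucket/*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

With versioning on, ransomware that overwrites objects merely adds new versions; this policy keeps a compromised (non-MFA) session from purging the old ones, so the pre-attack versions remain recoverable.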
Carriers and enterprises know that success today is largely dependent upon giving customers and business units infrastructure choices that allow them to continuously move faster, all the time. New and improved applications and network services must be developed, packaged, and deployed across different environments—public, private, proprietary, open. To get there, they need a reliable and scalable infrastructure with security that is consistent across all environments, reducing complexity and risk while supporting growth opportunities.
Making this a reality requires giving developers a new set of abstractions that allow them to specify the infrastructure performance requirements that these services will demand in order for them to deliver the business value that customers expect. At the same time, operators of the infrastructure need to automate the monitoring and remedial actions necessary to keep those applications performing as expected.
This, in a nutshell, is what Intent-Driven Cloud is all about.
Intent-Driven Cloud is a concept that extends terms like “intent-driven networking” advanced by Gartner and others. Think of Intent-Driven Cloud as a means whereby application developers can tell the infrastructure what needs to be accomplished rather than how to do it. Intent-Driven Cloud abstracts individual configurations away from the developer, more closely aligning the infrastructure with the application’s actual purpose.
This approach allows us to translate higher-level business policy (what) into necessary configurations (how). It provides configuration across the infrastructure via automation or orchestration. And it can dynamically optimize application and software-defined infrastructure performance and assure network function service levels by validating in real-time that the stated intentions are being met. If not, the system can take corrective action based on automation informed, in part, by machine learning and A.I.
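The translate-and-validate loop described above can be sketched in a few lines. This is an illustrative model only — the intent schema, metric names and scaling action are assumptions, not the actual interfaces of any Juniper product:

```python
def reconcile(intent, observed, scale_step=1):
    """Translate a declarative intent ('what') into a corrective action ('how').

    intent:   the stated goal, e.g. {"metric": "latency_ms", "max": 20}
    observed: current telemetry, e.g. {"latency_ms": 35, "replicas": 2}
    Returns the action an orchestrator should take, or None when the
    intent is already met. All names and thresholds are illustrative.
    """
    value = observed[intent["metric"]]
    if value <= intent["max"]:
        return None  # intent satisfied; nothing to do
    # Violation detected: emit a concrete configuration change (scale out).
    return {"action": "scale_out",
            "replicas": observed["replicas"] + scale_step}

intent = {"metric": "latency_ms", "max": 20}
print(reconcile(intent, {"latency_ms": 35, "replicas": 2}))
# {'action': 'scale_out', 'replicas': 3}
print(reconcile(intent, {"latency_ms": 12, "replicas": 3}))
# None
```

Running such a check continuously against live telemetry is what turns a static policy into the closed feedback loop the paragraph describes.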
Through the synergy of Juniper Networks AppFormix, Contrail Networking and Software-Defined Secure Networks (SDSN), Juniper Networks has delivered these capabilities with real-world implementations.
How AppFormix Makes Intent-Driven Cloud Possible
The AppFormix platform for cloud operations is purpose-built to deliver an Intent-Driven Cloud. It is built to leverage big-data analytics and machine learning to redefine telemetry and operations management for cloud-native applications and infrastructure. Acquired by Juniper Networks at the end of 2016, AppFormix offers real-time and historic monitoring, visibility and dynamic performance optimization.
AppFormix is a new breed of full-stack optimization and management software that tracks performance and corrects orchestration of applications and software-defined infrastructure in real-time. The platform uses a distributed edge-computing approach coupled with machine learning to monitor environments where maintaining network performance demands quick response time.
AppFormix is a tool for service assurance to optimize the performance of infrastructure and applications and provide guidance for services. The purpose of service assurance is to optimize the end-user experience while maximizing the profitability of services delivered over the network. It provides capabilities for operators in cloud-native environments powered by OpenStack (running VMs as well as containers), on VMware vCenter, Kubernetes and in public clouds. AppFormix software provides predictive analytics driven by machine learning to issue alerts and take remedial action before performance problems impact users.
When using software-defined, cloud-based environments to deliver network functions—like internet access, firewall, VPN, VoIP and load balancing—a reliable infrastructure layer is key to reliable service availability. Metrics such as drop rate, latency and jitter are tracked to manage virtualized network function (VNF) performance. AppFormix reduces jitter by more than 70 percent in environments where VNFs are deployed across thousands of nodes and dozens of geographic regions.
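Of these metrics, jitter is the one derived rather than measured directly: it is estimated from the variation between successive latency samples. A small sketch using the RFC 3550-style smoothed estimator (an assumption for illustration — the post does not specify which formula AppFormix uses):

```python
def smoothed_jitter(latencies_ms):
    """RFC 3550-style interarrival jitter estimate from latency samples.

    The estimate J is updated once per sample as J += (|D| - J) / 16,
    where D is the difference between consecutive latency measurements.
    """
    jitter = 0.0
    for prev, cur in zip(latencies_ms, latencies_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0
    return jitter

print(smoothed_jitter([10, 10, 10, 10]))      # 0.0 -- steady latency, no jitter
print(smoothed_jitter([10, 26, 10, 26]) > 0)  # True -- variable latency
```

The divisor of 16 makes the estimate a slow-moving average, so a single latency spike nudges the jitter figure rather than dominating it.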
In these virtualized environments, operators need automated and real-time tools to schedule VNF workloads across network infrastructure that is often distributed across many nodes and regions. Faults must be detected, quantified and remediated in real time, and that’s what AppFormix software delivers.
Application Monitoring for the Cloud-Native Era
AppFormix takes the high-level business intent provided by operators and developers—service levels, for instance—then monitors cloud operations to ensure that this intent is being met. If not, AppFormix can:
(a) inform the operator and developer/application that the intent is not being met;
(b) inform the respective orchestration tool of the violation in real-time, along with the insight of what needs to be done to remedy the violation (these are global actions, going to one or more controllers, e.g., nova for VM scheduling, or heat for workload scheduling, or Contrail for network scheduling); and
(c) take local corrective/orchestration actions to ensure that adequate resources are supplied to the workload to meet the intent.
Stated more simply, AppFormix is the mechanism for translating “what” into “how” with automated implementation, real-time state awareness and dynamic optimization. It provides a 360-degree feedback loop and the visibility to ensure that the intent of cloud operators — say, an SLA for a specific VNF service — is properly fulfilled.
What does this mean for cloud operators? With AppFormix, cloud operators have the means to heuristically validate that infrastructure environments are in sync with the organization’s business intent. The ultimate outcome of this approach is cloud infrastructure that not only aligns with business intentions but also delivers unprecedented speed and agility. In essence, delivering the Intent-Driven Cloud.
Want to Learn More?
Come see us at OpenStack Summit Boston, booth #A1 and check out our full list of activities planned at the event.
Sumeet Singh, Vice President of Engineering at Juniper and founder of AppFormix, as well as other Juniper executives, will be available for press interviews during the conference. Please contact Michelle Zimmermann (email@example.com) with any requests.
I love the notion of recursion. It’s a fascinating phenomenon because of its conceptual simplicity and its practical power. When you’re programming a computer, there’s nothing cooler than dipping down into a recursive function, letting it bore a hole deeper and deeper, doing more and more work all the while. Then when the time is right, you pull the trigger and pull it up-up-up and you continue on.
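That “bore a hole deeper and deeper, then pull it up-up-up” motion is literally what a recursive call stack does. A toy illustration:

```python
def depth_sum(nested):
    """Sum a nested list of numbers, recursing 'down' into each sublist
    and carrying partial results back 'up' as the calls return."""
    total = 0
    for item in nested:
        if isinstance(item, list):
            total += depth_sum(item)  # dip down one level deeper
        else:
            total += item             # leaf reached: do the actual work
    return total

print(depth_sum([1, [2, [3, 4]], 5]))  # 15
```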
Anyone who has been in telecom more than 20 years knows that even before the World Wide Web, a cloud was used as a graphical representation of the network. For telcos, the cloud IS the network – not just the virtualized data centers connected to it. Of course, the network comprises lots of components. What if you took one of the most foundational building blocks of the Internet, the component responsible for managing all the growing traffic and network services the modern Internet delivers, and you virtualized it?
What if you literally took the physical manifestation of the network cloud, and you sent it to the cloud, taking all the intelligence into a software module running somewhere else (i.e. a virtualized data center that’s part of the cloud writ large)? What you’d get is all the benefits of the cloud as we know it, applied to networking – it’s on demand, it elastically scales up/down rapidly, it’s multi-tenant.
But the power of a recursive solution is not what the first running of the process does, but what it continues to do as it’s repeated or extended. So if you took the physical network and sent it to the cloud, what could you then do? Then what could you do? What about then? The revolution is here. Are you ready?
IT today is faced with a dual challenge: Make IT a competitive differentiator and a business growth enabler. Or go home. That’s pretty clear. What’s also clear today is that the multi-hybrid cloud approach to transforming the data center is the winning strategy to beat the odds.
That’s where the Juniper Networks Unite architecture comes in. We introduced Juniper Unite in 2015 as an architecture for the entire enterprise – including data center, campus and branch – to give organizations the tools to design and build environments that bring all three together to meet their specific needs.
Today’s Unite Cloud for the Data Center announcement broadens the architecture to further simplify the creation and management of hybrid, multi-cloud data centers. We’re introducing an enhanced Junos® Space Network Director management tool, which leverages advanced automation to analyze and control data centers, and the new Juniper Networks® QFX5110 switch with 100GbE capabilities to scale with evolving bandwidth demands.
We recognize that deploying such an environment, while keeping costs down, is incredibly complex. That’s why we’re adding the Contrail JumpStart Program to our Professional Services portfolio.
JumpStart services provide proven, cost efficient, pre-tested approaches to successfully deploying Juniper’s software-defined networking (SDN) solution, providing customers with an open, automated pathway to the cloud.
JumpStart services are soup-to-nuts. They include installation and configuration for the software in a pre-defined environment, plus transfer of knowledge for the customer so they can start using and gaining experience with the software before full production deployment. Additionally, they provide post-installation support to address environment-specific questions and issues during the early implementation process.
For Juniper partners, this is an opportunity to include services in their portfolios, along with the QFX5110 switch and Space Network Director, to help ease customer adoption of complex solutions, minimize the risk of deployment, and enhance the customer experience. Partners will be able to sell JumpStart services immediately.
If you’re unfamiliar with the Unite architecture and want to learn more about JumpStart services, the following JumpStart data sheets will give you a good overview:
Contrail Cloud Platform: http://www.juniper.net/assets/us/en/local/pdf/datasheets/1000610-en.pdf
Contrail Networking: http://www.juniper.net/assets/us/en/local/pdf/datasheets/1000611-en.pdf
As always, our goal is to help make our partners as profitable as possible, and JumpStart services are another tool in their toolboxes. More detail on JumpStart services is available on Partner Center. And watch this space for more news on how JumpStart services can continue to benefit partners.