IPv6 destination remote triggered blackholing with the 6PE model - Part II
Apr 18, 2013

In IPv6 destination remote triggered blackholing with the 6PE model - Part I, we started the initial design analysis by preparing policies and defining premises to implement the IPv6 destination-based Remote Triggered Blackhole (RTBH) paradigm, originally defined in [RFC3882]. This is tested in a Junosphere topology spanning two Autonomous Systems, considering the transition from native IPv6 peering to an internal 6PE model inside each network.

 

It is now time to simulate an attack mitigation event and see all these policies in action. Along the way, the inherent next-hop self policy is analyzed and a next-hop rewrite policy is recommended to achieve the intended blackholing goal.

 

Traffic mitigation to a particular IPv6 destination

 

Following our previous example, both networks should at this stage be prepared to nuke offending traffic to a marked IPv6 destination. With R15 acting as the RTBH rule injector, the intention is to propagate this advertisement to AS65000 and blackhole traffic as soon as possible:

 

[Figure: 6PE-rtbh-blackholing.jpg]

 

R15 injects the destination-based RTBH rule through its iBGP policies:

 

[edit routing-options rib inet6.0 static]
route 2001:db8:6500:1::1/128 {
    discard;
    install;
    tag 666;
}

 

 

[edit policy-options policy-statement inject-RTBH-destination]
juniper@R15# show
term destination-RTBH {
    from {
        protocol static;
        tag 666;
    }
    then {
        local-preference 666;
        community add 65001:666;
        next-hop 100::1;
        accept;
    }
}
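The outputs do not show where this policy is attached at R15. Presumably it sits in the iBGP export chain, along these lines (an assumption based on the group naming that appears later in this post):

juniper@R15# set protocols bgp group iBGP-inet export inject-RTBH-destination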

 

 

juniper@R15> show route advertising-protocol bgp 192.168.255.13 table inet6.0

inet6.0: 15 destinations, 16 routes (15 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
[...]
  2001:db8:6500:1::/64
*                         ::ffff:192.168.255.24        100        I
  2001:db8:6500:1::1/128
*                         100::1                       666        I

 

 

juniper@R15> show route advertising-protocol bgp 192.168.255.13 table inet6.0 2001:db8:6500:1::1/128 extensive

inet6.0: 15 destinations, 16 routes (15 active, 0 holddown, 0 hidden)
* 2001:db8:6500:1::1/128 (1 entry, 1 announced)
BGP group iBGP-inet type Internal
     Route Label: 2
     Nexthop: 100::1
     Flags: Nexthop Change
     Localpref: 666
     AS path: [65001] I
     Communities: 65001:666

 

This NLRI is received at the AS65001 ASBRs with the blackhole marking conveyed by communities:

 

juniper@R13> show route protocol bgp 2001:db8:6500:1::1/128 extensive

inet6.0: 20 destinations, 29 routes (20 active, 0 holddown, 0 hidden)
2001:db8:6500:1::1/128 (1 entry, 1 announced)
TSI:
KRT in-kernel 2001:db8:6500:1::1/128 -> {indirect(262150)}
        *BGP    Preference: 170/-667
                Next hop type: Indirect
                Address: 0x93239b8
                Next-hop reference count: 3
                Source: 192.168.255.15
                Next hop type: Discard
                Protocol next hop: 100::1
                Push 2
                Indirect next hop: 93dc7e0 262150
                State: <Active Int Ext>
                Local AS: 65001 Peer AS: 65001
                Age: 1:51     Metric2: 0     Tag: 666
                Task: BGP_65001.192.168.255.15+51552
                Announcement bits (2): 0-KRT 4-Resolve tree 5
                AS path: I
                Communities: 65001:666
                Accepted
                Route Label: 2
                Localpref: 666
                Router ID: 192.168.255.15
                Indirect next hops: 1
                        Protocol next hop: 100::1 Metric: 0
                        Push 2
                        Indirect next hop: 93dc7e0 262150
                        Indirect path forwarding next hops: 0
                                Next hop type: Discard
            100::1/128 Originating RIB: inet6.0
              Metric: 0              Node path count: 1
              Forwarding nexthops: 0
                Next hop type: Discard

 

At this stage, traffic is mitigated at the AS65001 upstream edge without propagating further downstream inside AS65001. Launching the same ping over the test path shows Normal discard drops at R13:

 

juniper@R13> show pfe statistics traffic | match Normal
    Normal discard             :                  159
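The test probe itself is the one defined in Part I; something along these lines from the AS65000 side (the exact endpoints are an assumption here):

juniper@R1> ping 2001:db8:6500:1::1 rapid count 100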

 

The intention is now to propagate this advertisement further upstream to AS65000, to nuke the attack closer to the offending (distributed) sources.

 

IPv6 destination-based RTBH beyond the local AS

 

This multi-AS destination-based RTBH scheme is offered as a service by many transit and upstream service providers to customer and peer ASs. Once an agreement is reached, peer ASs can advertise routes more specific than their route objects, with particular tagging, with the intention of enforcing attack mitigation upstream. This requires a certain policy implementation at the approved edge peering sessions to admit these NLRIs, as sketched below.
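For illustration, here is a minimal import-side sketch of such an admission policy at an AS65000 ASBR. The prefix-list customer-65001-space, the community name and the policy name are hypothetical; only the community value comes from this lab:

[edit policy-options]
prefix-list customer-65001-space {
    # agreed customer route objects
    2001:db8:6500:1::/64;
}
community blackhole-65000 members 65000:666;
policy-statement admit-customer-RTBH {
    term customer-blackhole {
        from {
            # admit only tagged more-specifics of the agreed space
            community blackhole-65000;
            prefix-list-filter customer-65001-space longer;
        }
        then accept;
    }
}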

 

In our example, the intention is for the mitigation advertisement injected by R15 to be propagated beyond the ASBRs. In this case, let's assume a community tagging that signals blackholing in the peer AS65000, together with a policy implementation at this edge towards the source:

 

juniper@R13# show | compare rollback 1
[edit protocols bgp group eBGP-inet]
+    export [ propagate-destination-RTBH control-own-space ];
[edit policy-options]
+   policy-statement control-own-space {
+       term own-statics {
[...]
+   }
+   policy-statement propagate-destination-RTBH {
+       term injected-RTBH-RR {
+           from {
+               protocol bgp;
+               community 65001:666;
+           }
+           then {
+              community add 65000:666;
+              accept;
+           }
+       }
+   }
[edit policy-options]
+   community 65000:666 members 65000:666;

 

After adding the AS65000 blackhole community, the specific advertisement gets propagated. At this point, there is a slight difference between using a multihop session (between R13 and R10) and the accept-remote-nexthop knob (between R14 and R11), as mentioned in IPv6 destination remote triggered blackholing with the 6PE model - Part I. The difference is rather cosmetic in our case, because both options ultimately serve the blackholing purpose, but only with a multihop session can the next hop be rewritten in an eBGP export policy. Both session styles are sketched below.
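For context, here is a minimal sketch of the two eBGP session styles carried over from Part I. The neighbor addresses match the outputs below; the exact group and neighbor layout is an assumption:

# R13 (AS65001) towards R10: multihop session, so the discard prefix
# can be set as next hop directly in the eBGP export policy
[edit protocols bgp group eBGP-inet neighbor 192.168.15.1]
multihop;

# R11 (AS65000) towards R14: single-hop session; accept-remote-nexthop
# relaxes next-hop validation so a non-connected next hop such as
# 100::1 can still be accepted and resolved
[edit protocols bgp group eBGP-inet neighbor 192.168.16.2]
accept-remote-nexthop;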

 

  • In R13 (multihop), the route has a discard next hop and is advertised towards the eBGP peer with the proper community marking and the discard prefix as next hop:

 

juniper@R13> show route 2001:db8:6500:1::1/128
[...]
2001:db8:6500:1::1/128
                   *[BGP/170] 00:09:16, localpref 666, from 192.168.255.15
                      AS path: I, validation-state: unverified
                     to Discard

juniper@R13> show route advertising-protocol bgp 192.168.15.1 extensive

inet6.0: 20 destinations, 29 routes (20 active, 0 holddown, 0 hidden)
[...]

* 2001:db8:6500:1::1/128 (1 entry, 1 announced)
BGP group eBGP-inet type External
     Nexthop: 100::1
     AS path: [65001] I
     Communities: 65000:666 65001:666

 

  • In R14 (accept-remote-nexthop), the route has a discard next hop and is advertised towards the eBGP peer with the proper community marking, but the discard prefix is not set as the next hop in this advertisement, because this is a single-hop session:

 

juniper@R14> show route 2001:db8:6500:1::1/128
[...]
2001:db8:6500:1::1/128
                   *[BGP/170] 00:11:38, localpref 666, from 192.168.255.15
                      AS path: I, validation-state: unverified
                     to Discard

 

juniper@R14> show route advertising-protocol bgp 192.168.16.1 extensive

inet6.0: 20 destinations, 29 routes (20 active, 0 holddown, 0 hidden)
[...]

* 2001:db8:6500:1::1/128 (1 entry, 1 announced)
BGP group eBGP-inet type External
     Nexthop: Self
     Flags: Nexthop Change
     AS path: [65001] I
     Communities: 65000:666 65001:666

 

Does it really matter in this use case? Not really, because the proper community marking will indicate the need to blackhole at the upstream ASBRs. Let's look at the other side of the fence.

 

  • In R10 (multihop), as one AS65000 ASBR, the route is already received with the discard prefix as next hop, which is reinforced as well by the proper community marking:

 

juniper@R10> show route receive-protocol bgp 192.168.15.2 table inet6.0 extensive

inet6.0: 20 destinations, 29 routes (20 active, 0 holddown, 0 hidden)
[...]

* 2001:db8:6500:1::1/128 (1 entry, 1 announced)
     Accepted
     Nexthop: 100::1
     AS path: 65001 I
     Communities: 65000:666 65001:666

 

and considering the RTBH-Next-Hop-rewrite import policy at the ASBRs, which matches the right communities, the discard next hop is reinforced when importing the route into the RIB:

 

juniper@R10> show route 2001:db8:6500:1::1/128

inet6.0: 19 destinations, 29 routes (19 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:6500:1::1/128
                   *[BGP/170] 00:18:03, localpref 666, from 192.168.15.2
                      AS path: 65001 I
                     Discard
[...]

 

  • In R11 (accept-remote-nexthop), as another AS65000 ASBR, the route is received with the eBGP peer's address as the natural next hop, but we rely on the proper community marking for blackholing:

 

juniper@R11> show route receive-protocol bgp 192.168.16.2 table inet6.0 extensive
[...]

* 2001:db8:6500:1::1/128 (1 entry, 1 announced)
     Accepted
     Nexthop: ::ffff:192.168.16.2
     AS path: 65001 I
     Communities: 65000:666 65001:666

 

but considering the same RTBH-Next-Hop-rewrite import policy at the ASBRs described above, and given that the accept-remote-nexthop knob relaxes the resolver restrictions, the discard next hop is now enforced when importing the route into the RIB (see the configuration note after these outputs):

 

juniper@R11> show route 2001:db8:6500:1::1/128

inet6.0: 19 destinations, 30 routes (19 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:6500:1::1/128
                   *[BGP/170] 00:22:53, localpref 666, from 192.168.16.2
                      AS path: 65001 I, validation-state: unverified
                     to Discard
[...]

 

In a nutshell, the same net effect.
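Two pieces of supporting configuration are implied by these outputs but not shown explicitly. Here is a minimal sketch, assuming the eBGP group naming seen at R13 also applies at the AS65000 ASBRs, and the standard RTBH design where every router carries a local discard route for the agreed prefix:

# attach the rewrite policy at the ASBR edge sessions
# (the policy body is shown in the next section)
[edit protocols bgp group eBGP-inet]
import RTBH-Next-Hop-rewrite;

# a local discard route makes any 100::1 protocol next hop
# resolve to a discard, analogous to R15's injection in AS65001
[edit routing-options rib inet6.0 static]
route 100::1/128 {
    discard;
    install;
}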

 

So far so good. Let's now have a look at how this route gets propagated further upstream into the inet6 labeled-unicast iBGP mesh:

 

juniper@R10> show route advertising-protocol bgp 192.168.255.6

inet6.0: 19 destinations, 28 routes (19 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
[...]
  2001:db8:6500:1::1/128
*                         Self                         666        65001 I

juniper@R10> show route advertising-protocol bgp 192.168.255.6 extensive

inet6.0: 19 destinations, 28 routes (19 active, 0 holddown, 0 hidden)
[...]

* 2001:db8:6500:1::1/128 (1 entry, 1 announced)
BGP group iBGP-inet type Internal
     Route Label: 2
     Nexthop: Self
     Flags: Nexthop Change
     Localpref: 666
     AS path: [65000] 65001 I
     Communities: 65000:666 65001:666

 

juniper@R6> show route 2001:db8:6500:1::1/128 extensive

inet6.0: 14 destinations, 15 routes (14 active, 0 holddown, 0 hidden)
2001:db8:6500:1::1/128 (1 entry, 1 announced)
TSI:
KRT in-kernel 2001:db8:6500:1::1/128 -> {indirect(262146)}
Page 0 idx 0 Type 1 val 9355650
    Nexthop: ::ffff:192.168.255.10
    Localpref: 666
    AS path: [65000] 65001 I
    Communities: 65000:666 65001:666
    Cluster ID: 192.168.255.6
    Originator ID: 192.168.255.10
    Advertise: 0000002f
[...]

 

and R11 provides similar outputs.

 

Here you can see the described effect: there is another inherent next-hop self rewrite when exporting the route from the RIB towards labeled-unicast peers, because of the context change.

 

This leads to a situation where the advertisement is indeed propagated inside AS65000, but with the ASBRs as next hops instead of the discard prefix. These ASBRs blackhole traffic before it reaches AS65001:

 

juniper@R10> show pfe statistics traffic | match normal
    Normal discard             :                   40

 

Traffic is blackholed at the AS65000 downstream boundaries. But to mitigate the attack further upstream, additional machinery is needed.

 

Rewriting the labeled-unicast NLRI next hop to the discard prefix

 

The export policy chain in Junos OS easily allows rewriting the next hop (again!) when exporting a route from the RIB over iBGP or multihop eBGP sessions. This applies when using export policies for the iBGP inet6 labeled-unicast group/6PE mesh. A feasible option is therefore to add another next-hop rewrite policy (or reuse the same one from the eBGP import) in the iBGP export policy chain towards the Route Reflector servers, so that the discard prefix is correctly propagated as the NLRI next hop across AS65000:

 

juniper@R10> show configuration policy-options policy-statement RTBH-Next-Hop-rewrite
term RTBH-from-community {
    from {
        protocol bgp;
        community 65001:666;
    }
    then {
        tag 666;
        local-preference 666;
        next-hop 100::1;
    }
}

[edit]
juniper@R10# set protocols bgp group iBGP-inet export RTBH-Next-Hop-rewrite

 

and similarly at R11.

 

In this case, the next hop gets rewritten again by the policy:

 

juniper@R10> show route advertising-protocol bgp 192.168.255.6 extensive

inet6.0: 19 destinations, 28 routes (19 active, 0 holddown, 0 hidden)
[...]
* 2001:db8:6500:1::1/128 (1 entry, 1 announced)
BGP group iBGP-inet type Internal
     Route Label: 2
     Nexthop: 100::1
     Flags: Nexthop Change
     Localpref: 666
     AS path: [65000] 65001 I
     Communities: 65000:666 65001:666

 

and at the Route Reflector server, the rewritten next hop is received from both R10 and R11:

 

juniper@R6> show route receive-protocol bgp 192.168.255.10 table inet6.0

inet6.0: 14 destinations, 16 routes (14 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
[...]
  2001:db8:6500:1::/64
*                         ::ffff:192.168.255.10        100        65001 I
  2001:db8:6500:1::1/128
*                         100::1                       666        65001 I

juniper@R6> show route receive-protocol bgp 192.168.255.11 table inet6.0

inet6.0: 14 destinations, 16 routes (14 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
[...]
  2001:db8:6500:1::/64
*                         ::ffff:192.168.255.11        100        65001 I
  2001:db8:6500:1::1/128
*                         100::1                       666        65001 I

juniper@R6> show route 2001:db8:6500:1::1/128

inet6.0: 14 destinations, 15 routes (14 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:6500:1::1/128
                   *[BGP/170] 00:01:18, localpref 666, from 192.168.255.10
                      AS path: 65001 I
                     Discard
                    [BGP/170] 00:01:25, localpref 666, from 192.168.255.11
                      AS path: 65001 I
                     Discard

 

As per usual Route Reflector behavior, this next hop does not get rewritten when reflecting the best path, and the route is advertised as such to all clients:

 

juniper@R6> show route advertising-protocol bgp 192.168.255.1 table inet6.0 2001:db8:6500:1::1/128

inet6.0: 14 destinations, 15 routes (14 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
  2001:db8:6500:1::1/128
*                         100::1                       666        65001 I
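For completeness, the reflection behavior above assumes a standard cluster configuration at R6. A minimal sketch; the cluster ID comes from the earlier TSI output, while the group name is an assumption:

[edit protocols bgp group iBGP-inet]
type internal;
cluster 192.168.255.6;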

 

So the attack mitigation is enforced as close to the source as possible. At R1 in our case:

 

juniper@R1> show route 2001:db8:6500:1::1 extensive

inet6.0: 17 destinations, 25 routes (17 active, 0 holddown, 0 hidden)
2001:db8:6500:1::1/128 (2 entries, 1 announced)
TSI:
KRT in-kernel 2001:db8:6500:1::1/128 -> {indirect(262143)}
        *BGP    Preference: 170/-667
                Next hop type: Indirect
                Address: 0x9323658
                Next-hop reference count: 4
                Source: 192.168.255.6
                Next hop type: Discard
                Protocol next hop: 100::1
                Push 2
[...]

 


Other options to rewrite the next hop are equally viable, such as rewriting it with import policies at the Route Reflector servers or even directly at the clients. But keeping this confined to the ASBRs may prove more scalable, because it is a perimeter policy enforcement that can be leveraged with further tools and mechanisms. Mitigation at the service provider edge!

 

If you want to inspect the setup more closely or play around with it, the Junosphere topology files are attached to this post.


Conclusions

 

Junos OS implements an implicit next-hop self rewrite at 6PE<->native IPv6 boundaries. Depending on the network design and the specific services, applications or use cases, it is relevant to take this into account for correct next-hop settings.

 

In this particular case study, the intention was to illustrate this effect in IPv6 destination-based RTBH with the 6PE model. But there could be many other scenarios, such as the need to rewrite the inet6 labeled-unicast NLRI next hop to an address other than the primary FEC at the 6PE. In this RTBH use case, the situation may end up rewriting the next hop twice at the ASBRs, so as to mitigate traffic both locally (import policy) and remotely upstream (export policy), as summarized below.
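Putting the two rewrites together, the ASBR side boils down to something like the following sketch (group names follow the lab convention; the exact policy chains are an assumption):

[edit protocols bgp]
group eBGP-inet {
    # rewrite to the discard prefix on ingress: mitigate locally
    import RTBH-Next-Hop-rewrite;
}
group iBGP-inet {
    # rewrite again on egress towards the 6PE mesh: propagate upstream
    export RTBH-Next-Hop-rewrite;
}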

 

At the end of the day, this effect is not a question of IPv6. It is a question of dealing with unicast and labeled-unicast routes, and of setting the appropriate next hop for the correct routing lookup context!

 

Please play around with the topology, modify the configuration, and feel free to post comments/questions/critiques either here or via @go_nzo

 
