SRX

Ask questions and share experiences about the SRX Series, vSRX, and cSRX.
  • 1.  Cluster Failover, sort of working

    Posted 06-24-2011 14:32

    I have two SRX240s and two switches in a stack.

    SRX1 and SRX2 are connected via crossover cables for the data and control links.

    SRX1 has LAN, DMZ, and WAN interfaces all going to Switch1.
    SRX2 has LAN, DMZ, and WAN interfaces all going to Switch2.
    Switch1 and Switch2 are a stack, so logically one switch with the redundancy of having two.

    If I remove power from an SRX, failover occurs.
    If SRX2 is active and I remove power from Switch2, no failover occurs even though SRX2 lost link.

    What am I missing? I thought it would fail over because it lost link on the other reths... They are all part of the same redundancy group.

    chassis {
        cluster {
            reth-count 3;
            redundancy-group 0 {
                node 0 priority 100;
                node 1 priority 1;
            }
            redundancy-group 1 {
                node 0 priority 100;
                node 1 priority 1;
            }
        }
    }
    interfaces {
        ge-0/0/4 {
            gigether-options {
                redundant-parent reth0;
            }
        }
        ge-0/0/5 {
            gigether-options {
                redundant-parent reth1;
            }
        }
        ge-0/0/6 {
            gigether-options {
                redundant-parent reth2;
            }
        }
        ge-5/0/4 {
            gigether-options {
                redundant-parent reth0;
            }
        }
        ge-5/0/5 {
            gigether-options {
                redundant-parent reth1;
            }
        }
        ge-5/0/6 {
            gigether-options {
                redundant-parent reth2;
            }
        }
        fab0 {
            fabric-options {
                member-interfaces {
                    ge-0/0/2;
                }
            }
        }
        fab1 {
            fabric-options {
                member-interfaces {
                    ge-5/0/2;
                }
            }
        }
        reth0 {
            redundant-ether-options {
                redundancy-group 1;
            }
            unit 0 {
                description "Public Untrust";
                family inet {
                    address 1.1.1.1/30;
                }
            }
        }
        reth1 {
            redundant-ether-options {
                redundancy-group 1;
            }
            unit 0 {
                description "Private Trust";
                family inet {
                    address 3.3.3.3/24;
                }
            }
        }
        reth2 {
            redundant-ether-options {
                redundancy-group 1;
            }
            unit 0 {
                description "Public DMZ";
                family inet {
                    address 2.2.2.2/24;
                }
            }
        }
    }


    Thanks!

    Mark

  • 2.  RE: Cluster Failover, sort of working
    Best Answer

    Posted 06-24-2011 16:24

    Hi Mark,


    There are a number of ways you can do this - you need to decide on the conditions under which you want interfaces to fail over.

    In your current setup, you have all interfaces in redundancy group 1. If you add the following lines to your config:


    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/6 weight 255

    it will mean that when any one of the three interfaces goes down, the whole redundancy group - all three reths - will fail over to node 1.
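
    Once that's committed, you can sanity-check things with the standard status commands (the second one lists the monitored interfaces and their weights):

    show chassis cluster status
    show chassis cluster interfaces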


    You could also configure three individual redundancy groups (1, 2, and 3) and place one reth in each:

    set interfaces reth0 redundant-ether-options redundancy-group 1
    set interfaces reth1 redundant-ether-options redundancy-group 2
    set interfaces reth2 redundant-ether-options redundancy-group 3

    then with interface monitoring configured like this:

    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
    set chassis cluster redundancy-group 2 interface-monitor ge-0/0/5 weight 255
    set chassis cluster redundancy-group 3 interface-monitor ge-0/0/6 weight 255

     only the reth that lost an interface would fail over to node 1.
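
    Note that for this second option to commit, redundancy groups 2 and 3 would also have to be defined under chassis cluster with node priorities, mirroring your existing group 1 - roughly:

    set chassis cluster redundancy-group 2 node 0 priority 100
    set chassis cluster redundancy-group 2 node 1 priority 1
    set chassis cluster redundancy-group 3 node 0 priority 100
    set chassis cluster redundancy-group 3 node 1 priority 1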


    I also like to turn on preempt, so that after any failure condition has been rectified, the whole system goes back to a known topology:

    set chassis cluster redundancy-group 1 preempt
    set chassis cluster redundancy-group 2 preempt
    set chassis cluster redundancy-group 3 preempt
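
    One caveat with preempt: a flapping monitored link can bounce the group back and forth between nodes. If your Junos release supports it, a hold-down interval on the redundancy group dampens that - for example:

    set chassis cluster redundancy-group 1 hold-down-interval 300

    which enforces a minimum of 300 seconds between back-to-back failovers of that group.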

     Hope this helps!




  • 3.  RE: Cluster Failover, sort of working

    Posted 07-15-2011 12:48

    Thanks!


    I went with


    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/6 weight 255
    set chassis cluster redundancy-group 1 preempt


    I assume I don't do anything for redundancy-group 0? When I unplug a NIC, that group does not move - that's OK though, right?


    Cluster ID: 1
    Node                  Priority          Status    Preempt  Manual failover

    Redundancy group: 0 , Failover count: 1
        node0                   100         primary        no       no
        node1                   1           secondary      no       no

    Redundancy group: 1 , Failover count: 3
        node0                   100         primary        yes      no
        node1                   1           secondary      yes      no



  • 4.  RE: Cluster Failover, sort of working

    Posted 07-17-2011 23:35

    That's correct - redundancy-group 0 is for the routing engine (RE); if your RE died, that group would fail over.

    You can test failovers (including RG0) with this command:
    request chassis cluster failover redundancy-group <group-number> node <node-number>

    Just remember to reset the manual failover afterwards:
    request chassis cluster failover reset redundancy-group <group-number>
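
    For example, using the node and group numbers from this thread, forcing redundancy group 1 over to node 1 and then clearing the manual failover flag would look like:

    request chassis cluster failover redundancy-group 1 node 1
    request chassis cluster failover reset redundancy-group 1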