SRX

Ask questions and share experiences about the SRX Series, vSRX, and cSRX.
  • 1.  Primary node should be master after failover

    Posted 03-07-2017 10:24

    Hi,

     

    We have two Juniper SRX340 firewalls running Junos 15.1X49-D75.5.

     

    reth0 and reth1 are assigned to the untrust zone (ISP), and reth2 is assigned to the trust zone (internal LAN).

     

    Below are the two issues observed.

     

    1) The secondary node's interfaces (reth0, reth1 and reth2) only become active when the primary node is completely down.

        We are expecting that if reth0 goes down on the primary node, reth0 on the secondary node should start responding.

     

    2) If the primary node goes down and the secondary node takes over, the primary node should take charge again once it comes back up; however, its status is shown as "secondary" (the primary node has the highest priority).

     

    Please find the configuration below and suggest a solution.

     

    set groups node0 system host-name Primary
    set groups node0 interfaces fxp0 unit 0 family inet address 192.168.1.2/32
    set groups node1 system host-name Secondary
    set groups node1 interfaces fxp0 unit 0 family inet address 192.168.1.3/32
    set apply-groups "${node}"

    set chassis cluster reth-count 3
    set chassis cluster redundancy-group 0 node 0 priority 200
    set chassis cluster redundancy-group 0 node 1 priority 100
    set chassis cluster redundancy-group 1 node 0 priority 200
    set chassis cluster redundancy-group 1 node 1 priority 100

    set chassis cluster redundancy-group 1 preempt

    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-5/0/2 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-5/0/3 weight 255
    set chassis cluster redundancy-group 1 interface-monitor ge-5/0/4 weight 255

    set security policies from-zone trust to-zone untrust policy trust-to-untrust match source-address any
    set security policies from-zone trust to-zone untrust policy trust-to-untrust match destination-address any
    set security policies from-zone trust to-zone untrust policy trust-to-untrust match application any
    set security policies from-zone trust to-zone untrust policy trust-to-untrust then permit
    set security policies from-zone untrust to-zone trust policy untrust-to-trust match source-address any
    set security policies from-zone untrust to-zone trust policy untrust-to-trust match destination-address any
    set security policies from-zone untrust to-zone trust policy untrust-to-trust match application any
    set security policies from-zone untrust to-zone trust policy untrust-to-trust then permit
    set security zones security-zone trust host-inbound-traffic system-services all
    set security zones security-zone trust host-inbound-traffic protocols all
    set security zones security-zone trust interfaces reth2.0
    set security zones security-zone untrust host-inbound-traffic system-services all
    set security zones security-zone untrust host-inbound-traffic protocols all
    set security zones security-zone untrust interfaces reth0.0
    set security zones security-zone untrust interfaces reth1.0

    set interfaces fab0 fabric-options member-interfaces ge-0/0/0
    set interfaces fab1 fabric-options member-interfaces ge-5/0/0

     

    Cluster status

     

    root@Primary> show chassis cluster status
    Monitor Failure codes:
        CS  Cold Sync monitoring        FL  Fabric Connection monitoring
        GR  GRES monitoring             HW  Hardware monitoring
        IF  Interface monitoring        IP  IP monitoring
        LB  Loopback monitoring         MB  Mbuf monitoring
        NH  Nexthop monitoring          NP  NPC monitoring
        SP  SPU monitoring              SM  Schedule monitoring
        CF  Config Sync monitoring

    Cluster ID: 1
    Node   Priority Status         Preempt Manual   Monitor-failures

    Redundancy group: 0 , Failover count: 1
    node0  200      primary        no      no       None
    node1  100      secondary      no      no       None

    Redundancy group: 1 , Failover count: 1
    node0  0        primary        yes     no       IF
    node1  0        secondary      yes     no       IF

     

    Thank you....

     

     



  • 2.  RE: Primary node should be master after failover
    Best Answer

     
    Posted 03-07-2017 13:10

    Hello, 

     

    As per the RG1 status you've shared, it is clear that both node0 and node1 have at least one monitored interface down (priority = 0 and the IF failure code), which means RG1 will never fail over from node0 to node1.

     

     

    Redundancy group: 1 , Failover count: 1
    node0  0        primary        yes     no       IF
    node1  0        secondary      yes     no       IF
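
    For reference, once the failed monitored interfaces are restored on both nodes, the priorities should return to their configured values and preempt can take effect. This is illustrative only, based on the priorities in your configuration:

    Redundancy group: 1 , Failover count: 1
    node0  200      primary        yes     no       None
    node1  100      secondary      yes     no       None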

     

    Regards



  • 3.  RE: Primary node should be master after failover

    Posted 03-07-2017 18:04

    Hi,

     

    There are 3 interfaces monitored on each of node0 and node1.



  • 4.  RE: Primary node should be master after failover

     
    Posted 03-08-2017 00:01

    I believe at least one interface on each of node 0 and node 1 is down. Can you run "show chassis cluster interfaces" and confirm?
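
    In that output, the Interface Monitoring section lists each monitored interface with its weight, status and redundancy group, so a failed link is easy to spot. A rough sketch of that section (exact format varies by release; interface names are taken from your config and the Down entry is only an example):

    Interface Monitoring:
        Interface         Weight    Status    Redundancy-group
        ge-0/0/4          255       Down      1
        ge-0/0/3          255       Up        1
        ge-0/0/2          255       Up        1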

     

    Since you have given a monitor weight of 255 to each interface, when one of the interfaces on Node 0 fails (ge-0/0/2, ge-0/0/3 or ge-0/0/4) it will trigger a failover to Node 1.

     

    But we need to make sure all monitored interfaces on Node 1 (ge-5/0/2, ge-5/0/3 and ge-5/0/4) are up.

     

    If you don't want a dependency between reth0, reth1 and reth2, I would suggest you put them in different redundancy groups, as sketched below.
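
    A rough sketch of what that could look like, assuming ge-0/0/4 (node0) and ge-5/0/4 (node1) are reth2's member links and reth2 is moved to a new RG2; adjust the interface-to-reth mapping to match your actual cabling:

    ## assumption: ge-0/0/4 and ge-5/0/4 are the member links of reth2 (trust)
    delete chassis cluster redundancy-group 1 interface-monitor ge-0/0/4
    delete chassis cluster redundancy-group 1 interface-monitor ge-5/0/4
    set chassis cluster redundancy-group 2 node 0 priority 200
    set chassis cluster redundancy-group 2 node 1 priority 100
    set chassis cluster redundancy-group 2 preempt
    set chassis cluster redundancy-group 2 interface-monitor ge-0/0/4 weight 255
    set chassis cluster redundancy-group 2 interface-monitor ge-5/0/4 weight 255
    set interfaces reth2 redundant-ether-options redundancy-group 2

    With that in place, a monitored-link failure on reth2 only moves RG2, while RG1 (reth0/reth1) stays where it is.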