06-04-2010 03:24 AM
I have a problem with an SRX 3600. We configured an SRX cluster in an active/passive setup, but node 0 shows as disabled in the output of the show chassis cluster status command:
admin@FW2> show chassis cluster status
Cluster ID: 1
Node                  Priority     Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 3
    node0                  200     disabled       no       no
    node1                  100     primary        no       no

Redundancy group: 1 , Failover count: 17
    node0                    0     disabled      yes       no
    node1                  100     primary       yes       no
And when I display the chassis alarms, this is what I get on node 0:
1 alarms currently active
Alarm time Class Description
2010-06-04 09:26:02 UTC Minor Check FPC 0 Fabric Chip
Could you help me with this problem? What should I do?
06-04-2010 06:10 AM - edited 06-04-2010 06:24 AM
If the fabric link flapped, that will cause the passive node to go into the disabled state.
To clear the disabled state, you should reboot that node.
To find the exact cause of the disabled state, run the command below and paste its output:
show chassis cluster information no-forwarding
06-04-2010 06:57 AM
I changed the HA control link earlier (we suspected it was the problem),
but the issue remained the same.
The output of the command
show chassis cluster information no-forwarding is attached here.
Thanks for the reply.
06-04-2010 08:00 AM - edited 06-04-2010 08:02 AM
Have you rebooted the node that is in the disabled state? That should bring it back to its original state.
The only way out of the disabled state is a reboot.
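For reference, a minimal sketch of that reboot, assuming you have console or management access to the disabled node itself (the hostname FW1 is illustrative; the reboot must be issued on node 0, not on the primary):

```
{secondary:node0}
admin@FW1> request system reboot
```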
06-04-2010 08:11 AM
Yes, I rebooted it, but node 0 went back into the disabled state. Can I ask what the problem is? What did the command show chassis cluster information no-forwarding show?
Should I reboot it every time?
06-04-2010 08:46 AM - edited 06-04-2010 08:56 AM
In the text file I was looking for something related to the disabled state,
but all I could find was monitored interface status changing from up to down (which is not related to this issue).
When a node is in the disabled state, you should reboot it. The question now is: why does the node go into the disabled state in the first place?
Can you paste the output of :
show log jsrpd
That may give valuable information
06-04-2010 09:26 AM
You can find the following at the end of the file:
Jun 4 12:17:16 Successfully sent an snmp-trap due to a failover from secondary to disabled on RG-0 on cluster 1 node 0. Reason: fabric-link-failure
Jun 4 12:17:16 Successfully sent an snmp-trap due to a failover from secondary to disabled on RG-1 on cluster 1 node 0. Reason: fabric-link-failure
Please check your fabric links
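A few standard Junos operational commands that can help verify fabric-link health (run on each node; the hostname is illustrative):

```
admin@FW2> show chassis cluster interfaces
admin@FW2> show interfaces terse | match fab
admin@FW2> show interfaces ge-0/0/7 extensive | match error
```

The first shows the control and fabric link status as the cluster sees them; the last can reveal physical-layer errors (CRC, framing) that would make the fabric link flap even when it shows as up.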
06-04-2010 09:48 AM
Yes, I can see it is due to fabric-link-failure. I will check the link, but we already replaced it earlier. Could it be a problem in the fabric configuration of the node?
I chose ge-0/0/7 as the data fabric port, and this is the configuration I used:
Data Fabric Configuration
set interfaces fab0 fabric-options member-interfaces ge-0/0/7
set interfaces fab1 fabric-options member-interfaces ge-13/0/7
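For completeness, this is the configuration-mode view those two set commands should produce once committed (a sketch of the expected hierarchy, shown for fab0; fab1 is analogous with ge-13/0/7):

```
admin@FW2> show configuration interfaces fab0
fabric-options {
    member-interfaces {
        ge-0/0/7;
    }
}
```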
06-05-2010 06:18 PM
I have a 3600 pair with that exact port for fabric, no problems here.
Interface        Admin  Link  Proto  Local            Remote
ge-0/0/7         up     up
ge-0/0/7.0       up     up    aenet  --> fab0.0
ge-13/0/7        up     up
ge-13/0/7.0      up     up    aenet  --> fab1.0
06-09-2011 08:33 AM
fadesa, which Junos version are you using?
I had a similar issue on my SRX. I tried everything, and it was finally resolved by upgrading to Junos 10.1R3.7.
Also look at the "Check FPC 0 Fabric Chip" alarm; it might be due to a malfunctioning SFB (Switch Fabric Board).
So I recommend you upgrade the firmware and see if both issues are resolved; otherwise, contact JTAC.
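A rough sketch of the per-node upgrade, assuming the Junos package has already been copied to /var/tmp on each node (the package filename below is illustrative only; use the exact image for your platform and, in a cluster, upgrade and reboot each node):

```
admin@FW2> request system software add /var/tmp/junos-srx3000-10.1R3.7-domestic.tgz no-copy
admin@FW2> request system reboot
```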
JNCIE-SEC, JNCIP-SEC, JNCIS-SEC, JNCIS-FWV
JNCIS-SP, JNCIS-SA, JNCIA-JUNOS
IBM Qradar Deployment Professional
[Please mark it as Accepted Solution if it works, Kudos if you like]