12-14-2011 04:31 AM
I need to build an SRX240 cluster running Junos 10.4R4.
Every document I found about this says step 1 is to enable clustering with: set chassis cluster cluster-id <id> node <node> reboot
After the firewalls reboot, I can't see "primary" or "secondary"; both nodes are stuck in hold mode. They can't see each other, and both the control link and the fabric link are down.
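For reference, this is the step the docs describe, run from operational mode on each box (assuming cluster-id 1; the cluster-id just has to match on both nodes):

```
On node 0:  root> set chassis cluster cluster-id 1 node 0 reboot
On node 1:  root> set chassis cluster cluster-id 1 node 1 reboot
```

Note this is an operational-mode command, not a configuration-mode one; both nodes reboot into cluster mode afterwards.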
So I tried to delete the Ethernet switching configuration using the commands from a Juniper technical doc, but the commit fails:
[edit security zones security-zone trust]
Interface vlan.0 must be configured under interfaces
error: Interface <ge-0/0/10.0> vlan member <vlan-trust> undefined
error: configuration check-out failed
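Those errors mean vlan.0 and vlan-trust are still referenced elsewhere in the config, so the commit check fails. A sketch of the deletes that are typically needed on a branch SRX factory config, removing the references before the definitions (interface names taken from the error output above; your config may reference more ports):

```
[edit]
root# delete security zones security-zone trust interfaces vlan.0
root# delete interfaces ge-0/0/10 unit 0 family ethernet-switching
root# delete interfaces vlan
root# delete vlans vlan-trust
root# commit
```

The order matters: Junos refuses to commit while a deleted VLAN or vlan interface is still referenced by a zone or a switching port.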
I tried all the troubleshooting steps for the control link and fabric link being down.
When I run "show chassis cluster status" I get:
root> show chassis cluster status
Cluster ID: 1
Node                 Priority   Status    Preempt  Manual failover

Redundancy group: 0, Failover count: 0
    node0            1          hold      no       no
    node1            0          lost      n/a      n/a
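In this state, two more operational commands help narrow down whether the control and fabric links are physically up and whether heartbeats are being exchanged:

```
root> show chassis cluster interfaces
root> show chassis cluster statistics
```

The first shows the state of the control and fabric links; the second shows heartbeat counters, which stay at zero when the nodes truly cannot see each other.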
My cabling is exactly as described in the Juniper technical doc...
I really don't know why this first step isn't working...
Can somebody please help me with this problem?
Thank you in advance.
02-05-2012 09:38 AM - edited 02-05-2012 09:38 AM
As you know, there are 3 ports used for HA, and 2 of them must be specific ports (fxp0 and the control port).
Make sure those 2 specific ports do not have any configuration associated with them (zones, etc.).
02-05-2012 10:15 AM
For HA configuration, check the following KB.
It contains configuration steps as well as troubleshooting steps; I hope it will be useful for you.
Also, if you have any device like an L2 switch between the two boxes, try connecting them directly, as it sometimes causes problems.
JNCIE-M/T # 1059, CCNP & CCIP
02-06-2012 04:39 PM
When issues pop up with starting clusters, I've found it best to start the cluster from a blank configuration.
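A sketch of that approach from the console, assuming you are willing to wipe the box: delete the whole configuration, set the mandatory root password so the empty config commits, then enable clustering:

```
[edit]
root# delete
Delete everything under this level? [yes,no] yes
root# set system root-authentication plain-text-password
root# commit
root# exit
root> set chassis cluster cluster-id 1 node 0 reboot
```

With nothing left referencing VLANs, zones, or the ports that get remapped, the cluster usually forms cleanly after the reboot.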