SRX Services Gateway
Contributor
Posts: 16
Registered: ‎07-23-2010

HA cluster management

Hi All,

 

Can I get some feedback as to how you guys manage an HA cluster?

 

I have been trying to work out an easy way for us to manage clusters without it being terribly complex.

Specifically, being able to manage each node individually. 99% of the time this isn't actually needed, but when doing upgrades and such it's required.

 

My experience so far:

 

OK, so when you cluster the units it creates fxp0 and fxp1, which you can configure via groups.

Yes, this works, but it seems to be purely OOB management. The interface IPs and routes via this interface are installed into the global routing table, which means you can't also route traffic through the devices from your management network.
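(For reference, this is roughly the groups-based fxp0 setup I mean - the addressing and the backup-router prefix below are just placeholders, not my real values:)

set groups node0 system host-name fw-02-01
set groups node0 interfaces fxp0 unit 0 family inet address 192.0.2.11/24
set groups node0 system backup-router 192.0.2.1 destination 10.0.0.0/8
set groups node1 system host-name fw-02-02
set groups node1 interfaces fxp0 unit 0 family inet address 192.0.2.12/24
set groups node1 system backup-router 192.0.2.1 destination 10.0.0.0/8
set apply-groups "${node}"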

 

So, routing-instances.

Can't put the fxp interfaces in a new routing instance.

 

OK, so I'm willing to burn two ports in the name of being able to manage each node.

Interfaces ge-6/0/15 and ge-15/0/15 were selected for this task.

Can't put IP addresses on them directly under [edit interfaces], as this drops the secondary out of its HA state.

I can put the IP addressing on the interfaces via groups; however, for some reason this only works under node0.

Getting somewhere with this; however, when creating a security zone to put the interfaces in and applying services, the commit won't be accepted because ge-15/0/15 isn't configured under [edit interfaces].

 

So I'd really like someone to point out where I'm going wrong, or whether I'm just going about this the wrong way.

 

Configs below

 

 

Cheers & many thanks for any input.

 

 

 

node0 {
    system {
        host-name fw-02-01;
    }
    interfaces {
        ge-6/0/15 {
            unit 0 {
                family inet {
                    address 172.20.187.253/26;
                }
            }
        }
        ge-15/0/15 {
            unit 0 {
                family inet {
                    address 172.20.187.252/26;
                }
            }
        }
    }
}
node1 {
    system {
        host-name fw-02-02;
    }
}

 

zones {
    functional-zone management {
        interfaces {                    
            ge-6/0/15.0;
            ge-15/0/15.0;
        }
    }
}

 

 

Contributor
Posts: 24
Registered: ‎01-30-2008

Re: HA cluster management

You are not alone in your struggles!  Such a task can get nasty pretty fast.  Right now, the way I do this at sites that do not have out-of-band management is to SSH to the cluster, which puts you on the primary (in my case node 0), and then jump over to the secondary as follows:

 

(I believe this command arrived in 10.1 - if you're not at 10.1 or higher, there is no hope!)

 

root@SRX-Node0> request routing-engine login node 1

 

That puts you on node 1, and off you go...

 

The next issue, of course, is upgrading.  If you are local, bust out the USB stick, mount it, copy the files off, etc.  If you're remote, or just want to see how to copy stuff around, see this post from Thorsten:

 

http://forums.juniper.net/t5/SRX-Services-Gateway/Minimum-effort-SRX-Cluster-upgrade-procedure/m-p/5...
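If I remember the gist of it, when you're remote you copy the image across the cluster's internal connection and install it on each node, roughly like this (the filename is only an example - follow Thorsten's post for the exact procedure and reboot steps):

root@SRX-Node0> file copy /var/tmp/junos-srxsme-10.4R3.4-domestic.tgz node1:/var/tmp/
root@SRX-Node0> request system software add /var/tmp/junos-srxsme-10.4R3.4-domestic.tgz no-copy
root@SRX-Node0> request routing-engine login node 1
root@SRX-Node1> request system software add /var/tmp/junos-srxsme-10.4R3.4-domestic.tgz no-copy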

 

-Gerry

Distinguished Expert
Posts: 979
Registered: ‎09-10-2009

Re: HA cluster management

SRX Cluster management.  Fun, ain't it?

 

Personally, I would love to meet the person or persons responsible for the design of how these devices operate for management, and introduce those parties to what I refer to as the "brick of education."

 

As far back as the pre-Juniper days of Netscreen, I have always had issue with the way their so-called "out-of-band management" ports operate.  Seems Juniper took a bad idea and kept it alive and kicking.

 


RichardF wrote:

Yes, this works, but it seems to be purely OOB management. The interface IPs and routes via this interface are installed into the global routing table, which means you can't also route traffic through the devices from your management network.


You hit it on the head here.  There is a bit of dissension about what OOB means.  When a network device puts the routes for its supposedly out-of-band network management interface into the system's master routing table, that does NOT fit my definition of "out-of-band," because as you said, the system will then start to route transit traffic to/from your management network through that interface! WRONG, Juniper! WRONG! If the device installs routes through that interface into its primary routing table, that is by definition *IN BAND*.

 

Juniper's answer is that we're supposed to create an entirely separate network for the sole purposes of managing our Juniper devices.  Yeah, that's great if you have a small, simple network.  What are customers with large, diverse, and complicated networks supposed to do? That's not exactly a trivial task.

 

Your configuration examples of putting interfaces into your node0 and node1 groups aren't going to work, as you've mostly discovered. The thing to remember about how the SRXs do clustering is that two devices logically become one device. That is the biggest hurdle to get past when trying to wrap your head around how these things operate.

You can think of an SRX cluster as you would a single switch, maybe a Cisco 6500, for example, that has 2 supervisor cards. One card is active in the chassis, the other is standby. If the active card fails, the standby card picks up. Both cards live in the same switch and service the same ports. Your two SRX devices become one device with two "supervisor cards" (Routing Engines). Think of node1 as just extra ports, not as an individual device.

The RE in node0 is active (usually), and the RE in node1 basically shuts down and goes to sleep until it's needed. That turns node1 into a dumb box of network interfaces, for all intents and purposes, and is why the interfaces you configure under node1 don't work. If you fail the routing engine over to node1, then those interfaces on that node would start working and the ones on node0 would stop working.

Juniper's way around this is with the blasted fxp0 interfaces. If you have the luxury of building an isolated network segment for your SRX management, then you're in good shape to manage those devices. For the rest of us in the real world, though, it turns into a hassle.

 

Depending on what your needs are, with Junos 10.1R2 (I believe) and newer you can use "Virtual Chassis" mode for managing the SRX cluster.  It brings a little bit of sanity to the way they operate, but I would say it's still got a long way to go.  I honestly don't know if VC mode solves the issue of being able to download IDP or other software updates to the node1 box, or if it even solves the issue of node1 not being able to sync its clock with your NTP servers (another glaring oversight in my opinion...). Perhaps someone who's used VC mode can chime in on what it solves and what it doesn't solve.

 

Here is a KB that describes VC mode:  http://kb.juniper.net/InfoCenter/index?page=content&id=KB18228

 

Good luck. :-)

 

-kr


---
If this solves your problem, please mark this post as "Accepted Solution."
Kudos are always appreciated.
Distinguished Expert
Posts: 826
Registered: ‎05-04-2008

Re: HA cluster management

Hi,

 

I have my SRX240 in VC mode, managed via NSM.  I also have an SRX210 VC.  Both are able to download IDP updates without issue.  The thing I haven't tested yet is whether the updates are making it over to the secondary.  In my opinion, VC is the only way to go, especially when factoring in a remote NSM.  The "backup-router" approach is buggy across all versions, and JTAC was unable to resolve my issues.  I ran packet captures and found that the management traffic leaving the fxp0 interfaces for the NSM was hit and miss.

 

John

John Judge
JNCIS-SEC, JNCIS-ENT,

If this solves your problem, please mark this post as "Accepted Solution". Kudos are appreciated.
Distinguished Expert
Posts: 2,400
Registered: ‎01-29-2008

Re: HA cluster management

Hey John - IDP updates DO NOT propagate from primary to secondary. You must update manually. I had a JTAC case on this one and got validation on this ever so sad fact...

Kevin Barker
JNCIP-SEC
JNCIS-ENT, FWV, SSL, WLAN
JNCIA-ER, EX, IDP, UAC, WX
Juniper Networks Certified Instructor
Juniper Networks Ambassador

Juniper Elite Reseller
J-Partner Service Specialist - Implementation

If this worked for you please flag my post as an "Accepted Solution" so others can benefit. A kudo would be cool if you think I earned it.
Distinguished Expert
Posts: 979
Registered: ‎09-10-2009

Re: HA cluster management

 


muttbarker wrote:

Hey John - IDP updates DO NOT propagate from primary to secondary. You must update manually. I had a JTAC case on this one and got validation on this ever so sad fact...


 

But if it's in VC mode, can the secondary at least reach out to the internet to download the updates?  That's always been the primary problem with the "old" chassis cluster mode: the secondary could only reach out via its fxp0, and it's no mystery what a nightmare that can be to manage.

-kr


---
If this solves your problem, please mark this post as "Accepted Solution."
Kudos are always appreciated.
Distinguished Expert
Posts: 2,400
Registered: ‎01-29-2008

Re: HA cluster management

Nope - you need to FTP the updates from the primary to the secondary. A BSD script could be written, I guess. It is sadly another example of just how these boxes lack functionality in the production world.

 

Kevin Barker
JNCIP-SEC
JNCIS-ENT, FWV, SSL, WLAN
JNCIA-ER, EX, IDP, UAC, WX
Juniper Networks Certified Instructor
Juniper Networks Ambassador

Juniper Elite Reseller
J-Partner Service Specialist - Implementation

If this worked for you please flag my post as an "Accepted Solution" so others can benefit. A kudo would be cool if you think I earned it.
Visitor
Posts: 7
Registered: ‎05-24-2008

Re: HA cluster management

Hello,

There's also something disappointing about cluster management in the real world: when one cluster member becomes disabled (after a fabric link failure, for example), the fxp0 interface is down (physically up, but it does not respond to requests). The only way to manage the disabled box (to issue a reboot) is to log in via the serial console, or to use the "request routing-engine login node 1" trick, though I discovered it does not work in all cases.

 

I can also confirm that using fxp0 routing is a nightmare in real-world situations (different behaviours on primary/secondary with egress or ingress packets, managing route priority between the backup-router routing table and the inet.0 routing table, etc.).

 

I think real and robust out-of-band management should be a priority on the product roadmap (look at the NetScreen MGT interface, which can be considered the beginning of a decent management interface). The lack of such a feature is a design flaw...

Super Contributor
Posts: 222
Registered: ‎12-16-2008

Re: HA cluster management

 


muttbarker wrote:

Nope - you need to FTP the updates from the primary to the secondary. A BSD script could be written, I guess. It is sadly another example of just how these boxes lack functionality in the production world.

 


An event script already exists. No idea who wrote it; maybe JTAC has a copy. Lots of people have written scripts to work around SRX/NSM limitations (even I wrote a few very basic ones), but unfortunately most of them aren't public. I'm still looking for one that will keep RG0 on the same node as RG1, for all the features that are not supported in A/A. :-)

 

 

SRX cluster management has always been a bit of a challenge. It's doable, but I have had to use some ugly tricks at times. The backup-router behaviour in particular is difficult for most people to understand (or just poorly documented) - it's not an entirely separate table.

Oh well, as soon as we can terminate VPN connections in non-default routing instances, we can use the old NetScreen trick of dedicating inet.0 to management and using a routing instance for all transit traffic.

Distinguished Expert
Posts: 805
Registered: ‎04-17-2008

Re: HA cluster management

Hi All,

 

I've been watching this thread with interest, as I too have struggled with a way around the limitations of chassis-cluster and the fxp0 silliness.

 

What I have deployed in the past is as follows:

 

Connect the fxp0 interface on each SRX to redundant switches which have a management VLAN stretched between them. Next configure a Management security zone on the firewall, and put an interface from this zone into the same management VLAN on that switch - essentially you're connecting all SRX fxp0 and management ge interfaces to each other in the same VLAN.

 

Because of the JUNOS stupidity with the fxp0 interface being present in the global table, but still out-of-band, you need to place all "Security" interfaces in their own virtual router.  This separates the fxp0 routes from the VR table (which is now used essentially as the new "global" table).  It would be nice to do it the other way around, but you can't place fxp0 in a VR.

 

When configuring the backup-router command under your node groups, point it to the IP address on the Management security zone interface and add appropriate security policies to allow traffic to/from this zone, referencing the fxp0 IP address as if it were a host.  A word of caution here - when using backup-router, I've had all sorts of issues specifying 0.0.0.0/0 as the destination route, so make it the specific prefix(es) of your management network and you shouldn't have (as many) issues.
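To make that a bit more concrete, here's a rough sketch of the pieces (interface names, zone name, VR name and addressing are all placeholders, and it's untested as pasted - adjust to your environment):

# All transit/security interfaces live in their own virtual router, including the Management zone interface
set routing-instances PROD-VR instance-type virtual-router
set routing-instances PROD-VR interface ge-0/0/5.0
set interfaces ge-0/0/5 unit 0 family inet address 192.168.100.1/24
set security zones security-zone Management interfaces ge-0/0/5.0 host-inbound-traffic system-services ssh
set security zones security-zone Management interfaces ge-0/0/5.0 host-inbound-traffic system-services ping

# fxp0 on each node sits in the same management VLAN, and backup-router points at the
# Management zone interface address with specific management prefixes (not 0.0.0.0/0)
set groups node0 interfaces fxp0 unit 0 family inet address 192.168.100.11/24
set groups node1 interfaces fxp0 unit 0 family inet address 192.168.100.12/24
set groups node0 system backup-router 192.168.100.1 destination 10.50.0.0/16
set groups node1 system backup-router 192.168.100.1 destination 10.50.0.0/16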

 

If you are doing IPSEC on your SRX, the above topology may be problematic, as IKE can't be negotiated over interfaces inside a virtual router yet (coming in either 10.4 or 11.1 I believe).

 

As someone mentioned above, IDP updates are also problematic with HA clusters.

 

From JUNOS 10.1r2 and onwards, there is a hidden command:

 

 

set chassis cluster network-management cluster-master

This allows in-band management via any interface, and the chassis cluster appears/behaves more like an EX Virtual Chassis in NSM (e.g. 2x REs).  Again, there are caveats around deploying this way, but it might help out some of you who are struggling with fxp0 issues.

 

Good luck!

 

Ben

 

 

 

 

Ben Dale
JNCIP-ENT, JNCIS-SP, JNCIE-SEC #63
Juniper Ambassador
Follow me @labelswitcher
Super Contributor
Posts: 239
Registered: ‎11-06-2007

Re: HA cluster management

Not sure if this helps, but if you do want fxp0 and one of your traffic ports to be in the same subnet, you can do this; it requires putting your traffic ports in a non-default routing-instance and leaving fxp0 in the inet.0 routing instance.  It also requires importing routes between inet.0 and the custom routing-instance, but it should work.  We've had some issues with this when NSM and STRM are in the picture, and that's how we got around it.
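Roughly, the import piece looks like this (instance and policy names here are just examples - tighten the policies so you only leak the routes you actually need):

set routing-instances DATA-VR instance-type virtual-router
set routing-instances DATA-VR interface ge-0/0/1.0

# leak routes from inet.0 (where fxp0 lives) into the custom instance
set policy-options policy-statement LEAK-FROM-INET0 from instance master
set policy-options policy-statement LEAK-FROM-INET0 then accept
set routing-instances DATA-VR routing-options instance-import LEAK-FROM-INET0

# and leak the instance routes back into inet.0
set policy-options policy-statement LEAK-FROM-DATA-VR from instance DATA-VR
set policy-options policy-statement LEAK-FROM-DATA-VR then accept
set routing-options instance-import LEAK-FROM-DATA-VR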

Distinguished Expert
Posts: 979
Registered: ‎09-10-2009

Re: HA cluster management

 


motd wrote:

Oh well, as soon as we can terminate VPN connections in non-default routing instances, we can use the old NetScreen trick of dedicating inet.0 to management and using a routing instance for all transit traffic.


 

Don't forget that we also cannot use DHCP relay services (forwarding-options -> helpers -> bootp) for any interfaces that do not belong to the inet.0 routing instance, and we also cannot obtain interface addresses / default routes via DHCP for interfaces that do not belong to the inet.0 routing instance.  There are probably even MORE things that we *can't* do with non-default routing instances; these are just the ones that have bitten me in the arse recently.

 

I really, really, REALLY wish Juniper would have given some more thought to the design of how Junos operated on SRX.  I've actually had to redesign and re-architect parts of my networks just to replace aging ScreenOS systems with SRX boxes, and I really feel like I should NOT have had to do that.  Things that worked great on ScreenOS either aren't supported or just don't work right on the SRX.  *NOT* a good replacement strategy, Juniper!  I should be able to take out my old ScreenOS boxes and drop SRX boxes in their place without having to completely change the way the pieces of the network are designed!

 

It's actually bitten us so hard, so many times recently, that our next big project for security devices is going to have the "C" word labelled on the firewalls.  We're spending 2X as much for firewalls that on paper have fewer features, but we know that we can put these things on the network and they work as advertised and as expected.  That's worth something.  I've spent a good part of the last 12 months designing network solutions AROUND the limitations and missing features or incomplete / unstable support of the SRX firewalls.  That's not how it's supposed to work.  It's CRAZY what some companies charge for features, though... (if we were to add IDP processing to these boxes we would be spending 4X as much over 5 years vs. what it would have cost us for Juniper boxes...)

-kr


---
If this solves your problem, please mark this post as "Accepted Solution."
Kudos are always appreciated.
Super Contributor
Posts: 222
Registered: ‎12-16-2008

Re: HA cluster management

 

 


keithr wrote:
Don't forget that we also cannot use DHCP relay services (forwarding-options -> helpers -> bootp) for any interfaces that do not belong to the inet.0 routing instance, and we also cannot obtain interface addresses / default routes via DHCP for interfaces that do not belong to the inet.0 routing instance.  There are probably even MORE things that we *can't* do with non-default routing instances; these are just the ones that have bitten me in the arse recently.

 


 

You're right, I forgot about that one. DHCP relay is actually an interesting case: I've been using it since 9.6 and it's working. Up until a few months ago I didn't even know it wasn't officially supported. From what I can tell, it works as long as the DHCP clients and server are in the same instance, and it even works when the DHCP server is in another instance if the server is ISC dhcpd - but not with MS DHCP. Something about Microsoft not echoing back certain options.

 


keithr wrote:

 

I really, really, REALLY wish Juniper would have given some more thought to the design of how Junos operated on SRX.  I've actually had to redesign and re-architect parts of my networks just to replace aging ScreenOS systems with SRX boxes, and I really feel like I should NOT have had to do that.  Things that worked great on ScreenOS either aren't supported or just don't work right on the SRX.  *NOT* a good replacement strategy, Juniper!  I should be able to take out my old ScreenOS boxes and drop SRX boxes in their place without having to completely change the way the pieces of the network are designed!

 

It's actually bitten us so hard, so many times recently, that our next big project for security devices is going to have the "C" word labelled on the firewalls.  We're spending 2X as much for firewalls that on paper have fewer features, but we know that we can put these things on the network and they work as advertised and as expected.  That's worth something.  I've spent a good part of the last 12 months designing network solutions AROUND the limitations and missing features or incomplete / unstable support of the SRX firewalls.  That's not how it's supposed to work.  It's CRAZY what some companies charge for features, though... (if we were to add IDP processing to these boxes we would be spending 4X as much over 5 years vs. what it would have cost us for Juniper boxes...)


Personally, I would have kept the ScreenOS firewalls. They can do more than the C-labelled devices and are rock solid.

The SRX shouldn't be advertised as a drop-in replacement for NetScreen, as full feature parity is going to take some time. I expect them to be a good replacement for nearly all NetScreen deployments by the second half of '11.

 

I'm starting to like the SRX series; in many ways it's already far ahead of NetScreen (BGP/MPLS/OSPF/...). For me, the two main problem areas are the use of multiple routing-instances (VPN/DHCP/track-ip limitations) and in-service upgrades on the branch devices. Most annoying is knowing that certain features are probably delayed because of QA, as they are already supported on the J-series.

 

It's easier to sell SRX to people who are currently running C/CP/... as they are used to having limited features and rather poor debugging capabilities. NetScreen users are spoiled. ;-)

 

Distinguished Expert
Posts: 979
Registered: ‎09-10-2009

Re: HA cluster management

 


motd wrote:

 

You're right, I forgot about that one. DHCP relay is actually an interesting case: I've been using it since 9.6 and it's working. Up until a few months ago I didn't even know it wasn't officially supported. From what I can tell, it works as long as the DHCP clients and server are in the same instance, and it even works when the DHCP server is in another instance if the server is ISC dhcpd - but not with MS DHCP. Something about Microsoft not echoing back certain options.


That is quite interesting, actually.  I recently (last week) struggled with trying to get DHCP relay to work for interfaces that were not in the default VR.  I have ISC dhcpd servers, and they were reachable via instance "VR-1," for example.  I also had "VR-2" configured (the names have been changed to protect the innocent).

 

When I had DHCP relay configured in the routing-instances -> VR-1 -> forwarders -> helpers -> bootp context, I got nothing.  Literally, nothing.  Traceoptions didn't log anything, and "show system services dhcp relay-statistics" came up with all 0 counters.  Clients in a VLAN whose interface lived in VR-1 were unable to get DHCP service.  When I moved the DHCP relay configuration into the default routing instance, then I started seeing the DHCP requests being dropped due to an invalid route to the destination, since DHCP relay packets are sourced from "self" and thus come out of inet.0.  I opened a case with JTAC and was told rather bluntly that DHCP relay was not supported when using virtual routers.
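For reference, the two variations I tried looked roughly like this (the server address and interface are invented for the example, and as described above, neither behaved for me with the VRs on that code):

# Attempt 1: relay configured inside the VR - produced no relay activity at all
set routing-instances VR-1 forwarding-options helpers bootp server 10.10.10.5
set routing-instances VR-1 forwarding-options helpers bootp interface vlan.100

# Attempt 2: relay in the default instance - requests were sourced from inet.0 and dropped
set forwarding-options helpers bootp server 10.10.10.5
set forwarding-options helpers bootp interface vlan.100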

 

I would be very interested to see how you've configured this such that it's working for you.  I ended up having to rework the whole configuration and do some trickery with FBF instead of being able to keep the traffic cleanly separated with true VRs, as I had wanted to (and as I had done on ScreenOS).


motd wrote:

 

Personally, I would have kept the ScreenOS firewalls. They can do more than the C-labelled devices and are rock solid.

The SRX shouldn't be advertised as a drop-in replacement for NetScreen, as full feature parity is going to take some time. I expect them to be a good replacement for nearly all NetScreen deployments by the second half of '11.


Our ScreenOS boxes are aging.  Our ISG2000s were some of the first off the production line.  We had to wait a couple of months after our order was processed for the IDP security modules to arrive because they hadn't been manufactured yet.  We recently had to replace one of the ISG2000 chassis (RMA) due to it starting to crash a lot (1 - 2 times per week).  We have a couple of NS5200s that are running original M-series management and line cards.  The cost to replace the management and line cards with current versions was higher than getting all new SRX hardware -- now I'm starting to wish we had just spent the money and upgraded the 5200s.  I recently retired an NS500.  Our sales team pushed the SRX as the way to go, saying that the NS boxes were being left behind and the SRX was taking their place.  We were told that the SRXs could do everything that the NS boxes could do, and much more.  Once I started configuring them and putting them out on the network, I learned that that simply was not the case, and we were not pleased with being misled.

 

When you say that the ScreenOS devices can do more than the C-label boxes... could you give some examples?


motd wrote:

 

I'm starting to like the SRX series; in many ways it's already far ahead of NetScreen (BGP/MPLS/OSPF/...). For me, the two main problem areas are the use of multiple routing-instances (VPN/DHCP/track-ip limitations) and in-service upgrades on the branch devices. Most annoying is knowing that certain features are probably delayed because of QA, as they are already supported on the J-series.

 

It's easier to sell SRX to people who are currently running C/CP/... as they are used to having limited features and rather poor debugging capabilities. NetScreen users are spoiled. ;-)


I get what Juniper tried / is trying to do with the SRX; I think they just failed miserably in the execution.  The whole process seemed very "Microsoft Windows Millennium Edition"-ish.  The product was not ready to go to market, and they rushed it.  They overlooked serious deficiencies in both system design and implementation.  They made excuse after excuse after excuse.  It seemed like every time we needed a feature or tried to implement something that was advertised in the data sheets or by our sales team, it wasn't supported, didn't work correctly, or just flat out sucked (SRX clusters, anyone?).  For all their faults, the SRXs have promise, but honestly those boxes have caused me way more headaches than they've solved at this point.

 

We have a few folks on staff who are old-school Cxxxx guys.  Our current project is a statewide network overhaul for a subsidiary institution that we've kind of acquired.  We were looking at 2 HA pairs of SRX650 boxes for the data centers in the north and south main hubs of the state, and about 20 SRX210s for branch offices.  We needed 1Gbps of IPsec throughput between the data centers, so the SRX650s seemed to fit the bill.  Due to the numerous problems we've had with the products themselves, sub-par support in a lot of cases, and little to no support from our sales team, we needed to look at other solutions to compare.

Trying to do an apples-to-apples comparison between vendors' products is near impossible these days.  Throughput numbers are always suspect... "best case" and "under optimal conditions," etc.  It's hard to know what's believable.  The golden rule:  "Vendors Lie."

On the one hand, the SRX650 says it can do 7Gbps max throughput, but it drops to a meager 900Mbps of IPS throughput.  The 1.5Gbps of IPsec throughput was ideal.  It can handle 512k total sessions and 30,000 sessions per second, and it tops out at 900,000 packets per second.  Great.  Compare that to, well, let's just say it out loud, the Cisco ASA 5585 series.  The 5585-10 lists 2Gbps to 4Gbps throughput.  That's lower (on paper) than the SRX650.  But it can do 1.5 million packets per second, 750k max sessions, and 50,000 sessions per second.  It shows 1Gbps of IPsec throughput, but with the optional IPS processing card it can do 2Gbps of IPS throughput.  The numbers are all over the place between these products.  Which one's faster?  Which one's going to hold up better under load?

 

Management is probably going to push for the non-Juniper solution this time.  We have some sister institutions who use a lot of ASA boxes and they say they're fantastic.  They say that the configuration can be a little clunky at times, but debugging is top-notch and the majority of the time "they just work."  I suppose that if we end up buying a bunch of them we'll see how it really goes.

 

I just realized how long this post has gotten, and we've diverged from the thread topic.  Sorry for the novel, and if we want to continue this thread of discussion maybe we should move it to a new post?

-kr


---
If this solves your problem, please mark this post as "Accepted Solution."
Kudos are always appreciated.
Contributor
Posts: 24
Registered: ‎01-30-2008

Re: HA cluster management

Good thread (unfortunately).  I have seen a handful of the things being discussed here over the last year on the two dozen or so installs I have done.  The Juniper partner I work for stopped selling SRX altogether after 6 of the installs were ripped out and replaced with SSG or Cisco.  We are selling again, but only into positions where we know SRX will fit and work... no dual ISP, no funny routing, tech-savvy IT staff that can handle the CLI (until WebUI page loads are <1 second like ScreenOS, we just tell customers the web UI is not a means of management), etc.

 

I gave up on cluster management... either hit it through a revenue port and jump to the other node (if on branch) with the "request routing-engine login node X" command, or buy a device that allows serial login via IP (http://www.perle.com, for example). This approach of course has its own advantages and disadvantages... at this point, with the current state of the SRX HA management mess, it is what it is.

Contributor
Posts: 23
Registered: ‎06-22-2008

Re: HA cluster management

My personal favorite is still that JSRP doesn't sync time the way NSRP does. Consequently, I have not been able to put together an HA config that supports NTP without a dedicated management-subnet NTP server, to go with the now-required dedicated management-subnet logging box.

 

I see NTP as a requirement for any reasonable firewall deployment. Just another feature that is coming 'some time soon' I'm sure.

Distinguished Expert
Posts: 805
Registered: ‎04-17-2008

Re: HA cluster management

 


oldtimer wrote:

Not sure if this helps, but if you do want fxp0 and one of your traffic ports to be in the same subnet, you can do this; it requires putting your traffic ports in a non-default routing-instance and leaving fxp0 in the inet.0 routing instance.  It also requires importing routes between inet.0 and the custom routing-instance, but it should work.  We've had some issues with this when NSM and STRM are in the picture, and that's how we got around it.


 

It's pretty much the only way you can have them both in the same subnet - if you leave them in the global instance and the IP address of fxp0 is lower than that of your "revenue port" in the same subnet, JUNOS will select fxp0 as the best next-hop for traffic bound for that subnet, even though the fxp0 interface is not a member of any security zone (nor part of the security process), and subsequently all traffic bound to the subnet will be dropped.

 

Again, this comes back to the original issue - why is fxp0 placed into the global table if it is an out-of-band interface?  This problem isn't specific to the SRX; I believe all M-Series routers with a dedicated fxp0 suffer from it as well.

 

As others have mentioned, DHCP relay, IPSEC VPN, NTP and DNS are all sourced from the fxp0 interface, so not having it available limits the capabilities of the box somewhat.

Ben Dale
JNCIP-ENT, JNCIS-SP, JNCIE-SEC #63
Juniper Ambassador
Follow me @labelswitcher
Visitor
Posts: 9
Registered: ‎12-23-2010

Re: HA cluster management

There is an event script which can automatically sync the IDP signature database.
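I can't post the script itself here, but the usual wiring for this kind of thing is event-options driving an event script on the primary, along these lines (the event name, script name and interval are only placeholders):

set event-options generate-event IDP-SYNC-TIMER time-interval 86400
set event-options policy IDP-SYNC events IDP-SYNC-TIMER
set event-options policy IDP-SYNC then event-script sync-idp-secondary.slax
set event-options event-script file sync-idp-secondary.slax

The script itself then copies the downloaded signature package from the primary over to the secondary node.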

 

Juniper is in the process of implementing a simplified cluster upgrade process.

Trusted Contributor
Posts: 1,048
Registered: ‎09-26-2011

Re: HA cluster management

I hope that one day UTM (IDP, AV, ...) updates and NTP will be propagated and synchronized by default - part of an ever-growing wishlist...

Does anyone know whether 12.x or 13.x will support auto-propagation and synchronization?
Thanks!

Michael
JNCIA-JUNOS, JNCIS-ENT/SEC, JNCIP-ENT
(CCNA, ACMP, ACFE, CISE)
"http://www.thechampioncommunity.com/"
CONNECT EVERYTHING. EMPOWER EVERYONE.
Share & Learn. Knowledge is Power.

"If there's a will, there's a way!"