SRX Services Gateway
Visitor
Posts: 5
Registered: ‎11-19-2010

RANT: Management when clustering

It's been ~3 years since I put in a feature request to have fxp0 in its own virtual router.  Existing problems:

 

  • I cannot assign an IP address to fxp0 that exists in a subnet on another interface (for example, a management zone where the out-of-band management interfaces for various devices live)
  • fxp0 is required to be able to access the passive unit when clustered; I cannot use a transit interface for this
  • fxp0 cannot be assigned to a VR
  • Using another interface in a "mgmt-vr" for dedicated management doesn't work, since NETCONF doesn't work in VRs
  • Leaving fxp0 in inet.0 and assigning all other interfaces to a "traffic-vr" works (see the sketch after this list), but you cannot terminate IPSec connections on interfaces in a VR, and there were also issues with acting as a DHCP server or client on interfaces in a VR.  If NETCONF, IPSec and DHCP don't work in a VR, what else doesn't work in a VR?
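
For anyone who wants to try the same thing, this is roughly what the "leave fxp0 in inet.0, everything else in a VR" workaround looks like as set commands (the interface names, addresses and VR name are made up; adjust to your own setup):

    set routing-instances traffic-vr instance-type virtual-router
    set routing-instances traffic-vr interface ge-0/0/0.0
    set routing-instances traffic-vr interface ge-0/0/1.0
    set routing-instances traffic-vr routing-options static route 0.0.0.0/0 next-hop 203.0.113.1
    ## fxp0 stays in inet.0, which keeps its own default route for management traffic
    set routing-options static route 0.0.0.0/0 next-hop 10.0.0.254

That keeps management traffic off the revenue ports, but it is also exactly where the NETCONF/IPSec/DHCP limitations above start to bite.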

 

fxp0 is not an out-of-band management interface.  Its use is required to access the secondary unit when in a cluster, but the workarounds to ensure traffic doesn't leave the box through another interface cause problems of their own.  I've spent hours trying different things, and the best solution I've found so far is to leave fxp0 in inet.0 and put all other interfaces in a VR.  As I mentioned above, there are problems with this, and it cannot be done in every situation.
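
For context, the per-node group setup I mean looks something like this (host names and addresses are made up).  As I understand it, the backup-router statement is what makes the secondary node reachable at all, since its routing daemon isn't doing the forwarding, and even then it only helps if nothing else on the box overlaps with the fxp0 subnet:

    set groups node0 system host-name srx-node0
    set groups node0 interfaces fxp0 unit 0 family inet address 10.10.0.1/24
    set groups node0 system backup-router 10.10.0.254 destination 10.0.0.0/8
    set groups node1 system host-name srx-node1
    set groups node1 interfaces fxp0 unit 0 family inet address 10.10.0.2/24
    set groups node1 system backup-router 10.10.0.254 destination 10.0.0.0/8
    set apply-groups "${node}"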

 

When is fxp0 going to be fixed?  I consider this a bug.  ScreenOS handled it properly, and other firewall vendors also handle it properly.  This is a big problem in large environments where NSM needs to talk to everything.

 

Why can't fxp0 just have its own routing instance?  I know I'm not the only one severely annoyed by this.

Super Contributor
Posts: 167
Registered: ‎08-02-2010

Re: RANT: Management when clustering


While I agree that we should be able to put fxp0 in a different VR, terminating IPSec on interfaces in a non-default VR has been possible since 11.1:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB21487
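A rough sketch of what I'd expect the config to look like (names and addresses are made up, the IKE/IPsec policies and proposals are omitted, and whether st0 sits in the VR or in inet.0 depends on where you want the tunnel traffic routed):

    set routing-instances traffic-vr instance-type virtual-router
    set routing-instances traffic-vr interface ge-0/0/0.0
    set routing-instances traffic-vr interface st0.0
    set security ike gateway branch-gw ike-policy ike-pol
    set security ike gateway branch-gw address 198.51.100.1
    set security ike gateway branch-gw external-interface ge-0/0/0.0
    set security ipsec vpn branch-vpn bind-interface st0.0
    set security ipsec vpn branch-vpn ike gateway branch-gw
    set security ipsec vpn branch-vpn ike ipsec-policy ipsec-pol
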
I can't speak from experience when it comes to the DHCP client in a VR, but it does seem like it was fixed in 11.1R2:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB22642
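As far as I can tell the client-side config itself is just the usual family inet dhcp on an interface that sits in the instance, something like this (again untested on my side; interface and VR names made up):

    set interfaces ge-0/0/1 unit 0 family inet dhcp
    set routing-instances traffic-vr interface ge-0/0/1.0
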
Of course, the recommended release is still 10.4R10 which makes the point kind of moot, but I expect 11.4 to be recommended pretty soon!

Regards,
Adam

(if my post helped solve your problem, mark it as accepted solution)
Visitor
Posts: 5
Registered: ‎11-19-2010

Re: RANT: Management when clustering

Well, it's nice to know they've fixed DHCP and IPSec, except for the fact that 10.4 is still the recommended version.  :)

 

Could someone from Juniper chime in on the fxp0/routing-instance issue?  Is this going to be fixed at some point?

Distinguished Expert
Posts: 979
Registered: ‎09-10-2009

Re: RANT: Management when clustering

I'd just like to chime in here and say that if the majority of users find they have to do extra work every time they deploy a device, just to make it operate in a more commonly accepted definition of "logical" and "sane," then the design is fundamentally flawed.

 

Having to manually carve out a VR and shove your revenue ports into it is extra work, and not exactly trivial given how many Junos features have VR caveats.  I'm sure we've all run into weird issues with multiple VRs from time to time.

 

ScreenOS, for comparison, still did not do this "correctly" as far as I'm concerned.  If the MGT port was connected to a network that also transited the firewall through revenue ports, the system's main routing table picked up a direct/connected route for that network via the MGT port, and transit traffic to it got hijacked out the MGT port instead of the proper transit interfaces.

 

Brocade/Foundry switches do this "correctly" with a true OOB management port that doesn't interfere with the system's main routing table, and Palo Alto firewalls handle this beautifully, by default, out of the box.  No extra work, no hoops to jump through, no workarounds.