The reason your config did not work is that you tried putting fxp0 in a separate routing instance. That won't work. Instead, you have to put all the other interfaces into a different routing instance and leave fxp0 in the default instance.
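As a rough sketch, that approach looks something like the config below. The instance name DATA, the ge- interfaces, and the addresses are all made up for illustration; substitute your own:

```
# Hypothetical example: move the revenue interfaces into a
# virtual-router instance so fxp0 can keep the default instance
# (inet.0) to itself.
set routing-instances DATA instance-type virtual-router
set routing-instances DATA interface ge-0/0/0.0
set routing-instances DATA interface ge-0/0/1.0
set routing-instances DATA routing-options static route 0.0.0.0/0 next-hop 192.0.2.1

# fxp0 stays in the default instance; no routing-instance statement needed.
set interfaces fxp0 unit 0 family inet address 10.0.0.10/24
```

With the revenue default route living inside the DATA instance, it no longer competes with whatever routes you configure for fxp0 in inet.0.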
The key to understanding how fxp0 operates is that it is connected directly to the control plane, not the data plane. The result is that no packets will ever hit fxp0 unless those packets come in through that very interface. So if you want to talk to fxp0 but have to go through another interface first to reach it, you will fail. The packets arriving at "the other" interface come in through the data plane, and there is no way for the data plane to forward them to the control plane (where fxp0 is connected).
You can have routing for fxp0, but you cannot have a default route for fxp0. So if your management station sits behind a bastion host in a DMZ, you have to add a specific route for the management station's IP pointing at the router in front of that bastion host. The problem here is that if your SRX learns another route to that destination, your fxp0 route will not work (you can't have the same route going out through fxp0 and a revenue interface). That's why some people use a separate instance.
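In other words, you end up with a host route rather than a default route. A minimal sketch, with made-up addresses (management station 198.51.100.25, next-hop router 10.0.0.1 on the fxp0 subnet):

```
# Specific /32 route for the management station via the
# gateway reachable on the fxp0 subnet. Note this goes in
# the default instance's routing-options, NOT as 0.0.0.0/0.
set routing-options static route 198.51.100.25/32 next-hop 10.0.0.1
```

If that same 198.51.100.25 is also reachable via a revenue interface, this is exactly the conflict described above.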
That's how I understand it. Someone please correct me if I am wrong.
In my experience it's just too much of a hassle and too many workarounds. So I tend to just forget about fxp0 and use a normal (data plane) interface for management. You don't have to re-write your whole monitoring infrastructure for this either. You can use virtual chassis mode for your cluster. If using SNMP, the virtual chassis will report for all the nodes in the cluster (so your monitoring will actually see both). And if you really need to log into the inactive node, you can always login to the active node and issue the "request routing-engine login node 1" command. This will connect you to the other node with full CLI access.
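For reference, reaching the other node from the active one looks like this (assuming node 1 is the node you are not logged into):

```
admin@srx-cluster> request routing-engine login node 1
```

You land in a full CLI session on the other node, so there is rarely a need to reach its fxp0 directly.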