We are migrating our on-prem datacenter to Azure and I was considering using the vSRX to protect our Azure VMs since we run SRX firewalls at our branch offices and on-prem datacenter. The configuration would entail adding VPNs between our office SRXs and the vSRX in Azure. Some VMs in Azure must be reachable over the public internet as well.
Would this be a legitimate use case? I've read through the "vSRX Deployment Guide for Microsoft Azure Cloud", but the only use cases it mentions involve vSRX instances in Azure connecting to each other; there's no mention of whether devices behind them can be publicly reachable.
The reason I question its viability is that I'm having difficulty understanding how the routing should be configured. Specifically, how would devices external to the VNet route to devices behind the firewall if the untrust interface isn't configured with a publicly accessible IP/netmask, as would normally be the case? I can see how accessibility via VPN would work, but not public access.
Any information on how to do this would be helpful.
OK, so I've gone ahead and started setting up the test environment using the vSRX Gateway template and am having some difficulties I'm hoping to get help with here. I have two VMs set up, one in trust and another in untrust (so I can RDP to the one in trust). I'm unable to connect from a public IP to the VM in the trust subnet.
Here is what I have so far:
TRUST SUBNET: 172.16.22.0/24 SRX int ge-0/0/0.0 IP 172.16.22.4
UNTRUST SUBNET: 172.16.21.0/24 SRX int ge-0/0/1.0 Pvt IP 172.16.21.4 Pub IP <not shown>
MNGMT SUBNET: works fine
Azure User Defined Routes:
TRUST SUBNET 0.0.0.0/0 Virtual Appliance Next Hop 172.16.22.4
UNTRUST SUBNET 0.0.0.0/0 Next Hop Internet
SRX routing instances:
vr1: ge-0/0/0.0, ge-0/0/1.0
routing-options static route 0.0.0.0/0 next-hop 172.16.21.1
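For reference, here's the same setup in set-command form. This is a sketch, not a verified working config: the interface names, addresses, and the 172.16.21.1 Azure gateway are taken from above, the instance type and zone bindings are my assumptions about how the pieces fit together.

```
set routing-instances vr1 instance-type virtual-router
set routing-instances vr1 interface ge-0/0/0.0
set routing-instances vr1 interface ge-0/0/1.0
set routing-instances vr1 routing-options static route 0.0.0.0/0 next-hop 172.16.21.1
set interfaces ge-0/0/0 unit 0 family inet address 172.16.22.4/24
set interfaces ge-0/0/1 unit 0 family inet address 172.16.21.4/24
set security zones security-zone trust interfaces ge-0/0/0.0
set security zones security-zone untrust interfaces ge-0/0/1.0
```

Note that Azure hands out these addresses via DHCP; whether you configure them statically on the vSRX or use `family inet dhcp` shouldn't matter as long as the addresses match what Azure assigned to the NICs.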
Security policies are set to allow any/any in both directions
Both VMs are configured with private and public ip addresses.
Static NAT has been configured in a few different ways with no joy, but I'm curious to see how this is supposed to be configured, since I haven't found good information on this that actually works.
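For the inbound direction, the shape of static NAT I'd expect (again, a sketch under assumptions, not a confirmed recipe): Azure already translates the untrust public IP to a private IP on the untrust NIC, so the vSRX rule maps that untrust private IP to the trust VM. Here 172.16.21.5 is a hypothetical secondary private IP added to the untrust NIC (with its own Azure public IP, so the vSRX's primary IP isn't consumed by the NAT), and 172.16.22.5 is a hypothetical trust VM; the rule-set and rule names are made up.

```
set security nat static rule-set Inbound from zone untrust
set security nat static rule-set Inbound rule To-Trust-VM match destination-address 172.16.21.5/32
set security nat static rule-set Inbound rule To-Trust-VM then static-nat prefix 172.16.22.5/32
set security nat proxy-arp interface ge-0/0/1.0 address 172.16.21.5/32
```

The proxy-arp line is needed because the secondary address doesn't belong to the interface itself, so the vSRX must answer ARP for it.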
I've followed the documentation from MS here: "If the appliance must route traffic to a public IP address, it must network address translate the source's private IP address to its own private IP address, which Azure then network address translates to a public IP address, before sending the traffic to the Internet."
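That MS guidance corresponds to plain interface-based source NAT on the vSRX, something like the following (a sketch; the rule-set/rule names are arbitrary, and the source prefix is the trust subnet from my setup above):

```
set security nat source rule-set Trust from zone trust
set security nat source rule-set Trust to zone untrust
set security nat source rule-set Trust rule Trust-Subnet match source-address 172.16.22.0/24
set security nat source rule-set Trust rule Trust-Subnet then source-nat interface
```

`source-nat interface` translates outbound traffic to the untrust interface's private IP (172.16.21.4), which Azure then translates to the public IP associated with that NIC.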
I've tried this, but I'm still unable to reach anything on the internet from the VM in the trust network.
If anyone has information on how this should be configured I'd appreciate your assistance.
The VM in "trust" has the IP address 10.0.0.5 and I see matches on the NAT rules, so something is getting to the firewall:
fw1> show security nat source rule Trust-Subnet
source NAT rule: Trust-Subnet Rule-set: Trust
Rule-Id : 1
Rule position : 1
From zone : trust
To zone : untrust
Source addresses : 10.0.0.0 - 10.0.0.63
Action : interface
Persistent NAT type : N/A
Persistent NAT mapping type : address-port-mapping
Inactivity timeout : 0
Max session number : 0
Translation hits : 49
Successful sessions : 49
Failed sessions : 0
Number of sessions : 6
But it can't ping or telnet to servers on the Internet (e.g. ping 220.127.116.11).
I've also noticed that while I can ping between Linux VMs on the "trust" subnet, I can't ping to/from the vSRX. The VMs have default gateways of 10.0.0.1, but the Azure route table rule is to go via the vSRX.
Also the vSRX doesn't display any ARP entries for the VMs in "trust".
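For anyone debugging the same symptoms, these are the standard Junos operational commands I'd check (the routing-instance name follows the vr1 setup described earlier in the thread):

```
show arp no-resolve
show interfaces ge-0/0/0 extensive | match address
show security flow session
show route table vr1.inet.0
```

An empty ARP table plus an all-zero hardware address on the revenue interfaces points at the MAC-address problem described below rather than at a NAT or policy misconfiguration.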
I got a more complete vSRX config to bring up an IPsec VPN with a non-cloud DC, but I just can't get the communication on the Azure side of the vSRX to work. I'm about to give up on the vSRX and just use Azure Network Security Groups, so I'd love to know if you got it working in the end - and how!
In case anyone finds this thread and wonders what the problem was, it's a bug in Junos (or a bug in the interaction between Junos and the Hyper-V hypervisor in Azure).
For vSRX on Azure there is a known issue where the MAC address does not get updated and stays 00:00:00:00:00:00.
This issue is caused by the DPDK vmbus driver sending a request to the Hyper-V server (the host on Azure) to get the MAC address and then waiting just 0.01 seconds for the answer. If the reply does not arrive within 0.01 seconds, the driver parses an all-zero message and the MAC is set to zero. This has been fixed in -D120 by increasing the wait time. (https://prsearch.juniper.net/InfoCenter/index?page=prcontent&id=PR1410825)
The shocking thing is that although someone at Juniper has known about this bug, fixed it in -D120, then broken it again in -D130 and beyond, the vSRX they have published on the Azure Marketplace runs the broken -D100 version! Also, the JTAC-recommended version of Junos for the vSRX (-D15) doesn't carry a caveat about using it on Azure (https://kb.juniper.net/InfoCenter/index?page=content&id=KB21476), and there is no mention of this PR in the Release Notes for the various versions of Junos.