I think that box is very well equipped. You meet the 'ivy bridge' requirement for 'performance mode', though for lab testing I'd recommend 'lite-mode' only... you've got more than enough CPU power with this rig and lots of RAM.
What kind of drives does this have? "Dual SD cards" is confusing. Is that supposed to mean dual SSDs? If you're not worried about power draw, get a hardware RAID controller (remember that ESXi doesn't recognize software RAID controllers) and a pair or more of 2TB spinning disks (I recommend WD RE4 drives), and then one or two 1TB or 2TB SSDs. With those, I recommend creating one 'spinning-disk' datastore and one SSD datastore and then balancing your VMs across them. For example: Junos VMs do most of their 'thing' in RAM, so you can run LOTS of Junos VMs on spinning disks without having to worry much about disk thrashing. But for Windows hosts, or any form of logging server... anything that hits the disk more, I'd recommend putting those on an SSD. I'd also recommend putting your money into a better SSD (i.e. Samsung 860/870 Pro or EVO... stay away from low-endurance QLC drives, since you're going to be reading/writing the heck out of these things) rather than a bigger one, unless you can afford both!
For reference, my main virtualization platform is dual 2660 v2 CPUs w/ 256GB of RAM, and I've very rarely maxed the CPUs (only when running lots of nested VMs)... So if you can save money by doing so and are interested, you can scale back those very sexy 2697s to less expensive and lower power-draw 2660 or 2670 v2 CPUs, IMO. Both are lower-TDP parts than the 2697 v2 and should cost much less up front as well.
Yes, I think you'll still be okay. Those CPUs are two generations older (same generation as the first-gen Core i3/5/7, so Nehalem, I believe), so the only thing I believe you won't be able to do is vMX performance mode (which, again, I wouldn't recommend anyway: performance testing under virtualization only makes sense if you're benchmarking what will ultimately be a vMX in production, and that's the only use I'm aware of for anything other than 'lite-mode').
If you're wondering 'what's missing': as far as I understand it, Intel added the AES instructions in the generation after Nehalem (Sandy Bridge) but had a time-to-market problem, so they were actually disabled in Sandy Bridge processors and never became active until Ivy Bridge. And that's the missing secret sauce for vMX performance mode.
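If you want to check whether a given box exposes those instructions, the 'aes' flag in the flags line of /proc/cpuinfo (on a Linux host) is the tell. Here's a throwaway sketch against sample flag strings (the sample strings and messages are just illustrative; on a real host you'd feed it the output of `grep -m1 '^flags' /proc/cpuinfo`):

```shell
#!/bin/sh
# Check a space-separated CPU flags string for the 'aes' flag.
has_aes() {
  case " $1 " in
    *" aes "*) echo "aes: present (performance-mode candidate)" ;;
    *)         echo "aes: missing (lite-mode only)" ;;
  esac
}

has_aes "fpu vme sse sse2 aes avx"   # Ivy Bridge-era flags
has_aes "fpu vme sse sse2"           # Nehalem-era flags
```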
KEY piece of information for dummies like me: I spent a lot of time getting my first vMXs up and running because I didn't read the instructions all the way through. After staging the vCP and vFPC, making all of the vswitch/port-group connections, and checking the resource allocations, I expected the vCP to boot and the vFPC to connect (I mean, who wouldn't, right?).
TWO things you need to be aware of:
1) The vMX vFPC OVA deploys, out of the box, asking for 32*MB* of RAM. Not gonna work, but an excellent way to get customers chasing their tails. You must set the vCP to at least 1 vCPU and 1GB+ of RAM, and the vFPC to at least 3 vCPUs and 3GB+ of RAM, or Wind River will just boot-loop like a fruit loop.
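To make those minimums concrete, here's a quick shell sketch (the function name and the example allocations are mine, just illustrating the numbers above, including how the OVA's 32MB default fails the check):

```shell
#!/bin/sh
# Sanity-check a VM's allocation against the vMX minimums above.
# Usage: check_alloc NAME VCPUS RAM_MB MIN_VCPUS MIN_RAM_MB
check_alloc() {
  if [ "$2" -ge "$4" ] && [ "$3" -ge "$5" ]; then
    echo "$1: OK"
  else
    echo "$1: UNDER-PROVISIONED (boot-loop territory)"
  fi
}

check_alloc vCP  1 1024 1 1024            # vCP minimum: 1 vCPU, 1GB
check_alloc vFPC 3 4096 3 3072            # vFPC minimum: 3 vCPUs, 3GB
check_alloc vFPC-ova-default 3 32 3 3072  # the OVA's 32MB default
```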
2) Then you must also let the vCP boot up, go into configuration mode, set root-authentication, "set chassis fpc 0 lite-mode", and commit. If you don't, if I recall correctly, the vFPC will boot but never properly connect to the control plane: you need both the correct minimum vCPUs/RAM *and* lite-mode, because performance mode requires Ivy Bridge or later (v2/v3/v4 Xeons or latest-gen i3/5/7).
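For reference, step 2 on the vCP boils down to something like this (a minimal sketch; "plain-text-password" prompts you interactively for the password, and of course use whatever root-authentication form you prefer):

```
configure
set system root-authentication plain-text-password
set chassis fpc 0 lite-mode
commit
```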
I like to issue "monitor start chassisd" from operational mode and watch the FPC boot. If all looks hunky dory, I then use "show chassis fpc" to see whether it came up alright. If it has successfully connected, slot 0 should transition from "Empty" to "Online", though the temp will remain at "Testing" on all virtual platforms.
I love this stuff so if you run into any issues, hit me back here and I'll help however I can!