09-07-2010 09:04 AM
Hi, I've asked TAC this but the answers I'm getting are not great (i.e. there is probably no solution) so as a last-ditch attempt I thought I'd try here.
We have an SA4500 cluster where the nodes are in two separate locations. We use a DNS load balancer to route traffic to the most available site; this works fine. Using the load balancer and the IVE admin console, as administrators we are able to tell where a given user has authenticated, as this is recorded in the logs. However, we have a requirement to notify the user of which cluster node they have authenticated to, as with the load balancer in place they have no way of knowing where their connection has been terminated.
I'm looking for a variable such as <HOSTNAME> or <IVENODENAME> which I can place on the front page, or post-authentication in the UI Options "show notification window". It doesn't matter too much where, as long as it's somewhere the user can easily find in the UI.
I have checked the documentation and knowledge base and can't find any reference on how to do this.
Creating a Java applet from scratch that contained the logic "IF SERVERIP == '10.0.0.1' THEN PRINT 'Welcome to London'" or "IF SERVERIP == '22.214.171.124' THEN PRINT 'Welcome to Birmingham'", and embedding it in a custom page, may be possible, but our web developer (who would be the first to admit he is not a Java expert) seems to think this would be a task of biblical proportions.
Given that we can't run anything server side on the IVE, I think I'm pretty much out of options. As this is such a minor niggle, it's unlikely that I'll get it picked up as a feature request, as there are no mega $$$$ attached to it.
So does anyone have any bright ideas?
09-07-2010 10:44 AM
09-08-2010 01:56 AM
Thanks for that, not a bad idea. Sadly we don't have any hardware load balancers in the environment; I know we would have various options then, as they inevitably would be much "smarter" and properly location-aware.
The JPG idea is definitely feasible; equally, it occurred to me that I could include in a custom template a reference to the cluster status page. It wouldn't be massively elegant, but it would at least give me something. I'll have a play with a custom template and see if that gets us anywhere.
All other ideas/opinions still gratefully received.
09-14-2010 11:34 AM
You can configure a custom login page and, via the Template Toolkit, do things on the server side.
Here is the code:
<% USE CGI %>
<% CGI.server_name() %>
And here are the other cgi variables you could display:
DOCUMENT_ROOT The root directory of your server
HTTP_COOKIE The visitor's cookie, if one is set
HTTP_HOST The hostname of the page being attempted
HTTP_REFERER The URL of the page that called your program
HTTP_USER_AGENT The browser type of the visitor
HTTPS "on" if the program is being called through a secure server
PATH The system path your server is running under
QUERY_STRING The query string (see GET, below)
REMOTE_ADDR The IP address of the visitor
REMOTE_HOST The hostname of the visitor (if your server has reverse-name-lookups on; otherwise this is the IP address again)
REMOTE_PORT The port the visitor is connected to on the web server
REMOTE_USER The visitor's username (for .htaccess-protected pages)
REQUEST_METHOD GET or POST
REQUEST_URI The interpreted pathname of the requested document or CGI (relative to the document root)
SCRIPT_FILENAME The full pathname of the current CGI
SCRIPT_NAME The interpreted pathname of the current CGI (relative to the document root)
SERVER_ADMIN The email address for your server's webmaster
SERVER_NAME Your server's fully qualified domain name (e.g. www.cgi101.com)
SERVER_PORT The port number your server is listening on
SERVER_SOFTWARE The server software you're using (e.g. Apache 1.3)
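Building on that snippet, the same mechanism can branch on the node name to show a friendly site label. This is an untested sketch, assuming the custom sign-in templates accept standard Template Toolkit IF/ELSIF/END directives with the tag style shown above; "london-sa.example.com" and "birmingham-sa.example.com" are placeholder hostnames standing in for your actual node names:

<% USE CGI %>
<% IF CGI.server_name() == "london-sa.example.com" %>
Welcome to London
<% ELSIF CGI.server_name() == "birmingham-sa.example.com" %>
Welcome to Birmingham
<% ELSE %>
Connected to <% CGI.server_name() %>
<% END %>

Matching on CGI.server_name() rather than a hard-coded IP keeps the template readable, and the ELSE branch still shows something useful if a node is renamed or added later.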
09-19-2010 09:46 PM
Unfortunately, as it sounds you have found out through contacting JTAC, there is nothing on the IVE that will allow you to do this; there is no <HOSTNAME>, or similar, variable that can be used.
In addition, as you are aware from the same JTAC conversations, DNS load balancing is not supported on the SA devices. Is there a specific reason for using a cluster rather than separate nodes (other than ease of configuration parity)?
09-22-2010 03:42 AM
Hi JJH, thanks for your contribution. Another poster seems to contradict you, but I will give it a try in any case as I've nothing to lose really.
09-22-2010 04:05 AM
Things weren't looking good, hence I was looking for the more left-field solutions.
We are not strictly using DNS to load balance, only for failover should the primary site fail, and even then it is not instantaneous (about 8 minutes of downtime in our testing so far). Suggesting that Juniper don't support DNS load balancing is like saying they don't support DNS: the SA is oblivious to what is happening externally to it. For that reason we have to have a solution to provide geographic failover, since Juniper dropped the DX (at mine and my customers' extreme inconvenience and cost).

Having separate nodes doesn't make sense in most cases, as it increases the admin workload and the likelihood of an introduced configuration error. Furthermore, despite the cluster license changes in version 7.0 (which the last time I looked still weren't adequately documented), separate nodes without local load balancers don't give the same guarantees as having a shared cluster across two locations. Specifically, under the new licensing there is a 5-day grace period before the license count drops back to the surviving node's capacity. This means that we either have to double the licensing we need, accept that during an outage 50% of capacity will be lost, or purchase ICE licensing at significant cost.

The problem for us is that with some customers we have a 2-week wait before we can get into their data centre (crazy, I know). We recently had a problem with a cluster node which would intermittently crash and require a cold boot before restarting; the case went on for 4 months before the issue was resolved, and during that time we would have been at 50% capacity for most of the duration. Whilst the new licensing scheme offers a new solution, it's not going to be right for everyone, and I think a lot of -CL licenses will still be sold.
The primary reason we have DNS failover versus local load balancers is cost: an in-the-cloud service costs about $4,000/year to maintain; the cheapest, nastiest hardware load balancers I could find which met the spec were ~$20,000 all-in for the first year, and the preferred F5 solution was ~$60,000.