For some reason, NSM only sees 1313 GB of space (shown in the Server Monitor in the GUI), and it also reports that space allocation is currently at 99%.
When trying to (re)start NSM, the following is seen:
[root@nsm dbbackup]# /usr/netscreen/DevSvr/bin/devSvr.sh start
Starting devSvrManager as nsm..................
There is not enough disk space or Inode left on the server machine,
you have to backup your data and clean up your disk space or Inode,
before we let you start the server!
Starting devSvrLogWalker as nsm....................OK
Starting devSvrDataCollector as nsm................OK
Starting devSvrDirectiveHandler as nsm.............OK
Starting devSvrProfilerMgr as nsm..................OK
Starting devSvrStatusMonitor as nsm................OK
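Before chasing theories, it's worth confirming what the start script is actually complaining about: it refuses to start if either blocks or inodes are exhausted. Here's a quick sketch (the data-directory path is a guess; point it at whatever partition actually holds your NSM data):

```shell
#!/bin/sh
# Check both block usage and inode usage on the partition holding the
# NSM data.  DATA_DIR defaults to / here; on a real NSM box you would
# point it at the data partition (e.g. /var/netscreen -- a guess).
DATA_DIR=${DATA_DIR:-/}

# -P forces POSIX one-line-per-filesystem output so the awk parsing is safe.
blk_pct=$(df -P "$DATA_DIR"    | awk 'NR==2 { gsub(/%/, ""); print $5 }')
ino_pct=$(df -P -i "$DATA_DIR" | awk 'NR==2 { gsub(/%/, ""); print $5 }')

echo "block usage: ${blk_pct}%"
echo "inode usage: ${ino_pct}%"
```

Don't skip the inode check: a filesystem with plenty of free blocks but no free inodes (millions of tiny log files will do it) trips the exact same error.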
This is just a guess, but it is similar to what I have seen in the past. Early on, 32-bit operating systems (Linux included) had problems creating volumes larger than roughly 1.4 TB to 1.8 TB. That has since been addressed, but NSM requires RHEL 4.0 or 5.0, and 4.0 is old enough that an un-updated install could still run into it. Then there is the application itself: since NSM doesn't support a 64-bit OS, it is clearly a 32-bit app, so the developers could easily have baked in what was, at the time, a generous file-system size limit, and you have simply run into it. Given all the quirks with NSM, I truly think you might have hit an application limit. Look at the install guide: they are still recommending an 80 GB HDD. When was the last time you saw that? If they haven't updated the docs, odds are they haven't updated the code either. If the code predates LFS (large file support), then going anywhere near 2 TB without LFS just wouldn't happen.
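If you want to sanity-check the 32-bit theory on your own install, you can read the ELF class byte of the binary directly. The NSM path below is a guess based on the start script output above; substitute whichever DevSvr binary you actually have (the sketch falls back to /bin/sh so it runs anywhere):

```shell
#!/bin/sh
# Check whether a binary is 32- or 64-bit by reading the ELF class byte
# at offset 4 (01 = 32-bit, 02 = 64-bit).  The NSM path is a guess --
# replace it with your real devSvr binary; /bin/sh is a demo fallback.
BIN=${BIN:-/usr/netscreen/DevSvr/bin/devSvr}
[ -r "$BIN" ] || BIN=/bin/sh

class=$(od -An -t x1 -j 4 -N 1 "$BIN" | tr -d ' ')
case "$class" in
  01) echo "$BIN: 32-bit ELF" ;;
  02) echo "$BIN: 64-bit ELF" ;;
  *)  echo "$BIN: not an ELF binary?" ;;
esac
```

A 32-bit binary is stuck with a 32-bit off_t (2 GiB files) unless it was explicitly built with large-file support, which is exactly the kind of limit I suspect here.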
As I was typing this, I really wanted to see if LFS support was in. Nope.
“LFS in Glibc 2.1.3 and Glibc 2.2
The LFS interface in glibc 2.1.3 is complete - but the implementation not. The implementation in 2.1.3 contains also some bugs, e.g. ftello64 is broken. If you want to use the LFS interface, you need to use a glibc that has been compiled against headers from a kernel with LFS support in it.
Since glibc 2.1.3 was released before LFS support went into Linux 2.3.X/2.4.0-testX, some fixes had to be made to glibc to support the kernel routines. The current stable release of glibc is glibc 2.2.3 (2.2 was released in November 2000) and it does support all the features from Linux 2.4.0. Glibc 2.2.x is now used by most of the major distributions in their latest release (e.g. SuSE 7.2, Red Hat 7.1). glibc 2.2 supports the following features that glibc 2.1.3 doesn't support:”
On my NSM box, glibc 2.1.2 is installed, so LFS support is not fully there. While LFS doesn't play a direct part in volume sizes, it still shows the general lack of support for large files, and by extension large volumes.
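If you want to see whether a given box and filesystem handle files past the old 32-bit limit at all, a quick smoke test (just a sketch) is to seek a sparse file past the 2 GiB boundary; without LFS in the kernel, libc, and tools, that write fails:

```shell
#!/bin/sh
# Probe large-file support by writing one byte just past the 2 GiB
# (2^31) boundary.  The file is sparse, so almost no disk is consumed.
TESTFILE=${TESTFILE:-/tmp/lfs_probe.$$}

if dd if=/dev/zero of="$TESTFILE" bs=1 count=1 seek=2147483648 2>/dev/null; then
    echo "large-file support OK ($(wc -c < "$TESTFILE") bytes)"
else
    echo "no large-file support here"
fi
rm -f "$TESTFILE"
```

On anything remotely modern this succeeds with a 2147483649-byte file; on a pre-LFS stack like the one NSM appears to be built against, it's exactly the kind of operation that breaks.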
Also, at the end of last month, RHEL 4 hit the end of its Production 3 phase and entered the Extended Life phase.
“During the Extended Life Phase, a Red Hat Enterprise Linux subscription provides continued access to previously released content on the Red Hat Customer Portal, as well as to other content, such as documentation and the Red Hat Knowledgebase.
As an optionally available add-on to a Red Hat Enterprise Linux subscription, Red Hat offers an Extended Life Cycle Support (ELS) subscription. ELS delivers critical impact security fixes and selected urgent-priority bug fixes that are available and qualified for a published subset of the packages in a specific major release of Red Hat Enterprise Linux that is beyond the end of the Production 3 Phase. For ELS subscribers, Red Hat will generally continue to proactively provide the Critical Impact security fixes if and when available independent of customer requests.
The ELS Add-On is delivered during the Extended Life Phase for an additional three years after the conclusion of the seven-year Production Phase. ELS is delivered for a limited set of software packages on a specific set of hardware architectures and is available for Red Hat Enterprise Linux 3 and 4 only.”
Given that the docs still largely reference a dead OS, and HDD sizes you could only find on eBay (or deep in a storage closet), it shows how neglected the NSM code and guides really are.
I know, not what you wanted to hear. I will be in the same boat as you: I have a new server to install NSM on, with about the same amount of storage as you. I've been hoping for a 2012 release with some improvements, though. The sad part is that if you buy a new server today, you can't even order it with as little as 4 GB of RAM; it comes with much more, and NSM can't use the extra.
Hmmm, then something is up with my NSM install. I'm waiting to hear whether the 2012 release is close before I build a new NSM server. If it isn't, I'll let you know about the 1.4 TB limit. I have seen 1.4 TB and 1.8 TB limits in the past, though (not on NSM, but elsewhere).