Management

NSM sees only part of the disk - why?

‎02-07-2012 10:54 AM

Hello All,

 

We are seeing a rather weird problem with NSM 2011.1; maybe someone
here has seen this before or has an idea of what is happening.

 

The RAID storage is on /dev/mapper/VolGroup00-LogVol00 and is about
5 terabytes in size. Logs from both the Dev and GUI servers are
stored there. About 20% of the space is used:

 

[root@nsm dbbackup]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      5.3T  1.1T  4.1T  21% /
/dev/sda1              99M   14M   81M  15% /boot
tmpfs                 5.8G     0  5.8G   0% /dev/shm
none                  5.8G  104K  5.8G   1% /var/lib/xenstored

 

 

For some reason, NSM sees only 1313 GB of space (shown in the Server
Monitor in the GUI), and it also reports that space allocation is
currently at 99%.

 

When trying to (re)start NSM, the following is seen:

 

[root@nsm dbbackup]# /usr/netscreen/DevSvr/bin/devSvr.sh start
Starting apps...
Starting devSvrDbSvr...............................OK
Starting devSvrManager as nsm..................
********************************************************
There is not enough disk space or Inode left on the server machine,
   you have to backup your data and clean up your disk space or Inode,
   before we let you start the server!
*******************************************************
....OK
Starting devSvrLogWalker as nsm....................OK
Starting devSvrDataCollector as nsm................OK
Starting devSvrDirectiveHandler as nsm.............OK
Starting devSvrProfilerMgr as nsm..................OK
Starting devSvrStatusMonitor as nsm................OK

 

 

The inodes are OK:

 

[root@nsm dbbackup]# df -hTi
Filesystem    Type    Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3      1.4G    485K    1.4G    1% /
/dev/sda1     ext3       26K      36     26K    1% /boot
tmpfs        tmpfs      183K       1    183K    1% /dev/shm
none         tmpfs      181K       4    181K    1% /var/lib/xenstored

 

 

What is the reason NSM sees only a fraction of the storage on this
partition (1313 GB out of roughly 5.3 TB)? Also, GPT is used on the
disk; I am not sure whether that is relevant.
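
For reference, the partition table type can be double-checked with parted
(shipped with RHEL); older parted versions label the field "Disk label type"
instead of "Partition Table". Adjust the device name to the disk that
actually carries the GPT:

parted -s /dev/sda print | grep -i -e "partition table" -e "disk label"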

Best Regards,
PK

Juniper Ambassador, Juniper Networks Certified Instructor,
JNCIE-SEC #98, JNCIE-ENT #393, JNCIE-SP #2253
Twitter: @JuniperTrain
GitHub: https://github.com/pklimai
[Juniper Authorized Education & Support in Russia]
3 REPLIES
Management

Re: NSM sees only part of the disk - why?

‎03-08-2012 12:59 PM

This is just a guess, but it is similar to what I have seen in the past. Early on, a 32-bit OS would have problems creating a volume larger than 1.4 TB to 1.8 TB; this includes Linux. The problem has since been addressed, but given that NSM requires RHEL 4.0 or 5.0, and 4.0 is rather old, you could still run into it if the OS has not been updated.

Next you have the application: since NSM doesn't support a 64-bit OS, it is clearly a 32-bit app. The developers could have done a lot of things, including baking in what was at the time a generous file system size limit, and it just so happens you have run into it. Given all the quirks with NSM, I truly think you might have hit an application limit.

Look at the install guide: they are still recommending an 80 GB HDD. When was the last time you saw that? If they haven't updated the docs, then surely they haven't updated the code either. The code could pre-date LFS (Large File Support), so going beyond 2 TB without LFS wouldn't happen.
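
Working backwards from your numbers makes the 32-bit theory look even more plausible: 1313 GiB + 4096 GiB (4 TiB, the wrap-around point of an unsigned 32-bit counter of 1 KiB blocks) comes to 5409 GiB, which df -h would round to exactly the 5.3T you see. NSM's internals aren't public, so this is purely a guess, but a quick bash sketch shows the arithmetic:

# Purely hypothetical: simulate a 32-bit counter of 1 KiB blocks wrapping at 4 TiB
real_kib=$(( 5409 * 1024 * 1024 ))    # assumed true size, 5409 GiB expressed in KiB
wrapped=$(( real_kib % (1 << 32) ))   # what an unsigned 32-bit counter would retain
echo "real size   : $(( real_kib >> 20 )) GiB"   # prints 5409
echo "wrapped size: $(( wrapped >> 20 )) GiB"    # prints 1313

The wrapped value matches the 1313 GB your Server Monitor reports, so a 32-bit size counter somewhere in the app would fit the symptoms.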

 

As I was typing this, I really wanted to see if LFS support was in. Nope.

 

“LFS in Glibc 2.1.3 and Glibc 2.2

 

The LFS interface in glibc 2.1.3 is complete - but the implementation not. The implementation in 2.1.3 contains also some bugs, e.g. ftello64 is broken. If you want to use the LFS interface, you need to use a glibc that has been compiled against headers from a kernel with LFS support in it.

 

Since glibc 2.1.3 was released before LFS support went into Linux 2.3.X/2.4.0-testX, some fixes had to be made to glibc to support the kernel routines. The current stable release of glibc is glibc 2.2.3 (2.2 was released in November 2000) and it does support all the features from Linux 2.4.0. Glibc 2.2.x is now used by most of the major distributions in their latest release (e.g. SuSE 7.2, Red Hat 7.1). glibc 2.2 supports the following features that glibc 2.1.3 doesn't support:”

 

My NSM server has glibc 2.1.2 installed, so LFS support is not fully there. While LFS doesn't play a direct part in volume sizes, it does show the general lack of support for large files, and by extension large volumes.
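
If you want to check the same thing on your own server, glibc will report its version; the paths below are the stock locations on 32-bit RHEL, so adjust them if NSM bundles its own copy of the library somewhere under /usr/netscreen:

rpm -q glibc       # version of the distribution's glibc package
/lib/libc.so.6     # glibc's libc prints its version banner when executed directly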

 

Also, at the end of last month, RHEL 4 reached the end of its Production 3 phase and entered the Extended Life Phase.

 

“During the Extended Life Phase, a Red Hat Enterprise Linux subscription provides continued access to previously released content on the Red Hat Customer Portal, as well as to other content, such as documentation and the Red Hat Knowledgebase.

 

As an optionally available add-on to a Red Hat Enterprise Linux subscription, Red Hat offers an Extended Life Cycle Support (ELS) subscription. ELS delivers critical impact security fixes and selected urgent-priority bug fixes that are available and qualified for a published subset of the packages in a specific major release of Red Hat Enterprise Linux that is beyond the end of the Production 3 Phase. For ELS subscribers, Red Hat will generally continue to proactively provide the Critical Impact security fixes if and when available independent of customer requests.

 

The ELS Add-On is delivered during the Extended Life Phase for an additional three years after the conclusion of the seven-year Production Phase. ELS is delivered for a limited set of software packages on a specific set of hardware architectures and is available for Red Hat Enterprise Linux 3 and 4 only.”

 

Given that the docs still reference a mostly dead OS and HDD sizes you could only find on eBay (or deep in a storage closet), it shows how neglected the NSM code and guides really are.

 

I know, not what you wanted to hear. I will be in the same boat as you: I have a new server to install NSM on, with about the same amount of storage as you have. I've been hoping for a 2012 release with some improvements, though. The sad part: if you buy a new server today, you can't even get one with only 4 GB installed; it comes with much more, and that extra RAM cannot be used.

Management

Re: NSM sees only part of the disk - why?

‎03-08-2012 01:22 PM

Hi lanbrown,

 

Thanks for your post. Yes, this could be an NSM limit (or bug); I wonder,
then, whether anyone else runs NSM with partitions larger than 1.3 TB.

 

As for RAM: you can, in fact, use more than 4 GB with a PAE-enabled
kernel (usually installed by default). NSM does make use of it; I have
seen it working fine.
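
If anyone wants to verify this on their own server, the standard checks are below (note that the -PAE kernel flavor is the RHEL 5 naming; RHEL 4 shipped a hugemem variant for large memory instead):

uname -r                                          # PAE/hugemem kernels show it in the release string
grep -q pae /proc/cpuinfo && echo "CPU has PAE"   # hardware support flag
grep MemTotal /proc/meminfo                       # RAM the kernel can actually address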

 

Best Regards,
PK

Juniper Ambassador, Juniper Networks Certified Instructor,
JNCIE-SEC #98, JNCIE-ENT #393, JNCIE-SP #2253
Twitter: @JuniperTrain
GitHub: https://github.com/pklimai
[Juniper Authorized Education & Support in Russia]
Management

Re: NSM sees only part of the disk - why?

‎03-08-2012 01:25 PM

Hmmm, then something is up with my NSM install. I'm waiting to hear whether the 2012 release is close before I build a new NSM server. If it isn't, I'll let you know about the 1.4 TB limit. I have seen 1.4 TB and 1.8 TB limits in the past, though (not on NSM but elsewhere).