SRX Services Gateway

SRX1400 sh chassis fpc high cpu usage

‎04-21-2017 06:24 PM

Hello,

 

I've run into this situation and need some help debugging it. First of all, there is not really much behind this SRX1400, with a peak of 500 Mbps of traffic on the upstream port. I've turned off all possible logs, deleted the counters from policies, turned off screens, and do not run any VPN/IDP/DPI services. Still, I constantly get the following result from "show chassis fpc": the CPU total jumps between roughly 60 and 80 all the time.

 

# run show chassis fpc
node0:
--------------------------------------------------------------------------
                     Temp  CPU Utilization (%)   Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      DRAM (MB) Heap     Buffer
  0  Online            41     63          0       1024        3         26
  1  Online            34     63          0       1024        3         26
  2  Empty
  3  Online            49     63          0       1024        3         26

node1:
--------------------------------------------------------------------------
                     Temp  CPU Utilization (%)   Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      DRAM (MB) Heap     Buffer
  0  Online            39     77          0       1024        3         26
  1  Online            33     77          0       1024        3         26
  2  Empty
  3  Online            47     77          0       1024        3         26

If I look at SPU load, it seems fine:

# run show security monitoring fpc 1
node0:
--------------------------------------------------------------------------
FPC 1
  PIC 0
    CPU utilization          :   14 %
    Memory utilization       :   75 %
    Current flow session     : 13483
    Current flow session IPv4: 13483
    Current flow session IPv6:    0
    Max flow session         : 1048576
    Current CP session       : 14061
    Current CP session   IPv4: 14061
    Current CP session   IPv6:    0
    Max CP session           : 1048576
Total Session Creation Per Second (for last 96 seconds on average):  369
IPv4  Session Creation Per Second (for last 96 seconds on average):  369
IPv6  Session Creation Per Second (for last 96 seconds on average):    0

node1:
--------------------------------------------------------------------------
FPC 1
  PIC 0
    CPU utilization          :    0 %
    Memory utilization       :   74 %
    Current flow session     : 14200
    Current flow session IPv4: 14200
    Current flow session IPv6:    0
    Max flow session         : 1048576
    Current CP session       : 14197
    Current CP session   IPv4: 14197
    Current CP session   IPv6:    0
    Max CP session           : 1048576
Total Session Creation Per Second (for last 96 seconds on average):    0
IPv4  Session Creation Per Second (for last 96 seconds on average):    0
IPv6  Session Creation Per Second (for last 96 seconds on average):    0

So I went to cpp0 and looked there:

CPP platform (800Mhz MPC 8544 processor, 1024MB memory, 512KB flash)

CPP0( vty)# show threads
PID PR State     Name                   Stack Use  Time (Last/Max/Total) cpu
--- -- -------   ---------------------  ---------  ---------------------
  1 H  asleep    Maintenance            344/2048   0/5/5 ms  0%
  2 L  running   Idle                   264/2056   0/5/3489199345 ms 29%
  3 H  asleep    Timer Services         304/2056   0/5/54275 ms  0%
  5 L  asleep    Ukern Syslog           304/4096   0/0/0 ms  0%
  6 L  asleep    Sheaf Background       400/2056   0/5/5095 ms  0%
  7 M  asleep    XFP                    408/4104   0/5/29495 ms  0%
  8 M  asleep    SFP                   2096/4096   0/220/73406895 ms  0%
  9 M  asleep    Ethernet               328/4096   0/5/17200 ms  0%
 10 M  asleep    GR253                  384/4096   0/5/15925 ms  0%
 11 M  asleep    mac_db                 240/8192   0/0/0 ms  0%
 12 M  asleep    RSMON syslog thread    880/4104   0/45/330 ms  0%
 13 L  asleep    Firmware Upgrade       264/4096   0/0/0 ms  0%
 14 L  asleep    Syslog                 496/4096   0/5/515 ms  0%
 15 M  asleep    Periodic              1288/8200   0/5/238600 ms  0%
 16 M  asleep    ezchip                 936/16392  0/5/29683580 ms  0%
 17 L  asleep    E2E Packet Handler Thread  1384/8192   0/0/0 ms  0%
 18 L  asleep    CPP FCM                272/2048   0/0/0 ms  0%
 19 M  asleep    CPP SPU_IPC            576/2048   0/0/0 ms  0%
 20 M  asleep    CPP SPU_IPC           1336/2048   0/5/713565 ms  0%
 21 M  asleep    HSL2                   384/4104   0/5/57760 ms  0%
 22 H  asleep    TNP Hello              472/2048   0/5/92195 ms  0%
 23 M  asleep    UDP Input              320/2056   0/5/1765 ms  0%
 24 H  asleep    TTP Receive            344/4096   0/0/0 ms  0%
 25 H  asleep    TTP Transmit           336/4096   0/0/0 ms  0%
 26 H  asleep    RDP Timers             408/2056   0/5/53315 ms  0%
 27 H  asleep    RDP Input              992/2048   0/30/3527555 ms  0%
 28 H  asleep    USP IPC Server         720/8192   0/0/0 ms  0%
 29 M  asleep    lsys event loop        216/4104   0/0/0 ms  0%
 30 M  asleep    PIC Periodic          2128/4104   0/25/202947540 ms  1%
 31 M  asleep    PIC                    200/4096   0/0/0 ms  0%
 32 M  asleep    CPP CM                6616/16392  0/845/2258665 ms  0%
 33 L  asleep    ICMP6 Input            464/4104   0/0/0 ms  0%
 34 L  asleep    IP6 Option Input      1032/4096   0/0/0 ms  0%
 35 L  asleep    ICMP Input            1032/4096   0/0/0 ms  0%
 36 L  asleep    IP Option Input       1016/4104   0/0/0 ms  0%
 37 M  asleep    IGMP Input            1016/4096   0/0/0 ms  0%
 38 L  asleep    NH Probe Service       264/4104   0/5/240 ms  0%
 41 H  asleep    SNTP Daemon           1120/8192   0/5/1485 ms  0%
 42 M  asleep    PFE Manager           5744/8200   0/25/6074560 ms  0%
 43 L  asleep    Console               2088/16392  0/0/0 ms  0%
 44 L  asleep    Console               2296/16384  0/5/15 ms  0%
 45 M  asleep    L2HA TOGGLE            352/4104   0/5/182890 ms  0%
 46 M  asleep    USP Trace             1056/16392  0/5/30715 ms  0%
 47 M  asleep    PFE Statistics         720/16384  0/5/902525 ms  0%
 48 L  asleep    Recovery Socket        680/2056   0/0/0 ms  0%
 49 M  asleep    bcmDPC                 344/16392  0/0/0 ms  0%
 50 M  asleep    bcmCNTR.0             1352/16384  680/685/8109704880 ms 67%
 51 M  asleep    bcmTX                  336/16384  0/0/0 ms  0%
 52 M  asleep    bcmXGS3AsyncTX         376/16384  0/0/0 ms  0%
 53 M  asleep    bcmLINK.0             2392/16392  5/120/36738080 ms  0%
107 L  asleep    Cattle-Prod Daemon    1744/16384  0/0/0 ms  0%
112 L  ready     Virtual Console       4064/16384  0/0/0 ms  0%
113 L  asleep    Virtual Console        592/16392  0/0/0 ms  0%

and there is a process named bcmCNTR.0 which runs almost constantly at 60-80%. OK, so I've found the root of the problem, but what can I do to calm it down? All I know about this process is that it runs throughout the system and collects counters and such things, but I don't have any counters enabled.

 

Any ideas? I've got nothing to look at anymore.

JunOS 12.3X48-D40.5


Re: SRX1400 sh chassis fpc high cpu usage

‎05-04-2017 03:33 AM

How did you get the cpp0 process statistics?


Re: SRX1400 sh chassis fpc high cpu usage

‎05-04-2017 07:10 AM

Hi Romeo,

 

Thank you for posting your query here.

 

First of all, I would like to point out that the command "show chassis fpc" does not give you the CPU utilization caused by traffic on the SRX. The correct command is the second one you used, i.e. "show security monitoring fpc <fpc_no>", or alternatively "show security monitoring performance spu".

 

The command "show chassis fpc" shows the CPU usage on the CPP; the CPP is the control CPU that monitors the status of the SPC/IOC/NP.

The commands "show security monitoring performance spu" and "show security monitoring fpc <fpc_no>" show the CPU usage on the SPUs; the SPU is the component that actually processes network traffic.

 

Now, the process bcmCNTR.0, because of which "show chassis fpc" shows high CPU, is actually a thread that scans the counters and state of the SPC/IOC/NP. Hence, even when traffic is low, it can show high utilization, because it is continuously collecting counter and state information for the various cards and chips.

 

To summarize: the CPU utilization you are seeing in the output of "show chassis fpc" can be considered expected behavior. If you want to know the CPU utilization on the SRX due to traffic, please use the other two commands as suggested above.
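For reference, the traffic-relevant checks are run from operational mode, and a vty session to cpp0 (where the thread list above came from) is typically opened from the root shell. A minimal sketch, assuming a high-end SRX with shell access; the exact vty entry method can vary by platform and release:

```
> show security monitoring performance spu     # per-SPU CPU load (traffic-driven)
> show security monitoring fpc 1               # per-FPC/PIC sessions and SPU CPU
> start shell                                  # drop to the root shell
% vty cpp0                                     # attach to the CPP microkernel
CPP0(vty)# show threads                        # per-thread CPU, e.g. bcmCNTR.0
```

Note that vty access to the forwarding plane is unsupported for day-to-day use and is generally best reserved for troubleshooting under JTAC guidance.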

 

Hope this helps. :)

 

Thanks,
Pulkit Bhandari
Please mark my response as Solution Accepted if it helps; Kudos are appreciated too. :)