SRX Services Gateway

cpu load to be high

02-20-2020 07:39 PM

Hi


I have a problem with a Juniper SRX300 running Junos 18.2R3-S2.9. After I commit a configuration, the CPU load goes high and SSH access to the device becomes slow. What is the cause?

 

Thanks,

7 REPLIES

SRX Services Gateway

Re: cpu load to be high

02-20-2020 07:46 PM
Can you share the “show chassis routing-engine” and “show system processes extensive | no-more” outputs?
Thanks,
Suraj
Please Mark My Solution Accepted if it Helped, Kudos are Appreciated too
SRX Services Gateway

Re: cpu load to be high

02-20-2020 08:23 PM

admin@SRX> show chassis routing-engine
Routing Engine status:
Temperature 42 degrees C / 107 degrees F
CPU temperature 58 degrees C / 136 degrees F
Total memory 4096 MB Max 1311 MB used ( 32 percent)
Control plane memory 2400 MB Max 864 MB used ( 36 percent)
Data plane memory 1696 MB Max 441 MB used ( 26 percent)
5 sec CPU utilization:
User 56 percent
Background 0 percent
Kernel 20 percent
Interrupt 0 percent
Idle 24 percent
Model RE-SRX300
Serial ID CV4119AF0527
Start time 2020-02-14 13:41:41 WIT
Uptime 6 days, 21 hours, 39 minutes, 10 seconds
Last reboot reason 0x200:normal shutdown
Load averages: 1 minute 5 minute 15 minute
3.45 1.43 0.82

admin@HRI-SRX300> show system processes extensive | no-more
last pid: 29991; load averages: 2.47, 1.33, 0.80 up 6+21:40:02 11:22:08
194 processes: 18 running, 162 sleeping, 1 zombie, 13 waiting

Mem: 613M Active, 321M Inact, 1807M Wired, 742M Cache, 112M Buf, 481M Free
Swap: 792M Total, 792M Free


PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
2307 root 123 0 1803M 1140M CPU1 1 195.1H 93.95% flowd_octeon_hm
21 root 155 52 0K 16K RUN 0 121.0H 71.39% idle: cpu0
2307 root 27 0 1803M 1140M RUN 0 195.1H 9.77% flowd_octeon_hm
2307 root 20 0 1803M 1140M select 0 195.1H 0.00% flowd_octeon_hm
2307 root 20 0 1803M 1140M ucondt 0 195.1H 0.00% flowd_octeon_hm
23 root -36 -139 0K 16K RUN 0 67:45 0.00% swi7: clock
2068 root 20 0 132M 35920K RUN 0 40:34 0.00% authd
2051 root 20 0 47940K 19808K select 0 33:42 0.00% pfed
2048 root 4 0 144M 74792K kqread 0 29:26 0.00% rpd
2048 root 4 0 144M 74792K kqread 0 29:26 0.00% rpd
2048 root 4 0 144M 74792K kqread 0 29:26 0.00% rpd
2102 root 20 0 38428K 13300K select 0 28:02 0.00% license-check
2047 root 20 0 61128K 22984K select 0 18:21 0.00% mib2d
2089 root 39 0 14564K 2764K select 0 16:37 0.00% repd
2089 root 20 0 14564K 2764K select 0 16:37 0.00% repd
2049 root 20 0 61080K 27580K select 0 15:57 0.00% l2ald
2046 root 20 0 39648K 16868K select 0 14:12 0.00% snmpd
22 root -56 -159 0K 16K WAIT 0 12:02 0.00% swi2: netisr 0
2067 root 20 0 112M 20532K RUN 0 11:52 0.00% jdhcpd
8 root -16 0 0K 16K rtfifo 0 8:41 0.00% rtfifo_kern_recv
2 root -16 0 0K 16K jfe_jo 0 7:41 0.00% jfe_job_0_0
2074 root 20 0 28140K 5688K select 0 6:06 0.00% shm-rtsdbd
2060 root 20 0 28028K 6772K select 0 5:41 0.00% dood
1660 root 21 0 35444K 10876K select 0 4:22 0.00% eventd
2078 root 20 0 50460K 19552K select 0 4:20 0.00% nsd
2090 root 20 0 25624K 7428K select 0 4:11 0.00% ipfd
2090 root 8 0 25624K 7428K nanslp 0 4:11 0.00% ipfd
2090 root 8 0 25624K 7428K nanslp 0 4:11 0.00% ipfd
2090 root 8 0 25624K 7428K nanslp 0 4:11 0.00% ipfd
2090 root 8 0 25624K 7428K nanslp 0 4:11 0.00% ipfd
20 root 155 52 0K 16K RUN 1 3:54 0.00% idle: cpu1
2095 root 20 0 46132K 24352K select 0 3:16 0.00% utmd
2043 root 20 0 30648K 11144K select 0 3:07 0.00% alarmd
58 root -16 0 0K 16K psleep 0 3:06 0.00% vmkmemdaemon
2094 root 20 0 30156K 9700K select 0 2:56 0.00% rtlogd
2054 root 20 0 31068K 11348K select 0 2:40 0.00% ppmd
2085 root 123 0 61740K 35524K select 0 2:37 0.00% appidd
2085 root 20 0 61740K 35524K select 0 2:37 0.00% appidd
2039 root 20 0 3556K 1472K select 0 2:30 0.00% bslockd
2042 root 20 0 126M 35920K select 0 2:30 0.00% chassisd
2091 root 20 0 47736K 13188K select 0 2:22 0.00% aamwd
2091 root 8 0 47736K 13188K nanslp 0 2:22 0.00% aamwd
2091 root 8 0 47736K 13188K nanslp 0 2:22 0.00% aamwd
2091 root 8 0 47736K 13188K nanslp 0 2:22 0.00% aamwd
2091 root 8 0 47736K 13188K nanslp 0 2:22 0.00% aamwd
2091 root 8 0 47736K 13188K nanslp 0 2:22 0.00% aamwd
2091 root 8 0 47736K 13188K nanslp 0 2:22 0.00% aamwd
2063 root 4 0 56232K 25472K kqread 0 2:21 0.00% l2cpd
36 root -8 0 0K 16K xfer s 0 2:20 0.00% udev-sched-0.1
2056 root 20 0 19916K 5268K select 0 2:00 0.00% irsd
41 root 155 52 0K 16K pgzero 0 2:00 0.00% pagezero
43 root 20 0 0K 16K vnlrum 0 1:41 0.00% vnlru_mem
2053 root 20 0 52596K 23556K select 0 1:39 0.00% kmd
2093 root 20 0 34428K 13092K select 0 1:30 0.00% fwauthd
2093 root 8 0 34428K 13092K nanslp 0 1:30 0.00% fwauthd
2093 root 8 0 34428K 13092K nanslp 0 1:30 0.00% fwauthd
2093 root 8 0 34428K 13092K nanslp 0 1:30 0.00% fwauthd
6 root -8 0 0K 16K - 0 1:25 0.00% g_up
2099 root 8 0 30008K 4052K nanslp 0 1:18 0.00% wmic
45 root -16 0 0K 16K syncer 0 1:17 0.00% syncer
2073 root 20 0 38192K 10624K select 0 1:12 0.00% smid
25 root -16 0 0K 16K - 0 1:05 0.00% rand_harvestq
7 root -8 0 0K 16K - 0 1:04 0.00% g_down
2082 root 20 0 28208K 9688K select 0 0:58 0.00% pkid
2340 root -8 0 0K 16K select 0 0:55 0.00% ppt_0a_00000001
5 root -8 0 0K 16K - 0 0:55 0.00% g_event
2041 root 20 0 65196K 21424K select 0 0:50 0.00% dcd
2107 root 4 0 52400K 23004K kqread 0 0:43 0.00% dot1xd
2096 root 20 0 12812K 9160K select 0 0:37 0.00% ntpd
2066 root 20 0 28280K 9600K select 0 0:35 0.00% lfmd
2057 root 20 0 27488K 9204K select 0 0:32 0.00% bfdd
2087 root 20 0 55196K 14368K select 0 0:29 0.00% idpd
2038 root 20 0 3092K 1040K select 0 0:27 0.00% watchdog
2064 root 20 0 27176K 9084K select 0 0:27 0.00% oamd
88 root -8 0 0K 16K mdwait 0 0:25 0.00% md1
26 root -44 -147 0K 16K WAIT 0 0:25 0.00% swi5: cambio
20623 nobody 4 0 12788K 1748K kqread 0 0:22 0.00% webapid
2052 root 123 0 43972K 15176K select 0 0:21 0.00% cosd
2052 root 20 0 43972K 15176K select 0 0:21 0.00% cosd
2077 root 20 0 31232K 12140K select 0 0:19 0.00% jsrpd
2044 root 20 0 27384K 7716K select 0 0:18 0.00% craftd
2050 root 20 0 10360K 2816K select 0 0:14 0.00% inetd
2055 root 20 0 48928K 19028K select 0 0:10 0.00% dfwd
2045 root 20 0 75064K 29272K select 0 0:09 0.00% mgd
4 root -16 0 0K 16K jfe_jo 0 0:09 0.00% jfe_job_1_1
46 root -16 0 0K 16K sdflus 0 0:07 0.00% softdepflush
42 root -16 0 0K 16K psleep 0 0:07 0.00% bufdaemon
1 root 8 0 2304K 1400K wait 0 0:06 0.00% init
2558 root 20 0 82452K 63588K select 0 0:06 0.00% cli
44 root -4 0 0K 16K vlruwt 0 0:05 0.00% vnlru
51 root -16 0 0K 16K psleep 0 0:04 0.00% vmuncachedaemon
2069 root 20 0 31420K 6884K select 0 0:04 0.00% mplsoamd
2104 root 20 0 22280K 6472K select 0 0:04 0.00% mgd-api
2098 root 20 0 19748K 4332K select 0 0:03 0.00% smtpd
6378 root 8 0 3756K 1468K nanslp 0 0:03 0.00% cron
2092 root 123 0 23256K 5892K select 0 0:03 0.00% nstraced
20242 root 20 0 26496K 9872K select 0 0:02 0.00% httpd-gk
28 root 8 0 0K 16K - 0 0:02 0.00% thread taskq
2086 root 123 0 34136K 8748K select 0 0:02 0.00% appsecured
2097 root 43 0 31228K 7420K ucond 0 0:02 0.00% syshmd
2097 root 43 0 31228K 7420K ucond 0 0:02 0.00% syshmd
2097 root 20 0 31228K 7420K select 0 0:02 0.00% syshmd
26457 admin 20 0 82436K 63068K select 0 0:01 0.00% cli
29147 admin 20 0 82436K 63068K select 0 0:01 0.00% cli
2079 root 20 0 24416K 6896K select 0 0:01 0.00% lsysd
2110 root 8 0 16404K 3520K wait 0 0:01 0.00% login
1626 root 20 0 9460K 2140K select 0 0:01 0.00% usbd
40 root -16 0 0K 16K psleep 0 0:01 0.00% pagedaemon
2080 root 20 0 28096K 7556K select 0 0:01 0.00% jsqlsyncd
352 root -8 0 0K 16K mdwait 0 0:01 0.00% md2
2072 root 59 0 32468K 9244K select 0 0:01 0.00% wwand
2072 root 20 0 32468K 9244K select 0 0:01 0.00% wwand
26458 root 20 0 76232K 20700K select 0 0:01 0.00% mgd
29151 root 20 0 76232K 20760K select 0 0:01 0.00% mgd
2559 root 123 0 77928K 20064K select 0 0:01 0.00% mgd
29921 nobody 29 0 14128K 5476K select 0 0:01 0.00% httpd
2105 root 123 0 26168K 6400K select 0 0:01 0.00% xmlproxyd
2106 root 123 0 22104K 4840K select 0 0:01 0.00% sdxd
2070 root 123 0 28320K 6160K select 0 0:00 0.00% bdbrepd
2088 root 123 0 20240K 5112K select 0 0:00 0.00% datapath-traced
2061 root 123 0 20340K 5388K select 0 0:00 0.00% pppd
2071 root 123 0 22392K 5668K select 0 0:00 0.00% sendd
29933 root 75 0 37020K 8868K ucond 0 0:00 0.00% llmd
29933 root 63 0 37020K 8868K ucond 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K ucond 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 60 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 20 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 20 0 37020K 8868K select 0 0:00 0.00% llmd
29933 root 8 0 37020K 8868K nanslp 0 0:00 0.00% llmd
53 root 8 0 0K 16K ifscli 0 0:00 0.00% ifsclientclosed
26446 root 31 0 19724K 5992K select 0 0:00 0.00% sshd
29136 root 30 0 19724K 5992K select 0 0:00 0.00% sshd
2100 root 20 0 12536K 4028K pause 0 0:00 0.00% webapid
2538 root 20 0 6164K 3412K pause 0 0:00 0.00% csh
29478 root 26 0 9100K 4320K select 0 0:00 0.00% sshd
65 root -8 0 0K 16K mdwait 0 0:00 0.00% md0
1881 root -8 0 0K 16K mdwait 0 0:00 0.00% md3
2040 root 40 0 9756K 2180K select 0 0:00 0.00% tnetd
33 root -8 0 0K 16K usbevt 0 0:00 0.00% usb0
2081 root 48 0 10244K 2552K select 0 0:00 0.00% inetd
29479 sshd 30 0 9116K 2168K select 0 0:00 0.00% sshd
2076 root 46 0 7616K 2276K select 0 0:00 0.00% tcsd
35 root -8 0 0K 16K usbevt 0 0:00 0.00% usb1
29145 admin 20 0 19736K 2424K select 0 0:00 0.00% sshd
26455 admin 20 0 19724K 2408K select 0 0:00 0.00% sshd
29991 root 20 0 25192K 2300K CPU0 0 0:00 0.00% top
32 root -64 -167 0K 16K WAIT 0 0:00 0.00% swi0: uart
0 root -8 0 0K 0K WAIT 0 0:00 0.00% swapper
57 root -8 0 0K 16K select 0 0:00 0.00% if_pfe_listen
38 root -52 -155 0K 16K WAIT 0 0:00 0.00% swi3: ipopt ip6opt
9 root 8 0 0K 16K - 0 0:00 0.00% kqueue taskq
59 root -8 0 0K 16K select 0 0:00 0.00% if_pic_listen0
50 root 4 0 0K 16K dump_r 0 0:00 0.00% kern_dump_proc
48 root 12 0 0K 16K sleep 0 0:00 0.00% netdaemon
60 root -8 0 0K 16K - 0 0:00 0.00% nfsiod 0
1759 root -8 0 0K 16K crypto 0 0:00 0.00% crypto
55 root -20 0 0K 16K jsr_js 0 0:00 0.00% jsr_jsm
54 root 4 0 0K 16K kkcm_n 0 0:00 0.00% jsr_kkcm
52 root 4 0 0K 16K purge_ 0 0:00 0.00% kern_pir_proc
62 root -8 0 0K 16K - 0 0:00 0.00% nfsiod 2
63 root -8 0 0K 16K - 0 0:00 0.00% nfsiod 3
61 root -8 0 0K 16K - 0 0:00 0.00% nfsiod 1
1760 root -8 0 0K 16K crypto 0 0:00 0.00% crypto returns
31 root 8 0 0K 16K - 0 0:00 0.00% mastership taskq
34 root 8 0 0K 16K usbtsk 0 0:00 0.00% usbtask
3 root -16 0 0K 16K jfe_jo 0 0:00 0.00% jfe_job_1_0
30 root -28 -131 0K 16K WAIT 0 0:00 0.00% swi9: task queue
29 root -28 -131 0K 16K WAIT 0 0:00 0.00% swi9: Giant taskq
27 root -32 -135 0K 16K WAIT 0 0:00 0.00% swi8: +
24 root -40 -143 0K 16K WAIT 0 0:00 0.00% swi6: vm
37 root -48 -151 0K 16K WAIT 0 0:00 0.00% swi4: ip_mismatch+
49 root -56 -159 0K 16K WAIT 0 0:00 0.00% swi2: ndpisr-I
47 root -56 -159 0K 16K WAIT 0 0:00 0.00% swi2: ndpisr-E
39 root -60 -163 0K 16K WAIT 0 0:00 0.00% swi1: ipfwd
17 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu4
16 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu5
15 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu6
14 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu7
18 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu3
12 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu9
11 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu10
10 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu11
19 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu2
13 root 155 52 0K 16K CPU0 0 0:00 0.00% idle: cpu8


admin@srx>

SRX Services Gateway

Re: cpu load to be high

[ Edited ]
02-20-2020 08:33 PM

Hi tech_mvt,

 

What was the configuration you committed before the high CPU started?

Please access the shell and run the "top -H" command to see if we can find an offending process:

 

> start shell
% top -H
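If you also want kernel and system threads included in that listing, the standard FreeBSD top "-S" flag can be added (this is just a generic top option, not something specific to this case):

% top -H -S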

 

It would also be good to check the "messages" log file (if configured) for any suspicious entries that give us a hint about what could be causing the high CPU:

 

> show log messages
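If the file is large, the standard CLI pipes can narrow it down, for example to the most recent entries or to commit events (UI_COMMIT is the usual Junos tag for those):

> show log messages | last 200
> show log messages | match UI_COMMIT | no-more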

 

If you want to upload the file (messages), I will be glad to check it.

 

Pura Vida from Costa Rica - Mark as Resolved if it applies.
Kudos are appreciated too!
SRX Services Gateway

Re: cpu load to be high

02-20-2020 08:52 PM

Hi epaniagua

 

I've attached the log. When I commit, the CPU goes up to about 78-99%; after a few minutes it returns to normal. Any idea what the cause could be?

 

Thank you

Attachments

SRX Services Gateway

Re: cpu load to be high

02-20-2020 09:06 PM

Thanks, I thought the high CPU started after committing a specific configuration. Now I understand that the high CPU occurs whenever you perform a commit, no matter what you are committing.

 

Please try committing any configuration (maybe add a description to an interface) but use the following command:

 

# commit | display detail
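For example, a harmless change such as an interface description is enough to exercise the commit process (ge-0/0/0 below is only an illustration; use any interface you have):

# set interfaces ge-0/0/0 description "cpu-test"
# commit | display detail
# delete interfaces ge-0/0/0 description
# commit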

 

 

Pura Vida from Costa Rica - Mark as Resolved if it applies.
Kudos are appreciated too!
SRX Services Gateway

Re: cpu load to be high

02-20-2020 09:32 PM

1. I checked the file, but the logs are from Feb 5; is that correct? Check the current time on the SRX with:

 

> show system uptime
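If the device clock itself turns out to be wrong, you can check NTP and, if needed, point the box at a time server (192.0.2.10 is just a placeholder address):

> show ntp associations
# set system ntp server 192.0.2.10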

 

2. I saw some failed SSH attempts that could cause high CPU if they are constant; please run the command below (a small hardening sketch follows it):

 

> show log messages | match ssh | no-more
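If constant failed logins do show up, tightening SSH a bit can take that load off the Routing Engine; the values below are only illustrative, not something taken from this thread:

# set system services ssh connection-limit 5
# set system services ssh rate-limit 5
# set system login retry-options tries-before-disconnect 3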

 

3. There are a lot of logs related to Web filtering, which indicates that you are sending data-plane logs to the Routing Engine; this can also affect CPU utilization at the control-plane level. Please see my comment about logging in the following post and, if possible, configure security logging in stream mode (a minimal sketch follows the link below).

 

https://forums.juniper.net/t5/SRX-Services-Gateway/syslogs-not-being-saved-on-srx340-local-storage/m...
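As a minimal sketch of stream mode on a branch SRX (the stream name and addresses are placeholders), the idea is roughly:

# set security log mode stream
# set security log source-address 192.0.2.1
# set security log stream SYSLOG-SRV format sd-syslog
# set security log stream SYSLOG-SRV host 192.0.2.100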

 

4. Finally, open a second session to the SRX and, while you perform a commit in the first session (commit | display detail), check from the second session whether any process spikes during the commit with the following commands:

 

> show system processes extensive | except 0.0

% top -H (shell command)

 

 

Pura Vida from Costa Rica - Mark as Resolved if it applies.
Kudos are appreciated too!
SRX Services Gateway

Re: cpu load to be high

03-31-2020 01:42 PM

Hi tech,

 

Any updates on this?

 

Pura Vida from Costa Rica - Mark as Resolved if it applies.
Kudos are appreciated too!