Junos
Visitor
ebalmon
Posts: 4
Registered: ‎12-04-2009

Log Juniper M20

Hi all,

does anyone know what could be causing this message to appear in the log of our Juniper M20?

Apr  6 06:00:15 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 5 sec scheduler slip, user: 4 sec 940542 usec, system: 0 sec, 14925 usec
Apr 6 05:58:07 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 4 sec scheduler slip, user: 4 sec 75182 usec, system: 0 sec, 0 usec
Apr 6 05:57:52 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 4 sec scheduler slip, user: 3 sec 976823 usec, system: 0 sec, 0 usec
Apr 6 05:57:05 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 4 sec scheduler slip, user: 4 sec 7157 usec, system: 0 sec, 0 usec
Apr 6 05:56:09 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 4 sec scheduler slip, user: 3 sec 991475 usec, system: 0 sec, 0 usec
Apr 6 05:45:33 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 5 sec scheduler slip, user: 4 sec 972022 usec, system: 0 sec, 7041 usec
Apr 6 05:45:23 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 4 sec scheduler slip, user: 4 sec 112812 usec, system: 0 sec, 0 usec
Apr 6 05:44:42 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 4 sec scheduler slip, user: 4 sec 332330 usec, system: 0

Mathew

Recognized Expert
Loup2
Posts: 301
Registered: ‎04-22-2008

Re: Log Juniper M20

I think this usually points to high CPU load on the Routing Engine: the RPD_SCHED_SLIP message means rpd was kept off the CPU for longer than its scheduler expected.

Check the CPU load:

 

show chassis routing-engine

show system processes summary

or

show system processes extensive
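If the CPU looks idle when you run a one-off check, it can also help to see whether the slips cluster around some periodic event (an SNMP walk, traffic sampling, a cron job). Assuming your syslog goes to the default messages file, you can pull out the slip history and watch rpd specifically:

show log messages | match RPD_SCHED_SLIP

show system processes extensive | match rpd

Comparing the slip timestamps against other log entries from the same minutes often shows what rpd was busy with.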

 

HTH

 

Visitor
ebalmon
Posts: 4
Registered: ‎12-04-2009

Re: Log Juniper M20

This is the output:

 

abc123@xx01.bad2> show chassis routing-engine
Routing Engine status:
  Slot 0:
    Current state                  Master
    Election priority              Master (default)
    Temperature                 23 degrees C / 73 degrees F
    CPU temperature             21 degrees C / 69 degrees F
    DRAM                      2048 MB
    Memory utilization          47 percent
    CPU utilization:
      User                       3 percent
      Background                 0 percent
      Kernel                     1 percent
      Interrupt                  0 percent
      Idle                      96 percent
    Model                          RE-3.0
    Serial ID                      P10865701888
    Start time                     2009-10-28 15:49:40 CET
    Uptime                        162 days, 19 hours, 28 minutes, 41 seconds
    Load averages:                 1 minute   5 minute  15 minute
                                       0.00       0.01       0.00

abc123@xx01.bad2> show system processes summary
last pid:  2053;  load averages:  0.13,  0.04,  0.01  up 162+19:30:35    12:28:44
65 processes:  2 running, 63 sleeping

Mem: 738M Active, 220M Inact, 177M Wired, 404K Cache, 143M Buf, 873M Free
Swap: 2048M Total, 2048M Free


  PID USERNAME PRI NICE  SIZE    RES STATE    TIME   WCPU    CPU COMMAND
 2625 root       2   0   429M   426M kqread  44.0H  0.68%  0.68% rpd
 2609 root      58  15 42556K 41740K RUN    873:03  0.00%  0.00% sampled
 2053 root      34   0 21728K   844K RUN      0:00  0.00%  0.00% top

 

 

abc123@xx01.bad2> show system processes extensive
last pid:  2054;  load averages:  0.03,  0.03,  0.00  up 162+19:32:29    12:30:38
65 processes:  1 running, 64 sleeping

Mem: 738M Active, 220M Inact, 177M Wired, 404K Cache, 143M Buf, 873M Free
Swap: 2048M Total, 2048M Free

 

  PID USERNAME PRI NICE  SIZE    RES STATE    TIME   WCPU    CPU COMMAND
 2606 root       2   0   394M   392M kqread  53.7H  1.22%  1.22% rpd
 2625 root       2   0   429M   426M kqread  44.0H  0.34%  0.34% rpd
 2609 root       2  15 42556K 41740K select 873:03  0.00%  0.00% sampled
 2585 root       2   0  9908K  2960K select 239:20  0.00%  0.00% chassisd
 2615 root       2   0  2020K  1408K select  57:46  0.00%  0.00% ppmd
 2629 root       2   0  4472K  2660K select  42:25  0.00%  0.00% snmpd
 2605 root       2   0  3620K  2244K select  28:02  0.00%  0.00% mib2d
 2630 root       2   0  3808K  2960K select  26:23  0.00%  0.00% dcd
 2626 root       2   0  1388K   764K select  16:42  0.00%  0.00% irsd
 2586 root       2   0  1824K  1080K select  15:58  0.00%  0.00% alarmd
 2712 root       2   0     0K     0K peer_s  14:13  0.00%  0.00% peer proxy
 2495 root       2   0  1300K   820K select   9:21  0.00%  0.00% syslogd
    7 root      18   0     0K     0K syncer   7:06  0.00%  0.00% syncer
77737 root       2   0  1256K   728K select   2:50  0.00%  0.00% ntpd
 2621 root       2   0  1928K  1168K select   2:35  0.00%  0.00% bfdd
 2628 root       2   0  2532K  1272K select   1:46  0.00%  0.00% pfed
 2716 root       2   0     0K     0K peer_s   1:41  0.00%  0.00% peer proxy
 2611 root       2   0  2792K  1428K select   1:39  0.00%  0.00% rmopd
 2618 root       2   0  1928K  1140K select   1:06  0.00%  0.00% fsad
    6 root      -2   0     0K     0K vlruwt   0:55  0.00%  0.00% vnlru
    5 root     -18   0     0K     0K psleep   0:52  0.00%  0.00% bufdaemon
   11 root     -18   0     0K     0K psleep   0:49  0.00%  0.00% vmuncachedaemo
 2627 root       2   0  1944K  1160K select   0:41  0.00%  0.00% dfwd
 2590 root       2   0  1284K   808K select   0:32  0.00%  0.00% inetd
 2591 root       2   0  1004K   392K sbwait   0:31  0.00%  0.00% tnp.sntpd
 2582 root       2   0   996K   360K select   0:30  0.00%  0.00% watchdog
 2552 root      10   0  1132K   632K nanslp   0:23  0.00%  0.00% cron
 2631 root       2   0  4580K  2304K select   0:21  0.00%  0.00% kmd
 2588 root       2   0 13012K  6956K select   0:20  0.00%  0.00% mgd
 2738 root       2   0     0K     0K peer_s   0:13  0.00%  0.00% peer proxy
    3 root     -18   0     0K     0K psleep   0:12  0.00%  0.00% pagedaemon
 2599 root      10   0  1072K   504K nanslp   0:08  0.00%  0.00% eccd
    1 root      10   0   916K   576K wait     0:04  0.00%  0.00% init
 2595 root      10   0  1040K   460K nanslp   0:03  0.00%  0.00% smartd
  101 root      10   0  2051M 35448K mfsidl   0:01  0.00%  0.00% newfs
 2619 root       2   0  2044K  1276K select   0:00  0.00%  0.00% spd
 2607 root       2 -15  2556K  1184K select   0:00  0.00%  0.00% apsd
 2608 root       2   0  2600K  1292K select   0:00  0.00%  0.00% vrrpd
 2612 root       2   0  2856K  1572K select   0:00  0.00%  0.00% cosd
 2054 root      34   0 21728K   844K RUN      0:00  0.00%  0.00% top
 2613 root       2   0  1928K  1172K select   0:00  0.00%  0.00% nasd
 2045 root       2   0  5416K  1872K select   0:00  0.00%  0.00% sshd
 2622 root       2   0  1736K   960K select   0:00  0.00%  0.00% sdxd
 2047 ibx333     2   0  9256K  4364K select   0:00  0.00%  0.00% cli
 2623 root       2   0  1784K  1040K select   0:00  0.00%  0.00% rdd
 2610 root       2   0  2072K   960K select   0:00  0.00%  0.00% ilmid
 2614 root       2   0  1848K  1084K select   0:00  0.00%  0.00% fud
 2616 root       2   0  1988K  1132K select   0:00  0.00%  0.00% lmpd
 2620 root       2   0  1944K  1072K select   0:00  0.00%  0.00% pgmd
 2624 root       2   0  1672K   968K select   0:00  0.00%  0.00% lrmuxd
 2048 root       2   0 13068K  7768K select   0:00  0.00%  0.00% mgd
 2587 root       2   0  1928K   788K select   0:00  0.00%  0.00% craftd
 2617 root       2   0  1284K   636K select   0:00  0.00%  0.00% rtspd
    9 root       2   0     0K     0K pfeacc   0:00  0.00%  0.00% if_pfe_listen
 2583 root       2   0  1128K   620K select   0:00  0.00%  0.00% tnetd
 2600 root       3   0  1084K   524K ttyin    0:00  0.00%  0.00% getty
 2601 root       3   0  1080K   500K siodcd   0:00  0.00%  0.00% getty
 2479 root       2   0   448K   264K select   0:00  0.00%  0.00% pccardd
    0 root     -18   0     0K     0K sched    0:00  0.00%  0.00% swapper
   12 root       2   0     0K     0K picacc   0:00  0.00%  0.00% if_pic_listen
   10 root       2   0     0K     0K cb-pol   0:00  0.00%  0.00% cb_poll
   13 root       2   0     0K     0K scs_ho   0:00  0.00%  0.00% scs_housekeepi
    8 root      29   0     0K     0K sleep    0:00  0.00%  0.00% netdaemon
    4 root      18   0     0K     0K psleep   0:00  0.00%  0.00% vmdaemon
    2 root      10   0     0K     0K tqthr    0:00  0.00%  0.00% taskqueue

 

 

 

Does anyone see anything unusual?

 

Thanks for your reply

Regular Visitor
saliljoshi
Posts: 2
Registered: ‎05-01-2011

Re: Log Juniper M20

I recommend testing the hard disk with a SMART test. Run the commands below as root to start the test:

root@router> request chassis routing-engine hard-disk-test long disk /dev/ad1

or, from the shell:

root@router% smartd -oe /dev/ad1     <-- enables the test
root@router% smartd -oX /dev/ad1     <-- runs the extensive test

Show the results:

root@router> request chassis routing-engine hard-disk-test show-status disk /dev/ad1

or

root@router% smartd -oa /dev/ad1

The smartd commands can take over 20 minutes to finish, but they are not service affecting, so they can be run in production.
Copyright© 1999-2013 Juniper Networks, Inc. All rights reserved.