Switching


Ask questions and share experiences about EX and QFX portfolios and all switching solutions across your data center, campus, and branch locations.
  • 1.  Problem with Virtual-Chassis

    Posted 07-23-2015 05:29

    Hi all,

    I have a virtual-chassis with the following configuration:

    ***@SW5> show configuration virtual-chassis
    preprovisioned;
    no-split-detection;
    member 0 {
    role line-card;
    serial-number ***;
    }
    member 1 {
    role line-card;
    serial-number ***;
    }
    member 2 {
    role routing-engine;
    serial-number ***;
    }
    member 4 {
    role line-card;
    serial-number ***;
    }
    member 3 {
    role line-card;
    serial-number ***;
    }
    member 6 {
    role routing-engine;
    serial-number ***;
    }

    I noticed that all ports on FPC4 went down, but the FPC4 uptime didn't change. I found the following info in the logs:

    Jul 20 04:48:01.154  SW5.M9P.MSK fpc4 PFEMAN: Master socket closed
    Jul 20 04:48:02.163  SW5.M9P.MSK fpc4 Routing engine PFEMAN reconnection succeeded after 1 tries
    etc...

    I attached all logs.

    I suppose there was some software or hardware crash of the RE on FPC4.

    Has anybody seen the same issue?

    Attachment(s)

    logs.txt (118 KB)


  • 2.  RE: Problem with Virtual-Chassis
    Best Answer

    Posted 07-24-2015 08:25

    Hello,

     

    Which version is installed?

    Do you see any CRC errors on the VC/VCP links?
        show virtual-chassis vc-port statistics extensive | grep crc

    Has any core file been generated?
        show system core-dumps all-members

    Are there any specific errors in the FPC4 logs around the time of the issue?
        request session member 4
        show log messages

    If nothing is found, I would suggest opening a JTAC case.

     




  • 3.  RE: Problem with Virtual-Chassis

    Posted 07-25-2015 05:04

    Hello,

    Thanks for your support. It's really helpful.

    In the logs for FPC4 I found the following message:

    Jul 20 04:47:55.749  SW5.M9P.MSK /kernel: Buffer management parity error detected in mpfe0, value 0x10001, re-init the PFE

    There is description about it here:
    http://kb.juniper.net/InfoCenter/index?page=content&id=KB18931

    It was a memory parity error on the PFE. Thank you.
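
    For reference, once the PFE re-initializes, the affected member can be checked from the master RE. This is a generic Junos sketch (the `user@SW5` prompt is a placeholder and the `#` annotations are notes, not part of the CLI; exact output varies by platform and release):

    ```
    {master:2}
    user@SW5> show virtual-chassis status          # all members should show Prsnt
    user@SW5> show chassis fpc                     # FPC 4 should be back Online
    user@SW5> show log messages | match parity     # watch for recurring parity errors
    ```

    A single parity event is often transient, but if it recurs it may indicate failing hardware, which is worth raising with JTAC.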