Solaris 9 panic

My Solaris box panics once or twice every week.
# uname -a
SunOS venus 5.9 Generic_117171-02 sun4u sparc SUNW,Sun-Blade-100
# tail /var/adm/messages
Sep 9 12:08:55 venus unix: [ID 836849 kern.notice]
Sep 9 12:08:55 venus ^Mpanic[cpu0]/thread=30002de4aa0:
Sep 9 12:08:56 venus unix: [ID 799565 kern.notice] BAD TRAP: type=34 rp=2a1005bd2f0 addr=1 mmu_fsr=0
Sep 9 12:08:56 venus unix: [ID 100000 kern.notice]
Sep 9 12:08:56 venus unix: [ID 839527 kern.notice] Xsun:
Sep 9 12:08:56 venus unix: [ID 123557 kern.notice] alignment error:
Sep 9 12:08:56 venus unix: [ID 381800 kern.notice] addr=0x1
Sep 9 12:08:56 venus unix: [ID 101969 kern.notice] pid=345, pc=0x1128ee8, sp=0x2a1005bcb91, tstate=0x80001601, context=0x1884
Sep 9 12:08:56 venus unix: [ID 743441 kern.notice] g1-g7: 1128ecc, 0, 30000603d70, 4, 3000077dc00, 0, 30002de4aa0
Sep 9 12:08:56 venus unix: [ID 100000 kern.notice]
Sep 9 12:08:56 venus genunix: [ID 723222 kern.notice] 000002a1005bd020 unix:die+a4 (34, 2a1005bd2f0, 1, 0, 159000, 0)
Sep 9 12:08:56 venus genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 00000000007f8090 0000000000000009 0000000001499890
Sep 9 12:08:56 venus %l4-7: 0000000000000034 0000000003002250 000003000000b400 00000300011e6ac0
Sep 9 12:08:56 venus genunix: [ID 723222 kern.notice] 000002a1005bd100 unix:trap+59c (2a1005bd2f0, 0, 10000, 10200, 0, ff00)
Sep 9 12:08:57 venus genunix: [ID 179002 kern.notice] %l0-3: 0000000000000001 000000000080000f 0000030002dda050 0000000000000034
Sep 9 12:08:57 venus %l4-7: 0000030002de23c8 000003000000b618 0000000000000000 0000000000000000
Sep 9 12:08:57 venus genunix: [ID 723222 kern.notice] 000002a1005bd240 unix:ktl0+48 (0, 1, 1, 2a1005bd448, 2a1005bd444, 0)
Sep 9 12:08:57 venus genunix: [ID 179002 kern.notice] %l0-3: 0000000000000002 0000000000001400 0000000080001601 000000000102cb04
Sep 9 12:08:57 venus %l4-7: 000003000000b618 0000030001677c80 0000000000000000 000002a1005bd2f0
Sep 9 12:08:57 venus genunix: [ID 723222 kern.notice] 000002a1005bd390 genunix:power_dev+224 (1, 2, 1, 0, 0, 1)
Sep 9 12:08:57 venus genunix: [ID 179002 kern.notice] %l0-3: 000003000028a758 0000000000000000 0000000000000001 0000000000000001
Sep 9 12:08:57 venus %l4-7: 0000000000000002 0000000000000000 000003000028a758 0000000000000001
Sep 9 12:08:57 venus genunix: [ID 723222 kern.notice] 000002a1005bd470 genunix:pm_set_power+428 (58, 0, 2, 2, 3000060d000, 30000957dd0)
Sep 9 12:08:57 venus genunix: [ID 179002 kern.notice] %l0-3: 0000000000000002 000003000028a758 000003000028dd10 0000000000000001
Sep 9 12:08:57 venus %l4-7: 0000000000000001 0000000000000000 0000000000000002 0000000000000000
Sep 9 12:08:58 venus genunix: [ID 723222 kern.notice] 000002a1005bd570 pm:pm_ioctl+1774 (1, 0, 0, 100003, 3000025fa48, 2a1005bdaec)
Sep 9 12:08:58 venus genunix: [ID 179002 kern.notice] %l0-3: 000003000060d000 0000000000010000 000003000028a758 0000000001496ca0
Sep 9 12:08:58 venus %l4-7: 0000000001492840 0000000001447000 0000030002de4aa0 0000000001447000
Sep 9 12:08:58 venus genunix: [ID 723222 kern.notice] 000002a1005bd9a0 genunix:ioctl+1f8 (e, 2c, 153160, e, 155ffc, ffbfeab0)
Sep 9 12:08:58 venus genunix: [ID 179002 kern.notice] %l0-3: 000000000117e44c 000000000000002c 000000000000000e 0000000000000023
Sep 9 12:08:58 venus %l4-7: 000003000094d9e0 0000000000000076 0000000000155ff4 0000000081010100
Sep 9 12:08:58 venus unix: [ID 100000 kern.notice]
Sep 9 12:08:58 venus genunix: [ID 672855 kern.notice] syncing file systems...
Sep 9 12:08:58 venus genunix: [ID 733762 kern.notice] 1
Sep 9 12:09:20 venus last message repeated 20 times
Sep 9 12:09:21 venus genunix: [ID 622722 kern.notice] done (not all i/o completed)
Sep 9 12:09:22 venus genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c0t0d0s1, offset 107479040, content: kernel
Sep 9 12:09:29 venus genunix: [ID 409368 kern.notice] ^M100% done: 10959 pages dumped, compression ratio 3.26,
Sep 9 12:09:29 venus genunix: [ID 851671 kern.notice] dump succeeded
What is the problem, and how can I track it down?
How can I fix it?
Any comments will be appreciated.
Thanks in advance,
Julxu

The easiest way is to load this core dump into a debugger and pin the problem down. A debugger such as adb (or mdb) will do this, but you need to be familiar with memory-address conventions and Solaris internals, which takes training. The other way is to narrow the problem down step by step. Is there a script that runs on that day, or the day before? Do you have a specific entry in your crontab? I also suspect ECC (error-correcting code) RAM problems; extra load on a server can make CPU/RAM faults visible.
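The first approach can be sketched as follows, assuming savecore is enabled (see dumpadm) and the dump was saved under the default /var/crash/<hostname> directory; the venus path and the .0 suffix here are examples, not taken from the poster's system:

```shell
# Locate the saved crash dump (savecore writes a unix.N/vmcore.N pair).
cd /var/crash/venus
ls                 # e.g. bounds  unix.0  vmcore.0

# Open the dump with the modular debugger shipped with Solaris 9.
mdb -k unix.0 vmcore.0
```

Inside mdb, `::status` prints the panic summary, `::msgbuf` shows the kernel messages leading up to the panic, and `$c` prints the stack backtrace of the panicking thread.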

Similar Messages

  • Solaris 10 panics during reboot after successful installation

    hello,
    this is the 2nd time I'm trying to get Solaris 10 x86 (latest DVD version) to
    run on my Asus W1N laptop. /first I tried the CD version last December./
    and again I'm not being successful. what happens is this:
    after the (I guess successful) installation the system reboots, and it panics
    during the boot process. where and why exactly I don't know -- it just prints
    the kernel banner, and I noticed a change of screen font. and when it panics I
    have no time to write down or even read the message, as it reboots in a second.
    does anyone have an idea of what might be going on or how to overcome this
    issue, please?
    /I want to give the system a try but so far I've had no chance./
    many thanks,
    martin
    ps: please cc me as I'm not on the list. cheers.

    Martin,
    as a first step, try booting with -v. This will give you more insight into just where your system panics. Obviously, your system at least gets past the point where the video driver is loaded - that's where the screen font changes.
    Since you seem to be able to boot from a CD/DVD Media, you may also try to boot to single user mode from there.
    Don't get confused by the absence of the usual device configuration assistant / 2nd level boot dialog screens when booting from an install media - at the point where you are prompted for an installation type you can also enter, e.g., "b -s" (without the "", of course) for a single user boot.
    You may then try to look into /var/adm/messages on your hard disk installation. USUALLY, you'll find some hints towards the problem source there.
    HIH, kind regards,
    Me
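    To make the last step concrete, here is a minimal sketch; the device name c0d0s0 is an assumption, so check your actual root slice first:

    ```shell
    # From the single-user shell booted off the install media, mount the
    # installed root slice read-only under /a and inspect its logs.
    mount -o ro /dev/dsk/c0d0s0 /a
    tail -100 /a/var/adm/messages      # last messages before the panic
    grep -i panic /a/var/adm/messages  # any recorded panic strings
    umount /a
    ```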

  • Ultra 10 crashes with Solaris 7:panic[cpu0] ... ECache SRAM ...

    The error message is as follows:
    panic[cpu0]/thread=706a6920: cpu0 Ecache SRAM Data Parity Error: AFSR 0x00000000
    .0040003 AFSR 0x00000000.1364a7b8
    Syncing file systems ????
    dumping to /dev/dsk/c0t0d0s1, offset ???
    100% done: 2484 pages dumped, compression ratio 2.94, dump succeeded
    Rebooting...
    Restarting...
    Thanks

    Hi JTB,
    There is not currently a patch for Solaris, but you may want to take a look at the following articles that address the use of GPIB-ENET devices with Solaris 8:
    GPIB-ENET/100 Fails with Solaris 8
    Unable to Use the GPIB-ENET with LabVIEW under Solaris
    Again, I recommend trying to monitor the specific commands you are executing and where the error is occurring in order to isolate the issue.
    Regards,
    Lauren
    Applications Engineering
    National Instruments

  • Solaris 10 panic

    Hi,
    I just upgraded my Solaris 9 Ultra 10 box to Solaris 10. When the machine tries to boot, it crashes right after starting from the root device. I don't have a serial console, so I'm not able to paste the error, but it's something about USB. One of the messages I see is "consconfig get_usb_kb_path".
    My guess is that, since my Ultra 10 does not support USB, the kernel panics trying to configure the USB devices (?). Is it enough to remove the USB device from /etc/driver_aliases? I cannot try this now, so I would appreciate any suggestions.
    Thank you.

    Well, I couldn't find the reason for the crash. Since I had an empty
    9GB slice on the disk, I decided to install a "fresh" solaris 10
    version on that slice. It's working fine, so it's not a problem with
    the detection of the USB subsystem. Maybe later I can connect a serial
    console to see what's happening.

  • Sun Cluster 3.0 update 1 on Solaris 8 - panics!

    I am building a test system in our lab on admittedly unsupported hardware configurations but the failure wasn't expected to be so dramatic. Setup as follows:
    2x E250 (single processor, 512 MB RAM)
    dual connected to D1000 fully populated 18 Gb HDD
    Solaris 8 6/00 with all latest recommended patches
    Sun Cluster 3.0 update 1 installed with latest patches.
    On first reboot (on either node), the kernel panics with the following:
    panic[cpu0]/thread=3000132c320: segkp_fault: accessing redzone
    This happens straight after the system sets up the network, happens like that every time, and is easily reproducible. My question is: has anyone successfully used SC3.0 update 1 on Solaris 8 6/00? Any information would be most appreciated.
    -chris.

    We have the same problem with two Sun E420s and a D1000 storage array.
    The problem is related to settings in the file /etc/system that were added by the cluster installation:
    set rpcmod:svc_default_stksize=0x4000
    set ge:ge_intr_mode=0x833
    The second line tries to configure a Gigabit Ethernet interface that does not exist.
    We commented out both lines and everything works fine.
    I'd be interested to know what you think of Sun Cluster 3.0 and to hear about your experience.
    email: [email protected]
    Stefano
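    For reference, the fix Stefano describes amounts to disabling those two lines in /etc/system; lines there are commented out with a leading asterisk:

    ```shell
    * /etc/system fragment -- both cluster-added lines disabled
    * set rpcmod:svc_default_stksize=0x4000
    * set ge:ge_intr_mode=0x833
    ```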

  • Solaris 7: system panic when kmem_flags is turned on

    Hi,
    The system running Solaris 7 panics in ip_ire_clookup_and_delete() in the ip driver when kmem_flags (0xF) is enabled. Does anyone know which patch fixes the issue? Thanks.
    30002aca268: BAD TRAP: cpu=0 type=0x34 rp=0x2a10003f350 addr=0xfffffeefdeadbf37 mmu_fsr=0x0
    30002acae68: sched:
    30002acafe8: alignment error:
    30002aca6e8: addr=0xfffffeefdeadbf37
    30002acbd68: pid=0, pc=0x10171248, sp=0x2a10003ebf1, tstate=0x8800001602, context=0x1a57
    30002aca0e8: g1-g7: 30003665430, 0, 0, 1f, 1368, 0, 2a10003fd60
    30002acbbe8: Begin traceback... sp = 2a10003ebf1
    30002acb8e8: Called from 1015d114, fp=2a10003eca1, args=90be24d 1 1 deadbeefdeadbeef 30005ed3a40 3000008be50
    30002aca9e8: Called from 10168014, fp=2a10003ee01, args=30006ddfd00 30005da8098 30005da8090 30006ddfd00 0 300000a0e40
    30005ebd428: Called from 1003abbc, fp=2a10003eeb1, args=30005e70fc8 30006ddfd00 20 30006ddfd00 30005e70fc8 0
    30005ebdba8: Called from 1021d79c, fp=2a10003ef61, args=30005e70fc8 30006ddfd00 3000293b628 10100 10165e3c 3000293df18
    30005ebd2a8: Called from 1003ab2c, fp=2a10003f051, args=104a4290 104a3e94 30006ddec80 1 300036657b0 30002d128c0
    30005ebd5a8: Called from 10310184, fp=2a10003f101, args=1030f074 30005da8088 30002d128c0 30005d74000 2a10003f9b8 1
    30005ebc228: Called from 1030eacc, fp=2a10003f1d1, args=30005da8080 30005e71100 1468 b4 3a0 30005d74c18
    30005ebce28: Called from 101ce8a4, fp=2a10003f281, args=80000000 1468 1400 30005d74000 30005d743c0 30005d88ba0
    30005ebd728: Called from 10009a2c, fp=2a10003f351, args=3000550c6c0 30002a78f78 30002a49dc8 30002f4e1b8 fc20 30002a78f78
    30005ebc3a8: Called from 10068314, fp=2a10061e281, args=2a10061e331 6e43680 30006dde840 30006e42cc0 30006dbbdc0 deadbeef
    30005ebdea8: End traceback...
    30005ebc9a8: panic[cpu0]/thread=2a10003fd60:
    30005ebc828: trap
    30005ebcca8:
    30005ebd12b: syncing file systems...
    30005ebcb2b: 5
    # adb -k unix.21 vmcore.21
    physmem 1f208
    0x10171248/ai
    ip_ire_clookup_and_delete+0x90:
    ip_ire_clookup_and_delete+0x90: ld [%i3 + 0x48], %g4
    ip_ire_clookup_and_delete+0x94:
    ip_ire_clookup_and_delete+0x94: sra %i1, 0x0, %i0

    Bug ID: 6242141
    Synopsis: Solaris 7 KU-39 panics if kmem_flags are enabled
    Category: kernel
    Subcategory: tcp-ip
    State: 7-Fix in Progress
    Description:
    System panic might occur on Solaris 7 with KU-39 if kmem_flags are enabled.
    Work Around:
    unset kmem_flags (KMF_LITE, KMF_DEADBEEF), backout 106541-39, boot 64 bit kernel
    if kmem_flags=0x100 is set.
    [email protected] 2005-03-17 13:50:22 GMT
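    The workaround can be sketched like this (a sketch only; verify the exact kmem_flags line in your own /etc/system before editing):

    ```shell
    # 1. Comment out the debug flags in /etc/system ('*' starts a comment):
    #      * set kmem_flags=0xf
    # 2. Or back out the kernel update patch named in the bug report:
    patchrm 106541-39
    # 3. If kmem_flags=0x100 must stay set, boot the 64-bit kernel instead:
    reboot -- 'kernel/sparcv9/unix'
    ```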

  • Solaris 8 crashes in kstat subsystem

    Solaris 8 panics in the kstat subsystem, in kstat_runq_exit(). I am already checking rcnt for non-null.
    Harry

    You may want to investigate installing from a Web Start Flash Archive located on your net, CD or tape.
    http://docs.sun.com/db/doc/816-2411/6m8ou8s8p?a=view

  • IBM Thinkpad T30 512M panics

    After a recent system board change on an IBM T30, my Solaris 9/10 panics during boot-up. It was working fine before the change with 512M of memory; after the change, Solaris works fine with one 256M DIMM, but with both 256M DIMMs installed it panics during boot-up. I tried different DIMMs, and the result is the same. I talked to the repair depot; they claimed the replacement system board is exactly the same as the old one. Has anyone run into the same problem?

    A bit more information on the panic: the panic string is "segpt_badop".

  • Cluster node panic on booting

    Hi
    I have set up a two-node cluster with Sun Cluster 3.1 u4 on Sun V890s + a StorageTek 6140. The cluster runs Oracle RAC with Oracle 10g + Clusterware.
    When everything was finished I mirrored the boot disk with SVM on the nodes, but during boot into Solaris it panics like this:
    Jun 23 15:14:37 hisa ID[SUNWudlm.udlm]: [ID 795570 local0.error] Unix DLM version (2) and SUN Unix DLM library version (1): compatible.
    Jun 23 15:14:37 hisa Cluster.OPS.UCMMD: [ID 525628 daemon.notice] CMM: Cluster has reached quorum.
    Jun 23 15:14:37 hisa Cluster.OPS.UCMMD: [ID 377347 daemon.notice] CMM: Node hisa (nodeid = 1) is up; new incarnation number = 1182582874.
    Jun 23 15:14:37 hisa Cluster.OPS.UCMMD: [ID 377347 daemon.notice] CMM: Node hisb (nodeid = 2) is up; new incarnation number = 1182582873.
    Jun 23 15:14:38 hisa java[1656]: [ID 807473 user.error] pkcs11_softtoken: Keystore version failure.
    Jun 23 15:15:30 hisa cl_dlpitrans: [ID 624622 kern.notice] Notifying cluster that this node is panicking
    Jun 23 15:15:30 hisa unix: [ID 836849 kern.notice]
    Jun 23 15:15:30 hisa ^Mpanic[cpu2]/thread=2a100047cc0:
    Jun 23 15:15:30 hisa unix: [ID 213328 kern.notice] kstat_q_exit: qlen == 0
    Jun 23 15:15:30 hisa unix: [ID 100000 kern.notice]
    Jun 23 15:15:30 hisa genunix: [ID 723222 kern.notice] 000002a100047020 SUNW,UltraSPARC-IV+:kstat_q_panic+8 (300026ab150, 0, ffffffffffffffff, 2200061, 300026ab150, 5800)
    Jun 23 15:15:30 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000000002 0000060001815000 0000000000000000 0000030000241b80
    Jun 23 15:15:30 hisa %l4-7: 0000030000241b80 0000000000000000 0000000000000000 0000000001297400
    Jun 23 15:15:31 hisa genunix: [ID 723222 kern.notice] 000002a1000470d0 md:md_kstat_done+cc (600060dda08, 60001fc5938, 0, 600060dda30, 200, 300026ab040)
    Jun 23 15:15:31 hisa genunix: [ID 179002 kern.notice] %l0-3: 00000300026ab040 00000300026ab150 0000000000000009 0000000000000008
    Jun 23 15:15:31 hisa %l4-7: 00000300026d5d00 0000000000000002 0000000000000008 0000000000000000
    Jun 23 15:15:31 hisa genunix: [ID 723222 kern.notice] 000002a100047180 md_sp:sp_done+114 (0, 600060dda08, 0, 60001fc5938, 6000750ddf0, 704b7800)
    Jun 23 15:15:31 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000200061 00000300026aaff0 000000000000000b 000000000000000a
    Jun 23 15:15:31 hisa %l4-7: 0000000000004000 0000000000000000 0000000000000001 00000000704b7800
    Jun 23 15:15:31 hisa genunix: [ID 723222 kern.notice] 000002a100047230 md_stripe:stripe_done+13c (4, 6000721cb38, 703c1400, 6000750f930, 60007509ce8, 60007509d40)
    Jun 23 15:15:31 hisa genunix: [ID 179002 kern.notice] %l0-3: 000006000750b730 0000000000004000 0000000000000000 0000000000000001
    Jun 23 15:15:31 hisa %l4-7: 0000000000000000 0000000000000000 00000000703c1400 0000000000000000
    Jun 23 15:15:31 hisa genunix: [ID 723222 kern.notice] 000002a1000472e0 did:did_done+3c (60004b8b9c0, 60001236a80, 6000750b770, 6000131d280, 2200061, 0)
    Jun 23 15:15:31 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000001202390 00000000018cafd8 0000000001202310 0000000000200061
    Jun 23 15:15:31 hisa %l4-7: 0000000002200061 00000000fdffffff 00000000fdfffc00 000000007b666310
    Jun 23 15:15:32 hisa genunix: [ID 723222 kern.notice] 000002a100047390 ssd:ssd_return_command+198 (60001236a80, 60004b8b9c0, 4, 6000131d280, 4, 4)
    Jun 23 15:15:32 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000000020 00000000018cad68 00000000018cac00 000000000126ced8
    Jun 23 15:15:32 hisa %l4-7: 0000000000000020 00000000018caf08 00000000018cac00 0000000000000004
    Jun 23 15:15:32 hisa genunix: [ID 723222 kern.notice] 000002a100047440 ssd:ssdintr+268 (60006e0f458, 0, 0, 6000594d680, 60004b8b9c0, 60001236a80)
    Jun 23 15:15:32 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000000000 0000000000004000 0000060006e0f4f8
    Jun 23 15:15:32 hisa %l4-7: 0000000000000000 0000000000000000 0000000000000001 0000000000000000
    Jun 23 15:15:32 hisa genunix: [ID 723222 kern.notice] 000002a1000474f0 scsi_vhci:vhci_intr+7b0 (600011e8dc0, 60006e0f4b8, 600018b13e0, 0, 60001822388, 60006e0f458)
    Jun 23 15:15:32 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000060001863a40 0000000000000000 0000060006e0f4b8 0000000000000000
    Jun 23 15:15:32 hisa %l4-7: 0000000000000000 0000060006e0f4f8 00000600018b1284 0000000000000028
    Jun 23 15:15:32 hisa genunix: [ID 723222 kern.notice] 000002a1000475d0 fcp:ssfcp_cmd_callback+64 (600018b1438, 0, 1, 813, 600018b1248, 600011c2f40)
    Jun 23 15:15:32 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000000002 0000060001815000 0000000000000000 0000030000241b80
    Jun 23 15:15:32 hisa %l4-7: 0000030000241b80 0000000000000000 0000000000000000 0000000001297400
    Jun 23 15:15:33 hisa genunix: [ID 723222 kern.notice] 000002a100047680 qlc:ql_fast_fcp_post+178 (600018b15d8, 128ae70, 600018b1438, 60001236fc0, 60001237038, 128ae70)
    Jun 23 15:15:33 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000400000 00000000018d5148 0000000000000803 0000000000000001
    Jun 23 15:15:33 hisa %l4-7: 00000600018b1438 00000600018b1438 00000600018b1438 00000600018b1278
    Jun 23 15:15:33 hisa genunix: [ID 723222 kern.notice] 000002a100047730 qlc:ql_24xx_status_entry+1ec (0, 300012008c0, 2a100047958, 2a10004796c, 0, 0)
    Jun 23 15:15:33 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000000811 00000600018b15d8 0000000000000000 0000000000080811
    Jun 23 15:15:33 hisa %l4-7: 00000000fff7ffff 0000000000000001 0000000000000001 0000000000000000
    Jun 23 15:15:33 hisa genunix: [ID 723222 kern.notice] 000002a1000477e0 qlc:ql_response_pkt+248 (60001236fc0, 2a100047958, 2a10004796c, 2a100047968, 20aa, 2840)
    Jun 23 15:15:33 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000004000 0000000000002000 0000000000000000
    Jun 23 15:15:33 hisa %l4-7: 0000000000000000 00000300012008c0 0000000000000000 0000000000000000
    Jun 23 15:15:33 hisa genunix: [ID 723222 kern.notice] 000002a100047890 qlc:ql_isr+664 (60001236fc0, a2, 8000, a2, ffffffffffffffff, 60001237018)
    Jun 23 15:15:34 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000002000 0000000000004000 0000060001236fd8 00000000012db3a8
    Jun 23 15:15:34 hisa %l4-7: 0000000000000001 0000000000000000 0000000000000000 0000000000000003
    Jun 23 15:15:34 hisa genunix: [ID 723222 kern.notice] 000002a100047970 qlc:___const_seg_900000101+db4 (60001236fc0, 0, 60001236fc0, 0, 0, 60001237018)
    Jun 23 15:15:34 hisa genunix: [ID 179002 kern.notice] %l0-3: 0000000000002000 0000000000004000 0000060001236fd8 00000000012db3a8
    Jun 23 15:15:34 hisa %l4-7: 0000000000000001 0000000000000001 0000000000000000 00000000000001ab
    Jun 23 15:15:34 hisa genunix: [ID 723222 kern.notice] 000002a100047a20 pcisch:pci_intr_wrapper+b4 (6000122f7b0, 600010b5230, 0, 0, 0, 6000136c738)
    Jun 23 15:15:34 hisa genunix: [ID 179002 kern.notice] %l0-3: 00000000018d5170 00000600010f8c80 00000000018d51b8 0000000000000001
    Jun 23 15:15:34 hisa %l4-7: 0000030000220220 0000060001236fc0 0000000000000000 00000000012dc158
    Jun 23 15:15:34 hisa unix: [ID 100000 kern.notice]
    Jun 23 15:15:34 hisa genunix: [ID 672855 kern.notice] syncing file systems...
    Has anyone ever seen anything like this?

    Thank you for your attention!
    I would like to add some more information on this issue.
    Between the hosts and the storage we did not use a switch; we connected them directly with fibre cables. I don't know if this could cause problems. We did not use QFS either. Besides panicking at boot, sometimes the messages file displays this information:
    Jun 26 15:32:20 hisb genunix: [ID 454863 kern.info] dump on /dev/dsk/c4t500000E0147BACB0d0s1 size 8198 MB
    Jun 26 15:33:18 hisb cacao[978]: [ID 388282 daemon.warning] com.sun.cacao.ModuleManager.garbage : Cannot garbage class loader for module com.sun.cacao.snmpv3_adaptor
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group rac-rg state on node hisb change to RG_PENDING_OFFLINE
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-svm-rs state on node hisb change to R_MON_STOPPING
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-udlm-rs state on node hisb change to R_MON_STOPPING
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-framework-rs state on node hisb change to R_MON_STOPPING
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <bin/rac_framework_monitor_stop> for resource <rac-framework-rs>, resource group <rac-rg>, timeout <3600> seconds
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <bin/rac_udlm_monitor_stop> for resource <rac-udlm-rs>, resource group <rac-rg>, timeout <300> seconds
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <bin/rac_svm_monitor_stop> for resource <rac-svm-rs>, resource group <rac-rg>, timeout <300> seconds
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <bin/rac_framework_monitor_stop> completed successfully for resource <rac-framework-rs>, resource group <rac-rg>, time used: 0% of timeout <3600 seconds>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-framework-rs state on node hisb change to R_ONLINE_UNMON
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <bin/rac_svm_monitor_stop> completed successfully for resource <rac-svm-rs>, resource group <rac-rg>, time used: 0% of timeout <300 seconds>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-svm-rs state on node hisb change to R_ONLINE_UNMON
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-svm-rs state on node hisb change to R_STOPPING
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource rac-svm-rs status on node hisb change to R_FM_UNKNOWN
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource rac-svm-rs status msg on node hisb change to <Stopping>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <bin/rac_svm_stop> for resource <rac-svm-rs>, resource group <rac-rg>, timeout <300> seconds
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <bin/rac_udlm_monitor_stop> completed successfully for resource <rac-udlm-rs>, resource group <rac-rg>, time used: 0% of timeout <300 seconds>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-udlm-rs state on node hisb change to R_ONLINE_UNMON
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-udlm-rs state on node hisb change to R_STOPPING
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <bin/rac_udlm_stop> for resource <rac-udlm-rs>, resource group <rac-rg>, timeout <300> seconds
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource rac-udlm-rs status on node hisb change to R_FM_UNKNOWN
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource rac-udlm-rs status msg on node hisb change to <Stopping>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource rac-svm-rs status msg on node hisb change to <RAC framework is running>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <bin/rac_svm_stop> completed successfully for resource <rac-svm-rs>, resource group <rac-rg>, time used: 0% of timeout <300 seconds>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-svm-rs state on node hisb change to R_OFFLINE
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource rac-udlm-rs status msg on node hisb change to <RAC framework is running>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <bin/rac_udlm_stop> completed successfully for resource <rac-udlm-rs>, resource group <rac-rg>, time used: 0% of timeout <300 seconds>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-udlm-rs state on node hisb change to R_OFFLINE
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-framework-rs state on node hisb change to R_STOPPING
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <bin/rac_framework_stop> for resource <rac-framework-rs>, resource group <rac-rg>, timeout <300> seconds
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource rac-framework-rs status on node hisb change to R_FM_UNKNOWN
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource rac-framework-rs status msg on node hisb change to <Stopping>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource rac-framework-rs status msg on node hisb change to <RAC framework is running>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <bin/rac_framework_stop> completed successfully for resource <rac-framework-rs>, resource group <rac-rg>, time used: 0% of timeout <300 seconds>
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource rac-framework-rs state on node hisb change to R_OFFLINE
    Jun 26 15:33:21 hisb Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group rac-rg state on node hisb change to RG_OFFLINE
    Jun 26 15:33:21 hisb xntpd[568]: [ID 866926 daemon.notice] xntpd exiting on signal 15
    Jun 26 15:33:23 hisb root: [ID 702911 user.error] Oracle CRSD 1099 set to stop
    Jun 26 15:33:23 hisb root: [ID 702911 user.error] Oracle CRSD 1099 shutdown completed
    Jun 26 15:33:23 hisb root: [ID 702911 user.error] Oracle EVMD set to stop
    Jun 26 15:33:23 hisb root: [ID 702911 user.error] Oracle CSSD being stopped
    Jun 26 15:33:41 hisb FIN_SVC_CTRL: [ID 702911 local0.error] Warning:      Because one or more of the sun cluster userland cluster      services are offline this service goes offline
    Jun 26 15:33:41 hisb cl_eventlogd[843]: [ID 247336 daemon.error] Going down on signal 15.
    Jun 26 15:33:43 hisb root: [ID 702911 user.error] Oracle CSSD graceful shutdown
    Jun 26 15:33:44 hisb Cluster.PNM: [ID 226280 daemon.notice] PNM daemon exiting.
    Regards,
    Caicia

  • Cluster 3.0 fails to boot

    This is a two node cluster consisting of:
    2x E250 w 512 Mb RAM
    both connected to
    D1000 with 12x 18 Gb IBM HDDs
    After I got over the initial configuration problem (see "Sun Cluster 3.0 update 1 on Solaris 8 - panics!" in a previous topic), I've run into another (why me?).
    After configuring the quorum device and deactivating installmode, the node on which I configured the quorum disk panics with a reservation conflict. The same thing happens on reboot. Halting node 1 and booting the other node doesn't help, as it can't gain control of the quorum disk.
    on node 1:
    Sep 19 21:23:09 cluster2 cl_runtime: NOTICE: clcomm: Path cluster2:qfe2 - cluster1:qfe2 online
    Sep 19 21:23:09 cluster2 cl_runtime: NOTICE: clcomm: Path cluster2:qfe3 - cluster1:qfe3 online
    Sep 19 21:23:14 cluster2 cl_runtime: NOTICE: CMM: Node cluster1 (nodeid: 2, incarnation #: 100095797
    3) has become reachable.
    panic[cpu0]/thread=2a100045d40: Reservation Conflict
    after node 1 is halted, on node 2:
    ASC: 0x29 (<vendor unique code 0x29>), ASCQ: 0x2, FRU: 0x0
    NOTICE: CMM: Quorum device 1(gdevname /dev/did/rdsk/d6s2) can not be acquired by the current cluster members. This quorum device is held by node 1.
    Is this the famous SCSI 3 reservation bug^H^H^Hfeature that I've been told about? Anyone with a similar experience? Thanks,
    -chris.

    Use the following procedure in the event that it becomes necessary to remove a SCSI-3 PGR:
    1. Log in as root on one of the nodes that currently has access to the disk.
    2. Determine the DID name of the disk:
       scdidadm -L
    3. Verify that there is a SCSI-3 PGR on the disk:
       /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dXs2
    4. Scrub the reservation from the disk:
       /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dXs2
    5. Verify that the reservation has been scrubbed:
       /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dXs2

  • Solaris 8 on an Ultra 2 panics and auto-reboots when installing Oracle software

    Dear all,
    I installed Solaris 8 by selecting 'Entire Solaris Software Group' with 32-bit support only on an old Ultra 2 machine. Then I tried to install Oracle 8.1.7 on the same machine, but the machine always panics and auto-reboots when the Oracle install is about 17%, 20%, or even 70% complete (I tried installing the minimum Oracle configuration).
    What is the problem? Anyone can help?
    Thanks in advance,
    Vicky

    I think you need to check parameters like SHMMAX in the /etc/system file, as recommended by Oracle; if they are not already there, you need to add them. The Oracle manuals have a list of these parameters. Don't forget to reboot with boot -r after adding them.
    Also make sure you have sufficient memory, and install any patches recommended by Oracle.
    Hemant
    http://www.adminschoice.com
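    As an illustration of Hemant's advice, a typical /etc/system fragment for Oracle 8.1.7 looked like the following; these are the commonly published example values from Oracle's install guides of that era, not values tuned for this machine:

    ```shell
    * /etc/system -- System V IPC settings for Oracle 8.1.7 (example values)
    set shmsys:shminfo_shmmax=4294967295
    set shmsys:shminfo_shmmin=1
    set shmsys:shminfo_shmmni=100
    set shmsys:shminfo_shmseg=10
    set semsys:seminfo_semmni=100
    set semsys:seminfo_semmsl=100
    set semsys:seminfo_semmns=1024
    set semsys:seminfo_semopm=100
    set semsys:seminfo_semvmx=32767
    ```

    Reboot with boot -r after adding them, as noted above.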

  • Kernel panic with Apache2, NCA and Solaris 10 x86

    Hi,
    I'm trying to configure Apache2 on Solaris 10 with Network Cache Accelerator.
    All the config in /etc/nca is in place.
    pargs -e <process id> shows the LD_PRELOAD for ncad_addr.so
    pldd <process id> shows ncad_addr.so.1 as loaded
    BUT
    pfiles <process id> still shows AF_INET for the interface instead of AF_NCA
    AND
    the /var/nca/log doesn't grow when the page is accessed
    AND
    after a couple of accesses the machine panics and reboots.
    Any suggestions on what to look at next (besides the obvious)?
    Thanks,
    R.
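    A few NCA sanity checks worth running while debugging this; the file locations are the standard Solaris 10 ones, but the interface name and Apache path below are assumptions:

    ```shell
    # NCA must be enabled and bound to the interface serving port 80.
    cat /etc/nca/ncakmod.conf    # expect: status=enabled
    cat /etc/nca/nca.if          # should name the interface, e.g. bge0
    cat /etc/nca/ncaport.conf    # port/address mapping for NCA

    # Apache must be started with the NCA socket shim preloaded, otherwise
    # its listeners stay AF_INET instead of AF_NCA:
    LD_PRELOAD=/usr/lib/ncad_addr.so /usr/apache2/bin/apachectl start
    ```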

    Usually third-party software is to blame for kernel panics.
    Try removing com.sophos.kext.sav 9.2.0 / com.sophos.nke.swi 9.2.0, (all Sophos kernel extensions) and then restart your MacBook.
    If you're having difficulty, try the following:
    Open the Terminal app (Applications > Utilities) then triple click the following line, copy and paste the following after the "$ " prompt:
    cd /Library/Application\ Support/Sophos/he/Installer.app/Contents/MacOS/
    and hit return.
    when the "$ " prompt returns, triple click copy and paste the following:
    sudo ./InstallationDeployer --remove
    ...and hit return, then enter your admin password when prompted (you will not see any typing) and hit return once more.  You will see a bunch of commands fly by and when the "$ " prompt returns it should be gone.
    Personally, I am not a fan of Chrome. Issues with the browser certainly aren't uncommon. You should check out the following article on how Google's Chrome web browser is killing your laptop battery.
    Good luck!

  • Install_Check_1.6 tool and Solaris 10 5/09 panics after GRUB on Dell T7400

    Hello anybody!
    The Solaris 10 5/09 release apparently panics right after the GRUB menu during installation. The Install_Check_1.6 hardware-compatibility tool seems to do the same.
    I can't capture the kernel messages that appear on the screen; they go by too fast. I was able to see, however, that they point to a SUNOS-8000-0G message. The system then reboots automatically, so if left alone it reboots continuously.
    Can anybody help on how to capture the kernel error messages during boot? My guess is that it is a device-driver compatibility issue, but with what, I don't know. I am trying to update the firmware/BIOS, but I doubt that will change anything.
    Does anyone else have this problem? Is there a fix for it?
    sam
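
    One way to capture the messages, assuming the machine has a serial port and you have a null-modem cable: redirect the console to the serial line from the GRUB menu and boot verbose, so the panic scrolls past a terminal program that can log it. A sketch (the exact multiboot path comes from your existing GRUB entry):

    ```
    # In the GRUB menu, press 'e' on the Solaris entry, then 'e' on the
    # kernel line, and append console redirection plus the verbose flag:
    kernel /platform/i86pc/multiboot -v -B console=ttya

    # On a second machine attached to the serial port, log the output,
    # e.g. with tip on Solaris (or minicom/PuTTY elsewhere):
    tip -9600 /dev/term/a
    ```

    Adding -k as well drops into the kmdb debugger on panic instead of rebooting, which stops the reboot loop long enough to read the stack.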

    All,
    I checked the BIOS versions and everything seems to be the latest, so I am basically stuck installing Solaris on this Dell Precision T7400.
    Does anyone have a similar problem? If I can bypass these device-specific errors (I think that is what they are) and load the OS, I might be able to go in and patch it up.
    Has anyone experienced this with Solaris on a T7400?
    Please help

  • Panic T5240 after of terminated install SO Solaris 10 10/08

    Hi
    After finishing the operating system installation the server panics. I removed the FC HBA cards to try to boot with the minimum hardware, created a new devalias with the logical path of the disk, and tried to boot again... but it panics and reboots, over and over. For that reason I can only paste the following capture:
    , No Keyboard
    Copyright 2008 Sun Microsystems, Inc. All rights reserved.
    OpenBoot 4.30.0, 16160 MB memory available, Serial #83049876.
    Ethernet address 0:14:4f:f3:3d:94, Host ID: 84f33d94.
    Boot device: /pci@400/pci@0/pci@8/scsi@0/disk@1,0:a File and args: -s
    WARNING: cannot open system file: /etc/system
    SunOS Release 5.10 Version Generic_137137-09 64-bit
    Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    panic[cpu0]/thread=180e000: read_binding_file: /etc/name_to_major file not found
    000000000180b640 genunix:read_binding_file+2d8 (18a99cc, 18fd7b0, 1218db8, 7ffffc00, 7530, 1275c00)
    %l0-3: fffffcfffeae6008 fffffcfffeae6000 ffffffffffffffff 0000000000000000
    %l4-7: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
    000000000180b800 genunix:mod_setup+1c (185f800, 185f800, 0, 3c00, 1218c00, 18fd400)
    %l0-3: 0000000000000000 000000000185d800 000000000000752b 000000000185d800
    %l4-7: 0000000000007530 0000000000000005 0000000001862c00 000000000182b400
    000000000180b8b0 unix:startup_modules+24 (1968000, 185d800, 183d400, 1832800, 80000, 0)
    %l0-3: 0000000070002000 000000000185d400 0000000000000103 0000000070004000
    %l4-7: 0000000070004000 0000000001826c00 000000000187b800 0000000001c00000
    000000000180b960 unix:startup+28 (2, 1, 1, 1, 1, 1045000)
    %l0-3: 000000000dbab91d 03b9aca000000000 00000000457656f0 000000000000001c
    %l4-7: 000000000000048e 000000004585992f 00000000457656f0 000000000106b160
    000000000180ba10 genunix:main+c (0, 180c000, 185b240, 10aec00, 1831948, 70002000)
    %l0-3: 000000000101a800 0000000000000001 0000000070002000 0000000000000002
    %l4-7: 0000000001862800 0000000000000000 000000000180c000 0000000000000001
    skipping system dump - no dump device configured
    rebooting...
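
    The panic string itself ("read_binding_file: /etc/name_to_major file not found"), together with the earlier "cannot open system file: /etc/system" warning, means the kernel cannot read basic configuration files from the root filesystem it is booting, so that root is incomplete or damaged. A way to inspect it from failsafe mode, sketched here (accept the offer to mount the root filesystem, which lands on /a):

    ```
    ok boot -F failsafe
    # ... let the failsafe miniroot mount the installed root on /a, then:
    ls -l /a/etc/name_to_major /a/etc/system
    ```

    If those files are missing, the installation did not complete properly and the safest fix is to re-run it; if they exist, the boot device alias may be pointing at a different (empty) disk than the one you installed to.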

    How was this resolved? I have a similar issue with a new T5240.
    SUNW-MSG-ID: SUNOS-8000-0G, TYPE: Error, VER: 1, SEVERITY: Major
    EVENT-TIME: 0x49faf52f.0x55d26cc (0x10ffa3017bcf0d)
    PLATFORM: SUNW,T5240, CSN: -, HOSTNAME:
    SOURCE: SunOS, REV: 5.10 Generic_137137-09
    DESC: Errors have been detected that require a reboot to ensure system
    integrity. See http://www.sun.com/msg/SUNOS-8000-0G for more information.
    AUTO-RESPONSE: Solaris will attempt to save and diagnose the error telemetry
    IMPACT: The system will sync files, save a crash dump if needed, and reboot
    REC-ACTION: Save the error summary below in case telemetry cannot be saved
    panic[cpu0]/thread=180e000: Fatal error has occured in: PCIe root complex.(0x10)(0x0)

  • Panic loop after patching solaris 10 x86

    I have installed a number of patches onto my server and think I have messed things up somehow: wrong order, not in single-user mode, missing dependencies, not rebooting, etc.
    After installing I rebooted, and now I am stuck in a panic loop. I have gathered as much information as possible below.
    The server is a Sun Fire V20z running Solaris 10.
    If anyone can interpret the information below and point me in the right direction, that would be much appreciated.
    Thanks.
    =======================================================
    Panic Message:
    panic[cpu0]/thread=fffffffffbc22dc0: BAD TRAP: type=d (#gp General protection) rp=fffffffffbc45760 addr=0
    #gp General protection
    pid=0, pc=0xfffffffffb94a051, sp=0xfffffffffbc45850, eflags=0x10282
    cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f0<xmme,fxsr,pge,mce,pae,pse>
    cr2: 0 cr3: 9d60000 cr8: c
    rdi: 0 rsi: 0 rdx: fffffffffbc45730
    rcx: fffffffffbc45700 r8: 0 r9: ffffffff80178945
    rax: f000f84dc000147a rbx: ffffffff80cdc088 rbp: fffffffffbc45870
    r10: 0 r11: 0 r12: 0
    r13: ffffffff80cdc268 r14: ffffffff80178940 r15: 1
    fsb: 200000000 gsb: fffffffffbc240e0 ds: 0
    es: 0 fs: 0 gs: 0
    trp: d err: 0 rip: fffffffffb94a051
    cs: 28 rfl: 10282 rsp: fffffffffbc45850
    ss: 30
    fffffffffbc45670 unix:real_mode_end+6ad1 ()
    fffffffffbc45750 unix:trap+97b ()
    fffffffffbc45760 unix:cmntrap+13f ()
    fffffffffbc45870 genunix:init_node+51 ()
    fffffffffbc458a0 genunix:i_ndi_config_node+f3 ()
    fffffffffbc458c0 genunix:i_ddi_attachchild+41 ()
    fffffffffbc458f0 genunix:devi_attach_node+71 ()
    fffffffffbc45930 genunix:ndi_devi_online+a5 ()
    fffffffffbc45960 unix:add_cpunode2devtree+e4 ()
    fffffffffbc45970 unix:post_startup+78 ()
    fffffffffbc459b0 genunix:main+d3 ()
    fffffffffbc459c0 unix:_start+95 ()
    From kmdb debugger:
    SunOS Release 5.10 Version Generic_118855-36 64-bit
    Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    load 'fs/specfs' id 7 loaded @ 0xfffffffffbb4a630/0xfffffffffbcdfee0 size 20832/248
    installing specfs, module id 7.
    load 'fs/devfs' id 9 loaded @ 0xfffffffffbb503e8/0xfffffffffbce08f1 size 19064/712
    installing devfs, module id 9.
    load 'sched/TS' id 11 loaded @ 0xfffffffffbb54e60/0xfffffffffbce0bdd size 12992/1984
    installing TS, module id 11.
    load 'sched/TS_DPTBL' id 12 loaded @ 0xfffffffffbb58120/0xfffffffffbce13bd size 376/2152
    installing TS_DPTBL, module id 12.
    load 'misc/sysinit' id 13 loaded @ 0xfffffffffbb58298/0xfffffffffbce1c25 size 656/160
    installing sysinit, module id 13.
    uninstalled sysinit
    unloading sysinit, module id 13, loadcnt 1.
    load 'misc/acpica' id 15 loaded @ 0xfffffffffbb5eef8/0xfffffffffbce1d35 size 360536/5712
    load 'misc/pci_autoconfig' id 14 loaded @ 0xfffffffffbb58298/0xfffffffffbce1c25 size 27744/272
    installing pci_autoconfig, module id 14.
    installing acpica, module id 15.
    load 'cpu/cpu.AuthenticAMD.15' id 18 loaded @ 0xfffffffffbbb6f50/0xfffffffffbce38b5 size 12608/368
    installing cpu.AuthenticAMD.15, module id 18.
    load 'mach/uppc' id 19 loaded @ 0xfffffffffbbbac90/0xfffffffffbce3a4a size 11784/616
    installing uppc, module id 19.
    load 'mach/pcplusmp' id 20 loaded @ 0xfffffffffbbbe480/0xfffffffffbce4d32 size 41992/5112
    installing pcplusmp, module id 20.
    load 'drv/rootnex' id 21 loaded @ 0xfffffffffbbc9470/0xfffffffffbce7cba size 18040/840
    installing rootnex, module id 21.
    load 'drv/options' id 22 loaded @ 0xfffffffffbbcdae8/0xfffffffffbce8012 size 432/232
    installing options, module id 22.
    load 'drv/pseudo' id 23 loaded @ 0xfffffffffbbcdc98/0xfffffffffbce80fa size 2704/648
    installing pseudo, module id 23.
    load 'drv/clone' id 24 loaded @ 0xfffffffffbbce728/0xfffffffffbce8382 size 1184/680
    installing clone, module id 24.
    load 'misc/scsi' id 26 loaded @ 0xfffffffffbbe2cf8/0xfffffffffbce91ba size 51384/13152
    load 'drv/scsi_vhci' id 25 loaded @ 0xfffffffffbbcebc8/0xfffffffffbce862a size 82224/2960
    installing scsi_vhci, module id 25.
    installing scsi, module id 26.
    load 'misc/busra' id 28 loaded @ 0xfffffffffbbf1820/0xfffffffffbcecab8 size 6776/136
    load 'drv/isa' id 27 loaded @ 0xfffffffffbbef5b0/0xfffffffffbcec570 size 8816/1352
    installing isa, module id 27.
    installing busra, module id 28.
    load 'drv/sad' id 29 loaded @ 0xfffffffffbbf3298/0xfffffffffbcecb58 size 5320/688
    installing sad, module id 29.
    load 'misc/fssnap_if' id 31 loaded @ 0xfffffffff3a39110/0xfffffffffbcee638 size 776/176
    load 'fs/ufs' id 30 loaded @ 0xfffffffff3a00000/0xfffffffffbcece08 size 233744/6192
    installing ufs, module id 30.
    installing fssnap_if, module id 31.
    load 'misc/hpcsvc' id 34 loaded @ 0xfffffffffbbfb698/0xfffffffffbcef438 size 4472/144
    load 'misc/pcihp' id 33 loaded @ 0xfffffffff3a3a000/0xfffffffffbcef2f0 size 21232/328
    load 'drv/pci' id 32 loaded @ 0xfffffffffbbf6998/0xfffffffffbcef058 size 19712/664
    installing pci, module id 32.
    installing pcihp, module id 33.
    installing hpcsvc, module id 34.
    load 'drv/pci_pci' id 35 loaded @ 0xfffffffffbbfc810/0xfffffffffbcef4d8 size 5144/688
    installing pci_pci, module id 35.
    load 'drv/mpt' id 36 loaded @ 0xfffffffff3a40000/0xfffffffffbcef788 size 83864/109432
    installing mpt, module id 36.
    load 'drv/sd' id 37 loaded @ 0xfffffffff3a55000/0xfffffffffbd0a300 size 134144/9160
    installing sd, module id 37.
    load 'fs/ctfs' id 38 loaded @ 0xfffffffff3a76000/0xfffffffffbd0c6ec size 14808/944
    installing ctfs, module id 38.
    load 'fs/procfs' id 39 loaded @ 0xfffffffff3a7a000/0xfffffffffbd0cafc size 126976/2800
    installing procfs, module id 39.
    load 'fs/mntfs' id 40 loaded @ 0xfffffffff3a99000/0xfffffffffbd0d600 size 10552/248
    installing mntfs, module id 40.
    load 'fs/tmpfs' id 41 loaded @ 0xfffffffff3a9c000/0xfffffffffbd0d708 size 27952/66488
    installing tmpfs, module id 41.
    load 'fs/objfs' id 42 loaded @ 0xfffffffffbbfdc28/0xfffffffffbd1dae8 size 7648/1056
    installing objfs, module id 42.
    panic[cpu0]/thread=fffffffffbc22dc0: BAD TRAP: type=d (#gp General protection) rp=fffffffffbc45760 addr=0
    #gp General protection
    pid=0, pc=0xfffffffffb94a051, sp=0xfffffffffbc45850, eflags=0x10282
    cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f0<xmme,fxsr,pge,mce,pae,pse>
    cr2: 0 cr3: a2a5000 cr8: c
    rdi: 0 rsi: 0 rdx: fffffffffbc45730
    rcx: fffffffffbc45700 r8: 0 r9: ffffffff80178dc5
    rax: f000f84dc000147a rbx: ffffffff80566088 rbp: fffffffffbc45870
    r10: 0 r11: 0 r12: 0
    r13: ffffffff80566268 r14: ffffffff80178dc0 r15: 1
    fsb: 200000000 gsb: fffffffffbc240e0 ds: 0
    es: 0 fs: 0 gs: 0
    trp: d err: 0 rip: fffffffffb94a051
    cs: 28 rfl: 10282 rsp: fffffffffbc45850
    ss: 30
    fffffffffbc45670 unix:real_mode_end+6ad1 ()
    fffffffffbc45750 unix:trap+97b ()
    fffffffffbc45760 unix:cmntrap+13f ()
    fffffffffbc45870 genunix:init_node+51 ()
    fffffffffbc458a0 genunix:i_ndi_config_node+f3 ()
    fffffffffbc458c0 genunix:i_ddi_attachchild+41 ()
    fffffffffbc458f0 genunix:devi_attach_node+71 ()
    fffffffffbc45930 genunix:ndi_devi_online+a5 ()
    fffffffffbc45960 unix:add_cpunode2devtree+e4 ()
    fffffffffbc45970 unix:post_startup+78 ()
    fffffffffbc459b0 genunix:main+d3 ()
    fffffffffbc459c0 unix:_start+95 ()
    panic: entering debugger (no dump device, continue to reboot)
    Loaded modules: [ uppc ufs unix krtld genunix specfs pcplusmp
    cpu.AuthenticAMD.15 ]
    kmdb: target stopped at:
    kaif_enter+8: popfq
    From kmdb msgbuf:
    load 'cpu/cpu.AuthenticAMD.15' id 18 loaded @ 0xfffffffffbbb6f50/0xfffffffffbce3
    8b5 size 12608/368
    installing cpu.AuthenticAMD.15, module id 18.
    load 'mach/uppc' id 19 loaded @ 0xfffffffffbbbac90/0xfffffffffbce3a4a size 11784
    /616
    installing uppc, module id 19.
    load 'mach/pcplusmp' id 20 loaded @ 0xfffffffffbbbe480/0xfffffffffbce4d32 size 4
    1992/5112
    installing pcplusmp, module id 20.
    mem = 2096168K (0x7ff0a000)
    load 'drv/rootnex' id 21 loaded @ 0xfffffffffbbc9470/0xfffffffffbce7cba size 180
    40/840
    installing rootnex, module id 21.
    root nexus = i86pc
    load 'drv/options' id 22 loaded @ 0xfffffffffbbcdae8/0xfffffffffbce8012 size 432
    /232
    installing options, module id 22.
    load 'drv/pseudo' id 23 loaded @ 0xfffffffffbbcdc98/0xfffffffffbce80fa size 2704
    /648
    installing pseudo, module id 23.
    pseudo0 at root
    pseudo0 is /pseudo
    load 'drv/clone' id 24 loaded @ 0xfffffffffbbce728/0xfffffffffbce8382 size 1184/
    680
    installing clone, module id 24.
    load 'misc/scsi' id 26 loaded @ 0xfffffffffbbe2cf8/0xfffffffffbce91ba size 51384
    /13152
    load 'drv/scsi_vhci' id 25 loaded @ 0xfffffffffbbcebc8/0xfffffffffbce862a size 8
    2224/2960
    installing scsi_vhci, module id 25.
    installing scsi, module id 26.
    scsi_vhci0 at root
    scsi_vhci0 is /scsi_vhci
    load 'misc/busra' id 28 loaded @ 0xfffffffffbbf1820/0xfffffffffbcecab8 size 6776
    /136
    load 'drv/isa' id 27 loaded @ 0xfffffffffbbef5b0/0xfffffffffbcec570 size 8816/13
    52
    installing isa, module id 27.
    installing busra, module id 28.
    isa0 at root
    NOTICE: ACPI source type ACPI_RESOURCE_TYPE_EXT_IRQ not supported
    NOTICE: apic: local nmi: 0 1 1 1
    NOTICE: apic: local nmi: 1 1 1 1
    pcplusmp: vector 0x9 ioapic 0x2 intin 0x9 is bound to cpu 1
    load 'drv/sad' id 29 loaded @ 0xfffffffffbbf3298/0xfffffffffbcecb58 size 5320/68
    8
    installing sad, module id 29.
    load 'misc/fssnap_if' id 31 loaded @ 0xfffffffff3a39110/0xfffffffffbcee638 size
    776/176
    load 'fs/ufs' id 30 loaded @ 0xfffffffff3a00000/0xfffffffffbcece08 size 233744/6
    192
    installing ufs, module id 30.
    installing fssnap_if, module id 31.
    load 'misc/hpcsvc' id 34 loaded @ 0xfffffffffbbfb698/0xfffffffffbcef438 size 447
    2/144
    load 'misc/pcihp' id 33 loaded @ 0xfffffffff3a3a000/0xfffffffffbcef2f0 size 2123
    2/328
    load 'drv/pci' id 32 loaded @ 0xfffffffffbbf6998/0xfffffffffbcef058 size 19712/6
    64
    installing pci, module id 32.
    installing pcihp, module id 33.
    installing hpcsvc, module id 34.
    pci0 at root: space 0 offset 0
    pci0 is /pci@0,0
    load 'drv/pci_pci' id 35 loaded @ 0xfffffffffbbfc810/0xfffffffffbcef4d8 size 514
    4/688
    installing pci_pci, module id 35.
    PCI-device: pci1022,7450@a, pci_pci1
    pci_pci1 is /pci@0,0/pci1022,7450@a
    load 'drv/mpt' id 36 loaded @ 0xfffffffff3a40000/0xfffffffffbcef788 size 83864/1
    09432
    installing mpt, module id 36.
    /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
    Rev. 8 LSI, Inc. 1030 found.
    /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
    mpt0 supports power management.
    pcplusmp: pci1000,30 (mpt) instance 0 vector 0x1b ioapic 0x3 intin 0x3 is bound
    to cpu 0
    /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
    mpt0 Firmware version v1.3.27.0 (IM/IME)
    /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
    mpt0: IOC Operational.
    /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
    Volume 0 is optimal
    PCI-device: pci17c2,10@4, mpt0
    mpt0 is /pci@0,0/pci1022,7450@a/pci17c2,10@4
    load 'drv/sd' id 37 loaded @ 0xfffffffff3a55000/0xfffffffffbd0a300 size 134144/9
    160
    installing sd, module id 37.
    sd1 at mpt0: target 0 lun 0
    sd1 is /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
    load 'fs/ctfs' id 38 loaded @ 0xfffffffff3a76000/0xfffffffffbd0c6ec size 14808/9
    44
    installing ctfs, module id 38.
    load 'fs/procfs' id 39 loaded @ 0xfffffffff3a7a000/0xfffffffffbd0cafc size 12697
    6/2800
    installing procfs, module id 39.
    load 'fs/mntfs' id 40 loaded @ 0xfffffffff3a99000/0xfffffffffbd0d600 size 10552/
    248
    installing mntfs, module id 40.
    load 'fs/tmpfs' id 41 loaded @ 0xfffffffff3a9c000/0xfffffffffbd0d708 size 27952/
    66488
    installing tmpfs, module id 41.
    load 'fs/objfs' id 42 loaded @ 0xfffffffffbbfdc28/0xfffffffffbd1dae8 size 7648/1
    056
    installing objfs, module id 42.
    SMBIOS v2.31 loaded (1939 bytes)
    panic[cpu0]/thread=fffffffffbc22dc0:
    BAD TRAP: type=d (#gp General protection) rp=fffffffffbc45760 addr=0
    #gp General protection
    pid=0, pc=0xfffffffffb94a051, sp=0xfffffffffbc45850, eflags=0x10282
    cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f0<xmme,fxsr,pge,mce,pae,pse>
    cr2: 0 cr3: a2a5000 cr8: c
    rdi: 0 rsi: 0 rdx: fffffffffbc45730
    rcx: fffffffffbc45700 r8: 0 r9: ffffffff80178dc5
    rax: f000f84dc000147a rbx: ffffffff80566088 rbp: fffffffffbc45870
    r10: 0 r11: 0 r12: 0
    r13: ffffffff80566268 r14: ffffffff80178dc0 r15: 1
    fsb: 200000000 gsb: fffffffffbc240e0 ds: 0
    es: 0 fs: 0 gs: 0
    trp: d err: 0 rip: fffffffffb94a051
    cs: 28 rfl: 10282 rsp: fffffffffbc45850
    ss: 30
    fffffffffbc45670 unix:real_mode_end+6ad1 ()
    fffffffffbc45750 unix:trap+97b ()
    fffffffffbc45760 unix:cmntrap+13f ()
    fffffffffbc45870 genunix:init_node+51 ()
    fffffffffbc458a0 genunix:i_ndi_config_node+f3 ()
    fffffffffbc458c0 genunix:i_ddi_attachchild+41 ()
    fffffffffbc458f0 genunix:devi_attach_node+71 ()
    fffffffffbc45930 genunix:ndi_devi_online+a5 ()
    fffffffffbc45960 unix:add_cpunode2devtree+e4 ()
    fffffffffbc45970 unix:post_startup+78 ()
    fffffffffbc459b0 genunix:main+d3 ()
    fffffffffbc459c0 unix:_start+95 ()
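
    Since the panic happens before single-user mode is reached, one way out is to boot the failsafe archive from the GRUB menu, mount the root slice, and back out the most recently applied patches with patchrm using -R so it operates on the mounted root rather than the miniroot. A sketch; the device name c1t0d0s0 and the patch ID are examples only:

    ```
    # From the GRUB menu choose the "Solaris failsafe" entry, then:
    mount /dev/dsk/c1t0d0s0 /a        # example device -- use your root slice
    # List the patches applied to the mounted root (newest last):
    patchadd -R /a -p | tail
    # Back out a suspect patch against the mounted root:
    patchrm -R /a 123456-01           # patch ID is illustrative
    umount /a
    reboot
    ```

    Removing patches in the reverse of the order they were applied avoids reintroducing the dependency problems that may have caused this in the first place.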

    I am seeing something similar to this, but my machine comes up and then crashes after a random amount of time. Any ideas which patches could have caused this?
    Jun 24 13:32:08 voldemort2 unix: [ID 836849 kern.notice]
    Jun 24 13:32:08 voldemort2 ^Mpanic[cpu3]/thread=fffffe8000c6ec80:
    Jun 24 13:32:08 voldemort2 genunix: [ID 683410 kern.notice] BAD TRAP: type=d (#gp General protection) rp=fffffe8000c6ea70 addr=fffffe84c13befdb
    Jun 24 13:32:08 voldemort2 unix: [ID 100000 kern.notice]
    Jun 24 13:32:08 voldemort2 unix: [ID 839527 kern.notice] sched:
    Jun 24 13:32:08 voldemort2 unix: [ID 753105 kern.notice] #gp General protection
    Jun 24 13:32:08 voldemort2 unix: [ID 358286 kern.notice] addr=0xfffffe84c13befdb
    Jun 24 13:32:08 voldemort2 unix: [ID 243837 kern.notice] pid=0, pc=0xfffffffffb834fed, sp=0xfffffe8000c6eb68, eflags=0x10286
    Jun 24 13:32:08 voldemort2 unix: [ID 211416 kern.notice] cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
    Jun 24 13:32:08 voldemort2 unix: [ID 354241 kern.notice] cr2: fffffe84c13befdb cr3: 1072d000 cr8: c
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] rdi: ffffffff8027c531 rsi: 1 rdx: 200
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] rcx: 2 r8: ffffffff8027c4c0 r9: ffffffff8027c4c0
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] rax: fffffe8000c6ec80 rbx: ffffffff98401000 rbp: fffffe8000c6ebc0
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] r10: ffffffff97bab7b4 r11: ff00000000000000 r12: ffffffffb5fd4bc0
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] r13: 4 r14: ffffffff8027c531 r15: 1
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] fsb: ffffffff80000000 gsb: ffffffff98401000 ds: 43
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] es: 43 fs: 0 gs: 1c3
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] trp: d err: 0 rip: fffffffffb834fed
    Jun 24 13:32:08 voldemort2 unix: [ID 592667 kern.notice] cs: 28 rfl: 10286 rsp: fffffe8000c6eb68
    Jun 24 13:32:08 voldemort2 unix: [ID 266532 kern.notice] ss: 30
    Jun 24 13:32:08 voldemort2 unix: [ID 100000 kern.notice]
    Jun 24 13:32:08 voldemort2 genunix: [ID 655072 kern.notice] fffffe8000c6e980 unix:real_mode_end+71e1 ()
    Jun 24 13:32:08 voldemort2 genunix: [ID 655072 kern.notice] fffffe8000c6ea60 unix:trap+b04 ()
    Jun 24 13:32:08 voldemort2 genunix: [ID 655072 kern.notice] fffffe8000c6ea70 unix:cmntrap+140 ()
    Jun 24 13:32:08 voldemort2 genunix: [ID 655072 kern.notice] fffffe8000c6ebc0 unix:mutex_owner_running+d ()
    Jun 24 13:32:08 voldemort2 genunix: [ID 655072 kern.notice] fffffe8000c6ec20 cpqary3:cpqary3_sw_isr+5a ()
    Jun 24 13:32:08 voldemort2 genunix: [ID 655072 kern.notice] fffffe8000c6ec60 unix:av_dispatch_softvect+62 ()
    Jun 24 13:32:08 voldemort2 genunix: [ID 655072 kern.notice] fffffe8000c6ec70 unix:intr_thread+b4 ()
    Jun 24 13:32:08 voldemort2 unix: [ID 100000 kern.notice]
    Jun 24 13:32:08 voldemort2 genunix: [ID 672855 kern.notice] syncing file systems...
    Jun 24 13:32:09 voldemort2 genunix: [ID 733762 kern.notice] 350
    Jun 24 13:32:10 voldemort2 genunix: [ID 733762 kern.notice] 349
    Jun 24 13:32:11 voldemort2 genunix: [ID 733762 kern.notice] 348
    Jun 24 13:32:38 voldemort2 last message repeated 20 times
    Jun 24 13:32:39 voldemort2 genunix: [ID 622722 kern.notice] done (not all i/o completed)
    Jun 24 13:32:40 voldemort2 genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c3t0d0s1, offset 1719074816, content: kernel
    Jun 24 13:33:13 voldemort2 genunix: [ID 409368 kern.notice] ^M100% done: 1232512 pages dumped, compression ratio 4.65,
    Jun 24 13:33:13 voldemort2 genunix: [ID 851671 kern.notice] dump succeeded
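
    Since your panic saved a dump ("dump succeeded"), you can pull the panic stack and message buffer out of it with mdb. The dump files are extracted by savecore into /var/crash/<hostname> by default; the bounds number 0 below is an example:

    ```
    # Analyze the saved crash dump with the modular debugger:
    cd /var/crash/voldemort2
    mdb -k unix.0 vmcore.0
    > ::status        # panic string and dump summary
    > ::stack         # panic thread stack (should match the trace above)
    > ::msgbuf        # kernel messages leading up to the panic
    > $q
    ```

    The stack here dies in cpqary3_sw_isr, i.e. inside the cpqary3 (HP Smart Array) driver's soft interrupt handler, so it is worth checking whether any of the installed patches touched that driver or the interrupt framework, and whether HP ships a newer cpqary3 for this Solaris level.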
