ZFS crash

I ran into an interesting dilemma today and I'm wondering if anyone here can shed light on why this happened.
I have a number of pools, including the root pool, on on-board disks in the server. I also have one pool on a SAN disk, outside the system. Last night the SAN crashed, and shortly thereafter the Solaris system executed a number of cron jobs, most of which ran against the pool on the SAN. This caused a number of problems, most notably that when the SAN eventually came back up, those cron jobs finished and then crashed the system again.
Only by destroying (zfs destroy) the newly created ZFS file systems that the cron jobs had made was the system able to boot up again. As long as those corrupted ZFS file systems remained on the SAN disk, not even the rpool would boot up correctly: none of the ZFS file systems would mount, and most services were disabled. Once I destroyed the newly created ZFS file systems, everything instantly mounted and all services started.
Question: why would those few ZFS file systems prevent ALL pools from mounting, even ones on different disks and file systems, and prevent all services from starting? I thought ZFS was more resilient to this sort of thing. I will have to edit my scripts and add a SAN check to make sure it is up before they execute, to prevent this from happening again. Luckily I still had all the raw data the cron jobs were working with, so I was able to quickly re-create what they did originally.
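Something like this guard at the top of each cron script is what I have in mind; a minimal sketch, where "sanpool" is a placeholder for the pool that lives on the SAN:
#!/bin/sh
# skip the job unless the SAN-backed pool is healthy
state=`zpool list -H -o health sanpool 2>/dev/null`
if [ "$state" != "ONLINE" ]; then
    logger -p user.err "sanpool not ONLINE (${state:-missing}); skipping job"
    exit 1
fi
# ... the real work against the SAN-backed datasets goes here ...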

If no one answers here, try again on the OpenSolaris ZFS forums.
alan

Similar Messages

  • Crash dump size too big (ZFS cache is counted as kernel pages)

    Hi,
After a system crash, no crash dump could be saved: the kernel image is 17 GB (!), probably because we use ZFS and the ZFS cache is counted as kernel pages.
How can we avoid saving the ZFS cache in the dump?
    Thanks.
    Alain.
    dumpadm
    Dump content: kernel pages
    Dump device: /dev/dsk/c4t20000014C3CA8778d0s1 (dedicated)
    Savecore directory: /var/crash/ppor6
    Savecore enabled: yes
    savecore -v
    System dump time: Fri Nov 2 10:52:09 2007
    savecore: not enough space in /var/crash/ppor6 (5706 MB avail, 17737 MB needed)
    ::memstat
    Page Summary       Pages      MB   %Tot
    Kernel           2210567   17270    53%
    Anon             1148368    8971    27%
    Exec and libs       9315      72     0%
    Page cache        134335    1049     3%
    Free (cachelist)  119964     937     3%
    Free (freelist)   556241    4345    13%
    Total            4178790   32646
    Physical         4110937   32116
    >
    Solaris 10 6/06 s10s_u2wos_09a SPARC
    System Configuration: Sun Microsystems sun4u Sun Fire V890
    System clock frequency: 150 MHz
    Memory size: 32768 Megabytes

    I checked around with more colleagues, and the problem of the ZFS ARC being included in the crash dump is described in BUG ID 4894692 (caching data in heap inflates crash dump).
    This bug is fixed in Solaris 10 U4 and in kernel patch 120011-14.
    If you don't want the ZFS ARC to inflate your crash dumps, upgrade to Solaris 10 U4, or at least to that kernel patch.
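    As an interim measure before patching, you can also shrink what gets dumped with dumpadm(1M); a minimal sketch using its standard content settings:
    dumpadm -c curproc    # dump only kernel pages of the panicking process
    dumpadm -c kernel     # full kernel content, once the patch is in place
    Note that curproc dumps carry less information, so only use this until the kernel patch is installed.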

  • DB2 on Solaris x64 - ZFS as filesystem possible?

    Environment:
    Solaris 10 5/08
    Full root zone
    Tablespaces on loopback mounted ZFS filesystem
    DB2 LUW 9.5 FP2a
    The installation was smooth and worked.
    After some days I encountered crashes. db2diag was reporting something like
    "filesystem ZFS not supported"
    Unfortunately the system is currently not available to check the exact messages. This happened only when specific transactions/actions were carried out; I believe the system wanted to do some direct-I/O-related things.
    Are there any known issues with this configuration?
    Markus

    I could reproduce the problem:
    2009-01-20-17.14.25.258156+060 E54981582E588      LEVEL: Warning
    PID     : 15217                TID  : 14          PROC : db2sysc 0
    INSTANCE: db2eh4               NODE : 000         DB   : EH4
    APPHDL  : 0-26470              APPID: 170.60.143.1.50595.090120161401
    AUTHID  : SAPEH4
    EDUID   : 14                   EDUNAME: db2agent (EH4) 0
    FUNCTION: DB2 UDB, oper system services, sqlo_enable_dio_cio_using_ioctl, probe:30
    MESSAGE : ZRC=0x870F00B7=-2029059913=SQLO_UNSUPPORTED
              "Operation is unsupported."
    DATA #1 : <preformatted>
    Unsupported file system type zfs for Direct I/O.
    2009-01-20-17.14.25.258493+060 E54982171E588      LEVEL: Warning
    PID     : 15217                TID  : 14          PROC : db2sysc 0
    INSTANCE: db2eh4               NODE : 000         DB   : EH4
    APPHDL  : 0-26470              APPID: 170.60.143.1.50595.090120161401
    AUTHID  : SAPEH4
    EDUID   : 14                   EDUNAME: db2agent (EH4) 0
    FUNCTION: DB2 UDB, oper system services, sqlo_enable_dio_cio_using_ioctl, probe:30
    MESSAGE : ZRC=0x870F00B7=-2029059913=SQLO_UNSUPPORTED
              "Operation is unsupported."
    DATA #1 : <preformatted>
    Unsupported file system type zfs for Direct I/O.
    % db2level
    DB21085I  Instance "db2eh4" uses "64" bits and DB2 code release "SQL09052" with
    level identifier "03030107".
    Informational tokens are "DB2 v9.5.0.2", "s080911", "U820798", and Fix Pack
    "2a".
    Does anyone have any clue on that?
    Thanks!
    Markus
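    One possible workaround while direct I/O is unsupported on ZFS (a sketch, not an official fix; "TS1" is a hypothetical tablespace name): force buffered I/O on the affected tablespaces so DB2 stops issuing the directio ioctl.
    db2 connect to EH4
    db2 "ALTER TABLESPACE TS1 FILE SYSTEM CACHING"
    db2 terminate
    FILE SYSTEM CACHING is standard DB2 tablespace DDL; repeat this for each affected tablespace.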

  • Unbootable Solaris 10 x86 installed on ZFS root file system

    Hi all,
    I have an unbootable Solaris 10 x86 installation on a ZFS root file system, on an IDE HDD.
    The BIOS keeps showing the message
    DISK BOOT FAILURE, PLEASE INSERT SYSTEM BOOT DISK
    Please note:
    1- the HDD is connected properly and recognized by the system
    2- GRUB doesn't show any messages
    Is there any guide to recover the system, or a detailed procedure to boot it again?
    Thanks,,,

    It's not clear if this is a recently installed system that is refusing to boot, OR if the system was working fine and then crashed.
    If it's the former, I would suggest you check the BIOS settings to make sure it's booting from the right hard disk. In any case, the Solaris 10 installation should have written the GRUB stage1 and stage2 blocks to the beginning of the disk.
    If the system crashed and is refusing to boot, you can try to boot from a Solaris 10 installation DVD. Choose the single-user shell option and see if it can find your system. You should be able to use format/devfsadm/etc. to do the actual troubleshooting. If your disk is still responding, try a `zpool import` to see if there is any data that ZFS can recognize (it usually has many backup uberblocks and disk labels scattered around the disk).
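    If the disk and pool turn out to be intact, a minimal sketch of the repair from the DVD's single-user shell (c0d0s0 is a placeholder for your actual boot slice):
    zpool import -f -R /a rpool
    installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c0d0s0
    The -R /a import keeps the pool mounted under an alternate root so it doesn't collide with the DVD environment; installgrub rewrites the stage1/stage2 blocks the BIOS is failing to find.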

  • System crash attempting to use the packet filtering on Solaris 10, MU7

    I have been attempting to port my kernel module to run on Solaris 10, MU7 (from MU6). Some changes to the packet filtering hooks interface require me to make code changes and linker option changes, i.e. -Nmisc/neti -Nmisc/hook.
    I now have my module loading successfully and "hooking" packets. However, I am seeing instability, and after processing on the order of 100-200 packets the system crashes. See the stack dump below for details.
    Also note that initially my callback hook function is very simple, i.e. it just returns 0.
    I need assistance identifying the root cause. The key code fragments are as follows:
    int
    _init(void)
    {
            /* allocate a control block using net_instance_alloc */
            /* populate the nin_name, nin_create, nin_destroy, and
               nin_shutdown fields with valid callback functions */
            /* register the control block using net_instance_register */
    }

    static int
    _attach(dev_info_t *dip, ddi_attach_cmd_t cmd)
    {
            /* initialise a hook control block using HOOK_INIT */
            /* perform a protocol lookup (using net_protocol_lookup) on the
               net_id provided by the nin_create function callback */
            /* register the hook with the net_id protocol using net_hook_register */
    }

    static int
    myipf_hook4_in(hook_event_token_t tok, hook_data_t info, void *arg)
    {
            /* simple callback function for test purposes */
            return (0);
    }
    System Stack trace:
    Boot device: /virtual-devices@100/channel-devices@200/disk@0:a File and args:
    SunOS Release 5.10 Version Generic_139555-08 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: bfs-t5440-03-ldm12
    NIS domain name is bfs.nis
    Reading ZFS config: done.
    bfs-t5440-03-ldm12 console login:
    panic[cpu9]/thread=2a100a67ca0: BAD TRAP: type=9 rp=2a100a67630 addr=7b6e8d48 mmu_fsr=0
    sched: trap type = 0x9
    addr=0x7b6e8d48
    pid=0, pc=0x7b6e8d48, sp=0x2a100a66ed1, tstate=0x1606, context=0x0
    g1-g7: 1910, 18b0, 2a100a678f0, 60010776b14, 1910, 0, 2a100a67ca0
    000002a100a67350 unix:die+9c (9, 2a100a67630, 7b6e8d48, 0, 2a100a67410, 182b400)
    %l0-3: 000000000100954c 0000000000000009 0000060020ac1620 00000000010523ac
    %l4-7: 00000000018a3c78 0000060020ac1848 000003000481dbe0 00000000010ac400
    000002a100a67430 unix:trap+6cc (2a100a67630, 10000, 0, 0, 30004028000, 2a100a67ca0)
    %l0-3: 0000000000000000 000000000185b480 0000000000000009 0000000000000000
    %l4-7: 0000000000000000 0000000000000000 0000000000001606 0000000000010200
    000002a100a67580 unix:ktl0+64 (300014c8e40, 2a100a67890, 600114fb428, 3, 1, 0)
    %l0-3: 0000030004028000 0000000000000048 0000000000001606 0000000001021604
    %l4-7: 00000000003c0000 0000000000000001 0000000000000000 000002a100a67630
    000002a100a676d0 hook:hook_run+7c (30001b039c0, 300014c8e40, 2a100a67890, 60012566ea8, 7b6e8d48, 1)
    %l0-3: 0000030001b039c8 00000600117df3c0 0000000001878888 0000000000000000
    %l4-7: 0000000000000000 000000000000003c 0000000000000000 0000000000000000
    000002a100a67780 ip:ip_input+3b4 (0, 600135ca040, 0, 6001359bc28, 0, 0)
    %l0-3: 0000000000000000 0000000000000000 0000000000000000 0000060011562000
    %l4-7: 00000000e0000000 0000000000000001 0000000000000000 0000000000000000
    000002a100a67910 dls:soft_ring_drain+78 (600135d1f00, 60011dfa940, 2, 2000000, 2, 0)
    %l0-3: 0000000000000000 0000000000000000 0000000000000004 0000000000000005
    %l4-7: 000006001359bc28 00000600135ca040 000000007be1c238 000000000000fffe
    000002a100a679c0 dls:soft_ring_worker+64 (600135d1f00, 0, 2, 600135d1f4c, 0, 2a100a67a8a)
    %l0-3: 000002a100a67a88 0000000000000000 000002a10001fca0 000002a10001fca0
    %l4-7: 0000000000000002 0000000000000000 0000000000000002 00000000018f1000
    syncing file systems... [1] 104 [1] 95 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 [1] 4 done (not all i/o completed)
    dumping to /dev/dsk/c0d0s1, offset 644284416, content: kernel
    100% done: 118970 pages dumped, compression ratio 10.00, dump succeeded
    rebooting...
    Resetting...
    -eugene

    I have checked the WebLogic download link.
    Currently WebLogic is only available for the following platforms:
    1. Windows (32-bit JVM)
    2. Linux (32-bit JVM)
    3. Sun Solaris (SPARC only) (32-bit JVM)
    There is no generic installer available for WebLogic 9.2.
    Thus what I want is a WebLogic 9.2 setup for an x86 machine.
    I have tried to run the WebLogic 9.2 setup for Linux on Sun Solaris x86.
    But it did not run; it also gave an error message that some package is missing in the /lib/.. folder.

  • How to find reason for system crash?

    Hello,
    I have a test system running in a VirtualBox environment and it's crashing every now and then, sometimes twice a day, but sometimes it runs for 3 days before a crash happens.
    system info
    root>cat /etc/*release*
    Solaris 10 10/09 s10x_u8wos_08a X86
    root>uname -a
    SunOS 5.10 Generic_142910-17 i86pc i386 i86pc
    dump files
    root>ls -l vmdump.*
    -rw-r--r-- 1 root root 116064256 Jan 13 05:35 vmdump.0
    -rw-r--r-- 1 root root 108003328 Jan 13 10:15 vmdump.1
    -rw-r--r-- 1 root root 112852992 Jan 14 18:53 vmdump.2
    -rw-r--r-- 1 root root 129236992 Jan 17 08:41 vmdump.3
    -rw-r--r-- 1 root root 122486784 Jan 17 16:39 vmdump.4
    started to use mdb
    root@cs5vs01>mdb -k unix.4 vmcore.4
    Loading modules: [ unix krtld genunix specfs dtrace cpu.generic uppc pcplusmp ufs ipc ip hook neti sctp arp usba fctl nca lofs zfs nfs random md cpc fcip sppp ]
    ::stack
    vpanic()
    kadmin+0x517()
    uadmin+0xc7()
    sys_syscall+0x17b()
    $C
    fffffe800100ee60 vpanic()
    fffffe800100eeb0 kadmin+0x517()
    fffffe800100ef00 uadmin+0xc7()
    fffffe800100ef10 sys_syscall+0x17b()
    I wonder how I can continue from this point? Any support would be greatly appreciated!
    Anyone knows about a good step by step debug guide? For example how to logically track down the root cause, good example commands and so on?
    Thank you!
    BR
    Daniel

    We can see from the following stack that the panic was caused by someone or some app calling uadmin(1M):
    $C
    fffffe800100ee60 vpanic()
    fffffe800100eeb0 kadmin+0x517()
    fffffe800100ef00 uadmin+0xc7()
    fffffe800100ef10 sys_syscall+0x17b()
    So the question is who/what. ::ptree will give you the process tree, and you should see from that the execname responsible. What apps do you have running in this VBox environment? Oracle RAC uses the uadmin interface because it doesn't have a failfast driver of its own. There are several other applications that also use the uadmin syscall for similar reasons. Normally they panic the node/system because something has timed out and they need to restore cluster/system stability and integrity.
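    To make that concrete, a first pass in mdb on one of your dumps might look like this (a sketch using standard dcmds; vmdump.4 is from your listing and must be expanded with savecore first if you haven't already):
    savecore -vf vmdump.4
    mdb -k unix.4 vmcore.4
    > ::panicinfo
    > ::ptree
    > ::msgbuf
    > $c
    ::panicinfo summarises the panic (cpu, thread, message), ::ptree shows the process tree so you can spot the execname that issued uadmin, and ::msgbuf shows the console messages leading up to the panic.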
    With regards to the step-by-step guide, no such document exists because it's impossible to write one. Each crash dump is different, each person does things slightly differently. What you need to do is understand how to use the tools (typically mdb or scat) and then start poking about in crash dumps. In fact you may find scat (Solaris Crash Analysis Tool) more user friendly and easier to learn than mdb. SCAT can be downloaded from https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=SCAT-5.2-G-F@CDS-CDS_SMI
    There's some information for mdb in the man page and at the following locations:
    http://dlc.sun.com/osol/docs/content/MODDEBUG/intro-1.html
    http://download.oracle.com/docs/cd/E19253-01/816-5041/index.html
    Unfortunately Crash Dump Analysis (CDA) isn't something that can easily be done via the forums. If you raise a service request it'll come to the Kernel group and we can then have a look for you.
    Regards,
    Steve

  • Solaris 10 / RSYNC / ZFS Kernel Panic

    I've already raised a call with Sun about this, but am wondering if anyone else is getting anything like this. We have a new set of V215s we're attempting to roll out for a project; however, since Sunday each one of them has crashed at least once when rsyncing data across from the older servers to the new ZFS filesystem on these boxes...
    Below are the crash messages we are getting. Has anyone had anything similar, or any ideas?
    Oct 15 21:33:21 h3-img1n unix: [ID 836849 kern.notice]
    Oct 15 21:33:21 h3-img1n ^Mpanic[cpu1]/thread=3000289e360:
    Oct 15 21:33:21 h3-img1n unix: [ID 340138 kern.notice] BAD TRAP: type=31 rp=2a102f35340 addr=e mmu_fsr=0 occurred in module "genunix" due to a NULL pointer dereference
    Oct 15 21:33:21 h3-img1n unix: [ID 100000 kern.notice]
    Oct 15 21:33:21 h3-img1n unix: [ID 839527 kern.notice] rsync:
    Oct 15 21:33:22 h3-img1n unix: [ID 520581 kern.notice] trap type = 0x31
    Oct 15 21:33:22 h3-img1n unix: [ID 381800 kern.notice] addr=0xe
    Oct 15 21:33:22 h3-img1n unix: [ID 101969 kern.notice] pid=14302, pc=0x11dd670, sp=0x2a102f34be1, tstate=0x4480001607, context=0x14
    Oct 15 21:33:22 h3-img1n unix: [ID 743441 kern.notice] g1-g7: 48, 9, 9, 30134780180, a, 0, 3000289e360
    Oct 15 21:33:22 h3-img1n unix: [ID 100000 kern.notice]
    Oct 15 21:33:22 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35060 unix:die+78 (31, 2a102f35340, e, 0, 2a102f35120, 107b000)
    Oct 15 21:33:22 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 00000000c0800000 0000000000000031 0000000001000000 0000000000002000
    Oct 15 21:33:22 h3-img1n %l4-7: 000000000181a4f8 000000000181a400 0000000000000000 0000004480001607
    Oct 15 21:33:22 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35140 unix:trap+9d4 (2a102f35340, 5, 1fff, 1c00, 0, 1)
    Oct 15 21:33:22 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 00000600054f6070 0000000000000031 0000000000000000
    Oct 15 21:33:22 h3-img1n %l4-7: ffffffffffffe000 00000600143bdad0 0000000000000001 0000000000000005
    Oct 15 21:33:22 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35290 unix:ktl0+48 (f, 60001f4b3f8, 31356b00, 7efefeff, 81010100, ff00)
    Oct 15 21:33:22 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000001400 0000004480001607 000000000101aee0
    Oct 15 21:33:22 h3-img1n %l4-7: 0000060004526d00 0000000001002b2e 0000000000000000 000002a102f35340
    Oct 15 21:33:22 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f353e0 genunix:vn_setpath+40 (60014497780, 60014497780, 3017094c7c0, 2a102f35680, 5, 31)
    Oct 15 21:33:23 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000060014497780 0000000000000014 000000000000000f
    Oct 15 21:33:23 h3-img1n %l4-7: 0000000000000000 000000000000000e 0000000000000002 0000000000000002
    Oct 15 21:33:23 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35490 genunix:fop_lookup+f4 (60014497780, 2a102f35680, 2a102f35678, 7b750b84, 60001d2dd80, 60004732288)
    Oct 15 21:33:23 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000060004732040 00000000222dd1e3 00000000222dd1e2
    Oct 15 21:33:23 h3-img1n %l4-7: 000000002d1c7677 000000002d1c7676 0000000000000000 00000000018b3800
    Oct 15 21:33:23 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35550 genunix:lookuppnvp+344 (2a102f35940, 0, 60014497780, 2a102f35678, 2a102f35680, 60001035a40)
    Oct 15 21:33:23 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 00000000018ad838 0000060014497780 0000000000000000 0000000000000000
    Oct 15 21:33:23 h3-img1n %l4-7: 00000300227718d8 0000060001035a40 0000000000000000 0000000000000010
    Oct 15 21:33:23 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35790 genunix:lookuppnat+120 (60004e95380, 0, 0, 0, 2a102f35ad8, 0)
    Oct 15 21:33:23 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 0000000000000054 0000000000000031 0000060001035a40 0000000000000053
    Oct 15 21:33:23 h3-img1n %l4-7: 0000000000000b0b 000000000000012f 00000300227718d8 000002a102f35940
    Oct 15 21:33:23 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35850 genunix:lookupnameat+5c (0, 0, 0, 0, 2a102f35ad8, 0)
    Oct 15 21:33:24 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 0000060004732040 0000000000002420 0000000000002000 0000000000000001
    Oct 15 21:33:24 h3-img1n %l4-7: 00000000ffbfd7e0 000002a102f35940 0000000000000000 00000000018ad800
    Oct 15 21:33:24 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35960 genunix:cstatat_getvp+198 (ffd19400, ffbfd7e0, 1, 0, 2a102f35ad8, 0)
    Oct 15 21:33:24 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: ffffffffffd19553 0000000000000000 0000000000000000 0000000004010002
    Oct 15 21:33:24 h3-img1n %l4-7: 0000000000000000 00000000018ad800 000000000185ec00 00000300227718d8
    Oct 15 21:33:24 h3-img1n genunix: [ID 723222 kern.notice] 000002a102f35a20 genunix:cstatat64_32+40 (ffffffffffd19553, ffbfd7e0, 1000, ffbfd6c0, 1000, 0)
    Oct 15 21:33:24 h3-img1n genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 000002a102f35ad0 0000000000000001 0000000000000000
    Oct 15 21:33:24 h3-img1n %l4-7: 0000000000000000 000002a102f35ad8 0000000000000000 0000000000000000
    Oct 15 21:33:24 h3-img1n unix: [ID 100000 kern.notice]
    Oct 15 21:33:24 h3-img1n genunix: [ID 672855 kern.notice] syncing file systems...
    Oct 15 21:33:25 h3-img1n genunix: [ID 733762 kern.notice] 24
    Oct 15 21:33:26 h3-img1n genunix: [ID 733762 kern.notice] 17
    Oct 15 21:33:27 h3-img1n genunix: [ID 733762 kern.notice] 16
    Oct 15 21:33:53 h3-img1n last message repeated 20 times
    Oct 15 21:33:54 h3-img1n genunix: [ID 622722 kern.notice] done (not all i/o completed)
    Oct 15 21:33:55 h3-img1n genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c0t0d0s1, offset 65536, content: kernel
    Oct 15 21:36:35 h3-img1n genunix: [ID 409368 kern.notice] ^M100% done: 876815 pages dumped, compression ratio 3.14,
    Oct 15 21:36:35 h3-img1n genunix: [ID 851671 kern.notice] dump succeeded
    Oct 15 21:37:39 h3-img1n genunix: [ID 540533 kern.notice] ^MSunOS Release 5.10 Version Generic_120011-14 64-bit
    Oct 15 21:37:39 h3-img1n genunix: [ID 943907 kern.notice] Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Oct 15 21:37:39 h3-img1n Use is subject to license terms.
    Oct 15 21:37:39 h3-img1n genunix: [ID 678236 kern.info] Ethernet address = 0:14:4f:a2:c4:70
    Oct 15 21:37:39 h3-img1n unix: [ID 673563 kern.info] NOTICE: Kernel Cage is ENABLED
    Oct 15 21:37:39 h3-img1n unix: [ID 389951 kern.info] mem = 8388608K (0x200000000)
    Oct 15 21:37:39 h3-img1n unix: [ID 930857 kern.info] avail mem = 8390737920
    Oct 15 21:37:39 h3-img1n rootnex: [ID 466748 kern.info] root nexus = Sun Fire V215
    Oct 15 21:37:39 h3-img1n rootnex: [ID 349649 kern.info] pseudo0 at root
    Oct 15 21:37:39 h3-img1n genunix: [ID 936769 kern.info] pseudo0 is /pseudo
    Oct 15 21:37:39 h3-img1n rootnex: [ID 349649 kern.info] scsi_vhci0 at root
    Oct 15 21:37:39 h3-img1n genunix: [ID 936769 kern.info] scsi_vhci0 is /scsi_vhci
    Oct 15 21:37:39 h3-img1n rootnex: [ID 349649 kern.info] px0 at root: SAFARI 0x1e 0x600000
    Cheers,
    Mike


  • HpOVO 8.x on Solaris 10. free() crashes

    Hello,
    Has anybody come across this HP OVO crash issue on Solaris 10?
    We are facing an issue on Solaris 10 (5.10 Generic_118833-33) with HP OVO 8.3.
    HP OVO API: opcdata_free(&l_eventMsg);
    This function works on all other Solaris versions except Solaris 10.
    Has any patch for free() been delivered for Solaris 10?
    Any kind of help is appreciated.
    Best regards,
    Rupesh

    Thanks Jon for your reply.
    We checked the file system. We are not using zfs but we are using ufs.
    # fstyp /dev/md/dsk/d1
    ufs
    Moreover:
    We did some debugging using gdb and analysed the core. Following are some useful backtraces:
    #0 0xfdb550ec in freeunlocked () from /usr/lib/libc.so.1
    #1 0xfdb55094 in free () from /usr/lib/libc.so.1
    #2 0xff052fe0 in csmpb_empty_opcdata () from /usr/lib/libopcsv_r.so
    #3 0xff054318 in opcdata_free () from /usr/lib/libopcsv_r.so
    Can anybody give us a clue as to what may be causing the crash in freeunlocked()?
    Any type of help is appreciated.
    Best regards,
    Rupesh
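    Since a crash inside free() almost always means the heap was corrupted earlier by the caller, running the process under libumem may catch the corruption at its source. A sketch (the process name is a placeholder):
    LD_PRELOAD=libumem.so.1 UMEM_DEBUG=default UMEM_LOGGING=transaction ./your_ovo_process
    Then, when it dumps core:
    mdb core
    > ::umem_verify
    > ::umalog
    ::umem_verify checks every umem cache for corruption; ::umalog shows the recent allocation/free transaction log.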

  • Solaris 8 restart syslogd with rc script, server crash

    I have this SPARC Sun server. I added the -t option to syslogd in the rc script to prevent it from listening for remote syslog requests, and the server crashes when I execute S74syslog start.
    release: 5.8 (64-bit)
    version: Generic_117350-44
    machine: sun4u
    I have other Solaris 8 servers where this is not happening; they are running Generic_117350-50. Has anyone experienced this issue? I could update the kernel, but it is a remote server and that is not as simple as it sounds.
    Thank you.

    What's the crash message in /var/adm/messages?
    Also, if you go to /var/crash/<hostname> you will see a bunch of files there which end in different numbers; take the number and feed it to mdb. For example, if you have unix.0 and vmcore.0 you would type:
    mdb 0
    Then, at the new prompt, type $c to print the stack trace, for example:
    unknown(/var/crash/unknown):# ls
    bounds     unix.1     unix.3     unix.5     vmcore.1   vmcore.3   vmcore.5
    unix.0     unix.2     unix.4     vmcore.0   vmcore.2   vmcore.4
    unknown(/var/crash/unknown):# mdb 4
    mdb: warning: dump is from SunOS 5.10 Generic_141445-09; dcmds and macros may not match kernel implementation
    > $c
    Loading modules: [ unix krtld genunix specfs dtrace cpu.generic uppc pcplusmp ufs ip hook neti sctp arp usba uhci fctl nca lofs zfs audiosup cpc random fcip logindmux ptm ]
    fop_rwlock+0x15(d3f43300, 1, 0)
    write+0xdf()
    sys_sysenter+0x101()
    ... then paste the output back here for a few of them.
    I think mdb was introduced in Solaris 8, if not, the old command is 'kadb'.
    .7/M.

  • Where to find download for "Solaris cat" (crash analysis tool)

    Good afternoon,
    I have a machine that crashes regularly.
    Recently I found that there is a tool, called "Solaris CAT", that can be used for analysing crash dumps from Solaris machines.
    Unfortunately I can't find any download for this tool.
    Does anybody know the URL where Solaris CAT can be downloaded from?
    Thanks
    Dominique

    Hello,
    I'm afraid we're not there yet:
    <machine_name> root# savecore -vd -f vmcore.0
    savecore: bad magic number 8000
    And concerning "mdb":
    <machine_name> root# mdb 0
    Loading modules: [ unix krtld genunix specfs dtrace ufs sd mpt px ldc ip hook neti sctp arp usba fctl md lofs zfs random nfs crypto ptm logindmux ipc ]
    > ::status
    debugging crash dump vmcore.0 (64-bit) from <machine_name>
    operating system: 5.10 Generic_127127-11 (sun4v)
    panic message: Unrecoverable hardware error
    dump content: kernel pages only
    > $c
    vpanic(1096fe0, 2a100277730, 300014d0c40, 1, 0, 189c400)
    process_nonresumable_error+0x234(2a100277870, 1, 1, 40, 0, 1)
    ktl0+0x64(300078471d8, 0, 0, 0, 10000, 60026f6a0d8)
    poll_common+0x2a8(ffffffff7fffcc34, 600247c10a8, 2a100277ad0, 0, ffffffff7fffcc34, 3)
    pollsys+0xf8(ffffffff7fffcc34, 3, ffffffff7fff6980, 0, 2a100277ad0, 0)
    syscall_trap+0xac(ffffffff7fffcc34, 3, ffffffff7fff6980, 0, 110554, 0)
    > ::panicinfo
    cpu                1
    thread      300053b00a0
    message Unrecoverable hardware error
    tstate         e2001603
    g1                1
    g2          1096c00
    g3                1
    g4                2
    g5                2
    g6                0
    g7      300053b00a0
    o0          1096fe0
    o1      2a100277708
    o2                0
    o3                0
    o4                0
    o5      60021ec6000
    o6      2a100276dd1
    o7          111099c
    pc          1054250
    npc          1054254
    y                0
    => Does this mean that I need to replace CPU 1 (I thought that the problem was due to a memory board)?
    Thanks
    Dominique
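    A note on that last question: the panic only records which CPU took the trap; it does not by itself say which FRU is bad. The fault manager usually has that detail; a sketch of the standard triage commands:
    fmadm faulty
    fmdump
    fmdump -eV
    fmadm faulty lists currently faulted resources with FRU labels, fmdump lists the diagnosed fault events, and fmdump -eV shows the raw error telemetry, which distinguishes memory errors from CPU errors.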

  • ZFS help - disk id has changed

    Hi. I'm running Solaris 10 on x86 platform.
    I had 3 disks on my computer, 1 IDE (with solaris 10) and 2 SATA.
    Both SATA drives are (were) using zfs, though in two different pools.
    One of the drives crashed and I had to remove it, but I forgot to remove it from the pool.
    Naturally, things started complaining after reboot, but the system booted. I managed to get rid of the broken drive by removing the whole pool (I no longer need it).
    But after removing the broken drive from the computer, the device ID (c0d0, c1d0, c2d0, etc.) of
    the remaining SATA drive changed. Before removal, this SATA drive was c1d0s0; after removal it became c2d0s0 (I think s0, or maybe p0). I don't know why the enumeration changed.
    The question is: is there a way to tell ZFS to change the drive in the zpool from c1d0 to c2d0 without erasing the remaining drive's contents?
    Kind Regards,
    Yaerek

    Problem resolved. I had to reset the memory buffer inside the 1 TB drive; without the reset, the motherboard was recognizing it as a 33 MB drive.
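    For the record, on the original question: ZFS records device paths in the pool's labels, and a plain export/import makes it rescan /dev/dsk and pick up the new name. A minimal sketch ("tank" stands in for the surviving pool):
    zpool export tank
    zpool import tank
    zpool status tank
    After the import, zpool status should show c2d0 where c1d0 used to be, with no data touched.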

  • Webconsole does not display ZFS after patching

    I patched my server with smpatch, and after the reboot I was not able to see the ZFS administration in the webconsole. No applications were registered.
    After searching I found how to register the app, but I am not sure why this broke.
    wcadmin list -a did not show zfs
    wcadmin deploy -a console -x zfs /usr/share/webconsole/webapps/zfs
    now it does.
    This server is from a fresh install and had nothing attached or installed other than the latest patch cluster.

    The file "zfs.reg" is put in an incorrect location so OS upgrade removes the file. You are on the right track to redeploy the application by the wcadmin deploy command. Now if you still cannot log in to the console, or see a JVM crash after clicking into the ZFS GUI, follow this:
    ==================================================================================================================
    First, S10u6 has a different security setting when the server communicates to the outside world.
    Run this:
    # svccfg -s svc:/system/webconsole setprop options/tcp_listen=true; smcwebserver restart
    Now log on to the console: https://<server_name>:6789
    Now you should be able to see the GUI. Install this patch if you see a JVM crash. This will happen if you have at least one zpool configured in the system. Thus, it also happens to servers that use ZFS as their root file system.
    For S10 - SPARC,
    http://sunsolve.sun.com/search/document.do?assetkey=1-21-141104-01-1&searchclause=141104
    For S10 - x86,
    http://sunsolve.sun.com/search/document.do?assetkey=1-21-141105-01-1&searchclause=141105
    Run "patchadd" to install the patch. Make sure you restart the web console by running "smcwebserver restart".

  • ZFS not mounted on reboot - Possibly udev

    Linux nicenas 3.18.4-1-ARCH #1 SMP PREEMPT Tue Jan 27 20:45:02 CET 2015 x86_64 GNU/Linux
    zfs-git 0.6.3_r170_gd958324f_3.18.4_1-1
    From Repo: demz-repo-core
    My system recently crashed and I was forced to recreate my system drive. Luckily I had stored all of my data in a ZFS pool. I set up a new copy of Arch and installed the ZFS kernel packages.
    I executed:
    zpool import media
    zpool set cachefile=/etc/zfs/zpool.cache media
    systemctl enable zfs.target
    systemctl start zfs.target
    Everything mounted just fine.
    The problem is that now, after every reboot, it does not mount. My first thought was that there was an issue with the hostid. I verified that my hostid was the same in zdb as well as `hostid`, and just for the sake of it I also added it as a kernel parameter.
    After lots of ripping through logs and support posts I decided to add debug to the kernel command line. Once debug was enabled it would mount on every boot. This leads me to believe that there's a timing issue. Maybe the drives take too long to be detected... Not sure. But I don't know how to find out.
    Can anyone offer advice on how to figure out which service to put the delay on, and how?
    Thanks

    Update:
    I disabled debug in the kernel command line and rebooted a few times. Each time the ZFS pool was not mounted. My next step was to add debug to udev. The problem is that when I added debug to udev, the pool would mount.
    How do I troubleshoot this?
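    If the root cause really is device readiness at import time, one experiment is to order the import after udev has settled. A sketch, assuming zfs-import-cache.service is the unit doing the import in this packaging:
    mkdir -p /etc/systemd/system/zfs-import-cache.service.d
    printf '[Unit]\nRequires=systemd-udev-settle.service\nAfter=systemd-udev-settle.service\n' \
        > /etc/systemd/system/zfs-import-cache.service.d/wait-udev.conf
    systemctl daemon-reload
    The drop-in makes the import unit wait for systemd-udev-settle.service, which blocks until udev has processed all pending device events.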

  • Crash recovery for zones

    hi,
    What's the best way to do crash recovery for zones?
    Thanks,
    Rosario

    There are some recipes using the snapshot feature of ZFS. I tried a solution for non-ZFS filesystems:
    to backup:
    df -F nfs | sed "s/ .*//" > /tmp/excludes # this excludes NFS mounted filesystems
    find /$zonepath/$zonename -fstype lofs -prune 2> /dev/null | sed "s/^\/$zonepath\///">> /tmp/excludes # this excludes LOFS mounted filesystems
    echo "$zonename/root/proc" >> /tmp/excludes
    echo "$zonename/root/var/run" >> /tmp/excludes
    echo "$zonename/root/system/contract/all" >> /tmp/excludes
    echo "$zonename/root/system/contract/process" >> /tmp/excludes
    echo "$zonename/root/etc/mnttab" >> /tmp/excludes
    echo "$zonename/root/etc/svc/volatile" >> /tmp/excludes
    echo "$zonename/root/system/object" >> /tmp/excludes
    echo "$zonename/root/tmp" >> /tmp/excludes
    echo "$zonename/root/var/svc/log" >> /tmp/excludes
    echo "$zonename/root/var/log/syslog*" >> /tmp/excludes
    echo "$zonename/root/var/saf/zsmon/log" >> /tmp/excludes
    echo "$zonename/root/var/adm/messages*" >> /tmp/excludes
    echo "$zonename/root/var/adm/wtmpx" >> /tmp/excludes
    echo "$zonename/root/etc/svc/volatile/repository_door" >> /tmp/excludes
    echo "$zonename/root/tmp/.X11-unix/X0" >> /tmp/excludes
    echo "$zonename/dev/.devfsadm_synch_door" >> /tmp/excludes
    cd /$zonepath
    tar -EcfX /$somepath/$zonename.tar /tmp/excludes $zonename 2> /tmp/$zonename.err
    to restore:
    extract the tar
    mkdir all mount points
    mkdir all dynamic-data directories (e.g. $zonename/root/proc), assigning correct permissions and ownership
    create files for all dynamic data (e.g. $zonename/root/etc/mnttab), assigning correct permissions and ownership
    This solution requires the backed-up machine and the restored one to have the same OS level and patches.
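    A sketch of the restore side as commands (paths and modes are illustrative; adjust for your zone):
    cd /$zonepath
    tar -xf /$somepath/$zonename.tar
    mkdir -p $zonename/root/proc $zonename/root/var/run $zonename/root/tmp
    chmod 1777 $zonename/root/tmp       # /tmp needs the sticky bit
    touch $zonename/root/etc/mnttab     # recreate dynamic files
    The mkdir/touch steps recreate the mount points and dynamic files that were excluded from the tar.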

  • Webconsole with ZFS does not work

    Hi,
    If I try to open the ZFS GUI in the webconsole, the smcwebserver dumps a Java error and restarts itself.
    What can be the reason for this behaviour?
    The head of the dump log file looks like:
    # An unexpected error has been detected by HotSpot Virtual Machine:
    # SIGSEGV (0xb) at pc=0xfeae30c0, pid=4767, tid=27
    # Java VM: Java HotSpot(TM) Server VM (1.5.0_07-b03 mixed mode)
    # Problematic frame:
    # V [libjvm.so+0x2e30c0]
    --------------- T H R E A D ---------------
    Current thread (0x0054a930): JavaThread "http-6789-Processor5" daemon [_thread_in_vm, id=27]
    siginfo:si_signo=11, si_errno=0, si_code=1, si_addr=0x00000000
    Registers:
    O0=0x0054a930 O1=0x00000002 O2=0xff018640 O3=0x00007aa0
    O4=0x00006ef4 O5=0x00006c00 O6=0xe5b79028 O7=0x00000000
    G1=0x80000000 G2=0x00000000 G3=0xff0129f4 G4=0x00008220
    G5=0x00008000 G6=0x00000000 G7=0xfe52ca00 Y=0x00000000
    PC=0xfeae30c0 nPC=0xfeae30c4
    Top of Stack: (sp=0xe5b79028)
    0xe5b79028: ff01863c 00000001 00007800 00008d54
    0xe5b79038: 00008b4c 00008c00 00008800 00000000
    0xe5b79048: 00000006 0054a930 00000000 00152a4c
    0xe5b79058: ff01c654 fefc4000 e5b79088 e6d98274
    0xe5b79068: 00004019 00000013 00000016 e5b791a8
    0xe5b79078: 000003d0 e5b79117 e5b790e8 01b791a8
    0xe5b79088: e6daaa30 00000048 e6dab3ac feae2f5c
    0xe5b79098: ff0108a8 fead92c0 e66b77f0 e6dab3ac
    Instructions: (pc=0xfeae30c0)
    0xfeae30b0: 94 10 20 00 98 10 00 1a 40 0c 9d 9e 9a 10 20 01
    0xfeae30c0: d4 06 a0 00 d6 06 a0 08 7f f9 a8 6f d0 02 80 0b
    Stack: [0xe5b00000,0xe5b80000), sp=0xe5b79028, free space=484k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V [libjvm.so+0x2e30c0]
    C [libzfs_jni.so.1+0x827c]
    C [libzfs_jni.so.1+0x308c]
    C [libzfs_jni.so.1+0x3190]
    C [libzfs_jni.so.1+0x3500]
    C [libzfs.so.2+0x15870] zpool_iter+0x6c
    C [libzfs_jni.so.1+0x4f94] Java_com_sun_zfs_common_model_SystemDataModel_getPools+0x90
    j com.sun.zfs.common.model.SystemDataModel.getPools()[Lcom/sun/zfs/common/model/Pool;+35660
    j com.sun.zfs.common.model.SystemDataModel.getPools()[Lcom/sun/zfs/common/model/Pool;+0
    j com.sun.zfs.web.admin.zfsmodule.model.DeviceTreeModel.appendPools(Lcom/sun/web/ui/model/CCNavNodeInterface;Lcom/sun/zfs/common/model/ZDataModel;I)I+1
    j com.sun.zfs.web.admin.zfsmodule.model.MainDeviceTreeModel.init()V+62
    j com.sun.zfs.web.admin.zfsmodule.model.MainDeviceTreeModel.<init>()V+5
    j com.sun.zfs.web.admin.zfsmodule.DevicesTreeViewBean.createDeviceTreeModel()Lcom/sun/web/ui/model/CCTreeModel;+4
    j com.sun.zfs.web.admin.zfsmodule.DevicesTreeViewBean.createChild(Ljava/lang/String;)Lcom/iplanet/jato/view/View;+16
    j com.iplanet.jato.view.ContainerViewBase.ensureChild(Ljava/lang/String;)Lcom/iplanet/jato/view/View;+20
    j com.iplanet.jato.view.ContainerViewBase.getChild(Ljava/lang/String;)Lcom/iplanet/jato/view/View;+443
    j com.iplanet.jato.view.ContainerViewBase.beginChildDisplay(Lcom/iplanet/jato/view/event/ChildDisplayEvent;)Z+135
    --------------- S Y S T E M ---------------
    OS: Solaris 10 11/06 s10s_u3wos_10 SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 14 November 2006
    uname:SunOS 5.10 Generic_127111-06 sun4u (T2 libthread)
    rlimit: STACK 8192k, CORE infinity, NOFILE 65536, AS infinity
    load average:0.68 0.44 0.26
    CPU:total 4 has_v8, has_v9, has_vis1, has_vis2, is_ultra3
    Memory: 8k page, physical 8388608k(7390952k free)
    vm_info: Java HotSpot(TM) Server VM (1.5.0_07-b03) for solaris-sparc, built on May 3 2006 01:22:35 by unknown with unknown Workshop:0x550
    If you need the full content, let me know.
    Thanks for your help and cheers, Michael

    First, S10u6 has a different security setting when the server communicates to the outside world.
    Run this:
    # svccfg -s svc:/system/webconsole setprop options/tcp_listen=true; smcwebserver restart
    Now log on to the console: https://<server_name>:6789
    If you do not see the ZFS Administration GUI showing there, do this:
    # wcadmin deploy -a zfs -x zfs /usr/share/webconsole/webapps/zfs ; smcwebserver restart
    Now you should be able to see the GUI. Install this patch if you see a JVM crash. This will happen if you have at least one zpool configured in the system. Thus, it also happens to servers that use ZFS as their root file system.
    For S10 - SPARC,
    http://sunsolve.sun.com/search/document.do?assetkey=1-21-141104-01-1&searchclause=141104
    For S10 - x86,
    http://sunsolve.sun.com/search/document.do?assetkey=1-21-141105-01-1&searchclause=141105
    Run "patchadd" to install the patch. Make sure you restart the web console by running "smcwebserver restart".
