Create ZONE in ZFS pool solaris10

Hi Gurus,
I'm reading some Solaris 10 tutorials about ZFS and zones. Is it possible to create a new storage pool using the hard disk on which I installed Solaris?
I'm a bit new to Solaris; I have a SPARC box on which I'm learning Solaris 10. I installed Solaris 10 using the ZFS file system. I think my box has only one disk, but I'm not sure. I see 46 GB of free space when running "df -kh".
I ran the format command; this is the output:
root@orclidm # format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@780/pci@0/pci@9/scsi@0/sd@0,0
1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@780/pci@0/pci@9/scsi@0/sd@1,0
Specify disk (enter its number):
"zpool list" displays this:
root@orclidm # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool    68G  13.1G  54.9G  19%  ONLINE  -
"zfs list" displays this:
root@orclidm # zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       21.3G  45.6G   106K  /rpool
rpool/ROOT                  11.6G  45.6G    31K  legacy
rpool/ROOT/s10s_u10wos_17b  11.6G  45.6G  11.6G  /
rpool/dump                  1.50G  45.6G  1.50G  -
rpool/export                  66K  45.6G    32K  /export
rpool/export/home             34K  45.6G    34K  /export/home
rpool/swap                  8.25G  53.9G    16K  -
I read in a tutorial that when you create a zpool you need to specify an empty hard disk. Is that correct?
Please point me to the best approach for creating zones using ZFS pools.
Regards

manin21 wrote:
Hi Gurus,
I'm reading some Solaris 10 tutorials about ZFS and zones. Is it possible to create a new storage pool using the hard disk on which I installed Solaris?
If you have a spare partition you may use that.
I'm a bit new to Solaris; I have a SPARC box on which I'm learning Solaris 10. I installed Solaris 10 using the ZFS file system. I think my box has only one disk, but I'm not sure. I see 46 GB of free space when running "df -kh".
I ran the format command; this is the output:
root@orclidm # format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@780/pci@0/pci@9/scsi@0/sd@0,0
1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@780/pci@0/pci@9/scsi@0/sd@1,0
Specify disk (enter its number):
This shows two disks. In a production setup you might mirror them.
"zpool list" displays this:
root@orclidm # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool    68G  13.1G  54.9G  19%  ONLINE  -
The command:
zpool status
would show you which devices you are using.
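For example (illustrative output for an unmirrored root pool; the slice name here is an assumption, not taken from the thread):
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t0d0s0  ONLINE       0     0     0

errors: No known data errors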
"zfs list" displays this:
root@orclidm # zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       21.3G  45.6G   106K  /rpool
rpool/ROOT                  11.6G  45.6G    31K  legacy
rpool/ROOT/s10s_u10wos_17b  11.6G  45.6G  11.6G  /
rpool/dump                  1.50G  45.6G  1.50G  -
rpool/export                  66K  45.6G    32K  /export
rpool/export/home             34K  45.6G    34K  /export/home
rpool/swap                  8.25G  53.9G    16K  -
I read in a tutorial that when you create a zpool you need to specify an empty hard disk. Is that correct?
No.
You can use partitions/slices instead. A ZFS storage pool is composed of one or more devices; each device can be a whole disk, a disk slice, or even a file, if I remember correctly (but you really don't want to use a file normally).
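For example, any of these forms would work (device and pool names are hypothetical; a file-backed pool is for testing only):
# zpool create datapool c0t1d0
# zpool create datapool c0t1d0s4
# mkfile 1g /poolfile
# zpool create datapool /poolfile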
Please point me to the best approach for creating zones using ZFS pools.
Regards
Your rpool is 68 GB in size on a 72 GB disk, so the disk is fully used and there is no space for another ZFS pool. If zpool status shows that your disk is mirrored by ZFS, that is that. Otherwise you may choose to create a storage pool on the other disk (not best production practice).
Often one simply creates a new ZFS filesystem within an existing pool instead:
zfs create -o mountpoint=/zones rpool/zones
zfs create rpool/zones/myzone
Then use zonepath=/zones/myzone when creating the zone.
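A sketch of the whole zone creation, assuming the filesystem layout above (zone name hypothetical):
# zonecfg -z myzone
zonecfg:myzone> create
zonecfg:myzone> set zonepath=/zones/myzone
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
# zoneadm -z myzone install
# zoneadm -z myzone boot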
- I was googling to cross-check my answer ... the following blog has an example, but it is a little old and may be OpenSolaris-oriented.
https://blogs.oracle.com/DanX/entry/solaris_zfs_and_zones_simple
Authoritative information is at http://docs.oracle.com, notably:
http://docs.oracle.com/cd/E23823_01/index.html
http://docs.oracle.com/cd/E23823_01/html/819-5461/index.html
http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html

Similar Messages

  • SFTP chroot from non-global zone to zfs pool

    Hi,
    I am unable to create an SFTP chroot inside a zone to a shared folder on the global zone.
    Inside the global zone:
    I have created a ZFS filesystem (rpool/data) and then mounted it at /data.
    I then created some shared folders: /data/sftp/ipl/import and /data/sftp/ipl/export
    I then created a non-global zone and added a file system that loops back to /data.
    Inside the zone:
    I then did the usual stuff to create a chroot sftp user, similar to: http://nixinfra.blogspot.com.au/2012/12/openssh-chroot-sftp-setup-in-linux.html
    I modified the /etc/ssh/sshd_config file and hard-wired the ChrootDirectory to /data/sftp/ipl.
    When I attempt to sftp into the zone, an error message is displayed in the zone -> fatal: bad ownership or modes for chroot directory /data/
    Multiple web sites warn that folder ownership and access privileges are important. However, issuing chown -R root:iplgroup /data made no difference. Perhaps it is something to do with the fact that the folders were created in the global zone?
    If I create a simple shared folder inside the zone it works, e.g. /data3/ftp/ipl......ChrootDirectory => /data3/ftp/ipl
    If I use the users home directory it works. eg /export/home/sftpuser......ChrootDirectory => %h
    FYI, the reason for having a ZFS shared folder is to allow separate SFTP and FTP zones with a common/shared data repository for FTP and SFTP exchanges with remote systems, e.g. one remote client pushes data to the FTP server and a second remote client pulls the data via SFTP. Having separate zones increases security?
    Any help would be appreciated to solve this issue.
    Regards John
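    Note: sshd requires every directory in the ChrootDirectory path (the directory itself and all of its ancestors) to be owned by root and not writable by group or others. A sketch of ownership and modes that would normally satisfy the check, using the paths and group from the post:
    # chown root:root /data /data/sftp /data/sftp/ipl
    # chmod 755 /data /data/sftp /data/sftp/ipl
    # chown root:iplgroup /data/sftp/ipl/import /data/sftp/ipl/export
    # chmod 775 /data/sftp/ipl/import /data/sftp/ipl/export
    The import/export directories sit inside the chroot, so they can stay group-writable.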

    sanjaykumarfromsymantec wrote:
    Hi,
    I want to do IPC between zones (communication between processes running in two different zones). What are the different techniques that can be used? I am not interested in TCP/IP (AF_INET) sockets.
    Zones are designed to prevent most visibility between non-global zones and other zones, so network communication (like you might use between two physical machines) is the most common method.
    You could mount a global zone filesystem into multiple non-global zones (via lofs) and have your programs push data there. But you'll probably have to poll for updates. I'm not certain that's easier or better than network communication.
    Darren
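    For reference, a lofs mount of a global-zone directory into a zone is configured like this (zone name and paths hypothetical; takes effect on the next zone boot):
    # zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/shared
    zonecfg:myzone:fs> set special=/export/shared
    zonecfg:myzone:fs> set type=lofs
    zonecfg:myzone:fs> end
    zonecfg:myzone> commit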

  • Need Best Practice for creating BE in ZFS boot environment with zones

    Good Afternoon -
    I have a SPARC system with a ZFS root file system and zones. I need to create a BE whenever we do patching or upgrades to the O/S. I have run into issues when testing booting off the new BE, where the zones did not show up. I tried to go back to the original BE by running luactivate on it and received errors. I did a fresh install of the O/S from CD-ROM on a ZFS filesystem, then ran the following commands to create the zones, create the BE, activate it, and boot off it. Please tell me if any steps were left out or if the sequence was incorrect.
    # zfs create -o canmount=noauto rpool/ROOT/S10be/zones
    # zfs mount rpool/ROOT/S10be/zones
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z1
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z2
    # zfs mount rpool/ROOT/s10be/zones/z1
    # zfs mount rpool/ROOT/s10be/zones/z2
    # chmod 700 /zones/z1
    # chmod 700 /zones/z2
    # zonecfg -z z1
    z1: No such zone configured
    Use 'create' to begin configuring a new zone
    zonecfg:z1> create
    zonecfg:z1> set zonepath=/zones/z1
    zonecfg:z1> verify
    zonecfg:z1> commit
    zonecfg:z1> exit
    # zonecfg -z z2
    z2: No such zone configured
    Use 'create' to begin configuring a new zone
    zonecfg:z2> create
    zonecfg:z2> set zonepath=/zones/z2
    zonecfg:z2> verify
    zonecfg:z2> commit
    zonecfg:z2> exit
    # zoneadm -z z1 install
    # zoneadm -z z2 install
    # zlogin -C -e 9. z1
    # zlogin -C -e 9. z2
    Output from zoneadm list -v:
    # zoneadm list -v
    ID NAME    STATUS   PATH       BRAND   IP
     0 global  running  /          native  shared
     2 z1      running  /zones/z1  native  shared
     4 z2      running  /zones/z2  native  shared
    Now for the BE create:
    # lucreate -n newBE
    # zfs list
    rpool/ROOT/newBE 349K 56.7G 5.48G /.alt.tmp.b-vEe.mnt   <-- showed this same type of mount for all filesystems
    # zfs inherit -r mountpoint rpool/ROOT/newBE
    # zfs set mountpoint=/ rpool/ROOT/newBE
    # zfs inherit -r mountpoint rpool/ROOT/newBE/var
    # zfs set mountpoint=/var rpool/ROOT/newBE/var
    # zfs inherit -r mountpoint rpool/ROOT/newBE/zones
    # zfs set mountpoint=/zones rpool/ROOT/newBE/zones
    and did it for the zones too.
    When I ran luactivate newBE, it came up with errors, so I again changed the mountpoints and rebooted.
    Once it came up, I ran luactivate newBE again and it completed successfully. Ran lustatus and got:
    # lustatus
    Boot Environment           Is        Active  Active     Can     Copy
    Name                       Complete  Now     On Reboot  Delete  Status
    s10s_u8wos_08a             yes       yes     no         no      -
    newBE                      yes       no      yes        no      -
    Ran init 0, then at the OK prompt:
    ok boot -L
    picked item two, which was newBE, then booted.
    It came up, but df showed no zones, zfs list showed no zones, and when I cd into /zones there is nothing there.
    Please help!
    thanks julie

    The issue here is that lucreate adds an entry to the vfstab in newBE for the ZFS filesystems of the zones. You need to lumount newBE /mnt, then edit /mnt/etc/vfstab and remove the entries for any ZFS filesystems. Then, if you luumount it, you can continue. It's my understanding that this has been reported to Sun and that the fix is in the next release of Solaris.
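    In command form, the workaround is roughly:
    # lumount newBE /mnt
    # vi /mnt/etc/vfstab      (delete the entries for the zones' zfs filesystems)
    # luumount newBE
    # luactivate newBE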

  • How can i create a /home zfs partition

    I need to create a ZFS /home partition without it being part of the automounter.
    LDAP home user configurations need to be in /home.
    Disabling the automounter works; however,
    the automounter is needed to mount other file systems.

    Edit the /etc/auto_master file and remove or comment out the /home entry. Then restart the autofs service. This should take /home out of the automounter. You can then use zfs create to create the local /home ZFS filesystem as you need it. For example, if you want /home on the "rpool" zfs pool, you could use "zfs create -o mountpoint=/home rpool/home" and then create your home directories inside that.
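    As a sketch, following the reply's rpool example ("username" is a placeholder):
    # vi /etc/auto_master                (comment out the /home entry)
    # svcadm restart autofs
    # zfs create -o mountpoint=/home rpool/home
    # zfs create rpool/home/username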

  • Solaris 10 upgrade and zfs pool import

    Hello folks,
    I am currently running "Solaris 10 5/08 s10x_u5wos_10 X86" on a Sun Thumper box where two drives are a mirrored UFS boot volume and the rest are used in ZFS pools. I would like to upgrade my system to "10/08 s10x_u6wos_07b X86" to be able to use ZFS for the boot volume. I've seen documentation that describes how to break the mirror, create a new BE, and so on. This system is only being used as an iSCSI target for Windows systems, so there is really nothing on the box that I need other than my ZFS pools. Could I simply pop the DVD in, perform a clean install, and select my current UFS drives as the install location, basically telling Solaris to wipe them clean and create an rpool out of them? Once the installation is complete, would I be able to import my existing ZFS pools?
    Thank you very much

    Sure. As long as you don't write over any of the disks in your ZFS pool you should be fine.
    Darren
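    After the reinstall, the import would look something like this (pool name hypothetical; -f is only needed if the pool was never exported):
    # zpool import                       (lists pools found on the attached devices)
    # zpool import -f datapool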

  • ZFS on solaris10 panics HP Proliant server BAD TRAP page fault in kernel

    When installing Solaris 10 9/10 on an HP ProLiant DL380 G5,
    the system resets when choosing ZFS as the root file system,
    even after the CPQary3 device driver has been installed.
    The CPQary3 device driver is the latest version, 2.3.0; it is necessary because of the presence of an HP Smart Array E200 controller
    and is installed successfully during the installation from DVD.
    When choosing UFS as the root file system, the OS installs fine.
    But when ZFS is used to configure a pool after the installation, the system resets again.
    Has anyone experienced the same problems? Does anyone know how to solve this?
    Solaris installs fine when choosing UFS as the root file system.
    After that, a second disk partition of type Other is created to hold a ZFS pool with the following command:
    zpool create datapool c0t0d0p2
    After that the system panics; here is an extract from the /var/adm/messages file:
    Mar 14 16:36:49 solarisintel ^Mpanic[cpu1]/thread=fffffe800069bc60:
    Mar 14 16:36:49 solarisintel genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=fffffe800069b940 addr=238 occurred in
    module "unix" due to a NULL pointer dereference
    Mar 14 16:36:49 solarisintel unix: [ID 100000 kern.notice]
    Mar 14 16:36:49 solarisintel unix: [ID 839527 kern.notice] sched:
    Mar 14 16:36:49 solarisintel unix: [ID 753105 kern.notice] #pf Page fault
    Mar 14 16:36:49 solarisintel unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x238
    Mar 14 16:36:49 solarisintel unix: [ID 243837 kern.notice] pid=0, pc=0xfffffffffb8406fb, sp=0xfffffe800069ba38, eflags=0x10246
    Mar 14 16:36:49 solarisintel unix: [ID 211416 kern.notice] cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f0<xmme,fxsr,pge,mce,pae,pse>
    Mar 14 16:36:49 solarisintel unix: [ID 354241 kern.notice] cr2: 238 cr3: 11ada000 cr8: c
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] rdi: 238 rsi: 4 rdx: fffffe800069bc60
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] rcx: 14 r8: 0 r9: 0
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] rax: 0 rbx: 238 rbp: fffffe800069ba60
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] r10: 0 r11: 1 r12: 100000
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] r13: 0 r14: 4 r15: ffffffffb3a61af0
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] fsb: 0 gsb: ffffffff9b2ac800 ds: 43
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] es: 43 fs: 0 gs: 1c3
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] trp: e err: 2 rip: fffffffffb8406fb
    Mar 14 16:36:49 solarisintel unix: [ID 592667 kern.notice] cs: 28 rfl: 10246 rsp: fffffe800069ba38
    Mar 14 16:36:49 solarisintel unix: [ID 266532 kern.notice] ss: 30
    Mar 14 16:36:49 solarisintel unix: [ID 100000 kern.notice]
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069b850 unix:die+da ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069b930 unix:trap+5e6 ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069b940 unix:cmntrap+140 ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069ba60 unix:mutex_enter+b ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069ba70 zfs:zio_buf_alloc+1d ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069baa0 zfs:zio_vdev_io_start+120 ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069bad0 zfs:zio_execute+7b ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069baf0 zfs:zio_nowait+1a ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069bb60 zfs:vdev_probe+f0 ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069bba0 zfs:vdev_open+2b1 ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069bbc0 zfs:vdev_open_child+21 ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069bc40 genunix:taskq_thread+295 ()
    Mar 14 16:36:49 solarisintel genunix: [ID 655072 kern.notice] fffffe800069bc50 unix:thread_start+8 ()
    Mar 14 16:36:49 solarisintel unix: [ID 100000 kern.notice]
    Mar 14 16:36:49 solarisintel genunix: [ID 672855 kern.notice] syncing file systems...
    Mar 14 16:36:49 solarisintel genunix: [ID 904073 kern.notice] done
    Mar 14 16:36:50 solarisintel genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c0t0d0s1, offset 108593152, content: kernel
    Mar 14 16:36:54 solarisintel genunix: [ID 100000 kern.notice]
    Mar 14 16:36:54 solarisintel genunix: [ID 665016 kern.notice] ^M100% done: 210699 pages dumped,
    Mar 14 16:36:54 solarisintel genunix: [ID 851671 kern.notice] dump succeeded
    Mar 14 16:38:21 solarisintel genunix: [ID 540533 kern.notice] ^MSunOS Release 5.10 Version Generic_142910-17 64-bit
    I will try again now with Solaris 10 5/09 (U7).
    Chris.
    Edited by: user5485639 on 14-Mar-2011 08:42

    What do you mean, after the driver is installed?
    S10 U9 has the Compaq Array3 driver pre-installed. No more adding it to the jumpstart tree or "adding a driver" necessary.
    I have setup several DL360 G3's and G4's which use the same controller.

  • iSCSI array died, held ZFS pool. Now box hangs

    I was doing some iSCSI testing and, on an x86 EM64T server running an out-of-the-box install of Solaris 10u5, created a ZFS pool on two RAID-0 arrays on an IBM DS300 iSCSI enclosure.
    One of the disks in the array died, the DS300 got really flaky, and now the Solaris box hangs during boot. It looks like it's trying to mount the ZFS filesystems. The box has two ZFS pools, or had two, anyway. The other ZFS pool is filled with VirtualBox images.
    Originally, I got a few iSCSI target offline messages on the console, so I booted to failsafe and tried to run iscsiadm to remove the targets, but that wouldn't work. So I just removed the contents of /etc/iscsi and all the iSCSI instances in /etc/path_to_inst on the root drive.
    Now the box hangs with no error messages.
    Anyone have any ideas what to do next? I'm willing to nuke the iSCSI ZFS pool as it's effectively gone anyway, but I would like to save the VirtualBox ZFS pool, if possible. But they are all test images, so I don't have to save them. The host itself is a test host with nothing irreplaceable on it, so I could just reinstall Solaris. But I'd prefer to figure out how to save it, even if only for the learning experience.

    Try this. Disconnect the iSCSI drives completely, then boot. My fallback plan on zfs if things get screwed up is to physically disconnect the zfs drives so that solaris doesn't see them on boot. It marks them failed and should boot. Once it's up, zpool destroy the pools WITH THE DRIVES DISCONNECTED so that it doesn't think there's a pool anymore. THEN reconnect the drives and try to do a "zpool import -f".
    The pools that are on intact drives should be still ok. In theory :)
    BTW, if you removed devices, you probably should do a reconfiguration boot (create a /a/reconfigure in failsafe mode) and make sure the devices gets reprobed. Does the thing boot in single user ( pass -s after the multiboot line in grub )? If it does, you can disable the iscsi svcs with "svcadm disable network/iscsi_initiator; svcadm disable iscsitgt".
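    Condensed into commands, the suggested sequence is roughly (pool names hypothetical; service names as given above):
    # touch /a/reconfigure               (from failsafe, with root mounted on /a)
    # svcadm disable network/iscsi_initiator
    # svcadm disable iscsitgt
    # zpool destroy iscsipool            (with the iSCSI drives still disconnected)
    # zpool import -f vboxpool           (after reconnecting the surviving drives)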

  • Zones on zfs question

    Hi,
    I'm running Solaris 10 Update 8 with the latest patch cluster installed. I also have one non-global zone running on the system. Here is how things are set up.
    The filesystems:
    zfs create -o canmount=noauto rpool/ROOT/10_U8/zones
    zfs create -o canmount=noauto rpool/ROOT/10_U8/zones/nonGlobalZone
    zfs set mountpoint=/zones rpool/ROOT/10_U8/zones
    zfs set mountpoint=/zones rpool/ROOT/10_U8/zones/nonGlobalZone
    Then I configured and installed the zone. Everything is running fine, except I noticed that the two ZFS filesystems I created do not show up when I run df. They do show up with zfs list, though. If I run zfs get all, I notice that the two filesystems are listed as not mounted. How is it that my zone is running if the filesystems show up as not mounted?
    Here is the output from zfs get all
    rpool/ROOT                                             mounted               no                     -
    rpool/ROOT/10_U8                                   mounted               yes                    -
    rpool/ROOT/10_U8/var                             mounted               yes                    -
    rpool/ROOT/10_U8/zones                         mounted               no                     -
    rpool/ROOT/10_U8/zones/nonGlobalZone  mounted               no                     -
    Here is the zoneadm list output
      ID NAME            STATUS     PATH                   BRAND    IP
       0 global          running    /                      native   shared
       1 nonGlobalZone   running    /zones/nonGlobalZone   native   shared

    I'm relatively new to zones, but my guess is that the zones were created in the root partition, under /zones. If you cd into a directory under /zones and run df, it will probably show that you are still in the / root partition. In my experience with ZFS, if I forget to mount a ZFS filesystem, the contents end up in the mountpoint directory on the root filesystem (and sometimes fill it up, which the system does not like...).
    -- Alan
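    Worth noting: datasets created with canmount=noauto are not mounted automatically at boot; they have to be mounted explicitly, as in the BE thread above:
    # zfs mount rpool/ROOT/10_U8/zones
    # zfs mount rpool/ROOT/10_U8/zones/nonGlobalZone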

  • Replace FC Card and ZFS Pools

    I have to replace a Qlogic ISP2200 dual port Fibre Channel card with a new card in a V480 server. I have 2 ZFS Pools that mount via that card. Would I have to export and import the ZFS pools when replacing the card? I've read you have to when moving the pools to a different server.
    Naturally the World Wide Number (WWN) would be different on the new FC card and other than changing my SAN switch zone information I'm not sure how ZFS would deal with this situation. The storage itself would not change.
    Any ideas are welcome.
    Running Solaris 10 (11/06) with kernel patch 125100-07
    Thanks,
    Chris

  • Updating memory capping for zones in default pool.

    Updating memory capping for zones in the default pool: has anyone else run into this, where the settings for physical/locked/shared memory and swap for zones within the default pool are not changeable? Is this a known bug or a feature?
    After moving a zone from the default pool to a user-created pool, updating these values, and then moving it back to the default pool, I find that I am then able to update these values for zones in the default pool.
    Anyone else hitting this?

    Is it a full zone or a sparse zone?
    prstat doesn't really understand everything that's going on. Its numbers are not accurate when pages are shared (very likely in a sparse zone).
    Darren

  • Help with setting up a Data Source: can't be created with non-existent Pool

    I want to use Oracle 9i with WebLogic 7.
    I have the following in my config.xml:
    <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
    Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
    Properties="user=SYSTEM" TestTableName="OID"
    URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>
    <JDBCDataSource JNDIName="DB_DS" Name="DB_DS" PoolName="Thin.Pool"/>
    The console seems happy, no error messages, but in the log I get:
    ####<Mar 31, 2003 6:33:45 PM MST> <Info> <HTTP> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <101047>
    <[ServletContext(id=4110316,name=console,context-path=/console)]
    FileServlet: Using standard I/O>
    ####<Mar 31, 2003 6:35:37 PM MST> <Info> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001082> <Creating Data Source named DB_DS for pool
    Thin.Pool>
    ####<Mar 31, 2003 6:35:37 PM MST> <Error> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001059> <Error during Data Source creation:
    weblogic.common.ResourceException: DataSource(DB_DS) can't be created
    with non-existent Pool (connection or multi) (Thin.Pool)
         at weblogic.jdbc.common.internal.JdbcInfo.validateConnectionPool(JdbcInfo.java:127)
         at weblogic.jdbc.common.internal.JdbcInfo.startDataSource(JdbcInfo.java:260)
         at weblogic.jdbc.common.internal.JDBCService.addDeploymentx(JDBCService.java:293)
         at weblogic.jdbc.common.internal.JDBCService.addDeployment(JDBCService.java:270)
         at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:375)
         at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:154)
         at java.lang.reflect.Method.invoke(Native Method)
         at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:732)
         at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:714)
         at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:417)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
         at weblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerImpl.java:952)
         at weblogic.management.internal.ConfigurationMBeanImpl.updateConfigMBeans(ConfigurationMBeanImpl.java:578)
         at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:419)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
         at weblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerImpl.java:952)
         at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:470)
         at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:198)
         at $Proxy16.addDeployment(Unknown Source)
         at weblogic.management.internal.DynamicMBeanImpl.unprotectedUpdateDeployments(DynamicMBeanImpl.java:1784)
         at weblogic.management.internal.DynamicMBeanImpl.access$0(DynamicMBeanImpl.java:1737)
         at weblogic.management.internal.DynamicMBeanImpl$1.run(DynamicMBeanImpl.java:1715)
         at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:780)
         at weblogic.management.internal.DynamicMBeanImpl.updateDeployments(DynamicMBeanImpl.java:1711)
         at weblogic.management.internal.DynamicMBeanImpl.setAttribute(DynamicMBeanImpl.java:1035)
         at weblogic.management.internal.ConfigurationMBeanImpl.setAttribute(ConfigurationMBeanImpl.java:353)
         at com.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:1358)
         at com.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:1333)
         at weblogic.management.internal.RemoteMBeanServerImpl.setAttribute(RemoteMBeanServerImpl.java:898)
         at weblogic.management.internal.MBeanProxy.setAttribute(MBeanProxy.java:324)
         at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:193)
         at $Proxy13.setTargets(Unknown Source)
         at java.lang.reflect.Method.invoke(Native Method)
         at weblogic.management.console.info.FilteredMBeanAttribute.doSet(FilteredMBeanAttribute.java:92)
         at weblogic.management.console.actions.mbean.DoEditMBeanAction.perform(DoEditMBeanAction.java:145)
         at weblogic.management.console.actions.internal.ActionServlet.doAction(ActionServlet.java:171)
         at weblogic.management.console.actions.internal.ActionServlet.doPost(ActionServlet.java:85)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1058)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:401)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:306)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:5445)
         at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:780)
         at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3105)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2588)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:213)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:189)
    Why does it say:
    can't be created with non-existent Pool
    Thanks,

    Add "Targets" attribute to the connection pool. You may
    get an idea how it looks like by searching config.xml
    for "targets". If target servers are not set, the pool won't be
    deployed and can not be used to a create datasource.
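    For example, the pool element from the config.xml above would gain a Targets attribute (server name hypothetical):
    <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
        Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
        Properties="user=SYSTEM" TestTableName="OID"
        Targets="myserver"
        URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>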
    Regards,
    Slava Imeshev
    "BBaker" <[email protected]> wrote in message
    news:[email protected]...
    I am wanting to use an Oracle 9i with WebLogic 7
    I have the following in my config.xml:
    <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
    Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
    Properties="user=SYSTEM" TestTableName="OID"
    URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>
    <JDBCDataSource JNDIName="DB_DS" Name="DB_DS" PoolName="Thin.Pool"/>
    The console seems happy, no error mesages but in the log I get:
    ####<Mar 31, 2003 6:33:45 PM MST> <Info> <HTTP> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <101047>
    <[ServletContext(id=4110316,name=console,context-path=/console)]
    FileServlet: Using standard I/O>
    ####<Mar 31, 2003 6:35:37 PM MST> <Info> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001082> <Creating Data Source named DB_DS for pool
    Thin.Pool>
    ####<Mar 31, 2003 6:35:37 PM MST> <Error> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001059> <Error during Data Source creation:
    weblogic.common.ResourceException: DataSource(DB_DS) can't be created
    with non-existent Pool (connection or multi) (Thin.Pool)
    atweblogic.jdbc.common.internal.JdbcInfo.validateConnectionPool(JdbcInfo.java:
    127)
    atweblogic.jdbc.common.internal.JdbcInfo.startDataSource(JdbcInfo.java:260)
    atweblogic.jdbc.common.internal.JDBCService.addDeploymentx(JDBCService.java:29
    3)
    atweblogic.jdbc.common.internal.JDBCService.addDeployment(JDBCService.java:270
    atweblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentT
    arget.java:375)
    atweblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentT
    arget.java:154)
    at java.lang.reflect.Method.invoke(Native Method)
    atweblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl
    .java:732)
    atweblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:7
    14)
    atweblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBea
    nImpl.java:417)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
    atweblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerI
    mpl.java:952)
    atweblogic.management.internal.ConfigurationMBeanImpl.updateConfigMBeans(Confi
    gurationMBeanImpl.java:578)
    atweblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBea
    nImpl.java:419)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
    atweblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerI
    mpl.java:952)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:470)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:198)
    at $Proxy16.addDeployment(Unknown Source)
    atweblogic.management.internal.DynamicMBeanImpl.unprotectedUpdateDeployments(D
    ynamicMBeanImpl.java:1784)
    atweblogic.management.internal.DynamicMBeanImpl.access$0(DynamicMBeanImpl.java
    :1737)
    atweblogic.management.internal.DynamicMBeanImpl$1.run(DynamicMBeanImpl.java:17
    15)
    atweblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManage
    r.java:780)
    atweblogic.management.internal.DynamicMBeanImpl.updateDeployments(DynamicMBean
    Impl.java:1711)
    atweblogic.management.internal.DynamicMBeanImpl.setAttribute(DynamicMBeanImpl.
    java:1035)
    atweblogic.management.internal.ConfigurationMBeanImpl.setAttribute(Configurati
    onMBeanImpl.java:353)
    atcom.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:135
    8)
    atcom.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:133
    3)
    atweblogic.management.internal.RemoteMBeanServerImpl.setAttribute(RemoteMBeanS
    erverImpl.java:898)
    atweblogic.management.internal.MBeanProxy.setAttribute(MBeanProxy.java:324)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:193)
    at $Proxy13.setTargets(Unknown Source)
    at java.lang.reflect.Method.invoke(Native Method)
    atweblogic.management.console.info.FilteredMBeanAttribute.doSet(FilteredMBeanA
    ttribute.java:92)
    atweblogic.management.console.actions.mbean.DoEditMBeanAction.perform(DoEditMB
    eanAction.java:145)
    atweblogic.management.console.actions.internal.ActionServlet.doAction(ActionSe
    rvlet.java:171)
    atweblogic.management.console.actions.internal.ActionServlet.doPost(ActionServ
    let.java:85)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    atweblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(Servle
    tStubImpl.java:1058)
    atweblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
    :401)
    atweblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
    :306)
    atweblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(W
    ebAppServletContext.java:5445)
    atweblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManage
    r.java:780)
    atweblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletCo
    ntext.java:3105)
    atweblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java
    :2588)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:213)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:189)
    Why does it say:
    can't be created with non-existent Pool
    Thanks,

  • Error in writing an Ant task for creating a new connection pool.

    I have written the following ant task to create a new connection pool in weblogic 10.3.
    <target name="pool.dev">
         <wlconfig url="http://localhost:7001/" username="weblogic" password="weblogic">
         <query domain="C:/weblogic/rtg-L0" name="myserver"/>
         <create type="JDBCConnectionPool" name="OneSourceConnectionPool">
         <set attribute="DriverName"
         value="oracle.jdbc.OracleDriver"/>
         <set attribute="InitialCapacity" value="1"/>
         <set attribute="MaxCapacity" value="5"/>
         <set attribute="Password" value="rating"/>
         <set attribute="Properties" value="user=rating"/>
         <set attribute="RefreshMinutes" value="0"/>
         <set attribute="ShrinkPeriodMinutes" value="15"/>
         <set attribute="ShrinkingEnabled" value="true"/>
         <set attribute="TestConnectionsOnRelease" value="true"/>
         <set attribute="TestConnectionsOnReserve" value="true"/>
         <set attribute="TestConnectionsOnCreate" value="true"/>
         <set attribute="TestTableName" value="SQL SELECT 1 FROM DUAL"/>
         <set attribute="URL"
         value="jdbc:oracle:thin:@xyz.com:1522:oradvl"/>
         <set attribute="Targets" value="myserver"/>
         </create>
         </wlconfig>
    </target>
    When I run it, I see the following error:
    BUILD FAILED
    C:\ganymede\eclipse\workspace1\RtgSvr\build.xml:286: Failed to connect to the server: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
         java.io.EOFException]
    Can anybody please help me regarding this...
    Thank you,
    Sowmya

    Hi everybody,
    Thank you very much for your replies. Actually, I added weblogic.jar to the classpath of the target, so now I don't see that error. But I have another problem, which is as follows:
    <target name="initJDBC">
    <wlconfig url="t3://${host}:${port}" username="${username}" password="${password}">
         <query domain="domain.name" type="Server" name="${target.server}" property="${target.server}"/>
         <create type="JDBCConnectionPool" name="TestConnectionPool">
         <set attribute="DriverName" value="oracle.jdbc.OracleDriver"/>
         <set attribute="Password" value="welcome"/>
         <set attribute="Properties" value="user=welcome"/>
         <set attribute="URL" value="jdbc:oracle:thin:@test.com:1522:oradvl"/>
              <set attribute="Targets" value=""/>
         <set attribute="TestTableName" value="SQL SELECT 1 FROM DUAL"/>
         <set attribute="TestConnectionsOnRelease" value="false"/>
         <set attribute="TestConnectionsOnReserve" value="true"/>
         </create>
         <create type="JDBCDataSource" name="TestDataSource">
              <set attribute="JNDIName" value="TestDataSource"/>
              <set attribute="PoolName" value="TestConnectionPool"/>
              <set attribute="Targets" value=""/>
              </create>
         </wlconfig>
         </target>
    I do not know what to give in the value field of <set attribute="Targets" value=""/>. The following is my build.properties file:
    target.server=myserver
    host=127.0.0.1
    port=7001
    username=weblogic
    password=weblogic
    domain.name=testDomain
    If I give <set attribute="Targets" value="${myserver}"/>, I get the following error:
    BUILD FAILED
    C:\ganymede\eclipse\workspace1\TestSvr\build.xml:290: Property not set: ${myserver}
    When I set myserver=myserver in build.properties, I get the following error:
    BUILD FAILED
    C:\ganymede\eclipse\workspace1\TestSvr\build.xml:290: Error invoking MBean command: java.lang.IllegalArgumentException: Property Name and value not valid for the MBean. Value myserver for parameter[Targets].java.lang.IllegalArgumentException: Unable to convert the argument valuemyserver to class javax.management.ObjectName.java.lang.reflect.InvocationTargetException
    Can someone please help me in this regard.
    Thank you,
    Sowmya
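    For what it's worth, one thing stands out in the pasted target: the query uses domain="domain.name" literally rather than the ${domain.name} property, so it may match no MBean and never set the property that Targets needs (the last error shows Targets expects an MBean ObjectName, not a plain string). A sketch of the corrected fragment, under that assumption:
         <query domain="${domain.name}" type="Server" name="${target.server}" property="server.mbean"/>
         <set attribute="Targets" value="${server.mbean}"/>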

  • Failed to obtain/create connection from connection pool after redeploy

    I have a web application (.war) that uses a JDBC connection pool. The application works fine; however, after I redeploy it using the administration console I get "Failed to obtain/create connection from connection pool [ Datavision_Pool ]. Reason : null", followed by "Error allocating connection : [Error in allocating a connection. Cause: null]" from javax.enterprise.resource.resourceadapter, and I am forced to restart the instance. I am running Sun Java System Application Server 9.1 (build b58g-fcs)
    using a connection pool to a Microsoft SQL 2000 database via inet software's JDBC drivers. I need to be able to redeploy applications without having to restart the instance. Any help is appreciated.

    I have turned on some additional diagnostics and found some answers and a work-around, but I think there may be a bug in the way JDBC connection pool classes are loaded. The actual error was a null pointer in the JDBC driver class, in the prepareStatement method. The only line in this method is "return factory.createPreparedStatement( this, sql );", and the only possible NPE would be if the factory were null, which should be impossible because it is a static variable initialized when the class is loaded.
    The problem occurs because we deploy the JDBC driver .jar file within our .war file, for use when a client doesn't have or want to use connection pooling. Apparently the connection pool picked up some of these classes, and when the .war was redeployed, the reference to the factory was lost for existing connections (I'm not sure how). If I remove the JDBC .jar file from the .war, it works, but that isn't an ideal solution. The other way to get it to work was to change the sun-web.xml file to have <class-loader delegate="true">. We previously had it set to false in version 8.1 because of interference with a different version of the Apache Tiles classes, which has now been addressed in version 9.1.
    I still think there is an issue, because the connection pool should never use the application-specific classloaders. Am I wrong to believe this?
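    The sun-web.xml change mentioned above looks like this (a minimal sketch):
    <sun-web-app>
        <class-loader delegate="true"/>
    </sun-web-app>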

  • Error while creating zone inside Solaris 11.2 beta ldom

    Hi,
    I have installed Solaris 11.2 in an LDom (SPARC OVM 3.1),
    and when I try to create a zone inside the guest domain, it always gives this error:
    Error occurred during execution of 'generated-transfer-3442-1' checkpoint.
            Failed Checkpoints:
            Checkpoint execution error:
                    Error refreshing publishers, 0/1 catalogs successfully updated:
                    Framework error: code: 28 reason: Operation too slow. Less than 1024 bytes/sec transfered the last 30 seconds
                    URL: 'http://pkg.oracle.com/solaris/beta/solaris/catalog/1/catalog.summary.C' (happened 4 times)
    Installation: Failed.  See install log at /system/volatile/install.3442/install_log
    ERROR: auto-install failed.

    You might want to tune PKG_CLIENT_LOWSPEED_TIMEOUT before running "zoneadm install"
    For example:
    # export PKG_CLIENT_LOWSPEED_TIMEOUT=300
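    Then run the install from the same shell so the setting is inherited (zone name hypothetical):
    # zoneadm -z myzone install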

  • Zfs pool I/O failures

    Hello,
    I've been using an external SAS/SATA tray, connected to a T5220 with a SAS cable, as storage for a media library. The weekly scrub cron job failed last week with all disks reporting I/O failures:
    zpool status
      pool: media_NAS
    state: SUSPENDED
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
       see: http://www.sun.com/msg/ZFS-8000-HC
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        2.34T scanned out of 9.59T at 14.7M/s, 143h43m to go
        0 repaired, 24.36% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   UNAVAIL  10.6K    75     0  experienced I/O failures
              raidz2-0  UNAVAIL  21.1K    10     0  experienced I/O failures
                c6t0d0  UNAVAIL    212     6     0  experienced I/O failures
                c6t1d0  UNAVAIL    216     6     0  experienced I/O failures
                c6t2d0  UNAVAIL    225     6     0  experienced I/O failures
                c6t3d0  UNAVAIL    217     6     0  experienced I/O failures
                c6t4d0  UNAVAIL    202     6     0  experienced I/O failures
                c6t5d0  UNAVAIL    189     6     0  experienced I/O failures
                c6t6d0  UNAVAIL    187     6     0  experienced I/O failures
                c6t7d0  UNAVAIL    219    16     0  experienced I/O failures
                c6t8d0  UNAVAIL    185     6     0  experienced I/O failures
                c6t9d0  UNAVAIL    187     6     0  experienced I/O failures
    The console outputs this repeated error:
    SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: 20
    PLATFORM: SUNW,SPARC-Enterprise-T5220, CSN: -, HOSTNAME: t5220-nas
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: e935894e-9ab5-cd4a-c90f-e26ee6a4b764
    DESC: The number of I/O errors associated with a ZFS device exceeded acceptable levels.
    AUTO-RESPONSE: The device has been offlined and marked as faulted. An attempt will be made to activate a hot spare if available.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Run 'zpool status -x' for more information. Please refer to the associated reference document at http://sun.com/msg/ZFS-8000-FD for the latest service procedures and policies regarding this diagnosis.
    Chassis | major: Host detected fault, MSGID: ZFS-8000-FD
    /var/adm/messages has an error message for each disk in the data pool, this being the error for sd7:
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.warning] WARNING: /pci@0/pci@0/p
    ci@9/scsi@0/disk@2,0 (sd7):
    May  3 16:24:02 t5220-nas       Error for Command: read(10)                Error
    Level: Fatal
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.notice]         Requested Block:
    1815064264                Error Block: 1815064264
    I have tried rebooting the system and running zpool clear, as the ZFS link in the console errors suggests. Sometimes the system reboots fine; other times it requires issuing a break from the LOM, because the shutdown command is still trying after more than an hour. The console usually outputs more messages as the reboot completes, basically saying the faulted hardware has been restored and no additional action is required. A scrub is recommended in the console message. When I check the pool status, the previously suspended scrub starts back where it left off:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.83T scanned out of 9.59T at 165M/s, 6h37m to go
        0 repaired, 60.79% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    Then after an hour or two all the disks go back into an I/O error state. I thought it might be the SAS controller card, the PCI slot, or maybe the cable, so I tried using the other PCI slot in the riser card first (I don't have another cable available). Now the system is back online and again trying to complete the previous scrub:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.58T scanned out of 9.59T at 139M/s, 8h26m to go
        0 repaired, 58.14% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    The ZFS file systems are mounted:
    bash# df -h|grep media
    media_NAS               14T   493K   6.3T     1%    /media_NAS
    media_NAS/archive       14T   784M   6.3T     1%    /media_NAS/archive
    media_NAS/exercise      14T    42G   6.3T     1%    /media_NAS/exercise
    media_NAS/ext_subs      14T   3.9M   6.3T     1%    /media_NAS/ext_subs
    media_NAS/movies        14T   402K   6.3T     1%    /media_NAS/movies
    media_NAS/movies/bluray    14T   4.0T   6.3T    39%    /media_NAS/movies/bluray
    media_NAS/movies/dvd    14T   585K   6.3T     1%    /media_NAS/movies/dvd
    media_NAS/movies/hddvd    14T   176G   6.3T     3%    /media_NAS/movies/hddvd
    media_NAS/movies/mythRecordings    14T   329K   6.3T     1%    /media_NAS/movies/mythRecordings
    media_NAS/music         14T   347K   6.3T     1%    /media_NAS/music
    media_NAS/music/flac    14T    54G   6.3T     1%    /media_NAS/music/flac
    media_NAS/mythTV        14T    40G   6.3T     1%    /media_NAS/mythTV
    media_NAS/nuc-celeron    14T   731M   6.3T     1%    /media_NAS/nuc-celeron
    media_NAS/pictures      14T   5.1M   6.3T     1%    /media_NAS/pictures
    media_NAS/television    14T   3.0T   6.3T    33%    /media_NAS/television
    but the format command does not see any of the disks:
    format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <SEAGATE-ST9146803SS-0006 cyl 65533 alt 2 hd 2 sec 2187>
              /pci@0/pci@0/pci@2/scsi@0/sd@0,0
           1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@1,0
           2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@2,0
           3. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>  solaris
              /pci@0/pci@0/pci@2/scsi@0/sd@3,0
    Before moving the card into the other slot in the riser card, format saw each disk in the ZFS pool. I'm not sure why the disks are not seen in format while the ZFS pool seems to be available to the OS. The disks in the attached tray were set up for Solaris using the Sun StorageTek RAID Manager; they were passed to Solaris as 2TB RAID-0 components, and format saw them as available 2TB disks. Any suggestions as to how to proceed if the scrub completes with the SAS card in the new I/O slot? Should I force a reconfigure of devices on the next reboot? If the disks fault out again with I/O errors in this slot, the next steps were to try a new SAS card and/or cable. Does that sound reasonable?
    Thanks,

    Was the system online (and the ZFS pool too) when you moved the card? That might explain why the disks are confused. Obviously, this system is experiencing some higher-level problem, like a bad card or cable, because disks generally don't all fall over at the same time. I would let the scrub finish, if possible, and shut the system down. Bring the system up in single-user mode and review the zpool import data around the device enumeration. If the device info looks sane, then import the pool; this should re-read the device info. If the device info is still not available during the zpool import scan, then you need to look at a higher level.
    Thanks, Cindy
