iSCSI array died, held ZFS pool. Now box hangs at boot

I was doing some iSCSI testing on an x86 EM64T server running an out-of-the-box install of Solaris 10u5, and created a ZFS pool on two RAID-0 arrays on an IBM DS300 iSCSI enclosure.
One of the disks in the array died, the DS300 got really flaky, and now the Solaris box hangs during boot. It looks like it's trying to mount the ZFS filesystems. The box has two ZFS pools, or had two, anyway. The other ZFS pool is filled with VirtualBox images.
Originally, I got a few iSCSI target offline messages on the console, so I booted to failsafe and tried to run iscsiadm to remove the targets, but that wouldn't work. So I just removed the contents of /etc/iscsi and all the iSCSI instances in /etc/path_to_inst on the root drive.
Now the box hangs with no error messages.
Anyone have any ideas what to do next? I'm willing to nuke the iSCSI ZFS pool, as it's effectively gone anyway, but I would like to save the VirtualBox ZFS pool if possible. They are all test images, though, so I don't have to save them. The host itself is a test host with nothing irreplaceable on it, so I could just reinstall Solaris. But I'd prefer to figure out how to save it, even if only for the learning experience.

Try this: disconnect the iSCSI drives completely, then boot. My fallback plan with ZFS, if things get screwed up, is to physically disconnect the ZFS drives so that Solaris doesn't see them on boot. It marks them failed and should boot. Once it's up, zpool destroy the pools WITH THE DRIVES DISCONNECTED so that it no longer thinks there's a pool. THEN reconnect the drives and try a "zpool import -f".
The pools that are on intact drives should still be OK. In theory :)
BTW, if you removed devices, you should probably do a reconfiguration boot (create /a/reconfigure in failsafe mode) to make sure the devices get reprobed. Does the machine boot to single user (pass -s after the multiboot line in GRUB)? If it does, you can disable the iSCSI services with "svcadm disable network/iscsi_initiator; svcadm disable iscsitgt".
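A rough sketch of that sequence, assuming the iSCSI drives stay physically disconnected until the last step and using placeholder pool names (the dead iSCSI pool is called iscsipool here, the VirtualBox pool vboxpool):

# from failsafe, with the root drive mounted at /a: force a device reprobe on the next boot
touch /a/reconfigure
# from single user (boot with -s): keep the iSCSI services from wedging the boot
svcadm disable network/iscsi_initiator
svcadm disable iscsitgt
# forget the dead pool while its drives are absent
zpool destroy -f iscsipool
# reconnect the surviving storage, then pull the good pool back in
zpool import -f vboxpool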

Similar Messages

  • Solaris 10 upgrade and zfs pool import

    Hello folks,
    I am currently running "Solaris 10 5/08 s10x_u5wos_10 X86" on a Sun Thumper box where two drives are a mirrored UFS boot volume and the rest is used in ZFS pools. I would like to upgrade my system to "10/08 s10x_u6wos_07b X86" to be able to use ZFS for the boot volume. I've seen documentation that describes how to break the mirror, create a new BE and so on. This system is only being used as an iSCSI target for Windows systems, so there is really nothing on the box that I need other than my ZFS pools. Could I simply pop the DVD in, perform a clean install, and select my current UFS drives as my install location, basically telling Solaris to wipe them clean and create an rpool out of them? Once the installation is complete, would I be able to import my existing ZFS pools?
    Thank you very much

    Sure. As long as you don't write over any of the disks in your ZFS pool you should be fine.
    Darren
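    If the pools can be exported cleanly before the reinstall, a hedged sketch of the round trip (the data pool name is a placeholder):
    zpool export datapool        # before booting the install DVD
    # ...clean install of u6 onto the old UFS boot disks, creating rpool...
    zpool import                 # after the install: lists pools found on the untouched disks
    zpool import datapool        # add -f if the pool was never exported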

  • Zfs pool I/O failures

    Hello,
    Been using an external SAS/SATA tray connected to a t5220 using a SAS cable as storage for a media library.  The weekly scrub cron failed last week with all disks reporting I/O failures:
    zpool status
      pool: media_NAS
    state: SUSPENDED
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
       see: http://www.sun.com/msg/ZFS-8000-HC
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        2.34T scanned out of 9.59T at 14.7M/s, 143h43m to go
        0 repaired, 24.36% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   UNAVAIL  10.6K    75     0  experienced I/O failures
              raidz2-0  UNAVAIL  21.1K    10     0  experienced I/O failures
                c6t0d0  UNAVAIL    212     6     0  experienced I/O failures
                c6t1d0  UNAVAIL    216     6     0  experienced I/O failures
                c6t2d0  UNAVAIL    225     6     0  experienced I/O failures
                c6t3d0  UNAVAIL    217     6     0  experienced I/O failures
                c6t4d0  UNAVAIL    202     6     0  experienced I/O failures
                c6t5d0  UNAVAIL    189     6     0  experienced I/O failures
                c6t6d0  UNAVAIL    187     6     0  experienced I/O failures
                c6t7d0  UNAVAIL    219    16     0  experienced I/O failures
                c6t8d0  UNAVAIL    185     6     0  experienced I/O failures
                c6t9d0  UNAVAIL    187     6     0  experienced I/O failures
    The console outputs this repeated error:
    SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: 20
    PLATFORM: SUNW,SPARC-Enterprise-T5220, CSN: -, HOSTNAME: t5220-nas
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: e935894e-9ab5-cd4a-c90f-e26ee6a4b764
    DESC: The number of I/O errors associated with a ZFS device exceeded acceptable levels.
    AUTO-RESPONSE: The device has been offlined and marked as faulted. An attempt will be made to activate a hot spare if available.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Run 'zpool status -x' for more information. Please refer to the associated reference document at http://sun.com/msg/ZFS-8000-FD for the latest service procedures and policies regarding this diagnosis.
    Chassis | major: Host detected fault, MSGID: ZFS-8000-FD
    /var/adm/messages has an error message for each disk in the data pool, this being the error for sd7:
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.warning] WARNING: /pci@0/pci@0/pci@9/scsi@0/disk@2,0 (sd7):
    May  3 16:24:02 t5220-nas       Error for Command: read(10)    Error Level: Fatal
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.notice]        Requested Block: 1815064264    Error Block: 1815064264
    Have tried rebooting the system and running zpool clear, as the ZFS link in the console errors suggests. Sometimes the system will reboot fine; other times it requires issuing a break from the LOM, because the shutdown command is still trying after more than an hour. The console usually outputs more messages as the reboot is completing, basically saying the faulted hardware has been restored and no additional action is required. A scrub is recommended in the console message. When I check the pool status, the previously suspended scrub starts back where it left off:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.83T scanned out of 9.59T at 165M/s, 6h37m to go
        0 repaired, 60.79% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    Then after an hour or two all the disks go back into an I/O error state.   Thought it might be the SAS controller card, PCI slot, or maybe the cable, so tried using the other PCI slot in the riser card first (don't have another cable available).   Now the system is back online and again trying to complete the previous scrub:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.58T scanned out of 9.59T at 139M/s, 8h26m to go
        0 repaired, 58.14% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    the zfs file systems are mounted:
    bash# df -h|grep media
    media_NAS               14T   493K   6.3T     1%    /media_NAS
    media_NAS/archive       14T   784M   6.3T     1%    /media_NAS/archive
    media_NAS/exercise      14T    42G   6.3T     1%    /media_NAS/exercise
    media_NAS/ext_subs      14T   3.9M   6.3T     1%    /media_NAS/ext_subs
    media_NAS/movies        14T   402K   6.3T     1%    /media_NAS/movies
    media_NAS/movies/bluray    14T   4.0T   6.3T    39%    /media_NAS/movies/bluray
    media_NAS/movies/dvd    14T   585K   6.3T     1%    /media_NAS/movies/dvd
    media_NAS/movies/hddvd    14T   176G   6.3T     3%    /media_NAS/movies/hddvd
    media_NAS/movies/mythRecordings    14T   329K   6.3T     1%    /media_NAS/movies/mythRecordings
    media_NAS/music         14T   347K   6.3T     1%    /media_NAS/music
    media_NAS/music/flac    14T    54G   6.3T     1%    /media_NAS/music/flac
    media_NAS/mythTV        14T    40G   6.3T     1%    /media_NAS/mythTV
    media_NAS/nuc-celeron    14T   731M   6.3T     1%    /media_NAS/nuc-celeron
    media_NAS/pictures      14T   5.1M   6.3T     1%    /media_NAS/pictures
    media_NAS/television    14T   3.0T   6.3T    33%    /media_NAS/television
    but the format command is not seeing any of the disks:
    format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <SEAGATE-ST9146803SS-0006 cyl 65533 alt 2 hd 2 sec 2187>
              /pci@0/pci@0/pci@2/scsi@0/sd@0,0
           1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@1,0
           2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@2,0
           3. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>  solaris
              /pci@0/pci@0/pci@2/scsi@0/sd@3,0
    Before moving the card into the other slot in the riser card, format saw each disk in the ZFS pool. Not sure why the disks are not seen in format while the ZFS pool seems to be available to the OS. The disks in the attached tray were set up for Solaris using the Sun StorageTek RAID Manager; they were passed as 2TB raid0 components to Solaris, and format saw them as available 2TB disks. Any suggestions as to how to proceed if the scrub completes with the SAS card in the new I/O slot? Should I force a reconfigure of devices on the next reboot? If the disks fault out again with I/O errors in this slot, the next steps were to try a new SAS card and/or cable. Does that sound reasonable?
    Thanks,

    Was the system online (and the ZFS pool) too when you moved the card? That might explain why the disks are confused. Obviously, this system is experiencing some higher level problem like a bad card or cable because disks generally don't fall over at the same time. I would let the scrub finish, if possible, and shut the system down. Bring the system to single-user mode, and review the zpool import data around the device enumeration. If the device info looks sane, then import the pool. This should re-read the device info. If the device info is still not available during the zpool import scan, then you need to look at a higher level.
    Thanks, Cindy
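    A hedged sketch of that check from single-user mode, using the pool name from the output above:
    zpool export media_NAS       # only if the pool is currently imported
    zpool import                 # scan only: lists the pool and how each c6t*d0 device is enumerated
    zpool import media_NAS       # import only if every device shows up cleanly in the scan
    zpool status -v media_NAS    # the interrupted scrub should pick up where it left off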

  • Create ZONE in ZFS pool solaris10

    Hi Gurus,
    I'm reading some Solaris 10 tutorials about ZFS and zones. Is it possible to create a new storage pool using the current hard disk on which I installed Solaris?
    I'm a bit new to Solaris; I have a SPARC box on which I'm learning about Solaris 10. I have installed Solaris 10 using the ZFS file system. I think my box only has one disk, but I'm not sure. I see 46 GB of free space when running the "df -kh" command.
    I run "format" command, this is the output
    root@orclidm # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@0,0
    1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@1,0
    Specify disk (enter its number):
    zpool list "display this:"
    root@orclidm # zpool list
    NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
    rpool 68G 13.1G 54.9G 19% ONLINE -
    zfs list "display this:"
    root@orclidm # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 21.3G 45.6G 106K /rpool
    rpool/ROOT 11.6G 45.6G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 11.6G 45.6G 11.6G /
    rpool/dump 1.50G 45.6G 1.50G -
    rpool/export 66K 45.6G 32K /export
    rpool/export/home 34K 45.6G 34K /export/home
    rpool/swap 8.25G 53.9G 16K -
    I read in a tutorial that when you create a zpool you need to specify an empty hard disk, is that correct?
    Please point me on the best approach to create zones using zfs pools.
    Regards

    manin21 wrote:
    Hi Gurus,
    I'm reading some solaris 10 tutorials about ZFS and Zones. Is it possible to create a new storage pool using my current hard disk in which I installed solaris???
    If you have a spare partition you may use that.
    >
    I'm a bit new in Solaris, I have a SPARC box in which I'm learnin about solaris 10. I have installed Solaris 10 using ZFS file system. I think my box only have 1 disk but not sure. I see 46 GB of free space running "df -kh " command
    I run "format" command, this is the output
    root@orclidm # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@0,0
    1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@1,0
    Specify disk (enter its number):
    This shows two disks. In a production setup you might mirror this.
    zpool list "display this:"
    root@orclidm # zpool list
    NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
    rpool 68G 13.1G 54.9G 19% ONLINE -
    The command:
    zpool status
    would show you what devices you are using
    zfs list "display this:"
    root@orclidm # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 21.3G 45.6G 106K /rpool
    rpool/ROOT 11.6G 45.6G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 11.6G 45.6G 11.6G /
    rpool/dump 1.50G 45.6G 1.50G -
    rpool/export 66K 45.6G 32K /export
    rpool/export/home 34K 45.6G 34K /export/home
    rpool/swap 8.25G 53.9G 16K -
    I read in a tutorial that when you create a zpool you need to specify an empty hard disk, is that correct?
    No.
    You can use partitions/slices instead. A ZFS storage pool is composed of one or more devices; each device can be a whole disk, a disk slice, or even a file if I remember correctly (.... but you really don't want to use a file normally).
    Please point me on the best approach to create zones using zfs pools.
    Regards
    Your rpool is 68GB in size on a 72GB disk, so the disk is essentially used up and there is no space for another ZFS pool. If zpool status shows your disk is mirrored by ZFS, that is that. Otherwise you may choose to create a storage pool on the other disk (not best production practice).
    Often one simply creates a ZFS filesystem within the existing pool instead:
    zfs create -o mountpoint=/zones rpool/zones
    zfs create rpool/zones/myzone
    Then use zonepath=/zones/myzone when creating the zone (see the sketch after the links below).
    - I was googling to cross-check my answer ... the following blog has an example, but it is a little old and may be OpenSolaris oriented.
    https://blogs.oracle.com/DanX/entry/solaris_zfs_and_zones_simple
    Authoritative information is at http://docs.oracle.com, notably:
    http://docs.oracle.com/cd/E23823_01/index.html
    http://docs.oracle.com/cd/E23823_01/html/819-5461/index.html
    http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html
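    Picking up from the zfs create commands above, a minimal zonecfg/zoneadm sketch for Solaris 10 (the zone name myzone is a placeholder, not anything from the original question):
    zonecfg -z myzone
      create                       # default template (a sparse-root zone on Solaris 10)
      set zonepath=/zones/myzone
      set autoboot=true
      commit
      exit
    zoneadm -z myzone install      # the zonepath must be mode 700, owned by root; install creates it that way if absent
    zoneadm -z myzone boot
    zlogin -C myzone               # console login to answer the first-boot sysid questions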

  • SFTP chroot from non-global zone to zfs pool

    Hi,
    I am unable to create an SFTP chroot inside a zone to a shared folder on the global zone.
    Inside the global zone:
    I have created a zfs pool (rpool/data) and then mounted it to /data.
    I then created some shared folders: /data/sftp/ipl/import and /data/sftp/ipl/export
    I then created a non-global zone and added a file system that loops back to /data.
    Inside the zone:
    I then did the usual stuff to create a chroot sftp user, similar to: http://nixinfra.blogspot.com.au/2012/12/openssh-chroot-sftp-setup-in-linux.html
    I modified the /etc/ssh/sshd_config file and hard-wired the ChrootDirectory to /data/sftp/ipl.
    When I attempt to sftp into the zone, an error message is displayed in the zone -> fatal: bad ownership or modes for chroot directory /data/
    Multiple web sites warn that folder ownership and access privileges are important. However, issuing chown -R root:iplgroup /data made no difference. Perhaps it is something to do with the fact that the folders were created in the global zone?
    If I create a simple shared folder inside the zone it works, e.g. /data3/ftp/ipl......ChrootDirectory => /data3/ftp/ipl
    If I use the users home directory it works. eg /export/home/sftpuser......ChrootDirectory => %h
    FYI. The reason for having a ZFS shared folder is to allow separate SFTP and FTP zones and a common/shared data repository for FTP and SFTP exchanges with remote systems. e.g. One remote client pushes data to the FTP server. A second remote client pulls the data via SFTP. Having separate zones increases security?
    Any help would be appreciated to solve this issue.
    Regards John

    sanjaykumarfromsymantec wrote:
    Hi,
    I want to do IPC between zones (communication between processes running in two different zones). What are the different techniques that can be used? I am not interested in TCP/IP (AF_INET) sockets.
    Zones are designed to prevent most visibility between non-global zones and other zones, so network communication (like you might use between two physical machines) is the most common method.
    You could mount a global zone filesystem into multiple non-global zones (via lofs) and have your programs push data there. But you'll probably have to poll for updates. I'm not certain that's easier or better than network communication.
    Darren
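    Back on the SFTP chroot error itself: sshd requires every directory component of the ChrootDirectory path to be owned by root and not group- or world-writable, which a recursive chown to root:iplgroup alone does not guarantee. A hedged sketch using the paths from the question:
    # run where the filesystem is writable (the global zone, or the zone if the lofs mount is read-write)
    chown root:root /data /data/sftp /data/sftp/ipl
    chmod 755 /data /data/sftp /data/sftp/ipl
    # the actual import/export drop directories can stay writable by the sftp group
    chown root:iplgroup /data/sftp/ipl/import /data/sftp/ipl/export
    chmod 775 /data/sftp/ipl/import /data/sftp/ipl/export
    svcadm restart ssh           # then retry the sftp login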

  • Large number of Transport errors on ZFS pool

    This is sort of a continuation of thread:
    Issues with HBA and ZFS
    But since it is a separate question, I thought I'd start a new thread.
    Because of a bug in 11.1, I had to downgrade to 10_U11. I am using an LSI 9207-8i HBA (SAS2308 chipset). I have no errors on my pools, but I consistently see errors when trying to read from the disks. They are always Retryable or Reset. All in all the system functions, but as I started testing I am seeing a lot of errors in iostat.
    bash-3.2# iostat -exmn
    extended device statistics ---- errors ---
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
    0.1 0.2 1.0 28.9 0.0 0.0 0.0 41.8 0 1 0 0 1489 1489 c0t5000C500599DDBB3d0
    0.0 0.7 0.2 75.0 0.0 0.0 21.2 63.4 1 1 0 1 679 680 c0t5000C500420F6833d0
    0.0 0.7 0.3 74.6 0.0 0.0 20.9 69.8 1 1 0 0 895 895 c0t5000C500420CDFD3d0
    0.0 0.6 0.4 75.5 0.0 0.0 26.7 73.7 1 1 0 1 998 999 c0t5000C500420FB3E3d0
    0.0 0.6 0.4 75.3 0.0 0.0 18.3 68.7 0 1 0 1 877 878 c0t5000C500420F5C43d0
    0.0 0.0 0.2 0.7 0.0 0.0 0.0 2.1 0 0 0 0 0 0 c0t5000C500420CE623d0
    0.0 0.6 0.3 76.0 0.0 0.0 20.7 67.8 0 1 0 0 638 638 c0t5000C500420CD537d0
    0.0 0.6 0.2 74.9 0.0 0.0 24.6 72.6 1 1 0 0 638 638 c0t5000C5004210A687d0
    0.0 0.6 0.3 76.2 0.0 0.0 20.0 78.4 1 1 0 1 858 859 c0t5000C5004210A4C7d0
    0.0 0.6 0.2 74.3 0.0 0.0 22.8 69.1 0 1 0 0 648 648 c0t5000C500420C5E27d0
    0.6 43.8 21.3 96.8 0.0 0.0 0.1 0.6 0 1 0 14 144 158 c0t5000C500420CDED7d0
    0.0 0.6 0.3 75.7 0.0 0.0 23.0 67.6 1 1 0 2 890 892 c0t5000C500420C5E1Bd0
    0.0 0.6 0.3 73.9 0.0 0.0 28.6 66.5 1 1 0 0 841 841 c0t5000C500420C602Bd0
    0.0 0.6 0.3 73.6 0.0 0.0 25.5 65.7 0 1 0 0 678 678 c0t5000C500420D013Bd0
    0.0 0.6 0.3 76.5 0.0 0.0 23.5 74.9 1 1 0 0 651 651 c0t5000C500420C50DBd0
    0.0 0.6 0.7 70.1 0.0 0.1 22.9 82.9 1 1 0 2 1153 1155 c0t5000C500420F5DCBd0
    0.0 0.6 0.4 75.3 0.0 0.0 19.2 58.8 0 1 0 1 682 683 c0t5000C500420CE86Bd0
    0.0 0.0 0.2 0.7 0.0 0.0 0.0 1.9 0 0 0 0 0 0 c0t5000C500420F3EDBd0
    0.1 0.2 1.0 26.5 0.0 0.0 0.0 41.9 0 1 0 0 1511 1511 c0t5000C500599E027Fd0
    2.2 0.3 133.9 28.2 0.0 0.0 0.0 4.4 0 1 0 17 1342 1359 c0t5000C500599DD9DFd0
    0.1 0.3 1.1 29.2 0.0 0.0 0.2 34.1 0 1 0 2 1498 1500 c0t5000C500599DD97Fd0
    0.0 0.6 0.3 75.6 0.0 0.0 22.6 71.4 0 1 0 0 677 677 c0t5000C500420C51BFd0
    0.0 0.6 0.3 74.8 0.0 0.1 28.6 83.8 1 1 0 0 876 876 c0t5000C5004210A64Fd0
    0.6 43.8 18.4 96.9 0.0 0.0 0.1 0.6 0 1 0 5 154 159 c0t5000C500420CE4AFd0
    Mar 12 2013 17:03:34.645205745 ereport.fs.zfs.io
    nvlist version: 0
         class = ereport.fs.zfs.io
         ena = 0x114ff5c491a00c01
         detector = (embedded nvlist)
         nvlist version: 0
              version = 0x0
              scheme = zfs
              pool = 0x53f64e2baa9805c9
              vdev = 0x125ce3ac57ffb535
         (end detector)
         pool = SATA_Pool
         pool_guid = 0x53f64e2baa9805c9
         pool_context = 0
         pool_failmode = wait
         vdev_guid = 0x125ce3ac57ffb535
         vdev_type = disk
         vdev_path = /dev/dsk/c0t5000C500599DD97Fd0s0
         vdev_devid = id1,sd@n5000c500599dd97f/a
         parent_guid = 0xcf0109972ceae52c
         parent_type = mirror
         zio_err = 5
         zio_offset = 0x1d500000
         zio_size = 0xf1000
         zio_objset = 0x12
         zio_object = 0x0
         zio_level = -2
         zio_blkid = 0x452
         __ttl = 0x1
         __tod = 0x513fa636 0x26750ef1
    I know all of these drives are not bad, and I have confirmed they are all running the latest firmware and the correct sector size, 512 (ashift 9). I am thinking it is some sort of compatibility issue with this new HBA but have no way of verifying. Anyone have any suggestions?
    Edited by: 991704 on Mar 12, 2013 12:45 PM

    There must be something small I am missing. We have another system configured nearly the same (same server and HBA, different drives) and it functions. I've gone through the recommended storage practices guide. The only item I have not been able to verify is
    "Confirm that your controller honors cache flush commands so that you know your data is safely written, which is important before changing the pool's devices or splitting a mirrored storage pool. This is generally not a problem on Oracle/Sun hardware, but it is good practice to confirm that your hardware's cache flushing setting is enabled."
    How can I confirm this? As far as I know these HBAs are simply HBAs. No battery backup. No on-board memory. The 9207 doesn't even offer RAID.
    Edited by: 991704 on Mar 15, 2013 12:33 PM
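    There is no simple switch to query for "does this controller honor cache flushes"; a plain pass-through HBA like the 9207 just forwards the SYNCHRONIZE CACHE commands to the disks. What can be checked from the OS, as a hedged sketch, is whether ZFS is still issuing flushes and whether the transport errors keep climbing:
    echo zfs_nocacheflush/D | mdb -k    # 0 (the default) means flushes are being issued; nonzero means someone disabled them in /etc/system
    fmdump -eV -c ereport.fs.zfs.io     # watch the ZFS I/O error reports as they arrive
    iostat -En                          # cumulative soft/hard/transport error counters per device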

  • I have Creative Suite 5.5 Design Standard. I used to have a PC; however, it totally died. I've now got a MacBook Pro. How do I transfer to Mac?

    I purchased Creative Suite 5.5 Design Standard. I had a PC at the time. My PC laptop just died and I've now got a MacBook. How do I transfer?

    Hi johemsley,
    We would like to inform you that it will only work on the MacBook Pro if you have a Mac serial number, as a Windows serial number doesn't work on Mac.
    Here is the link to download CS5.5 Design Standard.
    Link: Download CS5.5 products
    Thanks,
    Atul Saini

  • ZFS pool, raidz

    Hello,
    I have a zpool that has four disks in raidz. Is there a way to add to this pool? I understand that 'attach' is only for mirror sets. I have two more disks I want to add to the pool. If I just do an add, I get a mismatch, as I would have a 4-way and a 2-way raidz in the same pool.
    What I have;
    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      raidz     ONLINE       0     0     0
        c4t5d0  ONLINE       0     0     0
        c4t8d0  ONLINE       0     0     0
        c4t9d0  ONLINE       0     0     0
        c4t10d0 ONLINE       0     0     0
    What I tried;
    # zpool add tank raidz c4t11d0 c4t12d0
    invalid vdev specification
    use '-f' to override the following errors:
    mismatched replication level: pool uses 4-way raidz and new vdev uses 2-way raidz
    Is there no way to expand this pool, now that it is raidz, other than adding devices four at a time?

    I don't believe you can extend a raidz vdev.
    But you can add new vdevs to the pool.
    So you could add the two disks as a mirrored-pair vdev:
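    A hedged sketch with the device names from the question; zpool will still warn about the mismatched replication level (raidz vs. mirror), hence the -f:
    zpool add -f tank mirror c4t11d0 c4t12d0
    zpool status tank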

  • Replace FC Card and ZFS Pools

    I have to replace a Qlogic ISP2200 dual-port Fibre Channel card with a new card in a V480 server. I have two ZFS pools that mount via that card. Would I have to export and import the ZFS pools when replacing the card? I've read you have to when moving the pools to a different server.
    Naturally the World Wide Name (WWN) would be different on the new FC card, and other than changing my SAN switch zone information I'm not sure how ZFS would deal with this situation. The storage itself would not change.
    Any ideas are welcome.
    Running Solaris 10 (11/06) with kernel patch 125100-07
    Thanks,
    Chris
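    A hedged sketch of the conservative sequence, assuming the pools can stay exported for the duration of the swap (pool names are placeholders); ZFS identifies pool members by their on-disk labels rather than by controller path, so the new WWNs and device paths should not matter:
    zpool export pool1
    zpool export pool2
    # ...replace the HBA and fix the SAN switch zoning for the new WWNs...
    devfsadm -Cv                 # drop stale device links, create the new ones
    zpool import pool1
    zpool import pool2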

  • Unable to destroy ZFS pool

    Hello everyone,
    Is there any way to remove a suspended ZFS pool when the underlying storage has been removed from the OS?
    # zpool status test
    pool: test
    state: SUSPENDED
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
    see: http://www.sun.com/msg/ZFS-8000-HC
    scan: none requested
    config:
    NAME                     STATE     READ WRITE CKSUM
    test                     UNAVAIL      0     0     0  experienced I/O failures
      c2t50060E8016068817d2  UNAVAIL      0     0     0  experienced I/O failures
    All the zpool operations hang on the system:
    # ps -ef |grep zpool
    root 5 0 0 May 16 ? 151:42 zpool-rpool
    root 19747 1 0 Jun 02 ? 0:00 zpool clear test
    root 12714 1 0 Jun 02 ? 0:00 zpool destroy test
    root 9450 1 0 Jun 02 ? 0:00 zpool history test
    root 13592 1 0 Jun 02 ? 0:00 zpool destroy test
    root 19684 1 0 May 30 ? 0:00 zpool destroy -f test
    root 9166 0 0 May 30 ? 0:07 zpool-test
    root 18514 1 0 Jun 02 ? 0:00 zpool destroy -f test
    root 3327 0 0 May 30 ? 4:25 zpool-OScopy
    root 7332 1 0 May 30 ? 0:00 zpool clear test
    root 5016 1 0 Jun 02 ? 0:00 zpool online test c2t50060E8016068817d2
    root 25080 1 0 Jun 01 ? 0:00 zpool clear test
    root 23451 1 0 01:26:57 ? 0:00 zpool destroy test
    The disk is no longer visible on the system:
    # ls -la /dev/dsk/c2t50060e8016068817d2*
    /dev/dsk/c2t50060e8016068817d2*: No such file
    Any suggestions on how to remove the pool without performing a reboot?
    Thanks in advance for any help

    I had the same issue recently (on a Solaris 11.1 system) where I deleted a LUN from the SAN before destroying the zpool on it. The pool was suspended and all operations on it failed. I also tried a zpool clear, but that did not work, and additionally all other operations on other zpools were also hanging after that. The "workaround" was to delete /etc/zfs/zpool.cache and reboot the system.
    I raised an SR and a feature request for this, but to my knowledge nothing has been done yet. There is note 1457074.1 on MOS that describes this for Solaris 10 (including a bug and patch) and claims that Solaris 11 is not affected.
    good luck
    bjoern
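    A hedged sketch of that workaround (the cache file lives under /etc/zfs, and a reboot is unfortunately still required):
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad    # move the cached pool configuration aside rather than deleting it
    init 6
    zpool import -a              # after the reboot the data pools are no longer cached, so re-import the healthy ones (the root pool is found by the boot process, not the cache)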

  • Connecting Fabric Interconnect to iSCSI array

    I need to connect my Fabric Interconnect to a new iSCSI array. There are a number of UCS blades that need to connect to iSCSI LUNs. The FI is currently connected to the rest of the network through Nexus 7000 switches. Should I connect directly from the FI to the iSCSI array, or go through the 7000s and then to the array?

    Hi
    This is more of a design question; you have to think about what will need access to the iSCSI storage array. For example, if only UCS blades will have access to this storage array, you may want to consider connecting it directly, as iSCSI traffic won't have to go through your N7Ks if both fabrics are active.
    If you want another type of server, such as HP or IBM, to access the storage, you may want to consider connecting the storage array to the N7Ks if your fabrics are configured in end-host mode. Again, this will depend on your current implementation.

  • My battery died; I charged the phone and now I get a black screen with the Apple logo blinking. How do I fix it?

    My battery died; I charged the phone and now I get a black screen with the Apple logo blinking. How do I fix it?

    If Apple replaced the battery, contact them to check the phone; the battery replacement is covered by a warranty.
    iPhone - Contact Support - Apple Support

  • I updated iTunes to update my iPad and my computer died during this; now my iPad won't connect and shows an iTunes sign on the screen

    I updated iTunes to update my iPad and my computer died during this; now my iPad won't connect and shows an iTunes sign on the screen.

    Hello Macahoy18,
    Thank you for using Apple Support Communities.
    For more information, take a look at:
    If you can't update or restore your iOS device
    http://support.apple.com/kb/ht1808
    Have a nice day,
    Mario

  • I have an iPhone 4S and it started dying at 50%; now it has gone up to dying at 80%. Do I need a new phone?

    I have an iPhone 4S and it started dying at 50%; now it has gone up to dying at 80%. Do I need a new phone?

    Make sure you're using genuine Apple chargers/cables. Inspect the port area for debris and clean it out if necessary. If all else fails, take it to an Apple Store for evaluation.

  • ZFS iscsi share failing after ZFS storage reboot

    Hello,
    ZFS with VDI 3.2.1 works fine until I reboot the ZFS server.
    After the ZFS server reboot, the desktop providers are not able to access the virtual disks via iSCSI.
    This is what I see when listing the targets:
    Target: vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3
    iSCSI Name: iqn.1986-03.com.sun:02:e5012bea-487c-40ff-81d5-af89405c7121
    Alias: vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3
    Connections: 0
    ACL list:
    TPGT list:
    LUN information:
    LUN: 0
    GUID: 600144f04cf383010000144f201a2c00
    VID: SUN
    PID: SOLARIS
    Type: disk
    Size: 20G
    Backing store: /dev/zvol/rdsk/vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3
    Status: No such file or directory
    zfs list -Hrt volume vdi | grep d711b8c1a1d3
    vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3 0 774G 8.69G -
    The vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3 volume was cloned from a template prior to the reboot.
    The workaround is to delete and re-create all pools after the ZFS server reboot.
    I use the VDI broker as the ZFS server, which mounts an iSCSI volume from a Linux box. The following shows an error in sharing a ZFS snapshot via iSCSI after a reboot. Not sure why. The same seems to happen with zfs clone, as shown above.
    vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460 114M 765G 114M -
    vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460@version1 0 - 114M -
    zfs set shareiscsi=off vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
    zfs set shareiscsi=on vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
    cannot share 'vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460@version1': iscsitgtd failed request to share
    I'm a bit desperate as I cannot find a solution. VDI 3.2.1 is really good, but the above behaviour of deleting and re-creating vdi pools after reboots is not sustainable.
    Thanks
    Thierry.
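    One hedged thing to check before resorting to rebuilding the pools: whether the legacy iSCSI target daemon came back up after the reboot and still knows about its backing stores (service name as used elsewhere in this thread):
    svcs -xv iscsitgt            # is iscsitgtd online, and if not, why
    iscsitadm list target -v     # compare the backing-store paths against the zfs list output
    svcadm restart iscsitgt      # then retry: zfs set shareiscsi=on vdi/<volume>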

    Unable to connect to CMS prdapp22. A wrong connection is made to @@PRDAPP22(PRDAPP22, PRDAPP22:6400). Logon cannot continue. I have reconfigured (manually) Tomcat to point at a virtual IP on the hostname above.  I have this running in another environ