ZFS and 4.6C

Hi forum,
Does SAP support ZFS with Oracle 9.2.0.7 for R/3 4.6C SR2 with Kernel 4.6D EXT?
Regards.
Ganimede Dignan.

SAP does not certify filesystems.
Oracle doesn't do that any more (they did in the past) - and yes, you can run your SAP system on ZFS (we do so successfully).
Markus
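
For what it's worth, the usual starting point in the ZFS-for-databases guides is to give the datafile filesystems a recordsize that matches the database block size. A minimal sketch with invented dataset names (adjust to your own layout and verify against the current recommendations):

zfs set recordsize=8k oradata/sapdata    # match Oracle's db_block_size (8 KB here)
zfs set atime=off oradata/sapdata        # skip access-time updates on datafile reads

Redo and archive logs are usually kept on separate datasets (or pools) so they can keep the default 128k recordsize.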

Similar Messages

  • EBS 7.4 with ZFS and Zones

    The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
    Has anyone out there been able to recover a server using the EBS software that is running with ZFS file systems in both the global zone and sub-zones? (NB: the server's system file systems /, /usr and /var are UFS for all zones.)
    Edited by: neilnewman on Apr 3, 2008 6:42 AM

  • ZFS and fragmentation

    I do not see Oracle on ZFS often; in fact, I was called in to meet my first such system. The database was experiencing heavy IO problems, both from undersized IOPS capability and from poor performance on the backups - the reading part of it. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with a 1 MiB blocksize), the average IO blocksize was very small and varied wildly. I feared fragmentation, so I set off to test.
    I wrote a small C program that initializes a 10 GiB datafile on ZFS and repeatedly does:
    1 - 1000 random 8 KiB writes with random data (contents) at 8 KiB boundaries (mimicking an 8 KiB database block size)
    2 - a full read of the datafile from start to finish in 128*8 KiB = 1 MiB IOs (mimicking datafile copies, RMAN backups, full table scans, index fast full scans)
    3 - goto 1
    So it is a datafile that gets random writes and is then fully scanned, to see the impact of the random writes on multiblock read performance. Note that the datafile is not grown; all writes are over existing data.
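    The original test was a small C program; the same access pattern can be approximated with standard tools. The sketch below is only an illustration of the workload (the path and sizes are placeholders), not the program that produced the numbers that follow:

    FILE=/pool/test/datafile
    BLOCKS=1310720                                    # 10 GiB / 8 KiB

    dd if=/dev/zero of=$FILE bs=8192 count=$BLOCKS    # initialize the 10 GiB file once
    while true; do
      i=0
      while [ $i -lt 1000 ]; do                       # 1000 random 8 KiB overwrites
        off=$(( $(od -An -N4 -t u4 /dev/urandom) % BLOCKS ))
        dd if=/dev/urandom of=$FILE bs=8192 count=1 seek=$off conv=notrunc 2>/dev/null
        i=$(( i + 1 ))
      done
      time dd if=$FILE of=/dev/null bs=1048576        # full scan in 1 MiB reads
    done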
    Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Whereas on EXT3, on which I ran the same tests (on the exact same storage), the read timings were stable (around 10 ms for a 1 MiB IO), ZFS started off at 10 ms and went up to 35 ms per 128*8 KiB IO after 100,000 random writes into the file. It has not reached the end of the test yet - the service times are still increasing, so the test is taking very long. I do expect it to level off somewhere, as the file will eventually be completely fragmented and cannot become more fragmented.
    I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you all this means downtime.
    On the production server this issue has gotten so bad that migrating to a different filesystem by copying the files will take much longer than restoring from disk backup - the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans, or perhaps unlucky, because this issue would have become impossible to ignore earlier.
    I observed the fragmentation with all settings for logbias and recordsize that are recommended by Oracle for ZFS. The ZFS caches were allowed to use 14 GiB of RAM (and mostly did), bigger than the file itself.
    The question is, of course: am I missing something here? Who else has seen this behavior?

    Stephan,
    "well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
    For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is in scan reads (aka scattered reads). The SAN cache may reduce the impact, and in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices.
    "In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
    Certainly, ZFS has some very nice features.
    "Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations
    Unfortunately you have not provided enough information to confirm this."
    Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes. This is easily established by doing large scanning read IOs and observing the average block size during the read (see the sketch below). Moreover, fragmentation MUST be happening, because that is what ZFS is designed to do with random writes: it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented simply by doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the distance between consecutive file offsets).
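    For reference, a rough way to watch the average read size on Solaris while a scan is running (the datafile path is a placeholder; watch the disks of the pool that holds it):

    dd if=/oradata/big_datafile of=/dev/null bs=1048576 &   # issue large sequential reads
    iostat -xn 5                                            # observe r/s and kr/s per device
    # average read size per IO = kr/s divided by r/s; on a fragmented file this drops far
    # below the requested 1 MiB and varies from interval to interval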
    "How does the ZFS pool layout look like?"
    Separate pools for datafiles, redo+control, archives, disk backups and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device; fragmentation still occurs.
    "Is the whole database in the same pool?"
    As in all the datafiles: yes.
    "At first you should separate the log and data files into different pools. ZFS works with "copy on write""
    It's already configured like that.
    "How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
    Yes, I have read that. We never surpassed 55% pool usage.
    Thanks!

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solving related to the case below.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same, and the problem therefore seems generic to the setup.
    Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (see below). When it settles, everything seems to turn back to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to Tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation of the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match: PostgreSQL uses 8 KB blocks, while the default ZFS recordsize is 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL (log device) would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data. A sketch of both suggestions follows below.
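    To make that concrete, a rough sketch of both suggestions (the pool, disk and dataset names are only examples):

    # mirrored pool with a mirrored SSD log device instead of raidz
    zpool create dbpool mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 log mirror c1t6d0 c1t7d0
    # 8k recordsize for the PostgreSQL data directory; only newly written blocks pick it up,
    # so stop the database and copy the files back in afterwards to rewrite the existing data
    zfs create -o recordsize=8k dbpool/pgdata
    zfs get recordsize dbpool/pgdata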

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • ZFS and Windows 2003 Diskpart

    I was given some space on a Thumper that has ZFS drives. I connect from a Windows 2003 server using iSCSI. I was running out of space and the admin gave me more space, which ended up with my losing the drive, but it came back (is that normal?). When I went to use diskpart to expand the drive to the additional space, it would not work. Can I not use diskpart to extend the drive size, or do I need to do something additional?
    Thanks for your help.

    Earl,
    I'm stuck with the 2003 itunes install problem, too. Can you post or email your solution?
    THANKS,
    Glenn

  • Solaris 10 6/06 ZFS and Zones, not quite there yet...

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement! :
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    Essentially it says that ZFS should not be used for non-global zone root file systems. I was hoping to do this and make it easy: global zone root on UFS, and another disk all ZFS, where all non-global whole-root zones would live.
    One can only do so much with only 4 drives that require mirroring! (x4200's, not utilizing an array)
    Sigh... Maybe in the next release (I'll assume ZFS will be supported as 'bootable' by then).
    Dave

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement! :
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    essentially it says that ZFS should not be used for non-global zone root file systems..
    Yes. If you can live with the warning it gives (you may not be able to upgrade the system), then you can do it. The problem is that the installer packages (which get run during an upgrade) don't currently handle ZFS.
    Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...
    Certainly one of the items needed for bootable ZFS is awareness in the installer. So yes, it should be fixed by the time full support for ZFS root filesystems is released. However, last I heard, full root ZFS support was being targeted for update 4, not update 3.
    Darren

  • ZFS and encryption

    We are looking for a filesystem-level encryption technology. At this point most of our services are on ZFS. At one time I saw encryption on the roadmap of ZFS features. Where does this sit now?
    Are there test-bed versions of OpenSolaris where we can test this?
    Is it known whether and when ZFS encryption will be in Solaris 10 or beyond?
    Thanks.

    I don't believe that the feature is ready yet, but you may find some more information about the project here: [http://hub.opensolaris.org/bin/view/Project+zfs-crypto/]
    You would probably also be better off asking for a status on the forum/mailing list for the project: [http://opensolaris.org/jive/forum.jspa?forumID=105]
    Edited by: Tenzer on May 11, 2010 9:31 AM

  • ZFS and grown disk space

    Hello,
    I installed Solaris 10 x86 10/09 using ZFS in vSphere, and the disk image was expanded from 15G to 18G.
    But Solaris still sees 15G.
    How can I convince it to take notice of the expanded disk image? How can I grow the rpool?
    I have searched a lot, but all documents give answers about adding a disk, not about space that is additionally allocated on the same disk.
    -- Nick

    nikitelli wrote:
    if that is really true what you are saying, then this is really disappointing!
    Solaris can do so many tricks, and in this specific case it drops behind Linux, AIX and even Windows?
    Not even growfs can help?
    Growfs will expand a UFS filesystem so that it can address additional space in its container (slice, metadevice, volume, etc.). ZFS doesn't need that particular tool; a pool can expand itself based on its autoexpand property.
    The problem is that the OS does not make the LUN expansion visible so that other things (like the filesystems) can use that space. Years and years ago, "disks" were static things that you didn't expect to change size. That assumption is hard coded into the Solaris disk label mechanics. I would guess that redoing things to remove that assumption isn't the easiest task.
    If you have an EFI label, it's easier (still not great), but fewer steps. But you can't boot from an EFI disk, so you have to solve the problem with a VTOC/SMI label if you want it to work for boot disks.
    Darren
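    For reference, on releases that have the pool autoexpand property and "zpool online -e", the manual procedure tends to look roughly like this once the bigger LUN is visible (the device name is an example; the format step is the delicate part, since you are rewriting the SMI label by hand):

    format -e                        # relabel so slice 0 covers the newly added cylinders
    zpool set autoexpand=on rpool    # allow the pool to grow into space the device reports
    zpool online -e rpool c1t0d0s0   # ask ZFS to re-read the (now larger) device size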

  • Where can I find the latest research on Solaris 10, zfs and SANs?

    I know Arul and Christian Bilien have done a lot of writing about storage technologies as they relate to Oracle. Where are the latest findings? Obviously there are some exotic configurations that can be implemented to optimize performance, but is there a set of "best practices" that generally works for "most people"? Is there common advice for folks using Solaris 10 and ZFS on SAN hardware (i.e., EMC)? Does double-striping have to be configured with meticulous care, or does it work "pretty well" just by taking some rough guesses?
    Thanks much!

    Hello,
    I have a couple of links that I have used:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
    http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
    These are not exactly new, so you may have encountered them already.
    List of ZFS blogs follows:
    http://www.opensolaris.org/os/community/zfs/blogs/
    Again, there does not seem to be huge activity on the blogs featured there.
    jason.
    http://jarneil.wordpress.com

  • EBS with ZFS and Zones

    I will post this one again in desperation, I have had a SUN support call open on this subject for some time now but with no results.
    If I can't get a straight answer soon, I will be forced to port the application over to Windows, a desperate measure.
    Has anyone managed to recover a server and a zone that uses ZFS filesystems for the data partitions?
    I attempted a restore of the server and then the client zone, but it appears to corrupt my ZFS file systems.
    The steps I have taken are listed below:
    Built a server and created a zone, added a ZFS filesystem to this zone and installed the EBS 7.4 client software into the zone, making the host server the EBS server.
    Completed a backup.
    Destroyed the zone and host server.
    Installed the OS and re-created a zone with the same configuration.
    Added the ZFS filesystem and made this available within the zone.
    Installed EBS and carried out a complete restore.
    Logged into the zone and installed the EBS client software then carried out a complete restore.
    After a server reload this leaves the ZFS filesystem corrupt:
    status: One or more devices could not be used because the label is missing
    or invalid. There are insufficient replicas for the pool to continue
    functioning.
    action: Destroy and re-create the pool from a backup source.
    see: http://www.sun.com/msg/ZFS-8000-5E
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    p_1 UNAVAIL 0 0 0 insufficient replicas
    mirror UNAVAIL 0 0 0 insufficient replicas
    c0t8d0 FAULTED 0 0 0 corrupted data
    c2t1d0 FAULTED 0 0 0 corrupted data

    I finally got a solution to the issue, thanks to a SUN tech guy rather than a member of the EBS support team.
    The whole issue revolves around the file /etc/zfs/zpool.cache, which needs to be backed up prior to carrying out a restore.
    Below is a full set of steps to recover a server using EBS7.4 that has zones installed and using ZFS:
    Instructions On How To Restore A Server With A Zone Installed
    Using the server's control guide, re-install the OS from CD, configuring the system disk to the original sizes; do not patch at this stage.
    Create the zpool's and the zfs file systems that existed for both the global and non-global zones.
    Carry out a restore using:
    If you don't have a bootstrap printout, read the backup tape to get the backup indexes.
    cd /usr/sbin/nsr
    Use scanner -B -im <device>
    to get the ssid number and record number
    scanner -B -im /dev/rmt/0hbn
    cd /usr/sbin/nsr
    Enter: ./mmrecov
    You will be prompted for the SSID number followed by the file and record number.
    All of this information is on the Bootstrap report.
    After the index has been recovered:
    Stop the backup daemons with: "/etc/rc2.d/S95networker stop"
    Copy the original res file to res.org and then copy res.R to res.
    Start the backup daemons with: "/etc/rc2.d/S95networker start"
    Now run: nsrck -L7 to reconstruct the indexes.
    You should now have your backup indexes intact and be able to perform standard restores.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
    To restore the whole system:
    Shutdown any sub zones
    cd /
    Run "/usr/sbin/nsr/nsrmm -m" to mount the tape
    Enter "recover"
    At the recover prompt enter: "force"
    Now enter: "add *" (to restore the complete server, this will now list out all the files in the backup library selected for restore)
    Now enter: "recover" to start the whole system recovery, and ensure the backup tape is loaded into the server.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
    Reboot the server
    The non-global zone should now be bootable; use zoneadm -z <zonename> boot.
    Start an X session on the non-global zone and carry out a selective restore of all the ZFS file systems.
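    Collected in one place, the ZFS-specific part of the procedure above is simply preserving the pool cache file around the recover run:

    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org    # before running the full restore
    # ... run the recover session as described above ...
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache    # after the restore, then reboot
    zoneadm -z <zonename> boot                          # boot the non-global zone and restore its ZFS data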

  • ZFS and "Missing" Disk

    Have a V240 in my lab.
    Previously configured a ZFS pool using one of the two disks.
    Need to rebuild box for other tests. Reinstall Solaris 10 using only one disk.
    Unable to see the second disk that was originally configured with the zpool.
    What do I need to do to "recover" this disk so I can use it again?
    Thanks

    That's because zpool/zfs uses an EFI label on the disk (8 partitions, 0-6 and 8, with 8 being the whole disk), while normal disk usage is an SMI label (8 partitions, 0-7, with number 2 being the entire disk).
    To get the label back to SMI, do the following:
    Execute format -e.
    Select the drive in question.
    Enter "l" (lower-case L) for label.
    Select the entry for SMI and write it to the disk.
    Now exit format.
    The disk should now be available for use outside of zpool/zfs. An example session is sketched below.
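    What that session can look like (the disk name and the exact menu text will differ on your system):

    format -e
    # > choose the disk that used to be in the zpool, e.g. c1t1d0
    # format> label
    #   [0] SMI Label
    #   [1] EFI Label
    # Specify Label type[1]: 0      <- pick the SMI entry and confirm writing the label
    # format> quit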

  • Zfs and Oracle raw volumes

    This is the way I configured raw volumes to be used by Oracle 9:
    # zpool create -f pool146gb c0t1d0
    # zfs create -V 500mb pool146gb/system.dbf
    # cd /dev/zvol/rdsk/pool146gb
    # ls -l system.dbf
    lrwxrwxrwx 1 root root 40 Oct 4 19:26 system.dbf -> ../../../../devices/pseudo/zfs@0:16c,raw
    # chown oracle:oinstall ../../../../devices/pseudo/zfs@0:16c,raw
    # zfs list -o name,volsize| grep system.dbf
    pool146gb/system.dbf 500M
    Resizing system.dbf to 600 MB
    # zfs set volsize=600mb pool146gb/system.dbf
    # zfs list -o name,volsize| grep system.dbf
    pool146gb/system.dbf 600M
    My question:
    Is this a good approach for creating Oracle tablespaces in raw volumes?

    Marcus Ruehmann (guest) wrote:
    : Hi,
    : did anybody successfully test Stephen Tweedie's raw device against Oracle? I also tested against Sybase but could not get it to run. Stephen checked straces and stuff but there seems to be no error!
    : Look at ftp.uk.linux.org/pub/linux/sct/fs
    : This patch is against Linux 2.2.9.
    : Please people, test it and tell me if it works for you
    : Cheers
    : Marcus
    Hi Marcus,
    I tested raw devices with Oracle, but on AIX with Oracle 7.3, and it worked perfectly. While I'm waiting for Oracle8i on Linux and would test it with raw devices, I don't think there would be much difference.
    Brdgs,
    Quoc Trung

  • ZFS and Jumpstart

    Hello All ---
    Does anyone know if Sun intends to put ZFS creation functionality into custom JumpStart profiles? I've put together a really lame script to get done what I'm attempting to do, but it stinks; it would be much more professional if the filesys directive could also accept a filesystem type of "zfs" or something similarly nifty.
    Thanks
    -bw

    Yes, they do.
    I understand ZFS root is scheduled for u4, so mid-2007.
    So it will certainly be available then.
    But it's possible that non-root JumpStart partitions might be available in u3, i.e. the end of 2006.

  • ZFS and Veritas DMP

    Does anyone know whether ZFS is supported on DMP devices with Storage Foundation 5.1?
    It is not supported on Storage Foundation 5.0 MP3, but I was wondering if it had been introduced in 5.1:
    http://seer.entsupport.symantec.com/docs/324001.htm
    The 5.1 documentation is not clear as to whether it is supported or not.

    Ended up using EMC PowerPath. It was much easier and worked like a champ the first time. I didn't have to fiddle with creating disk groups and logical volumes; I just changed the permissions on the /dev/rdsk/emcpower* devices to grid:asmadmin, ran them through format to build partitions that start at cylinder 1, and the installer picked them up right away.
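    The device preparation described above boils down to something like this (the emcpower device name is just an example):

    chown grid:asmadmin /dev/rdsk/emcpower0c   # hand the pseudo-device to the ASM owner
    chmod 660 /dev/rdsk/emcpower0c
    format                                     # build a slice that starts at cylinder 1,
                                               # leaving cylinder 0 (the disk label) untouched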
