Oracle on ZFS

Hello!
We have a Solaris 10 box with the latest patches, connected to a StorEdge 6130 over FC.
The StorEdge disks are configured as RAID5.
Our Oracle 10g (10.2.0.4) DB performance is terrible.
The write speed reported by "zpool iostat" is about 5 MB/s, and "iostat" shows the data pool at 100% busy.
Redo logs and data files are placed on different pools. The recordsize is 128k for the log pool and 8k for the data file pool.
I have no idea what else I can tune. There are guides for tuning ZFS for Oracle (the Best Practices Guide and the Evil Tuning Guide), but nothing for tuning Oracle for ZFS.
I'm also worried by the fact that a simple
dd if=/dev/zero of=/zpool1 bs=8192k count=10240
gives 40MB/s in zpool iostat
Maybe Oracle settings are wrong?
Current settings are:
filesystemio_options=asynch
db_writer_processes=2
db_file_multiblock_read_count=8
Does ZFS support DirectIO and/or asynchronous writes like UFS?
Please help.
Thanks in advance!
(and sorry for my English)
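For reference, the recordsizes above were set per the best practices guide roughly like this (the dataset names are placeholders, not our real layout; note that a recordsize change only affects files written after the change):
zfs set recordsize=8k datapool/oradata     # match the 8 KiB Oracle db_block_size
zfs set recordsize=128k logpool/redo       # redo is written sequentially in large chunks
zfs get recordsize datapool/oradata logpool/redo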

tolik2525 wrote:
I'm also worried about fact that simple
dd if=/dev/zero of=/zpool1 bs=8192k count=10240
gives 40MB/s in zpool iostat
This means that the I/O itself is simply slow.
Maybe Oracle settings are wrong?
How can Oracle be at fault when your own diagnosis of the underlying file system shows it to perform poorly? How is Oracle supposed to rectify the storage system's poor performance?
Note that RAID5 requires a parity calculation for every single write() to that storage system. If that calculation is not done asynchronously via something like ASICs (Application Specific Integrated Circuits), it means that the I/O latency includes the parity calculation.
Obviously this will severely impact performance. (search for BAARF via your favourite search engine for more details).
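To put a rough number on it (the figures are purely illustrative): each small random write to RAID5 typically turns into four back-end I/Os - read old data, read old parity, write new data, write new parity - so an array whose disks can sustain, say, 800 random IOPS in aggregate delivers only around 200 small-write IOPS once the parity overhead is paid, unless a battery-backed write cache or dedicated parity hardware hides that latency.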

Similar Messages

  • SAP R/3 on Oracle / Solaris / ZFS

    Hello,
My Unix administrator is planning an upgrade to a new OS environment, Solaris 10 with ZFS. We currently run an SAP R/3 4.6C kernel / Oracle 9i on Solaris release 5.8, and I would like to know what the upgrade path from here would be. Does SAP R/3 4.6C (or even 4.7) support ZFS, and what about running Oracle 10g datafiles on ZFS for the SAP database?
    Any support is appreciated,
    Arun

Oracle no longer certifies individual filesystems or specific storage systems (see Oracle Metalink note 403202.1 - Zeta File System (Zfs) On Solaris 10 Certified/Supported By Oracle), but running on them is supported.
    We run several databases (Oracle and non-Oracle) on ZFS - and combined with zones it's GREAT to consolidate systems.
    Check http://www.sun.com/software/whitepapers/solaris10/zfs_veritas.pdf for a comparison between VXFS and ZFS.
    The SAP system itself is agnostic about the underlying filesystem.
    Markus

  • ORACLE ASM OR ZFS IN SOLARIS 11

    ORACLE DBA EXPERTS
What kind of storage technology is recommended for implementing an Oracle 11gR2 database on a Solaris 11 OS:
Oracle ASM or the Solaris 11 ZFS filesystem?
Is there any difference in performance between the two storage technologies?
Thank you for answering my questions, and I wish you all a very happy holiday season.
    Sincerely,
    JOHN JAIRO GOMEZ LAVERDE

    From what I understand Oracle recommends ASM. You cannot directly compare ASM with ZFS. ASM is a storage solution providing data management, availability and redundancy specific to Oracle. ZFS is an advanced file system. Whether you choose one over the other will depend on your storage requirements and knowledge.

  • Solaris 10 determining ZFS version of FAULTED rpool incompatible version

    {0} ok boot cdrom -s
    Boot device: /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom@0,0:f File and args: -s
    SunOS Release 5.10 Version Generic_118833-17 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Booting to milestone "milestone/single-user:default".
    Configuring devices.
    Using RPC Bootparams for network configuration information.
    Attempting to configure interface ipge3...
    Skipped interface ipge3
    Attempting to configure interface ipge2...
    Skipped interface ipge2
    Attempting to configure interface ipge1...
    Skipped interface ipge1
    Attempting to configure interface ipge0...
    Skipped interface ipge0
    Requesting System Maintenance Mode
    SINGLE USER MODE
    # cat /cdrom/Solaris_10/Product/SUNWsolnm/reloc/etc/release
                           Solaris 10 6/06 s10s_u2wos_09a SPARC
               Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                            Use is subject to license terms.
                                 Assembled 09 June 2006
    # zpool import
WARNING: /pci@780/pci@0/pci@9/scsi@0/sd@0,0 (sd1):
        Corrupt label; wrong magic number
(the warning above was repeated eight times)
      pool: rpool
        id: 383013941482167518
    state: FAULTED
    status: The pool is formatted using an incompatible version.
    action: The pool cannot be imported.  Access the pool on a system running newer
            software, or recreate the pool from backup.
       see: http://www.sun.com/msg/ZFS-8000-A5
    config:
            rpool       UNAVAIL   newer version
              c0t1d0    ONLINE
              c0t2d0    ONLINE
              c0t3d0    ONLINE
    # zpool upgrade -v
    This system is currently running ZFS version 2.
The following versions are supported:
    VER  DESCRIPTION
    1   Initial ZFS version.
    2   Ditto blocks (replicated metadata)
    For more information on a particular version, including supported releases, see:
    http://www.opensolaris.org/os/community/zfs/version/N
    Where 'N' is the version number.
    # zdb -l /dev/rdsk/c0t1d0s0 | head -9
    LABEL 0
        version=22
        name='rpool'
        state=0
        txg=325246
        pool_guid=383013941482167518
        hostid=2221609516
    Reference ZFS Pool Versions - Oracle Solaris ZFS Administration Guide

Hello,
The zpool version supported comes from the kernel patch; this document provides the information:
ZFS Filesystem and Zpool Version Matrix (Doc ID 1359758.1)
So for version 22, as Pascal wrote, Solaris 10 Update 9 or higher has to be used to import that pool. From the matrix:
Version 22 (Received properties): Solaris 10 9/10 (U9), kernel patch 142909-17/142910-17; OpenSolaris snv_128.
    Regards
    Eze
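As a quick sanity check before trying the import on another system (a sketch; the patch level is the one from the matrix entry above):
# the kernel patch shows up in the kernel version string; for pool version 22 it should be 142909-17 (SPARC) or later
uname -v
# list the pool versions the running kernel actually supports
zpool upgrade -v | head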

  • ZFS mount points and zones

    folks,
A little history: we've been running Cluster 3.2.x with failover zones (using the containers data service), where the zone root is installed on a failover zpool (using HAStoragePlus). It has worked OK, but could be better; the real problems surround the lack of agents that work in this configuration (we're mostly an Oracle shop). We've been using the Joost manifests inside the zones, which are OK and have worked, but we wouldn't mind giving the Oracle data services a go - not to mention the more than a little painful patching process in the current setup.
We've started to look at failover applications among zones on the nodes, so we'd have something like node1:zone and node2:zone as potentials, with the apps failing over between them on node failure and switchover. This way we'd actually be able to use the agents for Oracle (DB, AS and EBS).
With the current cluster we create various ZFS datasets within the pool (such as oradata) and, through the zone boot resource, have them mounted where we want inside the zone (in this case $ORACLE_BASE/oradata), with the global zone having the mount point /export/zfs/<instance>/oradata.
Is there a way of achieving something like this with failover apps inside static zones? I know we can set the dataset mountpoint to be what we want, but we rather like having the various Oracle zones all share a similar install (/app/oracle etc.).
We haven't looked at zone clusters at this stage, if for no other reason than time...
Or is there a better way?
    thanks muchly,
    nelson

I must be missing something... any ideas what and where?
    nelson
    devsun012~> zpool import Zbob
    devsun012~> zfs list|grep bob
    Zbob 56.9G 15.5G 21K /export/zfs/bob
    Zbob/oracle 56.8G 15.5G 56.8G /export/zfs/bob/oracle
    Zbob/oratab 1.54M 15.5G 1.54M /export/zfs/bob/oratab
    devsun012~> zpool export Zbob
    devsun012~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    1 bob running /opt/zones/bob native shared
    devsun013~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    16 bob running /opt/zones/bob native shared
    devsun012~> clrt list|egrep 'oracle_|HA'
    SUNW.HAStoragePlus:6
    SUNW.oracle_server:6
    SUNW.oracle_listener:5
    devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
    devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
    root@devsun012 > bob-has-rs
    clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
    clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
    clrs: (C891200) Failed to create resource "bob-has-rs".
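One thing that may be worth trying (a sketch only, not verified against your exact cluster release): for ZFS pools, HAStoragePlus is normally handed the whole pool through its Zpools extension property rather than FileSystemMountPoints, which is exactly the property that triggers the /etc/vfstab complaint above. Roughly:
devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
root@devsun012 > -p Zpools=Zbob \
root@devsun012 > bob-has-rs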

  • ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a RAID-Z2 pool over five HDDs and created a few ZFS filesystems on it.
Once I (unintentionally) managed to fill the pool completely with data and, to my surprise, the filesystems stopped working - I could not read or delete any data, and when I unmounted the pool I could not even mount it again.
I've heard that this is standard behavior of ZFS and that the correct way to avoid such problems in the future is not to use the full capacity of the pool.
Now I'm thinking about creating quotas on my filesystems (as described in the article "ZFS: Set or create a filesystem quota"), but I am wondering if that is enough.
I have a tree hierarchy of filesystems on the pool, e.g. something like this (pool is the name of the zpool and also the name of the root filesystem on the pool):
    /pool
    /pool/svn
    /pool/home
Is it OK to just set a quota on "pool" (do quotas propagate to all sub-filesystems)? I mean, is this enough to prevent such an event from happening again? For instance, would it prevent me from taking a new fs snapshot should the quota be exceeded?
How much space should I reserve, i.e. make unavailable? (I read somewhere that it is good practice to use only about 80% of the pool capacity.)
Finally, is there a better or more suitable solution to my original problem than setting quotas on the filesystems?
    Thank you very much for your advice.
    Dusan

ZFS has a couple of different quota properties. Note that a quota set on a dataset limits the space consumed by that dataset together with all of its descendants, but it is not inherited as a separate per-child quota - so keep that in mind. Another thing one should always bear in mind is that ZFS gets really slow if you fill the underlying zpool beyond roughly 80%, so watch out for that as well.
Now for the quotas - there are basically two of them, plus one thing called a reservation, which is also a good way to keep a zfs fs from filling up completely.
quota vs. refquota: quota sets the amount of space a zfs fs can occupy including all the data that belongs to that zfs fs, so e.g. snapshots and descendants are included, whereas refquota only refers to the amount of "active" data referenced by that zfs fs, which leaves snapshots out of the equation. You'll want to use refquota e.g. for file servers, where you want to enforce a quota on the actual working data but want to be free to create as many snapshots as you like.
Reservations ensure the minimum space that has to be available to the zfs fs they are set on. If you set a reservation of 10G on each zfs fs, you won't be able to fill the zpool beyond its capacity minus the combined reservations.
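As a concrete sketch using the pool layout from the question (the sizes here are made up for illustration; pick values that leave roughly 20% of the pool free):
# cap everything in the pool, snapshots and descendants included
zfs set quota=4T pool
# limit the active data of a child file system without counting its snapshots
zfs set refquota=500G pool/svn
# guarantee a minimum amount of space to the home file system
zfs set reservation=200G pool/home
# verify the settings
zfs get quota,refquota,reservation pool pool/svn pool/home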
    You can read about this in the excellent Oracle® Solaris ZFS Administration Guide, which is available on Oracle's site.
    Cheers,
    budy

  • ZFS / vxfs Quick IO

    Hi,
    Apologies if this has been asked previously but i'll ask anyway...
We are having discussions around whether to configure ZFS or Veritas Foundation Suite for a new Oracle 10g installation. My thought is that we should definitely go with ZFS, but on previous Volume Manager installs a license for Quick I/O was purchased, so I was wondering which aspect of ZFS is comparable to Quick I/O. I appreciate that ZFS is a totally different filesystem, but unfortunately we only have ZFS installed on test/dev servers which do not have Oracle configured, and although I've seen many a blog comparing ZFS to VxFS it's still a little unclear.
I did read that it was advisable NOT to place the Oracle redo logs on ZFS due to write performance, so does anyone have any experience (good or bad) of installing Oracle on ZFS, or is it best to take the safe option and configure Veritas?
    Thanks in advance...

    waleed.badr wrote:
My point of view is that VxFS is much better than ZFS ... and even though ZFS tries to imitate VxFS in many ways, it couldn't.
VxFS has excellent read/write I/O performance ... plus the support and the ease of troubleshooting and administration.
As for VxFS, since 1997 there have been a lot of enhancements, a bunch of new features and ongoing development, and a massive improvement in data availability. It is a noticeable change in the file system.
Er... how exactly is VxFS "much better" than ZFS? And at what?
    The elegance, simplicity and power of ZFS surpasses any other Filesystem I have encountered in a 15yr SA career. And yes, I swore by Veritas till I happened to work with ZFS. After that, there was no looking back.
    Edited by: implicate_order on Apr 8, 2010 11:34 AM

  • ZFS and fragmentation

I do not see Oracle on ZFS often; in fact, I was called in to meet my first. The database was experiencing heavy I/O problems, both from undersized IOPS capability and from a lack of performance on the backups - the reading part of them. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with a 1 MiB blocksize), the average I/O blocksize was very small and varying wildly. I feared fragmentation, so I set off to test.
I wrote a small C program that initializes a 10 GiB datafile on ZFS, and repeatedly does:
1 - 1000 random 8 KiB writes with random data (contents) at 8 KiB boundaries (mimicking an 8 KiB database block size)
2 - a full read of the datafile from start to finish in 128*8 KiB = 1 MiB I/Os (mimicking datafile copies, RMAN backups, full table scans, index fast full scans)
3 - goto 1
So it's a datafile that gets random writes and is then scanned in full, to see the impact of the random writes on multiblock read performance. Note that the datafile is not grown; all writes are over existing data.
Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Whereas EXT3, on which I ran the same tests (on the exact same storage), showed stable read timings (around 10 ms for a 1 MiB I/O), ZFS started off at 10 ms and went up to 35 ms per 128*8 KiB I/O after 100,000 random writes into the file. It has not reached the end of the test yet - the service times are still increasing, so the test is taking very long. I do expect it to stop somewhere, as the file would eventually be completely fragmented and could not be fragmented any further.
I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you this means downtime.
On the production server this issue has gotten so bad that migrating to a different filesystem by copying the files would take much longer than restoring from disk backup - the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans; or perhaps unlucky, because then this issue would have become impossible to ignore earlier.
I observed the fragmentation with all the settings for logbias and recordsize that are recommended by Oracle for ZFS. The ZFS caches were allowed to use 14 GiB of RAM (and mostly did), bigger than the file itself.
The question is, of course: am I missing something here? Who else has seen this behavior?
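For anyone who wants to reproduce the effect without the C program, here is a rough shell approximation of the same test (path, file size and iteration count are illustrative; run it under ksh or bash for $RANDOM, and make the file larger than the ARC if you want uncached read timings):
# create a 256 MiB test file on the ZFS filesystem
dd if=/dev/zero of=/pool/data/fragtest bs=1024k count=256
# 1000 random 8 KiB in-place writes at 8 KiB boundaries (the file has 32768 such blocks)
i=0
while [ $i -lt 1000 ]; do
    dd if=/dev/urandom of=/pool/data/fragtest bs=8k count=1 \
       seek=$((RANDOM % 32768)) conv=notrunc 2>/dev/null
    i=$((i + 1))
done
# full sequential read in 1 MiB I/Os; watch the average I/O size and service time in 'iostat -xn 5'
dd if=/pool/data/fragtest of=/dev/null bs=1024k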

    Stephan,
    "well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is with scan reads (aka scattered reads). The SAN cache may reduce the impact, or, in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices.
    "In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
    certainly, ZFS has some very nice features.
    "Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations
    Unfortunately you have not provided enough information to confirm this."
Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes. This fact is easily established by doing large scanning read I/Os and observing the average block size during the read. Moreover, fragmentation MUST be happening, because that's what ZFS is designed to do with random writes - it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented simply by doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the distance between consecutive file offsets).
    "How does the ZFS pool layout look like?"
Separate pools for datafiles, redo+control, archives, disk backups and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device; fragmentation still occurs.
    "Is the whole database in the same pool?"
As in all the datafiles: yes.
"At first you should separate the log and data files into different pools. ZFS works with "copy on write""
It's already configured like that.
"How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
Yes, I have read that. We never surpassed 55% pool usage.
    thanks!

  • I want to run linux directly on ultraSPARC server t4.

I've been using Linux for a long time, and now I need to work with an UltraSPARC T4 server which has Solaris on it. So I am thinking of maybe removing Solaris and running Linux directly on the server. But I'm really curious about the virtualization technologies that would be available if I run Linux directly on the T4 server, and how they compare with Oracle VM Server for SPARC and containers. Any help will be appreciated.

    While there are versions of Linux for SPARC available, currently none are "qualified" by Oracle. You are certainly free to run any OS that you wish, but running an unqualifed/unsupported OS may prevent us from giving you service in the event you have any issues.
    Currently, the Sun System Handbook lists the following OS's as Qualified/Supported.
    Oracle Solaris 11
    Oracle Solaris 10 8/11 Update 10 (U10) - pre-installed
    Oracle Solaris 10 09/10 (U9) plus Oracle Solaris 10 08/11 SPARC Patch Bundle *
    Oracle Solaris 10 10/09 (U8) plus Oracle Solaris 10 08/11 SPARC Patch Bundle *
    Oracle VM Server for SPARC 2.1
    Electronic Prognostics 1.2
    Oracle Solaris ZFS
Your best bet will probably be to run either Oracle VM or Solaris as your host OS, then run whatever else in VMs.
    Specific configuration decisions will be up to you, but feel free to Open a Service Request (SR) if you need more help.

  • Oracle DB Can't Survive ZFS SA Controller Failover

    We are running two new Sparc T4-1 servers against a ZFS SA with two heads and a single DE2-24P disk shelf. It is configured with a single pool for all the storage. Our servers are clustered with VCS as an active/passive pair, so only one server accesses storage at a time. The active server runs the Oracle Enterprise DB version 12c, using dNFS to connect to the shares. Before deployment, we are testing out various failure scenarios, and I was disheartened to see that the Oracle DB doesn't handle a controller failover very well. Here's how I tested:
    My DBA kicked off a large DB import job to provide some load.
    I logged in to the secondary head, and issued "takeover" on the "cluster" page.
    My DBA monitored the DB alert log, and reported everything looking fine.
    When the primary head was back online, I logged in to it, and issued "takeover" on the "cluster" page.
    This time things didn't go so well. We logged the following:
    Errors in file /u04/app/oracle/diag/rdbms/aasc/aasc/trace/aasc_arc2_1296.trc:
    ORA-17516: dNFS asynchronous I/O failure
ORA-17516: dNFS asynchronous I/O failure
(this error was logged 24 times in total)
    ARCH: Archival stopped, error occurred. Will continue retrying
    Tue Aug 12 14:25:14 2014
    ORACLE Instance aasc - Archival Error
    Tue Aug 12 14:25:14 2014
    ORA-16038: log 15 sequence# 339 cannot be archived
    ORA-19510: failed to set size of  blocks for file "" (block size=)
    12-AUG-14 14:32:03.424: ORA-02374: conversion error loading table "ARCHIVED"."CR_PHOTO"
    12-AUG-14 14:32:03.424: ORA-00600: internal error code, arguments: [kpudpcs_ccs-1], [], [], [], [], [], [], [], [], [], [], []
    12-AUG-14 14:32:03.424: ORA-02372: data for row: INVID : ''
    12-AUG-14 14:32:03.513: ORA-31693: Table data object "ARCHIVED"."CR_PHOTO" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-00600: internal error code, arguments: [kpudpcs_ccs-1], [], [], [], [], [], [], [], [], [], [], []
    My DBA said that this was a very risky outcome, and that we certainly wouldn't want this to happen to a live production instance.
    I would have hoped that the second controller failover would have been invisible to the Oracle instance. What am I missing?
    Thanks.
    Don

Your FRA filled up.
You are getting "ORA-16038: log 15 sequence# 339 cannot be archived".
This means that there is no more space in the FRA.
You need to clean up the FRA. Here are some steps:
    SQL > alter system set db_recovery_file_dest_size=18G;
    http://oraclenutsandbolts.net/knowledge-base/oracle-data-guard/65-oracle-dataguard-and-oracle-standby-errors
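For illustration, a minimal way to check FRA usage and clear archived logs that RMAN has already backed up (this assumes RMAN backups exist; size db_recovery_file_dest_size to your archiving rate rather than copying the 18G above):
SQL > select name, space_limit, space_used, space_reclaimable from v$recovery_file_dest;
RMAN > delete archivelog all backed up 1 times to device type disk;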

  • Oracle 11.2.0.3 upgrade running on Solaris 10 using zfs/zones

    Hello,
    We currently run solaris 10 using zfs/zones.
    We have a global zone and several sparse root zones.
The Oracle upgrade (from 10.2.0.4 to 11.2.0.3) prerequisite check is reporting the following warnings:
OS kernel parameter "tcp_smallest_anon_port", plus three other warnings for tcp_largest_anon_port and for the corresponding UDP ports.
This requires that we use ndd to change the values for these port ranges in the global zone.
These changes will affect all of the sparse root zones, not just the one in which we are upgrading Oracle.
Will this pose any problems, or is it safe to make these port changes in the global zone?
    Thanks
    Kevin

    I would recommend you log an SR with Support
    Srini
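For reference, the warnings refer to the ephemeral port ranges, which are changed with ndd in the global zone roughly like this (a sketch only; the exact values come from the 11.2.0.3 installation guide, and the ones below are the commonly documented ranges, not taken from this thread). Because sparse root zones share the global zone's IP stack, the new ranges apply to every shared-IP zone on the box, which is normally harmless but should be checked against any application that relies on the old ephemeral range:
# set the anonymous (ephemeral) port ranges expected by the 11gR2 prerequisite check
ndd -set /dev/tcp tcp_smallest_anon_port 9000
ndd -set /dev/tcp tcp_largest_anon_port 65500
ndd -set /dev/udp udp_smallest_anon_port 9000
ndd -set /dev/udp udp_largest_anon_port 65500
# verify
ndd -get /dev/tcp tcp_smallest_anon_port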

  • Parameters of NFS in Solaris 10 and Oracle Linux 6 with ZFS Storage 7420 in cluster without database

    Hello,
I have a ZFS 7420 in a cluster, and Solaris 10 and Oracle Linux 6 hosts without a DB, and I need to mount NFS shares on these OSes, but I do not know which parameters are best for this.
Which are the best parameters for mounting an NFS share on Solaris 10 or Oracle Linux 6?
    Thanks
    Best regards.

    Hi Pascal,
My question is because when we mount NFS shares on some servers, for example Exadata Database Machine or SuperCluster, for best performance we need to mount these shares with specific parameters, for example:
    Exadata
    192.168.36.200:/export/dbname/backup1 /zfssa/dbname/backup1 nfs rw,bg,hard,nointr,rsize=131072,wsize=1048576,tcp,nfsvers=3,timeo=600 0 0
    Super Cluster
    sscsn1-stor:/export/ssc-shares/share1      -       /export/share1     nfs     -       yes     rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
Now,
my network is 10GbE.
What happens with normal servers running only the OS (Solaris and Linux)?
Which parameters do I need to use for best performance, or are specific parameters not necessary?
    Thanks.
    Best regards.
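Absent an official recommendation for plain hosts, a reasonable starting point is to mirror the options shown above, sized for large sequential transfers over 10GbE (server name, share and mount point below are placeholders; verify against your own workload):
# Solaris 10 (manual mount; the same options go into /etc/vfstab)
mount -F nfs -o rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3 zfssa:/export/share1 /mnt/share1
# Oracle Linux 6 (/etc/fstab entry)
zfssa:/export/share1  /mnt/share1  nfs  rw,bg,hard,nointr,rsize=131072,wsize=131072,tcp,nfsvers=3,timeo=600  0 0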

  • Using ZFS for Oracle RAC 11gR2 binaries

    Hi,
    We have following scenario,
    Two Node Cluster: Oracle RAC 11Gr2 with Clusterware on Solaris 10
We want to keep the Oracle & Clusterware binaries on a ZFS mirrored file system locally on each node, and the data files, FRA, voting disks & OCR on shared SAN storage using ASM.
My question: is the above scenario certified by Oracle, i.e. can we keep the Oracle binaries on ZFS?
    Will appreciate your input.
    Thanks

    Well my confusion started after reading this doc on oracle support:
    Certification of Zeta File System (Zfs) On Solaris 10 for Oracle RDBMS [ID 403202.1]
    "Oracle database 10gR2 (10.2.0.3 and higher patches), 11gR1 (11.1.0.6 and higher patches) and 11gR2 (11.2.0.1 and higher patches) are certified with Solaris 10 ZFS on Sparc 64-bit and Solaris x84-64. See Solaris ZFS_Best_Practices_Guide. This is for single instance ONLY. ZFS is currently not applicable to RAC and not Certified to use it as a shared filesystem."

  • Multipath.conf for oel 6.4 and oracle zfs appliance

I have a ZFS 7320 storage appliance connected via Brocade fiber switches to clustered Linux servers. One ZFS controller has four fiber connections, and each server has two fiber connections. There are redundant switches, so the ZFS appliance is connected with two fiber connections to each switch and each server is connected with one fiber connection to each switch. The ZFS appliance manual does not have an OEL or Red Hat (or any other similar 6.x) multipath configuration. Oracle support and sales have not yet been able to find a multipath config for 6.x and suggested I post here. Since I had issues that seem to be resolved with new ZFS appliance software, I want to verify my multipath config before I put the unit back in production; I would feel much better with a known good config.
    This is what is in the manual for 5.x linux:
device {
vendor "SUN"
product "Sun Storage 7310" or "Sun Storage 7410" (depending on the storage system)
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
hardware_handler "0"
path_grouping_policy group_by_prio
failback immediate
no_path_retry queue
rr_min_io 100
path_checker tur
rr_weight uniform
}
    This is what seems to work with 6.x:
    device {
    vendor "SUN"
    product "ZFS Storage 7320"
    getuid_callout "/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    prio alua
    hardware_handler "0"
    path_grouping_policy group_by_prio
    failback immediate
    no_path_retry queue
    rr_min_io 100
    path_checker tur
    rr_weight uniform
    }

    Nice to see a proper redundant storage fabric layer for once on OTN. ;-)
    We've been in a similar position using multipath with a custom storage layer (Infiniband based). Like you we hunted for a best-match config - and then stress tested the fabric layer using fio (Flexible I/O Tester). And don't forget to pull some fibre cables, or shutdown/powerdown one of the fibre channel switches, while fio tests are running.
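If it helps, a starting point for that kind of stress test might look like the following (device name, block size and runtime are placeholders; this is deliberately a read-only test, since a write test against a mapped LUN will destroy its data):
# random-read load against one multipath device while you pull cables / fail a switch
fio --name=mpath-randread --filename=/dev/mapper/mpatha --direct=1 \
    --rw=randread --bs=8k --iodepth=32 --numjobs=4 \
    --time_based --runtime=300 --group_reporting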

  • Zfs and Oracle raw volumes

    This is the way I configured raw volumes to be used by Oracle 9:
    # zpool create -f pool146gb c0t1d0
    # zfs create -V 500mb pool146gb/system.dbf
    # cd /dev/zvol/rdsk/pool146gb
    # ls -l system.dbf
    lrwxrwxrwx 1 root root 40 Oct 4 19:26 system.dbf -> ../../../../devices/pseudo/zfs@0:16c,raw
    # chown oracle:oinstall ../../../../devices/pseudo/zfs@0:16c,raw
    # zfs list -o name,volsize| grep system.dbf
    pool146gb/system.dbf 500M
    Resizing system.dbf to 600 MB
    # zfs set volsize=600mb pool146gb/system.dbf
    # zfs list -o name,volsize| grep system.dbf
    pool146gb/system.dbf 600M
    My question:
    Is this a good approach for creating Oracle tablespaces in raw volumes?

Marcus Ruehmann (guest) wrote:
: Hi,
: did anybody successfully test Stephen Tweedie's raw device patch
: against Oracle? I also tested it against Sybase but could not get it to run.
: Stephen checked straces and stuff but there seems to be no error!
: Look at ftp.uk.linux.org/pub/linux/sct/fs
: This patch is against Linux 2.2.9.
: Please people, test it and tell me if it works for you.
: Cheers,
: Marcus
Hi Marcus,
I tested raw devices with Oracle, but on AIX with Oracle 7.3, and it worked perfectly. While I'm waiting for Oracle 8i on Linux to test it with raw devices, I don't think it would be too different.
Brdgs,
Quoc Trung
