ASM Endianness conversion

Hi,
has anyone ever tried to mount a big-endian ASM disk (from AIX, for example) in a little-endian environment (Linux x86)?
I have a disk that kfed is able to read, but ASM is unable to mount it due to the endianness incompatibility.
Thanks!
From the kfed read:
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                     128 ; 0x008: file=128
kfbh.check:                  1999209486 ; 0x00c: 0x7729840e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                     4106 ; 0x020: 0x0000100a
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:              AIXDG_0000 ; 0x028: length=10
kfdhdb.grpname:                   AIXDG ; 0x048: length=5
kfdhdb.fgname:               AIXDG_0000 ; 0x068: length=10
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:           3043620609 ; 0x0a8: HOUR=0x1 DAYS=0x18 MNTH=0xd YEAR=0x5a7
kfdhdb.crestmp.lo:              9184640 ; 0x0ac: USEC=0x180 MSEC=0x309 SECS=0x8 MINS=0x0
kfdhdb.mntstmp.hi:           3043620609 ; 0x0b0: HOUR=0x1 DAYS=0x18 MNTH=0xd YEAR=0x5a7
kfdhdb.mntstmp.lo:              7910528 ; 0x0b4: USEC=0x80 MSEC=0x22d SECS=0x7 MINS=0x0
kfdhdb.secsize:                       2 ; 0x0b8: 0x0002
kfdhdb.blksize:                      16 ; 0x0ba: 0x0010
kfdhdb.ausize:                     4096 ; 0x0bc: 0x00001000
kfdhdb.mfact:                2159804672 ; 0x0c0: 0x80bc0100
kfdhdb.dsksize:                 7864320 ; 0x0c4: 0x00780000
kfdhdb.pmcnt:                  33554432 ; 0x0c8: 0x02000000
kfdhdb.fstlocn:                16777216 ; 0x0cc: 0x01000000
kfdhdb.altlocn:                33554432 ; 0x0d0: 0x02000000
kfdhdb.f1b1locn:               33554432 ; 0x0d4: 0x02000000
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:                   4106 ; 0x0e0: 0x0000100a
kfdhdb.grpstmp.hi:           3043620609 ; 0x0e4: HOUR=0x1 DAYS=0x18 MNTH=0xd YEAR=0x5a7
kfdhdb.grpstmp.lo:              2630528 ; 0x0e8: USEC=0x380 MSEC=0x208 SECS=0x2 MINS=0x0
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000
kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000
kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000
kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000
kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000
kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000
kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000
kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000
kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000
kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000
kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000
kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000
kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000
kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000
kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000
kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000
kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000
kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000
kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000
kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000
kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000
kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000
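The kfed dump above already tells you the byte order: per the header layout shown, kfbh.endian is the very first byte of the block (0 on this AIX disk) and kfbh.type is the third (1 = KFBTYP_DISKHEAD). A minimal Python sketch of that check, assuming the 0 = big / 1 = little convention visible in this dump holds on your version:

```python
KFBTYP_DISKHEAD = 1  # kfbh.type value for a disk header, per the dump above

def asm_header_endianness(block: bytes) -> str:
    """Classify an ASM disk-header block as 'big' or 'little' endian.

    Assumption: kfbh.endian == 0 means big endian (as on the AIX disk
    dumped above) and 1 means little endian; verify against your own
    kfed output before relying on this.
    """
    if block[2] != KFBTYP_DISKHEAD:
        raise ValueError("not a KFBTYP_DISKHEAD block")
    return "big" if block[0] == 0 else "little"

# Synthetic first bytes mirroring the dump: endian=0x00, hard=0x82,
# type=0x01, datfmt=0x01, padded out to one 512-byte block.
header = bytes([0x00, 0x82, 0x01, 0x01]) + bytes(508)
print(asm_header_endianness(header))  # -> big
```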

Using ASM the answer is NO. You also need to convert the database before moving it to a platform with a different endianness, whether or not you use ASM: use CONVERT DATAFILE on the source and import it on Linux.
This post covers Linux to AIX, but it also works AIX to Linux:
How Convert full Database 10g Linux-x86-64bit to AIX-64bit different Endian Format | Levi Pereira
The point is to AVOID the LAN, correct?
Using raw devices during the migration can avoid the copy over the LAN. Test it first to make sure it will work.
e.g.
CONVERT TABLESPACE DATA_MOVE
  TO PLATFORM = "Linux x86 64-bit"
  DB_FILE_NAME_CONVERT ('+TSM/tsm/datafile/data_move.285.782738837', '/dev/rhdisk12');
Starting conversion at source at 08-MAY-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=498 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00011 name=+TSM/tsm/datafile/data_move.285.782738837
converted datafile=/dev/rhdisk12
channel ORA_DISK_1: datafile conversion completed
Then, when importing the dump file, just point to the raw disk.
The raw device must not be created under a VG (i.e. you must create each datafile as a raw device (LUN) on the storage array, NOT as an AIX LV).

Similar Messages

  • 11510 to RAC ASM RAW Conversion

    Hi,
    We are using 11.5.10 in a multinode environment, and the customer has asked me to convert it to RAC with ASM on raw devices. Currently the database runs on HP Itanium with 10gR1 and the application runs on HP Tru64 (11.5.10). Please point me to a good note for implementing this solution; I have an RMAN cold backup of my database.
    Thanks in Advance,
    Panneer.

    Metalink Note 220970.1 could be a starting point; see the section "What is the optimal migration path to be used while migrating the E-Business suite to RAC?"
    C.

  • Anyone used the ASM cp command to do an endian conversion?

    We are using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production on Solaris and want to convert it to Linux. We have read that the asmcmd cp command can do the conversion, which would allow us to bypass RMAN CONVERT.
    Does anyone have any experience doing this?
    Does anyone have any experience doing this?

    Hi,
    These are two different things.
    ASM is a filesystem with its own endianness, which is platform dependent.
    A datafile also has its own endianness and is stored on an ASM disk (the filesystem), which has its own endianness.
    Example:
    A datafile of my database on AIX is big endian and is stored on an ASM disk (filesystem) that is also big endian.
    But I can convert this datafile to little endian on AIX (i.e. RMAN CONVERT DATAFILE on the source before moving it) and store the little-endian datafile on the big-endian ASM disk.
    So, in this case, I can use asmcmd cp from the big-endian ASM to move the converted (little-endian) datafile to my destination, which is ASM with little endianness.
    How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN (Doc ID 371556.1)
    If the source files are on an ASM diskgroup and the target platform has a different endianness, the CONVERT command needs to be executed. The files cannot be copied directly between two ASM instances on platforms of different endianness.

  • How can I move back the rman convert file from file system to ASM?

    I have no idea how to plug in the data files, which were unloaded as follows:
    SQL> alter tablespace P_CDDH_DSPGD_V1_2011 read only;
    SQL> alter tablespace P_IDX_CDDH_DSPGD_V1_2011 read only;
    SQL> exec dbms_tts.transport_set_check('P_CDDH_DSPGD_V1_2011,P_IDX_CDDH_DSPGD_V1_2011',true);
    SQL> select * from transport_set_violations;
    UNIX> expdp tossadm@pmscdhf1 dumpfile=ttsfy1.dmp directory=trans_dir transport_tablespaces = P_CDDH_DSPGD_V1_2011,P_IDX_CDDH_DSPGD_V1_2011
    RMAN> convert tablespace P_CDDH_DSPGD_V1_2011, P_IDX_CDDH_DSPGD_V1_2011 format = '/appl/oem/backup/temp/%I_%s_%t_extbspace.dbf';
    Starting conversion at source at 03-OCT-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=116 instance=pmscdhf11 device type=DISK
    channel ORA_DISK_1: starting datafile conversion
    input datafile file number=00079 name=+PMSCDHF1/p_cddh_dspgd_v1_2011_01.dbf
    converted datafile=/appl/oem/backup/temp/3536350174_2820_827849001_extbspace.dbf
    channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:02:15
    channel ORA_DISK_1: starting datafile conversion
    input datafile file number=00080 name=+PMSCDHF1/p_idx_cddh_dspgd_v1_2011_02.dbf
    converted datafile=/appl/oem/backup/temp/3536350174_2821_827849136_extbspace.dbf
    channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:01:45
    Finished conversion at source at 03-OCT-13
    Starting Control File Autobackup at 03-OCT-13
    piece handle=/dbms/oracle/r1110/db_01/dbs/c-3536350174-20131003-02 comment=NONE
    Finished Control File Autobackup at 03-OCT-13
    SQL> drop tablespace P_CDDH_DSPGD_V1_2011 including contents;
    SQL> drop tablespace P_IDX_CDDH_DSPGD_V1_2011 including contents;
    Afterward, how can I relocate the backup files "/appl/oem/backup/temp/3536350174_2820_827849001_extbspace.dbf" and "/appl/oem/backup/temp/3536350174_2821_827849136_extbspace.dbf" back to the ASM disk group +PMSCDHF1?

    The 11.1 documentation only says  "Enables you to copy files between ASM disk groups on local instances to and from remote instances" and "You can also use this command to copy files from ASM disk groups to the operating system."
    http://docs.oracle.com/cd/B28359_01/server.111/b31107/asm_util.htm#CHDJEIEA
    The 11.2 documentation says "Copy files from a disk group to the operating system  Copy files from a disk group to a disk group Copy files from the operating system to a disk group"
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_util003.htm#CHDJEIEA
    I've never tried 11.1
    Hemant K Chitale

  • Non-unicode to Unicode conversion in ECC 6.0

    Hello friends,
    We have ECC 6.0 at our location; it was upgraded from 4.7 to ECC 6.0, but it is in non-Unicode format, and we have to convert it to Unicode. We have seen that we can export the data from the current non-Unicode system with SAPINST and import that data into another ECC 6.0 Unicode system, also using SAPINST.
    I am a bit confused about which option in SAPINST I have to select to export the data and which option to select to import it, so please can anybody help me with this.
    Regards
    Anil

    > Go for the exact document which you need. Code page
    > 4100 is common for Unicode conversion....
    4100? No, 4102 or 4103 are the correct ones, depending on the processor and operating system (endianness). See
    Note 552464 - What is Big Endian / Little Endian? What Endian do I have?
    Markus
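    The endianness-to-code-page choice from Note 552464 is mechanical, and Python's sys.byteorder reports which side a given host is on. A hedged sketch (the mapping 4102 = UTF-16BE for big-endian hosts / 4103 = UTF-16LE for little-endian hosts follows the note referenced above; confirm it for your release):

```python
import sys

# Code page per target byte order, per SAP Note 552464:
# 4102 = UTF-16BE (big-endian hosts), 4103 = UTF-16LE (little-endian hosts).
CODEPAGE = {"big": "4102", "little": "4103"}

def sap_unicode_codepage(byteorder: str = sys.byteorder) -> str:
    """Pick the Unicode code page matching a host's byte order."""
    return CODEPAGE[byteorder]

print(sap_unicode_codepage("big"))     # e.g. AIX, Solaris/SPARC, HP-UX
print(sap_unicode_codepage("little"))  # e.g. Linux x86, Windows
```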

  • HowtO - Upgrade 10gR1 WIN 2003 ent(non-asm) to 11gR2 WIN 2008 ent(non-asm)

    I did this on VMWare:
    Two servers:
    Two servers - old ; new
    Steps
    old - oracle 10g ; win 2003 enter
    - 3 databases (rman catalog ; production ; testing)
    1.installed 11g r2 - different home (software ONLY)
    2.cold backups - 10g db
    3.upgraded them to 11g r2 (minor issues, easily corrected)
    4.cold backups - 11g db
    5.using Oracle toad get script database structure - tablespaces ONLY - 11g db
    new
    6.win 2008 enter
    7.installed oracle grid (later for conversion from non-asm to asm)
    8.installed 11g software ONLY
    9.created disks (for later use for ASM)
    10.Created three blank databases
    11.Created similar folder structure as 11g db in 'old' server
    12.Created similar tablespaces/datafiles in blank databases as source database.
    13.cold backup of three databases - quick backup and recovery
    14.Copied datafiles and log files over from 'old' server to 'new' server.
    15.Using VBScripts, cloned each database from old server (11g db version) cold backups.
    Notes
    A.I should mention that I tried the export/import option (various options): tons of errors! Gave up....
    B.The above approach is time-consuming, not quick...
    Warnings:
    1.Oracle SQL Developer and Toad will not work or install on the x64-bit OS. SQL Developer does not work due to Java (there are a large number of hits on this); it is a hit-and-miss affair, and you may or may not get it to work.
    2.So I used the Toad tool on the 'old' server to query the database on the new server.
    RMAN Side.
    1.Errors occur when you log onto RMAN FIRST TIME.
    http://seilerwerks.wordpress.com/2007/03/06/fixing-a-32-to-64-bit-migration-with-utlirpsql/
    Hope this helps others....
    Ref:
    http://download.oracle.com/docs/html/B13831_01/ap_64bit.htm#CHDCDAGE

    Hi there
    Good idea, but what I find is that I usually come to this site for knowledge... or I google it.
    The way I see it, if someone has an upgrade issue (similar to this), this is the place to add a thread...

  • Cross Platform migration AIX to linux (ERROR IN CONVERSION ORA-19994: Message 19994 not found)

    I am performing a cross-platform migration from AIX to Linux, from release 11.2.0.2 to 11.2.0.4. I am using this doc:
    Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1).
    I am using the dbms_file_transfer method and I have hit this error:
    Error:
    /home/oracle/local/db_convert/dest_rman/xxttconv_i3q0t1ha_1_1_4.sql execution failed
    ERROR IN CONVERSION ORA-19994: Message 19994 not found;  product=RDBMS;
    facility=ORA
    ORA-19600: input file is backup piece
    (/oraback/nfclpat1/i2q0t1ha_1_1)
    ORA-19601: output file is backup piece
    (/oraback/nfclpat1_tmp/xib_i2q0t1ha_1_1_5_28)
    CONVERTED BACKUP PIECE/oraback/nfclpat1_tmp/xib_i2q0t1ha_1_1_5_28
    PL/SQL procedure successfully completed.
    I have searched and do not have anything to go on.
    Can anyone who has seen this give me some direction?

    As per the Oracle documentation on the given error code, please see if the following action helps.
    ORA-19994: cross-platform backup of compressed backups to different endianess is not supported
    Cause: A cross-platform backup was requested for a compressed backup to a different endianess from the current platform.
    Action: Do not specify a compressed backup or specify the same endian platform.

  • Transportable Tablespace in ASM

    Hi all,
    I want to move one tablespace from a database to another database using the 10g cross platform transportable tablespace.
    My source database is running on Linux 32 bit (Little endian format) and the target database is running on AIX (Big endian format). However both databases use the ASM diskgroup as a storage option.
    Do I need to convert the endian format of the source tablespace datafile to transport to the target database?
    Thanks,

    Because an ASM diskgroup consists of one or more raw partitions or raw devices, I am wondering if I still have to convert the endian format of the datafiles...
    Interesting question.
    As you said earlier:
    - Linux natively uses Little endian
    - AIX natively uses Big endian
    You imply that perhaps Linux's ASM 'converts' to an 'Oracle neutral-endian' format whenever something is stored in the ASM disk group. Which would make it identical to AIX's ASM format, also 'converted'.
    (I use 'convert' in quotes, as one or the other would not need to get switched, if a specific endian form is maintained.)
    Worth investigating further, although I am not sure why Oracle would introduce the overhead of conversion to and from on every storage call.
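    Byte-level intuition for why the CONVERT step exists: the same 32-bit value is laid out as different bytes on the two platforms, so reading one platform's bytes with the other's native order yields garbage. A small sketch (the value is just the kfbh.check checksum from the first post, used here as an arbitrary 32-bit number):

```python
import struct

# A 32-bit value as written by a big-endian (AIX) and a little-endian
# (Linux x86) writer: same value, different on-disk bytes.
value = 0x7729840E

big = struct.pack(">I", value)     # bytes 77 29 84 0e
little = struct.pack("<I", value)  # bytes 0e 84 29 77
assert big != little

# Reading big-endian bytes with the wrong (little-endian) format yields
# a different number; interpreting them in the right order recovers it.
wrong = struct.unpack("<I", big)[0]
assert wrong != value
recovered = int.from_bytes(big, "big")
assert recovered == value
print(hex(wrong), hex(recovered))
```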

  • GI+ASM+DBMS+DB cloning

    Hi.
    I have 1-node RAC 11.2.0.3 deployed on OL5.11 with test DB in ASM (3 x OCR_VOTE disks, 2 x DATA disks, 2 x FRA disks).
    The aim is to create a new cluster using the current cluster as a master, and to get the DB through ASM disk cloning.
    As for http://docs.oracle.com/cd/E11882_01/rac.112/e41959/clonecluster.htm#CWADD92122 and "HOW TO CLONE AN 11.2.0.3 GRID INFRASTRUCTURE HOME AND CLUSTERWARE (DOC ID 1413846.1)" it is necessary to:
    prepare node
    deploy GI
    run clone.pl
    run config.sh
    Steps 1 and 2 in my case were replaced with:
    creating a master node as per the docs (directly deleting the cluster configuration and other files vs. deleting on a copy)
    cloning the disks and the virtual machine
    changing the relevant names and IPs
    I was successful up to "run config.sh". But running config.sh I got:
    [FATAL] [INS-40401] The Installer has detected a configured Oracle Clusterware home on the system.
    Any ideas? comments?
    Thank you in advance for your valuable time.

    Problem solved.
    Environment:
    VirtualBox.
    Linux 5.11
    1-node cluster with ASM (11.2.0.3)
    DBMS (11.2.0.3) installed
    1 RAC DB created.
    Disks in ASM:
    sharable 3 x ocrvote
    sharable 2 x data
    sharable 2 x fra
    1. Create pfile for DB.
    2. Stop all RAC services, disable crs.
    3. Delete files as for official docs.
    4. Clean OS files: logs, tmp, etc.
    5. Detach GI home
    6. Detach DB home
    Now we have master node.
    1. Clone VM (or disks).
    Now we have first node for new cluster.
    1. Prepare table of conversion: IPs, names.
    2. Register IPs, names in DNS
    3. Start cloned host.
    4. Change IPs, names in OS.
    5. Delete and create ssh, establish ssh equivalence for grid and oracle users.
    6. Redefine ASM disks assignments.
    7. Run clone.pl as GI user.
    8. Run root.sh as root.
    9. unlock GI.
    10. Move or delete ocr.loc.
    11. Run config.sh.
    12. Check cluster configuration.
    13. Check ASM.
    14. Change scan name to new one.
    15. Resolve multicast (if necessary)
    GI is ready.
    To prepare RAC DBMS.
    1. Run clone.pl.
    2. Run root.sh.
    3. Modify DB registration.
    4. Add instance.
    5. Modify tnsnames.ora with the new SCAN name.
    6. Modify the pfile created earlier with the new SCAN name.
    7. Start DB from pfile.
    8. Modify remote_listener. Create spfile.
    9. Start DB.
    10. If necessary it is possible to change DB name, DBID, disk groups names.
    You are welcome to new cluster and new RAC DB .

  • Moving from RAW FILES on hp-ux to ASM on Linux

    We need to migrate a 10g R2 Database from HP-UX raw files sitting on EMC Storage to ASM on a Linux machine.
    How can that be done with minimal downtime since the endianess is different between the 2 O/S ?
    Yoni

    If you can, create a diskgroup on the HP-UX system and let ASM move the files. If you can do it that way, downtime should be zero.
    Otherwise use RMAN.

  • ASM diskgroups and RAID

    I've been reading and reading about this, but can't seem to find a definitive answer...
    We have a new implementation which will be using ASM with RAID. The data area needs to be 3TB, and the recovery area, to be used for archive logs and RMAN backups, needs to be 1TB.
    The configuration i'm thinking about now is:
    DATA diskgroup: 5*600G disks, using RAID 10 (this will include control files)
    +FRA diskgroup: 2*500G disks, using RAID 1
    +LOG diskgroup: 2*1G disks, using RAID 1 (this is only for redo logs)
    So here are my questions:
    1. Am I right in supposing that we would get the best performance on the FRA and LOG diskgroups by not using RAID 1+0? (i.e. hardware mirror, no hardware stripe)
    2. It has already been agreed to use RAID 1+0 for the +DATA diskgroup, but I can't see the added benefit of this. ASM will already stripe the data, so surely RAID 0 will just stripe it again (i.e. double-striping the data). Would it not be better just to mirror the data at a hardware level?
    Any other notes about my proposed configuration would also be greatly appreciated.
    Thanks
    Rup

    user573914 wrote:
    I've been reading and reading about this, but can't seem to find a definitive answer...
    We have a new implementation which will be using ASM with RAID. The data area needs to be 3TB, and the recovery area, to be used for archive logs and RMAN backups, needs to be 1TB.
    The configuration i'm thinking about now is:
    DATA diskgroup: 5*600G disks, using RAID 10 (this will include control files)
    I'm guessing that you mean 600G LUNs, not physical disks, right? Are the LUNs carved from one RAID-10 array, or multiple arrays?
    +FRA diskgroup: 2*500G disks, using RAID 1
    +LOG diskgroup: 2*1G disks, using RAID 1 (this is only for redo logs)
    So here are my questions:
    1. Am I right in supposing that we would get the best performance on the FRA and LOG diskgroups by not using RAID 1+0? (i.e. hardware mirror, no hardware stripe)
    RAID-1 is exactly 2 physical devices, by definition - no more, no less. Since REDO and FRA are (generally) sequential write, you're only getting the throughput benefit of 1 physical disk per mirrored pair due to RAID-1 overhead (2x write overhead due to mirroring). Of course, that doesn't take into account write cache on the storage device, but that's an entirely different conversation. Maybe 1 disk's throughput is enough for your environment - it really depends on your requirements. Check your redo I/O and throughput against the rated disk IOPS and throughput to determine if you're configured correctly. Be sure to consider RAID overhead in your calcs!
    It might make more sense to carve a couple of 1G LUNs from multiple RAID-10 arrays, which stripes data across multiple disks for better throughput, assuming the arrays are not IO-bound.
    2. It has already been agreed to use RAID 1+0 for the +DATA diskgroup, but I can't see the added benefit of this. ASM will already stripe the data, so surely RAID 0 will just stripe it again (i.e. double-striping the data). Would it not be better just to mirror the data at a hardware level?
    Again, it depends on how that RAID-10 is being presented to you by the storage admins. The 5x600GB LUNs could be carved out of 5 different RAID-10 arrays, which would increase the total number of spindles behind your diskgroup and therefore improve performance (assuming no other contention). Or, they may have carved all 5 LUNs out of a single dedicated array, which can still provide benefit, assuming the array has more than 10 disks. Or, even if the array only has 10 disks, it may be more cost effective for the storage admin due to a lower number of hot spares required to support a single array vs. multiple arrays (which doesn't benefit you - and this is where a potential performance difference due to the double-striping vs. RAID-1 comes in). There are many variables that come into play.
    All things being equal, you're generally going to get better performance out of more physical disks, as long as the disks are managed properly - the additional spindles will easily negate any striping overhead (as long as the IOPS are not all happening to multiple LUNs in the same array).
    It all comes back to your need and how your storage guys are configuring things.
    Let us know if you have more specific questions - there are so many combinations of configuration that it's difficult to determine a "best" configuration without understanding your storage setup... :)
    K
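    The "check your redo I/O against the rated disk IOPS, considering RAID overhead" advice above is back-of-the-envelope arithmetic. A sketch, where every number is an illustrative assumption rather than anything from the thread (150 IOPS per disk is a typical 10k-rpm figure):

```python
# Effective IOPS a LUN can sustain, given spindle count, per-disk IOPS,
# the read fraction of the workload, and the RAID write penalty
# (RAID-1/10: 2 back-end writes per front-end write).
def effective_iops(disks: int, iops_per_disk: int, read_frac: float,
                   write_penalty: int = 2) -> float:
    raw = disks * iops_per_disk
    # Each front-end write costs `write_penalty` back-end I/Os.
    return raw / (read_frac + (1 - read_frac) * write_penalty)

# A 2-disk RAID-1 pair of 150-IOPS drives doing pure redo writes:
# exactly one disk's worth of throughput, as the reply states.
print(round(effective_iops(2, 150, read_frac=0.0)))   # 150

# A 10-disk RAID-10 array with a 70/30 read/write mix.
print(round(effective_iops(10, 150, read_frac=0.7)))  # 1154
```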

  • Byte conversion

    I have a read function that collects a vector of nbytes of data from an external memory device using memory addresses. These bytes are an amalgamation of multiple data types (U16, U32, doubles) that need to be parsed into their respective variables. I'm doing this by taking the corresponding elements of the vector, flipping them to the correct endianness, and type casting the multiple bytes to a single value.
    To speed up the data transfer process, I can transfer multiple vectors of nbytes as a matrix. However, my speed-up in reading the data from the device is thwarted, since I have to move through the data one vector at a time to parse it into values (you cannot type cast a 2D array).
    The bytes allotted for each vector are as follows:
    U16
    U16
    U32
    U32
    U32
    Double
    Double
    U32
    U16
    U16
    If there are any suggestions you may offer, I would greatly appreciate them.  Thank you for your time.
    Adam

    Can you run your VI so the "Byte array" indicator contains data, then right-click the terminal and create a constant? Delete everything else except the constant and the conversion code. (Alternatively, do the same with the 1D array a few steps earlier.)
    You have way too many local variables and associated hidden indicators. "Tracematrix" and "trace" belong in shift registers. No hidden indicators needed. Why do you read "trace" twice in parallel in the last inner frame? Why not read it once and branch the wire?
    (For reference, here my earlier code, but dealing with arrays)
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    ParseMultipleArray.vi ‏17 KB
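    The fixed per-vector layout listed in the question (2 x U16, 3 x U32, 2 x double, a U32, 2 x U16 = 40 bytes) maps directly onto a format string, and a whole matrix of concatenated records can then be parsed in one pass. A sketch in Python rather than LabVIEW; the big-endian '>' prefix is an assumption about the device's byte order and is exactly the "flip" step to adjust:

```python
import struct

# Per-vector layout from the post: U16 U16 U32 U32 U32 double double U32 U16 U16.
# '>' = big-endian (assumed device byte order); use '<' if yours is little-endian.
RECORD_FMT = ">HHIIIddIHH"
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 40 bytes per vector

def parse_records(buf: bytes) -> list:
    """Split a buffer of concatenated fixed-size records into tuples."""
    if len(buf) % RECORD_SIZE:
        raise ValueError("buffer is not a whole number of records")
    return list(struct.iter_unpack(RECORD_FMT, buf))

# Two synthetic records to show the round trip.
rec = struct.pack(RECORD_FMT, 1, 2, 3, 4, 5, 6.5, 7.5, 8, 9, 10)
rows = parse_records(rec * 2)
print(len(rows), rows[0])
```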

  • Database upgrade, platform migration with ASM

    Hi All
    I am using R12.1.2 with a 10.2.0.4 database on an HP PA-RISC 64-bit operating system. I want to upgrade the database to 11gR2 with ASM and migrate the OS to RHEL5, 64-bit. Kindly suggest the sequence in which we should proceed.
    Thanks
    Krishna

    Hi Krishna;
    I am using R12.1.2 with a 10.2.0.4 database on an HP PA-RISC 64-bit operating system. I want to upgrade the database to 11gR2 with ASM and migrate the OS to RHEL5, 64-bit. Kindly suggest the sequence in which we should proceed.
    For your issue I suggest you just follow the steps below:
    1. Upgrade your db from 10.2 to 11.2 on HP
    2. Migrate your 11.2 from HP to linux
    3. Convert your system to ASM
    Please see below notes:
    1. For convert ASM:
    Convert datafile to asm:
    Re: convert to ASM
    How to move a datafile from a file system to ASM [ID 390274.1]
    How to Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure [ID 747457.1]
    How To Create ASM Diskgroups using NFS/NAS Files? [ID 731775.1]
    How to copy a datafile from ASM to a file system not using RMAN [ID 428893.1]
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/dbxptrn.htm
    convert to ASM
    Re: convert to ASM
    Also check the Google search below:
    http://www.google.com.tr/search?hl=tr&q=convert+datafile+to+asm&meta=&aq=f&aqi=&aql=&oq=&gs_rfai=
    RAC for EBS 12.1.1 with DB 11.1.0.7
    2. For 11g upgrade:
    Interoperability Notes Oracle EBS 11i with Oracle Database 11gR2 (11.2.0.2) [ID 881505.1]
    3. For RHEL5:
    DB migration from AIX to Linux
    Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2 [ID 454616.1]
    General Notes For E-Business Suite Release 12 [ID 986673.1]

  • ACFS and ASM Vol

    When should we use ACFS vs. an ASM volume on 11g Release 2 (11.2.0.3.0)?
    http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmfilesystem.htm
    •Oracle ASM is the preferred storage manager for all database files. It has been specifically designed and optimized to provide the best performance for database file types.
    •Oracle ACFS is the preferred file manager for non-database files. It is optimized for general purpose files.
    •Oracle ACFS does not support any file that can be directly stored in Oracle ASM.
    Is ACFS optional, and does it incur any additional licensing costs?

    Oracle has thrown in all sorts of nonsensical verbiage to completely confuse everyone as to how you can use ACFS. You can use it for "database-related" stuff, but you can't use it for data files, archivelog files, or backup files. You can store exports, because if you do a parallel expdp the export file(s) must be visible to the entire cluster. Which **I think** is to be interpreted as: you cannot store "application" files that are required to be accessible by your applications. I surmise that the reason for this is that Oracle decided they could make more $$$ by releasing a "product" called CloudFS. CloudFS **is** ASM/ACFS - and nothing more. So, on your WebLogic clustered middle-tier you can use CloudFS to store your application data files.
    Example for mid-tier:
    some process transfers 100's of files to be processed
    The middle-tier - running on multiple servers (nodes)
    take the files and process them in parallel (mechanisms in place to ensure only one node processes a given file )
    writes output files to another shared directory.
    In the past, you would provide a "share" by using NFS. Well, as it turns out, really bad things can happen when one of the nodes processing or receiving the files dies: files can get "lost" if that NFS share dies.
    ACFS works great for this - and Oracle realized that since "ASM/GI" was sort-of "free", they needed to close that loophole. In RAC, the only thing that "cost extra" was the RAC license, which only applied to the RDBMS code that made the database cluster-aware. <this from a conversation with an ex-Oracle salesman>

  • Conversion to segment space management auto

    My production databases is 10gR2, tablespaces are created locally managed with segment space management manual.
    I wanted to change the segment space management to AUTO.
    What is the best way to do this keeping the downtime minimal?
    Thanks
    S~

    A summary.
    First, you cannot convert to ASSM. The only mechanism provided is to create a new tablespace with segment space management auto and then move all objects across from their existing tablespace. That is the only conversion mechanism provided or possible.
    Second, you probably don't want to convert to ASSM anyway. It is designed to resolve the problem of massive contention for hot blocks on inserts -the kind of thing that will happen in a RAC. In a RAC, ASSM is extremely good news and you'd be mad NOT to use it. But if you don't have a RAC, then the chances of you needing ASSM are much less. If you find particular segments that suffer from insert contention (the symptoms are lots of buffer busy waits and ITL waits), then move those few segments that need the ASSM treatment into specially-created ASSM tablespaces. But don't go doing bulk converts of things that don't need it!
    Third, if you are using OMF and ASM, then that's another situation in which ASSM makes a lot of sense: you're automating everything else anyway, so why not use the automatic 'freelists' mechanism, too?
    Fourth, ASSM is about how a table knows where its next insert will take place. Extent Management Local/Uniform is all about how a tablespace allocates chunks of space to segments that need it. Completely different technologies.
    Fifth, I would always use extent management auto, because do you know what are the "right" extent sizes to allocate to tables? No, I didn't think so. Oracle can work it out for you, though, with no detriment to you, your tables or your performance levels.
    Sixth, back on the topic of ASSM, you might find this article useful:
    http://www.dizwell.com/prod/node/541
