ASM Configuration on Solaris

Hi,
I want to configure ASM on a Solaris SPARC 64-bit Oracle server with NetApp storage.
I am a bit confused about when to create the diskgroup using asmca, because there is no ASMLib package available for Solaris. I cannot start the Grid installation, because during installation it asks for disks (which I want to take from the shared-storage diskgroup).
Yuvraj

You should read the installation REQUIREMENTS for the platform you are using. You do NOT need to follow some other document to "mark" the devices for ASM; it is not necessary. I have never done this, nor have I ever used asmlib on Linux. It is NOT necessary! Read the Oracle docs.
Have the sysadmin configure the devices to be visible on all nodes (stand-alone or cluster).
Make sure you use format (fdisk applies to Solaris x86) and create a partition that excludes the first 2 cylinders (0-1); your data partition should start at cylinder 2. If you do not, ASM will overwrite the Solaris VTOC for the device, rendering it unusable until you correctly repartition it. The device should have only 2 partitions: partition 1 is cylinders 0-1, and partition 2 is cylinders 2-<last>. All ASM disks in a given DISKGROUP need to be EXACTLY the same size. You can have multiple diskgroups with different device sizes, but that just makes managing your devices more complicated. A word of warning: in 11gR2, if you use a diskgroup with EXTERNAL REDUNDANCY, you will get only 1 voting file. The recommendation for a CRS diskgroup is 3 x 2G devices for OCR/VOTING with NORMAL redundancy, which creates 3 voting files.
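For the partitioning step above, on Solaris the current slice layout can be read with prtvtoc and a new one applied with fmthard. The sketch below is illustrative only: the geometry (24 heads x 848 sectors = 20,352 sectors per cylinder, 14,087 cylinders) and the slice numbers are assumptions, so derive the real numbers from prtvtoc on your own devices before applying anything.

```shell
# Illustrative fmthard input -- NOT real values for your disks.
# Format per line:  slice  tag  flag  first_sector  sector_count
# Assumed geometry: 24 heads x 848 sectors = 20,352 sectors/cylinder.
#
# slice 0: cylinders 0-1 (2 cyl = 40,704 sectors; protects the VTOC)
# slice 1: cylinders 2 through the last (the slice handed to ASM)
cat > /tmp/asm_vtoc.layout <<'EOF'
0  0  00         0      40704
1  0  00     40704  286657920
EOF
# Inspect the current layout:  prtvtoc /dev/rdsk/cXtYdZs2
# Apply (DESTRUCTIVE!):        fmthard -s /tmp/asm_vtoc.layout /dev/rdsk/cXtYdZs2
```

Here 286,657,920 = 14,085 remaining cylinders x 20,352 sectors; recompute both numbers for your actual geometry.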
Set ownership of grid:oinstall (or whatever:whatevergroup) on partition 2 of these devices only.
Next, make sure the grid user and the oracle RDBMS user can both READ and WRITE these partitions (the RDBMS user must be a member of the oinstall/whatever group).
During the install, make sure you set ASM_DISKSTRING to point to these devices,
e.g. /dev/whatever*p2. The installer will then mark the devices properly and create the diskgroup(s).
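As a rough sketch of the ownership/permission prep: the device names below are placeholders, and ordinary files under /tmp stand in for the real /dev/rdsk character devices so the commands can be tried safely. On a real system you would chown/chmod the data slice of each LUN as root.

```shell
# Stand-in files for the ASM data slices (the real targets would be
# /dev/rdsk/...s2-style character devices, owned by grid:oinstall).
mkdir -p /tmp/asmdemo
for d in disk1s2 disk2s2 disk3s2; do
    touch "/tmp/asmdemo/$d"
    # on the real system (as root):  chown grid:oinstall /dev/rdsk/<device>s2
    chmod 660 "/tmp/asmdemo/$d"   # rw for owner and group, nothing for others
done
ls -l /tmp/asmdemo
# The installer's ASM_DISKSTRING then matches the data slices, e.g.:
#   /dev/rdsk/c2t*d0s2
```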
I don't know why so many people make this so hard. It is NOT that hard!
[Frame of reference:  I have installed > 75 10g-11gR2 clusters.]

Similar Messages

  • Oracle ASM Configuration on Solaris Cluster - Oracle 11.2.0.3

    Hi,
    I want some clarifications!
    I need to set up an active/passive cluster on Solaris 10 SPARC; the HA software is Solaris Cluster, with Oracle 11.2.0.3.
    1) I understand that "single-instance Oracle ASM is not supported with Oracle 11g Release 2", so we need to go for clustered ASM - is it required to use the RAC framework in this case?
    2) If I use the RAC framework, do I need a license for RAC?
    I am new to Oracle; any help is appreciated.
    Regards,
    Shashank


  • ASM Configuration

    Hi Everybody,
    My question is:
    Is it possible to configure ASM on RAC using a shared file system instead of raw devices? If so, how?
    Environment:
    OS: Sun SPARC Solaris 10
    SUN Servers:V490
    Number of Nodes: 2
    Storage Server:3150
    Switches:SAN
    RDBMS: Oracle 10.2.0 RAC
    Clusterware: configured with raw devices for the OCR and voting disk.
    How do we configure ASM on shared storage for database files, given that we are not using a third-party cluster?
    What are the possible storage options for ASM for this requirement?
    I have seen in a document that ASM configured on raw devices is very difficult to manage and administer.
    In both cases, kindly provide the steps for doing it with DBCA.
    Is it necessary to put the recovery area on ASM?
    Awaiting a favourable reply from the RAC experts.

    Hi,
    Steve Karam has some great notes on this:
    One thing to remember is that ASM is not RAID. Oracle portrays ASM as a Volume Manager, filesystem, miracle, whatever you would like to call it, but in reality it is no more than extent management and load balancing; it scatters extents across your LUNs (1MB stripe size for datafiles/archivelogs, 128k stripe size for redo/controlfiles). It also provides extent-based mirroring for extra redundancy.
    This benefits us in a couple ways. First, remember that your OS, HBA, or other parts of the host driver stack may have limits per LUN on I/O. Distributing your extents across multiple LUNs with ASM will provide better I/O concurrency by load balancing across them, eliminating this bottleneck.
    Second, carving into multiple LUNs allows multiple ASM volumes. Multiple volumes help us if our hardware has any LUN-based migration utilities for snapshots or cloning.
    Third, you may end up with multiple LUNs if you need to add capacity. ASM allows us to resize a diskgroup on-the-fly AND rebalance our extents evenly at the same time when we add a new LUN. Even if you only start with a single LUN, you may end up with more in the long run.
    Fourth, because an ASM diskgroup is not true RAID, you are able to use it to stripe across volumes. This means that in a SAN with 3 trays, you can carve a LUN from each tray and use it to form a single ASM diskgroup. This further distributes your storage and reduces throughput bottlenecks.
    I have not seen any tried and true formula for the number of LUNs per ASM diskgroup, but you can calculate it based on your throughput per capacity. Make sure the LUNs provide maximum and equivalent I/O operations per second per gigabyte.
    http://www.dba-oracle.com/t_disk_lun_san_nas_performance_bottleneck.htm
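    As a toy illustration of the load-balancing point above (pure arithmetic, nothing here talks to real ASM): scattering 1MB extents round-robin across three LUNs puts exactly a third of the I/O targets on each device.

```shell
# Simulate round-robin placement of 300 x 1MB extents over 3 LUNs.
awk 'BEGIN {
    luns = 3; extents = 300
    for (i = 0; i < extents; i++)
        count[i % luns]++               # extent i lands on LUN (i mod 3)
    for (l = 0; l < luns; l++)
        printf "LUN%d: %d extents (%d MB)\n", l, count[l], count[l]
}'
# -> each LUN ends up with 100 extents (100 MB)
```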

  • ASM Configurations

    Hi,
    I am new to configuring ASM.
    Can you tell me how to start from scratch: what kind of disks are required, which RAID level should be configured, and which tasks belong to the sysadmin versus the DBA?
    My operating system is Solaris 9 x86 and the database will be 10gR2.
    We are going to deploy a data-warehouse environment.
    Almost 100 GB of RAM is available (expected 80-100 GB).
    Please help me understand ASM configuration and disk layout, along with the RAID levels.
    Note: we have 600 GB of disk (6 disks of 100 GB each).
    Thanks all.

    Have a look at the ASM best-practices documents:
    www.oracle.com/technology/products/database/asm
    www.oracle.com/technology/products/database/asm/pdf/asm_10gr2_bptwp_sept05.pdf
    www.oracle.com/technology/products/database/asm/pdf/asm_bestpractices_9_7.pdf

  • How to reuse the same disk (partition) for next ASM configuration?

    Hello All,
    I had successfully installed and configured an ASM instance once, but I need to do a reinstallation, and I would like to use the same disk (partition) as in the previous ASM configuration.
    The disk path used was: /dev/rdsk/c1t3d0s7
    I must regretfully admit that I may not have used the right ASM de-installation procedure: I ran the installer and opted to de-install products, which removed the ASM home. Now, when I run the new ASM installation, everything goes fine until the screen where I should select disks for the ASM configuration. Unfortunately, /dev/rdsk/c1t3d0s7 shows status > MEMBER < and I can't select it for the new installation.
    I would really appreciate it if anyone could let me know how to change the status from MEMBER to CANDIDATE.
    Thank you for your time.
    DanielD

    Overwrite the first several MB of the partition with /dev/zero.
    So, use this command:
    # dd if=/dev/zero of=/dev/rdsk/c1t3d0s7 bs=1024k count=10
    That clears out the ASM header on the disk so that it looks like a clean disk again.
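    The effect can be tried safely against an ordinary file standing in for the device (the path below is a scratch file, not a real disk). Note the conv=notrunc, which the file-based demo needs so dd doesn't truncate the file; a raw device does not need it.

```shell
# 20MB scratch "disk" filled with non-zero data, standing in for an ASM header.
fake=/tmp/fake_asm_disk
dd if=/dev/urandom of="$fake" bs=1024k count=20 2>/dev/null
# Zero the first 10MB, just as the dd command above does to the real slice.
dd if=/dev/zero of="$fake" bs=1024k count=10 conv=notrunc 2>/dev/null
# Verify: the first 10MB now contain only zero bytes.
nonzero=$(head -c 10485760 "$fake" | tr -d '\0' | wc -c)
echo "non-zero bytes in header area: $nonzero"
```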
    -- John

  • Database is not created on ASM configured Disks

    hi,
    I have installed Oracle 11gR2 Grid Infrastructure and Automatic Storage Management for a standalone server as the "grid" user. After that I installed the Oracle Database 11gR1 software as the "oracle" user. No problem occurred during the installation of this software. The OS is OEL 5.4.
    The ASM-configured disks are DISK1, DISK2 and DISK3, and the diskgroup name is "+DATA".
    The problem is that when I create a database and choose the ASM disks for database storage, at 27% of the creation process an error occurs: "ORA-03114: not connected to ORACLE". But when I choose File System for database storage, the database is created successfully.
    Sir, I am new to Oracle Grid Infrastructure and ASM; please help me.

    Hi buddy,
    Are there errors in the alert.log of the RDBMS instance, and what about trace files? Have any been generated?
    Regards,
    Cerreia

  • Oracle database 10g RAC and ASM configuration

    Hi all,
    I want to ask everybody about Oracle 10g RAC and ASM configuration. We plan to migrate from 9i to Oracle 10g, and before we begin configuring Oracle we have to decide which configuration is best.
    Our materials are below:
    Hardware: RP 3440 (HP)
    OS: HP-UX 11i v1
    Storage: EVA 4000 (EVA disk group)
    The problem is:
    Our supplier recommends HP Serviceguard + HP Serviceguard Extension for RAC + RAC with raw devices as the configuration.
    But we want to use Oracle Clusterware + RAC + ASM.
    My question is whether anybody knows which is the best configuration; we want to use ASM.
    Can we use HP Serviceguard together with ASM?
    Any documentation or links explaining Oracle RAC and ASM configuration would be appreciated.
    Thanks for your help.
    Regards.
    raitsarevo

    Hello,
    there's no extra RAC software package; the option is only offered if one of the supported cluster layers for the respective OS has been installed beforehand.
    10.1.0.3 looks like a complete redesign, but it is still a patch: you have to install 10.1.0.2 first.

  • Cloning the asm configured database

    Hi all DBA guys, have a nice day, all of you!
    How do I clone an ASM-configured database?
    Regards
    S.Azar

    Hi,
    How do I clone an ASM-configured database?
    The same way you would clone a database that is not on ASM.
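In practice that usually means RMAN DUPLICATE, which works the same whether the datafiles live in ASM or on a filesystem. A minimal sketch follows; "srcdb" and "clonedb" are placeholder connect identifiers, and the auxiliary instance must already be started NOMOUNT per the RMAN documentation for your release.

```
# connect RMAN to the source (TARGET) and the clone (AUXILIARY) instance:
#   rman TARGET sys@srcdb AUXILIARY sys@clonedb
DUPLICATE TARGET DATABASE TO clonedb;
```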
    Nice day to you too. :)

  • Getting error with sudo configuration on solaris 10

    Valuable Member,
    I have an issue with the sudo configuration on Solaris 10 (SPARC). I installed gcc and libiconv and then started building the sudo package; everything went well up to and including "./configure", but when I run "make" it gives lots of errors. I am confused; please tell me what exactly I have to do, or what I am missing.
    ---------------------------ERROR CUT-------------------------------------
    # make
    gcc -c -I. -I. -I/tmp/rsa -O2 -D__EXTENSIONS__ -D_PATH_SUDOERS=\"/etc/sudoers\" -D_PATH_SUDOERS_TMP=\"/etc/sudoers.tmp\" -DSUDOERS_UID=0 -DSUDOERS_GID=0 -DSUDOERS_MODE=0440 check.c
    In file included from /usr/include/sys/wait.h:24,
    from /usr/include/stdlib.h:22,
    from check.c:31:
    /usr/include/sys/siginfo.h:259: error: parse error before "ctid_t"
    /usr/include/sys/siginfo.h:292: error: parse error before '}' token
    /usr/include/sys/siginfo.h:294: error: parse error before '}' token
    /usr/include/sys/siginfo.h:390: error: parse error before "ctid_t"
    /usr/include/sys/siginfo.h:392: error: conflicting types for `__proc'
    /usr/include/sys/siginfo.h:261: error: previous declaration of `__proc'
    /usr/include/sys/siginfo.h:398: error: conflicting types for `__fault'
    /usr/include/sys/siginfo.h:267: error: previous declaration of `__fault'
    /usr/include/sys/siginfo.h:404: error: conflicting types for `__file'
    /usr/include/sys/siginfo.h:273: error: previous declaration of `__file'
    /usr/include/sys/siginfo.h:420: error: conflicting types for `__prof'
    /usr/include/sys/siginfo.h:287: error: previous declaration of `__prof'
    /usr/include/sys/siginfo.h:424: error: conflicting types for `__rctl'
    /usr/include/sys/siginfo.h:291: error: previous declaration of `__rctl'
    /usr/include/sys/siginfo.h:426: error: parse error before '}' token
    /usr/include/sys/siginfo.h:428: error: parse error before '}' token
    /usr/include/sys/siginfo.h:432: error: parse error before "k_siginfo_t"
    /usr/include/sys/siginfo.h:437: error: parse error before '}' token
    In file included from /usr/include/sys/procset.h:24,
    from /usr/include/sys/wait.h:25,
    from /usr/include/stdlib.h:22,
    from check.c:31:
    /usr/include/sys/signal.h:85: error: parse error before "siginfo_t"
    In file included from /usr/include/stdlib.h:22,
    from check.c:31:
    /usr/include/sys/wait.h:86: error: parse error before "siginfo_t"
    In file included from check.c:55:
    /usr/include/signal.h:111: error: parse error before "siginfo_t"
    /usr/include/signal.h:113: error: parse error before "siginfo_t"
    *** Error code 1
    make: Fatal error: Command failed for target `check.o'
    ---------------------END ERROR---------------------------
    ///Thanks
    Mohammed Tanvir

    How did you install gcc? I don't think it's working correctly.
    There should be a copy of gcc installed with Solaris 10 in /usr/sfw/bin.
    I suggest you use that one instead.

  • Silent install- ASM Configuration Assistant fails

    I am trying to do a silent install with a response file. At the end of the install it says I need to run configToolAllCommands, which I do, but the ASM Configuration Assistant fails. I assume I need to pass in the asmsnmp password, but how do I do that? I see I can specify ResponseFile=<fn> when I run configToolAllCommands, but how do I set the ASM passwords in a response file? I tried creating a response file that included asmsnmpPassword=mypassword and sysAsmPassword=mypassword, but it didn't like that. Does anyone know how to pass in the password? I think this configuration assistant is what sets up the asmsnmp user, because that is one of my problems after the install completes: I don't have that user set up. I need all of this to be automated.
    thanks for any ideas.

    Well, I did not find any documentation on how to create a response file for configToolAllCommands; all I found using the -help option was that I could include a ResponseFile=<fn> parameter. So all I did was put 2 lines in the file, sysAsmPassword=mypassword and asmsnmpPassword=pAssword, and that doesn't work...
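For 11.2, the Grid Infrastructure installation guide describes a properties-style password response file for configToolAllCommands; the key names below are quoted from memory and should be verified against the guide for your exact release before relying on them:

```
# cfgrsp.properties -- keep it mode 600, it holds passwords
oracle.assistants.asm|S_ASMPASSWORD=MySysAsmPw1
oracle.assistants.asm|S_ASMMONITORPASSWORD=MyAsmsnmpPw1
```

Then run $GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/full/path/cfgrsp.properties. If those key names are right for your release, the S_ASMMONITORPASSWORD entry is what creates the asmsnmp user.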

  • Need to format the old ASM disks on solaris.10.

    Hello Gurus,
    We uninstalled ASM on Solaris, but while installing ASM again it says the mount point is already used by another instance, even though no database or ASM instance is running (this is a new server). So we need to use the dd command, or reformat the raw devices that already exist and were used by the old ASM instance. Here is the confusion:
    there are 6 LUNs presented to this host for ASM, and they are not used by anyone...
    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
    1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
    2. c2t60050768018E82BE98000000000007B2d0 <IBM-2145-0000-150.00GB>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b2
    3. c2t60050768018E82BE98000000000007B3d0 <IBM-2145-0000 cyl 44798 alt 2 hd 64 sec 256>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b3
    4. c2t60050768018E82BE98000000000007B4d0 <IBM-2145-0000 cyl 19198 alt 2 hd 64 sec 256>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b4
    5. c2t60050768018E82BE98000000000007B5d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b5
    6. c2t60050768018E82BE98000000000007B6d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b6
    7. c2t60050768018E82BE98000000000007B7d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b7
    But the thing is, when we list the raw devices with ls -ltr in /dev/rdsk, all the disks are owned by root, not by oracle:dba or oinstall.
    root@b2dslbmom3dbb3301 [dev/rdsk]
    # ls -ltr
    total 144
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:h,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:h,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:g,raw
    lrwxrwxrwx 1 root root 68 Jun 13 15:34 c2t60050768018E82BE98000000000007B2d0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:wd,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:47 c2t60050768018E82BE98000000000007B3d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:48 c2t60050768018E82BE98000000000007B4d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:49 c2t60050768018E82BE98000000000007B5d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:51 c2t60050768018E82BE98000000000007B6d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:53 c2t60050768018E82BE98000000000007B7d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:h,raw
    So we need to know where the raw devices for Oracle are located, in order to run the dd command and remove the old ASM header from them and start a fresh installation.
    But when we use the command given to us by the Unix admin (who no longer works here), we are able to see the following information:
    root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
    crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
    We also have the mknod information, with the major and minor numbers we used for making the soft links from the raw devices to ASM:
    cd /dev/oraasm/
    /usr/sbin/mknod asm_disk_03 c 118 232
    /usr/sbin/mknod asm_disk_02 c 118 224
    /usr/sbin/mknod asm_disk_01 c 118 216
    /usr/sbin/mknod asm_ocrvote_03 c 118 208
    /usr/sbin/mknod asm_ocrvote_02 c 118 200
    /usr/sbin/mknod asm_ocrvote_01 c 118 192
    The final thing is that we need to find out where the above configuration is located on the host; I think this method of presenting raw devices is different from the normal method on Solaris?
    Please help me to proceed with my installation. Thanks in advance.
    I am really confused about where the following command gets the Oracle raw-device information from, since there is no such info in /dev/rdsk (the OS is Solaris 10):
    root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
    please help....

    Hi Winner;
    For your issue I suggest you close this thread (marking it as answered) and repost it under Forum Home » Grid Computing » Automatic Storage Management, where you will get a quicker response.
    Regards,
    Helios

  • Backup to disk storage, from ASM enabled DB, Solaris 10,11gRel1. RAC

    Hi,
    I am struggling to complete the backup of a database within an appropriate time; it keeps running for days. I have tried with 7 channels and am now trying with 3 channels.
    The database is a 2-node RAC cluster, each node running "Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production" on Solaris 10, on a SPARC-Enterprise-T5220 with 6 CPUs and 32GB RAM. ASM is running on LUNs taken from an IBM XIV.
    RMAN> SHOW ALL;
    RMAN configuration parameters for database with db_unique_name EPMPRD are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u998/backups/epmprd/rman_backup/ora_cf%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 3 BACKUP TYPE TO COPY;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u998/backups/oepmprd/rman_backup/%d_%u_%p_%N_fileno:%f.dbf';
    CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/u998/backups/epmprd/rman_backup/ora_df_%d_%T_%s_%c_%p' CONNECT '*';
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '/u998/backups/epmprd/rman_backup/ora_df_%d_%T_%s_%c_%p' CONNECT '*';
    CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT '/u998/backups/epmprd/rman_backup/ora_df_%d_%T_%s_%c_%p' CONNECT '*';
    CONFIGURE MAXSETSIZE TO 4296 M;
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    SQL> SELECT filename, status, bytes FROM v$block_change_tracking;
    FILENAME STATUS BYTES
    +DGA1/epmprd/block_change.dbf            ENABLED      22085632
    RMAN script running under crontab has:
    CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '$DATA_FILE_DIR/ora_df_%d_%T_%s_%c_%p' connect = '$CONNECT_TARGET1';
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '$DATA_FILE_DIR/ora_df_%d_%T_%s_%c_%p' connect = '$CONNECT_TARGET1';
    CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT '$DATA_FILE_DIR/ora_df_%d_%T_%s_%c_%p' connect = '$CONNECT_TARGET1';
    BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE FILESPERSET 1;
    BACKUP ARCHIVELOG ALL not backed up FORMAT '$DATA_FILE_DIR/ora_arch_%d_%s_%T_%c_%p';
    DELETE FORCE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-7' backed up 2 times to disk;
    BACKUP SPFILE format '$DATA_FILE_DIR/spfile_%d_%T_%s_%p';
    where CONNECT_TARGET1 points to epmprd1 (instance 1 of the RAC, i.e. instance 1 is used as the backup node)
    SQL> @tbs_usage.sql
    TABLESPACE_NAME SUM_SPACE(M) SUM_BLOCKS USED_SPACE(M) USED_RATE(%) FREE_SPACE(M)
    SYSTEM 2000 256000 1074.94 53.75 925.06
    CALC 250 32000 5.5 2.2 244.5
    FDMCOMMA2D 250 32000 .06 .02 249.94
    FDMXCHANGING1I 250 32000 .06 .02 249.94
    FDMXCHANGING1D 250 32000 .06 .02 249.94
    FDMXCHANGING3I 250 32000 .06 .02 249.94
    FDMXCHANGING4D 451.56 57800 421.06 93.25 30.5
    USERS 2000 256000 381.37 19.07 1618.63
    BIPLUS 250 32000 81.44 32.58 168.56
    FDMCOMMAI 250 32000 .06 .02 249.94
    FDMXCHANGING2I 250 32000 .06 .02 249.94
    UNDOTBS2 4279 547712 18.44 .43 4260.56
    EPMA 250 32000 57.81 23.12 192.19
    EPMAINT 250 32000 .06 .02 249.94
    HFM 50000 6400000 18566.75 37.13 31433.25
    HSS 250 32000 25.06 10.02 224.94
    FDMSTAT 250 32000 .06 .02 249.94
    FDMCOMMAD 250 32000 .06 .02 249.94
    FDMCOMMA2I 250 32000 .06 .02 249.94
    FDMXCHANGING2D 250 32000 .06 .02 249.94
    FDMXCHANGING3D 250 32000 .06 .02 249.94
    SYSAUX 2000 256000 1208.37 60.42 791.63
    FDMXCHANGING4I 250 32000 .06 .02 249.94
    UNDOTBS1 2000 256000 26.56 1.33 1973.44
    TEMP 10284 1316352 10284 100 0
    25 rows selected.
    SQL>select sum(bytes/1024)/1024/1024 from dba_segments;
    SUM(BYTES/1024)/1024/1024
    40.0876465
    Please share what I can do to optimize the backups. Your suggestions are highly appreciated.
    regards,
    Anjum
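Since block change tracking is already enabled (see the v$block_change_tracking output above), one common way to shrink the backup window is to take a level 0 only periodically and fast incremental level 1 backups in between, which read only changed blocks. A hedged sketch, reusing the format strings and paths shown above:

```shell
# Sketch only: a daily BCT-driven incremental level 1 backup, to complement
# an occasional (e.g. weekly) level 0. Paths mirror the configuration above.
rman target / <<'EOF'
RUN {
  # Reads only blocks flagged in the block change tracking file.
  BACKUP AS COMPRESSED BACKUPSET
    INCREMENTAL LEVEL 1
    DATABASE
    FORMAT '/u998/backups/epmprd/rman_backup/ora_df_%d_%T_%s_%c_%p';
  BACKUP ARCHIVELOG ALL NOT BACKED UP
    FORMAT '/u998/backups/epmprd/rman_backup/ora_arch_%d_%s_%T_%c_%p';
}
EOF
```

Separately, FILESPERSET 1 combined with COMPRESSED BACKUPSET means each large datafile (e.g. in the 50 GB HFM tablespace) is compressed by a single channel end to end; raising FILESPERSET, or (on 11g) using SECTION SIZE for the big datafiles, would let the three channels share that work.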

    Hi,
    /u998 is a SAN mountpoint taken from IBM XIV.
    oepmprd@basfisprddatg01$ cat /etc/vfstab | grep -i u998
    /dev/md/dsk/d200 /dev/md/rdsk/d200 /u998 ufs 2 yes logging
    I have done some testing and following are results.
    1.
    ASMCMD> cp hfm22.dbf /u998/backups/epmprd/rman_backup
    copying +DGA1/epmprd/hfm22.dbf -> /u998/backups/epmprd/rman_backup/hfm22.dbf
    oepmprd@basfisprddatg01$ du -sh hfm22.dbf
    2.0G hfm22.dbf
    Time taken: 29 seconds
    ================================================
    2.
    ASMCMD> cp group_5.600.766282841 /u998/backups/epmprd/rman_backup
    copying +FRA/epmprd/ONLINELOG/group_5.600.766282841 -> /u998/backups/epmprd/rman_backup/group_5.600.766282841
    oepmprd@basfisprddatg01$ du -sh group_5.600.766282841
    500M group_5.600.766282841
    Time taken: 11 seconds
    ================================================
    3.
    ASMCMD> cp temp01.dbf /u998/backups/epmprd/rman_backup
    copying +DGA1/epmprd/temp01.dbf -> /u998/backups/epmprd/rman_backup/temp01.dbf
    oepmprd@basfisprddatg01$ du -sh temp01.dbf
    10.0G temp01.dbf
    Time taken: 03 mins, 16 seconds
    =================================================
    Moreover, I copied a file from /u998 to /tmp and it was quick:
    grid@basfisprddatg01$ pwd
    /u998/backups/epmprd/rman_backup
    grid@basfisprddatg01$ du -sh ora_df_EPMPRD_20111210_7783_1_1
    266M ora_df_EPMPRD_20111210_7783_1_1
    grid@basfisprddatg01$ cp ora_df_EPMPRD_20111210_7783_1_1 /tmp/
    It took 2-3 seconds.
    regards,
    Anjum
    Edited by: Anjum Shehzad on Dec 12, 2011 11:18 AM
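The timings above work out to roughly 70 MB/s for the 2 GB file and about 52 MB/s for the 10 GB temp01.dbf, which points at the target filesystem rather than ASM as the likely bottleneck. One way to confirm is a plain sequential-write test against the backup mountpoint. A rough sketch (TARGET_DIR defaults to /tmp so it is runnable anywhere; point it at the /u998 backup directory to test the SAN mount):

```shell
# Rough sequential-write throughput check on a filesystem.
TARGET_DIR=${TARGET_DIR:-/tmp}
TESTFILE="$TARGET_DIR/dd_throughput_test.tmp"
# Write 64 MB of zeros in 1 MB blocks; time(1) reports the elapsed duration.
time dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64
ls -l "$TESTFILE"
rm -f "$TESTFILE"
```

Dividing 64 MB by the reported elapsed time gives a ballpark write rate to compare against what the copies above achieved.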

  • Cannot see EMC PowerPath devices from ASM instance on Solaris 10 x86

    Hello,
    I'm trying to use EMC PowerPath devices as ASM disks, but cannot see any usable disk
    from the ASM instance.
    My configuration:
    Hardware: Sun X4600M2
    Solaris 10 8/07 x86_64
    EMC PowerPath 5.0.2_b030
    Oracle 10.2.0.1.0
    The PP devices are /dev/rdsk/emcpower* which are softlinks to the corresponding /devices/pseudo/emcp*.
    Since I know that these devices must have suitable permissions, I've set ownership and
    permission of e.g. /devices/pseudo/emcp@1:a,raw to oracle:dba and 660. The ASM instance's
    init.ora file has *.asm_diskstring='/dev/rdsk/emcpower*', so I assume it will scan all
    /dev/rdsk/emcpower* softlinks and fetch the ones with sufficient permissions. But nonetheless
    a 'select count(*) from v$asm_disk' gives me 0, i.e. nothing.
    I've read some recommendations to use slice 6 of the disk with partition type 'usr', but
    this seems quite arbitrary to me. I would like to simply use slice (= partition) 0, starting from
    cylinder 1.
    I would be very glad for any solution.
    Best regards
    Udo

    Problem solved.
    After some tests with plain files instead of devices (via the undocumented parameter
    _asm_allow_only_raw_disks=false, in order to check basic ASM functionality),
    I finally created new device nodes just for oracle (and set _asm_allow_only_raw_disks back to true).
    original EMC PowerPath pseudo devices (e.g.):
    -bash-3.00$ ls -l /devices/pseudo/emcp@13:a,raw
    crw------- 1 root sys 215, 832 Dec 10 15:29 /devices/pseudo/emcp@13:a,raw
    corresponding new character device with proper ownership and permissions:
    -bash-3.00$ ls -l /u01/app/ora-dev/raw/emcp13a
    crw-rw---- 1 oracle dba 215, 832 Dec 10 14:56 /u01/app/ora-dev/raw/emcp13a
    With *.asm_diskstring='/u01/app/ora-dev/raw/emcp*' in ASM-instance's init.ora,
    I was able to see the devices via select * from v$asm_disk and could create the
    ASM diskgroups.
    Regards
    Udo
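The "new device nodes" step described above is typically done with mknod, copying the major/minor numbers from the original PowerPath pseudo device. A hedged sketch using the numbers shown in the listings (215, 832; run as root):

```shell
# Sketch only: recreate the PowerPath character device under a path that
# oracle can own. Major/minor (215, 832) come from the ls -l output above.
mknod /u01/app/ora-dev/raw/emcp13a c 215 832
chown oracle:dba /u01/app/ora-dev/raw/emcp13a
chmod 660 /u01/app/ora-dev/raw/emcp13a
```

With asm_diskstring pointing at /u01/app/ora-dev/raw/emcp*, the instance opens these nodes as the oracle user. One caveat: if PowerPath ever renumbers the pseudo devices, the duplicated nodes must be recreated to match the new major/minor pairs.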

  • ASM Configuration on oracle 10.2 RAC

    hi all
    os : sun sparc solaris 10
    db : 10.2 RAC database
    Clusterware is configured using raw devices.
    We have no option other than configuring ASM on raw devices.
    Could you tell me the steps to do it manually as well as using dbca?

    Hi
    I did mine last week, though on HP-UX rather than Sun and not with raw devices, but I think the steps are much the same:
    1- Launch dbca
    2- Choose Configure ASM
    3- Select all the nodes
    4- Choose Create initialization parameter file, click on ASM parameters, and in the asm_diskstring parameter specify the disks you want to use in the ASM group, e.g. vg01/vol1,vg01/vol2, ... Don't specify the disk group first
    5- Create a new disk group
    6- Choose the candidate disks for the group
    7- Finished
    regards raitsarevo
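For the "manually" part of the question, the rough equivalent without dbca is to set the disk string and create the diskgroup from the ASM instance in SQL*Plus. A hedged sketch only (the SID, disk string, and device paths below are placeholders; substitute your own raw-device slices, and note a 10.2 ASM instance may be running from a pfile rather than an spfile):

```shell
# Sketch: manual diskgroup creation against the local ASM instance.
# +ASM1 and the /dev/rdsk paths are example values, not from this thread.
export ORACLE_SID=+ASM1
sqlplus / as sysdba <<'EOF'
-- Point discovery at the candidate raw devices first.
ALTER SYSTEM SET asm_diskstring = '/dev/rdsk/c*s6' SCOPE=SPFILE;
-- Disks must show up in V$ASM_DISK before CREATE DISKGROUP will accept them.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  DISK '/dev/rdsk/c2t0d1s6', '/dev/rdsk/c2t0d2s6';
EOF
```

Checking SELECT path, header_status FROM v$asm_disk first is a good way to confirm the instance can actually see (and has permission on) the candidate devices.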
