Expand ASM Diskgroup on RAC 11.2 online

Hi,
I am currently in search of an article or document which describes the following workflow:
Expand ASM Diskgroup on RAC 11.2 online:
Resize physical LUN (no details needed here)
Resize multipath device
Resize partition
Resize ASM disk
Resize ASM diskgroup (I don't know if this is necessary?)
It is Oracle Enterprise Linux 5.5 with 11.2 Clusterware. We use iSCSI disks, ASMLib and multipath (Linux -> multipathd).
The main goal is to do all of this online.
Does anyone know of such an article, or has other tips regarding this scenario?
Kind Regards,
Richi

The links below may help.
http://www.hds.com/assets/pdf/hitachi-dynamic-provisioning-software-best-practices-guide-oracle.pdf
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101586
http://www.oracle.com/technetwork/database/oracle-automatic-storage-management-132797.pdf
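Not a full article, but the flow listed above can be sketched as a script. This is a dry run that only prints the plan; the device, map and diskgroup names are placeholders, on OEL 5.5 the partition step may need an fdisk delete/recreate instead of parted, and scandisks must run on every node:

```shell
# Dry-run sketch of the online grow, one layer at a time.
# Replace the echo with "$@" to actually execute the commands.
run() { echo "would run: $*"; }

run iscsiadm -m node -R                          # 1. rescan iSCSI sessions after the LUN grow
run multipathd -k'resize map mpath1'             # 2. propagate the new size to the multipath map
run parted /dev/mapper/mpath1 resizepart 1 100%  # 3. grow the partition (newer parted; else fdisk)
run partprobe /dev/mapper/mpath1                 #    re-read the partition table
run oracleasm scandisks                          # 4. make ASMLib re-read the headers (all nodes)
run sqlplus -s / as sysasm                       # 5. then: ALTER DISKGROUP data RESIZE ALL;
```

Step 5 (RESIZE ALL, or RESIZE DISK for a single disk) is what actually grows the ASM disks; there is no separate "resize diskgroup" step, since the diskgroup grows when its disks do.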
Ta
Jag

Similar Messages

  • How to create a new ASM Diskgroup in Oracle 10g RAC?

    Hi,
    Our env is Oracle 10g R2 RAC on HP-UX. I want to create a new ASM Diskgroup. Please let me know if the following steps are ok to create a new ASM Diskgroup.
    1. Ensure the new Disk is visible in both ASM instances in RAC (v$asm_disk) and the header_status is 'CANDIDATE'
    2. From Node 1 ASM Instance issue the create diskgroup command.
    SQL> create diskgroup DATA2 external redundancy disk '/dev/rdsk/c4t0d5';
    3. Query v$asm_diskgroup and make sure the Diskgroup is created.
    4. Mount the DATA2 Diskgroup from Node 2 ASM Instance.
    5. Query v$asm_diskgroup and make sure the Diskgroup is visible from Node2 ASM instance.
    6. Ensure the header_status is 'MEMBER'.
    Rgds,

Correct.
Instead of using the device file '/dev/rdsk/c4t0d5' you can create an alternate device file using mknod, called "asm_disk_xg" for example.
    check here: http://download.oracle.com/docs/cd/B19306_01/install.102/b14202/storage.htm#CDEECIHI
    hth
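A sketch of that mknod step (HP-UX style; the major/minor numbers below are placeholders, read the real ones from ls -lL first). Printed rather than executed, since mknod needs root:

```shell
# Alternate device file for the ASM disk under a stable, descriptive name.
# Major/minor numbers here are examples only.
steps='
ls -lL /dev/rdsk/c4t0d5                 # note the major and minor numbers
mknod /dev/asm_disk_xg c 188 0x040005   # same major/minor, stable name
chown oracle:dba /dev/asm_disk_xg
chmod 660 /dev/asm_disk_xg'
printf '%s\n' "$steps"
```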

  • Question: 10gR2 database can not see the 11gR2 ASM diskgroup?

    Hi there,
    env:
    uname -rm
    2.6.18-92.1.22.el5xen x86_64
    Single server(non-RAC)
Note: we don't want to upgrade the 10gR2 database to 11gR2 yet, but we created the 11gR2 ASM, then an 11gR2 database on ASM, and plan to migrate the datafiles of the 10gR2 database to the 11gR2 ASM.
    1. oracle 10gR2 installed first version: 10.2.0.3.0
    2. then install 11gR2 Grid Infrastructure, and created ASM (version 11gr2)
    $ sqlplus / as sysasm
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Oct 19 10:30:56 2010
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Automatic Storage Management option
    SQL> col name form a15
    SQL> col COMPATIBILITY form a15
    SQL> col DATABASE_COMPATIBILITY form a15
    SQL> l
    1* select name , STATE, COMPATIBILITY, DATABASE_COMPATIBILITY from v$asm_diskgroup
    SQL> /
NAME            STATE        COMPATIBILITY   DATABASE_COMPAT
ORCL_DATA1      MOUNTED      11.2.0.0.0      10.1.0.0.0
ORA_DATA        MOUNTED      10.1.0.0.0      10.1.0.0.0
    3. in 10gR2 database
    sqlplus /
    SQL*Plus: Release 10.2.0.3.0 - Production on Tue Oct 19 12:12:31 2010
    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning and Data Mining options
    SQL> select * from v$asm_diskgroup;
    no rows selected
    4. pin the node into css
    # /u01/app/product/11.2.0/grid/bin/crsctl pin css -n mynodename
    CRS-4000: Command Pin failed, or completed with errors.
    Question: 10gR2 database can not see the 11gR2 ASM diskgroup?
    please help
    Thanks
    Scott

    What is the output of
    olsnodes -t -n
Also check the OS error log and the ohasd log to see if you find anything there.
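If the node turns out to be unpinned, the sequence would look roughly like this (node name is a placeholder; the pin must run as root from the Grid home, which may be why CRS-4000 came back). Printed as a plan rather than executed:

```shell
# Pinning a node so a pre-11.2 database can register with the cluster.
plan='
olsnodes -t -n                    # look for Pinned/Unpinned next to the node
crsctl pin css -n mynodename      # as root, from the 11.2 Grid home bin directory
olsnodes -t -n                    # should now report Pinned'
printf '%s\n' "$plan"
```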

  • Find out the devices of an ASM Diskgroup in Oracle Linux

    Hi
I want to drop an ASM diskgroup, and first I want to find out which devices are attached to the diskgroup I am going to drop.
    I am in RAC 10.2.0.4 on Oracle Linux
    When I query with this:
    SELECT name, header_status, path FROM V$ASM_DISK;
    The column path says:
    ORCL:FRA1
    ORCL:FRA2
but what I really want to know is the real OS device that represents each ASM disk.
How can I find out the names of my real devices?
    Thanks

    I thought you were supposed to run this query when connected to the ASM instance, not the regular instance. But I tried it here and it works fine with my regular instances:
    SQL> SELECT name, header_status, path FROM V$ASM_DISK;
    NAME                           HEADER_STATUS PATH
    ASMDG01_0005                   MEMBER        /dev/raw/raw6
    ASMDG01_0004                   MEMBER        /dev/raw/raw5
    ASMDG01_0002                   MEMBER        /dev/raw/raw3
    ASMDG01_0003                   MEMBER        /dev/raw/raw4
    ASMDG01_0000                   MEMBER        /dev/raw/raw1
    ASMDG01_0001                   MEMBER        /dev/raw/raw2
                                   FORMER        /dev/raw/raw7
                                   FORMER        /dev/raw/raw8
                               FORMER        /dev/raw/raw9
Edited by: marcusrangel on Jun 26, 2012 6:05 PM
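oracleasm querydisk can do this mapping, but a generic way (when the path shows ORCL:FRA1 rather than a device) is to read the label straight out of each device header. A sketch, assuming the common ASMLib layout with the "ORCLDISK" marker at byte offset 32 followed immediately by the label; verify with od -c on a known disk, and note the device glob is only an example:

```shell
# Read the ASMLib label out of a device header (offset 32: "ORCLDISK" + label).
asm_label() {
  dd if="$1" bs=1 skip=32 count=32 2>/dev/null | tr -d '\0' |
    sed -n 's/^ORCLDISK\([A-Za-z0-9_]*\).*/\1/p'
}

# Scan candidate partitions (glob is an example; needs read access to the devices):
for dev in /dev/sd?[0-9]; do
  label=$(asm_label "$dev")
  if [ -n "$label" ]; then
    echo "$dev -> ORCL:$label"
  fi
done
```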

  • "Best" Allocation Unit Size (AU_SIZE) for ASM diskgroups when using NetApp

    We're building a new non-RAC 11.2.0.3 system on x86-64 RHEL 5.7 with ASM diskgroups stored on a NetApp device (don't know the model # since we are not storage admins but can get it if that would be helpful). The system is not a data warehouse--more of a hybrid than pure OLTP or OLAP.
In Oracle® Database Storage Administrator's Guide 11g Release 2 (11.2) E10500-02, Oracle recommends setting the allocation unit (AU) size for a disk group to 4 MB (vs. the default of 1 MB) to enhance performance. However, to take advantage of the au_size benefits, it also says the operating system (OS) I/O size should be set "to the largest possible size."
    http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmdiskgrps.htm
    Since we're using NetApp as the underlying storage, what should we ask our storage and sysadmins (we don't manage the physical storage or the OS) to do:
    * What do they need to confirm and/or set regarding I/O on the Linux side
    * What do they need to confirm and/or set regarding I/O on the NetApp side?
    On some other 11.2.0.2 systems that use ASM diskgroups, I checked v$asm_diskgroup and see we're currently using a 1MB Allocation Unit Size. The diskgroups are on an HP EVA SAN. I don't recall, when creating the diskgroups via asmca, if we were even given an option to change the AU size. We're inclined to go with Oracle's recommendation of 4MB. But we're concerned there may be a mismatch on the OS side (either Redhat or the NetApp device's OS). Would rather "first do no harm" and stick with the default of 1MB before going with 4MB and not knowing the consequences. Also, when we create diskgroups we set Redundancy to External--because we'd like the NetApp device to handle this. Don't know if that matters regarding AU Size.
    Hope this makes sense. Please let me know if there is any other info I can provide.

Thanks Dan. I suspected as much due to the absence of info out there on this particular topic. I hear you on the comparison with deviating from the tried-and-true standard 8K Oracle block size. Probably not worth the hassle. I don't know of any particular justification with this system to bump up the AU size--especially if this is an esoteric and little-used technique. The only justification is official Oracle documentation suggesting the value change. Since it seems you can't change an ASM diskgroup's AU size once you create it, and since we won't have time to benchmark different AU sizes, I would prefer to err on the side of caution--i.e. first do no harm.
    Does anyone out there use something larger than a 1MB AU size? If so, why? And did you benchmark between the standard size and the size you chose? What performance results did you observe?
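For reference, AU_SIZE can only be set at creation time, so the 4 MB choice has to go into the CREATE DISKGROUP statement itself (asmca also exposes it in 11.2 under its advanced options). The disk names below are placeholders; the statement is printed here so it can be piped into sqlplus / as sysasm:

```shell
# Diskgroup with a 4 MB allocation unit, fixed at creation time.
sql="CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK 'ORCL:DATA1', 'ORCL:DATA2', 'ORCL:DATA3', 'ORCL:DATA4'
  ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '11.2';"
printf '%s\n' "$sql"
```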

  • Asm diskgroup really full?

    Hi gurus,
after installing RAC (11.2.0.3, two nodes) on Linux, a warning appears showing a diskgroup almost full:
    SQL>  select group_number,  name, block_size, state, total_mb, free_mb, hot_used_mb from V$ASM_DISKGROUP WHERE NAME='VOL';
GROUP_NUMBER NAME       BLOCK_SIZE STATE      TOTAL_MB   FREE_MB HOT_USED_MB
           4 VOL              4096 MOUNTED       23838       284           0
Correct - it looks almost full.
It is a new RAC database. If I check for files with asmcmd:
asmcmd> cd VOL
asmcmd> ls
asmcmd>
No files appear, so I go to asmca and check:
ASM CLUSTER FILE SYSTEM
ACTIVE MOUNT POINT | STATE   | ALL_MOUNT_POINTS | VOLUME DEVICE   | SIZE (GB) | VOLUME | DISK_GROUP | USED (%)
/backup              MOUNTED   /backup            /dev/asm/vol-24   22.91       VOL      VOL          4.19
It's only 4.19% in use.
What is happening? At the filesystem level there are no files and enough free space, but ASM shows that it is almost full.

ASM and ACFS are related (an ACFS volume is part of an ASM diskgroup), but not the same.
You have an ASM diskgroup with 23 GB of storage.
You created an ACFS volume on it with ~23 GB size.
From the ASM point of view, the diskgroup is almost full, because of the ACFS volume.
From the ACFS point of view, it's empty, because you didn't put anything on that volume yet.
It's similar to LVM:
you create a volume group (the ASM diskgroup in this analogy) with 23 GB size,
then you create a volume (the ACFS volume in this analogy) with nearly 23 GB size.
The volume group is almost "full", while the volume is empty.
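The two views can be compared directly (mount point and names taken from the post above):

```shell
# Diskgroup view vs. filesystem view of the same storage.
checks='
asmcmd lsdg VOL     # ASM view: Free_MB near zero, consumed by the ADVM volume
df -h /backup       # ACFS view: the ~23 GB filesystem itself is only ~4% used'
printf '%s\n' "$checks"
```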

  • Add ASM disk in RAC

    Hello everyone,
    We are on 10gR2 ASM.
    The system admin has provided one disk as follows:
    On node-1 (tphsmsd1)
    /dev/rdisk/disk83 CANDIDATE
    On node-2 (tphsmsd2)
    /dev/rdisk/disk77 CANDIDATE
    I've read that paths of ASM disks in RAC may be different.
When I run "ls -l /dev/rdisk/disk83" on node 2, I get "no such file", and vice versa for the disk on node 1.
I don't understand how ASM handles this. If I add the disk (disk83) on node 1, how is ASM going to get to that disk on node 2?
    thanks
    Jitu Keshwani

    Jitu Keshwani wrote:
    I've read that paths of ASM disks in RAC may be different.
When I give "ls -l /dev/rdisk/disk83" gives no such files and vice-versa for disk on node2.
I don't understand how ASM handles this ? If I add disk (disk83) on node 1 how it's going to get to that disk on node 2 ?
The header label of the disk identifies the ASM disk name and the diskgroup it belongs to. You can hexdump or octal-dump the first 128 bytes of the device as ASCII chars to view the label. The string "ORCLDISK" in the header identifies it as an ASM disk. This is then followed by the ASM disk name and then the ASM diskgroup it belongs to.
    But there's no reason for not having a consistent and static device name layer across all cluster nodes. Each scsi device has a WWID (World Wide ID)- a unique identifier. This means that the kernel (and kernel drivers) can uniquely identify a device.
On Linux, multipath is used to map a logical device name to a WWID - and using the same /etc/multipath.conf configuration file on all cluster nodes ensures that the same device names are used across the cluster.
But from the device file entry you listed, you're likely not running Linux? In that case, depending on the Unix flavour used and the type of cluster storage, there can be similar options to Linux's multipath. One such option is EMC's PowerPath - but that of course is specific to EMC SANs and requires additional licensing fees.
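As a concrete sketch of that multipath.conf approach on Linux: the WWID below is a made-up placeholder (on RHEL 5, something like scsi_id -g -u -s /block/sdb reports the real one), and the same stanza copied to every node yields the same stable /dev/mapper/asm_disk01 name cluster-wide:

```
multipaths {
    multipath {
        wwid   360a98000486e2f66375a2f7362676b39
        alias  asm_disk01
    }
}
```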

  • Unable to Create ASM Diskgroup ORA-15020 and ORA-15018

    Hello Team,
    Unable to create ASM diskgroup with following error:
    SQL> create diskgroup data_asm1 external redundancy disk '/dev/sdf*';
    create diskgroup data_asm1 external redundancy disk '/dev/sdf*'
    ERROR at line 1:
    ORA-15018: diskgroup cannot be created
    ORA-15020: discovered duplicate ASM disk "DATA_ASM1_0000"
    ASM Diskstring
    SQL> show parameter asm_diskstring
    NAME                                 TYPE        VALUE
    asm_diskstring                       string      /dev/oracleasm/disks/DISK*, /dev/sd*
Please let me know how I can solve this issue.
    Regards,
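For what it's worth, ORA-15020 here usually means the same physical disk was discovered twice - once through /dev/oracleasm/disks/DISK* and once through /dev/sd*, since the diskstring matches both paths to the same device. Narrowing asm_diskstring to a single path style avoids the duplicate; a sketch, printed so it can be pasted into the ASM instance (the disk would then be referenced by its ASMLib path in CREATE DISKGROUP):

```shell
# Restrict discovery to one path per disk to avoid duplicate discovery.
sql="ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/disks/DISK*' SCOPE=BOTH SID='*';"
printf '%s\n' "$sql"
```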

    Hi Tobi,
I checked the status of the resource ora.GRID.dg ... it was offline on the second node. I logged on to the second node and checked its status in v$asm_diskgroup: it was dismounted. I mounted it, then tried again to add the newly added diskgroup (+GRID) to the OCR, and voila, it worked....
    ========================================================
    ora.GRID.dg
                   ONLINE  ONLINE       rac3                                       
                   OFFLINE OFFLINE      rac4                                       
    SQL> select group_number,name,state,type from v$asm_diskgroup;
    GROUP_NUMBER NAME                           STATE       TYPE
               1 DATA                           MOUNTED     EXTERN
               0 GRID                           DISMOUNTED
    SQL> alter diskgroup grid mount;
    Diskgroup altered.
    SQL>  select group_number,name,state,type from v$asm_diskgroup;
    GROUP_NUMBER NAME                           STATE       TYPE
               1 DATA                           MOUNTED     EXTERN
               2 GRID                           MOUNTED     EXTERN
    ==============================================
    ora.GRID.dg
                   ONLINE  ONLINE       rac3                                       
                   ONLINE  ONLINE       rac4                                       
    ===============================================
    [root@rac3 bin]# ./ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          3
             Total space (kbytes)     :     262120
             Used space (kbytes)      :       2804
             Available space (kbytes) :     259316
             ID                       :   48011651
             Device/File Name         :      +DATA
                                        Device/File integrity check succeeded
             Device/File Name         :      +grid
                                        Device/File integrity check succeeded
                                        Device/File not configured
                                        Device/File not configured
                                        Device/File not configured
             Cluster registry integrity check succeeded
             Logical corruption check succeeded
    ==========================================================================================
    ASMCMD> lsdg
    State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  EXTERN  N         512   4096  1048576     20472    16263                0           16263              0             N  DATA/
    MOUNTED  EXTERN  N         512   4096  1048576      5114     4751                0            4751              0             N  GRID/
    ======================================================================================================
    Thank you very much, appreciated..
    Thank you Aritra .
    Guys you rock.
    Regards,

  • How to identify ASM DiskGroup attached to which Disks ???

    Hi Guys,
In 11gR2 RAC, how can I identify which ASM diskgroup is attached to which disks? (OS is RHEL 5.4.)
We can list ASM disks with
# oracleasm listdisks
but this command doesn't show the devices assigned to each ASM disk.
Even checking the location of the OCR and voting disks only shows the diskgroup name and not the actual disks:
$ ocrcheck
$ crsctl query css votedisk
(In 10gR2 RAC we made entries in the /etc/udev/rules.d/60-raw.rules file for the raw mapping of the OCR, voting disk and other ASM disks.)
Please help me. At one client site I can see many LUNs assigned to the server and cannot tell exactly which disks have been used for the OCR, voting disk and DATA diskgroup.
    Thanks,
    Manish

Well, for this you can use oracleasm querydisk. Using it you can identify whether a device is marked for ASM or not. You can see the example below.
    [oracle@localhost init.d]$ sqlplus "/as sysdba"
    SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jun 3 11:52:12 2010
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select path from v$asm_disk;
    PATH
    /dev/oracleasm/disks/VOL2
    /dev/oracleasm/disks/VOL1
    SQL> exit;
    [oracle@localhost init.d]$ su
    Password:
    [root@localhost init.d]# /sbin/fdisk -l
    Disk /dev/sda: 80.0 GB, 80000000000 bytes
    255 heads, 63 sectors/track, 9726 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1        1305    10482381   83  Linux
    /dev/sda2            1306        9401    65031120   83  Linux
    /dev/sda3            9402        9662     2096482+  82  Linux swap / Solaris
    /dev/sda4            9663        9726      514080    5  Extended
    /dev/sda5            9663        9726      514048+  83  Linux
    Disk /dev/sdb: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1        4859    39029886   83  Linux
    /dev/sdb2            4860        9729    39118275   83  Linux
    [root@localhost init.d]# ./oracleasm querydisk /dev/sdb1
    Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
    [root@localhost init.d]# ./oracleasm querydisk /dev/sdb2
    Device "/dev/sdb2" is marked an ASM disk with the label "VOL2"
    [root@localhost init.d]# ./oracleasm querydisk /dev/sda1
    Device "/dev/sda1" is not marked as an ASM disk
    [root@localhost init.d]#Also in windows :
    C:\Documents and Settings\comp>asmtool -list
    NTFS                             \Device\Harddisk0\Partition1           140655M
    ORCLDISKDATA1                    \Device\Harddisk0\Partition2             4102M
    ORCLDISKDATA2                    \Device\Harddisk0\Partition3             4102M
    NTFS                             \Device\Harddisk0\Partition4           152617M
C:\Documents and Settings\comp>
Answered by chinar.
Refer to: how to identify which raw-device disk is named VOL1 in ASM from the OS level.
    Happy New Year.
    regards,
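On later oracleasm versions there is also a one-step reverse lookup by label (flag availability assumed; check oracleasm querydisk -h on your version):

```shell
# Reverse lookup: from ASMLib label to OS device.
cmds='
oracleasm querydisk -d VOL1   # label plus the device major,minor numbers
oracleasm querydisk -p VOL1   # prints the matching /dev entries directly'
printf '%s\n' "$cmds"
```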

  • Recommended Number LUNs for ASM Diskgroup

We are installing Oracle Clusterware 11g, Oracle ASM 11g and Oracle Database 11g R1 (11.1.0.6) Enterprise Edition with the RAC option. We have an EMC Clariion CX-3 SAN for shared storage (all Oracle software will reside locally). We are trying to determine the recommended or best-practice number of LUNs and LUN size for ASM diskgroups. I have found only the following specific to ASM 11g:
    ASM Deployment Best Practice
    Use diskgroups with four or more disks, and making sure these disks span several backend disk adapters.
    1) Recommended number of LUNs?
    2) Recommended size of LUNs?
    3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
    4) Should the number of physical spindles in LUN be even numbered? Does it matter?

user10437903 wrote:
Use diskgroups with four or more disks, and making sure these disks span several backend disk adapters.
This means that the LUNs (disks) should be created over multiple SCSI adapters in the storage box. EMCs have multiple SCSI channels to which disks are attached. Best practice says that the disks/LUNs that you assign to a diskgroup should be spread over as many channels in the storage box as possible. This increases the bandwidth and therefore the performance.
1) Recommended number of LUNs?
Like the best practice says, if possible, at least 4.
2) Recommended size of LUNs?
That depends on your situation. If you are planning a database of 100GB, then a LUN size of 50GB is a bit overkill.
3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
LUNs; spindles only if you have access to physical spindles.
4) Should the number of physical spindles in a LUN be even numbered? Does it matter?
If you are using RAID5, I'd advise keeping a 4+1 spindle allocation, but it might not be possible to realize that. It all depends on the storage solution and how far you can go in configuring it.
    Arnoud Roth

  • ASM configuration on RAC

    hi,
I am installing a two-node RAC on Red Hat Linux 5. I am using shared storage and OCFS2. My DB version is 10.2. While using DBCA to configure the ASM diskgroup I am getting the following errors:
    could not mount the diskgroup on remote node node2
    using connection service node2:1521+ASM2.Ensure that
    the listener is running on this node and the ASM
    instance is registered to the listener. Received the following error:-
ORA-15032: not all alterations performed
ORA-15130: diskgroup "D" is being dismounted
    Regards
    Supriyo Dey

Does the second node have access to the disks/partitions which you are using for the diskgroups?
Are the permissions for the disks/partitions set to oracle:dba on both nodes?
    If you use ASMLIB, have you scanned the disks on the second node?
    Regards
    Sebastian
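Sebastian's checklist, as the commands you would run on node 2 (paths are examples; scandisks needs root):

```shell
# Verify shared-disk visibility, permissions and ASMLib labels on node 2.
checklist='
ls -l /dev/sd*                    # can node2 see the shared partitions at all?
chown oracle:dba /dev/sdb1        # permissions must match node1
/etc/init.d/oracleasm scandisks   # re-read ASMLib labels on node2
/etc/init.d/oracleasm listdisks   # the diskgroup disks should now be listed'
printf '%s\n' "$checklist"
```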

  • Best way to migrate ASM diskgroups to new diskgroups on new storage

Hi, we are currently planning a storage migration on a two-node 10gR2 RAC, so we need to know the best way to migrate data from the current ASM diskgroups to the new storage.
    Could anyone comment about this ?

    Connect new storage to host, add new disks to disk groups, remove old disks from diskgroups.
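That migration can be one online statement per diskgroup (disk names below are placeholders; the old LUNs should only be unmapped once v$asm_operation shows the rebalance has finished):

```shell
# Online storage swap: add new disks and drop old ones in one rebalance.
sql="ALTER DISKGROUP data
  ADD  DISK 'ORCL:NEWDISK1', 'ORCL:NEWDISK2'
  DROP DISK data_0000, data_0001
  REBALANCE POWER 8;"
printf '%s\n' "$sql"
```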

  • RMAN backup goes to filesystem and not to ASM diskgroup

    Hi,
    DB: 11.2.0.1
    OS: Linux
    Parameter configured in database:
    SQL> show parameter db_recovery_file_dest
NAME                         TYPE         VALUE
db_recovery_file_dest        string       +BACKUP
db_recovery_file_dest_size   big integer  10184M
If I execute the command " RMAN> backup database; ", then the backup (the backup pieces) goes to the +BACKUP destination, as expected.
But if I execute the same command using a script, then the backup pieces go to the filesystem (default location $ORACLE_HOME/dbs).
Could you suggest (if I have understood wrongly) why the backup pieces go to the filesystem location and not to the ASM diskgroup?
I want to take the backup to the ASM diskgroup, because there is little space left on the filesystem.
    The script i used is this:
    [oracle@rac1 rmanscripts]$ more online.sh
    export ORACLE_SID=test;
    export NLS_DATE_FORMAT='dd/mm/yy hh24:mi:ss';
    umask 022
    date
    rman target / cmdfile online.rcv msglog online.log
    [oracle@rac1 rmanscripts]$ more online.rcv
    run {
    backup
    full
    tag b_db_full_test
    filesperset 2
    format 'df_%d_%t_%s_%p'
database include current controlfile;
}
    Thanks in advance,
    Regards,

    Hi mseberg,
Thanks for your reply. The thing is that the controlfile autobackup does go to the ASM diskgroup (+BACKUP).
Even after changing the suggested config, no luck for me.
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name TEST are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 15 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+BACKUP';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 3 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_test.f'; # default
    Backup piece destination info from log is here:
    channel ORA_DISK_2: finished piece 1 at 16/02/13 22:41:22
    piece handle=/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/df_TEST_807575973_155_1 tag=B_DB_FULL_TEST comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:01:49
    Finished backup at 16/02/13 22:41:22
    Starting Control File and SPFILE Autobackup at 16/02/13 22:41:22
    piece handle=+BACKUP/test/autobackup/2013_02_16/s_807576082.342.807576083 comment=NONE
    Finished Control File and SPFILE Autobackup at 16/02/13 22:41:25
I cannot understand why this is happening.
    Regards,
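One likely culprit in the script shown earlier: an explicit FORMAT overrides db_recovery_file_dest, and 'df_%d_%t_%s_%p' contains no path, so the pieces default to $ORACLE_HOME/dbs. Dropping FORMAT, or pointing it at the diskgroup, should send the pieces to +BACKUP; a sketch of the adjusted online.rcv:

```shell
# Adjusted RMAN command file: FORMAT names the diskgroup, so pieces
# land in +BACKUP with OMF names instead of $ORACLE_HOME/dbs.
rcv="run {
  backup
    full
    tag b_db_full_test
    filesperset 2
    format '+BACKUP'
    database include current controlfile;
}"
printf '%s\n' "$rcv"
```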

  • SAN reboot for oracle DB at ASM in linux RAC

    Hi Experts,
We use a 10.2.0.4 database in ASM with Oracle RAC on Red Hat 5 Linux.
We use 3 separate homes (ASM, CRS, and database).
I got notice that the SAN box (which holds the database storage) will be rebooted.
Under this condition, what do I need to do? Shut down the instance? The database? CRS? Or the ASM instance?
Thanks for your help!
    JIM

user589812 wrote:
What do you mean about starting all Oracle-related services in sequence?
CRS will start the complete Oracle cluster s/w stack for you - ASM, RAC, nodeapps, etc.
Usually, the only effort required is simply hitting the reset/power-on button - as the OS boots, CRS will start and it will in turn bring up the s/w stack. No manual intervention required (unless you on purpose configured it differently).
Based on Billy's suggestion, can I use "srvctl stop nodeapps -n all" and "$ORA_CRS_HOME/bin/crsctl stop crs"?
No - my suggestion is that before the SAN maintenance window starts, you do a "shutdown -h now" on all cluster nodes to halt/power down each and every RAC server.
And after the SAN maintenance period is over, and the SAN is available again, ssh into the LoM (Lights Out Management) console of each server and do a "start SYS" (or equivalent) to power up each server.
    In other words, with the SAN down/busy rebooting/undergoing maintenance, I would not want to have my RAC servers up and running as there is no storage layer to run them on. IMO, it is a lot safer to have these servers powered down to during such a maintenance period.
    PS. I have even had the odd case that during SAN maintenance power cables being pulled, Interconnect switches accidentally reset and so on - or you could have some bright spark also shutting down the aircon with the SAN and your RAC servers suffering heat problems and potential damage while running. So my question is - why should I take the risk of keeping my RAC servers up when the storage layer is not there and the cluster is broken and useless? Surely it makes a lot more sense to power down those servers too and then only power them on again when the maintenance period is over and the SAN (and data centre) is in a proper running state again.
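The powerdown/powerup Billy describes, as commands per node (10g syntax; run as root). Printed as a plan rather than executed:

```shell
# Clean shutdown before a SAN maintenance window, per cluster node.
plan='
# before the SAN window, on every node:
crsctl stop crs     # stop the cluster stack (CRS, ASM, instances) cleanly first
shutdown -h now     # then halt the server
# after the SAN is back: power the nodes on (e.g. via the LoM console);
# CRS autostarts and brings up ASM, instances and nodeapps by itself'
printf '%s\n' "$plan"
```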

  • How to clean the asm instance from RAC manually

For some reason I ran crs_unregister asm and crs_unregister lsnr yesterday to remove the ASM and listener resources from CRS,
and today I want to rebuild the ASM instance, so I ran DBCA again, but got an error:
    Error when starting ASM instance on node rac1: PRKS-1009 : Failed to start ASM instance "+ASM1" on node "rac1", [PRKS-1011 : Failed to check status of ASM instance "+ASM1" on node "rac1", [CRS-0210: Could not find resource ora.rac1.ASM1.asm.]]
    [PRKS-1011 : Failed to check status of ASM instance "+ASM1" on node "rac1", [CRS-0210: Could not find resource ora.rac1.ASM1.asm.]]
    DBCA could not startup the ASM instance on node: rac1. Manual intervention is required to recreate these instances. If you choose to proceed, ASM diskgroups will not be mounted on non-started remote ASM instances. Do you want to proceed with ASM diskgroup management?
The problem is how to do this "Manual intervention is required to recreate these instances". I have already: 1. dd'd over the ASM disks, 2. removed the +ASM directory from $ORACLE_BASE, 3. cleaned the ASM info from /etc/oratab. So what can I do next?
I tried restarting CRS, and now the error info is different!
    [oracle@rac1 ~]$ dbca -silent -responseFile /home/oracle/dbca.rsp
    Look at the log file "/opt/ora/product/10.2.0/db_1/cfgtoollogs/dbca/silent6.log"
    for further details.
    [oracle@rac1 ~]$ cat /opt/ora/product/10.2.0/db_1/cfgtoollogs/dbca/silent6.log
    ORA-00119: invalid specification for system parameter LOCAL_LISTENER
    ORA-00132: syntax error or unresolved network name 'LISTENER_+ASM1'
    ORA-00119: invalid specification for system parameter LOCAL_LISTENER
    ORA-00132: syntax error or unresolved network name 'LISTENER_+ASM1'
Edited by: 859340 on 2011-7-8 11:01 PM

    Hi,
Can you post the dbca log?
Also, do you share the ASM home with the Oracle home? If it is separate, then add LISTENER_+ASM1 to tnsnames.ora in the ASM home; otherwise add it in the Oracle home.
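For reference, the entry the ORA-00132 is looking for would be something like this in tnsnames.ora of the home that runs +ASM1 (the host below is a placeholder - use the node's VIP or hostname, and the listener's actual port):

```
LISTENER_+ASM1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
```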
    Cheers
