11gR2 two-node RAC build fails at dbca database creation

Liu, could you please take a look?
Environment: VMware Workstation 8.0
OS: Linux 5.5 x86-64
DB: 11gR2 64-bit, 11.2.0.1
Grid: 11gR2 64-bit
Four shared disks, sized 1G, 1G, 4G, 4G
Storage: ASM
Everything went fine until the dbca step at the end. At 85%, while the database was being created, it reported:
PRCR-1079: Failed to start resource ora.rac.db
ORA-15081: failed to submit an I/O operation to a disk
CRS-2674: Start of 'ora.rac.db' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
I checked the resource status and tried a manual start, which also failed:
[root@rac2 ~]# /oracle/app/grid/product/11.2.0/grid/bin/crsctl status res -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.eons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
Cluster Resources
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.oc4j
1 OFFLINE OFFLINE
ora.rac.db
1 ONLINE ONLINE rac1 Open
2 ONLINE OFFLINE Instance Shutdown
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
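For triaging output like the above, it can help to filter the flattened `crsctl status res -t` listing down to just the instances that should be up but aren't. A minimal sketch over a captured sample (the awk pattern assumes the flattened layout shown here, not the real columned output):

```shell
# Work on a captured sample of the flattened `crsctl status res -t` output,
# so this runs anywhere; on a live node you would pipe crsctl into the awk.
cat > /tmp/crs_status.txt <<'EOF'
ora.rac.db
1 ONLINE ONLINE rac1 Open
2 ONLINE OFFLINE Instance Shutdown
ora.gsd
OFFLINE OFFLINE rac1
EOF
# Remember the current resource name, then print numbered instances whose
# TARGET is ONLINE but whose STATE is OFFLINE (i.e. failed to start).
awk '/^ora\./ {res=$1}
     $1 ~ /^[0-9]+$/ && $2 == "ONLINE" && $3 == "OFFLINE" {print res " instance " $1}' /tmp/crs_status.txt
```

On this sample it flags `ora.rac.db instance 2`, which matches the failing instance on rac2.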
[root@rac2 ~]# mhdd
-bash: mhdd: command not found
[root@rac2 ~]# /oracle/app/grid/product/11.2.0/grid/bin/crsctl start res -all
CRS-5702: Resource 'ora.DATA.dg' is already running on 'rac1'
CRS-5702: Resource 'ora.DG.dg' is already running on 'rac1'
CRS-5702: Resource 'ora.LISTENER.lsnr' is already running on 'rac1'
CRS-5702: Resource 'ora.LISTENER_SCAN1.lsnr' is already running on 'rac1'
CRS-5702: Resource 'ora.asm' is already running on 'rac1'
CRS-5702: Resource 'ora.eons' is already running on 'rac1'
CRS-2501: Resource 'ora.gsd' is disabled
CRS-5702: Resource 'ora.net1.network' is already running on 'rac1'
CRS-2501: Resource 'ora.oc4j' is disabled
CRS-5702: Resource 'ora.ons' is already running on 'rac1'
CRS-5702: Resource 'ora.asm' is already running on 'rac1'
CRS-5702: Resource 'ora.LISTENER.lsnr' is already running on 'rac1'
CRS-2501: Resource 'ora.gsd' is disabled
CRS-5702: Resource 'ora.ons' is already running on 'rac1'
CRS-5702: Resource 'ora.rac1.vip' is already running on 'rac1'
CRS-5702: Resource 'ora.asm' is already running on 'rac2'
CRS-5702: Resource 'ora.LISTENER.lsnr' is already running on 'rac2'
CRS-2501: Resource 'ora.gsd' is disabled
CRS-5702: Resource 'ora.ons' is already running on 'rac2'
CRS-5702: Resource 'ora.rac2.vip' is already running on 'rac2'
CRS-5702: Resource 'ora.registry.acfs' is already running on 'rac1'
CRS-5702: Resource 'ora.scan1.vip' is already running on 'rac1'
CRS-2672: Attempting to start 'ora.rac.db' on 'rac2'
ORA-15081: failed to submit an I/O operation to a disk
CRS-2674: Start of 'ora.rac.db' on 'rac2' failed
CRS-2679: Attempting to clean 'ora.rac.db' on 'rac2'
CRS-2681: Clean of 'ora.rac.db' on 'rac2' succeeded
CRS-2528: Unable to place an instance of 'ora.rac.db' as all possible servers are occupied by the resource
CRS-4000: Command Start failed, or completed with errors.
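In my experience, ORA-15081 on only one node of a fresh build usually means the grid/oracle user on that node cannot open the ASM disk devices (device ownership or permissions differ from the healthy node), so the udev/ASMLib settings on rac2 are worth checking. A hedged way to spot it is to capture the device listing from both nodes and diff them; the paths, owners and modes below are made-up placeholders, not taken from this cluster:

```shell
# Fake per-node device listings (placeholders); on real nodes you would run
# something like `ls -l /dev/oracleasm/disks/` on rac1 and rac2 and save it.
cat > /tmp/rac1_disks.txt <<'EOF'
brw-rw---- grid asmadmin /dev/oracleasm/disks/VOL1
brw-rw---- grid asmadmin /dev/oracleasm/disks/VOL2
EOF
cat > /tmp/rac2_disks.txt <<'EOF'
brw-rw---- grid asmadmin /dev/oracleasm/disks/VOL1
brw------- root root /dev/oracleasm/disks/VOL2
EOF
# Any diff output means the two nodes disagree on device ownership or mode.
if diff /tmp/rac1_disks.txt /tmp/rac2_disks.txt > /dev/null; then
  echo "device permissions match"
else
  echo "ownership mismatch on rac2"
fi
```

On the real nodes, `ls -l /dev/oracleasm/disks/` (ASMLib) or whatever udev-managed device paths the cluster uses would be the listing to capture on each side.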
I'm not very well versed in maintaining 11g RAC, so please point out anything I've gotten wrong - all ears.
Edited by: 951276 on 2012-10-21 4:52 AM

[root@rac1 ~]# vi /var/log/messages
Oct 21 04:02:05 rac1 syslogd 1.4.1: restart.
Oct 21 04:12:25 rac1 kernel: usb 2-2.1: USB disconnect, address 5
Oct 21 04:12:25 rac1 hcid[3041]: HCI dev 0 down
Oct 21 04:12:25 rac1 hcid[3041]: Stopping security manager 0
Oct 21 04:12:25 rac1 hcid[3041]: Device hci0 has been disabled
Oct 21 04:12:25 rac1 hcid[3041]: HCI dev 0 unregistered
Oct 21 04:12:25 rac1 hcid[3041]: Unregister path:/org/bluez/hci0
Oct 21 04:12:25 rac1 hcid[3041]: Device hci0 has been removed
Oct 21 04:12:25 rac1 kernel: usb 2-2.1: new full speed USB device using uhci_hcd and address 6
Oct 21 04:12:25 rac1 kernel: usb 2-2.1: configuration #1 chosen from 1 choice
Oct 21 04:12:25 rac1 hcid[3041]: HCI dev 0 registered
Oct 21 04:12:25 rac1 hcid[3041]: Register path:/org/bluez/hci0 fallback:0
Oct 21 04:12:25 rac1 hcid[3041]: HCI dev 0 up
Oct 21 04:12:25 rac1 hcid[3041]: Device hci0 has been added
Oct 21 04:12:25 rac1 hcid[3041]: Starting security manager 0
Oct 21 04:12:25 rac1 hcid[3041]: Device hci0 has been activated
~~~~~~
[grid@rac1 rac1]$ vi alertrac1.log
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2012-10-20 21:57:04.899
[client(9053)]CRS-2106:The OLR location /oracle/app/grid/product/11.2.0/grid/cdata/rac1.olr is inaccessible. Details in /oracle/app/grid/product/11.2.0/grid/log/rac1/client/ocrconfig_9053.log.
2012-10-20 21:57:04.938
[client(9053)]CRS-2101:The OLR was formatted using version 3.
2012-10-20 21:57:57.507
[ohasd(9411)]CRS-2112:The OLR service started on node rac1.
2012-10-20 21:57:57.635
[ohasd(9411)]CRS-2772:Server 'rac1' has been assigned to pool 'Free'.
2012-10-20 21:58:21.321
[ohasd(9411)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2012-10-20 21:58:23.411
[cssd(10648)]CRS-1713:CSSD daemon is started in exclusive mode
2012-10-20 21:58:29.673
[cssd(10648)]CRS-1709:Lease acquisition failed for node rac1 because no voting file has been configured; details at (:CSSNM00031:) in /oracle/app/grid/product/11.2.0/grid/log/rac1/cssd/ocssd.log
2012-10-20 21:58:47.286
[cssd(10648)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1.
2012-10-20 21:58:47.999
[ctssd(10700)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2012-10-20 21:58:48.945
[ctssd(10700)]CRS-2401:The Cluster Time Synchronization Service started on host rac1.
[client(10880)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
[client(10884)]CRS-10001:ACFS-9322: completed
2012-10-20 21:59:14.246
[client(10920)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /oracle/app/grid/product/11.2.0/grid/log/rac1/client/ocrconfig_10920.log.
2012-10-20 21:59:23.417
[ctssd(10700)]CRS-2405:The Cluster Time Synchronization Service on host rac1 has been shut down by user
2012-10-20 21:59:35.142
[cssd(10648)]CRS-1603:CSSD on node rac1 has been shut down by user.
[client(11124)]CRS-10001:ACFS-9200: Supported
2012-10-20 22:22:47.265
[client(12139)]CRS-2106:The OLR location /oracle/app/grid/product/11.2.0/grid/cdata/rac1.olr is inaccessible. Details in /oracle/app/grid/product/11.2.0/grid/log/rac1/client/ocrconfig_12139.log.
2012-10-20 22:22:47.280
[client(12139)]CRS-2101:The OLR was formatted using version 3.
2012-10-20 22:22:53.027
[ohasd(12177)]CRS-2112:The OLR service started on node rac1.
2012-10-20 22:22:53.050
[ohasd(12177)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2012-10-20 22:22:53.106
[ohasd(12177)]CRS-2772:Server 'rac1' has been assigned to pool 'Free'.
2012-10-20 22:23:13.473
[gpnpd(13378)]CRS-2328:GPNPD started on node rac1.
2012-10-20 22:23:15.541
[cssd(13439)]CRS-1713:CSSD daemon is started in exclusive mode
2012-10-20 22:23:19.831
[cssd(13439)]CRS-1709:Lease acquisition failed for node rac1 because no voting file has been configured; details at (:CSSNM00031:) in /oracle/app/grid/product/11.2.0/grid/log/rac1/cssd/ocssd.log
2012-10-20 22:23:37.548
[cssd(13439)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1.
2012-10-20 22:23:39.195
[ctssd(13489)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2012-10-20 22:23:40.135
[ctssd(13489)]CRS-2401:The Cluster Time Synchronization Service started on host rac1.
[client(13531)]CRS-10001:ACFS-9203: true
[client(13675)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
[client(13679)]CRS-10001:ACFS-9322: completed
2012-10-20 22:24:03.495
[client(13708)]CRS-1006:The OCR location +DATA is inaccessible. Details in /oracle/app/grid/product/11.2.0/grid/log/rac1/client/ocrconfig_13708.log.
2012-10-20 22:24:03.631
[client(13708)]CRS-1001:The OCR was formatted using version 3.
2012-10-20 22:24:05.732
[crsd(13748)]CRS-1012:The OCR service started on node rac1.
2012-10-20 22:24:06.240
[cssd(13439)]CRS-1605:CSSD voting file is online: ORCL:VOL1; details in /oracle/app/grid/product/11.2.0/grid/log/rac1/cssd/ocssd.log.
2012-10-20 22:24:06.545
[cssd(13439)]CRS-1626:A configuration change request completed successfully
2012-10-20 22:24:06.561
[cssd(13439)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1.
2012-10-20 22:24:21.486
[ctssd(13489)]CRS-2405:The Cluster Time Synchronization Service on host rac1 has been shut down by user
2012-10-20 22:24:33.227
[cssd(13439)]CRS-1603:CSSD on node rac1 has been shut down by user.
2012-10-20 22:24:38.542
[mdnsd(13364)]CRS-5602:mDNS service stopping by request.
2012-10-20 22:24:46.899
[gpnpd(14231)]CRS-2328:GPNPD started on node rac1.
2012-10-20 22:24:48.622
[cssd(14292)]CRS-1713:CSSD daemon is started in clustered mode
2012-10-20 22:25:22.859
[cssd(14292)]CRS-1707:Lease acquisition for node rac1 (number 1) is complete
2012-10-20 22:25:22.865
[cssd(14292)]CRS-1605:CSSD voting file is online: ORCL:VOL1; details in /oracle/app/grid/product/11.2.0/grid/log/rac1/cssd/ocssd.log.
2012-10-20 22:25:40.448
[cssd(14292)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1.
2012-10-20 22:25:42.090
[ctssd(3984)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2012-10-20 22:25:43.042
[ctssd(3984)]CRS-2401:The Cluster Time Synchronization Service started on host rac1.
2012-10-20 22:25:57.067
[crsd(8702)]CRS-1012:The OCR service started on node rac1.
2012-10-20 22:25:57.747
[evmd(9085)]CRS-1401:EVMD started on node rac1.
2012-10-20 22:25:59.267
[crsd(8702)]CRS-1201:CRSD started on node rac1.
2012-10-20 22:25:59.480
[crsd(8702)]CRS-2772:Server 'rac1' has been assigned to pool 'Free'.
2012-10-20 22:29:47.468
[cssd(14292)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2.
2012-10-20 22:30:06.241
[crsd(8702)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
[client(26013)]CRS-10001:ACFS-9203: true
[client(26700)]CRS-10001:ACFS-9203: true
[client(26710)]CRS-10001:ACFS-9203: true
[client(23277)]CRS-10001:ACFS-9203: true
[client(23385)]CRS-10001:ACFS-9203: true
[client(23395)]CRS-10001:ACFS-9203: true
[client(23444)]CRS-10001:ACFS-9203: true
[client(23454)]CRS-10001:ACFS-9203: true
2012-10-20 23:02:51.668
[crsd(8702)]CRS-2773:Server 'rac2' has been removed from pool 'Free'.
2012-10-20 23:02:51.670
[crsd(8702)]CRS-2772:Server 'rac2' has been assigned to pool 'Generic'.
2012-10-20 23:02:51.671
[crsd(8702)]CRS-2773:Server 'rac1' has been removed from pool 'Free'.
2012-10-20 23:02:51.671
[crsd(8702)]CRS-2772:Server 'rac1' has been assigned to pool 'Generic'.
2012-10-20 23:02:51.671
[crsd(8702)]CRS-2772:Server 'rac1' has been assigned to pool 'ora.orcl'.
2012-10-20 23:02:51.672
[crsd(8702)]CRS-2772:Server 'rac2' has been assigned to pool 'ora.orcl'.
2012-10-21 02:14:12.962
[crsd(8702)]CRS-2765:Resource 'ora.orcl.db' has failed on server 'rac1'.
2012-10-21 02:14:12.972
[crsd(8702)]CRS-2767:Target resource 'ora.orcl.db' is offline and will not be recovered.
2012-10-21 02:14:49.523
[crsd(8702)]CRS-2773:Server 'rac1' has been removed from pool 'ora.orcl'.
2012-10-21 02:14:49.523
[crsd(8702)]CRS-2773:Server 'rac2' has been removed from pool 'ora.orcl'.
2012-10-21 02:14:49.524
[crsd(8702)]CRS-2773:Server 'rac1' has been removed from pool 'Generic'.
2012-10-21 02:14:49.525
[crsd(8702)]CRS-2773:Server 'rac2' has been removed from pool 'Generic'.
2012-10-21 02:14:49.525
[crsd(8702)]CRS-2772:Server 'rac1' has been assigned to pool 'Free'.
2012-10-21 02:14:49.551
[crsd(8702)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
2012-10-21 02:17:24.815
[crsd(8702)]CRS-2773:Server 'rac2' has been removed from pool 'Free'.
2012-10-21 02:17:24.816
[crsd(8702)]CRS-2772:Server 'rac2' has been assigned to pool 'Generic'.
2012-10-21 02:17:24.816
[crsd(8702)]CRS-2773:Server 'rac1' has been removed from pool 'Free'.
2012-10-21 02:17:24.816
[crsd(8702)]CRS-2772:Server 'rac1' has been assigned to pool 'Generic'.
2012-10-21 02:17:24.816
[crsd(8702)]CRS-2772:Server 'rac1' has been assigned to pool 'ora.rac'.
2012-10-21 02:17:24.816
[crsd(8702)]CRS-2772:Server 'rac2' has been assigned to pool 'ora.rac'.
Log from the current installation:
[grid@rac1 rac]$ tail -f trace.log
[Thread-267] [ 2012-10-21 02:23:46.739 CST ] [CRSNativeResult.addLine:106] callback: ora.rac.db false CRS-2676: Start of 'ora.rac.db' on 'rac1' succeeded
[Thread-267] [ 2012-10-21 02:23:46.775 CST ] [CRSNativeResult.addComp:162] add comp: name ora.rac.db, rc 0, msg Success
[Thread-267] [ 2012-10-21 02:23:46.775 CST ] [CRSNativeResult.addComp:162] add comp: name ora.rac.db, rc 223, msg CRS-0223: Resource 'ora.rac.db 2 1' has placement error.
[Thread-267] [ 2012-10-21 02:23:46.776 CST ] [CRSNative.internalStartResource:352] Failed to start resource: Name: ora.rac.db, node: null, filter: null, msg ORA-15081: failed to submit an I/O operation to a disk
CRS-2674: Start of 'ora.rac.db' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
[Thread-267] [ 2012-10-21 02:23:46.776 CST ] [PostDBCreationStep.executeImpl:828] Exception while Starting with HA Database Resource PRCR-1079 : Failed to start resource ora.rac.db
ORA-15081: failed to submit an I/O operation to a disk
CRS-2674: Start of 'ora.rac.db' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
Edited by: 951276 on 2012-10-21 11:43 PM

Similar Messages

  • How to migrate data from oracle 9i database to new machine 11gr2 RAC ASM

    Hi experts,
    I need your advice on the best method to move data from an Oracle 9i database to a new machine running an Oracle 11gR2 RAC database with ASM.
    My production server currently runs an Oracle 9i database on HP-UX with a normal file system. The new server runs Oracle 11gR2 RAC with ASM on 64-bit Sun Solaris SPARC. What is the best method to move the data over so that it stays consistent? Any guides I can refer to?
    Regards,
    William

    Hi William,
    See the Metalink note Migration of Oracle Database Instances Across OS Platforms [ID 733205.1] to check the endian format of each OS. If they are the same, you can use RMAN to convert the database to the other OS; if not, the only option is export/import (transportable tablespaces).
    To upgrade from 9i to 11g, see note 837570.1 - Complete Checklist for Manual Upgrades to 11gR2.
    To migrate your file system to ASM, the only way is RMAN; see the note How to move a datafile from a file system to ASM [ID 390274.1].
    Hope this helps.
    Best regards,
    Ruben Morais
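A note on the endian check above: v$transportable_platform reports both HP-UX and Solaris SPARC as big-endian, so in William's case the RMAN conversion path should be available. A toy sketch of the decision logic (the endian values are hard-coded here, not queried from a database):

```shell
# Endian formats as reported by v$transportable_platform for these two
# platforms: HP-UX and Solaris SPARC are both "Big".
src_endian=Big   # HP-UX source
tgt_endian=Big   # Solaris SPARC target
if [ "$src_endian" = "$tgt_endian" ]; then
  echo "same endian: RMAN CONVERT is usable"
else
  echo "different endian: use transportable tablespaces / export-import"
fi
```

The real check is `SELECT platform_name, endian_format FROM v$transportable_platform;` run against both databases.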

  • Oracle 11gr2 RAC installation

    Hi gurus,
    For a new project we have proposed a two-node, single-site Oracle 11gR2 64-bit RAC database, running OEL 5.4 64-bit Linux on IBM machines.
    We are migrating a standalone 10gR2 database to 11gR2 RAC, and from HP-UX to OEL Linux, across data centers.
    What should the pfile configuration be in RAC (I mean, does the memory size double)?
    We are going to use IBM storage. Please shed some light on the following:
    1> Requirements for installation
    2> Grid installation requirements
    3> Is the ASM instance a physical database instance or just a service?
    4> Any Oracle document for installing 11gR2 Grid and RAC?
    Authoritative links for the RAC installation would be appreciated.
    Thanks & regards

    1> Requirements for installation
    Resp.: http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10840/toc.htm
    2> Grid installation requirements
    Resp.: http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10812/toc.htm
    3> Is the ASM instance a physical database instance or just a service?
    Resp.: ASM is a database instance (i.e. just the memory structures; it has no physical datafiles of its own). See this link:
    4> Any Oracle document for installing 11gR2 Grid and RAC?
    Resp.: http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10813/toc.htm (RAC); the GI guide is in item 2 above.
    Always check the documentation site first: http://www.oracle.com/pls/db112/homepage

  • If you have a working DB, why upgrade?

    If you have a working db, don't try to upgrade even if your local Oracle salesman says you should. Why? Because you will run into new problems during the upgrade...

    Hi Mark,
    It's a DB story, but a little bit long - not just one bug. I'll try to summarize it.
    Source db: 10.2.0.3 RAC on RHEL 4, OCFS2, 900GB of datafiles
    Target db: 11gR2 RAC on RHEL 5.5, ASM
    - I couldn't stop the source db (a 24/7 db, as usual)
    - First I tried Data Pump over the network; there are bugs in the 10.2.0.3 Data Pump and it couldn't export (we would have had to upgrade the db to 10.2.0.4, which I couldn't)
    - Second I tried the old export/import tools, but couldn't do it: I got "snapshot too old" errors during export of LOB tables!
    - Then RMAN backups. I couldn't use the customer's RMAN backups
    - as usual they were taking backups but never testing them - when I tried one, it didn't work: at the end of the restore I got "system01.dbf file restored from older backup, please restore it from newer backup", but there was only one backup.
    - I changed the backup script and did a lot of cleaning in the catalog; then we got a new internal error: 10.2.0.3 has a bug where the backup crashes if a datafile grows during the RMAN backup.
    After all that I talked the managers into stopping the db for a one-day maintenance window, called in the storage experts (copying over the network is too slow), and during the one-day outage we moved to the new storage configuration (RAID 1, raw devices for ASM):
    - install 10.2.0.1
    - upgrade to 10.2.0.3
    - copy the datafiles to their original paths on the new storage
    - start the db (happy day)
    - upgrade it to 10.2.0.4
    - set up Grid
    - set up 11gR2
    - upgrade to 11gR2 using the option to move the datafiles to ASM
    - surprise: the upgrade couldn't do it because of some bugs. (I tried creating a new db on ASM and there was no problem with ASM or Grid.)
    - decided to upgrade to 11gR2 single instance and do a Data Pump export (after 1 day it errored on file size, so I tried splitting the dump files)
    - set up an empty working RAC db
    - tried to import the dump files
    Finally I got this error -> 11gR2 impdp parallelism (don't use it. It has a bug!!)

  • RAC 11R2 Private Interconnect Issue

    Friends,
    We set up our Oracle Clusterware on Solaris SPARC at version 11.2.0.3 with the PSU 2 patch set. Some changes happened at the OS level, and the wrong private interconnect IPs were picked up by the Oracle Clusterware registry.
    The clusterware is down and we are not able to bring it up. We need to change the private IP configuration at the Oracle Clusterware level, but the clusterware is down.
    Is there any way we can change the private interconnect configuration?
    Whenever we try to make a change, we get the error message "PRIF-10: failed to initialize the cluster registry":
    $ oifcfg setif -global vnet2/10.131.239.0:cluster_interconnect
    PRIF-10: failed to initialize the cluster registry
    Thank you!
    Jai

    The clusterware is down. We are not able to bring up the clusterware. There will be a need to change the private IP configuration at the Oracle Clusterware level and now the clusterware is down. Is there any way we can change the configuration of the private interconnect?
    $ oifcfg setif -global vnet2/10.131.239.0:cluster_interconnect
    PRIF-10: failed to initialize the cluster registry
    This error happens when the clusterware is down and you try to change the interconnect configuration: you must start Oracle Clusterware on the node before you can make the change.
    Why is your clusterware not starting? Please post the cluster alert log and crsd.log (only the relevant parts).
    If the error in crsd.log is "PROC-44: Error in network address and interface operations", it indicates a mismatch between the OS settings (oifcfg iflist) and the gpnp profile settings in profile.xml.
    Restore the OS network configuration back to its original state, start Oracle Clusterware, and then try to make the changes again.
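The OS-vs-profile mismatch described above can be sanity-checked by comparing the interface list the OS reports (`oifcfg iflist`) with what the gpnp profile.xml records. A toy comparison over captured text files; the vnet names below are invented placeholders:

```shell
# Captured interface lists (placeholders); on a real node you would save
# the output of `oifcfg iflist` and the interfaces parsed from profile.xml.
cat > /tmp/os_ifaces.txt <<'EOF'
vnet2 10.131.239.0
EOF
cat > /tmp/gpnp_ifaces.txt <<'EOF'
vnet1 10.131.239.0
EOF
# Any difference means the clusterware profile no longer matches the OS.
if diff /tmp/os_ifaces.txt /tmp/gpnp_ifaces.txt > /dev/null; then
  echo "OS and gpnp profile agree"
else
  echo "mismatch: restore the original OS network config, then retry"
fi
```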

  • Steps to Upgrade Oracle 9.2 to 11R2 on IBM/AIX 5.3

    Hi,
    I want the complete steps to upgrade Oracle 9.2 to 11gR2 on IBM AIX 5.3 in a non-RAC environment.
    I would also like to know the reasons to upgrade: what are the advantages, features and benefits?
    Please post the complete details here.
    Thanks,
    Saswat

    All of the steps are documented in the Upgrade Guide - http://docs.oracle.com/cd/E11882_01/server.112/e23633/toc.htm
    To upgrade directly (using DBUA or scripts) to 11gR2, you need to be at version 9.2.0.8 or later of 9i.
    http://docs.oracle.com/cd/E11882_01/server.112/e23633/preup.htm#i1007814
    HTH
    Srini
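The prerequisite above boils down to a simple version comparison: 9.2.0.8 is the minimum 9i release for a direct DBUA/script upgrade to 11gR2. A small sketch of that check (9.2.0.6 below is just an example of a release that would need patching first):

```shell
# Compare a current 9i version against the 9.2.0.8 direct-upgrade minimum
# by sorting the two dotted versions numerically, field by field.
minimum=9.2.0.8
current=9.2.0.6   # example value, not from the post
lowest=$(printf '%s\n%s\n' "$minimum" "$current" | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -n 1)
if [ "$lowest" = "$minimum" ]; then
  echo "direct upgrade to 11gR2 supported"
else
  echo "patch to 9.2.0.8 first"
fi
```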

  • Implementing 11gR2 RAC with dataguard

    Could anyone provide the steps for setting up an 11gR2 two-node RAC with Data Guard? Can 11gR2 active database duplication be used to set up the standby?
    I just need the order of steps to follow to set up the environment:
    1] Set up the Grid Infrastructure for the 2-node RAC.
    2] Create the database.
    3] Modify the init.ora parameters to make the database created above the primary.
    4] Set up the Grid Infrastructure for the 2-node RAC on the DR site.
    5] Create the standby database using 11gR2 active database duplication.
    Is the above order correct? If not, let me know the correct order of steps for setting up 11gR2 RAC with Data Guard.

    Could anyone provide the steps for setting up an 11gR2 two-node RAC with Data Guard? Can 11gR2 active database duplication be used to set up the standby?
    This document is one of the best for configuring a two-node standby for a two-node primary database:
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimaryracphysicalsta-131940.pdf
    Starting with 11gR2 you can duplicate from the active database; also refer to the note below:
    How to create physical standby database with 11g RMAN DUPLICATE FROM ACTIVE DATABASE [ID 747250.1]
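For step 3, a hedged sketch of the kind of init.ora additions a Data Guard primary typically needs; the db_unique_name values racdb_prim/racdb_stby and the file path are made-up placeholders, and the exact parameter set should come from the MAA paper linked above:

```shell
# Write a hypothetical Data Guard parameter fragment for the primary side.
# All names here are placeholders, not from the thread.
cat <<'EOF' > /tmp/dg_primary_params.ora
*.db_unique_name='racdb_prim'
*.log_archive_config='DG_CONFIG=(racdb_prim,racdb_stby)'
*.log_archive_dest_2='SERVICE=racdb_stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=racdb_stby'
*.fal_server='racdb_stby'
*.standby_file_management='AUTO'
EOF
# Count the parameter lines written (all start with "*.").
grep -c '^\*\.' /tmp/dg_primary_params.ora
```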

  • Upgrade to 11R2 Win Cluster Only?

    Hi,
    I have seen that with the new 11gR2 release the installation approach has changed: the clusterware is now part of the Grid Infrastructure package.
    My RAC installation is Clusterware 11.1.0.7 and my RDBMS is 10.2.0.3 on Windows 2003 SP2.
    Is there a way I can upgrade my cluster in these steps (like in older versions)?
    1- Upgrade the cluster only (or cluster and ASM)
    2- Upgrade the database and ASM
    3- Upgrade Grid at the end (because I would like to upgrade all my Grid agents and the server at the same time).
    I have checked the installation doc and did not find a way to do these steps in the order I specified.
    Thanks

    I downloaded the document you pointed me to in your answer.
    Great document, but I still have my question unanswered after passing through it.
    Every time I try to upgrade my cluster, the OUI forces me to upgrade (or newly install) Grid Infrastructure before I can upgrade my cluster, ASM and RDBMS.
    When I do the steps in the OUI, at one point the SCAN asks me for a 'TCP/IP host name lookup', message INS-40718.
    Is there a way I can bypass this Grid Infrastructure upgrade (or installation) and go directly to the cluster and ASM upgrade?
    If not, and if I understand the installation documentation correctly, I will have to create 1 new address for my 3 nodes in DNS (one alias address in DNS round-robin for the 3 VIP addresses of my 3 nodes) and add it to my hosts file so the installation can resolve it? (At this point I'm not sure whether the 3 IP addresses have to be the 3 VIPs or 3 new IP addresses.)
    Can you give me an example of what I should put in my hosts file for the SCAN client address?
    Example:
    # public
    host1 10.200.0.11
    host2 10.200.0.12
    host3 10.200.0.13
    # vip
    host1-vip 10.200.0.21
    host2-vip 10.200.0.22
    host3-vip 10.200.0.23
    # private
    host1-priv 10.200.0.31
    host2-priv 10.200.0.32
    host3-priv 10.200.0.33
    # scan client
    host_scan ????????
    Thanks
    Edited by: Ron_B on 2010-04-13 13:35
    Edited by: Ron_B on 2010-04-14 05:46

  • How to connect to Oracle RAC via SCAN

    I just finished an Oracle RAC install, but I cannot connect via the SCAN name from a remote client - only via the VIP:
    $ sqlplus system/[email protected]:1521/racdb.development.info
    SQL*Plus: Release 11.2.0.3.0 Production on Fri May 25 15:14:13 2012
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    ERROR:
    ORA-12545: Connect failed because target host or object does not exist
    This is Oracle 11gR2 on Unbreakable Linux 6.2. The sqlplus above is from Instant Client 11.2. Further info:
    $ ./srvctl status scan
    SCAN VIP scan1 is enabled
    SCAN VIP scan1 is running on node racnode1
    SCAN VIP scan2 is enabled
    SCAN VIP scan2 is running on node racnode2
    SCAN VIP scan3 is enabled
    SCAN VIP scan3 is running on node racnode2
    ./srvctl status scan_listener
    SCAN Listener LISTENER_SCAN1 is enabled
    SCAN listener LISTENER_SCAN1 is running on node racnode1
    $ nslookup rac-scan
    Server:          172.20.0.15
    Address:     172.20.0.15#53
    Name:     rac-scan.xxx.local
    Address: 172.20.0.213
    Name:     rac-scan.xxx.local
    Address: 172.20.0.214
    Name:     rac-scan.xxx.local
    Address: 172.20.0.210
    on racnode1:
    $ /sbin/ifconfig
    eth0 Link encap:Ethernet HWaddr 00:1A:A0:96:A6:B2
    inet addr:172.20.0.221 Bcast:172.20.0.255 Mask:255.255.255.0 <--- public ip
    inet6 addr: fe80::21a:a0ff:fe96:a6b2/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:9458999 errors:0 dropped:0 overruns:0 frame:0
    TX packets:14852588 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:4001261935 (3.7 GiB) TX bytes:1196090235 (1.1 GiB)
    Interrupt:20 Memory:fdfc0000-fdfe0000
    eth0:1 Link encap:Ethernet HWaddr 00:1A:A0:96:A6:B2
    inet addr:172.20.0.212 Bcast:172.20.0.255 Mask:255.255.255.0 <---- VIP
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    Interrupt:20 Memory:fdfc0000-fdfe0000
    eth0:2 Link encap:Ethernet HWaddr 00:1A:A0:96:A6:B2
    inet addr:172.20.0.214 Bcast:172.20.0.255 Mask:255.255.255.0 <---- one of the SCAN ips
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    Interrupt:20 Memory:fdfc0000-fdfe0000
    eth1 Link encap:Ethernet HWaddr 90:E2:BA:0F:F9:8F
    inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0 <---- private interconnect
    inet6 addr: fe80::92e2:baff:fe0f:f98f/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:26461881 errors:4 dropped:0 overruns:0 frame:2
    TX packets:33628826 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:4053295644 (3.7 GiB) TX bytes:695537051 (663.3 MiB)
    on racnode2
    eth0 Link encap:Ethernet HWaddr 00:1A:A0:96:A4:5B
    inet addr:172.20.0.174 Bcast:172.20.0.255 Mask:255.255.255.0 <--- public IP
    inet6 addr: fe80::21a:a0ff:fe96:a45b/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:3233473 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1766459 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:41109717 (39.2 MiB) TX bytes:179509273 (171.1 MiB)
    Interrupt:20 Memory:fdfc0000-fdfe0000
    eth0:1 Link encap:Ethernet HWaddr 00:1A:A0:96:A4:5B
    inet addr:172.20.0.211 Bcast:172.20.0.255 Mask:255.255.255.0 <--- VIP
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    Interrupt:20 Memory:fdfc0000-fdfe0000
    eth0:2 Link encap:Ethernet HWaddr 00:1A:A0:96:A4:5B
    inet addr:172.20.0.210 Bcast:172.20.0.255 Mask:255.255.255.0 <--- another SCAN IP
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    Interrupt:20 Memory:fdfc0000-fdfe0000
    eth0:3 Link encap:Ethernet HWaddr 00:1A:A0:96:A4:5B
    inet addr:172.20.0.213 Bcast:172.20.0.255 Mask:255.255.255.0 <--- another SCAN IP
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    Interrupt:20 Memory:fdfc0000-fdfe0000
    $ ./lsnrctl status LISTENER_SCAN1
    LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 25-MAY-2012 15:12:35
    Copyright (c) 1991, 2009, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
    STATUS of the LISTENER
    Alias LISTENER_SCAN1
    Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
    Start Date 25-MAY-2012 14:28:11
    Uptime 0 days 0 hr. 44 min. 23 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File /home/oracle/app/11.2.0/grid/network/admin/listener.ora
    Listener Log File /home/oracle/app/oracle/diag/tnslsnr/racnode1/listener_scan1/alert/log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.20.0.214)(PORT=1521)))
    Services Summary...
    Service "racdb.development.info" has 2 instance(s).
    Instance "racdb1", status READY, has 1 handler(s) for this service...
    Instance "racdb2", status READY, has 1 handler(s) for this service...
    Service "racdbXDB.development.info" has 2 instance(s).
    Instance "racdb1", status READY, has 1 handler(s) for this service...
    Instance "racdb2", status READY, has 1 handler(s) for this service...
    The command completed successfully
    Any ideas?

    How does SCAN work?
    "When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered (using SCAN, the connection is initiated via the SCAN IP but established via the VIP; each SCAN listener keeps updated cluster load statistics). Finally, the client establishes the connection to the service through the listener, using the VIP on the node where the service is offered. All of these actions take place transparently to the client, without any explicit configuration required on the client."
    So, for SCAN to work you must be aware of the following:
    Server (cluster)
    - The service must be registered with the local and SCAN listeners.
    (I believe I've done this now, but see below.)
    Database (RAC)
    - You must use the remote_listener parameter.
    (The remote listener parameter I have is:
    SQL> show parameter remote listener
    NAME TYPE VALUE
    remote_dependencies_mode string TIMESTAMP
    remote_listener string rac-scan:1521
    remote_login_passwordfile string EXCLUSIVE
    remote_os_authent boolean FALSE
    remote_os_roles boolean FALSE
    result_cache_remote_expiration integer 0)
    Client
    - Must resolve all SCAN names and VIP names (check with nslookup).
    (I'd made a mistake there: my VIP names were not available from DNS.)
    - Must be able to reach the listener port.
    Try it:
    http://levipereira.wordpress.com/2011/05/03/configuring-client-to-use-scan-11-2-0/
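A quick client-side follow-up to the checklist above: confirm from the client that the SCAN name resolves to all of its addresses. A minimal sketch that counts addresses in captured nslookup output (the 172.20.0.x addresses are the ones from this thread):

```shell
# Captured nslookup output for the SCAN name (addresses copied from the post).
cat > /tmp/scan_lookup.txt <<'EOF'
Name: rac-scan.xxx.local
Address: 172.20.0.213
Name: rac-scan.xxx.local
Address: 172.20.0.214
Name: rac-scan.xxx.local
Address: 172.20.0.210
EOF
# A healthy 11gR2 SCAN is normally backed by three addresses; count them.
awk '/^Address:/ {n++} END {print n " SCAN address(es) resolved"}' /tmp/scan_lookup.txt
```

On a live client this would be `nslookup rac-scan.xxx.local` piped into the same awk; fewer than the expected number of addresses (or none) points back at DNS.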
    Thanks, that document was useful; however, I don't think I've got it completely right yet, as I have no listener_scan2.
    $ olsnodes -i -s -n
    racnode1     1     racnode1-vip     Active
    racnode2     2     racnode2-vip     Active
    srvctl config vip -n racnode1
    VIP exists.:racnode1
    VIP exists.: /racnode1-vip/172.20.0.212/255.255.255.0/eth0
    srvctl config vip -n racnode2
    VIP exists.:racnode2
    VIP exists.: /racnode2-vip/172.20.0.211/255.255.255.0/eth0
    srvctl config scan
    SCAN name: rac-scan.xxx.local, Network: 1/172.20.0.0/255.255.255.0/eth0
    SCAN VIP name: scan1, IP: /172.20.0.214/172.20.0.214
    SCAN VIP name: scan2, IP: /rac-scan.xxx.local/172.20.0.210
    SCAN VIP name: scan3, IP: /172.20.0.213/172.20.0.213
    SQL> select INST_ID, NAME, VALUE
    2 from gv$parameter
    3 where name like '%_listener%';
    INST_ID NAME VALUE
    1 local_listener (address=(protocol=tcp)(port=1521)(host=racnode1-vip.xxx.local))
    1 remote_listener rac-scan:1521
    2 local_listener (address=(protocol=tcp)(port=1521)(host=racnode1-vip.xxx.local))
    2 remote_listener rac-scan:1521
    racnode1
    $ lsnrctl service listener_scan1
    LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 28-MAY-2012 13:36:10
    Copyright (c) 1991, 2009, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
    Services Summary...
    Service "racdb.development.info" has 2 instance(s).
    Instance "racdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    REMOTE SERVER
    (address=(protocol=tcp)(port=1521)(host=racnode1-vip.xxx.local))
    Instance "racdb2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    REMOTE SERVER
    (address=(protocol=tcp)(port=1521)(host=racnode2-vip.xxx.local))
    Service "racdbXDB.development.info" has 2 instance(s).
    Instance "racdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: racnode1.xxx.local, pid: 3651>
    (ADDRESS=(PROTOCOL=tcp)(HOST=racnode1.xxx.local)(PORT=62553))
    Instance "racdb2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: racnode2.xxx.local, pid: 6501>
    (ADDRESS=(PROTOCOL=tcp)(HOST=racnode2.xxx.local)(PORT=10619))
    The command completed successfully
    $ lsnrctl service
    LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 28-MAY-2012 13:38:02
    Copyright (c) 1991, 2009, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+ASM1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    Service "racdb.development.info" has 1 instance(s).
    Instance "racdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    Service "racdbXDB.development.info" has 1 instance(s).
    Instance "racdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: racnode1.xxx.local, pid: 3651>
    (ADDRESS=(PROTOCOL=tcp)(HOST=racnode1.xxx.local)(PORT=62553))
    The command completed successfully
    On racnode2:
    $ lsnrctl service listener_scan2
    None of listener_scan1, 2 or 3 returns anything other than "TNS-01101: Could not find service name listener_scanN".
    $ lsnrctl service
    LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 28-MAY-2012 13:19:50
    Copyright (c) 1991, 2009, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+ASM2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    Service "racdb.development.info" has 1 instance(s).
    Instance "racdb2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:14 refused:0 state:ready
    LOCAL SERVER
    Service "racdbXDB.development.info" has 1 instance(s).
    Instance "racdb2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: racnode2.xxx.local, pid: 6501>
    (ADDRESS=(PROTOCOL=tcp)(HOST=racnode2.xxx.local)(PORT=10619))
    The command completed successfully
    UPDATE:
    $ srvctl config scan_listener
    SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
    There is a period of time after I shut down one node during which I cannot connect; I get "ORA-12514: TNS:listener does not currently know of service requested in connect descriptor".
    Edited by: MartinJEvans on May 28, 2012 2:02 PM

  • Implementing 11gR2 RAC with Data Guard

    Hi ,
    Could anyone provide the steps for setting up an 11gR2 two-node RAC with Data Guard? Can 11gR2 active database duplication be used to set up the standby?
    I just need the order of steps to be followed to set up the environment.
    Thanks,
    shashi.

    Hi Fiedi,
    Thanks for the reply.
    I know how to build Oracle Data Guard, but I'm looking for the order of steps I need to follow to build 11gR2 RAC with Data Guard:
    1] Set up the Grid Infrastructure for the 2-node RAC.
    2] Create the database.
    3] Modify the init.ora parameters to make the database created above the primary.
    4] Set up the Grid Infrastructure for the 2-node RAC on the DR site.
    5] Create the standby database using 11gR2 active database duplication.
    Is the above order correct? If not, let me know the correct order of steps to follow to set up 11gR2 RAC with Data Guard.
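    Step 5 in the list above can be sketched with RMAN active duplication. The connect strings and TNS aliases below are placeholders, not values from this thread:

```shell
# Hypothetical sketch of 11gR2 active database duplication for a standby.
# 'racdb' (primary) and 'racdg' (standby) are assumed TNS aliases; the
# standby instance must already be started NOMOUNT with a suitable pfile.
rman <<'EOF'
CONNECT TARGET sys/password@racdb
CONNECT AUXILIARY sys/password@racdg
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  NOFILENAMECHECK;
EOF
```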

  • MULTIPLE USERS 10G RAC ORACLE_HOME INSTALL WITH ASM/CRS

    Hi,
    We need to install multiple 10g RAC databases on a two-node Sun server cluster. Below is our configuration:
    1) Sun Solaris (ver 10) with Sun Cluster 3.2
    2) One ASM/CRS install (by 1 OS account)
    3) Four ORACLE_HOME 10g database install (by 4 different OS user accounts)
    We would like to use one ASM instance for all four databases with appropriate privileges.
    OS User    OS Group
    =======    ========
    oraasm     dbaasm  - ASM and CRS install owner
    ora1       dbaora1 - first DB owner
    ora2       dbaora2 - second DB owner
    ora3       dbaora3 - third DB owner
    ora4       dbaora4 - fourth DB owner
    I understand that certain privileges need to be shared between ASM/CRS and DB owners. Please let me know the steps to be followed to complete this install.
    Thanks in advance.
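    For reference, the accounts above could be created along these lines; adding each DB owner to the ASM group is an assumption about how the shared privileges will be granted, so adjust the secondary groups to your actual OSDBA/OSASM scheme:

```shell
# Sketch (run as root) of the OS users and groups listed in the post.
groupadd dbaasm
useradd -g dbaasm oraasm          # ASM and CRS install owner
for i in 1 2 3 4; do
  groupadd "dbaora$i"
  useradd -g "dbaora$i" "ora$i"   # per-database install owner
  usermod -aG dbaasm "ora$i"      # assumption: DB owners share the ASM group
done
```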

    Hi
    Please read the documentation: http://download.oracle.com/docs/html/B10766_08/intro.htm
    - You can install and operate multiple Oracle homes and different versions of Oracle cluster database software on the same computer as described in the following points:
    - You can install multiple Oracle Database 10g RAC homes on the same node. The multiple homes feature enables you to install one or more releases on the same machine in multiple Oracle home directories. However, each node can have only one CRS home.
    - In addition, you cannot install Oracle Database 10g RAC into an existing single-instance Oracle home. If you have an Oracle home for Oracle Database 10g, then use a different Oracle home, and one that is available across the entire cluster for your new installation. Similarly, if you have an Oracle home for an earlier Oracle cluster database software release, then you must also use a different home for the new installation.
    If the OUI detects an earlier version of a database, then the OUI asks you about your upgrade preferences. You have the option to upgrade one of the previous-version databases with DBUA or to create a new database using DBCA. The information collected during this dialog is passed to DBUA or DBCA after the software is installed.
    - You can use the OUI to complete some of the de-install and re-install steps for Oracle Database 10g Real Application Clusters if needed.
    Note:
    Do not move Oracle binaries from one Oracle home to another because this causes dynamic link failures.
    - If you are using ASM with Oracle database instances from multiple database homes on the same node, then Oracle recommends that you run the ASM instance from an Oracle home that is distinct from the database homes. In addition, the ASM home should be installed on every cluster node. This prevents the accidental removal of ASM instances that are in use by databases from other homes during the de-installation of a database's Oracle home.

  • Error while running runcluvfy.sh(11g RAC on CentOS 5(RHEL 5))

    Oracle Version: 11G
    Operating System: Centos 5 (RHEL 5) : Linux centos51-rac-1 2.6.18-128.1.6.el5 #1 SMP Wed Apr 1 09:19:18 EDT 2009 i686 i686 i386 GNU/Linux
    Question (including full error messages and setup scripts where applicable):
    I am attempting to install oracle 11g in a RAC configuration with Centos 5 (redhat 5) as the operating system. I get the following error
    ERROR : Cannot Identify the operating system. Ensure that the correct software is being executed for this operating system
    Verification cannot complete
    I get this error message when I run runcluvfy.sh to verify that my configuration is clusterable, and I don't know why.
    I edited /etc/redhat-release and put "Red Hat Enterprise Linux AS release 4 (Nahant Update 7)" in it to try to fool the installer into thinking it is Red Hat 4.
    But it still shows the same message.
    Does anyone know how to fix this?
    Please help me.

    http://www.idevelopment.info/data/Oracle/DBA_tips/Linux/LINUX_20.shtml
    runcluvfy.sh will not work on CentOS because the Cluster Verification Utility checks the operating system version via the redhat-release package, and CentOS ships its own release package instead, so you must build and install the redhat-release package.
    Get rpm-build to be able to build RPMs:
    [root@centos5 ~]# yum install rpm-build
    Get the source RPM of redhat-release:
    [root@centos5 ~]# wget ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/redhat-release-5Server-5.1.0.2.src.rpm
    Build the package:
    [root@centos5 ~]# rpmbuild --rebuild redhat-release-5Server-5.1.0.2.src.rpm
    Install the newly generated RPM:
    [root@centos5 ~]# rpm -Uvh --force /usr/src/redhat/RPMS/i386/redhat-release-5Server-5.1.0.2.i386.rpm

  • Error in Creation of Dataguard for RAC

    My pfile of RAC looks like:
    RACDB2.__large_pool_size=4194304
    RACDB1.__large_pool_size=4194304
    RACDB2.__shared_pool_size=92274688
    RACDB1.__shared_pool_size=92274688
    RACDB2.__streams_pool_size=0
    RACDB1.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/RACDB/adump'
    *.background_dump_dest='/u01/app/oracle/admin/RACDB/bdump'
    *.cluster_database_instances=2
    *.cluster_database=true
    *.compatible='10.2.0.1.0'
    *.control_files='+DATA/racdb/controlfile/current.260.627905745','+FLASH/racdb/controlfile/current.256.627905753'
    *.core_dump_dest='/u01/app/oracle/admin/RACDB/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='+DATA'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
    *.db_name='RACDB'
    *.db_recovery_file_dest='+FLASH'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDBXDB)'
    *.fal_client='RACDB'
    *.fal_server='RACDG'
    RACDB1.instance_number=1
    RACDB2.instance_number=2
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(RACDB,RACDG)'
    *.log_archive_dest_1='LOCATION=+FLASH/RACDB/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDB'
    *.log_archive_dest_2='SERVICE=RACDG VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDG'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='DEFER'
    *.log_archive_format='%t_%s_%r.arc'
    *.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_listener='LISTENERS_RACDB'
    *.remote_login_passwordfile='exclusive'
    *.service_names='RACDB'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    RACDB2.thread=2
    RACDB1.thread=1
    *.undo_management='AUTO'
    RACDB2.undo_tablespace='UNDOTBS2'
    RACDB1.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/u01/app/oracle/admin/RACDB/udump'
    My pfile of Dataguard Instance in nomount state looks like:
    RACDG.__db_cache_size=58720256
    RACDG.__java_pool_size=4194304
    RACDG.__large_pool_size=4194304
    RACDG.__shared_pool_size=96468992
    RACDG.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/RACDG/adump'
    *.background_dump_dest='/u01/app/oracle/admin/RACDG/bdump'
    ##*.cluster_database_instances=2
    ##*.cluster_database=true
    *.compatible='10.2.0.1.0'
    ##*.control_files='+DATA/RACDG/controlfile/current.260.627905745','+FLASH/RACDG/controlfile/current.256.627905753'
    *.core_dump_dest='/u01/app/oracle/admin/RACDG/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='+DATADG'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
    *.db_name='RACDB'
    *.db_recovery_file_dest='+FLASHDG'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDGXDB)'
    *.FAL_CLIENT='RACDG'
    *.FAL_SERVER='RACDB'
    *.job_queue_processes=10
    *.LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACDB,RACDG)'
    *.log_archive_dest_1='LOCATION=+FLASHDG/RACDG/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDG'
    *.log_archive_dest_2='SERVICE=RACDB VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDB'
    *.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
    *.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
    *.log_archive_format='%t_%s_%r.arc'
    *.log_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    ##*.remote_listener='LISTENERS_RACDG'
    *.remote_login_passwordfile='exclusive'
    SERVICE_NAMES='RACDG'
    sga_target=167772160
    standby_file_management='auto'
    undo_management='AUTO'
    undo_tablespace='UNDOTBS1'
    user_dump_dest='/u01/app/oracle/admin/RACDG/udump'
    DB_UNIQUE_NAME=RACDG
    and here is what I am doing on the standby location:
    [oracle@dg01 ~]$ echo $ORACLE_SID
    RACDG
    [oracle@dg01 ~]$ rman
    Recovery Manager: Release 10.2.0.1.0 - Production on Tue Jul 17 21:19:21 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    RMAN> connect auxiliary /
    connected to auxiliary database: RACDG (not mounted)
    RMAN> connect target sys/xxxxxxx@RACDB
    connected to target database: RACDB (DBID=625522512)
    RMAN> duplicate target database for standby;
    Starting Duplicate Db at 2007-07-17 22:27:08
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=156 devtype=DISK
    contents of Memory Script:
    restore clone standby controlfile;
    sql clone 'alter database mount standby database';
    executing Memory Script
    Starting restore at 2007-07-17 22:27:10
    using channel ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: starting datafile backupset restore
    channel ORA_AUX_DISK_1: restoring control file
    channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl4.ctl
    channel ORA_AUX_DISK_1: restored backup piece 1
    piece handle=/software/backup/ctl4.ctl tag=TAG20070717T201921
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:23
    output filename=+DATADG/racdg/controlfile/current.275.628208075
    output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
    Finished restore at 2007-07-17 22:27:34
    sql statement: alter database mount standby database
    released channel: ORA_AUX_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 07/17/2007 22:27:43
    RMAN-05501: aborting duplication of target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database
    RMAN>
    Any help to clear this error will be appreciated.
    Message was edited by:
    Bal

    Hi
    Thanks everybody for helping me with this issue.
    As suggested, I took the parameters log_file_name_convert and db_file_name_convert out of my RAC primary database, but I am still getting the same error.
    Any help will be appreciated.
    SQL> show parameter convert
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_file_name_convert                 string
    log_file_name_convert                string
    SQL>
    oracle@dg01<3>:/u01/app/oracle> rman
    Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jul 18 17:07:49 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    RMAN> connect auxiliary /
    connected to auxiliary database: RACDB (not mounted)
    RMAN> connect target sys/xxx@RACDB
    connected to target database: RACDB (DBID=625522512)
    RMAN> duplicate target database for standby;
    Starting Duplicate Db at 2007-07-18 17:10:53
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=156 devtype=DISK
    contents of Memory Script:
    restore clone standby controlfile;
    sql clone 'alter database mount standby database';
    executing Memory Script
    Starting restore at 2007-07-18 17:10:54
    using channel ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: starting datafile backupset restore
    channel ORA_AUX_DISK_1: restoring control file
    channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl5.ctr
    channel ORA_AUX_DISK_1: restored backup piece 1
    piece handle=/software/backup/ctl5.ctr tag=TAG20070718T170529
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:33
    output filename=+DATADG/racdg/controlfile/current.275.628208075
    output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
    Finished restore at 2007-07-18 17:11:31
    sql statement: alter database mount standby database
    released channel: ORA_AUX_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 07/18/2007 17:11:43
    RMAN-05501: aborting duplication of target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
    RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
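    One thing worth checking: the convert parameters map the target (primary) pattern, given first, to the auxiliary (standby) pattern, given second, while the standby pfile above lists the pair the other way around. A corrected standby-side fragment might look like this (an inference from the paths shown, not a confirmed fix from the thread):

```
*.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
```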

  • How to install 11gR2 RAC on 64 bit linux OS

    I am completely new to RAC and need to install and stand up RAC on 64-bit Linux. I have good knowledge of installing Oracle Database 11gR2 Enterprise Edition.
    Can you guide me on how to start? I am looking for leads. We will probably have 2 nodes.
    Thank you very much for helping me in advance

    If you are a My Oracle Support (Metalink) user, go check out these two notes created by the Oracle RAC Assurance Team. They are excellent.
    NOTE: 810394.1 RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
    NOTE: 811306.1 RAC Assurance Support Team: RAC Starter Kit (Linux)
    In the Linux note mentioned above there is a link to a Linux step-by-step instruction guide. It is the best start-to-finish document I've seen for how to set up and install Oracle RAC. I believe the guide is written for installing release 11.2.0.2.

  • In Oracle RAC, if a node is evicted while a SELECT query is fetching rows, how does the session fail over to another node internally?

    In Oracle RAC, if a user runs a SELECT query and the node serving it is evicted while rows are being fetched, how does the session fail over to another node internally?

    The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrators Guide, the section on Transparent Application Failover.
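    For context, this SELECT failover behavior is enabled through Transparent Application Failover in the connect descriptor. A hypothetical tnsnames.ora entry (host and service names are placeholders):

```
RACDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 3))
    )
  )
```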
