RAC Cache Fusion

Hello,
RAC databases have multiple instances. Here we assume there are two nodes (Node A and Node B).
Suppose node A reads a data block (abc) from disk and updates it to "xyz" but does not commit. At the same time, node B wants to read the same data block. This is where Cache Fusion comes in: it transfers the data "xyz" from node A's instance to node B's instance over the interconnect.
My doubt is: since the changed data has not been committed on node A, how can node B view the data "xyz"? Only the old image should be transferred to node B.
Is my understanding correct? Can anyone explain this?
Thanks
Gokul

hi,
hope this helps
http://advait.wordpress.com/2008/07/21/oracle-rac-10g-cache-fusion/
regards,
Deepak

Similar Messages

  • Oracle RAC Cache Fusion, insert-intensive tables.

    Hi,
    I have a general question about RAC Cache Fusion. It was observed that there is too much inter-instance contention when insert-intensive queries are executed. What is the best possible solution to this? I mean, how can the contention be avoided? Is there a general approach?
    Thanking you
    Rocky

    As others have pointed out, the first step would be to determine what object or objects were actually causing contention.
    If we're talking about a hypothetical OLTP application with 400 concurrent users on 2 nodes of a RAC cluster, inserting into a heap-organized table with no bitmap indexes, I would wager that the most likely cause of contention would be contention for index blocks on monotonically increasing columns (i.e. sequence-generated primary keys). Since indexes are sorted structures, the next row will almost always go into the right-most block of the index. If both nodes are fighting over who has that right-most block at any instant, that can cause lots of contention as the block gets bounced back and forth between the nodes. If you don't need to do range scans on the index, the solution would be to change the primary key index to a reverse key index, so that inserts are spread across a larger number of index blocks at any instant. Of course, this is just one possible scenario; there are dozens if not hundreds or thousands of possible permutations. It comes down to figuring out what resource the various sessions are contending on and then figuring out how to reduce that contention without screwing up anything else too badly.
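    The reverse-key suggestion above can be illustrated with a toy model (plain Python, hypothetical block capacity of 100 keys; this simulates index-order ranking, not Oracle internals):

```python
# Toy model (not Oracle internals): with monotonically increasing keys, the
# most recent inserts all rank at the top of index order and land in one
# leaf block; reversing the key bytes scatters them across many blocks.

BLOCK_CAPACITY = 100  # hypothetical number of keys per leaf block

def blocks_touched_by_recent_inserts(keys, transform, recent=100):
    """Map each key to a leaf block via its rank in index sort order, then
    count the distinct blocks hit by the most recently inserted keys."""
    ordered = sorted(keys, key=transform)
    rank = {k: i for i, k in enumerate(ordered)}
    return {rank[k] // BLOCK_CAPACITY for k in keys[-recent:]}

keys = list(range(1, 1001))  # sequence-generated primary keys

plain = blocks_touched_by_recent_inserts(
    keys, transform=lambda k: k.to_bytes(8, "big"))
reverse = blocks_touched_by_recent_inserts(
    keys, transform=lambda k: k.to_bytes(8, "big")[::-1])

print(len(plain))    # 1: every recent insert fights over the same block
print(len(reverse))  # several blocks: the contention is spread out
```

    Reversing the key bytes makes consecutive keys sort far apart, so concurrent inserts no longer converge on a single right-most leaf block (at the cost of losing index range scans).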
    Justin
    Edited by: Justin Cave on Jun 19, 2009 12:50 AM

  • Oracle 9i RAC cache fusion queries

    Hi RAC experts,
    I would like to understand more about the background of cache fusion.
    Kindly advise on the following cache fusion operations, as I'm having some confusion.
    1. read/read - user on node 1 wants to read a block that a user on node 2 has recently read.
    In this case, the block is already in node 2's cache since it was recently read. So does node 1 read from disk to store the block in its own cache, since the info is not in its cache? Or can it request the block directly from the remote cache on node 2, shipping it to its own cache for the select?
    2. read/write - user on node 1 wants to read a block that a user on node 2 has recently updated.
    In this case, node 1 will get the recently updated info from the remote cache (node 2) into its local cache, right?
    3. write/read - user on node 1 wants to update a block that a user on node 2 has recently read.
    Does node 1 read from disk since the info is not in its cache? Or can it request the block directly from the remote cache on node 2, shipping it to its own cache for the update?
    thanks
    junior dba

    1. read/read - user on node 1 wants to read a block that a user on node 2 has recently read.
    Since the block is in another instance's buffer cache, the requesting instance (1 in your case) will request the block to be shipped from node 2. It will avoid going to disk.
    2. read/write - user on node 1 wants to read a block that a user on node 2 has recently updated.
    That's correct: node 1 will get the recently updated block shipped from the remote cache (node 2) into its local cache.
    3. write/read - user on node 1 wants to update a block that a user on node 2 has recently read.
    Same as in 1) above: it gets the block from the other instance.
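    The shipping decision in the three scenarios above can be sketched as a toy two-node rule (hypothetical model, not Oracle's actual Global Cache Service logic):

```python
# Minimal sketch of the three scenarios above in a simplified two-node
# model: a cached copy anywhere beats a disk read (hypothetical helper,
# not Oracle's actual Global Cache Service logic).

def block_source(block, local_cache, remote_cache):
    """Where does the requesting instance get the block from?"""
    if block in local_cache:
        return "local cache"
    if block in remote_cache:
        return "interconnect (shipped from remote cache)"
    return "disk"

# Scenarios 1 (read/read) and 3 (write/read): node 2 caches the block,
# so it is shipped over the interconnect instead of being read from disk.
print(block_source("blk", local_cache=set(), remote_cache={"blk"}))
# Only when no instance caches the block does a physical read happen.
print(block_source("blk", local_cache=set(), remote_cache=set()))
```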
    These concepts have been explained well in RAC Handbook by K Gopalakrishnan. Here is the additional information on the book. http://www.amazon.com/Database-Application-Clusters-Handbook-Osborne/dp/007146509X/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1211888392&sr=8-1
    HTH
    Thanks
    -Chandra

  • Disable RAC Cache Fusion in 10gR2

    Hi all,
    is it possible to disable Oracle Cache Fusion in an Oracle Database 10.2.0.4 database?
    Regards,

    I think it is nonsense to think about disabling cache fusion, and it is almost impossible. On the other hand, how could RAC work without this feature? How can you imagine your RAC system working without global synchronization between the nodes?
    Regarding the singleton concept mentioned by the others: the idea is to concentrate the database service that your application uses on one node, so that your application is mostly served by that node, which keeps the round trips between instances low. You can think of this state as working with a single-instance database, since the cache synchronization overhead is almost eliminated. RAC itself also remasters resources dynamically based on workload, but remember that if your database service is served by several instances, your connections can be directed to any of them in an unpredictable fashion (based on the load balancing goal, to be exact).
    Besides all this, there are several wait event classes specific to RAC and corresponding views for them. You can explore them to find out which wait classes are the bottleneck.

  • [RAC] Understanding Cache Fusion in 9i Real Application Clusters

    Product: ORACLE SERVER
    Date written: 2004-08-13
    [RAC] Understanding Cache Fusion in 9i Real Application Clusters
    ==========================================================
    PURPOSE
    The purpose of this document is to explain the functionality and benefits of Cache Fusion in an Oracle Real Application Clusters environment.
    SCOPE
    The Real Application Clusters (RAC) option is not supported in 9i Standard Edition.
    Explanation
    The concept of Cache Fusion was introduced in Oracle 8i OPS. The feature was added to reduce the 'pinging' of blocks through disk when building a read-consistent view of a block mastered by a remote instance, and it dramatically reduced the time needed to select data locked by another instance.
    Instead of forcing the locking instance to write out its changes (forcing I/O) so that the remote instance could read them, Cache Fusion creates a copy of the buffer and ships it over the high-speed interconnect to the node querying the data. This relieved the performance problems caused by read/write contention, but when the block contents had to be changed, the same ping mechanism as before was still used: the remote instance had to write the block before it could be read in and used.
    In 9i Real Application Clusters, Cache Fusion was designed to also resolve write/write contention, improving both performance and scalability. With RAC Cache Fusion, Oracle conceptually no longer needs the disk 'ping' for blocks locked by another instance, which further reduces the amount of I/O. Instead, a RAC instance can grant a remote instance write access to a copy of a dirty buffer. This applies only to releasable locking with 1:1 DBA locks (available by setting gc_files_to_locks to 0 for a particular data file, or by not setting gc_files_to_locks at all). Hashed locks use the ping mechanism exactly as before, and fixed locks are no longer available in 9i RAC (because they offer no benefit).
    As mentioned above, RAC Cache Fusion lets an instance use a copy of a dirty buffer. A new concept introduced with RAC Cache Fusion is the 'past image': an earlier copy of a buffer that has not yet been written to disk. To track past images, Oracle uses global and local lock roles and BWRs (block written redo). To clarify the distinction between global and local lock roles, think of Oracle 8i locks as local locks. In 8i, three lock modes exist:
    N - Null
    S - Shared
    X - Exclusive
    When naming lock modes in Oracle 9i RAC, three characters are used. The first character is the lock mode, the second is the lock role, and the third (a digit) indicates whether the local instance holds a past image. With this scheme, the lock types are:
    NL0 - Null Local 0 - same as N in 8i (no past image)
    SL0 - Shared Local 0 - same as S in 8i (no past image)
    XL0 - Exclusive Local 0 - same as X in 8i (no past image)
    SG0 - Shared Global 0 - global S lock, instance holds the current block image
    XG0 - Exclusive Global 0 - global X lock, instance holds the current block image
    NG1 - Null Global 1 - global N lock, instance holds a past image
    SG1 - Shared Global 1 - global S lock, instance holds a past image
    XG1 - Exclusive Global 1 - global X lock, instance holds a past image
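    A small, hypothetical helper makes the three-character naming scheme concrete:

```python
# Hypothetical decoder for the three-character 9i RAC lock names described
# above: <mode><role><past-image digit>, e.g. "XG1".

MODES = {"N": "Null", "S": "Shared", "X": "Exclusive"}
ROLES = {"L": "Local", "G": "Global"}

def describe_lock(name):
    mode, role, past_image = name[0], name[1], name[2]
    suffix = "past image held" if past_image == "1" else "no past image"
    return f"{MODES[mode]} lock, {ROLES[role]} role, {suffix}"

print(describe_lock("SL0"))  # Shared lock, Local role, no past image
print(describe_lock("XG1"))  # Exclusive lock, Global role, past image held
```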
    When a lock is first acquired, it is acquired with the local role. If a dirty buffer exists on a remote instance at the time the lock is acquired, the lock is acquired with the global role, and in that case every remote instance treats its dirty buffer as a 'past image' of the buffer. For recovery purposes, an instance holding a past image keeps it in its buffer cache until the master instance for the lock signals that the lock has been released. When the block is discarded from the buffer, the instance maintaining the past image writes a BWR, or 'block written redo', to its redo stream, indicating that the block has already been written to disk and is not needed to recover this instance.
    Assume that node 3 of a RAC cluster holds lock element 123, which covers a block of the EMP table:
    With user C connected to instance 3 and selecting from the EMP table, an SL0 lock is opened:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held: | | Lock Held: |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | | | SL0 |
    | | | | | |
    Acquiring a shared lock has no effect on the lock role. So if user B, connected to instance 2, selects from the same EMP table, the lock mode on lock element 123 in instance 2 is the same (it is an S lock), and the lock role is also the same, because nothing in the buffer contents has been dirtied:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | SL0 | | SL0 |
    | | | | | |
    Acquiring the first exclusive lock does not affect the lock role either, because in this case there is still no dirty buffer for the lock element. So if user B updates a row of the EMP table belonging to lock element 123, an XL0 lock is acquired and the previously held SL0 locks are removed. This is the same behavior as 8i OPS.
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | XL0 | | |
    | | | | | |
    Acquiring an exclusive lock on a lock element for which a dirty buffer exists on a remote node takes Cache Fusion into its second phase. Here, when user A, connected to instance 1, updates a row of the EMP table belonging to lock element 123, the block is dirty in instance 2's buffer cache, so instance 2 creates a copy of the block and ships it to instance 1, after which instance 2 holds a null global lock (past image)*. At the same time, instance 1 acquires an exclusive, globally dirtied lock, and instance 2 retains the past image.
    * An instance keeps its own past image of a block because the block has not been written to disk, and the image must not be discarded until the master node says so.
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | XG0 | | NG1 | | |
    | | | | | |
    Now suppose user C, connected to instance 3, wants to select from the EMP table, and the EMP table belongs to lock element 123. When user C runs the select, instance 1's lock is downgraded to an S lock while it keeps the most recent copy of the buffer, and instance 2 retains the past image of the earlier buffer:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | SG1 | | NG1 | | SG0 |
    | | | | | |
    Now suppose user B on instance 2 selects from EMP lock element 123. Instance 2 requests a read-consistent copy of the buffer from the other instances. The instance that ships the buffer contents to instance 2 is chosen in the following order:
    1. The master instance for the lock.
    2. The instance holding the most recent past image under an S lock.
    3. An instance holding the lock in shared local mode.
    4. The instance most recently granted an S lock.
    Assume instance 1 is the master instance for the lock (and holds the most recent past image). In that case instance 2 receives a copy of the block stored in instance 1's buffer cache, and instance 2 acquires an SG1 lock (a read-consistent copy that is a past image). The remaining nodes stay in the same state:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | SG1 | | SG1 | | SG0 |
    | | | | | |
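    The four-rule selection order above can be sketched as a priority search (the instance dicts and flag names are made up for illustration; Oracle's real GCS bookkeeping is internal):

```python
# Sketch of the four-rule shipping order as a priority search.
# Flag names are illustrative only, not Oracle data structures.

RULES = [
    "is_master",            # 1. the master instance for the lock
    "latest_past_image_s",  # 2. newest past image held under an S lock
    "shared_local",         # 3. a shared-local lock holder
    "latest_s_grant",       # 4. the most recently granted S lock
]

def choose_shipper(instances):
    """Return the name of the instance that ships the buffer, or None."""
    for flag in RULES:
        for inst in instances:
            if inst.get(flag):
                return inst["name"]
    return None

nodes = [{"name": "inst2", "shared_local": True},
         {"name": "inst1", "is_master": True}]
print(choose_shipper(nodes))  # inst1: rule 1 (master) beats rule 3
```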
    Now suppose user C, connected to instance 3, wants to update a row of lock element 123 of the table. User C requests an exclusive lock, and the locks held by instances 1 and 2 are downgraded. As a result, instance 3 holds an XG0 lock and instances 1 and 2 each hold NG1:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | NG1 | | NG1 | | XG0 |
    | | | | | |
    When a checkpoint occurs on instance 3, all dirty buffers are written to disk. Instance 3 then notifies the other nodes that the master node has written the block. Instances 1 and 2 can now discard their past images, and only instance 3 holds a lock, XL0. Note that instances 1 and 2 write BWRs (block written redos) to their respective redo streams, recording that the block has already been written to disk and is not needed for instance recovery.
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | | | XL0 |
    | | | | | |
    Example
    Reference Documents
    Note 139436.1 - Understanding 9i Real Application Clusters Cache Fusion
    Note 144152.1 - Understanding 9i Real Application Clusters Cache Fusion Recovery
    Note 139435.1 - Fast Reconfiguration in 9i Real Application Clusters

    Hi, how are you?
    Well, Oracle8i greatly improved scalability for read/write applications through the introduction of Cache Fusion. Oracle9i improved Cache Fusion for write/write applications by further minimizing much of the disk write activity used to control data locking.
    That's it.
    If you still have doubts, please call me.
    Regina Vidal

  • Enable Cache Fusion in Oracle RAC

    Hi gurus,
    I cannot find on Google how to enable and test the Cache Fusion feature in Oracle RAC. Could you help me, please?
    Best.

    "I don't know. I cannot find a parameter which enables or disables CF."
    As Aman has already stated, the feature is already present:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/consist.htm#CNCPT1317
    "I even cannot find information on how to test CF to ensure that it really works?!"
    http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254/monitor.htm#RACAD981
    @Aman
    Cache Fusion is the technology which makes the 10g RAC, 10g RAC. And 11g Database as well :)
    Edited by: Amy De Caj on Jul 19, 2009 4:26 AM
    Edited by: Amy De Caj on Jul 19, 2009 4:28 AM

  • RAC - buffer cache hit ratio

    Hi, I'm a newbie to the forums, so please bear with me.
    I am trying to determine whether the usual buffer cache hit ratio figures are irrelevant in a RAC environment, because of Cache Fusion and blocks being moved between the SGAs of the instances.

    Buffer cache hit ratios are always relevant (but not necessarily an indicator of performance).
    Whether a block comes in via cache fusion or a direct read from a datafile, you are still moving a data block into the cache or reading it from the cache.
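    For reference, the usual arithmetic behind the ratio (one common formulation; the inputs are the familiar statistics from V$SYSSTAT), with made-up numbers:

```python
# One common formulation of the buffer cache hit ratio, computed from the
# familiar V$SYSSTAT counters, with made-up numbers for illustration.

def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    logical_reads = db_block_gets + consistent_gets
    return 1 - physical_reads / logical_reads

# 1,000 physical reads against 50,000 logical reads
ratio = buffer_cache_hit_ratio(physical_reads=1_000,
                               db_block_gets=20_000,
                               consistent_gets=30_000)
print(f"{ratio:.2%}")  # 98.00%
```

    In RAC, a block shipped over the interconnect still counts as a logical read on the receiving instance, which is why the ratio remains meaningful there.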

  • Oracle 10g RAC on Windows

    I installed Oracle 10g RAC on Windows and everything was working fine. Now, when I install CRS, it completes at 100% and then the configuration gives this error:
    INFO: exitonly tools to be excuted passed: 0
    INFO: Starting to execute configuration assistants
    INFO: Command = C:\WINDOWS\system32\cmd /c call C:\oracle\product\10.2.0\crs/install/crssetup.config.bat
    PROT-1: Failed to initialize ocrconfig
    Step 1: checking status of CRS cluster
    Step 2: creating directories (C:\oracle\product\10.2.0\crs)
    Step 3: configuring OCR repository
    ocr upgrade failed with (-1)
    Execution of the plugin was aborted
    INFO: Configuration assistant "Oracle Clusterware Configuration Assistant" was canceled.
    *** Starting OUICA ***
    Oracle Home set to C:\oracle\product\10.2.0\crs
    Configuration directory is set to C:\oracle\product\10.2.0\crs\cfgtoollogs. All xml files under the directory will be processed
    INFO: The "C:\oracle\product\10.2.0\crs\cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
    I am using VMware WorkStation 5
    http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnWindows2003UsingVMware.php
    I can see my disks from both machines. Open the "Computer Management" dialog (Start > All Programs > Administrative Tools > Computer Management).
    I use Windows XP SP2 as the host operating system
    Any suggestions?
    Message was edited by:
    MEXMAN

    You know it's not certified?
    My first thought when I read the subject of this thread, and before reading what you posted, is that you've either done no research on RAC or didn't understand what you've read.
    You can create a RAC cluster using any operating system certified by Oracle.
    But the operating system is the least important part of creating a cluster. The questions you need to be able to address are:
    1. What is your solution to create shared storage?
    2. What is your solution to create the cache fusion interconnect and VIPs?
    If you don't have answers to these questions, you cannot build a RAC cluster.

  • Error - convert single node to RAC - ConvertTomydb.xml

    My single-node init.ora file:
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=mydb
    # Diagnostics and Statistics
    background_dump_dest=/u01/app/oracle/admin/mydb/bdump
    core_dump_dest=/u01/app/oracle/admin/mydb/cdump
    user_dump_dest=/u01/app/oracle/admin/mydb/udump
    # File Configuration
    control_files=("/u01/app/oracle/oradata/mydb/control01.ctl", "/u01/app/oracle/oradata/mydb/control02.ctl", "/u01/app/oracle/oradata/mydb/control03.ctl")
    # Job Queues
    job_queue_processes=10
    # Miscellaneous
    compatible=10.2.0.1.0
    # Processes and Sessions
    processes=150
    # SGA Memory
    sga_target=1083179008
    # Security and Auditing
    audit_file_dest=/u01/app/oracle/admin/mydb/adump
    remote_login_passwordfile=EXCLUSIVE
    # Shared Server
    dispatchers="(PROTOCOL=TCP) (SERVICE=mydbXDB)"
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=360710144
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    My ConvertTomydb.xml, which is a copy of the ConvertToRAC.xml file:
    <?xml version="1.0" encoding="UTF-8" ?>
    <n:RConfig xmlns:n="http://www.oracle.com/rconfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/rconfig">
      <n:ConvertToRAC>
        <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
        <n:Convert verify="ONLY">
          <!-- Specify current OracleHome of non-rac database for SourceDBHome -->
          <n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
          <!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
          <n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
          <!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
          <n:SourceDBInfo SID="mydb">
            <n:Credentials>
              <n:User>sys</n:User>
              <n:Password>oracle</n:Password>
              <n:Role>sysdba</n:Role>
            </n:Credentials>
          </n:SourceDBInfo>
          <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage -->
          <n:ASMInfo SID="+ASM1">
            <n:Credentials>
              <n:User>sys</n:User>
              <n:Password>oracle</n:Password>
              <n:Role>sysdba</n:Role>
            </n:Credentials>
          </n:ASMInfo>
          <!-- Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
          <n:NodeList>
            <n:Node name="linux1" />
            <n:Node name="linux2" />
          </n:NodeList>
          <!-- Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
          <n:InstancePrefix>mydb</n:InstancePrefix>
          <!-- Specify port for the listener to be configured for rac database. If port="", a listener existing on localhost will be used for rac database. The listener will be extended to all nodes in the nodelist. -->
          <n:Listener port="1521" />
          <!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
          <n:SharedStorage type="ASM">
            <!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
            <n:TargetDatabaseArea>+ORCL_DATA1</n:TargetDatabaseArea>
            <!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
            <n:TargetFlashRecoveryArea>+FLASH_RECOVERY_AREA</n:TargetFlashRecoveryArea>
          </n:SharedStorage>
        </n:Convert>
      </n:ConvertToRAC>
    </n:RConfig>
    I ran the xml file:
    $ rconfig ConvertTomydb.xml
    and got the error below:
    [oracle@linux1 bin]$ sh rconfig ConvertTomydb.xml
    <?xml version="1.0" ?>
    <RConfig>
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    Clusterware is not configured
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    [oracle@linux1 bin]$
    Log file from /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/rconfig/rconfig.log
    [main] [0:14:4:4] [RuntimeExec.runCommand:175] Returning from RunTimeExec.runCommand
    oracle.ops.mgmt.cluster.RemoteShellException: PRKC-1044 : Failed to check remote command execution setup for node linux2 using shells /usr/bin/ssh and /usr/bin/rsh
    linux2.com: Connection refused
         at oracle.ops.mgmt.nativesystem.UnixSystem.checkRemoteExecutionSetup(UnixSystem.java:1880)
         at oracle.ops.mgmt.nativesystem.UnixSystem.getRemoteShellCmd(UnixSystem.java:1634)
         at oracle.ops.mgmt.nativesystem.UnixSystem.createCommand(UnixSystem.java:614)
         at oracle.ops.mgmt.nativesystem.UnixSystem.removeFile(UnixSystem.java:622)
         at oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1352)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:916)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:859)
         at oracle.sysman.assistants.util.ClusterUtils.areSharedPaths(ClusterUtils.java:570)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:501)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:457)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.updateShared(CommonOPSAttributes.java:724)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.setNodeNames(CommonOPSAttributes.java:207)
         at oracle.sysman.assistants.rconfig.engine.Context.<init>(Context.java:54)
         at oracle.sysman.assistants.rconfig.engine.ASMInstance.createUtilASMInstanceRAC(ASMInstance.java:109)
         at oracle.sysman.assistants.rconfig.engine.Step.execute(Step.java:245)
         at oracle.sysman.assistants.rconfig.engine.Request.execute(Request.java:73)
         at oracle.sysman.assistants.rconfig.engine.RConfigEngine.execute(RConfigEngine.java:65)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:85)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:51)
         at oracle.sysman.assistants.rconfig.RConfig.main(RConfig.java:130)
    [main] [0:14:4:16] [UnixSystem.isSharedPath:1356] UnixSystem.isShared: creating file /u01/app/oracle/product/10.2.0/db_1/CFSFileName126249561289258204.tmp
    [main] [0:14:4:17] [UnixSystem.checkRemoteExecutionSetup:1817] checkRemoteExecutionSetup:: Checking user equivalence using Secured Shell '/usr/bin/ssh'
    [main] [0:14:4:17] [UnixSystem.checkRemoteExecutionSetup:1819] checkRemoteExecutionSetup:: Running Unix command: /usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 linux2 /bin/true
    oracle.ops.mgmt.cluster.SharedDeviceException: PRKC-1044 : Failed to check remote command execution setup for node linux2 using shells /usr/bin/ssh and /usr/bin/rsh
    linux2.com: Connection refused
         at oracle.ops.mgmt.nativesystem.UnixSystem.testCFSFile(UnixSystem.java:1444)
         at oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1402)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:916)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:859)
         at oracle.sysman.assistants.util.ClusterUtils.areSharedPaths(ClusterUtils.java:570)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:501)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:457)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.updateShared(CommonOPSAttributes.java:724)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.setNodeNames(CommonOPSAttributes.java:207)
         at oracle.sysman.assistants.rconfig.engine.Context.<init>(Context.java:54)
         at oracle.sysman.assistants.rconfig.engine.ASMInstance.createUtilASMInstanceRAC(ASMInstance.java:109)
         at oracle.sysman.assistants.rconfig.engine.Step.execute(Step.java:245)
         at oracle.sysman.assistants.rconfig.engine.Request.execute(Request.java:73)
         at oracle.sysman.assistants.rconfig.engine.RConfigEngine.execute(RConfigEngine.java:65)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:85)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:51)
         at oracle.sysman.assistants.rconfig.RConfig.main(RConfig.java:130)
    [main] [0:14:35:152] [Version.isPre10i:189] isPre10i.java: Returning FALSE
    [main] [0:14:35:152] [UnixSystem.getCSSConfigType:1985] configFile=/etc/oracle/ocr.loc
    [main] [0:14:35:157] [Utils.getPropertyValue:221] keyName=ocrconfig_loc props.val=/u02/oradata/orcl/OCRFile propValue=/u02/oradata/orcl/OCRFile
    [main] [0:14:35:157] [Utils.getPropertyValue:221] keyName=ocrmirrorconfig_loc props.val=/u02/oradata/orcl/OCRFile_mirror propValue=/u02/oradata/orcl/OCRFile_mirror
    [main] [0:14:35:157] [Utils.getPropertyValue:292] propName=local_only propValue=FALSE
    [main] [0:14:35:157] [UnixSystem.getCSSConfigType:2029] configType=false
    [main] [0:14:35:158] [Version.isPre10i:189] isPre10i.java: Returning FALSE
    [main] [0:14:35:168] [OCRTree.init:201] calling OCRTree.init
    [main] [0:14:35:169] [Version.isPre10i:189] isPre10i.java: Returning FALSE
    [main] [0:14:35:177] [OCRTree.<init>:157] calling OCR.init at level 7
    [main] [0:14:35:177] [HASContext.getInstance:190] Module init : 24
    [main] [0:14:35:177] [HASContext.getInstance:214] Local Module init : 0
    [main] [0:14:35:177] [HASContext.getInstance:249] HAS Context Allocated: 4 to oracle.ops.mgmt.has.ClusterLock@f47bf5
    [main] [0:14:35:177] [ClusterLock.<init>:60] ClusterLock Instance created.
    [main] [0:14:35:178] [OCR.getKeyValue:411] OCR.getKeyValue(SYSTEM.local_only)
    [main] [0:14:35:178] [nativesystem.OCRNative.Native] getKeyValue: procr_open_key retval = 0
    [main] [0:14:35:179] [nativesystem.OCRNative.Native] getKeyValue: procr_get_value retval = 0, size = 6
    [main] [0:14:35:179] [nativesystem.OCRNative.Native] getKeyValue: value is [false] dtype = 3
    [main] [0:14:35:179] [OCRTreeHA.getLocalOnlyKeyValue:1697] OCRTreeHA localOnly string = false
    [main] [0:14:35:180] [HASContext.getInstance:190] Module init : 6
    [main] [0:14:35:180] [HASContext.getInstance:214] Local Module init : 0
    [main] [0:14:35:180] [HASContext.getInstance:249] HAS Context Allocated: 5 to oracle.ops.mgmt.has.Util@f6438d
    [main] [0:14:35:180] [Util.<init>:86] Util Instance created.
    [main] [0:14:35:180] [has.UtilNative.Native] prsr_trace: Native: hasHAPrivilege
    [main] [0:14:35:184] [HASContext.getInstance:190] Module init : 56
    [main] [0:14:35:184] [HASContext.getInstance:214] Local Module init : 32
    [main] [0:14:35:184] [has.HASContextNative.Native] prsr_trace: Native: allocHASContext
    [main] [0:14:35:184] [has.HASContextNative.Native]
    allocHASContext: Came in
    [main] [0:14:35:184] [has.HASContextNative.Native] prsr_trace: Native: prsr_initCLSR
    [main] [0:14:35:185] [has.HASContextNative.Native]
    allocHASContext: CLSR context [1]
    [main] [0:14:35:185] [has.HASContextNative.Native]
    allocHASContext: retval [1]
    [main] [0:14:35:185] [HASContext.getInstance:249] HAS Context Allocated: 6 to oracle.ops.mgmt.has.ClusterAlias@18825b3
    [main] [0:14:35:185] [ClusterAlias.<init>:85] ClusterAlias Instance created.
    [main] [0:14:35:185] [has.UtilNative.Native] prsr_trace: Native: getCRSHome
    [main] [0:14:35:186] [has.UtilNative.Native] prsr_trace: Native: getCRSHome crs_home=/u01/app/oracle/product/10.2.0/crs(**)
    [main] [0:14:35:280] [ASMTree.getASMInstanceOracleHome:1328] DATABASE.ASM.linux1.+asm1 does exist
    [main] [0:14:35:280] [ASMTree.getASMInstanceOracleHome:1329] Acquiring shared CSS lock SRVM.ASM.DATABASE.ASM.linux1.+asm1
    [main] [0:14:35:280] [has.ClusterLockNative.Native] prsr_trace: Native: acquireShared
    [main] [0:14:35:281] [OCR.getKeyValue:411] OCR.getKeyValue(DATABASE.ASM.linux1.+asm1.ORACLE_HOME)
    [main] [0:14:35:281] [nativesystem.OCRNative.Native] getKeyValue: procr_open_key retval = 0
    [main] [0:14:35:282] [nativesystem.OCRNative.Native] getKeyValue: procr_get_value retval = 0, size = 36
    [main] [0:14:35:282] [nativesystem.OCRNative.Native] getKeyValue: value is [u01/app/oracle/product/10.2.0/db_1] dtype = 3
    [main] [0:14:35:282] [ASMTree.getASMInstanceOracleHome:1346] getASMInstanceOracleHome:ohome=/u01/app/oracle/product/10.2.0/db_1
    [main] [0:14:35:282] [ASMTree.getASMInstanceOracleHome:1367] Releasing shared CSS lock SRVM.ASM.DATABASE.ASM.linux1.+asm1
    [main] [0:14:35:282] [has.ClusterLockNative.Native] prsr_trace: Native: unlock
    [main] [0:14:35:802] [nativesystem.OCRNative.Native] keyExists: procr_close_key retval = 0
    [main] [0:14:35:802] [ASMTree.getNodes:1236] DATABASE.ASM does exist
    [main] [0:14:35:802] [ASMTree.getNodes:1237] Acquiring shared CSS lock SRVM.ASM.DATABASE.ASM
    [main] [0:14:35:802] [has.ClusterLockNative.Native] prsr_trace: Native: acquireShared
    [main] [0:14:35:803] [OCR.listSubKeys:615] OCR.listSubKeys(DATABASE.ASM)
    [main] [0:14:35:803] [nativesystem.OCRNative.Native] listSubKeys: key_name=[DATABASE.ASM]
    [main] [0:14:35:809] [GetASMNodeListOperation.run:78] Got nodes=[Ljava.lang.String;@11a75a2
    [main] [0:14:35:809] [GetASMNodeListOperation.run:91] result status 0
    [main] [0:14:35:809] [LocalCommand.execute:56] LocalCommand.execute: Returned from run method
    [main] [0:14:35:810] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM2, diskGroupName=FLASH_RECOVERY_AREA, size=95378, freeSize=88454, type=EXTERN, state=MOUNTED
    [main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM1, diskGroupName=FLASH_RECOVERY_AREA, size=95378, freeSize=88454, type=EXTERN, state=MOUNTED
    [main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM2, diskGroupName=ORCL_DATA1, size=95384, freeSize=39480, type=NORMAL, state=MOUNTED
    [main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM1, diskGroupName=ORCL_DATA1, size=95384, freeSize=39480, type=NORMAL, state=MOUNTED
    [main] [0:14:35:858] [ASMInstance.setBestDiskGroup:1422] sql to be executed:=select name from v$asm_diskgroup where free_mb= (select max(free_mb) from v$asm_diskgroup)
    [main] [0:14:35:864] [ASMInstance.setBestDiskGroup:1426] Setting best diskgroup....
    [main] [0:14:35:888] [SQLEngine.doSQLSubstitution:2165] The substituted sql statement:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='FLASH_RECOVERY_AREA'
    [main] [0:14:35:888] [ASMInstance.setTemplates:1345] sql to be executed:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='FLASH_RECOVERY_AREA'
    [main] [0:14:35:892] [ASMInstance.setTemplates:1349] Getting templates for diskgroup: oracle.sysman.assistants.util.asm.DiskGroup@170888e
    [main] [0:14:35:892] [ASMInstance.setTemplates:1357] template: PARAMETERFILE
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: DUMPSET
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: DATAGUARDCONFIG
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: FLASHBACK
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: CHANGETRACKING
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: XTRANSPORT
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: AUTOBACKUP
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: BACKUPSET
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: TEMPFILE
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: DATAFILE
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: ONLINELOG
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: ARCHIVELOG
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: CONTROLFILE
    [main] [0:14:35:894] [ASMInstance.createUtilASMInstanceRAC:113] Diskgroups loaded
    [main] [0:14:35:894] [LocalNodeCheck.checkLocalNode:107] Performing LocalNodeCheck
    [main] [0:14:35:894] [OracleHome.getNodeNames:270] inside getNodeNames
    [main] [0:14:36:116] [OracleHome.isClusterInstalled:252] bClusterInstalled=false
    [main] [0:14:36:120] [Step.execute:251] STEP Result=Clusterware is not configured
    [main] [0:14:36:121] [Step.execute:280] Returning result:Operation Failed
    [main] [0:14:36:121] [RConfigEngine.execute:67] bAsyncJob=false
    [main] [0:14:36:124] [RConfigEngine.execute:76] Result=<?xml version="1.0" ?>
    <RConfig>
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    Clusterware is not configured
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    Log file from /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/rconfig/mydb/sqllog
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        FALSE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        FALSE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        TRUE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        TRUE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        TRUE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    Please help me find where I am making a mistake.
    Thanks

    1) I have created a single-node standard database called mydb in the /u01/app/oracle/product/10.2.0/db_1 home (hostname linux1)
    2) installed CRS and ASM on linux1 and linux2, with shared storage on ASM (an external HD attached via IEEE 1394 cards and ports); no database is created on linux1 or linux2
    3) I want to convert mydb to a RAC database, with instance mydb1 on linux1 and mydb2 on linux2 respectively
    4) copied and modified the XML as you see above, called ConvertTomydb.xml, into the $ORACLE_HOME/bin directory
    5) when I run
    $rconfig ConvertTomydb.xml from the $ORACLE_HOME/bin directory, I get the following error:
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    Clusterware is not configured
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC>
    $
    Please see my crs_stat -t command output
    Name Type Target State Host
    ora....SM1.asm application ONLINE ONLINE linux1
    ora....X1.lsnr application ONLINE ONLINE linux1
    ora.linux1.gsd application ONLINE ONLINE linux1
    ora.linux1.ons application ONLINE ONLINE linux1
    ora.linux1.vip application ONLINE ONLINE linux1
    ora....SM2.asm application ONLINE ONLINE linux2
    ora....X2.lsnr application ONLINE ONLINE linux2
    ora.linux2.gsd application ONLINE ONLINE linux2
    ora.linux2.ons application ONLINE ONLINE linux2
    ora.linux2.vip application ONLINE ONLINE linux2
    ora.orcl.db application ONLINE ONLINE linux1
    ora....l1.inst application ONLINE ONLINE linux1
    ora....l2.inst application ONLINE ONLINE linux2
    ora....test.cs application ONLINE ONLINE linux1
    ora....cl1.srv application ONLINE ONLINE linux1
    ora....cl2.srv application ONLINE UNKNOWN linux2
    please see the output from olsnodes command
    [oracle@linux1 bin]$ olsnodes
    linux1
    linux2
    [oracle@linux1 bin]$
    What is your cache fusion interconnect strategy?
    I don't know about this. Please let me know where I can find the answers and what command I have to use to get them.
    damorgan, please let me know if I answered your questions; if not, I can give as much detail as possible. I really appreciate your help.
    Thanks
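The `Operation Failed` / `Clusterware is not configured` result in the log above corresponds to the `OracleHome.isClusterInstalled ... bClusterInstalled=false` line: rconfig requires the database ORACLE_HOME itself to be registered as a cluster install in the central inventory, not just a working CRS stack. A minimal sketch of that check, using a simulated inventory entry (the file path, XML layout, and home name here are illustrative assumptions, not taken from the post):

```shell
# Simulated central-inventory entry; on a real system inspect
# oraInventory/ContentsXML/inventory.xml for the database home's <HOME> element.
cat > /tmp/inventory_sample.xml <<'EOF'
<HOME NAME="OraDb10g_home1" LOC="/u01/app/oracle/product/10.2.0/db_1" TYPE="O" IDX="2"/>
EOF

# A cluster-aware home carries a NODE_LIST child element; a plain
# single-instance install (as in this sample) does not.
if grep -q '<NODE_LIST>' /tmp/inventory_sample.xml; then
  echo "home is cluster-aware"
else
  echo "home is NOT cluster-aware"
fi
```

If the home is not cluster-aware, a commonly suggested fix (worth verifying against your version's documentation) is to re-register it from the database home with `runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 "CLUSTER_NODES={linux1,linux2}"`, after first confirming the stack itself with `crsctl check crs` and `olsnodes`.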

  • How are discs managed under a RAC data server

    I am considering using Oracle 9i for a very large database and I am being offered RAC as a scalable architecture. I am trying to find out how the 'shared disc' part of the architecture maps into physical reality. All storage devices in my experience can only attach an individual logical unit of storage to one server at any one time. There is no mechanism to request use of a shared resource. (The logical unit can however be passed around between servers in a cluster transition).
    There are two obvious mechanisms:-
    1)Oracle has a request/lock/release mechanism to pass logical units around between nodes.
    2) Cache Fusion abstracts the physical layer so that the location of the storage is invisible. The storage is shared between the nodes, with each piece of physical storage attached to one database node.
    Or it could be something I have not thought of.
    Any information gratefully received.

    Here you can see all the process flows: http://scn.sap.com/docs/DOC-8292
    The input file repository server is used to retrieve/save report templates; the output one is used to store report instances.
    If you have multiple FRS servers, they all have to point to the same shared location (one for the IFRS and one for the OFRS).
    Report processing servers utilize local cache directories. In some cases it makes sense to share them, in some it does not.
    There are several KB articles on WebI cache and CR cache configs, in addition to the default practices described in the Admin guide.

  • RAC private  ip  error

    hi,
    I implemented RAC on OEL4 with Oracle 10g. Here I am getting a problem: my cluster is using the private IP and my instance is using the public IP. How can I resolve this problem?
    With Regards
    tmadugula

    What version? 10g is not a version it is a marketing label.
    select * from v$version;
    But assuming 10gR2 ... run the Cluster Verify tool and post the full output.
    How can we help you when all you write is "i am getting problem" and you don't tell us what the problem is? Should we guess?
    When you post the output ... tell us the details about your cache fusion interconnect solution.
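On the interconnect question: on 10gR2 the networks registered with the clusterware can be listed with `oifcfg getif`, and the instance's own view is available via `select * from v$cluster_interconnects;`. A minimal sketch that picks the private network out of `oifcfg`-style output — the interface names and subnets below are invented for illustration:

```shell
# Sample lines in the format `oifcfg getif` prints (values invented for illustration):
cat > /tmp/oifcfg_sample.txt <<'EOF'
eth0  192.168.1.0  global  public
eth1  10.0.0.0     global  cluster_interconnect
EOF

# The private (Cache Fusion) network is the one tagged cluster_interconnect:
awk '$4 == "cluster_interconnect" {print "private interconnect:", $1, $2}' /tmp/oifcfg_sample.txt
```

If the interconnect traffic is going over the public interface instead, that is typically fixed with `oifcfg setif` or the `cluster_interconnects` init parameter; check your version's documentation before changing either.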

  • Oracle 10g RAC on solaris 10 installation

    hi
    I want to know the default file locations of the following:
    oracle base
    oracle home
    control files
    redo log files
    data files
    Please tell me exactly the default path for each of the above.

    I'm assuming you mean running RAC with or without Sun Cluster? If so, the answer is that there would be almost no difference in most cases. The only case I can think of would be if you had a sub-optimal interconnect without Sun Cluster, where Sun Cluster's clprivnet would have given you striping and availability for free.
    I am currently working on a Blueprint that describes the important differences between the two configurations (with and without SC). They can broadly be summarised as:
    Sun Cluster gives you:
    * Better data integrity protection
    * Faster, more reliable node failure detection
    * Makes your name space homogeneous, simplifying installation and device management (DID structure), so no need for messy symbolic links
    * Gives you a highly available, striped cluster interconnect for the Cache Fusion traffic. (No need for tricky IPMP or link aggregation configurations.)
    * Allows you to use volume managers like Solaris Volume Manager or VxVM
    * Provides support for shared QFS as a file system for all Oracle objects, data files included (while still allowing ASM)
    * A substantial collection of Sun-written and Sun-supported agents to manage other applications you might also have on your cluster, e.g. NFS, SAP, Apache, etc.
    Hope that helps,
    Tim
    ---

  • Query performance is slow in RAC

    Hi,
    I am analyzing the purpose of Oracle RAC and how it would fit into and be useful for our product. So I have set up a two-node RAC 10g in our lab and I am doing various tests with RAC.
    Test 1 : Fail-over:
    ~~~~~~~~~~~
    First I started with failover testing, doing two types of tests: "connect-time" failover and TAF.
    Here TAF has a limitation: it does not handle DML transactions.
    Test 2 : Performance:
    ~~~~~~~~~~~~~~
    Second, I did performance testing. Here I used 10,000 records for insert, update, read and delete operations against single- and two-node instances. There was no performance difference between single and two nodes.
    But I assumed RAC would provide higher performance than single-instance Oracle.
    So I am confused about whether we should choose Oracle RAC for our project.
    DBAs,
    Please give me your answers to the following questions; it will be a great help for me in coming to a conclusion:
    1. What is the main purpose of RAC (since, from what I have seen, failover is only partially supported and there is no difference in query-processing performance)?
    2. What kind of business environment does RAC fit best?
    3. What are the unique benefits of RAC that single-instance Oracle does not have?
    Thanks
    Edited by: Anandhan on Aug 7, 2009 1:40 AM

    Hi!
    Well, RAC ensures high availability, with conditions applied!
    For the database, create more than one service and have applications connect to the database using these services.
    RAC access to the database is service-driven. So, if planned thoughtfully, load on the database can be distributed physically using the services created for the database.
    So if you have a single database servicing more than one application (of any type(s), i.e. OLTP/warehouse etc.), connect to the database using different services so that the init parameters are set for the purpose of the connection.
    NOTE: each database instance running on a node can have a different init_<sid>.ora to ensure optimum performance for its designated purpose.
    RAC uses the Global Cache Service (GCS) with Cache Fusion to reduce I/O on a running production server by transferring buffers from the global cache between nodes when required, thus reducing physical reads. This is its contribution on the performance front.
    Any database that requires access with different init.ora settings to the same physical data: RAC is the best way!
    For high availability, use a TAF-type service.
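To make the service-driven access the reply describes concrete: a client-side tnsnames.ora entry for a TAF-enabled service might look like the sketch below. The VIP host names, service name, and retry values are illustrative assumptions, not taken from the thread:

```
OLTP_SVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = oltp_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )
```

With TYPE=SELECT, in-flight queries resume on the surviving instance after a node failure, but open DML transactions are still rolled back, which matches the TAF limitation observed in the failover test above.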

  • Solaris 10 Installation for Oracle RAC 10g Rel2

    Hello,
    I have bought an Enterprise 4000 and a 250 in order to install and explore Oracle 10g Rel2 RAC.
    In this regard I need some guidelines for:
    1. Installing Solaris 10
    2. Should I use ASM or some other volume manager?
    If anybody can give me a starting document, it would be great.
    Thanks,
    Javed

    I'm assuming you mean running RAC with or without Sun Cluster? If so, the answer is that there would be almost no difference in most cases. The only case I can think of would be if you had a sub-optimal interconnect without Sun Cluster, where Sun Cluster's clprivnet would have given you striping and availability for free.
    I am currently working on a Blueprint that describes the important differences between the two configurations (with and without SC). They can broadly be summarised as:
    Sun Cluster gives you:
    * Better data integrity protection
    * Faster, more reliable node failure detection
    * Makes your name space homogeneous, simplifying installation and device management (DID structure), so no need for messy symbolic links
    * Gives you a highly available, striped cluster interconnect for the Cache Fusion traffic. (No need for tricky IPMP or link aggregation configurations.)
    * Allows you to use volume managers like Solaris Volume Manager or VxVM
    * Provides support for shared QFS as a file system for all Oracle objects, data files included (while still allowing ASM)
    * A substantial collection of Sun-written and Sun-supported agents to manage other applications you might also have on your cluster, e.g. NFS, SAP, Apache, etc.
    Hope that helps,
    Tim
    ---

  • Single node file system to 3 node rac and asm migration

    hi,
    we have several UTL_FILE and external table applications running on a 10.2 single-node Veritas file system, and we want to migrate to a 3-node RAC ASM environment. What are the best practices for making this migration succeed? thanks.

    1. Patch to 10.2.0.3 or 10.2.0.4 if not already there.
    2. Dump Veritas from any future consideration.
    3. Build and validate the new RAC environment and then plug in your data using transportable tablespaces.
    Do not expect the first part of step 3 to work perfectly the first time if you do not have experience building RAC clusters.
    This means having appropriate hardware in place for perfecting your skills.
    Be sure, too, that you are not trying to do this with blade or 1U servers. You need a minimum of 2U servers to be able to plug in sufficient hardware to have redundant paths to storage and for cache fusion and public access (a minimum of 6 ports).
    And don't let any network admin try to convince you that they can virtualize the network paths: they cannot do so successfully for RAC.
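A hedged outline of the transportable-tablespace move in step 3. The tablespace name, source path, and file names are placeholders (only the +ORCL_DATA1 diskgroup name appears earlier in this page), and on 10.2 Data Pump (expdp/impdp) can be used in place of exp/imp:

```
-- 1. On the source database: check the set is self-contained, then freeze it
exec dbms_tts.transport_set_check('APP_DATA', TRUE);
select * from transport_set_violations;
alter tablespace app_data read only;

-- 2. Export the tablespace metadata
exp userid=\'/ as sysdba\' transport_tablespace=y tablespaces=app_data file=tts.dmp

-- 3. Convert/copy the datafiles into ASM, e.g. with RMAN:
RMAN> convert datafile '/vxfs/oradata/app_data01.dbf' format '+ORCL_DATA1';

-- 4. On the RAC database: import the metadata, then reopen read write
imp userid=\'/ as sysdba\' transport_tablespace=y file=tts.dmp datafiles='+ORCL_DATA1/...'
alter tablespace app_data read write;
```

The UTL_FILE and external-table directories mentioned in the question are not carried by transportable tablespaces; their DIRECTORY objects and OS paths have to be recreated on storage visible to all three RAC nodes.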
