Setup statspack on RAC

Hi Experts,
Is it good practice to set up statspack on each RAC node, using a different schema and a different job for each?
Thanks
Simon Lai

Simon,
Sorry, that's not good practice; it is incorrect to do it this way.
All statspack tables record the instance_id of the instance the snapshot job was running on.
You need to install statspack in one schema and set up two different dbms_job calls, one pinned to each instance by its instance id.
If you do it your way, all the data you gather is of no use.
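For example, after a single spcreate.sql run, something along these lines (a sketch only, run as PERFSTAT; the hourly interval mirrors what spauto.sql uses, adjust to your needs):
variable jobno number;
begin
  -- hourly snapshot job pinned to instance 1
  dbms_job.submit(:jobno, 'statspack.snap;', sysdate,
                  'trunc(SYSDATE+1/24,''HH'')', instance => 1);
  -- hourly snapshot job pinned to instance 2
  dbms_job.submit(:jobno, 'statspack.snap;', sysdate,
                  'trunc(SYSDATE+1/24,''HH'')', instance => 2);
  commit;
end;
/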
Sybrand Bakker
Senior Oracle DBA

Similar Messages

  • Statspack in RAC

    Hi Team,
    Please help us set up statspack in an Oracle 10g RAC environment.
    Regards,
    Arun

    Question for you:
    In a RAC environment, how many times do you need to run the spcre(ate).sql script?
    a) one time
    b) one time for each node
    If you answer correctly, why do you need 'steps' other than those already posted?
    We can't run the code at your site, or is that what you want?
    Sybrand Bakker
    Senior Oracle DBA

  • Setup 2-Node RAC 11.2.0.1.0 on Windows Server 2008

    I have been setting up RAC environments in VirtualBox and VMware on my local machine and on our test server. I have done the following setups:
    1. RAC 10g on RHEL 5.4 (VMWare / VBox)
    2. RAC 11g on RHEL 5.4 (VBox)
    3. RAC 11g on Windows Server 2008 (VBox)
    Now, our management wants me to set up a 2-node RAC in the real world. Cost is not an issue here, as this will be financed by a big private group.
    I am excited to do the project as I am really enthusiastic about database clustering. Of course, there is a little nervousness since this is
    my first time doing it in the real world (as even the best RAC expert started with a first deployment :) ).
    I am going to build RAC 11.2.0.1.0 on Windows Server 2008.
    I would like to seek advice on:
    - the best practices; what are the things that I have to consider?
    - any straightforward real-world deployment guide or technical papers that can serve as my reference
    - any issues that I might encounter
    - any help and feedback
    I know I can be successful in this first-time project if I seek advice from the experts.
    I hope you can help me, and I will be glad to read and review your contributions.
    Thanks a lot.....

    Hi,
    A few months ago I had a project to set up RAC on Windows 2008 R2. For six months now there have been no problems and it's working fine, but I have to say that the installation wasn't easy and took me a lot of time. Another thing is troubleshooting: I feel completely helpless when (if) something screws up. It takes between 10 and 15 minutes to have the database running in case of a reboot of the servers.
    So here are few things to consider:
    - Disable write cache on shared disks.
    - Disable User Access Control!
    - Disable firewall (really important)!
    - Use diskpart command to create extended and logical partition on all disks.
    - There is also a nasty bug we hit; I've blogged about it:
    http://sve.to/2011/09/29/exhaust-of-windows-2008-heap-memory-with-oracle-database-11-2-0-2/
    - I had terrible problems with user equivalence. Verify privileges for copying files in the cluster:
    net use \\nodeX\c$
    - Once the installation is completed, apply the latest Bundle Patch - currently this is BP6 (Patch 13965211):
    DB 11.2.0.3 Patch 6 includes all bug fixes from 11.2.0.3 Patch 1 to Patch 5 and also includes CPU2012. It must be applied on top of 11.2.0.3.
    Useful MOS notes:
    RAC and Oracle Clusterware Best Practices and Starter Kit (Windows) [ID 811271.1]
    Windows: CLUVFY Fails with TCP Check PRVF-7617 Due to Case of Node Names [ID 1286394.1]
    Finally, if you have a choice then go with Linux; it's robust and easy to install and maintain. You have better control over the system and user processes, and it's more flexible and easier to troubleshoot.
    Regards,
    Sve

  • Network Setup for 10g RAC

    We are planning a 2-node RAC cluster with nodes in two rooms on a single site. The shared storage will be on a SAN which is replicated/mirrored (via the SAN) to a second SAN in the second room.
    Setup
    room A :- Node A, SAN A
    room B :- Node B, SAN A
    room B :- mirrored copy of data on SAN B
    (both SANs and hosts are capable of connecting to each other through the SAN infrastructure)
    My question/issue is around the IP network.
    The plan is for both Node A and Node B to have 3 or 4 network connections:
    1 - Public Net
    2 - Backup Net
    3 - Private Net
    4 - VIP addr - on the Public Net interface
    My questions are:
    1. Do the public nets for both nodes have to be on the same VLAN (subnet), or can they be on different VLANs (subnets)?
    eg
    room A Public Net = a.b.1.0/24 (say gateway is a.b.1.1)
    room A Host A public IP = a.b.1.20
    room A Host A VIP = a.b.1.21
    and
    room B public Net = a.b.2.0/24
    room B Host B public IP = a.b.2.20
    room B Host B VIP = a.b.2.21
    assume that there are other hosts on both networks.
    Will this work?
    How do the RAC nodes load balance across both nodes, and are the VIP addresses used to achieve load balancing?
    Second question:
    While I can connect both rooms via a private network without routing (via dedicated fibre to two dedicated private-net switches), I could save $$ by using a private VLAN routed to each room, say a /29 network (a.b.3.0/29).
    Can I connect the private nets via a routed VLAN?

    > The public IP's all have to be on the same subnet.
    > Ie, the first 3 sets of numbers for the IP address
    > must be the same for the public and virtual IP's.
    > There are no options in that.
    Not necessarily, but probably most often correct. If the subnet mask is something other than 255.255.255.0, then the statement above may not hold and/or may not be sufficient. The public IPs and VIPs all must share the same subnet; that much is always correct. What constitutes the same subnet may vary based on the subnet mask used in your network.
    For example, if the subnet mask is 255.255.254.0, then 172.16.172.4 and 172.16.173.12 are on the same subnet. Conversely, if the subnet mask is 255.255.255.128, then 172.16.4.2 and 172.16.4.210 are NOT on the same subnet. If you're not sure, consult a network admin--there are also plenty of online tutorials regarding subnet masks and how they work.

  • 10g setup with add rac option later

    Hi
    We have plans to upgrade our production environment to 10g on a separate machine.
    Can we install all the clusterware on that machine, install Oracle 10g, then export and import and start using 10g, and later, once we plan to move to Oracle 10g RAC, reuse all this setup just by adding one more node?
    In short, can I install Oracle 10g now and move to RAC just by adding a node, or at least with minimal impact to production?
    Will it work? If so, how can I achieve that?
    Thanks in advance

    Yes, you can install a so-called single-node RAC. Just take care that all prerequisites are met so that it will work later when you add the other node(s) (e.g. if you just have 1 node, it is difficult to check shared access to the storage, or that the network interfaces will be the same when the second node is added, etc.).
    Also set up ssh for your own node and check with cluvfy.
    Installation of a single-node RAC is equal to the installation of a multi-node RAC.
    - Install Clusterware then Patch Clusterware to 10.2.0.3
    - Install ASM Home, Patch ASM Home to 10.2.0.3, Configure ASM
    - Install DB Home, Patch DB Home to 10.2.0.3, Configure DB
    Before adding the second node, use cluvfy to check your setup:
    http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html
    Later, an additional node can be added from the first node with the addnode.sh script without downtime. A detailed description can be found here:
    http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/toc.htm
    Chapter 10 for UNIX, or Chapter 11 for Windows.
    Just one tip, since a cluster is more complex than a single instance:
    test it for yourself first to get experience installing and adding the node(s), so that nothing unexpected happens when doing this on the production server.

  • GG Recommended setup on Oracle RAC

    Hi All,
    we are planning to set up GoldenGate in our 2-node RAC env. Could someone please provide some documents or best practices for setting up GoldenGate in a RAC env?
    Thank you very much... I am looking specifically at:
    1) is a GoldenGate installation required on both nodes, or
    2) on a single node, or
    3) on shared storage?
    4) if it is on shared storage or on a single node, then how will it extract the data from the other node?
    Please help me on the basic questions
    Thank you very much...
    Regards
    Mvk

    Here is a summary:
    You can basically just use the documentation, unless you have a more specific issue. Oracle RAC is covered in the docs.
    * http://www.oracle.com/technetwork/middleware/goldengate/documentation/index.html
    You might also want to check out the high-availability reference docs,
    * http://www.oracle.com/technetwork/database/features/availability/index-087701.html
    * And OCW (Oracle Clusterware): http://www.oracle.com/technetwork/middleware/goldengate/overview/ha-goldengate-whitepaper-128197.pdf

  • CONCURRENT MANAGER SETUP AND CONFIGURATION REQUIREMENTS IN AN 11I RAC ENVIRONMENT

    Product: AOL
    Date written: 2004-05-13
    PURPOSE
    This document describes the setup required for a RAC-PCP configuration.
    PCP is implemented for CM workload distribution, failover, and similar purposes.
    Explanation
    Failure scenarios can be divided into the following three cases:
    1. The database instance that supports the CP, Applications, and Middle-Tier
    processes such as Forms, or iAS can fail.
    2. The Database node server that supports the CP, Applications, and Middle-
    Tier processes such as Forms, or iAS can fail.
    3. The Applications/Middle-Tier server that supports the CP (and Applications)
    base can fail.
    The section below explains the CM and AP configuration and
    the relationship between the CM and GSM (Global Service Management).
    The concurrent processing tier can reside on either the Applications, Middle-Tier, or Database Tier nodes. In a single-tier configuration (a non-PCP
    environment), a node failure will impact Concurrent Processing operations due to any of these failure conditions. In a multi-node configuration the impact of
    any of these types of failures will depend upon what type of failure is experienced, and how concurrent processing is distributed among the nodes in
    the configuration. Parallel Concurrent Processing provides seamless failover for a Concurrent Processing environment in the event that any of these types of
    failures takes place.
    In an Applications environment where the database tier implements Listener (server) load balancing, as well as in a non-load-balanced environment,
    there are changes that must be made to the default configuration generated by AutoConfig so that CP initialization, processing, and PCP functionality are
    initiated properly on their respective/assigned nodes. These changes are described in the next section - Concurrent Manager Setup and Configuration
    Requirements in an 11i RAC Environment.
    The current Concurrent Processing architecture with Global Service Management
    consists of the following processes and communication model, where each process
    is responsible for performing a specific set of routines and communicating with
    parent and dependent processes.
    The following describes the roles of the ICM, FNDSM, IM, and the Standard Manager
    in a PCP environment.
    Internal Concurrent Manager (FNDLIBR process) - Communicates with the Service
    Manager.
    The Internal Concurrent Manager (ICM) starts, sets the number of active
    processes, monitors, and terminates all other concurrent processes through
    requests made to the Service Manager, including restarting any failed processes.
    The ICM also starts, stops, and restarts the Service Manager for each node.
    The ICM will perform process migration during an instance or node failure.
    The ICM will be active on a single node. This is also true in a PCP environment,
    where the ICM will be active on at least one node at all times.
    Service Manager (FNDSM process) - Communicates with the Internal Concurrent
    Manager, Concurrent Manager, and non-Manager Service processes.
    The Service Manager (SM) spawns and terminates manager and service processes
    (these could be Forms or Apache Listeners, Metrics or Reports Server, and any
    other process controlled through Generic Service Management). When the ICM
    terminates, the SM that resides on the same node as the ICM will also
    terminate. The SM is "chained" to the ICM. The SM will only reinitialize after
    termination when there is a function it needs to perform (start or stop a
    process), so there may be periods of time when the SM is not active, and this
    is normal. All processes initialized by the SM inherit the same environment as
    the SM. The SM environment is set by the APPSORA.env file and the gsmstart.sh
    script. The TWO_TASK used by the SM to connect to a RAC instance must match
    the instance_name from GV$INSTANCE. The apps_<sid> listener must be active on
    each CP node to support the SM connection to the local instance. There should
    be a Service Manager active on each node where a Concurrent or non-Manager
    service process will reside.
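    As a quick check, the instance names that those TWO_TASK entries must match can be listed from any node:
    select inst_id, instance_name, host_name from gv$instance;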
    Internal Monitor (FNDIMON process) - Communicates with the Internal Concurrent
    Manager.
    The Internal Monitor (IM) monitors the Internal Concurrent Manager and
    restarts any failed ICM on the local node. During a node failure in a PCP
    environment the IM will restart the ICM on a surviving node (multiple ICMs may
    be started on multiple nodes, but only the first ICM started will eventually
    remain active; all others will gracefully terminate). There should be an
    Internal Monitor defined on each node where the ICM may migrate.
    Standard Manager (FNDLIBR process) - Communicates with the Service Manager and
    any client application process.
    The Standard Manager is a worker process that initiates and executes client
    requests on behalf of Applications batch and OLTP clients.
    Transaction Manager - Communicates with the Service Manager and any user
    process initiated on behalf of a Forms or Standard Manager request. See Note
    240818.1 regarding Transaction Manager communication and setup requirements for
    RAC.
    Concurrent Manager Setup and Configuration Requirements in an 11i RAC
    Environment
    This section explains the basic setup procedure for using PCP.
    In order to set up Parallel Concurrent Processing using AutoConfig with GSM,
    follow the instructions in the 11.5.8 Oracle Applications System
    Administrator's Guide under Implementing Parallel Concurrent Processing,
    using the following steps:
    1. Applications 11.5.8 and higher is configured to use GSM. Verify the
    configuration on each node (see WebIV Note:165041.1).
    2. On each cluster node edit the Applications Context file (<SID>.xml), that
    resides in APPL_TOP/admin, to set the variable <APPLDCP oa_var="s_appldcp">
    ON </APPLDCP>. It is normally set to OFF. This change should be performed
    using the Context Editor.
    3. Prior to regenerating the configuration, copy the existing tnsnames.ora,
    listener.ora and sqlnet.ora files, where they exist, under the 8.0.6 and iAS
    ORACLE_HOME locations on each node to preserve the files (i.e.
    /<some_directory>/<SID>ora/$ORACLE_HOME/network/admin/<SID>/tnsnames.ora).
    If any of the Applications startup scripts that reside in
    COMMON_TOP/admin/scripts/<SID> have been modified, also copy these to
    preserve the files.
    4. Regenerate the configuration by running adautocfg.sh on each cluster node as
    outlined in Note:165195.1.
    5. After regenerating the configuration, merge any changes back into the
    tnsnames.ora, listener.ora and sqlnet.ora files in the network directories,
    and the startup scripts in the COMMON_TOP/admin/scripts/<SID> directory.
    Each node's tnsnames.ora file must contain the aliases that exist on all
    other nodes in the cluster. When merging tnsnames.ora files, ensure that each
    node contains all other nodes' tnsnames.ora entries. This includes tns
    entries for any Applications tier nodes where a concurrent request could be
    initiated, or request output to be viewed.
    6. In the tnsnames.ora file of each Concurrent Processing node, ensure that
    there is an alias that matches the instance name from GV$INSTANCE of each
    Oracle instance on each RAC node in the cluster. This is required in order
    for the SM to establish connectivity to the local node during startup. The
    entry for the local node will be the entry that is used for the TWO_TASK in
    APPSORA.env (and also in the APPS<SID>_<HOSTNAME>.env file referenced in the
    Applications Listener [APPS_<SID>] listener.ora file entry
    "envs='MYAPPSORA=<some directory>/APPS<SID>_<HOSTNAME>.env'")
    on each node in the cluster (this is modified in step 12).
    7. Verify that the FNDSM_<SID> entry has been added to the listener.ora file
    under the 8.0.6 ORACLE_HOME/network/admin/<SID> directory. See WebIV Note
    165041.1 for instructions regarding configuring this entry. NOTE: With the
    implementation of GSM, the 8.0.6 Applications and 9.2.0 Database listeners
    must be active on all PCP nodes in the cluster during normal operations.
    8. AutoConfig will update the database profiles and reset them for the node
    from which it was last run. If necessary reset the database profiles back to
    their original settings.
    9. Ensure that the Applications Listener is active on each node in the cluster
    where Concurrent, or Service processes will execute. On each node start the
    database and Forms Server processes as required by the configuration that
    has been implemented.
    10. Navigate to Install > Nodes and ensure that each node is registered. Use
    the node name as it appears when executing a 'nodename' from the Unix prompt
    on the server. GSM will add the appropriate services for each node at startup.
    11. Navigate to Concurrent > Manager > Define, and set up the primary and
    secondary node names for all the concurrent managers according to the
    desired configuration for each node workload. The Internal Concurrent
    Manager should be defined on the primary PCP node only. When defining the
    Internal Monitor for the secondary (target) node(s), make the primary node (
    local node) assignment, and assign a secondary node designation to the
    Internal Monitor, also assign a standard work shift with one process.
    12. Prior to starting the Manager processes, it is necessary to edit the
    APPSORA.env file on each node in order to specify a TWO_TASK entry that
    contains the INSTANCE_NAME parameter for the local node's Oracle instance,
    in order to bind each Manager to the local instance. This should be done
    regardless of whether Listener load balancing is configured, as it will
    ensure the configuration conforms to the required standard of having the
    TWO_TASK set to the instance name of each node as specified in GV$INSTANCE.
    Start the Concurrent Processes on their primary node(s). This is the
    environment that the Service Manager passes on to each process that it
    initializes on behalf of the Internal Concurrent Manager. Also make the same
    update to the file referenced by the Applications Listener APPS_<SID> in the
    listener.ora entry "envs='MYAPPSORA=<some directory>/APPS<SID>_<HOSTNAME>.env'"
    on each node.
    13. Navigate to Concurrent > Manager > Administer and verify that the Service
    Manager and Internal Monitor are activated on the secondary node, and any
    other additional nodes in the cluster. The Internal Monitor should not be
    active on the primary cluster node.
    14. Stop and restart the Concurrent Manager processes on their primary node(s),
    and verify that the managers are starting on their appropriate nodes. On
    the target (secondary) node in addition to any defined managers you will
    see an FNDSM process (the Service Manager), along with the FNDIMON process (
    Internal Monitor).
    Reference Documents
    Note 241370.1

    What is your database version? OS?
    We are using the VCP suite for planning purposes. We are using a VCP environment (12.1.3) in a decentralized structure connecting to 3 different source environments (consisting of 11i and R12). As per the Oracle note {RAC Configuration Setup For Running MRP Planning, APS Planning, and Data Collection Processes [ID 279156]} we have implemented RAC in our test environment to get better performance.
    But after doing all the setups and assigning concurrent programs to different nodes, we are seeing a huge performance issue. The Complete Collection, which generally takes 180 mins on average in production, is taking more than 6 hours to complete in RAC.
    So I would like to get suggestions from this forum: has anyone implemented RAC in a pure VCP (decentralized) environment? Will there be any improvement if we make our VCP instance RAC?
    Do you have PCP enabled? Can you reproduce the issue when you stop the CM?
    Have you reviewed these docs?
    Value Chain Planning - VCP - Implementation Notes & White Papers [ID 280052.1]
    Concurrent Processing - How To Ensure Load Balancing Of Concurrent Manager Processes In PCP-RAC Configuration [ID 762024.1]
    How to Setup and Run Data Collections [ID 145419.1]
    12.x - Latest Patches and Installation Requirements for Value Chain Planning (aka APS Advanced Planning & Scheduling) [ID 746824.1]
    APSCHECK.sql Provides Information Needed for Diagnosing VCP and GOP Applications Issues [ID 246150.1]
    Thanks,
    Hussein

  • Document on 10g RAC setup on solaris using vmware

    Hi All,
    I am planning to set up "Oracle 10g RAC on Solaris using VMware", but I am stuck at the installation of Solaris 10 in VMware.
    Can anybody please help/provide me with a document on Solaris 10 installation for a RAC setup?
    The main problems I am having during the Solaris 10 setup are:
    1) Setting up the static public and private addresses required for RAC
    2) Partitioning the disk space.
    Thanks in advance,
    Mahipal Reddy

    Refer to these:
    http://www.scribd.com/doc/15650880/Install-Rac-on-Solaris-Vmware
    http://www.disperu.com/using-vmware-server-install-10g-rac-on-solaris/
    http://nayyares.blogspot.com/2008/11/step-by-step-rac-10g-r2-solaris-10.html
    Thanks

  • Active/Passive RAC setup in 11.2 Standard Edition?

    For an active/passive setup in 10g RAC, the ACTIVE_INSTANCE_COUNT parameter works very well. However, in 11.2 this parameter is obsolete. I'm testing this on our test RAC server with Oracle Standard Edition 11.2.0.1. What other ways are possible for an active/passive cluster in Oracle 11.2? Has anybody gone through this before? I know that Oracle Enterprise Edition supports RAC One Node, but I have Standard Edition, and moreover, I'm planning to have 2 databases on this cluster, with 1 database being active/passive and the other passive/active on the 2 nodes.
    Appreciate any quick help!! Thanks.
    Satish...

    Hi user1701261,
    One option to divert traffic to a specific node could be setting up 11g services in active/passive mode:
    srvctl add service -d DB01 -s DB01_SRV -r "db011" -a "db012, db013"
    Where
    -d <db_unique_name> Unique name for the database
    -s <service> Service name
    -r "<preferred_list>" Comma separated list of preferred instances
    -a "<available_list>" Comma separated list of available instances
    Sessions will switch over to an available instance in case the db011 instance goes down.
    The active_instance_count parameter is effective only for a two-node cluster, whereas with this method you get more flexibility.
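    To confirm which instance is actually offering the service (using the names from the example above), you could query:
    select inst_id, name from gv$active_services where name = 'DB01_SRV';
    or check with: srvctl status service -d DB01 -s DB01_SRV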
    Regards
    Krishan Jaglan

  • Oracle10g RAC setup over OEL

    Hi,
    I want to set up Oracle 10gR2 RAC over OEL on my machine using VMware; which OEL version should I select?
    Regards

    Hi Friend,
    Please look below URLs
    Oracle 10g RAC On Linux Using VMware Server:
    http://oracle-base.com/articles/10g/oracle-db-10gr2-rac-installation-on-centos-4-using-vmware.php
    Oracle Database 11g Release 2 RAC On Linux Using VMware Server 2:
    http://oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-ol5-using-vmware-server-2.php
    Oracle Real Application Clusters (RAC) 11g  Release 2 (x86 32-bit & 64-bit):
    http://www.oracle.com/technetwork/server-storage/vm/rac-template-11grel2-166623.html
    Hope it helps..
    Thanks
    LaserSoft

  • Dataguard setup from 2-node RAC to single instance (DR site)

    Dear Expert,
    I have a request from management to set up a DR site for the current production RAC database using Active Data Guard. I have a two-node RAC database 11.2.0.3 running on Sun Solaris machines. I need the proper steps, or a good document I can refer to, for setting up a single-instance standby database at a DR site from a production RAC database. I only have experience setting up single instance to single instance. I'd appreciate it if an expert can provide me some links.
    Regard
    liang

    Hello;
    This will provide a good start and overview:
    Creating a Single Instance Physical Standby for a RAC Primary (please note the parameter changes for Oracle 11):
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimarysingleinstance-131970.pdf
    As will this :
    http://oracleinstance.blogspot.com/2012/01/create-single-instance-standby-database.html
    Oracle 11
    Rapid Oracle RAC Standby Deployment: Oracle Database 11g Release 2
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-rac-standby-133152.pdf
    Best Regards
    mseberg

  • Streams Setup from RAC to Single instance

    Does anyone have a document on setting up Streams from RAC to non-RAC? I successfully set up Streams between 2 single instances, but I am having issues replicating here: Streams is set up on node 1 of the RAC, and the Apply process is set up on the single node, but data is not replicating.
    Appreciate any suggestions.

    From Metalink Note 418755.1:
    Additional Configuration for RAC Environments for a Source Database
    Archive Logs
    The archive log threads from all instances must be available to any instance
    running a capture process. This is true for both local and downstream capture.
    Queue Ownership
    When Streams is configured in a RAC environment, each queue table has an
    "owning" instance. All queues within an individual queue table are owned by
    the same instance. The Streams components (capture/propagation/apply) all
    use that same owning instance to perform their work. This means that
    + a capture process is run at the owning instance of the source queue.
    + a propagation job must run at the owning instance of the queue
    + a propagation job must connect to the owning instance of the target queue.
    Ownership of the queue can be configured to remain on a specific instance,
    as long as that instance is available, by setting the PRIMARY_INSTANCE
    and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE.
    If the primary_instance is set to a specific instance (ie, not 0), the queue
    ownership will return to the specified instance whenever the instance is up.
    Capture will automatically follow the ownership of the queue. If the ownership
    changes while capture is running, capture will stop on the current instance
    and restart at the new owner instance.
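    As an illustration only (the queue table name below is hypothetical), pinning a queue table to instance 1 with instance 2 as fallback might look like:
    begin
      dbms_aqadm.alter_queue_table(
        queue_table        => 'STRMADMIN.STREAMS_QT',
        primary_instance   => 1,
        secondary_instance => 2);
    end;
    /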
    For queues created with Oracle Database 10g Release 2, a service will be
    created with the service name= schema.queue and the network name
    SYS$schema.queue.global_name for that queue. If the global_name of the
    database does not match the db_name.db_domain name of the database, be sure
    to include the global_name as a service name in the init.ora.
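    A quick sanity check of that name matching, in plain SQL*Plus (nothing here is environment-specific):
    select global_name from global_name;
    show parameter db_name
    show parameter db_domain
    If global_name differs from db_name.db_domain, add the global_name to the SERVICE_NAMES initialization parameter as described above.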
    For propagations created with the Oracle Database 10g Release 2 code with
    the queue_to_queue parameter set to TRUE, the propagation job will deliver
    only to the specific queue identified. Also, the source dblink for the target
    database connect descriptor must specify the correct service (global name of
    the target database) to connect to the target database. For example, the
    tnsnames.ora entry for the target database should include the CONNECT_DATA
    clause in the connect descriptor for the target database. This clause should
    specify (CONNECT_DATA=(SERVICE_NAME='global_name of target database')).
    Do NOT include a specific INSTANCE in the CONNECT_DATA clause.
    For example, consider the tnsnames.ora file for a database with the global name
    db.mycompany.com. Assume that the alias name for the first instance is db1 and
    that the alias for the second instance is db2. The tnsnames.ora file for this
    database might include the following entries:
    db.mycompany.com=
    (description=
    (load_balance=on)
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)))
    db1.mycompany.com=
    (description=
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)
    (instance_name=db1)))
    db2.mycompany.com=
    (description=
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)
    (instance_name=db2)))
    Use the service-level tnsnames.ora alias (db.mycompany.com above, the one
    without an instance_name) in the target database link USING clause.
    DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES
    identifies all active services for the database. In non-RAC configurations,
    the service name will typically be the global_name. However, it is possible
    for users to manually create alternative services and use them in the TNS
    connect_data specification. For RAC configurations, the service will appear
    in these views as SYS$schema.queue.global_name.
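    For example, the services just described can be listed with straightforward queries against those views:
    select name, network_name from dba_services;
    select inst_id, name, network_name from gv$active_services;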
    Propagation Restart
    Use the procedures START_PROPAGATION and STOP_PROPAGATION from
    DBMS_PROPAGATION_ADM to enable and disable the propagation schedule.
    These procedures automatically handle queue_to_queue propagation.
    Example:
    exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation');
    -- or, to force:
    exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation',force=>true);
    exec DBMS_PROPAGATION_ADM.start_propagation('name_of_propagation');
    If you use the lower level DBMS_AQADM procedures to manage the propagation schedule,
    be sure to explicitly specify the destination_queue name when queue_to_queue propagation has been configured.
    Example:
    DBMS_AQADM.UNSCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.SCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    Changing the GLOBAL_NAME of the Source Database
    See the OPERATION section on Global_name below. The following are some
    additional considerations when running in a RAC environment.
    If the GLOBAL_NAME of the database is changed, ensure that any propagations
    are dropped and recreated with the queue_to_queue parameter set to TRUE.
    In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the
    database, include the global_name for the queue (NETWORK_NAME in DBA_QUEUES)
    in the list of services for the database in the database parameter
    initialization file.
    Section 4. Target Site Configuration
    The following recommendations apply to target databases, ie, databases in which
    Streams apply is configured.
    1. Privileges
    Grant Explicit Privileges to APPLY_USER for the user tables
    Examples:
    Privileges for table level DML: INSERT/UPDATE/DELETE,
    Privileges for table level DDL: CREATE (ANY) TABLE , CREATE (ANY) INDEX,
    CREATE (ANY) PROCEDURE
    2. Instantiation
    Set Instantiation SCNs manually if not using export/import. If manually
    configuring the instantiation scn for each table within the schema, use the
    RECURSIVE=>TRUE option on the DBMS_STREAMS_ADM.SET_SCHEMA_INSTANTIATION_SCN
    procedure
    For DDL Set Instantiation SCN at next higher level(ie,SCHEMA or GLOBAL level).
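    A sketch of that manual approach (the schema name is hypothetical; the global name reuses this note's example):
    -- at the source database: capture the current SCN
    variable iscn number
    exec :iscn := dbms_flashback.get_system_change_number;
    print iscn
    -- at the destination database: record that SCN for the schema and its tables
    begin
      dbms_apply_adm.set_schema_instantiation_scn(
        source_schema_name   => 'HR',
        source_database_name => 'db.mycompany.com',
        instantiation_scn    => 1234567,  -- the value printed at the source
        recursive            => true);    -- also sets each table's SCN; needs a
                                          -- db link named after the source global name
    end;
    /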
    3. Conflict Resolution
    If updates will be performed in multiple databases for the same shared
    object, be sure to configure conflict resolution. See the Streams
    Replication Administrator's Guide Chapter 3 Streams Conflict Resolution,
    for more detail.
    To simplify conflict resolution on tables with LOB columns, create an error
    handler to handle errors for the table. When registering the handler using
    the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the
    ASSEMBLE_LOBS parameter as TRUE.
    See the Streams Concepts manual 10.2, Chapter 22 (Monitoring Apply),
    for displaying detailed information about Apply errors.
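    A sketch of registering such an error handler (the table and handler procedure names are hypothetical):
    begin
      dbms_apply_adm.set_dml_handler(
        object_name    => 'HR.EMPLOYEES',
        object_type    => 'TABLE',
        operation_name => 'UPDATE',
        error_handler  => true,
        user_procedure => 'STRMADMIN.EMP_ERROR_HANDLER',
        assemble_lobs  => true);
    end;
    /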
    4. Apply Process Configuration
    A. Rules
    If the maintain_* procedures are not suitable for your environment,
    please use the ADD_*_RULES procedures: ADD_TABLE_RULES, ADD_SCHEMA_RULES,
    ADD_GLOBAL_RULES (for DML and DDL), and ADD_SUBSET_RULES (DML only).
    These procedures minimize the number of steps required to configure Streams
    processes. Also, it is possible to create rules for non-existent objects,
    so be sure to check the spelling of each object specified in a rule carefully.
    APPLY can be configured with or without a ruleset. ADD_GLOBAL_RULES can
    be used to apply all changes in the queue for the database. If no ruleset is
    specified for the apply process, all changes in the queue are processed by the apply process.
    A single Streams apply can process rules for multiple tables or schemas
    located in a single queue that are received from a single source database .
    For best performance, rules should be simple. Rules that include LIKE clauses are
    not simple and will impact the performance of Streams.
    To eliminate changes for particular tables or objects, specify the
    include_tagged_lcr clause along with the table or object name in the
    negative rule set for the Streams process. Setting this clause will
    eliminate all changes, tagged or not, for the table or object.
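    For instance, a negative rule discarding all changes, tagged or not, for one table might look like this (the table and queue names are illustrative; apply_ex is the apply process name used elsewhere in this note):
    begin
      dbms_streams_adm.add_table_rules(
        table_name         => 'HR.SKIP_ME',
        streams_type       => 'apply',
        streams_name       => 'apply_ex',
        queue_name         => 'strmadmin.streams_queue',
        include_dml        => true,
        include_ddl        => true,
        include_tagged_lcr => true,   -- discard tagged and untagged changes
        inclusion_rule     => false); -- place the rules in the negative rule set
    end;
    /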
    B. Parameters
    Set the following parameters after an apply process is created:
    + DISABLE_ON_ERROR=N (default: Y)
    If Y, then the apply process is disabled on the first unresolved error,
    even if the error is not fatal.
    If N, then the apply process continues regardless of unresolved errors.
    + PARALLELISM = 3 * number of CPUs (default: 1)
    Apply parameters can be set using the SET_PARAMETER procedure from the
    DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter
    of the streams apply process named APPLY_EX, use the following syntax while
    logged in as the Streams Administrator:
    exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
    Change the apply parallelism parameter recommendation to a lower number.
    In general, try 4 or 8 and increase or decrease as necessary for your workload.
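    For example, following the guidance above:
    exec dbms_apply_adm.set_parameter('apply_ex','parallelism','4');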
    In some cases, performance can be improved by setting the following hidden
    parameter. This parameter should be set when the major workload is UPDATEs
    and the updates are performed on just a few columns of a many-column table.
    + _DYNAMIC_STMTS=Y (default: N)
    If Y, then for UPDATE statements the apply process will optimize the
    generation of SQL statements based on required columns.
    + _CHECKPOINT_FREQUENCY=1000
    Increase the frequency of logminer checkpoints, especially in a
    database with significant LOB or DDL activity.
    exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
    5. Additional Configuration for RAC Environments for an Apply Database
    Queue Ownership
    When Streams is configured in a RAC environment, each queue table has an
    "owning" instance. All queues within an individual queue table are owned
    by the same instance. The Streams components (capture/propagation/apply)
    all use that same owning instance to perform their work. This means that
    the database link specified in the propagation must connect to the owning
    instance of the target queue, and the apply process is run at the owning
    instance of the target queue.
    Ownership of the queue can be configured to remain on a specific instance,
    as long as that instance is available, by setting the PRIMARY_INSTANCE and
    SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the
    primary_instance is set to a specific instance (ie, not 0), the queue
    ownership will return to the specified instance whenever the instance is up.
    Apply will automatically follow the ownership of the queue. If the ownership
    changes while apply is running, apply will stop on the current instance and
    restart at the new owner instance.
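    The configured and current owners are visible in the dictionary; for example (queue table name hypothetical):
    select queue_table, primary_instance, secondary_instance, owner_instance
      from dba_queue_tables
     where queue_table = 'STREAMS_QT';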
    Changing the GLOBAL_NAME of the Database
    See the OPERATION section on Global_name below. The following are some
    additional considerations when running in a RAC environment.
    If the GLOBAL_NAME of the database is changed, ensure that the queue is
    empty before changing the name and that the apply process is dropped and
    recreated with the apply_captured parameter = TRUE. In addition, if the
    GLOBAL_NAME does not match the db_name.db_domain of the database, include
    the GLOBAL_NAME in the list of services for the database in the database
    parameter initialization file.

  • Configuring udev rules for Oracle 10g R2 RAC on OEL 5.5 U4 with QNAP

    I'm trying to set up a 10g RAC cluster following the guide by Jeff Hunter on http://www.idevelopment.info/
    I have to admit I'm no Linux admin, and I have searched around the net for help with the following issue.
    I'm trying to set my iSCSI targets to have persistent mappings using udev rules.
    This is what I have done so far
    [root@racnode1 Server]# iscsiadm -m discovery -t sendtargets -p nas-priv | grep 192.168.2.196
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote1.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra2.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata2.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote2.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs2.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs1.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata1.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote3.c59a2d
    192.168.2.196:3260,1 iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra1.c59a2d
    -- Manually Log into iSCSI Targets
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote1.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra2.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata2.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote2.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs2.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs1.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata1.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote3.c59a2d -p 192.168.2.196 -l
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra1.c59a2d -p 192.168.2.196 -l
    -- Make iSCSI Targets Automatically Login
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote1.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra2.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata2.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote2.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs2.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs1.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata1.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote3.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra1.c59a2d -p 192.168.2.196 --op update -n node.startup -v automatic
    -- Create Persistent Local SCSI Device Names
    - Identify Mappings
    [root@racnode1 ~]# (cd /dev/disk/by-path; ls -l qnap | awk '{FS=" "; print $9 " " $10 " " $11}')
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs1.c59a2d-lun-0 -> ../../sdg
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs2.c59a2d-lun-0 -> ../../sdf
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata1.c59a2d-lun-0 -> ../../sdi
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata2.c59a2d-lun-0 -> ../../sdd
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra1.c59a2d-lun-0 -> ../../sdj
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra2.c59a2d-lun-0 -> ../../sdc
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote1.c59a2d-lun-0 -> ../../sdb
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote2.c59a2d-lun-0 -> ../../sde
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote3.c59a2d-lun-0 -> ../../sdh
    - Create Rules File
    cat >> /etc/udev/rules.d/55-openiscsi.rules <<EOF
    # /etc/udev/rules.d/55-openiscsi.rules
    KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"
    EOF
    - Create Shell Script
    mkdir -p /etc/udev/scripts
    vi /etc/udev/scripts/iscsidev.sh
    #!/bin/sh
    # FILE: /etc/udev/scripts/iscsidev.sh
    # Called by udev with %b (the kernel bus id, e.g. "4:0:0:0"); prints the
    # short target name used to build the /dev/iscsi/<name>/part<n> symlinks.
    BUS=${1}
    HOST=${BUS%%:*}
    [ -e /sys/class/iscsi_host ] || exit 1
    file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
    target_name=$(cat ${file})
    # This is not an open-iscsi drive
    if [ -z "${target_name}" ]; then
        exit 1
    fi
    # Check if it is a QNAP target; if so, strip the trailing ".<hex>" suffix
    check_qnap_target_name=${target_name%%:*}
    if [ "${check_qnap_target_name}" = "iqn.2004-04.com.qnap" ]; then
        target_name=${target_name%.*}
    fi
    echo "${target_name##*.}"
    chmod 755 /etc/udev/scripts/iscsidev.sh
    service iscsi stop
    service iscsi start
    [root@racnode1 ~]# ls /dev/iscsi/*
    ls: /dev/iscsi/*: No such file or directory
    1.) For some reason I cannot get the mappings to work correctly. I have rebooted the server and tried a number of different changes in the rules script, but for the life of me I cannot get it to work.
    I noticed when I rebooted the server that it failed to execute the iscsidev script. When I manually run the shell script with a parameter, it produces output.
    Can anyone help me get this up and running?
    2.) My QNAP NAS doesn't seem to publish iSCSI targets to only one NIC. I think this is down to the firmware/feature not being available. When I discover targets I get the following:
    [root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *qnap* | awk '{FS=" "; print $9 " " $10 " " $11}')
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs1.c59a2d-lun-0 -> ../../sdh
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs2.c59a2d-lun-0 -> ../../sdm
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata1.c59a2d-lun-0 -> ../../sdn
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata2.c59a2d-lun-0 -> ../../sde
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra1.c59a2d-lun-0 -> ../../sdr
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra2.c59a2d-lun-0 -> ../../sdd
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote1.c59a2d-lun-0 -> ../../sdb
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote2.c59a2d-lun-0 -> ../../sdk
    ip-192.168.1.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote3.c59a2d-lun-0 -> ../../sdp
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs1.c59a2d-lun-0 -> ../../sdi
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbcrs2.c59a2d-lun-0 -> ../../sdg
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata1.c59a2d-lun-0 -> ../../sdo
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbdata2.c59a2d-lun-0 -> ../../sdj
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra1.c59a2d-lun-0 -> ../../sds
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbfra2.c59a2d-lun-0 -> ../../sdf
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote1.c59a2d-lun-0 -> ../../sdc
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote2.c59a2d-lun-0 -> ../../sdl
    ip-192.168.2.196:3260-iscsi-iqn.2004-04.com.qnap:ts-459:iscsi.racdbvote3.c59a2d-lun-0 -> ../../sdq
    It shows the same targets on both NICs; I only need them on the private IP 192.168.2.196.

    Hi,
    I'm facing the same issue. If your issue has been fixed, could you please let me know how?
    I'm trying to configure 11g RAC with Openfiler and got stuck here.
    Regards,
    Kumar

  • One instance of a two-node RAC crashes

    Dear all,
    I set up a two-node RAC 9.2.0.2 on HP Alpha OpenVMS 8.2; one node is CAPL21 (instance number = 1) and the other is CAPL22 (instance number = 2). After running normally for about five days, the second instance, CAPL22, terminated abnormally. I checked the alert log and trace files, as follows:
    6-AUG-2008 06:29:39.82:
    Errors in file $1$DGA2:[ORACLE.V9.ADMIN.YUSDB.BDUMP]CAPL22_YUSDB2_BG_LMS1_007.TRC;:
    ORA-00600: internal error code, arguments: [kjctr_rksxp:rcvr], [1], [2], [], [], [], [], []
    6-AUG-2008 06:29:40.32:
    Errors in file $1$DGA2:[ORACLE.V9.ADMIN.YUSDB.BDUMP]CAPL22_YUSDB2_BG_LMS1_007.TRC;:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code, arguments: [kjctr_rksxp:rcvr], [1], [2], [], [], [], [], []
    6-AUG-2008 06:29:46.60:
    Errors in file $1$DGA2:[ORACLE.V9.ADMIN.YUSDB.BDUMP]CAPL22_YUSDB2_BG_LMS1_007.TRC;:
    ORA-00600: internal error code, arguments: [ksxpcncl3], [], [], [], [], [], [], []
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code, arguments: [kjctr_rksxp:rcvr], [1], [2], [], [], [], [], []
    6-AUG-2008 06:29:47.38:
    Errors in file $1$DGA2:[ORACLE.V9.ADMIN.YUSDB.BDUMP]CAPL22_YUSDB2_BG_LMS1_007.TRC;:
    ORA-00600: internal error code, arguments: [ksxpcncl3], [], [], [], [], [], [], []
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code, arguments: [kjctr_rksxp:rcvr], [1], [2], [], [], [], [], []
    6-AUG-2008 06:30:37.33:
    Errors in file $1$DGA2:[ORACLE.V9.ADMIN.YUSDB.BDUMP]CAPL22_YUSDB2_BG_PMON_002.TRC;:
    ORA-00484: LMS* process terminated with error
    6-AUG-2008 06:30:37.33:
    PMON: terminating instance due to error 484
    6-AUG-2008 06:30:37.37:
    Errors in file $1$DGA2:[ORACLE.V9.ADMIN.YUSDB.BDUMP]CAPL22_YUSDB2_BG_LMD0_005.TRC;:
    ORA-00484: LMS* process terminated with error
    6-AUG-2008 06:30:37.37:
    Errors in file $1$DGA2:[ORACLE.V9.ADMIN.YUSDB.BDUMP]CAPL22_YUSDB2_BG_LMS0_006.TRC;:
    ORA-00484: LMS* process terminated with error
    6-AUG-2008 06:30:37.40:
    System state dump is made for local instance
    6-AUG-2008 06:30:37.56:
    Trace dumping is performing id=[cdmp_20080806063037]
    6-AUG-2008 06:30:42.47:
    Instance terminated by PMON, pid = 20401500
    Starting up ORACLE RDBMS Version: 9.2.0.2.0.
    An Oracle bug? Who can help me? Thanks in advance.

    > I use 100M switch not Gibit switch, maybe it's also problem?
    Could be... but one would need to look at the interface stats and switch stats to see whether the interconnect is coping.
    For 2 nodes, a decent interconnect is not that expensive: 2 HCA cards and a small InfiniBand switch.
    RAC is only as robust as the hardware it is running on.

  • Oracle RAC 11.2.0.3 doubts

    Hi experts,
    Current system info:
    Server 1 with RedHat 6.5 and Oracle ASM with SAP ECC 6, GRID 11.2.0.3 standalone installation
    Target system info:
    Server 1 and Server 2 running RAC 11.2.0.3 with SAP ECC 6 and RedHat 6.5, GRID with cluster
    We are trying to convert our current system to Oracle RAC but have some doubts.
    We are following "Configuration of SAP NetWeaver for Oracle Grid Infrastructure 11.2.0.2 and Oracle Real Application Clusters 11g Release 2: A Best Practices Guide", so:
    On page 29 it says: "Prepare the storage location for storing the shared ORACLE_HOME directory in the cluster. The Oracle RDBMS software should be installed into an empty directory, accessible from all nodes in the cluster." The same applies to the ORACLE_BASE for the RDBMS, the SAP subdirectories (sapbackup, sapcheck, sapreorg, saptrace, oraarch, etc.) and the home directories of the SAP users ora<SID> and <SID>adm, on a shared filesystem.
    1.- Can we just use NFS for sharing them? Or what is the recommended software on RedHat for doing this?
    Because note 527843 says:
    "You must store the following components in a shared file system (cluster, NFS, or ACFS)" - here it says we can, but further down, in the Linux section, the note says:
    RAC 11.2.0.3/4 (x86 & x86_64 only):
    Oracle Clusterware 11.2.0.3/4 + ASM/ACFS 11.2.0.3/4 (Oracle Linux 5, Oracle Linux 6, RHEL 5, RHEL 6, SLES 10, SLES 11)
           Oracle Clusterware 11.2.0.3/4 + NetApp NFS or
    Oracle Clusterware 11.2.0.3/4 + EMC Celerra NFS
    It does not mention just NFS.
    2.- In our test system, we want to back up all Oracle configuration files on file systems and then delete Oracle Grid, install GRID with the cluster option, then install the RDBMS with the RAC option and then follow the guide. Is that correct?
    Regards

    Hi Ramon,
    1.- Can we just use NFS for sharing them? Or what is the recommended software on RedHat for doing this?
    Because note 527843 says:
    "You must store the following components in a shared file system (cluster, NFS, or ACFS)" - here it says we can, but further down, in the Linux section, the note says:
    RAC 11.2.0.3/4 (x86 & x86_64 only):
    Oracle Clusterware 11.2.0.3/4 + ASM/ACFS 11.2.0.3/4 (Oracle Linux 5, Oracle Linux 6, RHEL 5, RHEL 6, SLES 10, SLES 11)
           Oracle Clusterware 11.2.0.3/4 + NetApp NFS or
    Oracle Clusterware 11.2.0.3/4 + EMC Celerra NFS
    It does not mention just NFS.
    An NFS mount as suggested in the SAP documentation should work. The use of ACFS always requires a specific Oracle Grid Infrastructure (GI) Patch Set Update (PSU). Oracle Support Note 1369107.1 contains details about which GI PSU is required when you use ACFS with a specific RHEL update, SLES service pack, or UEK version of Oracle Linux.
    2.-In our system test, we want to backup all oracle configuration files on file systems and then delete Oracle Grid to Install GRID with cluster option, then install RDBMS with rac option and then follow the guide, is that correct?
    You may perform a DB backup using backup tools and then scrap the existing Grid setup. Configure RAC and then restore the backup into the new configuration as per the SAP guidelines in
    Configuration of SAP NetWeaver for Oracle Grid Infrastructure 11.2 with Oracle Real Application Clusters 11g Release 2
    Hope this helps.
    Regards,
    Deepak Kori
