RAC Planning

Hi all,
I have 4 servers that I intend to use for a multi-node RAC cluster, with 20 TB of shared storage available.
I am going to have 2 separate databases for use by two different applications, and I need some advice on how to implement this.
Should I build a single 4-node RAC cluster hosting the two databases for the two applications, or two 2-node RAC clusters, each hosting a single database for a single application? Will there be any contention issues if I use a single 4-node RAC cluster for the two databases?
I also need some advice on sizing the storage. I have 20 1 TB 7,200 RPM drives and I want to implement RAID 10, giving me 10 mirrored 1 TB pairs for use. I want to use some of these now and add more as the databases grow. What size should the LUNs presented to the RAC nodes be? A friend also recommended having a data disk group, a flash recovery area disk group, and an additional disk group for the redo logs.
I am using Brocade FC switches.
All advice is appreciated.
Thanks.

Hi,
user11970290 wrote:
Hi all,
I have 4 servers that i intend to use for a multi-node RAC cluster utilizing available shared storage of 20 TB.
I am going to have 2 separate databases for use by two different applications and i need some advice on how to implement this.
A single 4-node RAC cluster that will host the two databases for the two applications or two 2-node RAC clusters each hosting a single database for use by a single application. Will there be any contention issues if i use a single 4-node RAC cluster for the two databases?
Also i need some advice on sizing the storage. i have 20 1TB 7200K RPM drives and i want to implement a RAID 10 giving me 10 1TB pairs for use. I want to use some of these and add more as the databases grow. What size should i use for the LUNs to be presented to the RAC nodes. Also a friend recommended having a data diskgroup, flash recovery area diiskgroup and an additional diskgroup for the redo logs.
I am using Brocade FC switches.
I advise you to create one cluster with 4 nodes if the hardware is the same on all servers. High availability is better guaranteed when the infrastructure can survive multiple kinds of failure in a cluster.
With 2 nodes in your production environment, if one node fails you will have the entire workload on a single node.
With 4 nodes in your production environment, if one or two nodes fail you can still carry your workload by distributing the whole load across the remaining 3 or 2 nodes, providing stability to the system as the workloads change.
To ease the management workload of the clusterware, Oracle introduced QoS Management in 11.2.
Quality of Service (QoS) Management
A new Quality of Service (QoS) Management Server enables run time management of service levels for hosted database applications on a shared infrastructure by cluster administrators. The goal is to present an easy-to-use, policy-driven management system that ensures meeting service levels if sufficient resources are available and when they are not, allocates resources to the most business critical workloads not meeting their service levels at the expense of the less critical ones.
http://download.oracle.com/docs/cd/E11882_01/server.112/e16542/apqos_intro.htm#APQOS269
Find more about Policy Managed Databases.
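As a sketch of what a policy-managed setup looks like on 11.2 (the pool names "apppool1"/"apppool2" and database names "appdb1"/"appdb2" below are placeholders, not from your environment):

```shell
# Sketch only: pool and database names are hypothetical.
# Create one server pool per application, then place each database in its pool;
# the clusterware then decides which of the 4 nodes serve which pool.
srvctl add srvpool -g apppool1 -l 1 -u 2 -i 10   # min 1, max 2 servers, importance 10
srvctl add srvpool -g apppool2 -l 1 -u 2 -i 5
srvctl modify database -d appdb1 -g apppool1
srvctl modify database -d appdb2 -g apppool2
```

With importance set per pool, the more critical application keeps its servers first if nodes fail.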
About ASMDisks
I recommend you read the threads linked here; any questions, just ask.
Re: Oracle 10.2 ASM on AIX 5.3 compatibility with IBM DS4300 & EXP700 storage
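For the three disk groups your friend suggested, a minimal sketch (the device paths are placeholders for your multipath names; EXTERNAL REDUNDANCY assumes the mirroring is already done by your RAID 10 array):

```shell
# Sketch only: adjust device paths to your environment; run as the ASM/grid owner.
sqlplus / as sysasm <<'SQL'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/mapper/data_lun*';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/mapper/fra_lun*';
CREATE DISKGROUP REDO EXTERNAL REDUNDANCY DISK '/dev/mapper/redo_lun*';
SQL
```

Keep the LUNs within a disk group equally sized, so ASM rebalancing stays even when you add more LUNs later.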
Regards,
Levi Pereira

Similar Messages

  • Best practice for RAC planned downtime maintenance?

I have a 3-node RAC on Red Hat Linux. The DB version is 11gR2.
I want to know the steps to perform the OS patch upgrade on each node.
I want to be sure I have the right steps:
1). Node 1: stop CRS and do the OS patch upgrade.
2). Same steps for Nodes 2 and 3.
Is this right?
Thank you for any help.

    user569151 wrote:
I have a 3 node RAC on Linux redhat. DB version is 11gr2. I want to know the steps to perform each node OS patch upgrade. [...]
If your RAC environment is configured properly, follow these steps:
    Node 1:
    * Relocate services to node 2 or 3 and stop database instance and service using SRVCTL
    * Stop Clusterware using CRSCTL
    * Disable automatic Startup of Clusterware on this node using CRSCTL
    * Patch your OS
* Relink the Oracle binaries (Grid Infrastructure/Oracle Database). (P.S. You will need to unlock and re-lock your Grid home to relink the binaries; see note RAC: Frequently Asked Questions [ID 220970.1])
If you are using ACFS, you must check whether the OS upgrade will affect the ACFS drivers (ACFS Supported On OS Platforms [ID 1369107.1]).
    * Enable Automatic Startup of Clusterware on this node using CRSCTL
    * Start Clusterware using CRSCTL
    * Start Database and Services with SRVCTL
    Repeat steps above on Node 2 and 3
Of course, don't forget a good backup and recovery plan in case of failure.
    http://docs.oracle.com/cd/E11882_01/server.112/e17157/planned.htm#CJACDIJD
    Is It Necessary To Relink Oracle Following OS Upgrade? [ID 444595.1]
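A sketch of the sequence above as commands (database name ORCL, instances orcl1/orcl2, and service app_svc are placeholders):

```shell
# Sketch only: ORCL/orcl1/app_svc are hypothetical names.
# Run srvctl as the oracle software owner, crsctl as root.
srvctl relocate service -d ORCL -s app_svc -i orcl1 -t orcl2  # move services off node 1
srvctl stop instance -d ORCL -i orcl1                         # stop the local instance
crsctl stop crs                                               # stop clusterware on node 1
crsctl disable crs                                            # keep it down across OS reboots
# ... apply the OS patches, then relink the Oracle binaries ...
crsctl enable crs
crsctl start crs
srvctl start instance -d ORCL -i orcl1
srvctl start service -d ORCL -s app_svc -i orcl1              # then repeat on nodes 2 and 3
```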
My concern is this: let's say each node has to be restarted.
So my procedure should be, on the 1st node: crsctl stop crs to stop everything and fail everything over to the other nodes.
I wonder, will crsctl stop crs cause the ASM instance to go down?
Yes, "crsctl stop crs" will try to stop all clusterware resources with a clean state. If any problem occurs, it will be raised at your prompt.
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Nov 1, 2012 7:35 PM

  • Consumer Group/Resource Plan to use only one instance of a RAC

What would be the way to create a Consumer Group/Resource Plan that uses only one instance of a RAC? I have a 10.1.0.5 database running on 10.1.0.5 RAC.
    Thanks,

    You should use "services" to limit workload to just one specific node of RAC:
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10739/create.htm#sthref400
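A minimal sketch of such a service, using 10g srvctl syntax (database ORCL, instances orcl1/orcl2, and service rpt_svc are placeholder names):

```shell
# Sketch only: names are hypothetical.
# The service runs on its preferred instance (orcl1) and moves to the
# available instance (orcl2) only if orcl1 goes down.
srvctl add service -d ORCL -s rpt_svc -r orcl1 -a orcl2
srvctl start service -d ORCL -s rpt_svc
```

Have the application connect through rpt_svc; the consumer group / resource plan mapping can then key on the service name.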

  • Configuring Loadbalanced RDBMS (RAC) in Planning Datasources (11.1.1.2)

Hi everyone, during the configuration of every RDB in System 11.1.1.2, I'm able to use a "load-balanced" JDBC URL (by using "advanced options" in the Oracle Hyperion EPM Configurator) --- thanks for that, DataDirect Connect® for JDBC Drivers v3.7!
But when creating Planning datasources I have no option to enter a custom JDBC URL in order to connect to the RAC (load-balanced JDBC URL).
Should I manually change the JDBC URL in the Planning System schema (HSP_DATASOURCES table)?
Is this a supported configuration?
    Thanks in advance.
    Best regards,
    JavierV

    Hi everyone,
I have successfully configured my Planning datasource with RAC capabilities. I achieved that by changing the JDBC URL from the backend (by modifying the HSPSYS_DATASOURCE table in the Planning System catalog).
    FYI:
Planning 11.1.1.2 comes by default with 2 JDBC libraries:
$HYPERION_HOME/deployments/WebLogic9/servers/HyperionPlanning/webapps/HyperionPlanning/WEB-INF/lib/hyjdbc.jar
    That ships the following classes:
    hyperion.jdbc.base.BaseDriver
    hyperion.jdbc.db2.DB2Driver
    hyperion.jdbc.informix.InformixDriver
    hyperion.jdbc.mysql.MySQLDriver
    hyperion.jdbc.oracle.OracleDriver
    hyperion.jdbc.sqlserver.SQLServerDriver
    hyperion.jdbc.sybase.SybaseDriver
    hyperion.jdbcspy.SpyDriver
$HYPERION_HOME/deployments/WebLogic9/servers/HyperionPlanning/webapps/HyperionPlanning/WEB-INF/lib/jdbc12.jar
    That ships the following classes:
    oracle.jdbc.OracleDriver
    oracle.jdbc.driver.OracleDriver
Both Oracle drivers (Hyperion or Oracle) support RAC configuration, but you have to be careful about each driver's JDBC connection syntax:
    JDBC sample URL for connecting to a RAC (Hyperion JDBC Driver):
    jdbc:hyperion:oracle://<servername>:1521;ServiceName=<sid>;LoadBalancing=true;AlternateServers=(<servername>:1521;ServiceName=<sid>)
    JDBC sample URL for connecting to a RAC (Oracle JDBC Driver):
jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(FAILOVER=on)(ADDRESS_LIST=(ADDRESS=(protocol=tcp)(host=<servername>)(port=1521))(ADDRESS=(protocol=tcp)(host=<servername>)(port=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=<sid>)))
    As an additional note, RDB_SERVER_URL field is VARCHAR2(255 Byte) so the JDBC URL must not exceed this limitation.
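Since the column is VARCHAR2(255 Byte), a quick shell check of a candidate URL's length can save a failed update (the host and service names below are placeholders):

```shell
# Hypothetical host/service names; the point is only the length check.
url='jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(FAILOVER=on)(ADDRESS_LIST=(ADDRESS=(protocol=tcp)(host=rac1)(port=1521))(ADDRESS=(protocol=tcp)(host=rac2)(port=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=plansvc)))'
len=${#url}
echo "URL length: $len bytes"
if [ "$len" -le 255 ]; then echo "fits in VARCHAR2(255 BYTE)"; else echo "too long"; fi
```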
    Best regards everyone.

  • Planning Oracle RAC with ASM

    Hello Experts.
Planning to build a RAC in the following environment.
    Oracle Database Ver.: 11.2.0.3
    OS: Linux RHEL 6.1 - 64 Bit
    Servers: HP
    ASM and ACFS for storage.
Please advise on planning guides/articles, especially hardware concerns such as interconnect speed.
    Thank you in advance:
    Ajazjn76.

    http://www.amazon.com/Pro-Oracle-Database-Linux-ebook/dp/B004VJ472I
    http://tahiti.oracle.com
    http://support.oracle.com
    http://www.oracle.com/partners/en/knowledge-zone/database/rac11g-exam-330024.html
Based on what you posted, no one can advise you on interconnect speeds. How is anyone to know whether you could get by with 1 GbE, need 10 GbE with jumbo frames, or perhaps 40 Gb InfiniBand?
What you wrote is roughly equivalent to: "I need to move some stuff, please tell me how to do it." We don't know if you are moving a ball of cotton or a submarine.

  • RAC Capacity Planning

Does anybody have any documents or rules of thumb for performing server sizing in terms of CPU speed, number of CPUs, kind of CPU, storage sizing, RAM, etc. for DSS or OLTP applications?
    Kind Regards | Sanjiv

    Hi Sanjiv,
    I wanted to post the same question when I found your thread. Unfortunately, I also have questions rather than answers, but I hope that by providing a more specific question, I can attract some interest to this thread.
We are in the process of installing Oracle 11g R2 RAC on an IBM Power P740 server. We use IBM Logical Partitioning (LPARs) and, more specifically, the micro-partitioning feature (the capability to allocate fractions of a core to an LPAR). The operating system is AIX 6.1. I/O is virtualized through a VIO partition.
    We have started with 5 non-clustered Oracle 11.2.0.2 instances and planned to deploy them in 4 two-node clusters. Each cluster would host only a single instance, except for one cluster that was supposed to host two database instances. We sized clusters by assigning to each node the same number of cores that we had on the non-clustered instances.
We soon learned that clusterware and ASM processes represent a significant CPU utilization overhead of about 0.35 cores on each node. So if a node runs on an LPAR with 0.6 cores assigned to it, the CPU utilization is above 50% even when the cluster is idle. As a result, although we doubled the resources when migrating to the cluster, we have an undersized system and an insufficient number of cores in the resource pool.
At this point we are attempting to resolve the problem by consolidating the database instances into 2 clusters only: a first 2-node cluster with a single large instance and a second 2-node cluster with 4 smaller instances. We hope that reducing the number of clusters per P7 server will reduce the compound effect of clusterware overheads on the CPU utilization of the P7 resource pools.
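The utilization figure above can be reproduced with simple arithmetic (0.35 cores of observed clusterware/ASM overhead on a 0.6-core LPAR):

```shell
# Observed figures from the post: ~0.35 cores of clusterware + ASM overhead per node.
overhead=0.35   # cores consumed by clusterware/ASM when the cluster is idle
lpar=0.6        # cores assigned to the micro-partition
idle_util=$(awk -v o="$overhead" -v c="$lpar" 'BEGIN { printf "%.0f", 100 * o / c }')
echo "idle CPU utilization: ${idle_util}%"
```

Roughly 58% of the LPAR is gone before any database work runs, which matches the "above 50% when idle" observation.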
    Can somebody please comment on our sizing assumptions and CPU utilization findings:
1. We have sized each node in the 2-node cluster to be of equal size to the original non-clustered Oracle 11g instance. Is this sizing approach common?
2. Our finding is that clusterware processes cause significant CPU utilization overhead and that we cannot use micro-partitioning as we did before for non-clustered Oracle 11g instances. In other words, we now need LPARs with a minimum of 1 core assigned to each node.
    Thank you and regards,
    VladB

  • Select best plan for DATAGUARD IN RAC ARCHITECTURE

    Hi
In my planning I will have 2 nodes in my RAC, but I don't know how I must configure Data Guard. Should it be one server or two servers?
If I have only one server for Data Guard, what happens to my RAC in a failover?
What about storage?
Please help me select the best solution.

In my planing i will have 2node in my RAC but i don't know how do i must to config DATA Guard ..is it be one server or 2server!
What is the version?
If your primary is a RAC database, you can configure the standby as either a single node or a RAC standby. It purely depends on your requirements.
If it is a RAC standby, it should be on two servers; if it is a standalone standby, then one server.
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimarysingleinstance-131970.pdf -- for RAC primary Standalone standby
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimaryracphysicalsta-131940.pdf -- for RAC primary and RAC standby
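As a minimal sketch of the primary-side setup for a physical standby (the db_unique_names "prim" and "stby" are placeholders; these parameters look the same whether the standby is RAC or single-instance):

```shell
# Sketch only: "prim" and "stby" are hypothetical db_unique_names.
# Primary-side parameters for shipping redo to a physical standby.
sqlplus / as sysdba <<'SQL'
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,stby)' SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_dest_2='SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' SCOPE=BOTH SID='*';
ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH SID='*';
SQL
```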
if i have only one server for data guard in fail-over what happen for my RAC?
Failover applies when there is no primary left in your configuration. If you have performed a failover to the standby, it is for now a single standalone database, but it is still a clustered database: if you add any node, you can of course extend your instances to the newly added node of the cluster.
what about storage?
It depends on you. I prefer SAN.
>
user13344656 (Newbie), Registered: Mar 7, 2011, Total Posts: 82, Total Questions: 51 (45 unresolved)
>
May I ask why your questions are never marked as answered? Lots of people here share ideas; are they still not helpful to you, or do you simply ignore them after getting help?
    Consider closing your questions as answered and keep the forum clean.
    Read Forums Etiquette / Reward Points https://forums.oracle.com/forums/ann.jspa?annID=718

  • Data plans in the new wireless world

I see an impending collapse of many people's internet service approaching soon and very little discussion on the topic, so I decided to write this to help inform Verizon of some of its customers' needs.
       What originally attracted me to Verizon was its effort to push beyond the big markets and reach all of America. Create a country where everyone is "connected". Join Verizon and we will take care of you no matter where you live. These are the ideas Verizon has pushed onto people over the years, and I commend them on doing a very good job at reaching these goals. The issue is, they have acquired such a large customer base that it is now creating problems. An excellent marketing tactic of wireless service providers was always to offer up unlimited data plans to attract people to their service, and this used to create very little risk for the service provider because wireless data was slower than the telegraph! Cellular providers used to sell phones; now they sell computers. Have they adjusted to this new market accordingly?
       When you had a 3G phone, or even back when it was 2G or 1X, paying extra for unlimited data was pointless. As Verizon's CFO recently pointed out, "Unlimited is just a word"; "people don't use near the data they think they do". This is exactly why they could profit from unlimited data in the past, because this was true. Have you ever tried to stream a hi-def movie on 3G service? You were lucky if it wasn't pausing to buffer every 10 seconds, and what did you do every time this happened? You stopped trying to watch it! Therefore you didn't use the data required to stream that content. Now, just when wireless technology is finally becoming fast enough to be useful or enjoyable, it is being shut down.
       The new 4G LTE services have finally made it possible to reach 8-10 Mbps download speeds on a regular basis. You can stream a full 1080p hi-def Blu-ray quality movie with 7.1 surround sound if your internet connection speed is over 5 Mbps. This sounds great, doesn't it: finally I can watch SyFy, Hulu or Vudu with my aircard. This is the problem Verizon is facing, because that one 90-minute movie from Vudu at 1080p hi-def just sucked up 5 GB! No, that was not a typo; full hi-def streaming video will push 4-5 GB in a couple of hours. Verizon had to curtail this "excessive" data usage by reverting to tiered plans or "shared data", so with the new data plans that was a $50 movie, and at that price you would once again stop watching it and not use the data required to stream that content.
       Verizon broadband could not handle this volume of data being used when people started getting 4G internet service, so they had no choice but to end unlimited plans or be faced with the reality that they could not support their millions of customers' data usage. For most people this will not create a big issue, because they probably still live somewhere with access to other internet services such as DSL, cable, or, best of all, fiber optic. DSL and cable will usually have a plan that provides fast enough internet speeds for streaming video, but you will have to pay for a premium service.
       This, however, does create a step backwards in what seemed like a step forward for the rural community. The 3G unlimited aircard was a great option for rural communities, finally offering them an affordable internet service that often could still play Netflix, as well as all the normal uses such as e-mail and browsing; although it would not stream a 1080p hi-def movie, it could at times stream a standard-def movie. Now, with unlimited going out the window, all those people that thought it was going to be so great to upgrade to a 4G aircard and finally watch movies quickly realized that wasn't going to happen. Worse, Verizon is showing signs of completely eliminating all unlimited data as they course through their contracts looking for legal ways to stop honoring them, and for those that they cannot stop honoring they have started to "throttle" usage when it exceeds a certain amount, usually around 2 GB a month. This is perfectly legal to do; almost all ISPs will do this when you exceed a certain amount of data. The difference is, when your ISP offers 8 Mbps connection speeds and throttles you down to 5 Mbps because you use a lot of data, you never really notice it unless you download media instead of streaming it and your downloads seem to take longer than normal, but it doesn't affect you enough to complain. When your internet speeds are 1-2 Mbps, as they are with 3G service, throttling brings you down to 300-600 Kbps! They will throttle your speed down so much that you can't do much of anything except e-mail and browsing; Netflix will not work anymore, so thinking you can stay with a 3G device and just keep what you have isn't an option. They are forcing everyone onto new plans.
       In the past it was very hard to use 10 GB of data on a wireless carrier's broadband because speeds were too slow to allow use of all the online media. Now it is very easy to use 60 GB a month if you like movies and online TV. (That is 60 GB without downloading any media, just streaming it.) Perhaps if we are lucky Verizon will once again return to the table and look for ways to help their rural customers without taking our internet away from us. If I could pay for DSL from my local phone company I would, but I do not even have a phone line tied to my house, and if I decided to get one it still would not have internet access. But due to Verizon's efforts in expanding their 4G service across the country as quickly as possible, I do have excellent 4G wireless service available in my home. Just as I thought it was time to get a Verizon aircard and start watching movies again, they told me I will never be able to afford it.
      If Verizon can create a rural community data plan for users that have no other means to access the internet, and it doesn't cost $600 a month, then I will happily buy it, while also understanding that it cannot cost $30 a month either. But if people could get phone, internet (with, say, a 50 GB limit), TV (by TV I don't mean Verizon TV, which I agree should be an addition to the bill; I mean the free Hulu and SyFy and paid Netflix accounts that are accessible with any internet connection fast enough to stream), and a movie store in their living room (Vudu), all in one package, I think that $200 a month is a reasonable price.
     Now all I can do is wait and see who calls first. If Verizon creates a new data plan that allows media streaming at a reasonable price, I'll buy it. If my local phone company offers me high-speed internet and phone first, I'll buy that instead and throw my current $140/month Verizon bill in the trash when I end the contract. I will not have 2 phones; I only have a Verizon phone because I have no landline in my home. So I guess the race is on to see who can maintain the rural communities' business.
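The poster's 5 GB-per-movie figure is plausible arithmetic: a Blu-ray-quality stream at roughly 7.5 Mbps (an assumed bitrate, consistent with the "over 5 Mbps" claim above) over a 90-minute movie works out to about 5 GB:

```shell
# Rough check of the data-usage claim: ~7.5 Mbps stream for 90 minutes.
awk 'BEGIN {
  mbps = 7.5; minutes = 90
  gb = mbps * minutes * 60 / 8 / 1000   # megabits -> gigabytes (decimal)
  printf "%.1f GB\n", gb                # about 5 GB, as the post says
}'
```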

What about ViaSat Exede satellite? 12 Mbps down and 3 Mbps up.
25 GB for $130.
Free data from midnight to 5 am.
Hook up a VoIP phone and you've got close to what you're asking for, depending on whether you're a night owl or not.

  • Active session Spike on Oracle RAC 11G R2 on HP UX

    Dear Experts,
We need urgent help, please, as we are facing very low performance in the production database.
We are running Oracle 11g RAC in an HP-UX environment. The ADDM report follows. Kindly check it and please help me figure out the issue and resolve it as soon as possible.
    ---------Instance 1---------------
              ADDM Report for Task 'TASK_36650'
    Analysis Period
    AWR snapshot range from 11634 to 11636.
    Time period starts at 21-JUL-13 07.00.03 PM
    Time period ends at 21-JUL-13 09.00.49 PM
    Analysis Target
    Database 'MCMSDRAC' with DB ID 2894940361.
    Database version 11.2.0.1.0.
    ADDM performed an analysis of instance mcmsdrac1, numbered 1 and hosted at
    mcmsdbl1.
    Activity During the Analysis Period
    Total database time was 38466 seconds.
    The average number of active sessions was 5.31.
    Summary of Findings
       Description           Active Sessions      Recommendations
                             Percent of Activity  
    1  CPU Usage             1.44 | 27.08         1
    2  Interconnect Latency  .07 | 1.33           1
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Findings and Recommendations
    Finding 1: CPU Usage
    Impact is 1.44 active sessions, 27.08% of total activity.
    Host CPU was a bottleneck and the instance was consuming 99% of the host CPU.
    All wait times will be inflated by wait for CPU.
    Host CPU consumption was 99%.
       Recommendation 1: Host Configuration
       Estimated benefit is 1.44 active sessions, 27.08% of total activity.
       Action
          Consider adding more CPUs to the host or adding instances serving the
          database on other hosts.
       Action
          Session CPU consumption was throttled by the Oracle Resource Manager.
          Consider revising the resource plan that was active during the analysis
          period.
    Finding 2: Interconnect Latency
    Impact is .07 active sessions, 1.33% of total activity.
    Higher than expected latency of the cluster interconnect was responsible for
    significant database time on this instance.
    The instance was consuming 110 kilo bits per second of interconnect bandwidth.
    20% of this interconnect bandwidth was used for global cache messaging, 21%
    for parallel query messaging and 7% for database lock management.
    The average latency for 8K interconnect messages was 42153 microseconds.
    The instance is using the private interconnect device "lan2" with IP address
    172.16.200.71 and source "Oracle Cluster Repository".
    The device "lan2" was used for 100% of interconnect traffic and experienced 0
    send or receive errors during the analysis period.
       Recommendation 1: Host Configuration
       Estimated benefit is .07 active sessions, 1.33% of total activity.
       Action
          Investigate cause of high network interconnect latency between database
          instances. Oracle's recommended solution is to use a high speed
          dedicated network.
       Action
          Check the configuration of the cluster interconnect. Check OS setup like
          adapter setting, firmware and driver release. Check that the OS's socket
          receive buffers are large enough to store an entire multiblock read. The
          value of parameter "db_file_multiblock_read_count" may be decreased as a
          workaround.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Cluster" was not consuming significant database time.
    Wait class "Commit" was not consuming significant database time.
    Wait class "Concurrency" was not consuming significant database time.
    Wait class "Configuration" was not consuming significant database time.
    Wait class "Network" was not consuming significant database time.
    Wait class "User I/O" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    Hard parsing of SQL statements was not consuming significant database time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    ----------------Instance 2 --------------------
              ADDM Report for Task 'TASK_36652'
    Analysis Period
    AWR snapshot range from 11634 to 11636.
    Time period starts at 21-JUL-13 07.00.03 PM
    Time period ends at 21-JUL-13 09.00.49 PM
    Analysis Target
    Database 'MCMSDRAC' with DB ID 2894940361.
    Database version 11.2.0.1.0.
    ADDM performed an analysis of instance mcmsdrac2, numbered 2 and hosted at
    mcmsdbl2.
    Activity During the Analysis Period
    Total database time was 2898 seconds.
    The average number of active sessions was .4.
    Summary of Findings
        Description                 Active Sessions      Recommendations
                                    Percent of Activity  
    1   Top SQL Statements          .11 | 27.65          5
    2   Interconnect Latency        .1 | 24.15           1
    3   Shared Pool Latches         .09 | 22.42          1
    4   PL/SQL Execution            .06 | 14.39          2
    5   Unusual "Other" Wait Event  .03 | 8.73           4
    6   Unusual "Other" Wait Event  .03 | 6.42           3
    7   Unusual "Other" Wait Event  .03 | 6.29           6
    8   Hard Parse                  .02 | 5.5            0
    9   Soft Parse                  .02 | 3.86           2
    10  Unusual "Other" Wait Event  .01 | 3.75           4
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Findings and Recommendations
    Finding 1: Top SQL Statements
    Impact is .11 active sessions, 27.65% of total activity.
    SQL statements consuming significant database time were found. These
    statements offer a good opportunity for performance improvement.
       Recommendation 1: SQL Tuning
       Estimated benefit is .05 active sessions, 12.88% of total activity.
       Action
          Investigate the PL/SQL statement with SQL_ID "d1s02myktu19h" for
          possible performance improvements. You can supplement the information
          given here with an ASH report for this SQL_ID.
          Related Object
             SQL statement with SQL_ID d1s02myktu19h.
             begin dbms_utility.validate(:1,:2,:3,:4); end;
       Rationale
          The SQL Tuning Advisor cannot operate on PL/SQL statements.
       Rationale
          Database time for this SQL was divided as follows: 13% for SQL
          execution, 2% for parsing, 85% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "d1s02myktu19h" was executed 48 times and had
          an average elapsed time of 7 seconds.
       Rationale
          Waiting for event "library cache pin" in wait class "Concurrency"
          accounted for 70% of the database time spent in processing the SQL
          statement with SQL_ID "d1s02myktu19h".
       Rationale
          Top level calls to execute the PL/SQL statement with SQL_ID
          "63wt8yna5umd6" are responsible for 100% of the database time spent on
          the PL/SQL statement with SQL_ID "d1s02myktu19h".
          Related Object
             SQL statement with SQL_ID 63wt8yna5umd6.
             begin DBMS_UTILITY.COMPILE_SCHEMA( 'TPAUSER', FALSE ); end;
       Recommendation 2: SQL Tuning
       Estimated benefit is .02 active sessions, 4.55% of total activity.
       Action
          Run SQL Tuning Advisor on the SELECT statement with SQL_ID
          "fk3bh3t41101x".
          Related Object
             SQL statement with SQL_ID fk3bh3t41101x.
             SELECT MEM.MEMBER_CODE ,MEM.E_NAME,Pol.Policy_no
             ,pol.date_from,pol.date_to,POL.E_NAME,MEM.SEX,(SYSDATE-MEM.BIRTH_DATE
             ) AGE,POL.SCHEME_NO FROM TPAUSER.MEMBERS MEM,TPAUSER.POLICY POL WHERE
             POL.QUOTATION_NO=MEM.QUOTATION_NO AND POL.BRANCH_CODE=MEM.BRANCH_CODE
             and endt_no=(select max(endt_no) from tpauser.members mm where
             mm.member_code=mem.member_code AND mm.QUOTATION_NO=MEM.QUOTATION_NO)
             and member_code like '%' || nvl(:1,null) ||'%' ORDER BY MEMBER_CODE
       Rationale
          The SQL spent 92% of its database time on CPU, I/O and Cluster waits.
          This part of database time may be improved by the SQL Tuning Advisor.
       Rationale
          Database time for this SQL was divided as follows: 100% for SQL
          execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "fk3bh3t41101x" was executed 14 times and had
          an average elapsed time of 4.9 seconds.
       Rationale
          At least one execution of the statement ran in parallel.
       Recommendation 3: SQL Tuning
       Estimated benefit is .02 active sessions, 3.79% of total activity.
       Action
          Run SQL Tuning Advisor on the SELECT statement with SQL_ID
          "7mhjbjg9ntqf5".
          Related Object
             SQL statement with SQL_ID 7mhjbjg9ntqf5.
             SELECT SUM(CNT) FROM (SELECT COUNT(PROC_CODE) CNT FROM
             TPAUSER.TORBINY_PROCEDURE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
             :B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND PR_EFFECTIVE_DATE<=
             :B2 AND PROC_CODE = :B1 UNION SELECT COUNT(MED_CODE) CNT FROM
             TPAUSER.TORBINY_MEDICINE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
             :B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND M_EFFECTIVE_DATE<= :B2
             AND MED_CODE = :B1 UNION SELECT COUNT(LAB_CODE) CNT FROM
             TPAUSER.TORBINY_LAB WHERE BRANCH_CODE = :B6 AND QUOTATION_NO = :B5
             AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND L_EFFECTIVE_DATE<= :B2 AND
             LAB_CODE = :B1 )
       Rationale
          The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
          This part of database time may be improved by the SQL Tuning Advisor.
       Rationale
          Database time for this SQL was divided as follows: 0% for SQL execution,
          0% for parsing, 100% for PL/SQL execution and 0% for Java execution.
       Rationale
          SQL statement with SQL_ID "7mhjbjg9ntqf5" was executed 31 times and had
          an average elapsed time of 3.4 seconds.
       Rationale
          Top level calls to execute the SELECT statement with SQL_ID
          "a11nzdnd91gsg" are responsible for 100% of the database time spent on
          the SELECT statement with SQL_ID "7mhjbjg9ntqf5".
          Related Object
             SQL statement with SQL_ID a11nzdnd91gsg.
             SELECT POLICY_NO,SCHEME_NO FROM TPAUSER.POLICY WHERE QUOTATION_NO
             =:B1
       Recommendation 4: SQL Tuning
       Estimated benefit is .01 active sessions, 3.03% of total activity.
       Action
          Investigate the SELECT statement with SQL_ID "4uqs4jt7aca5s" for
          possible performance improvements. You can supplement the information
          given here with an ASH report for this SQL_ID.
          Related Object
             SQL statement with SQL_ID 4uqs4jt7aca5s.
             SELECT DISTINCT USER_ID FROM GV$SESSION, USERS WHERE UPPER (USERNAME)
             = UPPER (USER_ID) AND USERS.APPROVAL_CLAIM='VC' AND USER_ID=:B1
       Rationale
          The SQL spent only 0% of its database time on CPU, I/O and Cluster
          waits. Therefore, the SQL Tuning Advisor is not applicable in this case.
          Look at performance data for the SQL to find potential improvements.
       Rationale
          Database time for this SQL was divided as follows: 100% for SQL
          execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "4uqs4jt7aca5s" was executed 261 times and had
          an average elapsed time of 0.35 seconds.
       Rationale
          At least one execution of the statement ran in parallel.
       Rationale
          Top level calls to execute the PL/SQL statement with SQL_ID
          "91vt043t78460" are responsible for 100% of the database time spent on
          the SELECT statement with SQL_ID "4uqs4jt7aca5s".
          Related Object
             SQL statement with SQL_ID 91vt043t78460.
             begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
             4); end;
       Recommendation 5: SQL Tuning
       Estimated benefit is .01 active sessions, 3.03% of total activity.
       Action
          Run SQL Tuning Advisor on the SELECT statement with SQL_ID
          "7kt28fkc0yn5f".
          Related Object
             SQL statement with SQL_ID 7kt28fkc0yn5f.
             SELECT COUNT(*) FROM TPAUSER.APPROVAL_MASTER WHERE APPROVAL_STATUS IS
             NULL AND (UPPER(CODED) = UPPER(:B1 ) OR UPPER(PROCESSED_BY) =
             UPPER(:B1 ))
       Rationale
          The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
          This part of database time may be improved by the SQL Tuning Advisor.
       Rationale
          Database time for this SQL was divided as follows: 100% for SQL
          execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "7kt28fkc0yn5f" was executed 1034 times and
          had an average elapsed time of 0.063 seconds.
       Rationale
          Top level calls to execute the PL/SQL statement with SQL_ID
          "91vt043t78460" are responsible for 100% of the database time spent on
          the SELECT statement with SQL_ID "7kt28fkc0yn5f".
          Related Object
             SQL statement with SQL_ID 91vt043t78460.
             begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
             4); end;
    Finding 2: Interconnect Latency
    Impact is .1 active sessions, 24.15% of total activity.
    Higher than expected latency of the cluster interconnect was responsible for
    significant database time on this instance.
    The instance was consuming 128 kilo bits per second of interconnect bandwidth.
    17% of this interconnect bandwidth was used for global cache messaging, 6% for
    parallel query messaging and 8% for database lock management.
    The average latency for 8K interconnect messages was 41863 microseconds.
    The instance is using the private interconnect device "lan2" with IP address
    172.16.200.72 and source "Oracle Cluster Repository".
    The device "lan2" was used for 100% of interconnect traffic and experienced 0
    send or receive errors during the analysis period.
       Recommendation 1: Host Configuration
       Estimated benefit is .1 active sessions, 24.15% of total activity.
       Action
          Investigate cause of high network interconnect latency between database
          instances. Oracle's recommended solution is to use a high speed
          dedicated network.
       Action
          Check the configuration of the cluster interconnect. Check OS setup like
          adapter setting, firmware and driver release. Check that the OS's socket
          receive buffers are large enough to store an entire multiblock read. The
          value of parameter "db_file_multiblock_read_count" may be decreased as a
          workaround.
       Symptoms That Led to the Finding:
          Inter-instance messaging was consuming significant database time on this
          instance.
          Impact is .06 active sessions, 14.23% of total activity.
             Wait class "Cluster" was consuming significant database time.
             Impact is .06 active sessions, 14.23% of total activity.
    Finding 3: Shared Pool Latches
    Impact is .09 active sessions, 22.42% of total activity.
    Contention for latches related to the shared pool was consuming significant
    database time.
    Waits for "library cache lock" amounted to 5% of database time.
    Waits for "library cache pin" amounted to 17% of database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .09 active sessions, 22.42% of total activity.
       Action
          Investigate the cause for latch contention using the given blocking
          sessions or modules.
       Rationale
          The session with ID 17 and serial number 15595 in instance number 1 was
          the blocking session responsible for 34% of this recommendation's
          benefit.
       Symptoms That Led to the Finding:
          Wait class "Concurrency" was consuming significant database time.
          Impact is .1 active sessions, 24.96% of total activity.
    Finding 4: PL/SQL Execution
    Impact is .06 active sessions, 14.39% of total activity.
    PL/SQL execution consumed significant database time.
       Recommendation 1: SQL Tuning
       Estimated benefit is .05 active sessions, 12.5% of total activity.
       Action
          Tune the entry point PL/SQL "SYS.DBMS_UTILITY.COMPILE_SCHEMA" of type
          "PACKAGE" and ID 6019. Refer to the PL/SQL documentation for addition
          information.
       Rationale
          318 seconds spent in executing PL/SQL "SYS.DBMS_UTILITY.VALIDATE#2" of
          type "PACKAGE" and ID 6019.
       Recommendation 2: SQL Tuning
       Estimated benefit is .01 active sessions, 1.89% of total activity.
       Action
          Tune the entry point PL/SQL
          "SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS" of type "PACKAGE" and
          ID 68654. Refer to the PL/SQL documentation for addition information.
    Finding 5: Unusual "Other" Wait Event
    Impact is .03 active sessions, 8.73% of total activity.
    Wait event "DFS lock handle" in wait class "Other" was consuming significant
    database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .03 active sessions, 8.73% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits. Refer to
          Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .03 active sessions, 8.27% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits in Service
          "mcmsdrac".
       Recommendation 3: Application Analysis
       Estimated benefit is .02 active sessions, 5.05% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits in Module "TOAD
          9.7.2.5".
       Recommendation 4: Application Analysis
       Estimated benefit is .01 active sessions, 3.21% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits in Module
          "toad.exe".
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
    Finding 6: Unusual "Other" Wait Event
    Impact is .03 active sessions, 6.42% of total activity.
    Wait event "reliable message" in wait class "Other" was consuming significant
    database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .03 active sessions, 6.42% of total activity.
       Action
          Investigate the cause for high "reliable message" waits. Refer to
          Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .03 active sessions, 6.42% of total activity.
       Action
          Investigate the cause for high "reliable message" waits in Service
          "mcmsdrac".
       Recommendation 3: Application Analysis
       Estimated benefit is .02 active sessions, 4.13% of total activity.
       Action
          Investigate the cause for high "reliable message" waits in Module "TOAD
          9.7.2.5".
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
    Finding 7: Unusual "Other" Wait Event
    Impact is .03 active sessions, 6.29% of total activity.
    Wait event "enq: PS - contention" in wait class "Other" was consuming
    significant database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .03 active sessions, 6.29% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits. Refer to
          Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .02 active sessions, 6.02% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits in Service
          "mcmsdrac".
       Recommendation 3: Application Analysis
       Estimated benefit is .02 active sessions, 4.93% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits with
          P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
          "3599" respectively.
       Recommendation 4: Application Analysis
       Estimated benefit is .01 active sessions, 2.74% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits in Module
          "Inbox Reader_92.exe".
       Recommendation 5: Application Analysis
       Estimated benefit is .01 active sessions, 2.74% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits in Module
          "TOAD 9.7.2.5".
       Recommendation 6: Application Analysis
       Estimated benefit is .01 active sessions, 1.37% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits with
          P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
          "3598" respectively.
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
    Finding 8: Hard Parse
    Impact is .02 active sessions, 5.5% of total activity.
    Hard parsing of SQL statements was consuming significant database time.
    Hard parses due to cursor environment mismatch were not consuming significant
    database time.
    Hard parsing SQL statements that encountered parse errors was not consuming
    significant database time.
    Hard parses due to literal usage and cursor invalidation were not consuming
    significant database time.
    The Oracle instance memory (SGA and PGA) was adequately sized.
       No recommendations are available.
       Symptoms That Led to the Finding:
          Contention for latches related to the shared pool was consuming
          significant database time.
          Impact is .09 active sessions, 22.42% of total activity.
             Wait class "Concurrency" was consuming significant database time.
             Impact is .1 active sessions, 24.96% of total activity.
    Finding 9: Soft Parse
    Impact is .02 active sessions, 3.86% of total activity.
    Soft parsing of SQL statements was consuming significant database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .02 active sessions, 3.86% of total activity.
       Action
          Investigate application logic to keep open the frequently used cursors.
          Note that cursors are closed by both cursor close calls and session
          disconnects.
       Recommendation 2: Database Configuration
       Estimated benefit is .02 active sessions, 3.86% of total activity.
       Action
          Consider increasing the session cursor cache size by increasing the
          value of parameter "session_cached_cursors".
       Rationale
          The value of parameter "session_cached_cursors" was "100" during the
          analysis period.
       Symptoms That Led to the Finding:
          Contention for latches related to the shared pool was consuming
          significant database time.
          Impact is .09 active sessions, 22.42% of total activity.
             Wait class "Concurrency" was consuming significant database time.
             Impact is .1 active sessions, 24.96% of total activity.
    Finding 10: Unusual "Other" Wait Event
    Impact is .01 active sessions, 3.75% of total activity.
    Wait event "IPC send completion sync" in wait class "Other" was consuming
    significant database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .01 active sessions, 3.75% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits. Refer
          to Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .01 active sessions, 3.75% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits with P1
          ("send count") value "1".
       Recommendation 3: Application Analysis
       Estimated benefit is .01 active sessions, 2.59% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits in
          Service "mcmsdrac".
       Recommendation 4: Application Analysis
       Estimated benefit is .01 active sessions, 1.73% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits in
          Module "TOAD 9.7.2.5".
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Commit" was not consuming significant database time.
    Wait class "Configuration" was not consuming significant database time.
    CPU was not a bottleneck for the instance.
    Wait class "Network" was not consuming significant database time.
    Wait class "User I/O" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    Please help.

    Hello experts...
    Please do the needful... It's really very urgent.
    Thanks,
    Syed

  • Oracle 10g RAC installation on HP-UX 11.31 for SAP ERP 6.04

    Dear experts,
We are trying to install SAP ERP 6.0 EHP4 with Oracle 10g RAC on HP-UX 11.31. Please note that we are using the VERITAS cluster filesystem (CFS) for this purpose, not HP-UX ServiceGuard ClusterFileSystem.
*As per the SAP procedure, we have installed a plain SAP system with a single-instance Oracle database (ref: Configuration of SAP NetWeaver for Oracle Database 10gRAC guide). Now we are first trying to install Oracle Clusterware (CRS), and then we will install the Oracle RAC software. Is this procedure right?*
Which file (and its path) do we have to use for the CRS installation and its patch? Is it runInstaller under /oracle/stage/102_64/clusterware? With this file we can install CRS 10.2.0.1. Similarly for Oracle RAC, which files are we supposed to use, i.e. for installation and patch upgrade?
Also, can we use the clusterware package available directly from Oracle in the SAP environment?
We tried to install CRS with runInstaller, and hit failures in the Configuration Assistant stage after running the root.sh script.
The following commands fail in the Configuration Assistant:
    /oracle/CRS/102_64/bin/racgons add_config erpprdd1:4948 erpprdd2:4948
    /oracle/CRS/102_64/bin/oifcfg setif -global  lan1/20.20.20.0:cluster_interconnect lan0/192.168.3.0:public
    /oracle/CRS/102_64/bin/cluvfy stage -post crsinst -n erpprdd1,erpprdd2
The 1st command gives no output when run manually.
The 2nd command's output: PRIF-12: failed to initialize cluster support services
The 3rd command fails every check.
Please suggest a solution; we are also expecting answers to the questions above.
    Thanks & Regards,
    Tejas

    >
    Charles Yu wrote:
    > Q1:  Oracle RAC with 9.2.x on HP-UX?
    > A:   For HA environment, cluster software is: MC/SG on HP-UX 11.31; there are  optional components of MC/SG  for supporting Oracle RAC and SAP application.  I was confused that I could not find the installation guide regarding 4.6C on MC/SG HA environment of HP-UX.
    > Charles
Relevant docs for a Service Guard (SG) cluster are available at http://docs.hp.com. I hope you have checked for support of Oracle 9.2.x on HP-UX 11.31.
    >
    Charles Yu wrote:
    > Q2: Any reason why you don't use a supported database version?
> A: Actually, in order to avoid the risk of a database upgrade and minimize the migration risk, top level management has decided to keep the same Oracle version. Indeed, we don't plan to migrate the application. On the other hand, it is complicated to do the assessment for an application migration.
    > Charles
You can also combine the OS migration and DB release upgrade in a single stretch, with the same downtime.

  • Need help on upgrade to 11.1.0.6 to 11.1.0.7 on RAC .

    Hi,
we have a 2-node RAC setup; the DB version is 11.1.0.6 and the OS is Linux x86-64. We found that patch 6890831 needs to be applied as part of the upgrade activity. I went through the patch README but have a few doubts about the procedure. Can anybody help me with this?
After reading the document I am not clear on the upgrade path: do I need to upgrade the clusterware as well or not? Based on my initial study I have prepared the following action plan; please correct me if it is wrong.
    1) Stop the DB.
    2) Stop nodeapps.
3) Stop the CRS (non-rolling upgrade).
4) Apply the patch to the Clusterware ORACLE_HOME.
5) Apply the patch to the RDBMS ORACLE_HOME.
6) Perform post-installation steps such as the DB upgrade.
Please elaborate on the 4th step. Thanks in advance.
    Regards
    APPS-DBA.

> Upon reading the document i am not clear on upgrade path like do i need to upgrade clusterware as well or not?
You need to upgrade the clusterware before the RDBMS, so it is not optional: first upgrade the clusterware, then the RDBMS.
The steps you listed seem to be in order (at a high level).
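At a high level, the non-rolling sequence discussed above might look like the following on each node. This is only a sketch: the home paths and database name are hypothetical, and the patch README remains the authoritative procedure.

```shell
# Hypothetical sketch of the non-rolling patch order: clusterware first, then RDBMS.
srvctl stop database -d MYDB            # 1) stop the database
srvctl stop nodeapps -n node1           # 2) stop nodeapps (repeat per node)
crsctl stop crs                         # 3) stop CRS (as root, on each node)
cd /patch/6890831
ORACLE_HOME=/u01/crs/11.1.0 opatch apply        # 4) patch the Clusterware home
ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 opatch apply  # 5) patch the RDBMS home
# 6) post-install: restart CRS and run the catalog/upgrade scripts per the README
```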

  • Oracle 10g RAC design with ASM and OCFS

    Hi all,
    I have a question about a proposed Oracle 10g Release 2 RAC design for a 2 node cluster.
ASM can store database files, but not the Oracle binaries, OCR, or voting disk. Likewise, OCFS version 1 does not support a shared Oracle home. We plan to use OCFS version 2 with ASM version 2 on Red Hat Enterprise Linux Server 4 with Oracle 10g Release 2 (10.2.0.1).
For OCFS v2, a shared Oracle home and shared OCR and voting disk are supported. My question is: does the following proposed architecture make sense for OCFS v2 with ASM v2 on Red Hat Linux 4?
    Oracle 10g Release 2 on Red Hat Enterprise Linux Server 4:
    OCFS V2:
    - shared Oracle home and binaries
    - shared OCR and vdisk files
    - CRS software shared OCFS v2 filesystem
    - spfile
    - controlfiles
    - tnsnames.ora
    ASM v2 with ASMLib v2:
    Proposed ASM disk groups:
    - data_dg for application data
    - backupdg for flashback and archivelogs
    - undo_rac1dg ASM diskgroup for undo tablespace for racnode1
    - undo_rac2dg ASM diskgroup for undo tablespace for racnode2
    - redo_rac1dg ASM diskgroup to hold redo logs for racnode1
    - redo_rac2dg ASM diskgroup to hold redo logs for racnode2
    - temp1dg temp tablespace for racnode1
    - temp2dg temp tablespace for racnode2
    Does this sound like a good initial design?
    Ben Prusinski, Senior DBA

OK Tim, thanks for the advice.
I think NetBackup can be integrated with RMAN, but I don't want to lose time on this (political).
    To summarize:
    ORACLE_HOME and CRS_HOME on each node (RAID1 and NTFS)
    Shared storage:
    Disk1 and disk 2: RAID1: - Raw partition 1 for OCR
    - Raw partition 2 for VotingDisk
    - OCFS for FLASH_RECOVERY_AREA
    Disk3, disk4 and disk5: RAID 0 - Raw with ASM redundancy normal 1 diskgroup for database files.
    This is a running project here, will start testing the design on VMware and then go for production setup.
    Regards
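The "ASM redundancy normal" diskgroup summarized above would be carved out roughly as follows. This is a sketch only: the disk device names and diskgroup name are hypothetical, and on this Windows setup the actual device strings depend on how the raw partitions were stamped.

```shell
# Hypothetical sketch: one NORMAL-redundancy ASM diskgroup over disks 3-5,
# letting ASM mirror extents across failure groups instead of hardware RAID.
sqlplus / as sysdba <<'EOF'
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '\\.\ORCLDISKDATA3'
  FAILGROUP fg2 DISK '\\.\ORCLDISKDATA4', '\\.\ORCLDISKDATA5';
EOF
```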

  • Multiple databases/instances on 4-node RAC Cluster including Physical Stand

    OS: Windows 2003 Server R2 X64
    DB: 10.2.0.4
    Virtualization: NONE
    Node Configuration: x64 architecture - 4-Socket Quad-Core (16 CPUs)
    Node Memory: 128GB RAM
    We are planning the following on the above-mentioned 4-node RAC cluster:
    Node 1: DB1 with instanceDB11 (Active-Active: Load-balancing & Failover)
    Node 2: DB1 with instanceDB12 (Active-Active: Load-balancing & Failover)
    Node 3: DB1 with instanceDB13 (Active-Passive: Failover only) + DB2 with instanceDB21 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB31 (Active-Active: Load-balancing & Failover) + DB4 with instance41 (Active-Active: Load-balancing & Failover)
    Node 4: DB1 with instanceDB14 (Active-Passive: Failover only) + DB2 with instanceDB22 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB32 (Active-Active: Load-balancing & Failover) + DB4 with instance42 (Active-Active: Load-balancing & Failover)
    Note: DB1 will be the physical primary PROD OLTP database and will be open in READ-WRITE mode 24x7x365.
    Note: DB2 will be a Physical Standby of DB1 and will be open in Read-Only mode for reporting purposes during the day-time, except for 3 hours at night when it will apply the logs.
    Note: DB3 will be a Physical Standby of a remote database DB4 (not part of this cluster) and will be mounted in Managed Recovery mode for automatic failover/switchover purposes.
    Note: DB4 will be the physical primary Data Warehouse DB.
    Note: Going to 11g is NOT an option.
    Note: Data Guard broker will be used across the board.
    Please answer/advise of the following:
    1. Is the above configuration supported and why so? If not, what are the alternatives?
    2. Is the above configuration recommended and why so? If not, what are the recommended alternatives?

    Hi,
As far as I understand, there is nothing wrong with the configuration, but you need to consider the points below while implementing the final design.
1. Number of CPUs on each server
2. Memory on each server
3. If you have a RAC physical standby, the apply process (MRP0) will run on only one instance.
4. Since you are configuring the physical standby on the 3rd and 4th nodes of DB1's 4-node cluster, where the DB13 and DB14 instances are used only for failover, a disaster or power failure in the entire data center would take out both the primary and the standby (assuming they reside in the same data center), so it may not be a highly available architecture. If you use extended RAC for this configuration it makes sense: Nodes 1 and 2 would reside in Datacenter A and Nodes 3 and 4 in Datacenter B.
    Thanks,
    Keyur

  • JDBC connection to Oracle 10g RAC periodically times out

    I've been banging my head against the wall for months now and can't figure out why this is and what's causing it.
    We have 6x CF8 servers in our environment. 3 of which work perfectly and the other 3 have the following problem. All 6 machines were installed at the same time and followed the exact same installation plan.
    When I configure Oracle RAC data source, some of the machines time-out connecting to Oracle from time-to-time.
    Config:
    Solaris 9 on both CF and Oracle
    CF8 Enterprise with the latest updater.
    Apache 2 (not that it's relevant)
    6 machines, load-balanced (not clustered), identical install and configuration.
    data source config:
    JDBC URL: jdbc:macromedia:oracle://10.0.0.3:1521;serviceName=dbname.ourdomain.com;AlternateServers= (10.0.0.4:1521);LoadBalancing=true
    DRIVER CLASS: macromedia.jdbc.MacromediaDriver
    The problem:
    Every few minutes, CF starts hanging requests that deal with a specific RAC only data source. After about 30 seconds, all requests bail and generate this error in cfserver.log:
    A non-SQL error occurred while requesting a connection from dbsource.
    Timed out trying to establish connection
    This happens with any RAC data source on the "bad" servers while the "good" servers don't have this problem. The "bad" server doesn't have any problems with direct (non-rac) Oracle data source.
Already tried:
Moving server connections around on a switch (ruling out a bad switch port)
Copying the driver from a healthy server (but it's the same installer anyway)
Changing from a RAC to a normal Oracle data source - this works perfectly, so at the moment I have 3 servers connecting to a specific Oracle instance and the other 3 connecting to RAC.
Googling and searching forums and even Oracle Metalink - nothing I could see was relevant to this.
    It's a shame that after spending a ton of money on CF8 upgrades and Oracle RAC, we can't really utilize fail-over on the database connection.
    Any takers?
    Thanks,
    Henry
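For reference, the data source URL from the post can be reconstructed as below. One detail worth checking (an observation, not a confirmed fix): the URL as posted contains a space after `AlternateServers=`, and connection URLs are often parsed strictly, so building it without stray whitespace is the safer form.

```shell
# Rebuild the DataDirect-style RAC URL from its parts, with no embedded whitespace.
PRIMARY="10.0.0.3:1521"
ALTERNATE="10.0.0.4:1521"
SERVICE="dbname.ourdomain.com"
URL="jdbc:macromedia:oracle://${PRIMARY};serviceName=${SERVICE};AlternateServers=(${ALTERNATE});LoadBalancing=true"
echo "$URL"
```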

    I have the following in my CLASSPATH:
    C:\Ora10g1\product\10.2.0\db_1\jdbc\lib\jdbc.jar;
    C:\Ora10g1\product\10.2.0\db_1\jdbc\lib\ojdbc14.jar;
    C:\Ora10g1\product\10.2.0\db_1\jlib\jndi.jar;
    C:\Ora10g1\product\10.2.0\db_1\jlib\orai18n.jar;
    Still 'Cannot find type 'oracle.jdbc.pool.OracleDataSource'
    Thanks

  • Anybody using 11g RAC Database with OIM 9.1.0.2?

    Hi,
We are having performance problems with an 11g RAC database and OIM 9.1.0.2.
It looks like the issue could be because of the RAC cluster.
Is anybody using an 11g RAC database with OIM 9.1.0.2?
If so, which JDBC driver are you using? Also, which app server (WebLogic or OAS, etc.)?
    Appreciate your input.
    Regards
    Vijay Chinnasamy

We are using WebLogic 10.3, and I am sorry to tell you that we are yet to move to production.
We are planning to have the database in RAC, so I just thought of taking inputs from you.
Can you tell me under which operations you face the poor performance issue?
For us the maximum number of users in production is around 6k, so I am wondering how this RAC availability will affect performance.
    Thanks for sharing your inputs.
