RAC related queries

Dear Team,
I am configuring RAC using Red Hat 5 on VMware. I have done the basic settings and installed the cluster software, but the last two scripts do not work on node 2. I am also unable to run the vipca command, and I get the error given below.
[root@rac2 ~]# /u01/app/crs/product/10.2.0/crs/root.sh
WARNING: directory '/u01/app/crs/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/crs/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/crs/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
When I run ./vipca on node 2, it gives the error below.
[root@rac2 ~]# cd /u01/app/crs/product/10.2.0/crs/bin/
[root@rac2 bin]# ./vipca
/u01/app/crs/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Please suggest
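As a first sanity check, independent of the replies, it is worth confirming that the library itself exists on node 2. These are generic Linux diagnostics, not commands from the original post:

```shell
# Generic Linux diagnostics: confirm the library the JVM complains
# about exists and is visible to the dynamic loader.
ls -l /lib/libpthread.so.0 /lib64/libpthread.so.0 2>/dev/null || true
ldconfig -p | grep libpthread.so.0
```

If the library shows up here, the problem is in the environment the Java binary is launched with, not a genuinely missing file.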

Hi,
I would suggest going through the discussion below on OTN:
Problem with libpthread.so.0 cannot open shared object file
It may help you.
Regards,
SVM
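For context, the widely documented cause of this exact error with 10gR2 on RHEL 5 (an assumption here; the linked thread should confirm whether it applies) is that vipca and srvctl export LD_ASSUME_KERNEL=2.4.19, which the newer glibc on RHEL 5 cannot honour, so the loader fails to find libpthread.so.0. A minimal sketch of the usual workaround, demonstrated on a stand-in snippet; on a real node you would edit $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl the same way:

```shell
# Stand-in for the relevant lines of vipca/srvctl:
cat > /tmp/vipca_snippet <<'EOF'
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
EOF
# Neutralise the variable immediately after it is exported:
sed -i 's/^export LD_ASSUME_KERNEL$/&\nunset LD_ASSUME_KERNEL/' /tmp/vipca_snippet
cat /tmp/vipca_snippet
```

After editing the real scripts, re-run vipca (and root.sh, if it still needs to configure nodeapps).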

Similar Messages

  • Oracle9i RAC & RAC 10g Queries

    All,
    I was wondering whether you could please help out on the following:
    A governmental organization in Cyprus is currently running Oracle9i RAC on Windows 2003 (Oracle instances were installed on server1's & server2's C: drives and they are sharing an OCFS-formatted database
    ---created on (and, seen as) E: drive for both servers/instances---).
    Now, the customer has requested that we have to upgrade Oracle9i RAC to 10g RAC.
    What might be a recommended migration path, if they want to use the same servers and storage for 10g RAC too? I guess I have to have 9i RAC and 10g RAC co-located on the same servers (at least for some period of time), right? What happens, though, to the shared DB? Do I have to create a different partition and map it as a different drive (formatted as OCFS again?) for 10g RAC? Or can the OCFS-formatted E: drive (now used for RAC 9i) also be used for 10g RAC?
    Any support will be greatly appreciated.
    Thanking you and looking forward to hearing from you soon.
    -Pericles Antoniades.

    I am assuming this is a Windows environment.
    I would start with the O/S: make sure all OS-level patches required for 10gR2 are applied.
    At a minimum I would suggest going to 10.2.0.3 for Windows.
    The traditional method of upgrading RAC from Oracle 9i to 10g is to install the Clusterware, install the 10gR2 binaries, and then do a database upgrade.
    You are correct: for a period of time you will have both the 9i and 10g homes on the server.
    I have a paper on OTN on how to upgrade from 9i RAC to 10g RAC; it's not for Windows, but the process should be similar.
    http://www.oracle.com/technology/pub/articles/vallath_rac_upgrade.html

  • Gather_plan_statistics: which reported step is the longest one?

    Hi,
    I got a plan like this from 10.2.0.3:
    | Id  | Operation                           | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
    |   1 |  VIEW                               | SYS_DBA_SEGS   |      2 |   1346 |      2 |00:00:22.12 |     237K|    149K|       |       |          |
    |   2 |   UNION-ALL                         |                |      2 |        |      2 |00:00:22.12 |     237K|    149K|       |       |          |
    |*  3 |    FILTER                           |                |      2 |        |      2 |00:00:22.12 |     228K|    148K|       |       |          |
    |*  4 |     HASH JOIN RIGHT OUTER           |                |      2 |    316 |     18 |00:00:22.12 |     228K|    148K|   925K|   925K|     2/0/0|
    |   5 |      TABLE ACCESS FULL              | USER$          |      2 |   1017 |   2230 |00:00:00.01 |      60 |      0 |       |       |          |
    |*  6 |      HASH JOIN                      |                |      2 |    316 |     18 |00:00:22.11 |     228K|    148K|   972K|   972K|     2/0/0|
    |   7 |       TABLE ACCESS FULL             | FILE$          |      2 |    215 |    510 |00:00:00.01 |       6 |      1 |       |       |          |
    |*  8 |       HASH JOIN                     |                |      2 |    316 |     18 |00:00:22.09 |     228K|    148K|   826K|   826K|     2/0/0|
    |   9 |        TABLE ACCESS FULL            | TS$            |      2 |     52 |    132 |00:00:00.01 |     144 |      0 |       |       |          |
    |  10 |        NESTED LOOPS                 |                |      2 |    316 |     18 |00:00:22.05 |     228K|    148K|       |       |          |
    |* 11 |         HASH JOIN                   |                |      2 |   1181 |     18 |00:01:16.36 |     228K|    148K|   841K|   841K|     2/0/0|
    |  12 |          TABLE ACCESS BY INDEX ROWID| OBJ$           |      2 |   1303 |     34 |00:00:01.66 |     803 |    414 |       |       |          |
    |* 13 |           INDEX SKIP SCAN           | I_OBJ2         |      2 |   1303 |     34 |00:00:01.63 |     769 |    411 |       |       |          |
    |  14 |          VIEW                       | SYS_OBJECTS    |      2 |    664K|   1547K|00:02:56.44 |     227K|    148K|       |       |          |
    |  15 |           UNION-ALL                 |                |      2 |        |   1547K|00:02:50.25 |     227K|    148K|       |       |          |
    |* 16 |            TABLE ACCESS FULL        | TAB$           |      2 |    121K|    202K|00:00:51.64 |   42399 |  38268 |       |       |          |
    |  17 |            TABLE ACCESS FULL        | TABPART$       |      2 |    407K|    995K|00:00:43.36 |   10966 |   9600 |       |       |          |
    |  18 |            TABLE ACCESS FULL        | CLU$           |      2 |     10 |     20 |00:00:00.05 |   49513 |  38869 |       |       |          |
    |* 19 |            TABLE ACCESS FULL        | IND$           |      2 |   9522 |  20938 |00:01:45.48 |   56275 |  30987 |       |       |          |
    |  20 |            TABLE ACCESS FULL        | INDPART$       |      2 |    110K|    304K|00:00:00.66 |    3706 |   3617 |       |       |          |
    |* 21 |            TABLE ACCESS FULL        | LOB$           |      2 |    725 |   1470 |00:00:31.22 |   64635 |  26767 |       |       |          |
    |  22 |            TABLE ACCESS FULL        | TABSUBPART$    |      2 |   8256 |   7670 |00:00:00.10 |     244 |    238 |       |       |          |
    |  23 |            TABLE ACCESS FULL        | INDSUBPART$    |      2 |   6644 |  13888 |00:00:00.16 |     144 |    138 |       |       |          |
    |  24 |            TABLE ACCESS FULL        | LOBFRAG$       |      2 |     10 |    134 |00:00:00.01 |       6 |      2 |       |       |          |
    |* 25 |         TABLE ACCESS CLUSTER        | SEG$           |     18 |      1 |     18 |00:00:00.13 |      60 |      8 |       |       |          |
    |* 26 |          INDEX UNIQUE SCAN          | I_FILE#_BLOCK# |     18 |      1 |     18 |00:00:00.01 |      40 |      0 |       |       |          |
    Please clarify which step takes the most time.
    For me it's TABLE ACCESS FULL | IND$, because of the 1 min 45 sec.
    Right?
    Regards
    GregG

    Execution time was Elapsed: 00:02:57.06. The problem we are observing is on a 4-node RAC, where queries on dba_segments are really slow (around 3 minutes) for a query
    with owner = 'SOMEUSER' and segment_name = 'SOMETABLE'.
    I've filed an SR for that:
    select /* nogather_plan_statistics */ *
    from
    dba_segments where owner = 'SOMEUSER' and segment_name = 'T'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.08       0.15          2         12          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2     13.13     114.79      77575     116642          0           1
    total        4     13.22     114.94      77577     116654          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 842
    Rows     Row Source Operation
          1  VIEW  SYS_DBA_SEGS (cr=116642 pr=77575 pw=0 time=9464381 us)
          1   UNION-ALL  (cr=116642 pr=77575 pw=0 time=9464364 us)
          1    FILTER  (cr=111942 pr=77275 pw=0 time=9464292 us)
          9     HASH JOIN RIGHT OUTER (cr=111942 pr=77275 pw=0 time=9464272 us)
       1115      TABLE ACCESS FULL USER$ (cr=30 pr=0 pw=0 time=2335 us)
          9      HASH JOIN  (cr=111912 pr=77275 pw=0 time=9461535 us)
        255       TABLE ACCESS FULL FILE$ (cr=3 pr=0 pw=0 time=292 us)
          9       HASH JOIN  (cr=111909 pr=77275 pw=0 time=9459660 us)
         66        TABLE ACCESS FULL TS$ (cr=72 pr=0 pw=0 time=1354 us)
          9        NESTED LOOPS  (cr=111837 pr=77275 pw=0 time=9457454 us)
          9         HASH JOIN  (cr=111807 pr=77271 pw=0 time=32951145 us)
         17          TABLE ACCESS BY INDEX ROWID OBJ$ (cr=424 pr=254 pw=0 time=2539083 us)
         17           INDEX SKIP SCAN I_OBJ2 (cr=407 pr=253 pw=0 time=2529132 us)(object id 37)
    771213          VIEW  SYS_OBJECTS (cr=111383 pr=77017 pw=0 time=117253847 us)
    771213           UNION-ALL  (cr=111383 pr=77017 pw=0 time=114168993 us)
      99067            TABLE ACCESS FULL TAB$ (cr=22498 pr=19431 pw=0 time=15880157 us)
    497710            TABLE ACCESS FULL TABPART$ (cr=5481 pr=4814 pw=0 time=21918411 us)
         10            TABLE ACCESS FULL CLU$ (cr=23939 pr=19557 pw=0 time=9906 us)
      10373            TABLE ACCESS FULL IND$ (cr=27142 pr=19521 pw=0 time=56601297 us)
    152472            TABLE ACCESS FULL INDPART$ (cr=1853 pr=1805 pw=0 time=326335 us)
        735            TABLE ACCESS FULL LOB$ (cr=30273 pr=11699 pw=0 time=5149129 us)
       3835            TABLE ACCESS FULL TABSUBPART$ (cr=122 pr=119 pw=0 time=36661 us)
       6944            TABLE ACCESS FULL INDSUBPART$ (cr=72 pr=69 pw=0 time=74745 us)
         67            TABLE ACCESS FULL LOBFRAG$ (cr=3 pr=2 pw=0 time=8357 us)
          9         TABLE ACCESS CLUSTER SEG$ (cr=30 pr=4 pw=0 time=57045 us)
          9          INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=20 pr=0 pw=0 time=430 us)(object id 9)
          0    NESTED LOOPS  (cr=1 pr=1 pw=0 time=17757 us)
          0     NESTED LOOPS  (cr=1 pr=1 pw=0 time=17747 us)
          0      FILTER  (cr=1 pr=1 pw=0 time=17740 us)
          0       NESTED LOOPS OUTER (cr=1 pr=1 pw=0 time=17732 us)
          0        NESTED LOOPS  (cr=1 pr=1 pw=0 time=17723 us)
          0         TABLE ACCESS BY INDEX ROWID UNDO$ (cr=1 pr=1 pw=0 time=17716 us)
          0          INDEX RANGE SCAN I_UNDO2 (cr=1 pr=1 pw=0 time=17703 us)(object id 35)
          0         TABLE ACCESS CLUSTER SEG$ (cr=0 pr=0 pw=0 time=0 us)
          0          INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=0 pr=0 pw=0 time=0 us)(object id 9)
          0        TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
          0         INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
          0      TABLE ACCESS BY INDEX ROWID FILE$ (cr=0 pr=0 pw=0 time=0 us)
          0       INDEX UNIQUE SCAN I_FILE2 (cr=0 pr=0 pw=0 time=0 us)(object id 42)
          0     TABLE ACCESS CLUSTER TS$ (cr=0 pr=0 pw=0 time=0 us)
          0      INDEX UNIQUE SCAN I_TS# (cr=0 pr=0 pw=0 time=0 us)(object id 7)
          0    FILTER  (cr=4699 pr=299 pw=0 time=3648121 us)
          0     HASH JOIN RIGHT OUTER (cr=4699 pr=299 pw=0 time=3648115 us)
       1115      TABLE ACCESS FULL USER$ (cr=731 pr=171 pw=0 time=2317 us)
          0      HASH JOIN  (cr=3968 pr=128 pw=0 time=2559171 us)
         66       TABLE ACCESS FULL TS$ (cr=72 pr=0 pw=0 time=772 us)
          0       NESTED LOOPS  (cr=3896 pr=128 pw=0 time=2556999 us)
        255        TABLE ACCESS FULL FILE$ (cr=3 pr=0 pw=0 time=290 us)
          0        TABLE ACCESS CLUSTER SEG$ (cr=3893 pr=128 pw=0 time=2555989 us)
          0         INDEX RANGE SCAN I_FILE#_BLOCK# (cr=3893 pr=128 pw=0 time=2554106 us)(object id 9)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             21        0.00          0.01
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                      1000        0.06          9.46
      gc cr grant 2-way                             541        0.00          0.18
      gc current block 2-way                         28        0.00          0.01
      gc current block 3-way                         26        0.00          0.02
      gc cr block 3-way                              16        0.00          0.01
      gc cr block 2-way                             216        0.00          0.12
      gc cr block busy                               10        0.03          0.14
      gc cr multi block request                   19251        0.02          4.76
      db file scattered read                       5704        0.21         77.91
      db file parallel read                         439        0.11          6.86
      latch: KCL gc element parent latch             20        0.00          0.00
      gc current grant 2-way                         15        0.00          0.00
      SQL*Net message from client                     2      214.30        214.31
      latch free                                     10        0.00          0.00
      latch: object queue header operation            3        0.00          0.00
      latch: gcs resource hash                        9        0.00          0.00
      gc cr disk read                                13        0.00          0.00
      latch: cache buffers chains                    35        0.00          0.00
      gc buffer busy                                529        0.04          3.51
      read by other session                         265        0.05          1.92
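For reference, row-source statistics like those in the first plan come from the gather_plan_statistics hint combined with DBMS_XPLAN.DISPLAY_CURSOR. A minimal sketch (owner and segment names are placeholders), written out as a script to run in SQL*Plus on one of the nodes:

```shell
# Build the SQL*Plus script; run it on a RAC node with
#   sqlplus / as sysdba @/tmp/plan_stats.sql
cat > /tmp/plan_stats.sql <<'SQL'
select /*+ gather_plan_statistics */ *
from dba_segments
where owner = 'SOMEUSER' and segment_name = 'SOMETABLE';

-- ALLSTATS LAST shows Starts / E-Rows / A-Rows / A-Time for the last run
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
SQL
cat /tmp/plan_stats.sql
```

When reading the output, note that A-Time on a parent step includes its children, so the 2:56 on the SYS_OBJECTS view already contains the 1:45 spent on the IND$ full scan.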

  • How creating Dev DB environment affects existing Production DB on Exadata

    Hi,
    We have an Exadata V2 quarter rack with 6 cell nodes on a Linux env, and we are adding an Exadata expansion rack HC with 4 cell nodes.
    We will also be creating a development database, and have allocated separate disk groups for the dev env.
    I have a few doubts here; some are related to Exadata and some are generic RAC/database queries:
    --> If there is any issue in the production env and we need to bring down all the cell nodes, the development env will also be down. What is the best recommended setup if we are using the same cell servers for both the prod and dev environments?
    --> Since we will have the prod and dev databases on the same compute nodes, both databases would be using the same CRS services; if any issue arises and CRS needs to be brought down, both databases would be down. Please suggest.
    --> We have 2 instances for HA in the production env; for the dev env we do NOT want the HA feature and would like to create a standalone DB on the running cluster only. What is the recommended approach in this case?
    --> What other challenges might we face in such a scenario, when we keep prod and dev on the same cluster and on the same DB/cell servers?
    Thanks for your time and suggestions in advance.
    Regards
    Saurabh
    Edited by: 877938 on Jun 4, 2012 1:08 PM

    Andy said most of what I would have said but the point of Exadata (RAC) is high availability.
    The point of Dev and Test is to identify things that cause breakage.
    There is a very substantial disconnect between the two. If your testing never causes a failure ... then why bother to test? If developers never try new technologies why bother to develop? If every piece of hardware and software on the planet was bug-free none of us would have a job. Oracle is very very good ... but perfection is still over the horizon. I strongly advise you to not mix these loads. For very little money ... an ODA and a small ZFS will give you what you need to be safe. And if it leaves you with a little spare disk capacity on the Exadata ... don't worry ... it will be utilized sooner than you think.

  • Redirecting Readonly Queries to ADG or RAC

    Hi All,
    We are planning to redirect read-only queries to a separate database. Although we have ADG (Active Data Guard), due to cost constraints we cannot afford it.
    So we are planning to redirect read-only queries to RAC node #4, as we have a four-node RAC cluster configured.
    Is this an optimal approach? Please advise.
    Cheers

    Hi,
    It seems you want to partition your workload by RAC nodes (or instances).
    To my knowledge, RAC is not designed for partitioning by SQL workload, but by data!
    I'd like to give you 2 examples:
    * If you insert data on node 1 and query the same data on node 4 all the time, a lot of traffic between node 1 and node 4 transfers all the blocks of interest between these 2 nodes (well, mostly in the direction 1 => 4). In this case I would test bringing BOTH workloads onto the same node. Unless CPU is saturated there, I'd expect better results.
    * If you insert data on node 1 into today's 'daily partition', and on node 4 run a report on 'yesterday's partition' (which I assume is NOT today's partition), this is reasonable workload separation. In the best case the blocks travel from node 1 to node 4 once over the interconnect, and all the work is done on node 4 without interfering much with node 1.
    hth
    Martin
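Martin's data-vs-workload point aside, if read-only work really must be steered to one instance, the usual mechanism is a dedicated service rather than clients connecting to node 4 directly. A sketch with hypothetical database and instance names (PRODDB, PRODDB1..PRODDB4), written to a file here so the example is self-contained; run the commands on a cluster node:

```shell
# Hypothetical names throughout; pre-12c srvctl syntax.
# REPORTING prefers instance 4 and can fail over to instance 1.
cat > /tmp/add_reporting_service.sh <<'EOF'
srvctl add service -d PRODDB -s REPORTING -r PRODDB4 -a PRODDB1
srvctl start service -d PRODDB -s REPORTING
srvctl status service -d PRODDB -s REPORTING
EOF
cat /tmp/add_reporting_service.sh
```

Clients then connect with SERVICE_NAME=REPORTING in their TNS entry, so the read-only workload can later be relocated to another instance without touching client configuration.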

  • Queries with respect to Oracle 10g RAC - Primary & Standby DB environment

    Hi,
    Please guide me on the following queries:
    On my 3-node RAC with the primary DB, I have configured a database backup (using Flashback) every 30 minutes, and I keep only one copy at a time. I also have a physical standby in the environment, which has redo applied from the primary with a 30-minute delay.
    1. During recovery, is it possible to recover the primary DB to a point in time 2 hours in the past using the same flashback database?
    2. Is it possible to restore the flashback database backup (taken from the 3-node RAC primary DB) into a NEW Oracle setup, different from the 3-node RAC primary setup?
    3. If the storage device of the 3-node RAC setup fails, can I point the 3 node instances at a new storage device which has the backup of the primary DB restored from a standby DB?
    Regards

    Is there anybody who can help on this?

  • Network and storage queries in respect to RAC

    Network port usage identification in the switch
    I'm not sure about the number of sockets/ports used in the private switches for Oracle RAC. If we run a two-node cluster, what does the socket usage look like, and how many more nodes can be added to the cluster?
    For example, if we run two nodes in the cluster, it might use 6 sockets. If the switch has 24 sockets, I can identify how many more nodes can be added to this cluster. This is to understand the scalability of RAC based on the network switch component.
    For information: our environment has a gigabit Ethernet switch; I am not sure about the number of sockets in it.
    Storage question
    1. How do I identify whether a single LUN is shared by multiple databases?
    For example, I see only one data volume in ASM, like '+DATA1', which holds many database datafiles, logfiles, etc. I want to know whether there is a way to see a single LUN, or a group of LUNs, assigned to this volume group in ASM. Is there a recommended practice for this setup?
    I read somewhere that "the best practices advise not to mix the LUN with different databases present in the cluster to minimize IO operations." Question 1 is based on this.
    2. If we add/expand a disk or LUN in the volume group, I think the system needs a reboot of all cluster nodes for it to be visible to Oracle ASM. Am I correct? What restricts adding it dynamically: storage or OS? No clues for me. Any ideas here?
    I have seen that disks which are already part of the disk group can be added dynamically to the ASM disk group, but I am not sure about the addition/expansion of LUNs.
    Thanks in advance.

    > Network port usage identification in the switch
    > I'm not sure about the usage of number of sockets/ports in the private switches for Oracle RAC. If we run two node cluster, how the socket usage is like and how many more nodes can be added to the cluster.
    About the switch for the private network: with 2 nodes you use 2 ports on the switch. If you configure bonding on the private interface, each node uses 2 ports for bonding, so the 2 nodes would use 4 ports on the switch.
    > For example, if we run two nodes in the cluster, it might use 6 sockets. If the switch is a 24 sockets, I can identify how many more nodes can be added into this cluster. This is to know the scalability of the RAC based on the network component switch.
    > for information: Our environment has got a gigabit ethernet switch..not sure about the sockets in it.
    About the number of ports on the switch, you can check with the product owner.
    Many nodes? If you plan to add many nodes, you can buy a new switch and cross-connect it to the old private switch,
    or, for example, use the stack switch feature on the Cisco 3750:
    http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_25_see/configuration/guide/swstack.html
    > Storage question
    > 1. How do I identify if a single LUN is shared by multiple databases?
    > For example, I see only one data volume in ASM, like '+DATA1' which has got many database datafiles, logfile etc. I want to know is there a way I can see a single LUN or a group of LUN been assigned to this volume group in ASM. Is there a recommended practice to do this setup?
    You have to know what storage (EMC/NetApp) you are using, and find its documentation for help.
    About ASM:
    Example: ASM with EMC - check on node 1:
    $ export ORACLE_SID=+ASM1
    $ sqlplus / as sysdba
    SQL> select path, library from v$asm_disk where LABEL='DATA1';
    PATH LIBRARY
    /dev/raw/raw4 System
    That means we have to check the device name for DATA1 in the /etc/sysconfig/rawdevices file.
    Or
    SQL> select path, library from v$asm_disk where LABEL='DATA1';
    PATH LIBRARY
    ORCL:DATA1 ASM Library - Generic Linux, version 2.0.2 (KABI_V2)
    That means ASMLib is used, so:
    # /etc/init.d/oracleasm querydisk DATA1
    Disk "DATA1" is a valid ASM disk on device [1, 127]
    and
    # ls -la /dev/* | grep '1\, ' | grep 127
    We'll see the device name used for 'DATA1'.
    By the way, if you use EMC you can use "powermt" to check the storage on the servers:
    # powermt display dev=all
    > I read somewhere "The best practices advise not to mix the LUN with different databases present in the cluster to minimize IO operations." The question 1 is based on this only.
    Mixing LUNs between different databases may create a bottleneck on the storage hardware.
    > 2. If we add/expand the disk or LUN in the volume group, I think the system needs a reboot of all cluster nodes for this to be visible to Oracle ASM. Am I correct? what is restricting not to add dynamically? Storage or OS..no clues for me. Any ideas here...
    On Linux you have to reboot to reload the library, but you can reboot each of the nodes in turn; you don't need to reboot all nodes at once.
    > I have seen the disks which are already part of the disk group can be added dynamically to the ASM disk group but not sure about the addition/expansion of LUN...
    You can use "dbca" to help add/expand the LUN, once every node sees the new disks.
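To answer question 1 more directly: the disk-to-disk-group mapping is visible in the ASM dictionary views, so a join along the lines below shows which disks (and hence LUN paths) back each group. Written as a script to run in SQL*Plus against the ASM instance (generic v$ views; no site-specific names assumed):

```shell
cat > /tmp/asm_disks.sql <<'SQL'
-- Map every ASM disk group to the disks (LUN paths) behind it.
select g.name  disk_group,
       d.path,
       d.total_mb
from   v$asm_diskgroup g
join   v$asm_disk      d on d.group_number = g.group_number
order  by g.name, d.path;
SQL
cat /tmp/asm_disks.sql
# On a node: export ORACLE_SID=+ASM1; sqlplus / as sysdba @/tmp/asm_disks.sql
```

If several databases place files in the same disk group, they share the LUNs behind that group; separate groups on separate LUNs is the way to keep them apart.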

  • Oracle RAC 11g - Transition queries

    Hi,
    I am about to take over the transition of Oracle databases currently used by a client and managed by another vendor.
    I would like a checklist of sorts for taking over the transition of Oracle RAC 11g R2 from the current vendor.
    I would appreciate it if anybody could help me with this as soon as possible.
    Thanks in advance.

    user2033016 wrote:
    Hi Everyone,
    I installed Oracle RAC 11g on Windows 2008 64-bit. The installation was successful, and crs_stat -t executed successfully.
    But the machine crashed when I executed a select query with 100 sessions (for performance evaluation using TG). A blue screen appeared on two nodes, so I am not able to evaluate the performance.
    Can you please suggest how to solve this issue? Is our hardware insufficient? Or do any software settings have to be changed? I tried most of the techniques, but the problem still exists.
    The following machine configuration is used for the Oracle RAC setup:
    Intel Core 2 Duo 2.66 GHz,
    RAM: 4 GB.
    OS: Windows 2008 Server 64-bit
    Oracle RAC, Clusterware 11g R2 64-bit.
    I guess you have installed/configured RAC using VMware, and I guess the problem is your RAM size. How much disk space and RAM have you allocated for each machine?
    Which documentation did you follow?

  • Some queries on RAC database 11g

    We have a RAC Oracle Database 11g, and we don't want to overburden the OLTP instance. So we thought of bringing the changed data from the OLTP instance to a reporting instance via logical standby or Streams, and logging the changes with the help of the Total Recall feature in the reporting instance.
    Our idea is to retain the historical data using the Total Recall feature in the reporting instance, and to bring the changed data from the OLTP instance to the reporting instance via logical standby or Streams.
    RAC database: OLTP instance ---> mine the changed data via logical standby or Streams ---> reporting instance (track all the changes made to a transaction using the Total Recall feature in this instance)
    My doubts are:
    1) Will the Total Recall feature work with a logical standby database or Streams?
    2) How do we transport the Total Recall-enabled tables from one database to another? Will Data Pump work with flashback-enabled tables?
    3) How do we capture the changes from one instance to another with Oracle Streams? I could not find much documentation on how Streams works on a RAC database. Any pointer/suggestion on this would be much appreciated.
    Regards,
    Richard

    user12075620 wrote:
    > 1) Easy backup and recovery for a specific instance in case of historical data
    What data will that be? If this is a 3rd RAC instance, then it will not make backup easier, as the data is in the database (on shared storage media) and that is what needs to be backed up - the database's data does not reside locally on the instance.
    > 2) Point-in-time recovery is possible
    Again, I do not see how this is a specific advantage that suddenly appears with the introduction of a 3rd RAC instance.
    > 3) If any other node fails, then it won’t affect this instance
    Correct - a 3rd instance increases the redundancy in the cluster. It also scales it and provides additional processing capacity.

  • Oracle 9i rac cache fusion queries

    Hi RAC experts,
    Would like to understand more on the background of cache fusion.
    Kindly advise on the following cache fusion operation as i'm having some confusion.
    1. Read/read - a user on node 1 wants to read a block that a user on node 2 has recently read.
    In this case, the block is already in the cache of node 2, since it was recently read. So does node 1 read from disk to store the block in its own cache, since the info is not in its cache? Or can it request the block directly from the remote cache on node 2, shipping it to its own cache for the select?
    2. Read/write - a user on node 1 wants to read a block that a user on node 2 has recently updated.
    In this case, node 1 will get the recently updated info from the remote cache (node 2) into its local cache, right?
    3. Write/read - a user on node 1 wants to update a block that a user on node 2 has recently read.
    Does node 1 read from disk, since the info is not in its cache? Or can it request the block directly from the remote cache on node 2, shipping it to its own cache for the update?
    thanks
    junior dba

    > 1. read/read - user on node 1 wants to read a block that user on node 2 has recently read.
    Since the block is in another instance's buffer cache, the requesting instance (node 1 in your case) will request the block to be shipped from node 2. It will avoid going to disk.
    > 2. read/write - user on node 1 wants to read a block that user on node 2 has recently updated.
    That's correct.
    > 3. write/read - does node 1 read from disk since the info is not in its cache? or can it request directly from the remote cache in node 2?
    Same as in 1) above: it gets the block from the other instance.
    These concepts are explained well in the RAC Handbook by K. Gopalakrishnan. Here is additional information on the book: http://www.amazon.com/Database-Application-Clusters-Handbook-Osborne/dp/007146509X/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1211888392&sr=8-1
    HTH
    Thanks
    -Chandra
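The block shipping described in the answers can also be observed directly from the instance statistics; a generic query in script form (the statistic names below are the 10g ones; in 9i they are prefixed 'global cache' instead of 'gc'):

```shell
cat > /tmp/gc_traffic.sql <<'SQL'
-- Interconnect block-transfer counts per instance.
-- 10g statistic names; 9i uses the older 'global cache ...' names.
select inst_id, name, value
from   gv$sysstat
where  name in ('gc cr blocks received', 'gc current blocks received')
order  by inst_id, name;
SQL
cat /tmp/gc_traffic.sql
# Run on any node: sqlplus / as sysdba @/tmp/gc_traffic.sql
```

Rising counts confirm that reads are being satisfied from a remote cache rather than from disk.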

  • Parallel queries are failing in 8 node RAC DB

    While running queries with parallel hints, the queries are failing with
    ORA-12805: parallel query server died unexpectedly
    Upon checking the alert logs, I couldn't find anything about ORA-12805, but I did find this error. Please help me fix this problem:
    Fatal NI connect error 12537, connecting to:
    (LOCAL=NO)
    VERSION INFORMATION:
    TNS for Linux: Version 11.1.0.7.0 - Production
    Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
    TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
    Time: 15-MAY-2012 16:49:15
    Tracing not turned on.
    Tns error struct:
    ns main err code: 12537
    TNS-12537: TNS:connection closed
    ns secondary err code: 12560
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
    ORA-609 : opiodr aborting process unknown ospid (18807_47295439087424)
    Tue May 15 16:49:16 2012

    A couple of thoughts come immediately to mind:
    1. When I read ... "Tracing not turned on" ... I wonder to myself ... why not turn on tracing?
    2. When I read ... "Version 11.1.0.7.0" ... I wonder to myself ... why not apply all of the patches Oracle has created in the last 3 years and see if having a fully patched version addresses the issue?
    3. When I read ... "parallel query server died" ... I wonder whether you have gone to support.oracle.com and looked up the causes and solutions for Parallel Query Server dying?
    Of course, I also wonder why you have an 8-node cluster, as that adds substantial complexity, which leads me to wonder: is it happening on only one node, or on all nodes?
    Hope this helps.
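
    For item 1, server-side Oracle Net tracing can be turned on via sqlnet.ora; a minimal sketch (the directory path is an example and must exist and be writable):

    ```
    # sqlnet.ora on the affected database server
    DIAG_ADR_ENABLED = OFF            # in 11g, ADR must be off for classic trace parameters to apply
    TRACE_LEVEL_SERVER = 16           # SUPPORT level; use 4 (USER) for lower volume
    TRACE_DIRECTORY_SERVER = /u01/app/oracle/net_trace
    ```

    New server processes pick this up on their next connection; remember to turn it off afterwards, since level 16 traces grow quickly.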

  • Multi-table INSERT with PARALLEL hint on 2 node RAC

    A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel
    servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I
    used is given below.
    create table t1 ( x int );
    create table t2 ( x int );
    insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
    when (dummy='X') then into t1(x) values (y)
    when (dummy='Y') then into t2(x) values (y)
    select dummy, 1 y from dual;
    I can see multiple sessions using the query below, but on one instance only. This happens not
    only for the statement above but also for statements against real tables (tables with more
    than 20 million rows).
    select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
    sql.sql_text
    from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
    WHERE p.sid = s.sid
    and p.serial# = s.serial#
    and p.sid = ps.sid
    and p.serial# = ps.serial#
    and s.sql_address = sql.address
    and s.sql_hash_value = sql.hash_value
    and qcsid=945
    Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
    Thanks,
    Mahesh

    Please take a look at these 2 articles below
    http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
    http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
    thanks
    http://swervedba.wordpress.com
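
    Beyond the articles above, one thing worth checking is whether PX slaves are deliberately restricted to one instance. In 10g/11.1 this is controlled by the INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP parameters, and in 11.2 also by PARALLEL_FORCE_LOCAL; a sketch of a quick check:

    ```sql
    -- Are PX slaves pinned to a single instance by configuration?
    select inst_id, name, value
    from   gv$parameter
    where  name in ('instance_groups',
                    'parallel_instance_group',
                    'parallel_force_local')
    order  by inst_id, name;
    ```

    If PARALLEL_INSTANCE_GROUP is set (or PARALLEL_FORCE_LOCAL = TRUE), all slaves for a query will be spawned on the instance where the query coordinator runs.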

  • What is best use of 1400 gb SGA (2 rac nodes 768gb each)

    We are currently using 11.2.0.3.0 on a Unix Sun server with 2 RAC nodes, each with 8 UltraSPARC-T1 CPUs (released in 2005, very slow at 1.2 GHz), four threads each, so Oracle sees 32 CPUs. The database is 4 TB in size on a regular SAN (10k-rpm disks), with an
    8 GB SGA.
    Our new boss wants to upgrade the system to the max to get the best performance possible. Money is a concern, of course, but the budget is pretty high. Our use case is 12-16 users at the same time, running reports, some small, others very large (returning a single row up to tens of thousands of rows). Reports take 5 seconds to 5 minutes. Our job is to get the fastest system possible. We have a total of 8 licenses available, so we can have 16 cores. We are also getting a 6 TB all-flash SSD array for the database. We can get any CPU we want, but we can't use parallel query servers due to all kinds of issues we have experienced (too many slaves, RAC interconnect saturation, etc.: whack-a-mole). The SPARC has too many threads, and without PQ, Oracle runs each query in a single thread.
    we have speced out the following system for each RAC node
    HP ProLiant DL380p Gen8 8 SFF server
    2 Intel Xeon E5-2637v2 3.5GHz/4-core cpus
    768 gb ram
    2 HP 300GB 6G SAS 15K drives for database software
    This will give us a total of 4 Xeon E5-2637 v2 CPUs, 16 cores total (0.5 licensing factor for 8 licenses) and 1536 GB of RAM (leaving ~1400 GB for the SGA). This will guarantee an available core for each user. We intend to create a very, very large keep pool, around 300 GB on each node, that will hold all our dimension tables. This, we hope, will reduce reads from the SSD to just the data from the fact tables.
    Are we doing massive overkill here? The budget for this was way less than what our boss expected. Will that big an SGA be wasted? Would, say, 256 GB be fine? Or will Oracle take advantage of it and be able to keep most blocks in there?
    Will an SGA that big cause Oracle problems due to the overhead of handling that much RAM?

    Current System:
    ===========
    a. Version : 11.2.0.3
    b. Unix Sun
    c. CPU - 8 cpus with 4 threads => 32 logical cpus or cores
    d. database 4TB
    e. SAN - 10k speed disk drives
    f. 8gb SGA
    g. 1.2 gb ??
    h. Users --> 12-16 concurrent, running reports of varying size
    i. report elapsed times 5 sec to 5 mins
    j. cpu license -->8
    Target System
    ===========
    a. Version: 11.2.0.3
    b. HP ProLiant DL380p Gen8 8 SFF server
    c. RAM --> 768 GB
    d. 2 HP 300GB 6G SAS 15K drives for database software
    e. large keep pool -->90 gb to  hold all dimension tables. 
    f.  SSD to just data from fact tables
    g. SGA -->256gb
    A reassessment of the performance issues of the current system appears to be required. A good performance-tuning expert should look into the tuning issues of the current application by analyzing AWR performance metrics. If an 8 GB SGA is not enough, the likely reason is that the queries running on the system do not have good access paths: they select more data than necessary and flush recent buffers belonging to other tables involved in the query. Until those issues are identified, the performance problems will follow you wherever you go; as table sizes increase in the future, the problem will reappear. If the queries mostly run with full scans, then re-platforming to Exadata might be the right decision, since Exadata's Smart Scan and cell offloading features work faster and might be the right direction for best performance and the best investment for the future. Compression (COMPRESS FOR OLTP) could be another feature to exploit for further efficiency, reading fewer blocks in less time.
    Investment in infrastructure will solve a few issues in the short term, but the long-term issue will arise again.
    Investment in identifying the performance issues of the current system would be the best investment in this scenario.
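
    On the "will a 256 GB SGA be enough" question specifically, the buffer cache advisory can give a data-driven estimate before any hardware is bought; a sketch (the advisory is populated by default when STATISTICS_LEVEL is TYPICAL or ALL):

    ```sql
    -- Estimated physical reads at candidate buffer cache sizes
    select size_for_estimate   as cache_mb,
           size_factor,
           estd_physical_read_factor,
           estd_physical_reads
    from   v$db_cache_advice
    where  name = 'DEFAULT'
    order  by size_for_estimate;
    ```

    Where the estimated-physical-read curve flattens out is roughly the point beyond which extra cache stops helping; note the advisory only projects up to about 2x the current cache size, so the current 8 GB SGA cannot directly predict a 256 GB one.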

  • How to tune buffer busy waits in RAC

    Hi,
    In our environment we see a lot of buffer busy waits.
    How can I tune that?
    we are using 9i RAC on HACMP.
    Regards
    MMU

    There could be several reasons for buffer busy waits; however, the most common one is high logical reads.
    Normally they relate to bad SQL. You can try to identify those bad queries using a 10046 trace at level 8 or 12, then run tkprof on the trace file and look at the output.
    In a RAC environment you will also see global buffer busy waits, which again relate to buffer busy waits at the instance level.
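
    The 10046 trace mentioned above can be enabled for your own session along these lines (the trace file identifier is just an example name):

    ```sql
    -- Tag the trace file so it is easy to find, then trace waits (level 8)
    alter session set tracefile_identifier = 'bbw_diag';
    alter session set events '10046 trace name context forever, level 8';
    -- ... run the suspect workload here ...
    alter session set events '10046 trace name context off';
    -- then, on the server: tkprof <tracefile>.trc bbw_report.txt sort=exeela
    ```

    Level 12 additionally captures bind values; in the tkprof output, look at which statements accumulate the buffer busy wait time.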

  • Oracle RAC - Not getting performance(TPS) as we expect on insert/update

    Hi All,
    We have a problem while executing insert/update/delete queries on an Oracle RAC system: we are not getting the TPS we expected. The TPS of Oracle RAC (for insert/update/delete) is lower than that of a
    single Oracle instance.
    But while executing select queries, we get almost double the TPS of a single Oracle instance.
    We have done server-side and client-side load balancing.
    Does anyone know how to solve this strange behaviour? Do we need to apply any other settings in ASM or on the Oracle nodes
    for better performance on insert/update/delete queries?
    The following is the Oracle RAC configuration
    OS & Hardware: Windows 2008 R2, Core 2 Duo 2.66 GHz, 4 GB RAM
    Software: Oracle 11g R2 64-bit, Oracle Clusterware & ASM, Microsoft iSCSI Initiator
    Storage simulation: Xeon, 4 GB RAM, 240 GB disk, Win 2008 R2, Microsoft iSCSI Target
    Please help me to solve this. We are almost stuck with this situation.
    Thanks
    Roy

    Load Profile              Per Second   Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~           -------------   ---------------   --------   --------
    DB time(s):                     48.3               0.3       0.26       0.10
    DB CPU(s):                       0.1               0.0       0.00       0.00
    Redo size:                 523,787.9           3,158.4
    Logical reads:               6,134.6              37.0
    Block changes:               3,247.1              19.6
    Physical reads:                  3.5               0.0
    Physical writes:                50.7               0.3
    User calls:                    497.6               3.0
    Parses:                        182.0               1.1
    Hard parses:                     0.1               0.0
    W/A MB processed:                0.1               0.0
    Logons:                          0.1               0.0
    Executes:                      184.0               1.1
    Rollbacks:                       0.0               0.0
    Transactions:                  165.8
    Instance Efficiency Indicators
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %:              93.74   Redo NoWait %:       99.96
    Buffer Hit %:                 99.99   Optimal W/A Exec %: 100.00
    Library Hit %:               100.19   Soft Parse %:        99.96
    Execute to Parse %:            1.09   Latch Hit %:         99.63
    Parse CPU to Parse Elapsd %:  16.44   % Non-Parse CPU:     84.62
    Shared Pool Statistics        Begin     End
                                  -----    -----
    Memory Usage %:               75.89    77.67
    % SQL with executions>1:      71.75    69.88
    % Memory for SQL w/exec>1:    75.63    71.38
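
    The load profile above shows 48.3 seconds of DB time per second against only 0.1 seconds of DB CPU, so nearly all the time is being spent waiting rather than computing. A first diagnostic step (a sketch, not from the original thread) is to look at the top non-idle wait events on both nodes:

    ```sql
    -- Top 10 non-idle wait events across all instances
    select * from (
      select inst_id, event, total_waits,
             time_waited_micro / 1e6 as seconds_waited
      from   gv$system_event
      where  wait_class <> 'Idle'
      order  by time_waited_micro desc
    ) where rownum <= 10;
    ```

    With DML on RAC, events such as "log file sync", "gc buffer busy" and "enq: TX - index contention" near the top would point at redo/commit latency on the simulated iSCSI storage or at index-block contention between the two nodes.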
