Query slow on RAC

Hi experts
A process is running slow on a 2-node RAC: it takes 120 secs.
On a single instance it takes 28 secs.
Both are on 11.2.0.3.0,
with the same configuration in terms of memory and init.ora parameters.
Which logs and parameters should I look at on RAC?
thanks

ASH suggests the following:

Top User Events
Event                     Event Class   % Event   Avg Active Sessions
CPU + Wait for CPU        CPU           90.00     1.44

Top Background Events
Event                     Event Class   % Activity   Avg Active Sessions
CPU + Wait for CPU        CPU           5.09         0.08

Top Cluster Events
No data exists for this section of the report.

Top User Events
Event                     Event Class   % Event   Avg Active Sessions
CPU + Wait for CPU        CPU           88.07     1.28
db file sequential read   User I/O      1.09      0.02

Top Background Events
Event                     Event Class   % Activity   Avg Active Sessions
CPU + Wait for CPU        CPU           5.62         0.08

Top Cluster Events
No data exists for this section of the report.

Similar Messages

  • Query slow on RAC production

    Hi,
    I have a problem with one query on the production database. I am not a DBA, so please forgive my lack of knowledge.
    The query on production runs ~25 minutes, but on development only 10 minutes.
    I already checked:
    1) data volume - the same
    2) statistics - generated
    3) execution plans (attached) - look the same
    Please advise what else to check.
    Thanks,
    Piotr

    On production:
    EVENT                            TOTAL_WAIT_TIME
    direct path read temp            495
    ON CPU                           188
    direct path write temp           162
    gc current block 2-way           105
    db file parallel read            24
    db file sequential read          20
    gc current block 3-way           7
    gc cr multi block request        3

    On development:
    EVENT                            TOTAL_WAIT_TIME
    ON CPU                           149
    direct path write temp           64
    direct path read temp            44
    SQL*Net break/reset to client    1
    It's clear from the above, if these wait events are for the sessions executing the SQL statement, that interacting with the TEMP tablespace is slow. There can be a few main reasons for that: either the TEMP tablespace is experiencing I/O contention, and/or the production environment doesn't have enough PGA for the session to perform the operation in memory without needing more TEMP space on disk. We often see that in a dev environment resources like PGA aren't taxed nearly as much as in prod because the user base is much lower. From the stats, I see that "session pga memory max" is about 1GB in prod but about 1.3GB in dev. Is that enough to make the difference? Not sure. But it's odd, because I see that the non-optimal workarea executions are about the same in each; from the wait events, I'd expect to see more onepass or multipass operations in prod than in dev.
    You are on RAC, so you do have Global Cache wait events in play. However, they aren't nearly as dominant in the runtime as the 'direct path xxx temp' events. Were the total numbers of waits different between the two systems? You provided total wait time, but not the number of waits. I'm going to guess that since the non-optimal workarea executions are similar, the numbers of waits are similar but the average wait time is much higher. If true, this would indicate to me that interaction with the temp tablespace is slower in prod than in dev.
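    A quick way to compare the two systems on exactly those points (a sketch, assuming you can query the V$ views on both; run the same statements on prod and dev and diff the output):

    SELECT sn.name, ss.value
    FROM   v$statname sn, v$sysstat ss
    WHERE  ss.statistic# = sn.statistic#
    AND    sn.name IN ('workarea executions - optimal',
                       'workarea executions - onepass',
                       'workarea executions - multipass');

    SELECT event, total_waits, time_waited
    FROM   v$system_event
    WHERE  event LIKE 'direct path%temp';

    The second query gives the wait counts to go with the total wait times, which is the comparison described above.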
    Cheers,
    Brian

  • Sql query slowness due to rank and columns with null values:

        
    I have the following table in the database with around 10 million records:
    Declaration:
    create table PropertyOwners (
    [Key] int not null primary key,
    PropertyKey int not null,
    BoughtDate DateTime,
    OwnerKey int null,
    GroupKey int null
    )
    go
    [Key] is primary key and combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
    With the following index:
    CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners] (
    [PropertyKey] ASC,
    [BoughtDate] DESC,
    [OwnerKey] DESC,
    [GroupKey] DESC
    )
    go
    Description of the case:
    For a single BoughtDate one property can belong to multiple owners or a single group. For a single record there can be either an OwnerKey or a GroupKey, but not both, so one of them will be null in each record. I am trying to retrieve the data from the table using the
    following query for the OwnerKey. If there are property rows for both owners and a group at the same time, then the rows having OwnerKey are preferred, which is why I am using "OwnerKey desc" in the RANK function.
    declare @ownerKey int = 40000   
    select PropertyKey, BoughtDate, OwnerKey, GroupKey   
    from (    
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,       
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]   
    from PropertyOwners   
    ) as result   
    where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
    It is taking 2-3 seconds to get the records, which is too slow, and it takes a similar time when I try to get the records using the GroupKey. But when I tried to get the records for the PropertyKey with the same query, it executed in 10 milliseconds.
    Maybe the slowness is because OwnerKey/GroupKey in the table can be null and SQL Server is unable to index them effectively. I have also tried an indexed view to pre-rank them, but I can't use it in my query, as the RANK function is not supported in indexed
    views.
    Please note this table is updated once a day, and we are using SQL Server 2008 R2. Any help will be greatly appreciated.

    create table #result (PropertyKey int not null, BoughtDate datetime, OwnerKey int null, GroupKey int null, [Rank] int not null)
    go
    insert into #result (PropertyKey, BoughtDate, OwnerKey, GroupKey, [Rank])
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
    from PropertyOwners
    go
    create index idx ON #result (OwnerKey, [Rank])
    go
    declare @ownerKey int = 1
    select PropertyKey, BoughtDate, OwnerKey, GroupKey
    from #result as result
    where result.[Rank]=1
    and result.[OwnerKey]=@ownerKey
    go
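    If the NULLable OwnerKey/GroupKey columns are the concern, a filtered index on the base table is another option worth testing (a sketch, not from the original thread; whether it helps depends on the plan SQL Server actually chooses for the ranking query):

    CREATE NONCLUSTERED INDEX IX_PropertyOwners_OwnerKey
    ON dbo.PropertyOwners (OwnerKey, PropertyKey, BoughtDate DESC)
    INCLUDE (GroupKey)
    WHERE OwnerKey IS NOT NULL
    go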
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • SQL query slow in new Red Hat environment

    We just migrated to a new dev environment on Linux Red Hat 5, and now the query is very slow. I used TOAD to run the query and it took about 700 ms to finish; however, from any server connection the SQL query takes hours to finish.
    I checked the TOAD monitor; it said db_buffer_cache needs to be increased and the shared pool is too small.
    The three red alerts from TOAD are:
    1. Library cache get hit ratio: dynamic or unsharable SQL
    2. Chained fetch ratio: PCTFREE too low for a table
    3. Parse to execute ratio: high parse to execute ratio
    The app team said it ran really quickly on the old AIX system. However, I ran it on the old system and monitored it in TOAD; it gave me all the same red alerts on the old system, and it did return query results a lot quicker there.
    Here are the parameters on the old system (11gR1 on AIX):
    SQL> show parameter target
    NAME                                 TYPE         VALUE
    ------------------------------------ ------------ ----------
    archive_lag_target                   integer      0
    db_flashback_retention_target        integer      1440
    fast_start_io_target                 integer      0
    fast_start_mttr_target               integer      0
    memory_max_target                    big integer  0
    memory_target                        big integer  0
    pga_aggregate_target                 big integer  278928K
    sga_target                           big integer  0
    SQL> show parameter shared
    NAME                                 TYPE         VALUE
    ------------------------------------ ------------ ----------
    hi_shared_memory_address             integer      0
    max_shared_servers                   integer
    shared_memory_address                integer      0
    shared_pool_reserved_size            big integer  31876710
    shared_pool_size                     big integer  608M
    shared_server_sessions               integer
    shared_servers                       integer      0
    SQL> show parameter db_buffer
    SQL> show parameter buffer
    NAME                                 TYPE         VALUE
    ------------------------------------ ------------ ----------
    buffer_pool_keep                     string
    buffer_pool_recycle                  string
    db_block_buffers                     integer      0
    log_buffer                           integer      2048000
    use_indirect_data_buffers            boolean      FALSE
    SQL>
    On the new 11gR2 Red Hat Linux system the parameters are:
    NAME                                 TYPE         VALUE
    ------------------------------------ ------------ ----------
    archive_lag_target                   integer      0
    db_flashback_retention_target        integer      1440
    fast_start_io_target                 integer      0
    fast_start_mttr_target               integer      0
    memory_max_target                    big integer  2512M
    memory_target                        big integer  2512M
    parallel_servers_target              integer      192
    pga_aggregate_target                 big integer  0
    sga_target                           big integer  1648M
    SQL> show parameter shared
    NAME                                 TYPE         VALUE
    ------------------------------------ ------------ ----------
    hi_shared_memory_address             integer      0
    max_shared_servers                   integer
    shared_memory_address                integer      0
    shared_pool_reserved_size            big integer  28M
    shared_pool_size                     big integer  0
    shared_server_sessions               integer
    shared_servers                       integer      1
    SQL> show parameter buffer
    NAME                                 TYPE         VALUE
    ------------------------------------ ------------ ----------
    buffer_pool_keep                     string
    buffer_pool_recycle                  string
    db_block_buffers                     integer      0
    log_buffer                           integer      18857984
    use_indirect_data_buffers            boolean      FALSE
    SQL>
    Please help. Thanks in advance.

    846422 wrote:
    "why need ddl? we have a sql query slow"
    The DDL shows the physical structure of the table and its physical storage characteristics. All of that is relevant in performance tuning.
    As for the SQL query being slow: it is not.
    You have not provided any evidence that it is slow. And no, comparing performance with a totally different system is not a valid baseline for comparison. (Most cars have 4 wheels, a gearbox and a steering wheel, but that does not mean you can compare different cars, like a VW Beetle with a VW Porsche.)
    What is slow? What are the biggest wait states for the SQL? What does the execution plan say?
    You have not defined a problem - you identified a symptom called "query is slow". You need to diagnose the condition by determining exactly what the SQL query is doing in the database (and please, do not use TOAD and similar tools in an attempt to do this - do it properly instead).
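    "Doing it properly" can be as simple as capturing the real execution statistics for one run (a sketch; the statement must have been run in the same session, and STATISTICS_LEVEL=ALL or a /*+ gather_plan_statistics */ hint is needed for the actual row counts to be populated):

    -- run the slow statement first, then in the same session:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

    -- or trace the session, including wait events, and format the trace file with tkprof:
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- ... run the statement ...
    ALTER SESSION SET EVENTS '10046 trace name context off';

    That produces the wait states and the actual execution plan the reply above is asking for.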

  • Query slow messages in logs

    Hi,
    We have a FAST ESP SP3 set up on a Windows 2010 server. Recently I have observed that the warnings below are getting logged in the error logs frequently.
     [2014-08-14 10:46:28] WARNING fdispatch sdsdd  15700 systemmsg engine sdsdd:15751  query slow by 6.842s + 1.114s
     [2014-08-14 10:46:26] WARNING fdispatch sdsdsd 15700 systemmsg engine sdsdd:15745  query slow by 5.759s + 0.107s
     [2014-08-14 10:40:50] WARNING fdispatch sdsdd  15700 systemmsg engine sdddss:15741 query slow by 9.774s + 0.943s
     [2014-08-14 10:21:54] WARNING fdispatch sdsdds 15700 systemmsg engine sdsdds:15751 query slow by 42.360s + 18.154s
    So what are the possible causes of this delay, and how can it be rectified? Does it have to do with the indexer partitions getting filled with loads of documents?
    With Regards,
    Santanu Mishra

    Santanu,
    Your snippet did not contain any error messages.
    Slow queries can be caused by a variety of factors, but as you mentioned, index partitions can be one of them. The "query slow by" messages can come from the partition that contains the most documents.
    Could you monitor resources such as CPU usage (it should not be pegged), swap usage (there should be none) and I/O when you get these messages?
    Also, the query logs from these "query slow" timeframes may be helpful. Query logs contain statistics that can help identify the reason for slow queries. You can add more timings: http://qrserver:15100/control?debug.timings=1
    I've also seen cases where slow queries are caused by dynamic teaser generation. This can be prevented by setting an upper bound on it. This is set on the admin node, in $FASTSEARCH/etc/config_data/RTSearch/webcluster/fsearch.addon, and the setting
    to be appended at the bottom is:
    juniper.matcher.max_workset_duplicates 35
    After making this change, search-1 must be restarted on all nodes.

  • Query slows down when a where clause is added

    I have a procedure that has a performance issue, so I copied part of the query, ran it in SQL*Plus, and tried to spot which join causes the problem, but I got a result that I can't figure out. I have a query like the one below:
    Select Count(a.ID) From TableA a
    -- INNER JOIN other tables
    WHERE a.TypeID = 2;
    TableA has 140,000 records. When the where clause is not added, the count returns quite quickly, but if I add the where clause, the query slows down and seems never to return, so I have to kill my SQL*Plus session. TableA has an index on TypeID, and TypeID is a number type. When TableA had 3,000 records, the procedure returned very quickly, but it slows down and hangs when TableA contains 140,000 records. Any idea why this slows down the query?
    Also, TypeID is a foreign key to another table (TableAType), so the query above can be written as:
    Select Count(a.ID) From TableA a
    -- INNER JOIN other tables
    INNER JOIN TableAType atype ON a.TypeID = atype.ID
    WHERE atype.Name = 'typename';
    TableAType is a small table containing fewer than 100 records. In this case, would the second query be more efficient than the first?
    Any suggestions are welcome, thanks in advance...

    TableA now has 230,000 records and 28,000 of them have TypeID 2.
    I haven't used the hint yet, but thank you for your reply, which led me to run a query to check how many records in TableA have TypeID 2. When I did this, it seemed pretty fast. So I began with the select count for TableA only and gradually added tables to the join, and the query seems pretty fast as long as TableA is the first table selected from.
    Before, in my query, TableA was the second table joined; there was another table (which is large as well, but not as large as TableA) before TableA. So I think this is why it ran slowly before. I was not at work yesterday, so the query given in my post is based on my rough memory, and I forgot to mention that another table is joined before TableA; really sorry about that.
    I think I learned a lesson here: the largest table needs to be at the beginning of the select statement...
    Thank you very much everyone.
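    Rather than inferring behaviour from the join order, it is worth capturing the plan of the slow form directly (a sketch using the table and column names from the post):

    EXPLAIN PLAN FOR
    SELECT COUNT(a.ID)
    FROM   TableA a
    WHERE  a.TypeID = 2;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    The plan shows whether the TypeID index is actually being used and where the time is likely to go once the other joins are added back in.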

  • Query performance on RAC is a lot slower than single instance

    I simply followed the steps provided by Oracle to install a RAC database with 2 nodes.
    The performance of inserts (Java, thin JDBC) is pretty much the same compared to a single instance on NFS.
    However, the performance of the select query is very slow compared to a single instance.
    I have tried different methods for the storage configuration (ASM with raw, OCFS2), but the performance is still slow.
    When I shut down one instance, leaving only one instance up, the query performance is very fast (as fast as a single instance).
    I am using RHEL5 64-bit (16G of physical memory) and Oracle 11.1.0.6 with patchset 11.1.0.7.
    Could someone help me debug this problem?
    Thanks,
    Chau

    Top 5 timed foreground events:
    DB CPU: time 943 s, %DB time 47.5%
    cursor: pin S wait on X: waits 13,940, time 321 s, avg wait 23 ms, %DB time 16.15%
    direct path read: waits 95,436, time 288 s, avg wait 3 ms, %DB time 14.51%
    IPC send completion sync: waits 546,712, time 149 s, avg wait 0 ms, %DB time 7.49%
    gc cr multi block request: waits 7,574, time 78 s, avg wait 10 ms, %DB time 4.0%
    Another thing I see is that "avg global cache cr block flush time (ms)" is 37.6 ms.

    The DB CPU metric is the amount of CPU time (in microseconds) spent on database user-level calls.
    You should check the SQL statements in the report and tune them:
    - Check the execution plan.
    - If no index is used, consider whether an index would help.
    SQL> set autot trace explain
    SQL> sql statement;
    cursor: pin S wait on X:
    A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
    Use bind variables and avoid dynamic SQL.
    http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
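    For illustration, the difference between literal SQL and bind variables (a minimal sketch against DUAL, not from this thread; the point is that the second form shares one cursor instead of hard parsing for every value):
    DECLARE
      l_cnt NUMBER;
      l_id  NUMBER := 42;
    BEGIN
      -- builds a different SQL text for every value, so each one is hard parsed:
      EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM dual WHERE ' || l_id || ' = ' || l_id INTO l_cnt;
      -- one shared cursor regardless of the value supplied:
      EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM dual WHERE :x = :y' INTO l_cnt USING l_id, l_id;
    END;
    /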
    Check the MEMORY_TARGET initialization parameter.
    By the way, you have high DB CPU (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
    Good Luck

  • Query performance is slow in RAC

    Hi,
    I am analyzing the purpose of Oracle RAC and how it would fit into and be useful for our product. So I have set up a two-node RAC 10g in our lab and I am doing various tests with RAC.
    Test 1 : Fail-over:
    ~~~~~~~~~~~
    First I started with failover testing and did two types of tests: "connect-time" failover and TAF.
    Here TAF has a limitation: it does not handle DML transactions.
    Test 2 : Performance:
    ~~~~~~~~~~~~~~
    Second, I did performance testing. I used 10,000 records for insert, update, read and delete operations against single-node and two-node instances. There was no performance difference between one and two nodes.
    But I had assumed RAC would provide higher performance than single-instance Oracle.
    So I am confused about whether we should choose Oracle RAC for our project.
    DBAs,
    Please give me your answers to the following questions; it will be a great help for me in coming to a conclusion:
    1. What is the main purpose of RAC (because in my understanding, failover is only partially supported and there is no difference in query-processing performance)?
    2. What kind of business environment does RAC fit perfectly?
    3. What are the unique benefits of RAC that are not in single-instance Oracle?
    Thanks

    Hi !
    Well, RAC ensures high availability - conditions apply!
    For the database, create more than one service and have applications connect to the database using the services you create.
    RAC access to the database is service driven. So if planned thoughtfully, load on the database can be distributed physically using the services created for the database.
    So if you have a single database servicing more than one application (of any type, i.e. OLTP, warehouse, etc.), connect to the database using different services so that the init parameters are set for the purpose of the connection.
    NOTE: each database instance running on a node can have a different init_sid.ora to ensure optimum performance for the designated purpose.
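    For example, with srvctl (a sketch; the database, instance and service names are placeholders, not from this thread):
    srvctl add service -d MYDB -s oltp_svc -r MYDB1 -a MYDB2
    srvctl start service -d MYDB -s oltp_svc
    Clients then connect using SERVICE_NAME=oltp_svc, and each service maps to the instance(s) configured for that workload.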
    RAC uses cache fusion (via the Global Cache Service) to reduce I/O on a running production server by transferring buffers from the global cache between nodes when required, thus reducing physical reads. This is its contribution on the performance front.
    For any database that requires access with different init.ora settings for the same physical data, RAC is the best way!
    For high availability, use a TAF-type service.

  • Query slow and contention Oracle RAC

    Hi,
    I have problems with our DB...
    I have a query that takes a long time and freezes...
    Our DB is Oracle 10.2.0.4 on RAC with 2 nodes, on Red Hat Enterprise Linux Server release 5.3, 64G RAM.
    SGA: sga_max_target=16G, sga_target=15G
    db_4k_cache_size=4G, keep=800M, default=10G in automatic mode
    The ADDM results say:
    ACTION: Investigate the SQL statement with SQL_ID "bb9bma7du62h3" for possible performance improvements.
    RELEVANT OBJECT: SQL statement with SQL_ID bb9bma7du62h3 and PLAN_HASH 2145292722
    SELECT X.DOCDOCUMENTOID, X.DOCNUMERODOCSERVICIO, X.DOCFECHADOCSERVICIO,
           X.DOCFECHADOCCGR, IDEN.ISEGLOSANOMBRE, X.SCGDESCRIPCION,
           MAT.MDODESCRIPCION, TIP.TDODESCRIPCION, X.DESNUMEROCASOS, X.DESESTADO,
           X.DESSECCIONID, X.DESUSERID, X.TIMESTAMP, X.TSTAMP,
           IDENREP.ISEGLOSANOMBRE AS REPARTICION, X.NOMOBSERVACION
    FROM
    ( SELECT DOC.DOCDOCUMENTOID, DOC.DOCNUMERODOCSERVICIO, DOC.DOCFECHADOCSERVICIO,
             DOC.DOCFECHADOCCGR, SEC.SCGDESCRIPCION, DES.DESNUMEROCASOS,
             DES.DESESTADO, DES.DESSECCIONID, DES.DESUSERID, DOC.TIMESTAMP,
             DES.TIMESTAMP AS TSTAMP, NOM.NOMOBSERVACION, DOC.DOCREPARTICIONID,
             DOC.DOCSERVICIOID, DOC.DOCTIPODOCUMENTO, DOC.DOCMATERIAINGRESO
    FROM
    TBLDOCESTUDIO DES,
    TBLDOCUMENTO DOC, TBLSECCIONCGR SEC,TBLNOMINA NOM
    WHERE 1 = 1
    AND :B7 = DES.DESESTADO
    AND :B6 = DES.DESSECCIONID
    AND DOC.DOCDOCUMENTOID = DES.DESDOCUMENTOID
          AND DES.DESSECCIONID = SEC.SCGCODIGO
    AND NOM.NOMNOMINAID = DES.DESNOMINAACTIVA
    AND DES.DESNUMEROCASOS BETWEEN :B5 AND :B4
    AND NOM.NOMNRONOMINA BETWEEN :B3 AND :B2
    AND DOC.DOCNUMERODOCSERVICIO LIKE :B1 ) X,
    TBLTIPODOCUMENTO TIP,
    TBLMATERIADOCUMENTO MAT,
    TBLIDENTIFICACIONSERVICIO IDEN,
    TBLIDENTIFICACIONSERVICIO IDENREP
    WHERE 1=1
    AND X.DOCMATERIAINGRESO = MAT.MDOCODIGO
    AND X.DOCTIPODOCUMENTO = TIP.TDOCODIGO
    AND X.DOCSERVICIOID = IDEN.ISESERVICIOID
    AND X.DOCREPARTICIONID = IDENREP.ISESERVICIOID
    AND ROWNUM < :B8
    RATIONALE: SQL statement with SQL_ID "bb9bma7du62h3" was executed 155 times and had an average elapsed time of 283 seconds.
    RATIONALE: Waiting for event "direct path read temp" in wait class "User I/O" accounted for 19% of the database time spent in processing the SQL statement with SQL_ID "bb9bma7du62h3".
    RATIONALE: Waiting for event "enq: TS - contention" in wait class "Other" accounted for 8% of the database time spent in processing the SQL statement with SQL_ID "bb9bma7du62h3".
    RATIONALE: Waiting for event "direct path write temp" in wait class "User I/O" accounted for 4% of the database time spent in processing the SQL statement with SQL_ID "bb9bma7du62h3".
    RATIONALE: Average CPU used per execution was 78 seconds.
    The table TBLDOCUMENTO is in a 4K-block tablespace.
    The indexes of TBLDOCUMENTO are in other tablespaces with an 8K block size.
    The TBLDOCUMENTO segments are about 3.5G in size.
    Any ideas how to address "direct path read temp", "enq: TS - contention" and "direct path write temp",
    and how to make the query faster so it does not hang?
    sorry for my bad english...
    Regards..
    Mario

    DBMS_SQLTUNE.REPORT_TUNING_TASK('SIAPER')                                      
    GENERAL INFORMATION SECTION                                                    
    Tuning Task Name                  : Siaper                                     
    Tuning Task Owner                 : SYSTEM                                     
    Scope                             : COMPREHENSIVE                              
    Time Limit(seconds)               : 60                                         
    Completion Status                 : COMPLETED                                  
    Started at                        : 09/08/2011 10:18:04                        
    Completed at                      : 09/08/2011 10:18:55                        
    Number of SQL Profile Findings    : 1                                          
    Schema Name: SIAPER3                                                           
    SQL ID     : bb9bma7du62h3                                                     
    SQL Text   : SELECT X.DOCDOCUMENTOID, X.DOCNUMERODOCSERVICIO,                  
                 X.DOCFECHADOCSERVICIO, X.DOCFECHADOCCGR, IDEN.ISEGLOSANOMBRE,     
                 X.SCGDESCRIPCION, MAT.MDODESCRIPCION, TIP.TDODESCRIPCION,         
                 X.DESNUMEROCASOS, X.DESESTADO, X.DESSECCIONID, X.DESUSERID,       
                 X.TIMESTAMP, X.TSTAMP, IDENREP.ISEGLOSANOMBRE AS REPARTICION,     
                 X.NOMOBSERVACION FROM ( SELECT DOC.DOCDOCUMENTOID,                
                 DOC.DOCNUMERODOCSERVICIO, DOC.DOCFECHADOCSERVICIO,                
                 DOC.DOCFECHADOCCGR, SEC.SCGDESCRIPCION, DES.DESNUMEROCASOS,       
                 DES.DESESTADO, DES.DESSECCIONID, DES.DESUSERID, DOC.TIMESTAMP,    
                 DES.TIMESTAMP AS TSTAMP, NOM.NOMOBSERVACION,                      
                 DOC.DOCREPARTICIONID, DOC.DOCSERVICIOID , DOC.DOCTIPODOCUMENTO,   
                 DOC.DOCMATERIAINGRESO FROM TBLDOCESTUDIO DES, TBLDOCUMENTO DOC,   
                 TBLSECCIONCGR SEC, TBLNOMINA NOM WHERE 1 = 1 AND :B7 =            
                 DES.DESESTADO AND :B6 = DES.DESSECCIONID AND DOC.DOCDOCUMENTOID   
                 = DES.DESDOCUMENTOID AND DES.DESSECCIONID = SEC.SCGCODIGO AND     
                 NOM.NOMNOMINAID = DES.DESNOMINAACTIVA AND DES.DESNUMEROCASOS      
                 BETWEEN :B5 AND :B4 AND NOM.NOMNRONOMINA BETWEEN :B3 AND :B2 AND  
                 DOC.DOCNUMERODOCSERVICIO LIKE :B1 ) X, TBLTIPODOCUMENTO TIP,      
                 TBLMATERIADOCUMENTO MAT, TBLIDENTIFICACIONSERVICIO IDEN,          
                 TBLIDENTIFICACIONSERVICIO IDENREP WHERE 1=1 AND                   
                 X.DOCMATERIAINGRESO = MAT.MDOCODIGO AND X.DOCTIPODOCUMENTO =      
                 TIP.TDOCODIGO AND X.DOCSERVICIOID = IDEN.ISESERVICIOID AND        
                 X.DOCREPARTICIONID = IDENREP.ISESERVICIOID AND ROWNUM < :B8       
    FINDINGS SECTION (1 finding)                                                   
    1- SQL Profile Finding (see explain plans section below)                       
      A potentially better execution plan was found for this statement.            
      Recommendation (estimated benefit<=10%)                                      
      - Consider accepting the recommended SQL profile.                            
        execute dbms_sqltune.accept_sql_profile(task_name => 'Siaper', replace =>  
                TRUE);                                                             
    EXPLAIN PLANS SECTION
    1- Original With Adjusted Cost
    Plan hash value: 3376565491
    | Id  | Operation                            | Name                          | Rows | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |                               |    1 |   246 |    34   (3)| 00:00:01 |
    |*  1 |  COUNT STOPKEY                       |                               |      |       |            |          |
    |*  2 |   FILTER                             |                               |      |       |            |          |
    |*  3 |    HASH JOIN                         |                               |    1 |   246 |    34   (3)| 00:00:01 |
    |   4 |     TABLE ACCESS FULL                | TBLTIPODOCUMENTO              |   32 |   608 |     2   (0)| 00:00:01 |
    |   5 |     NESTED LOOPS                     |                               |    1 |   227 |    32   (4)| 00:00:01 |
    |   6 |      NESTED LOOPS                    |                               |    1 |   182 |    31   (4)| 00:00:01 |
    |*  7 |       HASH JOIN                      |                               |    1 |   137 |    30   (4)| 00:00:01 |
    |   8 |        NESTED LOOPS                  |                               |    1 |   113 |    27   (0)| 00:00:01 |
    |   9 |         NESTED LOOPS                 |                               |    1 |    75 |    26   (0)| 00:00:01 |
    |  10 |          NESTED LOOPS                |                               |   16 |   688 |     2   (0)| 00:00:01 |
    |  11 |           TABLE ACCESS BY INDEX ROWID| TBLSECCIONCGR                 |    1 |    29 |     1   (0)| 00:00:01 |
    |* 12 |            INDEX UNIQUE SCAN         | PK_TBLSECCIONCGR              |    1 |       |     1   (0)| 00:00:01 |
    |  13 |           TABLE ACCESS BY INDEX ROWID| TBLNOMINA                     |   16 |   224 |     1   (0)| 00:00:01 |
    |* 14 |            INDEX RANGE SCAN          | IDX_IDX_NOM_NRONOMINA         |    1 |       |     1   (0)| 00:00:01 |
    |* 15 |          TABLE ACCESS BY INDEX ROWID | TBLDOCESTUDIO                 |    1 |    32 |     2   (0)| 00:00:01 |
    |* 16 |           INDEX RANGE SCAN           | IDX_DESNOMINA                 |    1 |       |     1   (0)| 00:00:01 |
    |* 17 |         TABLE ACCESS BY INDEX ROWID  | TBLDOCUMENTO                  |    1 |    38 |     1   (0)| 00:00:01 |
    |* 18 |          INDEX UNIQUE SCAN           | PK_TBLDOCUMENTO               |    1 |       |     1   (0)| 00:00:01 |
    |  19 |        TABLE ACCESS FULL             | TBLMATERIADOCUMENTO           |   47 |  1128 |     2   (0)| 00:00:01 |
    |  20 |       TABLE ACCESS BY INDEX ROWID    | TBLIDENTIFICACIONSERVICIO     |    1 |    45 |     1   (0)| 00:00:01 |
    |* 21 |        INDEX UNIQUE SCAN             | PK_TBLIDENTIFICACIONSERVICIO  |    1 |       |     1   (0)| 00:00:01 |
    |  22 |      TABLE ACCESS BY INDEX ROWID     | TBLIDENTIFICACIONSERVICIO     |    1 |    45 |     1   (0)| 00:00:01 |
    |* 23 |       INDEX UNIQUE SCAN              | PK_TBLIDENTIFICACIONSERVICIO  |    1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<TO_NUMBER(:B8))
       2 - filter(:B3<=:B2 AND :B5<=:B4)
       3 - access("DOC"."DOCTIPODOCUMENTO"="TIP"."TDOCODIGO")
       7 - access("DOC"."DOCMATERIAINGRESO"="MAT"."MDOCODIGO")
      12 - access("SEC"."SCGCODIGO"=:B6)
      14 - access("NOM"."NOMNRONOMINA">=:B3 AND "NOM"."NOMNRONOMINA"<=:B2)
      15 - filter("DES"."DESSECCIONID"=:B6 AND "DES"."DESESTADO"=:B7 AND "DES"."DESNUMEROCASOS">=:B5 AND "DES"."DESNUMEROCASOS"<=:B4)
      16 - access("NOM"."NOMNOMINAID"="DES"."DESNOMINAACTIVA")
           filter("DES"."DESNOMINAACTIVA">=0)
      17 - filter("DOC"."DOCNUMERODOCSERVICIO" LIKE :B1)
      18 - access("DOC"."DOCDOCUMENTOID"="DES"."DESDOCUMENTOID")
      21 - access("DOC"."DOCSERVICIOID"="IDEN"."ISESERVICIOID")
      23 - access("DOC"."DOCREPARTICIONID"="IDENREP"."ISESERVICIOID")
    2- Using SQL Profile
    Plan hash value: 2086638886
    | Id  | Operation                            | Name                          | Rows | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |                               |    1 |   246 |    31   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY                       |                               |      |       |            |          |
    |*  2 |   FILTER                             |                               |      |       |            |          |
    |   3 |    NESTED LOOPS                      |                               |    1 |   246 |    31   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                     |                               |    1 |   201 |    30   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS                    |                               |    1 |   156 |    29   (0)| 00:00:01 |
    |   6 |       NESTED LOOPS                   |                               |    1 |   132 |    28   (0)| 00:00:01 |
    |   7 |        NESTED LOOPS                  |                               |    1 |   113 |    27   (0)| 00:00:01 |
    |   8 |         NESTED LOOPS                 |                               |    1 |    75 |    26   (0)| 00:00:01 |
    |   9 |          NESTED LOOPS                |                               |   16 |   688 |     2   (0)| 00:00:01 |
    |  10 |           TABLE ACCESS BY INDEX ROWID| TBLSECCIONCGR                 |    1 |    29 |     1   (0)| 00:00:01 |
    |* 11 |            INDEX UNIQUE SCAN         | PK_TBLSECCIONCGR              |    1 |       |     1   (0)| 00:00:01 |
    |  12 |           TABLE ACCESS BY INDEX ROWID| TBLNOMINA                     |   16 |   224 |     1   (0)| 00:00:01 |
    |* 13 |            INDEX RANGE SCAN          | IDX_IDX_NOM_NRONOMINA         |    1 |       |     1   (0)| 00:00:01 |
    |* 14 |          TABLE ACCESS BY INDEX ROWID | TBLDOCESTUDIO                 |    1 |    32 |     2   (0)| 00:00:01 |
    |* 15 |           INDEX RANGE SCAN           | IDX_DESNOMINA                 |    1 |       |     1   (0)| 00:00:01 |
    |* 16 |         TABLE ACCESS BY INDEX ROWID  | TBLDOCUMENTO                  |    1 |    38 |     1   (0)| 00:00:01 |
    |* 17 |          INDEX UNIQUE SCAN           | PK_TBLDOCUMENTO               |    1 |       |     1   (0)| 00:00:01 |
    |  18 |        TABLE ACCESS BY INDEX ROWID   | TBLTIPODOCUMENTO              |    1 |    19 |     1   (0)| 00:00:01 |
    |* 19 |         INDEX UNIQUE SCAN            | PK_TBLTIPODOCUMENTO           |    1 |       |     1   (0)| 00:00:01 |
    |  20 |       TABLE ACCESS BY INDEX ROWID    | TBLMATERIADOCUMENTO           |    1 |    24 |     1   (0)| 00:00:01 |
    |* 21 |        INDEX UNIQUE SCAN             | PK_TBLMATERIADOCUMENTO        |    1 |       |     1   (0)| 00:00:01 |
    |  22 |      TABLE ACCESS BY INDEX ROWID     | TBLIDENTIFICACIONSERVICIO     |    1 |    45 |     1   (0)| 00:00:01 |
    |* 23 |       INDEX UNIQUE SCAN              | PK_TBLIDENTIFICACIONSERVICIO  |    1 |       |     1   (0)| 00:00:01 |
    |  24 |     TABLE ACCESS BY INDEX ROWID      | TBLIDENTIFICACIONSERVICIO     |    1 |    45 |     1   (0)| 00:00:01 |
    |* 25 |      INDEX UNIQUE SCAN               | PK_TBLIDENTIFICACIONSERVICIO  |    1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<TO_NUMBER(:B8))
       2 - filter(:B3<=:B2 AND :B5<=:B4)
      11 - access("SEC"."SCGCODIGO"=:B6)
      13 - access("NOM"."NOMNRONOMINA">=:B3 AND "NOM"."NOMNRONOMINA"<=:B2)
      14 - filter("DES"."DESSECCIONID"=:B6 AND "DES"."DESESTADO"=:B7 AND "DES"."DESNUMEROCASOS">=:B5 AND "DES"."DESNUMEROCASOS"<=:B4)
      15 - access("NOM"."NOMNOMINAID"="DES"."DESNOMINAACTIVA")
           filter("DES"."DESNOMINAACTIVA">=0)
      16 - filter("DOC"."DOCNUMERODOCSERVICIO" LIKE :B1)
      17 - access("DOC"."DOCDOCUMENTOID"="DES"."DESDOCUMENTOID")
      19 - access("DOC"."DOCTIPODOCUMENTO"="TIP"."TDOCODIGO")
      21 - access("DOC"."DOCMATERIAINGRESO"="MAT"."MDOCODIGO")
      23 - access("DOC"."DOCREPARTICIONID"="IDENREP"."ISESERVICIOID")
      25 - access("DOC"."DOCSERVICIOID"="IDEN"."ISESERVICIOID")
    1 row selected.
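    Since the dominant waits are "direct path read/write temp" and "enq: TS - contention", it is also worth watching how much TEMP and PGA the statement actually uses while it runs (a sketch, not from the thread; uses GV$ views available on 10.2):

    SELECT inst_id, tablespace, segtype, blocks
    FROM   gv$tempseg_usage
    ORDER  BY blocks DESC;

    SELECT inst_id, name, value
    FROM   gv$pgastat
    WHERE  name IN ('aggregate PGA target parameter', 'total PGA allocated');

    If the sorts and hash joins are spilling heavily, giving the sessions more PGA (or adding TEMP space to relieve the TS enqueue contention) may do more for the runtime than the SQL profile alone; that is something to test, not a guaranteed fix.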

  • Query tuning in RAC system

    Hi,
    I have a RAC system with 2 nodes (Oracle 10.2.0.1.0, Red Hat Linux 4).
    I have a simple query that takes a long time, but on the same server with a single-instance database it's fast!?
    here is the query:
    select count(*) as y0_, this_.NOTIFICATION_TYPE as y1_
    from NOTIFICATION this_
    where this_.OPCO_ID=1
    and this_.SENTDATE is null
    and this_.UPDATED_TIME is null
    and this_.CREATED_TIME>=sysdate
    and this_.CREATED_TIME<=sysdate
    group by this_.NOTIFICATION_TYPE
    order by this_.NOTIFICATION_TYPE asc
    With AWR I can see that a lot of time is lost in gc buffer busy waits!
    NB: I have a BLOB in my table.
    How can I solve my problem?

    You have hit one of the many features of RAC. On a single instance database a piece of data is either in its SGA in local memory, or out on disk. On a multi-instance RAC database a piece of data can now either be in the local SGA memory, or the remote SGA memory of the other node, or out on disk.
    Given that Oracle assumes that it is quicker to go across the network interconnect between the 2 RAC nodes to retrieve the copy of the data from the other SGA than it is to go all the way out to disk to read that data page, Oracle RAC is biased to using copies of data blocks in the SGAs of the other RAC nodes.
    You cannot stop this. Oracle RAC is doing as it is intended to. Either tune and improve the performance of the interconnect between your 2 nodes. Are you using 100 baseT or 1000 baseT or something even faster? It is the latency that is the issue, not the bandwidth.
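    To see which interface the interconnect traffic is actually using, something like this can help (a sketch; the view exists on 10g, but verify it on your release):
    SELECT inst_id, name, ip_address, is_public, source
    FROM   gv$cluster_interconnects;
    Combine that with the "gc" wait averages from AWR on both nodes to judge whether interconnect latency really is the bottleneck.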
    Or stop using RAC, and go back to just one database instance.
    John

  • SQL query slow with call to function

    I have a SQL query that returns in less than a second or two with a function selected in-line in the "from" clause of the statement. As soon as I select that returned value in the SQL statement, the statement takes anywhere from 2 to 5 minutes to return. Here is a simplified sample of the statement.
    This statement returns in a second or 2.
    select A.pk_id
    from stu_schedule A, stu_school B, stu_year C, school_year D,
    (select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
    from stu_schedule Z) II
    where B.pk_id = A.fk_stu_school
    and C.pk_id = B.fk_stu_year
    and D.pk_id = C.year
    and D.school_year = '2011';
    If I add this function call in, the statement performs extremely poorly:
    select A.pk_id,
    II.ytd_unx
    from stu_schedule A, stu_school B, stu_year C, school_year D,
    (select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
    from stu_schedule Z) II
    where B.pk_id = A.fk_stu_school
    and C.pk_id = B.fk_stu_year
    and D.pk_id = C.year
    and D.school_year = '2011';
    Here is the function that is called:
    create or replace FUNCTION calc_ytd_class_abs2 (p_fk_stu_schedule in varchar2,
    p_legality in varchar2) return number IS
    l_days_absent number := 0;
    CURSOR get_class_abs IS
    select (select nvl(max(D.days_absent),'0')
    from cut_code D
    where D.pk_id = C.fk_cut_code
    and (D.legality = p_legality
    or p_legality = '%')) days_absent
    from stu_schedule_detail B, stu_class_attendance C
    where B.fk_stu_schedule = p_fk_stu_schedule
    and C.fk_stu_schedule_detail = B.pk_id;
    BEGIN
    FOR x in get_class_abs LOOP
    l_days_absent := l_days_absent + x.days_absent;
    END LOOP;
    return (l_days_absent);
    END calc_ytd_class_abs2;

    The query returns anywhere from 6,000 to 32,000 rows. For each of those rows, a parameter is passed to 4 different functions to get YTD totals. When I call the functions in the in-line view but do not select from them in the main SQL, the report (oh, this is Application Express 4.0 interactive reports, just an FYI) runs fast - it comes back in a few seconds. But when I select from the in-line view to display those YTD totals, the report runs extremely slowly. I know there are articles about context switching and how mixing SQL with PL/SQL performs poorly. So I tried a pipelined table function, where the functions for the YTD totals populate the columns of the pipelined table and I select from the pipelined table in the SQL query in the interactive report. That seemed to perform a little worse, from what I can tell.
    Thanks for any help you can offer.
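    One common workaround is to fold the function's logic into the query as a join plus aggregation, so the PL/SQL call (and its context switch) is not executed once per row. A sketch based on the tables inside calc_ytd_class_abs2, for the 'U' legality only; the exact semantics (the MAX per cut code, the '%' wildcard branch) would need to be re-verified against the real data:

    select A.pk_id,
           nvl(ytd.ytd_unx, 0) as ytd_unx
    from   stu_schedule A,
           (select sd.fk_stu_schedule,
                   sum(nvl(cc.days_absent, 0)) as ytd_unx
            from   stu_schedule_detail sd,
                   stu_class_attendance ca,
                   cut_code cc
            where  ca.fk_stu_schedule_detail = sd.pk_id
            and    cc.pk_id (+) = ca.fk_cut_code
            and    cc.legality (+) = 'U'
            group by sd.fk_stu_schedule) ytd
    where  ytd.fk_stu_schedule (+) = A.pk_id;

    The other joins from the original statement (stu_school, stu_year, school_year) would be added back around this in the same way.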

  • SQL query slow due to CASE statement on joins

    Hi
    The SQL query runs very slowly (about 30 minutes) when the CASE statement below is added to the joins. Could you please let me know how to tune it? If the CASE statement is not there, it runs in only 1 minute.
    ( CASE
        WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
        THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
        WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
        THEN DT_REQ_ALL.EMPMGMTCD
      END = DB2.DB2_FDW_MGMT_V.MH_CHILD )
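    One rewrite that often helps with this pattern is to turn the CASE into explicit OR'd join conditions, so the optimizer can evaluate each branch directly instead of computing the CASE for every candidate row (a hedged sketch using the same columns; it is intended to be logically equivalent to the CASE above, but verify the results match):

    AND (   (    PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
             AND PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000') = DB2.DB2_FDW_MGMT_V.MH_CHILD)
         OR (    PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
             AND DT_REQ_ALL.EMPMGMTCD IS NOT NULL
             AND DT_REQ_ALL.EMPMGMTCD = DB2.DB2_FDW_MGMT_V.MH_CHILD)
        )

    The full statement as posted follows: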
    SELECT  DISTINCT
      D.DB2_FDW_MGMT_V.RC_PARENT,
      DT_REQ_ALL.FULL_NAME,
      DT_REQ_ALL.EMP_COMPANY_CODE,
      DT_REQ_ALL.EMP_COST_CENTER,
      PO.PO_VENDORS.VENDOR_NAME,
      PO_PO_HEADERS_ALL2.SEGMENT1,
      PO_PO_HEADERS_ALL2.CREATION_DATE,
      PO_DIST_GL_CODE_COMB.SEGMENT1,
      PO_DIST_GL_CODE_COMB.SEGMENT2,
      PO_PO_HEADERS_ALL2.CURRENCY_CODE,
      PO_INV_DIST_ALL.INVOICE_NUM,
      PO_INV_DIST_ALL.INVOICE_DATE,
      (PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
      (NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
      PO_LINES_LOC.LINE_NUM,
      GL.GL_SETS_OF_BOOKS.NAME,
      CASE
            WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
            THEN 1
            ELSE 0
        END ,
      PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
      TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
    FROM
      DB2.DB2_FDW_MGMT_V,
       PO.PO_VENDORS,
      PO.PO_HEADERS_ALL  PO_PO_HEADERS_ALL2,
      GL.GL_CODE_COMBINATIONS  PO_DIST_GL_CODE_COMB,
      AP.AP_INVOICES_ALL  PO_INV_DIST_ALL,
       PO.PO_DISTRIBUTIONS_ALL  PO_DIST_ALL,
      PO.PO_LINES_ALL  PO_LINES_LOC,
      GL.GL_SETS_OF_BOOKS,
      PO.PO_REQUISITION_LINES_ALL,
      PO.PO_LINE_LOCATIONS_ALL,
      AP.AP_INVOICE_DISTRIBUTIONS_ALL  PO_DIST_INV_DIST_ALL,
      APPS.HR_OPERATING_UNITS,
      PO.PO_REQ_DISTRIBUTIONS_ALL,
       ( SELECT DISTINCT
                            PO_RDA.DISTRIBUTION_ID,
                            PO_RLA.requisition_line_id,
                            PO_RHA.DESCRIPTION PO_Descr,
                            PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
                            Req_Emp.FULL_NAME,
                            GL_CC.SEGMENT1         Req_Company_Code,
                            GL_CC.SEGMENT2         Req_Cost_Center,
                            Req_Emp_CC.SEGMENT1    Emp_Company_Code,
                            Req_Emp_CC.SEGMENT2    Emp_Cost_Center,
                            (Case
                            When GL_CC.SEGMENT2 <> 8000
                            Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
                            Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
                            End) EmpMgmtCD
                FROM
                            PO.po_requisition_lines_all PO_rla,
                            PO.po_requisition_headers_all PO_rha,
                            PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
                            GL.GL_CODE_COMBINATIONS gl_cc,
                            HR.PER_ALL_PEOPLE_F  Req_Emp,
                            HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
                            HR.hr_all_organization_units Req_Emp_Org,
                            HR.pay_cost_allocation_keyflex Req_Emp_CC
                WHERE
                            PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
                            PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
                            PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
                            PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
                            (trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
                            Req_Emp.effective_start_date IS NULL) AND
                            Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
                            Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
                            (trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
            Req_Emp_Assign.effective_start_date IS NULL) AND
            Req_Emp_Assign.primary_flag = 'Y' AND
            Req_Emp_Assign.assignment_type = 'E' AND
            Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
      )  DT_REQ_ALL,
      ( SELECT
            FROM_CURRENCY,
            TO_CURRENCY,
            CONVERSION_DATE,
            CONVERSION_RATE
        FROM GL.GL_DAILY_RATES
        UNION
        SELECT Distinct
            'USD',
            'USD',
            CONVERSION_DATE,
            1
        FROM GL.GL_DAILY_RATES
      )  PO_Rates_GL_DR
    WHERE
      ( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID  )
      AND  ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID  )
      AND  ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID  )
      AND  ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID  )
      AND  ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID  )
      AND  ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY  )
      AND  ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE  )
      AND  ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+)  )
      AND  ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+)  )
      AND  ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID  )
      AND  ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID  )
      AND  ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+)  )
      AND  ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+)  )
      AND  ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID  )
      AND  ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY'   )
      AND 
       ( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y'   )
       AND
       ( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED'  )
       AND
       ( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED'  )
       AND
       ( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE)  BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011')  )
       AND
       PO_Rates_GL_DR.TO_CURRENCY  =  'USD'
         AND
       DB2.DB2_FDW_MGMT_V.RC_PARENT  In  ( 'Unavailable','Corp','Commercial'  )
       AND
      ( CASE
            WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
            THEN  PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
            WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
                    PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
            THEN DT_REQ_ALL.EMPMGMTCD     
        END = DB2.DB2_FDW_MGMT_V.MH_CHILD  )
    Explain plan - sorry, can't get the explain plan from SQL; this is from TOAD:
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 53,932  Bytes: 2,607  Cardinality: 1                                                                                                                               
         79 HASH UNIQUE  Cost: 53,932  Bytes: 2,607  Cardinality: 1                                                                                                                          
              78 NESTED LOOPS OUTER  Cost: 53,931  Bytes: 2,607  Cardinality: 1                                                                                                                     
                   75 NESTED LOOPS OUTER  Cost: 53,928  Bytes: 2,560  Cardinality: 1                                                                                                                
                        72 NESTED LOOPS  Cost: 53,902  Bytes: 2,552  Cardinality: 1                                                                                                           
                             69 NESTED LOOPS OUTER  Cost: 53,900  Bytes: 2,533  Cardinality: 1                                                                                                      
                                  66 NESTED LOOPS OUTER  Cost: 53,898  Bytes: 2,521  Cardinality: 1                                                                                                 
                                       63 HASH JOIN OUTER  Cost: 53,896  Bytes: 2,509  Cardinality: 1                                                                                            
                                            40 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_DISTRIBUTIONS_ALL Cost: 3  Bytes: 26  Cardinality: 1                                                                                       
                                                 39 NESTED LOOPS  Cost: 17,076  Bytes: 2,400  Cardinality: 1                                                                                  
                                                      37 NESTED LOOPS  Cost: 17,073  Bytes: 2,374  Cardinality: 1                                                                             
                                                           34 NESTED LOOPS  Cost: 17,070  Bytes: 2,362  Cardinality: 1                                                                        
                                                                31 NESTED LOOPS  Cost: 17,066  Bytes: 2,347  Cardinality: 1                                                                   
                                                                     29 NESTED LOOPS  Cost: 17,066  Bytes: 2,339  Cardinality: 1                                                              
                                                                          26 NESTED LOOPS  Cost: 17,065  Bytes: 2,312  Cardinality: 1                                                         
                                                                               23 NESTED LOOPS  Cost: 17,064  Bytes: 2,287  Cardinality: 1                                                    
                                                                                    20 NESTED LOOPS  Cost: 17,062  Bytes: 2,261  Cardinality: 1                                               
                                                                                         17 NESTED LOOPS  Cost: 17,056  Bytes: 6,678  Cardinality: 3                                          
                                                                                              15 HASH JOIN  Cost: 17,056  Bytes: 6,663  Cardinality: 3                                     
                                                                                                   13 MERGE JOIN CARTESIAN  Cost: 135  Bytes: 30,352  Cardinality: 14                                
                                                                                                        5 VIEW VIEW DB2.DB2_FDW_MGMT_V Cost: 4  Bytes: 2,128  Cardinality: 1                           
                                                                                                             4 SORT UNIQUE  Cost: 4  Cardinality: 1                      
                                                                                                                  3 UNION-ALL                 
                                                                                                                       1 REMOTE REMOTE SERIAL_FROM_REMOTE PRDFDW.WORLD          
                                                                                                                       2 FAST DUAL  Cost: 3  Cardinality: 1            
                                                                                                        12 BUFFER SORT  Cost: 135  Bytes: 560  Cardinality: 14                           
                                                                                                             11 VIEW DB2. Cost: 131  Bytes: 560  Cardinality: 14                      
                                                                                                                  10 SORT UNIQUE  Cost: 131  Bytes: 310  Cardinality: 14                 
                                                                                                                       9 UNION-ALL            
                                                                                                                            7 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_DAILY_RATES Cost: 65  Bytes: 270  Cardinality: 9       
                                                                                                                                 6 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64  Cardinality: 1 
                                                                                                                            8 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64  Bytes: 4,368  Cardinality: 546       
                                                                                                   14 TABLE ACCESS FULL TABLE PO.PO_HEADERS_ALL Cost: 16,920  Bytes: 32,754  Cardinality: 618                                
                                                                                              16 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0  Bytes: 5  Cardinality: 1                                     
                                                                                         19 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2  Bytes: 35  Cardinality: 1                                          
                                                                                              18 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1  Cardinality: 2                                     
                                                                                    22 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2  Bytes: 26  Cardinality: 1                                               
                                                                                         21 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1  Cardinality: 1                                          
                                                                               25 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_SETS_OF_BOOKS Cost: 1  Bytes: 25  Cardinality: 1                                                    
                                                                                    24 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_SETS_OF_BOOKS_U2 Cost: 0  Cardinality: 1                                               
                                                                          28 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_VENDORS Cost: 1  Bytes: 27  Cardinality: 1                                                         
                                                                               27 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_VENDORS_U1 Cost: 0  Cardinality: 1                                                    
                                                                     30 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ALL_ORGANIZATION_UNTS_TL_PK Cost: 0  Bytes: 8  Cardinality: 1                                                              
                                                                33 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINES_ALL Cost: 4  Bytes: 60  Cardinality: 4                                                                   
                                                                     32 INDEX RANGE SCAN INDEX (UNIQUE) PO.PO_LINES_U2 Cost: 2  Cardinality: 4                                                              
                                                           36 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINE_LOCATIONS_ALL Cost: 3  Bytes: 12  Cardinality: 1                                                                        
                                                                35 INDEX RANGE SCAN INDEX PO.PO_LINE_LOCATIONS_N1 Cost: 2  Cardinality: 1                                                                   
                                                      38 INDEX RANGE SCAN INDEX PO.PO_DISTRIBUTIONS_N1 Cost: 2  Cardinality: 1                                                                             
                                            62 VIEW DB2. Cost: 36,819  Bytes: 1,090  Cardinality: 10                                                                                       
                                                 61 HASH UNIQUE  Cost: 36,819  Bytes: 2,580  Cardinality: 10                                                                                  
                                                      60 NESTED LOOPS  Cost: 36,818  Bytes: 2,580  Cardinality: 10                                                                             
                                                           57 NESTED LOOPS  Cost: 36,798  Bytes: 2,390  Cardinality: 10                                                                        
                                                                54 NESTED LOOPS  Cost: 36,768  Bytes: 2,220  Cardinality: 10                                                                   
                                                                     51 NESTED LOOPS  Cost: 36,758  Bytes: 1,510  Cardinality: 10                                                              
                                                                          48 NESTED LOOPS  Cost: 36,747  Bytes: 1,050  Cardinality: 10                                                         
                                                                               45 HASH JOIN  Cost: 36,737  Bytes: 960  Cardinality: 10                                                    
                                                                                    43 HASH JOIN  Cost: 34,602  Bytes: 230,340  Cardinality: 3,490                                               
                                                                                         41 TABLE ACCESS FULL TABLE HR.PER_ALL_PEOPLE_F Cost: 1,284  Bytes: 1,848,420  Cardinality: 44,010                                          
                                                                                         42 TABLE ACCESS FULL TABLE PO.PO_REQUISITION_LINES_ALL Cost: 31,802  Bytes: 18,340,080  Cardinality: 764,170                                          
                                                                                    44 TABLE ACCESS FULL TABLE HR.PER_ALL_ASSIGNMENTS_F Cost: 2,134  Bytes: 822,540  Cardinality: 27,418                                               
                                                                               47 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ALL_ORGANIZATION_UNITS Cost: 1  Bytes: 9  Cardinality: 1                                                    
                                                                                    46 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0  Cardinality: 1                                               
                                                                          50 TABLE ACCESS BY INDEX ROWID TABLE HR.PAY_COST_ALLOCATION_KEYFLEX Cost: 1  Bytes: 46  Cardinality: 1                                                         
                                                                               49 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.PAY_COST_ALLOCATION_KEYFLE_PK Cost: 0  Cardinality: 1                                                    
                                                                     53 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_HEADERS_ALL Cost: 1  Bytes: 71  Cardinality: 1                                                              
                                                                          52 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_HEADERS_U1 Cost: 0  Cardinality: 1                                                         
                                                                56 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 3  Bytes: 17  Cardinality: 1                                                                   
                                                                     55 INDEX RANGE SCAN INDEX PO.PO_REQ_DISTRIBUTIONS_N1 Cost: 2  Cardinality: 1                                                              
                                                           59 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2  Bytes: 19  Cardinality: 1                                                                        
                                                                58 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1  Cardinality: 1                                                                   
                                       65 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 2  Bytes: 12  Cardinality: 1                                                                                            
                                            64 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQ_DISTRIBUTIONS_U1 Cost: 1  Cardinality: 1                                                                                       
                                  68 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_LINES_ALL Cost: 2  Bytes: 12  Cardinality: 1                                                                                                 
                                       67 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_LINES_U1 Cost: 1  Cardinality: 1                                                                                            
                             71 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2  Bytes: 19  Cardinality: 1                                                                                                      
                                  70 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1  Cardinality: 1                                                                                                 
                        74 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICE_DISTRIBUTIONS_ALL Cost: 26  Bytes: 16  Cardinality: 2                                                                                                           
                             73 INDEX RANGE SCAN INDEX AP.AP_INVOICE_DISTRIBUTIONS_N7 Cost: 2  Cardinality: 37                                                                                                      
                   77 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICES_ALL Cost: 3  Bytes: 47  Cardinality: 1                                                                                                                
                         76 INDEX RANGE SCAN INDEX (UNIQUE) AP.AP_INVOICES_U1 Cost: 2  Cardinality: 1
    Thanks

    Forming a new table "new_table" with the 3 tables which participate in the CASE statement logic.
    with DT_REQ_ALL as (
       SELECT DISTINCT
                            PO_RDA.DISTRIBUTION_ID,
                            PO_RLA.requisition_line_id,
                            PO_RHA.DESCRIPTION PO_Descr,
                            PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
                            Req_Emp.FULL_NAME,
                            GL_CC.SEGMENT1         Req_Company_Code,
                            GL_CC.SEGMENT2         Req_Cost_Center,
                            Req_Emp_CC.SEGMENT1    Emp_Company_Code,
                            Req_Emp_CC.SEGMENT2    Emp_Cost_Center,
                            (Case
                            When GL_CC.SEGMENT2 <> 8000
                            Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
                            Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
                            End) EmpMgmtCD
                FROM
                            PO.po_requisition_lines_all PO_rla,
                            PO.po_requisition_headers_all PO_rha,
                            PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
                            GL.GL_CODE_COMBINATIONS gl_cc,
                            HR.PER_ALL_PEOPLE_F  Req_Emp,
                            HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
                            HR.hr_all_organization_units Req_Emp_Org,
                            HR.pay_cost_allocation_keyflex Req_Emp_CC
                WHERE
                            PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
                            PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
                            PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
                            PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
                            (trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
                            Req_Emp.effective_start_date IS NULL) AND
                            Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
                            Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
                            (trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
            Req_Emp_Assign.effective_start_date IS NULL) AND
            Req_Emp_Assign.primary_flag = 'Y' AND
            Req_Emp_Assign.assignment_type = 'E' AND
                Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
    )
    SELECT  DISTINCT
      D.DB2_FDW_MGMT_V.RC_PARENT,
      DT_REQ_ALL.FULL_NAME,
      DT_REQ_ALL.EMP_COMPANY_CODE,
      DT_REQ_ALL.EMP_COST_CENTER,
      PO.PO_VENDORS.VENDOR_NAME,
      PO_PO_HEADERS_ALL2.SEGMENT1,
      PO_PO_HEADERS_ALL2.CREATION_DATE,
      PO_DIST_GL_CODE_COMB.SEGMENT1,
      PO_DIST_GL_CODE_COMB.SEGMENT2,
      PO_PO_HEADERS_ALL2.CURRENCY_CODE,
      PO_INV_DIST_ALL.INVOICE_NUM,
      PO_INV_DIST_ALL.INVOICE_DATE,
      (PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
      (NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
      PO_LINES_LOC.LINE_NUM,
      GL.GL_SETS_OF_BOOKS.NAME,
      CASE
            WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
            THEN 1
            ELSE 0
        END ,
      PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
      TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
    FROM
      ( SELECT * FROM
          DB2.DB2_FDW_MGMT_V,
          GL.GL_CODE_COMBINATIONS  PO_DIST_GL_CODE_COMB,
          DT_REQ_ALL
        WHERE
              DB2.DB2_FDW_MGMT_V.RC_PARENT  In  ( 'Unavailable','Corp','Commercial'  )
           AND
            CASE
               WHEN PO_DIST_GL_CODE_COMB.SEGMENT2  <>  '1000'
               THEN  PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
               WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
                    PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
               THEN DT_REQ_ALL.EMPMGMTCD     
            END =DB2.DB2_FDW_MGMT_V.MH_CHILD
       )   new_table,
       PO.PO_VENDORS,
      PO.PO_HEADERS_ALL  PO_PO_HEADERS_ALL2,
      AP.AP_INVOICES_ALL  PO_INV_DIST_ALL,
       PO.PO_DISTRIBUTIONS_ALL  PO_DIST_ALL,
      PO.PO_LINES_ALL  PO_LINES_LOC,
      GL.GL_SETS_OF_BOOKS,
      PO.PO_REQUISITION_LINES_ALL,
      PO.PO_LINE_LOCATIONS_ALL,
      AP.AP_INVOICE_DISTRIBUTIONS_ALL  PO_DIST_INV_DIST_ALL,
      APPS.HR_OPERATING_UNITS,
      PO.PO_REQ_DISTRIBUTIONS_ALL,
      ( SELECT
            FROM_CURRENCY,
            TO_CURRENCY,
            CONVERSION_DATE,
            CONVERSION_RATE
        FROM GL.GL_DAILY_RATES
        UNION
        SELECT Distinct
            'USD',
            'USD',
            CONVERSION_DATE,
            1
        FROM GL.GL_DAILY_RATES
      )  PO_Rates_GL_DR
    WHERE
      ( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID  )
      AND  ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID  )
      AND  ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID  )
      AND  ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID  )
      AND  ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID  )
      AND  ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY  )
      AND  ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE  )
      AND  ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+)  )
      AND  ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+)  )
      AND  ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID  )
      AND  ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID  )
      AND  ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+)  )
      AND  ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+)  )
      AND  ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID  )
      AND  ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY' )
      AND 
       ( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y' )
       AND
       ( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' )
       AND
       ( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED' )
       AND
       ( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE)  BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011')  )
       AND
       PO_Rates_GL_DR.TO_CURRENCY  =  'USD'
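
    To see how far the optimizer's estimates in the plan above are from what actually happens, one option (a minimal sketch, assuming the statement can be re-executed in the same session) is to collect row-source statistics and then display them:

        -- Step 1: run the query above once more, adding /*+ gather_plan_statistics */ right after the first SELECT keyword.
        -- Step 2: immediately afterwards, in the same session, compare estimated (E-Rows) and actual (A-Rows) row counts:
        SELECT *
          FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(sql_id          => NULL,   -- NULL = most recent statement in this session
                                               cursor_child_no => NULL,
                                               format          => 'ALLSTATS LAST'));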
      

  • Query related to RAC architecture...

    Hi All,
    I have a few questions related to an Oracle 10g/11g RAC setup. I guess the internal architecture of both won't change much, apart from the new "SCAN IP" concept and maybe some others which I don't know perfectly.
    I have a two-node cluster setup.
    My questions are:
    1- If I want to run a batch job which performs DML statements [a combination of insert, update and delete], is it recommended to run the batch job across all nodes? Other operations are performed on this RAC setup at the same time, and the bulky DML transactions might slow down DB performance. Is it advisable to run this batch on just a single node?
    2- If I want to run some jobs [dbms_job] on specific nodes, how can I control that?
    3- If I want to perform a single-node action in a RAC environment [like enabling archive log mode], I can do this by issuing "alter system set cluster_database=false scope=spfile sid='*'". So what is the difference between running this ALTER command and bringing one node down through the "srvctl" command-line utility?
    4- In my application I have a few transactions with nested loops, internally called procedures/triggers, and a mixed workload. If one node suddenly goes down, what happens to my transactions running on that node? I read a few PDFs and came to know that all running transactions need to be rolled back, so what happens if my application has already committed internally at some level? Does this mean inconsistent data?
    If anybody has any idea about any of these 4 points, please guide me.
    Thanks...

    1- If I want to run a batch job which performs DML statements [a combination of insert, update and delete], is it recommended to run the batch job across all nodes? Other operations are performed on this RAC setup at the same time, and the bulky DML transactions might slow down DB performance. Is it advisable to run this batch on just a single node?
    It is not advisable, as by doing so you prevent Oracle from distributing the workload.
    2- If I want to run some jobs [dbms_job] on specific nodes, how can I control that?
    First of all, since 10g dbms_job is obsolete (superseded by dbms_scheduler). Secondly, dbms_job.submit has a parameter to submit the job to a specific instance; the default is the local instance.
    As this is documented, this question can be answered in more detail by yourself by reading the documentation.
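    For reference, a minimal sketch of both interfaces (the procedure name, schedule and job name below are hypothetical):

        DECLARE
          l_job BINARY_INTEGER;
        BEGIN
          -- legacy interface: the INSTANCE parameter ties the job to instance 1
          DBMS_JOB.SUBMIT(job       => l_job,
                          what      => 'my_batch_proc;',     -- hypothetical stored procedure
                          next_date => SYSDATE,
                          interval  => 'SYSDATE + 1',
                          instance  => 1);
          COMMIT;
        END;
        /

        -- dbms_scheduler equivalent: pin an existing job to instance 1
        BEGIN
          DBMS_SCHEDULER.SET_ATTRIBUTE(name      => 'MY_BATCH_JOB',   -- hypothetical job name
                                       attribute => 'INSTANCE_ID',
                                       value     => 1);
        END;
        /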
    3- If I want to perform a single-node action in a RAC environment [like enabling archive log mode], I can do this by issuing "alter system set cluster_database=false scope=spfile sid='*'". So what is the difference between running this ALTER command and bringing one node down through the "srvctl" command-line utility?
    Alter system set cluster_database=false stops the database from running as a cluster as soon as you restart it.
    srvctl stop instance just removes one instance from the cluster; with potential failover, the cluster continues to function.
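    For the archive-log example mentioned in the question, the usual single-node sequence looks roughly like this (a sketch only; the database name orcl is a placeholder, and the srvctl lines are OS-prompt commands shown here as comments):

        ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE SID='*';
        -- srvctl stop database -d orcl           (stop every instance cleanly)
        STARTUP MOUNT                             -- on one node only, as sysdba
        ALTER DATABASE ARCHIVELOG;
        ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
        SHUTDOWN IMMEDIATE
        -- srvctl start database -d orcl          (bring the whole cluster back up)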
    4- In my application I have a few transactions with nested loops, internally called procedures/triggers, and a mixed workload. If one node suddenly goes down, what happens to my transactions running on that node? I read a few PDFs and came to know that all running transactions need to be rolled back, so what happens if my application has already committed internally at some level? Does this mean inconsistent data?
    Well, it means you are committing too often, doesn't it? You commit before a logical transaction ends. And yes, this means inconsistent data, as it would have meant on a single-instance database.
    It also means your application is up for the dust bin, and a thorough revision.
    Sybrand Bakker
    Senior Oracle DBA

  • Insert query slows in Timesten

    Hello DB Experts ,
    I am inserting bulk data with the ttBulkCp command. My PermSize is 20GB. The insert gets slow. Can anyone help me with how I can maximize throughput with ttBulkCp?
    Regards,

    Hi Chris, thanks for your reply.
    I have uncommented the memlock parameter and it is working now. I will not use a system DSN from now on; thanks for that suggestion.
    1.    The definition of the table you are loading data into, including indexes.
    My comments: Table definition. The table does not have any primary key or indexes.
    create table TBLEDR
    (snstarttime number,
    snendtime number,
    radiuscallingstationid number,
    ipserveripaddress varchar2(2000) DEFAULT '0',
    bearer3gppimsi varchar2(2000) DEFAULT '0',
    ipsubscriberipaddress  varchar2(2000),
    httpuseragent  varchar2(2000) DEFAULT '0',
    bearer3gppimei  varchar2(256) DEFAULT '0',
    httphost varchar2(2000) DEFAULT '0',
    ipprotocol  varchar2(256) DEFAULT '0',
    voipduration varchar2(256) DEFAULT '0',
    traffictype varchar2(256) DEFAULT '0',
    httpcontenttype varchar2(2000) DEFAULT '0',
    transactiondownlinkbytes number DEFAULT '0',
    transactionuplinkbytes number DEFAULT '0',
    transactiondownlinkpackets number  DEFAULT '0',
    transactionuplinkpackets number DEFAULT '0',
    radiuscalledstationid  varchar2(2000) DEFAULT '0',
    httpreferer varchar2(4000) DEFAULT '0',
    httpurl varchar2(4000) DEFAULT '0',
    p2pprotocol  varchar2(4000)  DEFAULT '0'
    );
    2.    Whether the indexes (if any) are in place while you are loading the data.
    My comments: No indexes are there.
    3.    The CPU type and speed.
    Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz, 32 cores.
    4.    The type of disk storage you are using for the filesystem containing the database.
    We are not using any external storage; we are using a Linux ext3 filesystem.
    5.   The location of the CSV file that you are loading - is it on the same filesystem as the database files?
    My comment - the database files reside on the /opt partition, and yes, the CSV files are also on the same filesystem; those files are in /opt/Files.
    6.   The number of rows of data in the CSV file.
    My comment - each CSV file has around 50,000 records.
    7.   Originally you said 'I am only getting 15000 to 17000 TPS'. How are you measuring this? Do you mean TPS (i.e. commits per second) or 'rows inserted per second'? Note that by default ttBulkCp commits every 1024 rows, so if you are measuring commits then the insert rate is 1024 x that.
    My comment - I now time the run at the bash prompt: when I run the ./ttBulkCp command I note down the time, when the command completes I note down the time again, and then I calculate the TPS from that. Further to this, I load one file with ttBulkCp; the file has 50,000 records, around 38,000 of which succeed, and that is how I am calculating the TPS.

  • Analyze job very slow in RAC environment

    Hi,
    I have an analyze job which runs for 3 hrs in a RAC environment (9.2.0.6).
    Earlier, in a NON-RAC environment, it used to complete in 1 hr.
    Need help in solving this issue.
    Ajoy Kumar Thapa

    hi,
    This database is used mainly for query purposes.
    We do a huge data load during the weekend into one of the tables.
    This table is then exchanged with one of the partitions of a huge partitioned table.
    After that, the analyze job runs on this partitioned table.
    The command we use for analyze is given below:
    DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => '<owner_name>',
        estimate_percent => 05,
        cascade          => true,
        degree           => 4,
        granularity      => 'ALL');
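    (For comparison, a minimal sketch, with hypothetical table and partition names, of gathering statistics for only the exchanged partition rather than the whole schema:)

        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname     => '<owner_name>',
                                        tabname     => 'BIG_PART_TAB',   -- hypothetical partitioned table
                                        partname    => 'P_LATEST',       -- hypothetical exchanged partition
                                        granularity => 'PARTITION',
                                        cascade     => TRUE,
                                        degree      => 4);
        END;
        /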
    I want to know what the reason can be that this job, which runs fine in the NON-RAC environment, is taking so long in the RAC environment.
    Any help is highly appreciated.
    Ajoy Kumar Thapa
