Select statement taking too much time. How can I improve the query performance?

SELECT DISTINCT ORDERKEY,
       SUM(IMPRESSIONCNT) AS ActualImpressions,
       SUM(DiscountedSales) AS ActualRevenue,
       SUM(AgencyCommAmt) AS AgencyCommAmt,
       SUM(SalesHouseCommAMT) AS SalesHouseCommAMT
--INTO Anticiapted_ADXActualsMeasures
FROM AdRevenueFact_ADX ADX WITH (NOLOCK)
WHERE FiscalMonthkey >= 201301
  AND EXISTS (SELECT 1 FROM Anticipated_cdr_AX_OrderItem OI WHERE ADX.Orderkey = OI.Orderkey)
GROUP BY ORDERKEY
There are clustered indexes on Orderkey and FiscalMonthkey; AdRevenueFact_ADX contains more than 170 million rows.
thanks

As mentioned by Kalman, if your clustered index starts with Orderkey, then this query will require a full table scan. If it is an option to change the clustered index in such a way that FiscalMonthkey is the leading column, then only the data of the last two years has to be queried.
In addition, you should have a look at the indexes of table Anticipated_cdr_AX_OrderItem. Ideally, there is a nonclustered index on Orderkey.
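For illustration, a minimal sketch of both suggestions (the index names here are invented, and DROP_EXISTING assumes the new clustered index keeps the name of your existing one; verify the column order against your actual schema before running anything):

-- Assumption: leading with FiscalMonthkey lets the engine seek straight to
-- rows with FiscalMonthkey >= 201301 instead of scanning all 170M+ rows.
CREATE CLUSTERED INDEX CIX_AdRevenueFact_ADX
    ON AdRevenueFact_ADX (FiscalMonthkey, Orderkey)
    WITH (DROP_EXISTING = ON);

-- Assumption: a nonclustered index on Orderkey supports the EXISTS probe.
CREATE NONCLUSTERED INDEX IX_Anticipated_OrderItem_Orderkey
    ON Anticipated_cdr_AX_OrderItem (Orderkey);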
To get better advice, please post the query plan and list all available indexes of these tables.
Finally, an off-topic remark: it is good practice to keep the spelling of object names consistent, and to match the spelling used in their declaration. Your query would cause serious problems if the database were ever run with a case-sensitive collation.
Gert-Jan

Similar Messages

  • How to improve the query performance at the report level and designer level

    How can I improve query performance at the report level and at the designer level?
    Please explain in detail.

    First, it's all based on the design of the database, the universe and the report.
    At the universe level, check your contexts very carefully to get the optimal performance of the universe, and keep your joins on key fields; that will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on),
    and when you create a parameter, try to match it with the key fields in the database.
    Good luck
    Amr

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    -------WHERE TSK.ProjectID = @Project-----
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
    hi..
    My SP is as above.
    I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
    When I selected the ALL value for the Program and Project parameters, I was unable to get output,
    but when I select values for all 3 parameters I get output. I also set default values for the parameters.
    So I commented out the where condition in the SP as shown above:
    --------where TSK.ProjectID=@Project-------------
    Now I get output when selecting the ALL value for the parameters.
    But the remaining issue is performance: it takes 10 sec to retrieve a single project when I execute the SP.
    How can I create an index on a temp table in this SP, and how can I improve the query performance?
    Please help.
    Thanks in advance..
    lucky

    Didn't I provide you a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    WHERE (TSK.ProjectID = @Project OR @Project = -1)
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
    Please mark this as answer if it helps to solve the issue.
    Visakh
    ----------------------------
    http://visakhm.blogspot.com/
    https://www.facebook.com/VmBlogs
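    On the temp table question: you can create indexes on a temp table inside the procedure right after the table is populated; the commented-out CREATE INDEX on #Dates in your procedure is already the right idea. A minimal sketch (index names invented; whether they help depends on the actual plan):
    -- Supports the BETWEEN probes in the OUTER APPLY and the JOIN.
    CREATE CLUSTERED INDEX IX_Dates_WorkDate ON #Dates (WorkDate);
    -- Create this one only after SELECT ... INTO #DailyTasks has run;
    -- it supports the correlated FOR XML subquery on (WorkDate, ResourceID).
    CREATE NONCLUSTERED INDEX IX_DailyTasks_WorkDate_Resource
        ON #DailyTasks (WorkDate, ResourceID);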

  • How to improve the query performance or tune query from Explain Plan

    Hi
    The following is the explain plan for my SQL query (generated by Toad v9.7). How can I tune the query?
    SELECT STATEMENT ALL_ROWS Cost: 4,160 Bytes: 25,296 Cardinality: 204
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
    SELECT 1 FROM dual
    WHERE regexp_like(' ','^*[ ]*a');
    but not on a production box, because you are going to experience an extreme tuning event with zero i/o.
    And when I say "extreme" I mean "EXTREME!"
    You've been warned.

    I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
           163
    Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
    SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from du
    al%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
          It could also be that the plan is no longer in cursor cache (check v$sql_p
    lan)
    No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
    kkoqbc-subheap (create addr=000000001915C238)
    Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
    I am not sure that this is a good example - the query either executes very fast or never has a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Same select statement taking more time

    Hello all,
    I have two select statements; only the table from which records are fetched is different.
    1) select belnr posnr etenr into corresponding fields of table it_cdtemp2
    from j_3avasso for all entries in it_cdtemp1
    where belnr = it_cdtemp1-vbeln and posnr = it_cdtemp1-posnr .
    it_cdtemp1 has 100 entries and j_3avasso has 20000 entries
    2) select belnr posnr etenr into corresponding fields of table it_cdtemp2
    from j_3avap for all entries in it_cdtemp1
    where belnr = it_cdtemp1-vbeln and posnr = it_cdtemp1-posnr .
    it_cdtemp1 has 100 entries and j_3avap has 2000 entries.
    Statement 1 executes in less than a minute, whereas statement 2 takes around 15 to 20 minutes.
    Could anyone suggest why, and how to minimize the run time?
    Regards
    Bala

    Hi,
    You can sort the internal table by VBELN and POSNR before using FOR ALL ENTRIES.
    This will save a lot of processing time.
    You can also try combining both selects into one join statement over the two tables, with the FOR ALL ENTRIES addition.
    Regards,
    Subhashini
    Edited by: Subhashini K on Oct 8, 2009 2:58 PM

  • Taking More Time while inserting into the table (With foreign key)

    Hi All,
    I am facing problem while inserting the values into the master table.
    The problem,
    Table A -- User Master Table (Reg No, Name, etc)
    Table B -- Transaction Table (Foreign key reference with Table A).
    While inserting the data into Table B, I also need to insert the reg no into Table B, which is mandatory. I followed the logic mentioned in the SRDemo.
    While inserting, we need to query Table A first to have the values in TableABean.java:
    final TableA tableA = (TableA) uow.executeQuery("findUser", TableA.class, regNo);
    Then, we need to create the instance for TableB:
    TableB tableB = (TableB) uow.newInstance(TableB.class);
    tableB.setID(bean.getID);
    tableA.addTableB(tableB); // this inserts the regNo of TableA into TableB. This line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
    This query takes too much time when there are many rows in TableB for that particular registration number; because of this, the insert into TableB is slow.
    For example: TableA regNo 101, having few entries in TableB, means inserting a record takes less than 1 sec;
    regNo 102, having more entries in TableB, means inserting a record takes more than 2 sec.
    There is a time delay for different users when they enter transactions in TableB.
    I need to avoid this, since in future it will take even more time, from 2 sec to 10 sec, if the volume of data increases.
    Please help me to resolve this issue...I am facing it now in production.
    Thanks & Regards
    VB

    Hello,
    Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing delays that you want to avoid, there are two quick ways I can see:
    1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for the relationship when you do need it. This means you do not need to call tableA.addTableB(tableB), and instead only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. Might not be the best option, but it depends on your application's usage. It does allow you to potentially page the TableB results or add other query performance options when you do need the data, though.
    2) You are currently using lazy loading for the TableA->TableB relationship - if it is untriggered, don't bother calling tableA.addTableB(tableB); instead, only call tableB.setTableA(tableA). This of course requires using the TopLink API to a) verify the collection is an IndirectCollection type, and b) verify that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments, though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date - this might require refreshing the TableA if so.
    Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
    http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
    Best Regards,
    Chris

  • How to improve the load performance while using Datasources for the Invoice

    HI All,
    How can I improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file, it loads within ~20 min for the same amount of data.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh
    Edited by: Divyesh Jain on Jul 20, 2010 8:47 PM

  • How to improve the OpenGL performance for AE

    I upgraded my display card from an Nvidia 8600GT to a GTX260+, hoping for better, smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the 2 cards with the Cinebench software, and the results are almost the same for the 2 cards.
    I wonder why the GTX260+ costs about 3 times as much as the 8600GT, yet its OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance, please?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
    No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect, or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects.  Just wonder why the OpenGL Performance of such an expensive card should be so weak.
    It's not the card, it's what the card does. See my above comment. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons - as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when it needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL, because at the time OpenGL didn't exist; rather, the other way around: OpenGL was plugged on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • How to improve the write performance of the database

    Our application is write-intensive; it may write 2 MB/second to the database. How can we improve the performance of the database? We mainly write to 5 tables.
    Currently, the database gets no response and the CPU is 100% used.
    How do we tune this? Thanks in advance.

    Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
    1. What hardware (server, CPU, RAM, and NIC and HBA cards if any pointing to storage).
    2. Storage solution (DAS, iSCSCI, SAN, NAS). Provide manufacturer and model.
    3. If RAID which implementation of RAID and on how many disks.
    4. If NAS or SAN how is the read-write cache configured.
    5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
    6. What, in addition to the Oracle database, is running on the server?
    2MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000)s. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
    SQL> create table t (
      2  testcol varchar2(4000));
    Table created.
    SQL> set timing on
    SQL> BEGIN
      2    FOR i IN 1..500 LOOP
      3      INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
      4    END LOOP;
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.07
    SQL>
    Now what to do with the remaining 0.93 seconds? <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product.

  • How to improve the load performance

    Can anybody tell me how to improve the load performance?

    Hi,
    for all loads: improve your ABAP code in routines.
    for master data load:
    - load master data attributes before the characteristic itself
    - switch number range buffering on for initial loads
    for transactional loads:
    - load all your master data IObjs prior to loading your cube / ODS
    - depending on the ratio of records loaded to records already in the (uncompressed) F fact table, drop and recreate indexes (if the ratio is more than 40-50%)
    - switch on number range buffering for dimensions with a high number of records for initial loads
    - switch on number range buffering for master data IObjs which aren't loaded via master data (SIDs are always created during transactional loads; e.g. document, item...)
    these recommendations are just some among others like system tuning, DB parameters...
    hope this helps...
    Olivier.

  • Select statement taking much time.......

    Hi,
    IF NOT i601[] is initial.
    select vbelv
           posnv
           vbeln
           posnn
           vbtyp_v
           matnr
           from vbfa into table ivbfa
           FOR ALL ENTRIES IN i601
           where vbeln  = i601-mblnr and
                 posnn  = i601-zeile2 and
                 vbtyp_v = 'J'.
    select vbeln
           matnr
           werks
           lgort
           vgbel
           vgpos
           mwsbp
    from vbrp into table ivbrp
    FOR ALL ENTRIES IN ivbfa
    where vgbel = ivbfa-vbelv and
          vgpos = ivbfa-posnv and
          vgtyp = 'J' and
          werks IN werks.
    CLEAR i601.
    FREE i601.
    ENDIF.
    The program gets stuck at the above highlighted select statement, and I was not able to figure out why. There are no loops or anything, but it does not move past the second select statement; it takes quite a long time. Can anyone here throw some light on this? By the way, none of the fields in the where clause of the 2nd select are primary keys.
    Thanks,
    K.Kiran.

    Hi,
    In the second table, you are trying to extract the records without passing the primary key values.
    Anyhow, you have the values vbeln and posnr in internal table i601, so pass those values to VBRP.
    IF NOT i601[] is initial.
    select vbelv
    posnv
    vbeln
    posnn
    vbtyp_v
    matnr
    from vbfa into table ivbfa
    FOR ALL ENTRIES IN i601
    where vbeln = i601-mblnr and
    posnn = i601-zeile2 and
    vbtyp_v = 'J'.
    IF NOT ivbfa[] IS INITIAL.
    select vbeln
    matnr
    werks
    lgort
    vgbel
    vgpos
    mwsbp
    from vbrp into table ivbrp
    FOR ALL ENTRIES IN ivbfa
    where vbeln = ivbfa-vbeln and
          posnr = ivbfa-posnv and
          vgbel = ivbfa-vbelv and
          vgpos = ivbfa-posnv and
          vgtyp = 'J' and
          werks IN werks.
    ENDIF.
    CLEAR i601.
    FREE i601.
    ENDIF.
    Now check your program.
    Please reward if useful.

  • Select Statement takes more time after immediate insert statement..

    Hello,
    I found below scenario
    1. I have table TABLE1, which has an index on the COL1 field. It has around 40 columns and 100000 rows.
    2. Whenever I insert 100000 rows in bulk, changing the indexed key column, and then execute a SELECT statement in the same session, it takes around 3 mins to complete.
    3. However, if I open a new session and execute the same select statement, it returns in 2-3 seconds.
    I didn't get anything from the explain plan.. :(
    I suspect buffer cleanout is the cause of the delay. Please let me know your opinion.
    Thanks in Advance
    Sach

    sach09 wrote:
    Hello,
    I found below scenario
    1. I have table TABLE1, which has an index on the COL1 field. It has around 40 columns and 100000 rows.
    2. Whenever I insert 100000 rows in bulk, changing the indexed key column, and then execute a SELECT statement in the same session, it takes around 3 mins to complete.
    3. However, if I open a new session and execute the same select statement, it returns in 2-3 seconds.
    I didn't get anything from the explain plan.. :(
    I suspect buffer cleanout is the cause of the delay. Please let me know your opinion.

    Are you running the query in the other session after running it from the first?
    Aman....

  • How to improve the query

    Guys,
    I have the following query, which seems to take quite a lot of time: about 7 hours.
    It spends a significant amount of time doing the MAX(dte), as the table has about a million records. Does anyone know a better way of writing this query?
    SELECT   ABS (amount) AS amount, date_from
        FROM (SELECT   SUM
                          (  k$forex.rate (f$evaluation.eval_ccy,
                                           :b5,
                                           f$card.date_until,
                                           f$card.date_until)
                           * (  f$evaluation.eval_market_value
                              + DECODE (k$structure.total_includes_accrued,
                                        1, NVL
                                             (f$evaluation.eval_accrued_interests,
                                              0),
                                        0))
                          ) AS amount,
                       f$card.date_from AS date_from
                  FROM f$card, f$evaluation, k$portfolio, k$structure
                 WHERE f$evaluation.STRUCTURE = k$structure.ID
                   AND f$card.ID = f$evaluation.card
                   AND f$card.portfolio = k$portfolio.ID
                   AND f$card.fee = :b4
                   AND k$portfolio.fee_profile = :b3
                   AND f$card.date_from >= :b2
                   AND f$card.date_until <= :b1
                   AND f$evaluation.off_balance_sheet = 0
                   AND f$evaluation.asset = 2
                   AND f$evaluation.eval_market_value IS NOT NULL
                   AND f$evaluation.dte =
                            (SELECT MAX (e2.dte)
                               FROM f$evaluation e2
                              WHERE e2.card = f$card.ID AND e2.dte <= TRUNC (:b6))
              GROUP BY f$card.date_from)
    ORDER BY date_from
    TKPROF output shows
    call     count       cpu    elapsed       disk      query    current        rows
    Parse     1598      0.07       0.07          0          0          0           0
    Execute   2398      1.19       1.20          0          0          0           0
    Fetch     4326   2687.62    2630.06        326   17291480          0        1929
    total     8322   2688.89    2631.34        326   17291480          0        1929
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 78  (PROSPERO)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=7485 pr=0 pw=0 time=1118615 us)
          1   VIEW  (cr=7485 pr=0 pw=0 time=1118575 us)
          1    HASH GROUP BY (cr=7485 pr=0 pw=0 time=1118566 us)
         45     HASH JOIN  (cr=7215 pr=0 pw=0 time=1093439 us)
         45      TABLE ACCESS BY INDEX ROWID F$EVALUATION (cr=7185 pr=0 pw=0 time=1085972 us)
       1263       NESTED LOOPS  (cr=7144 pr=0 pw=0 time=136812196 us)
          1        HASH JOIN  (cr=7136 pr=0 pw=0 time=1091026 us)
          1         TABLE ACCESS BY INDEX ROWID F$CARD (cr=7 pr=0 pw=0 time=112 us)
          4          NESTED LOOPS  (cr=5 pr=0 pw=0 time=200 us)
          1           TABLE ACCESS BY INDEX ROWID K$PORTFOLIO (cr=3 pr=0 pw=0 time=62 us)
          1            INDEX RANGE SCAN IDX_K$PORTFOLIO$FEE_PROFILE (cr=2 pr=0 pw=0 time=46 us)(object id 81335)
          2           INDEX RANGE SCAN IDX_F$CARD$PORTFOLIO (cr=2 pr=0 pw=0 time=11 us)(object id 81336)
       3106         VIEW  VW_SQ_1 (cr=7129 pr=0 pw=0 time=1092079 us)
       3106          HASH GROUP BY (cr=7129 pr=0 pw=0 time=1085865 us)
    806960           INDEX FAST FULL SCAN PK_F$EVALUATION (cr=7129 pr=0 pw=0 time=807031 us)(object id 53621)
       1261        INDEX RANGE SCAN IDX_F$EVALUATION$CARD (cr=8 pr=0 pw=0 time=1306 us)(object id 53622)
         15      TABLE ACCESS FULL K$STRUCTURE (cr=30 pr=0 pw=0 time=64 us)
    Thanks in advance

    Thanks for your help on this. Do you know any links where I can gather more information on this? I've now replaced the last bind with SYSDATE to make it easier, but I am not sure if it has any impact on the plan.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2614864167
    | Id  | Operation                           | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                             |     1 |    22 |  1470   (1)| 00:00:18 |
    |   1 |  SORT ORDER BY                      |                             |     1 |    22 |  1470   (1)| 00:00:18 |
    |   2 |   VIEW                              |                             |     1 |    22 |  1470   (1)| 00:00:18 |
    |   3 |    HASH GROUP BY                    |                             |     1 |    79 |  1470   (1)| 00:00:18 |
    |*  4 |     FILTER                          |                             |       |       |            |          |
    |*  5 |      HASH JOIN                      |                             |    73 |  5767 |    54   (2)| 00:00:01 |
    |*  6 |       TABLE ACCESS BY INDEX ROWID   | F$EVALUATION                |   291 | 10476 |    38   (0)| 00:00:01 |
    |   7 |        NESTED LOOPS                 |                             |    73 |  5256 |    45   (0)| 00:00:01 |
    |   8 |         NESTED LOOPS                |                             |     1 |    36 |     7   (0)| 00:00:01 |
    |   9 |          TABLE ACCESS BY INDEX ROWID| K$PORTFOLIO                 |     2 |    16 |     2   (0)| 00:00:01 |
    |* 10 |           INDEX RANGE SCAN          | IDX_K$PORTFOLIO$FEE_PROFILE |     2 |       |     1   (0)| 00:00:01 |
    |* 11 |          TABLE ACCESS BY INDEX ROWID| F$CARD                      |     1 |    28 |     3   (0)| 00:00:01 |
    |* 12 |           INDEX RANGE SCAN          | IDX_F$CARD$PORTFOLIO        |     2 |       |     1   (0)| 00:00:01 |
    |* 13 |         INDEX RANGE SCAN            | IDX_F$EVALUATION$CARD       |   406 |       |     3   (0)| 00:00:01 |
    |  14 |       TABLE ACCESS FULL             | K$STRUCTURE                 |    15 |   105 |     8   (0)| 00:00:01 |
    |  15 |      SORT AGGREGATE                 |                             |     1 |    12 |            |          |
    |* 16 |       TABLE ACCESS BY INDEX ROWID   | F$EVALUATION                |   406 |  4872 |    39   (0)| 00:00:01 |
    |* 17 |        INDEX RANGE SCAN             | IDX_F$EVALUATION$CARD       |   406 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("F$EVALUATION"."DTE"= (SELECT MAX("E2"."DTE") FROM "PROSPERO"."F$EVALUATION" "E2" WHERE
                  "E2"."CARD"=:B1 AND "E2"."DTE"<=TRUNC(SYSDATE@!)))
       5 - access("F$EVALUATION"."STRUCTURE"="K$STRUCTURE"."ID")
       6 - filter("F$EVALUATION"."OFF_BALANCE_SHEET"=0 AND "F$EVALUATION"."ASSET"=2 AND
                  "F$EVALUATION"."EVAL_MARKET_VALUE" IS NOT NULL)
      10 - access("K$PORTFOLIO"."FEE_PROFILE"=TO_NUMBER(:B3))
      11 - filter("F$CARD"."FEE"=TO_NUMBER(:B4) AND "F$CARD"."DATE_UNTIL"<=:B1 AND "F$CARD"."DATE_FROM">=:B2)
      12 - access("F$CARD"."PORTFOLIO"="K$PORTFOLIO"."ID")
      13 - access("F$CARD"."ID"="F$EVALUATION"."CARD")
      16 - filter("E2"."DTE"<=TRUNC(SYSDATE@!))
      17 - access("E2"."CARD"=:B1)
    39 rows selected.
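    One common rewrite for this pattern is to compute the per-card maximum with an analytic function in a single pass over f$evaluation instead of the correlated MAX subquery. A sketch (untested, column list trimmed to the essentials; bind :b6 as in the original):
    SELECT card, dte, eval_market_value
      FROM (SELECT e.card,
                   e.dte,
                   e.eval_market_value,
                   -- latest evaluation date per card, computed in one scan
                   MAX(e.dte) OVER (PARTITION BY e.card) AS max_dte
              FROM f$evaluation e
             WHERE e.dte <= TRUNC(:b6))
     WHERE dte = max_dte;
    Joining this inline view to f$card, k$portfolio and k$structure in place of the correlated subquery lets Oracle read f$evaluation once instead of probing it per outer row.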

  • How to improve the extractor performance?

    Dear all,
    I have an ODS with active data (1,180,000 records).
    I also have an InfoCube. It has a characteristic C1 with 5 navigation attributes.
    C1 has 4,300,000 records in its master data.
    I have tried to load data into the InfoCube from the ODS,
    but it takes more than 12 hours. How can I improve the performance, or is there a bug in BW when extracting this many records?
    Thank you very much.
    Seven

    Dear Seven,
    Did you try deleting the InfoCube's indexes first?
    InfoCube -> Manage -> Performance -> Delete indexes
    Take a look at OSS note
    130253 - General tips on uploading transaction data to BW
    Tip 8:
    When you load large quantities of data into InfoCubes, you should delete the secondary indexes before the loading process and then recreate them afterwards if the following applies: the number of records being loaded is big in comparison to the number of records that already exist in the (uncompressed) F fact table. For non-transactional InfoCubes, you must delete the indexes to be able to carry out parallel loading.
    Tip 9:
    When you load large quantities of data in an InfoCube, the number range buffer should be increased for the dimensions that are likely to have a high number of data sets.
    To do this, proceed as follows. Use function module RSD_CUBE_GET to find the object name of the dimension that is likely to have a high number of data sets.
    Function module settings:
    I_INFOCUBE = 'Infocube name'
    I_OBJVERS = 'A'
    I_BYPASS_BUFFER = 'X'
    The numbers for the dimensions are then contained in table 'E_T_DIME', column 'NUMBRANR'. If you enter 'BID' before this number, you get the relevant number range (for example BID0000053).
    You can use Transaction SNRO (-> ABAP/4 Workbench -> Development --> Other tools --> Number ranges) to display all number ranges for the dimensions used in BW if you enter BID*. You can use the object name that was determined beforehand to find the required number range.
    By double-clicking this line, you get to the number range maintenance. Choose Edit -> Set-up buffering -> Main memory, to define the 'No. of numbers in buffer'.
    Set this value to 500, for example. The size depends on the expected data quantity in the initial and in future (delta) uploads.

  • How to eliminate the correlated subquery to improve the query performance

    Please find below the query, which takes a long time to fetch the records.
    SQL> SET LINE 120
    SQL> EXPLAIN PLAN FOR select *
      2                 from KEMP_SRC a1
      3                 where ('MOFF' is null or eq_name = 'MOFF')
      4                       and
      5                         is_ad_hoc <> 1
      6                         and (pb_proc_id is null
      7                         or pb_proc_id in (select proc_id from KEMP_CONFIG where frequency_type <> -1)
      8                         )
      9                    and     KEMPUtility.DTTM(end_dt, end_tm) in      (select max(KEMPUtility.DTTM(end_dt, end_tm))
    10                                from KEMP_SRC a2
    11                                where a2.eq_name = a1.eq_name
    12                                and (a2.pb_proc_id = a1.pb_proc_id or (a2.pb_proc_id is null and a1.pb_proc_id is null))
    13                                and a2.is_ad_hoc <> -1 -- repeating case
    14                                group by eq_name, pb_proc_id
    15                                );
    Explained.
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    Plan hash value: 2624956131
    | Id  | Operation             | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                           |     1 |    96 | 69399   (3)| 00:13:53 |
    |*  1 |  FILTER               |                           |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL   | KEMP_SRC                  |  2896 |   271K|   124   (2)| 00:00:02 |
    |*  3 |   TABLE ACCESS FULL   | KEMP_CONFIG               |     1 |    26 |     2   (0)| 00:00:01 |
    |*  4 |    FILTER             |                           |       |       |            |          |
    |   5 |     HASH GROUP BY     |                           |     1 |    35 |   125   (3)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    |*  6 |      TABLE ACCESS FULL| KEMP_SRC               |   364 | 12740 |   124   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - filter(("PB_PROC_ID" IS NULL OR  EXISTS (SELECT /*+ */ 0 FROM
                  "KEMP_CONFIG" "KEMP_CONFIG" WHERE "PROC_ID"=:B1 AND
                  "FREQUENCY_TYPE"<>(-1))) AND  EXISTS (SELECT /*+ */ 0 FROM "KEMP_SRC" "A2" WHERE
                  "A2"."EQ_NAME"=:B2 AND ("A2"."PB_PROC_ID"=:B3 OR :B4 IS NULL AND "A2"."PB_PROC_ID" IS
                  NULL) AND "A2"."IS_AD_HOC"<>(-1) GROUP BY "EQ_NAME","PB_PROC_ID" HAVING
    PLAN_TABLE_OUTPUT
                  "KEMPUtility"."DTTM"(:B5,:B6)=MAX("KEMPUtility"."DTTM"("END_DT","END_TM"))))
       2 - filter("EQ_NAME"='BILAN_MAZOUT_BFOE' AND "IS_AD_HOC"<>1)
       3 - filter("PROC_ID"=:B1 AND "FREQUENCY_TYPE"<>(-1))
       4 - filter("KEMPUtility"."DTTM"(:B1,:B2)=MAX("KEMPUtility"."DTTM"("END_DT","END_TM")))
       6 - filter("A2"."EQ_NAME"=:B1 AND ("A2"."PB_PROC_ID"=:B2 OR :B3 IS NULL AND
                  "A2"."PB_PROC_ID" IS NULL) AND "A2"."IS_AD_HOC"<>(-1))
    28 rows selected.

    When I comment out the references to a1 in the subquery, the cost is drastically reduced.
    select *
      from KEMP_SRC a1
     where ('MOFF' is null or eq_name = 'MOFF')
       and is_ad_hoc != 1
       and (pb_proc_id is null
            or pb_proc_id in (select proc_id from KEMP_CONFIG where frequency_type <> -1))
       and KEMPUtility.DTTM(end_dt, end_tm) in (select max(KEMPUtility.DTTM(end_dt, end_tm))
                                                  from KEMP_SRC a2
                                                 where
                                                   -- a2.eq_name = a1.eq_name
                                                   -- and (a2.pb_proc_id = a1.pb_proc_id or (a2.pb_proc_id is null and a1.pb_proc_id is null))
                                                   -- and
                                                   a2.is_ad_hoc != -1 -- repeating case
                                                 group by eq_name, pb_proc_id);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3739658629
    | Id  | Operation              | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                           |    78 |  8190 |   249   (3)| 00:00:03 |
    |*  1 |  FILTER                |                           |       |       |            |          |
    |*  2 |   HASH JOIN            |                           |   203 | 21315 |   249   (3)| 00:00:03 |
    |   3 |    VIEW                | VW_NSO_1                  |     7 |    63 |   125   (3)| 00:00:02 |
    |   4 |     HASH UNIQUE        |                           |     7 |   245 |   125   (3)| 00:00:02 |
    |   5 |      HASH GROUP BY     |                           |     7 |   245 |   125   (3)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    |*  6 |       TABLE ACCESS FULL| KEMP_SRC               |  2896 |    98K|   124   (2)| 00:00:02 |
    |*  7 |    TABLE ACCESS FULL   | KEMP_SRC               |  2896 |   271K|   124   (2)| 00:00:02 |
    |*  8 |   TABLE ACCESS FULL    | KEMP_CONFIG |     1 |    26 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("PB_PROC_ID" IS NULL OR  EXISTS (SELECT /*+ */ 0 FROM
                  "KEMP_CONFIG" "KEMP_CONFIG" WHERE "PROC_ID"=:B1 AND
                  "FREQUENCY_TYPE"<>(-1)))
    PLAN_TABLE_OUTPUT
       2 - access("$nso_col_1"="KEMPUTILITY"."DTTM"("END_DT","END_TM"))
       6 - filter("A2"."EQ_NAME"='BILAN_MAZOUT_BFOE' AND "A2"."IS_AD_HOC"<>(-1))
       7 - filter("EQ_NAME"='BILAN_MAZOUT_BFOE' AND "IS_AD_HOC"<>1)
       8 - filter("PROC_ID"=:B1 AND "FREQUENCY_TYPE"<>(-1))
