How to improve the OpenGL performance for AE

I upgraded my graphics card from an Nvidia 8600GT to a GTX260+, hoping for better, smoother scrubbing of the timeline in AE. But to my disappointment, there is no improvement at all. I checked the OpenGL benchmark of the two cards with Cinebench and the results are almost identical.
I wonder why the GTX260+ costs about three times as much as the 8600GT when its OpenGL performance is almost the same.
Any idea how to improve the OpenGL performance, please?
Regards

juskocf wrote:
But to scrub the timeline smoothly, I think OpenGL plays an important role.
No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in two specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect or exhaustion of the available resources, can negate that.
juskocf wrote:
Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects.  Just wonder why the OpenGL Performance of such an expensive card should be so weak.
It's not the card, it's what the card does. See my comment above. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shading. Most cards will let you manipulate bazillions of polygons - as long as they are untextured and use only simple shading, you will not see any impact on performance. Things get dicey when textures come into play and have to be loaded into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you will at some point hit the limit. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL - OpenGL didn't exist when AE was created - but rather the other way around: OpenGL was bolted on at some point, and now there are a number of situations where one gets in the way of the other...
Mylenium

Similar Messages

  • How to improve the load performance while using Datasources for the Invoice

    Hi All,
    How can I improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
    If I load the same amount of data through a flat file, it loads within ~20 min.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards,
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh
    Edited by: Divyesh Jain on Jul 20, 2010 8:47 PM

  • How to improve the query performance at the report level and designer level

    How can I improve the query performance at the report level and at the designer level?
    Please let me know in detail.

    First, it all depends on the design of the database, the universe and the report.
    At the universe level, check your contexts carefully to get the optimal performance out of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    And when you create a parameter, try to match it with the key fields in the database.
    good luck
    Amr

  • How to improve the load performance

    Can anybody tell me how to improve the load performance?

    Hi,
    for all loads: improve your ABAP code in routines.
    For master data loads:
    - load master data attributes before the characteristic itself
    - switch number range buffering on for initial loads
    For transactional loads:
    - load all your master data InfoObjects prior to loading your cube / ODS
    - depending on the ratio of records loaded to records already in the cube's (uncompressed) F fact table, drop and recreate the indexes if that ratio is more than 40-50% (see the sketch after this reply)
    - switch on number range buffering for dimensions with a high number of records, for initial loads
    - switch on number range buffering for master data InfoObjects which aren't loaded via master data (SIDs are always created during transactional loads; e.g. document, item...)
    These recommendations are just some among others, like system tuning, DB parameters...
    Hope this helps...
    Olivier.
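
    Outside BW's own index handling (the delete/create index options on the InfoCube manage screen do this for you), the drop-and-recreate pattern above is plain SQL. A minimal sketch with hypothetical table and index names, in Oracle-style syntax:
    -- Drop the secondary index before the bulk load...
    DROP INDEX ix_fact_cust;
    -- ...load while no index has to be maintained row by row...
    INSERT INTO fact_sales (doc_id, cust_id, amount)
    SELECT doc_id, cust_id, amount FROM stage_sales;
    -- ...then rebuild the index once, which is cheaper than incremental
    -- maintenance when the load is large relative to the existing data.
    CREATE INDEX ix_fact_cust ON fact_sales (cust_id);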

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
        @Portfolio INT,
        @Program INT,
        @Project INT
    AS
    BEGIN
        -- Date window for the working-day calendar; declared here because
        -- the CTE below references these variables.
        DECLARE @StartDate DATETIME
        DECLARE @EndDate DATETIME
        SET @StartDate = '11/01/2013'
        SET @EndDate = '02/28/2014'
        IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
            DROP TABLE #Dates
        IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
            DROP TABLE #DailyTasks
        CREATE TABLE #Dates(WorkDate DATE)
        --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
        -- Build the list of working days (weekdays that are not holidays)
        ;WITH Dates AS
        (
            SELECT (@StartDate) DateValue
            UNION ALL
            SELECT DateValue + 1
            FROM Dates
            WHERE DateValue + 1 <= @EndDate
        )
        INSERT INTO #Dates
        SELECT DateValue
        FROM Dates D
        LEFT JOIN tb_Holidays H
            ON H.HolidayOn = D.DateValue
            AND H.OfficeID = 2
        WHERE DATEPART(dw, DateValue) NOT IN (1, 7)
            AND H.UID IS NULL
        OPTION (MAXRECURSION 0)
        -- Spread each task's estimate over its working days
        SELECT TSK.TaskID,
            TR.ResourceID,
            WC.WorkDayCount,
            (TSK.EstimateHrs / WC.WorkDayCount) EstimateHours,
            D.WorkDate,
            TSK.ProjectID,
            RES.ResourceName
        INTO #DailyTasks
        FROM Tasks TSK
        INNER JOIN TasksResource TR
            ON TSK.TaskID = TR.TaskID
        INNER JOIN tb_Resource RES
            ON TR.ResourceID = RES.UID
        OUTER APPLY (SELECT COUNT(*) WorkDayCount
                     FROM #Dates
                     WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate) WC
        INNER JOIN #Dates D
            ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
        -------WHERE TSK.ProjectID = @Project-----
        -- Aggregate per resource and day, keeping overallocations (> 8 hrs)
        SELECT D.ResourceID,
            D.WorkDayCount,
            SUM(D.EstimateHours / D.WorkDayCount) EstimateHours,
            D.WorkDate,
            T.TaskID,
            D.ResourceName
        FROM #DailyTasks D
        OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255)) + ','
                             FROM #DailyTasks DA
                             WHERE D.WorkDate = DA.WorkDate
                             AND D.ResourceID = DA.ResourceID
                             FOR XML PATH('')) AS TaskID) T
        LEFT JOIN tb_Project PRJ
            ON D.ProjectID = PRJ.UID
        INNER JOIN tb_Program PR
            ON PRJ.ProgramID = PR.UID
        INNER JOIN tb_Portfolio PF
            ON PR.PortfolioID = PF.UID
        WHERE (@Portfolio = -1 OR PF.UID = @Portfolio)
        AND (@Program = -1 OR PR.UID = @Program)
        AND (@Project = -1 OR PRJ.UID = @Project)
        GROUP BY D.ResourceID,
            D.WorkDate,
            T.TaskID,
            D.WorkDayCount,
            D.ResourceName
        HAVING SUM(D.EstimateHours / D.WorkDayCount) > 8
    END
    Hi,
    My SP is as above.
    I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
    When I selected the ALL value for the parameters Program and Project, I was unable to get output,
    but when I select values for all 3 parameters I get output. I set default values for the parameters as well.
    So I commented out the where condition in the SP as shown above:
    --------where TSK.ProjectID=@Project-------------
    Now I get output when selecting the ALL value for the parameters.
    But the issue here is performance: it takes 10 sec to retrieve a single project when I execute the SP.
    How can I create an index on a temp table in this SP, and how can I improve the query performance?
    Please help.
    Thanks in advance.
    lucky

    Didn't I provide you a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
        @Portfolio INT,
        @Program INT,
        @Project INT
    AS
    BEGIN
        -- Date window for the working-day calendar; declared here because
        -- the CTE below references these variables.
        DECLARE @StartDate DATETIME
        DECLARE @EndDate DATETIME
        SET @StartDate = '11/01/2013'
        SET @EndDate = '02/28/2014'
        IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
            DROP TABLE #Dates
        IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
            DROP TABLE #DailyTasks
        CREATE TABLE #Dates(WorkDate DATE)
        --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
        ;WITH Dates AS
        (
            SELECT (@StartDate) DateValue
            UNION ALL
            SELECT DateValue + 1
            FROM Dates
            WHERE DateValue + 1 <= @EndDate
        )
        INSERT INTO #Dates
        SELECT DateValue
        FROM Dates D
        LEFT JOIN tb_Holidays H
            ON H.HolidayOn = D.DateValue
            AND H.OfficeID = 2
        WHERE DATEPART(dw, DateValue) NOT IN (1, 7)
            AND H.UID IS NULL
        OPTION (MAXRECURSION 0)
        SELECT TSK.TaskID,
            TR.ResourceID,
            WC.WorkDayCount,
            (TSK.EstimateHrs / WC.WorkDayCount) EstimateHours,
            D.WorkDate,
            TSK.ProjectID,
            RES.ResourceName
        INTO #DailyTasks
        FROM Tasks TSK
        INNER JOIN TasksResource TR
            ON TSK.TaskID = TR.TaskID
        INNER JOIN tb_Resource RES
            ON TR.ResourceID = RES.UID
        OUTER APPLY (SELECT COUNT(*) WorkDayCount
                     FROM #Dates
                     WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate) WC
        INNER JOIN #Dates D
            ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
        -- Catch-all filter so that -1 means "all projects"
        WHERE (TSK.ProjectID = @Project OR @Project = -1)
        SELECT D.ResourceID,
            D.WorkDayCount,
            SUM(D.EstimateHours / D.WorkDayCount) EstimateHours,
            D.WorkDate,
            T.TaskID,
            D.ResourceName
        FROM #DailyTasks D
        OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255)) + ','
                             FROM #DailyTasks DA
                             WHERE D.WorkDate = DA.WorkDate
                             AND D.ResourceID = DA.ResourceID
                             FOR XML PATH('')) AS TaskID) T
        LEFT JOIN tb_Project PRJ
            ON D.ProjectID = PRJ.UID
        INNER JOIN tb_Program PR
            ON PRJ.ProgramID = PR.UID
        INNER JOIN tb_Portfolio PF
            ON PR.PortfolioID = PF.UID
        WHERE (@Portfolio = -1 OR PF.UID = @Portfolio)
        AND (@Program = -1 OR PR.UID = @Program)
        AND (@Project = -1 OR PRJ.UID = @Project)
        GROUP BY D.ResourceID,
            D.WorkDate,
            T.TaskID,
            D.WorkDayCount,
            D.ResourceName
        HAVING SUM(D.EstimateHours / D.WorkDayCount) > 8
    END
    Please mark this as answer if it helps to solve the issue.
    Visakh
    http://visakhm.blogspot.com/
    https://www.facebook.com/VmBlogs
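
    For the index question in the original post: a temp table can be indexed like any permanent table, either inline in CREATE TABLE or with CREATE INDEX once it exists. A minimal sketch against the temp tables above; the column choices are assumptions read off the joins and grouping in this SP, not something tested against the real data:
    -- #Dates is probed with WorkDate BETWEEN ... ranges, so cluster on
    -- WorkDate. This can run right after CREATE TABLE #Dates.
    CREATE CLUSTERED INDEX IDX_Dates ON #Dates (WorkDate);
    -- #DailyTasks is filled via SELECT ... INTO, so create its index only
    -- after that statement; it is grouped and self-joined on these columns.
    CREATE CLUSTERED INDEX IDX_DailyTasks ON #DailyTasks (WorkDate, ResourceID);
    Separately, the (@Param = -1 OR col = @Param) catch-all pattern tends to produce one cached plan for all parameter combinations; adding OPTION (RECOMPILE) to the final SELECT lets the optimizer simplify those predicates per execution, at the cost of a compile on each call.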

  • How to improve the write performance of the database

    Our application is write-intensive; it may write about 2 MB/second to the database. How can we improve the performance of the database? We mainly write to 5 tables.
    Currently the database gets no response and the CPU is 100% used.
    How do we tune this? Thanks in advance.

    Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
    1. What hardware (server, CPU, RAM, and NIC and HBA cards, if any, pointing to storage)?
    2. Storage solution (DAS, iSCSI, SAN, NAS)? Provide manufacturer and model.
    3. If RAID, which implementation of RAID and on how many disks?
    4. If NAS or SAN, how is the read-write cache configured?
    5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
    6. What, in addition to the Oracle database, is running on the server?
    2MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000)s per second. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
    SQL> create table t (
      2  testcol varchar2(4000));
    Table created.
    SQL> set timing on
    SQL> BEGIN
      2    FOR i IN 1..500 LOOP
      3      INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
      4    END LOOP;
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.07
    SQL>
    Now what to do with the remaining 0.93 seconds. <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product.
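
    A footnote on the demo itself: the PL/SQL loop inserts row by row, which spends most of its time on per-statement overhead. If an application can batch its writes, a single set-based statement is usually faster still; a minimal sketch of the same 500 rows into the same table t:
    -- One set-based INSERT instead of 500 single-row statements;
    -- CONNECT BY LEVEL generates the 500 rows inline.
    INSERT INTO t
    SELECT RPAD('X', 3999, 'X')
    FROM dual
    CONNECT BY LEVEL <= 500;
    COMMIT;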

  • How to improve class-loading performance for missing classes

    Hi,
    do you have any ideas how we can improve the class-loading performance on a weblogic 12c running on jrockit? We spend about 20-30 ms per request in class-loading due to the way hibernate criteria API works (it tries to load quite a lot of missing classes). The only thing I came up with was to invert the class-loading using the weblogic descriptor for those missing classes (which are actually no real classes but some parts of a generated JPQL) which does improve performance a bit.
    Thanks
    Dimo


  • How to improve the extractor performance?

    Dear all,
    I have an ODS with active data (1,180,000 records).
    I also have an InfoCube. It has a characteristic C1 with 5 navigation attributes.
    C1 has 4,300,000 records in its master data.
    I have tried to load data from the ODS into the InfoCube,
    but it takes more than 12 hours. How can I improve the performance, or is there a bug in BW when extracting so many records?
    Thank you very much.
    Seven

    Dear Seven,
    did you try deleting the InfoCube's indexes first?
    InfoCube -> Manage -> Performance -> Delete indexes
    Take a look at OSS note
    130253 - General tips on uploading transaction data to BW
    Tip 8:
    When you load large quantities of data into InfoCubes, you should delete the secondary indexes before the loading process and then recreate them afterwards if the following applies: the number of records that are loaded is big in comparison to the number of records that already exist in the (uncompressed) F fact table. For non-transactional InfoCubes, you must delete the indexes to be able to carry out parallel loading.
    Tip 9:
    When you load large quantities of data into an InfoCube, the number range buffer should be increased for the dimensions that are likely to have a high number of data sets.
    To do this, proceed as follows. Use function module RSD_CUBE_GET to find the object name of the dimension that is likely to have a high number of data sets.
    Function module settings:
    I_INFOCUBE = 'Infocube name'
    I_OBJVERS = 'A'
    I_BYPASS_BUFFER = 'X'
    The numbers for the dimensions are then contained in table 'E_T_DIME', column 'NUMBRANR'. If you put 'BID' in front of this number, you get the relevant number range (for example BID0000053).
    You can use transaction SNRO (-> ABAP/4 Workbench -> Development -> Other tools -> Number ranges) to display all number ranges for the dimensions used in BW if you enter BID*. You can use the object name that was determined beforehand to find the required number range.
    By double-clicking this line, you get to the number range maintenance. Choose Edit -> Set up buffering -> Main memory to define the 'No. of numbers in buffer'.
    Set this value to 500, for example. The size depends on the expected data quantity in the initial and in future (delta) uploads.

  • How to improve the query performance or tune query from Explain Plan

    Hi
    The following is the explain plan for my SQL query (generated by Toad v9.7). How can I fix the query?
    SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204                                         
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
    SELECT 1 FROM dual
    WHERE regexp_like(' ','^*[ ]*a');
    but not on a production box, because you are going to experience an extreme tuning event with zero i/o.
    And when I say "extreme" I mean "EXTREME!"
    You've been warned.
    I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
           163
    Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
    SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
          It could also be that the plan is no longer in cursor cache (check v$sql_plan)
    No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
    kkoqbc-subheap (create addr=000000001915C238)
    Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
    I am not sure that this is a good example - the query either executes very fast, or never gets a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How to improve the activation time for a standard DSO

    Hi all,
    I'm facing an issue related to the activation of a DSO. The SM37 log is as follows....
    12:03:16 Job started
    12:03:16 Step 001 started (program RSPROCESS, variant &0000000006946, user ID BWREMOTE)
    12:03:20 Attivazione is running: Data target ZFIZIASA, from 93,627 to 93,627
    12:18:00 Overlapping check with archived data areas for InfoProvider ZFIZIASA
    12:18:00 Data to be activated successfully checked against archiving objects
    12:18:02 Status transition 2 / 2 to 7 / 7 completed successfully
    12:18:13 Program RSBATCH_EXECUTE_PROZESS successfully scheduled as job BIBCTL_4IC45QJA588GKZ0M7JEJ3HCAR with ID 1218130
    12:18:19 Program RSBATCH_EXECUTE_PROZESS successfully scheduled as job BIBCTL_4IC45QJA588GKZ0M7JEJ3HCAR with ID 1218190
    12:18:20 Parallel processes (for Attivazione); 000003
    12:18:20 Timeout for parallel process (for Attivazione): 000300
    12:18:20 Package size (for Attivazione): 020000
    12:18:20 Task handling (for Attivazione): Processi batch
    12:18:20 Server group (for Attivazione): Nessun gruppo di server config
    12:18:20 Activation started (process is running under user BWREMOTE)
    12:18:20 Not all data fields were updated in mode "overwrite"
    12:18:20 Process started
    12:18:20 Process completed
    12:18:20 Activation ended
    Please have a look at the 3rd and 4th lines above (12:03:20 to 12:18:00): the activation spends almost 15 minutes there. I couldn't work out where the issue is or what to do to minimize the activation time.
    It is very challenging, please reply!
    Please help.
    Thanks in adv.
    Ajay

    Hi Kundan,
    Thanks for the response!
    Actually, I have two identical DSOs, with all the same characteristics & key figures, fed from the same DataSource at the same time, but the issue is...
    1) For the 1st DSO, the activation log is....
    01.07.2010 02:02:41 Job started
    01.07.2010 02:02:41 Step 001 started (program RSPROCESS, variant &0000000006946, user ID BWREMOTE)
    01.07.2010 02:02:46 Attivazione is running: Data target ZFIZIASA, from 93,751 to 93,751
    01.07.2010 02:19:27 Overlapping check with archived data areas for InfoProvider ZFIZIASA
    01.07.2010 02:19:27 Data to be activated successfully checked against archiving objects
    01.07.2010 02:19:27 Status transition 2 / 2 to 7 / 7 completed successfully
    01.07.2010 02:19:28 Program RSBATCH_EXECUTE_PROZESS successfully scheduled as job BIBCTL_4ICC6UMXYT00Z2DYOQV8QL8CJ with ID 021
    01.07.2010 02:19:30 Parallel processes (for Attivazione); 000003
    01.07.2010 02:19:30 Timeout for parallel process (for Attivazione): 000600
    01.07.2010 02:19:30 Package size (for Attivazione): 020000
    01.07.2010 02:19:30 Task handling (for Attivazione): Processi batch
    01.07.2010 02:19:30 Server group (for Attivazione): Nessun gruppo di server config
    01.07.2010 02:19:30 Activation started (process is running under user BWREMOTE)
    01.07.2010 02:19:30 Not all data fields were updated in mode "overwrite"
    01.07.2010 02:19:30 Activation ended
    2) For the 2nd DSO, the activation log is...
    01.07.2010 02:01:13 Job started
    01.07.2010 02:01:13 Step 001 started (program RSPROCESS, variant &0000000006947, user ID BWREMOTE)
    01.07.2010 02:01:35 Attivazione is running: Data target ZFIGL_02, from 93,749 to 93,749
    01.07.2010 02:01:43 Overlapping check with archived data areas for InfoProvider ZFIGL_02
    01.07.2010 02:01:43 Data to be activated successfully checked against archiving objects
    01.07.2010 02:01:53 Program RSBATCH_EXECUTE_PROZESS successfully scheduled as job BIBCTL_4ICCAR91ALHE1VHSB6GV9PAPV with ID 02015300
    01.07.2010 02:01:54 Program RSBATCH_EXECUTE_PROZESS successfully scheduled as job BIBCTL_4ICCAR91ALHE1VHSB6GV9PAPV with ID 02015400
    01.07.2010 02:01:56 Program RSBATCH_EXECUTE_PROZESS successfully scheduled as job BIBCTL_4ICCAR91ALHE1VHSB6GV9PAPV with ID 02015600
    Now my client is asking me to bring the activation time of the 1st DSO down to what the 2nd one takes.
    I'm totally blank as to what to do and how to do it!
    The volume of data:
    ZFIZIASA
    Active table - Number of entries: 13.960.508
    Change Log Table - Number of entries: 13.976.530
    ZFIGL_02
    Active Table - Number of entries: 21.923.947
    Change Log table - Number of entries: 21.938.657
    Thanks,
    ajay
    Edited by: sap.ajaykumar on Jul 6, 2010 1:47 PM

  • Select statement taking more time. How to improve the query performance?

    SELECT DISTINCT ORDERKEY,
           SUM(IMPRESSIONCNT) AS ActualImpressions,
           SUM(DiscountedSales) AS ActualRevenue,
           SUM(AgencyCommAmt) AS AgencyCommAmt,
           SUM(SalesHouseCommAMT) AS SalesHouseCommAMT
    --INTO Anticiapted_ADXActualsMeasures
    FROM AdRevenueFact_ADX ADx WITH (NOLOCK)
    WHERE FiscalMonthkey >= 201301
      AND EXISTS (SELECT 1 FROM Anticipated_cdr_AX_OrderItem OI
                  WHERE ADx.Orderkey = OI.Orderkey)
    GROUP BY ORDERKEY
    Clustered indexes on orderkey, fiscalmonthkey and orderkey in AdRevenueFact_ADX (which contains more than 170 million rows).
    thanks
    thanks

    As mentioned by Kalman, if your clustered index starts with Orderkey, then this query will require a full table scan. If it is an option to change the clustered index in such a way that FiscalMonthkey is the leading column, then only the data of the last two years has to be queried.
    In addition, you should have a look at the indexes of table Anticipated_cdr_AX_OrderItem. Ideally, there is a nonclustered index on Orderkey.
    To get better advice, please post the query plan and list all available indexes of these tables.
    Finally, an off topic remark: it is a good practice to keep consistent spelling of object names, and to keep the same spelling as their declaration. Your query would cause serious problems if the database is ever run with case sensitive collation.
    Gert-Jan
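
    A sketch of the two index changes suggested above, with illustrative names (and note that rebuilding the clustered index of a 170-million-row table is a heavy, long-running operation to plan for, not just run):
    -- Lead the clustered index with FiscalMonthkey so the predicate
    -- FiscalMonthkey >= 201301 becomes a range scan; assumes the existing
    -- clustered index is named CIX_AdRevenueFact_ADX.
    CREATE CLUSTERED INDEX CIX_AdRevenueFact_ADX
        ON AdRevenueFact_ADX (FiscalMonthkey, Orderkey)
        WITH (DROP_EXISTING = ON);
    -- Support the EXISTS probe with a nonclustered index on Orderkey.
    CREATE NONCLUSTERED INDEX IX_OrderItem_Orderkey
        ON Anticipated_cdr_AX_OrderItem (Orderkey);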

  • How to improve the performance of adobe forms

    Hi,
    Please give me some suggestions on how to improve the performance of an Adobe form.
    Right now, when I am triggering user events, it works fine for the first 6 or 7 events; from the next one onwards it hangs.
    I read about the wizard form design approach; how can I use it here?
    Thanks,
    Aravind

    Hi Otto,
    The form is created using HCM Forms and Processes. I am performing user events in the form.
    A user event does a round trip in which the form data is sent to the backend SAP system, processing happens on the ABAP side, and the result appears on the form. The first 6 or 7 user events work correctly and the result appears on the form. Around the 8th or 9th one, the wait symbol appears and the form is not re-rendered. The form is 6 pages long; the issue does not occur with a 1-page form.
    I was reading about ways to improve performance during re-rendering, given here:
    http://www.adobe.com/devnet/livecycle/articles/DynamicInteractiveFormPerformance.pdf
    It talks about the wizard form design approach, but in transaction SFP I am not seeing any kind of wizard.
    Let me know if you need further details.
    Thanks,
    Aravind

  • How to improve the performance of the variable selection screen

    Hi,
    At the query level we have a variable of type "User Entry / Default Value". When the user presses F4 to select a particular value, it takes hours. How can we improve the performance of the variable selection screen?
    Thanks in Advance.
    Regards,
    Venkat.

    Dear Venkat,
    please try the following steps:
    1. Say the InfoObject is 0EMPLOYEE, against which you have created the variable that the user is trying to select a value for when executing the report.
    2. Go to RSA1 -> InfoObject tab -> select InfoObject 0EMPLOYEE.
    3. Select the following options:
       Query Execution Filter Val. Selectn - 'Only Posted Values for Navigation'
       Filter Value Repr. At Query Exec. - 'Selector Box Without Values'
    Please let me know if there are any further issues. Feel free to raise further concerns.
    Thnx,
    Sukdev K

  • How to improve the performance of one program in one select query

    Hi,
    I am facing a performance issue in one program; I have given part of the program's code below.
    It is taking a lot of time in the SELECT query below. How can I improve the performance?
    A quick response would be highly appreciated.
    Program code
    DATA: BEGIN OF t_dels_tvpod OCCURS 100,
            vbeln      LIKE tvpod-vbeln,
            posnr      LIKE tvpod-posnr,
            lfimg_diff LIKE tvpod-lfimg_diff,
            calcu      LIKE tvpod-calcu,
            podmg      LIKE tvpod-podmg,
            uecha      LIKE lips-uecha,
            pstyv      LIKE lips-pstyv,
            xchar      LIKE lips-xchar,
            grund      LIKE tvpod-grund,
          END OF t_dels_tvpod.
    DATA: l_tabix       LIKE sy-tabix,
          lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
          ls_dels_tvpod LIKE t_dels_tvpod.
    SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
      FOR ALL ENTRIES IN t_dels_tvpod
      WHERE vbeln = t_dels_tvpod-vbeln
        AND erdat IN s_erdat
        AND bldat IN s_bldat
        AND podat IN s_podat
        AND ernam IN s_ernam
        AND kunnr IN s_kunnr
        AND vkorg IN s_vkorg
        AND vstel IN s_vstel
        AND lfart NOT IN r_del_types_exclude.
    Waiting for a quick response.
    Best regards,
    BDP

    Bansidhar,
    1) You need to add a check to make sure that the internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not empty; if it is empty, skip the SELECT statement, because FOR ALL ENTRIES with an empty table ignores the condition and selects far more rows than intended.
    2) Check the performance with and without the clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT prevents the SELECT statement from using an index. Instead of 'lfart NOT IN r_del_types_exclude' use 'lfart IN r_del_types_exclude' and build r_del_types_exclude using r_del_types_exclude-sign = 'E' instead of 'I'.
    3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
    Try doing something like this.
    TYPES: BEGIN OF ty_del_types_exclude,
             sign(1)   TYPE c,
             option(2) TYPE c,
             low       TYPE likp-lfart,
             high      TYPE likp-lfart,
           END OF ty_del_types_exclude.
    DATA: w_del_types_exclude TYPE          ty_del_types_exclude,
          t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
          t_dels_tvpod_tmp    LIKE TABLE OF t_dels_tvpod        .
    IF NOT t_dels_tvpod[] IS INITIAL.
    * Assuming that I would like to exclude delivery types 'LP' and 'LPP'
      CLEAR w_del_types_exclude.
      REFRESH t_del_types_exclude.
      w_del_types_exclude-sign = 'E'.
      w_del_types_exclude-option = 'EQ'.
      w_del_types_exclude-low = 'LP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      w_del_types_exclude-low = 'LPP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      t_dels_tvpod_tmp[] = t_dels_tvpod[].
      SORT t_dels_tvpod_tmp BY vbeln.
      DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
        COMPARING
          vbeln.
      SELECT vbeln
        FROM likp
        INTO TABLE lt_dels_tvpod
        FOR ALL ENTRIES IN t_dels_tvpod_tmp
        WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
        AND erdat IN s_erdat
        AND bldat IN s_bldat
        AND podat IN s_podat
        AND ernam IN s_ernam
        AND kunnr IN s_kunnr
        AND vkorg IN s_vkorg
        AND vstel IN s_vstel
        AND lfart IN t_del_types_exclude.
    ENDIF.

  • How Can we improve the report performance..?

    Hi experts,
    I am learning BusinessObjects XI R2. Please let me know how we can improve report performance.
    Please give a detailed answer.

    First find out why your report is performing slowly. Then fix it.
    That sounds silly, but there's really no single-path process for improving report performance. You might find issues with the report. With the network. With the universe. With the database. With the database design. With the query definition. With report variables. With the ETL. Once you figure out where the problem is, then you start fixing it. Fixing one problem may very well reveal another. I spent two years working on a project where we touched every single aspect of reporting (from data collection through ETL and all the way to report delivery) at some point or another.
    I feel like your question is a bit broad (meaning too generic) to address as you have phrased it. Even some of the suggestions already given...
    Array fetch size - this determines the number of rows fetched in a single pass. You really don't need to modify this unless your network is giving issues. I have seen folks suggest setting this to one (which results in a lot of network requests) or 500 (which results in fewer requests, but they're much MUCH larger). Does either improve performance? They might, or they might make it worse. Without understanding how your network traffic is managed, it's hard to say.
    Shortcut joins? Sure, they can help, as long as they are appropriate. Many times they are not: http://www.dagira.com/2010/05/27/everything-about-shortcut-joins/
    And I could go on and on. The bottom line is that performance tuning doesn't typically fall into a "cookie cutter" approach. It would be better to have a specific question.
