View object performance issue with Oracle seeded tables

I am writing a view object on Oracle seeded tables such as MTL_PARAMETERS, and it is taking a long time to render in the OAF page. I am trying to display all of the view object's columns in the detail disclosure of an advanced table. My application takes more than two minutes to display the view columns, even though the query returns just 200 rows. Please help me improve performance when my query uses seeded tables.
This issue happens only with R12 view objects and advanced tables.

Similar Messages

  • Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1

    Hello,
    I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 and RedHat Linux 2.1. The processor goes berserk at 100% for long periods of time (around 5 minutes), and all of the RAM gets used.
    Some environment characteristics:
    Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
    OS: RedHat Linux 2.1 Enterprise.
    Oracle: Oracle Enterprise Edition 9.2.0.4
    Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
    Does anyone know what could be going on?
    Is anybody else having this similar behavior?
    We changed from SQL Server, so we are not the world's experts on the matter, but we want a reliable system nonetheless.
    Please help us out; give us some tips, tricks, or guides…
    Thanks to all,
    Frank

    Thank you very much and sorry I couldn't write sooner. It seems that the administrator doesn't see the kswap activity going on so much, so I don't really know what is going on.
    We are looking at some queries and some indexing, but this is nuts; if I had some poor queries, which we don't really, the server would show a spike, right?
    But it goes crazy and has two Oracle processes taking all the resources. There seems to be little swapping going on.
    So now what? They are already talking about MS-SQL. Please help me out here, this is crazy!
    We have maybe the most powerful combination here. What is Oracle doing?
    We even killed the worker process of IIS and had no one doing anything with the database, and still those two processes keep going.
    Can someone help me?
    Thanks,
    Frank
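    A first diagnostic step here (a minimal sketch, assuming SELECT access to the V$ views on 9.2) is to map the two busy operating-system processes back to their Oracle sessions and the SQL they are currently running; the :os_pid bind is the PID you see at 100% CPU in top or ps:
    -- Map a busy OS process id to its Oracle session and current SQL (9.2 syntax:
    -- sql_address/sql_hash_value are used because SQL_ID does not exist before 10g).
    SELECT s.sid,
           s.serial#,
           s.username,
           s.program,
           q.sql_text
      FROM v$process p
           JOIN v$session s
              ON s.paddr = p.addr
           LEFT JOIN v$sqlarea q
              ON q.address = s.sql_address
             AND q.hash_value = s.sql_hash_value
     WHERE p.spid = :os_pid;
    Once you know which statements those two processes are running, Statspack or a 10046 trace on those sessions will show where the CPU and memory are going.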

  • Performance issue with Oracle data source

    Hi all,
    I've got a rather strange problem that I'm stuck on and need some assistance with.
    I have a rules file which pulls data in via a SQL data source that is an Oracle server. If I cut and paste the three sections "select", "from" and "where" into SQL Developer and run the query, it takes less than 1 second to complete. When I run the "load data" with this rules file, or even use "Retrieve" in the rules file editor, it takes up to an hour to complete/retrieve the data.
    The table in question has millions of rows and I'm using one of the indexed fields to retrieve the data. It's as if Essbase/the rules file is ignoring the index, or I have a config issue with the ODBC settings on the server that is causing the problem.
    ODBC.INI file entry for the Oracle server as follows (changed any sensitive info to xxx or 999).
    [XXX]
    Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
    Description=DataDirect 5.2 Oracle Wire Protocol
    AlternateServers=
    ApplicationUsingThreads=1
    ArraySize=60000
    CachedCursorLimit=32
    CachedDescLimit=0
    CatalogIncludesSynonyms=1
    CatalogOptions=0
    ConnectionRetryCount=0
    ConnectionRetryDelay=3
    DefaultLongDataBuffLen=1024
    DescribeAtPrepare=0
    EnableDescribeParam=0
    EnableNcharSupport=0
    EnableScrollableCursors=1
    EnableStaticCursorsForLongData=0
    EnableTimestampWithTimeZone=0
    HostName=999.999.999.999
    LoadBalancing=0
    LocalTimeZoneOffset=
    LockTimeOut=-1
    LogonID=xxx
    Password=xxx
    PortNumber=1521
    ProcedureRetResults=0
    ReportCodePageConversionErrors=0
    ServiceType=0
    ServiceName=xxx
    SID=
    TimeEscapeMapping=0
    UseCurrentSchema=1
    Can anyone please advise on this lack of performance?
    Thanks in advance
    Bagpuss

    One other thing that I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when it retrieves data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either by record or by groups of records) that is slowed WAY down over the WAN.
    Our solution to this was to remove the query from the load rule, run it via SQL*Plus on a command line at the geographic location where the Oracle database is, and then ftp the resulting file to where the Essbase server is.
    With upwards of 6 million records being retrieved, it took around 4 hours in the load rule, but running the query via the command line took 10 minutes, and the ftp took less than 5.
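    For reference, a minimal sketch of that workaround (the file name, table and SELECT list below are placeholders, not the actual rule file's query):
    -- extract.sql: run with SQL*Plus next to the Oracle database, then transfer the file.
    SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 500 TRIMSPOOL ON
    SPOOL /tmp/essbase_load.txt
    SELECT col1 || ',' || col2 || ',' || col3   -- substitute the rule file's select/from/where here
      FROM my_fact_table
     WHERE load_flag = 'Y';
    SPOOL OFF
    EXIT
    Run it as sqlplus user/password@local_db @extract.sql, then ftp (or scp) /tmp/essbase_load.txt to the Essbase server for a file-based load.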

  • Performance issue with Oracle Text index

    Hi Experts,
    We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment and I am facing a strange performance issue.
    One SQL statement with a CONTAINS clause is taking forever - more than 20 minutes and it still does not complete. This SQL has a CONTAINS clause, an EXISTS clause and a NOT EXISTS clause.
    Now, if I remove the EXISTS clause and the NOT EXISTS clause, it completes fast, but with those two clauses it just takes forever. It is late at night so I am not able to post the table and SQL query details and will do so tomorrow, but based on this general description, are there any pointers for me to review?
    --SQL query that runs fine:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    --sql query that hangs forever:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    and exists (--one clause here with a few table joins)
    and not exists (--one clause here with a few table joins);
    --Now another strange thing I found: if instead of 'TO%' in this SQL I use 'ZZ%' or 'L1%', it works fast, but for 'TO%' it goes slow with those two EXISTS / NOT EXISTS clauses!
    I will be most thankful for the inputs.
    OrauserN

    Hi Barbara,
    First of all, thanks a lot for reviewing the issue.
    Unfortunately, making the change to empty_stoplist did not work out. I am copying the entire SQL here today that has this issue and will be most thankful for more insights/pointers on what can be done.
    Here is the entire sql:
    SELECT U.CLNT_OID,
           U.USR_OID,
           S.EMAILADDRESS,
           U.FIRST_NAME,
           U.LAST_NAME,
           S.JOBCODE,
           S.LOCATION,
           S.DEPARTMENT,
           S.ASSOCIATEID,
           S.ENTERPRISECOMPANYCODE,
           S.EMPLOYEEID,
           S.PAYGROUP,
           S.PRODUCTLOCALE
      FROM    ACCESS_USR U
           INNER JOIN
              ACCESS_SIA S
           ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
    WHERE     U.CLNT_OID = 'G39NY3D25942TXDA'
           AND EXISTS
                  (SELECT 1
                     FROM ACCESS_USR_GROUP_XREF UGX
                          INNER JOIN ACCESS_GROUP RELG
                             ON     RELG.CLNT_OID = UGX.CLNT_OID
                                AND RELG.GROUP_OID = UGX.GROUP_OID
                          INNER JOIN ACCESS_GROUP G
                             ON     G.CLNT_OID = RELG.CLNT_OID
                                AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
                    WHERE     UGX.CLNT_OID = U.CLNT_OID
                          AND UGX.USR_OID = U.USR_OID
                          AND G.GROUP_OID = 920512943
                          AND UGX.INCLUDED = 1)
           AND NOT EXISTS
                      (SELECT 1
                         FROM    ACCESS_USR_GROUP_XREF UGX
                              INNER JOIN
                                 ACCESS_GROUP G
                              ON     G.CLNT_OID = UGX.CLNT_OID
                                 AND G.GROUP_OID = UGX.GROUP_OID
                        WHERE     UGX.CLNT_OID = U.CLNT_OID
                              AND UGX.USR_OID = U.USR_OID
                              AND G.GROUP_OID = 920512943
                              AND UGX.INCLUDED = 1)
           AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    Like I said before, if the EXISTS and NOT EXISTS clauses are removed it runs in under a second. But with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
    Note also that it was not 'TO%' but 'Bon%' in the CONTAINS clause that is giving the issue - sorry, that was wrong on my part.
    Also please see below the Oracle Text indexes defined on the table ACCESS_USR:
    --definition of preferences used in the index:
    SET SERVEROUTPUT ON size unlimited
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
       ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_LEXER with BASIC LEXER is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    /
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
       ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_WL with BASIC WORDLIST is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    /
    --now below is the code of the index:
    CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. Also, if I modify the query to use only the NOT EXISTS clause and remove the EXISTS clause, it returns in less than one second; using only the NOT EXISTS clause it returns in less than 4 seconds. But with both clauses it runs forever!
    When I tried to use dbms_xplan.display_cursor to get the query plan (for the case with both the EXISTS and NOT EXISTS clauses in the query), it said that the previous statement's SQL id was 0 or something like that, so I was not able to see the query plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
    Regards
    OrauserN
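    As a side note on capturing the plan for the slow run: a minimal sketch (assuming the statement and the plan fetch run in the same session) is to add the GATHER_PLAN_STATISTICS hint and then read the actual row-source statistics with DBMS_XPLAN. It is shown here on the short form of the query for brevity; the same hint goes into the full EXISTS / NOT EXISTS version:
    SELECT /*+ GATHER_PLAN_STATISTICS */
           U.CLNT_OID, U.USR_OID, S.MAILADDR
      FROM access_usr U
           INNER JOIN access_sia S
              ON S.USR_OID = U.USR_OID
             AND S.CLNT_OID = U.CLNT_OID
     WHERE U.CLNT_OID = 'G39NY3D25942TXDA'
       AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    -- Immediately afterwards, in the same session, before running any other statement:
    SELECT *
      FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR (NULL, NULL, 'ALLSTATS LAST'));
    Comparing E-Rows and A-Rows in that output usually shows which row source the optimizer is misestimating when the EXISTS / NOT EXISTS subqueries are added.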

  • Performance issue with joins on table VBAK, VBEP, VBKD and VBAP

    Hi all,
    I have a report with a join on all 4 tables: VBAK, VBEP, VBKD and VBAP.
    The report is giving performance issues because of this join.
    All the key fields are used for joining the tables, but some of the non-key fields like VBAP-VSTEL, VBAP-ABGRU and VBEP-WADAT are also part of the SELECT query and are getting filled.
    Because of these, there is a performance issue.
    Is there any way I can improve the performance of the join SELECT query?
    I am trying the "for all entries" clause...
    Kindly provide any alternative if possible.
    Thanks.

    Hi,
    Please perform some of the below steps, as applicable, for the performance improvement:
    a) Remove the join on all the tables and put joins only on header and item (VBAK & VBAP).
    b) The code should have separate SELECTs for VBEP and VBKD.
    c) Remove the non-key fields from the WHERE clause. Once you retrieve data from the database into the internal table, sort the table and delete the entries that do not satisfy the conditions on the non-key fields like VSTEL, ABGRU and WADAT.
    d) The last option is to create an index on the VBAP & VBEP tables for the fields VSTEL, ABGRU & WADAT (not advisable).
    e) The buffering option on the database tables is also possible.
    f) Select into the internal table only the fields that are needed for the processing logic, and the SELECT query should contain the field names in the same order as in the database table.
    Hope this helps.
    Regards
    JLN

  • Performance issue with Oracle Global Temporary table

    Hi
    Oracle version : 10.2.0.3.0 - Production
    We have an application in Java / Oracle. A user request comes in as XML; the Oracle parser parses it and inserts it into global temporary tables, and then a business stored procedure picks up the data from these GTTs and does the required processing.
    In the end, the required response data is inserted into response GTTs, from which the response XML is generated.
    Question: Does the use of global temporary tables in Oracle degrade performance, given that we have a large number of GTTs in our application, approximately 500-600 such tables?
    Regards,
    Vikas Kumar

    Hi All,
    Here is the architecture of my application:
    The Java application creates XML from the screen values and then inserts that XML into a framework table (in a separate DB schema). Then Java calls a stored procedure in the same framework DB, and in that SP we have the following steps.
    1. It fetches the XML from the XMLType table and inserts it into a screen-specific XMLType table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs which are created in separate product schemas.
    2. It calls the product SP, and in the product SP we have the business logic. The product SP does the execution and then inserts the response into a response GTT.
    3. The response XML is created using an XML generation function and the response GTT.
    I hope you will understand my architecture this time; now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do an explicit delete, which I would have to do if I were using normal tables.
    Regards,
    Vikas Kumar
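    For what it's worth, a minimal sketch of the kind of GTT that matches that requirement (the table and column names below are made up for illustration): with ON COMMIT DELETE ROWS the rows are private to the session and disappear automatically at commit or rollback, so no explicit DELETE is needed.
    CREATE GLOBAL TEMPORARY TABLE request_stage_gtt
    (
       request_id   NUMBER,
       node_name    VARCHAR2 (100),
       node_value   VARCHAR2 (4000)
    )
    ON COMMIT DELETE ROWS;
    -- Use ON COMMIT PRESERVE ROWS instead if the data must survive across commits
    -- within the same session; those rows are still removed when the session ends.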

  • Performance issue with sys.user_history$ table

    Hi,
    I am investigating performance for one of my client's databases (which is at 9.2.0.8) as they are experiencing intermittent poor response. In the Statspack report (Top SQL section) I can see that a catalog table called SYS.USER_HISTORY$ is being accessed very frequently. Now I understand that this would be a result of password limits being set in users' profile but each time the table is accessed, it would appear to incur a full table scan resulting in over 7,500 gets each time. The total buffer gets (and physical blocks read) from this table account for a high percentage of the total and since the highest waits are buffer and I/O related this must be a major factor. Here is an extract from the Statspack report:
    Buffer Gets   Executions   Gets per Exec   %Total   CPU Time (s)   Elapsd Time (s)   Hash Value
    2,327,138     316          7,364.4         7.8      190.22         4889.61           3236020785
      select password_date from user_history$ where user# = :1 order by password_date desc
    2,320,524     313          7,413.8         7.8      199.41         4278.44           3584552880
      delete from user_history$ where password_date < :1 and user# = :2
    2,272,260     308          7,377.5         7.6      169.36         3453.12           822812381
      select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)
    Physical Reads   Executions   Reads per Exec   %Total   CPU Time (s)   Elapsd Time (s)   Hash Value
    1,448,689        316          4,584.5          20.6     190.22         4889.61           3236020785
      select password_date from user_history$ where user# = :1 order by password_date desc
    1,269,172        313          4,054.9          18.1     199.41         4278.44           3584552880
      delete from user_history$ where password_date < :1 and user# = :2
    1,206,906        308          3,918.5          17.2     169.36         3453.12           822812381
      select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)
    Is there any way to improve access to this table? Since it's a catalog table, I presume it would not be acceptable to add an index to it, but, for example, would it be acceptable to assign it to a suitably sized KEEP buffer pool, which should at least reduce the amount of physical I/O incurred?
    Any ideas would be appreciated.
    Regards,
    Ian Brennan

    Hi,
    Here is the remaining information which I have now gathered:-
    select count(*) from dba_users;
      24681
    select count(*) from sys.user_history$;
      1258133
    select profile, limit from dba_profiles where resource_name = 'PASSWORD_REUSE_TIME';
      PROFILE                LIMIT
      DEFAULT                UNLIMITED
      PRS2_DEFAULT_PROFILE   365
    select bytes from dba_segments where SEGMENT_NAME='USER_HISTORY$';
      61865984
    explain plan for
    select password_date from user_history$ where user# = :1 order by password_date desc;
      SELECT STATEMENT CHOOSE                     Cost: 647   Bytes: 913   Cardinality: 83
        2 SORT ORDER BY                           Cost: 647   Bytes: 913   Cardinality: 83
          1 TABLE ACCESS FULL SYS.USER_HISTORY$   Cost: 638   Bytes: 913   Cardinality: 83
    Any further thoughts?
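    If the KEEP pool route is tried, a minimal sketch of the two statements involved (the 64M size is only an example, and whether altering a SYS-owned dictionary table this way is acceptable is a question for Oracle Support):
    -- Size the KEEP cache (dynamic when an spfile and dynamic SGA parameters are in use)...
    ALTER SYSTEM SET db_keep_cache_size = 64M SCOPE = BOTH;
    -- ...and assign the segment to it.
    ALTER TABLE sys.user_history$ STORAGE (BUFFER_POOL KEEP);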

  • Performance issue with view selection after migration from oracle to MaxDb

    Hello,
    After the migration from oracle to MaxDb we have serious performance issues with a lot of our tableview selections.
    Does anybody know about this problem and how to solve it ??
    Best regards !!!
    Gert-Jan

    Hello Gert-Jan,
    most probably you need additional indexes to get better performance.
    Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
    If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
    SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
    http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
    http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
    Best regards,
    Melanie Handreck

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table and it seems that the CBO is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
    Object Name                                              Rows   Bytes   Cost   Object Node   In/Out   PStart   PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE                   1              249
    UPDATE SIG.SIG_QUA_IMG_LT
    NESTED LOOPS SEMI                                        1      266     249
    PARTITION RANGE ALL                                                                                    1        9
    TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT                     1      259     2                              1        9
    VIEW SYS.VW_NSO_1                                        1      7       247
    NESTED LOOPS                                             1      739     247
    NESTED LOOPS                                             1      677     247
    NESTED LOOPS                                             1      412     246
    NESTED LOOPS                                             1      114     244
    INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK                1      62      2
    INDEX RANGE SCAN SIG.QIM_PK                              1      52      243
    TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT    1      298     2                              ROWID    ROW L
    INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$                    1              1
    INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX          1      265     1
    INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK               1      62
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
       SET z1.nextver =
              SYS.ltutil.subsversion
                 (z1.nextver,
                  SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                     'SIG.SIG_QUA_IMG',
                                                     'NpCyPCX3dkOAHSuBMjGioQ==',
                                                     4574,
                                                     4575),
                  4574)
     WHERE z1.ROWID IN (
              (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
                          INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
                          INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
                      t2.ROWID
                 FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                              UNIQUE VERSION
                         FROM wmsys.wm$modified_tables
                        WHERE table_name = 'SIG.SIG_QUA_IMG'
                          AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                          AND VERSION > 4574
                          AND VERSION <= 4575) j1,
                      sig.sig_qua_img_lt t1,
                      sig.sig_qua_img_lt t2,
                      wmsys.wm$nextver_table j2,
                      (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                              UNIQUE VERSION
                         FROM wmsys.wm$modified_tables
                        WHERE table_name = 'SIG.SIG_QUA_IMG'
                          AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                          AND VERSION > 4574
                          AND VERSION <= 4575) j3
                WHERE t1.VERSION = j1.VERSION
                  AND t1.ima_id = t2.ima_id
                  AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
                  AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
                  AND t2.nextver != '-1'
                  AND t2.nextver = j2.next_vers
                  AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been analyzed recently so that the optimizer has the most current data about the table.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
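    As a concrete, hedged example of that last suggestion, refreshing the statistics on the versioned table from the posted plan might look like this; the options shown are just reasonable defaults:
    BEGIN
       DBMS_STATS.gather_table_stats (
          ownname          => 'SIG',
          tabname          => 'SIG_QUA_IMG_LT',
          cascade          => TRUE,                        -- also gather index statistics
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /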

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions .
    Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
     END pipeline_example;
     /
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                      SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
              ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
     END pipeline_example;
     /
     ALTER PACKAGE pipeline_example COMPILE;

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All the necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
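    A minimal sketch of enabling such a trace for the current session (10046 with wait events, which is what waits => TRUE produces) before running the two procedures; executing DBMS_MONITOR normally requires DBA privileges:
    EXEC DBMS_MONITOR.session_trace_enable (waits => TRUE, binds => FALSE);
    -- ... run with_pipeline and no_pipeline here ...
    EXEC DBMS_MONITOR.session_trace_disable;
    -- Then format the trace file from user_dump_dest with tkprof and compare the two runs.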

  • Performance issue with MSEG table

    Hi all,
    I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome this issue?
    Regards ,
    Amit

    Hi,
    There could be various reasons for a performance issue with MSEG:
    1) The database statistics of tables and indexes are not up to date; because of this, the wrong index is chosen during execution.
    2) Improper indexes, because there is no index with the fields mentioned in the WHERE clause of the statement. Because of this, the CBO would have chosen a wrong index and done a range scan.
    3) An optimizer bug in Oracle.
    4) The size of the table is very huge; consider archiving.
    Better to switch on an ST05 trace before you run these statements; it will give more detailed information about where exactly the time is being spent during the execution.
    Hope this helps
    dileep

  • Performance issue with a Custom view

    Hi ,
    I am pretty new to performance tuning and am facing a performance issue with a custom view.
    The execution time for the view query is good, but as soon as I append a WHERE clause to the view query, the execution time increases.
    Below is the view query:
    CREATE OR REPLACE VIEW XXX_INFO_VIEW AS
    SELECT csb.system_id license_id,
    cst.name license_number ,
    csb.system_type_code license_type ,
    csb.attribute3 lac , -- license authorization code
    csb.attribute6 lat , -- license admin token
    csb.attribute12 ols_reg, -- OLS Registration allowed flag
    l.attribute4 license_biz_type ,
    NVL (( SELECT 'Y' l_supp_flag
    FROM csi_item_instances cii,
    okc_k_lines_b a,
    okc_k_items c
    WHERE c.cle_id = a.id
    AND a.lse_id = 9
    AND c.jtot_object1_code = 'OKX_CUSTPROD'
    AND c.object1_id1 = cii.instance_id||''
    AND cii.instance_status_id IN (3, 510)
    AND cii.system_id = csb.system_id
    AND a.sts_code IN ('SIGNED', 'ACTIVE')
    AND NVL (a.date_terminated, a.end_date) > SYSDATE
    AND ROWNUM < 2), 'N') active_supp_flag,
    hp.party_name "Customer_Name" , -- Customer Name
    hca.attribute12 FGE_FLAG,
    (SELECT /*+INDEX (oklt OKC_K_LINES_TL_U1) */
    nvl(max((decode(name, 'eSupport','2','Enterprise','1','Standard','1','TERM RTU','0','TERM RTS','0','Notfound'))),0) covName --TERM RTU and TERM RTS added as per Vijaya's suggestion APR302013
    FROM OKC_K_LINES_B oklb1,
    OKC_K_LINES_TL oklt,
    OKC_K_LINES_B oklb2,
    OKC_K_ITEMS oki,
    CSI_item_instances cii
    WHERE
    OKI.JTOT_OBJECT1_CODE = 'OKX_CUSTPROD'
    AND oklb1.id=oklt.id
    AND OKI.OBJECT1_ID1 =cii.instance_id||''
    AND Oklb1.lse_id=2
    AND oklb1.dnz_chr_id=oklb2.dnz_chr_id
    AND oklb2.lse_id=9
    AND oki.CLE_ID=oklb2.id
    AND cii.system_id=csb.system_id
    AND oklt.LANGUAGE=USERENV ('LANG')) COVERAGE_TYPE
    FROM csi_systems_b csb ,
    csi_systems_tl cst ,
    hz_cust_accounts hca,
    hz_parties hp,
    fnd_lookup_values l
    WHERE csb.system_type_code = l.lookup_code (+)
    AND csb.system_id = cst.system_id
    AND hca.cust_account_id =csb.customer_id
    AND hca.party_id= hp.party_id
    AND cst.language = USERENV ('LANG')
    AND l.lookup_type (+) = 'CSI_SYSTEM_TYPE'
    AND l.language (+) = USERENV ('LANG')
    AND NVL (csb.end_date_active, SYSDATE+1) > SYSDATE;
    I have forced an index to avoid a full table scan on OKC_K_LINES_TL and suppressed the index on CSI_ITEM_INSTANCES.instance_id to make the view query fast.
    So when I do select * from XXX_INFO_VIEW it executes in a decent time, but when I try to do
    select * from XXX_INFO_VIEW where active_supp_flag='Y' and coverage_type='1'
    it takes a lot of time.
    The execution plan is the same for both queries in terms of cost, but with the WHERE clause the number of bytes increases.
    Below are the execution plans:
    View query:
    SELECT STATEMENT ALL_ROWS Cost: 7,212 Bytes: 536,237 Cardinality: 3,211                                         
         10 COUNT STOPKEY                                    
              9 NESTED LOOPS                               
                   7 NESTED LOOPS Cost: 1,085 Bytes: 101 Cardinality: 1                          
                        5 NESTED LOOPS Cost: 487 Bytes: 17,043 Cardinality: 299                     
                             2 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 2,325 Cardinality: 155                
                                  1 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315           
                             4 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2                
                                  3 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2           
                        6 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1                     
                   8 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1                          
         12 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 2 Bytes: 7 Cardinality: 1                                    
              11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                               
         28 SORT AGGREGATE Bytes: 169 Cardinality: 1                                    
              27 NESTED LOOPS                               
                   25 NESTED LOOPS Cost: 16,549 Bytes: 974,792 Cardinality: 5,768                          
                        23 NESTED LOOPS Cost: 5,070 Bytes: 811,737 Cardinality: 5,757                     
                             20 NESTED LOOPS Cost: 2,180 Bytes: 56,066 Cardinality: 578                
                                  17 NESTED LOOPS Cost: 967 Bytes: 32,118 Cardinality: 606           
                                       14 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 3,465 Cardinality: 315      
                                            13 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
                                       16 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2      
                                            15 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
                                  19 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1           
                                       18 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1      
                             22 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 5 Bytes: 440 Cardinality: 10                
                                  21 INDEX RANGE SCAN INDEX OKC.OKC_K_LINES_B_N2 Cost: 2 Cardinality: 9           
                        24 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_TL_U1 Cost: 1 Cardinality: 1                     
                   26 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_TL Cost: 2 Bytes: 28 Cardinality: 1                          
         43 HASH JOIN Cost: 7,212 Bytes: 536,237 Cardinality: 3,211                                    
              41 NESTED LOOPS                               
                   39 NESTED LOOPS Cost: 7,070 Bytes: 485,792 Cardinality: 3,196                          
                        37 HASH JOIN Cost: 676 Bytes: 341,972 Cardinality: 3,196                     
                             32 HASH JOIN RIGHT OUTER Cost: 488 Bytes: 310,012 Cardinality: 3,196                
                                  30 TABLE ACCESS BY INDEX ROWID TABLE APPLSYS.FND_LOOKUP_VALUES Cost: 7 Bytes: 544 Cardinality: 17           
                                       29 INDEX RANGE SCAN INDEX (UNIQUE) APPLSYS.FND_LOOKUP_VALUES_U1 Cost: 3 Cardinality: 17      
                                  31 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_B Cost: 481 Bytes: 207,740 Cardinality: 3,196           
                             36 VIEW VIEW AR.index$_join$_013 Cost: 187 Bytes: 408,870 Cardinality: 40,887                
                                  35 HASH JOIN           
                                       33 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 112 Bytes: 408,870 Cardinality: 40,887      
                                       34 INDEX FAST FULL SCAN INDEX AR.HZ_CUST_ACCOUNTS_N2 Cost: 122 Bytes: 408,870 Cardinality: 40,887      
                        38 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                     
                   40 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 2 Bytes: 45 Cardinality: 1                          
              42 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_TL Cost: 142 Bytes: 958,770 Cardinality: 63,918           
    Execution plan for view query with WHERE clause:
    SELECT STATEMENT ALL_ROWS Cost: 7,212 Bytes: 2,462,837 Cardinality: 3,211                                         
         10 COUNT STOPKEY                                    
              9 NESTED LOOPS                               
                   7 NESTED LOOPS Cost: 1,085 Bytes: 101 Cardinality: 1                          
                        5 NESTED LOOPS Cost: 487 Bytes: 17,043 Cardinality: 299                     
                             2 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 2,325 Cardinality: 155                
                                  1 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315           
                             4 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2                
                                  3 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2           
                        6 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1                     
                   8 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1                          
         12 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 2 Bytes: 7 Cardinality: 1                                    
              11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                               
         28 SORT AGGREGATE Bytes: 169 Cardinality: 1                                    
              27 NESTED LOOPS                               
                   25 NESTED LOOPS Cost: 16,549 Bytes: 974,792 Cardinality: 5,768                          
                        23 NESTED LOOPS Cost: 5,070 Bytes: 811,737 Cardinality: 5,757                     
                             20 NESTED LOOPS Cost: 2,180 Bytes: 56,066 Cardinality: 578                
                                  17 NESTED LOOPS Cost: 967 Bytes: 32,118 Cardinality: 606           
                                       14 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 3,465 Cardinality: 315      
                                            13 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
                                       16 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2      
                                            15 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
                                  19 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1           
                                       18 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1      
                             22 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 5 Bytes: 440 Cardinality: 10                
                                  21 INDEX RANGE SCAN INDEX OKC.OKC_K_LINES_B_N2 Cost: 2 Cardinality: 9           
                        24 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_TL_U1 Cost: 1 Cardinality: 1                     
                   26 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_TL Cost: 2 Bytes: 28 Cardinality: 1                          
         44 VIEW VIEW APPS.WRS_LICENSE_INFO_V Cost: 7,212 Bytes: 2,462,837 Cardinality: 3,211                                    
              43 HASH JOIN Cost: 7,212 Bytes: 536,237 Cardinality: 3,211                               
                   41 NESTED LOOPS                          
                        39 NESTED LOOPS Cost: 7,070 Bytes: 485,792 Cardinality: 3,196                     
                             37 HASH JOIN Cost: 676 Bytes: 341,972 Cardinality: 3,196                
                                  32 HASH JOIN RIGHT OUTER Cost: 488 Bytes: 310,012 Cardinality: 3,196           
                                       30 TABLE ACCESS BY INDEX ROWID TABLE APPLSYS.FND_LOOKUP_VALUES Cost: 7 Bytes: 544 Cardinality: 17      
                                            29 INDEX RANGE SCAN INDEX (UNIQUE) APPLSYS.FND_LOOKUP_VALUES_U1 Cost: 3 Cardinality: 17
                                       31 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_B Cost: 481 Bytes: 207,740 Cardinality: 3,196      
                                  36 VIEW VIEW AR.index$_join$_013 Cost: 187 Bytes: 408,870 Cardinality: 40,887           
                                       35 HASH JOIN      
                                            33 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 112 Bytes: 408,870 Cardinality: 40,887
                                            34 INDEX FAST FULL SCAN INDEX AR.HZ_CUST_ACCOUNTS_N2 Cost: 122 Bytes: 408,870 Cardinality: 40,887
                             38 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                
                        40 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 2 Bytes: 45 Cardinality: 1                     
                   42 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_TL Cost: 142 Bytes: 958,770 Cardinality: 63,918

    Hi,
    You should always try using primary index fields; if that is not possible, then secondary index fields.
    Even if you cannot do anything with either of the two, then try this:
    use the less distinct fields at the top.
    In your case, you can use BUKRS, GJAHR and WERKS at the top of the WHERE condition, followed by the less distinct values.
    Even when you use a secondary index, if you have 4 fields in your secondary index and you are using only two fields from the top, then the index is useful only up to those two fields, provided they are in sequence.
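    Coming back to the original view question: since the estimated cost is identical for both statements and only the runtime differs, Real-Time SQL Monitoring (11g onwards, Tuning Pack licence required) is one way to see where the time actually goes. A minimal sketch, where the SQL_ID is looked up in V$SQL_MONITOR after running the slow statement:
    SELECT DBMS_SQLTUNE.report_sql_monitor (
              sql_id       => '&slow_sql_id',   -- substitute the slow statement's SQL_ID
              type         => 'TEXT',
              report_level => 'ALL')
      FROM dual;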

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
    1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
    Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
    The cells will be numeric, free text, drop downs, and some calculated numeric.
    Are we reaching the limits for UI performance?
    Thanks

  • Performance issue with Crystal when upgrading Oracle to 11g

    Dear,
    I am facing a performance issue with Crystal Reports and Oracle 11g, as below:
    On the report server, I have created an ODBC DSN to connect to another Oracle 11g server. Also on the report server I have created and published a folder containing all of my Crystal reports. These reports connect to the Oracle 11g server via ODBC.
    And I have a Tomcat server to run my application; in my application I refer to the report folder on the report server.
    This setup works with SQL Server and Oracle 9i or 10g, but it has a performance issue with Oracle 11g.
    Please let me know the root cause.
    Notes: the report server and Tomcat server are 32-bit Windows, but Oracle is on 64-bit Windows, and I have upgraded to DataDirect Connect ODBC version 6.1 but the issue is not resolved.
    Please help me to solve it.
    Thanks so much,
    Anh

    Hi Anh,
    Use a third party ODBC test tool now. SQL Plus will be using the Native Oracle client so you can't compare performance.
    Download our old tool called SQLCON: https://smpdl.sap-ag.de/~sapidp/012002523100006252882008E/sqlcon32.zip
    Connect and then click on the SQL tab and paste in the SQL from the report and time that test.
    I believe the issue is that the Oracle client is 64-bit; you should install the 32-bit Oracle client. If using the 64-bit client, then the client must thunk (convert 64-bit data to 32-bit data format), which is going to take more time.
    If you can use OLE DB or using the Oracle Server driver ( native driver ) should be faster. ODBC puts another layer on top of the Oracle client so it too takes time to communicate between the layers.
    Thank you
    Don

  • Performance Issues with large XML (1-1.5MB) files

    Hi,
    I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I'm having serious performance issues with XPath queries.
    When I do an XPath query against an element of SQL type VARCHAR2, I get good performance. But when I do a similar XPath query against an element of SQL type collection (VARRAY of VARCHAR2), I get very ordinary performance.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
    I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
    I guess I'm running out of options and patience as well. ;)
    I would appreciate any ideas/suggestions, please help.....
    Thanks;
    Ramakrishna Chinta

    Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
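    Coming back to the collection question: for the scalar, VARCHAR2-mapped elements the usual pattern is a function-based index on the extracted value, which is presumably what the indexes on extract() already do; a hedged sketch with a made-up table and XPath, mainly to contrast with the VARRAY case, where such an index generally cannot be used:
    CREATE INDEX po_reference_idx
       ON purchaseorder_tab p
          (extractValue (VALUE (p), '/PurchaseOrder/Reference'));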
