Performance Issues with large XML (1-1.5MB) files

Hi,
I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I'm having serious performance issues with XPath queries.
When I run an XPath query against an element with SQLType VARCHAR2, I get good performance. But when I run a similar XPath query against an element with a collection SQLType (VARRAY of VARCHAR2), performance is very poor.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I see no performance gain. I have also tried all the storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline storage, etc., and all of them gave me the same bad performance.
I even tried creating XMLType views based on XPath queries, but the performance didn't improve much. A sketch of the two query shapes is shown below.
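For reference, the scalar case that performs well maps to a function-based index along these lines (a minimal sketch, assuming a 10g-style XMLType table XML_TAB and illustrative XPaths; on 9i, VALUE(p) plays the role of OBJECT_VALUE). It also shows why the collection case is harder: extractValue() only works on singleton nodes, so queries against collection elements typically fall back to existsNode() and cannot use such an index.

    -- Hypothetical scalar element: extractValue() returns exactly one value,
    -- so a function-based index can back the XPath predicate.
    CREATE INDEX xml_ref_idx ON xml_tab p
       (extractValue (OBJECT_VALUE, '/Document/Reference'));

    -- Collection element: extractValue() raises an error on multi-valued
    -- nodes, so the predicate is usually written with existsNode(), which
    -- the index above cannot serve.
    SELECT COUNT (*)
      FROM xml_tab p
     WHERE existsNode (OBJECT_VALUE, '/Document/Items/Item[Name="X"]') = 1;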
I guess I'm running out of options, and patience as well. ;)
I would appreciate any ideas or suggestions. Please help!
Thanks;
Ramakrishna Chinta

Are you having similar symptoms to mine? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

Similar Messages

  • Pivot - Performance Issue with large dataset

    Hello,
    Database version: Oracle 10.2.0.4 on Linux
    I'm using a function to return a pivot query depending on an input "RUN_ID" value.
    For example, I consider two different "RUN_ID" values (e.g. 119 and 120) with exactly the same dataset.
    I have a performance issue when I run the resulting query with "RUN_ID"=120.
    Pivot:
    SELECT   MAX (a.plate_index), MAX (a.plate_name), MAX (a.int_well_id),
             MAX (a.row_index_alpha), MAX (a.column_index), MAX (a.is_valid),
             MAX (a.well_type_id), MAX (a.read_index), MAX (a.run_id),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC190', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC304050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC306050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC30050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC3011050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC104050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC106050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC10050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC1011050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC204050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC206050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC20050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC2011050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC80050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC70050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'RAW0', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'RAW5030', a.this_value,
                          NULL)),
             MAX (a.dose), MAX (a.unit), MAX (a.int_plate_id), MAX (a.run_name)
        FROM vw_well_data a
       WHERE a.run_id = :app_run_id
    GROUP BY a.int_well_id, a.read_index
    Run the query :
    SELECT Sql_FullText,(cpu_time/100000) "Cpu Time (s)",
                    (elapsed_time/1000000) "Elapsed time (s)",
                    fetches,buffer_gets,disk_reads,executions
    FROM v$sqlarea
    WHERE Parsing_Schema_Name ='SCHEMA';
    With results :
    SQL_FULLTEXT     Cpu Time (s)     Elapsed time (s)     FETCHES     BUFFER_GETS     DISK_READS     EXECUTIONS
    query1 (RUN_ID=119)      22.15857     3.589822     1     2216     354     1
    query2 (RUN_ID=120)      1885.16959     321.974332     3     7685410     368     3
    Explain Plan for RUN_ID 119:
    PLAN_TABLE_OUTPUT
    Plan hash value: 3979963427
    | Id  | Operation                          | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                      |   261 | 98397 |   434   (2)| 00:00:06 |
    |   1 |  HASH GROUP BY                     |                      |   261 | 98397 |   434   (2)| 00:00:06 |
    |   2 |   VIEW                             | VW_WELL_DATA         |   261 | 98397 |   433   (2)| 00:00:06 |
    |   3 |    UNION-ALL                       |                      |       |       |            |          |
    |*  4 |     HASH JOIN                      |                      |   252 | 21168 |   312   (2)| 00:00:04 |
    |   5 |      NESTED LOOPS                  |                      |   249 | 15687 |   112   (2)| 00:00:02 |
    |*  6 |       HASH JOIN                    |                      |   249 | 14442 |   112   (2)| 00:00:02 |
    |   7 |        TABLE ACCESS BY INDEX ROWID | PLATE                |    29 |   464 |     2   (0)| 00:00:01 |
    |*  8 |         INDEX RANGE SCAN           | IDX_PLATE_RUN_ID     |    29 |       |     1   (0)| 00:00:01 |
    |   9 |        NESTED LOOPS                |                      | 13286 |   544K|   109   (1)| 00:00:02 |
    |  10 |         TABLE ACCESS BY INDEX ROWID| RUN                  |     1 |    11 |     1   (0)| 00:00:01 |
    |* 11 |          INDEX UNIQUE SCAN         | PK_RUN               |     1 |       |     0   (0)| 00:00:01 |
    |  12 |         TABLE ACCESS BY INDEX ROWID| WELL                 | 13286 |   402K|   108   (1)| 00:00:02 |
    |* 13 |          INDEX RANGE SCAN          | IDX_WELL_RUN_ID      | 13286 |       |    46   (0)| 00:00:01 |
    |* 14 |       INDEX UNIQUE SCAN            | PK_WELL_TYPE         |     1 |     5 |     0   (0)| 00:00:01 |
    |  15 |      TABLE ACCESS BY INDEX ROWID   | WELL_RAW_DATA        | 26361 |   540K|   199   (2)| 00:00:03 |
    |* 16 |       INDEX RANGE SCAN             | IDX_WELL_RAW_RUN_ID  | 26361 |       |    92   (2)| 00:00:02 |
    |  17 |     NESTED LOOPS                   |                      |     9 |   891 |   121   (2)| 00:00:02 |
    |* 18 |      HASH JOIN                     |                      |     9 |   846 |   121   (2)| 00:00:02 |
    |* 19 |       HASH JOIN                    |                      |   249 | 14442 |   112   (2)| 00:00:02 |
    |  20 |        TABLE ACCESS BY INDEX ROWID | PLATE                |    29 |   464 |     2   (0)| 00:00:01 |
    |* 21 |         INDEX RANGE SCAN           | IDX_PLATE_RUN_ID     |    29 |       |     1   (0)| 00:00:01 |
    |  22 |        NESTED LOOPS                |                      | 13286 |   544K|   109   (1)| 00:00:02 |
    |  23 |         TABLE ACCESS BY INDEX ROWID| RUN                  |     1 |    11 |     1   (0)| 00:00:01 |
    |* 24 |          INDEX UNIQUE SCAN         | PK_RUN               |     1 |       |     0   (0)| 00:00:01 |
    |  25 |         TABLE ACCESS BY INDEX ROWID| WELL                 | 13286 |   402K|   108   (1)| 00:00:02 |
    |* 26 |          INDEX RANGE SCAN          | IDX_WELL_RUN_ID      | 13286 |       |    46   (0)| 00:00:01 |
    |  27 |       TABLE ACCESS BY INDEX ROWID  | WELL_CALC_DATA       |   490 | 17640 |     9   (0)| 00:00:01 |
    |* 28 |        INDEX RANGE SCAN            | IDX_WELL_CALC_RUN_ID |   490 |       |     4   (0)| 00:00:01 |
    |* 29 |      INDEX UNIQUE SCAN             | PK_WELL_TYPE         |     1 |     5 |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("WELL_RAW_DATA"."RUN_ID"="WELL"."RUN_ID" AND
                  "WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
       6 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
       8 - access("PLATE"."RUN_ID"=119)
      11 - access("RUN"."RUN_ID"=119)
      13 - access("WELL"."RUN_ID"=119)
      14 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      16 - access("WELL_RAW_DATA"."RUN_ID"=119)
      18 - access("WELL"."RUN_ID"="WELL_CALC_DATA"."RUN_ID" AND
                  "WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
      19 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      21 - access("PLATE"."RUN_ID"=119)
      24 - access("RUN"."RUN_ID"=119)
      26 - access("WELL"."RUN_ID"=119)
      28 - access("WELL_CALC_DATA"."RUN_ID"=119)
      29 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
    Explain Plan for RUN_ID 120:
    PLAN_TABLE_OUTPUT
    Plan hash value: 599334230
    | Id  | Operation                           | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                           |     2 |   754 |    24   (5)| 00:00:01 |
    |   1 |  HASH GROUP BY                      |                           |     2 |   754 |    24   (5)| 00:00:01 |
    |   2 |   VIEW                              | VW_WELL_DATA              |     2 |   754 |    23   (0)| 00:00:01 |
    |   3 |    UNION-ALL                        |                           |       |       |            |          |
    |*  4 |     TABLE ACCESS BY INDEX ROWID     | WELL_RAW_DATA             |     1 |    21 |     3   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS                   |                           |     1 |    84 |     9   (0)| 00:00:01 |
    |   6 |       NESTED LOOPS                  |                           |     1 |    63 |     6   (0)| 00:00:01 |
    |   7 |        NESTED LOOPS                 |                           |     1 |    58 |     6   (0)| 00:00:01 |
    |   8 |         NESTED LOOPS                |                           |     1 |    27 |     3   (0)| 00:00:01 |
    |   9 |          TABLE ACCESS BY INDEX ROWID| RUN                       |     1 |    11 |     1   (0)| 00:00:01 |
    |* 10 |           INDEX UNIQUE SCAN         | PK_RUN                    |     1 |       |     0   (0)| 00:00:01 |
    |  11 |          TABLE ACCESS BY INDEX ROWID| PLATE                     |     1 |    16 |     2   (0)| 00:00:01 |
    |* 12 |           INDEX RANGE SCAN          | IDX_PLATE_RUN_ID          |     1 |       |     1   (0)| 00:00:01 |
    |* 13 |         TABLE ACCESS BY INDEX ROWID | WELL                      |     1 |    31 |     3   (0)| 00:00:01 |
    |* 14 |          INDEX RANGE SCAN           | IDX_WELL_RUN_ID           |    59 |       |     2   (0)| 00:00:01 |
    |* 15 |        INDEX UNIQUE SCAN            | PK_WELL_TYPE              |     1 |     5 |     0   (0)| 00:00:01 |
    |* 16 |       INDEX RANGE SCAN              | IDX_WELL_RAW_DATA_WELL_ID |     2 |       |     2   (0)| 00:00:01 |
    |* 17 |     TABLE ACCESS BY INDEX ROWID     | WELL_CALC_DATA            |     1 |    36 |     8   (0)| 00:00:01 |
    |  18 |      NESTED LOOPS                   |                           |     1 |    99 |    14   (0)| 00:00:01 |
    |  19 |       NESTED LOOPS                  |                           |     1 |    63 |     6   (0)| 00:00:01 |
    |  20 |        NESTED LOOPS                 |                           |     1 |    58 |     6   (0)| 00:00:01 |
    |  21 |         NESTED LOOPS                |                           |     1 |    27 |     3   (0)| 00:00:01 |
    |  22 |          TABLE ACCESS BY INDEX ROWID| RUN                       |     1 |    11 |     1   (0)| 00:00:01 |
    |* 23 |           INDEX UNIQUE SCAN         | PK_RUN                    |     1 |       |     0   (0)| 00:00:01 |
    |  24 |          TABLE ACCESS BY INDEX ROWID| PLATE                     |     1 |    16 |     2   (0)| 00:00:01 |
    |* 25 |           INDEX RANGE SCAN          | IDX_PLATE_RUN_ID          |     1 |       |     1   (0)| 00:00:01 |
    |* 26 |         TABLE ACCESS BY INDEX ROWID | WELL                      |     1 |    31 |     3   (0)| 00:00:01 |
    |* 27 |          INDEX RANGE SCAN           | IDX_WELL_RUN_ID           |    59 |       |     2   (0)| 00:00:01 |
    |* 28 |        INDEX UNIQUE SCAN            | PK_WELL_TYPE              |     1 |     5 |     0   (0)| 00:00:01 |
    |* 29 |       INDEX RANGE SCAN              | IDX_WELL_CALC_RUN_ID      |   486 |       |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("WELL_RAW_DATA"."RUN_ID"=120)
      10 - access("RUN"."RUN_ID"=120)
      12 - access("PLATE"."RUN_ID"=120)
      13 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      14 - access("WELL"."RUN_ID"=120)
      15 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      16 - access("WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
      17 - filter("WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
      23 - access("RUN"."RUN_ID"=120)
      25 - access("PLATE"."RUN_ID"=120)
      26 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      27 - access("WELL"."RUN_ID"=120)
      28 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      29 - access("WELL_CALC_DATA"."RUN_ID"=120)I need some advice to understand the issue and to improve the performance.
    Thanks,
    Grégory

    Hello,
    Thanks for your response.
    Stats were computed recently with the DBMS_STATS package (case 2), and we have histograms on the RUN_ID columns.
    I tried to use the deprecated "analyze" method (case 1) and obtained better results!
    DECLARE
       -- Get tables used in the view vw_well_data --
       CURSOR c1
       IS
          SELECT table_name, last_analyzed
            FROM user_tables
           WHERE table_name LIKE 'WELL%';
    BEGIN
       FOR r1 IN c1
       LOOP
          -- Case 1 : Analyze method : Perf is good --
          EXECUTE IMMEDIATE    'analyze table '
                              || r1.table_name
                              || ' compute statistics ';
          -- Case 2 : DBMS_STATS --
          DBMS_STATS.gather_table_stats ('SCHEMA', r1.table_name);
       END LOOP;
    END;
    /

    The explain plans are the same as before.
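    For reference, gathering an explicit histogram with DBMS_STATS would look like this (a minimal sketch; the table name and SIZE value are illustrative, and CASCADE refreshes the index statistics at the same time):

        BEGIN
           -- Gather table stats with a histogram on RUN_ID so the optimizer
           -- can see the skew between runs such as 119 and 120.
           DBMS_STATS.gather_table_stats (
              ownname    => 'SCHEMA',
              tabname    => 'WELL_CALC_DATA',
              method_opt => 'FOR COLUMNS RUN_ID SIZE 254',
              cascade    => TRUE);
        END;
        /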
    Any explanations, suggestions ?
    Thanks,
    Gregory

  • Performance issues with FDK in large XML documents

    In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
    The documents are about 3-8 MBytes in size. Fortmatted they cover 150-250 pages.
    When importing such an XML document I do some extensive "post-processing" using FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK functions calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete one single function call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM. There the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all the calls that are lagging to earlier events.
    Does anybody have a clue why such delays happen, and possibly can make a suggestion, how to solve this issue? Thank you in advance.
    PS: I already thought of splitting such a document in smaller pieces by using the FrameMaker book function. But I don't think, the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).

    FP_ApplyFormatRules sounds really good--I'll give it a try on Monday. Wonder how I could miss it, as I already tried FP_Reformatting and FP_Displaying at no avail?! By the way, what is actually meant with FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds to do), or is that one another of Lynne's well-kept secrets?
    Thank's for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all necessary) structural changes using XSLT pre-processing, and processing went down from 8 hours(!) to 1 hour--Yeappie! I was also playing with the idea of writing a wrapper to F_ApiNewElementInHierarchy() which actually pastes an appropriate element created in a small flow on the reference pages at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident to get even the complicated stuff under control, which cannot be handled by the XSLT pre-processing, as it is based on the actual formatting of the document at run-time and cannot be anticipated in pre-processing.
    --Franz

  • MII Performance with Large XML

    HI,
    We are facing a performance issue while parsing XML of a large size, around 50 MB.
    Initially XMII was crashing due to an "out of heap memory" error. However, with some changes the problem has been removed. But
    now we are facing an issue with the time the payload takes to execute. It is taking more than half an hour to execute the transaction.
    The solutions tried so far have decreased the time to half an hour; earlier it was taking more than one and a half hours to process.
    We have tried parallel processing by making asynchronous calls to the transactions.
    Is there any way to further reduce the processing time?
    Further, is it possible to find out which parser MII uses internally for XML: SAX or DOM?
    Thanks
    Amit

    Hi Amit,
    Just some tips to improve the performance of BLS:
    1. Use XPath whenever possible.
    2. Remove unnecessary repeaters in BLS.
    3. Check the link on optimizing BLS performance:
    Optimizing BLS Performance for XML Handling in SAP MII
    If you are storing the data in a database, just pass the whole XML to the query and insert the data using a bulk insert. A sketch of that idea follows.
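    For the database-side part, the idea is to shred the whole XML once and insert set-based instead of row by row (a minimal sketch, assuming an Oracle backend and an illustrative payload shape and target table):

        -- Shred the whole XML payload into rows once, then insert them in a
        -- single set-based statement instead of row-by-row calls.
        INSERT INTO target_table (item_id, item_value)
        SELECT x.item_id, x.item_value
          FROM XMLTABLE ('/Rows/Row'
                  PASSING XMLTYPE (:whole_xml)
                  COLUMNS item_id    NUMBER         PATH 'Id',
                          item_value VARCHAR2 (200) PATH 'Value') x;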
    Thanks
    Anshul

  • Performance issues with Homesharing?

    I have a Time Capsule as the base station for my wireless network, then 2 Airport Expresses set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing is taking additional bandwidth, so here is the list of issues:
    1) With nothing else running, when trying to play a movie via Home Sharing on an iPad 2 from my iMac, it stops and I have to keep pressing the play button over and over again. I typically see that the iPad tries to download part of the movie first and then starts playing so that it deals with the bandwidth, but in many cases it doesn't.
    2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV I can see my computer library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
    3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an Airport Express. At times I lose the connection to the speakers.
    I've complained about Wifi's instability, but here I tried to keep everything with Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood was much more stable.
    Has anyone some suggestions?

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the OCB optimiser is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
    | Operation                                 | Name                           | Rows | Bytes | Cost | PStart | PStop |
    | UPDATE STATEMENT (Optimizer Mode=CHOOSE)  |                                |    1 |       |  249 |        |       |
    |  UPDATE                                   | SIG.SIG_QUA_IMG_LT             |      |       |      |        |       |
    |   NESTED LOOPS SEMI                       |                                |    1 |   266 |  249 |        |       |
    |    PARTITION RANGE ALL                    |                                |      |       |      |      1 |     9 |
    |     TABLE ACCESS FULL                     | SIG.SIG_QUA_IMG_LT             |    1 |   259 |    2 |      1 |     9 |
    |    VIEW                                   | SYS.VW_NSO_1                   |    1 |     7 |  247 |        |       |
    |     NESTED LOOPS                          |                                |    1 |   739 |  247 |        |       |
    |      NESTED LOOPS                         |                                |    1 |   677 |  247 |        |       |
    |       NESTED LOOPS                        |                                |    1 |   412 |  246 |        |       |
    |        NESTED LOOPS                       |                                |    1 |   114 |  244 |        |       |
    |         INDEX RANGE SCAN                  | WMSYS.MODIFIED_TABLES_PK       |    1 |    62 |    2 |        |       |
    |         INDEX RANGE SCAN                  | SIG.QIM_PK                     |    1 |    52 |  243 |        |       |
    |        TABLE ACCESS BY GLOBAL INDEX ROWID | SIG.SIG_QUA_IMG_LT             |    1 |   298 |    2 |  ROWID | ROW L |
    |         INDEX RANGE SCAN                  | SIG.SIG_QUA_IMG_PKI$           |    1 |       |    1 |        |       |
    |       INDEX RANGE SCAN                    | WMSYS.WM$NEXTVER_TABLE_NV_INDX |    1 |   265 |    1 |        |       |
    |      INDEX UNIQUE SCAN                    | WMSYS.MODIFIED_TABLES_PK       |    1 |    62 |      |        |       |
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
       SET z1.nextver =
              SYS.ltutil.subsversion
                 (z1.nextver,
                  SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                     'SIG.SIG_QUA_IMG',
                                                     'NpCyPCX3dkOAHSuBMjGioQ==',
                                                     4574,
                                                     4575),
                  4574)
     WHERE z1.ROWID IN (
              (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
                          INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
                          INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
                      t2.ROWID
                 FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                              UNIQUE VERSION
                         FROM wmsys.wm$modified_tables
                        WHERE table_name = 'SIG.SIG_QUA_IMG'
                          AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                          AND VERSION > 4574
                          AND VERSION <= 4575) j1,
                      sig.sig_qua_img_lt t1,
                      sig.sig_qua_img_lt t2,
                      wmsys.wm$nextver_table j2,
                      (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                              UNIQUE VERSION
                         FROM wmsys.wm$modified_tables
                        WHERE table_name = 'SIG.SIG_QUA_IMG'
                          AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                          AND VERSION > 4574
                          AND VERSION <= 4575) j3
                WHERE t1.VERSION = j1.VERSION
                  AND t1.ima_id = t2.ima_id
                  AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
                  AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
                  AND t2.nextver != '-1'
                  AND t2.nextver = j2.next_vers
                  AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version-enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table, depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about the table.
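    For reference, a statistics refresh on a partitioned, version-enabled table might look like this (a minimal sketch; the owner and table name are taken from the plan above, the other settings are illustrative):

        BEGIN
           -- Gather statistics at table and partition level so merge-time
           -- plans reflect current data volumes.
           DBMS_STATS.gather_table_stats (
              ownname     => 'SIG',
              tabname     => 'SIG_QUA_IMG_LT',
              granularity => 'ALL',
              cascade     => TRUE);
        END;
        /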
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben

  • Performance issues with respect to schema registration, select & insert queries

    I am facing performance issues with schema registration and with select and insert queries on version 10.2.0.3. It is taking around 45 minutes to register a schema, and around 5 minutes to insert a single document into XML DB, whereas it took less than a minute to insert a single document into XML DB on version 9.2.0.6. I would like to understand the issue and how to resolve it. Please help me out on this, as it is very urgent for me.
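    For context, the registration being timed is done along these lines (a minimal sketch; the schema URL, directory object, file name, and character-set value are illustrative, and options such as GENTYPES/GENTABLES affect both registration time and later DML performance):

        BEGIN
           -- Register the schema; object-relational types and tables are
           -- generated according to the GEN* options.
           DBMS_XMLSCHEMA.registerSchema (
              schemaurl => 'http://example.com/doc.xsd',
              schemadoc => bfilename ('XSD_DIR', 'doc.xsd'),
              csid      => nls_charset_id ('AL32UTF8'),
              gentypes  => TRUE,
              gentables => TRUE);
        END;
        /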

    Since it appears that this is an XML DB specific question, you're probably better off posting in the XML DB. The folks over there have much more experience with the ins and outs of that particular product.
    Justin

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run
    analyze and DBMS_STATS and checked the database environment. What else can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks,
    Scott

    Hi,
    you should analyze the db after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000).
    Drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct load?
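    Putting a few of those tips together, a load might look like this (a minimal sketch; the object names are illustrative, and the APPEND hint requires a direct-path-compatible target):

        -- Cache sequence values so PK generation is not a bottleneck.
        ALTER SEQUENCE order_seq CACHE 10000;

        -- Disable what slows the load down, then direct-path insert.
        ALTER TABLE orders DISABLE ALL TRIGGERS;

        INSERT /*+ APPEND */ INTO orders
        SELECT * FROM staging_orders;
        COMMIT;

        -- Re-enable triggers, rebuild any dropped indexes, then analyze.
        ALTER TABLE orders ENABLE ALL TRIGGERS;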
    Dim

  • Performance Issue with BSIS(open accounting items)

    Hey All,
    I am having a serious performance issue with an accrual report which gets all open GL items, and I need some tips for optimization.
    The main issue is that I am accessing large tables like BSIS, BSEG, BSAS etc. without proper indexes, and that I am dealing with huge amounts of data.
    The select itself takes a long time, and since I have so much data, the overall execution is slow too.
    The select which concerns me the most is:
      SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
                 INTO TABLE i_bsis
                  FROM bsis
                  WHERE bukrs = '1000'
                  AND hkont in r_hkont   
                  AND budat <= p_lcdate
                  AND augdt = 0
                  AND augbl = space
                  AND gsber = c_ZRL1   
                  AND gjahr BETWEEN l_gjahr2 AND l_gjahr
                  AND ( blart = c_re      "Invoice
                  OR    blart = c_we      "Goods receipt
                  OR    blart = c_zc      "Invoice Cancels
                  OR    blart = c_kp ).   "Accounting offset
    I have seen other related threads, but was not that helpful.
    We already have a secondary index on BUKRS, HKONT and BUDAT, and I have checked in ST05 that it is used. But in spite of that, it takes more than 15 hrs to complete (maybe because of the huge data volume).
    Any Input is highly appreciated.
    Thanks

    Thank you Thomas for your inputs:
    You said that R_HKONT contains several ranges of account numbers. If these ranges cover a significant
    portion of the overall existing account numbers, then there is no really quick access possible via the
    BSIS primary key.
    Unfortunately R_HKONT contains all account numbers.
    As Rob said, your index on HKONT and BUDAT does not help much, since you are selecting "<=" on
    BUDAT. No chance of narrowing down that range?
    Will look into this.
    What about GSBER? Does the value in c_ZRL1 provide a rather small subset of the overall values? Then
    an index on BUKRS and GSBER might be helpful.
    ZRL1 does provide a decent selection. But I don't know if one more index is a good idea for overall
    system performance.
    I assume that the four document types are not very selective, so it probably does not pay off to
    investigate selecting on BKPF (there is an index involving BLART) and joining BSIS for the additional
    information. You still might want to look into it though.
    I did try to investigate this option too. Based on other threads related to BSIS and Rob's suggestion in
    those threads, I tried this:
    SELECT bukrs belnr gjahr blart budat
      FROM bkpf INTO TABLE bkpf_l
            WHERE bukrs = c_pepsico
            AND bstat IN (' ', 'A', 'B', 'D', 'M', 'S', 'V', 'W', 'Z')
            AND blart IN ('RE', 'WE', 'ZC', 'KP')
            AND gjahr BETWEEN l_gjahr2 AND l_gjahr
            AND budat <= p_lcdate.
    SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
               FROM bsis INTO TABLE i_bsis FOR ALL ENTRIES IN bkpf_l
                         WHERE bukrs = bkpf_l-bukrs
                          AND  hkont IN r_hkont
                          AND  budat = bkpf_l-budat
                          AND  augdt = 0
                          AND  augbl = space
                          AND  gjahr = bkpf_l-gjahr
                          AND  belnr = bkpf_l-belnr
                          AND  blart = bkpf_l-blart
                          AND  gsber = c_zrl1.
    This improves the select on BSIS a lot, but the first select on BKPF kills it. Not sure if this would help
    improve performance.
    Also, I was wondering whether it would help to refresh the table statistics through DB20. The last refresh
    was done 7 months back. How frequently should we do this? Will it help?

  • Performance issue with Jdeveloper

    Hi Guys,
    I am experiencing a strange performance issue with JDeveloper 10.1.3.3.0.4157. There are many other threads regarding performance issues in this forum, but the problem I have is a little bit different.
    I have two computers: one is an Athlon 3200+ with Vista and the other is a P4 dual-core 6400 with XP (Service Pack 2). Both of them have 2 GB of memory.
    I am running the same simple project on both computers, but only the one with Vista has the problem. The problem is very similar to the problem mentioned in this thread:
    Re: IDE has become extremely slow?
    But it's much worse. It happens only on JSF pages. Basically, any operations on the JSF pages are very slow. Loading the page, changing the attributes of a button in the source editor, or even clicking the items in the design view take forever to run.
    The first weird thing is that it may use 100% CPU, but it never recovers, which means the 100% CPU usage never stops, or when it stops, JDeveloper stops responding.
    The second weird thing is that the project is not big. Actually, it's very small. The problem started to happen last week. There were no big changes during that period. The only thing I can say is that we created two more JSF pages.
    The third weird thing is that the same problem never happened on the P4+XP box. When I open the project on the P4+XP box, it's always fast and there are no CPU spikes.
    Any advice is welcome!
    Thanks,
    Steven

    Hi Guys,
    I re-made a simple test project for this problem, and now I can always reproduce the problem in JDeveloper on both systems (XP & Vista). Every time I open this jspx file in the source editor and try to scroll up/down the source file, or manually delete an attribute, JDeveloper hangs and the CPU usage is 0%.
    Here is the content of the test file:
    <?xml version='1.0' encoding='windows-1252'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:af="http://xmlns.oracle.com/adf/faces"
              xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
      <jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
                  doctype-system="http://www.w3.org/TR/html4/loose.dtd"
                  doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
      <jsp:directive.page contentType="text/html;charset=windows-1252"/>
      <f:view>
        <afh:html binding="#{backing_streettypedetail.html1}" id="html1">
          <afh:head title="streettypedetail"
                    binding="#{backing_streettypedetail.head1}" id="head1">
            <meta http-equiv="Content-Type"
                  content="text/html; charset=windows-1252"/>
          </afh:head>
          <afh:body binding="#{backing_streettypedetail.body1}" id="body1">
            <af:messages binding="#{backing_streettypedetail.messages1}"
                         id="messages1"/>
            <h:form binding="#{backing_streettypedetail.form1}" id="form1">
              <af:panelForm binding="#{backing_streettypedetail.panelForm1}"
                            id="panelForm1">
                <af:inputText value="#{bindings.streetTypeID.inputValue}"
                              label="#{bindings.streetTypeID.label}"
                              required="#{bindings.streetTypeID.mandatory}"
                              columns="#{bindings.streetTypeID.displayWidth}"
                              binding="#{backing_streettypedetail.inputText1}"
                              id="inputText1">
                  <af:validator binding="#{bindings.streetTypeID.validator}"/>
                </af:inputText>
                <af:inputText value="#{bindings.description.inputValue}"
                              label="#{bindings.description.label}"
                              required="#{bindings.description.mandatory}"
                              columns="#{bindings.description.displayWidth}"
                              binding="#{backing_streettypedetail.inputText2}"
                              id="inputText2">
                  <af:validator binding="#{bindings.description.validator}"/>
                </af:inputText>
                <af:inputText value="#{bindings.abbr.inputValue}"
                              label="#{bindings.abbr.label}"
                              required="#{bindings.abbr.mandatory}"
                              columns="#{bindings.abbr.displayWidth}"
                              binding="#{backing_streettypedetail.inputText3}"
                              id="inputText3">
                  <af:validator binding="#{bindings.abbr.validator}"/>
                </af:inputText>
                <f:facet name="footer">
                  <h:panelGroup binding="#{backing_streettypedetail.panelGroup1}"
                                id="panelGroup1">
                    <af:commandButton text="Save"
                                      binding="#{backing_streettypedetail.saveButton}"
                                      id="saveButton"
                                      actionListener="#{bindings.mergeEntity.execute}"
                                      action="#{userState.retrieveReturnNavigationRule}"
                                      disabled="#{!bindings.mergeEntity.enabled}"
                                      partialSubmit="false">
                      <af:setActionListener from="#{true}"
                                            to="#{userState.refresh}"/>
                    </af:commandButton>
                    <af:commandButton text="Cancel"
                                      binding="#{backing_streettypedetail.cancelButton}"
                                      action="#{userState.retrieveReturnNavigationRule}"
                                      id="cancelButton">
                      <af:setActionListener from="#{false}"
                                            to="#{userState.refresh}"/>
                    </af:commandButton>
                  </h:panelGroup>
                </f:facet>
              </af:panelForm>
            </h:form>
          </afh:body>
        </afh:html>
      </f:view>
      <!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_streettypedetail-->
    </jsp:root>
    Can anybody take a look at the file and let me know what's wrong with it?
    Thanks in advance.
    Steven

  • Oracle RDF / Joseki : issue with large literals

    Hi,
    I have been using Joseki to query an Oracle RDF model. There seems to be an issue with large literals (according to a few unreliable tests, I would say this concerns literals around and over 4000 chars).
    Here are the two potential behaviours :
    First case:
    If the result contains several lines, one of which contains an overly large literal, there is NO exception on the server side, but the resulting XML is incomplete.
    It misses the "line" containing the large literal, and the XML stops there (which means that it also misses the closing </results> and </sparql>). In my case, I am using the results through Jena's sparqlService, which means I get this message:
    XMLStreamException: Unexpected EOF; was expecting a close tag for element <results>
    at [row,col {unknown-source}]: [31,0]
    Second case:
    If the query only returns one line which contains an overly large literal, the client receives a simple "HttpException: 500 Internal Server Error"
    Here is the error message from my server:
    INFO [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] (SPARQL.java:165) - Throwable: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
    java.lang.ClassCastException: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
    at oracle.spatial.rdf.client.jena.OracleSemIterator.getNodesFromResultSet(OracleSemIterator.java:579)
    at oracle.spatial.rdf.client.jena.OracleSemIterator.next(OracleSemIterator.java:445)
    at oracle.spatial.rdf.client.jena.OracleLeanQueryIter.moveToNextBinding(OracleLeanQueryIter.java:135)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.moveToNextBinding(QueryIterConvert.java:56)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.moveToNextBinding(QueryIterRepeatApply.java:76)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:54)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:50)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.ResultSetStream.hasNext(ResultSetStream.java:62)
    at org.joseki.processors.SPARQL.executeQuery(SPARQL.java:309)
    at org.joseki.processors.SPARQL.execQueryWorker(SPARQL.java:288)
    at org.joseki.processors.SPARQL.execQueryProtected(SPARQL.java:126)
    at org.joseki.processors.SPARQL.execOperation(SPARQL.java:120)
    at org.joseki.processors.ProcessorBase.exec(ProcessorBase.java:112)
    at org.joseki.ServiceRequest.exec(ServiceRequest.java:36)
    at org.joseki.Dispatcher.dispatch(Dispatcher.java:59)
    at org.joseki.http.Servlet.doCommon(Servlet.java:177)
    at org.joseki.http.Servlet.doGet(Servlet.java:138)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Would there be any fix / workaround ?
    Please let me know if you need further information / tests.
    Thanks,
    Regards,
    Julien

    Thanks for your reply.
    While trying to build a small test case, I found out why there were discrepancies between the two cases I described.
    Indeed, usually the two cases return the same thing (no exception on the server side, but an incomplete resulting XML).
    The exception I described happened when I tried something else. Since I saw that the issues were coming from long literals, I used fn:string-length (ARQ) to figure out how long they were.
    The test case resulting in the CLOB-cast exception is:
    - a too-large literal
    - only one result "line" containing this literal
    - usage of fn:string-length (which does not change the behaviour in the other cases, i.e. no long literals and/or several lines)
    Anyway, you will receive the other test cases shortly.
    Thanks,
    Regards,
    Julien

  • Performance Issue with SXMB_MONI

    Hi All,
    I have a typical performance issue with SXMB_MONI: when I trigger this T-code, it takes around 20-24 hrs to execute.
    Here I have found some tables which actually store these processed XML messages:
    SXMSPFADDRESS
    SXMSPFRAWH
    RSXMB_REMOTE_SERVICE
    SXMSPFAGG
    SXMSCONFVL
    SXMSPMAST
    SXMSPEMAS, SXMSPERROR, SXMSPMAST & SXMSPVERS.
    SXMSPMAST, SXMSPMAST2, SXMSCLUR, SXMSCLUR2,
    SXMSCLUP, SMXSLUP2, SXMSPFRAWH,
    Here I want to improve the performance of SXMB_MONI. Firstly, I want to know from which tables SXMB_MONI fetches its data, and whether it is a single table or multiple tables.
    Please also suggest any technique which can decrease the latency in executing SXMB_MONI.
    Regards,
    Vijay N

    Hi,
    Periodically you need to archive the XI messages; that allows you to maintain a sufficient performance level.
    Create archive jobs in SXMB_ADM to archive data which is 15 days old from XI-related growing tables like SXMSCLUR, SXMSPEMAS, SXMSPHIST, SXMSPMAST, SXMSPVERS, SXMSPFRAW, SWWWIHEAD. This archive job creates archive files at the OS level.
    For XI tables refer
    /people/gourav.khare2/blog/2007/12/12/interesting-abap-tables-in-xi-150-part-i
    SXMSPMAST, SXMSCLUP, SXMSPCLUR
    The last two are cluster tables,
    and you won't get XML messages directly from them;
    have a look inside them.
    The classes that read this information in SXMB_MONI are ABAP classes
    (they can be seen in SE24); it is quite difficult to use them.
    You might debug SXMB_MONI or use SE30 and see all the classes
    that have been used.
    You can use value mapping if you are not looking at picking up values from the application system. It is just like the SM30 transaction. You can find the info under SAP XI -> Design and Configuration -> Configuration -> Value Mapping.
    Also see these tables:
    /SAPDMC/LSOMAP Field Mapping
    /SAPTRX/SCAOTMAP
    /SAPTRX/SCCNDMAP /SAPTRX/SCEVTMAP
    /SAPTRX/SCFUNMAP /SAPTRX/SCSOMAP
    /people/udo.martens/blog/2006/02/16/own-logging-of-xi-messages
    message-mappings: stored in which database-table?
    sxmb_moni, table sxmspmast, Messages with ICON_LED_RED, report RSXMB_SELECT
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/a1b46c4c686341e10000000a114a6b/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/ef/45393c3eb3036be10000000a11402f/frameset.htm
    Thanks
    Swarup

  • Performance Issue with RF Scanners after SAP Enhancement Pack 5 Upgrade

    We are on component version SAP ECC 6.0, and recently upgraded to Enhancement Pack 5. I believe we are on NetWeaver 7.10, and we are using RF scanners in one plant that is Warehouse Managed. Evidently when we moved to EHP5, the Web SAP Console went away and we are left with ITS Mobile. This has created several issues and continues to be a performance barrier for the forklift drivers in the warehouse. We see there is a high use of JavaScript, and the processors can't handle this. When we log in to tcode LM00 on a laptop or desktop computer, there are no performance issues. When we log in to tcode LM00 with the RF scanners, the system is very slow. It might take 30 seconds to confirm one item on a WM Transfer Order.
    1.)     Can we revert back to Web SAP Console after we have upgraded to EHP5?
    2.)     What is creating the performance issues with the RF scanners now that we have switched over to SAP ITS Mobile?
    Our RF scanners are made by Intermec, but I don't think that is where the solution lies. One person in our IT Operations has spent a good deal of time configuring SAP ITS to get it to work, but still it isn't performing.

    Tom,
    I am sorry I did not see this earlier.
    I'm currently working on a very similar project with ITS mobile and the problem is to accurately determine the root cause of the problem in the least amount of time. The tool that works is found here: http://www.connectrf.com/index.php/mcm/managed-diagnostics/
    Isolating the network from the application and the device is a time-consuming process unless you have a piece of software that can trace the HTTP transaction between host and device on both the wired and wireless sides of the network. Once that is achieved (as with Connect's tool), you can begin to solve the problem.
    What I found in my project is that the amount of data traffic generated by ITS mobile can be reduced drastically, which speeds the response time of the mobile devices, especially with large number of devices in distribution centers.
    Let me know if I can answer more questions related to this topic.
    Cheers,
    Shari

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
    1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
    Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
    The cells will be numeric, free text, drop downs, and some calculated numeric.
    Are we reaching the limits for UI performance?
    Thanks

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving Performance with Pipelined Table Functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                      SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
          ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
    ALTER PACKAGE pipeline_example COMPILE;
    Edited by: Earthlink on Nov 14, 2010 9:47 AM
    Edited by: Earthlink on Nov 14, 2010 11:31 AM
    Edited by: Earthlink on Nov 14, 2010 11:32 AM
    Edited by: Earthlink on Nov 20, 2010 12:04 PM
    Edited by: Earthlink on Nov 20, 2010 12:54 PM

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on, then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
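    As a concrete starting point, an extended SQL trace of one test run of each procedure could look like this (a minimal sketch; level 8 includes wait events, and the resulting trace file is formatted with tkprof afterwards):

        -- Tag and start the trace, run both variants, then switch it off.
        ALTER SESSION SET tracefile_identifier = 'pipeline_test';
        ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

        -- ... execute with_pipeline and no_pipeline here ...

        ALTER SESSION SET EVENTS '10046 trace name context off';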

Maybe you are looking for

  • Create an image scroller with swipe&drag in Edge Animate

    Hi folks! I'm trying to create an image scroller like this one http://www.awwwards.com/demo/touchSwipe-gallery-demo.html. For the swipe, I use the wonderful jQuery plugin TouchSwipe http://labs.rampinteractive.co.uk/touchSwipe/demos/, but for the imag

  • Oracle Java 7 in 32bit chroot

    Is anyone able to launch java applets using Oracle Java 7 and Firefox in a 32bit chroot environment? Whenever I browse to any site which opens a Java applet I instantly get the following exception: java.lang.NullPointerException     at java.lang.Proc

  • With the new ipad coming out soon, why should I buy the ipad 2?

    with the new ipad coming out soon, why should I buy the ipad 2?

  • Create Partner Bank Field in the Layout

    Hi Friends, My requirement is, in the vendor Line item report(FBL1N), I want to add the Partner bank Type field in the layout which is not available. Is there any option we can create our own without the help of ABAPer. Regards, Anil

  • Identity Error with App World

    This Error is very common in South Africa but hopefully someone has come up with an answer now , I got the latest version of BB app world then out of no-where my contacts began deleting themselves, so I wiped everything including operating system and