Improve performance on specific PL/SQL code

These three queries contain an identical subquery in their WHERE clauses:
SELECT sum(num_tit_par)
INTO posicion
FROM posi_hoy_rbo
WHERE fec_pos = fecha
and enti_cli in (select cod_int_ent from enti_rbo where enti_efe = entidad);
SELECT sum(mctasald)
INTO saldo_cuenta
FROM gbinmcta_rbo
WHERE mctafech = fecha
and mctaclav in (select cod_int_ent from enti_rbo where enti_efe = entidad);
SELECT sum(tarjdisp)
INTO saldo_tarjeta
FROM gbintarj_rbo
WHERE tarjfech = fecha
and tarjclav in (select cod_int_ent from enti_rbo where enti_efe = entidad);
Is there any way, in PL/SQL, to store the result of this subquery in some kind of variable (a varray, a nested table...) and then use that variable in the WHERE clauses? If so, could someone explain it in detail?
That way, I suppose, the subquery would be executed only once and performance would improve.
Thank You Very Much

Daniel,
I happened to have this code, which I posted in the past, simplified.
You could ignore the creation of the package, and you could fold the functionality of procedure A into procedure B in the following example.
-- create a SQL type
SQL> create or replace type numTyp as table of number
     /
Type created.
SQL> create or replace package test_pkg as
       procedure B;
     end;
     /
Package created.
SQL> create or replace package body test_pkg as
       numArray numTyp := numTyp(); -- initialize

       procedure A Is -- Fills the array
       Begin
           numArray.extend(2);
           numArray(1) := 10;
           numArray(2) := 20;
       End;

       procedure B Is
       Begin
          A; -- call to procedure A
          For rec in (select empno from my_emp where deptno IN
                       (Select a.column_value  val
                          From THE ( select cast(numArray as numTyp) from dual ) a))
          loop
             dbms_output.put_line(rec.empno);
          end loop;
       end;
     end;
     /
Package body created.
SQL> exec test_pkg.B;
7782
7839
7934
7369
7876
7902
7788
7566
PL/SQL procedure successfully completed.
Not a great example, but it shows how to use SQL-type nested tables in SQL join operations. Hope it helps.
Thx,
Sri
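
Applied to Daniel's original queries, a minimal sketch (it assumes cod_int_ent is numeric, reuses the numTyp type above, and uses placeholder values for the procedure's variables; on later Oracle releases the TABLE() operator replaces the THE() syntax):
declare
  entArray numTyp;             -- SQL type created above
  posicion number;
  fecha    date   := sysdate;  -- placeholder
  entidad  number := 1;        -- placeholder
begin
  -- run the shared subquery once
  select cod_int_ent bulk collect into entArray
    from enti_rbo
   where enti_efe = entidad;
  -- reuse the collection in each of the three queries
  select sum(num_tit_par) into posicion
    from posi_hoy_rbo
   where fec_pos = fecha
     and enti_cli in (select column_value from table(entArray));
  -- ... the same pattern applies to the gbinmcta_rbo and gbintarj_rbo queries ...
end;
/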

Similar Messages

  • Performance tuning in PL/SQL code

    Hi,
I am working on already existing PL/SQL code, written by someone else, for validation and conversion of data from a temporary table to a base table. It usually has 3.5 million rows, and the procedure takes around 2.5 - 3 hrs to complete.
    Can I enhance the PL/SQL code for better performance ? or, is this OK to take so long to process these many rows?
    Thanks!
    Yogini

Can I enhance the PL/SQL code for better performance?
Probably you can enhance it.
or, is this OK to take so long to process these many rows?
It should take a few minutes, not several hours.
But please provide some more details, like your database version etc.
I suggest you TRACE the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent; you'll identify your 'problem statements' very quickly (after you or your DBA have TKPROF'ed the trace file).
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> execute your PL/SQL code here
SQL> exit
This will give you a .trc file in your udump directory on the server.
    http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
    Also this informative thread can give you more ideas:
    HOW TO: Post a SQL statement tuning request - template posting
    as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
    and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29
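For reference, a typical TKPROF invocation on the resulting trace file might look like this (the file names are hypothetical):
tkprof orcl_ora_12345.trc tuning_report.txt sys=no waits=yes sort=exeela
The sort=exeela option orders the statements by elapsed execution time, so the worst offenders appear first.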

  • Performance issue with pl/sql code

    Hi Oracle Gurus,
I am in need of your recommendations for a performance issue that I am facing in a production environment. There is a PL/SQL procedure which executes with a different elapsed time on different executions. Elapsed times are 30 minutes, 40 minutes, 65 minutes, 3 minutes, 3 seconds.
The expected elapsed time is a maximum of 3 minutes. (But sometimes it took only 3 seconds...!)
The output of all the different executions is the same, that is, deletion and insertion of 12K records into a table.
Here are the autotrace details of two different scenarios.
    Slow execution - 33.65 minutes
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                         1,712,343    1,712,342.6    41.4
    CPU Time (ms)                             1,679,689    1,679,688.6    44.7
    Executions                                        1            N/A     N/A
    Buffer Gets                              ##########  167,257,973.0    86.9
    Disk Reads                                    1,284        1,284.0     0.4
    Parse Calls                                       1            1.0     0.0
    User I/O Wait Time (ms)                       4,264            N/A     N/A
    Cluster Wait Time (ms)                        3,468            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        6            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     4            N/A     N/A
    Sharable Mem(KB)                                 85            N/A     N/A
          -------------------------------------------------------------
Fast execution: 5 seconds
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                            41,550       41,550.3     0.7
    CPU Time (ms)                                40,776       40,776.3     1.0
    Executions                                        1            N/A     N/A
    Buffer Gets                               2,995,677    2,995,677.0     4.2
    Disk Reads                                       22           22.0     0.0
    Parse Calls                                       1            1.0     0.0
    User I/O Wait Time (ms)                         162            N/A     N/A
    Cluster Wait Time (ms)                          621            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                       55            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     4            N/A     N/A
    Sharable Mem(KB)                                 85            N/A     N/A
          -------------------------------------------------------------
For security reasons, I cannot share the actual code. It's report-generating code that deletes and reloads data into tables using INSERT INTO ... SELECT statements:
Delete from table ;
cursor X to get the master data ( 98 records )
For each X loop
insert into tableA select * from tables where a= X.a and b= X.b and c=X.c ..... ;
-- 12 K records inserted on average
insert into tableB select * from tables where a= X.a and b= X.b and c=X.c ..... ;
-- 12 K records inserted on average
end loop ;
1. The select query is complex, with bind variables (the explain plan varies for each set of values).
2. I have checked the tablespace of the tables involved; it is 82% used. The DBA confirmed that this is not the reason.
3. Disk reads are high during the long executions.
4. During long-running executions, I can see a db file sequential read wait event on an index object. This index is on the table into which the data is inserted.
All I need to find out is why this code takes 3 seconds on one execution and 60 minutes on the next, on the same day.
Is there any other approach to find the root cause of this behaviour and to fix it? Kindly advise.
Thanks in advance for your help.
    Regards,
    Hari
    Edited by: BluShadow on 26-Sep-2012 08:24
edited to add {noformat}{noformat} tags.  You've been a member long enough to know to do this yourself... so please do so in future.  ({message:id=9360002})

Hariharan ST wrote:
Hi Oracle Gurus,
I am in need of your recommendations for a performance issue that I am facing in a production environment. There is a PL/SQL procedure which executes with a different elapsed time on different executions.
Please re-edit your post and add some code tags around the trace information. This would improve readability greatly and will help us to help you.
Example:
{code}
select * from dual;
{code}
Based upon your description I can imagine two things:
a) The execution plan for the select query changes frequently.
A typical reason can be out-of-date statistics.
b) Some locking / wait conflict, for example on a UK index.
Are there any other operations going on while it is slow? If another session inserts a value and your session tries to insert the same (PK/UK) value, your session will wait.
Those wait events can be recognized using standard tools like Oracle SQL Developer or Enterprise Manager while the query is slow.
Also go through the links that are in the FAQ. They tell you how to get better information for making a tuning request.
    SQL and PL/SQL FAQ
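As a quick first check of point a), a query along these lines (a sketch: it assumes access to the V$ views, and the SQL_TEXT filter is hypothetical) shows whether the statement has been run with more than one plan:
select sql_id, plan_hash_value, child_number, executions,
       round(elapsed_time/1e6) as elapsed_secs
  from v$sql
 where sql_text like 'INSERT INTO tableA%';
Several rows with different PLAN_HASH_VALUE for the same SQL_ID would point to plan instability rather than locking.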
    Edited by: Sven W. on Sep 25, 2012 6:41 PM

  • Improving Performance of Dynamic SQL in Pro*C

I am using dynamic SQL format 3 in Pro*C,
i.e. prepare sqlstmt from :sql_stmt
declare cursor c1 for sqlstmt
But the query is slow. I am also using
ORDER BY with ltrim(rtrim(colname))
and sometimes GROUP BY.
Can anyone tell me how to improve the performance of cursors in dynamic SQL, or how I can do the same in PL/SQL while simultaneously printing the returned rows to a file?

    Manoj,
    Typically, the slow performance is due to the SQL statement itself rather than the dynamic SQL execution code in Pro*C.
    Check your explain plan to see if the query performs well. I suspect that you will find your problem here.
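For the PL/SQL half of the original question, a cursor variable opened FOR a dynamic statement and written out with UTL_FILE might look like this (a sketch: the MY_DIR directory object, the file name, and the single-column select list are assumptions):
declare
  c     sys_refcursor;
  v_col varchar2(100);
  f     utl_file.file_type;
begin
  f := utl_file.fopen('MY_DIR', 'rows.txt', 'w');  -- MY_DIR is a hypothetical directory object
  open c for 'select ename from emp order by ltrim(rtrim(ename))';
  loop
    fetch c into v_col;
    exit when c%notfound;
    utl_file.put_line(f, v_col);  -- print each row to the file as it is fetched
  end loop;
  close c;
  utl_file.fclose(f);
end;
/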

  • Rewrite SQL query to improve performance

    Hello,
The queries below are very time-consuming. Could you please suggest how to improve performance for these 2 queries?
    QUERY1:
SELECT a~vbeln a~posnr a~auart a~vkorg a~vtweg a~spart
a~vkbur a~kunnr b~matnr b~kwmeng b~vrkme
b~netwr b~werks b~lgort b~vstel b~abgru b~erdat b~ernam
c~faksk c~ktext c~vdatu c~zzbrsch c~kvgr1 c~augru
INTO CORRESPONDING FIELDS OF TABLE g_t_sodata
            FROM vapma AS a INNER JOIN vbap AS b ON
            a~vbeln = b~vbeln AND a~posnr = b~posnr
            INNER JOIN vbak AS c ON a~vbeln = c~vbeln
              WHERE a~vkorg IN so_vkorg
              AND a~vtweg IN so_vtweg
              AND a~vkbur IN so_vkbur
              AND a~auart IN so_auart
              AND a~vbeln IN so_vbeln
              AND b~abgru IN so_abgru
              AND c~faksk IN so_faksk
              AND ( b~erdat GT g_f_zenkai_date OR
        ( b~erdat EQ g_f_zenkai_date AND b~erzet GE g_f_zenkai_time ) )
        AND ( b~erdat LT g_f_kaisi_date  OR
         ( b~erdat EQ g_f_kaisi_date  AND b~erzet LT g_f_kaisi_time ) ).
    QUERY2:
SELECT a~vbeln a~posnr a~auart a~vkorg a~vtweg a~spart
        a~vkbur a~kunnr b~matnr b~kwmeng  b~vrkme
        b~netwr
        b~werks b~lgort b~vstel b~abgru b~erdat b~ernam
        c~faksk c~ktext c~vdatu c~zzbrsch c~kvgr1 c~augru
        INTO CORRESPONDING FIELDS OF TABLE g_t_sodata
       FROM vapma AS a INNER JOIN vbap AS b ON
       a~vbeln = b~vbeln AND a~posnr = b~posnr
       INNER JOIN vbak AS c ON a~vbeln = c~vbeln
         WHERE a~vkorg IN so_vkorg
         AND a~vtweg IN so_vtweg
         AND a~vkbur IN so_vkbur
         AND a~auart IN so_auart
         AND a~vbeln IN so_vbeln
         AND b~abgru IN so_abgru
         AND c~faksk IN so_faksk.

Questions like this are among the favorites here in this forum.
I guess that the statements themselves are o.k.
The problem is the usage! There are so many ranges; if they are filled, then some index should support them. If no range is filled, then it is usually slow and you cannot do anything about it.
You must find out which ranges are actually used, and under which conditions it is slow and when it is o.k.
Ask again when you can provide the actual cases.
    Use SQL trace to check the performance:
    SQL trace:
    The SQL Trace (ST05) – Quick and Easy
I know the probability is high that you will ignore this recommendation and point to the
'Use FOR ALL ENTRIES' recommendations... but then I cannot help you.
    Siegfried

  • How to avoid performance problems in PL/SQL?

    How to avoid performance problems in PL/SQL?
As per my knowledge, below are some points for avoiding performance problems in PL/SQL.
Are there other points to avoid performance problems?
    1. Use FORALL instead of FOR, and use BULK COLLECT to avoid looping many times.
    2. EXECUTE IMMEDIATE is faster than DBMS_SQL
    3. Use NOCOPY for OUT and IN OUT if the original value need not be retained. Overhead of keeping a copy of OUT is avoided.
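As an illustration of point 1, a minimal sketch (the table and column names are made up):
declare
  type id_tab is table of number;
  l_ids id_tab;
begin
  -- one round trip to fetch all candidate rows
  select id bulk collect into l_ids
    from staging_table
   where status = 'NEW';
  -- one round trip to apply all updates, instead of a row-by-row loop
  forall i in 1 .. l_ids.count
    update base_table
       set processed = 'Y'
     where id = l_ids(i);
end;
/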

Susil Kumar Nagarajan wrote:
1. Group a number of functions or procedures into a PACKAGE
Putting related functions and procedures into packages is useful from a code organization standpoint. It has nothing whatsoever to do with performance.
2. Good to use collections in place of cursors that do DML operations on large sets of records
But using SQL is more efficient than using PL/SQL with bulk collects.
4. Optimize SQL statements if they need it
-> Avoid using IN, NOT IN conditions or those that cause full table scans in queries
That is not true.
-> See that queries use indexes properly; sometimes the leading index column is missed out, which causes performance overhead
Assuming "properly" implies that it is entirely possible that a table scan is more efficient than using an index.
5. Use Oracle HINTS if the query can't be further tuned and hints can considerably help you
Hints should be used only as a last resort. It is almost certainly the case that if you can use a hint that forces a particular plan to improve performance, then there is some problem in the underlying statistics that should be fixed in order to resolve issues with many queries rather than just the one you're looking at.
    Justin

  • Improving performance of scripts in a PDF form

    Hello there.  I'm a bit new to the scripting in Acrobat and find myself with a complex form that has a number of instances (about 70)  of the following script (specific to individual fields):
    event.value=this.getField("Ranks").value + this.getField("Mod").value + this.getField("MiscMod").value;
    var trained = "Untrained";
    var skill = "Ranks";
    if ( (this.getField(trained).value != "On" ) && (this.getField(skill).value < 1)) {
    event.target.textColor = ["G", 1]; }
    else {event.target.textColor = ["G", 0];}
The code works fine, but with this many instances, performance of the form is sluggish while updating. In addition, there are approximately 200 simple addition fields in use.
    I'm looking for any advice, scripting or otherwise, to help improve performance.
    Thanks so much in advance.

If you create a function in the document scripts:
function updateField() {
    event.value = this.getField("Ranks").value + this.getField("Mod").value + this.getField("MiscMod").value;
    var trained = "Untrained";
    var skill = "Ranks";
    if ((this.getField(trained).value != "On") && (this.getField(skill).value < 1)) {
        event.target.textColor = ["G", 1];
    } else {
        event.target.textColor = ["G", 0];
    }
}
you could replace the code in each field with the following function call, and they would all use the same code:
updateField()
Without knowing exactly what 'slightly different code' entails, you can add parameters to the function call:
function updateField(Rank, Mod, MiscMod) {
    event.value = Rank + Mod + MiscMod;
    var trained = "Untrained";
    var skill = "Ranks";
    if ((this.getField(trained).value != "On") && (this.getField(skill).value < 1)) {
        event.target.textColor = ["G", 1];
    } else {
        event.target.textColor = ["G", 0];
    }
}
    and you could call it in each field with different values for 'Rank', 'Mod', and 'MiscMod' (or whatever parameters you use) like this:
updateField(this.getField("Ranks").value, this.getField("Mod").value, this.getField("MiscMod").value )
or
updateField(this.getField("aDifferentRanks").value, this.getField("aDifferentMod").value, this.getField("aDifferentMiscMod").value )
    or
    updateField(47, 12, 45)
    If you need to make changes, you only need to change the 'updateField' function once, and any field which calls it will use the updated code.

  • How to run query in parallel  to improve performance

I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get a piece of data I am looking for; however, this data is split across the 12 tables. So even though my query is the same, I need to run it on 12 tables instead of 1. I want to run all 12 queries in parallel instead of one by one, collapse the returned datasets, and return the result to the caller. How can I do this in ALDSP?
To be specific, I will call the operation below to get the data:
declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)* {
    src0:SOA_1MIN_POOL_METRIC(),
    src1:SOA_1MIN_POOL_METRIC(),
    src2:SOA_1MIN_POOL_METRIC(),
    src3:SOA_1MIN_POOL_METRIC(),
    src4:SOA_1MIN_POOL_METRIC(),
    src5:SOA_1MIN_POOL_METRIC(),
    src6:SOA_1MIN_POOL_METRIC(),
    src7:SOA_1MIN_POOL_METRIC(),
    src8:SOA_1MIN_POOL_METRIC(),
    src9:SOA_1MIN_POOL_METRIC(),
    src10:SOA_1MIN_POOL_METRIC(),
src11:SOA_1MIN_POOL_METRIC()
};
This method acts as a proxy; it aggregates data from 12 data tables:
src0:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_00 table,
src1:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_01 table, and so on.
The data source of each table is different (src0, src1, etc.); how can I run these queries in parallel to improve performance?

Thanks Mike.
The async function works; from the log I can see that the queries are executed in parallel.
But the behavior is confusing: with the same input it sometimes gives me the right result, and sometimes (especially when a few other applications are running on the machine) it throws the exception below:
    java.lang.IllegalStateException
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
         at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
         at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
         at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
         at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
         at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
         at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
         at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
         at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
         at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
         at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
         at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
         at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
         at RDSWS.getMetricAggNumber(RDSWS.jws:240)
         at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
         at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
         at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
Below is my code example. First I get data from all 12 queries, each query enclosed in the fn-bea:async function; finally, I do a group-by aggregation over the whole data set. Is it possible that the exception occurs because some threads have not returned their data yet when the aggregation starts?
The metricName, serviceName, opName, and $soaDbRequest values are simply passed in from the operation parameters.
let $METRIC_RESULT := (
        fn-bea:async(
                for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
            $SOA_POOL_METRIC
            ),
           fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
            $SOA_POOL_METRIC
            ),
         ... // 12 similar async queries in total
        )
            for $Metric_data in $METRIC_RESULT    
            group $Metric_data as $Metric_data_Group        
            by   $Metric_data/ROLE_TYPE as $role_type_id  
            return
            <ns0:RawMetric>
                <ns0:endTime?></ns0:endTime>
                <ns0:target?>{$role_type_id}</ns0:target>
    <ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
    <ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
    <ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
    <ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
    </ns0:RawMetric>
Could you tell me why the result is unstable? Thanks!

  • How to improve performance on SD document with over 1000 line items

    Dear experts,
A lot of our SD documents contain 1500 to 5000 line items. Saving a simple change to one line item description takes 5 minutes (with or without pricing considerations). We are on ECC 5.0. Does anyone know of an existing SAP note or other ways to help improve performance?
A related question on performance tracing: I have done some tracing and noticed that the time to access SQL/buffer/enqueue/RFC amounts to less than 50% (sometimes 5%) of the total processing time. What has the system been doing for the "unrecorded" time?
    Thank you and appreciate!
    -cynthia

I would check which modules or which parts of the code take most of the processing time. You can use SE30 to trace one order save.
If it's not SQL, then it may be internal processing, like inefficient loops and so on. For example, the runtime of nested loops over big internal tables grows quadratically.
It may be caused by SAP code - search OSS notes based on the module names that consume the most time.
Or it can be caused by the code in your user exits - then your developers have to fix it and redesign it using a more efficient technique.

  • FI-CA events to improve performance

    Hello experts,
Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
It seems that these specific exits associated with BW events have been developed especially to improve performance.
Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
Please answer first whether the column is stored in a separate lobsegment.
No. The table, index, LOB, and LOB index use the same tablespace. I missed adding this point (moving the LOB to a separate tablespace) as part of the table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
Is this the one you are referring to?
http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
By moving the CLOB column to a different block size, I will test the performance improvement it gives and will share the results.
We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
    @Billy
We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job by the client, the XML data gets deleted.
Regarding binding of the LOB from the client side, we will check on that as well to reduce round trips.
By changing the block size, I can set db_32k_cache_size=2G and keep this table in the cache. If I directly put my table in the cache, it will age out all other operations from the buffer, which makes things worse for us.
This insert is part of a transaction (registration of a fingerprint), and as of now it is the only statement taking time compared to the other statements in the transaction.
    Thanks,
    Arun

  • PL/SQL code written in db or in application...????

Hi,
Which is best for performance, maintainability, etc.?
To write PL/SQL as a validation process for data values about to be inserted into the db:
1) as a db trigger (before insert or update row-level trigger), or
2) at the application level - in Forms10g?
This PL/SQL code only compares some pairs of data values; there is no other DML statement involved.
Both the db server and the app server are considered to be either on the same machine or on two machines very close to each other.
Note: I use Db10g.
    Thanks...
    Sim

    It is all about moving parts. How many there are. Where they are located.
The fewer moving parts, the less complexity and the fewer bugs - and the better the performance (fewer wheels to turn to crunch data), the easier the maintenance, and the greater the flexibility.
Part of this is having the moving parts close to the data. Kind of obvious - what is faster? Shipping data from the db engine to a front-end application via an app server and web server? Or shipping the data from the db (SQL) engine to a PL/SQL process running as part and parcel of the db instance? IPC is significantly faster than TCP/IP.
    Having the moving parts close to the data also means that it can scale with the data. Oracle is good at scalability. It is designed at its very core to be scalable. Partitions to reduce I/O. Shared Server to reduce o/s resource footprint per client session. Shared pool to re-use SQL. DB buffer to cache data. RAC to run multiple db instances for a single physical database. Etc. Etc.
With the application's moving parts close to the data, it inherits this scalability. You can "multi-thread" application code using pipelined table functions. You can fork application code using background processes (DBMS_JOB and DBMS_SCHEDULER). You have bulk processing in order to transfer more data per SQL engine call between the buffer cache and your application, and thus also reduce expensive context switching.
    This list goes on and on.
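As a small illustration of the pipelined table function idea mentioned above (a sketch; the type and function names are made up):
create or replace type num_tab as table of number;
/
create or replace function squares (p_n in pls_integer)
  return num_tab pipelined
is
begin
  for i in 1 .. p_n loop
    pipe row (i * i);  -- each row is streamed to the caller as it is produced
  end loop;
  return;
end;
/
-- consumed straight from SQL, so it can be joined and parallelised like any table:
select * from table(squares(5));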
    All this is summarised into the following rules.
    Rule 1. Do it in SQL.
    Rule 2. Only when SQL cannot do it, use PL/SQL. (e.g. SQL is not suited for managing and controlling process flow - PL/SQL is)
Rule 3. Only when PL/SQL cannot do it, use <insert-language-here> (e.g. PL/SQL cannot render a Windows grid on the client - but Delphi/C#/Java can).
    Also keep in mind that in the modern client-server paradigm we deal with a web architecture. Which means the client is a web browser. And this means that PL/SQL can pass the data (HTML) required by the client (web browser) to render the User Interface.
    Have a look at APEX (Oracle's Application Express) for how this works. http://apex.oracle.com
    All you need to develop web applications is APEX (a PL/SQL product suite in the database) and a web browser. Kickass stuff... :-)

  • Performance degradation in pl/sql parsing

We are trying to use the XML PL/SQL parser and noticed performance degradation as we run it multiple times. We zeroed in on the following clause:
doc := xmlparser.getDocument(p);
The first time the procedure is run, the elapsed time at sqlplus is something like .45 sec, but as we run it repeatedly in the same session the elapsed time keeps increasing by .02 seconds. If we log out and start fresh, we start again from .45 sec.
We noticed similar degradation with
p := xmlparser.newParser;
but we got around that by making the 'p' variable a package variable, initializing it once and using the same parser for all invocations.
Any suggestions?
    Thank you.
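
One likely cause, offered as an assumption: each xmlparser.getDocument call allocates a DOM document in session memory that is never freed, so session memory grows and each run gets a little slower. If you are on the Oracle XDK PL/SQL packages, freeing the document (and the parser, when you are done with it) should keep the elapsed time flat; a sketch:
declare
  p   xmlparser.Parser;
  doc xmldom.DOMDocument;
begin
  p := xmlparser.newParser;
  xmlparser.parse(p, '/tmp/test.xml');  -- hypothetical input file
  doc := xmlparser.getDocument(p);
  -- ... work with doc ...
  xmldom.freeDocument(doc);  -- release the DOM memory after each run
  xmlparser.freeParser(p);   -- release the parser context when finished
end;
/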


  • How to preload sound into memory to improve performance?

    Hello all
I have an application that needs to play 4 different short wave files on certain events. The wave files are small (less than 1 sec each), so they can be preloaded into memory. But I don't really know how to do that... This is my current code... Performance is really important here, so the faster users can hear the sounds, the better...
    import java.io.*;
    import javax.sound.sampled.*;
    import javax.swing.*;
    import java.awt.event.*;
public class PlaySound implements ActionListener {
     private Clip clip = null;

     public void play(String name) {
          if (clip != null) {
               clip.stop();
               clip = null;
          }
          loadClip(name);
          clip.start();
     }

     private void loadClip(String fnm) {
          try {
               AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fnm + ".wav"));
               AudioFormat format = stream.getFormat();
               DataLine.Info info = new DataLine.Info(Clip.class, format);
               if (!AudioSystem.isLineSupported(info))
                    JOptionPane.showMessageDialog(null, "Unsupported sound line", "Warning!", JOptionPane.WARNING_MESSAGE);
               else {
                    clip = (Clip) AudioSystem.getLine(info);
                    clip.open(stream);
                    stream.close();
               }
          } catch (Exception e) {
               JOptionPane.showMessageDialog(null, "loadClip E: " + e.toString(), "Warning!", JOptionPane.WARNING_MESSAGE);
          }
     }

     public void actionPerformed(ActionEvent e) { }  // required by ActionListener; unused here

     public static void main(String[] args) {
          new PlaySound().play("a wav file name");    // play() is an instance method, so create an instance
     }
}
I would appreciate it if someone can point out how I can preload them to improve performance... Thanks in advance!

The message above should have been:
OMG, me dumb, you smart, Florian...
Thank you for your suggestion... While it's not the best or anything close to what I thought it would be, it's certainly one way to do it and better than what I've got now...
Thanks again Florian, I really appreciate it!!
BTW, is there anything that would produce the sound faster than this?
    Message was edited by:
    BuggyVB

  • How to reference LEVEL in a TREE's SQL code

Is it possible to somehow reference LEVEL in a tree's SQL code, so that you can use a DECODE statement to selectively create links on specific levels only, i.e. NOT on the root level?
    e.g.
    select "CHILD_ID" id,
    "PARENT_ID" pid,
    "NAME" name,
    decode( LEVEL, 1, null,'f?p=' || :APP_ID || ':43:' || :APP_SESSION || '::::P43_CHILD_ID:' || CHILD_ID) link,
    null a1,
    null a2
    from TREETABLE
    order by name
One solution I've implemented is to create a DB function, e.g.
    create or replace function get_tree_level(v_child_id number) return number
    is
    v_level number(2);
    begin
    select level
    into v_level
    from TREETABLE
    where child_id=v_child_id
    start with child_id=1
    connect by prior child_id= parent_id;
    return v_level;
    end;
and then use
decode( get_tree_level(child_id), 1, null,'f?p=' || :APP_ID || ':43:' || :APP_SESSION || '::::P43_CHILD_ID:' || CHILD_ID) link,
but this is too slow, as I have quite a few records in the tree and it takes too long to render.
    regards
    Paul P

OK, I found a workaround after reading
How to add URL to tree
The view my tree is built on is constructed out of 3 unions and 4 selects (the first select creates the ROOT NODE, where "1" is also used as a hard-coded foreign key (parent_id) in the company table).
I just added a new column called tree_level to the query, which is hard-coded with respect to its level in the hierarchy,
e.g.
CREATE OR REPLACE FORCE VIEW COMCONSTUTREE ("CHILD_ID", "PARENT_ID", "NAME","TREE_LEVEL") AS
select 1 as child_id, to_number(null) as parent_id, 'Company' as name, 1 as tree_level from dual
    UNION
    select COM_ID, 1, COM_NAME,2
    from company
    UNION
    select CON_ID,CON_COMPANY_ID, substr(con_name,instr(con_name,' ')+1) ||', '||substr(con_name,1,instr(con_name,' ')-1),3
    from contact
    UNION
    select STU_ID,STU_CONTACT_ID, STU_LAST_NAME||', '||STU_FIRST_NAME ,4
    from student;
    the tree code now becomes
    select "CHILD_ID" id,
    "PARENT_ID" pid,
    "NAME" name,
    decode( tree_level,1, null,'f?p=' || :APP_ID || ':43:' || :APP_SESSION || ':TREE:::P43_CHILD_ID:' ||CHILD_ID) link,
    null a1,
    null a2
from COMCONSTUTREE
    order by name

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

           
    Hi,
I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, and thus it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process the message, then delete the message from the queue. This module also sends back a notification to the user indicating that processing is complete. The functions/modules work fine, as in they meet the logical requirement. A pretty typical queue scenario.
Now, coming to the problem statement: since it is envisaged that the queue will be heavily loaded most of the time, I am pushing to speed up the processing of the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
To improve performance I ran multiple cycles of performance profiling and then improved the identified "HOT" paths/functions.
It all came down to a point where the Azure Queue pull and delete are the two most time-consuming calls left. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), and this paid off by reducing processing time by a big margin. All good till this point as well.
I am processing these messages in parallel to improve overall performance.
Pseudo code:
// The AzureQueue class encapsulates calls to Azure Storage Queue.
// Assume nothing fancy inside, vanilla calls to the queue for pull/push/delete.
var batchMessages = AzureQueue.Pull(32);
Parallel.ForEach(batchMessages, bMessage =>
{
    // DoSomething does some background processing
    try { DoSomething(bMessage); }
    catch { /* log exception */ }
    AzureQueue.Delete(bMessage);
});
With this change, profiling results now show that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as processing is done, I remove it just after "DoSomething" finishes.
What I need now are suggestions on how to further improve the performance of this function when 90% of the time is being eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform delete/bulk delete, etc.?
With the implementation mentioned here, I get a speed of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
Does it also make a difference to performance which queue delete overload I call? As of now the queue has an overloaded method for deleting a message: one which accepts a message object, and another which accepts a message identifier and pop receipt. I am using the latter here, with message identifier and pop receipt, to delete the message from the queue.
Let me know if you need any additional information or any clarification of the question.
Inputs/suggestions are welcome.
Many thanks.

The first thing that came to mind was to use a parallel delete at the same time you run the work in DoSomething. If DoSomething fails, add the message back into the queue. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
Or, make a threadpool-queued delete after the work was successful. Fire and forget. However, if you're loading the processing at 25/sec, and 90% of the time sits in the delete, you'd quickly accumulate delete calls for the threadpool and never catch up. At 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
I wonder if calling the delete REST API yourself may offer any improvements. If you find the delete sets up a TCP connection each time, this may be all you need. Try to keep the connection open, or see if the REST API can delete more at a time than the SDK API can.
Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second 25/sec also - and you just live with the slow delete. If that's still not good enough, add more instances.
    Darin R.
