How to optimize Database Calls to improve performance of an application

Hi,
I have a performance issue with my application. It takes a lot of time to load, as it is making several calls to the database; moreover, the result set returns more than 2,000 records. I need to know the best way to improve performance:
1. What is the solution for optimizing the database calls so that I can improve the performance of my application and also improve the turnaround time for loading the web pages?
2. Are stored procedures a good way to get the data from the result set iteratively? How can I implement this solution in Java?
This is very important, and any help is greatly appreciated.
Thanks in Advance,
Sailatha

latha_kaps wrote:
I have a performance issue with my application. It takes a lot of time to load, as it is making several calls to the database; moreover, the result set returns more than 2,000 records. I need to know the best way to improve performance:
1. What is the solution for optimizing the database calls so that I can improve the performance of my application and also improve the turnaround time for loading the web pages?
2. Are stored procedures a good way to get the data from the result set iteratively? How can I implement this solution in Java?
This is very important, and any help is greatly appreciated.
1. 2,000 records in a result set is not a big number.
2. Which RDBMS are you using?
Concerning the answer to 2., you have different possibilities. The best approach is always to handle as many transactions as possible inside the database; therefore a stored procedure is the best approach IMHO.
Below is an example for an Oracle RDBMS.
Assumption #1: you have created an object (demo_obj) in your Oracle database:
create type demo_obj as object( val1 number, val2 number, val3 number);
create type demo_array as table of demo_obj;
/
Assumption #2: you've created a stored function to get the values of the array in your database:
create or replace function f_demo ( p_num number )
return demo_array
as
    l_array demo_array := demo_array();
begin
    select demo_obj(round(dbms_random.value(1,2000)),round(dbms_random.value(2000,3000)),round(dbms_random.value(3000,4000)))
    bulk collect into l_array
      from all_objects
     where rownum <= p_num;
    return l_array;
end;
/
To get the data out of the database, use the following Java program (please see the comments):
import java.sql.*;
import java.io.*;
import oracle.sql.*;
import oracle.jdbc.*;
public class VarrayDemo {
     public static void main(String args[]) throws IOException, SQLException {
          DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
          Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:oci:@TNS_ENTRY_OF_YOUR_DB", "scott", "tiger"); // I am using OCI driver here, but one can use thin driver as well
          conn.setAutoCommit(false);
          Integer numRows = new Integer(args[0]); // variable to accept the number of rows to return (passed at runtime)
          Object attributes[] = new Object[3]; // "attributes" of the "demo_obj" in the database
          // the object demo_obj in the db has 3 fields, all numeric
          // create an array of objects which has 3 attributes
          // we are building a template of that db object
          // the values i pass below are just generic numbers, 1,2,3 mean nothing really
          attributes[0] = new Integer(1);
          attributes[1] = new Integer(2);
          attributes[2] = new Integer(3);
          // this will represent the data type DEMO_OBJ in the database
          Object demo_obj[] = new Object[1];
          // make the connection between oracle <-> jdbc type
          demo_obj[0] = new oracle.sql.STRUCT(new oracle.sql.StructDescriptor(
                    "DEMO_OBJ", conn), conn, attributes);
          // the function returns an array (collection) of the demo_obj
          // make the connection between that array(demo_array) and a jdbc array
          oracle.sql.ARRAY demo_array = new oracle.sql.ARRAY(
                    new oracle.sql.ArrayDescriptor("DEMO_ARRAY", conn), conn,
                    demo_obj);
          // call the plsql function
          OracleCallableStatement cs =
               (OracleCallableStatement) conn.prepareCall("BEGIN ? := F_DEMO(?);END;");
          // bind variables
          cs.registerOutParameter(1, OracleTypes.ARRAY, "DEMO_ARRAY");
          cs.setInt(2, numRows.intValue());
          cs.execute();
          // get the results of the oracle array into a local jdbc array
          oracle.sql.ARRAY results = (oracle.sql.ARRAY) cs.getArray(1);
          // flip it into a result set
          ResultSet rs = results.getResultSet();
          // process the result set
          while (rs.next()) {
               // since it's an array of objects, get and display the value of the underlying object
               oracle.sql.STRUCT obj = (STRUCT) rs.getObject(2);
               Object vals[] = obj.getAttributes();
               System.out.println(vals[0] + " " + vals[1] + " " + vals[2]);
          }
          // cleanup
          cs.close();
          conn.close();
     }
}
For selecting 20,000 records it takes only a few seconds.
Hth
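A further, simpler knob worth checking before reaching for collections: the JDBC fetch size. Oracle's thin and OCI drivers default to 10 rows per network round trip, so a 2,000-row result costs about 200 round trips. A minimal sketch; the connect string, credentials, and `demo_table` are placeholders, not from the thread above:

```java
import java.sql.*;

public class FetchSizeDemo {
    // Round trips needed to pull `rows` rows at a given fetch size.
    static int roundTrips(int rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details -- substitute your own.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
             Statement stmt = conn.createStatement()) {
            // Oracle's default is 10 rows per round trip; 500 turns a
            // 2000-row result into 4 round trips instead of 200.
            stmt.setFetchSize(500);
            try (ResultSet rs = stmt.executeQuery("SELECT val1, val2, val3 FROM demo_table")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " " + rs.getInt(2) + " " + rs.getInt(3));
                }
            }
        }
    }
}
```

With only 2,000 rows this usually matters more than where the SQL lives, so it's worth measuring before moving logic into PL/SQL.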

Similar Messages

  • BIA to improve performance for BPS Applications

    Hi All,
Is it possible to improve the performance of BPS applications using BIA? Currently we are running applications on BI-BPS which, because of the huge range of periods, have a performance issue.
Could you please share whether BIA would be helpful for the read and write operations of BPS, and to what extent the performance can be increased?
I request an early reply, as the system is in really bad shape and users are grappling with poor performance.
    Rgds,
    Rajeev

    Hi Rajeev,
If the performance issue you are facing is while running the query on the real-time (transactional) infocube being used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed and indexed requests. It combines this data with the plan buffer cache and produces the result.
Hence, if you are facing an issue with query response time, BIA will definitely help.
    Regards,
    Praveen

  • How to make Database calls without directly htting the DB Listener?

    Hi
I have a front-end application (code sitting on the user's PC) that needs to call the database, say, to look up table data and populate a JTable.
So, I make a call directly to the database on the default port 1521 (yes, it's Oracle!) and get my data. However, I also have server-side Java code connecting with my application over a Java socket (sending messages back and forth).
    So, I've got two ports open, 3041 (arbitrary ) for the Java socket and 1521 for the DB.
    I want to close 1521 and move the sql statements to the server.
    I can't work out a way to do this effectively. At the moment, I'm sending a message through 3041 to tell the server side code to perform a DB call. Then the database returns the info and I send a message back to the client PC with the data.
    It makes the DB call asynchronous (good or bad thing?), since I have a listener on the client waiting for messages from the server.
It all looks a bit messy, i.e. sending messages and waiting for replies just for a DB call.
    Is there a standard practice for my problem that I can use?
    Thanks

    Your script won't work.
    #Renamed the variable so that it's less confusing.
    $webUrl = "http://spsite"
    #There's no point getting the SPSite object if you only want/need the SPWeb object.
    #$spSite = Get-SPSite -Identity $mySiteUrl
    #$spWeb = $spSite.OpenWeb()
    $spWeb = Get-SPWeb $webUrl
    #You've copied this from an example that uses a subsite which has the documents. That's not a great approach and probably not valid for yours
    #$cvDocumentLibrary = $spWeb.GetList("subsite/Documents")
    $library = $spWeb.Lists["ListName"]
    #Here you are referencing '$lib' which hasn't been declared. PowerShell will be completely ignorant of what on earth that is and will throw a null reference exception
    # $item = $lib.Items | where {$_.Name -eq "Report.docx"}
    #This still isn't perfect as it's slow and won't work on lists that are over the throttling limit but it'll work most of the time.
    $item = $library.Items | where {$_.Name -eq "Report.docx"}
    $item.Delete()
    You need to slow down and check things.
    $webUrl = "http://spsite"
$spWeb = Get-SPWeb $webUrl
Write-Host "The URL of the website is: " $SPWeb.URL
    $library = $spWeb.Lists["ListName"]
Write-Host "The Library title is: " $Library.Title
$item = $library.Items | where {$_.Name -eq "Report.docx"}
Write-Host "Deleting item named: " $item.Name
$item.Delete()
    In that version it should print out some lines as it processes which will confirm that you've found the web, list and item before it does anything.
    If you hit errors post up the EXACT code you're using, change the URLs a little bit if you want but try to only replace words rather than missing out whole chunks.
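Back to the socket question above: the usual practice is exactly the middle-tier pattern described there, a small request/reply protocol so that only the application port stays open and 1521 is closed to clients, and the call can stay synchronous from the client's point of view by blocking on the reply. A minimal sketch; the in-memory `lookup` map is a stand-in for where the real JDBC query would run on the server side:

```java
import java.io.*;
import java.net.*;
import java.util.Map;

public class DbProxyDemo {
    // Stand-in for the server-side JDBC query; replace with a real lookup.
    static String lookup(String key) {
        Map<String, String> table = Map.of("42", "SCOTT", "7", "KING");
        return table.getOrDefault(key, "NOT_FOUND");
    }

    // Server: read one request line per connection, write one reply line.
    static void serve(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(lookup(in.readLine()));
        }
    }

    // Client: one blocking request/reply round trip -- synchronous to the caller.
    static String query(int port, String key) throws IOException {
        try (Socket s = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(key);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // any free port
        Thread t = new Thread(() -> { try { serve(server); } catch (IOException ignored) {} });
        t.start();
        System.out.println(query(server.getLocalPort(), "42"));
        t.join();
        server.close();
    }
}
```

Because `query` blocks on `readLine`, the client code does not need a separate listener thread for DB replies; the asynchrony the question worries about disappears.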

  • How much can a native compiler improve performance of a java application?

    Hello,
we have a customer with low-end machines who complains very much about the startup time of one of our applications. I don't know the exact configuration of the clients, but I think the bad performance is because the client has too little memory and has to swap when the JVM starts.
Could a native compiler like Excelsior JET be a solution, or would the improvement be only marginal?
    Anyone who has experience in this topic?
    Thank you in advance!

Could a native compiler like Excelsior JET be a solution, or would the improvement be only marginal?
Excelsior JET only packages up the class files and a JRE into a big blob so that it looks like a single executable. It doesn't actually generate native code for your classes.
    There are some limited pure-native compilers (e.g. GCJ - the GNU compiler for Java). Because you also need native-compiled libraries, and those are quite incomplete, only certain basic programs can be compiled down to native form without some major tweaking today.
    As to whether it'll improve your performance or not: if your program is computationally intensive (does lots of floating-point math, or other CPU-intensive algorithms), it may improve. If it's I/O, network or database bound, you'll see very little improvement, if any. If it's graphics-bound, you may see some improvement, though the native Swing support with GCJ is limited at this time.
    Even with long computationally-intensive programs, you may or may not see an improvement. If you use the server VM (java -server ...), then it does similar things behind your back (optimizing compiles), so that eventually your program speeds up a fair amount (though not to pure-native speeds).

  • Database Statistics to improve performance

I skipped the creation of database statistics after the import stage during the ECC installation. I have installed PI 7.1 and ECC 6.0 SR3 as an MCOD installation. My installation is on a 64-bit Windows 2003 server, Oracle 10 DB, and a Unicode kernel.
Would it be okay to do the database statistics creation now that my installation is finished and both PI 7.1 and ECC 6.0 are working correctly?
I would be calling it via the command line:
brconnect.exe -u / -c -o summary -f stats -o SAPSR4 -t all -p 8 -f nocasc
My schema for ECC is SAPSR4.

Yes, you can do this right now.
-p is at 8, so just run this during off hours or when nobody is using the system, as it may impair performance while the job runs.
Good luck.

  • How to preload sound into memory to improve performance?

    Hello all
I have an application that needs to play 4 different short wave files on certain events. The wave files are small (less than 1 sec each), so they can be preloaded into memory, but I don't really know how to do that. This is my current code... Performance is really important here, so the faster users can hear the sounds, the better...
    import java.io.*;
    import javax.sound.sampled.*;
    import javax.swing.*;
    import java.awt.event.*;
public class PlaySound implements ActionListener {
     private Clip clip = null;

     public void play(String name) {
          if (clip != null) {
               clip.stop();
               clip = null;
          }
          loadClip(name);
          clip.start();
     }

     private void loadClip(String fnm) {
          try {
               AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fnm + ".wav"));
               AudioFormat format = stream.getFormat();
               DataLine.Info info = new DataLine.Info(Clip.class, format);
               if (!AudioSystem.isLineSupported(info)) {
                    JOptionPane.showMessageDialog(null, "Unsupported sound line", "Warning!", JOptionPane.WARNING_MESSAGE);
               } else {
                    clip = (Clip) AudioSystem.getLine(info);
                    clip.open(stream);
                    stream.close();
               }
          } catch (Exception e) {
               JOptionPane.showMessageDialog(null, "loadClip E: " + e.toString(), "Warning!", JOptionPane.WARNING_MESSAGE);
          }
     }

     // Required by ActionListener; wire this up to whatever UI event should trigger a sound.
     public void actionPerformed(ActionEvent e) {
     }

     public static void main(String[] args) {
          new PlaySound().play("a wav file name");
     }
}
I would appreciate it if someone could point out how I can preload them to improve performance... Thanks in advance!

    The message above should be:
    OMG, me dumb you smart Florian...
Thank you for your suggestion... While it's not the best or anything close to what I thought it would be, it's certainly one way to do it and better than what I've got now...
Thanks again Florian, I really appreciate it!!
BTW, is there anything that would produce the sound faster than this?
    Message was edited by:
    BuggyVB
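On the preloading question itself: open each `Clip` once at startup, so the event path only rewinds and starts an already-buffered clip instead of touching the file system. A sketch along those lines; the `SoundBank` class and its method names are illustrative, not from the thread:

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import javax.sound.sampled.*;

public class SoundBank {
    private final Map<String, Clip> clips = new HashMap<>();

    // Load every wave file once at startup; Clip.open() reads and decodes
    // the sample data into memory, so no file I/O happens at play time.
    public void preload(String... names) throws Exception {
        for (String name : names) {
            try (AudioInputStream stream =
                     AudioSystem.getAudioInputStream(new File(name + ".wav"))) {
                Clip clip = AudioSystem.getClip();
                clip.open(stream);
                clips.put(name, clip);
            }
        }
    }

    public boolean isLoaded(String name) {
        return clips.containsKey(name);
    }

    // Playback just rewinds and starts an already-open clip.
    public void play(String name) {
        Clip clip = clips.get(name);
        if (clip != null) {
            clip.setFramePosition(0);
            clip.start();
        }
    }
}
```

Usage would be `bank.preload("a", "b", "c", "d");` once, then `bank.play("a");` on each event.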

  • How to improve performance for Custom Extractor in BI..

    HI all,
I am new to BI and started working on it a couple of weeks ago. I created a Custom Extractor (Data View) in the source system, and pulling data takes a lot of time. Can anyone suggest how to improve the performance of my custom extractor? Please do the needful.
      Thanks and Regards,
    Venugopal..

    Dear Venugopal,
use transaction ST05 to check that your SQL statements are optimal and that you do not have redundant database calls. You should use "bulking" as much as possible, which means fetching the required data with one request to the database rather than multiple requests.
    Use transaction SE30 to check if you are wasting time in loops and if yes, optimize the algorithm.
    Best Regards,
    Sylvia
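The "bulking" advice above can be illustrated in plain JDBC: one `IN`-list query replaces N single-row round trips. A hedged sketch; the `materials` table and `id` column are made-up names:

```java
import java.sql.*;
import java.util.List;
import java.util.StringJoiner;

public class BulkFetchDemo {
    // Build "SELECT * FROM <table> WHERE <keyCol> IN (?,?,...,?)" for n keys.
    static String bulkSql(String table, String keyCol, int n) {
        StringJoiner marks = new StringJoiner(",", "(", ")");
        for (int i = 0; i < n; i++) marks.add("?");
        return "SELECT * FROM " + table + " WHERE " + keyCol + " IN " + marks;
    }

    // One round trip for all keys instead of keys.size() separate SELECTs.
    static void fetchAll(Connection conn, List<Integer> keys) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(bulkSql("materials", "id", keys.size()))) {
            for (int i = 0; i < keys.size(); i++) ps.setInt(i + 1, keys.get(i));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process one row
                }
            }
        }
    }
}
```

The same idea applies in ABAP as `SELECT ... FOR ALL ENTRIES` or array fetches; the point is one statement per batch of keys, not one per key.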

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

Hi,
I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process it, then delete it from the queue. This module also sends a notification back to the user indicating that processing is complete. The functions/modules work fine and meet the logical requirements; it's a pretty typical queue scenario.
Now, the problem statement: since the queue is expected to be heavily loaded most of the time, I am trying to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
To improve performance I ran multiple cycles of profiling and then improved the identified "hot" paths/functions.
It came down to the point where the Azure Queue pull and delete are the two most time-consuming calls. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), which reduced processing time by a big margin. All good up to this point as well.
    i am processing these messages in parallel so as to improve on overall performance.
    pseudo code:
    //AzureQueue Class is encapsulating calls to Azure Storage Queue.
    //assume nothing fancy inside, vanila calls to queue for pull/push/delete
var batchMessages = AzureQueue.Pull(32);
Parallel.ForEach(batchMessages, bMessage =>
{
    // DoSomething does some background processing
    try { DoSomething(bMessage); }
    catch { /* log exception */ }
    AzureQueue.Delete(bMessage);
});
With this change, profiling results now show that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as processing is done, I remove it just after "DoSomething" finishes.
What I need now are suggestions on how to further improve the performance of this function when 90% of the time is being eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform delete/bulk delete, etc.?
With the implementation mentioned here, I get a speed of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
Does it also make a difference to performance which queue delete call I am making? As of now the queue has overloaded methods for deleting a message: one which accepts a message object and another which accepts a message identifier and pop receipt. I am using the latter one here, with the message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification in question.
    Inputs/suggestions are welcome.
    Many thanks.

The first thing that came to mind was to run a parallel delete at the same time you run the work in DoSomething. If DoSomething fails, add the message back into the queue. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, make a threadpool queued delete after the work was successful.  Fire and forget.  However, if you're loading the processing at 25/sec, and 90% of time sits on the delete, you'd quickly accumulate delete calls for the threadpool until you'd
    never catch up.  At 70-80% duty cycle this may work, but the closer you get to always being busy could make this dangerous.
    I wonder if calling the delete REST API yourself may offer any improvements.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time
    than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.
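The "threadpool queued delete" idea above, with the never-catch-up caveat addressed by a bounded queue: when the pool falls behind, `CallerRunsPolicy` makes the submitting thread perform the delete itself, so the backlog cannot grow without limit. A generic sketch in Java; the `Consumer` is a stand-in for the real queue-SDK delete call (identifier plus pop receipt):

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

public class AsyncDeleter {
    // Bounded work queue: if deletes fall behind, CallerRunsPolicy makes the
    // submitting thread run the delete itself instead of queueing forever.
    private final ExecutorService pool = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(1000),
            new ThreadPoolExecutor.CallerRunsPolicy());

    private final Consumer<String> deleteCall;

    // deleteCall stands in for the real SDK delete, e.g. id -> queue.delete(id, popReceipt).
    public AsyncDeleter(Consumer<String> deleteCall) {
        this.deleteCall = deleteCall;
    }

    // Fire-and-forget: the processing thread returns immediately after submitting.
    public void deleteLater(String messageId) {
        pool.execute(() -> deleteCall.accept(messageId));
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

The bounded queue converts an unbounded backlog into gentle backpressure: at sustained 90% delete time the processor slows to what the deleters can absorb instead of accumulating work forever.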

  • How to eliminate joins to improve performance

I have a query:
    from D236OT00.ASN1_COMP_NB CP LEFT OUTER JOIN D236OT00.NB01_COMP_DEP_NB CD
         on(CP.SRVC_LOC_ID = CD.CHLD_SRVC_LOC_ID
              and CP.ORD_ITEM_ID = CD.CHLD_ORD_ITEM_ID
              and CP.PRMRY_COMP_CD = CD.CHLD_PRMRY_COMP_CD
              and CP.SECNDRY_COMP_CD = CD.CHLD_SCN_COMP_CD
         and CP.ORD_ITEM_SEQ = CD.CHLD_ORD_ITEM_SEQ
              and CD.DEP_TYPE_CD = 'HI'
              and CD.RECORD_EFF_END_DT = '9999-12-31'
              and CD.EFF_END_DT > CURRENT DATE)
    LEFT OUTER JOIN D236OT00.ASN2_RESOURCE_NB RS
    on(CP.ORD_ITEM_ID = RS.ORD_ITEM_ID
              and CP.ORD_ITEM_SEQ = RS.ORD_ITEM_SEQ
              and CP.PRMRY_COMP_CD = RS.PRMRY_COMP_CD
              and CP.SECNDRY_COMP_CD = RS.SECNDRY_COMP_CD
              and RS.RECORD_EFF_END_DT = '9999-12-31'
              and RS.RESR_EFF_END_DT > CURRENT DATE)
    LEFT OUTER JOIN D236OT00.CSTI_CUST_INFO_MP CS
    on(RS.OWNER_ACCT_ID = CS.OWNER_ACCT_ID
              and RS.OWNER_SRVC_LOC_ID = CS.OWNER_SRVC_LOC_ID
              and RS.RESR_ID = CS.RESR_ID and RS.RESR_GRP_TYP = CS.RESR_GRP_TYP
              and RS.RESR_TYPE = CS.RESR_TYPE and CS.START_NBR = ''
              and CS.EFF_END_DT = '9999-12-31' and CS.RATE_END_EFF_DATE = '9999-12-31')
    where CP.ORD_ITEM_ID = ? and CP.ORD_ITEM_SEQ = ? and CP.PRMRY_COMP_CD = ? and
    CP.RECORD_EFF_END_DT = '9999-12-31' and CP.EFF_END_DT > CURRENT DATE
It has quite a few joins... is there any way to eliminate these joins to improve performance? Kindly help, as this is urgent.

    from D236OT00.ASN1_COMP_NB CP LEFT OUTER JOIN D236OT00.NB01_COMP_DEP_NB CD
    on(CP.SRVC_LOC_ID = CD.CHLD_SRVC_LOC_ID
    and CP.ORD_ITEM_ID = CD.CHLD_ORD_ITEM_ID
    and CP.PRMRY_COMP_CD = CD.CHLD_PRMRY_COMP_CD
    and CP.SECNDRY_COMP_CD = CD.CHLD_SCN_COMP_CD
    and CP.ORD_ITEM_SEQ = CD.CHLD_ORD_ITEM_SEQ
    and CD.DEP_TYPE_CD = 'HI'
    and CD.RECORD_EFF_END_DT = '9999-12-31'
    and CD.EFF_END_DT > CURRENT DATE)
    LEFT OUTER JOIN D236OT00.ASN2_RESOURCE_NB RS
    on(CP.ORD_ITEM_ID = RS.ORD_ITEM_ID
    and CP.ORD_ITEM_SEQ = RS.ORD_ITEM_SEQ
    and CP.PRMRY_COMP_CD = RS.PRMRY_COMP_CD
    and CP.SECNDRY_COMP_CD = RS.SECNDRY_COMP_CD
    and RS.RECORD_EFF_END_DT = '9999-12-31'
    and RS.RESR_EFF_END_DT > CURRENT DATE)
    LEFT OUTER JOIN D236OT00.CSTI_CUST_INFO_MP CS
    on(RS.OWNER_ACCT_ID = CS.OWNER_ACCT_ID
    and RS.OWNER_SRVC_LOC_ID = CS.OWNER_SRVC_LOC_ID
    and RS.RESR_ID = CS.RESR_ID and RS.RESR_GRP_TYP = CS.RESR_GRP_TYP
    and RS.RESR_TYPE = CS.RESR_TYPE and CS.START_NBR = ''
    and CS.EFF_END_DT = '9999-12-31' and CS.RATE_END_EFF_DATE = '9999-12-31')
    where CP.ORD_ITEM_ID = ? and CP.ORD_ITEM_SEQ = ? and CP.PRMRY_COMP_CD = ? and
CP.RECORD_EFF_END_DT = '9999-12-31' and CP.EFF_END_DT > CURRENT DATE
You have not used code tags.
it has quite a few joins... any way to eliminate these joins to improve performance? Kindly help as this is urgent
By saying "urgent" you have lost 99% of the volunteers who might answer. This is considered rude.
Your post is incomplete. There is no database version, no OS version.
Where is your research? How did you come to that conclusion? Where is the explain plan?
I think you have to repost with the complete details.
    Thank you.

  • How to run query in parallel  to improve performance

I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get a piece of data I am looking for; however, this data is split across the 12 tables. So even though my query is the same, I need to run it on 12 tables instead of 1. I want to run all 12 queries in parallel instead of one by one, collapse the returned datasets, and return the result to the caller. How can I do this in ALDSP?
    To be specific, I will call below operation to get data:
declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)* {
(
src0:SOA_1MIN_POOL_METRIC(),
src1:SOA_1MIN_POOL_METRIC(),
src2:SOA_1MIN_POOL_METRIC(),
src3:SOA_1MIN_POOL_METRIC(),
src4:SOA_1MIN_POOL_METRIC(),
src5:SOA_1MIN_POOL_METRIC(),
src6:SOA_1MIN_POOL_METRIC(),
src7:SOA_1MIN_POOL_METRIC(),
src8:SOA_1MIN_POOL_METRIC(),
src9:SOA_1MIN_POOL_METRIC(),
src10:SOA_1MIN_POOL_METRIC(),
src11:SOA_1MIN_POOL_METRIC()
)
};
This method acts as a proxy; it aggregates data from 12 data tables:
src0:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_00 table,
src1:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_01 table, and so on.
    The data source of each table is different (src0, src1 etc), how can I run these queries in parallel to improve performance?

    Thanks Mike.
    The async function works, from the log, I could see the queries are executed in parallel.
but the behavior is confusing: with the same input, sometimes it gives me the right result, and sometimes (especially when a few other applications are running on the machine) it throws the exception below:
    java.lang.IllegalStateException
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
         at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
         at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
         at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
         at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
         at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
         at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
         at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
         at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
         at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
         at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
         at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
         at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
         at RDSWS.getMetricAggNumber(RDSWS.jws:240)
         at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
         at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
         at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
Below is my code example. First I get data from all 12 queries, each enclosed in the fn-bea:async function; finally, I do a group-by aggregation over the whole data set. Is it possible that the exception occurs because some threads have not returned data yet, but the aggregation has already started?
The $metricName, $serviceName, $opName, and $soaDbRequest values are simply passed in from the operation parameters.
    let $METRIC_RESULT :=
            fn-bea:async(
                for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
                $SOA_POOL_METRIC
               fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
                $SOA_POOL_METRIC
             ... //12 similar queries
            for $Metric_data in $METRIC_RESULT    
            group $Metric_data as $Metric_data_Group        
            by   $Metric_data/ROLE_TYPE as $role_type_id  
            return
            <ns0:RawMetric>
                <ns0:endTime?></ns0:endTime>
                <ns0:target?>{$role_type_id}</ns0:target>
    <ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
    <ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
    <ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
    <ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
    </ns0:RawMetric>
Could you tell me why the result is unstable? Thanks!
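The suspicion in the question (aggregation starting before every async branch has delivered) is exactly the ordering a fan-out/join must guarantee. In plain Java, the same 12-way fan-out with a hard join point can be sketched as below; `queryAllShards` and the `Callable` list are illustrative stand-ins for the srcN data-source calls, not ALDSP API:

```java
import java.util.*;
import java.util.concurrent.*;

public class ShardFanOut {
    // Run the same query against every shard in parallel, then merge.
    static List<Integer> queryAllShards(List<Callable<List<Integer>>> shardQueries)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(shardQueries.size());
        try {
            // invokeAll blocks until every shard has answered, so the merge
            // below can never start before all partial results are in --
            // the ordering the async/group-by code above must also guarantee.
            List<Integer> merged = new ArrayList<>();
            for (Future<List<Integer>> f : pool.invokeAll(shardQueries)) {
                merged.addAll(f.get());
            }
            return merged;
        } finally {
            pool.shutdown();
        }
    }
}
```

If the intermittent IllegalStateException really comes from the aggregation consuming a stream a branch has already closed (or not yet produced), introducing an explicit join point like this, before the group-by, is the usual fix.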

  • How to improve performance of MediaPlayer?

    I tried to use the MediaPlayer with a On2 VP6 flv movie.
    Showing a video with a resolution of 1024x768 works.
Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s causes the video signal to lag a couple of seconds behind the audio signal. VLC, Media Player Classic, and a couple of other players have no problem with the video; only the FX MediaPlayer shows poor performance.
Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
Does somebody know a solution for these problems?
    Cheers
    masim

    duplicate thread..
    How to improve performance of attached query

  • How to improve Performance of the Statements.

    Hi,
I am using Oracle 10g. My problem is that when I execute and fetch records from the database, it takes a lot of time. I have created statistics as well, but to no avail. What do I have to do now to improve the performance of SELECT, INSERT, UPDATE, and DELETE statements?
Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine and Windows XP with 512 MB RAM on the client machine?
Please give me advice on improving performance.
Thank you...!

    What and where to change parameters and values? Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
    Anyone who advises you to change some parameter to some value without any more information shouldn't be listened to.
    Nicolas.
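    Before touching any init parameter, the usual first step for a thread like this is to refresh optimizer statistics and then look at what the optimizer actually plans to do. A minimal sketch for Oracle 10g follows; the schema, table, and column names are placeholders invented for illustration, not taken from the original post:

    ```sql
    -- Refresh optimizer statistics for one table (owner/table names are placeholders)
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'ORDERS');
    END;
    /

    -- Capture the optimizer's plan for a slow statement
    EXPLAIN PLAN FOR
    SELECT o.id, o.amount
    FROM   orders o
    WHERE  o.created_at > SYSDATE - 7;

    -- Display the plan just captured
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    If the plan shows full table scans on large tables for selective predicates, that usually points to a missing index or stale statistics rather than to init parameters.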

  • How to improve performance of the attached query

    Hi,
    How can I improve the performance of the query below? Please help; the explain plan is also attached.
    SELECT Camp.Id,
           rCam.AccountKey,
           Camp.Id,
           CamBilling.Cpm,
           CamBilling.Cpc,
           CamBilling.FlatRate,
           Camp.CampaignKey,
           Camp.AccountKey,
           CamBilling.billoncontractedamount,
           (SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
           rCam.AccountKey AS AccountKey
    FROM   Campaign Camp, rCamSit rCam, CamBilling, Site xSite
    WHERE  Camp.AccountKey = rCam.AccountKey
      AND  Camp.AvCampaignKey = rCam.AvCampaignKey
      AND  Camp.AccountKey = CamBilling.AccountKey
      AND  Camp.CampaignKey = CamBilling.CampaignKey
      AND  rCam.AccountKey = xSite.AccountKey
      AND  rCam.AvSiteKey = xSite.AvSiteKey
      AND  rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY')
                           AND to_date('01-01-2011', 'DD-MM-YYYY')
    GROUP BY rCam.AccountKey,
             Camp.Id,
             CamBilling.Cpm,
             CamBilling.Cpc,
             CamBilling.FlatRate,
             Camp.CampaignKey,
             Camp.AccountKey,
             CamBilling.billoncontractedamount
    Explain Plan :-
    Description Object_owner Object_name Cost Cardinality Bytes
    SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
    SORT AGGREGATE 1 13
    VIEW GEMINI_REPORTING 14 1 13
    HASH GROUP BY 14 1 103
    NESTED LOOPS 13 1 103
    HASH JOIN 12 1 85
    TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
    NESTED LOOPS 9 5 325
    HASH JOIN 7 1 40
    SORT UNIQUE 2 1 18
    TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
    INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
    TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
    INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
    TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
    TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
    INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1

    duplicate thread..
    How to improve performance of attached query
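    One concrete thing worth checking in the plan above: RCAMSIT is reached via the RCAMSIT_I index and then filtered, so a composite index covering the equality join keys plus the RmWhen range column may reduce the work. This is a hedged sketch only; the index name is invented, and the column order should be verified against the real data and plan before creating anything:

    ```sql
    -- Hypothetical composite index on rCamSit: equality join keys first,
    -- the range-scanned column (RmWhen) last
    CREATE INDEX rcamsit_acct_avcamp_when_ix
      ON rCamSit (AccountKey, AvCampaignKey, RmWhen);

    -- Then regenerate the plan for the original query and compare costs,
    -- e.g. for the date-range predicate on its own:
    EXPLAIN PLAN FOR
    SELECT COUNT(*)
    FROM   rCamSit
    WHERE  RmWhen BETWEEN TO_DATE('01-01-2009', 'DD-MM-YYYY')
                      AND TO_DATE('01-01-2011', 'DD-MM-YYYY');
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    Equality columns go first in the index because a B-tree range scan can only use columns up to and including the first range predicate.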


  • How to improve performance of query

    Hi all,
    How can I improve the performance of a query?
    Please send to:
    [email protected]
    Thanks in advance,
    bhaskar

    Hi,
    go through the following links on performance:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    http://www.asug.com/client_files/Calendar/Upload/ASUG%205-mar-2004%20BW%20Performance%20PDF.pdf
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
