Processing large ResultSets quickly or in parallel

How do I process a large ResultSet that contains purchase entries for, say, 20K users? Each user may have one or more purchase entries. The ResultSet is ordered by userid, and the other fields are itemname, quantity, and price.
Mine is a quad-processor machine.
Thanks.

You're going to need to provide a lot more details. For instance, is the slow part reading the data from the database, or the processing you are going to do on the data? If the former, then in order to work in parallel you probably need separate threads, each with its own ResultSet. If the latter, then you could parallelize the work by having one thread read the ResultSet and push the data onto a shared work queue from which multiple worker threads read. These are just a few of the possibilities. A minimal sketch of the queue approach follows.
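To make the queue idea concrete, here is a minimal producer/consumer sketch. The column names come from the question; the table name, JDBC URL, PurchaseRow holder, and POISON sentinel are hypothetical illustration, not a definitive implementation.

import java.sql.*;
import java.util.concurrent.*;

public class ParallelPurchaseProcessor {
    // Hypothetical holder for the fields named in the question.
    record PurchaseRow(int userId, String itemName, int quantity, double price) {}

    // Sentinel telling the workers that no more rows are coming.
    static final PurchaseRow POISON = new PurchaseRow(-1, "", 0, 0);

    public static void main(String[] args) throws Exception {
        int workers = Runtime.getRuntime().availableProcessors(); // 4 on a quad machine
        BlockingQueue<PurchaseRow> queue = new ArrayBlockingQueue<>(10_000); // bounded = backpressure
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        PurchaseRow row = queue.take();
                        if (row == POISON) break;
                        process(row); // the per-row work runs in parallel here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Single producer: stream the ResultSet into the queue.
        try (Connection con = DriverManager.getConnection("jdbc:example:url"); // hypothetical URL
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT userid, itemname, quantity, price FROM purchases ORDER BY userid")) {
            while (rs.next()) {
                queue.put(new PurchaseRow(rs.getInt(1), rs.getString(2),
                                          rs.getInt(3), rs.getDouble(4)));
            }
        }
        for (int i = 0; i < workers; i++) queue.put(POISON); // one sentinel per worker
        pool.shutdown();
    }

    static void process(PurchaseRow row) { /* application-specific work */ }
}

Note that since the resultset is ordered by userid, if all entries for one user must be processed together, the producer should group consecutive rows by userid and enqueue one batch per user instead of one row at a time.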

Similar Messages

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
I have a performance problem displaying large/huge resultsets of data on a Crystal report. The report takes about 4 minutes or more, depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
Java Reporting Component (JRC), Crystal Report Viewer (CRV)
    Firefox
    DETAILS
I use the CRXI thick client to build my report (.rpt) and then use it in my web application (webapp) under Jboss.
The user specifies the filter criteria to generate a report (date range etc.) and submits the request to the webapp. The webapp queries the database and gets a "resultset".
    I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of Crystal Report Viewer to display the report on browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
The performance problem is within the last step. I put logs everywhere and noticed that the database query doesn't take too long to return the resultset. Everything processes pretty quickly until I call the processHttpRequest of CRV. This method just hangs for a long time before displaying the report in the browser.
CRV runs pretty fast when the resultset is smaller, but for a large resultset it takes a very long time.
I do have subreports and use Crystal Report formulas on the reports; some of them are used for grouping as well. But I don't think subreports are the real culprit here, because I have some other reports that don't have any subreports, and they too get really slow displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
I have thought of some half-baked ideas.
A) Use external pagination and fetch data only for the current page being displayed. But for this, CRXI must allow me to create my own buttons (previous, next, last) so I can control the click event and fetch data accordingly. I tried capturing events by registering the event handler "addToolbarCommandEventListener" of CRV, but my listener gets invoked "after" the processHttpRequest method completes, which doesn't help.
Somehow I need to be able to control the UI by adding my own previous page, next page, and last page buttons and controlling their click events.
B) Automagically have CRXI use JavaScript functionality to allow browser-side page navigation. So maybe the first time it'll take 5 minutes to display the report, but once it's displayed, the user can go to any page without sending the request back to the server.
C) Try using Crystal Reports 2008. I'm open to using this version, but I couldn't figure out whether it has any features that can help me do external pagination, or anything else that can handle large resultsets.
D) Will using the Crystal Reports servers (cache server, application server, etc.) help in any way? I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer, etc., but I'm not sure whether any of these things are going to solve the issue.
    I'd appreciate it if someone can point me in the right direction.

Essentially the answer is to use smaller resultsets, or to pull from the database directly instead of using resultsets. If you page externally, fetch only the current page's rows, as sketched below.
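One way to implement the "smaller resultsets" advice is to page in the database query itself. A hedged JDBC sketch using Oracle-style ROWNUM pagination (the table, columns, and connection variable are hypothetical; other databases have LIMIT/OFFSET equivalents):

// Fetch one page of pageSize rows starting after offset.
int offset = 0, pageSize = 100;
String sql =
    "SELECT * FROM (" +
    "  SELECT t.*, ROWNUM rn FROM (" +
    "    SELECT order_id, order_date, amount FROM orders ORDER BY order_date" +
    "  ) t WHERE ROWNUM <= ?" +
    ") WHERE rn > ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setInt(1, offset + pageSize); // upper bound
    ps.setInt(2, offset);            // lower bound
    try (ResultSet page = ps.executeQuery()) {
        while (page.next()) {
            // hand only this page's rows to the report
        }
    }
}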

  • PL/SQL Dev query session lost on large resultsets after db update

    We have a problem with our PL/SQL Developer tool (www.allroundautomations.nl) since updating our Database.
So far we had Oracle DB 10.1.0.5 Patch 2 on W2k3, with XP clients using Instant Client 10.1.0.5 and PL/SQL Developer 5.16 or 6.05 to query our DB. This scenario worked well.
Now we have upgraded to Oracle 10g 10.1.0.5 Patch 25, and our PL/SQL Developer 5.16 or 6.05 (on IC 10.1.0.5) can still log on to the DB and query small tables. But as soon as the resultset reaches a certain size, the query on a table won't come to an end and keeps showing "Executing...". We can only press "BREAK", which results in "ORA-12152: TNS: unable to send break message" and "ORA-03114: not connected to ORACLE".
If I narrow the resultset down on the same table, it works as before.
If I watch the sessions on small resultset queries, I see the corresponding session, but on large resultset queries the session seems to close immediately.
To solve this issue I already tried installing the newest PL/SQL Developer 7.1.5 (trial) and/or a newer Instant Client version (10.2.0.4), neither of which solved the problem.
Is there a new option in 10.1.0.5 Patch 25 (or earlier) which closes sessions if the resultset gets too large over a slower internet connection?
By the way, using SQL*Plus from the Instant Client directory, or even Excel over ODBC on the same client, returns the full resultset without problems. Could this be some kind of timeout problem?
    Edit:
Here is a snippet of the trace file on the client right after executing the select statement. Some data seems to be retrieved and then it ends with these lines:
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 2D 20 49 6E 74 72 61 6E |-.Intran|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 74 2D 47 72 75 6E 64 |et-Grund|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 6C 61 67 65 6E 02 C1 04 |lagen...|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 02 C1 03 02 C1 0B 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 51 00 02 C1 03 02 C1 2D |Q......-|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 05 48 4B 4F 50 50 01 80 |.HKOPP..|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 03 3E 64 66 01 80 07 78 |.>df...x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 07 78 |.......x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 3B 02 C1 02 01 80 00 00 |;.......|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 00 00 00 00 00 00 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 01 80 15 0C 00 |....... |
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: got NSPTDA packet
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: NSPTDA flags: 0x0
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: what=1, bl=2001
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: cid=0, opcode=85, bl=0, what=0, uflgs=0x0, cflgs=0x3
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: rank=64, nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctx: state=8, flg=0x100400d, mvd=0
    (1992) [20-AUG-2008 17:13:00:953] nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: switching to application buffer
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: entry
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: recving a packet
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: entry
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: reading from transport...
    (1992) [20-AUG-2008 17:13:00:968] nttrd: entry
    Message was edited by:
    vhbtech

I found nothing in the \bdump alert.log or \bdump trace files. I only have the DEFAULT profile and everything is set to UNLIMITED there.
But \udump generates a trace file the moment I execute the query:
    Dump file <path>\udump\<sid>ora4148.trc
    Fri Aug 22 09:12:18 2008
    ORACLE V10.1.0.5.0 - Production vsnsta=0
    vsnsql=13 vsnxtr=3
    Oracle Database 10g Release 10.1.0.5.0 - Production
    With the OLAP and Data Mining options
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:898M/3071M, Ph+PgF:2675M/4967M, VA:812M/2047M
    Instance name: <SID>
    Redo thread mounted by this instance: 1
    Oracle process number: 33
    Windows thread id: 4148, image: ORACLE.EXE (SHAD)
    *** 2008-08-22 09:12:18.731
    *** ACTION NAME:(SQL Window - select * from stude) 2008-08-22 09:12:18.731
    *** MODULE NAME:(PL/SQL Developer) 2008-08-22 09:12:18.731
    *** SERVICE NAME:(<service-name>) 2008-08-22 09:12:18.731
    *** SESSION ID:(145.23131) 2008-08-22 09:12:18.731
    opitsk: network error occurred while two-task session server trying to send break; error code = 12152
This trace is only generated when a query with an expected large resultset fails. If I narrow down the resultset, no trace is written, and the query then works, of course.

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
Target is a "long-thin" table, i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat as VARCHAR2 for the most part.
These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
CASE_ID VARCHAR2(13)
COL001 VARCHAR2(10)
...
COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
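For illustration only, the pivot+validate step could be sketched as follows, here in Java/JDBC terms rather than the PL/SQL with dynamic SQL and BULK COLLECT actually used. The table layouts follow the structures described above; variableIds (the runtime column-to-variable mapping from the metadata) and validate() (the per-variable rules) are hypothetical stand-ins:

// Sketch of the short-fat -> long-thin pivot with batched inserts.
String insertSql = "INSERT INTO intermediate_long_thin "
    + "(case_num_id, variable_id, variable_value, status) VALUES (?, ?, ?, ?)";
try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery("SELECT * FROM short_fat_1");
     PreparedStatement ins = con.prepareStatement(insertSql)) {
    while (rs.next()) {
        long caseNumId = rs.getLong("CASE_NUM_ID"); // surrogate key set earlier
        for (int col = 1; col <= 250; col++) {
            String value = rs.getString(String.format("COL%03d", col));
            if (value == null) continue;            // unused column this week
            long variableId = variableIds[col - 1]; // resolved from metadata at runtime
            ins.setLong(1, caseNumId);
            ins.setLong(2, variableId);
            ins.setString(3, value);
            ins.setString(4, validate(variableId, value) ? "VALID" : "INVALID");
            ins.addBatch();
        }
        ins.executeBatch(); // one batch per case keeps memory bounded
    }
}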

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
I am trying to process large files using an ICO. I am on PI 7.3 and am using the new PI 7.3 feature that splits the input file into chunks.
I know that we cannot use mapping while using chunk mode.
While trying this, I noticed the points below:
1) I created a Data Type, Message Type, and Interfaces in ESR and used them in my scenario (no mapping was defined); the sender and receiver DTs were the same.
Result: The scenario did not work. It created only one chunk file (a .tmp file) and terminated.
2) I used a dummy interface in my scenario and it worked fine.
So, please confirm whether we should always use dummy interfaces in a scenario using chunk mode in PI 7.3, or is there something I am missing?
    Thanks in Advance,
    - Pooja.

    Hello,
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
The following limitations apply to chunk mode in the File Adapter. As per the screenshots in the blog, the split never considers the payload; it is just a binary split. So the following limitations apply:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
You are probably doing content conversion; that is why it is not working.
    Hope this helps,
    Mark
    Edited by: Mark Dihiansan on Mar 5, 2012 12:58 PM

  • Process large file using BPEL

My project has a requirement to process a large file (10 MB) all at once. In the project, the file adapter reads the file, then calls 5 other BPEL processes to do 10 different validations before delivering to the Oracle database. I can't use the debatching feature of the adapter because of the header and detail record validation requirements. I did some performance tuning (e.g. audit level to minimum, logging level to error, JVM size to 2GB, etc.) as per the performance tuning specified in the Oracle BPEL user guide. We are using a 4 CPU, 4GB RAM IBM AIX 5L server. I observed that the Receive activity at the beginning of each process takes a lot of time, while the other transient processes behave as expected.
    Following are statistics for receive activity per BPEL process:
    500KB: 40 Sec
    3MB: 1 Hour
Because we have 5 BPEL processes, a lot of time is wasted in receive activities.
I didn't try 10 MB so far, because of the poor performance figures for the 3 MB file.
Does anyone have any idea how to improve the performance of the initial receive activity of a BPEL process?
    Thanks
    -Simanchal

I believe the limit in SOA Suite is 7MB if you want to use the full payload and perform some kind of orchestration. Otherwise you need to do some kind of debatching, which you stated will not work.
SOA Suite is not really designed for your kind of use case, as it needs to process this file in memory; any transformation can increase this message between 3 and 10 times. If you are writing to a database, why can't you read the rows one by one?
If you want to perform this kind of action, have a look at ODI (Oracle Data Integrator). I also believe that OSB (AquaLogic) can handle files up to 200MB, so this can be an option as well, but it may require debatching.
    cheers
    James

  • How to process large data files in XI  ?  100 MB files ?

    Hi All
At present we have a scenario as follows:
It is File to IDoc. The problem is the size of the file.
We need to transfer a 100 MB file to the SAP R/3 system. How do we process such huge data?
Thanks in advance and regards,
    Rakesh

    Hi,
    In general, an extra sizing for XI memory consumption is not required. The total memory of the SAP Web Application Server should be sufficient except in the case of large messages (>1MB).
    To determine the memory consumption for processing large messages, you can use the following rules of thumb:
    Allocate 3 MB per process (for example, the number of parallel messages per second may be an indicator)
    Allocate 4 kB per 1kB of message size in the asynchronous case or 9 kB per 1kB message size in the synchronous case
Example: asynchronous concurrent processing of 10 messages with a size of 1 MB each requires 70 MB of memory: (3 MB + 4 * 1 MB) * 10 = 70 MB.
With mapping or content-based routing, where an internal representation of the message payload may be necessary, the memory requirements can be much higher (possibly exceeding 20 kB per 1 kB of message, depending on the type of mapping).
    The size of the largest message thus depends mainly on the size of the available main memory. On a normal 32Bit operating system, there is an upper boundary of approximately 1.5 to 2 GByte per process, limiting the respective largest message size.
    please check these links..
    /community [original link is broken]:///people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts
    Input Flat File Size Determination
    /people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp
    data packet size  - load from flat file
    How to upload a file of very huge size on to server.
    Please let me know , your problem is solved or not..
    Regards
    Chilla..

  • How to process Large Image Files (JP2 220MB+)?

    All,
    I'm relatively new to Java Advanced Imaging, so I need a little help. I've been working on a thesis that involves converting digital terrain data into X3D scenes for future use in military training and applications. Part of this work involves processing large imagery data to texture the previously mentioned terrain data. I have an image slicer that can handle rather large files (200MB+ jpeg files). But it can't seem to process jpeg 2000 data. Below is an excerpt from my code.
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public void testSlicer() {
    String fname = "file.jp2";
    Iterator<ImageReader> readers = ImageIO.getImageReadersByFormatName("jpeg2000");
    ImageReader imageReader = readers.next();
    try {
        ImageInputStream imageInputStream = ImageIO.createImageInputStream(new File(fname));
        imageReader.setInput(imageInputStream, true);

        ImageReadParam imageReadParam = imageReader.getDefaultReadParam();
        // Only read a portion of the file
        Rectangle rect = new Rectangle(0, 0, 1000, 1000);
        imageReadParam.setSourceRegion(rect);
        // Subsample every 4th pixel
        imageReadParam.setSourceSubsampling(4, 4, 0, 0);

        BufferedImage destBImage = imageReader.read(0, imageReadParam);
    } catch (IOException ex) {
        System.out.println("IO Exception: " + ex);
    }
}
    The images I am trying to read are in excess of 30000 pixels by 30000 pixels (15m resolution at 5 degrees latitude and 6 degrees longitude). I continually get an OutOfMemoryError, though I am pumping up the heap size to 16000MB when using the command line.
    Any help would be greatly appreciated.


  • Returning a Large ResultSet

At the moment we use our own queryTableModel to fetch data from the database. Although we use the traditional (looping on ResultSet.next()) method of loading the data into a vector, we find that large ResultSets (1000+ rows) take a considerable amount of time to load into the vector.
Is there a more efficient way of storing the ResultSet other than using a vector? We believe the addElement method constantly expanding the vector is the cause of the slowdown.
    Any tips appreciated.

    One more thing:
"We believe the addElement method constantly expanding the vector is the cause of the slowdown."
You are probably right, but this is easy to avoid: both Vector and ArrayList have a constructor in which you can specify the initial capacity, which could save much of the time spent growing the list. Vector also lets you specify a capacity increment, and collection classes such as HashSet take a load factor in addition to the initial capacity.
    Abraham.
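A minimal sketch of that suggestion, assuming java.util and java.sql imports and that the row count is known up front (e.g. from a prior SELECT COUNT(*); expectedRowCount is hypothetical):

// Pre-size the list so it never has to grow while loading the ResultSet.
List<Object[]> rows = new ArrayList<>(expectedRowCount);
while (resultSet.next()) {
    rows.add(new Object[] {
        resultSet.getObject(1), // one entry per column...
        resultSet.getObject(2)
    });
}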

  • Problem processing large message using dbadapter.

I have a process which is initiated by a dbadapter fetch from a table.
It works fine when there are few records, but when the number of records
is more than 6000 (more than 4MB) I get the errors below.
The process goes to the off state after these errors.
Does anybody have suggestions on how to process large messages?
    <2006-08-02 11:55:25,172> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "cube delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:36,473> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:42,689> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [OracleDB_ptt::receive(HccIauHdrCollection)] - JCA Activation Agent was unable to perform delivery of inbound message to BPEL Process 'bpel://localhost/default/IAUProcess~1.0/' due to: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:56:22,573> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
    com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    <2006-08-02 11:57:52,341> <ERROR> <default.collaxa.cube.ws> <Database Adapter::Outbound> <oracle.tip.adapter.db.InboundWork runOnce> Non retriable exception during polling of the database ORABPEL-11624 DBActivationSpec Polling Exception.
    Query name: [OracleDB], Descriptor name: [IAUProcess.HccIauHdr]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by javax.resource.ResourceException: ORABPEL-12509 Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:684)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: ORABPEL-12509
    Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:628)
         ... 9 more
    Caused by: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         ... 9 more
    .

Processing 6000 records in one shot is not best practice in BPEL; for that you would choose concepts like a data warehouse instead.
But you might want to process them in batch mode. So think about using the batch option in the DB adapter, and try defining MaxRaiseSize and MaxTransactionSize for your DB adapter (see the sketch after the link). Further explanation is here:
    http://download-west.oracle.com/docs/cd/B14099_19/integrate.1012/b25307/adptr_db.htm#CHDHAIHA
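For reference, these two properties are set on the DB adapter's activation spec in the generated WSDL. A hedged sketch, with attribute values purely illustrative and QueryName/DescriptorName taken from the error log above (verify the exact syntax against the linked documentation):

<!-- MaxRaiseSize: rows raised to BPEL per message.
     MaxTransactionSize: rows read per polling transaction. -->
<jca:operation
    ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    QueryName="OracleDB"
    DescriptorName="IAUProcess.HccIauHdr"
    MaxRaiseSize="10"
    MaxTransactionSize="100"
    PollingInterval="60"/>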

  • Problem while processing large files

    Hi
    I am facing a problem while processing large files.
I have a file which is around 72 MB, with more than 1 lakh (100,000) records. XI is able to pick up the file if it has 30,000 records. If the file has more than 30,000 records, XI picks up the file (and once it picks it up, it deletes the file), but I don't see any information under SXMB_MONI: no error, no success, no processing. It simply picks up and ignores the file. If I process the records separately, it works.
How do I process this file? Why is XI simply ignoring it? How do I solve this problem?
    Thanks & Regards
    Sowmya.

    Hi,
XI picks up the file subject to the maximum processing limit as well as the memory and resource consumption of the XI server.
Processing a file of 72 MB is on the higher side. It increases the memory utilization of the XI server, which may fail to process it at the peak point.
You should divide the file into small chunks and allow multiple instances to run. It will be faster and will not create any problems.
    Refer
    SAP Network Blog: Night Mare-Processing huge files in SAP XI
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Processing huge file loads through XI
    File Limit -- please refer to SAP note: 821267 chapter 14
    File Limit
    Thanks
    swarup
    Edited by: Swarup Sawant on Jun 26, 2008 7:02 AM

  • Processing large files on Mac OS X Lion

    Hi All,
I need to process large files (a few GB) from a measurement. The data files contain lists of measured events. I process them event by event, and the result is relatively small and does not occupy much memory. The problem I am facing is that Lion "thinks" I want to use the large data files again later and puts them into cache (inactive memory). The inactive memory grows while reading the data files, up to the point where the whole memory is full (8GB on a MacBook Pro mid 2010) and it starts swapping a lot. That of course slows down the computer considerably, including the process that reads the data.
If I run the "purge" command in Terminal, the inactive memory is cleared and the machine becomes more responsive again. The question is: is there any way to prevent Lion from pushing running programs from memory into swap in favor of a useless hard-drive cache?
    Thanks for suggestions.

It's been a while, but I recall using the "dd" command ("man dd" for info) to copy specific portions of data from one disk, device or file to another (in 512-byte increments). You might be able to use it in a script to fetch parts of your larger file as you need them; dd can also read from and write to standard input/output, so it's easy to get data and store it in a temporary container like a file or even a variable. See the example below.
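For example (file names and numbers purely illustrative), this extracts a 1 MB slice starting 2 GB into a file, in 512-byte blocks (2 GB = 4194304 blocks skipped, 1 MB = 2048 blocks copied):
dd if=bigdata.bin of=slice.bin bs=512 skip=4194304 count=2048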
Otherwise, if you can afford it, and you might with 8 GB of RAM, you could try to disable swapping (paging to disk) altogether and see if that helps...
    To disable paging, run the following command (in one line) in Terminal and reboot:
    sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    To re-enable paging, run the following command (in one line) in Terminal:
    sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    Hope this helps!

  • Processing large xml file (500mb)? break into small part? load into jtree

    hi,
I'm doing an assignment to process a large XML file (500 MB) and
load it into a JTree using Java.
Can someone advise me on the algorithm to do this?
How can I load a 500 MB XML file into a JTree without the system hanging?
How do I break up my file and do the loading?

1. Is the file schema-based binary XML?
2. The limits are dependent on the storage model and character set.
3. For all non-XML content, the current limit is 4 GB (where that is bytes, not characters). So for character content in an AL32UTF8 database the limit is 2 GB.
4. For XML content stored as a CLOB, the limit is the same as for character data (2 GB/4 GB), dependent on the database character set.
5. For schema-based XML content stored in object-relational storage, the limit is determined by the complexity and structures defined in the XML schema.

  • Best way to return large resultsets

    Hi everyone,
    I have a servlet that searches a (large) database by complex queries and sends the results to an applet. Since the database is quite large and the queries can be quite general, it is entirely possible that a particular query can generate a million rows.
    My question is, how do I approach this problem from a design standpoint? For instance, should I send the query without limits and get all the results (possibly a million) back? Or should I get only a few rows, say 50,000, at a time by using the SQL limit construct or some other method? Or should I use some totally different approach?
    The reason I am asking this question is that I have never had to deal with such large results and the expertise on this group will help me avoid some of the design pitfalls at the very outset. Of course, there is the question of whether the servlet should send so may results at once to the applet, but thats probably for another forum.
    Thanks in advance,
    Alan

If you are using one of the premier databases (Oracle, SQL Server, Informix), I am fairly confident that it would be best to allow the database to manage both the efficiency of the query and the efficiency of the transport.
    QUERY EFFICIENCY
Query efficiencies in all databases are optimized by the DBMS to a general algorithm. That means there are assumptions made by the DBMS as to the 'acceptable' number of rows to process, the number of tables to join, the number of rows that will be returned, etc. These general algorithms do an excellent job on 95+% of queries run against the database. However, if you fall outside the bounds of these general algorithms, you will run into escalating performance problems. Luckily, SQL syntax provides enormous flexibility in how to get your data from the database, and you can code the SQL to 'help' the database do a better job when SQL performance becomes a problem. At the extreme, it is possible that you will issue a query that overwhelms the database and the physical resources available to it (memory, CPU, I/O channels, etc). Sometimes this can happen even when a ResultSet returns only a single row; in that case, it is the intermediate processing (table joins, sorts, etc) that overwhelms the resources. You can help manage the memory resource issue by purchasing more memory (obviously), or by re-coding the SQL to apply a more efficient algorithm (make the optimizer do a better job), or you may, as a last resort, have to break the SQL up into separate SQL statements using a more granular approach (this is your "where id < 1000"). BTW: if you do have to use this approach, in most cases using BETWEEN is more efficient.
    TRANSPORT
Most if not all JDBC drivers return the ResultSet data in 'blocks' of rows that are delivered on an as-needed basis to your program. Some databases allow you to specify the size of these 'blocks' to aid in the optimization of your batch-style processes (a sketch follows below). Assuming that this is true for your JDBC driver, you cannot manage it better than the JDBC driver implementation, so you should not try. In all cases, you should allow the database to handle as much of the data manipulation and transport logic as possible. The vendors have thousands of programmers working overtime to optimize that code; they just have you outnumbered, and while it's possible that you can code an efficiency, you may then be unable to take advantage of future efficiencies within the database because of your proprietary optimizations.
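To illustrate the block-size hint: most JDBC drivers accept a fetch-size hint on the statement. A sketch, with the query and fetch size hypothetical (the optimal value is driver- and workload-specific):

// A forward-only, read-only statement lets the driver stream rows in blocks.
try (Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                           ResultSet.CONCUR_READ_ONLY)) {
    stmt.setFetchSize(500); // rows per network round trip; a hint, driver-dependent
    try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM big_table")) {
        while (rs.next()) {
            // stream each row onward without buffering the whole resultset
        }
    }
}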
You have some interesting and important decisions to make. I'm not sure how much control of the architecture is available, but you may want to consider alternatives to moving these large amounts of data around through the JDBC architecture. Is it possible to store this information on the server and have it fetched using FTP or some other simple transport? Far less CPU usage, and more efficient use of your bandwidth.
So in case it wasn't clear: no, I don't think you should break up the SQL initially. If it were me, I would probably spend the time putting out some metric-based information to let you judge where the slowdowns are, when or if any occur. With something like this, I have seen I.T. spend hours and hours tuning SQL just to find out that the network was the problem (or vice versa). I would also go ahead and run the expected queries outside the application and determine what kinds of problems there are before coding of the application is finished.
    Hey, this got a bit wordy, sorry. Hopefully there is something in here that can help you...Joel

  • DbKona and large resultsets

    Hi,
I'm working on an application that will allow a user to run ad-hoc queries against
a database. These queries can return huge resultsets (between 50,000 and 450,000
rows). Per our requirements, we can't limit (through the database, anyway) the
number of rows returned from a random query. In trying to deal with the large
number of results, I've been looking at the dbKona classes. I can't find a solution
with the regular JDBC stuff, and CachedRowSet is just not meant to handle that
much data.
While running a test (based on an example given in the dbKona manual), I keep
running into out-of-memory issues. The code follows:
QueryDataSet qds = new QueryDataSet(resultSet);
while (!qds.allRecordsRetrieved()) {
    DataSet currentData = qds.fetchRecords(1000);
    // Process the 1,000 records . . .
    currentData.clearRecords();
    currentData = null;
}
qds.clearRecords();
I'm currently not doing any processing with the records returned for this trivial
test. I just get them and clear them immediately to see if I can actually get
them all. On a resultset of about 45,000 rows I get an out-of-memory error about
halfway through the fetches. Are the records still being held in memory? Am
I doing something incorrectly?
    Thanks for any help,
    K Lewis

I think I found the problem. Left over from an old test, the Statement object I made returned
a scrollable ResultSet. I couldn't find a restriction on this immediately in
the docs (or maybe it's just a problem with the Oracle driver?). As soon as I moved
back to the default type of ResultSet (FORWARD_ONLY), I was able to process
150,000 records just fine.
    "Sree Bodapati" <[email protected]> wrote:
    Hi
    Can you tell me what JDBC driver you are using?
    sree
    "K Lewis" <[email protected]> wrote in message
    news:[email protected]..
    Hi,
    I'm working on a application that will allow a user to run ad-hoc queriesagainst
    a database. These queries can return huge resultsets (between 50,000&
    450,000
    rows). Per our requirements, we can't limit (through the databaseanyway)
    the
    number of rows returned from a random query. In trying to deal withthe
    large
    number of results, I've been looking at the dbKona classes. I can'tfind
    a solution
    with the regular JDBC stuff and CachedRowSet is just not meant to handlethat
    much data.
    While running a test (based on an example given in the dbKona manual),I
    keep
    running into out of memory issues. The code follows:
    QueryDataSet qds = new TableDataSet(resultSet);
    while (!qds.allRecordsRetrieved()) {
    DataSet currentData = qds.fetchRecords(1000);
    // Process the hundred records . . .
    currentData.clearRecords();
    currentData = null;
    qds.clearRecords();
    I'm currently not doing any processing with the records returned forthis
    trivial
    test. I just get them and clear them immediately to see if I can actuallyget
    them all. On a resultset of about 45,000 rows I get an out of memoryerror about
    halfway through the fetches. Are the records still being held in memory?Am
    I doing something incorrectly?
    Thanks for any help,
    K Lewis
