How do I handle large resultsets in CRXI without a performance issue?

Hello -
Problem Definition
I have a performance problem displaying a large/huge resultset of data on a Crystal report.  The report takes about 4 minutes or more, depending on the resultset size.
How do you handle large resultsets in Crystal Reports without a performance issue?
Environment
Crystal Reports XI
Apache Web Server 2.x, JBoss 4.2.3, Struts
Java Reporting Component (JRC), Crystal Report Viewer (CRV)
Firefox
Details
I use the CRXI thick client to build my report (.rpt) and then use it in my web application (webapp) under JBoss.
The user specifies the filter criteria to generate a report (date range, etc.) and submits the request to the webapp.  The webapp queries the database and gets a "resultset".
I initialize the JRC and CRV according to all the specifications, and finally call the "processHttpRequest" method of the Crystal Report Viewer to display the report in the browser.
So.....
- Request received to generate a report with a filter criteria
- Query DB to get resultset
- Initialize JRC and CRV
- finally display the report by calling
    reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
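Expanded, the flow in my action class looks roughly like this (a trimmed sketch; the report path and table name/alias are placeholders, and exception handling is omitted):

    import com.crystaldecisions.report.web.viewer.CrystalReportViewer;
    import com.crystaldecisions.reports.sdk.ReportClientDocument;
    import javax.servlet.http.*;
    import java.sql.ResultSet;

    public class ReportAction {
        void render(HttpServletRequest request, HttpServletResponse response,
                    ResultSet resultSet) throws Exception {
            ReportClientDocument doc = new ReportClientDocument();
            doc.open("/reports/sales.rpt", 0);                 // placeholder .rpt path

            // Push the pre-fetched JDBC resultset into the report.
            doc.getDatabaseController().setDataSource(resultSet, "SALES", "SALES");

            CrystalReportViewer viewer = new CrystalReportViewer();
            viewer.setReportSource(doc.getReportSource());

            // This is the call that hangs on large resultsets.
            viewer.processHttpRequest(request, response,
                    request.getSession().getServletContext(), null);
        }
    }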
The performance problem is within the last step.  I put logs everywhere and noticed that the database query doesn't take too long to return the resultset.  Everything processes pretty quickly until I call processHttpRequest on the CRV.  That method just hangs for a long time before displaying the report in the browser.
The CRV runs pretty fast when the resultset is smaller, but for a large resultset it takes a very long time.
I do have subreports and use Crystal formulas on the reports; some of them are used for grouping as well.  But I don't think subreports are the real culprit here, because I have some other reports that don't have any subreports, and they too get really slow when displaying large resultsets.
Solutions?
So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
I have thought of some half-baked ideas.
A) Use external pagination and fetch data only for the current page being displayed.  But for this, CRXI must allow me to create my own buttons (previous, next, last) so I can control the click events and fetch data accordingly.  I tried capturing events by registering the "addToolbarCommandEventListener" event handler of the CRV, but my listener gets invoked after the processHttpRequest method completes, which doesn't help.
Somehow I need to be able to control the UI by adding my own previous-page, next-page, and last-page buttons and handling their click events.
B) Automagically have CRXI use JavaScript functionality to allow browser-side page navigation.  So maybe the first time it will take 5 minutes to display the report, but once it's displayed, the user can go to any page without sending a request back to the server.
C) Try Crystal Reports 2008.  I'm open to using this version, but I couldn't figure out whether it has any features that can help me do external pagination, or anything else for handling large resultsets.
D) Will using the Crystal Reports servers (cache server, application server, etc.) help in any way?  I read a little about the Crystal Page Viewer, Interactive Viewer, Part Viewer, etc., but I'm not sure whether any of these will solve the issue.
I'd appreciate it if someone can point me in the right direction.

Essentially, the answer is to use smaller resultsets, or to have Crystal Reports pull from the database directly instead of handing it a pre-fetched resultset.

Similar Messages

  • Handling large ResultSets

    I want to retrieve about 30 rows at a time from our DB2/AS400. The table contains over 4,000,000 rows. I would like to begin at the first row and drag 30 rows over the network, then get the next 30 if the user requests them. I know the answer is to use cursors, but I cannot use those statements within my code on the AS400. WebSphere Studio allows me to create JSPs using a <tsx:repeat> tag to iterate over the result set, but the instructions on using it are pretty vague.
    Can anyone direct me to some informative sites with examples or recommend a way to go about this?

    That would be fantastic and my ideal approach, but the manager of the department wants to keep the functionality of what they presently use, which he and his team wrote 20 years ago in RPG, with lovely green screens. They scroll 10 rows at a time, jump to the top or the bottom of the table, and type in the first two, three, or four letters of the search parameter to get results, which can also be scrolled. Some tables are even worse; one contains over 10 million rows.
    We have a lot of those green-screen applications in our AS/400 systems too. So I can tell you (and your manager should be able to confirm this) that the "subfiles" they scroll cannot contain more than 9,999 records. But in real life, even in our AS/400 environment, nobody ever starts at the beginning of our customer file (which does have more than 9,999 records) and scrolls through it looking for something. They put something into the search fields first.
    So displaying the first 10 records of the file before allowing somebody to enter the search criteria is pointless. And jumping to the end of the table is pointless too -- unless the table is ordered by date and you want to get recent transactions, in which case you should be sorting it by descending date anyway. My point is that those AS/400 programs were written that way because it was easy to write them that way, not necessarily because people would ever use those features. When you have hundreds of tables (as we do), it's easier just to copy and paste an old program to produce a maintenance program for a new table than it is to start from scratch and ask the users what they really need. That's why all the programs look alike there. It's not because the requirements are all the same, it's because it's easier for the programmers to write them.
    Here's another example: Google. When you send it a query it comes back with something like "Results 1 - 10 of about 2,780,000". But you can't patiently page through all of those 2,780,000 results: Google only saves the first 1000 for you to look at, and won't show you more than that.
    So I agree, a program that's simply designed to let somebody page through millions of records needs to be redesigned. If you want to write a generic program that lets people page through small files (less than 1000 records, let's say) there's nothing wrong with that, but your users will curse you if you make them use it for large files.
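    If you do need to let users page through a big table 30 rows at a time, a keyset-style query avoids holding a cursor open between requests. Assuming your DB2 version supports FETCH FIRST n ROWS ONLY, a sketch (table and column names are invented) would look like this:

        import java.sql.*;
        import java.util.*;

        public class KeysetPaging {
            static final int PAGE_SIZE = 30;

            // Fetch the next page of names after the last key the user saw.
            static List<String> nextPage(Connection con, String lastName) throws SQLException {
                String sql = "SELECT name FROM customers WHERE name > ? "
                           + "ORDER BY name FETCH FIRST " + PAGE_SIZE + " ROWS ONLY";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, lastName == null ? "" : lastName);
                    List<String> page = new ArrayList<>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) page.add(rs.getString(1));
                    }
                    return page;  // remember the last name returned; it is the key for the next request
                }
            }
        }

    This also pairs naturally with the search-first design above: the user's typed prefix becomes the starting key.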

  • How to convert a large number to hex without truncating to 32-bit.

     I am trying to convert a very large number to a hexadecimal string. The number gets truncated to 32 bits, which is not what I want. For example, the number 28037546508295 (double) should be 0x198000000007. LabVIEW truncates it, and the resulting string is 0x7FFFFFFF, using Number To Hex String.vi. I am stuck. Thanks.

    You can split your dbl into two 32-bit integers using quotient & remainder (divide by 2^32 followed by "toU32").
    Now format each with %08x and concatenate the strings.
    Remove leading zeroes if needed.
    (Sorry, posting via phone. I can show an example later)
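    For reference, the same split-and-format idea spelled out in code (a sketch outside LabVIEW, using the example number from the question):

        public class DoubleToHex {
            public static void main(String[] args) {
                double value = 28037546508295.0;  // integer-valued, fits in a double's 53-bit mantissa
                long whole = (long) value;
                long hi = whole / 4294967296L;    // quotient: upper 32 bits (divide by 2^32)
                long lo = whole % 4294967296L;    // remainder: lower 32 bits
                // Pad the low half to 8 hex digits so the halves concatenate correctly.
                System.out.println(String.format("%x%08x", hi, lo));  // prints 198000000007
            }
        }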
    LabVIEW Champion. Do more with less code and in less time.

  • How to resolve most Oracle SQL and PL/SQL performance issues with the help of a quick checklist/guidelines?

    Please go through the important checklist/guidelines below to identify the issue in any performance case and reach a resolution in no time.
    Checklist for Quick Performance Problem Resolution
    - Get trace, code, and other information for the given performance case:
              - latest code from the production environment
              - trace (SQL queries, statistics, row source operations with row counts, explain plan, all wait events)
              - program parameters and their frequently used values
              - run frequency of the program
              - existing run time/response time in production
              - business purpose
    - Identify the most time-consuming SQL (anything taking more than 60% of program time) using trace and code analysis
    - Check that all mandatory parameters/bind variables map directly to index columns of the large transaction tables, without any functions applied
    - Identify the most time-consuming operation(s) using the Row Source Operation section
    - Study how the program parameter inputs map directly into the SQL
    - Identify all input bind parameters used in the SQL
    - Is the SQL query returning a large number of records for the given inputs?
    - Which are the large tables, and which of their columns are mapped to input parameters?
    - Which operation scans the highest number of records in the Row Source Operation/Explain Plan?
    - Is the Oracle cost-based optimizer using the right driving table for the given SQL?
    - Check the time-consuming index on the large table and measure its selectivity
    - Study the WHERE clause for input parameters mapped to tables and columns, to verify correct/optimal index usage
    - Is the correct index being used for all large tables?
    - Is there any full table scan on large tables?
    - Is there any unwanted table used in the SQL?
    - Evaluate the join conditions on large tables and their columns
    - Is the FTS on a large table caused by the use of non-index columns?
    - Is there any implicit or explicit conversion preventing an index from being used?
    - Are the statistics of all large tables up to date?
    Quick Resolution Tips
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML, instead of row-by-row processing
    2) Use data caching techniques/options to cache static data
    3) Use pipelined table functions whenever possible
    4) Use global temporary tables or materialized views to process complex records
    5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
    6) Use EXTERNAL tables to build interfaces rather than creating a custom table and program to load and validate the data
    7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of the explain plan
    8) Follow Oracle PL/SQL best practices
    9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
    12) Review the join conditions in the existing query's explain plan
    13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
    14) Avoid applying SQL functions to index columns
    15) Use appropriate hints to guide the Oracle CBO to choose the best plan and reduce response time
    Thanks
    Praful

    I understand you were trying to post something helpful to people, but sorry, this list is appalling.
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML, instead of row-by-row processing
    No, use pure SQL.
    2) Use data caching techniques/options to cache static data
    No, use pure SQL, and the database and operating system will handle caching.
    3) Use pipelined table functions whenever possible
    No, use pure SQL.
    4) Use global temporary tables or materialized views to process complex records
    No, use pure SQL.
    5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
    No, use pure SQL.
    6) Use EXTERNAL tables to build interfaces rather than creating a custom table and program to load and validate the data
    Makes no sense.
    7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of the explain plan
    What about using the execution trace?
    8) Follow Oracle PL/SQL best practices
    Which are?
    9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
    You mean design your database and queries properly?  And table scanning is not always bad.
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    It depends on whether that is necessary or not.
    11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
    No. Consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan.  There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of the data, as well as the volumes of data being searched and the most common search requirements.
    12) Review the join conditions in the existing query's explain plan
    Well, if you don't have your join conditions right then your query won't work, so that's obvious.
    13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
    No.  Oracle recommends you do not use hints for query optimization (it says so in the documentation).  Only certain hints, such as APPEND, which relate to specific operations like inserting data, are generally acceptable.  Oracle recommends you use the query optimization tools to help tune your queries rather than use hints.
    14) Avoid applying SQL functions to index columns
    Why?  If there's a need for a function-based index, then it should be used.
    15) Use appropriate hints to guide the Oracle CBO to choose the best plan and reduce response time
    See 13.
    See 13.
    In short, there are no silver bullets for dealing with performance.  Each situation is different and needs to be evaluated on its own merits.
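    To make the "use pure SQL" point concrete, here is a minimal JDBC sketch (the connection details and the orders table are invented). The set-based statement does in one round trip what the loop does with one round trip per row:

        import java.sql.*;

        public class PureSqlVsRowByRow {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "password")) {

                    // Row-by-row: fetch ids, then issue one UPDATE per row (slow).
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery("SELECT id FROM orders WHERE status = 'OPEN'");
                         PreparedStatement up = con.prepareStatement(
                             "UPDATE orders SET status = 'CLOSED' WHERE id = ?")) {
                        while (rs.next()) {
                            up.setLong(1, rs.getLong(1));
                            up.executeUpdate();
                        }
                    }

                    // Pure SQL: one statement; the optimizer handles the rest.
                    try (Statement st = con.createStatement()) {
                        st.executeUpdate("UPDATE orders SET status = 'CLOSED' WHERE status = 'OPEN'");
                    }
                }
            }
        }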

  • Processing large ResultSets quickly or in parallel

    How do I process a large ResultSet that contains the purchase entries of, say, 20K users? Each user may have one or more purchase entries. The resultset is ordered by userid, and the other fields are itemname, quantity, and price.
    Mine is a quad processor machine.
    Thanks.

    You're going to need to provide a lot more details. For instance, is the slow part reading the data from the database, or the processing that you are going to do on the data? If the former, then in order to do work in parallel you probably need separate threads, each with its own resultset. If the latter, then you could parallelize the work by having one thread read the resultset and push the data onto a shared work queue from which multiple worker threads read. These are just a few of the possibilities.
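    A minimal sketch of that reader-plus-workers approach (the row type, the per-entry processing, and the connection details are placeholders):

        import java.sql.*;
        import java.util.concurrent.*;

        public class ParallelResultSetProcessing {
            static final String[] POISON = new String[0];   // stop signal for workers

            public static void main(String[] args) throws Exception {
                BlockingQueue<String[]> queue = new ArrayBlockingQueue<>(10_000);
                int workers = Runtime.getRuntime().availableProcessors();  // e.g. 4 on a quad
                ExecutorService pool = Executors.newFixedThreadPool(workers);
                for (int i = 0; i < workers; i++) {
                    pool.submit(() -> {
                        try {
                            for (String[] row; (row = queue.take()) != POISON; ) {
                                process(row);                // the per-entry work goes here
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }

                // One reader thread (here, main) streams the resultset onto the queue.
                try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pw");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT userid, itemname, quantity, price FROM purchases ORDER BY userid")) {
                    while (rs.next()) {
                        queue.put(new String[] { rs.getString(1), rs.getString(2),
                                                 rs.getString(3), rs.getString(4) });
                    }
                }
                for (int i = 0; i < workers; i++) queue.put(POISON);  // one stop signal per worker
                pool.shutdown();
            }

            static void process(String[] row) { /* placeholder */ }
        }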

  • How to handle large data in file adapter

    We have a scenario Proxy -> PI -> File Server using the file adapter.
    File adapter is using FCC for conversion.
    Recently we went live with wave 2 products, and suddenly this interface has an increased volume of messages, due to which the file adapter is not performing well: PI slows down, or we get frequent disconnects from the file server. As a result, we either get duplicate records in the file or the file is created in the wrong format.
    The file size is somewhere around 4.07 GB, which I also think is quite high for PI to handle.
    Can anybody suggest how we can handle such large data?
    Regards,
    Vikrant

    Check this Blog for Huge File Processing:
    Night Mare-Processing huge files in SAP XI
    However, you can also take a look at this blog about high-volume messages:
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    PI Performance Tuning Best Practice:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • How to handle a large result set from a SQL query

    Hi,
    I have a question about how to handle a large result set from a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter. If I don't specify it, it by default returns only 100 records in the query result. If I specify it, then the result is limited to that specific number.
    Is there any way to get around this row count issue? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain. 
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on data of relevance, not forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model so that you only deal with one time "slice" per request (e.g. an hour, a day, etc.) and provide a scrolling window.  This is commonly how large datasets are dealt with in applications.
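    A rough sketch of that time-slice idea (table and column names are hypothetical):

        import java.sql.*;
        import java.util.*;

        public class TimeSlicePaging {
            // Fetch one time "slice" per request instead of the whole table.
            static List<Object[]> fetchSlice(Connection con, Timestamp from, Timestamp to)
                    throws SQLException {
                String sql = "SELECT event_time, tag, value FROM measurements "
                           + "WHERE event_time >= ? AND event_time < ? ORDER BY event_time";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setTimestamp(1, from);
                    ps.setTimestamp(2, to);
                    List<Object[]> rows = new ArrayList<>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            rows.add(new Object[] { rs.getTimestamp(1), rs.getString(2), rs.getDouble(3) });
                        }
                    }
                    return rows;
                }
            }
        }

    The UI then just moves the [from, to) window as the user scrolls.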
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick

  • How to handle large heap requirement

    Hi,
    Our application requires a large amount of heap memory to load data into memory for further processing.
    The application is load-balanced, and we want to share the heap across all servers so one server can use the heap of another server.
    Server1 and Server2 have 8 GB of RAM, and Server3 has 16 GB of RAM.
    If a request comes to Server1 and it requires more heap memory to load data, can Server1 use Server3's heap memory in this scenario?
    Is there any mechanism/product which allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?
    Thanks,
    Atul

    user13640648 wrote:
    Hi,
    Our application requires a large amount of heap memory to load data into memory for further processing.
    The application is load-balanced, and we want to share the heap across all servers so one server can use the heap of another server.
    Server1 and Server2 have 8 GB of RAM, and Server3 has 16 GB of RAM.
    If a request comes to Server1 and it requires more heap memory to load data, can Server1 use Server3's heap memory in this scenario?
    Is there any mechanism/product which allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?

    That isn't how you design it (based on your brief description).
    For any transaction A you need a set of data X.
    For another transaction B you need a set of data Y which might or might not overlap with X.
    The set of data (X or Y) is represented by discrete hunks of data (form is irrelevant) which must be loaded.
    One can preload the server with this data or do a load on demand.
    Once in memory it is cached.
    One can refine this further with alternative caching strategies that define when loaded data is unloaded and how it is unloaded.
    JEE servers normally support this in a variety of forms. But one can custom code it as well.
    JEE servers can also replicate cached data across server instances. Custom code can do this but it is more complicated than doing the custom caching.
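    As a toy illustration of that load-on-demand caching (pure JDK; the eviction here is a crude LRU rather than a tuned unload strategy):

        import java.util.*;
        import java.util.function.Function;

        public class DataCache<K, V> {
            private final Map<K, V> cache;

            public DataCache(final int maxEntries) {
                // accessOrder = true: iteration order becomes least-recently-used first.
                this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
                    @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > maxEntries;          // the "unload" strategy
                    }
                };
            }

            public synchronized V get(K key, Function<K, V> loader) {
                return cache.computeIfAbsent(key, loader);   // load on demand, then cache
            }
        }

    Usage would be something like cache.get(customerId, id -> loadFromDb(id)).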
    A load balanced system exists for performance and failover scenarios.
    Obviously in a failover situation a "shared heap" would fail completely (as asked about) because the other server would be gone.
    One might also need to support very large data sets. In that case something like Memcached (google for it) can be used. There are commercial solutions in this space as well. This allows for distributed caching solutions which can be scaled.

  • How to handle large result sets?

    Hi All,
    I have a large result set to be displayed to the user using JSPs. The problem is that the result set is too big, so I can't display all the records in a single push. I want to show the results page by page, say 25 per page. Now, for every page I would have to fetch data from the database, which means many database calls, which is not advisable. Or I can cache the data in a CachedRowSet to reduce database calls, but then I have to store all the data in memory, which is not a good solution for very large data sets. Can anybody suggest a solution to this problem?

    The best thing for you to do is to implement paging logic in conjunction with a scrollable resultset (JDBC 2.0+).
    The logic would go like this assuming 30 rows per page:
    - keep track of which page the user is on (e.g. page 3)
    - issue the full sql
    - scroll thru only the rows in the current page (e.g. rows 90-120)
    - copy the page's rows to value objects
    - close the resultset, statement, and connection
    In the above example, you would scroll to row 90 using rs.absolute(90).
    The efficiency comes from the fact that you're using a scrollable resultset. By using this, only the rows that you scroll through are extracted from the database. I performed some simple testing with my data, and the scrollable resultset was about 10x faster.
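    Here is that logic as a compact sketch (the query and connection are placeholders):

        import java.sql.*;
        import java.util.*;

        public class ScrollablePaging {
            static final int PAGE_SIZE = 30;

            static List<String> fetchPage(Connection con, int page) throws SQLException {
                try (Statement st = con.createStatement(
                         ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                     ResultSet rs = st.executeQuery("SELECT name FROM customers ORDER BY name")) {
                    List<String> rows = new ArrayList<>();
                    // Jump straight to the first row of the requested page...
                    if (rs.absolute((page - 1) * PAGE_SIZE + 1)) {
                        do {
                            rows.add(rs.getString(1));       // ...copying rows into value objects
                        } while (rows.size() < PAGE_SIZE && rs.next());
                    }
                    return rows;  // resultset and statement close here
                }
            }
        }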
    Good luck!

  • How to handle large data during acquisition? BNC 2110

    I want to acquire data using a BNC-2110; I am writing software in VB 6. We will use 3 channels. We are supposed to scan about 10,000 points before AcquiredData is triggered. In all, we will need to scan 10,000 * 1,000 * 1,000 points before the data is put into a binary file. Can anybody let me know how to handle this large number of points?

    Hello Vjuno,
    In order to acquire 10,000,000,000 points you are going to have to be streaming this data to your hard drive as you go.  To do this you'll need to write the data you read to a file each loop iteration.  In general it is a good practice to make your "samples to read" at least 10% of your sample rate in seconds to avoid overflowing buffers, however, depending on your computer you may be able to go faster.  I made an example program in LabVIEW and was able to read 10,000 points at a time from each of 3 analog inputs at 333MHz and write the values to file without overflowing a buffer.  However, even opening a web browser while the code was running was enough to delay the VI long enough for the buffer to overflow.
    You can use the DAQmx Configure Input Buffer call to increase the buffer size and account for spikes in CPU usage from other processes, and you should also monitor the "Available Samples Per Channel" property to make sure you aren't steadily gaining samples in your buffer.  Since you want to acquire 10 billion samples at 1MHz this acquisition will take several hours; if you're not able to keep the buffer empty then it will become apparent before the end of your acquisition.  By monitoring the samples in the buffer you can tell if you're pulling the samples out fast enough, if you find that this number is steadily increasing then you should either reduce the sample rate or increase the number of samples to read each time you call the DAQmx Read.
    In my example program I used a write to TDMS (binary) file and a PCI-6251.
    I hope this helps, and have a good night.
    Cheers,
    Brooks

  • How to handle large messages broken into segments in the sender JMS adapter

    Hi,
    I have been asked, 'How can we handle large messages that arrive broken up in the sender JMS adapter?' Let's say I am getting some messages as-is, while the messages that are large are broken into small segments.
    Do we need to use a module? If so, is there any standard module that we can use to handle this type of scenario?
    thank you in advance.
    babu

    http://biemond.blogspot.com/2009/10/jms-request-reply-interaction-pattern.html
    If you check Edwin's blog, see his comments at the bottom;
    there he gives a suggestion on how to add the selector properties to filter on.
    And this one may be helpful:
    http://blogs.oracle.com/adapters/2010/05/configuring_request-reply_in_jmsadapter.html

  • How to handle a fixed-length file without newlines?

    Hi Experts,
    I'd like to handle a fixed-length file without newlines in the sender file adapter.
    The file looks like the following. It contains three records; "AAXBBBXCCCCX" is one record:
    AA1BBB1CCCC1AA2BBB2CCCC2AA3BBB3CCCC3
    I tried setting the following two parameters, but only the first record was read.
    fieldFixedLengths
    fieldFixedLengthType
    Please tell me how to handle this.
    Thanks
    Shinya Kawagoe.

    For this case we wrote a simple adapter module that inserts an end-of-line character after an offset.
    This way it can be reused in many interfaces.
    Also, reading the whole file at once may not be an option with large source files; it may cause performance/memory issues.
    eolbean.offset = <recordLen>

    // Inside the module's process() method: recordLen comes from the
    // eolbean.offset parameter above; msg, key, and MODULE come from the module context.
    XMLPayload xmlpayload = msg.getDocument();
    byte[] content = xmlpayload.getContent();
    byte eol = 0x0A;                           // LF to insert after each record
    int current = 0;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int lines = content.length / recordLen;
    do {
         lines--;
         baos.write(content, current, recordLen);
         if (lines > 0)                        // more records follow, so EOL required
              baos.write(eol);
         current += recordLen;
    } while (lines > 0);
    xmlpayload.setContent(baos.toByteArray());
    baos.close();
    Audit.addAuditLogEntry(key, AuditLogStatus.SUCCESS, MODULE + " Done EOLing.");

  • Best practices for handling large messages in JCAPS 5.1.3?

    Hi all,
    We have run into problems while processing large messages in JCAPS 5.1.3. Or, they are not really that large: only 10-20 MB.
    Our setup looks like this:
    We retrieve flat-file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of jcds with JMS queues between them.
    It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously, the logicalhost freezes and crashes in one of the conversion steps without any error message in the logicalhost log. We can't relate the crashes to a specific jcd, and the memory consumption of the logicalhost process increases A LOT while handling the messages. After a restart of the server, the messages in the queues are usually converted OK. Sometimes, however, we have seen messages apparently disappear. Scary stuff!
    I have heard of two possible solutions for handling large messages in JCAPS so far: splitting them into smaller chunks, or streaming them. These solutions are, however, not an option in our setup.
    We have manipulated the JVM memory settings without any improvement, and we have discussed the issue with Sun's support, but they have not been able to help us yet.
    My questions:
    * Any ideas how to handle large messages most efficiently?
    * Any ideas why the crashes occur without error messages in the logs or nothing?
    * Any ideas why messages sometimes disappear?
    * Any other suggestions?
    Thanks
    /Alex

    * Any ideas how to handle large messages most efficiently?
    Strictly speaking, if you want to send the entire file content in the JMS message, then I don't have an answer for this question.
    Generally we use the following process: after reading the file from the FTP location, we just archive it in a local directory and send a JMS message to the queue which contains the file name and file location. In most places we never send the file content in the JMS message.
    * Any ideas why the crashes occur without error messages in the logs or anything?
    Whenever the JMS IQ Manager's memory usage is high, logicalhosts stop processing. I will not say they are down; they stop processing, or processing may take a lot of time.
    * Any ideas why messages sometimes disappear?
    Unless persistence is enabled, I believe there is a high chance of losing messages when a logicalhost goes down. This is not always the case, but we have faced a similar issue when the IQ Manager was flooded with a lot of messages.
    * Any other suggestions?
    If the file size is large, then it is better to stream the file to a local directory from the FTP location and send only the file location in the JMS message.
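    A bare-bones sketch of such a pointer message using the plain JMS API (the queue name and property names are made up):

        import javax.jms.*;

        public class FileReferenceSender {
            // Send a pointer to the archived file instead of the file content itself.
            static void sendFileReference(ConnectionFactory factory, String fileName,
                                          String fileLocation) throws JMSException {
                Connection con = factory.createConnection();
                try {
                    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(session.createQueue("FILE_EVENTS"));
                    producer.setDeliveryMode(DeliveryMode.PERSISTENT);  // survive broker restarts

                    TextMessage msg = session.createTextMessage("file ready");
                    msg.setStringProperty("fileName", fileName);        // the consumer resolves these
                    msg.setStringProperty("fileLocation", fileLocation);
                    producer.send(msg);
                } finally {
                    con.close();
                }
            }
        }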
    Hope it helps.

  • BPEL 2.0 throws some faults as exceptions. How do I handle them?

    Hi all,
    I am using JDeveloper 11.1.1.6.
    There is something weird happening while I am trying to handle faults in BPEL 2.0.
    Scenario: I am simply transferring data from one table to another, and there is a mismatch between the column lengths in the two tables, i.e. the target table has a column of length 30 and the source system is sending data of length 40.
    So I am expecting a BPEL binding fault.
    But I am getting an exception from BPEL 2.0 instead, and it is not thrown as a fault to the catch block; the project gets rolled back without throwing a fault to the catch block.
    Then I tried the same with BPEL 1.1, and it gave me the fault in the catch block as expected, which I could handle by sending fault notifications.
    But BPEL 2.0 does not give me the option of sending notifications, as I do not get the fault in the catch block.
    Issue: How do I handle the fault in BPEL 2.0 when it is thrown as an exception?
    Below is the trace I get from my instance with BPEL 2.0 and BPEL 1.1.
    Getting the exception from BPEL 2.0 as below: after throwing the exception, the process gets rolled back, but the instance always stays in running mode.
    Non Recoverable System Fault :
    Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'merge' failed due to: DBWriteInteractionSpec Execute Failed Exception. merge failed. Descriptor name: [MergeItemDBAdapterV1.Item]. Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) . Please see the logs for the full DBAdapter logging output prior to this exception. This exception is considered not retriable, likely due to a modelling mistake. To classify it as retriable instead add property nonRetriableErrorCodes with value "-12899" to your deployment descriptor (i.e. weblogic-ra.xml). To auto retry a retriable fault set these composite.xml properties for this invoke: jca.retry.interval, jca.retry.count, and jca.retry.backoff. All properties are integers. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution.
    The exception I am getting in the Invoke activity, with no fault:
    <fault>
    <exception class="com.collaxa.cube.engine.EngineException">
    JTA transaction is not in active state. The transaction became inactive when executing activity "" for instance "520,633", bpel engine can not proceed further without an active transaction. please debug the invoked subsystem on why the transaction is not in active status. the transaction status is "MARKED_ROLLBACK". The reason was The execution of this instance "520633" for process "BPELProcess2" is supposed to be in an active jta transaction, the current transaction status is "MARKED_ROLLBACK", the underlying exception is "BINDING.JCA-12563 Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'merge' failed due to: DBWriteInteractionSpec Execute Failed Exception. merge failed. Descriptor name: [MergeItemDBAdapterV1.Item]. Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) . Please see the logs for the full DBAdapter logging output prior to this exception. This exception is considered not retriable, likely due to a modelling mistake. To classify it as retriable instead add property nonRetriableErrorCodes with value "-12899" to your deployment descriptor (i.e. weblogic-ra.xml). To auto retry a retriable fault set these composite.xml properties for this invoke: jca.retry.interval, jca.retry.count, and jca.retry.backoff. All properties are integers. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution. " . Consult the system administrator regarding this error.
    <stack>
    <f>com.oracle.bpel.client.util.TransactionUtils.throwExceptionIfTxnNotActive#107</f>
    <f>com.collaxa.cube.ws.WSInvocationManager.invoke#352</f>
    <f>com.collaxa.cube.engine.ext.common.InvokeHandler.__invoke#1070</f>
    <f>com.collaxa.cube.engine.ext.common.InvokeHandler.handleNormalInvoke#584</f>
    <f>com.collaxa.cube.engine.ext.common.InvokeHandler.handle#132</f>
    <f>com.collaxa.cube.engine.ext.bpel.common.wmp.BPELInvokeWMP.__executeStatements#74</f>
    <f>com.collaxa.cube.engine.ext.bpel.common.wmp.BaseBPELActivityWMP.perform#166</f>
    <f>com.collaxa.cube.engine.CubeEngine.performActivity#2687</f>
    <f>com.collaxa.cube.engine.CubeEngine._handleWorkItem#1190</f>
    <f>com.collaxa.cube.engine.CubeEngine.handleWorkItem#1093</f>
    <f>com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handleLocal#76</f>
    <f>com.collaxa.cube.engine.dispatch.DispatchHelper.handleLocalMessage#218</f>
    <f>com.collaxa.cube.engine.dispatch.DispatchHelper.sendMemory#297</f>
    <f>com.collaxa.cube.engine.CubeEngine.endRequest#4609</f>
    <f>com.collaxa.cube.engine.CubeEngine.endRequest#4540</f>
    <f>com.collaxa.cube.engine.CubeEngine._createAndInvoke#713</f>
    <f>...</f>
    </stack>
    </exception>
    <root class="oracle.fabric.common.FabricInvocationException">
    BINDING.JCA-12563 Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'merge' failed due to: DBWriteInteractionSpec Execute Failed Exception. merge failed. Descriptor name: [MergeItemDBAdapterV1.Item]. Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) . Please see the logs for the full DBAdapter logging output prior to this exception. This exception is considered not retriable, likely due to a modelling mistake. To classify it as retriable instead add property nonRetriableErrorCodes with value "-12899" to your deployment descriptor (i.e. weblogic-ra.xml). To auto retry a retriable fault set these composite.xml properties for this invoke: jca.retry.interval, jca.retry.count, and jca.retry.backoff. All properties are integers. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution.
    <stack>
    <f>oracle.integration.platform.blocks.adapter.fw.jca.cci.EndpointInteractionException.getFabricInvocationException#75</f>
    <f>oracle.integration.platform.blocks.adapter.AdapterReference.getFabricInvocationException#307</f>
    <f>oracle.integration.platform.blocks.adapter.AdapterReference.post#293</f>
    <f>oracle.integration.platform.blocks.mesh.AsynchronousMessageHandler.doPost#142</f>
    <f>oracle.integration.platform.blocks.mesh.MessageRouter.post#197</f>
    <f>oracle.integration.platform.blocks.mesh.MeshImpl.post#215</f>
    <f>sun.reflect.GeneratedMethodAccessor1455.invoke</f>
    <f>sun.reflect.DelegatingMethodAccessorImpl.invoke#25</f>
    <f>java.lang.reflect.Method.invoke#597</f>
    <f>org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection#307</f>
    <f>org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint#182</f>
    <f>org.springframework.aop.framework.ReflectiveMethodInvocation.proceed#149</f>
    <f>oracle.integration.platform.metrics.PhaseEventAspect.invoke#71</f>
    <f>org.springframework.aop.framework.ReflectiveMethodInvocation.proceed#171</f>
    <f>org.springframework.aop.framework.JdkDynamicAopProxy.invoke#204</f>
    <f>$Proxy315.post</f>
    <f>...</f>
    </stack>
    </root>
    </fault>
    Getting an exception as well as a fault from BPEL 1.1, as below:
    -> Here I am getting the same exception as above
    Non Recoverable System Fault :
    Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'merge' failed due to: DBWriteInteractionSpec Execute Failed Exception. merge failed. Descriptor name: [MergeItemDBAdapterV1.Item]. Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) . Please see the logs for the full DBAdapter logging output prior to this exception. This exception is considered not retriable, likely due to a modelling mistake. To classify it as retriable instead add property nonRetriableErrorCodes with value "-12899" to your deployment descriptor (i.e. weblogic-ra.xml). To auto retry a retriable fault set these composite.xml properties for this invoke: jca.retry.interval, jca.retry.count, and jca.retry.backoff. All properties are integers. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution.
    -> And here is the fault thrown by BPEL 1.1:
    Non Recoverable System Fault :
    <bpelFault><faultType>0</faultType><bindingFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="summary"><summary>Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'merge' failed due to: DBWriteInteractionSpec Execute Failed Exception. merge failed. Descriptor name: [MergeItemDBAdapterV1.Item]. Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) . Please see the logs for the full DBAdapter logging output prior to this exception. This exception is considered not retriable, likely due to a modelling mistake. To classify it as retriable instead add property nonRetriableErrorCodes with value "-12899" to your deployment descriptor (i.e. weblogic-ra.xml). To auto retry a retriable fault set these composite.xml properties for this invoke: jca.retry.interval, jca.retry.count, and jca.retry.backoff. All properties are integers. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution. </summary></part><part name="detail"><detail>ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) </detail></part><part name="code"><code>12899</code></part></bindingFault></bpelFault>
    The fault I am getting in the Invoke activity:
    <fault>
    <bpelFault>
    <faultType>0</faultType>
    <bindingFault>
    <part name="summary">
    <summary>Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'merge' failed due to: DBWriteInteractionSpec Execute Failed Exception. merge failed. Descriptor name: [MergeItemDBAdapterV1.Item]. Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) . Please see the logs for the full DBAdapter logging output prior to this exception. This exception is considered not retriable, likely due to a modelling mistake. To classify it as retriable instead add property nonRetriableErrorCodes with value "-12899" to your deployment descriptor (i.e. weblogic-ra.xml). To auto retry a retriable fault set these composite.xml properties for this invoke: jca.retry.interval, jca.retry.count, and jca.retry.backoff. All properties are integers. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution. </summary>
    </part>
    <part name="detail">
    <detail>ORA-12899: value too large for column "ITEM"."ITEMNAME" (actual: 40, maximum: 30) </detail>
    </part>
    <part name="code">
    <code>12899</code>
    </part>
    </bindingFault>
    </bpelFault>
    </fault>
    Please suggest.

    I sorted out what is happening.
    When my target DB is empty and the merge operation is executed, the fault is thrown and the catch block is able to catch it.
    And when my target DB has values, it is thrown as an exception, i.e. no fault reaches the catch block :(
    Does anyone have any idea how this can be resolved?
    Edited by: 910947 on Aug 31, 2012 1:43 AM

  • NIO: how best to transfer large binary files (>200 MB)

    Hi there,
    For my dissertation I am implementing a LAN-based NIO distributed system; most of it is working, except I can't figure out how best to transfer large files (>200 MB).
    1. Client registers as available
    2. Server notifies that a job is available on file x
    3. Client requests file x
    4. Server transfers file x, without reading the whole file into memory.
    5. Client does its thing and tells the server the results are ready.
    6. Server requests results.
    This is all implemented except for the actual file transfers, which just won't work the way I want. I have access to a shared drive that I could copy to and from, but Java still reads all the bytes fully into memory. I did a naughty workaround of calling Windows' native copy command (via Runtime), which does the trick, though I would prefer that my app did the file handling, since I need to know when things are done.
    Can anyone provide me with a link to an example/tutorial of how to do this, or is there a better way of transferring large files, like tunneling, streaming, FTP, or the old way of splitting into chunks with headers (1024 bits, byteFlip() etc.)? How is this usually done out there in the real world?
    I did search the forum and found hints and tips, though I thought I would see if there is a best practice for this problem.
    Thanks,
    Thorsan

    Hi,
    I have tried various approaches without much luck; I think part of the problem is that I have never done file transfers over a network in Java before...
    At the moment I let the user select files from a JFileChooser and store the path+filename String in a list. When a file is to be transferred, I do the following:
    static void copyNIO(File f, String newFile) throws Exception {
            File f2 = new File(newFile);     
            FileChannel src = new FileInputStream(f).getChannel();
            FileChannel dst = new FileOutputStream(f2).getChannel();
            dst.transferFrom(src, 0, src.size());
            src.close();
            dst.close();
        }
    This lets me copy the file to wherever I want without going through a socket; since I am developing in a LAN environment, this was OK for debugging, though sooner or later it has to go through the network.
    I have also played around with ObjectOutputStream and ObjectInputStream, but again I just can't figure out how to stop my source from loading the whole file into memory. I want to read/write in blocks to keep memory usage to a minimum.
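    What I am after is presumably something like the following chunked transferTo loop (an untested sketch; the receiver address and chunk size are made up):

        import java.io.*;
        import java.net.InetSocketAddress;
        import java.nio.channels.*;

        public class ChunkedFileSender {
            public static void main(String[] args) throws IOException {
                File file = new File(args[0]);
                try (SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9000));
                     FileChannel src = new FileInputStream(file).getChannel()) {
                    long position = 0, size = src.size();
                    while (position < size) {
                        // transferTo may send fewer bytes than asked; advance by what was sent.
                        position += src.transferTo(position, 1024 * 1024, socket);
                    }
                }
            }
        }

    Is that the right direction?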
    Thorsan
