Unable to decompress large data with CL_ABAP_UNGZIP_BINARY_STREAM

Hello all,
I would like to stream a huge amount of XML data to the application server in compressed form. It seems that the class pair CL_ABAP_GZIP_BINARY_STREAM / CL_ABAP_UNGZIP_BINARY_STREAM should do this job.
So far the compression works, since the compressed chunks are small enough; I have taken the dependency between the buffer length and the overall number of bytes to be compressed into account.
The decompression, however, seems to produce some kind of memory leak, or at least fails for an unknown reason. For relatively small amounts of compressed data CL_ABAP_UNGZIP_BINARY_STREAM works just fine. Larger files cannot be decompressed without increasing the output buffer size, even when they are fed in within a loop in small portions (chunks of 4 KB or less). For very large files (again decompressed in a loop in small chunks) no buffer is large enough, and the procedure ends with an out-of-memory error.
So either I am using the CL_ABAP_GZIP_BINARY_STREAM / CL_ABAP_UNGZIP_BINARY_STREAM pair incorrectly, or there is a problem with the memory management inside the C implementation of these classes.
If somebody knows about this problem or has an idea of how to resolve it, any help would be very welcome!
Here is the error message I get:
An exception occurred that is explained in detail below.
The exception, which is assigned to class 'CX_SY_COMPRESSION_ERROR'
The function IctDecompressStream returns the return code 30
The code I use to decompress:
* output buffer for the decompressed data
data l_gzip_buff_decomp type xstring.
* output buffer size (16 MB)
data l_gzip_buff_len_decomp type i value 16777216.
* empty hex value used to finalize the stream
data c_empty_x type x.
* chunk of compressed data to be decompressed at once
data l_buff type x length 4096.
* file name and bytes read per chunk (declarations missing in the original post)
data l_file type string.
data l_len type i.
* user_outbuf is a local output-handler class defined elsewhere
data uref  type ref to user_outbuf.
data csref type ref to cl_abap_ungzip_binary_stream.

create object uref.
create object csref
  exporting
    output_handler = uref.

call method csref->set_out_buf
  importing
    out_buf     = l_gzip_buff_decomp
    out_buf_len = l_gzip_buff_len_decomp.

l_file = 'very-large_file.xml.gz'.
open dataset l_file in binary mode for input.

do.
  read dataset l_file into l_buff length l_len.
  if l_len > 0.
    call method csref->decompress_binary_stream
      exporting
        gzip_in     = l_buff
        gzip_in_len = l_len.
  else.
    close dataset l_file.
    exit.
  endif.
enddo.

* close the stream and flush the remaining unzipped data
call method csref->decompress_binary_stream_end
  exporting
    gzip_in     = c_empty_x
    gzip_in_len = 0.
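For reference, the post never shows the user_outbuf class that receives the decompressed output. Below is a minimal, hypothetical sketch of such a handler. The interface name IF_ABAP_UNGZIP_BINARY_HANDLER, its method USE_OUT_BUF, and the OUT_BUF parameter are assumptions on my part, not taken from the post (check the type of the constructor's OUTPUT_HANDLER parameter in SE24 for the real contract). The idea is simply to persist each filled output buffer immediately, so the decompressed data never has to fit into memory in one piece:

* hypothetical sketch - interface and method names are assumptions,
* not confirmed by the original post
class user_outbuf definition.
  public section.
    interfaces if_abap_ungzip_binary_handler.
endclass.

class user_outbuf implementation.
  method if_abap_ungzip_binary_handler~use_out_buf.
    " called whenever the stream has filled the output buffer;
    " append the chunk to a file right away instead of letting
    " the buffer grow in memory
    data l_out type string value 'very-large_file.xml'.
    open dataset l_out for appending in binary mode.
    transfer out_buf to l_out.
    close dataset l_out.
    " depending on the actual contract, the buffer may also need
    " to be cleared or re-registered here before returning
  endmethod.
endclass.

If the class really does recycle its output buffer through such a handler, the out-of-memory behaviour described above would suggest the handler is never invoked (or the buffer is never released), rather than a problem with the chunked input loop itself.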

Hi Gena,
I'm facing exactly the same problem as you...
Since this post is an old one, I imagine that you may not remember, but I have to try...
Have you solved it? If yes, could you please tell me how?
I've tried to use CL_ABAP_GZIP and CL_ABAP_UNGZIP_BINARY_STREAM and I'm getting the same error 30 from the IctDecompressStream function.
Tks in advance,
Flavio.

Similar Messages

  • Import and process large data with SQL*Loader and Java resources

    Hello,
    I have a project to import data from a text file on a schedule. It is a large amount of data, nearly 20,000 records per hour.
    After that, we have to analyze the data and export the results into another database.
    I have researched SQL*Loader and Java resources for these tasks, but I have no experience with them.
    I'm afraid that with such a huge amount of data Oracle could slow down, or the session in the Java resource application could time out.
    Please give me some advice about a possible solution.
    Thank you very much.

    By the '?' mark I mean: "How can I link this COL1 with a column in the csv file?"
    Attilio

  • Unable to access Archived data with an ABAP Query

    I have an ABAP Query that uses Logical Database KDF (Vendor) for reporting.
    KDF is archive-enabled, and I can access archived documents via SAP standard programs that use the KDF Logical Database.
    The query appears to access the data, but nothing shows up on the report.
    Has anyone successfully written an ABAP Query that includes access to archived data?
    Any help is appreciated.


  • Xcelsius 2008 Unable to preview BPC Data with EPM connection

    Hello,
    I'm working with BPC 7.5 for Microsoft platform and I'm trying to integrate it with Xcelsius using EPM Connector.
    When I try to preview, the screen gets stuck and just displays "Initializing".
    If I remove the connection it works fine.
    Please give me your suggestions.
    I tried to install Crystal Reports 2008 FP 3.5, but I am not able to install it, as it says "the target product is missing or missing the correct version for this update".
    Need your expert advice.
    thanks,
    Bala
    Edited by: Bala on Jun 14, 2011 6:13 PM
    Edited by: Bala on Jun 14, 2011 6:17 PM

    Install Xcelsius fix 3.5

  • Acquisition of parts of large signals with NI-SCOPE

    Hello
    I am trying to acquire large data with NI-SCOPE.
    The acquisition time is about 2 seconds at a sampling frequency of 50 MHz.
    I figured out that this amount of data is too large; the NI USB-5133 does not have enough memory.
    Actually, I only need some parts of this 2-second signal, so it would be possible to acquire only those parts.
    Moreover, I need to average the signal to reduce the noise.
    I have been trying and trying but couldn't work it out so far.
    Maybe you could help me with some ideas....
    Thank you very much

    This type of question would be more visible on the High Speed Digitizers Forum.
    The USB-5132/5133 is a finite-acquisition digitizer. The record size is limited by the total amount of memory on the digitizer. A 32 MB/ch memory option for the USB-5132 and USB-5133 has just been released, which increases the maximum record length by a factor of 8 in comparison to the 4 MB/ch option.
    If you need to acquire more data, you must re-initiate the task to take a second record. There will be some "dead time" during this initiate step, where no samples are acquired. Some benchmarks have been provided to show what equivalent data rate can be achieved using this method. For continuous acquisition, consider a PCI, PXI or PXIe digitizer.
    Jennifer O.

  • [Bug?] X-Control Memory Leak with Large Data Array

    [LV2009]
    [Cross-posted to LAVA]
    I have found that if I pass a large data array (~4MB in this example) into an X-Control, it causes massive memory allocations (1 GB+).
    Is this a known issue?
    The X-Control in the video was created, then the Data.ctl was changed to 2D Array - it has not been edited in any other way.
    I also compare the allocations to that of a native 2D Array (which is only ~4MB).
    Note: I jiggled the Windows Task Manager about so that JING would update correctly; it's a bit slow, but the memory usage essentially just keeps rolling up and doesn't stop.
    Demo code attached.
    Cheers
    -JG
    [Embedded screen-capture video - requires Adobe Flash to view.]
    Certified LabVIEW Architect * LabVIEW Champion
    Attachments:
    X Control Bug [LV2009].zip (42 KB)

    Hi Jon (cool name) 
    Thank you very much for your reply. We came to this conclusion in the cross post and it is good to have it confirmed by LabVIEW R&D. Your response is also similar to that of my AE which I got this morning as well - see below:
    Note: Your reference number is included in the Subject field of this message. It is very important that you do not remove or modify this reference number, or your message may be returned to you.
    Hi Jon,
    You probably found some information on the forum. The US engineer has gotten back, and he said that unfortunately this is expected behaviour after they conducted some tests. This is what he replied:
    "XControls in the background use event structures. In particular, the Data Change Event is called when the value of the XControl changes (writing to the terminal, a local variable, or the Value Change property). What is happening in this case is that the XControl is getting called so fast with a large set of data that the event structure queues the events and data, and a memory leak is produced. It is, unfortunately, expected behavior. The main workaround for the customer in this case is to not call the XControl as often. Another possibility is to use the Synchronous Display property to defer updates to the XControl; this might slow down a leak."
    He would also like to know if you can provide more details on how you are using the XControl; perhaps there is a better way. Please refer to the link below for synchronous display. Thank you.
    http://zone.ni.com/reference/en-XX/help/371361G-01/lvprop/control_synchronous_display/
    In my application I updated the X-Control at 1 Hz and it allocated at MBs/s, up to 1+ GB before it crashed, all within a few hours. That is why I called it a leak. I am really worried that if this CAR gets killed, there will still be a lingering issue that makes using X-Controls a major problem under the above conditions. I have had to pull two sets of libraries from my code because of this - when they got replaced with native LabVIEW controls the leak went away (but I lost reuse and encapsulation etc...).
    Anyway, I really want to use X-Controls though (now and in the future) as I like all other aspects of them. If you do not consider this a leak, can a different CAR be raised that may modify the existing behavior? I offered the suggestion (in the cross-post) that the data be ignored rather than queued - similar to Christian's idea, but for X-Controls. Maybe as an option?
    I look forward to discussing this with you further.
    Regards
    -Jon
    Certified LabVIEW Architect * LabVIEW Champion

  • Problem with Payables Open Interface Import - unable to pick up invoice data

    Hi,
    I have put the interface data into two staging tables and then moved it to the AP_INVOICES_INTERFACE and AP_INVOICE_LINES_INTERFACE tables.
    Code:
    insert into XXWU_AP_INVOICES_INTERFACE (
      invoice_id,
      invoice_num,
      vendor_id,
      vendor_site_id,
      vendor_site_code,
      invoice_amount,
      INVOICE_CURRENCY_CODE,
      invoice_date,
      DESCRIPTION,
      PAY_GROUP_LOOKUP_CODE,
      source,
      org_id,
      OPERATING_UNIT
    ) values (
      ap_invoices_interface_s.nextval,
      'INV345DJ',
      '3317',
      '14335',
      'ACH NY1',
      1200.00,
      'USD',
      to_date('01-31-2010','mm-dd-yyyy'),
      'This Invoice is created for test purpose',
      'WUFS SUPPLIER',
      'Manual Invoice Entry',
      81,
      'WU_USD_OU'
    );
    insert into XXWU_AP_INVOICE_LN_INTERFACE (
      invoice_id,
      invoice_line_id,
      line_number,
      line_type_lookup_code,
      amount,
      org_id
    ) values (
      ap_invoices_interface_s.currval,
      ap_invoice_lines_interface_s.nextval,
      1,
      'ITEM',
      1200.00,
      81
    );
    INSERT INTO AP_INVOICES_INTERFACE (SELECT * FROM XXWU_AP_INVOICES_INTERFACE);
    INSERT INTO AP_INVOICE_LINES_INTERFACE (SELECT * FROM XXWU_AP_INVOICE_LN_INTERFACE);
    I can very well see the data in Open Interface Invoices (front end). But when I run the Payables Open Interface Import program with the correct Operating Unit, Source and Batch Name, the program is unable to pick up the data from the interface tables and does not create an invoice.
    Note:
    1] The program runs successfully every time, but there is no output in the output file.
    2] In the log file there is one message saying 'Zero(0) invoices were created during the process run.'
    3] All the initial setup is right, I think, because I ran the same program a few days back and it was working fine; the invoices were created at that time. Now I am unable to figure out what is going wrong.
    Please help ASAP. Thanks in advance,
    DJ Koch.
    Edited by: DJKOCH on 17 Aug, 2010 9:51 PM

    Can you run the program Payables Open Interface Import with Debug mode set to Yes?
    Once the program completes successfully, verify the program's log file and try to find out if there are any invoices which are causing exceptions.
    Also confirm that the invoices in the AP_INVOICES_INTERFACE and AP_INVOICE_LINES_INTERFACE tables are of the correct period in which you are trying to create the invoices.
    Hope this helps.
    Thanks and Regards
    Manish Jain.

  • I have an ipad2 with wifi and 3G. My wife and I are traveling in France (Nice) then into Italy. We are unable to purchase a data plan because our Verizon iPad will not accept a sim chip...we are told. We were advised that this model would work in Europe

    I have an ipad2 with wifi and 3G. My wife and I are traveling in France (Nice) and then into Italy. We are unable to purchase a data plan because our Verizon iPad will not accept a sim chip...we are told. An Apple Store employee here said this model is good only in the US; the iPads here take a sim card. We were advised by an Apple Store employee at home that this model would work in Europe and that we could buy our service here. That does not seem to be the case. We have been told that we must invest in a modem at 200.00€ and then purchase a separate service plan for each country. We can use free wifi wherever it is available, but we cannot connect to the outside world when out of range. Is there anything else we can do?
    Thanks, John

    Unfortunately, no other solution is available.
    Hope this helps

  • I have Microsoft Office 2004 on my MacBook (2.4 GHz Intel Core 2 Duo).  It is currently up to date with Microsoft updates.  I am running Mac OS 10.6.8.  I just updated my Mac software with "Security Update 2012-001".  I am now unable to print (Epson NX510

    I have Microsoft Office 2004 on my MacBook (2.4 GHz Intel Core 2 Duo). It is currently up to date with Microsoft updates. I am running Mac OS 10.6.8. I just updated my Mac software with "Security Update 2012-001". I am now unable to print (Epson NX510 printer) from Excel or Word. When I click on the Print menu item in Excel, there is a flash in the background like it is trying to open the print window, but nothing else. I am able to print from other programs like TextEdit, Mail or KaleidaGraph. As far as I know I have the latest Epson print driver.
    Also, I am unable to open an existing Excel or Word file from the Open menu - both programs lock up and do not respond, and I have to force quit. After I restart Excel or Word I can open an existing file by double-clicking on it, but if I again try to open another file from the Open menu, Excel or Word locks up.
    Any similar problems?

    Howdy,
    Apparently some are reporting that this causes the older PowerPC (PPC) applications that are supported in 10.6 via 'Rosetta' to crash upon attempting to open/save/print using any dialog box, or to fail in other similar ways, such as simply not printing, quitting unexpectedly, or freezing/hanging/crashing.
    I have read of some companies that have indeed submitted proper bug reports to Apple, but that is not a guarantee of a bug-fix being issued.
    You might wish to read:
    http://www.macintouch.com/readerreports/snowleopard/index.html#d02feb2012
    If you are unsure whether you are still using PowerPC apps: if the application is currently running, look in Activity Monitor (in Applications -> Utilities), or alternatively check System Profiler under Applications and look at the column "Type".
    Here is a fairly simple way you can restore your system and your applications' functionality again, if you don't have a recent clone or a good Time Machine backup that you can restore from. If you do, restore from the backup made prior to installing Security Update 2012-001.
    Time Machine restore: http://support.apple.com/kb/ht1427
    If you are restoring a backup made by a Mac to the same Mac
    With your backup drive connected, start up your Mac from the Lion recovery partition (Command-R at startup) or Mac OS X v10.6 installation disc. Then use the "Restore From Time Machine Backup" utility. Select the backup prior to your issues, and it will be restored back to the state it was in at that time.
    If you can't easily restore from a backup, you can instead do the following:
    - First, reinstall OS X 10.6.x; this will preserve all your user data and your applications, no worries there.
    - Then install the Mac OS X 10.6.8 Update Combo v1.1 (links provided below).
    - Make sure your printers are showing up correctly in System Preferences; if not, re-add the printers.
    - Finally, run Apple Software Update (by pulling down the Apple menu) and install any and all remaining updates, except do not re-install Security Update 2012-001. It is possible that you may have to reboot after installing some of the updates, and you may even need to run it a second time to make sure that you've got all updates, except NOT Security Update 2012-001.
    Links for 10.6.8 Update Combo v1.1:
    http://support.apple.com/kb/DL1399
    or the link to directly download this 1.09GB combo updater:
    http://support.apple.com/downloads/DL1399/en_US/MacOSXUpdCombo10.6.8.dmg
    Cheers,
    Daniel Feldman
    =======================
      MacMind
      Certified Member of the
      Apple Consultants Network
      Apple Certified (ACHDS)
      E-mail:  [email protected] 
      Phone:   1-408-454-6649
      URL : www.MacMind.com
    =======================

  • Unable to load provider data, syncing issues with update 9.1

    Since the install of iTunes 9.1, I have had to reinstall it twice and I still get the same issues:
    1) It says "iTunes can not sync - disabled on this computer"
    2) iTunes was unable to load provider data from Sync Services.
    So I can sync songs, videos, and apps, but can no longer sync calendars, contacts, etc... This all started right after the last update; up until then it was fine, and no other program changes have been made on the PC.
    Reinstalling / repairing iTunes does not resolve the issue. Any ideas on this?

    You are not alone; I get the same issue. It then tells me it can not save the backup to my computer. *** is going on with the 9.1 update!

  • I have an external hard drive from my time capsule that stopped working on me. I am attempting to access the data with a hard drive reader on my MAC. I am able to see the drive in disk utility and under system info USB. But I am unable to access the data.

    I have an external hard drive from my Time Capsule that stopped working on me. I am attempting to access the data with a hard drive reader on my Mac. I am able to see the drive in Disk Utility and under System Information > USB, but I am unable to access the data, and it does not show up on the desktop when connected.

    OK, if Disk Utility was able to verify the drive, I doubt there is any problem.. are you trying to open a TM backup??
    You need to mount the sparsebundle, then check the actual info inside the bundle.
    Don't use Disk Warrior.. if the disk has verified, then unless you deliberately deleted files, there is nothing that it is going to do.
    Pondini has a lot of stuff about getting access to the sparsebundle:
    http://pondini.org/TM/17.html
    But if the info you copied to the TC is now gone.. and the disk is ok.. I am not sure; the TC will not have deleted the files itself.

  • I am shooting with a Nikon D60 on Fine Format and unable to print larger than 11x14.  I have been able to print larger pictures.  I am told iPhoto reduces the size of the file when uploaded. Is this true? If so how can I change this?  I need 16x20's

    I am shooting with a Nikon D60 on Fine Format and unable to print larger than 11x14.  I have been able to print larger pictures.  I am told iPhoto reduces the size of the file when uploaded. Is this true? If so how can I change this?  I need 16x20's


  • Conditional format with large data fails and shows the error "Selection is too large" in Excel 2007

    I am facing an issue with a paste special operation using conditional formats on large data in Excel 2007.
    I have uploaded a file at the location given below.
    http://sdrv.ms/1fYC9qE
    The file contains two sheets: sheet "Data" contains the data to which the formats are to be applied, and sheet "FormatTables" contains the format tables with conditional formatting.
    There are two tables in the "FormatTables" sheet, both with some conditional formats applied to them.
    Case 1: 
    1. Select the table range of Table1, i.e. $A$2:$AV$2
    2. Copy it
    3. Go to sheet "Data"
    4. Select the data area, i.e. $A$1:$AV$20664
    5. Perform a paste special operation on the full range and select the "Formats" option while performing paste special.
    Result:
    It throws the error "Selection is too large".
    Case 2:
    1. Select the table range of Table2, i.e. $A$5:$AV$5
    2. Copy it
    3. Go to sheet "Data"
    4. Select the data area, i.e. $A$1:$AV$20664
    5. Perform a paste special operation on the full range and select the "Formats" option while performing paste special.
    Result:
    Formats get applied successfully.
    Both are the same format tables with the same number of columns, applied to the same data range ($A$1:$AV$20664), yet one case works and the other fails.
    The only difference is that Table1 has an appliesTo range ($A$2:$T$2) that covers only part of its total table range ($A$2:$AV$2), whereas Table2's appliesTo range ($A$5:$AV$5) is the same as its total table range ($A$5:$AV$5).
    NOTE: This issue occurs only in Excel 2007.

    Excel 2007 does not support taking formatting from another range when the target has more than 16,000 rows. If you want to apply it to more rows than that, you have to insert one more row in your format table so that it spans 3 rows, like A1:AV3, and then try to copy that formatting and apply it.
    Solution for Case 1:
    1. Select the table range of Table1, i.e. AV21, and drag it down one row
    2. Select the table range of Table1, i.e. $A$2:$AV$3
    3. Copy it
    4. Go to sheet "Data"
    5. Select the data area, i.e. $A$1:$AV$20664
    6. Perform a paste special operation on the full range and select the "Formats" option while performing paste special

  • Problem with large data report

    I tried to run a template I got from Release 12 using data from the release we are using (11i). The XML file is about 13,500 KB when I run it from my desktop.
    I get the following error (mostly no output is generated; sometimes it is generated after a long time):
    Font Dir: C:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\fonts
    Run XDO Start
    RTFProcessor setLocale: en-us
    FOProcessor setData: C:\Documents and Settings\skiran\Desktop\working\2648119.xml
    FOProcessor setLocale: en-us
    I assumed there might be compatibility issues between 12i and 11i, so I tried to write my own template, and I ran into the same issue when I added the third nested loop.
    I also noticed javaws.exe runs in the background hogging a lot of memory. I am using BI Publisher version 5.6.3.
    I tried to run the template through the Template Viewer. The process never completes.
    The log file is:
    [010109_121009828][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setData(InputStream) is called.
    [010109_121014796][][STATEMENT] Logger.init(): *** DEBUG MODE IS OFF. ***
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setTemplate(InputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutput(OutputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutputFormat(byte)is called with ID=1.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setLocale is called with 'en-US'.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.process() is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.generate() called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] createFO(Object, Object) is called.
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] oracle.xdo Developers Kit 10.1.0.5.0 - Production
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] Scalable Feature Disabled
    End of Process.
    Time: 436.906 sec.
    FO Formatting failed.
    I can't seem to figure out whether this is a looping issue, a large-data issue, or a BI version issue. Please advise.
    Thank you

    The report will probably fail in a production environment if you don't have enough heap. 13 MB is a big XML file for the parsers to handle; it will probably crush the OPP (Output Post Processor). The whole document has to be loaded into memory, and preserving the relationships in the document is probably what's killing your performance. The OPP / FOProcessor does not use the SAX parser the way the bursting engine does. I would suggest setting a maximum range on the number of documents that can be created and submitting in a set of batches. That will reduce your XML file size, and performance will increase.
    An alternative to the previous approach would be to write a concurrent program that merges the PDFs using the document merger API. This would allow you to burst the document into a temp directory and then re-assemble the pieces into one document. One disadvantage of this approach is that the PDF is going to be freakin' huge. Also, if you have to send that piggy to the printer you're gonna have some problems too: when you convert the PDF to PS, the files are going to be massive because of the loss of compression, and it gets even worse if the PDF has images... Then you'll have more problems with disk on the server and/or running out of memory on PS printers.
    All of the things I have discussed I have done in some fashion. Speaking from experience, your idea of a 13 MB XML file is just a really bad idea. I would go with option one.
    Ike Wiggins
    http://bipublisher.blogspot.com

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious();
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
        cursor.close();
    } catch (Exception e) {
        // the original post's exception handling was cut off here
    }
    We are committing every 50 records in the unit of work. If we set dontMaintainCache on the ReadAllQuery we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM
