Convert large volumes in Oracle

I am trying to convert large tables from one instance to another (a simple conversion).
The table has the same layout in both systems, but doing the conversion takes ages.
So now we are trying to convert via Oracle (exp/imp).
The schema IDs are different, but this issue can be tackled.
However, we have found that the Oracle tables in the target system are always created with a NOT NULL constraint, which makes the import fail.
Any ideas on how to tackle this, or any other ideas on how to transfer large volumes?

Hi,
If I understand correctly, you want to copy a table's contents from one system to another, but the constraints do not allow you to insert the records at the target site.
That is, you exported the table from the source system, where the fields have no NOT NULL constraint, while the same fields at the target site are constrained NOT NULL.
Under these circumstances, you can create the table at the destination first, without the NOT NULL constraints, and import the records into it.
You can use "brspace -f tbexport ..." for the export/import operations. Check SAP Note 646681 - Reorganizing tables with BRSPACE.
Best regards,
Orkun Gedik

Similar Messages

  • Store large volume of image files: what is better, File System or Oracle?

    I am working on IM (Image Management) software that needs to store and manage over 8,000,000 images.
    I am not sure whether to use the file system or the database (BLOB or CLOB) to store the images. Until now I have only used the file system.
    Could someone who already has experience with storing large volumes of images tell me the advantages and disadvantages of using the file system versus the Oracle database?
    My initial database will have 8,000,000 images, and it will grow by 3,000,000 a year. Each image will be between 200 KB and 8 MB in size, but the mean is 300 KB.
    I am using Oracle 10g. I read in other forums that PostgreSQL and Firebird aren't good for storing images in the database because the database always crashes. I need to know whether the same is true of Oracle, and why. Can I trust Oracle for a service this large? Are there tips for storing files in the database?
    Thanks for the help.
    Best Regards,
    Eduardo
    Brazil.

    1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
    2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
    3) Storing the data in the database would generally strike me as preferable, if only because of the recoverability implications. If you store data on a file system, you will inevitably have cases where an application writes a file and the transaction to insert the row into the database fails, or the transaction to delete a row from the database succeeds before the file is deleted, which can make things inconsistent (images with nothing in the database and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
    4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
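    If you do move to 11g, a minimal sketch of what that looks like (hypothetical names; COMPRESS and DEDUPLICATE on SecureFiles require the Advanced Compression option):
    CREATE TABLE images (
      image_id   NUMBER PRIMARY KEY,
      image_data BLOB
    )
    LOB (image_data) STORE AS SECUREFILE (
      COMPRESS MEDIUM
      DEDUPLICATE
    );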
    Justin

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case.
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of tablespace etc., and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001  VARCHAR2(10)
    ...
    COL250  VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
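    For reference, the pivot itself can be done set-wise on 9iR2 with a multi-table insert; a minimal sketch assuming just two short-fat columns and hard-coded variable IDs (in practice the column-to-variable mapping comes from the runtime metadata, and the validation/STATUS handling is omitted):
    insert all
      into int_long_thin (case_num_id, variable_id, variable_value)
           values (case_num_id, 101, col001)
      into int_long_thin (case_num_id, variable_id, variable_value)
           values (case_num_id, 102, col002)
    select case_num_id, col001, col002
    from   short_fat_1;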

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volumn of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there is no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650-million-row table. Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for the columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
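    A minimal sketch of that conformance check, assuming SQL Server 2012+ and hypothetical table/column names and guessed types:
    SELECT
        COUNT(*) AS total_rows,
        -- non-null values that do not convert to the guessed type:
        SUM(CASE WHEN col1 IS NOT NULL
                  AND TRY_CONVERT(int,  col1) IS NULL THEN 1 ELSE 0 END) AS col1_bad_int,
        SUM(CASE WHEN col2 IS NOT NULL
                  AND TRY_CONVERT(date, col2) IS NULL THEN 1 ELSE 0 END) AS col2_bad_date
    FROM dbo.BigTable;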
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up by FCC and then sent to XI. In XI it goes through a message mapping and then an XSL transformation, in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows lots of problems, such as (1) a JCo call error, or (2) sometimes XI even stops and we have to start it manually again for it to function properly.
    Please suggest some way to handle large volumes (file sizes up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed individually in the target system, you could split your source file into several messages by setting the Recordsets per Message parameter in the sender file adapter.
    However, since you just want to convert a .txt file into an .xml file, first try setting the EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This may solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, i.e. the JCo call error.
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on) and only then sent to the pipeline in the Integration Engine.
    Carlos

  • About large volumes

    Hi. I'd like to know how to manage large-volume databases with Oracle, I mean databases of more than 1 TB.
    Thank you.

    Apparently there's a storage war going on between network-attached storage (NAS) and storage area networks (SAN).
    Both are very popular, and each has advantages and disadvantages. If you google the terms you will find many articles about them.
    People who use NAS usually choose Network Appliance filers (NetApp).
    There are many players in the SAN field; EMC is the most famous.
    http://www.storagesearch.com/xtore-art1.html

  • XML - large volume

    This is a beginner question, but I haven't found anything that answers it.
    Is XML suitable for large-volume data exchange? Does it depend on the capacity of the parser used?
    I am interested in using XML as the format of a fairly complex database extract of around 200,000 rows. Is this a reasonable use?

    Your decision to use XML for that size of a select depends on what you want to do with the data once you have it materialized. While wrapping it in XML will definitely bloat it, if you are trying to preserve its schema information on transfer and have it processed by generic standards-based parsers instead of custom code, it may well be worth it.
    Oracle XML Team

  • Converting large amounts of points - 76 million lat/lon's to spatial object...

    Hello, I need help.
    Platform: Oracle 11g 64-bit on Windows Server 2008 Enterprise 64-bit, with 64 GB of RAM and 2 CPUs totalling 24 cores.
    Does anyone know of a fast way to convert large numbers of points to a spatial object? I need to convert 76 million lat/lons to ESRI st_geometry or Oracle sdo_geometry.
    Currently I have set up code using pipelined parallel functions and multiple jobs that run concurrently. It still takes over 2.5 hours to process all of the points.
    Any pointers would be GREATLY appreciated!
    Thanks
    John

    Hi,
    Where is the lat/lon data at the moment?  In an external text file or in an existing database table as number attributes?
    If they're in an external text file, then I'd probably use an external table to load them in as quickly as possible.
    If they're in an existing database table, then you can just update the sdo_geometry column using:
    update <table> set <geometry column> = sdo_geometry(2001, <your srid>, sdo_point_type(<lon column>, <lat column>, null), null, null)
    where <lon column> is not null
    and <lat column> is not null;
    That should run very quickly for you. If you want to avoid the overhead of creating redo, you could use "create table ... as select ...". This example of creating 1,000,000 points runs in 9 seconds for me.
    create table sample_points (geometry) nologging as
      (select sdo_geometry(2001, null,
                sdo_point_type(
                  trunc(100000 * dbms_random.value()),
                  trunc(100000 * dbms_random.value()),
                  null),
                null, null)
       from dual connect by level <= 1000000);
    I have setup code using pipelined parallel functions and multiple jobs that run concurrently
    You shouldn't need to use pl/sql for this task.  If you find you do, then provide some sample code and we'll take a look.
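    If the plain update is still slow at this scale, parallel DML is another option worth trying; a minimal sketch with hypothetical table/column names, an arbitrary degree of 8, and 4326 standing in for your SRID:
    alter session enable parallel dml;
    update /*+ parallel(t, 8) */ my_points t
    set    geom = sdo_geometry(2001, 4326,
                    sdo_point_type(t.lon, t.lat, null), null, null)
    where  t.lon is not null
    and    t.lat is not null;
    commit;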
    Regards,
    John O'Toole

  • Convert PHP script to Oracle procedure

    To all, please help me... I want to convert a PHP script to an Oracle procedure. The script (just an example) is:
    <?php
    include("../config/koneksi.php");
    $customer=$_POST['customer'];
    $tanggal1=$_POST['theDate1'];
    $tanggal2=$_POST['theDate2'];
    $no_bulan=substr($tanggal1,0,2);
    $bulan_sajah= (substr($no_bulan,0,1)=='0')? substr($no_bulan,1,1) : $no_bulan;          
    $tahun_sajah=substr($tanggal1,3,4);
    $blnkmrn=(int)$bulan_sajah;
    $thnkmrn=(int)$tahun_sajah;
    if ($blnkmrn==1) {
         $bulan_lalu=12;
         $tahun_lalu=$thnkmrn-1;}
    else {
         $bulan_lalu=$blnkmrn-1;
         $tahun_lalu=$thnkmrn;}
    $bulanlalu=strval($bulan_lalu);
    $tahunlalu=strval($tahun_lalu);
    $sql = "select nip_nas from edo_customer_master_dives where standard_name='$customer'";
         $stm = ociparse($conn,$sql);
         ociexecute($stm);
         ocifetch($stm);
         $data=ociresult($stm,1);
         $sql12 = "select PRODUCT_LINE_ID,sum(REVENUE)
    from PA_FACT_REV_BILLED_CC
    where nip_nas='$data' and year_id='$tahun_sajah' and month_id='$bulan_sajah' group by PRODUCT_LINE_ID";
         $stm12 = ociparse($conn,$sql12);
         ociexecute($stm12);
         $total_revenue=0;
         $i="0";
    while (ocifetch($stm12)){
         $rev_items=ociresult($stm12,1);
         $revenue=ociresult($stm12,2);
         $sql2 = "select * from PA_FACT_REV_BILLED_CC
    where nip_nas='$data' and PRODUCT_LINE_ID='$rev_items'";
         $stm2 = ociparse($conn,$sql2);
         ociexecute($stm2);
         ocifetch($stm2);
    $tahun=ociresult($stm2,1);
    $bulan=ociresult($stm2,2);
    $nipnas=ociresult($stm2,3);
    $prod_line=ociresult($stm2,4);
    $rev_item=ociresult($stm2,5);
    //$revenue=ociresult($stm2,6);
    $query1 = "select standard_name from edo_customer_master_dives where nip_nas='$nipnas'";
         $st1 = ociparse($conn,$query1);
         ociexecute($st1);
         ocifetch($st1);
         $nama_cust=ociresult($st1,1);
         $query2 = "select prod_line_lname from parameter.p_prod_line@dwhnas where prod_line_id='$prod_line'";
         $st2 = ociparse($conn,$query2);
         ociexecute($st2);
         ocifetch($st2);
         $nama_prod_line=ociresult($st2,1);
         $query3 = "select REV_TYPE_LNAME from parameter.p_rev_type@dwhnas where REV_TYPE_ID='$rev_item'";
         $st3 = ociparse($conn,$query3);
         ociexecute($st3);
         ocifetch($st3);
         $nama_rev_item=ociresult($st3,1);
         $query4="select sum(total_usage) from PA_FACT_TRAFFIC_CC where PRODUCT_LINE_ID='$prod_line' and nip_nas='$nipnas' and year_id='$tahun_sajah' and month_id='$bulan_sajah' group by PRODUCT_LINE_ID";
         $st4 = ociparse($conn,$query4);
         ociexecute($st4);
         ocifetch($st4);
         $total_usage=ociresult($st4,1);
         $total=$revenue + $total_usage;
         echo $tahun." ".$bulan." ".$nama_cust." ".$nama_prod_line." ".$nama_rev_item." ".$revenue." ".$total_usage." ".$total."<br>";
         $total_revenue=$total_revenue+$total;
    $i++;
    }
    echo $total_revenue;
    //cost of product
    $query5="select * from PA_FACT_TRAFFIC_CC where nip_nas='$nipnas' and year_id='$tahunlalu' and month_id='$bulanlalu'";
    $st5 = ociparse($conn,$query5);
         ociexecute($st5);
         $total1=0;
         $total2=0;
         $total3=0;
         $total4=0;
         $total5=0;
    while (ocifetch($st5)){
         $nipnas=ociresult($st5,3);
         $lineid=ociresult($st5,4);
         $itemid=ociresult($st5,5);
         $call=ociresult($st5,6);
         $unit=ociresult($st5,7);
         $query6 = "select prod_line_lname from parameter.p_prod_line@dwhnas where prod_line_id='$lineid'";
         $st6 = ociparse($conn,$query6);
         ociexecute($st6);
         ocifetch($st6);
         $nama_prod_line=ociresult($st6,1);
         $query7 = "select REV_item_LNAME from parameter.p_rev_item@dwhnas where REV_item_ID='$itemid'";
         $st7 = ociparse($conn,$query7);
         ociexecute($st7);
         ocifetch($st7);
         $nama_rev_item=ociresult($st7,1);
    $query8 = "select * from cost_of_product where prod_line_lname='$nama_prod_line' and REV_item_LNAME='$nama_rev_item' and end_date is null";
         $st8 = ociparse($conn,$query8);
         ociexecute($st8);
         ocifetch($st8);
         $lineid_cost=ociresult($st8,1);
         $itemid_cost=ociresult($st8,2);
         $satuan=ociresult($st8,5);
         $nilai=ociresult($st8,7);
         if (strtoupper($satuan)=='MENIT') $total1=$total1+(($unit/60)*$nilai);
         if (strtoupper($satuan)=='KBPS') $total2=$total2+($unit*$nilai);
         if (strtoupper($satuan)=='SMS') $total3=$total3+($call*$nilai);
         if (strtoupper($satuan)=='SSL') $total4=$total4+($call*$nilai);
         if (strtoupper($satuan)=='SST') $total5=$total5+($call*$nilai);
    }
    $total_cost_pots=$total1+$total2+$total3+$total4+$total5;
    echo $total_cost_pots;
    ?>
    This script is just an example.

    Please convert it step by step. For example:
    (1) remove the inverted quotation marks ( ` )
    (2) modify the constraint syntax (PRIMARY KEY, UNIQUE KEY, KEY etc.) to [url http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/clauses002.htm#g1053592]Oracle constraints[/url]
    (3) modify some datatypes to [url http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements001.htm#i45441]Oracle datatypes[/url]
    (4) think about how to convert auto_increment ([url http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6015.htm#i2067093]Sequence[/url], [url http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7004.htm#i2235611]Before Trigger[/url] etc. on Oracle)
    http://download-west.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_packages.htm#sthref864
    http://download-west.oracle.com/docs/cd/B19306_01/appdev.102/b14261/toc.htm
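    As a starting point, a minimal sketch of the general shape of the PHP logic as a PL/SQL procedure; the table and column names come from the script, while the procedure and parameter names are illustrative and most of the per-row lookups are omitted:
    CREATE OR REPLACE PROCEDURE calc_revenue (
      p_customer IN VARCHAR2,
      p_year     IN NUMBER,
      p_month    IN NUMBER
    ) AS
      v_nip_nas       edo_customer_master_dives.nip_nas%TYPE;
      v_total_revenue NUMBER := 0;
    BEGIN
      -- One SELECT INTO replaces the ociparse/ociexecute/ocifetch calls.
      SELECT nip_nas INTO v_nip_nas
      FROM   edo_customer_master_dives
      WHERE  standard_name = p_customer;
      -- A cursor FOR loop replaces the PHP while (ocifetch(...)) loop.
      FOR r IN (SELECT product_line_id, SUM(revenue) AS revenue
                FROM   pa_fact_rev_billed_cc
                WHERE  nip_nas  = v_nip_nas
                AND    year_id  = p_year
                AND    month_id = p_month
                GROUP  BY product_line_id)
      LOOP
        v_total_revenue := v_total_revenue + r.revenue;
      END LOOP;
      -- DBMS_OUTPUT stands in for the PHP echo statements.
      DBMS_OUTPUT.PUT_LINE('Total revenue: ' || v_total_revenue);
    END calc_revenue;
    /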

  • How can I convert the volume directory into a single file installer?

    How can I convert the volume directory into a single-file installer? I would like to hide all the miscellaneous files that I don't care about, and be able to double-click a single file and have it install automatically.

    On the second prompt screen, when prompted 'What kind of self-extracting Zip file do you want to make?', are you choosing the second option (self-extracting Zip file for software installation)?
    I have a Word file that I created to help me remember - is there any way to email it to you?

  • Retrieve data from a large table from ORACLE 10g

    I am working on a Microsoft Visual Studio project that needs to retrieve data from a large table in an Oracle 10g database and export the data to the hard drive.
    The problem is that I am not able to connect to the database directly because of a license issue, but I can use a third-party API to retrieve data from the database. This API has sufficient privilege/license permission on the database to retrieve data. So I am not able to use DTS/SSIS or another tool that imports data by connecting to the database directly.
    My approach is to first retrieve the data using the API into a .NET DataTable and then dump the records from it onto the hard drive in a specific format (perhaps an Excel file or another SQL Server database).
    When I try to retrieve the data from a large table having over 13 lakh (1.3 million) records (3-4 GB) into a DataTable in the Visual Studio project, I get an out-of-memory exception.
    Is there a better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
    Any help on this problem will be highly appreciated.
    Thanks in advance...
    -Jahedur Rahman
    Edited by: Jahedur on May 16, 2010 11:42 PM

    Girish... thanks for your reply... but I am sorry for the confusion. Let me explain:
    1. "Export the data into another media into the hard drive." What does this line mean, i.e. another media on the hard drive?
    ANS: Sorry... I just want to write the data to a file or to a table in a SQL Server database.
    2. "I am not able to connect to the database directly because of license issue." Huh? I have never heard of a user not being able to connect to the DB because of a license. What error/message are you getting?
    ANS: My company uses a 3rd-party application that uses Oracle 10g. My company is licensed to use the 3rd-party application (app + database is a package) and did not purchase an Oracle license for direct use. So I cannot connect to the database directly.
    3. I am not sure which API you are talking about, but I am running a Visual Studio application with a data grid or similar controls, in which I can select (select query) as many rows as I need; no issue.
    ANS: The API is provided by the 3rd-party application vendor. I can pass a query to it and it returns a DataTable.
    4. "Is there a better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?"
    ANS: As I get a system error (out of memory) when I select all rows into a DataTable at once, I wanted to retrieve the data in multiple phases, as in the sketch below.
    E.g.: 1 to 20,000 records in the 1st phase,
    20,001 to 40,000 records in the 2nd phase,
    40,001 to ... records in the 3rd phase,
    and so on.
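    A minimal sketch of that kind of chunked retrieval on Oracle 10g, using ROWNUM windows over an ordered key (table and column names are hypothetical; ordering by an immutable key keeps the windows stable between queries):
    select *
    from  (select t.*, rownum as rn
           from  (select * from big_table order by id) t
           where rownum <= 40000)
    where rn > 20000;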
    Please let me know if this does not clarify your confusion... :)
    Thanks...
    -Jahedur Rahman
    Edited by: user13114507 on May 12, 2010 11:28 PM

  • Processing large volume of idocs using BPM Processing

    Hi,
    I have a scenario in which SAP R/3 sends a large volume, say 30,000 DEBMAS IDocs, to XI. XI then sends the data to 3 legacy systems using the JDBC adapter.
    I created a BPM process which waits for 4 hrs to collect all the IDocs. This is what my BPM does:
    1. Wait for 4 hrs and collect the IDocs.
    2. For every IDoc, do an IDOC->JDBC message transformation.
    3. Append to a Big List.
    4. Loop over the Big List from step 3, and in the loop:
    5. Start a counter from 0 and increment it. Append to a Small List.
    6. If the counter reaches 100, send a batch JDBC message in a send step.
    7. Reset the counter after every send.
    8. Process the remaining list, i.e. if the count is not a multiple of 100 (say 5,353 IDocs), the remaining 53 IDocs are sent in another block.
    After sending 5000 IDocs to the above BPM, the following problems occur:
    1. I cannot read the workflow log, as the system does not respond.
    2. In the For-Each loop which loops through the big list of, say, 5000 IDocs, only the first pass of 100 was processed; after that the workflow item does not move ahead. It remains in the status "STARTED", but I do not see further processing.
    Please tell me why certain work items are stuck. Is it because I have reached an upper limit, and is this the right approach? The main BPM process has also been hanging for the last 2 days.
    I have concerns about using BPM for processing such a high volume of IDocs in production. Please advise, and thanks in advance.
    Regards
    Ashish

    Hi Ashish,
    Please read SAP's checklist for the proper usage of BPMs: http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    One point I'm wondering about is why you send the IDocs out of R/3 one by one and don't use packaging there. From a performance standpoint this is much better than a BPM.
    The SAP Checklist states the following:
    <i>"No Replacement for Mass Interfaces
    Check whether it would not be better to execute particular processing steps, for example, collecting messages, on the sender or receiver system.
    If you only want to collect the messages from one business system to forward them together to a second business system, you should do so by using a mass interface and not an integration process.
    If you want to split a message up into lots of individual messages, also use a mass interface instead of an integration process. A mass interface requires only a fraction of the back-end system and Integration-Server resources that an integration process would require to carry out the same task. "</i>
    Also you might want to have a look at the IDoc packaging capabilities within XI (available since SP14, I believe): http://help.sap.com/saphelp_nw04/helpdata/en/7a/00143f011f4b2ee10000000a114084/content.htm
    And here is Sravya's good blog about this topic: /people/sravya.talanki2/blog/2005/12/09/xiidoc-message-packages
    If for whatever reason you can't or don't want to use the IDoc packets from R/3 or XI, there are other points on which you can focus to optimize your process:
    In the section "Using the Integration Server Efficiently" there is an overview of which steps are costly and which are not so costly in their resource consumption. Mappings are one of the steps that tend to consume a lot of resources, and unless it is a multi-mapping that cannot be executed outside a BPM, there is always the option to do the mapping in the interface determination either before or after the BPM. So I would suggest that if your step 2 is not a multi-mapping, you try to execute it before entering the BPM and just handle the JDBC messages in the BPM.
    Wait steps are also costly, so reducing the time in your wait step could potentially lead to better performance. Or, if possible, you could omit the wait step and just create a process that waits for 100 messages and then processes them.
    Regards
    Christine

  • Cannot Print PDF Large volume PDF.  Internal Error: 8004, 6343724, 8484240, 0

    We are having an issue with printing to PDF for large-volume FM files. Printing to PDF appears to work for small/medium-volume PDFs, however.
    Internal Error: 8004, 6343724, 8484240, 0
    FrameMaker 8.0.0 for Intel
    Build: 8.0p273
    I've checked the default printer and it is set to Adobe PDF. I've also tried printing to PDF from a different FM file, in case the current file might be corrupted, and the same error appears. At this point this issue is affecting our work, as we cannot save to PDF. Does anyone have any ideas that I can try?
    Thanks
    Steve

    Shelia,
    Thank you for your reply. Here are some further details on the questions that you asked:
    1. At first I thought the problem occurred because of the large volume: the PDF has 226 pages in total and consists of 19 FM files. (For comparison, the small/medium volumes are a PDF of 112 pages in total consisting of 16 FM files, and one of only around 20 pages in total consisting of only 1 FM file.)
    However, I now know the problem occurs in two specific, very small FM files. These are only 3 pages / 15 pages in total, each consisting of only one FM file. So I do not think the volume relates to this problem.
    2. I have always used "save it to PDF" to create PDF files so far. I am using the PDF distiller that came with FrameMaker 8.
    3. All files, including graphics, are on my local drive (C drive).
    So I'm wondering if the problem is due to the two specific FM files. If so, what are your recommendations for getting around this?
    Thanks
    Steve

  • [ERRORS] Converting large files to PDFs

    Some errors occur when I try to convert large files (about 30 MB... or maybe more... Word or PPT) to PDFs with LiveCycle PDF Generator.
    The same files can be converted to PDF with Acrobat Pro.
              Environment:
              Websphere 7.0
              DB2 9.5
              LiveCycle ES2
    The WebSphere log is as follows:
    [8/25/10 0:07:05:093 PDT] 00000008 TimeoutManage I   WTRN0006W: Transaction 0000012AA80DB61600000003000011F7F5576B5DC585E6443AFA415F07FC959B3436ED460000012AA80DB61600000003000011F7F5576B5DC585E6443AFA415F07FC959B3436ED4600000001 has timed out after 300 seconds.
    [8/25/10 0:07:05:093 PDT] 00000008 TimeoutManage I   WTRN0124I: When the timeout occurred the thread with which the transaction is, or was most recently, associated was Thread[WebContainer : 2,5,main]. The stack trace of this thread when the timeout occurred was:
        java.lang.Object.wait(Native Method)
        java.lang.Object.wait(Object.java:196)
        com.ibm.rmi.iiop.OutCallDesc.waitForResponse(OutCallDesc.java:67)
        com.ibm.rmi.iiop.Connection.send(Connection.java:2232)
        com.ibm.rmi.iiop.ClientRequestImpl.invoke(ClientRequestImpl.java:326)
        com.ibm.rmi.corba.ClientDelegate.invoke(ClientDelegate.java:436)
        com.ibm.CORBA.iiop.ClientDelegate.invoke(ClientDelegate.java:1184)
        com.ibm.rmi.corba.ClientDelegate.invoke(ClientDelegate.java:783)
        com.ibm.CORBA.iiop.ClientDelegate.invoke(ClientDelegate.java:1214)
        org.omg.CORBA.portable.ObjectImpl._invoke(ObjectImpl.java:484)
        com.adobe.native2pdf.bmc._ConverterAgentStub.convertToPdf(_ConverterAgentStub.java:36)
        com.adobe.pdfg.callbacks.NativeToPDFTransactionCallback.convertToPdf(NativeToPDFTransactionCallback.java:214)
        com.adobe.pdfg.callbacks.NativeToPDFTransactionCallback.doInTransaction(NativeToPDFTransactionCallback.java:188)
        com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean.doRequiresNew(EjbTransactionBMTAdapterBean.java:218)
        com.adobe.idp.dsc.transaction.impl.ejb.adapter.EJSLocalStatelessEjbTransactionBMTAdapter_3af08fdf.doRequiresNew(Unknown Source)
        com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:133)
        com.adobe.idp.dsc.transaction.impl.DefaultTransactionTemplate.execute(DefaultTransactionTemplate.java:79)
        com.adobe.pdfg.BMCCaller.invokeInSMT(BMCCaller.java:789)
        com.adobe.pdfg.Native2PdfCaller.callNativeBMC(Native2PdfCaller.java:1064)
        com.adobe.pdfg.Native2PdfCaller.createPDF(Native2PdfCaller.java:382)
        com.adobe.pdfg.GeneratePDFImpl.createPDFInternal(GeneratePDFImpl.java:501)
        com.adobe.pdfg.GeneratePDFImpl.createPDFCommon(GeneratePDFImpl.java:280)
        com.adobe.pdfg.GeneratePDFImpl.createPDF(GeneratePDFImpl.java:232)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
        java.lang.reflect.Method.invoke(Method.java:599)
        com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean.doBMT(EjbTransactionBMTAdapterBean.java:197)
        com.adobe.idp.dsc.transaction.impl.ejb.adapter.EJSLocalStatelessEjbTransactionBMTAdapter_3af08fdf.doBMT(Unknown Source)
        com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:95)
        com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:129)
        com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:225)
        com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:66)
        com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:208)
        com.adobe.aes.web.create.CreatePDFAct.createPDF(CreatePDFAct.java:415)
        com.adobe.aes.web.create.CreatePDFAct.createPDF2(CreatePDFAct.java:434)
        com.adobe.aes.web.create.CreatePDFAct.execute(CreatePDFAct.java:183)
        org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
        org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
        org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
        org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:738)
        com.adobe.aes.web.AesActionServlet.service(AesActionServlet.java:66)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
        com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1443)
        com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1384)
        com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:131)
        com.adobe.idp.um.auth.filter.AuthenticationFilter.doFilter(AuthenticationFilter.java:154)
        com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:188)
        com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:116)
        com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:41)
        com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:188)
        com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:116)
        com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:77)
        com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:852)
        com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:785)
        com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:443)
        com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:175)
        com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:91)
        com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:859)
        com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1557)
        com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:173)
        com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:455)
        com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:384)
        com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:83)
        com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
        com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
        com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
        com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
        com.ibm.io.async.ResultHandler.complete(ResultHandler.java:202)
        com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:766)
        com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:896)
        com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1527)
    [8/25/10 0:07:13:734 PDT] 00000041 WordToPDFConv A com.adobe.service.ProcessResource$ManagerImpl logJdk ALC-PDG-001-000-Calling function to clean registrys Resiliency entry
    [8/25/10 0:07:18:468 PDT] 00000040 WordToPDFConv A com.adobe.service.ProcessResource$ManagerImpl logJdk ALC-PDG-001-000-Calling function to clean registrys Resiliency entry
    [8/25/10 0:07:18:546 PDT] 0000003a DMAdapter     I com.ibm.ws.ffdc.impl.DMAdapter getAnalysisEngine FFDC1009I: Analysis Engine using data base: E:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01\properties\logbr\ffdc\adv\ffdcdb.xml
    [8/25/10 0:07:18:906 PDT] 0000003a FfdcProvider  I com.ibm.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on E:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01\logs\ffdc\server1_71917191_10.08.25_00.07.18.54619796.txt com.ibm.ejs.container.UserTransactionWrapper.commit 285
    [8/25/10 0:07:18:921 PDT] 0000003a EjbTransactio E com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean doRequiresNew The current transaction has been marked for rollback.  This means one of three things; 1) This transaction has timed-out (the timeout period was set to [470(sec)]470, while the actual transaction took [313(sec)]), 2) An unhandled exception occurred when calling another service (please check the logs for more detail), or 3) This is a JTA transaction and a service has explicitly marked this transaction for rollback
    [8/25/10 0:07:18:921 PDT] 0000003a EjbTransactio E com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean doRequiresNew TRAS0014I: The following exception was logged javax.transaction.RollbackException
        at com.ibm.tx.jta.TransactionImpl.stage3CommitProcessing(TransactionImpl.java:1217)
        at com.ibm.tx.jta.TransactionImpl.processCommit(TransactionImpl.java:991)
        at com.ibm.tx.jta.TransactionImpl.commit(TransactionImpl.java:913)
        at com.ibm.ws.tx.jta.TranManagerImpl.commit(TranManagerImpl.java:369)
        at com.ibm.tx.jta.TranManagerSet.commit(TranManagerSet.java:161)
        at com.ibm.ws.tx.jta.UserTransactionImpl.commit(UserTransactionImpl.java:293)
        at com.ibm.ejs.container.UserTransactionWrapper.commit(UserTransactionWrapper.java:305)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean.doRequiresNew(EjbTransactionBMTAdapterBean.java:220)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EJSLocalStatelessEjbTransactionBMTAdapter_3af08fdf.doRequiresNew(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:133)
        at com.adobe.idp.dsc.transaction.impl.DefaultTransactionTemplate.execute(DefaultTransactionTemplate.java:79)
        at com.adobe.pdfg.BMCCaller.invokeInSMT(BMCCaller.java:789)
        at com.adobe.pdfg.Native2PdfCaller.callNativeBMC(Native2PdfCaller.java:1064)
        at com.adobe.pdfg.Native2PdfCaller.createPDF(Native2PdfCaller.java:382)
        at com.adobe.pdfg.GeneratePDFImpl.createPDFInternal(GeneratePDFImpl.java:501)
        at com.adobe.pdfg.GeneratePDFImpl.createPDFCommon(GeneratePDFImpl.java:280)
        at com.adobe.pdfg.GeneratePDFImpl.createPDF(GeneratePDFImpl.java:232)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
        at java.lang.reflect.Method.invoke(Method.java:599)
        at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean.doBMT(EjbTransactionBMTAdapterBean.java:197)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EJSLocalStatelessEjbTransactionBMTAdapter_3af08fdf.doBMT(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:95)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:129)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:225)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:66)
        at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:208)
        at com.adobe.aes.web.create.CreatePDFAct.createPDF(CreatePDFAct.java:415)
        at com.adobe.aes.web.create.CreatePDFAct.createPDF2(CreatePDFAct.java:434)
        at com.adobe.aes.web.create.CreatePDFAct.execute(CreatePDFAct.java:183)
        at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
        at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
        at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
        at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:738)
        at com.adobe.aes.web.AesActionServlet.service(AesActionServlet.java:66)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1443)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1384)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:131)
        at com.adobe.idp.um.auth.filter.AuthenticationFilter.doFilter(AuthenticationFilter.java:154)
        at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:188)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:116)
        at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:41)
        at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:188)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:116)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:77)
        at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:852)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:785)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:443)
        at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:175)
        at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:91)
        at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:859)
        at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1557)
        at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:173)
        at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:455)
        at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:384)
        at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:83)
        at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
        at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
        at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
        at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
        at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:202)
        at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:766)
        at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:896)
        at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1527)
    What should I do? Please advise.
    Thanks

    Hi,
         Have you seen the following?
         http://blogs.adobe.com/livecycle/2008/10/livecycle_processing_big_docum.html
    http://blogs.adobe.com/livecycle/2008/10/livecycle_tuning_knob_default.html
    These articles both helped me.

  • Convert large AVI files to MPEG4 with PE11 to reduce size

    I have a lot of large AVI files from captured VHS tapes, and my computer's hard disk is full. Should I convert the large AVI files to MPEG4 using PE11 to reduce file size? And in the future, can I import the MPEG4 files into PE11 for video projects? I don't want to delete the AVI files until I know I can work with MPEG4.

    uberjaeger,
    My next objective will be to learn PE11 so I can edit the video.
    This Tips & Tricks article has links to many learning resources for PrE: http://forums.adobe.com/thread/800455?tstart=0
    Good luck,
    Hunt
