Update large volume data

Hi guys, I have a requirement to update one column in a table with a constant value. The volume of data is 1 billion rows and the column is already indexed. I am using a parallel hint, but the update has been running for 2 days. Can you please provide some suggestions?
UPDATE /*+ parallel(t,40) */ table t
SET t.col1 = '1';
COMMIT;

> I have a requirement to update one column in a table with a constant value. The volume of data is 1 billion rows and the column is already indexed.
Please clarify the requirement.
Are you updating that column in EVERY record to a constant value? If so then drop the index and do the update; the index won't be of any use if all records have the same column value.
Or are you only updating that column for SOME records to a constant value? For example, if the current value is NULL or 2 then you want to update it to 3?
If you are not updating every record, then how many of the 1 billion records need to be updated? For any substantial number I would suggest dropping the index, doing the update, and then recreating the index.
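For reference, a minimal sketch of the drop-index / parallel-update / rebuild approach described above (the names big_table and big_table_col1_ix are illustrative, and parallel DML has to be enabled explicitly or the hint only parallelizes the read side):
ALTER SESSION ENABLE PARALLEL DML;

DROP INDEX big_table_col1_ix;                  -- avoid maintaining the index during the mass update

UPDATE /*+ PARALLEL(t, 40) */ big_table t      -- note the '+' after '/*', otherwise the hint is just a comment
SET    t.col1 = '1';

COMMIT;

-- Recreate the index only if it is still useful (it won't be if every row now holds '1')
CREATE INDEX big_table_col1_ix ON big_table (col1) PARALLEL 40 NOLOGGING;
ALTER INDEX big_table_col1_ix NOPARALLEL;      -- reset the degree after the build
If the whole table is being rewritten anyway, a CREATE TABLE ... AS SELECT with the new constant followed by a rename is often faster still, since it avoids generating undo and redo for every row.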

Similar Messages

  • Large Volume Data Merge Issues with InDesign CS5

    I recently started using InDesign CS5 to design a marketing mail piece which requires merging data from Microsoft Excel.  With small tasks of up to 100 pieces I do not have any issues.  However, I need to merge 2,000-5,000 pieces of data through InDesign on a daily basis, and my current laptop was not able to handle merging more than 250 pieces at a time; the process of merging 250 pieces takes up to 30-45 mins if I get lucky and the software does not crash.
    To solve this issue, I purchased a desktop with a second-generation Core i7 processor and 8GB of memory, thinking that this would solve my problem.  I tried to merge 1,000 pieces of data with this new computer, and I was forced to restart Adobe InDesign after 45 mins of no results.  I then merged 500 pieces and the task was completed, but the process took a little less than 30 minutes.
    I need some help with this issue because I cannot seem to find another software package that can design my mail piece the way InDesign can, yet the time it takes to merge large volumes of data is very frustrating, as the software does crash from time to time after waiting a good 30-45 mins for completion.
    Any feedback is greatly appreciated.
    Thank you!

    Operating System is Windows 7
    I do not know what you mean by Patched to 7.0.4
    I do not have a crash report, the software just freezes and I have to do a force close on the program.
    Thank you for your time...

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up by FCC and then sent to XI. In XI it performs message mapping and then an XSL transformation in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows lots of problems, like (1) a JCo call error, or (2) sometimes XI even stops and we have to start it manually again for it to function properly.
    Please suggest some way to handle large volumes (file size up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed in the target system, you could perhaps split your source file into several messages by setting the Recordsets per Message parameter.
    However, you just want to convert your .txt file into an .xml file, so first try setting the
    EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This may solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, I mean the JCo call error...
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on)
    and then it is sent to the pipeline in the Integration Engine.
    Carlos

  • In OSB, XQuery issue with large volume data

    Hi ,
    I am facing a problem with an XQuery transformation in OSB.
    There is one XQuery transformation where I compare all the records, and if there are similar records I club them under the same first node.
    I am reading the input file from the FTP process. This works perfectly for small input data. With large input data it also works, but it takes a huge amount of time, the file moves to the error directory, and I see duplicate records created for the same input data. I am not seeing anything related to this file in the error log or the normal log.
    How can I check what exactly is causing the issue here, why the file is moving to the error directory, and why I am getting duplicate data for large input (approx. 1 GB)?
    My XQuery is something like the one below.
    <InputParameters>
    {
        for $choice in $inputParameters1/choice
        let $withSamePrimaryID  := $inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID]
        let $withSamePrimaryID8 := $inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME]
        return
            <choice>
            {
                if (data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID)) then
                    let $claimID := $withSamePrimaryID[1]/ClaimID
                    return <ClaimID>{ $claimID }</ClaimID>
                else
                    <ClaimID>{ data($choice/ClaimID) }</ClaimID>
            }
            </choice>
    }
    </InputParameters>

    Hi,
    I understand your use case is:
    a) read the file (from an FTP location... a .txt file, hopefully)
    b) process the file (your XQuery... I won't get into the details)
    c) do something with the result (send it to a backend system via a Business Service?)
    Also, as you noted, large files take a long time to be processed. This depends on the memory/heap assigned to your JVM, so that much is expected behaviour.
    On the other point, the file being moved to the error dir etc. - this could be the error handler doing its job (if you have one).
    If there are no error handlers, look at the timeout and error condition settings on your service.
    HTH

  • XML - large volume

    This is a beginner question, but I haven't found anything that answers it.
    Is XML suitable for large-volume data exchange? Does it depend on the capacity of the parser used?
    I am interested in using XML as the format of a fairly complex database extract of around 200,000 rows. Is this a reasonable use?

    Your decision to use XML for that size of a select depends on what you want to do with the data once you have it materialized. While wrapping it in XML will definitely bloat it, if you are trying to preserve its schema information on transfer and have it processed by generic standards-based parsers instead of custom code, it may well be worth it.
    Oracle XML Team
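    For reference, if the extract is produced from an Oracle database, a minimal sketch using the standard SQL/XML functions might look like this (the emp table and its columns are purely illustrative stand-ins for the 200,000-row extract query):
    SELECT XMLELEMENT("rows",
             XMLAGG(
               XMLELEMENT("row",
                 XMLFOREST(e.empno    AS "id",
                           e.ename    AS "name",
                           e.hiredate AS "hire_date")))) AS extract_doc
    FROM   emp e;
    Whether you generate one large document or many smaller ones mostly comes down to how much the consuming parser can comfortably hold in memory.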

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001  VARCHAR2(10)
    ...
    COL250  VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
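    For reference, the single-statement MERGE mentioned above would look roughly like this (the table names stg_long_thin and tgt_long_thin, the NVL substitution, and the validity check are illustrative, not the project's actual code):
    MERGE INTO tgt_long_thin t
    USING (SELECT s.case_num_id,
                  s.variable_id,
                  NVL(s.variable_value, 'N/A') AS variable_value   -- stand-in for the "special value" substitution rules
           FROM   stg_long_thin s
           WHERE  s.status = 'VALID'
           AND    NOT EXISTS (SELECT 1                             -- only load cases that are entirely valid
                              FROM   stg_long_thin x
                              WHERE  x.case_num_id = s.case_num_id
                              AND    x.status     <> 'VALID')) src
    ON    (t.case_num_id = src.case_num_id AND t.variable_id = src.variable_id)
    WHEN MATCHED THEN
      UPDATE SET t.variable_value = src.variable_value
    WHEN NOT MATCHED THEN
      INSERT (case_num_id, variable_id, variable_value)
      VALUES (src.case_num_id, src.variable_id, src.variable_value);
    With a statement in this shape, the usual suspects for the 90-minute runtime are index maintenance on the target table and the undo/redo generated for 15 million rows.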

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory-level training in writing SQL queries and been turned loose with SQL Server Management Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example, one table contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which add another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to perform a certain amount of data analysis so I can determine the proper data types for the fields (and whether any existing data would cause a problem when converting to the new data type), so I'll need to know what can be done to make such analysis possible without the process consuming entire days to analyze the data in one or two fields.
    I'm looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data into the database (current processing of a typical data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there is no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process which require full scans of the 650 million row table.  Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column.  Then create a new table (using SELECT INTO) that has strongly typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one.  You can follow up later to address column data corrections and/or transformations.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
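    A rough sketch of that approach (SQL Server 2012 or later; dbo.RawFeed and the column names are illustrative):
    -- 1. Count how many values in a candidate column fail conversion to the proposed type.
    SELECT COUNT(*)                                 AS total_rows,
           COUNT(TRY_CONVERT(date, SomeDateColumn)) AS convertible_rows,
           SUM(CASE WHEN SomeDateColumn IS NOT NULL
                     AND TRY_CONVERT(date, SomeDateColumn) IS NULL
                    THEN 1 ELSE 0 END)              AS failing_rows
    FROM   dbo.RawFeed;

    -- 2. Materialize a strongly typed copy of the clean columns, then swap it in.
    SELECT TRY_CONVERT(int,  CustomerId)     AS CustomerId,
           TRY_CONVERT(date, SomeDateColumn) AS SomeDate,
           SomeTextColumn                    -- left as varchar for now
    INTO   dbo.RawFeed_Typed
    FROM   dbo.RawFeed;

    -- DROP TABLE dbo.RawFeed;
    -- EXEC sp_rename 'dbo.RawFeed_Typed', 'RawFeed';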

  • Retrieve SQL from Webi report and process it for large volumes of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is such that business users want to build their own 'Adhoc Extracts'. The only way I could think of to achieve this is to build a universe, create the query, save the report without running it, then write a RAS SDK program to retrieve the SQL code from the report, save it into a .txt file, and process it directly in Teradata.
    Is there any predefined solution available with SAP BO, or any other tool, for this kind of scenario?

    Hi Shawn,
    Is there a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information, or even a pointer to where I can find information, would be helpful.
    Thanks in advance.
    Ashesh

  • Loading large volumes of arbitrary binary data to a clip

    It is easy to download external data in XML format to a movie clip. However, what I need is to load really large volumes of read-only binary data. In my case, a textual representation is not an option. Is it possible to download an arbitrary array of bytes into memory and then seek through this array to read individual bytes?
    I don't think that ActionScript arrays like this one
    var data:Array = [1,2,3,...];
    could be the solution to my problem either. The reason is that the virtual machine associates so much extra information with every array member.
    The only solution I have come up with so far is to pack the binary data as strings,
    var data:String = "\u0000\u1234\uabcd";
    two bytes per character. There should not be any storage overhead, and seeking to an individual data member is trivial.
    But I wonder: is there any better solution?

    For AS2 I don't believe there's any option other than to load it in as an encoded string and then decode it internally. So if you have \u0000 as in the above example, you will find it doesn't work:
    var data:String = "\u0000\u1234\uabcd";
    trace(data.length); // traces 0 (zero) because the first character is a string terminator
    I think you need an encoding method like base64 in the source string and an equivalent decoder class for decoding to binary inside Flash. I'm no expert on this stuff... others may know more, or it could be a starting point for your research.
    In the past I've used the meychi.com classes for this type of thing. I couldn't see them online just now... but there's something else here that may be useful:
    http://www.svendens.be/blog/archives/8
    With AS3 - as I understand it - there's no problem, because you can load binary data.

  • Can't erase or partition primary internal HD: "wiping volume data to prevent future accidental probing failed"

    Yesterday my computer (mid-2010 macbook pro) froze and after doing a hard reset my computer would hang during the bootup process.
    I then started the machine in recovery mode and tried to do a Repair Disk using the Disk Utility.
    The Repair Disk failed and indicated that the partition needed to be erased and recreated.
    When I tried to erase and recreate the partition, I got the error message: "wiping volume data to prevent future accidental probing failed"
    I am unable to create a new partition on my internal HD using Disk Utility. I keep getting the same error message. I've already tried the fixes mentioned in https://discussions.apple.com/thread/4490281?start=0&tstart=0
    I've tried creating the partition using Parted Magic but that fails as well
    When I start the machine now in recovery mode, it goes to internet recovery.
    I would suspect that this is an HD failure, but there are a couple of things that seem to indicate otherwise:
    The Lion recovery partition is still visible in Disk Utility, and when I run a Verify Disk on it, everything is OK
    Using Parted Magic, I've run a complete HD self test, and there are no errors reported
    Any help would be greatly appreciated.

    Hi there!
    Sounds like I have a similar problem, but with my MacBook Pro 13" (9,2, mid-2012).
    I have made some upgrades: a 500GB Intel 520 SSD, plus the original 500GB Toshiba 5400 rpm drive (in an Opticbay).
    It is time to swap the stock Toshiba for a larger one. My choice is a Hitachi 7K1000 Travelstar, 7200 rpm, 1TB (SATA III).
    When I try to set up the new HDD in the Opticbay, Disk Utility gives the error
    "wiping volume data to prevent future accidental probing failed"
    When I put it in the original HDD bay, everything is fine, but if the SSD is in the Opticbay the MacBook does not see it.
    I have read a lot of forums about this problem, and there are some suggestions on how to fix it - you need to install the EFI firmware updates manually - but that turns out to be impossible: when I try to install the update there is an alert -
    I really need some help with this! Maybe someone?

  • Managing large volumes of images

    I have a large volume of images. Mostly in raw format. I almost lost them all a few years ago when something happened to iPhoto 06. Since that time I avoided iPhoto and have been managing the file structure myself and using Lightroom.
    All my images are now stored on a NAS running Raid 0. I am feeling a little more secure now, so....
    ...I am interested to know what database improvements have been made to iPhoto. Is it safe to use with that much data? Does it work well with Lightroom? How does it work with Aperture, or does Aperture just replace iPhoto? Can the iPhoto or Aperture database reside on my NAS?
    Cheers.

    1. The protection against any database failure is a good current backup. iPhoto makes an automatic backup of the database file, which facilitates recovery from issues. However, this is not a substitute for a backup of the entire Library.
    2. The number of images is what's important to iPhoto, not the total file size. iPhoto is good for 250k images and we've seen reports on here from folks with more than 100,000. So it will work with volume.
    3. It doesn't work with Lightroom at all. This is germane to Aperture as well.
    iPhoto, Lightroom and Aperture are all essentially Database applications. All of them want to manage the data and will only share the data under certain protocols.
    Aperture and Lightroom don't actually edit photos. When you process a photo in these apps, the file is not changed. Instead, the decisions are recorded in the database and applied live when you view the pic. With Lightroom the only way to get an edited image to iPhoto is to Export from LR and then import to iPhoto. (There was a plug-in to automate that process but I have no idea if it's been updated since LR 1.)
    Aperture can share its Previews with iPhoto, but that's all. Otherwise you need to do the export/import dance.
    What communication there is between Aperture and iPhoto is designed to facilitate upgrading from iPhoto to Aperture. Yes, Aperture is a complete replacement for iPhoto.
    Neither the iPhoto nor Aperture Libraries can live on your NAS. However, the file management tools in Aperture are such that you can easily store the files on your NAS while the Library is on the HD. You can also do this with iPhoto but I wouldn't recommend it.
    Frankly, if you're a Raw shooter I don't understand why you would consider changing from the pro-level LR to a home user's iPhoto.
    Regards
    TD

  • /Volumes/Data/my computer.sparsebundle could not be accessed (error-1)

    Since upgrading to Lion I keep getting the "Volumes/Data/my computer name.sparsebundle could not be accessed (error -1)" message.  I will go into AirPort Utility, delete the backup and start over. It will work OK for a couple of weeks and then the message comes back.  I have the most recent firmware update in the Time Capsule.  Is there an issue with my Time Capsule itself, or do I need to upgrade to a larger one?  My current one is 1TB; my Mac's hard drive is roughly 75% full.

    I checked the pondini fixes and ended up doing the following - in case this could help someone else with OSX Lion and Time Capsule... http://pondini.org/TM/Troubleshooting.html
    First, I downloaded and used AirPort Utility 5.6 instead of 6.1 - BTW, I had to attempt the download and installation THREE times before the program actually installed.  Be persistent.
    Pondini #5: It's possible some names (of all things!) may be a problem.    See item C9.  I had to re-name my Mac, Time Capsule, network, etc... to remove all apostrophes and spaces.
    Pondini #7:  If you have WD SmartWare installed on Lion, it's not compatible, per RoaringApps, and there are reports it can cause this problem as well.   Use Western Digital's uninstaller, or delete the app from /Applications and the files com.wdc.WDDMservice.plist and com.wdc.WDSmartWareServer.plist from /Library/LaunchDaemons.   There may also be a file in Library/Application Support.
    I then had to shut down computer, unplug Time Capsule, unplug modem; plug everything in again, then reboot computer; then had to go to Apple- System Preferences- Network- and reselect my Time Capsule to connect to WiFi again.
    Then opened Time Machine preferences again and attempted back up - it worked!

  • PSD vs. Layered TIF (on large volumes)

    Platform is Windows XP, InDesign CS.
    We currently have 750GB volumes (Windows 2003 servers) where the InDesign documents and all graphic files reside.
    We are moving data to a new server (NetApp storage) where the volume size is 3TB. We have noticed that working with InDesign documents that have placed PSD files is slow: slow to place PSDs, slow to print to a printer or to file. However, if we save all the graphic elements as layered TIF files then it all becomes quicker. Is there an issue with using LARGE volumes (or shares)? I ask this as we don't have speed issues with smaller volume sizes.
    We have also noted that when using InDesign CS3 with placed PSDs that reside on large volumes, times are better; however, upgrading to CS3 is not possible for us at this time as our desktop workstations are under-specced. Hardware updates are planned but will take many months to implement.

    No, these are not large files; they range between 5 and 80 MB in size. Adobe informs me that InDesign CS3 handles PSD files better. But I'm still very curious as to why, when I have migrated data from a small volume to a very large volume, InDesign is affected by it whenever PSD files are involved.

  • Error message from Time Machine: "Unable to complete backup.  An error occurred while linking files on the backup volume."  Using Time Machine Buddy, I got the following log entries: "Event store UUIDs don't match for volume: data"

    More from the Time Machine Buddy:  "Node requires deep traversal:/Volumes/Data"  and "reason: kFSEDBEventFlagMustScanSibDirs"
    I am using a Mac G5 with Leopard.  My LaCie data drive failed and I replaced it with a WD MyPassport drive, at which time the failed backups began.  The recent file updates do appear in Time Machine, but it appears to back up everything since the last completed backup.

    Hi Norm,
    Could be many things, we should start with this...
    "Try Disk Utility
    1. Insert the Mac OS X Install disc, then restart the computer while holding the C key.
    2. When your computer finishes starting up from the disc, choose Disk Utility from the Installer menu at top of the screen. (In Mac OS X 10.4 or later, you must select your language first.)
    *Important: Do not click Continue in the first screen of the Installer. If you do, you must restart from the disc again to access Disk Utility.*
    3. Click the First Aid tab.
    4. Select your Mac OS X volume.
    5. Click Repair Disk, (not Repair Permissions). Disk Utility checks and repairs the disk."
    (Repair both Drives)
    http://docs.info.apple.com/article.html?artnum=106214
    Then try a Safe Boot, (holding Shift key down at bootup), run Disk Utility in Applications>Utilities, then highlight your drive, click on Repair Permissions, reboot when it completes.
    (Safe boot may stay on the gray radian for a long time, let it go, it's trying to repair the Hard Drive.)

  • Processing large volume of idocs using BPM Processing

    Hi,
    I have a scenario in which SAP R/3 sends a large volume, say 30,000 DEBMAS IDocs, to XI. XI then sends the data to 3 legacy systems using the JDBC adapter.
    I created a BPM process which waits for 4 hrs to collect all the IDocs. This is what my BPM does:
    1. Wait for 4 hrs and collect the IDocs.
    2. For every IDoc, do an IDoc->JDBC message transformation.
    3. Append to a Big List.
    4. Loop over the Big List from step 3, and in the loop:
    5. Start a counter from 0 and increment it. Append to a Small List.
    6. If the counter reaches 100, send a batch JDBC message in a send step.
    7. Reset the counter after every send.
    8. Process the remaining list, i.e. if there was an odd count of say 5300 IDocs then the remaining 53 IDocs will be sent in another block.
    After sending 5000 IDocs to the above BPM, the following problems occur:
    1. I cannot read the workflow log, as the system does not respond.
    2. In the For-Each loop, which loops through the big list of say 5000 IDocs, only the first pass of 100 was processed; after that the workflow item does not move ahead. It remains in the status "STARTED" but I do not see further processing.
    Please tell me why certain work items are stuck - is it because I have reached an upper limit, and is this the right approach? The main BPM process has also been hanging for the last 2 days.
    I have concerns about using BPM for processing such a high volume of IDocs in production. Please advise, and thanks in advance.
    Regards
    Ashish

    Hi Ashish,
    Please read SAPs Checklist for proper usage of BPMs: http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    One point I'm wondering about is why you send the IDocs out of R/3 one by one and don't use packaging there. From a performance standpoint this is much better than a BPM.
    The SAP Checklist states the following:
    <i>"No Replacement for Mass Interfaces
    Check whether it would not be better to execute particular processing steps, for example, collecting messages, on the sender or receiver system.
    If you only want to collect the messages from one business system to forward them together to a second business system, you should do so by using a mass interface and not an integration process.
    If you want to split a message up into lots of individual messages, also use a mass interface instead of an integration process. A mass interface requires only a fraction of the back-end system and Integration-Server resources that an integration process would require to carry out the same task. "</i>
    Also, you might want to have a look at the IDoc packaging capabilities within XI (available since SP14, I believe): http://help.sap.com/saphelp_nw04/helpdata/en/7a/00143f011f4b2ee10000000a114084/content.htm
    And here is Sravya's good blog about this topic: /people/sravya.talanki2/blog/2005/12/09/xiidoc-message-packages
    If for whatever reason you can't or don't want to use the IDoc packets from R/3 or XI there are other points on which you can focus for optimizing your process:
    In the section "Using the Integration Server Efficiently" there is an overview of which steps are costly and which are not so costly in their resource consumption. Mappings are one of the steps that tend to consume a lot of resources, and unless it is a multi-mapping that cannot be executed outside a BPM, there is always the option of doing the mapping in the interface determination either before or after the BPM. So I would suggest that if your step 2 is not a multi-mapping, you try to execute it before entering the BPM and just handle the JDBC messages in the BPM.
    Wait steps are also costly, so reducing the time in your wait step could potentially lead to better performance. Or, if possible, you could omit the wait step and just create a process that waits for 100 messages and then processes them.
    Regards
    Christine
