Parallel processing for one large message

I am having some trouble with messaging performance.
Sender: ABAP Proxy
Receiver: File Adapter
I'd like to use parallel processing for one large message.
However, the receiver side must still produce a single file.
Could you let me know how to set this up?
Best regards,
Koji Nagai

Hi,
Can you elaborate on your requirement?
How are you trying to achieve parallel processing in XI?
Since you mentioned that the source is a proxy, there should be some trigger mechanism, say a selection screen. You can restrict the value ranges there to split the load, and use the append strategy in the File adapter so the parallel runs still write to a single file.
Regards
Krish

Similar Messages

  • Parallel processing for increasing the performance

    What are the various ways of parallel processing in Oracle, especially using hints?
    Please let me know if there is any online documentation for understanding the concept.

    First of all, as a rule of thumb: don't use hints. Hints make programs too inflexible. A hint may be good today, but might make things worse in the future.
    There are lots of documents available concerning parallel processing:
    Just go to http://www.oracle.com/pls/db102/homepage?remark=tahiti and search for parallel (processing)
    In my experience with 10g, enabling parallel processing can actually slow down processing dramatically for regular tables. The reason is the large number of waits in the coordination of the parallel processes.
    If, however, you are using parallel processing on partitioned tables, it works excellently. In this case, take care to choose the partitioning criterion properly so that processing can be distributed.
    If, for example, your queries/DMLs work on data corresponding to a certain time range, don't use the date field as the partitioning criterion, since in that case parallel processing might work on just a single partition, which again would result in massive waits for process coordination.
    Choose another criterion that distributes the data to be accessed over at least <number of CPUs -1> partitions (one CPU is needed for the coordination process). Additionally, consider using parallel processing only where large tables are involved.
    Compare the situation with writing a book: if several people are to write a (technical) book of just 10 pages, parallelizing makes no sense at all in terms of time saved. If, however, the book is planned to have 10 chapters, each chapter can be written by a different author, reducing the overall time to about 1/10 of what a single author would need for all chapters.
    To enable parallel processing for a table, use the following statement:
    alter table <table name> parallel [<integer>];
    If you don't use the <integer> argument, the DB will choose the degree of parallelism; otherwise it is controlled by your <integer> value. Remember that you always need a coordinator process, so don't choose <integer> to be larger than <number of CPUs minus 1>.
    You can check the degree of parallelism by the degree column of user_/all_/dba_tables.
    To do some timing tests, you can also force parallel DML/DDL/query for your current session:
    ALTER SESSION FORCE PARALLEL DML/DDL/QUERY [<PARALLEL DEGREE>];
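    If you want to script these steps, here is a minimal JDBC sketch, assuming a hypothetical table BIG_TABLE and placeholder connection details; it sets a parallel degree of 3 and reads it back from the DEGREE column mentioned above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ParallelDegreeDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials -- replace with your own.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
            try {
                Statement st = con.createStatement();
                // Degree 3 leaves one CPU free for the coordinator on a 4-CPU box.
                st.execute("ALTER TABLE big_table PARALLEL 3");
                // Read the degree back from the data dictionary.
                PreparedStatement ps = con.prepareStatement(
                        "SELECT degree FROM user_tables WHERE table_name = ?");
                ps.setString(1, "BIG_TABLE");
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    // DEGREE is a padded VARCHAR2 column, hence the trim().
                    System.out.println("Parallel degree: " + rs.getString(1).trim());
                }
            } finally {
                con.close();
            }
        }
    }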

  • Parallel processing for ABAP programs in process chains

    Hi All,
    In one of our process chains we have added an ABAP program. In the backend, the job runs as "BI_PROCESS_ABAP".
    I just want to know: as with DTPs, can we set up parallel processing for ABAP programs as well? Please suggest.
    Thanks.

    Hello Jalina
    Also check with Basis that the memory allocated to run this program has not overflowed. Keep the selections in your ABAP program in small chunks, and use variants to run them in parallel or in series.
    Thanks
    Abhishek Shanbhogue

  • The parallel process for MRP

    Hi experts,
    We plan to run the scope of planning for total planning as a background job.
    While setting this up, the system asks for parallel processing for MRP.
    What are the Customizing steps and the procedure for setting up parallel processing for MRP?

    Dear Raj,
    With the help of parallel processing procedures, you can significantly improve the runtime of the total planning run.
    To process in parallel, you can either select various sessions on the application server or various servers.
    Parallel processing runs according to packages using the low-level code logic:
    The work package, with a fixed number of materials that are internally defined in the program, is distributed over the individual servers/sessions. Once a server/session has finished processing a package, it starts processing the next package.
    If a low-level code is being planned, the servers/sessions that have finished must wait until the last server/session has finished its package to avoid inconsistencies. Then the next low-level code is processed per packages.
    The parallel processing procedure is switched on in the initial screen of total planning.
    Activities
    Define the application server with the number of sessions that can be used:
    If you want to define various servers for parallel processing, enter the server with the number of sessions.
    If you only want to use one server, but several sessions, enter the application server and the appropriate number of sessions.
    Further notes
    Parallel processing shortens the time required for calculation; however, it cannot shorten the database time, as the system still operates on only one database.
    The Customizing transaction is OMIQ.
    Regards
    PSV

  • Duplicate IR through parallel processing for automated ERS

    Hi,
    We get a duplicate IR issue in production when running the parallel-processing job for automated ERS. The issue does not happen every time, only once in a while; in June, for example, it happened twice. On those days the job took more time than usual. We are unable to replicate the scenario: when I test, the job creates the IRs successfully. What could be the reasons for this issue?

    Wow - long post to say "can I use hardware boxes as inserts?" and the answer is yes, and you have been able to for a long time.
    I don't know why you're doing some odd "duplicated track" thing... weird...
    So, for inserts of regular channels, just stick Logic's I/O plug on the channel. Tell it which audio output you want it to send to, and which audio input to receive from. Patch up the appropriate ins and outs on your interface to your hardware box/patchbay/mixer/whatever and bob's your uncle.
    You can also do this on aux channels, so if you want to send a bunch of tracks to a hardware reverb, you'd put the I/O plug on the aux channel you're using in the same way as described above. Now simply use the sends on each channel you want to send to that aux (and therefore hardware reverb).
    Note you'll need to have software monitoring turned on.
    Another way is to just set the output of a channel or aux to the extra audio outputs on your interface, and bring the outputs of your processing hardware back into spare inputs and feed them into the Logic mix using input objects.
    Lots of ways to do it in Logic.
    And no duplicate recordings needed...
    I still don't understand why the Apple developers didn't think of including such a plug-in, because it could allow amazing routing possibilities. In this case, you could send the audio track to the main output (1-2 or whatever) but also to alternate hardware outputs, so you could use a hardware reverb unit plus a hardware delay unit etc. to which the audio track is sent, and then blend the results back in Logic more easily.
    You can just do this already with mixer routing alone, no plugins necessary.

  • Parallel processing for information broadcasting

    Hi SDN,
    How can we control parallel processing for information broadcasting in BI background management?
    An early answer would be appreciated.
    Thanks in Advance.
    Namrata

    Hi,
    I agree with the above postings.
    You can find more details in the link below:
    http://help.sap.com/saphelp_nw70/helpdata/en/ef/4c0b40c6c01961e10000000a155106/frameset.htm
    hope this helps
    Regards,
    rik

  • Job fail with Timeout for parallel process (for SID Gener.): 006000

    Hello all,
    I'm getting the error below and am not able to find any issue on the Basis side. Can anyone please help with this?
    Job started
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Background process BCTL_4XU7J1JPLOHYI3Y5RYKD420UL terminated due to missing confirmation
    Process 000006 returned with errors
    Data pkgs 000001; Added records 1-; Changed records 0; Deleted records 0
    Log for activation request ODSR_4XUG2LVXX3DH4L1WT3LUFN125 data package 000001...000001
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Activation is running: Data target CRACO20A, from 1,732,955 to 1,732,955
    Overlapping check with archived data areas for InfoProvider CRACO20A
    Data to be activated successfully checked against archiving objects
    Parallel processes (for Activation); 000005
    Timeout for parallel process (for Activation): 006000
    Package size (for Activation): 100000
    Task handling (for Activation): Backgr Process
    Server group (for Activation): No Server Group Configured
    Parallel processes (for SID Gener.); 000002
    Timeout for parallel process (for SID Gener.): 006000
    Package size (for SID Gener.): 100000
    Task handling (for SID Gener.): Backgr Process
    Server group (for SID Gener.): No Server Group Configured
    Activation started (process is running under user *****)
    Not all data fields were updated in mode "overwrite"
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Report RSODSACT1 ended with errors
    Job cancelled after system exception ERROR_MESSAGE

    Thanks for the link, TSharma. I will try that today.
    UPDATE:
    I ran a non-parallel Data Pump export and just let it run overnight. This time it finished after 9 hours. In this run I set the STATUS=300 parameter in the PARFILE, which basically echoes STATUS updates to standard out every 300 seconds (5 minutes).
    As before, it finished 99% of the export after 2 hours and just spat out WAITING status for the last 7 hours until it completed. The remaining tables it exported (a few hundred) were all very small or had ZERO rows. There is clearly something going on that is not normal. I've done this expdp before on clones of this database and it usually takes about 2-2.5 hours to finish.
    The database is about 415 Gigabytes in size.
    I will update what the TRACE finds and I'm also opening a case with MOS.

  • Best practices for handling large messages in JCAPS 5.1.3?

    Hi all,
    We have run into problems while processing large messages in JCAPS 5.1.3. Actually, they are not that large: only 10-20 MB.
    Our setup looks like this:
    We retrieve flat-file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of JCDs with JMS queues between them.
    It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously, the logicalhost freezes and crashes in one of the conversion steps without any error message in the logicalhost log. We can't relate the crashes to a specific JCD, and the memory consumption of the logicalhost process increases A LOT while handling the messages. After a restart of the server, the messages that are in the queues are usually converted OK. Sometimes, however, we have seen messages simply disappear. Scary stuff!
    I have heard of two possible solutions for handling large messages in JCAPS so far: splitting them into smaller chunks, or streaming them. Neither is an option in our setup.
    We have tweaked the JVM memory settings without any improvement, and we have discussed the issue with Sun's support, but they have not been able to help us yet.
    My questions:
    * Any ideas how to handle large messages most efficiently?
    * Any ideas why the crashes occur without error messages in the logs or nothing?
    * Any ideas why messages sometimes disappear?
    * Any other suggestions?
    Thanks
    /Alex

    * Any ideas how to handle large messages most efficiently?
    Strictly speaking, if you want to send the entire file content in the JMS message, then I don't have an answer to this question.
    Generally we use the following process: after reading the file from the FTP location, we archive it in a local directory and send a JMS message to the queue containing only the file name and file location. In most places we never send file content in a JMS message.
    * Any ideas why the crashes occur without error messages in the logs or anything?
    Whenever the JMS IQ Manager's memory usage gets too high, logicalhosts stop processing. I would not say they are down; they stop processing, or processing takes a very long time.
    * Any ideas why messages sometimes disappear?
    Unless persistence is enabled, I believe there is a high chance of losing a message when a logicalhost goes down. This is not always the case, but we have faced a similar issue when the IQ Manager was flooded with a lot of messages.
    * Any other suggestions?
    If the file is large, it is better to stream it from the FTP location to a local directory and send only the file location in the JMS message.
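    To make the file-location (claim check) approach concrete, here is a minimal sketch using the plain javax.jms 1.1 API; the JNDI names and file paths are hypothetical placeholders:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class ClaimCheckSender {
        public static void main(String[] args) throws Exception {
            // 1. Archive the downloaded file locally instead of queueing its content.
            Path downloaded = Paths.get("/tmp/inbound/feed.dat"); // placeholder path
            Path archive = Paths.get("/data/archive/feed.dat");   // placeholder path
            Files.copy(downloaded, archive, StandardCopyOption.REPLACE_EXISTING);

            // 2. Send a small "claim check" message carrying only the location.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // placeholder JNDI name
            Queue queue = (Queue) ctx.lookup("jms/FeedQueue");               // placeholder JNDI name
            Connection con = cf.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                TextMessage msg = session.createTextMessage("file ready");
                msg.setStringProperty("fileName", archive.getFileName().toString());
                msg.setStringProperty("fileLocation", archive.getParent().toString());
                producer.send(msg);
            } finally {
                con.close();
            }
        }
    }

    The consumer then opens the file from fileLocation itself, so the broker never has to hold the 10-20 MB payload.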
    Hope it helps.

  • Parallel processing for program RBDAPP01

    Hi All,
    I am running the program RBDAPP01 every 30 minutes to clear IDocs in error (status 51, "Application document not posted"). Each run only clears a few IDocs because of the status message "Object requested is currently locked by user ADMINJOBS": while one IDoc is being updated, a second one for the same order, customer, material, and plant (but a different ship-to party) tries to update at the same time, finds the object locked, and cannot be posted.
    Can anyone tell me what parallel processing is, and whether it will help in my case?
    Thanks

    You didn't specify which release you use, so I can only give some general suggestions:
    Note 547253 - ALE: Wait for end of parallel processing with RBDAPP01
    Note 715851 - IDoc: RBDAPP01 with parallel processing
    Markus

  • B2B batching fails for one failed message

    We have B2B batching implemented for outbound messages. If any one of the messages (to be batched in the next cycle) ends in an error, the whole batch is stopped and no message is completed.
    Is there any setup we can do so that the other messages still get batched and processed successfully?
    Regards
    Ravdeep

    B2B batching fails for a failed message
    Regards,
    Anuj

  • Parallel Processing for a single Package

    Hi,
    I have a package, Pkg1, containing a mixture of For Each Loop containers, DFTs, and Sequence containers, and I want to run more than one thread for this package so I can process data in parallel.
    Please let me know how I can set this up in SSIS 2012.
    Thanks,

    Hi,
    The DFTs are connected by precedence constraints, and I want to run this package more than once (multiple threads) at a given point in time. Is this possible? If yes, please let me know how I can achieve it.
    Thanks.
    If the DFTs are connected, then there will be absolutely no parallel processing, and running the same package in parallel will most likely result in a lock. It depends on how it is architected, but with an RDBMS in a default installation, or with files, it is not going to fly.
    When you have DFTs with, say, OLE DB destinations, each using its own connection, and the DFTs are not connected, then each connection is opened independently, allowing you to ingest data simultaneously.
    Arthur My Blog

  • One large message or multiple small ones?

    Hi:
    I'm working in an application using weblogic 8.1.
    We receive, on a JMS queue, a large message containing thousands of transactions to be processed.
    Which would be more efficient: processing all the transactions in the message with one MDB, or splitting the large message into smaller ones and processing them "in parallel" with multiple MDBs, somehow joining all the results to return as the overall result (hiding this split/join from the client)?
    Keeping all the execution in the same transaction would be desirable, but we could create multiple independent transactions to process each small message; in any case, the result must consistently report which transactions were processed successfully and which weren't.
    If splitting is the way to go, are there any tips on doing it?
    thanks in advance.
    Guillermo.

    Hi Guillermo,
    I would recommend reading the book
    Enterprise Integration Patterns by Gregor Hohpe
    http://www.eaipatterns.com/
    This book describes the split and join process very well.
    If the messaging overhead is small compared to the total job time, it is a good idea to split
    (provided the tasks also run well in parallel and get out of the DB quickly if you run against a single DB).
    Are you doing transactions against a DB?
    Are you using several computers to distribute the work or a multiprocessor computer?
    You can also test performance and try to split to 1, 5 or 10 transactions/msg.
    When you split, keep track of the total number of transactions per job (= N);
    when the number of successful transactions plus the number of failed transactions equals N, you are finished.
    You could send the result per msg to a success or failure channel.
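    As a minimal sketch of that counting scheme (queue names and the chunk format are hypothetical; plain javax.jms 1.1 API), a splitter could stamp each chunk with a job id and the total count N:

    import java.util.List;
    import java.util.UUID;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class TransactionSplitter {
        // Splits one large batch into messages of chunkSize transactions each.
        public static void send(List<String> transactions, int chunkSize) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // placeholder JNDI name
            Queue workQueue = (Queue) ctx.lookup("jms/WorkQueue");           // placeholder JNDI name
            int chunks = (transactions.size() + chunkSize - 1) / chunkSize;
            String jobId = UUID.randomUUID().toString();
            Connection con = cf.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(workQueue);
                for (int i = 0; i < chunks; i++) {
                    List<String> chunk = transactions.subList(
                            i * chunkSize, Math.min((i + 1) * chunkSize, transactions.size()));
                    TextMessage msg = session.createTextMessage(String.join("\n", chunk));
                    msg.setStringProperty("jobId", jobId);     // correlates chunks of one job
                    msg.setIntProperty("totalChunks", chunks); // N, for the aggregator
                    producer.send(msg);
                }
            } finally {
                con.close();
            }
        }
    }

    Each worker MDB would then report its chunk's outcome to the success or failure channel tagged with the same jobId; the aggregator is finished when the two counts add up to totalChunks.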
    Regards,
    Magnus Strand

  • Two IDocs are created for one output message type

    hi all,
    we are communicating our sap idocs to external system using ALE.It is working smoothly.
    Our problem raises here,
    .idocs are creating at the time of output type attachment for purchase orders.But rarely,two idocs are creating for one message type.It means two idocs are created for same Purchase order.It makes complications for the external system.
    anyone can help me pls?
    thnks in advance..........

    Thanks, Jurgen, for your reply.
    Jurgen, the second IDoc is not an "ORDCHG"; it is a replica of the first IDoc and also has the "ORDERS" message type. The only difference between these IDocs is the time: the time field segments differ by only three (3) seconds. We think this comes from a system error. If it is not a system error, please give me an explanation.
    Thanks in advance.

  • Writing test case for One-way messaging returns fault

    Hi All,
    We are using the latest version of SOA Suite, 11.1.1.7. There is a SOA composite which gets a data feed from an external party using the one-way message exchange pattern and stores the feed in the database,
    i.e. Web Service component ----> Mediator ----> DBAdapter. The composite uses the "one-way returns fault" pattern, which returns faults only when they occur, and this seems to work as expected: I can see runtime errors (like database not available) returned to the Mediator and the caller of the web service. However, I am unable to write a test case for the fault returned when the database adapter cannot connect to the database, as the 'Emulates' tab is disabled.
    Happy to provide more details if required.
    Thanks

    Hi,
    Looking at the error you got, it indicates that in BPEL2 you are not catching the specific fault thrown by BPEL1.
    Regards,
    Santosh Hemashekar

  • RemoveContext for only one level (Message Mapping)

    Hi,
    Is it possible to remove context for only one level?
    For example, if I apply the [RemoveContext] function to <Item>,
    I will get {A1,A2,B1,B2,C1,C2,C3,C4}:
    <SourceRoot>
    <A>
    <Item r="a">A1</Item>
    <Item r="b">A2</Item>
    </A>
    <A>
    <Item r="a">B1</Item>
    <Item r="b">B2</Item>
    </A>
    <A>
    <Item r="a">c1</Item>
    <Item r="b">c2</Item>
    <Item r="c">c3</Item>
    <Item r="d">c4</Item>
    </A>
    </SourceRoot>
    But I need only the items coming under a particular <A>,
    like {B1,B2}.
    Is there any way to do this?
    Can anyone help?
    Thanks in Advance

    Hi,
    I was just trying to give an example. In my real case:
    SOURCE - INVOIC02 IDoc
    TARGET - cXML structure for Invoice
    The complexity comes at line-item level, where a situation similar to the one I explained needs to be solved.
    [REMOVECONTEXT] helped me a lot, but in this situation I want to restrict it to a single level.
    For example:
    <INVOIC02>
    <IDOC>
    <EIEDP01>
    <EIEDP02 QUALF="001">.. </EIEDP02>
    <EIEDP02 QUALF="002">.. </EIEDP02>
    <EIEDP02 QUALF="0xx">.. </EIEDP02>
    </EIEDP01>
    <EIEDP01>
    <EIEDP02 QUALF="001">.. </EIEDP02>
    <EIEDP02 QUALF="002">.. </EIEDP02>
    <EIEDP02 QUALF="0xx">.. </EIEDP02>
    </EIEDP01>
    </IDOC>
    </INVOIC02>
    1. <EIEDP01> is mapped directly to <InvoiceDetailOrder>.
    The internal elements are organized differently in <InvoiceDetailOrder> compared to the IDoc element <EIEDP01>.
    For example, I want to take all the <EIEDP02> segments coming under one line item <EIEDP01>.
    <EIEDP02> cannot be mapped directly to a cXML element; that can only be done after checking its QUALF attribute.
    For this I applied [REMOVECONTEXT], but I am getting all the QUALF values.
    I need to restrict it to the <EIEDP01> level.
    Regards
    Chemmam
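    One way to get this "remove context for one level only" effect is to do the qualifier filtering per <EIEDP01> context in a user-defined function, instead of applying RemoveContext globally. Here is a minimal, self-contained sketch of that per-context logic in plain Java (the nested arrays and QUALF values are hypothetical sample data, and the actual UDF signature depends on your PI release):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class PerContextFilterDemo {

        // Keeps only values whose qualifier matches, separately per parent context,
        // so hits from one <EIEDP01> are never mixed with those of another.
        static List<String> filterPerContext(List<String[]> qualfContexts,
                                             List<String[]> valueContexts,
                                             String wantedQualf) {
            List<String> out = new ArrayList<String>();
            for (int ctx = 0; ctx < qualfContexts.size(); ctx++) {
                String[] qualfs = qualfContexts.get(ctx);
                String[] values = valueContexts.get(ctx);
                for (int i = 0; i < qualfs.length; i++) {
                    if (wantedQualf.equals(qualfs[i])) {
                        out.add(values[i]); // stays within this <EIEDP01> context
                    }
                }
            }
            return out;
        }

        public static void main(String[] args) {
            // Two <EIEDP01> contexts, each with its own <EIEDP02> qualifiers/values.
            List<String[]> qualfs = Arrays.asList(
                    new String[] {"001", "002"},
                    new String[] {"001", "002", "0xx"});
            List<String[]> values = Arrays.asList(
                    new String[] {"A-net", "A-tax"},
                    new String[] {"B-net", "B-tax", "B-other"});
            // Prints [A-net, B-net]: one hit per line item, never mixed across items.
            System.out.println(filterPerContext(qualfs, values, "001"));
        }
    }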
