Large payload data to SOA platform

How can we upload large payload data in our BPEL process? Please let me know your suggestions.
Thank you,
Balaji

Hi,
Thanks for your reply!
Actually, I got this error:
Error parsing envelope: (92635, 79) Expected 'EOF'.; nested exception is:
     javax.xml.soap.SOAPException: Error parsing envelope: (92635, 79) Expected 'EOF'.
when the WS method returns a large payload.
I get this error only when the WS returns a lot of data, so I suppose that is the problem.
I've tried your solution, but nothing has changed.
Any ideas?
Many thanks,
Regards

Similar Messages

  • How to extract payload data from SOA database schema using Java

    I am trying to extract the payload data and output it as XML text files using Java. It seems the data is stored in the SOA table XML_DOCUMENT. I am trying the following Java code to get started, and it's not working as I would expect. I only get a few actual lines of output and, when I do, I only get the <?xml version ... ?> line.
    I appreciate any advice on extracting the payload data from the database. Ultimately I will want to include the composite instance ID in the SQL, but for now I'm just using the code shown here:
    // Uses oracle.jdbc.pool.OracleDataSource plus the BinXML classes from the Oracle XDK
    // (oracle.xml.binxml.*, oracle.xml.parser.v2.*, oracle.xml.scalable.InfosetReader).
    OracleDataSource ods = new OracleDataSource();
    ods.setURL("soa_db_connection_string");
    ods.setUser("soa_db_user_id");
    ods.setPassword("soa_db_password");
    Connection conn = ods.getConnection();
    String sql = "select document from xml_document where rownum < 10";
    OraclePreparedStatement stmt = (OraclePreparedStatement) conn.prepareStatement(sql);
    // Database-backed metadata provider used while decoding the binary XML.
    DBBinXMLMetadataProvider dbrep = BinXMLMetadataProviderFactory.createDBMetadataProvider();
    dbrep.setConnectionPool(ods);
    dbrep.associateDataConnection(conn);
    OracleResultSet rset = (OracleResultSet) stmt.executeQuery();
    XMLDOMImplementation domimpl = new XMLDOMImplementation();
    BinXMLProcessor proc = BinXMLProcessorFactory.createProcessor(dbrep);
    while (rset.next()) {
         // DOCUMENT is a binary-XML BLOB; decode it into a DOM and print it.
         Blob blob = rset.getBlob("DOCUMENT");
         BinXMLStream inpbin = proc.createBinXMLStream(blob);
         BinXMLDecoder dec = inpbin.getDecoder();
         InfosetReader xmlreader = dec.getReader();
         XMLDocument doc = (XMLDocument) domimpl.createDocument(xmlreader);
         doc.print(System.out);
    }

    I found a method using a slight variation of the code I originally posted. Essentially you remove the DBBinXMLMetadataProvider dbrep portion. I believe that, with it included, the XML being extracted is validated against the database (which is referenced as a "metadata provider"). Since the SOA schema doesn't seem to contain the information needed to validate the XML, it comes back blank. If you don't include the dbrep portion, the XML is extracted as desired.
    OracleDataSource ods = new OracleDataSource();
    ods.setURL("soa_db_connection_string");
    ods.setUser("soa_db_user_id");
    ods.setPassword("soa_db_password");
    Connection conn = ods.getConnection();
    String sql = "select document from xml_document where rownum < 10";
    OraclePreparedStatement stmt = (OraclePreparedStatement) conn.prepareStatement(sql);
    OracleResultSet rset = (OracleResultSet) stmt.executeQuery();
    XMLDOMImplementation domimpl = new XMLDOMImplementation();
    // No DB metadata provider this time: createProcessor() takes no arguments.
    BinXMLProcessor proc = BinXMLProcessorFactory.createProcessor();
    while (rset.next()) {
         BLOB blob = rset.getBLOB("DOCUMENT");
         BinXMLStream inpbin = proc.createBinXMLStream(blob);
         BinXMLDecoder dec = inpbin.getDecoder();
         InfosetReader xmlreader = dec.getReader();
         XMLDocument doc = (XMLDocument) domimpl.createDocument(xmlreader);
         doc.print(System.out);
    }
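    Since the original goal was XML text files rather than console output, the decode loop can write each document to its own file instead of System.out. A small variation of the loop above (the file naming is illustrative, and doc.print(OutputStream) can throw IOException, so handle or declare it):
    int i = 0;
    while (rset.next()) {
         BLOB blob = rset.getBLOB("DOCUMENT");
         BinXMLStream inpbin = proc.createBinXMLStream(blob);
         BinXMLDecoder dec = inpbin.getDecoder();
         InfosetReader xmlreader = dec.getReader();
         XMLDocument doc = (XMLDocument) domimpl.createDocument(xmlreader);
         // One output file per XML_DOCUMENT row.
         try (java.io.FileOutputStream fos = new java.io.FileOutputStream("doc_" + (i++) + ".xml")) {
              doc.print(fos);
         }
    }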

  • Splitting large message (60MB) based on payload data

    Hi,
    I have a file (flat) to file (XML) scenario. The source flat file is being read by FCC. Since the source flat file is large (up to 60 MB), I have to split it into smaller files, so I applied "recordset per message" at the FCC level. But the client's requirement is to split the large document based on payload data (the DeliveryDate). That means I cannot split the message based on the number of rows of the flat file; instead I have to split it on the basis of DeliveryDate, so that after splitting, each small file contains data for exactly one date (say, one file with data for 15 Nov, another with data for 16 Nov, and so on).
    Please suggest a solution to split the large file (60 MB) based on payload data (DeliveryDate).
    Br,
    Madan Agrawal

    Hi Madan,
    In this case, split the message into smaller messages, e.g. 2 MB files.
    XI doesn't handle 60 MB files; you have to split the flat file based on some condition.
    I had the same requirement: a flat file with huge data. I split that data using a Java mapping, then processed it, and it worked fine for me. I think you can also do it.
    In my case I divided the message based on a sequence number (a unique number) to differentiate the data. If there is a sequence number, split the message on it.
    Regards,
    Raj
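    A minimal sketch of that kind of Java split, assuming a simple line-based flat file whose DeliveryDate is the first semicolon-separated field (the field position, delimiter, and file names are illustrative assumptions, not details from this thread):
    import java.io.*;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;

    public class SplitByDeliveryDate {
        public static void main(String[] args) throws IOException {
            // One writer per distinct DeliveryDate value.
            Map<String, BufferedWriter> outs = new HashMap<>();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream("input.txt"), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Assumes DeliveryDate is the first ';'-separated field, e.g. "20121115;...".
                    String date = line.split(";", 2)[0];
                    BufferedWriter out = outs.get(date);
                    if (out == null) {
                        out = new BufferedWriter(new OutputStreamWriter(
                                new FileOutputStream("out_" + date + ".txt"), StandardCharsets.UTF_8));
                        outs.put(date, out);
                    }
                    out.write(line);
                    out.newLine();
                }
            } finally {
                for (BufferedWriter w : outs.values()) {
                    w.close();
                }
            }
        }
    }
    Because it streams line by line and never buffers the whole file, memory stays flat even at 60 MB.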

  • java.lang.OutOfMemoryError: allocLargeObjectOrArray error for large payload

    Ours is an outbound flow where an FTP adapter picks up the files and calls a requester service; the requester service calls the EBS, the EBS calls the provider service, and finally the file is written using B2B.
    For the last 4-5 days we have been getting java.lang.OutOfMemoryError: allocLargeObjectOrArray.
    We get this error when large payloads are used during testing.
    As per our understanding, when you have a tree of composite invocations (so A invokes B invokes C invokes D via flowN 100 times), none of the memory is released until they all complete.
    1. Could you please let us know exactly when memory is released?
    2. How do we tune/optimize this?
    Our flow is:
    SyncDisbursePaymentGetFtpAdapter --> CreateDisbursedPaymentEbizReqABCSImpl --> SyncDisbursePaymentRBTTEBS --> SyncDisbursedPaymentJPMC_CHKProvABCSImpl --> AIAB2BInterface --> Oracle B2B
    <Dec 12, 2012 8:17:06 PM EST> <Warning> <RMI> <BEA-080003> <RuntimeException thrown by rmi server: javax.management.remote.rmi.RMIConnectionImpl.invoke(Ljavax.management.ObjectName;Ljava.lang.String;Ljava.rmi.MarshalledObject;[Ljava.lang.String;Ljavax.security.auth.Subject;)
    javax.management.RuntimeErrorException: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664.
    javax.management.RuntimeErrorException: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:858)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:869)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:838)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
    at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449)
    Truncated. see log file for complete stacktrace
    Caused By: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664
    at java.util.Arrays.copyOf(Arrays.java:2786)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1847)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1756)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1169)
    Truncated. see log file for complete stacktrace
    Does anyone have any idea how to rectify this error? The whole UAT environment is down because of this issue.

    Please find the required info:
    1. Operating System --> Linux
    2. JVM (Sun or JRockit) --> JRockit
    3. Domain info (production mode enabled?, log levels, number of servers in cluster, number of servers in a machine):
    a) Production mode enabled --> Production mode is not enabled; we are going to enable it.
    b) Log levels --> There are many logs (B2B, SOA, BPEL, integration); which one do I need to set to FINEST (32)?
    c) Number of servers in cluster --> 2
    d) Number of servers in a machine --> 1
    4. Payload info (size, xml/non-xml?):
    a) Size --> more than 1 MB, up to 25 MB
    b) XML/non-XML --> XML
    We are making the changes you suggested and will update accordingly.
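    The Caused By frames show java.io.ByteArrayOutputStream growing a contiguous byte[] while the payload is serialized, so with payloads up to 25 MB fanned out across parallel flowN branches those arrays add up quickly. Where you control the code (for example a custom Java callout), a fixed-buffer streaming copy keeps peak memory at the buffer size instead of the payload size; a minimal sketch, with placeholder file names:
    import java.io.*;

    public class StreamCopy {
        // Copy in 8 KB chunks instead of accumulating the whole payload in memory.
        public static void copy(InputStream in, OutputStream out) throws IOException {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }

        public static void main(String[] args) throws IOException {
            try (InputStream in = new FileInputStream("payload.xml");
                 OutputStream out = new FileOutputStream("copy.xml")) {
                copy(in, out);
            }
        }
    }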

  • Errors and exceptions in writing large binary data on sockets!!! urgent

    hi
    I am trying to write large binary data, in the form of byte arrays, on sockets.
    The data is as large as 512 KB (== 524288 bytes). I store the data (actually read from a file through a FileInputStream) and then write it to the socket with lines like this:
    DataOutputStream dos =
         new DataOutputStream(new BufferedOutputStream(sock.getOutputStream()));
    // b is the array in which the data is stored; sometimes I use the (b, off, len) overload.
    dos.write(b);
    dos.flush();
    dos.close();
    sock.close();
    But the program is not stable: sometimes the whole 512 KB is read on the other side, and sometimes less, usually 64 KB.
    The program is unthreaded.
    There is another problem: one side (reading or writing) sometimes gives this error:
    java.net.SocketException: Software caused connection abort: socket write error
    Please reply soon and give your suggestions.
    thanks

    Umm, how are you reading the data on the other side?
    Some of your code snippets might help. Your writing code seems OK. I've written a file transfer program in a similar fashion and have successfully tested it on different platforms (AIX, AS400, Solaris, Windows, etc.) without any problems, and without needing to set the buffer sizes, with files as large as 600 MB. And you said you're testing this on the loopback?
    The point here is that you should never need to change the default TCP options to get program correctness. The options are there for optimization and fine tuning. If you really need to change the options to get your program to work, then your program won't be able to scale under different loads.
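    The "sometimes only 64 KB arrives" symptom usually means the reader treats one read() as one message; a single read returns whatever happens to be buffered, not necessarily everything that was sent. A sketch of a read loop that collects an exact byte count (knowing the length up front is an assumption; with a sender like the one above, which closes the socket, you could instead loop until read() returns -1):
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadExactly {
        // Loop until 'len' bytes have arrived; a single read() may return fewer.
        // (DataInputStream.readFully(byte[]) performs the same loop for you.)
        static byte[] readFully(InputStream in, int len) throws IOException {
            byte[] buf = new byte[len];
            int off = 0;
            while (off < len) {
                int n = in.read(buf, off, len - off);
                if (n == -1) {
                    throw new IOException("stream closed after " + off + " of " + len + " bytes");
                }
                off += n;
            }
            return buf;
        }
    }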

  • Determine the size of EDI payload data

    Hi Experts,
    As far as my B2B knowledge goes, to determine the size of an EDI payload we download the payload from the wire message or payload, copy the data, and paste it into a file; the size of that file gives the EDI payload size.
    But this is a tedious task, particularly when there is a huge number of EDI messages. As far as I know, the size of EDI files ranges from 1 MB to 1 GB.
    Please advise what I should do in the case of files with large payload sizes (more than 25 MB).
    Please advise if there is any B2B table with a column that gives the size of the payload.
    I have searched through some B2B tables, such as b2b_messageinstance and b2b.ip_b2b_report, but could not find any such column.
    Please advise what exactly the procedure is to determine the size of EDI payload data, particularly for cases where the payload is larger than 25 MB.

    I am afraid that there is no direct way of finding the payload size in 10g. You may write your own standalone program or API which calculates the size of the payload by querying the b2b_instancemessage view or by calling the B2B InstanceMessage API -
    http://www.oracle.com/technetwork/testcontent/b19324-01-instance-msg-api-129535.zip
    Regards,
    Anuj
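    Along the lines Anuj suggests, here is a small JDBC sketch; the column names (message_id, payload) are illustrative assumptions, so check your b2b_instancemessage view with DESCRIBE before relying on them:
    import java.sql.*;

    public class PayloadSizes {
        public static void main(String[] args) throws SQLException {
            // Connection details are placeholders.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//host:1521/service", "b2b_user", "password");
                 Statement stmt = conn.createStatement();
                 // dbms_lob.getlength reports the LOB size without fetching the payload itself.
                 ResultSet rs = stmt.executeQuery(
                     "select message_id, dbms_lob.getlength(payload) as payload_size "
                     + "from b2b_instancemessage")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
                }
            }
        }
    }
    This avoids downloading 25 MB+ payloads just to measure them.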

  • How to create a single large bitmap data at run time?

    Hi All,
    Please help me in overcoming the issue that is mentioned below.
    Requirement: Create a single very large BitmapData object which contains some 30 PNG images, each with some text. Images and text are loaded dynamically (AS2 code; the images are stored on a remote server). The display should show 8 images at a time with their corresponding text; the rest of the content is reached by scrolling (kinetic scrolling is implemented). How can I go about it?
    Some questions:
    · Is there any limit to the size of BitmapData? As per the link below, there is a restriction on the height of the bitmap we can create in AS2 (the max value is 2880, which is not enough for a 30-element list): http://help.adobe.com/en_US/FlashLite/2.0_FlashLiteAPIReference2/WS84235ED5-9394-4a52-A098-EED216C18A66.html How can we overcome this limitation?
    · If we create an individual BitmapData object for each of the 30 PNG files, we see some jerkiness in the scroll. How can we get smooth scrolling?
    Thanks and Regards,
    Manjunath

    Thank you very much for the reply.
    The number of bitmaps is not always 30. It can vary at runtime, and since all the PNG files are stored on a remote server and also vary during runtime, we cannot predefine the number of child movieclips. It could be 10 at one time and maybe 20 at another.
    Any help?

  • Large OLTP data set to get through the cache in our new ZS3-2 storage.

    We recently purchased a ZS3-2 and are currently attempting to do performance testing. We are using various tools (swingbench, vdbench, and dd) to simulate load within our Oracle VM 3.3.1 cluster of five Dell M620 servers. The OVM repositories are connected via NFS. The Swingbench load-testing servers have a base OS disk mounted from the repos and NFS mounts via NFSv4 from within the VM (we would also like to test dNFS later in our testing).
    The problem I'm trying to get around is that the 256 GB of DRAM (a portion of which goes to the ARC) is large enough that my reads never touch the 7200 RPM disks. I'd like to create a data set large enough that the random reads cannot possibly be served from the ARC cache (NOTE: we have no L2ARC at the moment).
    I've run something similar to this in the past, but have adjusted the "sizes=" to be larger than 50m. My thought here is that, if the ARC is up towards around 200 or so GB, and I create the following on four separate VMs and run vdbench at about the same time, it will be attempting to read more data than can possibly fit in the cache.
    * 100% random, 70% read file I/O test.
    hd=default
    fsd=default,files=16,depth=2,width=3,sizes=(500m,30,1g,70)
    fsd=fsd1,anchor=/vm1_nfs
    fwd=fwd1,fsd=fsd*,fileio=random,xfersizes=4k,rdpct=70,threads=8
    fwd=fwd2,fsd=fsd*,fileio=random,xfersizes=8k,rdpct=70,threads=8
    fwd=fwd3,fsd=fsd*,fileio=random,xfersizes=16k,rdpct=70,threads=8
    fwd=fwd4,fsd=fsd*,fileio=random,xfersizes=32k,rdpct=70,threads=8
    fwd=fwd5,fsd=fsd*,fileio=random,xfersizes=64k,rdpct=70,threads=8
    fwd=fwd6,fsd=fsd*,fileio=random,xfersizes=128k,rdpct=70,threads=8
    fwd=fwd7,fsd=fsd*,fileio=random,xfersizes=256k,rdpct=70,threads=8
    rd=rd1,fwd=fwd1,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd2,fwd=fwd2,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd3,fwd=fwd3,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd4,fwd=fwd4,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd5,fwd=fwd5,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd6,fwd=fwd6,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd7,fwd=fwd7,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    However, the problem I keep running into is that vdbench's java processes will throw exceptions
    ... <cut most of these stats.  But suffice it to say that there were 4k, 8k, and 16k runs that happened before this...>
    14:11:43.125 29 4915.3 1.58 10.4 10.0 69.9 3435.9 2.24 1479.4 0.07 53.69 23.12 76.80 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 7.36 0.1 627.2 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.071 30 4117.8 1.88 10.0 9.66 69.8 2875.1 2.65 1242.7 0.11 44.92 19.42 64.34 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 12.96 0.1 989.1 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.075 avg_2-30 5197.6 1.52 9.3 9.03 70.0 3637.8 2.14 1559.8 0.07 56.84 24.37 81.21 16383 0.0 0.00 0.0 0.00 0.0 0.00 0.1 6.76 0.1 731.4 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:15.388
    14:12:15.388 Miscellaneous statistics:
    14:12:15.388 (These statistics do not include activity between the last reported interval and shutdown.)
    14:12:15.388 WRITE_OPENS Files opened for write activity: 89 0/sec
    14:12:15.388 FILE_CLOSES Close requests: 81 0/sec
    14:12:15.388
    14:12:16.116 Vdbench execution completed successfully. Output directory: /oracle/zfs_tests/vdbench/output
    java.lang.RuntimeException: Requested parameter file does not exist: param_file
      at Vdb.common.failure(common.java:306)
      at Vdb.Vdb_scan.parm_error(Vdb_scan.java:50)
      at Vdb.Vdb_scan.Vdb_scan_read(Vdb_scan.java:67)
      at Vdb.Vdbmain.main(Vdbmain.java:550)
    So I know from reading other posts that vdbench will do what you tell it (Henk brought that up). But based on this, I can't tell what I should change in the vdbench file to get around this error. Does anyone have advice for me?
    Thanks,
    Joe

    Ah... it's almost always the second set of eyes. Yes, it is run from a script. And I just looked and realized that the last line of the list didn't have the # in it. Here's the line:
       "Proceed to the "Test Setup" section, but do something like `while true; do ./vdbench -f param_file; done` so the tests just keep repeating."
    I just added the hash to comment that line out and am rerunning my script. My guess is that it'll complete. Thanks, Henk.

  • In OSB, XQuery issue with large volume data

    Hi,
    I am facing a problem with an XQuery transformation in OSB.
    There is one XQuery transformation where I compare all the records and, if there are similar records, club them under the same first node.
    I am reading the input file from an FTP process. This works perfectly for small input data. For large input data it also works, but it takes a huge amount of time, the file moves to the error directory, and I see duplicate records created for the same input data. I don't see anything related to this file in the error log or the normal log.
    How do I check what exactly is causing the issue here: why the file is moving to the error directory, and why I am getting duplicate data for large input (approx. 1 GB)?
    My XQuery is something like this:
    <InputParameters>
    {
        for $choice in $inputParameters1/choice
        let $withSamePrimaryID := $inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID]
        let $withSamePrimaryID8 := $inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME]
        return
            <choice>
            {
                if (data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID)) then
                    <ClaimID>{ data($withSamePrimaryID[1]/ClaimID) }</ClaimID>
                else
                    <ClaimID>{ data($choice/ClaimID) }</ClaimID>
            }
            </choice>
    }
    </InputParameters>

    Hi,
    I understand your use case is:
    a) read the file (from an FTP location, hopefully a .txt file)
    b) process the file (your XQuery; I won't get into the details)
    c) do something with the result (send it to a backend system via a business service?)
    Also noted: large files take a long time to be processed. That depends on the memory/heap assigned to your JVM, so to a degree it is expected behaviour. Note also that your XQuery scans every record for every record (the predicates on PRIMARYID and FIRSTNAME), so the work grows quadratically with the input size.
    The other point, the file being moved to the error directory etc.: this could be the error handler doing its job (if you have one).
    If there are no error handlers, look at the timeout and error-condition scenarios on your service.
    HTH

  • Error publishing large payload in AQ JMS Adapter

    Hi,
    We are using the AQ JMS Adapter and publishing a payload to an AQ JMS topic through a BPEL process.
    The issue is that I'm not able to publish a large payload (an XML file of size 4 KB) to the AQ JMS topic through an asynchronous BPEL process.
    When I post the XML in the BPEL Control, I get the following message: "Cannot find the specified instance". I have tried 3-4 times but get the same message every time.
    However, I am able to publish the XML to the JMS topic with a synchronous BPEL process.
    Please let me know if there is a way to overcome this issue.
    Thanks in advance

    You can add <property name="StreamPayload" value="true"/> to the .jca file, but remember that this property is applicable when processing RAW messages, XMLType messages, and ADT type messages for which a payload is specified through an ADT attribute. If the StreamPayload property does not exist, the default value false is assumed. For example:
    <activation-spec className="oracle.tip.adapter.aq.inbound.AQDequeueActivationSpec">
    <property name="QueueName" value="RAW_IN_QUEUE"/>
    <property name="DatabaseSchema" value="SCOTT"/>
    <property name="StreamPayload" value="true"/>
    </activation-spec>

  • Getting payload data in alert category

    Is there any way to fetch XI payload data (the IDoc number) in ALRTCATDEF along with the other container variables?
    We have been sending email alerts without BPM (alert rule and alert category).
    Please suggest.

    Hi Rajesh,
    Check this blog: /people/michal.krawczyk2/blog/2005/03/13/alerts-with-variables-from-the-messages-payload-xi--updated
    Sachin

  • SELECT records larger than date specified in sub query

    Dear All,
    Thank you for your attention.
    I would like to select records later than the date specified in a sub query.
    The query should be something like the following:
    SELECT my_order_number, my_date, my_task
    FROM MYTB
    WHERE my_order_number IN order_no AND my_date > date (SELECT order_no, date FROM MySubQueryResult)
     (it is incorrect)
    Sub query result:
    order_no | date
    A1    | 2014-12-21 09:06:00
    A2    | 2014-12-20 09:07:00
    A3    | 2014-12-20 08:53:00
    A4    | 2014-12-20 08:57:00
    MYTB:
    my_order_number | my_task | my_date
    A1  |  T1  |  2014-12-21 09:06:00
    A1  |  T2  |  2014-12-22 10:01:00
    A2  |  T1  |  2014-12-20 09:07:00
    A3  |  T2  |  2014-12-20 08:53:00
    A3  |  T4  |  2014-12-21 09:30:00
    A3  |  T8  |  2014-12-23 20:32:00
    A4  |  T6  |  2014-12-20 08:57:00
    expected result:
    my_order_number |  my_task | my_date
    A1  |  T2  |  2014-12-22 10:01:00
    A3  |  T4  |  2014-12-21 09:30:00
    A3  |  T8  |  2014-12-23 20:32:00
    Any ideas?  Thanks.
    swivan

    Hi,
    try this:
    SELECT my_order_number, my_date, my_task
    FROM MYTB
    WHERE my_order_number IN (SELECT order_no FROM MySubQueryResult)
    AND my_date > (SELECT date FROM MySubQueryResult)
    Alternatively, you can also make use of joins to achieve the same.
    Praveen Dsa
    Dear Praveen Dsa,
    Thanks for your reply, but order_no and date are paired and related; they cannot be separated.
    Each order has its own date, so it is not working.
    Best Regards
    swivan
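    For what it's worth, the pairing problem goes away when the subquery is correlated on order_no, so each row is compared only against the date for its own order. A sketch using the table and column names from the thread, wrapped in JDBC (the connection URL is a placeholder, and "date" may need quoting since it is a reserved word in some databases):
    import java.sql.*;

    public class OrdersAfterDate {
        public static void main(String[] args) throws SQLException {
            // Correlated subquery keeps order_no and date paired per row.
            String sql =
                "SELECT t.my_order_number, t.my_date, t.my_task "
                + "FROM MYTB t "
                + "WHERE t.my_date > (SELECT r.date FROM MySubQueryResult r "
                + "                   WHERE r.order_no = t.my_order_number)";
            try (Connection conn = DriverManager.getConnection("jdbc:...", "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " | " + rs.getString(3)
                            + " | " + rs.getTimestamp(2));
                }
            }
        }
    }
    Against the sample data this returns exactly the three expected rows (A1/T2, A3/T4, A3/T8), because rows whose my_date equals their order's date are excluded by the strict >.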

  • Passing payload data to alert container for all SAP PI errors

    Hi All,
    I have an alert requirement as follows.
    One alert has to be raised for all PI errors, including Integration Engine errors and Adapter Engine errors. The scenario is ABAP Proxy --> XI --> SOAP.
    The payload will have n number of fields, including the delivery number and the mail ID of a business user.
    A mail has to be triggered from PI to this particular user along with the delivery number. Please note this alert has to be triggered for all PI errors (mapping, adapter error, application error, etc.).
    Triggering the alert from a UDF by calling the report SALERT_CREATE covers only Integration Engine errors; it does not cover any Adapter Engine errors.
    Creating an alert rule and alert category in RWB will not put payload data in the mail, and the alert will be sent to fixed recipients, although an alert rule does cover all PI errors.
    How should I do this?

    Hey,
    With the regular alert mechanism this is not possible; you need to use BPM to trigger an e-mail that includes the delivery number.
    /people/michal.krawczyk2/blog/2005/03/13/alerts-with-variables-from-the-messages-payload-xi--updated
    >> Mail has to be triggered from PI to this particular user along with delivery number
    Well, this can only be done via the mail adapter; alert mechanisms need a specific user ID or users with specific roles.
    Thanks
    Aamir

  • Search Payload data

    Hi,
    I would like to know the table in which the payload data is stored.
    Many Thanks
    Bala

    Hi,
    take a look at Michal's reply in this thread: In Which Database Table the Messages are Stored in XI
    Regards,
    Bhavesh

  • Large Payload Configuration in B2B

    Hi,
    I have set the below mentioned values in B2B Configuration:
    Large Payload Size: 2000
    Large Payload Directory: /tmp
    I have a EDI (846) transaction with multiple ST segments, where transactions (ST segments) can be between 7000 Bytes to 2000 KB in size, after converted to XML by B2B.
    Now i post a payload, the first transaction (ST) segment alone is going to the /tmp location where as the other segments irrespective of the size moves to the destination location as mentioned in the agreement.
    Is this the actual behavior of B2B or am I missing any configuration?
    Please help.
    Thanks,
    Monica

    Monica,
    The translated payload size is used to determine whether an incoming message is large or not, so in your case any transaction set larger than 2000 bytes should be passed as a reference instead of by value. If it is not working like this, then it may be a bug. You may like to refer to -
    http://docs.oracle.com/cd/E23943_01/user.1111/e10229/app_perform.htm#BABEHDIA
    You may raise an SR with support if you think it is working incorrectly. You may also consider forwarding the B2B export and a sample payload to my ID so that I can cross-verify.
    Regards,
    Anuj
