File adapter: file processing in correct sequence

Hello All
We have an inbound file scenario.
Files will be picked up and processed by XI and sent to an SAP system.
All the files have to be processed in the same sequence.
Once a file is picked up and processed in XI, the data is sent to the receiving SAP system through a proxy call.
But if there is an error while processing the received data in the receiving system (because of duplicate data or any other application-related error specific to the receiving application), how can we stop XI from picking up and processing the next file?
The second file should be picked up and processed by XI only after the first file has been successfully processed by the receiving system.
How can I ensure this?
Please let me know...
Many Thanks
Chandra

Hi Chandra
If you are on NFS, you can set Processing Sequence to By Name or By Date in the sender file adapter. Combine this with the quality of service EOIO, so that messages are picked up in sequence and processed in a single queue. If one of the files fails, the queue goes into error and the subsequent messages wait until it is resolved.
To avoid more than one thread (cluster node) polling the directory, you can use the additional parameter "clusterSyncMode"; refer to SAP Note 801926.
Besides the above, another solution can be a synchronous interface, or a separate interface that acts as an acknowledgement from the receiving system, before you process the next message. A BPM can also be useful if you use fault messages and capture application errors before processing the next message.
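For illustration, a sender file channel configured along these lines would serialize the files (parameter names as in the standard File/FTP adapter; the queue name is made up, and the exact clusterSyncMode value should be taken from Note 801926 for your release):
    Quality of Service:    Exactly Once In Order (EOIO)
    Queue Name:            FILE_SEQ_QUEUE    (hypothetical)
    Processing Sequence:   By Date
    Additional parameter:  clusterSyncMode = lock    (value per Note 801926; verify for your release)
With EOIO, a failure in the first message blocks the queue, so the message for the second file waits until the error is resolved, which is the behaviour asked for above.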
Hope the above info helps you come to a final approach.
Thanks
Gaurav

Similar Messages

  • File Adapter BPEL Process getting switched off

    The file adapter BPEL process reads a CSV file, which has a series of records in it, from /xfer/chroot/data/aramex/accountUpdate/files. While reading the files, the BPEL process gets switched off. The snippet below is the error we found in the domain.log. Can anybody please suggest what to do?
    <2010-11-25 16:22:28,025> <WARN> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound>
    java.io.FileNotFoundException: /xfer/chroot/data/aramex/accountUpdate/files/VFQ-251120101_1000.csv (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:106)
    at oracle.tip.adapter.file.FileUtil.copyFile(FileUtil.java:947)
    at oracle.tip.adapter.file.inbound.ProcessWork.defaultArchive(ProcessWork.java:2341)
    at oracle.tip.adapter.file.inbound.ProcessWork.doneProcessing(ProcessWork.java:614)
    at oracle.tip.adapter.file.inbound.ProcessWork.processMessages(ProcessWork.java:445)
    at oracle.tip.adapter.file.inbound.ProcessWork.run(ProcessWork.java:227)
    at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
    at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:280)
    at java.lang.Thread.run(Thread.java:619)
    <2010-11-25 16:22:28,025> <INFO> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound> Processer thread calling onFatalError with exception /xfer/chroot/data/aramex/accountUpdate/files/VFQ-251120101_1000.csv (No such file or directory)
    <2010-11-25 16:22:28,025> <FATAL> <PreActivation.collaxa.cube.activation> <AdapterFramework::Inbound> [Read_ptt::Read(root)]Resource Adapter requested Process shutdown!
    <2010-11-25 16:22:28,025> <INFO> <PreActivation.collaxa.cube.activation> <AdapterFramework::Inbound> Adapter Framework instance: OraBPEL - performing endpointDeactivation for portType=Read_ptt, operation=Read
    <2010-11-25 16:22:28,025> <INFO> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound> Endpoint De-activation called in adapter for endpoint : /xfer/chroot/data/aramex/accountUpdate/files/
    <2010-11-25 16:22:28,095> <WARN> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound> ProcessWork::Delete failed, the operation will be retried for max of [2] times
    <2010-11-25 16:22:28,095> <WARN> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound>
    ORABPEL-11042
    File deletion failed.
    File : /xfer/chroot/data/aramex/accountUpdate/files/VFQ-251120101_1000.csv as it does not exist. could not be deleted.
    Delete the file and restart server. Contact oracle support if error is not fixable.
    at oracle.tip.adapter.file.FileUtil.deleteFile(FileUtil.java:279)
    at oracle.tip.adapter.file.FileUtil.deleteFile(FileUtil.java:177)
    at oracle.tip.adapter.file.FileAgent.deleteFile(FileAgent.java:223)
    at oracle.tip.adapter.file.inbound.FileSource.deleteFile(FileSource.java:245)
    at oracle.tip.adapter.file.inbound.ProcessWork.doneProcessing(ProcessWork.java:655)
    at oracle.tip.adapter.file.inbound.ProcessWork.processMessages(ProcessWork.java:445)
    at oracle.tip.adapter.file.inbound.ProcessWork.run(ProcessWork.java:227)
    at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
    at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:280)
    at java.lang.Thread.run(Thread.java:619)
    <2010-11-25 16:22:28,315> <ERROR> <PreActivation.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "cube delivery": Process state off.
    The process class "BulkAccountUpdateFileConsumer" (revision "1.0" ) has not been turned on. No operations on the process or any instances belonging to the process may be performed if the process is off.
    Please consult your administrator if this process has been turned off inadvertently.

    This patch is not for 10.1.3.1.
    I have provided a response on the following post:
    BPEL Process Going into Dead State Automatically.
    cheers
    James

  • Sender File adapter and duplicate file processing

    If I set the sender file adapter to delete or archive, then when a file gets picked up and processed, this file will not get deleted/archived unless it was successfully processed. However, if it errors out during processing, the file remains, but its message gets persisted in the Integration Engine or Adapter Engine. Since there is an automatic retry, we have the potential for duplicate processing: in addition to the retry, the adapter is still continuously polling and can pick up the same file again. In other words, how do we stop this duplicate file processing?
    Thanks.

    Hi Bevan,
    However, if it errors out during processing, the file remains, but its message gets persisted in the Integration Engine or Adapter Engine.
    Your file won't get deleted unless the Adapter Engine picks it up successfully. If it is not picked up by the Adapter Engine, the message is not stored in the Adapter Engine either. If it reached the Integration Server and failed there, the file will already have been deleted.
    Please let me know if you have any questions.
    Please reward points.
    Regards
    Sreeram.G.Reddy

  • Large file processing in file adapter

    Hi,
    We are trying to process a large file of ~280 MB and we are getting timeout errors. I followed all the required tuning for memory and heap sizes and still the problem exists. I want to know if installing a decentral adapter engine just for this large file processing might solve the problem, which I doubt.
    Based on my personal experience, there might be a file-size limit for processing in XI of maybe up to 100 MB, with minimal mapping and no BPM.
    Any comments on this would be appreciated.
    Thanks
    Steve

    Dear Steve,
    This might help you,
    Topic #3.42
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad#search=%22XI%20sizing%20guide%22
    /people/sap.user72/blog/2004/11/28/how-robust-is-sap-exchange-infrastructure-xi
    This sizing guide and the memory calculations in it will be useful for dealing further with this issue:
    http://help.sap.com/bp_bpmv130/Documentation/Planning/XISizingGuide.pdf#search=%22Message%20size%20in%20SAP%20XI%22
    File Adapter: size of your processed messages
    Regards
    Agasthuri Doss

  • Port sequence and file processing in SAP MDM

    Hi All,
    Can anyone let me know how the port sequence mechanism works in MDM 7.1? As per my understanding, in SAP MDM no two files from different ports are processed simultaneously. Suppose you have three different ports A, B, C and the scenario is as below:
    SAP PI sends 2 files to port A, then sends 2 more files to port B, and then sends 1 file to port C. All these files are not sent simultaneously.
    Thanks
    Rajeev

    http://help.sap.com/saphelp_mdm550/helpdata/en/43/120367f94c3e92e10000000a1553f6/frameset.htm
    Once started, MDIS scans inbound ports in the order set by the Sequence column in the Ports pane of the MDM Console. When it finds a port containing an import file, it uses the port's associated import map to process the file.
    The sequence in which MDIS processes ports is not affected by a port's remote system.
    MDIS processes all files in a port's Ready folder before scanning the next port in the sequence.
    When more than one import file is present in the folder, MDIS processes the files in a FIFO (first in, first out) order, meaning the oldest file in the port is processed first, then the next oldest, and so on.
    Under certain circumstances, MDIS will skip over a port and not process any import files it may contain. These circumstances include the following:
    ·        Port is set up for manual processing instead of automatic.
    ·        Port is blocked due to a structural exception.
    ·        Port is connected to an Import Manager or other MDIS.
    Once all ports on the MDM Server have been scanned, MDIS waits the number of seconds specified in the Interval property of the mdis.ini file before restarting the sequence (for more information, see MDIS Configuration).
    Thanks,
    Shambhu.

  • JDBC Adapter and File adapter not processing messages

    Hi
    I noticed that messages are being delivered to the Adapter Engine and are visible in the Runtime Workbench with status "to be delivered", but the JDBC adapter and file adapter are not processing these messages.
    Any idea where I can find the problem?
    I was able to re-deliver successfully via the JDBC adapter using the Messaging System GUI with the XISUPER user.
    Regards
    Chandu

    Hi,
    1. Status: TO_BE_DELIVERED
    This means the message was successfully delivered from the Integration Server's point of view and has been handed over to the Messaging System.
    TO_BE_DELIVERED is set while the message is being put into the Messaging System's receive queue.
    Regards
    Agasthuri Doss

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter "Recordset Structure" property (e.g. Row,5000). As the file is split and mapped, the pieces are appended to a destination file. In an example scenario, a 700 MB file comes in (say with 20000 records); the destination file should then have 20000 records.
    To ensure no records are missed during processing through XI, the EOIO quality of service is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before the file is read by the sender file adapter; a sketch of that step follows below.
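    For illustration, that pre-processing step could be as simple as the following (a sketch: the file path and flat-file layout are hypothetical, since the post only shows the XML representation of the records):
    #!/bin/sh
    # Append the trigger record (Duns 9999) as the last line of the inbound file
    # before the sender file adapter picks it up. Path and layout are hypothetical.
    IN_FILE=/xi/in/duns_extract.dat
    echo '"9999","","3NQN1","A"' >> "$IN_FILE"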
    XPath conditions are evaluated in the receiver determination to either append the records to the main destination file or create a trigger file with only the trigger record in it.
    The problem we face is that "Recordset Structure" (e.g. Row,5000) splits in chunks of 5000, and when the remaining records of the main payload are fewer than 5000 (say 1300), those remaining 1300 lines get grouped together with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed below a sample XML file representing the inbound file, with the last record (Duns = "9999") as the trigger record that will be used to mark the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, changed the "Recordset Structure" to "Row,5" for the sample inbound XML file above.
    I have two XPath expressions in the receiver determination to take the last recordset, with Duns = "9999", and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record, with Duns = "9999").
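    For reference, the two receiver determination conditions would be along the following lines (a sketch; the namespace prefix and exact paths depend on the message type, and the literal quotes are part of the field values in this sample):
    Trigger receiver:       /ns:File/Data/Row[Duns = '"9999"']
    Destination receiver:   /ns:File/Data/Row[Duns != '"9999"']
    Note that receiver determination routes each (split) message as a whole, not individual rows, so any chunk that happens to contain the trigger row will be routed entirely to the trigger receiver.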
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</xtract_Code>
         </R3Row>
          <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
          <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
          <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
          <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPath condition in XML Spy and it works fine. My doubt is about the property "Recordset Structure" set as "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba


  • File name generation with sequence number

    Hi All,
    My scenario is file-to-file. I need to generate the file names on the target side like File1.xml, File2.xml, ..., File9999.xml for each file triggered from the source system. The interface triggers multiple times a day.
    For example, the first time, 5 files are triggered, so I need to generate File1.xml, File2.xml, ..., File5.xml. Some time later the interface might trigger with 10 files; then I need to generate the files as File6.xml, File7.xml, ..., File15.xml. Once the file count reaches 9999, the numbering needs to restart from 1 (File1.xml) again.
    Could you please suggest possible solutions? Are any lookups required for this?
    Regards,
    Praveen Kumar

    Hi Praveen,
    Case 1: If a field in the source data carries the sequence information, you can map this value (directly, or using some transformation) to a temporary field in the target and then use variable substitution at the receiver communication channel.
    Case 2: If the source file name carries the sequence information, you can enable the adapter-specific settings in the sender communication channel, read the source file name via the container object in the mapping, assign the sequence number to a field in the target, and use variable substitution at the receiver communication channel.
    Case 3: If Case 1 and Case 2 are not applicable, you have to use a Z-table to store the sequence number, a function module to fetch (and increment) the number, and a UDF in which you implement the RFC call logic. Then the same process follows: assign the sequence number to a field in the target and use variable substitution at the receiver communication channel. An illustration follows below.
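    For illustration, the variable substitution part could look like this in the receiver file channel (a sketch; the message type and field names are hypothetical):
    File Name Scheme:        File%seq%.xml
    Variable Substitution:   seq = payload:MT_Target,1,FileSequence,1
    Here FileSequence is the temporary target field that the mapping fills with the next number, wrapping back to 1 after 9999.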
    BR,
    Sushil.

  • BAI File postings to correct GL but wrong company code

    Hello,
    I have an issue: one of the outgoing miscellaneous entries posts to the correct GL account, but to the wrong company code. Could you please let me know how the GL account and company code are determined when the BAI files are processed automatically into the system.
    Appreciate your response
    Thanks
    ID

    Hello,
    The electronic bank statement is not designed for cross-company-code postings. You can, however, create these postings manually, for instance by subsequent processing of batch input sessions or using transaction FEBA.
    However, this can lead to unwanted results in the document display of transaction FEBA.
    If you've developed an exit to adapt these postings, then you also have to check whether your own source code could be causing the error.
    Regards,
    Renan Correa

  • Error message by periodic weekly: No output from the 1 file processed

    Hi there,
    for four weeks now I have had a problem with the maintenance script periodic weekly. Up to December 22nd, the script did what it should do: rebuilding the locate and whatis databases and rotating log files. Since one week later, I have been getting the error message: No output from the 1 file processed.
    Normally, I use Anacron to do the job. When I noticed the problem, I tried to start the script with Tinker Tool System, getting the same result. Another try using the Terminal (sudo periodic weekly) also failed. The commands locate and whatis are working, and locate.updatedb and makewhatis do too. I'm running 10.4.8; in the past, I did not have such problems. Anyone with an idea or a solution?
    Thanks
    Klaus
    MacBook Pro   Mac OS X (10.4.8)  

    Hi Gary,
    here is the output you were asking for:
    Last login: Thu Jan 25 20:03:55 on console
    Welcome to Darwin!
    DeepThought:~ dirk$ sudo /private/etc/periodic/weekly/500.weekly; echo $?
    Password:
    Sorry, try again.
    Password:
    Rebuilding locate database:
    Rebuilding whatis database:
    find: /usr/local/man: No such file or directory
    makewhatis: /usr/share/man/man1/fetchmailconf.1.gz: No such file or directory
    Rotating log files: ftp.log lpr.log mail.log netinfo.log ipfw.log ppp.log secure.log
    access_log error_log
    Running weekly.local:
    Rotating psync log files:/etc/weekly.local: line 17: syntax error near unexpected token `)'
    /etc/weekly.local: line 17: `if [ -f /var/run/syslog.pid ]; then kill -HUP 0 80 79 81 0cat /var/run/syslog.pid | head -1); fi'
    2
    DeepThought:~ dirk$ ls -loe /private/etc/periodic/weekly/500.weekly
    -r-xr-xr-x 1 root wheel - 2532 Jan 13 2006 /private/etc/periodic/weekly/500.weekly
    DeepThought:~ dirk$
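    For what it's worth, the failing line 17 of /etc/weekly.local looks like a corrupted command substitution. A guess at what it was presumably meant to read, reconstructed only from the error output above and not confirmed:
    # hypothetical reconstruction of /etc/weekly.local line 17
    if [ -f /var/run/syslog.pid ]; then kill -HUP $(cat /var/run/syslog.pid | head -1); fi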
    It seems Roger's idea is correct: PsyncX, or rather my incomplete uninstall of it, is responsible for my problems. Should I remove the whole file weekly.local, or only its content? I prefer removing the whole file, because it was created while installing PsyncX; the creation date is the same as the date I installed the app (December 25).
    Klaus
    By the way: it seems to me the solution to my problem is in sight, so I want to thank you all for the amazing help I got from you!

  • FTP - Run OS Command before file processing

    Hi,
    I have a requirement wherein I need to FTP a file from XI to a folder on an FTP server. The FTP server is set up in such a way that I cannot put the file directly: before transferring the file, I have to use cd (change directory) to reach a particular folder and then transfer the file. This means that I cannot give the folder information directly in TARGET DIRECTORY.
    To address this, I decided to use the feature "Run OS Command BEFORE file processing" and wrote the command 'cd <foldername>'. It is not working. Then I tried using "Run OS Command AFTER file processing" and it also did not work.
    Does anyone have any clue how I can address this requirement using the file adapter?
    thanks,
    rakesh

    HI,
    OS commands are executed on the XI server, not on the FTP server. So the cd command would have to run inside the FTP session itself, after connecting to the FTP server.
    Option 1) Get the absolute path, i.e. the direct path, from the FTP server side so that you can point the channel directly at the FTP server's specific directory.
    Option 2) Alternatively, write the file onto your XI server itself using the NFS (File) transport protocol, then FTP this file from your XI server to the FTP server using a shell script.
    That is, write a shell script, to be executed on the XI server, containing the logic to transfer the file via the FTP protocol. This shell script is executed from the receiver file adapter with the OS command option; a sketch follows below.
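    For illustration, a minimal sketch of such a script (host, credentials, and paths are hypothetical; how the adapter passes the file name into the command line should be checked in the adapter documentation):
    #!/bin/sh
    # Push a file written by the receiver file adapter (via NFS) to the FTP server,
    # changing into the target folder inside the FTP session first.
    # $1 is the name of the file to transfer; all values below are placeholders.
    HOST=ftp.example.com
    USER=xiuser
    PASS=secret
    {
      echo "user $USER $PASS"
      echo "cd /inbound/target/folder"
      echo "put /xi/out/$1"
      echo "bye"
    } | ftp -n "$HOST"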
    Hope this helps,
    Regards,
    Moorthy

  • Huge size file processing in PI

    Hi Experts,
    1. I have seen blogs which explain processing huge files, for the file and SFTP adapters:
    SFTP Adapter - Handling Large File
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    Here we also have the constraint that we cannot do any mapping, and it has to be EOIO QoS.
    Would it be possible to process a 1 GB file and do mapping? Which hardware factors decide whether a system is capable of processing a large size with mapping?
    Is it the number of CPUs, the application servers (Java and ABAP), the number of server nodes, or the Java heap size?
    If my system is able to process a 10 MB file with mapping, there should be something that determines that capability.
    This kind of huge-file processing will only fit some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea if there is any web service that would handle such a huge file.
    2. Suppose PI is able to process a 50 MB message with mapping; what options do we have in PI to increase the performance?
    I have come across these two points many times during the design phase of my project. Looking for your suggestions.
    Thanks.

    Hi Ram,
    You have not mentioned what sort of integration it is; you just mentioned FILE, so I presume it is a file-to-file scenario. In that case, on PI 7.11 I was able to process a 100 MB file (more than 1 million records) with mapping (the file is a delta extract in SAP ECC AL11). In the sender file adapter I chose Recordsets per Message and processed the messages in bits and pieces; please note this is not the actual standard chunk mode. The initial run of the sender adapter loads the 100 MB file into memory, and after that messages are sent to the Integration Engine based on Recordsets per Message. Above roughly 100 MB, the PI Java stack starts bouncing because of memory issues. Later we redesigned the interface as proxy-to-file async, with the proxy sending the messages to PI in chunks, 5000 messages in a single run. A sketch of the channel settings follows below.
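    For illustration, the slicing described above is driven by two settings in the sender channel's content conversion (a sketch; the field names match the File/FTP adapter UI, the values are examples):
    Recordset Structure:      Row,5000
    Recordsets per Message:   1
    Each outgoing message then carries one recordset of 5000 rows instead of the whole file.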
    For PI 7.11 I believe there is a memory limitation per cluster node: each cluster node can't use more than 5 GB, and processing also depends on the number of Java app servers. I think this is no longer a limitation from PI 7.30 onwards, where a cluster node can use 16 GB of memory.
    >>>This kind of huge-file processing will only fit some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea if there is any web service that would handle such a huge file.
    If I understand this correctly: if it is async communication, then 1 GB of data can certainly be sent to a web service, but the messages from the proxy should be sent to PI in batches. The same idea may work for sync communication as well, though timeouts in the receiver channel will be the next issue. Increasing timeouts globally is not best practice, but if you are on 7.30 or a later version you can increase timeouts specifically for your scenario.
    To handle a 50 MB file size, make sure you have additional Java app servers. I don't remember exactly how many app servers we had in my case to handle the 100 MB file size.
    Thanks

  • Large file processing in XI 3.0

    Hi,
    We are trying to process a large file of ~280 MB and we are getting timeout errors. I followed all the required tuning for memory and heap sizes and still the problem exists. I want to know if installing a decentral adapter engine for just this file processing might solve the problem, which I doubt.
    Based on my personal experience, there might be a file-size limit for processing in XI of maybe up to 100 MB, with minimal mapping and no BPM.
    Any comments on this would be appreciated.
    Thanks
    Steve

    Hi Debnilay,
    We do have a 64-bit architecture and still we have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • Fixed length file processing

    Hi All,
    I'm trying a small fixed-length file example, but my receiver adapter is showing the following error:
    Receiver Adapter v2123 for Party '', Service 'F1_BS':
    Configured at 2006-04-19 17:28:19 BST
    No message processing until now
    The following is my configuration.
    My file:
    123456
    venugopalsirangi
    Sender communication channel
    Record structure: H1,*,sub1,*
    FNAME               Value
    H1.fieldNames           KF
    H1.fieldFixedLengths     6
    H1.fieldFixedType      char
    H1.endSeparator          'nl'
    sub1.fieldNames          KF,Lname,Sname
    sub1.fieldFixedLengths     4,5,7
    sub1.fieldFixedType      char
    sub1.endSeparator     'nl'
    H1.keyFieldValue     '144857'
    sub1.keyFieldValue     'venu'
    Receiver communication channel
    Record structure:
    FNAME               Value
    H1.fieldNames           KF
    H1.fieldFixedLengths     6
    H1.fieldFixedType      char
    H1.endSeparator          'nl'
    sub1.fieldNames          KF,Lname,Sname
    sub1.fieldFixedLengths     4,5,7
    sub1.fieldFixedType      char
    sub1.endSeparator     'nl'
    H1.keyFieldValue     '144857'
    sub1.keyFieldValue     'venu'
    Regards,
    venu.

    Hi Prateek,
    The XML file is successfully picked up by the adapter without content conversion.
    If I do content conversion, the receiver adapter does not process the message.
    My file:
    0112345010101
    021111112222
    03100001111112222
    03100011111212223
    041000011111
    021231116722
    03100781119012332
    041005611001
    059453287699
    My XML file:
    <?xml version="1.0" encoding="UTF-8" ?>
    <ns0:LSOUT_MT xmlns:ns0="urn://LengthSpecific">
         <Header>
              <Key1>01</Key1>
              <name>12345</name>
              <date>010101</date>
         </Header>
         <Hbatch>
              <Key2>02</Key2>
              <hvalue1>11111</hvalue1>
              <hvalue2>12222</hvalue2>
         </Hbatch>
         <body>
              <Key3>03</Key3>
              <bvalue1>10000</bvalue1>
              <bvalue2>111111</bvalue2>
              <bvalue3>2222</bvalue3>
         </body>
         <tbatch>
              <Key4>04</Key4>
              <tvalue1>10000</tvalue1>
              <tvalue2>11111</tvalue2>
         </tbatch>
         <trailer>
              <Key5>05</Key5>
              <value1>94532</value1>
              <value2>87699</value2>
         </trailer>
    </ns0:LSOUT_MT>
    My content conversion for the sender adapter is:
    record struct: Header,1,Hbatch,*,body,*,tbatch,*,trailer,1
    Header.fieldFixedLengths 2,5,6
    Header.keyFieldValue  01
    Header.fieldNames Key1,name,date
    The same approach is used for Hbatch, body and tbatch.
    My content conversion for the receiver adapter is:
    Record structure: Header,Hbatch,body,tbatch,trailer
    Header.fieldFixedLengths 2,5,6
    Header.fieldNames Key1,name,date
    Header.processFieldNames fromConfiguration
    rest as above.
    Regards,
    venu.

  • Empty file processing

    Hi All,
    We are using PI 7.0 with SP 18. In one of our file-to-IDoc scenarios, when posting empty files, the file is picked up and archived, but we do not get any logs in moni. Further, I couldn't find any option in the sender file adapter for empty-file processing.
    Can anyone please clarify this?

    Hi,
    >>>when posting empty files, the file is picked up and archived, but we do not get any logs in moni.
    This is good, isn't it?
    >>>Further, I couldn't find any option in the sender file adapter for empty-file processing.
    Either you don't have the correct SP level, or you didn't import the SAP Basis content appropriate for your SP.
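    For reference, on releases where it exists, the sender file channel offers an Empty-File Handling option with values along these lines (names as in later PI releases; verify against your SP level):
    Do Not Create Message
    Process Empty Files
    Skip Empty Files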
    Regards,
    Michal Krawczyk
