Content Conversion with Substructures
I'm very new at all of this, and I have a bit of a complex mapping. In an attempt to make it a little tidier, I made several substructures when I set up the data type.
For example, most of the lines start with a 'record id' made up of 4 fields. So I created a structure with those 4 fields, another structure with the rest of the fields, and then put the two structures together to define the line.
I thought I was making things cleaner, but when I got to the file content conversion, I realized I'd just made a big mess.
Here is a somewhat simplified example of my structure, after it comes out of the mapping.
<EmployeeData01>
<RecordID>
<RecordType>01</RecordType>
<SSN>1234-56-7890</SSN>
<EmployeeID></EmployeeID>
</RecordID>
<Employee>
<FirstName>Mickey</FirstName>
<MiddleName>M</MiddleName>
<LastName>Mouse</LastName>
<Prefix>Sir</Prefix>
<Title>Jr</Title>
<Gender>M</Gender>
</Employee>
</EmployeeData01>
Originally I tried just using EmployeeData01, and that gets me an error in the adapter monitor about the RecordID structure not being defined.
Then I tried not defining EmployeeData01 and instead defining RecordID and Employee. That works, sort of. But it puts the RecordID segment and the Employee segment on separate lines.
Is there a way to do this with the structure I have? I know I can solve it by going back and completely rewriting my structure, but I'd really rather not have to do that.
Thanks for any help at all.
Abhy,
I am using content conversion already. I'm trying to figure out how to configure the content conversion.
Those blogs are not so useful, because they are all about sender adapters, and this is a receiver adapter.
The sender is ABAP Proxy, the receiver is a file adapter.
The mapping works fine, and maps the document in XML exactly how I want it mapped.
My problem is getting the file adapter configured properly to receive XML and write it out the way that I need it written.
Here are the things I've tried:
#1
Recordset Structure: EmployeeData01
Parameters:
EmployeeData01.fieldFixedLengths with all the individual field lengths for the whole record
Result: Error in the adapter monitor
Error: Message processing failed: Exception: Exception in XML Parser (format problem?):'java.lang.Exception: Message processing failed in XML parser: 'Conversion configuration error: Unknown structure 'RecordID' found in document', probably configuration error in file adapter (XML parser error)'
#2
Recordset Structure: Employee,RecordID
Parameters:
RecordID.fieldFixedLengths - with lengths for the record id fields
Employee.fieldFixedLengths - with lengths for the rest of the fields
Result: Mapping works, but prints the record ID section and the employee section on separate lines, instead of one line
What I need to know is how to set up the file content conversion parameters so that the entire EmployeeData01 (RecordID + Employee) prints out on one line
Does this help clarify my question?
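One configuration often suggested for this exact symptom, offered here as an assumption to test rather than a confirmed answer: keep Recordset Structure RecordID,Employee as in attempt #2, but override RecordID.endSeparator (which defaults to 'nl') with an empty value such as '' so that no line break is written between the two substructures; the exact syntax for an empty separator varies by PI release. The sketch below shows, in plain Python, the fixed-width concatenation the adapter should then produce (all field widths are invented for illustration):

```python
# Sketch of the target output: RecordID fields and Employee fields
# concatenated into ONE fixed-width line (widths are assumptions).
def to_fixed(fields, widths):
    # Pad each value to its width, truncating anything too long.
    return "".join(v.ljust(w)[:w] for v, w in zip(fields, widths))

record_id = to_fixed(["01", "1234-56-7890", ""], [2, 12, 10])
employee = to_fixed(["Mickey", "M", "Mouse", "Sir", "Jr", "M"], [10, 1, 10, 4, 4, 1])
line = record_id + employee  # one line, no separator between substructures
print(line)
```

The point is only that the two substructures must be joined without an intervening end separator; whether that is done by an empty endSeparator or by flattening the data type is a judgment call for your release.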
Similar Messages
-
Sender File Content Conversion with re-occuring record pairs
Hi,
Our FCC works fine with the following structure:
Header1: H1F1, H1F2, H1F3,... (1:1)
Header2: H2F1,H2F2,H2F3,..... (1:1)
Notes: NF1,NF2,NF3,.............(1:1)
Line1:L1F1,L1F2,L1F3,.............(1:N)
Line1:L1F1,L1F2,L1F3,
Line1:L1F1,L1F2,L1F3,
Line1:L1F1,L1F2,L1F3
Line2:L2F1,L2F2,L2F3,............(1:N)
Line2:L2F1,L2F2,L2F3,
Line2:L2F1,L2F2,L2F3,
Line2:L2F1,L2F2,L2F3,
But we have structure as below:
Header1: H1F1, H1F2, H1F3,... (1:1)
Header2: H2F1,H2F2,H2F3,..... (1:1)
Notes: NF1,NF2,NF3,.............(1:1)
Line1:L1F1,L1F2,L1F3,.............(1:N)
Line2:L2F1,L2F2,L2F3,.............(1:N)
Line1:L1F1,L1F2,L1F3,
Line2:L2F1,L2F2,L2F3,
Line1:L1F1,L1F2,L1F3,
Line2:L2F1,L2F2,L2F3,
Line1:L1F1,L1F2,L1F3,
Line2:L2F1,L2F2,L2F3,
Line1 and Line2 occur as multiple pairs, making multiple line items. When we use content conversion parameters as below:
Header1.fieldNames H1F1,H1F2,H1F3,...
Header1.fieldFixedLengths 10,5,10,.....
Header1.keyFieldValue H1
Header1.keyFieldInStructure add
Header1.endSeparator 'nl'
Header1.lastFieldsOptional YES
and same for Header2, Notes, Line1, Line2
It picks up only first Header1, Header2, Notes, Line1 and Line2 in a recordset.
Does anyone have idea how could we do this content conversion? Any help would be appreciated.
Regards,
N@v!n
Hi Navin,
You can check the below links :-
Complex File Content Conversion - with random multiple occurrences
/people/anish.abraham2/blog/2005/06/08/content-conversion-patternrandom-content-in-input-file
/people/shabarish.vijayakumar/blog/2007/08/03/file-adapter-receiver--are-we-really-sure-about-the-concepts
Complex content conversion File sender
These might be of some help.
Regards,
Rohit -
Receiver file Content Conversion with Header line
Hi,
Here I am doing receiver file content conversion with header line.
I am able to get the output file correct, but when I open the file in Notepad the header line and the data appear on the same line (not acceptable).
When I open the same file in Internet Explorer, I can see the header line and the data on two different lines (acceptable).
What should I do? I want to see the same correct output in Notepad.
Please help me out.
Thanks in advance,
Srikanth.
You can use NameA.addHeaderLine.
Specify whether the text file will have a header line with column names. The following values are permitted:
0 - No header line
1 - Header line with column names from the XML document
2 - As for 1, followed by a blank line
3 - Header line is stored as NameA.headerLine in the configuration and is applied
4 - As for 3, followed by a blank line
The links below will help you learn the other parameters.
http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/content.htm
http://help.sap.com/saphelp_nwpi71/helpdata/en/44/686e687f2a6d12e10000000a1553f6/content.htm -
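A hedged side note on the Notepad symptom above: classic Windows Notepad only renders CRLF ("\r\n") as a line break, so a file written with bare LF looks like one long line there even though other viewers, such as Internet Explorer, split it correctly. If that is the cause, forcing a CRLF end separator in the receiver channel (e.g. an endSeparator of '0x0D0A', an assumption to check against the FCC documentation for your release) is the usual suggestion. A minimal illustration:

```python
# Illustration: identical content with LF-only vs CRLF line endings.
# Classic Notepad (before Windows 10 1809) treats only "\r\n" as a
# line break, so the LF-only file looks like one long line there.
lf_file = "COL1,COL2\nvalue1,value2\n"     # bare LF endings
crlf_file = lf_file.replace("\n", "\r\n")  # CRLF endings
print(crlf_file.count("\r\n"))             # 2
```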
File sender adapter and content conversion with polish character
We are loading a CSV file with the PI 7.0 file sender adapter using "content conversion". All fields go through EXCEPT a special character, hex '208C' (with a space in front), which looks like "Æ" and is converted to hex 'C28C'.
We are using code page UTF8
We are using:
enclosuresign "
enclosuresignescape ""
fieldcontentformatting nothing
enclosureconversion NO
Hope someone can help.
Hi Bohamo,
Hope you have set the following for your file sender adapter :
1. Transfer Mode is set to Binary,
2. File Type Text,
3. Encoding ISO-8859-1( for Western European Latin ).
In order to handle the Polish characters, try the following:
Your sender file, after coming into PI, has the XML encoding declaration 'UTF-8'.
Write a simple XSLT mapping that changes the value of the "encoding" attribute to "ISO-8859-1" in the output XML of the message mapping. Include this XSLT map as the second mapping step in your interface mapping.
The first step in your interface mapping will be your existing message mapping.
An example of the XSL code :
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method='xml' encoding='ISO-8859-1' />
<xsl:template match="/">
<xsl:copy-of select="*" />
</xsl:template>
</xsl:stylesheet>
or you can also do java mapping if you are comfortable with java code !
Cheers,
Ram. -
File Content Conversion with Multiple structures
Here is the scenario
Legacy to XI -> XI to R/3 (App Server)
txt file and fixed length.
<b>Test file</b>
100WELCOME 0430000960603201321
2000000000040008000803
<b>Table1</b>
RecordType
PriorityCode
Destination
BankOrginNo
CreationDate
CretionTime
Spaces
<b>Table2</b>
RecordType
Destination
BankOrginNo
ReferenceCode
ServiceCode
RecordLength
CharactersPerBlock
PartialCompression
CompressionSpaces
<b>Content Conversion Parameters:</b>
Document Name: Details
RecordsetName : recordset
Recordsetstructure : Table1,1,Table2,*
Recordsetsequence: Ascending
RecordsetperMessage: *
Keyfieldname : KF
Keyfieldtype : String(Case-Sensitive)
Table1.keyFieldValue :'1'
Table1.fieldFixedLengths:1,2,10,10,6,4,47
Table1.fieldNames :RecordType,PriorityCode,Destination,BankOrginNo,CreationDate,CretionTime,Spaces
Table2.keyFieldValue :'2'
Table2.fieldFixedLengths: 1,10,10,10,3,3,4,1,38
Table2.fieldNames:RecordType,Destination,BankOrginNo,ReferenceCode,ServiceCode,RecordLength,CharactersPerBlock,PartialCompression,CompressionSpaces
ignoreRecordsetName :true
When I try it with the first structure only, it works fine, whereas with 2 structures it does not.
In the adapter monitoring it shows that it picks up the file from Legacy (the file adapter displays green), whereas SXMB_MONI does not show anything.
Can anybody help with this? Do I need to maintain any other parameters for file content conversion?
Thanks
M
Hi,
I can see that.
The number of characters in the file for TABLE2 is less than the total of the field sizes given.
For example, if the Table2 fixed field length sum is 20 and your file contains only 10 characters, it fails. Please give complete data for Table2.
Test file
100WELCOME 0430000960603201321
<b>2000000000040008000803</b>
Table2.fieldFixedLengths: 1,10,10,10,3,3,4,1,38
Regards
vijaya
Message was edited by: vijaya kumari -
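Vijaya's length check can be sketched in a few lines (field widths and data taken from the parameters quoted above; this is an illustration, not PI code):

```python
# Sanity check: compare the Table2 line from the test file against the
# sum of Table2.fieldFixedLengths quoted in the thread.
widths = [1, 10, 10, 10, 3, 3, 4, 1, 38]  # Table2.fieldFixedLengths
line = "2000000000040008000803"           # Table2 line from the test file
expected = sum(widths)                    # 80 characters required
short_by = expected - len(line)
print(short_by)                           # the line is 58 characters short
```

Note that lastFieldsOptional YES can let the adapter tolerate missing trailing fields; without it, a short line like this typically breaks the conversion (exact behavior depends on the adapter release).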
JMS Content Conversion with spaces
Hi
I am using JMS Content Conversion to translate MQ messages (they come from the mainframe) to XML format.
These messages have a fixed-length structure, and the fields contain blanks (spaces).
The adapter cannot deal with that, only fixed lengths without spaces; the monitor gives the following error:
XI inbound processing failed for message at 2006-08-24|07:45:25.518+0200. JMS and XI identifiers for message are ID:414d51204445564d41494e31202020207aabe04420c2ea13 and bd95c760-3333-11db-b915-001125a56002 respectively. JMS adapter will rollback database and JMS session transactions
Has anyone dealt with this issue and managed to solve it?
Thx, Shai
Hi Shai,
Before you call him, try the simple conversion in the sender JMS communication channel; then, once you have the first XML structure in XI, use a simple/advanced Java function in the message mapping to get rid of the spaces if they bother you.
Can you post the fixed-width message you get from JMS? I'd like to have a look.
By the way, how did you resolve the file content conversion with 4 levels?
Regarding Yaki's phone...well I'm sure your Boss have it.
8-)
Again ,good luck.
Nimrod
Message was edited by: Nimrod Gisis -
Sender File Content Conversion with headerline
Hi,
Is it possible to handle the following flat file via sender file content conversion in the file adapter?
Inbound flatfile format:-
FILEHEADER
HEADER1
DETAILS1
DETAILS2
DETAILS3
HEADER2
DETAILS1
DETAILS2
DETAILS3
Target XML file format:-
<XML>
<FILEHEADER></FILEHEADER> occurrence 1
<RECORDSET> occurrence *
<HEADER></HEADER> occurrence 1
<DETAILS></DETAILS> occurrence *
</RECORDSET>
</XML>
Edited by: Bee Huat, Leonard Yong on Oct 16, 2008 10:52 AM
I've read through all the blogs, and have no leads on how to get this done.
I tried putting the following into the recordset: FileHeader,1,Header,1,Details,*
But it doesn't seem to work; the above expects the Fileheader and Header to be repeated in the file to be sent.
My file is in the following format.
Fileheader
Header1
Details1
Details2
Details3
Header2
Details1
Details2
Details3
I need the following XML format
<XML>
<Fileheader>
<invoice>
<header1>
<details1>
<details2>
<details3>
</header1>
</invoice>
<invoice>
<header2>
<details1>
<details2>
<details3>
</header2>
</invoice>
</XML>
Edited by: Bee Huat, Leonard Yong on Oct 16, 2008 5:34 PM -
Receiver File content conversion with nested structure
Hi Guys,
I have the below nested structure and have to convert it using receiver file content conversion.
<Header> [0, unbounded]
<A>a</A>
<B>b</B>
</Header>
<record> [0, unbounded]
<field1>
<X1>x</X1>
<Y1>y</Y1>
</field1>
<field2>
<X2>x</X2>
<Y2>y</Y2>
</field2>
</record>
The file is a comma separated one. Please let me know how to configure the content conversion.
Thanks
Hi Mukesh,
Have a look at the Shabz's blog for the receiver File content conversion : File Adapter (Receiver) - Are we "really" sure about the concepts?
Thanks,
Pooja -
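To make the flattening issue concrete: receiver FCC generally expects a flat list of record structures (an assumption consistent with the threads above), so the nested field1/field2 groups have to be flattened, either in the mapping or conceptually as in this sketch, assuming one comma-separated line per <record> is the target:

```python
# Sketch: flatten the nested field1/field2 groups of one <record>
# into the single comma-separated line the receiver should write.
record = {"field1": {"X1": "x", "Y1": "y"},
          "field2": {"X2": "x", "Y2": "y"}}
flat = [v for sub in record.values() for v in sub.values()]
csv_line = ",".join(flat)
print(csv_line)  # x,y,x,y
```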
Sender Content Conversion with ; endSeparator
Hi
I'm trying to read the following flat structure using a File Sender communication channel with Content Conversion.
11,99;22,99;33,99
This should be translated into:
<Employees>
<employee>
<employeeNumber>11</employeeNumber>
<employeeType>99</employeeType>
</employee>
<employee>
<employeeNumber>22</employeeNumber>
<employeeType>99</employeeType>
</employee>
<employee>
<employeeNumber>33</employeeNumber>
<employeeType>99</employeeType>
</employee>
</Employees>
Using the following settings:
Recordset Structure: employee,*
and Properties:
employee.fieldSeparator ,
employee.fieldNames employeeNumber,employeeType
employee.endSeparator ;
ignoreRecordsetName true
I get the following error:
more elements in file csv structure than
field names specified!
As soon as I change the endSeparator to 'nl' (and the file to match) everything works.
Is this a bug?
Hi Werner,
No it is not a bug.
Give the parameter <b><i>fieldSeparator</i></b> for root element i.e. Employees also and check your scenario again.
Thanks & regards,
Varun Joshi -
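For reference, the transformation Werner is after can be sketched outside PI like this (plain Python, just to pin down the expected record/field split; nothing here is PI API):

```python
# ';' separates employee records, ',' separates the two fields
# inside each record -- the split the FCC settings should express.
raw = "11,99;22,99;33,99"
employees = [rec.split(",") for rec in raw.split(";")]
xml = "<Employees>" + "".join(
    f"<employee><employeeNumber>{n}</employeeNumber>"
    f"<employeeType>{t}</employeeType></employee>"
    for n, t in employees) + "</Employees>"
print(xml)
```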
How to do a sender File Content Conversion with this structure ?
Hi SDN,
I have a flat file with this format :
header1
header2
headerN
detail1
detail2
detailM
header and detail are strings with different lengths.
There is no key field.
What are the method to do a FCC with this structure ?
thanks in advance
Greg
Hi,
Here is my test data :
3D10512224046N350 106500002000020083,450072413090545023500280242
3D10512224142N388 63400012000120138,050062213030641032902310041
3D10512224143N355 191600009000080333,850062914360806045901430124
3D105122230127/014046N 106501080015301051222084500170000,6894840047,4027610101
3D105122233970/014046N 106501090011001051222105800110000,6895330047,3884300102
3D105122228864/014142N 63402050008301051222065900080000,7999170047,3951100101
3D105122210381/014142N 63401073011001051222071500190000,7397000047,3853100102
3D105122210668/014142N 63403070011591051222081200100000,8302500047,3871200103
3D105122210342/014142N 63405073011591051222082700130000,8269000047,3866200104
3D105122223934/014142N 63404073011301051222085100120000,8365170047,3757800105
3D105122210380/014142N 63409063011591051222092900120000,9868670047,3974300106
3D105122201670/014142N 63406050011591051222100000110000,9861830047,5372000107
3D105122210514/014142N 63410073011001051222103300070000,9895830047,4272700108
3D105122201339/014142N 63407090014301051222105300190000,9831410047,4126660109
3D105122204940/014142N 63408073014301051222111700150001,0161500047,4243200110
3D105122225675/014142N 63411090012001051222121100191000,7733330047,3888700111
3D105122205447/014142N 63401080012001051222123800061000,7442170047,3867100112
3D105122216716/014143N 191601080011001051222074300091000,2266940047,8758400101
3D105122222952/014143N 191601073011301051222081800360000,3559120047,9371010102
3D105122225232/014143N 191601063014001051222090600130000,4123830047,8753300103
3D105122216516/014143N 191601083012001051222095700080000,4648000047,8643300104
3D105122230377/014143N 191601070012001051222100700040000,4714830047,8638500105
3D105122221885/014143N 191601073012001051222104500130000,5358330048,0378500106
3D105122208380/014143N 191601071517001051222122500110000,5779500047,7382700107
3D105122224171/014143N 191601070011001051222124300081000,6140330047,7543800108
There is no key code because the data are different depending on the companies in my group.
Greg -
Module processor enhancement with file content conversion
Hi All,
I have a bean (module processor) that converts a PDF doc to text. I need to use the same bean together with file content conversion. The module processor processing sequence is as follows:
1) My Bean: localejbs/myBean
2) Default: localejbs/CallSapAdapter (Bean for file content conversion).
When I tried the same (myBean + File adapter with file content conversion) , it was not working as expected. But the same bean works fine with normal file adapter (myBean + File adapter without file content conversion).
Couldn't figure out the reason. Please help.
Regards,
Ajay.
As the content conversion of the file adapter is done before calling the customer module, it is not possible to use both.
As workaround you can do the file content conversion with help of the MessageTransformBean. In the module chain you put the MessageTransformBean after your module.
Note that the configuration of the MessageTransformBean has to be done like the J2SE sender file adapter:
http://help.sap.com/saphelp_nw04/helpdata/en/0d/00453c91f37151e10000000a11402f/frameset.htm
You find an example here:
https://websmp206.sap-ag.de/~sapdownload/011000358700001186732005E/HowToConveModuleJMS.pdf
Regards
Stefan -
Sender adapter: File content conversion suggestion
Hi all,
I need your suggestion on file content conversion with the sender file adapter. I have a scenario where I receive data for multiple orders, i.e. header and item information, in a single file. Suppose we get the chance to design the source structure: which layout would make file content conversion and mapping easier, Method I or Method II?
Method I:
Orderno,FieldA,FieldB,FieldC
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,FieldA,FieldB,FieldC
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,FieldA,FieldB,FieldC
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Method II:
Orderno,FieldA,FieldB,FieldC
Orderno,FieldA,FieldB,FieldC
Orderno,FieldA,FieldB,FieldC
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
If you suggest method I, can you let me know how the content conversion parameters look like?
Thanks in Advance,
Adithya K
Message was edited by:
Adithya K
Adithya,
Looking at your input file I think a structure as below would be better.
<Order_Data>
<Orderno>
<Orderno/>
<FieldA/>
<FieldB/>
<FieldC/>
</Orderno>
<Items>
<Orderno/>
<OrderItem/>
<Field1/>
<Field2/>
<Field3/>
<Field4/>
<Field5/>
</Items>
</Order_Data>
Content Conversion Parameters for this would be:
RecordSet Structure OrderData,*,Orderno,1,items,*
OrderData.fieldNames Orderno,items
OrderData.fieldSeparator 'nl'
OrderData.endSeparator 'nl'
Orderno.fieldNames Orderno,FieldA,FieldB,FieldC
Orderno.fieldSeparator ,
Orderno.endSeparator 'nl'
items.fieldNames Orderno,OrderItem,Field1,Field2,Field3,Field4,Field5
items.fieldSeparator ,
Items.endSeparator 'nl'
ignoreRecordsetName true
Depending on which are your key fields you will have to put two more parameters:
Orderno.keyFieldValue
items.keyFieldValue
Hope this would be of some help.
Thanx,
Manju. -
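As a sanity check on Method I: the header and item lines can be told apart even without a dedicated key column, because they carry different field counts, which is effectively what keyFieldValue does with a real key field. A rough sketch (field names and values invented):

```python
# Method I parsing sketch: a 4-field line is an order header
# (Orderno,FieldA,FieldB,FieldC); anything longer is an item line.
sample = ["O1,A,B,C",
          "O1,10,f1,f2,f3,f4,f5",
          "O1,20,f1,f2,f3,f4,f5"]
orders = []
for line in sample:
    parts = line.split(",")
    if len(parts) == 4:                       # header line
        orders.append({"header": parts, "items": []})
    else:                                     # item line for last header
        orders[-1]["items"].append(parts)
print(len(orders), len(orders[0]["items"]))   # 1 2
```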
File content conversion Document offset not working?
hi All,
I have a flat file which i need to convert into an xml using content conversion.
The document consists of 18 lines as shown below:-
#HEADER_ARCDLX3098 OMD.I.170 0000000000000002#HEADERFECHADOC A 8
HEADER A 25
CLAMOVIM A 3
MOTIVMOV N 4
CENTRO A 4
DEPOSITO A 4
PROVEED A 10
DEPRECEP A 4
MATERIAL A 18
CANTIDAD N 13
UNMEDIDA A 3
FLAGDEIMPRESION A 1
TRANSPORTISTA A 12
CHOFERPATENTE A 50
CENTRORECEPTOR A 4
MATERIALRECIBIDO A 18
NUMERWMS A 16
#DATA10521000 12345678 12210521000 12345678 12210521000 12345678 122#EOF
the first 17 lines are to be ignored. And only the 18th line to be taken.
The 18th line however, contains multiple records based on fixed field length.
For example the above example contains 3 records - "10521000 12345678 122" , "10521000 12345678 122" and "10521000 12345678 122".
I tried to do a content conversion with document offset. However, it doesn't seem to work. There is no error, and the XML is created without ignoring the first 17 lines.
xml.documentOffset 17
This is what is given in the content conversion.
Please help.
thnx,
Ninu
Hi,
Under the Document Offset parameter, specify the number of lines (in your case 17) that are to be ignored at the beginning of the document.
This enables you to skip comment lines or column names during processing.
This should work. check the below link:
http://help.sap.com/saphelp_nwpi711/helpdata/en/44/6713ec3f914ddee10000000a1553f7/frameset.htm -
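To illustrate what a working documentOffset of 17 should achieve, here is the same logic in plain Python (the record length of 21 is an assumption read off the sample data; adjust to the real layout):

```python
# Skip the first 17 lines (documentOffset), then cut line 18 into
# fixed-length records of 21 characters each.
lines = ["ignored"] * 17 + [
    "10521000 12345678 12210521000 12345678 12210521000 12345678 122"]
data = lines[17]
reclen = 21  # assumed width of one record
records = [data[i:i + reclen] for i in range(0, len(data), reclen)]
print(records[0])  # 10521000 12345678 122
```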
Sender File Adapter - File Content Conversion
Hello,
i do have a problem with the file content conversion at the sender file adapter.
I have configured the file content conversion with key field defined
keyfield: key
recordsetstructure is set as head,1,item,*
head.fieldSeparator |
head.beginSeparator |
head.endSeparator |
head.keyFieldValue H
head.fieldNames key,....
item.fieldSeparator |
item.beginSeparator |
item.endSeparator |
item.keyFieldValue I
item.fieldNames key,....
When I run it like this it will not read the file, since it has a problem identifying the key fields.
Whenever I change the whole logic to be based not on field separators but on fixed lengths, everything works perfectly fine.
But this unfortunately doesn't help, as I have no fixed structure for the rest of the fields.
Also, when I change the recordsetstructure to head,1,item,1 my file is processed correctly, of course only for the first line; at least that tells me the file structure is correctly defined.
Can anyone explain why the key field identification doesn't work with field names and separators, but does in the same setup with fixed lengths?
thanks a lot
Hello All,
thanks for the helpful answers.
So I tried Jayan's tip and removed the begin separator.
Unfortunately my file really does look like this: |H|...|...| for the header record
and |I|...|...| for the item record (that is an "I" as in Item).
In my opinion this means I really do have a begin separator which is the same as the field separator.
When I removed it from both the file and the config it started to work.
So I wrote a Java function which throws away the first character in this case and integrated it into my scenario,
so that when the file is read there is no | as a begin flag anymore.
And the whole thing works. This sounds more like a bug than a feature, but anyway I am happy.
so thanks a lot
Tina -
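Tina's workaround can be pictured like this (a sketch, not her actual Java function): a begin separator equal to the field separator produces an empty first field, so once the leading '|' is stripped the first field is again the key field FCC needs.

```python
# A '|' begin separator that equals the field separator yields an
# empty first field, so the key field check fails; stripping the
# outer '|' characters restores "key first".
line = "|H|100|Smith|"
broken = line.split("|")            # ['', 'H', '100', 'Smith', '']
fields = line.strip("|").split("|")
print(broken[0] == "", fields[0])   # True H
```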
Can't get File content conversion to produce CSV file
Hi Guys
Have no problem at all getting XML file created via an RFC
Structure is something like this
From MONI message monitor inbound message payload
<?xml version="1.0" encoding="UTF-8" ?>
- <rfc:Z_XI_005_RFC xmlns:rfc="urn:sap-com:document:sap:rfc:functions">
- <IP_CUSTOMER_HEADER>
<CUSTOMERID>100853</CUSTOMERID>
<COMPANY>Bram Van Tuyl Coldstore</COMPANY>
<SHORT_NAME>VAN TUYL</SHORT_NAME>
<STREET>Middelkampseweg 1</STREET>
etc
</IP_CUSTOMER_HEADER>
</rfc:Z_XI_005_RFC>
Receiver Grouping payload
<?xml version="1.0" encoding="UTF-8" ?>
- <rfc:Z_XI_005_RFC xmlns:rfc="urn:sap-com:document:sap:rfc:functions">
- <IP_CUSTOMER_HEADER>
<CUSTOMERID>100853</CUSTOMERID>
<COMPANY>Bram Van Tuyl Coldstore</COMPANY>
<SHORT_NAME>VAN TUYL</SHORT_NAME>
etc.
With no conversion xml file is created on output directory
<?xml version="1.0" encoding="UTF-8"?>
<ns1:MT_customer_header xmlns:ns1="http://avenue.com/xi/test"><customerid>100853</customerid><company>Bram Van Tuyl Coldstore</company>
etc
Now trying to use File content conversion with the simplest possible conversion -- just insert a colon after all the fields in the structure.
Following parameters
Adapter type FILE
receiver box checked
Message protocol File Content conversion
File access etc as before
Content conversion parameters
Recordset : IP_CUSTOMER_HEADER
(have also tried Z_XI_005_RFC)
Name
CUSTOMERID.fieldSeparator : (colon)
MONI shows message received by XI and processed correctly but NO outfile gets written to the target directory.
what am I doing wrong --it's probably something stupid.
The data into XI is a ONE RECORD structure -- not a table etc.
Cheers
jimbo
Hi guys -- both solutions STILL not working
all I'm now getting is just a file with garbage in it
Payload is fine on XI system
Here's the payload
<?xml version="1.0" encoding="UTF-8" ?>
- <rfc:Z_XI_005_RFC xmlns:rfc="urn:sap-com:document:sap:rfc:functions">
- <IP_CUSTOMER_HEADER>
- <item>
<CUSTOMERID>65013</CUSTOMERID>
<COMPANY>OY PANDA AB</COMPANY>
<SHORT_NAME>PANDA</SHORT_NAME>
<STREET>P.O.Box 3</STREET>
<STREET2 />
<STREET3 />
<STREET4 />
<POSTCODE>3331 GT</POSTCODE>
etc etc until end of customer master details
</item>
</IP_CUSTOMER_HEADER>
</rfc:Z_XI_005_RFC>
Without conversion file is generated on target directory
Here's the XML output file sent to the directory
(for testing I only used the first 3 fields)
Not sure why <item> disappeared - maybe that has something to do with it ?
<?xml version="1.0" encoding="UTF-8" ?>
- <ns1:MT_customer_header xmlns:ns1="http://avenue.com/xi/test">
<CustomerId>65013</CustomerId>
<company>OY PANDA AB</company>
<shortname>PANDA</shortname>
</ns1:MT_customer_header>
The RFC call from the R3 system is fine (otherwise I wouldn't get the message into XI). Transaction SM58 on R3 also shows no RFC errors.
The converted file is just a blank with an nl character at the end??????
cheers
jimbo