File Splitting in XI.

Hi,
My scenario is as below:
[ Legacy (file) ] -> [ XI ] -> [ R3 (Proxy) ]
1. My file has a single <ROW>...</ROW> structure, with occurrence 0..unbounded.
2. The problem I am facing is that when the number of line items increases, XI is unable to pick up the file properly.
3. I want to split the file into smaller files, either before XI or on the XI server.
What would be the better solution?
Please reply.
Regards,
Akshay.

Processing large files is really a pain. Some points to look at:
1. Java heap size.
2. Amount of content conversion.
3. Usage of custom/standard adapter modules.
The best practice is to split the large file into smaller ones and process them.
I have no idea how to develop such a script myself, but you can find plenty of examples on the internet with a quick Google search. You need a script that splits the larger file into smaller files, scheduled at the OS level (independent of XI). In XI, point the channel to the path of the smaller files.
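As a rough illustration of such a splitter (this is an assumption-laden sketch, not anything posted in the thread; the chunk size, file naming, and the choice of Java are all illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class FileSplitter {

    /** Split a list of lines into chunks of at most chunkSize lines each. */
    public static List<List<String>> chunk(List<String> lines, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += chunkSize) {
            int end = Math.min(i + chunkSize, lines.size());
            chunks.add(new ArrayList<>(lines.subList(i, end)));
        }
        return chunks;
    }

    /** Write each chunk as input-000, input-001, ... next to the input file.
        For truly huge files, stream with a BufferedReader instead of
        readAllLines so memory use stays flat. */
    public static void split(Path input, int chunkSize) throws IOException {
        List<List<String>> chunks = chunk(Files.readAllLines(input), chunkSize);
        for (int i = 0; i < chunks.size(); i++) {
            Path out = input.resolveSibling(
                    String.format("%s-%03d", input.getFileName(), i));
            Files.write(out, chunks.get(i));
        }
    }
}
```

Such a program would be scheduled at the OS level, as described above, with the XI file adapter then polling the directory the chunk files are written to.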
Regards,
Jai Shankar

Similar Messages

  • File Split to Multiple IDocs

    Hi all
    I have a problem with splitting a flat file into multiple IDocs. My attempt was to do this without BPM, as mentioned in some similar posts, but I am not sure how to get the file split. The flat file has multiple orders with multiple line items, and each order should create a single IDoc of type ORDERS.ORDERS05. I have imported the IDoc as an external definition with IDOC being 0..unbounded. My input file looks like the following:
    ORDER1|LINE1|SOMETHING
    ORDER1|LINE2|SOMETHING
    ORDER1|LINE3|SOMETHING
    ORDER2|LINE1|SOMETHING
    ORDER2|LINE2|SOMETHING
    I have been getting this into the appropriate PI XML structure like:
    MT_Order
    ...OrderRecordSet  (0...unbounded)
    ......OrderRecord    (1...1)
    How do I get this into multiple IDocs, one for each ORDERx with its n LINE items? I was thinking of SplitByValue with a value change on the PO number, but this doesn't seem to work.
    Any feedback appreciated. Thanks,
    Daniel

    Dear Daniel,
    You can achieve this by using multi-mapping without BPM. Please refer to the following blog:
    /people/jin.shin/blog/2006/02/07/multi-mapping-without-bpm--yes-it146s-possible
    Apart from the above, if allowed, you can change the IDoc occurrence in the XSD of the IDoc, and then achieve the requirement with a simple message mapping.
    Thanks
    Prasanna
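    For reference, the grouping that SplitByValue is meant to produce here (one context per order number) can be described in plain Java. This is only an illustration of the logic, not mapping runtime code; the pipe delimiter and the order number being the first field come from Daniel's sample:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OrderGrouper {

    /** Group pipe-delimited lines by their first field (the order number),
        preserving input order; each resulting group would feed one IDoc. */
    public static Map<String, List<String>> groupByOrder(List<String> lines) {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (String line : lines) {
            String order = line.split("\\|", 2)[0];
            groups.computeIfAbsent(order, k -> new ArrayList<>()).add(line);
        }
        return groups;
    }
}
```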

  • XML File splitting in PI

    Hi all,
    We have a requirement that the receiving application can accept files only up to 2 MB in size, so this is a limitation from the receiving application.
    The source system is SAP ECC system and the receiving application is a legacy application.
    Technically it is a ABAP Proxy to File scenario. Receiving application can accept files only in XML format.
    The SAP ABAP proxy code is written to read all the records in a SAP table, and the full table load is sent as one proxy XML message to PI. PI has to split this full XML file into chunks based on some condition and then transfer them to the receiving legacy application.
    Can we achieve file splitting in PI for XML files based on the number of records or on size?
    Please share your inputs/pointers to provide the best solution to this requirement.
    regards,
    Younus

    Hi,
    Yes, this can be done by limiting the number of records you send through at a time.
    So let's say there are 10,000 records. You might send only 200 at a time, and that will keep it under the 2 MB limitation.
    This is a simple if function on your nodes. Please see the links below; they should help you.
    Split source message into multiple target messages
    Defining Message Splits - SAP NetWeaver Process Integration - SAP Library
    Configuring Mapping-Based Message Splits - Integration Directory - SAP Library
    Split mapping created no messages SAP PI 7.11
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/30ea2fdf-f047-2a10-d3a2-955a634bde6b?overridelayout=t…
    Regards,
    Jannus Botha
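    The fixed count of 200 records is a heuristic for staying under 2 MB; the same batching can also be driven directly by payload size. A plain-Java sketch of the idea (illustrative only, outside any PI mapping; the byte limit and record encoding are assumptions):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SizeBatcher {

    /** Batch records so each batch's UTF-8 payload stays at or under
        maxBytes. A single record larger than maxBytes still gets a
        batch of its own rather than being dropped. */
    public static List<List<String>> batchBySize(List<String> records, long maxBytes) {
        List<List<String>> batches = new ArrayList<>();
        List<String> current = new ArrayList<>();
        long size = 0;
        for (String r : records) {
            long len = r.getBytes(StandardCharsets.UTF_8).length;
            if (!current.isEmpty() && size + len > maxBytes) {
                batches.add(current);          // flush the full batch
                current = new ArrayList<>();
                size = 0;
            }
            current.add(r);
            size += len;
        }
        if (!current.isEmpty()) batches.add(current);
        return batches;
    }
}
```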

  • Idoc to File Split Scenario..

    HI Experts,
    My scenario is an IDoc to file split scenario.
    An IDoc will be triggered from the SAP ECC system, and on the target side we need to drop two text files in the FTP directory.
    So the IDoc data needs to be split and dropped into two different files in the target FTP directory.
    Can anyone suggest how to go ahead with the scenario?
    Thanks,
    --Kishore.

    Hi,
    In the IDoc, some of the data will go to one file and some goes to the other.
    The structures of the two files are different.
    There is no conditional splitting here; the contents of every IDoc should be mapped according to the requirement and dropped as two files at the target end.
    for ex:
    Idoc
    field1
    field2
    field3
    field4
    On the output side,
    file1 should contain
    field1
    field3
    field4
    file2 should contain
    field1
    field2
    field3
    This is what the requirement is; if you are not getting it, reply back with your queries.
    Thanks,
    --Kishore..

  • File-splitting based on content.

    Hi Experts,
    This question is regarding file-splitting.
    It's a BAI2 file with multiple records in it.
    a) I need to split the file based on an identifier and create that many sub-files (with a header and trailer added to each file).
    b) And can we ensure the files share the same name (with a count suffixed at the end)?
    The forum talks about BPM, enhanced interface/receiver determination, message mapping splits, etc.
    Which would be the best approach for this?
    Please advise.
    Thank you,
    Rakesh.

    Hi,
    Using the enhanced interface/receiver determination approach is the best one.
    1. If your version is 7.1 or above, you can also try using enhanced interface determination. Please go through the link below:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/90dcc6f4-0829-2d10-b0b2-c892473f1571?QuickLink=index&overridelayout=true
    2. In the receiver determination, you need to select "Extended" as the type of receiver determination.
    Illustration of Enhanced Receiver Determination - SP16
    regards,
    ganesh.

  • How to split an EXPORT FILE into multiple files

    Product: ORACLE SERVER
    Date written: 2002-04-19
    How to split an EXPORT FILE into multiple files
    =================================
    PURPOSE
    This note explains how to work around the restriction that export cannot create a dump file larger than 2 GB, by writing the export to several smaller files.
    Explanation
    !!! IMPORTANT: this procedure must be run in the KORN SHELL (ksh). !!!
    If you are not using ksh, typing ksh at the unix prompt switches to it.
    Using a unix pipe and the split command, the export is written to disk in 1024m (about 1 GB) pieces as it runs.
    All other parameters available for export and import can still be used in addition.
    Export command:
    echo|exp file=>(split -b 1024m - expdmp-) userid=scott/tiger tables=X
    Example> echo|exp file=>(split -b 1024m - expdmp-) userid=scott/tiger log=scott.log
    When the export is split like this, the file names are generated as expdmp-aa, expdmp-ab, ... expdmp-az, expdmp-ba, and so on.
    Import command:
    echo|imp file=<(cat expdmp-*) userid=scott/tiger tables=X
    Example> echo|imp file=<(cat expdmp-*) userid=scott/tiger ignore=y commit=y
    split can also be used together with compress.
    Export command:
    echo|exp file=>(compress|split -b 1024m - expdmp-) userid=scott/tiger tables=X
    Import command:
    echo|imp file=<(cat expdmp-*|uncompress) userid=scott/tiger tables=X
    *** The above was tested on Solaris. The shell commands may differ depending on the OS.
    Alternatively, without the split command, a unix pipe and compress can be used in a 3-step procedure to work around the export file size limit.
    Export command:
    1) Make the pipe
    mknod /tmp/exp_pipe p
    2) Compress in background
    compress < /tmp/exp_pipe > export.dmp.Z &
    -or-
    cat p | compress > output.Z &
    -or-
    cat p > output.file & )
    3) Export to the pipe
    exp file=/tmp/exp_pipe userid=scott/tiger tables=X
    Import command:
    1) Make the pipe
    mknod /tmp/imp_pipe p
    2) uncompress in background
    uncompress < export.dmp.Z > /tmp/imp_pipe &
    -or-
    cat output_file > /tmp/imp_pipe &
    3) Import thru the pipe
    imp file=/tmp/imp_pipe userid=scott/tiger tables=X
    Reference Document
    <Note:1057099.6>

  • File splitting and idoc serialization into XI

    Dear experts,
    I am brand new to XI, so please be indulgent with my following question.
    We have to define a new inbound interface with R/3 (4.7). We also have XI 3.0 in our landscape.
    The legacy system will send us a flat file whose lines all have the same structure. The first part of the file is meant to create an FI posting in R/3. The second part of the file is meant to create a CO posting. This CO posting has to be created after the FI posting.
    I would like to know whether it is possible for XI:
    1) to split the input flat file into 2 "files" (one for the FI posting, the other for the CO posting)
    2) to create 2 IDocs: one for the FI posting and one for the CO posting
    3) to serialize those IDocs into R/3
    Thank you in advance for your help.
    Regards,
    Fabrice

    Hi Fabrice,
    this may help you....
    File to Multiple IDocs
    /people/anish.abraham2/blog/2005/12/22/file-to-multiple-idocs-xslt-mapping
    also see File splitting 
    Re: splitting of messages using BPM
    regards
    biplab
    <b><i>**Reward points if you find it useful**</i></b>

  • HELP Desktop reorganized, file split into 10 different pieces -- any way to revert without losing anything? Urgent!

    HELP!
    I was showing a friend/client how to take a screenshot of a section of the screen (cmd+shift+4), but I accidentally pressed ctrl+cmd+4 and perhaps a few others (mistake due to using non-apple keyboard with mine), which reorganized her very cluttered but very purposeful desktop!
    She was rather upset. It also placed all the files that wouldn't fit in a row beneath her hard-drive icon. Understandably, this is frustrating for her; but what made it worse is that a very important document (is it PDF, .Doc? I don't know...) is, she claims, in TEN PIECES! How is this possible?
    I think those are just leftover pieces and the main file is there somewhere, but I haven't heard from her -- this happened yesterday afternoon -- and I don't know if she's even used the finder to locate it (does she remember the name of the file?) -- and now she's going to visit the Apple store, and hoping they can do something for this dilemma!
    They can revert to a previous state, but then she might lose recent files.
    Is there any way (even intensive) to undo this re-organization of her desktop?
    What is the explanation of the important file splitting into ten pieces (apparently from accidentally re-organizing her desktop or pressing a combination of cmd, shift, ctrl, alt, and 4)? Is there any simple way to undo either of these? (I'd gladly suggest a program for her to use to combine pieces of a PDF file or images into a .doc, but at this point, understandably, she is wary of my help. I still want to know what's going on, nevertheless.)
    Thank You All!

    No way I know of to undo an arrange.  And I've never heard of files being broken into pieces, whether PDF, DOC, or other.

  • TV@nywhere File Split Size limitation?

    I have been recording videos and wanted to set the File Split Size to 690mb, leaving some slop room, so the mpeg files would fit on a CD. I ran several tests and found that this file split size made no difference at all. The program would split on the defaults at about 4063mb.
    Any others have problems with this?

    The crappy software supplied with the TV@nywhere range is very buggy, and one of the bugs is the file split option. I found that no matter what size it was set to within MSIPVS, it would nearly always split the file at the default split size, even with an NT file system. This is probably a registry entry problem, so you may be able to bypass the software via the registry, but do not attempt it unless you are comfortable editing the registry.
    I personally never trust any software to split my video files and prefer to capture the video as one complete file (not possible with MSIPVS unless the file is smaller than the default split size, because of the bug) and then split it manually at more suitable points within the video. You can use Tmpeg, VirtualDub or similar to perform the splits. By doing it this way you do not get people cut off mid-sentence or find you have to change CDs part way through an action sequence.

  • File split in SAP PI 7.3.1

    Hello ,
    We have a requirement in which we have to split a flat file in SAP PI 7.3 into two files based upon a particular condition.
    The condition could be: if payroll area = 1 the record should go to the first file, and if payroll area = 2 to the second file.
    The main file will be a mix of payroll areas 1 and 2. Once the split is done, we have to send the two files to different receivers.
    If there is a blog on SDN covering this, please help!
    Regards
    Gaurav Ranjan

    Thanks Amit,
    I am planning to create one message mapping with multiple target messages, and the records will be filled based on the condition.
    This is described in the suggested blog:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/90dcc6f4-0829-2d10-b0b2-c892473f1571?overridelayout=true
    My only concern is that I want all the records consolidated in one file, not one file per record.
    Please help!
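    The consolidation concern (one file per payroll area, not one file per record) comes down to collecting all records for an area into a single payload before writing. A plain-Java sketch of that logic; the semicolon delimiter and the payroll area being the first field are assumptions for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PayrollSplit {

    /** Build one consolidated payload per payroll area (taken here as the
        first semicolon-separated field of each record), rather than one
        message per record. */
    public static Map<String, String> consolidate(List<String> records) {
        Map<String, StringBuilder> files = new LinkedHashMap<>();
        for (String r : records) {
            String area = r.split(";", 2)[0];
            files.computeIfAbsent(area, k -> new StringBuilder())
                 .append(r).append('\n');
        }
        Map<String, String> out = new LinkedHashMap<>();
        files.forEach((k, v) -> out.put(k, v.toString()));
        return out;
    }
}
```

    Each entry of the result is then one complete file for one receiver.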

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter's "Recordset Structure" property (e.g. Row,5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, if a 700 MB file comes in (say with 20000 records), the destination file should have 20000 records.
    To ensure no records are missed during processing through XI, the EOIO quality of service is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before it is read by the sender file adapter.
    XPATH conditions are evaluated in the receiver determination to either append the record to the main destination file or create a trigger file with only the trigger record in it.
    The problem we are facing is that the "Recordset Structure" (e.g. Row,5000) splits in chunks of 5000, and when the remaining records of the main payload number fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample XML file representing the inbound file, with the last record, with Duns = "9999", as the trigger record that will be used to mark the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter, for test purposes, I have set the "Recordset Structure" to "Row,5" for the sample XML inbound file above.
    I have two XPATH expressions in the receiver determination to take the last recordset with Duns = "9999" and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record, with Duns = "9999").
    Destination file: (this is where all the records with Duns NE "9999" are supposed to get appended)
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
              <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPATH condition in XML Spy and it works fine. My doubts are about the "Recordset Structure" property set as "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba

    Hi Debnilay,
    We do have a 64-bit architecture and still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • Reg. File split in XI

    Hi,
    How do we split a file in XI? I know that we can split a file using the BPM step Message Split. But my scenario is this: I am getting a flat file from a customer, and as soon as it is picked up by the file adapter, it should be split into two parts if the file size exceeds a certain limit, say 100 MB. How do we handle such scenarios in XI?
    Regards,
    Murthy

    Hi ,
    Check Satish Reddy's reply in the following thread
    Re: File receiver - Get file size
    Regards,
    Sushil.

  • File splitting and transform in XI

    Hi friends,
    I have two queries,
    1) Is XI able to read in a file and split it into 2 new reformatted files based on specific fields within the original file (ex: hours & earnings type)?
    2) is it possible to set up a translation table in XI and access and use the table data in creating these files?
    Thanks in advance

    Hi,
    <i>
    1) Is XI able to read in a file and split it into 2 new reformatted files based on specific fields within the original file (ex: hours & earnings type)?</i>
    >>> What do you mean by "within the original file"? In any case, you can split the file into two or more based on fields, and you can write two or more files from XI.
    <i>2) is it possible to set up a translation table in XI and access and use the table data in creating these files?</i>
    >>> If you want to look up some tables to create a file, you can maintain the table in the XI ABAP stack or you can create Java tables.
    When you want to get the data from an XI ABAP table, you need to look up these tables via a JCo connection.
    Hope this helps if my understanding is correct,
    Regards,
    moorthy

  • File splitting in EJB module

    Hi,
    I have an EJB module (used in a sender file adapter) which takes a flat file as input and then splits it into multiple XML files that need to be sent to the Integration Engine.
    Is there any method by which I can return multiple XMLs from the same EJB module to the Integration Engine?
    Will writing the return statement from the EJB in a loop do the job?
    Thanks,
    Shiladitya

    Hi Shiladitya !
    Multiple outputs from an EJB module are not possible for the file adapter. If you need splitting, maybe you could use an external application that does the job, and then the XI scenario picks up the XML files (use the filename to get them in order); or maybe you could "split" your original scenario into 2 scenarios: one to split the file (File-XI-File) and the other to process the result of the first one (File-XI-Target Adapter).
    In the first scenario you could use the "recordsets per message" parameter of the sender file adapter communication channel to automatically split the source file into several XML messages.
    Why do you need an EJB module ?
    Regards,
    Matias.

  • File Split based on Condition

    Hello
    I have a scenario where I get a file and need to split it into two based on a condition. Is it possible to accomplish this at the file adapter configuration level, or do I need a mapping for this?
    Sample File 
    ABC1234asdfasfasdfasdfsdfasdfsdfsdfsdfasfsdfasdfsfasdfNewafasfsdfasfasdfasfafsdas
    asfdasdfasfasdfasdf
    asdfasdfadfasdfasfsd asdfsadfa fs
    asdfasdfasfasdfasdfasdfadfsfafasfas
    ABC1234asfdjoawejasdlfasdfasdfjsdfljasfjsfjaslfjasldfjasdlfsdOldfsdfadsfadfsdfasdffasdfasdfads
    asdfasdfsdfa
    asfdasdfasdfasdf
    asdfsfasfsfsad
    fasfasfasdffas
    asdfasfsfa
    asdfasfasfasdfas
    ABC1234asfdjoawejasdlfasdfasdfjsdfljasfjsfjaslfjasldfjasdlfsdNewfsdfadsfsd23424324234234234
    asdfsfasdfasfasfasfasfa
    asdfasfasdfasfasfaasdfa
    In the above content, a record always starts with ABC1234 and can have multiple sub-records. Each record has the value New or Old at columns 120-123 of its first line. A record can have many sub-records, but each ABC1234 line starts a new record. I need to split the file into two, newdata.txt and olddata.txt, containing the New and Old records respectively.
    Files after the split should look like this
    newdata.txt
    ABC1234asdfasfasdfasdfsdfasdfsdfsdfsdfasfsdfasdfsfasdfNewafasfsdfasfasdfasfafsdas
    asfdasdfasfasdfasdf
    asdfasdfadfasdfasfsd asdfsadfa fs
    asdfasdfasfasdfasdfasdfadfsfafasfas
    ABC1234asfdjoawejasdlfasdfasdfjsdfljasfjsfjaslfjasldfjasdlfsdNewfsdfadsfsd23424324234234234
    asdfsfasdfasfasfasfasfa
    asdfasfasdfasfasfaasdfa
    olddata.txt
    ABC1234asfdjoawejasdlfasdfasdfjsdfljasfjsfjaslfjasldfjasdlfsdOldfsdfadsfadfsdfasdffasdfasdfads
    asdfasdfsdfa
    asfdasdfasdfasdf
    asdfsfasfsfsad
    fasfasfasdffas
    asdfasfsfa
    asdfasfasfasdfas
    I appreciate if any one can help me with this.
    Thanks in advance

    Hi,
    as Stefan pointed out already, you can't do this using standard adapter functionality;
    you can do it in mapping easily. Convert the code below into a UDF and it will work (the forum markup garbled the original: myString[i] was rendered as myString<i>, and the loop must use < rather than <=):
    String test = "ABCasdasasasABCadsfsddgfgABCggdgdgdgdgABC34343";
    String[] myString = test.split("ABC");
    for (int i = 0; i < myString.length; i++)
        System.out.println("MY STRING " + myString[i]);
    Regards,
    Raj
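    Raj's snippet splits a single string on the header prefix; the posted requirement also has to keep each header line together with its continuation lines and route whole records by marker. A sketch of that routing (the header prefix comes from the sample; since the sample lines are shorter than 120 characters, this version simply searches the header line for the Old marker instead of reading columns 120-123, which is an assumption):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RecordRouter {

    /** Split raw lines into newdata/olddata groups. A record starts at a
        line beginning with the header prefix; following lines are its
        sub-records and stay with it. Lines before the first header (if
        any) fall into newdata by default. */
    public static Map<String, List<String>> route(List<String> lines) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        out.put("newdata.txt", new ArrayList<>());
        out.put("olddata.txt", new ArrayList<>());
        List<String> current = out.get("newdata.txt");
        for (String line : lines) {
            if (line.startsWith("ABC1234")) {
                // In the real file the flag sits at columns 120-123;
                // here we just search the header line for "Old".
                current = line.contains("Old") ? out.get("olddata.txt")
                                               : out.get("newdata.txt");
            }
            current.add(line);
        }
        return out;
    }
}
```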

  • Java File Splitting

    Hi,
    We're working on a distributed file processing project. My friend told me we can split files using a command in Java. I searched for it but found no such command. It's not like splitting a text file according to the number of characters, but splitting any kind of file into smaller pieces. Is there a command for that in Java or will we have to write a whole program for it?

    NikhilKini wrote:
    Hi,
    We're working on a distributed file processing project. My friend told me we can split files using a command in Java. I searched for it but found no such command. It's not like splitting a text file according to the number of characters, but splitting any kind of file into smaller pieces. Is there a command for that in Java or will we have to write a whole program for it?
    No, there's no such method in the standard Java classes. You will have to write your own, or have a look at Apache's IO library: chances are they have something like it.
    [http://commons.apache.org/io/]
    Good luck!
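    The "whole program" is short, though. A minimal byte-chunk splitter might look like this (a sketch: it works for any file type because it never interprets the bytes; for large files you would stream from a FileInputStream rather than load the whole array into memory):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BinarySplitter {

    /** Split arbitrary bytes into pieces of at most chunkSize bytes each;
        the last piece may be shorter. */
    public static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += chunkSize) {
            int end = Math.min(off + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, off, end));
        }
        return chunks;
    }
}
```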
