File splitting and transform in XI

Hi friends,
I have two queries:
1) Is XI able to read in a file and split it into 2 new reformatted files based on specific fields within the original file (e.g. hours & earnings type)?
2) Is it possible to set up a translation table in XI and access and use the table data in creating these files?
Thanks in advance

Hi,
> 1) Is XI able to read in a file and split it into 2 new reformatted files based on specific fields within the original file (e.g. hours & earnings type)?
>>>What do you mean by "within the original file"? In any case, yes: you can split a file into 2 or more messages based on field values, and XI can write 2 or more files; a sketch of the split logic follows below.
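A plain-Java sketch of that split logic (this is not XI mapping API code; the semicolon delimiter, the field position, and the value "HOURS" are made-up assumptions):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;

public class FileSplitter {
    public static void main(String[] args) throws Exception {
        try (BufferedReader in = new BufferedReader(new FileReader("input.txt"));
             PrintWriter hours = new PrintWriter("hours.txt");
             PrintWriter earnings = new PrintWriter("earnings.txt")) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split(";");
                // route each record by the (assumed) earnings-type field
                if ("HOURS".equals(fields[2])) {
                    hours.println(line);   // reformat as needed
                } else {
                    earnings.println(line);
                }
            }
        }
    }
}

In XI itself this would be modeled as a 1:N multi-mapping, or as a BPM with two send steps, rather than hand-written file I/O; only the per-record routing decision is the same.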
> 2) Is it possible to set up a translation table in XI and access and use the table data in creating these files?
>>>If you want to look up tables while creating a file, you can maintain the table on the XI ABAP stack or create Java tables.
When you want to get the data from an XI ABAP table, you look it up over a JCo connection; a rough sketch follows below.
Hope this helps, if my understanding is correct.
Regards,
moorthy
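To make the JCo lookup concrete, here is a rough sketch using the classic JCo 2 client API (com.sap.mw.jco). All connection values and the RFC-enabled function module Z_TRANSLATE_CODE (with parameters IV_SOURCE and EV_TARGET, wrapping a read of your translation table) are hypothetical placeholders; a productive mapping would normally use a client pool rather than a single connection:

import com.sap.mw.jco.JCO;

public class TranslationLookup {
    // Looks up a target value for a source code via an RFC call to the XI ABAP stack.
    public static String lookup(String sourceCode) throws Exception {
        JCO.Client client = JCO.createClient(
            "100",       // SAP client
            "user",      // user
            "password",  // password
            "EN",        // logon language
            "xihost",    // application server host
            "00");       // system number
        client.connect();
        try {
            JCO.Repository repo = new JCO.Repository("lookupRepo", client);
            JCO.Function fn = repo.getFunctionTemplate("Z_TRANSLATE_CODE").getFunction();
            fn.getImportParameterList().setValue(sourceCode, "IV_SOURCE");
            client.execute(fn);
            return fn.getExportParameterList().getString("EV_TARGET");
        } finally {
            client.disconnect();
        }
    }
}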

Similar Messages

  • File splitting and IDoc serialization into XI

    Dear experts,
    I am brand new with XI so please be indulgent with my following question.
    We have to define a new inbound interface with R/3 (4.7). We also have XI 3.0 into our landscape.
    The legacy system will send us a flat file whose lines all have the same structure. The first part of the file is aimed at creating an FI posting in R/3; the second part is aimed at creating a CO posting. The CO posting has to be created after the FI posting.
    I would like to know whether it is possible for XI:
    1) to split the input flat file into 2 "files" (one for the FI posting, the other for the CO posting)
    2) to create 2 IDocs: one for the FI posting and one for the CO posting
    3) to serialize those IDocs into R/3
    Thank you in advance for your help.
    Regards,
    Fabrice

    Hi Fabrice,
    this may help you....
    File to Multiple IDocs
    /people/anish.abraham2/blog/2005/12/22/file-to-multiple-idocs-xslt-mapping
    also see File splitting 
    Re: splitting of messages using BPM
    regards
    biplab
    **Reward points if you find it useful**

  • File split and correlation in BPM

    Hi,
    I have a large file from the sender FTP which I am splitting by using 'recordsets per message' in the file adapter.
    Now I want all the split files to enter the same BPM.
    I have designed the BPM logic using a correlation in the receive step. Still, I am seeing that each split message from the same file enters a separate BPM instance.
    How can we monitor whether the correlation is active? What other options could we check?
    Kindly help.
    Thanks,
    John.

    Hi ciochinah,
    The value is the same in all the batches.
    But there is one aspect: each batch has many line items (no headers). I am using an element from the line items for correlation.
    All the line items in all the batches have the same value for the element used in correlation.
    I have one important question here. Where can we check for the active correlation at runtime? Is there a transaction which will tell you whether the correlations are active and which one was taken at runtime?
    Thanks,
    John

  • How to get the source file name and transform into database

    Hi,
    We need to load data from the source file abc.csv into an Oracle table. During loading, we need to keep the file name abc.csv and write it into the database table as one field. Has anyone done a similar task, and how did you do it? Please help!

    Hi,
    If you can write the filename to a text file, you may do the following:
    (1) create a text-file containing the name of your source-file.
    (2) create an external table using that text-file as input.
    Now you can SELECT the filename from the external table; a minimal end-to-end sketch follows below.
    Of course, you still need a way to create the flat file, handle it, etc.
    Grtz.
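    As that end-to-end sketch, here is the approach driven from Java via JDBC; the paths, credentials, and the Oracle directory object data_dir are assumptions:

    import java.io.FileWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FilenameLoader {
        public static void main(String[] args) throws Exception {
            // (1) write the source file's name into a one-line text file
            try (FileWriter fw = new FileWriter("/data/filename.txt")) {
                fw.write("abc.csv\n");
            }
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:orcl", "scott", "tiger");
                 Statement st = con.createStatement()) {
                // (2) external table over that text file (data_dir must point to /data)
                st.execute("CREATE TABLE ext_filename (fname VARCHAR2(255)) "
                    + "ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER "
                    + "DEFAULT DIRECTORY data_dir "
                    + "ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE) "
                    + "LOCATION ('filename.txt'))");
                // (3) the file name is now selectable like any other column
                try (ResultSet rs = st.executeQuery("SELECT fname FROM ext_filename")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }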

  • Split and transform words

    I have a problem I'm having trouble solving.
    In my application I type into a textbox: 'SQL SQL' MICROSOFT 'SQL SQL'
    When I click a button, I want the stored procedure to receive the parameter:
      @ String = 'SQL SQL "OR MICROSOFT OR MICROSOFT OR' SQL SQL '
    Can you help me, please?
    Thank you

    Hi,
    While passing the string, make sure to replace each single quote with two single quotes, as below:
    @String = '''SQL SQL'' MICROSOFT ''SQL SQL'''
    Best Regards, Sorna
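    As a rough sketch of that escaping step (shown in Java here, since the poster's language isn't stated; the technique is the same in C# or VB.NET):

    public class QuoteEscaper {
        // Doubles every single quote so the value survives inside a SQL string literal.
        static String escapeQuotes(String input) {
            return input.replace("'", "''");
        }

        public static void main(String[] args) {
            String raw = "'SQL SQL' MICROSOFT 'SQL SQL'";
            System.out.println(escapeQuotes(raw));
            // prints: ''SQL SQL'' MICROSOFT ''SQL SQL''
        }
    }

    Note that parameterized commands avoid manual quote doubling altogether and are the safer choice.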

  • BPM Split and Merge

    Hi...
    I want to do a scenario like file split and merge using BPM.
    For that I have used:
    1.Receive
    2.Transformation(1:N)
    3.Block(ForEach)
    4.Control
    5.EndBlock
    6.Transformation(N:1)
    7.Send.
    While executing the scenario, the message goes to the queue, where it shows the status "Running".
    Can you please tell me if I did something wrong in my scenario?

    > 1.Receive
    > 2.Transformation(1:N)
    > 3.Block(ForEach)
    > 4.Control
    > 5.EndBlock
    > 6.Transformation(N:1)
    > 7.Send.
    You are using an empty infinite block, and hence it is always in the "Running" state. You don't need a block at all: after the 1:N transformation, use the N:1 transformation and send. I know you must be doing a sample scenario; in reality you will usually have a send step for sending to another system line by line, and that is when you will need a block.
    VJ

  • I have a huge file which is in GB and I want to split the video into clip and export each clip individually. Can you please help me how to split and export the videos to computer? It will be of great help!!

    What version of Premiere Elements do you have and on what computer operating system is it running?
    Please review the following workflow.
    ATR Premiere Elements Troubleshooting: PE11: Project Assets Organization for Scene and Highlight Grabs from Collection o…
    But please also determine if your project goal is supported by
    a. format of your source
    and
    b. computer resources
    More later based on details that you will post.
    ATR

  • How to validate and transform large (180M) xml files

    Hi:
    I've been looking at various ways to do this with Oracle and am getting a bit lost in the sea of documentation and the different ways to go about this. I was hoping that something like the XMLParser class and XMLTransform would be smart enough to handle large files by using SAX when it has to, but I'm getting "too many nodes" when trying to transform a really large file. I've gotten oraxsl to handle it if I pass the proper memory parameters on the command line, but a) this will still have limits and b) I was trying to do this in a stored procedure, which (I think) means I'm looking at XMLParser?
    I've also seen documentation on something called "Scalable DOM" but I think that's only in 11g? So I'm thinking I have to write a (Java?) stored procedure to loop through the top elements of this XML file (select extract(...)) and transform each node?
    I have the XML, XSD, and XSLT all in clob columns. What's the easiest/quickest path within Oracle to validate the XML against the xsd and translate source XML with XSL?
    I'm using Oracle 10gR2 on RH Linux.
    Thanks.

    > So I'm thinking I have to write a (Java?) stored procedure to loop through the top elements of this XML file (select extract(...)) and transform each node?
    Here's something I wrote a while back when I hit the same restriction ("too many nodes") on 10.2.0.4.
    It takes XMLType as input parameters (but it's easy to adapt for CLOB) and streams the transformed XML directly into a file:
    create or replace and compile java source named ora_xslt_util as
    import oracle.xml.parser.v2.*;
    import oracle.xdb.XMLType;
    import java.io.*;
    import org.w3c.dom.*;
    public class oraXSL
    {
        // Parses the XMLType content into a DOM document
        private static XMLDocument getXMLDocument(XMLType xml) throws Exception
        {
            DOMParser parser = new DOMParser();
            parser.setValidationMode(oracle.xml.parser.v2.XMLParser.NONVALIDATING);
            parser.setPreserveWhitespace(true);
            parser.parse(new StringReader(xml.getStringVal()));
            return parser.getDocument();
        }
        // Applies the XSL stylesheet to the document and streams the result to a file
        public static void transform(XMLType doc, XMLType xsl, String filename) throws Exception
        {
            OutputStream os = new FileOutputStream(filename);
            XMLDocument xmldoc = getXMLDocument(doc);
            XMLDocument xsldoc = getXMLDocument(xsl);
            XSLProcessor xsp = new XSLProcessor();
            XSLStylesheet xss = xsp.newXSLStylesheet(xsldoc);
            xsp.processXSL(xss, xmldoc, os);
            os.close();
        }
    }
    and the PL/SQL wrapper (originally part of a package):
    PROCEDURE processXSL (
       p_xmldoc   IN XMLType
     , p_xsldoc   IN XMLType
     , p_filename IN VARCHAR2
    )
    IS
    LANGUAGE JAVA NAME 'oraXSL.transform(oracle.xdb.XMLType, oracle.xdb.XMLType, java.lang.String)';

  • Can we cleanse and transform data at flat file or external table level?

    Hi,
    I have some data that I want to cleanse and transform. I don't want to cleanse it after I populate the external table; I want it done at flat-file level or while populating the external table. Can we cleanse and transform data at the flat-file or external-table level through Oracle or OWB 11.2? Is it possible to run a conditional load (i.e. with a where clause or if-else logic) for an external table? Can we call Oracle functions for an external table at creation time?
    Thanks in advance.
    Regards,
    Ann.

    Hi Oleg,
    Thanks a lot for the clarification. :)
    So is there a way that I can cleanse the data within the text file through Oracle or OWB? I have datatype mismatches in the data, and most of my data is getting rejected because of them. The only way I can think of to solve this is to create the external table with all fields as varchar and then cleanse the data. But that doesn't seem very efficient, and it will get very complicated because I have almost 80-90 fields.
    Any help?
    Thanks and regards,
    Ann.

  • How do I split iMovie events? I find the option in the File menu and have selected the part of the event that I want to split before, but the option to split is not bolded (allowed) in the File menu.

    I have imported video to iMovie and want to split the event. I have selected part of the event, but when I go to File > Split the event before the selected clip, it is not bolded in the menu and is not an allowed selection. I had imported directly to an external HD, so I moved the event to the Mac HD and tried again. This did not work either. Any ideas why this option is not available to me?

    Use iMovie itself to move the events to your Movies folder; otherwise you can't split them.

  • Does Acrobat Pro read the content of a PDF file and transform it?

    Does Acrobat Pro read the content of a PDF file and transform it to an XLS file without the need for many changes or manual work?

    Acrobat X (Standard and Pro) will save tabular data to XLS or XLSX format, provided it can recognize the table as being a table. If the PDF has missing or incorrect structure tags, Acrobat will try to guess the table layout by the position of text and lines on the page - this works well for basic formatting but if the table has complex styling, spanned cells etc. it can lead to problems.
    Acrobat X will even attempt to export a table within a scanned document, by applying OCR during the export stage - though again this relies on the table being visually identified.
    See http://www.adobe.com/products/acrobatpro/pdf-to-word-excel-converter.html and this article on how to extract one table from a larger document.

  • Large XML files, how to split and read them in PI

    Hi,
    For a specific requirement, we have a 60-90 MB XML file being generated by a legacy application.
    This file has to be processed by a PI 7.31 sender file adapter and posted to the SAP ECC system as an ABAP proxy.
    To address any performance issues, we are planning to read the file in chunks.
    Is there a way to configure the file adapter to pick a fixed no. of records, to limit the message size?
    Any inputs in this regard will be appreciated.
    regards,
    Younus

    Hi Younus,
    First of all, if you are using PI 7.31, you should be quite able to process an XML file on the 100 MB scale. The performance issue will occur on the ECC side instead, as it cannot handle such a high volume.
    My suggestion would be to pick up the whole XML file in one go and do the necessary mapping. However, on the receiver side use the SOAP adapter instead of the XI adapter for making proxy calls to ECC.
    In the SOAP adapter (using HTTP as transport and XI 3.0 as message protocol) there is an option called 'XI Packaging' which can be used to send data in packets.
    Refer to the help doc on it below.
    Configuring the Receiver SOAP Adapter - Advanced Adapter Engine - SAP Library
    Thanks
    Bibek

  • Need to validate the file name, split the file name, and store the split values into variables

    Dear All,
    Below is my requirement.
    I have a folder, and in that folder a bunch of text files. The file names have the format below:
    ACA_122_pay_20140430_001
    The file name starts with the ACA code, then the group id, group name, and a date-time stamp. This is the standard structure of the file name.
    I want to check each and every file in the folder against this structure; the structure should be the same for all the files. If it is, I need to extract the codes from the file name. For example,
    from the file name above, I need to put ACA into a variable and 122 into a variable, and likewise put the group name and the date-time stamp into variables.
    If the file format is not valid, then I need to log an exception.
    Let me know if I am not clear.
    Kindly provide the C# code for achieving the above requirement, as I am new to .NET programming. Kindly help me.
    Thanks in advance,
    Regards,
    Madhava Ganji

    Hi MadhavaGanji,
    I have posted how to validate the file name and header row against a definition table which stores the file name and column definitions.
    Take a look and see if it is helpful.
    http://sqlage.blogspot.com/2013/11/ssis-validate-file-header-against.html
    http://sqlage.blogspot.com/
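    The links above cover the SSIS approach. As a rough illustration of the validate-and-split logic itself (sketched in Java rather than the requested C#; the assumed name structure is code_groupId_groupName_yyyyMMdd_sequence):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class FileNameValidator {
        // Matches names like ACA_122_pay_20140430_001
        private static final Pattern NAME_PATTERN =
            Pattern.compile("^([A-Za-z]+)_(\\d+)_([A-Za-z]+)_(\\d{8})_(\\d{3})$");

        public static void main(String[] args) {
            String fileName = "ACA_122_pay_20140430_001";
            Matcher m = NAME_PATTERN.matcher(fileName);
            if (m.matches()) {
                String code      = m.group(1); // "ACA"
                String groupId   = m.group(2); // "122"
                String groupName = m.group(3); // "pay"
                String dateStamp = m.group(4); // "20140430"
                String sequence  = m.group(5); // "001"
                System.out.println(code + " " + groupId + " " + groupName
                    + " " + dateStamp + " " + sequence);
            } else {
                // invalid structure: log the exception as the requirement demands
                System.err.println("Invalid file name structure: " + fileName);
            }
        }
    }

    In C# the same pattern works with System.Text.RegularExpressions.Regex, and the loop over the folder would use Directory.GetFiles.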

  • Creating XML file Using Call Transformation

    Hello Friends,
      I have searched before posting this thread and couldn't find anything.
      I am creating an XML file using CALL TRANSFORMATION. My internal table has 3 date fields and some other fields. For some records I don't have values for the date fields. In that case my XML file gives the date value as 0000-00-00, since I declared it as a date type. This value 0000-00-00 is not accepted by the middleware as a valid date. I cannot change it to a string type, per the suggestion I received.
    In that case I am advised to skip printing the date field tag if it doesn't have a value.
         Is there any way to skip the date field if it is empty? Any suggestions, please?
    Thanks
    Lakshmi.

    Hi,
    I had exactly the same problem before. When you call a transformation there is an option called initial_components. According to SAP, if you use initial_components = 'suppress', the empty fields should not be generated in the XML.
    Now, this didn't work for me, and I have seen other people with the same problem. Here is how I solved it (maybe not the best way, but it worked):
    First: My fields are all CHAR in my table
    Second: In the transformation you can use a conditional transformation to not output a tag if the field is empty. Here is a piece of my transformation (I am using simple transformations):
       <tt:root name="ROOT"/>
         <tt:cond s-check="not-initial(ref('ROOT.L1_NM')) or not-initial(ref('ROOT.L2_NM'))">
              <TRNMTR_NM>
                <tt:cond s-check="not-initial(ref('ROOT.L1_NM'))">
                  <l1_nm>
                    <tt:value ref="ROOT.L1_NM"/>
                  </l1_nm>
                </tt:cond>
                <tt:cond s-check="not-initial(ref('ROOT.L2_NM'))">
                  <l2_nm>
                    <tt:value ref="ROOT.L2_NM"/>
                  </l2_nm>
                </tt:cond>
              </TRNMTR_NM>
            </tt:cond>
    As you can see, I first check if the fields have values.
    Hope it helps
    Edited by: carlosrv on Oct 4, 2011 8:22 PM

  • Unicode export: table-splitting and package splitting

    Hi SAP experts,
    I know there are lot of forums related to this topic, but I have some new questions and hence posting a new thread.
    We are in the process of doing a Unicode conversion in our landscape (a CRM 7.0 system based on NW 7.01), and we are running on AIX 6.1 and DB2 9.5. The database size is around 1.5 TB, and hence we want to optimize the export and import in order to reduce the downtime. As part of the process, we have tried table-splitting and parallel export/import to reduce the downtime.
    However, we have some doubts whether the table-splitting has actually worked in our scenario, as the export executed for nearly 28 hours.
    The steps followed by us :
    1.) Doing the export preparation using SAPINST
    2.) Doing the table-splitting preparation, by creating a table input file having entries in the format <tablename>%<No. of splits> (e.g. PRCD_CLUST%36 to request 36 splits of table PRCD_CLUST). Also, we have used the latest R3ta file and the dbdb6slib.o (belonging to version 7.20, even though our system is on 7.01) using SAPINST.
    3.) Starting with the export using SAPINST.
    Some observations and questions:
    1.) After completion of the table-splitting preparation, .WHR files were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created, and on what basis are they created?
    2.) I will take as an example the table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this particular table, although the number of splits given for it was 36; the table size is around 72 GB. We noticed that the first 28 .WHR files for this table had lots of records, but the last (29th) .WHR file had only 1 record. We also noticed that the packages/splits for the first 28 splits were created quite fast, but the 29th one took several hours to complete. Also, lots of packages (around 56, of 1 GB each) were generated for this 29th split, and only one R3load process was running for it, generating the packages one by one.
    3.) Our question here is: is there any rule of thumb for deciding the number of splits for a table? Also, during the export, is there anything that needs to be specified in the input screen when we use table splitting?
    4.) Also, what exactly is the difference between table-splitting and package-splitting? Are they both effective together?
    If you have any questions and or need any clarifications and further inputs, please let me know.
    It would be great if we could get some insight into this whole procedure. We know a lot of things are taken care of by SAPINST itself in the background, but we just want to be certain that we have done the right thing and that this is the way it should work.
    Regards,
    Santosh Bhat

    Hi,
    First of all, please ignore my very first response... I accidentally posted a response meant for some other thread; sorry for that.
    Now coming you your questions...
    > 1.) Can package splitting and table-splitting be used together? If yes or no, what exactly is the procedure to be followed? I observed that the packages also have entries for the tables that we decided to split. So does package splitting or table-splitting override the other, such that only one of the two can be effective at a time?
    Package splitting and table splitting work together, because they serve different purposes.
    My way of doing it is:
    When I do the package split I choose packageLimit 1000 and also split out the tables (which I selected for table split) into separate packages (one package per table). I do this because it helps me track those tables.
    Once the above is done I follow it up with R3ta and the where-splitter for those tables.
    That is followed by the manual migration monitor for the export/import. As mentioned in the previous reply above, you need to ensure you sequence your packages properly: large tables are exported first, use sections in the package list file, etc.
    > 2.) If you are well versed with table splitting procedure, could you describe maybe in brief the exact procedure?
    Well, I would say run R3ta (it will create multiple SELECT queries) followed by the where-splitter (which will just split each SELECT into multiple WHR files).
    Best would be to go through some document on table splitting and let me know if you have a specific query. Don't miss the role of the hints file.
    > 3.) Also, I have mentioned about the version of R3ta and library file in my original post. Is this likely to be an issue?Also, is there a thumb rule to decide on the no.of splits for a table.
    The rule is to use the executable of the kernel version supported by your system version. I am not well versed with 7.01 and 7.2 support; to give you an example, I should not use a 700 R3ta on a 640 system, although it works.
    > 1.) After completion of the table-splitting preparation, .WHR files were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created, and on what basis are they created?
    If you ask for 10 splits, you will get 10 splits, or in some cases 11; the reason might be the field it is using to split the table (the where clause). But I am not 100% sure about it.
    > 2.) I will take as an example the table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this particular table, although the number of splits given for it was 36; the table size is around 72 GB. The first 28 .WHR files had lots of records, but the last (29th) .WHR file had only 1 record. The packages for the first 28 splits were created quite fast, but the 29th one took several hours, generated lots of packages (around 56, of 1 GB each), and ran with only one R3load process, generating the packages one by one.
    Not sure why you got 29 splits when you asked for 36; one reason might be that the field (key) used for the split didn't have more than 28 unique values. I don't know how PRCD_CLUST is split; you need to check the hints file for the "key". One example: suppose my table is split using company code and I have 10 company codes; even if I ask for 20 splits, I will get only 10 splits (WHRs).
    Yes, the 29th file will always have fewer records. If you open the 29th WHR you will see that it has the "greater than" clause. The first and the last WHR files have the "less than" and "greater than" clauses, a kind of safety net which allows you to prepare the splits even before the downtime has started. These 2 WHRs ensure that no record gets missed, even though you might have prepared your WHR files a week before the actual migration.
    > 3.) Our question here is: is there any rule of thumb for deciding the number of splits for a table? Also, during the export, is there anything that needs to be specified in the input screen when we use table splitting?
    I am not aware of any rule of thumb. In the first iteration you might choose something like 10 splits for 50 GB, or 20 for 100 GB. If any of the tables overshoots the window, you can try increasing or decreasing the number of splits for that table. For me, a couple of times the total export/import time improved by reducing the splits of some tables (I suppose I was over-splitting them).
    Regards,
    Neel
    Edited by: Neelabha Banerjee on Nov 30, 2011 11:12 PM
