Data load from variable file names

I have multiple files that I want to load into a cube, each starting with the same 5 characters but ending differently, e.g. GM1010104, GM1010204. What's the best option for a MaxL script to automate this data load? Can you use a wildcard name in the script to pick up anything starting with GM101****?

No - you need to specify the file name as it actually appears (I've never tried a wildcard, but I am pretty sure it wouldn't work). One solution to this problem, though, is to have a shell script (or DOS commands) auto-generate an ESSCMD/MaxL script based on the files that exist in a directory. Most scripting environments let you loop through a list of files that match some pattern - you can then create a script from the results and execute it.
Another option is to build a MaxL script that accepts a parameter (the file name) and have a shell script call it as it loops through the file list.
Hope that helps.
Regards,
Jade
----------------------------------
Jade Cole
Senior BI Consultant
Clarity [email protected]
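For illustration, here is a minimal sketch of that first approach in Python (any scripting environment would do); the directory, login, application/database, and rules file names below are placeholders, not anything from this thread:

import glob
import subprocess

# Placeholders -- substitute your own data directory, credentials,
# application.database, and load rule.
DATA_DIR = "/data/essbase/loads"
SCRIPT = "autoload.msh"

# Pick up every file starting with GM101, e.g. GM1010104, GM1010204.
files = sorted(glob.glob(DATA_DIR + "/GM101*"))

# Auto-generate a MaxL script with one import statement per file...
with open(SCRIPT, "w") as out:
    out.write("login admin identified by password on localhost;\n")
    for path in files:
        out.write("import database Sample.Basic data "
                  "from data_file '%s' "
                  "using server rules_file 'LoadRule' "
                  "on error abort;\n" % path)
    out.write("logout;\n")

# ...and execute it with the MaxL shell.
subprocess.run(["essmsh", SCRIPT], check=True)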

Similar Messages

  • Data loading from flat file to cube using BW 3.5

    Hi Experts,
    Kindly give me the detailed steps, with screens, for data loading from a flat file to a cube using BW 3.5.
    Please.

    Hi,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local workstation or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files, you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which marks the separator as part of a field value where required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is part of a value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    Suppose you chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters. (A runnable sketch of this behavior appears at the end of this reply.)
    If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45 and 12"45" is transferred as 12"45".
    In a text editor (for example, Notepad), check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator or the escape character, it is displayed as hexadecimal code after the entries have been checked. A two-character entry for a data separator or an escape character is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
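    To illustrate the data separator / escape character behavior from step 4, here is a small sketch using Python's csv module, where the escape character described above plays the role of csv's quote character (the sample line is invented):

    import csv
    import io

    # ";" is the data separator; '"' is the escape character: a value
    # that itself contains ";" must be enclosed in it.
    sample = 'CostCenter;"12;45";97\n'
    reader = csv.reader(io.StringIO(sample), delimiter=";", quotechar='"')
    print(next(reader))  # ['CostCenter', '12;45', '97']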

  • Cube creation & Data loading from Flat file

    Hi All,
    I am new to BI 7. Trying to create a cube and load data from a flat file.
    Successfully created the InfoSource and cube (used the InfoSource as a template for the cube),
    but got stuck at that point.
    I need help on how to create transfer rules/update rules and then load data into the cube.
    Thanks,
    Praveen.

    Hi
    Right-click on the InfoSource -> Additional Functions -> Create Transfer Rules.
    Now, in the window, insert the fields you want to load from the flat file -> activate it.
    Next, right-click on the cube -> Additional Functions -> Create Update Rules -> activate it.
    Click on the small arrow on the left, and when you reach the last node (DS),
    right-click on it -> Create InfoPackage -> External Data tab -> give your flat file path and select CSV format -> Schedule tab -> click on Start.
    Hope it helps.
    cheers.

  • CUNIT error in data loading from flat file after R/3 extraction

    Hi all
    After R/3 Business Content extraction, when I load data from a flat file to the InfoCube, I get a conversion exit CUNIT error. What might be the reason? The data in the flat file's 0UNIT column is accurate, and the mapping rules are also correct, but I still get the CUNIT error.

    Check your unit: if you are loading amounts or quantities, check what mapping you have and what you are loading from the flat files.
    BK

  • Vendor master data load from flat file

    Hi Experts,
    I am trying to load data from a flat file and am confused about which InfoObject I should use. My requirement is for InfoCube 0BBP_C01.
    As far as I found, only two InfoObjects are available for this InfoCube: 0BBP_VENDOR and 0VENDOR. But my flat file has the following fields: Vendor Code, Vendor Name, Ultimate Parent, City, Country, Minority Status, and Vendor Payment Terms, which are not all present together in either InfoObject.
    Please suggest which InfoObject I should use, or should I create a Z InfoObject for my requirement?
    Thanks in advance.
    Regards,
    Niranjan Chechani

    Hi Kiran,
    I am loading data through a flat file, so I don't follow your point about fetching attributes present in the R/3 system, which I am not using in this case.
    Hi rvc,
    I have created two characteristic Z InfoObjects, i.e. ZU_PARENT for Ultimate Parent and ZVEN_PAY for Vendor Payment Terms.
    When I try to add these two as attributes and mark them as navigational attributes, I get the error "Characteristic 0VENDOR: The attributes SID table(s) could not be filled" (Message no. R7586).
    Please suggest: am I wrong somewhere?
    Thanks and Regards,
    Niranjan Chechani
    Edited by: Niranjan Chechani on Nov 28, 2011 12:15 PM

  • Help Required regarding: Validation on Data Loading from Flat File

    Hi Experts,
    I need your help with the following issue.
    I need to validate the transactional data loaded into the GL cube from a flat file:
    1) A transactional record is to be loaded into the cube only if a master data record exists for the 0GL_ACCOUNT InfoObject.
    2) If the master data record does not exist, the record needs to be skipped from the load, and after the load the system should throw a message saying how many records were skipped (if any).
    I would really appreciate your help and suggestions on solving this issue.
    Regds
    Hari

    Hi, write a start routine in the transfer rules like this:
      DATA: l_s_datapak_line TYPE transfer_structure,
            l_s_errorlog     TYPE rssm_s_errorlog_int,
            l_s_glaccount    TYPE /bi0/pglaccount,
            new_datapak      TYPE tab_transtru.
      REFRESH new_datapak.
      LOOP AT datapak INTO l_s_datapak_line.
*       Keep the record only if master data exists for the account.
        SELECT SINGLE * FROM /bi0/pglaccount INTO l_s_glaccount
          WHERE chrt_accts = l_s_datapak_line-chrt_accts  " substitute your transfer structure/DataSource field for CHRT_ACCTS
            AND gl_account = l_s_datapak_line-gl_account  " substitute your transfer structure/DataSource field for GL_ACCOUNT
            AND objvers    = 'A'.
        IF sy-subrc EQ 0.
          APPEND l_s_datapak_line TO new_datapak.
        ENDIF.
      ENDLOOP.
      datapak = new_datapak.
      IF datapak[] IS INITIAL.
*       abort <> 0 means skip the whole data package
        ABORT = 4.
      ELSE.
        ABORT = 0.
      ENDIF.
    I have already made some modifications, but you can slightly change it to suit your needs.
    regards
    Emil

  • Data Load from XML file to Oracle Table

    Hi,
    I am trying to load data from an XML file into an Oracle table using the DBMS_XMLStore utility. I have performed the prerequisites: created the directory from the APPS user, granted read/write on the directory, placed the data file in a folder on the apps tier, and created a procedure 'insertXML' to load the data, based on the Metalink note (Note ID: 396573.1, How to Insert XML by Passing a File Instead of Using Embedded XML). I am running the procedure through the anonymous block below to insert the data into the table.
    Anonymous block:
    begin
      insertXML('XMLDIR', 'results.xml', 'employee_results');
    end;
    I am getting the error below after running the anonymous block.
    Error:  ORA-22288: file or LOB operation FILEOPEN failed
    Cause:  The operation attempted on the file or LOB failed.
    Action: See the next error message in the error stack for more detailed
            information. Also, verify that the file or LOB exists and that
            the necessary privileges are set for the specified operation. If
            the error still persists, report the error to the DBA.
    I searched for this error on Metalink and found Doc ID 1556652.1. I ran the script provided in the document. PFA the script.
    Also attaching a document that lists the steps that I have followed.
    Please check and let me know if I am missing something in the process. Please help to get this resolved.
    Regards,
    Sankalp

    Thanks Bashar for your prompt response.
    I ran the INSERT statement but encountered an error; below are the error details.
    Error report -
    SQL Error: ORA-22288: file or LOB operation FILEOPEN failed
    No such file or directory
    ORA-06512: at "SYS.XMLTYPE", line 296
    ORA-06512: at line 1
    22288. 00000 -  "file or LOB operation %s failed\n%s"
    *Cause:    The operation attempted on the file or LOB failed.
    *Action:   See the next error message in the error stack for more detailed
               information.  Also, verify that the file or LOB exists and that
               the necessary privileges are set for the specified operation. If
               the error still persists, report the error to the DBA.
    The INSERT statement I ran:
    INSERT INTO employee_results (USERNAME, FIRSTNAME, LASTNAME, STATUS)
        SELECT *
        FROM XMLTABLE('/Results/Users/User'
               PASSING XMLTYPE(BFILENAME('XMLDIR', 'results.xml'),
               NLS_CHARSET_ID('CHAR_CS'))
               COLUMNS USERNAME   NUMBER(4)     PATH 'USERNAME',
                       FIRSTNAME  VARCHAR2(10)  PATH 'FIRSTNAME',
                       LASTNAME   NUMBER(7,2)   PATH 'LASTNAME',
                       STATUS     VARCHAR2(14)  PATH 'STATUS');
    Regards,
    Sankalp
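    ORA-22288 on FILEOPEN generally means the database server cannot see the file at the operating-system path behind the XMLDIR directory object; as the error's action text says, verify that the file exists and that the necessary privileges are set. A quick sanity check, sketched in Python and run on the database host (the OS path is a placeholder for wherever XMLDIR points):

    import os

    # Placeholder: the OS path your XMLDIR directory object points to.
    xml_dir = "/u01/app/data/xml"
    target = os.path.join(xml_dir, "results.xml")

    # The file must exist and be readable by the oracle OS user.
    print(os.path.exists(target), os.access(target, os.R_OK))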

  • Hierarchy data loading from flat file to SAP BW

    Hi experts,
    I am new to this; can you please help me in solving this scenario?
    I have a scenario like this; can you tell me the procedure and the hierarchy flat file structure?
    To develop a data model in SAP BW to analyze sales:
    MASTER DATA STRONG ENTITIES
    Create characteristics for following strong master data entities
    Customer
    Outlet
    Sales Office
    Sales Region
    Sales Representative
    Material
    Use Calendar Day and Calendar Month as time characteristics
    MASTER DATA WEAK ENTITIES
    Create attributes for the following weak master data entities
    Customer Name
    Customer Location
    Material Name
    Material Group
    ADDITIONAL MASTER DATA (HIERARCHY)
    Create a hierarchy where sales offices are assigned to sales regions
    and sales representatives are assigned to sales offices
    KEY FIGURES
    Quantity
    Price
    Tax %
    Sales Revenue
    DATA LOADING STRATEGY
    Load all master and transaction data using flat files.
    Thank you in advance
    Edited by: subbaraju on Dec 23, 2009 6:42 PM

    Hi Arun,
    Can you send me the detailed procedure for how to solve the above scenario?
    That is, how many flat files do we need to create (customer flat file, material flat file, hierarchy flat file, transaction flat file)? I don't know whether that is right or not.
    For Tax and Sales Revenue, what formulas do we need to submit?
    And which one do we need to take as the master data key for the hierarchy?
    Thanks in advance
    Edited by: subbaraju on Dec 24, 2009 7:05 PM

  • Data loading from flat file into BW

    Hi Experts,
    When we load data from a flat file into BW, should the sequence of fields in the flat file match the transfer structure or the communication structure?
    With thanks,
    Arijit

    Arijit,
    The sequence of fields in a flat file must be the same as the one in the transfer structure.
    The communication and transfer structures are the same right after the first creation of a DataSource from a flat file source system. After that, someone could change the sequence in the communication structure -- if the mapping of fields in the transfer rules remains unchanged, the load will still be successful.
    If the sequence in the transfer structure is changed, you'll get either an error or a wrong load (values meant for some InfoObjects will go to other ones; see the toy illustration below).
    Best regards,
    Eugene
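    As a toy illustration of why the sequence matters (plain Python, invented field names): the loader assigns file columns to transfer structure fields by position, not by name.

    # Positional mapping: columns land in fields by sequence.
    transfer_structure = ["MATERIAL", "PLANT", "QUANTITY"]
    row = "M-01;P100;42".split(";")
    print(dict(zip(transfer_structure, row)))
    # {'MATERIAL': 'M-01', 'PLANT': 'P100', 'QUANTITY': '42'}
    # If the file delivered PLANT before MATERIAL, the values would
    # swap silently -- exactly the "wrong load" described above.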

  • Regarding master data ,transactional data loading from flat file

    Hi friends,
    Please tell me how to load master data and transactional data from a flat file.
    Thanks in advance ,
    Regards,
    ramnaresh.

    Hi,
    Please use the 'search forum' functionality and search the BI forum with, say, 'flat file loading'. You will get plenty of links to previous threads.
    BR/
    Mathew.

  • Steps to create Data loading from Flat File to Info Cube in BI

    Hi,
    I am very new to BW; I need someone's help. When I try to create an InfoSource, I get a pop-up window asking whether to create transaction data or master data.
    After creating the InfoSource, I don't know how to assign this InfoSource to the source system (which I created).
    When I select the context menu of the InfoSource, I don't have an option to assign the DataSource.
    And one more thing: when I am creating the InfoCube, I can't understand how to create it.
    Please, someone help me with how to map the fields to the source system.
    Regds
    Dave.

    Hi,
    For flat file upload, first you need to create the source system.
    Then you need to create the InfoSource based on the format of the flat file you are going to upload, or vice versa, depending on your requirements.
    Once your InfoSource is ready, right-click on it and select Assign DataSource. Here you can assign your flat file DataSource. Then create an InfoPackage and give the path from where the file is to be uploaded to BW (in the InfoPackage).
    Also look at the thread below for procedure on flat file upload :
    http://help.sap.com/saphelp_nw04s/helpdata/en/8e/dbe92341c84242be2c7d3917f1c197/frameset.htm
    Cheers,
    Kedar

  • Data loading from CSV files

    Hello,
    I designed a quite simple characteristic having language-dependent texts, two numeric time-dependent attributes, and a compounding characteristic.
    I then built a set of InfoSource, DataSource and InfoPackage for the texts, and a similar one for the attributes. The DataSources read from CSV flat files stored on the local machine [folder c:\tmp].
    The file with texts stores the texts for two languages, each row containing the language key field.
    When I run the infopackage in preview mode, everything looks ok.
    When I run it in schedule mode, only the english texts are stored into the characteristic, the other language texts are left blank.
    The line structure of the CSV file that contains the attributes is as follows (a small parsing sketch appears at the end of this thread):
    1;1;99991231;10000101;0.99;0.00
    The fields are:
    - compound
    - key_value
    - ValidTo
    - ValidFrom
    - Attribute1
    - Attribute2
    When I run the infoPackage in preview mode, it also looks ok.
    When schedule it, the ValidTo and ValidFrom are left blank.
    In both situations the transfer rules are based on direct fields-infoObject mapping.
    Also no error ot warning message is displayed.
    Is there anyone to let me know what I miss?
    Any other suggestion is very welcome.
    Regards!

    As you mentioned in your post, it should be Excel saved as CSV, so why the semicolons? Just check it out and analyze:
    1;1;99991231;10000101;0.99;0.00
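    For what it's worth, a small sketch parsing the attribute line above into the six fields named in the post (Python, field names as described there):

    # Split the semicolon-separated attribute record into named fields.
    line = "1;1;99991231;10000101;0.99;0.00"
    names = ["compound", "key_value", "ValidTo", "ValidFrom",
             "Attribute1", "Attribute2"]
    record = dict(zip(names, line.split(";")))
    print(record["ValidTo"], record["ValidFrom"])  # 99991231 10000101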

  • How do I grab a value from a file name and load it in a field/column?

    Hi,
    I am loading this .txt file (OUS_RAW_NYC_05_2011.txt) into an internal table i_raw.
    I want to pick out the NYC characters from the file name and fill them in as the value of <wa_raw>-field1 for all records.
    How do I do this?
    Please advise.
    Thanks!

    Hi Durgesh,
    I am doing this in a program via SE38, not via a transformation routine.
    Now I am working on this piece of code to get the value:
    file_str = //rdmsbw/dev/data/output/all/OUS_RAW_HCM_05_2011.txt
    I only want the characters HCM from file_str.
    When I execute the code below:
        MOVE file_str TO org_unit.
        WRITE / org_unit+37(12).
        <wa_raw>-/BIC/ZOUORGUT = org_unit.
    my output is //rdm.
    How do I extract HCM?
    Please advise.
    PS: please also help me out with another post of mine:
    http://forums.sdn.sap.com/thread.jspa?threadID=2141618&tstart=0
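    For illustration only, the kind of extraction being asked for might look like this in Python (the path and the fixed OUS_RAW_<ORG>_MM_YYYY.txt naming pattern are taken from the post; everything else is an assumption):

    # Extract the org-unit token (e.g. "HCM") from the file path,
    # assuming the naming pattern OUS_RAW_<ORG>_MM_YYYY.txt.
    file_str = "//rdmsbw/dev/data/output/all/OUS_RAW_HCM_05_2011.txt"
    file_name = file_str.rsplit("/", 1)[-1]  # "OUS_RAW_HCM_05_2011.txt"
    org_unit = file_name.split("_")[2]       # third underscore-separated token
    print(org_unit)                          # HCM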

  • How to fetch the Date column (or Month column) from the file name in the specified path in ODI 11g

    Hi ALL,
    Can anyone help us with how to fetch the date column (or month column) from the file name specified in the path, in a generalized way?
    For example:
    the file name subscribers (Cost) Sep13.csv is specified in the path below:
      E:\Accounting\documents\subscribers (Cost) Sep13.csv
    Here I need to fetch "Sep13" as a date column in ODI 11g, in a generalized way.
    Can anyone help us with this case as early as possible?

    I would suggest using a piece of Jython code for this. Something like this...
    import os
    filelist = os.listdir(r'E:\Accounting\documents')
    for file in filelist:
        # e.g. 'subscribers (Cost) Sep13.csv' -> 'Sep13': skip the fixed
        # 19-character prefix and drop the '.csv' suffix.
        datestr = file[19:-4]
    You'd need to work out what to do with datestr next... perhaps write it to a table or update an ODI variable with it.
    Hope this is of some help.

  • BPC:: Master data load from BI Process chain

    Hi,
    we are trying to automate the master data load from BI.
    Now we are using a package with:
    PROMPT(INFILES,,"Import file:",)
    PROMPT(TRANSFORMATION,%TRANSFORMATION%,"Transformation file:",,,Import.xls)
    PROMPT(DIMENSIONNAME,%DIMNAME%,"Dimension name:",,,%DIMS%)
    PROMPT(RADIOBUTTON,%WRITEMODE%,"Write Mode",2,{"Overwrite","Update"},{"1","2"})
    INFO(%TEMPNO1%,%INCREASENO%)
    INFO(%TEMPNO2%,%INCREASENO%)
    TASK(/CPMB/MASTER_CONVERT,OUTPUTNO,%TEMPNO1%)
    TASK(/CPMB/MASTER_CONVERT,FORMULA_FILE_NO,%TEMPNO2%)
    TASK(/CPMB/MASTER_CONVERT,TRANSFORMATIONFILEPATH,%TRANSFORMATION%)
    TASK(/CPMB/MASTER_CONVERT,SUSER,%USER%)
    TASK(/CPMB/MASTER_CONVERT,SAPPSET,%APPSET%)
    TASK(/CPMB/MASTER_CONVERT,SAPP,%APP%)
    TASK(/CPMB/MASTER_CONVERT,FILE,%FILE%)
    TASK(/CPMB/MASTER_CONVERT,DIMNAME,%DIMNAME%)
    TASK(/CPMB/MASTER_LOAD,INPUTNO,%TEMPNO1%)
    TASK(/CPMB/MASTER_LOAD,FORMULA_FILE_NO,%TEMPNO2%)
    TASK(/CPMB/MASTER_LOAD,DIMNAME,%DIMNAME%)
    TASK(/CPMB/MASTER_LOAD,WRITEMODE,%WRITEMODE%)
    But we need to include these tasks in a BI process chain.
    How can we add the INFO statements to a process chain?
    And how can we declare the variables?
    Regards,
    EZ.

    Hi,
    I have followed your recommendation, but when I try to use the process /CPMB/MASTER_CONVERT with the parameter TRANSFORMATIONFILEPATH and the path of the transformation file as its value, I have a new problem: the value can only hold 60 characters, and my path is longer:
    \ROOT\WEBFOLDERS\APPXX\PLANNING\DATAMANAGER\TRANSFORMATIONFILES\trans.xls
    How can we pass this path?
    Regards,
    EZ.
