Not loading from flat file using SQL*Loader

Hi,
I am trying to load from an Excel file.
First I converted the Excel file into a CSV file and saved it as a .dat file.
In the Excel file one column is salary, and the data looks like $100,000.
While converting from .xls to .csv the salary is changed to "$100,000 " (with quotes and a space after the amount); a space is appended after the last digit.
In the SQL*Loader control file I had given:
salary "to_number('L999,999')"
My problem is the space after the salary in the .dat file ---> "$100,000 "
What changes do I have to make to the to_number function in the control file?
Please guide me.
Thanks & Regards
Salih KM

Thanks a lot, Jens Petersen.
It's loading now ..........
MI means minute, am I correct?
I didn't get the logic behind that, though. Can you please explain it?
Thanks & Regards
Salih KM
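
Note for later readers: in an Oracle number format mask, MI is not "minute" but the trailing minus sign element. For a positive number the sign position holds a trailing blank, so a mask ending in MI accepts exactly the trailing space that the CSV conversion appended. A minimal control-file sketch along those lines, with the table name and the other field assumed for illustration:

LOAD DATA
INFILE 'salaries.dat'
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
  ename,
  -- the field arrives as '$100,000 '; the trailing blank fills the MI slot
  salary "to_number(:salary, 'L999,999MI')"
)

Here :salary refers back to the raw field value, and the L element consumes the leading currency symbol, assuming NLS_CURRENCY is '$'.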

Similar Messages

  • How to Load Arabic Data from flat file using SQL Loader ?

    Hi All,
    We need to load Arabic data from an .xls file into an Oracle database. Could you please point us to a good note or the steps to achieve this?
    Below are the database parameters used:
    NLS_CHARACTERSET AR8ISO8859P6
    nls_language american
    DB version:-10g release 2
    OS: rhel 5
    Thanks in advance,
    Satish

    Try to save your XLS file into CSV format and set either NLS_LANG to the right value or use SQL*Loader control file parameter CHARACTERSET.
    See http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_control_file.htm#i1005287
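
    A minimal control-file sketch of the CHARACTERSET route (the file, table and column names here are placeholders, not from the post):

    LOAD DATA
    CHARACTERSET AR8ISO8859P6
    INFILE 'arabic_data.csv'
    INTO TABLE target_table
    FIELDS TERMINATED BY ','
    (vendor_name, city)

    Alternatively, setting NLS_LANG in the client environment (for example AMERICAN_AMERICA.AR8ISO8859P6) before running sqlldr gives the same character-set handling.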

  • Copying TEXT column from flat file into SQL results in empty fields....

    I'm copying a TEXT column from SQL to a flat file (ragged right, fixed width) as DT_TEXT. It copies fine and I've checked the output file. Now, while trying to copy the flat file back to SQL Server on another system, I find all the fields are empty.
    When I preview the source from the flat file, I see all the entries there. The other fields are copied, but the DT_TEXT field is empty.
    (Screenshots of the flat-file preview and of the SQL table output are not reproduced here.)
    Any help will be helpful!!!

    Hi, I'm not sure if I'm understanding what you're saying. The data got copied from SQL to the flat file. I've double-checked the flat file and I see the DT_TEXT data there. The size of the file also indicates that the TEXT data exists.
    But when I copy that data back to SQL again (which I can also preview before the load), the DT_TEXT values go missing. The same happens when I copy to Excel as CSV or as a flat file: I don't see the text data.
    The TEXT data resides in the first output, but when I try to extract it from that output to another format, it doesn't come through.

  • Reading data from flat file Using TEXT_IO

    Dear Gurus,
    I already posted this question, but this time I need some other changes. Sorry for that.
    I am using Forms 10g and TEXT_IO for reading data from a flat file.
    My data is like this:
    0|BP-V1|20100928|01|1|2430962.89|27|2430962.89|MUR|20100928120106
    9|2430962.89|000111111111|
    1|61304.88|000014104113|
    1|41961.73|000022096086|
    1|38475.65|000023640081|
    1|49749.34|000032133154|
    1|35572.46|000033093377|
    1|246671.01|000042148111|
    Here each column is separated by |. I want to read all the columns and do some validation on them.
    How can I do that?
    Initially my requirement was to read only 2 or 3 columns, so I did it like this:
    Procedure Pay_Simulator (lfile_type varchar2, lac_no varchar2, lcur varchar2,
                             lno_item number, ltotal number, ldate date,
                             lpay_purp varchar2, lfile_name varchar2)
    IS
      v_handle        utl_file.file_type;
      v_filebuffer    varchar2(500);
      line_0_date     VARCHAR2(10);
      line_0_Purp     VARCHAR2(10);
      line_0_count    Number;
      line_0_sum      Number(12,2);
      line_0_ccy      Varchar2(3);
      line_9_sum      Number(12,2);
      line_9_Acc_no   Varchar2(12);
      Line_1_Sum      Number(12,2);
      Line_1_tot      Number(15,2) := 0;
      Line_1_flag     Number := 0;
      lval            number;
      lacno           varchar2(16);
      v_file          varchar2(20);
      v_path          varchar2(50);
    Begin
      v_file := mcb_simulator_pkg.GET_FILENAME(lfile_name);        -- the file name
      v_path := rtrim(regexp_substr(lfile_name, '.*\\'), '\');     -- the path
      v_path := SUBSTR(lfile_name, 0, INSTR(lfile_name, '\', -1));
      v_handle := UTL_FILE.fopen(v_path, v_file, 'r');
      LOOP
        UTL_FILE.get_line(v_handle, v_filebuffer);
        IF SUBSTR(v_filebuffer, 0, 1) = '0' THEN
          -- pick the |-separated tokens of the line by position
          SELECT line_0 INTO line_0_date
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 3;
          SELECT line_0 INTO line_0_Purp
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 4;
          SELECT line_0 INTO line_0_count
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 7;
          SELECT line_0 INTO line_0_sum
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 8;
          SELECT line_0 INTO line_0_ccy
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 9;
        ELSIF SUBSTR(v_filebuffer, 0, 1) = '9' THEN
          SELECT line_9 INTO line_9_Acc_no
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_9, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 3;
          SELECT line_9 INTO line_9_sum
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_9, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 2;
        ELSIF SUBSTR(v_filebuffer, 0, 1) = '1' THEN
          line_1_flag := line_1_flag + 1;
          SELECT line_1 INTO line_1_sum
            FROM (SELECT LTRIM(REGEXP_SUBSTR(v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_1, ROWNUM rn
                    FROM DUAL
                  CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(v_filebuffer, '[^|]*')) + 1)
           WHERE rn = 3;
          Line_1_tot := Line_1_tot + line_1_sum;
        END IF;
      END LOOP;
      DBMS_OUTPUT.put_line(Line_1_tot);
      DBMS_OUTPUT.put_line(Line_1_flag);
      UTL_FILE.fclose(v_handle);
    END;
    But now how should I do it? Should I use a SELECT statement like this for every column?

    Sorry for that.
    As per our requirement:
    I need to read the flat file, and it looks like this:
    0|BP-V1|20100928|01|1|2430962.89|9|2430962.89|MUR|20100928120106
    9|2430962.89|000111111111|
    1|61304.88|000014104113|
    1|41961.73|000022096086|
    1|38475.65|000023640081|
    1|49749.34|000032133154|
    1|35572.46|000033093377|
    1|246671.01|000042148111|
    1|120737.25|000053101979|
    1|151898.79|000082139768|
    1|84182.34|000082485593|
    I have to check the file. The validations are:
    The 1st line should start with 0; otherwise it should raise an error and insert that error into a table.
    The same for the 2nd line: it should start with 9, else raise an error and insert it into the table.
    The 3rd line onward should start with 1, else raise an error and insert it into the table.
    After that I have to validate the 1st line's 2nd column: it should be BP-V1, else raise an error and insert it into the table. Then I check the 3rd column, 20100928; it should be in YYYYMMDD format, else the same ERROR handling.
    Like this I have a different validation for every column.
    Then it checks the 2nd line's 3rd column, which is an account number. First I check that it is 12 characters, else ERROR. Then I compare it with what the user entered in the form: for example, if the user entered 111111111, I compare it with the 000111111111 in the 2nd line, i.e. I have to prefix 000 to the account number the user entered.
    Then, for the lines starting with 1, I have to take the 2nd column of all those lines and sum them. I then compare that sum with the value in the 6th column of the 1st line (the line starting with 0). They should match, else ERROR.
    In the same way I have to count all the lines starting with 1 and compare the count with the 7th column of the 1st line. They should match; in this file it should be 9.
    MY CODE IS:
    Procedure Pay_Simulator (lfile_type varchar2, lac_no varchar2, lcur varchar2,
                             lno_item number, ltotal number, ldate date,
                             lpay_purp varchar2, lfile_name varchar2)
    IS
      v_handle        TEXT_IO.file_type;
      v_filebuffer    varchar2(500);
      line_0_date     VARCHAR2(10);
      line_0_Purp     VARCHAR2(10);
      line_0_count    Number;
      line_0_sum      Number(12,2);
      line_0_ccy      Varchar2(3);
      line_9_sum      Number(12,2);
      line_9_Acc_no   Varchar2(12);
      Line_1_Sum      Number(12,2);
      Line_1_tot      Number(15,2) := 0;
      Line_1_flag     Number := 0;
      lval            number;
      lacno           varchar2(16);
      v_file          varchar2(20);
      v_path          varchar2(50);
      LC$String       VARCHAR2(50);   --:= 'one|two|three|four|five|six|seven' ;
      LC$Token        VARCHAR2(100);
      i               PLS_INTEGER := 2;
      lfirst_char     number;
      lvalue          Varchar2(100);
    Begin
      v_file := mcb_simulator_pkg.GET_FILENAME(lfile_name);        -- the file name
      v_path := rtrim(regexp_substr(lfile_name, '.*\\'), '\');     -- the path
      --v_path := SUBSTR(lfile_name, 0, INSTR(lfile_name, '\', -1));
      Message(lfile_name);
      v_handle := TEXT_IO.fopen(lfile_name, 'r');
      BEGIN
        LOOP
          TEXT_IO.get_line(v_handle, v_filebuffer);
          lfirst_char := Substr(v_filebuffer, 0, 1);
          --Message('First Char '||lfirst_char);
          IF lfirst_char = '0' Then
            Loop
              LC$Token := mcb_simulator_pkg.Split(v_filebuffer, i, '|');
              Message('VAL - '||LC$Token);
              lvalue := LC$Token;
              EXIT WHEN LC$Token IS NULL;
              i := i + 1;
            End Loop;
          Else
            Insert into MU_SIMULATOR_output_ERR (load_no, ERR_CODE, ERR_DESC)
            values (9999, '0002', 'First line should always start with 0');
            Forms_DDL('Commit');
            raise form_Trigger_failure;
          End if;
          TEXT_IO.get_line(v_handle, v_filebuffer);
          lfirst_char := Substr(v_filebuffer, 0, 1);
          LC$Token := mcb_simulator_pkg.Split(v_filebuffer, i, '|');
          --Message('Row '||LC$Token);
          IF lfirst_char = '9' Then
            Null;
          Else
            Insert into MU_SIMULATOR_output_ERR (load_no, ERR_CODE, ERR_DESC)
            values (8888, '0016', 'Second line should start with 9');
            Forms_DDL('Commit');
            raise form_Trigger_failure;
          End IF;
          LOOP
            TEXT_IO.get_line(v_handle, v_filebuffer);
            lfirst_char := Substr(v_filebuffer, 0, 1);
            LC$Token := mcb_simulator_pkg.Split(v_filebuffer, i, '|');
            --Message('Row '||LC$Token);
            IF lfirst_char = '1' Then
              Null;
            Else
              Insert into MU_SIMULATOR_output_ERR (load_no, ERR_CODE, ERR_DESC)
              values (7777, '0022', 'The third line onward should start with 1');
              Forms_DDL('Commit');
              raise form_Trigger_failure;
            End if;
          END LOOP;
          --END IF;
        END LOOP;
      EXCEPTION
        When No_Data_Found Then
          TEXT_IO.fclose(v_handle);
      END;
    Exception
      When Others Then
        Message('Other error');
    END;
    I am calling the SPLIT function you gave as mcb_simulator_pkg.Split.
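
    The mcb_simulator_pkg.Split function referenced above is not shown in this thread. A minimal sketch of what such a helper might look like (a hypothetical stand-in, not the original from the earlier thread):

    -- Hypothetical stand-in for mcb_simulator_pkg.Split: returns the p_n-th
    -- pipe-delimited token of p_str, or NULL once p_n runs past the last token.
    -- Note: '[^|]+' skips empty fields, so adjacent pipes collapse; that
    -- matches how the loops above use EXIT WHEN LC$Token IS NULL.
    FUNCTION Split (p_str IN VARCHAR2,
                    p_n   IN PLS_INTEGER,
                    p_sep IN VARCHAR2 DEFAULT '|')
      RETURN VARCHAR2
    IS
    BEGIN
      RETURN REGEXP_SUBSTR(p_str, '[^' || p_sep || ']+', 1, p_n);
    END Split;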

  • Loading from text file using Sql Loader

    I need to load data from a text file into an Oracle table. The file has long strings of text in the following format:
    12342||||||Lots and lots of text with all kinds of characters including
    ^&*!#%#^@ etc.xxxxxxxxxxxxxxxxxxxxxxx
    yyyyyyyyyyyyyyyyyyyyyyyyytrrrrrrrrrrrrrrrrrrr
    uuuuuuuuuuuuuuuuuuurtgggggggggggggggg.||||||||
    45356|||||||||||again lots and lots of text.uuuuuudccccccccccccccccccccd
    gyhjjjjjjjjjjjjjjjjjjjjjjjjkkkkkkkkkkkkklllllllllllnmmmmmmmmmmmmnaaa|||||||.
    There are pipes within the text as well. On the above example, the line starting with 12342 is an entire record that needs to be loaded into a CLOB column. The next record would be the 45356 one. Therefore, all records have a bunch of pipes at the end, so the only way to know where a new record starts is to see where the next number is after all the ending pipes. The only other thing I know is that there are a fixed number of pipes in each record.
    Does anyone have any ideas on how I can load the data into the table either using sql loader or any other utility? Any input would be greatly appreciated. Thanks.

    STFF (search the forums first): see the earlier thread "Sqlldr processing of records with embedded newline and delimiter" at http://forums.oracle.com/forums/thread.jspa?messageID=1773678#1764219
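
    As a sketch of the idea discussed in that thread, untested, and assuming every logical record really ends in the same fixed run of pipes: SQL*Loader's infile "str" clause can treat that pipe run plus the newline as the logical record terminator, so embedded newlines survive inside one record. The table, field names and exact pipe count below are placeholders:

    LOAD DATA
    INFILE 'data.txt' "str '||||||||\n'"
    INTO TABLE t
    (
      id        CHAR(10) TERMINATED BY '|',
      long_text CHAR(1000000)   -- remainder of the logical record, mapped to the CLOB column
    )

    How the remaining leading and embedded pipes are split into fields depends on the file's actual fixed layout.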

  • Extra column to be added while importing data from flat file to SQL Server

    I have 3 flat files, each containing a single column.
    The file names are bars, mounds & mini-bars.
    Table 'prd_type' has columns 'typeid' & 'typename', where typeid is auto-incremented and typename is bars, mounds or mini-bars.
    I import data from the 3 files into the prd_details table. This table has columns 'pid', 'typeid' & 'pname', where pid is auto-incremented and pname is mapped to the flat files; now I want the typeid value to come from the prd_type table.
    Can someone please advise me on this?

    You can get it as follows.
    Assuming you have three separate data flow tasks for the three files, you can do this:
    1. Add a new column to the pipeline using a derived column transformation and hardcode it to bars, mounds or mini-bars depending on the source.
    2. Add a lookup task based on the prd_type table, using the query:
    SELECT typeid, typename
    FROM prd_type
    Use the full cache option.
    Base the lookup on the derived column (new column -> prd_type.typename relationship) and select typeid as the output column.
    3. In the final OLE DB destination task, map the typeid column to the table's typeid column.
    In case you use a single data flow task, you need to include logic based on the filename or similar to get the hardcoded type value (bars, mounds or mini-bars) into the data pipeline.
    Please mark this as answer if it helps to solve the issue.
    Visakh
    http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Bulk Insert to oracle DB from flat file using C#

    I am using ODP.NET to connect to an Oracle database from a C# console application. I have a fixed-width file whose values have to be inserted into multiple tables in the Oracle database. The data should be bulk inserted, as there are lots of records, and no external tools should be used. Please let me know an efficient way to do this task with C#.

    Your question is how to load data from a text file into arrays?
    Start with a StreamReader to read the file one line at a time. Then break each line into individual values and add each value to a List<T>. Finally, call .ToArray() on each list to set the value of each array-bound parameter.
    David
    http://blogs.msdn.com/b/dbrowne/

  • How to avoid space while reading from flat file

    Dear all,
    I am using Forms 10g.
    I am reading data from a flat file using Text_io.
    In my exception part I have written this:
    EXCEPTION
      WHEN no_data_found THEN
          CLIENT_TEXT_IO.Fclose(in_file);
    But if the last line of the flat file contains some other characters, then it picks those up as well.
    How can I avoid that?
    Note: "other characters" means characters you cannot normally see; they only show up when you press the SHOW ALL CHARACTERS button in Notepad++.

    My file is just called ABC; there is no extension like ABC.txt.
    And if the last line of the file contains a blank space, control reaches the exception part, but if a TAB is there, it does not.
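
    A minimal sketch of one way to handle this, assuming the goal is simply to ignore a final line holding only blanks and/or tabs (the file path and the processing step are placeholders):

    DECLARE
      in_file CLIENT_TEXT_IO.File_Type;
      v_line  VARCHAR2(4000);
    BEGIN
      in_file := CLIENT_TEXT_IO.Fopen('C:\data\ABC', 'r');  -- path assumed
      LOOP
        CLIENT_TEXT_IO.Get_Line(in_file, v_line);
        -- turn tabs into spaces, then trim: a NULL result means the line
        -- holds nothing but whitespace and can safely be skipped
        IF TRIM(TRANSLATE(v_line, CHR(9), ' ')) IS NOT NULL THEN
          NULL;  -- process the real data line here
        END IF;
      END LOOP;
    EXCEPTION
      WHEN no_data_found THEN
        CLIENT_TEXT_IO.Fclose(in_file);
    END;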

  • Data is not updated to Infocube (BI 7.0) when i load from flat file

    Hi Experts,
    When I try to load data from a flat file to an InfoCube (BI 7.0) with full or delta, I am getting the following errors:
    1) Error while updating to data target, message No: RSBK241
    2) Processed with Errors, Message No: RSBK 257.
    Everything up to the transformations is successful, but updating the data to the cube is not.
    What do the message numbers indicate? Can I find a solution anywhere based on the message number?
    Anybody please help me.
    Regards
    Anil

    Hi Bhaskar,
    There is no problem with cube activation. But when I try to activate the cube using the function module "RSDG_CUBE_ACTIVATE", it goes to a short dump. Anyhow, I want to know in what scenario we use this function module to activate the cube.
    There are no special characters in the file. I deleted the requests in the PSA and the cube and re-ran the InfoPackage & DTP, but I am getting the same error.
    The following errors are found in the updating menu of the details tab (display messages option):
    1) Error while updating to data target, message No: RSBK241
    2) Processed with Errors, Message No: RSBK 257.
    Thank you
    Regards
    Anil

  • Data loading from flat file to cube using bw3.5

    Hi Experts,
    Kindly give me the detailed steps, with screenshots, for loading data from a flat file to a cube using BW 3.5.
    Please.

    Hi,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
    1. Select the application components in which you want to create the DataSource and choose Create DataSource.
    2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
    3. Go to the General tab page.
       a. Enter descriptions for the DataSource (short, medium, long).
       b. As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
       c. Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed, it is not generated in a typed structure but with character-like fields of type CHAR only.
       Use this option if conversion during loading causes problems, for example because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
       In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
    4. Go to the Extraction tab page.
       a. Define the delta process for the DataSource.
       b. Specify whether you want the DataSource to support direct access to data.
       c. Real-time data acquisition is not supported for data transfer from files.
       d. Select the adapter for the data transfer. You can load text files or binary files from your local workstation or from the application server.
       Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which marks the separator as part of a value where required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length; the defined field length in the file must be the same as the assigned field in BI.
       Binary files contain data in the form of bytes. A file of this type can contain any byte value, including bytes that cannot be displayed or read as text. In this case, the field values in the file have to match the internal format of the assigned field in BI.
       Choose Properties if you want to display the general adapter properties.
       e. Select the path to the file that you want to load, or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
       You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
       f. Depending on the adapter and the file to be loaded, make further settings.
       For binary files:
       Specify the character record settings for the data that you want to transfer.
       For text-type files:
       Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred. Specify the character record settings for the data that you want to transfer.
       For ASCII files: if you are loading data from an ASCII file, the data is requested with a fixed data record length.
       For CSV files: if you are loading data from an Excel CSV file, specify the data separator and the escape character.
       Specify the separator that your file uses to divide the fields in the Data Separator field.
       If the data separator character is part of a value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
       Example: you chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
       If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45".
       In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
       Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
       If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two-character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
       g. Make the settings for the number format (thousands separator and character used to represent a decimal point), as required.
       h. Make the settings for currency conversion, as required.
       i. Make any further settings that are dependent on your selection, as required.
    5. Go to the Proposal tab page.
       This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
       Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
       a. Specify the number of data records that you want to load and choose Upload Sample Data.
       The data is displayed in the upper area of the tab page in the format of your file.
       The system displays the proposal for the field list in the lower area of the tab page.
       b. In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
    6. Go to the Fields tab page.
       Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
       a. To define a field, choose Insert Row and specify a field name.
       b. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
       c. Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
       Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
       d. Change the data type of the field, if required.
       e. Specify the key fields of the DataSource.
       These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
       f. Specify whether lowercase is supported.
       g. Specify whether the source provides the data in the internal or external format.
       h. If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
       i. If required, specify a conversion routine that converts data from an external format into an internal format.
       j. Select the fields for which you want to be able to set selection criteria when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
       k. Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
       l. Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
    7. Check, save and activate the DataSource.
    8. Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For more info: http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm

  • Loading from flat file to dso using process chains

    Hi,
    I am using BI 7.0.
    I am new to process chains.
    Can anyone explain how to load data from a flat file to a DSO using process chains (I have already created all the objects)? Preferably explained with an example.

    You can find a lot of info if you search SDN.
    Metachain
    Steps for a Metachain:
    1. Start (in this variant set your schedule times for this metachain)
    2. Local Process Chain 1 (say it's a master data process chain; get into the start variant of this chain (a sub chain, like any other chain) and check the second radio button "Start using metachain or API")
    3. Local Process Chain 2 (say it's a transaction data process chain; do the same as in step 2)
    Steps for Process Chains in BI 7.0, for a Cube:
    1. Start
    2. Execute Infopackage
    3. Delete Indexes for Cube
    4. Execute DTP
    5. Create Indexes for Cube
    For a DSO:
    1. Start
    2. Execute Infopackage
    3. Execute DTP
    4. Activate DSO
    For an InfoObject:
    1. Start
    2. Execute Infopackage
    3. Execute DTP
    4. Attribute Change Run
    Data to Cube through a DSO:
    1. Start
    2. Execute Infopackage (loads up to the PSA)
    3. Execute DTP (to load the DSO from the PSA)
    4. Activate DSO
    5. Delete Indexes for Cube
    6. Execute DTP (to load the Cube from the DSO)
    7. Create Indexes for Cube
    3.X
    Master data loading (Attributes, Texts, Hierarchies)
    Steps:
    1. Start
    2. Execute Infopackage (say you are loading 2 InfoObjects: just run them all in parallel)
    3. You might want to load in sequence: Attributes - Texts - Hierarchies
    4. And process (connecting all the Infopackages)
    5. Attribute Change Run (add all relevant InfoObjects).
    Flow:
    Start
    -> Infopackage 1A (Attr) | Infopackage 2A (Attr)
    -> Infopackage 1B (Texts) | Infopackage 2B (Texts)
    -> Infopackage 1C (Texts)
    -> And Processor (connect Infopackage 1C & Infopackage 2B)
    -> Attribute Change Run (add InfoObject 1 & InfoObject 2 to this variant)
    For a Cube:
    1. Start
    2. Delete Indexes for Cube
    3. Execute Infopackage
    4. Create Indexes for Cube
    For a DSO:
    1. Start
    2. Execute Infopackage
    3. Activate DSO
    For an InfoObject:
    1. Start
    2. Execute Infopackage
    3. Attribute Change Run
    Data to Cube through a DSO:
    1. Start
    2. Execute Infopackage
    3. Activate DSO
    4. Delete Indexes for Cube
    5. Execute Infopackage
    6. Create Indexes for Cube
    Some Links
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c

  • Error while loading table from flat file (.csv)

    I have a flat file which I am loading into a target table in Oracle Warehouse Builder. It uses SQL*Loader internally to load the data from the flat file, and I am facing an issue. Please find the following error (an extract from the generated error log):
    SQL*Loader-500: Unable to open file (D:\MY CURRENT PROJECTS\GEIP-IHSS-Santa Clara\CDI-OWB\Source_Systems\Acquisition.csv)
    SQL*Loader-552: insufficient privilege to open file
    SQL*Loader-509: System error: The data is invalid.
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    I believe this is a SQL*Loader error.
    Actually the flat file resides on my system (D:\MY CURRENT PROJECTS\GEIP-IHSS-Santa Clara\CDI-OWB\Source_Systems\Acquisition.csv), and I am connecting to an Oracle server.
    Please suggest:
    Is it required that I place the flat file on the Oracle server system?
    Regards,
    Ashoka BL

    Hi
    I am getting an error as well which is similar to that described above except that I get
    SQL*Loader-500: Unable to open file (/u21/oracle/owb_staging/WHITEST/source_depot/Durham_Inventory_Labels.csv)
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: The system cannot find the file specified.
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    The difference is that Ashoka was getting
    SQL*Loader-552: insufficient privilege to open file
    and I get
    SQL*Loader-553: file not found
    The initial thought is that the file does not exist in the directory specified, or that I have spelt the filename incorrectly, but this has been checked and double-checked. The unix directory also has read and write permission.
    Also in the error message is
    Control File: C:\u21\oracle\owb_staging\WHITEST\source_depot\INV_LOAD_LABEL_INVENTORY.ctl
    Character Set WE8MSWIN1252 specified for all input.
    Data File: /u21/oracle/owb_staging/WHITEST/source_depot/Durham_Inventory_Labels.csv
    Bad File: C:\u21\oracle\owb_staging\WHITEST\source_depot\Durham_Inventory_Labels.bad
    As can be seen from the above, it seems to be trying to create the .ctl and .bad files on my C drive instead of on the server in the same directory as the .csv file. The location is registered to the server directory /u21/oracle/owb_staging/WHITEST/source_depot.
    I am at a loss, as this works fine in development and I have just promoted all the development work to a systest environment using OMBPlus.
    The directory structure in development is the same as in systest, except that the data file is /u21/oracle/owb_staging/WHITED/source_depot/Durham_Inventory_Labels.csv, and there everything works fine: the .ctl and .bad files are created in the same directory and the data successfully loads into an Oracle table.
    Have I missed a setting in OWB during the promotion to systest, or is there something wrong in the way the repository in the systest database is set up?
    The systest and development databases are on the same box.
    Any help would be much appreciated
    Thanks
    Edwin
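
    For what it's worth, outside OWB's generated setup a hand-written control file can pin the bad-file location explicitly so it does not default to a client-side path. A sketch reusing the paths from the post above (the table and columns are placeholders):

    LOAD DATA
    INFILE  '/u21/oracle/owb_staging/WHITEST/source_depot/Durham_Inventory_Labels.csv'
    BADFILE '/u21/oracle/owb_staging/WHITEST/source_depot/Durham_Inventory_Labels.bad'
    APPEND
    INTO TABLE inventory_labels        -- placeholder target table
    FIELDS TERMINATED BY ','
    (label_id, label_text)             -- placeholder columns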

  • Vendor master data load from flat file

    Hi Experts,
    I am trying to load data from a flat file and am confused about which InfoObject I should use. My requirement is for InfoCube 0BBP_C01.
    As far as I found, only two InfoObjects are available for this InfoCube: 0BBP_VENDOR and 0VENDOR. But my flat file has the following fields: Vendor Code, Vendor Name, Ultimate Parent, City, Country, Minority Status, Vendor Payment Terms; these are not all present together in either InfoObject.
    Please suggest which InfoObject I must use, or should I create a Z InfoObject for my requirement?
    Thanks in advance.
    Regards,
    Niranjan Chechani

    Hi Kiran,
    I am loading data through a flat file. I don't get your point about getting the attributes present in the R/3 system, since I am not using R/3 in this case.
    Hi rvc,
    I have created two characteristic Z InfoObjects, i.e. ZU_PARENT for Ultimate Parent and ZVEN_PAY for Vendor Payment Terms.
    When I try to add these two as attributes and mark them as navigational attributes, I get the error "Characteristic 0VENDOR: The attributes SID table(s) could not be filled" (Message no. R7586).
    Please suggest: am I wrong somewhere?
    Thanks and Regards,
    Niranjan Chechani

  • Loading Flat file in SQL Navigator

    Hi, I am using SQL Navigator 5.5 Standard Edition. Please tell me how to load a flat file using this tool.

    So?
    Toad : Not an Oracle product, but a Quest product
    Sql Navigator: Not an Oracle product, but a Quest product
    You should ask your questions in a vendor-related forum.
    This is an Oracle forum. This is not a Quest forum.
    Sybrand Bakker
    Senior Oracle DBA

  • Problem loading XML-file using SQL*Loader

    Hello,
    I'm using 9.2 and trying to load an XML file using SQL*Loader.
    Loader control file:
    LOAD DATA
    INFILE *
    INTO TABLE BATCH_TABLE TRUNCATE
    FIELDS TERMINATED BY ','
    (
      FILENAME char(255),
      XML_DATA LOBFILE (FILENAME) TERMINATED BY EOF
    )
    BEGINDATA
    data.xml
    The BATCH_TABLE is created as:
    CREATE TABLE BATCH_TABLE (
    FILENAME VARCHAR2 (50),
    XML_DATA SYS.XMLTYPE ) ;
    And the data.xml contains the following lines:
    <?xml version="2.0" encoding="UTF-8"?>
    <!DOCTYPE databatch SYSTEM "databatch.dtd">
    <batch>
    <record>
    <data>
    <type>10</type>
    </data>
    </record>
    <record>
    <data>
    <type>20</type>
    </data>
    </record>
    </batch>
    However, sqlldr gives me an error:
    Record 1: Rejected - Error on table BATCH_TABLE, column XML_DATA.
    ORA-21700: object does not exist or is marked for delete
    ORA-06512: at "SYS.XMLTYPE", line 0
    ORA-06512: at line 1
    If I remove the first two lines
    "<?xml version="2.0" encoding="UTF-8"?>"
    and
    "<!DOCTYPE databatch SYSTEM "databatch.dtd">"
    from data.xml, everything works and the contents of data.xml are loaded into the table.
    Any idea what I'm missing here? Likely the problem is with special characters.
    Thanks in advance,

    I'm able to load your file just by removing the second line, <!DOCTYPE databatch SYSTEM "databatch.dtd">. I don't have your DTD file, so I skipped that line. Can you check whether the problem is with your DTD?
