Need to load a 2.4 GB CSV file

I would like to load a portion of a 2.4 GB CSV file into a table in a 10g database. The file is too large to open in Notepad or Excel. How can I load a portion of this file into a table?
Can I load the data into a generic table (without defining columns) and let the file dictate the columns? Is that possible? I'm not sure of the layout of the file because it is too large to inspect.
Thanks
Steve

Unfortunately DOS doesn't have the stream tools available in shell programming, but you can install Cygwin and use line or stream editors to perform this task. You could use the head or tail command, or sed or grep, to gather a few lines and redirect the output to a sample file; for example, head -100 bigfile.csv > sample.csv extracts the first 100 lines so you can inspect the layout.
You cannot have a partially defined or undefined table in Oracle, but once you have a SQL*Loader control file describing the load, you can create an external table based on the file and query just a subset of rows. The way to create the external table DDL from the SQL*Loader control file is the external_table=generate_only option of the sqlldr executable.
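For example, something along these lines (untested; file and table names are illustrative, and the generated DDL lands in the log file):
-- Generate external-table DDL from an existing SQL*Loader control file:
--   sqlldr userid=scott/tiger control=load_sample.ctl external_table=generate_only log=gen_ddl.log
-- The log then contains a CREATE TABLE ... ORGANIZATION EXTERNAL statement.
-- After running that DDL, you can pull just a slice of the 2.4 GB file:
SELECT * FROM load_sample_ext WHERE ROWNUM <= 100;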

Similar Messages

  • Loading data from .csv file into Oracle Table

    Hi,
    I have a requirement where I need to populate data from a .csv file into an Oracle table.
    Is there a mechanism I can follow to do this?
    Any help would be appreciated.
    Thanks and regards

    You can use SQL*Loader or external tables for your requirement; a minimal control-file sketch is below.
    Missed Karthick's post ...already there :)
    Edited by: Rajneesh Kumar on Dec 4, 2008 10:54 AM
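    A minimal SQL*Loader control-file sketch (untested; table, column, and file names are illustrative):
    LOAD DATA
    INFILE 'employees.csv'
    APPEND
    INTO TABLE employees
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (emp_id, emp_name, hire_date DATE 'YYYY-MM-DD')
    Run it with: sqlldr scott/tiger control=employees.ctl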

  • Loading data from .csv file into existing table

    Hi,
    I have taken a look at several threads which talk about loading data from a .csv file into an existing/new table. I also checked out Vikas's application regarding the same. I am trying to explain my requirement with an example.
    I have a .csv file and I want the data to be loaded into an existing table. The timesheet table columns are -
    timesheet_entry_id, time_worked, timesheet_date, project_key.
    The csv columns are:
    project, utilization, project_key, timesheet_category, employee, timesheet_date, hours_worked, etc.
    What I need to know is: before the csv data is loaded into the timesheet table, is there any way of validating the project_key (which is the primary key of the projects table) against the projects table? I need to perform similar validations on other columns, like customer_id against the customers table. Basically, the loading should be done only after validating that the data exists in the parent table. Has anyone done this kind of loading through the APEX data load utility? Or is there another method of accomplishing the same?
    Does Vikas's application do what the utility does (I am assuming that, the code being from 2005, the utility was not incorporated in APEX at that time)? Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads which might outline different ways to do it -
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
    You can also create hidden items on the page to validate previous records before inserting data; a set-based validation sketch follows this reply.
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com
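    A minimal sketch of the set-based validation mentioned above, assuming the CSV rows land in a staging table first (all object names are illustrative):
    INSERT INTO timesheet (timesheet_entry_id, time_worked, timesheet_date, project_key)
    SELECT timesheet_seq.NEXTVAL, s.hours_worked, s.timesheet_date, s.project_key
    FROM   timesheet_staging s
    WHERE  EXISTS (SELECT 1 FROM projects p WHERE p.project_key = s.project_key);
    Rows that fail the parent-key check are simply skipped; a second query with NOT EXISTS can report them.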

  • How to create  a datasource for 0COSTCENTER to load data in csv file  in BI

    How do I create a datasource for 0COSTCENTER to load data from a CSV file in a BI 7.0 system?
    Can you email me screenshots of the steps for loading individual values of the hierarchy using a CSV file?
    Thank you very much.
    My email is <Removed>
    allen

    Step 1: Load the required master data for 0COSTCENTER into the BI system
    Step 2: Enable the characteristic to support hierarchies for 0COSTCENTER and specify the external characteristic (in the lowest or last node) while creating the characteristic InfoObject
    Step 3: On the last node of the hierarchy structure in the InfoObject, right-click and create the hierarchy MANUALLY by inserting the master data values, as BI doesn't support loading this hierarchy directly; you need to do it manually
    Step 4: Mapping
    Create a text node: that's the first node (root node)
    Insert characteristic nodes
    Insert the last node of the hierarchy
    Then you need to create an Open Hub Destination for extracting data into the .csv file:
    Step 1: Create the Open Hub Destination, give the master data table name, and enter all the required fields. Create the transformations for this Open Hub, connecting to the external file or Excel file source. Then give the location on your local disk, or the path on the server, in the first tab and request the data. It should work; let me know if you need anything else.
    Thanks,
    Sandhya

  • Using ODI, how to unzip a folder and load a bunch of .csv files

    Hi All,
    Below is my scenario.
    We have created an FTP folder that receives .zip files every day.
    Each .zip file contains a bunch of .csv files (flat files), and using ODI we need to extract the .zip file and load all the .csv files into the target database. The source data is .csv files and the target is Oracle 10g. Is this possible using ODI? Please suggest.
    Thanks in advance
    Zakeer

    Hi,
    I'm interested in this way of reading multiple files. I tried to follow those steps, but I'm getting an error while running the package:
    ODI-1226: Step vReadFile fails after 1 attempt(s).
    ODI-1236: Error while refreshing variable VOUCHER_CUSTOMER_DATA_INTEGRATION.vReadFile.
    Caused By: java.lang.NumberFormatException: For input string: "#Activity_report_name"
    The variable vReadFile can't resolve the other variable, Activity_report_name.
    The refreshing query for vReadFile is:
    select     file     C1_FILE
    from      TABLE
    /*$$SNPS_START_KEYSNP$CRDWG_TABLESNP$CRTABLE_NAME=files_namesSNP$CRLOAD_FILE=C:\/files_name.txtSNP$CRFILE_FORMAT=DSNP$CRFILE_SEP_FIELD=0x0009SNP$CRFILE_SEP_LINE=0x000D0x000ASNP$CRFILE_FIRST_ROW=#Activity_report_nameSNP$CRFILE_ENC_FIELD=SNP$CRFILE_DEC_SEP=SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=fileSNP$CRTYPE_NAME=STRINGSNP$CRORDER=1SNP$CRLENGTH=50SNP$CRPRECISION=50SNP$CRACTION_ON_ERROR=0SNP$CR$$SNPS_END_KEY*/
    Any suggestions please?
    Regards

  • How does export to CSV work in the Safari browser? In my application, export to CSV opens raw data in a new tab, but other browsers work great. Need it to open in, or be saved as, a CSV file.

    How does export to CSV work in the Safari browser?
    In my application, export to CSV opens the raw data in a new tab.
    But other browsers work great!
    I need it to open in, or be saved as, a CSV file.
    Please advise. Thank you in advance!

    Hi Adrian,
    Why don't you try another program for opening CSV files than Notepad? In my experience, you can use any of these to open a CSV file:
    Microsoft Excel
    OpenOffice Calc
    Google Docs
    There is also an additional tool known as CSV Viewer. You may try it; download it from http://www.csvviewer.com/
    I've never used Notepad for opening CSV files, because sometimes they contain symbols which are not at all compatible with Notepad.

  • Error when executing interface which loads data from csv file which has 320 columns

    Hi,
    Can someone provide a resolution for the error below?
    I have created an interface which loads data from a csv file with 320 columns into a synonym which also has 320 columns,
    using LKM File to SQL and IKM SQL Control Append.
    I am getting the error below when executing the interface:
    com.sunopsis.tools.core.exception.SnpsSimpleMessageException: ODI-17517: Error during task interpretation. Task: 6 java.lang.Exception: BeanShell script error: Sourced file: inline evaluation of: ``out.print("The application script threw an exception: java.lang.StringIndexOutOf . . . '' Token Parsing Error: Lexical error at line 2, column 42. Encountered: "\\" (92), after : "": <at unknown location> BSF info: Create external table at line: 0 column: columnNo
    at com.sunopsis.dwg.codeinterpretor.SnpCodeInterpretor.transform(SnpCodeInterpretor.java:485)
         at com.sunopsis.dwg.dbobj.SnpSessStep.createTaskLogs(SnpSessStep.java:711)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:461)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: org.apache.bsf.BSFException: BeanShell script error: Sourced file: inline evaluation of: ``out.print("The application script threw an exception: java.lang.StringIndexOutOf . . . '' Token Parsing Error: Lexical error at line 2, column 42. Encountered: "\\" (92), after : "": <at unknown location>
    BSF info: Create external table at line: 0 column: columnNo
         at bsh.util.BeanShellBSFEngine.eval(Unknown Source)
         at bsh.util.BeanShellBSFEngine.exec(Unknown Source)
         at com.sunopsis.dwg.codeinterpretor.SnpCodeInterpretor.transform(SnpCodeInterpretor.java:471)
         ... 11 more
    Text: The application script threw an exception: java.lang.StringIndexOutOfBoundsException: String index out of range: 2 BSF info: Create external table at line: 0 column: columnNo
    out.print("createTblCmd = r\"\"\"\ncreate table ") ;
    out.print(odiRef.getTable("L", "COLL_NAME", "W")) ;
    out.print("<?=(extTabColFormat.getUseView())?\"_ET\":\"\"?>\n(\n\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\t"+
              "<?=extTabColFormat.getExtTabDataType(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022[DEST_WRI_DT]\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
         , ",\\n\\t", "","")) ;
    out.print("\n)\nORGANIZATION EXTERNAL\n(\n\tTYPE ORACLE_LOADER\n\tDEFAULT DIRECTORY dat_dir\n\tACCESS PARAMETERS\n\t(\n\t\tRECORDS DELIMITED BY 0x'") ;
    out.print(odiRef.getSrcTablesList("[XFILE_SEP_ROW]","")) ;
    out.print("'\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_CHARACTERSET")) ;
    out.print("\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_STRING_SIZE")) ;
    out.print("\n\t\tBADFILE\t\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.bad'\n\t\tLOGFILE\t\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.log'\n\t\tDISCARDFILE\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.dsc'\n\t\tSKIP \t\t") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_FIRST_ROW]", "", "")) ;
    out.print("\n") ;
    if (odiRef.getSrcTablesList("", "[FILE_FORMAT]", "", "").equals("F")) {out.print("\n\t\tFIELDS\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_MISSING_FIELD")) ;
    out.print("\n\t\t(\n\t\t\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\tPOSITION([FILE_POS]:[FILE_END_POS])\\t"+
                        "<?=extTabColFormat.getExtTabFormat(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022DEST_WRI_DT\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
                        , ",\\n\\t\\t\\t", "","")) ;
    out.print("\t\t\n\t\t)\n\t)\n") ;
    } else {out.print("\n\t\tFIELDS TERMINATED BY x'") ;
    out.print(odiRef.getSrcTablesList("", "[XFILE_SEP_FIELD]", "", "")) ;
    out.print("'\n\t\t") ;
    if(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").equals("")){out.print("\n\t\t") ;
    } else {out.print("OPTIONALLY ENCLOSED BY '") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(0,1)) ;
    out.print("' AND '") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(1,2)) ;
    out.print("' ") ;
    }out.print("\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_MISSING_FIELD")) ;
    out.print("\n\t\t(\n\t\t\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\t"+
                        "<?=extTabColFormat.getExtTabFormat(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022DEST_WRI_DT\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
                        , ",\\n\\t\\t\\t", "","")) ;
    out.print("\t\t\n\t\t)\n\t)\n") ;
    }out.print("\tLOCATION (") ;
    out.print(odiRef.getSrcTablesList("", "'[RES_NAME]'", "", "")) ;
    out.print(")\n)\n") ;
    out.print(odiRef.getUserExit("EXT_PARALLEL")) ;
    out.print("\nREJECT LIMIT ") ;
    out.print(odiRef.getUserExit("EXT_REJECT_LIMIT")) ;
    out.print("\n\"\"\"\n \n# Create the statement\nmyStmt = myCon.createStatement()\n \n# Execute the trigger creation\nmyStmt.execute(createTblCmd)\n \nmyStmt.close()\nmyStmt = None\n \n# Commit, just in case\nmyCon.commit()") ;
    ****** ORIGINAL TEXT ******
    createTblCmd = r"""
    create table <%=odiRef.getTable("L", "COLL_NAME", "W")%><?=(extTabColFormat.getUseView())?"_ET":""?>
         <%=odiRef.getColList("", "[CX_COL_NAME]\t"+
              "<?=extTabColFormat.getExtTabDataType(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022[DEST_WRI_DT]\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
         , ",\n\t", "","")%>
    ORGANIZATION EXTERNAL
         TYPE ORACLE_LOADER
         DEFAULT DIRECTORY dat_dir
         ACCESS PARAMETERS
              RECORDS DELIMITED BY 0x'<%=odiRef.getSrcTablesList("[XFILE_SEP_ROW]","")%>'
              <%=odiRef.getUserExit("EXT_CHARACTERSET")%>
              <%=odiRef.getUserExit("EXT_STRING_SIZE")%>
              BADFILE          '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.bad'
              LOGFILE          '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.log'
              DISCARDFILE     '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.dsc'
              SKIP           <%=odiRef.getSrcTablesList("", "[FILE_FIRST_ROW]", "", "")%>
    <% if (odiRef.getSrcTablesList("", "[FILE_FORMAT]", "", "").equals("F")) {%>
              FIELDS
              <%=odiRef.getUserExit("EXT_MISSING_FIELD")%>
                   <%=odiRef.getColList("", "[CX_COL_NAME]\tPOSITION([FILE_POS]:[FILE_END_POS])\t"+
                        "<?=extTabColFormat.getExtTabFormat(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022DEST_WRI_DT\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
                        , ",\n\t\t\t", "","")%>          
    <%} else {%>
              FIELDS TERMINATED BY x'<%=odiRef.getSrcTablesList("", "[XFILE_SEP_FIELD]", "", "")%>'
              <% if(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").equals("")){%>
              <%} else {%>OPTIONALLY ENCLOSED BY '<%=odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(0,1)%>' AND '<%=odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(1,2)%>' <%}%>
              <%=odiRef.getUserExit("EXT_MISSING_FIELD")%>
                   <%=odiRef.getColList("", "[CX_COL_NAME]\t"+
                        "<?=extTabColFormat.getExtTabFormat(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022DEST_WRI_DT\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
                        , ",\n\t\t\t", "","")%>          
    <%}%>     LOCATION (<%=odiRef.getSrcTablesList("", "'[RES_NAME]'", "", "")%>)
    <%=odiRef.getUserExit("EXT_PARALLEL")%>
    REJECT LIMIT <%=odiRef.getUserExit("EXT_REJECT_LIMIT")%>
    # Create the statement
    myStmt = myCon.createStatement()
    # Execute the trigger creation
    myStmt.execute(createTblCmd)
    myStmt.close()
    myStmt = None
    # Commit, just in case
    myCon.commit().
         at com.sunopsis.dwg.dbobj.SnpSessStep.createTaskLogs(SnpSessStep.java:738)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:461)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)

    The issue is encountered because the text delimiter (field enclosure) defined for the source file did not consist of a pair of delimiters: the KM takes substring(0,1) and substring(1,2) of the FILE_ENC_FIELD value (visible in the generated code above), so a single-character setting raises the StringIndexOutOfBoundsException. Define both an opening and a closing delimiter.
    Please see support note [ID 1469977.1] for details.

  • SQL*Loader question: CSV file needs to load to multiple tables

    I have a CSV file, i.e.
    3 columns in the file (qy_type_cd, cd_description, tablename).
    Example data:
    PAST,Past Program Quantity,MASTER_QY
    FUT_PEACE,Future Peace Program Quantity,MASTER_QY
    TM_PHAS_USE,Time Phased Users Quantity,ADDITIVE_QY
    MISTR_RPR,MISTR Repair Quantity,USAGE_QY
    I need to load it into 3 tables. The tables have two columns (qy_type, description).
    How can I read the tablename from the csv file and load qy_type_cd and cd_description into a table?
    Something like this, but I don't know the syntax:
    into table api_mstr_qy
    fields terminated by ","
    when tablename ='MASTER_QY'
    (qy_type,description,tablename)

    First you need to create a directory object that points to where the csv file is located:
    CREATE DIRECTORY mydir AS 'c:\loaddata\';
    Then you need to define the metadata for the external table (column sizes below are illustrative):
    CREATE TABLE myExternal (
      qy_type_cd     VARCHAR2(30),
      cd_description VARCHAR2(100),
      tablename      VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
     DEFAULT DIRECTORY mydir
     ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      BADFILE 'bad_external'
      LOGFILE 'log_external'
      FIELDS TERMINATED BY ','
      (qy_type_cd CHAR,
       cd_description CHAR,
       tablename CHAR))
     LOCATION ('mycsv.csv'))
    PARALLEL 1 -- put 2, or the number of clustered servers you want to use for the process
    REJECT LIMIT 1;
    Now you are able to run selects on this table
    ie: select * from myExternal will produce:
    PAST,Past Program Quantity,MASTER_QY
    FUT_PEACE,Future Peace Program Quantity,MASTER_QY
    TM_PHAS_USE,Time Phased Users Quantity,ADDITIVE_QY
    MISTR_RPR,MISTR Repair Quantity,USAGE_QY
    So here is where you use the conditional (multitable) INSERT; note the second and third target table names are examples:
    INSERT FIRST
      WHEN tablename = 'MASTER_QY' THEN
        INTO api_mstr_qy (qy_type, description)
        VALUES (qy_type_cd, cd_description)
      WHEN tablename = 'ADDITIVE_QY' THEN
        INTO api_addtv_qy (qy_type, description)
        VALUES (qy_type_cd, cd_description)
      WHEN tablename = 'USAGE_QY' THEN
        INTO api_usage_qy (qy_type, description)
        VALUES (qy_type_cd, cd_description)
    SELECT qy_type_cd, cd_description, tablename
    FROM myExternal;
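    For completeness, SQL*Loader itself can also route rows to multiple tables with WHEN clauses. A control-file sketch (untested; the second target table name is an example, and note the POSITION(1) reset that delimited multi-table loads require):
    LOAD DATA
    INFILE 'mycsv.csv'
    APPEND
    INTO TABLE api_mstr_qy
    WHEN (tablename = 'MASTER_QY')
    FIELDS TERMINATED BY ','
    (qy_type, description, tablename FILLER)
    INTO TABLE api_addtv_qy
    WHEN (tablename = 'ADDITIVE_QY')
    FIELDS TERMINATED BY ','
    (qy_type POSITION(1), description, tablename FILLER)
    A third INTO TABLE block handles USAGE_QY the same way.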

  • Loading records from .csv file to SAP table via SAP Program

    Hi,
    I have a .csv file with 132,869 records and I am trying to load it into an SAP table with a customized SAP program.
    After executing the program, only 99,999 records are loaded into the table.
    Is there some setting that defines how many records can be loaded into a table? Or what else could be the problem?
    Please advise.
    Thanks!!!

    Hi Arun,
    A datasource needs an extract structure to fetch data. It is nothing but a temporary table to hold data.
    First you need to create a table in SE11 with fields matching the CSV file.
    Then you need to write a report program to read your CSV file and populate your table in BW.
    Then you can create a datasource on top of this table.
    After that, replicate and load the data into the PSA and onward to the upper flow.
    Regards,
    Jaya Tiwari

  • Loading data into csv file based on the count of data in table in ODI

    Hi,
    I have a requirement where data in a table needs to be loaded into a csv file.
    If the count of the data in the table exceeds the csv row limit, then I need to call the same procedure again to
    generate one more csv file and load the remaining data into it. The amount of data in the table may vary each time.
    I am new to ODI. Can anyone tell me ways to do this in ODI, or suggest some logic for it?
    Thanks in advance.
    Edited by: 883410 on May 22, 2012 12:01 AM

    What limit on the csv file are you talking about? There isn't one in the CSV format itself (though on some older Unix systems the file size limit is 2 GB). Or are you talking about opening it in earlier versions of Excel, where the display limit is 65,536 rows? That is irrelevant for your operation. If you do need to split the output, see the paging sketch below.
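    If you genuinely need to cap each file at N rows, one simple approach is to page through the table and write one file per page. A sketch of the paging query (table, column, and bind names are illustrative):
    SELECT col1, col2
    FROM  (SELECT t.col1, t.col2, ROW_NUMBER() OVER (ORDER BY t.id) rn
           FROM   my_table t)
    WHERE rn BETWEEN :page_start AND :page_end;
    In ODI you could drive :page_start/:page_end from a variable that a loop increments by N until it exceeds SELECT COUNT(*) FROM my_table.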

  • How to regularly load data from .csv file to database (using apex)

    Hi,
    I am using APEX 3. I need to load data from a csv file into APEX, and I need to perform this automatically through code at a regular interval of 5-10 seconds.
    Is it possible? If yes, how? Please reply as early as possible; this will decide whether or not to use APEX for this application.
    This is a question for Application Express; I don't know why it was posted in the BPEL forum.
    Edited by: TEJU on Oct 24, 2008 2:57 PM

    Hello,
    You really need to load the data every 5-10 seconds? Presumably it's read only?
    I would look at using an Oracle external table instead; that way you just drop your CSV in a location the DB can read, and you build a table that essentially references the underlying CSV (so when you query the table, you see the current data in the CSV file).
    Take a look at this link for a quick example on usage -
    http://www.oracle-base.com/articles/9i/SQLNewFeatures9i.php#ExternalTables
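    A minimal sketch of that approach (directory path, file, and column names are illustrative):
    CREATE OR REPLACE DIRECTORY csv_dir AS '/u01/app/incoming';
    CREATE TABLE readings_ext (
      reading_id  NUMBER,
      reading_val NUMBER
    )
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
     DEFAULT DIRECTORY csv_dir
     ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                        FIELDS TERMINATED BY ',')
     LOCATION ('readings.csv'))
    REJECT LIMIT UNLIMITED;
    Every query against readings_ext reads the file as it is right now, so overwriting readings.csv "refreshes" the data without any load job.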
    Hope this helps,
    John.
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.apex-evangelists.com
    Author of Pro Application Express: http://tinyurl.com/3gu7cd

  • Error while Loading data through .csv file

    Hi,
    I am getting the date error below when loading data into OLAP tables through a .csv file.
    The value stored in the .csv is 20071113121100.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TE_7007 Transformation Evaluation Error [<<Expression Error>> [TO_DATE]: invalid string for converting to Date
    ... t:TO_DATE(u:'2.00711E+13',u:'YYYYMMDDHH24MISS')]
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TT_11132 Transformation [Exp_FILE_CHNL_TYPE] had an error evaluating output column [CREATED_ON_DT_OUT]. Error message is [<<Expression Error>> [TO_DATE]: invalid string for converting to Date
    ... t:TO_DATE(u:'2.00711E+13',u:'YYYYMMDDHH24MISS')].
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TT_11019 There is an error in the port [CREATED_ON_DT_OUT]: The default value for the port is set to: ERROR(<<Expression Error>> [ERROR]: transformation error
    ... nl:ERROR(u:'transformation error')).
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TT_11021 An error occurred moving data from the transformation Exp_FILE_CHNL_TYPE: to the transformation W_CHNL_TYPE_DS.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> CMN_1086 Exp_FILE_CHNL_TYPE: Number of errors exceeded threshold [1].
    Any help is greatly appreciated.
    Thanks,
    Poojak

    1) Probably the wrong forum; you won't get much support for loading OLAP cubes here, I think.
    2) Has your CSV file been anywhere near Excel by any chance? The conversion of 20071113121100 to 2.00711E+13 looks very much like what I see when Excel has an invalid number mask / precision specified for a cell. A quick sanity check in SQL follows.
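    You can reproduce the mismatch directly in SQL (the first statement succeeds, the second raises a date-conversion error):
    SELECT TO_DATE('20071113121100', 'YYYYMMDDHH24MISS') FROM dual;  -- OK
    SELECT TO_DATE('2.00711E+13', 'YYYYMMDDHH24MISS') FROM dual;     -- fails
    So the fix is upstream: regenerate or re-save the file without letting Excel coerce the column into scientific notation.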
    *** We are using EBS. The file was already generated by consultants through a SQL query. We get the error while loading through Informatica. The target column is set up with a date datatype and the source is String(19).
    The expression in Informatica is set up as below:
    IIF(ISNULL(CREATED_ON_DT), NULL, TO_DATE(CREATED_ON_DT, 'YYYYMMDDHH24MISS'))
    Thanks,
    Poojak

  • Need help with query from .csv file

    I am trying to import a csv file with only 1 column in it.
    The column will only contain a 9 digit ID number. I want to read
    the file then use the contents to query a table to get the names
    and other information and display it. Here is what I have so far:
    <cffile action="read" file="#form.FiletoUpload#"
    variable="csvfile">
    <cfloop index="index" list="#csvfile#">
    <cfquery name="massimport" datasource="data1">
    SELECT * FROM IDTable
    WHERE CardNumber = ('#csvfile#')
    </cfquery>
    </cfloop>
    <cfoutput>#Name# #ID# #Site#</cfoutput>
    I get no errors but I am not getting any results. Just a
    blank page. Does anyone know how to query directly from a csv
    import? Thanks.

    You need to convert your file to a list somehow. Not sure if
    this is the most efficient way, but you can use the cfhttp tag to
    produce a query. Then your WHERE clause becomes
    where cardnumber in (#quotedvaluelist(query.column)#)
    and you won't need a loop. (Note that inside your current loop you
    are also querying #csvfile#, the whole list, instead of #index#,
    which is why no rows match.)

  • Loading Hierarchy thru csv file

    I was unable to load the hierarchies into BW 7.0 using a CSV data file.
    (1) It does not allow me to create a datasource for the hierarchies (source system: PC_File).
    (2) But I was able to create a datasource for the attributes (source system: PC_File).
    Can anyone provide more insight into the 7.0 procedure for loading hierarchies through flat files?
    For the time being I loaded the csv file using 3.x functionality.
    Thank you.

    I want to load a hierarchy by csv for 0CS_ITEM, which has a lot of existing hierarchies. One of them is a new one, created by me, and it is active.
    When I create an InfoPackage on source system PC_File (checked OK), there is a selection for hierarchies, but my hierarchy is not available for selection. What is wrong?
    If I take any selection and make a change at the bottom, I get errors.
    Dietrich von Essen

  • Loading Data From .CSV files

    Hi Gurus,
    I'm new to BW.
    Can you please help me solve the problem below?
    When I try to load data from a flat file (.CSV) which looks like this:
    Cust no, Cust name, Cust address
    cust001,coca cola, Inc., 17 north road, Hyderabad - 35
    with InfoObjects
    ZS_CUST, ZS_CUSTNM, ZS_CUSTAD
    I get the first three comma-separated values in my attribute table,
    i.e. cust001, coca cola, and Inc. in three fields of the attribute table (and PSA).
    Now my point is: without replacing the separator ',' with something else
    (if we have millions of records in the file, changing ',' to another separator would be a time-consuming process), can we extract the complete data into the respective fields of the attribute table?
    Thanks
    Suri

    Surender,
    Once you specify "," as the field separator in BW, it will be interpreted as such everywhere, so "coca cola, Inc." will be interpreted and loaded as two separate fields.
    "Data should be fixed in the source, whenever possible." That's a phrase you'll hear a lot when dealing with BW, especially when loading data from flat files.
    Good luck.
    Regards,
    Luis
