How to load flat files using OWB?

Hi all,
I would like to access a flat file (.csv) using OWB. I am able to import the file into the source module of OWB, but while executing the mapping I got the following error:
Starting Execution MAP_CSV_OWB
Starting Task MAP_CSV_OWB
SQL*Loader: Release 10.1.0.2.0 - Production on Fri Aug 11 16:34:22 2006
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Control File: C:\OWBTraining\owb\temp\MAP_CSV_OWB.ctl
Character Set WE8MSWIN1252 specified for all input.
Data File: \\01hw075862\owbfiles\employee.csv
File processing option string: "STR X'0A'"
Bad File: C:\OWBTraining\owb\temp\employee.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 200 rows, maximum of 50000 bytes
Continuation: none specified
Path used: Conventional
Table "OWNER_STG"."EMP_EXCEL", loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
"EMPNO" 1 * , CHARACTER
"EMPNAME" NEXT * , CHARACTER
value used for ROWS parameter changed from 200 to 96
SQL*Loader-500: Unable to open file (\\01hw075862\owbfiles\employee.csv)
SQL*Loader-553: file not found
SQL*Loader-509: System error: The system cannot find the file specified.
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
Table "OWNER_STG"."EMP_EXCEL":
0 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 49536 bytes(96 rows)
Read buffer bytes: 65536
Total logical records skipped: 0
Total logical records read: 0
Total logical records rejected: 0
Total logical records discarded: 0
Run began on Fri Aug 11 16:34:22 2006
Run ended on Fri Aug 11 16:34:22 2006
Elapsed time was: 00:00:00.09
CPU time was: 00:00:00.03
RPE-01013: SQL Loader reported error condition, number 1.
Completing Task MAP_CSV_OWB
Completing Execution MAP_CSV_OWB
Could you please help me?
Thanks and regards,
Gowtham Sen.
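For anyone hitting the same SQL*Loader-500/553 pair: it simply means the process performing the load cannot see the data file at the configured path. A quick sanity check, sketched here in Python (the UNC path is the one from the log above and is specific to that network):

```python
import os

def check_data_file(path):
    """Return a short diagnosis string for a SQL*Loader data file path.

    SQL*Loader-500/553 usually means the account running the OWB
    runtime service cannot see this path, or the name is simply wrong.
    """
    if not os.path.exists(path):
        return "missing"
    if not os.access(path, os.R_OK):
        return "unreadable"
    return "ok"

# The UNC path from the log; run this on the host that executes the load.
print(check_data_file(r"\\01hw075862\owbfiles\employee.csv"))
```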

Thank you, my friends.
As you said, I had given the wrong file name. It's solved now. Thank you.
I have another problem: how do I load data from an Excel file into OWB? Is it possible the same way we do it for flat files?
I did it using ODBC + Heterogeneous Services.
But after creating and deploying a mapping, I got the following error:
"error occurred when looking up remote object <unspecified>.EmployeeRange@EXCEL_SID.US.ORACLE.COM@DEST_LOCATION_EXCEL_SOURCE_LOC
ORA-00604: error occurred at recursive SQL level 1
ORA-02019: connection description for remote database not found
Could you please help me?
Thanks and regards
Gowtham
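ORA-02019 means the database cannot resolve the connect descriptor named in the database link's USING clause. One quick check, sketched in Python with hypothetical file contents, is whether that alias is actually defined in the server's tnsnames.ora (for an HS gateway the entry also needs `(HS = OK)`):

```python
import re

def has_tns_alias(tnsnames_text, alias):
    """Check whether a tnsnames.ora text defines the given alias.

    ORA-02019 typically means the alias named in the database link's
    USING clause is missing from the tnsnames.ora that the server uses.
    """
    pattern = re.compile(r"^\s*" + re.escape(alias) + r"\s*=",
                         re.IGNORECASE | re.MULTILINE)
    return bool(pattern.search(tnsnames_text))

# Hypothetical gateway entry for an Excel ODBC DSN:
sample = """
EXCEL_SID =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SID = EXCEL_SID))
    (HS = OK)
  )
"""
print(has_tns_alias(sample, "EXCEL_SID"))
```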

Similar Messages

  • How to load hierarchies using DTP with a flat file in BW (need clear steps)

    How do I load hierarchies using DTP with a flat file in BW? I need clear steps.

    Hi,
    If you want to load InfoObjects in the form of hierarchies, you have to activate the indicator with hierarchies for each of the relevant InfoObjects in the InfoObject maintenance. If necessary, you need to specify whether the whole hierarchy or the hierarchy structure is to be time-dependent, whether the hierarchy can contain intervals, whether additional node attributes are allowed (only when loading using a PSA), and which characteristics are allowed.
    1. Defining the source system from which you want to load data:
    Choose Source System Tree → Root (Source System) → Context Menu (right mouse button) → Create.
    For a flat file, choose: File System, Manual Metadata; Data via File Interface.
    2. Defining the InfoSource for which you want to load data:
    Optional: Choose InfoSource Tree → Root (InfoSources) → Context Menu (right mouse button) → Create Application Component.
    Choose InfoSource Tree → Your Application Component → Context Menu (right mouse button) → Create InfoSource → Direct Update.
    Choose an InfoObject from the proposal list, and specify a name and a description.
    3. Assigning the source system to the InfoSource
    Choose InfoSource Tree → Your Application Component → One of your InfoSources → Context Menu (right mouse button) → Assign Source System. You are taken automatically to the transfer structure maintenance.
    The system automatically generates DataSources for the three different data types to which you can load data.
          Attributes
          Texts
          Hierarchies (if the InfoObject has access to hierarchies)
    The system automatically generates the transfer structure, the transfer rules, and the communication structure (for attributes and texts).
    4. Maintaining the transfer structure / transfer rules
    Choose the DataSource to be able to upload hierarchies.
    Idoc transfer method: The system automatically generates a proposal for the DataSource and the transfer structure. This consists of an entry for the InfoObject, for which hierarchies are loaded. With this transfer method, during loading, the structure is converted to the structure of the PSA, which affects performance.
    PSA transfer method: The transfer methods and the communication structure are also generated here. 
    5. Maintaining the hierarchy:
    Choose Hierarchy Maintenance, and specify a technical name and a description of the hierarchy.
    PSA Transfer Method: You have the option here to set the Remove Leaf Value and Node InfoObjects indicator. As a result, characteristic values are not transferred into the hierarchy fields NODENAME, LEAFFROM and LEAFTO as is normally the case, but in their own transfer structure fields.  This option allows you to load characteristic values having a length greater than 32 characters.
    Characteristic values with a length > 32 can be loaded into the PSA, but they cannot be updated in characteristics that have a length >32.
    The node names for pure text nodes remain restricted to 32 characters in the hierarchy (0HIER_NODE characteristic).
    The system automatically generates a table with the following hierarchy format (for sorted hierarchies without removed leaf values and node InfoObjects):
    Description            Field Name   Length  Type
    Node ID                NODEID       8       NUMC
    InfoObject name        INFOOBJECT   30      CHAR
    Node name              NODENAME     32      CHAR
    Catalog ID             LINK         1       CHAR
    Parent node            PARENTID     8       NUMC
    First subnode          CHILDID      8       NUMC
    Next adjacent node     NEXTID       8       NUMC
    Language key           LANGU        1       CHAR
    Description - short    TXTSH        20      CHAR
    Description - medium   TXTMD        40      CHAR
    Description - long     TXTLG        60      CHAR
    The system transfers the settings for the intervals and for time-dependency from the InfoObject maintenance. Depending on which settings you have defined in the InfoObject maintenance, further table fields can be generated from the system.
    The Valid From and Valid To fields are filled if you select Total Hierarchy Time-Dependent in the InfoObject maintenance. The time-dependent indicator is activated if you select the Hierarchy Nodes Time-Dependent option in the InfoObject maintenance.
    6. Save your entries.
    Depending on which settings you defined in the InfoObject maintenance, additional fields can be generated from the system. Also note the detailed description for Structure of a Flat Hierarchy File for Loading via an IDoc and for Structure of a Flat Hierarchy File for Loading via a PSA.
    Also find the below blog for your reference...
    /people/prakash.bagali/blog/2006/02/07/hierarchy-upload-from-flat-files
    You can load hierarchy using process chain...
    Find the below step by step procedure to load hierarchy using process chain...
    http://help.sap.com/saphelp_nw70/helpdata/EN/3d/320e3d89195c59e10000000a114084/frameset.htm
    Assign points if this helps you.
    Regards,
    KK.
    Edited by: koundinya karanam on Apr 8, 2008 1:08 PM
    Edited by: koundinya karanam on Apr 8, 2008 1:09 PM
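As a concrete illustration of the generated hierarchy format in the table above, here is a sketch in Python (the InfoObject name and sample rows are hypothetical) that writes a flat file with exactly those columns:

```python
import csv
import io

# Column order matches the generated PSA hierarchy structure above.
FIELDS = ["NODEID", "INFOOBJECT", "NODENAME", "LINK", "PARENTID",
          "CHILDID", "NEXTID", "LANGU", "TXTSH", "TXTMD", "TXTLG"]

rows = [
    # A root text node (0HIER_NODE), then one characteristic leaf under it.
    {"NODEID": "00000001", "INFOOBJECT": "0HIER_NODE", "NODENAME": "ROOT",
     "LINK": "", "PARENTID": "00000000", "CHILDID": "00000002",
     "NEXTID": "00000000", "LANGU": "E", "TXTSH": "Root",
     "TXTMD": "Root node", "TXTLG": "Root node of the hierarchy"},
    {"NODEID": "00000002", "INFOOBJECT": "ZEMPLOYEE", "NODENAME": "1000",
     "LINK": "", "PARENTID": "00000001", "CHILDID": "00000000",
     "NEXTID": "00000000", "LANGU": "E", "TXTSH": "", "TXTMD": "",
     "TXTLG": ""},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```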

  • How to load a flat file in BI 7.0

    Hi experts,
    I am new to BI 7.0. Can anyone help me with how to load a flat file in BI 7.0? I have done it in BW 3.5 and want to know how in BI 7.0. I will assign points.

    Hi
    LOADING MASTER AND TRANSACTIONAL DATA
    Please follow the following steps to load master and transaction data in BI 7.0
    Loading of transaction data in BI 7.0:
    Uploading of Transaction data
    Log on to your SAP system.
    Transaction code RSA1 takes you to Modelling.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( Transaction data )
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
    • In the fields tab you can give the technical names of the InfoObjects in the template, so that you do not have to map them during the transformation; the server will map them automatically. If you do not map them in the fields tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
    • Create an InfoPackage by right-clicking the data source, and in the schedule tab click Start to load the data to the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to create ODS( Data store object ) or Cube.
    • Specify a name for the ODS or cube and click Create.
    • From the template window select the required characteristics and key figures and drag and drop it into the DATA FIELD and KEY FIELDS
    • Click Activate.
    • Right click on ODS or Cube and select create transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.
    4. Monitor
    Right-click the data targets, select Manage, and in the contents tab select Contents to view the loaded data. There are two tables in an ODS, the new table and the active table; to move data from the new table to the active table, you have to activate it after selecting the loaded data. Alternatively, the monitor icon can be used.
    Loading of master data in BI 7.0:
    For Uploading of master data in BI 7.0
    Log on to your SAP system.
    Transaction code RSA1 takes you to Modelling.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( master data attributes, text, hierarchies)
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
    • In the fields tab you can give the technical names of the InfoObjects in the template, so that you do not have to map them during the transformation; the server will map them automatically. If you do not map them in the fields tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
    • Create an InfoPackage by right-clicking the data source, and in the schedule tab click Start to load the data to the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to select Insert Characteristics as info provider
    • Select required info object ( Ex : Employee ID)
    • Under that info object select attributes
    • Right click on attributes and select create transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.
    4. Monitor
    Right-click the data targets, select Manage, and in the contents tab select Contents to view the loaded data. Alternatively, the monitor icon can be used.
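The extraction-tab settings mentioned above (header rows to ignore, CSV data format, comma separator) describe straightforward parsing. A sketch of the same behaviour in Python, with hypothetical sample data:

```python
import csv
import io

def read_flat_file(text, header_rows_to_ignore=1, separator=","):
    """Mimic the DataSource extraction settings: skip N header rows,
    then split each remaining line on the separator."""
    reader = csv.reader(io.StringIO(text), delimiter=separator)
    rows = list(reader)
    return rows[header_rows_to_ignore:]

# Hypothetical flat file: one header row, then data rows.
sample = "EMPID,NAME,SALARY\n1000,Alice,5000\n1001,Bob,6000\n"
print(read_flat_file(sample))
```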

  • How to load hierarchies using process chains

    Hi ,
    Can anyone please explain the steps for loading hierarchies using a process chain? Whenever I drag the hierarchy InfoPackage, a Save Variant also comes in by default. Do we need a different Save Variant for each hierarchy InfoPackage, or can we have one Save Variant for all the hierarchy InfoPackages?
    Thanks,
    Vaka

    Hello Vaka,
    How are you?
    Yes, a Save Variant and an Attribute Change Run will be added while loading the hierarchy InfoPackage.
    Provide the InfoObject name for which you are loading the hierarchy in the Save Variant, and save it. The same will be transferred to the Attribute Change Run as well.
    If you are creating the chain with more InfoPackages, then put the Save Variant and Attribute Change Run at the end.
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • How to load images using xml in flash

    Hi
    I'm working on a quiz project where I plan to load images into Flash using XML. Can anyone help with the script, in AS2 or AS3?

    You'll find a tutorial for what you are after here:
    http://www.gotoandlearn.com/play?id=22

  • How to load data using Control File in BW 7

    Hi All,
    I have a requirement to load data into BW using a control file. The development is done in BW 7.0. In BW 3.5, the InfoPackage has a FILE IS option (Control File or Data File). Please suggest how to achieve the same in BW 7.0.
    Regards,
    Vikram

    Any suggestions?

  • How to load and use JVM form other types of applications ?

    Hello
    I have a site that runs with ASP and I would like to execute some java applications to output some of the pages.
    Is there a way to execute Java programs in the JVM without having to create a new process running java.exe?
    For example, using an ActiveX component or a special .dll from Sun. How does a web browser run applets?
    I cannot use .jsp directly for my pages because my client also uses a custom language (MediaBase) they bought that works only with ASP.
    Thank you,
    Timothy Madden,
    Romania

    Yes, or something like IPC with a service process, as Michael said.
    First I wanted to know if JNI has an option to create a shared VM, since my client's web server has some other processes running that also use the JVM. Now it looks like JNI will load the JVM in-process, although the documentation says little about this; the InitArgs structure has no 'is_shared' member.
    But all this is complicated for a web page, difficult to invoke from a scripting language, and it is too much effort just to use some library that happens to be written in Java, which I use to export search results into an Excel workbook (the library is Apache POI). I mean, I would do it, but my client will not understand the need for this effort. Then I also want some data transferred between my web script and the Java application.
    So I am going to stick with java.exe and create a new process on every hit, with the standard input, output and error streams redirected to some vars in my script. At least until I find out more.
    Thank you,
    Timothy Madden,
    Romania
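The fallback described above (spawn a new process per hit with stdin/stdout/stderr redirected) can be sketched as follows. This uses Python as a stand-in for the ASP script, and invokes the Python interpreter instead of a real `java.exe -jar poi-export.jar` (that jar name is hypothetical) so the example is self-contained:

```python
import subprocess
import sys

def run_exporter(args, input_text=""):
    """Launch a child process per request, feeding input_text on stdin
    and capturing stdout/stderr, as the ASP script would with java.exe."""
    result = subprocess.run(
        args,
        input=input_text,
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout, result.stderr

# Stand-in for: run_exporter(["java.exe", "-jar", "poi-export.jar"], rows)
code, out, err = run_exporter([sys.executable, "-c", "print('exported')"])
print(code, out.strip())
```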

  • How to load data using two maps for the same source file in Import Manager

    Hi,
    I am trying to load the source data into MDM using two map files, since one map is too big (it causes a memory error).
    However, the data from the first map (SAP data) is visible after loading, but when loading the second map (non-SAP data), using the remote key to match the previously loaded SAP key, the data is not reflected in Data Manager even though MDM processes it.
    Is there any reason for this?
    Appreciate your suggestion and tips on this.
    regrds,
    Reo

    Hi Reo,
    As your requirement seems, in the second pass perform record matching by selecting the default import action <b>update null fields only</b> for those data records which match, and <b>create</b> for those which do not match the previous data records from the SAP system.
    If you define "update null fields only", then only the key of the new record from the non-SAP system will get appended to the existing record. If you select "create", it will create new records and apply the non-SAP key to them.
    Please check the remote key values using Data Manager, i.e. whether the key is applied to the records or not.
    Remember that import maps are client-specific. A remote key comprises both the key and the client system, so make sure you are connecting to two different client systems.
    I have tried this and this is working fine on my system.
    Thanks,
    Shiv Prashant Dixit

  • Limit on records while loading a flat file using the BPS GUI

    Things done,
    1. A planning function of type exit.
    2. One FM to initialize the data and one FM to load the data.
    3. The first FM has been modified to ask for a file using a select-file popup.
    4. The data is getting loaded into the system successfully.
    5. The file type is a tab-separated text file with extension .txt.
    Questions:
    1. Do we have any limit on the number of records loaded into the system using BPS?
    2. Can I load more than 20,000 records in a single step, or do we need to break the records into chunks of 9,999 in the INIT FM?
    3. If there is a limit, please share the code if somebody has already done it.
    Thanks

    Hi,
    I have not faced this situation before, so I can't say with certainty what the limit on the number of records is.
    However, I would like to suggest the option of loading the flat file into your transactional cube using update rules.
    Agreed that planning won't be available while the data load is happening, but if loading is a once-in-a-blue-moon affair then this should be a very good option, as you can load a text file containing a huge number of records into an InfoCube in a controlled manner.
    Hope it helps.
    Cheers
    Abhijit
    * It's a good habit to reward someone with points in SDN if you think his/her response was helpful to you
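On question 2, if a 9,999-record limit does apply, the chunking itself is trivial. A sketch in Python (the real INIT FM would of course be ABAP):

```python
def chunked(records, size=9999):
    """Split records into chunks of at most `size` rows, e.g. 9,999 per
    call if the BPS interface imposes such a limit."""
    return [records[i:i + size] for i in range(0, len(records), size)]

# 20,000 hypothetical records become batches of 9999, 9999 and 2.
batches = chunked(list(range(20000)))
print(len(batches), [len(b) for b in batches])
```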

  • How to Load Characterisitcs using LSMW in ECC 6.0? Very urgent requirement

    Hi,
    We are facing problems while loading characteristics data using batch input method RCCTBI01. In the Create Batch Input Session step it throws the error "Transaction CT04 not supported".
    Can anybody help me?
    Thanks in advance

    Hello,
    Try to use the BAPI (Business object BUS1088)
    David

  • How to load data using sql to an Oracle database in 9i

    I just started using Hyperion Analytic Administration Services and was wondering if there is a way to load data through a SQL call rather than a flat file. Any help is greatly appreciated.

    You need to create a rules file:
    - Right-click Rules Files under the database name / Create Rules File
    - File / Open SQL Data Source -> copy and paste your SQL code there
    - Field / Properties / Data Load Properties -> define each column of the result set against a specific dimension.
    I hope that it helps you!
    Regards
    Andrea Crespo

  • How to load file using Java

    Hello,
    I am trying to load a file (csv or txt) into TimesTen using Java. I tried the query "LOAD DATA INFILE ...", but I received an error that there is something wrong with the word "DATA". Later I tried to copy the solution from sample programs written in Java, but I got a lot of errors which I couldn't solve. I also tried to use ttBulkCp from Java like this: query = {CALL ttBulkCp ... }, but it wasn't working either. There must be a way to load data from a file into a TimesTen table using Java, right?
    Thank you for any ideas.

    If you take a look at the quickstart guide you get when you install the product you'll see a java code example called 'level3.java' which does what you want:
    This program uses the TimesTen JDBC driver to perform order processing operations:
    a) Connect to the database using the TimesTenDataSource interface
    b) Process all orders in the input3.dat file by inserting into the ORDERS and ORDER_ITEM tables.....
    Regards
    Tim
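ttBulkCp is a command-line utility rather than SQL, which is why calling it through a JDBC statement fails. The usual in-code pattern is the one level3.java uses: parse the file yourself and batch the inserts. A Python sketch of that pattern, with sqlite3 standing in for the TimesTen JDBC connection and hypothetical data (in Java you would use PreparedStatement.addBatch/executeBatch):

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a TimesTen connection
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT, qty INTEGER)")

# Hypothetical CSV file contents; in practice, open(path) instead.
csv_text = "1,widget,10\n2,gadget,5\n"
rows = [(int(i), item, int(q))
        for i, item, q in csv.reader(io.StringIO(csv_text))]

# Batched, parameterized insert -- same shape as addBatch()/executeBatch()
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
```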

  • How to load configuration using Cisco USB port???

    The configuration is there on the USB flash. When I try to copy it, I get this:
    Switch#copy usbflash0: 3560-ST
    Source filename [3560-st]?
    Destination filename [3560-ST]?
    %Error opening usbflash0:3560-st (File not found)
    What is the problem? The flash was formatted on another Cisco device (its capacity has decreased from 16 GB to 2.75 GB).

    The syntax is the one Chandu pointed out.
    However, I see there is a space between usbflash0: and the filename:
    "Switch#copy usbflash0: 3560-ST"
    Make sure there is no space in the command above and that the file name is correct.

  • How to manage different load frequencies in OWB?

    How to manage different load frequencies in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have a situation where we bring data feeds into the database using OWB at different load frequencies, e.g. daily, weekly and monthly.
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables maintain such load-frequency data?
    If not, do we need to maintain our own (custom) tables to hold it?
    How can we validate, before starting the ETL process, that the previous load of a given frequency permits the current load?
    How did our forum members handle the above situations?
    Any idea ?
    Thanks in advance.
    RI

    If you look at the OWB public function WB_RT_GET_LAST_EXECUTION_TIME you can examine the query OWB uses to return this information. It is based entirely on public views that you should already have synonyms for in your OWB user.
    Now, how you want to implement logic to determine if a mapping is run is entirely up to you. If you are using process flows you could embed your call to start a mapping in a custom transformation that queries the repository and starts a given mapping if it deems the correct interval to have passed. We do something similar as a means to perform a process flow restart (it will skip over any successfully completed mappings from the last run if the last COMPLETE run did not finish successfully), so that a restart picks up where the last run failed.
    The main caveat I would make to mining the runtime repository is that someone purging old data might very well purge the data you are depending on, so you might want to build a couple of tables to hold your own metadata for you. E.g. one table holding the mapping name and desired interval between loads, and another holding the mapping name and last run date. The second table could get updated by the post-mapping procedure of each mapping.
    Anyway, the data IS there in the repository. How you intend to access it or how much you are willing to trust its persistence is up to you.
    Mike
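Mike's two-table suggestion can be sketched quickly; here in Python with sqlite3 standing in for the custom schema (the table, column and mapping names are hypothetical):

```python
import sqlite3
from datetime import datetime, timedelta

# Two custom metadata tables: the desired interval per mapping, and the
# last run time (which each mapping's post-mapping procedure would update).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE map_schedule (map_name TEXT, interval_days INTEGER)")
db.execute("CREATE TABLE map_last_run (map_name TEXT, last_run TEXT)")
db.execute("INSERT INTO map_schedule VALUES ('MAP_DAILY', 1)")
db.execute("INSERT INTO map_last_run VALUES ('MAP_DAILY', '2006-08-10T02:00:00')")

def due(db, map_name, now):
    """True if the mapping's configured interval has elapsed since its last run."""
    interval_days, last_run = db.execute(
        "SELECT s.interval_days, r.last_run "
        "FROM map_schedule s JOIN map_last_run r ON r.map_name = s.map_name "
        "WHERE s.map_name = ?", (map_name,)).fetchone()
    return now - datetime.fromisoformat(last_run) >= timedelta(days=interval_days)

print(due(db, "MAP_DAILY", datetime(2006, 8, 11, 16, 34)))
```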

  • How to implement "delete" on a table using OWB?

    Hi All,
    How do I implement a simple delete using OWB?
    Assumption:
    (1) Table abcd (id number, age number, amt number)
    (2) the_number is a variable
    (3) the_id is a variable
    Want to implement following transformation in OWB?
    DELETE FROM ABCD WHERE AMT=0 AND number = the_number AND id = the_id ;
    Rgds,
    Deepak

    We implemented delete mappings and delete flows to be able to reverse a failed load. This is, in my opinion, a very sound and valid reason for deleting from a data warehouse. If the need is there it could also be used for deleting old, superfluous data from the data warehouse.
    There are a few things to consider: closed records in type II should be opened up (post-mapping).
    Test, test, test.
    It is indeed a bit tricky to realize, but it certainly works and is possible.
    The steps to take are as follows:
    1) Create a new mapping.
    2) Drop the table to delete from onto the mapping (twice: once as source, once as target).
    3) Map all fields from source to their corresponding fields in target, except the ones that determine the "where" clause (referred to as filter fields).
    4) Either create a select, or a mapping input parameter, which should generate the filter values for your delete.
    5) Map the above step to the filter fields.
    6) Define a delete mapping by altering the target table properties as follows:
    6a) Loading Type => DELETE
    6b) Match by constraint => No constraints
    7) Set the properties of each field as follows:
    7a) Filter fields: Match column when deleting => Yes
    7b) Other fields: Match column when deleting => No
    Hope this helps,
    Wilco
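The mapping Wilco describes ultimately generates a filtered DELETE. The same statement, sketched in Python with sqlite3 and bind variables for the two mapping input parameters (data is hypothetical, and the column is named num here because NUMBER is a reserved word):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE abcd (id INTEGER, num INTEGER, amt INTEGER)")
db.executemany("INSERT INTO abcd VALUES (?, ?, ?)",
               [(1, 7, 0), (1, 7, 5), (2, 7, 0)])

# The two mapping input parameters (the "filter fields" in steps 3-5 above)
the_number, the_id = 7, 1

# Only the filter fields take part in the match (steps 6-7:
# Loading Type = DELETE, Match by constraint = No constraints)
db.execute("DELETE FROM abcd WHERE amt = 0 AND num = ? AND id = ?",
           (the_number, the_id))
db.commit()
print(db.execute("SELECT COUNT(*) FROM abcd").fetchone()[0])
```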
