Loading a bridge table percentage

After I load a bridge table, I need to calculate a percentage based on how many rows are in the table per order. I could probably just have a process at the end do it, but I am moving towards streams and CDC, so it would be nice to do an update after an order has been processed. The calc is simple; where to put it and how to trigger it is the issue. Any ideas?
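In case it helps, this is roughly the calc I mean; the table and column names below are just placeholders for my bridge table, and :just_loaded_order_id is whatever key the CDC/stream step hands me:

UPDATE bridge_order_item b
SET    weight_pct = 1 / (SELECT COUNT(*)
                         FROM   bridge_order_item b2
                         WHERE  b2.order_id = b.order_id)
WHERE  b.order_id = :just_loaded_order_id;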

Hi,
I think you could add a check constraint so that, when the row count changes, an interface that performs the necessary calculation is executed.
Have a look at the Models tab: right-click a dataset and you will see the Check menu.
Hope this helps.
Ugur

Similar Messages

  • Kimball's bridge table

    Hey guys,
    Has anyone tried to implement Ralph Kimball's bridge table in OWB? Precisely loading a bridge table from a dimension (or its source) with embedded recursive pointers and hierarchy on a column.
    The idea here is to have a table holding all the paths in a hierarchy tree, which sits between the fact and the dimension and allows for aggregation and grouping at various levels of the hierarchy (unachievable with the CONNECT BY clause since it won't allow a join to be involved).
    Thanks,
    Rene

    We exploded the parent-child relationship using an explosion function in a post mapping. A good example of this is:
    http://www.kimballgroup.com/html/designtipsPDF/DesignTips2001/KimballDT17Populating.pdf
    You can modify this procedure accordingly. The top of the hierarchy must have PARENT = NULL.
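    As a rough sketch of that explosion step (the names here are made up, not from the design tip), a CONNECT BY query over a parent-child dimension DIM_ORG (ORG_KEY, PARENT_ORG_KEY, with PARENT_ORG_KEY = NULL at the top) can generate one bridge row per ancestor-to-descendant path:
    INSERT INTO bridge_org_hierarchy (ancestor_key, descendant_key, levels_removed)
    SELECT CONNECT_BY_ROOT org_key AS ancestor_key,   -- the starting (ancestor) node of the path
           org_key                 AS descendant_key, -- the node reached by walking down
           LEVEL - 1               AS levels_removed  -- 0 for the self-referencing row
    FROM   dim_org
    CONNECT BY PRIOR org_key = parent_org_key;        -- no START WITH: every node starts a path
    Flags such as top/bottom indicators, as described in the design tip, would be added in the same way.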

  • 3-1674105521 Multiple Paths error while using Bridge Table

    https://support.us.oracle.com/oip/faces/secure/srm/srview/SRViewStandalone.jspx?sr=3-1674105521
    Customer Smiths Medical International Limited
    Description: Multiple Paths error while using Bridge Table
    1. I have an urgent customer who encountered a design issue: the customer was trying to add 3 logical joins between SDI_GPOUP_MEMBERSHIP and these 3 tables (FACT_HOSPITAL_FINANCE_DTLS, FACT_HOSPITAL_BEDS_UTILZN and FACT_HOSPITAL_ATRIBUTES).
    2. They found out that by adding these 3 joins, they ended up with a circular error.
    [nQSError: 15001] Could not load navigation space for subject area GXODS.
    [nQSError: 15009] Multiple paths exist to table DIM_SDI_CUSTOMER_DEMOGRAPHICS. Circular logical schemas are not supported.
    In response to this circular error, the developer was able to bypass the error using aliases, but this is not desired by the client.
    3. They want to know how to avoid this error entirely without using an alias table, and would like a suggested way to resolve the circular join (multiple path) error.
    It would be appreciated if someone could give a pointer or suggestion, as the customer is on a tight deadline.
    Thanks
    Teik

    The strange thing compared to your output is that I get an error when I have table prefix in the query block:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY=SYSADM.TMP1:"WHERE TMP1.A = 2" REMAP_TABLE=SYSADM.TMP1:TMP3 CONTENT=DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "SYSADM"."TMP3" failed to load/unload and is being skipped due to error:
    ORA-38500: Unsupported operation: Oracle XML DB not present
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Fri Dec 13 10:39:11 2013 elapsed 0 00:00:03
    And if I remove it, it works:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY=SYSADM.TMP1:"WHERE A = 2" REMAP_TABLE=SYSADM.TMP1:TMP3 CONTENT=DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SYSADM"."TMP3"                             5.406 KB       1 out of 2 rows
    Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Fri Dec 13 10:36:50 2013 elapsed 0 00:00:01
    Nicolas.
    PS: as you can see, I'm on 11.2.0.4, I do not have 11.2.0.1 that you seem to use.

  • Calc problem with fact table measure used as part of bridge table model

    Hi all,
    I'm experiencing problems with the calculation of a fact table measure ever since I've used it as part of a calculation in a bridge table relationship.
    In a fact table, PROJECT_FACT, I had a column (PROJECT_COST) whose default aggregate was SUM. Whenever PROJECT_COST was used with any dimension, the proper aggregation was done at the proper levels. But, not any longer. One of the relationships PROJECT_FACT has is with a dimension, called PROJECT.
    PROJECT_FACT contains details of employees and each day they worked on a PROJECT_ID. So for a particular day, employee, Joe, might have a PROJECT_COST of $80 for PROJECT_ID 123, on the next day, Joe might have $40 in PROJECT_COST for the same project.
    Dimension table, PROJECT, contains details of the project.
    A new feature was added to the software: multiple customers can now be charged for a PROJECT, whereas before only one customer was charged.
    This percentage charge break-down is in a new table - PROJECT_BRIDGE. PROJECT_BRIDGE has the PROJECT_ID, CUSTOMER_ID, BILL_PCT. BILL_PCT will always add up to 1.
    So, the bridge table might look like...
    PROJECT_ID   CUSTOMER_ID   BILL_PCT
    123          100           .20
    123          200           .30
    123          300           .50
    456          400           1.00
    678          400           1.00
    So for project 123 there is a breakdown across multiple customers (.20, .30, .50).
    Let's say in PROJECT_FACT, if you were to sum up all PROJECT_COST for PROJECT_ID = 123, you get $1000.
    Here are the steps I followed:
    - In the Physical layer, PROJECT_FACT has a 1:M with PROJECT_BRIDGE as does PROJECT to PROJECT_BRIDGE (a 1:M).
    PROJECT_FACT ===> PROJECT_BRIDGE <=== PROJECT
    - In the Logical layer, PROJECT has a 1:M with PROJECT_FACT.
    PROJECT ===> PROJECT_FACT
    - The fact logical table source is mapped to the bridge table, PROJECT_BRIDGE, so now it has multiple tables it maps to (PROJECT_FACT & PROJECT_BRIDGE). They are set for an INNER join.
    - I created a calculation measure, MULT_CUST_COST, using physical columns, that calculates the sum of the PROJECT_COST X the percentage amount in the bridge table. It looks like: SUM(PROJECT_FACT.PROJECT_COST * PROJECT_BRIDGE.BILL_PCT)
    - I brought MULT_CUST_COST into the Presentation layer.
    We still want the old PROJECT_COST around until it gets phased out, so it's in the Presentation layer as well.
    Let's say I had a request with only PROJECT_ID, MULT_CUST_COST (the new calculation), and PROJECT_COST (the original). I'd expect:
    PROJECT_ID   MULT_CUST_COST   PROJECT_COST
    123          $1000            $1000
    I am getting this for MULT_CUST_COST, however, for PROJECT_COST, it's tripling the value (possibly because there are 3 percent amounts?)...
    PROJECT_ID   MULT_CUST_COST    PROJECT_COST
    123          $1000 (correct)   $3000 (incorrect, it's been tripled)
    If I were to look at the SQL, it would have:
    SELECT SUM(PROJECT_COST),
           SUM(PROJECT_FACT.PROJECT_COST * PROJECT_BRIDGE.BILL_PCT),
           PROJECT_ID
    FROM ...
    GROUP BY PROJECT_ID
    PROJECT_COST used to work correctly before modeling a bridge table.
    Any ideas on what I did wrong?
    Thanks!

    Hi
    Phew, what a long question!
    If I understand correctly, I think the problem lies with your old cost measure, or rather with combining that with your new one in the same request. If you think about it, your query as explained above will bring back 3 rows from the database, which is why your old cost measure is being multiplied. I suspect that if you took it out of the query, your bridge table would be working properly for the new measure alone.
    I would consider migrating your historic data into the bridge table model so that you have a single type of query. For the historic data, each project would have a single row in the bridge with a 1.0 BILL_PCT.
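    As a rough illustration (table names taken from your post, the customer column on PROJECT is an assumption), the backfill could look something like:
    -- Give every project that is not yet in the bridge a single row with a 100% allocation.
    -- PROJECT.BILL_TO_CUSTOMER_ID is an assumed column holding the customer that used to be charged.
    INSERT INTO project_bridge (project_id, customer_id, bill_pct)
    SELECT p.project_id, p.bill_to_customer_id, 1.0
    FROM   project p
    WHERE  NOT EXISTS (SELECT 1
                       FROM   project_bridge b
                       WHERE  b.project_id = p.project_id);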
    Best of luck,
    Paul
    http://total-bi.com

  • Bridge table to Role Playing Dimension

    Hi,
    The Sales fact table has a role-playing Customer dimension related to it. For each role it has a relationship in the DSV: one on CustomerA, the other on CustomerB.
    Now I have a Trip dimension which should have a many-to-many relationship to the Customer table, so I set up a bridge table relating Trip to Customer.
    But when I slice by Trip it only displays the sales that have CustomerA in the trip.
    I want it to display the sales that have CustomerB in the trip.
    How can I achieve that?
    Thank you
    Namnami

    When you create a M2M relationship, you specify 2 dimensions and one intermediate measure group.
    Even if the bridge table is loaded only once in the DSV (you can simply duplicate it to simplify its naming), you define two measure groups for the bridge table, one connected to Customer A and Trip A, and the other to Customer B and Trip B. Then you define the M2M relationship between Trip A and Customer A using bridge A as the intermediate, and do the same for B.
    Again, since M2M requires understanding how it works internally, I suggest you read the white paper I mentioned, because it contains all the information that will allow you to design the model you need. I don't think it's a discussion that is easy to complete in a forum thread, because M2M involves many side consequences in calculation (e.g. what happens to unrelated dimensions involved in a calculation?).
    Marco Russo - sqlbi.com

  • How can I make an easy *.CSV file to load into database table

    Hi All,
    I have a huge Excel sheet with columns item#, description and qty. The description column may be a one-word name, a two-word name separated by a space, or a comma-separated name. I want to write PL/SQL code which will read this file and load it into a database table. The *.CSV file is either comma delimited or tab delimited, and neither solves my issue. Is there a better solution that avoids manual editing of the *.CSV file so I can easily load it into the table?
    Your help is appreciated,
    Thanks
    Zahir

    SQL*Loader is probably the fastest method, but since you specifically asked for a PL/SQL method:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:464420312302
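    In case the link moves, here is a minimal PL/SQL sketch along the same lines (the directory object, file name, and target table are all assumptions). It treats everything between the first and last comma as the description, so embedded commas in that column do not break the parse:
    DECLARE
      f    UTL_FILE.FILE_TYPE;
      line VARCHAR2(4000);
      p1   PLS_INTEGER;
      p2   PLS_INTEGER;
    BEGIN
      f := UTL_FILE.FOPEN('DATA_DIR', 'items.csv', 'r', 4000);  -- assumed directory object and file
      LOOP
        BEGIN
          UTL_FILE.GET_LINE(f, line);
        EXCEPTION
          WHEN NO_DATA_FOUND THEN EXIT;  -- end of file
        END;
        p1 := INSTR(line, ',');          -- first comma: end of item#
        p2 := INSTR(line, ',', -1);      -- last comma: start of qty
        INSERT INTO items (item_no, description, qty)   -- assumed target table
        VALUES (TRIM(SUBSTR(line, 1, p1 - 1)),
                TRIM(SUBSTR(line, p1 + 1, p2 - p1 - 1)),
                TO_NUMBER(TRIM(SUBSTR(line, p2 + 1))));
      END LOOP;
      UTL_FILE.FCLOSE(f);
      COMMIT;
    END;
    /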

  • Loading MS Access Table and Data into Oracle

    Hi,
    I have a few tables in MS Access. I want to create the same layout of tables in Oracle and populate the data from the MS Access tables into the Oracle tables.
    Please let me know if there is a way I can create the tables and load the data automatically (through some option or script).
    I have Oracle 10g database and its clients.
    Thanks in advance,
    Rajeev.

    You can use Oracle Migration Workbench:
    Loading MS Access Table and Data into Oracle
    It's very easy to use and good for importing.
    regards,
    Felipe

  • Problem during  Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimension and cube that I have created.
    My scenario:
    I have a temp_table_transaction into which my flat files have been loaded. This table was loaded with 168,271,269 records from the flat file.
    I have created a mapping in OWB which takes temp_table_transaction, joins it with other tables, applies some expressions and conversion functions, and fills a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this configuration of my mapping:
    Default operating mode in the runtime parameters of the mapping config = Set based
    My dimensions filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    Problem #1:
    I have created a cube called transaction_cube with OWB and it generated and deployed correctly.
    I have created a map to fill my cube with the 168,271,268 records in the staging table called stg_tbl_transaction and deployed it to the server (my cube map operating mode is set based),
    but after running this map it had not completed after 9 hours and I was forced to cancel my running map by killing its sessions. I want to know whether this much time for loading this volume of data is acceptable, or whether we should expect to spend more time on this volume. Please let me know if anybody has any insight.
    Problem #2:
    To test my map I created a map configured set based in operating mode, selected my stg_tbl_transaction (with 168,271,268 records in it) as the source, and created another table to transfer and load my data into. I wanted to test the time we should expect for this simple map, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the configuration of the map, or something else? Please guide me on these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on a two-socket Xeon 5500 series server with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you:
    It is possible to load from an InfoSet to a cube; we did it, and it was OK.
    Data really are loaded from the InfoSet (cube + master data) to the cube.
    When you create a transformation under a cube, the InfoSet is proposed, and it works fine.
    Now the process is no longer operational and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want.
    Christophe

  • FDMEE Import error "No periods were identified for loading data into table 'AIF_EBS_GL_BALANCES_STG'

    Hi,
    We are having trouble importing one ledger, 'GERMANY EUR GGAAP'. It works for Dec 2014, but while trying to import data for 2015 it gives an error.
    The import error shows "RuntimeError: No periods were identified for loading data into table 'AIF_EBS_GL_BALANCES_STG'."
    I tried all the Knowledge docs from Oracle Support but no luck. Please help us resolve this issue, as it is occurring in our Production system.
    I also checked all the period settings under Data Management > Setup > Integration Setup > Global Mapping and Source Mapping, and they all look correct.
    Also, it is only happening to one ledger; all the other ledgers are working fine without any issues.
    Thanks

    Hi,
    there are some Support documents related to this issue.
    I would suggest you have a look at them.
    Regards

  • Unable to load Mini Bridge or other extensions (PS CC 14.2 x64)

    Hello, a few weeks ago I ran into update problems with an update to Adobe CSXS Infrastructure 4, and subsequent updates to Creative Cloud app, Photoshop and Illustrator. The updates would not complete and I could not finish the update to Creative Cloud app to sign in. With help from Adobe online tech, the Creative Cloud app was removed, the Adobe Cleaner run, and CC app reinstalled. Once signed in, I uninstalled Illustrator, downloaded and reinstalled it. After that I was able to finish with the Photoshop CC update.
    I've used PS CC quite a bit since then and all seems well, except that I am not able to load Mini Bridge or another 3rd-party add-in product. The messages I receive are:
    Mini Bridge could not be launched because the Mini Bridge extension is not installed
    3rd party product - onOne - Cannot complete command because the extension could not be loaded
    Today (1/17) I applied the 14.2 x64 update hoping it would correct the problem with extensions but it did not. PS CC 14.2 seems to work fine otherwise.
    Any help appreciated.
    Tina

    I had the same problem. Adobe had me install three new CEPServiceManager4 folders; the third try worked. Here is what I wrote back to them:
    When I ran Help > Updates there were two updates, the CSXS update and Photoshop CC 14.2. Both updates installed; however, the Photoshop extension Mini Bridge was missing.
    I was able to copy the folder "Adobe Mini Bridge 3" from my renamed "C:\Program Files (x86)\Common Files\Adobe\CEPServiceManager4 old\extensions" to the new "C:\Program Files (x86)\Common Files\Adobe\CEPServiceManager4\extensions" folder to recover the Mini Bridge extension.

  • How to delete the source table rows once loaded in Destination Table in SSIS?

    Database = kssdata
    Table = Userdetails, having 1000 rows
    Using SSIS:
    Taking an
    OLE DB Source ----------------> OLE DB Destination
    I am taking 200 rows from the source table and loading them into the destination table at a time.
    Constraint: once 200 rows have been exported to the destination table, those 200 rows are deleted from the source table.
    The task is repeated until all the records in the source table have been loaded into the destination table:
    after that I take another 200 rows from the source table and load them into the destination table.

    Provided you have a sequential primary key or an audit timestamp (datetime/date) column in the table, you can take an approach like this:
    1. Add an Execute SQL Task connecting to the source db with the below statement
    SELECT COUNT(*) FROM table
    Store the result in a variable
    2. Have another variable and set it to the below expression
    (@[User::CountVariable]/200) + (@[User::CountVariable]%200 >0? 1:0)
    by setting EvaluateAsExpression to true. Here CountVariable is the variable created in the previous step.
    3. Have a For Loop container with the below settings
    InitExpression
    @NewVariable = @CounterVariable
    EvalExpression
    @NewVariable > 0
    AssignExpression
    @NewVariable = @NewVariable - 1
    4. Add a Data Flow Task with an OLE DB Source and OLE DB Destination
    5. Use the source query as
    SELECT TOP 200 columns...
    FROM Table
    ORDER BY [PK | AuditColumn]
    Use the PK or audit column depending on which one is sequential.
    6. After the Data Flow Task, have an Execute SQL Task with the statement below
    DELETE t
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY PK) AS Rn
    FROM Table)t
    WHERE Rn <= 200
    This will make sure 200 records get deleted each time.
    Visakh

  • How can I load data into table with SQL*LOADER

    How can I load data into a table with SQL*Loader when the column data length is more than 255 bytes?
    When a column exceeds 255 bytes, the data cannot be inserted into the table by SQL*Loader.
    CREATE TABLE A (
    A VARCHAR2 ( 10 ) ,
    B VARCHAR2 ( 10 ) ,
    C VARCHAR2 ( 10 ) ,
    E VARCHAR2 ( 2000 ) );
    control file:
    load data
    append into table A
    fields terminated by X'09'
    (A , B , C , E )
    SQL*LOADER command:
    sqlldr test/test control=A_ctl.txt data=A.xls log=b.log
    datafile:
    column E is more than 255bytes
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)

    Check this out.
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/ch06.htm#1006961
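    In case that link moves: the usual cause is that SQL*Loader defaults character fields to a maximum of 255 bytes, so the long column needs an explicit length in the control file. For the table above, the field list would become something like:
    load data
    append into table A
    fields terminated by X'09'
    ( A,
      B,
      C,
      -- explicit length; without it SQL*Loader assumes CHAR(255)
      E CHAR(2000)
    )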

  • Using dbms_lob to load image into table

    I am trying to load a set of images from my DB drive into a table. This works fine when I try to load only 1 record. If I try to load more than 1 record, the first gets created but I get this error, and it doesn't load the images for the rest of them.
    ORA-22297:     warning: Open LOBs exist at transaction commit time
    Cause:     An attempt was made to commit a transaction with open LOBs at transaction commit time.
    Action:     This is just a warning. The transaction was committed successfully, but any domain or functional indexes on the open LOBs were not updated. You may want to rebuild those indexes.
    Am I missing something in the code that's needed?
    in_file UTL_FILE.FILE_TYPE;
    bf bfile;
    b blob;
    src_offset integer := 1;
    dest_offset integer := 1;
    CURSOR get_pics is select id from emp;
    BEGIN
      FOR x in get_pics LOOP
        BEGIN
          insert into stu_pic(id,student_picture)
          values(x.id,empty_blob()) returning student_picture into b;
          l_picture_uploaded := 'Y';
          bf := bfilename('INTERFACES',x.student_id || '.' || p_image_type);
          dbms_lob.fileopen(bf,dbms_lob.file_readonly);
          dbms_lob.open(b,dbms_lob.lob_readwrite);
          dbms_lob.loadBlobFromFile(b,bf,dbms_lob.lobmaxsize,dest_offset,src_offset);
          dbms_lob.close(b);
          dbms_lob.fileclose(bf);
        EXCEPTION when dup_val_on_index then null;
        END;
      END LOOP;
    END;

    There are two methods you can use.
    1. Create an external table with those images(BLOB column) and then use that external table to insert into another table.
    Demo as follows:
    This is my pdf files
    C:\Saubhik\Assembly\Books\Algorithm>dir *.pdf
    Volume in drive C has no label.
    Volume Serial Number is 6806-ABBD
    Directory of C:\Saubhik\Assembly\Books\Algorithm
    08/16/2009  02:11 PM         1,208,247 algorithms.pdf
    08/17/2009  01:05 PM        13,119,033 fci4all.com.Introduction_to_the_Design_and_Analysis_of_Algorithms.pdf
    09/04/2009  06:58 PM        30,375,002 sedgewick-algorithms.pdf
                   3 File(s)     44,702,282 bytes
                   0 Dir(s)   7,474,593,792 bytes free
    C:\Saubhik\Assembly\Books\Algorithm>
    This is my file with which I'll load the pdf files as BLOB:
    C:\Saubhik\Assembly\Books\Algorithm>type mypdfs.txt
    Algorithms.pdf,algorithms.pdf
    Sedgewick-Algorithms.pdf,sedgewick-algorithms.pdf
    C:\Saubhik\Assembly\Books\Algorithm>
    Now the actual code:
    SQL> /* This is my directory object */
    SQL> CREATE or REPLACE DIRECTORY saubhik AS 'C:\Saubhik\Assembly\Books\Algorithm';
    Directory created.
    SQL> /* Now my external table */
    SQL> /* This table contains two columns. 1.pdfname contains the name of the file
    DOC>   and 2.pdfFile is a BLOB column contains the actual pdf*/ 
    SQL> CREATE TABLE mypdf_external (pdfname VARCHAR2(50),pdfFile BLOB)
      2         ORGANIZATION EXTERNAL (
      3           TYPE ORACLE_LOADER
      4            DEFAULT DIRECTORY saubhik
      5            ACCESS PARAMETERS (
      6              RECORDS DELIMITED BY NEWLINE
      7              BADFILE saubhik:'lob_tab_%a_%p.bad'
      8              LOGFILE saubhik:'lob_tab_%a_%p.log'
      9              FIELDS TERMINATED BY ','
    10              MISSING FIELD VALUES ARE NULL
    11               (pdfname char(100),blob_file_name CHAR(100))
    12              COLUMN TRANSFORMS (pdfFile FROM lobfile(blob_file_name) FROM (saubhik) BLOB)
    13            )
    14            LOCATION('mypdfs.txt')
    15         )
    16         REJECT LIMIT UNLIMITED;
    Table created.
    SQL> SELECT pdfname,DBMS_LOB.getlength(pdfFile) pdfFileLength
      2  FROM   mypdf_external;
    PDFNAME                                            PDFFILELENGTH
    Algorithms.pdf                                           1208247
    Sedgewick-Algorithms.pdf                                30375002
    SQL>
    Now you can use this table for any operation very easily, even for loading into another table!
    2. Use of DBMS_LOB like this
    /* Loading a image Winter.jpg in the BLOB column as BLOB!*/
    DECLARE
      v_src_blob_locator BFILE := BFILENAME('SAUBHIK', 'Winter.jpg');
      v_amount_to_load   INTEGER := 4000;
      dest_lob_loc BLOB;
    BEGIN
      --Insert a empty row with id 1
      INSERT INTO test_my_blob_clob VALUES(1,EMPTY_BLOB(),EMPTY_CLOB())
       RETURNING BLOB_COL INTO dest_lob_loc;
      DBMS_LOB.open(v_src_blob_locator, DBMS_LOB.lob_readonly);
      v_amount_to_load := DBMS_LOB.getlength(v_src_blob_locator);
      DBMS_LOB.loadfromfile(dest_lob_loc, v_src_blob_locator, v_amount_to_load);
      DBMS_LOB.close(v_src_blob_locator);
      COMMIT;
    --id=1 is created with Winter.jpg populated in BLOB_COL and CLOB_COL is empty.  
    END;
    Now use this code to create a procedure with a parameter and call it in a loop.
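    A rough sketch of that last suggestion (the procedure name and the file-naming convention are assumptions): wrap the DBMS_LOB.LOADFROMFILE logic in a procedure that takes the id and file name, then call it in a loop and commit once at the end.
    CREATE OR REPLACE PROCEDURE load_student_pic (p_id IN NUMBER, p_file IN VARCHAR2) IS
      v_src  BFILE := BFILENAME('INTERFACES', p_file);
      v_dest BLOB;
    BEGIN
      INSERT INTO stu_pic (id, student_picture)
      VALUES (p_id, EMPTY_BLOB())
      RETURNING student_picture INTO v_dest;
      DBMS_LOB.open(v_src, DBMS_LOB.lob_readonly);
      DBMS_LOB.loadfromfile(v_dest, v_src, DBMS_LOB.getlength(v_src));
      DBMS_LOB.close(v_src);
    END;
    /
    BEGIN
      FOR x IN (SELECT id FROM emp) LOOP
        load_student_pic(x.id, x.id || '.jpg');  -- file naming is an assumption
      END LOOP;
      COMMIT;  -- one commit after all LOBs are loaded and closed
    END;
    /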

  • SQL *Loader and External Table

    Hi,
    Can anyone tell me the difference between SQL*Loader and external tables?
    Under what conditions would we use SQL*Loader versus an external table?
    Thanx

    External tables are accessible from SQL, which generally simplifies life if the data files are physically located on the database server since you don't have to coordinate a call to an external SQL*Loader script with other PL/SQL processing. Under the covers, external tables are normally just invoking SQL*Loader.
    SQL*Loader is more appropriate if the data files are on a different server, or if it is easier to call an executable rather than calling PL/SQL (e.g. if you have a batch file that runs on a server other than the database server, which needs to FTP a data file from an FTP server and then load the data into Oracle).
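    As a small illustration of the difference (the directory object, file, and column list are made up), the same CSV can be exposed to SQL directly through an external table instead of a separate SQL*Loader run:
    -- Assumed: a directory object DATA_DIR pointing at the folder holding emp.csv
    CREATE TABLE emp_ext (
      empno NUMBER,
      ename VARCHAR2(30),
      sal   NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('emp.csv')
    )
    REJECT LIMIT UNLIMITED;
    -- Because it is just a table to SQL, it can feed other processing directly:
    INSERT INTO emp SELECT empno, ename, sal FROM emp_ext;
    With SQL*Loader you would instead run sqlldr with an equivalent control file from wherever the data file lives.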
    Justin

  • Sql*loader and nested tables

    I'm having trouble loading a nested table via sqlldr in Oracle 10g and was hoping someone could point me in the right direction. I keep getting the following error:
    SQL*Loader-403: Referenced column not present in table mynamespace.mytable
    Here's an overview of my type and table definitions:
    create type mynamespace.myinfo as object (
      i_name varchar2(64),
      i_desc varchar2(255)
    );
    create TYPE mynamespace.myinfotbl as TABLE of mynamespace.myinfo;
    create table mynamespace.mytable (
      Info mynamespace.myinfotbl,
      note varchar2(255)
    )
    NESTED TABLE Info STORE AS mytable_nested_tab;
    My control file looks like this:
    load data
    infile 'mydatafile.csv'
    insert into table mynamespace.mytable
    fields terminated by ',' optionally enclosed by '"'
    TRAILING NULLCOLS
    Info nested table count(6)
    Info column object
    i_name char(64),
    i_desc char(255)
    note
    Example mydatafile.csv would be something like:
    lvl1,des1,lvl2,des2,lvl3,des3,lvl4,des4,lvl5,des5,lvl6,des6,a test data set
    I can't figure out why sqlldr keeps rejecting this control file. I'm using 'direct=false' in my .par file.
    Any hints?

    I just noticed that my email is wrong. If you can help, please send email to [email protected]
    thanks.
