Importing a Partitioned Table with 10 Million Records

I've been trying to import from a dump file using:
imp system/######@******** fromuser=fusr touser=tusr file=/f1/f2/expfl.dbf log=/o1/implg.log grants=N &
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
The dump contains a table 'Tab_Mil_Rec' with almost 10 million records and 10 partitions.
This was done on Oracle 9i, on Solaris 9.
The problem is that the process abruptly ends at 'Tab_Mil_Rec'. The table gets created but nothing is imported. I checked the log file: it has logged events up to this table, but nothing (not even errors or a termination message) after that. No errors are thrown at the OS level either, though I can't say for certain because this was run as a background job.
Can anybody guess what went wrong, and what's the next step?

Hi,
Have you tried importing this table partition by partition?
Cheers
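For example, classic imp accepts the table:partition notation in the TABLES parameter, so each partition can be loaded in its own run. A minimal sketch, assuming the partitions are named P1 through P10 (an assumption; check the real names in DBA_TAB_PARTITIONS):
# P1 is a placeholder partition name - check DBA_TAB_PARTITIONS for the real ones
imp system/######@******** fromuser=fusr touser=tusr file=/f1/f2/expfl.dbf log=/o1/implg_p1.log grants=N tables=(Tab_Mil_Rec:P1)
Running one partition at a time narrows down where the import dies and keeps the rollback and sort work per run small; adding BUFFER and COMMIT=Y may also help with a segment this size.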

Similar Messages

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. Statistics were last gathered on 1/25/2007. Do I need to analyze it again? (Queries on this table run very slowly.)
    If I do need to analyze it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
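    A minimal sketch of that approach, assuming a table SCOTT.BIG_TAB and a backup statistics table named STATS_BACKUP (both names hypothetical):
    BEGIN
       -- back up the current statistics in case a plan regresses
       -- (SCOTT/BIG_TAB/STATS_BACKUP are placeholder names)
       DBMS_STATS.CREATE_STAT_TABLE (ownname => 'SCOTT', stattab => 'STATS_BACKUP');
       DBMS_STATS.EXPORT_TABLE_STATS (ownname => 'SCOTT', tabname => 'BIG_TAB',
                                      stattab => 'STATS_BACKUP');
       -- gather only global statistics, with a small sample
       DBMS_STATS.GATHER_TABLE_STATS (ownname          => 'SCOTT',
                                      tabname          => 'BIG_TAB',
                                      granularity      => 'GLOBAL',
                                      estimate_percent => 1);
    END;
    /
    If a plan does go south, DBMS_STATS.IMPORT_TABLE_STATS can restore the saved statistics.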
    Justin

  • ORA-00604 ORA-00904 When query partitioned table with partitioned indexes

    Got ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
    The query runs fine against the partitioned table without the partitioned indexes.
    Here is the query.
    SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
    al27.accessory_code
    FROM vlc.veh_vdc_accessorization_fact al1,
    vlc.vdc_dim al2,
    vlc.model_attribute_dim al7,
    vlc.ppo_list_dim al18,
    vlc.ppo_list_indiv_type_dim al23,
    vlc.accy_type_dim al27
    WHERE ( al2.vdc_id = al1.vdc_location_id
    AND al7.model_attribute_id = al1.model_attribute_id
    AND al18.mydppolist_id = al1.ppo_list_id
    AND al23.mydppolist_id = al18.mydppolist_id
    AND al23.mydaccytyp_id = al27.mydaccytyp_id
    AND ( al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
    AND al2.vdc_name IN
    ('PORT OF BALTIMORE',
    'PORT OF JACKSONVILLE - LEXUS',
    'PORT OF LONG BEACH',
    'PORT OF NEWARK',
    'PORT OF PORTLAND')
    AND al27.accessory_code IN ('42', '43', '44', '45')))
    GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code

    I would recommend that you post this in the following OTN forum:
    General Database Discussions
    and perhaps in:
    Warehouse Builder
    The Oracle OLAP forum typically does not cover general data warehousing topics.

  • Performance between two partitioned tables with different structures

    Hi,
    I would like to know if there is a difference between two partitioned tables with different structures in terms of performance (access, queries, inserts, updates).
    Let me explain in detail:
    I have a table that stores one value every 10 minutes of the day (so we have 144 values (24*6) for the whole day), with the corresponding id.
    Here is the structure :
    | Table T1 |
    + id PK |
    + date PK |
    + sample1 |
    + sample2 |
    + ... |
    + sample144 |
    The table is partitioned on the date column, with one partition per month. The primary key is on the columns (id, date).
    There is an additional index on the column (id) (is it useful?).
    I would like to know if it would be better to have a table with just (id, date, value), so that one row in the current table becomes 144 rows in the new table. The partitioning would still be based on the same columns (id, date), with the associated index.
    What are the gains or losses in performance with this new structure (access, DML, storage)?
    I discussed it with the Java developers and they say the second structure is simpler to manage in their code.
    Oracle version : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Thanks & Regards
    From France
    Oliver

    I mean storage in tablespaces and datafiles on disk.
    Can you please justify, with concrete arguments, why the two structures are equivalent (except for inserting data into T(id, date, value))? I have to make a report.

    I didn't say anything like "the two structures are equivalent (except for inserting data into T(id, date, value))". I said:
    About structure: TABLE1(id, date, value) is better than TABLE1(id, date, sample1, ..., sample144) because:
    1) Oracle has a restriction on the number of columns. OK, you can have 144 columns now, but what will you do in the future if you need more than 1000 columns?
    2) There are restrictions on table compression (table compression is not supported for tables with more than 255 columns).
    3) Storing values of the same type in different columns is bad practice.
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#i4383
    I remember seeing a Tom Kyte article about this, but I can't find it now, sorry. If I find it, I will post it here.
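    For illustration, a minimal sketch of the narrow structure with monthly range partitioning (all names hypothetical; the date column is renamed because DATE is a reserved word):
    CREATE TABLE t1_narrow          -- placeholder name for the (id, date, value) design
    (   id           NUMBER NOT NULL,
        sample_date  DATE   NOT NULL,
        value        NUMBER,
        CONSTRAINT t1_narrow_pk PRIMARY KEY (id, sample_date)
    )
    PARTITION BY RANGE (sample_date)
    (   PARTITION ptn_201301 VALUES LESS THAN (TO_DATE('201302','YYYYMM')),
        PARTITION ptn_201302 VALUES LESS THAN (TO_DATE('201303','YYYYMM'))
    );
    On 11.2, adding INTERVAL (NUMTOYMINTERVAL(1,'MONTH')) after the PARTITION BY RANGE clause makes Oracle create new monthly partitions automatically as data arrives.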

  • Source table with 10 records, target table with 15: how do I delete unmatched records from the target with the Table Comparison transform?

    I have a source table with 10 records and a target table with 15 records. My question: using the Table Comparison transform, I want to delete the unmatched records from the target table. How is that done?

    Hi Kishore,
    First, identify the deleted records by selecting the "Detect deleted rows from comparison table" option in the Table Comparison transform.
    Then use a Map Operation with input row type "delete" and output row type "delete" to delete those records from the target table.

  • Migrating a new partition table with transportable tablespace

    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespaces to migrate the data over to a new environment. My question is: if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?

    user564785 wrote:
    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespaces to migrate the data over to a new environment. My question is: if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?

    Yes, why not:
    1) Create a table as CTAS from the 2012 data in a new tablespace on the source
    2) Transport the tablespace
    3) Add a partition to the existing partitioned table, or exchange partitions
    Oracle has also documented this procedure:
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007549
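    A minimal sketch of step 3 on the target side, assuming the transported table arrived as TAB_2012 and the partitioned table is PART_TAB (all names and the boundary value hypothetical):
    -- create an empty partition to receive the transported data
    ALTER TABLE part_tab ADD PARTITION p_2012
       VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD'));
    -- swap the transported table's segment into the partition (no data movement)
    ALTER TABLE part_tab EXCHANGE PARTITION p_2012 WITH TABLE tab_2012
       INCLUDING INDEXES WITHOUT VALIDATION;
    EXCHANGE PARTITION only swaps data dictionary entries, so it is fast regardless of the data volume.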

  • Select max date from a table with multiple records

    I need help writing an SQL to select max date from a table with multiple records.
    Here's the scenario. There are multiple SA_IDs repeated with various EFFDT (dates). I want to retrieve the most recent effective date so that the SA_ID is unique. Looks simple, but I can't figure this out. Please help.
    SA_ID CHAR_TYPE_CD EFFDT CHAR_VAL
    0000651005 BASE 15-AUG-07 YES
    0000651005 BASE 13-NOV-09 NO
    0010973671 BASE 20-MAR-08 YES
    0010973671 BASE 18-JUN-10 NO

    Hi,
    Welcome to the forum!
    Whenever you have a question, post a little sample data in a form that people can use to re-create the problem and test their ideas.
    For example:
    CREATE TABLE     table_x
    (     sa_id          NUMBER (10)
    ,     char_type     VARCHAR2 (10)
    ,     effdt          DATE
    ,     char_val     VARCHAR2 (10)
    );
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0000651005, 'BASE',    TO_DATE ('15-AUG-2007', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0000651005, 'BASE',    TO_DATE ('13-NOV-2009', 'DD-MON-YYYY'), 'NO');
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0010973671, 'BASE',    TO_DATE ('20-MAR-2008', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0010973671, 'BASE',    TO_DATE ('18-JUN-2010', 'DD-MON-YYYY'), 'NO');
    COMMIT;
    Also, post the results that you want from that data. I'm not certain, but I think you want these results:
         SA_ID LAST_EFFD
        651005 13-NOV-09
      10973671 18-JUN-10
    That is, the latest effdt for each distinct sa_id.
    Here's how to get those results:
    SELECT    sa_id
    ,         MAX (effdt)    AS last_effdt
    FROM      table_x
    GROUP BY  sa_id
    ;
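    If the whole latest row is needed (char_val as well, for instance), an analytic function avoids joining the table back to itself. A minimal sketch against the same table_x:
    SELECT sa_id, char_type, effdt, char_val
    FROM  (SELECT x.*,
                  -- rank rows within each sa_id, newest effdt first
                  ROW_NUMBER () OVER (PARTITION BY sa_id ORDER BY effdt DESC) AS rn
           FROM   table_x x)
    WHERE  rn = 1;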

  • Oracle 11.2 - Perform parallel DML on a non-partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate the new Oracle 12c enhancements to SecureFiles, I tried to use PDML statements on a non-partitioned table with a LOB column, on both Oracle 11g and Oracle 12c. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run in parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column...
    Thank you for your help.
    Michael

    Yes, I did. I tried with FORCE PARALLEL DML, and these are the results on my 12c DB, with the non-partitioned table and the SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in parallel.
    I tried with another statement :
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems, but not the INSERT AS SELECT!
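    As a hedged side note: when a DELETE stays serial on a non-partitioned table with a LOB column, DBMS_PARALLEL_EXECUTE can approximate parallelism by running serial deletes over rowid chunks concurrently. A minimal sketch (task name, chunk size, and job count are arbitrary examples):
    BEGIN
       DBMS_PARALLEL_EXECUTE.CREATE_TASK (task_name => 'del_t1');
       -- split T1 into rowid ranges of roughly 10000 rows each
       DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (task_name   => 'del_t1',
                                                     table_owner => USER,
                                                     table_name  => 'T1',
                                                     by_row      => TRUE,
                                                     chunk_size  => 10000);
       -- run the chunked delete, 8 jobs at a time
       DBMS_PARALLEL_EXECUTE.RUN_TASK (
          task_name      => 'del_t1',
          sql_stmt       => 'DELETE FROM t1 WHERE rowid BETWEEN :start_id AND :end_id',
          language_flag  => DBMS_SQL.NATIVE,
          parallel_level => 8);
       DBMS_PARALLEL_EXECUTE.DROP_TASK (task_name => 'del_t1');
    END;
    /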

  • How to spool a table with 1,561,828 records to an Excel sheet

    How do I spool a table with 1,561,828 records to an Excel sheet?
    I think Excel has only a 65K row limit?
    COUNT(*)
    1561828
    I am using a Windows box... any suggestions?

    Raman wrote:
    means excel 2007 can hold 15,61,828 records ? can i give like spool filename.xls?

    You can name the spool file anything you want, but surely you realize that naming it 'filename.xls' doesn't make it an xls file. A name is just a name. There are industry standards on certain names indicating certain file formats, but the name doesn't make the file that format.
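    A more practical route is usually to spool comma-separated text, which Excel opens directly. A minimal SQL*Plus sketch (the file path, table, and columns are placeholders):
    SET PAGESIZE 0
    SET LINESIZE 32767
    SET TRIMSPOOL ON
    SET FEEDBACK OFF
    SET COLSEP ','
    SPOOL c:\temp\bigtable.csv
    -- big_table and its columns stand in for the real table
    SELECT col1, col2, col3 FROM big_table;
    SPOOL OFF
    Bear in mind that Excel releases before 2007 cap a worksheet at 65,536 rows, so a 1.5-million-row table would still need to be split across several files or opened in another tool.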

  • Importing into partitioned table from unpartitioned table

    I have taken an export of an unpartitioned table using Data Pump.
    I want to import the data from these dump files into a partitioned table using impdp.
    Can you please provide me the basic script? I tried running the following and got this error:
    noarg impdp $SCHEMA/$pw TABLES=\(ITPCS.ITPCS_BOM_GRT_SCAN\) EXCLUDE=STATISTICS parallel=4 dumpfile=dumpdir:exptable%U.dmp logfile=logdir:imptab2.log job_name=imptables_3 content=DATA_ONLY EXCLUDE=constraint,index,materialized_view
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    ORA-31693: Table data object "ITPCS"."ITPCS_BOM_GRT_SCAN" failed to load/unload and is being skipped due to error:
    ORA-00942: table or view does not exist
    ORA-06512: at "SYS.KUPD$DATA", line 1167
    ORA-14400: inserted partition key does not map to any partition
    Job "SYSTEM"."IMPTABLES_3" completed with 1 error(s) at 08:02

    pankaj2086 wrote:
    I have taken an export of an unpartitioned table using Data Pump. I want to import the data from these dump files into a partitioned table using impdp. Can you please provide me the basic script? I tried running the following and got this error:
    noarg impdp $SCHEMA/$pw TABLES=\(ITPCS.ITPCS_BOM_GRT_SCAN\) EXCLUDE=STATISTICS parallel=4 dumpfile=dumpdir:exptable%U.dmp logfile=logdir:imptab2.log job_name=imptables_3 content=DATA_ONLY EXCLUDE=constraint,index,materialized_view
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    ORA-31693: Table data object "ITPCS"."ITPCS_BOM_GRT_SCAN" failed to load/unload and is being skipped due to error:
    ORA-00942: table or view does not exist

    What do you suppose the ORA-00942 means? Looks pretty self-explanatory to me. Have you looked it up? Have you verified that the target table exists in the target database?

    ORA-06512: at "SYS.KUPD$DATA", line 1167
    ORA-14400: inserted partition key does not map to any partition

    And if the target table does exist, what do you suppose the ORA-14400 means? Looks pretty self-explanatory to me. Have you looked it up?
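    For what it's worth, ORA-14400 means some rows carry partitioning-key values that fall outside every partition defined on the target table. One hedged sketch of a fix, assuming the target is range-partitioned and simply lacks a catch-all partition (the partition name is a placeholder):
    -- p_max is a hypothetical catch-all partition name
    ALTER TABLE itpcs.itpcs_bom_grt_scan ADD PARTITION p_max VALUES LESS THAN (MAXVALUE);
    After that, the same impdp command can be re-run; alternatively, add real partitions covering the key range of the exported data.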

  • Partitioning table with sequence made in java base 36

    Hi, sorry for my poor English.
    I would like to partition a big table, with more than 5 million rows, on a sequence generated in Java in base 36.
    Thanks in advance for your help.
    Regards

    Is this sequence stored in a column in the database?
    How do you want to partition the table? RANGE? HASH? Or is there some sort of list partitioning scheme you're thinking of?
    If the table already exists, you know that you're going to have to re-create the table as a partitioned table, re-create appropriate constraints, indexes, triggers, etc. and move all the data over, right? Online table redefinition can hide some of this complexity from you (though it may well introduce other sorts of complexity).
    Justin
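    If the aim is just to spread rows evenly and the base-36 value is stored as a string column, hash partitioning is the usual fit, since range order on a base-36 string is rarely meaningful. A minimal sketch (all names and sizes hypothetical):
    CREATE TABLE big_tab                 -- placeholder table
    (   seq36   VARCHAR2(13) NOT NULL,   -- the base-36 sequence value from Java
        payload VARCHAR2(100)
    )
    PARTITION BY HASH (seq36) PARTITIONS 8;
    For an existing table, DBMS_REDEFINITION can move the data into a partitioned copy online, along the lines Justin describes above.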

  • Large partitioned tables with WM

    Hello
    I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch into a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would do with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    Hello Ben
    Thank you for your reply.
    The table structure we have is like so:
    CREATE TABLE hdr
    (   pk_id               NUMBER PRIMARY KEY,
        customer_id         NUMBER REFERENCES customer,
        entry_type          NUMBER NOT NULL
    );
    CREATE TABLE dtl_daily
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_200709
            VALUES LESS THAN (TO_DATE('200710','YYYYMM'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_200710
            VALUES LESS THAN (TO_DATE('200711','YYYYMM'))
            TABLESPACE x COMPRESS
    );
    CREATE TABLE dtl_hourly
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        active_hour         NUMBER NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_20070901
            VALUES LESS THAN (TO_DATE('20070902','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070902
            VALUES LESS THAN (TO_DATE('20070903','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070903
            VALUES LESS THAN (TO_DATE('20070904','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        ... -- and so on, one partition per day for 20 years
    );
    The hdr table holds one or more rows for each customer and has its own synthetic key generated for every entry, as there can be multiple rows with the same entry_type for a customer. There are two detail tables, daily and hourly, which hold detail data at those two granularities. Some customers require hourly detail, in which case the hourly table is populated and the daily table is populated by aggregating the hourly data. Other customers require only daily data, in which case the hourly table is not populated.
    At the moment, changes to customer data require that the content of these tables is rebuilt for that customer. This rebuild is done every night for the changed customers, and I want to change this to a near-real-time rebuild. The rebuild involves deleting all existing entries from the three tables for the customer and then re-inserting the new set using new synthetic keys. If we do make this near real time, we need to be able to provide a snapshot of the data as of close of business every day, and we need to be able to report as of a point in time up to 10 days in the past.
    For any one customer, they may have rows in the hourly table that go out 20 years at an hourly granularity, but once the active date has passed (by 10 days), we no longer need to keep them. This is why we were considering partitioning, as it gives us a simple way of dropping off old data and, as a nice side effect, helps to improve performance of queries that are looking for active data between a range of dates (which is most of them).
    I did have a look at the idea of savepoints, but I wasn't sure it would be efficient. So in this case, would the idea be that we don't partition the table but instead, at close of business every day, we create a savepoint like "savepoint_20070921", and instead of using dbms_wm.GotoDate we would use dbms_wm.GotoSavepoint? Then every day we would do:
    DBMS_WM.DeleteSavepoint(
       workspace                   => 'LIVE',
       sp_name                     => 'savepoint_20070910', -- 10 days ago
       compress_view_wo_overwrite  => TRUE);
    DBMS_WM.CompressWorkspace(
       workspace                   => 'LIVE',
       compress_view_wo_overwrite  => TRUE,
       firstSP                     => 'savepoint_20070911'); -- the new oldest savepoint
    Is my understanding correct?
    David

  • Error in import of partitioned table

    I have a partitioned table in one schema. After taking an export, when I try to import it into another database that does not have the same partition tablespaces, it gives the error:
    ORA-00959: tablespace "PART_FY06" does not exist.
    How do I resolve it?

    Yes, Oracle is saying that tablespace PART_FY06 does not exist. Create this tablespace. Export and import are logical operations, and for a logical operation to succeed, the physical entities it depends on, such as tablespaces, must exist.
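    A minimal sketch of creating the missing tablespace on the target (the datafile path and size are placeholders):
    CREATE TABLESPACE part_fy06
       DATAFILE '/u01/oradata/db1/part_fy06_01.dbf' SIZE 500M AUTOEXTEND ON;  -- path/size hypothetical
    Classic imp has no tablespace remapping (Data Pump's impdp offers REMAP_TABLESPACE for that), so the other common workaround is to pre-create the table with the tablespace clauses adjusted and import with IGNORE=Y.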

  • Database Adapter: cannot access table with complex record type as columns

    Hi all,
    I cannot perform any operations on a table that has columns with complex record type.
    I have created a table to store purchase order details.
    Sample script:
    CREATE TYPE XX_CUST_INFO_TYP AS OBJECT
    (   ssn    VARCHAR2(20),
        rating NUMBER(15)
    );
    CREATE TYPE XX_ITEM_TYP AS OBJECT
    (   item_name  VARCHAR2(20),
        unit_price NUMBER(15),
        quantity   NUMBER(15)
    );
    CREATE TABLE XX_PORDER (cust XX_CUST_INFO_TYP, porder XX_ITEM_TYP);
    When I try to access the table XX_PORDER in JDeveloper through a Database Adapter, I receive the error
    "some tables contain columns that are not recognized by the database adapter"
    1) So in this case, how can such tables with complex types be included?
    Also, check out this scenario:
    1. Add a table through a database adapter.
    2. Drop the table in the backend.
    3. I can still see the table and its structure in the Database Adapter wizard even after restarting JDeveloper. How is that possible?
    These are some really interesting scenarios to experiment with. Please share your ideas on this.
    Thanks all!

    Hi Hem,
    for a select you could select against a view. And for inserts you could create a stored procedure. They support complex types since 10.1.2. Complex types support in tables/views was added for 11 (next major release).
    You might be able to use PureSQL as a workaround too, i.e.
    insert into XX_PORDER values (XX_CUST_INFO_TYP(?,?), XX_ITEM_TYP(?, ?, ?))
    As for your other problem, in 10.1.2/10.1.3 the DBAdapter wizard sits on top of the Jdev Offline Tables and TopLink Mapping Workbench components. When you remove a table in the wizard it won't delete the Offline DB component. It was added by the wizard, but afterwards it is public to the entire Jdev project. You must remove it from Jdev yourself. This has been improved for the next major release too, no artifacts from underlying components are created.
    To remove it select:
    Offline DB Objects -> <schema> -> <table> and try File.. Erase From Disk.
    Thanks
    Steve
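    A minimal sketch of the view-plus-procedure workaround described above (the view and procedure names are hypothetical; the types and table are from the original post):
    -- flatten the object columns for SELECT access
    CREATE OR REPLACE VIEW xx_porder_flat AS
    SELECT p.cust.ssn          AS cust_ssn,
           p.cust.rating       AS cust_rating,
           p.porder.item_name  AS item_name,
           p.porder.unit_price AS unit_price,
           p.porder.quantity   AS quantity
    FROM   xx_porder p;

    -- wrap the insert so the adapter only sees scalar parameters
    CREATE OR REPLACE PROCEDURE insert_porder
    (   p_ssn   IN VARCHAR2, p_rating IN NUMBER,
        p_item  IN VARCHAR2, p_price  IN NUMBER, p_qty IN NUMBER ) AS
    BEGIN
       INSERT INTO xx_porder
       VALUES (XX_CUST_INFO_TYP(p_ssn, p_rating),
               XX_ITEM_TYP(p_item, p_price, p_qty));
    END;
    /
    The Database Adapter can then target the view for reads and the procedure for writes.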

  • Dynamic Table with Random Records

    What I am trying to do is select random records from a table and display them in a dynamic table with max columns set to 3, so the 4th record starts on a new row. Below is what I have right now; it works to randomly pick records but has no function to set columns in a table. If there is an easier way, feel free to let me know. I have tried various ways to do this but none seem to work.
    <CFQUERY NAME="getItems" DATASOURCE="absi">
    SELECT catfit.*, modcats.*, prodmat.*, prod.*
    FROM catfit, modcats, prodmat, prod
    WHERE prodmat.prodid = catfit.prodid
    AND catfit.catid = modcats.catid
    ORDER BY modl ASC
    </CFQUERY>
    <cfif getItems.recordCount>
    <cfset showNum = 3>
    <cfif showNum gt getItems.recordCount>
    <cfset showNum = getItems.recordCount>
    </cfif>
    <cfset itemList = "">
    <cfloop from="1" to="#getItems.recordCount#" index="i">
    <cfset itemList = ListAppend(itemList, i)>
    </cfloop>
    <cfset randomItems = "">
    <cfset itemCount = ListLen(itemList)>
    <cfloop from="1" to="#itemCount#" index="i">
    <cfset random = ListGetAt(itemList, RandRange(1, itemCount))>
    <cfset randomItems = ListAppend(randomItems, random)>
    <cfset itemList = ListDeleteAt(itemList, ListFind(itemList, random))>
    <cfset itemCount = ListLen(itemList)>
    </cfloop>
    <cfloop from="1" to="#showNum#" index="i">
    <cfoutput>
    <table width="205" border="0" align="left" cellpadding="0" cellspacing="0">
    <tr>
    <td width="235" height="116"><div align="center"><img src="../Products/ProductPictures/#getItems.pic[ListGetAt(randomItems, i)]#" width="100"></div></td>
    </tr>
    <tr>
    <td class="ProdTitle">#getItems.brand[ListGetAt(randomItems, i)]# #getItems.modl[ListGetAt(randomItems, i)]#</td>
    </tr>
    <tr>
    <td class="paragraph">$#getItems.prc[ListGetAt(randomItems, i)]#</td>
    </tr>
    <tr>
    <td><a href="../Products/details.cfm?prodid=#getItems.prodid[ListGetAt(randomItems, i)]#" class="linkcontact">more info</a></td>
    </tr>
    <tr>
    <td> </td>
    </tr>
    </table>
    </cfoutput>
    </cfloop>
    </cfif>

    To start a new row after 3 records, do something like this:
    <table>
    <tr>
    <cfoutput query="something">
    <td>#data#</td>
    <cfif currentrow mod 3 is 0>
    </tr><tr>
    </cfif>
    </cfoutput>
    </tr>
    </table>
    You should also know that your approach is very inefficient, in that you are bringing into ColdFusion more data than you need. First, you are selecting every field from four tables when you don't appear to be using all of them. Second, you are selecting every record when you only want to use 3. There are better ways out there, but they are db-specific, and you did not say which database you are using.
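    For example, if the datasource is Oracle, the random pick can be pushed into the query so only three rows ever reach ColdFusion. A hedged sketch, reusing the tables from the post above and only the columns the page displays:
    SELECT *
    FROM  (SELECT prod.prodid, prod.brand, prod.modl, prod.prc, prod.pic
           FROM   catfit, modcats, prodmat, prod
           WHERE  prodmat.prodid = catfit.prodid
           AND    catfit.catid = modcats.catid
           ORDER BY DBMS_RANDOM.VALUE)   -- shuffle rows inside the database
    WHERE rownum <= 3;
    MySQL and SQL Server have equivalents (ORDER BY RAND() LIMIT 3, and TOP 3 ... ORDER BY NEWID(), respectively).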
