Direct Path Insert into temporary table

Hi,
I made the following experiment (temp_idtable is a temp table with a single column id, V_PstSigLink is a complex View):
14:24:28 SQL> insert into temp_idtable select id from V_PstSigLink;
17084 rows created.
Statistics
24 recursive calls
52718 db block gets
38305 consistent gets
16 physical reads
4860544 redo size
629 bytes sent via SQL*Net to client
553 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
17084 rows processed
14:24:41 SQL> commit;
Commit complete.
14:25:29 SQL> insert /*+ APPEND*/into temp_idtable select id from V_PstSigLink;
17084 rows created.
Statistics
1778 recursive calls
775 db block gets
38847 consistent gets
40 physical reads
427408 redo size
613 bytes sent via SQL*Net to client
565 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
27 sorts (memory)
0 sorts (disk)
17084 rows processed
As you can see, with /*+ APPEND */ much less redo is generated.
How can that be?
1. The /*+ APPEND */ hint by itself doesn't mean that no redo should be generated, does it? For that, one has to declare the table NOLOGGING.
2. The documentation states that for temporary tables no redo is generated, just undo (and redo only for the undo).
Where is the difference coming from?
If I use direct path (APPEND) for a temp table, where will the table be populated - in the buffer cache or directly in a temporary segment on disk?
Balazs
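As an aside, per-statement redo can also be measured directly from the session statistics instead of autotrace. A minimal sketch, assuming SELECT access to the v$ views (run it before and after the insert and take the difference of the two values):

SELECT n.name, s.value
  FROM v$mystat s, v$statname n
 WHERE n.statistic# = s.statistic#
   AND n.name = 'redo size';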

Hi John,
I'm a little confused about your statement "Oracle does not cache data blocks until they are read from a table." I wanted to make a little experiment with the KEEP buffer cache to see whether I can 'pin' the content of a temp table. But before getting to pinning I noticed an interesting behavior. Please see this sqlplus log:
SQL> create global temporary table temp_ins(id NUMBER);
Table created.
SQL> insert into temp_ins values (1);
1 row created.
SQL> insert into temp_ins values (2);
1 row created.
SQL> insert into temp_ins values (3);
1 row created.
SQL> set autotrace on explain statistics;
SQL> select * from temp_ins;
ID
1
2
3
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEMP_INS'
Statistics
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
416 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
SQL>
I observed exactly the opposite of your statement! Oracle apparently cached the content of the temp table before it ever read from it! Could you please explain why? Now I'm going to try it with a normal table...
Balazs

Similar Messages

  • Nologging direct-path insert into an indexed table

    Hello,
    Does anyone have an idea how I can suppress generation of undo logs for direct-path insert into an indexed table on 11.2.0.1.0:
    CREATE TABLE TBL(ID NUMBER) NOLOGGING;
    CREATE INDEX IDX ON TBL(ID) NOLOGGING;
    INSERT /*+ APPEND */ INTO TBL SELECT /*+ APPEND */ ROWNUM FROM ...; -- Source table has 400,000,000+ rows
    Regards,
    Angel Tsankov

Please do not post duplicates - "Why does Oracle not use direct-path insert when instructed to do so" - please continue the discussion in your original thread
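For context, a common pattern for minimizing the undo and redo generated by index maintenance during a bulk load is to mark the index unusable, load, and then rebuild it NOLOGGING. A sketch using the table and index names from the post (the source query is a stand-in, since the original was elided):

ALTER INDEX IDX UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

INSERT /*+ APPEND */ INTO TBL
SELECT ROWNUM FROM dual CONNECT BY LEVEL <= 100000;  -- stand-in source
COMMIT;

ALTER INDEX IDX REBUILD NOLOGGING;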

  • Create and insert into temporary table

    Dear all,
I want to create a temporary table, insert data into it from a SELECT statement, and order the data by a field in that table; at the end I want to drop the temporary table.
CURSOR rob_twin
IS
SELECT t.tarj, s.b24, date, status, fii, acnt, mbr, pviv, type
FROM p6.tarj t,
     p6.stb24 s,
     p6.cuet c
WHERE .......
-- now I want to create a table and insert the above data (from cursor rob_twin)
-- after that I need to order the data by t.tary from the new table
-- finally I want to drop this table
    Thank you for your help.
    Regards,
    Robert

    The point is in Oracle you do not create the temporary table at run-time, use it, and drop it again.
    But you can create a GTT (Global Temporary Table) once and then it stays in the data dictionary, just like a normal table. The "temporary" part is simply that the data is temporary, can only be seen in this one session, and will go away either on commit or when the session ends.
    So first you create a global temporary table with the columns you wish. Using either ON COMMIT DELETE ROWS or ON COMMIT PRESERVE ROWS you decide whether data in this GTT should survive across transactions or not.
    Your logic could then be to populate this GTT with data from your remote database using INSERT INTO gtt_table SELECT ... FROM ...@dblink
    Then work with that data in the GTT as you would with any normal local table
    And finally either you simply commit your transaction and the data in the GTT goes away (if you did ON COMMIT DELETE ROWS), or you just DELETE gtt_table.
    You do not continually create and drop a GTT. The table definition is permanent - it is only the data that is temporary.
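A minimal sketch of that flow, with hypothetical column names and join condition:

CREATE GLOBAL TEMPORARY TABLE gtt_table (
  tarj VARCHAR2(30),
  b24  NUMBER
) ON COMMIT DELETE ROWS;

INSERT INTO gtt_table (tarj, b24)
SELECT t.tarj, s.b24
  FROM p6.tarj t, p6.stb24 s
 WHERE t.id = s.id;                     -- hypothetical join condition

SELECT * FROM gtt_table ORDER BY tarj;  -- work with the data as with any table

COMMIT;                                 -- ON COMMIT DELETE ROWS empties the GTT again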
    Edit:
    PS: As Karthick stated above - for almost any usual purposes it is likely to be possible to solve the problem without having to use GTT's. Usually even with db_links it can be done with suitable SQL and joins without having to use a GTT. But if that turns out to be one of those rare occasions where that is hard - then the above is the way to go ;-)
    Edited by: Kim Berg Hansen on Sep 19, 2011 1:38 PM

  • Insert into temporary table

    I migrate procedures MS SQL Server to Oracle.
In MS SQL Server, the INSERT statement can take as its source the result set of a stored procedure or of a dynamic EXECUTE, in place of the VALUES clause. The construction is similar to INSERT/SELECT, but with EXEC instead of SELECT. The EXEC part must return exactly one result set, with types matching the table's columns. With a stored procedure we can pass parameters, use the EXEC('string') form, and even call remote procedures on other servers. By calling a remote procedure on another server that places data in a temporary table and then joining to the resulting data, we can construct distributed joins.
For example, I want to insert the results of the stored procedures sp_configure and proc_obj into a temporary table.
1)
INSERT #konfig
EXEC sp_configure
2)
CREATE PROCEDURE proc_test
    @Object_ID int
AS
SET XACT_ABORT ON
BEGIN TRAN
CREATE TABLE #testObjects ( Object_ID int NOT NULL )
INSERT #testObjects
EXEC proc_obj @Object_ID, 3, 1
COMMIT TRAN
RETURN(0)
GO
I don't know how to migrate code like this to Oracle. Please give examples in PL/SQL.
    Best regards.

    Hi Elena,
    Patryk is you :) ?
    http://www.orafaq.com/forum/t/46956/67658/
    Rgds.
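For reference, the closest common Oracle equivalent of INSERT ... EXEC is a function returning a collection, queried through TABLE(). A minimal sketch, with all names hypothetical:

CREATE TYPE t_obj_row AS OBJECT (object_id NUMBER);
/
CREATE TYPE t_obj_tab AS TABLE OF t_obj_row;
/
CREATE GLOBAL TEMPORARY TABLE test_objects (object_id NUMBER) ON COMMIT PRESERVE ROWS;

CREATE OR REPLACE FUNCTION get_objects (p_object_id IN NUMBER)
  RETURN t_obj_tab PIPELINED
IS
BEGIN
  FOR r IN (SELECT object_id FROM all_objects WHERE object_id > p_object_id) LOOP
    PIPE ROW (t_obj_row(r.object_id));
  END LOOP;
  RETURN;
END;
/
-- the T-SQL "INSERT #tab EXEC proc" then becomes an ordinary INSERT/SELECT:
INSERT INTO test_objects (object_id)
SELECT object_id FROM TABLE(get_objects(100));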

  • Forcing DIRECT PATH INSERT to go CONVENTIONAL.

According to Oracle, for a statement to avoid using direct-path insert it must fall under one of the following restrictions:
    Direct-path INSERT is subject to a number of restrictions. If any of these restrictions is violated, then Oracle Database executes conventional INSERT serially without returning any message, unless otherwise noted:
        *     You can have multiple direct-path INSERT statements in a single transaction, with or without other DML statements. However, after one DML statement alters a particular table, partition, or index, no other DML statement in the transaction can access that table, partition, or index.
        *      Queries that access the same table, partition, or index are allowed before the direct-path INSERT statement, but not after it.
        *      If any serial or parallel statement attempts to access a table that has already been modified by a direct-path INSERT in the same transaction, then the database returns an error and rejects the statement.
        *      The target table cannot be part of a cluster.
        *      The target table cannot contain object type columns.
    *      Direct-path INSERT is not supported for an index-organized table (IOT) if it is not partitioned, if it has a mapping table, or if it is referenced by a materialized view.
        *      Direct-path INSERT into a single partition of an index-organized table (IOT), or into a partitioned IOT with only one partition, will be done serially, even if the IOT was created in parallel mode or you specify the APPEND hint. However, direct-path INSERT operations into a partitioned IOT will honor parallel mode as long as the partition-extended name is not used and the IOT has more than one partition.
        *      The target table cannot have any triggers or referential integrity constraints defined on it.
        *      The target table cannot be replicated.
    *      A transaction containing a direct-path INSERT statement cannot be or become distributed.
    Are there any others that are not documented here? We have a vendor-based app and want to avoid DIRECT PATH INSERT and have it go CONVENTIONAL. We tried the TRIGGER approach, but that did not help at all.

Why do you want to force conventional?
Are you sure the application uses direct path?

  • How to insert table data into a temporary table

    Hi
Can anyone help me insert a table's data into a temporary table?
    Thanks
    Navin

    If you could provide a (simplified) example of the data you have and the output you're attempting to get, that would probably be quite helpful. I'm not sure that I understand exactly what you're trying to do here...
    1) It sounds like you know the structure of the result set you're trying to generate. So it would be possible to create a temporary table once (at the same time that you create all your other tables) and write procedural PL/SQL code that would step through the data, write data to the temp table, select the data out of the temp table, and return a REF CURSOR. That would tend not to be the way that an Oracle developer would do things (there are exceptions, of course), but it would work.
    2) I don't see any inherent problems in using sub-selects and inline views to do whatever aggregation you're trying to do on the secondary tables, which would allow you to get the output in a single query. For example, given an ORDERS table and an ORDER_DETAILS table,
SELECT o.customer_id, o.invoice_number, SUM( od.line_item_cost ) total_cost
  FROM orders o,
       order_details od
 WHERE o.order_id = od.order_id
 GROUP BY o.customer_id, o.invoice_number
3) If you do need to use procedural logic, I would tend to look into the use of pipelined table functions or to read the data into an in-memory collection and to manipulate and return that collection.
    Justin
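To illustrate Justin's third option, a minimal BULK COLLECT sketch reusing the hypothetical ORDERS table from the example above:

DECLARE
  TYPE order_tab_t IS TABLE OF orders%ROWTYPE;
  l_orders order_tab_t;
BEGIN
  -- read the data into an in-memory collection instead of a temp table
  SELECT * BULK COLLECT INTO l_orders
    FROM orders;
  FOR i IN 1 .. l_orders.COUNT LOOP
    NULL;  -- per-row manipulation would go here
  END LOOP;
END;
/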

  • Insert /*+ Append */ and direct-path INSERT

    Hi Guys
Does the insert /*+ APPEND */ hint cause Oracle 10g to use direct-path INSERT?
And if it does, is insert /*+ APPEND */ subject to the same restrictions as direct-path insert, such as "The target table cannot have any triggers or referential integrity constraints defined on it"?
    Thanks

    Dear,
Here below is a simple example showing the effect of an existing trigger on the APPEND hint.
    mhouri@mhouri> select * from v$version where rownum=1;
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production         
    mhouri@mhouri> create table b as select * from all_objects where 1 = 2;
Table created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
70986 rows created.
    mhouri@mhouri> select * from b;
    select * from b
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
    mhouri@mhouri> rollback;
Rollback complete.
The direct-path load did take place, as shown by the fact that I cannot select from the table before committing.
    mhouri@mhouri> create trigger b_trg before insert on b
      2  for each row
      3  begin
      4  null;
      5  end;
      6  /
Trigger created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
70987 rows created.
424 rows selected.
    mhouri@mhouri> select count(1) from b;
      COUNT(1)                                                                     
     70987
While in the presence of this trigger on the table, the APPEND hint has been silently ignored by Oracle. The fact that I can select from the table immediately after the insert has finished indicates that the rows were not inserted using a direct-path load.
    Best Regards
    Mohamed Houri

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. We have central to our data monitoring system an oracle database running Oracle Standard One (please don't laugh... I understand it is comical) licensing.
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collect in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now what we are observing is that the inserts into this table:
- Inserts are much slower for a "wider" cardinality of the sourceid values being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
- Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10GB of extra RAM per quarter to six months; we're at about 50GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in total Oracle costs, if that. This is going to be a huge pill for our company to swallow.
What I am looking for guidance / help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB-per-quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe 1000s per quarter, out of 2 million).
    Also, please I'd appreciate it if there were no mocking comments about using standard one up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make due with what we have. And all the credit in the world to oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using direct-path insert /*+ APPEND */ as suggested by one of the contributors to this thread. A direct-path load will not re-use any of the free space created by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; it means you will end up with what we call a right-hand index - in other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
    ALTER INDEX indexname COALESCE;
This coalesce should be investigated as something to do on a regular basis (maybe after each 80% delete). You seem to have several sourceids for one timestamp. If that is the case, you should think about compressing this index:
    CREATE INDEX indexname ON yourtable (sourceid, timestamp) COMPRESS;
or
    ALTER INDEX indexname REBUILD COMPRESS;
You will do this only once. Your index will have a smaller size and may be more efficient than it is now. The index compression will add extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri

  • Direct path insert

    Hi Friends,
If I use direct-path insert from a (LAN) remote table with 5 million rows using:
sql> insert /*+ append */ into EMP select * from EMP@dblink1;
1) Do I need to create a big rollback segment?
*Note that in direct-path SQL*Loader, no big rollback is needed.
2) Can I still undo or roll back the 5 million rows?
If I can, then it surely holds or needs a big rollback segment.
    Thanks a lot
    Message was edited by:
    [email protected]

From the same link, see the points below from Tom:
b) a direct path load always loads above the high water mark, since it is formatting and writing blocks directly to disk - it cannot reuse any existing space. Think about this - if you direct pathed an insert of a 100 byte row that loaded say just two rows - and you did that 1,000 times, you would be using at least 1,000 blocks (never reusing any existing space) - each with two rows. Now, if you did that using a conventional path insert - you would get about 70/80 rows per block in an 8k block database. You would use about 15 blocks. Which would you prefer?
c) you cannot query a table after direct pathing into it until you commit.
See point c. A direct-path insert only marks the block-header transaction flag at commit. If you roll back, then since the blocks were written above the HWM and no commit flag has been set yet, Oracle just needs to reset the table's HWM to its original position, and that's it.

  • Direct Path Inserts = By DBWR or by Shadow Process

    Hello guys,
if I use the APPEND hint (direct-path insert) - which process is writing the data to the datafile?
    In the documentation of oracle:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21dlins.htm#10778
"Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache."
OK, that is clear - but who is writing down the data? The shadow process handling the insert statement, or is it given to DBWR, which flushes the data directly to the datafile without putting it into the buffer cache?
    And another question regarding direct path inserts:
    http://sai-oracle.blogspot.com/2006/03/parallel-query-is-it-good-or-bad.html
"If you want to do direct path insert in parallel, you would block all other non direct path insert operations on that table. This is because direct path insert would append above the high water mark, and no other process is allowed to update the HWM until your operation is done."
If I insert data serially with direct-path insert... I also write data behind the High-HWM... why is it only locked in parallel mode?
And why does Oracle not check whether there is enough space in some other blocks below the High-HWM and use those blocks for "normal" inserts?
    Regards
    Stefan

    It's the server process that's responsible for writing direct path blocks to disk.
    DBWR only ever flushes out of the buffer cache.
    You do NOT write "data behind the HWM" in direct path mode, ever. Direct path works so fast because all you do is slam whole blocks of data onto disk ABOVE the high water mark. You can slam whole blocks down without worrying about what you're over-writing precisely because the blocks are above the HWM and thus cannot possibly be in use by anyone for anything. Under the HWM and you have to start worrying about locking and contention and stuff like that.
    Parallel operations (or simultaneous serial operations by two different users) inevitably have to block each other for this sort of thing: if I am busy writing to blocks above the HWM, great. But if you want to do it too... well, what's to stop us from over-writing each other's blocks?! Nothing, actually. So Oracle simply has to lock the entire table from any other direct path operation to make the situation manageable.

  • Direct path read caused by direct path insert?

    Oracle 9.2
    ========
    Consider this INSERT statement
    INSERT /*+ APPEND USE_HASH*/ INTO schema.tablename (...)
    SELECT * FROM view_tablename;
This statement takes about 30 minutes to complete.
If I look at v$session_wait, I can see that the session waits on db file scattered read (which is understandable). However, at the very end of the 30-minute wait, the v$session_wait view shows that it is waiting on direct path read. I want to understand how a direct-path INSERT (or, for that matter, a simple INSERT statement) can cause a direct path read wait. Can you show me some doc which explains this in more detail than the Oracle doc? Thanks in advance.

Hmm... OK, well, first, there's a problem with the hint specification.
The USE_HASH hint makes no sense as a hint to the insert statement. It only makes sense in the context of a select statement. Also, it should specify a table alias. So, you could have something like:
insert /*+ APPEND */ into some_tab
select /*+ use_hash(tab_alias) */ col1, col2, col3 from .....
    Now, as to the question of direct path read:
    The direct path read event indicates a disk sort. So, it's likely the source of the direct path read wait was due to sorting in the processing of the select statement. I can't imagine how the insert would cause direct path read.
    Hope that helps,
    -Mark

  • Direct Path Insert on Forms

    Hi,
I am using the following insert statement in my Forms application to do a direct-path insert.
    insert into /*+append*/ abc select * from xyz;
Redo should not be generated if you do a direct-path insert, but in the above case redo is being generated for the insert statement.
    I then created a procedure test_insert as below
    create procedure test_insert is
    begin
    insert into /*+append*/ abc select * from xyz;
    end;
I then replaced the insert statement with the above procedure on the form, and after that the redo for the insert statement stopped.
I was wondering whether we can use the APPEND hint for a direct-path insert on Forms?

    My point, though, is that if you're doing your hourly load into a separate table and building the index(es) on that table before moving the partition into the partitioned table, there is no risk of doing anything to screw up the partitioned table. It's also a very handy way to ensure that your load process doesn't interfere with your production data.
If I have understood your point right, what about the space required (tablespaces and indexes)? Trust me, my table at source gets populated at over 230 million records/day. If I use a staging table, I will have to maintain hourly partitions/indexes/tablespaces for it to hold that much data, won't I? And the data I move (say today's data) will be used by the user only after 4 days (because it is available in the source). We have provisioned our user interface to connect to 2 databases: one with 5 days of data (incl. sysdate) and the other an archive DB that holds 45 days worth of data.
    I wouldn't expect that a direct path load into a single partition of a partitioned table would invalidate local indexes on other partitions, but since I'd never do a direct path load into a single partition of a partitioned table, I've not tested this to make sure.
No problem, if I try this out I will share it in the forum. Thanks
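For illustration, a minimal sketch of the load-then-exchange pattern being discussed (all names, columns, and the stand-in data are hypothetical):

CREATE TABLE big_data (
  sourceid NUMBER,
  ts       DATE,
  val      NUMBER
)
PARTITION BY RANGE (ts) (
  PARTITION p_jan VALUES LESS THAN (DATE '2024-02-01'),
  PARTITION p_feb VALUES LESS THAN (DATE '2024-03-01')
);

CREATE INDEX big_data_ix ON big_data (sourceid, ts) LOCAL;

-- staging table with an identical shape, loaded and indexed away from production
CREATE TABLE stage_load (
  sourceid NUMBER,
  ts       DATE,
  val      NUMBER
);

INSERT /*+ APPEND */ INTO stage_load
SELECT LEVEL, DATE '2024-02-10', LEVEL
  FROM dual CONNECT BY LEVEL <= 1000;
COMMIT;

CREATE INDEX stage_load_ix ON stage_load (sourceid, ts);

-- swap the loaded segment into the target partition in one dictionary operation
ALTER TABLE big_data
  EXCHANGE PARTITION p_feb WITH TABLE stage_load
  INCLUDING INDEXES WITHOUT VALIDATION;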

  • How can I tell if direct-path insert is really being used?

I have a number of INSERT statements with /*+ APPEND */ hints that I suspect are not using direct-path insert processing. How can I tell (via the plan, via Enterprise Manager, whatever) whether direct-path is being used?
    If it is not being used, then I can research why that might be, and try to resolve the obstacles there. But if it is being used, then I want to know that so I can focus elsewhere.
    Thanks,
    Mike

    mtefft wrote:
    The question of whether direct-path is possible with multi-table inserts has been on my mind...
    Moreover, in another forum post, we have eyewitnesses that it has happened at least once:
    APPEND Hint in Multi-table insert
    So I think that post (giving examples of Explain plans with multi-table inserts successfully using direct-path) answers my question.
    Mike,
Thanks for that link. When I checked my example again I realised that it had a foreign key constraint between the two tables I was inserting into. (It was a demonstration of how to normalise an incoming denormalised address table, so it converted a flat table into address/address lines.) When I removed the constraint I got the multi-table insert.
However, I followed this up with a check on the execution plans, and for the 10.2.0.3 I was running I noted the following:
Plan reported after running the insert and selecting from dbms_xplan.display_cursor():
| Id  | Operation           | Name | Rows  | Bytes | Cost  |
|   0 | INSERT STATEMENT    |      |       |       |    30 |
|   1 |  MULTI-TABLE INSERT |      |       |       |       |
|   2 |   TABLE ACCESS FULL | T3   | 10000 |  1240K|    30 |
------------------------------------------------------------
Plan reported after explain plan / select from dbms_xplan.display():
| Id  | Operation           | Name | Rows  | Bytes | Cost  |
|   0 | INSERT STATEMENT    |      | 10000 |  1240K|    30 |
|   1 |  MULTI-TABLE INSERT |      |       |       |       |
|   2 |   DIRECT LOAD INTO  | T1   |       |       |       |
|   3 |   DIRECT LOAD INTO  | T2   |       |       |       |
|   4 |    TABLE ACCESS FULL| T3   | 10000 |  1240K|    30 |
------------------------------------------------------------
If you try to check for direct path inserts by looking at the in-memory plans, you might not see the direct load that is really happening. (Perhaps a check that both / all the target tables are locked with TM mode 6 may give you a clue.)
    Regards
    Jonathan Lewis
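Following up on the TM mode 6 idea, a minimal sketch of a check the inserting session could run before committing (assuming SELECT access to v$lock and dba_objects):

SELECT o.object_name, l.lmode
  FROM v$lock l, dba_objects o
 WHERE l.id1  = o.object_id
   AND l.type = 'TM'
   AND l.sid  = SYS_CONTEXT('USERENV', 'SID');
-- lmode = 6 (exclusive) on the target table is consistent with a direct-path insert;
-- a conventional insert normally takes mode 3 (row exclusive).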

  • Direct load insert  vs direct path insert vs nologging

Hello. I am trying to load data from table A (only 4 columns) to table B. Table B is new. I have 25 million records in table A. I have been debating between direct load insert, direct-path insert, and NOLOGGING. What is the difference between the three methods of data load? What is the best approach?

    Hello,
The fastest way to move data from table A to table B is by using direct-path insert with the NOLOGGING option turned on for table B. This produces minimal logging, and in case of disaster recovery you might not be able to recover the data in table B. A direct-path insert is the equivalent of loading data from a flat file using the direct load method. Generally, with the conventional method there are six phases to move your data from source (table, flat file) to target (table), but with direct path/load it is cut down to three; and if in addition you use the PARALLEL hint on the select and insert, you might get faster results.
INSERT /*+ APPEND */ INTO TABLE_B SELECT * FROM TABLE_A;
Regards
    Correction to select statement
    Edited by: OrionNet on Feb 19, 2009 11:28 PM
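A slightly fuller sketch of that approach (the degree of parallelism is an assumption, and parallel DML must be enabled in the session):

ALTER TABLE table_b NOLOGGING;
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(b, 4) */ INTO table_b b
SELECT /*+ PARALLEL(a, 4) */ *
  FROM table_a a;

COMMIT;  -- the session cannot query table_b before the commit (ORA-12838)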

  • Insert into multiple table view

I have a view over a multiple-table query and an INSTEAD OF trigger on the view that inserts into multiple tables. When I attempt to commit from an ADF Creation Form, I get the following error: ORA-01779: cannot modify a column which maps to a non key-preserved table ORA-06512: at line 1.
    Has anyone had success inserting into multiple tables via a view that has more than one table?
    Thanks,
    Lisa

    Lisa,
    Sounds like your instead-of trigger may not be being called and you are trying to insert directly into the view.
I did write a blog entry about using updatable views with ADF last year: http://stegemanoracle.wordpress.com/2006/03/15/using-updatable-views-with-adf/
    John
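For illustration, a minimal INSTEAD OF INSERT trigger sketch on a two-table view (the classic EMP/DEPT tables and all names here are hypothetical):

CREATE OR REPLACE VIEW v_emp_dept AS
  SELECT e.empno, e.ename, d.deptno, d.dname
    FROM emp e, dept d
   WHERE e.deptno = d.deptno;

CREATE OR REPLACE TRIGGER v_emp_dept_ioi
  INSTEAD OF INSERT ON v_emp_dept
  FOR EACH ROW
BEGIN
  -- insert the parent row first, then the child; adapt for existing keys
  INSERT INTO dept (deptno, dname)
  VALUES (:NEW.deptno, :NEW.dname);
  INSERT INTO emp (empno, ename, deptno)
  VALUES (:NEW.empno, :NEW.ename, :NEW.deptno);
END;
/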
