Nologging direct-path insert into an indexed table

Hello,
Does anyone have an idea how I can suppress undo generation for a direct-path insert into an indexed table on 11.2.0.1.0:
CREATE TABLE TBL(ID NUMBER) NOLOGGING;
CREATE INDEX IDX ON TBL(ID) NOLOGGING;
INSERT /*+ APPEND */ INTO TBL SELECT /*+ APPEND */ ROWNUM FROM ...; -- Source table has 400,000,000+ rows
Regards,
Angel Tsankov

Please do not post duplicates - "Why does Oracle not use direct-path insert when instructed to do so" - please continue the discussion in your original thread.
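A direct-path insert already generates very little undo for the table data itself; most of the undo (and redo) for a load like this comes from maintaining IDX. A common pattern, sketched below with a placeholder source_table standing in for the real 400M-row source, is to make the index unusable for the duration of the load and rebuild it afterwards:
ALTER INDEX idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- the default on 11.2
INSERT /*+ APPEND */ INTO tbl
SELECT ROWNUM FROM source_table;                  -- source_table is a placeholder
COMMIT;
ALTER INDEX idx REBUILD NOLOGGING;
With the index unusable, the APPEND insert skips index maintenance entirely, and rebuilding a NOLOGGING index generates only minimal undo and redo.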

Similar Messages

  • Direct Path Insert into temporary table

    Hi,
    I made the following experiment (temp_idtable is a temp table with a single column id, V_PstSigLink is a complex View):
    14:24:28 SQL> insert into temp_idtable select id from V_PstSigLink;
    17084 rows created.
    Statistics
    24 recursive calls
    52718 db block gets
    38305 consistent gets
    16 physical reads
    4860544 redo size
    629 bytes sent via SQL*Net to client
    553 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    4 sorts (memory)
    0 sorts (disk)
    17084 rows processed
    14:24:41 SQL> commit;
    Commit complete.
    14:25:29 SQL> insert /*+ APPEND*/into temp_idtable select id from V_PstSigLink;
    17084 rows created.
    Statistics
    1778 recursive calls
    775 db block gets
    38847 consistent gets
    40 physical reads
    427408 redo size
    613 bytes sent via SQL*Net to client
    565 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    27 sorts (memory)
    0 sorts (disk)
    17084 rows processed
    As you can see, with /*+ APPEND */ much less redo is generated.
    How can that be?
    1. The /*+ APPEND */ hint alone doesn't mean that no redo should be generated, does it? For that the table has to be declared NOLOGGING.
    2. The docs state that for temporary tables no redo is generated, just undo (and redo only for that undo).
    So where does the difference come from?
    If I use direct path (APPEND) on a temp table, where does the data get populated - in the buffer cache, or directly in a temporary segment on disk?
    Balazs

    Hi John,
    I'm a little confused about your statement "Oracle does not cache data blocks until they are read from a table." I wanted to run a little experiment with the KEEP buffer cache to see whether I can 'pin' the content of a temp table. But before getting to pinning, I noticed an interesting behavior. Please see this SQL*Plus log:
    SQL> create global temporary table temp_ins(id NUMBER);
    Table created.
    SQL> insert into temp_ins values (1);
    1 row created.
    SQL> insert into temp_ins values (2);
    1 row created.
    SQL> insert into temp_ins values (3);
    1 row created.
    SQL> set autotrace on explain statistics;
    SQL> select * from temp_ins;
    ID
    1
    2
    3
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 TABLE ACCESS (FULL) OF 'TEMP_INS'
    Statistics
    0 recursive calls
    0 db block gets
    4 consistent gets
    0 physical reads
    0 redo size
    416 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    3 rows processed
    SQL>
    I observed exactly the opposite of your statement! Oracle apparently cached the content of the temp table before it read from it! Could you please explain why? Now I'm going to try it with a normal table...
    Balazs
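    A quick way to quantify what Balazs is seeing is to snapshot the session's own redo and undo statistics around each insert; here is a minimal sketch, reusing temp_idtable and V_PstSigLink from the post above:
    SELECT sn.name, ms.value
    FROM   v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('redo size', 'undo change vector size');
    INSERT /*+ APPEND */ INTO temp_idtable SELECT id FROM V_PstSigLink;
    SELECT sn.name, ms.value
    FROM   v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('redo size', 'undo change vector size');
    The difference between the two snapshots is the redo and undo generated by the insert; for a global temporary table most of the "redo size" should indeed turn out to be redo protecting the undo.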

  • Archive log / nologging/ direct path insert

    Could you please confirm if following are true or correct me if my understanding is wrong:
    1) Archivelog mode and LOGGING are needed for media recovery; they are not needed for instance recovery.
    2) If the insert is in non-APPEND (conventional) mode, redo is generated even if the table is in nologging mode and the database is in noarchivelog mode. This redo is needed for instance recovery.
    3) Direct-path insert skips undo generation and may skip redo generation if the object is in nologging mode.
    Thanks.
    In case it is relevant, I am using Oracle 11.2.0.3.

    1) Yes, Archive logs are needed for media recovery.
    2 and 3) Even if the table is in nologging mode, it still generates a little redo for index maintenance and dictionary data. Upon restart from a failure, Oracle reads the online redo logs and replays any transaction it finds in there. That is the "roll forward" bit. The binary redo information is used to replay everything that did not get written to the datafiles. This replay includes regenerating the undo information (undo is protected by redo).
    After the redo has been applied, the database is typically available for use, and the rollback phase begins. Any transaction that was in flight when the instance failed needs its changes undone, i.e. rolled back. We do that by processing the undo for all uncommitted transactions.
    The database is now fully recovered.
    Also read the following links:
    http://docs.oracle.com/cd/B19306_01/server.102/b14220/startup.htm
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
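    As a quick sanity check of points 2 and 3, the data dictionary shows whether the table, its indexes, and the database itself will actually honour NOLOGGING; a minimal sketch (object names are placeholders):
    SELECT table_name, logging FROM user_tables  WHERE table_name = 'TBL';
    SELECT index_name, logging FROM user_indexes WHERE table_name = 'TBL';
    SELECT log_mode, force_logging FROM v$database;   -- FORCE LOGGING overrides any NOLOGGING setting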

  • Performance issue when inserting into spatial indexed table with JDBC

    We have a table named 'feature' which has an "sdo_geometry" column, and we created a spatial index on that column:
    CREATE TABLE feature ( id number, desc varchar, oshape sdo_geometry)
    CREATE INDEX feature_sp_idx ON feature(oshape) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    Then we executed the following SQL to insert about 800 records into that table (we tried this using DB Visualizer and
    our Java application, both of which use the JDBC driver to connect to the Oracle 11gR2 database).
    insert into feature(id,desc,oshape) values (1001,xxx,xxxxx);
    insert into feature (id,desc,oshape) values (1002,xxx,xxxxx);
    insert into feature (id,desc,oshape) values (1800,xxx,xxxxx);
    We encountered the same problem as this topic:
    Performance of insert with spatial index
    It takes nearly 1 second to insert one record, compared to 50 records inserted per second without the spatial index,
    which is a 50x drop in performance when inserting with the spatial index.
    However, when we copy and paste those insert scripts into Oracle Client (same test and same table with the spatial index), we get a totally different performance result:
    more than 50 records inserted per second, just as fast as the insertion without the spatial index.
    Is it because Oracle Client is not using JDBC? Perhaps JDBC gets something wrong when updating spatially indexed tables.

    Normally JDBC uses auto-commit, so each insert can cause a commit.
    I don't know about Oracle Client, but in SQL*Plus an insert is just an insert,
    and you execute "commit" to explicitly commit your changes.
    So maybe that is the reason.

  • Forcing DIRECT PATH INSERT to go CONVENTIONAL.

    According to Oracle, to force a statement to avoid direct-path INSERT, it must violate one of the following restrictions:
    Direct-path INSERT is subject to a number of restrictions. If any of these restrictions is violated, then Oracle Database executes conventional INSERT serially without returning any message, unless otherwise noted:
        *     You can have multiple direct-path INSERT statements in a single transaction, with or without other DML statements. However, after one DML statement alters a particular table, partition, or index, no other DML statement in the transaction can access that table, partition, or index.
        *      Queries that access the same table, partition, or index are allowed before the direct-path INSERT statement, but not after it.
        *      If any serial or parallel statement attempts to access a table that has already been modified by a direct-path INSERT in the same transaction, then the database returns an error and rejects the statement.
        *      The target table cannot be part of a cluster.
        *      The target table cannot contain object type columns.
        *      Direct-path INSERT is not supported for an index-organized table (IOT) if it is not partitioned, if it has a mapping table, or if it is referenced by a materialized view.
        *      Direct-path INSERT into a single partition of an index-organized table (IOT), or into a partitioned IOT with only one partition, will be done serially, even if the IOT was created in parallel mode or you specify the APPEND hint. However, direct-path INSERT operations into a partitioned IOT will honor parallel mode as long as the partition-extended name is not used and the IOT has more than one partition.
        *      The target table cannot have any triggers or referential integrity constraints defined on it.
        *      The target table cannot be replicated.
        *      A transaction containing a direct-path INSERT statement cannot be or become distributed.
    Are there any others that are not documented here? We have a vendor-based app and want to avoid the direct-path INSERT and have it go conventional. We tried the trigger approach, but that did not help at all.

    Why do you want to force conventional?
    Are you sure the application uses direct path?

  • Direct Path Inserts = By DBWR or by Shadow Process

    Hello guys,
    if I use the APPEND hint (direct-path insert) - which process writes the data to the datafile?
    From the Oracle documentation:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21dlins.htm#10778
    "Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache." OK, that is clear - but who actually writes the data down? The shadow process handling the insert statement, or is it handed to DBWR, which then flushes the data directly to the datafile without putting it into the buffer cache?
    And another question regarding direct path inserts:
    http://sai-oracle.blogspot.com/2006/03/parallel-query-is-it-good-or-bad.html
    "If you want to do direct path insert in parallel, you would block all other non direct path insert operations on that table. This is because direct path insert would append above the high water mark, and no other process is allowed to update the HWM until your operation is done."
    If I insert data serially with direct-path insert, I also write data behind the HWM... so why is the table only locked in parallel mode?
    And why does Oracle not check whether there is enough space in other blocks below the HWM and use those blocks, as for "normal" inserts?
    Regards
    Stefan

    It's the server process that's responsible for writing direct path blocks to disk.
    DBWR only ever flushes out of the buffer cache.
    You do NOT write "data behind the HWM" in direct path mode, ever. Direct path works so fast because all you do is slam whole blocks of data onto disk ABOVE the high water mark. You can slam whole blocks down without worrying about what you're over-writing precisely because the blocks are above the HWM and thus cannot possibly be in use by anyone for anything. Under the HWM and you have to start worrying about locking and contention and stuff like that.
    Parallel operations (or simultaneous serial operations by two different users) inevitably have to block each other for this sort of thing: if I am busy writing to blocks above the HWM, great. But if you want to do it too... well, what's to stop us from over-writing each other's blocks?! Nothing, actually. So Oracle simply has to lock the entire table from any other direct path operation to make the situation manageable.
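    One way to confirm that it is the server (shadow) process doing the writes, and not DBWR, is to look at your own session's statistics right after an APPEND insert; a minimal sketch:
    SELECT sn.name, ms.value
    FROM   v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name = 'physical writes direct';
    The 'physical writes direct' counter increases in the inserting session itself, i.e. the shadow process wrote the blocks; writes flushed from the buffer cache are performed by, and accounted to, DBWR.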

  • Direct load insert  vs direct path insert vs nologging

    Hello. I am trying to load data from table A (only 4 columns) to table B. Table B is new. I have 25 million records in table A. I have been debating between direct load insert, direct path insert and nologging. What is the difference between the three methods of data load? What is the best approach?

    Hello,
    The fastest way to move data from table A to table B is a direct-path insert with the NOLOGGING option turned on for table B. This produces minimal logging, which means that in a disaster-recovery scenario you might not be able to recover the data in table B. A direct-path insert is the equivalent of loading data from a flat file using the direct load method. With the conventional method there are six phases to move your data from source (table, flat file) to target (table); with direct path/load it is cut down to three, and if in addition you use the PARALLEL hint on the select and the insert you may get a faster result.
    INSERT /*+ APPEND */ INTO TABLE_B SELECT * FROM TABLE_A;
    Regards
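    Putting the advice above together, a minimal sketch of such a load (NOLOGGING on the target plus a parallel direct-path insert; degree 4 is an arbitrary choice):
    ALTER TABLE table_b NOLOGGING;
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(b, 4) */ INTO table_b b
    SELECT /*+ PARALLEL(a, 4) */ * FROM table_a a;
    COMMIT;
    -- take a backup afterwards: a NOLOGGING load cannot be recovered from the archived redo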

  • SQL Loader direct path loads and unusable indexes

    sorry about all the questions. I am researching several issues. I am reading the Utilities document. It says that in certain circumstances indexes will become unusable. I have some questions about my scenario.
    1. tables partitioned by range
    2. local indexes
    3. all tables have 1 primary key and other indexes are non-unique
    4. all sql loads will go into the most recent partition
    5. users will be querying tables while sql loader is occurring
    6. only one sql loader session will run per table
    7. no foreign keys, triggers, or other constraints other than primary keys.
    The docs are not clear. Do I need to worry about unusable indexes with direct path loads? Will the indexes remain usable while the SQL*Loader direct path load is running? (I can't test this since I have small data files now and they load fast, but I will have larger ones in production.)
    My understanding is that an external table load using insert append is exactly the same as a SQL*Loader direct path load. Is this true?

    If you don't have anything productive to say, how about not posting at all? You have made ignorant posts like this for years.
    As far as reading the docs goes, what do you think "the docs are not clear" means? By the docs I am referring to the Utilities document.
    As for the version number, it's 10.2; I forgot to include that. However, it does not appear that SQL*Loader has changed all that much over the last few versions.
    Finally, I plan on testing it out, and it's more than a 2-minute test. I wanted to make sure I don't miss anything in my tests.
    Don't respond to any threads or posts I make from now on.
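    On the unusable-index concern, it is easy to check after each load which indexes or index partitions were left unusable; a minimal sketch against the data dictionary:
    SELECT index_name, status
    FROM   user_indexes
    WHERE  status NOT IN ('VALID', 'N/A');             -- partitioned indexes report N/A here
    SELECT index_name, partition_name, status
    FROM   user_ind_partitions
    WHERE  status = 'UNUSABLE';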

  • Direct Path Insert on Forms

    Hi,
    I am using the following insert statement in my Forms application to do a direct-path insert:
    insert into /*+append*/ abc select * from xyz;
    Redo should not be generated for a direct-path insert, but in this case redo is being generated for the above insert statement.
    I then created a procedure test_insert as below
    create procedure test_insert is
    begin
    insert into /*+append*/ abc select * from xyz;
    end;
    I then replaced the insert statement on the form with a call to the above procedure, and after that the redo for the insert statement stopped.
    I was wondering whether we can use the APPEND hint for a direct-path insert in Forms?

    My point, though, is that if you're doing your hourly load into a separate table and building the index(es) on that table before moving the partition into the partitioned table, there is no risk of doing anything to screw up the partitioned table. It's also a very handy way to ensure that your load process doesn't interfere with your production data.
    If I have understood your point right, what about the space required (tablespaces and indexes)? Trust me, my source table gets populated at over 230 million records/day. If I use a temp table, I will have to maintain hourly partitions/indexes/tablespaces for the temp table to hold that much data, won't I? And the data I move (say today's data) will be used by the users only after 4 days (because it is available in the source). We have provisioned our user interface to connect to 2 databases: one with 5 days of data (incl. sysdate), and the other an archive DB that holds 45 days' worth of data.
    I wouldn't expect that a direct path load into a single partition of a partitioned table would invalidate local indexes on other partitions, but since I'd never do a direct path load into a single partition of a partitioned table, I've not tested this to make sure.
    No problem, if I try this out I will share it in the forum.
    Thanks
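    One thing worth double-checking in the original statement is the hint placement: the documented position for APPEND is immediately after the INSERT keyword, not after INTO, otherwise it may be treated as an ordinary comment. A minimal sketch of the intended statement, with a quick way to verify it went direct-path:
    INSERT /*+ APPEND */ INTO abc SELECT * FROM xyz;
    SELECT COUNT(*) FROM abc;   -- ORA-12838 here (before COMMIT) confirms a direct-path insert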

  • Insert /*+ Append */ and direct-path INSERT

    Hi Guys
    Does the insert /*+ APPEND */ hint cause Oracle 10g to use direct-path INSERT?
    And if it does, is insert /*+ APPEND */ subject to the same restrictions as direct-path insert, such as "The target table cannot have any triggers or referential integrity constraints defined on it"?
    Thanks

    Dear,
    Here below is a simple example showing the effect of an existing trigger on the APPEND hint:
    mhouri@mhouri> select * from v$version where rownum=1;
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production         
    mhouri@mhouri> create table b as select * from all_objects where 1 = 2;
    Table created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70986 rows created.
    mhouri@mhouri> select * from b;
    select * from b
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    mhouri@mhouri> rollback;
    Rollback complete.
    The direct-path insert did take place, since I cannot select from the table before committing.
    mhouri@mhouri> create trigger b_trg before insert on b
      2  for each row
      3  begin
      4  null;
      5  end;
      6  /
    Trigger created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70987 rows created.
    424 rows selected.
    mhouri@mhouri> select count(1) from b;
      COUNT(1)
         70987
    In the presence of this trigger on the table, the APPEND hint has been silently ignored by Oracle. The fact that I can select from the table immediately after the insert has finished indicates that the rows were not inserted using a direct-path load.
    Best Regards
    Mohamed Houri

  • Direct path insert

    Hi Friends,
    If I use a direct-path insert from a remote table (over the LAN) with 5 million rows, using:
    sql> insert /*+ append */ into EMP select * from EMP@dblink1;
    1) Do I need to create a big rollback segment?
    (Note that with a SQL*Loader direct path load, no big rollback segment is needed.)
    2) Can I still undo or roll back the 5 million rows?
    If I can, then surely it holds, or needs, a big rollback segment.
    Thanks a lot

    From the same link, see the points below from Tom:
    b) a direct path load always loads above the high water mark, since it is formatting and writing blocks directly to disk - it cannot reuse any existing space. Think about this - if you direct pathed an insert of a 100 byte row that loaded say just two rows - and you did that 1,000 times, you would be using at least 1,000 blocks (never reusing any existing space) - each with two rows. Now, if you did that using a conventional path insert - you would get about 70/80 rows per block in an 8k block database. You would use about 15 blocks. Which would you prefer?
    c) you cannot query a table after direct pathing into it until you commit.
    See point c. The commit is what finally publishes the load; since the blocks are written above the HWM and nothing has been committed yet, a rollback only needs to leave the table's HWM at its original position, and that's it.
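    To see for yourself how little undo a direct-path insert holds, you can look at the current transaction's undo usage before committing; a minimal sketch:
    INSERT /*+ APPEND */ INTO emp SELECT * FROM emp@dblink1;
    SELECT t.used_ublk, t.used_urec
    FROM   v$transaction t JOIN v$session s ON s.taddr = t.addr
    WHERE  s.sid = SYS_CONTEXT('USERENV', 'SID');
    For a direct-path insert these figures stay tiny (undo for index and dictionary maintenance only), which is why no large rollback segment is needed for the table data itself.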

  • Direct path read caused by direct path insert?

    Oracle 9.2
    ========
    Consider this INSERT statement
    INSERT /*+ APPEND USE_HASH*/ INTO schema.tablename (...)
    SELECT * FROM view_tablename;
    This statement takes about 30 minutes to complete.
    If I look at v$session_wait, I can see the session waiting on db file scattered read (which is understandable). However, at the very end of the 30-minute run, v$session_wait shows it waiting on direct path read. I want to understand how a direct-path INSERT (or for that matter a simple INSERT statement) can cause a direct path read wait. Can you show me some doc that explains this in more detail than the Oracle doc? Thanks in advance.

    Hmm... OK, well, first, there's a problem with the hint specification.
    The USE_HASH hint makes no sense as a hint to the insert statement. It only makes sense in the context of a select statement. Also, it should specify a table alias. So, you could have something like:
    insert /*+ APPEND */ into some_tab
    select /*+ use_hash(tab_alias) */ col1,col2,col3 from .....
    Now, as to the question of direct path read:
    The direct path read event typically indicates a disk sort. So it's likely that the direct path read wait came from sorting during the processing of the select statement. I can't imagine how the insert itself would cause a direct path read.
    Hope that helps,
    -Mark
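    If you want to confirm that the direct path read waits come from the sort in the SELECT rather than from the insert, the loading session's wait and sort statistics are a reasonable place to look; a minimal sketch (replace :sid with the session's SID):
    SELECT event, total_waits, time_waited
    FROM   v$session_event
    WHERE  sid = :sid AND event LIKE 'direct path%';
    SELECT sn.name, ss.value
    FROM   v$sesstat ss JOIN v$statname sn ON sn.statistic# = ss.statistic#
    WHERE  ss.sid = :sid AND sn.name IN ('sorts (memory)', 'sorts (disk)');
    A non-zero 'sorts (disk)' alongside the direct path read waits supports the disk-sort explanation.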

  • How can I tell if direct-path insert is really being used?

    I have a number of INSERT statements with /*+ APPEND */ hints that I suspect are not using direct-path insert processing. How can I tell (via the plan, via Enterprise Manager, whatever) whether direct-path is being used?
    If it is not being used, then I can research why that might be, and try to resolve the obstacles there. But if it is being used, then I want to know that so I can focus elsewhere.
    Thanks,
    Mike

    mtefft wrote:
    The question of whether direct-path is possible with multi-table inserts has been on my mind...
    Moreover, in another forum post, we have eyewitnesses that it has happened at least once:
    APPEND Hint in Multi-table insert
    So I think that post (giving examples of Explain plans with multi-table inserts successfully using direct-path) answers my question.
    Mike,
    Thanks for that link. When I checked my example again I realised that it had a foreign key constraint between the two tables I was inserting into. (It was a demonstration of how to normalise an incoming denormalised address table, so it converted a flat table into address/address lines.) When I removed the constraint I got the multi-table insert.
    However, I followed this up with a check on the execution plans, and for the 10.2.0.3 I was running I noted the following:
    Plan reported after running the insert and select from dbms_xplan.display_cursor()
    | Id  | Operation           | Name | Rows  | Bytes | Cost  |
    |   0 | INSERT STATEMENT    |      |       |       |    30 |
    |   1 |  MULTI-TABLE INSERT |      |       |       |       |
    |   2 |   TABLE ACCESS FULL | T3   | 10000 |  1240K|    30 |
    ------------------------------------------------------------
    Plan reported after explain plan / select from dbms_xplan.display()
    | Id  | Operation           | Name | Rows  | Bytes | Cost  |
    |   0 | INSERT STATEMENT    |      | 10000 |  1240K|    30 |
    |   1 |  MULTI-TABLE INSERT |      |       |       |       |
    |   2 |   DIRECT LOAD INTO  | T1   |       |       |       |
    |   3 |   DIRECT LOAD INTO  | T2   |       |       |       |
    |   4 |    TABLE ACCESS FULL| T3   | 10000 |  1240K|    30 |
    ------------------------------------------------------------
    If you try to check for direct path inserts by looking at the in-memory plans, you might not see the direct load that is really happening. (Perhaps a check that both/all of the target tables are locked in TM mode 6 may give you a clue.)
    Regards
    Jonathan Lewis
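    A crude but reliable runtime check, complementing the plan-based approach above: in the same transaction, try to query the target straight after the insert; a minimal sketch (target_table and source_table are placeholders):
    INSERT /*+ APPEND */ INTO target_table SELECT * FROM source_table;
    SELECT COUNT(*) FROM target_table;
    If the count fails with ORA-12838 (cannot read/modify an object after modifying it in parallel), the insert really was direct-path; a normal row count suggests Oracle silently fell back to a conventional insert.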

  • How to execute a Java method when row inserted into a database table?

    I need to fire off a Java method when a row is inserted into a database table. I am unfortunately working with MySQL, which only recently added support for triggers, and these new triggers cannot execute a Java application on any event.
    What I am looking for is an event-driven approach such that when a row is inserted into a specific table I can fire off a Java method (sitting in a Tomcat container) that will take the contents and send them to a web service.
    It has been mentioned that JMS may have the ability to poll and monitor a database table. Just wondering if anyone could point me in the right direction.
    thanks
    JavaTek

    A service handler might be the right way to run some code at the end of a service call (another way would be to make use of filters).
    First, make sure your static table is merged with ServiceHandlers.
    Second, change your custom method name to one that is not already in the service definition of COLLECTION_COPY_LOT (preferably a unique method name like collectionCopyLotLastAction that describes its purpose) and remove the following line from your code:
    m_service.doCodeEx("", this);
    Now create a service definition for COLLECTION_COPY_LOT in your custom component based on the original COLLECTION_COPY_LOT (copy-paste from the original service definition) and add your own method collectionCopyLotLastAction as the last step in the service. Play with the load order to make sure CS is using your service definition of COLLECTION_COPY_LOT instead of the original.
    regards,
    Fabian

  • How to insert into two differents tables at the same time

    Hi
    I'm new to JDev (version 3.1.1.2, because OAS seems to support only JSP 1.0)
    and I want to insert into two different tables at the same time using one view.
    How can I do that?
    TIA
    Edgar

    Oracle 8i supports 'INSTEAD OF' triggers on object views so you could use a process similar to the following:
    1. Create an object view that joins your two tables: 'CREATE OR REPLACE VIEW test AS SELECT d.deptno, d.deptname, e.empname FROM DEPT d, EMP e WHERE e.deptno = d.deptno'.
    2. Create an INSTEAD OF trigger on the view.
    3. Put code in the trigger that looks at the :NEW values being processed and determines which columns should be used to INSERT or UPDATE for each table. Crude pseudo-code might be:
    IF :NEW.deptno NOT IN (SELECT deptno FROM DEPT) THEN
    INSERT INTO dept VALUES(:NEW.deptno, :NEW.deptname);
    INSERT INTO emp VALUES (:NEW.deptno, :NEW.empname);
    ELSE
    IF :NEW.deptname IS NOT NULL THEN
    UPDATE dept SET deptname = :NEW.deptname
    WHERE deptno = :NEW.deptno;
    END IF;
    IF :NEW.empname IS NOT NULL THEN
    UPDATE emp SET empname = :NEW.empname
    WHERE deptno = :NEW.deptno;
    END IF;
    END IF;
    Try something along those lines.
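    A minimal sketch of steps 2 and 3 as a compilable trigger (column lists for DEPT and EMP are assumed, and the NOT IN test from the pseudo-code is expressed as a COUNT, since a PL/SQL IF condition cannot contain a subquery):
    CREATE OR REPLACE TRIGGER test_ioi
    INSTEAD OF INSERT ON test
    FOR EACH ROW
    DECLARE
      l_cnt PLS_INTEGER;
    BEGIN
      SELECT COUNT(*) INTO l_cnt FROM dept WHERE deptno = :NEW.deptno;
      IF l_cnt = 0 THEN
        INSERT INTO dept (deptno, deptname) VALUES (:NEW.deptno, :NEW.deptname);
        INSERT INTO emp  (deptno, empname)  VALUES (:NEW.deptno, :NEW.empname);
      ELSE
        IF :NEW.deptname IS NOT NULL THEN
          UPDATE dept SET deptname = :NEW.deptname WHERE deptno = :NEW.deptno;
        END IF;
        IF :NEW.empname IS NOT NULL THEN
          UPDATE emp SET empname = :NEW.empname WHERE deptno = :NEW.deptno;
        END IF;
      END IF;
    END;
    /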
