Subpartitions and nologging

Hello,
Is there any way to change the subpartitions of an index from LOGGING to NOLOGGING without creating the index again?
Thanks
Eva

What kind of index? An image-matching index?
If this question is about Oracle Text (formerly interMedia Text) you will get a quicker, more expert answer by asking in the Oracle Text forum.
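For reference, a hedged sketch of related syntax (index and partition names are placeholders; check your version's ALTER INDEX documentation):

ALTER INDEX my_part_idx MODIFY PARTITION p1 NOLOGGING;  -- changes the partition-level logging attribute in place, no rebuild

In many releases the MODIFY SUBPARTITION clause of ALTER INDEX only allows UNUSABLE and extent management, so the subpartition-level logging attribute may not be directly alterable; the partition-level statement above is the closest equivalent.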

Similar Messages

  • DataGuard and NoLogging

    I would like to know something pertaining to Data Guard and NOLOGGING.
    I read that a NOLOGGING operation on the primary causes block corruption, which can be a problem on the standby DB,
    but when I tried to simulate this on my primary/standby, it was not observed.
    create table xyz(name varchar2(20));
    insert into xyz values ('myname');
    [did this by inserting 10 rows]
    Then I created an index:
    create index ind_xyz on xyz(name) nologging; <-- this should be causing a problem
    Then the same log is transferred to the standby,
    but when I query on the standby (in read-only mode)
    it works fine; the select query works fine.
    So where is the problem with nologging???

    SQL> select tablespace_name, force_logging from dba_tablespaces;
    TABLESPACE_NAME   FORCE_LOGGING
    SYSTEM            NO
    UNDOTBS           NO
    TEMP              NO
    SQL> select force_logging from v$database;
    FOR
    NO
    ???? Then where is the catch???
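    A hedged note: a plain SELECT on the standby that never touches the NOLOGGING-built index blocks will look fine; the corruption typically surfaces only when the standby actually reads those index blocks (for example via an index scan), usually as ORA-1578/ORA-26040. On the primary, a NOLOGGING operation is recorded as an unrecoverable change and can be checked from the standard V$DATAFILE columns:

    SELECT name, unrecoverable_change#, unrecoverable_time
    FROM   v$datafile
    WHERE  unrecoverable_change# > 0;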

  • Insert using APPEND and NOLOGGING

    Hi,
    I want to know: if we use INSERT with the APPEND hint and NOLOGGING, is the data still written to the rollback segment?
    Please tell me how we can avoid writing to the rollback segment, as I have a lot of data to insert and want to do it using INSERT ... SELECT rather than a bulk insert.
    Thanks,
    Manish

    is there any way that if some error occurs then the data should be rolled back
    If an exception occurs during the insert, the work will be rolled back automatically.
    as with 20 lakh (2 million) records the rollback segment will overflow.
    Are you sure? Have you tried it?
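    For reference, a minimal sketch of a direct-path insert (table names are placeholders). The APPEND hint avoids undo for the inserted table data itself, but undo is still generated for index maintenance and dictionary changes, and an error still rolls the statement back:

    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM source_table;
    COMMIT;  -- after a direct-path insert, the same session cannot query the table until it commits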

  • Tablespace, table logging and nologging

    Hi, I have a tablespace which is in LOGGING mode.
    I have a table in that tablespace which I want to place in NOLOGGING mode.
    I do that using the following command:
    ALTER TABLE SCOTT.XYZ NOLOGGING;
    I want to know whether the above statement will work even though my tablespace is in LOGGING mode, or whether I should place my tablespace in NOLOGGING mode as well.
    Thanks in advance

    The tablespace and table logging mode (specified during CREATE TABLESPACE and CREATE TABLE) does NOT affect the logging of ordinary inserts, deletes and updates.
    Inserts, deletes and updates will be logged regardless, with the exception of "INSERT /*+ APPEND */ ...".
    The following sentence from the blog should be re-worded because it is ambiguous:
    "Actually the real meaning of NOLOGGING is that whatever operations are performed on the object with the NOLOGGING option will NOT be recorded in the logfiles."
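    A sketch of the answer to the original question: the table-level attribute can be set independently of the tablespace default and verified from the dictionary (SCOTT.XYZ is the poster's table):

    ALTER TABLE scott.xyz NOLOGGING;

    SELECT table_name, logging
    FROM   dba_tables
    WHERE  owner = 'SCOTT' AND table_name = 'XYZ';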

  • Flashback database and nologging tablespaces

    DB versions:
    Oracle 11.2.0.3
    2 other DBs at 11.2.0.3 with 10.2.0.3 compatibility mode.
    We are using Flashback Database in our dev/test environments so we can flash back and apply builds again. I am OK if we lose data and can't roll forward. We are running out of archive space. Lately we have been going a month or more between builds. Is it possible to set tablespaces to NOLOGGING and still have flashback work? I don't need the data; I just need to be able to reset to the structure before the last build. I am 100% OK with losing test data. I have no say over when builds get done.
    Reason I am asking rather than just testing (I did do some Google searches):
    Testing this isn't that simple due to process issues. I have to get a build scheduled, which can take a week or more (there is no extra DB for me to test this), and the build team generally doesn't listen, so if I tell them to wait for me for the test, it generally doesn't happen; then it can be another week before I get a build, and so on. Restoring from backup isn't an easy change because that is a process change, and I have to go through incredible amounts of 'process' and approval to get any kind of change to a process (think Dilbert on steroids). So I have to ask ahead of even running a test.

    Hi,
    Doc ref: ALTER TABLESPACE
    Changing Tablespace Logging Attributes: Example. The following example changes the default logging attribute of a tablespace to NOLOGGING:
    ALTER TABLESPACE tbs_03 NOLOGGING;
    Altering a tablespace logging attribute has no effect on the logging attributes of the existing schema objects within the tablespace. The tablespace-level logging attribute can be overridden by logging specifications at the table, index, and partition levels.
    So if you want NOLOGGING for existing objects, you have to set it at the table level.
    I'd suggest using different locations for the archive logs and the flashback logs, and cleaning the archive area with an OS job. Flashback Database requires only the flashback log files during a flashback.
    HTH
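    A small verification sketch for the tablespace-level change quoted above (the tablespace name is a placeholder); note that FORCE_LOGGING at the database or tablespace level would override NOLOGGING entirely:

    ALTER TABLESPACE users_data NOLOGGING;

    SELECT tablespace_name, logging, force_logging
    FROM   dba_tablespaces;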

  • RMAN and nologging operations

    Currently some of our DWHs are in NOARCHIVELOG mode and we are planning to change them to ARCHIVELOG mode, as users can also insert data into some tables. We will schedule an incremental backup after the nightly batches that contain NOLOGGING operations (as suggested here http://download.oracle.com/docs/cd/B28359_01/server.111/b32024/vldb_backup.htm ).
    There are also append-inserts during the day. The developers have been asked to remove these NOLOGGING operations during the day, but there is some uncertainty whether they will find all relevant source files and not overlook some. Is there a way to monitor whether a NOLOGGING operation has been started (the DBs are currently in NOARCHIVELOG mode, if that makes a difference)?

    Hi Tante Käthe!
    The following query will show you all datafiles where NOLOGGING operations have happened:
    SELECT NAME, UNRECOVERABLE_CHANGE#,
           TO_CHAR (UNRECOVERABLE_TIME,'DD-MON-YYYY HH:MI:SS')
    FROM V$DATAFILE;
    You should have a look at the following link; it will provide you with further information:
    [http://sduoracle.itpub.net/post/18983/155842]
    Yours sincerely
    Florian W.

  • How to truncate data in a subpartition

    Hi All,
    I am using an Oracle 11gR2 database.
    I have a table as given below:
    CREATE TABLE SCMSA_ESP.PP_DROP
    (
      ESP_MESSAGE_ID VARCHAR2(50 BYTE) NOT NULL,
      CREATE_DT      DATE DEFAULT SYSDATE,
      JOB_LOG_ID     NUMBER NOT NULL,
      MON            NUMBER GENERATED ALWAYS AS (TO_CHAR("CREATE_DT",'MM'))
    )
    TABLESPACE SCMSA_ESP_DATA
    PARTITION BY RANGE (JOB_LOG_ID)
    SUBPARTITION BY LIST (MON)
    (
      PARTITION PMINVALUE VALUES LESS THAN (1)
      ( SUBPARTITION PMINVALUE_M1  VALUES ('01') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M2  VALUES ('02') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M3  VALUES ('03') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M4  VALUES ('04') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M5  VALUES ('05') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M6  VALUES ('06') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M7  VALUES ('07') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M8  VALUES ('08') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M9  VALUES ('09') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M10 VALUES ('10') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M11 VALUES ('11') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMINVALUE_M12 VALUES ('12') TABLESPACE SCMSA_ESP_DATA
      ),
      PARTITION PMAXVALUE VALUES LESS THAN (MAXVALUE)
      ( SUBPARTITION PMAXVALUE_M1  VALUES ('01') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M2  VALUES ('02') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M3  VALUES ('03') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M4  VALUES ('04') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M5  VALUES ('05') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M6  VALUES ('06') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M7  VALUES ('07') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M8  VALUES ('08') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M9  VALUES ('09') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M10 VALUES ('10') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M11 VALUES ('11') TABLESPACE SCMSA_ESP_DATA,
        SUBPARTITION PMAXVALUE_M12 VALUES ('12') TABLESPACE SCMSA_ESP_DATA
      )
    )
    ENABLE ROW MOVEMENT;
    I have populated two sets of data:
    one with a positive JOB_LOG_ID and another with a negative JOB_LOG_ID, as given below.
    Step 1:
    Data going to the PMAXVALUE partition:
    INSERT INTO PP_DROP (ESP_MESSAGE_ID, CREATE_DT, JOB_LOG_ID)
    SELECT LEVEL, SYSDATE+TRUNC(DBMS_RANDOM.VALUE(1,300)), 1 FROM DUAL CONNECT BY LEVEL <= 300;
    Step 2:
    Data going to the PMINVALUE partition:
    INSERT INTO PP_DROP (ESP_MESSAGE_ID, CREATE_DT, JOB_LOG_ID)
    SELECT LEVEL, SYSDATE+TRUNC(DBMS_RANDOM.VALUE(1,300)), -1 FROM DUAL CONNECT BY LEVEL <= 300;
    Now the question is how to truncate the data that is present only in a subpartition of the positive partition.
    For example, in the PMAXVALUE partition I have 12 subpartitions and I need to truncate the data in the January subpartition of the PMAXVALUE partition only.
    Appreciate your valuable response.
    Thanks,
    MK.

    For future reference:
    http://www.morganslibrary.org/reference/truncate.html
    The library index is located at
    http://www.morganslibrary.org/library.html
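    For reference, a sketch of the syntax the linked reference covers, using the subpartition names from the DDL above:

    ALTER TABLE scmsa_esp.pp_drop TRUNCATE SUBPARTITION pmaxvalue_m1;
    -- or, to keep the allocated space:
    ALTER TABLE scmsa_esp.pp_drop TRUNCATE SUBPARTITION pmaxvalue_m1 REUSE STORAGE;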

  • Selecting records from DEFAULT range-list subpartition

    Is it possible to select records which belong to the DEFAULT subpartition of a certain partition of a RANGE-LIST partitioned table, using subpartition pruning? Our task assumes creating partitions and subpartitions in a table dynamically before running ETL. But sometimes, due to the complexity of the path the subpartition key takes from staging data to materialized views in the data warehouse, we cannot predict all subpartition key values upfront. We can create a DEFAULT subpartition for every partition, though, and have Oracle place records which don't match any subpartition condition into that subpartition.
    But later we need to split the DEFAULT subpartition so that all records will go into their dedicated subpartitions and DEFAULT will be left empty - other layers of our application need all records to be stored in a subpartition other than DEFAULT in order to be available. For that, we need to know which keys are stored in the DEFAULT subpartition. And the question is - how can we effectively achieve that?
    The obvious "brute force" approach is to issue a query like:
    select distinct subpart_key from mytable partition (myrangepartition) where subpart_key not in (list of all subpartition key values for this partition)
    but it does not use partition pruning - I have checked the execution plan. While it should be possible to read only the DEFAULT subpartition, this query will iterate through all subpartitions, which is a huge performance impact. How can we instruct Oracle to use only the DEFAULT subpartition?

    That is the solution. I had overlooked this syntax - now my life is much easier.
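    The syntax referred to is presumably the subpartition extension clause, sketched here with the poster's placeholder names (the DEFAULT subpartition's actual name can be looked up in USER_TAB_SUBPARTITIONS):

    SELECT DISTINCT subpart_key
    FROM   mytable SUBPARTITION (myrangepartition_default);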

  • Direct load insert  vs direct path insert vs nologging

    Hello. I am trying to load data from table A (only 4 columns) to table B. Table B is new. I have 25 million records in table A. I have been debating between direct load insert, direct path insert and NOLOGGING. What is the difference between the three methods of data load? What is the best approach?

    Hello,
    The fastest way to move data from table A to table B is to use a direct-path insert with the NOLOGGING option turned on for table B. This will produce minimal logging, and in the case of disaster recovery you might not be able to recover the data in table B. A direct-path insert is the equivalent of loading data from a flat file using the direct load method. With the conventional method there are roughly six phases to move your data from source (table, flat file) to target (table), but with direct path/load it is cut down to three, and if in addition you use the PARALLEL hint on the select and the insert you might get a faster result.
    INSERT /*+ APPEND */ INTO TABLE_B SELECT * FROM TABLE_A;
    Regards
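    A sketch combining the points above (the parallel degree is illustrative; NOLOGGING only pays off for the direct-path load itself):

    ALTER TABLE table_b NOLOGGING;
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(b, 4) */ INTO table_b b
    SELECT /*+ PARALLEL(a, 4) */ * FROM table_a a;
    COMMIT;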

  • Table with nologging on

    Guys,
    My Oracle procedure has an insert into a table, and the insert is a huge volume of data selected from a query over a few other tables, so it takes a very long time.
    Well, I don't care about the redo generation. So if I alter the table to turn on NOLOGGING and insert using the /*+ APPEND */ hint, would that do, and would there be any ill effects?
    Thanks for your guidance in advance!!!
    Regards,
    Bhagat

    Hello
    It's easy enough to test:
    tylerd@DEV2> CREATE TABLE dt_test_redo(id number)
      2  /
    Table created.
    Elapsed: 00:00:00.00
    tylerd@DEV2>
    tylerd@DEV2> CREATE TABLE dt_test_redo_nolog(id number) nologging
      2  /
    Table created.
    Elapsed: 00:00:00.03
    tylerd@DEV2>
    tylerd@DEV2> set autot trace stat
    tylerd@DEV2>
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.03
    Statistics
             69  recursive calls
            153  db block gets
             35  consistent gets
              0  physical reads
         145600 redo size
            677  bytes sent via SQL*Net to client
            622  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
             69  recursive calls
            151  db block gets
             36  consistent gets
              0  physical reads
         145276 redo size
            677  bytes sent via SQL*Net to client
            628  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    Virtually no difference for the insert when the table has nologging specified and the insert is done via the conventional path.
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.03
    Statistics
            133  recursive calls
             92  db block gets
             42  consistent gets
              0  physical reads
           8128 redo size
            661  bytes sent via SQL*Net to client
            632  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
            133  recursive calls
             89  db block gets
             42  consistent gets
              0  physical reads
           8180 redo size
            661  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    Again, virtually no difference in the amount of redo between the table with logging and nologging; the append hint, however, had a large impact for both tables as the insert was done using direct path.
    Now create indexes with logging on each table to see what impact that has on redo
    tylerd@DEV2> CREATE INDEX dt_test_redo_i1 ON dt_test_redo(id)
      2  /
    Index created.
    Elapsed: 00:00:00.06
    tylerd@DEV2> CREATE INDEX dt_test_redo_nolog_i1 ON dt_test_redo_nolog(id)
      2  /
    Index created.
    Elapsed: 00:00:00.06
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.07
    Statistics
            273  recursive calls
            946  db block gets
            188  consistent gets
             21  physical reads
        1155872 redo size
            677  bytes sent via SQL*Net to client
            622  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.09
    Statistics
            259  recursive calls
            933  db block gets
            176  consistent gets
             21  physical reads
        1155372 redo size
            677  bytes sent via SQL*Net to client
            628  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    Again, virtually no difference when the append hint is not used.
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.07
    Statistics
            133  recursive calls
            254  db block gets
             44  consistent gets
              0  physical reads
         339856 redo size
            661  bytes sent via SQL*Net to client
            632  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.06
    Statistics
            133  recursive calls
            250  db block gets
             44  consistent gets
              0  physical reads
         339872 redo size
            661  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    The append hint has significantly reduced the amount of redo generated, but it is almost the same for both tables. This is because redo is being generated for the indexes, as they have not used the nologging option. So now repeat the test using indexes with nologging.
    tylerd@DEV2> DROP INDEX dt_test_redo_i1;
    Index dropped.
    Elapsed: 00:00:00.03
    tylerd@DEV2> DROP INDEX dt_test_redo_nolog_i1;
    Index dropped.
    Elapsed: 00:00:00.03
    tylerd@DEV2>
    tylerd@DEV2> CREATE INDEX dt_test_redo_i1 ON dt_test_redo(id) NOLOGGING
      2  /
    Index created.
    Elapsed: 00:00:00.09
    tylerd@DEV2> CREATE INDEX dt_test_redo_nolog_i1 ON dt_test_redo_nolog(id) NOLOGGING
      2  /
    Index created.
    Elapsed: 00:00:00.09
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.15
    Statistics
            320  recursive calls
           1445  db block gets
            240  consistent gets
             43  physical reads
        1921940 redo size
            677  bytes sent via SQL*Net to client
            622  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.09
    Statistics
            292  recursive calls
           1408  db block gets
            218  consistent gets
             42  physical reads
        1887772 redo size
            678  bytes sent via SQL*Net to client
            628  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    Again, very little change in the redo between the table with and without logging.
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
            133  recursive calls
            278  db block gets
             65  consistent gets
              0  physical reads
         313104 redo size
            663  bytes sent via SQL*Net to client
            632  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
            133  recursive calls
            274  db block gets
             66  consistent gets
              0  physical reads
         313096 redo size
            663  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2>
    So from these tests you can see that the nologging clause specified at the table level has very little impact. The append hint has more impact, and as you scale the numbers up, making sure the indexes have the nologging clause specified and using the append hint gives the least amount of redo generated. Of course, if you can do the insert without any indexes on the table and you use the append hint, this will give you the best throughput.
    HTH
    David

  • UNRECOVERABLE or NOLOGGING

    Hi,
    Are the UNRECOVERABLE and NOLOGGING clauses the same, or is there any big difference between the two?
    I know both bypass redo logging during the operation.
    Thanks for your time.
    Bhupinder

    Hi,
    Be very careful using the UNRECOVERABLE clause (Oracle7) and the NOLOGGING clause (Oracle8)
    when performing CREATE INDEX or CREATE TABLE AS SELECT (CTAS) commands.
    A CTAS with NOLOGGING or UNRECOVERABLE will send the actual create statement to the redo logs (this information is needed in the data dictionary), but all rows loaded into the table during the operation are NOT sent to the redo logs.
    With NOLOGGING in Oracle8, although you can set the NOLOGGING attribute for a table, partition, index, or tablespace, NOLOGGING mode does not apply to every operation performed on the schema object for which you set the NOLOGGING attribute.
    Only the following operations can make use of the NOLOGGING option:
    alter table...move partition
    alter table...split partition
    alter index...split partition
    alter index...rebuild
    alter index...rebuild partition
    create table...as select
    create index
    direct load with SQL*Loader
    direct load INSERT
    Many Oracle professionals use NOLOGGING because the operations run faster, since the Oracle redo logs are bypassed. However, this can be quite dangerous if you need to roll forward through this time period during a database recovery.
    It is not possible to roll forward through a point in time when a NOLOGGING operation has taken place. This can be a CREATE INDEX NOLOGGING, a CREATE TABLE AS SELECT NOLOGGING, or a NOLOGGING table load.
    The NOLOGGING clause is a wonderful tool since it often halves run times, but you need to remember the danger. For example, a common practice for reorganizing very large tables is to use CTAS:
    CREATE TABLE new_customer
      TABLESPACE new_ts
      NOLOGGING
    AS SELECT * FROM customer;
    DROP TABLE customer;
    RENAME new_customer TO customer;
    However, you must be aware that a roll-forward through this operation is not possible, since there are no images in the archived redo logs for this operation. Hence, you MUST take a full backup after performing any NOLOGGING operation.
    Thanks
    Pavan Kumar N

  • Append nologging

    Can anyone please let me know the difference between the APPEND hint and NOLOGGING?
    Thanks in advance
    Thanks
    jeevan.

    Important points about LOGGING and NOLOGGING.
    See this:
    http://oraclenz.com/__oneclick_uploads/2008/06/redo_reduction_v_15.pdf
    Francisco Munoz Alvarez
    Note: This is part of a white paper regarding Logging and NoLogging that I'm writing.
    Please, any suggestions or comments regarding the paper are welcome!
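    A short sketch of the distinction asked about (t and s are placeholder names): APPEND is a hint on the INSERT statement that requests a direct-path load, while NOLOGGING is an attribute of the segment that a direct-path operation can exploit to skip most redo for the table data:

    ALTER TABLE t NOLOGGING;                        -- segment attribute
    INSERT /*+ APPEND */ INTO t SELECT * FROM s;    -- direct-path insert
    COMMIT;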

  • Confusion on nologging clause - for create table

    Hi guys,
    just one simple question.
    Q1) When I create a table with NOLOGGING, is the CREATION of the table not logged as well?
    Regards,
    Noob

    OracleWannabe wrote:
    q1) when i create a table via no logging, is the CREATION of the table not logged as well?
    If the table is created empty (as in not using a create table as select, or CTAS) then there is no difference in the create step. The difference between LOGGING and NOLOGGING comes into play for direct-path operations.
    The Oracle Docs give a very good explanation:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/usingpe.htm#i1009116
    Regards,
    Greg Rahn
    http://structureddata.org
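    A sketch illustrating the reply (table names are placeholders):

    CREATE TABLE t_empty (id NUMBER) NOLOGGING;            -- empty create: no practical difference, the DDL itself is still logged
    CREATE TABLE t_copy NOLOGGING AS SELECT * FROM src;    -- CTAS: the data portion can skip most redo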

  • Fast index creation suggestions wanted

    Hi:
    I've loaded a table with a little over 100,000,000 records. The table has several indexes which I must now create. I need to do this as fast as possible.
    I've read the excellent article by Don Burleson (http://www.dba-oracle.com/oracle_tips_index_speed.htm) but still have a few questions.
    1) If the table is not partitioned, does it still make sense to use "parallel"?
    2) The digit(s) following "compress" indicate the number of consecutive columns at the head of the index that have duplicates. True?
    3) How will the compressed index affect query performance (vs. not compressed) down the line?
    4) In the future I will be doing lots and lots of updates on the indexed columns of the records, as well as lots of record deletes and inserts into/out of the table. Will these updates/inserts/deletes run faster or slower given that the indexes are compressed vs. if they were not?
    5) In order to speed up the sorting, is it better to add datafiles to the TEMP tablespace or to create more TEMP tablespaces (remember, running "parallel")?
    Thanks in advance

    There are people who would argue that "excellent" and Mr. Burleson do not belong in the same sentence.
    1) Yes, you can still use parallel (and nologging) to create the index, but don't expect 20-30 times faster index creation.
    2) It is the number of columns to compress by; they may not necessarily have duplicates. For a unique index the default is the number of columns minus 1; for a non-unique index the default is the number of columns.
    3) If you do a lot of range scans or fast full index scans on that index, then you may see some performance benefit from reading fewer blocks. If the index is mostly used in equality predicates, then the performance benefit will likely be minimal.
    4) It really depends on too many factors to predict. The performance of inserts, updates and deletes will be either
    A) Slower
    B) The same
    C) Faster
    5) If you are on 10g, then I would look at temporary tablespace groups, which can be beneficial for parallel operations. If not, then allocate as much memory as possible to sort_area_size to minimize disk sorts, and add space to your temporary tablespace to avoid "unable to extend" errors. Adding additional temporary tablespaces will not help because a user can only use one temporary tablespace at a time, and a parallel index creation is only one user.
    You might want to do some searching at Tom Kyte's site http://asktom.oracle.com for some more responsible answers. Tom and Don have had their disagreements in the past, and in most of them my money would be on Tom to be correct.
    HTH
    John
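    A sketch of the parallel NOLOGGING index build being discussed (the degree and compression level are illustrative; table, column and index names are placeholders):

    CREATE INDEX big_tab_ix1 ON big_tab (col1, col2)
      NOLOGGING
      PARALLEL 8
      COMPRESS 1;
    -- optionally reset the attributes afterwards
    ALTER INDEX big_tab_ix1 LOGGING;
    ALTER INDEX big_tab_ix1 NOPARALLEL;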

  • Why log or not a materializad view ?

    Hi,
    I know we can log a materialized view but have not found a reason to do it. I'm not talking about materialized view logs used to refresh an MV, but about logging an MV.
    If I work with two databases (a - master and b - MV site) and do a lot of changes to a table on database a replicated on database b, changes are logged on both sides, which in my opinion wastes space on database b. Case 2: if we do a complete refresh, the deleted and inserted data will be logged, sometimes using a lot of disk space.
    The reason for my question: I'm used to turning off logging on materialized views, but now I'm working with some databases where I want to propose not logging MVs, because disk space problems are usual.
    What do you think about it?

    What, exactly, do you mean by "log a materialized view" if not the presence of materialized view logs? Are you talking about
    CREATE MATERIALIZED VIEW
      LOGGING | NOLOGGING
    If so, that merely controls whether direct-path inserts into the materialized view, or the initial materialized view creation, generate redo. If you are doing incremental refreshes, NOLOGGING is irrelevant because the refresh mechanism doesn't use direct-path operations. If there are any indexes on the materialized view, NOLOGGING is less relevant because index operations are always logged, so you lose most of the performance benefit.
    If you are doing complete refreshes of materialized views and those materialized views are not in refresh groups, Oracle will do direct-path loads and NOLOGGING can be quite useful, assuming there are no indexes. There are obviously downsides from a recovery standpoint, however, since those are unrecoverable operations. It may also be useful to use NOLOGGING when initially building a large materialized view.
    Justin
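    A sketch of the clause being discussed (the MV name, source table and database link are placeholders):

    CREATE MATERIALIZED VIEW sales_mv
      NOLOGGING
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS SELECT * FROM sales@master_db;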
