Table with nologging on

Guys,
My Oracle procedure has an insert into a table, the insert being a huge volume of data selected from a query on a few other tables. Hence it takes a very long time.
I don't care about the redo generation, so if I alter the table to turn on "nologging" and insert using the /*+ append */ hint, would that do the job, and would there be any ill effects?
Thanks for your guidance in advance!!!
Regards,
Bhagat

Hello
It's easy enough to test:
tylerd@DEV2> CREATE TABLE dt_test_redo(id number)
  2  /
Table created.
Elapsed: 00:00:00.00
tylerd@DEV2>
tylerd@DEV2> CREATE TABLE dt_test_redo_nolog(id number) nologging
  2  /
Table created.
Elapsed: 00:00:00.03
tylerd@DEV2>
tylerd@DEV2> set autot trace stat
tylerd@DEV2>
tylerd@DEV2> INSERT
  2  INTO
  3      dt_test_redo
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.03
Statistics
         69  recursive calls
        153  db block gets
         35  consistent gets
          0  physical reads
     145600 redo size
        677  bytes sent via SQL*Net to client
        622  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.00
tylerd@DEV2> INSERT
  2  INTO
  3      dt_test_redo_nolog
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.04
Statistics
         69  recursive calls
        151  db block gets
         36  consistent gets
          0  physical reads
     145276 redo size
        677  bytes sent via SQL*Net to client
        628  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.00
Virtually no difference for the insert when the table has nologging specified and the insert is done via the conventional path.
tylerd@DEV2> INSERT /*+ append */
  2  INTO
  3      dt_test_redo
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.03
Statistics
        133  recursive calls
         92  db block gets
         42  consistent gets
          0  physical reads
       8128 redo size
        661  bytes sent via SQL*Net to client
        632  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
tylerd@DEV2> INSERT /*+ append */
  2  INTO
  3      dt_test_redo_nolog
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.04
Statistics
        133  recursive calls
         89  db block gets
         42  consistent gets
          0  physical reads
       8180 redo size
        661  bytes sent via SQL*Net to client
        638  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.00
Again, virtually no difference in the amount of redo between the table with logging and with nologging. The append hint, however, had a large impact for both tables, as the insert was done using the direct path.
Now create indexes with logging on each table to see what impact that has on redo
tylerd@DEV2> CREATE INDEX dt_test_redo_i1 ON dt_test_redo(id)
  2  /
Index created.
Elapsed: 00:00:00.06
tylerd@DEV2> CREATE INDEX dt_test_redo_nolog_i1 ON dt_test_redo_nolog(id)
  2  /
Index created.
Elapsed: 00:00:00.06
tylerd@DEV2> INSERT
  2  INTO
  3      dt_test_redo
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.07
Statistics
        273  recursive calls
        946  db block gets
        188  consistent gets
         21  physical reads
    1155872 redo size
        677  bytes sent via SQL*Net to client
        622  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
tylerd@DEV2> INSERT
  2  INTO
  3      dt_test_redo_nolog
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.09
Statistics
        259  recursive calls
        933  db block gets
        176  consistent gets
         21  physical reads
    1155372 redo size
        677  bytes sent via SQL*Net to client
        628  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.00
Again, virtually no difference when the append hint is not used.
tylerd@DEV2> INSERT /*+ append */
  2  INTO
  3      dt_test_redo
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.07
Statistics
        133  recursive calls
        254  db block gets
         44  consistent gets
          0  physical reads
     339856 redo size
        661  bytes sent via SQL*Net to client
        632  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
tylerd@DEV2> INSERT /*+ append */
  2  INTO
  3      dt_test_redo_nolog
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.06
Statistics
        133  recursive calls
        250  db block gets
         44  consistent gets
          0  physical reads
     339872 redo size
        661  bytes sent via SQL*Net to client
        638  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
The append hint has significantly reduced the amount of redo generated, but it is almost the same for both tables. This is because redo is still being generated for the indexes, as they were not created with the nologging option. So now repeat the test using indexes with nologging.
tylerd@DEV2> DROP INDEX dt_test_redo_i1;
Index dropped.
Elapsed: 00:00:00.03
tylerd@DEV2> DROP INDEX dt_test_redo_nolog_i1;
Index dropped.
Elapsed: 00:00:00.03
tylerd@DEV2>
tylerd@DEV2> CREATE INDEX dt_test_redo_i1 ON dt_test_redo(id) NOLOGGING
  2  /
Index created.
Elapsed: 00:00:00.09
tylerd@DEV2> CREATE INDEX dt_test_redo_nolog_i1 ON dt_test_redo_nolog(id) NOLOGGING
  2  /
Index created.
Elapsed: 00:00:00.09
tylerd@DEV2> INSERT
  2  INTO
  3      dt_test_redo
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.15
Statistics
        320  recursive calls
       1445  db block gets
        240  consistent gets
         43  physical reads
    1921940 redo size
        677  bytes sent via SQL*Net to client
        622  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
tylerd@DEV2> INSERT
  2  INTO
  3      dt_test_redo_nolog
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.09
Statistics
        292  recursive calls
       1408  db block gets
        218  consistent gets
         42  physical reads
    1887772 redo size
        678  bytes sent via SQL*Net to client
        628  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
Again, very little change in the redo between the table with and without logging.
tylerd@DEV2> INSERT /*+ append */
  2  INTO
  3      dt_test_redo
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.04
Statistics
        133  recursive calls
        278  db block gets
         65  consistent gets
          0  physical reads
     313104 redo size
        663  bytes sent via SQL*Net to client
        632  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
tylerd@DEV2> INSERT /*+ append */
  2  INTO
  3      dt_test_redo_nolog
  4  SELECT
  5      rownum
  6  FROM
  7      dual
  8  CONNECT BY
  9      LEVEL <= 10000
10  /
10000 rows created.
Elapsed: 00:00:00.04
Statistics
        133  recursive calls
        274  db block gets
         66  consistent gets
          0  physical reads
     313096 redo size
        663  bytes sent via SQL*Net to client
        638  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      10000  rows processed
tylerd@DEV2> COMMIT
  2  /
Commit complete.
Elapsed: 00:00:00.01
tylerd@DEV2>
So from these tests you can see that the nologging clause specified at the table level has very little impact. The append hint has far more impact, and as you scale the numbers up, making sure the indexes have the nologging clause specified and using the append hint gives the least amount of redo generated. Of course, if you can do the insert without any indexes on the table and you use the append hint, that will give you the best throughput. One ill effect to be aware of: after a nologging direct-path load the new rows are not protected by redo, so they cannot be recovered from a backup taken before the load; take a fresh backup afterwards if the data matters.
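If you want to verify the redo figures in your own environment without autotrace, a minimal sketch (assuming SELECT privilege on v$mystat and v$statname) is to snapshot the session's 'redo size' statistic before and after the statement:

SELECT ms.value AS redo_size
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'redo size';

-- run the INSERT under test, then re-run the query above;
-- the difference between the two values is the redo that statement generated.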
HTH
David

Similar Messages

  • Is there a way to let OWB create a table with the NOLOGGING option?

    A table created by OWB deployment is by default created with LOGGING. How can I specify NOLOGGING?
    Even better, can we define a partitioned table in OWB?
    Thanks

    Hi,
    it is possible to set NOLOGGING with the table configuration property "Logging Mode".
    The OWB table editor has a "Partitions" tab for creating different types of partitions.
    Regards,
    Oleg

  • I want to enable a PK on a table with more than 1680 subpartitions

    Hi All
    I want to enable a PK on a table with more than 1680 subpartitions.
    SQL> ALTER SESSION ENABLE PARALLEL DDL ;
    Session altered.
    SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8;
    alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8 nologging;
    alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8 nologging
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    SQL> alter table FDW.GL_JE_LINES_BASE_1 parallel 8;
    Table altered.
    SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK;
    alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK
    ERROR at line 1:
    ORA-01652: unable to extend temp segment by 128 in tablespace TS_FDW_DATA
    Please advise on the best way to do this.
    Regards
    Jesus

    When you enable a PK, an index is automatically created. If you want to put this index in a different tablespace you should use the 'USING INDEX ...' option when you enable the primary key. The ORA-00933 errors above come from appending PARALLEL and NOLOGGING directly to the ENABLE CONSTRAINT clause; they are index properties and belong in the USING INDEX part.
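    A sketch of what that can look like, using the create-index form of USING INDEX so that PARALLEL and NOLOGGING can be given (the key columns and the index tablespace below are assumptions, not taken from the original post):
    ALTER TABLE fdw.gl_je_lines_base_1
      ENABLE CONSTRAINT gl_je_lines_base_1_pk
      USING INDEX (
        CREATE UNIQUE INDEX fdw.gl_je_lines_base_1_pk
          ON fdw.gl_je_lines_base_1 (je_header_id, je_line_num)  -- assumed key columns
          TABLESPACE ts_fdw_idx                                  -- assumed index tablespace
          PARALLEL 8
          NOLOGGING
      );
    The ORA-01652 on the final attempt means the index build ran out of space in TS_FDW_DATA (the segment being built there), so pointing the index at a tablespace with enough free space, or enlarging TS_FDW_DATA, addresses that too.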

  • Loading millions of rows using SQL*loader to a table with constraints

    I have a table with constraints and I need to load millions of rows into it using SQL*Loader.
    What is the best way to do this, i.e. which SQL*Loader options should I use for the best loading performance, and how should I deal with the constraints?
    Regards

    - Check if your table has check constraints (like column NOT NULL). If you trust the data in the file you have to load, you can disable these constraints and re-enable them after the load.
    - Check if you can modify the table and place it in nologging mode (it generates less redo, but ONLY in SOME conditions).
    Hope it helps
    Rui Madaleno
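    A minimal sketch of that approach (the table, constraint, and file names here are placeholders; direct path is what actually reduces redo on a nologging table):
    ALTER TABLE target_table NOLOGGING;
    ALTER TABLE target_table DISABLE CONSTRAINT target_table_chk;
    REM load using the direct path:
    REM   sqlldr userid=scott/tiger control=load.ctl data=load.dat direct=true
    ALTER TABLE target_table ENABLE CONSTRAINT target_table_chk;
    ALTER TABLE target_table LOGGING;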

  • How to create table with 1 row 1MB in size?

    Hello,
    I am doing some R&D and want to create a table with 1 row, i.e. a single row which is 1 MB in size.
    I am using a 11g DB.
    I do this in SQL*Plus:
    (1.) CREATE TABLE onembrow  (pk NUMBER PRIMARY KEY, onembcolumn CLOB);
    (2.) Since 1MB is 1024*1024 bytes (i.e. 1048576 bytes) and since in English 1 letter = 1 byte, I do this
    SQL> INSERT INTO onembrow VALUES (1, RPAD('A', 1048576, 'B'));
    1 row created.
    (3.) Now, after committing, I do an analyze table.
    SQL> ANALYZE TABLE onembrow COMPUTE STATISTICS;
    Table analyzed.
    (4.) Now, I check the actual size of the table using this query.
    select segment_name,segment_type,bytes/1024/1024 MB
    from user_segments where segment_type='TABLE' and segment_name='ONEMBROW';
    SEGMENT_NAME    SEGMENT_TYPE    MB
    ONEMBROW        TABLE           .0625
    Why is the size only .0625 MB, when it should be 1 MB?
    Here are the DB block related parameters:
    SELECT * FROM v$parameter WHERE upper(name) LIKE '%BLOCK%';
      NUM NAME                                                                                   TYPE VALUE 
      478 db_block_buffers                                                                          3 0     
      482 db_block_checksum                                                                         2 TYPICAL
      484 db_block_size                                                                             3 8192  
      682 db_file_multiblock_read_count                                                             3 128   
      942 db_block_checking                                                                         2 FALSE 
    What am I doing wrong here???
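    For what it's worth, two things explain the .0625 MB (a sketch, not taken from the replies): in a SQL statement RPAD returns a VARCHAR2 capped at 4000 bytes, so the INSERT above stored a 4000-byte value rather than 1 MB; and CLOB data beyond the inline threshold is kept in a separate LOB segment, which a USER_SEGMENTS query filtered to segment_type='TABLE' never counts. Building the value in PL/SQL avoids the cap:
    DECLARE
      c CLOB := RPAD('A', 32767, 'B');   -- PL/SQL allows 32767 bytes per piece
    BEGIN
      FOR i IN 1 .. 31 LOOP              -- 32 pieces of 32767 bytes is roughly 1 MB
        c := c || RPAD('A', 32767, 'B');
      END LOOP;
      INSERT INTO onembrow VALUES (2, c);
      COMMIT;
    END;
    /
    SELECT segment_name, segment_type, bytes/1024/1024 AS mb
    FROM   user_segments
    WHERE  segment_name = 'ONEMBROW'
       OR  segment_name IN (SELECT segment_name FROM user_lobs
                            WHERE  table_name = 'ONEMBROW');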

    When testing it is necessary to do something that is a reasonably realistic model of a problem you might anticipate appearing in a production system - a row of 1MB doesn't seem likely to be a useful source of information for "R&D on performance tuning"
    What's wrong with creating millions of rows?
    Here's a cut and paste from a windows system running 11.2.0.3
    SQL> set timing on
    SQL>
    SQL> drop table t1 purge;
    Table dropped.
    Elapsed: 00:00:00.04
    SQL>
    SQL> create table t1
      2  nologging
      3  as
      4  with generator as (
      5     select
      6             rownum id
      7     from dual
      8     connect by
      9             level <= 50
    10  ),
    11  ao as (
    12     select
    13             *
    14     from
    15             all_objects
    16     where   rownum <= 50000
    17  )
    18  select
    19     rownum          id,
    20     ao.*
    21  from
    22     generator       v1,
    23     ao
    24  ;
    Table created.
    Elapsed: 00:00:07.09
    7 seconds to generate 2.5M rows doesn't seem like a problem.  For a modelling example I have one script that generates 6.5M (carefully engineered) rows, with a couple of indexes and a foreign key or two, then collects stats (no histograms) in 3.5 minutes.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Now on Twitter: @jloracle

  • Create table with logging clause....

    hi,
    just reading this http://docs.oracle.com/cd/B19306_01/server.102/b14231/tables.htm#ADMIN01507
    it mention under Consider Using NOLOGGING When Creating Tables:
    The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. *Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo.*
    Help me with my understanding: does it mean that when you create a table with the logging clause and later need to do a direct load, you can specify nologging to reduce the redo? And that the nologging only applies to that activity, not to DML operations, even though the table is marked nologging?

    sybrand_b wrote:
    Nologging basically applies to the INSERT statement with the APPEND hint. Direct load means using this hint.
    All other statements are always logging regardless of any table setting.
    Sybrand Bakker
    Senior Oracle DBA

    I did a few tests:
    create table test
    (id number) nologging;
    Table created.
    insert into test values (1);
    1 row created.
    commit;
    Commit complete.
    delete from test;
    1 row deleted.
    rollback;
    Rollback complete.
    select * from test;
            ID
             1
    There is no logging at table level or tablespace level. So what I am doing is checking this: "Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo."
    The above confuses me, because isn't rollback related to the UNDO tablespace, not the redo log? So one can still roll back even when the table is in nologging mode.
    The redo log is for rolling forward, i.e. recovery when there is a system crash...

  • Is it possible to create table with partition in compress mode

    Hi All,
    I want to create a table with the compress option, with partitions. When I create it with partitions the compression isn't enabled, but with normal table creation the compression option is enabled.
    My question is:
    Can't we create a table with partitions/subpartitions in compress mode? Please help.
    Below is the code that I used for table creation.
    CREATE TABLE temp
      TRADE_ID                    NUMBER,
      SRC_SYSTEM_ID               VARCHAR2(60 BYTE),
      SRC_TRADE_ID                VARCHAR2(60 BYTE),
      SRC_TRADE_VERSION           VARCHAR2(60 BYTE),
      ORIG_SRC_SYSTEM_ID          VARCHAR2(30 BYTE),
      TRADE_STATUS                VARCHAR2(60 BYTE),
      TRADE_TYPE                  VARCHAR2(60 BYTE),
      SECURITY_TYPE               VARCHAR2(60 BYTE),
      VOLUME                      NUMBER,
      ENTRY_DATE                  DATE,
        REASON                      VARCHAR2(255 BYTE),
    TABLESPACE data
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    NOLOGGING
    COMPRESS
    NOCACHE
    PARALLEL (DEGREE 6 INSTANCES 1)
    MONITORING
    PARTITION BY RANGE (TRADE_DATE)
    SUBPARTITION BY LIST (SRC_SYSTEM_ID)
    SUBPARTITION TEMPLATE
      (SUBPARTITION SALES VALUES ('sales'),
       SUBPARTITION MAG VALUES ('MAG'),
       SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
       SUBPARTITION FIS VALUES ('FIS'),
       SUBPARTITION GD VALUES ('GS'),
       SUBPARTITION ST VALUES ('ST'),
       SUBPARTITION KOR VALUES ('KOR'),
       SUBPARTITION BLR VALUES ('BLR'),
       SUBPARTITION SUT VALUES ('SUT'),
       SUBPARTITION RM VALUES ('RM'),
       SUBPARTITION DEFAULT VALUES (default)
    PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)    
        LOGGING
            TABLESPACE data
         ( SUBPARTITION TS_MAX_SALES VALUES ('SALES')      TABLESPACE data,
        SUBPARTITION TS_MAX_MAG VALUES ('MAG')      TABLESPACE data,
        SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA')      TABLESPACE data,
        SUBPARTITION TS_MAX_FIS VALUES ('FIS')      TABLESPACE data,
        SUBPARTITION TS_MAX_GS VALUES ('GS')      TABLESPACE data,
        SUBPARTITION TS_MAX_ST VALUES ('ST')      TABLESPACE data,
        SUBPARTITION TS_MAX_KOR VALUES ('KOR')      TABLESPACE data,
        SUBPARTITION TS_MAX_BLR VALUES ('BLR')      TABLESPACE data,
        SUBPARTITION TS_MAX_SUT VALUES ('SUT')      TABLESPACE data,
        SUBPARTITION TS_MAX_RM VALUES ('RM')      TABLESPACE data,
        SUBPARTITION TS_MAX_DEFAULT VALUES (default)      TABLESPACE data));

    user11942774 wrote:
    > I want to create a table in compress option, with partitions. When i create with partitions the compression isnt enabled, but with noramal table creation the compression option is enables.
    First of all, your CREATE TABLE statement is full of syntax errors. Next time test it before posting - we don't want to spend time fixing things not related to your question.
    Now, I bet you check the COMPRESSION value of the partitioned table the same way you do it for a non-partitioned table - in USER_TABLES - and therefore get wrong results. Since compression can be enabled at the individual partition level, you need to check COMPRESSION in USER_TAB_PARTITIONS:
    SQL> CREATE TABLE temp
      2  (
      3    TRADE_ID                    NUMBER,
      4    SRC_SYSTEM_ID               VARCHAR2(60 BYTE),
      5    SRC_TRADE_ID                VARCHAR2(60 BYTE),
      6    SRC_TRADE_VERSION           VARCHAR2(60 BYTE),
      7    ORIG_SRC_SYSTEM_ID          VARCHAR2(30 BYTE),
      8    TRADE_STATUS                VARCHAR2(60 BYTE),
      9    TRADE_TYPE                  VARCHAR2(60 BYTE),
    10    SECURITY_TYPE               VARCHAR2(60 BYTE),
    11    VOLUME                      NUMBER,
    12    ENTRY_DATE                  DATE,
    13      REASON                      VARCHAR2(255 BYTE),
    14    TRADE_DATE                  DATE
    15  )
    16  TABLESPACE users
    17  PCTUSED    0
    18  PCTFREE    10
    19  INITRANS   1
    20  MAXTRANS   255
    21  NOLOGGING
    22  COMPRESS
    23  NOCACHE
    24  PARALLEL (DEGREE 6 INSTANCES 1)
    25  MONITORING
    26  PARTITION BY RANGE (TRADE_DATE)
    27  SUBPARTITION BY LIST (SRC_SYSTEM_ID)
    28  SUBPARTITION TEMPLATE
    29    (SUBPARTITION SALES VALUES ('sales'),
    30     SUBPARTITION MAG VALUES ('MAG'),
    31     SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
    32     SUBPARTITION FIS VALUES ('FIS'),
    33     SUBPARTITION GD VALUES ('GS'),
    34     SUBPARTITION ST VALUES ('ST'),
    35     SUBPARTITION KOR VALUES ('KOR'),
    36     SUBPARTITION BLR VALUES ('BLR'),
    37     SUBPARTITION SUT VALUES ('SUT'),
    38     SUBPARTITION RM VALUES ('RM'),
    39     SUBPARTITION DEFAULT_SUB VALUES (default)
    40    )  
    41  (  
    42   PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)    
    43      LOGGING
    44          TABLESPACE users
    45       ( SUBPARTITION TS_MAX_SALES VALUES ('SALES')      TABLESPACE users,
    46      SUBPARTITION TS_MAX_MAG VALUES ('MAG')      TABLESPACE users,
    47      SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA')      TABLESPACE users,
    48      SUBPARTITION TS_MAX_FIS VALUES ('FIS')      TABLESPACE users,
    49      SUBPARTITION TS_MAX_GS VALUES ('GS')      TABLESPACE users,
    50      SUBPARTITION TS_MAX_ST VALUES ('ST')      TABLESPACE users,
    51      SUBPARTITION TS_MAX_KOR VALUES ('KOR')      TABLESPACE users,
    52      SUBPARTITION TS_MAX_BLR VALUES ('BLR')      TABLESPACE users,
    53      SUBPARTITION TS_MAX_SUT VALUES ('SUT')      TABLESPACE users,
    54      SUBPARTITION TS_MAX_RM VALUES ('RM')      TABLESPACE users,
    55      SUBPARTITION TS_MAX_DEFAULT VALUES (default)      TABLESPACE users));
    Table created.
    SQL>
    SQL>
    SQL> SELECT  PARTITION_NAME,
      2          COMPRESSION
      3    FROM USER_TAB_PARTITIONS
      4    WHERE TABLE_NAME = 'TEMP'
      5  /
    PARTITION_NAME                 COMPRESS
    RMS_TRADE_DLY_MAX              ENABLED
    SQL> SELECT  COMPRESSION
      2    FROM USER_TABLES
      3    WHERE TABLE_NAME = 'TEMP'
      4  /
    COMPRESS

    SQL>
    SY.
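    For a composite-partitioned table like this one, the subpartition level can be checked the same way (a sketch):
    SELECT subpartition_name, compression
    FROM   user_tab_subpartitions
    WHERE  table_name = 'TEMP';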

  • Export table with LOB column

    Hi!
    I have to export a table with a LOB column (the LOB segment is 3 GB in size) and then drop that LOB column from the table. The table has about 350k rows.
    (I was thinking) I have to:
    1. create new tablespace
    2. create copy of my table with CTAS in new tablespace
    3. alter new table to be NOLOGGING
    4. insert all rows from original table with APPEND hint
    5. export copy of table using transport tablespace feature
    6. drop newly created tablespace
    7. drop lob column and rebuild original table
    DB is Oracle 9.2.0.6.0.
    UNDO tablespace limited on 2GB with retention 10800 secs.
    When I tried to insert rows into the new table with the /*+ append */ hint, the operation was very, very slow, so I cancelled it.
    How much time should I expect this operation to take?
    Is my UNDO sufficient enough to avoid snapshot too old?
    What do you think?
    Thanks for your answers!
    Regards,
    Marko Sutic

    I've seen that document before I posted this question.
    Still I don't know what I should do. Look at this document - Doc ID: 281461.1
    From that document:
    FIX
    Although the performance of the export cannot be improved directly, possible alternative solutions are:
    1. If not required, do not use LOB columns.
    or:
    2. Use Transport Tablespace export instead of full/user/table level export.
    or:
    3. Upgrade to Oracle10g and use Export DataPump and Import DataPump.
    I just have to speed up the CTAS a little more somehow (maybe using parallel processing).
    Anyway thanks for suggestion.
    Regards,
    Marko
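    A sketch of the parallel nologging CTAS mentioned above (the table names are placeholders for the copy step in the original plan):
    CREATE TABLE new_table
      NOLOGGING
      PARALLEL 4
    AS
    SELECT /*+ parallel(t 4) */ *
    FROM   original_table t;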

  • Find size of table with XMLTYPE column STORE AS BINARY XML

    Hi,
    I have a table with structure as:
    CREATE TABLE XML_TABLE_1
    ID NUMBER NOT NULL,
    SOURCE VARCHAR2(255 CHAR) NOT NULL,
    XML_TEXT SYS.XMLTYPE,
    CREATION_DATE TIMESTAMP(6) NOT NULL
    XMLTYPE XML_TEXT STORE AS BINARY XML (
    TABLESPACE Tablespace1_LOB
    DISABLE STORAGE IN ROW
    CHUNK 16384
    RETENTION
    CACHE READS
    NOLOGGING)
    ALLOW NONSCHEMA
    DISALLOW ANYSCHEMA
    TABLESPACE Tablespace2_DATA
    - So HOW do I find the total size occupied by this table? Does BINARY storage work like LOB storage, i.e. do I need to consider USER_LOBS as well for this?
    Or will the following work:
    select segment_name as tablename, sum(bytes/ (1024 * 1024 * 1024 )) as tablesize_in_GB
    From dba_segments
    where segment_name = 'XML_TABLE_1'
    and OWNER = 'SCHEMANAME'
    group by segment_name ;
    - Also, if I am copying it to another table of the same structure as:
    Insert /*+ append */ into XML_TABLE_2 Select * from XML_TABLE_1;
    then how much space in the rollback segment do I need? Is it equal to the size of the table XML_TABLE_1?
    Thanks..

    I think the following query calculates it right, including the LOB storage:
    SELECT SUM(bytes)/1024/1024/1024 gb
    FROM dba_segments
    WHERE (owner = 'SCHEMA_NAME' and
    segment_name = 'TABLE_NAME')
    OR (owner, segment_name) IN (
    SELECT owner, segment_name
    FROM dba_lobs
    WHERE owner = 'SCHEMA_NAME'
    AND table_name = 'TABLE_NAME')
    It's 80 GB for our Table with XMLType Column.
    But for the second point:
    Do we need 80GB of UNDO/ROLLBACK Segment for performing:
    Insert /*+ append */ into TableName1 Select * from TableName;
    Thanks..
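    One way to answer that empirically (a sketch; it assumes access to v$transaction and a single active transaction in the session) is to run the insert and, before committing, check how much undo the transaction is holding:
    SELECT t.used_ublk * TO_NUMBER(p.value) / 1024 / 1024 / 1024 AS undo_gb
    FROM   v$transaction t, v$parameter p
    WHERE  p.name = 'db_block_size';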

  • When cache log table modified "nologging" , Does any problem occur?

    Test environment:
        *. readonly cache group :
    create readonly cache group cg_tb_test1
    autorefresh interval 1 seconds
    from
    TB_TEST1
    (       C1      tt_integer,
             C2      CHAR (10),
             C3      tt_integer,
             C4      CHAR (10),
             C5      CHAR (10),
             C6      CHAR (10),
             C7      CHAR (10),
             C8      CHAR (10),
             C9      tt_integer,
             C10     DATE,
      PRIMARY KEY (C1)
    );
        *. oracle's tables
        SQL> select * from tab;
             TB_TEST1                       TABLE
             TT_06_147954_L                 TABLE
             TT_06_AGENT_STATUS             TABLE
            TT_06_AR_PARAMS                TABLE
            TT_06_CACHE_STATS              TABLE
            TT_06_DATABASES                TABLE
            TT_06_DBSPECIFIC_PARAMS        TABLE
            TT_06_DB_PARAMS                TABLE
            TT_06_DDL_L                    TABLE
            TT_06_DDL_TRACKING             TABLE
            TT_06_LOG_SPACE_STATS          TABLE
            TT_06_SYNC_OBJS                TABLE
            TT_06_USER_COUNT               TABLE
    15 rows selected.
        SQL>
    After the cache group was generated, lots of archive logs were generated. So I modified the log table "TT_06_147954_L" to "nologging".
    Will any problems occur?
    Thank you.

    If you ever need to recover the Oracle database, or this table, in any way then you are hosed and things will break. Also, I'm pretty sure this is not supported. Why is the logging a problem?
    Chris

  • Error while running spatial queries on a table with more than one geometry.

    Hello,
    I'm using GeoServer with an Oracle Spatial database, and this is the second time I've run into problems because we use tables with more than one geometry.
    When GeoServer renders objects with more than one geometry on the map, it creates a query that asks for objects where one of the two geometries interacts with the query window. This type of query always fails with an "End of TNS data channel" error.
    We are running Oracle Standard 11.1.0.7.0.
    Here is a small script to demonstrate the error. Could anyone confirm that they also have this type of error? Or suggest a fix?
    What this script does:
    1. Create table object1 with two geometry columns, geom1, geom2.
    2. Create metadata (projected coordinate system).
    3. Insert a row.
    4. Create spacial indices on both columns.
    5. Run a SDO_RELATE query on one column. Everything is fine.
    6. Run a SDO_RELATE query on both columns. ERROR: "End of TNS data channel"
    7. Clean.
    CREATE TABLE object1
    (
      id NUMBER PRIMARY KEY,
      geom1 SDO_GEOMETRY,
      geom2 SDO_GEOMETRY
    );
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
    VALUES
    (
      'OBJECT1',
      'GEOM1',
      2180,
      SDO_DIM_ARRAY
      (
        SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
        SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
      )
    );
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
    VALUES
    (
      'OBJECT1',
      'GEOM2',
      2180,
      SDO_DIM_ARRAY
      (
        SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
        SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
      )
    );
    INSERT INTO object1 VALUES(1, SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(550000, 450000, NULL), NULL, NULL));
    CREATE INDEX object1_geom1_sidx ON object1(geom1) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    CREATE INDEX object1_geom2_sidx ON object1(geom2) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    SELECT *
    FROM object1
    WHERE
    SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
    SELECT *
    FROM object1
    WHERE
    SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE' OR
    SDO_RELATE("GEOM2", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
    DELETE FROM user_sdo_geom_metadata WHERE table_name = 'OBJECT1';
    DROP INDEX object1_geom1_sidx;
    DROP INDEX object1_geom2_sidx;
    DROP TABLE object1;
    Thanks for help.

    This error appears in both GeoServer and SQL*Plus.
    I set up a completely new database installation to test this error and there everything works fine. I tried it again on the previous database but I still get the same error. I also tried restarting the database, but with no luck; the error is still there. I guess something is wrong with the database installation.
    Does anyone know what could cause an error like this "End of TNS data channel"?

  • Error while importing a table with BLOB column

    Hi,
    I have a table with a BLOB column. When I export such a table it gets exported correctly, but when I import it into a different schema with a different tablespace it throws this error:
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE TABLE "CMM_PARTY_DOC" ("PDOC_DOC_ID" VARCHAR2(10), "PDOC_PTY_ID" VAR"
    "CHAR2(10), "PDOC_DOCDTL_ID" VARCHAR2(10), "PDOC_DOC_DESC" VARCHAR2(100), "P"
    "DOC_DOC_DTL_DESC" VARCHAR2(100), "PDOC_RCVD_YN" VARCHAR2(1), "PDOC_UPLOAD_D"
    "ATA" BLOB, "PDOC_UPD_USER" VARCHAR2(10), "PDOC_UPD_DATE" DATE, "PDOC_CRE_US"
    "ER" VARCHAR2(10) NOT NULL ENABLE, "PDOC_CRE_DATE" DATE NOT NULL ENABLE) PC"
    "TFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS"
    " 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "TS_AGIMSAPPOLOLIVE030"
    "4" LOGGING NOCOMPRESS LOB ("PDOC_UPLOAD_DATA") STORE AS (TABLESPACE "TS_AG"
    "IMSAPPOLOLIVE0304" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10 NOCACHE L"
    "OGGING STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEF"
    "AULT))"
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'TS_AGIMSAPPOLOLIVE0304' does not exist
    I used the import command as follows :
    imp <user/pwd@conn> file=<dmpfile.dmp> fromuser=<fromuser> touser=<touser> log=<logfile.log>
    What can I do so that this table gets imported correctly?
    Also, tell me whether the BLOB is stored in a different tablespace than the default tablespace of the user.
    Thanks in advance.

    Hello,
    You can either:
    1) create a tablespace with the same name in the destination where you are trying to import, or
    2) get the DDL of the table, modify the tablespace name to an existing tablespace name in the destination, run the DDL in the destination database, and then run your import command with the option ignore=y, which will ignore all the create errors.
    Regards,
    Vinay
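    A sketch of the import step for option 2, reusing the command from the question (the bracketed values are placeholders from the original post):
    imp <user/pwd@conn> file=<dmpfile.dmp> fromuser=<fromuser> touser=<touser> ignore=y log=<logfile.log>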

  • Loop at table with unspecified type but with where-condition

    Hi,
    Looping over an internal table with an unspecified type, while also applying a condition, can be done as follows. The condition would be "... WHERE parentid EQ i_nodeid" if the type of <it_htab> were static; however, dynamic specification of a component through bracketed character-type data objects is not possible.
    FIELD-SYMBOLS: <it_htab>   TYPE STANDARD TABLE,
                   <wa_htab>   TYPE ANY,
                   <parentid>  TYPE rsparent.
      ASSIGN me->ref_htab->* TO <it_htab>.
      LOOP AT <it_htab> ASSIGNING <wa_htab>.
        ASSIGN COMPONENT 'PARENTID' OF STRUCTURE <wa_htab> TO <parentid>.
        CHECK <parentid> EQ i_nodeid.
      ENDLOOP.
    Since you have to loop over the whole table and check within the loop whether the condition is fulfilled, this is rather bad for performance.
    Questions: Are there any tricks to do this better?
    Best Regards and Thank you,
    Ingo

    >
    Lalit Mohan Gupta wrote:
    > you can put the condition in the where clause....
    only if you have the upcoming 7.0 EhP2 (kernel 7.02 or 7.20) does the following dynamic WHERE work:
    DATA cond_syntax TYPE string.
    cond_syntax = `parentid = i_nodeid`.
    LOOP AT <it_htab> ASSIGNING <wa_htab>
                      WHERE (cond_syntax).
    ENDLOOP.
    in older releases you would have to use program generation to achieve a dynamic WHERE.
    Kind regards,
    Hermann

  • Load fact table with null dimension keys

    Dear All,
    We have OWB 10g R2 and a ROLAP star schema. In our source system some rows don't have all attributes populated with values (null values), and these empty attributes are dimension (business) keys in the star schema. Is it possible to load the fact table with such rows (some dimension keys null) in the OWB mappings? We use the cube operator in the mappings.
    Thanks And Regards
    Miran

    The dimension should have a row indicating UNKNOWN; this will have a business key outside of the normal range, e.g. -999999.
    In the mapping the missing business keys can then be NVL'd to -999999.
    Cheers
    Si

  • Can't update a sql-table with a space

    Hello,
    In a transaction I'm getting some values from an SAP ERP system via JCo.
    I update a SQL table with these values with a sql-query command.
    But sometimes the values I get from SAP ERP are empty (space) and I'm not able to update the SQL table because of a null-value exception (the column doesn't allow null values). It seems that MII treats null and space as the same.
    I tried something like this when passing the value to the sql-query parameter, but it didn't work:
    stringif( Repeater_Result.Output{/item/SCHGT} == "X", "X", " ")
    This works, but I don't want to have a "_":
    stringif( Repeater_Result.Output{/item/SCHGT} == "X", "X", "_")
    Any suggestions?
    thank you.
    Matthias

    The problem is that Oracle doesn't know the space function. But it knows a similar function: NVL, which replaces a null value with something else. So this statement works fine for me:
    update marc set
    LGort = '[Param.3]',
    dispo = '[Param.4]',
    schgt = NVL('[Param.5]', ' '),
    dismm = '[Param.6]',
    sobsl = NVL('[Param.7]',' '),
    fevor = '[Param.8]'
    where matnr = '[Param.1]' and werks = '[Param.2]'
    If Param.5 or Param.7 is null, Oracle replaces it with a space; in every other case it is the parameter itself.
    Christian, thank you for your hint with the space function. It reminded me of the NVL function.
    Regards
    Matthias
