Compress nonclustered index on a compressed table

Hi all,
I've compressed a big table; its space shrank from 180 GB to 20 GB using page compression.
I've observed that this table also has 50 GB of indexes, and that space has remained the same.
1) Is it possible to compress a nonclustered index on an already compressed table?
2) Is it a best practice?

ALTER INDEX...
https://msdn.microsoft.com/en-us/library/ms188388.aspx
You saved the disk space, that's fine, but now check whether there is any performance impact on the queries. Do you observe any improvement in terms of performance?
http://blogs.technet.com/b/swisssql/archive/2011/07/09/sql-server-database-compression-speed-up-your-applications-without-programming-and-complex-maintenance.aspx
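To answer question 1: yes, in SQL Server row/page compression is set per index, so a nonclustered index on an already compressed table can be compressed independently with ALTER INDEX ... REBUILD. A minimal sketch (the table and index names dbo.BigTable and IX_BigTable_Col1 are hypothetical placeholders):
-- Estimate the savings first (per index; @index_id = NULL means all indexes)
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'BigTable',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';
-- Rebuild one nonclustered index with page compression
ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);
-- Or rebuild every index on the table with page compression
ALTER INDEX ALL ON dbo.BigTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);
Whether it is a best practice (question 2) depends on the workload: compression trades extra CPU on reads and writes for smaller I/O, so compare the estimate and the query times before and after.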
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/

Similar Messages

  • BRSPACE create new tablespace for compressed table

    Dear expert,
    I'm planning to create a new tablespace PSAPCOMP for compressed tables via brspace.
    The current total size of the tables that I want to compress is 1 TB.
    Now I have two questions about the brspace script:
    1. How big should PSAPCOMP be? Let's say 500 GB?
    2. I'm confused about the datafile_dir value I should use (please refer to the attachment for the current datafile_dir design).
    Could you help correct/improve the following scripts?
    scriptA : brspace -f tscreate -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 --> assign to sapdata4
    repeat scriptB 20 times
    scriptB : brspace -f tsextend -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 -f1 4 -f2 4 -f3 4 -f4 4 -f5 4 --> extend 25G in one run
    Question: is it OK to assign PSAPCOMP only to "sapdata4", or should I specify "-f sapdata1 sapdata2 sapdata3 sapdata4" so the table data can be distributed among different sapdata directories?
    Thank you!

    Hi Kate,
    Some of the questions depend on the "customer" decision.
    is it OK to assign the PSAPCOMP only to "sapdata4"?
    How much space is available? What kind of storage do they have? Is it fully striped or not? Is there "free" space on the other filesystems?
    1. How big should PSAPCOMP be? Let's say 500 GB?
    As I explained already, it is expected that by applying all compressions you can save about half of the space, but you have to test this, as it depends on the content of the tables and the selectivity of the indexes. Use the Oracle package to simulate the savings for the tables you are going to compress.
    Do you want the scripts interactive (option -c) or not?
    The SAP Database Guide: Oracle lists all possible options, so you only have to check them.
    brspace -f tscreate -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 --> assign to sapdata4
    If you want to create a 500 GB tablespace, why do you limit the maximum size to 15360?
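    The "Oracle package" mentioned above is presumably DBMS_COMPRESSION (available from Oracle 11g onward); a hedged sketch of estimating the savings for one table follows. The schema, table, and scratch-tablespace names are hypothetical placeholders:
    SET SERVEROUTPUT ON
    DECLARE
      l_blkcnt_cmp    PLS_INTEGER;
      l_blkcnt_uncmp  PLS_INTEGER;
      l_row_cmp       PLS_INTEGER;
      l_row_uncmp     PLS_INTEGER;
      l_cmp_ratio     NUMBER;
      l_comptype_str  VARCHAR2(100);
    BEGIN
      DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
        scratchtbsname => 'PSAPSCRATCH',              -- scratch tablespace for the sample copy
        ownname        => 'SAPSR3',                   -- hypothetical table owner
        tabname        => 'EKPO',                     -- hypothetical table to test
        partname       => NULL,
        comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
        blkcnt_cmp     => l_blkcnt_cmp,
        blkcnt_uncmp   => l_blkcnt_uncmp,
        row_cmp        => l_row_cmp,
        row_uncmp      => l_row_uncmp,
        cmp_ratio      => l_cmp_ratio,
        comptype_str   => l_comptype_str);
      DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio ||
                           ' (' || l_comptype_str || ')');
    END;
    /
    On databases older than 11g, a test CTAS with COMPRESS into a scratch tablespace is the usual way to measure the ratio instead.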

  • Space reusage after deletion in compressed table

    Hi,
    Some sources say that the free space left after a DELETE in a compressed table is not reused.
    For example, this: http://www.trivadis.com/uploads/tx_cabagdownloadarea/table_compression2_0411EN.pdf
    Is it true?
    Unfortunately I cannot reproduce it.

    Unfortunately the question is still open.
    In Oracle 9i, space freed after a DELETE in a compressed block was not reused by subsequent inserts.
    Isn't that so?
    I saw plenty of evidence of this from other people; one link I gave above.
    But in Oracle 10g I see a different picture: after deleting rows in compressed blocks and then inserting into those blocks, the block is defragmented!
    If anyone knows of documentation about a change in this behavior, please post links.
    p.s.
    In 10g:
    1. CTAS with COMPRESS. The block is full.
    2. Afterwards, deleted 4 out of every 5 rows.
    avsp=0x3b
    tosp=0x99e
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0xea8 -- deleted
    0x28:pri[2]     offs=0xea0 -- deleted
    0x2a:pri[3]     offs=0xe98 -- deleted
    0x2c:pri[4]     offs=0xe90 -- deleted
    0x2e:pri[5]     offs=0xe88 -- live
    0x30:pri[6]     offs=0xe80 -- deleted
    0x32:pri[7]     offs=0xe78 -- deleted
    0x34:pri[8]     offs=0xe70 -- deleted
    0x36:pri[9]     offs=0xe68 -- deleted
    0x38:pri[10]     offs=0xe60 -- live
    0x3a:pri[11]     offs=0xe58 -- deleted
    0x3c:pri[12]     offs=0xe50 -- deleted
    0x3e:pri[13]     offs=0xe48 -- deleted
    0x40:pri[14]     offs=0xe40 -- deleted
    0x42:pri[15]     offs=0xe38  -- live
    0x44:pri[16]     offs=0xe30 -- deleted
    0x46:pri[17]     offs=0xe28 -- deleted
    0x48:pri[18]     offs=0xe20 -- deleted
    0x4a:pri[19]     offs=0xe18 -- deleted
    0x4c:pri[20]     offs=0xe10 -- live
    ...
    3. insert into table t select * from ... where rownum < 1000;
    The inserted rows went into several blocks. The total number of non-empty blocks did not change. No row chaining occurred.
    The block above now looks as follows:
    avsp=0x7d
    tosp=0x7d
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0x776 - new
    0x28:pri[2]     offs=0x84b - new
    0x2a:pri[3]     offs=0x920 - new
    0x2c:pri[4]     offs=0x9f5 - new
    0x2e:pri[5]     offs=0xea8 - old
    0x30:pri[6]     offs=0xaca - new
    0x32:pri[7]     offs=0xb9f - new
    0x34:pri[8]     offs=0x34d - new
    0x36:pri[9]     offs=0x422 - new
    0x38:pri[10]     offs=0xea0 - old
    0x3a:pri[11]     offs=0x4f7 - new
    0x3c:pri[12]     offs=0x5cc - new
    0x3e:pri[13]     offs=0x6a1 - new
    0x40:pri[14]     sfll=16  
    0x42:pri[15]     offs=0xe98 - old
    0x44:pri[16]     sfll=17
    0x46:pri[17]     sfll=18
    0x48:pri[18]     sfll=19
    0x4a:pri[19]     sfll=21
    0x4c:pri[20]     offs=0xe90 -- old
    0x4e:pri[21]     sfll=22
    0x50:pri[22]     sfll=23
    0x52:pri[23]     sfll=24
    0x54:pri[24]     sfll=26
    As we see, the old rows were defragmented, repacked, and moved to the bottom of the block.
    New rows (inserted after the table was compressed) fill the remaining space.
    So, the deleted space was reused.
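    A hedged sketch of how the test above can be reproduced (the table name T_COMP is hypothetical, and the file/block numbers must be substituted from the query output before running the dumps):
    create table t_comp compress as
      select * from all_objects where rownum <= 10000;
    -- delete roughly 4 out of every 5 rows
    delete from t_comp where mod(object_id, 5) <> 0;
    commit;
    -- pick a file/block to inspect
    select dbms_rowid.rowid_relative_fno(rowid) file_no,
           dbms_rowid.rowid_block_number(rowid) block_no
    from   t_comp
    where  rownum = 1;
    -- dump the block (substitute the numbers returned above) and check avsp/tosp in the trace file
    alter system dump datafile <file_no> block <block_no>;
    -- re-insert rows, then dump the same block again and compare
    insert into t_comp select * from all_objects where rownum < 1000;
    commit;
    alter system dump datafile <file_no> block <block_no>;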

  • 11.2.0.3.3  impdp compress table

    Hi ML,
    Source database: 10.2.0.3, with compressed tables.
    Target: 11.2.0.3.3. When the source's compressed tables are imported with impdp, will they be compressed tables on the target?
    Previously, when importing into a 10g database directly via impdp over a database link, I found that the loaded tables needed a manual move compress.
    The MOS document's test says that starting with 10g, import automatically maintains compressed tables:
    Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.1 - Release: 9.2 to 11.2
    Information in this document applies to any platform.
    Symptoms
    Original import utility bypasses the table compression or does not compress, if the table is precreated as compressed. Please follow the next example that demonstrates this.
    connect / as sysdba
    create tablespace tbs_compress datafile '/tmp/tbs_compress01.dbf' size 100m;
    create user test identified by test default tablespace tbs_compress temporary tablespace temp;
    grant connect, resource to test;
    connect test/test
    -- create compressed table
    create table compressed (
    id number,
    text varchar2(100)
    ) pctfree 0 pctused 90 compress;
    -- create non-compressed table
    create table noncompressed (
    id number,
    text varchar2(100)
    ) pctfree 0 pctused 90 nocompress;
    -- populate compressed table with data
    begin
    for i in 1..100000 loop
    insert into compressed values (1, lpad ('1', 100, '0'));
    end loop;
    commit;
    end;
    -- populate non-compressed table with identical data
    begin
    for i in 1..100000 loop
    insert into noncompressed values (1, lpad ('1', 100, '0'));
    end loop;
    commit;
    end;
    -- compress the table COMPRESSED (previous insert doesn't use the compression)
    alter table compressed move compress;
    Let's now take a look at data dictionary to see the differences between the two tables:
    connect test/test
    select dbms_metadata.get_ddl ('TABLE', 'COMPRESSED') from dual;
    DBMS_METADATA.GET_DDL('TABLE','COMPRESSED')
    CREATE TABLE "TEST"."COMPRESSED"
    ( "ID" NUMBER,
    "TEXT" VARCHAR2(100)
    ) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 COMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBS_COMPRESS"
    1 row selected.
    SQL> select dbms_metadata.get_ddl ('TABLE', 'NONCOMPRESSED') from dual;
    DBMS_METADATA.GET_DDL('TABLE','NONCOMPRESSED')
    CREATE TABLE "TEST"."NONCOMPRESSED"
    ( "ID" NUMBER,
    "TEXT" VARCHAR2(100)
    ) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBS_COMPRESS"
    1 row selected.
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME BYTES EXTENTS BLOCKS
    COMPRESSED 2097152 17 256
    NONCOMPRESSED 11534336 26 1408
    2 rows selected.
    The table COMPRESSED needs less storage space than the table NONCOMPRESSED. Now, let's export the tables using the original export utility:
    #> exp test/test file=test_compress.dmp tables=compressed,noncompressed compress=n
    About to export specified tables via Conventional Path ...
    . . exporting table COMPRESSED 100000 rows exported
    . . exporting table NONCOMPRESSED 100000 rows exported
    Export terminated successfully without warnings.
    and then import them back:
    connect test/test
    drop table compressed;
    drop table noncompressed;
    #> imp test/test file=test_compress.dmp tables=compressed,noncompressed
    . importing TEST's objects into TEST
    . . importing table "COMPRESSED" 100000 rows imported
    . . importing table "NONCOMPRESSED" 100000 rows imported
    Import terminated successfully without warnings.
    Verify the extents after original import:
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME BYTES EXTENTS BLOCKS
    COMPRESSED 11534336 26 1408
    NONCOMPRESSED 11534336 26 1408
    2 rows selected.
    => The table compression is gone.
    Cause
    This is an expected behaviour. Import is not performing a bulk load/direct path operations, so the data is not inserted as compressed.
    Only Direct path operations such as CTAS (Create Table As Select), SQL*Loader Direct Path will compress data. These operations include:
    •Direct path SQL*Loader
    •CREATE TABLE and AS SELECT statements
    •Parallel INSERT (or serial INSERT with an APPEND hint) statements
    Solution
    The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data:
    alter table compressed move compress;
    Beginning with Oracle version 10g, DataPump utilities (expdp/impdp) perform direct path operations and so the table compression is maintained, like in the following example:
    - after creating/populating the two tables, export them with:
    #> expdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
    . . exported "TEST"."COMPRESSED" 10.30 MB 100000 rows
    Master table "TEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    and re-import after deletion with:
    #> impdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
    . . imported "TEST"."COMPRESSED" 10.30 MB 100000 rows
    Job "TEST"."SYS_IMPORT_TABLE_01" successfully completed at 12:47:51
    Verify the extents after DataPump import:
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME BYTES EXTENTS BLOCKS
    COMPRESSED 2097152 17 256
    NONCOMPRESSED 11534336 26 1408
    2 rows selected.
    => The table compression is kept.
    ===========================================================
    1. Does 11.2.0.3 actually support impdp automatically maintaining compressed tables when importing over a database link?
    2.
    This is an expected behaviour. Import is not performing a bulk load/direct path operations, so the data is not inserted as compressed.
    Only Direct path operations such as CTAS (Create Table As Select), SQL*Loader Direct Path will compress data. These operations include:
    •Direct path SQL*Loader
    •CREATE TABLE and AS SELECT statements
    •Parallel INSERT (or serial INSERT with an APPEND hint) statements
    Solution
    The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data:
    The above seems to mean that before 10g you had to use one of those methods to get compressed data into the target, and that from 10g onward compression is maintained automatically? But it looks as if 10g also needed a manual move.

    ODM TEST:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> create table nocompres tablespace users as select * from dba_objects;
    Table created.
    SQL> create table compres_tab tablespace users as select * from dba_objects;
    Table created.
    SQL> alter table compres_tab compress 3;
    Table altered.
    SQL> alter table compres_tab move ;
    Table altered.
    select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    BYTES/1024/1024 SEGMENT_NAME
                  3 COMPRES_TAB
                  9 NOCOMPRES
    C:\Users\ML>expdp  maclean/oracle dumpfile=temp:COMPRES_TAB2.dmp  tables=COMPRES_TAB
    Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:12 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** dumpfile=temp:COMPRES_TAB2.dmp tables=COMPRES_TAB
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 3 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "MACLEAN"."COMPRES_TAB"                     7.276 MB   75264 rows
    Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
      D:\COMPRES_TAB2.DMP
    Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 12:01:20
    C:\Users\ML>impdp maclean/oracle remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
    Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:47 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MACLEAN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "MACLEAN"."SYS_IMPORT_FULL_01":  maclean/******** remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "MACLEAN1"."COMPRES_TAB"                    7.276 MB   75264 rows
    Job "MACLEAN"."SYS_IMPORT_FULL_01" successfully completed at 12:01:50
      1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    SQL> /
    BYTES/1024/1024 SEGMENT_NAME
                  3 COMPRES_TAB
    SQL> drop table compres_tab;
    Table dropped.
    C:\Users\ML>exp maclean/oracle tables=COMPRES_TAB file=compres1.dmp
    Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:19 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
    About to export specified tables via Conventional Path ...
    . . exporting table                    COMPRES_TAB      75264 rows exported
    Export terminated successfully without warnings.
    C:\Users\ML>
    C:\Users\ML>imp maclean/oracle  fromuser=maclean touser=maclean1  file=compres1.dmp
    Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:45 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V11.02.00 via conventional path
    import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
    . importing MACLEAN's objects into MACLEAN1
    . . importing table                  "COMPRES_TAB"      75264 rows imported
    Import terminated successfully without warnings.
    SQL> conn maclean1/oracle
    Connected.
      1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    SQL> /
    BYTES/1024/1024 SEGMENT_NAME
                  8 COMPRES_TAB
    My understanding: a direct load can always preserve compression,
    but imp defaults to the conventional path, i.e. it loads with ordinary INSERTs through the buffer cache, so it cannot preserve compression,
    whereas impdp preserves compression regardless of whether access_method is external_table or direct_path.
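    A small hedged sketch illustrating the same point on a basic-compressed table (the table name COMP_DEMO is hypothetical); only the direct-path insert stores the rows compressed:
    create table comp_demo compress as
      select * from dba_objects where 1 = 0;     -- empty compressed table
    -- conventional insert: rows are NOT stored compressed
    insert into comp_demo select * from dba_objects;
    commit;
    -- direct-path insert: rows ARE stored compressed
    insert /*+ append */ into comp_demo select * from dba_objects;
    commit;
    -- compare the segment size in user_segments after each step,
    -- or repack everything afterwards with:
    alter table comp_demo move compress;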

  • How to Add column with default value in compress table.

    Hi ,
    While trying to add a column with a default value to a compressed table, I am getting an error.
    I even tried the NOCOMPRESS command on the table, but it still gives an error saying that add/drop is not allowed on a compressed table.
    Can anyone help me with this?
    Thanks.

    Aman wrote:
    while trying to add column to compressed table with default value i am getting error.
    This is clearly explained in the Oracle doc:
    "You cannot add a column with a default value to a compressed table or to a partitioned table containing any compressed partition, unless you first disable compression for the table or partition"
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#sthref5163
    Nicolas.

  • Compressed tables with more than 255 columns

    Hi,
    Would anyone have a SQL query to find compressed tables with more than 255 columns?
    Thank you
    Jonu

    SELECT table_name,
           Count(column_name)
    FROM   user_tab_columns utc
    WHERE  utc.table_name IN (SELECT table_name
                              FROM   user_tables
                              WHERE  compression = 'ENABLED')
    GROUP  BY table_name
    HAVING Count(column_name) > 255

  • Drop column from compressed table

    Oracle Database 11g Enterprise Edition 11.2.0.3.0 - 64bit Production
    PL/SQL 11.2.0.3.0 - Production
    TNS for Linux: 11.2.0.3.0 - Production
    NLSRTL 11.2.0.3.0 - Production
    Hello,
    I read about how to drop a column from a compressed table: first set it unused and then drop the unused columns. However, in the example below, on the database where I ran it, this does not work. Please, can you tell me WHEN this approach does not work? What does it depend on, parameters or something else? Why can I not drop the unused columns?
    And the example along with the errors:
    create table tcompressed compress as select * from all_users;
    > table TCOMPRESSED created.
    alter table tcompressed add x number;
    > table TCOMPRESSED altered.
    alter table tcompressed drop column x;
    >
    Error report:
    SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
    39726. 00000 -  "unsupported add/drop column operation on compressed tables"
    *Cause:    An unsupported add/drop column operation for compressed table
               was attempted.
    *Action:   When adding a column, do not specify a default value.
               DROP column is only supported in the form of SET UNUSED column
               (meta-data drop column).
    alter table tcompressed set unused column x;
    > table TCOMPRESSED altered.
    alter table tcompressed drop unused columns;
    >
    Error report:
    SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
    39726. 00000 -  "unsupported add/drop column operation on compressed tables"
    *Cause:    An unsupported add/drop column operation for compressed table
               was attempted.
    *Action:   When adding a column, do not specify a default value.
               DROP column is only supported in the form of SET UNUSED column
               (meta-data drop column).
    As you can see, even after altering the table by setting column X as unused, I still cannot drop it by using DROP UNUSED COLUMNS.
    Thank you.

    Check this link, it might help. At the end it also mentions a bug; check that as well.
    http://sharpcomments.com/2008/10/ora-39726-unsupported-adddrop-column-operation-on-compressed-tables.html
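    If the drop keeps failing with ORA-39726 even after SET UNUSED, one workaround that generally works is to rebuild the table without the column via CTAS and swap the names. A hedged sketch based on the TCOMPRESSED example above (built from ALL_USERS, so USERNAME, USER_ID and CREATED are its original columns; TCOMPRESSED_NEW is a hypothetical name):
    create table tcompressed_new compress as
      select username, user_id, created   -- every column except X
      from   tcompressed;
    -- re-create any indexes, grants and constraints on the new table here
    drop table tcompressed;
    rename tcompressed_new to tcompressed;
    For a table that must stay online during the rebuild, DBMS_REDEFINITION is the usual alternative.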

  • Importing into a compressed table

    Is importing into a compressed table effective?
    I mean, does the data get compressed when using the imp utility to load into a compressed table?
    From the Oracle doc:
    Compression occurs while data is being bulk inserted or bulk loaded. These operations include:
    ·Direct Path SQL*Loader
    ·CREATE TABLE … AS SELECT statement
    ·Parallel INSERT (or serial INSERT with an APPEND hint) statement

    A quick journey over to the SQL Reference manual, where things like table-create options are defined, gives us the following comment:
    table_compression
    The table_compression clause is valid only for heap-organized tables. Use this clause to instruct the database whether to compress data segments to reduce disk use. This clause is especially useful in environments such as data warehouses, where the amount of insert and update operations is small. The COMPRESS keyword enables table compression. The NOCOMPRESS keyword disables table compression. NOCOMPRESS is the default.
    When you enable table compression, Oracle Database attempts to compress data during direct-path INSERT operations when it is productive to do so. The original import utility (imp) does not support direct-path INSERT, and therefore cannot import data in a compressed format. You can specify table compression for the following portions of a heap-organized table:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2095331

  • How to add column to compressed table

    Hi gurus,
    Can any one help me how to add a column to compressed tables
    Thanks in advance

    The only difference is if the added column has a default value. In that case:
    SQL> create table tbl(id number,val varchar2(10))
      2  /
    Table created.
    SQL> insert into tbl
      2  select level,lpad('X',10,'X')
      3  from dual
      4  connect by level <= 100000
      5  /
    100000 rows created.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       3145728
    SQL> alter table tbl move compress
      2  /
    Table altered.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       2097152
    SQL> alter table tbl add name varchar2(5) default 'NONE'
      2  /
    alter table tbl add name varchar2(5) default 'NONE'
    ERROR at line 1:
    ORA-39726: unsupported add/drop column operation on compressed tables
    SQL> alter table tbl add name varchar2(5)
      2  /
    Table altered.
    SQL> update tbl set name = 'NONE'
      2  /
    100000 rows updated.
    SQL> commit
      2  /
    Commit complete.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       7340032
    SQL> select compression from user_tables where table_name = 'TBL'
      2  /
    COMPRESS
    ENABLED
    SQL> alter table tbl move compress
      2  /
    Table altered.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       2097152
    SY.

  • Drop unused column in compressed table

    Good day to all.
    I have a partitioned compressed table in a 10.2.0.4 database. I need to remove a column, and did it according to Metalink note 429898.1, but when I execute the command ALTER TABLE ... DROP UNUSED COLUMNS, I get error ORA-39726 (unsupported add/drop column operation for compressed tables), so it seems the proposed workaround does not work now.
    Does anyone have a new workaround for this problem?

    alter table MY_TABLE set unused(MY_FNG_COLUMN);
    -- success
    alter table MY_TABLE drop unused columns;
    -- error

  • TimesTen synchronize with OLTP-compressed table

    Hi,
    We are looking at potentially using TimesTen as a frontend data store for our Oracle database. The database is at version 11.2.0.3, and many tables in the database are OLTP-compressed (Advanced Compression). Is synchronizing with compressed tables supported? Are there any known issues doing that?
    Thanks,
    George

    Thanks Chris, for your quick reply.
    George

  • DB6CONV v5.04 not fully compressing tables

    In testing the new version of DB6CONV using the "Compress Data" option with an online table move, I have had two instances so far where the resulting compression % was much lower than estimated. For example:
    - compression check against table EKPO ( 159GB ) estimated 76%
    - created a new large tablespace with 16k page size to move / compress EKPO into (EKPO is currently in a regular 16k page tablespace)
    - moved / compressed EKPO into the new tablespace and obtained 55% compression
    - ran the compression check again and it still reports 76%
    - created a 2nd large 16k page size tablespace and moved EKPO into this one (with the compress data option checked) and received 76% compression
    Any idea what is causing this?
    DB2 9.5 FP 5
    ECC 604

    Hi, I want to publish the findings on this topic as of today:
    Activating the COMPRESS DATA option within DB6CONV always results in the target table of the conversion being created with "COMPRESS YES". But this alone does not mean that an optimal compression dictionary is created during the conversion; for an optimal compression dictionary, SAMPLING needs to be used.
    1. All OFFLINE conversions with COMPRESS DATA use sampling.
    2. All ONLINE conversions using ADMIN_MOVE_TABLE (DB2 >= V9.7) check the compression flag of the TARGET table and use sampling accordingly.
    So the compression dictionary after conversions of these two classes will be optimal.
    3. All ONLINE conversions using ONLINE_TABLE_MOVE (DB2 < V9.7), in contrast, check the compression flag of the SOURCE(!) table and use sampling accordingly.
    So the compression dictionary after converting a formerly compressed table using ONLINE_TABLE_MOVE will be optimal (sampling will be used), whereas during the conversion with ONLINE_TABLE_MOVE of a formerly uncompressed table, sampling will NOT be used! The compression dictionary will then be created at some point in time by ADC and thus be suboptimal.
    This explains the behaviour detected by Eddie:
    the first conversion of an uncompressed table with the "compress" option resulted in a 55% compression rate,
    the second conversion of the same table (by that time already compressed) resulted in a 76% compression rate.
    This problem is planned to be solved with the next version of ONLINE_TABLE_MOVE.
    Until then, the workaround to ensure the best compression dictionary is to issue an "ALTER TABLE <SOURCETAB> COMPRESS YES" on the source table before starting the conversion (side effect: this will/might lead to unnecessary data compression (traffic/workload) on the source table as well).
    this is meanwhile documented in the DB6CONV OSS note 1513862 as well.
    regards, frank

  • Can we create secondary index for a cluster table

    Hi,
    can we create a secondary index for a cluster table?

    Jyothsna,
    There seems to be some kind of misunderstanding here. You <i>cannot</i> create a secondary index on a cluster table. A cluster table does not exist as a separate physical table in the database; it is part of a "physical cluster". In the case of BSEG for instance, the physical cluster is RFBLG. The only fields of the cluster table that also exist as fields of the physical cluster are the leading fields of the primary key. Taking again BSEG as the example, the primary key includes the fields MANDT, BUKRS, BELNR, GJAHR, BUZEI. If you look at the structure of the RFBLG table, you will see that it has primary key fields MANDT, BUKRS, BELNR, GJAHR, PAGENO. The first four fields are those that all cluster tables inside BSEG have in common. The fifth field, PAGENO, is a "technical" field giving the sequence number of the current record in the series of cluster records sharing the same primary key.
    All the "functional" fields of the cluster table (for BSEG this is field BUZEI and everything beyond that) exist only inside a raw binary object. The database does not know about these fields, it only sees the raw object (the field VARDATA of the physical cluster). Since the field does not exist in the database, it is impossible to create a secondary index on it. If you try to create a secondary index on a cluster table in transaction SE11, you will therefore rightly get the error "Index maintenance only possible for transparent tables".
    Theoretically you could get around this by converting the cluster table to a transparent table. You can do this in the SAP dictionary. However, in practice this is almost never a good solution. The table becomes much larger (clusters are compressed) and you lose the advantage that related records are stored close to each other (the main reason for having cluster tables in the first place). Apart from the performance and disk space hit, converting a big cluster table like BSEG to transparent would take extremely long.
    In cases where "indexing" of fields of a cluster table is worthwhile, SAP has constructed "indexing tables" around the cluster. For example, around BSEG there are transparent tables like BSIS, BSAS, etc. Other clusters normally do not have this, but that simply means there is no reason for having it. I have worked with the SAP dictionary for over 12 years and I have never met a single case where it was necessary to convert a cluster to transparent.
    If you try to select on specific values of a non-transparent field in a cluster without also specifying selections for the primary key, then the database will have to do a serial read of the whole physical cluster (and the ABAP DB interface will have to decompress every single record to extract the fields). The performance of that is monstrous -- maybe that was the reason of your question. However, the solution then is (in the case of BSEG) to query via one of the index tables (where you are free to create secondary indexes since those tables are transparent).
    Hope this clarifies things,
    Mark

  • SQL 2005 re-enable nonclustered index ?

    Hello, what's the syntax to re-enable a nonclustered index that was disabled?
    And after the index is back online, does its fragmentation remain the same as before it was disabled, or does the fragmentation get cleared by the rebuild?
    Thanks in advance.

    How to Enable and Disable Indexes in SQL Server 2005
    When an index is disabled, you can enable it by either rebuilding the index or re-creating the index. When a clustered index is disabled, nonclustered indexes for the table are automatically disabled, too. When the clustered index is rebuilt or re-created, the nonclustered indexes are not automatically enabled unless the option to rebuild all indexes is used.
    Follow these steps to enable the clustered index on your table; let's say your database is AdventureWorks:
    From within SQL Server Management Studio, expand SQL01\INSTANCE01, Databases, AdventureWorks, and then Tables. Expand the table you wish to work on.
    Expand the Indexes folder located beneath your table. Right-click the index (e.g. PK_Address_AddressID) and select Rebuild.
    When the Rebuild Index window opens, verify that the correct index is listed and then click OK. When the clustered index has been rebuilt, the data can once again be queried.
    NOTE:
    However, in YOUR CASE the nonclustered indexes cannot be selected by the query optimizer, because they need to be enabled individually. You can use the same procedure to enable each nonclustered index. Alternatively, you can use the following code to rebuild all indexes on the table, effectively enabling each index as the rebuild completes:
    USE [AdventureWorks]
    GO
    ALTER INDEX ALL ON [Person].[Address] REBUILD
    GO
    How to Enable and Disable Indexes in SQL Server 2008
    USE AdventureWorks
    GO
    ----Disable Index
    ALTER INDEX [IX_StoreContact_ContactTypeID] ON Sales.StoreContact DISABLE
    GO
    ----Enable Index
    ALTER INDEX [IX_StoreContact_ContactTypeID] ON Sales.StoreContact REBUILD
    GO
    I hope this will help you a lot, good luck!
    Please mark this as "Mark as Answer" or "Helpful Post" if it has answered your question, as that is very helpful for those who have the same question.

  • How to create index on specific partition table?

    Hi Experts,
    We created 4 partitions on a table.
    Table name: test
    Partitions: Test_prt1
                Test_prt2
                Test_prt3
                Test_prt4
    Our requirement is to create the index on a specific partition (e.g. Test_prt2) only.

    Creating Partitioned Tables and Indexes
    http://technet.microsoft.com/en-us/library/ms187526(v=sql.105).aspx
    You can create an aligned index; the index will be spread over the filegroups of the partition scheme:
    Create NonClustered Index IX_orders_aligned
    On dbo.orders(order_id)
    On test_monthlyDateRange_ps(orderDate);
    OR
    With an unaligned (non-partitioned) index, you can create the index on any filegroup:
    Create NonClustered Index IX_orders_unpartitioned
    On dbo.orders(order_id)
    On [Test_prt2_FileGroup];
    For more information refer the below link
    http://sqlfool.com/2008/12/indexing-for-partitioned-tables/
    Or
    You can try creating a filtered index (I've not tried it though; a sketch follows after the link below)
    http://www.mssqltips.com/sqlservertip/1785/sql-server-filtered-indexes-what-they-are-how-to-use-and-performance-advantages/
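    A hedged sketch of that filtered-index idea, reusing the hypothetical dbo.orders table and orderDate column from the examples above and assuming, purely for illustration, that Test_prt2 holds February 2014 data:
    CREATE NONCLUSTERED INDEX IX_orders_feb2014_filtered
    ON dbo.orders (order_id)
    WHERE orderDate >= '20140201' AND orderDate < '20140301';
    The filtered index only covers the rows in that date range, which approximates "indexing just one partition", but note that it is a separate index and is not automatically aligned with the partition scheme.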
    --Prashanth
