Compressed table

Hi,
I need to alter a table of almost 120 million records, making it compressed.
Is it an ALTER I can run without problems?
Is it a locking operation?
What should I take care of before and after the ALTER?
Thanks in advance,
Samuel Rabini

No version number.
No information as to the type of compression you are considering.
What ALTER TABLE statement? Isn't that also important for helping you?
I don't see how you can expect help when what you posted is equivalent to "My car won't start, tell me why?"
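For reference, a minimal sketch of what such an ALTER typically looks like (assuming Oracle and basic table compression; the table and index names are placeholders). ALTER TABLE ... MOVE COMPRESS rewrites the whole segment, blocks DML against the table while it runs, and leaves the indexes UNUSABLE, so they must be rebuilt afterwards; on 11g, DBMS_REDEFINITION is the usual online alternative.
-- A minimal sketch, assuming Oracle basic table compression; names are placeholders.
-- MOVE rewrites the whole segment and blocks DML on the table while it runs.
alter table big_table move compress;
-- MOVE leaves the table's indexes UNUSABLE, so rebuild them afterwards.
alter index big_table_pk rebuild;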

Similar Messages

  • Compress nonclustered index on a compressed table

    Hi all,
    I've compressed a big table; the space shrank from 180GB to 20GB using page compression.
    I've observed that this table also has 50GB of indexes, and that space has remained the same.
    1) Is it possible to compress a nonclustered index on an already compressed table?
    2) Is it a best practice?

    ALTER INDEX...
    https://msdn.microsoft.com/en-us/library/ms188388.aspx
    You saved the disk space, that's fine, but now check whether there is any performance impact on your queries. Do you observe any improvement in terms of performance?
    http://blogs.technet.com/b/swisssql/archive/2011/07/09/sql-server-database-compression-speed-up-your-applications-without-programming-and-complex-maintenance.aspx
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
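    A minimal sketch of the rebuild the ALTER INDEX link describes (T-SQL; the index and table names are placeholders):
    -- Rebuild one nonclustered index with page compression (names are placeholders).
    ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
    -- Or rebuild every index on the table in one statement.
    ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);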

  • Space reusage after deletion in compressed table

    Hi,
    Some sources say that free space after a DELETE in a compressed table is not reused.
    For example, this http://www.trivadis.com/uploads/tx_cabagdownloadarea/table_compression2_0411EN.pdf
    Is it true?
    Unfortunately I cannot reproduce it.

    Unfortunately the question is still open.
    In Oracle 9i, space freed by a DELETE in a compressed block was not reused by subsequent inserts.
    Isn't that so?
    I have seen plenty of evidence of this from other people; one link is given above.
    But in Oracle 10g I see different figures: after deleting rows in compressed blocks and then inserting into those blocks, the block gets defragmented!
    If anyone knows of documentation about a change in this behavior, please post links.
    p.s.
    in 10g:
    1. CTAS with COMPRESS. The block is full.
    2. Then deleted 4 out of every 5 rows.
    avsp=0x3b
    tosp=0x99e
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0xea8 -- deleted
    0x28:pri[2]     offs=0xea0 -- deleted
    0x2a:pri[3]     offs=0xe98 -- deleted
    0x2c:pri[4]     offs=0xe90 -- deleted
    0x2e:pri[5]     offs=0xe88 -- live
    0x30:pri[6]     offs=0xe80 -- deleted
    0x32:pri[7]     offs=0xe78 -- deleted
    0x34:pri[8]     offs=0xe70 -- deleted
    0x36:pri[9]     offs=0xe68 -- deleted
    0x38:pri[10]     offs=0xe60 -- live
    0x3a:pri[11]     offs=0xe58 -- deleted
    0x3c:pri[12]     offs=0xe50 -- deleted
    0x3e:pri[13]     offs=0xe48 -- deleted
    0x40:pri[14]     offs=0xe40 -- deleted
    0x42:pri[15]     offs=0xe38  -- live
    0x44:pri[16]     offs=0xe30 -- deleted
    0x46:pri[17]     offs=0xe28 -- deleted
    0x48:pri[18]     offs=0xe20 -- deleted
    0x4a:pri[19]     offs=0xe18 -- deleted
    0x4c:pri[20]     offs=0xe10 -- live
    ...
    3. insert into table t select from ... where rownum < 1000;
    The inserted rows went into several blocks. The total number of non-empty blocks did not change. No row chaining occurred.
    The block above now looks as follows:
    avsp=0x7d
    tosp=0x7d
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0x776 - new
    0x28:pri[2]     offs=0x84b - new
    0x2a:pri[3]     offs=0x920 - new
    0x2c:pri[4]     offs=0x9f5 - new
    0x2e:pri[5]     offs=0xea8 - old
    0x30:pri[6]     offs=0xaca - new
    0x32:pri[7]     offs=0xb9f - new
    0x34:pri[8]     offs=0x34d - new
    0x36:pri[9]     offs=0x422 - new
    0x38:pri[10]     offs=0xea0 - old
    0x3a:pri[11]     offs=0x4f7 - new
    0x3c:pri[12]     offs=0x5cc - new
    0x3e:pri[13]     offs=0x6a1 - new
    0x40:pri[14]     sfll=16  
    0x42:pri[15]     offs=0xe98 - old
    0x44:pri[16]     sfll=17
    0x46:pri[17]     sfll=18
    0x48:pri[18]     sfll=19
    0x4a:pri[19]     sfll=21
    0x4c:pri[20]     offs=0xe90 -- old
    0x4e:pri[21]     sfll=22
    0x50:pri[22]     sfll=23
    0x52:pri[23]     sfll=24
    0x54:pri[24]     sfll=26
    As we can see, the old rows were defragmented, repacked, and moved to the bottom of the block.
    New rows (inserted after the table was compressed) fill the remaining space.
    So the deleted space was reused.
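    For anyone wanting to reproduce this, a minimal sketch of how such a block dump can be obtained (assuming Oracle; the table name t is from the test above, and the file and block numbers in the dump command are placeholders to be taken from the query output):
    -- Find the file and block number of a row of interest.
    select dbms_rowid.rowid_relative_fno(rowid) file_no,
           dbms_rowid.rowid_block_number(rowid) block_no
    from   t
    where  rownum = 1;
    -- Dump that block to a trace file and inspect the pri[] row directory entries there.
    alter system dump datafile 4 block 130;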

  • 11.2.0.3.3  impdp compress table

    Hi ML:
    Source database: 10.2.0.3, compressed tables.
    Target: 11.2.0.3.3. When impdp imports the source's compressed tables, do they end up as compressed tables on the target?
    Previously, when importing directly into a 10g database via impdp over a dblink, I found that the loaded tables had to be compressed manually with a MOVE COMPRESS.
    The test in the MOS document says that starting with 10g, import automatically maintains compressed tables:
    Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.1 - Release: 9.2 to 11.2
    Information in this document applies to any platform.
    Symptoms
    Original import utility bypasses the table compression or does not compress, if the table is precreated as compressed. Please follow the next example that demonstrates this.
    connect / as sysdba
    create tablespace tbs_compress datafile '/tmp/tbs_compress01.dbf' size 100m;
    create user test identified by test default tablespace tbs_compress temporary tablespace temp;
    grant connect, resource to test;
    connect test/test
    -- create compressed table
    create table compressed (
    id number,
    text varchar2(100)
    ) pctfree 0 pctused 90 compress;
    -- create non-compressed table
    create table noncompressed (
    id number,
    text varchar2(100)
    ) pctfree 0 pctused 90 nocompress;
    -- populate compressed table with data
    begin
    for i in 1..100000 loop
    insert into compressed values (1, lpad ('1', 100, '0'));
    end loop;
    commit;
    end;
    /
    -- populate non-compressed table with identical data
    begin
    for i in 1..100000 loop
    insert into noncompressed values (1, lpad ('1', 100, '0'));
    end loop;
    commit;
    end;
    /
    -- compress the table COMPRESSED (previous insert doesn't use the compression)
    alter table compressed move compress;
    Let's now take a look at data dictionary to see the differences between the two tables:
    connect test/test
    select dbms_metadata.get_ddl ('TABLE', 'COMPRESSED') from dual;
    DBMS_METADATA.GET_DDL('TABLE','COMPRESSED')
    CREATE TABLE "TEST"."COMPRESSED"
    ( "ID" NUMBER,
    "TEXT" VARCHAR2(100)
    ) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 COMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBS_COMPRESS"
    1 row selected.
    SQL> select dbms_metadata.get_ddl ('TABLE', 'NONCOMPRESSED') from dual;
    DBMS_METADATA.GET_DDL('TABLE','NONCOMPRESSED')
    CREATE TABLE "TEST"."NONCOMPRESSED"
    ( "ID" NUMBER,
    "TEXT" VARCHAR2(100)
    ) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBS_COMPRESS"
    1 row selected.
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME        BYTES  EXTENTS  BLOCKS
    COMPRESSED        2097152       17     256
    NONCOMPRESSED    11534336       26    1408
    2 rows selected.
    The table COMPRESSED needs less storage space than the table NONCOMPRESSED. Now, let's export the tables using the original export utility:
    #> exp test/test file=test_compress.dmp tables=compressed,noncompressed compress=n
    About to export specified tables via Conventional Path ...
    . . exporting table COMPRESSED 100000 rows exported
    . . exporting table NONCOMPRESSED 100000 rows exported
    Export terminated successfully without warnings.
    and then import them back:
    connect test/test
    drop table compressed;
    drop table noncompressed;
    #> imp test/test file=test_compress.dmp tables=compressed,noncompressed
    . importing TEST's objects into TEST
    . . importing table "COMPRESSED" 100000 rows imported
    . . importing table "NONCOMPRESSED" 100000 rows imported
    Import terminated successfully without warnings.
    Verify the extents after original import:
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME        BYTES  EXTENTS  BLOCKS
    COMPRESSED       11534336       26    1408
    NONCOMPRESSED    11534336       26    1408
    2 rows selected.
    => The table compression is gone.
    Cause
    This is expected behaviour. Import does not perform bulk load/direct path operations, so the data is not inserted compressed.
    Only direct path operations, such as CTAS (Create Table As Select) and SQL*Loader direct path, will compress data. These operations include:
    •Direct path SQL*Loader
    •CREATE TABLE and AS SELECT statements
    •Parallel INSERT (or serial INSERT with an APPEND hint) statements
    Solution
    The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data:
    alter table compressed move compress;
    Beginning with Oracle version 10g, DataPump utilities (expdp/impdp) perform direct path operations and so the table compression is maintained, like in the following example:
    - after creating/populating the two tables, export them with:
    #> expdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
    . . exported "TEST"."COMPRESSED" 10.30 MB 100000 rows
    Master table "TEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    and re-import after deletion with:
    #> impdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
    . . imported "TEST"."COMPRESSED" 10.30 MB 100000 rows
    Job "TEST"."SYS_IMPORT_TABLE_01" successfully completed at 12:47:51
    Verify the extents after DataPump import:
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME        BYTES  EXTENTS  BLOCKS
    COMPRESSED        2097152       17     256
    NONCOMPRESSED    11534336       26    1408
    2 rows selected.
    => The table compression is kept.
    ===========================================================
    1. Does 11.2.0.3 actually support impdp automatically maintaining compressed tables when importing via a dblink?
    2.
    This is an expected behaviour. Import is not performing a bulk load/direct path operations, so the data is not inserted as compressed.
    Only Direct path operations such as CTAS (Create Table As Select), SQL*Loader Direct Path will compress data. These operations include:
    •Direct path SQL*Loader
    •CREATE TABLE and AS SELECT statements
    •Parallel INSERT (or serial INSERT with an APPEND hint) statements
    Solution
    The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data:
    Does the above mean that before 10g the approach above (a manual move compress) was required for the target to end up with compressed tables, and that starting with 10g automatic compression is supported? It seems that 10g also required a manual move.

    ODM TEST:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> create table nocompres tablespace users as select * from dba_objects;
    Table created.
    SQL> create table compres_tab tablespace users as select * from dba_objects;
    Table created.
    SQL> alter table compres_tab compress 3;
    Table altered.
    SQL> alter table compres_tab move ;
    Table altered.
    select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    BYTES/1024/1024 SEGMENT_NAME
                  3 COMPRES_TAB
                  9 NOCOMPRES
    C:\Users\ML>expdp  maclean/oracle dumpfile=temp:COMPRES_TAB2.dmp  tables=COMPRES_TAB
    Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:12 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** dumpfile=temp:COMPRES_TAB2.dmp tables=COMPRES_TAB
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 3 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "MACLEAN"."COMPRES_TAB"                     7.276 MB   75264 rows
    Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
      D:\COMPRES_TAB2.DMP
    Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 12:01:20
    C:\Users\ML>impdp maclean/oracle remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
    Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:47 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MACLEAN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "MACLEAN"."SYS_IMPORT_FULL_01":  maclean/******** remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "MACLEAN1"."COMPRES_TAB"                    7.276 MB   75264 rows
    Job "MACLEAN"."SYS_IMPORT_FULL_01" successfully completed at 12:01:50
      1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    SQL> /
    BYTES/1024/1024 SEGMENT_NAME
                  3 COMPRES_TAB
    SQL> drop table compres_tab;
    Table dropped.
    C:\Users\ML>exp maclean/oracle tables=COMPRES_TAB file=compres1.dmp
    Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:19 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
    About to export specified tables via Conventional Path ...
    . . exporting table                    COMPRES_TAB      75264 rows exported
    Export terminated successfully without warnings.
    C:\Users\ML>
    C:\Users\ML>imp maclean/oracle  fromuser=maclean touser=maclean1  file=compres1.dmp
    Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:45 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V11.02.00 via conventional path
    import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
    . importing MACLEAN's objects into MACLEAN1
    . . importing table                  "COMPRES_TAB"      75264 rows imported
    Import terminated successfully without warnings.
    SQL> conn maclean1/oracle
    Connected.
      1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    SQL> /
    BYTES/1024/1024 SEGMENT_NAME
                  8 COMPRES_TAB
    My understanding: a direct load can always preserve compression.
    But imp defaults to the conventional path, i.e. it loads the data with ordinary INSERTs through the buffer cache, so it cannot preserve compression.
    impdp, on the other hand, preserves compression regardless of whether its access_method is external_table or direct_path.
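    A minimal sketch of how to verify the result after an import (assuming Oracle 11g; the table name is taken from the test above):
    -- Check the compression attribute and the space actually used after the import.
    select table_name, compression, compress_for from user_tables where table_name = 'COMPRES_TAB';
    select segment_name, bytes/1024/1024 mb from user_segments where segment_name = 'COMPRES_TAB';
    -- If the rows arrived uncompressed (e.g. via conventional-path imp), recompress them with a move.
    alter table compres_tab move compress;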

  • How to add a column with a default value to a compressed table

    Hi,
    While trying to add a column with a default value to a compressed table, I am getting an error.
    I even tried the NOCOMPRESS command on the table, but it still gives an error that add/drop is not allowed on a compressed table.
    Can anyone help me with this?
    Thanks.

    Aman wrote:
    while trying to add column to compressed table with default value i am getting error.
    This is clearly explained in the Oracle doc:
    "You cannot add a column with a default value to a compressed table or to a partitioned table containing any compressed partition, unless you first disable compression for the table or partition"
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#sthref5163
    Nicolas.
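    A minimal sketch of the usual workarounds (assuming basic table compression; the table and column names are placeholders, and whether the MOVE steps are required depends on the version):
    -- Option 1: add the column without a default, then backfill it.
    alter table my_tab add (new_col varchar2(5));
    update my_tab set new_col = 'NONE';
    -- Option 2: decompress first, add the column with its default, then recompress.
    alter table my_tab move nocompress;
    alter table my_tab add (new_col2 varchar2(5) default 'NONE');
    alter table my_tab move compress;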

  • Compressed tables with more than 255 columns

    Hi,
    Would anyone have a SQL query to find compressed tables with more than 255 columns?
    Thank you
    Jonu

    SELECT table_name,
           Count(column_name)
    FROM   user_tab_columns utc
    WHERE  utc.table_name IN (SELECT table_name
                              FROM   user_tables
                              WHERE  compression = 'ENABLED')
    GROUP  BY table_name
    HAVING Count(column_name) > 255

  • Drop column from compressed table

    NLSRTL                                     11.2.0.3.0  Production
    Oracle Database 11g Enterprise Edition     11.2.0.3.0  64bit Production
    PL/SQL                                     11.2.0.3.0  Production
    TNS for Linux:                             11.2.0.3.0  Production
    Hello,
    I read about how to drop a column from a compressed table: first set it unused and then drop the unused columns. However, in the example below, on the database where I ran it, this does not work. Can you please tell me when this approach does not work? What does it depend on: parameters or something else? Why can't I drop the unused columns?
    And the example along with the errors:
    create table tcompressed compress as select * from all_users;
    > table TCOMPRESSED created.
    alter table tcompressed add x number;
    > table TCOMPRESSED altered.
    alter table tcompressed drop column x;
    >
    Error report:
    SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
    39726. 00000 -  "unsupported add/drop column operation on compressed tables"
    *Cause:    An unsupported add/drop column operation for compressed table
               was attemped.
    *Action:   When adding a column, do not specify a default value.
               DROP column is only supported in the form of SET UNUSED column
               (meta-data drop column).
    alter table tcompressed set unused column x;
    > table TCOMPRESSED altered.
    alter table tcompressed drop unused columns;
    >
    Error report:
    SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
    39726. 00000 -  "unsupported add/drop column operation on compressed tables"
    *Cause:    An unsupported add/drop column operation for compressed table
               was attemped.
    *Action:   When adding a column, do not specify a default value.
               DROP column is only supported in the form of SET UNUSED column
               (meta-data drop column).
    As you can see, even after altering the table by setting the column X as unused, I still cannot drop it using DROP UNUSED COLUMNS.
    Thank you.

    Check this link, it might help. At the end it also mentions a bug; check that as well.
    http://sharpcomments.com/2008/10/ora-39726-unsupported-adddrop-column-operation-on-compressed-tables.html
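    A minimal sketch of the workaround commonly suggested for this error (assuming basic compression; whether it applies depends on the version and on the bug mentioned in the link):
    -- Decompress the segment first, then the drop succeeds, then recompress.
    alter table tcompressed move nocompress;
    alter table tcompressed drop unused columns;
    alter table tcompressed move compress;
    -- MOVE leaves any indexes UNUSABLE, so rebuild them afterwards.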

  • Importing into a compressed table

    Is importing into a compressed table effective?
    I mean, does the data get compressed when using the imp utility to load into a compressed table?
    From the Oracle doc:
    Compression occurs while data is being bulk inserted or bulk loaded. These operations include:
    ·Direct Path SQL*Loader
    ·CREATE TABLE … AS SELECT statement
    ·Parallel INSERT (or serial INSERT with an APPEND hint) statement

    A quick journey over to the SQL Reference manual, where things like table-create options are defined, gives us the following comment:
    table_compression
    The table_compression clause is valid only for heap-organized tables. Use this clause to instruct the database whether to compress data segments to reduce disk use. This clause is especially useful in environments such as data warehouses, where the amount of insert and update operations is small. The COMPRESS keyword enables table compression. The NOCOMPRESS keyword disables table compression. NOCOMPRESS is the default.
    When you enable table compression, Oracle Database attempts to compress data during direct-path INSERT operations when it is productive to do so. The original import utility (imp) does not support direct-path INSERT, and therefore cannot import data in a compressed format. You can specify table compression for the following portions of a heap-organized table:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2095331
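    A minimal sketch contrasting the two paths the doc describes (assuming Oracle basic table compression; the table names are placeholders):
    -- Conventional insert: rows land uncompressed even though the table is defined COMPRESS.
    insert into compressed_tab select * from source_tab;
    -- Direct-path insert: rows are compressed as they are loaded.
    insert /*+ append */ into compressed_tab select * from source_tab;
    commit;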

  • How to add column to compressed table

    Hi gurus,
    Can anyone tell me how to add a column to compressed tables?
    Thanks in advance

    The only difference is if the added column has a default value. In that case:
    SQL> create table tbl(id number,val varchar2(10))
      2  /
    Table created.
    SQL> insert into tbl
      2  select level,lpad('X',10,'X')
      3  from dual
      4  connect by level <= 100000
      5  /
    100000 rows created.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       3145728
    SQL> alter table tbl move compress
      2  /
    Table altered.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       2097152
    SQL> alter table tbl add name varchar2(5) default 'NONE'
      2  /
    alter table tbl add name varchar2(5) default 'NONE'
    ERROR at line 1:
    ORA-39726: unsupported add/drop column operation on compressed tables
    SQL> alter table tbl add name varchar2(5)
      2  /
    Table altered.
    SQL> update tbl set name = 'NONE'
      2  /
    100000 rows updated.
    SQL> commit
      2  /
    Commit complete.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       7340032
    SQL> select compression from user_tables where table_name = 'TBL'
      2  /
    COMPRESS
    ENABLED
    SQL> alter table tbl move compress
      2  /
    Table altered.
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
         BYTES
       2097152
    SQL>
    SY.

  • BRSPACE create new tablespace for compressed table

    Dear expert,
    I'm planning to create a new tablespace PSAPCOMP for compressed tables via brspace.
    The current total size of the tables that I want to compress is 1 TB.
    Now I have 2 questions about the brspace script:
    1. How big should PSAPCOMP be; say, 500 GB?
    2. I'm confused about the datafile_dir value I should use (please refer to the attachment for the current datafile_dir design).
    Could you help correct/improve the scripts below?
    script A: brspace -f tscreate -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 --> assign to sapdata4
    repeat script B 20 times
    script B: brspace -f tsextend -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 -f1 4 -f2 4 -f3 4 -f4 4 -f5 4 --> extend 25 GB in one run
    Question: is it OK to assign PSAPCOMP only to "sapdata4", or should I specify "-f sapdata1 sapdata2 sapdata3 sapdata4" so the table data can be distributed among the different sapdata directories?
    Thank you!

    Hi Kate,
    Some of the questions depend on the "customer" decision.
    Is it OK to assign PSAPCOMP only to "sapdata4"?
    How much space is available? What kind of storage do they have? Is it fully striped or not? Is there "free" space on the other filesystems?
    1. How big should PSAPCOMP be; say, 500 GB?
    As I explained to you already, the expectation is that by applying all the compressions you can save about half of the space, but you have to test this, as it depends on the content of the tables and the selectivity of the indexes. Use the Oracle package to simulate the savings for the tables you are going to compress; see the sketch after this post.
    Do you want the scripts interactive (option -c) or not?
    The SAP Database Guide: Oracle has all the possible options, so you only have to check them.
    brspace -f tscreate -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 --> assign to sapdata4
    If you want to create a 500 GB tablespace, why do you limit the maximum size to 15360?
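    A minimal sketch of the compression-advisor call referred to above (assuming Oracle 11.2 and OLTP compression; the schema, table, and scratch tablespace names are placeholders, and the exact parameter list can vary by version):
    set serveroutput on
    declare
      l_blkcnt_cmp    number;
      l_blkcnt_uncmp  number;
      l_row_cmp       number;
      l_row_uncmp     number;
      l_cmp_ratio     number;
      l_comptype_str  varchar2(100);
    begin
      -- Estimate the OLTP compression ratio for one candidate table.
      dbms_compression.get_compression_ratio(
        scratchtbsname => 'PSAPTEMP_SCRATCH',   -- placeholder scratch tablespace
        ownname        => 'SAPSR3',             -- placeholder schema
        tabname        => 'BIG_TABLE',          -- placeholder table
        partname       => null,
        comptype       => dbms_compression.comp_for_oltp,
        blkcnt_cmp     => l_blkcnt_cmp,
        blkcnt_uncmp   => l_blkcnt_uncmp,
        row_cmp        => l_row_cmp,
        row_uncmp      => l_row_uncmp,
        cmp_ratio      => l_cmp_ratio,
        comptype_str   => l_comptype_str);
      dbms_output.put_line('Estimated ratio: ' || l_cmp_ratio || ' (' || l_comptype_str || ')');
    end;
    /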

  • Drop unused column in compressed table

    Good day to all.
    I have a partitioned compressed table in a 10.2.0.4 database. I need to remove a column and did it according to Metalink note 429898.1, but when I execute the command alter table ... drop unused columns, I get error ORA-39726 (unsupported add/drop column operation on compressed tables), so it seems the proposed workaround no longer works.
    Does anyone have a new workaround for this problem?

    alter table MY_TABLE set unused(MY_FNG_COLUMN);
    -- success
    alter table MY_TABLE drop unused columns;
    -- error

  • TimesTen synchronize with OLTP-compressed table

    Hi,
    We are looking at potentially using TimesTen as a frontend data store for our Oracle database. The database is at version 11.2.0.3, and many tables in the database are OLTP-compressed (Advanced Compression). Is synchronizing with compressed tables supported? Are there any known issues doing that?
    Thanks,
    George

    Thanks Chris, for your quick reply.
    George

  • DB6CONV v5.04 not fully compressing tables

    In testing the new version of DB6CONV using the "Compress Data" option with an online table move, I have had two instances so far where the resulting compression % was very low. For example:
    - compression check against table EKPO (159 GB) estimated 76%
    - created a new large tablespace with 16k page size to move/compress EKPO into (EKPO is currently in a regular 16k page tablespace)
    - moved/compressed EKPO into the new tablespace and obtained 55% compression
    - ran the compression check again and it still reports 76%
    - created a 2nd large 16k page size tablespace and moved EKPO into this one (with the compress data option checked) and got 76% compression
    Any idea what is causing this?
    DB2 9.5 FP 5
    ECC 604

    hi, i want to publish the findings on this topic as of today:
    activating the COMPRESS DATA option within DB6CONV always results in the target table of the conversion being created with "COMPRESS YES". but this alone does not mean that an optimal compression dictionary is created during the conversion. for an optimal compression dictionary SAMPLING needs to be used.
    1. all OFFLINE CONVERSIONS with COMPRESS DATA use sampling.
    2. all ONLINE CONVERSIONS using ADMIN_MOVE_TABLE (DB2 >=V9.7) check the compression flag of the TARGET table and use sampling accordingly.
    so the compression dictionary after conversions of  these two classes will be optimal.
    3. all ONLINE CONVERSIONS using ONLINE_TABLE_MOVE (DB2 < V9.7) in contrast check the compression flag of the SOURCE(!) table and use sampling accordingly.
    so the compression dictionary after converting a formerly compressed table using ONLINE_TABLE_MOVE will be optimal (SAMPLING will be used), whereas during the conversion with ONLINE_TABLE_MOVE of a formerly not compressed table sampling will NOT be used! the compression dictionary will be created at some point in time by ADC and thus be suboptimal.
    this explains the behaviour detected by eddie:
    the first conversion of an uncompressed table with option "compress" resulted in a 55% compression rate,
    the second conversion of the same table (but at this time already compressed) resulted in 76% compression rate.
    this problem is planned to be solved with the next version of ONLINE_TABLE_MOVE .
    until then the workaround to ensure the best compression dictionary is to issue an "ALTER TABLE <SOURCETAB> COMPRESS YES" on the source table before starting the conversion (side effect: this might lead to unnecessary data compression traffic/workload on the source table as well).
    this is meanwhile documented in the DB6CONV OSS note 1513862 as well.
    regards, frank
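    A minimal sketch of that workaround (DB2 SQL; the schema is a placeholder, EKPO is the table from the thread):
    -- Mark the source table as compressed before starting the DB6CONV conversion,
    -- so the online table move builds its compression dictionary using sampling.
    ALTER TABLE SAPSR3.EKPO COMPRESS YES;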

  • Compressed table using Oracle Advanced Compression but space is not released

    Dear Experts,
    I am using Oracle 11.2.0.2 with SAP as the application.
    I have compressed one of my largest identified tables, "SWWCNTP0", with the statement below:
    Alter table sapsr3.SWWCNTP0 compress for oltp;
    After this I performed a reorg of the table, but it didn't release the space:
    before reorg 39 GB
    after reorg 39 GB
    Please advise
    Regards
    Navin Somal

    906078 wrote:
    Dear Experts,
    I am using oracle 11.2.0.2 with sap as application
    I have compressed one of my largest indetified table "SWWCNTP0" as per below statement
    Alter table sapsr3.SWWCNTP0 compress for oltp;
    After this i have performed reorg for the above table but it doesnt released the space
    before reorg 39gb
    after reorg 39g
    Please advice
    Regards
    Navin Somal
    How did you go about doing the reorg?
    Here's a simple working example ...
    ME_ORCL?create table to_compress as
      2  select * from all_objects
      3  union all
      4  select * from all_objects
      5  union all
      6  select * from all_objects
      7  union all
      8  select * from all_objects
      9  union all
    10  select * from all_objects;
    Table created.
    Elapsed: 00:00:16.74
    ME_ORCL?select bytes
      2  from dba_segments
      3  where segment_name = 'TO_COMPRESS';
                 BYTES
              44040192
    1 row selected.
    Elapsed: 00:00:00.06
    ME_ORCL?alter table to_compress compress for oltp;
    Table altered.
    Elapsed: 00:00:00.09
    --we don't expect any change here, because this only affects subsequent operations on the table, we need to MOVE the table to see any benefits in the existing data
    ME_ORCL?select bytes
      2  from dba_segments
      3  where segment_name = 'TO_COMPRESS';
                 BYTES
              44040192
    1 row selected.
    Elapsed: 00:00:00.08
    ME_ORCL?alter table to_compress move compress for oltp;
    Table altered.
    Elapsed: 00:00:01.08
    ME_ORCL?select bytes
      2  from dba_segments
      3  where segment_name = 'TO_COMPRESS';
                 BYTES
              14680064
    1 row selected.
    Elapsed: 00:00:00.02
    ME_ORCL?

  • ABAP code to compress a table into a table of fixed-length strings

    Hi all,
    I need to compress a large, sparse table into a table of fixed-length strings in my ABAP code, and then uncompress it.  Is there a standard format or facility to do this?  I seem to remember a function that does this, but any other hints would be appreciated.
    Thank you in advance,
    Sunny

    For example, given a table like:
    Column0    Column1    Column2
    abc            C               !@#&@
                     P
    def                              $*(
    Compress it into a table consisting of one column of fixed-length strings, like:
    Column0
    0abc1C2
    !@#&@01
    P20def1
    2$*(
    ..and then uncompress it back out to the original table.
    Sunny
