Compress a table

How do you compress a table?

Hi 404045 :-)
To compress
ALTER TABLE schema.table_name COMPRESS;
If you change your mind:
ALTER TABLE schema.table_name NOCOMPRESS;
data_segment_compression
The data_segment_compression clause is valid only for heap-organized tables. Use this clause to instruct Oracle whether to compress data segments to reduce disk and memory use. The COMPRESS keyword enables data segment compression. The NOCOMPRESS keyword disables data segment compression.
Note:
The first time a table is altered in such a way that compressed data will be added, all bitmap indexes and bitmap index partitions on that table must be marked UNUSABLE.
See Also:
Oracle9i Database Performance Tuning Guide and Reference for information on calculating the compression ratio and to Oracle9i Data Warehousing Guide for information on data compression usage scenarios
data_segment_compression clause of CREATE TABLE for information on creating objects with data segment compression
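A note on scope, since the clause above only changes the table attribute: ALTER TABLE ... COMPRESS marks the table so that future bulk (direct-path) loads are stored compressed; rows already in the table are left as they are. A minimal sketch for compressing the existing data as well (the schema, table and index names are placeholders):
ALTER TABLE schema.table_name MOVE COMPRESS;
-- The MOVE rebuilds the segment, so the existing rows are now stored compressed.
-- It also leaves the table's indexes UNUSABLE, so rebuild them afterwards:
ALTER INDEX schema.index_name REBUILD;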
Regards.
Daniel
Message edited by:
DanielRey

Similar Messages

  • How can I add a new column in compress partition table.

    I have a compressed partition table. When I add a new column to that table it gives me the error "ORA-22856: CANNOT ADD COLUMNS TO OBJECT TABLES". I had created the table with this clause. How can I add a new column to a compressed partition table?
    CREATE TABLE Employee (
      Empno Number,
      Tr_Date Date
    ) COMPRESS
    PARTITION BY RANGE (Tr_Date) (
      PARTITION FIRST Values LESS THAN (To_Date('01-JUL-2006','DD-MON-YYYY')),
      PARTITION JUNK Values LESS THAN (MAXVALUE));
    Note:
    When I create the table with this clause it will allow me to add a column.
    CREATE TABLE Employee (
      Empno Number,
      Tr_Date Date
    )
    PARTITION BY RANGE (Tr_Date) (
      PARTITION FIRST Values LESS THAN (To_Date('01-JUL-2006','DD-MON-YYYY')),
      PARTITION JUNK Values LESS THAN (MAXVALUE));
    But for this I would have to drop and recreate the table, and I don't want that because my table is online and I cannot take that risk. Please give me the best solution.

    Hi Fahed,
    I guess you are using Oracle9i Database Release 9.2.0.2, and the table which you need to alter is in an OLTP environment where data is usually inserted using regular inserts. As a result, these tables generally do not get much benefit from using table compression. Table compression works best on read-only tables that are loaded once but read many times. Tables used in data warehousing applications, for example, are great candidates for table compression.
    Reference : http://www.oracle.com/technology/oramag/oracle/04-mar/o24tech_data.html
    Topic : When to Use Table Compression
    Bug
    Reference : http://dba.ipbhost.com/lofiversion/index.php/t147.html
    BUG:<2421054>
    Affects: RDBMS (9-A0)
    NB: FIXED
    Abstract: ENH: Allow ALTER TABLE to ADD/DROP columns for tables using COMPRESS feature
    Details:
    This is an enhancement to allow "ALTER TABLE" to ADD/DROP
    columns for tables using the COMPRESS feature.
    In 9i errors are reported for ADD/DROP but the text may
    be misleading:
    eg:
    ADD column fails with "ORA-22856: cannot add columns to object tables"
    DROP column fails with "ORA-12996: cannot drop system-generated virtual column"
    Note that a table which was previously marked as compress which has
    now been altered to NOCOMPRESS also signals such errors as the
    underlying table could still contain COMPRESS format datablocks.
    As of 10i ADD/SET UNUSED is allowed provided the ADD has no default value.
    Best Regards,
    Muhammad Waseem Haroon
    [email protected]
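    For reference, a commonly cited workaround on this release is to rewrite the partitions uncompressed, add the column, and compress again. This is only a sketch: it rewrites all data, is not an online operation (so it may not fit the availability constraint above), invalidates local indexes, and the new column name is purely hypothetical.
    -- Check which partitions are currently marked compressed
    SELECT partition_name, compression
      FROM dba_tab_partitions
     WHERE table_name = 'EMPLOYEE';
    -- Rewrite the partitions uncompressed
    ALTER TABLE Employee MOVE PARTITION FIRST NOCOMPRESS;
    ALTER TABLE Employee MOVE PARTITION JUNK NOCOMPRESS;
    -- Add the (hypothetical) column, then compress the partitions again
    ALTER TABLE Employee ADD (Dept_Id NUMBER);
    ALTER TABLE Employee MOVE PARTITION FIRST COMPRESS;
    ALTER TABLE Employee MOVE PARTITION JUNK COMPRESS;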

  • Doubt in compression of tables

    Hello,
    I have a doubt regarding compression of tables.
    I just compressed three tables in my database and took a dump (export) of the compressed tables. Later I took a dump of the same tables uncompressed (having the same data as the compressed tables). But there was no difference in the file size of the two dumps. Both dumps are of the same size; in fact the dump of the compressed tables is slightly larger by some MBs compared to the dump of the uncompressed tables.
    My doubt is: why does the size of the dumps not differ? Does compressing not change the physical size of the data?

    An export dump contains the logical row data, not the database blocks, so block-level table compression does not change the size of an exp dump file.
    Table compression is supported (and useful) in the following cases:
    1) direct path SQL*Loader,
    2) create table as select,
    3) parallel inserts or inserts with the APPEND hint,
    4) single-row or array inserts and updates (OLTP compression, 11g).
    If you want to compress dump data, use the compression feature of Data Pump (11g).
    Note: Advanced Compression is an extra-cost Enterprise Edition option.
    Werner
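    To see the effect of compression, compare the segment sizes rather than the dump files; a quick sketch (assuming access to DBA_SEGMENTS; the owner and table names are placeholders):
    SELECT segment_name, SUM(bytes)/1024/1024 AS size_mb
      FROM dba_segments
     WHERE owner = 'MY_OWNER'
       AND segment_name IN ('TAB1', 'TAB2', 'TAB3')
     GROUP BY segment_name;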

  • How to compress Segment/Table to 1 extent in Oracle 8.0.6?

    The best strategy I have found to compress a table's extents is to: 1) export the table, 2) drop the table, 3) import the table.
    However, this is proving to be very problematic. When you export a table with constraints=y, it does not include the foreign key constraints on child tables. But if you drop the table with the "cascade constraints" option, all foreign key constraints on those child tables are dropped, and there is no way to re-create them when the table is imported.
    My export parameters are indexes=y constraints=y grants=y, and the import parameters are rows=y constraints=y indexes=y grants=y.
    I came up with an alternate strategy which does not include dropping the table. Using ignore=y on the import would bring the extents down to 2.
    Any suggestions?
    Thank you,
    Tom.
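    Before dropping the table, one way to capture the foreign keys on the child tables so they can be scripted and recreated after the import is to query the dictionary. A sketch using old-style joins suitable for 8.0.6 (the owner and table names are placeholders):
    SELECT c.owner, c.table_name, c.constraint_name, cc.column_name, cc.position
      FROM dba_constraints c, dba_constraints p, dba_cons_columns cc
     WHERE c.constraint_type   = 'R'
       AND c.r_owner           = p.owner
       AND c.r_constraint_name = p.constraint_name
       AND p.owner             = 'SCOTT'
       AND p.table_name        = 'EMP'
       AND cc.owner            = c.owner
       AND cc.constraint_name  = c.constraint_name
     ORDER BY c.owner, c.constraint_name, cc.position;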


  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, but I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible), and how you are taking the export, please?
    Did you try to take a SQL trace of the export process to see what is going on behind the scenes? This is an introduction if you need one:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
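    For the DDL requested above, something like this should do (a sketch; the owner and table names are placeholders):
    SET LONG 1000000 PAGESIZE 0
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'MY_OWNER') FROM dual;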

  • Why should avoid OLTP compression on tables with massive update/insert?

    Dear expert,
    We are planning Oracle OLTP compression on an IS-U system. Could you tell me:
    Why should OLTP compression be avoided on tables with massive update/insert activity?
    What kind of impact on performance can be expected in the worst case?
    Best regards,
    Kate

    Hi
    When updating compressed data Oracle has to read it, uncompress it and update it.
    The compression is then performed again later, when the block fills up; this requires a lot more CPU than a simple update.
    Another drawback is that compression on highly modified tables will generate a major increase in redo/undo generation. I've experienced it on a DB where RFC tables were compressed by mistake; the redo increase was over 15%.
    Check the remark at the end of Jonathan Lewis's post:
    http://allthingsoracle.com/compression-in-oracle-part-3-oltp-compression/
    "Possibly this is all part of the trade-off that helps to explain why Oracle doesn't end up compressing the last few rows that get inserted into the block.
    The effect can be investigated fairly easily by inserting about 250 rows into the empty table - we see Oracle inserting 90 rows, then generating a lot of undo and redo as it compresses those 90 rows; then we insert another 40 rows, then generate a lot of undo and redo compressing the 130 rows. Ultimately, by the time the block is full we have processed the first 90 rows into the undo and redo four or five times."
    Regards
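    The redo overhead described above can be measured directly. A rough sketch (the table names are hypothetical, COMPRESS FOR OLTP requires the Advanced Compression option, and access to v$mystat/v$statname is assumed):
    CREATE TABLE t_plain AS SELECT * FROM all_objects;
    CREATE TABLE t_oltp COMPRESS FOR OLTP AS SELECT * FROM all_objects;
    -- Note the session's 'redo size' statistic before and after the update,
    -- then repeat the same update against t_plain and compare the deltas.
    SELECT s.value AS redo_size
      FROM v$mystat s, v$statname n
     WHERE n.statistic# = s.statistic#
       AND n.name = 'redo size';
    UPDATE t_oltp SET object_name = object_name;
    COMMIT;
    SELECT s.value AS redo_size
      FROM v$mystat s, v$statname n
     WHERE n.statistic# = s.statistic#
       AND n.name = 'redo size';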

  • Compress partitioned table

    Hello
    I have a table with 10 partitions and I would like to compress all of them. What is the best way to accomplish this?
    Thank you.

    There are several ways to do it, depending on your availability needs.
    One option:
    1. Create a compressed staging table that has the same structure as your partitioned table.
    2. Direct path insert the data from one partition into this table; remember to order the data to get the best compression (note that changing the order will affect the clustering factor of the indexes, which could cause changes in execution plans).
    3. Create the same indexes on your staging table as you have on your main table.
    4. Exchange the current partition with the staging table (alter table ... exchange partition ...).
    Repeat it for all of your partitions.
    /Kristian
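    A sketch of one round of that procedure (all object names here are hypothetical):
    -- 1-2. Build a compressed, ordered copy of one partition's data
    CREATE TABLE sales_stage COMPRESS AS
      SELECT * FROM sales PARTITION (sales_2010) ORDER BY cust_id;
    -- 3. Create the same indexes as on the main table
    CREATE INDEX sales_stage_ix1 ON sales_stage (cust_id);
    -- 4. Swap the compressed copy in
    ALTER TABLE sales EXCHANGE PARTITION sales_2010 WITH TABLE sales_stage
      INCLUDING INDEXES WITHOUT VALIDATION;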

  • Compressing partition table

    Hi Gurus,
    I have to compress partitioned tables which have global (non-partitioned) indexes. If I do the partition compress with ALTER TABLE ... MOVE PARTITION ... COMPRESS, this will invalidate the global indexes.
    Can I use UPDATE GLOBAL INDEXES with the above statement, or is there no way to avoid the global indexes becoming unusable during the compress operation?
    Syntax for compressing a partition with UPDATE GLOBAL INDEXES would really help me.
    Thanks

    Hi,
    Have a look at:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_3001.htm#BABDIDEJ
    If the moved partitions are not empty, then the database marks them UNUSABLE. The database invalidates global indexes on heap-organized tables. You can update these indexes during this operation using the update_index_clauses.
    Regards
    Maurice
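    For the syntax question, maintaining the global indexes as part of the move looks like this (a sketch; the table and partition names are placeholders, and the move takes longer because the indexes are maintained during the operation):
    ALTER TABLE sales MOVE PARTITION sales_q1 COMPRESS UPDATE GLOBAL INDEXES;
    -- or, to maintain local indexes as well:
    ALTER TABLE sales MOVE PARTITION sales_q1 COMPRESS UPDATE INDEXES;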

  • ORA-12815 while reorg/compression of tables without LONG and LOB with 11g

    Hello fellows,
    I am in the luxury situation of having a copy of our production R/3 environment that was left over from a project and is no longer required by any of our developers.
    As we are still on Oracle 9.2.0.7, I upgraded this copy to 11.2 in a two-step process (from 9i to 10g to 11g).
    I got myself the SAP dbatools 7.20(3) and Note 1431296 - LOB conversion and table compression with BRSPACE 7.20.
    I started with some small tablespaces, but after a while I thought I'd like to try to reorg/compress the worst of all tablespaces: PSAPPOOLD, with ~15,000 tables.
    I first converted the tables with LONG fields that can be compressed online, then the ones that cannot be compressed, then I reorged the tables that contain old LOB fields online. With these different executions of the brspace commands that are also mentioned in the above note, I managed to move ~3,000 tables without any issues.
    But now I started with the biggest bunch of tables, the compression of tables without LONG and LOB fields online.
    This is the command I used:
    brspace -u / -p reorgEXCL.tab -f tbreorg -a reorg -o sapr3 -s PSAPPOOLD -t allsel -n psapreorg -i psapreorgi -c ctab -SCT
    ...after a few checks performed by brspace, I end up at the screen
    "Options for reorganization of tables" (which is still nothing unexpected):
    1 * Reorganization action (action) ............ [reorg]
    2 - Reorganization mode (mode) ................ [online]
    3 - Create DDL statements (ddl) ............... [yes]
    4 ~ New destination tablespace (newts) ........ [PSAPREORG]
    5 ~ Separate index tablespace (indts) ......... [PSAPREORGI]
    6 - Parallel threads (parallel) ............... [1]
    7 ~ Table/index parallel degree (degree) ...... []
    8 ~ Category of initial extent size (initial) . []
    9 ~ Sort by fields of index (sortind) ......... []
    10 # Index for IOT conversion (iotind) ......... [FIRST]
    11 - Compression action (compress) ............. [none]
    12 # LOB compression degree (lobcompr) ......... [medium]
    13 # Index compression method (indcompr) ....... [ora_proc]
    But independent of what I enter at points 6 and 7, I always end up with the errors below during the reorg/compression of the remaining tables.
    Just one sample, but the issue is always the same:
    BR0301E SQL error -12815 in thread 2 at location tab_onl_reorg-26, SQL statement:
    'CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0#$" ON "SAPR3"."RTXTF#$" ("MANDT", "APPLCLASS", "TEXT_NAME", "TEXT_TYPE", "FROM_LINE",
    "FROM_POS")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 1662976 NEXT 655360 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSAPREORGI" PARALLEL ( INSTANCES 0) '
    ORA-12815: value for INSTANCES must be greater than 0
    Just in case, here is the OBJECT DDL:
    CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0"
        ON "SAPR3"."RTXTF"  ("MANDT", "APPLCLASS", "TEXT_NAME",
        "TEXT_TYPE", "FROM_LINE", "FROM_POS")
        TABLESPACE "PSAPPOOLI" PCTFREE 10 INITRANS 2 MAXTRANS 255
        STORAGE ( INITIAL 1624K NEXT 640K MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
        LOGGING
    Perhaps someone already gained some experience on the compression with brspace and can give me a hint.
    Many thanks
    Florian

    Hello Florian,
    > Perhaps someone already gained some experience on the compression with brspace and can give me a hint.
    I have not performed any compression operations on Oracle 11g R2 with brspace yet, but this error seems to be very obvious.
    It seems like SAP is still not using the procedure DBMS_REDEFINITION.COPY_TABLE_DEPENDENT to create the indexes (and NOT NULL constraints) on Oracle 11g R2. No idea why; I can only think of one case (creating a DDL file before the reorganisation so that the DDL parameters can be changed during the reorganisation).
    So in your case it seems like SAP is generating wrong SQL for creating the index on the interim table.
    You can try to create the DDL file first and correct the parameters, and after that you can try to run the reorganisation again.
    Please check SAP Note 646681 (Remark 5) for more information about the procedure of creating the DDL first and then doing the reorg with the edited parameters.
    Regards
    Stefan
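    If the generated DDL file is edited as suggested, the fix for the failing statement above is to remove the invalid PARALLEL (INSTANCES 0) clause, or give INSTANCES a value of at least 1. A sketch based on the statement from the error message:
    CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0#$" ON "SAPR3"."RTXTF#$"
      ("MANDT", "APPLCLASS", "TEXT_NAME", "TEXT_TYPE", "FROM_LINE", "FROM_POS")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE (INITIAL 1662976 NEXT 655360 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0)
      TABLESPACE "PSAPREORGI" NOPARALLEL;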

  • Compression on table and on clustered index

    Hi all,
    if I have a table with a clustered index on it, is rebuilding the table with compression the same as rebuilding the clustered index with compression?
    Is this
    ALTER TABLE dbo.TABLE_001 REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW, ONLINE=ON)
    Equivalent to this?
    ALTER INDEX PK_TABLE_001 ON dbo.TABLE_001 REBUILD WITH (DATA_COMPRESSION=ROW, ONLINE=ON)
    where PK_TABLE_001 is the clustered index of the table TABLE_001

    Andrea,
    A clustered index is the table itself, organized according to the clustering key value. So applying compression to the clustered index means the table is compressed internally; and because the clustered index includes all columns of the table, the complete table is compressed.
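    As a quick check, the compression state of the table (heap or clustered index, index_id 0 or 1) can be read from sys.partitions after either statement; a sketch using the object name from the question:
    SELECT i.name AS index_name, p.partition_number, p.data_compression_desc
      FROM sys.partitions AS p
      JOIN sys.indexes AS i
        ON i.object_id = p.object_id AND i.index_id = p.index_id
     WHERE p.object_id = OBJECT_ID('dbo.TABLE_001');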

  • ASM space increased after compression of tables

    Hi all,
    I have compressed some of my huge tables in a data warehouse database. The table sizes are reduced after compression, while the space used on ASM has increased.
    Database is 10.2.0.4 (64 bit) and OS is AIX 5.3 (64 bit).

    I have checked the tablespaces of the compressed tables now, and they show huge free space:
    Tablespace size (GB)   Free space (GB)
    658                    513
    682                    546
    958                    767
    686                    551
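    That is the expected picture: compressing tables (typically via ALTER TABLE ... MOVE) rewrites the data into new extents, which can autoextend the datafiles, while the space that is freed up only shows as free space inside the tablespaces. It is not returned to ASM until the datafiles are resized. A sketch (the tablespace name, file name and target size are placeholders):
    -- See how big the datafiles of the affected tablespace currently are
    SELECT file_name, bytes/1024/1024/1024 AS size_gb
      FROM dba_data_files
     WHERE tablespace_name = 'MY_TS';
    -- Shrink a datafile to hand space back to the ASM disk group
    ALTER DATABASE DATAFILE '+DATA/mydb/datafile/my_ts.285.123456789' RESIZE 400G;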

  • Advanced Table Compression Create Table with LOBs

    Hi,
    I need some help with Advanced Table Compression when creating a table, especially when it contains a LOB.
    Here are 3 examples:
    Exp#1
    CREATE TABLE emp (
          emp_id NUMBER, 
          first_name VARCHAR2(128), 
          last_name VARCHAR2(128)
    ) COMPRESS FOR OLTP;
    This one is ok - all elements are compressed.
    Exp#2
    CREATE TABLE photos (
          photo_id NUMBER,
          photo BLOB)
          LOB(photo) STORE AS SECUREFILE (COMPRESS LOW);
    This one I am confused about: is it just the LOB (photo) that is compressed, or the whole table? If it is just the LOB, then what syntax do I need for the whole table?
    I also assume that the LOB is being stored in the default tablespace associated with this table - correct me if I am wrong!
    Exp#3
    CREATE TABLE images (
          image_id NUMBER,
          image BLOB)
          LOB(image) STORE AS SECUREFILE (TABLESPACE lob_tbs COMPRESS);
    This one I am also confused about: I think it is telling me that the LOB (image) is being compressed and stored in tablespace lob_tbs, and the other columns are being stored uncompressed in the default tablespace.
    Again, if it is just the LOB, then what syntax do I need for the whole table?
    Thanks & regards
    -A

    Welcome to the forums!
    Please post details of OS, database and EBS versions. Please be aware that Advanced Compression is a separately licensed product. Please see if these links help:
    http://blogs.oracle.com/stevenChan/2008/10/using_advanced_compression_with_e-business_suite.html
    http://blogs.oracle.com/stevenChan/2008/11/early_benchmarks_using_advanced_compression_with_ebs.html
    http://blogs.oracle.com/stevenChan/2010/05/new_whitepaper_advanced_compression_11gr1_benchmar.html
    HTH
    Srini
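    On the syntax question itself: in Exp#2 and Exp#3 only the LOB segment is compressed (and in Exp#3 it is stored in lob_tbs, while the rest of the row goes to the table's tablespace). Table-level and LOB-level compression are specified separately and can be combined, for example (a sketch; COMPRESS FOR OLTP and SecureFiles COMPRESS both require the Advanced Compression option):
    CREATE TABLE photos (
      photo_id NUMBER,
      photo    BLOB)
      COMPRESS FOR OLTP
      LOB(photo) STORE AS SECUREFILE (TABLESPACE lob_tbs COMPRESS LOW);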

  • Drop column from compressed partitioned table

    Hi,
    DB version is 11.2.0.2.
    We have a table which is range partitioned and sub-partitioned by list.
    The table is also compressed.
    When I try to drop a column, I get an error.
    CREATE TABLE DWH_REP.P_RATING (
      id_source$                 NUMBER(38,0)  NULL,
      time_insert$               DATE          ,
      time_update$               DATE          ,
      FLG_CURRENT$               NUMBER(38,0)  ,
      FLG_CHANGED$               NUMBER(38,0)  ,
      id_audit$                  NUMBER(38,0)  ,
      ID_DATE_PSTING             NUMBER(38,0)  ,
      partner_rating_id          VARCHAR2(256) ,
      partner_id                 VARCHAR2(256) ,
      id_partner                 NUMBER(38,0)  , 
      rating_system_id           VARCHAR2(256) ,
      rating_id                  VARCHAR2(256) ,
      date_rating                DATE          ,
      date_follow_up             DATE          ,
      risk_team_id               VARCHAR2(256) ,
      risk_team_descr            VARCHAR2(256) ,
      risk_team_changed_id       VARCHAR2(256) ,
      risk_team_changed_descr    VARCHAR2(256) ,
      date_risk_team_changed     DATE          ,
      assignment_id              VARCHAR2(256) ,
      date_assignment            DATE          ,
      date_assignment_confirmed  DATE          ,
      date_assignment_expiration DATE          ,
      flg_exception              VARCHAR2(256) ,
      exception_id               VARCHAR2(256) ,
      date_exception             DATE
    )
    -- TABLESPACE DWH_REP_DATA
    PARTITION BY RANGE (FLG_CURRENT$, ID_DATE_PSTING)
       SUBPARTITION BY LIST (ID_SOURCE$)
       (PARTITION P_RATING_2010               
           VALUES LESS THAN (0, 20110101)
           (SUBPARTITION P_RATING_2010_UCS VALUES (10) TABLESPACE DWH_O_2010_TBS,
           SUBPARTITION P_RATING_2010_UCM VALUES (11) TABLESPACE DWH_O_2010_TBS,
    --     SUBPARTITION P_RATING_2010_ORBI30 VALUES  (30) TABLESPACE DWH_O_2010_TBS,
    --     SUBPARTITION P_RATING_2010_ORBI31 VALUES  (31) TABLESPACE DWH_O_2010_TBS,
           SUBPARTITION P_RATING_2010_CETELEM VALUES  (40) TABLESPACE DWH_O_2010_TBS,
    --     SUBPARTITION P_RATING_2010_MILES VALUES  (60) TABLESPACE DWH_O_2010_TBS,
    --     SUBPARTITION P_RATING_2010_BHI VALUES  (80) TABLESPACE DWH_O_2010_TBS,
           SUBPARTITION P_RATING_2010_DF VALUES  (DEFAULT) TABLESPACE DWH_O_2010_TBS),   
         PARTITION P_RATING_2011            
           VALUES LESS THAN (0, 20120101)
           (SUBPARTITION P_RATING_2011_UCS VALUES (10) TABLESPACE DWH_O_2011_TBS,
           SUBPARTITION P_RATING_2011_UCM VALUES (11) TABLESPACE DWH_O_2011_TBS,
    --     SUBPARTITION P_RATING_2011_ORBI30 VALUES (30) TABLESPACE DWH_O_2011_TBS,
    --     SUBPARTITION P_RATING_2011_ORBI31 VALUES (31) TABLESPACE DWH_O_2011_TBS,
           SUBPARTITION P_RATING_2011_CETELEM VALUES (40) TABLESPACE DWH_O_2011_TBS,
    --     SUBPARTITION P_RATING_2011_MILES VALUES (60) TABLESPACE DWH_O_2011_TBS,
    --     SUBPARTITION P_RATING_2011_BHI VALUES (80) TABLESPACE DWH_O_2011_TBS,
           SUBPARTITION P_RATING_2011_DF VALUES (DEFAULT) TABLESPACE DWH_O_2011_TBS),
        PARTITION P_RATING_current           
           VALUES LESS THAN (maxvalue, maxvalue)
           (SUBPARTITION P_RATING_CUR_UCS VALUES (10) TABLESPACE DWH_O_CRT_UCS_TBS,
           SUBPARTITION P_RATING_CUR_UCM VALUES (11) TABLESPACE DWH_O_CRT_UPM_TBS,
    --     SUBPARTITION P_RATING_CUR_ORBI30 VALUES (30) TABLESPACE DWH_O_CRT_ORBI30_TBS,
    --     SUBPARTITION P_RATING_CUR_ORBI31 VALUES (31) TABLESPACE DWH_O_CRT_ORBI31_TBS,
           SUBPARTITION P_RATING_CUR_CETELEM VALUES (40) TABLESPACE DWH_O_CRT_CETELEM_TBS,
    --     SUBPARTITION P_RATING_CUR_MILES VALUES (60) TABLESPACE DWH_O_CRT_MILES_TBS,
    --     SUBPARTITION P_RATING_CUR_BHI VALUES (80) TABLESPACE DWH_O_CRT_BHI_TBS,
           SUBPARTITION P_RATING_CUR_DF VALUES (DEFAULT) TABLESPACE DWH_O_CRT_DF_TBS))
    ENABLE ROW MOVEMENT
    NOLOGGING
    COMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    ALTER TABLE DWH_REP.P_RATING DROP COLUMN ID_PARTNER;
    ORA-39726: unsupported add/drop column operation on compressed tables

    littleboy wrote:
    [the full CREATE TABLE statement quoted above]
    ALTER TABLE DWH_REP.P_RATING DROP COLUMN ID_PARTNER;
    ORA-39726: unsupported add/drop column operation on compressed tables
    Can you check with the following?
    SQL> alter table t set unused column x;
    SQL> alter table t drop unused columns;
    Tom explains it -> http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:69076630635645
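    Applied to the table in question, the SET UNUSED route would look like this (a sketch; as the bug text earlier in this thread notes, physically dropping the column may still be refused while the segments contain compressed blocks, so the DROP UNUSED step may have to wait until the table is rebuilt uncompressed):
    ALTER TABLE DWH_REP.P_RATING SET UNUSED COLUMN ID_PARTNER;
    -- later, once the data is no longer stored compressed:
    ALTER TABLE DWH_REP.P_RATING DROP UNUSED COLUMNS;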

  • "Compact/Compress" internal table

    I have an internal table that has two line items per record, where the first line item has data in some fields and the second has data in the other fields. What I want is to put these two line items into one so that all fields are filled, in one line. I started coding the following, but then realized that MOVE-CORRESPONDING will overwrite with empty values. Basically I need to do a MOVE-CORRESPONDING when the source field is not empty, or when the target field is empty. Is there a way to do this?
      SORT it_out BY belnr.
      CLEAR: d_tabix, d_index.
      LOOP AT it_out INTO wa_out.
        ADD 1 TO d_tabix.
        AT END OF belnr.
          d_index = d_tabix - 1.
          READ TABLE it_out INTO wa_out2 INDEX d_index.
          MOVE-CORRESPONDING wa_out2 TO wa_out.
          MODIFY it_out FROM wa_out.
          DELETE it_out INDEX d_index.
          SUBTRACT 1 FROM d_tabix.
        ENDAT.
      ENDLOOP.
    Regards,
    Davis

    Hi,
    1) You can move the fields one by one.
    2) Use the function module GET_COMPONENT_LIST to get the components of the source structure and move them using field symbols.
    Example code:
    DATA: wa_mara TYPE mara.
    DATA: wa_vbap TYPE vbap.
    DATA: t_comp  TYPE STANDARD TABLE OF rstrucinfo.
    DATA: wa_comp TYPE rstrucinfo.
    FIELD-SYMBOLS: <fs_mara> TYPE ANY,
                   <fs_vbap> TYPE ANY.
    * get the components.
    CALL FUNCTION 'GET_COMPONENT_LIST'
      EXPORTING
        program    = sy-repid
        fieldname  = 'WA_MARA'
      TABLES
        components = t_comp.
    wa_mara-matnr = 'test'.
    * Move from wa_mara to wa_vbap with the common field name and only if
    * is populated.
    LOOP AT t_comp INTO wa_comp.
    * Get the mara value.
      ASSIGN COMPONENT wa_comp-compname OF STRUCTURE wa_mara TO <fs_mara>.
      CHECK sy-subrc = 0.
    * Get the vbap value.
      ASSIGN COMPONENT wa_comp-compname OF STRUCTURE wa_vbap TO <fs_vbap>.
      CHECK sy-subrc = 0.
    * If the source is populated, then only move it.
      IF NOT <fs_mara>  IS INITIAL.
        <fs_vbap> = <fs_mara>.
      ENDIF.
    ENDLOOP.
    WRITE: / wa_vbap-matnr.
    Thanks
    Naren

  • Space reusage after deletion in compressed table

    Hi,
    Some sources say that free space after a DELETE in a compressed table is not reused.
    For example, this http://www.trivadis.com/uploads/tx_cabagdownloadarea/table_compression2_0411EN.pdf
    Is it true?
    Unfortunately I cannot reproduce it.

    Unfortunately the question is still open.
    In Oracle 9i, space freed after a DELETE in a compressed block was not reused by subsequent inserts. Isn't that so?
    I saw much evidence of this from other people. One link I gave above.
    But in Oracle 10g I see different figures: after deleting rows in compressed blocks and subsequently inserting into those blocks, the block gets defragmented!
    Please, if anyone knows of any documentation about a change in this behavior, please post links.
    p.s.
    In 10g:
    1. CTAS with COMPRESS. The block is full.
    2. Afterwards, deleted 4 of every 5 rows.
    avsp=0x3b
    tosp=0x99e
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0xea8 -- deleted
    0x28:pri[2]     offs=0xea0 -- deleted
    0x2a:pri[3]     offs=0xe98 -- deleted
    0x2c:pri[4]     offs=0xe90 -- deleted
    0x2e:pri[5]     offs=0xe88 -- live
    0x30:pri[6]     offs=0xe80 -- deleted
    0x32:pri[7]     offs=0xe78 -- deleted
    0x34:pri[8]     offs=0xe70 -- deleted
    0x36:pri[9]     offs=0xe68 -- deleted
    0x38:pri[10]     offs=0xe60 -- live
    0x3a:pri[11]     offs=0xe58 -- deleted
    0x3c:pri[12]     offs=0xe50 -- deleted
    0x3e:pri[13]     offs=0xe48 -- deleted
    0x40:pri[14]     offs=0xe40 -- deleted
    0x42:pri[15]     offs=0xe38  -- live
    0x44:pri[16]     offs=0xe30 -- deleted
    0x46:pri[17]     offs=0xe28 -- deleted
    0x48:pri[18]     offs=0xe20 -- deleted
    0x4a:pri[19]     offs=0xe18 -- deleted
    0x4c:pri[20]     offs=0xe10 -- live
    ...
    3. insert into table t select from ... where rownum < 1000;
    The inserted rows went into several blocks. The total number of non-empty blocks did not change. No row chaining occurred.
    The block above now looks as follows:
    avsp=0x7d
    tosp=0x7d
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0x776 - new
    0x28:pri[2]     offs=0x84b - new
    0x2a:pri[3]     offs=0x920 - new
    0x2c:pri[4]     offs=0x9f5 - new
    0x2e:pri[5]     offs=0xea8 - old
    0x30:pri[6]     offs=0xaca - new
    0x32:pri[7]     offs=0xb9f - new
    0x34:pri[8]     offs=0x34d - new
    0x36:pri[9]     offs=0x422 - new
    0x38:pri[10]     offs=0xea0 - old
    0x3a:pri[11]     offs=0x4f7 - new
    0x3c:pri[12]     offs=0x5cc - new
    0x3e:pri[13]     offs=0x6a1 - new
    0x40:pri[14]     sfll=16  
    0x42:pri[15]     offs=0xe98 - old
    0x44:pri[16]     sfll=17
    0x46:pri[17]     sfll=18
    0x48:pri[18]     sfll=19
    0x4a:pri[19]     sfll=21
    0x4c:pri[20]     offs=0xe90 -- old
    0x4e:pri[21]     sfll=22
    0x50:pri[22]     sfll=23
    0x52:pri[23]     sfll=24
    0x54:pri[24]     sfll=26
    As we see, the old rows were defragmented, repacked, and moved to the bottom of the block.
    New rows (inserted after compressing the table) fill the remaining space.
    So the deleted space was reused.
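    For anyone who wants to repeat the test, a rough sketch of the experiment described above (the table name and row counts are arbitrary):
    CREATE TABLE t COMPRESS AS SELECT * FROM all_objects;
    DELETE FROM t WHERE MOD(object_id, 5) <> 0;   -- delete roughly 4 of every 5 rows
    COMMIT;
    INSERT INTO t SELECT * FROM all_objects WHERE ROWNUM < 1000;
    COMMIT;
    -- Find one block of the table and dump it to the session trace file
    SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS file_no,
           DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block_no
      FROM t WHERE ROWNUM = 1;
    ALTER SYSTEM DUMP DATAFILE 4 BLOCK 171;   -- substitute the file/block numbers returned above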
