ASM space increased after compression of tables

Hi all,
I have compressed some of my huge tables in a data warehouse database. The table sizes were reduced after compression, but the space used on ASM has increased.
The database is 10.2.0.4 (64-bit) and the OS is AIX 5.3 (64-bit).

I have now checked the tablespaces of the compressed tables, and they show huge free space:
Tablespace size (GB)    Free space (GB)
658                     513
682                     546
958                     767
686                     551
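
For reference, here is a minimal sketch of the kind of queries involved (the exact queries behind the numbers above are not shown in the post; these are illustrative, using the standard dictionary and V$ views):

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) AS size_gb
FROM   dba_data_files
GROUP  BY tablespace_name;           -- allocated size per tablespace

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) AS free_gb
FROM   dba_free_space
GROUP  BY tablespace_name;           -- free space inside the datafiles

SELECT name, total_mb, free_mb
FROM   v$asm_diskgroup;              -- free space as ASM sees it

Note that space freed inside a datafile stays allocated to that datafile: it shows up in DBA_FREE_SPACE but never in V$ASM_DISKGROUP unless the datafiles are resized, which is consistent with tables shrinking while ASM usage does not.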

Similar Messages

  • Space reuse after deletion in a compressed table

    Hi,
    Some sources say that free space after a DELETE in a compressed table is not reused.
    For example, this http://www.trivadis.com/uploads/tx_cabagdownloadarea/table_compression2_0411EN.pdf
    Is it true?
    Unfortunately I cannot reproduce it.

    Unfortunately the question is still open.
    In Oracle 9i, space freed by a DELETE in a compressed block was not reused by subsequent inserts - isn't that so?
    I saw plenty of evidence from other people; one link is given above.
    But in Oracle 10g I see different figures: after deleting rows from compressed blocks and then inserting into those blocks, the blocks get defragmented!
    If anyone knows of any documentation about this change in behavior, please post links.
    p.s.
    in 10g:
    1. CTAS with COMPRESS. The block is full.
    2. Then deleted 4 out of every 5 rows:
    avsp=0x3b
    tosp=0x99e
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0xea8 -- deleted
    0x28:pri[2]     offs=0xea0 -- deleted
    0x2a:pri[3]     offs=0xe98 -- deleted
    0x2c:pri[4]     offs=0xe90 -- deleted
    0x2e:pri[5]     offs=0xe88 -- live
    0x30:pri[6]     offs=0xe80 -- deleted
    0x32:pri[7]     offs=0xe78 -- deleted
    0x34:pri[8]     offs=0xe70 -- deleted
    0x36:pri[9]     offs=0xe68 -- deleted
    0x38:pri[10]     offs=0xe60 -- live
    0x3a:pri[11]     offs=0xe58 -- deleted
    0x3c:pri[12]     offs=0xe50 -- deleted
    0x3e:pri[13]     offs=0xe48 -- deleted
    0x40:pri[14]     offs=0xe40 -- deleted
    0x42:pri[15]     offs=0xe38  -- live
    0x44:pri[16]     offs=0xe30 -- deleted
    0x46:pri[17]     offs=0xe28 -- deleted
    0x48:pri[18]     offs=0xe20 -- deleted
    0x4a:pri[19]     offs=0xe18 -- deleted
    0x4c:pri[20]     offs=0xe10 -- live
    ...
    3. insert into table t select from ... where rownum < 1000;
    The inserted rows went into several blocks. The total number of non-empty blocks did not change, and no row chaining occurred.
    The block above now looks as follows:
    avsp=0x7d
    tosp=0x7d
    0x24:pri[0]     offs=0xeb0
    0x26:pri[1]     offs=0x776 - new
    0x28:pri[2]     offs=0x84b - new
    0x2a:pri[3]     offs=0x920 - new
    0x2c:pri[4]     offs=0x9f5 - new
    0x2e:pri[5]     offs=0xea8 - old
    0x30:pri[6]     offs=0xaca - new
    0x32:pri[7]     offs=0xb9f - new
    0x34:pri[8]     offs=0x34d - new
    0x36:pri[9]     offs=0x422 - new
    0x38:pri[10]     offs=0xea0 - old
    0x3a:pri[11]     offs=0x4f7 - new
    0x3c:pri[12]     offs=0x5cc - new
    0x3e:pri[13]     offs=0x6a1 - new
    0x40:pri[14]     sfll=16  
    0x42:pri[15]     offs=0xe98 - old
    0x44:pri[16]     sfll=17
    0x46:pri[17]     sfll=18
    0x48:pri[18]     sfll=19
    0x4a:pri[19]     sfll=21
    0x4c:pri[20]     offs=0xe90 -- old
    0x4e:pri[21]     sfll=22
    0x50:pri[22]     sfll=23
    0x52:pri[23]     sfll=24
    0x54:pri[24]     sfll=26
    As we can see, the old rows were defragmented, repacked, and moved to the bottom of the block.
    New rows (inserted after compressing the table) fill the remaining space.
    So the deleted space was reused.
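
    A minimal sketch of how this experiment can be reproduced (the table and column names are hypothetical; substitute the real file and block numbers before dumping):

    -- 1. CTAS with compression
    CREATE TABLE t COMPRESS AS
    SELECT ROWNUM AS rn, object_name FROM all_objects;

    -- 2. Delete 4 out of every 5 rows
    DELETE FROM t WHERE MOD(rn, 5) <> 0;
    COMMIT;

    -- 3. Insert new rows into the freed space
    INSERT INTO t SELECT ROWNUM, object_name FROM all_objects WHERE ROWNUM < 1000;
    COMMIT;

    -- Locate a block of interest, then dump it to a trace file
    SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS fno,
           DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS blk
    FROM   t WHERE ROWNUM = 1;

    ALTER SYSTEM DUMP DATAFILE 4 BLOCK 1234;  -- use the fno/blk values found above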

  • Release of space after delete/truncate table

    Hello,
    How does the release of space after a DELETE or TRUNCATE TABLE work? Is the space used before the deletion released once the delete completes, or not? Will I see the space occupied by the deleted table as free in dba_segments, or will I need to reorganize the table (drop and recreate it)? The reason I am asking is that I can see a table with 0 rows, but in dba_segments it is still occupying a few gigabytes...
    Thank you

    Here is a little illustration for you:
    SQL> conn ogan/password
    Connected.
    SQL> create table ogan_deneme as select * from all_objects;
    Table created.
    SQL> select count(*) from ogan_deneme;
      COUNT(*)
        228470
    SQL> set line 1000
    SQL> set pagesize 1000
    SQL> select * from dba_segments where owner='OGAN';
    OWNER  SEGMENT_NAME  SEGMENT_TYPE  TABLESPACE_NAME     BYTES  BLOCKS  EXTENTS
    OGAN   OGAN_DENEME   TABLE         SYSTEM           30408704    1856       44
    SQL> truncate table ogan_deneme;
    Table truncated.
    SQL> select * from dba_segments where owner='OGAN';
    OWNER  SEGMENT_NAME  SEGMENT_TYPE  TABLESPACE_NAME     BYTES  BLOCKS  EXTENTS
    OGAN   OGAN_DENEME   TABLE         SYSTEM              65536       4        1
    SQL>
    Hope it helps,
    Ogan
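
    For the DELETE case (as opposed to TRUNCATE) the segment keeps its size, because a DELETE does not lower the high-water mark. A minimal sketch of reclaiming that space with a segment shrink, assuming 10g or later and a table in an ASSM tablespace (the example above uses SYSTEM, where a shrink would not work):

    DELETE FROM ogan_deneme;            -- frees rows, but dba_segments is unchanged
    COMMIT;

    ALTER TABLE ogan_deneme ENABLE ROW MOVEMENT;
    ALTER TABLE ogan_deneme SHRINK SPACE;  -- add CASCADE to shrink dependent indexes too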

  • Space before/after tables

    Is there any way to eliminate the space before/after consecutive tables? I don't want to combine the tables, but I would like them to appear as if my multiple tables were one. The problem is that when you place two tables next to each other (inline), there is a 1/8" gap between them.
    I have tried adjusting the font preferences, but the only thing that changes is the text inside the table.

    I'm afraid you have only those two choices. I imagine the reason for the spacing is to preserve the separateness of multiple tables and to prevent accidental "collisions" and overlapping. You can always ungroup, rearrange, and regroup. When we want to do unusual things, it often comes with the penalty of extra work.
    Walt

  • Will compressing a database table affect system performance?

    There is a very big database table in my system called MSEG. It is about 910 GB, and the wasted space in the tablespace is about 330 GB. I want to compress the table, but it is written to and deleted from frequently. Will it affect system performance if I compress the table? Or should I only reorganize the database table to avoid affecting system performance? Thanks.

    Hi Huiyong,
    If you are talking about table compression, it cannot be done online; we need a planned downtime for it. Table compression also has some CPU overhead. Refer to these SAP notes:
    1289494 - FAQ: Oracle compression
    1436352 - Oracle Database 11g Advanced Compression for SAP Systems
    If you are talking about an online table reorg, yes, there would definitely be an impact on user performance.
    As the table is very big, an online reorg may take hours or even days.
    A faster method is a table export/import, which is quicker than an online reorg, but it again requires downtime.
    Hope this helps.
    Regards,
    Deepak Kori
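
    As an illustration of the offline approach, a minimal sketch of compressing a table during a maintenance window (ALTER TABLE ... MOVE rewrites the segment and leaves indexes UNUSABLE, so they must be rebuilt; the generated rebuild statements below are just a convenience):

    ALTER TABLE mseg MOVE COMPRESS;     -- requires downtime; rewrites the whole segment

    -- generate rebuild statements for the invalidated indexes
    SELECT 'ALTER INDEX ' || index_name || ' REBUILD;'
    FROM   user_indexes
    WHERE  table_name = 'MSEG' AND status = 'UNUSABLE';

    Note that with basic compression, rows touched by later conventional INSERTs and UPDATEs end up stored uncompressed again, which is why caution is advised above for frequently modified tables.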

  • Space left on DVD after compression

    Workflow:
    1. Edit project in FCP7
    2. Export to QuickTime (not self-contained)
    3. Use Bit Budget to create compression settings, maxed out at an average of 6.8 Mbps, maximum 7.8 Mbps
    4. Drag to Compressor and change the settings to match the above; 7.8 changes to 8
    5. After compression, build in DVDSP and burn in Toast
    6. Toast says there is 1.2 GB left on the DVD.
    Question: why wouldn't there be less compression? The video looks OK, but with 1.2 GB left on the DVD it seems there could be less compression for a better-looking DVD.
    Thanks in advance for all tips and help.

    As hanumang said, there are limits on the rates. If your movie is one minute long, there will be a ton of room left.
    http://dvdstepbystep.com/faqs_3.php
    The maximum rate that a DVD should output is 10.08 Mbps (page 43 of the DVD SP 4.1.2 PDF manual). Of the 10.08, the maximum rate for the video stream is 9.8 Mbps. Video, audio, and subtitle streams all count toward the 10.08 Mbps maximum (page 44 of the PDF). Note that some players do not handle the theoretical rates (meaning the rates they are supposed to be able to handle) well, more so on discs burned from your computer. This rate is not related to how long or short a track is; it relates to how much data is being put out by the DVD.
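
    As a rough worked example of the bit-budget arithmetic (the 70-minute duration here is an assumption, just to show why space can be left over):

    disc capacity:       4.7 GB  = ~4700 MB
    average total rate:  6.8 Mbps = 0.85 MB/s
    70 min of material:  70 * 60 s * 0.85 MB/s = ~3570 MB = ~3.6 GB
    left on disc:        4.7 GB - 3.6 GB = ~1.1 GB

    So with a bit budget computed for a fixed average rate, a shorter program leaves space unused; raising the average rate (while staying under the 10.08 Mbps ceiling) would spend that space on better quality.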

  • 0IC_C03 issue: after compression the data is still in the F table - what is happening?

    Dears,
      After I performed compression on the InfoCube 0IC_C03, all the queries on this InfoCube either fail to execute or run with horribly slow performance.
      Any suggestions are appreciated.
    B.R
    Gerald

    Hi Gerald,
    I think there is no connection between the compression of requests and low query performance. In fact, to my knowledge, queries run faster after compression. Do some other checks to get better performance.
    Regards,
    Krish

  • ASM space consumption

    Hi,
    We have a 2-node RAC (10.2.0.3 db) hosted on ASM on AIX 5.3. Our db size is currently 1.8 TB. We have a purging policy to keep only 3 months of data plus the current month. Some tables use XML BLOBs, which take up most of the space. We purge from these tables as well.
    What I believe is that after this purging completes, a rebuild of the indexes on the purged tables will reclaim the freed space (extents, for that matter) so it can be reused by incoming data, and the tables will not grow further. This was the case with the application's old version (9.2.0.6, HACMP clustered). But in this 10.2.0.3 ASM database it is not happening: the purged space is not being reclaimed, and only new space from ASM is used, increasing ASM space consumption.
    Is this how ASM is supposed to behave, or is there a way to make Oracle reuse the purged space? Comments are welcome.
    Thanks

    v$asm_disk shows the free space on the disks that has not been allocated to any ASM file (datafile). Since deleting LOB data doesn't release the space from the segment, you won't see that freed space in the v$asm_disk view. You need to check the free space in the tablespace using DBA_FREE_SPACE and, just as importantly, the free space inside the segment itself using the DBMS_SPACE package. After the purge, only once you shrink the LOB segment will that free space be released to the tablespace; you should then be able to see it in DBA_FREE_SPACE - but again not in V$ASM_DISK, because the free space is part of the tablespace/datafiles, and since we never shrink datafiles, it will not become visible in v$asm_disk. Before shrinking the LOB, read Metalink doc 386341.1; it will be really helpful.
    TRUNCATE should have deallocated all the space by default, so check DBA_FREE_SPACE to find the total free space you have.
    Thanks
    Daljit Singh
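    A minimal sketch of the checks described above (the owner, table, and LOB column names are hypothetical; DBMS_SPACE.SPACE_USAGE requires an ASSM tablespace):

    -- free space inside the segment, bucketed by how full each block is
    SET SERVEROUTPUT ON
    DECLARE
      l_unf NUMBER;  l_unfb NUMBER;
      l_fs1 NUMBER;  l_fs1b NUMBER;
      l_fs2 NUMBER;  l_fs2b NUMBER;
      l_fs3 NUMBER;  l_fs3b NUMBER;
      l_fs4 NUMBER;  l_fs4b NUMBER;
      l_full NUMBER; l_fullb NUMBER;
    BEGIN
      DBMS_SPACE.SPACE_USAGE('APPOWNER', 'XML_DOCS', 'TABLE',
        l_unf, l_unfb, l_fs1, l_fs1b, l_fs2, l_fs2b,
        l_fs3, l_fs3b, l_fs4, l_fs4b, l_full, l_fullb);
      DBMS_OUTPUT.PUT_LINE('blocks 75-100% free: ' || l_fs4);
    END;
    /

    -- release purged LOB space back to the tablespace (see Metalink 386341.1 first)
    ALTER TABLE appowner.xml_docs MODIFY LOB (xml_data) (SHRINK SPACE);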

  • Index size increased after import

    Hi, I already mentioned the index creation problem when trying to create the index using a script after importing the table. So I dropped the table, created the table and the index using the script (without data), and then ran a table-level import with indexes=n to bring the data in from the production database.
    The sizes of the 2 indexes in production are 750 MB and 1200 MB; in the test db both indexes grew to around 1200 MB and 1700 MB, roughly double. I used the same script in both dbs. I took the export with compress=y as a full database export. Why did the index sizes increase? I created the indexes with initial and next extent sizes of 800 MB and 100 MB respectively - could that be the reason?
    with regards
    ramya

    I gave initial 1000 and next 100 for the index, which is around 1.1 GB in production, but here in test it became around 1.7 GB. Even though pctincrease is 50, it should come to around 1.3 GB at most. Will this cause any performance problem?
    with regards
    ramya
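
    A minimal sketch of how to compare the actual allocation in both databases (the index and table names are hypothetical; the storage clause mirrors the values mentioned above):

    CREATE INDEX idx_demo ON big_table (col1)
      STORAGE (INITIAL 1000M NEXT 100M PCTINCREASE 50);

    SELECT segment_name, ROUND(bytes/1024/1024) AS mb, extents
    FROM   dba_segments
    WHERE  segment_name = 'IDX_DEMO';     -- total allocated size

    SELECT extent_id, ROUND(bytes/1024/1024) AS mb
    FROM   dba_extents
    WHERE  segment_name = 'IDX_DEMO'
    ORDER  BY extent_id;                  -- per-extent layout

    Note that in a locally managed tablespace INITIAL/NEXT/PCTINCREASE are largely ignored and extents follow the tablespace's allocation policy, so two databases with different tablespace definitions can easily produce different index sizes from the same script.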

  • "get all new data request by request" after compressing source Cube

    Hi
    I need to transfer data from one InfoCube to another and use the delta request by request.
    I tried this when the data in the source InfoCube was not compressed, and it worked.
    Afterwards some requests were compressed, and since then the request-by-request delta transfers all the information to the target InfoCube in a single request.
    Do you know if this is normal behavior?
    Thanks in advance

    Hi
    The objective of compression is to delete all the requests in your F table and move the data to the E table; after compression you no longer have request-by-request data.
    This is the reason you are getting all the data in a single request.
    "Get data request by request" only works if you don't compress the data in your cube.
    If you want to know about compression, check the below one
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c035d300-b477-2d10-0c92-f858f7f1b575?QuickLink=index&overridelayout=true
    Regards,
    Venkatesh.

  • Bad reporting performance after compressing infocubes

    Hi,
    as I learned, we should compress the requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to increase reporting performance further. So much for the theory...
    After getting complaints about worse reporting performance, we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query onto each cube and tested the performance with it (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes, and one branch.
    With this selection on each cube, I expected cube D to be fastest, since only one (small) partition holds the relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some db parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes, I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and retest at the end of this week.
    4. The loaded data spans 10 months, and the records are nearly equally distributed over these 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years - the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. number of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C contains not one full year but roughly 8 months.
    6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that, it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the times mentioned are averages over all runs - and the averages show the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut
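
    On the Oracle side, one way to see whether partition pruning actually happens is to look at the Pstart/Pstop columns of the execution plan. A minimal sketch (the fact table and column names are hypothetical placeholders for the generated E fact table):

    EXPLAIN PLAN FOR
    SELECT SUM(quantity)
    FROM   e_fact_table
    WHERE  fiscper = '2009001';   -- filter on the partitioning column

    -- Pstart/Pstop showing a single partition number means pruning worked;
    -- KEY means the partition is determined at run time
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);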

  • How to delete the duplicate requests in a cube after compression.

    Hi experts,
        1. How do I delete the duplicate requests in a cube after compression?
        2. How do I show a characteristic and a key figure side by side in a BEx query output?
    Regards,
    Nishuv.

    Hi,
    You cannot delete the requests once they are compressed, as all the data will have been moved to the E table.
    If you have duplicate records, you may use selective deletion.
    Check this thread:
    How to delete duplicate data from compressed requests?
    Regards,
    shikha

  • Incorrect results after compressing a non-cumulative InfoCube

    Hi Gurus,
    In BI 7.0, after compressing the non-cumulative InfoCube, it shows incorrect reference points: 2LIS_03_BF (material stock movements) shows up as the reference point (opening balance) after compressing with "no marker update". Due to this, reporting shows incorrect results. Please suggest.
    Thanks
    Naveen

    Hi Naveen,
    First of all, as I understand it, 2LIS_03_BX is the initial upload of stocks, so there is no need for a delta load for this DataSource; it collects data from the MARC and MARD tables when running the stock setup in R/3, and you have to load it just one time.
    If between delta loads of 2LIS_03_BF you load full updates, you are duplicating material movement data. The idea of compression with marker update is that these movements affect the stock value in the query; that is why you compress the delta init without marker update - those movements are already contained in the opening stock loaded with 2LIS_03_BX, and you don't want them to affect the stock calculation twice.
    You can refer to the "How to Handle Inventory Management Scenarios in BW" paper for more detail on the topic.
    I hope this helps,
    Regards,
    Carlos.

  • Taking free space out after TDMS copy

    Hello Experts,
    I have performed a TDMS copy from production to one of our quality systems. Since I took only 6 months of data, the size of our quality system's database is reduced; it was previously equal to production's.
    1) At present around 1 TB of space is free in the PSAPSR3 tablespace, which I want to return to the OS-level filesystem. I do not want to go for a whole-tablespace reorg for some specific reasons, such as a large number of tables with LONG fields, space issues, etc.
    If any of you have faced a similar situation, please do guide me.
    Regards,
    Umesh

    Hello Rajesh,
    I already found tables for reorg and tried a reorg on some of them, but the space released after a table reorg is at the tablespace level, not at the OS filesystem level. So the issue remains the same: to pull space out of the tablespace without a reorg.
    I have searched various forums, but everything in the end comes down to a tablespace reorg. Since I have space constraints, I am searching for a workaround.
    Regards,
    Umesh.
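
    To actually return space to the OS, the datafiles themselves have to be resized, which only works down to the highest allocated block in each file. A minimal sketch (the 8 KB block size and the datafile path are assumptions):

    -- smallest size each PSAPSR3 datafile can currently be shrunk to
    SELECT file_id,
           CEIL(MAX(block_id + blocks) * 8192 / 1024 / 1024) AS min_size_mb
    FROM   dba_extents
    WHERE  tablespace_name = 'PSAPSR3'
    GROUP  BY file_id;

    ALTER DATABASE DATAFILE '/oracle/SID/sapdata1/sr3_1/sr3.data1' RESIZE 20000M;

    If extents sit near the end of a file, the resize stops there (ORA-03297) and going lower means moving those segments - effectively a partial reorg of just the offending tables.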

  • Why should we avoid OLTP compression on tables with massive update/insert?

    Dear expert,
    We are planning Oracle OLTP compression on an IS-U system. Could you tell me:
    Why should we avoid OLTP compression on tables with massive update/insert activity?
    What kind of performance impact would there be in the worst case?
    Best regards,
    Kate

    Hi
    When updating compressed data, Oracle has to read it, uncompress it, and update it.
    The compression is then performed again later, asynchronously. This requires a lot more CPU than a simple update.
    Another drawback is that compression on highly modified tables generates a major increase in redo/undo generation. I've experienced it on a DB where RFC tables had been compressed by mistake; the redo increase was over 15%.
    Check the remark at the end of Jonathan Lewis's post:
    http://allthingsoracle.com/compression-in-oracle-part-3-oltp-compression/
    "Possibly this is all part of the trade-off that helps to explain why Oracle doesn't end up compressing the last few rows that get inserted into the block.
    The effect can be investigated fairly easily by inserting about 250 rows into the empty table - we see Oracle inserting 90 rows, then generating a lot of undo and redo as it compresses those 90 rows; then we insert another 40 rows, then generate a lot of undo and redo compressing the 130 rows. Ultimately, by the time the block is full we have processed the first 90 rows into the undo and redo four or five times."
    Regards
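
    A minimal sketch of how that overhead can be measured, assuming an 11g database with the Advanced Compression option (the table and column names are hypothetical):

    -- session redo generated so far; note the value before and after the insert
    SELECT s.value AS redo_bytes
    FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name = 'redo size';

    -- OLTP-compressed table (11gR2 syntax; 11gR1 uses COMPRESS FOR ALL OPERATIONS)
    CREATE TABLE t_oltp (id NUMBER, pad VARCHAR2(100)) COMPRESS FOR OLTP;

    -- conventional inserts trigger the repeated block recompression described above
    INSERT INTO t_oltp
    SELECT ROWNUM, RPAD('x', 100, 'x') FROM dual CONNECT BY LEVEL <= 10000;
    COMMIT;

    Re-running the redo query and comparing the delta against the same insert into an uncompressed table shows the extra redo/undo cost directly.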
