Table Compression on Partitions

Hi,
Can anyone help with how to implement table compression on partitions?
Thanks in advance

Here are two examples for you.
Example 1. This table has two partitions. Compression is enabled at the table level; one partition is compressed and one is not.
SQL> create table test_compress1
(t_id number(10),
tname varchar2(30)) partition by range (t_id)
(partition p0 values less than (50) compress
,partition p1 values less than (100) nocompress)
compress;
Example 2. This table has two partitions. Compression is not set at the table level; both partitions are compressed.
SQL> create table test_compress2
(t_id number(10),
tname varchar2(30)) partition by range (t_id)
(partition p0 values less than (50) compress
,partition p1 values less than (100) compress);
You can play with the different options, but make sure you read about the limitations in the SQL Reference manual before using compression at either the table or the partition level.
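If the table already exists, you can also change compression per partition afterwards. Here is a minimal sketch against the table above (hedged: MODIFY ... COMPRESS only affects data loaded later via direct path, while MOVE ... COMPRESS rewrites the rows already stored and marks any local index partitions UNUSABLE, hence the rebuild):
SQL> alter table test_compress1 modify partition p1 compress;   -- future direct-path loads into p1 are compressed
SQL> alter table test_compress1 move partition p1 compress;     -- rewrites and compresses the rows already in p1
SQL> alter table test_compress1 modify partition p1 rebuild unusable local indexes;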

Similar Messages

  • Table compress or Partition compress or both

    Hi,
    I have an Oracle table with approximately 250 million records; its size is around 35 GB.
    I want to save space by compressing the table.
    I am not sure which is more effective: compressing the table, compressing the partitions within the table, or both?
    Please advise,
    JP

    Hi,
    this is a compromise between manageability and performance.
    If you work with partitions you give yourself a chance to work with smaller sets of data (which, under certain circumstances, spares your DB unnecessary I/Os).
    As for space efficiency, it depends highly on the nature of what you're compressing (normal columns, BLOB/CLOB, 'redundancy' of the data...) and which method you're using (normal compression, Advanced Compression...).
    I would suggest you try the different methods over a reasonably large sample of your table (with and without partitioning, with Advanced Compression and with normal compression); a rough test sketch follows below.
    I would also test bulk inserts/updates/deletes (with heavy volumes), and try the main queries you run against this table and compare I/Os/CPU.
    I suggest you have a look at this thread:
    Implement Advanced compression
    With such volumes, I guess you'll benefit from the compression in most situations.
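    A rough sketch for such a test (big_table and the object names are placeholders, not your real objects; sizes are read from USER_SEGMENTS after the load, and the CTAS is a direct-path load, so basic compression applies):
    -- uncompressed copy of a representative slice of the big table
    create table sample_nocomp as
    select * from big_table where rownum <= 1000000;
    -- compressed copy of the same rows
    create table sample_comp compress as
    select * from sample_nocomp;
    -- compare the space used by the two copies
    select segment_name, round(bytes/1024/1024) mb
    from user_segments
    where segment_name in ('SAMPLE_NOCOMP','SAMPLE_COMP');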

  • Can we compress hash partitioned table in 9.2

    Hi
    Can we compress a hash-partitioned table? How do we check the compression? Is there any way to check the partition size after compression?
    Thanks

    Hi,
    go through the links below;
    hope they will help you.
    http://www.dbazine.com/oracle/or-articles/foot6
    http://www.google.ae/search?hl=en&q=compressed+hash+partition+++oracle+9i&meta=
    Also check the second Google result: table compression dos and don'ts.
    Hope this helps.
    Taj.
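    To check which partitions are compressed and how big they are afterwards, the data dictionary can be queried; a small sketch (MY_HASH_TAB is a placeholder for your table name):
    select table_name, partition_name, compression
    from user_tab_partitions
    where table_name = 'MY_HASH_TAB';
    select partition_name, round(bytes/1024/1024) mb
    from user_segments
    where segment_name = 'MY_HASH_TAB'
    and segment_type = 'TABLE PARTITION';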

  • Aggregate tables have many partitions per request

    We are having some performance issues dealing with aggregate tables and
    Db partitions. We are on BW3.5 Sp15 and use Oracle DB 9.2.06. After
    some analysis, we can see that for many of our aggregates, there are
    sometimes as many as a hundred partitions in the aggregate's fact table.
    If we look at the InfoCube itself, there are only a few requests (for
    example, 10). However, we do delete and reload requests
    frequently. We understood that there should only be one partition per
    request in the aggregate (infocube is NOT set up for partitioning by
    other than request).
    We suspect the high number of partitions is causing some performance
    issues. But we don't understand why they are being created.
    I have even tried deleting the aggregate (all aggregate F tables and
    partitions were dropped) and reloading, and we still see many more
    partitions than requests. (We also notice that many of the partitions
    have a very low record count, often fewer than 10 records per partition.)
    We'd like to understand what is causing this. Could line item
    dimensions or high cardinality play a role?
    On a related topic-
    We also have seen an awful lot of empty partitions in both the InfoCube
    fact table and the aggregate fact table. I understand this is probably
    caused by the frequent deletion and reload of requests, but I am
    surprised that the system does not do a better job of cleaning up these
    empty partitions automatically. (We are aware of program
    SAP_DROP_EMPTY_FPARTITIONS).
    I am including some files which show these issues via screen shots and
    partition displays to help illustrate the issue.
    Any help would be appreciated.
    Brad Daniels
    302-275-1980
    215-592-2219

    Ideally the aggregates should get compressed by themselves - there could be some change runs that have affected the compression.
    Check the following :
    1. See if compressing the cube and rolling up the aggregates will merge the partitions.
    2. What is the delta mode for the aggregates (are you loading deltas for the aggregates or full loads)?
    3. Aggregates are partitioned according to the InfoCube, and since you are partitioning by request, the same is being done on the aggregates.
    Select another partitioning characteristic if possible, because it is generally recommended that request not be used for partitioning.
    Arun
    Assign points if it helps..

  • Oracle table compression

    Can you please explain how Oracle compression works? It would be really good if this were explained with examples; it would also be very useful if you could post some links.
    Will there be any performance problems if the table is compressed?

    BelMan wrote:
    Table compression was designed primarily for read-only environments and can cause processing overhead for DML operations in some cases. However, it increases performance for many read operations, especially when your system is I/O bound
    http://download.oracle.com/docs/cd/B13789_01/server.101/b10752/build_db.htm
    Not necessarily true. Envision a table where you have 7 years' worth of data (say, for auditing purposes) partitioned monthly, with only the current month being actively DML'd; the rest are, for all intents and purposes, read-only and compressed.
    Table compression was designed to be designed with :)

  • 10gR2 Table Compression Question

    I have a very large table (560 GB, 12 billion records) which I would like to compress.
    This table has 6 partitions (range partitioned) and 24 subpartitions for each partition (hash partitioned).
    Data in this table is never deleted (obviously). Data is inserted daily via bulk load (SQL*Loader).
    Please make recommendations on how to compress using 10g compression.
    Thank you,
    Larry

    903039 wrote:
    I have a very large table (560 GB, 12 billion records) which I would like to compress.
    This table has 6 partitions (range partitioned) and 24 subpartitions for each partition (hash partitioned).
    Data in this table is never deleted (obviously). Data is inserted daily via bulk load (SQL*Loader).
    Please make recommendations on how to compress using 10g compression.
    Thank you,
    Larry
    The first thing you should do is fully document the business requirements you plan to implement. That document should include:
    1. why you are compressing data - which you haven't told us by the way.
    2. what data you intend to compress - new data? existing data? both?
    3. to what extent is the data updated or deleted? and to what extent are 'normal' (i.e. not direct-path) INSERTs done?
    4. what compression options are available - basic? advanced?
    5. how you plan to TEST your compression options and validate the savings and problems with each of them. That testing should include deletes and updates.
    6. how you plan to do the compression - online? in an outage window? over one month and one subpartition at a time? other?
    7. what your 'fallback' plan is if you need to undo part or all of the compression.
    8. any opportunities for archiving older, unneeded data. If you plan to compress the existing data you need to move it so now is the time to offload any older, unneeded data.
    For basic compression Oracle does NOT compress existing data in place. That means if you want the existing data compressed you need to MOVE it; and that means moving 560GB of data. If you want to compress new data you need to use BULK inserts; that means 'direct-path' inserts.
    If you just want to start compressing new data you can do that fairly quickly since no existing data needs to be moved.
    Don't even begin an operation like compression on a large table without a good requirements doc as suggested above.
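    To illustrate the last two points, a hedged sketch only (big_tab, staging_tab and the partition names are placeholders): with basic compression, existing data has to be rewritten, and new data is only compressed when it arrives via a direct-path load, e.g. SQL*Loader with DIRECT=TRUE or an APPEND insert. For the range-hash layout in the question the same operations would be done per subpartition (MOVE SUBPARTITION ... COMPRESS).
    -- existing data: rewrite a partition so the rows already in it become compressed
    alter table big_tab move partition p_old compress update indexes;
    -- new data: mark the partition COMPRESS, then load with direct path
    alter table big_tab modify partition p_current compress;
    insert /*+ append */ into big_tab select * from staging_tab;
    commit;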

  • Query on Oracle Concepts: Table Compression

    The attributes for table compression can be declared for a tablespace, table, or table partition. If declared at the tablespace level, then tables created in the tablespace are compressed by default. You can alter the compression attribute for a table, in which case the change only applies to new data going into that table. Consequently, a single table or partition may contain compressed and uncompressed blocks, which guarantees that data size will not increase because of compression. If compression could increase the size of a block, then the database does not apply it to the block.
    Can anybody please explain the text marked in bold? How can the data size/block size increase because of compression?
    Regards,
    Ankit Rathi
    http://oraclenbeyond.blogspot.in

    >
    The attributes for table compression can be declared for a tablespace, table, or table partition. If declared at the tablespace level, then tables created in the tablespace are compressed by default. You can alter the compression attribute for a table, in which case the change only applies to new data going into that table. Consequently, a single table or partition may contain compressed and uncompressed blocks, which guarantees that data size will not increase because of compression. If compression could increase the size of a block, then the database does not apply it to the block.
    Can anybody please explain the text marked in bold? How can the data size/block size increase because of compression?
    >
    First let's be clear on what is being said. The doc says this:
    >
    If compression could increase the size of a block, then the database does not apply it to the block.
    >
    That is misleading because, of course, the size of the block can't change. You should really read that as
    >
    If compression could increase the size of the data being stored in a block, then the database does not apply it to the block.
    >
    There is overhead associated with the compression because the metadata that is needed to translate any compressed data back into its original state is stored in the block along with the compressed data.
    The simplest analogy (though not a perfect one) is the effect you can get if you try to zip an already highly compressed file.
    For example, if you try to use Winzip to compress an image file (jpg, gif, etc) or a video file you can easily wind up with a zip file that is larger than the uncompressed file was to begin with. That is because the file itself hardly compresses at all but the overhead of the zip file adds to the ultimate file size.
    I suggest you edit your thread subject since this question is NOT about partitioning.

  • Compression without partition.

    Hi,
    Would it be useful to compress an InfoCube even if there is no fiscal-period partitioning on the cube?
    Thanks.

    Hi,
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    If you compress the cube, all the duplicate records will be summarized.
    Otherwise they will be summarized at query runtime, affecting query performance.
    Compression is done to improve performance. When data is loaded into the InfoCube, it is done request-wise. Each request ID is stored in the fact table in the packet dimension. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query. When you compress requests from the cube, the data is moved from the F fact table to the E fact table. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0), i.e. all the data is then stored at the record level and no individual request is available any more. This also removes the SIDs, so there is one join less when fetching data.
    The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation of the compression of InfoCubes with Oracle as the DB platform.
    Compression on other DB platforms might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
    4. Does compression lock the entire fact table? even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
    First of all you should check whether the P-index on the e-facttable exists. If this index is missing compression will be practically impossible. If this index does not exist, you can recreate this index by activating the cube again. Please check the activation log to see whether the creation was successful.
    There is one exception to this rule: if only one request is chosen for compression and it is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
    1. The compression ratio is completely determined by the data you are loading. Compression does only mean that data-tuples which have the identical 'logical' key in the facttable (logical key includes all the dimension identities with the exception of the 'technical' package dimension) are combined into a single record.
    So for example if you are loading data on a daily basis but your cube does only contain the month as finest time characteristics you might get a compression ratio of 1/30.
    The other extreme: if every record you are loading is different from the records you have loaded before (e.g. your record contains a sequence number), then the compression ratio will be 1, which means that there is no compression at all. Nevertheless, even in this case you should compress the data if you are using partitioning on the E fact table, because partitioning is only used for compressed data. Please see CSS note 385163 for more details about partitioning.
    If you are absolutely sure that there are no duplicates in the records, you can consider the optimization described in CSS note 0375132.
    2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. But whether the requests (or at least some of them) end up compressed or whether the changes are rolled back depends on the phase the compression was in when it was cancelled.
    The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    Insert or update into the E fact table every row of the request that should be compressed
    Delete the entry for the corresponding request from the package dimension of the cube
    Change the 'compr-dual' flag in the table RSMDATASTATE
    Finally a COMMIT is executed.
    b) In the second phase the remaining data in the F-facttable is deleted.
    This is either done by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry of package dimension is deleted) it does not matter if this phase is terminated.
    Concluding this:
    If the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b) no rollback is executed. The only problem here is, that the f-facttable might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. For running this function module you only have to enter the name of the infocube. If you want you can also specify the dimension id of the request you want to delete (if you know this ID); if no ID is specified the module deletes all the entries without a corresponding entry in the package-dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of the request x all smaller requests are committed and so only the request x is handled as described above.
    3. The only size limitation for the compression is that the complete rollback information for the compression of a single request must fit into the rollback segments. For every record in the request which should be compressed, either an update of the corresponding record in the E fact table is executed or the record is newly inserted. As a 'DROP PARTITION' is normally used for the deletion, the deletion is not critical for the rollback. As both operations are not so expensive (in terms of space) this should not be critical.
    Performance is heavily dependent on the hardware. As a rule of thumb you might expect that you can compress about 2 million rows per hour if the cube does not contain non-cumulative key figures; if it contains such key figures we would expect about 1 million rows.
    4. It is not allowed to run two compressions concurrently on the same cube. But loading into a cube on which a compression is running, for example, should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
    5. Compression is forbidden if a selective deletion is running on this cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' or the primary index '0' on the E-facttable exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries parallel to the compression you have to leave the secondary indexes active.
    If you encounter the error ORA-4030 during the compression you should drop the secondary indexes on the E fact table. This can be achieved by using transaction SE14. If you are using the tabstrip in the Administrator Workbench, the secondary indexes on the F fact table will be dropped too. (If there are requests which are smaller than 10 percent of the F fact table then the indexes on the F fact table should stay active, because then the reading of the requests can be sped up by using the secondary index on the package dimension.) After that you should start the compression again.
    Deleting the secondary indexes on the E fact table of an InfoCube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer during the time when the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E facttable, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes are there, you may use transaction RSRV and there select the elementary database check for the indexes of an infocube and its aggregates. That check is more informative than the lights on the performance tabstrip in the infocube maintenance.
    7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If that index does not exist, the compression tries to build it. If that fails (for whatever reason) the compression terminates.
    If you normally do not drop the secondary indexes during compression, then these indexes might degenerate after some compression-runs and therefore you should rebuild the indexes from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data of the E-facttable and the F-facttable is changed by a compression, the query performance can be influenced significantly. Normally compression should lead to a better performance but you have to take care, that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means, that after the first compression of a significant amount of data the E-facttable of the cube should be analyzed, because otherwise the optimizer still assumes, that this table is empty. Because of the same reason you should not analyze the F-facttable if all the requests are compressed because then again the optimizer assumes that the F-facttable is empty. Therefore you should analyze the F-facttable when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Hope this helps.
    Thanks,
    JituK

  • Do we have Table compression in Standard Edition

    Hi,
    We installed Oracle 11g (11.2.0.2) on our local server and the size of the table is around 11 GB. We thought of doing table compression. My question is: do we have this feature in 11g Standard Edition?
    Thanks,
    Lakshmikanth

    Sorry, a short update...
    I found an old thread here with the same information about this issue.
    License for Table compression
    regards
    Peter

  • Table Compression in 9.2.0.1

    Dear All,
    I need to use table compression. Could you please suggest some ideas from your practice?
    I tried to load a 545 MB text file into the database and compress it. The result of the compression is not as significant as I expected: I got a 456 MB table. PCTFREE was set to 0.
    As I understand it, Oracle compresses data at the database block level.
    What happens if I sort the text file? Will this increase the compression ratio? I mean, duplicate rows will be located in the same block.
    Sincerely,
    giviut

    Not having access to a 9i R2 database I have not been able to try this for myself. However, you may find this OTN article helpful: http://otn.oracle.com/oramag/webcolumns/2003/techarticles/poess_tablecomp.html
    Cheers, APC
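    On the sorting question specifically: basic compression works within a database block, so ordering the data so that repeating column values end up in the same blocks usually does improve the ratio. A hedged sketch (loaded_data and the column names are placeholders) that rebuilds the table compressed and sorted by its most repetitive columns:
    create table loaded_data_comp
    compress
    pctfree 0
    as
    select * from loaded_data
    order by low_cardinality_col1, low_cardinality_col2;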

  • Modify HUGE HASH partition table to RANGE partition and HASH subpartition

    I have a table with 130,000,000 rows hash partitioned as below
    ----RANGE PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)(
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE));
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall b/w a specific range of YRMO_NBR. I think it will be easy if I create a range partition on YRMO_NBR field and then create the current hash partition as a sub-partition.
    How do I change the current partition of the table from HASH partition to RANGE partition and a sub-partition (HASH) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSITE PARTITION -- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance

    Sorry for the confusion in the first part, where I had given a RANGE partition instead of a HASH partition. Please read as follows:
    I have a table with 130,000,000 rows hash partitioned as below
    ----HASH PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY HASH (C_NBR)
    PARTITIONS 2
    STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall b/w a specific range of YRMO_NBR. I think it will be easy if I create a range partition on YRMO_NBR field and then create the current hash partition as a sub-partition.
    How do I change the current partition of the table from hash partition to range partition and a sub-partition (hash) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSITE PARTITION -- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance
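    One possible approach - a sketch only, not tested against this table - is to pre-create the target RANGE/HASH table (TEST_PART_NEW, same columns, laid out as above) and convert with online redefinition, which keeps the table available and copies the dependent index; since TEST_PART has no primary key, ROWID-based redefinition is used here:
    declare
      l_errors pls_integer;
    begin
      dbms_redefinition.can_redef_table(user, 'TEST_PART',
                                        options_flag => dbms_redefinition.cons_use_rowid);
      dbms_redefinition.start_redef_table(user, 'TEST_PART', 'TEST_PART_NEW',
                                          options_flag => dbms_redefinition.cons_use_rowid);
      dbms_redefinition.copy_table_dependents(user, 'TEST_PART', 'TEST_PART_NEW',
                                              copy_indexes => dbms_redefinition.cons_orig_params,
                                              num_errors   => l_errors);
      dbms_redefinition.finish_redef_table(user, 'TEST_PART', 'TEST_PART_NEW');
    end;
    /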

  • Will there performance improvement over separate tables vs single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually carry semantics - read: data stored in one table means something different from the same data stored in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage-technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars

  • Is it possible a partition table create a partition itself?

    Hi,
    I migrated a table to a range-partitioned table by year on the production system.
    But then I thought: after the new year, must I add a new partition for 2011 again?
    For example,
    when a new record comes in for year 2011 and there is no partition for 2011, shouldn't the table create the new partition for 2011 itself?
    Must I add a new partition myself every year? That is a time-consuming job.
    Yes, I know about MAXVALUE, but I don't want to use it. I want it to be done automatically.
    regards,

    Hi,
    I haven't tried EXECUTE IMMEDIATE. It doesn't matter, because I haven't found out how to concatenate a variable into the partition name automatically.
    DB version is 10.2.0.1.
    table script is:
    CREATE TABLE INVOICE_PART1
    (
    ID NUMBER(10) NOT NULL,
    PREPARED DATE NOT NULL,
    TOTAL NUMBER,
    FINAL VARCHAR2(1 BYTE),
    NOTE VARCHAR2(240 BYTE),
    CREATED DATE NOT NULL,
    CREATOR VARCHAR2(8 BYTE) NOT NULL
    )
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    )
    LOGGING
    PARTITION BY RANGE (CREATED)
    (
    PARTITION INV08 VALUES LESS THAN (TO_DATE(' 2010-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
    ),
    PARTITION INV09 VALUES LESS THAN (TO_DATE(' 2010-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
    ),
    PARTITION INV10 VALUES LESS THAN (TO_DATE(' 2010-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
    ),
    PARTITION INV VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
    )
    )
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING
    ENABLE ROW MOVEMENT;
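    On 10.2 there is no interval partitioning (that arrives only in 11g), so the usual workaround is a small job (e.g. scheduled monthly with DBMS_SCHEDULER) that builds the DDL dynamically with EXECUTE IMMEDIATE. A hedged sketch of how a partition name can be concatenated from a date - here it splits the MAXVALUE partition INV so that next month gets its own partition (the naming convention is only an example):
    declare
      l_month date         := add_months(trunc(sysdate, 'MM'), 1);   -- first day of next month
      l_pname varchar2(30) := 'INV' || to_char(l_month, 'YYMM');     -- e.g. INV1101
    begin
      execute immediate
        'alter table invoice_part1 split partition inv at (' ||
        'to_date(''' || to_char(add_months(l_month, 1), 'YYYY-MM-DD') || ''',''YYYY-MM-DD''))' ||
        ' into (partition ' || l_pname || ', partition inv)';
    end;
    /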

  • Advanced Table Compression Create Table with LOBs

    Hi,
    I need some help with Advanced Table Compression when creating a table, especially when it contains a LOB.
    Here are 3 examples:
    Exp#1
    CREATE TABLE emp (
          emp_id NUMBER, 
          first_name VARCHAR2(128), 
          last_name VARCHAR2(128)
    ) COMPRESS FOR OLTP;
    This one is ok - all elements are compressed.
    Exp#2
    CREATE TABLE photos (
          photo_id NUMBER,
          photo BLOB)
          LOB(photo) STORE AS SECUREFILE (COMPRESS LOW);
    This one I am confused about - is it just the LOB (photo) that is compressed, or the whole table? If it is just the LOB, then what syntax do I need for the whole table?
    I also assume that the LOB is being stored in the default tablespace associated with this table - correct me if I am wrong!
    Exp#3
    CREATE TABLE images (
          image_id NUMBER,
          image BLOB)
          LOB(image) STORE AS SECUREFILE (TABLESPACE lob_tbs COMPRESS);
    This one I am confused about - I think it is telling me that the LOB (image) is being compressed and stored in tablespace lob_tbs, and the other elements are being stored uncompressed in the default tablespace.
    Again, if it is just the LOB, then what syntax do I need for the whole table?
    Thanks & regards
    -A

    Welcome to the forums !
    Please post details of your OS, database and EBS versions. Please be aware that Advanced Compression is a separately licensed product. Please see if these links help:
    http://blogs.oracle.com/stevenChan/2008/10/using_advanced_compression_with_e-business_suite.html
    http://blogs.oracle.com/stevenChan/2008/11/early_benchmarks_using_advanced_compression_with_ebs.html
    http://blogs.oracle.com/stevenChan/2010/05/new_whitepaper_advanced_compression_11gr1_benchmar.html
    HTH
    Srini
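    On the syntax question itself: as far as I understand it, the LOB storage clause in Exp#2/Exp#3 compresses only the LOB segment; the scalar columns stay uncompressed unless the table itself also gets a compression clause. A hedged sketch combining both (mirrors Exp#3; the table name images_c and tablespace lob_tbs are just examples, and both features require the Advanced Compression option on 11gR2):
    CREATE TABLE images_c (
          image_id NUMBER,
          image BLOB)
          COMPRESS FOR OLTP                      -- compresses the non-LOB columns/blocks
          LOB(image) STORE AS SECUREFILE
              (TABLESPACE lob_tbs COMPRESS LOW); -- compresses the LOB segment, stored in lob_tbs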

  • The detailed algorithm of OLTP table compression and basic table compression?

    I'm doing research on the detailed algorithms of OLTP table compression and basic table compression, and also the difference between them. Anyone who knows, please tell me. Thank you.

    http://www.oracle.com/us/products/database/db-advanced-compression-option-1525064.pdf
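    The white paper covers the internals; purely on the syntax side, a minimal sketch of the two variants (11gR2 keywords; OLTP compression requires the Advanced Compression option, while basic compression is included with Enterprise Edition but only compresses direct-path loaded data):
    -- basic compression: blocks are compressed only by direct-path operations (CTAS, INSERT /*+ APPEND */, moves)
    create table t_basic compress basic as select * from all_objects;
    -- OLTP compression: also compresses blocks filled by conventional DML, in batches as blocks fill up
    create table t_oltp compress for oltp as select * from all_objects;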
