Table compress or partition compress or both

Hi,
I have an Oracle table with approximately 250 million records, roughly 35 GB in size.
I want to save space by compressing the table.
I am not sure which approach is more effective: compressing the whole table, compressing individual partitions within the table, or both?
Please advise,
JP

Hi,
this is a compromise between manageability and performance.
If you work with partitions you give yourself a chance to work with smaller sets of data (which under certain circumstances spares your DB unnecessary I/Os).
As for space efficiency, it depends highly on the nature of what you're compressing (normal columns, BLOB/CLOB, 'redundancy' of the data ...) and on which method you're using (normal compression, Advanced Compression ...).
I would suggest you try the different methods on a reasonably big sample of your table (with and without partitioning, with Advanced Compression, with normal compression).
I would also test bulk inserts/updates/deletes (with heavy volumes), and try the main queries you run against this table and compare I/Os and CPU.
I suggest you have a look at this thread:
Implement Advanced compression
With such volumes, I guess you'll benefit from the compression in most situations.
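A minimal sketch of such a test, assuming 11gR2 syntax and a hypothetical source table BIG_TABLE; it builds compressed copies of a 10% sample and compares segment sizes:
-- hypothetical names; COMPRESS BASIC / COMPRESS FOR OLTP are 11gR2 keywords
CREATE TABLE test_basic COMPRESS BASIC AS
  SELECT * FROM big_table SAMPLE (10);
CREATE TABLE test_oltp COMPRESS FOR OLTP AS
  SELECT * FROM big_table SAMPLE (10);
SELECT segment_name, ROUND(bytes/1024/1024) AS size_mb
  FROM user_segments
 WHERE segment_name IN ('TEST_BASIC', 'TEST_OLTP');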

Similar Messages

  • Table Compression on Partitions

    Hi,
Can anyone help with how to implement table compression on partitions?
    Thanks in advance

Here are two examples for you.
Example 1. This table has two partitions. It has compression at the table level; one partition is compressed and one is not compressed.
    SQL>create table test_compress1
    (t_id number(10),
    tname varchar2(30)) partition by range (t_id)
    (partition p0 values less than (50) compress
    ,partition p1 values less than (100) nocompress)
    compress ;
    Example 2. This table has two partitions. It has no compression at the table level, both partitions are compressed
    SQL>create table test_compress2
    (t_id number(10),
    tname varchar2(30)) partition by range (t_id)
    (partition p0 values less than (50) compress
    ,partition p1 values less than (100) compress);
You can play with different options, but make sure you read about the limitations in your SQL Reference manual before using compression at either the table or partition level.
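To verify the result, a quick check against the data dictionary (a hedged sketch; the COMPRESSION column in USER_TAB_PARTITIONS reports the per-partition setting):
SELECT table_name, partition_name, compression
  FROM user_tab_partitions
 WHERE table_name IN ('TEST_COMPRESS1', 'TEST_COMPRESS2');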

  • Can we compress hash partitioned table in 9.2

    Hi
Can we compress a hash partitioned table? How do we check the compression? Is there any way to check the partition size after compression?
    Thanks

hi
go through the links below,
hope they will help you.
http://www.dbazine.com/oracle/or-articles/foot6
http://www.google.ae/search?hl=en&q=compressed+hash+partition+++oracle+9i&meta=
also check the second Google result ... "Table compression do and don't".
hope this helps
Taj.
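For the size question, a minimal sketch (TEST_PART is a hypothetical table name; USER_SEGMENTS reports one segment per partition):
SELECT segment_name, partition_name, ROUND(bytes/1024/1024) AS size_mb
  FROM user_segments
 WHERE segment_name = 'TEST_PART'
 ORDER BY partition_name;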

  • Create table, PARTITION, compress ORACLE SUPPORT PLS !

    Can someone PLEASE explain to me the following (read carefully):
    SQL> create table abc
    2 (a number)
    3 PARTITION BY LIST(a)
    4 (PARTITION A_A values (2),
    5 PARTITION A_B values (DEFAULT) COMPRESS);
    Table created.
    SQL> alter table abc add b number;
    alter table abc add b number
    ERROR at line 1:
    ORA-22856: cannot add columns to object tables
    SQL> alter table abc modify partition A_B nocompress;
    Table altered.
    SQL> alter table abc add b number;
    alter table abc add b number
    ERROR at line 1:
    ORA-22856: cannot add columns to object tables
    SQL> drop table abc;
    Table dropped.
    SQL> create table abc
    2 (a number)
    3 PARTITION BY LIST(a)
    4 (PARTITION A_A values (2),
    5 PARTITION A_B values (DEFAULT));
    Table created.
    SQL> alter table abc modify partition A_B compress;
    Table altered.
    SQL> alter table abc add b number;
    Table altered.
I definitely think this is a BUG!

14464, 00000, "Compression Type not specified"
// *Cause: Compression Type was not specified in the Compression Clause.
// *Action: specify Compression Type in the Compression Clause.

  • Compression without partition.

    Hi,
Would it be useful to compress an InfoCube even if there is no fiscal partition on the cube?
    Thanks.

    Hi,
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
Zero-elimination is permitted only for InfoCubes in which only key figures with the aggregation behavior 'SUM' appear. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
If you compress the cube, all the duplicate records will be summarized.
Otherwise they are summarized at query runtime, affecting query performance.
Compression is done to improve performance. When data is loaded into the InfoCube, it is done request-wise. Each request ID is stored in the fact table in the packet dimension. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query. When you compress requests, the data is moved from the F fact table to the E fact table. Using compression, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0); i.e. all the data is stored at the record level and no request is available afterwards. This also removes the SIDs, so there is one less join while fetching data.
The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
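To make the F-to-E movement concrete, a hedged sketch in plain SQL (table and column names are hypothetical; real BW fact tables are generated under names like /BIC/F* and /BIC/E*). Compression is essentially a GROUP BY over all dimension keys except the package dimension, summing the key figures:
-- simplified illustration; BW actually inserts or updates matching E rows
INSERT INTO e_fact (dim_time_id, dim_cust_id, amount, quantity)
  SELECT dim_time_id, dim_cust_id, SUM(amount), SUM(quantity)
    FROM f_fact
   GROUP BY dim_time_id, dim_cust_id;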
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation for the compression of InfoCubes with ORACLE as db-platform.
    Compression on other db-platform might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
4. Does compression lock the entire fact table, even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
    First of all you should check whether the P-index on the e-facttable exists. If this index is missing compression will be practically impossible. If this index does not exist, you can recreate this index by activating the cube again. Please check the activation log to see whether the creation was successful.
There is one exception to this rule: if only one request is chosen for compression and it is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
1. The compression ratio is completely determined by the data you are loading. Compression only means that data tuples which have an identical 'logical' key in the facttable (the logical key includes all the dimension identities with the exception of the 'technical' package dimension) are combined into a single record.
    So for example if you are loading data on a daily basis but your cube does only contain the month as finest time characteristics you might get a compression ratio of 1/30.
The other extreme: if every record you are loading is different from the records you have loaded before (e.g. your record contains a sequence number), then the compression ratio will be 1, which means that there is no compression at all. Nevertheless, even in this case you should compress the data if you are using partitioning on the E-facttable, because partitioning is only used for compressed data. Please see css-note 385163 for more details about partitioning.
    If you are absolutely sure, that there are no duplicates in the records you can consider the optimization which is described in the css-note 0375132.
2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. But whether the requests (or at least some of them) end up compressed or whether the changes are rolled back depends on the phase the compression was in when it was canceled.
The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
Insert or update every row of the request that should be compressed into the E-facttable
    Delete the entry for the corresponding request out of the package dimension of the cube
    Change the 'compr-dual'-flag in the table rsmdatastate
Finally a COMMIT is executed.
    b) In the second phase the remaining data in the F-facttable is deleted.
    This is either done by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry of package dimension is deleted) it does not matter if this phase is terminated.
    Concluding this:
If the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b) no rollback is executed. The only problem here is that the F-facttable might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. For running this function module you only have to enter the name of the infocube. If you want, you can also specify the dimension ID of the request you want to delete (if you know this ID); if no ID is specified the module deletes all the entries without a corresponding entry in the package dimension.
If you are compressing several requests in a single run and the process breaks during the compression of request X, all smaller requests are committed and only request X is handled as described above.
3. The only size limitation for the compression is that the complete rollback information of the compression of a single request must fit into the rollback segments. For every record in the request which should be compressed, either an update of the corresponding record in the E-facttable is executed or the record is newly inserted. As a 'DROP PARTITION' is normally used for the deletion, the deletion is not critical for the rollback. As both operations are not so expensive (in terms of space) this should not be critical.
Performance is heavily dependent on the hardware. As a rule of thumb you might expect to compress about 2 million rows per hour if the cube does not contain non-cumulative key figures, and about 1 million rows per hour if it does.
4. It is not allowed to run two compressions concurrently on the same cube. But, for example, loading into a cube on which a compression runs should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
5. Compression is forbidden if a selective deletion is running on this cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' or the primary index '0' on the E-facttable exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries parallel to the compression you have to leave the secondary indexes active.
If you encounter the error ORA-4030 during the compression you should drop the secondary indexes on the E-facttable. This can be achieved by using transaction SE14. If you are using the tabstrip in the administrator workbench, the secondary indexes on the F-facttable will be dropped, too. (If there are requests which are smaller than 10 percent of the F-facttable then the indexes on the F-facttable should stay active, because then the reading of the requests can be sped up by using the secondary index on the package dimension.) After that you should start the compression again.
Deleting the secondary indexes on the E facttable of an infocube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer during the time when the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E facttable, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes are there, you may use transaction RSRV and there select the elementary database check for the indexes of an infocube and its aggregates. That check is more informative than the lights on the performance tabstrip in the infocube maintenance.
7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If that index does not exist, the compression tries to build it. If that fails (for whatever reason) the compression terminates.
    If you normally do not drop the secondary indexes during compression, then these indexes might degenerate after some compression-runs and therefore you should rebuild the indexes from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data of the E-facttable and the F-facttable is changed by a compression, the query performance can be influenced significantly. Normally compression should lead to a better performance but you have to take care, that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means, that after the first compression of a significant amount of data the E-facttable of the cube should be analyzed, because otherwise the optimizer still assumes, that this table is empty. Because of the same reason you should not analyze the F-facttable if all the requests are compressed because then again the optimizer assumes that the F-facttable is empty. Therefore you should analyze the F-facttable when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Hope this helps.
    Thanks,
    JituK

  • Aggregate tables have many partitions per request

    We are having some performance issues dealing with aggregate tables and
    Db partitions. We are on BW3.5 Sp15 and use Oracle DB 9.2.06. After
    some analysis, we can see that for many of our aggregates, there are
sometimes as many as a hundred partitions in the aggregate's fact table.
If we look at the infocube itself, there are only a few requests (for
example, 10). However, we do delete and reload requests
frequently. We understood that there should only be one partition per
request in the aggregate (the infocube is NOT set up for partitioning by
anything other than request).
We suspect the high number of partitions is causing some performance
issues, but we don't understand why they are being created.
I have even tried deleting the aggregate (all aggregate F tables and
partitions were dropped) and reloading, and we still see many more
partitions than requests. (We also notice that many of the partitions
have a very low record count - often fewer than 10 records in a partition.)
    We'd like to understand what is causing this. Could line item
    dimensions or high cardinality play a role?
    On a related topic-
    We also have seen an awful lot of empty partitions in both the infocube
fact table and the aggregate fact table. I understand this is probably
    caused by the frequent deletion and reload of requests, but I am
    surprised that the system does not do a better job of cleaning up these
    empty partitions automatically. (We are aware of program
    SAP_DROP_EMPTY_FPARTITIONS).
    I am including some files which show these issues via screen shots and
    partition displays to help illustrate the issue.
    Any help would be appreciated.
    Brad Daniels
    302-275-1980
    215-592-2219

    Ideally the aggregates should get compressed by themselves - there could be some change runs that have affected the compression.
    Check the following :
    1. See if compressing the cube and rolling up the aggregates will merge the partitions.
    2. What is the delta mode for the aggregates ( are you loading deltas for aggregates or full loads ) ?
3. Aggregates are partitioned according to the infocube, and since you are partitioning according to the requests, the same is being done on the aggregates.
Select another partitioning characteristic if possible, because it is generally recommended that request not be used for partitioning.
    Arun
    Assign points if it helps..
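A hedged way to quantify the problem from the database side (assuming you are connected as the SAP schema owner and statistics are reasonably fresh; BW generates F fact tables under names like /BIC/F*):
SELECT table_name, partition_name, num_rows
  FROM user_tab_partitions
 WHERE table_name LIKE '/BIC/F%'
 ORDER BY num_rows;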

Will there be a performance improvement with separate tables vs. a single table with multiple partitions?

Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something different than the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
Looked at on a storage technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars
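For reference, a minimal HANA-style sketch of the partitioned alternative Lars describes (hypothetical table and column names; the hash column must be part of the primary key):
CREATE COLUMN TABLE sales_facts (
  id BIGINT PRIMARY KEY,
  region NVARCHAR(10),
  amount DECIMAL(15,2)
) PARTITION BY HASH (id) PARTITIONS 4;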

  • OLTP compression and Backupset Compression

    We are testing out a new server before we migrate our production systems.
    For the data we are using OLTP compression.
    I am now testing performance of rman backups, and finding they are very slow and CPU bound (on a single core).
    I guess that this is because I have also specified to create compressed backupsets.
    Of course for the table blocks I can understand this attempt at double compression will cause slowdown.
    However for index data (which of course cannot be compressed using OLTP compression), compression will be very useful.
I have attempted to improve performance by increasing the parallelism of the backup, but from my testing this only increases
the channels writing the data; there is still only one core doing the compression.
    Any idea how I can apply compression to index data, but not the already compressed table segments?
    Or is it possible that something else is going on?

    Hi Patrick,
    You can also check my compression level test.
    http://taliphakanozturken.wordpress.com/2012/04/07/comparing-of-rman-backup-compression-levels/
    Thanks,
    Talip Hakan Ozturk
    http://taliphakanozturken.wordpress.com/
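If the bottleneck is the compression itself, a hedged sketch of RMAN settings worth testing (the LOW/MEDIUM/HIGH algorithms depend on your version and require the Advanced Compression Option license):
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW';
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;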

  • Modify HUGE HASH partition table to RANGE partition and HASH subpartition

    I have a table with 130,000,000 rows hash partitioned as below
    ----RANGE PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)(
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE));
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
    COMMIT;
Now, I need to keep this table from growing by deleting records that fall within a specific range of YRMO_NBR. I think it will be easy if I create a range partition on the YRMO_NBR field and then create the current hash partition as a sub-partition.
    How do I change the current partition of the table from HASH partition to RANGE partition and a sub-partition (HASH) without losing the data and existing indexes?
    The table after restructuring should look like the one below
COMPOSITE PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
Pls advise
    Thanks in advance

Sorry for the confusion in the first part, where I had given a RANGE partition instead of a HASH partition. Pls read as follows:
    I have a table with 130,000,000 rows hash partitioned as below
    ----HASH PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY HASH (C_NBR)
    PARTITIONS 2
    STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
    COMMIT;
Now, I need to keep this table from growing by deleting records that fall within a specific range of YRMO_NBR. I think it will be easy if I create a range partition on the YRMO_NBR field and then create the current hash partition as a sub-partition.
    How do I change the current partition of the table from hash partition to range partition and a sub-partition (hash) without losing the data and existing indexes?
    The table after restructuring should look like the one below
COMPOSITE PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
Pls advise
    Thanks in advance
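The thread shows no reply; for what it's worth, a hedged sketch of one common approach, online redefinition with DBMS_REDEFINITION (schema SCOTT and interim table TEST_PART_INT are hypothetical; the interim table would be created beforehand with the composite RANGE/HASH layout shown above):
BEGIN
  -- the table has no primary key, so redefine by ROWID
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TEST_PART',
                                    DBMS_REDEFINITION.CONS_USE_ROWID);
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INT',
      options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
  -- indexes and other dependents can be carried over before finishing
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INT');
END;
/
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS (called before FINISH_REDEF_TABLE) can recreate the index on the new table; the alternative is a plain CTAS into the new layout plus an index rebuild, at the cost of downtime.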

Is it possible for a partitioned table to create a partition itself?

    Hi,
I migrated a table to a range partitioned table by year on a production system.
But I wondered: after the new year, must I again add a new partition for 2011?
For example,
when a new record comes in for 2011 and there is no partition for 2011, shouldn't the table create the new partition for 2011 itself?
Must I add a new partition myself every year? This is a time-consuming job.
Yes, I know about MAXVALUE, but I don't want to use it. I want it to be done automatically.
    regards,

    Hi,
I haven't tried EXECUTE IMMEDIATE. It doesn't matter, because I haven't found how to concatenate a variable into the partition name automatically.
    DB version is 10.2.0.1.
    table script is:
CREATE TABLE INVOICE_PART1
(
  ID NUMBER(10) NOT NULL,
  PREPARED DATE NOT NULL,
  TOTAL NUMBER,
  FINAL VARCHAR2(1 BYTE),
  NOTE VARCHAR2(240 BYTE),
  CREATED DATE NOT NULL,
  CREATOR VARCHAR2(8 BYTE) NOT NULL
)
TABLESPACE PROD
PCTUSED 40
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0)
LOGGING
PARTITION BY RANGE (CREATED)
(
  PARTITION INV08 VALUES LESS THAN (TO_DATE(' 2010-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE PROD
    PCTUSED 40 PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
             FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT),
  PARTITION INV09 VALUES LESS THAN (TO_DATE(' 2010-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE PROD
    PCTUSED 40 PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
             FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT),
  PARTITION INV10 VALUES LESS THAN (TO_DATE(' 2010-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE PROD
    PCTUSED 40 PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
             FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT),
  PARTITION INV VALUES LESS THAN (MAXVALUE)
    LOGGING NOCOMPRESS TABLESPACE PROD
    PCTUSED 40 PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
             FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
)
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING
ENABLE ROW MOVEMENT;
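Interval partitioning only arrives in 11g; on 10.2 the usual workaround is a scheduled job that adds the next partition with dynamic SQL. A hedged sketch (the partition naming scheme and date math are assumptions) that splits the MAXVALUE partition INV to carve out next month:
DECLARE
  -- upper bound of next month's partition = first day of the month after next
  v_high  DATE         := ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 2);
  v_name  VARCHAR2(30) := 'INV' || TO_CHAR(ADD_MONTHS(SYSDATE, 1), 'YYMM');
BEGIN
  EXECUTE IMMEDIATE
    'ALTER TABLE INVOICE_PART1 SPLIT PARTITION INV AT ' ||
    '(TO_DATE(''' || TO_CHAR(v_high, 'YYYY-MM-DD') || ''', ''YYYY-MM-DD'')) ' ||
    'INTO (PARTITION ' || v_name || ', PARTITION INV)';
END;
/
Scheduled monthly (e.g. via DBMS_JOB or DBMS_SCHEDULER), this removes the manual step.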

  • How to make a table to be partitioned automatically in 10.1.0.2.0 ?

    Hi
    I am using Oracle database 10.1.0.2.0.
    I created a table. I have done partition on Date column.
    Initially I created partitions for 2007 year.
Now I should create partitions for the year 2008. So every year I have to add partitions manually.
Could anyone help me: how can I automate this process?
    Thank you,
    Regards,
    Gowtham sen.

I don't know how much it would affect performance.
In my case, I have around 100 tables which are partitioned using a date column.
I am loading a table with 5 crore (50 million) records on average per load cycle.
If I used a trigger, it would have to verify for each row whether the value in the column is a new value or not.
So I am thinking it would affect performance.
    Thank you,
    Regards,
    Gowtham Sen.

  • Alter range partition table to Interval partitioning table.

    Hi DBA's,
    I have a very big range partitioned table.
Recently we upgraded our database to 11gR2, which has a feature called interval partitioning.
Now I want to modify that existing range partitioned table to interval partitioning.
Can we alter the range partitioned table to an interval partitioned table?
I googled for the syntax but I didn't find it; can anyone help me out on this?
    Thanks.

    If you ignore the "alter session set NLS_CALENDAR=PERSIAN;" during create/alter, everything else seems to work.
    When you set the "alter session..." during inserts, the rows gets inserted into the correct partitions.
    Only thing is when you look at HIGH_VALUE, you need to convert from the default GREGORIAN to PERSIAN.
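For the record, the conversion itself is a one-line ALTER in 11g (SALES_RANGE is a hypothetical range partitioned table with a DATE partition key; note that a table with a MAXVALUE partition cannot be converted):
ALTER TABLE sales_range SET INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'));
Existing range partitions stay as they are; partitions above the highest existing bound are then created automatically on insert.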

Hey All, what is the best file compression app to compress large video files on my MacBook Pro?

Hey All, do you recommend Compressor 4, or what is the best file compression app to compress large video files on my MacBook Pro?

    X,
Thanks for checking in.  The answers to your questions are below.
    Where is the material coming from?
    These are MP4 files from my Canon vixia HFR40
    What software are you using to edit the material?
    I am using iMOVIE 9.0.9
What do you want to do with the files after compression? I want to email them, so I need the file to be smaller than 100 MB to send.
I am an actor and I videotape my auditions and send them to my agent, so I need highest-quality HD files that have small file sizes, no more than 100 MB.  The scenes are 1-5 mins long in most cases. Thanks for your help.

Do Pool & Cluster tables have the same structure in both Dictionary and DB

    ------------ Exists with the same structure both in dictionary as well as in database exactly with the same data and fields
    a. Pool Table
    b. Cluster Table
    c. Transparent Table
    d. All the above
To my knowledge, a transparent table has the same structure in both the Dictionary and the database.
Can anyone tell me the answer to the above question, whether it is
    c. Transparent table
    or
    d. All the above

    Transparent Table:
A physical table definition is created in the database for the table definition stored in the ABAP Dictionary when a transparent table is activated. The table definition is translated from the ABAP Dictionary into a definition for the particular database.
A transparent table in the dictionary has a one-to-one relationship with a table in the database.
For each transparent table in the Data Dictionary there is one associated table in the database. The database table has the same name and the same number of fields, and the fields have the same names as in the transparent table definition. Transparent tables are used to hold application data. Application data is master data or transaction data used by an application.
    e.g. master data - table of customers
    Transaction data - order placed by the customers.
    Pooled tables:
Pooled tables can be used to store control data (e.g. screen sequences, program parameters or temporary data). Several pooled tables can be combined to form a table pool. The table pool corresponds to a physical table on the database in which all the records of the allocated pooled tables are stored.
A pooled table in R/3 has a many-to-one relationship with a table in the database. For one table in the database there are many tables in the R/3 Data Dictionary. R/3 uses pooled tables to hold a large number of very small tables. You might create a table pool if you need to create hundreds of small tables that each hold only a few rows of data.
    Cluster tables :
Cluster tables contain continuous text, for example, documentation. Several cluster tables can be combined to form a table cluster. Several logical lines of different tables are combined to form a physical record in this table type. This permits object-by-object storage or object-by-object access. In order to combine tables in clusters, at least parts of the keys must agree. Several cluster tables are stored in one corresponding table on the database.
A cluster table is similar to a pool table. It has a many-to-one relationship with the table in the database.
They are used to hold the data from a few (approximately 2 to 10) very large tables. They would be used when these tables have a part of their primary keys in common and the data in these tables is all accessed simultaneously. A cluster is advantageous in the case where data is accessed from multiple tables simultaneously and those tables have at least one of their primary key fields in common. Cluster tables reduce the number of database reads and thereby improve performance.

  • Page compression vs. row compression - 30 % savings

    According to
    Note 1143246 - R3load: row compression as default for SQL 2008
R3load uses row compression (starting with a certain patch level).
I built a system using TDMS (ERP 6.0 with EHP4) with row compression; the target system size was ~550 GB. When I built the same system using page compression (by changing DBMSS.TPL during R3load), the system used only 300 GB.
As of now, is page compression supported, and will there be a standard way of building systems using page compression (e.g. an R3load option)?
    Markus

    Hi Markus,
    Please check the following note:
    Note 991014 - Row Compression/Vardecimal on SQL Server
    > Page Compression:
    >
    > With SQL Server 2008 "Page Compression" was introduced, as well. Here the whole data page will be compressed, not
    > just row by row as with "Row Compression".
    > Page compression is not the standard for SAP and it's currently still under investigation.
    > Page compression is supported from the ABAP/4 dictionary and we did not see or do expect any problems in that area.
    > But you should still first test page compression for your application before you go productive with it.
    Regards,
    Federico Biavati
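To experiment per table, a hedged T-SQL sketch (dbo.MYTABLE is a placeholder; sp_estimate_data_compression_savings previews the effect before you rebuild anything):
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'MYTABLE',
     @index_id = NULL, @partition_number = NULL,
     @data_compression = 'PAGE';
ALTER TABLE dbo.MYTABLE REBUILD WITH (DATA_COMPRESSION = PAGE);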
