Cube compression and partitioning - related?

Hello BW Experts,
Is it only possible to partition the cube after cube compression? That is, can we only partition the E table and not the F table?
Thanks,
BWer

InfoCube Partitioning is not supported by all DBs that BW runs on - the option is greyed out for DBs that do not support it.
You can partition on 0FISCPER or 0CALMONTH, although if you have a need to partition on something else it might be worth a customer message to SAP.  You should review any proposed partitioning scheme with your DBA if you are not familiar with the concepts and DB implications.
The E fact table is what gets partitioned using this option.  The F fact table is already partitioned by Request ID.   In 3.x, the partitioning you specify for the InfoCube is also applied to any aggregate E tables that get created if the partitioning characteristic (0FISCPER/0CALMONTH) is in that aggregate.  In NW2004s, you will have a choice whether you want the partitioning to apply to the aggregate or not.
NW2004s also provides some additional partition tools, e.g. the ability to change the partitioning.
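For illustration, a hedged sketch of roughly what the partitioned E fact table DDL looks like on Oracle when 0CALMONTH is chosen - the cube name, columns and ranges below are invented, not taken from a real system:

-- Hypothetical DDL in the shape BW generates for a cube "ZSALES";
-- the E fact table is range-partitioned on the SID of 0CALMONTH.
CREATE TABLE "/BIC/EZSALES" (
  KEY_ZSALESP   NUMBER(10)   NOT NULL,  -- package dimension (request ID 0 after compression)
  KEY_ZSALEST   NUMBER(10)   NOT NULL,  -- time dimension
  SID_0CALMONTH NUMBER(10)   NOT NULL,  -- partitioning column
  QUANTITY      NUMBER(17,3)
)
PARTITION BY RANGE (SID_0CALMONTH) (
  PARTITION p201301 VALUES LESS THAN (201302),
  PARTITION p201302 VALUES LESS THAN (201303),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)  -- catch-all partition
);
-- The F fact table ("/BIC/FZSALES") stays partitioned by request ID instead.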

Similar Messages

  • Cube Compression and InfoSpoke Delta

    Dear Experts,
    I submitted a message ("InfoSpoke Delta Mechanism") the other day about a problem running an InfoSpoke as a delta against a cube, and didn't receive an answer that fixed it.   Since then I have been told that we COMPRESS the data in the cube after it is loaded, and it is after the compression that I have been trying to run the delta InfoSpoke.   As explained earlier, there have been 18 loads to the cube since the initial (Full) run of the InfoSpoke.  When I now try to run the InfoSpoke in Delta mode I get the "There is no new data" message. Could the compression of the cube be causing the "There is no new data" message that appears when I try to run the InfoSpoke after the cube load and compression?    An explanation of what happens during a compression would also be helpful.
    Your help is greatly appreciated.
    Thank you,
    Dave

    You need uncompressed requests to feed your deltas. InfoCube deltas use request IDs. Compressed requests cannot be used, since their request IDs are set to zero.
    You need to resequence the events.
    1. Load into infocube.
    2. Run infospoke delta to extract delta requests.
    3. Compress.

  • Cube compression and DB Statistics

    Hi,
    I am going to run cube compression on a number of my cubes and was wondering about a few things regarding DB statistics, like:
    1) How does the percentage of InfoCube space used for DB statistics help?  I know that the higher the percentage, the bigger the statistics and the faster the access, but the statistics run longer.  Would increasing the default value of 10% make any difference to overall performance?
    2) I will compress the cubes on a weekly basis, and most of them get around one request per day, so I will probably compress 7 requests per cube.  Is it advisable to run statistics on a weekly basis too, or can they be run bi-weekly or monthly? And what factors does it depend on?
    Thanks.  I think we can have a good discussion on these apart from points.

    What DB are we talking about?
    Oracle provides so many options on when and how to collect statistics, even allowing Oracle itself to make the decisions.
    At any rate - no point in collecting statistics more than weekly if you are only going to compress weekly.  Is your plan to compress all the requests when you run, or are you going to leave the most recent requests uncompressed in case you need to back one out for some reason?  We compress weekly, but only requests that are more than 14 days old, so we can back out a request if there is a data issue.
    As far as sampling percent, 10% is good, and I definitely would not go below 5% on very large tables.  My experience has been that sampling at less than 5% results in useful indexes not being selected.  I have never seen a recommendation below 5% in any data warehouse material.
    Are you running the statistics on the InfoCube by using the performance tab option or a process chain?  I cannot speak to the process chain approach, though I imagine it is similar, but I know that when you run the statistics collection from the performance tab, it not only collects the stats on the fact and dimension tables, it also goes after all the master data tables for every InfoObject in the cube. That can cause some long run times.
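    As a reference point, here is a minimal sketch of a weekly statistics run on a single fact table with Oracle's DBMS_STATS; the schema owner and table name are illustrative, and the 10% sample follows the discussion above:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SAPR3',          -- illustrative schema owner
        tabname          => '/BIC/EZSALES',   -- hypothetical E fact table
        estimate_percent => 10,               -- per the advice above, don't go below 5% on large tables
        cascade          => TRUE              -- refresh the index statistics as well
      );
    END;
    /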

  • Cube compression and request IDs

    Can we decompress a compressed cube using the request IDs?
    What happens to the request IDs when the cube gets compressed?
    rgds

    Hi Nitin,
    when you load data into the InfoCube, entire requests can be inserted at the same time.
    Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in Reporting, as the system has to aggregate using the request ID every time you execute a query.
    Using compression, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0); a conceptual sketch follows below.
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs.
    Hope it is clearer now (and don't forget to assign some points by clicking on the star for the contributors that helped you!)
    Bye,
    Roberto
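    Purely as a conceptual sketch (this is not the actual condenser code, and the table and column names are invented), compression behaves like an aggregation that rewrites every request ID to 0:

    -- Conceptual only: identical logical keys from the F fact table
    -- are summed into the E fact table under request ID 0.
    INSERT INTO "/BIC/EZSALES" (KEY_ZSALESP, KEY_ZSALEST, SID_0CALMONTH, QUANTITY)
    SELECT 0,               -- the package dimension key collapses to request ID 0
           KEY_ZSALEST,
           SID_0CALMONTH,
           SUM(QUANTITY)
    FROM   "/BIC/FZSALES"
    GROUP BY KEY_ZSALEST, SID_0CALMONTH;

    The real condenser merges into existing E rows and then deletes or drops the F table data, but the GROUP BY shows why duplicate records and request IDs disappear.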

  • Cube Compression and Aggregation

    Hello BW Gurus,
    Can I first compress my InfoCube data and then load data into the aggregates?
    The reason being that when the InfoCube is compressed, the request IDs are removed.
    Are the request IDs necessary for data to be transferred to the aggregates, and later on for aggregate compression?
    Kindly suggest.
    regards,
    TR PRADEEP

    Hi,
    just to clarify this:
    1) you can compress your infocube and then INITIALLY fill the aggregates. The Request information is then no longer needed.
    2) But you can NOT compress requests in your infocube, when your aggregates are already filled and these requests are not yet "rolled up" into the aggregates (this action is prohibited anyway by the system).
    Hope this helps,
    Klaus

  • Compress and rollup the cube

    Hi Experts,
    do we have to compress and then roll up the aggregates? What happens if we roll up before compression of the cube?
    Raj

    Hi,
    The data is rolled up to the aggregates based upon the request. So once the data is loaded, the request is rolled up to the aggregates to fill them with new data. Upon compression, the request is no longer available.
    Whenever you load data, you do a rollup to fill all the relevant aggregates.
    When you compress the data, all request IDs are dropped.
    So when you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all the data is rolled up into aggregates before the compression is done.
    hope this helps
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM

  • Compress and change to read-only on a warehouse table partition?

    I found the command to compress a partition. I was wondering whether I can also make a particular partition read-only, or is that only possible at the tablespace level? This is for a warehouse, and we'd like to save space on partitions that age out of the current month. Since they will never change, we would also like performance gains (backup speed) by putting the old partitions into read-only mode on each of our warehouse tables. Is that possible, and if so, what's the command?
    Thanks,
    Dave

    Hello Wang,
    You need to bind the readOnly attribute of the input field to a newly created attribute beneath the dataSource node of the table within the context. That way, each input field of each row of the table can have a different value for readOnly.
    Best regards,
    Thomas
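    On the Oracle side of the original question: before partition-level READ ONLY existed, a common workaround was to keep each aged partition in its own tablespace and set that tablespace read-only. A sketch under that assumption, with invented object names:

    -- Compress the aged partition and move it into a dedicated tablespace.
    ALTER TABLE sales MOVE PARTITION p201212 TABLESPACE ts_2012_12 COMPRESS;
    -- Local index partitions become UNUSABLE after the MOVE and need a rebuild.
    ALTER INDEX sales_idx1 REBUILD PARTITION p201212;
    -- Read-only is then applied at the tablespace level.
    ALTER TABLESPACE ts_2012_12 READ ONLY;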

  • Effect of Cube Compression on BIA index's

    What effect does cube compression have on a BIA index?
    Also, does SAP recommend rebuilding indexes on some periodic basis, and can we automate index deletion and rebuild processes for a specific cube using the standard process chain variants or programs?
    Thank you

    <b>Compression:</b> DB statistics and DB indexes for the InfoCubes are less relevant once you use the BI Accelerator.
    In the standard case, you could even completely forgo these processes. But please note the following aspects:
    Compression is still necessary for inventory InfoCubes, for InfoCubes with a significant number of cancellation requests (i.e. high compression rate), and for InfoCubes with a high number of partitions in the F-table. Note that compression requires DB statistics and DB indexes (P-index).
    DB statistics and DB indexes are not used for reporting on BIA-enabled InfoCubes. However for roll-up and change run, we recommend the P-index (package) on the F-fact table.
    Furthermore: up-to-date DB statistics and (some) DB indexes are necessary in the following cases:
    a) data mart (for mass data extraction, BIA is not used)
    b) real-time InfoProvider (with most-recent queries)
    Note also that you need compressed and indexed InfoCubes with up-to-date statistics whenever you switch off the BI accelerator index.
    Hope it Helps
    Chetan
    @CP..

  • What is the effect of INDEXING on a Cube? And on a DSO?

    Hi,
    I am trying to figure out the effect of indexing on a Cube and DSO.
    1. My findings so far indicate that indexing is a database concept, but so is partitioning, and partitioning applies in the case of a cube. Right?
    2. What is the effect of indexing on a cube? How is it related to the "database concept" that I keep reading about?
    How is it implemented in BI?
    3. What is the effect of indexing on a DSO? How is it related to the "database concept" that I keep reading about?
    How is it implemented in BI?
    Thanks

    Hi,
    Thanks for the detailed information. I would appreciate some help on these follow-ups to your posting:
    On your answer to 1:
    I read that there are 2 kinds of partitioning, logical and physical. It is the physical partitioning which takes place at the database level, so can I assume that when you said "Even DSO can be partitioned" you meant the physical partition? And that logical partitioning is not possible in DSOs?
    On your answer to 2:
    So in the above you suggested that indexes apply not only to cubes but also to DSOs, with the statement that "...secondary indexes can be built on DSO ...".
    If so, how come there is no feature to drop and rebuild indexes for data loads to DSOs?
    I would appreciate a hint on the difference between "indexes" and "secondary indexes". I am OK with what "indexes" are; it is the use of "secondary" that threw me off.
    On your answer to 3:
    i.
    Based on the link, there does not appear to be a simple way to drop and rebuild indexes in DSOs. It appears an ABAPer needs to be involved. Isn't there a simple way, as in cubes, to click on buttons for the index implementation for DSOs?
    ii.
    It suggests one must choose between B-tree and bitmap indexes, right? Is this the function of the BI consultant or the DBA? If the BI consultant should be able to do this, where is it implemented in BI?
    Can you explain what you meant by "cardinality of a column is low"? (A short index sketch follows at the end of this post.)
    iii.
    I know how, in RSA1, a cube can be partitioned on 0CALMONTH and 0FISCPER. If this is done at the database level, then can it be done also for a DSO? If so, where in BI is it done for DSOs?
    You also noted that SAP DB does not support table partitioning but Oracle does.
    I know in my environment the BASIS consultants are always talking about Oracle, so I am assuming we have Oracle and not SAP DB.
    I never heard of SAP DB; is it a substitute for Oracle in SAP BI environments?
    Is it a DB that comes directly from SAP? If so, why do companies not keep the total SAP package but prefer to use Oracle?  Is it because SAP DB does not support table partitioning?
    Thanks
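    On the cardinality question above: a short Oracle sketch with invented table and column names - bitmap indexes suit columns with few distinct values (low cardinality), B-trees suit columns with many:

    -- Low cardinality (a handful of distinct values): a bitmap index stays compact.
    CREATE BITMAP INDEX sales_status_bmx ON sales_facts (status_flag);
    -- High cardinality (nearly unique values): a B-tree index is the usual choice.
    CREATE INDEX sales_docno_idx ON sales_facts (document_number);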

  • Compression without partitioning.

    Hi,
    Would it be useful to compress an InfoCube even if there is no fiscal partition on the cube?
    Thanks.

    Hi,
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compression, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    If you compress the cube, all the duplicate records will be summarized.
    Otherwise they will be summarized during query runtime, affecting query performance.
    Compression is done to improve performance. When data is loaded into the InfoCube, it is done request-wise; each request ID is stored in the fact table in the packet dimension. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query. When you compress a request from the cube, the data is moved from the F fact table to the E fact table. Using compression, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0), i.e. all the data is stored at the record level and no request is available afterwards. This also removes the request SIDs, so there is one less join during data fetching.
    The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
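    To make the zero-elimination option described further below concrete, a conceptual sketch with made-up table and key figure names (the real implementation runs inside the compression itself):

    -- Conceptual zero-elimination: after requests are summed together,
    -- rows whose (illustrative) key figures all net out to zero are removed.
    DELETE FROM "/BIC/EZSALES"
    WHERE QUANTITY = 0
      AND AMOUNT   = 0;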
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation of the compression of InfoCubes with ORACLE as the DB platform.
    Compression on other DB platforms might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
    4. Does compression lock the entire fact table, even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
    First of all you should check whether the P-index on the e-facttable exists. If this index is missing, compression will be practically impossible. If it does not exist, you can recreate it by activating the cube again. Please check the activation log to see whether the creation was successful.
    There is one exception to this rule: if only one request is chosen for compression and it is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
    1. The compression ratio is completely determined by the data you are loading. Compression only means that data tuples which have an identical 'logical' key in the facttable (the logical key includes all the dimension identities with the exception of the 'technical' package dimension) are combined into a single record.
    So, for example, if you are loading data on a daily basis but your cube only contains the month as its finest time characteristic, you might get a compression ratio of 1/30.
    At the other extreme, if every record you are loading is different from the records you have loaded before (e.g. your record contains a sequence number), then the compression ratio will be 1, which means there is no compression at all. Nevertheless, even in this case you should compress the data if you are using partitioning on the E-facttable, because partitioning is only used for compressed data. Please see css-note 385163 for more details about partitioning.
    If you are absolutely sure that there are no duplicates in the records, you can consider the optimization which is described in css-note 0375132.
    2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. Whether the requests (or at least some of them) are compressed or whether the changes are rolled back depends on the phase the compression was in when it was canceled.
    The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    Insert or update every row of the request that should be compressed into the E-facttable
    Delete the entry for the corresponding request out of the package dimension of the cube
    Change the 'compr-dual'-flag in the table rsmdatastate
    Finally a COMMIT is executed.
    b) In the second phase the remaining data in the F-facttable is deleted.
    This is either done by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry of package dimension is deleted) it does not matter if this phase is terminated.
    Concluding this:
    If the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b) no rollback is executed. The only problem here is that the f-facttable might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. For running this function module you only have to enter the name of the infocube. If you want, you can also specify the dimension id of the request you want to delete (if you know this ID); if no ID is specified, the module deletes all the entries without a corresponding entry in the package dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of request x, all smaller requests are committed and only request x is handled as described above.
    3. The only size limitation for the compression is that the complete rollback information of the compression of a single request must fit into the rollback segments. For every record in the request which should be compressed, either an update of the corresponding record in the E-facttable is executed or the record is newly inserted. As a 'DROP PARTITION' is normally used for the deletion, the deletion is not critical for the rollback. As both operations are not so expensive (in terms of space) this should not be critical.
    Performance is heavily dependent on the hardware. As a rule of thumb you might expect to compress about 2 million rows per hour if the cube does not contain non-cumulative keyfigures, and about 1 million rows per hour if it does.
    4. It is not allowed to run two compressions concurrently on the same cube. But, for example, loading into a cube on which a compression runs should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
    5. Compression is forbidden if a selective deletion is running on this cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' or the primary index '0' on the E-facttable exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries parallel to the compression you have to leave the secondary indexes active.
    If you encounter the error ORA-4030 during the compression you should drop the secondary indexes on the e-facttable. This can be achieved by using transaction SE14. If you are using the tabstrip in the administrator workbench, the secondary indexes on the f-facttable will be dropped, too. (If there are requests which are smaller than 10 percent of the f-facttable, then the indexes on the f-facttable should be active, because then the reading of the requests can be sped up by using the secondary index on the package dimension.) After that you should start the compression again.
    Deleting the secondary indexes on the E facttable of an infocube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer during the time when the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E facttable, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes are there, you may use transaction RSRV and there select the elementary database check for the indexes of an infocube and its aggregates. That check is more informative than the lights on the performance tabstrip in the infocube maintenance.
    7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If that index does not exist, the compression tries to build it. If that fails (for whatever reason) the compression terminates.
    If you normally do not drop the secondary indexes during compression, then these indexes might degenerate after some compression runs, and therefore you should rebuild them from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data between the E-facttable and the F-facttable is changed by a compression, the query performance can be influenced significantly. Normally compression should lead to better performance, but you have to take care that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means that after the first compression of a significant amount of data, the E-facttable of the cube should be analyzed, because otherwise the optimizer still assumes that this table is empty. For the same reason, you should not analyze the F-facttable if all the requests are compressed, because then the optimizer assumes that the F-facttable is empty. Therefore you should analyze the F-facttable when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Hope this helps.
    Thanks,
    JituK

  • Updating Physical Cube Tables and Hierarchies OBIEE 11.1.1.7

    OBIEE 11.1.1.7
    I have imported an MSAS cube, modeled it, and created a presentation layer. I now have some new hierarchies I need to add to the physical layer.
    How do you import new Cube Tables and/or Hierarchies?
    When I select the Import option it creates a new OLAP database and connection pool; I am unable to successfully move those changes into the existing OLAP connection.
    I have been through the documentation and it recommends importing over a manual process. However, I am unable to figure out how to do this with the import process.
    Oracle® Fusion Middleware Metadata Repository Builder's Guide for Oracle Business Intelligence Enterprise Edition
    11g Release 1 (11.1.1) Part Number E10540-05
    Chapter 8 Working with Physical Tables, Cubes, and Joins
    There is a section called Working with Multidimensional Sources in the Physical Layer, in this section is says
    "Each multidimensional catalog in the data source can contain multiple physical cubes. You can import the metadata for one or more of these cubes into your Oracle BI repository. Although it is possible to create a cube table manually, it is recommended that you import metadata for cube tables and their components"
    The New Utilities feature(s) is really cool, but it doesn't work for OLAP connections. 
    Any help is greatly appreciated.

    Hi Michael,
    it works, but it's a bit tricky. I did it with Oracle OLAP and I think it must be similar with other OLAP databases.
    Rename your existing database in the Physical Layer to the name the Metadata Import dialog would create. With Oracle OLAP the "Data Source Name" is used. Just try it once.
    Unfortunately you cannot merge dimensions and cubes as you can with relational objects. So you have to delete the cubes and dimensions you will import from your renamed database in the Physical Layer. Of course the mapping between the physical and business model will be lost.
    Now you can import your modified OLAP cubes and dimensions, and they should be placed in your renamed database in the Physical Layer.
    Open the sources of your logical tables in the business model and add the dimensions or cubes in the "General" tab. Move to the "Column Mapping" tab and check whether the mapping is OK. If not (this will be the case if you customized the column names), you have to do the mapping from scratch.
    I recommend using the original names from the data source. If you need another name for a column, just create a new logical column and use "Derive from existing columns". Then you can reimport OLAP metadata more quickly because the mapping is done automatically.
    Hope this helps
    Regards Claus

  • Inventory cube compression

    Hi
    We have been live with inventory cube filling and deltas of material movements for the past year. However, we had not automated compression of the cube with marker update. What are the steps to automate rollup & compression of the inventory cube, and is this recommended as a standard practice?

    Dear Anil
    I have followed the same procedure
    i.e.
    BX-init: Compression with marker update
    BF-init: Compression without marker update
    UM-init: Compression without marker update
    BF-delta loads: Compression with marker update
    UM-delta loads: Compression with marker update
    But surprisingly the values are not reflected correctly
    Thanks & Regards
    Daniel

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume we cannot compress them further.
    Is this correct?
    We wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; we just want to increase report performance.
    Thoughts - has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%.
    As we only insert into the table, we are considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
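    Putting the quoted steps into a sketch (object names are invented; check the exact syntax against your release):

    -- 1. Mark the bitmap indexes unusable before introducing compression.
    ALTER INDEX fact_dim1_bmx UNUSABLE;
    -- 2. Set the compression attribute and compress the existing data.
    ALTER TABLE fact_sales MOVE PARTITION p201201 COMPRESS;
    -- 3. Rebuild the (local) bitmap index partitions.
    ALTER INDEX fact_dim1_bmx REBUILD PARTITION p201201;

    On the PCTFREE part of the question: lowering PCTFREE to 1 only affects blocks formatted after the change; existing blocks keep the old setting unless the segment is rebuilt, for example with ALTER TABLE ... MOVE.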

  • IB Queue: Can a Queue be Unordered and Partitioned at the same time?

    Hi,
    My question is related to Unordered Queue:
    Can a Queue be Unordered and Partitioned at the same time?
    From PeopleBooks: Managing Service Operation Queues
    "Unordered:
    Select to enable field partitioning and to process service operations unordered.
    By default, the check box is cleared and inbound service operations that are assigned to a queue are processed one at a time sequentially in the order that they are sent.
    Select to force the channel to handle all of its service operations in parallel (unordered), which does not guarantee a particular processing sequence. This disables the channel’s partitioning fields.
    Clear this check box if you want all of the queue's service operations processed sequentially or if you want to use the partitioning fields."
    This seems to indicate that Unordered queues don't use partitioning fields. Yet, there are a few delivered queues that are Unordered and have one or more Partition fields selected.
    EOEN_MSG_CHNL
    PSXP_MSG_CHNL
    SERVICE_ORDERS
    How does partitioning work in this case? Or is partitioning ignored in such cases?
    Thanks!

    I guess you could use reflection and do something like the following:
    import java.lang.reflect.Constructor ;
    import java.lang.reflect.Method ;
    public class MyClass<T> implements Cloneable {
      T a ;
      public MyClass ( final T a ) {
        // super ( ) ; // Superfluous
        this.a = a ;
      }
      @SuppressWarnings ( "unchecked" )
      public MyClass<T> clone ( ) {
        MyClass<T> o = null ;
        try { o = (MyClass<T>) super.clone ( ) ; }
        catch ( Exception e ) { e.printStackTrace ( ) ; System.exit ( 1 ) ; }
        o.a = null ;
        //  See if there is an accessible clone method and if there is use it.
        Class<T> c = (Class<T>) a.getClass ( ) ;
        Method m = null ;
        try {
          m = c.getMethod ( "clone" ) ;
          o.a = (T) m.invoke ( a ) ;
        } catch ( NoSuchMethodException nsme ) {
          System.err.println ( "NoSuchMethodException on clone." ) ;
          //  See if there is a copy constructor and if so use it.
          Constructor<T> constructor = null ;
          try {
            System.err.println ( c.getName ( ) ) ;
            constructor = c.getConstructor ( c ) ;
            o.a = constructor.newInstance ( a ) ;
          } catch ( Exception e ) { e.printStackTrace ( ) ; System.exit ( 1 ) ; }
        } catch ( Exception e ) { e.printStackTrace ( ) ; System.exit ( 1 ) ; }
        return o ;
      }
      public String toString ( ) { return "[ " + ( ( a == null ) ? "" : a.toString ( ) ) + " ]" ; }
      public static void main ( final String[] args ) {
        MyClass<String> foo = new MyClass<String> ( "zero" ) ;
        MyClass<String> fooClone = foo.clone ( ) ;
        System.out.println ( "foo = " + foo ) ;
        System.out.println ( "fooClone = " + fooClone ) ;
      }
    }

  • Reference partitioning and partition pruning

    Hi All,
    I am on v 11.2.0.3.
    I have a pair of typical parent-child tables. Child table is growing like hell, hence we want to partition it, which will be used for deleting/dropping old data later on.
    There is no partitioning key in the child table which I can use to relate the data to the time it was created. So I thought I could use the timestamp from the parent table to partition the parent table, and reference-partition the child table.
    I am more concerned about the child table (or the queries running on the child table) in terms of performance. ITEM_LIST_ID from the child table is used extensively in queries to access data from the child table.
    How will partition pruning work when the child table is queried on the foreign key? Will it go to the parent table every time, find the partition, and then resolve the partition for the child table?
    The setup is given in the scripts below. Will it cause a lot of locking (to resolve partitions)? Or am I worrying about nothing?
    Here are the scripts
    CREATE TABLE ITEM_LISTS /* Parent table, tens of thousands of records */
    (
      ITEM_LIST_ID    NUMBER(10)     NOT NULL PRIMARY KEY, /* Global index on partitioned table !!! */
      LIST_NAME       VARCHAR2(500)  NOT NULL,
      FIRST_INSERTED  TIMESTAMP(6)   NOT NULL
    )
    PARTITION BY RANGE ( FIRST_INSERTED )
    (
      partition p0 values less than ( to_date('20130101','YYYYMMDD') ),
      partition p201301 values less than ( to_date('20130201','YYYYMMDD') ),
      partition p201302 values less than ( to_date('20130301','YYYYMMDD') ),
      partition p201303 values less than ( to_date('20130401','YYYYMMDD') ),
      partition p201304 values less than ( to_date('20130501','YYYYMMDD') ),
      partition p201305 values less than ( to_date('20130601','YYYYMMDD') )
    );
    CREATE INDEX ITEM_LISTS_IDX1 ON ITEM_LISTS ( LIST_NAME ) LOCAL ;
    CREATE TABLE ITEM_LIST_DETAILS /* Child table, millions of records */
    (
      ITEM_ID        NUMBER(10)     NOT NULL,
      ITEM_LIST_ID   NUMBER(10)     NOT NULL, /* Always used in WHERE clause by lots of big queries */
      CODE           VARCHAR2(30)   NOT NULL,
      ALT_CODE       VARCHAR2(30)   NOT NULL,
      CONSTRAINT   ITEM_LIST_DETAILS_FK
      FOREIGN KEY  ( ITEM_LIST_ID ) REFERENCES ITEM_LISTS
    )
    PARTITION BY REFERENCE ( ITEM_LIST_DETAILS_FK );
    CREATE INDEX ITEM_LIST_DETAILS_IDX1 ON ITEM_LIST_DETAILS (ITEM_ID) LOCAL;
    CREATE INDEX ITEM_LIST_DETAILS_IDX2 ON ITEM_LIST_DETAILS (ITEM_LIST_ID, CODE) LOCAL;
    Any thoughts / opinions / corrections ?
    Thanks in advance

    To check how partition pruning works here, I inserted some data into these tables. I inserted data into ITEM_LISTS (parent) from DBA_OBJECTS (object_id => item_list_id, object_name => list_name, first_inserted => manually created). I also created corresponding child data in ITEM_LIST_DETAILS, so that for every item_list_id in the parent about 5000 records go into the child table.
    Looking at the queries and plans below, my question is: what exactly do the operations "PARTITION REFERENCE SINGLE" and "PARTITION REFERENCE ITERATOR" imply?
    I searched for "PARTITION REFERENCE ITERATOR" in the Oracle 11.2 documentation; it says "No exact match found"!!
    /* Direct query on child table */
    SQL> select count(*) from item_list_details where item_list_id = 6323 ;
      COUNT(*)
          5000
    1 row selected.
    Execution Plan
    Plan hash value: 2798904155
    | Id  | Operation                   | Name                   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT            |                        |     1 |     5 |    22   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE             |                        |     1 |     5 |            |          |       |       |
    |   2 |   PARTITION REFERENCE SINGLE|                        |  5000 | 25000 |    22   (0)| 00:00:01 |   KEY |   KEY |
    |*  3 |    INDEX RANGE SCAN         | ITEM_LIST_DETAILS_IDX2 |  5000 | 25000 |    22   (0)| 00:00:01 |   KEY |   KEY |
    Predicate Information (identified by operation id):
       3 - access("ITEM_LIST_ID"=6323)
    SQL> select * from temp1; /* Dummy table to try out some joins */
    OBJECT_ID OBJECT_NAME
          6598 WRH$_INTERCONNECT_PINGS
    1 row selected.
    /* Query on child table, joining with some other table */
    SQL> select count(*)
      2  from temp1 d, ITEM_LIST_DETAILS i1
      3  where d.object_id = i1.item_list_id
      4  and d.object_name = 'WRH$_INTERCONNECT_PINGS';
      COUNT(*)
          5000
    1 row selected.
    Execution Plan
    Plan hash value: 2288153583
    | Id  | Operation                      | Name                   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT               |                        |     1 |    70 |    24   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE                |                        |     1 |    70 |            |          |       |       |
    |   2 |   NESTED LOOPS                 |                        |  5000 |   341K|    24   (0)| 00:00:01 |       |       |
    |*  3 |    TABLE ACCESS FULL           | TEMP1                  |     1 |    65 |     3   (0)| 00:00:01 |       |       |
    |   4 |    PARTITION REFERENCE ITERATOR|                        |  5000 | 25000 |    21   (0)| 00:00:01 |   KEY |   KEY |
    |*  5 |     INDEX RANGE SCAN           | ITEM_LIST_DETAILS_IDX2 |  5000 | 25000 |    21   (0)| 00:00:01 |   KEY |   KEY |
    Predicate Information (identified by operation id):
       3 - filter("D"."OBJECT_NAME"='WRH$_INTERCONNECT_PINGS')
       5 - access("D"."OBJECT_ID"="I1"."ITEM_LIST_ID")
