Proper Partitioning for a table

Dear Netters,
We have a table that is defined as follows:
CREATE TABLE RECORDSINGLECHOICEVALUE (
  RECORDFK        RAW(16) NOT NULL,
  CHOICEFK        RAW(16) NOT NULL,
  FIELDFK         RAW(16) NOT NULL,
  SOURCEENTITYFK  RAW(16) NOT NULL,
  CONSTRAINT RDSINGLECHOICEVAL_PK PRIMARY KEY (RECORDFK, FIELDFK)
);
In it, we store GUIDs that reference other tables in the application.
There are generally the following types of queries that use the table:
SELECT COUNT(DISTINCT t1.SourceEntityFk)
FROM RECORDSINGLECHOICEVALUE t1
    INNER JOIN RECORDSINGLECHOICEVALUE t2 ON (
           t1.SourceEntityFk = t2.SourceEntityFk            ---- they belong to the same Entity
           AND t1.RecordFk = t2.RecordFk                    ---- .... AND to the same Record
           AND t2.FieldFk = {some other guid value}
    )
WHERE t1.FieldFk = {some guid value}                  -- always a single value
   AND t1.ChoiceFk IN {some list of guid values}      -- this part is optional
or
SELECT COUNT(DISTINCT t1.SourceEntityFk)
FROM RECORDSINGLECHOICEVALUE t1
    INNER JOIN RECORDSINGLECHOICEVALUE t2 ON (
           t1.SourceEntityFk = t2.SourceEntityFk            ---- they belong to the same Entity
           AND t2.FieldFk = {some other guid value}
    )
WHERE t1.FieldFk = {some guid value}                  -- always a single value
   AND t1.ChoiceFk IN {some list of guid values}      -- this part is optional
The table could be joined to itself multiple times.
For partitioning, we used HASH partition on FieldFk (128 partitions were created), since this is a scalar that participates in 99% of the queries against the table. However, due to the nature of the data, some of the partitions are heavily skewed (one field is more prevalent than others), resulting in some partitions having < 10k rows, and others having > 200M rows.
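(For reference, the skew can be confirmed straight from the dictionary once statistics are gathered; a sketch, assuming the table name above:)

```sql
-- Row counts per hash partition, largest first. NUM_ROWS reflects the
-- last DBMS_STATS gather, so refresh stats before trusting it.
SELECT partition_name, num_rows
FROM   user_tab_partitions
WHERE  table_name = 'RECORDSINGLECHOICEVALUE'
ORDER  BY num_rows DESC NULLS LAST;
```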
Would you recommend an alternative partitioning schema? Sub-partitions?
Thank you in advance.
--Alex

>
The table in question (and we have a few others defined very similarly) participates in many queries against the database. Queries can be formed in such a way that the user can pick the Field (FieldFk) and (optionally) ChoiceFks at will. This is a highly flexible, user-driven query engine. Table(s) can be joined many times within the same query, resulting in sub-optimal performance.
The goal is to come up with a schema (partitioning/indexing/any other) that supports a positive user experience. The 200M rows in a single partition was an example of when things start breaking loose. In the near future, this number can grow at least 10x.
To clarify the business case, imagine human subjects, which have genetic variants. Say there are 100 million people in the database (EntityFk). They all have 23 chromosomes, about 20,000 protein-producing genes of interest (460,000 combinations), and these have genetic variations (say, 10,000) of different types (types are defined as ChoiceFk).
The query would then try to identify subjects that have a specific type of gene variation (Field = "Gene variation", Choice = "Fusion"), and are males (Field = "Gender", Choice = "Male"), and have been diagnosed with a specific disorder (Field = "Diagnosis", Choice = "Specific Disorder"), and that have a recording of treatment (Field = "Treatment", choice is NOT specified) in the database. So the table is getting joined onto itself in a few different ways, and many times (sometimes as many as 10).
With stats in place, with covering indexes on Entity + Field + Choice (in all possible combinations thereof), and with hash partitioning on Field alone (keys are GUIDs, so range partitioning, while possible, is counter-intuitive), performance suffers with increasing volume.
We are evaluating other options, for different partition keys, indexing, and anything else in between.
Any suggestions are much appreciated.
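One option worth testing against the skew described above is composite LIST-HASH partitioning on FieldFk (available from 11g): the few hot field GUIDs get dedicated partitions, while the long tail falls into a DEFAULT partition. This is only a sketch, not a recommendation; the HEXTORAW values are placeholders for actual field GUIDs, and the column list mirrors the table above.

```sql
CREATE TABLE RECORDSINGLECHOICEVALUE (
  RECORDFK        RAW(16) NOT NULL,
  CHOICEFK        RAW(16) NOT NULL,
  FIELDFK         RAW(16) NOT NULL,
  SOURCEENTITYFK  RAW(16) NOT NULL
)
PARTITION BY LIST (FIELDFK)
SUBPARTITION BY HASH (SOURCEENTITYFK) SUBPARTITIONS 16 (
  -- Hot, high-volume fields get their own partitions...
  PARTITION p_hot_field_1 VALUES (HEXTORAW('...')),  -- placeholder GUID
  PARTITION p_hot_field_2 VALUES (HEXTORAW('...')),  -- placeholder GUID
  -- ...and the long tail shares a DEFAULT partition.
  PARTITION p_other       VALUES (DEFAULT)
);
```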
>
Thanks for the additional information. From what you describe it sounds like a classic use case for more of a star-schema architecture or am I still missing something?
To see what I am talking about take a look at my extensive reply in this thread from a year ago
Re: How to design a fact table to keep track of active dimensions?
Posted: Mar 18, 2012 7:13 PM
I provided example code that should give you the idea of what I mean.
For use cases like this bitmap indexes are VERY efficient. And since you said this:
>
The problem is performance. Maintenance side of the house is minimal - data is loaded from an external source, once every X days, via ETL, and so that is not a concern.
>
you should only have to rebuild/update the bitmap indexes every X days also. The main drawback of bitmap indexes is the performance when they are updated. They are NOT appropriate for OLTP systems but for OLAP where the index updates can be done offline in batch mode (or rebuilt) that is an ideal use case.
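A minimal sketch of that batch pattern (the index name is illustrative): mark the bitmap indexes unusable before the periodic ETL load, then rebuild them afterwards.

```sql
-- Before the load: skip bitmap maintenance during DML.
ALTER INDEX star_fact_state_bix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the ETL load here ...

-- After the load: rebuild offline, optionally in parallel.
ALTER INDEX star_fact_state_bix REBUILD PARALLEL 4 NOLOGGING;
```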
You can easily conduct some tests using the example code I provide in that thread link as a template.
In my example the attributes I used were: age, beer, marital_status, softdrink, state, summer_sport.
You would use attributes like: Gene variation, Gender, Diagnosis, Treatment.
Bitmap indexes store a bit for NULL values also so you could use NULL to indicate NO TREATMENT.
Your goal would be to construct a query that uses a logical combination of your attributes to specify what you are interested in. Then, as you can see by the plans I posted, Oracle will take it from there and perform bitmap index operations using ONLY the indexes. This is one sample query I provided:
SQL> select rowid from star_fact where
  2   (state = 'CA') or (state = 'CO')
  3  and (age = 'young') and (marital_status = 'divorced')
  4  and (((summer_sport = 'baseball') and (softdrink = 'pepsi'))
  5  or ((summer_sport = 'golf') and (beer = 'coors')));
Your query would use your attribute names and values. Notice also that there are no multiple joins to the same table, although there can be joins if necessary without preventing Oracle from using the bitmap indexes efficiently.
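For completeness, the bitmap indexes that make such a query index-only are just one single-column bitmap index per low-cardinality attribute (a sketch matching the example's column names):

```sql
CREATE BITMAP INDEX star_fact_state_bix ON star_fact (state);
CREATE BITMAP INDEX star_fact_age_bix   ON star_fact (age);
CREATE BITMAP INDEX star_fact_beer_bix  ON star_fact (beer);
-- Oracle ANDs/ORs the compressed bitmaps and returns rowids
-- without ever visiting the table blocks.
```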

Similar Messages

  • Suggestions for table partition for existing tables.

    I have a table as below. This table contains huge data. This table has many child tables. I am planning to do 'Reference Partitioning' for the same.
    create table PROMOTION_DTL (
      PROMO_ID              NUMBER(10) not null,
      EVENT                 VARCHAR2(6),
      PROMO_START_DATE      TIMESTAMP(6),
      PROMO_END_DATE        TIMESTAMP(6),
      PROMO_COST_START_DATE TIMESTAMP(6),
      EVENT_CUT_OFF_DATE    TIMESTAMP(6),
      REMARKS               VARCHAR2(75),
      CREATE_BY             VARCHAR2(50),
      CREATE_DATE           TIMESTAMP(6),
      UPDATE_BY             VARCHAR2(50),
      UPDATE_DATE           TIMESTAMP(6)
    );
    alter table PROMOTION_DTL
      add constraint PROMOTION_DTL_PK primary key (PROMO_ID);
    alter table PROMOTION_DTL
      add constraint PROMO_EVENT_FK foreign key (EVENT)
      references SP_PROMO_EVENT_MST (EVENT);
    -- Create/Recreate indexes
    create index PROMOTION_IDX1 on PROMOTION_DTL (PROMO_ID, EVENT);
    create unique index PROMOTION_PK on PROMOTION_DTL (PROMO_ID);
    -- Grant/Revoke object privileges
    grant select, insert, update, delete on PROMOTION_DTL to SCHEMA_RW_ROLE;
    I would like to partition this table. Most of the queries contain the following conditions:
    promo_end_date >= SYSDATE
    and
    (event = :input_event OR
    (:input_Start_Date <= promo_end_date
    AND promo_start_date <= :input_End_Date))
    The promotion can be closed at any time by updating the PROMO_END_DATE.
    Interval partitioning on PROMO_END_DATE is not possible, as PROMO_END_DATE is a nullable and updatable field.
    I am new to table partitioning.
    Any suggestions are welcome...

    DO NOT POST THE SAME QUESTION IN MULTIPLE FORUMS PLEASE!
    Suggestions for table partition of existing tables

  • How to identity segment when to setup partition for large table?

    I have a table whose size is about 3G. There is a code column in this table with 20 distinct values. I am trying to create a list partition on this column. How can I assign segments for the different values of this partition?
    In my database, fewer than 10 segments are available. If I want better performance, should each partition be on a different segment or a different device? Should each segment have enough size to hold all its data? What happens if the segment is smaller, for example, if I only have 4 segments, each with 500M?
    If I remove or change the partitioning strategy, for example, change the type to range, will the system release the partitions on the segments automatically?

    This section of the performance and tuning guide addresses all of these concerns.  Give it a good read and post questions that you have about the documentation:
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc00841.1570/html/phys_tune/title.htm

  • [SOLVED]Proper Partitioning for Dual-boot

    I am attempting to dual-boot Windows XP and Arch, except I've left no free space...
    Do I have to reformat?
    Also, a question on windows. It uses 1 partition for everything, then allows you to create logical ones, right?
    Last edited by Haptic (2011-06-22 04:31:38)

    Haptic wrote:
    I am attempting to dual-boot Windows XP and Arch, except I've left no free space...
    Do I have to reformat?
    Also, a question on windows. It uses 1 partition for everything, then allows you to create logical ones, right?
    What Mardoct said about resizing w/ gparted is true, so unless you have the data backed up, do it at your own risk.  Can you post a screenshot of a gparted map of your drive?
    As you know, Win needs to be the first partition on the drive.  You are limited to four primary partitions, so if you want more, make three primary partitions, then make the rest of the space an extended partition.  Inside the extended partition, you can add many logical partitions.  I'd recommend doing all this from within gparted.
    My 1 TB drive example:
    /dev/sda1 (primary for windows), 20 gigs
    /dev/sda2 (primary for Arch), 20 gigs
    /dev/sda3 (backup of my Arch partition), 20 gigs
    /dev/sda4 (EXTENDED PARTITION), 873 gigs
    /dev/sda5 (logical for /home), 72 gigs
    /dev/sda6 (logical for /var), 8 gigs
    /dev/sda7 (logical for /boot), 0.11 gigs
    /dev/sda8 (logical for /data), 784 gigs
    /dev/sda9 (logical for swap), 8 gigs
    As a side note, you can also use gparted to copy/paste entire partitions, which makes keeping backups very easy.  My /dev/sda3 is a periodic backup of my live Arch partition.
    Last edited by graysky (2009-09-16 11:15:29)

  • Partitioning a fact table

    I am curious to hear techniques for partitioning a fact table with OWB. I know HOW to set up the partitioning for the table, but what I am curious about is what type of partitioning everyone is suggesting. Take the following example: let's say we have a sales transaction fact table. It has dimensions of Date, Product, and Store. An immediate partitioning idea is to partition the table by month. But my curiosity arises in the method used to partition the fact table. There is no longer a true date field in the fact table to do range partitioning on. And hash partitioning will not distribute the records by month.
    One example I found was to "code" the surrogate key in the date dimension so that it was created in the following manner "YYYYMMDD". Then you could use the range partitioning based on values of the key in the fact table less than 20040200 for Jan. 2004, less than 20040300 for Feb. 2004, and so on.
    Is this a good idea?

    Jason,
    In general, obviously, query performance and scaleability benefit from partitioning. Rather than hitting the entire table upon retrieving data, you would only hit a part of the table. There are two main strategies to identify what partitioning strategy to choose:
    1) Users always query specific parts of the data (e.g. data from a particular month), in which case it makes sense for that part to be the size of a partition. If your end users often query by month or compare data on a month-by-month basis, then partitioning by month may well be the right strategy.
    2) Improve data loading speed by creating partitions. The database supports partition exchange loading, supported by Warehouse Builder as well, which enables you to swap a temporary table and a partition at once. In general, your load frequency then decides your partitioning strategy: if you load on a daily basis, perhaps you want daily partitions. Beware that for Warehouse Builder to use the partition exchange loading feature you will have to have a date field in the fact table, so you would change the time dimension.
    In general, your suggestion for the generated surrogate key would work.
    Thanks,
    Mark.
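    The partition exchange load mentioned above looks roughly like this (a sketch; table, partition, and staging-table names are illustrative):

    ```sql
    -- Load the month's rows into a plain staging table with the same
    -- column structure, then swap it into the target partition as a
    -- near-instant data dictionary operation:
    ALTER TABLE sales_fact
      EXCHANGE PARTITION p_200401
      WITH TABLE sales_fact_stage
      INCLUDING INDEXES
      WITHOUT VALIDATION;
    ```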

  • Use multiple partitions on a table in query

    Hi All,
    Overview:-
    I have a table - TRACK - which is partitioned on a weekly basis. I'm using this table in one of my SQL queries, in which I need to find a monthly count of some column data. The query looks like:
    Select  count(*)
    from Barcode B
    inner join Track partition P(99) T
        on B.item_barcode = T.item_barcode
    where B.create_date between 20120202 and 20120209;
    In the above query I am fetching the count for one week using the partition created on that table during that week.
    Desired output:-
    I want to fetch data between 01-Feb and 01-Mar and use the rest of the partitions for that table during the duration in the above query. The weekly partitions currently present for Track table are -
    P(99) - 20120202
    P(100) - 20120209
    P(101) - 20120216
    P(102) - 20120223
    P(103) - 20120301
    My question is that above I've used one partition successfully; now how can I use the other 4 partitions in the same query if I am finding the count for one month (i.e. from 20120201 to 20120301)?
    Environment:-
    Oracle version - Oracle 10g R2 (10.2.0.4)
    Operating System - AIX version 5
    Thanks.
    Edited by: Sandyboy036 on Mar 12, 2012 10:47 AM

    I'm with damorgan on this one, though I was lazy and only read it twice.
    You've got a mix of everything in this one and none of it is correct.
    1. If B.create_date is VARCHAR2 this is the wrong way to store dates.
    2. All Track partitions are needed for one month if you only have the 5 partitions you listed so there is no point in mentioning any of them by name. So the answer to 'how can I use the other 4 partitions' is - don't; Oracle will use them anyway.
    3. BETWEEN 01-Feb and 01-Mar - be aware that the BETWEEN operator includes both endpoints, so if you actually used it in your query the data would include March 1.
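    Put differently, the monthly count needs no PARTITION clause at all; widening the date filter lets the optimizer prune to the relevant partitions (a sketch based on the original query; the half-open range avoids the March 1 pitfall):

    ```sql
    SELECT COUNT(*)
    FROM   Barcode B
           INNER JOIN Track T
                   ON B.item_barcode = T.item_barcode
    WHERE  B.create_date >= 20120201
      AND  B.create_date <  20120301;   -- excludes March 1
    ```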

  • Sql Server Partitioning for Update

    hi ... 
    I have created a table on my SQL Server database that holds transactions on my database; some of the update processes take more time to complete.
    My question is: will partitioning this table be useful and decrease the update time, or will it be the same?
    Thanks for attention

    IMO, I would never partition a table which had 10,000 rows.  You might want to if there is a good partitioning key and you expect this table to get much larger in the future. 
    In any case, on a 10,000 row table, I can't see any scenario where partitioning will significantly improve performance.
    You would be much better off either improving the indexing and/or rewriting the queries to be more efficient. 

  • Partitioning an Existing Table with data

    Hi All,
    I have a few tables with data. I need to partition the tables without affecting the existing values; is it possible?
    If yes, then please suggest some ideas to achieve that.
    Thanks & Regards
    Sami

    Hi All,
    I need to partition an existing table with 1 million records.
    1. The first partition should be created for 6 months.
    2. The second partition should be created for 1 year.
    3. So as of now we have 6 months of data in production + another 6 months of data in the first partition + another 1 year of data in the second partition.
    4. Data more than 2 years old should be moved to another partition or archived.
    Kindly provide your valuable suggestions.
    Thanks & Regards
    Sami

  • Drop older ( more than 3days of ) partitions in a table

    Hi Guru's,
    I have created hourly interval partitioning for a table, and management decided on a 3-day retention policy. So I need to schedule a cron job to remove partitions older than 3 days, but I am not sure how to write a shell script or a procedure to do this. Please help me with this; below are the table syntax and the partitions.
    CREATE TABLE TRANSACTION (
         ID NUMBER(18) NOT NULL ,
         ACCT_ID NUMBER(18) NOT NULL ,
         BANKING_ID NUMBER(18) NOT NULL ,
         CREATED_DATE TIMESTAMP(3) NOT NULL ,
         COMMISSION_AMOUNT NUMBER(15,2) NULL ,
         CONFIRMATION_NO NVARCHAR2(255) NULL ,
         CREATED_BY NVARCHAR2(255) NOT NULL ,
         CREATED_TS TIMESTAMP(3) NOT NULL ,
         MODIFIED_BY NVARCHAR2(255) NOT NULL ,
         MODIFIED_TS TIMESTAMP(3) NOT NULL
    )
    PARTITION BY RANGE ("CREATED_TS") INTERVAL( NUMTODSINTERVAL(1, 'HOUR'))
    ( PARTITION TRANS_DATA VALUES LESS THAN (TO_DATE(' 2011-11-04 20:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TABLE_TS_P,
    PARTITION TRANS_DATA1 VALUES LESS THAN (TO_DATE(' 2011-11-04 21:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TABLE_TS_P1,
    PARTITION TRANS_DATA2 VALUES LESS THAN (TO_DATE(' 2011-11-04 22:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TABLE_TS_P2,
    PARTITION TRANS_DATA3 VALUES LESS THAN (TO_DATE(' 2011-11-04 23:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TABLE_TS_P3,
    PARTITION TRANS_DATA4 VALUES LESS THAN (TO_DATE(' 2011-11-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TABLE_TS_P4,
    PARTITION TRANS_DATA5 VALUES LESS THAN (TO_DATE(' 2011-11-05 01:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TABLE_TS_P5
    );
    Here Partitioning key is "CREATED_TS".
    HERE are the partitions created for this table.
    HIGH_VALUE PARTITION_NAME
    TIMESTAMP' 2011-11-04 20:00:00' TRANS_DATA
    TIMESTAMP' 2011-11-04 21:00:00' TRANS_DATA1
    TIMESTAMP' 2011-11-04 22:00:00' TRANS_DATA2
    TIMESTAMP' 2011-11-04 23:00:00' TRANS_DATA3
    TIMESTAMP' 2011-11-05 00:00:00' TRANS_DATA4
    TIMESTAMP' 2011-11-05 01:00:00' TRANS_DATA5
    TIMESTAMP' 2011-11-05 02:00:00' SYS_P41
    TIMESTAMP' 2011-11-05 03:00:00' SYS_P42
    TIMESTAMP' 2011-11-05 04:00:00' SYS_P44
    TIMESTAMP' 2011-11-05 05:00:00' SYS_P46
    TIMESTAMP' 2011-11-05 06:00:00' SYS_P49
    TIMESTAMP' 2011-11-05 07:00:00' SYS_P52
    TIMESTAMP' 2011-11-05 08:00:00' SYS_P102
    TIMESTAMP' 2011-11-05 09:00:00' SYS_P121
    TIMESTAMP' 2011-11-05 10:00:00' SYS_P141
    TIMESTAMP' 2011-11-05 11:00:00' SYS_P144
    TIMESTAMP' 2011-11-05 12:00:00' SYS_P147
    TIMESTAMP' 2011-11-05 13:00:00' SYS_P149
    TIMESTAMP' 2011-11-05 14:00:00' SYS_P151
    TIMESTAMP' 2011-11-05 15:00:00' SYS_P152
    TIMESTAMP' 2011-11-05 16:00:00' SYS_P154
    TIMESTAMP' 2011-11-05 17:00:00' SYS_P157
    TIMESTAMP' 2011-11-05 18:00:00' SYS_P222
    TIMESTAMP' 2011-11-05 19:00:00' SYS_P159
    TIMESTAMP' 2011-11-05 20:00:00' SYS_P243
    TIMESTAMP' 2011-11-05 21:00:00' SYS_P261
    TIMESTAMP' 2011-11-05 22:00:00' SYS_P282
    TIMESTAMP' 2011-11-06 01:00:00' SYS_P285
    TIMESTAMP' 2011-11-06 02:00:00' SYS_P303
    TIMESTAMP' 2011-11-06 03:00:00' SYS_P287
    TIMESTAMP' 2011-11-06 04:00:00' SYS_P289
    TIMESTAMP' 2011-11-06 05:00:00' SYS_P307
    TIMESTAMP' 2011-11-06 06:00:00' SYS_P324
    TIMESTAMP' 2011-11-06 07:00:00' SYS_P310
    TIMESTAMP' 2011-11-06 08:00:00' SYS_P313
    TIMESTAMP' 2011-11-06 09:00:00' SYS_P342
    TIMESTAMP' 2011-11-06 10:00:00' SYS_P292
    TIMESTAMP' 2011-11-06 11:00:00' SYS_P315
    TIMESTAMP' 2011-11-06 12:00:00' SYS_P295
    TIMESTAMP' 2011-11-06 13:00:00' SYS_P298
    TIMESTAMP' 2011-11-06 14:00:00' SYS_P361
    TIMESTAMP' 2011-11-06 15:00:00' SYS_P363
    TIMESTAMP' 2011-11-06 16:00:00' SYS_P365
    TIMESTAMP' 2011-11-06 17:00:00' SYS_P366
    TIMESTAMP' 2011-11-06 18:00:00' SYS_P368
    TIMESTAMP' 2011-11-06 19:00:00' SYS_P371
    TIMESTAMP' 2011-11-06 20:00:00' SYS_P373
    Here the partition names are not in order, so I am not able to figure out the syntax to drop the partitions. Please let me know how to drop the older partitions.

    You can use partition_position from user_tab_partitions to determine how many partitions you want to drop and which ones. These are always in order, regardless of your partition names. This obviously assumes that all your partitions are uniform (hourly in your case).
    Milina
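    A sketch of that approach as an anonymous PL/SQL block (run from cron via sqlplus; it assumes uniform hourly partitions, and note that the original anchor range partition of an interval-partitioned table cannot itself be dropped):

    ```sql
    DECLARE
      v_keep  CONSTANT PLS_INTEGER := 72;   -- 3 days x 24 hourly partitions
      v_total PLS_INTEGER;
    BEGIN
      SELECT COUNT(*) INTO v_total
      FROM   user_tab_partitions
      WHERE  table_name = 'TRANSACTION';

      -- PARTITION_POSITION orders partitions oldest-first regardless of
      -- their (system-generated) names.
      FOR p IN (SELECT partition_name
                FROM   user_tab_partitions
                WHERE  table_name = 'TRANSACTION'
                AND    partition_position <= v_total - v_keep
                ORDER  BY partition_position)
      LOOP
        EXECUTE IMMEDIATE
          'ALTER TABLE TRANSACTION DROP PARTITION ' || p.partition_name;
      END LOOP;
    END;
    /
    ```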

  • Find the partition for the fact table

    Oracle version : Oracle 10.2
    I have one fact table with daily partitions.
    I am inserting some test data in this table for old date 20100101.
    I am able to insert this record in this table as below
    insert into fact_table values (20100101,123,456);
    However, I observed that the partition for this date does not exist in the table (all_tab_partitions); moreover, I am not able to select the data using
    select * from facT_table partition(d_20100101)
    but I am able to extract the data using
    select * from facT_table where date_id=20100101
    could someone please let me know the way to find the partition in which this data might be inserted,
    and if the partition for date 20100101 is not present, then why is the insert for that date working?

    user507531 wrote:
    However, I observed that the partition for this date does not exist in the table (all_tab_partitions); moreover, I am not able to select the data using
    select * from facT_table partition(d_20100101)
    Wrong approach.
    but I am able to extract the data using
    select * from facT_table where date_id=20100101
    Correct approach.
    could some one please let me know the way to find the partition in which this data might be inserted
    and if the partition for date 20100101 is not present then why is the insert for that date working?
    Who says that the date is invalid..? This is a range partition - which means that each partition covers a range. And if you bothered to read up in the SQL Reference Guide on how a range partition is defined, you will notice that each partition is defined with the end value of the range it covers. There is no start value - as the previous partition's end value is the border between this and the prior partition.
    I suggest that before you use a database feature you first familiarise yourself with it. Otherwise, using it incorrectly and making the wrong assumptions about it is more than likely to result.

  • How to create ddl for partitioning for a child table?

    Hi folks,
    I am new to the partitioning topic. I read some manuals and tutorials but did not find exactly what I need, so I would ask for your help.
    I have two tables.
    The partitioning of the master table is clear:
    create TABLE ASP_PARTITION_TABLE_MASTER (
      MASTER_ID VARCHAR2(20) NOT NULL
    , TIMESTAMP DATE
    , DATA VARCHAR2(20)
    , CONSTRAINT ASP_PARTITION_TABLE_MASTE_PK PRIMARY KEY (MASTER_ID) USING INDEX ENABLE
    )
    PARTITION BY RANGE (TIMESTAMP)
    ( PARTITION PARTITION1 VALUES LESS THAN (TO_DATE('20090101','YYYYMMDD')) NOCOMPRESS
    , PARTITION PARTITION2 VALUES LESS THAN (TO_DATE('20100101','YYYYMMDD')) NOCOMPRESS
    , PARTITION PARTITION3 VALUES LESS THAN (TO_DATE('20110101','YYYYMMDD')) NOCOMPRESS
    );
    This table should be partitioned by a timestamp.
    Now comes the difficulty:
    I have a child table which has the master_id as foreign key. The slave table should also be partitioned by the timestamp - but this one occurs only in the master table.
    CREATE TABLE ASP_PARTITION_TABLE_SLAVE (
      SLAVE_ID VARCHAR2(20) NOT NULL
    , FK_MASTER_ID NUMBER(10)
    , DATA1 VARCHAR2(20)
    , DATA2 VARCHAR2(20)
    , COLUMN1 VARCHAR2(20)
    , CONSTRAINT ASP_PK_SLAVE PRIMARY KEY (SLAVE_ID) ENABLE
    );
    ALTER TABLE ASP_PARTITION_TABLE_SLAVE
    ADD CONSTRAINT ASP_FK_MASTER_IS FOREIGN KEY (FK_MASTER_ID)
    REFERENCES ASP_PARTITION_TABLE_MASTER (MASTER_ID)
    ON DELETE CASCADE ENABLE;
    How can I create such a range partition for the slave table?
    PS: Currently we are using Oracle 10.2 but we will upgrade to Oracle 11g in the near future.
    Thanks in advance,
    Andreas

    @sb92075:
    It seems I should give more details:
    the parent table holds the master_id, the timestamp, and plenty of general information which is valid for all children.
    Each parent has between 5 and 50 children, which contain different data items. So our child table is much bigger than the parent table.
    Our selects join the parent and child table filtered by the timestamp and some other indexed columns, so we are not saving the same timestamp redundantly.
    @bluefrog
    With the partitioning we would like to reach the following goals:
    1. gain more performance (quicker response time)
    2. get the ability to archive and drop partitions which are older than a specific timestamp. (We only need to keep the data of the last 1.5 years so we would like to drop the complete partition / tablespace instead of using a delete-statement which generates much load (transaction, archive, logging)).
    Thank you for your links. I will have a look on it.
    But due to the archiving issue we would also like to split up the child table by the timestamp of the parent table.
    Regards, Andreas
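    Since the 11g upgrade is planned: reference partitioning equipartitions the child with its parent, so the child never stores the timestamp, and dropping a master partition drops the matching slave partition (which covers the archiving goal). A sketch against the master DDL above; note the FK column must match the parent key's type and be NOT NULL:

    ```sql
    CREATE TABLE ASP_PARTITION_TABLE_SLAVE (
      SLAVE_ID     VARCHAR2(20) NOT NULL,
      FK_MASTER_ID VARCHAR2(20) NOT NULL,   -- NOT NULL is required here
      DATA1        VARCHAR2(20),
      CONSTRAINT ASP_PK_SLAVE PRIMARY KEY (SLAVE_ID),
      CONSTRAINT ASP_FK_MASTER FOREIGN KEY (FK_MASTER_ID)
        REFERENCES ASP_PARTITION_TABLE_MASTER (MASTER_ID)
        ON DELETE CASCADE
    )
    PARTITION BY REFERENCE (ASP_FK_MASTER);
    ```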

  • Need help in deciding type of partition for tables

    Hi,
    I have a few tables which have millions of records. In some of the tables, we have data from previous years, which we don't use now. Can we create partitioned tables for such tables?
    For the other tables, how do we decide whether to use range/list/hash partitioning?
    Do I need to recreate the indexes for these tables after partitioning them?
    Please guide me.
    Best Regards,

    Partitioning decisions are based upon how you will access the data.
    If you access by date then partition by date.
    If you access by means of a list of values then use list.
    If there is no pattern and you just need to break the data up into smaller buckets, use hash.
    I see no reason why, based on what you have written, range partitioning by date would not be worthy of consideration.
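    For instance, the date-based range scheme suggested above might be sketched like this (placeholder names; previous years' data can then be archived with DROP PARTITION instead of a mass DELETE):

    ```sql
    CREATE TABLE orders_hist (
      order_id   NUMBER       NOT NULL,
      order_date DATE         NOT NULL,
      amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (order_date) (
      PARTITION p_2010    VALUES LESS THAN (TO_DATE('20110101','YYYYMMDD')),
      PARTITION p_2011    VALUES LESS THAN (TO_DATE('20120101','YYYYMMDD')),
      PARTITION p_current VALUES LESS THAN (MAXVALUE)
    );
    -- LOCAL indexes are partition-aligned, so dropping a partition does
    -- not invalidate the whole index.
    CREATE INDEX orders_hist_dt_ix ON orders_hist (order_date) LOCAL;
    ```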

  • Partitioning for Fact and Bridge tables

    In our data warehouse we have a fact table with 35 million records which keeps monthly snapshots of data. It has been range partitioned on the date key for each month end. We have another bridge table with around 50 million records, which is partitioned the same way. These two tables can be joined on some fields, one of which is the date key. When I run a query which uses both of these tables to display data for a month, it uses partitions on the fact table but does a full scan on the bridge table. Partitions of the bridge table are not being used at all. This results in a very long response time.
    Can you suggest me some way to make it work?
    Thanks

    That's a matter of policy. Does your organisation enforce a strict separation between ETL developers and DBAs? Then it's probably up to the DBAs to set up the tables, but of course you have to tell them the structures of the tables that they shall create for you. Other organisations allow their ETL developers to create tables "on the fly" in development environments. If this is the case for you, then it's up to you.
    If this is not what you wanted to know, could you please post your question differently so that we can clearly understand what you mean?
    Regards,
    Nico

  • List partitioning multi-org tables

    Hi
    I am doing list partitioning on the receivables multi-org tables on the org_id column, and I am running into a performance problem with the multi-org views. The multi-org views for the receivables tables are defined like the one below, with an NVL condition on org_id (the partitioning column) in their WHERE clause:
    create or replace view ra_customer_trx as
    select * from ra_customer_trx_all
    WHERE NVL(ORG_ID, NVL(TO_NUMBER(DECODE(SUBSTRB(USERENV('CLIENT_INFO'),1,1), ' ', NULL, SUBSTRB(USERENV('CLIENT_INFO'),1,10))), -99)) = NVL(TO_NUMBER(DECODE(SUBSTRB(USERENV('CLIENT_INFO'),1,1), ' ', NULL, SUBSTRB(USERENV('CLIENT_INFO'),1,10))), -99)
    Queries against the view scan all partitions, when I expected partition pruning to kick in so that the query goes only against the specific partition.
    select count(1) from ra_customer_trx ---- scans all partitions
    select count(1) from ra_customer_trx_all where org_id = <> ---- scans a single partition, works well
    When I recreate the view without any function calls on the org_id column, partition pruning happens.
    In a non-partitioned environment with an index on the org_id column, both of the above SQL statements use the index and yield the same result.
    So my questions are:
    1. Is there a way to get around this problem without having to modify the Oracle-supplied multi-org views? Are there any options I can supply in the partitioning script?
    2. In a non-partitioned environment with an index on org_id, how is the optimizer able to use the index, whereas it is not able to in the partitioned environment? Both environments have the same view definition with the NVL(org...) condition.
    Does anyone have any suggestions?
    Thank you.
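    For illustration, the pruning-friendly rewrite the poster describes (the view without function calls on org_id) might look like the sketch below. Note that it changes semantics for rows where ORG_ID is NULL, which the NVL() version maps to -99, so this would need to be verified against the application's expectations first:

    ```sql
    -- Sketch only: a plain equality on ORG_ID lets the optimizer prune
    -- to a single list partition; wrapping ORG_ID in NVL() defeats pruning
    -- because the partition key is hidden inside a function call.
    CREATE OR REPLACE VIEW ra_customer_trx AS
    SELECT *
    FROM   ra_customer_trx_all
    WHERE  org_id = TO_NUMBER(
             DECODE(SUBSTRB(USERENV('CLIENT_INFO'),1,1),
                    ' ', NULL,
                    SUBSTRB(USERENV('CLIENT_INFO'),1,10)));
    ```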

    user2317378 wrote:
    1. Is there a way to get around this problem without having to modify the oracle supplied multi-org views? Any options I can supply in the partition script?
    You mean to say that the expression used in the view belongs to some Oracle-supplied schema, like APPS? Or is this a view you've created yourself?
    Can you show us the output of EXPLAIN PLAN using DBMS_XPLAN.DISPLAY when querying the view? Use code tags before and after it to get proper formatting in a fixed font.
    Please make sure that the "Predicate Information" section below the plan is also included in your post. If it is missing, your plan table is old and needs to be upgraded using $ORACLE_HOME/rdbms/admin/utlxplan.sql, or dropped if you're on 10g, which provides a system-wide PLAN_TABLE.
    2. In a non-partitioned env, with an index on org_id, how is the optimizer able to go against the index whereas it is not able to in the partitioned environment? Both these envs have the same view definition with the NVL(org...) condition.
    These are two different questions. One is about partition pruning not taking place, the other one about an index not being used.
    Can you show us the output of EXPLAIN PLAN using DBMS_XPLAN.DISPLAY when querying the unpartitioned example? Use code tags before and after it to get proper formatting in a fixed font.
    Please make sure that the "Predicate Information" section below the plan is also included in your post. If it is missing, your plan table is old and needs to be upgraded using $ORACLE_HOME/rdbms/admin/utlxplan.sql, or dropped if you're on 10g, which provides a system-wide PLAN_TABLE.
    It would be interesting to know how Oracle can use the index given the complex expression in the WHERE clause.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Partition an existing table

    How can I partition an existing table? What's the SQL syntax?
    Thanks for help,
    Chen Zhao

    Does anybody know how to partition an existing table?
    YES! That is the simple answer to your question. There are a lot of people that know how to do that.
    Whether partitioning is appropriate and, if so, which method might be 'best' for YOU depends on the particulars of your use case. But as with most problems, you need to make sure you troubleshoot whatever issue you have in the proper order:
    1. Identify a PROBLEM or issue that needs to be resolved - you haven't told us anything, so please post information about this.
    2. Validate that the problem/issue actually exists (it could just be a fluke occurrence).
    3. Identify potential solutions to the problem/issue - that list of solutions may, or may not, include partitioning.
    4. Select a small number (e.g. 1 or 2) of those solutions for further analysis and actual tests.
    5. Select the 'best' (based on your org's criteria) of those tested solutions for implementation.
    You seem to already be on step #5. But in order for us to help you, we have to understand what the results of steps 1 through 4 are.
    Please post the information about your PROBLEM that we need to help you.
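    For reference, the syntax side of the original question can be sketched as follows (table and column names are invented; whether to do this at all is exactly the question the reply above raises):

    ```sql
    -- Sketch, assuming a table T with a date column CREATED.
    -- On Oracle 12.2 and later, an existing heap table can be converted
    -- to a partitioned table online, keeping its indexes usable:
    ALTER TABLE t MODIFY
      PARTITION BY RANGE (created) (
        PARTITION p2019 VALUES LESS THAN (DATE '2020-01-01'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
      ) ONLINE
      UPDATE INDEXES;

    -- On earlier releases the usual routes are the DBMS_REDEFINITION
    -- package, or CREATE TABLE ... PARTITION BY ... AS SELECT followed
    -- by a rename, with indexes and constraints recreated afterwards.
    ```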
