Hash Partitioning and Partition-Wise Joins

Hi,
For the ETL process of a data warehouse project I have to join two tables (~10M rows each) over their primary key.
I was thinking of hash partitioning these two tables on their PK. That way the database (9.2) should be able to do a hash-hash full partition-wise join. For more detail, have a look at:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96520/parpart.htm#98291
What I'm looking for are documents or recommendations concerning the number of hash partitions to create, depending on the number of rows in the tables, the CPUs of the server, or any other parameters.
I would be grateful if someone could give some input.
Mike
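For what it's worth, a common rule of thumb (my assumption, not from the linked doc) is to pick a power-of-two partition count at least as large as the degree of parallelism you plan to run with. A minimal sketch of the setup, with made-up table and column names:

```sql
-- Hedged sketch: equipartition both tables on the join key (here the PK)
-- so the optimizer can perform a full partition-wise join.
-- A power-of-two partition count (here 16) keeps the hash buckets even.
CREATE TABLE dim_t (
  pk_id   NUMBER       NOT NULL,
  payload VARCHAR2(50),
  CONSTRAINT dim_t_pk PRIMARY KEY (pk_id)
)
PARTITION BY HASH (pk_id) PARTITIONS 16;

CREATE TABLE fact_t (
  pk_id   NUMBER       NOT NULL,
  payload VARCHAR2(50),
  CONSTRAINT fact_t_pk PRIMARY KEY (pk_id)
)
PARTITION BY HASH (pk_id) PARTITIONS 16;

-- Joining on the common partitioning key lets each partition of dim_t
-- be joined to the single corresponding partition of fact_t:
SELECT d.pk_id, d.payload, f.payload
FROM   dim_t d, fact_t f
WHERE  f.pk_id = d.pk_id;
```

In the execution plan, a PARTITION HASH ALL (or PX PARTITION HASH ALL) step above the HASH JOIN indicates the join is being done partition-wise.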

Here you have all the papers:
Oracle9i Database List of Books
(Release 2 (9.2))
http://otn.oracle.com/pls/db92/db92.docindex?remark=homepage
Joel Pérez

Similar Messages

  • Partition Pruning vs Partition-Wise Join

    Hi,
    I am not sure if this is the right place for this question, but here it goes.
    I am in a situation where, at the beginning, I have to join two big tables without any WHERE clauses. It is pretty much a Cartesian product, except that one criterion must match (Internal Code), but I have to join all rows. (Seems like a good place for a partition-wise join.)
    Later I only need to update certain rows based on a key value (Customer ID, Region ID). (A good candidate for partition pruning.)
    What would be the best option? Is there a way to use both?
    Assume the following:
    Table 1 has the structure of
    Customer ID
    Internal Code
    Other Data
    There are about 1000 Customer IDs. Each Customer ID has 1000 Internal Codes.
    Table 2 has the structure of
    Region ID
    Internal Code
    Other Data
    There are about 5000 Region IDs. Each Region ID has 1000 Internal Codes (the same as Table 1).
    I am currently thinking of doing a HASH PARTITION (8 partitions) on Customer ID for Table 1 and HASH PARTITION (8 partitions) on Region ID for Table 2.
    The initial insert will take a long time, but when I go to update the joined data based on a specific Customer ID or Region ID, at least from one table only one partition will be used.
    I would sincerely appreciate some advice from the gurus.
    Thanks...

    Hi,
    I still don't understand what it is that you are trying to do.
    Would it be possible for you to create a silly example with just a few rows
    to show us what it is that you are trying to accomplish?
    Then we can help you solve whatever problem it is that you are having.
    create table t1(
       customer_id   number       not null
      ,internal_code varchar2(20) not null
      ,<other_columns>
      ,constraint t1_pk primary key(customer_id, internal_code)
    );
    create table t2(
       region_id     number       not null
      ,internal_code varchar2(20) not null
      ,<other_columns>
      ,constraint t2_pk primary key(region_id, internal_code)
    );
    insert into t1(customer_id, internal_code, ...) values(...);
    insert into t1(customer_id, internal_code, ...) values(...);
    insert into t2(region_id, internal_code, ...) values(...);
    insert into t2(region_id, internal_code, ...) values(...);
    select <the rating calculation>
       from t1 join t2 using(internal_code);

  • Partition wise join

    Hello,
    I'm playing with partitioning. I have read about (full) partition-wise joins on hash-hash partitioned tables. The description and an example are in the Oracle Database Data Warehousing Guide (b14223.pdf), chapter 5.
    <cite>
    A full partition-wise join divides a large join into smaller joins between a pair of
    partitions from the two joined tables. To use this feature, you must equipartition both
    tables on their join keys. For example, consider a large join between a sales table and a
    customer table on the column customerid. The query "find the records of all
    customers who bought more than 100 articles in Quarter 3 of 1999" is a typical example
    of a SQL statement performing such a join. The following is an example of this:
    SELECT c.cust_last_name, COUNT(*)
    FROM sales s, customers c
    WHERE s.cust_id = c.cust_id AND
    s.time_id BETWEEN TO_DATE('01-JUL-1999', 'DD-MON-YYYY') AND
    (TO_DATE('01-OCT-1999', 'DD-MON-YYYY'))
    GROUP BY c.cust_last_name HAVING COUNT(*) > 100;
    </cite>
    I have created a sales table with 1M rows and a customers table with 1k rows (meaning each customer has roughly one thousand invoices). Both tables are hash partitioned with 64 partitions. They are analyzed. But there is no improvement compared to the same tables without partitioning and with an index on the customerid column.
    I would just like to see that partition-wise joins are working, i.e. that the join is either faster or consumes fewer resources.
    Thanks
    sasa
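    For what it's worth, the benefit usually only shows up under parallel execution; a serial scan of 64 partitions costs about the same as one unpartitioned scan. A hedged sketch using sasa's table names (the column names are my guesses):

    ```sql
    -- Hedged sketch: run the join in parallel so PX slaves can work on
    -- matching partition pairs; look for PX PARTITION HASH ALL above
    -- the HASH JOIN in the resulting plan.
    SELECT /*+ PARALLEL(i, 8) PARALLEL(c, 8) */
           c.customerid, COUNT(*)
    FROM   part_invoices i, part_customers c
    WHERE  i.customerid = c.customerid
    GROUP  BY c.customerid;
    ```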

    I don't think that 64 partitions are too many ... the problem is that there isn't any gain from partitioning.
    Let's see the explain plan:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 50 | 2500 | 11436 (5)| 00:02:18 | | |
    |* 1 | FILTER | | | | | | | |
    | 2 | HASH GROUP BY | | 50 | 2500 | 11436 (5)| 00:02:18 | | |
    |* 3 | HASH JOIN | | 903K| 43M| 11320 (4)| 00:02:16 | | |
    | 4 | PARTITION HASH ALL| | 1000 | 34000 | 88 (0)| 00:00:02 | 1 | 64 |
    | 5 | TABLE ACCESS FULL| PART_CUSTOMERS | 1000 | 34000 | 88 (0)| 00:00:02 | 1 | 64 |
    | 6 | PARTITION HASH ALL| | 903K| 13M| 11219 (4)| 00:02:15 | 1 | 64 |
    |* 7 | TABLE ACCESS FULL| PART_INVOICES | 903K| 13M| 11219 (4)| 00:02:15 | 1 | 64 |
    And compare this explain plan with the following, which was created on non-partitioned tables (the cost of the explain plan for the non-partitioned tables is 180, compared to the cost of 11436 for the partitioned tables):
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 46 | 180 (2)| 00:00:03 |
    |* 1 | FILTER | | | | | |
    | 2 | HASH GROUP BY | | 1 | 46 | 180 (2)| 00:00:03 |
    |* 3 | TABLE ACCESS BY INDEX ROWID | PART_INVOICES2 | 903 | 10836 | 177 (1)| 00:00:03 |
    | 4 | NESTED LOOPS | | 903 | 41538 | 179 (1)| 00:00:03 |
    | 5 | TABLE ACCESS BY INDEX ROWID| PART_CUSTOMERS2 | 1 | 34 | 2 (0)| 00:00:01 |
    |* 6 | INDEX RANGE SCAN | PART_CUSTOMERS2_IDX01 | 1 | | 1 (0)| 00:00:01 |
    |* 7 | INDEX RANGE SCAN | PART_INVOCIES2_IDX02 | 9991 | | 22 (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------------------------

  • Make parallel query (e.g. partition-wise join) evenly distributed cross RAC

    How To Make Parallel Query's Slaves (e.g. full partition-wise join) Evenly Distributed Across RAC Nodes?
    Environment
    * 4-node Oracle 10gR2 (10.2.0.3)
    * all instances are included in the same distribution group
    * tables are hash-partitioned by the same join key
    * 8-CPU per node, 48GB RAM per node
    Query
    Join 3 big tables (each has DOP=4) based on the hash partition key column.
    Problem
    The QC is always on one node, and all the slaves are on another node. The slave processes are supposed to be distributed across multiple nodes/instances, but even when the query spawns 16 or more slaves, they all run on a single node. And the QC process never runs on the same node!
    The other 2 nodes are not busy during this time. Is any configuration wrong or missing here? Why can't RAC distribute the slaves better, or at least run some slaves together with the QC?
    Please advise.
    Thank you very much!
    Eric

    Hi,
    If your PARALLEL_INSTANCE_GROUP and load balancing are set properly, it means Oracle is assuming that intra-node parallelism is more beneficial than inter-node parallelism, i.e. parallelism across multiple nodes. This is very often true in scenarios where partition-wise joins are involved: intra-node parallelism avoids unnecessary interconnect traffic.
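    If you do want slaves confined to (or spread across) particular instances, the 10g instance-group parameters can steer them; a hedged sketch (the group name 'etl' is made up):

    ```sql
    -- Hedged sketch: INSTANCE_GROUPS (a static parameter, set per instance
    -- in the init/spfile, e.g. instance_groups = 'etl') names the groups an
    -- instance belongs to; PARALLEL_INSTANCE_GROUP restricts a session's
    -- PX slaves to instances in the named group.
    ALTER SESSION SET parallel_instance_group = 'etl';
    -- Parallel queries issued in this session may now spawn slaves only
    -- on instances that are members of the 'etl' group.
    ```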

  • Hint to disable partition wise join

    Is there a way to disable a (serial) partition-wise join in 10gR2, i.e. via a hint? The reason I want to do this is to use intra-partition parallelism for a very big partition. Re-partitioning or subpartitioning is not an option for now. The SQL is scanning only one partition, so a P-W join is not useful and it limits the intra-partition parallelism.
    TIA for your answers.

    user4529833 wrote:
    Above is the plan. Currently no parallelism is being used, but a P-W join is used, as you can see. Table EC is huge (cardinality is screwed up here because of the IN clause, which has just one valid partition key; 3rd-party crappy app, so I can't change it). I'd like to enable parallelism here using a parallel(EC, 6) hint, but it just applies to the hash join and not to table EC, because of the P-W join, I believe. What I want is to scan the EC table via PQ slaves, i.e. a PX BLOCK ITERATOR step before the TABLE ACCESS step. How do I get one? Will PQ_DISTRIBUTE help me there? Or is there any way to speed up the scan of EC?
    The pq_distribute() hint should do the job. Here's an example:
    select
         /*+
              parallel(pt_range_1 2)
              parallel(pt_range_2 2)
              ordered
    --        pq_distribute(pt_range_2 hash hash)
    --        pq_distribute(pt_range_2 broadcast none)
         */
         pt_range_2.grp,
         count(pt_range_1.small_vc)
    from
         pt_range_1,
         pt_range_2
    where
         pt_range_1.id in (10,20,40)
    and  pt_range_2.id = pt_range_1.id
    group by
         pt_range_2.grp
    ;
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT               |            |     3 |    42 |     6  (34)| 00:00:01 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                |            |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)          | :TQ10001   |     3 |    42 |     6  (34)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
    |   3 |    HASH GROUP BY               |            |     3 |    42 |     6  (34)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   4 |     PX RECEIVE                 |            |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   5 |      PX SEND HASH              | :TQ10000   |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
    |   6 |       PX PARTITION RANGE INLIST|            |     3 |    42 |     5  (20)| 00:00:01 |KEY(I) |KEY(I) |  Q1,00 | PCWC |            |
    |*  7 |        HASH JOIN               |            |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
    |*  8 |         TABLE ACCESS FULL      | PT_RANGE_1 |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,00 | PCWP |            |
    |*  9 |         TABLE ACCESS FULL      | PT_RANGE_2 |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,00 | PCWP |            |
    ------------------------------------------------------------------------------------------------------------------------------------------
    Unhinted, I have a partition-wise parallel join.
    The next plan uses hash distribution, which may be better for you if the EC table is large:
    | Id  | Operation                  | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT           |            |     3 |    42 |     6  (34)| 00:00:01 |       |       |        |      |            |
    |   1 |  PX COORDINATOR            |            |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)      | :TQ10003   |     3 |    42 |     6  (34)| 00:00:01 |       |       |  Q1,03 | P->S | QC (RAND)  |
    |   3 |    HASH GROUP BY           |            |     3 |    42 |     6  (34)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |   4 |     PX RECEIVE             |            |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |   5 |      PX SEND HASH          | :TQ10002   |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,02 | P->P | HASH       |
    |*  6 |       HASH JOIN BUFFERED   |            |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |   7 |        PX RECEIVE          |            |     3 |    21 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |   8 |         PX SEND HASH       | :TQ10000   |     3 |    21 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
    |   9 |          PX BLOCK ITERATOR |            |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,00 | PCWC |            |
    |* 10 |           TABLE ACCESS FULL| PT_RANGE_1 |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,00 | PCWP |            |
    |  11 |        PX RECEIVE          |            |     3 |    21 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |  12 |         PX SEND HASH       | :TQ10001   |     3 |    21 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->P | HASH       |
    |  13 |          PX BLOCK ITERATOR |            |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,01 | PCWC |            |
    |* 14 |           TABLE ACCESS FULL| PT_RANGE_2 |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,01 | PCWP |            |
    --------------------------------------------------------------------------------------------------------------------------------------
    Then the broadcast version, if the EC data is relatively small (so that the whole set can fit in the memory of each slave):
    | Id  | Operation                  | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT           |            |     3 |    42 |     6  (34)| 00:00:01 |       |       |        |      |            |
    |   1 |  PX COORDINATOR            |            |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)      | :TQ10002   |     3 |    42 |     6  (34)| 00:00:01 |       |       |  Q1,02 | P->S | QC (RAND)  |
    |   3 |    HASH GROUP BY           |            |     3 |    42 |     6  (34)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |   4 |     PX RECEIVE             |            |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |   5 |      PX SEND HASH          | :TQ10001   |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,01 | P->P | HASH       |
    |*  6 |       HASH JOIN            |            |     3 |    42 |     5  (20)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   7 |        PX RECEIVE          |            |     3 |    21 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   8 |         PX SEND BROADCAST  | :TQ10000   |     3 |    21 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | BROADCAST  |
    |   9 |          PX BLOCK ITERATOR |            |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,00 | PCWC |            |
    |* 10 |           TABLE ACCESS FULL| PT_RANGE_1 |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,00 | PCWP |            |
    |  11 |        PX BLOCK ITERATOR   |            |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,01 | PCWC |            |
    |* 12 |         TABLE ACCESS FULL  | PT_RANGE_2 |     3 |    21 |     2   (0)| 00:00:01 |KEY(I) |KEY(I) |  Q1,01 | PCWP |            |
    --------------------------------------------------------------------------------------------------------------------------------------
    The "hash join buffered" in the hash/hash distribution might hammer your temporary tablespace though, thanks to [an oddity I discovered|http://jonathanlewis.wordpress.com/2008/11/05/px-buffer/] in parallel hash joins a little while ago.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" Carl Sagan

  • Expected for a partition-wise join?

    I have two tables that are partitioned by a hash of the same VARCHAR2(16) strings. When I do a query similar to
    select * from table1 a join table2 b
    on b.partition_Column = a.partition_Column
    I get the following as the "Operation" portion of an explain from Oracle Developer running Oracle 11gR1:
    PARTITION HASH(ALL)
    HASH JOIN
    TABLE ACCESS(FULL) schema.Table1
    TABLE ACCESS(FULL) schema.Table2
    Is this indicative of a partition-wise hashed join?

    pstart/pstop does give partition information, but the key to whether this is a "partition-wise" join is that the partition operation is above the join operation.
    Using David's tables above, example of serial partition-wise join:
    | Id  | Operation           | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |        |  2500M|  4670G|       |   111K (61)| 00:09:17 |       |       |
    |   1 |  PARTITION HASH ALL |        |  2500M|  4670G|       |   111K (61)| 00:09:17 |     1 |     8 |
    |*  2 |   HASH JOIN         |        |  2500M|  4670G|    12M|   111K (61)| 00:09:17 |       |       |
    |   3 |    TABLE ACCESS FULL| TABLE1 |   100K|    95M|       |  5891   (1)| 00:00:30 |     1 |     8 |
    |   4 |    TABLE ACCESS FULL| TABLE2 |   200K|   191M|       | 11808   (1)| 00:01:00 |     1 |     8 |
    ------------------------------------------------------------------------------------------------------
    Example of serial non-PWJ:
    | Id  | Operation           | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |        |  2500M|  4670G|       |   111K (61)| 00:09:17 |       |       |
    |*  1 |  HASH JOIN          |        |  2500M|  4670G|    96M|   111K (61)| 00:09:17 |       |       |
    |   2 |   PARTITION HASH ALL|        |   100K|    95M|       |  5891   (1)| 00:00:30 |     1 |     8 |
    |   3 |    TABLE ACCESS FULL| TABLE1 |   100K|    95M|       |  5891   (1)| 00:00:30 |     1 |     8 |
    |   4 |   PARTITION HASH ALL|        |   200K|   191M|       | 11808   (1)| 00:01:00 |     1 |     8 |
    |   5 |    TABLE ACCESS FULL| TABLE2 |   200K|   191M|       | 11808   (1)| 00:01:00 |     1 |     8 |
    ------------------------------------------------------------------------------------------------------
    Example of parallel PWJ:
    | Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT        |          |  2500M|  4670G| 23536  (80)| 00:01:58 |       |       |        |      |            |
    |   1 |  PX COORDINATOR         |          |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)   | :TQ10000 |  2500M|  4670G| 23536  (80)| 00:01:58 |       |       |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    PX PARTITION HASH ALL|          |  2500M|  4670G| 23536  (80)| 00:01:58 |     1 |     8 |  Q1,00 | PCWC |            |
    |*  4 |     HASH JOIN           |          |  2500M|  4670G| 23536  (80)| 00:01:58 |       |       |  Q1,00 | PCWP |            |
    |   5 |      TABLE ACCESS FULL  | TABLE1   |   100K|    95M|  1628   (1)| 00:00:09 |     1 |     8 |  Q1,00 | PCWP |            |
    |   6 |      TABLE ACCESS FULL  | TABLE2   |   200K|   191M|  3263   (1)| 00:00:17 |     1 |     8 |  Q1,00 | PCWP |            |
    ---------------------------------------------------------------------------------------------------------------------------------
    Example of parallel non-PWJ:
    | Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT           |          |  2500M|  4670G| 23536  (80)| 00:01:58 |       |       |        |      |            |
    |   1 |  PX COORDINATOR            |          |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)      | :TQ10001 |  2500M|  4670G| 23536  (80)| 00:01:58 |       |       |  Q1,01 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN               |          |  2500M|  4670G| 23536  (80)| 00:01:58 |       |       |  Q1,01 | PCWP |            |
    |   4 |     PART JOIN FILTER CREATE| :BF0000  |   100K|    95M|  1628   (1)| 00:00:09 |       |       |  Q1,01 | PCWP |            |
    |   5 |      PX RECEIVE            |          |   100K|    95M|  1628   (1)| 00:00:09 |       |       |  Q1,01 | PCWP |            |
    |   6 |       PX SEND BROADCAST    | :TQ10000 |   100K|    95M|  1628   (1)| 00:00:09 |       |       |  Q1,00 | P->P | BROADCAST  |
    |   7 |        PX BLOCK ITERATOR   |          |   100K|    95M|  1628   (1)| 00:00:09 |     1 |     8 |  Q1,00 | PCWC |            |
    |   8 |         TABLE ACCESS FULL  | TABLE1   |   100K|    95M|  1628   (1)| 00:00:09 |     1 |     8 |  Q1,00 | PCWP |            |
    |   9 |     PX BLOCK ITERATOR      |          |   200K|   191M|  3263   (1)| 00:00:17 |:BF0000|:BF0000|  Q1,01 | PCWC |            |
    |  10 |      TABLE ACCESS FULL     | TABLE2   |   200K|   191M|  3263   (1)| 00:00:17 |:BF0000|:BF0000|  Q1,01 | PCWP |            |
    ------------------------------------------------------------------------------------------------------------------------------------
    Edited by: Dom Brooks on Jul 8, 2011 11:32 AM
    Added serial vs parallel

  • How data is distributed in HASH partitions

    Guys,
    I want to partition my one big table into 5 different partitions based on the HASH value of the LOCATION field of the table.
    My question is: will the data be distributed equally across the partitions, will it all end up in one partition, or do I need 5 different HASH values for the location key to end up in five partitions?

    Hash partitioning enables easy partitioning of data that does not lend itself to range or list partitioning. It does this with a simple syntax and is easy to implement. It is a better choice than range partitioning when:
    1) You do not know beforehand how much data maps into a given range
    2) The sizes of range partitions would differ quite substantially or would be difficult to balance manually
    3) Range partitioning would cause the data to be undesirably clustered
    4) Performance features such as parallel DML, partition pruning, and partition-wise joins are important
    The concepts of splitting, dropping or merging partitions do not apply to hash partitions. Instead, hash partitions can be added and coalesced.
    I think that, in your case, list partitioning may be the better choice.
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm#i462869
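    To get a feel for the distribution before committing to a partition count, you can bucket the data yourself; a hedged sketch (the table name is made up, and ORA_HASH only approximates, and is not guaranteed to match, Oracle's internal hash-partition mapping). Note also that hash partitioning distributes most evenly when the number of partitions is a power of two, so 5 partitions may come out lopsided:

    ```sql
    -- Hedged sketch: preview how evenly LOCATION values would spread
    -- over 5 hash buckets (0..4).
    SELECT ora_hash(location, 4) AS bucket,
           COUNT(*)              AS row_cnt
    FROM   big_table
    GROUP  BY ora_hash(location, 4)
    ORDER  BY bucket;
    ```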

  • Modify HUGE HASH partition table to RANGE partition and HASH subpartition

    I have a table with 130,000,000 rows hash partitioned as below
    ----RANGE PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)(
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE)
    );
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easy if I create a range partition on the YRMO_NBR field and then make the current hash partition a sub-partition.
    How do I change the current partition of the table from HASH partition to RANGE partition and a sub-partition (HASH) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSITE PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2
    );
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance.

    Sorry for the confusion in the first part, where I had given a RANGE partition instead of a HASH partition. Please read as follows:
    I have a table with 130,000,000 rows hash partitioned as below
    ----HASH PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY HASH (C_NBR)
    PARTITIONS 2
    STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easy if I create a range partition on the YRMO_NBR field and then make the current hash partition a sub-partition.
    How do I change the current partition of the table from hash partition to range partition and a sub-partition (hash) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSITE PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2
    );
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance.
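    One way to restructure without losing data is online redefinition via DBMS_REDEFINITION (available in 9i/10g); a hedged sketch, with the schema name 'SCOTT' and interim table name as placeholders:

    ```sql
    -- Hedged sketch of online redefinition to the composite layout.
    -- 1. Check the table qualifies (by rowid, since the table as posted
    --    has no primary key).
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TEST_PART',
                                        DBMS_REDEFINITION.CONS_USE_ROWID);
    END;
    /
    -- 2. Create an interim table with the target range/hash layout.
    CREATE TABLE TEST_PART_INT (
      C_NBR    CHAR(12),
      YRMO_NBR NUMBER(6),
      LINE_ID  CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
      PARTITION TEST_PART_200009 VALUES LESS THAN (200009) SUBPARTITIONS 2,
      PARTITION TEST_PART_MAX    VALUES LESS THAN (MAXVALUE) SUBPARTITIONS 2);
    -- 3. Start the redefinition, rebuild the index on the interim table,
    --    then swap the tables over.
    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INT',
                                          options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
    END;
    /
    CREATE INDEX TEST_PART_INT_IX_001 ON TEST_PART_INT(C_NBR, LINE_ID);
    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INT');
    END;
    /
    ```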

  • Is list partition and hash partition one and the same

    I am creating a table with partitions with the following commands:
    CREATE TABLE ABD (ENO NUMBER(5),CID NUMBER(3),ENAME VARCHAR2(10))
    PARTITION BY LIST (ENO)
    (PARTITION P1 VALUES (123),
    PARTITION P2 VALUES (143),
    PARTITION CLIENT_ID VALUES (746))
    ALTER TABLE ABD
    ADD PARTITION CLIENT_756 VALUES (756)
    but when I describe the table script, it shows this:
    CREATE TABLE ABD (
    ENO NUMBER (5),
    ENAME VARCHAR2 (10),
    CID NUMBER (3) )
    PARTITION BY HASH (ENO)
    PARTITIONS 4
    STORE IN ( USERS,USERS,USERS,
    USERS);
    Actually, I am creating a list partition, but it is showing a hash partition. Why is that?

    > when i describe the table script it is showing like this
    How do you describe it, and which version are you on?
    TEST@db102 SQL> CREATE TABLE ABD (ENO NUMBER(5),CID NUMBER(3),ENAME VARCHAR2(10))
      2  PARTITION BY LIST (ENO)
      3  (PARTITION P1 VALUES (123),
      4  PARTITION P2 VALUES (143),
      5* PARTITION CLIENT_ID VALUES (746))
    TEST@db102 SQL> /
    Table created.
    TEST@db102 SQL> ALTER TABLE ABD
      2* ADD PARTITION CLIENT_756 VALUES (756)
    TEST@db102 SQL> /
    Table altered.
    TEST@db102 SQL> select dbms_metadata.get_ddl('TABLE','ABD','TEST') from dual;
    DBMS_METADATA.GET_DDL('TABLE','ABD','TEST')
      CREATE TABLE "TEST"."ABD"
       (    "ENO" NUMBER(5,0),
            "CID" NUMBER(3,0),
            "ENAME" VARCHAR2(10)
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(
      BUFFER_POOL DEFAULT)
      TABLESPACE "USERS"
      PARTITION BY LIST ("ENO")
    (PARTITION "P1"  VALUES (123)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS ,
    PARTITION "P2"  VALUES (143)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS ,
    PARTITION "CLIENT_ID"  VALUES (746)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS ,
    PARTITION "CLIENT_756"  VALUES (756)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS )
    TEST@db102 SQL>                                                                               
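    A quicker way to check the partitioning scheme than pulling the full DDL is the data dictionary; a hedged sketch (run as the owning user):

    ```sql
    -- USER_PART_TABLES reports the partitioning method directly,
    -- e.g. LIST vs HASH, plus the partition count.
    SELECT table_name, partitioning_type, partition_count
    FROM   user_part_tables
    WHERE  table_name = 'ABD';
    ```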

  • Cost to change hash partition key column in a history table

    Hi All,
    I have the following scenario.
    We have a history table in production which has 16 hash partitions on the basis of key_column.
    But the nature of the data in this history table is that it has 878 distinct values of key_column and about 1000 million rows, and all partitions are in the same tablespace.
    Now we have a Pro*C module which purges data from this history table in the following way:
    > DELETE FROM history_tab
    > WHERE p_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    > AND t_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    > AND ROWNUM <= 210;
    Now, data is deleted using these two date-column conditions (p_date and t_date are two of the columns in the history table), but the partition key_column is different.
    So, as per the above statement, this history table contains 6 months of data.
    The DBA is asking to change this query and partition date-wise. Now, will it be proper to change the partition key_column (the existing hash partition key_column has 810 distinct values), and what do we need to consider to calculate the cost of this hash-partition key_column change (if it is appropriate to change the partition key_column)? I hope I explained my problem clearly; waiting for your suggestions.
    Thanks in advance.

    Hi Sir
    Many thanks for the reply.
    For first point -
    we are in plan to move the database to 10g after a lot of hastle between client.For second point -
    If we do partition by date or week we will have 30 or 7 partitions .As suggested by you as we have 16 partitions in the table best approach would be to have >partition by week then we will have 7 partitions and then each query will heat 7 partitions .For third point -
    Our main aim to reduce the timings of a job(a Pro*C program) which contains the following delete query to delete data from a history table .So accroding to the >query it is deleting data every day for 7 months and while deleting it it queries this hug etable by date.So in this case hash partition or range partiton or >hash/range partition which will be more suitable.
    DELETE FROM hsitory_tab
    WHERE p_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    AND t_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    AND ROWNUM <= 210;
    I have read that hash partitioning is used so that data will be evenly distributed across all partitions (though it depends on the nature of the data). In my case I want some suggestions from you on the best approach.
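    Since the purge criterion is a date, one common alternative is to range-partition the history table by p_date and purge by dropping whole partitions instead of deleting rows. This is only a sketch, not from the thread: the table and column names are assumed from the posts, and interval partitioning plus the `DROP PARTITION FOR` syntax need 11g, so on 9i/10g the weekly partitions would have to be pre-created and dropped by name.

    ```sql
    -- Hypothetical sketch: weekly range partitions on p_date so the purge
    -- becomes a near-instant, metadata-only partition drop.
    CREATE TABLE history_tab_range (
      key_column NUMBER,
      p_date     DATE,
      t_date     DATE
      -- remaining columns as in the existing history table
    )
    PARTITION BY RANGE (p_date)
    INTERVAL (NUMTODSINTERVAL(7, 'DAY'))            -- 11g interval partitioning
    (PARTITION p_initial VALUES LESS THAN (DATE '2010-01-01'));

    -- Purge: drop the partition containing a date older than 210 days,
    -- instead of a row-by-row DELETE.
    ALTER TABLE history_tab_range
      DROP PARTITION FOR (DATE '2011-06-01')  -- any date inside the partition to purge
      UPDATE GLOBAL INDEXES;
    ```

    Dropping a partition is a dictionary operation, so it avoids the undo/redo cost of deleting hundreds of millions of rows, at the price of losing the even spread that the hash key currently provides.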

  • Partition pruning not working for partitioned table joins

    Hi,
    We are joining 4 partitioned tables on the partition column and other key columns, and we are filtering the driving table on the partition key. But the explain plan shows that all tables except the driving table are not partition pruning and are scanning all partitions. Is there a limitation that the filter condition cannot be dynamic?
    Thanks a lot in advance.
    Here are the details...
    SELECT a.pay_prd_id,
                  a.a_id,
                  a.a_evnt_no
      FROM b,
                c,
                a,
                d
    WHERE  (    a.pay_prd_id = b.pay_prd_id ---partition range all
            AND a.a_evnt_no  = b.b_evnt_no
            AND a.a_id       = b.b_id)
       AND (    a.pay_prd_id = c.pay_prd_id ---partition range all
            AND a.a_evnt_no  = c.c_evnt_no
            AND a.a_id       = c.c_id)
       AND (    a.pay_prd_id = d.pay_prd_id ---partition range all
            AND a.a_evnt_no  = d.d_evnt_no
            AND a.a_id       = d.d_id)
       AND (a.pay_prd_id =  ---partition range single
               CASE '201202'
                  WHEN 'YYYYMM'
                     THEN (SELECT min(pay_prd_id)
                                      FROM pay_prd
                                     WHERE pay_prd_stat_cd = 2)
                  ELSE TO_NUMBER ('201202', '999999')
               END);
    DDLs.
    create table pay_prd (
    pay_prd_id number(6),
    pay_prd_stat_cd integer,
    pay_prd_stat_desc varchar2(20),
    a_last_upd_dt DATE
    );
    insert into pay_prd
    select 201202,2,'OPEN',sysdate from dual
    union all
    select 201201,1,'CLOSE',sysdate from dual
    union all
    select 201112,1,'CLOSE',sysdate from dual
    union all
    select 201111,1,'CLOSE',sysdate from dual
    union all
    select 201110,1,'CLOSE',sysdate from dual
    union all
    select 201109,1,'CLOSE',sysdate from dual
    CREATE TABLE A
    (PAY_PRD_ID    NUMBER(6) NOT NULL,
    A_ID        NUMBER(9) NOT NULL,
    A_EVNT_NO    NUMBER(3) NOT NULL,
    A_DAYS        NUMBER(3),
    A_LAST_UPD_DT    DATE)
    PARTITION BY RANGE (PAY_PRD_ID)
    INTERVAL (1)
    (PARTITION A_0001 VALUES LESS THAN (201504))
    ENABLE ROW MOVEMENT;
    ALTER TABLE A ADD CONSTRAINT A_PK PRIMARY KEY (PAY_PRD_ID,A_ID,A_EVNT_NO) USING INDEX LOCAL;
    insert into a
    select 201202,1111,1,65,sysdate from dual
    union all
    select 201202,1111,2,75,sysdate from dual
    union all
    select 201202,1111,3,85,sysdate from dual
    union all
    select 201202,1111,4,95,sysdate from dual
    CREATE TABLE B
    (PAY_PRD_ID    NUMBER(6) NOT NULL,
    B_ID        NUMBER(9) NOT NULL,
    B_EVNT_NO    NUMBER(3) NOT NULL,
    B_DAYS        NUMBER(3),
    B_LAST_UPD_DT    DATE)
    PARTITION BY RANGE (PAY_PRD_ID)
    INTERVAL (1)
    (PARTITION B_0001 VALUES LESS THAN (201504))
    ENABLE ROW MOVEMENT;
    ALTER TABLE B ADD CONSTRAINT B_PK PRIMARY KEY (PAY_PRD_ID,B_ID,B_EVNT_NO) USING INDEX LOCAL;
    insert into b
    select 201202,1111,1,15,sysdate from dual
    union all
    select 201202,1111,2,25,sysdate from dual
    union all
    select 201202,1111,3,35,sysdate from dual
    union all
    select 201202,1111,4,45,sysdate from dual
    CREATE TABLE C
    (PAY_PRD_ID    NUMBER(6) NOT NULL,
    C_ID        NUMBER(9) NOT NULL,
    C_EVNT_NO    NUMBER(3) NOT NULL,
    C_DAYS        NUMBER(3),
    C_LAST_UPD_DT    DATE)
    PARTITION BY RANGE (PAY_PRD_ID)
    INTERVAL (1)
    (PARTITION C_0001 VALUES LESS THAN (201504))
    ENABLE ROW MOVEMENT;
    ALTER TABLE C ADD CONSTRAINT C_PK PRIMARY KEY (PAY_PRD_ID,C_ID,C_EVNT_NO) USING INDEX LOCAL;
    insert into c
    select 201202,1111,1,33,sysdate from dual
    union all
    select 201202,1111,2,44,sysdate from dual
    union all
    select 201202,1111,3,55,sysdate from dual
    union all
    select 201202,1111,4,66,sysdate from dual
    CREATE TABLE D
    (PAY_PRD_ID    NUMBER(6) NOT NULL,
    D_ID        NUMBER(9) NOT NULL,
    D_EVNT_NO    NUMBER(3) NOT NULL,
    D_DAYS        NUMBER(3),
    D_LAST_UPD_DT    DATE)
    PARTITION BY RANGE (PAY_PRD_ID)
    INTERVAL (1)
    (PARTITION D_0001 VALUES LESS THAN (201504))
    ENABLE ROW MOVEMENT;
    ALTER TABLE D ADD CONSTRAINT D_PK PRIMARY KEY (PAY_PRD_ID,D_ID,D_EVNT_NO) USING INDEX LOCAL;
    insert into d
    select 201202,1111,1,33,sysdate from dual
    union all
    select 201202,1111,2,44,sysdate from dual
    union all
    select 201202,1111,3,55,sysdate from dual
    union all
    select 201202,1111,4,66,sysdate from dual

    The query below is generated from Business Objects and submitted to the database (the CASE statement is generated by BO). Can't we use CASE/subquery/DECODE etc. for the partitioned column? We assume the CASE is what prevents dynamic partition elimination on the other joined partitioned tables (TAB_B_RPT, TAB_C_RPT).
    SELECT TAB_D_RPT.acvy_amt,
           TAB_A_RPT.itnt_typ_desc,
           TAB_A_RPT.ls_typ_desc,
           TAB_A_RPT.evnt_no,
           TAB_C_RPT.pay_prd_id,
           TAB_B_RPT.id,
           TAB_A_RPT.to_mdfy,
           TAB_A_RPT.stat_desc
      FROM TAB_D_RPT,
           TAB_C_RPT fee_rpt,
           TAB_C_RPT,
           TAB_A_RPT,
           TAB_B_RPT
    WHERE (TAB_B_RPT.id = TAB_A_RPT.id)
       AND (    TAB_A_RPT.pay_prd_id = TAB_D_RPT.pay_prd_id -- expecting Partition Range Single, but doing Partition Range ALL
            AND TAB_A_RPT.evnt_no    = TAB_D_RPT.evnt_no
            AND TAB_A_RPT.id         = TAB_D_RPT.id)
       AND (    TAB_A_RPT.pay_prd_id = TAB_C_RPT.pay_prd_id -- expecting Partition Range Single, but doing Partition Range ALL
            AND TAB_A_RPT.evnt_no    = TAB_C_RPT.evnt_no
            AND TAB_A_RPT.id         = TAB_C_RPT.id)
       AND (    TAB_A_RPT.pay_prd_id = fee_rpt.pay_prd_id -- expecting Partition Range Single
            AND TAB_A_RPT.evnt_no    = fee_rpt.evnt_no
            AND TAB_A_RPT.id         = fee_rpt.id)
       AND (TAB_A_RPT.rwnd_ind = 'N')
       AND (TAB_A_RPT.pay_prd_id =
               CASE '201202'
                  WHEN 'YYYYMM'
                     THEN (SELECT DISTINCT pay_prd.pay_prd_id
                                      FROM pay_prd
                                     WHERE pay_prd.stat_cd = 2)
                  ELSE TO_NUMBER ('201202', '999999')
               END);
    And its explain plan is...
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 79 K Bytes: 641 M Cardinality: 3 M
    18 HASH JOIN Cost: 79 K Bytes: 641 M Cardinality: 3 M
    3 PART JOIN FILTER CREATE SYS.:BF0000 Cost: 7 K Bytes: 72 M Cardinality: 3 M
    2 PARTITION RANGE ALL Cost: 7 K Bytes: 72 M Cardinality: 3 M Partition #: 3 Partitions accessed #1 - #1048575
    1 TABLE ACCESS FULL TABLE TAB_D_RPT Cost: 7 K Bytes: 72 M Cardinality: 3 M Partition #: 3 Partitions accessed #1 - #1048575
    17 HASH JOIN Cost: 57 K Bytes: 182 M Cardinality: 874 K
    14 PART JOIN FILTER CREATE SYS.:BF0001 Cost: 38 K Bytes: 87 M Cardinality: 914 K
    13 HASH JOIN Cost: 38 K Bytes: 87 M Cardinality: 914 K
    6 PART JOIN FILTER CREATE SYS.:BF0002 Cost: 8 K Bytes: 17 M Cardinality: 939 K
    5 PARTITION RANGE ALL Cost: 8 K Bytes: 17 M Cardinality: 939 K Partition #: 9 Partitions accessed #1 - #1048575
    4 TABLE ACCESS FULL TABLE TAB_C_RPT Cost: 8 K Bytes: 17 M Cardinality: 939 K Partition #: 9 Partitions accessed #1 - #1048575
    12 HASH JOIN Cost: 24 K Bytes: 74 M Cardinality: 957 K
    7 INDEX FAST FULL SCAN INDEX (UNIQUE) TAB_B_RPT_PK Cost: 675 Bytes: 10 M Cardinality: 941 K
    11 PARTITION RANGE SINGLE Cost: 18 K Bytes: 65 M Cardinality: 970 K Partition #: 13 Partitions accessed #KEY(AP)
    10 TABLE ACCESS FULL TABLE TAB_A_RPT Cost: 18 K Bytes: 65 M Cardinality: 970 K Partition #: 13 Partitions accessed #KEY(AP)
    9 HASH UNIQUE Cost: 4 Bytes: 14 Cardinality: 2
    8 TABLE ACCESS FULL TABLE PAY_PRD Cost: 3 Bytes: 14 Cardinality: 2
    16 PARTITION RANGE JOIN-FILTER Cost: 8 K Bytes: 106 M Cardinality: 939 K Partition #: 17 Partitions accessed #:BF0001
    15 TABLE ACCESS FULL TABLE TAB_C_RPT Cost: 8 K Bytes: 106 M Cardinality: 939 K Partition #: 17 Partitions accessed #:BF0001
    Thanks Again.
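    One way to work around this, sketched here under the assumption that the pay period can be resolved before the main query runs, is to evaluate the CASE/subquery first and pass the result as a plain bind variable, so the optimizer sees a simple equality on pay_prd_id for every joined table (table and column names taken from the demo DDL above):

    ```sql
    -- Hypothetical rewrite: resolve the open pay period up front, then filter
    -- every partitioned table with the same simple bind so each join partner
    -- can be pruned to a single partition at parse time.
    VARIABLE v_pay_prd NUMBER

    BEGIN
      SELECT MIN(pay_prd_id)
        INTO :v_pay_prd
        FROM pay_prd
       WHERE pay_prd_stat_cd = 2;
    END;
    /

    SELECT a.pay_prd_id, a.a_id, a.a_evnt_no
      FROM b, c, a, d
     WHERE a.pay_prd_id = :v_pay_prd
       AND b.pay_prd_id = :v_pay_prd
       AND c.pay_prd_id = :v_pay_prd
       AND d.pay_prd_id = :v_pay_prd
       AND a.a_evnt_no = b.b_evnt_no AND a.a_id = b.b_id
       AND a.a_evnt_no = c.c_evnt_no AND a.a_id = c.c_id
       AND a.a_evnt_no = d.d_evnt_no AND a.a_id = d.d_id;
    ```

    With an explicit predicate on each table's own partition key, pruning no longer depends on the optimizer propagating the CASE result transitively through the join.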

  • Hash Partitioning

    Hi All,
    Does hash partitioning always use the same hashing function, and will it always produce the same result if a new table is created with the same number of hash partitions hashed on the same field?
    For example, I have to join a multi-million record data set to table1 this morning. table1 is hash partitioned on row_id into 32 partitions.
    If I create a temp table to hold the data I want to join and hash partition it likewise into 32 partitions on row_id, will any given record from partition number N in my new table find its match in partition number N of table1?
    If so, that would allow us to join one partition at a time, which performs far better in our resource-contested environment.
    I hope you can help.

    Using 10gR2
    Partition pruning does occur when a partitioned table is joined to a global temporary table, provided the join column from the global temp table is the hash partitioning key (and primary key) of the partitioned table:
    SQL> create table t (
      2    a number)
      3    partition by hash(a) (
      4      partition p1 ,
      5      partition p2 ,
      6      partition p3 ,
      7      partition p4
      8    )
      9  /
    Table created.
    SQL>
    SQL> alter table t add (constraint t_pk primary key (a)
      2  using index local (partition p1_idx
      3                   , partition p2_idx
      4                   , partition P3_idx
      5                   , partition p4_idx)
      6  )
      7  /
    Table altered.
    SQL> insert into t (a) values (1);
    1 row created.
    SQL> insert into t (a) values (2);
    1 row created.
    SQL> insert into t (a) values (3);
    1 row created.
    SQL>  insert into t (a) values (4);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create global temporary table tm (a number)
      2  /
    Table created.
    SQL> insert into tm (a) values (2);
    1 row created.
    SQL> set autotrace traceonly explain
    SQL> select tm.a from tm, t
      2  where tm.a = t.a
      3  /
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=26)
       1    0   NESTED LOOPS (Cost=2 Card=1 Bytes=26)
       2    1     TABLE ACCESS (FULL) OF 'TM' (TABLE (TEMP)) (Cost=2 Card=
              1 Bytes=13)
       3    1     PARTITION HASH (ITERATOR) (Cost=0 Card=1 Bytes=13)
       4    3       INDEX (UNIQUE SCAN) OF 'T_PK' (INDEX (UNIQUE)) (Cost=0
            Card=1 Bytes=13)
    As you can see from the above, a full scan was performed on the global temp table TM, but partition pruning occurred on T. So, in theory, whatever data you load into the global temp table will be matched against only the corresponding partition.
    P;
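    On the original question: the hash function is deterministic for a given key value and partition count, so two tables hash-partitioned into the same number of partitions on the same column will place a given key in the same-numbered partition. A sketch of the resulting per-partition join, with partition names and a temp table name assumed rather than taken from the thread:

    ```sql
    -- Hypothetical sketch: with table1 and temp_t both hash partitioned into
    -- 32 partitions on row_id, matching rows sit in same-numbered partitions,
    -- so each partition pair can be joined independently.
    SELECT t1.row_id
      FROM table1 PARTITION (p1) t1
      JOIN temp_t PARTITION (p1) t2
        ON t1.row_id = t2.row_id;
    -- repeat for p2 .. p32, or simply join the full tables and let the
    -- optimizer perform a full partition-wise join.
    ```

    The partition count should be a power of two for even placement; ORA_HASH(row_id, 31) is often cited as matching the 32-way partition number, but verify that on your own version before relying on it.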

  • Creation of Hash Partitioned Global Index

    Hash Partition Index creation
    Hi friends,
    Could you tell me whether we can create a hash-partitioned index in 9i using the syntax below.
    CREATE INDEX hgidx ON tab (c1,c2,c3) GLOBAL
    PARTITION BY HASH (c1,c2)
    (PARTITION p1 TABLESPACE tbs_1,
    PARTITION p2 TABLESPACE tbs_2,
    PARTITION p3 TABLESPACE tbs_3,
    PARTITION p4 TABLESPACE tbs_4);
    I am getting error ORA-14005: missing RANGE keyword.
    Thanks in advance for your help.

    Yaseer,
    Is it possible to create Non-Partitioned and Global Index on Range-Partitioned Table?
    Yes
    We have 4 indexes on the CS_BILLING range-partitioned table, of which one is CBS_CLIENT_CODE (a *local partitioned index*) and the others are index types unknown to me.
    So what type are the other 3 indexes: non-partitioned global indexes or non-partitioned normal indexes?
    You got a local index and 3 non-partitioned "normal" b-tree type indexes.
    Also, if we create an index as (create index i_name on t_name(c_name)), by default will it create a global index? Please correct me...
    The above statement will create a non-partitioned index.
    Here is an example of creating global partitioned indexes
    CREATE INDEX month_ix ON sales(sales_month)
       GLOBAL PARTITION BY RANGE(sales_month)
          (PARTITION pm1_ix VALUES LESS THAN (2),
           PARTITION pm2_ix VALUES LESS THAN (3),
           PARTITION pm3_ix VALUES LESS THAN (4),
           PARTITION pm12_ix VALUES LESS THAN (MAXVALUE));
    Regards

  • Uneven distribution in Hash Partitioning

    Version :11.1.0.7.0 - 64bit Production
    OS :RHEL 5.3
    I have range partitioning on the ACCOUNTING_DATE column with 24 monthly partitions.
    To get rid of buffer busy waits on the index, I created a global partitioned index using the DDL below:
    DDL :
    CREATE INDEX IDX_GL_BATCH_ID ON SL_JOURNAL_ENTRY_LINES(GL_BATCH_ID)
    GLOBAL PARTITION BY HASH (GL_BATCH_ID) PARTITIONS 16 TABLESPACE OTC_IDX PARALLEL 8 INITRANS 8 MAXTRANS 8 PCTFREE 0 ONLINE;
    After index creation, I realized that a single index hash partition got all the rows.
    select partition_name,num_rows from dba_ind_partitions where index_name='IDX_GL_BATCH_ID';
    PARTITION_NAME                   NUM_ROWS
    SYS_P77                                 0
    SYS_P79                                 0
    SYS_P80                                 0
    SYS_P81                                 0
    SYS_P83                                 0
    SYS_P84                                 0
    SYS_P85                                 0
    SYS_P87                                 0
    SYS_P88                                 0
    SYS_P89                                 0
    SYS_P91                                 0
    SYS_P92                                 0
    SYS_P78                                 0
    SYS_P82                                 0
    SYS_P86                                 0
    SYS_P90                         256905355
    As far as I understand, hash partitioning should distribute rows evenly. Looking at the distribution above, I think I also did not get the benefit of multiple insert points from hash partitioning.
    Here is index column statistics :
    select TABLE_NAME,COLUMN_NAME,NUM_DISTINCT,NUM_NULLS,LAST_ANALYZED,SAMPLE_SIZE,HISTOGRAM,AVG_COL_LEN from dba_tab_col_statistics where table_name='SL_JOURNAL_ENTRY_LINES'  and COLUMN_NAME='GL_BATCH_ID';
    TABLE_NAME                     COLUMN_NAME          NUM_DISTINCT  NUM_NULLS LAST_ANALYZED        SAMPLE_SIZE HISTOGRAM       AVG_COL_LEN
    SL_JOURNAL_ENTRY_LINES         GL_BATCH_ID                     1          0 2010/12/28 22:00:51    259218636 NONE                      4

    It looks like the inserted data always has the same value for the partitioning key; in that case it is expected that a single partition is used, because:
    >
    For optimal data distribution, the following requirements should be satisfied:
    Choose a column or combination of columns that is unique or almost unique.
    Create multiple partitions and subpartitions for each partition that is a power of two. For example, 2, 4, 8, 16, 32, 64, 128, and so on.
    >
    See http://download.oracle.com/docs/cd/E11882_01/server.112/e16541/part_avail.htm#VLDBG1270.
    Edited by: P. Forstmann on 29 déc. 2010 09:06
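    A quick way to check the spread before (re)building such an index is to bucket the key values yourself. This is only a sketch: ORA_HASH is an analogue of the internal partition hash, not the exact placement function, but for a power-of-two bucket count it gives a good picture, and here NUM_DISTINCT = 1 on GL_BATCH_ID already guarantees a single bucket.

    ```sql
    -- Hypothetical check: how many distinct buckets does the key really produce?
    -- With only one distinct GL_BATCH_ID value, every row lands in one bucket.
    SELECT ORA_HASH(gl_batch_id, 15) AS bucket,   -- 16 buckets: 0..15
           COUNT(*)                  AS rows_in_bucket
      FROM sl_journal_entry_lines
     GROUP BY ORA_HASH(gl_batch_id, 15)
     ORDER BY bucket;
    ```

    If this returns one row, no 16-way hash scheme on that column alone can spread the data; the key (or a composite of columns) needs more distinct values first.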

  • What is the significance of Hash Partition ?

    Hi All,
    This is the first time I am going to implement hash partitioning as well as subpartitioning. Before implementing, I have some questions:
    1. What is the maximum number of partitions or subpartitions we can specify, and the default?
    2. How do we know which data comes under which hash partition? I mean, in the case of range partitioning we can tell from the specified ranges which data falls in which partition, and the same holds for list partitioning.
    Does anyone have any idea?
    Thanks in advance.
    Anwar

    1. Take a look here : http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/limits003.htm
    2. Take a look here : Re: Access to HASH PARTITION
    Nicolas.
    Correction of link
    Message was edited by:
    N. Gasparotto
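    For question 2, one way to see which hash partition a given row actually landed in is to map the row's data object number back to the partition name. A sketch, using the table T and the value 2 from the earlier example in this thread:

    ```sql
    -- DBMS_ROWID.ROWID_OBJECT returns the data object number of the segment
    -- (here, the partition) containing the row; join it to USER_OBJECTS to
    -- recover the partition name.
    SELECT o.subobject_name AS partition_name
      FROM user_objects o
     WHERE o.data_object_id = (SELECT DBMS_ROWID.ROWID_OBJECT(t.ROWID)
                                 FROM t
                                WHERE t.a = 2);
    ```

    Unlike range or list partitioning, there is no human-readable rule to predict the target partition; the placement is decided by Oracle's internal hash function, so inspection after the fact is the practical approach.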
