Subpartition or not?
We have the following scenario:
a) A very large table (approx. 450 GB) with several million rows, growing by more than 90 MB every day.
b) Fields:
i) Year.Month (eg 200411, 200412...)
ii) Company From
iii) Company To
iv) Description
The table holds information about items being sent from one company to another, the year & month the item was sent, and a description of the product.
c) Queries:
i) By month eg. All the items transferred during October 2004
ii) By Company From + Company To eg. All items transferred from Company A to Company B
Since the amount of data is very large, we have opted to adopt partitioning on this table. Originally the proposed design was the following:
Partition by range on the 'month' field, then subpartition on the concatenation of the 'Company From' and 'Company To' fields, e.g. if 'Company From' is ABC and 'Company To' is DEF, and they have the same length, the combined value is ABCDEF. There are 22 'Company From' + 'Company To' combinations. This means 22 subpartitions for every month, each holding the data for one 'Company From' + 'Company To' combination. We want to retain data for two years, so that makes 22 * 24 = 528 subpartitions. In theory, since all the data a query needs is found in one particular subpartition, there is no need to index.
However, someone proposed another solution, and we would like to ask Metalink for advice on whether it is better for our scenario. The other proposed solution is to partition only by range on month, so there is just one partition per month. To keep response times acceptable for queries with 'Company From' + 'Company To' clauses, we would create a local partitioned bitmap index on those two fields. Thus, to find all items transferred for a particular 'Company From'/'Company To' combination, we would probe each of the 24 monthly partitions (one partition per month, for two years) using the bitmap index.
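For comparison, the two candidate designs would be created roughly along these lines (table and column names are illustrative, not from the original post; a sketch only):

```sql
-- Design 1: range partition by month, list subpartition by company.
-- (Multi-column list subpartitioning is not available here, so a single
-- combined or leading column is assumed.)
CREATE TABLE transfers_d1 (
  yearmonth     NUMBER(6),
  company_from  VARCHAR2(3),
  company_to    VARCHAR2(3),
  description   VARCHAR2(2000)
)
PARTITION BY RANGE (yearmonth)
SUBPARTITION BY LIST (company_from)
(
  PARTITION p200411 VALUES LESS THAN (200412)
    (SUBPARTITION p200411_abc VALUES ('ABC'),
     SUBPARTITION p200411_def VALUES ('DEF'),
     SUBPARTITION p200411_other VALUES (DEFAULT))
  -- ... one partition per month, 24 in total
);

-- Design 2: range partition by month only, with local bitmap indexes
-- on the company columns to support from/to queries.
CREATE TABLE transfers_d2 (
  yearmonth     NUMBER(6),
  company_from  VARCHAR2(3),
  company_to    VARCHAR2(3),
  description   VARCHAR2(2000)
)
PARTITION BY RANGE (yearmonth)
(
  PARTITION p200411 VALUES LESS THAN (200412)
  -- ... one partition per month, 24 in total
);

CREATE BITMAP INDEX transfers_d2_from_ix ON transfers_d2 (company_from) LOCAL;
CREATE BITMAP INDEX transfers_d2_to_ix   ON transfers_d2 (company_to)   LOCAL;
```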
Which would provide the best response time?
22 combinations of company from/to is probably a low enough number of distinct values for a bitmap index to be used to access an individual combination, as long as you physically order the rows within the partition by company from/to, however ...
... the optimum solution in your situation is going to depend on so many variables, with hardware, software version, data compression etc. that I feel the only robust method for determining the optimum approach is to actually benchmark the two methods on your own system with your own data.
Let me float out another option ... create a materialized view with query rewrite enabled, partitioned by company from/to. More to manage, but it might be worth benchmarking.
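A minimal sketch of that materialized-view option (object and column names are hypothetical, and it assumes query rewrite is enabled for the session and the necessary privileges are in place):

```sql
-- Hypothetical names; assumes QUERY_REWRITE_ENABLED = TRUE.
CREATE MATERIALIZED VIEW transfers_by_company
  PARTITION BY LIST (company_from)
    (PARTITION p_abc   VALUES ('ABC'),
     PARTITION p_def   VALUES ('DEF'),
     PARTITION p_other VALUES (DEFAULT))
  BUILD IMMEDIATE
  REFRESH FORCE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT yearmonth, company_from, company_to, description
FROM   transfers;
```

With query rewrite enabled, company-to-company queries against the base table could be redirected to this MV, where partition pruning does the work; the cost is the extra object to refresh and manage.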
Similar Messages
-
Subpartitions NUM_ROWS not displaying
i have created the table as
create table range2 (roll number, age number)
partition by range (roll)
subpartition by range (age)
(partition p1 values less than (100)
   (subpartition p1sp1 values less than (30),
    subpartition p1sp2 values less than (60),
    subpartition p1sp3 values less than (90),
    subpartition p1sp4 values less than (maxvalue)),
 partition p2 values less than (200)
   (subpartition p2sp1 values less than (30),
    subpartition p2sp2 values less than (60),
    subpartition p2sp3 values less than (90),
    subpartition p2sp4 values less than (maxvalue)),
 partition p3 values less than (maxvalue));
-- Without partition p3 the 999-row insert below would fail; p3 has no
-- explicit subpartitions, hence the system-generated SYS_SUBP name in
-- the output.
insert into range2 select rownum,rownum from dual connect by level < 1000;
commit;
exec dbms_stats.gather_table_stats('VISHNU','RANGE2');
select table_name,partition_name,num_rows from user_tab_partitions where table_name='RANGE2';
TABLE_NAME PARTITION_NAME NUM_ROWS
RANGE2 P2 100
RANGE2 P1 99
RANGE2 P3 800
select table_name,partition_name,subpartition_name,num_rows from user_tab_subpartitions where table_name='RANGE2';
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
RANGE2 P2 P2SP1
RANGE2 P2 P2SP2
RANGE2 P2 P2SP3
RANGE2 P2 P2SP4
RANGE2 P1 P1SP1
RANGE2 P1 P1SP2
RANGE2 P1 P1SP3
RANGE2 P1 P1SP4
RANGE2 P3 SYS_SUBP101
The num_rows column returns values from DBA_TAB_PARTITIONS but not from DBA_TAB_SUBPARTITIONS. Am I missing something here?
Thanks,
Vishnu P

You need to specify the GRANULARITY parameter:
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for 64-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> --
SQL> create table range2 (roll number, age number)
2 partition by range (roll)
3 subpartition by range(age)
4 (partition p1 values less than (100) (
5 subpartition p1sp1 values less than (30),
6 subpartition p1sp2 values less than (60),
7 subpartition p1sp3 values less than (90),
8 subpartition p1sp4 values less than (maxvalue)),
9 partition p2 values less than (200) (
10 subpartition p2sp1 values less than (30),
11 subpartition p2sp2 values less than (60),
12 subpartition p2sp3 values less than (90),
13 subpartition p2sp4 values less than (maxvalue)),
14 partition p3 values less than (maxvalue) (
15 subpartition p3sp1 values less than (30),
16 subpartition p3sp2 values less than (60),
17 subpartition p3sp3 values less than (90),
18 subpartition p3sp4 values less than (maxvalue))
19 );
Table created.
SQL> insert into range2 select rownum,rownum from dual connect by level < 1000;
999 rows created.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user,'RANGE2');
PL/SQL procedure successfully completed.
SQL> select
2 table_name,
3 partition_name,
4 num_rows
5 from user_tab_partitions
6 where table_name='RANGE2'
7 order by 1,2,3;
TABLE_NAME PARTITION_NAME NUM_ROWS
RANGE2 P1 99
RANGE2 P2 100
RANGE2 P3 800
SQL> select
2 table_name,
3 partition_name,
4 subpartition_name,
5 num_rows
6 from user_tab_subpartitions
7 order by 1,2,3;
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
RANGE2 P1 P1SP1
RANGE2 P1 P1SP2
RANGE2 P1 P1SP3
RANGE2 P1 P1SP4
RANGE2 P2 P2SP1
RANGE2 P2 P2SP2
RANGE2 P2 P2SP3
RANGE2 P2 P2SP4
RANGE2 P3 P3SP1
RANGE2 P3 P3SP2
RANGE2 P3 P3SP3
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
RANGE2 P3 P3SP4
12 rows selected.
SQL> --
SQL> exec dbms_stats.gather_table_stats(ownname=> user, tabname => 'RANGE2', partname => 'P1');
PL/SQL procedure successfully completed.
SQL> select
2 table_name,
3 partition_name,
4 subpartition_name,
5 num_rows
6 from user_tab_subpartitions
7 order by 1,2,3;
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
RANGE2 P1 P1SP1
RANGE2 P1 P1SP2
RANGE2 P1 P1SP3
RANGE2 P1 P1SP4
RANGE2 P2 P2SP1
RANGE2 P2 P2SP2
RANGE2 P2 P2SP3
RANGE2 P2 P2SP4
RANGE2 P3 P3SP1
RANGE2 P3 P3SP2
RANGE2 P3 P3SP3
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
RANGE2 P3 P3SP4
12 rows selected.
SQL> --
SQL> exec dbms_stats.gather_table_stats(ownname => user, tabname => 'RANGE2', granularity => 'SUBPARTITION');
PL/SQL procedure successfully completed.
SQL> select
2 table_name,
3 partition_name,
4 subpartition_name,
5 num_rows
6 from user_tab_subpartitions
7 where table_name='RANGE2'
8 order by 1,2,3;
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
RANGE2 P1 P1SP1 29
RANGE2 P1 P1SP2 30
RANGE2 P1 P1SP3 30
RANGE2 P1 P1SP4 10
RANGE2 P2 P2SP1 0
RANGE2 P2 P2SP2 0
RANGE2 P2 P2SP3 0
RANGE2 P2 P2SP4 100
RANGE2 P3 P3SP1 0
RANGE2 P3 P3SP2 0
RANGE2 P3 P3SP3 0
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
RANGE2 P3 P3SP4 800
12 rows selected.
-
AVOID Subpartition(list) to be created when Splitting Main Partition(range)
I have created a table structure as below:
CREATE TABLE TEST_SUBPARTITIONS_1
(
  RECORD_ID   INTEGER NOT NULL,
  SUB_ID      VARCHAR2(100),
  COBDATE     DATE,
  DESCRIPTION VARCHAR2(2000)   -- DESC is a reserved word, so use a full name
)
PARTITION BY RANGE (COBDATE)
SUBPARTITION BY LIST (SUB_ID)
(
  PARTITION INITIAL_PARTITION VALUES LESS THAN
    (TO_DATE(' 2200-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
  TABLESPACE TBS_DATA
  PCTFREE 10
  INITRANS 1
  MAXTRANS 255
  STORAGE
  (
    INITIAL 64K
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED
  )
  (SUBPARTITION INITIAL_SUBPARTITION VALUES ('INITIAL_DUMMY_SUB_ID') TABLESPACE TBS_DATA)
);
CREATE UNIQUE INDEX TEST_SUBPARTITIONS_1_PK ON TEST_SUBPARTITIONS_1 (COBDATE, RECORD_ID, SUB_ID) LOCAL;
ALTER TABLE TEST_SUBPARTITIONS_1 ADD CONSTRAINT TEST_SUBPARTITIONS_1_PK PRIMARY KEY (COBDATE, RECORD_ID, SUB_ID);
I am partitioning the table based on range (COBDATE) and subpartitioning based on list (SUB_ID).
The table now is created with initial partitions and initial subpartition.
We are splitting the partitions in our procedure as below
ALTER TABLE TEST_SUBPARTITIONS_1 SPLIT PARTITION
TST_SUB_R21001231 AT (TO_DATE(20130220,'YYYYMMDD') ) INTO
(PARTITION TST_SUB_R20130219 TABLESPACE TBS_DATA, PARTITION TST_SUB_R21001231)
The partition is getting split correctly with new partition as
TST_SUB_R20130219, but the subpartition is also created automatically with some 'SYS' name.
(i.e Name: SYS_SUBP693 , Values: INITIAL_DUMMY_SUB_ID)
This happens after every split of range by COBDATE.
Here it has created as below:
Partition SubPartition
TST_SUB_R21001231 INITIAL_SUBPARTITION
TST_SUB_R20130219 SYS_SUBP693
TST_SUB_R20130220 SYS_SUBP694
TST_SUB_R20130221 SYS_SUBP695
I want to AVOID a subpartition being created when I split the main partition,
i.e. a SYS subpartition should not be created when I split the partition for COBDATE.
How do I avoid this in the main "ALTER" statement above?
Any other solution? I do not want to drop the SYS subpartition later; I want to avoid creating it at all when I split the partition.
The subpartitions aren't being split. Oracle is creating new subpartitions for the new partition. The subpartitions need to exist since that is where the data is stored.
You can avoid the SYS prefix on the name though by using a different naming convention.
See the 'Splitting a *-List Partition' section of the VLDB and Partitioning Guide
http://docs.oracle.com/cd/E11882_01/server.112/e25523/part_admin002.htm#i1008028
>
The ALTER TABLE ... SPLIT PARTITION statement provides no means of specifically naming subpartitions resulting from the split of a partition in a composite partitioned table. However, for those subpartitions in the parent partition with names of the form partition name_subpartition name, the database generates corresponding names in the newly created subpartitions using the new partition names. All other subpartitions are assigned system generated names of the form SYS_SUBPn. System generated names are also assigned for the subpartitions of any partition resulting from the split for which a name is not specified. Unnamed partitions are assigned a system generated partition name of the form SYS_Pn.
-
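Applying that documented naming rule to the poster's table, the workaround would look something like this (the _SP1 suffix is an illustrative choice; a sketch, not tested against the poster's schema):

```sql
-- Rename the subpartition so it follows the <partition name>_<suffix>
-- convention that the database propagates on split.
ALTER TABLE TEST_SUBPARTITIONS_1
  RENAME SUBPARTITION INITIAL_SUBPARTITION TO TST_SUB_R21001231_SP1;

-- A subsequent split now generates TST_SUB_R20130219_SP1 in the new
-- partition and keeps TST_SUB_R21001231_SP1, instead of SYS_SUBPn names.
ALTER TABLE TEST_SUBPARTITIONS_1 SPLIT PARTITION
  TST_SUB_R21001231 AT (TO_DATE('20130220', 'YYYYMMDD')) INTO
  (PARTITION TST_SUB_R20130219 TABLESPACE TBS_DATA,
   PARTITION TST_SUB_R21001231);
```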
Hint to disable partition wise join
Is there a way to disable partition wise join(serial) in 10gR2? i.e. via hint.. The reason I want to do this is, to use intra-partition parallelism for a very big partition. re-partitioning or subpartitioning is not an option for now. SQL is scanning only one partition so P-W join is not useful and it limit the intra-partition parallelism.
TIA for your answers.

user4529833 wrote:
Above is the plan. Currently there is no prallelism being used but P-W join is used as you can see. Table EC is huge .. (cardinality is screwed up here becasue of IN clause , which has just one vallid part key. [ 3rd party crappy app, so can't change it.] ) . I'd like to enable parallelism here using parallel (EC, 6) hint , it just applied to hash-join and not to table EC because of P-W join, I believe. What I want is to scan EC table via PQ slave.. i.e. PX BLOCK INTERATOR step before TABLE access step... How do I get one? Will PQ_DISTRIBUTE help me there??? or Is there any way to speed up the scan of EC..
The pq_distribute() should do the job. Here's an example
select
/*+
parallel(pt_range_1 2)
parallel(pt_range_2 2)
ordered
-- pq_distribute(pt_range_2 hash hash)
-- pq_distribute(pt_range_2 broadcast none)
*/
pt_range_2.grp,
count(pt_range_1.small_vc)
from
pt_range_1,
pt_range_2
where
pt_range_1.id in (10,20,40)
and pt_range_2.id = pt_range_1.id
group by
pt_range_2.grp
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 3 | 42 | 6 (34)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 3 | 42 | 6 (34)| 00:00:01 | | | Q1,01 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 3 | 42 | 6 (34)| 00:00:01 | | | Q1,01 | PCWP | |
| 4 | PX RECEIVE | | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,01 | PCWP | |
| 5 | PX SEND HASH | :TQ10000 | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,00 | P->P | HASH |
| 6 | PX PARTITION RANGE INLIST| | 3 | 42 | 5 (20)| 00:00:01 |KEY(I) |KEY(I) | Q1,00 | PCWC | |
|* 7 | HASH JOIN | | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | PT_RANGE_1 | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,00 | PCWP | |
|* 9 | TABLE ACCESS FULL | PT_RANGE_2 | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,00 | PCWP | |
------------------------------------------------------------------------------------------------------------------------------------------

Unhinted, I have a partition-wise parallel join.
The next plan is using hash distribution - which may be better for you if the EC table is large:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 3 | 42 | 6 (34)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 3 | 42 | 6 (34)| 00:00:01 | | | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 3 | 42 | 6 (34)| 00:00:01 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,02 | P->P | HASH |
|* 6 | HASH JOIN BUFFERED | | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,02 | PCWP | |
| 7 | PX RECEIVE | | 3 | 21 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 8 | PX SEND HASH | :TQ10000 | 3 | 21 | 2 (0)| 00:00:01 | | | Q1,00 | P->P | HASH |
| 9 | PX BLOCK ITERATOR | | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,00 | PCWC | |
|* 10 | TABLE ACCESS FULL| PT_RANGE_1 | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,00 | PCWP | |
| 11 | PX RECEIVE | | 3 | 21 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 12 | PX SEND HASH | :TQ10001 | 3 | 21 | 2 (0)| 00:00:01 | | | Q1,01 | P->P | HASH |
| 13 | PX BLOCK ITERATOR | | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,01 | PCWC | |
|* 14 | TABLE ACCESS FULL| PT_RANGE_2 | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,01 | PCWP | |
--------------------------------------------------------------------------------------------------------------------------------------

Then the broadcast version, if the EC data is relatively small (so that the whole set can fit in the memory of each slave):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 3 | 42 | 6 (34)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | 3 | 42 | 6 (34)| 00:00:01 | | | Q1,02 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 3 | 42 | 6 (34)| 00:00:01 | | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,02 | PCWP | |
| 5 | PX SEND HASH | :TQ10001 | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,01 | P->P | HASH |
|* 6 | HASH JOIN | | 3 | 42 | 5 (20)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX RECEIVE | | 3 | 21 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 8 | PX SEND BROADCAST | :TQ10000 | 3 | 21 | 2 (0)| 00:00:01 | | | Q1,00 | P->P | BROADCAST |
| 9 | PX BLOCK ITERATOR | | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,00 | PCWC | |
|* 10 | TABLE ACCESS FULL| PT_RANGE_1 | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,00 | PCWP | |
| 11 | PX BLOCK ITERATOR | | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,01 | PCWC | |
|* 12 | TABLE ACCESS FULL | PT_RANGE_2 | 3 | 21 | 2 (0)| 00:00:01 |KEY(I) |KEY(I) | Q1,01 | PCWP | |
--------------------------------------------------------------------------------------------------------------------------------------

The "hash join buffered" in the hash/hash distribution might hammer your temporary tablespace though, thanks to an oddity I discovered in parallel hash joins a little while ago: http://jonathanlewis.wordpress.com/2008/11/05/px-buffer/
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking" (Carl Sagan)
-
Hi, folks,
I am trying to do a table_stats, and all I want to do is a specific partition and all of its subpartitions. I tried using granularity='ALL', and it appears to be doing global stats, plus the specific partition and subpartitions. Can any of you tell me how I can accomplish doing just the part & subparts that I want?
Thanks!
Paul D.

Thanks, Ignacio, but perhaps you did not understand the question - I am trying to do a single execution of gather_table_stats, and I want a partition and its subpartitions, but NOT the global stats. My execution looks like this:
DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'owner',
TABNAME=> 'tbl_nm',
PARTNAME=> 'PARTITION_1',
GRANULARITY=> 'PARTITION');
If I use granularity of 'ALL', I get the global stats plus the partition and subparts; I do not want the global. If I use 'PARTITION' I get only the partition. If I use 'SUBPARTITION' I get only the subparts. It looks as if I need to run twice - once with 'PARTITION', once with 'SUBPARTITION'.
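Based on that observation, the two-call workaround would look something like this (owner, table, and partition names are placeholders from the post):

```sql
BEGIN
  -- Partition-level stats for PARTITION_1 only
  DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME     => 'owner',
    TABNAME     => 'tbl_nm',
    PARTNAME    => 'PARTITION_1',
    GRANULARITY => 'PARTITION');

  -- Subpartition-level stats for the same partition's subpartitions
  DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME     => 'owner',
    TABNAME     => 'tbl_nm',
    PARTNAME    => 'PARTITION_1',
    GRANULARITY => 'SUBPARTITION');
END;
/
```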
Thanks!
Paul -
Pruning on subpartition not occurs
Hello
I have a little performance problem with a query that seems simple to me:
SELECT /*+ no_parallel(dwh_ind_ca_marge) use_nl(dwh_ind_ca_marge) */ *
FROM dwh_ind_ca_marge, dwh_ag ag, dwh_ag com
WHERE com.pk_id_ag = ag.id_ag_com
AND fk_id_ag = ag.pk_id_ag
AND com.cd_ag = 121
AND id_an_mois = 200910
Dwh_ind_ca_marge is a composite (range/hash) partitioned table:
the range is on the month (200101, 200102, ...)
the hash is on fk_id_ag: the id of an agency (128 subpartitions)
dwh_ag is the table of agency with primary key pk_id_ag
In fact I don't understand why subquery pruning on fk_id_ag doesn't occur.
The problem is worked around by forcing the nested loop between dwh_ag and dwh_ind_ca_marge.
What is surprising is that in the 10053 trace the nested loop has a lower cost than the plan chosen!
What's the problem?
thanks
ps : i am on 10.2.0.5 enterprise edition
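For reference, the partitioning scheme described above would be created along these lines (column and partition names are guesses from the post; a sketch only):

```sql
CREATE TABLE dwh_ind_ca_marge (
  id_an_mois NUMBER(6),   -- year/month key, e.g. 200910
  fk_id_ag   NUMBER,      -- agency id, joined to dwh_ag.pk_id_ag
  -- ... other indicator columns ...
  ca         NUMBER
)
PARTITION BY RANGE (id_an_mois)
SUBPARTITION BY HASH (fk_id_ag) SUBPARTITIONS 128
(
  PARTITION p200101 VALUES LESS THAN (200102),
  PARTITION p200102 VALUES LESS THAN (200103)
  -- ... one partition per month
);
```

Subpartition pruning on fk_id_ag would require the optimizer to push the agency id from the dwh_ag join into the hash subpartition key, which is what the forced nested loop achieves.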
QUERY BLOCK TEXT
select /*+ no_parallel (dwh_ind_ca_marge) */ -----use_nl(dwh_ind_ca_marge) */
* from dwh_ind_ca_marge, dwh_ag ag, dwh_ag com where
com.pk_id_ag = ag.id_ag_com and fk_id_ag= ag.pk_id_ag and
com.cd_ag=121 and id_an_mois=200910 and 1=1
QUERY BLOCK SIGNATURE
qb name was generated
signature (optimizer): qb_name=SEL$1 nbfros=3 flg=0
fro(0): flg=0 objn=14305 hint_alias="AG"@"SEL$1"
fro(1): flg=0 objn=14305 hint_alias="COM"@"SEL$1"
fro(2): flg=0 objn=121227 hint_alias="DWH_IND_CA_MARGE"@"SEL$1"
SYSTEM STATISTICS INFORMATION
Using WORKLOAD Stats
CPUSPEED: 1257 millions instructions/sec
SREADTIM: 10 milliseconds
MREADTIM: 42 milliseconds
MBRC: 51.000000 blocks
MAXTHR: 33554432 bytes/sec
SLAVETHR: 2097152 bytes/sec
BASE STATISTICAL INFORMATION
Table Stats::
Table: DWH_AG Alias: COM (NOT ANALYZED)
#Rows: 1634 #Blks: 20 AvgRowLen: 100.00
Column (#1): PK_ID_AG(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 51 Nulls: 0 Density: 0.019584
Index Stats::
Index: IDX_DWH_AG_1 Col#: 5 (NOT ANALYZED)
LVLS: 1 #LB: 25 #DK: 100 LB/K: 1.00 DB/K: 1.00 CLUF: 800.00
Index: PK_DWH_AG Col#: 1 (NOT ANALYZED)
LVLS: 1 #LB: 25 #DK: 100 LB/K: 1.00 DB/K: 1.00 CLUF: 800.00
Table Stats::
Table: DWH_AG Alias: AG (NOT ANALYZED)
#Rows: 1634 #Blks: 20 AvgRowLen: 100.00
Column (#22): ID_AG_COM(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 51 Nulls: 0 Density: 0.019584
Column (#1): PK_ID_AG(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 51 Nulls: 0 Density: 0.019584
Index Stats::
Index: IDX_DWH_AG_1 Col#: 5 (NOT ANALYZED)
LVLS: 1 #LB: 25 #DK: 100 LB/K: 1.00 DB/K: 1.00 CLUF: 800.00
Index: PK_DWH_AG Col#: 1 (NOT ANALYZED)
LVLS: 1 #LB: 25 #DK: 100 LB/K: 1.00 DB/K: 1.00 CLUF: 800.00
Table Stats::
Table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE (Using composite stats)
#Rows: 100302273 #Blks: 46538 AvgRowLen: 452.00
Index Stats::
Index: IDX_DWH_IND_CA_MARGE_1 Col#: 88
LVLS: 2 #LB: 1223 #DK: 98204 LB/K: 1.00 DB/K: 2.00 CLUF: 277386.00
Index: IDX_DWH_IND_CA_MARGE_2 Col#: 10 5
LVLS: 3 #LB: 350280 #DK: 1086 LB/K: 322.00 DB/K: 33633.00 CLUF: 36526000.00
Index: IDX_DWH_IND_CA_MARGE_3 Col#: 6
LVLS: 3 #LB: 385313 #DK: 2124 LB/K: 181.00 DB/K: 19421.00 CLUF: 41252307.00
Index: IDX_DWH_IND_CA_MARGE_4 Col#: 9
LVLS: 3 #LB: 353713 #DK: 6863 LB/K: 80.00 DB/K: 14320.00 CLUF: 98677547.00
Index: IDX_DWH_IND_CA_MARGE_5 Col#: 10
LVLS: 3 #LB: 302187 #DK: 112 LB/K: 2698.00 DB/K: 95529.00 CLUF: 10699333.00
Index: IDX_DWH_IND_CA_MARGE_6 Col#: 5
LVLS: 3 #LB: 315320 #DK: 18 LB/K: 17517.00 DB/K: 1746632.00 CLUF: 31439393.00
Index: PK_DWH_IND_CA_MARGE Col#: 1
LVLS: 3 #LB: 314060 #DK: 97686787 LB/K: 1.00 DB/K: 1.00 CLUF: 92745280.00
SINGLE TABLE ACCESS PATH
BEGIN Single Table Cardinality Estimation
*** 2010-10-21 13:52:04.086
** Performing dynamic sampling initial checks. **
Column (#8): ID_AN_MOIS(NUMBER)
AvgLen: 5.00 NDV: 79 Nulls: 0 Density: 5.9155e-06 Min: 200404 Max: 201010
Histogram: Freq #Bkts: 79 UncompBkts: 15045341 EndPtVals: 79
** Dynamic sampling initial checks returning FALSE.
Table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
Card: Original: 100302273 Rounded: 1907307 Computed: 1907306.66 Non Adjusted: 1907306.66
END Single Table Cardinality Estimation
Access Path: TableScan
Cost: 4014.54 Resp: 4014.54 Degree: 0
Cost_io: 3710.00 Cost_cpu: 3963249652
Resp_io: 3710.00 Resp_cpu: 3963249652
Best:: AccessPath: TableScan
Cost: 4014.54 Degree: 1 Resp: 4014.54 Card: 1907306.66 Bytes: 0
SINGLE TABLE ACCESS PATH
BEGIN Single Table Cardinality Estimation
*** 2010-10-21 13:52:04.087
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 4).
** Dynamic sampling updated table stats.: blocks=20
*** 2010-10-21 13:52:04.087
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0), COUNT(DISTINCT C3), NVL(SUM(CASE WHEN C3 IS NULL THEN 1 ELSE 0 END),0) FROM (SELECT /*+ NO_PARALLEL("AG") FULL("AG") NO_PARALLEL_INDEX("AG") */ 1 AS C1, 1 AS C2, "AG"."ID_AG_COM" AS C3 FROM "DWH_AG" "AG") SAMPLESUB
*** 2010-10-21 13:52:04.089
** Executed dynamic sampling query:
level : 4
sample pct. : 100.000000
actual sample size : 565
filtered sample card. : 565
orig. card. : 1634
block cnt. table stat. : 20
block cnt. for sampling: 20
max. sample block cnt. : 64
sample block cnt. : 20
ndv C3 : 557
scaled : 557.00
nulls C4 : 6
scaled : 6.00
min. sel. est. : -1.00000000
** Dynamic sampling col. stats.:
Column (#22): ID_AG_COM(NUMBER) Part#: 0
AvgLen: 22.00 NDV: 557 Nulls: 6 Density: 0.0017953
** Using dynamic sampling NULLs estimates.
** Using dynamic sampling NDV estimates.
Scaled NDVs using cardinality = 565.
** Using dynamic sampling card. : 565
** Dynamic sampling updated table card.
Table: DWH_AG Alias: AG
Card: Original: 565 Rounded: 565 Computed: 565.00 Non Adjusted: 565.00
END Single Table Cardinality Estimation
Access Path: TableScan
Cost: 3.05 Resp: 3.05 Degree: 0
Cost_io: 3.00 Cost_cpu: 713079
Resp_io: 3.00 Resp_cpu: 713079
Best:: AccessPath: TableScan
Cost: 3.05 Degree: 1 Resp: 3.05 Card: 565.00 Bytes: 0
SINGLE TABLE ACCESS PATH
BEGIN Single Table Cardinality Estimation
*** 2010-10-21 13:52:04.090
** Performing dynamic sampling initial checks. **
Column (#5): CD_AG(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 51 Nulls: 0 Density: 0.019584
** Dynamic sampling initial checks returning TRUE (level = 4).
** Dynamic sampling updated index stats.: IDX_DWH_AG_1, blocks=6
** Dynamic sampling updated index stats.: PK_DWH_AG, blocks=1
** Dynamic sampling index access candidate : IDX_DWH_AG_1
** Dynamic sampling updated table stats.: blocks=20
*** 2010-10-21 13:52:04.090
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0), COUNT(DISTINCT C3), NVL(SUM(CASE WHEN C3 IS NULL THEN 1 ELSE 0 END),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("COM") FULL("COM") NO_PARALLEL_INDEX("COM") */ 1 AS C1, CASE WHEN "COM"."CD_AG"=121 THEN 1 ELSE 0 END AS C2, "COM"."PK_ID_AG" AS C3 FROM "DWH_AG" "COM") SAMPLESUB
*** 2010-10-21 13:52:04.091
** Executed dynamic sampling query:
level : 4
sample pct. : 100.000000
actual sample size : 565
filtered sample card. : 1
orig. card. : 1634
block cnt. table stat. : 20
block cnt. for sampling: 20
max. sample block cnt. : 64
sample block cnt. : 20
ndv C3 : 565
scaled : 565.00
nulls C4 : 0
scaled : 0.00
min. sel. est. : 0.01000000
** Dynamic sampling col. stats.:
Column (#1): PK_ID_AG(NUMBER) Part#: 0
AvgLen: 22.00 NDV: 565 Nulls: 0 Density: 0.0017699
** Using dynamic sampling NULLs estimates.
** Using dynamic sampling NDV estimates.
Scaled NDVs using cardinality = 565.
** Using recursive dynamic sampling card. est. : 565.000000
*** 2010-10-21 13:52:04.091
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS opt_param('parallel_execution_enabled', 'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0), NVL(SUM(C3),0) FROM (SELECT /*+ NO_PARALLEL("COM") INDEX("COM" IDX_DWH_AG_1) NO_PARALLEL_INDEX("COM") */ 1 AS C1, 1 AS C2, 1 AS C3 FROM "DWH_AG" "COM" WHERE "COM"."CD_AG"=121 AND ROWNUM <= 2500) SAMPLESUB
*** 2010-10-21 13:52:04.091
** Executed dynamic sampling query:
level : 4
sample pct. : 100.000000
actual sample size : 565
filtered sample card. : 1
filtered sample card. (index IDX_DWH_AG_1): 1
orig. card. : 565
block cnt. table stat. : 20
block cnt. for sampling: 20
max. sample block cnt. : 4294967295
sample block cnt. : 20
min. sel. est. : 0.01000000
index IDX_DWH_AG_1 selectivity est.: 0.00176991
** Using dynamic sampling card. : 565
** Dynamic sampling updated table card.
** Using single table dynamic sel. est. : 0.00176991
Table: DWH_AG Alias: COM
Card: Original: 565 Rounded: 1 Computed: 1.00 Non Adjusted: 1.00
END Single Table Cardinality Estimation
Access Path: TableScan
Cost: 3.02 Resp: 3.02 Degree: 0
Cost_io: 3.00 Cost_cpu: 301409
Resp_io: 3.00 Resp_cpu: 301409
Access Path: index (AllEqRange)
Index: IDX_DWH_AG_1
resc_io: 3.00 resc_cpu: 28264
ix_sel: 0.0017699 ix_sel_with_filters: 0.0017699
Cost: 3.00 Resp: 3.00 Degree: 1
Best:: AccessPath: IndexRange Index: IDX_DWH_AG_1
Cost: 3.00 Degree: 1 Resp: 3.00 Card: 1.00 Bytes: 0
OPTIMIZER STATISTICS AND COMPUTATIONS
GENERAL PLANS
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: DWH_AG[COM]#0 DWH_AG[AG]#1 DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2
Now joining: DWH_AG[AG]#1
NL Join
Outer table: Card: 1.00 Cost: 3.00 Resp: 3.00 Degree: 1 Bytes: 706
Inner table: DWH_AG Alias: AG
Access Path: TableScan
NL Join: Cost: 6.06 Resp: 6.06 Degree: 1
Cost_io: 6.00 Cost_cpu: 741343
Resp_io: 6.00 Resp_cpu: 741343
Best NL cost: 6.06
resc: 6.06 resc_io: 6.00 resc_cpu: 741343
resp: 6.06 resp_io: 6.00 resp_cpu: 741343
Join Card: 1.00 = outer (1.00) * inner (565.00) * sel (0.0017763)
Join Card - Rounded: 1 Computed: 1.00
SM Join
Outer table:
resc: 3.00 card 1.00 bytes: 706 deg: 1 resp: 3.00
Inner table: DWH_AG Alias: AG
resc: 3.05 card: 565.00 bytes: 706 deg: 1 resp: 3.05
using dmeth: 2 #groups: 1
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 1 Row size: 787 Total Rows: 1
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 13013721
Total Temp space used: 0
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 55 Row size: 787 Total Rows: 565
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 13246441
Total Temp space used: 0
SM join: Resc: 8.07 Resp: 8.07 [multiMatchCost=0.00]
SM cost: 8.07
resc: 8.07 resc_io: 6.00 resc_cpu: 27001505
resp: 8.07 resp_io: 6.00 resp_cpu: 27001505
HA Join
Outer table:
resc: 3.00 card 1.00 bytes: 706 deg: 1 resp: 3.00
Inner table: DWH_AG Alias: AG
resc: 3.05 card: 565.00 bytes: 706 deg: 1 resp: 3.05
using dmeth: 2 #groups: 1
Cost per ptn: 0.50 #ptns: 1
hash_area: 256 (max=25600) buildfrag: 1 probefrag: 50 ppasses: 1
Hash join: Resc: 6.56 Resp: 6.56 [multiMatchCost=0.00]
HA cost: 6.56
resc: 6.56 resc_io: 6.00 resc_cpu: 7304854
resp: 6.56 resp_io: 6.00 resp_cpu: 7304854
Best:: JoinMethod: Hash
Cost: 6.56 Degree: 1 Resp: 6.56 Card: 1.00 Bytes: 1412
Now joining: DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2
NL Join
Outer table: Card: 1.00 Cost: 6.56 Resp: 6.56 Degree: 1 Bytes: 1412
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
Access Path: TableScan
NL Join: Cost: 4017.38 Resp: 4017.38 Degree: 1
Cost_io: 3716.00 Cost_cpu: 3922124065
Resp_io: 3716.00 Resp_cpu: 3922124065
Access Path: index (RangeScan)
Index: IDX_DWH_IND_CA_MARGE_2
resc_io: 329255.00 resc_cpu: 4355303457
ix_sel: 0.0089286 ix_sel_with_filters: 0.0089286
NL Join: Cost: 329596.23 Resp: 329596.23 Degree: 1
Cost_io: 329261.00 Cost_cpu: 4362608310
Resp_io: 329261.00 Resp_cpu: 4362608310
Access Path: index (AllEqJoinGuess)
Index: IDX_DWH_IND_CA_MARGE_5
resc_io: 98230.00 resc_cpu: 2748127704
ix_sel: 0.0089286 ix_sel_with_filters: 0.0089286
NL Join: Cost: 98447.73 Resp: 98447.73 Degree: 1
Cost_io: 98236.00 Cost_cpu: 2755432557
Resp_io: 98236.00 Resp_cpu: 2755432557
****** trying bitmap/domain indexes ******
****** finished trying bitmap/domain indexes ******
Best NL cost: 4017.38
resc: 4017.38 resc_io: 3716.00 resc_cpu: 3922124065
resp: 4017.38 resp_io: 3716.00 resp_cpu: 3922124065
Join Card: 17090.67 = outer (1.00) * inner (1907306.66) * sel (0.0089286)
Join Card - Rounded: 17091 Computed: 17090.67
SM Join
Outer table:
resc: 6.56 card 1.00 bytes: 1412 deg: 1 resp: 6.56
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
resc: 4014.54 card: 1907306.66 bytes: 452 deg: 1 resp: 4014.54
using dmeth: 2 #groups: 1
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 1 Row size: 1564 Total Rows: 1
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 13013721
Total Temp space used: 0
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 118623 Row size: 508 Total Rows: 1907307
Initial runs: 10 Merge passes: 1 IO Cost / pass: 42378
Total IO sort cost: 161001 Total CPU sort cost: 4727050693
Total Temp space used: 2604131000
SM join: Resc: 165386.34 Resp: 165386.34 [multiMatchCost=0.00]
SM cost: 165386.34
resc: 165386.34 resc_io: 164717.00 resc_cpu: 8710618919
resp: 165386.34 resp_io: 164717.00 resp_cpu: 8710618919
HA Join
Outer table:
resc: 6.56 card 1.00 bytes: 1412 deg: 1 resp: 6.56
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
resc: 4014.54 card: 1907306.66 bytes: 452 deg: 1 resp: 4014.54
using dmeth: 2 #groups: 1
Cost per ptn: 15.16 #ptns: 1
hash_area: 256 (max=25600) buildfrag: 1 probefrag: 108032 ppasses: 1
Hash join: Resc: 4036.26 Resp: 4036.26 [multiMatchCost=0.00]
HA cost: 4036.26
resc: 4036.26 resc_io: 3716.00 resc_cpu: 4167792216
resp: 4036.26 resp_io: 3716.00 resp_cpu: 4167792216
Best:: JoinMethod: Hash
Cost: 4036.26 Degree: 1 Resp: 4036.26 Card: 17090.67 Bytes: 1864
Best so far: Table#: 0 cost: 3.0022 card: 1.0000 bytes: 706
Table#: 1 cost: 6.5613 card: 1.0036 bytes: 1412
Table#: 2 cost: 4036.2614 card: 17090.6711 bytes: 31857624
Join order[2]: DWH_AG[COM]#0 DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2 DWH_AG[AG]#1
Now joining: DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2
NL Join
Outer table: Card: 1.00 Cost: 3.00 Resp: 3.00 Degree: 1 Bytes: 706
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
Access Path: TableScan
NL Join: Cost: 4017.55 Resp: 4017.55 Degree: 1
Cost_io: 3713.00 Cost_cpu: 3963277916
Resp_io: 3713.00 Resp_cpu: 3963277916
Best NL cost: 4017.55
resc: 4017.55 resc_io: 3713.00 resc_cpu: 3963277916
resp: 4017.55 resp_io: 3713.00 resp_cpu: 3963277916
Join Card: 1907306.66 = outer (1.00) * inner (1907306.66) * sel (1)
Join Card - Rounded: 1907307 Computed: 1907306.66
Best:: JoinMethod: NestedLoop
Cost: 4017.55 Degree: 1 Resp: 4017.55 Card: 1907306.66 Bytes: 1158
Now joining: DWH_AG[AG]#1
NL Join
Outer table: Card: 1907306.66 Cost: 4017.55 Resp: 4017.55 Degree: 1 Bytes: 1158
Inner table: DWH_AG Alias: AG
Access Path: TableScan
NL Join: Cost: 3148354.25 Resp: 3148354.25 Degree: 1
Cost_io: 3043540.00 Cost_cpu: 1364023464708
Resp_io: 3043540.00 Resp_cpu: 1364023464708
Access Path: index (UniqueScan)
Index: PK_DWH_AG
resc_io: 1.00 resc_cpu: 10031
ix_sel: 0.0017699 ix_sel_with_filters: 0.0017699
NL Join: Cost: 874409.56 Resp: 874409.56 Degree: 1
Cost_io: 873283.21 Cost_cpu: 14658019216
Resp_io: 873283.21 Resp_cpu: 14658019216
Access Path: index (AllEqUnique)
Index: PK_DWH_AG
resc_io: 1.00 resc_cpu: 10031
ix_sel: 0.0089286 ix_sel_with_filters: 0.0089286
NL Join: Cost: 874409.56 Resp: 874409.56 Degree: 1
Cost_io: 873283.21 Cost_cpu: 14658019216
Resp_io: 873283.21 Resp_cpu: 14658019216
Best NL cost: 874409.56
resc: 874409.56 resc_io: 873283.21 resc_cpu: 14658019216
resp: 874409.56 resp_io: 873283.21 resp_cpu: 14658019216
Join Card: 17090.67 = outer (1907306.66) * inner (565.00) * sel (1.5860e-05)
Join Card - Rounded: 17091 Computed: 17090.67
SM Join
Outer table:
resc: 4017.55 card 1907306.66 bytes: 1158 deg: 1 resp: 4017.55
Inner table: DWH_AG Alias: AG
resc: 3.05 card: 565.00 bytes: 706 deg: 1 resp: 3.05
using dmeth: 2 #groups: 1
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 299827 Row size: 1284 Total Rows: 1907307
Initial runs: 24 Merge passes: 1 IO Cost / pass: 107112
Total IO sort cost: 406939 Total CPU sort cost: 9189380397
Total Temp space used: 5208237000
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 55 Row size: 787 Total Rows: 565
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 13246441
Total Temp space used: 0
SM join: Resc: 411666.75 Resp: 411666.75 [multiMatchCost=0.00]
SM cost: 411666.75
resc: 411666.75 resc_io: 410655.00 resc_cpu: 13166617832
resp: 411666.75 resp_io: 410655.00 resp_cpu: 13166617832
HA Join
Outer table:
resc: 4017.55 card 1907306.66 bytes: 1158 deg: 1 resp: 4017.55
Inner table: DWH_AG Alias: AG
resc: 3.05 card: 565.00 bytes: 706 deg: 1 resp: 3.05
using dmeth: 2 #groups: 1
Cost per ptn: 50003.38 #ptns: 1
hash_area: 256 (max=25600) buildfrag: 272406 probefrag: 50 ppasses: 1
Hash join: Resc: 54024.11 Resp: 54024.11 [multiMatchCost=0.13]
HA Join (swap)
Outer table:
resc: 3.05 card 565.00 bytes: 706 deg: 1 resp: 3.05
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
resc: 4017.55 card: 1907306.66 bytes: 1158 deg: 1 resp: 4017.55
using dmeth: 2 #groups: 1
Cost per ptn: 15.16 #ptns: 1
hash_area: 256 (max=25600) buildfrag: 50 probefrag: 272406 ppasses: 1
Hash join: Resc: 4035.76 Resp: 4035.76 [multiMatchCost=0.00]
HA cost: 4035.76
resc: 4035.76 resc_io: 3716.00 resc_cpu: 4161313305
resp: 4035.76 resp_io: 3716.00 resp_cpu: 4161313305
Best:: JoinMethod: Hash
Cost: 4035.76 Degree: 1 Resp: 4035.76 Card: 17090.67 Bytes: 1864
Best so far: Table#: 0 cost: 3.0022 card: 1.0000 bytes: 706
Table#: 2 cost: 4017.5461 card: 1907306.6603 bytes: 2208661506
Table#: 1 cost: 4035.7635 card: 17090.6711 bytes: 31857624
Join order[3]: DWH_AG[AG]#1 DWH_AG[COM]#0 DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2
Now joining: DWH_AG[COM]#0
NL Join
Outer table: Card: 565.00 Cost: 3.05 Resp: 3.05 Degree: 1 Bytes: 706
Inner table: DWH_AG Alias: COM
Access Path: TableScan
NL Join: Cost: 918.14 Resp: 918.14 Degree: 1
Cost_io: 905.00 Cost_cpu: 171009051
Resp_io: 905.00 Resp_cpu: 171009051
Access Path: index (UniqueScan)
Index: PK_DWH_AG
resc_io: 1.00 resc_cpu: 10081
ix_sel: 0.0017699 ix_sel_with_filters: 0.0017699
NL Join: Cost: 568.49 Resp: 568.49 Degree: 1
Cost_io: 568.00 Cost_cpu: 6409092
Resp_io: 568.00 Resp_cpu: 6409092
Access Path: index (AllEqJoin)
Index: IDX_DWH_AG_1
resc_io: 3.00 resc_cpu: 28264
ix_sel: 0.0017699 ix_sel_with_filters: 0.0017699
NL Join: Cost: 1699.28 Resp: 1699.28 Degree: 1
Cost_io: 1698.00 Cost_cpu: 16682420
Resp_io: 1698.00 Resp_cpu: 16682420
Access Path: index (AllEqUnique)
Index: PK_DWH_AG
resc_io: 1.00 resc_cpu: 10081
ix_sel: 0.0017699 ix_sel_with_filters: 0.0017699
NL Join: Cost: 568.49 Resp: 568.49 Degree: 1
Cost_io: 568.00 Cost_cpu: 6409092
Resp_io: 568.00 Resp_cpu: 6409092
****** trying bitmap/domain indexes ******
Access Path: index (AllEqJoin)
Index: IDX_DWH_AG_1
resc_io: 1.00 resc_cpu: 8971
ix_sel: 0.0017699 ix_sel_with_filters: 0.0017699
NL Join: Cost: 568.44 Resp: 568.44 Degree: 1
Cost_io: 568.00 Cost_cpu: 5781942
Resp_io: 568.00 Resp_cpu: 5781942
Access Path: index (AllEqUnique)
Index: PK_DWH_AG
resc_io: 0.00 resc_cpu: 1900
ix_sel: 0.0017699 ix_sel_with_filters: 0.0017699
NL Join: Cost: 3.14 Resp: 3.14 Degree: 1
Cost_io: 3.00 Cost_cpu: 1786579
Resp_io: 3.00 Resp_cpu: 1786579
Access path: Bitmap index - rejected
Cost: 577.49 Cost_io: 576.81 Cost_cpu: 8873906 Sel: 1.7763e-05
Not believed to be index-only
****** finished trying bitmap/domain indexes ******
Best NL cost: 568.49
resc: 568.49 resc_io: 568.00 resc_cpu: 6409092
resp: 568.49 resp_io: 568.00 resp_cpu: 6409092
Join Card: 1.00 = outer (565.00) * inner (1.00) * sel (0.0017763)
Join Card - Rounded: 1 Computed: 1.00
SM Join
Outer table:
resc: 3.05 card 565.00 bytes: 706 deg: 1 resp: 3.05
Inner table: DWH_AG Alias: COM
resc: 3.00 card: 1.00 bytes: 706 deg: 1 resp: 3.00
using dmeth: 2 #groups: 1
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 55 Row size: 787 Total Rows: 565
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 13246441
Total Temp space used: 0
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 1 Row size: 787 Total Rows: 1
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 13013721
Total Temp space used: 0
SM join: Resc: 8.07 Resp: 8.07 [multiMatchCost=0.00]
SM cost: 8.07
resc: 8.07 resc_io: 6.00 resc_cpu: 27001505
resp: 8.07 resp_io: 6.00 resp_cpu: 27001505
HA Join
Outer table:
resc: 3.05 card 565.00 bytes: 706 deg: 1 resp: 3.05
Inner table: DWH_AG Alias: COM
resc: 3.00 card: 1.00 bytes: 706 deg: 1 resp: 3.00
using dmeth: 2 #groups: 1
Cost per ptn: 0.51 #ptns: 1
hash_area: 256 (max=25600) buildfrag: 50 probefrag: 1 ppasses: 1
Hash join: Resc: 6.56 Resp: 6.56 [multiMatchCost=0.00]
HA Join (swap)
Outer table:
resc: 3.00 card 1.00 bytes: 706 deg: 1 resp: 3.00
Inner table: DWH_AG Alias: AG
resc: 3.05 card: 565.00 bytes: 706 deg: 1 resp: 3.05
using dmeth: 2 #groups: 1
Cost per ptn: 0.50 #ptns: 1
hash_area: 256 (max=25600) buildfrag: 1 probefrag: 50 ppasses: 1
Hash join: Resc: 6.56 Resp: 6.56 [multiMatchCost=0.00]
HA cost: 6.56
resc: 6.56 resc_io: 6.00 resc_cpu: 7304854
resp: 6.56 resp_io: 6.00 resp_cpu: 7304854
Best:: JoinMethod: Hash
Cost: 6.56 Degree: 1 Resp: 6.56 Card: 1.00 Bytes: 1412
Now joining: DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2
NL Join
Outer table: Card: 1.00 Cost: 6.56 Resp: 6.56 Degree: 1 Bytes: 1412
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
Access Path: TableScan
NL Join: Cost: 4017.38 Resp: 4017.38 Degree: 1
Cost_io: 3716.00 Cost_cpu: 3922124065
Resp_io: 3716.00 Resp_cpu: 3922124065
Access Path: index (RangeScan)
Index: IDX_DWH_IND_CA_MARGE_2
resc_io: 329255.00 resc_cpu: 4355303457
ix_sel: 0.0089286 ix_sel_with_filters: 0.0089286
NL Join: Cost: 329596.23 Resp: 329596.23 Degree: 1
Cost_io: 329261.00 Cost_cpu: 4362608310
Resp_io: 329261.00 Resp_cpu: 4362608310
Access Path: index (AllEqJoinGuess)
Index: IDX_DWH_IND_CA_MARGE_5
resc_io: 98230.00 resc_cpu: 2748127704
ix_sel: 0.0089286 ix_sel_with_filters: 0.0089286
NL Join: Cost: 98447.73 Resp: 98447.73 Degree: 1
Cost_io: 98236.00 Cost_cpu: 2755432557
Resp_io: 98236.00 Resp_cpu: 2755432557
****** trying bitmap/domain indexes ******
****** finished trying bitmap/domain indexes ******
Best NL cost: 4017.38
resc: 4017.38 resc_io: 3716.00 resc_cpu: 3922124065
resp: 4017.38 resp_io: 3716.00 resp_cpu: 3922124065
Join Card: 17090.67 = outer (1.00) * inner (1907306.66) * sel (0.0089286)
Join Card - Rounded: 17091 Computed: 17090.67
SM Join
Outer table:
resc: 6.56 card 1.00 bytes: 1412 deg: 1 resp: 6.56
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
resc: 4014.54 card: 1907306.66 bytes: 452 deg: 1 resp: 4014.54
using dmeth: 2 #groups: 1
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 1 Row size: 1564 Total Rows: 1
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 13013721
Total Temp space used: 0
SORT resource Sort statistics
Sort width: 598 Area size: 1048576 Max Area size: 104857600
Degree: 1
Blocks to Sort: 118623 Row size: 508 Total Rows: 1907307
Initial runs: 10 Merge passes: 1 IO Cost / pass: 42378
Total IO sort cost: 161001 Total CPU sort cost: 4727050693
Total Temp space used: 2604131000
SM join: Resc: 165386.34 Resp: 165386.34 [multiMatchCost=0.00]
SM cost: 165386.34
resc: 165386.34 resc_io: 164717.00 resc_cpu: 8710618919
resp: 165386.34 resp_io: 164717.00 resp_cpu: 8710618919
HA Join
Outer table:
resc: 6.56 card 1.00 bytes: 1412 deg: 1 resp: 6.56
Inner table: DWH_IND_CA_MARGE Alias: DWH_IND_CA_MARGE
resc: 4014.54 card: 1907306.66 bytes: 452 deg: 1 resp: 4014.54
using dmeth: 2 #groups: 1
Cost per ptn: 15.16 #ptns: 1
hash_area: 256 (max=25600) buildfrag: 1 probefrag: 108032 ppasses: 1
Hash join: Resc: 4036.26 Resp: 4036.26 [multiMatchCost=0.00]
HA cost: 4036.26
resc: 4036.26 resc_io: 3716.00 resc_cpu: 4167792216
resp: 4036.26 resp_io: 3716.00 resp_cpu: 4167792216
Join order aborted: cost > best plan cost
Join order[4]: DWH_AG[AG]#1 DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2 DWH_AG[COM]#0
Now joining: DWH_IND_CA_MARGE[DWH_IND_CA_MARGE]#2
**************
-
Issue while using SUBPARTITION clause in the MERGE statement in PLSQL Code
Hello All,
I am using the code below to update data in a specific subpartition using an Oracle MERGE statement.
I am getting the subpartition name and passing it as a string to the SUBPARTITION clause.
The MERGE statement is failing, stating that the specified subpartition does not exist. But the subpartition does exist for the table.
We are using Oracle 11gr2 database.
Below is the code which I am using to populate the data.
declare
ln_min_batchkey PLS_INTEGER;
ln_max_batchkey PLS_INTEGER;
lv_partition_name VARCHAR2 (32767);
lv_subpartition_name VARCHAR2 (32767);
begin
FOR m1 IN ( SELECT (year_val + 1) AS year_val, year_val AS orig_year_val
FROM ( SELECT DISTINCT
TO_CHAR (batch_create_dt, 'YYYY') year_val
FROM stores_comm_mob_sub_temp
ORDER BY 1)
ORDER BY year_val)
LOOP
lv_partition_name :=
scmsa_handset_mobility_data_build.fn_get_partition_name (
p_table_name => 'STORES_COMM_MOB_SUB_INFO',
p_search_string => m1.year_val);
FOR m2
IN (SELECT DISTINCT
'M' || TO_CHAR (batch_create_dt, 'MM') AS month_val
FROM stores_comm_mob_sub_temp
WHERE TO_CHAR (batch_create_dt, 'YYYY') = m1.orig_year_val)
LOOP
lv_subpartition_name :=
scmsa_handset_mobility_data_build.fn_get_subpartition_name (
p_table_name => 'STORES_COMM_MOB_SUB_INFO',
p_partition_name => lv_partition_name,
p_search_string => m2.month_val);
DBMS_OUTPUT.PUT_LINE('The lv_subpartition_name => '||lv_subpartition_name||' and lv_partition_name=> '||lv_partition_name);
IF lv_subpartition_name IS NULL
THEN
DBMS_OUTPUT.PUT_LINE('INSIDE IF => '||m2.month_val);
INSERT INTO STORES_COMM_MOB_SUB_INFO T1 (
t1.ntlogin,
t1.first_name,
t1.last_name,
t1.job_title,
t1.store_id,
t1.batch_create_dt)
SELECT t2.ntlogin,
t2.first_name,
t2.last_name,
t2.job_title,
t2.store_id,
t2.batch_create_dt
FROM stores_comm_mob_sub_temp t2
WHERE TO_CHAR (batch_create_dt, 'YYYY') = m1.orig_year_val
AND 'M' || TO_CHAR (batch_create_dt, 'MM') =
m2.month_val;
ELSIF lv_subpartition_name IS NOT NULL
THEN
DBMS_OUTPUT.PUT_LINE('INSIDE ELSIF => '||m2.month_val);
MERGE INTO (SELECT *
FROM stores_comm_mob_sub_info
SUBPARTITION (lv_subpartition_name)) T1 --> Issue Here
USING (SELECT *
FROM stores_comm_mob_sub_temp
WHERE TO_CHAR (batch_create_dt, 'YYYY') =
m1.orig_year_val
AND 'M' || TO_CHAR (batch_create_dt, 'MM') =
m2.month_val) T2
ON (T1.store_id = T2.store_id
AND T1.ntlogin = T2.ntlogin)
WHEN MATCHED
THEN
UPDATE SET
t1.postpaid_totalqty =
(NVL (t1.postpaid_totalqty, 0)
+ NVL (t2.postpaid_totalqty, 0)),
t1.sales_transaction_dt =
GREATEST (
NVL (t1.sales_transaction_dt,
t2.sales_transaction_dt),
NVL (t2.sales_transaction_dt,
t1.sales_transaction_dt)),
t1.batch_create_dt =
GREATEST (
NVL (t1.batch_create_dt, t2.batch_create_dt),
NVL (t2.batch_create_dt, t1.batch_create_dt))
WHEN NOT MATCHED
THEN
INSERT (t1.ntlogin,
t1.first_name,
t1.last_name,
t1.job_title,
t1.store_id,
t1.batch_create_dt)
VALUES (t2.ntlogin,
t2.first_name,
t2.last_name,
t2.job_title,
t2.store_id,
t2.batch_create_dt);
END IF;
END LOOP;
END LOOP;
COMMIT;
end;
Much appreciate your inputs here.
Thanks,
MK.
(SORRY TO POST THE SAME QUESTION TWICE).
Edited by: Maddy on May 23, 2013 10:20 PM
Duplicate question
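A note on why the MERGE fails: the SUBPARTITION clause takes an identifier, not a PL/SQL variable or bind, so SUBPARTITION (lv_subpartition_name) is parsed as a reference to a subpartition literally named LV_SUBPARTITION_NAME, which does not exist. A sketch of a workaround (untested against this schema) is to build the statement with dynamic SQL so the name appears as a literal:

```sql
-- Sketch only: concatenate the subpartition name into the statement text
-- (it cannot be bound), and bind the ordinary filter values as usual.
-- The elided SET/INSERT clauses are the same ones used in the static MERGE.
EXECUTE IMMEDIATE
      'MERGE INTO (SELECT * FROM stores_comm_mob_sub_info'
   || ' SUBPARTITION (' || lv_subpartition_name || ')) T1'
   || ' USING (SELECT * FROM stores_comm_mob_sub_temp'
   || '        WHERE TO_CHAR (batch_create_dt, ''YYYY'') = :y'
   || '          AND ''M'' || TO_CHAR (batch_create_dt, ''MM'') = :m) T2'
   || ' ON (T1.store_id = T2.store_id AND T1.ntlogin = T2.ntlogin)'
   || ' WHEN MATCHED THEN UPDATE SET ...'
   || ' WHEN NOT MATCHED THEN INSERT ...'
   USING m1.orig_year_val, m2.month_val;
```

Alternatively, since the subpartition key is derived from batch_create_dt, the same pruning can often be obtained without naming the subpartition at all by filtering on the partition key columns in the ON/WHERE clauses.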
-
How to truncate data in a subpartition
Hi All,
I am using oracle 11gr2 database.
I have a table as given below
CREATE TABLE SCMSA_ESP.PP_DROP
(
ESP_MESSAGE_ID VARCHAR2(50 BYTE) NOT NULL ,
CREATE_DT DATE DEFAULT SYSDATE,
JOB_LOG_ID NUMBER NOT NULL ,
MON NUMBER GENERATED ALWAYS AS (TO_CHAR("CREATE_DT",'MM'))
)
TABLESPACE SCMSA_ESP_DATA
PARTITION BY RANGE (JOB_LOG_ID)
SUBPARTITION BY LIST (MON)
(
PARTITION PMINVALUE VALUES LESS THAN (1)
( SUBPARTITION PMINVALUE_M1 VALUES ('01') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M2 VALUES ('02') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M3 VALUES ('03') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M4 VALUES ('04') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M5 VALUES ('05') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M6 VALUES ('06') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M7 VALUES ('07') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M8 VALUES ('08') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M9 VALUES ('09') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M10 VALUES ('10') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M11 VALUES ('11') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMINVALUE_M12 VALUES ('12') TABLESPACE SCMSA_ESP_DATA
),
PARTITION PMAXVALUE VALUES LESS THAN (MAXVALUE)
( SUBPARTITION PMAXVALUE_M1 VALUES ('01') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M2 VALUES ('02') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M3 VALUES ('03') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M4 VALUES ('04') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M5 VALUES ('05') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M6 VALUES ('06') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M7 VALUES ('07') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M8 VALUES ('08') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M9 VALUES ('09') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M10 VALUES ('10') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M11 VALUES ('11') TABLESPACE SCMSA_ESP_DATA,
SUBPARTITION PMAXVALUE_M12 VALUES ('12') TABLESPACE SCMSA_ESP_DATA
)
)
ENABLE ROW MOVEMENT;
I have populated two sets of data:
one with a positive job_log_id and another with a negative job_log_id, as given below.
Step 1:
Data going to PMAXVALUE Partition
INSERT INTO PP_DROP ( ESP_MESSAGE_ID, CREATE_DT,JOB_LOG_ID)
SELECT LEVEL, SYSDATE+TRUNC(DBMS_RANDOM.VALUE(1,300)), 1 FROM DUAL CONNECT BY LEVEL <=300;
Step 2:
Data going to PMINVALUE partition
INSERT INTO PP_DROP ( ESP_MESSAGE_ID, CREATE_DT,JOB_LOG_ID)
SELECT LEVEL, SYSDATE+TRUNC(DBMS_RANDOM.VALUE(1,300)), -1 FROM DUAL CONNECT BY LEVEL <=300;
Now the question is how to truncate only the data present in a subpartition of the positive partition.
For example, in the PMAXVALUE partition I need to truncate the data in the JAN month subpartition only.
Appreciate your valuable response.
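For reference, a single subpartition can be truncated directly with ALTER TABLE. A minimal sketch against the DDL above (subpartition names as declared there):

```sql
-- Truncate only the JAN-month subpartition of the PMAXVALUE partition.
ALTER TABLE SCMSA_ESP.PP_DROP TRUNCATE SUBPARTITION PMAXVALUE_M1;

-- To check the available subpartition names first:
SELECT partition_name, subpartition_name
  FROM user_tab_subpartitions
 WHERE table_name = 'PP_DROP'
 ORDER BY partition_name, subpartition_position;
```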
Thanks,
MK.
For future reference:
http://www.morganslibrary.org/reference/truncate.html
The library index is located at
http://www.morganslibrary.org/library.html -
Moving Subpartitions to a duplicate table in a different schema.
+NOTE: I asked this question on the PL/SQL and SQL forum, but have moved it here as I think it's more appropriate to this forum. I've placed a pointer to this post on the original post.+
Hello Ladies and Gentlemen.
We're currently involved in an exercise at my workplace where we are in the process of attempting to logically organise our data by global region. For information, our production database is currently at version 10.2.0.3 and will shortly be upgraded to 10.2.0.5.
At the moment, all our data 'lives' in the same schema. We are in the process of producing a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas; each schema to represent a global region.
In our current schema, our data is range-partitioned on date, and then list-partitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for both identically-named tables across both schemas.
Do any of you have an opinion on the best way to do this? Ideally in the new schema, I'd like to create each new table as an empty table with the appropriate range and list partitions defined. I have been doing some testing in our development environment with the EXCHANGE PARTITION statement, but this requires the destination table to be non-partitioned.
I just wondered if, for partition migration across schemas with the table name and tablespace remaining constant, there is an official "best practice" method of accomplishing such a subpartition move neatly, quickly and elegantly?
Any helpful replies welcome.
Cheers.
James
You CAN exchange a subpartition into another table using a "temporary" (staging) table as an intermediary.
See :
SQL> drop table part_subpart purge;
Table dropped.
SQL> drop table NEW_part_subpart purge;
Table dropped.
SQL> drop table STG_part_subpart purge;
Table dropped.
SQL>
SQL> create table part_subpart(col_1 number not null, col_2 varchar2(30))
2 partition by range (col_1) subpartition by list (col_2)
3 (
4 partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
5 ,
6 partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
7 )
8 /
Table created.
SQL>
SQL> create index part_subpart_ndx on part_subpart(col_1) local;
Index created.
SQL>
SQL>
SQL> insert into part_subpart values (1,'A');
1 row created.
SQL> insert into part_subpart values (2,'A');
1 row created.
SQL> insert into part_subpart values (2,'B');
1 row created.
SQL> insert into part_subpart values (2,'B');
1 row created.
SQL> insert into part_subpart values (2,'C');
1 row created.
SQL> insert into part_subpart values (11,'A');
1 row created.
SQL> insert into part_subpart values (11,'C');
1 row created.
SQL>
SQL> commit;
Commit complete.
SQL>
SQL> create table NEW_part_subpart(col_1 number not null, col_2 varchar2(30))
2 partition by range (col_1) subpartition by list (col_2)
3 (
4 partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
5 ,
6 partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
7 )
8 /
Table created.
SQL>
SQL> create table STG_part_subpart(col_1 number not null, col_2 varchar2(30))
2 /
Table created.
SQL>
SQL> -- ensure that the Staging table is empty
SQL> truncate table STG_part_subpart;
Table truncated.
SQL> -- exchanging a subpart out of part_subpart
SQL> alter table part_subpart exchange subpartition
2 p_2_s_1 with table STG_part_subpart;
Table altered.
SQL> -- exchanging the subpart into NEW_part_subpart
SQL> alter table NEW_part_subpart exchange subpartition
2 n_p_2_s_1 with table STG_part_subpart;
Table altered.
SQL>
SQL>
SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
COL_1 COL_2
11 A
SQL>
SQL> select * from part_subpart subpartition (p_2_s_1);
no rows selected
SQL>
I have exchanged subpartition p_2_s_1 out of the table part_subpart into the table NEW_part_subpart -- even with a different name for the subpartition (n_p_2_s_1) if so desired.
NOTE : Since your source and target tables are in different schemas, you will have to move (or copy) the staging table STG_part_subpart from the first schema to the second schema after the first "exchange subpartition" is done. You will have to do this for every subpartition to be exchanged.
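Since EXCHANGE ... WITH TABLE accepts a schema-qualified table name, the copy step can sometimes be avoided by running both exchanges against the one staging table, provided the executing user has the necessary ALTER privileges on both tables. A sketch (schema names are placeholders):

```sql
-- Exchange the subpartition out of the source table in SCHEMA1 ...
ALTER TABLE schema1.part_subpart
  EXCHANGE SUBPARTITION p_2_s_1 WITH TABLE schema1.stg_part_subpart;

-- ... then exchange the same staging table into the target in SCHEMA2.
-- Both steps are dictionary-only operations; no rows are physically moved.
ALTER TABLE schema2.new_part_subpart
  EXCHANGE SUBPARTITION n_p_2_s_1 WITH TABLE schema1.stg_part_subpart;
```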
Hemant K Chitale
Edited by: Hemant K Chitale on Apr 4, 2011 10:19 AM
Added clarification for cross-schema exchange. -
Hi,
I am getting error while creating a SUBPARTITION for my table.
CREATE TABLE Tab1
(
col1 NUMBER(12),
col2 CHAR(1 BYTE),
col3 CHAR(1 BYTE),
col4 NUMBER(12)
)
PARTITION BY LIST (col1)
SUBPARTITION BY HASH (col4)
SUBPARTITION TEMPLATE(
SUBPARTITION SP_1 TABLESPACE C_D
)
(
PARTITION PAR_0 VALUES (0)
TABLESPACE C_D
);
Error
SUBPARTITION BY HASH (col4)
ERROR at line 9:
ORA-00922: missing or invalid option
The syntax looks correct to me... I am not understanding where I am going wrong. Can anyone tell me my mistake?
Thanks
Sami
What Hoek said is correct, at least for 10.2.
11.2 supports list/hash and your code works as expected (after changing the tablespace to users):
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> CREATE TABLE Tab1
2 (
3 col1 NUMBER(12),
4 col2 CHAR(1 BYTE),
5 col3 CHAR(1 BYTE),
6 col4 NUMBER(12)
7 )
8 PARTITION BY LIST (col1 )
9 SUBPARTITION BY HASH (col4)
10 SUBPARTITION TEMPLATE(
11 SUBPARTITION SP_1 TABLESPACE users
12 )
13 (
14 PARTITION PAR_0 VALUES (0)
15 )
16 TABLESPACE users;
Table created.
SQL>
But you didn't tell us your Oracle version so..... -
ORA-14189: this physical attribute may not be specified for an index subpar
Hi,
I have many partitioned tables with subpartitions, and I have also created partitioned and subpartitioned indexes.
I moved the partitioned tables to another, new tablespace, and now when I rebuild the partition indexes I am getting the error below.
ORA-14189: this physical attribute may not be specified for an index subpartition
I couldn't understand the problem, and I also could not find an answer on Google.
Details:
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP148;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP1;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP21;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP41;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP61;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP62;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP63;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP64;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP122;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP123;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP124;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP125;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP201;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP206;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP207;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP208;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP261;
ALTER INDEX ADMINISTRATOR.DX_IND REBUILD SUBPARTITION SYS_SUBP262;
Error at line 2
ORA-14189: this physical attribute may not be specified for an index subpartition
db version: 9.2.0.8
Could you any body help me please how can I solve this problem.
thanks and regards,
Hi Justen,
I just ran the index rebuild from TOAD, so I simply copied TOAD's error output rather than typing it myself; there is a space on my machine that seems to have been lost when I pasted the output here.
Anyway, I found a workaround: I take the script from TOAD, then drop and recreate the index. But this takes a very long time: recreating just one index takes 1 hour, and I have so many indexes like this that recreating them all would take about 3 days.
Is there any other solution?
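One thing worth trying before dropping and recreating: ORA-14189 is raised when physical attributes (such as a TABLESPACE or STORAGE clause) are supplied for an index subpartition, and tool-generated rebuild scripts often append them. A sketch that generates bare rebuild statements from the dictionary instead:

```sql
-- Generate plain REBUILD statements with no storage/tablespace clause
-- (the physical attributes are what typically trigger ORA-14189 on 9.2).
SELECT 'ALTER INDEX ' || index_owner || '.' || index_name
       || ' REBUILD SUBPARTITION ' || subpartition_name || ';'
  FROM dba_ind_subpartitions
 WHERE index_owner = 'ADMINISTRATOR'
   AND index_name  = 'DX_IND'
   AND status      = 'UNUSABLE';
```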
thanks and regards -
Where did the Partitions and SubPartitions go?
I created a table with partition Range (Transaction_Date, Retention_Period) and hash (Record_Id) subpartition template (for 32 subpartitions)
Then I add more partitions, and while the loop is going on I can get counts of partitions and subpartitions. The job finished, and my log table shows about 1800 partitions added; there should be 32 subpartitions for each of the partitions. However, user_tab_partitions shows zero records for the table, and user_tab_subpartitions also shows zero records. After a few minutes the partitions show up, but no subpartitions. The indexes on the table have also disappeared (one local and one global).
Any explanation for this behaviour?
Working on Exadata 11.2.0.3
Querying
USER_TABLES
USER_TAB_PARTITIONS
USER_TAB_SUBPARTITIONS
USER_INDEXES
>
Step 1. Create Table xyz (c1 date, c2 integer c3 integer, etc)
partition by range (c1,c2)
subpartition template (s01, s02... s32)
create index i1 on xyz (c1,c2,c3) local;
Then, since I want to create about 1800 partitions, I have a procedure that has a "loop around" ALTER TABLE ADD PARTITION ... until all the partitions are created. This is the "job"; while it is running I query USER_TAB_PARTITIONS and USER_TAB_SUBPARTITIONS to see how things are progressing. And yes, ALTER TABLE has no progress indication to verify.
So all the partitions get created, with no errors from the procedure as it goes through creating all the partitions. So I would expect that at the end I should see all the new partitions for the table. Instead I get "no records" from USER_TAB_PARTITIONS and USER_TAB_SUBPARTITIONS.
I am also aware that "ALTER TABLE ADD PARTITION .." cannot make indexes go away. However, if the query on USER_INDEXES returns nothing, what happened to the index created before the partitions were added?
I am not using DBMS_REDEFINITION. The only procedure is to add partitions one at a time for each date for 3 years. If you have a better way than a procedure please advise accordingly.
>
In order to help you, the first step is to understand what problem you are dealing with. Then comes trying to determine what options are available for addressing the problem. There are too many cases, and yours may or may not be another one, where people seem to have settled on a solution before they have really identified the problem.
Anytime someone mentions the use of dynamic SQL it raises a red flag. And when that use is for DDL, rather than DML, it raises a REALLY BIG red flag.
Schema objects need to be managed properly and the DDL that creates them needs to be properly written and kept in some sort of version control.
Scripts and procedures that use dynamic SQL are more properly used to create DDL, not to execute it. That is, rather than use a procedure to dynamically create or alter a table you would use the procedure to dynamically create a DDL script that would create or alter the table.
Let's assume that you know for certain that your table really needs to have 1800 partitions, be subpartitioned the way you say and have partition and subpartitions names that you assign. Well, that would be a pain to hand-write 1800 partition definitions.
So you would create a procedure that would produce a CREATE TABLE script that had the proper clauses and syntax to specify those 1800 partitions. Your 'loop' would not EXECUTE an ALTER TABLE for each partition but would create the partition specification and modify the partition boundaries for each iteration through the loop. Sort of like
for i from 1 to 365 loop
add partition spec for startDate + i
end loop;
The number of iterations would be a parameter and you would start with 2 or 3. Always test with the smallest code that will produce the correct results. If the code works for 3 days it will work for any larger reasonable number.
Then you would save that script in your version control system and run it to create the table. There would be nothing to monitor since there is just one script and when it is done it is done.
That would be a proper use of dynamic sql: to produce DDL, not to execute it.
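As a concrete sketch of that approach (names and the date range are illustrative, and the two-column range key is reduced to one for brevity), the loop emits partition clauses for pasting into a version-controlled CREATE TABLE script rather than executing anything:

```sql
SET SERVEROUTPUT ON
BEGIN
   -- Emit, not execute: each iteration prints one partition clause.
   FOR i IN 0 .. 2 LOOP
      DBMS_OUTPUT.PUT_LINE (
            'PARTITION P_'
         || TO_CHAR (DATE '2013-01-01' + i, 'YYYYMMDD')
         || ' VALUES LESS THAN (DATE '''
         || TO_CHAR (DATE '2013-01-01' + i + 1, 'YYYY-MM-DD')
         || '''),');
   END LOOP;
END;
/
```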
Back to your issue. If I were your manager then based on what you posted I would expect you to already have
1. a requirements document that stated the problem (e.g. performance, data management) that was being addressed
2. test results that showed that your proposed solution (a table partitioned the way you posted) solves the problem
The requirements doc would have detail about what the performance/management issues are and what impact they are having
You also need to document what the possible solutions are, the relative merits of each solution and the factors you considered when ranking the solutions. That is, why is your particular partitioning scheme the best solution for the problem.
You should have test results that show the execution plans and performance you achieved by using a test version of your proposed table and indexes.
Until you have 'proven' that your solution will work as you expect I wouldn't recommend implementing the full-blown version of it.
1. Create a table MANUALLY that has 2 or three days worth of partitions.
2. Load those partitions with a representative amount of data
3. Execute test queries to query data from one of those partitions
4. Execute the same test queries against your current table
5. Capture the execution plans (the actual ones) for those queries. Verify that you are getting the performance improvements that you expected.
Once ALL of that prep work is done and you have concluded that your table/index design is correct then go back to work on writing a script/procedure that will produce (not execute) DDL to produce the main table and partitioning you designed.
Just an aside on what you posted. The indexes should be created AFTER the table and its partitions are created. If you are creating your local index first, as your post suggests, you are forcing Oracle to revamp it 1800 times as each partition is added. Just create the index after the table.
p.s. the number of posts anyone has is irrelevant. The only thing that matters is whether the advice or suggestions they provide are helpful. And the helpfulness of those is limited to, and based on, ONLY the information a poster provides. For example, your proposed partitioning scheme might be perfectly appropriate for your use case or it could be totally inappropriate. We have no way of knowing without knowing WHY you chose that scheme.
But I haven't seen one like that so it makes me suspicious that you really need to get that complicated. -
Doubt in subpartitioning of a table
hi gems...good evening..
I have a table which previously had only range partitions.
Now I changed it to range-hash composite partitioning.
There are 6 partition tablespaces, namely TS_PART1, TS_PART2, ..., TS_PART6.
The default tablespace of the schema is TS_PROD.
The table had following structure previously:
create table ORDER_BOOK
(
CUST_ID NUMBER(10),
PROFILE_ID NUMBER(10),
PRODUCT_ID NUMBER(10),
SUB_PROFILE_ID VARCHAR2(25),
CASHFLOW_DATE DATE,
EARNINGS NUMBER(24,6),
constraint ORDER_BOOK_PK primary key (CUST_ID, PROFILE_ID, PRODUCT_ID, SUB_PROFILE_ID, CASHFLOW_DATE)
)
partition by range (CASHFLOW_DATE)
(
partition ORDER_BOOK_PART1 values less than (TO_DATE('01-01-2003', 'DD-MM-YYYY')) tablespace TS_PART1,
partition ORDER_BOOK_PART2 values less than (TO_DATE('01-01-2006', 'DD-MM-YYYY')) tablespace TS_PART2,
partition ORDER_BOOK_PART3 values less than (TO_DATE('01-01-2009', 'DD-MM-YYYY')) tablespace TS_PART3,
partition ORDER_BOOK_PART4 values less than (TO_DATE('01-01-2012', 'DD-MM-YYYY')) tablespace TS_PART4,
partition ORDER_BOOK_PART5 values less than (TO_DATE('01-01-2015', 'DD-MM-YYYY')) tablespace TS_PART5,
partition ORDER_BOOK_PART6 values less than (TO_DATE('01-01-2018', 'DD-MM-YYYY')) tablespace TS_PART6
);
create index ORDER_BOOK_IDX on ORDER_BOOK(PRODUCT_ID,CASHFLOW_DATE);
Now I did the following steps to change the previously existing partitions to the new range-hash composite partitions:
begin
dbms_redefinition.can_redef_table
(uname=>'DEMO_TEST',
tname=>'ORDER_BOOK',
options_flag=>DBMS_REDEFINITION.CONS_USE_PK);
end;
create table INTERIM_ORDER_BOOK
(
CUST_ID NUMBER(10),
PROFILE_ID NUMBER(10),
PRODUCT_ID NUMBER(10),
SUB_PROFILE_ID VARCHAR2(25),
CASHFLOW_DATE DATE,
EARNINGS NUMBER(24,6),
constraint INTERIM_ORDER_BOOK_PK primary key (CUST_ID, PROFILE_ID, PRODUCT_ID, SUB_PROFILE_ID, CASHFLOW_DATE)
)
partition by range (CASHFLOW_DATE)
subpartition by hash (CUST_ID)
subpartition template
(
subpartition SP1 tablespace TS_PART1,
subpartition SP2 tablespace TS_PART2,
subpartition SP3 tablespace TS_PART3,
subpartition SP4 tablespace TS_PART4,
subpartition SP5 tablespace TS_PART5,
subpartition SP6 tablespace TS_PART6
)
(partition P1 values less than (to_date('01-01-2003','DD-MM-YYYY')),
partition P2 values less than (to_date('01-01-2006','DD-MM-YYYY')),
partition P3 values less than (to_date('01-01-2009','DD-MM-YYYY')),
partition P4 values less than (to_date('01-01-2012','DD-MM-YYYY')),
partition P5 values less than (to_date('01-01-2015','DD-MM-YYYY')),
partition P6 values less than (to_date('01-01-2018','DD-MM-YYYY')))
enable row movement;
begin
dbms_redefinition.start_redef_table
(uname=>'DEMO_TEST',
orig_table=>'ORDER_BOOK',
int_table=>'INTERIM_ORDER_BOOK',
options_flag=>DBMS_REDEFINITION.CONS_USE_PK);
end;
begin
dbms_redefinition.finish_redef_table
(uname=>'DEMO_TEST',
orig_table=>'ORDER_BOOK',
int_table=>'INTERIM_ORDER_BOOK');
end;
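Between start_redef_table and finish_redef_table you would normally also copy dependent objects and resynchronize the interim table; a sketch using the same schema and table names as above (COPY_TABLE_DEPENDENTS parameter defaults can vary by Oracle version):

```sql
declare
  l_errors pls_integer;
begin
  -- copy grants, triggers and constraints; skip indexes, which are
  -- recreated manually as LOCAL after the redefinition
  dbms_redefinition.copy_table_dependents
  (uname        => 'DEMO_TEST',
   orig_table   => 'ORDER_BOOK',
   int_table    => 'INTERIM_ORDER_BOOK',
   copy_indexes => 0,
   num_errors   => l_errors);

  -- pick up any rows changed on ORDER_BOOK since start_redef_table
  dbms_redefinition.sync_interim_table
  (uname      => 'DEMO_TEST',
   orig_table => 'ORDER_BOOK',
   int_table  => 'INTERIM_ORDER_BOOK');
end;
/
```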
After that I created the index with the LOCAL clause, i.e. a local index.
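For reference, that local index would look something like this (index name taken from the earlier DDL):

```sql
create index ORDER_BOOK_IDX on ORDER_BOOK (PRODUCT_ID, CASHFLOW_DATE) local;
```

Each index partition then follows its table partition automatically as partitions are added or dropped.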
But the problem is this: initially, with only range partitioning, the data went into the corresponding partition tablespaces.
But after modifying the table, populating it consumes space in both the partition tablespaces and the default tablespace.
I checked the sizes of the tablespaces; that is how I noticed this.
The output of USER_TAB_SUBPARTITIONS is OK: every subpartition is in its corresponding tablespace.
But the main partitions (USER_TAB_PARTITIONS) show the default tablespace.
please help me....thanks in advance...
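To check where each partition and subpartition is recorded, the standard dictionary views can be queried directly (table name is from this thread):

```sql
select partition_name, tablespace_name
from   user_tab_partitions
where  table_name = 'ORDER_BOOK';

select partition_name, subpartition_name, tablespace_name
from   user_tab_subpartitions
where  table_name = 'ORDER_BOOK'
order  by partition_name, subpartition_name;
```

Note that for a composite-partitioned table only the subpartitions own segments; the tablespace shown at the partition level is a default attribute, not necessarily an allocated segment.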
Edited by: user12780416 on Apr 13, 2012 7:46 AM
user12780416 wrote:
Thanks sir for your reply...
Yes, by MOVE syntax I can move the partitions in the corresponding tablespaces.
But I do not understand why space is being consumed in both sets of tablespaces.
The TS_PART1 increased 2MB, TS_PART2 increased 6MB, TS_PART3 increased 2MB, TS_PART4 increased 5MB, TS_PART5 increased 9MB.
and TS_PROD increased (2+6+2+5+9)=24MB
Why is this happening ?
I have read that when we make subpartitions, the main partitions are only a logical entity and the subpartitions are the physical entity.
Where have you read this?
As RP rightly pointed out, you can specify a tablespace for each partition (each partition using a different tablespace) and a tablespace for each subpartition (again, using many if you felt like it). -
Goldengate expects a column that is not in the unique constraint
I do not know GoldenGate. I am working with a GoldenGate engineer who doesn't really know Oracle. I am the DBA supporting this. This is the issue we are having. Please bear with me if I have trouble explaining it.
I am pulling from oracle and loading to teradata. I confirmed that the unique index is correct in teradata (don't have access. I asked).
Oracle 10.2.0.5
GoldenGate: 11.1.1.0.29
Error: the schema name listed in the error is the Teradata one, so TERADATA_SCHEMA represents it below.
Key column my_id is missing from update on table TERADATA_SCHEMA.MYTABLE
Missing 1 key columns in update for table TERADATA_SCHEMA.MYTABLE
Below is a create table statement. I have altered the table and column names, but the structure is the same.
It does NOT have a primary key; it has a unique key. I am not allowed to add a primary key.
UNIQUE INDEX: UNIQUE_ID
When we test an update, GoldenGate expects MY_ID to be sent as well, and GoldenGate abends.
The DDL below includes the partitioning/subpartition, unique index, and supplemental logging command that golden gate runs.
I have also run the following 2 commands to turn on supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM SWITCH LOGFILE;
CREATE TABLE MYTABLE
(
"UNIQUE_ID" NUMBER(10,0) NOT NULL ENABLE,
"MY_ID" NUMBER(10,0),
"MYNUMBER" NUMBER(8,0),
"TOTALNUMBER" NUMBER(8,0),
"USED" NUMBER(8,0),
"LOTSUSED" NUMBER(8,0),
"LAST_UPDATE_USER" VARCHAR2(30 BYTE),
"LAST_UPDATE_DATE" DATE,
"MYDATESTAMP" DATE,
"MYTYPE" NUMBER(2,0) NOT NULL ENABLE,
"MYTHING" CHAR(1 BYTE) NOT NULL ENABLE
)
PARTITION BY RANGE ("MYTYPE")
SUBPARTITION BY LIST ("MYTHING")
SUBPARTITION TEMPLATE
(
SUBPARTITION "MYTHING_X" VALUES ('X'),
SUBPARTITION "MYTHING_Z" VALUES ('Z')
)
(
PARTITION "MYTHING1" VALUES LESS THAN (2),
PARTITION "MYTHING2" VALUES LESS THAN (3),
PARTITION "MYTHING3" VALUES LESS THAN (4),
PARTITION "MYTHING4" VALUES LESS THAN (5),
PARTITION "MYTHING5" VALUES LESS THAN (6),
PARTITION "MYTHING6" VALUES LESS THAN (7),
PARTITION "MYTHING7" VALUES LESS THAN (8),
PARTITION "MYTHING8" VALUES LESS THAN (9),
PARTITION "MYTHING_OTHER" VALUES LESS THAN (MAXVALUE)
);
ALTER TABLE MYTABLE ADD SUPPLEMENTAL LOG GROUP "MYGROUP_555" ("UNIQUE_ID") ALWAYS;
CREATE UNIQUE INDEX MY_IND ON MYTABLE ("UNIQUE_ID");
Edited by: Guess2 on Nov 3, 2011 12:57 PM
Edited by: Guess2 on Nov 3, 2011 1:21 PM
GoldenGate expects a primary key, a unique key, or a list of key columns.
The addition of supplemental logging for the table can be done via SQL, but typically, it is done via the GGSCI interface:
GGSCI 4> dblogin userid <your DB GoldenGate user>, password <your password>
GGSCI 5> add trandata schema_owner.table_name
How Oracle GoldenGate determines the kind of row identifier to use
Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate selects a
row identifier to use in the following order of priority:
1. Primary key
2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based
columns, and no nullable columns
3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based
columns, but can include nullable columns
4. If none of the preceding key types exist (even though there might be other types of keys
defined on the table) Oracle GoldenGate constructs a pseudo key of all columns that
the database allows to be used in a unique key, excluding virtual columns, UDTs,
function-based columns, and any columns that are explicitly excluded from the Oracle
GoldenGate configuration.
NOTE If there are other, non-usable keys on a table or if there are no keys at all on the
table, Oracle GoldenGate logs an appropriate message to the report file.
Constructing a key from all of the columns impedes the performance of Oracle
GoldenGate on the source system. On the target, this key causes Replicat to use
a larger, less efficient WHERE clause.
How to specify your own key for Oracle GoldenGate to use
If a table does not have one of the preceding types of row identifiers, or if you prefer those
identifiers not to be used, you can define a substitute key if the table has columns that
always contain unique values. You define this substitute key by including a KEYCOLS clause
within the Extract TABLE parameter and the Replicat MAP parameter. The specified key will
override any existing primary or unique key that Oracle GoldenGate finds.
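Applied to the table in this thread, a KEYCOLS override would look something like the following parameter-file fragments (the source schema name is a placeholder):

```
-- Extract parameter file
TABLE ORACLE_SCHEMA.MYTABLE, KEYCOLS (UNIQUE_ID);

-- Replicat parameter file
MAP ORACLE_SCHEMA.MYTABLE, TARGET TERADATA_SCHEMA.MYTABLE, KEYCOLS (UNIQUE_ID);
```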
"I have altered table and column names. but the structure is the same."
What column name did you alter?
The source table and target table must either be identical, or there must be a source definitions file created on the source, copied over to the target, and referenced in the replicat.
I don't see why my_id would cause a problem (based on what you posted), unless the tables are different. -
How to select Subpartition name in a Select query?
Hi,
I have a table that is partitioned on a date range and subpartitioned based on an ID list. Let's assume the table name is something like MY_TABLE.
The partition name would look like: P_20110126160527
The subpartition list is as follows: GB, IN, AU, US etc. The subpartition name for GB would look like
P_20110126160527_GB
I need to run a select query to fetch data from MY_TABLE along with Sub partition name. The result set needs to look like:
Name|Location|SubPartition
Sam|UK|P_20110126160527_GB
Tom|UK|P_20110126160527_GB
Dave|AU|P_20110126160527_AU
The data available in ALL_TAB_SUBPARTITIONS and USER_TAB_SUBPARTITIONS can't be used, because the only join condition available is the table name; we would also have to join on the subpartition key. I am not sure how to achieve this.
Does anyone here have a clue?
In a pinch, you could do something like this.
select col1, col2, col3, 'PARTITION_1' from your_table where key_col in <values for partition_1>
union all
select col1, col2, col3, 'PARTITION_2' from your_table where key_col in <values for partition_2>
union all
select col1, col2, col3, 'PARTITION_3' from your_table where key_col in <values for partition_3>
union all
...
Or better yet:
select col1, col2, col3, case when key_col = 'x' then 'PARTITION_1'
when key_col = 'y' then 'PARTITION_2'
when key_col = 'z' then 'PARTITION_3'
end
from ...
Of course, none of these would be "dynamic".
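A more dynamic alternative (not mentioned in the original replies) is to derive the subpartition name from each row's rowid: DBMS_ROWID.ROWID_OBJECT returns the row's data object number, which for a composite-partitioned table is that of its subpartition, and ALL_OBJECTS maps that number to a name. Column names here are taken from the question:

```sql
select t.name,
       t.location,
       o.subobject_name as subpartition_name
from   MY_TABLE t
join   all_objects o
on     o.data_object_id = dbms_rowid.rowid_object(t.rowid)
where  o.object_name = 'MY_TABLE';
```

This stays correct as partitions are added or split, at the cost of a dictionary join.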