11.2.0.3 Parallel delete on non-partitioned table
Friends and mentors...
I want to know more about parallel delete and its requirements. I have gone through the Oracle manuals and articles but have not been able to understand the parallel delete (DML) feature exactly...
Task: Trying to delete large data (20 mil rows out of 60 mil) from non-partitioned table
Job frequency: Once every month
Oracle: 11.2.0.3
OS: Linux
Questions:
1. Any ideas on the best approach?
2. Do I need to have the table partitioned to use the /*+ parallel */ hint?
3. If I use the /*+ parallel */ hint in the delete statement, do I also need to run "alter session enable parallel dml"?
4. How do I decide the degree of parallelism (DOP)? Is it good to use AUTO for the DOP?
Currently I am planning to use the parallel hint in the delete statement. Is this enough, or do I need a better plan?
thanks..
khallas301 wrote:
It appears that you believe that parallel is always faster than non-parallel; which is not true in every case.
The slowest part of any DELETE is the physical I/O.
How many parallel processes can access the same table before the disk gets saturated?
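For reference, the basic pattern under discussion can be sketched as follows. The table name and predicate are hypothetical, and the DOP of 8 is chosen only for illustration; note that the hint alone parallelizes only the read side.

```sql
-- Parallel DML is disabled by default and must be enabled per session;
-- without this, the hint parallelizes only the query (scan) portion.
ALTER SESSION ENABLE PARALLEL DML;

-- Hypothetical table and predicate.
DELETE /*+ PARALLEL(t, 8) */ FROM big_table t
 WHERE created_date < ADD_MONTHS(SYSDATE, -12);

-- After a parallel DML statement the session must commit (or roll back)
-- before it can touch the modified table again (ORA-12838 otherwise).
COMMIT;
```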
Similar Messages
-
Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column
Hi,
Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
INSERT AS SELECT
CREATE TABLE AS SELECT
DELETE
UPDATE
MERGE (conditional UPDATE and INSERT)
Multi-table INSERT
So I created and populated a simple table with a BLOB column:
SQL> CREATE TABLE T1 (A BLOB);
Table created.
Then, I tried to see the execution plan of a parallel DELETE:
SQL> EXPLAIN PLAN FOR
2 delete /*+parallel (t1,8) */ from t1;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3718066193
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | DELETE STATEMENT | | 2048 | 2 (0)| 00:00:01 | | | |
| 1 | DELETE | T1 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 2048 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 2048 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T1 | 2048 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
PLAN_TABLE_OUTPUT
Note
- dynamic sampling used for this statement (level=2)
And I finished by executing the statement.
SQL> commit;
Commit complete.
SQL> alter session enable parallel dml;
Session altered.
SQL> delete /*+parallel (t1,8) */ from t1;
2048 rows deleted.
As we can see, the statement has been run in parallel:
SQL> select * from v$pq_sesstat;
STATISTIC LAST_QUERY SESSION_TOTAL
Queries Parallelized 1 1
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 1 1
Server Threads 5 0
Allocation Height 5 0
Allocation Width 1 0
Local Msgs Sent 55 55
Distr Msgs Sent 0 0
Local Msgs Recv'd 55 55
Distr Msgs Recv'd 0 0
11 rows selected.
Is this normal? It is not supposed to be supported on Oracle 11g on a non-partitioned table containing a LOB column...
Thank you for your help.
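One way to check whether the DELETE itself, and not just the scan, ran in parallel is to query v$pq_sesstat right after the statement; in the output above, "DML Parallelized" stayed at 0, which suggests only the query side used parallel servers. A sketch, assuming the same session:

```sql
DELETE /*+ PARALLEL(t1, 8) */ FROM t1;

-- "DML Parallelized" counts the DML side; "Queries Parallelized"
-- counts the read side. A parallel scan feeding a serial delete
-- shows 0 and 1 respectively.
SELECT statistic, last_query
  FROM v$pq_sesstat
 WHERE statistic IN ('DML Parallelized', 'Queries Parallelized');
```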
Michael
Yes, I did it. I tried with force parallel dml, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
SQL> explain plan for delete from t1;
Explained.
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | DELETE STATEMENT | | 4 | 2 (0)| 00:00:01 | | | |
| 1 | DELETE | T1 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 4 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 4 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T1 | 4 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
The DELETE is not performed in Parallel.
I tried with another statement :
SQL> explain plan for
2 insert into t1 select * from t1;
Here are the results:
11g
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | 4 | 8008 | 2 (0)| 00:00:01 | | | |
| 1 | LOAD TABLE CONVENTIONAL | T1 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL | T1 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
12c
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | 4 | 8008 | 2 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT | T1 | | | | | Q1,00 | PCWP | |
| 4 | OPTIMIZER STATISTICS GATHERING | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
It seems that the DELETE statement has problems, but not the INSERT AS SELECT! -
How to find out the non-partitioned tables using > 2GB on Oracle
Hi team
how to find out the non-partitioned tables using > 2GB on Oracle, where the owner is not SYS or SYSTEM
regards
here's one I made earlier
set pagesize 999
set linesize 132
col owner format a25
col segment_name format a60
select owner, segment_name, segment_type, (bytes/1024/1024) "MB size"
from dba_segments
where owner not in ('SYS','SYSTEM','XDB','MDSYS','SYSMAN') -- edit for taste
and segment_type = 'TABLE'
and (bytes/1024/1024) > 2000
order by 4 asc -
Partition an Non Partition Table in 11.2.0.1
Hi Friends,
I am using Oracle 11.2.0.1 Oracle Database.
I have a table with 10 million records and it is a non-partitioned table.
1) I would like to partition the table (with partition by range) without creating a new table; I should do it on the existing table itself (not sure if DBMS_REDEFINITION is the only option), or can I use ALTER TABLE ...?
2) Add one partition which will have data for the unspecified range.
Please let me know the inputs on the above
Regards,
DB
Hi,
what is the advantage of using DBMS_REDEFINITION over the normal method (create the partitioned table, grant access, insert the records)?
You can't just add a partition to a non-partitioned table. You need to recreate the existing table to have it partitioned (you can't simply start adding new partitions to an existing non-partitioned table). The advantage of dbms_redefinition is that it is an online operation to re-create an existing table, and your data remains available during the table recreation.
I would like to know how to copy the object privileges, constraints, and indexes from the non-partitioned table (sales) to the partitioned table (sales_part) which I am creating. Will DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS help with this?
First you need to tell us what method you are using to partition the existing table. If you are using dbms_redefinition, you really don't need to worry about triggers, indexes or constraints at all. Just follow any document which explains how to use dbms_redefinition. Dr. Tim has done a lot of work for dummies like us by writing documents for us. Follow this document.
http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
If so, can I use DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS alone for copying the table dependents after I create the partitioned table, or should it be used along with DBMS_REDEFINITION.START_REDEF_TABLE only?
See the above document which I mentioned.
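For reference, the basic DBMS_REDEFINITION flow discussed here can be sketched as follows. The schema name is a placeholder, the table names follow this thread, and error handling (ABORT_REDEF_TABLE on failure) is omitted for brevity:

```sql
-- Check the table can be redefined (by primary key, the default).
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'SALES');

-- Start redefining SALES into the pre-created partitioned SALES_PART.
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'SALES', 'SALES_PART');

-- Copy grants, constraints, triggers and indexes onto the interim table.
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname      => 'SCOTT',
    orig_table => 'SALES',
    int_table  => 'SALES_PART',
    num_errors => l_errors);
END;
/

-- Apply changes made during the copy, then swap the two tables.
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'SALES', 'SALES_PART');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'SALES', 'SALES_PART');
```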
Salman -
Convert non-partition table to partition table
Hello everybody
I just want to ask how to convert a non-partitioned table to a partitioned table?
Thanks
Ramez S. Sawires
Dear ARF
First of all, thank you for replying to me. Second, do you have any links talking about the dbms_redefinition package?
I am using Database Oracle 10g
Thanks
Ramez S. Sawires
Message was edited by:
Ramez S. Sawires -
Move data from Non Partitioned Table to Partitioned Table
Hi Friends,
I am using Oracle 11.2.0.1 DB
Please let me know how I can copy/move the data from a non-partitioned Oracle table to the newly created partitioned table.
Regards,
DB
839396 wrote:
Hi All,
Created the partitioned table but unable to copy the data from the non-partitioned table:
SQL> select * from sales;
SNO YEAR NAME
1 01-JAN-11 jan2011
1 01-FEB-11 feb2011
1 01-JAN-12 jan2012
1 01-FEB-12 feb2012
1 01-JAN-13 jan2013
1 01-FEB-13 feb2013
Into which partition should the row immediately above ("01-FEB-13") be deposited?
[oracle@localhost ~]$ oerr ora 14400
14400, 00000, "inserted partition key does not map to any partition"
// *Cause: An attempt was made to insert a record into, a Range or Composite
// Range object, with a concatenated partition key that is beyond
// the concatenated partition bound list of the last partition -OR-
// An attempt was made to insert a record into a List object with
// a partition key that did not match the literal values specified
// for any of the partitions.
// *Action: Do not insert the key. Or, add a partition capable of accepting
// the key, Or add values matching the key to a partition specification
6 rows selected.
SQL>
SQL> create table sales_part(sno number(3),year date,name varchar2(10))
2 partition by range(year)
3 (
4 partition p11 values less than (TO_DATE('01/JAN/2012','DD/MON/YYYY')),
5 partition p12 values less than (TO_DATE('01/JAN/2013','DD/MON/YYYY'))
6 );
Table created.
SQL> SELECT table_name,partition_name, num_rows FROM user_tab_partitions;
TABLE_NAME PARTITION_NAME NUM_ROWS
SALES_PART P11
SALES_PART P12
UNPAR_TABLE UNPAR_TABLE_12 776000
UNPAR_TABLE UNPAR_TABLE_15 5000
UNPAR_TABLE UNPAR_TABLE_MX 220000
SQL>
SQL> insert into sales_part select * from sales;
insert into sales_part select * from sales
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition
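The ORA-14400 above occurs because the 2013 rows fall beyond the last range boundary of sales_part. A minimal fix, sketched here, is to add a catch-all partition before loading:

```sql
-- Add a partition that accepts any key beyond the last range boundary.
ALTER TABLE sales_part ADD PARTITION pmax VALUES LESS THAN (MAXVALUE);

-- The insert from this thread can now place all six rows.
INSERT INTO sales_part SELECT * FROM sales;
COMMIT;
```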
Regards,
DB -
Create Partitioning to non-partitioning Table
Dear All,
I have a table which is non-partitioned and it has about 20GB of data. If I want to partition this table with, say, column DATE_M, can anyone please suggest the best way to do this?
Thanks
So now in the partitioned table he creates one maxvalue partition and does exchange partition
That isn't the typical scenario. Typically you make the switch by using partitions for the NEW data and leave the existing data in the base range partition.
1. Existing app uses an unpartitioned table
2. New table is partitioned for NEW DATA
Assume you want monthly partitions (daily works the same). This is already April so there is already some April data.
So create the partitioned table so the base partition clause includes ALL of the data for April and before:
create table ipart
(time_id date
,cust_id number(4)
,amount_sold number(5))
partition by range(time_id)
interval(NUMTOYMINTERVAL(1,'month'))
(partition old_data values less than (to_date('01-may-2015','DD-MON-YYYY')));
Now you do the exchange with the unpartitioned table and all the current data goes into that 'OLD_DATA' partition.
New data for May and the future will have partitions created automatically.
That approach lets you ease into partitioning without disrupting your current processes at all.
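The exchange step mentioned above might look like this. The unpartitioned table name is a placeholder, and WITHOUT VALIDATION assumes you already know all the rows fit the partition's range:

```sql
-- Swap the segment of the existing unpartitioned table into the
-- OLD_DATA base partition; this is a dictionary operation, not a copy.
ALTER TABLE ipart
  EXCHANGE PARTITION old_data WITH TABLE my_unpartitioned_table
  INCLUDING INDEXES WITHOUT VALIDATION;
```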
As time goes by, more and more of the data will be in the new monthly partitions. If you need to, you can split that base partition:
insert into ipart (time_id) values (sysdate - 90);
insert into ipart (time_id) values (sysdate - 60);
insert into ipart (time_id) values (sysdate - 30);
insert into ipart (time_id) values (sysdate);
commit;
alter table ipart split partition old_data
at (to_date('01-jan-2015', 'DD-MON-YYYY')) into
(partition old_data, partition JAN_FEB_MAR_APR); -
Importing partitioned table data into non-partitioned table
Hi Friends,
SOURCE SERVER
OS:Linux
Database Version:10.2.0.2.0
i have exported one partition of my partitioned table like below..
expdp system/manager DIRECTORY=DIR4 DUMPFILE=mapping.dmp LOGFILE=mapping_exp.log TABLES=MAPPING.MAPPING:DATASET_NAP
TARGET SERVER
OS:Linux
Database Version:10.2.0.4.0
Now when i am importing into another server i am getting below error
Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 17 January, 2012 11:22:32
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MAPPING"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MAPPING"."SYS_IMPORT_FULL_01": MAPPING/******** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE failed to create with error:
ORA-00959: tablespace 'MAPPING_ABC' does not exist
Failing sql is:
CREATE TABLE "MAPPING"."MAPPING" ("SAP_ID" NUMBER(38,0) NOT NULL ENABLE, "TG_ID" NUMBER(38,0) NOT NULL ENABLE, "TT_ID" NUMBER(38,0) NOT NULL ENABLE, "PARENT_CT_ID" NUMBER(38,0), "MAPPINGTIME" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE, "CLASS" NUMBER(38,0) NOT NULL ENABLE, "TYPE" NUMBER(38,0) NOT NULL ENABLE, "ID" NUMBER(38,0) NOT NULL ENABLE, "UREID"
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_TG_ID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."PK_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_UREID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_V2" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_PARENT_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."CKC_SMAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."PK_MAPPING_ITM" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
Processing object type TABLE_EXPORT/TABLE/COMMENT
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TG" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
ORA-39112: Dependent object type INDEX:"MAPPING"."X_PART" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_TIME_T" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_DAY" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_BTMP" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2_T" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Job "MAPPING"."SYS_IMPORT_FULL_01" completed with 52 error(s) at 11:22:39
Please help..!!
Regards
Umesh Gupta
Yes, I have tried that option as well.
But when I write one tablespace name in the REMAP_TABLESPACE clause, it gives an error for the second one, and if I include the 1st and 2nd tablespaces it gives an error for the 3rd one.
One option I know of is to write all the tablespace names in REMAP_TABLESPACE, but that is a lengthy process. Is there any other way possible?
Regards
Umesh
AFAIK the option you have is the one I recommend, though it is lengthy :-(
Wait for some EXPERT and GURU's review on this issue .........
Good luck ....
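One way to avoid typing every REMAP_TABLESPACE clause by hand is to generate them from the source database's dictionary before the import. A hedged sketch (the owner name follows this thread; the USERS target is only an example):

```sql
-- Emit one impdp REMAP_TABLESPACE clause per distinct source tablespace,
-- all mapped to a single target tablespace.
SELECT DISTINCT 'REMAP_TABLESPACE=' || tablespace_name || ':USERS' AS clause
  FROM dba_segments
 WHERE owner = 'MAPPING';
```

The resulting lines can then be pasted into the impdp command line or a parfile.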
--neeraj -
Dbms_redefinition used to convert a non partitioned table to partitioned
A table is created with below DDL
CREATE TABLE TEST1("EQUIPMENT_DIM_ID" NUMBER(9,0) NOT NULL ENABLE,
"CARD_DIM_ID" NUMBER(9,0),
"NH21_DIM_ID" NUMBER(5,0) NOT NULL ENABLE);
Interim table created with
CREATE TABLE INTERIM("EQUIPMENT_DIM_ID" NUMBER(9,0) NOT NULL ENABLE,
"CARD_DIM_ID" NUMBER(9,0),
"NH21_DIM_ID" NUMBER(5,0) NOT NULL ENABLE)
PARTITION BY RANGE ("EQUIPMENT_DIM_ID")
(PARTITION "P0" VALUES LESS THAN (1)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS ,
PARTITION "P1" VALUES LESS THAN (2)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS );
Performed dbms_Redefinition (start,sync and finish) to get the table test1 data from nonpartitioned to partitioned one. At the end of table conversion, dbms_metadata.get_ddl shows like below
CREATE TABLE TEST("EQUIPMENT_DIM_ID" NUMBER(9,0) CONSTRAINT "SYS_C005605" NOT NULL ENABLE,
"CARD_DIM_ID" NUMBER(9,0),
"NH21_DIM_ID" NUMBER(5,0) CONSTRAINT "SYS_C005601" NOT NULL ENABLE)
PARTITION BY RANGE ("EQUIPMENT_DIM_ID")
(PARTITION "P0" VALUES LESS THAN (1)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS ,
PARTITION "P1" VALUES LESS THAN (2)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS );
Can you help me how to hide or remove the CONSTRAINT "SYS_C005605" section showing in the DDL for the table? The reason being, if I take this DDL definition and load it in another database, it can give an error in case a constraint with the same name already exists.
Many thanks
Regards
Manoj Thakkan
Oracle DBA
Bangalore
Create your NOT NULL check constraints using ALTER TABLE and name them as you would a primary or foreign key, with a name that makes sense.
I personally have a strong dislike for system generated naming and would be thrilled if this lazy practice of defining columns as NOT NULL during table creation went away. -
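A sketch of the naming approach suggested above, with example constraint names: declare the columns nullable in the CREATE TABLE, then add named NOT NULL constraints with ALTER TABLE so DBMS_METADATA emits your names instead of generated SYS_C ones:

```sql
CREATE TABLE TEST1 (
  EQUIPMENT_DIM_ID NUMBER(9,0),
  CARD_DIM_ID      NUMBER(9,0),
  NH21_DIM_ID      NUMBER(5,0)
);

-- Named NOT NULL constraints survive DDL extraction under these names.
ALTER TABLE TEST1 MODIFY EQUIPMENT_DIM_ID
  CONSTRAINT TEST1_EQUIP_DIM_NN NOT NULL;
ALTER TABLE TEST1 MODIFY NH21_DIM_ID
  CONSTRAINT TEST1_NH21_DIM_NN NOT NULL;
```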
Gathering statistics on partitioned and non-partitioned tables
Hi all,
My DB is 11.1
I find that gathering statistics on partitioned tables is really slow.
TABLE_NAME NUM_ROWS BLOCKS SAMPLE_SIZE LAST_ANALYZED PARTITIONED COMPRESSION
O_FCT_BP1 112123170 843140 11212317 8/30/2011 3:5 NO DISABLED
LEON_123456 112096060 521984 11209606 8/30/2011 4:2 NO ENABLED
O_FCT 115170000 486556 115170 8/29/2011 6:3 YES
SQL> SELECT COUNT(*) FROM user_tab_subpartitions
2 WHERE table_name = 'O_FCT'
3 ;
COUNT(*)
112
I used the following script:
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname => user,
tabname => 'O_FCT',
method_opt => 'for all columns size auto',
degree => 4,
estimate_percent => 10,
granularity => 'ALL',
cascade => false);
END;
/
It takes 2 minutes each for the first two tables to gather statistics, but more than 10 minutes for the partitioned table.
The time of collecting statistics accounts for a large part of total batch time.
And most jobs of the batch are full loads, in which case all partitions and subpartitions are affected and we can't gather statistics on only specified partitions.
Does anyone have some experiences on this subject? Thank you very much.
Best regards,
Leon
Edited by: user12064076 on Aug 30, 2011 1:45 AM
Hi Leon
Why don't you gather stats at the partition level? If your partition data is not going to change after a day (a date-range partition, for example), you can simply do it at the partition level:
GRANULARITY=>'PARTITION' for partition level and
GRANULARITY=>'SUBPARTITION' for subpartition level
You are gathering global stats every time, which you may not require.
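The partition-level gathering suggested above might look like this. The table name follows this thread, but the partition name is hypothetical:

```sql
-- Gather statistics for a single partition only, instead of
-- GRANULARITY => 'ALL' over the whole table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'O_FCT',
    partname         => 'P_20110830',  -- hypothetical partition name
    granularity      => 'PARTITION',
    estimate_percent => 10,
    degree           => 4,
    cascade          => FALSE);
END;
/
```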
Edited by: user12035575 on 30-Aug-2011 01:50 -
Move non-partition to Partition with order by cluse
Hi,
I have a non-partitioned table of around 9GB and want to convert it into a partitioned table. Following is the description:
SQL> select num_rows,blocks from dba_tables where table_name='DEMAND_DRAFT_STATUS';
NUM_ROWS BLOCKS
21720123 1228647
SQL> select index_name,index_type from dba_indexes where table_name='DEMAND_DRAFT_STATUS';
INDEX_NAME INDEX_TYPE
SYS_C0011138 NORMAL
IDX_DEMD_DRFT_STAT_INSERTED_BY BITMAP
IDX_DEMD_DRFT_STAT_UPDATED_BY BITMAP
SQL> select clustering_factor from dba_indexes where index_name='SYS_C0011138';
CLUSTERING_FACTOR
387978241
SQL> select column_name,column_position from dba_ind_columns where index_name='SYS_C0011138';
COLUMN_NAME COLUMN_POSITION
ISSU_BR_CODE 1
ISSU_BANK_CODE 2
ISSU_EXTN_CNTR_CODE 3
DD_NUM 4
ISSUE_DATE 5
DD_CRNCY_CODE 6
OT_TYPE 7
PRODUCT_CODE 8
CURRENCY_CODE 9
9 rows selected.
SQL> desc DEMAND_DRAFT_STATUS
Name Null? Type
ISSU_BR_CODE NOT NULL VARCHAR2(6)
ISSU_BANK_CODE NOT NULL VARCHAR2(6)
ISSU_EXTN_CNTR_CODE NOT NULL VARCHAR2(2)
DD_NUM NOT NULL VARCHAR2(16)
ISSUE_DATE NOT NULL DATE
DD_CRNCY_CODE NOT NULL VARCHAR2(3)
OT_TYPE NOT NULL VARCHAR2(2)
PRODUCT_CODE NOT NULL VARCHAR2(5)
CURRENCY_CODE NOT NULL VARCHAR2(3)
DD_STATUS CHAR(1)
DD_STATUS_DATE DATE
DD_AMT NUMBER(20,4)
DD_REVAL_DATE DATE
PRNT_ADVC_FLG CHAR(1)
PRNT_RMKS VARCHAR2(50)
PAYEE_BR_CODE VARCHAR2(6)
PAYEE_BANK_CODE VARCHAR2(6)
PAYING_BR_CODE VARCHAR2(6)
PAYING_BANK_CODE VARCHAR2(6)
ROUTING_BR_CODE VARCHAR2(6)
ROUTING_BANK_CODE VARCHAR2(6)
INSTRMNT_TYPE VARCHAR2(6)
INSTRMNT_ALPHA VARCHAR2(6)
INSTRMNT_NUM VARCHAR2(16)
PUR_NAME VARCHAR2(80)
PAYEE_NAME VARCHAR2(80)
PRNT_OPTN CHAR(1)
PRNT_FLG CHAR(1)
PRNT_CNT NUMBER(3)
DUP_ISS_CNT NUMBER(3)
DUP_ISS_DATE DATE
RECTIFED_CNT NUMBER(3)
PAID_EX_ADVC CHAR(1)
ADVC_RCV_DATE DATE
BC_FLG CHAR(1)
ENTERED_BY CHAR(1)
CAUTIONED_STAT CHAR(1)
CAUTIONED_REASON VARCHAR2(50)
PAID_ADVC_FLG CHAR(1)
INVT_SRL_NUM VARCHAR2(16)
PRNT_REM_CNT NUMBER(2)
INSERTED_BY NUMBER(10)
UPDATED_BY NUMBER(10)
INSERTED_ON DATE
UPDATED_ON DATE
DEL_FLG CHAR(1)
LCHG_TIME DATE
RCRE_TIME DATE
BUSINESS_DATE DATE
Following questions:
1) I want to range partition by ISSUE_DATE (there is no issue), but I'm thinking of reordering the columns of the index 'SYS_C0011138' (which is also the primary key). For example, I would place low-cardinality columns first and the most selective column last in the order. Is that a good idea?
2) While creating the partitioned table I want to run: insert /*+ parallel 10 */ into partitiontable select * from DEMAND_DRAFT_STATUS order by CURRENCY_CODE,DD_CRNCY_CODE,PRODUCT_CODE,OT_TYPE,ISSU_BANK_CODE,ISSU_EXTN_CNTR_CODE,ISSUE_DATE,ISSU_BR_CODE,DD_NUM;
I'm doing this because the index is going to be created on the CURRENCY_CODE,DD_CRNCY_CODE,PRODUCT_CODE,OT_TYPE,ISSU_BANK_CODE,ISSU_EXTN_CNTR_CODE,ISSUE_DATE,ISSU_BR_CODE,DD_NUM columns, so the clustering factor of that index would be good (this is the primary key index, by the way). Is it good?
3) Once partitioning is done I want to compress the old partitions, but I'm a bit worried that I won't get a good enough compression ratio if I create the partitioned table with the ORDER BY clause. What are your thoughts?
Please give recommendation
Regards
rp0428 wrote:
>
whats your thought?
>
Your post raises nothing but questions about WHY you are doing any of this. We can't suggest appropriate solutions without knowing what the problems are or what goals you are trying to achieve.
>
I have non partition table of around 9GB and want to convert it into Partiiton table
>
Ok - but WHY? Are you currently having problems? What are they? Is performance poor? For what type of queries or operations? Is management a problem - can't easily roll-off old data?
>
Well, it's a management decision to partition the tables which are expected to grow large in future. The reference of 6GB I gave was only for explanation; actually we have 2-3 tables which are 500+GB and will grow.
It's a DWH environment where a daily incremental load happens. So we came to the decision to have the tables (which have daily incremental loads) partitioned on a monthly basis, and to keep the last quarter's partitions on Cell Flash Cache (it's Exadata).
CLUSTERING_FACTOR
387978241
>
Yep - that is a high factor for 1.2 million blocks. But are you sure it is correct? It's 17 times greater than the number of rows in the table. That suggests that to read the entire table using the index every block has to be read 17 times. That doesn't make sense - odds are the table and index stats you provided are not current.
Yes, that seems to be the correct figure. I'll check once again and post.
>
And it has this quote from Tom Kyte's book Expert Oracle Database Architecture
>
we could also view the clustering factor as a number that represents the number of logical i/o’s against the table, that would be performed to read the entire table via the index.
>
Even assuming that index has a high clustering factor the next question is:
So what? That would only be relevant if you are using that index to access the entire table or large numbers of blocks.
Is there a problem you are having that you can identify the clustering factor as the main culprit?
If it ain't broke, don't fix it.
>
Yes, that's correct, but the clustering factor worries me a lot after seeing its value, which is very high compared to the number of blocks.
1) I want to Range partition by ISSUE_DATE (there is no issue), but i'm thinking of reordering the column of index i.e 'SYS_C0011138' (which is also primary key). For example low cardinality columns i will place first and high selective column i would place at last in order. Is that a good idea?
>
Range partitioning by a date can be useful so lets just assume it is in your case; maybe you are thinking about dropping old data. Is your primary key going to be global or local? That will affect performance if you drop old partitions; the global index has to be maintained but the local one doesn't.
I'm not going to drop any partitions of old data as of now, but I will compress them because they won't receive DML (though they might be selected from).
Now the question about the primary key: will it be local or global? Well, I thought of having this key as local (because one of my primary key columns is the partition key).
Reasons
1) Better manageability (e.g. if I want to compress old partitions, a global primary key index would become unusable)
2) Might improve select performance (assumption)
But will having it as local impact performance in any way?
The primary key does include the partitioning column so it could be a local index. But the partitioning column is not the leading column so this would be a local nonprefixed index. Is that what you need or do you need prefixed index?
Have you researched the difference and determined which one you need for your use case?
No. My partition key would not be the leading column of the primary key. But is there any difference if I don't keep the partition key as the leading column of the primary key?
See Local Partitioned Indexes in the VLDB and Partitioning Guide
http://docs.oracle.com/cd/B28359_01/server.111/b32024/partition.htm
Is rearranging the key columns a good idea? Hard to say without knowing your usage pattern.
Why are you reordering the primary key columns? Why are you putting low cardinality columns first?
Well, I have read quite a few times that having low-cardinality columns as the leading columns improves performance, and that we can later get better compression ratios. Let me know if I'm wrong.
You shouldn't be doing either of these without a solid reason. What are the reasons? How do you know that, for your usage pattern, you won't make things worse?
>
2) While creating the partitioned table I want to run: insert /*+ parallel(10) */ into partitiontable select * from DEMAND_DRAFT_STATUS order by CURRENCY_CODE, DD_CRNCY_CODE, PRODUCT_CODE, OT_TYPE, ISSU_BANK_CODE, ISSU_EXTN_CNTR_CODE, ISSUE_DATE, ISSU_BR_CODE, DD_NUM;
Why am I doing this? Because the index will be created on the CURRENCY_CODE, DD_CRNCY_CODE, PRODUCT_CODE, OT_TYPE, ISSU_BANK_CODE, ISSU_EXTN_CNTR_CODE, ISSUE_DATE, ISSU_BR_CODE, DD_NUM columns (this is the primary key index, by the way), so the clustering factor of that index would be good. Is that a good idea?
>
That should give a good initial clustering factor for the index unless a lot of DML is done.
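For reference, a parallel direct-path load along those lines needs parallel DML enabled at the session level, and the hint needs parentheses; a hedged sketch (table and column names taken from the post above):

```sql
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ append parallel(t, 10) */ INTO partitiontable t
SELECT /*+ parallel(s, 10) */ *
FROM   demand_draft_status s
ORDER  BY currency_code, dd_crncy_code, product_code, ot_type,
          issu_bank_code, issu_extn_cntr_code, issue_date,
          issu_br_code, dd_num;

COMMIT;  -- direct-path inserted rows are not queryable until commit
```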
>
3) Once partitioning is done I want to compress the old partitions, but I'm a bit worried that I won't get a good enough compression ratio if I create the partitioned table with an ORDER BY clause. What are your thoughts?
>
You are probably correct, since having the low-cardinality data first will break things up more. But you are still trying to solve a problem that you don't even know exists. How much space will each partition take if uncompressed? How much if compressed?
At a minimum you should perform a test for a realistic subset of data (a sample issue date).
1. Create an unordered uncompressed table (new tablespace to make size data more accurate).
2. Create an unordered compressed table
3. Create an ordered uncompressed table
4. Create an ordered compressed table
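The four test tables above could be sketched like this (a minimal outline; SOURCE_TABLE, the sample predicate, and the tablespace name are placeholders):

```sql
-- 1. Unordered, uncompressed
CREATE TABLE t_plain TABLESPACE test_ts AS
  SELECT * FROM source_table WHERE issue_date = DATE '2013-01-01';

-- 2. Unordered, compressed
CREATE TABLE t_comp COMPRESS TABLESPACE test_ts AS
  SELECT * FROM source_table WHERE issue_date = DATE '2013-01-01';

-- 3. Ordered, uncompressed
CREATE TABLE t_ord TABLESPACE test_ts AS
  SELECT * FROM source_table WHERE issue_date = DATE '2013-01-01'
  ORDER BY currency_code, dd_crncy_code, product_code;

-- 4. Ordered, compressed
CREATE TABLE t_ord_comp COMPRESS TABLESPACE test_ts AS
  SELECT * FROM source_table WHERE issue_date = DATE '2013-01-01'
  ORDER BY currency_code, dd_crncy_code, product_code;

-- Compare segment sizes
SELECT segment_name, ROUND(bytes/1024/1024) AS mb
FROM   user_segments
WHERE  segment_name IN ('T_PLAIN','T_COMP','T_ORD','T_ORD_COMP');
```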
Yes, I tried to simulate this for a 9 GB table: the unordered compressed table came out to 1.2 GB and the ordered compressed table to 1.9 GB (out of 9 GB uncompressed).
Then you will have a good idea what the space savings might be.
I suggest you first document the issues and problems you are having and list specific goals for any architecture changes.
For partitioning I would focus on identifying the goals, the partition key(s), and the number and type (global or local) of the primary key and other indexes.
The application team has already identified the partition keys on the tables that are supposed to be partitioned, and they say most of the queries are on those date columns. Secondly, as I explained, once partitioned we will keep old partitions compressed and new partitions in Cell Flash Cache.
For the primary key index (that includes the partitioning key) determine if you need a prefixed or nonprefixed index.
Again, what is the difference if the partition key column is not the leading column of the index?
Based on your usage patterns determine the indexes and columns that are the most important for querying. The clustering factor might be ideal for the primary key but totally screwed up for the indexes and queries that really count. You can't get an ideal clustering factor for every index.
The data order for the initial load can be in the order that gives you the best clustering factor for the main index you need it for. That assumes that you actually need the index for a purpose that the clustering factor can help with.
Thanks a lot, your explanation was helpful.
This table setup (a high number of columns in the primary key, for example 6-7 out of roughly 15 columns) is common for almost all of the tables that are going to be partitioned.
Most of the primary key columns have only 2-3 distinct values out of 1 million rows, so I was thinking of putting them first when creating the new (local) primary key on the newly created partitioned table.
I also plan to load the data with an ORDER BY clause (ordering by all columns of the PK) to get an excellent clustering factor; moreover, we could benefit from Storage Indexes on Exadata. Thanks for the inputs, highly appreciated -
How to move data from one DB (non-partitioned) to another DB (partitioned)
I have a huge database (2 TB) that stores data in non-partitioned tables. I want to create another database that stores the exact same data, but in partitioned tables. What is the most efficient way to do this? Good old export and import would work, but it would take ages.
Let me give you guys a full picture and see if you could help me.
The existing Unix server is running an Oracle 8i database (data warehouse) and I'm planning to create a new 10g database on a new Unix server, after which I will do parallel loading on both databases until I am ready to switch over to the new one. Most of the tables in the existing database are not partitioned, and I need them partitioned in the new database. My database is around 2.4 TB and resides on a SAN (4.4 TB). I am planning to keep only the 10g binary files on the new Unix server, so the new database itself would be on the SAN. Disk space availability is my most immediate concern, as I would also have to set aside 600 GB on the SAN for another server.
Any suggestions on how can I do this best? -
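One common pattern for converting a non-partitioned table without rewriting its data is partition exchange; whether it applies here depends on how the data arrives on the new server. A hedged sketch with hypothetical names:

```sql
-- Hypothetical: an empty partitioned target, plus an existing
-- non-partitioned table whose rows all fall inside one partition.
CREATE TABLE sales_part (
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2004 VALUES LESS THAN (DATE '2005-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- Swap the existing segment in: a metadata-only operation,
-- no data movement.
ALTER TABLE sales_part
  EXCHANGE PARTITION p2004 WITH TABLE sales_nonpart
  WITHOUT VALIDATION;
```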
Gathering Statistics on NON-PARTITIONED objects
Hi,
Is it possible to manually gather stats only for Non-Partitioned objects without touching any of the Partitioned objects?
Oracle Version: 11.1.0.7
Thanks,
Ishan@Hoek and Kuljeet:
I am sorry on missing out on this info.
What I mean is that I don't want to hard-code the NON-PARTITIONED table names or take names from USER_TABLES. Oracle should pick them up automatically, the way it does for PARTITIONED tables.
Thanks,
Ishan
Edited by: Ishan on Jun 7, 2012 6:01 PM -
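One way to avoid hard-coding table names is to drive DBMS_STATS from the data dictionary; a hedged sketch (the schema scope and gather options are assumptions you would tune to your site):

```sql
BEGIN
  -- Loop over only the non-partitioned tables in the current schema.
  FOR t IN (SELECT table_name
            FROM   user_tables
            WHERE  partitioned = 'NO') LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname => USER,
      tabname => t.table_name,
      cascade => TRUE);  -- also gather stats on the tables' indexes
  END LOOP;
END;
/
```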
Two billion record limit for non-partitioned HANA tables?
Is there a two billion record limit for non-partitioned HANA tables? I've seen discussion on SCN, but can't find any official SAP documentation.
Hi John,
Yes, there is a limit for non-partitioned tables in HANA. The first page of this document, SAP HANA Database – Partitioning and Distribution of Large Tables, says:
A non-partitioned table cannot store more than 2 billion rows. By using partitioning, this limit may be overcome by distributing the rows to several partitions. Please note that each partition must not contain more than 2 billion rows.
Cheers,
Sarhan. -
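For reference, partitioning a HANA column table so that each partition stays under the 2-billion-row cap looks roughly like this (a sketch with hypothetical names; check the document Sarhan references for the exact syntax in your revision):

```sql
-- Hash-partition a column table across 4 partitions, so each
-- partition carries its own 2-billion-row limit.
CREATE COLUMN TABLE big_facts (
  id     BIGINT,
  amount DECIMAL(15,2)
)
PARTITION BY HASH (id) PARTITIONS 4;
```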
Trigger on a paritioned table to insert into a non-paritioned table
Hi,
I have a partitioned table that will see a high degree of concurrent DML (updates). It has an INITRANS value of 16. On this table a trigger is created that inserts into a non-partitioned table on update of frequently updated columns. I am planning to keep the INITRANS and FREELISTS values at 16 so that sessions do not serialize waiting for block slots.
Is the above setup inefficient for performance? Would partitioning the table that the trigger inserts into improve performance?
Thanks,
Rajesh
I think if you want an efficient solution, I would look at not implementing your requirements with triggers. If possible, consider an API approach where whatever "application" is being used calls a PL/SQL package that updates both tables as necessary. There are a number of disadvantages to using triggers.
HTH!
Edited by: Centinul on Jan 2, 2009 11:48 PM
Check out this recent thread on triggers: Should one really avoid triggers???
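The API approach suggested above could be sketched as a small PL/SQL package that updates the partitioned table and writes the audit row in a single call, instead of relying on a trigger (all names here are hypothetical):

```sql
CREATE OR REPLACE PACKAGE order_api AS
  PROCEDURE set_status(p_order_id IN NUMBER,
                       p_status   IN VARCHAR2);
END order_api;
/

CREATE OR REPLACE PACKAGE BODY order_api AS
  PROCEDURE set_status(p_order_id IN NUMBER,
                       p_status   IN VARCHAR2) IS
  BEGIN
    -- Update the partitioned table directly...
    UPDATE orders_part
       SET status = p_status
     WHERE order_id = p_order_id;

    -- ...and record the change in the non-partitioned log table
    -- in the same transaction, with no trigger involved.
    INSERT INTO order_status_log (order_id, status, changed_on)
    VALUES (p_order_id, p_status, SYSDATE);
  END set_status;
END order_api;
/
```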