Query with non-partition key on a partitioned table
I compared a SELECT query with a non-partition-key condition on a table hash-partitioned into 3, 4 and 6 partitions.
The execution time for 3 partitions was faster than for 4 partitions, and 4 partitions was faster than 6.
I know the SELECT has to scan all the partitions, but I don't know what makes the execution time differ between 3, 4 and 6 partitions.
Does moving from one partition to the next take time?
Here are the SQL trace results for the 3-, 4- and 6-partition runs.
3 partitions
SELECT *
FROM
EQU_PARAM_MONITORINGHASH WHERE TO_CHAR(time_stamp,'mm/yy')=('02/09')
call cpu elapsed disk rows
Parse 0.00 0.00 0 0
Execute 0.00 0.00 0 0
Fetch 5.70 7.57 25291 1157583
total 5.70 7.57 25291 1157583
Parsing user id: 61 (SKENARIO1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 PARTITION HASH (ALL) PARTITION: START=1 STOP=3
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'EQU_PARAM_MONITORINGHASH' (TABLE) PARTITION: START=1 STOP=3
4 partitions
SELECT *
FROM
EQU_PARAM_MONITORINGHASH WHERE TO_CHAR(time_stamp,'mm/yy')=('02/09')
call cpu elapsed disk rows
Parse 0.00 0.00 0 0
Execute 0.00 0.00 0 0
Fetch 5.46 8.03 25126 1157583
total 5.46 8.03 25126 1157583
Parsing user id: 62 (SKENARIO2)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 PARTITION HASH (ALL) PARTITION: START=1 STOP=4
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'EQU_PARAM_MONITORINGHASH' (TABLE) PARTITION: START=1 STOP=4
6 partitions
SELECT *
FROM
EQU_PARAM_MONITORINGHASH WHERE TO_CHAR(time_stamp,'mm/yy')=('02/09')
call cpu elapsed disk rows
Parse 0.00 0.00 0 0
Execute 0.00 0.00 0 0
Fetch 5.73 9.13 25190 1157583
total 5.73 9.13 25190 1157583
Parsing user id: 63 (SKENARIO3)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 PARTITION HASH (ALL) PARTITION: START=1 STOP=6
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'EQU_PARAM_MONITORINGHASH' (TABLE) PARTITION: START=1 STOP=6
Thanks
Best regards
Eko
ekopur wrote:
I compared a SELECT with a non-partition-key condition on tables with 3, 4 and 6 hash partitions; what makes the execution times differ?
I'm assuming you recreated the table a couple of times with different numbers of hash partitions. (Tip: always use a power of two for the number of hash partitions - it keeps them all around the same size if you are using the feature on an appropriate data set.)
There isn't really enough difference in time within the database to make any sensible comment about the difference in times. I note that you have also edited out the fetch count for the 1.1 million rows fetched, and have not captured (or perhaps just not printed) the wait times, so we don't know where you spent the time inside and outside the database.
For all we can tell, the difference you are worried about might simply be network time on the fetch calls, and have nothing to do with the extract you've published.
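Jonathan's point about the missing wait detail suggests a follow-up; this is a hedged sketch of how those waits could be captured on the next run (event 10046 at level 8 and the tkprof waits option are standard Oracle tooling; the table name comes from the post):

```sql
-- Enable extended SQL trace for this session; level 8 includes wait events
ALTER SESSION SET events '10046 trace name context forever, level 8';

SELECT *
FROM   equ_param_monitoringhash
WHERE  TO_CHAR(time_stamp, 'mm/yy') = '02/09';

ALTER SESSION SET events '10046 trace name context off';

-- Then format the raw trace file, keeping the wait lines:
--   tkprof <tracefile>.trc report.txt sys=no waits=yes
```

With the waits in the report it becomes possible to tell whether the extra elapsed time is disk reads, CPU, or SQL*Net round-trips on the fetch calls.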
Regards
Jonathan Lewis
Similar Messages
-
Oracle 11.2 - Perform parallel DML on a non-partitioned table with a LOB column
Hi,
Since I wanted to demonstrate the new Oracle 12c enhancements to SecureFiles, I tried PDML statements on a non-partitioned table with a LOB column, in both the Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
INSERT AS SELECT
CREATE TABLE AS SELECT
DELETE
UPDATE
MERGE (conditional UPDATE and INSERT)
Multi-table INSERT
So I created and populated a simple table with a BLOB column:
SQL> CREATE TABLE T1 (A BLOB);
Table created.
Then, I tried to see the execution plan of a parallel DELETE:
SQL> EXPLAIN PLAN FOR
2 delete /*+parallel (t1,8) */ from t1;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3718066193
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | DELETE STATEMENT | | 2048 | 2 (0)| 00:00:01 | | | |
| 1 | DELETE | T1 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 2048 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 2048 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T1 | 2048 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
PLAN_TABLE_OUTPUT
Note
- dynamic sampling used for this statement (level=2)
And I finished by executing the statement.
SQL> commit;
Commit complete.
SQL> alter session enable parallel dml;
Session altered.
SQL> delete /*+parallel (t1,8) */ from t1;
2048 rows deleted.
As we can see, the statement has been run in parallel:
SQL> select * from v$pq_sesstat;
STATISTIC LAST_QUERY SESSION_TOTAL
Queries Parallelized 1 1
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 1 1
Server Threads 5 0
Allocation Height 5 0
Allocation Width 1 0
Local Msgs Sent 55 55
Distr Msgs Sent 0 0
Local Msgs Recv'd 55 55
Distr Msgs Recv'd 0 0
11 rows selected.
Is this normal? It is not supposed to be supported on Oracle 11g for a non-partitioned table containing a LOB column....
Thank you for your help.
Michael
Yes, I did. I tried with FORCE PARALLEL DML, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
SQL> explain plan for delete from t1;
Explained.
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | DELETE STATEMENT | | 4 | 2 (0)| 00:00:01 | | | |
| 1 | DELETE | T1 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 4 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 4 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T1 | 4 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
The DELETE is not performed in Parallel.
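A general way to read such plans (a standard diagnostic, not stated in the original post): the DML row source must sit below the PX COORDINATOR for the DML itself to run in parallel; a DELETE above the coordinator means only the scan is parallel.

```sql
-- Sketch: enable parallel DML, then check where the DELETE operator sits
ALTER SESSION ENABLE PARALLEL DML;

EXPLAIN PLAN FOR
  DELETE /*+ PARALLEL(t1, 8) */ FROM t1;

SELECT * FROM TABLE(dbms_xplan.display);
-- Parallel DML : DELETE appears below the PX SEND / PX COORDINATOR steps
-- Serial DML   : DELETE appears above PX COORDINATOR (scan-only parallelism)
```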
I tried another statement:
SQL> explain plan for
2 insert into t1 select * from t1;
Here are the results:
11g
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | 4 | 8008 | 2 (0)| 00:00:01 | | | |
| 1 | LOAD TABLE CONVENTIONAL | T1 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL | T1 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
12c
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | 4 | 8008 | 2 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT | T1 | | | | | Q1,00 | PCWP | |
| 4 | OPTIMIZER STATISTICS GATHERING | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
It seems the DELETE statement has a problem, but the INSERT AS SELECT does not! -
Partition Elimination on views and joins with other partitioned tables
I have a bunch of tables that are partitioned daily and others that are not. We constantly join all of these tables. I have noticed that partition elimination doesn't happen in most cases, and I want some input or pointers on this.
Case 1
We have a view that joins a couple of partitioned tables on their id fields; the partition key is a TIMESTAMP WITH LOCAL TIME ZONE.
TABLEA
tableaid
atime
TABLEB
tablebid
tableaid
btime
The view basically joins on tableaid (a.tableaid = b.tableaid(+)) plus a bunch of other non-partitioned tables. atime and btime are the individual partition keys in the tables; unlike the ids, these times do not match up exactly - there is some correlation, but they can be very different.
When I run a query against the view providing a time range for btime, I see partition elimination on tableb in the explain plan, with KEY in Pstart/Pstop, but a full table scan on tablea. I was hoping for some kind of partition elimination there too, since tablea is also partitioned daily on the same datatype, TIMESTAMP WITH LOCAL TIME ZONE.
Case 2
I have a couple of more partitioned tables
TABLEC
tablecid
tablebid
ctime
TABLED
tabledid
tablebid
dtime
As you can see, these tables are joined on tablebid, and the times here generally correlate with tableb's timestamps as well.
Sub Case 1
When I join these tables to the view and give a time range on btime, I see partition elimination happening on tableb but not on tablea or any of the other tables.
Sub Case 2
Then I got rid of the view and wrote a query similar to it, joining on tableaid (tablea and tableb), then on tablebid (tableb, tablec and tabled) and a few other tables; executing the query with a time range on btime, I still see that partition elimination happens only on tableb.
I thought that if the other tables are also partitioned on a similar key, partition elimination should happen. What am I missing that is preventing partition elimination on the other tables?
Performance is of utmost importance, and partition pruning is going to help with that; I guess that's what I'm trying to achieve.
To achieve partition elimination on tablec, tabled, etc., I'm doing an outer join on btime, and that seems to work. Also, since after partition elimination I usually don't need a full table scan (the period I query is mostly small), I created a local index on the id field I use in the join, so it can do a "TABLE ACCESS BY LOCAL INDEX ROWID"; this should perform better than a global index, since the index traversal path is shorter.
Of course, I still have the problem of tablea not being pruned, since I cannot do an outer join on two fields in the same table (id and time). So I might just include the time criteria again, and maybe widen the range a little beyond what the user actually submitted, to try not to miss rows.
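The workaround described above can be sketched as follows (table, column and bind names are illustrative, and the one-day widening is an assumed drift allowance, not a value from the post):

```sql
-- Repeat the time predicate on every partitioned table so the optimizer can
-- prune each of them, not just tableb; widen the range on atime because it
-- only loosely correlates with btime.
SELECT a.*, b.*
FROM   tablea a
LEFT JOIN tableb b
       ON  b.tableaid = a.tableaid
       AND b.btime BETWEEN :start_time AND :end_time
WHERE  a.atime BETWEEN :start_time - INTERVAL '1' DAY
                   AND :end_time   + INTERVAL '1' DAY;
```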
Any suggestions are always welcome. -
Issue with updating partitioned table
Hi,
Has anyone seen this bug when updating partitioned tables?
It's very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join involves multiple joins, the update touches a partitioning column that isn't the first column in the primary key, and the table contains a bit field. Change any one of these factors and the bug disappears.
We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
Here's the test case - it performs the same operation on a partitioned table and a non-partitioned table, but the partitioned table raises the error "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
I'd be interested if anyone has seen this and has a version of Sybase without the issue.
Unfortunately, when it happens on a replicated table it takes down the rep server.
CREATE TABLE #table1
( PK char(8) null,
FileDate date,
changed bit
)
CREATE TABLE partitioned (
PK char(8) NOT NULL,
ValidFrom date DEFAULT current_date() NOT NULL,
ValidTo date DEFAULT '31-Dec-9999' NOT NULL
)
LOCK DATAROWS
PARTITION BY RANGE (ValidTo)
( p2014 VALUES <= ('20141231') ON [default],
p2015 VALUES <= ('20151231') ON [default],
pMAX VALUES <= (MAX) ON [default]
)
CREATE UNIQUE CLUSTERED INDEX pk
ON partitioned(PK, ValidFrom, ValidTo)
LOCAL INDEX
CREATE TABLE unpartitioned (
PK char(8) NOT NULL,
ValidFrom date DEFAULT current_date() NOT NULL,
ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
)
LOCK DATAROWS
CREATE UNIQUE CLUSTERED INDEX pk
ON unpartitioned(PK, ValidFrom, ValidTo)
insert partitioned
select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
insert unpartitioned
select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
insert #table1
select "ET00jPzh", "Jan 15 2015", 1
union all
select "ET00jPzh", "Jan 15 2015", 1
go
update partitioned
set ValidTo = dateadd(dd,-1,FileDate)
from #table1 t
inner join partitioned p on (p.PK = t.PK)
where p.ValidTo = '99991231'
and t.changed = 1
go
update unpartitioned
set ValidTo = dateadd(dd,-1,FileDate)
from #table1 t
inner join unpartitioned u on (u.PK = t.PK)
where u.ValidTo = '99991231'
and t.changed = 1
go
drop table #table1
go
drop table partitioned
drop table unpartitioned
go
wrt replication - it is a bit unclear, as not enough information has been given to say what happened. I am also not sure your DBAs are accurately telling you what happened - they may have made the problem worse by not knowing what to do themselves; e.g. 'losing' the log points to the fact that someone doesn't know what they should be doing. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
Wrt ASE versions, I suspect that any differences may have to do with endian-ness rather than the version of ASE itself. There may be other factors, but I would suggest opening a separate message/case on it.
Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
-- testing with tinyint
1> use demo_db
1>
2> CREATE TABLE #table1
3> ( PK char(8) null,
4> FileDate date,
5> -- changed bit
6> changed tinyint
7> )
8>
9> CREATE TABLE partitioned (
10> PK char(8) NOT NULL,
11> ValidFrom date DEFAULT current_date() NOT NULL,
12> ValidTo date DEFAULT '31-Dec-9999' NOT NULL
13> )
14>
15> LOCK DATAROWS
16> PARTITION BY RANGE (ValidTo)
17> ( p2014 VALUES <= ('20141231') ON [default],
18> p2015 VALUES <= ('20151231') ON [default],
19> pMAX VALUES <= (MAX) ON [default]
20> )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23> ON partitioned(PK, ValidFrom, ValidTo)
24> LOCAL INDEX
25>
26> CREATE TABLE unpartitioned (
27> PK char(8) NOT NULL,
28> ValidFrom date DEFAULT current_date() NOT NULL,
29> ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
30> )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34> ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join partitioned p on (p.PK = t.PK)
6> where p.ValidTo = '99991231'
7> and t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join unpartitioned u on (u.PK = t.PK)
6> where u.ValidTo = '99991231'
7> and t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned
-- duplicating with 'int'
1> use demo_db
1>
2> CREATE TABLE #table1
3> ( PK char(8) null,
4> FileDate date,
5> -- changed bit
6> changed int
7> )
8>
9> CREATE TABLE partitioned (
10> PK char(8) NOT NULL,
11> ValidFrom date DEFAULT current_date() NOT NULL,
12> ValidTo date DEFAULT '31-Dec-9999' NOT NULL
13> )
14>
15> LOCK DATAROWS
16> PARTITION BY RANGE (ValidTo)
17> ( p2014 VALUES <= ('20141231') ON [default],
18> p2015 VALUES <= ('20151231') ON [default],
19> pMAX VALUES <= (MAX) ON [default]
20> )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23> ON partitioned(PK, ValidFrom, ValidTo)
24> LOCAL INDEX
25>
26> CREATE TABLE unpartitioned (
27> PK char(8) NOT NULL,
28> ValidFrom date DEFAULT current_date() NOT NULL,
29> ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
30> )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34> ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join partitioned p on (p.PK = t.PK)
6> where p.ValidTo = '99991231'
7> and t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join unpartitioned u on (u.PK = t.PK)
6> where u.ValidTo = '99991231'
7> and t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned
-
Move data from Non Partitioned Table to Partitioned Table
Hi Friends,
I am using Oracle 11.2.0.1 DB
Please let me know how I can copy/move the data from a non-partitioned Oracle table to the newly created partitioned table.
Regards,
DB839396 wrote:
Hi All,
I created the partitioned table but am unable to copy the data from the non-partitioned table:
SQL> select * from sales;
SNO YEAR NAME
1 01-JAN-11 jan2011
1 01-FEB-11 feb2011
1 01-JAN-12 jan2012
1 01-FEB-12 feb2012
1 01-JAN-13 jan2013
1 01-FEB-13 feb2013
into which partition should the row immediately above ("01-FEB-13") be deposited?
[oracle@localhost ~]$ oerr ora 14400
14400, 00000, "inserted partition key does not map to any partition"
// *Cause: An attempt was made to insert a record into, a Range or Composite
// Range object, with a concatenated partition key that is beyond
// the concatenated partition bound list of the last partition -OR-
// An attempt was made to insert a record into a List object with
// a partition key that did not match the literal values specified
// for any of the partitions.
// *Action: Do not insert the key. Or, add a partition capable of accepting
// the key, Or add values matching the key to a partition specification
6 rows selected.
SQL>
SQL> create table sales_part(sno number(3),year date,name varchar2(10))
2 partition by range(year)
3 (
4 partition p11 values less than (TO_DATE('01/JAN/2012','DD/MON/YYYY')),
5 partition p12 values less than (TO_DATE('01/JAN/2013','DD/MON/YYYY'))
6 );
Table created.
SQL> SELECT table_name,partition_name, num_rows FROM user_tab_partitions;
TABLE_NAME PARTITION_NAME NUM_ROWS
SALES_PART P11
SALES_PART P12
UNPAR_TABLE UNPAR_TABLE_12 776000
UNPAR_TABLE UNPAR_TABLE_15 5000
UNPAR_TABLE UNPAR_TABLE_MX 220000
SQL>
SQL> insert into sales_part select * from sales;
insert into sales_part select * from sales
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition
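One hedged way to make that insert succeed, building directly on the DDL above: add a catch-all partition so every partition key maps somewhere (MAXVALUE is the standard range-partitioning catch-all bound):

```sql
-- Accept keys beyond the last bound, then retry the load
ALTER TABLE sales_part ADD PARTITION pmax VALUES LESS THAN (MAXVALUE);

INSERT INTO sales_part SELECT * FROM sales;
COMMIT;
```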
Regards,
DB -
Partition a Non-Partitioned Table in 11.2.0.1
Hi Friends,
I am using Oracle Database 11.2.0.1.
I have a table with 10 million records, and it is a non-partitioned table.
1) I would like to partition the table (by range) without creating a new table; I want to do it on the existing table itself. Is DBMS_REDEFINITION the only option, or can I use ALTER TABLE?
2) I would also like to add one partition to hold data for the unspecified range.
Please let me know your inputs on the above.
Regards,
DB
Hi,
what is the advantage of using DBMS_REDEFINITION over the normal method (create partitioned table, grant access, insert records)?
You can't just add a partition to a non-partitioned table; you need to recreate the table to have it partitioned. The advantage of DBMS_REDEFINITION is that it recreates an existing table online, so your data always remains available during the table recreation.
I would like to know how to copy the object privileges, constraints and indexes from the non-partitioned table (sales) to the partitioned table (sales_part) I am creating. Will DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS help with this?
First you need to tell us what method you are using to partition the existing table. If you are using DBMS_REDEFINITION, you really don't need to worry about triggers, indexes or constraints at all; just follow any document that explains how to use it. Dr. Tim has done a lot of work for the rest of us by writing these up. Follow this document.
http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
If so, can I use DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS on its own after I create the partitioned table, or must it be used along with DBMS_REDEFINITION.START_REDEF_TABLE?
See the document I mentioned above.
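For orientation, a hedged sketch of where COPY_TABLE_DEPENDENTS sits in the redefinition sequence (the SCOTT schema is illustrative; the table names come from the thread; the remaining parameters are left at their documented defaults):

```sql
-- SALES = existing non-partitioned table;
-- SALES_PART = pre-created partitioned interim table of the same shape.
DECLARE
  v_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.can_redef_table('SCOTT', 'SALES');
  DBMS_REDEFINITION.start_redef_table('SCOTT', 'SALES', 'SALES_PART');

  -- Copies indexes, triggers, constraints and grants onto the interim table
  DBMS_REDEFINITION.copy_table_dependents(
    uname      => 'SCOTT',
    orig_table => 'SALES',
    int_table  => 'SALES_PART',
    num_errors => v_errors);

  DBMS_REDEFINITION.finish_redef_table('SCOTT', 'SALES', 'SALES_PART');
END;
/
```

So COPY_TABLE_DEPENDENTS is used between START_REDEF_TABLE and FINISH_REDEF_TABLE, not on its own.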
Salman -
How to find Non-Partitioned Tables larger than 2 GB in Oracle
Hi team
how can I find the non-partitioned tables larger than 2 GB, excluding SYS and SYSTEM?
regards
Here's one I made earlier:
set pagesize 999
set linesize 132
col owner format a25
col segment_name format a60
select owner, segment_name, segment_type, (bytes/1024/1024) "MB size"
from dba_segments
where owner not in ('SYS','SYSTEM','XDB','MDSYS','SYSMAN') -- edit for taste
and segment_type = 'TABLE'
and (bytes/1024/1024) > 2000
order by bytes asc -
11.2.0.3 Parallel delete on non-partitioned table
Friends and mentors...
I want to know more about parallel delete and its requirements. I have gone through the Oracle manuals and articles but am not able to understand the parallel delete (DML) feature exactly.
Task: delete a large amount of data (20 million of 60 million rows) from a non-partitioned table
Job frequency: Once every month
Oracle: 11.2.0.3
OS: Linux
Questions:
1. Any ideas on the best approach?
2. Do I need the table to be partitioned to use the /*+ parallel */ hint?
3. If I use the /*+ parallel */ hint in the delete statement, do I also need "alter session enable parallel dml"?
4. How do I decide the degree of parallelism (DOP)? Is it good to use AUTO for DOP?
Currently I am planning to use a parallel hint in the delete statement; is this enough, or do I need a better plan?
thanks..
khallas301 wrote:
Trying to delete large data (20 mil rows out of 60 mil) from a non-partitioned table - do I need the table partitioned to use the /*+ parallel */ hint?
It appears that you believe parallel is always faster than non-parallel, which is not true in every case.
The slowest part of any DELETE is the physical I/O.
How many parallel processes can access the same table before the disk gets saturated?
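To answer questions 2 and 3 above concretely (table and predicate names below are illustrative): partitioning is not required for a parallel DELETE, but the session-level setting is; without it, only the query part of the statement can run in parallel.

```sql
-- Required for the DELETE itself, not just its scan, to run in parallel
ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ PARALLEL(t, 8) */
FROM   big_table t
WHERE  t.created_dt < ADD_MONTHS(SYSDATE, -12);

COMMIT;  -- parallel DML must be committed before the table can be re-queried
```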
Convert non-partition table to partition table
Hello Everybody
I just want to ask how to convert a non-partitioned table to a partitioned table.
Thanks
Ramez S. Sawires
Dear ARF,
First of all, thank you for replying; second, do you have any links about the DBMS_REDEFINITION package?
I am using Database Oracle 10g
Thanks
Ramez S. Sawires
-
Problem with Non-cumulative key figure.
Hi all,
I am facing a problem with a non-cumulative key figure (Quantity). I have created and loaded data into a non-cumulative InfoCube; I defined this cube myself to test the non-cumulative key figure.
In the BEx query, the non-cumulative key figure and the cumulative key figure (value change) both display the same values; i.e., the non-cumulative key figure contains exactly the values we loaded for the cumulative value change. The non-cumulative key figure is not being calculated from its associated cumulative key figure.
I have done the following while defining the non-cumulative InfoCube:
1. Created a non-cumulative key figure which is associated with a cumulative key figure (value change).
2. Loaded data to non-cumulative InfoCube from flat file.
3. Compressed data in non-cumulative InfoCube after the load.
Note:
1. Validity area is determined by the system based on the minimum and maximum date in data.
2. Validity determining characteristic, 0CALDAY is the default characteristic selected by the system.
Is there any other settings to be done?
Please help me in resolving this issue.
Thanks and regards
Pruthvi R
Being a non-cumulative KF, total stock automatically takes care of that.
Try putting on Total Stock all the restrictions you included for total receipts and total issues; e.g., restrict Total Stock to the movement types used in Receipts as well as Issues.
Check and revert.
Regards
Gajendra -
Execute query with non database block
How do I execute a query with a non-database block in the WHEN-NEW-FORM-INSTANCE trigger?
Hi Kame,
EXECUTE_QUERY does not work with a non-database block. Instead, open a cursor and assign values to the non-database block's items programmatically; see the following example:
BEGIN
GO_BLOCK('block');
FOR i IN (SELECT col1, col2 FROM some_table) LOOP
:block.item1 := i.col1;
:block.item2 := i.col2;
NEXT_RECORD;
END LOOP;
FIRST_RECORD;
END;
Please mark if this helps you or is correct.
Regards,
Danish -
InfoProvider with non-cumulative key figures
What is an InfoProvider with non-cumulative key figures? What are non-cumulative key figures?
Looking forward to a reply.
It's a property of the KF; it has nothing to do with Cubes or ODS.
The name itself says non-cumulative KF: we can't cumulate this KF. Take the case of Sales Amount and No. of Employees: Sales Amount can be cumulated over time, but No. of Employees can't be. The same is the case with Stock.
Nagesh Ganisetti. -
Query between in partition table
hello,
I have been trying to optimize this query, which uses BETWEEN.
The query below takes around 20 seconds to execute; can someone suggest how to improve the performance?
The table is partitioned on transaction_dt and has a local index; statistics are also collected.
1 explain plan for
2 SELECT *
3 FROM Tb_Bookkeeping_Trans_Base
4* WHERE TRANSACTION_DT between '06-apr-10' and '07-apr-2010'
13:26:57 SQL> /
Explained.
Elapsed: 00:00:00.15
13:26:58 SQL>
13:26:58 SQL>
13:27:01 SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3757902876
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 151K| 44M| 7363 (2)| 00:01:29 | | |
|* 1 | FILTER | | | | | | | |
| 2 | PARTITION RANGE ITERATOR | | 151K| 44M| 7363 (2)| 00:01:29 | KEY | 10
| 3 | TABLE ACCESS BY LOCAL INDEX ROWID| TB_BOOKKEEPING_TRANS_BASE | 151K| 44M| 7363
|* 4 | INDEX RANGE SCAN | TB_BOOKKEEPING_TRANS_BASE_IDX2 | 154K| | 757 (2)| 00:00
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - filter('06-apr-10'<=TO_DATE(' 2010-04-07 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
4 - access("TRANSACTION_DT">='06-apr-10' AND "TRANSACTION_DT"<=TO_DATE(' 2010-04-07 00:00:00', 's
Something rather strange is happening to your date variables in the "Predicate Information" section below the plan.
The first thing I would try is passing proper dates instead of character strings, i.e
SELECT *
FROM tb_bookkeeping_trans_base
WHERE transaction_dt BETWEEN DATE '2010-04-06' AND DATE '2010-04-07' -
Create Partitioning to non-partitioning Table
Dear All,
I have a non-partitioned table with about 20 GB of data. I want to partition this table on a column, say DATE_M.
Can anyone please suggest the best way to do this?
Thanks
So now in the partitioned table he creates one maxvalue partition and does an exchange partition.
That isn't the typical scenario. Typically you make the switch by using partitions for the NEW data and leave the existing data in the base range partition.
1. Existing app uses an unpartitioned table
2. New table is partitioned for NEW DATA
Assume you want monthly partitions (daily works the same). This is already April so there is already some April data.
So create the partitioned table so the base partition clause includes ALL of the data for April and before:
create table ipart
(time_id date
,cust_id number(4)
,amount_sold number(5))
partition by range(time_id)
interval(NUMTOYMINTERVAL(1,'month'))
(partition old_data values less than (to_date('01-may-2015','DD-MON-YYYY')));
Now you do the exchange with the unpartitioned table and all the current data goes into that 'OLD_DATA' partition.
New data for May and the future will have partitions created automatically.
That approach lets you ease into partitioning without disrupting your current processes at all.
As time goes by, more and more of the data will be in the new monthly partitions. If you need to, you can split that base partition:
insert into ipart (time_id) values (sysdate - 90);
insert into ipart (time_id) values (sysdate - 60);
insert into ipart (time_id) values (sysdate - 30);
insert into ipart (time_id) values (sysdate);
commit;
alter table ipart split partition old_data
at (to_date('01-jan-2015', 'DD-MON-YYYY')) into
(partition old_data, partition JAN_FEB_MAR_APR); -
Importing partitioned table data into non-partitioned table
Hi Friends,
SOURCE SERVER
OS:Linux
Database Version:10.2.0.2.0
I have exported one partition of my partitioned table as below:
expdp system/manager DIRECTORY=DIR4 DUMPFILE=mapping.dmp LOGFILE=mapping_exp.log TABLES=MAPPING.MAPPING:DATASET_NAP
TARGET SERVER
OS:Linux
Database Version:10.2.0.4.0
Now, when importing into the other server, I get the error below:
Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 17 January, 2012 11:22:32
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MAPPING"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MAPPING"."SYS_IMPORT_FULL_01": MAPPING/******** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE failed to create with error:
ORA-00959: tablespace 'MAPPING_ABC' does not exist
Failing sql is:
CREATE TABLE "MAPPING"."MAPPING" ("SAP_ID" NUMBER(38,0) NOT NULL ENABLE, "TG_ID" NUMBER(38,0) NOT NULL ENABLE, "TT_ID" NUMBER(38,0) NOT NULL ENABLE, "PARENT_CT_ID" NUMBER(38,0), "MAPPINGTIME" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE, "CLASS" NUMBER(38,0) NOT NULL ENABLE, "TYPE" NUMBER(38,0) NOT NULL ENABLE, "ID" NUMBER(38,0) NOT NULL ENABLE, "UREID"
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_TG_ID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."PK_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_UREID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_V2" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_PARENT_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."CKC_SMAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."PK_MAPPING_ITM" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
Processing object type TABLE_EXPORT/TABLE/COMMENT
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TG" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
ORA-39112: Dependent object type INDEX:"MAPPING"."X_PART" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_TIME_T" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_DAY" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_BTMP" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2_T" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Job "MAPPING"."SYS_IMPORT_FULL_01" completed with 52 error(s) at 11:22:39
Please help..!!
Regards
Umesh Gupta
Yes, I have tried that option as well.
But when I write one tablespace name in the REMAP_TABLESPACE clause, it gives an error for the second one, and if I include the 1st and 2nd tablespaces it gives an error for the 3rd one.
The one option I know of is to write every tablespace name in REMAP_TABLESPACE, but that is a lengthy process. Is there any other way possible?
Regards
Umesh
AFAIK the option you have is the one I recommended, though it is lengthy :-(
Wait for some EXPERT and GURU's review on this issue .........
Good luck ....
--neeraj
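One way to take the tedium out of "list every tablespace" is to generate the REMAP_TABLESPACE parameters from the dictionary on the source database. A sketch, assuming the target tablespace is named USERS (substitute your real target tablespace):

```sql
-- On the SOURCE database: build one REMAP_TABLESPACE parameter for every
-- tablespace that holds a segment owned by the MAPPING schema.
select distinct 'REMAP_TABLESPACE=' || tablespace_name || ':USERS' as remap_param
from   dba_segments
where  owner = 'MAPPING';
```

Each output line can then be pasted onto the impdp command line (or, more conveniently, into a PARFILE), so no tablespace is missed and the ORA-00959 errors stop appearing one at a time.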