CTAS - DOP
Version:
SQL> select * from gv$version;
INST_ID BANNER
2 Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
2 PL/SQL Release 10.2.0.4.0 - Production
2 CORE 10.2.0.4.0 Production
2 TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
2 NLSRTL Version 10.2.0.4.0 - Production
1 Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
1 PL/SQL Release 10.2.0.4.0 - Production
1 CORE 10.2.0.4.0 Production
1 TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
1 NLSRTL Version 10.2.0.4.0 - Production
10 rows selected.
Hi experts,
I am planning to do CTAS in parallel mode on my production environment which has below configuration.
SQL> show parameter parallel
NAME TYPE VALUE
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2152
parallel_instance_group string
parallel_max_servers integer 485
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean TRUE
parallel_server_instances integer 2
parallel_threads_per_cpu integer 2
NAME TYPE VALUE
recovery_parallelism integer 0
SQL> show parameter cpu
NAME TYPE VALUE
cpu_count integer 32
parallel_threads_per_cpu integer 2
I have opted for DOP 8 and below is my CTAS:
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
ALTER SESSION FORCE PARALLEL DDL PARALLEL 8;
create table r_dummy PARALLEL
as
WITH fq AS
  (select tdocstorage_pk
   from fnet_qdocstorage fq
   where fq.batch_id is null)
select bb.docstorageid,
bb.filenetdocid,
bb.filename,
nvl(bb.obsoleteflag, 0) Obsoleteflag,
cc.doccategorycode,
cc.docimagecode DOCFILETYPECODE,
ee.packetidtypecode,
nvl(ee.docdeliveryflag, 0) docdeliveryflag
from closr.t_docstorage bb,
closr.t_docstoragetype cc,
closr.t_docstorageattribute ee
where exists
  (select 1
   from fq
   where fq.tdocstorage_pk = bb.docstorageid)
and bb.filenetdocid is not null
and bb.docstoragetypecode = cc.docstoragetypecode
and bb.docstorageid = ee.docstorageid (+)
Explain plan:
Plan
1 Every row in the table CLOSR.T_DOCSTORAGETYPE is read.
2 PX BLOCK ITERATOR
3 PX SEND BROADCAST
4 PX RECEIVE
5 Every row in the table Partitions accessed #1 is read.
6 PX BLOCK ITERATOR
7 PX SEND HASH
8 PX RECEIVE
9 The rows from step 8 were sorted to eliminate duplicate rows.
10 PX JOIN FILTER CREATE
11 Every row in the table CLOSR.T_DOCSTORAGE is read.
12 PX BLOCK ITERATOR
13 PX JOIN FILTER USE
14 PX SEND HASH
15 PX RECEIVE
16 The result sets from steps 10, 15 were joined (hash).
17 The result sets from steps 4, 16 were joined (hash).
18 PX SEND HASH
19 PX RECEIVE
20 Every row in the table CLOSR.T_DOCSTORAGEATTRIBUTE is read.
21 PX BLOCK ITERATOR
22 PX SEND HASH
23 PX RECEIVE
24 Rows from step 19 which matched rows from step 23 were returned (hash join).
25 The rows from step 24 were inserted into HLODS.R_DUMMY using direct-load insert.
26 PX SEND QC (RANDOM)
27 PX COORDINATOR
28 CREATE TABLE STATEMENT
Fortunately I've select on all the tables; when I fire this query with DOP 8 it takes 2 minutes to retrieve 16653835 rows (which is good, I guess).
I've a few questions here:
1) What would be the optimal DOP for the DDL to run when the query is running at DOP 8?
2) Can that also be 8?
3) Is there any way to estimate the time the CREATE TABLE will take if I go with DOP 8?
4) Can I use NOLOGGING here? Does it make a significant difference in time (as we could skip redo)?
Our database is in NOARCHIVELOG mode, and the table we are creating here is just a dummy table, which will be dropped after we export the dump.
Please help me out; unfortunately I'm not able to test this on another environment because we don't have the same config as PROD.
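On questions 1 and 2, rather than estimating up front, you can watch what the database actually grants during a test run. A hedged sketch using the standard 10.2 PX views (run from another session while the query or CTAS is executing):

```sql
-- DEGREE is the DOP actually granted to each query coordinator (QC);
-- REQ_DEGREE is what was requested. DEGREE < REQ_DEGREE means the
-- statement was downgraded (e.g. by parallel_adaptive_multi_user).
select qcsid, qcinst_id, req_degree, degree, count(*) as px_slaves
from   gv$px_session
where  req_degree is not null
group  by qcsid, qcinst_id, req_degree, degree;
```

If DEGREE comes back as 8 on a quiet system, the DDL side of the CTAS is getting the same DOP you forced for the query.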
>
4) Can I use NOLOGGING here? Does it make a significant difference in time (as we could skip redo)?
>
Yes - no need to log since you have the source data available if you need it.
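For what it's worth, a sketch of how the pieces could fit together (an illustration, not the poster's exact statement; the full select list and joins are the ones shown earlier in the thread):

```sql
alter session force parallel ddl parallel 8;
alter session force parallel query parallel 8;

-- NOLOGGING with a direct-load CTAS skips most redo generation; that is
-- safe here because the database is in NOARCHIVELOG mode and the table
-- is a throwaway that gets dropped after the export.
create table r_dummy
  parallel 8
  nologging
as
select /* ... same select list and joins as in the original post ... */
       bb.docstorageid
from   closr.t_docstorage bb
where  bb.filenetdocid is not null;
```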
Similar Messages
-
Hi,
I am trying to use CTAS to create a copy of the table which is on remote server.
My select statement is fetching data, but when I try to use CTAS it just takes too long. The script has been running for 5 hrs and still the table is not created. It has approximately 20 million rows. Actually, even running a count takes too long, so I have no idea how much data is in the table either. Any workaround, please?
user8731258 wrote:
Do you think export/import would be a better option?
It can be a faster option, as it allows you to compress/zip the dump file and transport that across the network to the other server. Reducing the size of the data set that needs to be transported over the network could reduce execution time.
However, this approach would be serial - 5 steps to follow, one after the other:
1. export the data (server 1)
2. zip the dump file (server 1)
3. ftp/sftp the dump file to server 2 from server 1
4. unzip the dump file (server 2)
5. import the data (server 2)
This can be run in parallel (on Linux/Unix servers) by the means of fifo pipes. All these steps can then run at the same time - in other words, data is zipped while being exported, zipped data is immediately transferred to server 2, and on server 2 incoming zipped data is immediately unzipped and imported.
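The fifo idea can be sketched with ordinary shell tools. Below, gzip/gunzip and a local file stand in for exp/imp and the network hop (all paths and the sample data are illustrative); the point is that the writer and the compressor run concurrently through the named pipe:

```shell
set -e
work=$(mktemp -d)
mkfifo "$work/pipe"

# The compressor (step 2) reads the fifo in the background while the
# "export" (step 1) writes into it -- both run at the same time.
gzip -c < "$work/pipe" > "$work/dump.gz" &
printf 'table data rows 1..3\n' > "$work/pipe"   # stand-in for: exp ... file=$work/pipe
wait

# Receiving side (steps 4-5): unzip and consume the stream.
result=$(gunzip -c "$work/dump.gz")
echo "$result"
rm -rf "$work"
```

In the real pipeline you would point exp at the fifo and push the gzip output through ssh or nc to the second server, where the mirror-image fifo feeds imp.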
Thus instead of 5 serial steps that result in an accumulated elapsed run time (the sum of all 5 steps), this approach will only last as long as the single slowest of the 5 steps.
-
How to Carry Forward Beginning Balances for CTA
Hi Experts,
We're doing a multi-currency AppSet wherein I need to carry forward the ending balances of the cumulative translation adjustment to it's beginning balance the next year but the problem is:
1. CTA resides only in USD reporting currency(not in LC).
2. My carry forward rules apply only to local currency. I've tried to include USD, but when I run the default formula in which I've included my currency conversion rule, the beginning balance for CTA zeroes out. I've tried removing the ratetype on CTA, replacing it with blank, AS_IS, NOTRANS so as not to translate that account, since it will be converting from a blank opening balance in LC (CTA is only calculated in USD) to USD, so it's reasonable that it would zero out; but it is still translated.
Any idea how to work this out?
Thanks,
Marvin
Hi Marvin,
Set your CTA account ratetype to something like "HISTX" or whatever you want, add this ratetype to RATE dimension and in your business rule, set this rate type to [AS_IS].
We use it here and it works.
Thanks,
Regis -
CTAS using dbms_metadata.get_ddl for Partitioned table
Hi,
I would like to create a temporary table from a partitioned table using CTAS. I plan to use the following steps in a PL/SQL procedure:
1. Use dbms_metadata.get_ddl to get the script
2. Use raplace function to change the tablename to temptable
3. execute the script to get the temp table created.
SQL> create or replace procedure p1 as
2 l_clob clob;
3 str long;
4 begin
5 SELECT dbms_metadata.get_ddl('TABLE', 'FACT_TABLE','USER1') into l_clob FROM DUAL;
6 dbms_output.put_line('CLOB Length:'||dbms_lob.getlength(l_clob));
7 str:=dbms_lob.substr(l_clob,dbms_lob.getlength(l_clob),1);
8 dbms_output.put_line('DDL:'||str);
9 end;
12 /
Procedure created.
SQL> exec p1;
CLOB Length:73376
DDL:
PL/SQL procedure successfully completed.
I cannot see the DDL at all. Please help.
Thanks Adam. The following piece of code is supposed to do that, but it's failing: the dbms_lob.substr(l_clob,4000,4000*v_intIdx +1) pieces contain newlines, and therefore dbms_sql.parse is failing.
Please advise.
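For what it's worth, a hedged guess at the root cause before the full package below: with the DBMS_SQL.PARSE varchar2a overload, the pieces are simply concatenated when lfflg => false, so arbitrary 4000-byte slices of the CLOB can keep their embedded newlines and nothing needs to be stripped. A minimal sketch (table/column names follow the post; the trailing ';' that the SQLTERMINATOR transform adds may still need trimming before the parse):

```sql
declare
  l_clob   clob;
  l_pieces dbms_sql.varchar2a;
  l_cur    pls_integer;
  l_rows   pls_integer;
begin
  select ddl_stmt into l_clob from my_metadata where stmt_no = 2;
  -- slice the CLOB as-is; do NOT strip chr(10)/chr(13)
  for i in 0 .. ceil(dbms_lob.getlength(l_clob) / 4000) - 1 loop
    l_pieces(i) := dbms_lob.substr(l_clob, 4000, i * 4000 + 1);
  end loop;
  l_cur := dbms_sql.open_cursor;
  dbms_sql.parse(c             => l_cur,
                 statement     => l_pieces,
                 lb            => 0,
                 ub            => l_pieces.count - 1,
                 lfflg         => false,  -- concatenate pieces unchanged
                 language_flag => dbms_sql.native);
  l_rows := dbms_sql.execute(l_cur);
  dbms_sql.close_cursor(l_cur);
end;
/
```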
create table my_metadata(stmt_no number, ddl_stmt clob);
CREATE OR REPLACE package USER1.genTempTable is
procedure getDDL;
procedure createTempTab;
end;
CREATE OR REPLACE package body USER1.genTempTable is
procedure getDDL as
-- Description: get a DDL from a partitioned table and change the table name
-- Reference: Q: How Could I Format The Output From Dbms_metadata.Get_ddl Utility? [ID 394143.1]
l_clob clob := empty_clob();
str long;
l_dummy varchar2(25);
-- dbms_lob does not have any replace function; the following procedure is a trick to do that
procedure lob_replace( p_lob in out clob, p_what in varchar2, p_with in varchar2 )as
n number;
begin
n := dbms_lob.instr( p_lob, p_what );
if ( nvl(n,0) > 0 )
then
dbms_lob.copy( p_lob,
p_lob,
dbms_lob.getlength(p_lob),
n+length(p_with),
n+length(p_what) );
dbms_lob.write( p_lob, length(p_with), n, p_with );
if ( length(p_what) > length(p_with) )
then
dbms_lob.trim( p_lob,
dbms_lob.getlength(p_lob)-(length(p_what)-length(p_with)) );
end if;
end if;
end lob_replace;
begin
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SEGMENT_ATTRIBUTES',false);
DBMS_METADATA.SET_TRANSFORM_PARAM (DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
DBMS_METADATA.SET_TRANSFORM_PARAM (DBMS_METADATA.SESSION_TRANSFORM,'SEGMENT_ATTRIBUTES',false);
execute immediate 'truncate table my_metadata';
-- Get DDL
SELECT dbms_metadata.get_ddl('TABLE', 'FACT','USER1') into l_clob FROM DUAL;
-- Insert the DDL into the metadata table
insert into my_metadata values(1,l_clob);
commit;
-- Change the table name into a temporary table
select ddl_stmt into l_clob from my_metadata where stmt_no =1 for update;
lob_replace(l_clob,'"FACT"','"FACT_T"');
insert into my_metadata values(2,l_clob);
commit;
-- execute immediate l_clob; <---- Cannot be executed in 10.2.0.5; supported in 11gR2
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT');
end getDDL;
-- Procedure to create the temporary table
procedure createTempTab as
v_intCur pls_integer;
v_intIdx pls_integer;
v_intNumRows pls_integer;
v_vcStmt dbms_sql.varchar2a;
l_clob clob := empty_clob();
l_str varchar2(4000);
l_length number;
l_loops number;
begin
select ddl_stmt into l_clob from my_metadata where stmt_no=2;
l_length := dbms_lob.getlength(l_clob);
l_loops := ceil(l_length/4000);
for v_intIdx in 0..l_loops loop
l_str:=dbms_lob.substr(l_clob,4000,4000*v_intIdx +1);
l_str := replace(l_str,chr(10),'');
l_str := replace(l_str,chr(13),'');
l_str := replace(l_str,chr(9),'');
v_vcStmt(v_intIdx) := l_str;
end loop;
for v_intIdx in 0..l_loops loop
dbms_output.put_line(v_vcStmt(v_intIdx));
end loop;
v_intCur := dbms_sql.open_cursor;
dbms_sql.parse(
c => v_intCur,
statement => v_vcStmt,
lb => 0,
--ub => v_intIdx,
ub => l_loops,
lfflg => true,
language_flag => dbms_sql.native);
v_intNumRows := dbms_sql.execute(v_intCur);
dbms_sql.close_cursor(v_intCur);
end createTempTab;
end;
/
-
Create table as select (CTAS)statement is taking very long time.
Hi All,
One of my procedure run a create table as select statement every month.
Usually it finishes in 20 mins for 6172063 records, and in 1 hour for 13699067.
But this time it is taking forever even for 38076 records.
When I checked all it is doing is CPU usage. No I/O.
I did a count(*) using the query it brought results fine.
BUT CTAS keeps going on.
I'm using Oracle 10.2.0.4 .
main table temp_ip has 38076
table nhs_opcs_hier has 26769 records.
and table nhs_icd10_hier has 49551 records.
Query is as follows:
create table analytic_hes.temp_ip_hier as
select b.*, (select nvl(max(hierarchy), 0)
from ref_hd.nhs_opcs_hier a
where fiscal_year = b.hd_spell_fiscal_year
and a.code in
(primary_PROCEDURE, secondary_procedure_1, secondary_procedure_2,
secondary_procedure_3, secondary_procedure_4, secondary_procedure_5,
secondary_procedure_6, secondary_procedure_7, secondary_procedure_8,
secondary_procedure_9, secondary_procedure_10,
secondary_procedure_11, secondary_procedure_12)) as hd_procedure_hierarchy,
(select nvl(max(hierarchy), 0) from ref_hd.nhs_icd10_hier a
where fiscal_year = b.hd_spell_fiscal_year
and a.code in
(primary_diagnosis, secondary_diagnosis_1,
secondary_diagnosis_2, secondary_diagnosis_3,
secondary_diagnosis_4, secondary_diagnosis_5,
secondary_diagnosis_6, secondary_diagnosis_7,
secondary_diagnosis_8, secondary_diagnosis_9,
secondary_diagnosis_10, secondary_diagnosis_11,
secondary_diagnosis_12, secondary_diagnosis_13,
secondary_diagnosis_14)) as hd_diagnosis_hierarchy
from analytic_hes.temp_ip b
Any help would be greatly appreciated
Hello,
This is a bit of a wild card I think, because it's going to require 14 full scans of the temp_ip table to unpivot the diagnosis and procedure codes, so it's likely this will run slower than the original. However, as this is a temporary table, I'm guessing you might have some control over its structure, or at least the ability to scrap it and try something else. If you are able to alter the table structure, you could make the query much simpler and most likely much quicker. I think you need a list of procedure codes for the fiscal year and a list of diagnosis codes for the fiscal year. I'm doing that through the big list of UNION ALL statements, but you may have a more efficient way to do it based on the core tables you're populating temp_ip from. Anyway, here it is (as far as I can tell this will do the same job):
WITH codes AS
( SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
primary_PROCEDURE procedure_code,
primary_diagnosis diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_1 procedure_code,
secondary_diagnosis_1 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_2 procedure_code ,
secondary_diagnosis_2 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_3 procedure_code,
secondary_diagnosis_3 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_4 procedure_code,
secondary_diagnosis_4 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_5 procedure_code,
secondary_diagnosis_5 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_6 procedure_code,
secondary_diagnosis_6 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_7 procedure_code,
secondary_diagnosis_7 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_8 procedure_code,
secondary_diagnosis_8 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_9 procedure_code,
secondary_diagnosis_9 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_10 procedure_code,
secondary_diagnosis_10 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_11 procedure_code,
secondary_diagnosis_11 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_12 procedure_code,
secondary_diagnosis_12 diagnosis_code
FROM
temp_ip
), hd_procedure_hierarchy AS
( SELECT
NVL (MAX (a.hierarchy), 0) hd_procedure_hierarchy,
a.fiscal_year
FROM
ref_hd.nhs_opcs_hier a,
codes pc
WHERE
a.fiscal_year = pc.hd_spell_fiscal_year
AND
a.code = pc.procedure_code
GROUP BY
a.fiscal_year
),hd_diagnosis_hierarchy AS
( SELECT
NVL (MAX (a.hierarchy), 0) hd_diagnosis_hierarchy,
a.fiscal_year
FROM
ref_hd.nhs_icd10_hier a,
codes pc
WHERE
a.fiscal_year = pc.hd_spell_fiscal_year
AND
a.code = pc.diagnosis_code
GROUP BY
a.fiscal_year
)
SELECT b.*, a.hd_procedure_hierarchy, c.hd_diagnosis_hierarchy
FROM analytic_hes.temp_ip b
LEFT OUTER JOIN hd_procedure_hierarchy a
ON (a.fiscal_year = b.hd_spell_fiscal_year)
LEFT OUTER JOIN hd_diagnosis_hierarchy c
ON (c.fiscal_year = b.hd_spell_fiscal_year)
HTH
David -
Create Partition table using CTAS
Hi there,
Is it possible to create a duplicate partitioned table from an existing partitioned table using the CTAS method? If yes, could you explain how? If no, how can I make a duplicate partitioned table?
Thanks in advance!
Rajesh Marath
Easily:
conn / as sysdba
CREATE TABLESPACE part1
DATAFILE 'c:\temp\part01.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
CREATE TABLESPACE part2
DATAFILE 'c:\temp\part02.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
CREATE TABLESPACE part3
DATAFILE 'c:\temp\part03.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
ALTER USER uwclass QUOTA UNLIMITED ON part1;
ALTER USER uwclass QUOTA UNLIMITED ON part2;
ALTER USER uwclass QUOTA UNLIMITED ON part3;
conn uwclass/uwclass
CREATE TABLE hash_part (
prof_history_id NUMBER(10),
person_id NUMBER(10) NOT NULL,
organization_id NUMBER(10) NOT NULL,
record_date DATE NOT NULL,
prof_hist_comments VARCHAR2(2000))
PARTITION BY HASH (prof_history_id)
PARTITIONS 3
STORE IN (part1, part2, part3);
CREATE TABLE duplicate_hash_part
PARTITION BY HASH (prof_history_id)
PARTITIONS 3
STORE IN (part1, part2, part3) AS
SELECT * FROM hash_part;
Follow the same logic for list and range partitions.
-
Automatic DOP take more time to execute query
We upgraded the database to Oracle 11gR2. While testing the Automatic DOP feature with our existing query, it takes more time than it did with our manual parallel settings.
Note: no constraints or indexes were created on the table, to gain performance while loading data (5000 records/sec).
Os : Sun Solaris 64bit
CPU = 8
RAM = 7456M
Default parameter settings:
parallel_degree_policy string MANUAL
parallel_degree_limit string CPU
parallel_threads_per_cpu integer 2
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
Query:
SELECT COUNT(*)
from (
SELECT
/*+ FIRST_ROWS(50), PARALLEL */
Query gets executed in 22minutes : execution plan
COUNT(*)
9600
Elapsed: 00:22:10.71
Execution Plan
Plan hash value: 3765539975
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 21 | 2164K (1)| 07:12:52 | | |
| 1 | SORT AGGREGATE | | 1 | 21 | | | | |
| 2 | PARTITION RANGE OR| | 89030 | 1825K| 2164K (1)| 07:12:52 |KEY(OR)|KEY(OR)|
|* 3 | TABLE ACCESS FULL| SUBSCRIBER_EVENT | 89030 | 1825K| 2164K (1)| 07:12:52 |KEY(OR)|KEY(OR)|
Automatic DOP Query: parameters set
alter session set PARALLEL_DEGREE_POLICY = limited;
alter session force parallel query;
Query:
SELECT COUNT(*)
from (
SELECT /*+ FIRST_ROWS(50), PARALLEL*/
This query takes more than 2hrs to execute
COUNT(*)
9600
Elapsed: 02:07:48.81
Execution Plan
Plan hash value: 127536830
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart|Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 21 | 150K (1)| 00:30:01 | | | | | |
| 1 | SORT AGGREGATE | | 1 | 21 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 21 | | | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 21 | | | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 89030 | 1825K| 150K (1)| 00:30:01 |KEY(OR)|KEY(OR)| Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| SUBSCRIBER_EVENT | 89030 | 1825K| 150K (1)| 00:30:01 |KEY(OR)|KEY(OR)| Q1,00 | PCWP | |
Note
- automatic DOP: Computed Degree of Parallelism is 16 because of degree limit
Can someone help us find out where we went wrong? Any pointer would be really helpful to resolve the issue.
Edited by: Sachin B on May 11, 2010 4:05 AM
Generated AWR report for ADOP
Foreground Wait Events DB/Inst: HDB/hdb Snaps: 158-161
-> s - second, ms - millisecond - 1000th of a second
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by wait time desc, waits desc (idle events last)
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
direct path read 522,173 0 125,051 239 628.4 99.3
db file sequential read 663 0 156 235 0.8 .1
log file sync 165 0 117 712 0.2 .1
Disk file operations I/O 267 0 63 236 0.3 .1
db file scattered read 251 0 36 145 0.3 .0
control file sequential re 217 0 32 149 0.3 .0
library cache load lock 2 0 10 4797 0.0 .0
cursor: pin S wait on X 3 0 9 3149 0.0 .0
read by other session 5 0 2 429 0.0 .0
kfk: async disk IO 613,170 0 2 0 737.9 .0
sort segment request 1 100 1 1007 0.0 .0
os thread startup 16 0 1 43 0.0 .0
direct path write temp 1 0 1 527 0.0 .0
latch free 51 0 0 2 0.1 .0
kksfbc child completion 1 100 0 59 0.0 .0
latch: cache buffers chain 19 0 0 2 0.0 .0
latch: shared pool 36 0 0 1 0.0 .0
PX Deq: Slave Session Stat 21 0 0 1 0.0 .0
library cache: mutex X 45 0 0 1 0.1 .0
CSS initialization 2 0 0 6 0.0 .0
enq: KO - fast object chec 1 0 0 11 0.0 .0
buffer busy waits 3 0 0 1 0.0 .0
cursor: pin S 9 0 0 0 0.0 .0
CSS operation: action 2 0 0 1 0.0 .0
direct path write 1 0 0 2 0.0 .0
jobq slave wait 17,554 100 8,942 509 21.1
PX Deq: Execute Reply 4,060 95 7,870 1938 4.9
SQL*Net message from clien 96 0 5,756 59962 0.1
PX Deq: Execution Msg 618 56 712 1152 0.7
KSV master wait 11 0 0 2 0.0
PX Deq: Join ACK 16 0 0 1 0.0
PX Deq: Parse Reply 14 0 0 1 0.0
Background Wait Events DB/Inst: HDB/hdb Snaps: 158-161
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
control file sequential re 6,249 0 2,375 380 7.5 55.6
control file parallel writ 2,003 0 744 371 2.4 17.4
db file parallel write 1,604 0 503 313 1.9 11.8
log file parallel write 861 0 320 371 1.0 7.5
db file sequential read 363 0 151 415 0.4 3.5
db file scattered read 152 0 64 421 0.2 1.5
Disk file operations I/O 276 0 21 77 0.3 .5
os thread startup 316 0 15 48 0.4 .4
ADR block file read 24 0 11 450 0.0 .3
rdbms ipc reply 17 12 7 403 0.0 .2
Data file init write 6 0 6 1016 0.0 .1
direct path write 21 0 6 287 0.0 .1
log file sync 7 0 6 796 0.0 .1
ADR block file write 10 0 4 414 0.0 .1
enq: JS - queue lock 1 0 3 2535 0.0 .1
ASM file metadata operatio 1,801 0 2 1 2.2 .0
db file parallel read 30 0 1 40 0.0 .0
kfk: async disk IO 955 0 1 1 1.1 .0
db file single write 1 0 0 415 0.0 .0
reliable message 10 0 0 23 0.0 .0
latch: shared pool 75 0 0 2 0.1 .0
latch: call allocation 26 0 0 2 0.0 .0
CSS initialization 7 0 0 6 0.0 .0
asynch descriptor resize 352 100 0 0 0.4 .0
undo segment extension 2 100 0 5 0.0 .0
CSS operation: action 9 0 0 1 0.0 .0
CSS operation: query 42 0 0 0 0.1 .0
latch: parallel query allo 4 0 0 0 0.0 .0
rdbms ipc message 37,948 97 104,599 2756 45.7
DIAG idle wait 16,762 100 16,927 1010 20.2
ASM background timer 1,724 0 8,467 4912 2.1
shared server idle wait 282 100 8,465 30019 0.3
pmon timer 3,123 90 8,465 2711 3.8
wait for unread message on 8,381 100 8,465 1010 10.1
dispatcher timer 141 100 8,463 60019 0.2
Streams AQ: qmn coordinato 604 50 8,462 14010 0.7
Streams AQ: qmn slave idle 304 0 8,462 27836 0.4
smon timer 35 71 8,382 239496 0.0
Space Manager: slave idle 1,621 99 8,083 4986 2.0
PX Idle Wait 2,392 99 4,739 1981 2.9
class slave wait 46 0 623 13546 0.1
KSV master wait 2 0 0 27 0.0
SQL*Net message from clien 7 0 0 1 0.0
Wait Event Histogram DB/Inst: HDB/hdb Snaps: 158-161
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)
% of Waits
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 24 100.0
ADR block file write 10 100.0
ADR file lock 12 100.0
ASM file metadata operatio 1812 99.0 .3 .4 .2 .1
CSS initialization 9 100.0
CSS operation: action 11 90.9 9.1
CSS operation: query 54 100.0
Data file init write 6 16.7 16.7 16.7 50.0
Disk file operations I/O 533 88.7 2.6 .6 1.5 .2 6.4
PX Deq: Signal ACK EXT 4 100.0
PX Deq: Signal ACK RSG 2 100.0
PX Deq: Slave Session Stat 21 42.9 28.6 28.6
SQL*Net break/reset to cli 6 100.0
SQL*Net message to client 102 100.0
SQL*Net more data to clien 4 100.0
asynch descriptor resize 527 100.0
buffer busy waits 4 75.0 25.0
control file parallel writ 2003 9.3 .5 .0 .1 90.0
control file sequential re 6466 10.6 .0 .0 .0 .1 .2 89.0
cursor: pin S 9 100.0
cursor: pin S wait on X 3 33.3 33.3 33.3
db file parallel read 30 6.7 30.0 63.3
db file parallel write 1604 7.4 .1 .6 16.5 75.5
db file scattered read 403 3.7 .2 2.5 13.6 14.9 3.5 61.5
db file sequential read 1017 12.3 .8 2.3 7.3 6.6 2.0 68.8
db file single write 1 100.0
direct path read 522.2 2.2 2.1 .1 .0 1.8 17.9 75.9
direct path write 22 4.5 4.5 90.9
direct path write temp 1 100.0
enq: JS - queue lock 1 100.0
enq: KO - fast object chec 1 100.0
enq: PS - contention 1 100.0
kfk: async disk IO 614.1 100.0 .0
kksfbc child completion 1 100.0
latch free 58 46.6 27.6 15.5 10.3
latch: cache buffers chain 19 36.8 10.5 52.6
latch: call allocation 26 76.9 11.5 7.7 3.8
latch: parallel query allo 4 100.0
latch: shared pool 111 44.1 28.8 27.0
library cache load lock 2 100.0
library cache: mutex X 45 84.4 8.9 4.4 2.2
log file parallel write 861 10.0 .1 .1 89.5 .2
log file sync 172 6.4 90.1 3.5
os thread startup 332 100.0
rdbms ipc reply 18 72.2 11.1 16.7
read by other session 5 100.0
reliable message 11 81.8 9.1 9.1
sort segment request 1 100.0
undo segment extension 2 50.0 50.0
ASM background timer 1724 .8 .6 .1 .6 97.9
DIAG idle wait 16.8K 100.0
KSV master wait 13 7.7 23.1 61.5 7.7
PX Deq: Execute Reply 4060 .4 .0 .0 .1 3.4 96.0
PX Deq: Execution Msg 617 34.7 1.5 2.4 1.5 1.5 .2 .8 57.5
PX Deq: Join ACK 16 93.8 6.3
PX Deq: Parse Reply 14 71.4 7.1 14.3 7.1
PX Idle Wait 2384 .0 .6 99.3
SQL*Net message from clien 103 82.5 1.0 1.9 1.0 13.6
Space Manager: slave idle 1621 .2 99.8
Streams AQ: qmn coordinato 604 50.0 50.0
Wait Event Histogram DB/Inst: HDB/hdb Snaps: 158-161
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)
Edited by: Sachin B on May 11, 2010 4:52 AM
-
CTAS method takes long time - Any suggestion.
Hi,
Please find the query below. The 'select' statement was taking more than a day to execute, and I tuned it; now I get the results in 5 seconds. But when I try to load the data from the 'select' statement into a table using the CTAS method, it runs for more than 5 hours and I don't see the table created. A little background about this query: this is for a data warehousing project, and every table in the 'select' has 5 million records. The result set (which executes in 5 seconds) will again have 5 million records. Loading the 5 million records into the table is what takes the time. Is there a way to improve this?
create table tcu
tablespace tcu_cons
as
select /*+ ORDERED INDEX(a) INDEX(b) INDEX(c) INDEX(d) INDEX(e) INDEX(f) INDEX(g) USE_NL(a,b) USE_NL(b,c) USE_NL(c,d) USE_NL(d,e) USE_NL(e,f) USE_NL(f,g) */
a.prev_mo_tbr_amt,
(b.ytd_tbr_amt + NVL(g.tot_cris_tbr_amt,0)) ytd_tbr_amt,
c.prev_ytd_tbr_amt,
(d.lst_12_mo_tbr_amt + NVL(g.tot_cris_tbr_amt,0)) lst_12_mo_tbr_amt,
e.prev_lst_12_mo_tbr_amt,
f.sm_mo_lst_yr_tbr_amt,
g.tot_cris_tbr_amt,
a.row_id
from tmp_prevmo a,
tmp_ytd b,
tmp_prevytd c,
tmp_lst12 d,
tmp_prevlst12 e,
tmp_smmolstyr f,
csban_1 g
where a.acct_id = b.acct_id
and b.acct_id = c.acct_id
and c.acct_id = d.acct_id
and d.acct_id = e.acct_id
and e.acct_id = f.acct_id
and f.acct_id = g.acct_id;
Thanks,
Subbu.
Hi Alex,
I am talking about the first rows, which returned within 5 seconds.
I have already taken my explain plan and read it. It seems fine, including the cost.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5116K| 356M| 107M (1)|419:26:06 |
| 1 | TABLE ACCESS BY INDEX ROWID | CSBAN_1 | 1 | 10 | 4 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 5116K| 356M| 107M (1)|419:26:06 |
| 3 | NESTED LOOPS | | 5116K| 307M| 87M (1)|339:30:36 |
| 4 | NESTED LOOPS | | 5116K| 263M| 71M (1)|279:29:30 |
| 5 | NESTED LOOPS | | 5120K| 219M| 51M (1)|199:26:29 |
| 6 | NESTED LOOPS | | 5120K| 175M| 35M (1)|139:22:34 |
| 7 | NESTED LOOPS | | 5120K| 136M| 20M (1)| 79:18:39 |
| 8 | TABLE ACCESS BY INDEX ROWID| TMP_PREVMO | 5125K| 102M| 4901K (1)| 19:03:36 |
| 9 | INDEX FULL SCAN | TMP_PREVMO_PK | 5125K| | 7590 (8)| 00:01:47 |
| 10 | TABLE ACCESS BY INDEX ROWID| TMP_YTD | 1 | 7 | 4 (0)| 00:00:01 |
|* 11 | INDEX RANGE SCAN | TMP_YTD_PK | 1 | | 2 (0)| 00:00:01 |
| 12 | TABLE ACCESS BY INDEX ROWID | TMP_PREVYTD | 1 | 8 | 3 (0)| 00:00:01
|* 13 | INDEX RANGE SCAN | TMP_PREVYTD_PK | 1 | | 2 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID | TMP_LST12 | 1 | 9 | 3 (0)| 00:00:01 |
|* 15 | INDEX RANGE SCAN | TMP_LST12_PK | 1 | | 2 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID | TMP_PREVLST12 | 1 | 9 | 4 (0)| 00:00:01
|* 17 | INDEX RANGE SCAN | TMP_PREVLST12_PK | 1 | | 2 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | TMP_SMMOLSTYR | 1 | 9 | 3 (0)| 00:00:01
|* 19 | INDEX RANGE SCAN | TMP_SMMOLSTYR_PK | 1 | | 2 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | CSBAN_1_PK | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
11 - access("A"."ACCT_ID"="B"."ACCT_ID")
13 - access("B"."ACCT_ID"="C"."ACCT_ID")
15 - access("C"."ACCT_ID"="D"."ACCT_ID")
17 - access("D"."ACCT_ID"="E"."ACCT_ID")
19 - access("E"."ACCT_ID"="F"."ACCT_ID")
20 - access("F"."ACCT_ID"="G"."ACCT_ID")
Thanks,
Subbu. -
Question about CTAS in Enterprise Manager (definitely not urgent ;) )
Good afternoon,
I am following the OBE steps given in the link:
http://st-curriculum.oracle.com/obe/db/11g/r2/2day_dba/monitoring/monitoring.htm
On step *12.* the following instructions are given:
12.
Enter employees1 in the Name field. Specify SYSTEM as the
schema and TBSALERT as the tablespace. Click on the Define Using
drop-down list and select SQL. The page is refreshed. Enter
select * from hr.employees in the CREATE TABLE AS field.
Click Options.
In my installation of EM, the "Define Using" drop-down list simply won't drop down, and I am forced to define the fields individually instead of through a CTAS.
The question is: are you able to choose "SQL" from the "Define Using" drop-down list (which would indicate that I am doing something wrong), or do you experience the same problem I am?
Thank you for your help,
John.
>
Not sure why my answer was "correct" ;)
>
well... it's correct because you tried it and you told me how it's working for you - that's all I needed :) Now I know that the problem is something wrong in my installation and not another "quirk" in EM.
>
None of the drop-down options look to be "special".
>
That's the only drop-down that isn't working for me. All the drop-downs in other pages work as expected. I thought it might be a bug in EM but figured I'd ask this question to see if other people experience the same problem. Since the answer is "no"... then there is something wrong with my installation. Why only that drop-down is affected is beyond me, but it is most likely an inadvertent mistake on my part.
Thanks,
John. -
I need to update millions of records to a column in a table that is mostly read only. The fastest way will be doing a CTAS, but the problem is that while doing the CTAS, there can be small updates to the table so if CTAS and renaming tables takes 30 minutes, there is a little bit of risk. Is there a solution to this to keep it in sync right before switching the tables?
970021 wrote:
I need to update millions of records to a column in a table that is mostly read only. The fastest way will be doing a CTAS, but the problem is that while doing the CTAS, there can be small updates to the table so if CTAS and renaming tables takes 30 minutes, there is a little bit of risk. Is there a solution to this to keep it in sync right before switching the tables?
Yes, there is usually a way.
You can add a trigger to the table that writes changes to a transaction table for the 30-minute duration the CTAS is running. After replacing the old table with the new table, run through the transaction table populated by the trigger and apply those changes to the new table.
There are a number of variations on this approach (e.g. using a column to identify changes instead).
It would however be easier to lock down the old table for the duration of the CTAS and prevent changes until the new table is available for use. -
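The trigger approach described above can be sketched roughly as follows. All object names here are hypothetical, and the sketch assumes the source table has a numeric primary key column ID; adapt both to your schema:

```sql
-- Hypothetical change-log table and sequence.
create sequence big_table_chg_seq;

create table big_table_chg (
  chg_id number,
  pk_id  number,
  op     varchar2(1)   -- 'I'nsert / 'U'pdate / 'D'elete
);

-- Trigger on the OLD table: records which rows changed while the CTAS runs.
create or replace trigger big_table_trk
after insert or update or delete on big_table
for each row
begin
  insert into big_table_chg
  values (big_table_chg_seq.nextval,
          case when deleting then :old.id else :new.id end,
          case when inserting then 'I'
               when updating  then 'U'
               else 'D' end);
end;
/
-- After the CTAS and rename: re-apply the rows listed in big_table_chg
-- to the new table, then drop the trigger and the log table.
```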
Tablespace issue when running CTAS
I get the following error when I run the CTAS statement below:
ORA-01652: unable to extend temp segment by 8192 in tablespace ROTL_DATA
create table ROTL.test3 tablespace rotl_data nologging as select * from ROTL.prodrecs3;
I seem to get that even after creating it on 3 different tablespaces. I used a query to get the following information:
TABLESPACE_NAME CUR_USE_MB CUR_SZ_MB CUR_PRCT_FULL FREE_SPACE_MB MAX_SZ_MB OVERALL_PRCT_FULL
--------------- ---------- --------- ------------- ------------- --------- -----------------
ROTL_DATA              198    16,384             1        16,186    16,384                 1
What could be causing my problem and how can I fix it?
Query used for extracting tablespace data:
select tablespace_name,
round(sum(total_mb)-sum(free_mb),2) cur_use_mb,
round(sum(total_mb),2) cur_sz_mb,
round((sum(total_mb)-sum(free_mb))/sum(total_mb)*100) cur_prct_full,
round(sum(max_mb) - (sum(total_mb)-sum(free_mb)),2) free_space_mb,
round(sum(max_mb),2) max_sz_mb,
round((sum(total_mb)-sum(free_mb))/sum(max_mb)*100) overall_prct_full
from
(select tablespace_name,sum(bytes)/1024/1024 free_mb,0 total_mb,
0 max_mb from DBA_FREE_SPACE where tablespace_name='ROTL_DATA' group by tablespace_name
union select tablespace_name,0 current_mb,sum(bytes)/1024/1024 total_mb,
sum(decode(maxbytes, 0, bytes, maxbytes))/1024/1024 max_mb
from DBA_DATA_FILES where tablespace_name='ROTL_DATA' group by tablespace_name)
a group by tablespace_name;
What could be causing my problem and how can I fix it?
01652, 00000, "unable to extend temp segment by %s in tablespace %s"
// *Cause: Failed to allocate an extent of the required number of blocks for
// a temporary segment in the tablespace indicated.
// *Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
// files to the tablespace indicated -
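The Action text above can be sketched as follows. File paths and sizes are hypothetical; use values appropriate to your storage. Note also that your own figures show MAX_SZ_MB capped at 16,384, so if the segment being built by the CTAS needs more than roughly 16 GB, the tablespace cannot grow to hold it without more datafile space:

```sql
-- Add a second datafile to the tablespace (path/size are hypothetical):
alter tablespace rotl_data
  add datafile '/u01/oradata/orcl/rotl_data02.dbf' size 4g
  autoextend on next 512m maxsize 16g;

-- Or allow an existing file to grow (file name is hypothetical):
alter database datafile '/u01/oradata/orcl/rotl_data01.dbf'
  autoextend on next 512m maxsize unlimited;
```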
IAS and CTA 802.1x wired client?
Hi,
We have IAS working with 802.1X authentication. All is good except when we enable dynamic VLAN assignment we come across the Winlogon issue as per MS KB article 935638.
We do however have available the CTA 802.1X wired client. From what I have read though it requires ACS due to use of EAP-FAST. Is this correct or is there some way I can get CTA 802.1X wired client working with MS IAS RADIUS?
Thank you
You will have to use ACS to authenticate CTA 802.1x wired clients with EAP-FAST. It is not possible to get the CTA 802.1X wired client working with MS IAS RADIUS.
-
CTAS with BLOB column very slow
Hi,
I am creating a new table from backup table having BLOB type of data.
Table size for backup table is approx 250G (along with lob segment) and I am creating new table with predicate selectivity of 110 GB.
Query used is "CREATE TABLE CLAIM AS SELECT /*+ PARALLEL(CLAIM_bkp_04,16) */ * FROM CWS_CLAIM_WORK_bkp_04 WHERE pxobjClass <> 'Claim-Controller' or pyStatusWork <> 'Resolved-Completed';"
Even with a parallel degree of 16 for the CTAS, performance is very slow.
Can anyone tell what's the reason for this behavior?
Best Regards
Naveen>
ID| PARENT_ID|OPERATION |OBJECT_NAME | COST|CARDINALITY
----------|----------|----------------------------------------|------------------------------|----------|-----------
0| |CREATE TABLE STATEMENT | | 47410|
1| 0| LOAD AS SELECT | | |
2| 1| PX COORDINATOR | | |
3| 2| PX SEND QC (RANDOM) |:TQ10000 | 24388| 4623495
4| 3| PX BLOCK ITERATOR | | 24388| 4623495
5| 4| PX BLOCK ITERATOR | | 24388| 4623495
6| 5| TABLE ACCESS FULL |CWS_CLAIM_WORK_BKP_04 | 24388| 4623495
it might be faster WITHOUT using Parallel -
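One detail worth checking in the CTAS above (an observation, not a confirmed diagnosis): the hint PARALLEL(CLAIM_bkp_04,16) names an alias that does not appear in the FROM clause - the table is CWS_CLAIM_WORK_bkp_04, unaliased - so the hint refers to nothing and is silently ignored. A sketch with a matching alias and parallel DDL enabled at the session level:

```sql
-- Sketch only: alias the source table to match the hint, and enable
-- parallel DDL so the CREATE side can also run in parallel.
alter session force parallel ddl parallel 16;

create table claim parallel 16 as
select /*+ parallel(b,16) */ *
from   cws_claim_work_bkp_04 b
where  pxobjclass <> 'Claim-Controller'
   or  pystatuswork <> 'Resolved-Completed';
```

Even then, the LOB portion of the copy may not benefit: LOB columns are subject to restrictions on parallel direct-path loading in 10g, so the BLOB data can end up being moved serially regardless of the DOP.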
Different DOP on tables and indexes.
We have a few tables whose degree of parallelism differs from that of their indexes. I read that the table and indexes should have the same degree. Is that a best practice? If so, what are the issues if the DOP is different on the tables and the indexes? Why should they be the same, and what is its effect on the CBO?
I have googled and have not been able to find out why they need to be the same. Can anyone point me to any links or articles on this?
I'm not sure that they NEED to be the same, but I would imagine it affects index costing if the DOP differs between your table and your indexes. You might be left scratching your head as to why Oracle is doing a FTS when your testing shows an index is much faster - and then it turns out you have a larger DOP on your index than your table, etc.
Personally, I would not set DOP at the object level, but force your developers (or yourself) to set it via a hint. Rampant parallelism can cause significant performance problems and I've seen a lot of times that OLTP transactions will fire off extraneous PQ sessions (and cause slower performance for the query) simply because parallelism was set at the object level.
At least if you're doing it through a hint, you're making an explicit decision to use parallelism, as opposed to potentially firing it off without realizing it. It's also easier to change an SQL statement than it is to perform DDL against an object.
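The advice above - keep object-level DOP at serial and ask for parallelism per statement - can be sketched as follows. The owner and object names are hypothetical:

```sql
-- Find objects carrying a non-default degree (DEGREE is stored as a
-- blank-padded string in the dictionary, hence the TRIM):
select table_name, degree
from   dba_tables
where  owner = 'SCOTT'                        -- hypothetical owner
and    trim(degree) not in ('1', 'DEFAULT');

-- Reset object-level DOP to serial on table and index alike...
alter table scott.big_table noparallel;
alter index scott.big_table_ix noparallel;

-- ...then request parallelism explicitly only where it is wanted:
select /*+ parallel(t, 8) */ count(*)
from   scott.big_table t;
```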