DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL
Is there a way to create chunks using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL and assign range values of a specific column?
The common way is to chunk by rowid.
For a better understanding: why would you want to assign a specific column?
It appears you can do it anyway:
http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_parallel_ex.htm#ARPLS67341
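It can be done: CREATE_CHUNKS_BY_SQL with by_rowid => FALSE expects the statement to return start/end pairs of a NUMBER column, and each returned row becomes one chunk. A minimal sketch (MY_TABLE, OBJECT_ID and the UPDATE are placeholders, not from the question):

```sql
DECLARE
  l_task VARCHAR2(30) := 'chunk_by_col_demo';
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => l_task);

  -- Each row returned by the statement becomes one chunk: (start_id, end_id).
  -- Here: buckets of 10000 ids; MY_TABLE/OBJECT_ID are placeholders.
  DBMS_PARALLEL_EXECUTE.create_chunks_by_sql(
    task_name => l_task,
    sql_stmt  => 'SELECT MIN(object_id), MAX(object_id)
                  FROM my_table
                  GROUP BY FLOOR(object_id / 10000)',
    by_rowid  => FALSE);

  -- The worker statement then binds :start_id / :end_id as column values.
  DBMS_PARALLEL_EXECUTE.run_task(
    task_name      => l_task,
    sql_stmt       => 'UPDATE my_table SET flag = 1
                       WHERE object_id BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.drop_task(l_task);
END;
/
```

For evenly sized ranges over a single NUMBER column there is also CREATE_CHUNKS_BY_NUMBER_COL, which avoids writing the chunk SQL by hand.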
Similar Messages
-
DBMS_PARALLEL_EXECUTE multiple threads taking more time than single thread
I am trying to insert 10 million records from source table to target table.
Number of chunks = 100
There are two scenarios:
dbms_parallel_execute(..... parallel_level => 1) -- for single thread
dbms_parallel_execute(..... parallel_level => 10) -- for 10 threads
I observe that the average time taken by 10 threads to process each chunk is 10 times the average time taken in case of single thread.
Ideally the per-chunk time should stay the same, which would reduce the total time taken by a factor of 10 (due to 10 threads).
Because of the behavior mentioned above, the total time taken is the same in both cases.
It would be great if anybody could explain the reason behind such behavior.
Thanks in advance.
Source Table = TEST_SOURCE
Target Table = TEST_TARGET
Both tables have 100 columns
Below is the code:
DECLARE
l_task VARCHAR2(30) := 'test_task_F';
l_sql_stmt VARCHAR2(32767);
l_try NUMBER;
l_stmt VARCHAR2(32767);
l_status NUMBER;
BEGIN
l_stmt := 'select dbms_rowid.rowid_create( 1, data_object_id, lo_fno, lo_block, 0 ) min_rid,
dbms_rowid.rowid_create( 1, data_object_id, hi_fno, hi_block, 10000 ) max_rid
from (
select distinct grp,
first_value(relative_fno)
over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) lo_fno,
first_value(block_id )
over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) lo_block,
last_value(relative_fno)
over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) hi_fno,
last_value(block_id+blocks-1)
over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) hi_block,
sum(blocks) over (partition by grp) sum_blocks
from (
select relative_fno,
block_id,
blocks,
trunc( (sum(blocks) over (order by relative_fno, block_id)-0.01) / (sum(blocks) over ()/100) ) grp
from dba_extents
where segment_name = upper(''TEST_REGION_SOURCE'')
and owner = ''FUSION'' order by block_id
),
(select data_object_id from user_objects where object_name = upper(''TEST_REGION_SOURCE''))
)';
DBMS_PARALLEL_EXECUTE.create_task (task_name => l_task);
DBMS_PARALLEL_EXECUTE.create_chunks_by_sql(task_name => l_task,
sql_stmt => l_stmt,
by_rowid => true);
l_sql_stmt := 'insert into FUSION.TEST_REGION_TARGET(REGION_ID,REGION1,REGION2,REGION3,REGION4,
...., REGION99)
SELECT REGION_ID,REGION1,REGION2,REGION3,REGION4,
.....,REGION99
from FUSION.TEST_REGION_SOURCE WHERE (1=1) AND rowid BETWEEN :start_id AND :end_id ';
DBMS_PARALLEL_EXECUTE.run_task(task_name => l_task,
sql_stmt => l_sql_stmt,
language_flag => DBMS_SQL.NATIVE,
parallel_level => 10);
-- If there is an error, resume the task at most 2 times.
l_try := 0;
l_status := DBMS_PARALLEL_EXECUTE.task_status(l_task);
WHILE(l_try < 2 and l_status != DBMS_PARALLEL_EXECUTE.FINISHED)
Loop
l_try := l_try + 1;
DBMS_PARALLEL_EXECUTE.resume_task(l_task);
l_status := DBMS_PARALLEL_EXECUTE.task_status(l_task);
END LOOP;
DBMS_PARALLEL_EXECUTE.drop_task(l_task);
END;
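One way to see where the time goes with 1 thread versus 10 is the per-chunk timing recorded in the chunk view (a sketch; the task name is the one used above):

```sql
-- Per-chunk processing time for the task created above
SELECT chunk_id, status, start_ts, end_ts,
       end_ts - start_ts AS duration
FROM   user_parallel_execute_chunks
WHERE  task_name = 'test_task_F'
ORDER  BY start_ts;
```

If chunks from the 10-thread run overlap in time but each takes ~10x longer, the workers are contending for a shared resource (I/O, buffer busy waits, ITL, etc.) rather than running independently.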
Edited by: 943978 on Jul 2, 2012 9:22 AM
-
Update statement not working inside DBMS_PARALLEL_EXECUTE
CREATE OR REPLACE procedure
proc_table_mask_par2 (p_tblName in varchar2
,p_strmCount in number)
IS
/*
| Name: proc_table_mask
| Version: 1.0
| Function: The procedure generates the block which will mask the table based on inputs from
|           the MSK_INVENTORY_COLS table
| Modification History:
| Ver: Date:       Who: What:
| 1.0  2012-11-26  HB   Created
*/
vtbl_Name varchar2(100);
vtbl_Count number(19,0);
vStrm_row_count number(19,0);
vCurs_count number(19,0) := 1;
vsql varchar2(30000);
vsql1 varchar2(30000);
vstragg varchar2(4000);
v_try number;
v_status number;
v_status1 number;
vtaskName varchar2(100) := 'Mask job for '||p_tblName ;
pstartnum number(19,0);
pendnum number(19,0);
v_prim_key varchar2(100);
--retries_in PLS_INTEGER DEFAULT 2;
--l_attempts PLS_INTEGER := 1;
begin
-- Use function Stragg to get the update statement from MSK_INVENTORY_COLS
select stragg(MIC.COLUMN_NAME || ' = ' || MIC.FUNCTION_NAME|| '(' ||MIC.COLUMN_NAME||')') into vstragg from MSK_INVENTORY_COLS mic
WHERE MIC.TABLE_NAME = p_tblName;
EXECUTE IMMEDIATE 'select count(1) from '||p_tblName into vtbl_Count;
--DBMS_OUTPUT.PUT_LINE ( 'vtbl_Count : ' ||vtbl_Count);
vStrm_row_count := round( vtbl_Count/p_strmCount);
dbms_output.put_line(' vStrm_row_count : ' || vStrm_row_count);
-- Update statement
vsql := vsql ||chr(10) || ' UPDATE '|| p_tblName || ' /*+ parallel ( '||p_tblName ||', '||p_strmCount||') */ ' ;
vsql := vsql ||chr(10) || ' SET '|| vstragg;
vsql := vsql ||chr(10) || ' , lock_id = -1 ' ;
vsql := vsql ||chr(10) || 'WHERE ROWID BETWEEN :starting_rowid AND :ending_rowid' ;
vsql := vsql ||chr(10) || ' and lock_id <> -1 ; ' ;
dbms_output.put_line (' vsql : ' || vsql);
DBMS_PARALLEL_EXECUTE.CREATE_TASK (vtaskName);
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (task_name => vtaskName
, table_owner => SYS_CONTEXT ('USERENV', 'SESSION_USER') --USER
, table_name => p_tblName
, by_row => TRUE
, chunk_size => vStrm_row_count
);
DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => vtaskName
, sql_stmt => vsql
, language_flag => DBMS_SQL.native
, parallel_level => p_strmCount
);
-- Only resume for the following
-- INVALID_STATE_FOR_RESUME: ORA-29495
-- Attempts to resume execution, but the task is not in FINISHED_WITH_ERROR or CRASHED state
-- Constant value for CRASHED = 8
-- Constant value for FINISHED_WITH_ERROR = 7
v_try := 0;
v_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
dbms_output.put_line (' v_status : ' || v_status);
dbms_output.put_line (' v_try : ' || v_try);
WHILE (v_try < 2 AND v_status IN (DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR, DBMS_PARALLEL_EXECUTE.CRASHED))
LOOP
v_try := v_try + 1;
DBMS_OUTPUT.PUT_LINE (' Why am I getting into this loop : ' );
DBMS_PARALLEL_EXECUTE.RESUME_TASK(vtaskName);
v_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
END LOOP;
DBMS_PARALLEL_EXECUTE.DROP_TASK(vtaskName);
exception
when others then
dbms_output.put_line(sqlerrm);
raise;
end;
Gurus
I am executing the procedure above using the following anonymous block.
DECLARE
P_TBLNAME VARCHAR2(32767);
P_STRMCOUNT NUMBER;
BEGIN
P_TBLNAME := 'EMPLOYEE_DIM';
P_STRMCOUNT := 10;
A516907.PROC_TABLE_MASK_PAR2 ( P_TBLNAME, P_STRMCOUNT );
COMMIT;
END;
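When a task stalls like this, the chunk view shows the status and any error captured per chunk (a sketch; the task name is the one built inside the procedure):

```sql
-- Per-chunk status and error for the masking task
SELECT chunk_id, status, error_code, error_message
FROM   user_parallel_execute_chunks
WHERE  task_name LIKE 'Mask job for %'
ORDER  BY chunk_id;
```

If no chunk rows were ever processed, the task likely never got past the CHUNKED state, which points at the worker statement rather than the chunking.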
I have used dbms_output for getting values for the following variables. When I check the values the update does not seem to be working.
vStrm_row_count : 60143
vsql :
UPDATE EMPLOYEE_DIM /*+ parallel ( EMPLOYEE_DIM, 10) */
SET
BUSINESS_TITLE_NM = FN_TITLE_DRM_ENCRYPTNSUM(BUSINESS_TITLE_NM),COST_CENTER_CD =
FN_COSTCTR_DRM_ENCRYPTNSUM(COST_CENTER_CD),DIRECT_MGR_NM =
FN_DRM_REG_ADDR_TEXT(DIRECT_MGR_NM),FIRST_NM =
FN_FNM_DRM_ENCRYPTNSUM(FIRST_NM),LAST_FIRST_FULL_NM =
FN_DRM_REG_ADDR_TEXT(LAST_FIRST_FULL_NM),LAST_FIRST_MIDDLE_FULL_NM =
FN_DRM_REG_ADDR_TEXT(LAST_FIRST_MIDDLE_FULL_NM),LAST_NM =
FN_LNM_DRM_ENCRYPTNSUM(LAST_NM),PHONE_NO =
FN_PHONE_DRM_ENCRYPTNSUM(PHONE_NO),PRIMARY_EMAIL_ADDRESS_NM =
FN_EMAIL_DRM_ENCRYPTNSUM(PRIMARY_EMAIL_ADDRESS_NM)
, lock_id = -1
WHERE ROWID
BETWEEN :starting_rowid AND :ending_rowid
and lock_id <> -1 ;
v_status : 4
v_try : 0
I tried to do the update using chunk SQL and ran the procedure again. No updates are made.
CREATE OR REPLACE procedure
proc_table_mask_chunk_sql (p_tblName in varchar2
,p_strmCount in number)
IS
/*
| Name: A516907.proc_table_mask
| Version: 1.0
| Function: The procedure generates the block which will mask the table based on inputs from
|           the MSK_INVENTORY_COLS table
| Modification History:
| Ver: Date:       Who: What:
| 1.0  2012-11-26  HB   Created
*/
vtbl_Name varchar2(100);
vtbl_Count number(19,0);
vStrm_row_count number(19,0);
vCurs_count number(19,0) := 1;
vsql varchar2(1000);
vsql_pk varchar2(1000);
vstragg varchar2(4000);
vtaskName varchar2(100) := 'Mask Data in table '||p_tblName ;
pstartnum number(19,0);
pendnum number(19,0);
upd_st number(19,0) := 1;
v_prim_key varchar2(100);
l_try NUMBER;
l_status NUMBER;
begin
DBMS_PARALLEL_EXECUTE.CREATE_TASK (vtaskName);
-- Use function Stragg to get the update statement from MSK_INVENTORY_COLS
select stragg(MIC.COLUMN_NAME || ' = ' || MIC.FUNCTION_NAME|| '(' ||MIC.COLUMN_NAME||')') into vstragg from MSK_INVENTORY_COLS mic
WHERE MIC.TABLE_NAME = p_tblName;
select stragg(UCC.COLUMN_NAME) COLUMN_NAME into v_prim_key
from user_constraints uc , user_cons_Columns ucc
where UC.CONSTRAINT_TYPE = 'P'
and UC.CONSTRAINT_NAME = UCC.CONSTRAINT_NAME
and UCC.TABLE_NAME = p_tblName;
vsql_pk := 'SELECT distinct ' || v_prim_key || ','|| v_prim_key || ' FROM ' || p_tblName;
DBMS_OUTPUT.PUT_LINE ( 'vsql_pk : ' ||vsql_pk);
--EXECUTE IMMEDIATE ' select stragg(COLUMN_NAME ||''=''||FUNCTION_NAME||''(''||COLUMN_NAME||'')'') from MSK_INVENTORY_COLS WHERE TABLE_NAME = ' ||p_tblName INTO vstragg ;
--EXECUTE IMMEDIATE 'select count(1) from vtbl_Name' into vtbl_Count;
EXECUTE IMMEDIATE 'select count(1) from '||p_tblName into vtbl_Count;
--DBMS_OUTPUT.PUT_LINE ( 'vtbl_Count : ' ||vtbl_Count);
vStrm_row_count := round( vtbl_Count/p_strmCount);
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(vtaskName, vsql_pk, false);
--DBMS_OUTPUT.PUT_LINE ( 'vStrm_row_count : ' ||vStrm_row_count);
--EXECUTE IMMEDIATE 'SELECT MIN( '||v_prim_key||') from ' ||p_tblName into pstartnum;
----dbms_output.put_line (' pstartnum : ' || pstartnum);
--pendnum := vStrm_row_count;
----dbms_output.put_line (' pendnum : ' || pendnum);
-- Update statement
vsql := vsql ||chr(10) || ' UPDATE '|| p_tblName || ' /*+ parallel ( '||p_tblName ||', '||p_strmCount||') */ ' ;
vsql := vsql ||chr(10) || ' SET '|| vstragg;
vsql := vsql ||chr(10) || ' , lock_id = -1 WHERE ' ;
vsql := vsql ||chr(10) || v_prim_key|| ' BETWEEN :start_id and :end_id ';
vsql := vsql ||chr(10) || ' and lock_id <> -1 ; ' ;
--DBMS_PARALLEL_EXECUTE.CREATE_TASK (vtaskName||'_'||upd_st);
DBMS_PARALLEL_EXECUTE.RUN_TASK ( vtaskName
, vsql
, DBMS_SQL.native
, parallel_level => p_strmCount
);
l_try := 0;
l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
WHILE (l_try < 2 AND l_status IN (DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR, DBMS_PARALLEL_EXECUTE.CRASHED))
LOOP
l_try := l_try + 1;
DBMS_PARALLEL_EXECUTE.RESUME_TASK(vtaskName);
l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
END LOOP;
end;
Block run :
DECLARE
P_TBLNAME VARCHAR2(32767);
P_STRMCOUNT NUMBER;
BEGIN
P_TBLNAME := 'EMPLOYEE_DIM';
P_STRMCOUNT := 10;
A516907.PROC_TABLE_MASK_CHUNK_SQL ( P_TBLNAME, P_STRMCOUNT );
COMMIT;
END;
/
-
Are there any database parameters we need to set when using DBMS_PARALLEL_EXECUTE?
Hi,
I am using the dbms_parallel_execute package to do some processing; I call a procedure to do some tasks and insert data using chunks by SQL. Here is my code; I am calling the P_process procedure. I am using 11g (11.2.0.3.0 - 64 bit) on Windows Server.
DBMS_PARALLEL_EXECUTE.CREATE_TASK ('process a');
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (
task_name => 'process a',
sql_stmt => 'SELECT startid, endid FROM chng_chunks',
by_rowid => FALSE);
dbms_parallel_execute.run_task
( task_name => 'process a',
sql_stmt => 'begin P_process( :start_id, :end_id ); end;',
language_flag => DBMS_SQL.NATIVE,
parallel_level => 24 );
This code runs very fast on one database and I can see it uses lots of CPUs, but it runs very slowly on a copy of the same database on another server which has more CPUs and memory. I compared v$parameter values and those are pretty much identical between databases. I checked the disk space and both servers have plenty of free space on disks.
Now my question is: are there any other parameters that we need to set/check when using the dbms_parallel_execute package?
Thanks in advance.
gg
I don't get this. Ever. Why developers insist on comparing server1 with server2, simply because their code is running on both.
It is like comparing the athletic ability of two persons, arguing that h/w-wise they are the same (i.e. human), and both have green eyes (your same software). And because these are all the same, both persons should be able to run the 100m in the same time.
Yes, the analogy is silly.. as is the warped concept amongst many developers that server1 and server2 should exhibit the same performance when running the same code.
It. Does. Not. Work. Like. That.
Want to know why server2 is exhibiting the performance it does when running your code?
Do that by ignoring server1 as it is NOT RELEVANT.
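A starting sketch for examining server2 directly while the task runs (DBMS_PARALLEL_EXECUTE chunk workers are scheduler jobs, typically named like TASK$_n_m; privileged access to these views is assumed):

```sql
-- Which chunk jobs are running, and for how long
SELECT job_name, session_id, elapsed_time
FROM   dba_scheduler_running_jobs
WHERE  job_name LIKE 'TASK$%';

-- What those sessions are currently waiting on
SELECT sid, event, state, seconds_in_wait
FROM   v$session
WHERE  sid IN (SELECT session_id
               FROM   dba_scheduler_running_jobs
               WHERE  job_name LIKE 'TASK$%');
```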
Do that by examining the workloads and resource usage of server2, and the run and wait states of your code on server2.
-
*Dynamic* Table Name in From Clause with input from cursor
Hello
I have a cursor say...
select table_name from dba_tables where <blah>
The result is
row1:: emp
row2:: emp_1 ---> Both tables have the same structure and entirely different data... please don't ask why... that's the way it is and we can't change it.
Now we need to run an Insert...
insert into tableX (col1,col2,...) select a,b,... from <o/p of the cursor> where <blah> ...
Note: The table name changes; the cursor can output emp, emp_a and emp_b.
I am looking to do it in parallel instead of serially, with the best performance... no SQL injection issues.
By parallel i mean
insert into tableX (col1,col2,...) select a,b,... from emp where <blah>
and insert into tableX (col1,col2,...) select a,b,... from emp_1 where <blah> statements firing in parallel/at the same time against the database. If you can share a procedure you already have, it is really appreciated.
Thanks a lot for your time....
Edited by: user007009 on Apr 27, 2013 8:33 PM
Hello, thanks for your time.
I tried to implement the chunk-by-SQL parallel execution approach and it took 3.1 seconds to complete, while the SP took around 0.042 seconds; the parallel process didn't throw any errors and it didn't insert any data either. I am not sure what I am doing wrong... can you please let me know your thoughts?
Sample Data Creation:
drop table table_ASERCARE purge;
drop table table_MEDCARE purge;
DROP TABLE TABLE_XYCARE PURGE;
DROP TABLE TABLE_TIME PURGE;
DROP TABLE TABLE_LOCATION PURGE;
drop table table_group purge;
drop table tablex purge;
-- select distinct TABLE_NAME from ALL_TAB_COLS where TABLE_NAME like 'EMP%';
create table table_asercare (time number(30), location_number number(5), value number(5),catg_id number(5));
insert into table_asercare values (20110111, 01, 55, 1200);
insert into table_asercare values (20110131, 01, 31, 1223);
insert into table_asercare values (20120131, 15, 24,1224);
insert into table_ASERCARE values (20130131, 03, 555,1200);
-- Truncate table table_MEDCARE
create table table_medcare (time number(30), location_number number(5), value number(5),catg_id number(5));
insert into table_medcare values (20110113, 01, 23, 1200);
insert into table_medcare values (20110128, 02, 78, 1223);
insert into table_medcare values (20110130, 03, 100, 1224);
insert into table_medcare values (20120111, 04, 57, 1200);
insert into table_medcare values (20120221, 05, 64, 1223);
insert into table_MEDCARE values (20130321, 15, 48, 1224);
create table table_xycare (time number(30), location_number number(5), value number(5),catg_id number(5));
insert into table_xycare values (20100113, 01, 99, 1200);
insert into table_xycare values (20110128, 02, 90, 1223);
insert into table_XYCARE values (20130128, 03, 24, 1224);
create table table_LOCATION ( LOCATION_NUMBER number(5), LOCATION_NAME varchar2(50));
insert into table_LOCATION values (01, 'atlanta1');
insert into table_LOCATION values (02, 'atlanta2');
insert into table_LOCATION values (03, 'atlanta3');
insert into table_LOCATION values (04, 'atlanta4');
insert into table_LOCATION values (05, 'atlanta5');
insert into table_location values (15, 'atlanta15');
create table table_category (catg_id number(5), catg_name varchar2(30));
insert into table_category values (1200, 'EMS');
insert into table_category values (1223, 'LJM');
insert into table_category values (1224, 'LIO');
create table table_TIME (YEAR_MONTH_DATE number(30), YEAR_VAL number(4), MONTH_VAL number(2),DATE_VAL number(2));
insert into table_TIME values (20110111, 2011, 01,11 );
insert into table_TIME values (20110131, 2011, 01,31);
insert into table_TIME values (20120131, 2012, 01,31);
insert into table_TIME values (20130131, 2013, 01,31);
insert into table_TIME values (20110128, 2011, 01,28 );
insert into table_TIME values (20110130, 2011, 01,30 );
insert into table_TIME values (20120111, 2012, 01,11 );
insert into table_TIME values (20120221, 2012, 02,21 );
insert into table_TIME values (20130321, 2013, 03,21 );
insert into table_TIME values (20100113, 2010, 01,13 );
insert into table_TIME values (20130128, 2013, 01,28 );
--Truncate table table_group
CREATE TABLE table_group (group_key number,table_name VARCHAR2(30), group_name VARCHAR2(30), catg_name varchar2(30));
insert into table_group values (1,'table_ASERCARE', 'GROUP_ONE','EMS');
insert into table_group values (2,'table_MEDCARE', 'GROUP_ONE','LJM');
INSERT INTO TABLE_GROUP VALUES (3,'table_XYCARE', 'GROUP_TWO','LIO');
create table TABLEX (YEAR_VAL number(4) ,LOCATION_NAME varchar2(50),tablename VARCHAR2(30), cnt number ); --> Proc data will be inserted into this...
Stored Procedure:
CREATE OR REPLACE
PROCEDURE ABC(
GROUP_NAME_IN IN VARCHAR2 )
is
type c1 is ref cursor;
sql_stmt VARCHAR2(200);
v_sql VARCHAR2(30000);
c1_cv c1;
table_name_f VARCHAR2(30);
c1_rec TABLE_GROUP%rowtype;
BEGIN
SQL_STMT := 'SELECT * FROM TABLE_GROUP WHERE GROUP_NAME = :i';
OPEN c1_cv FOR SQL_STMT USING GROUP_NAME_IN ;
loop
fetch c1_cv INTO c1_rec;
exit when c1_cv%notfound;
-- forall i in c1_rec.FIRST ..c1_rec.last loop
table_name_f := c1_rec.table_name;
-- END LOOP;
EXECUTE immediate
'INSERT INTO tablex (YEAR_VAL,LOCATION_NAME, tablename, cnt)
SELECT
t.YEAR_VAL,l.location_name, :table_name, count(*) as cnt
FROM '
||table_name_f||
' variable_table
,table_time t
, table_location l
,table_group g
,table_category ctg
WHERE t.year_month_date = variable_table.TIME
and variable_table.location_number = l.location_number
and ctg.catg_id = variable_table.catg_id
--and ctg.catg_name = g.catg_name
GROUP BY t.YEAR_VAL,l.location_name,g.catg_name' USING table_name_f;
--dbms_output.put_line ( 'The SQL is'|| v_sql);
COMMIT;
--dbms_output.put_line ( c1_rec.table_name||','||c1_rec.group_name );
--dbms_output.put_line ( 'The table name is '|| c1_rec.table_name );
end loop;
CLOSE c1_cv;
--null;
END ABC;
Parallel Execution Code:
begin
begin
DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
exception when others then null;
end;
DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME', sql_stmt =>'select distinct group_key, group_key from table_group', by_rowid => false);
end;
begin
DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
sql_stmt =>'declare
s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
table_name varchar2(30);
begin
select table_name into table_name from table_group where group_key=vstart_id;
s:=''INSERT INTO tablex (YEAR_VAL,LOCATION_NAME, tablename, cnt)
SELECT
t.YEAR_VAL,l.location_name, :table_name, count(*) as cnt
FROM ''||table_name||'' variable_table
,table_time t
, table_location l
,table_group g
,table_category ctg
WHERE t.year_month_date = variable_table.TIME
and variable_table.location_number = l.location_number
and ctg.catg_id = variable_table.catg_id
and ctg.catg_name = g.catg_name
and g.group_key =:vstart_id
GROUP BY t.YEAR_VAL,l.location_name,g.catg_name'';
execute immediate s using table_name, vstart_id;
commit;
end;',
language_flag => DBMS_SQL.NATIVE, parallel_level => 2 );
end;
/
Thanks in advance for your time.
Edited by: user007009 on Apr 28, 2013 12:25 PM
-
Hi,
I have source data which keeps on populating. I want to process the records in parallel using an Oracle stored procedure. I should not reprocess records that are already processed. Please suggest a logic.
Right now I have only the option of calling the procedure multiple times in a Unix environment.
Regards,
Venkat
An example of how to create a workloads table that enables the target table (to process) to be chunked and processed.
SQL> --// we have a table with a PK that is not a gap-free
SQL> --// sequential number
SQL> create table sample_data(
2 id primary key,
3 last_update,
4 name
5 ) organization index
6 nologging as
7 select
8 object_id,
9 sysdate,
10 object_name
11 from all_objects
12 /
Table created.
SQL>
SQL> --// first couple of rows - as can be seen, we cannot
SQL> --// use id column ranges as the 1-1000 range could be
SQL> --// 10 rows, the 1001-2000 range 1 row, and the 2001-3000
SQL> --// range 900 rows.
SQL> select * from(
2 select * from sample_data order by 1
3 ) where rownum < 11;
ID LAST_UPDATE NAME
100 2013/07/08 09:10:01 ORA$BASE
116 2013/07/08 09:10:01 DUAL
117 2013/07/08 09:10:01 DUAL
280 2013/07/08 09:10:01 MAP_OBJECT
365 2013/07/08 09:10:01 SYSTEM_PRIVILEGE_MAP
367 2013/07/08 09:10:01 SYSTEM_PRIVILEGE_MAP
368 2013/07/08 09:10:01 TABLE_PRIVILEGE_MAP
370 2013/07/08 09:10:01 TABLE_PRIVILEGE_MAP
371 2013/07/08 09:10:01 STMT_AUDIT_OPTION_MAP
373 2013/07/08 09:10:01 STMT_AUDIT_OPTION_MAP
10 rows selected.
SQL>
SQL> --// we create a workloads table - we'll use this to
SQL> --// hand out work to a parallel process
SQL> create table workloads(
2 workload_name varchar2(30),
3 workload_id number,
4 workload_data number,
5 constraint pk_workloads primary key
6 ( workload_name, workload_id )
7 ) organization index
8 /
Table created.
SQL>
SQL> --// we create the workloads (1 workload per rows in
SQL> --// this example) for processing the sample_data table
SQL> insert into workloads
2 select
3 'SampleData1',
4 rownum,
5 id
6 from sample_data;
57365 rows created.
SQL> commit;
Commit complete.
SQL>
SQL> --// we can now chunk the SampleData1 workload using the
SQL> --// workload_id as it is a sequential gap free number to
SQL> --// use for even distribution of work
SQL> col VALUE format 999,999,999
SQL> select 'Start at ' as "LABEL", min(workload_id) as "VALUE" from workloads where workload_name = 'SampleData1'
2 union all
3 select 'End at ', max(workload_id) from workloads where workload_name = 'SampleData1'
4 union all
5 select 'For rowcount ', count(*) from workloads where workload_name = 'SampleData1'
6 /
LABEL VALUE
Start at 1
End at 57,365
For rowcount 57,365
SQL>
SQL> --// for example, we want to create 10 workload buckets and
SQL> --// fill each with a range of workload identifiers
SQL> with workload_totals( start_id, end_id ) as(
2 select min(workload_id), max(workload_id) from workloads where workload_name = 'SampleData1'
3 ),
4 buckets_needed( b ) as(
5 select ceil(end_id/10) from workload_totals
6 ),
7 buckets( workload_id, bucket) as (
8 select workload_id, ceil(workload_id/b) from workloads, buckets_needed where workload_name = 'SampleData1'
9 order by 1
10 )
11 select
12 row_number() over(order by min(workload_id)) as BUCKET,
13 min(workload_id) as START_WORKLOAD_ID,
14 max(workload_id) as END_WORKLOAD_ID
15 from buckets
16 group by bucket
17 order by 1
18 /
BUCKET START_WORKLOAD_ID END_WORKLOAD_ID
1 1 5737
2 5738 11474
3 11475 17211
4 17212 22948
5 22949 28685
6 28686 34422
7 34423 40159
8 40160 45896
9 45897 51633
10 51634 57365
10 rows selected.
SQL>
SQL> --// we need now a procedure that will work as a thread and be called, in
SQL> --// parallel, to process a workload bucket
SQL> create or replace procedure ProcessWorkload(
2 workloadName varchar2,
3 startID number,
4 endID number
5 ) is
6 begin
7 --// we make this simple - we read the workload data from
8 --// workloads table and do an update of sample data
9 update(
10 select
11 s.*
12 from sample_data s
13 where s.id in(
14 select
15 w.workload_data
16 from workloads w
17 where w.workload_name = workloadName
18 and w.workload_id between startID and endID
19 )
20 )
21 set name = lower(name),
22 last_update = sysdate;
23 end;
24 /
Procedure created.
SQL>
SQL> --// process sample_data in parallel via 10 workload buckets, using 2
SQL> --// parallel processes
SQL> var c refcursor
SQL> declare
2 taskName varchar2(30);
3 begin
4 taskName := 'Parallel-PQ-Process1';
5 DBMS_PARALLEL_EXECUTE.create_task( taskName );
6
7 --// we use our SQL above to create 10 buckets, with each bucket
8 --// (or chunk) specifying the start and end workload id's to
9 --// process
10 DBMS_PARALLEL_EXECUTE.create_chunks_by_sql(
11 task_name => taskName,
12 sql_stmt =>
13 'with workload_totals( start_id, end_id ) as(
14 select min(workload_id), max(workload_id) from workloads where workload_name = ''SampleData1''
15 ),
16 buckets_needed( b ) as(
17 select ceil(end_id/10) from workload_totals
18 ),
19 buckets( workload_id, bucket) as (
20 select workload_id, ceil(workload_id/b) from workloads, buckets_needed where workload_name = ''SampleData1''
21 order by 1
22 )
23 select
24 min(workload_id),
25 max(workload_id)
26 from buckets
27 group by bucket ',
28 by_rowid => false
29 );
30
31 --// next we process the 10 buckets/chunks, two at a time
32 DBMS_PARALLEL_EXECUTE.Run_Task(
33 task_name => taskName,
34 sql_stmt => 'begin ProcessWorkload(''SampleData1'',:start_id,:end_id); end;',
35 language_flag => DBMS_SQL.NATIVE,
36 parallel_level => 2
37 );
38
39 --// wait for it to complete
40 while DBMS_PARALLEL_EXECUTE.task_status( taskName ) != DBMS_PARALLEL_EXECUTE.Finished loop
41 DBMS_LOCK.Sleep(10);
42 end loop;
43
44 --// stats cursor
45 open :c for
46 select
47 task_name,
48 job_name,
49 chunk_id,
50 status,
51 start_id,
52 end_id,
53 end_id-start_id as UPDATES_DONE,
54 to_char(start_ts,'hh24:mi:ss.ff') as START_TS,
55 to_char(end_ts,'hh24:mi:ss.ff') as END_TS,
56 end_ts-start_ts as DURATION
57 from user_parallel_execute_chunks
58 where task_name = taskName
59 order by 1,2;
60
61 --// remove task
62 DBMS_PARALLEL_EXECUTE.drop_task( taskName );
63 end;
64 /
PL/SQL procedure successfully completed.
SQL>
SQL> col TASK_NAME format a20
SQL> col JOB_NAME format a20
SQL> col START_TS format a15
SQL> col END_TS format a15
SQL> col DURATION format a28
SQL> print c
TASK_NAME JOB_NAME CHUNK_ID STATUS START_ID END_ID UPDATES_DONE START_TS END_TS DURATION
Parallel-PQ-Process1 TASK$_15266_1 5821 PROCESSED 1 5737 5736 09:10:19.066131 09:10:19.313965 +000000000 00:00:00.247834
Parallel-PQ-Process1 TASK$_15266_1 5822 PROCESSED 28686 34422 5736 09:10:19.315984 09:10:19.682781 +000000000 00:00:00.366797
Parallel-PQ-Process1 TASK$_15266_1 5823 PROCESSED 5738 11474 5736 09:10:19.684925 09:10:19.818145 +000000000 00:00:00.133220
Parallel-PQ-Process1 TASK$_15266_1 5824 PROCESSED 17212 22948 5736 09:10:19.818694 09:10:19.950409 +000000000 00:00:00.131715
Parallel-PQ-Process1 TASK$_15266_1 5830 PROCESSED 51634 57365 5731 09:10:20.605421 09:10:20.721599 +000000000 00:00:00.116178
Parallel-PQ-Process1 TASK$_15266_1 5826 PROCESSED 40160 45896 5736 09:10:20.080270 09:10:20.214443 +000000000 00:00:00.134173
Parallel-PQ-Process1 TASK$_15266_1 5827 PROCESSED 11475 17211 5736 09:10:20.217662 09:10:20.339758 +000000000 00:00:00.122096
Parallel-PQ-Process1 TASK$_15266_1 5828 PROCESSED 34423 40159 5736 09:10:20.342242 09:10:20.453802 +000000000 00:00:00.111560
Parallel-PQ-Process1 TASK$_15266_1 5829 PROCESSED 45897 51633 5736 09:10:20.454376 09:10:20.603116 +000000000 00:00:00.148740
Parallel-PQ-Process1 TASK$_15266_1 5825 PROCESSED 22949 28685 5736 09:10:19.950975 09:10:20.079311 +000000000 00:00:00.128336
10 rows selected.
SQL>
SQL> --// updated sample data
SQL> select * from(
2 select * from sample_data order by 1
3 ) where rownum < 11;
ID LAST_UPDATE NAME
100 2013/07/08 09:10:19 ora$base
116 2013/07/08 09:10:19 dual
117 2013/07/08 09:10:19 dual
280 2013/07/08 09:10:19 map_object
365 2013/07/08 09:10:19 system_privilege_map
367 2013/07/08 09:10:19 system_privilege_map
368 2013/07/08 09:10:19 table_privilege_map
370 2013/07/08 09:10:19 table_privilege_map
371 2013/07/08 09:10:19 stmt_audit_option_map
373 2013/07/08 09:10:19 stmt_audit_option_map
10 rows selected.
SQL>
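If the target table has a usable NUMBER column, even a gappy one like SAMPLE_DATA.ID above, the worktable can sometimes be skipped entirely with CREATE_CHUNKS_BY_NUMBER_COL. A sketch (gappy ids simply yield uneven, possibly empty, chunks):

```sql
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task('NumColTask');

  -- Splits MIN(id)..MAX(id) into fixed-width ranges of 5000 ids each
  DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
    task_name    => 'NumColTask',
    table_owner  => USER,
    table_name   => 'SAMPLE_DATA',
    table_column => 'ID',
    chunk_size   => 5000);

  DBMS_PARALLEL_EXECUTE.run_task(
    task_name      => 'NumColTask',
    sql_stmt       => 'UPDATE sample_data
                       SET    name = lower(name), last_update = SYSDATE
                       WHERE  id BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 2);

  DBMS_PARALLEL_EXECUTE.drop_task('NumColTask');
END;
/
```

Note that with very uneven id distributions, chunk row counts will vary widely, which is exactly the problem the workloads table above was built to avoid.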
To eliminate the workload table approach, you need to find an alternative method for chunking the target table. Whether such a method is available depends entirely on the structure and nature of your target table (and is the preferred approach over using a secondary worktable to drive parallel processing). -
Call to a procedure in DBMS_PARALLEL_EXECUTE package
Hi All,
I have a procedure that takes an input parameter; I need to call this procedure in parallel. I am going to use the DBMS_PARALLEL_EXECUTE package to parallelize the process. Here is an example of what I want to do; please note that there is a third parameter, newParameter.
l_sql_stmt := 'BEGIN process_update(:start_id, :end_id, ' || newParameter || ' ); END;';
DBMS_PARALLEL_EXECUTE.run_task(task_name => l_task,
sql_stmt => l_sql_stmt,
language_flag => DBMS_SQL.NATIVE,
parallel_level => 10);
I create the task and chunks using SQL, but RUN_TASK does not start processing. Can we do this?
Thanks in advance.
>
I have a procedure that takes an input parameter; I need to call this procedure in parallel. I am going to use the DBMS_PARALLEL_EXECUTE package to parallelize the process. Here is an example of what I want to do; please note that there is a third parameter, newParameter.
l_sql_stmt := 'BEGIN process_update(:start_id, :end_id, ' || newParameter || ' ); END;';
DBMS_PARALLEL_EXECUTE.run_task(task_name => l_task,
sql_stmt => l_sql_stmt,
language_flag => DBMS_SQL.NATIVE,
parallel_level => 10);
I create the task and chunks using SQL, but RUN_TASK does not start processing. Can we do this?
>
We have no way of knowing if you can do what you are trying to do since you didn't post the code you are using to do it.
You can use a stored procedure to process the workload if that is what you are asking.
See this Oracle-base article for an example of using a stored procedure for the workload.
http://www.oracle-base.com/articles/11g/dbms_parallel_execute_11gR2.php#create_chunks_by_sql
>
The following example shows the processing of a workload chunked by a number column. Notice that the workload is actually a stored procedure in this case.
>
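On the third parameter itself: RUN_TASK only ever binds :start_id and :end_id, so an extra argument has to be concatenated into the block as a literal when it is built. A sketch for a VARCHAR2 value (process_update and newParameter are the names from the question; DBMS_ASSERT guards the concatenation):

```sql
-- Only :start_id/:end_id are bound by RUN_TASK; embed the rest as a literal
l_sql_stmt := 'BEGIN process_update(:start_id, :end_id, '
           || DBMS_ASSERT.ENQUOTE_LITERAL(newParameter)
           || '); END;';
```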
You did NOT provide any code that shows how you plan to provide that 'third' parameter so maybe that is where your problem is. -
Dbms_Parallel_Execute run as a Dbms_Scheduler job
Hi,
I have tried to use Dbms_Parallel_Execute to update a column in different tables.
This works fine when I run it from SQL*Plus or similar.
But if I try to run the code as a background job using Dbms_Scheduler it hangs on the procedure Dbms_Parallel_Execute.Run_Task.
The session seems to hang forever.
If I kill the session of the background job, the task ends up in state FINISHED and the update has been completed.
If I look on the session it seems to be waiting for event "pl/sql lock timer".
Anyone who knows what can go wrong when running this code as a background job using Dbms_Scheduler?
Code example:
CREATE OR REPLACE PROCEDURE Execute_Task___ (
table_name_ IN VARCHAR2,
stmt_ IN VARCHAR2,
chunk_size_ IN NUMBER DEFAULT 10000,
parallel_level_ IN NUMBER DEFAULT 10 )
IS
task_ VARCHAR2(30) := Dbms_Parallel_Execute.Generate_Task_Name;
status_ NUMBER;
error_occurred EXCEPTION;
BEGIN
Dbms_Parallel_Execute.Create_Task(task_name => task_);
Dbms_Parallel_Execute.Create_Chunks_By_Rowid(task_name => task_,
table_owner => Fnd_Session_API.Get_App_Owner,
table_name => table_name_,
by_row => TRUE,
chunk_size => chunk_size_);
-- Example statement
-- stmt_ := 'UPDATE Test_TAB SET rowkey = sys_guid() WHERE rowkey IS NULL AND rowid BETWEEN :start_id AND :end_id';
Dbms_Parallel_Execute.Run_Task(task_name => task_,
sql_stmt => stmt_,
language_flag => Dbms_Sql.NATIVE,
parallel_level => parallel_level_);
status_ := Dbms_Parallel_Execute.Task_Status(task_);
IF (status_ IN (Dbms_Parallel_Execute.FINISHED_WITH_ERROR, Dbms_Parallel_Execute.CRASHED)) THEN
Dbms_Parallel_Execute.Resume_Task(task_);
status_ := Dbms_Parallel_Execute.Task_Status(task_);
END IF;
Dbms_Parallel_Execute.Drop_Task(task_);
EXCEPTION
WHEN OTHERS THEN
Dbms_Parallel_Execute.Drop_Task(task_);
RAISE;
END Execute_Task___;
Hi,
Check job_queue_processes parameter, it must be greater than 0. -
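As a quick illustration of that check (a sketch; the value 10 is just an example): RUN_TASK itself submits DBMS_SCHEDULER jobs for its chunk workers, so with job_queue_processes = 0 the workers can never start and the outer session sits on the "pl/sql lock timer" wait:

```sql
-- Check the current setting (requires access to v$parameter)
SELECT value FROM v$parameter WHERE name = 'job_queue_processes';

-- If it is 0, no scheduler job slaves can run, so the chunk workers
-- submitted by Run_Task never start; raise it as a privileged user:
ALTER SYSTEM SET job_queue_processes = 10;
```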
DBMS_PARALLEL_EXECUTE package of oracle
Gurus,
Can anyone guide me on how to use this Oracle-supplied package (DBMS_PARALLEL_EXECUTE) in a generic way?
What I am looking for is a generic process which uses this package and triggers off by_rowid, by_col, or by_sql processes.
Also, can this package execute any PL/SQL blocks in parallel?
Please help me understand this with some examples....
Thanks in advance!
Regards,
Manik
Manik wrote:
Can anyone guide me on how to use this Oracle-supplied package (DBMS_PARALLEL_EXECUTE) in a generic way?
What I am looking for is a generic process which uses this package and triggers off by_rowid, by_col, or by_sql processes.
The basic concept behind this package (and the Oracle PQ feature) is to take loads and loads of I/O, break it up into distinct (rowid) ranges, and process these ranges using separate, parallel processes.
So I'm not sure what "generic feature" you see in this. The data is read (I/O'ed) for a reason. That reason is specific. Not generic. E.g. scan all rows in large table and find rows that match a certain filter condition. That means a very specific SQL statement that is run in parallel. Such as selecting all blue widgets with a foo attachment, that are 2mm in diameter and are between 10 - 20mm in length, and were manufactured during the last month, and shipped from factory to shop in the truck with registration ca 12345.
This is specific. Not generic.
So perhaps you need to explain what you imply with generic.
Also, can this package execute any PL/SQL blocks in parallel?
Not really. The SQL engine does parallel processing. Not the PL/SQL engine. Different languages. Different concepts.
You can call SQL from PL/SQL. You can call PL/SQL from SQL.
So the latter, if run in parallel, can call and use PL/SQL, in parallel. The PL/SQL code unit that supports being called like that is known as a pipelined table function.
It seems to me that you are approaching parallel processing in Oracle with some client threading preconceptions. PL/SQL and SQL are server-side languages, and the environment is different from that of a client language and environment. Maybe you should empty your cup of these preconceptions first, to understand what Tubs said quite rightly, and to understand what server-side parallel processing is about in Oracle. -
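To illustrate the pipelined-table-function point, a sketch with invented names (SOME_TABLE, SOME_NUM): the function must be declared PARALLEL_ENABLE over a partitioned ref cursor for the SQL engine to distribute it across parallel query slaves:

```sql
CREATE OR REPLACE TYPE num_tab IS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION double_vals(p_cur SYS_REFCURSOR)
  RETURN num_tab PIPELINED
  PARALLEL_ENABLE (PARTITION p_cur BY ANY)
IS
  l_val NUMBER;
BEGIN
  LOOP
    FETCH p_cur INTO l_val;
    EXIT WHEN p_cur%NOTFOUND;
    PIPE ROW (l_val * 2);  -- the per-row PL/SQL logic
  END LOOP;
  RETURN;
END;
/
-- SQL, running in parallel, calls the PL/SQL for each row:
SELECT /*+ PARALLEL(4) */ column_value
FROM   TABLE(double_vals(CURSOR(SELECT some_num FROM some_table)));
```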
What is DBMS_PARALLEL_EXECUTE doing in the background
What is the best way to see all the actual SQLs that are being executed in the background when a package is executed?
For example, I am interested in knowing what the DBMS_PARALLEL_EXECUTE package is doing in the background. I've read what the procedures do and I understand the functionality. But I'd like to know how it does it. I wanted to know what create_chunks_by_number_col is doing in the background.
970021 wrote:
What is the best way to see all the actual SQLs that are being executed in the background when a package is executed?
For example, I am interested in knowing what the DBMS_PARALLEL_EXECUTE package is doing in the background. I've read what the procedures do and I understand the functionality. But I'd like to know how it does it. I wanted to know what create_chunks_by_number_col is doing in the background.
OK - I'm confused.
You said you 'read what the procedures do' but the doc explains pretty clearly (IMHO) exactly how it creates the chunks.
http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_parallel_ex.htm#CHDHFCDJ
CREATE_CHUNKS_BY_NUMBER_COL Procedure
This procedure chunks the table (associated with the specified task) by the specified column. The specified column must be a NUMBER column. This procedure takes the MIN and MAX values of the column, and then divides the range evenly according to chunk_size. The chunks are:
START_ID                      END_ID
min_id_val                    min_id_val+1*chunk_size-1
min_id_val+1*chunk_size       min_id_val+2*chunk_size-1
...
min_id_val+i*chunk_size       max_id_val
So I am at a loss to know how that particular example is of any value to you.
That package creates a list of START_ID and END_ID values, one pair of values for each 'chunk'. It then starts a parallel process for each chunk that queries the table using a where clause that is basically just this:
WHERE userColumn BETWEEN :START_ID AND :END_ID
The RUN_TASK Procedure explains part of that
RUN_TASK Procedure
This procedure executes the specified statement (sql_stmt) on the chunks in parallel. It commits after processing each chunk. The specified statement must have two placeholders called start_id, and end_id respectively, which represent the range of the chunk to be processed. The types of the placeholder must be rowid where ROWID based chunking was used, or NUMBER where number based chunking was used. The specified statement should not commit unless it is idempotent.
The SQL statement is executed as the current user.
Examples
Suppose the chunk table contains the following chunk ranges:
START_ID END_ID
1 10
11 20
21 30
And the specified SQL statement is:
UPDATE employees
SET salary = salary + 10
WHERE employee_id BETWEEN :start_id AND :end_id
This procedure executes the following statements in parallel:
UPDATE employees
SET salary = salary + 10 WHERE employee_id BETWEEN 1 AND 10;
COMMIT;
UPDATE employees
SET salary = salary + 10 WHERE employee_id BETWEEN 11 AND 20;
COMMIT;
UPDATE employees
SET salary = salary + 10 WHERE employee_id BETWEEN 21 AND 30;
COMMIT;
You could just as easily write those queries yourself for chunking by number. But you couldn't execute them in parallel unless you created a scheduler job.
So like the doc says, Oracle is just:
1. getting the MIN/MAX of the column
2. creating a process for each entry in the 'chunk table'
3. executing those processes in parallel
4. committing each process individually
5. maintaining status for you.
I'm not sure what you would expect to see on the backend for an example like that. -
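One concrete way to see this for yourself without any tracing (a sketch; the EMPLOYEES table and task name are illustrative): after CREATE_CHUNKS_BY_NUMBER_COL runs, the computed START_ID/END_ID pairs sit in the USER_PARALLEL_EXECUTE_CHUNKS dictionary view:

```sql
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'emp_task');
  DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
    task_name    => 'emp_task',
    table_owner  => USER,
    table_name   => 'EMPLOYEES',
    table_column => 'EMPLOYEE_ID',
    chunk_size   => 10);
END;
/
-- The generated ranges, before any task is run:
SELECT chunk_id, start_id, end_id, status
FROM   user_parallel_execute_chunks
WHERE  task_name = 'emp_task'
ORDER  BY start_id;
```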
Dbms_parallel_execute
We want to use dbms_parallel_execute to create faster transactions over dblinks. As it stands dbms_parallel_execute does what we need it to do initially, however one problem we have encountered is that the interval for the next execute appears to take 3.5 seconds. We need one second or better. Can anyone explain how we can lower the 3.5 seconds?
864611 wrote:
We want to use dbms_parallel_execute to create faster transactions over dblinks. As it stands, dbms_parallel_execute does what we need it to do initially; however, one problem we have encountered is that the interval for the next execute appears to take 3.5 seconds. We need one second or better. Can anyone explain how we can lower the 3.5 seconds?
SELECT SYSDATE FROM DUAL@REMOTE; -
DML Error logging with delete restrict
Hi,
I am trying to log all DML errors while performing an ETL process. We encountered a problem when one of the ON DELETE CASCADE clauses was missing on a child table, but I was curious why that exception was raised to the calling environment, because we are logging all DML errors in err$_ tables. Our expectation was that when we get a child-record-found violation, the error would be logged into the ERR$_ tables and the process would carry on without interruption, but it was interrupted in the middle and terminated. I can illustrate with the example below:
T1 -> T2 -> T3
T1 is parent and it is s root
Create table t1 (id number primary key, id2 number);
Create table t2(id number references t1(id) on delete cascade, id2 number);
create table t3 (id number references t2(id)); -- Missing on delete cascade
insert into t1 select level, level from dual connect by level < 20;
insert into t2 select level, level from dual connect by level < 20;
insert into t3 select level from dual connect by level < 20;
exec dbms_errlog.create_error_log('T1');
exec dbms_errlog.create_error_log('T2');
exec dbms_errlog.create_error_log('T3');
delete from t1 where id = 1 log errors into err$_t1 reject limit unlimited; -- Child record found violation due to t3 raised but I am expecting this error will be trapped in log tables.
delete from t2 where id =1 log errors into err$_t2 reject limit unlimited; -- Got the same error child record violation. My expectation error will be logged into log tables.
I am using Oracle 11gR2.
Also, Please let me know if there is any restrictions to use DML error logging in DBMS_PARALLEL_EXECUTE.
Please advise
Thanks,
Umakanth
What is the error you want me to fix? The missing ON DELETE CASCADE?
The Code you posted has multiple syntax errors and can't possibly run. You should post code that actually works.
My expectation is all the DML errors will be logged into error logging tables even if it is child record found violation.
delete from t1 where id = 1 log errors into err$_t1 reject limit unlimited; -- Child record found violation due to t3 raised but I am expecting this error will be trapped in log tables.
delete from t2 where id =1 log errors into err$_t2 reject limit unlimited; -- Got the same error child record violation. My expectation error will be logged into log tables.
DML error logging logs DATA. When you delete from T1 there is an error because the T2 child record can NOT be deleted. So the T1 row that was being deleted is logged into the T1 error log table. The request was to delete a T1 row so that is the only request that failed; the child rows in T2 and T3 will not be put into log tables.
Same when you try to delete from T2. The T3 child record can NOT be deleted so the T2 row that was being deleted is logged into the T2 error log table.
The exceptions that occur are NOT logged, only the data that the DML could not be performed on.
After I fixed your code, your example worked fine for me and logged into the DML error tables as expected. But I wasn't doing it from a client. -
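A minimal sketch of that behavior (table names P and C are invented): the parent row that cannot be deleted is written to its err$_ table together with the ORA-02292 error details, and the statement itself completes without raising:

```sql
CREATE TABLE p (id NUMBER PRIMARY KEY);
CREATE TABLE c (id NUMBER REFERENCES p(id));  -- no ON DELETE CASCADE

INSERT INTO p VALUES (1);
INSERT INTO c VALUES (1);

EXEC DBMS_ERRLOG.create_error_log('P');  -- creates err$_P

-- The child row blocks the delete, but the failing parent row is
-- logged instead of the error being raised to the caller:
DELETE FROM p
LOG ERRORS INTO err$_p REJECT LIMIT UNLIMITED;

-- Expect ORA-02292 (child record found) recorded against id 1:
SELECT ora_err_number$, id FROM err$_p;
```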
Hi All..
I am trying to use the dbms_parallel_execute package to insert into my target table.
But at the end of the execution, the rows are not getting inserted into the target table.
Could any one please help on this?
Below are the statements....
create table target_table as select * from source_table where 1=0;
--source_table has 100000 rows.
BEGIN
DBMS_PARALLEL_EXECUTE.create_task (task_name => 'test1');
END;
BEGIN
DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'test1',
table_owner => 'SYSTEMS',
table_name => 'TARGET_TABLE',
by_row => TRUE,
chunk_size => 10000);
END;
DECLARE
l_sql_stmt VARCHAR2(32767);
BEGIN
l_sql_stmt := 'insert into PRD_TAB
select * from dbmntr_prd_tab';
DBMS_PARALLEL_EXECUTE.run_task(task_name => 'test1',
sql_stmt => l_sql_stmt,
language_flag => DBMS_SQL.NATIVE,
parallel_level => 10);
END;
After executing the above statements, I can find that the target_table has zero rows. Could anyone please correct me if I am wrong with any of the above statements?
Could anyone please correct me if I am wrong with any of the above statements?
As Hoek said, you haven't created the SQL statement properly. See the 'RUN_TASK Procedure' section of the DBMS_PARALLEL_EXECUTE chapter of the doc:
http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_parallel_ex.htm#CHDIBHHB
sql_stmt
SQL statement; must have :start_id and :end_id placeholders
That doc has an example in it. -
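A sketch of the fix that answer points at, keeping the poster's hypothetical names: chunk the table being read (chunking the freshly created, empty TARGET_TABLE yields no usable rowid ranges) and put the :start_id/:end_id placeholders in the statement so each worker copies only its own range:

```sql
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'test1');

  -- Chunk the source table, i.e. the one whose rowids the
  -- placeholders will range over.
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
    task_name   => 'test1',
    table_owner => 'SYSTEMS',
    table_name  => 'SOURCE_TABLE',
    by_row      => TRUE,
    chunk_size  => 10000);

  DBMS_PARALLEL_EXECUTE.run_task(
    task_name      => 'test1',
    sql_stmt       => 'INSERT INTO target_table
                       SELECT * FROM source_table
                       WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 10);
END;
/
```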
PL/SQL error while using collections (Database: 10g)
Hi,
I am getting below error while compiling below code:
Error: DML statement without BULK In-BIND cannot be used inside FORALL
Could you suggest.
create or replace PROCEDURE V_ACCT_MTH ( P_COMMIT_INTERVAL NUMBER DEFAULT 10000)
is
CURSOR CUR_D_CR_ACCT_MTH
IS
SELECT * FROM D_ACCT_MTH;
TYPE l_rec_type IS TABLE OF CUR_D_CR_ACCT_MTH%ROWTYPE
INDEX BY PLS_INTEGER;
v_var_tab l_rec_type;
v_empty_tab l_rec_type;
v_error_msg VARCHAR2(80);
v_err_code VARCHAR2(30);
V_ROW_CNT NUMBER :=0;
--R_DATA NUMBER :=1;
BEGIN
OPEN CUR_D_CR_ACCT_MTH;
v_var_tab := v_empty_tab;
LOOP
FETCH CUR_D_CR_ACCT_MTH BULK COLLECT INTO v_var_tab LIMIT P_COMMIT_INTERVAL;
EXIT WHEN v_var_tab.COUNT=0;
FORALL R_DATA IN 1..v_var_tab.COUNT
INSERT INTO ACCT_F_ACCT_MTH (
DATE_KEY
,ACCT_KEY
,P_ID
,ORG_KEY
,FDIC_KEY
,BAL
,BAL1
,BAL2
,BAL3
,BAL4
,BAL5
,BAL6
,BAL7
,BAL8
,BAL9
,BAL10
,BAL11
,BAL12
,BAL13
,BAL14
,BAL15
) VALUES (
DATE_KEY(R_DATA)
,ACCT_KEY(R_DATA)
,P_ID(R_DATA)
,ORG_KEY(R_DATA)
,FDIC_KEY(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
);
COMMIT;
END LOOP;
CLOSE CUR_D_CR_ACCT_MTH;
EXCEPTION
WHEN OTHERS THEN
v_error_msg:=substr(sqlerrm,1,50);
v_err_code :=sqlcode;
DBMS_OUTPUT.PUT_LINE(v_error_msg,v_err_code);
END V_ACCT_MTH;
931832 wrote:
Here I am using the above method with FORALL because of the large volume of data.
Which is a FLAWED approach. Always.
FORALL is not suited to "move/copy" large amounts of data from one table to another.
Any suggestion?
Use only SQL. It is faster. It has less overhead. It can execute in parallel.
So execute it in parallel to move/copy that data. You can roll this manually via the DBMS_PARALLEL_EXECUTE interface. Simplistic example:
declare
  taskName    varchar2(30) default 'PQ-task-1';
  parallelSql varchar2(1000);
begin
  --// create task
  DBMS_PARALLEL_EXECUTE.create_task( taskName );
  --// chunk the table by rowid ranges
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
    task_name   => taskName,
    table_owner => user,
    table_name  => 'D_ACCT_MTH',
    by_row      => true,
    chunk_size  => 100000
  );
  --// create insert..select statement to copy a chunk of rows
  parallelSql := 'insert into acct_f_acct_mth select * from d_acct_mth
                  where rowid between :start_id and :end_id';
  --// run the task using 5 parallel processes
  DBMS_PARALLEL_EXECUTE.Run_Task(
    task_name      => taskName,
    sql_stmt       => parallelSql,
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 5
  );
  --// wait for it to complete
  while DBMS_PARALLEL_EXECUTE.task_status( taskName ) != DBMS_PARALLEL_EXECUTE.Finished loop
    DBMS_LOCK.Sleep(10);
  end loop;
  --// remove task
  DBMS_PARALLEL_EXECUTE.drop_task( taskName );
end;
/
Details in the Oracle® Database PL/SQL Packages and Types Reference guide.
For 10g, the EXACT SAME approach can be used - by determining the rowid chunks/ranges via a SQL and then manually running parallel processes as DBMS_JOB. See {message:id=1108593} for details. -
How to create special column which represents result of a query
Hi all,
I need your help once more.
The situation is the following:
I have a table MESSAGE which has some billion entries. The columns are msg_id, vehicle_id, timestamp, data, etc.
I have another table VEHICLE which holds static vehicle data (about 20k rows) such as vehicle_id, licenceplate, etc.
My first target was to partition the table via timestamp (by range) and subpartition by vehicle_id (by hash).
So I could easily drop old data by dropping old partitions and tablespaces.
Now comes the new difficult 2nd target: the messages of some vehicles must be kept forever.
My idea is to add a column KEEP_DATA to the table MESSAGE. I could try to partition by timestamp AND KEEP_DATA, subpartion by vehicle_id.
The problem with this idea is that I would have to update billions of rows.
It would be perfect if there is a possibility to add this KEEP_DATA-flag to the table vehicle.
Is there any way to "link" this information to a column in MESSAGE table?
I mean something like this:
alter table MESSAGE
add column (select keep_data from vehicle where VEHICLE.vehicle_id = MESSAGE.vehicle_id as keep_message) ;
Is there some possibility like that?
Would the partitioning on this column / statement work?
Would the value of the keep_message be calculated on runtime?
If so will the performance influence be noticeable?
If so will the performance also sink if the application is querying all rows except the keep_message?
Kind regards,
Andreas
What is your DB version?
The problem with this idea is that I would have to update billions of rows.
If that is your underlying problem, then on 11g and above you can use [url http://docs.oracle.com/cd/E14072_01/appdev.112/e10577/d_parallel_ex.htm]DBMS_PARALLEL_EXECUTE to split your update into multiple chunks and execute them in parallel.
I mean something like this:
alter table MESSAGE
add column (select keep_data from vehicle where VEHICLE.vehicle_id = MESSAGE.vehicle_id as keep_message);
As far as I know, such a thing is not possible: a virtual column's expression can only reference columns of the table it belongs to, not another table.
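If the KEEP_DATA column is added as a plain (non-virtual) column, the chunked-update route suggested above could look roughly like this (a sketch using the poster's MESSAGE/VEHICLE schema; task name and chunk size are illustrative):

```sql
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'keep_data_task');
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
    task_name   => 'keep_data_task',
    table_owner => USER,
    table_name  => 'MESSAGE',
    by_row      => TRUE,
    chunk_size  => 1000000);

  -- Each worker copies the flag from VEHICLE for its own rowid range,
  -- committing per chunk, so no single giant transaction is needed.
  DBMS_PARALLEL_EXECUTE.run_task(
    task_name      => 'keep_data_task',
    sql_stmt       => 'UPDATE message m
                       SET m.keep_data =
                         (SELECT v.keep_data FROM vehicle v
                          WHERE v.vehicle_id = m.vehicle_id)
                       WHERE m.rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 8);
END;
/
```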