Problem in Merge statement
Hi All,
I am using a MERGE statement to update 30000 records from tables having about 55 lakh (5.5 million) records.
But it is taking far too long: I had to kill the session after 12 hours, as it was still running.
If I do the same update using cursors, it finishes in less than 3 hours.
The MERGE I was using is:
MERGE INTO Table1 a
USING (SELECT MAX (TO_DATE ( TO_CHAR (contact_date, 'dd/mm/yyyy')
|| contact_time,
'dd/mm/yyyy HH24:Mi:SS'
) m_condate,
appl_id
FROM Table2 b,
(SELECT DISTINCT acc_no acc_no
FROM Table3, Table1
WHERE acc_no=appl_id AND delinquent_flag= 'Y'
AND financier_id='NEWACLOS') d
WHERE d.acc_no = b.appl_id
AND ( contacted_by IS NOT NULL
OR followup_branch_code IS NOT NULL )
GROUP BY appl_id) c
ON (a.appl_id = c.appl_id AND a.delinquent_flag = 'Y')
WHEN MATCHED THEN
UPDATE
SET last_contact_date = c.m_condate;
In this query, table1 has 30000 records, and table2 and table3 have 3670955 and 555674 records respectively.
Please suggest what I am doing wrong in the MERGE, because as per my understanding a MERGE statement should be much faster than plain updates or updates using cursors.
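One thing worth testing (a sketch only, not a drop-in fix): the TO_CHAR/TO_DATE round trip runs once per row of the 3.6M-row detail table before the aggregate. Assuming contact_time is stored as an 'HH24:MI:SS' string, as the format mask suggests, the date can be combined with the time without converting it to a string and back, and the DISTINCT inline view can be folded into plain joins. The post does not say whether financier_id belongs to Table3 or Table1; the sketch assumes Table3.

```sql
-- Hedged sketch: assumes contact_time is an 'HH24:MI:SS' string and
-- financier_id lives on Table3. TRUNC(date) + day-second interval
-- replaces the TO_CHAR/TO_DATE round trip.
MERGE INTO table1 a
USING (SELECT MAX(TRUNC(b.contact_date)
                 + TO_DSINTERVAL('0 ' || b.contact_time)) AS m_condate,
              b.appl_id
       FROM   table2 b
       JOIN   table1 t1 ON t1.appl_id = b.appl_id
       JOIN   table3 t3 ON t3.acc_no  = b.appl_id
       WHERE  t1.delinquent_flag = 'Y'
         AND  t3.financier_id    = 'NEWACLOS'
         AND (b.contacted_by IS NOT NULL
           OR b.followup_branch_code IS NOT NULL)
       GROUP BY b.appl_id) c
ON (a.appl_id = c.appl_id AND a.delinquent_flag = 'Y')
WHEN MATCHED THEN
  UPDATE SET a.last_contact_date = c.m_condate;
```

Comparing the explain plan of this form against the original would show whether the conversion and the DISTINCT view were actually the costly parts.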
Required info is as follows:
SQL> show parameter user_dump_dest
NAME TYPE VALUE
user_dump_dest string /opt/oracle/admin/FINCLUAT/udump
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select
2 sname ,
3 pname ,
4 pval1 ,
5 pval2
6 from
7 sys.aux_stats$;
sys.aux_stats$
ERROR at line 7:
ORA-00942: table or view does not exist
Elapsed: 00:00:00.05
SQL> explain plan for
2 -- put your statement here
3 MERGE INTO cs_case_info a
4 USING (SELECT MAX (TO_DATE ( TO_CHAR (contact_date, 'dd/mm/yyyy')
5 || contact_time,
6 'dd/mm/yyyy HH24:Mi:SS'
7 )
8 ) m_condate,
9 appl_id
10 FROM CS_CASE_DETAILS_ACLOS b,
11 (SELECT DISTINCT acc_no acc_no
12 FROM NEWACLOS_RESEARCH_HIST_AYLA, cs_case_info
13 WHERE acc_no=appl_id AND delinquent_flag= 'Y'
14 AND financier_id='NEWACLOS') d
15 WHERE d.acc_no = b.appl_id
16 AND ( contacted_by IS NOT NULL
17 OR followup_branch_code IS NOT NULL
18 )
19 GROUP BY appl_id) c
20 ON (a.appl_id = c.appl_id AND a.delinquent_flag = 'Y')
21 WHEN MATCHED THEN
22 UPDATE
23 SET last_contact_date = c.m_condate
24 ;
Explained.
Elapsed: 00:00:00.08
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | MERGE STATEMENT | | 47156 | 874K| | 128K (1)|
| 1 | MERGE | CS_CASE_INFO | | | | |
| 2 | VIEW | | | | | |
| 3 | HASH JOIN | | 47156 | 36M| 5672K| 128K (1)|
| 4 | VIEW | | 47156 | 5111K| | 82339 (1)|
| 5 | SORT GROUP BY | | 47156 | 4236K| 298M| 82339 (1)|
| 6 | HASH JOIN | | 2820K| 247M| 10M| 60621 (1)|
| 7 | HASH JOIN | | 216K| 7830K| | 6985 (1)|
| 8 | VIEW | index$_join$_012 | 11033 | 258K| | 1583 (1)|
| 9 | HASH JOIN | | | | | |
| 10 | INDEX RANGE SCAN | IDX_CCI_DEL | 11033 | 258K| | 768 (1)|
| 11 | INDEX RANGE SCAN | CS_CASE_INFO_UK | 11033 | 258K| | 821 (1)|
| 12 | INDEX FAST FULL SCAN| IDX_NACL_RSH_ACC_NO | 5539K| 68M| | 5382 (1)|
| 13 | TABLE ACCESS FULL | CS_CASE_DETAILS_ACLOS | 3670K| 192M| | 41477 (1)|
| 14 | TABLE ACCESS FULL | CS_CASE_INFO | 304K| 205M| | 35975 (1)|
Note
- 'PLAN_TABLE' is old version
24 rows selected.
Elapsed: 00:00:01.04
SQL> rollback;
Rollback complete.
Elapsed: 00:00:00.03
SQL> set autotrace traceonly arraysize 100
SQL> alter session set events '10046 trace name context forever, level 8';
ERROR:
ORA-01031: insufficient privileges
Elapsed: 00:00:00.04
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> spool off
Edited by: user4528984 on May 5, 2009 10:37 PM
For one thing, alias your tables and use that in the column specifications (table1.column1 = table2.column3 for example)...
SELECT
DISTINCT
acc_no acc_no
FROM Table3, Table1
WHERE acc_no = appl_id
AND delinquent_flag = 'Y'
AND financier_id = 'NEWACLOS'
We don't know what your tables look like, or what columns come from where. Try and help us help you; assume we know NOTHING about YOUR system, because more likely than not, that's going to be the case.
In addition to that, please read through this, which will give you a better idea of how to post a tuning-related question; you've not provided nearly enough information for us to help you intelligently.
HOW TO: Post a SQL statement tuning request - template posting
Similar Messages
-
Performance problem with MERGE statement
Version : 11.1.0.7.0
I have an INSERT statement like the following, which takes less than 2 secs to complete and inserts around 4000 rows:
INSERT INTO sch.tab1
(c1,c2,c3)
SELECT c1,c2,c3
FROM sch1.tab1@dblink
WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
I wanted to change it to a MERGE statement just to avoid duplicate data. I changed it to the following:
MERGE INTO sch.tab1 t1
USING (SELECT c1,c2,c3
FROM sch1.tab1@dblink
WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink)) t2
ON (t1.c1 = t2.c1)
WHEN NOT MATCHED THEN
INSERT (t1.c1,t1.c2,t1.c3)
VALUES (t2.c1,t2.c2,t2.c3);
The MERGE statement was taking more than 2 mins (and I stopped the execution after that). I removed the WHERE-clause subquery inside the subquery of the USING section and it executed in 1 sec.
If I execute the same select statement with the WHERE clause outside the MERGE statement, it takes just 1 sec to return the data.
Is there any known issue with the MERGE statement when implemented as in the above scenario?
riedelme wrote:
Are your join columns indexed?
Yes, the join columns are indexed.
You are doing a remote query inside the MERGE; remote queries can slow things down. Do you have to select all the rows from the remote table? What if you copied them locally using a materialized view?
Yes, I agree that remote queries slow things down. But the same is not happening with SELECT, INSERT and PL/SQL; it happens only when we use MERGE. I have to test what happens if we use a subquery referring to a local table or materialized view. Even if that works, I think there is still a problem with MERGE in the case of remote subqueries (at least until I test local queries). I wish someone could test similar scenarios so that we know whether it is a genuine problem or something specific to my side.
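One way to test the local-copy idea short of building a materialized view is to stage the remote rows first and then MERGE against the local copy. A sketch; stg_tab1 is a hypothetical staging table and the column types are guesses, since the post does not show them:

```sql
-- Hedged sketch: pull the remote rows into a local global temporary
-- table in one round trip, then MERGE locally.
CREATE GLOBAL TEMPORARY TABLE stg_tab1 (
  c1 NUMBER,
  c2 VARCHAR2(100),
  c3 VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

INSERT INTO stg_tab1 (c1, c2, c3)
SELECT c1, c2, c3
FROM   sch1.tab1@dblink
WHERE  c1 IN (SELECT c1 FROM sch1.tab2@dblink);

MERGE INTO sch.tab1 t1
USING stg_tab1 t2
ON (t1.c1 = t2.c1)
WHEN NOT MATCHED THEN
  INSERT (t1.c1, t1.c2, t1.c3)
  VALUES (t2.c1, t2.c2, t2.c3);
```

If this runs in seconds while the all-remote MERGE does not, the problem is the distributed execution of the USING subquery rather than MERGE itself.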
>
BTW, I haven't had great luck with MERGE either :(. Last time I tried to use it I found it faster to use a loop with insert/update logic.
Edited by: riedelme on Jul 28, 2009 12:12 PM
:) I used the same approach to overcome this situation. I think MERGE still needs to be improved functionally on Oracle's side. I personally feel it is one of the most robust features to grace SQL or PL/SQL. -
Problem in merge statement -ORA-27432 Step does not exist for chain
Hi
I am getting an "ORA-27432: step does not exist for chain" error in a MERGE statement. Please explain.
MERGE INTO fos.pe_td_hdr_sd B
USING (
SELECT ACTIVE, ADDUID, ADDUIDTIME,TDKEY FROM pe.pe_td_hdr
WHERE (adduidtime like '20070104%' or edituidtime like '20070104%')
AND NVL(legacy_td,'N')<>'Y'
AND SUBSTR(adduidtime,1,4)='2007'
AND AMENDMENT_NO = 0) A ON (B.TDKEY = A.TDKEY)
WHEN MATCHED THEN
UPDATE SET B.ACTIVE=A.ACTIVE,
B.ADDUID=A.ADDUID,
B.ADDUIDTIME=A.ADDUIDTIME
WHEN NOT MATCHED THEN
INSERT (
B.ACTIVE,
B.ADDUID,
B.ADDUIDTIME)
VALUES(
A.ACTIVE,
A.ADDUID,
A.ADDUIDTIME)
This query is a short version of the main query; it is the same, but the original table has 180 columns.
What version of Oracle are you using? This message does not appear in my 10.1 Error Messages document, but the other messages in that range seem to be about DBMS_SCHEDULER.
Are you using scheduler somewhere around where you are getting the error message?
John -
Hi All,
We are encountering a strange problem with the merge command.
The following statement works :-
merge into ATTRIBUTE_GROUP@US_PRODUCT_UAT a
using (
select
a1.group_id,
a1.NAME,
a1.CREATE_DATE,
a1.MODIFY_DATE
from
ATTRIBUTE_GROUP_LOG a1,
product_push_wrk a2
where
a2.column_id = a1.group_id and
a2.modify_date = a1.modify_date ) b
on ( a.group_id = b.group_id)
when matched then
update set
a.NAME = b.NAME,
a.CREATE_DATE = b.CREATE_DATE,
a.MODIFY_DATE = b.MODIFY_DATE
when not matched then
insert (
a.group_id,
a.NAME,
a.CREATE_DATE,
a.MODIFY_DATE )
values (
b.group_id,
b.NAME,
b.CREATE_DATE,
b.MODIFY_DATE )
However, when we change the order of the columns in the select query as follows, an error occurs:
merge into ATTRIBUTE_GROUP@US_PRODUCT_UAT a
using (
select
a1.NAME,
a1.group_id,
a1.CREATE_DATE,
a1.MODIFY_DATE
from
ATTRIBUTE_GROUP_LOG a1,
product_push_wrk a2
where
a2.column_id = a1.group_id and
a2.modify_date = a1.modify_date ) b
on ( a.group_id = b.group_id)
when matched then
update set
a.NAME = b.NAME,
a.CREATE_DATE = b.CREATE_DATE,
a.MODIFY_DATE = b.MODIFY_DATE
when not matched then
insert (
a.group_id,
a.NAME,
a.CREATE_DATE,
a.MODIFY_DATE )
values (
b.group_id,
b.NAME,
b.CREATE_DATE,
b.MODIFY_DATE )
ERROR at line 15:
ORA-00904: "B"."GROUP_ID": invalid identifier
SQL> l 15
15* on ( a.group_id = b.group_id)
The structure of the attribute_log table is as follows :-
SQL> desc ATTRIBUTE_GROUP
Name Null? Type
GROUP_ID NOT NULL NUMBER
NAME NOT NULL VARCHAR2(96)
CREATE_DATE NOT NULL DATE
MODIFY_DATE NOT NULL DATE
Any pointers to the cause of this error will be highly appreciated.
Thanks and Regards,
Suman
The table structures are as follows:
04:17:17 SQL> desc product_push_wrk
Name Null? Type
COLUMN_ID NOT NULL NUMBER
TYPE NOT NULL VARCHAR2(10)
PARENT_COLUMN_ID NUMBER
LEVEL_NO NUMBER
MODIFY_DATE NOT NULL DATE
04:17:25 SQL> desc ATTRIBUTE_GROUP_LOG
Name Null? Type
GROUP_ID NOT NULL NUMBER
NAME NOT NULL VARCHAR2(96)
CREATE_DATE DATE
MODIFY_DATE DATE
04:18:02 SQL> desc ATTRIBUTE_GROUP
Name Null? Type
GROUP_ID NOT NULL NUMBER
NAME NOT NULL VARCHAR2(96)
CREATE_DATE DATE
MODIFY_DATE DATE -
Help for merge statement?
I have a problem in Merge statement. My merge statement is following:
MERGE INTO hoadon hd
USING (
SELECT m.ma_ttoan, t.ma_ttoan ma_kh
FROM tai_anhxa_makh t
INNER JOIN hoadon m on t.ma_kh = m.ma_ttoan
WHERE m.thang_nam = '200908' and m.ma_ttoan not like 'DLC%' and t.ma_ttoan IS NOT NULL
GROUP BY m.ma_ttoan, t.ma_ttoan
) m ON (hd.ma_ttoan = m.ma_kh)
WHEN MATCHED THEN
UPDATE SET ma_ttoan = m.ma_ttoan
WHEN NOT MATCHED THEN
INSERT (thang_nam, ma_ttoan) VALUES('200908','thaodv')
After execute this query, PS/SQL show error message: "ORA-00904: hd.ma_ttoan invalid identifier"
I'm using Oracle version 9i
Can anyone help me to resolve this problem?
Thanks in advance.
In 9i you can't use the columns from the ON clause in the UPDATE part of the MERGE statement.
This is invalid:
UPDATE SET ma_ttoan = m.ma_ttoan
Use a different column here. -
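A common 9i workaround (a sketch, untested against the poster's data): match on ROWID so that ma_ttoan is no longer an ON-clause column and can appear in the UPDATE clause. The source query must still return at most one row per target ROWID, or ORA-30926 will be raised.

```sql
-- Hedged sketch: join on ROWID instead of ma_ttoan so the updated
-- column is not part of the ON clause (the 9i restriction).
MERGE INTO hoadon hd
USING (SELECT m.ROWID AS rid, t.ma_ttoan AS new_ma_ttoan
       FROM   tai_anhxa_makh t
       INNER JOIN hoadon m ON t.ma_kh = m.ma_ttoan
       WHERE  m.thang_nam = '200908'
         AND  m.ma_ttoan NOT LIKE 'DLC%'
         AND  t.ma_ttoan IS NOT NULL
       GROUP BY m.ROWID, t.ma_ttoan) m
ON (hd.ROWID = m.rid)
WHEN MATCHED THEN
  UPDATE SET hd.ma_ttoan = m.new_ma_ttoan
WHEN NOT MATCHED THEN
  INSERT (thang_nam, ma_ttoan) VALUES ('200908', 'thaodv');
```

In 9i both MATCHED and NOT MATCHED clauses are mandatory, which is why the insert branch is kept even though ROWID matches should always exist here.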
Problem using TAPI triggers and merge statement
Hi,
I use Designer tapi triggers on a table. When I try to execute a merge statement, I get the following error:
ORA-06502: PL/SQL: numeric or value error: NULL index table key value.
Is there a restriction when using TAPI triggers and MERGE statements that anyone is aware of?
No restrictions on MERGE commands that I know of. I have, however, seen the TAPI give inexplicable ORA-06502 errors. It would help to know which line in which procedure or trigger gave the error; that information should have been in the error stack.
-
MERGE Statement Problem for Storing Old Data
Hi,
I am using MERGE statement to update as well as insert rows on ta table.
I have data in a table 'TABLEA' like: 10 20 30 ABCD
I want to update the table using 10 20 30 DEFG, but I want the old data, i.e. 10 20 30 ABCD,
to be stored in a history table, i.e. TABLEA_H.
Is there any way to store the data?
Any help will be needful for me.
Hi,
Trigger usage may affect performance, as we are handling a production environment.
Is there any way to implement the scenario without using triggers?
Any help will be needful for me.
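Without a trigger, one alternative is two statements inside a single transaction: archive the rows that are about to change, then run the MERGE. A sketch with illustrative table and column names (the post does not show the real column lists):

```sql
-- Hedged sketch: copy the current versions of rows that will be
-- touched into the history table, then merge, then commit once.
INSERT INTO tablea_h (c1, c2, c3, c4)
SELECT a.c1, a.c2, a.c3, a.c4
FROM   tablea a
WHERE  EXISTS (SELECT 1 FROM source_tab s WHERE s.c1 = a.c1);

MERGE INTO tablea a
USING source_tab s
ON (a.c1 = s.c1)
WHEN MATCHED THEN
  UPDATE SET a.c4 = s.c4
WHEN NOT MATCHED THEN
  INSERT (c1, c2, c3, c4)
  VALUES (s.c1, s.c2, s.c3, s.c4);

COMMIT;
```

Because both statements run in the same transaction, the history copy and the update either both commit or both roll back.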
I would like to know if it is possible to identify the row that is causing the problem when you use a MERGE statement in PL/SQL. I know that if you create a cursor and then loop through the data you can identify the column, but what about when I have only a MERGE that will either insert or update? Is it possible to identify which row of data caused the problem? Thanks
You can use an Error Logging Table.
Nicolas. -
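For context, the error-logging-table approach mentioned above works like this on 10gR2 and later (a sketch; target_tab and src_tab are placeholder names):

```sql
-- Hedged sketch: rows that fail the MERGE are captured in the
-- err$_ table instead of aborting the whole statement.
BEGIN
  DBMS_ERRLOG.create_error_log(dml_table_name => 'TARGET_TAB');
END;
/

MERGE INTO target_tab t
USING src_tab s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (s.id, s.val)
LOG ERRORS INTO err$_target_tab ('merge run 1') REJECT LIMIT UNLIMITED;

-- The offending rows, with the ORA- error text, are then queryable:
SELECT ora_err_number$, ora_err_mesg$, id
FROM   err$_target_tab;
```

This answers the original question directly: the err$_ table records exactly which rows failed and why, without resorting to a row-by-row cursor loop.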
Error executing a stored procedure from SSIS using the MERGE statement between databases
Good morning,
I'm trying to execute from SSIS a stored procedure that compares the content of two tables on different databases in the same server and updates one of them. To perform this action, I've created a stored procedure in the destination database and I'm
comparing the data between tables with the MERGE statement. When I execute the procedure on the destination database the error that I obtain is:
"Msg 916, Level 14, State 1, Procedure RefreshDestinationTable, Line 13
The server principal "XXXX" is not able to access the database "XXXX" under the current security context."
Some things to take in account:
1. I've created a temporary table on the same destination database to check if the problem was on the MERGE statement and it works fine.
2. I've created the procedure with the option "WITH EXECUTE AS DBO".
I've read that it can be a permissions problem, but since I'm executing the procedure from SSIS I don't know which user/login I should grant permissions to, or which permissions.
Could you give me some tip to continue investigating how to solve the problem?
Thank you,
Virgilio
Read Erland's article http://www.sommarskog.se/grantperm.html
Best Regards, Uri Dimant (SQL Server MVP)
http://sqlblog.com/blogs/uri_dimant/
Instead of trigger is NOT firing for merge statements in Oracle 10gR2
The trigger fires fine for a update statement, but not when I use a merge statement
with an update clause. Instead I get the normal error for the view (which is a UNION ALL view, and therefore not updatable).
The error is :-
ORA-01733: virtual column not allowed here
oracle release is 10.2.0.2 for AIX 64L
Is this a known bug ?
I've used a multi-table insert statement to work around the problem for inserts, but
for updates, I'd really like to be able to use a merge statement instead of an update.
Mark.
This is my cut-down version:
In this case case I'm getting an :-
ORA-01446: cannot select ROWID from, or sample, a view with DISTINCT, GROUP BY, etc.
rather than the ORA-01733 error I get in the real code (which is an update from an involved
XML expression, cast to a table form).
create table a ( a int primary key , b char(30) ) ;
create table b ( a int primary key , b char(30) ) ;
create view vw_a as
select *
from a
union all
select *
from b ;
ALTER VIEW vw_a ADD (
PRIMARY KEY
(a) DISABLE);
DROP TRIGGER TRG_IO_U_ALL_AB;
CREATE OR REPLACE trigger TRG_IO_U_ALL_AB
instead of update ON vw_a
for each row
begin
update a targ
set b = :new.b
where targ.a = :new.a;
if SQL%ROWCOUNT = 0
then
update b targ
set b = :new.b
where targ.a = :new.a;
end if;
end ;
insert into a values (1,'one');
insert into a values (2,'two');
insert into a values (3,'three');
insert into b values (4,'quatre');
insert into b values (5,'cinq');
insert into b values (6,'six');
commit;
create table c as select a + 3 as a, b from a ;
commit;
merge into vw_a targ
using (select * from c ) src
on ( targ.a = src.a )
when matched
then update
set targ.b = src.b;
select * from vw_a ;
rollback ;
update vw_a b
set b = ( select c.b from c where b.a = c.a )
where exists ( select c.b from c where b.a = c.a ) ;
select * from vw_a ;
rollback ; -
FORALL MERGE statement works in local database but not over database link
Given "list", a collection of NUMBER's, the following FORALL ... MERGE statement should copy the appropriate data if the record specified by the list exists on both databases.
forall i in 1..list.count
merge into tbl@remote t
using (select * from tbl
where id = list(i)) s
on (s.id = t.id)
when matched then
update set
t.status = s.status
when not matched then
insert (id, status)
values (s.id, s.status);
But this does not work. No exceptions, but target table's record is unchanged and "sql%rowcount" is 0.
If the target table is in the local database, the exact same statement works:
forall i in 1..list.count
merge into tbl2 t
using (select * from tbl
where id = list(i)) s
on (s.id = t.id)
when matched then
update set
t.status = s.status
when not matched then
insert (id, status)
values (s.id, s.status);
Does anyone have a clue why this may be a problem?
Both databases are on Oracle 10g.
Edited by: user652538 on Jun 12, 2009, 11:29 AM
Edited by: user652538 on Jun 12, 2009, 11:31 AM
Edited by: user652538 on Jun 12, 2009, 11:45 AM
It should throw an error, in my opinion. The underlying reason it does not work is basically because of:
SQL> merge into t@remote t1
using ( select sys.odcinumberlist (1) from dual) t2
on (1 = 1)
when matched
then
update set i = 1
Error at line 4
ORA-22804: remote operations not permitted on object tables or user-defined type columns
Same reason that, e.g.,
insert into t@remote select * from table(sys.odcinumberlist(1,2,3))
doesn't work. -
Automatic Parallelism causes Merge statement to take longer.
We have a problem in a new project: as part of the ETL load into the Oracle data warehouse, we perform a MERGE statement to update rows in a global temporary table, then load
the results into a permanent table. When testing with automatic parallel execution enabled, the plan changes and the MERGE never finishes, consuming vast amounts of resources.
The database version is:-
Database version :11.2.0.3
OS: redhat 64bit
three node rac 20 cores per node
when executing serially the query response is typically similar to the following:
MERGE /*+ gather_plan_statistics no_parallel */ INTO T_GTTCHARGEVALUES USING
(SELECT
CASTACCOUNTID,
CHARGESCHEME,
MAX(CUMULATIVEVALUE) AS MAXMONTHVALUE,
MAX(CUMULATIVECOUNT) AS MAXMONTHCOUNT
FROM
V_CACHARGESALL
WHERE
CHARGEDATE >= TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'MM')
AND CHARGEDATE < TO_DATE(:B1,'YYYY-MM-DD')
GROUP BY
CASTACCOUNTID,
CHARGESCHEME
HAVING MAX(CUMULATIVECOUNT) IS NOT NULL ) MTOTAL
ON
(T_GTTCHARGEVALUES.CASTACCOUNTID=MTOTAL.CASTACCOUNTID AND
T_GTTCHARGEVALUES.CHARGESCHEME=MTOTAL.CHARGESCHEME)
WHEN MATCHED
THEN UPDATE SET
CUMULATIVEVALUE=CUMULATIVEVALUE+MTOTAL.MAXMONTHVALUE ,
CUMULATIVECOUNT=CUMULATIVECOUNT+MTOTAL.MAXMONTHCOUNT;
1448340 rows merged.
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | MERGE STATEMENT | | 1 | | 0 |00:03:08.43 | 2095K| 186K| | | |
| 1 | MERGE | T_GTTCHARGEVALUES | 1 | | 0 |00:03:08.43 | 2095K| 186K| | | |
| 2 | VIEW | | 1 | | 1448K|00:02:53.14 | 619K| 177K| | | |
|* 3 | HASH JOIN | | 1 | 1 | 1448K|00:02:52.70 | 619K| 177K| 812K| 812K| 1218K (0)|
| 4 | VIEW | | 1 | 1 | 203 |00:02:51.26 | 608K| 177K| | | |
|* 5 | FILTER | | 1 | | 203 |00:02:51.26 | 608K| 177K| | | |
| 6 | SORT GROUP BY | | 1 | 1 | 480 |00:02:51.26 | 608K| 177K| 73728 | 73728 | |
|* 7 | FILTER | | 1 | | 21M|00:02:56.04 | 608K| 177K| | | |
| 8 | PARTITION RANGE ITERATOR| | 1 | 392K| 21M|00:02:51.32 | 608K| 177K| | | |
|* 9 | TABLE ACCESS FULL | T_CACHARGES | 24 | 392K| 21M|00:02:47.48 | 608K| 177K| | | |
| 10 | TABLE ACCESS FULL | T_GTTCHARGEVALUES | 1 | 1451K| 1451K|00:00:00.48 | 10980 | 0 | | | |
Predicate Information (identified by operation id):
3 - access("T_GTTCHARGEVALUES"."CASTACCOUNTID"="MTOTAL"."CASTACCOUNTID" AND "T_GTTCHARGEVALUES"."CHARGESCHEME"="MTOTAL"."CHARGESCHEME")
5 - filter(MAX("CUMULATIVECOUNT") IS NOT NULL)
7 - filter(TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'fmmm')<TO_DATE(:B1,'YYYY-MM-DD'))
9 - filter(("LOGICALLYDELETED"=0 AND "CHARGEDATE">=TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'fmmm') AND "CHARGEDATE"<TO_DATE(:B1,'YYYY-MM-DD')))
Removing the no_parallel hint results in the following (this is pulled from the SQL monitoring report, edited to remove the lines relating to individual parallel servers).
I understand that the query is considered for parallel execution due to the estimated length of time it will run for, and although the degree of parallelism seems excessive,
it is the default maximum for the server configuration. What we are trying to understand is which statistics could be inaccurate or missing and could cause this kind of problem.
In this case we can add the no_parallel hint in the ETL package as a workaround, but we would really like to identify the root cause to avoid similar problems elsewhere.
SQL Monitoring Report
SQL Text
MERGE INTO T_GTTCHARGEVALUES USING (SELECT CASTACCOUNTID, CHARGESCHEME, MAX(CUMULATIVEVALUE) AS MAXMONTHVALUE,
MAX(CUMULATIVECOUNT) AS MAXMONTHCOUNT FROM V_CACHARGESALL WHERE CHARGEDATE >= TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'MM')
AND CHARGEDATE < to_date(:B1,'YYYY-MM-DD')
GROUP BY CASTACCOUNTID, CHARGESCHEME HAVING MAX(CUMULATIVECOUNT) IS NOT NULL ) MTOTAL
ON (T_GTTCHARGEVALUES.CASTACCOUNTID=MTOTAL.CASTACCOUNTID AND
T_GTTCHARGEVALUES.CHARGESCHEME=MTOTAL.CHARGESCHEME) WHEN MATCHED THEN UPDATE SET
CUMULATIVEVALUE=CUMULATIVEVALUE+MTOTAL.MAXMONTHVALUE ,
CUMULATIVECOUNT=CUMULATIVECOUNT+MTOTAL.MAXMONTHCOUNT
Error: ORA-1013
ORA-01013: user requested cancel of current operation
Global Information
Status : DONE (ERROR)
Instance ID : 1
Session : XXXX(2815:12369)
SQL ID : 70kzttjbyyspt
SQL Execution ID : 16777216
Execution Started : 04/27/2012 09:43:27
First Refresh Time : 04/27/2012 09:43:27
Last Refresh Time : 04/27/2012 09:48:43
Duration : 316s
Module/Action : SQL*Plus/-
Service : SYS$USERS
Program : sqlplus@XXXX (TNS V1-V3)
Binds
========================================================================================================================
| Name | Position | Type | Value |
========================================================================================================================
| :B1 | 1 | VARCHAR2(32) | 2012-04-25 |
========================================================================================================================
Global Stats
====================================================================================================================
| Elapsed | Queuing | Cpu | IO | Application | Concurrency | Cluster | Other | Buffer | Read | Read |
| Time(s) | Time(s) | Time(s) | Waits(s) | Waits(s) | Waits(s) | Waits(s) | Waits(s) | Gets | Reqs | Bytes |
====================================================================================================================
| 7555 | 0.00 | 4290 | 2812 | 0.08 | 27 | 183 | 243 | 3M | 294K | 7GB |
====================================================================================================================
SQL Plan Monitoring Details (Plan Hash Value=323941584)
==========================================================================================================================================================================================================
| Id | Operation | Name | Rows | Cost | Time | Start | Execs | Rows | Read | Read | Mem | Activity | Activity Detail |
| | | | (Estim) | | Active(s) | Active | | (Actual) | Reqs | Bytes | (Max) | (%) | (# samples) |
==========================================================================================================================================================================================================
| 0 | MERGE STATEMENT | | | | | | 1 | | | | | | |
| 1 | MERGE | T_GTTCHARGEVALUES | | | | | 1 | | | | | | |
| 2 | PX COORDINATOR | | | | 57 | +1 | 481 | 0 | 317 | 5MB | | 4.05 | latch: shared pool (40) |
| | | | | | | | | | | | | | os thread startup (17) |
| | | | | | | | | | | | | | Cpu (7) |
| | | | | | | | | | | | | | DFS lock handle (36) |
| | | | | | | | | | | | | | SGA: allocation forcing component growth (14) |
| | | | | | | | | | | | | | latch: parallel query alloc buffer (200) |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 19054 | | | | | | | | | |
| 4 | VIEW | | | | | | | | | | | | |
| 5 | FILTER | | | | | | | | | | | | |
| 6 | SORT GROUP BY | | 1 | 19054 | | | | | | | | | |
| 7 | PX RECEIVE | | 1 | 19054 | | | | | | | | | |
| 8 | PX SEND HASH | :TQ10002 | 1 | 19054 | | | 240 | | | | | | |
| 9 | SORT GROUP BY | | 1 | 19054 | 246 | +70 | 240 | 0 | | | 228M | 49.32 | Cpu (3821) |
| 10 | FILTER | | | | 245 | +71 | 240 | 3G | | | | 0.08 | Cpu (6) |
| 11 | HASH JOIN | | 1 | 19054 | 259 | +57 | 240 | 3G | | | 276M | 4.31 | Cpu (334) |
| 12 | PX RECEIVE | | 1M | 5 | 259 | +57 | 240 | 1M | | | | 0.04 | Cpu (3) |
| 13 | PX SEND HASH | :TQ10000 | 1M | 5 | 6 | +56 | 240 | 1M | | | | 0.01 | Cpu (1) |
| 14 | PX BLOCK ITERATOR | | 1M | 5 | 6 | +56 | 240 | 1M | | | | 0.03 | Cpu (1) |
| | | | | | | | | | | | | | PX Deq: reap credit (1) |
| 15 | TABLE ACCESS FULL | T_GTTCHARGEVALUES | 1M | 5 | 7 | +55 | 5486 | 1M | 5487 | 86MB | | 2.31 | gc cr grant 2-way (3) |
| | | | | | | | | | | | | | gc current block lost (7) |
| | | | | | | | | | | | | | Cpu (7) |
| | | | | | | | | | | | | | db file sequential read (162) |
| 16 | PX RECEIVE | | 78M | 19047 | 255 | +61 | 240 | 801K | | | | 0.03 | IPC send completion sync (2) |
| 17 | PX SEND HASH | :TQ10001 | 78M | 19047 | 250 | +66 | 240 | 3M | | | | 0.06 | Cpu (5) |
| 18 | PX BLOCK ITERATOR | | 78M | 19047 | 250 | +66 | 240 | 4M | | | | | |
| 19 | TABLE ACCESS FULL | T_CACHARGES | 78M | 19047 | 254 | +62 | 1016 | 4M | 288K | 6GB | | 37.69 | gc buffer busy acquire (104) |
| | | | | | | | | | | | | | gc cr block 2-way (1) |
| | | | | | | | | | | | | | gc cr block lost (9) |
| | | | | | | | | | | | | | gc cr grant 2-way (14) |
| | | | | | | | | | | | | | gc cr multi block request (1) |
| | | | | | | | | | | | | | gc current block 2-way (3) |
| | | | | | | | | | | | | | gc current block 3-way (2) |
| | | | | | | | | | | | | | gc current block busy (1) |
| | | | | | | | | | | | | | gc current grant busy (2) |
| | | | | | | | | | | | | | Cpu (58) |
| | | | | | | | | | | | | | latch: gc element (1) |
| | | | | | | | | | | | | | db file parallel read (26) |
| | | | | | | | | | | | | | db file scattered read (207) |
| | | | | | | | | | | | | | db file sequential read (2433) |
| | | | | | | | | | | | | | direct path read (1) |
| | | | | | | | | | | | | | read by other session (57) |
==========================================================================================================================================================================================================
Parallel Execution Details (DOP=240 , Servers Allocated=480)
Instances : 3
chris_c wrote:
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
|* 9 | TABLE ACCESS FULL | T_CACHARGES | 24 | 392K| 21M|00:02:47.48 | 608K| 177K| | | |
Based on the discrepancy between the estimated number of rows and the actual, and the bind value of 2012-04-25 posted below, I'd first check whether the statistics on T_CACHARGES are up to date.
As a reference
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4399338600346902127
So that would be my first avenue of exploration.
Cheers, -
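A quick way to act on that suggestion (a sketch; assumes the table is in the current schema):

```sql
-- Hedged sketch: check staleness first, then regather if needed.
-- STALE_STATS is available in *_TAB_STATISTICS on 11g.
SELECT table_name, num_rows, last_analyzed, stale_stats
FROM   user_tab_statistics
WHERE  table_name = 'T_CACHARGES';

BEGIN
  DBMS_STATS.gather_table_stats(
    ownname => USER,
    tabname => 'T_CACHARGES',
    cascade => TRUE);
END;
/
```

If the 392K-vs-21M row discrepancy persists after regathering, the next suspects would be the bind-sensitive date predicates rather than the base statistics.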
Doubt Regarding Merge Statement in Oracle
Hi,
I have an SP which takes 3 parameters Lets say
(in_empid, in_empname,in_age)
here in_empid corresponds to the empid ie primary key for update/insert
Now, which approach will be better? Will there be a problem in using MERGE statements for updates/inserts?
1. Approach 1
Add one more flag in parameters in_action . Now if in_action = 'U' then write an update statement.If in_action='I' then write insert stmnt
2. Approach 2
write a merge stmnt as follows
merge into employee e using
( select in_empid, in_empname,in_age from dual ) b
on ( b.in_empid = e.empid)
WHEN MATCHED THEN
UPDATE SET e.ENAME = in_empname,
e.age = in_age
WHEN NOT MATCHED THEN
INSERT
VALUES (in_empid,in_empname,in_age) something like that
Which would be preferred? I mean, is there any restriction that MERGE can only be used to merge two tables? What are the drawbacks of using MERGE?
Regds,
S
Hi cd,
Thanks for the reply.
Actually, I was keeping the front-end code in mind as well.
If they click an update button, they will have to carry a flag through to the end to say that the transaction was an update, whereas when it is an insert of a new record, they have to maintain a flag to imply that the transaction was an insert.
I want to avoid this so that they need not maintain an additional flag.
Hence I was thinking of using the MERGE statement.
Will there be any problem in using MERGE for such scenarios?
Regds,
S -
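For what it's worth, approach 2 wrapped in a procedure might look like this (a sketch following the names in the post; no extra action flag is needed):

```sql
-- Hedged sketch: single-row upsert via MERGE against DUAL.
-- Column names ename/age on EMPLOYEE are taken from the post.
CREATE OR REPLACE PROCEDURE upsert_employee (
  in_empid   IN employee.empid%TYPE,
  in_empname IN employee.ename%TYPE,
  in_age     IN employee.age%TYPE
) AS
BEGIN
  MERGE INTO employee e
  USING (SELECT in_empid   AS empid,
                in_empname AS ename,
                in_age     AS age
         FROM dual) b
  ON (b.empid = e.empid)
  WHEN MATCHED THEN
    UPDATE SET e.ename = b.ename,
               e.age   = b.age
  WHEN NOT MATCHED THEN
    INSERT (empid, ename, age)
    VALUES (b.empid, b.ename, b.age);
END upsert_employee;
/
```

The MERGE-against-DUAL pattern is a standard single-row upsert; there is no restriction that MERGE must join two full tables.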
Problem with merge using package (row)variable
We're doing an ETL project, where we have to validate data coming from staging tables before inserting / updating them in our data tables.
We use a validate function for this, in which we 'test' our mappings and business logic; it returns 'OK' if no errors occurred. This function is called in the WHERE clause and populates a package row variable. From this variable, values are read in the insert/update statement using a get function.
The problem is that sometimes, with a data set with NO duplicate keys, we get an ORA-00001 error, and sometimes not.
Does anyone have any idea why this error could occur ? We're on Oracle 9.2.0.5.0.
For the demo mentioned below, the following statement fails, while it succeeds if we add 'and rownum < 11' to the where clause:
merge into target_tab t
using (select st.* from source_tab st where demo_pack.assign_var(st.attrib_1, st.attrib_2, st.attrib_3) = 'OK') s
on (t.attrib_1 = s.attrib_1 and
t.attrib_2 = s.attrib_2)
when matched then update
set t.attrib_3 = demo_pack.get_attrib('ATTRIB_3')
when not matched then
insert(attrib_1, attrib_2, attrib_3)
values (demo_pack.get_attrib('ATTRIB_1'), demo_pack.get_attrib('ATTRIB_2'), demo_pack.get_attrib('ATTRIB_3'));
Can anyone explain to me why this statement sometimes fails, and other times not ?
Thanks in advance .
demo tables / packages :
create table source_tab (
attrib_1 varchar2(30),
attrib_2 varchar2(30),
attrib_3 varchar2(200)
);
create table target_tab (
attrib_1 varchar2(30) not null,
attrib_2 varchar2(30) not null,
attrib_3 varchar2(200),
constraint pk_target_tab primary key(attrib_1, attrib_2)
);
insert into source_tab
select table_name, column_name, data_type
from user_tab_columns;
commit;
create or replace package demo_pack as
function assign_var(p_attrib_1 in target_tab.attrib_1%type,
p_attrib_2 in target_tab.attrib_2%type,
p_attrib_3 in target_tab.attrib_3%type) return varchar2;
function get_attrib(p_name in varchar2) return varchar2;
end;
create or replace package body demo_pack as
g_rec target_tab%rowtype;
function assign_var(p_attrib_1 in target_tab.attrib_1%type,
p_attrib_2 in target_tab.attrib_2%type,
p_attrib_3 in target_tab.attrib_3%type) return varchar2 is
begin
g_rec.attrib_1 := p_attrib_1;
g_rec.attrib_2 := p_attrib_2;
g_rec.attrib_3 := p_attrib_3;
return 'OK';
end;
function get_attrib(p_name in varchar2) return varchar2 is
l_return varchar2(200);
begin
if p_name = 'ATTRIB_1' then
l_return := g_rec.attrib_1;
elsif p_name = 'ATTRIB_2' then
l_return := g_rec.attrib_2;
elsif p_name = 'ATTRIB_3' then
l_return := g_rec.attrib_3;
end if;
return l_return;
end;
end;
/
It's not necessarily that the problem is within your package code. As long as a UNIQUE constraint exists on the DEST table, any MERGE statement may fail with ORA-00001 due to concurrent updates and inserts taking place.
Of course, it's just my guess, but consider the following scenario (three sessions modify the DEST table in parallel; to keep this example clear, I put their commands in chronological order):
S#1> create table dest(x primary key, y) as
2 select rownum, rownum
3 from dual
4 connect by level <= 5;
Table created.
S#1> create table src(x, y) as
2 select rownum + 1, rownum + 1
3 from dual
4 connect by level <= 5;
Table created.
S#1> select * from src;
X Y
2 2
3 3
4 4
5 5
6 6
S#1> select * from dest;
X Y
1 1
2 2
3 3
4 4
5 5
S#2> -- Now, session #2 will update one row in DEST table
S#2> update dest
2 set y = 40
3 where x = 4;
1 row updated.
S#2> select * from dest;
X Y
1 1
2 2
3 3
4 40
5 5
S#1> -- Session #1 issues the following MERGE:
S#1> merge into dest d
2 using (select * from src) s
3 on (s.x = d.x)
4 when matched then update set d.y = s.y
5 when not matched then insert (d.x, d.y)
6 values (s.x, s.y);
-- At this point, session #1 becomes blocked as it can not modify row locked by session #2
S#3> -- Now, session #3 inserts new row into DEST and commits.
S#3> -- MERGE in session #1 is still blocked.
S#3> insert into dest values (6, 6);
1 row created.
S#3> select * from dest;
X Y
1 1
2 2
3 3
4 4
5 5
6 6
6 rows selected.
S#3> commit;
Commit complete.
S#2> -- Session #2 rolls back its UPDATE.
S#2> rollback;
Rollback complete.
-- Finally, session #1 is getting unblocked, but...
merge into dest d
ERROR at line 1:
ORA-00001: unique constraint (MAX.SYS_C0032125) violated
S#1>
Hope this helps,
Andrew.
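If the MERGE has to tolerate such concurrent inserts, one option is DML error logging, which diverts the offending rows to a log table instead of failing the whole statement. A sketch against the DEST/SRC demo above (assuming the default ERR$_DEST log table name that DBMS_ERRLOG generates):

```sql
-- Create the error-logging table once (defaults to the name ERR$_DEST).
begin
  dbms_errlog.create_error_log('DEST');
end;
/

merge into dest d
using (select * from src) s
on (s.x = d.x)
when matched then update set d.y = s.y
when not matched then insert (d.x, d.y)
values (s.x, s.y)
log errors into err$_dest ('merge run 1') reject limit unlimited;

-- Rows that hit ORA-00001 land in ERR$_DEST instead of aborting the MERGE.
select ora_err_number$, ora_err_mesg$, x from err$_dest;
```

The alternative is to serialize the writers, e.g. LOCK TABLE dest IN EXCLUSIVE MODE before the MERGE, or simply catch ORA-00001 and retry the statement.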
Hi gems, good afternoon.
My database version is 11.2.0.1.0, 64-bit, on Solaris.
I am facing an "ORA-22813: operand value exceeds system limits" error while running a procedure.
I have used loggers and found that it is failing in a MERGE statement.
That MERGE statement merges a table with a collection; the code is like below:
MERGE /*+ INDEX(P BALANCE_HISTORIC_INDEX) */
INTO BALANCE_HOLD_HISTORIC P
USING TABLE(GET_BALANCE_HIST(V_MERGE_REC)) M
ON (P.CUSTOMER_ID = M.CUSTOMER_ID AND P.BOOK_ID = M.BOOK_ID AND P.PRODUCT_ID = M.PRODUCT_ID AND P.SUB_BOOK_ID = M.SUB_BOOK_ID)
WHEN MATCHED THEN
UPDATE
<set .....>
WHEN NOT MATCHED THEN
INSERT <.....>
The parameter of the function GET_BALANCE_HIST(V_MERGE_REC) is a table type.
Now the function GET_BALANCE_HIST(V_MERGE_REC) is a pipelined function and we have used that because the collection V_MERGE_REC may get huge with data.
This proc was running fine from the beginning, but since the day before yesterday it has been continuously throwing ORA-22813 at that line.
Please help. Thanks in advance.
Hi Paul, thanks for your reply.
The function GET_BALANCE_HIST is not selecting data from any tables.
What this pipelined function does is take the huge collection V_MERGE_REC as a parameter and return its rows in pipelined form. The code for the function is:
CREATE OR REPLACE FUNCTION GET_BALANCE_HIST(P_MERGE IN TAB_TYPE_BALANCE_HISTORIC)
RETURN TAB_TYPE_BALANCE_HISTORIC
PIPELINED AS
V_MERGE TAB_TYPE_BALANCE_HISTORIC := TAB_TYPE_BALANCE_HISTORIC();
BEGIN
FOR I IN 1 .. P_MERGE.COUNT LOOP
V_MERGE.EXTEND;
V_MERGE(V_MERGE.LAST) := OBJ_TYPE_BALANCE_HISTORIC(P_MERGE(I).CUSTOMER_ID,
P_MERGE(I).BOOK_ID,
P_MERGE(I).PRODUCT_ID,
P_MERGE(I).SUB_BOOK_ID,
P_MERGE(I).EARNINGS,
P_MERGE(I).EARNINGS_HOUSE,
P_MERGE(I).QUANTITY,
P_MERGE(I).ACCOUNT_INTEGER);
END LOOP;
FOR J IN 1 .. V_MERGE.COUNT LOOP
PIPE ROW(OBJ_TYPE_BALANCE_HISTORIC(V_MERGE(J).CUSTOMER_ID,
V_MERGE(J).BOOK_ID,
V_MERGE(J).PRODUCT_ID,
V_MERGE(J).SUB_BOOK_ID,
V_MERGE(J).EARNINGS,
V_MERGE(J).EARNINGS_HOUSE,
V_MERGE(J).QUANTITY,
V_MERGE(J).ACCOUNT_INTEGER));
END LOOP;
RETURN;
END;
/
I think the error is coming because of the value of the parameter V_MERGE_REC. Since it is huge, loading it into memory is causing the problem. But in this case, how can I resolve it? Can I use a global temporary table for this?
Please suggest...
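One thing worth noting: GET_BALANCE_HIST builds a second full copy of the collection in V_MERGE before piping, so the function roughly doubles the memory footprint. Since TAB_TYPE_BALANCE_HISTORIC is already a SQL-level collection type (a pipelined function must return one), the collection can be bound to TABLE() directly and the pipelined copy dropped entirely. A sketch against the posted names (untested here, and with the SET/INSERT lists still elided as in the original):

```sql
MERGE /*+ INDEX(P BALANCE_HISTORic_INDEX) */
INTO BALANCE_HOLD_HISTORIC P
USING TABLE(V_MERGE_REC) M   -- bind the collection directly; no pipelined copy
ON (P.CUSTOMER_ID = M.CUSTOMER_ID
    AND P.BOOK_ID = M.BOOK_ID
    AND P.PRODUCT_ID = M.PRODUCT_ID
    AND P.SUB_BOOK_ID = M.SUB_BOOK_ID)
WHEN MATCHED THEN
  UPDATE SET ...
WHEN NOT MATCHED THEN
  INSERT (...) VALUES (...);
```

If the collection itself is too large for the PGA, a global temporary table is indeed the usual workaround: fill it in chunks with FORALL inserts (LIMIT-based fetches on the producing side), then MERGE from the GTT instead of from the collection.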