Merge query
The actual syntax of the MERGE statement is:
MERGE <hint> INTO <table_name>
USING <table_view_or_query>
ON (<condition>)
WHEN MATCHED THEN <update_clause>
WHEN NOT MATCHED THEN <insert_clause>;
But is the following possible or not?
MERGE <hint> INTO <table_name>
USING <table_view_or_query>
ON (<condition>)
WHEN MATCHED THEN <insert_clause into tab A>
WHEN NOT MATCHED THEN <insert_clause into tab B>;
Is this possible or not?
No, not directly.
You could theoretically try to achieve it
with an UPDATE trigger on tab A (plus a dummy update of tab A
in the UPDATE clause of the merge) which inserts into tab B,
but it's probably better to search in the direction of INSERT ALL ...
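A sketch of the INSERT ALL direction (table and column names here are invented for illustration): an outer join between source and target computes a matched flag, and a conditional multitable INSERT routes each row to tab A or tab B:

```sql
INSERT ALL
  -- row exists in the target: route it to tab_a
  WHEN match_flag = 1 THEN
    INTO tab_a (id, val) VALUES (id, val)
  -- row is new: route it to tab_b
  WHEN match_flag = 0 THEN
    INTO tab_b (id, val) VALUES (id, val)
SELECT s.id, s.val,
       CASE WHEN t.id IS NOT NULL THEN 1 ELSE 0 END AS match_flag
FROM source_table s
LEFT JOIN target_table t ON t.id = s.id;
```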
Message was edited by:
Dmytro Dekhtyaryuk
Similar Messages
-
Executing a merge query for a collection
Hello All,
I am trying to use a merge query to find common and uncommon ids between a table and a list I pass to my PL/SQL procedure. I am not sure if I am doing the right thing, please guide me...
Here is my code...
Procedure process_content(i_eidlist IN ocs_eid_list_t, i_id IN number, o_new_email_list OUT ocs_eid_list_t) AS
lv_last_processed_row_id number;
lv_common_email_list ocs_eid_list_t;
lv_internet_id varchar2;
Begin
lv_last_processed_row_id := 0;
MERGE INTO table c
USING TABLE(i_eidlist)a
ON (c.eid in a)
--WHEN MATCHED THEN UPDATE SET c.row_id = job_no_seq.NEXTVAL,c.copy_count=1 returning row_id bulk collect into lv_row_id_list;
WHEN MATCHED THEN SELECT c.eid bulk collect into lv_common_email_list returning row_id bulk collect into lv_row_id_list
WHEN NOT MATCHED THEN SELECT c.eid bulk collect into o_new_email_list;
I am assuming that the merge query is going to iterate over the i_eidlist, and find me the common and uncommon elements. However, I get an error saying the sql block is ignored.
Thanks
Abhishek
Post edited by: A.J.
I do not think it is possible in one pass. The best I could come up with:
DECLARE
COMMON_LIST NAME_LIST;
UNCOMMON_LIST NAME_LIST;
EMPLOYEE_LIST NAME_LIST;
CHECK_LIST NAME_LIST := NAME_LIST('KING','QUEEN');
BEGIN
SELECT ENAME
BULK COLLECT INTO EMPLOYEE_LIST
FROM EMP;
COMMON_LIST := EMPLOYEE_LIST MULTISET INTERSECT CHECK_LIST;
UNCOMMON_LIST := EMPLOYEE_LIST MULTISET EXCEPT CHECK_LIST;
DBMS_OUTPUT.PUT_LINE('-- COMMON_LIST --');
FOR i IN 1..COMMON_LIST.COUNT LOOP
DBMS_OUTPUT.PUT_LINE(COMMON_LIST(i));
END LOOP;
DBMS_OUTPUT.PUT_LINE('-- UNCOMMON_LIST --');
FOR i IN 1..UNCOMMON_LIST.COUNT LOOP
DBMS_OUTPUT.PUT_LINE(UNCOMMON_LIST(i));
END LOOP;
END;
/
Run results showing contents:
-- COMMON_LIST --
KING
-- UNCOMMON_LIST --
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
TURNER
ADAMS
JAMES
FORD
MILLER
PL/SQL procedure successfully completed.
SY.
-
Error in merge query of the Proc.
Hi all,
The merge query below works fine if I execute it independently. I get the error "ORA-00900: invalid SQL statement" when I paste this code inside the procedure. However, if I comment out the lines written in bold, the proc works fine. Not sure why the comparison is not happening even though the code and data are correct. If I use LIKE instead of IN then it works fine.
Can you please suggest on this?
MERGE INTO ENTRYPOINTASSETS
USING
(SELECT
LAST_DAY(TRUNC(to_timestamp(oa.reqdate, 'yyyymmddhh24:mi:ss.ff4'))) as activity_month,
oa.acctnum as acctnum,
l.lkpvalue as assettype,
LOWER(trim(oa.disseminationmthd)) as deliverymthd,
epa.assetid as assetid,
epa.assetname as assetname,
count(1) as entrypointcount
FROM action oa, asset epa, lookupdata l
WHERE
oa.assetkey IS NOT NULL
AND oa.acctnum is not null
AND trim(oa.assettype) IN ('NL','WS','AL')
AND l.lkpid = epa.assettypeid
AND UPPER(trim(oa.disseminationmthd)) IN ('ABC','BCD', 'PODCAST', 'EMAIL','ED', 'WID')
AND epa.assetkey = oa.assetkey
AND epa.assetid <> 0
GROUP BY
LAST_DAY(TRUNC(to_timestamp(oa.reqdate, 'yyyymmddhh24:mi:ss.ff4'))),
oa.acctnum,
l.lkpvalue,
LOWER(trim(oa.disseminationmthd)),
epa.assetid,
epa.assetname
) cvd1
ON
(ENTRYPOINTASSETS.activity_month = cvd1.activity_month
AND ENTRYPOINTASSETS.acctnum = cvd1.acctnum
AND ENTRYPOINTASSETS.assettype = cvd1.assettype
AND UPPER(ENTRYPOINTASSETS.deliverymthd) = UPPER(cvd1.deliverymthd)
AND ENTRYPOINTASSETS.assetid = cvd1.assetid)
WHEN NOT MATCHED THEN
INSERT (activity_month, acctnum, assettype, deliverymthd, assetid, assetname, entrypointcount)
VALUES (cvd1.activity_month, cvd1.acctnum, cvd1.assettype, cvd1.deliverymthd, cvd1.assetid, cvd1.assetname, cvd1.entrypointcount)
WHEN MATCHED THEN
UPDATE
SET ENTRYPOINTASSETS.assetname = cvd1.assetname;
Edited by: Nagaraja Akkivalli on Aug 9, 2011 6:07 PM
Tried it. No luck. Facing the same problem.
MERGE INTO SUMMARYTABLE
USING
(SELECT
LAST_DAY(TRUNC(to_timestamp(oa.reqdate, 'yyyymmddhh24:mi:ss.ff4'))) as activity_month,
oa.acctnum as acctnum,
l.lkpvalue as assettype,
LOWER(TRIM(oa.disseminationmthd)) as deliverymthd,
epa.assetid as assetid,
epa.assetname as assetname,
count(1) as entrypointcount
FROM ods_action oa, ods_asset epa, ods_lookupdata l
WHERE
(lv_summm_type_indicator = c_summaryType_fullLoad
AND ( get_date_timestamp(oa.reqdate) BETWEEN :sum_startdate AND :sum_enddate
AND oa.uploaddatetime BETWEEN :partitioned_start_date AND :partitioned_end_date
OR (lv_summm_type_indicator = c_summaryType_incrementLoad
AND oa.uploaddatetime BETWEEN :sum_startdate AND :sum_enddate )
AND oa.assetkey IS NOT NULL
AND oa.acctnum is not null
AND UPPER(TRIM(oa.assettype)) IN ('NL','WS','AL')
AND l.lkpid = epa.assettypeid
AND UPPER(TRIM(oa.disseminationmthd)) IN ('RSS','PCAST', 'PODCAST', 'EMAIL','ED', 'WID')
AND epa.assetkey = oa.assetkey
AND epa.assetid <> 0
GROUP BY
LAST_DAY(TRUNC(to_timestamp(oa.reqdate, 'yyyymmddhh24:mi:ss.ff4'))),
oa.acctnum,
l.lkpvalue,
LOWER(TRIM(oa.disseminationmthd)),
epa.assetid,
epa.assetname
) cvd1
ON
(SUMMARYTABLE.activity_month = cvd1.activity_month
AND SUMMARYTABLE.acctnum = cvd1.acctnum
AND SUMMARYTABLE.assettype = cvd1.assettype
AND UPPER(TRIM(SUMMARYTABLE.deliverymthd)) = UPPER(TRIM(cvd1.deliverymthd))
AND SUMMARYTABLE.assetid = cvd1.assetid)
WHEN NOT MATCHED THEN
INSERT (activity_month, acctnum, assettype, deliverymthd, assetid, assetname, entrypointcount)
VALUES (cvd1.activity_month, cvd1.acctnum, cvd1.assettype, cvd1.deliverymthd, cvd1.assetid, cvd1.assetname, cvd1.entrypointcount)
WHEN MATCHED THEN
UPDATE
SET SUMMARYTABLE.assetname = cvd1.assetname,
SUMMARYTABLE.entrypointcount =
CASE WHEN NVL(lv_summm_type_indicator,c_summaryType_incrementLoad) = c_summaryType_fullLoad THEN cvd1.entrypointcount
ELSE SUMMARYTABLE.entrypointcount + cvd1.entrypointcount END;
If I comment out either one of the two pieces of code below, the merge works fine. If I retain both, I get the error.
lv_summm_type_indicator is a variable calculated at run time to check the type of summarization, and c_summaryType_incrementLoad is a constant defined at the top of the procedure. Please let me know where I am going wrong.
(lv_summm_type_indicator = c_summaryType_fullLoad
AND ( get_date_timestamp(oa.reqdate) BETWEEN :sum_startdate AND :sum_enddate
AND oa.uploaddatetime BETWEEN :partitioned_start_date AND :partitioned_end_date
))OR
OR (lv_summm_type_indicator = c_summaryType_incrementLoad
AND oa.uploaddatetime BETWEEN :sum_startdate AND :sum_enddate )
Edited by: Nagaraja Akkivalli on Aug 24, 2011 5:13 PM
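For what it's worth, the fragment above has unbalanced parentheses and a doubled OR. A sketch of the condition with balanced parentheses, assuming the intended logic is "full load with both date checks, or incremental load with the upload-date check":

```sql
(  (lv_summm_type_indicator = c_summaryType_fullLoad
    AND get_date_timestamp(oa.reqdate) BETWEEN :sum_startdate AND :sum_enddate
    AND oa.uploaddatetime BETWEEN :partitioned_start_date AND :partitioned_end_date)
OR (lv_summm_type_indicator = c_summaryType_incrementLoad
    AND oa.uploaddatetime BETWEEN :sum_startdate AND :sum_enddate)
)
```

Also, if this MERGE is static SQL inside the procedure, the :bind placeholders would need to become PL/SQL variables or parameters.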
Edited by: Nagaraja Akkivalli on Aug 24, 2011 5:16 PM
-
Merge query error in Where clause
The following error comes up when I execute the merge query. Anything wrong with this? I am using Oracle 9.2.0.1.
Query:
MERGE
INTO incompletekalls ic
USING live_small ls
ON ((ls.callid = ic.callid) AND
(ls.sdate = ic.sdate) AND
(ls.stime = ic.stime))
WHEN MATCHED THEN
UPDATE
SET ic.adate = ls.adate,
ic.atime = ls.atime,
ic.edate = ls.edate,
ic.etime = ls.etime
WHERE
ls.sdate = '16-Apr-09' AND ls.stime >= '09:00:00' AND ls.stime <= '11:00:00' AND ((ls.adate IS NULL) OR
(ls.edate IS NULL))
WHEN NOT MATCHED THEN
INSERT (ic.callid,ic.cg,ic.cd,ic.re,ic.opc,ic.dpc,ic.sdate,ic.stime,ic.adate,ic.atime,ic.edate,ic.etime)
VALUES (ls.callid,ls.cg,ls.cd,ls.re,ls.opc,ls.dpc,ls.sdate,ls.stime,ls.adate,ls.atime,ls.edate,ls.etime)
WHERE ls.sdate >= '16-Apr-09' AND ls.stime >= '09:00:00' AND ls.stime <= '11:00:00'
Error:
SQL> /
WHERE
ERROR at line 13:
ORA-00905: missing keyword
Hi,
From looking at the documented examples
http://www.oracle.com/pls/db92/db92.drilldown?levelnum=2&toplevel=a96540&method=FULL&chapters=0&book=&wildcards=1&preference=&expand_all=&verb=&word=MERGE#a96540
and on http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5318183934935
I think that you cannot use the WHERE in your MERGE like that on 9i...
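On 9i the usual workaround is to push the filters into the USING clause instead, so both branches only ever see pre-filtered rows; a sketch based on the query above (the branch-specific adate/edate IS NULL checks cannot be expressed per branch on 9i and would need a separate pass):

```sql
MERGE INTO incompletekalls ic
USING (SELECT *
       FROM live_small
       -- filters moved here from the per-branch WHERE clauses
       WHERE sdate = '16-Apr-09'
         AND stime BETWEEN '09:00:00' AND '11:00:00') ls
ON ((ls.callid = ic.callid) AND (ls.sdate = ic.sdate) AND (ls.stime = ic.stime))
WHEN MATCHED THEN UPDATE
  SET ic.adate = ls.adate, ic.atime = ls.atime,
      ic.edate = ls.edate, ic.etime = ls.etime
WHEN NOT MATCHED THEN INSERT
  (ic.callid, ic.cg, ic.cd, ic.re, ic.opc, ic.dpc, ic.sdate, ic.stime,
   ic.adate, ic.atime, ic.edate, ic.etime)
  VALUES (ls.callid, ls.cg, ls.cd, ls.re, ls.opc, ls.dpc, ls.sdate, ls.stime,
          ls.adate, ls.atime, ls.edate, ls.etime);
```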
Something else I want to warn you for:
It's a bad idea to store your date and time separately as strings! You'll run into trouble sooner or later, guaranteed...
Use a single DATE column instead, in which you store both the date and time components.
-
This is the first time I am using a MERGE query, and the query is:
;MERGE saSalesStockRecevedSub AS T
USING @tableSRN AS S
ON (T.prodCode=S.prodCode AND T.packtypeID=S.packtypeID AND T.batchCode=S.batchCode and T.stockReceiveMainID=S.stockReceiveMainID)
WHEN NOT MATCHED BY T
THEN
INSERT(stockReceiveMainID, prodCode, packtypeID, batchCode, quantityClaim, quantityReceived,reasonId, qQuantity, saleableQty, mfgDefect, slDamageQty, nonSaleableQty, damageQty, breakageQty, leakageQty, expiryQty, remarks,activeStatus,cnCreateSl,cnCreateNsl)
VALUES(S.stockReceiveMainID, S.prodCode, S.packtypeID, S.batchCode, S.quantityClaim, S.quantityReceived,S.reasonId, S.qQuantity, S.saleableQty, S.mfgDefect, S.slDamageQty, S.nonSaleableQty, S.damageQty, S.breakageQty, S.leakageQty, S.expiryQty,
S.remarks,'ACTIVE','FALSE','FALSE')
WHEN MATCHED
THEN
UPDATE SET T.quantityClaim=S.quantityClaim, T.quantityReceived=S.quantityReceived,T.reasonId=S.reasonId, T.qQuantity=S.qQuantity, T.saleableQty=S.saleableQty, T.mfgDefect=S.mfgDefect, T.slDamageQty=S.slDamageQty, T.nonSaleableQty=S.nonSaleableQty,
T.damageQty=S.damageQty, T.breakageQty=S.breakageQty, T.leakageQty=S.leakageQty, T.expiryQty=S.expiryQty, T.remarks=S.remarks
WHEN NOT MATCHED BY S
THEN
DELETE ;
It is showing incorrect syntax near 'T'.
Please help me.
Thanks
Here below is the complete code; hope it works for you.
You should use the keywords TARGET and SOURCE instead of the aliases T and S in the WHEN NOT MATCHED BY clauses:
MERGE saSalesStockRecevedSub AS T
USING tableSRN AS S
ON (T.prodCode=S.prodCode AND T.packtypeID=S.packtypeID AND T.batchCode=S.batchCode and T.stockReceiveMainID=S.stockReceiveMainID)
WHEN NOT MATCHED BY TARGET
THEN
INSERT(stockReceiveMainID, prodCode, packtypeID, batchCode, quantityClaim, quantityReceived,reasonId, qQuantity, saleableQty, mfgDefect, slDamageQty, nonSaleableQty, damageQty, breakageQty, leakageQty, expiryQty, remarks,activeStatus,cnCreateSl,cnCreateNsl)
VALUES(S.stockReceiveMainID, S.prodCode, S.packtypeID, S.batchCode, S.quantityClaim, S.quantityReceived,S.reasonId, S.qQuantity, S.saleableQty, S.mfgDefect, S.slDamageQty, S.nonSaleableQty, S.damageQty, S.breakageQty, S.leakageQty, S.expiryQty,
S.remarks,'ACTIVE','FALSE','FALSE')
WHEN MATCHED
THEN
UPDATE SET T.quantityClaim=S.quantityClaim, T.quantityReceived=S.quantityReceived,T.reasonId=S.reasonId, T.qQuantity=S.qQuantity, T.saleableQty=S.saleableQty, T.mfgDefect=S.mfgDefect, T.slDamageQty=S.slDamageQty, T.nonSaleableQty=S.nonSaleableQty,
T.damageQty=S.damageQty, T.breakageQty=S.breakageQty, T.leakageQty=S.leakageQty, T.expiryQty=S.expiryQty, T.remarks=S.remarks
WHEN NOT MATCHED BY SOURCE
THEN
DELETE ;
Working as a Senior Database Analyst & Architect at Ministry of Higher Education in KSA -
Need help optimizing a merge query
Hi all, I hope someone can give me some assistance with this. I don't have a lot of experience with Oracle, so any help would be greatly appreciated. I have a MERGE query that I need to optimize as much as possible. I will give as much information as I can in this post.
Here is the actual query:
MERGE INTO quick_scene_lookup qsl
USING (
SELECT scene.*,
CASE
WHEN scene.data_category LIKE 'NOM%'
THEN 'NOM'
WHEN scene.data_category LIKE 'ENG%'
THEN 'ENG'
WHEN scene.data_category LIKE 'VAL%'
THEN 'VAL'
ELSE scene.data_category
END scn_data_category,
CASE
WHEN scene.data_category_original LIKE 'NOM%'
THEN 'NOM'
WHEN scene.data_category_original LIKE 'ENG%'
THEN 'ENG'
WHEN scene.data_category_original LIKE 'VAL%'
THEN 'VAL'
ELSE scene.data_category_original
END data_category_orig,
CASE
WHEN scene.full_partial_scene LIKE 'F%'
THEN 'F'
WHEN scene.full_partial_scene LIKE 'P%'
THEN 'P'
ELSE scene.full_partial_scene
END scn_full_partial_scene,
CASE
WHEN scene.date_entered_lam IS NULL
OR scene.deleted = 1
THEN 0
ELSE 1
END in_lam,
CASE
WHEN scene.in_uis LIKE 'Y%'
THEN 1
ELSE 0
END scn_in_uis,
CASE
WHEN scene.data_category LIKE 'NOM%'
AND scene.full_partial_scene LIKE 'F%'
THEN 1
ELSE 0
END monitor,
CASE
WHEN scene.date_updated_lam IS NOT NULL
AND scene.satellite_sensor_key = 8 -- L7 ETM
THEN
CASE
WHEN intv.match_in_tolerance = 'Y'
OR intv.match_in_tolerance = 'N'
THEN 0
ELSE 1
END
ELSE 0
END lam_orphan,
sat.satellite,
sat.sensor_id,
station.station_id,
intv.match_in_tolerance,
wrs.wrs_path,
wrs.wrs_row,
TO_DATE(SUBSTR(scene.scene_start_time, 0, 17),
'YYYY:DDD:HH24:MI:SS') scn_scene_start_time,
CASE
WHEN qsl.date_downlinked IS NOT NULL
AND scene.date_standing_request IS NOT NULL
THEN (qsl.date_downlinked - scene.date_standing_request) * 1440
ELSE NULL
END dd_downlinked_marketable
FROM all_scenes scene
INNER JOIN lu_satellite sat
ON (scene.satellite_sensor_key = sat.satellite_sensor_key)
INNER JOIN ground_stations station
ON (scene.station_key = station.station_key)
INNER JOIN all_intervals intv
ON (scene.landsat_interval_id = intv.landsat_interval_id)
INNER JOIN worldwide_reference_system wrs
ON (scene.wrs_key = wrs.wrs_key)
LEFT OUTER JOIN quick_scene_lookup qsl
ON (scene.landsat_scene_id = qsl.landsat_scene_id)
WHERE (scene.job_sequence_key IN (
SELECT job_sequence_key FROM jobs_subscript_execution_fact
WHERE job_key = 109)
OR qsl.landsat_scene_id IS NULL)
AND scene.scene_version = (
SELECT MAX(scene_version) FROM all_scenes
WHERE SUBSTR(landsat_scene_id, 1, 19) =
SUBSTR(scene.landsat_scene_id, 1, 19))) scenes
ON (qsl.landsat_scene_id = scenes.landsat_scene_id
OR (qsl.satellite = scenes.satellite
AND qsl.station_id = scenes.station_id
AND qsl.wrs_path = scenes.wrs_path
AND qsl.wrs_row = scenes.wrs_row
AND TRUNC(qsl.date_acquired) = TRUNC(scenes.date_acquired)
AND SUBSTR(qsl.sensor_id, 1, 3) = SUBSTR(scenes.sensor_id, 1, 3)))
WHEN MATCHED THEN
UPDATE SET
data_category = scenes.scn_data_category,
data_category_original = scenes.data_category_orig,
date_acquired = scenes.date_acquired,
date_entered = scenes.date_entered,
date_standing_request = scenes.date_standing_request,
date_updated = scenes.date_updated,
dd_downlinked_marketable = scenes.dd_downlinked_marketable,
full_partial_scene = scenes.scn_full_partial_scene,
in_lam = scenes.in_lam,
in_uis = scenes.scn_in_uis,
lam_orphan = scenes.lam_orphan,
monitor = scenes.monitor,
satellite = scenes.satellite,
scene_start_time_date = scenes.scn_scene_start_time,
sensor_id = scenes.sensor_id,
station_id = scenes.station_id,
wrs_path = scenes.wrs_path,
wrs_row = scenes.wrs_row,
cloud_cover = scenes.cloud_cover
WHEN NOT MATCHED THEN INSERT(
qsl_scene_id,
data_category,
data_category_original,
date_acquired,
date_entered,
date_standing_request,
date_updated,
full_partial_scene,
in_lam,
in_moc,
in_uis,
lam_orphan,
landsat_interval_id,
landsat_scene_id,
monitor,
satellite,
scene_start_time_date,
sensor_id,
station_id,
wrs_path,
wrs_row,
cloud_cover)
VALUES(
seq_qsl_scene_id.nextval,
scenes.scn_data_category,
scenes.data_category_orig,
scenes.date_acquired,
scenes.date_entered,
scenes.date_standing_request,
scenes.date_updated,
scenes.scn_full_partial_scene,
scenes.in_lam,
0, -- in_moc will always be 0 for archive inserts
scenes.scn_in_uis,
scenes.lam_orphan,
scenes.landsat_interval_id,
scenes.landsat_scene_id,
scenes.monitor,
scenes.satellite,
scenes.scn_scene_start_time,
scenes.sensor_id,
scenes.station_id,
scenes.wrs_path,
scenes.wrs_row,
scenes.cloud_cover)
LOG ERRORS INTO ingest_errors('Intervals error')
REJECT LIMIT 500;
All of the columns used in the joins have indexes, as do all columns referenced in the WHERE clause. I have function-based indexes on the two columns that use a function.
The cost from the explain plan is at 14 million, and this query takes far too long to run. I can post the explain plan if anybody wants it. We are running Oracle 10.2, and we are a data warehouse. Any help I can get on this would be greatly appreciated; my DBAs and I are unable to come up with any more ideas. Thanks in advance.
Well, just in case this might help someone else out, I was able to resolve my issue with the code. It turns out that in the secondary condition of the ON clause for the merge, I had columns that were also being updated. Removing those columns from the UPDATE clause dropped my cost down to 456,000. I was further able to reduce the cost of the query by removing the primary condition completely. I am a C++ programmer and was counting on a short-circuit OR to speed up the process. Anyway, my cost is now down to 215,000, and all is good here.
Thanks. -
Hi,
I am using a merge query.
Below is my query.
MERGE INTO tbltmonthlysales A
USING
(select DISTINCT 'JAN-2011' MON_YYYY,distributorname DISTRIBUTORNAME From SYN_VWABS) B
ON (a.distributorname = B.distributorname)
WHEN NOT MATCHED THEN
INSERT (A.MON_YYYY,A.distributorname) VALUES (B.MON_YYYY,B.DISTRIBUTORNAME);
------Now Firing this query ---
1)
SQL> MERGE INTO tbltmonthlysales A
2 USING
3 (select DISTINCT 'JAN-2011' MON_YYYY,distributorname DISTRIBUTORNAME From SYN_VWABS) B
4 ON (a.distributorname = B.distributorname)
5 WHEN NOT MATCHED THEN
6 INSERT (A.MON_YYYY,A.distributorname) VALUES (B.MON_YYYY,B.DISTRIBUTORNAME);
INSERT (A.MON_YYYY,A.distributorname) VALUES (B.MON_YYYY,B.DISTRIBUTORNAME)
ERROR at line 6:
ORA-00904: "B"."DISTRIBUTORNAME": invalid identifier
Now I am creating one table:
create table test2 as select * from SYN_VWABS
2)
SQL> MERGE INTO tbltmonthlysales A
2 USING
3 (select DISTINCT 'JAN-2011' MON_YYYY,distributorname DISTRIBUTORNAME From test2) B
4 ON (a.distributorname = B.distributorname)
5 WHEN NOT MATCHED THEN
6 INSERT (A.MON_YYYY,A.distributorname) VALUES (B.MON_YYYY,B.DISTRIBUTORNAME);
INSERT (A.MON_YYYY,A.distributorname) VALUES (B.MON_YYYY,B.DISTRIBUTORNAME)
71 rows merged.
My question is: why is query 1) not executing?
Thanks, your efforts are appreciated, but it's not working.
See, I have created the view as per your suggestion:
create or replace view vwabc as
SELECT x.accountnumber accountnumber,
distributorname distributorname,
staffname staffname,
resellerno resellerno,
resellername resellername, resellercreatedate resellercreatedate, customerno customerno, custmername custmername,
customeractivationdate customeractivationdate, childaccountnumber childaccountnumber, childaccountname childaccountname,
childactivationdate childactivationdate, resellerstaff resellerstaff
--x.accountstaffid Disaccountstaffid,
--y.accountstaffid resaccountstaffid
FROM (SELECT Y.RELATIONACCOUNTNUMBER accountnumber, x.NAME distributorname, staffname,
y.accountnumber resellerno, y.accountid resaccountid,
x.accountstaffid
FROM ((SELECT a.accountid, b.accountstaffid, a.NAME,
b.NAME staffname, a.accountnumber
FROM tblmaccount a, tblmaccountstaff b
WHERE a.accounttypeid = 'ACT04' AND a.accountid = b.owneraccountid(+))
UNION ALL
(SELECT a.accountid, NULL, a.NAME, NULL, a.accountnumber
FROM tblmaccount a
WHERE a.accounttypeid = 'ACT04'
AND EXISTS (SELECT 1
FROM tblmaccountstaff c
WHERE c.owneraccountid = a.accountid))) x,
tblmaccountaccountrel y
WHERE NVL (x.accountstaffid, '-') = NVL (y.accountstaffid(+), '-')
AND x.accountid = y.relationaccountid(+)
--and x.accountid = 'ACC000537173'
) x,
(SELECT x.accountnumber, x.NAME resellername, resellerstaff,
x.accountstaffid, y.accountnumber customerno,
ta.NAME custmername, ta.accountid,
ta.activationdate customeractivationdate,
y.relationaccountid, cust.accountnumber childaccountnumber,
cust.NAME childaccountname,
TRUNC (cust.activationdate) childactivationdate,
x.resellercreatedate
FROM ((SELECT a.accountid, b.accountstaffid, a.NAME,
b.NAME resellerstaff, a.accountnumber,
TRUNC (a.createdate) resellercreatedate
FROM tblmaccount a, tblmaccountstaff b
WHERE a.accounttypeid = 'ACT03' AND a.accountid = b.owneraccountid(+))
UNION ALL
(SELECT a.accountid, NULL, a.NAME, NULL, a.accountnumber,
TRUNC (a.createdate) resellercreatedate
FROM tblmaccount a
WHERE a.accounttypeid = 'ACT03'
AND EXISTS (SELECT 1
FROM tblmaccountstaff c
WHERE c.owneraccountid = a.accountid))) x,
tblmaccountaccountrel y,
tblmaccount ta,
tblmaccount cust
WHERE NVL (x.accountstaffid, '-') = NVL (y.accountstaffid, '-')
AND x.accountid = y.relationaccountid(+)
AND y.accountid = ta.accountid(+)
AND ta.accountnumber = cust.parentaccountnumber
--and x.accountid = 'ACC000537856'
) y
WHERE x.resaccountid = y.relationaccountid(+)
Now firing the merge:
SQL> MERGE INTO tbltmonthlysales A
2 USING
3 (select DISTINCT MON_YYYY ,distributorname From vwabc) B
4 ON (a.distributorname = B.distributorname)
5 WHEN NOT MATCHED THEN
6 INSERT (A.MON_YYYY,A.distributorname) VALUES (B.MON_YYYY,B.DISTRIBUTORNAME);
(select DISTINCT MON_YYYY ,distributorname From vwabc) B
ERROR at line 3:
ORA-00904: "MON_YYYY": invalid identifier
still error :( -
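A likely explanation for the ORA-00904 in the thread above: the view vwabc as defined selects no column named MON_YYYY, so the USING subquery cannot reference one. Supplying the month as a literal in the USING query (as the original statement did) should compile; a sketch:

```sql
MERGE INTO tbltmonthlysales A
USING
  -- the view has no MON_YYYY column, so supply the month as a literal
  (SELECT DISTINCT 'JAN-2011' MON_YYYY, distributorname
   FROM vwabc) B
ON (A.distributorname = B.distributorname)
WHEN NOT MATCHED THEN
  INSERT (A.MON_YYYY, A.distributorname)
  VALUES (B.MON_YYYY, B.distributorname);
```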
Hi ,
I am writing a merge query for a Java application. I have a screen and I am going to take the values in the screen and check if those values exist in the database. If they exist, then I will have to update the data, else I will have to insert the data.
The query is like
MERGE INTO XYZ USING
(SELECT BONUS_ID,CUST_NBR FROM XYZ)B ON
(B.BONUS_ID = 2027 and B.CUST_NBR='181258225')
WHEN MATCHED THEN UPDATE SET
CUST_TYPE= 'S', REV_AMT= 123, POUND_TOTAL= 123, PKG_TOTAL= 123 WHERE
CUST_NBR = '181258225'
WHEN NOT MATCHED THEN INSERT
But this query is not working. I get a "missing keyword" error.
I meant nothing else than:
update xyz set ... where ...;
and
insert into xyz select ... from xyz where ...;
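The "missing keyword" most likely comes from the WHEN NOT MATCHED THEN INSERT clause being left without a column list and VALUES. For a single-row upsert like this, the usual pattern is to drive the MERGE from DUAL rather than from XYZ itself; a sketch using the column names from the question:

```sql
MERGE INTO xyz t
-- a one-row source built from the screen values
USING (SELECT 2027 AS bonus_id, '181258225' AS cust_nbr FROM dual) s
ON (t.bonus_id = s.bonus_id AND t.cust_nbr = s.cust_nbr)
WHEN MATCHED THEN UPDATE
  SET t.cust_type = 'S', t.rev_amt = 123, t.pound_total = 123, t.pkg_total = 123
WHEN NOT MATCHED THEN
  INSERT (bonus_id, cust_nbr, cust_type, rev_amt, pound_total, pkg_total)
  VALUES (s.bonus_id, s.cust_nbr, 'S', 123, 123, 123);
```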
I wrote "When I check B.Bonus_ID it refers to the BonusID from XYZ table".
Let me give you one example with the SCOTT schema.
I hope it illustrates my concern about your statement well:
SQL> select ename, empno, sal from emp;
 
ENAME EMPNO SAL
SMITH 7369 800
ALLEN 7499 1600
WARD 7521 1250
JONES 7566 2975
MARTIN 7654 1250
BLAKE 7698 2850
CLARK 7782 2450
SCOTT 7788 3000
KING 7839 5000
TURNER 7844 1500
ADAMS 7876 1100
JAMES 7900 950
FORD 7902 3000
MILLER 7934 1300
 
14 rows selected.
 
SQL> merge into emp using (select * from emp) b
2 on (b.ename = 'KING')
3 when matched then update set sal = 1000
4 when not matched then insert (emp.empno, emp.ename, emp.deptno, emp.sal)
5 values(-b.empno, b.ename, b.deptno, 0)
6 /
 
27 rows merged.
 
SQL> select ename, empno, sal from emp;
 
ENAME EMPNO SAL
SMITH -7369 0
ALLEN -7499 0
WARD -7521 0
JONES -7566 0
MARTIN -7654 0
BLAKE -7698 0
CLARK -7782 0
SCOTT -7788 0
TURNER -7844 0
ADAMS -7876 0
JAMES -7900 0
FORD -7902 0
MILLER -7934 0
SMITH 7369 1000
ALLEN 7499 1000
WARD 7521 1000
JONES 7566 1000
MARTIN 7654 1000
BLAKE 7698 1000
CLARK 7782 1000
SCOTT 7788 1000
KING 7839 1000
TURNER 7844 1000
ADAMS 7876 1000
JAMES 7900 1000
FORD 7902 1000
MILLER 7934 1000
 
27 rows selected.
Rgds.
-
Issue with "read by other session" and a parallel MERGE query
Hi everyone,
we have run into an issue with a batch process updating a large table (12 million rows / a few GB, so it's not that large). The process is quite simple - load the 'increment' from a file into a working table (INCREMENT_TABLE) and apply it to the main table using a MERGE. The increment is rather small (usually less than 10k rows), but the MERGE runs for hours (literally) although the execution plan seems quite reasonable (can post it tomorrow, if needed).
The first thing we've checked is AWR report, and we've noticed this:
Top 5 Timed Foreground Events
Event Waits Time(s) Avg wait (ms) % DB time Wait Class
DB CPU 10,086 43.82
read by other session 3,968,673 9,179 2 39.88 User I/O
db file scattered read 1,058,889 2,307 2 10.02 User I/O
db file sequential read 408,499 600 1 2.61 User I/O
direct path read 132,430 459 3 1.99 User I/O
So obviously most of the time was consumed by the "read by other session" wait event. There were no other queries running on the server, so in this case "other session" actually means the parallel processes used to execute the same query. The main table (the one updated by the batch process) has PARALLEL DEGREE 4, so Oracle spawns 4 processes.
I'm not sure how to fix this. I've read a lot of details about "read by other session" but I'm not sure it's the root cause - in the end, when two processes read the same block, it's quite natural that only one does the physical I/O while the other waits. What really seems suspicious is the number of waits - 4 million waits means 4 million blocks, 8kB each. That's about 32GB - the table has about 4GB, and there are less than 10k rows updated. So 32 GB is a bit overkill (OK, there are indexes etc. but still, that's 8x the size of the table).
So I'm thinking that the buffer cache is too small - one process reads the data into cache, then it's removed and read again. And again ...
One of the recommendations I've read was to increase the PCTFREE, to eliminate 'hot blocks' - but wouldn't that make the problem even worse (more blocks to read and keep in the cache)? Or am I completely wrong?
The database is 11gR2, and the buffer cache is about 4GB. The storage is a SAN (but I don't think this is the bottleneck - according to the iostat results it performs much better for other batch jobs).
OK, so a bit more detail - we've managed to significantly decrease the estimated cost and runtime. All we had to do was a small change in the SQL. Instead of
MERGE /*+ parallel(D DEFAULT)*/ INTO T_NOTUNIFIED_CLIENT D /*+ append */
USING (SELECT
FROM TMP_SODW_BB) S
ON (D.NCLIENT_KEY = S.NCLIENT_KEY AND D.CURRENT_RECORD = 'Y' AND S.DIFF_FLAG IN ('U', 'D'))
...
(which is the query listed above) we have done this:
MERGE /*+ parallel(D DEFAULT)*/ INTO T_NOTUNIFIED_CLIENT D /*+ append */
USING (SELECT
FROM TMP_SODW_BB WHERE DIFF_FLAG IN ('U', 'D')) S
ON (D.NCLIENT_KEY = S.NCLIENT_KEY AND D.CURRENT_RECORD = 'Y')
...
i.e. we have moved the condition from the MERGE ON clause to the SELECT. And suddenly, the execution plan is this:
OPERATION OBJECT_NAME OPTIONS COST
MERGE STATEMENT 239
MERGE T_NOTUNIFIED_CLIENT
PX COORDINATOR
PX SEND :TQ10000 QC (RANDOM) 239
VIEW
NESTED LOOPS OUTER 239
PX BLOCK ITERATOR
TABLE ACCESS TMP_SODW_BB FULL 2
Filter Predicates
OR
DIFF_FLAG='D'
DIFF_FLAG='U'
TABLE ACCESS T_NOTUNIFIED_CLIENT BY INDEX ROWID 3
INDEX AK_UQ_NOTUNIF_T_NOTUNI RANGE SCAN 2
Access Predicates
AND
D.NCLIENT_KEY(+)=NCLIENT_KEY
D.CURRENT_RECORD(+)='Y'
Filter Predicates
D.CURRENT_RECORD(+)='Y'
Yes, I know the queries are not exactly the same - but we can fix that. The point is that the TMP_SODW_BB table contains 1639 rows in total, and 284 of them match the moved 'IN' condition. Even if we remove the condition altogether (i.e. 1639 rows have to be merged), the execution plan does not change (the cost increases to about 1300, which is proportional to the number of rows).
But with the original IN condition (which turns into an OR combination of predicates) in the MERGE ON clause, the cost suddenly skyrockets to 990,000 and it's damn slow. It seems like a problem with cost estimation, because once we remove one of the values (so there's only one value in the IN clause), it works fine again. So I guess it's a planner/estimator issue ... -
Ok, I have a Table TEST_ACCOUNTS that has all new customer information of new applicants for credit cards. Because of a Bank rules, customers are not assigned
a customer_id number right away, takes about a month, so in this table I will have some CUST_SKEYS that are [0] and some that have account numbers.
I pick off all the [0] from this Table above and find out which ones got an account and those go into a Table LATEST_SKEYS but not all of them will update, so
there will still be some [0]'s.
Anyways, so now I would like to MERGE the accounts that were found back to the master Table, which is TEST_ACCOUNTS using the query below, the query does
work, but all cust_skeys that used to be 0 were turned to null, and I would prefer for them to stay 0. I know that I can use NVL and NVL2 to turn a null value to whatever I want,
but I only have experience using that when I select columns form existing tables, am not sure how to do this with a MERGE, something that I want to learn more about.
It isn't really creating a headache for me, but I would like to keep things consistent and keep the 0, 0's and not NULL.
Another columns that rides along with the row information is app_num, which is a unique number that is assigned to an application for a credit card so I am doing the matching
on that and then I update the cust_skeys.
Hopefully this makes sense. Help appreciated.
MERGE INTO TEST_ACCOUNTS t1
USING (Select * from LATEST_SKEYS) t2
ON (t1.application_num = t2.application_num)
WHEN MATCHED THEN UPDATE
SET t1.cust_skey = t2.customer_skey;
user621335 wrote:
... the query does
work, but all cust_skeys that used to be 0 were turned to null, and I would prefer for them to stay 0.
MERGE INTO TEST_ACCOUNTS t1
USING (Select * from LATEST_SKEYS) t2
ON (t1.application_num = t2.application_num)
WHEN MATCHED THEN UPDATE
SET t1.cust_skey = t2.customer_skey;
You probably want to limit the records returned by t2 to just those whose customer_skey values are not null:
MERGE INTO TEST_ACCOUNTS t1
USING (Select * from LATEST_SKEYS where customer_skey is not null) t2
ON (t1.application_num = t2.application_num)
WHEN MATCHED THEN UPDATE
SET t1.cust_skey = t2.customer_skey;
-
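Regarding the NVL question in the thread above: besides filtering in the USING subquery, NVL can be applied directly in the SET clause, so a null incoming key leaves the existing 0 in place (a sketch against the same tables):

```sql
MERGE INTO TEST_ACCOUNTS t1
USING (SELECT * FROM LATEST_SKEYS) t2
ON (t1.application_num = t2.application_num)
WHEN MATCHED THEN UPDATE
  -- keep the current value (e.g. 0) when no new key has arrived yet
  SET t1.cust_skey = NVL(t2.customer_skey, t1.cust_skey);
```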
How merge query results from joined table into one additional column
<code>
Here is example
TABLE A:
id | value
1 | a
2 | a
3 | b
TABLE B
id | id_in_table_a | value
1 | 1 | d
2 | 1 | e
3 | 2 | g
</code>
this select should get all columns from table A where value = 'a' and all values related to this record from B merged into one column, separated for example with a pipe, so the output should look like this:
<code>
id | value | merged_column
1 | a | d|e
2 | a | g
</code>
Thanks for the help.
If you are on 10g, you can use this:
SQL> create table a
2 as
3 select 1 id, 'a' value from dual union all
4 select 2, 'a' from dual union all
5 select 3, 'b' from dual
6 /
Table created.
SQL> create table b
2 as
3 select 1 id, 1 id_in_table_a, 'd' value from dual union all
4 select 2, 1, 'e' from dual union all
5 select 3, 2, 'g' from dual
6 /
Table created.
SQL> select id
2 , value
3 , rtrim(v,'|') merged_column
4 from ( select id
5 , value
6 , v
7 , rn
8 from a
9 , b
10 where a.id = b.id_in_table_a
11 model
12 partition by (a.id)
13 dimension by (row_number() over (partition by a.id order by b.id) rn)
14 measures (a.value, cast(b.value as varchar2(20)) v)
15 rules
16 ( v[any] order by rn desc = v[cv()] || '|' || v[cv()+1]
17 )
18 )
19 where rn = 1
20 /
ID VALUE MERGED_COLUMN
1 a d|e
2 a g
2 rows selected.
Regards,
Rob.
-
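A side note on the string-aggregation thread above: on 11gR2 and later (not the 10g the MODEL answer targets), LISTAGG gives the same result more directly:

```sql
SELECT a.id,
       a.value,
       -- pipe-separate the related b.value entries, ordered by b.id
       LISTAGG(b.value, '|') WITHIN GROUP (ORDER BY b.id) AS merged_column
FROM a
JOIN b ON b.id_in_table_a = a.id
WHERE a.value = 'a'
GROUP BY a.id, a.value;
```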
Merge Query Taking Long time !!!
Hi all,
I'm using loading type update/insert in one of my mappings.
The mapping usually takes 3-4 minutes to complete.
Suddenly it has taken 1 hour to complete. If I take the select query from the mapping, it runs fast and completes in 2 minutes.
I checked the source data count and the indexes on the source columns. Everything is the same.
At the mapping level, I didn't make any changes. Can anyone suggest a possible reason?
Thanks and Regards
Ela
Lots of things come into play when you're tuning a query.
An (unformatted) execution plan isn't enough.
Tuning takes time and understanding how (a lot of) things work, there is no ASAP in the world of tuning.
Please post other important details, like your database version, optimizer settings, how/when are table statistics gathered etc.
So, read the following informative threads (and please take your time, this really is important stuff), and adjust your thread as needed.
That way you'll have a bigger chance of getting help that makes sense...
Your DBA should/ought to be able to help you in this as well.
Re: HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html -
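To make such a posting concrete for the slow MERGE, a typical way to capture the plans is via DBMS_XPLAN — a sketch only; the statement and table names here are placeholders to be replaced with the real ones:

```sql
-- Estimated plan for the statement
EXPLAIN PLAN FOR
MERGE INTO target_tab t
USING (SELECT id, val FROM source_tab) s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Actual plan plus row-source statistics of the last statement run in this
-- session (needs the /*+ gather_plan_statistics */ hint or STATISTICS_LEVEL=ALL)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing estimated versus actual row counts in the ALLSTATS LAST output is usually the fastest way to spot the step where the plan goes wrong.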
Updating lookup table using merge query
Hi Experts,
My requirement is: we have 4 tables called x, y, z and a_lookup. The join column between all these tables is deptno.
I need to update the a_lookup table based on the conditions below.
Condition 1: Update the a_lookup table with the matched records of X and Y.
Condition 2: If any record in X does not match Y, then we need to compare with Z and update the a_lookup table accordingly.
Here are the table scripts and my attempt as well.
My only doubt is: does my MERGE statement look fine, or is there a better way to update the a_lookup table?
Please share your thoughts on this.
create table x(empno number, deptno number);
-- sample data
insert into x
select level, level * 10 from dual connect by level <= 10;
create table y(empno number, deptno number);
-- sample data
insert into y
select level, level * 10 from dual connect by level <= 5;
create table z(empno number, deptno number);
-- sample data
insert into z
select level, level * 10 from dual connect by level <= 10;
create table a_lookup(empno number, deptno_lookup number);
-- sample data
insert into a_lookup
select null, level * 10 from dual connect by level <= 10;
-- Merging records into a_lookup based on X,Y,Z. Used right outer join on X and Y
merge into a_lookup a
using ( (select x.deptno,x.empno from x,y where x.deptno=y.deptno)
union all
(select z.deptno,z.empno from z, (select x.deptno from x,y where x.deptno=y.deptno(+) and y.deptno is null) res1
where z.deptno = res1.deptno)
) res
on( a.deptno_lookup = res.deptno)
when matched then
update set a.empno = res.empno;
Cheers,
Suri ;-)
Assuming empno is unique in X, Y and Z:
merge
into a_lookup a
using (
select nvl(y.empno,z.empno) empno,
x.deptno
from x,
y,
z
where y.deptno(+) = x.deptno
and z.deptno(+) = x.deptno
) b
on (
a.deptno_lookup = b.deptno
and
b.empno is not null
)
when matched
then
update
set a.empno = b.empno;
10 rows merged.
SCOTT@orcl > select * from a_lookup;
EMPNO DEPTNO_LOOKUP
1 10
2 20
3 30
4 40
5 50
6 60
7 70
8 80
9 90
10 100
10 rows selected.
SCOTT@orcl >
SY. -
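The same logic can also be written with ANSI outer-join syntax, which some find easier to read — equivalent to the (+) version above under the same uniqueness assumption:

```sql
MERGE INTO a_lookup a
USING (SELECT NVL(y.empno, z.empno) AS empno,
              x.deptno
         FROM x
         LEFT JOIN y ON y.deptno = x.deptno
         LEFT JOIN z ON z.deptno = x.deptno) b
ON (a.deptno_lookup = b.deptno AND b.empno IS NOT NULL)
WHEN MATCHED THEN
  UPDATE SET a.empno = b.empno;
```

The NVL prefers the Y match and falls back to Z, which is exactly condition 1 followed by condition 2 from the question.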
Improving performance in a merge between local and remote query
If you try to merge two queries, one local (e.g. table in Excel) and one remote (table in SQL Server), the entire remote table is loaded in memory in order to apply the NestedJoin condition. This could be very slow. In my case, the goal is to import only
those rows that have a product name listed in a local Excel table.
I used Table.SelectRows with the list of values from the local query (which has only one column) in order to apply an "IN ('value1', 'value2', ...)" condition in the SQL statement generated by Power Query (see the examples below).
Questions:
Is there another way to do that in "M"?
Is there a way to build such a query (filter a table by using values obtained in another query) by using the user interface?
Is this a scenario that could be better optimized in the future by improving query folding made by Power Query?
Thanks for the feedback!
Local Query
let
LocalQuery = Excel.CurrentWorkbook(){[Name="LocalTable"]}[Content]
in
LocalQuery
Remote Query
let
Source = Sql.Databases("servername"),
Database = Source{[Name="databasename"]}[Data],
RemoteQuery = Database{[Schema="schemaname",Item="tablename"]}[Data]
in
RemoteQuery
Merge Query (from Power Query user interface)
let
Merge = Table.NestedJoin(LocalQuery,{"ProductName"},RemoteQuery,{"ProductName"},"NewColumn",JoinKind.Inner),
#"Expand NewColumn" = Table.ExpandTableColumn(Merge, "NewColumn", {"Description", "Price"}, {"NewColumn.Description", "NewColumn.Price"})
in
#"Expand NewColumn"
Alternative merge approach (editing M - is it possible in user interface?)
let
#"Filtered Rows" = Table.SelectRows(RemoteQuery, each List.Contains ( Table.ToList(LocalQuery), [ProductName] ))
in
#"Filtered Rows"
Marco Russo (Blog,
Twitter,
LinkedIn) - sqlbi.com:
Articles, Videos,
Tools, Consultancy,
Training
Format with DAX Formatter and design with
DAX Patterns. Learn
Power Pivot and SSAS Tabular.
Bingo! You've found a serious performance issue!
The very same result can be produced in a fast or slow way.
Slow technique: do the RemoveColumns before the SelectRows (maybe you don't have to apply any transformations to the table you want to filter before the SelectRows - I haven't tested this):
let
Source = Sql.Databases(".\k12"),
AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
#"Removed Columns" = Table.RemoveColumns(dbo_FactInternetSales,{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
"SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
"FactInternetSalesReason"}),
#"Filtered Rows" = Table.SelectRows(#"Removed Columns", each List.Contains(Selection[ProductKey],[ProductKey]))
in
#"Filtered Rows"
Fast technique: do the RemoveColumns after the SelectRows
let
Source = Sql.Databases(".\k12"),
AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
#"Filtered Rows" = Table.SelectRows(dbo_FactInternetSales, each List.Contains(Selection[ProductKey],[ProductKey])),
#"Removed Columns" = Table.RemoveColumns(#"Filtered Rows",{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
"SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
"FactInternetSalesReason"})
in
#"Removed Columns"
I think that Power Query team should take a look at this.
Thanks!
Marco Russo (Blog,
Twitter,
LinkedIn) - sqlbi.com:
Articles, Videos,
Tools, Consultancy,
Training
Format with DAX Formatter and design with
DAX Patterns. Learn
Power Pivot and SSAS Tabular. -
Hello Friends - With the following details, can someone help me write a MERGE query: when matched, update ArtsDate; when not matched, insert a new row into the CE table.
PT: Parameter Table
MSO
1
2
5
6
FO Table
MSO EngModel
1 RM713
2 TT344
3 TT189
4 TT349
5 RM735
6 TT119
7 RM734
8 RM710
SCH Table
MsO SchDate SchSlot
1 10/18/2012 1
2 3/16/2012 4
3 12/13/2011 7
4 12/14/2011 4
5 12/15/2011 2
6 12/19/2011 5
7 12/20/2011 8
8 12/5/2011 3
SD SafetyDays
EngModel SDays
RM710 4
RM713 9
RM734 4
RM735 4
TT344 7
TT119 8
TT189 16
TT349 16
CE: Table that needs to be updated
MSO ARTSDate SchDate SchSlot
2 9/30/2012 3/16/2012 4
4 4/26/2012 12/14/2011 4
5 10/15/2012 12/15/2011 2
7 2/2/2012 12/20/2011 8
CE: Result (updated Table) Remarks
MSO ARTSDate SCHdaTE SchSlot
2 3/23/2012 3/16/2012 4 matched
4 12/30/2011 12/14/2011 4
5 12/19/2011 12/15/2011 2 matched
7 12/24/2011 12/20/2011 8
1 10/27/2012 10/18/2012 1 not matched
6 12/27/2011 12/19/2011 5 not matched
Notes on updated CE Table in above table:
Match PT.MSO with CE.MSO
When matched (e.g. MSO# 2 & 5), Update ARTS per following:
1. Take SchDate for same MSO from SCH table
2. Add SD.SDays from the SafetyDays table for the EngModel that the FO table lists for the same MSO
When not matched (e.g. MSO# 1 & 6), Insert new row into CE Table
VALUES:
SCH.MSO
ARTS (use same same formula as above)
SCH.SchDate
SCH.SchSlot
Thanks for your help..
Sunil
Edited by: 865144 on Jun 10, 2011 12:38 PM
Edited by: 865144 on Jun 10, 2011 12:49 PM
Thanks a ton Ganesh for your help. This is what I created from the code (after the needed modifications) you gave me:
MERGE INTO cmopsexport tgt
USING (SELECT RA.factoryorderid, '030' recordcode, FO.ud_order_prty_code,
SCH.productionsequence, FO.frozen, FO.ud_orig_seq_schd_date,
SCH.productiondate, FO.ud_eng_mdl_no,
TO_CHAR(add_work_days(FO.resourceid, SCH.productiondate,
(Select SafetyDays from UD_SafetyDays
where ud_safetydays.effdate = (select max(SD.effdate) from UD_SafetyDays SD
where SD.effdate <= FO.ud_order_rcvng_date
AND TRIM(SD.EngModel) = TRIM(FO.ud_eng_mdl_no))
and TRIM(FO.ud_eng_mdl_no) = trim(UD_SafetyDays.EngModel))
),
'YYYYMMDD') ud_calculated_arts,
FO.ud_lane
FROM ud_recalc_arts RA,
schedule SCH,
factoryorder FO
WHERE RA.factoryorderid = SCH.factoryorderid
AND SCH.factoryorderid = FO.factoryorderid
) src
ON (tgt.factoryorderid = src.factoryorderid)
WHEN MATCHED THEN
UPDATE
SET
tgt.recordcode = src.recordcode,
tgt.ud_order_prty_code = src.ud_order_prty_code,
tgt.productionsequence = src.productionsequence,
tgt.frozen = src.frozen,
tgt.ud_orig_seq_schd_date = src.ud_orig_seq_schd_date,
tgt.productiondate = src.productiondate,
tgt.ud_eng_mdl_no = src.ud_eng_mdl_no,
tgt.ud_calculated_arts = src.ud_calculated_arts,
tgt.ud_lane = src.ud_lane
WHEN NOT MATCHED THEN
INSERT
VALUES (src.factoryorderid, src.recordcode, src.ud_order_prty_code,
src.productionsequence, src.frozen, src.ud_orig_seq_schd_date,
src.productiondate, src.ud_eng_mdl_no, src.ud_calculated_arts,
src.ud_lane);
And the above works too!!
Thank you once again...
Sunil
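For the sample PT/FO/SCH/SD/CE tables at the top of this question, a minimal MERGE along the same lines might look like the sketch below. The table and column names are assumed from the sample data, and the date arithmetic assumes the date columns are of type DATE:

```sql
MERGE INTO ce
USING (SELECT sch.mso,
              sch.schdate + sd.sdays AS artsdate,  -- SchDate plus the model's safety days
              sch.schdate,
              sch.schslot
         FROM sch
         JOIN fo ON fo.mso = sch.mso
         JOIN sd ON sd.engmodel = fo.engmodel
        WHERE sch.mso IN (SELECT mso FROM pt)) src
ON (ce.mso = src.mso)
WHEN MATCHED THEN
  UPDATE SET ce.artsdate = src.artsdate
WHEN NOT MATCHED THEN
  INSERT (mso, artsdate, schdate, schslot)
  VALUES (src.mso, src.artsdate, src.schdate, src.schslot);
```

Against the sample data this gives, for example, an ARTSDate of 3/23/2012 for MSO 2 (3/16/2012 plus 7 days for TT344), in line with the expected result table above.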