WF_ITEM_ATTRIBUTE_VALUES
The WF_ITEM_ATTRIBUTE_VALUES segment is consuming a lot of space in our production instance and growing very fast. Currently its size is close to 8 GB. We need to purge it.
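Before purging, it may help to see which item types dominate the table; a rough sketch (standard EBS table names, but the GROUP BY over millions of rows can take a while, and DBA_SEGMENTS needs the appropriate privilege):

```sql
-- Rows per item type in WF_ITEM_ATTRIBUTE_VALUES: shows which workflows
-- (e.g. OEOL, OEOH, POAPPRV) account for most of the data.
SELECT item_type, COUNT(*) AS attr_rows
FROM   wf_item_attribute_values
GROUP  BY item_type
ORDER  BY attr_rows DESC;

-- Segment sizes for the table and its PK index.
SELECT segment_name, ROUND(SUM(bytes)/1024/1024) AS mb
FROM   dba_segments
WHERE  segment_name LIKE 'WF_ITEM_ATTRIBUTE_VALUES%'
GROUP  BY segment_name;
```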
Please see these docs/links.
Purge Obsolete Workflow Does Not Purge WF_ITEM_ATTRIBUTE_VALUES [ID 374308.1]
Speeding Up And Purging Workflow [ID 132254.1]
Unable to Purge Closed SERVEREQ Workflow Items [ID 803192.1]
FAQ on Purging Oracle Workflow Data [ID 277124.1]
A Closer Examination Of The Concurrent Program Purge Obsolete Workflow Runtime Data [ID 337923.1]
What Tables Does the Workflow Purge Obsolete Data Program (FNDWFPR) Touch? [ID 559996.1]
https://forums.oracle.com/forums/search.jspa?threadID=&q=WF_ITEM_ATTRIBUTE_VALUES+AND+Purge&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
Thanks,
Hussein
Similar Messages
-
When Data Remains in the WF_ITEM_ATTRIBUTE_VALUES Table After Running the Purge Program
Product: AOL
Created: 2005-11-29
=================================================================
PURPOSE
Problem Description
Even after the Workflow Purge Program has been run, a significant amount of data may remain in the WF_ITEM_ATTRIBUTE_VALUES table.
Workaround
Solution Description
1. Run bde_wf_clean_worklist.sql from Note 255048.1. This script retries all pending WFERROR workflow processes; if the parent of a WFERROR process has already completed, it also completes the WFERROR process.
2. Run the bde_wf_data.sql Workflow Purge Script from Note 165316.1. This script reports on WF runtime data, classifying it as Closed or Open.
Here you can see how many closed OEOL workflow processes exist.
3. Run the "Purge Obsolete Workflow Runtime Data" concurrent program and check the request and its log file.
Parameters:
Item Type: OEOL
Persistent Type: Temporary
4. Re-run the bde_wf_data.sql script to check how many closed OEOL workflow processes remain, and compare the result with step 2 above.
5. OEOL items that cannot be purged because their parent OEOH is not yet closed can be found with the following SQL:
select s.item_type, s.item_key, s.end_date, f.item_type, f.item_key, f.end_date
from wf_items s
, wf_items f
where s.item_type = 'OEOL'
and s.PARENT_ITEM_TYPE = f.item_type
and s.PARENT_ITEM_KEY = f.item_key
and s.end_date is not null and f.end_date is null
and s.begin_date <= (sysdate - &number_of_days)
order by f.item_type, f.item_key, s.item_type, s.item_key ;
In this situation you must wait until all OEOL lines belonging to the same OEOH (parent) are closed.
6. You can check the open data for each OEOH using the following SQL:
select f.item_type, f.item_key, 'Closed Lines' Lines, count(*)
from wf_items f
, wf_items s
where f.ITEM_TYPE = 'OEOH'
and f.item_type = s.PARENT_ITEM_TYPE
and f.item_key = s.parent_item_key
and s.end_date is not null
group by f.item_type, f.item_key, 'Closed Lines'
Union
Select f.item_type, f.item_key, 'Opened Lines', count(*)
from wf_items f
, wf_items s
where f.ITEM_TYPE = 'OEOH'
and f.item_type = s.PARENT_ITEM_TYPE
and f.item_key = s.parent_item_key
and s.end_date is null
group by f.item_type, f.item_key, 'Opened Lines'
order by 1, 2;
You have to wait until all OEOL lines (the children) related to the same OEOH (the parent) are closed before you can purge both the parent (OEOH) and the children (OEOL).
7. If you want to purge the closed OEOL items even though related items remain open, you can do it as follows:
Connect to SQL*Plus as the APPS user, run the following SQL, spool the output into a command script, and then execute that script.
spool item_purge_spc.sql
select 'exec WF_PURGE.ITEMS('''||item_type||''','''||item_key||''',SYSDATE,TRUE,TRUE);'
-- adjust the SYSDATE argument to suit each site
from wf_items
where item_type = 'OEOL'
and end_date is not null;
spool off;
8. Run the generated item_purge_spc.sql to delete the data, then run bde_wf_data.sql again to confirm that the OEOL items have been purged.
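For reference, a slightly fuller sketch of the step 7 spool session; the SET commands are an assumption added to keep SQL*Plus headers and feedback out of the generated script:

```sql
set heading off feedback off pagesize 0 linesize 200 trimspool on
spool item_purge_spc.sql
select 'exec WF_PURGE.ITEMS('''||item_type||''','''||item_key||''',SYSDATE,TRUE,TRUE);'
from   wf_items
where  item_type = 'OEOL'
and    end_date is not null;
spool off
-- then run the generated script:
-- @item_purge_spc.sql
```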
Reference Documents
Note. 270765.1 -
Poor performance and high number of gets on seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert/select which I'm struggling to explain: why is it taking 6 seconds, and why does it need to get > 24,000 blocks?
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attribute_Values table - takes:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25
So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
Hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The tablespaces used for all subpartitions are UNIFORM extent management, AUTO segment space management, LOCAL extent management.
regards
Ivan -
Error ORA-02375 while trying to export/import JTF.JTF_PF_REPOSITORY table
We have already created an SR, In the mean time, trying to see whether anyone else has come across this issue. Thanks.
On : 11.2.0.3 version, Data Pump Import
Error ORA-02375 while trying to import JTF.JTF_PF_REPOSITORY table
We are getting the below error while performing the full db
import.
ORA-02375: conversion error loading table
"JTF"."JTF_PF_REPOSITORY" partition "EBIZ"
ORA-22337: the type of accessed
object has been evolved
ORA-02372: data for row: SYS_NC00040$ :
0X'8801FE000004AD0313FFFF0009198401190A434F4E4E454354'
This issue is
stopping our upgrade of database from 10.2.0.4 to 11.2.0.3. This is very
critical for us to be resolved.
Hi,
this seems to be a character set issue between the source and target DB. Check this doc: Unable to Export Table WF_ITEM_ATTRIBUTE_VALUES due to errors ORA-02374, ORA-22337, and ORA-02372 (Doc ID 1522761.1)
HTH -
Apologies for my mistakes, I am learning pl/sql as I have to fix something at work that's broken.
I have this bit of code:
DECLARE
l_x_contract VARCHAR2(100);
BEGIN
SELECT DISTINCT wiav.text_value
INTO l_x_contract
FROM apps.wf_item_attribute_values wiav
, apps.wf_item_attributes wia
, apps.wf_items wi
WHERE wiav.item_type = wia.item_type
AND wi.item_type = wiav.item_type
AND wi.item_key = wiav.item_key
AND wiav.NAME = wia.NAME
AND wiav.text_value IS NOT NULL
AND wiav.item_type = 'POAPPRV'
AND wia.NAME = 'X_CONTRACT_IN_PLACE'
AND wi.user_key = TO_CHAR(l_this_doc_num);
EXCEPTION
WHEN no_data_found THEN
l_x_contract := 'OLD';
INSERT INTO XX.XTMP(l_id, l_name, l_value) VALUES (APPS.XSEQ.NEXTVAL, 'l_x_contract no_data_found', l_x_contract);
COMMIT;
END;
IF l_x_contract = 'N' THEN
l_supplier_flagged := FALSE;
END IF;
IF l_x_contract = 'Y' THEN
l_supplier_flagged := TRUE;
END IF;
IF l_x_contract = 'OLD' THEN
l_supplier_flagged := l_supplier_flagged;
END IF;
INSERT INTO XX.XTMP(l_id, l_name, l_value) VALUES (APPS.XSEQ.NEXTVAL, 'l_x_contract end', l_x_contract);
COMMIT;
When it runs, and I then select * from XX.XTMP, I see this:
l_id | l_name | l_value
1 | l_x_contract no_data_found | OLD
2 | l_x_contract end |
Meaning that in the BEGIN block, the code knows the value of "l_x_contract".
Outside the block, by the time the 2nd debug insert runs, it doesn't know the value of "l_x_contract", hence the no value in the debug table for the 2nd insert.
I naively tried changing the top block to:
DECLARE
l_x_contract VARCHAR2(100);
BEGIN
SELECT DISTINCT wiav.text_value
INTO l_x_contract
FROM apps.wf_item_attribute_values wiav
, apps.wf_item_attributes wia
, apps.wf_items wi
, po.po_headers_all pha
WHERE wiav.item_type = wia.item_type
AND wi.item_type = wiav.item_type
AND wi.item_key = wiav.item_key
AND wiav.NAME = wia.NAME
AND wi.user_key = pha.segment1
AND wiav.text_value IS NOT NULL
AND wiav.item_type = 'POAPPRV'
AND wia.NAME = 'X_CONTRACT_IN_PLACE'
AND wi.user_key = TO_CHAR(l_this_doc_num);
RETURN l_x_contract;
EXCEPTION
WHEN no_data_found THEN
l_xccc_contract := 'OLD';
RETURN l_x_contract;
INSERT INTO XX.XTMP(l_id, l_name, l_value) VALUES (APPS.XSEQ.NEXTVAL, 'l_x_contract no_data_found', l_x_contract);
COMMIT;
END;
But that returns this error:
PLS-00372: In a procedure, RETURN statement cannot contain an expression
Having googled that, I think I need to create a function in order to return something, as it's not possible to include a return in my block.
Is that right? Is there no other way for the block to let the rest of the code know what the value of the l_x_contract, other than to use a function? I tried to write a function, but made a total mess of it.
Any advice much appreciated, and apologies for being useless, lazy, stupid etc. etc.
Thank you
Given that you say this is a snippet from a procedure in a package, I suspect that the whole procedure has a structure similar to this.
PROCEDURE x
variable declarations
BEGIN
some code
DECLARE
l_x_contract VARCHAR2(100);
BEGIN
SELECT text_value
EXCEPTION
END;
IF l_x_contract = 'N' THEN
END IF;
END IF;
If my assumption is correct, then the variable l_x_contract is only visible between the DECLARE and the END following the exception block. Just move the declaration of l_x_contract to the main declaration section for the procedure (where I have "variable declarations" in the skeleton above), then lose the DECLARE altogether. You can have a BEGIN/EXCEPTION/END block anywhere without requiring a DECLARE block.
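A minimal sketch of that reshaping (procedure and placeholder names are illustrative only):

```sql
PROCEDURE x IS
  -- moved here from the inner DECLARE so it is visible to the whole procedure
  l_x_contract VARCHAR2(100);
  -- ... other variable declarations ...
BEGIN
  -- some code ...
  BEGIN  -- no DECLARE needed now
    SELECT DISTINCT wiav.text_value
      INTO l_x_contract
      FROM apps.wf_item_attribute_values wiav
      -- ... joins and predicates as in the original query ...
      ;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      l_x_contract := 'OLD';
  END;
  -- l_x_contract is still in scope here
  IF l_x_contract = 'N' THEN
    NULL;  -- handle the 'N' case
  END IF;
END x;
```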
John -
How to check for errors in starting workflow from plsql?
Hi All,
I am using the below code to start a custom workflow.
DECLARE
l_itemtype VARCHAR2(30) := 'XXPWA';
l_itemkey VARCHAR2(30) := '1116410C';
error_code VARCHAR2(2000);
error_msg VARCHAR2(2000);
BEGIN
wf_engine.createprocess(l_itemtype, l_itemkey, 'XX_WEBADI_APPROVAL');
wf_engine.setitemuserkey(itemtype => l_itemtype
,itemkey => l_itemkey
,userkey => 'USERKEY: ' || '1116410C');
wf_engine.setitemowner(itemtype => l_itemtype
,itemkey => l_itemkey
,owner => 'SYSADMIN');
wf_engine.setitemattrnumber(itemtype => l_itemtype
,itemkey => l_itemkey
,aname => 'BATCH_ID'
,avalue => 1116410);
wf_engine.startprocess(l_itemtype, l_itemkey);
EXCEPTION
WHEN OTHERS THEN
error_code := SQLCODE;
error_msg := SQLERRM(SQLCODE);
dbms_output.put_line(error_code||error_msg);
END ;
The script completes successfully without errors.
I am sending a notification from this workflow. I can see the records getting created in tables like WF_NOTIFICATIONS and WF_ITEM_ATTRIBUTE_VALUES. But I cannot see anything if I query from the Status Monitor. Also I am not getting the said notifications. How can I find what the issue is?
Hi Manu,
Thanks for sharing the information. If you want to speed up finding where exactly your notification is stuck, you can use the below query (the input parameter would be your notification id). Hope this information is good; I liked this very much, the way it was narrated.
SELECT n.begin_date,
n.status,
n.mail_status,
n.recipient_role,
de.def_enq_time,
de.def_deq_time,
de.def_state,
ou.out_enq_time,
ou.out_deq_time,
ou.out_state
FROM applsys.wf_notifications n,
(SELECT d.enq_time def_enq_time,
d.deq_time def_deq_time,
TO_NUMBER((SELECT VALUE
FROM TABLE(d.user_data.parameter_list)
WHERE NAME = 'NOTIFICATION_ID')) d_notification_id,
msg_state def_state
FROM applsys.aq$wf_deferred d
WHERE d.corr_id = 'APPS:oracle.apps.wf.notification.send') de,
(SELECT o.deq_time out_deq_time,
o.enq_time out_enq_time,
TO_NUMBER((SELECT str_value
FROM TABLE(o.user_data.header.properties)
WHERE NAME = 'NOTIFICATION_ID')) o_notification_id,
msg_state out_state
FROM applsys.aq$wf_notification_out o) ou
WHERE n.notification_id = &NOTIFICATION_ID
AND n.notification_id = de.d_notification_id(+)
AND n.notification_id = ou.o_notification_id(+)
This single query links all together and shows you the current state of the message.
Columns 5 & 6 show the enqueue & dequeue times of the WF_DEFERRED queue.
Column 7 shows the message status in WF_DEFERRED.
Columns 8 & 9 show the enqueue & dequeue times of the WF_NOTIFICATION_OUT queue.
Column 10 shows the message status in WF_NOTIFICATION_OUT.
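If you only know the workflow item rather than the notification id, a lookup sketch first (this assumes WF_ITEM_ACTIVITY_STATUSES carries the NOTIFICATION_ID that links an item to its notifications; the item type/key are the sample values from the script above — verify both on your instance):

```sql
-- Find the notification id(s) raised by a given workflow item.
SELECT n.notification_id, n.status, n.mail_status, n.recipient_role, n.begin_date
FROM   wf_notifications n
,      wf_item_activity_statuses ias
WHERE  ias.notification_id = n.notification_id
AND    ias.item_type = 'XXPWA'       -- item type from the script above
AND    ias.item_key  = '1116410C'    -- item key from the script above
ORDER  BY n.begin_date DESC;
```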
Below is the sequence of activities between the PL/SQL trigger of the business event and the e-mail received from the Notification Mailer at the tail end:
1. EBS user sends email – To send an email, EBS modules use a standard API. The email API is implemented in the PL/SQL package WF_NOTIFICATION (I will cover it in the next article).
1.1. Provides application data – First of all, the user's session inserts business data (recipient, message type, message text etc.) into the WF_NOTIFICATIONS table (do not mix it up with the PL/SQL package mentioned above);
1.2. Defers processing / generates event – the user or process leaves EBS to run the further email processing steps. This is done using the Business Events System (BES). The session raises the event "oracle.apps.wf.notification.send" via the WF_EVENT PL/SQL package (BES processing to be covered in the next articles). Each deferred event is put in one of the two Advanced Queues, WF_DEFERRED or WF_JAVA_DEFERRED, for further processing. All email sending events go through the WF_DEFERRED queue.
2. Deferred Agent Listener – the process responsible for ALL BES event processing. It executes all deferred events, calling the subscription functions defined for each business event. There are several more things to explain about Agent Listeners and subscription processing (e.g. there are several deferred agents, subscription groups etc.). This is one more subject for further articles.
2.1. Reads event and starts subscription processing – Strictly speaking, there is no enabled subscription for the "oracle.apps.wf.notification.send" event (submitted during the first step). This event is part of the "oracle.apps.wf.notification.send.group" event group. The Deferred Agent executes subscriptions for that group rather than for the standalone event. At this stage the Agent knows that it should process the notification with the given notification id (it is part of the event data passed via the event).
2.2. Reads application data – in order to generate the email/notification the Agent reads business data from the WF_NOTIFICATIONS table and a few related tables and during the next step builds up the email’s text in XML format.
2.3. Generates data for outbound interface – This is the last step executed by the Deferred Agent Listener. It generates XML representation of email to be sent and together with other important bits of information posts it to the Notification Mailer outbound queue WF_NOTIFICATION_OUT.
3. Notification Mailer – As you see it was a long journey even before we started to talk about the Notification Mailer. There are a lot of things which may go wrong and this is why it is important to know the whole flow of the events to troubleshoot the mail sending functionality in EBS. We’ve come to the last processing step before the message leaves EBS boundaries.
3.1. Reads message – the Notification Mailer dequeues messages from the WF_NOTIFICATION_OUT queue on a regular basis. In fact, this is the only place where it looks for new messages to be sent. This means that if a notification doesn't have a corresponding event ready for dequeuing in the WF_NOTIFICATION_OUT queue, it will never be sent. As soon as a new message arrives, the Notification Mailer dequeues it and gets it ready for sending;
3.2. Sends email via SMTP – This is the step where the message leaves EBS. The Notification Mailer sends the email using the text retrieved from the advanced queue during the previous step;
3.3. Updates status – as the last step of the notification sending process, the Notification Mailer updates the MAIL_STATUS column in the WF_NOTIFICATIONS table. -
SQL to dig into Purchase Order Approval Workflow
Apologies for a potentially vague and irritating question.
Using Oracle Purchasing, I go to PO Summary > Find PO > Inquire > View Approval Through Workflow
This lists the WFLow Activities associated with the PO Approval - below is a sample.
| Status | Activity | Parent Activity | Started | Completed | Activity Result |
| Complete | PO New Communication | Email PO | 21-Sep-2007 15:12:55 | 21-Sep-2007 15:12:55 | Yes |
| Complete | Does User want document e-mailed | Email PO | 21-Sep-2007 15:12:55 | 21-Sep-2007 15:12:55 | Yes |
| Complete | Email PO | PO Approval Process | 21-Sep-2007 15:12:55 | 21-Sep-2007 15:13:11 | |
| Complete | End | Fax Document Process| 21-Sep-2007 15:12:55 | 21-Sep-2007 15:12:55 | |
| Complete | Does User Want Document Faxed? | Fax Document Process| 21-Sep-2007 15:12:55 | 21-Sep-2007 15:12:55 | No |
----------------------------------------------------------------------------------------------------------------------------------------------
I would really like to know what SQL to use to extract this information, as it would be useful at work.
However, I'm very stuck in finding it.
I can do a very simple join between the PO and one of the WFlow tables:
SELECT pha.segment1
, pha.wf_item_type
, pha.wf_item_key
, wi.*
FROM po.po_headers_all pha
, applsys.wf_items wi
WHERE pha.wf_item_type = wi.item_type
AND pha.wf_item_key = wi.item_key
AND pha.segment1 = 1425280;
However, I don't know which tables contain the detailed WFlow info which is listed above.
There are 2 other related tables:
applsys.wf_item_activity_statuses
applsys.wf_item_attribute_values
But when I look at them, they don't contain values that could tie up with the 'Activity' or 'Parent Activity' above.
Plus I am not 100% sure about the relationships between Activities and Parent Activities.
I can see that the wf_item_type and wf_item_key link back to the PO.
But after that I am stuck.
Please feel free to tell me to clear off for asking stupid questions.
Thank you
Hi Jim,
It's probably worth having a look at the SQL that the standard wfstat.sql script ($FND_TOP/wf) runs to retrieve data from a Workflow. I would guess that the PO screens retrieve data from WF_ACTIVITY_STATUSES amongst other tables, but if you look at the script there will be some code that you can crib to use as you need.
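As a starting point before reading wfstat.sql, something along these lines pulls the activity history for an item (a sketch only — column choices, and how wfstat.sql resolves activity display names and versions, should be checked against the script itself):

```sql
-- Activity history for one workflow item; bind the values from
-- po_headers_all.wf_item_type / wf_item_key.
SELECT ias.activity_status_code   AS status
,      pa.instance_label          AS activity
,      ias.activity_begin_date    AS started
,      ias.activity_end_date      AS completed
,      ias.activity_result_code   AS activity_result
FROM   wf_item_activity_statuses ias
,      wf_process_activities     pa
WHERE  ias.item_type  = :item_type
AND    ias.item_key   = :item_key
AND    pa.instance_id = ias.process_activity
ORDER  BY ias.activity_begin_date;
```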
HTH,
Matt
WorkflowFAQ.com - the ONLY independent resource for Oracle Workflow development
Alpha review chapters from my book "Developing With Oracle Workflow" are available via my website http://www.workflowfaq.com
Have you read the blog at http://thoughts.workflowfaq.com ?
WorkflowFAQ support forum: http://forum.workflowfaq.com -
Hello All,
I need to cancel a Purchase Order Programatically. Can this be done?
Thanks for any help,
Bradley
It should be taken from po_document_types_all. You can find document_type and sub_type there.
Have you set the fnd_global.apps_initialize parameters correctly, i.e. user_id, responsibility_id and appl_id? I'm not sure how your site's security is set up. If you navigate to Setup-->Purchasing-->Document Types and select Purchase Order Standard, you can see the Control options there.
To avoid doubt, select user_id of the buyer and initialize apps.
select b.user_id into l_user_Id from po_agents a, fnd_user b, po_headers_all c
where a.agent_id=b.employee_id
and a.agent_id=c.agent_id
and c.segment1='5396';
select wf_item_type,wf_item_key from po_agents a, fnd_user b, po_headers_all c
where a.agent_id=b.employee_id
and a.agent_id=c.agent_id
and c.segment1='5396';
select number_value
into l_responsibility_id
from wf_item_attribute_values
where item_type='POAPPRV' --wf_item_type from above
and item_key='81411-392214' --wf_item_key from above
and name in ('RESPONSIBILITY_ID');
select number_value
into l_application_id
from wf_item_attribute_values
where item_type='POAPPRV'
and item_key='81411-392214'
and name in ('APPLICATION_ID');
fnd_global.apps_initialize(l_user_id,l_responsibility_id,l_application_id);
then call po_document_control_pub.
You can enable FND Debug Profiles and check fnd_log_messages.
Also call this procedure to get any session-specific messages:
PROCEDURE get_error_stack (x_msg_data OUT VARCHAR2)
IS
l_msg_index NUMBER;
l_msg_data VARCHAR2 (32000);
BEGIN
l_msg_index := 1;
x_msg_data := '';
FOR i IN 1 .. fnd_msg_pub.count_msg
LOOP
fnd_msg_pub.get (p_msg_index => i, p_encoded => 'F', p_data => l_msg_data, p_msg_index_out => l_msg_index);
x_msg_data := x_msg_data || l_msg_data;
DBMS_OUTPUT.put_line (x_msg_data);
END LOOP;
END get_error_stack;
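It could be called, for instance, right after the document control call (a sketch; get_error_stack as defined above):

```sql
DECLARE
  l_msg_data VARCHAR2(32000);
BEGIN
  -- ... fnd_global.apps_initialize and po_document_control_pub call here ...
  get_error_stack(x_msg_data => l_msg_data);
  DBMS_OUTPUT.put_line(l_msg_data);
END;
/
```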
Thanks
Nagamohan -
Help required regarding tuning the query mentioned
Hi all,
The query mentioned below takes around 1 hr to complete. It's being used by autoconfig; kindly help me in tuning it.
Query:
UPDATE WF_ITEM_ATTRIBUTE_VALUES WIAV SET WIAV.TEXT_VALUE = REPLACE(WIAV.TEXT_VALUE,:B1,:B2)
WHERE (WIAV.ITEM_TYPE, WIAV.NAME) = (SELECT WIA.ITEM_TYPE, WIA.NAME
FROM WF_ITEM_ATTRIBUTES WIA WHERE WIA.TYPE = 'URL'
AND WIA.ITEM_TYPE = WIAV.ITEM_TYPE
AND WIA.NAME = WIAV.NAME)
AND WIAV.TEXT_VALUE IS NOT NULL
AND INSTR(WIAV.TEXT_VALUE
, :B1) > 0
Plan :*
<pre>
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | UPDATE STATEMENT | | 453 | 14496 | 284K|
| 1 | UPDATE | WF_ITEM_ATTRIBUTE_VALUES | | | |
|* 2 | FILTER | | | | |
|* 3 | TABLE ACCESS FULL | WF_ITEM_ATTRIBUTE_VALUES | 453 | 14496 | 282K|
|* 4 | TABLE ACCESS BY INDEX ROWID| WF_ITEM_ATTRIBUTES | 1 | 33 | 2 |
|* 5 | INDEX UNIQUE SCAN | WF_ITEM_ATTRIBUTES_PK | 1 | | 1 |
Predicate Information (identified by operation id):
2 - filter(("SYS_ALIAS_2"."ITEM_TYPE","SYS_ALIAS_2"."NAME")= (SELECT /*+ */
"WIA"."ITEM_TYPE","WIA"."NAME" FROM "APPLSYS"."WF_ITEM_ATTRIBUTES" "WIA" WHERE
"WIA"."NAME"=:B1 AND "WIA"."ITEM_TYPE"=:B2 AND "WIA"."TYPE"='URL'))
3 - filter("SYS_ALIAS_2"."TEXT_VALUE" IS NOT NULL AND
INSTR("SYS_ALIAS_2"."TEXT_VALUE",:Z)>0)
4 - filter("WIA"."TYPE"='URL')
5 - access("WIA"."ITEM_TYPE"=:B1 AND "WIA"."NAME"=:B2)
</pre>
Index :*
<pre>
INDEX_NAME COLUMN_NAME
APPLSYS WF_ITEM_ATTRIBUTE_VALUES_PK 1 ITEM_TYPE
2 ITEM_KEY
3 NAME
</pre>
regds
Rahul
Edited by: RahulG on Jan 2, 2009 10:47 PM
Edited by: RahulG on Jan 2, 2009 10:48 PM
RahulG wrote:
HI all ,
Query mentioned below takes around 1 hr to complete . It's being used by the autoconfig kindly me in tunning it ..
A few notes:
1. Your query is using bind variables. If you're already on 9i or later (probably 9iR2 according to plan output), this statement will be subject to bind variable peeking and therefore the output of EXPLAIN PLAN is only of limited use, since the actual execution plan might be different and/or might be based on different cardinality estimates based on the actual bind values peeked at hard parse time. You can use the V$SQL_PLAN view to get the actual execution plan(s) if the statement is still cached in the shared pool, from 10g on DBMS_XPLAN.DISPLAY_CURSOR is available for that purpose.
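For example, something along these lines (the SQL_ID lookup is a sketch; 'ALLSTATS LAST' only shows actual row counts if the statement ran with STATISTICS_LEVEL=ALL or a gather_plan_statistics hint):

```sql
-- Find the statement in the shared pool, then show its actual plan.
SELECT sql_id, child_number
FROM   v$sql
WHERE  sql_text LIKE 'UPDATE WF_ITEM_ATTRIBUTE_VALUES%';

SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));
```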
2. The execution plan posted suggests that only 453 rows will correspond to the filter criteria (but, as mentioned in 1. is based on an unknown bind variable value when using EXPLAIN PLAN), and probably therefore the optimizer didn't unnest the subquery but runs this as recursive FILTER query potentially for each row passing the filter criteria on the driving table WF_ITEM_ATTRIBUTE_VALUES. Depending on the actual number of rows this might be inefficient, and unnesting the subquery and turning it into a join might be more appropriate. This might accomplished e.g. by providing more representative statistics to the optimizer (are the statistics up-to-date?).
Although you can't change the SQL you could try this manually by using the UNNEST hint to see if it makes any difference in the execution plan (and run time):
WHERE (WIAV.ITEM_TYPE, WIAV.NAME) = (SELECT /*+ UNNEST */ WIA.ITEM_TYPE, WIA.NAME
...
3. The composite index WF_ITEM_ATTRIBUTE_VALUES_PK can only be used on the first column ITEM_TYPE for effective index access; the NAME column would have to be applied as a filter on all the index leaf blocks found by a range scan on ITEM_TYPE. This might be quite inefficient, and/or might lead to a lot of rows/blocks that need to be visited in the table using this index access path.
4. You could try to trace the execution by enabling extended SQL trace, e.g. using the (undocumented) DBMS_SUPPORT package in 9i. Running the "tkprof" utility on the generated trace file tells you the actual row source cardinalities (which can then be compared to the estimates of the optimizer) and - if the "waits" have been enabled - what your statement has waited for most.
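The session-level way to switch on extended SQL trace with waits and binds is the standard 10046 event, e.g.:

```sql
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... run the UPDATE here ...
ALTER SESSION SET events '10046 trace name context off';
-- then format the trace file from user_dump_dest:
--   tkprof <tracefile>.trc out.txt sys=no
```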
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Sorry - I have also posted a similar question to this in the Workflow form:
Workflow Customisation Error
But wanted to ask here too because this is related to PL/SQL too, so I'm not sure where the query best sits.
I have a customised workflow, and it is falling over here:
l_progress := '15 Got This Far';
IF (l_po_wf_debug = 'Y') THEN
PO_WF_DEBUG_PKG.insert_debug(p_item_type,p_item_key,l_progress);
END IF;
-- get attribute value
-- cannot use GetItemAttrText because if a PO
-- is sent for approval twice, it will generate 2 different item_keys
SELECT DISTINCT wiav.text_value
INTO l_x_contract
FROM apps.wf_item_attribute_values wiav
, apps.wf_item_attributes wia
, apps.wf_items wi
WHERE wiav.item_type = wia.item_type
AND wi.item_type = wiav.item_type
AND wi.item_key = wiav.item_key
AND wiav.NAME = wia.NAME
AND wiav.text_value IS NOT NULL
AND wiav.item_type = 'POAPPRV'
AND wia.NAME = 'X_CONTRACT_IN_PLACE'
AND wi.user_key = TO_CHAR(l_this_doc_num);
l_progress := '16 Got This Far';
IF (l_po_wf_debug = 'Y') THEN
PO_WF_DEBUG_PKG.insert_debug(p_item_type,p_item_key,l_progress);
END IF;
When I check the po_wf_debug table, I can see that the '15 Got This Far' line is showing, but it's breaking before it gets to the next '16 Got This Far' section.
I'm assuming the problem is with the SQL statement I've used to try to set the l_x_contract variable.
I have tried to hard code values instead, to see if that avoids the error - e.g.
SELECT DISTINCT '50079161' -- wiav.text_value
INTO l_x_contract
FROM apps.wf_item_attribute_values wiav
, apps.wf_item_attributes wia
, apps.wf_items wi
WHERE wiav.item_type = wia.item_type
AND wi.item_type = wiav.item_type
AND wi.item_key = wiav.item_key
AND wiav.NAME = wia.NAME
AND wiav.text_value IS NOT NULL
AND wiav.item_type = 'POAPPRV'
AND wia.NAME = 'X_CONTRACT_IN_PLACE'
-- AND wi.user_key = TO_CHAR(l_this_doc_num)
AND wi.user_key = '50079161';
But it's still erroring just the same.
No doubt I am making huge and stupid mistakes, but any pointers would be much appreciated.
Thanks
Hi,
Sorry for the slow reply. I have been doing some more testing.
Thanks for your answers so far.
The error is trapped via dealing with exceptions.
e.g. extract of that bit from the package:
EXCEPTION
WHEN OTHERS THEN
l_progress := 'X_TEST.MyWork: OTHERS: ' || sqlerrm;
IF (l_po_wf_debug = 'Y') THEN
/* DEBUG */ PO_WF_DEBUG_PKG.insert_debug(p_item_type,p_item_key,l_progress);
END IF;
For an error example, the debug output is:
X_TEST.MyWork: OTHERS: ORA-01403: no data found
So presumably my problem is that this bit of code:
SELECT DISTINCT wiav.text_value
INTO l_xccc_contract
FROM apps.wf_item_attribute_values wiav
, apps.wf_item_attributes wia
, apps.wf_items wi
WHERE wiav.item_type = wia.item_type
AND wi.item_type = wiav.item_type
AND wi.item_key = wiav.item_key
AND wiav.NAME = wia.NAME
AND wiav.text_value IS NOT NULL
AND wiav.item_type = 'POAPPRV'
AND wia.NAME = 'X_CONTRACT_IN_PLACE'
AND wi.user_key = TO_CHAR(l_this_doc_num);
i.e. when it returns nothing, the code errors.
I tried an NVL, via:
SELECT DISTINCT NVL(wiav.text_value,'test')
INTO l_xccc_contract
FROM apps.wf_item_attribute_values wiav
, apps.wf_item_attributes wia
, apps.wf_items wi
WHERE wiav.item_type = wia.item_type
AND wi.item_type = wiav.item_type
AND wi.item_key = wiav.item_key
AND wiav.NAME = wia.NAME
AND wiav.text_value IS NOT NULL
AND wiav.item_type = 'POAPPRV'
AND wia.NAME = 'X_CONTRACT_IN_PLACE'
AND wi.user_key = TO_CHAR(l_this_doc_num);But that returns nothing either (i.e. it says "no rows returned" in TOAD instead of returning 'test') - e.g. if I run it direct in TOAD:
SELECT DISTINCT NVL(wiav.text_value,'test')
-- INTO l_xccc_contract
FROM apps.wf_item_attribute_values wiav
, apps.wf_item_attributes wia
, apps.wf_items wi
WHERE wiav.item_type = wia.item_type
AND wi.item_type = wiav.item_type
AND wi.item_key = wiav.item_key
AND wiav.NAME = wia.NAME
AND wiav.text_value IS NOT NULL
AND wiav.item_type = 'POAPPRV'
AND wia.NAME = 'X_CONTRACT_IN_PLACE'
AND wi.user_key = '1111';
At the risk of invoking the wrath of others viewing this (I realise I am being lazy here), is there any way I can get an output from this, or even set l_xccc_contract to a set value if the SQL returns nothing?
Sorry for asking, but I am v. stuck.
Any advice much appreciated.
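One common pattern for defaulting the variable when the query returns no rows (a sketch reusing the query from above; 'NONE' is a placeholder default) is to trap NO_DATA_FOUND around just the SELECT INTO. NVL cannot help here, because no row is returned at all, so there is no NULL to replace:

```sql
BEGIN
  SELECT DISTINCT wiav.text_value
  INTO   l_xccc_contract
  FROM   apps.wf_item_attribute_values wiav
       , apps.wf_item_attributes       wia
       , apps.wf_items                 wi
  WHERE  wiav.item_type  = wia.item_type
  AND    wi.item_type    = wiav.item_type
  AND    wi.item_key     = wiav.item_key
  AND    wiav.name       = wia.name
  AND    wiav.text_value IS NOT NULL
  AND    wiav.item_type  = 'POAPPRV'
  AND    wia.name        = 'X_CONTRACT_IN_PLACE'
  AND    wi.user_key     = TO_CHAR(l_this_doc_num);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    l_xccc_contract := 'NONE';  -- hypothetical default; use whatever value fits
END;
```

Scoping the handler to an inner block like this keeps the rest of the procedure's exception handling unchanged.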
Thanks -
WSH_EXCEPTIONS Table becomes 38 GB
Following are the tables consuming space in tablespace APPS_TS_TX_DATA.
select segment_name, sum(bytes)
from dba_segments
where tablespace_name = 'APPS_TS_TX_DATA'
group by segment_name
order by 2 desc;
SEGMENT_NAME SUM(BYTES)
1 WSH_EXCEPTIONS 38420873216
2 MLOG$_OE_ORDER_LINES_ALL 23646830592
3 WF_ITEM_ACTIVITY_STATUSES 20170014720
4 MLOG$_XXCS_SALES_MATRIX_SA 17160470528
5 MLOG$_WSH_DELIVERY_DETAILS 16620978176
6 ZX_LINES_DET_FACTORS 15658385408
7 FND_LOG_MESSAGES 13132890112
8 WF_ITEM_ATTRIBUTE_VALUES 13027115008
9 MTL_MATERIAL_TRANSACTIONS 12053774336
10 ZX_LINES 11223040000
11 OE_ORDER_LINES_ALL 9180807168
12 RA_CUSTOMER_TRX_LINES_ALL 8896380928
13 XXCS_ORDERS_AUDIT_DATA 8323989504
14 MRP_ATP_SCHEDULE_TEMP 6258294784
15 MLOG$_RA_CUSTOMER_TRX_LINE 6062604288
16 MTL_TRANSACTION_LOT_NUMBERS 5516427264
17 WSH_DELIVERY_DETAILS 5389287424
18 XXCS_SALES_MATRIX_CTS_DTL_2 4675993600
19 RA_CUST_TRX_LINE_GL_DIST_ALL 3809607680
20 ZX_REP_TRX_DETAIL_T 3797942272
21 XXCS_SALES_MATRIX_SALES_DATA 3698065408
22 MLOG$_MTL_MATERIAL_TRANSAC 3564634112
23 XXCS_RA_CUSTOMER_TRX_LINES_U1 3269984256
24 MLOG$_MTL_TRANSACTION_LOT_ 3174432768
25 AR_DISTRIBUTIONS_ALL 3041263616
26 MLOG$_MTL_ONHAND_QUANTITIE 2833252352
27 OE_PRICE_ADJUSTMENTS 2668625920
28 FND_ENV_CONTEXT 2499018752
29 MTL_TXN_REQUEST_LINES 2406088704
30 OE_PROCESSING_MSGS_TL 2246311936
31 MLOG$_RA_CUSTOMER_TRX_ALL 2076835840
32 XXCS_ORDERS_DATA 1659240448
33 AR_RECEIVABLE_APPLICATIONS_ALL 1582039040
34 WF_ITEMS 1580859392
35 RA_CUST_TRX_LINE_SALESREPS_ALL 1519517696
36 OE_PROCESSING_MSGS 1474953216
37 MLOG$_OE_PRICE_ADJUSTMENTS 1370882048
38 XXCS_SALES_MATRIX_D 1276772352
39 RA_CUSTOMER_TRX_ALL 1214251008
40 XXCS_SAP_EXTRACTS_ALL 1173749760
41 IDX$$_2DF80001 1141506048
42 AR_PAYMENT_SCHEDULES_ALL 1115029504
43 OE_SALES_CREDITS 1109524480
44 OE_PRICE_ADJ_ATTRIBS 1103626240
45 IDX$$_2AD80001 1038614528
46 XLA_EVENTS 1037303808
47 MTL_MATERIAL_TRANSACTIONS_PZ_1 1013579776
48 XXCS_SAP_EXTRACTS_B 989855744
49 MRP_SO_LINES_TEMP 978452480
50 WSH_NEW_DELIVERIES 937295872
51 WSH_TRIP_STOPS 894566400
52 XXCS_SAP_EXTRACTS_ALL_PRN 878182400
53 MTL_MATERIAL_TRANSACTIONS_PZN4 839385088
54 XLA_TRANSACTION_ENTITIES 833093632
55 WSH_DELIVERY_ASSIGNMENTS 804782080
56 XXPZ_OE_ORD_LNES_ALL 790102016
57 RCV_TRANSACTIONS 752222208
58 WSH_DELIVERY_DETAILS_N99 724697088
59 XLA_DISTRIBUTION_LINKS 709492736
60 FND_CONCURRENT_REQUESTS 692715520
61 OE_ORDER_HEADERS_ALL 684982272
62 MLOG$_AR_PAYMENT_SCHEDULES 632684544
63 XXCS_DATA_POOL 625213440
64 RA_CUSTOMER_TRX_LINES_ALL_P_1 609091584
65 XXCS_RA_CUSTOMER_TRX_U1 594280448
66 WSH_DELIVERY_DETAILS_PZ_1 574095360
67 XXCS_SALES_MATRIX_D_PZ_4 554303488
68 RA_SALES_ORDER_LINE_N13 551157760
69 XXCS_RCTL_LINECONTEXT_IDX 508559360
71 IDX$$_2DDB0007 471728128
70 AR_CASH_RECEIPTS_ALL 471728128
First of all I tried to cope with WSH_EXCEPTIONS; for that I ran the script mentioned in Metalink Note 842728.1 - Sample API To Purge WSH_EXCEPTIONS Using WSH_EXCEPTIONS_PUB, but in vain. The scripts given in this document ran successfully with the user WSH (owner of WSH_EXCEPTIONS), but the size of this table still persists as before. How can I purge this table to reclaim space, as I am running out of space for the database?
The output of that script is as follows.
SQL> declare
2 x_msg_count NUMBER;
3 x_msg_data VARCHAR2(200);
4 x_return_status VARCHAR2(200);
5 p_exception_rec WSH_EXCEPTIONS_PUB.XC_ACTION_REC_TYPE;
6 begin
7 FND_GLOBAL.apps_initialize(-1, 51277, 665);
8 p_exception_rec.status := 'CLOSED';
9 --call WSH_EXCEPTIONS_PUB.Exception_Action,
10 WSH_EXCEPTIONS_PUB.Exception_Action (
11 -- Standard parameters
12 1, --p_api_version
13 NULL, --p_init_msg_list
14 NULL, --p_validation_level
15 FND_API.G_TRUE, --p_commit
16 x_msg_count, --OUT
17 x_msg_data, --OUT
18 x_return_status, --OUT
19 -- Program specific parameters
20 p_exception_rec, --IN OUT
21 'PURGE' --p_action
22 );
23 COMMIT;
24 --
25 DBMS_OUTPUT.PUT_LINE (FND_API.G_TRUE);
26 DBMS_OUTPUT.PUT_LINE (x_msg_count);
27 DBMS_OUTPUT.PUT_LINE (x_msg_data);
28 DBMS_OUTPUT.PUT_LINE (x_return_status);
29 DBMS_OUTPUT.PUT_LINE ('Purging Change Status');
30 end;
31 --------------------------------------------------------------------------------
32
33
34 /
SQL> /
T
1
1966516 records Resolved/Purged successfully.
S
Purging Change Status
PL/SQL procedure successfully completed
I have found the NO_ACTION_REQUIRED exception in abundant numbers. The following output is from UAT and is two months old; in production it would be double this. Anyway, when I run the script to Purge WSH_EXCEPTIONS Using WSH_EXCEPTIONS_PUB [ID 842728.1] for the status NO_ACTION_REQUIRED, it does nothing. How can I purge it? Please help.
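Note that even after rows are successfully purged, dba_segments will report the same size, because deleting rows does not lower the segment's high-water mark. A sketch of reclaiming the space afterwards (assuming a 10g+ database, an ASSM tablespace, and that row movement is acceptable for this table):

```sql
-- Shrink the segment in place (row movement must be enabled first)
ALTER TABLE WSH.WSH_EXCEPTIONS ENABLE ROW MOVEMENT;
ALTER TABLE WSH.WSH_EXCEPTIONS SHRINK SPACE;

-- Alternative: ALTER TABLE WSH.WSH_EXCEPTIONS MOVE;
-- (this invalidates the table's indexes, which must then be rebuilt)
```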
SQL>select count(*),status
from wsh_exceptions
group by status
COUNT(*) STATUS
62575 OPEN
29753088 NO_ACTION_REQUIRED
SQL> declare
2 x_msg_count NUMBER;
3 x_msg_data VARCHAR2(200);
4 x_return_status VARCHAR2(200);
5 p_exception_rec WSH_EXCEPTIONS_PUB.XC_ACTION_REC_TYPE;
6 begin
7 FND_GLOBAL.apps_initialize(-1, 51277, 665);
8 p_exception_rec.status := 'NO_ACTION_REQUIRED';
9 --call WSH_EXCEPTIONS_PUB.Exception_Action,
10 WSH_EXCEPTIONS_PUB.Exception_Action (
11 -- Standard parameters
12 1, --p_api_version
13 NULL, --p_init_msg_list
14 NULL, --p_validation_level
15 FND_API.G_TRUE, --p_commit
16 x_msg_count, --OUT
17 x_msg_data, --OUT
18 x_return_status, --OUT
19 -- Program specific parameters
20 p_exception_rec, --IN OUT
21 'PURGE' --p_action
22 );
23 COMMIT;
24 --
25 DBMS_OUTPUT.PUT_LINE (FND_API.G_TRUE);
26 DBMS_OUTPUT.PUT_LINE (x_msg_count);
27 DBMS_OUTPUT.PUT_LINE (x_msg_data);
28 DBMS_OUTPUT.PUT_LINE (x_return_status);
29 DBMS_OUTPUT.PUT_LINE ('Purging Change Status');
30 end;
31 --------------------------------------------------------------------------------
32
33
34 /
T
3
E
Purging Change Status
PL/SQL procedure successfully completed -
Slow WFBG for deferred activities
Dear All,
EBS: 12.1.0.3 - Shared Multi Node
DB: 11.2.0.2 - RAC
OS: RHEL x86-64 5.5
In our PROD system we have 3 WFBG schedulers. Below are the configurations:
REQUEST_ID REQUESTED_BY P USER_CONCURRENT_PROGRAM_NAME
Arguments
EVERY SO_OFTEN
RESUBMIT_END_DATE
8289882
0 P Workflow Background Process
, , , N, Y, N
1 HOURS
8287054
0 P Workflow Background Process
, , , N, N, Y
1 DAYS
8290413
0 P Workflow Background Process
, , , Y, N, N
5 MINUTES
Currently, we have a performance issue with WFBG for deferred activities. Last time it completed in almost 8 hours. From v$active_session_history, the database events below were captured:
select sql_id, event, sum(time_waited) from v$active_session_history where session_id=715 and
sample_time between
to_date('09-DEC-13 12:30:00','DD-MON-YY HH24:MI:SS')
and
to_date('09-DEC-13 15:00:00','DD-MON-YY HH24:MI:SS')
group by sql_id, event
having sum(time_waited) > 0
order by sum(time_waited);
SQL_ID EVENT SUM(TIME_WAITED)
56ubx75p2k7x5 db file sequential read 1693
aug9kqdfu0s6p gc cr multi block request 5036
66tmsr3446uqn db file sequential read 5948
9dzhn01z3qy3b Disk file operations I/O 7048
65qczrm0turbh db file sequential read 8076
aug9kqdfu0s6p db file parallel read 8543
1pmvmpmhdjdrk db file sequential read 12409
ccprbqfuu8wb1 db file sequential read 16375
dq5v9g4z9jbrq db file sequential read 18043
0fxqsqwhkn03b db file sequential read 22790
aug9kqdfu0s6p db file scattered read 43602
aug9kqdfu0s6p db file sequential read 65505
d02kbfjwq7ywv db file sequential read 89022
9dzhn01z3qy3b direct path read 167570
9n126p922amjm direct path read 173267
aug9kqdfu0s6p direct path read 282216
d02kbfjwq7ywv direct path read 330143
9dzhn01z3qy3b db file sequential read 2588002
18 rows selected.
SQL Plan for SQL_ID 9dzhn01z3qy3b is as below:
SELECT RILA.ROWID , RILA.TRX_DATE , RILA.INTERFACE_LINE_ATTRIBUTE1 ,
OOLA.INVOICED_QUANTITY , OOHA.REQUEST_DATE FROM RA_INTERFACE_LINES_ALL
RILA , OE_ORDER_LINES_ALL OOLA , OE_ORDER_HEADERS_ALL OOHA WHERE
RILA.INTERFACE_LINE_ATTRIBUTE6 = TO_CHAR(OOLA.LINE_ID) AND
RILA.INTERFACE_LINE_ATTRIBUTE6 = :B1 AND RILA.INTERFACE_LINE_CONTEXT =
'ORDER ENTRY' AND OOLA.LINE_CATEGORY_CODE = 'RETURN' AND OOHA.HEADER_ID
= OOLA.HEADER_ID
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | | 17483 (100)| |
| 1 | NESTED LOOPS | | | | | | |
| 2 | NESTED LOOPS | | 1 | 85 | | 17483 (2)| 00:03:30 |
| 3 | MERGE JOIN | | 1 | 72 | | 17482 (2)| 00:03:30 |
| 4 | TABLE ACCESS BY INDEX ROWID| RA_INTERFACE_LINES_ALL | 1 | 51 | | 1512 (2)| 00:00:19 |
| 5 | INDEX FULL SCAN | XX01_RA_INTERFACE_LINES_N7 | 2 | | | 1511 (2)| 00:00:19 |
| 6 | SORT JOIN | | 39404 | 808K| 2808K| 15970 (2)| 00:03:12 |
| 7 | TABLE ACCESS FULL | OE_ORDER_LINES_ALL | 39404 | 808K| | 15710 (2)| 00:03:09 |
| 8 | INDEX UNIQUE SCAN | OE_ORDER_HEADERS_U1 | 1 | | | 0 (0)| |
| 9 | TABLE ACCESS BY INDEX ROWID | OE_ORDER_HEADERS_ALL | 1 | 13 | | 1 (0)| 00:00:01 |
Note:
We already added index,XX01_RA_INTERFACE_LINES_N7, on AR.RA_INTERFACE_LINES_ALL as suggested by Metalink Engineer.
But, based on the SQL Plan, it's still doing FTS on OE_ORDER_LINES_ALL.
Also, the WF tables are very huge, since some parents are not closed yet because they still have open child items (purging is scheduled every day to keep 14 days of runtime data).
SQL> select table_name, num_rows from dba_tables where table_name in ('WF_ITEM_ATTRIBUTE_VALUES','WF_ITEM_ACTIVITY_STATUSES','WF_NOTIFICATION_ATTRIBUTES');
TABLE_NAME NUM_ROWS
WF_ITEM_ACTIVITY_STATUSES 3863143
WF_ITEM_ATTRIBUTE_VALUES 6350137
WF_NOTIFICATION_ATTRIBUTES 156220
Our questions are:
1. Should we change the interval of this WFBG? 5 minutes is suggested in the MOS note, but is that too frequent?
2. If 5 minutes is normal, is the performance issue in the WFBG deferred activity caused by the huge WF tables?
3. Should we add another index on OE_ORDER_LINES_ALL table to reduce the wait time?
Kind Regards,
Abip
We already added index,XX01_RA_INTERFACE_LINES_N7, on AR.RA_INTERFACE_LINES_ALL as suggested by Metalink Engineer.
But, based on the SQL Plan, it's still doing FTS on OE_ORDER_LINES_ALL.
Please update the SR with the above.
Please see:
Recommended Schedule For Workflow Background Process For Order Management (Doc ID 971925.1)
How Often or Frequent Should You Run Workflow Background Process to Improve Performance for Deferred OEOL? (Doc ID 1308607.1)
Order Management (OM) Sales Order Line Frequently Asked Questions (FAQ) (Doc ID 1308685.1)
And,
Deferred Workflow Transactions Performance - Invoice Interface (Doc ID 1567675.1)
Workflow Background Process Hangs on Deferred = Yes and OM Order Line (Doc ID 817642.1)
Workflow Background Performance on R12.0.6 for OEOL When Invoicing (Doc ID 760976.1)
Workflow Background Engine Taking Lots Of System Resources (Doc ID 1337496.1)
For production issues, we do recommend logging SRs with Oracle support.
Thanks,
Hussein -
Reassign existing workflow - please help!!
We have an existing workflow that has thrown an error. We have corrected the backend code which results in the workflow now working but we have three existing WFs that we'd like to push through. However, we are getting the dreaded no performer error.
I've captured some screenshots here of what we have done in an attempt to reassign which may or may not be correct. Please can someone provide some pointers so that the process does not have to be completely restarted?
http://www.2ql.net/uploads/1255023160.jpg
http://www.2ql.net/uploads/1255058496.jpg
http://www.2ql.net/uploads/1255017068.jpg
http://www.2ql.net/uploads/1255000660.jpg
http://www.2ql.net/uploads/1255057613.jpg
Many thanks in advance,
Phil
Rewind the workflow to the approvals activity (if you are not using ad-hoc roles), which will generate a notification again, or update the workflow attribute values to the required values, either in WF_ITEM_ATTRIBUTE_VALUES or in the front-end screen.
-
Need advice about coalesce and deallocate unused space
Hi experts;
Here I am looking for advice about coalescing and deallocating unused space.
I have a tablespace that is 87% full; one of the tables in it has 1,150,325 records. I'm going to delete 500,000 records from that table, but to release the space used by those records I understand that I need to execute another procedure. I have been reading about coalescing a tablespace and deallocating unused space.
I found that, apparently, both processes can help me free space. Could you share your comments about their advantages and disadvantages, so that I can choose the best solution?
Thanks for your comments.
Al
Hi
After rows are deleted, the high-water mark stays the same, and so does the size of the table; you need to bring the high-water mark down.
Here is what you need to do to bring down the high-water mark. We do this monthly for performance purposes.
This is an EBS R12 system but the procedures are the same for EBS database or non EBS database.
After you purge or delete data in a table
1) alter table APPLSYS.WF_ITEM_ATTRIBUTE_VALUES move; <-- this operation will invalidate all indexes attached to the table
2)select owner, index_name, status from dba_indexes -- list all invalid object for user APPLSYS
where table_owner = upper('APPLSYS')
and
status NOT IN ('VALID','N/A');
3)spool idxrebuild.sql --generate script to rebuild indexes.
select 'alter index ' ||owner||'.'||index_name ||' rebuild online;' from dba_indexes
where table_owner = upper('APPLSYS')
and
status <> 'VALID';
4) run idxrebuild.sql -- to rebuild indexes. -- at this point if you check spaces on the table, it is still the same, you need to run #5
5)exec fnd_stats.gather_schema_stats ('APPLSYS'); --fnd_stat is for EBS system you can replace with the database equivalent command.
Use this statement to count the blocks before and after the operation to see the difference:
select segment_name, sum(blocks) "Total Blocks" from dba_extents
where
owner = 'APPLSYS'
AND segment_name = 'WF_ITEM_ATTRIBUTE_VALUES'
group by segment_name;
Hope this helps.
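As a side note, the plain-database equivalent of the fnd_stats call in step 5 (a sketch, assuming the same APPLSYS schema as above) would be:

```sql
-- Non-EBS equivalent of fnd_stats.gather_schema_stats;
-- cascade => TRUE also gathers statistics on the rebuilt indexes
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPLSYS', cascade => TRUE);
```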
Information of Business Rule monitor (BRM)
Hi,
I am new to oracle apps and I need some information.
Could you please tell me in which table the information related to ITEMTYPE=JTFBRM and ITEMKEY is stored in EBS?
Also, is it possible to track/check from the backend whether the BRM ran successfully?
Hi,
They are stored in Workflow tables. Basically, you'll need to query the following tables:
WF_ITEMS
WF_ITEM_ACTIVITY_STATUSES
WF_ITEM_ACTIVITY_STATUSES_H
WF_ITEM_ATTRIBUTE_VALUES
You can also use the Status Monitor in System Admin/WF Admin responsibilities to monitor the progress of the workflow processes.
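For example, a quick backend look-up for a given BRM instance might be (a sketch; :item_key is a bind for the item key you are interested in):

```sql
-- Item header: a non-NULL end_date means the process has completed
SELECT item_type, item_key, begin_date, end_date
FROM   wf_items
WHERE  item_type = 'JTFBRM'
AND    item_key  = :item_key;

-- Activity statuses: check for ERROR status or a populated error_name
SELECT process_activity, activity_status, activity_result_code, error_name
FROM   wf_item_activity_statuses
WHERE  item_type = 'JTFBRM'
AND    item_key  = :item_key;
```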
Regards,
Bashar
Maybe you are looking for
-
Breadcrumbs not picking up settings file configuration
Hi I am using FM11, RH10 as part of TCS 4.0. I am creating a settings file that will be used by team to generate WebHelp outputs. In this, I want to change the default Times New Roman font of breadcrumbs to Arial. While I am aware how to set that dur
-
Someone emails me a document as a .pdf file. I save it in my documents as a .pdf. I need to delete a page in this file and can't find where the delete page (or extract page) is in the toolbar. I am using Adobe Reader v9.0. Where do I find it? Thanks
-
[F8] Problem with scrip when targeting player 8
When I target the flash 8 player my scripts wont work and I cant figure out why. Here is what I have (forgive me if my explanations arent up to snuff) On my tracker button Code: on (rollOver) { startDrag("/tracker", true); On my first frame Code: mov
-
I have a IPad, and a IPhone, I loaded a CC just for gaming, and every time ik don't have money on the card, I cannot download free apps, an this should not be happening because it is a free App..!!!??? What is going on!!??? I want to download an app
-
Submitting Customized Report Request, Completed with Error Status
Dear All , Good Day ,, 1- I am using Oracle Application 11.5.0 2- Application And Data Base Server Unix AI 3- created Oracle Report file (i testes all sql statmets using toad , successfly) 5- Tested the report , passed parameters , it is running succ