0BBP_BPART_BUYID_ATTR Datasource taking too much time for full load
Hello Gurus,
This DataSource fetches 95,000 records and takes five hours in full load. There is no delta capability. When I run a trace and look at the code, it has Oracle and MS SQL indexes but not DB2, and we are on a DB2 database. Also, all the fields are hidden except Business Partner and Buyer ID. I did not find any notes on this DataSource.
Has anyone faced this issue? Please help me make this daily full load faster.
Hi,
Divide the load using selections in different InfoPackages.
Thanks
HP
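The suggestion above — splitting one full load into several InfoPackages, each with its own selection range — is essentially a range-partitioning problem. A minimal illustrative sketch in Python (not SAP code; the key range and partition count here are made up):

```python
def split_selections(low, high, parts):
    """Return contiguous (from, to) selection intervals covering [low, high]."""
    step = (high - low + 1) // parts
    bounds = []
    start = low
    for i in range(parts):
        # last interval absorbs any remainder so the whole range is covered
        end = high if i == parts - 1 else start + step - 1
        bounds.append((start, end))
        start = end + 1
    return bounds

# e.g. five InfoPackages covering 95,000 key values
print(split_selections(1, 95000, 5))
```

Each resulting (from, to) pair would become the selection of one InfoPackage, so the packages can be scheduled in parallel.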
Similar Messages
-
Taking too much time to save PO with more than 600 line items
HI
We are trying to save a PO with more than 600 line items, but it is taking too much time to save: more than one hour.
Kindly let me know if there is any restriction on the number of line items in a PO. Please guide.
regards
Sanjay

Hi,
I suggest you run a trace (transaction ST05) to identify the bottleneck.
You can find some reasons in Note 205005 - Performance composite note: Purchase order.
I hope this helps you
Regards
Eduardo -
Job is taking too much time during Delta loads
hi
When I tried to extract delta records from R/3 for the standard extractor 0FI_GL_4, it took 46 minutes even though there were very few delta records (193 records only).
Please find attached the R/3 job log. Most of the time is spent in the call to the customer enhancement BW_BTE_CALL_BW204010_E.
Please let me know why this is taking so much time.
06:10:16 4 LUWs confirmed and 4 LUWs to be deleted with FB RSC2_QOUT_CONFIRM_DATA
06:56:46 Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 193 records
06:56:46 Result of customer enhancement: 193 records
06:56:46 Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 193 records
06:56:46 Result of customer enhancement: 193 records
06:56:46 Asynchronous sending of data package 1 in task 0002 (1 parallel tasks)
06:56:47 IDOC: InfoIDOC 2, IDOC no. 121289649, duration 00:00:00
06:56:47 IDOC: Begin 09.05.2011 06:10:15, end 09.05.2011 06:10:15
06:56:48 Asynchronous sending of InfoIDOCs 3 in task 0003 (1 parallel tasks)
06:56:48 Through selection conditions, 0 records filtered out in total
06:56:48 IDOC: InfoIDOC 3, IDOC no. 121289686, duration 00:00:00
06:56:48 IDOC: Begin 09.05.2011 06:56:48, end 09.05.2011 06:56:48
06:56:54 tRFC: Data package 1, TID = 3547D5D96D2C4DC7740F217E, duration 00:00:07, ARFCSTATE =
06:56:54 tRFC: Begin 09.05.2011 06:56:47, end 09.05.2011 06:56:54
06:56:55 Synchronous sending of InfoIDOCs 4 (0 parallel tasks)
06:56:55 IDOC: InfoIDOC 4, IDOC no. 121289687, duration 00:00:00
06:56:55 IDOC: Begin 09.05.2011 06:56:55, end 09.05.2011 06:56:55
06:56:55 Job finished
Regards
Atul

Hi Atul,
Have you written any customer exit code? If yes, check whether it can be optimized.
Kind Regards,
Ashutosh Singh -
JDBC adapter taking too much time for inserting
We have given an "update insert" query to be used in the JDBC adapter to insert these records into the database. It consists of 3 tables, with 29 fields in the first and 4 fields in each of the other two.
While a message in XI gets processed in just 1-3 seconds, the database processing time goes up to as much as 8 minutes for one record to get inserted.
As an immediate solution, is there any way we can have the JDBC adapter process these messages faster? These messages get queued up, and hence all the other messages also get queued up, delaying the other interfaces. We have a central adapter engine...
Also, is there any way we can get an alert when the status is "Processing/To be delivered/Delivering" and the message count exceeds a certain number, say 1000?

I am using only one receiver JDBC channel.
We have been inserting into three different tables by using 3 different statement tags, i.e. statement1 (for table1), statement2 (for table2), statement3 (for table3).
My structure is,
<MessageTypeName>
<statement1>
<tag1>
<action>UPDATE_INSERT</action>
<table>Table1</table>
<access>
<field1>
<field2>
<field28>
<key>
<MatNumber>
</tag1>
</statement1>
<statement2>
<tag1>
<action>UPDATE_INSERT</action>
<table>Table2</table>
<access>
<field1>
<field2>
<field4>
<key>
<MatNumber>
</tag1>
</statement2>
<statement3>
<tag3>
<action>UPDATE_INSERT</action>
<table>Table3</table>
<access>
<field1>
<field2>
<field4>
<key>
<MatNumber>
</tag3>
</statement3>
You can see we are also using a key. In the first table we have 28 fields; in the second and third we have 4.
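The row-at-a-time cost described above can be illustrated outside XI: sending each record as its own statement pays a round trip per row, while batching many rows into one call amortizes that cost. A minimal sketch using Python's built-in sqlite3 (this is not the JDBC adapter itself, and the table and column names are made up; the ON CONFLICT clause mirrors the UPDATE_INSERT semantics):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (matnumber TEXT PRIMARY KEY, field1 TEXT)")

rows = [(f"MAT{i:05d}", "x") for i in range(1000)]

# One batched upsert call instead of 1000 individual statements/commits.
conn.executemany(
    "INSERT INTO table1 (matnumber, field1) VALUES (?, ?) "
    "ON CONFLICT(matnumber) DO UPDATE SET field1 = excluded.field1",
    rows,
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0])  # 1000
```

In the adapter context the analogous levers are the channel's batch mode and the database's commit frequency, rather than per-record statements.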
Edited by: rajesh shanmugasundaram on Jul 31, 2008 11:08 AM -
UTL_FILE request taking too much time to complete
Hi,
We are running a UTL_FILE request.
The SELECT statement fetches 6,000 records and takes 22 minutes to complete.
The code is:
CREATE OR REPLACE PACKAGE BODY "XXC"."XXC_MOD_IN_068_AP_TO_FIS_PKG"
AS
/*
* Module Type : PL/SQL
* Module Name : XXC_MOD_IN_068_AP_FIS_PKG
* Description : This package is used for AP to Fiscal Interface.
* Run Env. : SQL*Plus
* Procedure Name Description
* XXC_MOD_068_AP_PR XXC_MOD_068_AP_PR Procedure is used to insert transactions
* into CSV OutPut File from Oracle Account Payables.
* Calling Module: None
* Modules Called: XXC_COMMON_INT_PK.INSERT_AUDIT
* Module Number : MOD_IN_068
* Known Bugs and Restrictions: none
* History
* =======
* Version Name Date Description of Change
* 0.1 Sanjeev Khurana 25-JULY-2011 Initial Creation.
* 0.2 Rohit 09-DEC-2011 Updated header details for the file
* 0.3 Amit Kulwal 28-AUG-2012 Updated the cursor query for incident 671520
* 0.4 Swaraj Goud 20-Nov-2012 Updated as per the CR 671520
*/
-- Actual Code Start Here
-- Procedure : XXC_MOD_068_AP_PR
-- Description : XXC_MOD_068_AP_PR Procedure is used to insert transactions
-- into CSV OUTPUT File from Oracle Account Payables.
-- Parameters:
-- Parm Name I/O Description
-- p_errbuf OUT Error message.
-- p_retcode OUT Error code. Returns 0 if no errors otherwise returns 1.
-- p_start_date IN Start Date
-- p_end_date IN End Date
PROCEDURE xxc_mod_068_ap_pr (
p_errbuf OUT VARCHAR2,
p_retcode OUT NUMBER,
p_start_date IN VARCHAR2,
p_end_date IN VARCHAR2
)
IS
-- Define variables and assign default values
l_sucess_count NUMBER := 0;
l_error_count NUMBER := 0;
-- Standard declaration
l_source VARCHAR2 (10);
l_target VARCHAR2 (10);
lc_module_description VARCHAR2 (50)
:= 'MOD_IN_068 - AP to Fiscal';
l_status CONSTANT VARCHAR2 (50) := 'NEW';
p_status NUMBER;
l_batch_id NUMBER;
l_batch_id_next NUMBER
:= apps_common_out_batch_id_s1.NEXTVAL;
l_proc_name VARCHAR2 (100) := 'XXC_MOD_IN_068';
l_request_id NUMBER
:= fnd_global.conc_request_id;
l_audit_master_id NUMBER := NULL;
l_mod_code VARCHAR2 (100);
l_log_type NUMBER := 1; --INFORMATION
l_det_status_success NUMBER := 0; --SUCCESS
l_det_status_inprocess NUMBER := 3; --INPROCESS
l_det_status_rejected NUMBER := 4; --REJECTED
l_det_status_err NUMBER := 3; --Error
l_det_status_complete NUMBER := 9; --COMPLETE
-- Standard who Columns
l_created_by NUMBER := fnd_global.user_id;
l_creation_date DATE := SYSDATE;
l_last_update_date DATE := SYSDATE;
l_last_update_login NUMBER := fnd_global.user_id;
v_file UTL_FILE.file_type;
l_location VARCHAR2 (150);
l_archive_location VARCHAR2 (150);
l_date VARCHAR2 (50);
l_filename VARCHAR2 (50);
l_open_mode VARCHAR2 (1) := 'W';
l_max_linesize NUMBER := 32767; -- must be a NUMBER in range 1..32767 for UTL_FILE.fopen
--Cursor is used to fetch valid records for the interface
CURSOR get_ap_data_inv
IS
SELECT asp.segment1 supplier_ref,
-- asp.vendor_name supplier_name,
replace(asp.vendor_name, ',', ' ') supplier_name,
--aia.invoice_num invoice_number,
replace(aia.invoice_num, ',','') invoice_number,
aia.invoice_date,
aia.invoice_amount amount,
aia.doc_sequence_value unique_id,
aia.creation_date date_invoice_entered,
apsa.due_date date_invoice_paid,
aia.SOURCE user_id,
aia.payment_status_flag,
aia.invoice_type_lookup_code doc_type,
--aia.description,
replace(aia.description, ',' , ' ') description,
apsa.gross_amount
FROM ap_invoices_all aia,
ap_suppliers asp,
apps.ap_payment_schedules_all apsa
-- apps.iby_payments_all iba,
-- apps.iby_docs_payable_all dp
WHERE aia.invoice_id = apsa.invoice_id
AND aia.vendor_id = asp.vendor_id
AND aia.org_id = apsa.org_id
-- AND apsa.payment_status_flag != 'Y' -- commented for CR
-- AND dp.payment_id = iba.payment_id(+)
-- AND aia.invoice_id = dp.calling_app_doc_unique_ref2(+)
-- AND apsa.due_date <= (SYSDATE + 1)
AND TRUNC (aia.creation_date)
BETWEEN NVL (fnd_date.canonical_to_date (p_start_date),
TRUNC (aia.creation_date))
AND NVL (fnd_date.canonical_to_date (p_end_date),
TRUNC (aia.creation_date));
TYPE xxc_tbl IS TABLE OF get_ap_data_inv%ROWTYPE;
xxc_tbl1 xxc_tbl;
BEGIN
l_batch_id := apps_common_out_batch_id_s1.CURRVAL;
xxc_common_int_pk.insert_audit (p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_inprocess,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'Process Starts',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
-- Get Module Code
BEGIN
SELECT TRIM (fval.flex_value),
TRIM (SUBSTR (fval.description,
1,
INSTR (fval.description, ' -')))
INTO l_source,
l_mod_code
FROM fnd_flex_values_vl fval, fnd_flex_value_sets vset
WHERE vset.flex_value_set_id = fval.flex_value_set_id
AND vset.flex_value_set_name IN ('XXC_COMM_INT_CONFIG')
AND fval.enabled_flag = 'Y'
AND fval.description = lc_module_description;
EXCEPTION
WHEN OTHERS
THEN
xxc_common_int_pk.insert_audit (p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_err,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'Error Mode Code',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
raise_application_error (-20045, SQLERRM);
END;
--File Location Path for OutPut File
BEGIN
SELECT fnd_profile.VALUE ('XXC_MOD_IN_068_AP_OUTBOUND'),
fnd_profile.VALUE ('XXC_MOD_IN_068_AP_ARCHIVE')
INTO l_location,
l_archive_location
FROM DUAL;
EXCEPTION
WHEN OTHERS
THEN
xxc_common_int_pk.insert_audit
(p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_rejected,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'Profile Value not found',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
END;
BEGIN
SELECT TO_CHAR (SYSDATE, 'YYYYMMDDhh24miss')
INTO l_date
FROM DUAL;
EXCEPTION
WHEN OTHERS
THEN
xxc_common_int_pk.insert_audit (p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_rejected,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'status not found',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
END;
l_filename := 'AP_Fiscal_' || l_date || '.csv';
v_file :=
UTL_FILE.fopen (LOCATION => l_location,
filename => l_filename,
open_mode => l_open_mode,
max_linesize => l_max_linesize
);
-- Changed as per Sarah's email on 9th December
/* UTL_FILE.put_line (v_file,
'SUPPLIER_REF'
|| ','
|| 'SUPPLIER_NAME'
|| ','
|| 'INVOICE_NUMBER'
|| ','
|| 'INVOICE_DATE'
|| ','
|| 'AMOUNT'
|| ','
|| 'UNIQUE_ID'
|| ','
|| 'DATE_INVOICE_ENTERED'
|| ','
|| 'DATE_INVOICE_PAID'
|| ','
|| 'USER_ID'
|| ','
|| 'PAYMENT_STATUS_FLAG'
|| ','
|| 'DOC_TYPE'
|| ','
|| 'DESCRIPTION'
|| ','
|| 'PAYMENT_AMOUNT');
*/
UTL_FILE.put_line (v_file,
'SUPPLIERREF'
|| ','
|| 'SUPPLIERNAME'
|| ','
|| 'INVOICENUMBER'
|| ','
|| 'DATE'
|| ','
|| 'AMOUNT'
|| ','
|| 'UNIQUEID'
|| ','
|| 'DATEINVOICEENTERED'
|| ','
|| 'DATEINVOICEPAID'
|| ','
|| 'USERID'
|| ','
|| 'PAYMENTSTATUS'
|| ','
|| 'DOCTYPE'
|| ','
|| 'DESCRIPTION'
|| ','
|| 'PAYMENTAMOUNT');
UTL_FILE.put_line (v_file,
'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX'
|| ','
|| 'XX');
open get_ap_data_inv;
loop
fetch get_ap_data_inv bulk collect into xxc_tbl1 limit 6000;
fnd_file.put_line(fnd_file.log, 'Cursor Count is : '||xxc_tbl1.count);
for i in 1 .. xxc_tbl1.count -- safe when the fetch returns no rows
--FOR cur_rec IN get_ap_data_inv
LOOP
BEGIN
--Common package used for proper sequence for Record_id and Batch_id
l_sucess_count := l_sucess_count + 1;
--Insert into CSV file
fnd_file.put_line (fnd_file.LOG, 'Before Utl file');
UTL_FILE.put_line (v_file,
xxc_tbl1(i).supplier_ref
|| ','
|| xxc_tbl1(i).supplier_name
|| ','
|| xxc_tbl1(i).invoice_number
|| ','
|| xxc_tbl1(i).invoice_date
|| ','
|| xxc_tbl1(i).amount
|| ','
|| xxc_tbl1(i).unique_id
|| ','
|| xxc_tbl1(i).date_invoice_entered
|| ','
|| xxc_tbl1(i).date_invoice_paid
|| ','
|| xxc_tbl1(i).user_id
|| ','
|| xxc_tbl1(i).payment_status_flag
|| ','
|| xxc_tbl1(i).doc_type
|| ','
|| xxc_tbl1(i).description
|| ','
|| xxc_tbl1(i).gross_amount);
fnd_file.put_line (fnd_file.LOG,
'Supplier Reference : ' || xxc_tbl1(i).supplier_ref);
xxc_common_int_pk.insert_audit
(p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_complete,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'Inserting records from AP to Fiscal Successfully',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
EXCEPTION
WHEN OTHERS
THEN
l_error_count := l_error_count + 1;
fnd_file.put_line (fnd_file.LOG,
'Error While Inserting from AP to Fiscal '
|| SQLERRM);
-- Create audit log for AP Inv records insert Exception
--Insert into the Audit table XXC_COMM_AUDIT_DETAIL_LOG and XXC_COMM_AUDIT_MASTER_LOG
xxc_common_int_pk.insert_audit
(p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_rejected,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'Error While Inserting from AP to Fiscal',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
END;
END LOOP;
exit when get_ap_data_inv%NOTFOUND;
end loop;
close get_ap_data_inv;
UTL_FILE.fclose (v_file);
UTL_FILE.fcopy (l_location,
l_filename,
l_archive_location,
l_filename
);
-- Create audit log for Successfully processed records
-- Procedure call to insert in Audit tables
xxc_common_int_pk.insert_audit
(p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_complete,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'Completed Successfully AP to Fiscal',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
--Insert into the Audit table XXC_COMM_AUDIT_DETAIL_LOG and XXC_COMM_AUDIT_MASTER_LOG for populating email drop table
BEGIN
SELECT transaction_status
INTO p_status
FROM xxc_comm_audit_master_log
WHERE audit_master_id = l_audit_master_id;
EXCEPTION
WHEN OTHERS
THEN
xxc_common_int_pk.insert_audit (p_batch_id => l_batch_id,
p_request_id => l_request_id,
p_source_system => l_source,
p_proc_name => l_proc_name,
p_log_type => l_log_type,
p_det_status => l_det_status_err,
p_msg_code => NULL,
p_entity => NULL,
p_msg_desc => 'Status Error',
p_mast_request_id => l_request_id,
p_record_id => NULL,
p_source => l_source,
p_target => l_target,
p_email => NULL,
p_mod_code => l_mod_code,
p_audit_master_id => l_audit_master_id
);
END;
IF p_status <> 0
THEN
xxc_comm_audit_log_pk.populate_email_drop_table (l_audit_master_id,
l_batch_id);
END IF;
EXCEPTION
WHEN UTL_FILE.invalid_path
THEN
UTL_FILE.fclose (v_file);
raise_application_error (-20000, 'File location is invalid.');
WHEN UTL_FILE.invalid_mode
THEN
UTL_FILE.fclose (v_file);
raise_application_error (-20001,
'The open_mode parameter in FOPEN is invalid.');
WHEN UTL_FILE.invalid_filehandle
THEN
UTL_FILE.fclose (v_file);
raise_application_error (-20002, 'File handle is invalid.');
WHEN UTL_FILE.invalid_operation
THEN
UTL_FILE.fclose (v_file);
raise_application_error
(-20003,
'File could not be opened or operated on as requested.');
WHEN UTL_FILE.write_error
THEN
UTL_FILE.fclose (v_file);
raise_application_error
(-20005,
'Operating system error occurred during the write operation.');
WHEN UTL_FILE.file_open
THEN
UTL_FILE.fclose (v_file);
raise_application_error
(-20008,
'The requested operation failed because the file is open.');
WHEN UTL_FILE.invalid_maxlinesize
THEN
UTL_FILE.fclose (v_file);
raise_application_error
(-20009,
'The MAX_LINESIZE value for FOPEN() is invalid; it should '
|| 'be within the range 1 to 32767.');
WHEN OTHERS
THEN
UTL_FILE.fclose (v_file);
raise_application_error (-20045, SQLERRM);
END xxc_mod_068_ap_pr;
END xxc_mod_in_068_ap_to_fis_pkg;
Show Errors
I have implemented the BULK COLLECT concept in the program. Can anyone please suggest how I can improve performance further?
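One direction to look at, sketched here in Python rather than PL/SQL: the loop above calls UTL_FILE.put_line and insert_audit once per row, so per-row overhead dominates even with BULK COLLECT. Buffering rows and flushing them in chunks (mirroring the LIMIT 6000 of the fetch) cuts the number of write calls. The function below is a hypothetical illustration of that chunking pattern, not the package's actual code:

```python
import csv
import io

def write_in_chunks(rows, chunk_size=6000):
    """Write CSV rows in bulk per chunk; return (csv_text, number_of_flushes)."""
    out = io.StringIO()
    writer = csv.writer(out)
    flushes = 0
    buf = []
    for row in rows:
        buf.append(row)
        if len(buf) >= chunk_size:
            writer.writerows(buf)  # one bulk write per chunk, not per row
            buf.clear()
            flushes += 1
    if buf:                        # flush the final partial chunk
        writer.writerows(buf)
        flushes += 1
    return out.getvalue(), flushes

data, flushes = write_in_chunks([("S1", "Acme", 100)] * 6500, 6000)
print(flushes)  # 2
```

The PL/SQL analogue would be concatenating several lines into a buffer before each UTL_FILE call, and moving the per-row insert_audit call out of the loop (one audit entry per batch).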
Thanks,
Rakesh

Please verify... -
Variable screen is taking too much time for some reports
Hi all,
We are using a sales billing cube for our billing reports. For one query, the variable screen takes a lot of time to appear. In RSRT I generated and executed the report, and it is fine there. Moreover, I am not using any customer exit variable for this.
Can anybody suggest what the exact problem is and how I can improve it further? -
Query is taking too much time for inserting into a temp table and for spooling
Hi,
I am working on a query optimization project where I have found a query that takes a very long time to execute.
Temp table is defined as follows:
DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)
INSERT INTO @CastSummary
SELECT
C.CastID,
SO.SalesOrderID,
PO.ProductionOrderID,
F.CalculatedWeight,
PO.ProductionOrderNo,
SO.SalesOrderNo,
SC.Name,
SO.OrderQty
FROM
CastCast C
JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
join Sales.ProductionDetail d on d.ProductionOrderID = PO.ProductionOrderID
LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
WHERE
(C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate)
It takes almost 33% for the Table Insert when I insert the data into the temp table, and then 67% for spooling. I changed the 2 LEFT JOINs in the above query to inner JOINs and tried again; execution became a bit faster, but it still needs improvement.
How can I improve it further? Will it be good enough if I create indexes on the columns of the temp table, or what if I use derived tables? Please suggest.
-Pep

How can I improve further? Will it be good enough if I create indexes on the columns of the temp table, or what if I use derived tables?
I suggest you start with index tuning. Specifically, make sure columns specified in the WHERE and JOIN columns are properly indexed (ideally clustered or covering, and unique when possible). Changing outer joins to inner joins is appropriate
if you don't need outer joins in the first place.
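That index-tuning advice can be seen in miniature with SQLite (used here only for illustration; the table and column names are hypothetical): the same range predicate flips from a full table scan to an index search once a matching index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cast_cast (cast_id INTEGER, created_date TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM cast_cast WHERE created_date >= '2014-01-01'"
before = plan(q)  # no index available: full scan
conn.execute("CREATE INDEX ix_created ON cast_cast (created_date)")
after = plan(q)   # range search via the new index

print(before)
print(after)
```

The same check in SQL Server is the actual vs. estimated execution plan: look for Table Scan / Clustered Index Scan operators turning into Index Seeks after indexing the WHERE and JOIN columns.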
Dan Guzman, SQL Server MVP, http://www.dbdelta.com
-
SAP GUI taking too much time to open transactions
Hi guys,
I have completed a system copy from the Production to the Quality server.
After starting SAP on the Quality server, it is taking too much time to open SAP transactions (they go into compilation mode).
I started SGEN, but it is giving time-out errors. Please help me with this issue.
MY hardware details on quality server
operating system : SuSE Linux 10 SP2
Database : 10.2.0.2
SAP : ECC 6.0 SR2
RAM size : 8 GB
Hard disk space : 500 GB
swap space : 16 GB.
regards
Ramesh

Hi,
> I started SGEN, but it is giving time-out errors. Please help me with this issue.
You are supposed to run SGEN as a batch job, so it should not be possible to get time-out errors.
I've seen Full SGEN lasting from 3 hours on high end systems up to 8 full days on PC hardware...
Regards,
Olivier
-
BPC application is taking too much time to load
Hi experts!
I'm facing a very weird problem...
We've developed a BPC application (app name: USM).
This application is taking too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
There are around 100,000 records in the database, most coming from material master data.
If I load this USM application on another computer, it loads smoothly. The computers' hardware is all the same, the server is generously sized, and everyone is on the same network.
I talked to the infrastructure department and we ran several tests. We ran BPC on the server (it loaded quickly) and on several computers (some load quickly, others don't), used both wireless and cable connections (same result either way), and checked the communication between BW and BPC, which is OK.
After all that, I tried to load the APSHEL application in the same environment and it loaded instantly. So I guess something is wrong with my application. But if that were the case, I suppose it would happen on all computers, not only some of them.
Have anybody ever seen something like this?
Thank you in advance.
Rubens
Edited by: Rubens Massayuki Kumori on May 12, 2011 8:43 PM
Edited by: Rubens Massayuki Kumori on May 12, 2011 8:46 PM

Hi Rubens,
I would try a couple of tests:
1. Install the client on a machine located in the same network segment, or use a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
2. Run a full optimize of one application, to see if the problem is related to the segmentation of the cubes (I don't think it is, but give it a try).
It is very weird that it happens on some computers and not on others... also try cleaning up the local application cache on the computers that are giving you bad performance, then retry.
Hope it helps,
-
Spatial query with sdo_aggregate_union taking too much time
Hello friends,
the following query is taking too much time to execute.
table1 contains around 2000 records.
table2 contains 124 rows
SELECT
table1.id
, table1.txt
, table1.id2
, table1.acti
, table1.acti
, table1.geom as geom
FROM
table1
WHERE
sdo_relate(
table1.geom,
(SELECT sdo_aggr_union(sdoaggrtype(geom, 0.0005))
 FROM table2),
'mask=ANYINTERACT querytype=WINDOW'
) = 'TRUE'
I am new to Spatial. I am trying to find the list of geometries that fall within the geometry stored in table2.
Thanks

Hi, thanks a lot for your reply.
But is it not required to use the sdo_aggr_union function to find out whether a geometry in one table lies inside a geometry in the other?
Let me give you a clearer picture...
What I am trying to do is: table1 contains the list of all stations (station information) of the state, and table2 contains the list of city areas. So I want to find the stations that belong to a city.
For this I thought to build the aggregated union of the city areas, and then check for any interaction between that aggregated result and each station geometry, to see whether the station is in a city or not.
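At its core, the check described here (does a station fall inside a city area?) is a point-in-polygon test, which is what mask=ANYINTERACT evaluates against the window geometry. A plain-Python toy sketch of that idea, with made-up coordinates and names rather than anything from the poster's actual tables:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon given as
    a list of (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge cross the horizontal ray to the right of the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

city = [(0, 0), (4, 0), (4, 4), (0, 4)]    # hypothetical city boundary
stations = {"S1": (2, 2), "S2": (5, 5)}    # hypothetical station points
in_city = {name: point_in_polygon(px, py, city)
           for name, (px, py) in stations.items()}
print(in_city)
```

In the database, of course, Oracle Spatial does this with its own indexes; the sketch is only meant to show why the aggregation step is optional: you can test each station against each city area directly instead of unioning all city areas first.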
I hope this helps you understand my query.
Thanks
I appreciate your efforts.
-
Filling aggregates is taking too much time
Hello,
Filling aggregates is taking too much time for around 17,000 records, and we have checked the cube consistency in RSRV; everything is fine.
How can I trace the problem?
THanks in advance.
Ravinder

hi ravinder,
can you tell me how many records there are in the cube at line-item level?
Regards
Amar. -
ACCTIT table Taking too much time
Hi,
In SE16, for table ACCTIT, I entered the G/L account number and executed it in production; it is taking too much time to show the result.
Thanks

Hi,
Here iam sending details of technical settings.
Name ACCTIT Transparent Table
Short text Compressed Data from FI/CO Document
Last Change SAP 10.02.2005
Status Active Saved
Data class APPL1 Transaction data, transparent tables
Size category 4 Data records expected: 24,000 to 89,000
Thanks
-
Creative Cloud is taking too much time to load and is not downloading the one month trial for Photoshop I just paid money for.
Stop the download if it's stalled, then restart it.
-
Full DTP taking too much time to load
Hi All ,
I am facing an issue where a DTP is taking too much time to load data from a DSO to a Cube via a process chain, and also when running it manually.
There are 6 similar DTPs which load data for different countries (different DSOs and Cubes as source and target respectively) for the last 7 days based on GI Date. All the DTPs pull almost the same number of records and finish within 25-30 minutes, but this one DTP takes around 3 hours. The problem started a couple of days back.
I have changed the parallel processes from 3 to 4 to 5 and the packet size from 50,000 to 10,000 to 100,000, but with no improvement. I also want to mention that all the source DSOs and target Cubes have the same structure. All the transformations have field routines and end routines.
Can you all please share some pointers which can help.
Thanks
Prateek

Hi Raman,
This is what I get when I check the report. Can this be causing issues, as 2 rows have a ratio >= 100%?
ETVC0006 /BIC/DETVC00069 rows: 1.484 ratio: 0 %
ETVC0006 /BIC/DETVC0006C rows: 15.059.600 ratio: 103 %
ETVC0006 /BIC/DETVC0006D rows: 242 ratio: 0 %
ETVC0006 /BIC/DETVC0006P rows: 66 ratio: 0 %
ETVC0006 /BIC/DETVC0006T rows: 156 ratio: 0 %
ETVC0006 /BIC/DETVC0006U rows: 2 ratio: 0 %
ETVC0006 /BIC/EETVC0006 rows: 14.680.700 ratio: 100 %
ETVC0006 /BIC/FETVC0006 rows: 0 ratio: 0 %
ETVC0007 rows: 13.939.200 density: 0,0 %
-
Auto Invoice Program taking too much time : problem with update sql
Hi ,
Oracle db version 11.2.0.3
Oracle EBS version : 12.1.3
Though we have a SEV-1 SR open with Oracle, we have not had much success.
We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. While troubleshooting we found one query that accounts for most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for the same. It is an UPDATE statement. Please guide.
Plan
UPDATE STATEMENT ALL_ROWSCost: 0 Bytes: 124 Cardinality: 1
50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
27 FILTER
26 HASH JOIN Cost: 8,937,633 Bytes: 4,261,258,760 Cardinality: 34,364,990
24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413 Bytes: 446,744,870 Cardinality: 34,364,990
23 SORT UNIQUE Cost: 8,618,413 Bytes: 4,042,339,978 Cardinality: 34,364,990
22 UNION-ALL
9 FILTER
8 SORT GROUP BY Cost: 5,643,052 Bytes: 3,164,892,625 Cardinality: 25,319,141
7 HASH JOIN Cost: 1,640,602 Bytes: 32,460,436,875 Cardinality: 259,683,495
1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
6 HASH JOIN Cost: 853,567 Bytes: 22,544,143,440 Cardinality: 214,706,128
4 HASH JOIN Cost: 536,708 Bytes: 2,357,000,550 Cardinality: 29,835,450
2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008 Bytes: 1,163,582,550 Cardinality: 29,835,450
3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314 Bytes: 1,193,526,000 Cardinality: 29,838,150
5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951 Bytes: 3,123,197,116 Cardinality: 120,122,966
21 FILTER
20 SORT GROUP BY Cost: 2,975,360 Bytes: 877,447,353 Cardinality: 9,045,849
19 HASH JOIN Cost: 998,323 Bytes: 17,548,946,769 Cardinality: 180,916,977
13 VIEW VIEW AR.index$_join$_027 Cost: 108,438 Bytes: 867,771,256 Cardinality: 78,888,296
12 HASH JOIN
10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206 Bytes: 867,771,256 Cardinality: 78,888,296
11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322 Bytes: 867,771,256 Cardinality: 78,888,296
18 HASH JOIN Cost: 748,497 Bytes: 3,281,713,302 Cardinality: 38,159,457
14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
17 HASH JOIN Cost: 519,713 Bytes: 1,969,317,900 Cardinality: 29,838,150
15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822 Bytes: 716,115,600 Cardinality: 29,838,150
16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847 Bytes: 1,253,202,300 Cardinality: 29,838,150
25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552 Bytes: 5,158,998,615 Cardinality: 46,477,465
41 SORT GROUP BY Bytes: 75 Cardinality: 1
40 FILTER
39 MERGE JOIN CARTESIAN Cost: 11 Bytes: 75 Cardinality: 1
35 NESTED LOOPS Cost: 8 Bytes: 50 Cardinality: 1
32 NESTED LOOPS Cost: 5 Bytes: 30 Cardinality: 1
29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3 Bytes: 22 Cardinality: 1
28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2 Cardinality: 1
31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2 Bytes: 133,114,520 Cardinality: 16,639,315
30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 20 Cardinality: 1
33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2 Cardinality: 1
38 BUFFER SORT Cost: 9 Bytes: 25 Cardinality: 1
37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 25 Cardinality: 1
36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
49 SORT GROUP BY Bytes: 48 Cardinality: 1
48 FILTER
47 NESTED LOOPS
45 NESTED LOOPS Cost: 7 Bytes: 48 Cardinality: 1
43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4 Bytes: 20 Cardinality: 1
42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3 Cardinality: 1
44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 28 Cardinality: 1
As per Oracle, they suggested multiple patches, but that has not been helpful. Please suggest how I can tune this query. I don't have much of a clue about query tuning.
Regards

Hi Paul, my bad. I am sorry I missed it.
Query as below :
UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , 
LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, 
RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
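One readability note on the seeded SQL above: the repeated NVL(LGD.GL_DATE, TO_DATE(2415021, 'J')) expressions are a NULL-safe equality idiom. Both sides of the comparison map NULL to the same fixed sentinel date, so rows whose GL dates are both NULL still join to each other. A minimal Python sketch of the same idea (the sentinel object here is just an illustrative stand-in, not anything from the query):

```python
# Stand-in for the fixed sentinel date TO_DATE(2415021, 'J').
SENTINEL = object()

def null_safe_eq(a, b):
    """Compare like NVL(a, sentinel) = NVL(b, sentinel) in the SQL above:
    two NULLs (Nones) are treated as equal, NULL vs. a value is not."""
    return (a if a is not None else SENTINEL) == (b if b is not None else SENTINEL)
```

A plain SQL equality (LGD.GL_DATE = LGD2.GL_DATE) would silently drop the NULL-date rows, which is why the seeded code wraps every date in NVL; the cost is that the wrapped column can no longer use a plain index on GL_DATE.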
I understand that this could be a seeded query, but my attempt is to tune it.
Regards