BAM reports taking more time while fetching data from EDS
I have created an External Data Source (EDS) in BAM, and when I create an object with that EDS, it takes very long (more than 20 minutes) to fetch data from the EDS. But the database is installed on my local system, and when I fire the same query there directly, it does not take much time. Please help me with this: what could be the possible reason?
Your messaging gateway question would be better addressed in the SQL and PL/SQL forum.
Similar Messages
-
Oracle MGW taking more time while fetching data from MQ
Dear All,
Good morning.
I am using Oracle 11.2.0.3 and Messaging Gateway (MGW) to connect to IBM WebSphere MQ, Version 7.5.0.0.
We are intermittently experiencing delays while retrieving messages from MQ; it does not happen every time.
MQ2AQ
Found MQ MSG
>>2014-12-22 09:25:49 MGW Engine 1 TRACE Polling
Polling thread: JOB_MQ2AQ new msgCount = 1
2014-12-22 09:25:49
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:49 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:49
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:50 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:50
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:51 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:51
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:52 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:52
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:53 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:53
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:54 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:54
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:55 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:55
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:56 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:56
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:57 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:57
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:58 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:58
MQ2AQ
MQ MSG DEQ Start
>>2014-12-22 09:25:59 MGW Engine 1 TRACE worker0
entering deqMessages for job JOB_MQ2AQ
2014-12-22 09:25:59
MQ2AQ
MQ MSG DEQ Retry
>>2014-12-22 09:25:59 MGW Engine 1 TRACE Polling
Polling thread: skip polling JOB_MQ2AQ msg count=1
2014-12-22 09:25:59
MQ2AQ
MQ MSG DEQ Finsh
>>2014-12-22 09:25:59 MGW Engine 1 TRACE worker0
leaving deqMessages for job JOB_MQ2AQ with seqno 1544 deqMsgCount = 1
2014-12-22 09:25:59
MQ2AQ
MQ MSG ENQ to AQ
>>2014-12-22 09:25:59 MGW Engine 1 TRACE worker0
entering enqMessages for job JOB_MQ2AQ with request 1544
2014-12-22 09:25:59
MQ2AQ
ENQ MSG 2 AQ Finish
>>2014-12-22 09:26:02 MGW Engine 1 TRACE worker0
leaving enqMessages for job JOB_MQ2AQ with seqno 1544
2014-12-22 09:26:02
MQ2AQ
MSG REC
>>2014-12-22 09:26:02 MGW Engine 1 TRACE worker0
leaving commitMessages for job JOB_MQ2AQ with seqno 1544
2014-12-22 09:26:02
The log above took 11 seconds to retrieve the message from MQ,
but the log below took 1 second, and I used the same message both times.
MQ2AQ
Found MQ MSG
>>2014-12-22 09:35:01 MGW Engine 1 TRACE Polling
Polling thread: JOB_MQ2AQ new msgCount = 1
2014-12-22 09:35:01
MQ2AQ
MQ MSG DEQ Start
>>2014-12-22 09:35:01 MGW Engine 1 TRACE worker0
entering deqMessages for job JOB_MQ2AQ
2014-12-22 09:35:01
MQ2AQ
MQ MSG DEQ Finsh
>>2014-12-22 09:35:01 MGW Engine 1 TRACE worker0
leaving deqMessages for job JOB_MQ2AQ with seqno 1545 deqMsgCount = 1
2014-12-22 09:35:01
MQ2AQ
MQ MSG ENQ to AQ
>>2014-12-22 09:35:01 MGW Engine 1 TRACE worker0
entering enqMessages for job JOB_MQ2AQ with request 1545
2014-12-22 09:35:01
MQ2AQ
ENQ MSG 2 AQ Finish
>>2014-12-22 09:35:01 MGW Engine 1 TRACE worker0
leaving enqMessages for job JOB_MQ2AQ with seqno 1545
2014-12-22 09:35:01
MQ2AQ
MSG REC
>>2014-12-22 09:35:01 MGW Engine 1 TRACE worker0
leaving commitMessages for job JOB_MQ2AQ with seqno 1545
2014-12-22 09:35:01
It shows "Polling thread: skip polling JOB_MQ2AQ msg count=1" multiple times. What does that mean?
Can anyone share a good document I can refer to for debugging Oracle Messaging Gateway issues?
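For what it's worth, the "skip polling" line appears to mean the polling thread has already counted a pending message and is waiting for a worker to dequeue it before polling the job again. A toy simulation of that reading (plain Python, not actual MGW code) reproduces the shape of both logs, where a busy worker yields a long run of skip lines:

```python
# Toy simulation (not MGW's actual code) of the trace pattern above:
# the polling thread counts a new message once, then emits a
# "skip polling ... msg count=1" line on every subsequent polling
# interval until the worker gets around to dequeuing it. A slow or busy
# worker therefore produces a long run of skip lines, as in the first
# log, while an idle worker produces none, as in the second log.

def simulate_polling(worker_busy_ticks):
    """One tick = one polling interval; the worker needs
    `worker_busy_ticks` ticks before it can service the message."""
    trace = ["Polling thread: JOB_MQ2AQ new msgCount = 1"]
    for _ in range(worker_busy_ticks):
        # Message already counted but not yet dequeued: skip re-polling.
        trace.append("Polling thread: skip polling JOB_MQ2AQ msg count=1")
    trace.append("entering deqMessages for job JOB_MQ2AQ")
    trace.append("leaving deqMessages for job JOB_MQ2AQ deqMsgCount = 1")
    return trace
```

With `worker_busy_ticks=10` the output mirrors the slow log; with `0`, the fast one.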
Thanks,
Shine.
-
Hi,
we are using SQL Server 2008 R2 x64, RTM version.
One of the SSRS reports designed by a developer takes much more time (5 to 6 minutes) to fetch data from the database. Even running the stored procedure called by the report directly takes less than a second to display the result set.
Please help.
Regards, Naveen, MSSQL DBA
Hi Naveen,
Based on my understanding, you spend little time retrieving data with the stored procedure from the database in the dataset designer; however, it takes a long time for the report to display the data, right?
In Reporting Services, the total time to generate a report includes TimeDataRetrieval, TimeProcessing, and TimeRendering. In your scenario, since you mentioned that retrieving data takes little time, you should check the
ExecutionLog3 view in the ReportServer database to find which section costs the most time, TimeProcessing or TimeRendering. Then you can refer to this article to optimize your report:
Troubleshooting Reports: Report Performance.
Besides, if parameters exist in the report, you should declare variables inside of the stored procedure and assign the incoming parameters to the variables. For more information, please refer to the similar thread:
Fast query runs slow in SSRS.
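The triage Qiuyun describes, comparing the three ExecutionLog3 timing columns and tuning the largest, can be sketched as a tiny helper. The column names are the real ExecutionLog3 ones, but the function itself is only an illustration:

```python
# Illustrative helper: given one ExecutionLog3 row's timing columns
# (all in milliseconds), report which phase dominates report generation
# and what share of the total it accounts for.

def dominant_phase(time_data_retrieval, time_processing, time_rendering):
    phases = {
        "TimeDataRetrieval": time_data_retrieval,
        "TimeProcessing": time_processing,
        "TimeRendering": time_rendering,
    }
    # The phase with the largest share of total time is the tuning target.
    name = max(phases, key=phases.get)
    total = sum(phases.values())
    share = phases[name] / total if total else 0.0
    return name, round(share, 2)
```

For example, a 5-minute report whose data retrieval is fast would show rendering or processing dominating, which points away from the stored procedure and toward report layout or output format.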
If you have any question, please feel free to ask.
Best regards,
Qiuyun Yu
Qiuyun Yu
TechNet Community Support -
Taking more time for retrieving data from nested table
Hi
we have two databases, db1 and db2; in db2 we have a number of nested tables.
Now the problem: we have a database link between the two databases, and whenever you fire any query in db1, it internally accesses the nested tables in db2.
Fetching records takes much more time even though there are few records in the tables. What could be the reason?
Please help; we face this problem daily.
Please avoid duplicate threads:
queries taking more time
Nicolas.
+< mod. action : thread locked>+ -
Taking huge time to fetch data from CDHDR
Hi Experts,
Counting the entries in the CDHDR table takes a huge amount of time and throws a TIME_OUT dump.
I believe more than a million entries exist in this table. Is there any alternate way to find out the number of entries?
We are finding the data from CDHDR by following conditions.
Objclass - classify.
Udate - 'X' date
Utime - 'X' (even for a selection of 1 minute)
We also tried to index the UDATE field, but that takes a huge amount of time too (more than 6 hours, and it did not complete).
Can you suggest any alternate way to find the number of entries?
Regards,
VS
Hello,
in SE16, on the initial selection screen, you can run your selection criteria as a background job and create a spool request:
SE16 > contents, enter the selection criteria, and then Program > Execute in Background.
Best regards,
Peter -
SSRS report taking more time but fast in SSMS
SSRS report taking more time but fast in SSMS. We are binding strings of more than 43,000 characters each, and the total number of records is 4,000.
Please do the needful. It is not possible to create an index because the key crosses 900 bytes.
Hi DBA5,
As per my understanding, when previewing the report, there are three options to affect the report performance. They are data retrieval time, processing time, rendering time.
Data retrieval is the time taken to retrieve the data from the server using queries or stored procedures; processing is the time taken to process the data within the report using the operations in the report layout; and rendering is the time taken
to display the report's information in its output format, such as Excel, PDF, or HTML. That is the reason why the report takes more time than in SSMS.
To improve the performance, we need to improve these three aspects. For the details, please see this blog: http://blogs.msdn.com/b/mariae/archive/2009/04/16/quick-tips-for-monitoring-and-improving-performance-in-reporting-services.aspx
Hope this helps.
Regards,
Heidi Duan
Heidi Duan
TechNet Community Support -
How to use the FOR ALL ENTRIES clause while fetching data from archived tables
How do I use the FOR ALL ENTRIES clause while fetching data from archived tables using the FM
/PBS/SELECT_INTO_TABLE?
I need to fetch data from an archived table for all the entries in an internal table.
Kindly provide some inputs on this.
Thanks and Regards,
Ramesh
Hi Ramesh,
I have a query regarding accessing archived data through PBS.
I have archived SAP FI data (object FI_DOCUMNT) using the SAP standard process through transaction SARA.
Now please tell me: can I access this archived data through the PBS add-on FM '/PBS/SELECT_INTO_TABLE'?
Do I need to do something else to access data archived through the SAP standard process or not? If yes, then please tell me, as I am not able to get the data using the above FM.
The call to the above FM is as follows:
CALL FUNCTION '/PBS/SELECT_INTO_TABLE'
  EXPORTING
    archiv     = 'CFI'
    option     = ''
    tabname    = 'BKPF'
    schl1_name = 'BELNR'
    schl1_von  = belnr-low
    schl1_bis  = belnr-low
    schl2_name = 'GJAHR'
    schl2_von  = gjahr-low
    schl2_bis  = gjahr-low
    schl3_name = 'BUKRS'
    schl3_von  = bukrs-low
    schl3_bis  = bukrs-low
    clr_itab   = 'X'
  TABLES
    i_tabelle  = t_bkpf
  EXCEPTIONS
    eof        = 1
    OTHERS     = 2.
It gives me the following error:
Index for table not supported ! BKPF BELNR.
Please help ASAP.
Thanks and Regards
Gurpreet Singh -
How can we improve the performance while fetching data from RESB table.
Hi All,
Can anybody suggest the right way to improve performance while fetching data from the RESB table? Below is the SELECT statement.
SELECT aufnr posnr roms1 roanz
INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
FROM resb
WHERE kdauf = p_vbeln
AND ablad = itab-sposnr+2.
Here I am using 'KDAUF' and 'ABLAD' in the condition. Can we use a secondary index to improve the performance in this case?
Regards,
Himanshu
Hi,
Declare an internal table with only those four fields, and try the below code:
SELECT aufnr posnr roms1 roanz
  INTO TABLE t_resb
  FROM resb
  FOR ALL ENTRIES IN itab
  WHERE kdauf = p_vbeln
    AND ablad = itab-sposnr+2.
Yes, you can also use a secondary index to improve the performance in this case.
Regards,
Anand .
Reward if it is useful.... -
Eliminate duplicate while fetching data from source
Hi All,
CUSTOMER TRANSACTION
CUST_LOC  CUT_ID  TRANSACTION_DATE  TRANSACTION_TYPE
100       12345   01-jan-2009       CREDIT
100       23456   15-jan-2000       CREDIT
100       12345   01-jan-2010       DEBIT
100       12345   01-jan-2000       DEBIT
Now, as per my requirement, I need to fetch data from the CUSTOMER_TRANSACTION table for those customers who have had a transaction in the last 10 years. In my data above, customer 12345 has a transaction in the last 10 years, whereas customer 23456 does not, so we will eliminate it.
Now, the CUSTOMER_TRANSACTION table has approximately 100 million records, so we are fetching data in batches. Batching is divided by month, 120 months in total. Below is my query:
select *
FROM CUSTOMER_TRANSACTION CT left outer join
(select distinct CUST_LOC, CUT_ID FROM CUSTOMER_TRANSACTION WHERE TRANSACTION_DATE >= ADD_MONTHS(SYSDATE, -120) and TRANSACTION_DATE < ADD_MONTHS(SYSDATE, -119)) CUST
on CT.CUST_LOC = CUST.CUST_LOC and CT.CUT_ID = CUST.CUT_ID
Through a shell script, the month numbers will change: -120:-119, -119:-118, ..., -1:-0.
Now the problem is duplication of records.
While fetching data for jan-2009, it will get cust_id 12345 and will fetch all 3 of its records and load them into the target.
While fetching data for jan-2010, it will again get cust_id 12345 and fetch all 3 records and load them into the target.
So instead of having only 3 records for customer 12345, the target will have 6. Can someone help me with how I can keep duplicate records from getting in?
As of now I have 2 ways in mind:
1. Fetch all records at once, which is impossible as it will cause a space issue.
2. After each batch, run a procedure that deletes duplicate records based on cust_loc, cut_id, and transaction_date. But again, that has a performance problem.
I want to eliminate the duplicates while fetching data from the source.
Edited by: ace_friends22 on Apr 6, 2011 10:16 AM
You can do it this way:
SELECT DISTINCT cust_loc,
       cut_id
FROM customer_transaction
WHERE transaction_date >= ADD_MONTHS(SYSDATE, -120)
AND transaction_date < ADD_MONTHS(SYSDATE, -119)
However, please note that if you want to get the transactions in a specific month, like jan-2009 and jan-2010 as you said earlier, you might need to use TRUNC.
Your date comparison could be like this. In this example I am checking whether the transaction date is in the month of jan-2009:
AND transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27) AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27))
Your modified SQL:
SELECT *
FROM customer_transaction
WHERE transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27) AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27))
Testing:
--Sample Data
CREATE TABLE customer_transaction (
cust_loc number,
cut_id number,
transaction_date date,
transaction_type varchar2(20)
);
INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2009','dd-MON-yyyy'),'CREDIT');
INSERT INTO customer_transaction VALUES (100,23456,TO_DATE('15-JAN-2000','dd-MON-yyyy'),'CREDIT');
INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2010','dd-MON-yyyy'),'DEBIT');
INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2000','dd-MON-yyyy'),'DEBIT');
--To have three records in the month of jan-2009
UPDATE customer_transaction
SET transaction_date = TO_DATE('02-JAN-2009','dd-MON-yyyy')
WHERE cut_id = 12345
AND transaction_date = TO_DATE('01-JAN-2010','dd-MON-yyyy');
UPDATE customer_transaction
SET transaction_date = TO_DATE('03-JAN-2009','dd-MON-yyyy')
WHERE cut_id = 12345
AND transaction_date = TO_DATE('01-JAN-2000','dd-MON-yyyy');
commit;
--End of sample data
SELECT *
FROM customer_transaction
WHERE transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27) AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27));
Results:
CUST_LOC  CUT_ID  TRANSACTI  TRANSACTION_TYPE
100       12345   01-JAN-09  CREDIT
100       12345   02-JAN-09  DEBIT
100       12345   03-JAN-09  DEBIT
As you can see, there are only 3 records for 12345.
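Rakesh's point is that giving each monthly batch hard month boundaries makes every transaction row land in exactly one batch, which is what eliminates the duplicates. A runnable sketch of the same idea in SQLite (Oracle's TRUNC/ADD_MONTHS emulated with plain month arithmetic; table and column names follow the thread, the data is made up):

```python
import sqlite3
from datetime import date

# Sketch of the monthly-batch idea: each batch covers exactly one calendar
# month, so every transaction row falls into exactly one batch and no
# customer row is loaded into the target twice.

def month_window(today, months_back):
    """First day of the month `months_back` months before `today`, and the
    first day of the following month (exclusive upper bound)."""
    total = today.year * 12 + (today.month - 1) - months_back
    start = date(total // 12, total % 12 + 1, 1)
    end = date((total + 1) // 12, (total + 1) % 12 + 1, 1)
    return start.isoformat(), end.isoformat()

def fetch_batch(conn, today, months_back):
    lo, hi = month_window(today, months_back)
    return conn.execute(
        "SELECT cust_loc, cut_id, transaction_date, transaction_type "
        "FROM customer_transaction "
        "WHERE transaction_date >= ? AND transaction_date < ?",
        (lo, hi),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer_transaction (
    cust_loc INTEGER, cut_id INTEGER,
    transaction_date TEXT, transaction_type TEXT)""")
conn.executemany("INSERT INTO customer_transaction VALUES (?,?,?,?)", [
    (100, 12345, "2009-01-01", "CREDIT"),
    (100, 12345, "2009-01-02", "DEBIT"),
    (100, 12345, "2009-01-03", "DEBIT"),
    (100, 23456, "2000-01-15", "CREDIT"),   # older than 10 years
])

# 120 monthly batches, as in the original shell-script loop; each row is
# fetched by exactly one batch, so the target gets no duplicates.
today = date(2011, 4, 6)
loaded = []
for m in range(120):
    loaded.extend(fetch_batch(conn, today, m))
```

Customer 12345's three jan-2009 rows arrive once, from the single batch covering that month, and 23456's transaction falls outside all 120 windows.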
Regards,
Rakesh
Edited by: Rakesh on Apr 6, 2011 11:48 AM -
Fatal error while fetching data from BI
hi,
I am getting the following error while fetching data from BI using a SELECT statement.
I have written the code this way:
SELECT [Measures].[D2GFTNHIOMI7KWV99SD7GPLTU] ON COLUMNS, NON EMPTY { [DEM_STATE].MEMBERS} ON ROWS FROM DEM_CUBE/TEST_F_8
error description when i click on test
Fatal Error
com.lighthammer.webservice.SoapException: The XML for Analysis provider encountered an error
Thanks for answering. But when I tried writing the statement in transaction MDXTEST and clicked on Check, I got the following error:
Error occurred when starting the parser: timeout during allocate / CPIC-CALL: 'ThSAPCMRCV'
Message no. BRAINOLAPAPI011
Diagnosis
Failed to start the MDX parser.
System Response
timeout during allocate / CPIC-CALL: 'ThSAPCMRCV'
Procedure
Check the Sys Log in Transaction SM21 and test the TCP-IP connection MDX_PARSER in Transaction SM59.
So I went into SM59 to check the connection.
Can you tell me what configuration I need to do to make SELECT statements work?
Custom Report taking more time to complete Normat
Hi All,
A custom report (Aging Report) in Oracle is taking more time than normal to complete.
In one instance the report takes 5 minutes, while in the other instance it takes 40-50 minutes to complete.
We have enabled the trace and checked the trace file, but all the queries are working fine.
Could you please suggest me regarding this issue.
Thanks in advance!!
TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
ORA-00922: missing or invalid option
parse error offset: 1049
EXPLAIN PLAN option disabled.
SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
FROM
HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.05 11 22 0 3
total 3 0.00 0.05 11 22 0 3
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 11 0.01 0.05
asynch descriptor resize 2 0.00 0.00
INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
VALUES
( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
:B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 20 0.00 0.03 4 1 304 20
Fetch 0 0.00 0.00 0 0 0 0
total 21 0.00 0.03 4 1 304 20
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.01 0.03
SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
SEVERITY
FROM
FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
A.APPLICATION_ID
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.03 4 12 0 2
total 5 0.00 0.03 4 12 0 2
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.00 0.03
DELETE FROM MO_GLOB_ORG_ACCESS_TMP
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.02 3 4 4 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 3 4 4 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 3 0.01 0.02
SET TRANSACTION READ ONLY
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.01 0 0 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
begin Fnd_Concurrent.Init_Request; end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 148 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 148 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
log file sync 1 0.01 0.01
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
:DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 9 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 9 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 8 0.00 0.00
SQL*Net message from client 8 0.00 0.00
begin fnd_global.bless_next_init('FND_PERMIT_0000');
fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
:security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
:conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
:form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
:server_id); fnd_profile.put('ORG_ID', :org_id);
fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 8 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 8 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
FPI.SIZING_FACTOR
FROM
FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 7 0 1
total 4 0.00 0.00 0 7 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
USERENV('LANGUAGE'), '.') + 1)
FROM
V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
USERENV('SESSIONID')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
asynch descriptor resize 2 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.01 0.02 0 165 0 4
Fetch 1 0.00 0.00 0 0 0 1
total 13 0.01 0.02 0 165 0 5
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 37 0.00 0.00
SQL*Net message from client 37 1.21 2.19
log file sync 1 0.01 0.01
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 49 0.00 0.00 0 0 0 0
Execute 89 0.01 0.07 7 38 336 24
Fetch 29 0.00 0.09 15 168 0 27
total 167 0.02 0.16 22 206 336 51
Misses in library cache during parse: 3
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
asynch descriptor resize 6 0.00 0.00
db file sequential read 22 0.01 0.14
48 user SQL statements in session.
1 internal SQL statements in session.
49 SQL statements in session.
0 statements EXPLAINed in this session.
Trace file compatibility: 10.01.00
Sort options: prsela exeela fchela
1 session in tracefile.
48 user SQL statements in trace file.
1 internal SQL statements in trace file.
49 SQL statements in trace file.
48 unique SQL statements in trace file.
928 lines in trace file.
1338833753 elapsed seconds in trace file. -
Taking More Time while inserting into the table (With foriegn key)
Hi All,
I am facing a problem while inserting values into the master table.
The problem,
Table A -- User Master Table (Reg No, Name, etc)
Table B -- Transaction Table (Foreign key reference with Table A).
While inserting data into Table B, I also need to insert the reg no into Table B, which is mandatory. I followed the logic mentioned in the SRDemo.
Before inserting, we need to query Table A first to have the values in TableABean.java:
final TableA tableA = (TableA) uow.executeQuery("findUser", TableA.class, regNo);
Then we need to create an instance of TableB:
TableB tableB = (TableB) uow.newInstance(TableB.class);
tableB.setID(bean.getID());
tableA.addTableB(tableB); --- this is to insert the regNo of TableA into TableB. This line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
This query takes too much time if there are many rows in TableB for that particular registration number. Because of this, it takes more time to insert into TableB.
For example: TableA regNo 101, with few entries in TableB, means inserting a record takes less than 1 second;
regNo 102, with more entries in TableB, means inserting a record takes more than 2 seconds.
There is a time delay for different users when they enter transactions into TableB.
I need to avoid this, since in the future it will take even more time, from 2 seconds to 10 seconds, if the volume of data increases.
Please help me resolve this issue; I am facing it now in production.
Thanks & Regards
VB
Hello,
Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing you delays that you want to avoid there might be two quick ways I can see:
1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for relationship when you do need it. This means you do not need to call tableA.addTableB(tableB), and instead only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. Might not be the best option, but it depends on your application's usage. It does allow you to potentially page the TableB results or add other query query performance options when you do need the data though.
2) You are currently using lazy loading for the TableA->TableB relationship. If it is untriggered, don't bother calling tableA.addTableB(tableB); instead, only call tableB.setTableA(tableA). This of course requires using TopLink API to a) verify the collection is an IndirectCollection type, and b) check that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments, though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date; this might require refreshing the TableA if so.
Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
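Options 1 and 2 share one idea: set only the child's back-pointer and leave the parent's lazy 1:M collection untriggered, so the expensive per-RegNo query never runs. That idea is ORM-agnostic; a plain-Python sketch with hypothetical names (not TopLink API):

```python
# Plain-Python sketch (not TopLink): a lazy 1:M collection that runs its
# query only when iterated. Setting just the child's back-pointer
# (table_b.table_a = table_a) never touches the collection, so the
# "select * from TableB where RegNo = ..." style query never fires.

class LazyCollection:
    def __init__(self, loader):
        self._loader = loader          # the (expensive) 1:M query
        self.instantiated = False
        self._items = None

    def __iter__(self):
        if not self.instantiated:      # triggering point
            self._items = self._loader()
            self.instantiated = True
        return iter(self._items)

class TableA:
    def __init__(self, reg_no, loader):
        self.reg_no = reg_no
        self.table_bs = LazyCollection(loader)

class TableB:
    def __init__(self, id_):
        self.id = id_
        self.table_a = None

queries_run = []

def load_children():                   # stands in for the 1:M query
    queries_run.append("SELECT * FROM TableB WHERE RegNo = 101")
    return []

a = TableA(101, load_children)
b = TableB(1)
b.table_a = a                          # back-pointer only: no query fired
```

Iterating `a.table_bs` is what finally triggers the query, which is why avoiding `addTableB` on an untriggered collection sidesteps the delay.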
Best Regards,
Chris -
RDF report taking more time to complete
Hi All,
Our custom RDF report is taking more time to complete.
Actually, in the early hours of the login day, when I submit the request it completes in 5-6 minutes, but when I resubmit the request it just keeps running.
Please do help with this.
Please post the details of the application release, database version, and OS.
Do you have the statistics collected up to date?
Please enable trace and generate the TKPROF file -- https://forums.oracle.com/forums/search.jspa?threadID=&q=Concurrent+AND+Request+AND+Slow&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
Thanks,
Hussein -
Select query taking too much time to fetch data from pool table a005
Dear all,
I am using 2 pool tables, A005 and A006, in my program. I am using a SELECT query to fetch data from these tables; an example is mentioned below.
select * from a005 into table t_a005 for all entries in it_itab
where vkorg in s_vkorg
and matnr in s_matnr
and aplp in s_aplp
and kmunh = it_itab-kmunh.
Here I can't create an index either, as the tables are pool tables. If there are any solutions, please help me with this.
Thanks.
It would be helpful to know what other fields are in the internal table you are using for the FOR ALL ENTRIES.
In general, you should code the fields in your SELECT in the same order as they appear in the database key. If you do not supply the top key field, then the entire table is read; if the table is large, that is going to take a lot of time. The more key fields from the beginning of the key you can supply, the faster the retrieval.
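The leading-key-field rule here isn't ABAP-specific; it's how any B-tree index works. Pool tables such as A005 can't take custom secondary indexes, so the illustration below uses SQLite with column names borrowed from the query above, purely as a generic demonstration:

```python
import sqlite3

# Generic B-tree illustration of the advice above: an index on
# (vkorg, matnr) is usable when the leading column vkorg is supplied,
# but a filter on matnr alone forces a full-table scan.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cond (vkorg TEXT, matnr TEXT, kmunh TEXT)")
conn.execute("CREATE INDEX idx_cond ON cond (vkorg, matnr)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite will use the index;
    # the human-readable detail is the fourth column of each row.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

with_leading = plan(
    "SELECT * FROM cond WHERE vkorg = '1000' AND matnr = 'M1'")
without_leading = plan(
    "SELECT * FROM cond WHERE matnr = 'M1'")
```

With the leading column present the plan reports a search using `idx_cond`; without it, a scan of the whole table.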
Regards,
Brent -
SELECT is taking a lot of time to fetch data from cluster table BSET
<Modified the subject line>
Hi experts,
I want to fetch data for some fields from the BSET table, but it takes a lot of time as the table is a cluster table.
Can you please suggest any other process to fetch data from a cluster table? I am using native SQL to fetch the data.
Regards,
SURYA
Edited by: Suhas Saha on Jun 29, 2011 1:51 PM
Hi Subhas,
As per your suggestion, I am now using a normal SQL statement to select data from BSET, but it is still taking much time.
My SQL statement is :
SELECT BELNR
GJAHR
BUZEI
MWSKZ
HWBAS
KSCHL
KNUMH FROM BSET INTO CORRESPONDING FIELDS OF TABLE IT_BSET
FOR ALL ENTRIES IN IT_BKPF
WHERE BELNR = IT_BKPF-BELNR
AND BUKRS = IT_BKPF-BUKRS.
<Added code tags>
Can you suggest anything more?
Regards,
SURYA
Edited by: Suhas Saha on Jun 29, 2011 4:16 PM
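One reason a FOR ALL ENTRIES select like the one above can stay slow is that the database interface rewrites it into chunks of OR-ed predicates, one query per chunk, with duplicate driver rows and duplicate results removed. A generic sketch of that rewriting (SQLite with made-up data, not Open SQL internals):

```python
import sqlite3

# Generic sketch of how FOR ALL ENTRIES behaves (SQLite, not Open SQL):
# driver entries are deduplicated, split into chunks, and one query with
# an OR-ed predicate per entry runs for each chunk; results are combined
# with duplicates removed, as FOR ALL ENTRIES does implicitly.

def select_for_all_entries(conn, driver_keys, chunk_size=2):
    keys = sorted(set(driver_keys))    # duplicate driver rows are dropped
    results = set()                    # the result is implicitly DISTINCT
    for i in range(0, len(keys), chunk_size):
        chunk = keys[i:i + chunk_size]
        predicate = " OR ".join("(belnr = ? AND bukrs = ?)" for _ in chunk)
        sql = "SELECT belnr, gjahr, buzei FROM bset WHERE " + predicate
        params = [v for pair in chunk for v in pair]
        results.update(conn.execute(sql, params).fetchall())
    return sorted(results)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bset (belnr TEXT, bukrs TEXT, gjahr TEXT, buzei INTEGER)")
conn.executemany("INSERT INTO bset VALUES (?,?,?,?)", [
    ("100001", "1000", "2011", 1),
    ("100002", "1000", "2011", 1),
    ("100003", "2000", "2011", 2),
])
# Driver keys with a duplicate entry, as IT_BKPF might contain:
rows = select_for_all_entries(
    conn, [("100001", "1000"), ("100002", "1000"), ("100001", "1000")])
```

This is also why a large, duplicate-heavy driver table multiplies the number of round trips; deduplicating it before the select keeps the chunk count down.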