ExecuteQuery method of view object taking much time to execute
Hi All,
I am using a view object and execute its query using the executeQuery method in the VOImpl Java class.
The problem is that it takes a long time to return results after the parameters are set. The same query takes 4 seconds in TOAD, but about 5 minutes through executeQuery.
It is an urgent issue. Please help me. Thanks.
Regards, Soorya
Hi Kali,
Thanks for your prompt response.
Yes, it has bind parameters. I have printed timestamps before and after the executeQuery call:
++VOImpl Code snippet++
setWhereClauseParams(params);
System.out.println("before executing query:Time:"+System.currentTimeMillis());
executeQuery();
System.out.println( "after executing query:Time:"+System.currentTimeMillis());
+++
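One thing worth checking, as a hedged suggestion: in ADF, executeQuery() may return after opening the cursor, and much of the wall-clock time can be spent fetching rows afterwards. A minimal, framework-free sketch of timing the two phases separately (the lambdas are stand-ins for the real vo.executeQuery() and vo.next() loop):

```java
// Times an arbitrary phase of work; the Runnables below stand in for
// the ADF calls vo.executeQuery() and the subsequent row-fetch loop.
public class PhaseTimer {
    public static long timeMillis(Runnable phase) {
        long start = System.currentTimeMillis();
        phase.run();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        long execMs  = timeMillis(() -> { /* vo.executeQuery(); */ });
        long fetchMs = timeMillis(() -> { /* while (vo.hasNext()) vo.next(); */ });
        System.out.println("execute=" + execMs + " ms, fetch=" + fetchMs + " ms");
    }
}
```

If most of the time turns out to be in the fetch phase rather than the execute phase, the VO's fetch-size tuning settings are often worth a look.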
I have removed some conditions from the query as they are business-confidential. Please find the JDeveloper log below.
++++++++
before executing query:Time:1322071711046
[724] Column count: 41
[725] ViewObject close prepared statements...
[726] ViewObject : Created new QUERY statement
[727] ViewObject: VO1
[728] UserDefined Query: SELECT DISTINCT
ai.invoice_num invoice_num
FROM ap_invoices_all ai
, ap_checks_all ac
WHERE ...
ai.org_id = :p_orgid
AND ac.id = :p_id
[729] Binding param 1: ,468,
[730] Binding param 2: 247
[731] The resource pool monitor thread invoked cleanup on the application module pool, AM, at 2011-11-23 23:41:32.781
after executing query:Time:1322072052875
+++++++
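For reference, the difference between the two timestamps in the log can be checked directly; a quick sketch:

```java
// Compute the elapsed time between the two log timestamps above.
public class Elapsed {
    public static long elapsedMs(long before, long after) {
        return after - before;
    }

    public static void main(String[] args) {
        long ms = elapsedMs(1322071711046L, 1322072052875L);
        System.out.println(ms + " ms = " + (ms / 60000.0) + " min"); // 341829 ms, ~5.7 min
    }
}
```

That is about 5.7 minutes, consistent with the "5 mins" reported, versus 4 seconds in TOAD.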
Regards,
Soorya
Similar Messages
-
Query taking much time to execute
The following query takes more than 4 hours to execute.
select l_extendedprice , count(l_extendedprice) from dbo.lineitem group by l_extendedprice
Cardinality of table : 6001215 ( > 6 million)
Index on l_extendedprice is there
ReadAheadLobThreshold = 500
Database version 7.7.06.09
I need to optimize this query. Kindly suggest a way out.
Thanks
Priyank
Data Cache: 80296 KB
Ok, that's 8 Gigs for cache.
The index takes 16,335 pages × 8 KB = 130,680 KB ≈ 128 MB.
Fits completely into RAM - the same is true for the additional temp resultset.
So once the index has been read to cache I assume the query is a lot quicker than 4 hours.
6 Data Volumes
first 3 of size 51,200 KB
the other 3 of size 1,048,576 KB
Well, that's not the smartest thing to do.
That way the larger volumes will get double the I/O requests which eventually saturates the I/O channel.
Yes, looking at the cardinality of the table, lots of I/O is required, but more than 4 hours is still quite unrealistic. Some tuning is required.
We're not talking about the cardinality here - you want all the data.
We talk pages then.
And as we've seen, the table is not touched for this query.
Instead the smaller index is completely read in a index only strategy.
Loading 128 MB from disk, creating temporary data of the same size, and spooling out the information (thereby reading the 128 MB of temp data again) in 4 hours adds up to ca. 384 MB / 4 hours = 96 MB/hour = 1.6 MB/minute.
Not too good really - I suspect that the I/O system here is not the quickest one.
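That back-of-the-envelope estimate can be reproduced directly; a small sketch (the 3x factor assumes the index is read once and the temp result is written once and read back once, as described above):

```java
// Reproduce the rough I/O rate estimate from the post.
public class Throughput {
    public static void main(String[] args) {
        long pages = 16335;                       // index pages reported
        double indexMb = pages * 8 / 1024.0;      // 8 KB pages -> ~127.6 MB
        double totalMb = 3 * indexMb;             // read index, write temp, re-read temp
        double mbPerMin = totalMb / (4 * 60);     // spread over the 4-hour runtime
        System.out.printf("index=%.1f MB, total=%.1f MB, rate=%.2f MB/min%n",
                          indexMb, totalMb, mbPerMin);
    }
}
```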
You may want to activate time measurement and set the DB Analyzer interval to 120 seconds.
Then activate the Command Monitor and Resource Monitor and look for statements taking longer than 10 minutes.
Now run your statement again, let us know the information from the Command/Resource Monitor, and check for warnings in the DB Analyzer output.
regards,
Lars -
SFTP is taking much time to execute
Hi,
I am working on SFTP .
SFTP is working, but it takes 8 minutes to execute.
Do I need to set any time-related properties?
Regards,
Divya.
Hi,
Use a smaller polling interval for high-throughput scenarios where the message size is not very large.
Regards
Abhi -
Taking long time to execute views
Hi All,
my query takes a long time to execute (I am using standard views in the query).
The standard views XLA_INV_AEL_GL_V and XLA_WIP_AEL_GL_V themselves take a long time to execute, but I need the information from these views.
WHERE gjh.je_batch_id = gjb.je_batch_id AND
gjh.je_header_id = gjl.je_header_id AND
gjh.je_header_id = xlawip.je_header_id AND
gjl.je_header_id = xlawip.je_header_id AND
gjl.je_line_num = xlawip.je_line_num AND
gcc.code_combination_id = gjl.code_combination_id AND
gjl.code_combination_id = xlawip.code_combination_id AND
gjb.set_of_books_id = xlawip.set_of_books_id AND
gjh.je_source = 'Inventory' AND
gjh.je_category = 'WIP' AND
gp.period_set_name = 'Accounting' AND
gp.period_name = gjl.period_name AND
gp.period_name = gjh.period_name AND
gp.start_date +1 between to_date(startdate,'DD-MON-YY') AND
to_date(enddate,'DD-MON-YY') AND
gjh.status =nvl(lstatus,gjh.status)
Could anyone help me make it execute faster?
Thanks
Madhu
When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
-
How to build sql query for view object at run time
Hi,
I have an LOV on my form that is created from a view object.
The view object is read-only and is created from a SQL query.
The SQL query has a few input parameters and table joins.
My scenario is such that if the input parameters are passed, I have to join extra tables; otherwise a single table can fetch the results I need.
Can anyone please suggest how I can solve this? I want to build the view object's query at run time, based on the values passed in the input parameters.
Thanks
Srikanth Addanki
As I understand it, you want to change the query at run time. If this is what you want, you can use the setQuery method and then executeQuery.
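A minimal sketch of that pattern: build the SQL text conditionally and hand it to setQuery() before calling executeQuery(). The table and column names here are hypothetical placeholders, not from the original post:

```java
// Build the view object's query text at run time. When the optional
// parameter is absent, a single-table query suffices; otherwise the
// extra table is joined in. (All names are illustrative only.)
public class QueryBuilder {
    public static String buildQuery(Integer deptId) {
        StringBuilder sql = new StringBuilder(
            "SELECT e.emp_id, e.emp_name FROM employees e");
        if (deptId != null) {
            sql.append(" JOIN departments d ON d.dept_id = e.dept_id")
               .append(" WHERE d.dept_id = :p_dept_id");
        }
        return sql.toString();
    }
}
```

In the VOImpl you would then call something like setQuery(buildQuery(deptId)), bind the parameter, and executeQuery(); note that setQuery() is typically only supported for view objects whose SQL is fully user-defined (expert mode).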
http://download.oracle.com/docs/cd/B14099_19/web.1012/b14022/oracle/jbo/server/ViewObjectImpl.html#setQuery_java_lang_String_ -
Owb job taking too much time to execute
While creating a job in OWB, I am using three tables, a joiner, and an aggregator, which are all joined through another joiner to load the final table. The output is correct, but the generated SQL query is very complex, with many sub-queries, so it takes a long time to execute. Please help me reduce the cost.
-KCIt depends on what kind of code it generates at each stage. The first step would be collect stats for all the tables used and check the SQL generated using EXPLAIN PLAN. See which sub-query or inline view creates the most cost.
Generate SQL at various stages and see if you can achieve the same with a different operator.
The other option would be passing HINTS to the tables selected.
- K -
Adding column is taking much time. How to avoid?
ALTER TABLE CONTACT_DETAIL
ADD (ISIMDSCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL
,ISREACHCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL);
Is there any way to speed up the execution of this statement?
It has been running for more than 24 hours since it started.
I do not know why it is taking so long; the size of the table is only 30 MB.
To add a column, the row directory of every record must be rewritten.
Obviously this will take time and produce redo.
Whenever something is slow, the first question you need to answer is
'What is it waiting for?' You can investigate this via the various v$ views.
Also, after more than 200 'I cannot be bothered to do any research on my own' questions, you should know not to post here without a four-digit version number and a platform,
as volunteers aren't mind readers.
If you want to continue to withhold information, please consider NOT posting here.
Sybrand Bakker
Senior Oracle DBA
Experts: those who did read the documentation and can be bothered to investigate their own problems. -
Query taking much time, Oracle 9i
Hi,
**How can we tune the SQL query in Oracle 9i?**
The select query takes more than an hour and 30 minutes to return the result.
Because of this,
we created a materialized view on the select query and also submitted a job in dba_jobs to refresh the materialized view daily.
When we retrieve the data from the materialized view, we get the result very quickly.
But the job assigned in dba_jobs takes as long to complete as the query used to take.
Since the job takes this long in the test database, it may cause load if we move the same scripts to the production environment.
Please suggest how to resolve the issue and also how to tune the SQL.
With Regards,
Srinivas
Edited by: Srinivas.. on Dec 17, 2009 6:29 AM
Hi Srinivas,
Please follow this search and see if it is helpful.
Regards,
Helios
Query taking much time.
Hi All,
I have one query which takes a long time in the dev environment, where the data size is very small, and I am planning to run this query in the production database, where the database size is huge. Please let me know how I can optimize this query.
select count(*) from (
select /*+ full(tls) full(tlo) parallel(tls,2) parallel(tls, 2) */
tls.siebel_ba, tls.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba = tlo.siebel_ba (+) and
tls.msisdn = tlo.msisdn (+) and
tlo.siebel_ba is null and
tlo.msisdn is null
union
select /*+ full(tls) full(tlo) parallel(tls,2) parallel(tls, 2) */
tlo.siebel_ba, tlo.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba (+) = tlo.siebel_ba and
tls.msisdn (+) = tlo.msisdn and
tls.siebel_ba is null and
tls.msisdn is null
);
The explain plan of the above query is:
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | | 14 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | SORT AGGREGATE | | 1 | | | 41,04 | P->S | QC (RAND) |
| 3 | VIEW | | 164 | | 14 | 41,04 | PCWP | |
| 4 | SORT UNIQUE | | 164 | 14104 | 14 | 41,04 | PCWP | |
| 5 | UNION-ALL | | | | | 41,03 | P->P | HASH |
|* 6 | FILTER | | | | | 41,03 | PCWC | |
|* 7 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 8 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,03 | PCWP | |
| 9 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,00 | S->P | BROADCAST |
|* 10 | FILTER | | | | | 41,03 | PCWC | |
|* 11 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 12 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,01 | S->P | HASH |
| 13 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,02 | P->P | HASH |
Predicate Information (identified by operation id):
6 - filter("TLO"."SIEBEL_BA" IS NULL AND "TLO"."MSISDN" IS NULL)
7 - access("TLS"."SIEBEL_BA"="TLO"."SIEBEL_BA"(+) AND "TLS"."MSISDN"="TLO"."MSISDN"(+))
10 - filter("TLS"."SIEBEL_BA" IS NULL AND "TLS"."MSISDN" IS NULL)
11 - access("TLS"."SIEBEL_BA"(+)="TLO"."SIEBEL_BA" AND "TLS"."MSISDN"(+)="TLO"."MSISDN")
I dunno, it looks like you are getting all the rows that are null with an outer join, so won't that decide to full-scan anyway? Plus the union means it will do the work twice and then a distinct to get rid of duplicates: see how it does a UNION-ALL and then a SORT UNIQUE. Somehow I have the feeling there might be a trickier way to do what you want, so maybe you should state exactly what you want in English. -
Discoverer report is taking much time to open
Hi
All the Discoverer reports are taking a long time to open; even a query in an LOV takes 20-25 minutes. We have restarted the services, but with no result.
Please suggest what can be done; my application is on 12.0.6.
Regards
This topic was discussed many times in the forum before. Please see old threads for details and for the docs you need to refer to: https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Slow&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
Thanks,
Hussein -
Database table AUFM access is taking much time even though a secondary index was created
Hi Friends,
There is a report for goods movements related to service orders + account indicator.
We have two testing systems (EBQ for developers and PEQ from the client side).
The EBQ system contains a replica of PEQ every month.
This report does not take much time in EBQ, but it takes much time in PEQ. For the selection criteria I have given, both systems have the same data (and give the same output).
The report has the follwoing fields on the selection criteria:
A_MJAHR Material Doc. Year (Mandatory)
S_BLDAT Document Date(Optional)
S_BUDAT Posting Date(Optional)
S_LGORT Storage Location(Optional)
S_MATNR Material(Optional)
S_MBLNR Material Document (Optional)
S_WERKS Plant(Optional)
The client is not agreeing to make Material Document mandatory.
The main (first) table hit is on the AUFM table. As there are non-key fields in the WHERE condition, we have also created a secondary index on the AUFM table on the following fields:
BLDAT
BUDAT
MATNR
WERKS
LGORT
Even so, in the PEQ system the report takes a very long time; sometimes we do not even get the ALV output.
What can be done to make the report execute fast?
<removed by moderator>
The part of report Soure code is as below:
<long code part removed by moderator>
Thanks and Regards,
Rama chary.P
Moderator message: please stay within the 2500 character limit to preserve formatting, only post relevant portions of the code, also please read the following sticky thread before posting.
Please Read before Posting in the Performance and Tuning Forum
locked by: Thomas Zloch on Sep 15, 2010 11:40 AM
LOV is slow after selecting a value its taking much time to default
Hi,
I have a dependent LOV. The master LOV executes fine and populates its field quickly. But the child LOV is very slow; after selecting a value, it takes a long time to default.
Can anyone please tell me if there is a way to default the value quickly after it is selected?
Thanks,
Mahesh
Hi Gyan,
The same issues occur in the TST and PROD instances.
Even when my search criteria return just one record, after selecting that value it takes a long time to default the value into the field.
Please advise. Thanks for your quick response.
Thanks,
Mahesh -
ODS Activation is taking much time...
Hi All,
Sometimes ODS activation takes a long time. Generally it takes 30 minutes, but sometimes it takes 6 hours.
When activation takes that long and I check SM50, I can see that one piece of SQL is taking most of the time:
SELECT
COUNT(*) , "RECORDMODE"
FROM
"/BIC/B0000814000"
WHERE
"REQUEST" = :A0 AND "DATAPAKID" = :A1
GROUP BY
"RECORDMODE"#
Could you please let me know the possible ways to solve this issue?
Thanks
Hello,
you have 2 options:
1) as already mentioned, clean up some old PSA data or change-log data from this PSA table, or
2) create an additional index on RECORDMODE for this table via transaction SE11 -> Indexes.
Regards, Patrick Rieken. -
Hi, I have updated my Mac OS X 10.5 to 10.5.8 successfully, but while installing it takes much time and the status bar is not showing any progress
If I remember correctly, one of the updates could hang after doing the update. This is fixed by a restart, but installing a combo update over the top of other updates rarely does any harm, and it may be the best thing to do.
-
Taking much time for loading data
Dear All
While loading data in the <b>PRD system</b> (for master data and transaction data), it takes much time. For example, for 2LIS_05_QOITM I am loading only delta data into the <b>PSA</b> from R/3. Sometimes (yesterday) it takes 2 minutes, and sometimes (today) it has taken 5 hours and is not yet complete (the status is yellow). Yesterday we went to SM58 on the R/3 side and executed the LUW for some other DataSources. We could do that again now, but we don't want to, because we are expecting a permanent solution. Could you please advise me? I am getting the message below in the status tab:
Errors while sending packages from OLTP to BI
Diagnosis
No IDocs could be sent to BI using RFC.
System Response
There are IDocs in the source system ALE outbox that did not arrive in the ALE inbox of BI.
Further analysis:
Check the TRFC log.
You can access this log using the wizard or the menu path "Environment -> Transact. RFC -> In source system".
Error handling:
If the TRFC is incorrect, check whether the source system is fully connected to BI. In particular, check the authorizations of the background user in the source system.
I am loading data through a process chain and the user is <b>BWREMOTE (an authorized user).</b>
Please help me.
Thanks a lot in advance
Raja
Dear Karthik,
No, I couldn't resolve it till now.
But everything is fine.
Now the status is still yellow (209 of 209). What should I do now?
I'm getting the message below <b>in the status tab</b>:
Missing data packages for PSA Table
Diagnosis
Data packets are missing from PSA Table . BI processing does not return any errors. The data transport from the source system to BI was probably incorrect.
Procedure
Check the tRFC overview in the source system.
You access this log using the wizard or following the menu path "Environment -> Transact. RFC -> Source System".
Error handling:
If the tRFC is incorrect, resolve the errors listed there.
Check that the source system is connected properly to BI. In particular, check the remote user authorizations in BI.
<b>In detail tab</b>, I am getting below message
Info IDoc 2 : sent, not arrived ; IDoc ready for dispatch (ALE service)
Thanks in advance
Raja