Poor Performance of the query.
Hi all,
I am using this query:
select address1,address2,address3,city,place,pincode,siteid,bpcnum_0, contactname,fax,mobile,phone,website
from (select address1,address2,address3,city,place,pincode,siteid,bpcnum_0, contactname,fax,mobile,phone,website,
row_number() over (partition by contactname, address1
order by contactname, address1) as rn
from vw_sub_cl_add1 where siteid=10 and bpcnum_0 = '0063') emp where rn = 1
I used explain plan for the query; the result is:
Plan hash value: 3976107967
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Inst |IN-OUT|
| 0 | SELECT STATEMENT | | | | 0 (0)| | |
| 1 | REMOTE | | | | | INFO | R->S |
8 rows returned in 0.04 seconds, but the query actually returns 10 rows.
The view "vw_sub_cl_add1" is created using a database link (remote database server).
I am using this query in a for loop to retrieve the records and print them one by one.
The problem is that the performance of the query is poor: it takes 1.08 seconds to display all the records.
What steps should I take to minimize the retrieval time?
Thanks in advance
bye
Srikavi
Since this is a query that is processed completely on the remote site, there are at least two potential issues that you should check if you don't want to use the "materialized view" approach:
1. The time it takes to transport the result set to your local database, i.e. potential network issues
2. The time it takes to process the query on the remote site
Since you're only fetching 10 rows - if I understand you correctly - the first point shouldn't be an issue in your case.
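If you do consider the "materialized view" approach instead, here is a minimal sketch (the hourly refresh interval is an assumption; pick whatever staleness your application can tolerate). The loop would then read from the local materialized view instead of going over the database link on every execution:
CREATE MATERIALIZED VIEW mv_sub_cl_add1
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1/24  -- refresh every hour (assumption)
AS
SELECT address1, address2, address3, city, place, pincode,
       siteid, bpcnum_0, contactname, fax, mobile, phone, website
  FROM vw_sub_cl_add1;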
If you have suitable access to the remote site, you would need to generate an execution plan of the "local" version of the query by logging directly into the remote site, to find out why it takes longer than you expect. Probably it's missing some indexes, if the number of rows to process should be only a few and you expect it to return more quickly.
Here are simple instructions on how to generate a meaningful execution plan if you want to post it here:
Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the [code] and [/code] tags to enhance the readability of the output provided:
In SQL*Plus:
SET LINESIZE 130
EXPLAIN PLAN FOR <your statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Note that DBMS_XPLAN.DISPLAY is only available from 9i on.
In previous versions you could run the following in SQL*Plus (on the server) instead:
@?/rdbms/admin/utlxpls
A different approach in SQL*Plus:
SET AUTOTRACE ON EXPLAIN
<run your statement>;
will also show the execution plan.
In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
[When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
and post the "tkprof" output here, too.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Similar Messages
-
Performance of the query is poor
Hi All,
This is Prasad. I have a problem with a query: it takes more time to retrieve the data from the cube. The query uses a variable of type Customer Exit. The cube is not compressed. I think the issue with the F fact table is due to the high number of table partitions (requests) that it has to select from. If I compress the cube, will the performance of the query increase or not? Is there any alternative for improving the performance of the query? Somebody suggested a result set query; I am not aware of this technique, so if you know it please let me know.
Thanks in advance
Hi Prasad,
Query performance will depend on many factors like
1. Aggregates
2. Compression of requests
3. Query read mode setting
4. Cache memory setting
5. Creating BI Accelerator indexes on InfoCubes
6. Indexes
Proposing aggregates to improve query performance:
First try to execute the query in RSRT on which you want to build aggregates. Check how much time it takes to execute, and whether it is necessary to build an aggregate for this query. To get this information, go to SE11 --> give table name RSDDSTAT_DM in BI 7.0 (or RSDDSTAT in BW 3.x) --> Display --> Contents --> give from-date and to-date values as today, user name as your user name, and give the query name
--> execute.
Now you will get a list with fields like object name (report name), time read, InfoProvider name (MultiProvider), PartProvider name (cube), aggregate name, etc. A time read of less than 100,000,000 (100 sec) is acceptable. If the time read is more than 100 sec, then it is recommended to create aggregates for that query to increase performance. Keep this time read in mind.
Again go to RSRT --> give the query name --> Execute + Debug -->
A popup will appear; in it, select the check box "Display aggregates found" --> continue. If any aggregates exist for that
query, they will be displayed first; if you press the continue button, it will display which fields are coming from which cube. Try to copy this list of objects, on which an aggregate can be created, into one text file.
Then select that particular cube in RSA1 --> context menu --> Maintain Aggregates --> Create by own --> click on the create-aggregate button on the top left side --> give a description of the aggregate --> continue --> take the first object from the list and click on the find button in the aggregate creation screen --> give the object name and search --> drag and drop that object into the aggregate pane on the right side (drag and drop all the fields like this into the aggregate) --->
Activate the aggregate --> it will take some time; once the activation finishes, make sure that the aggregate is in switched-on mode.
Try to execute the query from RSRT again, find out the time read, and compare this with the first time read. If it is less than the first time read, then you can propose this aggregate to increase the performance of the query.
I hope this will help you. Go through the links below to understand aggregates more clearly.
http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
Follow this thread for creation of BIA Indexes:
Re: BIA Creation
Hope this helps...
Regards,
Ramki. -
Poor performance of the BDB cache
I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
Overview
Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
The Database
Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
sequences (maintains record IDs for all other tables)
urls - primary database containing URL data (record ID (key), URL itself, grouped data such as number of hits, transfer size, etc.)
urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups
urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
downloads (p), downloads.values (s), downloads.xfer (s)
agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
search (p), search.values (s), search.hits (s)
users (p), users.values (s), users.hits (s), users.groups.hits (sf)
errors (p), errors.values (s), errors.hits (s)
dhosts (p), dhosts.values (s)
statuscodes (HTTP status codes)
totals.daily (31 days)
totals.hourly (24 hours)
totals (one record)
countries (a couple of hundred countries)
system (one record)
visits.active (active visits - variable length)
downloads.active (active downloads - variable length)
All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
Database Size
One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
urls (p):
8192 Underlying database page size
2031 Overflow key/data size
1471636 Number of unique keys in the tree
1471636 Number of data items in the tree
193 Number of tree internal pages
577738 Number of bytes free in tree internal pages (63% ff)
55312 Number of tree leaf pages
145M Number of bytes free in tree leaf pages (67% ff)
2620 Number of tree overflow pages
16M Number of bytes free in tree overflow pages (25% ff)
urls.hits (s)
8192 Underlying database page size
2031 Overflow key/data size
2 Number of levels in the tree
823 Number of unique keys in the tree
1471636 Number of data items in the tree
31 Number of tree internal pages
201970 Number of bytes free in tree internal pages (20% ff)
45 Number of tree leaf pages
243550 Number of bytes free in tree leaf pages (33% ff)
2814 Number of tree duplicate pages
8360024 Number of bytes free in tree duplicate pages (63% ff)
0 Number of tree overflow pages
The Testbed
I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
I also used a code profiler to analyze SSW and BDB performance.
The Problem
Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
The Tests
SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed was as good as stopped.
Then I flipped options and used DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance and even though OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options lead to never-ending tests.
In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
Some of the other things I tried/observed:
* I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
* I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
* I experimented with page size, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
* The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
* I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
I have been able to improve processing speed up to
6-8 times with these two techniques:
1. A separate trickle thread was created that would
periodically call DbEnv::memp_trickle. This works
especially good on multicore machines, but also
speeds things up a bit on single CPU boxes. This
alone improved speed from 2K rec/sec to about 4K
rec/sec.
Hello Stone,
I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
1. What was the % of clean pages that you specified?
2. At what interval were you calling memp_trickle from this thread?
This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
Regards,
Nishith.
>
2. Maintaining multiple secondary databases in real
time proved to be the bottleneck. The code was
changed to create secondary databases at the end of
the run (calling Db::associate with the DB_CREATE
flag), right before the reports are generated, which
use these secondary databases. This improved speed
from 4K rec/sec to 14K rec/sec. -
Please help me to increase the performance of the query
Hello
I am not an Oracle expert or developer, and I have a problem to resolve.
Below are the query and its explain plan; I am seeking help to improve the performance of the query.
Our analysis:
The query normally runs well and takes less than one minute to fetch the results, but during peak time it takes 8 minutes.
I would appreciate anyone's suggestions to improve the query.
The query is generated from a Microsoft DLL, so we don't have the SQL code and need some help with tuning the tables.
If tuning the query itself would also improve things, please suggest that as well.
Environment: Solaris 8
DB: Oracle 9i
(SELECT vw.dispapptobjid, vw.custsiteobjid, vw.emplastname, vw.empfirstname,
vw.scheduledonsite AS starttime, vw.appttype, vw.latestart,
vw.endtime, vw.typetitle, vw.empobjid, vw.latitude, vw.longitude,
vw.workduration AS DURATION, vw.dispatchtype, vw.availability
FROM ora_appt_disp_view vw
WHERE ( ( vw.starttime >=
TO_DATE ('2/12/2007 4:59 PM', 'MM/DD/YYYY HH12:MI AM')
AND vw.starttime <
TO_DATE ('2/21/2007 3:59 PM', 'MM/DD/YYYY HH12:MI AM')
OR vw.endtime >
TO_DATE ('2/12/2007 4:59 PM', 'MM/DD/YYYY HH12:MI AM')
AND vw.endtime <=
TO_DATE ('2/21/2007 3:59 PM', 'MM/DD/YYYY HH12:MI AM')
OR ( vw.starttime <=
TO_DATE ('2/12/2007 4:59 PM', 'MM/DD/YYYY HH12:MI AM')
AND vw.endtime >=
TO_DATE ('2/21/2007 3:59 PM', 'MM/DD/YYYY HH12:MI AM'))))
UNION
(SELECT 0 AS dispapptobjid, emp.emp_physical_site2site AS custsiteobjid,
emp.last_name AS emplastname, emp.first_name AS empfirstname,
TO_DATE ('1/1/3000', 'MM/DD/YYYY') AS starttime, 'E' AS appttype,
NULL AS latestart, NULL AS endtime, '' AS typetitle,
emp.objid AS empobjid, 0 AS latitude, 0 AS longitude, 0 AS DURATION,
'' AS dispatchtype, 0 AS availability
FROM table_employee emp, table_user usr
WHERE emp.employee2user = usr.objid AND emp.field_eng = 1 AND usr.status = 1)
ORDER BY empobjid, starttime, endtime DESC
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=HINT: ALL_ROWS 23 K 11312
SORT UNIQUE 23 K 3 M 11140
UNION-ALL
VIEW ORA_APPT_DISP_VIEW 17 K 3 M 10485
UNION-ALL
CONCATENATION
NESTED LOOPS OUTER 68 24 K 437
NESTED LOOPS 68 23 K 369
NESTED LOOPS OUTER 68 25 K 505
NESTED LOOPS OUTER 68 24 K 505
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 22 K 369
NESTED LOOPS OUTER 68 22 K 369
NESTED LOOPS 19 6 K 312
NESTED LOOPS 19 5 K 312
HASH JOIN 19 5 K 293
NESTED LOOPS 19 5 K 274
NESTED LOOPS 19 4 K 236
NESTED LOOPS 19 4 K 198
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 4 K 160
NESTED LOOPS OUTER 19 1 K 103
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 2 K 103
TABLE ACCESS BY INDEX ROWID TABLE_DISPTCHFE 19 1 K 46
INDEX RANGE SCAN GSA_SCHED_REPAIR 44 3
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22 3
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28 3
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_CASE 1 30 2
INDEX UNIQUE SCAN CASE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_SITE 1 12 2
INDEX UNIQUE SCAN SITE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_ADDRESS 1 12 2
INDEX UNIQUE SCAN ADDRESS_OBJINDEX 1 1
TABLE ACCESS FULL TABLE_EMPLOYEE 1 34 1
INDEX UNIQUE SCAN SITE_OBJINDEX 1 6 1
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_X_GSA_TIME_STAMPS 4 48 3
INDEX RANGE SCAN GSAIDX_TS2DISP 1 2
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_MOD_LEVEL 1 12 1
INDEX UNIQUE SCAN MOD_LEVEL_OBJINDEX 1
INDEX UNIQUE SCAN PART_NUM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN SUBCASE_OBJINDX 1 6 1
NESTED LOOPS OUTER 68 25 K 505
NESTED LOOPS OUTER 68 24 K 505
NESTED LOOPS OUTER 68 24 K 437
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 22 K 369
NESTED LOOPS OUTER 68 22 K 369
NESTED LOOPS 19 6 K 312
NESTED LOOPS 19 5 K 312
NESTED LOOPS 19 5 K 293
NESTED LOOPS 19 5 K 274
NESTED LOOPS 19 4 K 236
NESTED LOOPS 19 4 K 198
NESTED LOOPS OUTER 19 4 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 1 K 103
TABLE ACCESS BY INDEX ROWID TABLE_DISPTCHFE 19 1 K 46
INDEX RANGE SCAN GSA_SCHED_REPAIR 44 3
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22 3
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28 3
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_CASE 1 30 2
INDEX UNIQUE SCAN CASE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_SITE 1 12 2
INDEX UNIQUE SCAN SITE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_ADDRESS 1 12 2
INDEX UNIQUE SCAN ADDRESS_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_EMPLOYEE 1 34 1
INDEX UNIQUE SCAN EMPLOYEE_OBJINDEX 1
INDEX UNIQUE SCAN SITE_OBJINDEX 1 6 1
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_X_GSA_TIME_STAMPS 4 48 3
INDEX RANGE SCAN GSAIDX_TS2DISP 1 2
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN SUBCASE_OBJINDX 1 6 1
TABLE ACCESS BY INDEX ROWID TABLE_MOD_LEVEL 1 12 1
INDEX UNIQUE SCAN MOD_LEVEL_OBJINDEX 1
INDEX UNIQUE SCAN PART_NUM_OBJINDEX 1 6
NESTED LOOPS OUTER 68 25 K 505
NESTED LOOPS OUTER 68 24 K 505
NESTED LOOPS OUTER 68 24 K 437
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 22 K 369
NESTED LOOPS OUTER 68 22 K 369
NESTED LOOPS 19 6 K 312
NESTED LOOPS 19 5 K 312
NESTED LOOPS 19 5 K 293
NESTED LOOPS 19 5 K 274
NESTED LOOPS 19 4 K 236
NESTED LOOPS 19 4 K 198
NESTED LOOPS OUTER 19 4 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 1 K 103
TABLE ACCESS BY INDEX ROWID TABLE_DISPTCHFE 19 1 K 46
INDEX RANGE SCAN GSA_REQ_ETA 44 3
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22 3
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28 3
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_CASE 1 30 2
INDEX UNIQUE SCAN CASE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_SITE 1 12 2
INDEX UNIQUE SCAN SITE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_ADDRESS 1 12 2
INDEX UNIQUE SCAN ADDRESS_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_EMPLOYEE 1 34 1
INDEX UNIQUE SCAN EMPLOYEE_OBJINDEX 1
INDEX UNIQUE SCAN SITE_OBJINDEX 1 6 1
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_X_GSA_TIME_STAMPS 4 48 3
INDEX RANGE SCAN GSAIDX_TS2DISP 1 2
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN SUBCASE_OBJINDX 1 6 1
TABLE ACCESS BY INDEX ROWID TABLE_MOD_LEVEL 1 12 1
INDEX UNIQUE SCAN MOD_LEVEL_OBJINDEX 1
INDEX UNIQUE SCAN PART_NUM_OBJINDEX 1 6
NESTED LOOPS 16 K 2 M 5812
HASH JOIN 16 K 2 M 5812
HASH JOIN 16 K 2 M 5286
TABLE ACCESS FULL TABLE_EMPLOYEE 13 K 441 K 28
HASH JOIN 16 K 1 M 5243
TABLE ACCESS FULL TABLE_SCHEDULE 991 11 K 2
HASH JOIN OUTER 16 K 1 M 5240
HASH JOIN OUTER 16 K 1 M 3866
HASH JOIN OUTER 16 K 1 M 450
HASH JOIN 16 K 1 M 44
TABLE ACCESS FULL TABLE_GBST_ELM 781 14 K 2
TABLE ACCESS FULL TABLE_APPOINTMENT 16 K 822 K 41
INDEX FAST FULL SCAN CASE_OBJINDEX 1 M 6 M 201
TABLE ACCESS FULL TABLE_SITE 967 K 11 M 3157
TABLE ACCESS FULL TABLE_ADDRESS 961 K 11 M 1081
INDEX FAST FULL SCAN SITE_OBJINDEX 967 K 5 M 221
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
HASH JOIN 6 K 272 K 51
TABLE ACCESS FULL TABLE_USER 6 K 51 K 21
TABLE ACCESS FULL TABLE_EMPLOYEE 6 K 220 K 28
Hi,
First off, it appears that you are querying a view. I would redo the query against the base tables.
Next, look at a function-based index for the DATE column. Here are my notes:
http://www.dba-oracle.com/t_function_based_indexes.htm
http://www.dba-oracle.com/oracle_tips_index_scan_fbi_sql.htm
Also, make sure the tables are analyzed properly with dbms_stats:
http://www.dba-oracle.com/art_builder_dbms_stats.htm
And histograms, if appropriate:
http://www.dba-oracle.com/art_builder_histo.htm
Lastly, look at increasing hash_area_size or pga_aggregate_target, depending on your table sizes:
http://www.dba-oracle.com/art_so_undocumented_pga_parameters.htm
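To illustrate the last two points, a hedged sketch (table and column names are placeholders, since the view's base tables are not shown in the post; a plain index on the date column serves range predicates like the ones in this query, while a function-based index is only needed if the predicate wraps the column in an expression):
CREATE INDEX idx_appt_starttime ON table_appointment (starttime);

BEGIN
   -- gather fresh statistics, with histograms where the data is skewed
   DBMS_STATS.gather_table_stats(
      ownname    => USER,
      tabname    => 'TABLE_APPOINTMENT',
      method_opt => 'FOR ALL COLUMNS SIZE AUTO',
      cascade    => TRUE);
END;
/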
Hope this helps. . . .
Donald K. Burleson
Oracle Press Author -
How to improve the performance of the query
Hi,
Help me by giving tips on how to improve the performance of the query. Can I post the query?
Suresh
Below is the formatted query, and no wonder it is taking a lot of time. I will give you a list of issues soon, after analyzing more. Till then, understand the pitfalls yourself from this formatted query (one possible rewrite is sketched after it).
SELECT rt.awb_number,
ar.activity_id as task_id,
t.assignee_org_unit_id,
t.task_type_code,
ar.request_id
FROM activity_task ar,
request_task rt,
task t
WHERE ar.activity_id =t.task_id
AND ar.request_id = rt.request_id
AND ar.complete_status != 'act.stat.closed'
AND t.assignee_org_unit_id in (SELECT org_unit_id
FROM org_unit
WHERE org_unit_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3'
OR parent_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3')
AND has_queue=1))
AND ar.parent_task_id not in (SELECT tt.task_id
FROM task tt
WHERE tt.assignee_org_unit_id in (SELECT org_unit_id
FROM org_unit
WHERE org_unit_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3'
OR parent_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3')
AND has_queue=1)))
AND rt.awb_number is not null
ORDER BY rt.awb_number
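For illustration, here is a hedged rewrite (the posted query has lost some parentheses, so the placement of the has_queue = 1 predicate is an assumption): the repeated org_unit subquery is factored into a single WITH block so it is written, and potentially evaluated, only once, and the redundant outer "SELECT org_unit_id FROM org_unit WHERE org_unit_id IN (...)" wrapper is dropped:
WITH org_units AS (
   SELECT org_unit_id
     FROM org_unit
    WHERE (org_unit_id = '3'
           OR parent_id = '3'
           OR parent_id IN (SELECT org_unit_id
                              FROM org_unit
                             WHERE org_unit_id = '3'
                                OR parent_id = '3'))
      AND has_queue = 1
)
SELECT rt.awb_number,
       ar.activity_id AS task_id,
       t.assignee_org_unit_id,
       t.task_type_code,
       ar.request_id
  FROM activity_task ar, request_task rt, task t
 WHERE ar.activity_id = t.task_id
   AND ar.request_id = rt.request_id
   AND ar.complete_status != 'act.stat.closed'
   AND t.assignee_org_unit_id IN (SELECT org_unit_id FROM org_units)
   AND ar.parent_task_id NOT IN (SELECT tt.task_id
                                   FROM task tt
                                  WHERE tt.assignee_org_unit_id IN
                                        (SELECT org_unit_id FROM org_units))
   AND rt.awb_number IS NOT NULL
 ORDER BY rt.awb_number;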
Cheers
Sarma. -
Performance of the query increased to 1 hour 15 mins....
The view is working, but the performance of the query is horrible.
Our pull time has increased from 25 minutes to 1 hour and 15 minutes. Can you please advise whether the same solution applies to the production box as well?
Production database details:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The query in the production database:
SELECT /*+ALL_ROWS*/
a.lcl_id AS Ora_Order, --Order_Number,
a.closed_date AS Closed_Date,
a.modified_date AS Modified_Date,
a.received_date AS Received_Date,
a.status AS Status,
b.seq AS Ora_Line, --Line_Number
b.sub_seq AS Ora_sub_line,
c.seq AS Unit_Number,
SUBSTR (c.olig_group_id, INSTR (c.olig_group_id,
'.',
-1,
1)
+ 1)
AS shipment_number,
c.tag AS Tag,
c.special_tag AS Customer_Tag,
h.fmly_serial_id AS Serial_Number,
d.allocation_timestamp AS Alloc_Date,
MIN (f.closed_timestamp) AS First_Event_On_Floor,
-- CALIBRATION
MAX (DECODE (f.uutt_mstr_id, 1, f.closed_timestamp, NULL))
AS Calibration_Date,
-- PACKAGING
MAX (DECODE (f.uutt_mstr_id, 50, f.closed_timestamp, NULL))
AS Package_Date,
-- CAPS KITTING
MAX(DECODE (
f.uutt_mstr_id,
100,
DECODE (f.stnd_seq, 2024961, f.closed_timestamp, NULL)
))
AS Caps_Kitting_Date,
lastprodsn.pm_mstr_id AS Tagged_Model,
b.CEP AS ETO_Number,
j.VALUE AS Product_Options,
a.PO AS PO_Number,
-- lastprodsn.uut_glbl_id as LastProdSN_UUT_Glbl_ID -- replaced on 3/31/2011 BJACK
MAX (DECODE (f.uutt_mstr_id, 2, f.glbl_id, NULL))
AS LastProdSN_UUT_Glbl_ID
FROM ssc.ordr_hdrs a, -- glbl_id = sales order number
ssc.ordr_lns b, -- oh_glbl_id = SO #, SEQ = line number
ssc.ordr_ln_itms c, -- ol_oh_glbl_id = SO #, ol_seq = line #, seq = unit #, olig_group_id = shipment #
ssc.omar_track_maps d, -- for tracking id, holds the allocation timestamp
(SELECT x.uut_glbl_id,
x.oli_ol_oh_glbl_id,
x.oli_ol_seq,
x.oli_ol_sub_seq,
x.oli_seq,
x.sm_glbl_id,
x.pm_mstr_id
FROM ssc.serial_prod_uut_maps x
JOIN
( SELECT oli_ol_oh_glbl_id,
oli_ol_seq,
oli_ol_sub_seq,
oli_seq,
MAX (uut_glbl_id) Max_oli_uut_glbl_id
FROM ssc.serial_prod_uut_maps
GROUP BY oli_ol_oh_glbl_id,
oli_ol_seq,
oli_ol_sub_seq,
oli_seq) MAXOLIUUT
ON MAXOLIUUT.Max_oli_uut_glbl_id = x.uut_glbl_id
AND MAXOLIUUT.oli_ol_oh_glbl_id =
x.oli_ol_oh_glbl_id
AND MAXOLIUUT.oli_ol_seq = x.oli_ol_seq
AND MAXOLIUUT.oli_ol_sub_seq = x.oli_ol_sub_seq
AND MAXOLIUUT.oli_seq = x.oli_seq) lastprodsn, -- find latest uut for OLI (assumes UUT ids are in sequence so max is latest; needed to deal with SN or product chgs for OLI)
ssc.serial_prod_uut_maps e, -- go get all UUT IDs for the OLI's latest product number and serial number
ssc.uuts f, -- go get UUT details for all of the good OLI-product-SNs
ssc.uut_params g, -- go get the package void parameter (so can exclude them)
ssc.serial_mstrs h, -- go get serial number for the SN id
ssc.ORDR_LN_PARAMS j -- go get options for product number
WHERE -- join a to b sales orders to sales order lines
a.glbl_id = b.oh_glbl_id
AND -- join b to c to get sales order line items (units for a line item)
b.oh_glbl_id = c.ol_oh_glbl_id
AND b.seq = c.ol_seq
AND b.sub_seq = c.ol_sub_seq
AND -- join c to d to get allocation date if available (outer join)
c.otm_track_id = d.track_id(+)
AND -- join c to lastprodsn
c.ol_oh_glbl_id = lastprodsn.oli_ol_oh_glbl_id(+)
AND c.ol_seq = lastprodsn.oli_ol_seq(+)
AND c.ol_sub_seq = lastprodsn.oli_ol_sub_seq(+)
AND c.seq = lastprodsn.oli_seq(+)
AND -- join lastprodsn to k to get serial number for last product/serial number processed
lastprodsn.sm_glbl_id = h.glbl_id(+)
AND -- join lastprodsn to e to go get all the UUT ids for this OLI + Product # + Serial #
lastprodsn.oli_ol_oh_glbl_id = e.oli_ol_oh_glbl_id(+)
AND lastprodsn.oli_ol_seq = e.oli_ol_seq(+)
AND lastprodsn.oli_ol_sub_seq = e.oli_ol_sub_seq(+)
AND lastprodsn.oli_seq = e.oli_seq(+)
AND lastprodsn.pm_mstr_id = e.pm_mstr_id(+)
AND lastprodsn.sm_glbl_id = e.sm_glbl_id(+)
AND --join e to f to get UUT details for the good OLI-Product-SN combos
e.uut_glbl_id = f.glbl_id(+)
AND -- join f to g to get the voided parameter
f.glbl_id = g.uut_glbl_id(+)
AND -- join c to j to get the option codes for the product number (parameter 2070)
c.ol_oh_glbl_id = j.ol_oh_glbl_id(+)
AND c.ol_seq = j.ol_seq(+)
AND c.ol_sub_seq = j.ol_sub_seq(+)
AND c.seq = j.seq(+)
AND j.par_mstr_id(+) = 2070
AND j.VALUE(+) IS NOT NULL
AND -- un-voided packages only
g.par_mstr_id(+) = 1003
AND (g.uut_glbl_id IS NULL OR g.VALUE = 'N')
/* AND -- 1003 = package void status parameter
g.VALUE(+) = 'N' */
GROUP BY a.lcl_id,
b.seq,
b.sub_seq,
c.seq,
c.olig_group_id,
a.closed_date,
a.modified_date,
a.received_date,
a.status,
c.tag,
c.special_tag,
h.fmly_serial_id,
d.allocation_timestamp,
lastprodsn.pm_mstr_id,
b.CEP,
j.VALUE,
a.PO
/
=================
Explain plan:
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 82182 | 29M| | 160K (2)| 00:32:09 |
| 1 | HASH GROUP BY | | 82182 | 29M| 30M| 160K (2)| 00:32:09 |
| 2 | NESTED LOOPS OUTER | | 82182 | 29M| | 154K (2)| 00:30:51 |
| 3 | NESTED LOOPS OUTER | | 82182 | 26M| | 145K (2)| 00:29:12 |
|* 4 | HASH JOIN | | 82182 | 23M| 10M| 137K (2)| 00:27:33 |
| 5 | TABLE ACCESS FULL | ORDR_HDRS | 159K| 8716K| | 397 (4)| 00:00:05 |
|* 6 | HASH JOIN | | 89664 | 20M| 15M| 135K (2)| 00:27:09 |
| 7 | TABLE ACCESS FULL | ORDR_LNS | 506K| 9882K| | 688 (5)| 00:00:09 |
|* 8 | HASH JOIN RIGHT OUTER | | 89424 | 19M| 17M| 133K (2)| 00:26:39 |
| 9 | TABLE ACCESS FULL | OMAR_TRACK_MAPS | 567K| 10M| | 725 (5)| 00:00:09 |
|* 10 | FILTER | | | | | | |
|* 11 | HASH JOIN RIGHT OUTER | | 89424 | 17M| 4440K| 130K (2)| 00:26:09 |
|* 12 | TABLE ACCESS FULL | UUT_PARAMS | 133K| 2869K| | 3608 (7)| 00:00:44 |
|* 13 | HASH JOIN RIGHT OUTER | | 3244K| 563M| 85M| 96934 (3)| 00:19:24 |
| 14 | TABLE ACCESS FULL | UUTS | 2247K| 60M| | 4893 (4)| 00:00:59 |
|* 15 | HASH JOIN RIGHT OUTER | | 3244K| 476M| 239M| 62078 (3)| 00:12:25 |
| 16 | TABLE ACCESS FULL | SERIAL_PROD_UUT_MAPS | 3639K| 197M| | 6481 (4)| 00:01:18 |
|* 17 | HASH JOIN RIGHT OUTER | | 3244K| 300M| | 26716 (4)| 00:05:21 |
| 18 | VIEW | | 1 | 48 | | 18639 (4)| 00:03:44 |
|* 19 | FILTER | | | | | | |
| 20 | HASH GROUP BY | | 1 | 85 | | 18639 (4)| 00:03:44 |
|* 21 | HASH JOIN | | 308K| 25M| 40M| 18587 (4)| 00:03:44 |
|* 22 | TABLE ACCESS FULL| SERIAL_PROD_UUT_MAPS | 1060K| 28M| | 6520 (5)| 00:01:19 |
|* 23 | TABLE ACCESS FULL| SERIAL_PROD_UUT_MAPS | 1060K| 57M| | 6520 (5)| 00:01:19 |
| 24 | TABLE ACCESS FULL | ORDR_LN_ITMS | 3244K| 151M| | 8011 (4)| 00:01:37 |
|* 25 | TABLE ACCESS BY INDEX ROWID | ORDR_LN_PARAMS | 1 | 35 | | 1 (0)| 00:00:01 |
|* 26 | INDEX RANGE SCAN | OLP_OL_FK_I | 1 | | | 1 (0)| 00:00:01 |
| 27 | TABLE ACCESS BY INDEX ROWID | SERIAL_MSTRS | 1 | 37 | | 1 (0)| 00:00:01 |
|* 28 | INDEX RANGE SCAN | SM_PK | 1 | | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("A"."GLBL_ID"="B"."OH_GLBL_ID")
6 - access("B"."OH_GLBL_ID"="C"."OL_OH_GLBL_ID" AND "B"."SEQ"="C"."OL_SEQ" AND
"B"."SUB_SEQ"="C"."OL_SUB_SEQ")
8 - access("C"."OTM_TRACK_ID"="D"."TRACK_ID"(+))
10 - filter("G"."UUT_GLBL_ID" IS NULL OR "G"."VALUE"='N')
11 - access("F"."GLBL_ID"="G"."UUT_GLBL_ID"(+))
12 - filter("G"."PAR_MSTR_ID"(+)=1003)
13 - access("E"."UUT_GLBL_ID"="F"."GLBL_ID"(+))
15 - access("LASTPRODSN"."OLI_OL_OH_GLBL_ID"="E"."OLI_OL_OH_GLBL_ID"(+) AND
"LASTPRODSN"."OLI_OL_SEQ"="E"."OLI_OL_SEQ"(+) AND "LASTPRODSN"."OLI_OL_SUB_SEQ"="E"."OLI_OL_SUB_SEQ"(+)
AND "LASTPRODSN"."OLI_SEQ"="E"."OLI_SEQ"(+) AND "LASTPRODSN"."PM_MSTR_ID"="E"."PM_MSTR_ID"(+) AND
"LASTPRODSN"."SM_GLBL_ID"="E"."SM_GLBL_ID"(+))
17 - access("C"."OL_OH_GLBL_ID"="LASTPRODSN"."OLI_OL_OH_GLBL_ID"(+) AND
"C"."OL_SEQ"="LASTPRODSN"."OLI_OL_SEQ"(+) AND "C"."OL_SUB_SEQ"="LASTPRODSN"."OLI_OL_SUB_SEQ"(+) AND
"C"."SEQ"="LASTPRODSN"."OLI_SEQ"(+))
19 - filter("X"."UUT_GLBL_ID"=MAX("UUT_GLBL_ID"))
21 - access("OLI_OL_OH_GLBL_ID"="X"."OLI_OL_OH_GLBL_ID" AND "OLI_OL_SEQ"="X"."OLI_OL_SEQ" AND
"OLI_OL_SUB_SEQ"="X"."OLI_OL_SUB_SEQ" AND "OLI_SEQ"="X"."OLI_SEQ")
22 - filter("OLI_OL_OH_GLBL_ID" IS NOT NULL AND "OLI_OL_SEQ" IS NOT NULL AND "OLI_SEQ" IS NOT NULL AND
"OLI_OL_SUB_SEQ" IS NOT NULL)
23 - filter("X"."OLI_OL_OH_GLBL_ID" IS NOT NULL AND "X"."OLI_OL_SEQ" IS NOT NULL AND "X"."OLI_SEQ" IS
NOT NULL AND "X"."OLI_OL_SUB_SEQ" IS NOT NULL)
25 - filter("J"."PAR_MSTR_ID"(+)=2070 AND "J"."VALUE"(+) IS NOT NULL AND "C"."SEQ"="J"."SEQ"(+))
26 - access("C"."OL_OH_GLBL_ID"="J"."OL_OH_GLBL_ID"(+) AND "C"."OL_SEQ"="J"."OL_SEQ"(+) AND
"C"."OL_SUB_SEQ"="J"."OL_SUB_SEQ"(+))
28 - access("LASTPRODSN"."SM_GLBL_ID"="H"."GLBL_ID"(+))
mod. action: adding tags, is that so difficult? -
Help required for improving performance of the Query
Hello SAP Techies,
I have an MRP query which shows inventory projection by calendar year/month.
There are 2 variables, Plant and Material, in the free characteristics, which are restricted by replacement path from a query result.
Another query is the Control M query, which is based on a multiprovider. The multiprovider is created on 5 cubes.
The query takes 15-20 minutes to get the result.
Due to the replacement path by query result for the 2 variables, the Control M query is executed first. Business wanted to see in the MRP query all those materials which are allocated to the base plant, hence they designed the query to use replacement path by query result. So it will get all the materials and plants from the Control M query and will find the inventory projection for the same selection in the MRP query.
Is there any way I can improve the performance of the query?
Query performance has been discussed innumerable times in the forums and there is a lot of information on the blogs and the wiki - please search the forums before posting, and if the existing posts do not answer your question satisfactorily then please raise a new post - else almost all the answers you get will be rehashed versions of previous posts (and in most cases without attribution to the original author).
Edited by: Arun Varadarajan on Apr 19, 2011 9:23 PM
Hi,
Please see if you can make these changes to the report; they will help in improving the performance of the query:
1. Select the right read mode.
Reading data during navigation minimizes the impact on the application server resources, because only data that the user requires will be retrieved.
2. Leverage filters as much as possible. Using filters contributes to reducing the number of database reads and the size of the result set, thereby significantly improving query runtimes.
Filters are especially valuable when associated with "big dimensions" where there is a large number of characteristics, such as customers and document numbers.
3. Reduce RKFs (restricted key figures) in the query to as few as possible. Also, define
calculated and restricted key figures on the InfoProvider level instead of locally within the query.
Regards
Garima -
Observing poor performance on the execution of the quereis
I am executing a relatively simple query which takes roughly 48-50 seconds to execute. Can someone suggest an alternate way to query the semantic model so that we can achieve a response time of a second or under? Here is the query:
PREFIX bp:<http://www.biopax.org/release/biopax-level3.owl#>
PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
PREFIX ORACLE_SEM_FS_NS:<http://oracle.com/semtech#dop=24,RESULT_CACHE,leading(t0,t1,t2)>
SELECT distinct ?entityId ?predicate ?object
WHERE
{
?entityId rdf:type bp:Gene .
?entityId bp:name ?x .
?entityId bp:displayName ?y .
?entityId ?predicate ?object .
FILTER(regex(?x, "GB035698", "i")||regex(?y, "GB035698", "i"))
}
The same query executed from SQL Developer takes about as long as well:
SELECT distinct /*+ parallel(24) */ subject, p, o
FROM TABLE
(sem_match ('{?subject rdf:type bp:Gene .
?subject bp:name ?x .
?subject bp:displayName ?y .
?subject ?p ?o
filter (regex(?x, "GB035698", "i")||regex(?y, "GB035698", "i"))}',
sem_models ('biopek'),
null,
sem_aliases
( sem_alias
('bp',
'http://www.biopax.org/release/biopax-level3.owl#')),
NULL,
null, null))
Is there anything I am missing? Can we do anything to optimize our data retrieval?
Best Regards,
Ami
For better performance when using FILTER involving regular expressions, you may want to create a full-text index on the MDSYS.RDF_VALUE$ table, as described in:
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e11828/sdo_rdf_concepts.htm#CIHJCHBJ
I am assuming that you are checking for case-insensitive occurrence of the string GB035698 in ?x or ?y. (On the other hand if you are checking if ?x or ?y is equal to a case-insensitive form of the string GB035698, then the filter could be written in an expanded form involving just value-equality checks and would not need a full-text index for performance.)
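For reference, a sketch of creating that index (hedged: the exact call varies by release, and the datatype URI below is my recollection of the documented text-index datatype, so verify it against the linked guide before using):
BEGIN
   -- creates the full-text (Oracle Text) index used by SPARQL text matching
   SEM_APIS.ADD_DATATYPE_INDEX('http://xmlns.oracle.com/rdf/text');
END;
/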
Thanks. -
Performance of the query for YTD info
Hi Experts,
I have a query that takes more time when you run it for a period or a year. When the selection criteria cover only a few days, it works quickly, but when it is run for a period it takes more than 10 minutes.
Give me suggestions on how to speed up the query.
Thanks
Build aggregates. Go to RSRT, execute and debug, and switch on the statistics and display-aggregates checkboxes. See whether an aggregate is hit, or why not, and build aggregates accordingly.
Also look at the statistics: if the database is the performance killer, see events 9000ff for this.
Other hints: reduce the start level of your query and remove some characteristics if possible.
Regards,
Juergen -
Hello Guys,
I am having a performance problem with a query. When I run the query with initial variables it displays the report quickly, but when I drill down with filter values it takes 10 minutes to display the report. Can anybody suggest possible solutions for the performance improvement?
Regards
Priya
Hi Priya,
First, you have to check what is causing the performance issue. You can do this by running the query in transaction RSRT. Execute the query in debug mode with the option "Display Statistics Data". You can navigate the query as you would normally. After that, check the statistics information and see what causes the performance issue. My guess is that you need to build an aggregate.
If the Data Manager time is high (a large % of the total runtime) and the ratio of the number of records selected vs. the number of records transferred is high (e.g. > 10), then try to build an aggregate to help the performance. To check for aggregate suggestions, run RSRT again with the option "Display Aggregates Found". It will show you which characteristics and characteristic selections would help (note that the suggestion might not always be the optimal one).
If OLAP Data Transfer time is high, then try optimizing the query design (e.g. try reducing the amount of restricted KFs or try calculating some KFs during the data flow instead of calculating them in the query).
Hope this helps. -
Poor performance for the 1st select everyday for AFRU table
Hello everyone, I have performance problems with the AFRU table. Every day, the first time I run a "Z" transaction it takes around 100-120 seconds, but the second time and afterwards it only takes four seconds. What could I do in order to reduce the first execution time?
This is the select:
SELECT * FROM AFRU WHERE MANDT = :A0 AND CATSBELNR = :A1 AND BUDAT = :A2 AND PERNR = :A3 AND STOKZ <> :A4 AND STZHL = :A5
The execution plan for this select uses index AFRU~ZCA with an acceptable cost of 6.319. Index AFRU~ZCA is a nonunique index with these columns: MANDT + CATSBELNR + BUDAT + PERNR
I'll appreciate any ideas.
Thanks in advance,
Santi.
What database system are you using (ASE, Oracle, etc.)?
If ASE, for the general issue of the first execution of a query taking longer, the two most likely reasons would be
a) the table's data has aged out of cache so the query has to do a lot of physical i/o to read the data back into cache
or
b) the query plan for the query has aged out of statement cache and needs to be recompiled.
This query looks pretty simple, so the data cache seems much more likely.
To get a better feel, some morning run the query with
set statistics io on
set statistics time on
then run it again and look for differences in the physical vs logical i/o numbers and compile vs execution times.
You could use a scheduled event (Job Scheduler, cron job) to run the query or some query very like it a little earlier in the day to prime the data cache with the table data. -
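A minimal sketch of such a priming query (all literal values are placeholders; whether it touches the same pages as the real query depends on the plan it gets, so compare it against your index AFRU~ZCA):
SELECT COUNT(*)
  FROM AFRU
 WHERE MANDT = '100'          -- placeholder client
   AND BUDAT >= '20240101';   -- placeholder recent posting-date range
The COUNT(*) forces the relevant rows through the buffer cache without returning a large result set to the client.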
BIBean poor performance when using Query.setSuppressRows()
Does anyone have experience in suppressing N/A cell values using BIBean? I was experimenting the use of Query.setSuppressRows(DataDirector.NA_SUPPRESSION). It does hide the rows that contains N/A values in a crosstab.
The problem is that the performance degrades significantly when I started drilling down the hierarchy. Without calling the method, I was able to drill into the pre-aggregated hierarchy in a few seconds. But with setSuppressRows(), it took almost 15 minutes to do the same drill.
Just for a comparison, I used DML to report on the same data that I wanted to drill into. With 'zerorow' set to either yes or no, the data was fetched in less than a second.
Thanks for any help.
- Wei
At the moment we are hoping this will be fixed in a 10g database patch which is due early 2005. However, if you are using an Analytic Workspace then you could use OLAP DML to filter the zero and NA rows before they are returned to the query. I think this involves modifying the OLAP views that return the AW objects via SQL commands.
Hope this helps
Business Intelligence Beans Product Management Team
Oracle Corporation -
Performance - reading the query cost
Which of these three represents the most efficient query? I'm wondering if the smallest plan in bytes is the best plan. Also, are eager spools expensive or inexpensive, and is a smaller sub-tree cost better than a larger sub-tree cost? The
first plan is 48 bytes and the second is 160 bytes. Both return the same records. project is smaller than architecture, but architecture comes from a different database.
R, J
The architecture table is small with about 1700 records, the project table is small with about 1100 records, and the access_level, approval_version, and security_classifications tables are tiny. This is a group of tables called repeatedly for security.
It's more like a scalar function (though it seeds a table), or perhaps it can be considered a boolean qualifier. The upper one is a modification that removes the tables that are joined in from another database. Those have been removed
and placed into a single schema-bound table. The query executes in about 10 milliseconds but is frequently blocked by itself.
My question is more about the way the scan plan changes (hashes, eager spools and so on). The cost of the architecture scan doesn't change much, but it pushes the table in the calling database to the top of the query plan and most of the scan becomes
a hash. There's a third rendition where the LEFT JOIN is changed to a LEFT LOOP JOIN [which I recently discovered]. I expected the schema-binding to provide better results - so the first of the two would be the better plan, I suppose.
I'm not sure. With the loop join condition, I was able to force a seek. It also crossed my mind that given the tiny size of the tables, perhaps they'd be better off without an index.
Here is what happens when the LOOP JOIN condition is added.
R, J -
Problem facing in performance of the query which calls one function
Hello Team,
Actually, I am facing a performance issue with the following query, even though indexes exist on the columns in the WHERE condition. I also used hints, but to no avail. Please suggest how I can increase the performance. Currently it takes around 5 minutes to execute the select statement.
SELECT crf_id, crf_code,crf_nme,
DECODE(UPPER( fn_hrc_refs( crf_id)),'Y','Yes','No') ua_ind,
creation_date, datetime_stamp, user_id
FROM BC_FBCTOR
ORDER BY crf_nme DESC;
FUNCTION fn_hrc_refs (pi_crf_id IN NUMBER)
RETURN VARCHAR2
IS
childref VARCHAR2 (1) := 'N';
cnt NUMBER := 0;
BEGIN
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CALIB
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CALIB_DET
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_MPG_DTL
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CNTRY_DTL
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_COR
WHERE x_axis_crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_RESI_COR
WHERE y_axis_crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_PRIME
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM DR_FBCT
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM DR_RISK
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_DTL
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_RESULT
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_LOSS
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_CRITERIA
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_CORR
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_PORT
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
RETURN childref;
EXCEPTION
WHEN OTHERS
THEN
childref := 'N';
RETURN childref;
END;
Regards,
Ashis
You are checking for the existence of detail records. What is the purpose of this? Most applications I know rely on normal foreign key constraints to ensure parent-child relationships.
The select is slow for two reasons.
Reason one: you count all detail records when you only need to know whether one detail record exists or not.
Reason two: multiple context switches between SQL and PL/SQL for each row of your parent table.
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CALIB
WHERE crf_id = pi_crf_id;
IF cnt <> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
This could be replaced by:
begin
SELECT 'Y'
INTO childref
FROM BC_CALIB
WHERE crf_id = pi_crf_id
and rownum = 1; /* only fetch one row */
exception
when no_data_found then
/* continue with the next select */
end;
return childref;
but this would still do a lot of context switches, especially for parents without detail records.
Also consider this option:
SELECT crf_id, crf_code,crf_nme,
case when exists (SELECT null FROM BC_CALIB t WHERE t.crf_id = BC_FBCTOR.crf_id)
then 'Yes'
when exists (SELECT null FROM BC_CALIB_DET t WHERE t.crf_id = BC_FBCTOR.crf_id)
then 'Yes'
else
'No'
end ua_ind,
creation_date, datetime_stamp, user_id
FROM BC_FBCTOR
ORDER BY crf_nme DESC;
It should also be possible to use a UNION ALL select instead of the separate CASE branches (a sketch follows below). But I chose this way since it resembles your original selection a bit better. And you could change the 'Yes' to something different, like 'Childs in Table abc'.
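For illustration, a minimal sketch of that UNION ALL variant (only two of the sixteen child tables are spelled out here; extend the UNION ALL list with the remaining ones):
SELECT crf_id, crf_code, crf_nme,
       CASE
          WHEN EXISTS (SELECT NULL
                         FROM (SELECT crf_id FROM BC_CALIB
                               UNION ALL
                               SELECT crf_id FROM BC_CALIB_DET
                               -- ... UNION ALL the remaining child tables ...
                              ) c
                        WHERE c.crf_id = BC_FBCTOR.crf_id)
             THEN 'Yes'
          ELSE 'No'
       END AS ua_ind,
       creation_date, datetime_stamp, user_id
  FROM BC_FBCTOR
 ORDER BY crf_nme DESC;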
Edited by: Sven W. on Sep 5, 2008 2:29 PM -
Poor Performance of the WebLogic Portal System
Hi,
I am facing an issue which has become a bottleneck over time as far as the development of my application is concerned.
My problem is that when I run and wish to see my portal page (web page) in Internet Explorer/Mozilla Firefox, it takes a long time to render (approx. 10 mins). This is affecting productivity, as rendering the page is a frequent step when checking the output of your work/changes.
I would be very thankful if anyone can guide me on what is wrong. Is this problem with me only? Why is the WebLogic Portal system so slow compared to other portal systems like Microsoft's SharePoint and IBM's WebSphere portal system?
I am using Weblogic Portal v10.
CPU is 3.2 Ghz, 4 GB RAM, 3 MB Cache.
Please guide. I would appreciate it if someone could suggest a way to speed up the page rendering. I have tried changing the heap size etc. but failed.
Thank you all. Have a great day.
10 minutes?!
We need to narrow that down; it may be something in your portlet implementation. An easy way to get an idea would be to take a series of Java thread dumps of the WLP server instance while it is processing that portlet. On Windows, press Ctrl-Break, or Google for the way to do it on your platform.
It will print out what each thread is working on - if you see your code in there over a period of time, you've got a problem in your portlet. If it is stuck in WLP code, let us know.
I also did a blog entry about performance improvement tips during iterative development, some might apply for you:
http://peterlaird.blogspot.com/2007/05/optimized-development-for-weblogic.html
Peter