Query Performance - Query not using proper plan
Hello,
I am experiencing a performance issue with queries that span multiple partitions/tablespaces. Specifically, a query run from within a stored procedure does not use the indexes; instead, full table scans are performed, and the query takes 30+ minutes to complete. The same query, when run outside the SP, returns results in milliseconds.
In an attempt to correct the issue, table stats were updated, and the stored procedure was re-compiled along with any packages that may have been affected by the stats update. In addition, the database was bounced (shutdown, restarted), but no noticeable performance improvement was achieved.
I'm looking for any insight on how to correct this issue.
I can provide additional information if required.
Thanks,
Scott.
Post the query, the stored procedure, and the table structure. My first guess here is that the stored procedure is binding an incorrect datatype, but I need to see the requested info to be certain.
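The failure mode this reply describes can be sketched without an Oracle instance. Using SQLite's planner as a stand-in (table and data invented for illustration): when a conversion ends up applied to the indexed column, which is exactly what an implicit datatype conversion on a mismatched bind variable does, the index becomes unusable.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (account_id TEXT PRIMARY KEY, balance REAL)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(str(i), float(i)) for i in range(1000)])

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN output is the human-readable step.
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Bind value matches the column's type: the planner can SEARCH via the PK index.
good = plan("SELECT balance FROM accounts WHERE account_id = '42'")

# A conversion wrapped around the indexed column (the effect of a mismatched
# bind datatype) forces a full SCAN of the table.
bad = plan("SELECT balance FROM accounts WHERE CAST(account_id AS INTEGER) = 42")

print(good)
print(bad)
```

The usual fix on the stored-procedure side is to declare the bind variable with exactly the column's datatype rather than relying on implicit conversion.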
Similar Messages
-
Query tuning in Oracle using Explain Plan
Adding to my question below: I have now modified the query, and the path shown by 'Explain Plan' has been reduced. The 'Time' column of PLAN_TABLE is also showing a much smaller value. However, some people are suggesting that I consider the time the query takes to execute in Toad. Is that practical? Please help!!
Hi, I am using Oracle 11g. I need to optimize a SELECT query (minimize its execution time), and I need to know how 'Explain Plan' can help me. I know how to use the Explain Plan command, and I look at the PLAN_TABLE table to see the details of the plan. Please guide me on which columns of PLAN_TABLE to consider while modifying the query for optimization: some people say the 'Time' column, some say 'Bytes', etc. Some suggest minimizing full table scans, while others say I should minimize the total number of operations (fewer rows shown in PLAN_TABLE). According to an experienced friend of mine, full table scans should be reduced (e.g., if there are 5 full table scans in the plan, try to reduce them to fewer than 5). However, for any full table scan operation in PLAN_TABLE, the 'Time' column shows a value of only 1, which is very small. Does this mean the full scans are actually taking very little time? If so, full table scans are fast in my case and there is no need to work on them. Some articles suggest that the plan shown by 'Explain Plan' is not necessarily the one followed when the query executes. So what should I look for? How should I optimize the query, and how will I know that it is optimized? Please help!!
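On the 'Time' column question: the figures in PLAN_TABLE are the optimizer's estimates, not measurements, so comparing them to the elapsed time Toad reports is comparing a forecast with reality. A generic sketch of the difference, using SQLite in place of Oracle with made-up data:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, i % 100) for i in range(100_000)])

# The plan describes HOW the engine intends to run the query (here: a full scan)...
plan = [row[3] for row in con.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE val = 7")]

# ...but only executing it yields the actual elapsed time, which is the number
# your users feel.
start = time.perf_counter()
rows = con.execute("SELECT * FROM t WHERE val = 7").fetchall()
elapsed = time.perf_counter() - start
print(plan, f"{elapsed:.4f}s, {len(rows)} rows")
```

In Oracle the analogous measurement comes from executing the statement and reading actual row-source statistics (e.g. `DBMS_XPLAN.DISPLAY_CURSOR` with the `ALLSTATS LAST` format), rather than from EXPLAIN PLAN estimates alone.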
Edited by: 885901 on Sep 20, 2011 2:10 AM
885901 wrote: [same question quoted in full - snipped]
How fast is fast enough? -
Query views are not using OLAP cache
Hi,
I am trying to pre-fill the OLAP cache with data from a query so as to improve the performance of query views.
I have read several documents on the topic, such as How to Performance Tuning with the OLAP Cache (http://www.sapadvisors.com/resources/Howto...PerformanceTuningwiththeOLAPCache$28pdf$29.pdf)
As far as I can see, I have followed the instructions and guidelines in detail on how to set up the cache and pre-fill it with data. However, when I run the query views they never use the cache. For example, point 3.4 in the abovementioned document does not correspond with my results.
I would like some input on what I am doing wrong:
1. In RSRT I have Cache mode = 1 for the specific query.
2. The query has no variables, but the following restrictions (in filter): 0CALMONTH = 09.2007, 10.2007, 11.2008 and 12.2007.
3. I have one query view with the restriction 0CALMONTH = 10.2007, 11.2008 and 12.2007.
4. I have a second query view, which builds on the same query as the first query view. This second query view has the restriction 0CALMONTH = 11.2008 and 12.2007.
5. There are no variables in the query.
6. I run the query.
7. I run the first query view, and the second query view immediately after.
8. I check ST03 and RSRT and see that cache has not been used for either of the query views.
Looking at point 3.4 in the abovementioned document, I argue that the three criteria have been fulfilled:
1. Same query ID
2. The first query view is a superset of the second query view
3. 0CALMONTH is a part of the drill-down of the first query view.
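The superset rule in criterion 2 can be modelled in a few lines. Everything below (class, function, and data) is invented purely to illustrate the idea that a cached result can answer any query whose filter is a subset of the cached filter:

```python
class ToyOlapCache:
    """Minimal model of a result cache keyed by (query id, filter set)."""
    def __init__(self):
        self.store = {}

    def run(self, query_id, months, fetch):
        # A cached entry for the same query whose filter is a superset
        # of the requested months can serve this request.
        for (qid, cached), rows in self.store.items():
            if qid == query_id and set(months) <= cached:
                return [r for r in rows if r[0] in months], True   # cache hit
        rows = fetch(months)
        self.store[(query_id, frozenset(months))] = rows
        return rows, False                                          # cache miss

def fetch_from_db(months):
    return [(m, 100) for m in months]   # stand-in for the real InfoProvider read

cache = ToyOlapCache()
_, hit1 = cache.run("Q1", ["09.2007", "10.2007", "11.2007", "12.2007"], fetch_from_db)
_, hit2 = cache.run("Q1", ["10.2007", "11.2007", "12.2007"], fetch_from_db)  # subset
print(hit1, hit2)
```

When the real system does not behave this way, the usual suspects are something that invalidated the cache between the two runs (such as a change run on a characteristic used in the query) or the two requests not actually sharing the same query ID.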
Can someone tell me what is wrong with my set-up?
Kind regards,
Thor
You need to use the following process in your process chain: "Attribute change run (ATTRIBCHAN)". This process needs to be incorporated into the process chains that load data into the provider on top of which your query is based.
See following links on topic how to build it:
https://help.sap.com/saphelp_nw73/helpdata/en/4a/5da82c7df51cece10000000a42189b/frameset.htm
https://help.sap.com/saphelp_nw70ehp1/helpdata/en/9a/33853bbc188f2be10000000a114084/content.htm
cheers
m./ -
Query Performance - Query very slow to run
I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is that the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't aggregate it. Is there anything I can do to improve performance?
Hi Joel,
Walkthrough Checklist for Query Performance:
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local ones so as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
9. If Alternative UOM solution is used, turn off query cache.
10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries - for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
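Point 15 in the checklist generalizes beyond BEx: restricting first means the expensive calculation only runs on the surviving rows. A language-neutral sketch (plain Python, invented data) that counts how often the "calculation" executes:

```python
data = list(range(10_000))
calls = {"n": 0}

def expensive(x):
    calls["n"] += 1          # count how often the "calculation" runs
    return x * x

# Restriction before calculation: only the filtered rows are computed.
restricted_first = [expensive(x) for x in data if x % 1000 == 0]
cost_restricted_first = calls["n"]

calls["n"] = 0
# Calculation before restriction: every row pays, then most are discarded.
all_calculated = [expensive(x) for x in data]
calculated_first = [all_calculated[i] for i, x in enumerate(data) if x % 1000 == 0]
cost_calculated_first = calls["n"]

print(cost_restricted_first, cost_calculated_first)
```

Same results, three orders of magnitude fewer calculations; the same reasoning applies to restricted key figures evaluated before formulas.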
Regards
Vivek Tripathi -
Performance of query. Indexes not used
Hi all,
I have a select statement which is quite simple and straight forward. I join Accounts, Transactions and Customers to get some records for processing.
Query looks something like this
Select
a.x,
a.y,
a.z,
t.p,
t.q,
t.r,
c.m,
c.n,
c.o
from
accounts a, transactions t, customers c
where t.account_id=a.account_id
and t.customer_id=c.customer_id
and transaction_date='1/2/2009'
Account's primary key is ACCOUNT_ID,
Transaction's primary key is TRANSACTION_ID
and
Customers' primary key is CUSTOMER_ID
I have the where clause on primary keys, and hence I expect index scans rather than full table scans. However, from the explain plan I see that there is a full table scan on accounts.
I removed all three accounts columns from the select statement (a.x, a.y and a.z) and I see that it then does an index scan rather than a full table scan.
Is there a reason why the full table scan happens when I include the columns?
Kindly suggest.
1) Can you use the \ tag to preserve white space?
2) Can you use DBMS_XPLAN to generate the plan?
3) The plan you posted doesn't appear to have a filter on TRANSACTION_DATE
4) I don't understand the comment "We are comparing it to a date itself". The query you posted initially is comparing TRANSACTION_DATE to a string. If you are comparing TRANSACTION_DATE to a date, the query you posted initially must not really be what you are executing.
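Justin's point 4 is worth demonstrating. Here it is with SQLite and a made-up table, but the moral for Oracle is the same: compare a DATE column to a DATE (or an explicit TO_DATE), never to a bare string, because a string comparison depends on the session's format settings.

```python
import sqlite3
import datetime

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transactions (transaction_id INTEGER PRIMARY KEY,"
            " transaction_date TEXT)")
con.execute("INSERT INTO transactions VALUES (1, '2009-02-01')")

# A string in a locale-dependent format does not match the stored value:
wrong = con.execute("SELECT * FROM transactions"
                    " WHERE transaction_date = '1/2/2009'").fetchall()

# Binding an unambiguous date value does:
d = datetime.date(2009, 2, 1).isoformat()
right = con.execute("SELECT * FROM transactions"
                    " WHERE transaction_date = ?", (d,)).fetchall()
print(wrong, right)
```

In Oracle, `transaction_date = '1/2/2009'` forces an implicit conversion governed by NLS_DATE_FORMAT, so the same statement can match, miss, or error depending on session settings.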
Justin
Edited by: Justin Cave on Jun 25, 2009 5:07 PM
5) Oh, and a quick look at the estimated cardinalities seems to indicate that your statistics are wildly out of whack if the 1.5 - 2 million figure you quote is correct. -
Query performance problem when using hierarchies
Hello All,
I've a query which is built on a hieracrhy with the following structure for eg.
A
|_ B
|  |_ L1
|
|_ C
   |_ L2
When I restrict the query to hierarchy levels B and C simultaneously, the query executes fine. But when I directly restrict the query to hierarchy level A, the query runs endlessly.
Could some one please help me out as to why this is the case ?
I don't have aggregates built on any of the hierarchy level.
Best Regards,
Sanjay
Hi Roberto,
thanks for your response. However, the problem is not solved even after applying the suggestions in note 738098 :(. These queries used to execute fine until yesterday, and there have been no major additions to the hierarchy. Please let me know if there is anything else that can be done. We are planning to bounce the system to see if there are any performance improvements.
PS: I've awarded points to you nevertheless, as the option suggested in the note seems useful and should be tried for this kind of performance issue
Best Regards,
Sanjay -
Query Performance and reading an Explain Plan
Hi,
Below I have posted a query that is running slowly for me - upwards of 10 minutes which I would not expect. I have also supplied the explain plan. I'm fairly new to explain plans and not sure what the danger signs are that I should be looking out for.
I have added indexes to these tables, a lot of which are used in the JOIN and so I expected this to be quicker.
Any help or pointers in the right direction would be very much appreciated -
SELECT a.lot_id, a.route, a.route_rev
FROM wlos_owner.tbl_current_lot_status_dim a, wlos_owner.tbl_last_seq_num b, wlos_owner.tbl_hist_metrics_at_op_lkp c
WHERE a.fw_ver = '2'
AND a.route = b.route
AND a.route_rev = b.route_rev
AND a.fw_ver = b.fw_ver
AND a.route = c.route
AND a.route_rev = c.route_rev
AND a.fw_ver = c.fw_ver
AND a.prod = c.prod
AND a.lot_type = c.lot_type
AND c.step_seq_num >= a.step_seq_num
PLAN_TABLE_OUTPUT
Plan hash value: 2447083104
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 333 | 33633 | 1347 (8)| 00:00:17 |
|* 1 | HASH JOIN | | 333 | 33633 | 1347 (8)| 00:00:17 |
|* 2 | HASH JOIN | | 561 | 46002 | 1333 (7)| 00:00:17 |
|* 3 | TABLE ACCESS FULL| TBL_CURRENT_LOT_STATUS_DIM | 11782 | 517K | 203 (5)| 00:00:03 |
|* 4 | TABLE ACCESS FULL| TBL_HIST_METRICS_AT_OP_LKP | 178K | 6455K | 1120 (7)| 00:00:14 |
|* 5 | TABLE ACCESS FULL | TBL_LAST_SEQ_NUM | 8301 | 154K | 13 (16)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND "A"."FW_VER"=TO_NUMBER("B"."FW_VER"))
2 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV" AND "A"."FW_VER"="C"."FW_VER" AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE")
    filter("C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
3 - filter("A"."FW_VER"=2)
4 - filter("C"."FW_VER"=2)
5 - filter(TO_NUMBER("B"."FW_VER")=2)
24 rows selected.
Guys, thank you for your help.
I changed the type of the offending column and the plan looks a lot better and results seem a lot quicker.
However I have added to my SELECT, quite substantially, and have a new explain plan.
There are two sections in particular that have a high cost and I was wondering if you seen anything inherently wrong or can explain more fully what the PLAN_TABLE_OUTPUT descriptions are telling me - in particular
INDEX FULL SCAN
PLAN_TABLE_OUTPUT
Plan hash value: 3665357134
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 4 | 316 | 52 (2)| 00:00:01 |
|* 1 | VIEW | | 4 | 316 | 52 (2)| 00:00:01 |
| 2 | WINDOW SORT | | 4 | 600 | 52 (2)| 00:00:01 |
|* 3 | TABLE ACCESS BY INDEX ROWID | TBL_HIST_METRICS_AT_OP_LKP | 1 | 71 | 1 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 4 | 600 | 51 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 75 | 5925 | 32 (0)| 00:00:01 |
|* 6 | INDEX FULL SCAN | UNIQUE_LAST_SEQ | 89 | 2492 | 10 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID| TBL_CURRENT_LOT_STATUS_DIM | 1 | 51 | 1 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | TBL_CUR_LOT_STATUS_DIM_IDX1 | 1 | | 1 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | TBL_HIST_METRIC_AT_OP_LKP_IDX1 | 29 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("SEQ"=1)
3 - filter("C"."FW_VER"=2 AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE" AND "C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
6 - access("B"."FW_VER"=2)
    filter("B"."FW_VER"=2)
8 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND "A"."FW_VER"=2)
9 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV") -
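On the INDEX FULL SCAN question above: a full scan of an index walks every entry in key order, while a range scan descends to a start key and reads only the matching slice. SQLite exposes the same distinction; the table and index below are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE seq (route TEXT, route_rev INTEGER, fw_ver INTEGER)")
con.execute("CREATE INDEX seq_idx ON seq (route, route_rev, fw_ver)")
con.executemany("INSERT INTO seq VALUES (?, ?, ?)",
                [(f"R{i % 50}", i % 3, 2) for i in range(1000)])

def plan(sql):
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# No predicate on the leading column: the whole index is read in order (a full
# scan of the index, which still avoids the table because the index covers
# the query).
full = plan("SELECT route FROM seq ORDER BY route")

# A predicate on the leading column: only the matching slice is read.
ranged = plan("SELECT route FROM seq WHERE route = 'R7'")

print(full)
print(ranged)
```

A full index scan is not necessarily bad, it is cheaper than a full table scan when the index covers the query, but a high cost on one usually means the predicate could not be turned into a range on the index's leading columns.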
Lookup-table and query-database do not use global transaction
Hi,
following problem:
DbAdapter inserts data into DB (i.e. an invoice).
Process takes part in global transaction.
After the insert there is a transformation which uses query-database and / or lookup-table.
It seems these XPath / XSLT functions are NOT taking part in the transaction and so we can not access information from the current db transaction.
I know workarounds like using DbAdapter for every query needed, etc. but this will cost a lot of time to change.
Is there any way to share transaction in both DbAdapter insert AND lookup-table and query-database?
Thanks, Best Regards,
Martin
One dba contacted me and made this statement:
Import & export utilities are not independent from characterset. All
user data in text related datatypes is exported using the character set
of the source database. If the character sets of the source and target
databases do not match a single conversion is performed.
So far, that does not appear to be correct.
nls_characterset = AL32UTF8
nls_nchar_characterset = UTF8
Running on Windows.
EXP produces a backup in WE8MSWIN1252.
I found that if I change the setting of the NLS_LANG registry setting for my oracle home, the exp utility exports to that character set.
I changed the nls_lang
from AMERICAN_AMERICA.WE8MSWIN1252
to AMERICAN_AMERICA.UTF8
Unfortunately , the export isn't working right, although it did change character sets.
I get a warning on a possible character set conversion issue from AL32UTF8 to UTF8.
Plus, I get an EXP-00056 "Oracle error 932 encountered":
ORA-00932: inconsistent datatypes: expected BLOB, CLOB, get CHAR.
EXP-00000: export terminated unsuccessfully.
The schema I'm exporting with has exactly one procedure in it. Nothing else.
I guess getting a new error message is progress. :)
Still can't store multi-lingual characters in data tables. -
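The conversion warning in the thread above is not noise: whenever data moves between character sets, characters outside the target set are at risk. A small Python illustration of a lossy round trip, with cp1252 standing in for WE8MSWIN1252:

```python
text = "naïve résumé ∑"          # '∑' exists in UTF-8 but not in cp1252

# Encoding into the smaller character set forces a substitution character...
lossy = text.encode("cp1252", errors="replace").decode("cp1252")

# ...so the round trip is not faithful.
print(text)
print(lossy)
```

This is the mechanism behind silent data mangling when exporting from an AL32UTF8 database through a client whose NLS_LANG names a smaller character set.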
Query performance affected with use of package.function in 11g
Hi,
I have a view that helps select account details for a particular account and use that in a form. The query in the view is:
select acc.* from accounts acc where acc.account_no = pkgacc.fGetAccNo;
Here "pkgacc" is a package that has set and get methods. ACCOUNT_NO is the PK for ACCOUNTS table. This same query when run in a 10g database makes use of the PK INDEX. However in 11g it does a FULL SCAN.
Regards
Hi,
1/ Volume is the same
2/ All statistics are up to date
10g Plan
Plan
SELECT STATEMENT ALL_ROWS
Cost: 18 Bytes: 462 Cardinality: 3
23 NESTED LOOPS
21 NESTED LOOPS
Cost: 18 Bytes: 462 Cardinality: 3
19 VIEW VIEW SYS.VW_NSO_1
Cost: 12 Bytes: 39 Cardinality: 3
18 HASH UNIQUE
Cost: 12 Bytes: 110 Cardinality: 3
17 UNION-ALL
2 TABLE ACCESS BY INDEX ROWID TABLE SUMMIT.OLD_ACCOUNT_LINKS
Cost: 0 Bytes: 25 Cardinality: 1
1 INDEX RANGE SCAN INDEX SUMMIT.OACCL_2
Cost: 0 Cardinality: 1
8 NESTED LOOPS
6 NESTED LOOPS
Cost: 7 Bytes: 40 Cardinality: 1
4 TABLE ACCESS BY INDEX ROWID TABLE SUMMIT.ACCOUNTS
Cost: 4 Bytes: 18 Cardinality: 1
3 INDEX RANGE SCAN INDEX (UNIQUE) SUMMIT.ACC_PRIME
Cost: 3 Cardinality: 1
5 INDEX RANGE SCAN INDEX SUMMIT.ACCL_2
Cost: 2 Cardinality: 1
7 TABLE ACCESS BY INDEX ROWID TABLE SUMMIT.ACCOUNT_LINKS
Cost: 3 Bytes: 22 Cardinality: 1
16 NESTED LOOPS
Cost: 5 Bytes: 45 Cardinality: 1
14 MERGE JOIN CARTESIAN
Cost: 5 Bytes: 30 Cardinality: 1
10 TABLE ACCESS BY INDEX ROWID TABLE CVC.ACT01
Cost: 3 Bytes: 15 Cardinality: 1
9 INDEX RANGE SCAN INDEX (UNIQUE) CVC.PK_ACT01
Cost: 2 Cardinality: 1
13 BUFFER SORT
Cost: 2 Bytes: 30 Cardinality: 2
12 TABLE ACCESS BY INDEX ROWID TABLE CVC.ACF02
Cost: 2 Bytes: 30 Cardinality: 2
11 INDEX RANGE SCAN INDEX (UNIQUE) CVC.PK_ACF02
Cost: 1 Cardinality: 2
15 INDEX UNIQUE SCAN INDEX (UNIQUE) CVC.PK_BIT41
Cost: 0 Bytes: 15 Cardinality: 1
20 INDEX UNIQUE SCAN INDEX (UNIQUE) SUMMIT.CUST_PRIME
Cost: 1 Cardinality: 1
22 TABLE ACCESS BY INDEX ROWID TABLE SUMMIT.CUSTOMERS
Cost: 2 Bytes: 141 Cardinality: 1
11g Plan
Plan
SELECT STATEMENT ALL_ROWS
Cost: 1,136,322 Bytes: 138,218,223,528 Cardinality: 897,520,932
19 HASH JOIN
Cost: 1,136,322 Bytes: 138,218,223,528 Cardinality: 897,520,932
1 TABLE ACCESS FULL TABLE SUMMIT.CUSTOMERS
Cost: 14,455 Bytes: 355,037,154 Cardinality: 2,517,994
18 VIEW VIEW SYS.VW_NSO_1
Cost: 20,742 Bytes: 11,685,473,072 Cardinality: 898,882,544
17 HASH UNIQUE
Cost: 20,742 Bytes: 35,955,720,360 Cardinality: 898,882,544
16 UNION-ALL
3 TABLE ACCESS BY INDEX ROWID TABLE SUMMIT.OLD_ACCOUNT_LINKS
Cost: 0 Bytes: 25 Cardinality: 1
2 INDEX RANGE SCAN INDEX SUMMIT.OACCL_2
Cost: 0 Cardinality: 1
8 HASH JOIN
Cost: 20,354 Bytes: 35,951,952,800 Cardinality: 898,798,820
5 TABLE ACCESS BY INDEX ROWID TABLE SUMMIT.ACCOUNTS
Cost: 5,398 Bytes: 1,400,292 Cardinality: 77,794
4 INDEX RANGE SCAN INDEX (UNIQUE) SUMMIT.ACC_PRIME
Cost: 102 Cardinality: 28,006
7 TABLE ACCESS BY INDEX ROWID TABLE SUMMIT.ACCOUNT_LINKS
Cost: 4,634 Bytes: 4,575,208 Cardinality: 207,964
6 INDEX RANGE SCAN INDEX SUMMIT.ACCL_2
Cost: 145 Cardinality: 37,433
15 HASH JOIN
Cost: 388 Bytes: 3,767,535 Cardinality: 83,723
10 TABLE ACCESS BY INDEX ROWID TABLE CVC.ACT01
Cost: 271 Bytes: 4,065 Cardinality: 271
9 INDEX RANGE SCAN INDEX (UNIQUE) CVC.PK_ACT01
Cost: 3 Cardinality: 342
14 HASH JOIN
Cost: 115 Bytes: 92,580 Cardinality: 3,086
12 TABLE ACCESS BY INDEX ROWID TABLE CVC.ACF02
Cost: 76 Bytes: 46,290 Cardinality: 3,086
11 INDEX RANGE SCAN INDEX (UNIQUE) CVC.PK_ACF02
Cost: 4 Cardinality: 555
13 INDEX FAST FULL SCAN INDEX (UNIQUE) CVC.PK_BIT41
Cost: 38 Bytes: 557,220 Cardinality: 37,148 -
Hi,
we are discovering a significant time period during the execution of a
query and we cannot identify the reason.
The following is a timestamped series of log messages -
[2003-10-30 17:47:06,572] 'DEBUG'
Test=,Thread=main,Time=7741,Creator=com.algorithmics.oprisk.jdo.BaseJDOService,Message=loadByCriteria:
time to get query ready - 10
[2003-10-30 17:47:07,644] 'DEBUG'
Test=,Thread=main,Time=8813,Creator=com.solarmetric.kodo.impl.jdbc.JDBC,Message=[
C:535863; T:9956845; D:8752113 ] get
[com.solarmetric.datasource.PoolConnection@82d37[identityHashCode:26704795,wrapped:com.solarmetric.datasource.PreparedStatementCache$CacheAwareConnection@82d37[identityHashCode:3408129,wrapped:oracle.jdbc.driver.OracleConnection@82d37]:
[requests=2;size=2;max=70;hits=0;created=2;redundant=0;overflow=0;new=2;leaked=0;unavailable=0]]]
from [com.solarmetric.datasource.DataSourceImpl$SortablePool[min=10;
max=25; size=10; taken=0]]
[2003-10-30 17:47:07,654] 'DEBUG'
Test=,Thread=main,Time=8823,Creator=com.solarmetric.kodo.impl.jdbc.SQL,Message=[
C:535863; T:9956845; D:8752113 ] preparing statement <10168913>: SELECT
blah blah blah)
[2003-10-30 17:47:07,664] 'DEBUG'
Test=,Thread=main,Time=8833,Creator=com.solarmetric.kodo.impl.jdbc.SQL,Message=[
C:535863; T:9956845; D:8752113 ] executing statement <10168913>: (SELECT
blah blah blah)): [reused=1;params={bunch of params}]
[2003-10-30 17:47:08,525] 'DEBUG'
Test=,Thread=main,Time=9694,Creator=com.algorithmics.oprisk.jdo.BaseJDOService,Message=loadByCriteria:
time to execute query - 1953
There is a ~1000ms period between our log message and the kodo log
message when the connection is obtained.
The following is the code generating this log -
log.debug(methodName+": time to get query ready - "+(end-start));
// run query
start = System.currentTimeMillis();
Collection results = (Collection) query.executeWithMap(map);
end = System.currentTimeMillis();
log.debug(methodName+": time to execute query - "+(end-start));
As you can see the only thing going on is the query.execute() call. What
concerns me is that there are 2 largish periods. 1 I assume originates
from the actual query. So where does the other come from?
We are using Kodo2.5.3.
kodo.properties looks like this -
javax.jdo.option.MinPool=10
javax.jdo.option.MaxPool=25
javax.jdo.option.Optimistic=true
javax.jdo.option.RetainValues=true
javax.jdo.option.NontransactionalRead=true
javax.jdo.Multithreaded=true
com.solarmetric.kodo.EnableQueryExtensions=true
Thanks,
Simon
What happens if you use Kodo 2.5.5 instead of 2.5.3?
-Patrick
Simon Horne wrote:
Yes the second period appears to be the actual query execution. This is
fine and no surprises there. But why is obtaining the connection taking
so long to complete. Our pool is set as min=10 max=25 and this is the
only 3rd or 4th query run in the complete test. Looking at the kodo log
output it states -
SortablePool[min=10; max=25; size=10; taken=0]]
so from this I am assuming there are 10 connections in the pool and none
currently in use.
Any ideas as to why it takes approximately 1100ms to grab one of these?
Is there any way to get kodo to log any extra information about the
connection pool. We've currently got TRACE logging enabled for
everything via log4j.
Thanks,
Simon
Marc Prud'hommeaux wrote:
Simon-
As you can see the only thing going on is the query.execute() call.
What concerns me is that there are 2 largish periods. 1 I assume
originates from the actual query. So where does the other come from?
Well, I can only assume that the second delay is from the query
execution. Without seeing what was replaced with "blah,blah", we won't
be able to give any hints about why the query might be slow (short of
general hints like ensuring that all the appropriate columns are
indexed, etc).
I assume that the first delay you are referring to is T=7741 "time to
get query ready - 10" and T=8813 where the connection is obtained. I
suspect that the reason for this delay might be that the connection pool is
exhausted, and it needs to make a new connection to the database. You
might try increasing the size of your connection pool, as well as
ensuring that you are releasing resources in a timely manner (i.e.,
closing
queries, extents, and PersistenceManagers immediately when they are
no longer needed).
In article <[email protected]>, Simon Horne wrote:
[original message quoted in full - snipped]
Simon -
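Marc's pool theory can be made concrete: many pools create connections lazily, so the first checkout per slot pays the full connect latency even when min=10 is configured. A toy pool (all names invented) showing the lazy first connect and the cheap reuse afterwards:

```python
import queue

class LazyPool:
    """Toy pool: connections are created on first demand, not at startup."""
    def __init__(self, size, connect):
        self.idle = queue.Queue()
        self.connect = connect
        self.uncreated = size      # slots whose connection doesn't exist yet

    def get(self):
        try:
            return self.idle.get_nowait()          # reuse: cheap
        except queue.Empty:
            if self.uncreated > 0:
                self.uncreated -= 1
                return self.connect()              # first use: pays connect cost
            return self.idle.get()                 # pool exhausted: block

    def release(self, conn):
        self.idle.put(conn)

connects = []
pool = LazyPool(10, lambda: connects.append(1) or object())

c1 = pool.get()          # triggers a real database connect
pool.release(c1)
c2 = pool.get()          # reuses c1: no new connect
print(len(connects))
```

If the real pool behaves like this, eagerly warming it at startup (or running a throwaway query per connection) moves the connect cost out of the first user requests.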
EPS File Not Using Proper Fonts in Illustrator
I am trying to import an EPS file that I created in Finale 2014 into Illustrator CS6 (64-bit). It does not want to use the Maestro font for the noteheads, replacing it with Myriad Pro instead. I have tried this on multiple machines with the same result. I have tried to switch the font manually, but it will not change.
Likewise, I can open the file in Photoshop just fine with all fonts intact.
I need it as an EPS file because I need to be able to ungroup the vectors and alter them in Illustrator.
devinc,
Maybe the Maestro font is not up to the requirements.
Illy is particularly particular about the quality of fonts, because she can work with them at a deeper/higher level than most, so you can easily have a font that works in all other applications, but Illy refuses to recognize it as a font.
Some failing fonts discussed in earlier threads have turned out to have rather basic flaws, such as redundant, superfluous, or (if I remember rightly) even stray points, or other clear errors in the paths that they consist of.
You may need to look for similar fonts that Illy is willing to work with. -
select *
from hrm_career x
WHERE x.begin_date = ( SELECT MAX(begin_date)
FROM hrm_career y
WHERE y.employee_id = x.employee_id AND
begin_date <= SYSDATE AND
primary_job = 'Y') AND
x.primary_job = 'Y'
I have the above query, which is not using the index created on the BEGIN_DT column.
I tried to force its use, but the index is still not used.
But when I apply a literal value, say
select *
from hrm_career x
WHERE x.begin_date ='10-20-2007'
it uses the index and results in a very fast response.
Can someone share some ideas on this?
Where should I look here?
SQL> set autotrace traceonly
SQL> select *
2 from hrm_career x
3 WHERE x.begin_date = ( SELECT MAX(begin_date)
4 FROM hrm_career y
5 WHERE y.employee_id = x.employee_id AND
6 begin_date <= SYSDATE AND
7 primary_job = 'Y') AND
8 x.primary_job = 'Y';
13454 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1417 Card=152 Bytes=35568)
1 0 FILTER
2 1 SORT (GROUP BY) (Cost=1417 Card=152 Bytes=35568)
3 2 HASH JOIN (Cost=254 Card=47127 Bytes=11027718)
4 3 INDEX (FAST FULL SCAN) OF 'HRM_CAREER_PK' (UNIQUE) (Cost=12 Card=25026 Bytes=500520)
5 3 TABLE ACCESS (FULL) OF 'HRM_CAREER' (Cost=81 Card=25335 Bytes=5421690)
Statistics
3671 recursive calls
9 db block gets
1758 consistent gets
2130 physical reads
0 redo size
2217762 bytes sent via SQL*Net to client
10359 bytes received via SQL*Net from client
898 SQL*Net roundtrips to/from client
128 sorts (memory)
1 sorts (disk)
13454 rows processed
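For what it's worth, one common rewrite for this "latest row per employee" pattern is an analytic function, which computes the per-employee MAX in the same scan instead of running a correlated subquery. A sketch only, not tested against this schema:

```sql
-- Sketch: should return the same rows as the correlated MAX subquery.
-- The equality to MAX(begin_date) already implies begin_date <= SYSDATE,
-- so the filter can safely move inside the inline view.
SELECT *
FROM  (SELECT x.*,
              MAX(begin_date) OVER (PARTITION BY employee_id) max_begin_date
       FROM   hrm_career x
       WHERE  begin_date <= SYSDATE
       AND    primary_job = 'Y')
WHERE begin_date = max_begin_date;
```

Note that this still reads the table once; given the trace above (13,454 rows returned out of roughly 25,000), a full scan is not necessarily wrong here, and an index on begin_date alone is unlikely to help at that selectivity.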
TKPROF
TKPROF: Release 9.2.0.6.0 - Production on Wed Dec 12 18:40:56 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: qnhg_ora_500.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8'
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 34.45 34.45
select condition
from
cdef$ where rowid=:1
call count cpu elapsed disk query current rows
Parse 4 0.00 0.00 0 0 0 0
Execute 4 0.00 0.00 0 0 0 0
Fetch 4 0.00 0.00 0 8 0 4
total 12 0.00 0.00 0 8 0 4
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS BY USER ROWID CDEF$
select *
from hrm_career x
WHERE x.begin_date = ( SELECT MAX(begin_date)
FROM hrm_career y
WHERE y.employee_id = x.employee_id AND
begin_date <= SYSDATE AND
primary_job = 'Y') AND
x.primary_job = 'Y'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.07 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 898 0.00 2.39 2038 946 9 13454
total 900 0.00 2.46 2038 946 9 13454
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN)
Rows Row Source Operation
13454 FILTER
25335 SORT GROUP BY
67496 HASH JOIN
25333 INDEX FAST FULL SCAN HRM_CAREER_PK (object id 25292)
25336 TABLE ACCESS FULL HRM_CAREER
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
13454 FILTER
25335 SORT (GROUP BY)
67496 HASH JOIN
25333 INDEX GOAL: ANALYZED (FAST FULL SCAN) OF 'HRM_CAREER_PK' (UNIQUE)
25336 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'HRM_CAREER'
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 898 0.00 0.00
SQL*Net more data to client 877 0.00 0.05
db file sequential read 1 0.01 0.01
db file scattered read 60 0.00 0.14
direct path write 9 0.00 0.00
direct path read 125 0.05 0.13
SQL*Net message from client 898 0.02 1.47
DELETE FROM PLAN_TABLE
WHERE
STATEMENT_ID=:1
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 6 6 6
Fetch 0 0.00 0.00 0 0 0 0
total 4 0.00 0.00 0 6 6 6
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN)
Rows Row Source Operation
0 DELETE
0 TABLE ACCESS FULL PLAN_TABLE
Rows Execution Plan
0 DELETE STATEMENT GOAL: CHOOSE
0 DELETE OF 'PLAN_TABLE'
0 TABLE ACCESS (FULL) OF 'PLAN_TABLE'
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 14.77 14.79
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,
o.dataobj#,o.flags
from
obj$ o where o.obj#=:1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
total 3 0.00 0.00 0 3 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)
EXPLAIN PLAN SET STATEMENT_ID='PLUS74964' FOR select *
from hrm_career x
WHERE x.begin_date = ( SELECT MAX(begin_date)
FROM hrm_career y
WHERE y.employee_id = x.employee_id AND
begin_date <= SYSDATE AND
primary_job = 'Y') AND
x.primary_job = 'Y'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 4 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.01 0 4 0 0
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
insert into plan_table (statement_id, timestamp, operation, options,
object_node, object_owner, object_name, object_instance, object_type,
search_columns, id, parent_id, position, other,optimizer, cost, cardinality,
bytes, other_tag, partition_start, partition_stop, partition_id,
distribution, cpu_cost, io_cost, temp_space, access_predicates,
filter_predicates )
values
(:1,SYSDATE,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,
:20,:21,:22,:23,:24,:25,:26,:27)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 6 0.00 0.00 0 3 6 6
Fetch 0 0.00 0.00 0 0 0 0
total 7 0.00 0.00 0 3 6 6
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN) (recursive depth: 1)
Rows Execution Plan
0 INSERT STATEMENT GOAL: CHOOSE
select o.name, u.name
from
sys.obj$ o, sys.user$ u where obj# = :1 and owner# = user#
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 1
Parsing user id: SYS (recursive depth: 1)
SELECT ID ID_PLUS_EXP,PARENT_ID PARENT_ID_PLUS_EXP,LPAD(' ',2*(LEVEL-1))
||OPERATION||DECODE(OTHER_TAG,NULL,'','*')||DECODE(OPTIONS,NULL,'','
('||OPTIONS||')')||DECODE(OBJECT_NAME,NULL,'',' OF '''||OBJECT_NAME||'''')
||DECODE(OBJECT_TYPE,NULL,'',' ('||OBJECT_TYPE||')')||DECODE(ID,0,
DECODE(OPTIMIZER,NULL,'',' Optimizer='||OPTIMIZER))||DECODE(COST,NULL,'','
(Cost='||COST||DECODE(CARDINALITY,NULL,'',' Card='||CARDINALITY)
||DECODE(BYTES,NULL,'',' Bytes='||BYTES)||')') PLAN_PLUS_EXP,OBJECT_NODE
OBJECT_NODE_PLUS_EXP
FROM
PLAN_TABLE START WITH ID=0 AND STATEMENT_ID=:1 CONNECT BY PRIOR ID=PARENT_ID
AND STATEMENT_ID=:1 ORDER BY ID,POSITION
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 22 0 6
total 4 0.00 0.00 0 22 0 6
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN)
Rows Row Source Operation
6 SORT ORDER BY
6 CONNECT BY WITH FILTERING
1 NESTED LOOPS
1 TABLE ACCESS FULL PLAN_TABLE
1 TABLE ACCESS BY USER ROWID PLAN_TABLE
5 NESTED LOOPS
6 BUFFER SORT
6 CONNECT BY PUMP
5 TABLE ACCESS FULL PLAN_TABLE
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
6 SORT (ORDER BY)
6 CONNECT BY (WITH FILTERING)
1 NESTED LOOPS
1 TABLE ACCESS (FULL) OF 'PLAN_TABLE'
1 TABLE ACCESS (BY USER ROWID) OF 'PLAN_TABLE'
5 NESTED LOOPS
6 BUFFER (SORT)
6 CONNECT BY PUMP
5 TABLE ACCESS (FULL) OF 'PLAN_TABLE'
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.09 0.09
SELECT ID ID_PLUS_EXP,OTHER_TAG OTHER_TAG_PLUS_EXP,OTHER OTHER_PLUS_EXP
FROM
PLAN_TABLE WHERE STATEMENT_ID=:1 AND OTHER_TAG IS NOT NULL ORDER BY ID
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 0
total 3 0.00 0.00 0 3 0 0
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN)
Rows Row Source Operation
0 SORT ORDER BY
0 TABLE ACCESS FULL PLAN_TABLE
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
0 SORT (ORDER BY)
0 TABLE ACCESS (FULL) OF 'PLAN_TABLE'
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
ALTER SESSION SET EVENTS '10046 trace name context off'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 0
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (ADMIN)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 7 0.00 0.09 0 4 0 0
Execute 8 0.00 0.00 0 6 6 6
Fetch 901 0.00 2.39 2038 971 9 13460
total 916 0.00 2.49 2038 981 15 13466
Misses in library cache during parse: 6
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 906 0.00 0.00
SQL*Net message from client 906 34.45 50.82
SQL*Net more data to client 877 0.00 0.05
db file sequential read 1 0.01 0.01
db file scattered read 60 0.00 0.14
direct path write 9 0.00 0.00
direct path read 125 0.05 0.13
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 7 0.00 0.00 0 0 0 0
Execute 11 0.00 0.00 0 3 6 6
Fetch 5 0.00 0.00 0 11 0 5
total 23 0.00 0.00 0 14 6 11
Misses in library cache during parse: 4
Misses in library cache during execute: 2
9 user SQL statements in session.
6 internal SQL statements in session.
15 SQL statements in session.
5 statements EXPLAINed in this session.
Trace file: qnhg_ora_500.trc
Trace file compatibility: 9.02.00
Sort options: default
3 sessions in tracefile.
12 user SQL statements in trace file.
8 internal SQL statements in trace file.
15 SQL statements in trace file.
11 unique SQL statements in trace file.
5 SQL statements EXPLAINed using schema:
ADMIN.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
3945 lines in trace file.
Message was edited by:
Maran Viswarayar -
Hi,
I am working on a application Developed in Forms10g and Oralce 10g.
I have few very large transaction tables in db and most of the screens in my application based on these tables only.
When user performs a query (with out any filter conditions) the whole table(s) loaded into memory and takes very long time. Further queries on the same screen perform better.
How can I keep these tables in memory (buffer) always to reduce the initial query time?
or
Is there any way to share the session buffers with other sessions, sothat it does not take long time in each session?
or
Any query performance tuning suggestions will be appreciated.
Thanks in advance
Thanks a lot for your posts. Very large means around
12 million rows. Yep, that's a large table.
I have set the Query All Records property to "No".
Which is good. It means only enough records are fetched to fill the initial block; that's probably about 10 records. All the other records are not fetched from the database, so they're also not kept in memory at the Forms server.
Even when I try the query in SQL*Plus it is taking a long time.
Sounds like a query performance problem, not a Forms issue. You're probably better off asking in the database or SQL forum. You could at least include the SELECT statement here if you want any help with it. We can't guess why a query is slow if we have no idea what the query is.
My concern is, when I execute the same query again or
in another session (some other user or same user),
can I increase the performance because the tables are
already in memory. any possibility for this? Can I
set any database parameters to share the data between
sessions like that...
The database already does this. If data is retrieved from disk for one user, it is cached in the SGA (System Global Area), which is shared memory: the cached data is shared by all sessions, so other users should benefit from it.
Caching also has its limits. The most obvious one is the size of the SGA on the database server. If the table is 200 megabytes and the server only has 8 megabytes of cache available, then caching is of little use.
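One Oracle feature that speaks directly to the original question of keeping a table in the buffer cache is the KEEP buffer pool. A sketch, assuming the DBA has configured a non-zero DB_KEEP_CACHE_SIZE and that the table actually fits in it (a 12-million-row table may well not); the table name is a placeholder:

```sql
-- Assign the table's blocks to the KEEP pool so they are less likely
-- to be aged out of the cache. Only useful if DB_KEEP_CACHE_SIZE is
-- set and large enough to hold the table's working set.
ALTER TABLE big_transaction_table STORAGE (BUFFER_POOL KEEP);
```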
Am I thinking in the right way, or am I lost somewhere?
Don't know.
There are two approaches:
- try to tune the query or database for better performance. For starters, open SQL*Plus, execute "set timing on", then "set autotrace traceonly explain statistics", then execute your query and look at the results. It should give you an idea of how the database executes the query and what improvements could be made. You could come back here with the SELECT statement and the timing and trace results, but the database or SQL forum is probably better.
- MORE IMPORTANTLY: consider whether it is necessary for users to perform such time-consuming (and perhaps complex) queries. Do users really need the ability to query all records? Are they ever going to browse through millions of records?
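The SQL*Plus steps in the first approach can be sketched as a session like this (the query shown is a placeholder; substitute the real slow SELECT):

```sql
-- Minimal SQL*Plus tracing session for the steps described above.
SET TIMING ON
SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
SELECT * FROM big_transaction_table WHERE id = 42;
-- SQL*Plus now shows the execution plan and statistics
-- (consistent gets, physical reads, etc.) instead of the rows.
SET AUTOTRACE OFF
```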
Thanks -
Effect of Restricted Keyfigure & calculated keyfigure in query performance
Hi,
What is the effect of Restricted Keyfigure & calculated keyfigure in Query Performance?
Regards
Anil
As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improves query performance and ensures that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.
Other than performance, there might be other considerations that determine which of the options should be used.
If the RKFs are query-specific and not used in the majority of other queries, I would go for structure selections. From my personal experience, sometimes the developers end up with so many RKFs and CKFs that you easily get lost in the web, not to mention the duplication.
If the same structure is needed widely across most of the queries, it might be a good idea to have a global structure available across the provider, which could considerably cut down development time. -
Query not using the bitmap index
Hi,
Pls have a look at the query below:
SELECT
A.flnumber,
A.fldate,
SUBSTR(C.sec,1,3) sect,
D.element,
C.class,
SUM(C.qty) qty,
A.indicator,
DECODE(A.indicator, 'I', B.inrt, 'O', B.outrt, 'R', B.rting, NULL) direction,
B.rting
FROM
Header A,
Paths B,
PathData C,
ElementData D
WHERE
(D.category='N') AND
(A.rt=B.rt) AND
(C.element=D.element) AND
(A.fldate=C.fldate AND
A.flnumber=C.flnumber) AND
C.element IN (SELECT codes FROM Master_codes WHERE type='F')
GROUP BY A.flnumber,
A.fldate,
SUBSTR(C.sec, 1, 3),
D.element,
C.class,
A.indicator,
DECODE(A.indicator,'I', B.inrt, 'O', B.outrt,'R', B.rting, NULL),
B.rting
UNION ALL
SELECT
A.flnumber,
A.fldate,
SUBSTR(C.sec,1,3) sect,
D.element,
C.class,
SUM(C.qty) qty,
A.indicator,
DECODE(A.indicator, 'I', B.inrt, 'O', B.outrt, 'R', B.rting, NULL) ROUTE_direction,
B.rting
FROM
Header A,
Paths B,
PathData C,
ElementData D
WHERE
(D.category='N') AND
(A.rt=B.rt) AND
(C.element=D.element) AND
(A.fldate=C.fldate AND
A.flnumber=C.flnumber) AND
C.element NOT IN (SELECT codes FROM Master_codes WHERE type='F')
GROUP BY A.flnumber,
A.fldate,
SUBSTR(C.sec, 1, 3),
D.element,
C.class,
A.indicator,
DECODE(A.indicator,'I', B.inrt, 'O', B.outrt,'R', B.rting, NULL),
B.rting
The cost in the explain plan is very high. The table PathData has 42,710,366 records and there is a bitmap index on the flnumber and fldate columns, but the query above does not use those indexes. The other tables in the list are fine, as their respective PKs and indexes are used, but the table PathData is going for a "TABLE ACCESS BY LOCAL INDEX ROWID". I don't know what that means, but the cost for this step is 7126, which is high. I can't figure out why the query is not using the bitmap indexes for this table.
Please let me know what should be done.
Thread: HOW TO: Post a SQL statement tuning request - template posting
SELECT a.flnumber,
a.fldate,
Substr(c.sec, 1, 3) sect,
d.element,
c.class,
SUM(c.qty) qty,
a.INDICATOR,
Decode(a.INDICATOR, 'I', b.inrt,
'O', b.outrt,
'R', b.rting,
NULL) direction,
b.rting
FROM header a,
paths b,
pathdata c,
elementdata d
WHERE ( d.category = 'N' )
AND ( a.rt = b.rt )
AND ( c.element = d.element )
AND ( a.fldate = c.fldate
AND a.flnumber = c.flnumber )
AND c.element IN (SELECT codes
FROM master_codes
WHERE TYPE = 'F')
GROUP BY a.flnumber,
a.fldate,
Substr(c.sec, 1, 3),
d.element,
c.class,
a.INDICATOR,
Decode(a.INDICATOR, 'I', b.inrt,
'O', b.outrt,
'R', b.rting,
NULL),
b.rting
UNION ALL
SELECT a.flnumber,
a.fldate,
Substr(c.sec, 1, 3) sect,
d.element,
c.class,
SUM(c.qty) qty,
a.INDICATOR,
Decode(a.INDICATOR, 'I', b.inrt,
'O', b.outrt,
'R', b.rting,
NULL) route_direction,
b.rting
FROM header a,
paths b,
pathdata c,
elementdata d
WHERE ( d.category = 'N' )
AND ( a.rt = b.rt )
AND ( c.element = d.element )
AND ( a.fldate = c.fldate
AND a.flnumber = c.flnumber )
AND c.element NOT IN (SELECT codes
FROM master_codes
WHERE TYPE = 'F')
GROUP BY a.flnumber,
a.fldate,
Substr(c.sec, 1, 3),
d.element,
c.class,
a.INDICATOR,
Decode(a.INDICATOR, 'I', b.inrt,
'O', b.outrt,
'R', b.rting,
NULL),
b.rting
Edited by: sb92075 on Mar 13, 2011 7:58 AM
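Two observations that may help here. First, "TABLE ACCESS BY LOCAL INDEX ROWID" actually means rows are being fetched through an index on a partitioned table, so an index is being used for PathData after all. Second, the two UNION ALL branches differ only in IN vs NOT IN against Master_codes, and since element is a GROUP BY key, every group falls entirely into one branch or the other; the whole UNION ALL may therefore collapse to a single pass. A sketch only, not verified against this schema, and valid only if master_codes.codes contains no NULLs (a NULL would make the NOT IN branch return nothing, so the branches would no longer partition the rows) and the first branch's column aliases are acceptable for all rows:

```sql
-- Sketch: single-pass equivalent of the two UNION ALL branches,
-- under the assumptions stated above.
SELECT a.flnumber, a.fldate, SUBSTR(c.sec, 1, 3) sect,
       d.element, c.class, SUM(c.qty) qty, a.indicator,
       DECODE(a.indicator, 'I', b.inrt, 'O', b.outrt,
                           'R', b.rting, NULL) direction,
       b.rting
FROM   header a, paths b, pathdata c, elementdata d
WHERE  d.category  = 'N'
AND    a.rt        = b.rt
AND    c.element   = d.element
AND    a.fldate    = c.fldate
AND    a.flnumber  = c.flnumber
GROUP BY a.flnumber, a.fldate, SUBSTR(c.sec, 1, 3), d.element,
         c.class, a.indicator,
         DECODE(a.indicator, 'I', b.inrt, 'O', b.outrt,
                             'R', b.rting, NULL),
         b.rting;
```

Halving the work this way may matter more than the choice of index on a 42-million-row table.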