Query Performance Tuning
Dear Experts,
I am executing a query which is built on a MultiProvider. The execution time is approx. 12 minutes, and if I then drill down by a dimension to analyze, it takes another 15 minutes.
The technical details of the query are:
1) The MultiProvider fetches data from three different cubes.
2) It contains three different characteristics, of which one has a 10-level hierarchy and one has a 2-level hierarchy. Both hierarchies are externally maintained.
3) It contains KPIs which calculate the sales on different timelines, such as CM MTD, LM MTD, CY YTD, and LY YTD, with the help of a customer exit.
4) It converts the quantities into alternative units of measure through an exit.
Kindly suggest performance tuning measures. How shall I achieve minimum query execution time?
-Kushal
Hi Kushal,
Documentation on writing effective queries on a MultiProvider can be found at
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b03b7f4c-c270-2910-a8b8-91e0f6d77096
For NW2004s:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a9ab011a-0e01-0010-02a1-d496b94c9c0f
Modeling with MultiProviders:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2f5aa43f-0c01-0010-a990-9641d3d4eef7
Docs on performance are available at
https://service.sap.com/bi
-> performance
Also,
check the parallel processing settings; your query is on non-cumulative key figures. These SAP Notes may help:
629541 - MultiProvider: Parallel Processing
911939 - Optimization hint for logical MultiProvider partitioning
907881 - MultiProvider with (too) many part providers
Performance of non-cumulative queries in MultiProviders
903559 - MultiProvider optimization is only partially active
942554 - Performance when working with BI input help with MultiProviders on Oracle
607164 - MultiProvider: Sequential processing is faster than parallel
913975 - Performance problems for MultiProviders with many part providers
Hope this helps.
Best Regards,
VVenkat..
Similar Messages
-
SELECT query performance tuning
Hi All,
Our objective is to read values from three DSO tables; for that we have written three SELECT queries.
For this we have used three internal tables.
The code is written in an end routine.
A model SELECT statement for reading the values from the DSO, and the MOVE statements, are given below.
For 175,000 records the DTP takes about 8 hours to run.
Normally it should take just 20 minutes.
Can anybody help with this, please?
SELECT logsys
       doc_num
       doc_item
       comp_code
       /bic/gpusiteid
       /bic/gpumtgrid
       /bic/gpuspntyp
       /bic/gpuspndid
       /bic/gpuprocmt
       /bic/gpubufunc
       co_area
       order_quan
       po_unit
       entry_date
       /bic/gpuitmddt
       /bic/gpuovpoc
       currency
       /bic/gpudel_in
*      BT8695*               "field name garbled in the original post
       costcenter
       /bic/gpuordnum
       /bic/gpupostxt
*      BT8695*               "field name garbled in the original post
  FROM (c_poadm_det)
  INTO TABLE t_podetails
  FOR ALL ENTRIES IN result_package
  WHERE logsys   EQ result_package-logsys
    AND doc_num  EQ result_package-doc_num
    AND doc_item EQ result_package-doc_item.
LOOP AT result_package
  ASSIGNING <result_fields>.
  UNASSIGN <fs_podetails>.
  READ TABLE t_podetails
    ASSIGNING <fs_podetails>
    WITH KEY logsys   = <result_fields>-logsys
             doc_num  = <result_fields>-doc_num
             doc_item = <result_fields>-doc_item.
  IF sy-subrc EQ 0.
    MOVE <fs_podetails>-/bic/gpusiteid TO <result_fields>-/bic/gpusiteid.
    MOVE <fs_podetails>-/bic/gpumtgrid TO <result_fields>-/bic/gpumtgrid.
    MOVE <fs_podetails>-/bic/gpuspntyp TO <result_fields>-/bic/gpuspntyp.
    IF <result_fields>-order_quan NE ' '.
      MOVE c_true TO <result_fields>-/bic/gpucount.
    ENDIF.
  ENDIF.
ENDLOOP.
Hi,
In the READ statement, just use BINARY SEARCH; it will improve the performance. Before using BINARY SEARCH, the internal table must first be sorted by the fields you give in the condition of the READ statement.
SORT t_podetails BY logsys doc_num doc_item.   "add this line
LOOP AT result_package
  ASSIGNING <result_fields>.
* This UNASSIGN is unnecessary: <fs_podetails> is only assigned by a
* successful READ below.
  UNASSIGN <fs_podetails>.
  READ TABLE t_podetails
    ASSIGNING <fs_podetails>
    WITH KEY logsys   = <result_fields>-logsys
             doc_num  = <result_fields>-doc_num
             doc_item = <result_fields>-doc_item
    BINARY SEARCH.                             "use BINARY SEARCH here
  IF sy-subrc EQ 0.
    MOVE <fs_podetails>-/bic/gpusiteid TO <result_fields>-/bic/gpusiteid.
    MOVE <fs_podetails>-/bic/gpumtgrid TO <result_fields>-/bic/gpumtgrid.
    MOVE <fs_podetails>-/bic/gpuspntyp TO <result_fields>-/bic/gpuspntyp.
    IF <result_fields>-order_quan NE ' '.
      MOVE c_true TO <result_fields>-/bic/gpucount.
    ENDIF.
  ENDIF.
ENDLOOP.
Regards,
Dhina.. -
System/Query Performance: What to look for in these tcodes
Hi
I have been researching system/query performance in general in the BW environment.
I have seen tcodes such as
ST02: Buffer/table analysis
ST03: System workload
ST03N: Workload monitor (new)
ST04: Database monitor
ST05: SQL trace
ST06: Operating system monitor
ST66:
ST21:
ST22: ABAP runtime errors (dump analysis)
SE30: ABAP runtime analysis
RSRT: Query performance
RSRV: Analysis and repair of BW objects
For example, Note 948066 provides descriptions of these t-codes, but what I am not getting are thresholds and their implications. E.g., ST02 gives a tune summary screen with several rows and columns (not sure what they are called) containing several numerical values.
Is there some information on these rows/columns, such as the typical range for each of them, the acceptable figures, and which numbers under which columns suggest what problems?
Basically, some type of metric for each of these indicators provided by these performance t-codes.
Something similar to when you are using an operating system: CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
I will appreciate some guidelines on the use of these t-codes and, from your personal experience, which indicators you pay attention to under each t-code and why.
Thanks
Hi Amanda,
I forgot something .... SAP provides the EarlyWatch report; if you have Solution Manager, you can generate it yourself .... in the EarlyWatch report there will be red, yellow, and green lights for the parameters.
http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
EarlyWatch focuses on the following aspects:
· Server analysis
· Database analysis
· Configuration analysis
· Application analysis
· Workload analysis
EarlyWatch Alert, a free part of your standard maintenance contract with SAP, is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
Ask your Basis team for an EarlyWatch sample report; the parameters in EarlyWatch should cover what you are looking for, with red, yellow, and green indicators.
Understanding Your EarlyWatch Alert Reports
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
hope this helps. -
Hi friends,
I am Himansu. I am facing a problem during performance tuning of a query in Oracle; please guide me on how to tune a query so that it gives better performance.
Welcome to OTN.
Please post your thread in the SQL and PL/SQL forum
and provide your SQL query.
Hope this will help you. -
Need help with performance tuning
Hello,
My query returns 16K records and takes a long time to complete; for 7K records it takes 7.5 seconds.
Note: I used only seeded tables.
If possible, please help me tune it.
SELECT msi.inventory_item_id, msi.segment1, msi.primary_uom_code, msi.primary_unit_of_measure
FROM mtl_system_items_b msi, qp_list_lines qpll,qp_pricing_attributes qppr,
mtl_category_sets_tl mcs,mtl_category_sets_b mcsb,
mtl_categories_b mc, mtl_item_categories mcb
WHERE msi.enabled_flag = 'Y'
AND qpll.list_line_id = qppr.list_line_id
AND qppr.product_attr_value = TO_CHAR (msi.inventory_item_id(+))
AND qppr.product_uom_code = msi.primary_uom_code
AND mc.category_id = mcb.category_id
AND msi.inventory_item_id = mcb.inventory_item_id
AND msi.organization_id = mcb.organization_id
AND TRUNC (SYSDATE) BETWEEN NVL (qpll.start_date_active,TRUNC (SYSDATE)) AND NVL (qpll.end_date_active,TRUNC (SYSDATE))
AND mcs.category_set_name = 'LSS SALES CATEGORY'
AND mcs.language = 'US'
AND mcs.category_set_id = mcsb.category_set_id
AND mcsb.structure_id = mc.structure_id
AND msi.organization_id = :p_organization_id
AND qpll.list_header_id = :p_price_list_id
AND mcb.category_id = :p_category_id;
Thanks and regards,
Akil.
Thanks Helios, here are the answers.
Database version:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
PL/SQL Release 11.1.0.7.0
Explain plan:
| Id  | Operation                        | Name                     | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT                 |                          |     1 |   149 |  9439   (1)|
|   1 |  NESTED LOOPS                    |                          |     1 |   149 |  9439   (1)|
|*  2 |   HASH JOIN OUTER                |                          |     1 |   135 |  9437   (1)|
|*  3 |    HASH JOIN                     |                          |     1 |    71 |  9432   (1)|
|   4 |     NESTED LOOPS                 |                          |     2 |    76 |    53   (0)|
|*  5 |      TABLE ACCESS BY INDEX ROWID | QP_LIST_LINES            |     2 |    44 |    49   (0)|
|*  6 |       INDEX SKIP SCAN            | QP_LIST_LINES_N2         |   702 |       |    20   (0)|
|*  7 |      INDEX RANGE SCAN            | QP_PRICING_ATTRIBUTES_N3 |     1 |    16 |     2   (0)|
|*  8 |     TABLE ACCESS BY INDEX ROWID  | MTL_SYSTEM_ITEMS_B       | 46254 |  1490K|  9378   (1)|
|*  9 |      INDEX RANGE SCAN            | MTL_SYSTEM_ITEMS_B_N9    | 46254 |       |   174   (1)|
|  10 |    TABLE ACCESS FULL             | XX_WEB_ITEM_IMAGE_TBL    |   277 | 17728 |     5   (0)|
|* 11 |   INDEX RANGE SCAN               | MTL_ITEM_CATEGORIES_U1   |     1 |    14 |     2   (0)|
Predicate Information (identified by operation id):
   2 - access("XWIIT"."IMAGE_CODE"(+)="MSI"."SEGMENT1")
   3 - access("QPPR"."PRODUCT_ATTR_VALUE"=TO_CHAR("MSI"."INVENTORY_ITEM_ID") AND
              "QPPR"."PRODUCT_UOM_CODE"="MSI"."PRIMARY_UOM_CODE")
   5 - filter(NVL("QPLL"."START_DATE_ACTIVE",TRUNC(SYSDATE@!))<=TRUNC(SYSDATE@!) AND
              NVL("QPLL"."END_DATE_ACTIVE",TRUNC(SYSDATE@!))>=TRUNC(SYSDATE@!))
   6 - access("QPLL"."LIST_HEADER_ID"=TO_NUMBER(:P_PRICE_LIST_ID))
       filter("QPLL"."LIST_HEADER_ID"=TO_NUMBER(:P_PRICE_LIST_ID))
   7 - access("QPLL"."LIST_LINE_ID"="QPPR"."LIST_LINE_ID")
       filter("QPPR"."PRODUCT_UOM_CODE" IS NOT NULL)
   8 - filter("MSI"."ENABLED_FLAG"='Y')
   9 - access("MSI"."ORGANIZATION_ID"=TO_NUMBER(:P_ORGANIZATION_ID))
  11 - access("MCB"."ORGANIZATION_ID"=TO_NUMBER(:P_ORGANIZATION_ID) AND
              "MSI"."INVENTORY_ITEM_ID"="MCB"."INVENTORY_ITEM_ID" AND
              "MCB"."CATEGORY_ID"=TO_NUMBER(:P_CATEGORY_ID))
       filter("MCB"."CATEGORY_ID"=TO_NUMBER(:P_CATEGORY_ID))
Note
- 'PLAN_TABLE' is old version
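As an aside, that note usually means the PLAN_TABLE definition predates the database release; a minimal way to rebuild it and display a plan with DBMS_XPLAN (the query below is a placeholder) is:
-- Recreate a current PLAN_TABLE using the script shipped with the database:
-- @?/rdbms/admin/utlxplan.sql
EXPLAIN PLAN FOR
SELECT * FROM dual;  -- substitute the slow query here
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);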
TKPROF plan:
TKPROF: Release 11.1.0.7.0 - Production on Fri Nov 15 06:12:26 2013
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Trace file: LSSD_ora_19760.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT msi.inventory_item_id,
msi.segment1,
primary_uom_code,
primary_unit_of_measure,
xwiit.image_url
FROM mtl_system_items_b msi,
qp_list_lines qpll,
qp_pricing_attributes qppr,
mtl_item_categories mcb,
xx_web_item_image_tbl xwiit
WHERE msi.enabled_flag = 'Y'
AND qpll.list_line_id = qppr.list_line_id
AND qppr.product_attr_value = TO_CHAR (msi.inventory_item_id)
AND qppr.product_uom_code = msi.primary_uom_code
AND msi.inventory_item_id = mcb.inventory_item_id
AND msi.organization_id = mcb.organization_id
AND TRUNC (SYSDATE) BETWEEN NVL (qpll.start_date_active,
TRUNC (SYSDATE))
AND NVL (qpll.end_date_active,
TRUNC (SYSDATE))
AND xwiit.image_code(+) = msi.segment1
AND msi.organization_id = :p_organization_id
AND qpll.list_header_id = :p_price_list_id
AND mcb.category_id = :p_category_id
call     count       cpu    elapsed       disk      query    current       rows
Parse        2      0.00       0.00          0          0          0          0
Execute      2      0.00       0.00          0          0          0          0
Fetch        2      3.84       3.85          0     432560          0       1002
total        6      3.84       3.85          0     432560          0       1002
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Rows Row Source Operation
501 NESTED LOOPS (cr=216280 pr=0 pw=0 time=115 us cost=9439 size=149 card=1)
2616 HASH JOIN OUTER (cr=211012 pr=0 pw=0 time=39 us cost=9437 size=135 card=1)
78568 HASH JOIN (cr=210997 pr=0 pw=0 time=3786 us cost=9432 size=71 card=1)
78571 NESTED LOOPS (cr=29229 pr=0 pw=0 time=35533 us cost=53 size=76 card=2)
78571 TABLE ACCESS BY INDEX ROWID QP_LIST_LINES (cr=9943 pr=0 pw=0 time=27533 us cost=49 size=44 card=2)
226733 INDEX SKIP SCAN QP_LIST_LINES_N2 (cr=865 pr=0 pw=0 time=4122 us cost=20 size=0 card=702)(object id 99730)
78571 INDEX RANGE SCAN QP_PRICING_ATTRIBUTES_N3 (cr=19286 pr=0 pw=0 time=0 us cost=2 size=16 card=1)(object id 99733)
128857 TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=181768 pr=0 pw=0 time=9580 us cost=9378 size=1526382 card=46254)
128857 INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_N9 (cr=450 pr=0 pw=0 time=1657 us cost=174 size=0 card=46254)(object id 199728)
277 TABLE ACCESS FULL XX_WEB_ITEM_IMAGE_TBL (cr=15 pr=0 pw=0 time=22 us cost=5 size=17728 card=277)
501 INDEX RANGE SCAN MTL_ITEM_CATEGORIES_U1 (cr=5268 pr=0 pw=0 time=0 us cost=2 size=14 card=1)(object id 99557)
Note: I modified the query and it gives a good result; it now takes 3 to 4 seconds for 16,000 records.
If possible, can you please explain what we have to take care of while doing performance tuning?
I am a fresher, so I don't have that much idea about it.
And also, thanks Hussein for your reply. -
How to do performance tuning in OBIEE
Hi All,
We are using OBIEE 10.3.4 on a Windows environment. In our OBIEE project we have 9 reports; my requirement is to do performance tuning on the OBIEE side. Each report takes around 80 seconds to access. We need to decrease this access time; is there any possibility of accessing all the reports with a lower response time on the OBIEE side?
Could anyone suggest how to do performance tuning on the OBIEE side?
Thanks,
Vijay.
Vijay,
Please refer to:
http://www.business-intelligence-quotient.com/?p=119
http://prolynxuk.com/blog/?p=173
http://businessdecisionsystems.com/blog/?p=486
Here is the section from the BIEE admin guide:
=======================
Usage Examples
This section provides a few examples of how to use Oracle hints in conjunction with the Oracle BI Server. For more information about Oracle hints, refer to the Oracle SQL Reference documentation for the version of the Oracle server that you use.
Index Hint
The Index hint instructs the optimizer to scan a specified index rather than a table. The following hypothetical example explains how you would use the Index hint. You find queries against the ORDER_ITEMS table to be slow. You review the query optimizer's execution plan and find the FAST_INDEX index is not being used. You create an Index hint to force the optimizer to scan the FAST_INDEX index rather than the ORDER_ITEMS table. The syntax for the Index hint is index(table_name, index_name). To add this hint to the repository, navigate to the Administration Tool's Physical Table dialog box and type the following text in the Hint field:
index(ORDER_ITEMS, FAST_INDEX)
Leading Hint
The Leading hint forces the optimizer to build the join order of a query with a specified table. The syntax for the Leading hint is leading(table_name). If you were creating a foreign key join between the Products table and the Sales Fact table and wanted to force the optimizer to begin the join with the Products table, you would navigate to the Administration Tool's Physical Foreign Key dialog box and type the following text in the Hint field:
leading(Products)
So, the table names "order_items" and "products" in the above documentation will not be the same after BIEE puts aliases on them.
============
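For illustration, here is a hypothetical piece of physical SQL after the BI Server has aliased the tables (table, alias, and index names are made up); note that a hint written against the original table name would no longer match the generated alias, which is exactly the caveat above:
SELECT /*+ index(T12345 FAST_INDEX) */
       T12345.order_id, T12345.quantity
FROM   ORDER_ITEMS T12345
WHERE  T12345.order_id = 1001;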
Hope this is useful..
Edited by: Deepak Gupta on Aug 1, 2011 7:18 AM -
I have a table with about 500 million records in it stored within both Oracle and MySQL (MyISAM). I have a column (say phone_number) with a standard b-tree index on it in both places. Without data or indexes cached (not enough memory to fully cache the index), I run about 10,000 queries using randomly generated phone numbers as criteria in 10 concurrent threads. It seems that the average time to retrieve a record in MySQL is about 200 milliseconds whereas in Oracle it is about 400 milliseconds. I'm just wondering if MyISAM/MySQL is inherently faster for a basic index search than Oracle is, or should I be able to tune Oracle to get comparable performance.
Of course the hardware configurations and storage configurations are the same. It's not the absolute time I'm concerned about here but the relative time. Twice as long to perform basically the same query seems concerning. I enabled tracing and it seems like some of the problem may be the recursive calls Oracle is making. Is there some way to optimize this a bit further?
To be clear, I just want to look at query performance right now, ignoring all of the other issues (locking, transactional integrity, etc.).
Thanks,
Greg
In Oracle, a standard table is heap-organized. A b-tree index then contains index keys and ROWIDs, so if you need to read a particular row in the table, you first do a few I/Os on the index to get the ROWIDs and then look up those ROWIDs in the table. For any given key, the ROWIDs are likely to be scattered throughout the table, so this latter step generally involves multiple scattered I/Os.
You can create an index-organized table or a hash cluster in Oracle in order to minimize the cost of this particular sort of lookup by clustering data with the same key physically near each other and, in the case of IOTs, potentially eliminating the need to store the index and table separately. Of course, there are costs to doing this in that inserts are now more expensive and secondary indexes are likely to be less useful. That probably gets you closer to what MySQL is doing if, as ajallen indicates, a MySQL table is generally stored more like an IOT than a heap-organized table.
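A minimal sketch of the IOT variant (table and column names are hypothetical):
-- Index-organized table: rows are stored in the primary-key B-tree itself,
-- so a lookup by phone_number needs no separate visit to a heap table.
CREATE TABLE phone_directory (
  phone_number VARCHAR2(15),
  customer_id  NUMBER,
  details      VARCHAR2(100),
  CONSTRAINT phone_directory_pk PRIMARY KEY (phone_number, customer_id)
)
ORGANIZATION INDEX;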
If you get really creative, you could even partition this single table to potentially improve performance further.
Of course, if you're only storing one table, I'm not sure that you could really justify the cost of an Oracle license. This may well be a case where MySQL is more than sufficient for what this particular customer needs (knowing nothing, of course, about the other requirements for the system).
Justin -
Hi,
I am working on an application developed in Forms 10g and Oracle 10g.
I have a few very large transaction tables in the DB, and most of the screens in my application are based on these tables.
When a user performs a query (without any filter conditions), the whole table(s) is loaded into memory, which takes a very long time. Further queries on the same screen perform better.
How can I keep these tables in memory (buffer) at all times, to reduce the initial query time?
or
Is there any way to share the session buffers with other sessions, so that it does not take a long time in each session?
or
Any query performance tuning suggestions will be appreciated.
Thanks in advance
Thanks a lot for your posts. Very large means around 12 million rows.
Yep, that's a large table.
I have set the "Query All Records" property to "No".
Which is good: it means only enough records are fetched to fill the initial block. That's probably about 10 records. All the other records are not fetched from the database, so they're also not kept in memory at the Forms server.
Even when I try the query in SQL*Plus it is taking a long time.
Sounds like a query performance problem, not a Forms issue. You're probably better off asking in the database or SQL forum. You could at least include the SELECT statement here if you want any help with it; we can't guess why a query is slow if we have no idea what the query is.
My concern is: when I execute the same query again, or in another session (some other user or the same user), can I increase the performance because the tables are already in memory? Is there any possibility for this? Can I set any database parameters to share the data between sessions like that?
The database already does this. If data is retrieved from disk for one user, it is cached in the SGA (Shared Global Area). Mind the word Shared: this cached information is shared by all sessions, so other users should benefit from it.
Caching also has its limits. The most obvious one is the size of the SGA of the database server. If the table is 200 megabytes and the server only has 8 megabytes of cache available, then caching is of little use.
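If you want to see how much cache the instance actually has, a quick look at the standard V$SGAINFO view works (sizes shown in MB):
-- Buffer cache and shared pool sizes for this instance.
SELECT name, ROUND(bytes / 1024 / 1024) AS mb
FROM   v$sgainfo
WHERE  name IN ('Buffer Cache Size', 'Shared Pool Size');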
Am I thinking in the right way, or did I get lost somewhere?
Don't know.
There are two approaches:
- Try to tune the query or the database for better performance. For starters, open SQL*Plus, execute "set timing on", then execute "set autotrace traceonly explain statistics", then execute your query and look at the results (a sketch follows this list). It should give you an idea of how the database is executing the query and what improvements could be made. You could come back here with the SELECT statement and the timing and trace results, but the database or SQL forum is probably better.
- MORE IMPORTANTLY: think about whether it is necessary for users to perform such time-consuming (and perhaps complex) queries. Do users really need the ability to query all records? Are they ever going to browse through millions of records?
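A minimal SQL*Plus session following the first approach (the table and filter are placeholders):
SQL> set timing on
SQL> set autotrace traceonly explain statistics
SQL> SELECT * FROM big_transactions WHERE trx_date >= SYSDATE - 7;
-- SQL*Plus then prints the elapsed time, the execution plan, and statistics
-- such as consistent gets and physical reads for the run.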
Thanks -
Performance tuning stored procedures
Hi Experts,
What is the best approach to performance-tune stored procedures? Is it to run explain plan for the queries used in cursors, or is there more to it?
I need all your advise.
Thanks
As generic advice: absolutely not.
The performance of PL/SQL may be irrelevant to the performance of specific SQL statements contained.
Consider the following:
1. A very inefficient query that runs one time and takes 2 seconds to execute.
2. A query that runs in 2 milliseconds inside a loop.
Which one is more likely to be the issue?
The answer is that it depends on the number of iterations of the loop.
The proper tools for tuning PL/SQL, as discussed by Tom Kyte, are, depending on version, DBMS_PROFILER or DBMS_HPROF.
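As a rough sketch (the directory object, trace file name, and procedure are hypothetical; DBMS_HPROF requires 11g or later):
-- Start the hierarchical profiler, writing to a server directory object.
BEGIN
  DBMS_HPROF.START_PROFILING(location => 'PROF_DIR', filename => 'my_proc.trc');
END;
/
-- Run the procedure under test.
BEGIN
  my_procedure;
END;
/
-- Stop profiling; analyze the trace with the plshprof utility or DBMS_HPROF.ANALYZE.
BEGIN
  DBMS_HPROF.STOP_PROFILING;
END;
/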
Find out where the time is being spent.
Then tune that which takes the most time.
It is not necessarily the slowest SQL statement. -
Query performance on RAC is a lot slower than single instance
I simply followed the steps provided by Oracle to install a 2-node RAC DB.
The performance of insertion (Java, thin OJDBC) is pretty much the same compared to a single instance on NFS.
However, the performance of SELECT queries is very slow compared to a single instance.
I have tried using different methods for the storage configuration (ASM with raw devices, OCFS2), but the performance is still slow.
When I shut down one instance, leaving only one instance up, query performance is very fast (as fast as a single instance).
I am using RHEL 5 64-bit (16 GB of physical memory) and Oracle 11.1.0.6 with patchset 11.1.0.7.
Could someone help me debug this problem?
Thanks,
Chau
Edited by: user638637 on Aug 6, 2009 8:31 AM
Top 5 timed foreground events:
- DB CPU: time 943 s, 47.5% of DB time
- cursor: pin S wait on X: 13,940 waits, 321 s total, 23 ms average, 16.15% of DB time
- direct path read: 95,436 waits, 288 s total, 3 ms average, 14.51% of DB time
- IPC send completion sync: 546,712 waits, 149 s total, ~0 ms average, 7.49% of DB time
- gc cr multi block request: 7,574 waits, 78 s total, 10 ms average, 4.0% of DB time
Another thing I see is that the avg global cache cr block flush time is 37.6 ms.
The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
You should check the SQL statements from the report and tune them.
- Check the execution plan.
- If no index is being used, determine whether to use one.
SQL> set autot trace explain
SQL> sql statement;
cursor: pin S wait on X:
A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
Use bind variables in your SQL and avoid dynamic SQL built from literals (see the sketch below).
http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
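A tiny illustration of the difference (table and column names are hypothetical):
-- Literal SQL: every distinct statement is a separate cursor and a fresh hard parse.
SELECT order_total FROM orders WHERE order_id = 1001;
SELECT order_total FROM orders WHERE order_id = 1002;
-- Bind-variable SQL: one shared cursor is reused across executions, which
-- reduces mutex contention such as cursor: pin S wait on X.
VARIABLE oid NUMBER
EXEC :oid := 1001
SELECT order_total FROM orders WHERE order_id = :oid;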
Also check the MEMORY_TARGET initialization parameter.
By the way, you have a high DB CPU (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
Good Luck -
Hi
I have created a procedure that accepts two bind variables from a report. The user will select one or the other, both, or neither of the variables. To return the appropriate results, I have created a view with the entire result set, and in the procedure there are a number of IF statements that determine what to place in the WHERE clause selecting from the view, depending on which variables are populated.
My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems rather slow to run.
Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
Or should I work on the query's performance?
What would be the most efficient solution for my problem?
Any advice would be greatly appreciated.
Thanks
When your query takes too long:
http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
-
How to improve query performance at the report level and designer level
How to improve query performance at the report level and designer level? Please let me know in detail.
First, it's all based on the design of the database, the universe, and the report.
At the universe level, you have to check your contexts very well to get the optimal performance out of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters and so on).
And when you create a parameter, try to match it with the key fields in the database.
good luck
Amr -
Report bursting: to increase query performance in Xcelsius
Is there any way to increase query performance in Xcelsius by using report bursting?
Fremlin,
Report bursting is only for distributing your reports to your end users.
You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in Xcelsius.
-Anil -
Query performance and data loading performance issues
What are the query performance issues we need to take care of? Please explain and let me know the t-codes; this is urgent.
What are the data loading performance issues we need to take care of? Please explain and let me know the t-codes; this is urgent.
Will reward full points.
Regards,
Guru
BW back end:
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
Hope it Helps
Chetan
@CP.. -
Query Performance issue in Oracle Forms
Hi All,
I am using oracle 9i DB and forms 6i.
In query form ,qry took long time to load the data into form.
There are two tables used here.
1 table(A) contains 5 crore records another table(B) has 2 crore records.
The recods fetching range 1-500 records.
Table (A) has no index on main columns,after created the index on main columns in table A ,the query is fetched the data quickly.
But DBA team dont want to create index on table A.Because of table space problem.
If create the index on main table (A) ,then performance overhead in production.
Concurrent user capacity is 1500.
Is there any alternative methods to handle this problem.
Regards,
1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000:
http://en.wikipedia.org/wiki/Crore
I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system (a sketch of such an index follows this list).
3) I don't understand the comment "If we create the index on the main table A, then there is a performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
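A minimal sketch of the index under discussion (table, column, and tablespace names are hypothetical); key compression and a dedicated index tablespace can soften the space concern the DBAs raised:
-- Hypothetical secondary index on table A's main query columns.
CREATE INDEX table_a_main_idx
  ON table_a (main_col1, main_col2)
  TABLESPACE idx_ts
  COMPRESS 1;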
Justin