Slow Query Performance
Hi BI expert,
I have a query written on a sales order line cube. This cube holds about 60,000 records at the moment. A selection screen has also been designed for it, with criteria such as creation date, brand and SKU. If the users specify one of the selection criteria, it takes about 45 minutes to bring back 20,000 records, but if they do not specify anything on the selection screen, it takes hours and hours to fetch records from the cube. My process chain deletes the index, loads data and creates the index every day.
Would you kindly suggest why the query is taking so long to run, please?
Thanks.
Hi
go for Aggregates
Most of the result sets of reporting and analysis processes consist of aggregated data. An aggregate is a redundantly stored, usually aggregated view on a specific InfoCube. Without aggregates, the OLAP engine would have to read all relevant records at the lowest level stored in the InfoCube, which obviously takes some time for large InfoCubes. Aggregates allow you to physically store frequently used aggregated result sets in relational or multidimensional databases. Aggregates stored in relational databases essentially use the same data model as used for storing InfoCubes. Aggregates stored in multidimensional databases (Microsoft SQL Server 2000) were introduced with SAP BW 3.0.
Aggregates are still the most powerful means SAP BW provides to optimize the performance of reporting and analysis processes. Not only can SAP BW automatically take care of updating aggregates whenever necessary (upload of master or transaction data), it also automatically determines the most efficient aggregate available at query execution time.
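Conceptually, an aggregate stored in a relational database is a pre-summarized copy of the cube data. As a rough illustration only (generic Oracle-style SQL with hypothetical object names, not the actual tables SAP BW generates):
CREATE MATERIALIZED VIEW sales_by_brand_month
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT brand,
       TRUNC(calday, 'MM') AS calmonth,   -- roll days up to months
       SUM(net_value)      AS net_value
FROM   sales_order_line_fact
GROUP  BY brand, TRUNC(calday, 'MM');
A query asking for totals by brand and month can then be answered from the small summary instead of scanning the full fact table, which is exactly the effect a BW aggregate has.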
mahesh
Similar Messages
-
Slow Query Performance During Process Of SSAS Tabular
As part of my SSAS Tabular process script task in an SSIS package, I read all new rows from the database and insert them into the Tabular database using Process Add. The process works fine, but for the duration of the Process Add, user queries to my Tabular model
become very slow.
Is there a way to prevent the impact of Process Add on user queries? Users need near real time queries.
I am using SQL Server 2012 SP2.
Thanks
Hi AL.M,
According to your description, when you query the tabular during Process Add, the performance is slow. Right?
In Analysis Services, there is no way for an MDX/DAX query to ignore the Process Add on a tabular database; it will always query the updated tabular model. In this scenario, if you really need good query performance, I suggest you create two tabular databases.
One is for end users to get data; the other one is used for updates (full process). After the process is done, let the users query the updated tabular database.
If you have any question, please feel free to ask.
Regards,
Simon Hou
TechNet Community Support -
Slow query performance in excel 2007 vs excel 2003
Hi,
Some of our clients recently upgraded to BI 7.0 and also upgraded to Excel 2007.
They experience lots of performance problems when using the BEx Analyzer in Excel 2007.
Refreshing queries in even 'simple' workbooks is up to 10 times slower than before with Excel 2003.
Has anyone experienced the same?
Any tips/tricks to solve that problem?
With regards,
Tom.
Hello all,
1) Please set the following parameters to X in transaction
RS_FRONTEND_INIT and check the issue.
Parameters to be set are
ANA_USE_SIDGRIDDELTA
ANA_USE_SIDGRIDMASS
ANA_SINGLEDPREFRESH
ANA_CACHE_WORKBOOK
ANA_USE_OPTIMIZE_STG
ANA_USE_TABLE
2) Also refer to below KBA link which would help to resolve the issue.
1570478 BW Report in Excel 2007 or Excel 2010 takes much more time than
3) In the workbook properties please set the flag
- Use Compression When Saving Workbook
4) If you are working with big hierarchies, please try to improve
performance with the following setting directly in the Analysis Grid:
- Properties of Analysis Grid - Display Hierarchy Icons
- switch to "+/-"
Regards,
Arvind -
Database upgrade - slow query performance
Hi,
recently we upgraded our 8i database to a 10g database.
While we were testing our Forms application against the new
10g database, there was a very slow SQL statement which runs
for several minutes, but it runs against the 8i database within seconds.
With SQL*Plus it sometimes runs fast, sometimes slow in 10g
(see the execution plans below).
The sql-statement in detail:
SELECT name1, vornam, aboid, liefstat
FROM aktuellerabosatz
WHERE aboid = evitadba.get_evitaid ('0000002100')
"aktuellerabosatz" is a view on a table with about 3.000.000 records.
The function get_evitaid returns only the last 4 digits of the whole number.
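Based on that description, the function presumably looks something like this (a hypothetical sketch, not the actual source):
CREATE OR REPLACE FUNCTION get_evitaid (p_id IN VARCHAR2)
  RETURN NUMBER
IS
BEGIN
  -- keep only the last 4 digits of the zero-padded number
  RETURN TO_NUMBER(SUBSTR(p_id, -4));
END get_evitaid;
/
If the real function is not declared DETERMINISTIC, it is also worth checking how often it is re-executed per query, since that shows up as recursive calls in the statistics.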
execution plan with slow response time:
12:05:31 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
12:05:35 2 FROM aktuellerabosatz
12:05:35 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:55.07
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
3 Card=1)
Statistics
100 recursive calls
0 db block gets
121353 consistent gets
121285 physical reads
0 redo size
613 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
execution plan with fast response time:
12:06:43 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
12:06:58 2 FROM aktuellerabosatz
12:06:58 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:00.00
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
3 Card=1)
Statistics
110 recursive calls
8 db block gets
49 consistent gets
0 physical reads
0 redo size
613 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
In the fast run the consistent gets and physical reads are very small,
but in the other run they are very high, which (it seems) results in the slow performance.
What could be the reasons?
kind regards
Marco
The two execution plans above are both from 10g SQL*Plus sessions on the same database with the same user. We gather statistics for the database with the dbms_stats package. Normally we use the all_rows option. The confusing thing is that the SQL statement sometimes runs fast and sometimes slow in a SQL*Plus session with the same execution plan; only the physical reads and consistent gets are extremely different.
If we rewrite the SQL statement to use the table evtabo with an additional
where clause (taken from the view) instead of using the view, then it runs fast:
14:24:04 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
14:24:14 2 FROM aktuellerabosatz
14:24:14 3 WHERE aboid = evitadba.get_evitaid ('0000000246');
no rows selected
Elapsed: 00:00:43.07
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=27315 Card=1204986
Bytes=59044314)
1 0 VIEW OF 'EVTABO_V1' (VIEW) (Cost=27315 Card=1204986 Bytes=
59044314)
2 1 TABLE ACCESS (FULL) OF 'EVTABO' (TABLE) (Cost=27315 Card
=1204986 Bytes=45789468)
14:24:59 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
14:25:26 2 FROM evtabo
14:25:26 3 WHERE aboid = evitadba.get_evitaid ('0000002100')
14:25:26 4 and gueltab <= TRUNC(sysdate) AND (gueltbs >=TRUNC(SYSDATE) OR gueltbs IS NULL);
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:00.00
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
3 Card=1)
What could be the reason for the different performance in 8i and 10g?
Thanks
Marco -
Slow query performance in Oracle 10.2.0.3
Hi,
We have Oracle 10.2.0.3 installed on RHEL 5 (64-bit). We have two queries, one a SELECT and the other an INSERT. First we executed the insert query, which inserts 10,000 rows into a table, and then the select query on this table. This works fine in one thread. But when we do the same thing in 10 threads, the INSERT is fine but the SELECT takes a very long time. Is there any bug related to parallel execution of SELECT queries in 10.2.0.3? Any suggestions?
Thanks in advance.
Regards,
RJ.
Justin,
We ran the same INSERT and SELECT queries in 10 manual sessions, out of which the select query is taking more time to execute. Please refer to the waits given below. No, there is no bottleneck as far as hardware is concerned, because we tested it on different server configurations.
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 52 93.2
latch: cache buffers chains 45,542 6 0 10.7 Concurrency
log file parallel write 2,107 3 1 5.2 System I/O
log file sync 805 2 2 3.5 Commit
latch: session allocation 5,116 1 0 2.6 Other
Wait Events
• s - second
• cs - centisecond - 100th of a second
• ms - millisecond - 1000th of a second
• us - microsecond - 1000000th of a second
• ordered by wait time desc, waits desc (idle events last)
Event Waits %Time-outs Total Wait Time (s) Avg wait (ms) Waits /txn
latch: cache buffers chains 45,542 0.00 6 0 22.99
log file parallel write 2,107 0.00 3 1 1.06
log file sync 805 0.00 2 2 0.41
latch: session allocation 5,116 0.00 1 0 2.58
buffer busy waits 20,482 0.00 1 0 10.34
db file sequential read 157 0.00 1 4 0.08
control file parallel write 1,330 0.00 0 0 0.67
wait list latch free 39 0.00 0 10 0.02
enq: TX - index contention 632 0.00 0 0 0.32
latch free 996 0.00 0 0 0.50
SQL*Net break/reset to client 1,738 0.00 0 0 0.88
SQL*Net message to client 108,947 0.00 0 0 55.00
os thread startup 2 0.00 0 19 0.00
cursor: pin S wait on X 3 100.00 0 11 0.00
latch: In memory undo latch 136 0.00 0 0 0.07
log file switch completion 4 0.00 0 7 0.00
latch: shared pool 119 0.00 0 0 0.06
latch: undo global data 121 0.00 0 0 0.06
buffer deadlock 238 99.58 0 0 0.12
control file sequential read 1,735 0.00 0 0 0.88
SQL*Net more data to client 506 0.00 0 0 0.26
log file single write 2 0.00 0 2 0.00
SQL*Net more data from client 269 0.00 0 0 0.14
reliable message 12 0.00 0 0 0.01
LGWR wait for redo copy 26 0.00 0 0 0.01
rdbms ipc reply 6 0.00 0 0 0.00
latch: library cache 7 0.00 0 0 0.00
latch: redo allocation 2 0.00 0 0 0.00
enq: RO - fast object reuse 2 0.00 0 0 0.00
direct path write 21 0.00 0 0 0.01
cursor: pin S 1 0.00 0 0 0.00
log file sequential read 2 0.00 0 0 0.00
direct path read 8 0.00 0 0 0.00
SQL*Net message from client 108,949 0.00 43,397 398 55.00
jobq slave wait 14,527 49.56 35,159 2420 7.33
Streams AQ: qmn slave idle wait 246 0.00 3,524 14326 0.12
Streams AQ: qmn coordinator idle wait 451 45.45 3,524 7814 0.23
wait for unread message on broadcast channel 3,597 100.00 3,516 978 1.82
virtual circuit status 120 100.00 3,516 29298 0.06
class slave wait 2 0.00 0 0 0.00
Message was edited by: RJiv -
Query Performance - Query very slow to run
I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO so I can't aggregate it. Is there anything I can do to improve performance?
Hi Joel,
Walkthrough Checklist for Query Performance:
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
Regards
Vivek Tripathi -
Query performance on RAC is a lot slower than single instance
I simply followed the steps provided by Oracle to install a RAC DB of 2 nodes.
The performance of insertion (Java, thin OJDBC) is pretty much the same compared to a single instance on NFS.
However the performance on the select query is very slow compared to single instance.
I have tried using different methods for the storage configuration (asm with raw, ocfs2) but the performance is still slow.
When I shut down one instance, leaving only one instance up, the query performance is very fast (as fast as one single instance)
I am using rhel5 64 bit (16G of physical memory) and oracle 11.1.0.6 with patchset 11.1.0.7
Could someone help me how to debug this problem?
Thanks,
Chau
Edited by: user638637 on Aug 6, 2009 8:31 AM
Top 5 timed foreground events:
DB CPU: time 943(s), %db time (47.5%)
cursor: pin S wait on X: waits (13,940), time (321s), avg wait (23ms), %db time (16.15%)
direct path read: waits (95,436), time (288s), avg wait (3ms), %db time (14.51%)
IPC send completion sync: waits (546,712), time (149s), avg wait (0), %db time (7.49%)
gc cr multi block request: waits (7,574), time (78s), avg wait (10ms), %db time (4.0%)
Another thing I see is that the "avg global cache cr block flush time (ms)" is 37.6 ms.
The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
You should check the SQL statements from the report and tune them.
- Check the execution plan.
- If there is no index, consider adding one.
SQL> set autot trace explain
SQL> sql statement;
cursor: pin S wait on X.
A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
Use bind variables in SQL and avoid dynamic SQL (see the sketch below).
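A minimal SQL*Plus illustration of that advice, using a hypothetical table t:
-- literals: every distinct statement text gets its own cursor (hard parse, mutex contention)
SELECT val FROM t WHERE id = 42;
SELECT val FROM t WHERE id = 43;
-- bind variable: one shared cursor, reused across executions
VARIABLE b1 NUMBER
EXEC :b1 := 42
SELECT val FROM t WHERE id = :b1;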
http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
Check the MEMORY_TARGET initialization parameter.
By the way, you have a high "DB CPU" (47.5%); you should tune your SQL statements (check the SQL in the report and tune it).
Good Luck -
Hi Experts,
Please clarify my doubts.
1. How can we know that a particular query's performance is slow?
2. How can we define a cell in BEx?
3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
Thanks in advance
Hi,
1. How can we know that a particular query's performance is slow?
When you run the query and it takes more time, you know that the query is slow. To find out where the query is spending that time, you can collect the statistics:
select your cube and set the BI statistics check box; after that it will give you all the statistics data regarding your query, such as
DB time (database time), frontend time (query), aggregation time, etc. Based on that, we go for the performance measures: aggregates, compression, indexes, etc.
2. How can we define a cell in BEx?
In your BEx query, this is enabled when you use two structures. If you want to create different formulas row by row, you go for this.
3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
An InfoObject is also an InfoProvider:
you can convert an InfoObject into an InfoProvider using "Convert as data target".
Thanks and Regards,
Venkat.
Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM -
It's an 11g R2 database, and the query is performing very slowly
SELECT OBJSTATE
FROM
SUB_CON_CALL_OFF WHERE SUB_CON_NO = :B2 AND CALL_OFF_SEQ = :B1
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
Edited by: 842638 on Feb 2, 2013 5:52 AM
Hi Mark,
I just have a few basic doubts regarding the query performance below:
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
1] How do you determine that this query performance is "ok"?
2] What is the actual need of checking the query performance this way?
3] Is this the TKPROF output?
4] How do you know that the query was called 798,747 times? The execute shows 0.
Could you please help me with this?
Thanks.
Ranit B. -
Hi
I have created a procedure that accepts two bind variables from a report. The user will select one, the other, both, or neither of the variables. To return the appropriate results I have created a view with the entire result set, and in the procedure a number of IF statements determine what to place in the WHERE clause selecting from the view, depending on which variables are populated.
My concern is that the query that generates the view includes several joins and in total outputs around 150,000 records and seems to be rather slow to run.
Would you recommend another solution such as placing the query in the procedure itself repeated for every if statement?
Or should I work on the query performance?
What would be the most efficient solution for my problem?
Any advice would be greatly appreciated.
Thanks
When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
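For what it's worth, a common static pattern for this kind of optional-parameter search looks like the following (a sketch with hypothetical names, not your actual view):
SELECT v.*
FROM   my_view v
WHERE  (:p_var1 IS NULL OR v.col1 = :p_var1)
AND    (:p_var2 IS NULL OR v.col2 = :p_var2);
The alternative is dynamic SQL that appends only the predicates whose binds are populated; that usually gives the optimizer a better plan for each combination, at the cost of more cursors. The linked thread explains how to find out where the time is actually going before choosing.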
-
SharePoint 2010 Slow query duration when setting metadata on folder
I'm getting "Slow Query Duration" when I programmatically set a default value for a default field to apply to documents at a specified location on a SP 2010 library.
It most probably has nothing to do with hardware performance, as I'm getting this with a folder within a library with only 1 document on a UAT environment. Front-end: AMD Opteron 6174 2.20GHz x 2 + 8GB RAM; back-end: AMD Opteron 6174 2.20GHz x 2 + 16GB
RAM.
The specific line of code causing this is:
folderMetadata.SetFieldDefault(createdFolder, fieldData.Field.InternalName, thisFieldTextValue);
What SP says:
02/17/2014 16:29:03.24 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa42 Monitorable A large block of literal text was sent to sql. This can result in blocking in sql and excessive memory use on the front end. Verify that no binary parameters are
being passed as literals, and consider breaking up batches into smaller components. If this request is for a SharePoint list or list item, you may be able to resolve this by reducing the number of fields.
02/17/2014 16:29:03.24 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa43 High Slow Query Duration: 254.705556153086
02/17/2014 16:29:03.26 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High Slow Query StackTrace-Managed: at Microsoft.SharePoint.Utilities.SqlSession.OnPostExecuteCommand(SqlCommand command, SqlQueryData monitoringData) at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand
command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock) at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean retryfordeadlock) at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock) at Microsoft.SharePoint.Library.SPRequestInternalClass.PutFile(String
bstrUrl, String bstrWebRelativeUrl, Object punkFile, Int32 cbFile, Object punkFFM, PutFileOpt PutFileOpt, String bstrCreatedBy, String bstrModifiedBy, Int32 iCreatedByID, Int32 iModifiedByID, Object varTimeCreated, Object varTimeLastModified, Obje...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...ct varProperties, String bstrCheckinComment, Byte partitionToCheck, Int64 fragmentIdToCheck, String bstrCsvPartitionsToDelete, String bstrLockIdMatch, String bstEtagToMatch,
Int32 lockType, String lockId, Int32 minutes, Int32 fRefreshLock, Int32 bValidateReqFields, Guid gNewDocId, UInt32& pdwVirusCheckStatus, String& pVirusCheckMessage, String& pEtagReturn, Byte& piLevel, Int32& pbIgnoredReqProps) at Microsoft.SharePoint.Library.SPRequest.PutFile(String
bstrUrl, String bstrWebRelativeUrl, Object punkFile, Int32 cbFile, Object punkFFM, PutFileOpt PutFileOpt, String bstrCreatedBy, String bstrModifiedBy, Int32 iCreatedByID, Int32 iModifiedByID, Object varTimeCreated, Object varTimeLastModified, Object varProperties,
String bstrCheckinComment, Byte partitionToCheck, Int64 fragmentIdToCheck...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ..., String bstrCsvPartitionsToDelete, String bstrLockIdMatch, String bstEtagToMatch, Int32 lockType, String lockId, Int32 minutes, Int32 fRefreshLock, Int32 bValidateReqFields,
Guid gNewDocId, UInt32& pdwVirusCheckStatus, String& pVirusCheckMessage, String& pEtagReturn, Byte& piLevel, Int32& pbIgnoredReqProps) at Microsoft.SharePoint.SPFile.SaveBinaryStreamInternal(Stream file, String checkInComment, Boolean checkRequiredFields,
Boolean autoCheckoutOnInvalidData, Boolean bIsMigrate, Boolean bIsPublish, Boolean bForceCreateVersion, String lockIdMatch, SPUser modifiedBy, DateTime timeLastModified, Object varProperties, SPFileFragmentPartition partitionToCheck, SPFileFragmentId fragmentIdToCheck,
SPFileFragmentPartition[] partitionsToDelete, Stream formatMetadata, String etagToMatch, Boolea...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...n bSyncUpdate, SPLockType lockType, String lockId, TimeSpan lockTimeout, Boolean refreshLock, Boolean requireWebFilePermissions, Boolean failIfRequiredCheckout, Boolean
validateReqFields, Guid newDocId, SPVirusCheckStatus& virusCheckStatus, String& virusCheckMessage, String& etagReturn, Boolean& ignoredRequiredProps) at Microsoft.SharePoint.SPFile.SaveBinary(Stream file, Boolean checkRequiredFields, Boolean
createVersion, String etagMatch, String lockIdMatch, Stream fileFormatMetaInfo, Boolean requireWebFilePermissions, String& etagNew) at Microsoft.SharePoint.SPFile.SaveBinary(Byte[] file) at Microsoft.Office.DocumentManagement.MetadataDefaults.Update()
at TWINSWCFAPI.LibraryManager.CreatePathFromFolderCollection(String fullPathUrl, SPListItem item, SPWeb web, Dictionary2...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ... folderToCreate, Boolean setDefaultValues, Boolean mainFolder) at TWINSWCFAPI.LibraryManager.CreatePathFromFolderCollection(String fullPathUrl, List1 resultDataList,
SPListItem item, SPWeb web, Boolean setDefaultValues, Boolean mainFolder) at TWINSWCFAPI.LibraryManager.CreateExtraFolders(List1
pathResultDataList, List1 resultDataList, String fullPathUrl, SPWeb web, SPListItem item, Boolean setDefaultValues) at TWINSWCFAPI.LibraryManager.CreateFolders(SPWeb web, List1
pathResultDataList, SPListItem item, String path, Boolean setDefaultValues) at TWINSWCFAPI.LibraryManager.MoveFileAfterMetaChange(SPListItem item) at TWINSWCFAPI.DocMetadataChangeEventReceiver.DocMetadataChangeEventReceiver.FileDocument(SPWeb web, SPListItem
listItem) at TWINSWCFAPI.DocMetadataChang...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...eEventReceiver.DocMetadataChangeEventReceiver.ItemCheckedIn(SPItemEventProperties properties) at Microsoft.SharePoint.SPEventManager.RunItemEventReceiver(SPItemEventReceiver
receiver, SPUserCodeInfo userCodeInfo, SPItemEventProperties properties, SPEventContext context, String receiverData) at Microsoft.SharePoint.SPEventManager.RunItemEventReceiverHelper(Object receiver, SPUserCodeInfo userCodeInfo, Object properties, SPEventContext
context, String receiverData) at Microsoft.SharePoint.SPEventManager.<>c__DisplayClassc1.b__6() at Microsoft.SharePoint.SPSecurity.RunAsUser(SPUserToken userToken, Boolean bResetContext, WaitCallback code, Object param) at Microsoft.SharePoint.SPEventManager.InvokeEventReceivers[ReceiverType](SPUserToken
userToken, Gu...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...id tranLockerId, RunEventReceiver runEventReceiver, Object receivers, Object properties, Boolean checkCancel) at Microsoft.SharePoint.SPEventManager.InvokeEventReceivers[ReceiverType](Byte[]
userTokenBytes, Guid tranLockerId, RunEventReceiver runEventReceiver, Object receivers, Object properties, Boolean checkCancel) at Microsoft.SharePoint.SPEventManager.HandleEventCallback[ReceiverType,PropertiesType](Object callbackData) at Microsoft.SharePoint.Utilities.SPThreadPool.WaitCallbackWrapper(Object
state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext
execu...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...tionContext, ContextCallback callback, Object state) at System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack)
at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(Object state)
02/17/2014 16:29:03.26 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzku High ConnectionString: 'Data Source=PFC-SQLUAT-202;Initial Catalog=TWINSDMS_LondonDivision_Content;Integrated Security=True;Enlist=False;Asynchronous Processing=False;Connect
Timeout=15' ConnectionState: Open ConnectionTimeout: 15
02/17/2014 16:29:03.26 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High SqlCommand: 'DECLARE @@iRet int;BEGIN TRAN EXEC @@iRet = proc_WriteChunkToAllDocStreams @wssp0, @wssp1, @wssp2, @wssp3, @wssp4, @wssp5, @wssp6;IF @@iRet <> 0 GOTO
done; DECLARE @@S uniqueidentifier; DECLARE @@W uniqueidentifier; DECLARE @@DocId uniqueidentifier; DECLARE @@DoclibRowId int; DECLARE @@Level tinyint; DECLARE @@DocUIVersion int;DECLARE @@IsCurrentVersion bit; DECLARE @DN nvarchar(256); DECLARE @LN nvarchar(128);
DECLARE @FU nvarchar(260); SET @DN=@wssp7;SET @@iRet=0; ;SET @LN=@wssp8;SET @FU=@wssp9;SET @@S=@wssp10;SET @@W=@wssp11;SET @@DocUIVersion = 512;IF @@iRet <> 0 GOTO done; ;SET @@Level =@wssp12; EXEC @@iRet = proc_UpdateDocument @@S, @@W, @DN, @LN, @wssp13,
@wssp14, @wssp15, @wssp16, @wssp17, @wssp18, @wssp19, @wssp20, @wssp21, @wssp22, @wssp23, @wssp24, @wssp25, @wssp26,...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ... @wssp27, @wssp28, @wssp29, @wssp30, @wssp31, @wssp32, @wssp33, @wssp34, @wssp35, @wssp36, @wssp37, @wssp38, @wssp39, @wssp40, @wssp41, @wssp42, @wssp43, @wssp44, @wssp45,
@wssp46, @wssp47, @wssp48, @wssp49, @wssp50, @wssp51, @@DocId OUTPUT, @@Level OUTPUT , @@DoclibRowId OUTPUT,@wssp52 OUTPUT,@wssp53 OUTPUT,@wssp54 OUTPUT,@wssp55 OUTPUT ; IF @@iRet <> 0 GOTO done; EXEC @@iRet = proc_TransferStream @@S, @@DocId, @@Level,
@wssp56, @wssp57, @wssp58; IF @@iRet <> 0 GOTO done; EXEC proc_AL @@S,@DN,@LN,@@Level,0,N'London/Broking/Documents/E/E _ E Foods Corporation',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,1,N'London/Broking/Documents/E',N'E _ E Foods Corporation',72,85,83,1,N'';EXEC
proc_AL @@S,@DN,@LN,@@Level,2,N'London/Broking/Documents/E/E _ E Foods Corporation',N'2013',72,85,...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ...83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,3,N'London/Broking/Documents/E/E _ E Foods Corporation/2013',N'QA11G029601',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,4,N'London/Broking/Documents/K',N'Konig
_ Reeker',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,5,N'London/Broking/Documents/K/Konig _ Reeker',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,6,N'London/Broking/Documents/K/Konig _ Reeker/2012',N'QA12E013201',72,85,83,1,N'';EXEC proc_AL
@@S,@DN,@LN,@@Level,7,N'London/Broking/Documents/K/Konig _ Reeker/2012',N'A12EL00790',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,8,N'London/Broking/Documents/K/Konig _ Reeker/2012',N'A12DA00720',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,9,N'London/Broking/Documents/K/Konig
_ Reeker/2012',N'A12DC00800',72,85,83,1,N'';EXEC proc...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ..._AL @@S,@DN,@LN,@@Level,10,N'London/Broking/Documents/A',N'Ace European Group Limited',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,11,N'London/Broking/Documents/A/Ace
European Group Limited',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,12,N'London/Broking/Documents/A/Ace European Group Limited/2012',N'JXB88435',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,13,N'London/Broking/Documents/A/Ace European Group
Limited/2012/JXB88435/Closings',N'PRM 1',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,14,N'London/Broking/Documents/A/Ace European Group Limited/2012/JXB88435/Closings',N'PRM 2',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,15,N'London/Broking/Documents/C',N'C
Moore-Gordon',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,16,N'London/Broking/Documents/C/C Moore-Gordo...
02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ...n',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,17,N'London/Broking/Documents/C/C Moore-Gordon/2012',N'QY13P700201',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,18,N'London/Broking/Documents/C/C
Moore-Gordon',N'2013',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,19,N'London/Broking/Documents/C/C Moore-Gordon/2013',N'Y13PF07010',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,20,N'London/Broking/Documents/A/Ace European Group Limited/2012/JXB88435/Closings',N'ARP
7',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,21,N'London/Broking/Documents/A/Ace European Group Limited/2012/JXB88435/Closings',N'ARP 8',72,85,83,1,N'';EXEC proc_AL . . .
Thanks in advance
A
Are SharePoint and SQL Server installed on the same server, or how is the setup?
I would start by enabling the developer dashboard and analyzing its report;
you will see if any web part, page, or SQL Server query is taking too much time.
http://www.sharepoint-journey.com/developer-dashboard-in-sharepoint-2013.html
Please remember to mark your question as answered &Vote helpful,if this solves/helps your problem. ****************************************************************************************** Thanks -WS MCITP(SharePoint 2010, 2013) Blog: http://wscheema.com/blog -
QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES
WHAT ARE QUERY PERFORMANCE ISSUES WE NEED TO TAKE CARE PLEASE EXPLAIN AND LET ME KNOW T CODES...PLZ URGENT
WHAT ARE DATALOADING PERFORMANCE ISSUES WE NEED TO TAKE CARE PLEASE EXPLAIN AND LET ME KNOW T CODES PLZ URGENT
WILL REWARD FULL POINT S
REGARDS
GURUBW Back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench (see the generic SQL sketch after this list).
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
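As a generic illustration of the index handling in tip 13, this is what the drop/regenerate amounts to at the database level (in practice you schedule it through the InfoPackage or a process chain rather than issuing SQL yourself, and the object names here are hypothetical):
-- before the load: drop the secondary index so inserts skip index maintenance
DROP INDEX infocube_fact_idx1;
-- ... mass load runs ...
-- after the load: recreate the index for reporting
CREATE INDEX infocube_fact_idx1 ON infocube_fact_table (dim_key_1, dim_key_2);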
Hope it Helps
Chetan
@CP.. -
Query Performance issue in Oracle Forms
Hi All,
I am using an Oracle 9i DB and Forms 6i.
In the query form, the query takes a long time to load the data into the form.
There are two tables used here.
One table (A) contains 5 crore records; another table (B) has 2 crore records.
The fetch range is 1-500 records.
Table A has no index on the main columns; after creating an index on the main columns of table A, the query fetched the data quickly.
But the DBA team doesn't want to create an index on table A because of a tablespace problem.
If we create the index on the main table (A), then there is a performance overhead in production.
Concurrent user capacity is 1500.
Is there any alternative methods to handle this problem.
Regards,
RS
1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
http://en.wikipedia.org/wiki/Crore
I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system (see the index sketch after this list).
3) I don't understand the comment "If create the index on main table (A) ,then performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
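Regarding point 2, the index meant here is nothing exotic. A minimal sketch with hypothetical names (pick the columns the form's query actually filters on):
CREATE INDEX tab_a_main_idx
  ON table_a (main_col1, main_col2)
  TABLESPACE idx_ts     -- assumes some space can be found for the index
  COMPUTE STATISTICS;   -- 9i: gathers index statistics during the build
With 1-500 rows fetched out of roughly 50 million, an index range scan should replace the full table scan, which is the whole point.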
Justin -
How to improve query performance built on an ODS
Hi,
I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
Is there any method to improve or optimize the performance of a query built on an ODS?
The ODS holds a huge volume of data, ~300 million records for 2 years.
Thanx in advance,
Guru.
Hi Raj,
Here are a few tips which help you in improving your query performance:
Checklist for Query Performance
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order. -
Slow query - db_cache_size ?
Hi,
Oracle 9.2.0.5.0 ( solaris )
I've got a query which, when run on a production machine, runs very slowly (10 hours), but on a preproduction machine (with the same data) takes about a tenth of the time. I have confirmed that on both machines we are getting the same plan.
The only thing I can nail it down to is that in production I'm seeing lots more "db file sequential read" wait events. Can I assume this is due to the blocks not being in/staying in the cache?
When running on preprod, the hit ratio for the query is .90+; on production it drops down to .70-.80 (as per the query below).
I have plenty of memory available on the machine; would it be wise to size up the caches? db_cache_size, db_keep_cache_size, db_recycle_cache_size?
SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
v$sesstat P3, v$statname N3
WHERE N1.name = 'db block gets'
AND P1.statistic# = N1.statistic#
AND P1.sid = &sid
AND N2.name = 'consistent gets'
AND P2.statistic# = N2.statistic#
AND P2.sid = P1.sid
AND N3.name = 'physical reads'
AND P3.statistic# = N3.statistic#
AND P3.sid = P1.sid
PRE-PRODUCTION
call count cpu elapsed disk query current rows
Parse 1 0.64 0.64 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 186.92 329.88 162174 5144281 5 1
total 4 187.56 330.53 162174 5144281 5 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 160098 1.44 162.52
db file scattered read 1 0.00 0.00
direct path write 27 0.66 3.36
direct path read 97 0.00 0.02
SQL*Net message from client 2 985.79 985.79
PRODUCTION
call count cpu elapsed disk query current rows
Parse 1 2.41 2.34 79 16 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 844.76 12305.06 1507519 5226663 0 1
total 4 847.17 12307.41 1507598 5226679 0 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 1502104 4.40 11849.13
direct path write 361 0.57 3.06
direct path read 361 0.05 0.88
buffer busy waits 36 0.02 0.17
latch free 5 0.01 0.01
log buffer space 2 1.00 1.37
SQL*Net message from client 2 687.95 687.95
Suggestions for further investigation are more than welcome.
user12044475 wrote:
Hi,
Oracle 9.2.0.5.0 ( solaris )
I've got a query which when run on a production machine runs very slow ( 10 hours ), but on a preproduction machine ( with same data ) takes about a 10th of the time. I have confirmed that on both machines we are getting the same plan.
The only thing I can nail it down to, is that in production I'm seeing lots more "db file sequential read" wait events. Can I assume this is due to the blocks not being in/staying in the cache?
There are more physical reads, and the average read time is longer. This may simply be a reflection of the fact that other people are working on the production database at the same time and (a) kicking your data out of the cache and (b) causing you to queue at the disc as they do their I/O. A larger cache MIGHT protect your data a little longer, and MAY reduce their I/O at the same time so that the I/Os are faster - but we have no idea what side effects might then appear.
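One low-risk way to test the larger-cache hypothesis before changing anything (9.2 has this, assuming db_cache_advice is set to ON) is the buffer cache advisory, which estimates physical reads at other cache sizes:
SELECT size_for_estimate,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
If estd_physical_read_factor barely drops as the estimated size grows, a bigger db_cache_size will not save you, and the query itself is the place to look.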
It's also worth considering whether you did something as you transferred the data from production to pre-production that helped to improve query performance. (As a simple example, an export/import could have eliminated a lot of row migration - and the nature of your plan means you MIGHT be suffering a lot of excess I/O from "table fetch continued row".) So, how did you get the data from production to test, how long ago, what's happened to it since, and do you have any session statistics taken as you ran the two queries?
Since your execution plan (prediction) is a long way off the actual run time, though (even on the pre-production system), it's probably more important to work out whether you can make your query much more efficient before you make any dramatic changes to the system. I notice that you have three existence subqueries that appear at the end of the plan - such subqueries wreck the optimizer's arithmetic in your version of Oracle and can make it do very silly things. (See for example this blog note: http://jonathanlewis.wordpress.com/2006/11/08/subquery-selectivity )
The effect of the subqueries may (for example) be why you have a full tablescan on the second table in a nested loop join at one point in your query. The expectation of a vastly reduced number of rows may be why you are seeing nested loops all over the place when (possibly) a couple of hash joins would be more appropriate.
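To make that last point concrete, a generic sketch (hypothetical tables, nothing to do with your actual statement): an existence subquery such as
SELECT o.id FROM orders o
WHERE  EXISTS (SELECT NULL FROM order_lines l WHERE l.order_id = o.id);
can be costed very differently from the equivalent join form
SELECT o.id
FROM   orders o, (SELECT DISTINCT order_id FROM order_lines) l
WHERE  l.order_id = o.id;
and in your version of Oracle the subquery form is where the selectivity arithmetic tends to go wrong, as the blog note above describes, so rewriting the existence tests as joins is one experiment worth trying.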
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
+"I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."+
Isaac Asimov