Slow Query Performance During Process Add of SSAS Tabular
As part of my SSAS Tabular process Script Task in an SSIS package, I read all new rows from the database and insert them into the Tabular database using Process Add. The process works fine, but for the duration of the Process Add, user queries against my Tabular model
become very slow.
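For context, the incremental read that feeds the Process Add looks roughly like this (a minimal T-SQL sketch; the table, column and watermark names are made up for illustration, not the real ones):
-- Hypothetical incremental read: fetch only the rows added since the last run.
-- The watermark (@LastLoadedKey) is persisted by the SSIS package between runs.
DECLARE @LastLoadedKey BIGINT = 1000000; -- value saved from the previous run
SELECT SalesKey, OrderDate, CustomerID, Amount
FROM dbo.FactSales
WHERE SalesKey > @LastLoadedKey
ORDER BY SalesKey;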
Is there a way to prevent the impact of Process Add on user queries? Users need near-real-time queries.
I am using SQL Server 2012 SP2.
Thanks
Hi AL.M,
According to your description, when you query the Tabular model during Process Add, the performance is slow. Right?
In Analysis Services, there is no way for an MDX/DAX query to ignore a running Process Add; queries always run against the Tabular database as it is being updated. In this scenario, if you really need good query performance, I suggest you create two Tabular databases:
one for end users to get data from, the other used for updates (full process). After the process is done, let the users query the updated database.
If you have any question, please feel free to ask.
Regards,
Simon Hou
TechNet Community Support
Similar Messages
-
Hi BI expert,
I have a query written against the sales order line cube. The cube holds about 60,000 records at the moment. A selection screen has also been designed for it, for example: creation date, brand and SKU. If the users specify one of the selection criteria, it takes about 45 minutes to bring back 20,000 records, but if they do not specify anything on the selection screen, it takes hours and hours to fetch records from the cube. In my process chain, the index is deleted, data is loaded and the index is recreated every day.
Would you kindly suggest why the query is taking so long to run?
Thanks.
Hi,
go for aggregates.
Most of the result sets of reporting and analysis processes consist of aggregated data. An aggregate is a redundantly stored, usually aggregated view on a specific InfoCube. Without aggregates, the OLAP engine would have to read all relevant records at the lowest level stored in the InfoCube, which obviously takes some time for
large InfoCubes. Aggregates allow you to physically store frequently used aggregated result sets in relational or multidimensional databases. Aggregates stored in relational databases essentially use the same data model as used for storing InfoCubes. Aggregates stored in multidimensional databases (Microsoft SQL Server 2000) have been introduced with SAP BW 3.0.
Aggregates are still the most powerful means SAP BW provides to optimize the performance of reporting and analysis processes. Not only can SAP BW automatically take care of updating aggregates whenever necessary (upload of master or transaction data), it also automatically determines the most efficient aggregate available at query execution time.
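To make the idea concrete: in relational terms, an aggregate behaves like a precomputed summary table over the InfoCube's fact table. A minimal SQL sketch with hypothetical table and column names (real BW aggregates are generated and maintained by the system, not created by hand):
-- Detail level: one row per document line in the fact table.
-- Aggregate level: precomputed totals at month/material granularity.
CREATE TABLE sales_agg_month_material AS
SELECT calmonth, material,
       SUM(quantity) AS quantity,
       SUM(revenue)  AS revenue
FROM sales_fact
GROUP BY calmonth, material;

-- A monthly query can now read the small aggregate instead of
-- scanning every low-level record in the fact table.
SELECT calmonth, SUM(revenue)
FROM sales_agg_month_material
GROUP BY calmonth;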
mahesh -
Database upgrade - slow query performance
Hi,
recently we upgraded our 8i database to a 10g database.
While we were testing our Forms application against the new
10g database, there was a very slow SQL statement which runs
for several minutes, although it runs within seconds against the 8i database.
With SQL*Plus it sometimes runs fast, sometimes slow (see execution plans below)
in 10g.
The sql-statement in detail:
SELECT name1, vornam, aboid, liefstat
FROM aktuellerabosatz
WHERE aboid = evitadba.get_evitaid ('0000002100')
"aktuellerabosatz" is a view on a table with about 3.000.000 records.
The function get_evitaid returns only the last 4 digits of the whole
number.
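The function body is not shown here; based on the description above it presumably looks something like this (a guess at the implementation, not the actual code):
-- Hypothetical reconstruction of evitadba.get_evitaid.
-- If the logic really is this simple, declaring it DETERMINISTIC lets
-- Oracle avoid re-executing it more often than necessary.
CREATE OR REPLACE FUNCTION get_evitaid (p_nr IN VARCHAR2)
  RETURN NUMBER DETERMINISTIC
IS
BEGIN
  RETURN TO_NUMBER(SUBSTR(p_nr, -4)); -- keep only the last four digits
END get_evitaid;
/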
Execution plan with slow response time:
12:05:31 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
12:05:35 2 FROM aktuellerabosatz
12:05:35 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:55.07
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4 Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=3 Card=1)
Statistics
100 recursive calls
0 db block gets
121353 consistent gets
121285 physical reads
0 redo size
613 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Execution plan with fast response time:
12:06:43 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
12:06:58 2 FROM aktuellerabosatz
12:06:58 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:00.00
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4 Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=3 Card=1)
Statistics
110 recursive calls
8 db block gets
49 consistent gets
0 physical reads
0 redo size
613 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
In the fast response, the consistent gets and physical reads are very small,
but the other time they are very high, which (it seems) results in the slow performance.
What could be the reasons?
kind regards
Marco
The two execution plans above are both from 10g SQL*Plus sessions on the same database with the same user. We gather statistics for the database with the DBMS_STATS package. Normally we use the ALL_ROWS option. The confusing thing is that the SQL statement sometimes runs fast and sometimes slow in a SQL*Plus session with the same execution plan; only the physical reads and consistent gets are extremely different.
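A typical call for regathering the statistics on the table behind the view looks like this (a sketch; the schema and table names are taken from the execution plans above, the option values are only illustrative):
-- Regather optimizer statistics for the base table and its indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'EVITADBA',
    tabname          => 'EVTABO',
    cascade          => TRUE, -- also gather index stats (e.g. EVIABO22)
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/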
If we rewrite the SQL statement to use the table evtabo directly with an additional
WHERE clause (taken from the view definition) instead of using the view, then it runs fast:
14:24:04 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
14:24:14 2 FROM aktuellerabosatz
14:24:14 3 WHERE aboid = evitadba.get_evitaid ('0000000246');
No rows selected
Elapsed: 00:00:43.07
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=27315 Card=1204986 Bytes=59044314)
1 0 VIEW OF 'EVTABO_V1' (VIEW) (Cost=27315 Card=1204986 Bytes=59044314)
2 1 TABLE ACCESS (FULL) OF 'EVTABO' (TABLE) (Cost=27315 Card=1204986 Bytes=45789468)
14:24:59 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
14:25:26 2 FROM evtabo
14:25:26 3 WHERE aboid = evitadba.get_evitaid ('0000002100')
14:25:26 4 and gueltab <= TRUNC(sysdate) AND (gueltbs >=TRUNC(SYSDATE) OR gueltbs IS NULL);
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:00.00
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4 Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=3 Card=1)
What could be the reason for the different performance in 8i and 10g?
Thanks
Marco -
Slow query performance in Excel 2007 vs Excel 2003
Hi,
Some of our clients recently upgraded to BI 7.0 and also upgraded to Excel 2007.
They experience lots of performance problems when using the BEx Analyzer in Excel 2007.
Refreshing queries and using 'simple' workbooks is up to 10 times slower than before with Excel 2003.
Has anyone experienced the same?
Any tips/tricks to solve this problem?
With regards,
Tom.
Hello all,
1) Please set the following parameters to X in transaction
RS_FRONTEND_INIT and check the issue.
Parameters to be set are
ANA_USE_SIDGRIDDELTA
ANA_USE_SIDGRIDMASS
ANA_SINGLEDPREFRESH
ANA_CACHE_WORKBOOK
ANA_USE_OPTIMIZE_STG
ANA_USE_TABLE
2) Also refer to the KBA below, which should help resolve the issue:
1570478 BW Report in Excel 2007 or Excel 2010 takes much more time than
3) In the workbook properties please set the flag
- Use Compression When Saving Workbook
4) If you are working with big hierarchies, please try to improve
performance with the following setting directly in the Analysis Grid:
- Properties of Analysis Grid - Display Hierarchy Icons
- switch to "+/-"
Regards,
Arvind -
Slow query performance in Oracle 10.2.0.3
Hi,
We have Oracle 10.2.0.3 installed on RHEL 5 (64-bit). We have two queries, one a SELECT and the other an INSERT. First we executed the INSERT query, which inserts 10,000 rows into a table, and then the SELECT query on this table. This works fine in one thread. But when we do the same thing in 10 threads, the INSERT is fine but the SELECT takes a very long time across the 10 threads. Any bug related to parallel execution of SELECT queries in 10.2.0.3? Any suggestions?
Thanks in advance.
Regards,
RJ.
Justin,
We ran the same INSERT and SELECT queries in 10 manual sessions, out of which the SELECT query takes more time to execute. Please refer to the waits given below. No, there is no bottleneck as far as hardware is concerned, because we tested it on servers with different configurations.
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 52 93.2
latch: cache buffers chains 45,542 6 0 10.7 Concurrency
log file parallel write 2,107 3 1 5.2 System I/O
log file sync 805 2 2 3.5 Commit
latch: session allocation 5,116 1 0 2.6 Other
Wait Events
• s - second
• cs - centisecond - 100th of a second
• ms - millisecond - 1000th of a second
• us - microsecond - 1000000th of a second
• ordered by wait time desc, waits desc (idle events last)
Event Waits %Time-outs Total Wait Time (s) Avg wait (ms) Waits /txn
latch: cache buffers chains 45,542 0.00 6 0 22.99
log file parallel write 2,107 0.00 3 1 1.06
log file sync 805 0.00 2 2 0.41
latch: session allocation 5,116 0.00 1 0 2.58
buffer busy waits 20,482 0.00 1 0 10.34
db file sequential read 157 0.00 1 4 0.08
control file parallel write 1,330 0.00 0 0 0.67
wait list latch free 39 0.00 0 10 0.02
enq: TX - index contention 632 0.00 0 0 0.32
latch free 996 0.00 0 0 0.50
SQL*Net break/reset to client 1,738 0.00 0 0 0.88
SQL*Net message to client 108,947 0.00 0 0 55.00
os thread startup 2 0.00 0 19 0.00
cursor: pin S wait on X 3 100.00 0 11 0.00
latch: In memory undo latch 136 0.00 0 0 0.07
log file switch completion 4 0.00 0 7 0.00
latch: shared pool 119 0.00 0 0 0.06
latch: undo global data 121 0.00 0 0 0.06
buffer deadlock 238 99.58 0 0 0.12
control file sequential read 1,735 0.00 0 0 0.88
SQL*Net more data to client 506 0.00 0 0 0.26
log file single write 2 0.00 0 2 0.00
SQL*Net more data from client 269 0.00 0 0 0.14
reliable message 12 0.00 0 0 0.01
LGWR wait for redo copy 26 0.00 0 0 0.01
rdbms ipc reply 6 0.00 0 0 0.00
latch: library cache 7 0.00 0 0 0.00
latch: redo allocation 2 0.00 0 0 0.00
enq: RO - fast object reuse 2 0.00 0 0 0.00
direct path write 21 0.00 0 0 0.01
cursor: pin S 1 0.00 0 0 0.00
log file sequential read 2 0.00 0 0 0.00
direct path read 8 0.00 0 0 0.00
SQL*Net message from client 108,949 0.00 43,397 398 55.00
jobq slave wait 14,527 49.56 35,159 2420 7.33
Streams AQ: qmn slave idle wait 246 0.00 3,524 14326 0.12
Streams AQ: qmn coordinator idle wait 451 45.45 3,524 7814 0.23
wait for unread message on broadcast channel 3,597 100.00 3,516 978 1.82
virtual circuit status 120 100.00 3,516 29298 0.06
class slave wait 2 0.00 0 0 0.00
Message was edited by:
RJiv -
Hello there,
I have an Excel report I created which works perfectly fine on my dev environment, but fails on my test environment when I try to do a data refresh.
The key difference between both dev and test environments is that in dev, everything is installed in one server:
SharePoint 2013
SQL 2012: Database Instance, SSAS Instance, SSRS for SharePoint, SSAS POWERPIVOT instance (Powerpivot for SharePoint).
In my test and production environments, the architecture is different:
SQL DB Servers in High Availability (irrelevant for this report since it is connecting to the tabular model, just FYI)
SQL SSAS Tabular server (contains a tabular model that processes data from the SQL DBs).
2x SharePoint Application Servers (we installed both SSRS and PowerPivot for SharePoint on these servers)
2x SharePoint FrontEnd Servers (contain the SSRS and PowerPivot add-ins).
Now in dev, test and production, I can run PowerPivot reports that have been created in SharePoint without any issues. Those reports can access the SSAS Tabular model without any issues, and perform data refresh and OLAP functions (slicing, dicing, etc).
The problem is with Excel reports (i.e. .xlsx files) uploaded to SharePoint. While I can open them, I am having a hard time performing a data refresh. The error I get is:
"An error occurred during an attempt to establish a connection to the external data source [...]"
I ran SQL Server Profiler on the SSAS server where the Tabular instance is, and I noticed that every time I try to perform a data refresh, two entries appear under the user name ANONYMOUS LOGON.
Since things work without any issues on my single-server dev environment, I tried running SQL Server Profiler there as well to see what I get.
As you can see from the above, in the dev environment the query runs without any issues and the user name logged is in fact my username from the dev environment domain. I also have a separated user for the test domain, and another for the production domain.
Now upon some preliminary investigation I believe this has something to do with the data connection settings in Excel and the usage (or no usage) of secure store. This is what I can vouch for so far:
Library containing reports is configured as trusted in SharePoint Central Admin.
Library containing data connections is configured as trusted in SharePoint Central Admin.
The Data Provider referenced in the Excel report (MSOLAP.5) is configured as trusted in SharePoint Central Admin.
In the Excel report, the Excel Services authentication setting is set to "use authenticated user's account". This works fine in the DEV environment.
Concerning Secure Store, the PowerPivot Configurator has set up the PowerPivotUnattendedAccount application ID in all the environments. There is
NO configuration of an application ID for Excel Services in any of the environments (dev, test or production). Although I reckon this is where the solution lies, I am not 100% sure why it fails in test and prod. But as I read what I am
writing, I reckon this is because of the authentication "hops" through servers. Am I right in my assumption?
Could someone please advise what I am doing wrong in this case? If it is the fact that I am missing a Secure Store entry for Excel Services, I wonder if someone could advise me on how to set it up? My confusion is around the "Target Application Type" setting.
Thank you for your time.
Regards,
P.
Hi Rameshwar,
PowerPivot workbooks contain embedded data connections. To support workbook interaction through slicers and filters, Excel Services must be configured to allow external data access through embedded connection information. External data access is required
for retrieving PowerPivot data that is loaded on PowerPivot servers in the farm. Please refer to the steps below to solve this issue:
In Central Administration, in Application Management, click Manage service applications.
Click Excel Services Application.
Click Trusted File Location.
Click http:// or the location you want to configure.
In External Data, in Allow External Data, click Trusted data connection libraries and embedded.
Click OK.
For more information, please see:
Create a trusted location for PowerPivot sites in Central Administration:
http://msdn.microsoft.com/en-us/library/ee637428.aspx
Another reason is Excel Services returns this error when you query PowerPivot data in an Excel workbook that is published to SharePoint, and the SharePoint environment does not have a PowerPivot for SharePoint server, or the SQL Server Analysis
Services (PowerPivot) service is stopped. Please check this document:
http://technet.microsoft.com/en-us/library/ff487858(v=sql.110).aspx
Finally, here is a good article regarding how to troubleshoot PowerPivot data refresh for your reference. Please see:
Troubleshooting PowerPivot Data Refresh:
http://social.technet.microsoft.com/wiki/contents/articles/3870.troubleshooting-powerpivot-data-refresh.aspx
Hope this helps.
Elvis Long
TechNet Community Support -
SSAS Tabular: MDX query goes OutOfMemory for a larger dataset
Hello all,
I am using SSAS 2012 Tabular to build a cube to support the organizational reporting requirements. Right now the server is Windows 2008 x64 with 16GB of RAM installed. I have the following MDX query. What this query does is get the member caption of the
“OrderGroupNumber” non-key attribute as a measure, where order group numbers pertain to a specific day and occur in specific seconds of a day. As I want to find in which seconds I have order group numbers, I cross-join the time dimension’s members with a specific
day and filter the tuples using the transaction count. The transaction count is a non-zero value if an Order Group Number occurs within a specific second of the selected day.
At present [TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber] has 170+ million members (potentially this could grow rapidly) and the time dimension has 86,400 members.
WITH
MEMBER [Measures].[OrderGroupNumber]
AS IIF([Measures].[Transaction Count] > 0, [TransactionsInflight].[OrderGroupNumber].CURRENTMEMBER.MEMBER_CAPTION,
NULL)
SELECT
NON EMPTY{[TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber].MEMBERS}
ON COLUMNS,
{FILTER(([Date].[Calendar Hierarchy].[Date].&[2012-07-05T00:00:00], [Time].[Time].[Time].MEMBERS),
[Measures].[Transaction Count] > 0) } ON
ROWS
FROM [OrgDataCube]
WHERE [Measures].[OrderGroupNumber]
After I run this query it reaches a dead end and freezes the server (sometimes the SSAS server throws an OutOfMemory exception, but sometimes it does not). Even though I have 16GB of memory, it consumes all of it while making no progress. I have to do a hard reset
to get the server back online. Even if I limit the time members using the ":" range operator, the machine still freezes. I have run out of ideas for fine-tuning the design. Could you guys provide me some guidelines to optimize this query? I am willing to
make a design change if it is necessary.
Thanks and best regards,
Chandima
Hi Greg,
Finally I found out why the query goes out of memory in Tabular mode. I guess this information will be helpful for others, so I am posting my findings.
Some of the non-key attribute columns in the tabular model tables (mainly the tables which form dimensions) do not have pretty names. So for the non-key attribute columns which needed pretty names, I renamed the columns.
For example, in my date dimension there is a non-key attribute named "DateAltKey". This is the date column which I am using. As this is not pretty for the client tools, I renamed this column to "Date" inside the designer (dimension
design screen). I deployed the cube, processed the cube, and there was no problem.
Now here comes the fun part. For every table, inside the Tables node (Tabular SSAS Database > Tables) you can view the partition details. You have a single partition per dimension table if you do not create extra partitions. I opened the partitions screen,
clicked on the "Edit" icon and performed a syntax check. Surprisingly it failed, complaining about the renamed column: "Date" cannot be found in the source. So I realized that I cannot simply rename columns like that.
After that I created calculated columns (with pretty names) for all the columns it complained about, and all the source columns behind the calculated columns were hidden from the client tools. I deployed the cube, processed the cube and performed a
syntax check. No errors; everything was perfect.
I ran the query which gave me trouble and guess what... it executed within 5 seconds. My problem is solved. I really do not know why this improves performance, but the trick worked for me.
Thanks a lot for your support.
Chandima -
Query Performance - Query very slow to run
I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't aggregate it. Is there anything I can do to improve performance?
Hi Joel,
Walkthrough Checklist for Query Performance:
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
Regards
Vivek Tripathi -
It's 11g R2, and the query is performing very slowly
SELECT OBJSTATE
FROM
SUB_CON_CALL_OFF WHERE SUB_CON_NO = :B2 AND CALL_OFF_SEQ = :B1
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
Edited by: 842638 on Feb 2, 2013 5:52 AM
Hi Mark,
Just have few basic doubts regarding the below query performance :
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
1] How do you determine that this query performance is "ok"?
2] What is the actual need of checking the query performance this way?
3] Is this the TKPROF output?
4] How do you know that the query was "called" 798747 times? The "execute" shows 0.
Could you please help me with this?
Thanks.
Ranit B. -
SSAS Tabular model Performance Issue
Hello,
We have strange behavior with an SSAS Tabular model. The model size is approx. 40GB in memory. Our production server has 200GB of memory. Most of the users access the cube via Excel 2013 (64-bit). We have been noticing that performance starts degrading
the day after the Analysis Services service has been restarted or the server rebooted. On the day either one of those is performed we get good response times, but the next day response times become 2 to 3 times longer than on the first day. Has anyone
else experienced this? We are getting into a situation where we are required to restart the service almost every day.
Any help in this matter would be greatly appreciated.
Thanks
Deepak
Why are you sure role-based security is the culprit? Immediately after cube processing or immediately after ClearCache, does the report perform fast for a user without any role-based security filters? I would recommend reviewing the methodology on page
35 of the tabular performance guide to see if it is a storage engine or formula engine bottleneck and report back.
http://aka.ms/ASTabPerf2012
http://artisconsulting.com/Blogs/GregGalloway -
Data loaded to Power Pivot via Power Query is not yet supported in SSAS Tabular Cube
Hello, I'm trying to create an SSAS Tabular cube from data loaded into Power Pivot via Power Query (SAP BOBJ connector), but it looks like this is not yet supported.
Has anyone tried this before? Any workaround that makes sense?
The final goal is to pull data from SAP BW and a BO universe (using Power Query) and be able to create an SSAS Tabular cube.
Thanks in advance
Sebastian
Sebastian,
Depending on the size of the data from Analysis Services, one workaround could be to import the data into Excel, make an Excel table, and then use the Excel table as a data source.
Reeves
Denver, CO -
Reg: Process Chain, query performance tuning steps
Hi All,
I came across a question like this: there is a process chain of 20 processes, out of which 5 have completed; at the 6th step an error occurred and it cannot be rectified, so the chain should be started again from the 7th step. If I go to a particular step I can run that particular step, but how can I start the entire chain again from step 7? I know that I need to use a function module, but I don't know its name. Please could somebody help me out.
Please also let me know the steps involved in query performance tuning and aggregate tuning.
Thanks & Regards
Omkar.K
Hi,
Process Chain
Method 1 (when it fails in a step/request)
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
How is it possible to restart a process chain at a failed step/request?
Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
You need to set the failed request/step to green in the database as well as you need to raise the event that will force the process chain to run to the end from the next request/step on.
Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
In the opened popup click on the tab 'Chain'.
In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
1. copy the variant from the popup to the variante of table rspcprocesslog
2. copy the instance from the popup to the instance of table rspcprocesslog
3. copy the start date from the popup to the batchdate of table rspcprocesslog
Press F8 to display the entries of table rspcprocesslog.
Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
1. rspcprocesslog-log_id -> i_logid
2. rspcprocesslog-type -> i_type
3. rspcprocesslog-variante -> i_variant
4. rspcprocesslog-instance -> i_instance
5. enter 'G' for parameter i_state (sets the status to green).
Now press F8 to run the fm.
Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
Query performance tuning
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid too many characteristics in rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from the left side menu > and there in system load history and distribution for a particular day > check query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important indicators: the aggregation ratio, and records transferred to the front end vs. records selected in the DB.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you mean speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. The plus/minus signs are the valuation of the aggregate design and usage. "++" means that its compression is good and it is accessed frequently (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the greater the number of minus signs, the worse the evaluation of the aggregate.
If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it's A, X or H); the advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing the BW Statistics Business Content: you need to install it, feed it data, and analyze via the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-code DB20, which gives you all the performance-related information like:
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Thanks,
JituK -
With SSAS Tabular using Excel:
If I place a single measure MyMeasure:=SUM([ColumnNameOnFactTable]),
it returns very quickly.
I have 3 other dimensions from 3 other dimension tables on Excel with this "MyMeasure" as the value.
YearMonth in the columns and say Department ID, Account ID, and Call Center (just all made up for this example).
Now, when I place a second measure from that same table as "MyMeasure", call it SecondMeasure:=SUM([AnotherColumnNameOnFactTable]), the OLAP query in Excel spins, and sometimes even throws an out-of-memory error.
The server has 24 GB of RAM, and the model is only a few hundred megs.
I assume something must be off here?
Either I've done something foolish with the model or I'm missing something?
EDIT:
It SEEMS to work better if I place all my measures on the Excel grid first, then go and add my "dimensions"; adding the measures after the dimensions appears to incur a rather steep penalty.
Number of rows:
The largest table (the account ID lookup) has 180,000.
The fact table has 7,000.
The others are 1,000 or less...
Hi,
Thank you for your question.
I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected while the job is transferred. Your patience is greatly appreciated.
Thank you for your understanding and support.
Regards,
Charlie Liao
TechNet Community Support -
Rendering speed slows down during the process, almost to a freeze
Hello there!
First of all, I'm new to After Effects. I bet I'm just doing something wrong, but I would like to ask for advice on how to fix my issue, because googling the subject hasn't solved it.
The point is the following:
I'm rendering a 120-minute video for YouTube. It's almost static, with no effects, just a few simple fade-in/out transitions on a couple of layers. You can find an example of such a video on YouTube at this link: Daniel Lesden - Rave Podcast 056: guest mix NitroDrop (Israel) - YouTube (for some reason, this specific video rendered successfully previously).
The process goes very well at first: 1 minute of playback renders in about 30 seconds, so the first hour of the video renders in 30 minutes.
The problem starts at about 70% of overall progress: rendering starts slowing down, eventually to 1 second per frame. As you can imagine, that is painful for such long videos; it would take days to finish the last 20-30%.
Composition and render settings are following:
Size: 1280 x 720 px
FPS: 30
Format: Quicktime
Codec: ProRes 422. I've also tried Animation, Photo JPEG, another ProRes and even H.264, but the outcome was the same.
Output module settings screenshot:
Multi-processing settings:
At this point I have to say that I've tried many different options for the amount of CPU and RAM, and with the multiprocessing option turned off completely, but in the end it was all the same.
After Effects version: CC 2014 (still the trial version, but it shouldn't be a problem since Adobe doesn't place any limitations on trial versions as far as I know).
Computer and OS information:
Any advice please? I'd appreciate every possible solution.
"Open AE Preferences > General, then move to the bottom tab and set AE to purge every 10 frames or so"
— I've set this parameter to 10 as suggested, and here are the latest render queue results:
The first 60-minute video rendered superbly in 1 hour 17 minutes.
The second 120-minute video went really badly: it took 8 hours to render about a quarter before hitting an error. Here is what I found in the error log: "There is an error in background rendering so switching to foreground rendering after 246 frames completed out of total 216000 frames. (26 :: 142)".
Later I tried to render the second video again, and it went well almost to the finish: 1 hour 45 minutes of it rendered in less than an hour. But then it suddenly slowed down (as described in the original post), so I had to stop it. I installed Memory Dig and "optimized" memory before rendering and once during the process, but it didn't help. In the end I just rendered the last 15 minutes of the video (with no issues) and merged it into the 120-minute file using Video Joiner. A pretty sad method, though.
On a third attempt everything was fine until rendering stopped with a new error in a popup window: "After Effects error: internal verification failure, sorry! {could not find itemframe we just now checked in}".
So unfortunately the original problem still isn't solved. -
Query performance on RAC is a lot slower than single instance
I simply followed the steps provided by Oracle to install a RAC database with 2 nodes.
The performance of insertion (Java, thin OJDBC) is pretty much the same compared to a single instance on NFS.
However, the performance of the SELECT query is very slow compared to a single instance.
I have tried different methods for the storage configuration (ASM with raw, OCFS2), but the performance is still slow.
When I shut down one instance, leaving only one instance up, the query performance is very fast (as fast as a single instance).
I am using RHEL 5 64-bit (16G of physical memory) and Oracle 11.1.0.6 with patchset 11.1.0.7.
Could someone help me debug this problem?
Thanks,
Chau
Edited by: user638637 on Aug 6, 2009 8:31 AM
Top 5 timed foreground events:
DB CPU: time 943(s), %db time (47.5%)
cursor: pin S wait on X: waits (13,940), time (321s), avg wait (23ms), %db time (16.15%)
direct path read: waits (95,436), time (288s), avg wait (3ms), %db time (14.51%)
IPC send completion sync: waits (546,712), time (149s), avg wait (0), %db time (7.49%)
gc cr multi block request: waits (7,574), time (78s), avg wait (10ms), %db time (4.0%)
Another thing I see is that the "avg global cache cr block flush time (ms)" is 37.6 ms.
The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
You should check the SQL statements from the report and tune them.
- Check the execution plan.
- If no index is used, consider adding one.
SQL> set autot trace explain
SQL> sql statement;
cursor: pin S wait on X:
A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
Use bind variables in your SQL and avoid dynamic SQL built from literals.
http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
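The difference in a minimal sketch (hypothetical table t(id, val)):
DECLARE
  l_id  NUMBER := 42;
  l_val NUMBER;
BEGIN
  -- Literal concatenation: every distinct id produces a new SQL text,
  -- forcing a hard parse and library cache/mutex contention under load.
  EXECUTE IMMEDIATE 'SELECT val FROM t WHERE id = ' || l_id INTO l_val;

  -- Bind variable: one shared cursor, soft parses only.
  EXECUTE IMMEDIATE 'SELECT val FROM t WHERE id = :1' INTO l_val USING l_id;
END;
/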
Check the MEMORY_TARGET initialization parameter.
By the way, you have a high "DB CPU" (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
Good Luck