Easiest way to monitor query performance?
Hi,
What is the easiest method to track query performance? I need information about data collection from the DataSource, data-processing speed, and front-end performance.
I would like to compare the results when:
1. the query is executed directly from BEx/WAD, and
2. the query is accessed from the portal.
So basically I'm looking for a tool-set to monitor reporting performance at query level, per user.
-Miikka
It really depends on how in-depth you want to go with your monitoring. You could issue the show interface command and view the details there. You could also view the top talkers on the ASDM dashboard. Another option is to use NetFlow.
Beyond that, you would need to get either Cisco Prime or other third-party monitoring software.
Please remember to rate helpful posts and select a correct answer
Similar Messages
-
Is there a way to monitor System Performance for all the computers in my domain?
Like Resource Monitor on a local computer, is there a way or tool to monitor the resource usage of all the computers in my domain? Can I do it using a 2008 R2 server?
You can monitor all performance counters with perfmon.msc, also for remote computers.
If you want to monitor systems continuously and log the performance data, you will need a real monitoring solution. Microsoft offers System Center Operations Manager for this purpose, but many third-party solutions exist as well.
MCP/MCSA/MCTS/MCITP -
What's the easiest way to show query results?
I have a button on the Business Partner form, and I want to show the results of a query when you push it. What's the easiest way to show this data to the user?
Am I going to have to create a new form and add a matrix to it? There has to be an easier way.
Hi Bryan,
If the query has no parameters, you can activate the menu entry for it; otherwise I think the best way is a new form with a grid (much faster than a matrix).
Regards
Ad -
Speed up table query performance
Hi,
I have a table with 200 million rows (approx. 30 GB). What are some ways to improve query performance on this table? We have tried indexing, but it doesn't help much. Partitioning: I am not sure how to use it, since there are no clear criteria for creating partitions. What other options could I use?
If this is in the wrong forum, please suggest appropriate forum.
Thanks in advance.
Assuming you have purchased the partitioning option from Oracle and are not violating the terms of your license agreement, then partitioning is the way to go.
For the concept docs go to http://tahiti.oracle.com
For demos of all of the various forms of partitioning go to Morgan's Library at www.psoug.org and look up partitioning.
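Partitioning helps here because a filter on the partition key lets the database skip whole partitions instead of scanning all 200 million rows. As a rough illustration of that pruning idea (plain Python with hypothetical per-month buckets, not Oracle syntax):

```python
from collections import defaultdict
import datetime

# Toy sketch of range partitioning by month (hypothetical buckets, not Oracle
# syntax): rows land in per-month partitions, and a date-range query scans only
# the partitions whose range overlaps the filter instead of every row.
partitions = defaultdict(list)

def insert(row):
    key = (row["d"].year, row["d"].month)  # partition key: month of the date column
    partitions[key].append(row)

def query(start, end):
    hits, scanned = [], 0
    for (year, month), rows in partitions.items():
        first = datetime.date(year, month, 1)
        next_first = datetime.date(year + (month == 12), month % 12 + 1, 1)
        if first > end or next_first <= start:
            continue  # partition pruning: this bucket cannot contain matches
        for row in rows:
            scanned += 1
            if start <= row["d"] <= end:
                hits.append(row)
    return hits, scanned

for day in range(365):
    insert({"d": datetime.date(2011, 1, 1) + datetime.timedelta(days=day), "v": day})

hits, scanned = query(datetime.date(2011, 2, 1), datetime.date(2011, 2, 8))
print(len(hits), scanned)  # 8 matching rows, and only February's 28 rows scanned
```

Oracle does this natively once the table is range-partitioned on the filter column; the sketch only shows why a selective partition key matters even when there is "no clear criteria": a date or period column that queries routinely filter on is the usual choice.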
New partitioning options have been added in 11gR1, so some of the demos may not work for you; most will. -
Ways to improve the performance of my query?
Hi all,
I have created a MultiProvider that fetches data from 3 ODS objects, and each ODS contains a huge amount of data. As a result my query performance is very slow.
Apart from creating indexes on the ODS objects, is there anything else that can be done to improve the performance of my query, since all 3 InfoProviders are ODS objects?
Thanks
Haritha
Haritha,
If you still need more info, just have a look below:
There are a few areas where your queries can be improved:
1. The data volume in your InfoProviders.
2. Dimension tables: how you have distributed your objects across the dimension tables.
3. Queries that run on a MultiProvider vs. the cube itself: when running on a MultiProvider, the system has to combine more tables at execution time, so query performance suffers.
4. Aggregates on the cube, and how well those aggregates are designed.
5. The OLAP cache.
6. Calculation formulas.
etc.
and also you can go thru the links below:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
Hope this helps you.
Gattu -
How many ways can i improve query performance?
Hi All,
Can anybody help me?
In how many ways can I improve query performance in Oracle?
Thanks,
Narasimha
As many as you can think of!
-
Easiest way to know the query group name?
Hello All,
This has been nagging at me for quite some time now. I would appreciate your input on ways to find out the query group name for a set of queries we see in the UI.
The obvious method is to search for your query group in the list of query groups that the system shows when we navigate to Setup -> Queries and Reports -> Query Groups. Are there other ways as well?
Thanks In Advance
Devesh
Hi Devesh,
If you know the query definition then the easiest way to find the query group would be to use "Search Query Groups by Queries Used" query on the Query Groups list page. This query allows you to select a query definition and returns the query group using the query.
Or you could use "Query Group Review" query on the Query Groups list page. This returns all queries within all Query Groups. You can further refine your search by Query Name or Internal Name of the query you are interested in.
If you want to do a general search for all buy-side list page or buy-side picker page query groups, search for <buylist> and <buypick> respectively.
Hope this helps.
Regards,
Vikram -
Export System Monitoring App Performance Dashboard data in CSV
Hello,
I would like to know if it is possible to extract the data of the Solution Manager dashboard (System Monitoring App Performance) into a flat file.
I found some information on this page: System Monitoring App: Performance - System Reporting in SAP Solution Manager - SAP Library
The query name is
0SMD_DASH_SYSMON_PERFORMANCE, and I would like to use RSANWB on the cube 0CCMPDATA to create a flat file, but maybe there is an easier solution?
Thank you
I have tried to find the information in another way:
TEMPLATE_ID=0E2EREP_ALLSYS_SYSTEMPING
VAR_NAME_1=0SMDVTST
VAR_VALUE_LOW_EXT_1=201408010000 (begin date of the day)
VAR_VALUE_HIGH_EXT_1=201408132359 (end date of the day)
FILTER_IOBJNM_2=0SMD_TIHM
FILTER_VALUE_2=0SMDVTIHMCM
FILTER_VALUE_TYPE_2=VARIABLE_EXIT
FILTER_IOBJNM_3=0CALDAY
FILTER_VALUE_3=0VDAYCM
FILTER_VALUE_TYPE_3=VARIABLE_EXIT
FILTER_IOBJNM_4=0SMD_GRTI
FILTER_VALUE_4=DAY
FILTER_IOBJNM_5=0SMD_GRDA
FILTER_VALUE_5=DAY
USE_PAGE_WRAPPER=PING
VAR_NAME_7=0SMD_V_SID_M
VAR_VALUE_EXT_7= <SID> (SID of the system)
VAR_NAME_8=0SMD_V_SID_INT
VAR_VALUE_LOW_EXT_8=%23VAR_VALUE_HIGH_EXT_8=#
VAR_NAME_99=0SMD_V_INFOPR
VAR_VALUE_EXT_99=0SMD_PE2H
VAR_NAME_98=0SMD_V_INFOPR
VAR_VALUE_EXT_98=0SMD_PEH1
VAR_NAME_97=0SMD_V_INFOPR
VAR_VALUE_EXT_97=0SMD_PEH2
Does anyone know how I can export that information to a CSV? -
Why am I Observing Poor Geospatial Query Performance?
Our HANA Rev. 72 Amazon instance is performing very poorly when selecting geospatial polygons from tables. I'm hoping someone out there knows why.
Here's one example. The table below has just 51 records, one for each state (including Puerto Rico). The table has a GEOMETRY column that contains a polygon outline of the state.
The query below uses ST_Covers() to find the state at a given latitude, longitude. It takes more than 9 seconds to execute!
SELECT STATE_NAME, STATE_FIPS as STATE_ID, SHAPE.ST_AsGeoJSON() as STATE_POLYGON
FROM GEO_SHAPES.US_STATES
WHERE SHAPE.ST_Covers(new ST_Point('POINT(-105.123 39.456)')) = 1
Statement 'SELECT STATE_NAME, STATE_FIPS as STATE_ID, SHAPE.ST_AsGeoJSON() as STATE_POLYGON from ...' successfully executed in 9.326 seconds (server processing time: 8.629 seconds)
Essentially all the time goes into the SHAPE.ST_Covers() function. The same query without this function runs in 6 ms.
Does anybody have any idea why?
JeffKasper wrote:
So I have been running activity monitor and what I am seeing is 40 MB of "Free" RAM with 630 MB "Wired", 2.5 GB "Inactive" and about 5 GB of "Active".
Both "Free" and "Inactive" are (supposed to be) available for use by any process that wants it.
When you first start up, most memory is, of course, "Free." As apps or system processes need memory, that's where OSX gets it. When they release it, however, it does not go back to Free, but to "Inactive" and is identified with the last process that used it. This is done to speed up assigning it back to the previous process if it requests it (which of course is quite common).
So as time passes, you'll see less and less Free memory and more and more Inactive memory; this means your Mac is working properly. In fact, after running for a long time, if there's much Free memory left, it is, in a sense, wasted!
The thing to watch for is Paging. If the "Page outs" figure is high, or changing rapidly, then OSX is having to page stuff out because it's out of both Free and Inactive memory.
A better way to monitor page-outs is via a Terminal command. (The Terminal app is in your Applications/Utilities folder.) Enter the following, exactly as shown, at the prompt:
sar -g 60 10
Leave the terminal window open, then try to re-create the unresponsive problem.
This should tell you if you are doing pageouts. You'll see a line in the Terminal window every 60 seconds for 10 minutes (or until you quit Terminal), showing the number of pageouts per second. A few pageouts is normal. If you have large numbers of pageouts, then you have a memory problem. -
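To avoid eyeballing ten minutes of output from the sar run above, a small script can flag the bad intervals. This sketch assumes a simplified two-column format (timestamp, pageouts per second); real sar output varies by OS and version, so adjust the parsing to match yours:

```python
def flag_high_pageouts(sar_lines, threshold=10.0):
    """Flag samples whose pageouts-per-second exceed a threshold.

    The two-column line format here (timestamp, pgout rate) is an
    assumption for illustration -- real 'sar -g' output differs by OS.
    """
    flagged = []
    for line in sar_lines:
        parts = line.split()
        if len(parts) == 2:
            timestamp, rate = parts[0], float(parts[1])
            if rate > threshold:
                flagged.append((timestamp, rate))
    return flagged

# Hypothetical samples: one line per 60-second interval
samples = ["10:01:00 0.0", "10:02:00 2.5", "10:03:00 85.3", "10:04:00 1.1"]
print(flag_high_pageouts(samples))  # [('10:03:00', 85.3)]
```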
Hi all,
I have an ODS with around 23,000 records. The ODS has around 20 fields (3 key fields). I built a query on the ODS; the user only inputs a cost center hierarchy. The report does not have any calculations; it is a direct view of the fields. But the report takes a long time to run if I select the 2nd or 3rd level of the hierarchy as the starting point, whereas if I select the 5th or 6th level the query output is fast. With just 23,000 records in the ODS I thought query performance should be fast for any level of the hierarchy.
I even created an index on cost center, but there was no improvement in performance. Is there any way to achieve faster query performance on the ODS?
Thanks,
Prabhu.
The Technical Content cubes (if installed) give more generic statistics of query runtime.
The RSRT option mentioned by Sudheer is a more targeted approach.
The same RSRT can be helpful for the following (from help.sap.com):
Definition
The read mode determines how the OLAP processor gets data during navigation. You can set the mode in Customizing for an InfoProvider and in the Query Monitor for a query.
Use
The following types are supported:
1. Query to be read when you navigate or expand hierarchies (H)
The amount of data transferred from the database to the OLAP processor is the smallest in this mode. However, it has the highest number of read processes.
In the following mode Query to read data during navigation, the data for the fully expanded hierarchy is requested for a hierarchy drilldown. In the Query to be read when you navigate or expand hierarchies mode, the data across the hierarchy is aggregated and transferred to the OLAP processor on the hierarchy level that is the lowest in the start list. When expanding a hierarchy node, the children of this node are then read.
You can improve the performance of queries with large presentation hierarchies by creating aggregates on a middle hierarchy level that is greater than or the same as the hierarchy start level.
2. Query to read data during navigation (X)
The OLAP processor only requests data that is needed for each navigational status of the query in the Business Explorer. The data that is needed is read for each step in the navigation.
In contrast to the Query to be read when you navigate or expand hierarchies mode, presentation hierarchies are always imported completely on a leaf level here.
The OLAP processor can read data from the main memory when the nodes are expanded.
When accessing the database, the best aggregate table is used and, if possible, data is aggregated in the database.
3. Query to read all data at once (A)
There is only one read process in this mode. When you execute the query in the Business Explorer, all data in the main memory area of the OLAP processor that is needed for all possible navigational steps of this query is read. During navigation, all new navigational states are aggregated and calculated from the data from the main memory.
The read mode Query to be read when you navigate or expand hierarchies significantly improves performance in almost all cases compared to the other two modes. The reason for this is that only the data the user wants to see is requested in this mode.
Compared to the Query to be read when you navigate or expand hierarchies mode, the setting Query to read data during navigation only affects performance for queries with presentation hierarchies.
Unlike the other two modes, the setting Query to Read All Data At Once also has an effect on performance for queries with free characteristics. The OLAP processor aggregates on the corresponding query view. For this reason, the aggregation concept, that is, working with pre-aggregated data, is least supported in the Query to read all data at once mode.
We recommend you choose the mode Query to be read when you navigate or expand hierarchies.
Only choose a different read mode in exceptional circumstances. The read mode Query to read all data at once may be of use in the following cases:
§ The InfoProvider does not support selection. The OLAP processor reads significantly more data than the query needs anyway.
§ A user exit is active in a query. This prevents data from already being aggregated in the database. -
How to improve Query Performance
Hi Friends...
I want to improve query performance. I need the following things:
1. What is the process to find out the performance? Are there any transaction codes, and how do I use them?
2. How can I know whether the query is performing well or badly?
3. I want to see the values, i.e. how much time the query takes to run, and where the defect is.
4. How do I improve the query performance? After I have done what is needed, I want to see the query execution time again, i.e. whether it is running faster or not.
E.g.:
Example 1: Need to create aggregates.
Question: where do I create the aggregates? I am now in the Production system, so where do I need to create them: in Development, Quality, or Production?
Do I need to make any changes in Development, given that I am in the Production system?
So please give me solutions to my questions.
Thanks
Ganga
Message was edited by: Ganga N
Hi Ganga,
Please refer to OSS Note 557870: Frequently asked questions on query performance.
also refer to
Prakash's weblog
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
This is the oss notes of FAQ on query performance
1. What kind of tools are available to monitor the overall Query Performance?
1. BW Statistics
2. BW Workload Analysis in ST03N (Use Export Mode!)
3. Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools is available to analyze a specific query in detail?
1. Transaction RSRT
2. Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, or %Frontend shows a high number for all InfoCubes.
ii. You need to run ST03N in expert mode to get these values.
5. What can I do if the database proportion is high for all queries?
Check:
1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
3. If Buffers, I/O, CPU, memory on the database server are exhausted?
4. If Cube compression is used regularly
5. If Database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
1. Again you can use ST03N -> BW System Load
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name
4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime
10. What can I do if a query has a high database runtime?
1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check if database statistics are up to date for the cube/aggregate; use transaction RSRV (use the database check for statistics and indexes)
3. Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
1. Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells")
2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
3. Check if a user exit Usage is involved in the OLAP runtime?
4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
5. Check if a proper index on the inclusion table exist
12. What can I do if a query has a high frontend runtime?
1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
2. Check if frontend PC are within the recommendation (RAM, CPU MHz)
3. Check if the bandwidth for WAN connection is sufficient
CHEERS
RAVI -
Is there a way to find out :
Best performing Queries and
Worst performing queries in respect to the list of queries for a specific module ?
if so what is the procedure ?
Any type of answer is most welcome and I will assign points.
Hi,
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0e6a6e3-0601-0010-e6bd-ede96db89ec7
Performance review
/people/marc.bernard/blog/2005/02/08/follow-me-into-the-world-of-business-intelligence
OSS notes to improve the query performence.
1111470 Poor performance during filtering with many single
1039781 Input-ready query: Performance
1101143 Collective note: BEx Analyzer performance
Performance Tuning with the OLAP Cache
http://www.sapadvisors.com/resources/Howto...PerformanceTuningwiththeOLAPCache$28pdf$29.pdf
OLAP: Cache Monitor
http://help.sap.com/saphelp_nw2004s/helpdata/en/41/b987eb1443534ba78a793f4beed9d5/frameset.htm
Hope this helps.
Thanks,
JituK -
QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES
What query performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent, please.
What data-loading performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent, please.
Will reward full points.
Regards,
Guru
BW back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
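Tip 11 above (use buffers and array operations instead of single selects) can be sketched outside ABAP with SQLite; the table and names here are made up, and the point is one set-based statement versus one database round trip per key:

```python
import sqlite3

# Hypothetical illustration of tip 11: a SELECT-per-row loop versus one
# set-based read. Both return the same data; the loop pays one round trip
# per key, which is what slows down transfer- and update-rule routines.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE costcenter (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO costcenter VALUES (?, ?)",
                 [(i, f"CC{i:04d}") for i in range(1000)])
keys = list(range(0, 1000, 10))

# Row-by-row: len(keys) separate statements (the "single select" pattern)
slow = [conn.execute("SELECT name FROM costcenter WHERE id = ?", (k,)).fetchone()[0]
        for k in keys]

# Set-based: one statement, results buffered in memory (the "array" pattern)
placeholders = ",".join("?" * len(keys))
fast = [row[0] for row in conn.execute(
    f"SELECT name FROM costcenter WHERE id IN ({placeholders}) ORDER BY id", keys)]

assert slow == fast  # identical results, far fewer round trips
```

In ABAP terms the second pattern corresponds to reading with FOR ALL ENTRIES or a ranges table into an internal table once, then looking rows up in memory inside the loop.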
Hope it Helps
Chetan
@CP.. -
System/Query Performance: What to look for in these tcodes
Hi
I have been researching on system/query performance in general in the BW environment.
I have seen tcodes such as
ST02 :Buffer/Table analysis
ST03 :System workload
ST03N: System workload (new)
ST04: Database monitor
ST05: SQL trace
ST06: Operating system monitor
ST66:
ST21:
ST22: ABAP runtime error (dump) analysis
SE30: ABAP runtime analysis
RSRT:Query performance
RSRV: Analysis and repair of BW objects
For example, Note 948066 provides descriptions of these tcodes but what I am not getting are thresholds and their implications. e.g. ST02 gave tune summary screen with several rows and columns (?not sure what they are called) with several numerical values.
Is there some information on these rows/columns such as the typical range for each of these columns and rows; and the acceptable figures, and which numbers under which columns suggest what problems?
Basically some type of a metric for each of these indicators provided by these performance tcodes.
Something similar to an operating system, where CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
I will appreciate some guidelines on the use of these tcodes and from your personal experience, which indicators you pay attention to under each tcode and why?
Thanks
Hi Amanda,
I forgot something: SAP provides the EarlyWatch report. If you have Solution Manager, you can generate it yourself. In the EarlyWatch report there will be red, yellow, and green lights for parameters.
http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
EarlyWatch focuses on the following aspects:
· Server analysis
· Database analysis
· Configuration analysis
· Application analysis
· Workload analysis
EarlyWatch Alert, a free part of your standard maintenance contract with SAP, is a preventive service designed to help you take rapid action before potential problems lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
Ask your Basis team for a sample EarlyWatch report; the parameters in it should cover what you are looking for, with red, yellow, and green indicators.
Understanding Your EarlyWatch Alert Reports
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
hope this helps. -
Index not getting used in the query(Query performance improvement)
Hi,
I am using Oracle 10g and have this query:
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from Action_type at,
action_history ah,
sensitivity_audit sa,
logical_entity le,
feed_static fs,
feed_book_status fbs,
feed_instance fi,
marsnode bk
where at.description = 'Regress Positions'
and fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and fi.most_recent = 'Y'
and bk.close_date is null
and ah.time_draft = 'after'
and sa.close_action_id is null
and le.close_action_id is null
and at.action_type_id = ah.action_type_id
and ah.action_id = sa.create_action_id
and le.logical_entity_id = sa.type_id
and sa.feed_id = fs.feed_id
and sa.book_id = bk.node_id
and sa.feed_instance_id = fi.feed_instance_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
union
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from feed_book_status fbs,
marsnode bk,
feed_instance fi,
feed_static fs,
feed_book_status_history fbsh,
Action_type at,
Action_history ah
where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and ah.action_type_id = 103
and bk.close_date is null
and ah.time_draft = 'after'
-- and ah.action_id = fbs.action_id
and fbs.book_id = bk.node_id
and fbs.book_id = fbsh.book_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
and fbsh.create_action_id = ah.action_id
and at.action_type_id = ah.action_type_id
union
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from feed_book_status fbs,
marsnode bk,
feed_instance fi,
feed_static fs,
feed_book_status_history fbsh,
Action_type at,
Action_history ah
where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and ah.action_type_id = 101
and bk.close_date is null
and ah.time_draft = 'after'
and fbs.book_id = bk.node_id
and fbs.book_id = fbsh.book_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
and fbsh.create_action_id = ah.action_id
and at.action_type_id = ah.action_type_id;
This is the execution plan:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 231 | 43267 | 104K (85)|
| 1 | SORT UNIQUE | | 231 | 43267 | 104K (85)|
| 2 | UNION-ALL | | | | |
| 3 | NESTED LOOPS | | 1 | 257 | 19540 (17)|
| 4 | NESTED LOOPS | | 1 | 230 | 19539 (17)|
| 5 | NESTED LOOPS | | 1 | 193 | 19537 (17)|
| 6 | NESTED LOOPS | | 1 | 152 | 19534 (17)|
|* 7 | HASH JOIN | | 213 | 26625 | 19530 (17)|
|* 8 | TABLE ACCESS FULL | LOGICAL_ENTITY | 12 | 264 | 2 (0)|
|* 9 | HASH JOIN | | 4267 | 429K| 19527 (17)|
|* 10 | HASH JOIN | | 3602 | 90050 | 1268 (28)|
|* 11 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 46826 | 22 (5)|
|* 12 | TABLE ACCESS FULL | FEED_INSTANCE | 335K| 3927K| 1217 (27)|
|* 13 | TABLE ACCESS FULL | SENSITIVITY_AUDIT | 263K| 19M| 18236 (17)|
| 14 | TABLE ACCESS BY INDEX ROWID | FEED_STATIC | 1 | 27 | 1 (0)|
|* 15 | INDEX UNIQUE SCAN | IDX_FEED_STATIC_FI | 1 | | 0 (0)|
|* 16 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
|* 17 | INDEX RANGE SCAN | PK_MARSNODE | 3 | | 2 (0)|
|* 18 | TABLE ACCESS BY INDEX ROWID | ACTION_HISTORY | 1 | 37 | 2 (0)|
|* 19 | INDEX UNIQUE SCAN | PK_ACTION_HISTORY | 1 | | 1 (0)|
|* 20 | TABLE ACCESS BY INDEX ROWID | ACTION_TYPE | 1 | 27 | 1 (0)|
|* 21 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
|* 22 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
| 23 | NESTED LOOPS | | 115 | 21505 | 42367 (28)|
|* 24 | HASH JOIN | | 114 | 16644 | 42023 (28)|
| 25 | NESTED LOOPS | | 114 | 13566 | 42007 (28)|
|* 26 | HASH JOIN | | 114 | 12426 | 41777 (28)|
|* 27 | HASH JOIN | | 957 | 83259 | 41754 (28)|
|* 28 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | 91760 | 30731 (28)|
| 29 | NESTED LOOPS | | 9570K| 456M| 10234 (21)|
| 30 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | 27 | 1 (0)|
|* 31 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
| 32 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| 209M| 10233 (21)|
|* 33 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 79244 | 22 (5)|
| 34 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | 10 | 2 (0)|
|* 35 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | 1 (0)|
| 36 | TABLE ACCESS FULL | FEED_STATIC | 2899 | 78273 | 16 (7)|
|* 37 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | 2 (0)|
|* 38 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
| 39 | NESTED LOOPS | | 115 | 21505 | 42367 (28)|
|* 40 | HASH JOIN | | 114 | 16644 | 42023 (28)|
| 41 | NESTED LOOPS | | 114 | 13566 | 42007 (28)|
|* 42 | HASH JOIN | | 114 | 12426 | 41777 (28)|
|* 43 | HASH JOIN | | 957 | 83259 | 41754 (28)|
|* 44 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | 91760 | 30731 (28)|
| 45 | NESTED LOOPS | | 9570K| 456M| 10234 (21)|
| 46 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | 27 | 1 (0)|
|* 47 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
| 48 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| 209M| 10233 (21)|
|* 49 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 79244 | 22 (5)|
| 50 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | 10 | 2 (0)|
|* 51 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | 1 (0)|
| 52 | TABLE ACCESS FULL | FEED_STATIC | 2899 | 78273 | 16 (7)|
|* 53 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | 2 (0)|
------------------------------------------------------------------------------------------------------
and the predicate info:
Predicate Information (identified by operation id):
7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
12 - filter("FI"."MOST_RECENT"='Y')
13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
15 - access("FI"."FEED_ID"="FS"."FEED_ID")
filter("SA"."FEED_ID"="FS"."FEED_ID")
16 - filter("BK"."CLOSE_DATE" IS NULL)
17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
18 - filter("AH"."TIME_DRAFT"='after')
19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
20 - filter("AT"."DESCRIPTION"='Regress Positions')
21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
22 - filter("BK"."CLOSE_DATE" IS NULL)
24 - access("FI"."FEED_ID"="FS"."FEED_ID")
26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
28 - filter("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after')
31 - access("AT"."ACTION_TYPE_ID"=103)
33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
38 - filter("BK"."CLOSE_DATE" IS NULL)
40 - access("FI"."FEED_ID"="FS"."FEED_ID")
42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
44 - filter("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after')
47 - access("AT"."ACTION_TYPE_ID"=101)
49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
Note
- 'PLAN_TABLE' is old version
In this query, the ACTION_HISTORY and FEED_BOOK_STATUS_HISTORY tables are mainly accessed via full scans even though there are indexes created on them, like this:
ACTION_HISTORY
ACTION_ID column Unique index
FEED_BOOK_STATUS_HISTORY
(FEED_INSTANCE_ID, BOOK_ID, COB_DATE, VERSION) composite index
I have tried all the best combinations; however, the indexes are not getting used anywhere.
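For example, this is the kind of additional index I experimented with, chosen so that the leading columns match the full-scan predicates in the plan (a sketch only; the index names are invented, the DDL is untested, and whether the optimizer picks them up depends on column selectivity):

```sql
-- ACTION_HISTORY is filtered on ACTION_TYPE_ID and TIME_DRAFT
-- (plan lines 28/44), so a composite index on those columns is a
-- candidate to replace the full scan:
CREATE INDEX idx_ah_type_draft
  ON action_history (action_type_id, time_draft);

-- FEED_BOOK_STATUS_HISTORY is joined on CREATE_ACTION_ID and BOOK_ID
-- (plan lines 26/27 and 42/43); neither column leads the existing
-- composite index, so an index starting with them is a candidate to
-- enable an index-driven join:
CREATE INDEX idx_fbsh_ca_book
  ON feed_book_status_history (create_action_id, book_id);
```

The general point is that a composite index can only be range-scanned when its leading column appears in a predicate, which is why the existing (FEED_INSTANCE_ID, BOOK_ID, COB_DATE, VERSION) index does not help these joins.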
Could you please suggest some way to make the query perform better?
Thanks,
Aashish
Hi Mohammed,
This is what I got using your method of getting the execution plan:
SQL_ID 4vmc8rzgaqgka, child number 0
select distinct bk.name "Book Name" , fs.feed_description "Feed Name" , fbs.cob_date
"Cob" , at.description "Data Type" , ah.user_name " User" , ah.comments "Comments"
, ah.time_draft from Action_type at, action_history ah, sensitivity_audit sa, logical_entity
le, feed_static fs, feed_book_status fbs, feed_instance fi, marsnode bk where at.description =
'Regress Positions' and fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011' and
fi.most_recent = 'Y' and bk.close_date is null and ah.time_draft='after' and
sa.close_action_id is null and le.close_action_id is null and at.action_type_id =
ah.action_type_id and ah.action_id=sa.create_action_id and le.logical_entity_id = sa.type_id
and sa.feed_id = fs.feed_id and sa.book_id = bk.node_id and sa.feed_instance_id =
fi.feed_instance_id and fbs.feed_instance_id = fi.feed_instance_id and fi.feed_id = fs.feed_id
union select distinct bk.name "Book Name" , fs.
Plan hash value: 1006571916
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 1 | SORT UNIQUE | | 231 | 6144 | 6144 | 6144 (0)|
| 2 | UNION-ALL | | | | | |
| 3 | NESTED LOOPS | | 1 | | | |
| 4 | NESTED LOOPS | | 1 | | | |
| 5 | NESTED LOOPS | | 1 | | | |
| 6 | NESTED LOOPS | | 1 | | | |
|* 7 | HASH JOIN | | 213 | 1236K| 1236K| 1201K (0)|
|* 8 | TABLE ACCESS FULL | LOGICAL_ENTITY | 12 | | | |
|* 9 | HASH JOIN | | 4267 | 1023K| 1023K| 1274K (0)|
|* 10 | HASH JOIN | | 3602 | 1095K| 1095K| 1296K (0)|
|* 11 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
|* 12 | TABLE ACCESS FULL | FEED_INSTANCE | 335K| | | |
|* 13 | TABLE ACCESS FULL | SENSITIVITY_AUDIT | 263K| | | |
| 14 | TABLE ACCESS BY INDEX ROWID | FEED_STATIC | 1 | | | |
|* 15 | INDEX UNIQUE SCAN | IDX_FEED_STATIC_FI | 1 | | | |
|* 16 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
|* 17 | INDEX RANGE SCAN | PK_MARSNODE | 3 | | | |
|* 18 | TABLE ACCESS BY INDEX ROWID | ACTION_HISTORY | 1 | | | |
|* 19 | INDEX UNIQUE SCAN | PK_ACTION_HISTORY | 1 | | | |
|* 20 | TABLE ACCESS BY INDEX ROWID | ACTION_TYPE | 1 | | | |
|* 21 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
| 23 | NESTED LOOPS | | 115 | | | |
|* 24 | HASH JOIN | | 114 | 809K| 809K| 817K (0)|
| 25 | NESTED LOOPS | | 114 | | | |
|* 26 | HASH JOIN | | 114 | 868K| 868K| 1234K (0)|
|* 27 | HASH JOIN | | 957 | 933K| 933K| 1232K (0)|
|* 28 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | | | |
| 29 | NESTED LOOPS | | 9570K| | | |
| 30 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | | | |
|* 31 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
| 32 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| | | |
|* 33 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
| 34 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | | | |
|* 35 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | | |
| 36 | TABLE ACCESS FULL | FEED_STATIC | 2899 | | | |
|* 37 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | | |
|* 38 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
| 39 | NESTED LOOPS | | 115 | | | |
|* 40 | HASH JOIN | | 114 | 743K| 743K| 149K (0)|
| 41 | NESTED LOOPS | | 114 | | | |
|* 42 | HASH JOIN | | 114 | 766K| 766K| 208K (0)|
|* 43 | HASH JOIN | | 957 | 842K| 842K| 204K (0)|
|* 44 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | | | |
| 45 | NESTED LOOPS | | 9570K| | | |
| 46 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | | | |
|* 47 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
| 48 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| | | |
|* 49 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
| 50 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | | | |
|* 51 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | | |
| 52 | TABLE ACCESS FULL | FEED_STATIC | 2899 | | | |
|* 53 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | | |
Predicate Information (identified by operation id):
7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
12 - filter("FI"."MOST_RECENT"='Y')
13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
15 - access("FI"."FEED_ID"="FS"."FEED_ID")
filter("SA"."FEED_ID"="FS"."FEED_ID")
16 - filter("BK"."CLOSE_DATE" IS NULL)
17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
18 - filter("AH"."TIME_DRAFT"='after')
19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
20 - filter("AT"."DESCRIPTION"='Regress Positions')
21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
22 - filter("BK"."CLOSE_DATE" IS NULL)
24 - access("FI"."FEED_ID"="FS"."FEED_ID")
26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
28 - filter(("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after'))
31 - access("AT"."ACTION_TYPE_ID"=103)
33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
38 - filter("BK"."CLOSE_DATE" IS NULL)
40 - access("FI"."FEED_ID"="FS"."FEED_ID")
42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
44 - filter(("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after'))
47 - access("AT"."ACTION_TYPE_ID"=101)
49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
Note
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
122 rows selected.
Elapsed: 00:00:02.18
The action_type_id column is of NUMBER type.
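Following the warning in the Note, the actual runtime row counts (A-Rows) can be collected by re-running the statement with the hint it mentions and then pulling the cursor's plan with the extended format (a sketch; the hint and the DBMS_XPLAN format option are standard Oracle usage, but the query body itself is abbreviated here):

```sql
-- Either set it session-wide:
--   ALTER SESSION SET statistics_level = ALL;
-- or add the hint to the statement itself:
SELECT /*+ gather_plan_statistics */ DISTINCT bk.name "Book Name"
       -- ... rest of the query above, unchanged ...
  FROM action_type at
       -- ...
;

-- Then display the plan of the last executed cursor, including
-- actual rows, starts, and buffer gets per step:
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

With A-Rows next to E-Rows you can see which step's cardinality estimate is off, which is usually the first thing to check when the optimizer refuses to use an index.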