To check the execution time for each transaction.
Abapers,
How do I find the processing time for each transaction (e.g. order entry, shipping, billing) in SAP?
TIA,
sinthu
Hi,
By default, you can see the execution time in the right-hand corner of the SAP session.
You can use SE30 to get into more detail, like database time, ABAP time, etc.
Hope it helps.
Regards,
Vijay
Similar Messages
-
How to find the Execution Time for Java Code?
* Hi everyone, I want to calculate the execution time for my process in Java.
* The following was the output from my code:
O/P:-
This run took 0 Hours ;1.31 Minutes ;78.36 Seconds
*** In the output above, the hours, minutes and seconds should be reported exactly, but in my code the seconds still include the whole minutes (they should not).
* Here is my code:
static long start_time;
public static void startTime() {
    start_time = System.currentTimeMillis();
}
public static void endTime() {
    DecimalFormat df = new DecimalFormat("##.##");
    long end_time = System.currentTimeMillis();
    float t = end_time - start_time;
    float sec = t / 1000;
    float min = 0, hr = 0;
    if (sec > 60) {
        min = sec / 60; // note: sec is never reduced, so it still includes the minutes
        if (min > 60) {
            hr = min / 60;
        }
    }
    System.out.println("This run took " + df.format(hr) + " Hours ;" + df.format(min) + " Minutes ;" + df.format(sec) + " Seconds");
}
How can I calculate the exact timing for my process?
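A clean way to get whole hours and minutes with only the leftover seconds is integer division and modulo on the elapsed milliseconds. A minimal sketch (class and method names here are illustrative, not from the original post):

```java
import java.util.Locale;
import java.util.concurrent.TimeUnit;

public class ElapsedFormatter {
    // Break an elapsed duration in milliseconds into whole hours,
    // whole minutes, and the remaining (possibly fractional) seconds,
    // so the seconds no longer include the whole minutes.
    public static String format(long elapsedMillis) {
        long hours = TimeUnit.MILLISECONDS.toHours(elapsedMillis);
        long minutes = TimeUnit.MILLISECONDS.toMinutes(elapsedMillis) % 60;
        double seconds = (elapsedMillis % 60_000) / 1000.0;
        return String.format(Locale.ROOT,
                "This run took %d Hours ;%d Minutes ;%.2f Seconds",
                hours, minutes, seconds);
    }
}
```

For example, an elapsed time of 78360 ms comes out as 0 hours, 1 minute and 18.36 seconds, instead of 1.31 minutes alongside 78.36 seconds.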
* Thanks.
Hi flounder, will the following code work correctly?
public static void endTime() {
    DecimalFormat df = new DecimalFormat("##.##");
    long end_time = System.currentTimeMillis();
    float t = end_time - start_time;
    float sec = t / 1000;
    float min = 0, hr = 0;
    while (sec >= 60) {
        min++;
        sec = sec - 60;
        if (min >= 60) {
            min = 0; // or min = min - 60;
            hr++;
        }
    }
    System.out.println("This run took " + df.format(hr) + " Hours ;" + df.format(min) + " Minutes ;" + df.format(sec) + " Seconds");
}
-
Can I plot 2 locations at the same time for each record in a table
I'm trying to plot 2 locations (2 points on a Power View map) at the same time for each record/project in a table.
I can plot 1 location for each project with no problem,
but I'm trying to show the Begin Point and the End Point for each project at the same time.
Is this even possible?
First of all, THANKS, this worked!
But now I stumbled on another issue. I actually have 3 tables (and I've adapted them to the file you sent):
Table 1 => TripData
Trip      | LongLat                 | Location          | Type  | TypeCode | Size
NW Tour   | 47.568077, -122.320800 | Seattle, WA       | Begin | 0        | 1
NW Tour   | 47.035608, -122.884812 | Olympia, WA       | End   | 1        | 1
Cali Trip | 37.808848, -122.412580 | San Francisco, CA | Begin | 0        | 1
Cali Trip | 32.682611, -117.178348 | San Diego, CA     | End   | 1        | 1
Table 2 => TripInfo
Trip      | OneLongLat              | NTP
NW Tour   | 47.568077, -122.320800 | 1/1/2015
Cali Trip | 37.808848, -122.412580 | 1/5/2015
Table 3 => ALLTrips
Trip      | Stop | Owner
NW Tour   | 1    | Owner1
NW Tour   | 2    | Owner2
NW Tour   | 3    | Owner3
NW Tour   | 4    | Owner4
NW Tour   | 5    | Owner5
Cali Trip | 1    | Owner6
Cali Trip | 2    | Owner7
Cali Trip | 3    | Owner8
Cali Trip | 4    | Owner9
Cali Trip | 5    | Owner10
Cali Trip | 6    | Owner11
This is how the Diagram View looks in PowerPivot:
Trip Data => Trip Info <= ALLTrips
(since I don't know how to post pictures)
The MAP FIELDS are as follows:
SIZE - Count of Stop(s)
LOCATIONS - OneLongLat followed by LongLat (drill-down feature)
COLOR - Trip
The problem now happens with the drill-down feature.
You can either plot OneLongLat, which is the general location for each trip,
or LongLat of each trip, which shows the begin and end points,
but you can't use the drill-down feature???
If instead of OneLongLat you use a State column, it actually works!!!
I wonder if it has to do with the fact that both locations used for the drill-down are long/lat numbers???
Any suggestions???
And again, thanks for the response!
-
Reduce the execution time for the below query
Hi,
Please help me reduce the execution time of the following query, if any tuning is possible.
I have a table A with the columns:
ID, ORG_LINEAGE, INCLUDE_IND (ORG_LINEAGE is a string of IDs: if ID 5 reports to 4, and 4 to 1, the lineage for 5 will be stored as the string -1-4-5)
Below is the query ..
select ID
from A a
where INCLUDE_IND = '1'
and exists (
    select 1
    from A b
    where b.ID = '5'
    and b.ORG_LINEAGE like '%-'||a.ID||'-%'
)
order by ORG_LINEAGE;
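The EXISTS predicate above is just a delimited-substring test on the lineage string. For reference, the same check expressed in Java (a sketch mirroring the '-id-' format described above; the class name is illustrative):

```java
public class Lineage {
    // Mirrors the SQL predicate: ORG_LINEAGE LIKE '%-' || id || '-%',
    // i.e. a delimited-substring test on a lineage string such as "-1-4-5".
    public static boolean matchesLineage(String orgLineage, String id) {
        return orgLineage.contains("-" + id + "-");
    }
}
```

Note that with this pattern the last component ("5" in "-1-4-5") only matches if the stored string carries a trailing dash, which may be worth verifying in the data.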
The only constraint on the table A is the primary key on the ID column.
Following will be the execution plan :
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=406 Card=379 Bytes=2
653)
1 0 SORT (ORDER BY) (Cost=27 Card=379 Bytes=2653)
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'A' (Cost=24 Card
=379 Bytes=2653)
4 2 TABLE ACCESS (BY INDEX ROWID) OF 'A' (Co
st=1 Card=1 Bytes=6)
5 4 INDEX (RANGE SCAN) OF 'ORG_LINEAGE' (NON-UNIQUE)
I order it by the org_lineage to get the first person. So is it a result problem?
The order by doesn't give you the first person; it gives you a sorted result set (of which there may be zero, one, or thousands).
If you only want one row from that, then you're spending a lot of time tuning the wrong query.
How do you know which ORG_LINEAGE row you want?
Maybe it would help if you posted some sample data. -
Can I reduce the execution time for a step in a TestStand ?
Hi,
I calculated the single-step execution time for TestStand Ver 2.0. It comes to around 20 milliseconds/step. Can I reduce this execution time?
Are there any settings available for configuring execution-time parameters, other than result logging and exception handling, to reduce the execution time?
It's difficult to tell what time you are reporting for your step. Clearly we don't have control of the time it takes your code to execute. However, we are constantly working on reducing the overhead of calling the code. In addition, you don't mention the type of step you are calling. One way to have a common reference is to use the example \Examples\Benchmarks\Benchmarks.seq. Below I have posted the results of running this sequence with both tracing and result collection enabled and then disabled. I have a 700 MHz, 128 MB RAM, Dell PIII laptop. In this example there is no code within the code modules. You will notice that calling a DLL has the least overhead, with a minimum of 7.459 ms with tracing and results enabled and 0.092 ms with tracing and results disabled. Although not included below, if I enable results but disable tracing I get a minimum time of 0.201 ms, a 100x improvement on your time.
With Results and Tracing enabled.
7.578 milliseconds per step for CVI Standard Prototype - Object File
7.579 milliseconds per step for CVI Standard Prototype - DLL
7.459 milliseconds per step for DLL Flexible Prototype
8.589 milliseconds per step for DLL Flexible Prototype Numeric Limit
9.563 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
10.015 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
7.868 milliseconds per step for ActiveX Automation
8.892 milliseconds per step for LabVIEW Standard Prototype
With tracing and results disabled.
0.180 milliseconds per step for CVI Standard Prototype - Object File
0.182 milliseconds per step for CVI Standard Prototype - DLL
0.092 milliseconds per step for DLL Flexible Prototype
0.178 milliseconds per step for DLL Flexible Prototype Numeric Limit
0.277 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
0.400 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
0.270 milliseconds per step for ActiveX Automation
1.235 milliseconds per step for LabVIEW Standard Prototype -
Performance Tuning -To find the execution time for Select Statement
Hi,
There is a program that takes 10 hrs to execute. I need to tune its performance. The program is basically reading a few tables like KNA1, ANLA, ANLU, ADRC, etc., and updating a custom table. I did my analysis and found a few performance techniques for ABAP coding.
Now my problem is, to get this object approved I need to submit the execution statistics to the client. I checked both ST05 and SE30. I heard of a Tcode where we can execute a select statement and note its time, then modify it and measure its improved performance. Can anybody advise me on this?
Thanks,
Rajani.
Hi,
This is documentation regarding performance analysis. Hope this will be useful
It is a general practice to use Select * from <database>. This statement populates all the fields of the structure from the database.
The effect is manifold:
It increases the time needed to retrieve data from the database
There is a large amount of unused data in memory
It increases the processing time of the work area or internal tables
It is always a good practice to retrieve only the required fields. Always use the syntax Select f1 f2 ... fn from <database>.
e.g. Do not use the following statement:-
Data: i_mara like mara occurs 0 with header line.
Data: i_marc like marc occurs 0 with header line.
Select * from mara
Into table i_mara
Where matnr in s_matnr.
Select * from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Instead use the following statement:-
Data: begin of i_mara occurs 0,
Matnr like mara-matnr,
End of i_mara.
Data: begin of i_marc occurs 0,
Matnr like marc-matnr,
Werks like marc-werks,
End of i_marc.
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
-
Execution time for first transaction
Hi,
We recently implemented the SCM SNC 5.1 Work Order Collaboration module. In the process we built several custom display screens to act as real-time reports, since SNC does not have a reporting framework of its own.
However, it appears that the performance of the custom screens is really poor the first time a screen is executed after logging in to the web screen. Thereafter it seems to work well.
Is the system filling the buffer or cache when executing for the first time?
It is observed that all the Work Order related standard web transactions also perform in a similar fashion.
Is there a parameter that controls this?
Please advise.
Thanks,
Kedar
Hi Kedar,
For the standard transactions, the SGEN transaction is used to generate all the programs in the system.
As you say yours is a custom development and works fine after the first execution, there is no option to speed up the first run. Try to optimize your custom screen generation, for example:
1. Avoid extracting all the screen data when the first screen is displayed.
2. Avoid unnecessary data reads during screen generation.
Regards,
Saravanan V -
Is it possible to limit the execution time for a query?
I have an application that will run a query to gather statistics. The time window is defined by the user. Since the polling period for data collection varies, it is not possible to say that a large time window will result in a large result set. I may have a polling period of 1 minute or a polling period of 1 hour.
I want to avoid a user executing a query that consumes too many resources and impacts the system's performance in general. Could I stop a query after it takes more than x seconds? Is there a way to write an SQL statement indicating the max response time, similar to rownum?
You can also create an Oracle profile with limited resources and assign it to the Oracle account running the queries (this profile will be used for all queries run by the corresponding user). Resources can only be specified in CPU time (not elapsed time) or logical reads.
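On the application side, one complementary way to stop waiting after x seconds is to run the query in a worker thread and abandon it on timeout. A hedged sketch using java.util.concurrent (the Callable here is a stand-in for the actual query call; with JDBC, Statement.setQueryTimeout is another option):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedQuery {
    // Run a task but give up if it exceeds the given timeout.
    // Returns the result, or null if the time limit was hit.
    public static <T> T runWithTimeout(Callable<T> task, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> future = pool.submit(task);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the runaway task
            return null;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }
}
```

Note this only stops the client from waiting; the database session may keep working unless it is also cancelled server-side, which is what the profile approach addresses.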
See http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96521/users.htm#15451
and http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_611a.htm#2065932
-
Need to get the exact time of each picture that I saved (maybe as an Excel sheet or Word document)
Hi guys,
I am actually a new LabVIEW user. I have successfully been able to save pictures, but unfortunately I have two problems. The first is that I need to use two cameras at the same time; the program I have so far is for one camera. The second is that I need to get the exact time for each picture saved in the profile.
I would appreciate it if someone could help me with that. Anyway, please find the sub-VI attached.
Regards,
Abbas
Attachments:
Abbas progress.vi 54 KB
Hi Abbas,
First, I notice that you are using IMAQ for USB. Is there any particular reason you are not using IMAQdx? This is the newer, supported driver for use with USB cameras that supports acquiring from multiple cameras at the same time.
Check these two articles:
Can I Acquire from Multiple USB Cameras Simultaneously Using IMAQdx?
Can I Acquire from Two USB Cameras Simultaneously with NI-IMAQ for USB Cameras 1.0?
And also this example (if your camera does not support acquiring from multiple cameras at the same time):
Toggle Between Multiple USB Cameras
As for timestamps, some cameras output them with the frame timing. However, usually this is not a feature of USB cameras; if it is not, you can look into timing options within LabVIEW. Try playing with the property node for IMAQdx and the Acquisition Attributes.
Cheers,
Marti C
Applications Engineer
National Instruments
NI Medical -
Query Execution Time for a Query causing ORA-1555
dear Gurus
I have an ORA-01555 error. Earlier I used the query duration mentioned in the alert log and increased the undo retention, as I did not find the UNDOBLKS column of v$undostat high at the time of occurrence of the ORA-01555.
But a new ORA-01555 is coming whose query duration exceeds the undo retention time.
My question -
1. Is it possible to accurately find the query duration besides the alert log file?
abhishek, as you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful with the 1555 error under manual rbs segment management.
1- Tune the query. The faster a query runs the less likely a 1555 will occur.
2- Look at the processing. If a process was reading and updating the same table while committing frequently, then under manual rbs management the process would basically create its own 1555 error, rather than just being the victim of another process changing data and the rbs data being overlaid while the long-running query was still running. With undo management, the process could be generating more data than can be held for the undo_retention period, but because it is committed, Oracle has been told it doesn't really have to keep the data for use in rolling back a current transaction, so it gets discarded to make room for new changes.
If you find item 2 is true then separating the select from the update will likely eliminate the 1555. You do this by building a driving table that has the keys of the rows to be updated or deleted. Then you use the driver to control accessing the target table.
3- If the cause of the 1555 is or may be delayed block cleanout then select * from the target prior to running the long running query.
Realistically, you might need to increase the size of the undo tablespace to hold all the change data, and the value of the undo_retention parameter to be longer than the job run time. Which brings us back to option 1: tune every query in the process so that the job run time is reduced to optimal.
HTH -- Mark D Powell --
dear mark
Thanks for the excellent advice. I found that the error is coming because of frequent commits, which is item 2 as you rightly mentioned.
I think I need to keep a watch on the queries running. I was just trying to find the execution time for the queries, in case there is any way to find the query duration without running a trace.
regards
abhishek -
Estimate execution time for CTAS
Hi,
I have long been searching for a way to estimate the execution time of CTAS commands. I am a DBA. Our users run CTAS commands to load millions of rows. The commands fetch data from 4-5 very big tables, each with millions of records, process them using where and group by clauses, and finally create the table. All of this is coded in the CTAS command. These CTAS statements sometimes take a long time, like 5 or 8 hours. Users frequently ask me how long one is going to take. I use both OEM and TOAD, but I couldn't find the estimated time in these tools. I feel that there must be some way, but I don't know the method.
Can anybody please help me in this regard?
Thanks & Regards
Ananda Basak
It depends on a number of factors, chief among them how accurate your estimate needs to be, but also including things like what version of Oracle you're using, how accurate your database statistics are, etc.
One option is to look at the TIME column in the plan. For example, if I wanted to do a CTAS to create a copy of the EMP table, the optimizer expects that to take on the order of a second. Of course, the optimizer's estimates are only estimates and are only as accurate as the database statistics that are in place. If the optimizer generates a bad plan, it's likely because the optimizer expects some operation to take much more or much less time than it does in reality in which case the optimizer's runtime estimate is likely to be way off.
SQL> explain plan for create table emp_copy as select * from emp;
Explained.
SQL> ed
Wrote file afiedt.buf
1 select *
2* from table( dbms_xplan.display() )
SQL> /
PLAN_TABLE_OUTPUT
Plan hash value: 2748781111
| Id | Operation              | Name     | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | CREATE TABLE STATEMENT |          |   14 |   546 |     4  (0) | 00:00:01 |
|  1 | LOAD AS SELECT         | EMP_COPY |      |       |            |          |
|  2 | TABLE ACCESS FULL      | EMP      |   14 |   546 |     3  (0) | 00:00:01 |
Depending on the query plan, you may be able to query the GV$SESSION_LONGOPS view to track the progress of any long-running operations in your session. If your query plan involves a lot of full table scans, sorts that take more than a few seconds, hash joins, etc., then it is likely that you'll be able to chart the progress of a query over time by watching GV$SESSION_LONGOPS change. Of course, if your query is going to do many long-running operations, you'll need a human to interpret the data a bit in order to figure out where in the plan Oracle currently is and how far along that means the entire query is.
SELECT *
FROM gv$session_longops
WHERE time_remaining > 0
If you're using 11g and you have the performance and tuning pack licensed, you could also potentially use the V$SQL_PLAN_MONITOR view.
Justin -
How to find out the execution time of a sql inside a function
Hi All,
I am writing one function. There is only one IN parameter. In that parameter, I will pass one SQL select statement, and I want the function to return the exact execution time of that SQL statement.
CREATE OR REPLACE FUNCTION function_name (p_sql IN VARCHAR2)
RETURN NUMBER
IS
exec_time NUMBER;
BEGIN
--Calculate the execution time for the incoming sql statement.
RETURN exec_time;
END function_name;
/
Please note that wrapping the query in a "SELECT COUNT(*) FROM (<query>)" doesn't necessarily reflect the execution time of the stand-alone query, because the optimizer is smart and might choose a completely different execution plan for that query.
A simple test case shows the potential difference of work performed by the database:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Session altered.
SQL>
SQL> drop table count_test purge;
Table dropped.
Elapsed: 00:00:00.17
SQL>
SQL> create table count_test as select * from all_objects;
Table created.
Elapsed: 00:00:02.56
SQL>
SQL> alter table count_test add constraint pk_count_test primary key (object_id)
Table altered.
Elapsed: 00:00:00.04
SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'COUNT_TEST')
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.29
SQL>
SQL> set autotrace traceonly
SQL>
SQL> select * from count_test;
5326 rows selected.
Elapsed: 00:00:00.10
Execution Plan
Plan hash value: 3690877688
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5326 | 431K| 23 (5)| 00:00:01 |
| 1 | TABLE ACCESS FULL| COUNT_TEST | 5326 | 431K| 23 (5)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
419 consistent gets
0 physical reads
0 redo size
242637 bytes sent via SQL*Net to client
4285 bytes received via SQL*Net from client
357 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5326 rows processed
SQL>
SQL> select count(*) from (select * from count_test);
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 572193338
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 5 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| PK_COUNT_TEST | 5326 | 5 (0)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
412 bytes sent via SQL*Net to client
380 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
As you can see, the number of blocks processed (consistent gets) is quite different. You need to actually fetch all records, e.g. using a PL/SQL block on the server, to find out how long it takes to process the query, but that's not that easy if you want to have an arbitrary query string as input.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How to get the execution time of a query
Hi,
Environment: 10.2.0.4.0
Just wondering how I can get the query execution time? I am not interested in the query output, nor do I want the statistics; just the execution time.
Any suggestions will be appreciated
Thanks in advance
rogers42
If you're using SQL*Plus:
SQL> set autotrace traceonly
SQL> set timing on
SQL> <<your query here>>
SQL*Plus will fetch all the data and then report the query plan, execution statistics, and elapsed time. It will not display the actual data.
SET TIMING ON alone tells SQL*Plus to display the execution time of each SQL statement. The problem is that it also displays all the data, which can skew the results because you're including the time required by SQL*Plus to pipe a bunch of data to the screen.
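Outside SQL*Plus, the same elapsed-time measurement can be scripted around any call using a monotonic clock. A minimal sketch (the Runnable is a placeholder for executing your query, e.g. through JDBC):

```java
public class StopWatch {
    // Wall-clock duration of a task in milliseconds, measured with
    // System.nanoTime, which is monotonic and immune to clock changes.
    public static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

As with SET TIMING ON, this includes the time spent fetching and transferring the data, not just the server-side execution.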
Justin -
How to find the Response time for a particular Transaction
Hello Experts,
I am implementing a BAdI to achieve a customer enhancement for the XD01 transaction. I need to confirm to the customer what the response time of the system is before and after the implementation:
Response time BEFORE BAdI implementation
Response time AFTER BAdI implementation
Where can I get this?
Help me in this regard.
Best Regards
SRiNi
Hello,
Within STAD, enter the time range during which the user was executing the transaction, as well as the user name. The time field indicates the time when the transaction ended. STAD adds some extra time onto your time interval. Depending on how long the transaction ran, you can set the length you want it to display. This means that if it is set to 10, STAD will display statistical records from transactions that ended within that 10-minute period.
The selection screen also gives you a few options for display mode.
- Show all statistic records, sorted by start time
This shows you all of the transaction steps, but they are not grouped in any way.
- Show all records, grouped by business transaction
This shows the transaction steps grouped by transaction ID (shown in the record as Trans. ID). The times are not cumulative; they are the times for each individual step.
- Show Business Transaction Totals
This shows the transaction steps grouped by transaction ID. However, instead of just listing them, you can drill down from the top level. The top level shows you the overall response time, and as you drill down, you can see the response time of each individual step.
Note that you also need to add the user into the selection criteria. Everything else you can leave alone in this case.
Once you have the records displayed, you can double click them to get a detailed record. This will show you the following:
- Breakdown of response time (wait for work process, processing time, load time, generating time, roll time, DB time, enqueue time). This makes STAD a great place to start for performance analysis as you will then know whether you will need to look at SQL, processing, or any other component of response time first.
- Stats on the data selected within the execution
- Memory utilization of the transaction
- RFCs executed (including the calling time and remote execution time - very useful with performance analysis of interfaces)
- Much more.
As this chain of comments has previously indicated, you are best off using STAD if you want an accurate indication of response time. The trace times from ST12 (which combines the SE30 ABAP trace and the ST05 SQL trace) are less accurate than the values you get from STAD. I am not discounting the value of ST12 by any means; it is a very powerful tool to help you tune your transactions.
I hope this information is helpful!
Kind regards,
Geoff Irwin
Senior Support Consultant
SAP Active Global Support -
Acrobat DC insists on checking the Adobe CC license each time it is run. After that it will run, but starting this dialog up 10x a day is not what it's supposed to do, right? I should have no other check in 30 days.
Can this be fixed? Other apps like Photoshop and Lightroom do not do this. What do I need to do?
Hi Anubha, I now have the messages again, both for Acrobat and the latest Lightroom 2015 CC.