Sys.dm_exec_query_stats

In the DMV - sys.dm_exec_query_stats, I see in BOL:
total_elapsed_time (bigint): Total elapsed time, reported in microseconds (but only accurate to milliseconds), for completed executions of this plan.
last_elapsed_time (bigint): Elapsed time, reported in microseconds (but only accurate to milliseconds), for the most recently completed execution of this plan.
min_elapsed_time (bigint): Minimum elapsed time, reported in microseconds (but only accurate to milliseconds), for any completed execution of this plan.
max_elapsed_time (bigint): Maximum elapsed time, reported in microseconds (but only accurate to milliseconds), for any completed execution of this plan.
When I see last_elapsed_time as 3906, is it 3.9 ms?
So, if the total elapsed time is 5255857 and the execution count is 1421, the average elapsed time is (5255857/1421)/1000 ≈ 3.7 ms.
Am I missing something?
The reason for this post is that I have seen several queries with the calculation below. Are they reporting microseconds as milliseconds?
qs.total_elapsed_time/qs.execution_count AS "Avg Duration (ms)"
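For comparison, if the values are indeed microseconds, a correctly labelled version of that expression would need an extra division:
qs.total_elapsed_time/qs.execution_count/1000 AS "Avg Duration (ms)"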
Ranga

Hi Ranga,
According to your description, yes: total_elapsed_time and last_elapsed_time are reported in microseconds, so 3906 is about 3.9 ms, and a query that divides total_elapsed_time only by execution_count is reporting microseconds, not milliseconds.
You can refer to the query below to see the difference between microseconds and milliseconds.
SELECT
d.total_elapsed_time,
---- converted from microseconds
total_elapsed_time_in_milliseconds = d.total_elapsed_time/1000,
d.last_elapsed_time,
---- converted from microseconds
last_elapsed_time_in_milliseconds = d.last_elapsed_time/1000,
d.execution_count,
d.total_elapsed_time/d.execution_count AS [avg_elapsed_time],
---- converted from microseconds
d.total_elapsed_time/d.execution_count/1000 AS [avg_elapsed_time(ms)]
FROM sys.dm_exec_procedure_stats AS d
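Note that these are integer divisions, so any sub-millisecond remainder is truncated. If you want fractional milliseconds, divide by 1000.0 instead, for example:
d.total_elapsed_time/d.execution_count/1000.0 AS [avg_elapsed_time(ms)]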
Regards,
Sofiya Li
TechNet Community Support

Similar Messages

• How to join sys.dm_exec_query_stats to sys.sql_modules / sys.objects

    SELECT O.Name as ProcName
    ,M.Definition as CreateScript
    ,O.Create_Date
    ,O.Modify_Date
    FROM sys.sql_modules as M INNER JOIN sys.objects as O
    ON M.object_id = O.object_id
    WHERE O.type = 'P'  -- Procedure
    --WHERE O.type = 'V'  -- View
    --WHERE O.type = 'FN' -- Function
    How can I join this to sys.dm_exec_query_stats to
    get the execution_count?

    You can join to sys.dm_exec_procedure_stats using object_id for procedures in cache and get the execution count:
    http://msdn.microsoft.com/en-in/library/cc280701(v=sql.110).aspx
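    For procedures still in cache, a minimal sketch of that join (assuming the procedures live in the current database):
    SELECT O.Name AS ProcName, PS.execution_count
    FROM sys.dm_exec_procedure_stats AS PS
    INNER JOIN sys.objects AS O
    ON O.object_id = PS.object_id
    WHERE PS.database_id = DB_ID()
    AND O.type = 'P'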
    For ones not in cache, you need to use logic like the below:
    SELECT O.Name as ProcName
    ,M.Definition as CreateScript
    ,O.Create_Date
    ,O.Modify_Date
    ,QS.execution_count
    FROM sys.dm_exec_query_stats AS QS
    CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) t
    INNER JOIN sys.sql_modules as M
    ON M.object_id = t.objectid
    INNER JOIN sys.objects as O
    ON M.object_id = O.object_id
    WHERE O.type = 'P'
    Please Mark This As Answer if it helps to solve the issue. Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • How to find the list of unused stored procedures in SQL Server 2005?

    Hi,
    I need to find out the list of stored procedures which are not in use.
    I found there is something called "sys.dm_exec_procedure_stats" for SQL Server 2008.
    Can you please suggest your ideas for doing the same job in SQL Server 2005?
    Many Thanks.

    In SQL 2005 there is, sort of. This query lists the last execution
    time for all SQL modules in a database:
       SELECT object_name(m.object_id), MAX(qs.last_execution_time)
       FROM   sys.sql_modules m
       LEFT   JOIN (sys.dm_exec_query_stats qs
                    CROSS APPLY sys.dm_exec_sql_text (qs.sql_handle) st) 
              ON m.object_id = st.objectid
             AND st.dbid = db_id()
       GROUP  BY object_name(m.object_id)
    But there are tons of caveats. The starting point of this query is
    the dynamic management view dm_exec_query_stats, and its contents are
    per *query plan*. If a stored procedure contains several queries,
    there is more than one entry for the procedure in dm_exec_query_stats.
    Best Regards, Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/

  • With 2008 - What would be the 'best practice' approach for giving a principal access to system views

    I want to set up a job that runs a few select statements from several system management views such as those listed below. It's basically going to gather various metrics about the server, a few different databases, and jobs.
    msdb.dbo.sysjobs
    msdb.dbo.sysjobhistory
    sys.dm_db_missing_index_groups
    sys.dm_db_missing_index_group_stats
    sys.dm_db_missing_index_details
    sys.databases
    sys.dm_exec_query_stats
    sys.dm_exec_sql_text
    sys.dm_exec_query_plan
    dbo.sysfiles
    sys.indexes
    sys.objects
    So, there are a number of instance-level permissions that are needed, mainly VIEW SERVER STATE:
    https://msdn.microsoft.com/en-us/library/ms186717.aspx
    Granting these permissions to a single login seems like introducing a maintenance headache for later. What about a server role?
    Correct me if I'm wrong, but this is a new feature of 2012 and above: the ability to create user-defined server roles.
    Prior to version 2012, I will just have to settle for granting these instance-level permissions to individual logins. There won't be many logins that need this kind of permission, but I'd rather assign permissions at the role level and then add logins to that role.
    Then again, there is little point in creating a separate role if there is only one, maybe two, logins that might need it?
    New for 2012
    http://www.mssqltips.com/sqlservertip/2699/sql-server-user-defined-server-roles/

    As any Active Directory administrator will tell you, you should indeed stick to the rule "users in roles, permissions to roles" (in AD terms, A-G/DL-P). And since this is possible since SQL Server 2012, why not just do that? You
    lose nothing if you don't ever change that one single user. In the end you would only expect roles to have permissions, and you save some time when searching for permission problems.
    i.e.
    USE [master]
    GO
    CREATE SERVER ROLE [role_ServerMonitorUsers]
    GO
    GRANT VIEW SERVER STATE TO [role_ServerMonitorUsers]
    GO
    ALTER SERVER ROLE [role_ServerMonitorUsers]
    ADD MEMBER [Bob]
    GO
    In security, standardization is just as much key as in administration in general. So even if it does not really matter now, it may matter in the long run. :)
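    (On instances older than SQL Server 2012, where user-defined server roles are not available, the fallback is the direct grant the original poster mentioned, e.g. GRANT VIEW SERVER STATE TO [Bob].)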
    Andreas Wolter (Blog | Twitter)
    MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
    www.SarpedonQualityLab.com |
    www.SQL-Server-Master-Class.com

  • The First execution of a Stored Proc shows a delay between SP:StmtStarting and SP:Starting

    We are experiencing a performance problem with some of our stored procedures. SQL Server is "Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)".
    Situation:
    SQL Server Proc1 executes some SQL statements and starts some other stored procedures. I open a SQL Server Management Studio session (example session_id 105) and trace session 105 with SQL Server Profiler.
    I start Proc1; when Proc1 starts the execution of Proc2, the Profiler trace shows a delay of 6 seconds between SP:StmtStarting "execute db..proc2 @SomeVar" and SP:Starting "execute db..proc2 @SomeVar".
    All following executions of Proc1 in session 105 run without a delay between those two events.
    But when I open a new Management Studio session (session_id 124), the first execution of Proc1 again shows the 6-second delay between SP:StmtStarting "execute db..proc2 @SomeVar" and SP:Starting "execute db..proc2 @SomeVar".
    Proc 1 starts the execution of Proc2 with a simple execute statement like this:
    Execute DB..Proc2 @SomeVar
    So it's not dynamic SQL.
    What is SQL Server doing? I understand that SQL Server has to do some work the first time it executes a stored procedure, but why is it doing that work again in every new session?
    How can I prevent this behavior, or how can I make it faster?
    Best Regards
    Paolo

    >In my case the temp tables ruined the performance.
    Creating temp tables takes time and resources. Temporary table usage should be justified and tested in stored procedures. There are cases when temporary table usage is helpful, especially with very complex queries.
    In your case it appears that not one but several temp tables were used. That can be punishing.
    Paul White's blog: "Ask anyone what the primary advantage of temporary tables over
    table variables is, and the chances are they will say that temporary tables support statistics and table variables do not.  This is true, of course; even the indexes that enforce PRIMARY KEY and UNIQUE constraints on table variables do not have
    populated statistics associated with them, and it is not possible to manually create statistics or non-constraint indexes on table variables.  Intuitively, then, any query that has alternative execution plans to choose from ought to benefit from using
    a temporary table rather than a table variable.  This is also true, up to a point.
    The most common use of temporary tables is in stored procedures, where they can be very useful as a way of simplifying a large query into smaller parts, giving the optimizer a better chance of finding good execution plans, providing statistical
    information about an intermediate result set, and probably making future maintenance of the procedure easier as well.  In case it is not obvious, breaking a complex query into smaller steps using temporary tables makes life easier for the optimizer in
    several ways.  Smaller queries tend to have a smaller number of possible execution plans, reducing the chances that the optimizer will miss a good one.  Complex queries are also less likely to have good cardinality (row count) estimates and statistical
    information, since small errors tend to grow quickly as more and more operators appear in the plan.
    This is a very important point that is not widely appreciated.  The SQL Server query optimizer is only as good as the information it has to work with.  If cardinality or statistical information is badly wrong
    at any point in the plan, the result will most likely be a poor execution plan selection from that point forward.  It is not just a matter of creating and maintaining appropriate statistics on the base tables, either.  The optimizer does
    use these as a starting point, but it also derives new statistics at every plan operator, and things can quickly conspire to make these (invisible) derived statistics hopelessly wrong.  The only real sign that something is wrong (aside from poor performance,
    naturally) is that actual row counts vary widely from the optimizer’s estimate.  Sadly, SQL Server does not make it easy today to routinely collect and analyse differences between cardinality estimates and runtime row counts, though some small (but welcome)
    steps forward have been made in SQL Server 2012 with new row count information in the
    sys.dm_exec_query_stats view.
    The benefits of using simplifying temporary tables where necessary are potentially better execution plans, now and in the future as data distribution changes and new execution plans are compiled.  On the cost side of the ledger we have
    the extra effort needed to populate the temporary table, and maintain the statistics.  In addition, we expect a higher number of recompilations for optimality reasons due to changes in statistics.  In short, we have a trade-off between potential
    execution plan quality and maintenance/recompilation cost.
    LINK:
    http://sqlblog.com/blogs/paul_white/archive/2012/08/15/temporary-tables-in-stored-procedures.aspx
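    A minimal sketch of the pattern the quote describes, materialising an intermediate result in a temp table so the optimizer gets statistics on it (dbo.Orders and dbo.Customers are hypothetical tables, for illustration only):
    -- break a complex query into steps; the temp table carries column statistics
    SELECT o.CustomerID, SUM(o.Amount) AS Total
    INTO #totals
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID;

    SELECT c.Name, t.Total
    FROM dbo.Customers AS c
    JOIN #totals AS t ON t.CustomerID = c.CustomerID; -- estimates here come from #totals statistics

    DROP TABLE #totals;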
    Kalman Toth Database & OLAP Architect

  • ASDM with javaw.exe process 90% CPU usage

    Dear all,
    I had ASDM 6.2 with ASA 8.2 installed on my PC (Windows 7, 32-bit). When I launch ASDM, the process goes to 80-90% CPU and my RAM usage goes to 2.5 GB. When I look at the processes in Task Manager, there is a process "javaw.exe" linked with ASDM.
    I have done a lot of searching on Google/Sun sites and forums but I didn't find anything to resolve this issue.
    Has anyone had this issue with ASDM? Or do you have a solution for it?
    Thanks a lot
    Note: I upgraded my ASDM to the latest version 6.4 but still have the same issue.

    Hi Jazaib,
    As others said, you need to find out what was changed. In addition to environment/load changes, extra load could be triggered by suboptimal execution plans due to parameter sniffing and/or stale statistics. For example, you can have the situation where a frequently
    executed query was recompiled using an atypical parameter set, and the cached plan leads to much heavier I/O and CPU activity.
    Another factor that often leads to CPU load is bad T-SQL code (multistatement UDFs, imperative code, cursors), so check your application.
    You can also run the script below, which returns the information on the most CPU-intensive queries in scope of the cached plans. Alternatively, you can set up XEvent/SQL Trace sessions capturing statements with cpu_time exceeding some threshold.
    SELECT TOP 50
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
    ((CASE qs.statement_end_offset
    WHEN -1 THEN DATALENGTH(qt.TEXT)
    ELSE qs.statement_end_offset
    END - qs.statement_start_offset)/2)+1) as SQL,
    qs.execution_count,
    (qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count as [Avg IO],
    qp.query_plan,
    qs.total_logical_reads, qs.last_logical_reads,
    qs.total_logical_writes, qs.last_logical_writes,
    qs.total_worker_time / qs.execution_count as [Avg CPU],
    qs.total_worker_time,
    qs.last_worker_time,
    qs.total_elapsed_time/1000 total_elapsed_time_in_ms,
    qs.last_elapsed_time/1000 last_elapsed_time_in_ms,
    qs.last_execution_time
    from
    sys.dm_exec_query_stats qs with (nolock)
    cross apply sys.dm_exec_sql_text(qs.sql_handle) qt
    outer apply sys.dm_exec_query_plan(qs.plan_handle) qp
    order by -- or by qs.total_worker_time desc
    [Avg CPU] desc
    option (recompile)
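    As for the XEvent alternative mentioned above, here is a minimal sketch of a session capturing statements that burn more than one second of CPU (the session and file names are assumptions; syntax is per SQL Server 2012, where cpu_time is reported in microseconds):
    CREATE EVENT SESSION [HighCpuStatements] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.sql_text)
    WHERE (cpu_time > 1000000)) -- microseconds, i.e. more than 1 second of CPU
    ADD TARGET package0.event_file(SET filename = N'HighCpuStatements')
    GO
    ALTER EVENT SESSION [HighCpuStatements] ON SERVER STATE = START
    GO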
    Thank you!
    Dmitri V. Korotkevitch (MVP, MCM, MCPD)
    My blog: http://aboutsqlserver.com

  • Measuring the cpu and ram consumption of a given query in sql server

    Hello there, how are you doing? I am new to this SQL Server forum, and I have a question related to SQL Server.
    In SQL Server I have a table called testt and it has more than 10,000,000 records,
    and I have written this query: select * from testt limit 9000000; What I want is to measure the
    CPU consumption of this given query,
    the RAM consumption of this given query,
    and the time consumption of this given query. The time I can get from the result bar, but I don't know whether it is correct or not.
    Any help on this, whether a query, a tool, or configuration, is acceptable and appreciated.
    Thanks for letting me ask questions freely.

    Please check these queries; they may help you see what's happening with the CPU currently.
    SELECT getdate() as "RunTime", st.text, qp.query_plan, a.* FROM sys.dm_exec_requests a CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) as st CROSS APPLY sys.dm_exec_query_plan(a.plan_handle) as qp order by CPU_time desc
    Top 50 CPU usage :
    SELECT TOP 50 st.text
                   ,st.dbid
                   ,st.objectid
                   ,qs.total_worker_time
                   ,qs.last_worker_time
                   ,qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_worker_time DESC
    SP_Whoisactive --- download from : http://www.brentozar.com/responder/log-sp_whoisactive-to-a-table/
    Activity Monitor and sp_who2 'active' will also give you some idea...
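    If you just want the cost of one query you run yourself, SET STATISTICS TIME/IO is the simplest check. A minimal sketch (note that T-SQL uses TOP rather than LIMIT; the table name comes from your question):
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;
    SELECT TOP (9000000) * FROM dbo.testt;
    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    The CPU time, elapsed time, and logical reads are printed on the Messages tab.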
    Reference links...
    http://channel9.msdn.com/posts/SQL-server-High-CPU--Part-1
    http://channel9.msdn.com/posts/Troubleshooting-SQL-server-High-CPU--Part-2
    http://jeffstevenson.karamazovgroup.com/2008/09/identifying-high-cpu-sql-processes.html
    http://searchsqlserver.techtarget.com/tutorial/Step-1-CPU-usage
    ================
    Execute this script and and start doing your analysis in your server so that you will get more idea how its working and all the details...
    Sample CPU Test : Dont try this in production and try this in your test or Dev server..
    -- Query to Keep CPU Busy for 20 Seconds
    DECLARE    @T DATETIME, @F BIGINT;
    SET @T = GETDATE();
    WHILE DATEADD(SECOND,20,@T)>GETDATE()
    SET    @F=POWER(2,30);
    You can easily change the parameter in the DATEADD and can make it run for 50 seconds.
    -- Query to Keep CPU Busy for 50 Seconds
    DECLARE    @T DATETIME, @F BIGINT;
    SET @T = GETDATE();
    WHILE DATEADD(SECOND,50,@T)>GETDATE()
    SET    @F=POWER(2,30);
    Raju Rasagounder MSSQL DBA

  • Parameterized queries running much slower than ones with hardcoded values

    Very often there is a huge performance difference when using parameters in a query, compared to running the same code after replacing the parameters with hardcoded values: the parameterized version of the code runs much slower!
    The case is not parameter sniffing as it is not a (compiled) stored proc, but code executed directly from the editor and the performance issue has been observed in different versions of SQL Server (2000 and 2005).
    How is this explained and how can the parameterized queries have similar performance with the hardcoded ones?
    Also, why does this happen in some cases and not always?
    Finally, the same is sometimes the case with stored procs: a very slow running proc speeds up tremendously when running its code directly instead of calling the procedure, and, per the above, even faster when its parameters are replaced with
    hardcoded values.

    >>> The case is not parameter sniffing as it is not a (compiled) stored proc, but code executed directly from the editor, and the performance issue has been observed in different versions of SQL Server (2000 and 2005).
    Something like below?
    -- SQL Server creates 3 execution plans rather than only one
    DBCC FREEPROCCACHE
    GO
    SELECT *
    FROM Sales.SalesOrderHeader
    WHERE SalesOrderID = 56000
    GO
    SELECT * FROM
    AdventureWorks.Sales.SalesOrderHeader WHERE
    SalesOrderID = 56001
    GO
    declare @i int
    set @i = 56004
    SELECT *
    FROM Sales.SalesOrderHeader
    WHERE SalesOrderID = @i
    GO
    select  stats.execution_count AS exec_count, 
    p.size_in_bytes as [size], 
    [sql].[text] as [plan_text]
    from sys.dm_exec_cached_plans p
    outer apply sys.dm_exec_sql_text (p.plan_handle) sql
    join sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle
    GO
    ---- This time we get parameterization (only one cached plan)
    DBCC FREEPROCCACHE
    GO
    EXEC sp_executesql N'SELECT  SUM(LineTotal) AS LineTotal
    FROM Sales.SalesOrderHeader H
    JOIN Sales.SalesOrderDetail D ON D.SalesOrderID = H.SalesOrderID
    WHERE H.SalesOrderID = @SalesOrderID', N'@SalesOrderID INT', 56000
    GO
    EXEC sp_executesql N'SELECT  SUM(LineTotal) AS LineTotal
    FROM Sales.SalesOrderHeader H
    JOIN Sales.SalesOrderDetail D ON D.SalesOrderID = H.SalesOrderID
    WHERE H.SalesOrderID = @SalesOrderID', N'@SalesOrderID INT', 56005
    GO
    select  stats.execution_count AS exec_count, 
    LEFT([sql].[text], 80) as [plan_text]
    from sys.dm_exec_cached_plans p
    outer apply sys.dm_exec_sql_text (p.plan_handle) sql
    join sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle
    GO
    Best Regards, Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/

  • Top utilising Query in SQL server

    Hi,
       Is there any query to get the top utilization query of the day?

    Hi,
    Top CPU utilizing query
    -- This might take some time to give a result on a busy system
    select top 10
    sum(qs.total_worker_time) as total_cpu_time,
    sum(qs.execution_count) as total_execution_count,
    count(*) as number_of_statements,
    t.text
    from
    sys.dm_exec_query_stats qs
    cross apply sys.dm_exec_sql_text(qs.sql_handle) as t
    group by t.text
    order by sum(qs.total_worker_time) desc
    For memory utilization there is no perfect way to find out once a query has completed, but
    sys.dm_exec_query_memory_grants would help you for currently executing queries:
    SELECT mg.granted_memory_kb, mg.session_id, t.text, qp.query_plan
    FROM sys.dm_exec_query_memory_grants AS mg
    CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
    CROSS APPLY sys.dm_exec_query_plan(mg.plan_handle) AS qp
    ORDER BY 1 DESC OPTION (MAXDOP 1)
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • Dm_exec_cached_plans undocumented column parent_plan_id?

    I recently noticed that dm_exec_cached_plans has a column called parent_plan_id. It's not mentioned in BOL (see below), and it appears to be always NULL. What gives?
    https://msdn.microsoft.com/en-us/library/ms187404(v=sql.110).aspx
    Mordechai Danielov

    Try the following statement, which gives you the corresponding SQL statement for each sql_handle along with statistical information. The plan_handle value from
    dm_exec_cached_plans may be used in other DMVs like below.
    SELECT s2.dbid,
    s1.sql_handle,
    (SELECT TOP 1 SUBSTRING(s2.text,statement_start_offset / 2+1 ,
    ( (CASE WHEN statement_end_offset = -1
    THEN (LEN(CONVERT(nvarchar(max),s2.text)) * 2)
    ELSE statement_end_offset END) - statement_start_offset) / 2+1)) AS sql_statement,
    execution_count,
    plan_generation_num,
    last_execution_time,
    total_worker_time,
    last_worker_time,
    min_worker_time,
    max_worker_time,
    total_physical_reads,
    last_physical_reads,
    min_physical_reads,
    max_physical_reads,
    total_logical_writes,
    last_logical_writes,
    min_logical_writes,
    max_logical_writes
    FROM sys.dm_exec_query_stats AS s1
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
    WHERE s2.objectid is null
    ORDER BY s1.sql_handle, s1.statement_start_offset, s1.statement_end_offset;

  • Query to find adhoc queries taking high cpu on server

    Hi,
    I want to find all the adhoc and other queries consuming high CPU on the SQL Server. sys.dm_exec_cached_plans will only have cached queries and not the currently running ones.

    Hi Preetha7,
    According to your description, if you want to identify the most expensive SQL Server queries based on cumulative CPU cost, you can use the sys.dm_exec_sql_text, sys.dm_exec_query_plan and sys.dm_exec_query_stats DMVs. For example, you can
    refer to the following T-SQL statement.
    SELECT TOP 20
    qs.sql_handle,
    qs.execution_count,
    qs.total_worker_time AS Total_CPU,
    total_CPU_inSeconds = --Converted from microseconds
    qs.total_worker_time/1000000,
    average_CPU_inSeconds = --Converted from microseconds
    (qs.total_worker_time/1000000) / qs.execution_count,
    qs.total_elapsed_time,
    total_elapsed_time_inSeconds = --Converted from microseconds
    qs.total_elapsed_time/1000000,
    st.text,
    qp.query_plan
    FROM
    sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS apply sys.dm_exec_query_plan (qs.plan_handle) AS qp
    ORDER BY qs.total_worker_time DESC
    Or you can run the "Performance - Top Queries By Total CPU Time" report in SSMS. In addition, you can use a SQL Server Extended Events session for troubleshooting high CPU consumption. For more information about finding the currently running session that is costing
    high CPU:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/cb3e29ca-f1ef-4440-8f1a-db4924a43c5c/find-currently-session-that-is-costing-high-cpu?forum=sqldatabaseengine
    http://blogs.msdn.com/b/batala/archive/2011/07/23/troubleshoot-high-cpu-issue-without-using-profile-traces.aspx
    Regards,
    Sofiya Li
    TechNet Community Support

  • Performance testing

    I compressed a database; now I want to test the performance increase. I can't run a trace on production. Is there somewhere I can find some queries that I can SET STATISTICS IO on and measure the difference that way?
    Alan

    >Is there any way to get the most executed SQL and SPs?
    The SSMS Built-In reports "Top Queries by Total CPU Time" and "Top Queries
    by Total IO" (right-click on your instance in SSMS, select Reports.)
     Or go straight to the DMVs, particularly
    sys.dm_exec_query_stats.
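    For IO rather than CPU, a minimal sketch against that DMV (ordering by reads; swap in total_logical_writes if writes matter more to you):
    SELECT TOP 10 qs.total_logical_reads, qs.execution_count, st.text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;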
    David
    David http://blogs.msdn.com/b/dbrowne/

  • Procedure cache question

    Does SQL Server cache the entire T-SQL batch or each individual T-SQL statement in a batch? For cache-match purposes, does it match on the entire batch, or will each statement within the batch attempt to match a previously cached plan?
    When I batch together two T-SQL queries in Management Studio and query the DMVs (dm_exec views/functions), I get two separate rows back where each row has a different plan_handle, sql_handle, query_hash, query_plan_hash, and text. The text for each row represents
    a single query statement rather than the entire batch that the MSDN docs suggest.
    select * from mytable1 where id = 1
    select * from mytable2 where id = 2
    go
    SELECT 
    cp.objtype
    ,qs.plan_handle
    ,qs.SQL_HANDLE
    ,QS.query_hash
    ,QS.query_plan_hash
    ,ST.[TEXT]
    ,cp.usecounts
    ,QS.EXECUTION_COUNT
    ,qs.total_physical_reads
    ,qs.total_logical_reads
    ,P.query_plan
    FROM [SYS].[DM_EXEC_QUERY_STATS] AS [QS] 
    INNER JOIN SYS.dm_exec_cached_plans cp on cp.plan_handle = qs.plan_handle
    CROSS APPLY [SYS].[DM_EXEC_SQL_TEXT]([QS].[SQL_HANDLE]) AS [ST] 
    CROSS APPLY [sys].[dm_exec_query_plan]([qs].[plan_handle]) as [p]
    WHERE [st].[text] like '%mytable1%' or [st].[text] like '%mytable2%'
    ORDER BY 1, [qs].[execution_count] desc;
    go
    The MSDN docs suggest that the sql_handle from dm_exec_query_stats represents a given T-SQL batch of statements. For caching purposes, what constitutes a batch?
    SQL 2008

    SQL Server caches the plan for the entire batch; the match when looking for a cache entry is based on a hash that is computed over the entire batch. Note that the hash is computed over the batch text as-is. That is, everything counts: spaces, comments, and
    lowercase and uppercase count differently.
    But that is not all. If two users submit the same query batch, and the batch includes one or more table references where the schema is not specified, and the users have different default schemas, that will result in two cache entries.
    Furthermore, there are a number of SET options that must match for a cache hit. For instance, different settings for ARITHABORT will result in two cache entries.
    As I said, SQL Server initially compiles a plan for the entire batch. However, during execution, recompiles may occur for a number of reasons, and recompilation is on statement level. This causes that part of the plan to be replaced; as I recall, the plan_handle
    remains the same.
    What happens in your case is something called autoparameterisation. You may note that the query text in the cache has changed, and reads:
    (@1 tinyint)SELECT * FROM [dbo].[mytable2] WHERE [id]=@1
    That is not what you submitted. If you take a query batch where autoparameterisation does not occur, you will still see two entries in the output, because there is always one row per statement, but the sql_handle and plan_handle will be the same. For instance
    try this:
    create table mytable1 (id int NOT NULL)
    create table mytable2 (id int NOT NULL)
    go
    DBCC FREEPROCCACHE
    go
    select * from dbo.mytable1 where id in (SELECT id FROM dbo.mytable2)
    select * from dbo.mytable2 where id in (SELECT id FROM dbo.mytable1)
    go
    SELECT
    cp.objtype
    ,qs.plan_handle
    ,qs.sql_handle
    ,qs.statement_start_offset
    ,qs.statement_end_offset
    ,qs.query_hash
    ,qs.query_plan_hash
    ,st.[text]
    ,cp.usecounts
    ,qs.execution_count
    ,qs.total_physical_reads
    ,qs.total_logical_reads
    ,p.query_plan
    FROM [sys].[dm_exec_query_stats] AS [qs]
    INNER JOIN sys.dm_exec_cached_plans cp on cp.plan_handle = qs.plan_handle
    CROSS APPLY [sys].[dm_exec_sql_text](qs.[sql_handle]) AS st
    CROSS APPLY [sys].[dm_exec_query_plan]([qs].[plan_handle]) as [p]
    WHERE [st].[text] like '%mytable1%' or [st].[text] like '%mytable2%'
    ORDER BY 1, [qs].[execution_count] desc;
    go
    DROP TABLE mytable1, mytable2
    I have added the column statement_start_offset and statement_end_offset, so that you can see the entries are per statement.
    By the way, all the DMVs are spelled in lowercase only, and I recommend that you stick to this. One day, you may need to run your queries on a case-sensitive system, and things like SYS.DM_EXEC_QUERY_STATS will not work for you in this case.
    Erland Sommarskog, SQL Server MVP, [email protected]

• DMVs with CROSS APPLY return too many results

    Hi
    Here is my code for querying the DMVs, but some columns are repeated; how can I filter on that (last_execution_time)?
    I just want to find T-SQL query information (SELECT only), and it must include the query text, target DB, source application (from the connection string), and other needed columns!
    You can change the database name and WHERE condition for your testing! Thanks
    SELECT DB_NAME(DMV_QueryText.dbid) as 'DBName',
    DMV_Sessions.program_name as 'ApplicationName',
    DMV_QueryText.text as 'SQL Statement',
    execution_count as 'Count',
    --last_rows, (for SQL 2012 only)
    last_execution_time as 'Last Execution Time',
    last_worker_time as 'Last Worker Time(microseconds)',
    last_physical_reads as 'Last Physical Reads',
    last_logical_reads as 'Last Logical Reads',
    last_logical_writes as 'Last Logical Writes',
    last_elapsed_time as 'Last Elapsed Time(microseconds)'
    FROM sys.dm_exec_query_stats AS DMV_QueryStats
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS DMV_QueryText
    cross Apply sys.dm_exec_sessions as DMV_Sessions
    WHERE DMV_QueryText.objectid is null and DB_Name(DMV_QueryText.dbid)='YourDB' and PatIndex('select %',DMV_QueryText.text)>0
    and DMV_Sessions.program_name is not null and DMV_Sessions.program_name in('app1','app2','app3','app4')
    -- order by DMV_QueryStats.execution_count desc
    order by DMV_QueryStats.last_worker_time desc
    my407sw

    There is no relationship between sys.dm_exec_query_stats and
    sys.dm_exec_sessions, and thus you are getting a Cartesian product: every combination of query-stats record and session.
    sys.dm_exec_query_stats shows the "aggregate performance statistics for cached query plans"
    (http://msdn.microsoft.com/en-us/library/ms189741.aspx) and is not tied in any way to a session.
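    The application name is only available for requests that are currently executing, where you can join through session_id. A minimal sketch:
    SELECT s.program_name, t.text AS sql_statement, r.cpu_time
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE s.is_user_process = 1;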
    Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com

• I want to know the top 10-20 most-used stored procedures

    Hello All, 
    There are 3500+ stored procedures created on the server, and I want to know the top 10-20 most-used stored procedures: which ones are important and what are they? Is there any code to find them?
    I think the question might be very silly, but I don't know which stored procedure to look at.
    Please help me on this issue.
    Thanks.
    Thanks, Shyam.

    By what? CPU? Memory? Execution count?
    Glenn Berry wrote this
    -- HIGH CPU ************
          -- Get Top 100 executed SP's ordered by execution count
          SELECT TOP 100 qt.text AS 'SP Name', qs.execution_count AS 'Execution Count',  
          qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',
          qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
          qs.total_worker_time AS 'TotalWorkerTime',
          qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
          qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads, 
          DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
          FROM sys.dm_exec_query_stats AS qs
          CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
          WHERE qt.dbid = db_id() -- Filter by current database
          ORDER BY qs.execution_count DESC
          -- HIGH CPU *************
          -- Get Top 20 executed SP's ordered by total worker time (CPU pressure)
          SELECT TOP 20 qt.text AS 'SP Name', qs.total_worker_time AS 'TotalWorkerTime', 
          qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
          qs.execution_count AS 'Execution Count', 
          ISNULL(qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()), 0) AS 'Calls/Second',
          ISNULL(qs.total_elapsed_time/qs.execution_count, 0) AS 'AvgElapsedTime', 
          qs.max_logical_reads, qs.max_logical_writes, 
          DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
          FROM sys.dm_exec_query_stats AS qs
          CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
          WHERE qt.dbid = db_id() -- Filter by current database
          ORDER BY qs.total_worker_time DESC
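    On SQL Server 2008 and later, sys.dm_exec_procedure_stats gives per-procedure totals directly, avoiding the per-statement rows of sys.dm_exec_query_stats; a minimal sketch:
    SELECT TOP 20 OBJECT_NAME(ps.object_id, ps.database_id) AS 'SP Name',
    ps.execution_count, ps.total_worker_time
    FROM sys.dm_exec_procedure_stats AS ps
    WHERE ps.database_id = DB_ID()
    ORDER BY ps.execution_count DESC;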
    Best Regards, Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
