SQL response time is 698.18% of baseline? What does this mean?
I am getting the warning "SQL response time is 698.18% of baseline." on an Oracle 10g database.
Does anyone know what this means?
Thanks,
Waheed
Actually, I have deployed an application on Oracle 10g Application Server which uses an Oracle 10g database, and I am getting this warning on the Oracle 10g DB.
I hope you understand...
Similar Messages
-
SQL Response Time in Cloud Control 12c
Hi all gurus,
The SQL Response Time screen in Cloud Control 12c (translated from Portuguese to English) is not showing any information.
Up to a week ago, the most expensive SQL commands were shown.
Can someone tell me where I can find documentation to clarify this?
Many thanks,
Kaz
Please check the following MOS notes:
12c SQL Response Time Graph does not display values (Doc ID 1603800.1)
How to Troubleshoot SQL Response Metric Graph from EM DB Home Page (Doc ID 786487.1) -
Oracle 11g Windows 2003 (SQL Response Time)
Hi Guys,
My SQL response time is more than 100% of baseline and keeps on increasing. How can I fix this? The application users are complaining that the system is very slow.
Thank you.
I cannot start/stop dbconsole.
How do you do this? Control Panel? Command line?
Post the output of
emctl start dbconsole
Werner -
SQL tune (High response time)
Hi,
I am running the following query, which has a high response time. Can you please advise? The DBMS_SQLTUNE report is below.
GENERAL INFORMATION SECTION
Tuning Task Name : BFG_TUNING1
Tuning Task Owner : ARADMIN
Scope : COMPREHENSIVE
Time Limit(seconds) : 60
Completion Status : COMPLETED
Started at : 01/28/2013 15:48:39
Completed at : 01/28/2013 15:49:43
Number of SQL Restructure Findings: 7
Number of Errors : 1
Schema Name: ARADMIN
SQL ID : 2d61kbs9vpvp6
SQL Text : SELECT /*+no_merge(chg)*/ chg.CHANGE_REFERENCE,
chg.Customer_Name, chg.Customer_ID, chg.Contract_ID,
chg.Change_Title, chg.Change_Type, chg.Change_Description,
chg.Risk, chg.Impact, chg.Urgency, chg.Scheduled_Start_Date,
chg.Scheduled_End_Date, chg.Scheduled_Start_Date_Int,
chg.Scheduled_End_Date_Int, chg.Outage_Required,
chg.Change_Status, chg.Change_Status_IM, chg.Reason_for_change,
chg.Customer_Visible, chg.Change_Source,
chg.Related_Ticket_Type, chg.Related_Ticket_ID,
chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
chg.Element_id, chg.Element_Type, chg.Element_Name,
chg.Search_flag, chg.remedy_id, chg.Change_Manager,
chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (SELECT *
FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
a.bfg_con_id IS NULL AND a.bfg_cus_id = chg.customer_id AND
NOT EXISTS (SELECT a.bfg_con_id FROM exp_cm_cusid1 a WHERE
a.bfg_con_id IS NOT NULL AND a.bfg_cus_id = chg.customer_id
AND a.bfg_con_id = chg.contract_id ) UNION SELECT
/*+no_merge(chg)*/ chg.CHANGE_REFERENCE, chg.Customer_Name,
chg.Customer_ID, chg.Contract_ID, chg.Change_Title,
chg.Change_Type, chg.Change_Description, chg.Risk, chg.Impact,
chg.Urgency, chg.Scheduled_Start_Date, chg.Scheduled_End_Date,
chg.Scheduled_Start_Date_Int, chg.Scheduled_End_Date_Int,
chg.Outage_Required, chg.Change_Status, chg.Change_Status_IM,
chg.Reason_for_change, chg.Customer_Visible, chg.Change_Source,
chg.Related_Ticket_Type, chg.Related_Ticket_ID,
chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
chg.Element_id, chg.Element_Type, chg.Element_Name,
chg.Search_flag, chg.remedy_id, chg.Change_Manager,
chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (SELECT *
FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
a.bfg_cus_id = chg.customer_id AND a.bfg_con_id =
chg.contract_id AND a.bfg_con_id IS NOT NULL
FINDINGS SECTION (7 findings)
1- Restructure SQL finding (see plan 1 in explain plans section)
The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
line ID 26 of the execution plan contains an expression on indexed column
"C536871160". This expression prevents the optimizer from selecting indices
on table "ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
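The rationale can be pictured with a toy model (Python, purely illustrative; the table and values are made up): a plain index is like a dictionary keyed by the stored column value, so probing it with the result of an expression over the value finds nothing, while a function-based index keys the lookup by the expression itself.

```python
# Toy model of why TO_NUMBER(TRIM(col)) = :b defeats a plain index.
rows = {1: " 14187 ", 2: "99", 3: " 14187"}

# "Plain index": stored string value -> row ids
plain_index = {}
for rid, val in rows.items():
    plain_index.setdefault(val, []).append(rid)

# Predicate: TO_NUMBER(TRIM(col)) = 14187. The number 14187 is not a
# key of plain_index, so the only option is a full scan.
assert 14187 not in plain_index
full_scan = [rid for rid, val in rows.items() if int(val.strip()) == 14187]

# "Function-based index": key by the expression's result instead.
fbi = {}
for rid, val in rows.items():
    fbi.setdefault(int(val.strip()), []).append(rid)

# Now the probe is a direct lookup.
assert fbi[14187] == full_scan  # [1, 3]
```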
2- Restructure SQL finding (see plan 1 in explain plans section)
The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 26 of
the execution plan contains an expression on indexed column "C536871160".
This expression prevents the optimizer from selecting indices on table
"ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
3- Restructure SQL finding (see plan 1 in explain plans section)
The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
line ID 10 of the execution plan contains an expression on indexed column
"C536871160". This expression prevents the optimizer from selecting indices
on table "ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
4- Restructure SQL finding (see plan 1 in explain plans section)
The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 10 of
the execution plan contains an expression on indexed column "C536871160".
This expression prevents the optimizer from selecting indices on table
"ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
5- Restructure SQL finding (see plan 1 in explain plans section)
The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
line ID 6 of the execution plan contains an expression on indexed column
"C536871160". This expression prevents the optimizer from selecting indices
on table "ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
6- Restructure SQL finding (see plan 1 in explain plans section)
The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 6 of
the execution plan contains an expression on indexed column "C536871160".
This expression prevents the optimizer from selecting indices on table
"ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
7- Restructure SQL finding (see plan 1 in explain plans section)
An expensive "UNION" operation was found at line ID 1 of the execution plan.
Recommendation
- Consider using "UNION ALL" instead of "UNION", if duplicates are allowed
or uniqueness is guaranteed.
Rationale
"UNION" is an expensive and blocking operation because it requires
elimination of duplicate rows. "UNION ALL" is a cheaper alternative,
assuming that duplicates are allowed or uniqueness is guaranteed.
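In miniature (Python, illustrative only), the cost difference the advisor describes is duplicate elimination:

```python
# UNION must combine the branches and then eliminate duplicates
# (a sort or hash over all rows); UNION ALL is plain concatenation.
branch1 = [("chg-1", 14187), ("chg-2", 14187)]
branch2 = [("chg-2", 14187), ("chg-3", 14187)]

union_all = branch1 + branch2                # 4 rows, duplicate kept
union = sorted(set(branch1) | set(branch2))  # 3 rows, deduplicated

assert len(union_all) == 4 and len(union) == 3
# When the branches are known to be disjoint (here they are not),
# UNION ALL returns the same rows as UNION without the dedup cost.
```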
ERRORS SECTION
- The current operation was interrupted because it timed out.
EXPLAIN PLANS SECTION
1- Original
Plan hash value: 1047651452
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | SELECT STATEMENT | | 2 | 28290 | 567 (37)| 00:00:07 | | |
| 1 | SORT UNIQUE | | 2 | 28290 | 567 (37)| 00:00:07 | | |
| 2 | UNION-ALL | | | | | | | |
|* 3 | HASH JOIN RIGHT ANTI | | 1 | 14158 | 373 (5)| 00:00:05 | | |
| 4 | VIEW | VW_SQ_1 | 1 | 26 | 179 (3)| 00:00:03 | | |
| 5 | NESTED LOOPS | | 1 | 37 | 179 (3)| 00:00:03 | | |
|* 6 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
|* 7 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | 9 | 1 (0)| 00:00:01 | | |
| 8 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
|* 9 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
|* 10 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
| 11 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
| 12 | UNION-ALL | | | | | | | |
|* 13 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
| 14 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 15 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 16 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
|* 17 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
| 18 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 19 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 20 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
| 21 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 22 | TABLE ACCESS BY INDEX ROWID| T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
|* 23 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
| 24 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
|* 25 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
|* 26 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
| 27 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
| 28 | UNION-ALL | | | | | | | |
|* 29 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
| 30 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 31 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 32 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
|* 33 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
| 34 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 35 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 36 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
| 37 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 38 | TABLE ACCESS BY INDEX ROWID | T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
|* 39 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
3 - access("ITEM_0"="EXP_BFG_CM_JOIN_V"."CUSTOMER_ID" AND "ITEM_1"="EXP_BFG_CM_JOIN_V"."CONTRACT_ID")
6 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
7 - access("C536870913"="C536870914")
9 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")))
10 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NULL AND
TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
13 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
17 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
23 - access("C536870913"="C536870914")
25 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")) AND
"EXP_BFG_CM_JOIN_V"."CONTRACT_ID"=TO_NUMBER(TRIM("C536871088")))
26 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
TO_NUMBER(TRIM("C536871160"))=:SYS_B_1 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
29 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
33 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
39 - access("C536870913"="C536870914")
Remote SQL Information (identified by operation id):
14 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
15 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
"PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
18 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
19 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
(accessing 'ARS_BFG_DBLINK.WORLD' )
21 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
30 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
31 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
"PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
34 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
35 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
(accessing 'ARS_BFG_DBLINK.WORLD' )
37 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
-------------------------------------------------------------------------------
Please review the following threads:
{message:id=9360002}
{message:id=9360003} -
How to capture transaction response time in SQL
I need to capture the transaction response time (i.e., a ping test), calculated for the peak hours and averaged on a daily basis,
and
the page refresh time, calculated no less often than every 2 hours during peak hours and averaged on a daily basis.
Please assist
k
My best guess as to what you are looking for is something like the following (C#):
private int? Ping()
{
    System.Data.SqlClient.SqlConnection objConnection;
    System.Data.SqlClient.SqlCommand objCommand;
    System.Data.SqlClient.SqlParameter objParameter;
    System.Diagnostics.Stopwatch objStopWatch = new System.Diagnostics.Stopwatch();
    DateTime objStartTime, objEndTime, objServerTime;
    int intToServer, intFromServer;
    int? intResult = null;
    objConnection = new System.Data.SqlClient.SqlConnection("Data Source=myserver;Initial Catalog=master;Integrated Security=True;Connect Timeout=3;Network Library=dbmssocn;");
    using (objConnection)
    {
        objConnection.Open();
        using (objCommand = new System.Data.SqlClient.SqlCommand())
        {
            objCommand.Connection = objConnection;
            objCommand.CommandType = CommandType.Text;
            objCommand.CommandText = @"select @ServerTime = sysdatetime()";
            objParameter = new System.Data.SqlClient.SqlParameter("@ServerTime", SqlDbType.DateTime2, 7);
            objParameter.Direction = ParameterDirection.Output;
            objCommand.Parameters.Add(objParameter);
            objStopWatch.Start();
            objStartTime = DateTime.Now;
            objCommand.ExecuteNonQuery();
            objEndTime = DateTime.Now;
            objStopWatch.Stop();
            objServerTime = DateTime.Parse(objCommand.Parameters["@ServerTime"].Value.ToString());
            intToServer = objServerTime.Subtract(objStartTime).Milliseconds;
            intFromServer = objEndTime.Subtract(objServerTime).Milliseconds;
            intResult = (int?)objStopWatch.ElapsedMilliseconds;
            System.Diagnostics.Debug.Print(string.Format("Milliseconds from client to server {0}, milliseconds from server back to client {1}, and milliseconds round trip {2}.", intToServer, intFromServer, intResult));
        }
    }
    return intResult;
}
Now, while the round-trip measurement is fairly accurate, give or take 100 ms, any measurement of latency to and from SQL Server is subject to the accuracy of the time synchronization between the client and the server. If the server's and client's clocks aren't synchronized precisely, you will get odd results in the variables intToServer and intFromServer.
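The same idea can be sketched driver-agnostically (Python; `execute` here is a hypothetical stand-in for one round trip through your driver). Using a monotonic clock keeps the result independent of client/server clock synchronization:

```python
import time

def measure_round_trip(execute, repeats=5):
    """Time a no-op server call entirely on the client.

    `execute` is any zero-argument callable that performs one round
    trip (e.g. running "SELECT 1" through your driver). A monotonic
    clock means the result does not depend on the client and server
    clocks being synchronized.
    """
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        execute()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    return min(samples)  # best case ~ pure network + server overhead

# Stand-in for a real driver call, just for demonstration:
rtt_ms = measure_round_trip(lambda: None)
assert rtt_ms >= 0.0
```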
Since the round-trip result of the test is measured entirely on the client, that value isn't subject to the whims of client/server time synchronization. -
How to find the Response time for a particular Transaction
Hello Experts,
I am implementing a BAdI for a customer enhancement of the XD01 transaction. I need to show the customer the system response time before and after the implementation:
Response time BEFORE BAdI Implementation
Response time AFTER BAdI Implementation
Where can I get this?
Help me in this regard
Best Regards
SRiNi
Hello,
Within STAD, enter the time range during which the user was executing the transaction, as well as the user name. The time field indicates the time when the transaction ended. STAD adds some extra time to your time interval. Depending on how long the transaction ran, you can set the length you want it to display: if it is set to 10, STAD will display statistical records from transactions that ended within that 10-minute period.
The selection screen also gives you a few options for display mode.
- Show all statistic records, sorted by start time
This shows you all of the transaction steps, but they are not grouped in any way.
-Show all records, grouped by business transaction
This shows the transaction steps grouped by transaction ID (shown in the record as Trans. ID). The times are not cumulative. They are the times for each individual step.
-Show Business Transaction Totals
This shows the transaction steps grouped by transaction ID. However, instead of just listing them, you can drill down from the top level. The top level shows you the overall response time, and as you drill down, you can get to the response time of each individual step.
Note that you also need to add the user into the selection criteria. Everything else you can leave alone in this case.
Once you have the records displayed, you can double click them to get a detailed record. This will show you the following:
- Breakdown of response time (wait for work process, processing time, load time, generating time, roll time, DB time, enqueue time). This makes STAD a great place to start for performance analysis as you will then know whether you will need to look at SQL, processing, or any other component of response time first.
- Stats on the data selected within the execution
- Memory utilization of the transaction
- RFCs executed (including the calling time and remote execution time - very useful with performance analysis of interfaces)
- Much more.
As this chain of comments has previously indicated, you are best off using STAD if you want an accurate indication of response time. The ST12 trace (which combines the SE30 ABAP trace and the ST05 SQL trace) times are less accurate than the values you get from STAD. I am not discounting the value of ST12 by any means; it is a very powerful tool to help you tune your transactions.
I hope this information is helpful!
Kind regards,
Geoff Irwin
Senior Support Consultant
SAP Active Global Support -
How to tune transactions / Z-reports / programs with high response time
Dear friends,
In the <b>ST03</b> workload analysis menu, some Z-reports, transactions, and programs are continually noticed taking the <b>maximum response time</b> (and mostly >90% of that time is DB time).
How can I tune this situation?
Thank you.
Siva,
You can start with something like:
ST04 -> Detail Analysis -> SQL Request (look at top disk reads and buffer get SQL statements)
For the top SQL statements identified, you'd want to look at the explain plan to determine:
1) whether the SQL statement is inefficient
2) whether your DB stats on the tables are up to date (note that up-to-date stats do not always mean they are the best)
3) whether better indexes are available; if not, would a more suitable index help?
4) if there are many slow disk reads, whether there is an I/O issue
etc...
While you're in ST04 make sure your buffers are sized adequately.
Also make sure your Oracle parameters are set according to this OSS note.
Note 830576 - Parameter recommendations for Oracle 10g -
How to obtain the Query Response Time of a query?
Given the average row length and the number of rows in each table,
is there a way to get the query response time of a query involving
those tables? The query includes joins as well.
For example, suppose there are 3 tables t1, t2, t3. I wish to obtain the
time it takes for the following query:
Query
SELECT t1.col1, t2.col2
FROM t1, t2, t3
WHERE t1.col1 = t2.col2
AND t1.col2 IN ('a', 'c', 'd')
AND t2.col1 = t3.col2
AND t2.col1 = t1.col1 (+)
ORDER BY t1.col1
Given are:
Average Row Length of t1 = 200 bytes
Average Row Length of t2 = 100 bytes
Average Row Length of t3 = 500 bytes
No of rows in t1 = 100
No of rows in t2 = 1000
No of rows in t3 = 500
What is required is the 'query response time' for the said query.
I do not know how to do it myself. But if you are running Oracle 10g, I believe there is a new tool called SQL Tuning Advisor which might be able to help.
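Row counts and average row lengths alone cannot give a response time (that depends on the plan, caching, and hardware), but as a crude, purely illustrative sketch they do bound the data volume a full scan of all three tables would touch:

```python
# Crude back-of-envelope: bytes scanned if every table is read in full.
# This is NOT a response-time prediction; it ignores indexes, join
# strategy, buffer cache, and I/O characteristics entirely.
tables = {
    "t1": {"rows": 100,  "avg_row_len": 200},
    "t2": {"rows": 1000, "avg_row_len": 100},
    "t3": {"rows": 500,  "avg_row_len": 500},
}

total_bytes = sum(t["rows"] * t["avg_row_len"] for t in tables.values())
assert total_bytes == 100 * 200 + 1000 * 100 + 500 * 500  # 370,000 bytes

# Dividing by an *assumed* scan rate gives only an order of magnitude:
assumed_mb_per_sec = 100  # hypothetical sequential read throughput
seconds_lower_bound = total_bytes / (assumed_mb_per_sec * 1024 * 1024)
```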
Here are some links I found doing a google search, and it looks like it might meet your needs and even give you more information on how to improve your code.
http://www.databasejournal.com/features/oracle/article.php/3492521
http://www.databasejournal.com/features/oracle/article.php/3387011
http://www.oracle.com/technology/obe/obe10gdb/manage/perflab/perflab.htm
http://www.oracle.com/technology/pub/articles/10gdba/week18_10gdba.html
http://www.oracle-base.com/articles/10g/AutomaticSQLTuning10g.php
Have fun reading:
You can get help from teachers, but you are going to have to learn a lot by yourself, sitting alone in a room ....Dr. Seuss
Regards
Tim -
Query Tuning - Response time Statistics collection
Our application is load tested for a period of 1 hour at peak load.
During this specific period, thousands of queries get executed in the database.
What we need, for one particular query, say "select XYZ from ABC", within this span of 1 hour, are statistics like:
Number of times Executed
Average Response time
Maximum response time
minimum response time
90th percentile response time (sorted in ascending order, the value at the 90th percentile)
All these statistics are possible if I can get all the response times for that particular query for that period of 1 hour.
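For reference, once the raw response times are in hand, the listed statistics are a few lines of arithmetic (Python sketch; the sample data is made up, and the nearest-rank percentile definition used here is one of several):

```python
# Given all response times (ms) for one statement over the hour,
# the requested statistics reduce to simple arithmetic.
def response_time_stats(times_ms):
    ordered = sorted(times_ms)
    n = len(ordered)
    # "90th percentile guy": the value at the 90% position of the
    # ascending sort (nearest-rank method; other definitions exist).
    p90 = ordered[max(0, int(n * 0.9) - 1)]
    return {
        "executions": n,
        "avg_ms": sum(ordered) / n,
        "min_ms": ordered[0],
        "max_ms": ordered[-1],
        "p90_ms": p90,
    }

stats = response_time_stats([120, 80, 200, 95, 110, 400, 130, 90, 105, 150])
assert stats["executions"] == 10
assert stats["min_ms"] == 80 and stats["max_ms"] == 400
```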
I tried using SQL trace and TKPROF but was unable to get all these statistics...
Application uses connection pooling, so connections are taken as and when needed...
Any thoughts on this?
Appreciate your help.
I don't think v$sqlarea can help me out with the exact stats I need, but it certainly has a lot of other stats. BTW, there is no dictionary view called v$sqlstats.
Other applications share the same database where I am trying to capture stats for my application, so flushing the cache, which currently has 30K rows, is not a feasible solution.
Any more thoughts on this? -
Significant difference in response times for the same query running on a Windows client vs the database server
I have a query which is taking a long time to return the results using the Oracle client.
When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
When I run the same query on a Windows client it completes in 47 minutes.
Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
In both cases the query plans are the same.
The query and plan is shown below :
{code}
SQL> explain plan
2 set statement_id = 'SLOW'
3 for
4 SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
5 FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
6 ;
Explained.
SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 2852K| 46M| | 69851 (1)|
| 1 | HASH UNIQUE | | 2852K| 46M| 153M| 69851 (1)|
|* 2 | TABLE ACCESS FULL| DOCUMENTS | 2852K| 46M| | 54063 (1)|
{code}
Are there are configuration changes that can be done on the Oracle client or database to improve the response times for the query when it is running from the client?
The version on the database server is 10.2.0.1.0
The version of the oracle client is also 10.2.0.1.0
I am happy to provide any further information if required.
Thank you in advance.
I have a query which is taking a long time to return the results using the Oracle client.
When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
When I run the same query on a Windows client it completes in 47 minutes.
There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
A client can choose when to FETCH query results. In SQL Developer (or Toad) I can choose to get 10 rows at a time. Until I choose to get the next set of 10 rows, NO rows will be returned from the server to the client; that query might NEVER complete.
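The fetch-on-demand behaviour can be mimicked with a generator (Python, illustrative; real drivers expose this as a fetch or array size, and the function names here are hypothetical):

```python
# A query result is produced lazily: the "server" does work only when
# the client asks for the next batch, which is why a client that stops
# fetching can make a query appear to never complete.
def server_rows(n):
    for i in range(n):
        # ...row i is materialized only when the client pulls it...
        yield i

cursor = server_rows(1_000_000)

def fetchmany(cur, size):
    batch = []
    for row in cur:
        batch.append(row)
        if len(batch) == size:
            break
    return batch

first = fetchmany(cursor, 10)   # only 10 rows ever produced so far
assert first == list(range(10))
second = fetchmany(cursor, 10)  # next 10 on demand
assert second == list(range(10, 20))
```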
You may be seeing the same effect, depending on the client you are using. Post your question in a forum for whatever client you are using. -
Explain plan - lower cost but higher response time in 11g compared to 10g
Hello,
I have a strange scenario where I'm migrating a DB from a standalone Sun file system running the 10g RDBMS to a 2-node Sun/ASM 11g RAC environment. The issue is the response time of queries:
In 11g Env:
SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
LAST_ANALYZED NUM_ROWS
11-08-2012 18:21:12 3413956
Elapsed: 00:00:00.30
In 10g Env:
SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
LAST_ANAL NUM_ROWS
07-NOV-12 3502160
Elapsed: 00:00:00.04
If you look at the response times, even a simple query on dba_tables takes ~8 times as long. Any ideas what might be causing this? I have compared the explain plans and they are exactly the same; moreover, the cost is lower in the 11g env than in the 10g env, but the response time is still higher.
BTW, I'm running the queries directly on the server, so there is no network latency in play here.
Thanks in advance
aBBy.
*11g Env:*
PLAN_TABLE_OUTPUT
Plan hash value: 4147636274
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1104 | 376K| 394 (1)| 00:00:05 |
| 1 | SORT ORDER BY | | 1104 | 376K| 394 (1)| 00:00:05 |
| 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1104 | 376K| 393 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1136 | | 15 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
15 rows selected.
*10g Env:*
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4147636274
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1137 | 373K| 389 (1)| 00:00:05 |
| 1 | SORT ORDER BY | | 1137 | 373K| 389 (1)| 00:00:05 |
| 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1137 | 373K| 388 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1137 | | 15 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
15 rows selected.
The query used is:
explain plan for
select
NCP_DETAIL_ID ,
NCP_ID ,
STATUS_ID ,
FIBER_NODE ,
NODE_DESC ,
GL ,
FTA_ID ,
OLD_BUS_ID ,
VIRTUAL_NODE_IND ,
SERVICE_DELIVERY_TYPE ,
HHP_AUDIT_QTY ,
COMMUNITY_SERVED ,
CMTS_CARD_ID ,
OPTICAL_TRANSMITTER ,
OPTICAL_RECEIVER ,
LASER_GROUP_ID ,
UNIT_ID ,
DS_SLOT ,
DOWNSTREAM_PORT_ID ,
DS_PORT_OR_MOD_RF_CHAN ,
DOWNSTREAM_FREQ ,
DOWNSTREAM_MODULATION ,
UPSTREAM_PORT_ID ,
UPSTREAM_PORT ,
UPSTREAM_FREQ ,
UPSTREAM_MODULATION ,
UPSTREAM_WIDTH ,
UPSTREAM_LOGICAL_PORT ,
UPSTREAM_PHYSICAL_PORT ,
NCP_DETAIL_COMMENTS ,
ROW_CHANGE_IND ,
STATUS_DATE ,
STATUS_USER ,
MODEM_COUNT ,
NODE_ID ,
NODE_FIELD_ID ,
CREATE_USER ,
CREATE_DT ,
LAST_CHANGE_USER ,
LAST_CHANGE_DT ,
UNIT_ID_IP ,
US_SLOT ,
MOD_RF_CHAN_ID ,
DOWNSTREAM_LOGICAL_PORT ,
STATE
from markethealth.NCP_DETAIL_TAB
WHERE UNIT_ID = :B1
ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
This is the query used for Query 1.
Stats differences are:
1. The row count differs by approx. 90K (more rows in the 10g env).
2. The RAC env has 4 additional columns (excluded from the select statement for analysis purposes).
3. Gather Stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g. -
Response times select on resource_view
Hi
I have an application that handles quite large volumes of documents. All documents are stored in XDB. Response times are now getting long; this was not a problem in the past but is becoming acute. I use resource_view frequently to list files and folders. I know that you should use "equals_path" when searching resource_view for the best response times, but for various reasons there are a few places where I have to select on RESID. My question: is there an easy way to get better response times when selecting by RESID? See the example below: if I search on RESID it takes 4 seconds; if I use "equals_path", a few milliseconds. Is there any way to speed up the search on RESID?
I'm running Oracle 11g.
select any_path from resource_view where resid='9F124A513AAC9A44E040240A43227D33'; --4 sec
select any_path from resource_view where equals_path(RES, '/public/infoportal/stapswe/BLIVATEST117_45100') = 1 --32 msec
Lennart
Hi,
Try with HEXTORAW function :
select any_path
from resource_view
where resid = hextoraw('9F124A513AAC9A44E040240A43227D33')
;
The optimizer should then consider using an access path based on the index because the datatypes match.
By contrast, without an explicit conversion to the RAW datatype, the optimizer internally converts RESID to VARCHAR2 by applying a function to it, thus preventing the index from being used.
See both explain plans for the details.
SQL> explain plan for
2 select * from resource_view where resid = '8BAE7B7BE7D14E07A13E73F6824648E3'
3 ;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3007404872
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 132 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| XDB$RESOURCE | 1 | 132 | 3 (0)| 00:00:01 |
|* 2 | DOMAIN INDEX | XDBHI_IDX | | | | |
Predicate Information (identified by operation id):
1 - filter(RAWTOHEX("SYS_NC_OID$")='8BAE7B7BE7D14E07A13E73F6824648E3')
2 - access("XDB"."UNDER_PATH"(SYS_MAKEXML('8758D485E6004793E034080020B242C6',734,
"XMLEXTRA","XMLDATA"),'/',9999)=1)
16 rows selected.
SQL> explain plan for
2 select * from resource_view where resid = hextoraw('8BAE7B7BE7D14E07A13E73F6824648E3')
3 ;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1655379850
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 132 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID | XDB$RESOURCE | 1 | 132 | 3 (0)| 00:00:01 |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 3 | BITMAP AND | | | | | |
| 4 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 5 | INDEX RANGE SCAN | SYS_C003123 | 1 | | 0 (0)| 00:00:01 |
| 6 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 7 | SORT ORDER BY | | | | | |
|* 8 | DOMAIN INDEX | XDBHI_IDX | 1 | | | |
Predicate Information (identified by operation id):
5 - access("SYS_NC_OID$"=HEXTORAW('8BAE7B7BE7D14E07A13E73F6824648E3') )
8 - access("XDB"."UNDER_PATH"(SYS_MAKEXML('8758D485E6004793E034080020B242C6',734,"XMLE
XTRA","XMLDATA"),'/',9999)=1)
22 rows selected.
Edited by: odie_63 on Apr 8, 2011 08:55 -
Average HTTP response time going up week by week as said by SAP
Hello All,
I was looking into our SRM EarlyWatch report. SRM is on SQL Server 2005 with Windows 2003 Server.
Among the performance indicators there is an interesting trend, though nothing is in yellow or red:
the values of the following parameters are going up: the average response time in the HTTP task, the maximum number of HTTP steps per hour, and the average DB request time in the HTTP task.
I understand that as the number of HTTP steps increases, the response time rises, which is quite obvious.
But how can this load be balanced so that the average response time comes down?
What steps should be taken to bring the response time down even though the load on the server is going up?
Rohit
Hi Rohit,
Is your system in a High Availability setup? If yes, try to balance the load across Node A and Node B. If this is already done, or if your system is not in high availability, plan and install an additional application server (dialog instance) for load sharing.
Regards,
Sharath -
Hello,
Problem: New SG-200 26P Smart Switch with Latest Firmware - Very High Response Time 500-800 ms
We have an EdgeMarc 4500 router with 10 VPN tunnels to 10 branch locations. The SG-200 26P Smart Switch is connected to 7 servers (2 Terminal, SQL, and others). All locations have 50 Mbps download and 20 Mbps upload speed from Verizon FiOS Internet service.
As per the SolarWinds tool, the response time of this switch is around 500 ms. At the same time, the EdgeMarc 4500 router response time is around 40 ms or less.
We have 60 desktops remotely connected to our SQL Server database and 40 RDP users via Remote Desktop. The configuration has been the same for the past 3 years, but we changed the switch from an HP 1800-24G to Cisco due to some connection failures. We first suspected the old HP switch, but it looks like an issue with the EdgeMarc router.
Is this response time normal? I attached two screenshots of both the Cisco switch and the EdgeMarc router response times over the past 24 hours, according to the SolarWinds tool. Any further advice would be greatly appreciated. Thank you.
Hello Srinath,
Thank you for participating in the Small Business support community. My name is Nico Muselle from Cisco Sofia SBSC.
The response time from the switch could be considered quite normal. The reason is that the switch gives CPU priority to its actual duties, which are switching, access lists, VLANs, QoS, multicast and DHCP snooping, etc. As a result, ping response times of the switch itself do not in any way reflect whether the switch is working correctly.
I invite you to try pinging clients connected to the switch, you should be able to notice that response times to the clients are a lot lower than response times of the switch itself.
Hope this answers your question !
Best regards,
Nico Muselle
Sr. Network Engineer - CCNA - CCNA Security -
High Response Times with 50 content items in a Publisher 6.5 portlet
Folks,
I have set up a load test, running with a single user, in which new News Article content items are inserted into a Publisher 6.5 portlet created from the News Portlet template. Inserts have good response times through the first 25 or so content items in the portlet. Then response times grow linearly, until it takes ten minutes to insert a content item once there are already 160 content items.
This is a test system that is experiencing no other problems. There are no other users on the system, only the single test user in LoadRunner, inserting one content item at a time. The actual size of each content item is tiny. Memory usage in the Publisher JVM (as seen on the Diagnostics page) does not vary from 87% used with 13% free. So I asked for a DB trace to determine whether there were long-running queries. I can provide this on request; it zips to less than 700 KB.
Have seldom seen this kind of linear scalability!
Looked at the trace through SQL Server Profiler. There are several items running for more than one second, the Audit Logout EventClass repeatedly occurs with long durations (ten minutes and more). The users are publisher user, workflow user, an NT user and one DatabaseMail transaction taking 286173 ms.
In most cases there is no TextData, and the ApplicationName is i-net opta 2000 (which looks like a JDBC driver) in the longest-running cases.
Nevertheless, for the short running queries, there are many (hundreds) of calls to exec sp_execute and IF @@TRANCOUNT > 0 ROLLBACK TRAN. This is most of what fills the log. This is strange because only a few records were actually inserted successfully, during the course of the test. I see numerous calls to sp_prepexec related to the main table in question, PCSCONTENTITEMS, but very short duration (no apparent problems) on the execution of the stored procedures. Completed usually within 20ms.
I am unable to tell whether a session has an active request but is being blocked, or is blocking others... can anyone with SQL Server DBA knowledge help me interpret these results?
Thanks !!!
Robert
Hmmm... is this the OOTB news portlet? Does it keep all content items in one Publisher folder? If so, it is probably trying to re-publish that entire folder for every content item and choking on multiple republish executes. I don't think that OOTB portlet was meant to cover a use case of multiple content item inserts this quickly; by definition, newsworthy stuff should not need bulk inserts. Is there another way to insert all of the items using Publisher admin and then do one publish for all of them?
I know from past migration efforts, when I've written utilities to migrate from legacy systems to Publisher, that the inserts and saves for each item took a couple of seconds each. The publishing was done at the end and took quite a long time.
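On Robert's question about telling blocked sessions from blockers: in SQL Server 2005 and later, the dynamic management views expose this directly. A minimal sketch (the DMV and column names are standard in SQL Server 2005+; nothing here comes from the trace discussed in the thread):

```sql
-- Hedged sketch: list currently executing requests that are blocked,
-- together with the session that is blocking them.
SELECT
    r.session_id,
    r.blocking_session_id,   -- non-zero when this request is blocked
    r.wait_type,
    r.wait_time,             -- milliseconds spent waiting so far
    t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Running this while the LoadRunner test is inserting content items should show whether the long Audit Logout durations correspond to sessions waiting on locks.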