Response times
What's the average response time for pinging the DB server?
Rackspace quotes 1-6 ms as an acceptable response time; I was wondering if Azure is faster.
Thanks
Hi PangbornIdentity,
I run a monitoring service for Azure SQL DB, and one of the things we monitor for users is connection latency. This is not an exact match for pings, but it should give you an idea of what we see as an external monitoring service. Please keep in mind that we run these probes against customer databases around the world; if you do a ping (or connection latency) test from within the Azure data centers,
the response times will be much better than what we see.
Over a sample of the last 5,000 connection latency tests we ran this is what we saw:
Average: 475ms
Max: 12,828ms
Min: 15ms
I hope that helps!
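If you want to gather the same kind of numbers yourself, here is a minimal sketch (Python; the hostname is a placeholder and port 1433 is my assumption for SQL Server, not anything confirmed above) that times raw TCP connection attempts and summarizes them the same way:

```python
import socket
import time

def summarize(latencies_ms):
    """Return (average, max, min) over a list of latencies in milliseconds."""
    return (sum(latencies_ms) / len(latencies_ms), max(latencies_ms), min(latencies_ms))

def connect_latency_ms(host, port, timeout=5.0):
    """Time a single TCP connect; raises OSError on failure."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

# Example (placeholder hostname): run a handful of probes against your server.
# samples = [connect_latency_ms("yourserver.database.windows.net", 1433) for _ in range(20)]
# avg, worst, best = summarize(samples)
```

Run enough samples that one slow outlier (like the 12,828 ms max above) doesn't dominate the average.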
Similar Messages
-
Unable to capture the Citrix network response time using OATS Load testing.
Unable to capture the Citrix network response time using OATS load testing. Here is the scenario: in our project, users log into the Citrix network, select the Hyperion application, and perform transactions, and the client wants us to simulate the same scenario for load testing. We have scripted everything starting from the Citrix login and then launching the Hyperion application. But the time taken to launch the Hyperion application from the Citrix network has not been captured, whereas the Hyperion transaction times have been recorded. Can anyone help resolve this issue?
Hi keerthi,
1. I have pasted the code for the first issue:
web.button(
    122,
    "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1824fhkchs_6']/web:form[@id='pt1:_UISform1' or @name='pt1:_UISform1' or @index='0']/web:button[@id='pt1:MA:0:n1:1:pt1:qryId1::search' or @value='Search' or @index='3']")
    .click();
adf.table(
    "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1c9nk1ryzv_6']/web:ADFTable[@absoluteLocator='pt1:MA:n1:pt1:pnlcltn:resId1']")
    .columnSort("Ascending", "Name");
-
HP Pavilion hard drive response time is very slow but tests pass (eventually)
I have owned a HP Pavilion M1199a originally running Windows XP, but has been running Windows Vista 32-bit for the last 3 years or more.
The PC started to lock up for 30 seconds or more (starting 2 weeks ago), with the disk light constantly on, and then carry on again for no apparent reason. These lock-ups have become more frequent and longer in duration. I removed anti-virus software and a few other applications/services, but generally this computer is fairly clean, as I use it as a media center PC.
I have tried checking the disk (SATA) for errors, and it passes although the tests take a lot longer than they should. The resource monitor shows no excess CPU usage and there is plenty of memory available. The disk monitor shows response times of 5000-20000 ms.
What is the best way to proceed from here? Is there a SATA controller or motherboard (ASUS PTGD1-LA) test? Should I buy another SATA drive and try that? I suspect that it is either the drive, drive controller and/or the motherboard that is failing but I don't know how to isolate the problem. The computer hardware configuration has remained the same for years.
The OS has automatic updates enabled, and I uninstalled the recent ones in case they were somehow causing an issue.
tr3v wrote:
Thanks for replying.
1/ No, but I am running Vista, so assume this will be the same? Or should I just look at the tmp and temp environment variables to see what folders are being used?
In Vista, type temp in the Search programs and files box and double-click the temp folder icon that appears above. Delete all the files in the folder that you can. It is safe, as they are exactly what they are called: temporary files. If you haven't done this ever, or in quite a while, you should see a noticeable improvement in the operating system's responsiveness.
2/ Yes. It completes after a very long time.
5/ No - but will try this out too.
****Please click on Accept As Solution if a suggestion solves your problem. It helps others facing the same problem to find a solution easily****
2015 Microsoft MVP - Windows Experience Consumer -
SAP GoLive : File System Response Times and Online Redologs design
Hello,
An SAP Going Live Verification session has just been performed on our SAP production environment.
SAP ECC6
Oracle 10.2.0.2
Solaris 10
As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
1/
We have been told that our file system read response times "do not meet the standard requirements".
The following datafile has been considered as having too high an average read time per block:
File name - Blocks read - Avg. read time (ms) - Total read time per datafile (ms)
/oracle/PMA/sapdata5/sr3700_10/sr3700.data10 - 67534 - 23 - 1553282
I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
2/
We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
Actually, we have BW loads that generate "Checkpoint not complete" messages every night.
I've read in SAP note 79341 that:
"The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
Frankly, I have trouble understanding this sentence.
Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
But how is it that frequent checkpoints should decrease the time necessary for recovery?
Thank you.
Any useful help would be appreciated.
Hello
>> I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
The recommended ("standard") values are published at the end of sapnote #322896.
23 ms really does seem a little high to me - for example, we have roughly 4 to 6 ms on our production system (with SAN storage).
>> Frequent checkpoints means more redo log file switches, means more archive redo log files generated. right?
Correct.
>> But how is it that frequent checkpoints should decrease the time necessary for recovery?
A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following three things happen in an Oracle database:
Every dirty block in the buffer cache is written down to the datafiles
The latest SCN is written (updated) into the datafile headers
The latest SCN is also written to the controlfiles
If your redo log files are larger, checkpoints happen less often, and in that case the dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN - ergo the recovery is faster.
But this concept does not fully match reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is met, for example FAST_START_MTTR_TARGET.
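A toy model of this trade-off (my own simplification, not Oracle code): if a checkpoint fires at every log switch, then after a crash only the changes made since the last switch need roll-forward, so smaller log files bound the redo to replay more tightly.

```python
def redo_to_replay(changes_since_startup, changes_per_logfile):
    """With a checkpoint at every log switch, dirty blocks older than the
    last switch are already on disk; only the remainder must be replayed.
    The remainder is at most changes_per_logfile - 1."""
    return changes_since_startup % changes_per_logfile

# Crash after 1037 changes: a big log file leaves more redo outstanding
# than a small one, so instance recovery takes longer.
# redo_to_replay(1037, 300) -> 137 changes to roll forward
# redo_to_replay(1037, 50)  -> 37 changes to roll forward
```

The model ignores incremental checkpointing and FAST_START_MTTR_TARGET, which is exactly the "does not fully match reality" caveat above.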
Regards
Stefan -
Is there a solution to a jerky screen and a slow response time after updating to IOS 8?
Try some basic troubleshooting:
(A) Try reset iPad
Hold down the Sleep/Wake button and the Home button at the same time for at least ten seconds, until the Apple logo appears
(B) Try reset all settings
Settings>General>Reset>Reset All Settings
(C) Setup as new (data will be lost)
Settings>General>Reset>Erase all content and settings -
Is there a way to speed up the response time from the dock
Is there a way to speed up the response time for external hard drives from the dock? I have three external HDs, but when I click on the alias in the dock there is always a hesitation before it opens. I'm leaning towards the idea that it is just a result of the Mac's speed being what it is. But maybe there is something I can do to speed it up. The drives are plugged via USB directly into the Mac and they have their own power source. The curious thing is that they were once plugged into a large, separately powered USB hub and I don't recall a lag like I have now. Any thoughts? Thanks...
Message was edited by: gfann18
Have you got the drives set up to spin down when not in use?
Have a look at the Energy Saver settings in System Preference on the Mac.
Make sure the "Put the Hard Discs to sleep when possible" box is not ticked. -
SQL tune (High response time)
Hi,
I am writing the following function, which is causing high response times. Can you please help? The DBMS_SQLTUNE output is below - please advise.
GENERAL INFORMATION SECTION
Tuning Task Name : BFG_TUNING1
Tuning Task Owner : ARADMIN
Scope : COMPREHENSIVE
Time Limit(seconds) : 60
Completion Status : COMPLETED
Started at : 01/28/2013 15:48:39
Completed at : 01/28/2013 15:49:43
Number of SQL Restructure Findings: 7
Number of Errors : 1
Schema Name: ARADMIN
SQL ID : 2d61kbs9vpvp6
SQL Text : SELECT /*+no_merge(chg)*/ chg.CHANGE_REFERENCE,
chg.Customer_Name, chg.Customer_ID, chg.Contract_ID,
chg.Change_Title, chg.Change_Type, chg.Change_Description,
chg.Risk, chg.Impact, chg.Urgency, chg.Scheduled_Start_Date,
chg.Scheduled_End_Date, chg.Scheduled_Start_Date_Int,
chg.Scheduled_End_Date_Int, chg.Outage_Required,
chg.Change_Status, chg.Change_Status_IM, chg.Reason_for_change,
chg.Customer_Visible, chg.Change_Source,
chg.Related_Ticket_Type, chg.Related_Ticket_ID,
chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
chg.Element_id, chg.Element_Type, chg.Element_Name,
chg.Search_flag, chg.remedy_id, chg.Change_Manager,
chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
a.bfg_con_id IS NULL AND a.bfg_cus_id = chg.customer_id AND
NOT EXISTS (SELECT a.bfg_con_id FROM exp_cm_cusid1 a WHERE
a.bfg_con_id IS NOT NULL AND a.bfg_cus_id = chg.customer_id
AND a.bfg_con_id = chg.contract_id ) UNION SELECT
/*+no_marge(chg)*/ chg.CHANGE_REFERENCE, chg.Customer_Name,
chg.Customer_ID, chg.Contract_ID, chg.Change_Title,
chg.Change_Type, chg.Change_Description, chg.Risk, chg.Impact,
chg.Urgency, chg.Scheduled_Start_Date, chg.Scheduled_End_Date,
chg.Scheduled_Start_Date_Int, chg.Scheduled_End_Date_Int,
chg.Outage_Required, chg.Change_Status, chg.Change_Status_IM,
chg.Reason_for_change, chg.Customer_Visible, chg.Change_Source,
chg.Related_Ticket_Type, chg.Related_Ticket_ID,
chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
chg.Element_id, chg.Element_Type, chg.Element_Name,
chg.Search_flag, chg.remedy_id, chg.Change_Manager,
chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
a.bfg_cus_id = chg.customer_id AND a.bfg_con_id =
chg.contract_id AND a.bfg_con_id IS NOT NULL
FINDINGS SECTION (7 findings)
1- Restructure SQL finding (see plan 1 in explain plans section)
The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
line ID 26 of the execution plan contains an expression on indexed column
"C536871160". This expression prevents the optimizer from selecting indices
on table "ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
2- Restructure SQL finding (see plan 1 in explain plans section)
The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 26 of
the execution plan contains an expression on indexed column "C536871160".
This expression prevents the optimizer from selecting indices on table
"ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
3- Restructure SQL finding (see plan 1 in explain plans section)
The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
line ID 10 of the execution plan contains an expression on indexed column
"C536871160". This expression prevents the optimizer from selecting indices
on table "ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
4- Restructure SQL finding (see plan 1 in explain plans section)
The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 10 of
the execution plan contains an expression on indexed column "C536871160".
This expression prevents the optimizer from selecting indices on table
"ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
5- Restructure SQL finding (see plan 1 in explain plans section)
The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
line ID 6 of the execution plan contains an expression on indexed column
"C536871160". This expression prevents the optimizer from selecting indices
on table "ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
6- Restructure SQL finding (see plan 1 in explain plans section)
The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 6 of
the execution plan contains an expression on indexed column "C536871160".
This expression prevents the optimizer from selecting indices on table
"ARADMIN"."T100".
Recommendation
- Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.
Rationale
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.
7- Restructure SQL finding (see plan 1 in explain plans section)
An expensive "UNION" operation was found at line ID 1 of the execution plan.
Recommendation
- Consider using "UNION ALL" instead of "UNION", if duplicates are allowed
or uniqueness is guaranteed.
Rationale
"UNION" is an expensive and blocking operation because it requires
elimination of duplicate rows. "UNION ALL" is a cheaper alternative,
assuming that duplicates are allowed or uniqueness is guaranteed.
ERRORS SECTION
- The current operation was interrupted because it timed out.
EXPLAIN PLANS SECTION
1- Original
Plan hash value: 1047651452
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | SELECT STATEMENT | | 2 | 28290 | 567 (37)| 00:00:07 | | |
| 1 | SORT UNIQUE | | 2 | 28290 | 567 (37)| 00:00:07 | | |
| 2 | UNION-ALL | | | | | | | |
|* 3 | HASH JOIN RIGHT ANTI | | 1 | 14158 | 373 (5)| 00:00:05 | | |
| 4 | VIEW | VW_SQ_1 | 1 | 26 | 179 (3)| 00:00:03 | | |
| 5 | NESTED LOOPS | | 1 | 37 | 179 (3)| 00:00:03 | | |
|* 6 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
|* 7 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | 9 | 1 (0)| 00:00:01 | | |
| 8 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
|* 9 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
|* 10 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
| 11 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
| 12 | UNION-ALL | | | | | | | |
|* 13 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
| 14 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 15 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 16 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
|* 17 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
| 18 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 19 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 20 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
| 21 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 22 | TABLE ACCESS BY INDEX ROWID| T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
|* 23 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
| 24 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
|* 25 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
|* 26 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
| 27 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
| 28 | UNION-ALL | | | | | | | |
|* 29 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
| 30 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 31 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 32 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
|* 33 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
| 34 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 35 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 36 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
| 37 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
| 38 | TABLE ACCESS BY INDEX ROWID | T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
|* 39 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
3 - access("ITEM_0"="EXP_BFG_CM_JOIN_V"."CUSTOMER_ID" AND "ITEM_1"="EXP_BFG_CM_JOIN_V"."CONTRACT_ID")
6 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
7 - access("C536870913"="C536870914")
9 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")))
10 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NULL AND
TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
13 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
17 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
23 - access("C536870913"="C536870914")
25 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")) AND
"EXP_BFG_CM_JOIN_V"."CONTRACT_ID"=TO_NUMBER(TRIM("C536871088")))
26 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
TO_NUMBER(TRIM("C536871160"))=:SYS_B_1 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
29 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
33 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
39 - access("C536870913"="C536870914")
Remote SQL Information (identified by operation id):
14 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
15 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
"PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
18 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
19 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
(accessing 'ARS_BFG_DBLINK.WORLD' )
21 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
30 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
31 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
"PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
34 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
35 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
(accessing 'ARS_BFG_DBLINK.WORLD' )
37 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
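The repeated findings above all point at the same fix: index the expression itself so predicates like TO_NUMBER(TRIM(col)) can be resolved from an index. A sketch of the idea using SQLite via Python (the table/column names echo the report, but the schema here is invented, and SQLite's CAST/TRIM stand in for Oracle's TO_NUMBER/TRIM):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t100 (c536871160 TEXT)")
con.executemany("INSERT INTO t100 VALUES (?)",
                [(" 14187 ",), ("9999",), ("abc",)])

# Index the expression itself, so a predicate written the same way
# can be answered from the index instead of a full table scan.
con.execute("CREATE INDEX idx_num ON t100 (CAST(TRIM(c536871160) AS INTEGER))")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t100 "
    "WHERE CAST(TRIM(c536871160) AS INTEGER) = 14187"
).fetchall()
rows = con.execute(
    "SELECT * FROM t100 WHERE CAST(TRIM(c536871160) AS INTEGER) = 14187"
).fetchall()
# The plan should show a search using idx_num rather than a full scan.
```

In Oracle the equivalent would be a function-based index on TO_NUMBER(TRIM("C536871160")), exactly as the advisor recommends.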
-------------------------------------------------------------------------------
Please review the following threads:
{message:id=9360002}
{message:id=9360003} -
As in many families, I am ready to upgrade my iPhone 4 to a 5 and pass my 4 on to my partner (who has my old 3), BUT I am resistant to do so because I have been having problems with Safari (very slow response times) ever since I upgraded to the new OS a month or so back. I have completed all the cleaning tips (cookies, individual app shut-downs) and hard-boot steps with no change. I can sit my partner's iPhone 3 next to my 4, and when using Safari the 3 beats it hands down - the 4 takes almost double the time. My 4 is quicker in all the other apps, so I don't believe it is a hardware issue. My tests were done using our home Wi-Fi, set up the same on both phones, after running all the cleaning tips on both.
I actually broke down and called the support line and the first level support told me if I upgraded to the 5 I would likely carry the problem to the new phone. She said she suspected malware or a virus and that for $300 she could forward me to second level support that could scan my phone and see what the problem is on my 4 and support any issues on a new iphone 5 OR for a one time lower fee $79, just help me fix my 4.
Thinking through her options, I chose none of the above. It seems like I should be able to either 1) isolate the issue myself with the help of this forum or 2) upgrade to the iPhone 5 and, if the problem follows to the new one, get support via the new device's warranty. If the support person is correct and the problem is in the software and would follow the device, then in theory I could wipe and restore my old 4 with the backup/data from my partner's 3, and the problem on the 4 should be resolved.
I really don't want to carry the problem to a new phone if I can fix the issue first. I have AirWatch software to push my work email to my phone, and that is usually a pain to redo when I upgrade anyhow. I don't need to pay for a new phone that has the same problem as my old one. I will get grief from my partner for passing on an issue that will come up in daily use of the old iPhone 4, which I will be responsible for fixing anyhow. I have invested too much time already to generate problems on both our phones. Thanks in advance for any tips or guidance about a streamlined way forward.
Basic troubleshooting from the User's Guide is reset, restart, restore (first from backup, then as new). Has any of this been tried?
FYI, there are no viruses that affect iOS unless the device has been hacked or jailbroken, in which case they cannot be discussed here. -
Response time for Error Messages - Please Help
Hi
I have a PRO C application talking to an Oracle database.
The Response time for successful query is within desirable limits.
But when there is an error condition (e.g. SQLError -3113, or connection refused), it takes more than 9 minutes for the database to respond with the error code.
This condition is observed with only one database while the others are working fine.
What is the reason for this? Can it be reduced?
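For reference, a multi-minute wait before an error usually means the client is sitting in OS-level TCP connect/retry timeouts rather than anything the database computes; the fix is an explicit client-side timeout (on recent Oracle clients, SQLNET.OUTBOUND_CONNECT_TIMEOUT in sqlnet.ora plays this role). The general idea, sketched in Python rather than Pro*C:

```python
import socket
import time

def try_connect(host, port, timeout_s=3.0):
    """Attempt a TCP connect with an explicit timeout, returning
    (succeeded, elapsed_seconds) instead of blocking for minutes
    on an unreachable or dead host."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True, time.perf_counter() - start
    except OSError:
        return False, time.perf_counter() - start
```

With a bounded timeout the failure surfaces in seconds, and the application can retry or report the error itself.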
Regards
David
Has anyone ever been faced with the same problem?
Why delete? Is that the only way to fix this problem?
What are others doing in such cases? Or am I the only person in the world
who has this special problem? Besides, I don't believe
in solving the problem by removing the mentioned directory and
reinstalling. Nevertheless, I will try it and let you know about the result.
bye
sas -
Response Time of a query in 2 different enviroment
Hi guys, Luca speaking - sorry for the badly written English.
The question is:
The same query on the same table - same definition, same number of rows, defined on the same kind of tablespace - and the tables are analyzed.
*) I have a query in Benchmark with good results in execution time; the execution plan is really good
*) in Production the execution plan is not so good, and the response time isn't comparable (hours vs. seconds)
#### The execution plans are different ####
#### The stats are the same ####
this is the table storico.FLUSSO_ASTCM_INC A, with these stats in Benchmark:
chk Owner Name Partition Subpartition Tablespace NumRows Blocks EmptyBlocks AvgSpace ChainCnt AvgRowLen AvgSpaceFLBlocks NumFLBlocks UserStats GlobalStats LastAnalyzed SampleSize Monitoring Status
True STORICO FLUSSO_ASTCM_INC TBS_DATA 2861719 32025 0 0 0 74 NO YES 10/01/2006 15.53.43 2861719 NO Normal, Successful Completion: 10/01/2006 16.26.05
in Production the stats are the same
the other table is an external table
the only difference that I noticed so far is the tablespace the table is defined on:
Production
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
Benchmark
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
I'm studying it at the moment.
What do I have to check to obtain the same execution plan (without changing the query)?
This is the query:
SELECT
'test query',
sysdate,
storico.tc_scarti_seq.NEXTVAL,
NULL, --ROW_ID
-- A.AZIONE,
'I',
A.CODE_PREF_TCN,
A.CODE_NUM_TCN,
'ADSL non presente su CRM' ,
-- a.AZIONE
'I'
|| ';' || a.CODE_PREF_TCN
|| ';' || a.CODE_NUM_TCN
|| ';' || a.DATA_ATVZ_CMM
|| ';' || a.CODE_PREF_DSR
|| ';' || a.CODE_NUM_TFN
|| ';' || a.DATA_CSSZ_CMM
|| ';' || a.TIPO_EVENTO
|| ';' || a.INVARIANTE_FONIA
|| ';' || a.CODE_TIPO_ADSL
|| ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
|| ';' || a.TIPO_RICHIESTA_CESSAZIONE
|| ';' || a.ROW_ID_ATTIVAZIONE
|| ';' || a.ROW_ID_CESSAZIONE
FROM storico.FLUSSO_ASTCM_INC A
WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
Output of SET AUTOTRACE TRACEONLY EXPLAIN in Production (ESERCIZIO):
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
Output of SET AUTOTRACE TRACEONLY EXPLAIN in Benchmark:
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 By
tes=291895338)
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=29189533 :Q810002
8)
3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 :Q810000
Card=2861719 Bytes=183150016)
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q810001
t=2 Card=1 Bytes=38)
2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) US
E_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
3 PARALLEL_FROM_SERIAL
4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
EF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
The differences in the init.ora are in these parameters.
Could they influence the optimizer enough that the execution plans are so different?
background_dump_dest
cpu_count
db_file_multiblock_read_count
db_files
db_32k_cache_size
dml_locks
enqueue_resources
event
fast_start_mttr_target
fast_start_parallel_rollback
hash_area_size
log_buffer
log_parallelism
max_rollback_segments
open_cursors
open_links
parallel_execution_message_size
parallel_max_servers
processes
query_rewrite_enabled
remote_login_passwordfile
session_cached_cursors
sessions
sga_max_size
shared_pool_reserved_size
sort_area_retained_size
sort_area_size
star_transformation_enabled
transactions
undo_retention
user_dump_dest
utl_file_dir
Please Help me
Thanks a lot, Luca
Hi Luca,
are the test and production systems nearly identical (same OS, same HW platform, same software version, same release)?
you're using external tables - is the speed of those drives identical?
have you analyzed the schema with the same statement? Could you send me the statement?
do you have system statistics?
have you tested the statement in an environment that is nearly like production (concurrent users etc.)?
Could you send me the top 5 wait events from the Statspack report?
Are the data in production and test identical? No data changed? No index dropped? No additional index? Are all tables and indexes analyzed?
Regards
Marc -
Response time of a function module
Hi Friends,
I'm creating a custom program that calls a BAPI which exists on another server.
Now I want to record the response time of the BAPI after placing the request, and display the time for the
corresponding record in the output.
Is there a procedure to record the response time in the program itself? I'm not asking about the transactions where we can
measure performance.
Moderator message - please do not ask for or promise rewards.
Thanks & Warm Regards
Krishna
Edited by: Rob Burbank on Oct 1, 2009 8:50 AM

Hello,
The correct method, as pointed out in previous posts, is with GET RUN TIME. Note that this returns time in microseconds, so you may want to scale this up to a larger unit.
As to the usefulness: it is perfectly legitimate to include time measurements in your program as long as this has a clear purpose, e.g. comparing response times between different remote systems, identifying erratic response times, etc. In that case I would advise you to also include some other measurement, e.g. the amount of data processed (whether you can do this and how depends on the BAPI, e.g. you could use the number of lines in the returned internal tables as a metric). If your time measurement creates separate log/trace records, then it would also be a good idea to have the option to enable and disable the time measurement.
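GET RUN TIME itself is ABAP-only, but the same pattern is easy to illustrate in Java: take a high-resolution timestamp around the remote call and scale the difference up to a coarser unit. In this hedged sketch the Runnable is a hypothetical stand-in for the actual BAPI invocation:

```java
// Hedged sketch of the GET RUN TIME pattern: measure elapsed wall-clock
// time around a remote call and scale nanoseconds up to milliseconds.
public class CallTimer {

    // Returns the elapsed time of the call in milliseconds.
    static long timeMillis(Runnable remoteCall) {
        long start = System.nanoTime();
        remoteCall.run();
        return (System.nanoTime() - start) / 1_000_000L; // ns -> ms
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            // Simulated remote call taking roughly 50 ms.
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
        System.out.println("elapsed: " + elapsed + " ms");
    }
}
```

Alongside the elapsed time you would log the extra metric discussed above (e.g. returned table line count) so erratic calls can be put in context.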
Regards,
Mark -
How to increase built-in cisco vpn peer response timer?
Hi,
I use the OS X built-in Cisco VPN client to connect to my work VPN.
The VPN server, or perhaps the RADIUS server, takes a long time to return a response. OS X always tries for 10 seconds, then drops the connection when there is no response from the remote peer. When I use the Cisco VPN client on a Windows machine, the client has a setting that allows 90 seconds for the remote peer to respond, and it works fine.
I prefer OS X as my primary working environment, so I need to fix this problem. My question is: how do I increase the phase 1 & 2 timers for VPN under 10.6.7? I have tried to change the racoon.conf phase 1 & phase 2 timers, but it made no difference; OS X only tries for 10 seconds.
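For reference, the phase 1/phase 2 negotiation timeouts live in racoon's timer section; a sketch with illustrative values is below. Note that the built-in OS X client generates its own racoon configuration when the connection starts, which may explain why manual edits to /etc/racoon/racoon.conf appear to have no effect.

```
# racoon.conf -- timer section (values illustrative, not a recommendation)
timer {
    counter 5;          # maximum retransmission attempts
    interval 20 sec;    # retransmission interval
    persend 1;          # packets per send
    phase1 90 sec;      # timeout for phase 1 (ISAKMP SA) negotiation
    phase2 60 sec;      # timeout for phase 2 (IPsec SA) negotiation
}
```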
Any ideas? (besides asking the people at work to fix the server or RADIUS problem)
Thanks
jmsherry123

I have the same problem ... the certificate is imported in the keychain, but I can't select it when setting up the VPN connection.
-
ISE 1.2 Auth Avg Response Time
Hi Guys,
We have recently moved to ISE 1.2 (distributed deployment on UCS C220 blades) from ACS 5.x. We are seeing an average auth response time of ~150 ms on each of the 4 PSN nodes and wonder whether this is too slow.
Is this normal, or should we expect a much lower average response time for those RADIUS authentications? What are the typical values you have observed in this sort of deployment?
Any input would be much appreciated.
Rasika

Hi,
Where did you get your information from? Is it from the ISE Authentication Report Summary? If so, which of the Average responses are you concerned about? Authentications By Day, Identity Group, Identity Store, Allowed Protocol etc.
In my network the average response for PEAP is 121 ms; authentication by day is 74 ms. Then again, my network may be smaller than yours. Also, I have an appliance and not a virtual server. In my opinion, 150 ms is not enough for users to notice. If the authentication response gets close to 300 ms, then you have an issue.
If you have a very large network like a University Campus, then 150ms is OK. -
Report to calculate avg response time for a transaction using ST03.
Hi Abap Gurus ,
I want to develop a report which calculates the average response time (ST03) for a transaction on an hourly basis.
I have read many threads in which users post which tables/function modules to use to extract data such as dialog steps and total response time.
I am sure many of you have created a report like this; I would appreciate it if you could share pseudocode for it. Any help regarding this is highly appreciated...
Cheers,
Karan

http://jakarta.apache.org/jmeter/
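Whatever the data source (ST03 data, the workload-statistics function modules mentioned in those threads, or JMeter logs), the hourly aggregation itself is simple. A hedged Java sketch, where each dialog-step record is assumed to have already been reduced to a pair of (hour of day, response time in ms); the extraction itself is not shown:

```java
import java.util.*;

// Hedged sketch: average response time per hour from dialog-step records.
// Each record is already reduced to { hourOfDay, responseTimeMs }.
public class HourlyAvg {

    static Map<Integer, Double> averageByHour(List<int[]> steps) {
        Map<Integer, long[]> acc = new TreeMap<>(); // hour -> {sumMs, count}
        for (int[] s : steps) {
            long[] a = acc.computeIfAbsent(s[0], k -> new long[2]);
            a[0] += s[1]; // accumulate response time
            a[1]++;       // count dialog steps
        }
        Map<Integer, Double> avg = new TreeMap<>();
        acc.forEach((hour, a) -> avg.put(hour, (double) a[0] / a[1]));
        return avg;
    }

    public static void main(String[] args) {
        List<int[]> steps = List.of(
            new int[] { 9, 100 }, new int[] { 9, 200 }, new int[] { 10, 50 });
        averageByHour(steps).forEach(
            (h, a) -> System.out.println(String.format("%02d:00  avg %.1f ms", h, a)));
    }
}
```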
-
Help required in optimizing the query response time
Hi,
I am working on an application which uses a JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data from another table in another database.
The first table can have a maximum of 6 million rows, but the second table has only around 9,000 rows.
My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate over the result set and query the second table, each query takes around 4 milliseconds.
The second query's selection criterion is to find the value within a range.
for example my_table ( varchar2 column1, varchar2 start_range, varchar2 end_range);
My first query returns a result which is then used to select with the following query:
select column1 from my_table where start_range < my_value and end_range > my_value;
I have created an index on start_range and end_range. This query takes around 4 milliseconds, which I think is too much.
I am using a PreparedStatement for the second query loop.
Can someone suggest how I can improve the query response time?
Regards,
Shyam

Try the code below.
Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets back in Java. There are thousands of samples available on the net.
I have written a sample of the DB-side code for the same interaction.
Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
Good luck.
DROP TYPE idlist;

CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
/

CREATE OR REPLACE PACKAGE mypkg1
AS
   PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
END mypkg1;
/

CREATE OR REPLACE PACKAGE BODY mypkg1
AS
   PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
   AS
   BEGIN
      DBMS_OUTPUT.put_line (myval_list.COUNT);

      FOR x IN (SELECT object_name, object_id, myvalue
                  FROM user_objects a,
                       (SELECT COLUMN_VALUE myvalue
                          FROM TABLE (myval_list)) b
                 WHERE a.object_id < b.myvalue)
      LOOP
         DBMS_OUTPUT.put_line (   x.object_name
                               || ' - '
                               || x.object_id
                               || ' - '
                               || x.myvalue);
      END LOOP;

      -- Return the matching rows through the OUT cursor as well.
      OPEN orefcur FOR
         SELECT object_name, object_id
           FROM user_objects a
          WHERE EXISTS (SELECT 1
                          FROM TABLE (myval_list) t
                         WHERE a.object_id < t.COLUMN_VALUE);
   END;
END mypkg1;
/

Testing the code above. Make sure DBMS output is ON.

DECLARE
   a      idlist;
   refc   sys_refcursor;
BEGIN
   SELECT x.nu
     BULK COLLECT INTO a
     FROM (SELECT 5000 nu
             FROM DUAL) x;

   mypkg1.get_list (a, refc);
END;
/
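An alternative worth considering, since the range table holds only ~9,000 rows: load it once over JDBC and resolve each of the millions of driving values in memory instead of issuing one 4 ms query per row. A hedged Java sketch, assuming the ranges do not overlap (class and field names are illustrative; the JDBC load itself is not shown):

```java
import java.util.*;

// Hedged sketch: cache the small range table in memory and resolve each
// value with a binary search instead of a per-row SQL query.
// Assumes the ranges do not overlap.
public class RangeLookup {

    static final class Range {
        final String start, end, column1;
        Range(String start, String end, String column1) {
            this.start = start; this.end = end; this.column1 = column1;
        }
    }

    private final List<Range> ranges;

    RangeLookup(List<Range> rows) {
        ranges = new ArrayList<>(rows);
        ranges.sort(Comparator.comparing(r -> r.start)); // order by start_range
    }

    // Same predicate as the SQL: start_range < value AND end_range > value.
    String lookup(String value) {
        int lo = 0, hi = ranges.size() - 1, best = -1;
        while (lo <= hi) { // binary search for the last start < value
            int mid = (lo + hi) >>> 1;
            if (ranges.get(mid).start.compareTo(value) < 0) {
                best = mid;
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        if (best < 0) return null;
        Range r = ranges.get(best);
        return r.end.compareTo(value) > 0 ? r.column1 : null;
    }

    public static void main(String[] args) {
        RangeLookup rl = new RangeLookup(List.of(
            new Range("1000", "1999", "A"), new Range("2000", "2999", "B")));
        System.out.println(rl.lookup("1500")); // falls inside the first range
    }
}
```

With non-overlapping ranges each lookup is O(log n), so the per-row cost drops from a network round-trip to a few in-memory comparisons.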
Vishal V. -
Strange response time for an RFC call viewed from STAD on R/3 4.7
Hello,
On our R/3 4.7 production system, we have a lot of external RFC calls that execute an ABAP function module. There are 70,000 of these calls per day.
The mean response time for this RFC call is 35 ms.
From time to time a few of them (maybe 10 to 20 per day) take much longer.
I am currently analysing with STAD one of these long calls, which lasted 10 seconds!
Here is the info from STAD
Response time : 10 683 ms
Total time in workprocess : 10 683 ms
CPU time : 0 ms
RFC+CPIC time : 0 ms
Wait for work process 0 ms
Processing time 10.679 ms
Load time 1 ms
Generating time 0 ms
Roll (in) time 0 ms
Database request time 3 ms
Enqueue time 0 ms
Number Roll ins 0
Roll outs 0
Enqueues 0
Load time Program 1 ms
Screen 0 ms
CUA interf. 0 ms
Roll time Out 0 ms
In 0 ms
Wait 0 ms
Frontend No.roundtrips 0
GUI time 0 ms
Net time 0 ms
There is nearly no abap processing in the function module.
I really don't understand what this 10 679 ms of processing time is, especially with 0 ms CPU time and 0 ms wait time.
A usual fast RFC call gives this data
23 ms response time
16 ms cpu time
14 ms processing time
1 ms load time
8 ms Database request time
Does anybody have an idea of what is the system doing during the 10 seconds processing time ?
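For clarity, the processing time here is derived, not measured: STAD computes it as response time minus every separately measured component, so it absorbs anything the instrumentation cannot see. A small sketch of that decomposition, reproducing the figures above:

```java
// Hedged sketch: STAD's processing time as response time minus the
// measured components. Whatever is left over counts as "processing",
// even if the work process was in fact blocked outside ABAP.
public class StadGap {

    static long processingMs(long response, long waitForWp, long load,
                             long generating, long rollIn, long db, long enqueue) {
        return response - waitForWp - load - generating - rollIn - db - enqueue;
    }

    public static void main(String[] args) {
        // Figures from the slow call above: 10 683 ms response, 1 ms load,
        // 3 ms database request time, everything else 0.
        System.out.println(processingMs(10683, 0, 1, 0, 0, 3, 0) + " ms"); // -> 10679 ms
    }
}
```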
Regards,
Olivier

Hi Graham,
Thank you for your input and thoughts.
I will have to investigate RZ23N and RZ21 because I'm not used to using them.
I'm used to investigating performance problems with ST03 and STAD.
My system is R/3 4.7 WAS 6.20. ABAP and BASIS 43
Kernel 6.40 patch level 109
We know these are old patch levels, but we are not allowed to stop this system for an upgrade "if it's not broken", as it is used 24/7.
I'm nearly sure that the problem is not an RFC issue, because I've found other slow dialog steps for web service calls and even for a SAPSYS technical dialog step of type <no buffer>. (What is this?)
This SAPSYS dialog step has the following data :
User : SAPSYS
Task type : B
Program : <no buffer>
CPU time 0 ms
RFC+CPIC time 0 ms
Total time in workprocs 5.490 ms
Response time 5.490 ms
Wait for work process 0 ms
Processing time 5.489 ms
Load time 0 ms
Generating time 0 ms
Roll (in+wait) time 0 ms
Database request time 1 ms ( 3 Database requests)
Enqueue time 0 ms
All the hundreds of other SAPSYS <no buffer> steps have response times of less than 5 ms.
It looks like the system was frozen during 5 seconds...
Here are some extracts from STAD of another case from last saturday.
11:00:03 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 13 13 0 0
11:00:03 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 19 19 0 16
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 77 77 0 16
11:00:04 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 14 14 0 16
11:00:05 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 12 12 0 16
11:00:05 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 53 53 0 0
11:00:06 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 76 76 0 0
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 20 20 0 31
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 12 12 0 0
11:00:06 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 13 13 0 0
11:00:06 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 34 34 0 16
11:00:07 bt1sqkvh_PLG_18 RFC R 0 USER_LECDIS 15 15 0 0
11:00:07 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 13 13 0 16
11:00:07 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 19 19 0 0
11:00:07 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 23 13 10 0
11:00:07 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 38 38 0 0
11:00:08 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 20 20 0 16
11:00:09 bt1sqkvg_PLG_18 RFC R 0 USER_LECDIS 9 495 9 495 0 16
11:00:09 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 9 404 9 404 0 0
11:00:09 bt1sqkvh_PLG_18 RFC R 1 USER_LECKIT 9 181 9 181 0 0
11:00:10 bt1fsaplpr02_PLG RFC R 3 USER_LECDIS 23 23 0 0
11:00:10 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 8 465 8 465 0 16
11:00:18 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 18 18 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 89 89 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 75 75 0 0
11:00:18 bt1sqkvh_PLG_18 RFC R 1 USER_LECDIS 43 43 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 1 USER_LECDIS 32 32 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 1 USER_LECDIS 15 15 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 14 14 0 0
11:00:18 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 69 69 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 49 49 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 5 USER_LECKIT 19 19 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 15 15 0 16
The load at that time was very light, with only a few jobs starting:
11:00:08 bt1fsaplpr02_PLG RSCONN01 B 31 USER_BATCH 39
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 31 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG /SDF/RSORAVSH B 33 USER_BATCH 64
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 33 USER_BATCH 43
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 34 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 35 USER_BATCH 37
11:00:09 bt1fsaplpr02_PLG RVV50R10C B 34 USER_BATCH 60
11:00:09 bt1fsaplpr02_PLG ZLM_HDS_IS_PURGE_RESERVATION B 35 USER_BATCH 206
I'm now also wondering about the message server, as there is load balancing for each RFC call?
Regards,
Olivier