Response time difference
Hello:
I ran two experiments with Oracle 10g on a 64-bit AMD Windows 2003 server.
Experiment 1:
Post 10000 records, then read records 1 to 10000 in batches of 100 and note the average time.
Next, with 10000 records already in the database, post another 10000 records and read them the same way.
This is repeated up to 200000 records.
The average getEvents time increased by roughly 20 ms for every additional 10000 records, as shown below:
records --> time taken (ms)
0-10000 --> 29
10000-20000 --> 47
20000-30000 --> 69
30000-40000 --> 101
160000-170000 --> 359
170000-180000 --> 384
180000-190000 --> 414
190000-200000 --> 429
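The measurement loop described above can be sketched as follows. This is not the original test harness; the 10000-record window and the 100-record step come from the description above, while the class name and the stubbed query call are illustrative:

```java
import java.util.function.LongBinaryOperator;

public class BatchTimer {
    // Average per-call time (ms) of reading one 10000-record window in
    // blocks of 100, as in experiment 1. The getEvents argument stands
    // in for the real getEvents(start, end, ptid) JDBC call and must
    // return the elapsed milliseconds for one 100-record read.
    static double averageWindowMillis(long windowStart, LongBinaryOperator getEvents) {
        long total = 0;
        int calls = 0;
        for (long start = windowStart; start < windowStart + 10_000; start += 100) {
            total += getEvents.applyAsLong(start, start + 100);
            calls++;                      // 100 calls per window
        }
        return (double) total / calls;
    }

    public static void main(String[] args) {
        // Stub query that always takes 1 ms, just to exercise the loop.
        System.out.println(averageWindowMillis(0, (s, e) -> 1));
    }
}
```

In the real harness the stub would be replaced by the JDBC call shown later in the post.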
Experiment 2:
Now, with all 200000 records already in the database table, just run the get query in intervals of 100 for each batch of 10000. Surprisingly, since the table size is now always 200000, the response time stays close to 500 ms, as shown below:
records --> time taken (ms)
0-10000 --> 502
10000-20000 --> 509
20000-30000 --> 500
30000-40000 --> 500
160000-170000 --> 484
170000-180000 --> 484
180000-190000 --> 476
190000-200000 --> 480
I am wondering why there is a difference when the table is larger. Any pointers on database optimization would be very useful.
Thanks,
Ravi
The SQL query is:
SELECT EVENT_ID, TYPE, VALUE, CREATE_DATE, CONTEXT_OBJ, PROCESS_INSTANCE_ID, PROCESS_TEMPLATE_ID FROM BIZEVENT EVT WHERE EVENT_ID > ? AND EVENT_ID <= ? AND ((TYPE = 'BizLogic' AND VALUE = 'W_ACTIVATED' AND PROCESS_TEMPLATE_ID = ?) OR (TYPE = 'START' AND VALUE = 'START') OR (TYPE = 'END' AND VALUE = 'END')) ORDER BY EVENT_ID
The BIZEVENT table has EVENT_ID as the primary key.
The code to read a block of one hundred records is below:
public static final long getEvents(long start, long end, long ptid) {
    long starttime = System.currentTimeMillis();
    PreparedStatement pstmt = null;
    Connection conn = null;
    ResultSet rs = null;
    try {
        // dsource is a DataSource and is already cached.
        conn = dsource.getConnection();
        pstmt = conn.prepareStatement(query);
        pstmt.setLong(1, start);
        pstmt.setLong(2, end);
        pstmt.setLong(3, ptid);
        rs = pstmt.executeQuery();
        // Iterate the result set so the row fetch is included in the
        // measured time; executeQuery() alone need not fetch any rows.
        while (rs.next()) {
            // process the row
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        try { if (rs != null) rs.close(); } catch (Exception err) {}
        try { if (pstmt != null) pstmt.close(); } catch (Exception err) {}
        try { if (conn != null) conn.close(); } catch (Exception err) {}
    }
    long finishtime = System.currentTimeMillis();
    return (finishtime - starttime);
}
The difference between the start and end IDs is always 100.
Thanks for your time,
Similar Messages
-
JDBC Interaction response time difference in j2sdk1.4 and jdk1.3
Hi All
I am working on performance issues regarding response time. I upgraded my system from JDK 1.3 to J2SDK 1.4, expecting a gain in response time with J2SDK 1.4, but to my surprise the results vary with my application: J2SDK 1.4 takes longer to execute the application when it has to deal with the database. I am using Oracle 9i as the backend database server.
If anybody has an idea why J2SDK 1.4 shows a higher response time than JDK 1.3 when interacting with the database, please let me know.
Thanks in advance.
You may use the latest JDBC driver - http://www.oracle.com/technology/tech/java/sqlj_jdbc/index.html
And check the documentation for the new features and changes between jdbc drivers from JDBC Developer's Guide and Reference - http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/toc.htm
Best regards. -
Response time difference between copy and original
I made an export of my database like this:
EXP system/manager@mybase owner=myowner file=exp_mybase_myowner.dmp
and an import in a new session (on the same server):
IMP system/manager@mybase2 full=y file=exp_mybase_myowner.dmp
To increase the performance of my two databases, I dropped all indexes and recreated them.
But the response time differs significantly between the databases.
How can I solve this problem?
1. Analyze all the tables that are used in your query with the command:
ANALYZE TABLE <table_name> COMPUTE STATISTICS;
You can also use the package DBMS_STATS.GATHER_DATABASE_STATS to calculate the statistics for all objects in the DB.
2. Have you checked the DB_BLOCK_SIZE for both DBs? Is it the same? Command: SHOW PARAMETER DB_BLOCK
3. Is the SGA size the same for both DBs? Command: SHOW SGA
4. Was there sufficient RAM to create the SGA? I hope you are not using swap space for the SGA. -
Difference between max response time of operation and service.
Hi All,
I am using ALSB to implement SOA. While monitoring the statistics for a proxy service based on a WSDL (web service), I noticed that the max response time of a service operation differs from the max response time of the service itself. For example, the max response time of one operation is 3462 ms, whereas the max response time of the service is 4467 ms.
Can anyone help me identify why there is a difference between these two?
I have also noticed this inconsistency. There is probably no explanation in the OSB (ALSB) documentation, so I can only guess. Maybe OSB starts measuring response time for the service earlier than for an operation. The reason could be that OSB can identify the service (based on the endpoint) earlier than the operation (based on request data, which has to be parsed first). This could cause a difference in response times. However, this difference should be much smaller than a second (your case). In my case it is usually a matter of a few milliseconds.
Please remember that all of above is just my imagination. :-) -
Significant difference in response times for same query running on Windows client vs database server
I have a query which is taking a long time to return the results using the Oracle client.
When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
When I run the same query on a Windows client it completes in 47 minutes.
Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
In both cases the query plans are the same.
The query and plan are shown below:
{code}
SQL> explain plan
2 set statement_id = 'SLOW'
3 for
4 SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
5 FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
6 ;
Explained.
SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 2852K| 46M| | 69851 (1)|
| 1 | HASH UNIQUE | | 2852K| 46M| 153M| 69851 (1)|
|* 2 | TABLE ACCESS FULL| DOCUMENTS | 2852K| 46M| | 54063 (1)|
{code}
Are there any configuration changes that can be made on the Oracle client or database to improve the response time of the query when it runs from the client?
The version on the database server is 10.2.0.1.0
The version of the oracle client is also 10.2.0.1.0
I am happy to provide any further information if required.
Thank you in advance.
I have a query which is taking a long time to return the results using the Oracle client.
When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
When I run the same query on a Windows client it completes in 47 minutes.
There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
A client can choose when to FETCH query results. In sql developer (or toad) I can choose to get 10 rows at a time. Until I choose to get the next set of 10 rows NO rows will be returned from the server to the client; That query might NEVER complete.
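The impact of the fetch size can be put into numbers with simple arithmetic. The row count below is the estimate from the plan above; the 10-rows-per-round-trip figure (the usual Oracle JDBC default) and the 5 ms latency are illustrative assumptions:

```java
public class FetchMath {
    // Estimated number of client/server round trips needed to fetch
    // all rows, given a fetch size (rows per round trip).
    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;   // ceiling division
    }

    public static void main(String[] args) {
        long rows = 2_852_000;                       // rows estimated by the plan
        // With a small fetch size every batch pays the network latency:
        System.out.println(roundTrips(rows, 10));    // default-ish fetch size
        System.out.println(roundTrips(rows, 500));   // larger fetch size
        // At an assumed 5 ms latency per round trip, the difference alone
        // is (285200 - 5704) * 5 ms, roughly 23 minutes of network wait.
    }
}
```

Setting a larger fetch size on the client (e.g. `Statement.setFetchSize`) is therefore one of the first things to try when the same query is fast on the server and slow over the network.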
You may get the same results depending on the client you are using. Post your question in a forum for whatever client you are using. -
Response time of a query in 2 different environments
Hi guys, Luca speaking; sorry for the badly written English.
The question is:
The same query on the same table - same definition, same number of rows, defined on the same kind of tablespace; the tables are analyzed.
*) In Benchmark the query has good execution times; the execution plan is really good.
*) In Production the execution plan is not so good; the response times aren't comparable (hours vs. seconds).
#### The execution plans are different ####
#### The stats are the same ####
This is the table storico.FLUSSO_ASTCM_INC A, with these stats in Benchmark:
chk Owner Name Partition Subpartition Tablespace NumRows Blocks EmptyBlocks AvgSpace ChainCnt AvgRowLen AvgSpaceFLBlocks NumFLBlocks UserStats GlobalStats LastAnalyzed SampleSize Monitoring Status
True STORICO FLUSSO_ASTCM_INC TBS_DATA 2861719 32025 0 0 0 74 NO YES 10/01/2006 15.53.43 2861719 NO Normal, Successful Completion: 10/01/2006 16.26.05
In Production the stats are the same.
The other table is an external table.
The only difference I have noticed so far concerns the tablespace the table is defined on:
Production
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
Benchmark
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
I'm studying it at the moment.
What do I have to check to obtain the same execution plan (without changing the query)?
This is the query:
SELECT
'test query',
sysdate,
storico.tc_scarti_seq.NEXTVAL,
NULL, --ROW_ID
-- A.AZIONE,
'I',
A.CODE_PREF_TCN,
A.CODE_NUM_TCN,
'ADSL non presente su CRM' ,
-- a.AZIONE
'I'
|| ';' || a.CODE_PREF_TCN
|| ';' || a.CODE_NUM_TCN
|| ';' || a.DATA_ATVZ_CMM
|| ';' || a.CODE_PREF_DSR
|| ';' || a.CODE_NUM_TFN
|| ';' || a.DATA_CSSZ_CMM
|| ';' || a.TIPO_EVENTO
|| ';' || a.INVARIANTE_FONIA
|| ';' || a.CODE_TIPO_ADSL
|| ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
|| ';' || a.TIPO_RICHIESTA_CESSAZIONE
|| ';' || a.ROW_ID_ATTIVAZIONE
|| ';' || a.ROW_ID_CESSAZIONE
FROM storico.FLUSSO_ASTCM_INC A
WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
Output of SET AUTOTRACE TRACEONLY EXPLAIN in Production (ESERCIZIO):
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
Output of SET AUTOTRACE TRACEONLY EXPLAIN in Benchmark:
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 By
tes=291895338)
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=29189533 :Q810002
8)
3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 :Q810000
Card=2861719 Bytes=183150016)
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q810001
t=2 Card=1 Bytes=38)
2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) US
E_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
3 PARALLEL_FROM_SERIAL
4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
EF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
The differences in the init.ora are in these parameters - could they influence the optimizer enough that the execution plans end up so different?
background_dump_dest
cpu_count
db_file_multiblock_read_count
db_files
db_32k_cache_size
dml_locks
enqueue_resources
event
fast_start_mttr_target
fast_start_parallel_rollback
hash_area_size
log_buffer
log_parallelism
max_rollback_segments
open_cursors
open_links
parallel_execution_message_size
parallel_max_servers
processes
query_rewrite_enabled
remote_login_passwordfile
session_cached_cursors
sessions
sga_max_size
shared_pool_reserved_size
sort_area_retained_size
sort_area_size
star_transformation_enabled
transactions
undo_retention
user_dump_dest
utl_file_dir
Please help me.
Thanks a lot,
Luca
Hi Luca,
Are test and production nearly identical (same OS, same HW platform, same software version, same release)?
You're using external tables. Is the speed of those drives identical?
Have you analyzed the schema with the same statement? Could you send me the statement?
Do you have system statistics?
Have you tested the statement in an environment that is nearly like production (concurrent users etc.)?
Could you send me the top 5 wait events from the Statspack report?
Are the data in production and test identical? No data changed? No index dropped? No additional index? Are all tables and indexes analyzed?
Regards
Marc -
How to increase built-in cisco vpn peer response timer?
Hi,
I use the OS X built-in Cisco VPN client to connect to my work VPN.
The VPN server, or perhaps the RADIUS server, takes a long time to return a response. OS X always tries for 10 seconds, then drops the connection when there is no response from the remote peer. When I use the Cisco VPN client on a Windows machine, the client has a setting that allows 90 seconds for the remote peer to respond, and it works fine there.
I prefer to use OS X as my primary working environment, so I need to fix this problem. My question is how to increase the phase 1 & 2 timers for VPN under 10.6.7. I have tried changing the phase 1 & phase 2 timers in racoon.conf, but it made no difference. OS X only tries for 10 seconds.
Any ideas? (besides asking work people to fix the server or radius problem)
Thanks
jmsherry123
I have the same problem... the certificate is imported in Keychain, but I can't select it when setting up the VPN connection.
-
Explain plan - lower cost but higher response time in 11g compared to 10g
Hello,
I have a strange scenario where I'm migrating a DB from a standalone Sun FS running the 10g RDBMS to a 2-node Sun/ASM 11g RAC environment. The issue is with the response time of queries -
In 11g Env:
SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
LAST_ANALYZED NUM_ROWS
11-08-2012 18:21:12 3413956
Elapsed: 00:00:00.30
In 10g Env:
SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
LAST_ANAL NUM_ROWS
07-NOV-12 3502160
Elapsed: 00:00:00.04
If you look at the response times, even a simple query on dba_tables takes ~8 times as long. Any ideas what might be causing this? I have compared the explain plans and they are exactly the same; moreover, the cost is lower in the 11g env than in the 10g env, but the response time is still higher.
BTW - I'm running the queries directly on the server, so no network latency is in play here.
Thanks in advance
aBBy.
*11g Env:*
PLAN_TABLE_OUTPUT
Plan hash value: 4147636274
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1104 | 376K| 394 (1)| 00:00:05 |
| 1 | SORT ORDER BY | | 1104 | 376K| 394 (1)| 00:00:05 |
| 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1104 | 376K| 393 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1136 | | 15 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
15 rows selected.
*10g Env:*
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4147636274
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1137 | 373K| 389 (1)| 00:00:05 |
| 1 | SORT ORDER BY | | 1137 | 373K| 389 (1)| 00:00:05 |
| 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1137 | 373K| 388 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1137 | | 15 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
15 rows selected.
The query used is:
explain plan for
select
NCP_DETAIL_ID ,
NCP_ID ,
STATUS_ID ,
FIBER_NODE ,
NODE_DESC ,
GL ,
FTA_ID ,
OLD_BUS_ID ,
VIRTUAL_NODE_IND ,
SERVICE_DELIVERY_TYPE ,
HHP_AUDIT_QTY ,
COMMUNITY_SERVED ,
CMTS_CARD_ID ,
OPTICAL_TRANSMITTER ,
OPTICAL_RECEIVER ,
LASER_GROUP_ID ,
UNIT_ID ,
DS_SLOT ,
DOWNSTREAM_PORT_ID ,
DS_PORT_OR_MOD_RF_CHAN ,
DOWNSTREAM_FREQ ,
DOWNSTREAM_MODULATION ,
UPSTREAM_PORT_ID ,
UPSTREAM_PORT ,
UPSTREAM_FREQ ,
UPSTREAM_MODULATION ,
UPSTREAM_WIDTH ,
UPSTREAM_LOGICAL_PORT ,
UPSTREAM_PHYSICAL_PORT ,
NCP_DETAIL_COMMENTS ,
ROW_CHANGE_IND ,
STATUS_DATE ,
STATUS_USER ,
MODEM_COUNT ,
NODE_ID ,
NODE_FIELD_ID ,
CREATE_USER ,
CREATE_DT ,
LAST_CHANGE_USER ,
LAST_CHANGE_DT ,
UNIT_ID_IP ,
US_SLOT ,
MOD_RF_CHAN_ID ,
DOWNSTREAM_LOGICAL_PORT ,
STATE
from markethealth.NCP_DETAIL_TAB
WHERE UNIT_ID = :B1
ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
This is the query used for Query 1.
Stats differences are:
1. The row count differs by approximately 90K more rows in the 10g env.
2. The RAC env has 4 additional columns (excluded from the select statement for analysis purposes).
3. Gather stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g. -
Response time of query utterly upside down because of small where clause change
Hello,
I'm wondering why a small change to a where clause in a query has a dramatic impact on its response time.
Here is the query, with its plan and a few details:
select * from (
SELECT xyz_id, time_oper, ...
FROM (SELECT
d.xyz_id xyz_id,
TO_CHAR (di.time_operation, 'DD/MM/YYYY') time_oper,
di.time_operation time_operation,
UPPER (d.delivery_name || ' ' || d.delivery_firstname) custname,
d.ticket_language ticket_language, d.payed,
dsum.delivery_mode delivery_mode,
d.station_delivery station_delivery,
d.total_price total_price, d.crm_cust_id custid,
d.bene_cust_id person_id, d.xyz_num, dpe.ers_pnr ers_pnr,
d.delivery_name,
TO_CHAR (dsum.first_travel_date, 'DD/MM/YYYY') first_traveldate,
d.crm_company custtype, UPPER (d.client_name) partyname,
getremark(d.xyz_num) remark,
d.client_app, di.work_unit, di.account_unit,
di.distrib_code,
UPPER (d.crm_name || ' ' || d.crm_firstname) crm_custname,
getspecialproduct(di.xyz_id) specialproduct
FROM xyz d, xyz_info di, xyz_pnr_ers dpe, xyz_summary dsum
WHERE d.cancel_state = 'N'
-- AND d.payed = 'N'
AND dsum.delivery_mode NOT IN ('DD')
AND dsum.payment_method NOT IN ('AC', 'AG')
AND d.xyz_blocked IS NULL
AND di.xyz_id = d.xyz_id
AND di.operation = 'CREATE'
AND dpe.xyz_id(+) = d.xyz_id
AND EXISTS (SELECT 1
FROM xyz_ticket dt
WHERE dt.xyz_id = d.xyz_id)
AND dsum.xyz_id = di.xyz_id
ORDER BY di.time_operation DESC)
WHERE ROWNUM < 1002
) view
WHERE view.DISTRIB_CODE in ('NS') AND view.TIME_OPERATION > TO_DATE('20/5/2013', 'dd/MM/yyyy')
plan with "d.payed = 'N'" (no rows, *extremely* slow):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 4166K| 39354 (1)| 00:02:59 |
|* 1 | VIEW | | 1001 | 4166K| 39354 (1)| 00:02:59 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1001 | 4166K| 39354 (1)| 00:02:59 |
| 4 | NESTED LOOPS OUTER | | 1001 | 130K| 39354 (1)| 00:02:59 |
| 5 | NESTED LOOPS SEMI | | 970 | 111K| 36747 (1)| 00:02:47 |
| 6 | NESTED LOOPS | | 970 | 104K| 34803 (1)| 00:02:39 |
| 7 | NESTED LOOPS | | 970 | 54320 | 32857 (1)| 00:02:30 |
|* 8 | TABLE ACCESS BY INDEX ROWID| XYZ_INFO | 19M| 704M| 28886 (1)| 00:02:12 |
| 9 | INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5 | 36967 | | 296 (2)| 00:00:02 |
|* 10 | TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY | 1 | 19 | 2 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | SB11_DSMM_XYZ_UK | 1 | | 1 (0)| 00:00:01 |
|* 12 | TABLE ACCESS BY INDEX ROWID | XYZ | 1 | 54 | 2 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | XYZ_PK | 1 | | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | DNTI_NI1 | 32M| 249M| 2 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | XYZ_PNR_ERS | 1 | 15 | 4 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | DNPE_XYZ | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
2 - filter(ROWNUM<1002)
8 - filter("DI"."OPERATION"='CREATE')
10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
12 - filter("D"."PAYED"='N' AND "D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
^^^^^^^^^^^^^^
13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
plan without "d.payed = 'N'" (+/- 450 rows, less than two minutes):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 4166K| 58604 (1)| 00:04:27 |
|* 1 | VIEW | | 1001 | 4166K| 58604 (1)| 00:04:27 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1002 | 4170K| 58604 (1)| 00:04:27 |
| 4 | NESTED LOOPS OUTER | | 1002 | 130K| 58604 (1)| 00:04:27 |
| 5 | NESTED LOOPS SEMI | | 1002 | 115K| 55911 (1)| 00:04:14 |
| 6 | NESTED LOOPS | | 1476 | 158K| 52952 (1)| 00:04:01 |
| 7 | NESTED LOOPS | | 1476 | 82656 | 49992 (1)| 00:03:48 |
|* 8 | TABLE ACCESS BY INDEX ROWID| XYZ_INFO | 19M| 704M| 43948 (1)| 00:03:20 |
| 9 | INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5 | 56244 | | 449 (1)| 00:00:03 |
|* 10 | TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY | 1 | 19 | 2 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | AAAA_DSMM_XYZ_UK | 1 | | 1 (0)| 00:00:01 |
|* 12 | TABLE ACCESS BY INDEX ROWID | XYZ | 1 | 54 | 2 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | XYZ_PK | 1 | | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | DNTI_NI1 | 22M| 168M| 2 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | XYZ_PNR_ERS | 1 | 15 | 4 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | DNPE_XYZ | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
2 - filter(ROWNUM<1002)
8 - filter("DI"."OPERATION"='CREATE')
10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
12 - filter("D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
XYZ.PAYED values breakdown:
P COUNT(1)
Y 12202716
N 9430207
tables nb of records:
TABLE_NAME NUM_ROWS
XYZ 21606776
XYZ_INFO 186301951
XYZ_PNR_ERS 9716471
XYZ_SUMMARY 21616607
Everything inside the "select * from (...) view" parentheses is defined in a view. We've noticed that the line "AND d.payed = 'N'" (commented above) is the guilty clause: the query takes one or two seconds to return between 400 and 500 rows if this line is removed; when it is included, the response time switches to *hours* (sic!) and the result set is empty (no rows returned). The plan is exactly the same whether "d.payed = 'N'" is added or removed - same number of steps, access paths, join order etc.; only the Rows/Bytes/Cost column values change, as you can see.
We've found no other way to solve this perf issue than taking the "d.payed = 'N'" condition out of the view and applying it outside, along with view.DISTRIB_CODE and view.TIME_OPERATION.
But we would like to understand why such a small change on the XYZ.PAYED column turns everything upside down that much, and we'd like to be able to tell the optimizer to perform the payed = 'N' check by itself at the end, just like we did, through a hint if possible...
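One way to picture the mechanism (this is only an illustration of how a COUNT STOPKEY plan behaves, not Oracle's actual code): rows come out of the driving index in time_operation order and are tested against the filters, and the scan stops as soon as 1001 matching rows are found. A filter that matches plenty of rows lets the scan stop early; a filter such as payed = 'N' that happens to match nothing here forces the scan to visit every candidate row before giving up. The 19 million figure below is the row estimate from the plans above; the match rate is made up for illustration:

```java
import java.util.function.LongPredicate;

public class StopkeySim {
    // Scans ids in order, counting rows that pass the filter; stops
    // early once 'limit' rows match. Returns how many rows had to be
    // examined before stopping (or totalRows if it never stops).
    static long rowsExamined(long totalRows, long limit, LongPredicate filter) {
        long matched = 0, examined = 0;
        for (long id = 0; id < totalRows; id++) {
            examined++;
            if (filter.test(id) && ++matched == limit) break;
        }
        return examined;
    }

    public static void main(String[] args) {
        long total = 19_000_000;   // rows in the driving index scan
        // Filter matching ~1 row in 100: stops after roughly 100 * 1001 rows.
        System.out.println(rowsExamined(total, 1001, id -> id % 100 == 0));
        // Filter matching nothing: the whole scan is performed.
        System.out.println(rowsExamined(total, 1001, id -> false));
    }
}
```

This is why an identical-looking plan can run in seconds or in hours depending on the selectivity of one extra predicate.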
Has anybody encountered such behaviour before? Do you have any advice regarding the use of a hint to reach the same response time we got by moving the payed = 'N' condition outside of the view definition?
Thanks a lot in advance.
Regards,
Seb
I am really sorry I couldn't get back to this forum earlier...
Thanks to you all for your answers.
First I'd just like to correct a small mistake I made when writing "the query takes one or two seconds": I meant one or two *minutes*. Sorry.
> What table/columns are indexed by "DNTI_NI1"?
aaaa.dnti_ni1 is an index ON aaaa.xyz_ticket(xyz_id, ticket_status)
> And what are the indexes on xyz table?
Too many:
XYZ_ARCHIV_STATE_IND ARCHIVE_STATE
XYZ_BENE_CUST_ID_IND BENE_CUST_ID
XYZ_BENE_TTL_IND BENE_TTL
XYZ_CANCEL_STATE_IND CANCEL_STATE
XYZ_CLIENT_APP_NI CLIENT_APP
XYZ_CRM_CUST_ID_IND CRM_CUST_ID
XYZ_DELIVE_MODE_IND DELIVERY_MODE
XYZ_DELIV_BLOCK_IND DELIVERY_BLOCKED
XYZ_DELIV_STATE_IND DELIVERY_STATE
XYZ_XYZ_BLOCKED XYZ_BLOCKED
XYZ_FIRST_TRAVELDATE_IND FIRST_TRAVELDATE
XYZ_MASTER_XYZ_IND MASTER_XYZ_ID
XYZ_ORG_ID_NI ORG_ID
XYZ_PAYMT_STATE_IND PAYMENT_STATE
XYZ_PK XYZ_ID
XYZ_TO_PO_IDX TO_PO
XYZ_UK XYZ_NUM
For example, XYZ_CANCEL_STATE_IND on CANCEL_STATE seems superfluous to me, as the column may only contain Y or N (or be null)...
> Have you traced both cases to compare statistics? What differences did it reveal?
Yes, but it only shows more of *everything* (more table blocks accessed, the same for index blocks, for almost all objects involved) for the slowest query!
Grepping for WAIT in the two trace files of each statement and counting the object ID accesses shows that the quicker query requires far fewer I/Os; the slowest one needs many more blocks to be read overall (except for the indexes DNSG_NI1 or DNPE_XYZ, for example). Below I replaced obj# with the table/index name; the first column shows how many times the object was accessed in the trace file (I Ctrl-C'ed my second execution, of course - the figures should be much higher!):
[login.hostname] ? grep WAIT OM-quick.trc|...|sort|uniq -c
335 XYZ_SUMMARY
20816 AAAA_DSMM_XYZ_UK (index on xyz_summary.xyz_id)
192 XYZ
4804 XYZ_INFO
246 XYZ_SEGMENT
6 XYZ_REMARKS
63 XYZ_PNR_ERS
719 XYZ_PK (index on xyz.xyz_id)
2182 DNIN_IDX_NI5 (index on xyz.xyz_id)
877 DNSG_NI1 (index on xyz_segment.xyz_id, segment_status)
980 DNTI_NI1 (index on xyz_ticket.xyz_id, ticket_status)
850 DNPE_XYZ (index on xyz_pnr_ers.xyz_id)
[login.hostname] ? grep WAIT OM-slow.trc|...|sort|uniq -c
1733 XYZ_SUMMARY
38225 AAAA_DSMM_XYZ_UK (index on xyz_summary.xyz_id)
4359 XYZ
12536 XYZ_INFO
65 XYZ_SEGMENT
17 XYZ_REMARKS
20 XYZ_PNR_ERS
8598 XYZ_PK
7406 DNIN_IDX_NI5
29 DNSG_NI1
2475 DNTI_NI1
27 DNPE_XYZ
The overwhelmingly dominant wait event is by far 'db file sequential read':
[login.hostname] ? grep WAIT OM-*elect.txt|cut -d"'" -f2|sort |uniq -c
36 SQL*Net message from client
38 SQL*Net message to client
107647 db file sequential read
1 latch free
1 latch: object queue header operation
3 latch: session allocation
> It will be worth knowing the estimations...
It shows the same plan with a higher cost when PAYED = 'N' is added:
SQL> select * from sb11.dnr d
2* where d.dnr_blocked IS NULL and d.cancel_state = 'N'
SQL> /
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1002 | 166K| 40 (3)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| XYZ | 1002 | 166K| 40 (3)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | XYZ_CANCEL_STATE_IND | | | 8 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("D"."XYZ_BLOCKED" IS NULL)
2 - access("D"."CANCEL_STATE"='N')
SQL> select * from sb11.dnr d
2 where d.dnr_blocked IS NULL and d.cancel_state = 'N'
3* and d.payed = 'N'
SQL> /
Execution Plan
Plan hash value: 1292668880
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 166K| 89 (3)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| XYZ | 1001 | 166K| 89 (3)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | XYZ_CANCEL_STATE_IND | | | 15 (0)| 00:00:01 | -
Time Difference Span Over Midnight
I am trying to calculate the time difference for things that happen before and after midnight...
Example - a Call Time of 23:58:22 and an Arrival Time of 00:02:50...
It gets all screwed up with <cfset response = DateDiff("s", calltime, artime)>
I need the final value in seconds...
Can't seem to figure this one out...
Thanx,
Merle
I'm going to try to elaborate further and add some code, as it may help others...
First things first... CFFORM
Time validation errors on the seconds...
<cfinput type="Text" name="WhateverTime" required="Yes" message="Please Enter Time (HH:MM:SS)" size=6 validate="time">
It will not take 26:11:22, nor will it take 13:76:33 - but it will take 17:55:77 (obviously 77 is no good).
That entry is accepted but then errors on database insertion/validation...
Here is code to force an appropriate time entry - to avoid user error, a data entry problem...
It forces only good numbers...
<cfselect name="aiqtimeh" class=verd8>
<cfloop index="h" from="0" to="23">
<cfoutput><option value="#numberFormat(h, '00')#">#numberFormat(h, '00')#</option></cfoutput>
</cfloop>
</cfselect> :
<cfselect name="aiqtimem" class=verd8>
<cfloop index="m" from="0" to="59">
<cfoutput><option value="#numberFormat(m, '00')#">#numberFormat(m, '00')#</option></cfoutput>
</cfloop>
</cfselect> :
<cfselect name="aiqtimes" class=verd8>
<cfloop index="s" from="0" to="59">
<cfoutput><option value="#numberFormat(s, '00')#">#numberFormat(s, '00')#</option></cfoutput>
</cfloop>
</cfselect>
So here lies the original problem...
Data entry for the report is done by hand...
So it is not done automatically...
A CreateTime of Now() will not work for a manually entered time...
It might be a report from a week ago... or a related span might cross midnight...
With this reporting system there is no real chance of moving out further than 1 day - an incident is mitigated without spanning several days...
If the times all line up properly - and do not span midnight - no problem...
i.e.:
(Forgive the different time names - calltime / enrtime - there are several being tracked/used.)
<cfif calltime LTE ConcTime>
<!--- Same day: a plain difference in seconds is enough --->
<cfset callduration = DateDiff("s", calltime, conctime)>
</cfif>
<cfif calltime GTE ConcTime>
<!--- Spans midnight: DateDiff is negative; Int() floors, so subtracting days*86400 folds it back to a positive remainder --->
<cfset sec = DateDiff("s", calltime, conctime)>
<cfset days = int(sec/86400)>
<cfset hours = int((sec-(days*86400))/3600)>
<cfset minutes = int((sec-(days*86400)-(hours*3600))/60)>
<cfset seconds = (sec-(days*86400)-(hours*3600)-(minutes*60))>
<cfset callduration = ((hours*60*60)+(minutes*60)+seconds)>
</cfif>
If the call time is less than the conclusion time, it works fine... dandy - if on the same day...
If not, the GTE code is invoked...
It took someone else to figure out the math etc...
From the forum - if you are passing full dates...
<cfset oneDateTime = createDateTime(2011,04,23,09,35,26)>
<cfset twoDateTime = createDateTime(2011,04,24,12,15,13)>
<cfoutput>#dateDiff("s",oneDateTime,twoDateTime)#</cfoutput>
I'm not passing full dates - so I would still have to create the same
<cfif calltime LTE ConcTime> etc...
or
<cfif calltime GTE ConcTime> etc...
before I could start manipulating the date functions...
So it is six of one, half a dozen of the other...
Hopefully this helps anyone with a similar problem...
Merle -
Time difference by date by time range - a better way?
Someone suggested this is a better place for my question:
Hi
I need to display the time difference, every day, for several time ranges,
for example 1-2 pm, 4-5 pm, 7-8 pm. And I need the difference between two dates for each of these ranges, for every day of the past week.
example:
Date        Diff   Range
01/01/2007  00:01  1-2pm
01/02/2007  00:03
01/03/2007  00:10
01/04/2007  00:05
01/05/2007  00:23
01/01/2007  00:10  4-5pm
01/02/2007  00:13
01/03/2007  00:11
01/04/2007  00:15
01/05/2007  00:23
01/01/2007  01:10  7-8pm
01/02/2007  00:13
01/03/2007  00:10
01/04/2007  00:11
01/05/2007  00:21
One way to achieve this is to have multiple unions for each day and each time range.
Example:
select
from
where dt_tm between
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 13:00','mm/dd/yyyy hh24:mi:ss') and
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 14:00','mm/dd/yyyy hh24:mi:ss')
union
select
from
where dt_tm between
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 16:00','mm/dd/yyyy hh24:mi:ss') and
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 17:00','mm/dd/yyyy hh24:mi:ss')
union
select
from
where dt_tm between
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 19:00','mm/dd/yyyy hh24:mi:ss') and
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 20:00','mm/dd/yyyy hh24:mi:ss')
This gives me the required information for only one day - and that is sysdate-5.
I would need the same number of unions for each of the other days.
Is there a better way to accomplish the same?
Any help appreciated.
Thx!
Hi
Sorry for the late response, but better late than never...
I have worked out how to get the data for the previous 5 days; I changed the BETWEEN clause in the time filter, as given below (*).
Here is a reply to all the questions you had asked in response to my questions.
What data do you have? What parameters are you going to input?
I have already given sample data in my post.
There are no input parameters.
You are talking about the difference - but a difference between what and what?
Difference is the difference between 2 timestamp datatypes in 2 different tables (as you may see in the query)
The field diff - is it varchar2 like '1-2 pm' or what?
I didn't understand your question. Whatever you get when you subtract two timestamp datatypes - that should be the datatype, if I have understood you correctly. Not sure if that's what you asked.
But IMHO it's impossible to get such a result unless, of course, dt_tm in the query is the same as the Date column in the result!
The time components in the three queries are different. If you look:
1st Query: ...
where dt_tm between
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 13:00','mm/dd/yyyy hh24:mi:ss') and
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 14:00','mm/dd/yyyy hh24:mi:ss')
2nd Query: ...
where dt_tm between
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 16:00','mm/dd/yyyy hh24:mi:ss') and
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 17:00','mm/dd/yyyy hh24:mi:ss')
3rd Query: ...
where dt_tm between
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 19:00','mm/dd/yyyy hh24:mi:ss') and
      to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 20:00','mm/dd/yyyy hh24:mi:ss')
The first should be between 1 and 2 pm for sysdate-5; I need it for each of the previous 5 days up to sysdate.
The second is between 4 and 5 pm, again for sysdate-5; likewise for the previous 5 days up to sysdate.
Same with the third.
My final query is something like this:
select t1.dt_tm, count(t1.id), '1 - 2 pm' as period
from table1 t1, table2 t2
where t1.id = t2.id
and dt_tm between
    to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 13:00','mm/dd/yyyy hh24:mi:ss') and
    to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 14:00','mm/dd/yyyy hh24:mi:ss')
group by t1.dt_tm
union
select t1.dt_tm, count(t1.id), '4 - 5 pm' as period
from table1 t1, table2 t2
where t1.id = t2.id
and dt_tm between
    to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 16:00','mm/dd/yyyy hh24:mi:ss') and
    to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 17:00','mm/dd/yyyy hh24:mi:ss')
group by t1.dt_tm
union
select t1.dt_tm, count(t1.id), '7 - 8 pm' as period
from table1 t1, table2 t2
where t1.id = t2.id
and dt_tm between
    to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 19:00','mm/dd/yyyy hh24:mi:ss') and
    to_date(to_char((sysdate-5),'mm/dd/yyyy')||' 20:00','mm/dd/yyyy hh24:mi:ss')
group by t1.dt_tm
I need this for the last 5 days, and all I can think of is 5 separate queries (one per day), times 3 queries per day for the 3 time periods - 15 queries in total. Was wondering if there is a better way to achieve the same?
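For what it's worth, the hour buckets can be derived from the timestamp itself, which would collapse all 15 unions into one statement. This is only a sketch, reusing the table1/table2/dt_tm/id names from the query above, and it assumes one output row per day per range is what's wanted. Note that filtering on TO_CHAR(dt_tm,'HH24') = '13' covers 13:00:00-13:59:59, so the 14:00:00 endpoint that BETWEEN would include is excluded:

```sql
SELECT TRUNC(t1.dt_tm) AS day,
       CASE TO_CHAR(t1.dt_tm, 'HH24')
            WHEN '13' THEN '1 - 2 pm'
            WHEN '16' THEN '4 - 5 pm'
            WHEN '19' THEN '7 - 8 pm'
       END AS period,
       COUNT(t1.id) AS cnt
FROM   table1 t1, table2 t2
WHERE  t1.id = t2.id
AND    t1.dt_tm >= TRUNC(SYSDATE) - 5                -- last 5 days in one pass
AND    TO_CHAR(t1.dt_tm, 'HH24') IN ('13', '16', '19')
GROUP  BY TRUNC(t1.dt_tm), TO_CHAR(t1.dt_tm, 'HH24') -- one row per day per range
ORDER  BY 2, 1
```

One scan of the join instead of 15, and adding another day or another time range means widening the date filter or the IN list rather than appending another UNION branch.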
Any help appreciated.
Thx! -
Response times, interpretation times and round trips time
Hi
Can somebody explain to me what these terms mean exactly? What is their significance? I'd appreciate answers without help links.
thanks
kumar
Hi,
Interpretation time: this is the duration from the moment user input actions are validated in the SAP GUI client to the moment the request is actually sent to the SAP application server. It measures the health of the SAP GUI client used for the test rather than the performance of the SAP R/3 server.
Check SAP Note 364625.
Response time: the response time of a transaction step is the difference between the point when the request arrives in the R/3 System and the point when the R/3 System completes processing it.
Check SAP Notes 8963 and 851012.
Round trip time: almost the same as the response time - the time from the client, across the LAN to the server, and back to the client.
Regards,
Yoganand.V -
Dramatically increased response times when connected to network
I suddenly noticed that the response time for opening a folder item in my local Portal installation increased when I connected to the network. Why? I want to understand this, since I'm supposed to help a customer with their performance problems.
Further details:
The response time increased from about 2 seconds to about 10 seconds when I connected to the network. In this case, I connected via an ISDN line to my normal dial-up number at work. However, when I connect to my private ISP account (Telenor Online), the response times are unchanged, i.e. still 2 seconds.
One difference between the two access points is the use of a proxy in the first case.
What's the explanation?
Thanks, Erik Hagen
Hi,
For your issue, the following blogs may be helpful:
Outlook Performance Troubleshooting including Office 365
How To Troubleshoot Microsoft Exchange Server Latency or Connection
Issues
Thanks,
Jessie -
RFC response times deteriorate with time
Hi
We notice that every time our system is restarted, performance is much improved for the first few weeks. I can appreciate this given that it is a huge system and things can 'clog up' after a while, but we have been trying to isolate any specific cause, memory leaks etc.
One thing we have noticed is that although average dialog and background response times appear to increase gradually over time, RFC response times increase substantially. The first few days after a restart we see times of 2-3 seconds; by the first week, 7-10 seconds; and by week two and beyond they are over 20 seconds.
We have 15 application servers and all the RFC traffic is load balanced using a RZ12 group that does not include the Central Instance; the same applies in SMQR & SMQS. We still do see some RFC traffic on the CI though... some of the application servers are on a different site, but we cannot see any differences in response times across sites...
Any ideas what's causing this and how we can keep these RFC response times down? The system is very busy and deals with a lot of RFC traffic, when users start complaining about poor performance we can usually see the system flooded with RFCs....
Due to size and nature of this system we can only arrange a restart every couple of months.
Thanks
Ross
Hi Markus, good question, but no, I haven't noticed... I'm not sure how I'd check either. There are hundreds (if not thousands) of different transactions run each month; I'll try sorting the transactions by time for each day and see if any common long runners have increased, but I think it'll be like looking for a needle in a haystack...
-
Like many, I get variable speeds and occasional drop-outs, but my question is slightly different, so hopefully you will bear with me if I seem to be asking something already answered.
My nominal speed is 2.3 Mbit/s, though it can drop to around 1.0 Mbit/s. The problem, however, is how long sites can take to respond, how slowly pages are written to the screen, and how variable that can be even with the same site. It is worse (or certainly appears to be) at mid-term and after 4.00 p.m., so I tend to blame school children getting home and going online! If I check the speed when sites are loading very slowly, more often than not it is around 2.0 Mbit/s, yet at other times sites respond quickly even though the measured speed is lower. Downloads also vary - they start off at 2 or 3 Mbit/s and then drop almost to the old dial-up speeds. At other times they download very (for me) fast.
I believe the contention ratio is to blame, partly because of the school-related pattern, and partly because switching off the modem for a few minutes can sometimes help, as if I am picking up a less busy 'box' at the exchange. Sometimes it solves itself if I try again later, so I doubt the modem is to blame.
The question is therefore: am I right, and if so, is there anything I can do? Or, if I am wrong, could my modem (Netgear DGN 3500) be to blame?
Thanks.
Sorry - been away.
Yes, I had just done a manual reset. I have checked again now that it has been on for a few days. The speed is not much different and web page response times are still very slow. The bt.com page, for example, took about 3 seconds to load and did so in noticeable stages.
I have tried two different machines connected via a short cable to the modem, which is connected to the master socket with nothing plugged into any other socket in the house. Not much difference noted. It reminds me of the situation when I was working - sometimes the LAN was very slow because too many people were using it. I suspect the exchange basically can't cope with the load, but that opinion may well be due to ignorance on my part!
Ian
System Up Time: 104:15:17

Port  Status        TxPkts   RxPkts   Collisions  Tx B/s  Rx B/s  Up Time
WAN   PPPoA         1291977  1806967  0           610     5117    104:14:03
LAN   10M/100M      1507093  2028952  0           2480    2462    104:15:09
WLAN  11M/54M/270M  5876544  4807671  0           5540    1022    104:14:31

ADSL Link         Downstream  Upstream
Connection Speed  2271 kbps   916 kbps
Line Attenuation  57.8 dB     32.5 dB
Noise Margin      4.3 dB      6.9 dB