Disk 100% busy

Hi
We have a two-node Sun Cluster and recently we found that one of the disks is always 100% busy.
See a snapshot of iostat below:
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.7 10.3 5.3 82.5 0.0 2.0 0.0 185.0 0 100 8/md1631
1.7 8.7 13.3 77.3 0.0 2.0 0.0 196.3 0 100 8/md1631
So the disk (8/md1631) is steadily 100% busy, while there is not really heavy activity on it.
In this output we see that the disk is 100% busy and the average service time is > 100 ms.
Additionally, we see only very low activity on that disk:
0.7 reads per second with 5.3K per second
10.3 writes per second with 82.5K per second.
The disk is used for Oracle DB as raw device.
After a reboot the problem goes away, but after some time it re-appears.
Please see the attached iostat and lockstat output.
Can you help find out whether there is a known Solaris bug for this, or how to solve it?
cpu
us sy wt id
15 3 0 81
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md4
0.0 5.8 38.6 38.6 0.0 0.0 0.0 3.2 0 1 1/md611
0.0 6.0 39.6 39.6 0.0 0.0 0.0 3.2 0 2 1/md621
0.0 6.0 39.7 39.7 0.0 0.0 0.0 3.1 0 1 1/md631
0.0 5.8 0.0 38.6 0.0 0.0 0.0 2.8 0 1 1/md811
0.0 6.0 0.0 39.6 0.0 0.0 0.0 2.8 0 1 1/md821
0.0 6.0 0.0 39.7 0.0 0.0 0.0 2.8 0 1 1/md831
33.1 25.8 1940.1 1986.6 0.0 0.5 0.1 9.2 0 12 2/md0
16.6 25.8 970.0 1986.6 0.0 0.4 0.0 9.1 0 10 2/md1
16.6 25.8 970.0 1986.6 0.0 0.4 0.0 9.5 0 10 2/md2
0.0 0.1 0.0 0.0 0.0 0.0 0.0 3.4 0 0 2/md20
33.1 25.7 1940.1 1986.6 0.0 0.5 0.0 9.3 0 12 2/md21
0.8 0.9 229.6 246.9 0.0 0.0 0.3 13.1 0 2 3/md0
0.4 0.9 114.6 246.9 0.0 0.0 0.0 10.0 0 1 3/md1
0.4 0.9 114.9 246.9 0.0 0.0 0.0 13.4 0 1 3/md2
0.8 0.9 229.6 246.9 0.0 0.0 0.0 13.4 0 2 3/md23
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md4
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md311
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md312
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md321
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md322
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md331
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md332
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md341
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md342
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md351
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md352
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md361
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md362
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md411
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md412
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md421
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md422
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md431
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md432
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md441
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md442
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md451
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md452
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md461
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md462
0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.4 0 0 4/md541
0.0 0.0 0.0 0.0 0.0 0.0 0.0 10.4 0 0 4/md551
0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.3 0 0 4/md561
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.8 0 0 4/md741
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.9 0 0 4/md751
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.6 0 0 4/md761
0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.1 0 0 5/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 13.5 0 0 5/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.2 0 0 5/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.1 0 0 5/md24
3.0 2.6 182.9 189.8 0.0 0.1 0.1 13.3 0 2 6/md0
1.5 2.6 91.5 189.8 0.0 0.1 0.0 13.1 0 2 6/md1
1.5 2.6 91.5 189.8 0.0 0.1 0.0 13.4 0 2 6/md2
3.0 2.6 182.9 189.8 0.0 0.1 0.0 13.4 0 2 6/md22
0.0 0.0 0.0 0.0 0.0 0.0 0.0 6.3 0 0 7/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 6.9 0 0 7/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.6 0 0 7/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 6.9 0 0 7/md3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.6 0 0 7/md4
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md65
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md66
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md67
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md68
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md69
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md70
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md71
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md72
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md73
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md74
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md75
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md76
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md77
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md78
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md79
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md80
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md81
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md82
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md83
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md84
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md85
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md86
0.0 6.0 39.6 39.6 0.0 0.0 0.0 3.2 0 2 7/md511
0.0 5.9 39.5 39.5 0.0 0.0 0.0 3.2 0 2 7/md521
0.0 6.1 40.4 40.4 0.0 0.0 0.0 3.1 0 2 7/md531
0.0 6.0 0.0 39.6 0.0 0.0 0.0 2.8 0 1 7/md711
0.0 5.9 0.0 39.5 0.0 0.0 0.0 2.8 0 1 7/md721
0.0 6.1 0.0 40.4 0.0 0.0 0.0 2.8 0 1 7/md731
19.0 35.6 2061.6 485.2 0.0 1.1 0.1 20.9 0 9 8/md0
9.3 35.9 1028.4 490.1 0.0 1.0 0.0 22.9 0 7 8/md1
10.1 35.2 1080.1 457.2 0.0 0.9 0.0 20.8 0 7 8/md2
9.3 35.9 1028.4 490.1 0.0 1.0 0.0 22.9 0 7 8/md3
10.1 35.2 1080.1 457.2 0.0 0.9 0.0 20.8 0 7 8/md4
0.0 0.0 11.8 0.0 0.0 0.0 0.0 20.5 0 0 8/md5
0.0 0.2 11.8 2.9 0.0 0.0 0.0 10.1 0 0 8/md6
0.0 0.0 11.8 0.0 0.0 0.0 0.0 20.2 0 0 8/md7
0.0 0.0 11.9 0.1 0.0 0.0 0.0 12.7 0 0 8/md8
0.1 0.1 12.6 0.6 0.0 0.0 0.0 20.1 0 0 8/md9
0.0 0.0 11.8 0.0 0.0 0.0 0.0 19.7 0 0 8/md10
0.0 0.2 11.8 1.0 0.0 0.0 0.0 5.9 0 0 8/md11
0.0 0.0 11.8 0.0 0.0 0.0 0.0 15.7 0 0 8/md12
0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 0 0 8/md13
3.2 0.4 56.6 6.2 0.0 0.0 0.0 1.5 0 1 8/md14
0.1 0.4 1.2 6.2 0.0 0.0 0.0 3.5 0 0 8/md15
2.0 0.0 7.9 0.0 0.0 0.0 0.0 1.0 0 0 8/md16
2.0 1.0 4.4 0.5 0.0 0.0 0.0 3.1 0 1 8/md17
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md18
0.0 0.3 1.6 1.6 0.0 0.0 0.0 2.6 0 0 8/md111
0.0 0.3 0.0 1.6 0.0 0.0 0.0 2.4 0 0 8/md112
0.0 0.3 1.6 1.6 0.0 0.0 0.0 2.6 0 0 8/md121
0.0 0.3 0.0 1.6 0.0 0.0 0.0 2.3 0 0 8/md122
0.0 0.3 1.6 1.6 0.0 0.0 0.0 2.7 0 0 8/md131
0.0 0.3 0.0 1.6 0.0 0.0 0.0 2.4 0 0 8/md132
0.0 0.3 1.6 1.6 0.0 0.0 0.0 2.6 0 0 8/md141
0.0 0.3 0.0 1.6 0.0 0.0 0.0 2.4 0 0 8/md142
0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.9 0 0 8/md211
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.7 0 0 8/md212
0.0 0.0 0.0 0.0 0.0 0.0 0.0 6.1 0 0 8/md221
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.2 0 0 8/md222
0.0 0.0 0.0 0.0 0.0 0.0 0.0 6.3 0 0 8/md231
0.0 0.0 0.0 0.0 0.0 0.0 0.0 6.9 0 0 8/md232
0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.9 0 0 8/md241
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.8 0 0 8/md242
4.3 0.5 77.0 8.0 0.0 0.0 0.0 1.3 0 1 8/md1501
0.1 0.5 2.1 8.0 0.0 0.0 0.0 2.9 0 0 8/md1511
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.5 0 0 8/md1521
0.1 2.1 141.9 102.9 0.0 0.1 0.0 41.6 0 1 8/md1531
0.1 0.0 141.9 0.0 0.0 0.0 0.0 35.8 0 0 8/md1541
0.0 0.0 11.9 0.3 0.0 0.0 0.0 39.3 0 0 8/md1551
0.0 0.1 5.3 19.2 0.0 0.0 0.0 15.4 0 0 8/md1561
0.0 0.1 25.8 0.7 0.0 0.0 0.0 34.0 0 0 8/md1571
0.1 0.0 7.6 0.4 0.0 0.0 0.0 11.8 0 0 8/md1581
0.1 0.0 57.7 0.0 0.0 0.0 0.0 41.3 0 0 8/md1591
0.0 0.0 17.3 0.1 0.0 0.0 0.0 34.1 0 0 8/md1601
0.0 0.0 11.6 0.0 0.0 0.0 0.0 39.2 0 0 8/md1611
0.0 0.0 3.0 0.0 0.0 0.0 0.0 32.5 0 0 8/md1621
3.8 29.4 273.9 298.4 0.0 1.9 0.0 58.3 0 56 8/md1631
1.5 0.1 253.2 1.4 0.0 0.0 0.0 23.1 0 1 8/md1641
0.1 0.0 80.8 0.1 0.0 0.0 0.0 41.8 0 0 8/md1651
0.0 0.0 28.9 0.0 0.0 0.0 0.0 42.8 0 0 8/md1661
0.0 0.0 0.6 0.0 0.0 0.0 0.0 18.2 0 0 8/md1671
0.0 0.0 11.8 0.0 0.0 0.0 0.0 40.4 0 0 8/md1681
0.3 0.0 323.3 0.0 0.0 0.0 0.0 17.5 0 0 8/md1691
0.3 0.0 277.1 0.0 0.0 0.0 0.0 21.3 0 0 8/md1701
0.1 0.0 115.5 0.0 0.0 0.0 0.0 42.9 0 0 8/md1711
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 9/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 9/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 9/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 9/md3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 9/md4
0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.1 0 0 9/md641
0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.0 0 0 9/md651
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.6 0 0 9/md661
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.9 0 0 9/md841
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.9 0 0 9/md851
0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 0 0 9/md861
0.4 1.1 9.3 7.5 0.0 0.0 0.3 9.0 0 1 d0
0.2 1.0 5.5 6.0 0.0 0.0 0.0 8.1 0 1 d1
0.2 1.1 4.0 7.7 0.0 0.0 0.0 8.8 0 1 d2
0.5 0.3 6.4 3.7 0.0 0.0 0.2 12.0 0 0 d3
0.3 0.2 4.9 0.6 0.0 0.0 0.0 12.9 0 0 d4
0.2 0.3 1.5 3.8 0.0 0.0 0.0 15.8 0 0 d5
0.2 1.4 9.7 28.1 0.0 0.0 2.6 19.7 0 1 d6
0.1 1.3 6.6 24.8 0.0 0.0 0.0 16.6 0 1 d7
0.1 1.4 3.3 28.1 0.0 0.0 0.0 16.0 0 1 d8
0.0 0.0 0.0 0.0 0.0 0.0 5.6 15.5 0 0 d9
0.0 0.0 0.0 0.0 0.0 0.0 0.0 15.8 0 0 d10
0.0 0.0 0.0 0.0 0.0 0.0 0.0 13.9 0 0 d11
0.6 4.5 24.4 140.0 0.0 0.2 1.4 35.0 1 2 d12
0.4 4.5 15.2 135.0 0.0 0.1 0.0 30.9 0 2 d13
0.3 4.6 9.7 141.3 0.0 0.1 0.0 30.2 0 2 d14
0.0 4.3 0.8 2258.5 0.0 0.1 0.5 18.4 0 7 d15
0.0 4.3 0.4 2258.5 0.0 0.1 0.0 16.4 0 7 d16
0.0 4.3 0.4 2258.5 0.0 0.1 0.0 16.4 0 7 d17
0.0 4.3 0.4 2258.5 0.0 0.1 0.0 16.4 0 7 d18
0.0 4.3 0.4 2258.5 0.0 0.1 0.0 16.4 0 7 d19
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 d20
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 d21
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 d22
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 d23
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 d24
0.0 0.8 0.0 102.1 0.0 0.0 1.0 15.3 0 1 d25
0.0 0.8 0.0 102.1 0.0 0.0 0.0 9.1 0 1 d26
0.0 0.8 0.0 102.1 0.0 0.0 0.0 9.5 0 1 d27
0.0 0.8 0.0 102.1 0.0 0.0 0.0 9.1 0 1 d28
0.0 0.8 0.0 102.1 0.0 0.0 0.0 9.5 0 1 d29
0.0 0.0 0.0 0.0 0.0 0.0 5.1 45.2 0 0 d30
0.0 0.0 0.0 0.0 0.0 0.0 0.0 11.5 0 0 d31
0.0 0.0 0.0 0.0 0.0 0.0 0.0 57.6 0 0 d32
0.0 0.0 0.0 0.0 0.0 0.0 0.0 11.5 0 0 d33
0.0 0.0 0.0 0.0 0.0 0.0 0.0 57.7 0 0 d34
0.1 7.1 20.1 518.3 0.0 0.2 1.6 22.9 1 4 d35
0.0 7.1 10.1 518.3 0.0 0.1 0.0 18.7 0 4 d36
0.0 7.1 10.1 518.3 0.0 0.1 0.0 19.0 0 4 d37
0.0 7.1 10.1 518.3 0.0 0.1 0.0 18.7 0 4 d38
0.0 7.1 10.1 518.3 0.0 0.1 0.0 19.0 0 4 d39
0.0 0.0 0.0 0.0 0.0 0.0 13.2 20.0 0 0 d40
0.0 0.0 0.0 0.0 0.0 0.0 0.0 15.7 0 0 d41
0.0 0.0 0.0 0.0 0.0 0.0 0.0 12.9 0 0 d42
0.0 0.0 0.0 0.0 0.0 0.0 0.0 15.7 0 0 d43
0.0 0.0 0.0 0.0 0.0 0.0 0.0 12.9 0 0 d44
0.2 0.2 106.0 64.6 0.0 0.0 0.0 111.9 0 0 d45
0.1 0.1 50.8 27.0 0.0 0.0 0.0 44.3 0 0 d46
0.1 0.1 55.2 37.6 0.0 0.0 0.0 195.4 0 0 d47
0.0 0.0 0.3 0.0 0.0 0.0 0.0 10.6 0 0 d48
0.0 0.0 0.3 0.0 0.0 0.0 0.0 10.6 0 0 d49
1.1 7.7 32.1 166.8 0.0 0.2 0.1 22.9 0 4 c0t0d0
0.2 1.0 5.5 6.0 0.0 0.0 0.0 8.0 0 1 c0t0d0s0
0.3 0.2 4.9 0.6 0.0 0.0 0.0 12.8 0 0 c0t0d0s1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0t0d0s2
0.1 1.3 6.6 24.8 0.0 0.0 0.0 16.6 0 1 c0t0d0s3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 15.8 0 0 c0t0d0s4
0.4 4.5 15.2 135.0 0.0 0.1 0.2 30.6 0 2 c0t0d0s5
0.0 0.6 0.0 0.3 0.0 0.0 0.0 18.0 0 1 c0t0d0s7
0.0 4.6 0.4 2258.6 0.0 0.1 0.0 15.7 0 7 c0t1d0
0.0 4.3 0.4 2258.5 0.0 0.1 0.0 16.3 0 7 c0t1d0s0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0t1d0s2
0.0 0.3 0.0 0.1 0.0 0.0 0.0 5.0 0 0 c0t1d0s7
0.1 1.0 51.1 129.1 0.0 0.0 0.0 14.8 0 1 c0t4d0
0.1 0.9 51.1 129.1 0.0 0.0 0.0 15.8 0 1 c0t4d0s0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0t4d0s2
0.0 0.2 0.0 0.1 0.0 0.0 0.0 9.0 0 0 c0t4d0s7
0.1 7.1 10.1 518.3 0.0 0.1 0.0 18.6 0 4 c0t6d0
0.0 7.1 10.1 518.3 0.0 0.1 0.0 18.6 0 4 c0t6d0s0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0t6d0s2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 8.2 0 0 c0t6d0s7
0.8 8.0 17.7 173.2 0.0 0.2 0.0 22.0 0 4 c1t0d0
0.2 1.0 3.8 7.4 0.0 0.0 0.0 8.7 0 1 c1t0d0s0
0.2 0.3 1.5 3.6 0.0 0.0 0.0 15.8 0 0 c1t0d0s1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c1t0d0s2
0.1 1.3 3.1 26.8 0.0 0.0 0.0 16.0 0 1 c1t0d0s3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 13.9 0 0 c1t0d0s4
0.2 4.4 9.3 134.8 0.0 0.1 0.0 30.2 0 2 c1t0d0s5
0.0 1.0 0.0 0.5 0.0 0.0 0.0 11.8 0 1 c1t0d0s7
0.0 4.8 0.4 2258.7 0.0 0.1 0.0 15.4 0 7 c1t1d0
0.0 4.3 0.4 2258.5 0.0 0.1 0.0 16.3 0 7 c1t1d0s0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c1t1d0s2
0.0 0.5 0.0 0.2 0.0 0.0 0.0 7.5 0 0 c1t1d0s7
0.1 0.9 55.3 139.7 0.0 0.0 2.3 33.7 0 1 c1t4d0
0.1 0.9 55.2 139.6 0.0 0.0 2.3 34.7 0 1 c1t4d0s0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c1t4d0s2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.1 0 0 c1t4d0s7
0.1 7.6 10.1 518.5 0.0 0.1 0.0 19.0 0 4 c1t6d0
0.0 7.1 10.1 518.3 0.0 0.1 0.0 18.9 0 4 c1t6d0s0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c1t6d0s2
0.0 0.5 0.0 0.3 0.0 0.0 0.0 19.6 0 1 c1t6d0s7
0.1 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c8t600C0FF0000000000B5EBA337797C800d0
0.1 17.8 118.0 118.0 0.0 0.1 0.0 3.1 0 4 c8t600C0FF0000000000B5EBA3EB1204800d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c8t600C0FF0000000000B5EBA61A93B8900d0
0.1 18.0 119.5 119.5 0.0 0.1 0.0 3.1 0 5 c8t600C0FF0000000000B5EBA3C4D4B7C00d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0 0 c8t600C0FF0000000000B5EBA799193A804d0
0.4 1.0 114.9 246.9 0.0 0.0 0.0 13.0 0 1 c8t600C0FF0000000000B5EBA799193A803d0
1.5 2.7 91.5 189.9 0.0 0.1 0.0 13.1 0 2 c8t600C0FF0000000000B5EBA799193A802d0
9.9 35.4 988.2 474.0 0.1 0.5 1.7 11.8 0 7 c8t600C0FF0000000000B5EBA799193A801d0
16.6 26.3 970.0 1986.8 0.0 0.4 0.0 9.4 0 10 c8t600C0FF0000000000B5EBA799193A800d0
0.1 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c8t600C0FF0000000000B5E6C079EE88A00d0
0.0 17.8 0.0 118.0 0.0 0.0 0.0 2.8 0 4 c8t600C0FF0000000000B5E6C63193C2200d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c8t600C0FF0000000000B5E6C43B069AE00d0
0.0 18.0 0.0 119.5 0.0 0.1 0.0 2.8 0 4 c8t600C0FF0000000000B5E6C64BC271200d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0 0 c8t600C0FF0000000000B5E6C4933C1A704d0
0.4 1.0 114.6 246.9 0.0 0.0 0.0 9.6 0 1 c8t600C0FF0000000000B5E6C4933C1A703d0
1.5 2.7 91.5 189.9 0.0 0.1 0.0 12.8 0 2 c8t600C0FF0000000000B5E6C4933C1A702d0
11.1 36.4 1080.6 463.5 0.1 0.5 1.5 10.7 0 7 c8t600C0FF0000000000B5E6C4933C1A701d0
16.6 26.3 970.0 1986.8 0.0 0.4 0.0 9.0 0 10 c8t600C0FF0000000000B5E6C4933C1A700d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0 0 in5n1:vold(pid3732)
cpu
us sy wt id
17 2 0 81
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md4
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md611
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md621
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md631
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md811
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md821
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 1/md831
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 2/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 2/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 2/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 2/md20
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 2/md21
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 3/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 3/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 3/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 3/md23
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md4
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md311
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md312
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md321
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md322
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md331
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md332
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md341
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md342
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md351
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md352
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md361
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md362
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md411
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md412
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md421
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md422
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md431
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md432
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md441
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md442
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md451
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md452
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md461
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md462
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md541
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md551
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md561
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md741
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md751
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 4/md761
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 5/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 5/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 5/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 5/md24
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 6/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 6/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 6/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 6/md22
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md4
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md65
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md66
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md67
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md68
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md69
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md70
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md71
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md72
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md73
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md74
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md75
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md76
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md77
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md78
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md79
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md80
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md81
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md82
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md83
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md84
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md85
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md86
0.0 11.0 0.0 61.0 0.0 0.0 0.0 1.5 0 2 7/md511
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md521
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md531
0.0 11.0 0.0 61.0 0.0 0.0 0.0 1.5 0 2 7/md711
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md721
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 7/md731
6.3 15.3 62.7 221.4 0.0 0.0 0.0 2.2 0 4 8/md0
3.3 15.3 34.9 221.4 0.0 0.0 0.0 2.2 0 3 8/md1
3.0 15.3 27.8 221.4 0.0 0.0 0.0 1.8 0 3 8/md2
3.3 15.3 34.9 221.4 0.0 0.0 0.0 2.2 0 3 8/md3
3.0 15.3 27.8 221.4 0.0 0.0 0.0 1.8 0 3 8/md4
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md5
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md6
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md7
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md8
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md9
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md10
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md11
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md12
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md13
1.3 0.3 21.3 5.3 0.0 0.0 0.0 0.9 0 0 8/md14
0.0 0.3 0.0 5.3 0.0 0.0 0.0 2.0 0 0 8/md15
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md16
2.0 1.0 1.5 0.5 0.0 0.0 0.0 1.6 0 0 8/md17
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md18
0.0 0.3 0.0 0.2 0.0 0.0 0.0 1.4 0 0 8/md111
0.0 0.3 0.0 0.2 0.0 0.0 0.0 1.5 0 0 8/md112
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md121
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md122
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md131
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md132
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md141
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md142
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md211
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md212
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md221
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md222
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md231
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md232
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md241
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md242
1.7 1.3 26.6 21.3 0.0 0.0 0.0 1.2 0 0 8/md1501
0.3 1.3 5.3 21.3 0.0 0.0 0.0 1.8 0 0 8/md1511
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 8/md1521
0.0 0.7 0.0 85.2 0.0 0.0 0.0 4

There is a good document on support.oracle.com which explains what 100% means. It does not mean the disk is under heavy load or saturated.
In short, either ignore it or write some D (DTrace) to determine what process is causing it. As has been said previously, finding out the metadevice configuration would help, then running iostat again with all the metadevices and the underlying disks as arguments.
metaset -s <set> -p
iostat -xntd <md device 1> <md device2> <md device n> .... <cXtXdX 1> <cXtXdX 2> <cXtXdX n> ... 1 20
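To see which processes are actually driving I/O to the busy metadevice, here is a minimal DTrace sketch (assuming DTrace is available on your Solaris release; the statname "md1631" is taken from the iostat output above and may need adjusting for your diskset):
# count block I/O requests per process against the busy device for ~30 seconds
dtrace -n 'io:::start /args[1]->dev_statname == "md1631"/ { @[execname, pid] = count(); } tick-30s { exit(0); }'
The aggregation printed on exit shows which executables queued the most requests against that device.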

Similar Messages

  • hdisk0 and hdisk1 are showing 100% busy most of the time

    Hi,
    hdisk0 and hdisk1 are showing 100% busy most of the time. While they are 100% busy, the database and application hang; even a simple SQL query or a simple OS command (e.g. ls -lrt) takes a long time to execute. The odd thing is that hdisk0 and hdisk1 are not where the Oracle home and the Oracle datafiles are located; hdisk0 and hdisk1 hold /var, /tmp, /usr, /opt, etc.
    Regards,
    Sajid

    What OS ?
    --AIX 5.3
    What database version ?
    --10.2.0.4.0
    What kind of filesystem ?
    Filesystem GB blocks Free %Used Iused %Iused Mounted on
    /dev/hd4 1.00 0.95 6% 1975 1% /
    /dev/hd2 4.00 1.45 64% 39807 11% /usr
    /dev/hd9var 2.00 0.94 53% 577 1% /var
    /dev/hd3 6.00 3.13 48% 4401 1% /tmp
    /dev/hd1 1.00 0.90 11% 234 1% /home
    /proc - - - - - /proc
    /dev/hd10opt 0.25 0.18 29% 1284 3% /opt
    /dev/fslv00 750.00 198.92 74% 1620391 4% /vol01
    /dev/fslv01 460.00 200.18 57% 524775 2% /vol02
    /dev/fslv02 185.00 44.46 76% 440425 5% /vol03
    What is on those disks ?
    rmsdb:/>lspv -l hdisk0
    hdisk0:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    hd10opt 1 1 00..00..01..00..00 /opt
    hd1 4 4 00..00..04..00..00 /home
    lg_dumplv 8 8 00..08..00..00..00 N/A
    hd5 1 1 01..00..00..00..00 N/A
    hd8 1 1 00..00..01..00..00 N/A
    hd6 77 77 00..77..00..00..00 N/A
    hd2 16 16 00..00..16..00..00 /usr
    hd4 4 4 00..00..04..00..00 /
    hd3 24 24 00..00..24..00..00 /tmp
    hd9var 8 8 00..00..08..00..00 /var
    rmsdb:/>lspv -l hdisk1
    hdisk1:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    hd10opt 1 1 00..00..01..00..00 /opt
    hd1 4 4 00..00..04..00..00 /home
    lv02 7 7 00..07..00..00..00 /mkcd/cd_images
    loglv01 1 1 00..01..00..00..00 N/A
    hd5 1 1 01..00..00..00..00 N/A
    hd8 1 1 00..00..01..00..00 N/A
    hd6 77 77 00..77..00..00..00 N/A
    hd2 16 16 00..00..16..00..00 /usr
    hd4 4 4 00..00..04..00..00 /
    hd3 24 24 00..00..24..00..00 /tmp
    hd9var 8 8 00..00..08..00..00 /var
    What database processes are running ?
    --As I observed, it happens while Oracle batch jobs are running
    Did you look at memory and cpu and swap usage at the OS level ? Did you look at what OS process is using the most resource ?
    --I use topas to look at this, but I did not get it. How can I see which OS process is using the most resources?
    Did you run a statspack or awr report on the database to identify what is running ?
    --Nothing, as Oracle support said
    Regards,
    Sajid

  • How to read Disk Device Busy (%) in GC?

    Hello All,
    I got the following alert in GC 10.2.0.3:
    Target Name=hostname
    Target Type=Host
    Host=mccmrkwbm007
    Metric=Disk Device Busy (%)
    Metric Value=95.31
    Disk Device=ssd31
    Timestamp=***
    Severity=Critical
    Message=Disk Device ssd31 is 95.31% busy.
    Notification Rule Name=Host Availability and Critical States
    Notification Rule Owner=SYSMAN
    Unfortunately, the disk device ssd31 doesn't correlate with any file system on this host. How did GC come up with the device name ssd31?
    Still researching in the documentation but thought I would post here just in case someone has seen this. I saw some posts about the documentation bug but that's about it so far.
    Cheers.

    I also saw under the metric Disk Activity the list of devices that GC can see...
    They are all identified as ssd#... how do I correlate this with the friendly file system names?

  • Disk drive goes to 100% busy when starting Elements...How do I fix the problem?

    I've been successful running Elements for the past few years. Now, the drive goes to 100% and locks up the computer. Elements seems to start up, but the drive crashes the system. Cannot use Elements 11 at this time. How do I fix the problem?

    If you've been using the program successfully for years and suddenly your drive is crashing, I'd first check the drive itself. They do go bad after a couple of years.
    I'd also run a registry cleaner, Disk Cleanup, a Disk Defragmenter and maybe even a good virus scan.
    You don't say which version of Premiere Elements you're using, what operating system you're on, or how old your computer is, but computers do need regular maintenance and hardware sometimes needs to be replaced.

  • X4150 100% busy disk on idle

    Morning,
    I am having an issue with a new server I acquired.
    Sun Fire X4150, 4 GB RAM, 2x 73 GB 10,000 rpm disks, configured with RAID 1 on a Sun StorageTek controller.
    I just installed Solaris 10 5/08 with a basic setup. The system is sluggish, I believe because the disk is 100% busy most of the time.
    Here is an example of my iostat output. This is basically what the system does at idle.
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout us sy wt id
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 99 0 0 0 100
    sd1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    extended device statistics tty cpu
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout us sy wt id
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 99 0 0 0 100
    sd1 0.0 0.0 0.0 0.0 0.0 5.5 0.0 0 46
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    extended device statistics tty cpu
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout us sy wt id
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 99 0 0 0 100
    sd1 0.0 0.0 0.0 0.0 0.0 12.0 0.0 0 100
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    extended device statistics tty cpu
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout us sy wt id
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 99 0 0 0 100
    sd1 0.0 0.0 0.0 0.0 0.0 12.0 0.0 0 100
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    extended device statistics tty cpu
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout us sy wt id
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 99 0 0 0 100
    sd1 0.0 0.0 0.0 0.0 0.0 12.0 0.0 0 100
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    extended device statistics tty cpu
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout us sy wt id
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 99 0 0 0 100
    sd1 0.0 2.8 0.0 4.7 0.0 9.0 3198.2 0 75
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    How can I resolve this issue? I have Solaris 10 installed on some HP desktops, nothing special and they fly.

    There is a good document on support.oracle.com which explains what 100% means. It does not mean the disk is under heavy load or saturated.
    In short, either ignore it or write some D (DTrace) to determine what process is causing it. As has been said previously, finding out the metadevice configuration would help, then running iostat again with all the metadevices and the underlying disks as arguments.
    metaset -s <set> -p
    iostat -xntd <md device 1> <md device2> <md device n> .... <cXtXdX 1> <cXtXdX 2> <cXtXdX n> ... 1 20

  • Windows 8.1 disk 100% utiization

    Hi Team
    I am using an HP Envy m4 with Windows 8.1, an i7, and 8 GB of RAM. My disk usage is always 100% no matter what I do. Can you please let me know what might have caused the issue? I hadn't noticed it until my laptop started freezing, and today it gave me a screen saying that it
    has to reboot and that some data has been collected for performance analysis.
    PS: I did check the paging, it's fine; I disabled caching on the disk; none of these have helped.
    Thanks in Advance

    In order to diagnose your problem you will need to download and install the following:
    Install the WPT (Windows Performance Toolkit)
    http://www.microsoft.com/en-us/download/details.aspx?id=30652
    Help with installation (if needed) is here
    When you have, open an elevated command prompt and type the following:
    WPRUI.exe (which is the Windows Performance Recorder) and check off the boxes for the following:
    First level triage (if available), CPU usage, Disk IO.
    If your problem is not CPU or HD then check off the relevant box(es) as well (for example networking or registry). Please configure yours as per the below snip.
    Click Start
    Let it run for 60 secs or more and save the file (it will show you where it is being saved and what the file is called)
    Zip the file and upload it to us on OneDrive (or any file sharing service) and give us a link to it in your next post.
    Wanikiya and Dyami--Team Zigzag
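    For reference, the same capture can be scripted with wpr.exe, the command-line recorder that ships with the same toolkit; a rough sketch from an elevated command prompt (the profile names are standard built-ins and may vary by WPT version, and the output path is just an example):
    rem record CPU and disk I/O activity while the 100% disk condition is reproduced
    wpr -start GeneralProfile -start CPU -start DiskIO
    rem ...reproduce the problem for 60 seconds or more, then stop and save the trace...
    wpr -stop C:\temp\disk-busy.etl "disk at 100% repro"
    The resulting .etl file can then be zipped and shared the same way.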

  • Leopard Penpower Tooya tablet writes 10000"s of diags-syslogd 100% busy

    Hello all, forgive me if this is the wrong forum to post this question regarding Leopard support (what is the correct driver) for the Penpower Tooya writing tablet?
    I would like to know what the CORRECT driver is for this hardware. I have installed the stuff that came with the tablet and kit. The driver is V1.66.
    It does work OK; however, with nothing running and just a few pen touches on the tablet (like a Wacom GTE), both of the MacBook Pro's dual CPUs go to 100% each, with syslogd extremely busy!
    A quick look into the Console and, lo and behold, I see a zillion of the following messages.
    These look like Penpower or "HyperPenDriver" developer diagnostics... this is so slack if that is the case!
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] NX_MOUSEMOVED ( 571, 291)NXSUBTYPE_TABLETPOINT 0x00 +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] kIOHIDElementTypeInput_Misc +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] kIOHIDElementTypeInput_Misc X 3975 Y 2033 P 27 B 3 +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] NX_MOUSEMOVED ( 571, 291)NXSUBTYPE_TABLETPOINT 0x00 +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] kIOHIDElementTypeInput_Misc +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] kIOHIDElementTypeInput_Misc X 3976 Y 2033 P 27 B 3 +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] NX_MOUSEMOVED ( 571, 291)NXSUBTYPE_TABLETPOINT 0x00 +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] kIOHIDElementTypeInput_Misc +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] kIOHIDElementTypeInput_Misc +
    +10/03/08 7:44:41 PM [0x0-0x11011].HyperPenDriver[164] kIOHIDElementTypeInput_Misc X 3979 Y 2033 P 23 B 3+
    I bought this Penpower tablet yesterday for my wife for her MacBook Pro so she can input Chinese (traditional and simplified) using the tablet and stylus. She has an old PowerBook G4 with a Penpower stylus/tablet and old software that works great, but under 10.4.2 only.
    Penpower (http://www.penpower.net), a Taiwanese company, has released the TooyaPRO tablet for Leopard (and these other brand-x software companies).
    I've pulled the stuff off ftp.penpower.net.tw; however, it is the same as the stuff I have on the installer DVDs.
    Yes, I have been to the Penpower web site looking for support, and YES I did email them, and NO I have not heard back.
    So, I was hoping someone on these forums would know of this issue?
    Thanks for any help
    w
    HK

    I have bought the Penpower handwriter which comes with the TAB 403 tablet. The driver for the TAB 403 that comes with the software CD does not work in OS X 10.5.2. The support guy from Taiwan sent me another version which is supposed to work under 10.5.2, but it does not. It claims that it has to run under "Classic mode". Maybe the HK guy also has a driver for the TAB 403 tablet. The Penpower support website is so poorly organized. Sad to say that they are the only company releasing Chinese writing software. Apple has done nothing to make Chinese input easier.

  • Disk - 100 %

    Hi experts,
    I am writing because I have had a problem with the performance of the disk since the date of purchase. The computer was restored to its initial state and nothing has changed.
    I am not happy with how the disk reads and writes. There are "kickbacks" in its behavior when launching some programs. The disk can sit at 100 percent when installing or uninstalling anything (for example Microsoft Office 2010 - an alert about Source Engine). It causes a "dead halt" for a few seconds.
    The temperature of the disk got up to 47 degrees Celsius at most.
    Tests:
    I also examined the disk with the diagnostic program in UEFI.
    I ran a particular test which lasted for 3 hours. Everything seems to be fine.
    There are alerts pertaining to the performance of the disk (TOSHIBA MQ01ABD075). This was detected by Norton Internet Security and from my own observations (screens below). It is not normal.
    Sometimes LOCAL NETWORK SERVICE and SYSTEM (by itself) push the usage of the disk to 100% (after starting the operating system).
    Could you give me some advice? Is this malfunction serious?

    Notebook HP Pavilion g6-2260ew
    Bluescreens:
    http://speedy.sh/t5KCv/010313-34218-01.zip
    http://speedy.sh/BefUz/122112-29390-01.zip
    http://speedy.sh/HmsuV/122212-30796-01.zip
    I talked to a specialist from Microsoft.
    This is his diagnosis:
    The minidump files showed a KERNEL_SECURITY_CHECK_FAILURE (139) stop error referencing the USBXHCI.SYS:
    BUCKET_ID:  0x139_3_USBXHCI!TransferRing_TransferResourcesFree
    The USBXHCI.SYS, which is dated Jul 25, 2012, is a USB driver and there may be a conflict between the USBXHCI.SYS and another driver possibly from the Nokia software.
    I bought the machine on 7 December 2012, so it could not be a driver from the Nokia software.
    I tried to update my mobile phone (Nokia 500) with Nokia Suite 3.5.34 around 25 December 2012, not 25 July.
    So what do you think? What could the reason be, in your opinion? Maybe the wrong disk controller software, or the video driver?
    or Windows 8 heh
    I am going to file a complaint tomorrow. I am not able to repair it myself, even with your help.

  • Running db2 on zfs make disk 100% utilization

    Hello,
    I am using DB2 on Solaris 10 with a ZFS filesystem. Since changing from UFS to ZFS, the disk that holds the DB2 data files shows 100% utilization almost all the time.
    Is this normal behavior, or is there something wrong with the ZFS or DB2 configuration?
    I use a T6140 with RAID 1+0 presented to an LDom host and using ZFS; this is the result from iostat:
    # iostat -xne 5|grep c0d3
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
    164.0 7.4 5897.8 191.3 0.0 1.2 0.0 7.2 0 39 0 0 0 0 c0d3
    214.6 3.2 3394.5 39.2 0.0 1.8 0.0 8.3 0 99 0 0 0 0 c0d3
    198.0 2.0 3145.3 32.0 0.0 1.5 0.0 7.7 0 99 0 0 0 0 c0d3
    270.6 38.2 4347.1 2011.5 0.0 2.8 0.0 8.9 0 100 0 0 0 0 c0d3
    206.2 2.8 3189.1 40.0 0.0 1.7 0.0 7.9 0 99 0 0 0 0 c0d3
    202.2 2.0 3201.5 32.0 0.0 1.8 0.0 8.7 0 99 0 0 0 0 c0d3
    186.4 2.0 2963.8 32.0 0.0 1.6 0.0 8.2 0 99 0 0 0 0 c0d3
    231.8 2.0 3675.0 32.0 0.0 2.0 0.0 8.5 0 99 0 0 0 0 c0d3
    199.4 1.6 3188.4 25.6 0.0 1.7 0.0 8.6 0 99 0 0 0 0 c0d3
    264.6 17.8 4287.0 220.4 0.0 2.7 0.0 9.6 0 99 0 0 0 0 c0d3
    243.2 2.8 3893.2 40.0 0.0 2.2 0.0 8.9 0 99 0 0 0 0 c0d3
    230.6 2.8 3698.8 37.6 0.0 2.1 0.0 8.8 0 99 0 0 0 0 c0d3
    259.0 2.0 4207.1 32.0 0.0 2.4 0.0 9.1 0 99 0 0 0 0 c0d3
    305.8 2.8 4847.6 33.6 0.0 2.9 0.0 9.5 0 100 0 0 0 0 c0d3
    Regards,

    If your filesystem is more than 80% full, then performance can drastically decrease. Please try to increase the filesystem size so that usage drops below 80%. Several options are available with the zfs command to cap a ZFS filesystem's size or to guarantee it a minimum size, as sketched below.
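    A rough sketch of the commands involved (the dataset names pool0/db2data and pool0/spare are placeholders for your own layout):
    # check how full the pool and the dataset holding the DB2 data files are
    zpool list
    zfs list -o name,used,avail,refer,mountpoint
    # cap the data dataset so it cannot fill the pool, e.g. with a quota
    zfs set quota=400G pool0/db2data
    # or reserve guaranteed free space elsewhere in the pool
    zfs set reservation=50G pool0/spare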

  • Dv4-1275mx freeze for few mins and disk 100% highest active time.

    I bought a dv4-1275mx in June, and about a week ago my laptop started to freeze for a few minutes and then come back. During that time I wasn't doing anything, and when I checked Resource Monitor (already open), I saw 100% Highest Active Time on the disk with only a small amount of data read. This happened several times, and when I checked the events I found the following:
    Error    6/23/2009 10:08:42 PM    atapi    11    None    The driver detected a controller error on \Device\Ide\IdePort0.
    Error    6/23/2009 10:01:48 PM    atapi    11    None    The driver detected a controller error on \Device\Ide\IdePort0.
    Error    6/23/2009 10:01:48 PM    atapi    11    None    The driver detected a controller error on \Device\Ide\IdePort0.
    Warning    6/23/2009 9:52:40 PM    ESENT    510    Performance    "Windows (500) Windows: A request to write to the file ""C:\ProgramData\Microsoft\Search\Data\Application​s\Windows\Windows.edb"" at offset 2482176 (0x000000000025e000) for 8192 (0x00002000) bytes succeeded, but took an abnormally long time (60 seconds) to be serviced by the OS. In addition, 1 other I/O requests to this file have also taken an abnormally long time to be serviced since the last message regarding this problem was posted 84 seconds ago. This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem."
    Warning    6/23/2009 9:51:15 PM    ESENT    510    Performance    "Windows (500) Windows: A request to write to the file ""C:\ProgramData\Microsoft\Search\Data\Application​s\Windows\Windows.edb"" at offset 2523136 (0x0000000000268000) for 8192 (0x00002000) bytes succeeded, but took an abnormally long time (60 seconds) to be serviced by the OS. In addition, 0 other I/O requests to this file have also taken an abnormally long time to be serviced since the last message regarding this problem was posted 5515 seconds ago. This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem."
    Error    6/23/2009 9:49:06 PM    atapi    11    None    The driver detected a controller error on \Device\Ide\IdePort0.
    Error    6/23/2009 9:49:06 PM    atapi    11    None    The driver detected a controller error on \Device\Ide\IdePort0.
    Yesterday, I got an error message after Windows started saying there is a problem in the svchost module. So I restarted my laptop and then I didn't get any error messages. Then I tried to start Firefox after about 5 minutes and got an error message saying it was stopped by Data Execution Prevention. After about 1 minute, I got a message saying Windows would restart in 1 minute, and on restart it ran a check disk. After that I didn't get any error messages, but I couldn't open Firefox, so I installed it again, and I found the following events.
    Warning    6/29/2009 7:26:35 PM    Ntfs    130    None    The file system structure on volume C: has now been repaired.
    Warning    6/29/2009 7:26:01 PM    Ntfs    130    None    The file system structure on volume C: has now been repaired.
    Warning    6/29/2009 7:25:49 PM    Ntfs    130    None    The file system structure on volume C: has now been repaired.
    Error    6/29/2009 7:25:02 PM    Service Control Manager    7000    None    "The GSRestartSvc service failed to start due to the following error:
    The system cannot find the file specified."
    Error    6/29/2009 7:25:02 PM    Microsoft-Windows-WMI    10    None    "Event filter with query ""SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA ""Win32_Processor"" AND TargetInstance.LoadPercentage > 99"" could not be reactivated in namespace ""//./root/CIMV2"" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected."
    Warning    6/29/2009 7:23:51 PM    Microsoft-Windows-WLAN-AutoConfig    4001    None    "WLAN AutoConfig service has successfully stopped.
    Warning    6/29/2009 7:23:51 PM    Microsoft-Windows-WLAN-AutoConfig    10002    None    "WLAN Extensibility Module has stopped.
    Module Path: C:\Windows\System32\bcmihvsrv64.dll
    Warning    6/29/2009 7:23:49 PM    Microsoft-Windows-Winlogon    6001    None    The winlogon notification subscriber <Profiles> failed a notification event.
    Warning    6/29/2009 7:23:49 PM    Microsoft-Windows-Winlogon    6000    None    The winlogon notification subscriber <Profiles> was unavailable to handle a notification event.
    Warning    6/29/2009 7:23:49 PM    Microsoft-Windows-Winlogon    6001    None    The winlogon notification subscriber <Sens> failed a notification event.
    Error    6/29/2009 7:23:47 PM    Microsoft-Windows-EventSystem    4621    Event System    The COM+ Event System could not remove the EventSystem.EventSubscription object {CEB8B221-89C5-41A8-98CE-79B413BF150B}-{00000000-0​000-0000-0000-000000000000}-{00000000-0000-0000-00​00-000000000000}.  The HRESULT was 80070005.
    Error    6/29/2009 7:22:41 PM    Microsoft-Windows-WMI    10    None    "Event filter with query ""SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA ""Win32_Processor"" AND TargetInstance.LoadPercentage > 99"" could not be reactivated in namespace ""//./root/CIMV2"" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected."
    Error    6/29/2009 7:19:31 PM    Service Control Manager    7032    None    "The Service Control Manager tried to take a corrective action (Restart the service) after the unexpected termination of the Server service, but this action failed with the following error:
    An instance of the service is already running."
    Error    6/29/2009 7:19:31 PM    Application Error    1000    (100)    Faulting application svchost.exe_BITS, version 6.0.6001.18000, time stamp 0x47919291, faulting module bitsigd.dll, version 7.0.6002.18005, time stamp 0x49e040c9, exception code 0xc0000005, fault offset 0x000000000000232c, process id 0xb98, application start time 0x01c9f8c04759c999.
    Error    6/29/2009 7:18:27 PM    Microsoft-Windows-WMI    10    None    "Event filter with query ""SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA ""Win32_Processor"" AND TargetInstance.LoadPercentage > 99"" could not be reactivated in namespace ""//./root/CIMV2"" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected."
    Error    6/29/2009 7:17:40 PM    PlugPlayManager    12    None    The device 'JMB38X xD Host Controller' (PCI\VEN_197B&DEV_2384&SUBSYS_30FB103C&REV_00\4&2a​995034&0&0428) disappeared from the system without first being prepared for removal.
    Error    6/29/2009 7:17:40 PM    PlugPlayManager    12    None    The device 'JMB38X MS Host Controller' (PCI\VEN_197B&DEV_2383&SUBSYS_30FB103C&REV_00\4&2a​995034&0&0328) disappeared from the system without first being prepared for removal.
    Error    6/29/2009 7:17:40 PM    PlugPlayManager    12    None    The device 'JMB38X SD Host Controller' (PCI\VEN_197B&DEV_2381&SUBSYS_30FB103C&REV_00\4&2a​995034&0&0228) disappeared from the system without first being prepared for removal.
    Error    6/29/2009 7:17:40 PM    PlugPlayManager    12    None    The device 'JMB38X SD/MMC Host Controller' (PCI\VEN_197B&DEV_2382&SUBSYS_30FB103C&REV_00\4&2a​995034&0&0028) disappeared from the system without first being prepared for removal.
    Error    6/29/2009 7:17:27 PM    Application Error    1000    (100)    Faulting application svchost.exe_BITS, version 6.0.6001.18000, time stamp 0x47919291, faulting module bitsigd.dll, version 7.0.6002.18005, time stamp 0x49e040c9, exception code 0xc0000005, fault offset 0x0000000000002339, process id 0xff8, application start time 0x01c9f8c023973749.
    Error    6/29/2009 7:16:26 PM    SecurityCenter    6    None    The Windows Security Center Service was unable to load instances of AntiSpywareProduct from WMI.
    Error    6/29/2009 7:16:20 PM    Application Error    1000    (100)    Faulting application svchost.exe_BITS, version 6.0.6001.18000, time stamp 0x47919291, faulting module bitsigd.dll, version 7.0.6002.18005, time stamp 0x49e040c9, exception code 0xc0000005, fault offset 0x000000000000232c, process id 0x1c4, application start time 0x01c9f8bf81c900ad.
    Error    6/29/2009 7:14:34 PM    Service Control Manager    7000    None    "The Norton Internet Security service failed to start due to the following error:
    The service did not respond to the start or control request in a timely fashion."
    Error    6/29/2009 7:14:34 PM    Service Control Manager    7009    None    A timeout was reached (30000 milliseconds) while waiting for the Norton Internet Security service to connect.
    Error    6/29/2009 7:14:34 PM    Service Control Manager    7000    None    "The GSRestartSvc service failed to start due to the following error:
    The system cannot find the file specified." 
    I can attach the full events if you want. Does this problem come from a hardware or an OS error? Please help.
    Thanks.

    I own two of the dv4-1275mx laptops.  Both machines were purchased around June of 2009 from Best Buy, and both machines are doing the same thing as what has been described here.  It appears that this model has a flaw, or certain production runs have this flaw.  I have done an exhaustive amount of research about this problem and know quite a bit about it.
    But, good luck getting HP's Customer No-Service department to actually fix the problem.  If you try to get them to fix it, you are about to be taken for quite a ride.  They have been absolutely no help in diagnosing the problem, and I am still trying to get them to acknowledge and fix the problem with these laptops.
    The problem is with the hard disk controller.  It happens both with the built-in hard disk drive and with an external disk drive.  It occurs whenever you place a significant load on the disk controller.  When the problem occurs, the machine will lock up for 60 seconds at a time.  Eventually, the driver times out, resets the disk, and drive activity continues until it happens again.
    Normally, the problem is very intermittent because normal usage of the machine doesn't always trigger this problem.  However, I have been able to replicate the problem consistently using a backup program called Macrium Reflect (free edition).  If I use Macrium Reflect to back up or restore the hard disk image using an external disk drive plugged into the eSATA port, then the ATAPI errors in the log appear immediately after it begins copying data.  Eventually, the backup will fail.
    The worst part about this is the data corruption.  If the lock-up condition occurs when a disk write is happening, then the data being written will get corrupted and you will lose data.  I have had disks that have gradually gotten so corrupted that they won't boot anymore.  I have spent a countless number of hours backing up, restoring, and attempting to use the machine despite these problems.
    After doing further research and testing, it appears that this type of problem is not uncommon to the AMD M780G / ATI Radeon 3200 chipset that these laptops use.  If you search the Internet for this type of error, you will find a number of HP and Non-HP customers complaining about this same issue with this chipset on different machines.  This leads me to believe that the problem might be with the chipset itself.
    I have attempted to find a software remedy to the problem with no success.  This included trying all of the following:
    - Trying Windows Vista 64 with various configurations of service packs and updates.
    - Using the Microsoft AHCI driver and the ATI-supplied AHCI driver for the Radeon 3200 series
    - Using different BIOS revisions
    - Trying to find a magic combination of AHCI driver and BIOS revision that would work
    - Trying various settings for disk write caching
    - Turning off DMA on the controller
    - Uninstalling every piece of HP-supplied software that I could think of that might have even the slightest possibility of interfering with the hard disk controller
    None of this worked.  Eventually, I concluded that this is a hardware problem that isn't going to get fixed through software.
    Sadly, after doing all of this research, HP has not been helpful with getting this problem fixed.  In the past, I have owned laptops from Dell and Toshiba, and while I know that no manufacturer is perfect, I have not had many issues getting prompt and reliable service from either of those two companies when I needed it.  This is the first time I have purchased an HP laptop.  And after the customer no-service experiences I have had, it is likely to be the last time I purchase an HP laptop.
    HP's technical support seems to think that re-imaging the drive to reset the machine to factory defaults cures all problems.  Over the phone, they have made me do this a couple of times, insisting that the problem is a result of a virus or something I installed on my machine.
    Recently, I sent both of my laptops to them for service.  To assist them, I included the details of all of the research that I had done.  I gave them copies of this information both electronically, and included inside the box with the laptop.  The information had details on the steps that I tried, and links to websites documenting problems with the M780G chipset.  It also had details on how to replicate the problem with Macrium Reflect.  I even re-imaged the machine back to the factory image, and installed Macrium Reflect for their convenience.  And what did HP's service department do?  They re-imaged the hard drive again and sent it back to me without even bothering to run my test!  This is just absolutely moronic.  Either the service technician can't read, or the service technician just doesn't care about fixing customer problems!
    My complaints about HP's service are as follows:
    There is no sense of urgency about fixing problems.  I have been enduring this issue for 9 months now.  But no one seems to care.
    It takes forever for them to turn around machines in their service department.  With Dell, I used to have a working laptop in my hands in 5 days, tops.  What's worse is that they ship the packages via FedEx Ground with signature confirmation.  (If you work a normal day job, good luck trying to get your package from FedEx Ground, which is only open when I am at work.)
    The telephone and online support personnel often don't listen to you, and the details of problems don't ever seem to be recorded anywhere.  I have had to explain this problem over and over for various people who seemed almost completely ignorant about the problems I have been experiencing.
    The technicians are not competent, even when you tell them what needs to be fixed.
    As far as I am concerned, HP has not been honoring the warranty on my machines.  This has been frustrating beyond words.  I am at the point where I just want two new laptops without this chipset, but I seriously doubt I will ever get that.  To all of you who are experiencing the same issue as I am, I wish you the best of luck in getting HP to resolve this problem--you are going to need it!

  • Can a disk utilization be more than 100% as shown from alert notification?

    Hello,
    I received an alert notification from the GRID. The percentage is 670.94%? What does this mean? Is it possible to go over 100%?
    Thank you.
    Severity=Critical
    Message=Disk Utilization for 10 is 670.94%, crossed warning (80) or critical (95) threshold.
    Notification Rule Name=Host Availability and Critical States
    Notification Rule Owner=SYSMAN
    Notification Count=1

    Known Issue:
    Disk Utilization/Disk Device Busy Metrics Displaying Values Above 100% on Windows, Generating Critical Alerts
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=303788.1
    DISK UTILIZATION METRIC FALSELY TRIGGERS ON WINDOWS
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=382802.1

  • Dot11 carrier busy is 100%

    Hello
    How is it possible that the configured channel on the AP shows 100% busy load when I run the command "dot11 carrier busy"? When I change the channel, the new channel then also shows 100% busy load.
    Is this interference or disturbing noise from neighbors?
    Any input is very welcome
    Oliver

    This might be a fault with the radio. Try resetting the AP and check again. You can also try upgrading the software.

  • OCIStmtExecute does not return immediately when client is busy.

    Hi.
    I'm testing a very busy multi-threaded client server that consistently generates
    a large number of simple queries through oci. The problem is that, when the
    server(client) is busy, OCIStmtExecute does not return immediately in
    non-blocking mode.
    Of course I have set non-blocking mode and OCIStmtExecute does return
    OCI_STILL_EXECUTING immediately when the server is not busy. But
    when log rotation occurs which concatenates a large text file (~500MB)
    onto an even larger text file (up to several gigabytes), or when I simply copy
    or concatenate large text files manually, OCIStmtExecute returns very slowly
    (roughly after 100-200 ms).
    However, while log rotation takes place, everything else, including the other OCI
    calls that come before OCIStmtExecute (prepare, define), returns fast. So
    to me it really seems that only OCIStmtExecute becomes extremely slow
    when the local server (especially the disk) is busy.
    Is there any way to let OCIStmtExecute immediately return all the time?
    Thanks in advance.

    Yes, I knew that OCIStmtExecute would be the only function that causes such
    delay and that was why I traced that call. And so far, I checked several times
    what happens at the exact moment on the server but everything was ok.
    Actually, OCIStmtExecute becomes slower exactly when the crontab-ed log rotation
    occurs, so I think this delay must be a client-side problem for now.
    This server is quite busy and has to respond fast, so it is important to
    guarantee fast response times while a small number of timeout losses are tolerable.
    But as OCIStmtExecute's first OCI_STILL_EXECUTING return takes hundreds of
    ms, it has become more like a blocking call, and currently I cannot find any way to do what I want.
    So now, every time such a thing happens, the thread waits
    quite a long time; after the first OCI_STILL_EXECUTING return
    the time difference exceeds the timeout limit, and the thread
    calls OCIBreak() and OCIReset() and returns.

  • Grid Control showing excessive amount of disk utilization.

    Hi Folks,
    I understand that Oracle doesn't support their products 100% on non Oracle VMs, but I thought I might take a stab at it.
    I decided to stick with a simple installation of Oracle Grid Control running on a Windows 2003 R2 32 bit os, here are the steps I took.
    1. Run Guest OS Windows 2003 R2 32 bit on a Windows 2008 x64 running Hyper-V (Yes I know why not EL5 running XEN, long story)
    2. Install Grid Control with Database and Agent versions 10.2.0.2.1
    3. Patch agent and oms to 10.2.0.5.0
    4. Some minor issues, but managed to get by those.
    - First off EM Website shows down, got to take a look at that, probably need to fix the beacon, not sure!
    Problem:
    - Grid shows that the host running em is over 100% disk utilization.
    - According to Metalink this is a known bug in Windows 2000 Performance Counters (not 2003?)
    - More specially Microsoft KB article Q310067.
    - Metalink Doc Id: 303788.1
    - Note, I'm running 2003 on a Hyper-V Guest, so anything can go wrong!
    The specific error is:
    Metric               Disk Device Busy (%)
    Disk Device               0 C:
    Severity               Critical
    Alert Triggered          Jun 3, 2009 7:57:54 PM
    Last Updated          Jun 4, 2009 3:48:48 PM
    Acknowledged          No
    Acknowledged By     n/a
    Message               Disk Utilization for 0 C: is 587.86%, crossed warning (80) or critical (95) threshold.
    More info:
    - Guest is running on a Dual Quad Core Xeon System with 16 GB of RAM
    - Guest has dedicated 4096MB of ram
    - Guest has dedicated 2 CPUS
    - Guest is configured with a Virtual Disk (specifically partition c:\) to be on local Hyper-V disk array
    - Windows 2008 Hyper-V host disk array is in a RAID 5 configuration running 10k SAS drives
    - Windows 2008 Hyper-V host utilization is literally at most 5% (current is 1%)
    - Windows 2008 Hyper-V disk utilization according to graphs is showing 100KB/s usage (less than 1%)
    Temporary solution is to turn off the metric, but not sure if that's such a good idea.
    - Note that 11g EM and 10g EM running on Hyper-V server had the same issue.
    - Note that 11g EM and 10g EM running without virtualization had no issues.
    Any thoughts?

    It may be related to Bug 8677212: BACKUP INFORMATION FOR SOME DATABASES SHOWS INCORRECTLY
    Although bug info says it for HP-UX, I've also seen the same problem on Grid Control 10.2.0.5 running on Solaris.
    You may want to apply 10.2.0.5.4 Grid Control Patch Set Update (PSU) [ID 1139563.1] to fix it.
    Best Regards,
    Gokhan Atil
    If this question is answered, please mark appropriate posts as correct/helpful and the thread as closed. Thanks

  • About Disk Load

    Hi,
    I am confused about disk load. When I use the Solaris Management Console to look at disk usage, I see an average of 600 KB/sec on the disk, but the disk busy time is almost 100%.
    I know that the I/O capacity of a disk must be much larger than 600 KB/sec, so I don't understand why disk usage is 100% when I/O is only 600 KB.
    My CPU shows about 50% wait time, and I guess the wait time is due to the disks.
    We have 4 disks with RAID 5.
    Two SPARC CPUs, 8 GB RAM, 20 GB swap, 64-bit environment.
    Any help will be useful
    Thanks in advance
    Sezai

    The %busy time is not what you expect! It is the percentage of the sample time for which there was at least one command active in the HBA.
    If you use iostat -xnz 1 for a 1-second sample interval and you see a drive at 100% busy, that just means there was at least 1 active command out in the HBA/disk for 100% of that second. Modern disk drives can execute more than 1 command at a time, so you can't use the %busy column to determine how busy a disk drive is. You can look at the actv column to see how many commands are running in parallel on that drive.
    Solaris implements a throttle for each drive, and when you exceed it you will see commands appear in the wait column, and that is bad.
    The %busy column is useful for identifying bursty workloads that drop large numbers of commands on the disk drive at the same time, at which point queueing theory kicks in.
    Make sure you don't have anything in the wait column of iostat -xnz 1, and watch that asvc_t doesn't get too big.
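    For example, to keep an eye on those columns (sample interval and count here are arbitrary):
    # extended stats (-x), logical device names (-n), skip idle devices (-z), 1-second samples, 30 samples;
    # wait should stay at 0, actv shows commands in flight, asvc_t is the average service time in ms
    iostat -xnz 1 30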
    tim
