Performance of dd in Solaris 10

I ran a test the other day of dd'ing an entire 9 GB drive to another 9 GB drive under Solaris 9 and Solaris 10. Solaris 10 was 2x faster. I know Sun had been working on performance enhancements, and had sped up system calls and such, but this seems like a huge improvement, especially for something that seems as low level as dd. Does anybody have an explanation of what made dd so much faster under Solaris 10?
I'll be looking forward to finding other features like this in Solaris 10!
SunOS 5.9 Generic_112233-05 sun4u sparc SUNW,UltraSPARC-IIi-Engine
# time dd bs=2048k if=/dev/rdsk/c1t1d0s2 of=/dev/rdsk/c1t6d0s2
4316+1 records in
4316+1 records out
real 59:22.2
user 0.0
sys 10.0
SunOS 5.10 s10_69 sun4u sparc SUNW,UltraSPARC-IIi-Engine
# time dd bs=2048k if=/dev/rdsk/c1t1d0s2 of=/dev/rdsk/c1t6d0s2
4316+1 records in
4316+1 records out
real 31:39.8
user 0.0
sys 1.6
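(For rough scale, back-computed from the record count and block size above, assuming 4316 full 2 MB records: about 8.6 GB copied, so roughly 8632 MB / 3562 s ≈ 2.4 MB/s under Solaris 9 versus 8632 MB / 1900 s ≈ 4.5 MB/s under Solaris 10.)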

That is a happy surprise. In my tests of dd over the years, we
max out at the media speed of the device.
Also, was this an x86 platform? If so, check to see if
DMA is enabled. prtconf -pv and look for
something like *-dma-enabled for the
appropriate driver.
-- richard
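A quick sketch of the check richard describes (the exact property name varies by driver, so this just narrows the prtconf output):
# dump the PROM device tree with properties and look for dma-related entries
prtconf -pv | grep -i dma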

Similar Messages

  • Native Performance pack for Intel Solaris

    Do we have native performance pack for Intel Solaris? We are using
    Weblogic 4.5.1
    Thanks
    Nagaraj

    You have to add that directory to your LD_LIBRARY_PATH, not PATH
    K.P. Patel <[email protected]> wrote in message
    news:[email protected]..
    never mind... I got it.. For some odd reason, I restarted weblogic a few
    times... and it picked up...
    thanks
    "K.P. Patel" <[email protected]> wrote in message
    news:3ad64fb6$[email protected]..
    yep, this property was set to true already... still doesn't load the
    performance pack
    "Kumar Allamraju" <[email protected]> wrote in message
    news:[email protected]..
    Add the following property in weblogic.properties file.
    weblogic.system.nativeIO.enable=true
    Kumar
    "K.P. Patel" wrote:
    How do I include/install performance pack on my Solaris system for WLS 5.1?
    I included $WAS_HOME/lib/solaris in my PATH in the startup script, which
    didn't help any
    thanks
    kp
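    For reference, a minimal sketch of the two changes discussed above (the $WAS_HOME path is taken from the original post; adjust it to your actual WebLogic install):
    # in the WebLogic startup script, before the JVM is launched
    LD_LIBRARY_PATH=$WAS_HOME/lib/solaris:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH
    # and in weblogic.properties
    weblogic.system.nativeIO.enable=true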

  • Performance started degrading in Solaris 9

    Dear all,
    I have one Sun Fire V880 with 4 CPUs and 8 GB of memory installed with Solaris 9. The system was installed around 2+ years ago and the server seemed quite good, with high performance, but these days the performance of the server seems to be degrading. There are a lot of idle sessions appearing in the system. The terminal process remains in the system even though the telnet session is over, and we have to manually kill the telnet process when it appears. Can anyone suggest what I can do to make the performance better and to kill the idle processes?
    Thanks in Advance.
    Bikash

    Hi Sonylwc,
    Yes the machine is running with Solaris 9. Please find here the output of prstat -a.
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    24367 kumari 90M 38M sleep 10 0 0:01:06 14% EX/1
    20476 remit 94M 38M sleep 59 0 0:00:20 0.7% EX/1
    23307 kumari 79M 32M sleep 59 0 0:00:05 0.6% EX/1
    19017 kumari 95M 40M sleep 59 0 0:00:47 0.6% EX/1
    16270 kumari 76M 34M sleep 59 0 0:00:12 0.5% EX/1
    25849 kumari 78M 26M sleep 59 0 0:00:01 0.5% EX/1
    19190 kumari 69M 37M sleep 51 0 0:03:25 0.5% DE.PHANTOM.CALL/1
    21205 kumari 114M 43M cpu3 39 0 0:01:07 0.4% EX/1
    23113 kumari 119M 44M sleep 59 0 0:01:01 0.4% EX/1
    17865 kumari 81M 39M sleep 59 0 0:00:28 0.3% EX/1
    26792 kumari 111M 35M sleep 44 0 0:00:29 0.3% EX/1
    24945 kumari 82M 26M sleep 59 0 0:00:02 0.3% EX/1
    442 root 9720K 7856K sleep 60 -8 0:37:48 0.3% jPML/1
    28682 kumari 92M 45M sleep 59 0 0:01:13 0.3% EX/1
    14699 kumari 112M 40M sleep 59 0 0:00:35 0.2% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    168 kumari 7567M 2913M 37% 0:34:51 21%
    2 remit 96M 39M 0.5% 0:00:20 0.7%
    149 root 450M 266M 3.1% 0:43:36 0.4%
    4 cash 176M 82M 1.0% 0:01:53 0.2%
    2 kblatm1 169M 68M 0.9% 0:23:43 0.2%
    Total: 344 processes, 487 lwps, load averages: 1.48, 1.30, 1.16
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    24367 kumari 91M 39M cpu3 0 0 0:01:22 13% EX/1
    4034 kblatm1 141M 38M sleep 29 10 0:22:56 3.4% OFS.CONNECTION./1
    13336 kumari 128M 35M sleep 60 0 0:00:14 1.3% EX/1
    19017 kumari 104M 38M sleep 19 0 0:00:48 0.8% EX/1
    14699 kumari 130M 40M sleep 54 0 0:00:36 0.7% EX/1
    20476 remit 104M 38M sleep 59 0 0:00:21 0.7% EX/1
    24402 kumari 103M 35M sleep 8 0 0:00:08 0.6% EX/1
    6906 cash 118M 42M sleep 0 0 0:01:03 0.6% EX/1
    19190 kumari 69M 37M sleep 54 0 0:03:25 0.5% DE.PHANTOM.CALL/1
    26202 kumari 43M 17M sleep 59 0 0:00:00 0.5% EX/1
    27897 kumari 91M 41M sleep 59 0 0:01:10 0.3% EX/1
    24182 kumari 125M 41M sleep 59 0 0:01:15 0.3% EX/1
    442 root 9720K 7856K sleep 59 -8 0:37:48 0.2% jPML/1
    24585 kumari 82M 42M sleep 59 0 0:01:02 0.2% EX/1
    1477 root 83M 21M sleep 59 0 0:00:16 0.2% Xsun/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    171 kumari 7653M 2943M 37% 0:35:15 21%
    2 kblatm1 212M 68M 0.9% 0:23:48 3.4%
    2 remit 106M 39M 0.5% 0:00:21 0.7%
    154 root 465M 276M 3.2% 0:43:36 0.7%
    4 cash 208M 83M 1.1% 0:01:53 0.6%
    Total: 352 processes, 495 lwps, load averages: 1.38, 1.29, 1.16
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    4034 kblatm1 110M 39M sleep 0 10 0:23:10 9.3% OFS.CONNECTION./1
    24367 kumari 91M 39M sleep 59 0 0:01:24 3.3% EX/1
    22943 kumari 112M 41M sleep 10 0 0:00:07 1.2% EX/1
    17044 kumari 92M 30M sleep 42 0 0:00:05 1.2% EX/1
    24790 csd 91M 29M sleep 52 0 0:00:05 0.9% EX/1
    24402 kumari 108M 35M sleep 59 0 0:00:09 0.8% EX/1
    26329 kumari 43M 17M sleep 51 0 0:00:00 0.8% EX/1
    20325 kumari 116M 42M sleep 34 0 0:01:31 0.7% EX/1
    19961 kumari 118M 41M sleep 59 0 0:00:54 0.5% EX/1
    14749 kumari 116M 39M sleep 52 0 0:00:22 0.5% EX/1
    19190 kumari 69M 37M sleep 60 0 0:03:26 0.5% DE.PHANTOM.CALL/1
    28947 cash 127M 39M sleep 59 0 0:00:50 0.5% EX/1
    13035 kumari 152M 39M cpu3 30 0 0:00:24 0.4% EX/1
    14699 kumari 88M 39M sleep 59 0 0:00:36 0.4% EX/1
    442 root 9720K 7856K sleep 59 -8 0:37:49 0.4% jPML/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    172 kumari 7805M 2965M 37% 0:35:31 14%
    2 kblatm1 180M 68M 0.9% 0:24:02 9.4%
    6 csd 275M 93M 1.2% 0:00:34 0.9%
    4 cash 247M 83M 1.1% 0:01:54 0.8%
    156 root 470M 279M 3.3% 0:43:37 0.6%
    Total: 355 processes, 498 lwps, load averages: 1.49, 1.33, 1.18
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    4034 kblatm1 132M 41M sleep 29 10 0:23:16 4.5% OFS.CONNECTION./1
    22943 kumari 124M 38M sleep 59 0 0:00:08 1.0% EX/1
    25434 kumari 89M 42M sleep 59 0 0:01:35 1.0% EX/1
    26457 kumari 65M 22M sleep 9 0 0:00:00 0.9% EX/1
    20476 remit 95M 38M sleep 59 0 0:00:22 0.8% EX/1
    25094 kumari 62M 25M sleep 29 10 0:00:02 0.8% OFS.CONNECTION./1
    19017 kumari 115M 40M sleep 59 0 0:00:49 0.8% EX/1
    24402 kumari 110M 37M sleep 59 0 0:00:10 0.7% EX/1
    24367 kumari 91M 39M sleep 59 0 0:01:24 0.6% EX/1
    28682 kumari 123M 46M sleep 51 0 0:01:14 0.6% EX/1
    19190 kumari 69M 37M sleep 29 0 0:03:26 0.5% DE.PHANTOM.CALL/1
    13035 kumari 159M 39M sleep 59 0 0:00:25 0.5% EX/1
    24790 csd 94M 29M cpu2 49 0 0:00:05 0.5% EX/1
    14699 kumari 103M 39M sleep 39 0 0:00:37 0.5% EX/1
    442 root 9720K 7856K sleep 57 -8 0:37:49 0.4% jPML/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    174 kumari 7756M 2963M 37% 0:35:35 12%
    2 kblatm1 202M 70M 0.9% 0:24:08 4.5%
    2 remit 97M 39M 0.5% 0:00:22 0.8%
    6 csd 245M 94M 1.2% 0:00:34 0.7%
    153 root 458M 271M 3.2% 0:43:37 0.6%
    Total: 354 processes, 496 lwps, load averages: 1.29, 1.30, 1.17
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    4034 kblatm1 128M 38M sleep 29 10 0:23:22 6.0% OFS.CONNECTION./1
    6906 cash 116M 42M sleep 53 0 0:01:05 0.8% EX/1
    19894 kumari 116M 38M sleep 59 0 0:00:16 0.8% EX/1
    13336 kumari 110M 37M sleep 59 0 0:00:16 0.6% EX/1
    16745 csd 74M 32M sleep 55 0 0:00:22 0.6% EX/1
    19190 kumari 69M 37M sleep 38 0 0:03:27 0.5% DE.PHANTOM.CALL/1
    26457 kumari 78M 26M sleep 59 0 0:00:01 0.5% EX/1
    17044 kumari 110M 33M sleep 59 0 0:00:06 0.5% EX/1
    25859 kumari 65M 25M sleep 60 0 0:00:01 0.5% EX/1
    24182 kumari 112M 41M sleep 59 0 0:01:16 0.5% EX/1
    19961 kumari 104M 42M sleep 44 0 0:00:55 0.4% EX/1
    20325 kumari 87M 42M sleep 59 0 0:01:32 0.4% EX/1
    442 root 9720K 7856K sleep 60 -8 0:37:50 0.4% jPML/1
    24402 kumari 93M 38M sleep 29 0 0:00:10 0.3% EX/1
    28682 kumari 97M 45M sleep 59 0 0:01:14 0.3% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    170 kumari 7524M 2937M 37% 0:35:46 8.8%
    2 kblatm1 198M 68M 0.9% 0:24:14 6.2%
    4 cash 225M 83M 1.0% 0:01:56 1.1%
    6 csd 255M 94M 1.2% 0:00:35 0.9%
    151 root 455M 269M 3.1% 0:43:38 0.4%
    Total: 348 processes, 490 lwps, load averages: 1.28, 1.29, 1.18
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    4034 kblatm1 142M 38M sleep 29 10 0:23:29 5.1% OFS.CONNECTION./1
    19223 kumari 242M 195M sleep 60 0 0:03:45 1.8% DE.PHANTOM.CALL/1
    25693 kumari 104M 24M sleep 59 0 0:00:02 1.0% EX/1
    26701 credit 43M 17M sleep 20 0 0:00:00 0.8% EX/1
    356 kumari 125M 49M sleep 52 0 0:01:10 0.7% EX/1
    19894 kumari 97M 38M sleep 59 0 0:00:16 0.6% EX/1
    27897 kumari 109M 42M sleep 59 0 0:01:11 0.5% EX/1
    13336 kumari 129M 36M sleep 59 0 0:00:17 0.5% EX/1
    19190 kumari 69M 37M sleep 54 0 0:03:28 0.5% DE.PHANTOM.CALL/1
    14699 kumari 106M 39M sleep 59 0 0:00:38 0.5% EX/1
    26702 csd 39M 10M sleep 60 5 0:00:00 0.5% SSELECT/1
    22943 kumari 118M 40M sleep 59 0 0:00:09 0.5% EX/1
    25410 kumari 76M 27M sleep 59 0 0:00:01 0.5% EX/1
    17865 kumari 121M 40M sleep 60 0 0:00:30 0.4% EX/1
    24790 csd 69M 28M sleep 19 0 0:00:05 0.4% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    168 kumari 7499M 2911M 37% 0:35:53 12%
    2 kblatm1 212M 68M 0.9% 0:24:21 5.1%
    7 csd 260M 104M 1.3% 0:00:35 1.4%
    2 credit 45M 18M 0.2% 0:00:00 0.8%
    4 cash 194M 83M 1.0% 0:01:56 0.4%
    Total: 349 processes, 491 lwps, load averages: 1.18, 1.27, 1.17
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    4034 kblatm1 132M 41M sleep 29 10 0:23:32 2.7% OFS.CONNECTION./1
    13336 kumari 111M 37M sleep 59 0 0:00:18 1.4% EX/1
    25906 kumari 64M 30M sleep 59 0 0:00:02 1.1% EX/1
    22314 kumari 90M 37M sleep 60 0 0:00:09 1.0% EX/1
    24182 kumari 101M 42M sleep 59 0 0:01:17 0.6% EX/1
    26792 kumari 106M 35M sleep 60 0 0:00:30 0.5% EX/1
    19190 kumari 69M 37M sleep 60 0 0:03:28 0.5% DE.PHANTOM.CALL/1
    22943 kumari 79M 40M sleep 59 0 0:00:10 0.5% EX/1
    23726 kumari 65M 26M sleep 59 0 0:00:03 0.5% EX/1
    25094 kumari 62M 25M sleep 29 10 0:00:03 0.5% OFS.CONNECTION./1
    26325 kumari 97M 44M sleep 59 0 0:01:06 0.4% EX/1
    14699 kumari 130M 39M sleep 59 0 0:00:39 0.4% EX/1
    2503 kumari 109M 36M sleep 59 0 0:00:37 0.4% EX/1
    19961 kumari 133M 41M sleep 60 0 0:00:55 0.4% EX/1
    19223 kumari 242M 195M sleep 60 0 0:03:45 0.4% DE.PHANTOM.CALL/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    166 kumari 7383M 2904M 37% 0:36:08 12%
    2 kblatm1 202M 70M 0.9% 0:24:25 2.7%
    6 csd 278M 94M 1.2% 0:00:37 0.5%
    153 root 464M 275M 3.2% 0:43:39 0.5%
    2 remit 120M 40M 0.5% 0:00:24 0.4%
    Total: 346 processes, 489 lwps, load averages: 1.14, 1.25, 1.17
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    19776 kumari 202M 86M sleep 59 0 0:04:12 2.9% EX/1
    22943 kumari 105M 40M sleep 59 0 0:00:11 1.1% EX/1
    19223 kumari 228M 195M sleep 60 0 0:03:46 0.9% DE.PHANTOM.CALL/1
    20325 kumari 122M 45M sleep 59 0 0:01:35 0.9% EX/1
    4034 kblatm1 112M 39M sleep 22 10 0:23:32 0.8% OFS.CONNECTION./1
    13336 kumari 126M 37M sleep 59 0 0:00:20 0.8% EX/1
    26964 kumari 43M 17M sleep 52 0 0:00:00 0.8% EX/1
    26909 kumari 43M 17M sleep 59 0 0:00:00 0.5% EX/1
    19190 kumari 69M 37M sleep 54 0 0:03:29 0.5% DE.PHANTOM.CALL/1
    22876 kumari 82M 37M sleep 59 0 0:00:16 0.5% EX/1
    16745 csd 74M 32M sleep 59 0 0:00:24 0.5% EX/1
    442 root 9720K 7856K sleep 60 -8 0:37:51 0.4% jPML/1
    25906 kumari 59M 30M sleep 59 0 0:00:03 0.4% EX/1
    24182 kumari 103M 42M sleep 59 0 0:01:18 0.3% EX/1
    26792 kumari 96M 36M sleep 59 0 0:00:30 0.3% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    167 kumari 7440M 2907M 37% 0:36:18 14%
    2 kblatm1 182M 69M 0.9% 0:24:25 0.9%
    153 root 464M 275M 3.2% 0:43:40 0.6%
    6 csd 216M 93M 1.2% 0:00:38 0.5%
    2 remit 114M 40M 0.5% 0:00:24 0.2%
    Total: 347 processes, 492 lwps, load averages: 1.04, 1.21, 1.16
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    4034 kblatm1 142M 38M sleep 29 10 0:23:34 2.2% OFS.CONNECTION./1
    27051 lc 43M 17M sleep 59 0 0:00:00 0.7% EX/1
    19017 kumari 81M 39M sleep 59 0 0:00:50 0.6% EX/1
    26739 kumari 91M 26M sleep 59 0 0:00:01 0.6% EX/1
    17865 kumari 106M 40M sleep 59 0 0:00:31 0.6% EX/1
    19776 kumari 202M 86M sleep 59 0 0:04:12 0.6% EX/1
    19190 kumari 69M 37M sleep 60 0 0:03:30 0.5% DE.PHANTOM.CALL/1
    17545 kumari 99M 41M sleep 59 0 0:00:23 0.5% EX/1
    20325 kumari 121M 45M sleep 59 0 0:01:35 0.4% EX/1
    25859 kumari 93M 29M sleep 59 0 0:00:02 0.4% EX/1
    23298 kumari 101M 29M sleep 59 0 0:00:09 0.4% EX/1
    25906 kumari 82M 35M sleep 44 0 0:00:03 0.4% EX/1
    19961 kumari 118M 41M sleep 59 0 0:00:56 0.3% EX/1
    19894 kumari 97M 38M sleep 59 0 0:00:17 0.3% EX/1
    22876 kumari 98M 37M sleep 59 0 0:00:16 0.3% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    163 kumari 7424M 2860M 36% 0:36:19 9.2%
    2 kblatm1 212M 68M 0.9% 0:24:27 2.2%
    4 lc 125M 51M 0.6% 0:00:14 0.8%
    154 root 468M 277M 3.2% 0:43:40 0.4%
    2 remit 104M 40M 0.5% 0:00:24 0.2%
    Total: 346 processes, 491 lwps, load averages: 0.96, 1.18, 1.15
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    19223 kumari 242M 195M sleep 60 0 0:03:52 5.2% DE.PHANTOM.CALL/1
    27099 kumari 63M 24M sleep 52 0 0:00:01 0.7% EX/1
    23893 kumari 102M 28M sleep 43 0 0:00:04 0.6% EX/1
    21205 kumari 113M 43M sleep 59 0 0:01:10 0.6% EX/1
    19190 kumari 69M 37M sleep 59 0 0:03:30 0.5% DE.PHANTOM.CALL/1
    16745 csd 100M 33M sleep 32 0 0:00:24 0.4% EX/1
    4034 kblatm1 142M 38M sleep 29 10 0:23:34 0.4% OFS.CONNECTION./1
    22314 kumari 113M 39M sleep 59 0 0:00:09 0.4% EX/1
    20476 remit 117M 39M sleep 59 0 0:00:25 0.3% EX/1
    19961 kumari 92M 40M sleep 59 0 0:00:57 0.3% EX/1
    17865 kumari 98M 40M sleep 59 0 0:00:31 0.3% EX/1
    442 root 9720K 7856K sleep 59 -8 0:37:52 0.3% jPML/1
    26739 kumari 76M 27M sleep 39 0 0:00:02 0.3% EX/1
    25094 kumari 76M 26M sleep 29 10 0:00:04 0.3% OFS.CONNECTION./1
    19894 kumari 79M 37M sleep 59 0 0:00:17 0.2% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    159 kumari 6992M 2810M 36% 0:36:29 12%
    7 csd 247M 95M 1.2% 0:00:38 0.4%
    2 kblatm1 212M 68M 0.9% 0:24:27 0.4%
    152 root 464M 274M 3.2% 0:43:41 0.4%
    2 remit 119M 40M 0.5% 0:00:25 0.3%
    Total: 341 processes, 486 lwps, load averages: 1.12, 1.20, 1.15
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    4034 kblatm1 142M 38M sleep 29 10 0:23:39 4.1% OFS.CONNECTION./1
    19223 kumari 242M 195M sleep 60 0 0:03:52 1.0% DE.PHANTOM.CALL/1
    26739 kumari 106M 34M sleep 59 0 0:00:03 0.9% EX/1
    17865 kumari 81M 39M sleep 59 0 0:00:32 0.7% EX/1
    24182 kumari 87M 41M sleep 59 0 0:01:19 0.5% EX/1
    19190 kumari 69M 37M sleep 60 0 0:03:31 0.5% DE.PHANTOM.CALL/1
    6906 cash 115M 41M sleep 59 0 0:01:07 0.5% EX/1
    27099 kumari 109M 27M sleep 59 0 0:00:01 0.4% EX/1
    27207 kumari 77M 20M sleep 59 0 0:00:00 0.4% EX/1
    26202 kumari 84M 25M sleep 59 0 0:00:01 0.3% EX/1
    21205 kumari 85M 43M sleep 59 0 0:01:10 0.3% EX/1
    442 root 9720K 7856K sleep 59 -8 0:37:52 0.3% jPML/1
    25859 kumari 83M 32M sleep 59 0 0:00:03 0.2% EX/1
    23893 kumari 88M 27M sleep 59 0 0:00:04 0.2% EX/1
    20325 kumari 119M 44M sleep 59 0 0:01:36 0.2% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    158 kumari 7089M 2805M 35% 0:36:34 6.9%
    2 kblatm1 212M 68M 0.9% 0:24:32 4.2%
    4 cash 224M 82M 1.0% 0:01:58 0.5%
    152 root 464M 274M 3.2% 0:43:41 0.3%
    2 remit 119M 40M 0.5% 0:00:25 0.2%
    Total: 339 processes, 481 lwps, load averages: 0.97, 1.15, 1.14
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    27302 kumari 42M 13M cpu0 26 5 0:00:03 3.6% SSELECT/1
    23893 kumari 82M 33M sleep 59 0 0:00:07 3.0% EX/1
    25529 kumari 92M 29M cpu1 50 0 0:00:01 1.4% EX/1
    26739 kumari 105M 34M sleep 59 0 0:00:04 0.9% EX/1
    24585 kumari 88M 40M sleep 52 0 0:01:04 0.8% EX/1
    4034 kblatm1 142M 38M sleep 29 10 0:23:39 0.8% OFS.CONNECTION./1
    27289 kumari 80M 21M sleep 59 0 0:00:00 0.7% EX/1
    25434 kumari 120M 44M sleep 59 0 0:01:37 0.6% EX/1
    19190 kumari 69M 37M sleep 60 0 0:03:31 0.5% DE.PHANTOM.CALL/1
    23113 kumari 119M 43M sleep 59 0 0:01:04 0.5% EX/1
    21205 kumari 105M 43M sleep 59 0 0:01:11 0.4% EX/1
    442 root 9720K 7856K sleep 60 -8 0:37:53 0.4% jPML/1
    20325 kumari 122M 45M sleep 59 0 0:01:36 0.4% EX/1
    19223 kumari 242M 196M sleep 41 0 0:03:52 0.3% DE.PHANTOM.CALL/1
    356 kumari 125M 48M sleep 59 0 0:01:11 0.3% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    160 kumari 7385M 2840M 36% 0:36:49 16%
    2 kblatm1 212M 68M 0.9% 0:24:32 0.8%
    151 root 462M 273M 3.2% 0:43:42 0.4%
    4 cash 223M 82M 1.0% 0:01:58 0.2%
    2 remit 114M 41M 0.5% 0:00:25 0.1%
    Total: 338 processes, 480 lwps, load averages: 0.96, 1.12, 1.13
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    27302 kumari 42M 26M cpu0 17 5 0:00:17 9.4% SSELECT/1
    4034 kblatm1 141M 38M sleep 29 10 0:23:45 3.8% OFS.CONNECTION./1
    28682 kumari 126M 48M sleep 59 0 0:01:17 1.3% EX/1
    19017 kumari 82M 40M sleep 59 0 0:00:52 1.3% EX/1
    25693 kumari 90M 28M sleep 53 0 0:00:03 0.8% EX/1
    19223 kumari 257M 195M sleep 60 0 0:03:54 0.7% DE.PHANTOM.CALL/1
    25434 kumari 102M 43M sleep 59 0 0:01:38 0.7% EX/1
    24585 kumari 82M 41M sleep 59 0 0:01:05 0.7% EX/1
    27384 kumari 50M 21M sleep 59 0 0:00:00 0.7% EX/1
    23893 kumari 82M 33M sleep 59 0 0:00:07 0.6% EX/1
    27897 kumari 131M 41M sleep 59 0 0:01:13 0.5% EX/1
    20325 kumari 89M 44M sleep 59 0 0:01:37 0.5% EX/1
    19190 kumari 69M 37M sleep 59 0 0:03:32 0.5% DE.PHANTOM.CALL/1
    25529 kumari 93M 31M sleep 59 0 0:00:02 0.5% EX/1
    23113 kumari 115M 42M sleep 59 0 0:01:04 0.3% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    157 kumari 7146M 2809M 36% 0:35:54 21%
    2 kblatm1 211M 68M 0.9% 0:24:38 3.8%
    150 root 460M 271M 3.2% 0:43:43 0.4%
    2 remit 96M 40M 0.5% 0:00:26 0.2%
    4 cash 199M 82M 1.0% 0:01:59 0.2%
    Total: 334 processes, 476 lwps, load averages: 1.13, 1.15, 1.14
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    27302 kumari 46M 30M cpu3 17 5 0:00:31 11% SSELECT/1
    4034 kblatm1 110M 39M sleep 0 10 0:23:55 6.4% OFS.CONNECTION./1
    25693 kumari 90M 37M sleep 43 0 0:00:07 2.4% EX/1
    22876 kumari 104M 38M sleep 59 0 0:00:17 0.9% EX/1
    26739 kumari 74M 34M sleep 59 0 0:00:07 0.9% EX/1
    28682 kumari 126M 48M sleep 59 0 0:01:18 0.9% EX/1
    2503 kumari 77M 36M sleep 54 0 0:00:38 0.5% EX/1
    356 kumari 94M 47M sleep 59 0 0:01:12 0.5% EX/1
    19190 kumari 69M 37M sleep 60 0 0:03:32 0.5% DE.PHANTOM.CALL/1
    14749 kumari 94M 39M cpu0 49 0 0:00:23 0.4% EX/1
    27384 kumari 80M 26M sleep 59 0 0:00:01 0.3% EX/1
    442 root 9720K 7856K sleep 60 -8 0:37:53 0.3% jPML/1
    19017 kumari 82M 40M sleep 59 0 0:00:52 0.2% EX/1
    23113 kumari 117M 43M sleep 59 0 0:01:04 0.2% EX/1
    27289 kumari 77M 26M sleep 59 0 0:00:01 0.2% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    157 kumari 7067M 2832M 36% 0:36:23 22%
    2 kblatm1 180M 69M 0.9% 0:24:48 6.4%
    150 root 460M 271M 3.2% 0:43:43 0.3%
    3 kblatm2 72M 27M 0.3% 0:00:25 0.1%
    2 remit 96M 40M 0.5% 0:00:26 0.0%
    Total: 334 processes, 477 lwps, load averages: 1.41, 1.22, 1.16
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    27302 kumari 46M 30M cpu2 6 5 0:00:48 13% SSELECT/1
    27533 root 4624K 2176K cpu3 32 0 0:00:03 3.1% sendmail/1
    4034 kblatm1 103M 41M sleep 29 10 0:23:57 2.0% OFS.CONNECTION./1
    25693 kumari 91M 40M sleep 59 0 0:00:09 1.7% EX/1
    27534 root 4056K 2832K cpu3 52 0 0:00:01 1.5% mail.local/1
    19961 kumari 87M 41M sleep 59 0 0:00:58 0.7% EX/1
    19190 kumari 69M 37M sleep 59 0 0:03:33 0.5% DE.PHANTOM.CALL/1
    22876 kumari 109M 39M sleep 59 0 0:00:18 0.5% EX/1
    26325 kumari 91M 44M sleep 59 0 0:01:07 0.4% EX/1
    25094 kumari 63M 26M sleep 29 10 0:00:04 0.4% OFS.CONNECTION./1
    26739 kumari 103M 35M sleep 59 0 0:00:07 0.3% EX/1
    14749 kumari 101M 39M sleep 59 0 0:00:23 0.3% EX/1
    28682 kumari 126M 48M sleep 59 0 0:01:18 0.3% EX/1
    24402 kumari 123M 36M sleep 59 0 0:00:13 0.3% EX/1
    27535 kumari 41M 16M sleep 29 10 0:00:00 0.2% OFS.CONNECTION./1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    158 kumari 7010M 2844M 36% 0:36:46 20%
    152 root 469M 276M 3.2% 0:43:47 4.8%
    2 kblatm1 173M 71M 0.9% 0:24:50 2.0%
    3 kblatm2 72M 27M 0.3% 0:00:25 0.0%
    2 remit 96M 40M 0.5% 0:00:26 0.0%
    Total: 337 processes, 482 lwps, load averages: 1.28, 1.21, 1.16
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    27302 kumari 46M 30M sleep 60 5 0:01:03 12% SSELECT/1
    4034 kblatm1 114M 38M sleep 29 10 0:24:07 7.2% OFS.CONNECTION./1
    14699 kumari 114M 40M sleep 51 0 0:00:40 1.1% EX/1
    26739 kumari 108M 36M sleep 59 0 0:00:09 0.9% EX/1
    28682 kumari 93M 46M sleep 59 0 0:01:19 0.8% EX/1
    24402 kumari 105M 38M sleep 59 0 0:00:14 0.7% EX/1
    27566 kumari 85M 24M sleep 59 0 0:00:01 0.7% EX/1
    25434 kumari 119M 43M sleep 59 0 0:01:39 0.6% EX/1
    27384 kumari 103M 30M sleep 59 0 0:00:02 0.6% EX/1
    19961 kumari 106M 41M sleep 54 0 0:00:58 0.6% EX/1
    19190 kumari 69M 37M sleep 59 0 0:03:34 0.5% DE.PHANTOM.CALL/1
    14749 kumari 122M 40M sleep 59 0 0:00:24 0.4% EX/1
    23726 kumari 100M 30M sleep 59 0 0:00:04 0.3% EX/1
    25693 kumari 91M 40M sleep 59 0 0:00:09 0.3% EX/1
    19894 kumari 108M 37M sleep 59 0 0:00:18 0.3% EX/1
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    157 kumari 7251M 2848M 36% 0:37:13 22%
    2 kblatm1 184M 68M 0.9% 0:25:00 7.2%
    4 cash 232M 83M 1.0% 0:02:00 0.4%
    150 root 460M 271M 3.2% 0:43:44 0.2%
    6 csd 284M 94M 1.2% 0:00:38 0.1%
    Total: 334 processes, 477 lwps, load averages: 1.44, 1.25, 1.18
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    27302 kumari 47M 30M cpu2 38 5 0:01:16 11% SSELECT/1
    4034 kblatm1 128M 38M sleep 29 10 0:24:12 4.7% OFS.CONNECTION./1
    26202 kumari 83M 31M sleep 59 0 0:00:06 3.0% EX/1
    26739 kumari 78M 36M sleep 59 0 0:00:11 1.8% EX/1
    19017 kumari 116M 42M sleep 59 0 0:00:54 1.0% EX/1
    28947 cash 105M 40M sleep 52 0 0:00:53 0.9% EX/1
    22314 kumari 112M 41M sleep 8 0 0:00:10 0.8% EX/1
    23893 kumari 103M 33M sleep 59 0 0:00:08 0.7% EX/1
    17865 kumari 115M 42M sleep 59 0 0:00:34 0.6% EX/1
    23726 kumari 105M 31M sleep 59 0 0:00:05 0.6% EX/1
    27566 kumari 108M 25M sleep 59 0 0:00:02 0.5% EX/1
    6906 cash 86M 42M sleep 59 0 0:01:09 0.5% EX/1
    19190 kumari 69M 37M sleep 60 0 0:03:34 0.4% DE.PHANTOM.CALL/1
    442 root 9720K 7856K sleep 59 -8 0:37:54 0.4% jPML/1
    25434 kumari 118M 43M sleep 59 0 0:01:40 0.4% EX/1
    Bikash
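    A hedged sketch of commands that can help track down the idle telnet sessions described above (standard Solaris tools; pts/7 is only an example terminal name):
    # list login sessions with their idle times
    who -u
    # show the processes attached to a given idle terminal, then kill them if appropriate
    ps -ft pts/7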

  • Sun JVM Performance Issue in Sun Solaris 10 (SPARC)

    Hi,
    Issue : Performance issue after the migration of a Java application from IBM-AIX 5 to Sun Solaris 10 (SPARC)
    I am facing performance issue after the migration of a Java application from IBM-AIX 5.3 to Sun Solaris 10 (SPARC).
     Normally the application takes less than 1 hour to complete the process in AIX, but after migration in Solaris the application is taking 4+ hours.
    The Java version of IBM AIX is ,
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pap32dev-20051104)
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 AIX ppc-32 j9vmap3223-20051103 (JIT enabled)
    The Java version of Solaris 10 is,
    Java(TM) Platform, Standard Edition for Business (build 1.5.0_17-b04)
    Java HotSpot(TM) Server VM (build 1.5.0_17-b04, mixed mode)
    Description of Application
    The application merges 2 XML files of size 300 MB each using a DOM parser and generates a flat file according to certain business logic. No remote files are used for the file generation. There are two folders with around 200 XML files of similar names in each folder. The application loads 2 matching XML files at a time, one from each folder, and processes them; it loops this way through all 200 XML file pairs.
    The JVM Parameters are given below.
    /usr/java5/bin/java -cp $CLASSPATH -Xms3072m -Xmx3072M com.db.mcc.creditderiv.GCDXMLTransProc
    Here the extended swap memory in AIX is 3072 (3 GB). After copying the same code to Solaris, the
    application started throwing java.lang.OutOfMemoryError, so we increased the swap memory up to 12 GB.
    Since 32-bit Java allows a maximum of 4 GB of extended memory, we started using 64-bit Java in Solaris using the -d64 argument.
    The Current JVM Parameter in Solaris is given below.
    java -d64 -cp $CLASSPATH -Xms8192m -Xmx12288m com.db.mcc.creditderiv.GCDXMLTransProc ( 64 GB Swap Memory is available in the System)
    We have tried the following options
    1.       Extended the heap size up to 12 GB using the -Xms and -Xmx parameters and tried multiple -XX options. Earlier the application was working fine in AIX with a 3.5 GB extended heap size. ( 64 GB Swap Memory is available in the System)
    2.       Downloaded and installed the Solaris SPARC Patches from the website,
         http://java.sun.com/javase/downloads/index_jdk5.jsp
    4.   Downloaded and installed XML and XSLT patch from sun website
    5.       Tried to run the Java in server mode using -server option.

    A 64 bit VM is not necessarily faster than a 32 bit one. I remember at least one suggestion that it could be slower.
    Make sure you use the -server option.
    As a guess, IBM isn't necessarily a slouch when it comes to Java. It might simply be that their VM was faster. It could have used a different DOM library as well.
    Could be an environment problem of course.
    Profiling the application and the machine as well might provide information.
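    A hedged sketch of the kind of invocation the reply suggests experimenting with (class name, classpath and heap sizes are taken from the original post; -server and -verbose:gc are the additions to try):
    /usr/java5/bin/java -server -d64 -Xms8192m -Xmx12288m -verbose:gc \
        -cp $CLASSPATH com.db.mcc.creditderiv.GCDXMLTransProc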

  • Performance Monitor Agent Configuration Solaris

    Hi there,
    Was wondering if anyone here has managed to set up the Performance Monitor on
    Solaris?
    I've followed the instructions in the manual to the letter but can't seem to get
    the agent hooked up on the console. I know that the agent is running but when
    I click on the Performance Monitor -> Configuration -> Agent, the console does
    not display any Agent.
    Can any one here please help me?
    Cheers
    Zeus.

    Per,
    Could you please list the config steps performed including all the DB operation you performed for setting up the new DB?
    Regards,
    Deepak

  • CBLAS in sun performance library coming with solaris studio 12.2

    Hello,
    In the sunperf library that comes with Sun Studio 12u1 (Linux x86) I can make CBLAS calls using the standard names cblas_xxxx. This interface does not appear in sunperf.h (you can use the standard cblas.h), but the objects are in libsunperf.(a|so).
    But in the Solaris Studio 12.2 libsunperf the cblas_xxxx objects do not exist. Is this normal? Does libsunperf contain a standard C BLAS interface?
    Thanks

    Hello again,
    In this blog post
    http://www.mlds-networks.com/index.php/component/option,com_mojo/Itemid,29/p,35/
    it is explained how to link the ACML (AMD Core Math Library) in order to use the standard CBLAS interface (ACML does not provide a standard CBLAS). I tried it and everything runs OK. I did the same for the Sun Performance Library in Solaris Studio 12.2 and the compilation process runs OK, but in the testing step every function fails because of an incorrect argument.
    Is there any way to use the standard CBLAS interface with the Sun Performance Library? As I noted in my previous post, the Sun Studio 12.1 libsunperf contains the standard CBLAS interface, but I would like to use version 12.2.
    Thanks
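    One quick way to confirm what the 12.2 libsunperf actually exports is to inspect its symbol table (a sketch; the library path depends on where Solaris Studio 12.2 is installed, /opt/solstudio12.2 is only a guess):
    # list exported symbols and filter for the CBLAS entry points
    nm /opt/solstudio12.2/lib/libsunperf.so | grep cblas_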

  • Tuxedo8 core dumps when performing a tpcall in Solaris

    Hi all,
    I'm installing a Tuxedo application on a Solaris OS:
    SunOS 5.8 Generic_108528-13 sun4u sparc SUNW,Sun-Fire-280R
    Tuxedo 8.0 compiled under 32bits libraries.
    This application also runs correctly under a RedHat Linux 7.1 (kernel 2.4.9-31)
    and on a Digital (OSF1 V4.0 878 alpha)
    When running the application on Solaris, we always get a core dump when the service
    performs a tpcall. Debugging the Tuxedo server we can see the core dump is produced
    when the service gets the response from the tpcall, i.e. when the service called
    performs the tpreturn.
    Any clues will be appreciated, thanks!
    Yol.

    Oh, thanks very much, what a stupid mistake! sorry :(
    We knew about the cast, but after looking for the problem in many ways we didn't
    realize FLDLEN was a short!
    Thank you all for your quick help!
    Yol.
    Scott Orshan <[email protected]> wrote:
    FLDLEN nLongitud is a short. Casting its pointer to a long * does not change
    the fact that the return value will overwrite other memory. On Linux, the
    alignment or arrangement of the stack was different, so it didn't core dump.
    You need to pass the address of a real long for the return length.
         Scott Orshan
    Yol. wrote:
    Yes, that's what we thought at first sight; nevertheless, remember it runs OK in
    other OSes.
    Anyway here I give you 2 samples of code we've tried.
    Any of this cases fail creating a core dump.
    Case 1:
    Src1 calls Src2:
    Src1:
    void SRC1(TPSVCINFO * BufferFml)
    {
        FLDLEN nLongitud;
        FBFR *pBuffer;
        pBuffer = (FBFR *) BufferFml->data;
        if (tpcall("SRC2", (char *) pBuffer, 0, (char **) &pBuffer, (long *) &nLongitud, 0) == -1)
            userlog("Error!!!!!!!!!!!!!!!!!");
        tpreturn(TPSUCCESS, 0, (char *) pBuffer, 0L, 0);
    }
    Src2
    void SRC2(TPSVCINFO * BufferFml)
    {
        FLDLEN nLongitud;
        FBFR *pBuffer;
        pBuffer = (FBFR *) BufferFml->data;
        tpreturn(TPSUCCESS, 0, (char *) pBuffer, 0L, 0);
    }
    Case 2:
    Src1 calls Src2:
    Src1:
    The same as in case 1
    Src2:
    void SRC2(TPSVCINFO * BufferFml)
    {
        tpreturn(TPSUCCESS, 0, NULL, 0L, 0);
    }
    Thanks anyway for your attention ;-)
    Peter Holditch <[email protected]> wrote:
    Yol,
    My initial guess is that your code is not keeping track of the tpalloced
    buffers correctly - in particular, the one that the reply is received
    into.
    If you post some code, maybe someone will see the error. Alternatively,
    have you got Purify or some other bounds-checking software that might
    help you track the problem?
    Regards,
    Peter.
    Yol. wrote:
    Hi all,
    I'm installing a Tuxedo application on a Solaris OS:
    SunOS 5.8 Generic_108528-13 sun4u sparc SUNW,Sun-Fire-280R
    Tuxedo 8.0 compiled under 32bits libraries.
    This application also runs correctly under a RedHat Linux 7.1 (kernel 2.4.9-31)
    and on a Digital (OSF1 V4.0 878 alpha)
    When running the application on Solaris, we always get a core dump when the service
    performs a tpcall. Debugging the Tuxedo server we can see the core dump is produced
    when the service gets the response from the tpcall, i.e. when the service called
    performs the tpreturn.
    Any clues will be appreciated, thanks!
    Yol.

  • Does the sp8 performance pack work for Solaris?

    I'm finally building our QA environment today, and I'll be setting up a
    Solaris cluster...
    I've just seen that the Linux and HP-UX performance packs no longer work as
    of sp8. Is anyone using them successfully on Solaris?
    thanks,
    John Stotler

    The iPod FAQ does not exclude the mini when referring to the World Adapter Kit.
    (10721)

  • Essbase Performance Issues on Sun Solaris 10

    We have a new Hyperion Environment 11.1.1.3 with Essbase sitting on a Solaris box. We are running a calculation script under the "FINSTMT" database that is called CALCALL. This is the default calculation for a database in Essbase (it runs a command called CALC ALL). We are running this same calc against the same database outline and data set across the environments to benchmark performance.
    The script in the new environment should run faster, but it runs slower. The server is basically sleeping, and we were curious whether anyone can recommend configurations within the app or for the OS, things like semaphores, shared memory, etc. Also, does anyone have suggestions or ideas to tweak Essbase performance on a Solaris 10 machine and/or UNIX? What should I do to the Essbase.cfg file?
    Mike

    I can't help you with Solaris tuning, but here are some things to look at.
    1. Is the Essbase.cfg file the same on both servers? You might have parallel calculation turned on in one and not the other. Caches could also be set differently
    2. Are the database caches set the same? This could impact performance as well
    3. Are you doing an apples to apples comparison? Is one database loaded and recalculated many times while the other is not (or restructured or reloaded)
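    If parallel calculation turns out to be the difference, the relevant essbase.cfg settings look roughly like the sketch below (the values are placeholders to tune, not recommendations):
    CALCPARALLEL 4
    CALCTASKDIMS 2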

  • Performance issues (Oracle 9i Solaris 9)

    Hi Guys,
    How do I tell if my database is performing at its optimum level? We seem to be having performance issues on one of our applications. They are saying it's the database, network, etc.
    Thank you.

    Hi,
    In order to determine whether or not your database is having performance issues, you will need to install and execute Statspack. Statspack is a utility which provides information about the performance parameters of an Oracle database.
    If you are already using the Statspack report for performance analysis, post a snapshot of the report.
    Regards,
    Prosenjit Mukherjee.
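    For reference, the usual Statspack workflow on 9i is roughly the following (a sketch; spcreate.sql prompts for the PERFSTAT password and tablespaces):
    # one-time install of the PERFSTAT schema, as a DBA
    sqlplus "/ as sysdba" @?/rdbms/admin/spcreate.sql
    # connected as PERFSTAT, take one snapshot before and one after the slow period
    SQL> exec statspack.snap;
    # then generate the report between the two snapshot ids
    SQL> @?/rdbms/admin/spreport.sql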

  • Performance Problem For Sun Solaris Kernel

    Hello,
    I have DB version 10.2.0.2 on Sun Solaris 5.10. When I run the top utility I see that the kernel takes 20-25% of the CPU.
    Then I truss the DB writer, and in the output file I see errors like this: kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    On Metalink I read that this is caused by using direct I/O on the Solaris UFS file system. So I changed the init parameter filesystemio_options from setall to asynch. Now the value of this parameter is asynch, but when I truss the DB writer I still see the same error: kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    Has anyone seen the same problem?

    The file system is UFS. The mount option is forcedirectio.
    This is example:
    14051/169:     pwrite(341, "06A2\0\01505D1 <8797 DBA".., 8192, 0xBA278000) = 8192
    14051/1:     lwp_unpark(171)                         = 0
    14051/171:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(172)                         = 0
    14051/172:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(173)                         = 0
    14051/170:     pwrite(367, "06A2\0\01BAA .BA8797 C #".., 8192, 0x545D74000) = 8192
    14051/173:     lwp_park(0x00000000, 0)                    = 0
    14051/172:     pwrite(369, "06A2\0\01C 88DE68797 DBA".., 8192, 0x711BCC000) = 8192
    14051/173:     pwrite(370, " $A2\0\01C p07 z8797 D13".., 8192, 25785483264) = 8192
    14051/1:     lwp_unpark(174)                         = 0
    14051/174:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(175)                         = 0
    14051/175:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(176)                         = 0
    14051/176:     lwp_park(0x00000000, 0)                    = 0
    14051/171:     pwrite(367, "06A2\0\01BABB6 p8797 C #".., 8192, 0x576CE0000) = 8192
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/174:     pwrite(370, "06A2\0\01C } # W8797 =B6".., 106496, 0x7A46AE000) = 106496
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/175:     pwrite(374, "06A2\0\01D E - 28797 C #".., 8192, 0xA5A64000) = 8192
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/176:     pwrite(375, "06A2\0\01D8B 7 Z8796 J96".., 8192, 0x166EB4000) = 8192
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/1:     kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF)          Err#22 EINVAL
    14051/156:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/153:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/152:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/157:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/154:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/155:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/161:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/160:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/158:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/163:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/162:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/159:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/164:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/165:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/166:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/167:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/168:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/169:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/173:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/170:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/172:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/174:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/175:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/171:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/1:     semtimedop(16777258, 0xFFFFFFFF7FFFDEB4, 1, 0xFFFFFFFF7FFFDEA0) (sleeping...)
    14051/176:     lwp_park(0x00000000, 0)          (sleeping...)
    14051/1:     semtimedop(16777258, 0xFFFFFFFF7FFFDEB4, 1, 0xFFFFFFFF7FFFDEA0) = 0
    14051/1:     yield()                              = 0
    14051/1:     yield()                              = 0
    14051/1:     yield()                              = 0
    14051/1:     yield()                              = 0
    14051/1:     lwp_unpark(177)                         = 0
    14051/177:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     yield()                              = 0
    14051/1:     lwp_unpark(178)                         = 0
    14051/178:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     yield()                              = 0
    14051/1:     lwp_unpark(179)                         = 0
    14051/179:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     yield()                              = 0
    14051/1:     lwp_unpark(180)                         = 0
    14051/180:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(181)                         = 0
    14051/181:     lwp_park(0x00000000, 0)                    = 0
    14051/182:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(182)                         = 0
    14051/1:     lwp_unpark(183)                         = 0
    14051/183:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(184)                         = 0
    14051/184:     lwp_park(0x00000000, 0)                    = 0
    14051/177:     pwrite(258, "06A2\0\0\0 @0602879719EA".., 8192, 12599296) = 8192
    14051/1:     lwp_unpark(185)                         = 0
    14051/185:     lwp_park(0x00000000, 0)                    = 0
    14051/181:     pwrite(259, " &A2\0\0\080\09987977F P".., 8192, 1253376) = 8192
    14051/186:     lwp_park(0x00000000, 0)                    = 0
    14051/182:     pwrite(259, " &A2\0\0\0800189879783\t".., 8192, 3219456) = 8192
    14051/184:     pwrite(259, " &A2\0\0\08002C987978114".., 8192, 5840896) = 8192
    14051/180:     pwrite(259, " &A2\0\0\080\0 )879782F9".., 8192, 335872) = 8192
    14051/1:     lwp_unpark(186)                         = 0
    14051/185:     pwrite(259, " &A2\0\0\08004A98797 x ~".., 8192, 9773056) = 8192
    14051/1:     lwp_unpark(187)                         = 0
    14051/187:     lwp_park(0x00000000, 0)                    = 0
    14051/1:     lwp_unpark(188)                         = 0
    14051/1:     yield()                              = 0
    14051/188:     lwp_park(0x00000000, 0)                    = 0
    14051/186:     pwrite(259, " &A2\0\0\08005 98796 pA4".., 8192, 10952704) = 8192
    14051/1:     lwp_unpark(189)                         = 0
    14051/187:     pwrite(259, " &A2\0\0\08005998797 {FF".., 8192, 11739136) = 8192
    14051/1:     lwp_unpark(190)                         = 0
    14051/188:     pwrite(259, " &A2\0\0\08006 987977F95".., 8192, 13049856) = 8192
    14051/1:     lwp_unpark(191)                         = 0
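    A hedged sketch of how to double-check the two settings mentioned above (standard Solaris and SQL*Plus commands):
    # confirm the forcedirectio option on the datafile file systems
    mount -v | grep directio
    # and in SQL*Plus, confirm the parameter actually in effect
    SQL> show parameter filesystemio_options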

  • JCaps 5.1.3 Sun Solaris CPU performance issue

    Folks,
    We are experiencing a serious CPU performance issue on our Solaris server with HL7 projects deployed.
    The projects consist of the sample HL7 inbound and outbound projects with an additional service sending to a batch local file external for writing journals.
    The performance issue occurs when there is a volume of data in the queues/topics. As we continue to deploy additional HL7 projects (usually about 6 interfaces), the CPU usage increases until it reaches 100%.
    This snapshot is prstat when no data is transmitting through the interfaces (one inbound, one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    15598 jre 379M 177M sleep 59 0 2:49:11 3.1% eManager/74
    21549 phs 1174M 1037M sleep 59 0 14:49:00 2.5% is_dm_phs/113
    23090 phs 3456K 3136K cpu1 59 0 0:00:01 0.4% prstat/1
    23102 phs 3792K 3496K sleep 59 0 0:00:00 0.2% prstat/1
    21550 phs 46M 35M sleep 59 0 0:13:27 0.1% stcms.exe/3
    1272 noaccess 209M 95M sleep 59 0 0:26:30 0.1% java/25
    11733 jre 420M 212M sleep 59 0 1:35:40 0.1% java/34
    131 root 4368K 2480K sleep 59 0 0:02:10 0.1% nscd/30
    23094 phs 3064K 2168K sleep 59 0 0:00:00 0.1% bash/1
    This snapshot is prstat when data is transmitting through the interfaces (one inbound, one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    21549 phs 1174M 1037M cpu1 20 0 14:51:20 88% is_dm_phs/113
    15598 jre 379M 181M sleep 59 0 2:49:18 1.3% eManager/74
    21550 phs 46M 35M sleep 49 0 0:13:29 1.2% stcms.exe/3
    23090 phs 3456K 3128K cpu3 49 0 0:00:03 0.4% prstat/1
    1272 noaccess 209M 95M sleep 59 0 0:26:30 0.1% java/25
    11733 jre 420M 212M sleep 59 0 1:35:40 0.1% java/34
    21546 phs 118M 904K sleep 59 0 0:01:21 0.1% isprocmgr_dm_ph/13
    This snapshot is prstat -L when data is transmitting through the interfaces (one inbound, one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/LWPID
    21549 phs 1174M 1037M cpu1 41 0 0:00:45 22% is_dm_phs/13971
    21549 phs 1174M 1037M sleep 51 0 3:31:06 21% is_dm_phs/1394
    21549 phs 1174M 1037M run 51 0 3:14:16 20% is_dm_phs/1296
    21549 phs 1174M 1037M sleep 52 0 3:14:13 19% is_dm_phs/1380
    15598 jre 379M 181M sleep 50 0 1:49:57 3.1% eManager/4
    21549 phs 1174M 1037M sleep 59 0 0:15:36 1.7% is_dm_phs/4
    21550 phs 46M 35M sleep 59 0 0:10:52 1.0% stcms.exe/1
    21549 phs 1174M 1037M sleep 59 0 0:10:45 0.9% is_dm_phs/6
    15598 jre 379M 181M sleep 54 0 0:33:35 0.3% eManager/35
    21549 phs 1174M 1037M sleep 59 0 0:03:34 0.3% is_dm_phs/5
    21550 phs 46M 35M sleep 59 0 0:02:37 0.2% stcms.exe/2
    21549 phs 1174M 1037M sleep 59 0 0:02:17 0.2% is_dm_phs/3
    21549 phs 1174M 1037M sleep 59 0 0:02:17 0.2% is_dm_phs/2
    Solaris 10 server details:
    CPU's (4x900 Sparc III+)
    4096 MB RAM
    SunOS testican 5.9 Generic_118558-39 sun4u sparc SUNW,Sun-Fire-880
    Disk: 6 internal Fujitsu 72GBs
    swapspace on the server:
    total: 4305272k bytes allocated + 349048k reserved = 4654320k used, 10190536k available
    My sysadmin has run statistics (iostat, vmstat, psig, pmap, pfind, pstack, mpstat, etc.) - and has reported that the server is performing fine - with the exception of the CPU. It also looked like the swap space was not being utilized.
    We have increased the MaxPerm value to 512, and increased the heapsize on isprocmgr_dm_phs to -Xmx2048m, and increased the heapsize on the domain to 2048 per KB 103824
    We have also added the -d64 value (specific to Solaris) per the Deployment Guide.
    We increased the value of Maximum Pool size in the JMS clients to 128 - per the deployment Guide.
    We increased the swapspace on the server to 10Gb:
    total: 4305272k bytes allocated + 349048k reserved = 4654320k used, 10190536k available
    We have modified the TCP/IP and kernel parameters per the Sun Administration Server 8.2 performance tuning guide:
    core file size (blocks, -c) unlimited
    data seg size (kbytes, -d) unlimited
    file size (blocks, -f) unlimited
    open files (-n) 8192
    pipe size (512 bytes, -p) 10
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) 29995
    virtual memory (kbytes, -v) unlimited
    None of these modifications appears to improve performance.
    Any help is appreciated.
    Thanks
    Rich...

    Hi,
    I noticed this behavior with the Alert + SNMP Agents installed but not configured. In this situation, the SNMP agent generates traps for all events, leading to high CPU usage, even when nothing was processed. Are you in a similar case?
    Regards
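    A hedged follow-up sketch for narrowing down which threads inside the integration server are burning CPU (PID 21549 is taken from the prstat output above):
    # sample per-LWP microstates for the hot process, two 5-second intervals
    prstat -mL -p 21549 5 2
    # capture native stacks for all LWPs, to match against the hot LWP ids
    pstack 21549 > /tmp/pstack.21549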

  • Performance pack for Solaris

    How do I include/install performance pack on my Solaris system for WLS 5.1
    I included $WAS_HOME/lib/solaris in my PATH in the startup script which
    didn't help any
    thanks
    kp

    You have to add that directory to your LD_LIBRARY_PATH, not PATH
    K.P. Patel <[email protected]> wrote in message
    news:[email protected]..
    never mind... I got it.. For some odd reason, I restarted weblogic a few
    times... and it picked up...
    thanks
    "K.P. Patel" <[email protected]> wrote in message
    news:3ad64fb6$[email protected]..
    yep, this property was set to true already... still doesn't load the
    performance pack
    "Kumar Allamraju" <[email protected]> wrote in message
    news:[email protected]..
    Add the following property in weblogic.properties file.
    weblogic.system.nativeIO.enable=true
    Kumar
    "K.P. Patel" wrote:
    How do I include/install performance pack on my Solaris system for WLS 5.1?
    I included $WAS_HOME/lib/solaris in my PATH in the startup script, which
    didn't help any
    thanks
    kp

  • Help needed in getting real time system performance monitor

    Hi,
    I need a real-time system performance monitor for my Solaris box.
    I am able to graph system usage on a daily/weekly basis using the kSar grapher.
    In the same way, I need to capture system utilisation in real time to be viewed on a webpage. Please let me know if there are any free tools/scripts capable of doing this.
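    For context, kSar-style graphs are built from sar data, and the same collector can be sampled in near real time; a minimal sketch:
    # report CPU utilisation every 5 seconds, 12 times (one minute of live data)
    sar -u 5 12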

    Hi,
    Process Chain Errors
    /people/mona.kapur/blog/2008/01/14/process-chain-errors
    Common Process chain errors
    For Data Load Errors check this blog:
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    Implementation issues
    Lifecycle Implementation
    http://help.sap.com/bp_biv170/documentation/SolutionScope_EN.doc
    http://help.sap.com/bp_biv235/BI_EN/documentation/BWProjectPlan_EN.mpp
    Hope this helps.
    Thanks,
    JituK

  • Some problems in measuring system performance

    Dear all
    I'm new to the Solaris world. Recently, my team has been doing some performance tests on the Solaris 10 platform, but I find it puzzling how the Solaris system measures CPU load.
    For example, I use the command prstat -L -p <pid> to determine the CPU load of each thread of a process and get a result like:
    [zhf@SunOS@whale]/export/home/zhf/PCS_Rel/conf> prstat -L -p 12685
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/LWPID
    12685 zhf 58M 34M sleep 52 0 0:00:06 3.8% pcs/4
    12685 zhf 58M 34M sleep 42 0 0:00:05 3.7% pcs/6
    12685 zhf 58M 34M sleep 59 0 0:00:05 3.6% pcs/5
    12685 zhf 58M 34M sleep 59 0 0:00:02 1.4% pcs/8
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.5% pcs/15
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.2% pcs/16
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.1% pcs/7
    12685 zhf 58M 34M sleep 59 0 0:00:01 0.1% pcs/1
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/3
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/2
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/14
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/13
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/12
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/11
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/10
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/9
    and prstat -mL -p <pid> to determine the microstates of each thread of a process. For example:
    [zhf@SunOS@whale]/export/home/zhf/PCS_Rel/conf> prstat -mL -p 12685
    PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
    12685 zhf 28 0.4 0.0 0.0 0.0 72 0.0 0.0 377 15 762 0 pcs/4
    12685 zhf 24 0.3 0.0 0.0 0.0 75 0.0 0.0 332 16 666 0 pcs/6
    12685 zhf 21 0.3 0.0 0.0 0.0 78 0.0 0.0 290 8 584 0 pcs/5
    12685 zhf 4.8 0.6 0.0 0.0 0.0 95 0.0 0.0 501 4 4K 0 pcs/8
    12685 zhf 2.4 0.3 0.0 0.0 0.0 97 0.0 0.1 1K 3 2K 0 pcs/15
    12685 zhf 0.9 0.3 0.0 0.0 0.0 0.0 99 0.0 503 10 1K 0 pcs/16
    12685 zhf 0.3 0.2 0.0 0.0 0.0 0.0 99 0.0 501 0 1K 0 pcs/7
    12685 zhf 0.1 0.1 0.0 0.0 0.0 0.0 100 0.1 501 2 501 0 pcs/3
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 77 0 47 0 pcs/2
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/14
    12685 zhf 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 0 0 0 0 pcs/13
    12685 zhf 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 0 0 0 0 pcs/12
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/11
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/10
    12685 zhf 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 0 0 0 0 pcs/9
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/1
    Let's look at thread 4. I can see from the -L result that thread 4 occupies 3.8% CPU time. But the -mL result shows that the user part of thread 4 only takes 28% of its time, and 72% is spent waiting for locks.
    My question is: is the "waiting for locks" time also counted in the CPU load? That is to say, if the 3.8% CPU load includes lock time, does that mean the real processing time is 3.8% * 28%? Or does the 3.8% CPU load not include lock time, so the 3.8% CPU load is the real cost of this thread (which is the 28% of user processing)? I hope my explanation does not confuse you :)
    My colleagues have had many arguments about this, but no one could be sure, so I am asking the experts here for the answer.
    many many thanks in advance
    Cheers
    Shen
    Message was edited by:
    lishen

    #1. The first display you have (without the -m) is not an immediate display. The CPU figures are long-term averages, so they can lie considerably.
    Take an otherwise idle machine and run a CPU intensive program. It will take 100% of one CPU immediately, but 'top' and 'prstat' will take many seconds to reflect that.
    #2. Whether 'waiting on a lock' takes CPU time probably depends on how it's waiting. Solaris has adaptive locks, so sometimes the wait will take CPU time and other times it sleeps. (Going to sleep and waking up again has an overhead associated with it. So if the lock is going to be released "quickly", then it makes sense to just spin on the CPU and do nothing for a few cycles until the lock is released. However, if it's going to take "a while" for the lock to be released, then it's better to release the CPU and let other processes have it while we wait for the lock.)
    In most circumstances (and almost certainly in your example) the processes are sleeping while waiting for the lock. However there might be other situations where that is not true.
    Darren
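    A small illustration of Darren's first point: asking prstat for repeated short intervals gives per-interval figures instead of long-term averages (standard interval/count arguments):
    # two 5-second samples of per-thread microstates for the process above
    prstat -mL -p 12685 5 2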
