Low CPU utilization on Solaris

Hi all.
We've recently been performance tuning our Java application, which runs
inside an application server on Java 1.3.1 HotSpot with -server. We've
begun to notice some odd trends and were curious whether anyone else out
there has seen similar things.
Performance numbers show that our server runs twice as fast on Intel
with Win2K as on an Ultra 60 with Solaris 2.8.
Here's the hardware information:
Intel -> 2 processors (32-bit) at 867 MHz and 2 GB RAM
Solaris -> 2 processors (64-bit) at 450 MHz and 2 GB RAM
Throughput for most use cases at a low thread count is twice as fast on
Intel. The only exception is some of our use cases that depend heavily
on a stored procedure, which runs twice as fast on Solaris. The database
(Oracle 8i) and the app server run on the same machine in these tests.
There should be minor (or no) network traffic. GC does not seem to be an
issue. We set the max heap at 1024 MB. We tried the various Solaris
threading models as recommended, but they have accomplished little. It
is possible our Solaris machine is not configured properly in some way.
My question (after all that ...) is whether this seems normal to anyone.
Should throughput be higher, since the processors are faster on the
Wintel box? Does the fact that the Solaris processors are 64-bit have
any benefit?
We have also run the HeapTest recommended on this site on both machines.
We found that the memory test runs twice as fast on Solaris, but the CPU
test runs four times slower on Solaris, and the "joint" test twice as
slow. Does this imply bad things about our Solaris configuration? Or is
this a normal result?
Another big difference between Solaris and Win2K in these runs is that
CPU utilization is low on Solaris (20-30%) while it's much higher on
Win2K (60-70%) [both machines have 2 processors and the tests are
"primarily" single-threaded at this stage]. I would expect the Solaris
CPU utilization to be around 50% as well. Any ideas why it isn't?

Hi,
I recently went down this path and came to the realization that the
CPUs are almost neck and neck per cycle when running my Java app. Let me
qualify that a little: a 400 MHz UltraSPARC II versus a 500 MHz Intel
CPU, under similar load and running the same test, gave me similar
results. It wasn't as huge a difference in performance as I was
expecting.
My theory is that, given the scalability of the SPARC architecture, more
chips mean more performance with less hardware, whereas the Wintel boxes
are cheaper, but in order to scale, the underlying hardware comes into
question (how many Wintel boxes to cluster, co-locate, manage, etc.).
From what little I've found running tests against our Solaris 8 E250s
(400 MHz UltraSPARC II), it appears that CPU performance in a lightly
threaded environment is almost 1 cycle / 1 cycle (SPARC to Intel). I
don't think the 64-bit SPARC architecture will buy you anything for Java
1.3.1, but if your application has some huge memory requirements, then
using 1.4.0 (when BEA supports it) should be beneficial (check out
http://java.sun.com/j2se/1.4/performance.guide.html).
If your application is running only a few threads, tying the threads to
LWP kernel processes probably won't gain you much; I noticed that it
decreased performance for a test with only a few threads.
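For reference, the two threading knobs I experimented with were the
alternate one-to-one thread library and bound threads. A sketch of both
(whether -XX:+UseBoundThreads is available depends on your exact Solaris
JVM build):

    # Solaris 8: pick up the alternate one-to-one threads library
    LD_LIBRARY_PATH=/usr/lib/lwp:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH

    # HotSpot on Solaris: bind each Java thread to its own LWP
    java -server -XX:+UseBoundThreads ...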
I can't give you a good reason why your Solaris CPU utilization is so
low; you may want to try getting a copy of JProbe and profiling WebLogic
and your application to see where your bottlenecks are. I was able to do
this with our product and found some nasty little performance bugs, but
even with that our CPU utilization was around 98% on a single-CPU box
and 50% on a dual.
Also, take a look at iostat / vmstat and see if your system is
bottlenecked doing I/O operations. I kept a background vmstat process
writing to a log, then looked at it after my test and saw that my CPU
was constantly pegged (doing a lot of context switching) but wasn't
doing a whole lot of page faults (I had enough memory).
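Something as simple as this does the job (a sketch; sample every 5
seconds, and the log path is arbitrary):

    vmstat 5 > /tmp/vmstat.log &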
If you're doing a lot of serialization, that could explain slow performance as
well.
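If you want to put a quick number on serialization cost, a minimal
timing harness along these lines works (a sketch; the Vector payload is
just a stand-in for whatever object graph you actually serialize):

    import java.io.*;
    import java.util.Vector;

    public class SerializationTimer {
        public static void main(String[] args) throws IOException {
            Object payload = buildPayload();
            long start = System.currentTimeMillis();
            // Serialize the same graph many times and report the total.
            for (int i = 0; i < 1000; i++) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(buf);
                out.writeObject(payload);
                out.close();
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("1000 serializations: " + elapsed + " ms");
        }

        // A modest serializable object graph as a stand-in payload.
        private static Object buildPayload() {
            Vector v = new Vector();
            for (int i = 0; i < 100; i++) {
                v.addElement("element-" + i);
            }
            return v;
        }
    }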
I did follow a suggestion on this board of running my test several times
with the optimizer (-server), and it boosted performance on each
iteration until it plateaued on or about the third run.
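You can see the same effect in-process: run an identical workload a few
times in one JVM and watch the per-iteration time fall as HotSpot
compiles the hot methods. A sketch (the loop is stand-in work, not a
real benchmark):

    public class WarmupDemo {
        public static void main(String[] args) {
            for (int run = 1; run <= 5; run++) {
                long start = System.currentTimeMillis();
                long sum = 0;
                // Stand-in CPU-bound work; substitute your own use case.
                for (int i = 0; i < 10000000; i++) {
                    sum += i % 7;
                }
                long elapsed = System.currentTimeMillis() - start;
                System.out.println("run " + run + ": " + elapsed
                        + " ms (sum=" + sum + ")");
            }
        }
    }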
If you're running Oracle or another RDBMS on your Solaris machine, you
should see a pretty decent performance benchmark against NT, as these
types of applications are more geared toward the SPARC architecture.
From what I've seen, running Oracle on Solaris is pretty darn fast
compared to Intel.
I know I tried a lot of different tweaks on my Solaris configuration
(TCP buffer sizes, /etc/system parameters for file descriptors, etc.). I
even got to the point where I wanted to see how WebLogic was handling
the Nagle algorithm as far as its POSIX muxer was concerned, and ran a
little test to see how they were setting the sockets
(setTcpNoDelay(boolean) on java.net.Socket). They're disabling the Nagle
algorithm, so that wasn't an issue, sigh (sketch of the check below).
My best advice would be to profile your application and see where the
bottlenecks are; you might be able to increase performance, but I'm not
too sure. I also checked out www.spec.org and saw some of their
benchmarks that coincide with our findings.
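The Nagle check itself was nothing fancy; from the client side it looks
roughly like this (a sketch; 7001 is WebLogic's default listen port, and
this only shows your own socket's setting, not the server's):

    import java.net.Socket;

    public class NagleCheck {
        public static void main(String[] args) throws Exception {
            Socket s = new Socket("localhost", 7001);
            // getTcpNoDelay() == true means the Nagle algorithm is off.
            System.out.println("TCP_NODELAY before: " + s.getTcpNoDelay());
            s.setTcpNoDelay(true);  // disable Nagle on our side
            System.out.println("TCP_NODELAY after:  " + s.getTcpNoDelay());
            s.close();
        }
    }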
Best of luck to you and I hope this helps :)
Andy

Similar Messages

  • Syslogd consuming high CPU utilization in Solaris 10

    Hi All,
    The syslogd process is consuming high CPU in Solaris 10. Kindly help
    us reduce this CPU utilization.
    Regards
    Siva

    Hi Robert,
    Both are the same architecture: x86.
    The following is the prstat output from the affected server. Please
    note that one of the mount points on this server is on ZFS.
    prstat -l
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/LWPID
    26092 root 3933M 3930M cpu1 60 0 29:00:43 22% syslogd/56
    26092 root 3933M 3930M run 30 0 289:47:33 12% syslogd/18
    26092 root 3933M 3930M run 40 0 272:31:05 11% syslogd/22
    26092 root 3933M 3930M run 22 0 14:47:16 9.7% syslogd/65
    26092 root 3933M 3930M run 42 0 14:43:46 9.7% syslogd/63
    26092 root 3933M 3930M run 31 0 14:40:42 9.6% syslogd/66
    26092 root 3933M 3930M sleep 40 0 152:45:42 5.9% syslogd/21
    26092 root 3933M 3930M cpu0 53 0 6:41:58 4.1% syslogd/58
    26092 root 3933M 3930M run 52 0 6:23:13 4.0% syslogd/57
    26092 root 3933M 3930M sleep 43 0 6:29:21 3.9% syslogd/59
    26092 root 3933M 3930M sleep 52 0 5:55:14 3.6% syslogd/71
    Moreover, we are continuously receiving the error message below in
    /var/adm/messages; we don't know where on the syslog server it
    arises:
    syslogd: malloc failed: dropping message from remote: Not enough space
    PRIVILEGE :[4] 'NONE'

  • HELP: WL 8.1 runs unexpectedly slowly on Solaris 9 with low CPU utilization

    Hi All
    I have set up my app to run on WL 8.1 + Solaris 9 on a Sun mid-range
    server. The JVM was configured to use 3 GB RAM and there is still
    abundant RAM on the server. I tried out a use case and it took a long
    time to respond (~2 minutes), yet CPU utilization stayed below 20%. I
    tried the same test case on a Wintel server with 500 MB RAM allocated
    to the JVM, and the response time was much quicker (less than 30
    seconds). I did the same on Solaris 8 with 3 GB RAM using the
    alternate threads library (changing LD_LIBRARY_PATH to include
    /usr/lib/lwp), which is the default model in Solaris 9. The same use
    case responded much more quickly, comparably to the above-mentioned
    Wintel test. Can anybody advise how to tune WL 8.1 on Solaris 9 so it
    performs best? Is there any special trick?
    Thank you very much for any advice in advance
    dso

    "Arjan Kramer" <[email protected]> wrote:
    >
    Hi dso,
    I'm running the same two configs and run into the same performance issues
    as you do. Please let me know if you any response on this!
    Regards,
    Arjan Kramer
    "dso" <[email protected]> wrote:
    Hi All
    I have setup my app to run on WL8.1 + solaris 9 env. JVM was configured
    to use
    3G RAM and there are still abundant RAM on the HW server. I tried out
    a use case,
    it took long time to get response (~ 2 minutes). But the CPU utilization
    has been
    always lower 20%. I have tried out the same test case on a wintel server
    with
    500 RAM allocated to JVM, the response time is much quicker (less than
    30 sec).
    I did the same on solaris 8 with 3G RAM and had used alternate threads
    library
    mode (changing LD_LIBRARY_PATH to include /usr/lib/lwp) which is the
    default mode
    adopted by solaris 9. The same use case responded much quicker and comparable
    to abovementioned test on wintel. Can anybody advice on how to tuneWL
    8.1 on
    solaris 9 so as to make it perform best ? Is there any special trick
    thank u very much for any advice in advance
    dso
    There could be many factors that add to performance degradation
    (database, OS, network, app config, etc.), so without knowing more
    it's difficult to tell.
    Can you please supply the startup Java options used to set the heap,
    etc.? Having a larger heap is not always the best approach to
    building HA applications... the bigger they are, the harder they
    fall. I'd suggest using many but smaller instances. Provide the heap
    info from NT also.
    BTW, when WebLogic starts, can you tell me how much memory is being
    used in the console, i.e. the footprint of WebLogic plus your
    application?
    Many Thanks
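    For anyone posting those options, the interesting part of the start
    script is the java invocation itself; a sketch with illustrative
    values (not recommendations):

        java -server -Xms512m -Xmx512m \
             -XX:NewSize=128m -XX:MaxNewSize=128m \
             -verbose:gc weblogic.Server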

  • Xsun (high CPU utilization) on Solaris 10 Sparc

    hi
    I have a Sun Blade 1500 running Solaris 10. The machine is a 2-CPU
    (750 MHz) SPARC with 4 GB RAM and the latest cluster patch.
    The Xsun process is always at 50% utilization and Window Maker
    (wmaker) is at 27%. Xsun is always using all available CPU and the
    machine is really slow. Any help on what patch will fix the Xsun
    process? Any operation on the machine increases Xsun's CPU
    utilization.
    thx
    Sriram

    Hi Sridhar,
    Can I know which platform you are using?
    If it is Solaris, can you paste the output of prstat -L -p <wlspid>
    1 1, which lists the lightweight processes (LWPs), and also pstack
    <wlspid> for the lwpid-to-thread mapping?
    Or you can follow these steps to find which thread is causing the
    high CPU utilization:
    1. Find the highest-usage lwpid in the prstat output.
    2. Find that lwpid in the pstack output and get the matching thread
    number.
    3. Convert the thread number to hexadecimal.
    4. Find the hexadecimal thread number in the server thread dump
    (nid=xxx).
    5. Determine what that thread was doing to cause the high CPU usage.
    For example, if the hot LWP is 58, then 58 decimal is 0x3a, so you
    would look for nid=0x3a in the thread dump.
    You can do it a similar way on Linux: ps u -Lp <wlspid> plus a
    thread dump.
    Thank you,
    Bob

  • CPU utilization on Solaris

    When running a series of performance tests on a Solaris machine
    running WebLogic 8.1 SP2, and comparing the results to a Windows 2000
    machine running WebLogic 8.1 SP2, I noticed the Solaris box was only
    using 10% of each of its 4 CPUs. The Windows 2000 machine, however,
    was using 85% of its single CPU and was performing three times better
    than the Solaris machine.
    I was hoping someone might have some suggestions for where to look
    for causes of the CPUs being underutilized.
    Thanks
    -Fazle


  • High CPU response times despite low CPU utilization

    Hi Friends,
    We have a performance problem after migrating our basis system.
    The current system is:
    Database server: Sun SPARC Enterprise T5240, 2 CPUs (6 cores each, 8
    threads per core) at 1.2 GHz, 32 GB RAM.
    We use another identically configured server as the application
    server.
    The database is Oracle 10.2.0 and the operating system is Solaris 10.
    The problem is that the average CPU response time is 450 ms while the
    max CPU load is 5%.
    With the pre-migration configuration on the old servers, we had a CPU
    response time of 150 ms and a max CPU load of 50%.
    Has anyone of you experienced a similar problem with these new CPUs?
    Thanks in advance

    > I am aware that there might be lots of reasons, but my guess is
    that a parameter is wrong in the CPU settings. This SPARC server CPU
    is a new-technology 6-core, 8-thread part, which is expected to work
    much faster than the old simple dual-core CPU.
    Is it?
    ABAP is basically a virtual machine, and, most importantly, ABAP runs
    single-threaded. In consequence, the processing time of a program
    depends on the processing power of a single core. More cores mean
    more parallelism, but the speed of a single statement always depends
    on the power of one core and on how fast it can get the data over the
    bus and back. So the significant number for speed is basically the
    MHz (for ABAP; Java is very different).
    Your old machine may have had a dual-core SPARC at 1.2 GHz and the
    new machine has one 1.4 GHz 6-core CPU (assuming so).
    > However its response time is very high, although it is never
    utilized more than 5 percent.
    See above.
    > The problem might be in the database settings (Oracle 10.2.0) or in
    Solaris?
    Well, no, I don't think so.
    Since ABAP is a single-threaded application, it can't leverage the
    CPU power: no parallelism takes place within the program itself, so
    the machine does not scale linearly with the number of cores. You may
    factually have a machine with about 1.x CPUs, not the 6 you would
    expect.
    This is not specific to the SPARC CPU design; it holds for all
    multi-core systems. A single-threaded application is only as fast as
    the CPU speed. ABAP programs tend to be huge, so you will also see
    effects of cache displacement and bus congestion. Fewer cores and
    more physical CPUs perform much better than any multicore CPU.
    For Java the world is very different, because Java works as one
    process (in the SAP case, jlaunch) with many, many threads that can
    be executed in parallel on different pipelines of the CPU.
    Unfortunately this is a design problem and there's not much you can
    do about it.
    Markus
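    Markus's point is easy to demonstrate in Java: a fixed chunk of
    CPU-bound work finishes no faster on a many-core box unless you split
    it across threads. A minimal sketch (the modulo loop is stand-in
    work, not a real benchmark):

        import java.util.concurrent.*;

        public class CoreScalingDemo {
            // Run a fixed total amount of CPU-bound work split across
            // nThreads workers; return elapsed wall-clock time in ms.
            static long timeRun(final int nThreads) throws Exception {
                final long perThread = 200000000L / nThreads;
                ExecutorService pool = Executors.newFixedThreadPool(nThreads);
                Future<?>[] done = new Future<?>[nThreads];
                long start = System.nanoTime();
                for (int t = 0; t < nThreads; t++) {
                    done[t] = pool.submit(new Runnable() {
                        public void run() {
                            long sum = 0;
                            for (long i = 0; i < perThread; i++) {
                                sum += i % 7;  // busy work
                            }
                            if (sum == 42) System.out.print(""); // keep sum live
                        }
                    });
                }
                for (int t = 0; t < nThreads; t++) done[t].get();
                pool.shutdown();
                return (System.nanoTime() - start) / 1000000L;
            }

            public static void main(String[] args) throws Exception {
                // On a multi-core machine the second run finishes roughly
                // 4x faster; the first is pinned to one core's speed.
                System.out.println("1 thread : " + timeRun(1) + " ms");
                System.out.println("4 threads: " + timeRun(4) + " ms");
            }
        }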

  • Encore CS 4 and Low CPU Utilization

    I have a Core i7 machine. When doing builds in Encore CS4, the CPU
    never gets above 20%, and right now all seven cores are running at 6%
    while doing a Flash build.
    It's really slow.
    Can anyone advise me as to why more of the CPU is not being utilized?
    I'm not certain what other dependencies might exist or whether the
    speed can be improved. I realize one can import media that does not
    require transcoding in some cases but, assuming transcoding is
    required, I expect Encore to take all the CPU it can get.
    Thanks.

    You have probably already addressed the tips in this ARTICLE, but authoring, like video editing, will benefit from a very clean machine.
    If you have Win7, please make sure to see the link to Black Viper's page, near the bottom of the thread.
    Good luck,
    Hunt

  • IWS 6.0 100% CPU utilization, hanging - very urgent

    Hi,
    We are using iPlanet Web Server 6.0 on Windows 2000 SP2. The problem
    we are facing is that after 10 concurrent users have logged in, the
    CPU utilization shoots up to 100% and we have to reboot the system.
    Our billing application is affected very much by this.
    Can anybody throw some light on this?
    Thanks in advance.

    Hi,
    Are you using any plugin with iWS? Please let me know your config
    file. Meanwhile, please check the Solaris tuning parameters in the
    performance guide:
    http://docs.iplanet.com/docs/manuals/enterprise/50/tuning/perf6.htm#17580
    Regards,
    Dakshin.

  • Performance degrading, CPU utilization 100%

    Hello,
    RHEL 4
    Oracle 10.2.0.4
    Attached to a DAS (partition is 91% full) RAID 5
    Over the past few weeks my production database performance has degraded severely. I have not made any application, OS, or database changes (I was on vacation!). I have started troubleshooting, but need some more tips as to what else I can check.
    My users run a query against the database, and for a table with only 40,000 rows, it will take about 2 minutes before the results return. For a table with 12 million records, it takes about 10 minutes or more for the query to complete. If I run a script that counts/displays a total record count for each table in the database as well as a total count of all records in the database (~15,000,000 records total), the script either takes about 45 minutes to complete or sometimes it just never completes. The Linux partition on my DAS is currently 91% full. I do not have Flashback or auditing enabled.
    These are some things I tried/observed:
    I shut down all applications/servers/connections to the database and
    then restarted the database. After starting it, I monitored the DAS
    interface, and the CPU utilization spiked to 100% and never goes
    down, even with no users or applications trying to connect. The
    alert.log file contains these errors:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code, arguments: [ttcdrv-recursivecall]
    ORA-03135: connection lost contact
    ORA-06512: at "CTXSYS.SYNCRN", line 1
    The database still starts, but performance is bad. From the error
    above, and after checking performance in EM, I see there are a lot of
    index sync jobs being run by each of the schemas, and db file
    sequential read waits are high. There is a job to resync the indexes
    every 5 minutes. I am going to try disabling these jobs this
    afternoon to see what happens with the CPU utilization. If that
    helps, I will try adjusting the job from every 5 minutes to something
    like every 30 minutes. Is there a way to defragment the CONTEXT
    indexes? REBUILD?
    I'm not sure if I am running down the right path or not. Does anyone have any other suggestions as to what I can check? My SGA_TARGET is currently set to 880M and the SGA_MAX_SIZE is 2032M. Would it also help for me to increase the SGA_TARGET to the SGA_MAX_SIZE; thus increasing the amount of space allocated to the buffer cache? I have ASMM enabled and currently this is what is allocated:
    Shared Pool = 18.2%
    Buffer Cache = 61.8%
    Large Pool = 16.4%
    Java Pool = 1.8%
    Other = 1.8%
    I also ran ADDM and these were the results of my Performance Analysis:
    34.7% The throughput of the I/O subsystem was significantly lower than expected (when I clicked on this it said to either implement ASM or stripe using SAME methodology...we are already using RAID5)
    31% SQL statements consuming significant database time were found (I cannot make application code changes, and my database consists entirely of INSERT statements...there are never any deletes or updates. I see that the updates that are being made were by the index resyncing job to the various DR$ tables)
    18% Individual database segments responsible for significant user I/O wait were found
    15.9% Individual SQL statements responsible for significant user I/O wait were found
    8.4% PL/SQL execution consumed significant database time
    I also recently ran a SHRINK on all possible tablespaces, as
    recommended in EM, but that did not seem to help either.
    Please let me know if I can provide any other pertinent information
    to solve the poor I/O problem. I am leaning toward thinking it has to
    do with the index sync job stepping on itself... the job cannot
    complete in 5 minutes before it tries to kick off again... but I
    could be completely wrong! What else can I check to figure out why I
    have 100% CPU utilization with no users or applications connected?
    Thank you!
    Mimi

    Tables/indexes were last analyzed today.
    I figured out that it was the Oracle Text indexes syncing too
    frequently that was causing the problem. I disabled all the jobs that
    kicked off those index syncs and my CPU utilization dropped to almost
    0%. I will work on tuning the interval and re-enabling the indexes
    for my dynamic data sources.
    Thank you for everyone's suggestions!
    Mimi

  • How to check CPU % Utilization with SNMP

    We are using Ipswitch's WhatsUp Professional 2006 to monitor devices on our network. Its default SNMP graphing utilities use the HOST RESOURCES MIB to collect and graph system performance statistics over time. I have them running perfectly on all my Windows servers, but for my Solaris systems I am only able to retrieve memory, disk, and network interface statistics. The CPU monitor is unable to retrieve data from our Solaris systems. A typical system is running Solaris 10 with the default agent, NET-SNMP version 5.0.x. Does anyone know if the default agent supports checking CPU usage in the HOST RESOURCES MIB? If not, can anyone point me to a different MIB and instance that will return % CPU utilization of a Solaris 10 server? Any input would be greatly appreciated. I've searched everywhere for the solution but I am unable to find any straightforward answers; hopefully someone can help.

    Hi
    You can do it via T-code ST02. I am sending you a link; hope it
    helps:
    http://help.sap.com/saphelp_erp2004/helpdata/en/02/96263e538111d1891b0000e8322f96/content.htm
    Reward points if helpful
    Thanks
    Pankaj Kumar

  • NSAPI plugin has high CPU utilization on Sunone web server 6.0 SP5

    Hi,
    I am running WL 6.1 SP3 with the SP03 plugin proxy on Sun ONE Web
    Server 6.0 SP5 on Solaris.
    I'm seeing very high CPU utilization, with 3 threads running wl_proxy
    (about 33% each).
    Is there a newer NSAPI plugin proxy patch I can use to fix this?
    Walter

    I'm having the same problems as all the above posts. I run a collaborative tool which uses iPlanet as a directory server, and I receive the Event ID: 25, Source: WebServer 6.0 error as well as Event ID: 0, Source: https-admnserv6.0, which gives "the local computer may not have the necessary registry information or message DLL files to display messages from a remote computer". I have 3 servers built and all exhibit the same errors.

  • EEM applet that triggers on high CPU utilization

    Hi Folks,
    I am trying to create an EEM applet which triggers on high CPU
    utilization (detected by ERM). The applet should then TFTP the output
    of "show proc cpu sorted" to a TFTP server.
    I am trying to configure this on an 1841 running 12.4(24)T3 code.
    This is my config:
    resource policy
      policy HighGlobalCPU global
       system
        cpu total
         critical rising 5 falling 2 interval 10
        cpu process
         critical rising 5 falling 2 interval 10
    ! I'm not sure whether it is correct to monitor 'cpu total' or 'cpu process'. The rising thresholds are deliberately low to make testing easier
    event manager applet ReportHighCPU
    event resource policy "HighGlobalCPU"
    action 1.0 cli command "show process cpu sorted 5sec | redirect tftp://192.168.1.1/highCPU$_resource_time_sent.txt"
    action 2.0 syslog priority debugging msg "high cpu event detected, output tftp sent"
    The problem is that I can't seem to trigger the applet. I have generated enough traffic to push the CPU utilization to over 30% (according to 'show proc cpu'), but the applet does not appear to trigger (no syslog messages appear, and my syslog server does not receive anything).
    If anyone can tell me what I've done wrong here I would be very grateful!
    Thanks,
    Darragh

    I am just replying off the top of my head, but I believe you also
    need to add this line to your config:
    user global HighGlobalCPU
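    That registers the policy with the ERM framework; based on the config
    above, the resource policy block would then look something like this
    (a sketch from memory of the ERM syntax, worth verifying on your IOS
    release):

        resource policy
          policy HighGlobalCPU global
           system
            cpu total
             critical rising 5 falling 2 interval 10
            cpu process
             critical rising 5 falling 2 interval 10
          user global HighGlobalCPU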

  • AHCI cpu utilization skyrockets

    This issue is a bit new to me; I have done RAID and IDE setups for
    decades but thought I'd tinker with AHCI. The motherboard is an MSI
    970A-G46. Enabling and disabling AHCI on an established Win7 x64
    installation is not a problem for me.
    The problem is that after enabling AHCI properly, CPU usage soars to
    25-30%+ with the Windows AHCI drivers, and jumps to as high as 40%
    with the latest AMD chipset drivers. OK, this is what HD Tach
    reports, anyway. IDE settings for the same drives measure 1-2% CPU
    utilization. According to HD Tach, too, the performance of AHCI and
    IDE is identical. Ergo: I see no advantage for my client system
    running in AHCI and will return to IDE.
    Agree or disagree? Suggestions? Thanks.

    Quote from: Panther57 on 30-June-12, 01:01:20
    This is an interesting post... With my new build I was set up RAID 0
    / IDE. I had an unhappy line in Device Manager and changed to AHCI.
    Then it downloaded the driver.
    I have not seen a jump in CPU usage, but I also have not been
    watching it like a hawk. Hmmm.
    I am going to watch my AMD System Monitor for results. In an earlier
    post of mine I was told about, and did, some tests of AHCI vs. IDE. I
    ran IDE on my other PC (listed below, HTPC) and am now on AHCI on my
    main 990FXA-GD80. The difference between the two ways tested on my
    790FX actually did show an advantage for IDE, using Bench32.
    Not a huge advantage over AHCI, but a little. I don't know if the
    difference is really worth much inspection.
    I am looking forward to the results you get, WaltC.
    Thanks, Panther57...;) My "results" are really more of an opinion,
    but...
    Right now I'm not really sure what hard drive benchmark I should be
    using or trusting!...;) HD Tach's last release, in 2004, is now
    confirmed on the company's site as the last version of the bench it
    will make; as it is, I have to set the compatibility tab to WinXP
    just to run the darn thing in Win7 x64! But I installed the free
    version of HD Tune (and the 15-day trial of the "Pro" version, too),
    and the results are very similar, except that HD Tune seems to be
    measuring my burst speeds incorrectly: HD Tach consistently puts them
    north of 200 MB/s; HD Tune, well south of 200 MB/s. (A strike against
    HD Tune: the free version does not measure CPU dependency, grrr. You
    have to pay for the "Pro" version to see that particular number, or
    install the Pro trial, which reveals those numbers for 15 days.)
    OK, between the two benchmarks, and after several tests, CPU
    utilization seems high *both* in IDE and in AHCI modes. Like you, it
    has been quite a while since I actually *looked* at CPU utilization
    of any kind for hard drives. I guess I wasn't prepared to see how CPU
    dependent things have become again. Certainly, we are nowhere near
    the point of decades ago, when CPU utilization approached 100% and
    our programs would literally freeze while loading from the IDE disk
    until the load was finished. The "good old days," right? NOT,
    hardly...;) I suppose, though, that with multicore CPUs being the
    rule these days instead of the exception, CPU dependency is just not
    as big a deal as it was in the "old days," when we dealt with
    single-core CPUs exclusively and searching an IDE drive could
    literally stop the whole show.
    Again, when running these read tests to see the degree of CPU
    utilization, I found that while the tests were all uniform and
    basically just repeats of each other, done a couple of dozen times,
    the results for CPU utilization in each test were *all over the map*,
    from 0% to 40% CPU dependency! And the same was true whether I was
    testing in IDE mode or AHCI mode. That was kind of surprising, and
    yet it still leaves open the question of how accurate and reliable
    the two HD benchmarks I used actually are. Besides that, I did find a
    direct correlation between the size of the files being moved/copied
    and the degree of CPU dependency: the smaller the files copied and
    moved, the higher the CPU involvement; the larger the files, the
    lower the CPU overhead in copying and moving, etc. Much as we'd
    expect.
    So after all was said and done, it does seem to me that AHCI is
    actually more of a performer than IDE, albeit not by much. I think
    maybe it demands a tad less CPU dependency, too, which is another
    mark in its favor. In one group of tests I ran on a single drive (I
    also tested a pair of Windows-spanned hard drives in software RAID 0,
    in both AHCI and IDE modes, just for the heck of it...;)), I found
    the *average* read speed of the AHCI drive some ~15 MB/s faster than
    the same drive tested in IDE. That was with HD Tune tests. But as
    I've stated, how reliable or accurate are these benchmarks? Heh...;)
    Anybody's guess, I suppose.
    My take in general (for anyone interested) is that going to AHCI
    won't hurt if a person decides to go that route, but it also won't
    help that much, either. You definitely can easily and very quickly
    move from an installed Win7 IDE installation to an AHCI installation,
    no problem (some sources swear it can't be done without a reformat
    and a reinstall; just not true! They just haven't discovered how easy
    and simple it is to move from IDE to AHCI and back again). Current
    CPU dependencies, whether in AHCI or IDE, surprise me; they seem so
    high. However, the last time I paid close attention to such numbers
    was back when I ran a single-core CPU, and back then CPU dependency
    numbers for a hard drive meant quite a lot. Today's CPUs have both
    the raw computational power and the number of cores to take that
    particular concern and put it on its ear, with a large grain of
    salt!...;)
    I have three drives total at present:
    Boot drive:
    ST3320620AS SATA, boot drive
    then,
    (2) ST3500418AS SATAs, spanned in software RAID 0, making two ~500 GB
    RAID 0 partitions.
    Total disk space ~1.32 TB, all drives including the RAID 0 partitions
    running in AHCI mode. (Software RAID is just as happy with IDE, btw.)
    My Friday "project" is complete...:] Hope I haven't confused anyone
    more than myself...;)

  • DS6 instance reaches 50% CPU utilization after restart

    Hi,
    I am running DS6 on Solaris 10. I noticed that after every orderly
    restart the slapd process reaches 50% CPU utilization. The situation
    lasts for about 5 minutes. What do you think it is?
    Thank you.

    At startup, Directory Server initializes some of its caches:
    - ACI
    - Roles
    - Class of Service
    - Groups (for the new isMemberOf feature)
    Depending on your configuration, these searches may take a little bit
    of time and CPU.
    Regards,
    Ludovic

  • Low CPU usage when exporting

    Is it normal to have only 4-12% CPU usage when exporting video out of
    AE? I'm exporting from the render queue.
    My computer has two dual-core 3.0 GHz processors, 8 GB RAM, and an
    Nvidia 8600 GT, running XP Professional x64.
    thanks

    As Jonas suggested, CPU utilization may be somewhat low because the bottleneck isn't the processing of the data but moving (reading and writing) it.
    I'm not saying that this is necessarily what's going on for you, but it's worth pointing out that the CPU is often not the bottleneck.
    If disk I/O is what's slowing you down, one way that you can speed things up is to use three disks for your work: one to run the program from, one to store input assets, and one to write output files to.
    There are a lot of tips for improving performance in the
    "Improve performance" section of After Effects CS3 Help on the Web.
