Increasing VirtualBox's CPU Utilization

Arch x86_64, everything's current with pacman as of today. 8 GB RAM, Intel Core 2 Quad Extreme processor @ 3 GHz. Latest VirtualBox.
When running an XP x32 VM, I can only get VirtualBox to use about 25% of my host's CPU bandwidth. If I increase the VM's CPU count, the IO APIC is forcibly enabled, my host CPU goes to 100% load, and the VM runs so slowly as to be unusable. I have also tried fresh installs of XP to a new VM where the IO APIC is enabled out of the box. The result is the same: an unusably slow VM.
I've posted on the VB forums, but they are silent. It seems that when you hit a "nerve" on the VB forums, the Oracle folks go quiet on you. I've had only one user reply, and their reply wasn't really on topic. I appreciated their post, but their bug was different from what I want to do with VB.
The question I have is: how can I configure VB to use more of my host's CPU?
Since hal has been deprecated, VMware isn't viable, and the VMware folks won't respond to my post asking when/if they are going to release updates to their products that use dbus instead of hal.
It seems VirtualBox is currently the most viable of the VM solutions.
I have converted the XP VM to use SATA, and that decreased the VM's need for CPU, but I still cannot get the VM to use more than 25% of the host's CPU, even under CAD workloads.
Does anyone have any suggestions?
Thank you!
Sincerely and respectfully,
Dave

dcbdbis wrote:If I increase the VM's CPU count, the IO APIC is forcibly enabled, my host CPU goes to 100% load, and the VM runs so slowly as to be unusable. I have also tried fresh installs of XP to a new VM where the IO APIC is enabled out of the box. The result is the same: an unusably slow VM.
I have noticed that extremely poor performance too, in some cases.
Have you tried, in the VM's settings under the "System" tab, switching the Chipset from PIIX3 to ICH9? Although it is marked as experimental, that worked very well in the cases where I experienced this poor performance.
edit: XP probably doesn't like having its chipset exchanged, so you might want to back the VM up first / reinstall it.
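For reference, the same chipset and CPU settings can be changed from the command line with VBoxManage while the VM is powered off. A minimal sketch; the VM name "WinXP" is a placeholder:
# Switch the emulated chipset from PIIX3 to ICH9 (marked experimental):
VBoxManage modifyvm "WinXP" --chipset ich9
# Give the guest two virtual CPUs; SMP guests require the IO APIC:
VBoxManage modifyvm "WinXP" --cpus 2 --ioapic on
# Verify the resulting configuration:
VBoxManage showvminfo "WinXP" | grep -iE 'chipset|cpu|ioapic'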

Similar Messages

  • Increase in CPU Utilization after migration from APEX 3.1.2 to APEX 3.2

    Has anyone noticed an increase in CPU utilization after migrating from APEX 3.1.2 to APEX 3.2?
    Thanks,
    Mark

    Hi Mark,
    Take a look at some of the usage reports within APEX (sessions, page views, etc.) to get an overall feel for where the time is being consumed.
    You'll also find it useful if you can run a Statspack report (or AWR, ASH, etc.) during a busy period, to be able to drill down into where that CPU is being spent.
    There are no magic answers here, unfortunately; you need to track it down to where the time is being spent before working back up to find out where best to tune it.
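    As an illustration, an AWR report for a busy interval can be generated from SQL*Plus with the standard script (this assumes the Diagnostics Pack is licensed; Statspack is the free alternative):
    -- Run as a privileged user; the script prompts for the report type,
    -- the snapshot range and an output file name.
    @?/rdbms/admin/awrrpt.sql
    -- Statspack equivalent, after snapshots have been taken
    -- with EXEC statspack.snap:
    @?/rdbms/admin/spreport.sql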
    John.
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.apex-evangelists.com
    Author of Pro Application Express: http://tinyurl.com/3gu7cd
    REWARDS: Please remember to mark helpful or correct posts on the forum, not just for my answers but for everyone!

  • Increased CPU utilization on Sup1A after upgrade

    Hello,
    I recently upgraded a 6009 with Supervisor 1A from CatOS 7.6(5) to 8.4(4). Baseline total CPU utilization before the upgrade was about 12%, post upgrade it sits at about 30%.
    We have several similar switches throughout our network that we intend to upgrade as well, so we are treating this one box as a testbed. We will soon deploy IP telephones to every desk in our network, and want to determine if this higher baseline utilization will be problematic for us.
    Considering voice, should we be concerned about this? Will any changes due to voice deployment such as many trunks, QoS, etc. cause a significant jump in CPU that might put our baseline even higher?
    Will the higher baseline CPU utilization affect switch performance? I know most forwarding functions do not depend upon the CPU, and are switched via ASIC.
    thanks for the help,
    Brad

    The IDLE_Tasks process on which you are seeing slightly higher CPU is actually an enhancement that collects information more effectively for crashinfo files (files generated when there is a crash on the box). This enhancement went into 8.3(x) and 8.4(x).
    In the past, "IDLE_tasks" processing was not counted separately and therefore belonged to the "Kernel and Idle" process, which (as you can see in "show proc cpu") accounts for the amount of CPU not being used.
    If you are upgrading other 6500s with Sup1As, make sure they all have 128 MB DRAM just like this one has.
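    If it helps, the installed DRAM can be checked before an upgrade with the CatOS "show version" command, whose output includes the DRAM, NVRAM and Flash sizes per module:
    Console> show version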

  • Swiping causes spikes in CPU utilization.

    Using app to monitor CPU on iPhone 5 and iPad mini. Swiping causes spikes in CPU utilization from 5% to 65% for every swipe. Heavy swiping causes processor to run near 100%. Seems abnormal.
    Reduce Motion turned on.
    iOS 7.1
    Anything to make this not run so high? Battery drain seems much faster with just general usage.

    Hey Apple. This is likely a major issue with battery drain. Constant CPU spikes will cause the CPU to heat up, which increases battery drain. Swiping should not peg the CPU. I suspect it has to do with how the screen is rendered to the user. Somebody from Apple needs to look into this. This has been an issue for at least the last few iOS versions.

  • Xsun (high CPU utilization) on Solaris 10 Sparc

    hi
    I have a Sun Blade 1500 and am running Solaris 10 on it. The machine is a 2-CPU (750 MHz), 4 GB SPARC box with the latest cluster patch.
    The Xsun process is always at 50% utilization and the windowmaker (wmaker) is at 27%.
    Xsun is always using all available CPU and the machine is really slow. Any help on what patch will fix the Xsun process? Any operation on the machine increases Xsun's CPU utilization.
    thx
    Sriram

    Hi Sriram,
    Can I ask which platform you are using?
    If it is Solaris,
    can you paste the output of prstat -L -p <pid> 1 1, which gives the lightweight-process (LWP) threads,
    and also pstack <pid>, for the LWP-to-thread mapping?
    Or you can follow these steps for finding which thread is causing the high CPU utilization (a command sketch follows the list):
    1. Find the highest-usage LWP id in the prstat output.
    2. Find that LWP id in the pstack output and get the matching thread number.
    3. Convert the thread number to hexadecimal.
    4. Find the hexadecimal thread number in the server thread dump (nid=0x...).
    5. Determine what that thread was doing to cause the high CPU usage.
    You can do it a similar way on Linux:
    ps u -Lp <pid> and a thread dump.
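    As a rough sketch of those steps (the PID 1234 and LWP id 215 are placeholders):
    # 1. Per-thread CPU usage for the process, one 1-second sample:
    prstat -L -p 1234 1 1
    # 2. Stacks of every LWP in the process:
    pstack 1234
    # 3. Convert the hot LWP id to hex (e.g. 215 -> d7):
    printf '%x\n' 215
    # 4. For a Java server, trigger a thread dump (written to the process
    #    stdout/log) and look for the matching "nid=0xd7" entry:
    kill -QUIT 1234
    # On Linux, step 1 becomes:
    ps u -Lp 1234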
    Thank you,
    Bob

  • Performance degrading CPU utilization 100%

    Hello,
    RHEL 4
    Oracle 10.2.0.4
    Attached to a DAS (partition is 91% full) RAID 5
    Over the past few weeks my production database performance has degraded badly. I have not made any application, OS, or database changes (I was on vacation!). I have started troubleshooting, but need some more tips on what else I can check.
    My users run a query against the database, and for a table with only 40,000 rows, it will take about 2 minutes before the results return. For a table with 12 million records, it takes about 10 minutes or more for the query to complete. If I run a script that counts/displays a total record count for each table in the database as well as a total count of all records in the database (~15,000,000 records total), the script either takes about 45 minutes to complete or sometimes it just never completes. The Linux partition on my DAS is currently 91% full. I do not have Flashback or auditing enabled.
    These are some things I tried/observed:
    I shut down all applications/servers/connections to the database and then restarted the database. After starting the database, I monitored the DAS interface, and the CPU utilization spiked to 100% and never goes down, even with no users/application trying to connect to the database. The alert.log file contains these errors:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code arguments: [ttcdrv-recursivecall]
    ORA-03135: connection lost contact
    ORA-06512: at "CTXSYS.SYNCRN", line 1
    The database still starts, but the performance is bad. From the error above and after checking performance in EM, I see there are a lot of sync index jobs running for each of the schemas, and db file sequential read is high. There is a job to resync the indexes every 5 minutes. I am going to try disabling these jobs this afternoon to see what happens with the CPU utilization. If it helps, I will try adjusting the job from running every 5 minutes to something like every 30 minutes. Is there a way to defrag the CONTEXT indexes? REBUILD?
    I'm not sure if I am running down the right path or not. Does anyone have any other suggestions as to what I can check? My SGA_TARGET is currently set to 880M and the SGA_MAX_SIZE is 2032M. Would it also help for me to increase the SGA_TARGET to the SGA_MAX_SIZE, thus increasing the amount of space allocated to the buffer cache? I have ASMM enabled and currently this is what is allocated:
    Shared Pool = 18.2%
    Buffer Cache = 61.8%
    Large Pool = 16.4%
    Java Pool = 1.8%
    Other = 1.8%
    I also ran ADDM and these were the results of my Performance Analysis:
    34.7% The throughput of the I/O subsystem was significantly lower than expected (when I clicked on this it said to either implement ASM or stripe using SAME methodology...we are already using RAID5)
    31% SQL statements consuming significant database time were found (I cannot make application code changes, and my database consists entirely of INSERT statements...there are never any deletes or updates. I see that the updates that are being made were by the index resyncing job to the various DR$ tables)
    18% Individual database segments responsible for significant user I/O wait were found
    15.9% Individual SQL statements responsible for significant user I/O wait were found
    8.4% PL/SQL execution consumed significant database time
    I also recently ran a SHRINK on all possible tablespaces as recommended in EM, but that did not seem to help either.
    Please let me know if I can provide any other pertinent information to solve the poor I/O problem. I am leaning toward thinking it has to do with the index sync job stepping on itself...the job cannot complete in 5 minutes before it tries to kick off again...but I could be completely wrong! What else can I check to figure out why I have 100% CPU utilization, with no users/applications connected? Thank you!
    Mimi

    Tables/Indexes last analyzed today.
    I figured out that it was the Oracle Text indexes syncing too frequently that was causing the problem. I disabled all the jobs that kicked off those index syncs and my CPU utilization dropped to almost 0%. I will work on tuning the interval and re-enabling the indexes for my dynamic data sources.
    Thank you for everyone's suggestions!
    Mimi
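    For readers hitting the same issue, here is a minimal SQL*Plus sketch of finding and pausing DBMS_JOB-based sync jobs, and of defragmenting a CONTEXT index; the job number 42 and index name MY_CTX_IDX are placeholders:
    -- Locate the jobs that run the Oracle Text sync:
    SELECT job, what, interval FROM dba_jobs WHERE UPPER(what) LIKE '%SYNC%';
    -- Mark one broken so the scheduler stops running it:
    EXEC DBMS_JOB.BROKEN(42, TRUE);
    -- "Defragment" a CONTEXT index by optimizing it:
    EXEC CTX_DDL.OPTIMIZE_INDEX('MY_CTX_IDX', 'FULL');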

  • Very high cpu utilization with mq broker

    Hi all,
    I see very high CPU utilization (400% on an 8-CPU server) when I connect consumers to OpenMQ. It increases by close to 100% for every consumer I add. Slowly, the consumers come to a halt, as the producers are sending messages at a good rate too.
    Environment Setup
    Glassfish version 2.1
    com.sun.messaging.jmq Version Information Product Compatibility Version: 4.3 Protocol Version: 4.3 Target JMS API Version: 1.1
    Cluster set up using persistent storage. Snippet from the broker log:
    Java Runtime: 1.6.0_14 Sun Microsystems Inc. /home/user/foundation/jdk-1.6/jre
    [06/Apr/2011:12:48:44 EDT] IMQ_HOME=/home/user/foundation/sges/imq
    [06/Apr/2011:12:48:44 EDT] IMQ_VARHOME=/home/user/foundation/installation/node-agent-server1/server1/imq
    [06/Apr/2011:12:48:44 EDT] Linux 2.6.18-164.10.1.el5xen i386 server1 (8 cpu) user
    [06/Apr/2011:12:48:44 EDT] Java Heap Size: max=394432k, current=193920k
    [06/Apr/2011:12:48:44 EDT] Arguments: -javahome /home/user/foundation/jdk-1.6 -Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.cluster.masterbroker=mq://server1:37676/ -Dimq.cluster.brokerlist=mq://server1:37676/,mq://server2:37676/ -Dimq.cluster.nowaitForMasterBroker=true -varhome /home/user/foundation/installation/node-agent-server1/server1/imq -startRmiRegistry -rmiRegistryPort 37776 -Dimq.imqcmd.user=admin -passfile /tmp/asmq5711749746025968663.tmp -save -name clusterservercom -port 37676 -bgnd -silent
    [06/Apr/2011:12:48:44 EDT] [B1004]: Starting the portmapper service using tcp [ 37676, 50, * ] with min threads 1 and max threads of 1
    [06/Apr/2011:12:48:45 EDT] [B1060]: Loading persistent data...
    I followed the steps in http://middlewaremagic.com/weblogic/?p=4884 to narrow it down to the threads that were causing the high CPU. Both were around 94%.
    Following is the stack for those threads.
    "Thread-jms[224]" prio=10 tid=0xd635f400 nid=0x5665 runnable [0xd18fe000] java.lang.Thread.State: RUNNABLE at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xf3d35730> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
    "Thread-jms[214]" prio=10 tid=0xd56c8000 nid=0x566c waiting for monitor entry [0xd2838000] java.lang.Thread.State: BLOCKED (on object monitor) at com.sun.messaging.jmq.jmsserver.data.TransactionInformation.isConsumedMessage(TransactionList.java:2544) - locked <0xdbeeb538> (a com.sun.messaging.jmq.jmsserver.data.TransactionInformation) at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xe4c9abf0> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
    "Thread-jms[213]" prio=10 tid=0xd65be800 nid=0x5670 runnable [0xd1a28000] java.lang.Thread.State: RUNNABLE at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xe4c4bad8> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
    Any ideas will be appreciated.
    --

    Thanks ak, for the response.
    Yes, the messages are consumed in transactions. I set imq.txn.reapLimit=200 in the Start Arguments in the JVM configuration.
    I verified that it is being set in the log.txt file for the broker:
    -Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.txn.reapLimit=250
    It did not make any difference. Do I need to set this property somewhere else?
    As far as upgrading MQ is concerned, I am using GlassFish 2.1, and I think MQ 4.3 is packaged with it. Can you suggest a safe way to upgrade to OpenMQ 4.5 in a running environment? I can bring down the cluster temporarily. Can I just change a jar file somewhere to use MQ 4.5?
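    In case it helps, imq.txn.reapLimit is a broker property, so besides the JVM Start Arguments it can be passed directly to the broker or persisted in the broker instance's configuration. A sketch, with paths and instance names as placeholders:
    # Passed to the broker at startup, like the -D arguments in the log above:
    imqbrokerd -name clusterservercom -Dimq.txn.reapLimit=250
    # Or persisted in the instance's configuration file, e.g.
    # <IMQ_VARHOME>/instances/<instance>/props/config.properties:
    imq.txn.reapLimit=250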
    Here is a snippet of the consumer code:
    I create the Connection in @PostConstruct and close it in @PreDestroy, so that I don't have to do it every time.
    private ResultMessage[] doRetrieve(String username, String password, String jndiDestination,
            String filter, int maxMessages, long timeout, RetrieveType type)
            throws InvalidCredentialsException, InvalidFilterException, ConsumerException {
        // Resources (connection, ic, sessionContext and log are instance fields)
        Session session = null;
        try {
            if (log.isTraceEnabled()) log.trace("Creating transacted session with JMS broker.");
            session = connection.createSession(true, Session.SESSION_TRANSACTED);
            // Locate bound destination and create consumer
            if (log.isTraceEnabled()) log.trace("Searching for named destination: " + jndiDestination);
            Destination destination = (Destination) ic.lookup(jndiDestination);
            if (log.isTraceEnabled()) log.trace("Creating consumer for named destination " + jndiDestination);
            MessageConsumer consumer = (filter == null || filter.trim().length() == 0)
                    ? session.createConsumer(destination)
                    : session.createConsumer(destination, filter);
            if (log.isTraceEnabled()) log.trace("Starting JMS connection.");
            connection.start();
            // Consume up to maxMessages messages
            if (log.isTraceEnabled()) log.trace("Creating retrieval containers.");
            List<ResultMessage> processedMessages = new ArrayList<ResultMessage>(maxMessages);
            BytesMessage jmsMessage = null;
            for (int i = 0; i < maxMessages; i++) {
                // Attempt message retrieve
                if (log.isTraceEnabled()) log.trace("Attempting retrieval: " + i);
                switch (type) {
                case BLOCKING:
                    jmsMessage = (BytesMessage) consumer.receive();
                    break;
                case IMMEDIATE:
                    jmsMessage = (BytesMessage) consumer.receiveNoWait();
                    break;
                case TIMED:
                    jmsMessage = (BytesMessage) consumer.receive(timeout);
                    break;
                }
                // Process retrieved message
                if (jmsMessage != null) {
                    if (log.isTraceEnabled()) log.trace("Message retrieved\n" + jmsMessage);
                    // Extract message body
                    if (log.isTraceEnabled()) log.trace("Extracting result message container from JMS message.");
                    byte[] extracted = new byte[(int) jmsMessage.getBodyLength()];
                    jmsMessage.readBytes(extracted);
                    // Decompress message if flagged as compressed
                    if (jmsMessage.propertyExists(COMPRESSED_HEADER) && jmsMessage.getBooleanProperty(COMPRESSED_HEADER)) {
                        if (log.isTraceEnabled()) log.trace("Decompressing message.");
                        extracted = decompress(extracted);
                    }
                    // Done processing message
                    if (log.isTraceEnabled()) log.trace("Message added to retrieval container.");
                    String signature = jmsMessage.getStringProperty(DIGITAL_SIGNATURE);
                    processedMessages.add(new ResultMessage(extracted, signature));
                } else {
                    if (log.isTraceEnabled()) log.trace("No message was available.");
                }
            }
            // Package return container
            if (log.isTraceEnabled()) log.trace("Packing retrieved messages to return.");
            ResultMessage[] collectorMessages = new ResultMessage[processedMessages.size()];
            for (int i = 0; i < collectorMessages.length; i++)
                collectorMessages[i] = processedMessages.get(i);
            if (log.isTraceEnabled()) log.trace("Returning " + collectorMessages.length + " messages.");
            return collectorMessages;
        } catch (NamingException ex) {
            sessionContext.setRollbackOnly();
            log.error("Unable to locate named queue: " + jndiDestination, ex);
            throw new ConsumerException("Unable to locate named queue: " + jndiDestination, ex);
        } catch (InvalidSelectorException ex) {
            sessionContext.setRollbackOnly();
            log.error("Invalid filter: " + filter, ex);
            throw new InvalidFilterException("Invalid filter: " + filter, ex);
        } catch (IOException ex) {
            sessionContext.setRollbackOnly();
            log.error("Message decompression failed.", ex);
            throw new ConsumerException("Message decompression failed.", ex);
        } catch (GeneralSecurityException ex) {
            sessionContext.setRollbackOnly();
            log.error("Message decryption failed.", ex);
            throw new ConsumerException("Message decryption failed.", ex);
        } catch (JMSException ex) {
            sessionContext.setRollbackOnly();
            log.error("Unable to consume messages.", ex);
            throw new ConsumerException("Unable to consume messages.", ex);
        } catch (Throwable ex) {
            sessionContext.setRollbackOnly();
            log.error("Unexpected error.", ex);
            throw new ConsumerException("Unexpected error.", ex);
        } finally {
            try {
                if (session != null) session.close();
            } catch (JMSException ex) {
                log.error("Unexpected error while closing session.", ex);
            }
        }
    }
    Thanks for your help.

  • Low CPU utilization on Solaris

    Hi all.
    We've recently been performance tuning our java application running
    inside of an Application Server with Java 1.3.1 Hotspot -server. We've
    begun to notice some odd trends and were curious if anyone else out
    there has seen similar things.
    Performance numbers show that our server runs twice
    as fast on Intel with Win2K as on an Ultra60 with Solaris 2.8.
    Here's the hardware information:
    Intel -> 2 processors (32bit) at 867 MHz and 2 Gig RAM
    Solaris -> 2 processors (64bit) at 450 MHz and 2 Gig RAM.
    Throughput for most use cases in a low number of threads is twice as
    fast on Intel. The only exception is some of our use-cases that are
    heavily dependent on a stored procedure which runs twice as fast on
    Solaris. The database (oracle 8i) and the app server run on the same
    machine in these tests.
    There should be minor (or no) network traffic. GC does not seem to be an
    issue. We set the max heap at 1024 MB. We tried the various Solaris
    threading models as recommended, but they have accomplished little.
    It is possible our Solaris machine is not configured properly in some
    way.
    My question (after all that ...) is whether this seems normal to
    anyone? Should throughput be higher since the processors are faster on
    the Wintel box? Does the fact that the Solaris processors are 64-bit
    have any benefit?
    We have also run the HeapTest recommended on this site on both
    machines. We found that the memory test performs twice as fast on
    Solaris, but the CPU test performs four times slower on Solaris. The
    "joint" test is twice as slow on Solaris. Does this imply bad
    things about our Solaris configuration? Or is this a normal result?
    Another big difference between Solaris and Win2K in these runs is
    that CPU utilization is low on Solaris (20-30%) while it's much higher
    on Win2K (60-70%)
    [both machines are 2-processor and the tests are "primarily" single-
    threaded at this stage]. I would expect the Solaris CPU utilization
    to be around 50% as well. Any ideas why it isn't?

    Hi,
    I recently went down this path and wound up coming to the realization that the
    CPUs are almost neck and neck per cycle when running my Java app. Let me qualify
    this a little more: a 400 MHz SPARC II CPU vs a 500 MHz Intel CPU under similar load
    running the same test gave me similar results. It wasn't as huge a difference in
    performance as I was expecting.
    My theory is that, given the scalability of the SPARC architecture, more chips == more
    performance with less hardware, whereas the Wintel boxes are cheaper, but in order
    to get scaling, the underlying hardware comes into question (how many Wintel
    boxes to cluster, co-locate, manage, etc.).
    From what little I've found out running tests against our Solaris 8 (E-250)
    400 MHz UltraSPARC II boxes, it appears that the CPU performance in a lightly
    threaded environment is almost 1 cycle / 1 cycle (SPARC to Intel). I don't think
    the 64-bit SPARC architecture will buy you anything for Java 1.3.1, but if your
    application has some huge memory requirements, then using 1.4.0 (when BEA supports
    it) should be beneficial (check out http://java.sun.com/j2se/1.4/performance.guide.html).
    If your application is running only a few threads, tying the threads to the LWP
    kernel processes probably won't gain you much. I noticed that it decreased performance
    for a test with only a few threads.
    I can't give you a good reason as to why your Solaris CPU utilization is so low,
    you may want to try getting a copy of JProbe and profiling WebLogic and your application
    to see where your bottlenecks are. I was able to do this with our product, and
    found some nasty little performance bugs, but even with that our CPU utilization
    was around 98% on a single and 50% on a dual.
    Also, take a look at iostat / vmstat and see if your system is bottlenecking on
    I/O operations. I kept a background process of vmstat writing to a log and then looked
    at it after my test and saw that my cpu was constantly pegged out (doing a lot
    of context switching), but that it wasn't doing a whole lot of page faults
    (had enough memory).
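    A minimal version of that background logging (the 5-second interval is arbitrary):
    # Sample CPU, paging and context-switch counters every 5 seconds:
    vmstat 5 >> /tmp/vmstat.log &
    # Extended per-device I/O statistics at the same interval:
    iostat -x 5 >> /tmp/iostat.log &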
    If you're doing a lot of serialization, that could explain slow performance as
    well.
    I did follow a suggestion on this board of running my test several times with
    the optimizer (-server) and it boosted performance on each iteration until a plateau
    on or about the 3rd test.
    If you're running Oracle or another RDBMS on your Solaris machine you should see
    a pretty decent performance benchmark against NT as these types of applications
    are more geared toward the SPARC architecture. From what I've seen running Oracle
    on Solaris is pretty darn fast when compared to Intel.
    I know that I tried a lot of different tweaks on my Solaris configuration (tcp
    buffer size, etc/system parameters for file descriptors, etc.) I even got to the
    point where I wanted
    to see how WebLogic was handling the Nagle algorithm as far as its POSIX muxer
    was concerned and ran a little test to see how they were setting the sockets (setTcpNoDelay(Boolean)
    on java.net.Socket). They're disabling the Nagle algorithm, so that wasn't an
    issue, sigh. My best advice would be to profile your application and see where
    the bottlenecks are, you might be able to increase performance, but I'm not too
    sure. I also checked out www.spec.org and saw some of their benchmarks that
    coincide with our findings.
    Best of luck to you and I hope this helps :)
    Andy

  • Long running GC - High cpu% utilization

    I have a tomcat process running on linux which generates a lot of garbage.
    Throughout the day the machine will be at around 80% idle, and then suddenly jump to 20% idle cpu. It can stay at 20% idle for 1-4 hours, then goes back to 80% idle where it can stay for hours as well. The graph looks like a step-function.
    This does not correlate with the usage graph; the usage graph is smooth, typical of a B2C webapp.
    In conjunction with the 20% idle, i also see about 20% spent in "system" cpu, as well as an increase in "user" cpu consumption. When at 80% idle, system% is around 1%-2%.
    I log GC stats, and when the cpu is at 80% idle, i see:
    75324.001: [GC 75324.001: [ParNew: 65408K->0K(65472K), 0.0102260 secs] 472166K->407009K(1535936K), 0.0104630 secs]
    75327.998: [GC 75327.998: [ParNew: 65408K->0K(65472K), 0.0118190 secs] 472417K->407212K(1535936K), 0.0120560 secs]
    75333.342: [GC 75333.343: [ParNew: 65408K->0K(65472K), 0.0105900 secs] 472620K->407480K(1535936K), 0.0108260 secs]
    About 10 msecs spent in GC every 4 secs, but when it gets to 20% idle i see:
    30336.581: [GC 30336.581: [ParNew: 65408K->0K(65472K), 0.5277730 secs] 244204K->203369K(1535936K), 0.5280800 secs]
    30338.403: [GC 30338.403: [ParNew: 65408K->0K(65472K), 0.4632960 secs] 268777K->224890K(1535936K), 0.4635790 secs]
    30340.256: [GC 30340.256: [ParNew: 65408K->0K(65472K), 0.5050930 secs] 290298K->248727K(1535936K), 0.5053670 secs]
    So now it's taking 500 msecs every 2 secs or so.
    I'm using 1.5 with the following options:
    -Xms1500m -Xmx1500m -XX:+UseConcMarkSweepGC -XX:NewSize=64m
    So a 1.5G heap, with a 64M young generation. My inclination is to bump up the size of the young generation, but when i see these long 500 msecs young gen collection times, i get worried.
    Can anybody explain the strange step-function behavior of the process over time that would be strictly related to GC?
    Note that i see exactly the same pattern using the jrockit jvm (although the absolute numbers are different). I see the same step-function graph for cpu utilization vs a smooth graph for usage.
    Thanks.

    The fact that you see the same behavior with two completely different VMs makes me think it's not the VMs. -XX:+UseConcMarkSweepGC will cause ParNew collections to take slightly longer while it is running, but not anything like 50 times slower.
    Try using jconsole to see what is going on during the idle periods and the busy periods. That is, get thread stack traces. If you are running JDK-1.6.0, you can just start up jconsole and attach to the running VM. If you are running JDK-1.5.0, I think you need to start the VM with an additional option to allow jconsole to attach to it.
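    A sketch of that; the JDK 5 system property below is the standard one for enabling local JMX monitoring:
    # JDK-1.5.0: start the VM with local management enabled, then attach:
    java -Dcom.sun.management.jmxremote ... MainClass
    # JDK-1.6.0: local attach needs no extra flag; just run:
    jconsole <pid>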

  • CPU utilization for GC...?? (Need Help)

    I am currently investigating high CPU utilization of an application at certain peak times.
    During the course of the investigation, I was watching the Performance Monitor screen on the WebLogic Admin Console.
    I noticed that the "Memory Usage" graph (the amount of memory available in the JVM heap) shows a continuous set of spikes (a SAWTOOTH pattern) during the application's peak times.
    Now I am wondering:
    1) Can I CONCLUDE that this is really GC happening multiple times, showing up as the sawtooth pattern on the graph?
    (I cannot turn on any verbose GC options, since the app is in production.)
    2) Does GC contribute to User CPU or System CPU?
    (Although the JVM, i.e. the WebLogic server, was started as a user process, I want to know if GC is accounted entirely to the User CPU count.)
    3) What would generally be the contribution of GC to CPU usage? How much CPU could it consume (minor GC and full GC)? Could the contribution really be so large that my CPU usage graph peaks out (above 85%)?
    4) If the answer to (3) is "really large", what should my first approach be: increasing the new generation size, or enabling parallel GC to use multiple CPUs (I have 2 CPUs and my GC options are default)?
    Please advise. I really am in a soup.

    Hi,
    Yes, the sawtooth pattern shows that a GC has happened on every peak.
    I assume that you run on top of some Linux distribution.
    GC does contribute mainly to the value registered under User CPU unless you have limited physical memory on your machine where you could note an increase in System CPU during swapping.
    Optimization could also contribute to high CPU usage, you can turn off optimization with -Xnoopt.
    If you run with -Xgc:parallel (default with most JRockit versions), highest throughput is achieved when all CPUs concentrate on GC, thus generating a high CPU usage.
    What JRockit version are you using? (Include -showversion in your commandline or run java -version.)
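    For instance (the exact flag set is illustrative; check the documentation for your JRockit version):
    # Print the JRockit version banner, then run with parallel GC and,
    # for comparison, code optimization disabled:
    java -showversion -Xgc:parallel -Xnoopt MainClass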
    Kind regards,
    Cecilia Borg
    JRockit Customer Centric Engineering
    BEA WebLogic JRockit

  • Does a SAM help decrease CPU utilization on a CSS?

    Hi everyone,
    I have encountered the symptom that CPU utilization on CSS 11501 is increasing periodically.
    I assume this symptom is caused by the amount of traffic the CSS 11501 handles, and I am thinking it would be better to replace the CSS 11501 with a CSS 11503 fitted with a Session Accelerator Module (SAM).
    So my question is: if the increasing CPU utilization is due to the amount of traffic the CSS handles, would a SAM decrease the CPU utilization?
    Or do you have any idea ? such as adding I/O module ?
    Your information would be appreciated.
    Shinichi

    The traffic will be split across all the modules.
    So, if the high CPU is due to traffic, adding another module like a SAM will help.
    However, only the SCP handles management of the box (configuration, telnet, SSH, routing, SNMP, ...),
    so if the high CPU is due to such a process, adding another module will have no impact.
    Gilles

  • Jagged peaked cpu utilization during export

    Hello All,
    When I export JPGs onto my local drive from Lightroom 5.6, the CPU utilization is rather low and goes up and down (see photo below). The export is also quite slow: 250 files at 90% quality from RAW. Is this pattern normal, and why is Lightroom not using all of my resources? Only about half the CPU and 4 GB of RAM out of 12, and I didn't have anything else running except a browser.
    I see consistently very high CPU utilization when rendering out of After Effects, and much higher, but still somewhat peaked, utilization with the Premiere Pro media encoder.
    My system is pretty fast: an i7 quad-core Haswell. It does not seem that all the cores are being used by Lightroom, but I gather that is a known issue that can be addressed by breaking the export up into multiple smaller exports.
    Thanks

    Hello,
    Here are my results. One export: screenshot from Task Manager and thread view from Process Explorer.
    Now, the same screenshots with two exports:
    I never saw more than 3 threads (parallel processes) with one export, but 6 threads with two exports. The system becomes sluggish with two exports. Overall CPU usage of Lightroom increases by about 10-20%.

  • High CPU Utilization on Lync Mediation Servers

    All,
    We have been experiencing an issue for the last couple of weeks where we are seeing very high CPU utilization on our Lync Mediation Servers (connected
    directly to the SIP gateway). It is resulting in high Jitter values and hence poor call quality.
    The CPU utilization is normal on the mediation boxes in our other Lync hubs regardless of the number of inbound/outbound calls,
    but on the new Lync hub, as soon as the call volume (inbound/outbound) reaches ~10-15 calls, CPU peaks at nearly 80%-90% utilization. To reduce the call volume, we have routed our outbound calls to a different hub.
    To resolve this we have already taken the following troubleshooting actions:
    Increased CPU cores from 4 to 8 on the original Mediation VM (the mediation boxes in other hubs have 4 CPU cores and show no issue)
    Copied a known-good Mediation VM and replaced a mediation box in the new Lync hub
    Created a new VM and deployed it into the Mediation Pool
    Monitoring of the HVHs showed no resource allocation issues
    No indication in the Lync logs or Monitoring reports as to the root cause of the issue
    Raised this with our SIP suppliers to compare the SIP configuration with all our SIPs; they came back with no differences.
    Unfortunately, after all of the above actions, we are still seeing high CPU utilization.
    Can anyone suggest something which helps us to resolve this issue?
    Any assistance appreciated
    Cheers,
    Hemat

    Anti-virus exclusion list/exceptions properly configured on the affected Lync Mediation Servers? Using dynamic instead of fixed-size virtual disks? Is the host hypervisor on the official supported list (see
    Windows Server Virtualization Validation Program)?
    Also do check out the recently updated guide:
    Planning a Lync Server 2013 Deployment on Virtual Servers
    Thanks / rgds,
    TechNet/MSDN Forum Moderator - http://www.leedesmond.com

  • High CPU utilization on CSS 11501 version sg0750303

    Hi everyone,
    I have a problem with high CPU utilization on CSS 11501 version sg0750303.
    Our customer has been using one pair of CSS 11501s (active-standby);
    for convenience, these are called the "Old CSS" below.
    However, traffic via the Old CSS had been increasing, so the customer decided to add one more
    pair (active-standby) of CSSs to separate the traffic.
    Yesterday we installed two new CSS 11501s, version sg0750303 (active-standby);
    for convenience, these are called the "New CSS" below.
    Today, both the active and the standby CSS 11501 installed yesterday (the New CSSs)
    show high CPU utilization.
    Active CSS 11501:
    Peak CPU utilization: about 85%
    Average CPU Utilization: about 60%
    Standby CSS 11501:
    Peak CPU utilization: about 40%
    Average CPU Utilization: unknown
    I do not understand why the CPU utilization of both New CSSs became high.
    Less traffic passes through the New CSS than the Old CSS, because the traffic is split between the
    Old CSS and the New CSS.
    The New CSS's configuration (service, content, access-list) is also smaller than the Old CSS's,
    because the real servers are likewise split between the Old CSS and the New CSS.
    The Old CSS showed an average CPU utilization of about 20% before the New CSSs were installed yesterday,
    even though all traffic passed through the Old CSS alone.
    Although the New CSS stays at high CPU utilization, end users do not notice any
    performance issue (e.g., delays, communication failures and so on) and
    traffic passes through the New CSS normally.
    So my first question is: does CSS 11501 sg0750303 stay at high CPU utilization in a normal situation?
    The customer also uses MRTG to poll SNMP on the Old CSSs and New CSSs.
    So my second question is: can CSS 11501 sg0750303 reach high CPU utilization as a result of receiving
    SNMP polling?
    Or, if this situation is abnormal, we need to start an investigation.
    Would you please let me know how we should investigate this situation?
    I found the DDTS CSCek57080 "Performance issue using arrowpoint-cookie with ASR".
    Release note of this DDTS says that
    A customer was using a CSS pair configuration where arrowpoint-cookie
    is being used along with a redundant-index on many content rules. When
    the flow rate increased to a few hundred flows/sec, the peer message
    queue of the CSS receiving ASR related message began to fill up.
    When the peer message queue became over subscribed, the CPU increased
    and the CSS became unstable.
    The New CSSs have redundant-index configured on two content rules, end users do not notice any
    performance issue (e.g., delays, communication failures and so on), and
    traffic passes through the New CSS normally.
    So I think this DDTS is not related to this case.
    Your information would be greatly appreciated.
    Best regards,

    Gilles,
    Thank you very much for your cooperation.
    I got the capture you instructed us.
    The following are additional information from our customer.
    While user traffic passes through the active CSS, the active CSS shows:
    CPU utilization consistently in the range of 30% - 40%
    Peak CPU utilization of about 60% - 80%
    While there is no user traffic passing through the active CSS, it shows:
    CPU utilization consistently in the range of 0% - 5%
    Attached files are named "Active CSS.log" and "Standby CSS.log".
    "Active CSS.log" is captured on active CSS and "Standby CSS.log" is captured on
    standby CSS.
    I found the following processes using resources by looking at the output of
    the "shell 1 1 spyReport" command.
    On active CSS,
    tFlowMgrPktR 8ba24070 50 26% ( 1469) 20% ( 26)
    On standby CSS,
    fmPeerMsgTas 8a511510 50 16% ( 176) 10% ( 7)
    Your comment would be greatly appreciated.
    Best regards,
