CPU utilization is no lower than 5% in KDE.

Just wondering if I am the only one having this issue. What about you guys?

Shot in the dark

Similar Messages

  • CPU idle is 8%, lower than 20%

    CPU idle is 0%, lower than 20%.
    14889 35.7 1519480 1483640   orashp 08:01:05 oracleSHP (DESCRIPTION=(LOCAL=NO)(SDU=32768))
    14878 27.2 1516224 1485264   orashp 08:00:29 oracleSHP (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
    18467  1.5 1525744 1476680   orashp   Aug_23 ora_lgwr_SHP
    14875  1.1 67616 56072   orashp 08:00:10 brconnect -u / -jid STATS20080904080000 -c -f stats -t ALL
        3  0.3    0    0     root   Apr_30 fsflush
    14933  0.3 4152 2976     root 08:04:37 /usr/local/sbin/sshd -R

    Hi Fidel,
    SAP was running the program RSDBAJOB at this time.
    Following are our Oracle parameters:
    *._b_tree_bitmap_plans=FALSE
    *._optimizer_or_expansion='depth'
    *._push_join_predicate=FALSE
    *.aq_tm_processes=1
    *.background_dump_dest='/oracle/SHP/saptrace/background'
    *.compatible='9.2.0.0.0'
    *.control_file_record_keep_time=30
    *.control_files='/oracle/SHP/sapdata/cntrl/cntrlSHP01.ctl','/oracle/SHP/sapdata/cntrl/cntrlSHP02.ctl','/oracle/SHP/sapdata/cntrl/cntrlSHP03.ctl'
    *.core_dump_dest='/oracle/SHP/saptrace/background'
    *.db_block_size=8192
    *.db_cache_size=838860800
    *.DB_DOMAIN='WORLD'
    *.db_file_multiblock_read_count=8
    *.db_files=1024
    *.db_name='SHP'
    *.dml_locks=6250
    *.enqueue_resources=8000
    *.event='10028 trace name context forever, level 1','10027 trace name context forever, level 1','10183 trace name context forever, level 1','10191 trace name context forever, level 1','38068 trace name context forever, level 100'
    *.fal_client='SHP_std'
    *.fast_start_mttr_target=300
    *.hash_join_enabled=false
    *.instance_name='SHP'
    *.java_pool_size=0
    *.job_queue_processes=10
    *.large_pool_size=20971520
    *.log_archive_dest_1='location=/oracle/SHP/saparch'
    *.log_archive_dest_2='service=SHP_std optional reopen=60 max_failure=10 delay=1440'
    *.log_archive_format='SHP_%s.arc'
    *.log_archive_start=TRUE
    *.log_buffer=8192000
    *.log_checkpoints_to_alert=true
    *.open_cursors=800
    *.optimizer_features_enable='9.2.0'
    *.optimizer_index_cost_adj=10
    *.pga_aggregate_target=419430400
    *.processes=150
    *.query_rewrite_enabled='FALSE'
    *.remote_os_authent=true
    *.replication_dependency_tracking=false
    *.shared_pool_size=524288000
    *.sort_area_size=2097152
    *.star_transformation_enabled='FALSE'
    *.timed_statistics=TRUE
    *.transaction_auditing=false
    *.undo_management='AUTO'
    *.undo_retention=10800
    *.undo_tablespace='PSAPROLL'
    *.user_dump_dest='/oracle/SHP/saptrace/usertrace'
    *.utl_file_dir='/oracle/SHP/saptrace/usertrace'
    Thanks!
    Lily

  • Low CPU utilization on Solaris

    Hi all.
    We've recently been performance tuning our java application running
    inside of an Application Server with Java 1.3.1 Hotspot -server. We've
    begun to notice some odd trends and were curious if anyone else out
    there has seen similar things.
    Performance numbers show that our server runs twice
    as fast on Intel with Win2K as on an Ultra60 with Solaris 2.8.
    Here's the hardware information:
    Intel -> 2 processors (32bit) at 867 MHz and 2 Gig RAM
    Solaris -> 2 processors (64bit) at 450 MHz and 2 Gig RAM.
    Throughput for most use cases in a low number of threads is twice as
    fast on Intel. The only exception is some of our use-cases that are
    heavily dependent on a stored procedure which runs twice as fast on
    Solaris. The database (oracle 8i) and the app server run on the same
    machine in these tests.
    There should be minor (or no) network traffic. GC does not seem to be an
    issue. We set the max heap at 1024 MB. We tried the various Solaris
    threading models as recommended, but they have accomplished little.
    It is possible our Solaris machine is not configured properly in some
    way.
    My question (after all that ...) is whether this seems normal to
    anyone? Should throughput be higher since the processors are faster on
    the Wintel box? Does the fact that the Solaris processors are 64-bit
    have any benefit?
    We have also run the HeapTest recommended on this site on both
    machines. We found that the memory test performs twice as fast on
    Solaris, but the CPU test runs four times slower on Solaris. The
    "joint" test runs twice as slow on Solaris. Does this imply bad
    things about our Solaris configuration? Or is this a normal result?
    Another big difference between Solaris and Win2K in these runs is
    that CPU utilization is low on Solaris (20-30%) while it's much higher
    on Win2K (60-70%) [both machines have 2 processors and the tests are
    "primarily" single-threaded at this stage]. I would expect the Solaris
    CPU utilization to be around 50% as well. Any ideas why it isn't?

    Hi,
    I recently went down this path and wound up coming to the realization that the
    CPUs are almost neck and neck per cycle when running my Java app. Let me qualify
    this a little more: a 400 MHz SPARC II CPU vs. a 500 MHz Intel CPU under similar load,
    running the same test, gave me similar results. It wasn't as huge a difference in
    performance as I was expecting.
    My theory is that, given the scalability of the SPARC architecture, more chips == more
    performance with less hardware, whereas the Wintel boxes are cheaper, but in order
    to get scaling, the underlying hardware comes into question (how many Wintel
    boxes to cluster, co-locate, manage, etc.).
    From what little I've found out running tests against our Solaris 8 E-250s
    (400 MHz UltraSPARC IIs), it appears that the CPU performance in a lightly
    threaded environment is almost 1 cycle / 1 cycle (SPARC to Intel). I don't think
    the 64-bit SPARC architecture will buy you anything for Java 1.3.1, but if your
    application has some huge memory requirements, then using 1.4.0 (when BEA supports
    it) should be beneficial (check out http://java.sun.com/j2se/1.4/performance.guide.html).
    If your application is running only a few threads, tying the threads to the LWP
    kernel processes probably won't gain you much. I noticed that it decreased performance
    for a test with only a few threads.
    I can't give you a good reason as to why your Solaris CPU utilization is so low;
    you may want to try getting a copy of JProbe and profiling WebLogic and your application
    to see where your bottlenecks are. I was able to do this with our product, and
    found some nasty little performance bugs, but even with that our CPU utilization
    was around 98% on a single processor and 50% on a dual.
    Also, take a look at iostat / vmstat and see if your system is bottlenecking on
    I/O operations. I kept a background process of vmstat writing to a log and then looked
    at it after my test and saw that my CPU was constantly pegged out (doing a lot
    of context switching), but that it wasn't doing a whole lot of page faults
    (it had enough memory).
    If you're doing a lot of serialization, that could explain slow performance as
    well.
    I did follow a suggestion on this board of running my test several times with
    the optimizer (-server), and it boosted performance on each iteration until it plateaued
    on or about the 3rd test.
    If you're running Oracle or another RDBMS on your Solaris machine you should see
    a pretty decent performance benchmark against NT as these types of applications
    are more geared toward the SPARC architecture. From what I've seen running Oracle
    on Solaris is pretty darn fast when compared to Intel.
    I know that I tried a lot of different tweaks on my Solaris configuration (TCP
    buffer sizes, /etc/system parameters for file descriptors, etc.). I even got to the
    point where I wanted
    to see how WebLogic was handling the Nagle algorithm as far as its POSIX muxer
    was concerned and ran a little test to see how they were setting the sockets (setTcpNoDelay(boolean)
    on java.net.Socket); they're disabling the Nagle algorithm, so that wasn't an
    issue, sigh (a minimal sketch of that check follows this post). My best advice would be to profile your application and see where
    the bottlenecks are; you might be able to increase performance, but I'm not too
    sure. I also checked out www.spec.org and saw some of their benchmarks that
    coincide with our findings.
    Best of luck to you and I hope this helps :)
    Andy
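    For reference, the socket-level check described above boils down to something like this minimal sketch (host and port are placeholders; TCP_NODELAY can only be inspected on a socket your own code has opened, so a real test instruments the client or server you control):
        import java.net.Socket;

        // Minimal sketch: inspect and set TCP_NODELAY (i.e. whether Nagle's
        // algorithm is disabled) on a client socket opened by this code.
        public class NagleCheck {
            public static void main(String[] args) throws Exception {
                Socket s = new Socket("localhost", 7001); // placeholder host/port
                try {
                    // true means Nagle's algorithm is already disabled on this socket
                    System.out.println("TCP_NODELAY = " + s.getTcpNoDelay());
                    // disable Nagle explicitly, as the muxer apparently does
                    s.setTcpNoDelay(true);
                } finally {
                    s.close();
                }
            }
        }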
    [email protected] (feanor73) wrote:
    Hi all.
    We've recently been performance tuning our java application running
    inside of an Application Server with Java 1.3.1 Hotspot -server. We've
    begun to notice some odd trends and were curious if anyone else out
    there has seen similiar things.
    Performance numbers show that our server runs twice
    as fast on Intel with Win2K than on an Ultra60 with Solaris 2.8.
    Here's the hardware information:
    Intel -> 2 processors (32bit) at 867 MHz and 2 Gig RAM
    Solaris -> 2 processors (64bit) at 450 MHz and 2 Gig RAM.
    Throughput for most use cases in a low number of threads is twice as
    fast on Intel. The only exception is some of our use-cases that are
    heavily dependent on a stored procedure which runs twice as fast on
    Solaris. The database (oracle 8i) and the app server run on the same
    machine in these tests.
    There should minor (or no) network traffic. GC does not seem to be an
    issue. We set the max heap at 1024 MG. We tried the various solaris
    threading models as recommended, but they have accomplished little.
    It is possible our Solaris machine is not configured properly in some
    way.
    My question (after all that ...) is whether this seems normal to
    anyone? Should throughput be higher since the processors are faster on
    the wIntel box? Does the fact that the solaris processors are 64bit
    have any benefit?
    We have also run the HeapTest recommended on this site on both
    machines. We found that the memory test performs twice as fast on
    solaris, but the CPU test performs 4 times as slow on solaris. The
    "joint" test performs twice as slow on solaris. Does this imply bad
    things about our solaris configuration? Or is this a normal result?
    Another big difference is between Solaris and Win2K in these runs is
    that CPU Utilization is low on solaris (20-30%) while its much higher
    on Win2K (60-70%)
    [both machines are 2 processor and the tests are "primarily" single
    threaded at
    this stage]. I would except the solaris CPU utilization to be around
    50% as well. Any ideas why it isn't?

  • Jagged peaked cpu utilization during export

    Hello All,
    When I export JPGs onto my local drive from Lightroom 5.6, the CPU utilization is rather low and goes up and down (see photo below). The export is also quite slow: 250 files at 90%, from RAW. Is this pattern normal, and why is Lightroom not using all of my resources? Only less than half the CPU and 4 GB of RAM out of 12 are used. I didn't have anything else running except a browser.
    I see consistently very high CPU utilization when rendering out of After Effects, and much higher, though still somewhat peaked, utilization with Premiere Pro / Media Encoder.
    My system is pretty fast: a quad-core Haswell i7. It does not seem that all the cores are being used by Lightroom, but I guess that is a known issue that can be addressed by breaking up the export into multiple smaller exports.
    Thanks

    Hello,
    here are my results. One export. Screenshot from Task Manager and thread view from Process Explorer.
    Now, the same screenshots with two exports:
    I never saw more than 3 threads (parallel processes) with one export, but 6 threads with two exports. The system becomes sluggish with two exports. Overall CPU usage of Lightroom increases by about 10-20%.

  • RAC - CPU utilization

    Hi guys!
    We are specifying the hardware for the installation of a two-node RAC. Do you agree that the average CPU utilization should be less than 50%, to ensure that, in case of failure of one node, the other is able to handle everything and guarantee the same level of service? Is there any documentation on this?
    Thank you
    A.

    Well, it depends. If you are using RAC only for failover, then this statement is somewhat true, although CPU isn't the only indicator of a busy database: one bad query can tank CPU in an instant, and a high-I/O query can kill performance on ALL RAC nodes. You will have to look at I/O capacity and network availability as well. Also, you have to look at SLAs to determine what performance is expected and how long you can be expected to run on one node at diminished capacity (see the rough worked example after this reply).
    Most clients I have worked for expect an outage, as it is realistic: the more hosts you have, the greater the chance that one will fail. They plan on beefy hardware, but then the reality of a growing application sets in, and before you know it you are using 50-60-80% of what each node can handle. You then have three options: tuning, faster hardware, or adding another node.
    Cheers
    Jay
    http://www.grumpy-dba.com
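    As a rough worked example of the 50% figure discussed above (CPU only, ignoring the I/O, network, and SLA caveats): to survive the loss of one node in an N-node cluster used purely for failover, each node's average utilization should stay below (N - 1) / N of its capacity. A minimal sketch:
        // Back-of-the-envelope headroom rule for an N-node failover cluster.
        public class RacHeadroom {
            static double maxSafeCpuPercent(int nodes) {
                if (nodes < 2) {
                    throw new IllegalArgumentException("failover needs at least 2 nodes");
                }
                return 100.0 * (nodes - 1) / nodes;
            }

            public static void main(String[] args) {
                System.out.println(maxSafeCpuPercent(2)); // 50.0  (the two-node case above)
                System.out.println(maxSafeCpuPercent(3)); // ~66.7 per node
            }
        }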

  • HELP: WL 8.1 runs unexpectedly slow on Solaris 9 with low CPU utilization

    Hi All
    I have set up my app to run on WL 8.1 + Solaris 9 + a Sun mid-range server.
    The JVM was configured to use 3 GB of RAM and there is still abundant RAM on the hardware.
    I tried out a use case; it took a long time to get a response (~2 minutes), but the
    CPU utilization has always been lower than 20%. I have tried out the same test case
    on a Wintel server with 500 MB of RAM allocated to the JVM, and the response time is
    much quicker (less than 30 sec). I did the same on Solaris 8 with 3 GB of RAM, using the
    alternate threads library mode (changing LD_LIBRARY_PATH to include /usr/lib/lwp), which
    is the default mode adopted by Solaris 9. The same use case responded much quicker,
    comparable to the abovementioned test on Wintel. Can anybody advise on how to
    tune WL 8.1 on Solaris 9 so as to make it perform best? Is there any special
    trick?
    Thank you very much in advance for any advice.
    dso

    "Arjan Kramer" <[email protected]> wrote:
    >
    Hi dso,
    I'm running the same two configs and run into the same performance issues
    as you do. Please let me know if you get any response on this!
    Regards,
    Arjan Kramer
    "dso" <[email protected]> wrote:
    Hi All
    I have setup my app to run on WL8.1 + solaris 9 env. JVM was configured
    to use
    3G RAM and there are still abundant RAM on the HW server. I tried out
    a use case,
    it took long time to get response (~ 2 minutes). But the CPU utilization
    has been
    always lower 20%. I have tried out the same test case on a wintel server
    with
    500 RAM allocated to JVM, the response time is much quicker (less than
    30 sec).
    I did the same on solaris 8 with 3G RAM and had used alternate threads
    library
    mode (changing LD_LIBRARY_PATH to include /usr/lib/lwp) which is the
    default mode
    adopted by solaris 9. The same use case responded much quicker and comparable
    to abovementioned test on wintel. Can anybody advice on how to tuneWL
    8.1 on
    solaris 9 so as to make it perform best ? Is there any special trick
    thank u very much for any advice in advance
    dso
    There could be many factors that add to performance degradation (database, OS,
    network, app config, etc.), so without knowing more it's difficult to tell.
    Can you please supply the startup Java options used to set the heap, etc.? Having
    larger heap sizes is not always the best approach to building HA applications...the
    bigger they are, the harder they fall. I'd suggest using many, but smaller, instances.
    Provide the heap info from NT also.
    BTW, when WebLogic starts, can you tell me how much memory is being used in the
    console...i.e., the footprint of WebLogic + your application (a sketch for logging this from inside the JVM follows below).
    Many Thanks
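    For reference, a minimal sketch of the heap snapshot being asked about; it uses only java.lang.Runtime, so it can run inside the server JVM (for example from a startup class), and the class name is just a placeholder:
        // Log how much of the configured heap the JVM is actually using.
        public class HeapFootprint {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long usedMb  = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                long totalMb = rt.totalMemory() / (1024 * 1024);
                long maxMb   = rt.maxMemory() / (1024 * 1024); // roughly the -Xmx value
                System.out.println("Heap: " + usedMb + " MB used of " + totalMb
                        + " MB committed (max " + maxMb + " MB)");
            }
        }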

  • % CPU is consistently very low, never reaches over 30%. Nothing gets more than 5% anytime!

    I used to have the problem of over 100% CPU usage all the time, but for the last few months it's been the opposite problem. %CPU is never above 30%. Individual apps never get more than 10%, and it's more like 3-5% at any time. System and user apps are getting the same treatment.
    I see Index running constantly, and all applications hang after every click or swipe.
    I've verified and repaired permissions numerous times. Every time, there seem to be these little errors.
    What could I try?

    Problem description:
    CPU % usage is very low all the time, even when I have many operations running, and many of them are usually high-CPU users.
    This is the EtreCheck profile after having rebooted twice. It seems like it is running a little better now, but it still took 15 minutes to boot up. Apps are now sometimes running in the double digits, though!
    EtreCheck version: 2.1.8 (121)
    Report generated March 30, 2015 at 6:06:14 PM EDT
    Download EtreCheck from http://etresoft.com/etrecheck
    Click the [Click for support] links for help with non-Apple products.
    Click the [Click for details] links for more information about that line.
    Click the [Adware! - Remove] links for help removing adware.
    Hardware Information: ℹ️
        MacBook Pro (13-inch, Mid 2010) (Technical Specifications)
        MacBook Pro - model: MacBookPro7,1
        1 2.4 GHz Intel Core 2 Duo CPU: 2-core
        8 GB RAM Upgradeable
            BANK 0/DIMM0
                4 GB DDR3 1067 MHz ok
            BANK 1/DIMM0
                4 GB DDR3 1067 MHz ok
        Bluetooth: Old - Handoff/Airdrop2 not supported
        Wireless:  en1: 802.11 a/b/g/n
        Battery Health: Check Battery - Cycle count 389
    Video Information: ℹ️
        NVIDIA GeForce 320M - VRAM: 256 MB
            LED Cinema Display 1920 x 1200
    System Software: ℹ️
        OS X 10.9.5 (13F34) - Time since boot: 0:14:8
    Disk Information: ℹ️
        TOSHIBA MK2555GSXF disk0 : (250.06 GB)
            EFI (disk0s1) <not mounted> : 210 MB
            Macintosh HD (disk0s2) / : 249.20 GB (30.88 GB free)
            Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
        MATSHITADVD-R   UJ-898 
    USB Information: ℹ️
        Apple Inc. Built-in iSight
        Toshiba External USB HDD 500.11 GB
            EFI (disk1s1) <not mounted> : 210 MB
            Beechwood Videos (disk1s2) /Volumes/Beechwood Videos : 499.76 GB (13.83 GB free)
        Apple, Inc. Keyboard Hub
            Apple, Inc Apple Keyboard
        Apple Inc. Display iSight
        Apple Inc. Display Audio
        Apple Inc. Apple LED Cinema Display
        Apple Internal Memory Card Reader
        Apple Inc. BRCM2046 Hub
            Apple Inc. Bluetooth USB Host Controller
        Apple Computer, Inc. IR Receiver
        Apple Inc. Apple Internal Keyboard / Trackpad
    Gatekeeper: ℹ️
        Mac App Store and identified developers
    Adware: ℹ️
        Geneio [Adware! - Remove]
    Kernel Extensions: ℹ️
            /System/Library/Extensions
        [not loaded]    com.rim.driver.BlackBerryUSBDriverInt (0.0.39) [Click for support]
        [not loaded]    com.rim.driver.BlackBerryUSBDriverVSP (0.0.39) [Click for support]
    Launch Agents: ℹ️
        [not loaded]    com.adobe.AAM.Updater-1.0.plist [Click for support]
        [running]    com.adobe.AdobeCreativeCloud.plist [Click for support]
        [loaded]    com.divx.dms.agent.plist [Click for support]
        [loaded]    com.divx.update.agent.plist [Click for support]
        [loaded]    com.google.keystone.agent.plist [Click for support]
        [loaded]    com.oracle.java.Java-Updater.plist [Click for support]
        [running]    com.rim.BBLaunchAgent.plist [Click for support]
    Launch Daemons: ℹ️
        [loaded]    com.adobe.fpsaud.plist [Click for support]
        [loaded]    com.adobe.SwitchBoard.plist [Click for support]
        [loaded]    com.google.keystone.daemon.plist [Click for support]
        [loaded]    com.oracle.java.Helper-Tool.plist [Click for support]
        [loaded]    com.oracle.java.JavaUpdateHelper.plist [Click for support]
        [running]    com.rim.BBDaemon.plist [Click for support]
    User Launch Agents: ℹ️
        [loaded]    com.adobe.AAM.Updater-1.0.plist [Click for support]
        [loaded]    com.adobe.ARM.[...].plist [Click for support]
        [loaded]    com.citrixonline.GoToMeeting.G2MUpdate.plist [Click for support]
        [running]    com.Installer.completer.download.plist [Click for support]
        [loaded]    com.Installer.completer.ltvbit.plist [Click for support]
        [loaded]    com.Installer.completer.update.plist [Click for support]
    User Login Items: ℹ️
        GrowlHelperApp    Application  (/Library/PreferencePanes/Growl.prefPane/Contents/Resources/GrowlHelperApp.app)
        GrowlHelperApp    UNKNOWN  (missing value)
        iTunesHelper    UNKNOWN Hidden (missing value)
        Dropbox    Application  (/Applications/Dropbox.app)
    Internet Plug-ins: ℹ️
        o1dbrowserplugin: Version: 5.40.2.0 - SDK 10.8 [Click for support]
        Google Earth Web Plug-in: Version: 6.1 [Click for support]
        Default Browser: Version: 537 - SDK 10.9
        AdobeExManDetect: Version: AdobeExManDetect 1.1.0.0 - SDK 10.7 [Click for support]
        Flip4Mac WMV Plugin: Version: 3.2.0.16   - SDK 10.8 [Click for support]
        OfficeLiveBrowserPlugin: Version: 12.3.6 [Click for support]
        AdobeAAMDetect: Version: AdobeAAMDetect 2.0.0.0 - SDK 10.7 [Click for support]
        FlashPlayer-10.6: Version: 17.0.0.134 - SDK 10.6 [Click for support]
        DivX Web Player: Version: 3.2.0.788 - SDK 10.6 [Click for support]
        OVSHelper: Version: 1.1 [Click for support]
        Flash Player: Version: 17.0.0.134 - SDK 10.6 [Click for support]
        iPhotoPhotocast: Version: 7.0
        googletalkbrowserplugin: Version: 5.40.2.0 - SDK 10.8 [Click for support]
        QuickTime Plugin: Version: 7.7.3
        AdobePDFViewer: Version: 9.5.4 [Click for support]
        GarminGpsControl: Version: 2.9.2.0 Release [Click for support]
        Silverlight: Version: 5.1.20913.0 - SDK 10.6 [Click for support]
        JavaAppletPlugin: Version: Java 7 Update 55 Check version
    User internet Plug-ins: ℹ️
        CitrixOnlineWebDeploymentPlugin: Version: 1.0.105 [Click for support]
        WebEx: Version: 1.0 [Click for support]
    3rd Party Preference Panes: ℹ️
        Flash Player  [Click for support]
        Flip4Mac WMV  [Click for support]
        Growl  [Click for support]
        Java  [Click for support]
    Time Machine: ℹ️
        Skip System Files: NO
        Mobile backups: ON
        Auto backup: YES
        Volumes being backed up:
            Macintosh HD: Disk size: 249.20 GB Disk used: 218.32 GB
        Destinations:
            Data [Network]
            Total size: 0 B
            Total number of backups: 0
            Oldest backup: -
            Last backup: -
            Size of backup disk: Too small
                Backup size 0 B < (Disk used 218.32 GB X 3)
            Time Capsule backup [Network]
            Total size: 1.40 TB
            Total number of backups: 58
            Oldest backup: 2014-11-13 05:01:04 +0000
            Last backup: 2015-03-30 00:57:07 +0000
            Size of backup disk: Excellent
                Backup size 1.40 TB > (Disk size 249.20 GB X 3)
    Top Processes by CPU: ℹ️
            12%    WindowServer
             8%    repair_packages
             5%    Microsoft Entourage
             1%    SystemUIServer
             1%    Disk Utility
    Top Processes by Memory: ℹ️
        653 MB    firefox
        232 MB    mds_stores
        149 MB    repair_packages
        137 MB    com.apple.IconServicesAgent
        77 MB    Finder
    Virtual Memory Information: ℹ️
        3.99 GB    Free RAM
        2.43 GB    Active RAM
        1.13 GB    Inactive RAM
        769 MB    Wired RAM
        601 MB    Page-ins
        0 B    Page-outs
    Diagnostics Information: ℹ️
        Mar 30, 2015, 05:48:25 PM    Self test - passed

  • MacBook Air CPU load is lower than 5%, but the temperature is more than 90°C and the fans don't speed up! Is that normal?

    Hello,
    my MacBook Air CPU load is lower than 5% (i.e. no apps are loaded, only the Finder; iStat Menus gives me a CPU load near 3%),
    but the temperature is more than 90°C and the fans don't speed up!
    Is that normal?

    It is degrees C.
    When the computer behaves normally with no CPU load, its temperature is about 35°C,
    but sometimes the temperature increases to 90°C and more.
    Because I don't hear the fan speed increasing (I can barely hear the fans at all, so I think they stay at 2000 rpm), I'm afraid of burning the CPU and shut down the Mac;
    I didn't let it get hotter.

  • Email alerts if the free drive space is less than 50 GB and CPU utilization is more than 95%

    Hi all,
    I am new to SQL Server. Can someone please explain how I can add email alerts to my SQL Server box for the following scenarios:
    Drive free space is less than 50 GB.
    CPU utilization is more than 95%.
    Any help would be much appreciated.

    Try PowerShell and schedule it to run from Task Scheduler.
    Refer to the links below for more information; a rough sketch of the threshold logic follows this reply:
    https://www.simple-talk.com/sysadmin/powershell/disk-space-monitoring-and-early-warning-with-powershell/
    http://sqlpowershell.wordpress.com/2013/07/11/powershell-get-cpu-details-and-its-usage-2/
    --Prashanth
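    The PowerShell scripts linked above are the natural fit on a SQL Server box; if it helps to see the two threshold checks spelled out, here is a rough, hypothetical sketch of the same logic in Java (drive letter, thresholds, and the alert action are placeholders, and a real version would send mail instead of printing):
        import java.io.File;
        import java.lang.management.ManagementFactory;

        public class DriveAndCpuCheck {
            public static void main(String[] args) {
                // Check 1: free space on the drive, alert below 50 GB
                long freeGb = new File("C:\\").getUsableSpace() / (1024L * 1024 * 1024);
                if (freeGb < 50) {
                    System.out.println("ALERT: only " + freeGb + " GB free on C:");
                }

                // Check 2: overall CPU load, alert above 95%.
                // getSystemCpuLoad() is a com.sun.management extension and may
                // return a negative value if the platform does not expose it.
                com.sun.management.OperatingSystemMXBean os =
                        (com.sun.management.OperatingSystemMXBean)
                                ManagementFactory.getOperatingSystemMXBean();
                double cpuPercent = os.getSystemCpuLoad() * 100;
                if (cpuPercent > 95) {
                    System.out.println("ALERT: CPU utilization at " + Math.round(cpuPercent) + "%");
                }
            }
        }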

  • Performance degrading CPU utilization 100%

    Hello,
    RHEL 4
    Oracle 10.2.0.4
    Attached to a DAS (partition is 91% full) RAID 5
    Over the past few weeks my production database performance has majorly degraded. I have not made any application, OS, or database changes (I was on vacation!). I have started troubleshooting, but need some more tips as to what else I can check.
    My users run a query against the database, and for a table with only 40,000 rows, it will take about 2 minutes before the results return. For a table with 12 million records, it takes about 10 minutes or more for the query to complete. If I run a script that counts/displays a total record count for each table in the database as well as a total count of all records in the database (~15,000,000 records total), the script either takes about 45 minutes to complete or sometimes it just never completes. The Linux partition on my DAS is currently 91% full. I do not have Flashback or auditing enabled.
    These are some things I tried/observed:
    I shut down all applications/servers/connections to the database and then restarted the database. After starting the database, I monitored the DAS interface, and the CPU utilization spiked to 100% and never went down, even with no users/applications trying to connect to the database. The alert.log file contains these errors:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code arguments: [ttcdrv-recursivecall]
    ORA-03135: connection lost contact
    ORA-06512: at "CTXSYS.SYNCRN", line 1
    The database still starts, but the performance is bad. From the error above and after checking performance in EM, I see there are a lot of sync index jobs being run by each of the schemas, and db file sequential read is high. There is a job to resync the indexes every 5 minutes. I am going to try disabling these jobs this afternoon to see what happens with the CPU utilization. If it helps, I will try adjusting the job from running every 5 minutes to something like every 30 minutes. Is there a way to defrag the CONTEXT indexes? REBUILD?
    I'm not sure if I am running down the right path or not. Does anyone have any other suggestions as to what I can check? My SGA_TARGET is currently set to 880M and the SGA_MAX_SIZE is 2032M. Would it also help for me to increase the SGA_TARGET to the SGA_MAX_SIZE; thus increasing the amount of space allocated to the buffer cache? I have ASMM enabled and currently this is what is allocated:
    Shared Pool = 18.2%
    Buffer Cache = 61.8%
    Large Pool = 16.4%
    Java Pool = 1.8%
    Other = 1.8%
    I also ran ADDM and these were the results of my Performance Analysis:
    34.7% The throughput of the I/O subsystem was significantly lower than expected (when I clicked on this it said to either implement ASM or stripe using SAME methodology...we are already using RAID5)
    31% SQL statements consuming significant database time were found (I cannot make application code changes, and my database consists entirely of INSERT statements...there are never any deletes or updates. I see that the updates that are being made were by the index resyncing job to the various DR$ tables)
    18% Individual database segments responsible for significant user I/O wait were found
    15.9% Individual SQL statements responsible for significant user I/O wait were found
    8.4% PL/SQL execution consumed significant database time
    I also recently ran a SHRINK on all possible tablespaces as recommended in EM, but that did not seem to help either.
    Please let me know if I can provide any other pertinent information to solve the poor I/O problem. I am leaning toward thinking it has to do with the index sync job stepping on itself...the job cannot complete in 5 minutes before it tries to kick off again...but I could be completely wrong! What else can I check to figure out why I have 100% CPU utilization, with no users/applications connected? Thank you!
    Mimi
    Edited by: Mimi Miami on Jul 25, 2009 10:22 AM

    Tables/Indexes last analyzed today.
    I figured out that it was the Oracle Text indexes syncing too frequently that was causing the problem. I disabled all the jobs that kicked off those index syncs, and my CPU utilization dropped to almost 0%. I will work on tuning the interval and re-enabling the indexes for my dynamic datasources.
    Thank you for everyone's suggestions!
    Mimi

  • AHCI CPU utilization skyrockets

    This issue is a bit new to me--have done RAID and IDE setups for decades, but thought I'd tinker with AHCI.  Motherboard is MSI 970a-G46.  Enabling and disabling AHCI with an established Win7x64 installation is not a problem for me.
    Problem is that after enabling AHCI properly, cpu usage soars to 25%-30%+ with the Windows AHCI drivers, and jumps to as high as 40% with the latest AMD chipset drivers.  OK--this is what HD Tach reports, anyway.  IDE settings for the same drives measure 1-2% cpu utilization.  According to HD Tach, too, the performance of AHCI & IDE is identical.  Ergo: I see no advantage for my client system running in AHCI and will return to IDE.
    Agree--disagree? Suggestions?  Thanks.

    Quote from: Panther57 on 30-June-12, 01:01:20
    This is an interesting post... With my new build I was set Raid0 / IDE. I had an unhappy line in device manager and changed to AHCI. Then it downloaded the driver.
    I have not seen a jump in CPU usage. But I also have not been watching it like a hawk. Hmmm
    I am going to watch my AMD System Monitor for results. In a post of mine earlier, I was told about, and did, some tests of AHCI vs. IDE. I ran IDE on my other PC (listed below, HTPC) and am now on AHCI on my main 990FXA-GD80. The difference between the two modes tested on my 790FX actually did show an advantage for IDE, using Bench32.
    Not a huge advantage.. but a little over AHCI. I don't know if the difference is really worth much inspection.
    I am looking forward to the results you get WaltC
    Thanks, Panther57...;)  My "results" are really more of an opinion, but ...
    Right now I'm not really sure what hard drive benchmark I should be using or trusting!...;)  HD Tach's last release in 2004 is now confirmed on the company's site as the last version of the bench it will make--as it is, I have to set the compatibility tab for WinXP just to run the darn thing in Win7x64!  But...I installed the free version of HD Tune (and the 15-day trial for the "Pro" version of the program, too), and the results are very similar--except that HD Tune seems to be measuring my burst speeds incorrectly:  HD Tach consistently puts them north of 200mb/s; HD Tune, well south of  200mb/s.  (A strike against HD Tune--the free version does not measure cpu dependency--grrr-r-r-r.  You have to pay for the "Pro" version to see that particular number, or install the Pro trial version which reveals those numbers for 15 days.)
    OK, between the two benchmarks, and after several tests, cpu utilization seems high *both* in IDE and in AHCI modes.  Like you, it has been quite awhile since I actually *looked* at cpu utilization of any kind for hard drives.  I guess I wasn't prepared to see how cpu dependent things have become again.  Certainly, we are nowhere near the point of decades ago when cpu utilization approached 100% and our programs would literally freeze while loading from the IDE disk, until the load was finished.  The "good old days," right?  NOT, hardly...;)  I suppose, though, that with multicore cpus being the rule these days instead of the exception, cpu dependency is just not as big a deal as it was in the "old days" when we dealt with single-core cpus exclusively and searching an IDE drive could literally stop the whole show.
    Again, when running these read tests to see the degree of cpu utilization, I found that while the tests were all uniform and basically just repeats of each other, done a couple of dozen times, the results for cpu utilization in each test were *all over the map*--from 0% to 40% cpu dependency!  And the same was true whether I was testing in IDE mode or AHCI mode.  That was kind of surprising--and yet, it still leaves open the question of how accurate and reliable the two HD benchmarks that I used actually are.   Besides that, I did find a direct correlation between the size of the files being moved/copied and the degree of cpu dependency--the smaller the files copied and moved the higher the cpu involvement--the larger the files, the lower the cpu overhead in copying and moving, etc.  Much as we'd expect.
    So after all was said and done--it does seem to me that AHCI is actually more of a performer than IDE, albeit not by much.  I think maybe it demands a tad less cpu dependency, too, which is another mark in its favor.  In one group of tests I ran on a single drive (I also tested a pair of Windows-spanned hard drives in RAID 0 (software RAID) in AHCI and in IDE mode, just for the heck of it...;)),  I found the *average* read speed of the AHCI drive some ~15mb/s faster than the same drive tested in IDE.  That was with HD Tune tests.  But as I've stated, how reliable or accurate are these benchmarks?  Heh...;)  Anybody's guess, I suppose.
    My take in general on the subject (for anyone interested) is that going to AHCI won't hurt if a person decides to go that route, but it also won't help that much, either. You definitely can easily and very quickly move from an installed Win7 IDE installation to an AHCI installation, no problem (some sources swear it can't be done without a reformat and a reinstall--just not true!  They just haven't discovered how easy and simple it is to move from IDE to AHCI and back again.)   Current cpu dependencies whether in AHCI or in IDE surprise me they seem so high.  However, the last time I paid close attention to such numbers was back when I ran a single-core cpu, and back then cpu dependency numbers for a hard drive meant quite a lot.  Today's cpus have both the raw computational power and the number of cores to take that particular concern and put it on its ear, with a large grain of salt!...;)
    I have three drives total at current:
    Boot drive:
    ST332062 OAS Sata, boot drive
    then,
    (2) ST350041 8AS Satas, spanned in software RAID 0, making two ~500GB RAID 0 partitions. 
    Total disk space ~1.32 terabytes, all drives including RAID 0 partitions running in AHCI mode. (Software RAID is just as happy with IDE, btw.)
    My Friday "project" is complete...:]  Hope I haven't confused anyone more than myself...;)

  • High CPU utilization with JDesktopPane.OUTLINE_DRAG_MODE

    Hello there,
    Since I updated from Java SDK 1.4.0 to 1.4.1_01, I have noticed a problem with MDI Java applications using a JDesktopPane with JInternalFrames. When the drag mode of the internal frames is set to OUTLINE_DRAG_MODE, which should perform better than LIVE_DRAG_MODE, the CPU utilization goes up to nearly 100% and dragging a frame is quite slow (see the snippet after this post).
    Does anybody else experience this problem?
    (The problem exists in the application I develop and also in the NetBeans IDE that I use for development.)
    I am not sure if this is the right place for my problem, so if there is a better one to post it to, please tell me.
    Thanks
    Rüdiger
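    For reference, the setting in question is applied like this (a minimal, self-contained sketch; frame sizes and titles are arbitrary):
        import javax.swing.JDesktopPane;
        import javax.swing.JFrame;
        import javax.swing.JInternalFrame;

        // Minimal MDI window whose internal frames are dragged as outlines.
        public class OutlineDragDemo {
            public static void main(String[] args) {
                JDesktopPane desktop = new JDesktopPane();
                desktop.setDragMode(JDesktopPane.OUTLINE_DRAG_MODE); // vs. LIVE_DRAG_MODE

                JInternalFrame inner = new JInternalFrame("Test", true, true, true, true);
                inner.setSize(300, 200);
                inner.setVisible(true);
                desktop.add(inner);

                JFrame window = new JFrame("OUTLINE_DRAG_MODE demo");
                window.setContentPane(desktop);
                window.setSize(600, 400);
                window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                window.setVisible(true);
            }
        }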

    Hi,
    I've also noticed this. It happens on Windows 2000 with 1.4.1, but not with version 1.4.0. Have you found a solution yet?
    Martin

  • LMS 4.2.3 Server high CPU Utilization

    Hi All,
    We are observing high CPU utilization on the LMS server. Tomcat is the process eating more than 1 GB of memory, as I checked in Task Manager.
    Server details:
    Device licenses: 100
    Windows Server 2008 R2, with 8 GB physical memory.
    Can anybody suggest what might be causing the issue? Because of this, performance reports have been affected.
    Regards,
    Channa

    Hi Channa,
    log into dbreader using the following:
    http://servername:1741/dbreader/dbreader.html
    or
    https://servername:443/dbreader/dbreader.html
    User ID is DBA
    Database Name is upm
    Password: user defined (by default the password is c2ky2k)
    1.       Here is the query to get the poller-wise managed MIB objects:
    "select count (*), PollerName from Poller_Details_Table a,Poller_Definition_Table b where a.PollerId = b.PollerId and b.Poller_State NOT IN (1) Group by b.PollerName;"
    2.       Here is the query to get the total MIB object count for the active pollers:
    "select count (*) from Poller_Details_Table a,Poller_Definition_Table b where a.PollerId =
    b.PollerId and b.Poller_State NOT IN (1);"
    Hope it will help
    Thanks-
    Afroz
    ***Ratings Encourages Contributors ***

  • LMS 4.0.1 Create CPU Utilization Quick Report

    LMS 4.0.1 Create CPU Utilization Quick Report does not work, and there is no error message!

    I made a little batch file:
    https://supportforums.cisco.com/docs/DOC-21031
    It shows which process in LMS is eating your RAM / hogging the CPU.
    I don't think resources are used very effectively in LMS.
    I did have the impression that some virtual machines running LMS 3.2 actually performed better than real machines, as if VMware saw it load all these Java virtual machines, recognized that it was 45 copies of the same thing each only being used for a few %, and therefore could swap them to disk, leaving the resources to what was actually working in LMS.
    What worries me more than the resources used is the GUI performance.
    Cheers,
    Michel

  • Different CPU utilization on ESX VMware servers of Netweaver Portal

    Hello,
    we are running an Enterprise Portal NW 7.0, SPS20. The application servers are running on 6 ESX VMware servers.
    Although all application servers have almost the same number of user sessions and the same processes,
    one of the servers always has more than double the CPU utilization of all the other servers!
    We found out that this phenomenon appears as soon as one of the VM servers is running on different physical hardware.
    As soon as all servers are running on the same physical hardware, the problem does not exist.
    Does anyone have experience with the topic of (Portal) application servers on ESX VMware and differing CPU utilization?
    Best regards,
    Matthias

    Hi Matthias,
    Here is some information that may help you analyze the situation further:
    1158363 - vm-support - Exporting Diagnostic Data from VMware
    Use 'esxtop'.  Helpful information to evaluate the data shown in 'esxtop' can be found in "Performance Analysis Methods" available at:
    http://www.vmware.com/files/pdf/perf_analysis_methods_tn.pdf
    Furthermore, please take note of the following SAP Notes and ensure you have set up the extended SAP system monitoring.
    674851 - Virtualization on Windows
    1159490 - Virtualization on Windows: Monitoring on VMware ESX
    1056052 - Windows: VMware ESX Server 3 configuration guideline
    1104578 -  Virtualization on Windows: Enhanced monitoring
    Hope this helps.
    Best Regards,
    Matt
