Is it OK if the CPU hits more than 90%?

We have an Oracle 9.2 database running on AIX 5.3.
The client sometimes reports the CPU hitting almost 100%. It's purely a DB server; no other application is running. Is it OK to tell the client that this is normal behavior, or is something wrong with the DB? Any help is appreciated, as always.

If the performance of the database is not at issue, but a monitoring tool is identifying peaks of CPU usage, I would consider this normal. You want the database to use as much CPU as possible while it is doing its work; it means the DB isn't constrained by some other resource wait, like disk I/O. Looking at just peaks isn't a good measure: run a select count(*) on some table and the CPU could spike. It is prolonged episodes at this level that you may want to look at. Is the CPU above 90% during your entire peak usage period? If so, I would look at Statspack to see what is going on. Another measure to look at is the load averages. These aren't foolproof methods either, but they give a view of how busy the system is at 1-, 5-, and 15-minute intervals (if you are using some flavor of Unix/Linux).
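
If the high CPU does persist, a quick first check (a minimal sketch, assuming you can query the dynamic performance views on this 9.2 instance) is to see which sessions are consuming it:

-- Top sessions by cumulative CPU, highest first; the value is in
-- centiseconds of CPU consumed since each session logged on
SELECT s.sid, s.username, st.value AS cpu_centiseconds
FROM   v$sesstat st, v$statname sn, v$session s
WHERE  sn.name = 'CPU used by this session'
AND    st.statistic# = sn.statistic#
AND    st.sid = s.sid
ORDER  BY st.value DESC;

If one or two sessions dominate, their SQL is the place to start in Statspack.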

Similar Messages

  • Cisco ASA 5505: email alert when the CPU exceeds 70%?

    Dear All,
    I'm using a Cisco ASA 5505 firewall and I want an email alert from my firewall if the CPU increases above 70%. Is it possible? Please help me.
    Thanks
    Vijay

    Hi Vijay,
    It can be done, but you need network management software. I don't think you can ask the ASA itself to send emails. The ASA can send an SNMP trap to a configured SNMP server, which in turn can email you.
    HTH,

  • MSI Big Bang-XPower II: more motherboard RAM than the CPU can handle...

    I understand from reading a few forum posts: "By making more memory available to the system, more data can be cached in RAM, so there will be less hard drive activity and less swapping, and your system will perform better." But what happens with a desktop motherboard that supports (and has installed) more RAM than the processor can handle (i.e., the processor's max memory size)?
    Please comment on this configuration. As I'm looking for a motherboard that offers the most SATA 6Gb/s connections, I found a few available using the LGA 2011 socket (MSI Big Bang-XPower II LGA 2011 Intel X79 SATA 6GB/s USB 3.0 XL ATX Intel Motherboard with UEFI BIOS) that support 128 GB. As I understand it, the Intel® Core™ i7-3930K processor only works with 64 GB.
    Is there any advantage to having a motherboard that holds a bigger RAM capacity than the CPU supports? If so, how does this translate when using a program like Adobe After Effects (RAM preview)? Thanks.

    More memory does not make the system faster. RAM is volatile, short-term storage that buffers active programs and files; adding more does not by itself improve system speed (in fact, with more of it installed it may have to be clocked slower so the integrated memory controller on the CPU can cope). More memory just allows more large programs and files to be active at the same time. Huge amounts of RAM only really benefit workloads like video editing, photo editing, and 3D rendering, which keep large working sets in memory for quick access while they are active.
    Having more RAM than your CPU can handle is a VERY BAD IDEA: the machine may simply refuse to POST, or in very rare cases the CPU's IMC can be damaged, leading to a dead CPU!
    Why MSI have done this is most likely for possible later LGA 2011 CPUs that may allow the 128 GB RAM limit to be used; they do not exist currently, but X79 is a more professional-grade platform that will probably be around for years to come (it may see many upgrade cycles and may eventually get a CPU that can use 128 GB of RAM), and who knows what will happen with Intel CPUs in that time frame. It's more of a forward-compatibility thing for a very high-end platform.
    As for After Effects: it uses RAM for holding render files, so in theory more RAM just allows it to render larger, more complex files without hitting your computer's available-RAM ceiling. It will not render any faster as a result; it just has more headroom to work in.
    So overall, more RAM = more active space available and no increase in speed (possibly even a slight drop, as it may need to run slower, causing latency). Whether more RAM is a benefit or a hindrance comes down to your own use case.

  • CPU hits the limit, and the network slows down until it is unusable.

    Hi All
    School:
    Running 2 Xserves, both with 2 quad-core CPUs, running 10.5.6, connected to a Promise RAID via Fibre Channel for storage of network home folders; one is an Open Directory master, the other a replica.
    Approx. 290 clients: iMacs running 10.5.4 and eMacs running 10.4.11. This had been running fine. We upgraded 16 of the iMacs to 10.5.6 when it came out and experienced major kernel panics when networked accounts logged out, so we downgraded them back to 10.5.4.
    Last week we bought the new "entry level" iMacs, which wouldn't boot with 10.5.4, so we performed an archive and install of 10.5.6 (maybe not the best way, but time was precious). The room has 30 machines.
    When pupils now log on, the network grinds to a halt and the CPUs on both servers max out at 100% (previously they hardly ever got near 20%). We turn the room off and all the other rooms log in... everything goes back to normal...
    We had a very bad start at the beginning of September, when we were sold a RAID box and Xsan to work with home folders... which didn't work, leaving us without working machines for 3 weeks, with a new head who is not into Macs; this is giving him fuel to get rid of them (not least because the price of this room shot up considerably because of the new specs).
    A new image has been built from scratch, but who's to say it won't happen again come Monday morning... someone MUST know what is happening here. Also, Activity Monitor shows AFP with 202 threads regardless of how many users... Oh, and Spotlight has caused major problems in the past, so it is disabled across all machines... ANY help would be appreciated.
    I have spoken to two major Apple dealers in the UK... still no answer...
    Thanks
    Nicola Clarke

    I think you're experiencing the issue that several others have. AFP is swamping your CPU and everything grinds to a halt.
    You'll find there's already a topic on here with something like 15,000 hits and hundreds of replies.
    AppleCare has more or less acknowledged there is a problem, but I think Apple initially struggled to recreate it.
    We're all waiting to see if 10.5.7 resolves this. Hopefully it will be released any day now.

  • Make Spotlight (& the OS) use more CPU power?

    I've noticed that a lot of the time the OS, specifically Spotlight, doesn't use all of the CPU power available to it.
    For example, I have iStat menus displaying the CPU usage. When I do Spotlight searches, I'll start typing, but before I can finish typing my search Spotlight will start grinding away, not letting me finish what I want to type. Yet looking at the CPU usage in the menu bar, both cores are only at around 50%. I have to wait for Spotlight to calm down before I can finish typing my search.
    I looked around in System Preferences > Energy Saver > Options and set Processor Performance to Highest, but it seems a lot of the time the OS doesn't use as much CPU power as it could.
    Is there any way to make the OS, and specifically Spotlight, use more CPU power?

    Spotlight will use as much CPU as it needs; it really doesn't take that much unless it's drawing image previews, etc. The other, and probably more important, issue is hard disk speed, and whether or not the disk is doing anything else when you initiate the search.

  • In Mail 8.2, if I click the template button the CPU runs at 150% and WindowServer climbs to 20 GB and more of memory

    Hello, after upgrading to Yosemite I have a problem with Mail and the CPU.
    If I create a new email message everything is OK, but if I click the button at the top right (show template) in the message window, the CPU runs at 130-150% and WindowServer memory starts increasing to 20 GB and more.
    If I close and reopen Mail (without closing the mail message) the CPU returns to 130-150%, but if I send or delete the message the CPU returns to its normal value.
    If I don't click the template button in the new message window, the CPU behaves correctly.
    This causes slowdowns and unexpected logouts on my iMac. I contacted AppleCare twice and they told me to create a new user (first) and reinstall a fresh operating system (second). I have done both, but the problem is not solved.
    I have also tried this (without results): Guide: How to solve Yosemite memory leaks and CPU usage
    Do you have any solution for my problem?
    imac 5K - 16 GB ram
    Yosemite 10.10.2
    Mail 8.2
    Thanks in advance

    I have the same problem on all my Yosemite Macs too, and no solution yet except deleting your own templates.
    But if you need to use the templates for your business, like I do, that is not a "good" option.
    I use this workaround at the moment, until there's a fix:
    You can copy your own templates into one of the existing native template folders Apple Mail uses. (Google knows where the folders are.)
    (Or you can use another mail program, or a slow Apple Mail.)

  • Problem with the cache hit ratio

    Hello,
    I am having a problem with the cache hit ratio I am getting. I am sure, 100% sure, that something has got to be wrong with the cache hit ratio I am fetching!
    1) I will post the code that I am using to retrieve the cache hit ratio. I've seen about a thousand different equations, all equivalent in the end.
    In Oracle the cache hit ratio seems to be:
    cache hits / cache lookups,
    where cache hits <=> logical IO - physical reads
    and cache lookups <=> logical IO
    Now some people use the session logical reads stat from the view v$sysstat; others use db block gets + consistent gets; whatever. At the end of the day it's all the same, and this is what I use:
    SELECT (P1.value + P2.value - P3.value) AS CACHE_HITS,
           (P1.value + P2.value) AS CACHE_LOOKUPS,
           P4.value AS MAX_BUFFS_SIZEB
    FROM   v$sysstat P1, v$sysstat P2, v$sysstat P3, v$parameter P4
    WHERE  P1.name = 'db block gets'
    AND    P2.name = 'consistent gets'
    AND    P3.name = 'physical reads'
    AND    P4.name = 'sga_max_size';
    2) The problem:
    The cache hit ratio I am retrieving cannot be correct. In this case I was benchmarking a HUGELY inefficient query, consisting of the UNION of 5 projections over the same source table, and Oracle is configured with a relatively small SGA of 300 MB. The query plan is awful; the database will read the source table 5 times.
    And I can see in the physical data statistics of the source tablespace that total bytes read is approximately 5 times the size of the text file that I used to bulk load data into the database.
    Some of the relevant stats, wait events:
    db file scattered read: 1129.93 seconds
    Elapsed time: 1311.9 seconds
    CPU time: 179.84 seconds
    SGA max size: 314572800 bytes
    Total bytes read: 77771964416 B (approximately 72 GB)
    The source text file loaded into the database was approx. 16 GB,
    so the number of reads was about 4.5 times the source data file.
    I would say this: given the difference between CPU time and elapsed time, it is clear that the query spent almost all of its time doing db file scattered reads. How is it possible that I get the following cache hit ratio:
    Cache hit ratio: 0.92
    Cache hits: 109680186
    Cache lookups: 119173819
    I mean, only 8% of that logical I/O corresponded to physical I/O? It is just not possible.
    3) Procedure for taking stats:
    Now, to retrieve these stats I snapshot the system 2 times: once before the query, once after the query.
    But: this is not done in a single session. In total 3 sessions are created: one session to retrieve the stats before the query, one session to run the query, and a last session to snapshot after the query.
    Could the problem, assuming there is one, be related to this:
    "The V$SESSTAT view contains statistics on a per-session basis and is only valid for the session currently connected. When a session disconnects all statistics for the session are updated in V$SYSSTAT. The values for the statistics are cleared until the next session uses them."
    What does this paragraph mean? Does it mean that v$sysstat only shows you the stats of the last session that closed? Or does it mean that v$sysstat is incremented with the statistics of each session's v$sesstat once the session terminates? If so, then my procedure for gathering those stats should be correct.
    Can anyone help me sort out the origin of such a high cache hit ratio, with so much I/O being done?

    sono99 wrote:
    > Hi,
    > First off, let me start by saying that there were many things in your post that I could not understand. 1. Because I am not an Oracle expert; I use whatever RDBMS whenever I need to. 2. Because another problem has come up and, right now, I cannot inform myself enough to comprehend it all.
    Well, could it be that you need to understand the database you are working on in order to comprehend it? That is why we strongly advise you to read the Concepts manual first: you need to understand the architecture that Oracle uses, as well as the basic concepts of how Oracle does locking and maintains read consistency. It does these differently than other database engines, and some things become nonsense if looked at from the viewpoint of a single user.
    > quote:
    > It would be useful to see the execution plan just in case you have simplified the problem so much that a critical detail is missing.
    > First, the query code:
    > CREATE TABLE FAVFRIEND
    > NOLOGGING TABLESPACE TARGET
    > AS
    > SELECT ID AS USRID, FAVF1 AS FAVF FROM PROFILE
    > UNION ALL
    > SELECT ID AS USRID, FAVF2 AS FAVF FROM PROFILE
    > UNION ALL
    > SELECT ID AS USRID, FAVF3 AS FAVF FROM PROFILE
    > UNION ALL
    > SELECT ID AS USRID, FAVF4 AS FAVF FROM PROFILE
    > UNION ALL
    > SELECT ID AS USRID, FAVF5 AS FAVF FROM PROFILE;
    > Now, although it is clear from the query that the statement is executed with NOLOGGING, I have disabled logging entirely for the tablespace.
    There are certain rules about nologging that may not be obvious. Again, this derives from the basic Oracle architecture, and if you use the wrong definitions of things like logging, you will be led down the primrose path to confusion.
    > Furthermore, yes, the RDBMS is a test RDBMS... I have dropped the database a few times... And I am constantly deleting and re-inserting data into the source database table named PROFILE.
    > I also make sure to check all the datafile statistics, and for this query the amount of redo log, undo "log", and temp space used is negligible, practically zero.
    Create table is DDL, which has implied commits before and afterwards. There is a lot going on, some of it dependent on the volume of data returned. The Oracle database writer writes things out when it feels like it; there are situations where it might just leave it in memory for a while. With nologging, Oracle may not care that you can't perform recovery if it is interrupted. So you might want to look into Statspack or EM to tell you what is going on; the datafile statistics may not be all that informative for this case.
    > Most of the I/O is reads; a little of the I/O is writes.
    > My idea is not to optimize this query, it is to understand how it performs.
    Well, have you read the Concepts manual?
    > I have other implementations to test; namely, I am having trouble with one of them.
    > Furthermore, I doubt the query plan Oracle is using actually involves table scans (as I'd like it to do), because in the wait events, most of the wait time for this query is spent doing "db file scattered read", and I think this is different from a table scan.
    Please look up the definition of [db file scattered read|http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref703].
    > quote:
    > Do you really have to use sessions external to the query session? Can you query v$mystat joined to v$statname from the session itself?
    > No, I don't want to do that!
    > I avoid as much as possible having the code I execute be implemented in Java.
    Why do you think Java has anything to do with this? In your session, desc v$mystat and v$statname; these are views you can look at.
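    From within the test session, that join looks something like this (a minimal sketch; the three statistic names are the ones already used in the question):

    -- Session-level counters for the current session only, read from inside
    -- the session itself, avoiding the cross-session snapshot problem
    SELECT sn.name, ms.value
    FROM   v$mystat ms, v$statname sn
    WHERE  sn.statistic# = ms.statistic#
    AND    sn.name IN ('db block gets', 'consistent gets', 'physical reads');

    Run it once before and once after the query, in the same session, and subtract.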
    > When I can avoid it, I don't query the database directly through JDBC; I use the RDBMS command line client, which is supposed to be very robust.
    Er, is that sqlplus?
    > So yes, I only connect to the database with JDBC... in the very last session.
    > Of course, I could have put the gather-stats-before-query step, the gather-stats-after-query step, and the query itself in a single script.
    > But that would cause me a number of problems, namely that some of the SQL I build has to be generated dynamically, and I don't want to replicate the snapshotting code into every query script I make. This way I have one SQL script with the snapshotting code, and multiple scripts for running each query. I avoid code replication in this manner.
    Instrumentation is a large subject; dynamic SQL generation is something to be avoided if possible. Remember, Oracle is written with the idea that many people are going to be sharing code and the database, so it is optimized in that way. For SQL parsing in particular, if every SQL is different, you get a performance problem called "hard parsing." You can (and generally should, and sometimes can't avoid) use bind variables so that Oracle doesn't need to hard parse every SQL. In fact, this is one of those things that applies to other engines besides Oracle. I would recommend you read Tom Kyte's books; he explains what is going on in detail, including in some places the non-Oracle viewpoint.
    > quote:
    > Then what is the array fetch size? If the array fetch size is large enough, the number of block visits would be similar to the number of physical block reads.
    > I don't know what the array size you mention is. I have not touched that parameter. So whatever it is, it's the default.
    You should find out! You can go to http://tahiti.oracle.com and type array fetch size into the search box. You can also go to http://asktom.oracle.com and do the same thing, with some more interesting detail.
    > By the way, I don't get the query results into my client; the query results are dumped into a target output table.
    > So, if the array size has something to do with the number of rows that Oracle returns to the client in each step... I think it doesn't matter.
    You may hear this phrase a lot:
    "It depends."
    > As for the query plan: if I am not mistaken, you can't get query plans for queries that are CREATE TABLE AS SELECT.
    What?
    JG@TTST> explain plan for create table jjj as select * from product_master;
    Explained.
    JG@TTST> select count(*) from plan_table;
      COUNT(*)
             3
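    For a readable version of what just landed in plan_table, something like this (assuming a release with DBMS_XPLAN, 9.2 or later) is the usual approach:

    -- Pretty-print the most recent EXPLAIN PLAN from plan_table
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);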
    > I can however omit the CREATE TABLE part and just ask for evaluation of the SELECT part of the query; I believe it should be the same.
    Operation                            Optimizer  Cost  Cardinality  Bytes
    SELECT STATEMENT                     ALL_ROWS   2563  586110       15238860
    UNION-ALL
    TABLE ACCESS (FULL) SONO99.PROFILE              512   117222       3047772
    TABLE ACCESS (FULL) SONO99.PROFILE              513   117222       3047772
    TABLE ACCESS (FULL) SONO99.PROFILE              513   117222       3047772
    TABLE ACCESS (FULL) SONO99.PROFILE              513   117222       3047772
    TABLE ACCESS (FULL) SONO99.PROFILE              513   117222       3047772
    (Partition and predicate columns were empty and are omitted.)
    > This query plan was taken from SQL Developer and exported to text; the PROFILE table here has only 100k tuples.
    > Right now I am more concerned with testing the MODEL query, which Oracle doesn't seem to be able to run any more... but that is a matter for another thread.
    > Regarding this plan: the UNION ALL seems to be more than just a binary operator... it seems to be n-ary.
    > The UNION ALL in that execution plan seems to take as leaf tables five SONO99.PROFILE tables, and to do a table scan on them all. So I'd say that the RDBMS should only scan each database block once, and not 5 times.
    > But it doesn't seem to be so. It seems like what Oracle is doing is scanning each table completely, then moving on to the next SELECT statement in the UNION ALL, because the amount of the source table that was read is 5 times the size of the source table. Oracle didn't reuse the blocks it had read.
    > But this is just my feeling.
    Your feeling is uninteresting. Telling us what you really hope to accomplish might be more interesting.
    > Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing? Five? One for each table block?
    It depends.
    > My best regards,
    > Nuno (99sono xp).

  • Now that AMD lowered the CPU prices, who is thinking of making a speed jump?

    I am seriously considering taking a speed bump from a 3000+ to a 3400+, since it can be had for $290 through mwave; even the 3700+ hit the $500 mark.
    Jorge

    I was considering upgrading to a socket 939 VIA board, faster RAM than what I have now (CAS 3), a faster CPU, and maybe an X800 XT, but I will need to wait until I get some more money, lol.
    But don't forget, you can always sell your old gear, so the cost isn't so high!
    I am definitely going to get a new Enermax power supply soon, though.

  • How do I keep a window resize from holding the CPU and temporarily interrupting the LabVIEW application?

    I am performing a data acquisition in one VI at 10000 samples per second, then averaging 100 samples every 10 msec. This runs in a loop and I monitor the time of the loop. Another VI pulls the single-point result of each 10 msec average and plots it to a graph.
    If another window on the Windows XP operating system is resized, the LabVIEW VI performing the acquisition suspends until the resizing is complete, and I can watch the loop time go from 10 msec to however long the other window is being manipulated (100, 200, 300 msec...). The resulting graphical display will then show the next averaged plot point (this is a point-to-point drawing) without any of the 'real' analog activity that occurred during these 10+ msec interruptions (the DAQ VI pulls more than 100 samples to perform the averaging).
    Does anyone know how I can prevent the resizing from taking such a high priority on the CPU? For various reasons I do not want to change my data acquisition scheme.

    chrisger says: "if you want to pass data without loss you should use a method that allows buffering of data, e.g. queues. Take a look at the LV documentation (there are several solutions, but a named queue is the easiest way to go for this sort of application)."
    Actually, one VI is running as the executable and the other is called dynamically. The dynamic VI uses a queuing VI common to both to access the data. This part works well; it is the interruption of the main VI performing the data acquisition that is the problem.
    Because I am not using the full buffer of data, but only whatever portion is acquired and then averaged, the graphing VI essentially gets two (software-timed) points and draws between them.
    I realize software timing is bad, but I am kinda stuck with this scheme. So, I want to minimize the interruption when an unrelated window is resized.

  • What is Apple's stand regarding OWC's offer to exchange the CPU in a new Mac Pro 2013?

    I thought I had already posted this question here a short time ago.
    However, I cannot find it, for reasons I do not understand.
    Therefore, here is a second try.
    For a couple of days now, OWC has been offering to swap the CPU of a new ("Late 2013") Mac Pro.
    They claim their CPUs are more powerful and less expensive.
    They cite the "Magnuson-Moss Warranty Act", which in the USA states that a manufacturer cannot force its customers to use only the replacement parts it sells.
    In their view, therefore, Apple could not legally void the 1-year limited coverage or the 3-year AppleCare coverage because a third party (OWC) modifies a Mac Pro.
    On the other side, the only "user replaceable" part in a new Mac Pro Late 2013 according to Apple, and mentioned as such in the owner's manual, is the RAM.
    While OWC says that the exchange of the CPU is performed by "highly skilled Apple technicians", even if we accept this, those technicians are not doing it in the name of and as employees of Apple, but working for OWC.
    Therefore, either Apple approves of and agrees with the changes OWC makes to their new Mac Pros, in which case the customer will continue to enjoy full Apple support for the modified machine, or such a step isn't done with Apple's blessing at all and voids every coverage by Apple.
    I am therefore waiting for an urgently needed clarification of this matter by Apple.
    If any such third-party modification of an Apple product voids all commitments of Apple to the owner of that piece of equipment, people should be warned in time, before taking any risk of that kind.
    However, until now I find no statement whatsoever coming from Apple, either approving such measures or disapproving of them entirely.
    Strangely, OWC claims to provide, by themselves, full coverage for the modified computer, and not only for the CPU they exchanged but for the ENTIRE Mac Pro!
    Should that be true, then why mention the "Magnuson-Moss Warranty Act" to prove that Apple CANNOT legally refuse coverage for a computer they modify?
    The second point difficult to understand is that such full coverage of a modified Mac Pro would mean being able to replace, if necessary, ANY PART in one of these new machines, something only possible if Apple delivers to OWC whatever they need.
    I hardly believe that they could get, by themselves, any and every part of such a new computer, bypassing Apple.
    Surely Apple has exclusive agreements with the suppliers of at least most of the parts needed to assemble a new Mac Pro.
    I believe that a clear statement from Apple regarding this matter should be made, and the sooner the better!

    Apple has not made any public statement that I've seen, and won't do so here, these being user-to-user support forums.
    While OWC says that the exchange of the CPU is performed by "highly skilled Apple technicians", even if we accept this, those technicians are not doing it in the name of and as employees of Apple, but working for OWC.
    That would be correct. The technicians may be certified by Apple for doing Mac repairs, but they will not be Apple employees.
    The second point difficult to understand is that such full coverage of a modified Mac Pro would mean being able to replace, if necessary, ANY PART in one of these new machines, something only possible if Apple delivers to OWC whatever they need.
    I hardly believe that they could get, by themselves, any and every part of such a new computer, bypassing Apple.
    Surely Apple has exclusive agreements with the suppliers of at least most of the parts needed to assemble a new Mac Pro.
    OWC is not using any Apple-provided parts. They are getting processors from other suppliers.
    The normal legal stance, at least in the US, is that a manufacturer cannot refuse warranty service due to a third-party or user modification unless that modification can be shown to have led to the problem. So whether Apple would refuse or accept warranty service on a modified Mac Pro would depend entirely on what the problem is; this is what OWC is referring to.
    I very much doubt that Apple will make any statement on the issue other than to direct the user to the official warranty statement. For the US, see:
    http://www.apple.com/legal/warranty/products/embedded-mac-warranty-us.html
    but if they do choose to make a statement, it will not be here in these forums.
    Regards.

  • Is it possible to upgrade the CPU or GPU of a Toshiba Notebook?

    Many customers would like to upgrade/replace the CPU (Central Processing Unit) and/or GPU (Graphics Processing Unit) in their mobile systems to gain performance.
    Toshiba does not support or recommend performing modifications on their mobile systems since the core components in these mobile systems are not upgradeable like in a desktop system.
    More Info: http://aps2.toshiba-tro.de/kb0/TSB9401AX0001R01.htm

    Hi there,
    I think what John meant to say is: it is not possible, because it's too expensive and you won't gain any really big speed increase. I don't know where you read otherwise, but if you were a Toshiba technician, familiar with electronics and with these machines, or especially with your machine, then you wouldn't have a problem exchanging it.
    I already tried to perform such an exchange on my Satellite P20 some months ago. I can truly tell you: it didn't bring me much speed or performance. You will gain more performance by upgrading your RAM or your hard disk, not by exchanging your CPU.
    And it doesn't matter which machine you have, whether you own a Satellite, Satellite Pro, or a Tecra. The result is the same because the models all have the same components (but not identical hardware :) )
    Greets

  • Oracle Database 11.2.0.1.0 CPU hits 100% on Windows 2008 R2 64-bit Hyper-V

    Oracle Environment Detail:
    Oracle Database Enterprise Edition :11.2.0.1.0 (NON RAC, No failsafe)
    O/S:     Windows 2008 R2 64 bit (6.1)
    Specifics: Virtual Server using Hyper-V
    Memory: 9 GB RAM
    Processor: E5540
    Specifics about the issue:
    I have two database instances created on this server, with memory_target=3600M each.
    I am running into an issue where I find NO activity on the database except the regular scheduled jobs run via DBMS_SCHEDULER:
    •     Database FULL export, DAILY, scheduled at 7:00 and 7:30 PM respectively for the two instances (takes about 15-20 mins each)
    •     Database FULL RMAN backup, DAILY, scheduled at 9 and 10 PM respectively for the two instances (takes about 35-40 mins each)
    This was set up on Feb 25th, 2011. Since then we have encountered the CPU hitting 100% TWICE on this system, at around 8:30 PM (the SAME TIME). Both database instances' exports keep running and never finish, while the RMAN scheduled jobs have NO problem. Otherwise everything runs smoothly without any issue.
    Replicating the issue:
    I am unable to replicate this issue and do not know its cause.
    Workaround:
    Oracle services shutdown/server reboot.
    Would anyone know about this issue, or have any information that could help? This system will become production soon and we want to make sure we have a solution.
    Thanks in advance!

    I am finding that the CPU hitting 100% is very specific to the Oracle 11.2.0.1.0 export utility (exp) used to take the FULL database export. I understand original Export is desupported for general use as of 11g, but remains available for specific needs.
    Has anyone faced this issue with the 11.2.0.1.0 export utility (exp)? Please help.
    I need an export utility that gives us migration options.
    Thanks!
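    Since legacy exp is the suspect, one avenue worth testing is driving the supported Data Pump engine from PL/SQL, which also schedules cleanly from DBMS_SCHEDULER. A minimal sketch (not a verified fix for the CPU issue; DUMP_DIR is an assumed directory object that must exist and be granted to the user):

    -- Full export via the DBMS_DATAPUMP API instead of legacy exp
    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'FULL');
      DBMS_DATAPUMP.ADD_FILE(handle    => h,
                             filename  => 'full_export.dmp',
                             directory => 'DUMP_DIR');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);  -- blocks until the job finishes
    END;
    /

    If Data Pump shows the same 8:30 PM CPU spike, the export utility itself is probably not the culprit.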

  • What is the CPU usage of X and cpu temp on your Arch?

    I just installed the new Arch 2009.02. Everything seems to be perfect out of the box; I have the LXDE DE.
    But I noticed that at idle X is using around 2-3% CPU, compared to below 1% on the old Arch I had before. There seems to be a little lag in the desktop experience; it's not obvious, but it's not as responsive as my old Arch. Also, the CPU temperature under lm_sensors reads a constant 43, compared to 39 on my old Arch?! This is not some random temperature sample; I monitored it constantly to confirm it really does run hotter.
    Also, when I open one mplayer, it uses 40% CPU plus 20% for X. If I open 2 mplayers, the fan goes noisy, the CPU hits full, and both videos become choppy! This never happened on my old Arch. I searched the forum for quite a while but have not found the reason.
    Any help or information? Greatly appreciated. :D
    My Hardware:
    Shuttle XPC sn68sg2
    Athlon X2 5200+ 2.7 GHz
    Corsair 2 GB RAM
    Asus Nvidia Geforce 7600GS
    WD 250G x2

    Hmm, interesting. How old was the "old Arch" you are talking about?
    Because if it was quite a while ago that you ran Arch and recorded those temperature readings, I might wonder if your hardware is aging or dust is gathering inside your computer case, decreasing airflow and increasing the temperature.
    My laptop also runs almost constantly at around 43 degrees, and about the same at X idle. That seems normal to me, though.

  • Has anyone experienced PluginContainer*32 and FlashPlayerPlugin 12-0-0-70*32 taking over 60% of the CPU after 2 minutes of playing some YouTube videos?

    I have a 64-bit PC, but Flash Player and the plugin container seem to be 32-bit. Is this relevant?
    The CPU goes up to 60% for just these two processes within about 3 minutes of streaming some videos. The system effectively dies until I end one of the processes with Task Manager.
    An example of a video which displays this issue is http://www.techmoan.com/blog/2013/8/31/the-mini-0801-tiny-1080p-gps-car-camera-with-an-lcd-screen.html
    Firefox plays BBC iPlayer videos with no problems, but files like the Techmoan one above make the CPU go wild and stop the PC within about 3 minutes.
    Opera 20 plays the Techmoan video all the way through with no problems.
    This points to the problem being in Firefox.
    I think that I have the latest Adobe Flash Player and the latest Firefox.

    Hey,
    Please try disabling hardware acceleration. Some problems with Flash video playback can be resolved by disabling hardware acceleration in your Flash Player settings. (See [[Flash Plugin - Keep it up to date and troubleshoot problems|this article]] for more information on using the Flash plugin in Firefox.)
    To disable hardware acceleration in Flash Player:
    #Go to this [http://helpx.adobe.com/flash-player/kb/video-playback-issues.html#main_Solve_video_playback_issues Adobe Flash Player Help page].
    #Right-click on the Flash Player logo on that page.
    #Click on '''Settings''' in the context menu. The Adobe Flash Player Settings screen will open.
    # Click on the icon at the bottom-left of the Adobe Flash Player Settings window to open the Display panel. <br/> <br/>[[Image:fpSettings1.PNG]] <br/>
    # Remove the check mark from '''Enable hardware acceleration'''.
    # Click '''Close''' to close the Adobe Flash Player Settings Window.
    # Restart Firefox.
    This [http://www.macromedia.com/support/documentation/en/flashplayer/help/help01.html Flash Player Help - Display Settings page] has more information on Flash Player hardware acceleration, if you're interested.
    Does this solve the problem? Let us know.

  • Methods to reduce the CPU Usage for painting the image

    Hi,
    I have developed an application to view images from an IP camera. With it I can simultaneously view images from about 25 cameras. The problem is that CPU usage increases as the number of players increases. My player is a JPanel, which continuously paints the images from the camera. The method 'paintImage' is called from another thread's run method; this thread is responsible for fetching JPEG images from the IP camera.
    Here is the code for this.
    public void paintImage(Image image, int fps) {
      try {
        int width = this.getWidth();
        addToBuffer(image);
        currentImage = image;
        // Note: this player paints with getGraphics() from a non-EDT thread;
        // the Graphics must be null-checked and disposed of when done.
        Graphics graphics = this.getGraphics();
        if (isRunning && graphics != null) {
          graphics.drawImage(image, 0, 0, getWidth(), getHeight(), this);
          if (border) {
            graphics.setColor(Color.RED);
            graphics.drawRect(0, 0, getWidth() - 1, getHeight() - 1);
          }
          graphics.setColor(Color.white);
          graphics.setFont(new Font("verdana", Font.ITALIC, 12));
          graphics.drawString("FPS : " + fps, width - 60, 13);
          this.fps = fps;
          if (isRandomRecord) {
            // draw a green "recording" dot in the top-right corner
            graphics.setColor(new Color(0, 255, 0));
            graphics.fillArc(getWidth() - 10, 5, 10, 10, 0, 360);
          }
          graphics.dispose();
        }
      } catch (Exception e) {
        e.printStackTrace();
      }
    }
    Can someone please help me solve this problem so that the CPU usage can be reduced?

    Can you give me more detailed information about how to use an automated profiling tool?
    You run it and exercise your app. Then it presents stats on the number of times each method was called and the time spent in each method, plus other stuff. Using those two stats you can zero in on the areas that are most likely to yield results.
