Need to improve speed when graphing large arrays

My app generates several arrays of 500,000 elements which I want to display in a graph. Sending such an array directly to a graph makes everything very slow, I guess because of large memory allocations (LV is obviously not very good at handling large data unless you have very good insight into how LV handles memory). It has been suggested in other posts and in app notes to decimate the data before displaying it, since the graph can't display more points than its pixel width anyway. The question is how to implement a decimation algorithm that produces the same graph as if no decimation had been made, and that preserves resolution when zooming into the graph, which requires doing a new decimation on a new subset of the data. I think such a graph decimation algorithm should have been provided as a VI in LV, to put between the data and the graph. Since this limitation is inherent in LabVIEW whenever you try to graph more data points than the pixel width, it should not be hard to implement such a feature in LV. I would think this is quite a common problem and would like to get some comments, especially from NI people, about this issue. Any work-arounds are also appreciated.
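For what it's worth, the usual technique is min/max decimation: for each pixel column, plot the minimum and maximum of the samples that map to it, which draws the same envelope as plotting every point. LabVIEW diagrams can't be posted as text, so here is a sketch of the idea in Python; the function name and the pixel width are illustrative assumptions, not an NI-provided VI:

```python
# Min/max decimation: for each of `width` pixel columns, keep the min and
# max of the samples that map to that column. Plotting the resulting
# 2 * width points draws the same visual envelope as plotting all samples.
def decimate_minmax(samples, width):
    n = len(samples)
    if n <= 2 * width:
        return list(samples)  # nothing to gain from decimating
    out = []
    for col in range(width):
        # Integer arithmetic spreads the samples evenly over the columns.
        start = col * n // width
        stop = (col + 1) * n // width
        chunk = samples[start:stop]
        out.append(min(chunk))
        out.append(max(chunk))
    return out
```

When the user zooms, re-run the decimation on just the visible subset of the raw data, so full resolution is recovered at any zoom level.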

You are probably going to need the following sub-VIs as well.
Thank You,
Earl
Attachments:
Contrast Bright Controls.vi ‏24 KB
AI Read config.vi ‏28 KB
Read Line.vi ‏21 KB

Similar Messages

  • I need to improve speed....

    Gents,
    I have a Java app which is logging "messages" into another window (emulating the console with some clever functionality). In this window I'm using a JTextArea for displaying messages, and it seems that repainting this JTextArea is a bottleneck for performance... Do any of you have experience with this topic and ideas on how to improve speed?
    Maybe use a simpler component than JTextArea? But I need copy & paste functionality in there.
    Cheers,
    Preben

    I feel the need... the need for speed! Sounds like my last 2 weeks! We developed filters for the database to clean some data, and the projected completion of the run with filters was 135 days for the first set (not good, especially since 17 different algorithms had to be applied). We've steadily brought it down, so now a run takes about 16 minutes.
    speed... Speed... SPEED... I feel the need!
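    On the JTextArea question: the usual fix is to coalesce appends so the expensive repaint happens once per batch of messages rather than once per message (in Swing, buffer the messages and append them in one call on the EDT). A language-neutral sketch of the batching idea, in Python with a stand-in function for the repaint:

```python
# Coalesce log messages and flush them in one batch, so the expensive
# "repaint" (a stand-in callable here) runs once per batch rather than
# once per message. The batch size of 100 is a tuning assumption.
class BatchedLog:
    def __init__(self, repaint, batch_size=100):
        self.repaint = repaint        # expensive UI update (stand-in)
        self.batch_size = batch_size
        self.pending = []
        self.repaints = 0

    def append(self, msg):
        self.pending.append(msg)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.repaint("\n".join(self.pending))  # one update per batch
            self.pending.clear()
            self.repaints += 1

shown = []
log = BatchedLog(shown.append, batch_size=100)
for i in range(1000):
    log.append("message %d" % i)
log.flush()
```

With 1000 messages this triggers 10 repaints instead of 1000; the same idea in Swing keeps copy & paste intact since the component is unchanged.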

  • Extremely Slow USB 3.0 Speeds When Transferring Large Amounts of Video

    Hi there,
    I am transferring large amounts of footage (250 GB-1.75 TB chunks) from 5x 5400 RPM 2 TB drives to 5x 7200 RPM 2 TB drives simultaneously (via 6x USB 3.0 connections and 4x SATA III, with copy/paste in Explorer), and the transfer speeds are incredibly slow. Initially the speeds show up as quite fast (45-150 MB/s+) but then they slow down to around 3 MB/s.
    The drives have not been manually defragmented but the vast majority of the files on each are R3D video files.
    I am wondering if the number of drives and the amount of data being transferred is what is causing such slow speeds, or if there might be another culprit? I would be incredibly appreciative to learn of any solutions to increase speed significantly. Many thanks...
    Specs:
    OS: Windows 7 Professional
    Processor: i7 4790k
    RAM: 32GB
    GPU: Nvidia 970 GTX

    If the USB ports are all on the same controller, they share their resources, so the transfer rate with 6 ports would be at most one sixth of the transfer rate with 1 USB port if we disregard the overhead. Add that overhead to the equation and the transfer rate goes down even further. Now take into account the fact that you are copying from slow 5400 RPM disks that effectively max out at around 80 MB/s with these chunks and high latency, add the OS overhead, and these transfer rates do not surprise me.
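    The back-of-the-envelope arithmetic above can be made explicit; every figure here is an illustrative assumption, not a measurement:

```python
# Rough model of shared-controller USB throughput (illustrative figures).
controller_bw = 400.0   # assumed usable MB/s for one USB 3.0 controller
ports_sharing = 6       # simultaneous transfers on the same controller
per_port = controller_bw / ports_sharing   # ceiling per transfer

disk_seq_read = 80.0    # assumed MB/s a 5400 RPM drive sustains
overhead = 0.7          # assumed fraction left after protocol/OS overhead

# The effective rate is whichever limit is lower, minus overhead.
effective = min(per_port, disk_seq_read) * overhead
```

With these numbers the per-transfer ceiling is about 67 MB/s before overhead, well under what a single uncontended USB 3.0 transfer could do.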

  • Solution to improve performance (to increase the speed) when opening a PDF file with a large 3D model?

    I inserted a large .u3d building model into a PDF file (the .pdf is more than 40 MB), and it is very slow to open (more than 30 minutes). Is there any way to improve this when creating the PDF file? It is acceptable to reduce the precision of the model's shape/color/texture when inserting it into the PDF. I think my hardware is OK: a THINKPAD 520 with 8 GB RAM. This is urgent for me. For any solutions or clues, please let me know.
    Thanks and best regards.
    MD

    All of this is quite old stuff:
    http://www.okino.com/conv/exp_u3d.htm
    PDF Publishing - 3D PDF and PRC | Tech Soft 3D
    http://www.3d-test.com/contenu.php?id=317
    http://wiki.david-3d.com/_media/user_page/magweb/judasml1.pdf
    in the end, I would try
    CAD => PRC
    CAD => OBJ => Blender/MeshLab/... OBJ compression => U3D
    CAD to OBJ conversion can be done outside, but it is best done within the CAD authoring application (I would keep the compression level to a minimum here); I say that because this is what a powerful conversion tool such as Okino states about CAD data:
    As mentioned in the "NOTES" above, imported CAD data is most often in a form whereby it mainly consists of triangles, and those triangles can tend to be long sliver triangles due to the "trimming holes" in the source NURBS or BREP solids objects. Also, CAD data tends to have an over-abundance of polygons due to high tessellation of the source data. The Intel MultiRes compression algorithm cannot handle this case of sliver triangles (or an overabundance of very small polygons) very well, and hence compression should be reduced or minimized for such CAD datasets. CAD data also tends to have a lot of sharp angular features and thus the "vertex normals" exported and compressed into the U3D file must remain untouched -- too much vertex normal compression leads to a lot of "black banding".
    Blender/MeshLab/... OBJ compression is not straightforward; see "quadric edge collapse decimation" in MeshLab but there are lots of alternatives...
    (you should try to replicate yourself what the Acrobat 3D Toolkit once did: Optimizing file import for Acrobat 3D Toolkit - YouTube - maybe you can check if Tetra4D tools do it; Tetra4D took over Acrobat 3D business for Adobe)
    in the past, I also used Rhino and SALOME to compress CAD to OBJ (SALOME to STL) but I wouldn't recommend this way
    MeshLab/SimLab/Okino/... can save OBJ to U3D

  • Iteration speed issue when indexing a 3D array wired to a while-loop

    Brief Description of my Program:
    I am working on real-time signal generation using LabVIEW and DAQmx (PCI-6110 card). My VI reads a big data file (typically an 8 MB txt file containing about 400,000 samples, complex double precision). The signal is then pre-processed and I end up with a huge 3D array feeding a while-loop (typically the 3D array dimension is N x 7 x M where N & M >> 7). Inside the loop, that 3D array is indexed and processed before the results are written to the DAQmx outputs. I have a speed issue when indexing the large 3D array (i.e., a 3D array having a large sub-array size). My while-loop cannot run fast enough to top up the DAQmx AO buffer (at the output rate of 96 kHz). It can run fast enough only if I use a smaller 3D array (i.e., smaller-sized sub-arrays). I do not quite understand why the size of the 3D sub-array affects the rate of looping, although I am indexing the same sub-array size at each iteration. I really appreciate your comments, advice, and help.
    I include my 3D array format as a picture below.
    Question on LabVIEW:
    How does indexing a 3D array wired to a while-loop affect the speed of the loop iteration? I found that a large sub-array dimension in the 3D array slows down the iteration, compared to indexing the same-sized sub-array from a 3D array with smaller sub-arrays and performing the same signal processing inside the while-loop. Why? Is there any other way of designing the LabVIEW program to improve the iteration speed?
    attachment:

    Thank you all for your prompt replies and your interests. I am sorry about my attachment. But, I have now attached a jpg format image file as you suggested.
    I had read the few papers on large data handling, such as "LabVIEW Performance and Memory Management". Thus, I had already tried to avoid making unnecessary copies of data and growing arrays in my while-loop. I am not an expert on LabVIEW, so I am not sure if the issues I have are fundamental LabVIEW limitations or if there are other ways to improve the iteration speed without reducing the input file size and the DAQ output rate.
    As you requested, I also attach my top-level VI showing the essential sections such as the while-loop and its indexing. The attached file is a jpg image because the actual VI, including sub-VIs, is as big as 3 MB in total. I hope my attachment is useful for anyone who would like to reply to my question. If anyone would like to see my whole vi & llb files, I would be happy to send them by e-mail privately, so please provide your e-mail address.
    The dimension of my 3D array is N x 7 x M (Page x Row x Column), where N represents the number of pages in the 3D array and M represents the size of a 1D array. The file I am currently using forms a 3D array of N = 28 & M = 10,731. Referring to the top-level VI picture I attached, my while-loop indexes each page per iteration and wraps around. The sub-VI called "channel" inside the while-loop further indexes its input (a 2D array) into seven 1D arrays for other signal processing. The output from that "channel" sub-VI is the superposition of those seven arrays. I hope my explanation is clear.
    Attachment: 3Darray.jpg and MyVI.jpg
    Kind Regards,
    Shein
    Attachments:
    3Darray.jpg ‏30 KB
    MyVI.jpg ‏87 KB
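    On the indexing question: the N x 7 x M layout can be mimicked in Python to show why pulling a page out of the 3D array may cost O(7 x M) work per iteration. This is only an analogy, not LabVIEW code; whether LabVIEW's Index Array hands the loop a copy or an in-place view depends on what the compiler can prove about downstream use, which is a plausible reason the sub-array size affects loop speed:

```python
# Pure-Python analogy for the N x 7 x M array (pages x rows x columns).
N, ROWS, M = 28, 7, 10731
data = [[[0.0] * M for _ in range(ROWS)] for _ in range(N)]

def index_page_copy(arr, page):
    # Copying the sub-array: O(ROWS * M) work on every loop iteration,
    # so cost grows with M even though the indexed "shape" is constant.
    return [row[:] for row in arr[page]]

def index_page_view(arr, page):
    # Referencing without copying: O(1) per iteration. LabVIEW can only
    # do the equivalent when it proves the sub-array isn't modified.
    return arr[page]

copy = index_page_copy(data, 3)
view = index_page_view(data, 3)
```

If the copy case applies, restructuring so the data is read in place (e.g. operating on the 3D array inside an In Place Element Structure, or splitting the file into pages before the loop) avoids the per-iteration copy.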

  • Need help optimizing the writing of a very large array and streaming it to a file

    Hi,
    I have a very large array that I need to create and later write to a TDMS file. The array has 45 million entries, or 4.5x10^7 data points. These data points are of double format. The array is created by using a square pulse waveform generator and user-defined specifications of the delay, wait time, voltages, etc. 
    I'm not sure how to optimize the code so it doesn't take forever. It currently takes at least 40 minutes, and I'm still running it, to create and write this array. I know there needs to be a better way, as the array is large and consumes a lot of memory, but it's not absurdly large. The computer I'm running this on is running Windows Vista 32-bit, and has 4 GB RAM and an Intel Core 2 CPU @ 1.8 GHz.
    I've read the "Managing Large Data Sets in LabVIEW" article (http://zone.ni.com/devzone/cda/tut/p/id/3625), but I'm unsure how to apply the principles here.  I believe the problem lies in making too many copies of the array, as creating and writing 1x10^6 values takes < 10 seconds, but writing 4x10^6 values, which should theoretically take < 40 seconds, takes minutes. 
    Is there a way to work with a reference of an array instead of a copy of an array?
    Attached is my current VI, Generate_Square_Pulse_With_TDMS_Stream.VI and it's two dependencies, although I doubt they are bottlenecking the program. 
    Any advice will be very much appreciated. 
    Thanks
    Attachments:
    Generate_Square_Pulse_With_TDMS_Stream.vi ‏13 KB
    Square_Pulse.vi ‏13 KB
    Write_TDMS_File.vi ‏27 KB

    Thanks Ravens Fan, using replace array subset and initializing the array beforehand sped up the process immensely. I can now generate an array of 45,000,000 doubles in about one second.
    However, when I try to write all of that out to TDMS at the end LV runs out of memory and crashes. Is it possible to write out the data in blocks and make sure memory is freed up before writing out the next block? I can use a simple loop to write out the blocks, but I'm unsure how to verify that memory has been cleared before proceeding.  Furthermore, is there a way to ensure that memory and all resources are freed up at the end of the waveform generation VI? 
    Attached is my new VI, and a refined TDMS write VI (I just disabled the file viewer at the end). Sorry that it's a tad bit messy at the moment, but most of that mess comes from doing some arithmetic to determine which indices to replace array subsets with. I currently have the TDMS write disabled.
    Just to clarify the above, I understand how to write out the data in blocks; my question is: how do I ensure that memory is freed up between subsequent writes, and how do I ensure that memory is freed up after execution of the VI?
    @Jeff: I'm generating the waveform here, not reading it. I guess I'm not generating a "waveform" but rather a set of doubles. However, converting that into an actual waveform can come later. 
    Thanks for the replies!
    Attachments:
    Generate_Square_Pulse_With_TDMS_Stream.vi ‏14 KB
    Write_TDMS_File.vi ‏27 KB
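    The preallocate-and-fill pattern arrived at above, plus writing in fixed-size blocks so no second full-size copy of the array is ever needed for the write, can be sketched in Python. The file format, chunk size, and pulse fill are stand-ins for illustration, not TDMS specifics:

```python
# Preallocate once, fill in place (the Replace Array Subset pattern),
# then stream to disk one block at a time so the writer never needs a
# second full-size copy of the array in memory.
import array, os, tempfile

TOTAL = 1_000_000   # stand-in for the 45,000,000-point array
CHUNK = 100_000     # write granularity (a tuning assumption)

signal = array.array('d', [0.0]) * TOTAL       # one allocation up front
for i in range(TOTAL):
    signal[i] = 1.0 if (i // 50) % 2 == 0 else -1.0  # square-pulse fill

path = os.path.join(tempfile.mkdtemp(), 'pulse.bin')
with open(path, 'wb') as f:
    for start in range(0, TOTAL, CHUNK):
        # Each write only materializes one CHUNK-sized slice.
        f.write(signal[start:start + CHUNK].tobytes())
```

On the memory question: writing block-by-block like this means only one chunk is in flight at a time, so there is nothing extra to "free" between writes; the crash when writing all 45M points at once is the classic symptom of the file API building a full-size temporary copy.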

  • How to improve spreadsheet speed when single-threaded VBA is the bottleneck.

    My brother works with massive Excel spreadsheets and needs to speed them up: gigabytes in size and often with a million rows and many sheets within the workbook. He's already refined the sheets to take advantage of Excel's multi-threaded recalculation and seen significant improvements, but he's hit a stumbling block. He uses extensive VBA code to aid clarity, but the VB engine is single-threaded, and these relatively simple functions can be called millions of times. Some functions are trivial (e.g. conversion functions), there just for clarity, and easily unwound (at the expense of clarity); some could be unwound but that would make the spreadsheets much more complex; and others could not be unwound.
    He's aware of http://www.analystcave.com/excel-vba-multithreading-tool/ and similar tools, but they don't help as the granularity is insufficiently fine.
    So what can he do? A search shows requests for multi-threaded VBA going back over a decade.
    qts

    Hi,
    >> The VB engine is single-threaded, and these relatively simple functions can be called millions of times.
    The Office object model uses single-threaded apartments, so if the performance bottleneck is Excel object model operations, multiple threads will not improve performance significantly.
    >> How to improve spreadsheet speed when single-threaded VBA is the bottleneck.
    Performance optimization should be based on the business. Since I'm not familiar with your business, I can only give you some general suggestions from the technical perspective. According to your description, the size of the spreadsheet has reached gigabytes and the data volume is about 1 million rows. If so, I suggest storing the data in SQL Server and then using analysis tools (e.g. Power Pivot): see "Create a memory-efficient Data Model using Excel 2013 and the Power Pivot add-in".
    As ryguy72 suggested, you can also leverage some other third-party data processing tool according to your business requirements.
    Regards,
    Jeffrey

  • Why is LabVIEW so slow when I use Array Subset to scan through a large array?

    I am using Array Subset to scroll arrays across an intensity graph, and it works, but when I load a large array (e.g. 482 x 1000) the program is extremely slow, updating approx. every 3 seconds. What am I doing wrong?
    - there is always an easy way, but it is always the hardest to find

    Can you attach your code so we can see how you do this?
    How big are the subsets you are extracting?
    LabVIEW Champion . Do more with less code and in less time .
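    A rough sketch of what the per-update cost should look like when scrolling by subset (a Python stand-in with sizes from the post; the graph drawing itself is not modeled). If each update really only copied the visible window, a 482 x 1000 array would be far too small to take 3 seconds, which is why seeing the code matters: the usual culprit is the whole array, or the graph's history, being copied or redrawn on every update:

```python
# Scrolling an intensity graph by extracting a window (Array Subset):
# per-update cost should be O(ROWS * window), not O(the whole array).
ROWS, COLS, WIN = 482, 1000, 200
image = [[(r * COLS + c) % 256 for c in range(COLS)] for r in range(ROWS)]

def window(img, start_col, width):
    # Copy only the visible columns from each row.
    return [row[start_col:start_col + width] for row in img]

frame = window(image, 300, WIN)
```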

  • Have Windows XP and Adobe 9 Reader and need to send a series of large documents to clients as a matter of urgency

    I have Windows XP and Adobe 9 Reader and need to send a series of large documents to clients as a matter of urgency. When I convert a 10-page MS Word file to PDF, this results in a file of 6.7 MB which can't be emailed. Do I combine them and then copy to JPEG 2000, or do I have to save each page separately, which is very time consuming? Please advise me how to reduce the size and send 10-plus pages quickly with Adobe without the huge hassles I am enduring.

    What kind of software do you use for the conversion to PDF? Adobe Reader can't create PDF files.

  • TS1368 When downloading large files such as videos of a television season, do you need to turn off auto-lock to prevent the download from halting?

    When downloading large files such as videos of a television season, do you need to turn off auto-lock to prevent the download from halting? And if so, if the download will take a significant amount of time (not sure how much offhand), how do you prevent burn-in from a prolonged screen image?

    Among the alternatives not mentioned... Using a TiVo DVR rather than the X1; a Roamio Plus or Pro would solve both the concern over the quality of the DVR and provide the MoCA bridge capability the poster so desperately wanted the X1 DVR to provide. (Although TiVos support only MoCA 1.1.) Just get a third-party MoCA adapter for the distant location. Why the hang-up on having a device provided by Comcast? This seems especially ironic given the opinions expressed regarding payments over time to Comcast. If a MoCA 2.0 bridge was the requirement, they don't exist outside providers. So couldn't the poster have simply requested a replacement XB3 from the local office and configured it down to only providing MoCA bridging -- and perhaps as a wireless access point? Comcast would bill him the monthly rate for the extra device, but such is the state of MoCA 2.0. Much of the OP sounds like frustration over devices providing capabilities the poster *thinks* they should have.

  • Best Practices when using MVMC to improve speed of vmdk download

    I've converted a number of machines already from ESXi 4.1 to Hyper-V 2012 successfully and learnt pretty much all the gotchas and potential issues to avoid along the way, but I'm still stuck with extremely slow downloading of the source vmdk files to the host I'm using for the MVMC. This is not so much an issue for my smaller VMs, but it will be once I hit the monster-sized ones.
    To give you an idea, on a 1 Gb network it took me 3 hours to download an 80 GB VM. Monitoring the network card on the Hyper-V host I have MVMC running on shows that I'm at best getting 30-40 Mb/s download, and there are large patches where that falls right down to 20 Kb/s or thereabouts before ramping back up to the Mb/s range again. There are no physical network issues that should be causing this as far as I can see.
    Is there some undocumented trick to get this working at an acceptable speed?
    Copying large files from a Windows guest VM on the ESX infrastructure to the Hyper-V host does not have this issue, and I get the full consistent bandwidth.

    It's VMware in general is why... Ever since I can remember (which was ESX 3.5), if you copy using the web service from the datastore, the speeds are terrible. Back in the 3.5 days the max speed was 10 Mbps; FastSCP came around and threaded it to make it fast.
    Backup software like Veeam goes faster only if you have a backup proxy that has access to all datastores running in VMware. It will then utilize the backend VMware pipe and VM network to move the machines, which is much faster.
    That being said, in theory, if you nested a Hyper-V server in a VMware VM just for conversions, it would be fast, provided the VM server has access to all the datastores.
    Oh, and if you look at MAT and MVMC, the reason it's fast is because NetApp does some SAN offloading to get around VMware and make it array-based. So then it's crazy fast.
    As a side note, that was always one thing that has pissed me off about VMware.

  • HT1222 How can I stop my iPhone 4S downloading an automatic system update, which reduces my network speed when plugged in to charge or connected to the PC? I'm really OK with my iOS 5.0.1 and I don't need 5.1

    Hi, does anyone know how I can stop my iPhone 4S downloading an automatic system update, which reduces my network speed when plugged in to charge or connected to the PC? I'm really OK with my iOS 5.0.1 and I don't need 5.1.

    Setting up and troubleshooting Mail
    http://www.apple.com/support/ipad/assistant/mail/
    iPad Mail
    http://www.apple.com/support/ipad/mail/
    Try this first - Reset the iPad by holding down on the Sleep and Home buttons at the same time for about 10-15 seconds until the Apple Logo appears - ignore the red slider - let go of the buttons. (This is equivalent to rebooting your computer.)
    Or this - Delete the account in Mail and then set it up again. Settings->Mail, Contacts, Calendars -> Accounts   Tap on the Account, then on the red button that says Remove Account.
     Cheers, Tom

  • Performance problem when initializing a large array

    I am writing an application in C on a SUN T1 machine running Solaris 10. I compiled my application using cc in Sun Studio 11.
    In my code I allocate a large array on the heap. I then initialize every element in this array. When the array contains up to about 2 million elements, the performance is as I would expect -- increasing run time due to more elements to process, cache misses, and instructions to execute. However, once the array size is on the order of 4 million or more elements, the run time increases dramatically -- a large jump not in line with the other increases due to more elements, instructions, cache misses, etc.
    An interesting aspect is that I experience this problem regardless of element size. The break point in performance happens between 2 and 4 million elements, even if the elements are one byte or 64 bytes.
    Could there be a paging issue or other subtle memory allocation issue happening here?
    Thanks in advance for any help you can give.
    -John

    to save me writing some code to reproduce this odd behaviour do you have a small testcase that shows this?
    tim
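    Not a C testcase, but the measurement pattern being asked for can be sketched in Python (the numbers will differ from the Solaris/C case; this only shows how to chart per-element cost as the array grows, relevant to the paging question raised above since a jump between 2M and 4M elements means the array starts spanning many more pages than the TLB covers):

```python
# Measure per-element initialization cost as a heap buffer grows.
# Touching one byte per 4 KB page forces every page to be faulted in,
# which is where paging effects would show up.
import time

def per_element_init_ns(n):
    start = time.perf_counter()
    buf = bytearray(n)            # allocate n bytes on the heap
    for i in range(0, n, 4096):   # touch one byte per (assumed) 4 KB page
        buf[i] = 1
    elapsed = time.perf_counter() - start
    return elapsed * 1e9 / n      # nanoseconds per element

for n in (1_000_000, 2_000_000, 4_000_000, 8_000_000):
    print(n, round(per_element_init_ns(n), 3))
```

In the original C case, the same sweep (varying element size too) plus `pmap`/`trapstat` output would show whether the 2M-to-4M jump tracks total bytes (paging/TLB) or element count.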

  • I have iOS 5 on an iPhone 4, but it makes the iPhone much slower; please release an update to improve phone speed. When I touch the home button it reacts slowly, also while I scroll in menus and shut down applications, and Infinity Blade is freezing

    I have iOS 5 on an iPhone 4, but it makes the iPhone much slower; please release an update to improve phone speed. When I touch the home button the reaction is slower, also while I scroll in menus and shut down applications. Infinity Blade is freezing during video game parts, and during play its reactions are 3 or 4x slower. Please make an update for iOS 5.
    - reminders are ok
    - camera - perfect
    - newsstand - perfect
    - icloud too

    Hi,
    I have an iPhone 4S. This is not Siri; it is actually something to help those who have some sort of disability by reading the screen. You can disable it by going to General, then scrolling to the bottom to Accessibility.
    Hope this helps you

  • HT1338 My Mac keeps freezing? What can I do to improve speed.

    My Mac Pro keeps freezing. What can I do to improve speed and performance?

    Things That Can Keep Your Computer From Slowing Down
    If your computer seems to be running slower here are some things you can do:
    Boot into Safe Mode then repair your hard drive and permissions:
    Repair the Hard Drive and Permissions Pre-Lion
    Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Utilities menu. After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external FireWire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported click on the Repair Permissions button. Wait until the operation completes, then quit DU and return to the installer.
    If DU reports errors it cannot fix, then you will need Disk Warrior and/or Tech Tool Pro to repair the drive. If you don't have either of them or if neither of them can fix the drive, then you will need to reformat the drive and reinstall OS X.
    Repair the Hard Drive - Lion
    Boot from your Lion Recovery HD. When the recovery menu appears select Disk Utility. After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external FireWire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported, then click on the Repair Permissions button. Wait until the operation completes, then quit DU and return to the main menu. Select Restart from the Apple menu.
    Boot to the Recovery HD:
    Restart the computer and after the chime press and hold down the COMMAND and R keys until the menu screen appears. Alternatively, restart the computer and after the chime press and hold down the OPTION key until the boot manager screen appears. Select the Recovery HD and click on the downward pointing arrow button.
    Restart your computer normally and see if this has helped any. Next do some maintenance:
    Suggestions for OS X Maintenance
    For situations Disk Utility cannot handle the best third-party utility is Disk Warrior;  DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible.
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep.  Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced since Tiger.  These utilities have limited or no functionality with Snow Leopard or Lion and should not be installed.
    OS X automatically defragments files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems. For more about malware see Macintosh Virus Guide.
    I would also recommend downloading a utility such as TinkerTool System, OnyX 2.4.3, or Cocktail 5.1.1 that you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc.
    For emergency repairs install the freeware utility Applejack.  If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the command line.  Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard. There is no confirmation that this version also works with Lion.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    Carbon Copy Cloner
    Data Backup
    Deja Vu
    SuperDuper!
    SyncTwoFolders
    Synk Pro
    Synk Standard
    Tri-Backup
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac maintenance Quick Assist.
    Referenced software can be found at CNet Downloads or MacUpdate.
    Additional Hints
    Be sure you have an adequate amount of RAM installed for the number of applications you run concurrently. Be sure you leave a minimum of 10% of the hard drive's capacity as free space.
    Add more RAM. If your computer has less than 2 GBs of RAM and you are using OS X Leopard or later, then you can do with more RAM. Snow Leopard and Lion work much better with 4 GBs of RAM than their system minimums. The more concurrent applications you tend to use the more RAM you should have.
    Always maintain at least 15 GBs or 10% of your hard drive's capacity as free space, whichever is greater. OS X is frequently accessing your hard drive, so providing adequate free space will keep things from slowing down.
    Check for applications that may be hogging the CPU:
    Open Activity Monitor in the Utilities folder.  Select All Processes from the Processes dropdown menu.  Click twice on the CPU% column header to display in descending order.  If you find a process using a large amount of CPU time, then select the process and click on the Quit icon in the toolbar.  Click on the Force Quit button to kill the process.  See if that helps.  Be sure to note the name of the runaway process so you can track down the cause of the problem.
    Often this problem occurs because of a corrupted cache or preferences file or an attempt to write to a corrupted log file.
