Timed loop and CPU usage

Platform is WinXP Pro and the machine is a P4 at 2.5 GHz with 512 MB RAM.
LV7.1 + PCI-6229
I am using a 50 ms Timed Loop to run a state machine, along with a whole lot of other things: reading/writing DAQmx functions, file I/O functions and such. As the project involves a main and sub-panel set-up, local variables could not be eliminated fully, and there are something like 150 of them. But not all are accessed at once - maybe about 15 of them at any given time, depending on the SM status.
Problem:
Once started, the "Finished Late" indication is off and the actual timing alternates between 49 and 52 ms. The CPU usage is around 25%.
But as time goes by, the system gets unstable: after 15 minutes or so, the Finished Late indication is always ON and the CPU usage gradually tends towards, or exceeds, 100%.
Obviously the machine control timing now gets affected and things slow down badly. Closing the application and restarting repeats the above cycle.
I am at a loss to understand what is happening. Will breaking the single Timed Loop into multiple ones help? Will that be an efficient way of parallel threading?
I can post the code, but it's quite large, so I will do that as a last resort.
thanks
Raghunathan
LV2012 to Automate Hydraulic Test rigs.

Hello,
It sounds like an interesting problem.  It would be worth some experimentation to figure out what's going wrong - attempting to decouple major "pieces" of the code would be helpful.  For example, you could try breaking your code into multiple loops if that makes sense in your architecture, but perhaps you could even eliminate all but one of the loops to begin with, and see if you can correlate the problem to the code in just one of your loops.
Another concern is that you mention using many local variables.  Local variable read operations cause new buffer allocations, so if you're passing arrays around that way, you could be forcing your machine to perform many allocations and deallocations of memory.  As arrays grow, this becomes a bigger and bigger problem.  You can use other techniques for passing data around your block diagram, such as dataflow where possible (just simple wires), or queues where dataflow can't dictate program flow completely.
Hopefully looking into your code with the above considerations will lead you in the right direction.  In your case, removing code so that you can identify which elements are causing the problem should help significantly.
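The local-variable vs. queue trade-off above is LabVIEW-specific, but the producer/consumer idea translates to any language. As a rough textual analogy only (Java here, with illustrative names and sizes): a bounded queue hands each data block from producer to consumer exactly once, instead of every reader taking its own copy of a shared variable.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue instead of a shared variable: no extra copies,
        // and the consumer blocks instead of polling.
        BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                try {
                    queue.put(new double[]{i, i * 2.0}); // hand off, don't copy via globals
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();

        for (int i = 0; i < 3; i++) {
            double[] sample = queue.take(); // blocks until data is ready
            System.out.println(sample[0] + ", " + sample[1]);
        }
        producer.join();
    }
}
```

The bounded capacity also gives you back-pressure for free: if the consumer falls behind, the producer stalls at put() rather than growing memory without limit.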
Best Regards,
JLS
Sixclear

Similar Messages

  • Problem with memory usage and CPU usage

    Hello,
    I have a problem with the memory usage and CPU usage in my project!
    My application must run for more than 24 hrs. The problem is that the longer it runs, the bigger the memory and CPU usage get!
    It starts at ~15% CPU usage and ~70 MB memory usage. After ~24 hrs the CPU usage is ~60% and the memory usage is ~170 MB!
    After 3 days the CPU usage is almost 70% and the memory usage is about 360 MB!
    What can I do to reduce this huge resource usage?
    Thank you!

    Hi Pahe,
       I think the issue is memory usage, since CPU usage can increase due to greater memory requirements.
       Anyway, it's difficult to debug without seeing your code; can you post it (possibly saved for LV 7.1 compatibility)?  Or just post a JPEG of the piece of code that could be causing problems...
       My guess is that you're appending data to an array instead of replacing data elements, but I can't be sure...
       Have a nice day!
    graziano
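    For what it's worth, the append-vs-replace distinction graziano is guessing at looks like this in textual code (a Java sketch with illustrative names; the LabVIEW equivalent would be Replace Array Subset on a pre-allocated array): allocate once, then overwrite in place, so memory stays flat however long the application runs.

```java
class CircularBuffer {
    private final double[] data;
    private int next = 0;
    private boolean full = false;

    CircularBuffer(int capacity) {
        data = new double[capacity]; // allocate once, up front
    }

    void add(double sample) {
        data[next] = sample;             // replace in place -- no growth
        next = (next + 1) % data.length; // wrap around
        if (next == 0) full = true;
    }

    int size() {
        return full ? data.length : next;
    }
}
```

    An ever-growing array, by contrast, forces periodic reallocation and copying, which matches the symptom of CPU usage climbing along with memory over a 24-hour run.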

  • I think I am seeing a bug. Under FF 3.6, Mem Usage is 80 MB and CPU is 2%. Under FF 4.0, Mem Usage is 210+ MB and CPU usage is 35%. Also, in FB on some Zynga games, the plugin-container Mem Usage is running at over 400 MB in FF 4.0. Is this a known problem?

    I think I am seeing a bug. Under FF 3.6, Mem Usage is 80 MB and CPU is 2%. Under FF 4.0, Mem Usage is 210+ MB and CPU usage is 35%. Also, on FB on some Zynga games, the plugin-container.exe Mem Usage is running at over 400 MB in FF 4.0. Is this a known problem?


  • How to check the memory and CPU usage on NPU

    Hi experts,
    Could you tell me how to check the memory and CPU usage on the NPU?
    I think the ASR1000 has a command for this (show platform hardware qfp active...), but I couldn't find the same command on the ASR9001.
    I checked the "show controllers np counters all" command, but did not find it there.
    Platform: ASR9001
    Version: 4.3.2
    NP: Typhoon
    Thanks,

    Hi Kanako,
    I understand your question, but something like that doesn't exist for the Trident/Typhoon NPUs on the ASR9K.
    In fact, CPU usage is not really applicable to parallel/serialized hardware forwarders; the main KPI here is the PPS they handle.
    The max PPS an NPU can handle depends on the feature set.
    What you could do is measure the rates on the PARSE_FABRIC_RECEIVE count (packets received from the fabric, so egress direction), the PARSE_ENET_RECEIVE count (packets received from the interface towards the fabric, so ingress) and the LOOPBACK count, that is, packets recirculated for extra processing. The sum of these three gives an idea of the PPS rate the NPU is working at.
    Memory consumption is also tricky; there are many memories attached to the NPU, notably:
    TCAM, used for VLAN matching and ACL and QoS matching
    Search memory for L2 and L3 (shared on Trident, separate on Typhoon)
    Frame memory for packet buffering in QoS
    Stats memory, for all kinds of counters for QoS maps, interface stats, etc.
    Their usage is rather dynamic, too.
    The few things you want to verify on these are:
    - ACL usage (in terms of TCAM entries)
    - MAC table size (2M max on Typhoon)
    - FIB size (4M max on Typhoon)
    Frame memory usage depends on the buffering, which is the sum of all the packets' instantaneous queue lengths plus the size of all pmaps attached to interfaces served by the same NPU. There is no good overview of how much of it you have available, but it has a rather significant size: 1 GB or 2 GB depending on the TR or SE cards (Typhoon-specific).
    regards
    xander
    Xander Thuijs CCIE #6775
    Principal Engineer 
    ASR9000, CRS, NCS6000 & IOS-XR

  • How to control timing without 100% CPU usage

    I wanted fine control over timing (in windows XP), but ran into two problems.
    Problem 1: If you use the Swing timer, or Thread.sleep, the resolution is limited to 10 or 11 milliseconds. There is a Thread.sleep(millis, nanos) function, but I tested it and it still has 11ms resolution in WinXP.
    Problem 2: If you use jbanes's GAGE timer, CPU utilization will always be 100%.
    Solution: Use a hybrid technique. I would love to have nanosecond resolution AND low CPU utilization at any speed (and if you have any suggestions, please post them) but for now:
    class /*name*/ extends Thread {
        public void run() {
            setDelay(delaySettings[speed]); // set "delay" to the desired delay in nanoseconds
            long nanos;
            while (true) {
                if (running) { // "running" is a boolean that can pause or unpause the game
                    nanos = System.nanoTime() + delay;
                    tick(); // do the game logic and graphics for one frame
                    if (delay > 11000000) { // if the system timer can handle it
                        try { sleep(delay / 1000000); } // delay/1000000 gives millis
                        catch (Exception e) { System.out.println("Caught sleep exception: " + e); }
                    } else { // use a nano-timer (CPU-expensive)
                        while (System.nanoTime() < nanos) {}
                    }
                } else { // it is paused, so wait a bit
                    try { sleep(50); }
                    catch (Exception e) { System.out.println("Caught sleep exception: " + e); }
                }
            }
        }
        // the rest of your code
    }
    If your desired delay is greater than the system timer resolution (here I have it set at 11,000,000 nanoseconds, or 11 ms), then you can use the Thread.sleep(milliseconds) call, which has approximately 0% CPU utilization until the thread wakes up. Theoretically, you could use this time for another thread, but at the very minimum, your computer should use less power / generate less heat. If your desired delay is smaller than 11,000,000 nanoseconds, it goes into a loop that checks nanoTime()... which gives you 100% CPU usage, but is very accurate. This works pretty well if you want to control the framerate dynamically (I use the "-"/"=" buttons to adjust speed), and it handles "pause" events, though I'm sure there are better ways to do that.
    Note: The timing granularity above 11ms for this technique is probably 11ms, though the code could be modified to provide nanosecond granularity at any speed.
    Note 2: I found this to run 3% faster than when I used the GAGE timer.
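    As a footnote to the hybrid technique above: java.util.concurrent.locks.LockSupport.parkNanos (available since Java 5) can stand in for the sleep half of the hybrid, while the busy-wait tail still covers sub-resolution accuracy. A sketch under the same assumed ~11 ms WinXP timer granularity:

```java
import java.util.concurrent.locks.LockSupport;

class HybridDelay {
    static final long TIMER_RESOLUTION_NANOS = 11_000_000L; // assumed WinXP granularity

    // Sleep coarsely while far from the deadline, then spin for the remainder.
    static void delayNanos(long delayNanos) {
        long deadline = System.nanoTime() + delayNanos;
        while (deadline - System.nanoTime() > TIMER_RESOLUTION_NANOS) {
            LockSupport.parkNanos(TIMER_RESOLUTION_NANOS); // cheap, low-CPU wait
        }
        while (System.nanoTime() < deadline) { /* busy-wait the last stretch */ }
    }
}
```

    The CPU-expensive part is then bounded to at most one timer tick per frame, regardless of how long the requested delay is.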
    -Cherry

    Pause the game when you alt-tab away ^_^
    That's what most native fullscreen games do (the non-networked kind at least). Good idea, but the crux of the issue is really the underlying scheduler, which allows the thread to race.
    How to "eliminate" 100% CPU usage.
    Tony's Law of the exec constant: "Any thread or process running on a non-preemptive operating system MUST NOT use blocking IO, and MUST preempt itself at least 20 ms per iteration."
    The reason it is called a constant is that it is the minimum time needed to ensure the underlying operating system gets enough CPU cycles to function correctly. Every millisecond below this exponentially increases the chance of locking up or crashing, regardless of the speed of your system.
    Most people who program do not realize the implications of running under defective schedulers such as the one provided with MS Windows, where, for example, you can block on a socket and hang your whole system.
    >>> Do NOT use Thread.sleep() <<< but instead use Object.wait();
    Do not use blocking IO. If you have to use java.io, use available() to make sure you ONLY read the exact number of bytes you need without blocking, and make sure to preempt yourself at least 20 ms per iteration.
    i.e.:
    InputStream inputStream;
    Object waitObject = new Object();
    int readCount = 0;
    int availableBytes = 0;
    int totalBytes = 10;
    while (working) {
        while (readCount < totalBytes) {
            if ((availableBytes = inputStream.available()) != 0) {
                // read() the # of bytes available, then add it to readCount
            } else {
                synchronized (waitObject) {
                    waitObject.wait(200); // 5x per second poll
                }
            }
        }
    }
    Using wait() removes the current thread from being scheduled; Thread.sleep() leaves the thread on the schedule list. wait() releases all monitors, allowing other threads to acquire them; Thread.sleep() does not.
    The 100% CPU issue is especially bad under NT, BTW.
    If you do this, not only will your CPU usage drop to negligible, but you will never again lock up your system because of it.
    Good Luck!
    (T)

  • Problem with paintComponent, and CPU usage

    Hi,
    I had the weirdest problem when one of my JDialogs would open. Inside the JDialog, I have a JPanel with this paintComponent method:
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        Color old = g.getColor();
        g.setColor(Color.BLACK);
        g.drawString("TEXT", 150, 15);
        g.setColor(old);
        parent.repaint();
    }
    Now when this method is called, the CPU usage is at about 99%. If I take out the line:
        parent.repaint();
    the CPU usage drops to normal. parent is just a reference to the JDialog that this panel lies in.
    Anyone have any ideas on why this occurs?
    Thanks!

    1) I never got a stack overflow, and I have had this in my code for quite some time... If you called paint() or paintComponent() directly, I'm betting you would see a stack overflow. The way I understand it, since you are calling repaint(), all you are doing is notifying the repaint manager that the component needs to be repainted at the next available time, not calling a paint method directly. A good article on how painting is done: http://java.sun.com/products/jfc/tsc/articles/painting/index.html#mgr
    2) If this is the case, and the two keep asking each other to paint over and over, .....
    see above answer
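    The usual fix, for anyone landing here, is to move the periodic repaint out of the paint path entirely, for example by driving it with a javax.swing.Timer. A hedged sketch (class name and the 100 ms period are made up for illustration):

```java
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JPanel;
import javax.swing.Timer;

class TextPanel extends JPanel {
    TextPanel() {
        // Repaint ~10 times per second, scheduled from outside the paint path.
        new Timer(100, e -> repaint()).start();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Color old = g.getColor();
        g.setColor(Color.BLACK);
        g.drawString("TEXT", 150, 15);
        g.setColor(old);
        // Crucially: no repaint() here. Painting must never schedule more painting,
        // or the repaint manager is kept permanently busy (the 99% CPU symptom).
    }
}
```

    If the panel only needs repainting when its data changes, drop the Timer and call repaint() from the code that changes the data instead.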

  • A problem with delays in timed loops and DAQ

    I am programming a simulation of nuclear rewetting for a visitor centre at my company in Switzerland. It involves heating a "fuel rod" and then filling the chamber with water. The pump automatically starts once the rod core reaches 750 C. After this, a requirement stipulates that flow rate be checked to ensure the pump is operating at the necessary conditions. If it isn't, the heater must be shut down to avoid, well... meltdown. However, we must allow 10 seconds for the pump to respond, while still allowing a DAQ rate of 10-100 Hz.
    The challenge is that I can't add a delay in my main loop without delaying all acquisition, but I can't figure out how to trigger a peripheral loop (with DAQ for the single channel checking flow) from the main loop, and, once the peripheral loop determines that flow has initialised, respond back to the main loop with the okay.
    I think much of my confusion is in the interaction of the loops and the default feedback nodes that LabVIEW is putting in willy-nilly. Would the only solution be to have two 'main' loops that don't communicate with each other but rather do the same thing while operating on different timing? Tell me if you want me to post the file (although it's on an un-networked computer and I didn't think it would be too useful).
    Thanks, Curran
    Solved!

    Here it is! It is not in any form of completion, unfortunately.
    So, reading in the temperature with the NI 9213 and the water-column height with the NI 9215, we decide to turn on the pump with the NI 9472. The NI 9421 determines whether the pump is on (there is flow), and I must respond accordingly.
    I have 3 scenarios similar to this one as well, so having redundant loops with different timing like I mentioned would be way too heavy. I think I may have thought up a solution: at the time the pump is initiated, record the iteration and wait for the number of iterations that corresponds to 10 s to pass before enforcing the pump shutoff requirement?
    Attachments:
    rewettin1.vi ‏15 KB
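    The iteration-counting idea is sound, and it generalizes to a non-blocking elapsed-time check on every loop iteration (Java here only because LabVIEW diagrams don't paste as text; all names and the configurable grace period are illustrative):

```java
class PumpMonitor {
    private final long graceNanos;    // how long to wait for flow after pump start
    private long pumpStartNanos = -1; // -1 means the pump has not been started yet

    PumpMonitor(long graceNanos) {
        this.graceNanos = graceNanos;
    }

    void onPumpStarted() {
        pumpStartNanos = System.nanoTime();
    }

    // Called every loop iteration; true only if the grace period
    // has elapsed since pump start with still no flow detected.
    boolean shouldShutDownHeater(boolean flowDetected) {
        if (pumpStartNanos < 0 || flowDetected) return false;
        return System.nanoTime() - pumpStartNanos >= graceNanos;
    }
}
```

    Checking a stored start time (or start iteration) each pass keeps the main loop free to acquire at 10-100 Hz, with no delay anywhere.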

  • How to find Discoverer log files and CPU usage

    Hi,
    I have a report running very slowly in prod, but it runs fast in dev and QA.
    **Where/how can I find the log files generated after the report runs?**
    **Also, I wanted to see how I can find the CPU usage.**
    I searched online for CPU usage... and found this --> "For the CPU usage of these servlets, please navigate to the Discoverer application page from the OC4J_BI_Forms home page in the EM standalone console."
    Can anyone tell me how to navigate to the OC4J_BI_Forms home page in the EM standalone console?
    Thanks!
    gayatri

    Hi Gayatri,
    These links should help, I suppose:
    http://docs.huihoo.com/oracle/docs/B14099_19/bi.1012/b13918/maint.htm#i1025807
    and http://docs.huihoo.com/oracle/docs/B14099_19/bi.1012/b13918/tshoot.htm#CHDHFJDE
    Be careful: if done without prior knowledge, this can mess things up, so mostly this job should be done by your company's DBA. They can log in as sysadmin to get to Enterprise Manager and proceed as per the links.
    Best Wishes,
    Kranthi.

  • NI DAQmx 8.3 and CPU usage

    I have a similar problem to the one described on page http://forums.ni.com/ni/board/message?board.id=250&message.id=23831 but I have a solution and some reflections.
    First of all, about the system: it uses 2x PCI-6143 and takes data from 16 channels at the maximum rate (250 kS/s).
    The program was created about a year ago (with LV 7.1.1, DAQmx 7.4) and worked properly on a PC with a P4 (3.2 GHz) processor and 1 GB of RAM (CPU usage less than 30%).
    Now I've recompiled the program with LV 8.2 and updated the drivers to DAQmx 8.3, and the program can't work properly because CPU usage is 100%.
    I created test VIs to research this problem.
    The first version (daq_test1) is simple code: configure a DAQ task and get data.
    I found: if the sample rate is less than or equal to 90 kS/s (on the PCI-6143), CPU usage is less than 5%, but when the rate is more than 95 kS/s, CPU usage jumps to 60% (for one or more channels on one or two devices).
    Then I created a simulated PCI-6143 device in MAX, and the same VI (daq_test1) required no more than 5-10% of CPU.
    I also examined a 16E-4 device and found the same problem: if the sample rate is above a "boundary" rate, CPU usage is too high (50-70%).
    Then I added configuration parameters (daq_test2), manually turning DMA on. No effect (CPU usage still jumps to 60% at "fast" rates).
    Then I "played" with WaitMode (daq_test3). No effect (even with WaitMode=Sleep and SleepTime > 1 s, CPU usage is about 60%).
    Finally, I created a VI (daq_test4) that "manually" checks AvailableSamplesPerChannel and calls DAQmx Read only when the buffer has more samples than I need.
    So I've found a solution, but I think the driver MUST check available samples carefully (also, why can DAQmx 7.4 scan fast, while DAQmx 8.3 can't do it at least the same way as the previous version?).
    Attachments:
    daq_test.zip ‏78 KB
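    Artem's daq_test4 workaround, polling the available-sample count and reading only when a full block is buffered, is a generic pattern. A Java sketch, with a hypothetical Daq interface standing in for the DAQmx property node and Read call (the 10 ms poll interval is illustrative):

```java
interface Daq {
    int availableSamples();   // stand-in for AvailableSamplesPerChannel
    double[] read(int n);     // stand-in for DAQmx Read
}

class PollingReader {
    // Read one block of samples, sleeping between polls instead of
    // letting the driver spin-wait inside the read call.
    static double[] readBlock(Daq daq, int blockSize) throws InterruptedException {
        while (daq.availableSamples() < blockSize) {
            Thread.sleep(10); // yield the CPU until a full block is buffered
        }
        return daq.read(blockSize);
    }
}
```

    The sleep between polls is what converts the driver's busy-wait into idle time; the trade-off is up to one poll interval of extra latency per block.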

    Artem,
    Thank you for contacting National Instruments support.  This is expected behavior. When the wait mode is set to Sleep, DAQmx will sleep if and only if there is no data available to process. So, for example, at 200 kS/s, data is filling up the buffer fairly quickly and should always be available to read, which means that the process will not sleep at all. When data is always available, the CPU is going to handle it as quickly as possible, and thus the CPU will appear to be running at near 100%.
    Specifying a wait mode of Sleep is really only beneficial when the acquisition is running slowly enough that a lot of time is spent in DAQmx Read waiting for data to arrive. In the past, the CPU was maxed out even at extremely slow sampling rates.  Sleep mode alleviates this problem by putting the process to sleep when no data is ready for processing; however, if data is available, it will be handled immediately and no sleeping will occur. If data is constantly available for processing, the CPU is expected to deal with it right away and the usage will appear to be high.
    Regards,
    Kenn North
    Senior Product Manager - Search, Product Data
    http://ni.com/search

  • When I am in e-mail, I get the spinning color wheel. Activity Monitor then shows high CPU usage (99%) for "Safari Web Content"; quitting that process drops CPU usage way down, but I have to log in to my e-mail again. What can I do?

    When I am in e-mail, I sometimes get the spinning color wheel, which locks up everything. When it doesn't go away, I see in Activity Monitor that CPU usage is very high (99%) for a process called "Safari Web Content". When I click on the process and quit it, the CPU usage goes way down. The problem is that I then have to log back in to e-mail. What can I do to eliminate this problem?  Thanks

    How much FREE RAM do you have when this occurs? It's easy to tell: when it beach-balls, open Activity Monitor (Applications - Utilities - Activity Monitor), click the System Memory tab and note how much FREE RAM is listed. If it's 500 MB or less, it's time for a RAM upgrade.

  • Updating songs on IPod in Windows and CPU usage

    Whenever I update songs on my iPod, iTunes uses lots of CPU. I do not have a problem with this, as I have a dual-core system that can handle the load; the problem is that when I switch away to another application, iTunes suddenly decides to idle and reduces its CPU usage and the speed at which it encodes/transfers.
    I'm guessing this is to increase responsiveness so that you can use your computer, but it makes the transfer horribly slow.
    So, my question is: how do I make iTunes use more CPU when it's not the foreground window? Is there a way?
    Dual Core 2.66 GHz Pentium D, Windows XP Pro

    When you bring another application in front of iTunes while it is updating, it reduces its own CPU usage. Load up Task Manager and see for yourself. Normally, when iTunes is the foreground window, it uses 70%-80% of the CPU for encoding. This isn't a problem; I have a dual-core CPU, so iTunes can use all the CPU it wants and it won't lag the rest of the computer (even if I do play games).
    My problem occurs when I want to switch away from iTunes and browse the web while waiting for it to update. When I do this, it takes forever to update the iPod, because iTunes has gone into low-priority mode and only uses about 5% CPU. I want to know if there is a way to PREVENT iTunes from going into this idle mode and instead let it use as much CPU as it wants on one core, because I don't want to have to set it up and walk away for an hour.
    I suppose this is really a developer issue, in that they have not used proper multi-threading to take advantage of dual-core CPUs. Still, an option to disable the low-priority mode would be nice.

  • System Performance window and CPU usage

    The System Performance window shows two bars - what do these mean? My guess would be one per CPU for a dual; on my Quad only two show up, but I assume that's because Apple needs an update to show all four?
    Related question: when doing a torture test on my Quad, Logic seems to show almost full CPU usage and choke, while Activity Monitor has Logic using only about 130% of the available CPU (out of 400%). The processor load seems to be split equally between all four (none of which seems to be averaging over 50%). On a dual, how close does Logic get to 200%? Is there anything I can do to get Logic to use more of the available CPU power?
    I'm running an assortment of softsynths, most with Space Designer (do longer verbs in SD use more CPU?).
    Along the same lines, are there any standard Logic benchmarks?

    They indicate multiple processors if audio is subdivided. Most likely Logic hasn't got a CPU-meter update yet: don't the Quads simply operate as pairs per core, functionally similar to a dual? When it comes to the processor, the duties are split; as opposed to 4 processors, it works more like 2 coupled pairs...
    Longer IRs require more calculations, and they add up really fast, especially at higher sampling rates, because the IRs are converted to the host (session) rate, as opposed to the rate the IR was recorded at.

  • FCE 4 vs. FCE 3.5 and CPU usage?

    I just upgraded to FCE 4, and I noticed a difference in CPU usage right away in the Activity Monitor. FCE 3.5 would often go over 100%, but when I render in FCE 4, the levels are consistently lower.
    Does this mean that FCE 4 isn't rendering with as much CPU as FCE 3.5? Is there a setting that's been changed in the preferences? I couldn't find anything.
    Thanks in advance,
    Doug Dye

    So, the differences between 3.5 and 4 are so minor that the upgrade isn't worth it (taking into account the use of "old" camcorders, occasional use, etc.)? Can anyone highlight the most prominent enhancements that really matter for a "general" audience?
    Thanks for any advice!
    Stef
    +Lean Back & Relax - Enjoy some Nature Photography+
    +http://photoblog.la-famille-schwarzer.de+

  • 10.4.9 update and CPU usage up 20%, even during idle

    I experienced a general slowdown, except with file transfers. I am about to give up and roll back to 10.4.8.

    I rolled back to 10.4.8 (because all my problems appeared in 10.4.9) and everything sped up; CPU dropped to under 9% while I type this. I no longer have to type a few words and wait for them to appear. I installed all my third-party stuff except Norton AntiVirus and SideTrack. Any new update is 10.5, so...
    A 400 MHz CPU can easily be taxed.
    I never had a performance hit with Apple updates since the beginning of OS X. In fact, when I moved from Puma to Jaguar, I experienced the same speedup as from an earlier processor upgrade, 300 MHz to 500 MHz!

  • Access Connections egregious RAM and CPU usage

    Running Access Connections 5.2 (build 7xcx13ww) on WinXP SP3 on a ThinkPad X31 with the built-in 802.11 adaptor.  No Ethernet or other networking ports connected.  As long as the GUI isn't up, no problem.
    But if I open the GUI and hit the Find button, about 50% of the time the tool goes into some sort of verrrrry long loop, burning as much as 30 minutes of CPU time and expanding the memory footprint to ~500 MB of "Mem Usage" and 1500 MB of "VM Size" (per Windows Task Manager).
    The "solution" is to kill the GUI and restart it, but something is vewy scwewy awound hewe.
    Message Edited by davodavo on 04-25-2009 07:55 AM
    Message Edited by davodavo on 04-25-2009 07:57 AM

    Installed all the packages you indicated, except for the Intel WiFi driver set (because I'm running on the IBM b/g card).  Everything works pretty much as it did before, a little better.
    But things are still messy in the CPU utilization and memory consumption of AccessConnections.exe:
      - when you push the "Find" button for WiFi networks, the CPU utilization goes to 100% and the memory footprint jumps by 20 MB or so... it jumps up every time you push the Find button, and it doesn't decline again even though the base-station iconography disappears
      - just leaving the AC GUI open while there's a normal WiFi connection in place, the GUI uses about 25% of the CPU indefinitely
      - the AC GUI also grows its memory footprint by about 250 kB/s indefinitely
      - after 3 CPU-minutes of normal operation (around 10 minutes), the AC GUI is using 275 MB of memory and 250 MB of VM space
      - after 10 CPU-minutes of heavy use (switching networks several times), over 1 GB each for RAM and VM
    Looks like a memory leak to me...
    Problem occurs when:
      - GUI is up on screen showing the map
      - there's a connection and the green-expanding-circles animation is running around the base station you're connected to
      - the Find button is pushed and several available networks are displayed
    Problem stops when:
      - GUI is minimized or is showing the "Location Profile" or "Options" tab
      - there's no active WiFi connection
      - after 10 CPU-minutes, you disconnect and re-connect to the same network.  This leaves a memory footprint of over 250 MB, but at least it stops expanding
      - GUI is terminated
    Message Edited by davodavo on 04-28-2009 12:52 PM
    Message Edited by davodavo on 04-28-2009 04:25 PM
