Iterator resource intensive?

Hi guys,
My code is below:
HashMap documentMap = new LinkedHashMap();
for (Iterator it = folderInfoMap.keySet().iterator(); it.hasNext();) {
    String key = (String) it.next();
    FolderInfo folderInfo = (FolderInfo) folderInfoMap.get(key);
    // ...
}

I have been using an Iterator to loop through Collection objects like this for many years. However, some joker in my office is telling me that I shouldn't use an Iterator because it's resource intensive. He asked me to use a for loop like the one below instead:

for (int x = 0; x < documentMap.size(); x++) {
    documentMap.get(x);
}

He said the for loop uses far fewer resources. Any ideas?
Or should we convert the collection to an array and then loop through it, as this link mentions?
http://www.exampledepot.com/egs/java.util/coll_GetArrayFromVector.html
Please help, thanks!
Thanks & Regards,
Mark

It's not a traversal.
It's a way of describing algorithmic complexity, i.e. the running time of algorithms ignoring constant factors, concentrating only on the 'order' of the variables. The big O stands for 'order'. For example, bubble sort is an O(N**2) operation and is therefore avoided; decent sorting algorithms are O(N*log(N)), which gives them a better growth rate as N increases.
In your case, a key lookup in a HashMap takes time that is independent of the number of entries, so it is described as an O(1) operation ('an operation of order 1'), and your Iterator simply steps to the next entry each time, so a full traversal is N operations of O(1) each: O(N). Fetching an element by position in a linked structure, by contrast, means walking from the start, which takes time proportional to the length N, so each such access is an O(N) operation; a loop that does N positional fetches performs N operations of O(N) each, so it is O(N**2) or 'order N squared'.
The time to do an O(1) operation is constant.
The time to do an O(N) operation is a multiple of N.
The time to do an O(N**2) operation is a multiple of N squared.
So you can see how your Iterator code stacks up against the office joker's for N=1, 2, 4, 8, ... 32768, 65536, ... Every time you double N, yours doubles in time, his quadruples, because of N versus N squared.
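To make that concrete, here is a minimal, self-contained sketch (my own illustration, not code from this thread). Note that the joker's documentMap.get(x) looks up Integer keys that don't even exist in a String-keyed map; to give his idea the most charitable reading, the sketch uses a LinkedList to stand in for "positional access into a linked structure", where get(i) really does cost O(N) per call, while the Iterator pass is O(N) in total.

import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

public class TraversalDemo {
    public static void main(String[] args) {
        int n = 20_000;
        Map<String, String> documentMap = new LinkedHashMap<>();
        for (int i = 0; i < n; i++) {
            documentMap.put("key" + i, "doc" + i);
        }
        long sink = 0; // accumulate something so the JIT can't drop the loop bodies

        // O(N): one pass with the Iterator that for-each uses under the covers.
        long start = System.nanoTime();
        for (Map.Entry<String, String> entry : documentMap.entrySet()) {
            sink += entry.getValue().length(); // visit each entry exactly once
        }
        System.out.println("Iterator pass: " + (System.nanoTime() - start) / 1_000_000 + " ms");

        // O(N**2): positional get(i) on a linked structure walks from the head on every call.
        List<String> linked = new LinkedList<>(documentMap.values());
        start = System.nanoTime();
        for (int i = 0; i < linked.size(); i++) {
            sink += linked.get(i).length(); // each get(i) is O(N) on a LinkedList
        }
        System.out.println("Indexed pass:  " + (System.nanoTime() - start) / 1_000_000 + " ms");
        System.out.println("(ignore) " + sink);
    }
}

The absolute times will vary by machine and JVM warm-up, but if you double n the first loop roughly doubles and the second roughly quadruples, which is exactly the N versus N squared growth described above.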

Similar Messages

  • Is opportunity to resize images resource-intensive?

    Hello to ALL!!!
    I'm trying to make a site with an image gallery (for example, pets), and I've been thinking about the optimal size for the full image. I've chosen 640x480, and I'm going to use the <imageresize> command for images larger than 640/640. But I don't know (in practice) whether it's a good idea to let people resize their images using my site's code. I mean, how resource-intensive is this process for the server? And what will happen if, for example, 500 people want to run the "resize operation" at the same time?
    Thank you for your comments/answers!

    If you are seriously going to have 500 people resizing at once, I would suspect you might have issues. But for just average occasional uploads, you should be fine. In my work so far with simple or even large batch resizing, I have not seen any major drag. In fact, I am overall very impressed with the speed and quality of image resizing in CF8.
    A bit OT: here's a post about some code I've been working on for batch-resizing in multiple folders, coupled with a jQuery photo gallery... if you *or any other CF users* are interested, I am looking for testers:
    http://www.miuaiga.com/index.cfm/2008/10/14/CF-Gallery-Creator--jQuery-Slider-Gallery
    Michael Evangelista, Evangelista Design
    Web : www.mredesign.com Blog : www.miuaiga.com
    Developer Newsgroups: news://forums.mredesign.com

  • Is TOP a resource intensive query.

    Hi,
    We are using TOP to monitor load on a Linux production server.
    I just wanted to know if TOP is a resource intensive command.
    Does TOP add to the load on the server? Say, for example, TOP running simultaneously from 5/6 PuTTY sessions.
    Regards,

    Running top with default parameters is not a resource intensive process. There were issues with long running top sessions in the past, with ported versions of the top command on certain Unix platforms, but Linux was not affected. I don't think you will notice a performance issue running several top sessions. You can easily test it. Whether or not the top utility is the right tool for what you are trying to accomplish however is a different question.

  • Top resource intensive processes on AIX/Unix/Linux

    top is good for finding this out (topas on AIX). Any other tips for finding memory/IO resource intensive processes? Here is what I have for CPU:
    oracle:tulpfsd01$ ps -e -o pcpu -o pid -o user -o args | sort -k 1 | tail -5r
    %CPU PID USER COMMAND
    1.1 1230914 oracle oraclepwpd (LOCAL=NO)
    0.9 1248744 oracle oraclepwpd (LOCAL=NO)
    0.9 1099146 oracle oraclepwpd (LOCAL=NO)
    0.6 503688 oracle oraclepwpd (LOCAL=NO)
    0.5 1239486 oracle oraclepwpd (LOCAL=NO)

    Generally, memory shouldn't be a concern. The total amount of SGA is set (fixed) with SGA_MAX_SIZE and occupies shared memory (to be seen with 'ipcs')
    The PGA is set by PGA_AGGREGATE_TARGET, but that is only a rough pointer for Oracle. Additional information about memory can be seen with pmap (at least on Linux and Solaris).
    You'll see the loaded shared libraries (which are used by the processes in a shared manner, so processes that use the libraries do not each keep their own copy), and you should be able to identify the shared memory (identified with shmid=0xsomething), etc. On Linux, /proc/<pid>/maps shows the same in a slightly different way.
    Now that we've established that the shared libraries are shared to preserve memory, and that the shared memory is (quite obviously) shared and mapped into the process' address space, you should understand why listing VSS (virtual set size) and other memory figures isn't very helpful: if you just add up all the listed memory, many of the components are not really used by that process alone.
    Top and topas are helpful for finding CPU intensive processes. That is what both are designed for: identifying the processes which are using the most slices of processor time. Of course, it's only helpful if you truly want to identify the top CPU intensive processes. This is also quite easy to find inside the database.
    IO: there's no way to identify the processes which are the most intensive writers, simply because Linux doesn't keep statistics on it. Inside the database it's easy to find (I guess you'd like an example of that?), but please be aware of how the database works: a user process does no writing; it's the log writer, checkpointer and database writer that do that. The user process only reads.
    I will see if I can find some nmon examples tomorrow, when I've got access to AIX systems.
    If you still want examples, could you be specific?

  • What are the most resource intensive Photoshop tutorials or Action Scripts

    I'm trying to benchmark some RAM on a machine, and was planning on using Photoshop as a portion of the real-world tests. So I was wondering what some of the most resource intensive, yet realistic and common, things to do are.
    Alternatively, perhaps somebody knows of some highly complex tutorials I can follow along with and create an Action script myself. I just don't want my tests to be "opened random image, doubled size, rotated 12 degrees, applied filter, bla bla bla".
    I've been googling around a bit but nothing really caught my eye. I'd rather hear from real people than read through a bunch of "100 best Photoshop Action scripts" lists all day, y'know?
    Cheers


  • ITunes too resource intensive

    If anyone wants to try this simple test, please post your results so I can see if I'm the only one experiencing it-
    First open Windows Task Manager by pressing CTRL-ALT-DEL.
    If iTunes isn't running, open it now.
    Place the mouse cursor on the iTunes volume control, and hold the left mouse button down.
    On my system (P4 2.4 Ghz,768 MB RAM), the CPU usage for iTunes.exe jumps up to 98% or 99%. Also, the Memory Usage for iTunes is around 47 MB.
    I think this is a bit much for an MP3 player. BTW, I'm using iTunes 6.
    Thanks for any replies!!
    -Bill

    Edit: I just noticed that this only happens when iTunes is in "Mini Player" mode (CTRL+M)
    -Bill

  • How resource intensive is UIWebView?

    I need some variable text formatting while displaying a string. Unfortunately UITextView applies properties to the ENTIRE string.
    I am thinking about using UIWebView to format the string with HTML. I will need many UIWebViews though, because they will be subviews of cells contained in a table view. I'm worried about UIWebView being too heavyweight to use in this scenario. Does anyone have advice?

    Actually, it doesn't appear possible to embed a UIWebView in a subclass of UITableViewCell. When scrolling the UITableView, the UIWebView does not update correctly.

  • Resource intensive format trigger

    Hi,
    I am using the following format trigger on a repeating frame in Reports 2.5:
    numRows number(10);
    begin
      select count(ID) into numRows
      from <your table>
      where address = :address;
      if (numRows > 1) then
        srw.attr.mask := srw.FFCOLOR_ATTR + srw.FILLPATT_ATTR;
        srw.attr.ffcolor := 'r75g75b0';
        srw.attr.fillpatt := 'solid';
        srw.set_attr(0, srw.attr);
      end if;
    end;
    =================
    The problem is that it takes too long to run (the table holds 6500 records). If I were to change the formatting to a simple underline, do you think I could achieve a faster-running report? At the moment it highlights selected records with a "bar".
    thanx,
    n.

    Nicholas,
    I think your problem is going to be more that you have a select at all in the format trigger. That's still a trip to the db and back every time that object is formatted. I presume you're highlighting duplicate rows? I don't know your data model, but is it possible to do something like create a break group on your address and count the number of details, even if it's some kind of dummy detail? The format trigger could run off of the count. You could still display the address in the same repeating frame as the columns that would make up its details since it's at a lower frequency. You would have an extra repeating frame, but that's less overhead than an additional select executed for each instance of the repeating frame. If you can accomplish this in the data model, I think you'll find it runs faster than if you added an index.
    Just my 2 cents!
    Toby

  • LMS 4.0 High Resources : High CPU

    Hi Guys
      Can someone help me fix our LMS 4.0 high CPU issues? It takes lots of resources and also sometimes stops responding. I am wondering whether some settings were not done properly, or whether there is any way to fix these issues.
    Please refer the attached picture for more details.
    Regards
    SS

    From the graphic, I take it you're using a virtual machine. We'd need to know some more details of your installation - VM resources allocated and size of network being managed are two of them.
    In general LMS is a very resource-intensive product. Trying to run in an installation that minimally meets the requirements specified will almost always result in marginal performance. One thing I always recommend is that folks allocate plenty of RAM. For instance, it may 'work' with 4 GB but will be much more usable with 8 GB. Also, when certain processes kick off, CPU will spike and sit at 100% for a while while the process completes. LMS is multi-processor-aware and will take advantage of having multiple processor cores to spread its workload across.
    Hope this helps.

  • Sql server resource governor per connection

    Does anyone know if there's a way to limit resources per connection if the server is busy? E.g. users A, B, C all want to run resource intensive things at the same time. Can I limit each of them to 25% of the CPU/memory, but only if the server is busy? Can I do this without having to create 3 pools? Right now, if someone runs a long query it will freeze other users' connections.

    I was under the assumption that you had three users and each should be given 25% of the resources, i.e., leaving 25% for the rest. You can do this in two ways:
    1. Have one pool with 75% (3*25%) which all three users share.
    2. Have three pools, one pool each for the users.
    Be aware that CPU governance is per scheduler. So depending on the type of load, how your connections are hitting the schedulers (CPUs), and how long-running your queries are, the governance may be more or less precise according to your config.
    As for memory, the governor doesn't govern the buffer pool. Basically, the memory governed is worker memory used for query execution (think hash tables and such).
    Yes, max will only step in when there is a need for it (when somebody else wants to use the resources). 2012, however, introduced a harder max, called CAP_CPU_PERCENT, which *is* a hard limit.
    Tibor Karaszi, SQL Server MVP |
    web | blog

  • Measuring Resource Utilization

    Hi,
    I'm working on a project for my M.Sc. degree. I need to measure the utilization of the machines' resources. Is there any way to access such measurements from Java? Or is there a Java framework or open source project capable of this? I tried using NSClient4J, but for some reason it seems inaccurate and does not capture changes quickly from the Windows performance counters. I'm not bound to Windows, by the way; Linux would be nice.
    Note, I don't want to measure the application's performance on the JVM but the system's performance as a whole (CPU utilization, free memory, free disk space, etc.).
    Thanks in advance for any help or advice.

    Thanks for the reply, Peter. But that is exactly my question (i.e. how can I measure these from Java?). When I used NSClient4J, I measured resource utilization at regular intervals, but when I started some resource intensive application, this was not reflected appropriately in the measurements. That's why I was asking if someone knows of another method/tool for doing this.
    Ahmed
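    For what it's worth, one zero-dependency option (a hedged sketch of my own, not something suggested in this thread) is the com.sun.management.OperatingSystemMXBean that HotSpot/OpenJDK JVMs expose. It is not part of the standard Java SE API, so it is an assumption that your JVM provides it. Something along these lines polls whole-system CPU load, free physical memory and free disk space:

    import java.io.File;
    import java.lang.management.ManagementFactory;

    public class SystemStats {
        public static void main(String[] args) throws InterruptedException {
            // Cast to the richer HotSpot-specific bean; assumes a HotSpot/OpenJDK runtime.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();

            for (int i = 0; i < 5; i++) {
                double cpu = os.getSystemCpuLoad();            // whole-system CPU load, 0.0..1.0 (negative if not yet available)
                long freeMem = os.getFreePhysicalMemorySize(); // free physical memory in bytes
                long freeDisk = new File("/").getFreeSpace();  // free bytes on the partition holding "/"

                System.out.printf("cpu=%.1f%%  freeMem=%d MB  freeDisk=%d GB%n",
                        cpu * 100, freeMem / (1024 * 1024), freeDisk / (1024L * 1024 * 1024));
                Thread.sleep(1000); // sample once per second
            }
        }
    }

    Native-backed libraries such as Hyperic SIGAR read the OS counters directly; the MXBean approach above is just a reasonable baseline to compare NSClient4J against.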

  • Hardware Resources for EJB?

    Could someone tell me what hardware it takes to run an application using EJBs? I heard that they are very resource intensive compared to plain JSPs and servlets.
    I know servlet technology and am considering learning EJBs. I have no problem running a JSP/servlet application and a MySQL database server on the same computer (700 MHz P3). I am considering trying a JBoss, Tomcat, MySQL combination on the same computer.
    Keep in mind I am talking about a very low transaction rate. Maybe 1 every 15 seconds. This is mostly for learning EJBs.

    You never mentioned the amount of RAM you have on your machine. This could be a limiting factor. Having sufficient RAM (256 MB+) should be good enough to run all the three together for learning purposes.
    Hope this helps
    Anand

  • X201s started reaching high temperatures (122-149 Fahrenheit) when using minimal system resources

    Hello, recently my two-month-old X201s started to get rather hot (50-65 Celsius / 122-149 Fahrenheit) when I am using minimal system resources and putting little of the CPU to use. So I purchased a cooling pad, which only occasionally works to keep the temperature down.
    I could understand this happening if I were running something resource intensive such as a game, but I am quite sure this should not be happening whilst running a word processor.
    And before you suggest looking for blockages of the vents and exhaust: I already have, and they are all unobstructed.
    The temperatures were checked using the program SpeedFan.
    I would appreciate some insight into this matter
    Thank you 

    Hello hmmm-,
    Please check whether your CPU is running at full speed all the time.
    You can use the free version of the program Everest to measure your CPU frequency.
    Also check your power plan in the Power Manager: set it to Balanced, or set the slider to the middle.
    Kind regards,
    Andreas
    P.S. If this doesn't help, act as lead_org has already suggested.

  • Why is my sql running parallel ?

    Hi,
    NOTE: At first I thought the SQL Developer tool was causing this, and I opened a thread in that category; thanks to @rp0428's warnings and advice, I realized that something else is happening, so I reopened the thread in this category. I also need an admin to delete the other one:
    https://forums.oracle.com/forums/thread.jspa?threadID=2420515&tstart=0
    thanks.
    so my problem is:
    I have a table partitioned by range (no subpartitions) on a DATE column, by month. It has almost 100 partitions. I run a query on that table based on the partition column:
      select *
      from   hareket_table
      where  islem_tar between to_date('01/05/2012', 'dd/mm/yyyy') and to_date('14/07/2012', 'dd/mm/yyyy')  -- ISLEM_TAR is my partition column
    So, when I run this query from SQL Developer, the query runs in parallel. I didn't just get the execution plan via the SQL Developer interface; first I used an "EXPLAIN PLAN FOR" statement (which is what I always do, I don't generally use the developer tools' interfaces), then I used the developer interface (just to be sure). I still wasn't satisfied, so I then ran the query and got the real execution plan via:
      select * from table(dbms_xplan.display_cursor(sql_id => '7cm8cz0k1y0zc', cursor_child_no => 0, format => 'OUTLINE'));
    and got the same execution plan again, with PARALLELISM. The INDEXES and TABLE have no parallelism set (the DEGREE column in DBA_INDEXES and DBA_TABLES is 1).
    As far as I know (if I'm wrong please correct me), there is no reason for Oracle to run this query in parallel (I also did not give any hint). So I got worried and ran the same steps in PL/SQL Developer, where the query ran noparallel (interface, EXPLAIN PLAN FOR, dbms_xplan.display_cursor). SQL*Plus autotrace was the same (just autotrace; I didn't try the others, dbms_xplan etc.). Based on that, I decided SQL Developer was causing this (*edit: but I was wrong, TOAD did the same thing*).
    So I focused on SQL Developer and disabled parallel query using:
      alter session disable parallel query;
    Then I ran the statement again and, as expected, there was no parallelism.
    So I looked at the execution plans: I ran the query twice, once normally and once with parallel query disabled in the session, and looked at the executed execution plan for both (child 0 and 1).
    -- WHEN PARALLEL QUERY IS ENABLED (SESSION DEFAULT)
    -- JUST CONNECTED TO DATABASE
      select * from table(dbms_xplan.display_cursor('7cm8cz0k1y0zc', 0, 'OUTLINE'));
    | Id  | Operation            | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT     |               |       |       |  2025 (100)|          |       |       |        |      |            |
    |   1 |  PX COORDINATOR      |               |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)| :TQ10000      |  7910K|  1267M|  2025   (2)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    PX BLOCK ITERATOR |               |  7910K|  1267M|  2025   (2)| 00:00:01 |    90 |    92 |  Q1,00 | PCWC |            |
    |*  4 |     TABLE ACCESS FULL| HAREKET_TABLE |  7910K|  1267M|  2025   (2)| 00:00:01 |    90 |    92 |  Q1,00 | PCWP |            |
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
          DB_VERSION('11.2.0.2')
          OPT_PARAM('query_rewrite_enabled' 'false')
          OPT_PARAM('optimizer_index_cost_adj' 30)
          OPT_PARAM('optimizer_index_caching' 50)
          OPT_PARAM('optimizer_dynamic_sampling' 6)
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$1")
          FULL(@"SEL$1" "HAREKET_TABLE"@"SEL$1")
          END_OUTLINE_DATA
    Predicate Information (identified by operation id):
       4 - access(:Z>=:Z AND :Z<=:Z)
           filter(("ISLEM_TAR">=TO_DATE(' 2012-05-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "ISLEM_TAR"<=TO_DATE('
                  2012-07-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
    -- WHEN PARALLEL QUERY IS DISABLED
    -- AFTER CONNECTING, EXECUTED "ALTER SESSION DISABLE PARALLEL QUERY"
    select * from table(dbms_xplan.display_cursor('7cm8cz0k1y0zc', 1, 'OUTLINE'));
    | Id  | Operation                | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT         |               |       |       | 36504 (100)|          |       |       |
    |   1 |  PARTITION RANGE ITERATOR|               |  7910K|  1267M| 36504   (2)| 00:00:04 |    90 |    92 |
    |*  2 |   TABLE ACCESS FULL      | HAREKET_TABLE |  7910K|  1267M| 36504   (2)| 00:00:04 |    90 |    92 |
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
          DB_VERSION('11.2.0.2')
          OPT_PARAM('query_rewrite_enabled' 'false')
          OPT_PARAM('optimizer_index_cost_adj' 30)
          OPT_PARAM('optimizer_index_caching' 50)
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$1")
          FULL(@"SEL$1" "HAREKET_TABLE"@"SEL$1")
          END_OUTLINE_DATA
    Predicate Information (identified by operation id):
       2 - filter(("ISLEM_TAR">=TO_DATE(' 2012-05-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "ISLEM_TAR"<=TO_DATE(' 2012-07-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
    As you can see, when I've just connected to the database (no statements run yet), OPT_PARAM('optimizer_dynamic_sampling' 6) is in the outline.
    When I disable parallel query in the session, it is not in the outline...
    The value of optimizer_dynamic_sampling is 2 in the DB, so why does that query run in parallel? I don't want that.
    thanks for answers

    >
    NOTE: At first I thought the SQL Developer tool was causing this, and I opened a thread in that category; thanks to @rp0428's warnings and advice, I realized that something else is happening, so I reopened the thread in this category. I also need an admin to delete the other one:
    https://forums.oracle.com/forums/thread.jspa?threadID=2420515&tstart=0
    As you can see, when I've just connected to the database (no statements run yet), OPT_PARAM('optimizer_dynamic_sampling' 6) is in the outline.
    When I disable parallel query in the session, it is not in the outline...
    The value of optimizer_dynamic_sampling is 2 in the DB, so why does that query run in parallel? I don't want that.
    >
    I answered this question in that other thread, that is now gone. I pointed you to, and quoted from, a blog that tells you EXACTLY why that is happening. And I also gave you a link to an article by Oracle ACE and noted author Jonathan Lewis. You either didn't see the links or didn't read them.
    Maria Colgan is an Oracle developer and a member of the optimizer development team. She has many articles on the net that talk about the optimizer, how it works, and how to use it.
    The one I pointed you to, and quoted from, is titled 'Dynamic sampling and its impact on the Optimizer'
    https://blogs.oracle.com/optimizer/entry/dynamic_sampling_and_its_impact_on_the_optimizer
    >
    For serial SQL statements the dynamic sampling level will depend on the value of the OPTIMIZER_DYNAMIC_SAMPLING parameter and will not be triggered automatically by the optimizer. The reason for this is that serial statements are typically short running and any overhead at compile time could have a huge impact on their performance. Whereas we expect parallel statements to be more resource intensive, so the additional overhead at compile time is worth it to ensure we can get the best execution plan.
    In our original example the SQL statement is serial, which is why we needed to manually set the value for the OPTIMIZER_DYNAMIC_SAMPLING parameter. If we were to issue a similar style of query against a larger table that had the parallel attribute set, we can see the dynamic sampling kicking in.
    You should also note that setting OPTIMIZER_FEATURES_ENABLE to 9.2.0 or earlier will disable dynamic sampling altogether.
    When should you use dynamic sampling? DS is typically recommended when you know you are getting a bad execution plan due to complex predicates. However, you should try and use an alter session statement to set the value for the OPTIMIZER_DYNAMIC_SAMPLING parameter, as it can be extremely difficult to come up with a system-wide setting.
    When is it not a good idea to use dynamic sampling? If the query's compile time needs to be as fast as possible, for example for unrepeated OLTP queries where you can't amortize the additional cost of compilation over many executions.
    >
    If you read the article, particularly the first two paragraphs above, Maria explains why dynamic sampling was used in your case. And for a table with as many partitions as yours, Oracle chose to use sampling level six (256 blocks, see the article); a level of two would only sample 64 blocks and you have 90+ partitions. Oracle needs a good sample of the partitions.
    The Jonathan Lewis article is titled 'Dynamic Sampling'
    http://jonathanlewis.wordpress.com/2010/02/23/dynamic-sampling/
    This article can also shed light on sampling, as he shows how it appears that sampling isn't being used and then shows that it actually is:
    >
    We can see that we have statistics.
    We can see that we delete 9002 rows
    We can see that we have 998 rows left
    We can see that the plan (and especially the cardinality of the full tablescan) doesn’t change even though we included a table-level hint to do dynamic sampling.
    Moreover – we can’t see the usual note that you get when the plan is dependent on a dynamic sample (“ – dynamic sampling used for this statement”).
    It looks as if dynamic sampling hasn’t happened.
    However, I “know” that dynamic sampling is supposed to happen unconditionally when you use a table-level hint – so I’m not going to stop at this point. There are cases where you just have to move on from explain plan (or autotrace) and look at the 10053 trace.
    So the optimizer did do dynamic sampling, but then decided that it wasn’t going to use the results for this query.

  • Hide/unhide radio buttons works in lab view, not in executable

    Hello,
    I simplified a program I am using. When I run it in LabVIEW, the radio buttons are visible or hidden (as programmed). But when I create an executable, this function no longer works.
    Do I need to make changes in the build settings?
    best regards,
    Rens van Geffen
    I attached the maintest.vi (LabVIEW program) and the executable in a zipped folder.
    Attachments:
    My Application2.zip ‏742 KB

    I think it is a bug and reported it in the monthly bug thread.
    RvG wrote:
    Yes, I know it is a dirty VI. But I only made it because I had a problem in a complicated VI, with lots of subVIs etc.; in that VI I have all the wait functions etc.
    I did not care about any wait function. I was just objecting to the overuse of value property nodes and that exceedingly complicated inner while loop construct that really does not do that much.
    For example, your upper case structure contains writes to two value properties that exist in all cases, while the actual terminal just sits there outside the loop. The cases only need to contain what's different. Shared code elements belong outside the case! Instead of writing to value properties (which is synchronous and resource intensive), you can just write to the indicator terminal. Right?
    In parallel to writing to the "group", you also read from another value property instance of the same. Since this happens in parallel to the case structure, you will read a stale value, and not what has been generated by the case structure during the same iteration.
    My code modification shows some of the simplifications that can be done. Also note that data dependencies enforce the proper execution order.
    Writing to the visible properties only needs to be done when the situation changes, not with every iteration of the loop. Once written, these properties stay set until changed to something else.
    In summary, these are just some recommendations. They will not fix the LabVIEW bug you exposed.
    LabVIEW Champion. Do more with less code and in less time.
