FGV multiple read/write reliability

Hi All
I wonder if I could have some help, please. I am working on a large application in which I use FGVs (functional global variables) to share a large amount of data between various Timed Loops and While Loops, and I would like some advice. Interestingly, the whole application works, but it does not seem to follow the 'science', by which I mean: while I am reading an FGV, I do not expect it to be written, since FGVs are non-reentrant. Since the application works anyway, I want to be sure that it is reliable and will not generate run-time errors in the future due to an FGV not being written when it is supposed to be.
Your comments would be truly appreciated. Please advise:
Within a Timed Loop using 'Create 1kHz.vi', I am reading the FGVs at 1 tick. However, in various other Timed and While Loops, I am writing into these FGVs. The data I am writing is not time critical, so I don't mind if it is read non-deterministically in the Timed Loop; as long as it is eventually read, that is fine. I understand that the FGV architecture is non-reentrant, so a 'Write' instance may not execute while a 'Read' instance is executing. Does that mean that execution in the write loop stops and waits to push its value into the short 1 ms window that is the period of the Timed Loop?
Any advice on how to improve this data communication? It is only application settings data, such as button on/off statuses, references, scaling factors, etc. It does not contain any I/O data.
Is it possible to run a Timed Loop, or any loop, faster than 1 ms in a Windows environment? The option to set the Timed Loop to 1 µs is not selectable.
Many Thanks in advance
Best regards
KWaris
 

You have basically the right idea of the FGV - only one caller will execute the FGV VI at any one time, and any others will block. However, I suggest that the data "traffic" be one way - one writer and multiple readers; otherwise there are better and less fragile options available.
Windows can't do any better than 1 ms; in fact, I would be surprised if you got even this level of precision consistently. It's not a real-time OS, so LabVIEW can't ask it to perform as one. If you really need a guarantee of finer timing, you need to move to a real-time target - that's why the 1 µs option isn't available.
However, it seems you are only doing UI operations - I'm surprised you would need a response faster than around 10-20 ms for UI responsiveness.
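To answer the timing question directly: no. The write call simply blocks until the in-progress read returns (typically microseconds, since an FGV case does very little work) and then executes immediately; it does not have to land inside a window of the reader's 1 ms loop period. FGVs are graphical, so there is no LabVIEW text code to show, but as a rough analogy only (Java, with hypothetical names - not any LabVIEW API), a non-reentrant FGV behaves like a synchronized accessor around a shared value:

    // Analogy for a non-reentrant FGV: one shared value, and at most one
    // caller inside the accessor at a time (others queue, they don't fail).
    public final class SettingsFgv {
        private static double scalingFactor = 1.0;  // the "shift register" data

        // "Read" case: brief, so writers block only momentarily.
        public static synchronized double read() {
            return scalingFactor;
        }

        // "Write" case: waits for any in-progress call to return, then runs
        // at once; it never waits for the reader's next loop iteration.
        public static synchronized void write(double value) {
            scalingFactor = value;
        }
    }

This is also why the application "works anyway": overlapping calls are queued, never interleaved, so the worst case is a little extra latency on the writer, not corruption or a missed write.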

Similar Messages

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V, with a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. - This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. - This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup, file copy and processing speed in LoB applications compared to a single host with two 10000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding, or respond very slowly: you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. things that happen, or do not happen, at the same time)
    If we look at Resource Monitor on the host, we often see an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<some GUID and no file extension>}.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during or after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime, when the backup finished hours ago according to the log files). There are, however, no correspondingly extensive writes to the backup file created on an external hard drive, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example, storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in the VMs and hosts, copying large files to the VMs and removing them, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all the VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could anything in our setup be unable to handle all the read/write requests - for instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it's okay, but if this is a production environment, it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normaly all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster - so why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up through Windows Server 2008 R2; although Windows Server 2012 has a built-in network teaming feature, I have not found an article declaring that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen using MPIO suggests using different subnets, is this a requirement for using MPIO
    > or is this just a way to make sure that you do not run out of IP adressess?
    What I found is: if it is possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability
    and performance. Of course you can set these two NICs in separate subnets, but I don’t think it is necessary.
    > Why should it be better to not have dedicated wireing for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check and modify the cluster configuration accordingly, monitor it, and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • Sharing Mac iTunes library with Windows PC (I want to be able to read/write)

    Greetings from a new Mac user. I recently purchased a 27" iMAC running OS X 10.7.3. This iMac is now connected to my home network/workgroup.
    - I want to build a new iTunes Library on my 1TB internal Mac HD.
    - I want my 3 non-Mac machines (1 XP Professional and 2 Windows 7 Home) to be able to view the contents of my new (not yet built) iTunes Library resident on my iMac, and of course be able to:
    - play that content;
    - have play counts from my Windows-based PCs saved back to my iTunes Library;
    - be able to rate a song from any computer (Mac or Windows) within this iTunes Library and have that rating saved back to this iTunes Library.
    I don't want to have to rely on iTunes "Shared Library" function. In other words I want all PCs to be able to read/write to the same iTunes Library file.
    Before I purchased my iMac, my iTunes Library existed on one of my Windows-file-system external drives. The prerequisites for allowing multiple Windows PCs to access/share/write to the same iTunes Library were to:
    - share the Windows folder on the external drive in which the iTunes Library file was stored (allowing network users to be able to change the files), and
    - make sure all Windows-based PCs ran the same iTunes software version.
    Assume: I don't want to change the version of Windows iTunes software from the currently installed 10.5.0 version.
    Question 1: Can my Windows PCs access and utilize the same iTunes library file that I plan to build on my iMac hard drive?
    Question 2: What older version of Mac iTunes software should I install on my iMac in order to allow the Windows PCs to access and use the iMac's iTunes library file?
    Any help would be appreciated. Thanks.

    Hmmm... thanks for the reply. This, of course, yields another question. When you say "you could format to FAT32, which both can use", do you mean format the hard drive on which my iTunes music now resides? If so, this would mean I would need to copy the music off the internal iMac hard drive, then format the drive to FAT32, then copy the music back.
    Is this what you meant?

  • Can multiple threads write to the database?

    I am a little confused from the statement in the documentation: "Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time."
    1. Can multiple threads write to the "Simple Data Store"?
    2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    #include "stdafx.h"
    #include <stdio.h>
    #include <windows.h>
    #include <db.h>
    static DB *db = NULL;
    static DB_ENV *dbEnv = NULL;
    DWORD WINAPI th_write(LPVOID lpParam)
    DBT key, data;
    char key_buff[32], data_buff[32];
    DWORD i;
    printf("thread(%s) - start\n", lpParam);
    for (i = 0; i < 200; ++i)
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    sprintf(key_buff, "K:%s", lpParam);
    sprintf(data_buff, "D:%s:%8d", lpParam, i);
    key.data = key_buff;
    key.size = strlen(key_buff);
    data.data = data_buff;
    data.size = strlen(data_buff);
    db->put(db, NULL, &key, &data, 0);
    Sleep(5);
    printf("thread(%s) - End\n", lpParam);
    return 0;
    int main()
    db_env_create(&dbEnv, 0);
    dbEnv->open(dbEnv, NULL, DB_CREATE | DB_INIT_MPOOL | DB_THREAD, 0);
    db_create(&db, dbEnv, 0);
    db->open(db, NULL, "test.db", NULL, DB_BTREE, DB_CREATE, 0);
    CreateThread(NULL, 0, th_write, "A", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "C", 0, 0);
    th_write("C");
    Sleep(2000);
    }

    Here is some clarification about BDB locking and multi-threaded behavior.
    Question 1. Can multiple threads write to the "Simple Data Store"?
    Answer 1.
    Please Refer to http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    A Data Store (DS) setup (i.e. not using an environment, or using one without any of the DB_INIT_LOCK, DB_INIT_TXN or DB_INIT_LOG environment-region flags, each of which enables the corresponding subsystem: locking, transactions, logging) will not guard against data corruption from two threads accessing the same database page, overwriting the same records, corrupting the internal structure of the database, etc. (Note that with the Btree, Hash and Recno access methods locking is at the database page level; only the Queue access method locks at the record level.)
    So, if you want multiple threads in the application writing concurrently or in parallel to the same database, you need to use locking (and properly handle any potential deadlocks); otherwise you risk corrupting the data itself or the database's internal structure.
    Of course, if you serialize access to the database at the application level, so that no more than one thread writes to the database at a time, there is no need for locking. But that is likely not the behavior you want.
    Hence, you need to use either a CDS (Concurrent Data Store) or TDS (Transactional Data Store) setup.
    See the table comparing the various set ups, here: http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    Berkeley DB Data Store
    The Berkeley DB Data Store product is an embeddable, high-performance data store. This product supports multiple concurrent threads of control, including multiple processes and multiple threads of control within a process. However, Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time. The Berkeley DB Data Store is intended for use in read-only applications or applications which can guarantee no more than one thread of control updates the database at a time.
    Berkeley DB Concurrent Data Store
    The Berkeley DB Concurrent Data Store product adds multiple-reader, single writer capabilities to the Berkeley DB Data Store product. This product provides built-in concurrency and locking feature. Berkeley DB Concurrent Data Store is intended for applications that need support for concurrent updates to a database that is largely used for reading.
    Berkeley DB Transactional Data Store
    The Berkeley DB Transactional Data Store product adds support for transactions and database recovery. Berkeley DB Transactional Data Store is intended for applications that require industrial-strength database services, including excellent performance under high-concurrency workloads of read and write operations, the ability to commit or roll back multiple changes to the database at a single instant, and the guarantee that in the event of a catastrophic system or hardware failure, all committed database changes are preserved.
    So, clearly DS is not a solution for this case, where multiple threads need to write simultaneously to the database.
    CDS (Concurrent Data Store) provides locking features, but only for multiple-reader/single-writer scenarios. You use CDS when you specify the DB_INIT_CDB flag when opening the BDB environment: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envopen.html#envopen_DB_INIT_CDB
    TDS (Transactional Data Store) provides locking features, adds complete ACID support for transactions and offers recoverability guarantees. You use TDS when you specify the DB_INIT_TXN and DB_INIT_LOG flags when opening the environment. To have locking support, you would need to also specify the DB_INIT_LOCK flag.
    Now, since the requirement is multiple writers (multi-threaded writes to the database), TDS is the way to go (CDS is useful only in single-writer scenarios, when there is no need for recoverability).
    To Summarize
    The best way to understand what setup is needed is to answer the following questions:
    - What is the data access scenario? Is it multiple writer threads? Will the writers access the database simultaneously?
    - Are recoverability/data durability, atomicity of operations and data isolation important for the application? http://docs.oracle.com/cd/E17076_02/html/programmer_reference/transapp_why.html
    If the answers are yes, then TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    Question 2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    Answer 2.
    Definitely yes - you can see data loss and/or data corruption.
    You can check the behavior of your testcase in the following way:
    1. Run your testcase.
    2. After the program exits, run db_verify to verify the database (db_verify -o test.db).
    You will likely see db_verify complaining, unless the thread scheduler on Windows happens to start each thread only after the previous one finishes - in other words, unless no two or more threads ever write to the database at the same time, effectively serializing the writes.
    Question 3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    Answer 3.
    In your case TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    Doing this, you have proper deadlock handling and proper transaction usage in place, so you are protected against potential data corruption/data loss.
    see http://docs.oracle.com/cd/E17076_02/html/gsg_txn/C/BerkeleyDB-Core-C-Txn.pdf
    Multi-threaded and Multi-process Applications
    DB is designed to support multi-threaded and multi-process applications, but their usage means you must pay careful attention to issues of concurrency. Transactions help your application's concurrency by providing various levels of isolation for your threads of control. In addition, DB provides mechanisms that allow you to detect and respond to deadlocks.
    Isolation means that database modifications made by one transaction will not normally be seen by readers from another transaction until the first commits its changes. Different threads use different transaction handles, so this mechanism is normally used to provide isolation between database operations performed by different threads.
    Note that DB supports different isolation levels. For example, you can configure your application to see uncommitted reads, which means that one transaction can see data that has been modified but not yet committed by another transaction. Doing this might mean your transaction reads data "dirtied" by another transaction, but which subsequently might change before that other transaction commits its changes. On the other hand, lowering your isolation requirements means that your application can experience improved throughput due to reduced lock contention.
    For more information on concurrency, on managing isolation levels, and on deadlock detection, see Concurrency (page 32).

  • RT jitter! Can multiple reads to a variable / cluster cause a blocking condition?

    Howdy do.
    While incrementally developing and testing an application on a cRIO-9068 (Linux RT), I've begun to see the 'Finished Late?' indicator in my main timed loop flicker. I'm starting to pull my hair out trying to figure out how to prevent this from happening. I made a 'hold max' VI and can see the longest execution time for each frame.
    The application runs fine at about 75% processor load with the front panel open, and the majority of iterations execute in time. Occasionally, I'll have a 'spike' in execution time, and all four frames in the timed loop take significantly longer than normal to execute, and the 'late' indicator says so.
    A couple questions I've had build up while chasing this:
    -If I use local varables to pass data between loops, but only write to the variable in one place, can I still cause a blocking condition/jitter by competing reads of that memory space?
    -If I use an FPGA read/write node to pass data between the timed loop and the FPGA, should I expect this to cause a problem? I selectively disabled a lot of my code, and it seems like this is where some of the delay occurs. What stumps me is that these parts of the code haven't changed in recent development and the thing never used to run late.
    -On the topic of the FPGA read/write node, I previously assumed that I shouldn't write to the same FPGA FP item in different subvis. However, the code is set up so that there are multiple parallel calls to the read/write node, just with different elements selected. Is this BAD?
    -Similarly, if I unbundle and read the same element from a cluster control in a 'parallel' fashion, can this cause a blocking situation, or is it the same as unbundling and wiring from there to multple places?
    -I am using the recently renamed "NI software calibration and management toolkit (SCM)," formerly Drivven CalView, to handle communication between the RT and a windows host. It also does neat fault management stuff. Anybody else using this, and is there any possibility I'm getting jitter by having too many calpoints in my deterministic loop?
    Any guidance on any of the above points would be greatly appreciated. If I don't make sense on any of the above points I can make example snippets to describe.

    Tom,
    Thanks for your input(s). I'll stop obsessing over the local variables and the branched cluster wires for now.
    I didn't realize that all the code in the timed loop would be serialized beyond normal execution. In fact, this brings up another question I have. Somewhere I read that the overhead of multithreading would cause an issue. Since the 9068 has two cores, I had previously been setting the CPU selector in the timed loop to 'automatic', which seemed to load both cores roughly equally. Doesn't this mean that the process is being multithreaded? Funny thing is that even when I do select cpu 0 or cpu 1, they both are roughly equal in utilization while the timed loop is running.
    The period for the timed loop is set at 15ms, and the execution of all the frames usually occurs in less than 10ms. After several seconds I'll get a 'spike' in execution time, and it will take 20-30ms to complete an iteration. I'm not positive if my benchmark is valid, but if I look at the execution time for each frame and 'hold' the maximum time, it seems like they all (four frames) take extra time at this one instance. So that hasn't helped to narrow it down much. 
    It sounds like you have a method in mind for 'caching runtime data'. If you can point me in a good direction to gather more information about what the thing is doing it would help. I have run a strip chart of execution times, attached.
    How much should I expect having the front panel open will affect the determinism of the loop? I realized it added overhead, but since the overall CPU load is less than 60% (each) with all the bells and whistles (other loops) disabled, I thought it wouldn't be having an effect like this.
    Again, thanks for throwing ideas around, it really helps.
    Matt
    Attachments:
    iteration execution.png (16 KB)

  • Read/Write Jar files?

    This is really a newbie question, but since there's a forum specifically for jar questions I figured it was better to start here and move it to the newbie section if y'all deem it appropriate.
    I have written a desktop application using Java 1.4.1 class libraries and intend the application to run on multiple platforms including Mac OS X, Windows XP and Linux. The application works standalone (not yet jarred) on the machine it was developed on, but now it's time to begin figuring out the distribution method.
    Right now, the application reads several files from a flat text-file database, allows the user to peruse and display the information in a variety of ways, and gives the user a method to add to the flat-file database as needed. The file is about a megabyte and is excerpted into memory at initialization, then not referred to again unless the user writes additional data. When the user generates new data, write traffic to the file is fairly light, maybe 2-3kbytes per session.
    I searched the forums for the best way to handle read/write data files for distribution and so far I haven't found anything that seems relevant, but surely the question must have been asked and answered before - maybe I'm using the wrong keywords?
    Anyway, I have three basic questions:
    1) Can I both read and write a file that's enclosed in a jar file? Or are files read-only once "jarred"?
    2) Assuming I can both read and write a file within my jar file, is reading and rewriting within the jar so inefficient as to make that a non-preferred approach?
    3) How do other folks who have a local read/write datafile in a desktop application deal with distribution? Keep the read/write datafile within the jar? Make a copy of it outside the first time the application is run and always read/write the copy outside the jar? Or some other strategy?
    Thanks for any suggestions you can give a newbie at the Java game.
    Jon

    Thanks, that was pretty much what I suspected.
    I have several data files and configuration files for this project, so I was trying to make the distribution as clean as possible. At least some of the config information can be hidden in Preferences, but I was struggling with the data files. I'll include the data files in my jar, then unpack them to the user's directory when launched the first time and work with them thereafter in the user's directory.
    Enjoy the Dukes!
    Jon
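    For reference, that unpack-on-first-launch strategy is easy to sketch. The class, the ".myapp" directory and the resource layout below are hypothetical, and the stream copy is written 1.4-style to match the poster's Java version:

    import java.io.*;

    public final class DataFileInstaller {
        // Copy a bundled read-only resource out of the jar into a per-user
        // directory on first launch; read/write the external copy thereafter.
        public static File ensureDataFile(String resourceName) throws IOException {
            File dir = new File(System.getProperty("user.home"), ".myapp");
            dir.mkdirs();  // no-op if it already exists
            File target = new File(dir, resourceName);
            if (!target.exists()) {  // first launch only
                InputStream in =
                    DataFileInstaller.class.getResourceAsStream("/" + resourceName);
                if (in == null) {
                    throw new IOException("Resource not found in jar: " + resourceName);
                }
                OutputStream out = new FileOutputStream(target);
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
                out.close();
                in.close();
            }
            return target;
        }
    }

    Everything after the first launch then reads and writes only the external copy, so the jar itself stays read-only.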

  • Reader writer thread

    I don't know how to start implementing a reader-writer thread. I have a Vector and multiple threads. I would like to allow multiple threads to read and one to modify. Does anyone know where I should start? I know that I can't just use
    synchronized (object) { ... }
    since this will only allow one thread at a time to access the code.
    Thanks.

    It doesn't sound like you have a good background in concurrency yet. Take a look at this book.
    http://java.sun.com/docs/books/cp/
    Great book, great guy. Let me know if you need any help after you start.
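    Since that reply was written, java.util.concurrent (Java 5) added exactly this facility. A minimal sketch using the standard ReentrantReadWriteLock (the SharedList wrapper and its names are hypothetical):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Many threads may hold the read lock at once; the write lock is
    // exclusive, blocking readers only while the list is being modified.
    public final class SharedList {
        private final List<String> data = new ArrayList<String>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public String get(int index) {
            lock.readLock().lock();   // concurrent readers do not block each other
            try {
                return data.get(index);
            } finally {
                lock.readLock().unlock();
            }
        }

        public void add(String element) {
            lock.writeLock().lock();  // exclusive: waits for readers to drain
            try {
                data.add(element);
            } finally {
                lock.writeLock().unlock();
            }
        }
    }

    Unlike a plain synchronized block, concurrent get() calls here proceed in parallel; only add() serializes.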

  • Multithread read write problem.

    This is a producer-consumer problem in a multi-threaded environment.
    Assume that I have multiple consumers (multiple read threads) and a single producer (write thread).
    I have a common data structure (say an int variable) being read and written.
    The write to the data structure happens occasionally (say every 2 seconds), but reads happen continuously.
    Since the read operation is continuous and done by multiple threads, making the read method synchronized adds overhead (a read by one thread should not block the other read threads). But whenever the write thread writes, read operations should not be allowed.
    Any ideas how to achieve this?

    Clearly the consumer has to wait for a value to become available and then take its own copy of that value before allowing any other thread to access it. If it doesn't copy the value, then only one consumer can act at any time (since if another value could be added while the consumer thread was accessing the common value, the value would change, affecting the consumer at random).
    In general what you're doing is using a queue, even in the special case where the maximum number of items queued is restricted to one the logic is the same.
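    For the specific case described - a single int, one occasional writer, continuous readers that only need the latest value - no read lock is needed at all. A minimal sketch under that assumption (class and method names hypothetical):

    // One writer updating occasionally, many readers polling continuously.
    // Reads and writes of an int are atomic in Java, and 'volatile' makes
    // every write immediately visible to all reader threads.
    public final class SharedValue {
        private volatile int value;

        public int read() {             // never blocks; any number of readers
            return value;
        }

        public void write(int v) {      // the single producer, every ~2 s
            value = v;
        }
    }

    If the readers must observe every written value rather than just the most recent one, then a queue, as the reply above describes, is the right structure instead.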

  • Early 2011 (March) MBP hard drive crashed a week ago, all data was lost. Why is the Hitachi hard drive so prone to have defective read/write heads?

    Shouldn't Apple be better quality than this? I've used my old MBP from 2006 for 5 years with no problems. I've only had to back up data maybe once a year because it was so durable.
    But this 2011 MBP had the hitachi 320gb SATA drive and one day it started clicking and everything froze. I immediately shut it down took it to the Apple Store, in which the genius bar guy said there was nothing they could do other than swap out the drive (which I did myself as I was not prepared to be charged $250+ for services and waste 2-3 weeks as my warranty was 1 month overdue). Even with AppleCare coverage, Apple will still not recover the data...
    This is ridiculous! I strictly use the MBP for school/no gaming/haven't dropped it or moved it around rigorously/bought it instore (assuming it was handled properly). There is no reason for being such crap quality at $1300. I took it to a data recovery center and get this, the quote was $1700 as the read/write heads were defective and the surface of the disk was scratched, I could buy a new computer with that money! Were there no quality checks for these drives? I have PC laptops at home and none of them has had that problem! I'm planning to dispute this with Apple for losing years worth of valuable data.
    Does anyone have the same problem? Are early 2011 MBPs prone to hard drive failures?

    As Ogel stated, drives fail. Unless you bought your 2011 Mac right when the early 2011 models came out, you should still be under the one-year warranty; they came out at the end of February.
    So that drive should be covered, depending on when you bought your Mac. And Apple offers an extended warranty which, had you purchased it, would cover the drive failure - hardware, not data.
    As for the data, Apple includes the Time Machine program, and there are two cloning programs available (that I know of) to back up your data. If you did not use any of those programs, you will ALWAYS LOSE YOUR DATA.
    That is a fact of computers: they fail, end of story. Unless you have multiple copies of your files, you are going to lose them.
    Good Luck.

  • Oracle coherence first read/write operation take more time

    I'm currently testing with the Oracle Coherence Java and C++ versions, and in both versions, for writes to any local, distributed or near cache, the first read/write operation takes more time than the subsequent consecutive read/write operations. Is this because of setup work happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
    Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques available for boosting Coherence cache performance.

    In which case, why bother using Coherence? You're not really gaining anything, are you?
    What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
    As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
    If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster", not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirements, when fully configured in a cluster, of "1 Micro seconds for 100000 data collection" on a continuous basis.
    Just my two cents.
    Cheers,
    Steve
    NB. I don't work for Oracle, so maybe they have a different opinion. :)
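    On the narrower first-operation question: a common workaround is to pay the one-time initialization cost at startup instead of on the first real request. A minimal warm-up sketch (the cache name and keys are hypothetical; CacheFactory and NamedCache are the standard Coherence classes):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Touch the cache once at startup so the first real request does not
    // pay for cluster join, connection setup and lazy initialization.
    public final class CacheWarmup {
        public static NamedCache warm(String cacheName) {
            NamedCache cache = CacheFactory.getCache(cacheName);
            cache.put("warmup-key", "warmup-value");  // first op is the slow one
            cache.remove("warmup-key");               // leave no test data behind
            return cache;  // subsequent operations see steady-state latency
        }
    }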

  • Multiple-reader environments with shared filesystem?

    Hi,
    we are thinking about offloading some read-only tasks to a different server, so that one server opens the JE Environment read-write and the other server opens it read-only. What kind of filesystem is recommended to support this architecture?
    The easiest solution is of course using NFS to export the BDB-Files to the read-only-Server. But what performance does one expect?
    What would one do if there is a SAN storage infrastructure already available? Could one use Oracles OCFS2 to share the filesystem as a block device at SAN-Level between the two servers? One critical issue is that "flock" is not supported by OCFS2. Are there any other Filesystems that support multiple readers (multiple writers are not needed!) on top of a SAN?
    Any other ideas/suggestions for a high performance solution for a shared BDB-Environment within a SAN storage infrastructure?
    Stefan.

    Hi Stefan,
    Just to be clear, I am assuming that you do not want to access the r/w Environment via (say) NFS, but you do want the r/o environments to be over (say) NFS.
    We require flock to work properly so that would rule out OCFS2 (I confess ignorance on OCFS2 so I'll take your word for it that it doesn't support flock). As long as your NFS implementation supports flock, then NFS should be fine -- if it doesn't then bad things will happen. Performance over NFS will be based on how big the file system cache is on the r/o side, etc. I can't think of any reason why performance would be bad, but you should obviously try it and see how it is (please let us know).
    Regards,
    Charles Lamb
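    Whichever filesystem is chosen, the read-only side of such a setup is just a configuration flag when opening the environment. A minimal sketch (Environment and EnvironmentConfig are the standard JE classes; the wrapper class is hypothetical and error handling is omitted):

    import java.io.File;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public final class ReadOnlyEnv {
        // Open the shared JE environment directory without write access.
        public static Environment open(File envHome) throws DatabaseException {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setReadOnly(true);      // this process only reads the log files
            config.setAllowCreate(false);  // the read-write server owns creation
            return new Environment(envHome, config);
        }
    }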

  • SoftMotion Straight Line Move Read/Write Targ Pos functionality

    I have been reading up/studying example code on the SoftMotion package from NI.
    I downloaded the "NI Week 2008 SoftMotion Development Module 2.1 & Compact RIO 6 Axes Coordinated Motion Demo" code to review how I might implement some homing routine on stepper axes, and ran across some undocumented code.
    I note that the Straight Line Move Read/Write property Targ Pos (Target Position) can take either a scalar target or an array of targets. The detailed help for this node does not explain the difference in operation.
    From what I can decipher, it seems that the array input is for when you have mapped multiple axes to a "coordinate", and the array input sets the target positions for the individual axes in the coordinate.
    Is this true?
    Ryan Vallieu
    Automation System Architect

    Thanks for the clarification.
    Ryan Vallieu
    Automation System Architect

  • Avi read write example : playback missing chunk of data at regular intervals

    Hi,
    I am writing waveform data into AVI Write in order to read it back (with the "has data" input on), as in the "AVI read/write with data" shipping example, but the graph played back is missing some amount of data at regular intervals, and hence the waveform read back is choppy.
    I am writing the same data into a TDMS file as well; when I read that back with the TDMS file viewer, it shows the entire data.
    Note: I am writing the waveform data and the AVI in two separate loops, using a property node value of the waveform data in the video loop to insert data into AVI Write (has data input on), and have the Wait (ms) multiple input = 10 in both loops.
    I would like to post the video, but it is 50 MB - is it possible to upload that much?
    Can anyone help me understand how/why this is happening?
    Thanks,

    Can you post your VI(s)? I have trouble understanding what you are doing.
    Besides which, my opinion is that Express VIs, like Carthage, must be destroyed (deleted).
    (Sorry, no LabVIEW "brag list" so far.)

  • A Unix Utility (or Scripting) to Do Major Read-Write-Seek Testing on a HD?

    I'm wondering if there is a utility in Darwin -- or a (reliable, safe, free) Unix utility I can download -- that will do extensive read-write-seek testing on a hard drive?
    Or if not, could I use some kind of scripting commands in Terminal to do something along these lines? Which you'd have to walk me through... : )

    I don't think this is what you're looking for but at least the suggestion will keep your quest visible if it does nothing else.
    smartmontools can monitor hard drives with SMART and can run tests to check the integrity of the disks and (hopefully) warn of imminent failure. The links suggest you want something to test speed/performance, though, so I don't know that smartmontools will be of any use. Since I don't really understand the content of the links well, though, I thought I'd mention this just in case.
    smartmontools is available from sourceforge at http://smartmontools.sourceforge.net/. You may have to compile it yourself. I know I compiled it, though you could check for a pre-compiled version. (I might have decided to compile from source for some reason anyway.) The compilation is straightforward, though. Just takes a little time to get it configured the way you want and installed as a daemon if that's how you want to run it. (If you don't want to run it as a daemon, of course, you don't have to spend time getting that bit to work!)
    - cfr

  • Microchip 24LC16 EEPROM Read/Write

    Hi... I am working on a project in which I have to write and read the EEPROM "Microchip 24LC16" at different locations. I am using an NI 8451 card for read/write. I connected the SDA, SCL and GND of the EEPROM and the NI card to each other. I tried two different I2C example codes:
    1. General I2C Read/Write VI.
    2. Microchip 24LC512 Read/Write script VI.
    I am able to perform reads and writes with both examples, but I am facing a different problem with each code.
    Problems with the first code:
    1. When I write 5 bytes (or more than 3 bytes) at location 03CD, it writes and reads them; but if I switch the EEPROM off, switch it back on, and try to read at location 03CD, I find only 3 bytes (from 03CD to 03CF), and the other bytes are zero.
    2. I tried to write 16 bytes at different locations, i.e. 03D0, 03E0, 03F0, but I lost all the data when I read the EEPROM after switching it off and on.
    Problems with the second code:
    1. When I write some bytes at a particular location (say 03CD) and then read at any other location, I get the same data that I wrote at 03CD.
    2. After writing at location 03CD, when I try to write at any other location, it overwrites the last written bytes.
    Is there any specific code available for this particular EEPROM? If yes, please send me the code.
    Please give me any solution; I have tried all my efforts.

    Hi Francis,
    From the sounds of things, you're looking at the Atmel AT25080A Read example VI found under the SPI Advanced folder in the Example Finder. Is that correct? If so, your description of the example's execution sounds correct. The first byte indicates what sort of operation you'd like to perform, the next two indicate the starting address for the operation, and the remaining bytes are the number of bytes you would like to read beginning at that starting address (in this sense the address is auto-incremented based on the number of bytes you choose to read).
    In your case, it sounds like you want to do a very similar operation, only you will have one bit to determine the operation, 7 more bits to specify the address, and the following bits to read the value at that address. If you want to read the byte at that address and the next two beyond it, you can simply create an array of three bytes and build an array with this and your operation/address byte. If you want to read bytes at disjoint addresses, you will need to do multiple 845x SPI Write Read blocks and input the new address each time (either in a sequential format or within a loop).
    In general, if you are looking for more information concerning the operation of different blocks, it is helpful to have Context Help on (Help>>Context Help) and when you hover your mouse over wires or blocks the Context Help window will give you additional information.  Additionally, the NI-845x Software User Manual found under Start>>Programs>>National Instruments>>NI-845x could be helpful.  Hope this helps!
    Regards,
    Anna M.
    National Instruments
