Serializing read/write operations of network-published shared variables

Hi all,
I'm developing a distributed application (PC + CompactRIO), and using shared variables (SVs) for inter-device communication. Here's my journey so far:
Intended procedure
1. PC parses file
2. PC writes processed file data (custom cluster, large) into the 1st SV
3. PC writes a "grab data" signal/command (enum) into the 2nd SV
4. cRIO polls the 2nd SV
5. cRIO sees the command, then reacts by reading the 1st SV
Steps #2 and #3 were sequenced, using error wires.
Unexpected results
Even after the command was transmitted and the cRIO saw it, the cRIO could not read the data (which I wrote BEFORE the command) -- LabVIEW reported that the buffer was empty.
The operation succeeded when I placed a wait (5 seconds) between steps #2 and #3.
Questions
What am I doing wrong, and how do I achieve my desired outcome?
Is SV I/O asynchronous by design?
Is it possible to use event-driven programming to handle SV access? (i.e. does LabVIEW signal when the new SV value has propagated across the network?)

BillMe wrote:
Why do you have to "notify" the other end that data is available? The subscriber can simply sit in a loop doing a timed read just as if using a queue or notifier IPC. If it doesn't time out, you got new data. If it does time out, do some other processing if needed and then loop back for another timed read.
My system architecture is command-driven -- the cRIO listens for instructions from the PC interface (sent as an enum via one SV), and performs tasks (motor control) in response. One of the commands (the one described in this thread) happens to be "download a new motion profile from the PC, by reading the other SV". Given that the cRIO is already polling the command channel, I felt that there was no need to also poll the data channel (especially since the "download" command is issued very infrequently). Plus, I thought that polling two channels would increase the chances of race conditions or illegal state transitions, particularly if the app is developed over the long term.
I am also new to LabVIEW, so my current programming style will heavily reflect my C++/Qt background while I get a feel for LabVIEW's strengths and weaknesses -- Qt is a heavily event-driven framework (even for networking!), where polling often means you're doing something wrong.
Still, thank you for pointing out that I can use timeouts to determine whether new data has arrived -- my subscriber currently writes a null command back into the SV when it has consumed the command, but you showed me that I don't have to.
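
For anyone landing here later: since network-published SV writes propagate asynchronously, one generic way to make this handshake robust is to tag the data with a sequence number and have the command carry the same tag, so the reader polls until the matching data arrives instead of assuming it is already there. A minimal sketch of the idea in Java (the names and the in-memory stand-ins for the SVs are hypothetical; the real implementation would be LabVIEW G code):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicReference;

public class TaggedHandshake {
    // Stand-ins for the two shared variables. In the real system these are
    // network-published SVs, so a write is NOT immediately visible remotely.
    static class Tagged<T> {
        final long seq; final T value;
        Tagged(long seq, T value) { this.seq = seq; this.value = value; }
    }
    static final AtomicReference<Tagged<String>> dataSV = new AtomicReference<>();
    static final AtomicReference<Tagged<String>> commandSV = new AtomicReference<>();

    // PC side: tag the data, publish it, then publish the command carrying
    // the same tag.
    static void publish(long seq, String profile) {
        dataSV.set(new Tagged<>(seq, profile));
        commandSV.set(new Tagged<>(seq, "GRAB_DATA"));
    }

    // cRIO side: on seeing the command, poll the data SV until the matching
    // tag shows up (or give up), instead of assuming the data is there.
    static String fetch(long expectedSeq, long timeoutMs)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            Tagged<String> d = dataSV.get();
            if (d != null && d.seq == expectedSeq) {
                return d.value;   // data has propagated; consume it
            }
            Thread.sleep(10);     // poll interval, tune as needed
        }
        throw new TimeoutException("data SV did not propagate in time");
    }
}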

Similar Messages

  • Oracle coherence first read/write operation take more time

    I'm currently testing with the Oracle Coherence Java and C++ versions, and in both, the first read/write operation against any local, distributed, or near cache takes more time than the next consecutive read/write operations. Is this because of setup work happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
    Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques available for boosting Coherence cache performance.

    In which case, why bother using Coherence? You're not really gaining anything, are you?
    What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
    As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
    If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirements when fully configured in a cluster of "1 Micro seconds for 100000 data collection" on a continuous basis.
    Just my two cents.
    Cheers,
    Steve
    NB. I don't work for Oracle, so maybe they have a different opinion. :)
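
    One cheap way to take the first-call hit up front is to do a throwaway warm-up operation right after obtaining the cache handle, so the one-off costs (cluster join or proxy connection, class loading, serializer setup) are paid before you time anything. A minimal sketch against the Java client (the cache name and keys are assumptions for illustration):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheWarmup {
        public static void main(String[] args) {
            // The first operation after this handle is obtained pays one-off
            // costs. Do it once with throwaway data before timing anything.
            NamedCache cache = CacheFactory.getCache("dist-test"); // name assumed

            long t0 = System.nanoTime();
            cache.put("warmup-key", "warmup-value");  // slow path, once
            cache.get("warmup-key");
            long t1 = System.nanoTime();

            cache.put("real-key", "real-value");      // steady-state path
            long t2 = System.nanoTime();

            System.out.printf("warm-up ops: %.1f ms, next op: %.3f ms%n",
                    (t1 - t0) / 1e6, (t2 - t1) / 1e6);
            CacheFactory.shutdown();
        }
    }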

  • NFC tags read/write operations on low level

    Hi,
    I know this is a little bit of an off-topic question -- but since you are experts in the area I will try to ask you a probably pretty simple question:
    1/ I would like to know which protocol is used for read/write operations on NFC tags. According to my understanding, after the tag is placed on the NFC reader (NFC phone, USB reader), it is powered and set to the ready state; then an application protocol is used for the read/write operations. As I understand it, the exact format and content of the commands used for read/write are not specified in ISO 14443; they depend on the tag hardware/manufacturer and will be different for FeliCa/Mifare/Innovision/etc. tags, so there is no way to handle NFC tag read/write operations with a single implementation. Is that assumption correct?
    2/ Are there any tags which support the ISO 7816-4 APDU commands for read/write operations?
    Thank you for reply
    Kind regards,
    STeN

    hello,
    you have to read the NFC Forum specs; all of this will be explained better there than by me.
    More than one protocol is used, depending on the contactless front-end configuration and abilities. These include ISO 14443-A, ISO 14443-B, and FeliCa. Sometimes other protocols are also available, for example Innovatron (not Innovision lol).
    Mifare is not a protocol; it is a line of NXP products. These products use the lower layers of the ISO 14443-A protocol specification.
    There are 4 types of tags:
    1) using the lower layers of ISO 14443-A
    2) using the lower layers of ISO 14443-B
    3) something related to FeliCa?
    I am not sure exactly about these three; you have to read the specs. Everything is clearly understandable, not like ETSI.
    4) something using ISO 7816-4 commands on top of ISO 14443-A or -B or others. You have SELECT, READ BINARY, and UPDATE BINARY. You can implement that using JavaCard; I did it and it works. You need two binary files, which can be hardcoded (see the sketch below for what those commands look like on the wire).
    Regards
    Sebastien
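
    For the curious, here is roughly what Sebastien's type-4 case looks like on the wire, sent from Java through a PC/SC reader via javax.smartcardio. The AID and file ID below are the ones published in the NFC Forum Type 4 Tag spec, but treat this as an illustrative sketch, not a complete NDEF reader:

    import java.util.List;
    import javax.smartcardio.Card;
    import javax.smartcardio.CardChannel;
    import javax.smartcardio.CardTerminal;
    import javax.smartcardio.CommandAPDU;
    import javax.smartcardio.TerminalFactory;

    public class Type4TagPeek {
        public static void main(String[] args) throws Exception {
            // Grab the first PC/SC reader and wait for a tag.
            List<CardTerminal> terminals =
                    TerminalFactory.getDefault().terminals().list();
            CardTerminal terminal = terminals.get(0);
            terminal.waitForCardPresent(0);
            Card card = terminal.connect("*");
            CardChannel ch = card.getBasicChannel();

            // SELECT the NDEF tag application by AID (NFC Forum Type 4 Tag).
            byte[] selectApp = {0x00, (byte) 0xA4, 0x04, 0x00, 0x07,
                    (byte) 0xD2, 0x76, 0x00, 0x00, (byte) 0x85, 0x01, 0x01, 0x00};
            System.out.println(ch.transmit(new CommandAPDU(selectApp)));

            // SELECT the Capability Container file (file ID E103).
            byte[] selectCc = {0x00, (byte) 0xA4, 0x00, 0x0C, 0x02, (byte) 0xE1, 0x03};
            System.out.println(ch.transmit(new CommandAPDU(selectCc)));

            // READ BINARY: 15 bytes of the CC file, starting at offset 0.
            byte[] readCc = {0x00, (byte) 0xB0, 0x00, 0x00, 0x0F};
            System.out.println(ch.transmit(new CommandAPDU(readCc)));

            card.disconnect(false);
        }
    }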

  • Network streams vs shared variables

    I send data from a PXI RT system to users on different Windows computers via Shared Variable and Network Stream. The user that receives the data via Network Stream writes the data to a disk file (aka the DAQ computer). The users that receive the data via Shared Variable display it on front panels (aka the Observers).
    The data consists of a 1D SGL array where elements 0-3 are the timestamp, element 4 is the counter, and elements 5-1000+ are data.  The timestamp is GPS time and is displayed on all computers.  When I look at the timestamp on the DAQ it is slowly falling behind the current GPS time.  After 4 hours it can be up to a minute behind.  When I look at the timestamp on the Observers it is always displaying current GPS time.  When I look at the code on the PXI System, it is always sending the current GPS time.  The counter on the DAQ computer is also behind.
    I am using the Read/Write Single Element Stream functions with the default read/write buffer size of 4096.  The 'timed out?' output is always FALSE for both functions.  No errors are generated.  LabVIEW memory usage is constant during the whole time.
    On the PXI RT System the Network Stream and Shared Variable are being written to inside a Timed While Loop.  The users read the data within a standard While Loop.  Everyone is using LabVIEW 2011.
    It sounds like a buffer is slowly being filled up somewhere, but where?
    Solved!
    Go to Solution.

    On the PXI RT System:
    How often is data sent?
    Are you using a “Flush Stream” function after your “Write Single Element to Stream”?
    On the “DAQ Computer”:
    Are you buffering the reading of the data (i.e. feeding it to a queue)?
    You might try using a property node to read “Available Elements for Reading” to see if they are stacking up here.
    The buffer size is another option to consider.
    steve
    Help the forum when you get help. Click the "Solution?" icon on the reply that answers your
    question. Give "Kudos" to replies that help.
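
    To make Steve's point about elements stacking up concrete: the generic fix for a consumer that slowly falls behind is to drain everything available on each loop iteration rather than reading exactly one element per cycle. The real fix here is LabVIEW code, but the pattern, modeled in Java with a queue standing in for the network stream (all names hypothetical), looks like this:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class DrainingConsumer {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<float[]> stream = new LinkedBlockingQueue<>();

            // Producer stand-in: one 1000-element packet every millisecond.
            Thread producer = new Thread(() -> {
                try {
                    while (true) { stream.put(new float[1000]); Thread.sleep(1); }
                } catch (InterruptedException e) { /* exit */ }
            });
            producer.setDaemon(true);
            producer.start();

            // Consumer: block briefly for the first element, then drain the
            // backlog in one go, so a slow iteration can't fall further behind.
            List<float[]> batch = new ArrayList<>();
            for (int i = 0; i < 100; i++) {          // bounded for the demo
                float[] first = stream.poll(100, TimeUnit.MILLISECONDS);
                if (first == null) continue;          // timed out: no new data
                batch.add(first);
                stream.drainTo(batch);                // grab everything pending
                // ... write `batch` to disk here ...
                batch.clear();
            }
        }
    }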

  • File Sharing (Read & Write) with a Network User - "Network User" Not Listed

    My boyfriend and I both have Macs, running 10.6.3. Both Macs are connected to the internet via Apple Wireless Express. I have a MacBook Pro, and my boyfriend has an iMac with a big fancy screen, so I like to use it for my own work when I'm at home and he isn't - it's just easier and more comfortable to use than a laptop.
    I would like to find a way to use his computer to access a shared folder on my laptop with read & write access (so that I can modify the files on my laptop while using his computer). I have gotten to the point where I have a folder on my laptop with the files I want shared with his computer, and can access this folder from his computer. However, doing it this way (accessing my files on my laptop from his computer) only allows read-only access. I would like to be able to edit the files that are on my laptop using his computer, so I think I would need read and write access.
    It seems the easiest way to do this would be to add a user (the boyfriend) on my laptop and give him read/write access to this specific folder that is located on my laptop. I found instructions on how to do this, and it says that I need to add a "network user". These instructions seem to indicate that when I add a user, there should be options for accounts on my laptop, my personal address book, a new account on my laptop, and network users. I see the first 3 options...but no option to add a network user.
    Why is that? How would I make it so that from my laptop, I can add a network user...specifically my boyfriend's computer, who also uses the same internet network that I use?
    Thank you so much for any help!

    Also found this behavior:
    While logged into the Mac Pro as MPUser1, I connect to the MacBook Pro file sharing as MBPUser1.
    Then I "disconnect".
    Then I log out of the Mac Pro MPUser1, and login as MPUser2.
    The connection to MBP as MBPUser1 is still active!
    This means that MPUser2 can access whatever MBPUser1 can access WITHOUT knowing MBPUser1's login password !
    This seems WRONG! Anybody else seen this?

  • Read/write operation on SAP IDOC file

    Hi All,
    We are developing an application which will be used for registering and processing travel data for a client. One of its functionalities is to get data from external systems in different formats (like CSV, Excel, fixed flat file, XML, mainframe file, DBF, and SAP IDOC) and import it into Oracle databases.
    One of the design considerations is to read these files and convert them into a predefined XML format,
    and then import the data from XML into the Oracle database.
    Currently we are analyzing the requirement and trying to find open-source Java APIs which can convert these different file formats to the predefined XML format, using some mapping file.
    We have found open-source Java APIs for all the file formats except SAP IDOC files.
    Is there any Java API to read/write SAP IDOC files? Please advise.
    Regards,
    Madhu
    Edited by: Madhu Sudhan on Feb 17, 2009 12:06 PM


  • How do you create default Read/Write Permissions for more than 1 user?

    My wife and I share an iMac, but use separate User accounts for separate mail accounts, etc.
    However, we have a business where we both need to have access to the same files and both have Read/Write permissions on when one of us creates a new file/folder.
    By default new files and folders grant Read/Write to the creator of the new file/folder, and read-only to the Group "Staff" in our own accounts or "Wheel" in the /Users/Public/ folder, and read-only to Everyone.
    We are both administrators on the machine, and I know we can manually override the settings for a particular file/folder by changing the permissions, but I would like to set things up so that Read/Write permissions are assigned to both of us in the folder that holds our business files.
    It is only the 2 of us on the machine, we trust each other and need to have complete access to these many files that we share. I have archiveing programs running so I can get back old versions if we need that, so I'm not worried about us overwriting the file with bad info. I'm more concerned with us having duplicates that are not up to date in our respective user accounts.
    Here is what I have tried so far:
    1. I tried to just set the permissions of the containing folder with us both having read/write permissions, and applied that to all contained elements.
    RESULT -> This did nothing for newly created files or folders; they still had the default permissions of Read/Write for the creating User, Read for the default Group, Read for Everyone
    2. I tried using Sandbox ( http://www.mikey-san.net/sandbox/ ) to set the inheritance of the folder using the methods laid out at http://forums.macosxhints.com/showthread.php?t=93742
    RESULT -> Still, this did nothing for newly created files or folders; they still had the default permissions of Read/Write for the creating User, Read for the default Group, Read for Everyone
    3. I have set the umask to 002 ( http://support.apple.com/kb/HT2202 ) so that new files and folders have default permissions that give the default group Read/Write access (with umask 002, new files are created as 666 & ~002 = 664, i.e. rw-rw-r--, instead of 644). This unfortunately changes the default for the entire computer, not just a given folder.
    I then had to add my wife's user account to the "Staff" group, because for some reason her account was not included in it. I think this is due to the fact that her account was ported onto the computer when we upgraded, whereas mine was created new. I read something about that somewhere, but don't recall where now. I discovered which groups we were each in by using the Terminal and typing "groups username", where username was the user I was checking on.
    I added my wife to the "Staff" group, and both of us to the "Wheel" group using the procedures I found at
    http://discussions.apple.com/thread.jspa?messageID=8765421&#8765421
    RESULT -> I could create a new file using TextEdit and save it anywhere in my account and it would have the permissions: My Username - Read/Write, "Staff" or "Wheel" (depending on where I saved it) - Read/Write, Everyone - Read Only, as expected from the default umask.
    I could then switch over to my wife's account, open the file, edit it, and save it, but then the permissions changed to: Her Username - Read/Write, (unknown) - Read/Write, Everyone - Read Only.
    And when I switch back to my account, now I can open the file, but I can't save it with my edits.
    I'm at my wits' end with this, and I can't believe it is impossible to create a common folder that we can both put files into and have Read/Write permissions on, like a true Shared Folder. Anyone who has used Windows knows what you can do with the Shared folder in that operating system, i.e. anyone with access can do anything with those files.
    So if anyone can provide me some insight on how to accomplish what I really want to do here, and help me get my system back by removing the things it seems like I have screwed up, I would greatly appreciate it.
    I tried to give as detailed a description of the problem and what I have done as possible, without being too long-winded, but if you need to know anything else to help me, please ask -- I certainly won't be offended!
    Thanks In Advance!
    Steve

    Thanks again, V.K., for your assistance and especially for the very prompt responses.
    I was unaware that I could create a volume on the HD non-destructively using disk utility. This may then turn out to be the better solution after all, but I will have to free up space on this HD and try that.
    Also, I was obviously unaware of the special treatment of file creation by TextEdit. I have been using this to test my various settings, and so the inheritance of ACLs has probably been working properly, I just have been testing it incorrectly. URGH!
    I created a file from Word in my wife's account, and it properly inherited the permissions of the company folder: barara - Custom, steve - Custom, barara - Read/Write, admin - Read Only, Everyone - Read Only
    I tried doing the chmod commands on $TMPDIR for both of us from each of our accounts, but I still see the same behavior for TextEdit files.
    I changed the group on your shared folder to admin from wheel as you instructed with chgrp. I had already changed the umask to 002, and I just changed it back to 022 because it didn't seem to help. But now I know my testing was faulty. I will leave it this way though because I don't think it will be necessary to have it set to 002.
    I do apparently still have a problem though, probably as a result of all the things I have tried to get this work while I was testing incorrectly with TextEdit.
    I have just discovered that the "unknown user" only appears when I create a file from my wife's account. It happens with any file or folder I create in her account, and it exists for very old files and folders that were migrated from the old computer, i.e. new and old files and folders have permissions: barara - Read/Write, unknown user - Read Only, Everyone - Read Only
    Apparently the unknown user gets the default permissions of a group, as the umask is currently set to 022 and unknown user now gets Read Only permissions on new items, but when I had umask set to 002, the unknown user got Read/Write permissions on new items.
    I realize this is now taking this thread in a different direction, but perhaps you know what might be the cause of this and how to correct or at least know where to point me to get the answer.
    Also, do you happen to know how to remove users from groups? I added myself and my wife to the Wheel group because that kept showing up as the default group for folders in /Users/Shared
    Thanks for your help on this, I just don't know how else one can learn these little "gotchas" without assistance from people like you!
    Steve

  • Socket read write on Solaris 8 too slow.

    Hi
    We have an application which consists of several server instances.
    The front-end is web-based , using JSP/Servlets. We are using Tomcat 4.0.
    The servlet makes several connections to the underlying servers, but the read/write operations are too slow. The same setup runs much quicker on Windows.
    We are running jdk1.4.1_02 (stable?). Would much appreciate any help.
    cheers
    Projyal
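
    A generic first thing to check with slow Java socket I/O (a sketch, not Projyal's actual code; modern Java shown, but the same idea applies on 1.4): unbuffered socket streams can issue one system call per small write, and Nagle's algorithm can add latency on top. Wrapping the streams and flushing once per message often makes a dramatic difference. Host and port here are assumptions:

    import java.io.BufferedInputStream;
    import java.io.BufferedOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    public class BufferedSocketIO {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("server.example.com", 9000)) {
                socket.setTcpNoDelay(true); // avoid Nagle delays on small writes

                // Unbuffered socket streams may make one syscall per write;
                // buffering batches them, which is often the whole difference
                // between "slow on Solaris" and "fine everywhere".
                BufferedOutputStream out =
                        new BufferedOutputStream(socket.getOutputStream(), 8192);
                BufferedInputStream in =
                        new BufferedInputStream(socket.getInputStream(), 8192);

                out.write("REQUEST\n".getBytes("US-ASCII"));
                out.flush(); // flush once per message, not per byte

                byte[] buf = new byte[4096];
                int n = in.read(buf);
                System.out.println("read " + n + " bytes");
            }
        }
    }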

    tomcat version --3.2.3
    j2sdk version----1.4
    plarform---------solaris
    solaris version--8
    Using the above configuration I need to deploy JSP code on the Solaris platform. Can any technical expert guide me on how to do this? I would be grateful if anyone could suggest a good site where I can download free tutorials on JSP development, in HTML or Help format.
    thank you
    regards
    brijesh

  • MSMQ read/write is slow

    Sir,
    I have installed MSMQ on my Windows Server 2008 R2 operating system, but queue reading and writing is very slow. It gives me 150 to 160 messages per second for write or read, and the speed decreases if read and write operations are performed at the same time.
    Earlier I had MSMQ on Windows Server 2003. Read/write operations were fine and performance was good: it gave me 1000 to 2000 messages per second.
    Regards,
    Sandeep

    Hi,
    Thanks for posting here.
    Regarding the current issue, please try to refer to the following article to see if it could improve the performance.
    How to improve MSMQ disk performance
    http://blogs.msdn.com/b/johnbreakwell/archive/2008/02/13/msmq-disk-performance.aspx
    Hope this helps.
    Best Regards,
    Andy Qi
    TechNet Community Support

  • 10.5.8 on G4 - firewire read/write fails silently, corrupting data.

    I've already posted this problem on another board here
    http://discussions.apple.com/thread.jspa?threadID=2565228&tstart=0
    But it seems more relevant to this forum.
    I've recently encountered this problem which caused me to lose a lot of data. The setup is mac mini g4 + external firewire hard drive from Freecom.
    What is happening is that read/write operations do not result in copy fidelity. Here's a terminal log to illustrate:
    ariel:Freecom FW 1TB kaitlin$ cp another.avi another1.avi
    ariel:Freecom FW 1TB kaitlin$ ls -lh another*
    -rw-r--r-- 1 kaitlin staff 176M 2 Sep 11:48 another.avi
    -rw-r--r-- 1 kaitlin staff 176M 2 Sep 12:07 another1.avi
    ariel:Freecom FW 1TB kaitlin$ md5 another.avi
    MD5 (another.avi) = 6eedf37d80b81f61f0c4a8f71dfea57c
    ariel:Freecom FW 1TB kaitlin$ md5 another1.avi
    MD5 (another1.avi) = b8cffac964d494279a508f11151ee529
    So basically the original and its copy have different hashes, which means they're not the same file, so I can't trust the FireWire bus. It does this consistently with any file read/write operation to/from any destination. The machine does NOT behave like this with external USB drives or the internal drive.
    I've got two computers, both G4 1.5 GHz of the same era (Mac mini and Powerbook 15"), and the same symptoms appear on both machines, as well as when hooking the drive up to a FW800 port.
    I tried connecting the drive to a friend's Intel Mac Mini, and it does NOT exhibit this behaviour. So I'm suspecting it's a recent software update that caused the drive to fail, since I have used the same firewire disk for a few months without any problems.
    I've reverted to MacOS X 10.5.0 and the problem goes away. Then I re-did the update to 10.5.8 and tested it - the problem comes back. It's present on any firewire device I could test on (including a 3rd gen firewire ipod).
    So basically, FireWire on 10.5.8 seems broken on a G4.
    - kaitlin

    You seem to have done a good job of proving that the Firewire drivers in 10.5.8 don't work properly on your model of Mini. I don't think there's a UNIX solution to that problem. If the drivers are incompatible, only Apple can fix them, and it won't.
    If any firmware updates were ever released for your model, make sure you have the latest one.
    The only other thing you could try that you might not already have done is to do a clean install of 10.5, then run the combo updater (not a series of intermediate updaters) to get back to 10.5.8. That procedure has been reported to solve strange problems on occasion.

  • Labview is reading zero from shared variable

    Hello everyone 
    I have a problem with a LabVIEW shared variable: it reads only zero for all inputs, outputs, and timers in the S-1200, etc.
    In NI OPC Server, using Quick Client, it reads the correct values, but when I add them to the LabVIEW project they do not read the correct values and only return zero.
    any suggestions 
    Thanks

    Hi,
    I recently finished the commissioning of the system I started to work on; hence the delay in coming back to this thread.
    During the commissioning of the system, I was concentrating on finalizing the system functionality and was using only PC#1. When all the system functions were finalized, we duplicated the changes to PC#2 and started operation.
    What we observed when PC#2 was started was that if we opened the same HMI page as on PC#1, the operation of the HMI was very sluggish. When we selected a different page for each of PC#1 and PC#2, the HMI operation was good; all operations seemed to happen in real time!!
    To give some background on the HMI, we have a subpanel into which the desired HMI page/screen is loaded. This loads a VI related to the HMI page.
    Here, I am guessing that when I load the same page on both PCs, the read/write operations to the shared variable (it is the same on both PCs, and is deployed in a library on the cRIO) are somehow hindered due to both PCs trying to access the same network resource from different locations.
    Is this guess correct? Any workarounds or solutions? Do I need to post any more information?
    Thanks in advance,
    Regards

  • Are the read and write operations atomic for an array in a local variable.

    Hi,
    I would like to know when you access an array in a local variable, is it an atomic operation?
    Thanks,
    Mat

    Thanks for the comments. I agree with you. However, in my case, race conditions and synchronization are not issues. Therefore, the only thing that matters to me is that the write and read operations on the array must be atomic. I know that I can implement that with an LV2-style global, but I want to avoid it if possible.
    If writing and reading an array are atomic operations, then I can simply use local or global variables.
    All I need to know is: is reading or writing an array in a local variable an atomic operation?
    Thanks,
    Mat
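
    While the LabVIEW answer depends on how locals are implemented, the usual textual-language way to get whole-array atomicity without a lock is to publish an immutable array through an atomic reference, so readers always see a complete old or new array, never a mix. A sketch in Java (names hypothetical), analogous in spirit to the LV2-style global:

    import java.util.concurrent.atomic.AtomicReference;

    public class AtomicArraySwap {
        // Readers always see either the old array or the new one, never a
        // mix: the reference swap is atomic, and each published array is
        // treated as immutable once it has been set.
        private final AtomicReference<double[]> data =
                new AtomicReference<>(new double[0]);

        public void write(double[] newValues) {
            data.set(newValues.clone()); // publish a defensive copy atomically
        }

        public double[] read() {
            return data.get();           // consistent snapshot; do not mutate
        }
    }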

  • Slow read and write operations on DAQmx

    I am trying to build a feedback control system using PCI-6052E and PCI-6722 cards, so that the computation of the control algorithm is performed on the computer's CPU. I am trying to reach a sampling rate of 1 kHz. It turns out that the bottleneck of my system is the read and write operations from and to the cards, which consume a lot of processor time.
    Example code (C#) that shows how the reads and writes are implemented is attached. In my tests the example code takes 7.58 s to read 1000 samples on 6 channels, and 4.69 s to write them. Is there any way to improve the performance?
    The program is running on Windows XP on a 1000 MHz processor.
    Attachments:
    DAQmxPerformanceTest.cs ‏3 KB

    Petteri,
    I don't have the hardware to reproduce this, but I have a few ideas. For analog output, are you creating a task, starting it, and calling write repeatedly, or are you simply calling write? While an AO task will auto-start on write, it will also go through the process of stopping when the write is complete. That means that the next time you call write, the task will need to start again. It will be much more efficient to explicitly call start on the task once, perform as many writes as required, and stop/clear the task when you are done. The same principle applies to your analog input reads as well.
    I hope this helps,
    Dan
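
    Dan's start-once/write-many advice, as pseudocode: DAQmx has no Java API, so the interface below is purely hypothetical and only illustrates where the one-time start/stop cost should sit relative to the 1 kHz write loop:

    // DAQmx has no Java binding; this interface is illustrative only.
    interface AnalogOutputTask extends AutoCloseable {
        void start();                 // commit + start the task once
        void write(double[] frame);   // fast path while the task is running
        void stop();
        @Override void close();
    }

    public class WriteLoop {
        public static void run(AnalogOutputTask task, double[][] frames) {
            // Anti-pattern: write on a stopped task auto-starts it and stops
            // it again afterwards, paying the setup cost on every iteration.
            // Pattern: start once, write many times, stop/clear once.
            task.start();
            try {
                for (double[] frame : frames) {
                    task.write(frame); // only this runs inside the 1 kHz loop
                }
            } finally {
                task.stop();
                task.close();
            }
        }
    }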

  • File read and write operations

    How do I use file read and write operations?
    Can anyone give a simple program?

    http://www.tutorialspoint.com/cplusplus/cpp_files_streams.htm
    http://www.cplusplus.com/doc/tutorial/files/
    Check these; and with MFC:
    http://www.functionx.com/visualc/fileprocessing/serialization.htm
    https://msdn.microsoft.com/en-us/library/6337eske.aspx
    http://www.informit.com/library/content.aspx?b=Visual_C_PlusPlus&seqNum=90
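
    Since the question literally asks for a simple program: the links above cover C++ and MFC, but the pattern is the same everywhere. For illustration, a minimal read/write example in Java (the file name is assumed):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;

    public class FileReadWrite {
        public static void main(String[] args) throws IOException {
            Path path = Paths.get("example.txt"); // file name assumed

            // Write: create (or overwrite) the file with two lines of text.
            Files.write(path, List.of("first line", "second line"),
                    StandardCharsets.UTF_8);

            // Read: load every line back and print it.
            List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
            lines.forEach(System.out::println);
        }
    }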

  • IO Operation (read/write files) in IPAD AIR downgrade ~50% than the one in IPAD4

    Summary:
    Unzipping the same file with the same native unzip method (native zlib) on iOS 7, the iPad Air takes ~50% more time than the iPad 4.
    Since unzipping mainly performs file read/write I/O on the iPad, has Apple already changed the file system on the iPad Air?
    Steps to Reproduce:
    Unzip a file with native zlib on both the iPad Air and the iPad 4:
    iPad Air: 16GB
    iPad 4: 16GB
    ZIP file size: 700KB
    Files in zip file: 225 files (120 files are images)
      (after removing the images, the unzip time on the iPad Air drops by ~50%)
    Unzip method: native zlib
    Unzip time on iPad Air + iOS 7: ~1200ms
    Unzip time on iPad 4 + iOS 7: ~800ms
    Expected Results:
    The unzip time should be almost the same on the iPad Air and the iPad 4.
    Actual Results:
    For the same read/write file I/O operations, the iPad Air shows ~50% worse performance than the iPad 4.
    Version:
    iPad 4: 16GB WiFi + iOS 7
    iPad Air: 16GB WiFi + iOS 7
    does anyone else encounter the same problem ?
    A bug already submitted for apple: https://devforums.apple.com/message/993060

    I even tried porting the code to Gumbo and running it there - still, no fonts are being enumerated.
    If you're too lazy to read the whole post above, here's the problem in one sentence:
    An SWF that contains a textfield with embedded fonts, when launched by itself succeeds to return the embedded font using Font.enumerateFonts(false), however when loaded using Loader.loadBytes into AIR, it fails to see those fonts even though the textfield in it is displayed and editable.
    How do I make the loaded child application and AIR see the embedded font?
