Multiple write operations on DataSocket possible?

I am using DataSocket to communicate between two applications. The first application writes a command to a DataSocket URL, and the second application reads this command and, after interpreting it, writes some parameters to the same URL. But I am having problems with this type of architecture: sometimes the commands are not read by the second application. I also wanted to know: after writing the command, if I do a read operation on the same URL, will the command be lost for the second application?

Hi Philip
Thanks a lot for your answer. I have created 2 different items now, and reads and writes take place on separate items. But now I have another problem: if I convert the VIs to applications, the server is not able to read what is written on the URL. I have attached the VIs along with this mail; kindly change the URL accordingly before converting them to applications and running.
Thanks for your answer
Arun
The type of command can be changed from 0 to 2:
for 0 the server answers OK
for 1 the server currently does not reply
for 2 the server answers OK
But if it is converted to an application, the server does not receive the commands.
Attachments:
datasokclient.zip ‏40 KB
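The fix described above (separate command and response items) generalizes: when each side writes only its own item and reads only the other's, a read can never consume the reader's own freshly written command. Below is a minimal C++ model of that pattern using two mailbox-style channels in place of DataSocket items; it is an analogy for illustration, not the NI DataSocket API (which is used from LabVIEW/ActiveX rather than portable C++), and the dstp:// item names in the comments are hypothetical.

// Generic model of the two-item command/response pattern: each direction
// has its own channel, so a reader never consumes a value meant for the peer.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <string>
#include <thread>

// A one-slot "data item": the last write wins; a read blocks until a value
// arrives and then consumes it, like a subscriber picking up an update.
template <typename T>
class Item {
public:
    void write(T v) {
        std::lock_guard<std::mutex> lk(m_);
        value_ = std::move(v);
        cv_.notify_all();
    }
    T read() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return value_.has_value(); });
        T v = std::move(*value_);
        value_.reset();
        return v;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::optional<T> value_;
};

int main() {
    Item<int> command;           // client -> server  ("dstp://host/command")
    Item<std::string> response;  // server -> client  ("dstp://host/response")

    std::thread server([&] {
        int cmd = command.read();                       // server only reads commands
        response.write(cmd == 1 ? "(no reply)" : "OK"); // ...and only writes responses
    });

    command.write(0);                                   // client only writes commands
    std::cout << "server says: " << response.read() << '\n';
    server.join();
    return 0;
}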

Similar Messages

  • Multiple writes possible in DS in LabVIEW but not in CVI?

    Hi
    I am using DataSocket to communicate between two VIs. In one of the VIs I open a new data item in "ReadWrite" mode and do both read and write operations, and in the other VI I just write to the data item by supplying the URL to the write function. Surprisingly, this works even though I haven't created the data item from the DS manager and have not selected the "allow multiple writers" option.
    But if I create a CVI application which opens the data item in "ReadWrite" mode, and I then write to that data item from LabVIEW, it gives me an error; I even tried selecting "allow multiple writers" on this data item.
    Please suggest a solution. Also, if I open a lot of data items, will it cause any problems? I guess our project needs at least 100 data items if I have to use command and response in different data items.
    Thanks for the help
    Arun

    Hello,
    The following KnowledgeBase article describes two possible solutions for this issue.
    Why Does My DataSocket Connection in LabVIEW Fail with "Error: Can't connect to data item, another w...
    The second solution is what you have already described: checking the box to Allow Multiple Writers. I am not sure why that is not working in your case, but I would suggest trying the first solution suggested in the KB. It involves creating the DataSocket item a certain way in LabVIEW, and more details are available in the KB.
    Scott Y
    NI

  • The first binary file write operation for a new file takes progressively longer.

    I have an application in which I am acquiring analog data from multiple PXI-6031E DAQ boards and then writing that data to FireWire hard disks over an extended time period (14 days). I am using a PXI-8145RT controller, a PXI-8252 FireWire interface board, and compatible FireWire hard drive enclosures.
    When I start acquiring data to an empty hard disk, creating files on the fly as well as the actual file I/O operations are both very quick. As the number of files on the hard drive increases, it begins to take considerably longer to complete the first write to a new binary file. After the first write, subsequent writes of the same data size to that same file are very fast. It is only the first write operation to a new file that takes progressively longer.
    To clarify: it currently takes 1 to 2 milliseconds to complete the first binary write of a new file when the hard drive is almost empty. After writing 32 files of 150 MB each, the first binary write to file 33 takes about 5 seconds! This behavior is repeatable and continues to get worse as the number of files increases. I am using the FAT32 file system, required for the Real-Time controller, and 80 GB laptop hard drives.
    The system works flawlessly until asked to create a new file and write the first set of binary data to that file. I am forced to buffer lots of data from the DAQ boards while the system hangs at this point. The requirements for this data acquisition system do not allow for a single data file, so I cannot simply write to one large file.
    Any help or suggestions as to why I am seeing this behavior would be greatly appreciated.
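    A mitigation sometimes used on FAT32 (a suggestion from outside this thread, so treat it as an assumption): create, and pre-extend, every data file once at startup, so the directory/FAT scanning cost is paid before acquisition begins rather than at each file rollover. A C++ sketch, with hypothetical file count and names and the 150 MB size from the post:

    // Pre-create and pre-size data files before acquisition starts, moving
    // the FAT32 allocation/directory-scan cost out of the time-critical loop.
    #include <cstdio>
    #include <string>

    int main() {
        const int  kFileCount = 64;                  // expected number of files
        const long kFileBytes = 150L * 1024 * 1024;  // 150 MB each, as in the post

        for (int i = 0; i < kFileCount; ++i) {
            std::string name = "data_" + std::to_string(i) + ".bin";
            std::FILE* f = std::fopen(name.c_str(), "wb");
            if (!f) { std::perror("fopen"); return 1; }
            // Seek to the last byte and write it, forcing the full cluster
            // chain to be allocated now (FAT32 has no sparse files) instead
            // of during the first real write.
            std::fseek(f, kFileBytes - 1, SEEK_SET);
            std::fputc(0, f);
            std::fclose(f);
        }
        // The acquisition loop would later reopen each file with "r+b" and
        // overwrite it in place, which this thread reports as being fast.
        return 0;
    }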

    I am experiencing the same problem. Our program periodically monitors data and eventually saves it for post-processing. While it's searching for suitable data, it creates one file for every channel (32 in total) and starts streaming data to these files. If it finds the data is not suitable, it deletes the files and creates new ones.
    In our lab, we tested the program on Windows and then on RT and we did not find any problems.
    Unfortunately, when it was time to install the PXI in the field (an electromechanical shovel at a copper mine) and test it, we came to find that saving was taking too long and the program failed, specifically when creating files (i.e., the "New File" function). It could take 5 or more seconds to create a single file.
    As you can see, the field startup failed and we will have to modify our programs to work around this problem and return next week to try again, with the additional time and cost involved. Not to mention the bad image we are giving to our customer.
    I really like LabVIEW, but I am particularly upset because of this problem. LV RT is supposed to run as if it were LV Win32, with the obvious and expected differences, but a developer cannot expect things like this to happen. I remember a few months ago I had another problem: on RT, the Time/Date function gives a wrong value as your program runs when using timed loops. Can you expect something like that when evaluating your development platform? Fortunately, we found that problem before giving the system to our customer and there was a relatively easy workaround. Unfortunately, this time we had to hit the wall to find the problem.
    On this particular problem I also found that it gets worse when there are more files in the directory. Create a new directory every N hours? I really think that's not a solution. I would not expect this answer from NI.
    I would really appreciate someone from NI giving us a technical explanation of why this problem happens, and not just "trial and error" "solutions".
    By the way, we are using a PXI RT controller with the solid-state drive option.
    Thank you.
    Daniel R.
    Message Edited by Daniel_Chile on 06-29-2006 03:05 PM

  • Oracle Coherence first read/write operation takes more time

    I'm currently testing with Oracle Coherence (Java and C++ versions), and in both versions, for writes to any local, distributed, or near cache, the first read/write operation takes more time than the consecutive read/write operations that follow. Is this because of one-time setup happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
    Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques for speeding up the Coherence cache.
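    A common mitigation is to warm the cache up front so the one-time costs (cluster join or proxy connection, class loading, serializer setup) are paid before latency-sensitive work begins. A hedged sketch against the Coherence for C++ extend-client API as documented; the cache name and keys are hypothetical, so verify the exact headers and signatures against your Coherence version:

    // Warm-up: do one throwaway round trip right after obtaining the cache,
    // so later operations see steady-state latency.
    #include "coherence/net/CacheFactory.hpp"
    #include "coherence/net/NamedCache.hpp"

    using coherence::lang::String;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;

    int main()
    {
        // The first getCache() call typically carries the connection cost.
        NamedCache::Handle hCache = CacheFactory::getCache("dist-example");

        // Dummy put/get/remove initializes serialization and the proxy route.
        hCache->put(String::create("warmup"), String::create("warmup"));
        hCache->get(String::create("warmup"));
        hCache->remove(String::create("warmup"));

        // ... real workload here ...
        CacheFactory::shutdown();
        return 0;
    }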

    In which case, why bother using Coherence? You're not really gaining anything, are you?
    What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for C++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nanosecond/microsecond-level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
    As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "outgun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching) can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
    If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirements, when fully configured in a cluster, of "1 Micro seconds for 100000 data collection" on a continuous basis.
    Just my two cents.
    Cheers,
    Steve
    NB. I don't work for Oracle, so maybe they have a different opinion. :)

  • Retention guarantee causing multiple DML operations to fail ?

    WARNING: Enabling retention guarantee can cause multiple DML operations to fail. Use with caution.
    The Oracle docs (10.2), Section "Introduction to Automatic Undo Management" (Undo Retention), state the above.
    This would mean that other DML operations, if requiring space in undo, would fail with an ORA-30036 error. Is this a correct understanding?
    If so, then one way to avoid this ORA error is to have autoextend defined?

    > From the Ora Docs (10.2) Section - Introduction to Automatic Undo Management (Undo Retention) states the above.
    Is it from http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/undo.htm#sthref1482 ?
    > This would mean that other DML operations if requiring space in undo, would therefore fail with a ORA-30036 error. Is this correct understanding ?
    Not all DML operations requiring undo space would fail; only those transactions for which there is no space left in the undo tablespace would fail. And yes, this is one of the possible error messages that one can get.
    > If so then one way to avoid this ORA error is to have autoextend defined. ??
    Yes, but that is just pushing the brick wall two feet away. Besides, with auto-extension turned on and inappropriate undo_retention parameter settings, you will have issues with disk space (if any).

  • How to use multiple WSDL operations in one BPEL process Receive activity?

    Is there any way to attach multiple WSDL operations to a single BPEL process? How?

    Thanks Melvin for your quick response.
    When I create a BPEL process, it asks me to give the XSD as an input. When I import the XSD, it asks me to select one operation, no more than one. Suppose I select addRequest and finish the wizard.
    Now what I can see is that it creates a Receive activity with an input message for the Add operation. What I understand from your statement is that I remove the Receive activity and put in a Pick activity.
    Now my question is: how do I mention the other operations, like update and delete? And how do I test them? The link you provided just tells me what the Pick activity can do for me, but it doesn't tell me how to use it or how to give the operation to it. Where do I need to make changes in the BPEL?

  • 'BAPI_PO_CREATE1'  Multiple account assignment is not possible for AFS item

    'BAPI_PO_CREATE1' -> This BAPI works perfectly without the 'account assignment' option. But for purchase requisitions which have account assignments, the BAPI returns the error: E|8W |185 |Multiple account assignment is not possible for AFS items.
    Can somebody please help me get this error resolved?
    My code is below.
    DATA: pohead  TYPE bapimepoheader.
    DATA: poheadx TYPE bapimepoheaderx.
    CONSTANTS: c_x VALUE 'X'.
    DATA: exp_head TYPE bapimepoheader.
    DATA: return  TYPE TABLE OF bapiret2 WITH HEADER LINE.
    DATA: poitem  TYPE TABLE OF bapimepoitem WITH HEADER LINE.
    DATA: poitemx TYPE TABLE OF bapimepoitemx WITH HEADER LINE.
    DATA: posched  TYPE TABLE OF bapimeposchedule WITH HEADER LINE.
    DATA: poschedx TYPE TABLE OF bapimeposchedulx WITH HEADER LINE.
    DATA: POACCOUNT  TYPE TABLE OF BAPIMEPOACCOUNT WITH HEADER LINE.
    DATA: POACCOUNTX TYPE TABLE OF BAPIMEPOACCOUNTx WITH HEADER LINE.
      " Purchase order header data
      pohead-comp_code  = '1000'.                "company code
      pohead-doc_type   = 'NB'.
      pohead-creat_date = sy-datum.
      pohead-vendor     = EKKO-LIFNR.            "e.g. '0000500004'
      pohead-purch_org  = purch_org.
      pohead-pur_group  = purch_grp.
      pohead-langu      = sy-langu.
      pohead-doc_date   = sy-datum.
      " Flag the header fields that are supplied
      poheadx-comp_code  = c_x.
      poheadx-doc_type   = c_x.
      poheadx-creat_date = c_x.
      poheadx-vendor     = c_x.
      poheadx-langu      = c_x.
      poheadx-purch_org  = c_x.
      poheadx-pur_group  = c_x.
      poheadx-doc_date   = c_x.
      " Item data (account assignment category 'K')
      poitem-po_item    = iLineItem.             "e.g. 1
      poitem-material   = req_item-MATERIAL.     "e.g. '000000000040000234'
      poitem-plant      = req_item-PLANT.
      poitem-quantity   = req_item-QUANTITY.
      poitem-net_price  = NET_PRICE.
      poitem-price_unit = PRICE_UNIT.
      poitem-shipping   = 'Z1'.
      poitem-preq_no    = req_item-PREQ_NO.
      poitem-preq_item  = req_item-PREQ_ITEM.
      poitem-acctasscat = 'K'.
      APPEND poitem.
      " Flag the item fields that are supplied
      poitemx-po_item    = iLineItem.
      poitemx-po_itemx   = c_x.
      poitemx-material   = c_x.
      poitemx-plant      = c_x.
      poitemx-quantity   = c_x.
      poitemx-tax_code   = c_x.
      poitemx-item_cat   = c_x.
      poitemx-acctasscat = c_x.
      poitemx-net_price  = c_x.
      poitemx-price_unit = c_x.
      poitemx-shipping   = c_x.
      poitemx-preq_no    = c_x.
      poitemx-preq_item  = c_x.
      APPEND poitemx.
      " Account assignment data for the item
      POACCOUNT-PO_ITEM    = iLineItem.
      POACCOUNT-SERIAL_NO  = iLineItem.
      POACCOUNT-GL_ACCOUNT = '0000211010'.
      POACCOUNT-SD_DOC     = '0001001056'.
      POACCOUNT-ITM_NUMBER = '000100'.
      POACCOUNT-CO_AREA    = '1000'.
      APPEND POACCOUNT.
      " Flag the account assignment fields that are supplied
      POACCOUNTX-PO_ITEM    = '00001'.
      POACCOUNTX-SERIAL_NO  = '01'.
      POACCOUNTX-PO_ITEMX   = 'X'.
      POACCOUNTX-SERIAL_NOX = 'X'.
      POACCOUNTX-GL_ACCOUNT = 'X'.
      POACCOUNTX-SD_DOC     = 'X'.
      POACCOUNTX-ITM_NUMBER = 'X'.
      APPEND POACCOUNTX.
      " Create the purchase order
      CALL FUNCTION 'BAPI_PO_CREATE1'
        EXPORTING
          poheader         = pohead
          poheaderx        = poheadx
        IMPORTING
          exppurchaseorder = ex_po_number
          expheader        = exp_head
        TABLES
          return           = return
          poitem           = poitem
          poitemx          = poitemx
          POACCOUNT        = POACCOUNT
          POACCOUNTX       = POACCOUNTX.

    I found the answer.

  • File Adapter Write Operation inserts a new line at the end

    Hi,
    I am using the write operation of the File Adapter from a BPEL process. The data is written successfully, and the file is created in the location specified in the adapter WSDL. But the created file has a new line at the end, and this new line has to be avoided when writing data to the file.
    Has anyone faced this and solved it?
    I am on SOA 10.1.3.4.
    Cheers,
    - AR

    It is a bug in 10g and will be resolved in one of the versions of 11g.
    Cheers,
    -AR
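    Until you are on a release with the fix, one workaround (not from this thread, just a generic suggestion) is to post-process the file and strip the trailing newline after the adapter has written it. A standalone C++ sketch, assuming the file is small enough to rewrite in one pass; the path is an example:

    // Remove a trailing '\n' (or "\r\n") left at the end of a written file.
    #include <fstream>
    #include <iterator>
    #include <string>

    bool stripTrailingNewline(const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        if (!in) return false;
        std::string data((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());
        in.close();

        while (!data.empty() && (data.back() == '\n' || data.back() == '\r'))
            data.pop_back();                 // drop the adapter's final newline

        std::ofstream out(path, std::ios::binary | std::ios::trunc);
        out.write(data.data(), static_cast<std::streamsize>(data.size()));
        return out.good();
    }

    int main() { return stripTrailingNewline("/tmp/adapter_output.txt") ? 0 : 1; }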

  • Has anyone else ever had a problem where you had to perform 2 datasocket writes before the datasocket read would pick up the change?

    I have a local VI that is simply a control that writes to the DataSocket server whenever the control value changes. (The data item on the server is permanent - it is initialized and never released by the server.)
    In the same local VI I have a DataSocket read polling a different data item on the server.
    The remote machine has a VI that reads the permanent data item on the server once per second.
    For some reason, after adding the local VI mentioned above, the remote VI stopped picking up the first change in the permanent variable.
    I'd start both the local and remote VIs...
    Then change the local control - and the remote VI would not update, as if the DataSocket write (upon adjusting the control) did not take place. So I'd change the control value again - this time the remote VI would update to this new value. And from here on out, the remote VI would update correctly. This problem only occurs when the local VI is first started up.
    What in the heck is going on?

    dingler44 wrote:
    >
    > Has anyone else ever had a problem where you had to perform 2
    > datasocket writes before the datasocket read would pick up the change?
    Gorka is right, this came up on Info-LV a few days ago. Someone
    described a similar problem. I replied that I had seen similar
    behaviour, reported it to NI, and they verified a bug. There is no fix
    yet, but they are aware of it and will fix it. No anticipated release
    date for the fix.
    Regards,
    Dave Thomson
    David Thomson 303-499-1973 (voice and fax)
    Original Code Consulting [email protected]
    www.originalcode.com
    National Instruments Alliance Program Member
    Research Scientist 303-497-3470 (voice)
    NOAA Aeronomy Laboratory 303-497-5373 (fax)
    Boulder, Colorado [email protected]

  • NFC tag read/write operations at a low level

    Hi,
    I know this is a little bit of an off-topic question, but since you are experts in the area I will try to ask you a probably pretty simple question:
    1/ I would like to know which protocol is used for read/write operations on NFC tags. According to my understanding, after the tag is placed on the NFC reader (NFC phone, USB reader), it is powered and set to the ready state. Then the application protocol for the read/write operation is used. As I understand it, the exact format and content of the commands used for read/write are not specified in ISO 14443 and are dependent on the tag hardware/manufacturer, and will be different for FeliCa/Mifare/Innovision/etc. tags, so there is no way to handle NFC tag read/write operations with a single implementation. Is that assumption correct?
    2/ Are there any tags which support the ISO 7816-4 APDU commands for read/write operations?
    Thank you for reply
    Kind regards,
    STeN

    Hello,
    You have to read the NFC Forum specs; all of this is explained there better than I can.
    More than one protocol is used, according to the contactless front-end configuration and abilities. They include ISO 14443-A, ISO 14443-B, and FeliCa. Sometimes other protocols are also available, for example Innovatron (not Innovision lol).
    Mifare is not a protocol; it is a line of NXP products. These products use the lower layers of the ISO 14443-A protocol specification.
    There are 4 types of tags:
    1) using the lower layers of ISO 14443-A
    2) using the lower layers of ISO 14443-B
    3) something related to FeliCa?
    I am not sure exactly about these 3; you have to read the specs. Everything is clearly understandable, not like ETSI.
    4) something using ISO 7816-4 commands on top of ISO 14443 A or B or others. You have SELECT, READ BINARY, UPDATE BINARY. You can implement that using Java Card; I did it and it works. You need two binary files, which can be hardcoded.
    Regards
    Sebastien
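    To make point 4 concrete: for tags in that fourth category (NFC Forum Type 4), reading is done with plain ISO 7816-4 APDUs over ISO 14443-4. The byte values below follow the Type 4 Tag specification as I recall it (NDEF application AID D2760000850101, capability container file E103); verify them against the current NFC Forum document before relying on them:

    // ISO 7816-4 command APDUs for reading an NFC Forum Type 4 tag.
    #include <cstdint>
    #include <vector>

    // SELECT the NDEF tag application by its AID.
    const std::vector<uint8_t> kSelectNdefApp = {
        0x00, 0xA4, 0x04, 0x00,                    // CLA INS P1 P2: SELECT by name
        0x07,                                      // Lc: AID length
        0xD2, 0x76, 0x00, 0x00, 0x85, 0x01, 0x01,  // AID
        0x00                                       // Le
    };

    // SELECT the Capability Container (CC) file, file ID E103.
    const std::vector<uint8_t> kSelectCcFile = {
        0x00, 0xA4, 0x00, 0x0C, 0x02, 0xE1, 0x03
    };

    // READ BINARY: 15 bytes of the CC, starting at offset 0.
    const std::vector<uint8_t> kReadCc = { 0x00, 0xB0, 0x00, 0x00, 0x0F };

    // A reader transceives these in order, parses the CC to learn the NDEF
    // file ID and size, then SELECTs that file and READ BINARYs its content.
    int main() { return 0; }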

  • Need information regarding the Solaris kernel delayed write operation

    Hello Friends,
    I need information regarding the delayed write operation in Solaris.
    I would be thankful if anyone could give me some details.
    Thanks in advance

    1) When you post code, please use[code] and [/code] tags as described in Formatting tips on the message entry page. It makes it much easier to read.
    2) Don't just post a huge pile of code and ask, "How do I make this work?" Ask a specific question, and post just enough code to demonstrate the problem you're having.
    3) Don't just write a huge pile of code and then test it. Write a tiny piece, test it. Then write the piece that will work with or use the first piece. Test that by itself--without the first piece. Then put the two together and test that. Only move on to the next step after the current step produces the correct results. Continue this process until you have a complete, working program.

  • Slow read and write operations on DAQmx

    I am trying to build a feedback control system using PCI-6052E and PCI-6722 cards, so that the computation of the control algorithm is performed on the computer's CPU. I am trying to reach a sampling rate of 1 kHz. It turns out that the bottleneck of my system is the read and write operations from and to the cards, which consume a lot of processor time.
    Example code (C#) that shows how the reads and writes are implemented is attached. In my tests the example code produces a read time of 7.58 s for 1000 samples on 6 channels, and a write time of 4.69 s. Is there any way to improve the performance?
    The program is running on Windows XP on a 1000 MHz processor.
    Attachments:
    DAQmxPerformanceTest.cs ‏3 KB

    Petteri,
    I don't have the hardware to reproduce this, but I have a few ideas. For analog output, are you creating a task, starting it, and calling write repeatedly, or are you simply calling write? While an AO task will auto-start on write, it will also go through the process of stopping when the write is complete, which means that the next time you call write, the task will need to start again. It will be much more efficient to explicitly call start on the task once, perform as many writes as required, and stop/clear the task when you are done. This same principle applies to your analog input reads as well.
    I hope this helps,
    Dan
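    Dan's advice in code form, shown with the NI-DAQmx C API rather than the poster's C# (the .NET equivalent is Task.Start()/Stop()); the device/channel name and sample values are examples:

    // Start the AO task once, then write repeatedly with autoStart disabled,
    // so DAQmx does not stop and restart the task around every write.
    #include <NIDAQmx.h>
    #include <cstdio>

    int main()
    {
        TaskHandle task = 0;
        float64 sample = 0.0;
        int32 written = 0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAOVoltageChan(task, "Dev1/ao0", "", -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);
        DAQmxStartTask(task);                  // start ONCE, outside the loop

        for (int i = 0; i < 1000; ++i) {
            sample = (i % 2) ? 1.0 : -1.0;
            // autoStart = 0: the task is already running, so this is a bare write.
            int32 err = DAQmxWriteAnalogF64(task, 1, 0, 10.0,
                                            DAQmx_Val_GroupByChannel,
                                            &sample, &written, NULL);
            if (err < 0) { std::printf("write failed: %d\n", (int)err); break; }
        }

        DAQmxStopTask(task);                   // stop/clear once, at the end
        DAQmxClearTask(task);
        return 0;
    }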

  • Trace Stack Error - Multiple step operation generated errors

    When importing a mapping table into Financial Data Management, I received error code 2147217887, "multiple-step operation generated errors. Check each status value." I was exporting a mapping table from our development application into our production application, so I am not sure what caused this error message. Any ideas would be greatly appreciated. Thank you.

    I solved this problem.
    Oracle Provider for OLE DB 9.x supports Unicode, but I didn't handle Unicode (DBTYPE_WSTR).
    I added Unicode-handling code to my source, and then there was no error.
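    For anyone hitting the same error: DBTYPE_WSTR buffers are UTF-16, so an ANSI-built consumer has to convert them explicitly. A Win32 sketch of just the conversion step; the surrounding OLE DB accessor/rowset code, and where the wide buffer comes from, are assumed:

    // Convert a DBTYPE_WSTR (UTF-16) column buffer to a narrow UTF-8 string.
    #include <windows.h>
    #include <string>

    std::string fromWide(const wchar_t* wide)
    {
        if (!wide || !*wide) return std::string();
        // First call with a null buffer returns the required size in bytes,
        // including the terminator (because we pass -1 for the input length).
        int bytes = WideCharToMultiByte(CP_UTF8, 0, wide, -1, NULL, 0, NULL, NULL);
        if (bytes <= 0) return std::string();
        std::string out(static_cast<size_t>(bytes), '\0');
        WideCharToMultiByte(CP_UTF8, 0, wide, -1, &out[0], bytes, NULL, NULL);
        out.resize(static_cast<size_t>(bytes) - 1);  // drop the embedded terminator
        return out;
    }

    int main()
    {
        std::string s = fromWide(L"DBTYPE_WSTR sample");
        return s.empty() ? 1 : 0;
    }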

  • Can I run multiple OS X operating systems on my MacBook Pro?

    Hello, I was wondering if I can run multiple OS X operating systems on my MacBook Pro? I currently have the latest version of Lion, but many of my applications require Snow Leopard to operate... thank you for any help.

    The easiest solution is to partition your drive to provide a second partition on which you can install Snow Leopard. It need not be a large partition because you can still access applications on the other partition.
    To resize the drive do the following:
    1. Restart the computer and after the chime press and hold down the COMMAND and R keys until the menu screen appears. Alternatively, restart the computer and after the chime press and hold down the OPTION key until the boot manager screen appears. Select the Recovery HD and click on the downward pointing arrow button.
    After the main menu appears select Disk Utility and click on the Continue button. Select the hard drive's main entry then click on the Partition tab in the DU main window.
    2. You should see the graphical sizing window showing the existing partitions. A portion may appear as a blue rectangle representing the used space on a partition.
    3. In the lower right corner of the sizing rectangle for each partition is a resizing gadget. Select it with the mouse and move the bottom of the rectangle upwards until you have reduced the existing partition enough to create the desired new volume's size. The space below the resized partition will appear gray. Click on the Apply button and wait until the process has completed.  (Note: You can only make a partition smaller in order to create new free space.)
    4. Click on the [+] button below the sizing window to add a new partition in the gray space you freed up. Give the new volume a name, if you wish, then click on the Apply button. Wait until the process has completed.
    You should now have a new volume on the drive.
    It would be wise to have a backup of your current system as resizing is not necessarily free of risk for data loss.  Your drive must have sufficient contiguous free space for this process to work.

  • File read and write operations

    How do I use file read and write operations?
    Can anyone give a simple program?

    http://www.tutorialspoint.com/cplusplus/cpp_files_streams.htm
    http://www.cplusplus.com/doc/tutorial/files/
    Check these.
    And with MFC:
    http://www.functionx.com/visualc/fileprocessing/serialization.htm
    https://msdn.microsoft.com/en-us/library/6337eske.aspx
    http://www.informit.com/library/content.aspx?b=Visual_C_PlusPlus&seqNum=90
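    Since the question asks for a simple program, here is a minimal standard C++ (non-MFC) example that writes a file and reads it back:

    // Minimal file write + read using <fstream>.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main()
    {
        // Write two lines to a text file.
        std::ofstream out("example.txt");
        if (!out) { std::cerr << "cannot open file for writing\n"; return 1; }
        out << "Hello, file!\n" << "Second line.\n";
        out.close();

        // Read the file back line by line and echo it.
        std::ifstream in("example.txt");
        if (!in) { std::cerr << "cannot open file for reading\n"; return 1; }
        std::string line;
        while (std::getline(in, line))
            std::cout << line << '\n';
        return 0;
    }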
