Slow UDP write

Hi everyone!
I built an application that reads data from sensors using a DAQ and sends it via UDP to a Java application.
The problem is that LabVIEW sends only 1 or 2 datagrams/s (I can see this on the "String 2" indicator), while the DAQ sample rate is 1 kS/s. Why do I lose so much data?
I attach a sketch of my program.
(The Java program works well... I tested it with another application.)
Thanks,
Veronica
Attachments:
Untitled.png 26 KB
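For anyone who wants to cross-check the rate on the Java side rather than relying on a front-panel indicator, a minimal receiver that counts datagrams per second could look like the sketch below; the port number 5000 and the buffer size are assumptions, not values from the post.
// Hypothetical check of the incoming datagram rate on the Java side.
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpRateCheck {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5000)) {  // port is an assumption
            byte[] buffer = new byte[1500];
            long windowStart = System.currentTimeMillis();
            int count = 0;
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);           // blocks until a datagram arrives
                count++;
                long now = System.currentTimeMillis();
                if (now - windowStart >= 1000) {  // report once per second
                    System.out.println(count + " datagrams/s");
                    count = 0;
                    windowStart = now;
                }
            }
        }
    }
}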

pincpanter wrote:
I agree with Mark that Express VIs should not be used; however, I don't see any race condition in the code.
You are correct, there isn't a race condition. I didn't look at the names closely enough to notice it was String and String 2. However, I would avoid the use of local variables and use wires instead. Take some time to learn how dataflow works and use it in your programs. Leverage the power of LabVIEW; don't use bad practices.
Mark Yedinak
"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot

Similar Messages

  • Cluster wired to UDP Write Crashed LabVIEW

    In the Windows version of LabVIEW 6.0, I am allowed to wire a cluster's output directly to the Data In of a UDP Write node. However, when it is executed, LabVIEW crashes. The Linux version 6.1 doesn't allow that connection. Is this a change between 6.0 and 6.1 or a bug in the Windows version?

    No, it never behaved correctly. I am very inexperienced with LabVIEW. I was practicing building a front panel on a Windows 2000 machine with the 6.0 eval. I wired the UDP Write as I described and tried to run it. Each time I clicked the button that activated the UDP "code", LabVIEW crashed. I copied the .vi to our Linux box with LabVIEW 6.1 Full Version and, when I brought it up, the wire between the cluster and the UDP Write was dashed (broken). I then changed the diagram so that all of the items in the cluster were converted to U8 using byte-swapping and word-splitting, converted that to a binary array, converted the array to a string, and sent the string to the UDP Write; that cured the problem. I wanted the inputs into the cluster to have the exact data structure defined in the C header, for readability.
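    The same flatten-to-bytes idea can be sketched outside LabVIEW too. Below is a rough, hypothetical Java analogue that packs a few fields into a big-endian byte buffer and hands the bytes to a UDP socket; the field layout, destination host, and port are made up for illustration and are not from the original post.
    // Rough analogue of flattening a cluster to a byte string before UDP Write.
    // Field layout, destination host and port are hypothetical.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class ClusterSend {
        public static void main(String[] args) throws Exception {
            // 2 + 4 + 8 = 14 bytes, packed big-endian like a flattened LabVIEW cluster
            ByteBuffer buf = ByteBuffer.allocate(14).order(ByteOrder.BIG_ENDIAN);
            buf.putShort((short) 1);   // e.g. a 16-bit status word
            buf.putInt(42);            // e.g. a 32-bit counter
            buf.putDouble(3.14);       // e.g. a 64-bit reading
            byte[] payload = buf.array();
            try (DatagramSocket socket = new DatagramSocket()) {
                InetAddress dest = InetAddress.getByName("192.168.0.10");
                socket.send(new DatagramPacket(payload, payload.length, dest, 6000));
            }
        }
    }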

  • Hello, I have a problem with my MacBook Pro: it has become too slow to write or do anything and the fan is always on and never stops. What can I do?

    Hello, I have a problem with my MacBook Pro: it has become too slow to write or do anything, and the fan is always on and never stops. What can I do?
    I have already reinstalled the software many times and it is still the same. I don't know what is happening. Is there somebody who can help me, please? I will be waiting.

    There are any number of possibilities... you could have problems with runaway applications causing your machine to overheat and setting the fans off, you could have bad fan sensors, etc.
    Open Activity Monitor (Applications>Utilities) and see if you have any apps that may be hogging the CPU. Call back.
    Clinton

  • UDP Write

    I am trying to send flattened image data through UDP over a wireless network, in datagrams of around 8000 bytes. After sending around 50 frames, the UDP Write gets stuck. It doesn't return an error and it doesn't even time out (I have a 10 s timeout).
    Is it because of the size of the data?

    Hi aman,
    I was actually looking for how much of the processor LabVIEW is using, not how much memory. If you take a look at the image below, I need the number highlighted. I need this information both while the VI is locked up and while your computer is idle. This will help me know whether the VI is completely using your computer's resources or whether it has perhaps stopped executing. It would also be good to know what version of LabVIEW you have and what you are trying to communicate with. Is the communication going over a network or to another computer through a crossover cable? How reliable is this issue? In other words, does it happen every time you run it, or will it sometimes work and sometimes not?
    Adam H
    National Instruments
    Applications Engineer
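    On the original question about datagram size: 8000-byte payloads are well above a typical 1500-byte Ethernet MTU, so each datagram gets fragmented at the IP layer, and on a lossy wireless link that is worth ruling out. A minimal sketch of splitting a frame into sub-MTU chunks before sending, in Java rather than LabVIEW, might look like this (the chunk size, address, and port are assumptions, not values from the thread):
    // Sketch: split one large frame into sub-MTU datagrams before sending.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.util.Arrays;

    public class ChunkedSend {
        static final int CHUNK = 1400;  // stay below a typical 1500-byte MTU

        public static void main(String[] args) throws Exception {
            byte[] frame = new byte[8000];  // stand-in for one flattened image frame
            InetAddress dest = InetAddress.getByName("192.168.0.20");
            try (DatagramSocket socket = new DatagramSocket()) {
                for (int off = 0; off < frame.length; off += CHUNK) {
                    int end = Math.min(off + CHUNK, frame.length);
                    byte[] part = Arrays.copyOfRange(frame, off, end);
                    socket.send(new DatagramPacket(part, part.length, dest, 7000));
                }
            }
        }
    }
    A receiver would of course need to reassemble the chunks, e.g. by prefixing each chunk with a frame number and chunk index.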

  • Internal hard disk really slow to write new files

    Hi
    I've had some problems with my mac lately;
    it was running really slowly so I ran TechTool Deluxe and it detected a bad surface sector.
    I wasn't sure how to fix this so I copied everything from my internal hard disk onto 2 external hard disks.
    Then I deleted all documents, files and most apps from the internal hard drive and zeroed out all free space using disk utility. (Which took four days).
    Finally I did a re-install of leopard and ran tech tool deluxe again, which came back with no problems.
    However, now that I'm trying to copy all the files back onto the internal hard disk it's taking a very long time, e.g. iTunes files totaling 31.41 GB would take 14 hours.
    Moving the same iTunes files to the other external hard disk takes 25 minutes.
    Any help would be appreciated

    Then I deleted all documents, files and most apps from the internal hard drive and zeroed out all free space using disk utility. (Which took four days).
    It should not take four days to zero out free space, and it should not take 14 hours to write 30 GB of data from a directly connected external drive. There is probably a problem with the part of the drive that was not deleted and zeroed, i.e. the system installation. Or the problem could be in the disk directory structure.
    The only way to fix it is to erase the entire drive. You should not need to zero out data, although doing so would do no harm and should take no longer than a few hours (depends on size of drive). Then reinstall a fresh copy of Mac OS X. After going through the updates and installing your apps, restore your personal data from those external drives.
    If the drive is still acting slow, you may have a hardware problem with the drive.

  • Oracle 10G Slow Disk Writes

    We have recently purchased a Sun SPARC Enterprise T5220 with 2 x 146 GB 10K RPM SAS and 2 x 72.8 GB 10K RPM SAS disks. I have installed Oracle 10g and created a database. The TEMP tablespace, REDO logs, and data files all reside on dedicated disks. I have mounted the TEMP, REDO, and datafile filesystems with forcedirectio and set the Oracle parameter FILESYSTEMIO_OPTIONS to "SETALL". Writing to disk seems very slow; most of the top waits on the database are for "log file parallel write", "db file parallel write", and "direct path write temp".
    Can anyone advise on what I might have done wrong?

    Why do direct I/O on a local disk? Direct I/O should be used when you have a disk array with cache on it, to avoid double caching (i.e. one hit in the system cache and then another in the array's cache).

  • Formatted on XP - EXTREMELY SLOW read/write on the Mac!!!!

    Man, this iPod is wack!! I'm using the iPod mini in Disk Mode to transfer bits (updaters, since my XP machine is NEVER connected to the net) back and forth between my PowerBook and my XP Intel-based machine. Obviously I set up and formatted the iPod mini on the XP machine.
    Observation: file read/write times are unacceptably SLOW on both the Mac and the PC. Did Apple deliberately make this thing unusable in this situation? It transfers as if it were connected to a USB 1.1 port, but it is NOT, on any of my machines. I format my USB flash drive (SanDisk Cruzer Titanium) on the same XP machine, use the same ports, and its read/write times are at least 5 times faster than the mini!!!
    When I format and set up the iPod mini on the Mac and then install MacDrive on the PC, the read/write times astonishingly increase significantly on BOTH the Mac and the PC. As I suggested, I think there is something drastically wrong with Apple's PC iPod setup; there is NO way my other devices formatted on the PC have such abysmal performance. Man, I click on a folder of the PC-formatted iPod in the Finder and it takes about 3 seconds until the files inside are displayed... this is also the case on the PC.
    Anyone else experienced this?
    I was going to buy another iPod specifically for transferring data between my Macs and my colleagues' PCs, but there is no way I will with such severely degraded read/write performance.
    Best

    Apparently this is not a single isolated case.
    Scott Aronian posts:
    "On Mac, FAT32 Format Slower Than HFS"
    My 60GB iPod came in Windows FAT32 format, which I liked since I could use it in disk mode on both Mac & Windows. However, it seemed slow to start playing video and when switching between video files.
    I did some timing tests, and on the Mac FAT32 was much slower than HFS.
    FAT32
    Start playing movie = 12s
    Stop movie and restart at same location = between 12-30s
    HFS
    Start playing movie = 5s
    Stop movie and restart at same location = about 6s
    If you have a Mac and your new 5G iPod is formatted for Windows, you may want to use the iPod Updater to restore it and make the drive HFS.
    There are NO such performance problems with USB flash drives formatted to FAT32 on PCs; I have tested many and they perform similarly on BOTH platforms.
    Something fishy going on here.

  • WRT54GS slow UDP multicast speed

    Hi everyone. A techie question for you! I am having problems with UDP multicast/broadcast. My PC is wirelessly connected to our Linksys WRT54GS router. I have a small application that broadcasts data over UDP. When I send data using unicast (to a specific IP address) the router is reasonably fast, over 2 Mbps. But when I send data using multicast or broadcast, it is really slow, about 100 kbps. When I connect to the router with an Ethernet cable, multicast runs very fast. When I connect to another router (a D-Link), multicast also runs very fast. I have the "Filter Multicast" option unticked on the firewall page. Any ideas about why it is running so slow? Thanks! Matthew.

    Hi,
    I'm also having trouble with multicast traffic on a WRT54G2.
    My Linux multicast router (igmpproxy) is connected to the internet on one side and to the local network on the other side. On the LAN side, there is a WRT54G2 access point / switch.
    If I watch IPTV on the LAN, the WLAN becomes extremely slow, to the point of being unusable.
    Is there a solution to this problem?
    Similar problems:
    http://forums.linksysbycisco.com/linksys/board/message?board.id=Wireless_Routers&message.id=118765
    http://forums.linksysbycisco.com/linksys/board/message?board.id=Wireless_Routers&message.id=73514
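    For anyone trying to reproduce this, the throughput comparison doesn't need much code. A minimal Java sender that can be pointed at either a unicast address or a multicast group (the group 239.1.1.1, port 9000, and packet size here are arbitrary choices, not from the thread) might look like this:
    // Minimal sender for comparing unicast vs multicast UDP throughput.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class McastTestSend {
        public static void main(String[] args) throws Exception {
            // Pass a unicast IP on the command line to compare against the multicast default.
            InetAddress dest = InetAddress.getByName(args.length > 0 ? args[0] : "239.1.1.1");
            byte[] payload = new byte[1400];
            try (DatagramSocket socket = new DatagramSocket()) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 1000; i++) {
                    socket.send(new DatagramPacket(payload, payload.length, dest, 9000));
                }
                long ms = System.currentTimeMillis() - start;
                System.out.println("sent 1000 x 1400-byte datagrams in " + ms + " ms");
            }
        }
    }
    Note that many 802.11 routers transmit multicast/broadcast frames at their lowest basic rate, which is one common explanation for figures like the ~100 kbps seen above.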

  • Slow ssd write speed on late 2012 mac mini i5

    Hi everyone. I just upgraded my late 2012 i5 Mac mini to a 120 GB SanDisk Extreme SSD, but the most I can get out of it is a 150 MB/s write speed. Read speed is around 430 MB/s. I have TRIM support enabled on it. Any ideas what could be wrong? Did I miss something? Why is the write speed so slow? Could this be caused by the old HDD backup? Will a fresh install of 10.8.2 do the job?
    Thanks

    mantasxxi wrote:
    When I did the OS install to my SSD I got the recovery partition automatically. I have downloaded Recovery Assistant from Apple; I think it does the same thing as the recovery partition on the HDD.
    Correct. The steps I suggested keep the Recovery HD on the Fusion drive. When you run the Terminal commands, the partitions selected to create the Fusion drive get erased, so you'll have to reinstall OS X, and if it's reinstalled using an external Recovery HD you won't get the Recovery HD on the Fusion drive. I reviewed this link from Ars Technica and one of the screenshots toward the end shows the Recovery HD at the end of the HDD, whereas the SSD just shows a Boot OS X partition.
    When you use the whole disks, you'll get a result like this link and the person is asking how to get the Recovery HD back on.
    If you want to continue using Command-R instead of booting to an external Recovery HD, you'll want to follow the steps I outlined. If this doesn't matter to you then you can boot from an external Recovery HD.

  • Slow disk writes VM 3.1

    I know I must have something configured wrong but I can't figure it out.
    I have tested disk write access on my VM servers in V2.2 and V3.1, using these commands:
    hdparm -tT /dev/mapper/<lun>
    and
    dd if=/dev/zero of=output.img bs=8k count=256k
    the output is very close between the V2.2 and V3.1 servers, so that's good.
    But when I execute those commands on the actual VMs themselves within the 2.2 and 3.1 environments, I get different results.
    The hdparm results are still equivalent between the two.
    But the dd command shows my old V2.2 VMs getting around 650 MB/s while my new V3.1 VMs are getting between 8 and 30 MB/s. Does anyone have an idea off the top of their head why this would be?
    Oh, I just tried one more thing... most of my V3.1 VMs were converted from V2.2 via template import. I just tried the same commands on a couple of new VMs I created in V3.1 (from OL6 64-bit templates) and they returned 360 MB/s and 750 MB/s. So I wonder what's wrong with the VMs that I converted from VM 2.2?

    Hi there,
    We're having exactly the same problem as you appear to have experienced: very slow write I/O in guest VMs (domU) but fast I/O on the host on the same iSCSI file system.
    Write I/O inside the guest is between 3 and 20 MB/s (dd bs=2048k count=512), whereas on the host it's 95 MB/s, which is hitting the practical limits of our GigE iSCSI SAN.
    I've checked inside the guests and can see no sign in dmesg of write caches being disabled. Curiously, we hit this same wall on both a RHEL 6.3 and a Solaris 10 U10 guest.
    I've tried both sparse and non-sparse files and the performance is the same. Read performance is fine, but this write bottleneck is a showstopper. I would appreciate any assistance you might have while I await a response from Oracle.
    Setup:
    - Oracle VM 3.1.1 update 485
    - Sun Fire X4150 Server
    - Sun Storagetek 2510 iSCSI SAN

  • UDP write causes generic error

    Hi,
    I have a test system made up of a cFP-2210 and some modules. The test system is stand-alone but it should also be controllable from a PC. The test system should send a short UDP message to all 139.0.0.* addresses (port 12343) when it has loaded a sequence, but writing to the 139.0.0.255 address causes error 42. If I change it to write directly to my PC's Ethernet card (IP 139.0.0.3), sending works fine (see the sender picture). Does anyone have any idea why? Is there some fixed limit on how many addresses the cFP can write to at the same time?
    Or could it cause problems that the cFP's own IP is 139.0.0.30?
    My firewall does not show any traffic on the ports I have used, even when the UDP write succeeds. Other ports do show a lot of activity (mainly ports 5000 and 6000). And there are no firewalls blocking traffic.
    This is the receiver on the PC,
    and this is the sender on the cFP when it is working. When I change the IP to 139.0.0.255, it does not work.
    Thanks for replies,
    Mika

    And I forgot to say that I am using LabVIEW RT 8.5.1.
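    For testing the same broadcast from the PC end, note that most socket APIs require the broadcast option to be enabled before sending to an x.x.x.255 address. A hedged Java sketch follows (the 139.0.0.255 address and port 12343 come from the post above; the message text is invented):
    // Test sender that broadcasts a short message, roughly mirroring the cFP's role.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class BroadcastTest {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);  // enable SO_BROADCAST explicitly before sending to x.x.x.255
                byte[] msg = "SEQUENCE_LOADED".getBytes(StandardCharsets.US_ASCII);
                InetAddress bcast = InetAddress.getByName("139.0.0.255");
                socket.send(new DatagramPacket(msg, msg.length, bcast, 12343));
            }
        }
    }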

  • Fast Mac write speeds, slow XP write speeds

    Just got a new 1 TB TC and it works great on both my Macs and my one XP gaming rig, except for one flaw: the XP box (with Gigabit Ethernet) is painfully slow writing to the TC disk. A 40 MB file took more than five minutes to write from XP, while a 700 MB file took under a minute on the Mac side. I thought it might have something to do with the file system setup on the TC, but when I look at the disk properties in Explorer it shows up as a FAT32 drive. I know almost nothing about networking Windows machines, so I'm forced to think it has something to do with that. Can anyone point me in the right direction?

    Same problem, except add slow read times on XP as well.
    I had hoped to use the TC as a media store for a combination of XP, Leopard, and Xbox devices.
    Happy with how the backups are working, but very disappointed that the TC doesn't seem to be able to serve 3 MB music files to XP in under a couple of minutes.
    Seems like a bug and is ridiculous. Apple, a quick firmware fix please!!!

  • How can I fix slow AFP writes to our Xserve RAID?

    Hi,
    This problem has been posted before but I couldn't find a solution. We have an Intel Xserve connected via Fibre Channel to a 14-disk, 10.5 TB Xserve RAID. It's set up as two RAID 5 volumes soft-RAIDed into one volume (RAID 50).
    There are multiple AFP shares on the RAID being served by the Xserve. The problem is that writes over AFP randomly go really slowly. Sampling using fs_usage shows a load of RdMeta operations from the AppleFileServer process. I think it's the same issue that was posted here:
    http://discussions.apple.com/message.jspa?messageID=8153758#8153758
    We've tried everything mentioned in that post - with no success. Has anyone managed to solve this problem? Many thanks,
    Jamie

    It might be an HFS+ filesystem issue, because I'm seeing this on a simple striped RAID in a Power Mac. I was writing a script to look for system files starting with 32 zero bytes:
    find /System/Library/ -type f -exec echo -n "{}:" \; -exec xxd -l 32 -c 64 -p {} \; | grep ':0000000000000000000000000000000000000000000000000000000000000000' | cut -f 1 -d':'
    Sometimes it appeared to halt for a long time, and fs_usage showed RdMeta or WrMeta blasting along at ~100 MB/s. After a few more runs it no longer has this problem. I wonder if there's a tuning bug in HFS hot file management or defragmentation.

  • I'm using a 1D array with 27 different elements and would like to send that data over the UDP Write and UDP Read functions.

    I'm using a 1D array with 27 different elements. I would like to transfer that data over a UDP connection and read it back using UDP functions.
    But I would like to read 3 elements at a time (on the read side) and send them to 9 different ports.
    (Note: the data will go to only one PC with one network address.)
    * Take the 1st elements (0, 1, 2) and send them to port #XXX so those first 3 elements can be seen.
    * Continue until all 27 elements have been sent.
    This is what I have done, but I'm finding myself in pitfalls...
    Send side:
    I'm using a UDP Open on the send side to send my data. With a selected source port, I have created a UDP Open connection. I'm using only one source port to send all the data across the channel. I would like to read the first 3 elements and send that data across with an assigned destination port. I would like to do that 9 times because there are 27 elements in the array, so I'm using a For Loop with N set to 9. I'm not getting any errors when I execute, but there is no answer on the other side at all.
    Read side:
    I'm using a UDP Open to read in the data with a port #. I'm using a While Loop with a Boolean inside set to true all the time. Inside that While Loop, I'm using a For Loop to read 3 elements of data at a time and send them to the right port address. (To start out I was just trying to see if it works for only one port.)
    Attachments:
    UDP_SEND_1.vi 40 KB
    UDP_READ_1.vi 31 KB

    You are not getting any errors because UDP is a connectionless protocol. It does not care if anyone receives it. Your example will work fine with the following considerations.
    (1) Don't use the generic broadcast address (255.255.255.255).
    (2) You are listening on port 30000, so why are you sending to port 1502? Nobody will receive anything there.
    The destination port of the outgoing connection must match the server port of the listener. (The source port is irrelevant; set it to zero and the system will use a free ephemeral port.)
    (3) On the receiving side, you are not indexing on the received string, thus you only retain the last received element. Then you place the indicator outside the while loop where it never gets updated. :-(
    (4) Do yourself a favor and don't micromanage how the data is sent across. Just take the entire array, flatten it to string, send it across, receive it, unflatten back to array, and get on with the task.
    (You can do the same with any kind of data).
    I have modified your VI with some reasonable default values (destination IP = 127.0.0.1 (localhost)), thus you can run both on the same PC for testing "as is". Just run the "read" first, then every time you run "send", new data will be received.
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    UDP_READ_1MOD.vi 29 KB
    UDP_SEND_1MOD.vi 27 KB
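    Point (4) in the reply translates directly to other languages as well. A hedged Java sketch of "flatten the whole array, send it, unflatten it" is shown below; the element type (double), the loopback address, and the reuse of port 30000 from the thread are assumptions made only for illustration.
    // Send a whole numeric array in one datagram and unpack it on the receiving side.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;

    public class ArrayOverUdp {
        public static void main(String[] args) throws Exception {
            double[] data = new double[27];
            for (int i = 0; i < data.length; i++) data[i] = i;

            // "Flatten": 27 doubles -> 216 bytes
            ByteBuffer out = ByteBuffer.allocate(data.length * Double.BYTES);
            for (double d : data) out.putDouble(d);

            try (DatagramSocket rx = new DatagramSocket(30000);   // listener, like the read VI
                 DatagramSocket tx = new DatagramSocket()) {      // sender, ephemeral source port
                tx.send(new DatagramPacket(out.array(), out.array().length,
                        InetAddress.getByName("127.0.0.1"), 30000));

                byte[] buf = new byte[1024];
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                rx.receive(p);

                // "Unflatten": bytes -> doubles
                ByteBuffer in = ByteBuffer.wrap(p.getData(), 0, p.getLength());
                double[] back = new double[p.getLength() / Double.BYTES];
                for (int i = 0; i < back.length; i++) back[i] = in.getDouble();
                System.out.println("received " + back.length + " elements");
            }
        }
    }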

  • A way to solve hard drive with slow read write speed

    Just to share.
    I am currently upgrading my Mac Pro 2008 and replaced the 2 TB startup disk with a 120 GB OWC SSD. After the OS installation, I started to move my photo library to a bigger 2 TB hard drive in Bay 2.
    After that completed, I started to import the photos into Aperture 3, but it crashed and failed to complete. I had a feeling that something was wrong with the hard drive holding the picture files, so I tried to copy the disk to an external FireWire hard disk; the copy was very slow, showing an effective copy speed of 2 MB/s.
    Disk Utility and Repair Permissions did not help and reported that the disk was OK.
    I did a Google search, and a Mac user suggested using a piece of software called "TechTool Deluxe", so I thought I might as well give it a try.
    After a few tests, it did show an error and asked me whether I would like to repair it, so I clicked OK.
    It took a second to repair, and now the copy speed shows 50 MB/s.

    Sounds like trouble with the non-system drive; perhaps with the preference for where the Aperture library is stored?
    You have a couple of backups of your drives in case you want to just update changes and reformat.
    Is the old system drive and OS still around? You might want to pull that, to eliminate confusion and cross-linking of files or paths.
    With an SSD you want to clone and keep a system image for restore - the same as any other time, but even more so.
    I've read about, but never understood, the problems people have had with installing or cloning Mac OS to an SSD, yet they seem "real" nonetheless. Some problems with setting up Aperture were mentioned as well.
