UDP Write

I am trying to send flattened image data of around 8000 bytes per frame through UDP over a wireless network. After sending around 50 frames, the UDP Write gets stuck. It doesn't return an error and it doesn't even time out (I have a 10-second timeout).
Is it because of the size of data?

Hi aman,
I was actually looking for how much of the processor LabVIEW is using, not how much memory. If you take a look at the image below, I need the number highlighted. I need this information both while the VI is locked up and while your computer is idle. This will help me know whether the VI is completely using your computer's resources or whether it has stopped executing. It would also be good to know what version of LabVIEW you have and what you are trying to communicate with. Is the communication going over a network, or to another computer through a crossover cable? How reliable is this issue? In other words, does it happen every time you run it, or will it sometimes work and sometimes not?
Adam H
National Instruments
Applications Engineer
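
One possible cause for a stall like this on a wireless link is that an 8000-byte datagram exceeds the network MTU, so the IP layer fragments it; lost fragments on a flaky link can then back up the sender. A common workaround (a sketch of my own in Python, not taken from this thread, since the VIs here are graphical) is to split each frame into sub-MTU chunks with a small sequence header so the receiver can reassemble them:

```python
import struct

def chunk_frame(frame: bytes, frame_id: int, payload_size: int = 1400):
    """Split one flattened image frame into sub-MTU datagrams.

    Each chunk gets an 8-byte header (frame id, chunk index, chunk
    count, payload length, all unsigned 16-bit) so the receiver can
    reassemble frames and detect missing chunks.
    """
    parts = [frame[i:i + payload_size] for i in range(0, len(frame), payload_size)]
    total = len(parts)
    return [
        struct.pack("!HHHH", frame_id, idx, total, len(part)) + part
        for idx, part in enumerate(parts)
    ]

def reassemble(datagrams):
    """Rebuild a frame from its chunks (assumes all arrived, any order)."""
    parts = {}
    for dgram in datagrams:
        frame_id, idx, total, length = struct.unpack("!HHHH", dgram[:8])
        parts[idx] = dgram[8:8 + length]
    return b"".join(parts[i] for i in range(len(parts)))
```

The 1400-byte payload and the header layout are assumptions chosen to stay under a typical 1500-byte Ethernet/Wi-Fi MTU; each chunk would then go through a separate UDP write.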

Similar Messages

  • Cluster wired to UDP Write Crashed LabVIEW

    In the Windows version of LabVIEW 6.0, I am allowed to wire a cluster's output directly to the Data In of a UDP Write node. However, when it is executed, LabVIEW crashes. The Linux version 6.1 doesn't allow that connection. Is this a change between 6.0 and 6.1 or a bug in the Windows version?

    No, it never behaved correctly. I am very inexperienced with LabVIEW. I was practicing building a front panel on a Windows 2000 machine with the 6.0 eval. I wired the UDP Write as I described and tried to run it. Each time I clicked the button that activated the UDP "code", LabVIEW crashed. I copied the .VI to our Linux box with LabVIEW 6.1 Full Version and, when I brought it up, the line between the cluster and the UDP Write was dashed. I then changed the diagram so that all of the items in the cluster were converted to U8 using byte-swapping and word-splitting, converted them to a binary array, converted that to a string, and sent the string to the UDP Write, which cured the problem. I wanted the inputs into the cluster to have the exact data structure defined in the C header, for readability.

  • UDP write causes generic error

    Hi,
    I have a test system made out of a cFP-2210 and some modules. The test system is standalone, but it should also be controllable from a PC. The test system should send a short UDP message to all 139.0.0.* addresses (port 12343) when it has loaded a sequence, but writing to the 139.0.0.255 address causes error 42. If I change it to write directly to my PC's Ethernet card (IP 139.0.0.3), sending works fine (see sender picture). Anyone have any idea why? Is there some fixed limit on how many addresses the cFP can write to at the same time?
    Or can it cause problems that the cFP's own IP is 139.0.0.30?
    My firewall does not show any traffic on the ports I have used, even when the UDP write succeeds. Other ports do show a lot of activity (mainly ports 5000 and 6000). And there are no firewalls blocking traffic.
    This is the receiver on the PC, and this is the sender on the cFP when working. When I change the IP to 139.0.0.255, it does not work.
    Thanks for replies,
    Mika

    And I forgot to say that I am using LabVIEW RT 8.5.1.
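
Error 42 when writing to an x.x.x.255 address is consistent with a socket stack refusing a broadcast send unless broadcasting has been enabled on the socket. In BSD-socket terms that is the SO_BROADCAST option; whether the cFP's VxWorks stack applies the same rule, and whether LabVIEW RT exposes it, is an assumption on my part. A Python sketch of the idea:

```python
import socket

def open_broadcast_sender() -> socket.socket:
    """Open a UDP socket that is allowed to send to a broadcast
    address such as 139.0.0.255. Without SO_BROADCAST, most stacks
    reject the send with an 'access denied'-style error."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return sock

# usage sketch: open_broadcast_sender().sendto(b"loaded", ("139.0.0.255", 12343))
```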

  • I'm using a 1D array with 27 different elements and would like to send that data over the UDP Write and UDP Read functions.

    I'm using a 1D array with 27 different elements. I would like to transfer that data over a UDP connection and read it back using UDP connections.
    But I would like to read 3 elements at a time (on the read side) and send them to 9 different ports.
    Note: the data will go to only one PC with one network address.
    * Take the 1st elements (0, 1, 2) and send them to port #XXX so those first 3 elements can be seen.
    * Continue until all 27 elements have been sent.
    This is what I have done, but I'm finding myself in pitfalls...
    Send side:
    I'm using a UDP Open connection on the send side to send my data. With a selected source port, I have created a UDP Open connection. I'm using only one source port to send all the data across the channel. I would like to read the first 3 elements and send that data across with an assigned destination port. I would like to do that 9 times because there are 27 elements in the array, so I'm using a For Loop with N set to 9. I'm not getting any errors when I execute, but there is no answer on the other side at all.
    Read side:
    I'm using a UDP Open connection to read in the data with a port #. I'm using a While Loop with a Boolean inside set to true all the time. Inside that While Loop, I'm using a For Loop to read the 3 elements at a time and send them to the right port address. (As a start I was just trying to see if it works for only one port.)
    Attachments:
    UDP_SEND_1.vi ‏40 KB
    UDP_READ_1.vi ‏31 KB

    You are not getting any errors because UDP is a connectionless protocol. It does not care if anyone receives the data. Your example will work fine with the following considerations:
    (1) Don't use the generic broadcast address (255.255.255.255).
    (2) You are listening on port 30000, so why are you sending to port 1502? Nobody will receive anything there.
    The destination port of the outgoing connection must match the server port of the listener. (The source port is irrelevant; set it to zero and the system will use a free ephemeral port.)
    (3) On the receiving side, you are not indexing on the received string, thus you only retain the last received element. Then you place the indicator outside the while loop where it never gets updated. :-(
    (4) Do yourself a favor and don't micromanage how the data is sent across. Just take the entire array, flatten it to string, send it across, receive it, unflatten back to array, and get on with the task.
    (You can do the same with any kind of data).
    I have modified your VI with some reasonable default values (destination IP = 127.0.0.1 (localhost)), thus you can run both on the same PC for testing "as is". Just run the "read" first, then every time you run "send", new data will be received.
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    UDP_READ_1MOD.vi ‏29 KB
    UDP_SEND_1MOD.vi ‏27 KB
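
Point (4) above, sketched in a text language: flatten the whole array to one byte string, send it, and unflatten on the other side. The Python below uses struct with a simple element-count prefix; that prefix is my own convention for illustration, not LabVIEW's actual flattened-string format.

```python
import struct

def flatten(values, fmt_char="d"):
    """Flatten a numeric array into one byte string, prefixed with the
    element count, mirroring the flatten-to-string idea."""
    return struct.pack(f"!I{len(values)}{fmt_char}", len(values), *values)

def unflatten(payload, fmt_char="d"):
    """Recover the array: read the count, then that many elements."""
    (count,) = struct.unpack_from("!I", payload)
    return list(struct.unpack_from(f"!{count}{fmt_char}", payload, 4))
```

The whole 27-element array flattens to one small payload, which is one UDP write instead of nine.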

  • Slow UDP write

    Hi everyone!
    I built an application that reads data from sensors using a DAQ and sends it via UDP to a Java application.
    The problem is that LabVIEW sends only 1 or 2 datagrams/s (I saw this using the "String2" indicator), while the DAQ sample rate is 1 kS/s. Why do I lose so much data?
    I attach a sketch of my program.
    (The Java program works well... I tested it with another application.)
    Thanks,
    Veronica
    Attachments:
    Untitled.png ‏26 KB

    pincpanter wrote:
    I agree with Mark that Express vi's should not be used, however I don't see any race condition in the code.
    You are correct, there isn't a race condition. I didn't look at the names closely enough to notice it was String and String 2. However, I would avoid the use of local variables and use wires instead. Take some time to learn how dataflow works and use it in your programs. Leverage the power of LabVIEW; don't use bad practices.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
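
A common fix for a 1 kS/s acquisition feeding a 1-2 datagrams/s sender (a hedged sketch of my own, not the poster's code) is to batch: read a block of samples per loop iteration and send the whole block as one datagram, rather than one sample per send.

```python
import struct

SAMPLES_PER_DATAGRAM = 1000  # e.g. one second of 1 kS/s data per packet

def pack_block(samples):
    """Pack one acquisition block of floats into a single datagram
    payload (big-endian 32-bit floats)."""
    return struct.pack(f"!{len(samples)}f", *samples)

def unpack_block(payload):
    """Recover the sample block on the Java/receiver side."""
    n = len(payload) // 4
    return list(struct.unpack(f"!{n}f", payload))
```

One 1000-sample block is 4000 bytes, still well inside a single UDP payload, so one send per second carries everything the DAQ produced.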

  • Error 54 in udp write when sending to ip 127.0.0.1 !

    Hello friends!
    I have installed LabVIEW 2012 and my projects (which I wrote in LabVIEW 2010) have some incompatibility issues.
    One of them is error 54 (the network address is ill-formed, etc.) when trying to send data to myself, i.e. to IP 127.0.0.1.
    In LabVIEW 2010 it was OK; now it is an error.
    Can someone help?
    Solved!
    Go to Solution.

    This could be a network card driver problem rather than a LabVIEW one. If you wire an IP address to the Open function, you tell the socket to route all traffic through that network interface. Technically, 127.0.0.1 is the loopback device, which is a virtual network card in itself, so depending on how the network socket stack is configured to route the traffic, the order to route it through a specific physical network device may prevent it from ever reaching the local loopback device.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • UDP Parallel Read/Write

    I have an application where I am trying to receive and transmit data using UDP on the same port.  Ideally, I'd like to have this functionality separated out into separate loops so that sending out data is not dependent on receiving data and vice versa.  The code I have written works fine on my RT system (cRIO VxWorks) and is able to execute these tasks in parallel, but fails on my Windows XP machine.  I have attached a highly simplified version to illustrate the issue.
    On the Windows machine, the UDP Read VI will terminate execution (even with a timeout of -1) and return error 66 if the UDP Write VI is called. On my RT system, the UDP Read VI will continue to wait until it receives data (as expected), even if the UDP Write VI is called.  Is this expected?  Is there any way to get the code to execute the same way on my Windows machine?
    Thanks,
    Damien
    PS. I've tried it on both LabVIEW 8.6.1 and 2011.
    Attachments:
    UDP Parallel Read Write.vi ‏14 KB

    Hey Guys,
    In case any of you are interested, or if anyone else is experiencing the same issue, I finally had an opportunity to dig into this issue a little deeper.  The issue is more basic than just trying to read and write data in parallel using the UDP primitives.  See the attached VI; run with highlight execution.
    The attached VI represents a client application which uses UDP Write to send a message to a server and UDP Read to read a response from that server (it is actually a slightly modified version of an NI example).  Now, let's say the server goes down and the client is still trying to communicate.  The client should still send out its message using UDP Write.  This operation does not return an error, since UDP is connectionless and doesn't care if anyone is listening.  The next function, UDP Read, should wait for the full timeout and then return error 56.  However, if you run the code, you will see that it does not wait for the full timeout but rather immediately returns error 66.  This is the cause of the bug.  I have confirmed that LabVIEW running on a real-time system (VxWorks) and on Mac OS does not exhibit this behaviour; both function properly, returning error 56 only after waiting for the full timeout period.  This bug is what causes my original code to error out even though a -1 is wired to the timeout of the UDP Read (it just happens to be in parallel with the UDP Write).  There is something wrong with the UDP Read function on Windows systems.  NI support has acknowledged this error on Windows systems and they are filing a corrective action request.  I'll post that number once I get it.
    (Just for fun, disable the write operation and watch the read function wait for the full timeout.)
    Attachments:
    UDP Client_2 0.vi ‏12 KB

  • Increase UDP sending size over 64k bytes and get error -113,sending buffer not enough

    Dear all,
    I have a case where I must send data over 64k bytes in a socket with UDP. I got error -113, which says "A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram was smaller than the datagram itself." I searched for this issue and the closest answer I found is below:
    http://digital.ni.com/public.nsf/allkb/D5AC7E8AE545322D8625730100604F2D?OpenDocument
    It says I have to change the buffer size with Wsock.dll. I used the same method to increase the send buffer to 131072 bytes by setting optionname to SO_SNDBUF (0x1001) and giving it the value 131072, and that worked fine without error. However, I still got error -113 while sending data with UDP Write.vi. Does UDP Write.vi reset the buffer size? Is there anything else that could cause the error?
    I attached example code. In UDP Sender.vi you can see I change the send buffer size to 131072 and send data that includes a 65536-byte block. There is also a UDP Receiver.vi, and there are some missing VIs which you can get from the link above, but they are not necessary.
    Attachments:
    UDP Sender.vi ‏14 KB
    UDP Receiver.vi ‏16 KB
    UDP_set_send_buffer.vi ‏16 KB

    The header for a UDP packet includes a 16-bit field that defines the size of the UDP message (header and data).
    16 bits limits you to a total size of 65535 bytes, minus the header sizes: a minimum of 20 bytes is required for an IP packet and 8 bytes for UDP, leaving an effective data payload of 65507 bytes.
    LabVIEW is not the issue...
    http://en.wikipedia.org/wiki/User_Datagram_Protocol#Packet_structure
    Now is the right time to use %^<%Y-%m-%dT%H:%M:%S%3uZ>T
    If you don't hate time zones, you're not a real programmer.
    "You are what you don't automate"
    Inplaceness is synonymous with insidiousness
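
The arithmetic from the reply above, spelled out as constants (no LabVIEW or OS call involved; this limit applies before SO_SNDBUF is ever consulted):

```python
# Maximum IPv4 UDP payload, from the header sizes quoted above.
MAX_IP_PACKET = 65535   # the IP total-length field is 16 bits
IP_HEADER = 20          # minimum IPv4 header
UDP_HEADER = 8          # fixed UDP header
MAX_UDP_PAYLOAD = MAX_IP_PACKET - IP_HEADER - UDP_HEADER  # 65507
```

Any single datagram payload above 65507 bytes is rejected by the stack with a "message too long"-style error regardless of how large the send buffer is; sending more data means splitting it across multiple datagrams.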

  • How does one send a file via UDP?

    I need to send a binary file via UDP. The file has 99 32-bit binary words of data. How do I input the data to the UDP Write command? I don't want to send the data sequentially; I would like to send all 99 words in a single UDP transmission. TIA.
    -jlivermore

    Something to be careful of is that while UDP is fast, it is not reliable. You can lose packets and there is no way for the sender or receiver to know a packet was lost. Because UDP doesn't have a "connection" like TCP does, the sender doesn't know if the data is going anywhere, and the receivers have no way of knowing if anything is being sent.
    Picture a radio station. A guy working the night shift hopes there is someone somewhere listening to his broadcast, but he doesn't know. Likewise, if you turn on a radio and hear nothing but static, you may be listening to a frequency where there are no stations broadcasting, or the station transmitter might be off the air, or there might be interference keeping you from hearing the signal, but again you don't know which.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps
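
Mechanically, 99 32-bit words is only 396 bytes, which fits comfortably in one datagram (the IPv4 limit is 65507 bytes of payload). A sketch of my own in Python: pack the words into a single byte string and hand that string to one UDP send.

```python
import struct

WORDS = 99
WORD_FMT = "!99I"  # 99 big-endian unsigned 32-bit words

def words_to_payload(words):
    """Pack 99 32-bit words into one 396-byte UDP payload."""
    if len(words) != WORDS:
        raise ValueError("expected exactly 99 words")
    return struct.pack(WORD_FMT, *words)

def payload_to_words(payload):
    """Recover the 99 words on the receiving side."""
    return list(struct.unpack(WORD_FMT, payload))
```

In LabVIEW terms this corresponds to reading the file as a byte/word array, flattening it to a string, and wiring that string to a single UDP Write.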

  • UDP with cRIO and error 113

    Hi all,
    I'm using the UDP protocol to exchange information between a PC and a cRIO; the first is the master, the second is the slave.
    On the cRIO I developed a piece of code to realize a loopback, so that the message sent from the PC is sent back to the PC exactly as it was received, without any modification.
    First, I tried sending a short message (100 B) from the PC to the cRIO; in this case, no problem at all.
    Then, I tried sending a long message (20 kB) from the PC to the cRIO; in the cRIO code, error 113 was raised when trying to send the message back to the PC (calling the UDP Write function).
    What could be the origin of the different behaviour between the PC and the cRIO? It seems to be a question of buffer size... Could it depend on the OS? How can I increase it on the cRIO?
    Thanks!
    aRCo

    Hi aRCo,
    According to Wikipedia (https://en.wikipedia.org/wiki/User_Datagram_Protocol), the theoretical maximum for a UDP datagram is an 8-byte header plus 65,527 bytes of data (65,535 bytes in total). However, "the practical limit for the data length which is imposed by the underlying IPv4 protocol is 65,507 bytes (65,535 − 8 byte UDP header − 20 byte IP header)". The wiki article goes on to say that larger packets are possible with jumbograms.
    As for the NI-documented limit of 8K, this would appear to be a LabVIEW-imposed limitation. (Perhaps this was related to an earlier limitation of the UDP protocol that has since been lifted. If I recall correctly, the Windows version of LabVIEW also had this 8K limitation in the past.)
    Perhaps NI relaxed this limit, or maybe you have jumbo packets enabled on your PC and this is allowing more throughput (of course, both of these possibilities are only speculation on my part).
    In any event, if you limit the UDP packet size to the documented 8K limitation, your code should probably be fine across all LV platforms. (How's that for stating the obvious?)
    Anyway, good luck with your application.
    -- Dave
    www.movimed.com - Custom Imaging Solutions

  • Question about UDP behavior

    I have a program that uses a UDP transmitter.
    I use UDP because I specifically do not care if anyone is listening.
    I use UDP OPEN with a specific port to get a CONN ID.
    I use UDP WRITE with the conn ID and a specific DESTINATION IP and DESTINATION PORT when transmitting.
    I use UDP CLOSE when the program quits.
    This works fine - I can receive the data in the PXI box just fine. 
    QUESTION:  If the receiver software is NOT running, does the UDP WRITE actually take up bandwidth on the net?
    I can imagine that it might be spewing data out, without regard to whether anyone is listening.
    I can also imagine that the receiver might post a signal saying "I'm listening", and the transmitter knows whether someone is listening or not, and doesn't transmit if no one is listening.
    So, which is correct?
    I'm trying to decide if I need to implement a transmitter on/off control. 
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

    When I last looked into UDP, it does indeed "wiggle the wire" going out of the back of your PC. What your switch/router does after that depends on its configuration.
    Note: I got out of low-level network stuff before switches were invented, so take my words with a grain of salt.
    I was directed to Process Monitor (the utility below) by NI Support when troubleshooting a nasty bug. Among many other things, it will show you UDP message traffic.
    Just trying to help,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
    Attachments:
    ProcessMonitor.zip ‏1248 KB
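
To make the replies above concrete: a UDP send goes out whether or not anyone is listening. A quick loopback check (my own sketch in Python, not a LabVIEW detail) shows the send completing with nobody bound to the destination port, which is why a transmitter on/off control is the only way to stop the traffic.

```python
import socket

def send_without_listener(payload: bytes, port: int = 49151) -> int:
    """Send a datagram to a loopback port that presumably nobody is
    listening on. sendto still reports the full payload as sent,
    because UDP has no notion of a connected receiver."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return sock.sendto(payload, ("127.0.0.1", port))
    finally:
        sock.close()
```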

  • Detecting UDP transmission status

    Well, I have to devise a UDP test program. A quick scan of the LV help and I had a couple of UDP VIs that send and receive. By putting the UDP Write VI and UDP Read VI into loops I have the basis of an effective link test. After a bit of tweaking to let me set block lengths, addresses, etc., all is well.
    To control everything I have a Wait VI inside the TX loop that waits for a number of ms set by a control on my front panel; the shortest wait I can get is 1 ms, that being the resolution of the PC clock. However, I am only using a fraction of the 1 Gb network (<1% according to Task Manager's Network status tab). Lengthening the block towards 65 kbytes ups this number but still gives a low % use, so clearly I need to hit the TX faster than once per ms. I tried writing a delay loop based on doing two random number generations and multiplying the results (10 of these on my machine is approx. 1 microsecond), but even doing 1 ms worth does not have the same effect as a 1 ms Wait. I added a 0 ms Wait (help says this relinquishes the hold the VI has on running), but that has no discernible effect: the TX VI reports UDP errors and dies.
    After a lot of experimenting I hit on the 'simple' expedient of sitting in a loop on the Write VI, only leaving when I get no error or an error other than 55 (error 55 being port busy). This takes the bandwidth up to over 90% on occasions, and >85% consistently. I did try limiting the number of retries, but even a limit of 300 was sometimes exceeded, while just hitting UDP Write until error 55 goes away always works (just). At the receive end the number of good blocks drops at high speed, but not badly, and that is what I would expect.
    So my ultimate question is: is there any way, other than the 1 ms Wait or sitting in the 'UDP Write not returning 55' loop, of sensing the status of a UDP port, or even waiting till the port is free to be written to?
    Secondly, leaving the link running in this high-speed state will eventually 'crash' (after 10-20 minutes). Bizarrely, the Task Manager shows the link still at >85%, but the RX end just times out. If I leave the RX PC alone and reboot the TX, then the timeouts stop and the RX is happy. This suggests that something in the TX (like the port address, etc.) is becoming corrupted when pumping out 65-kbyte blocks at 900 kB/sec, but then what do I expect, hammering the link at that speed? So my second question is: any thoughts on how to find where this issue occurs, and has anyone else seen this?

    I am attempting to establish the maximum reliable speed of a UDP link, teaching myself about UDP as I go. I seem only to be able to attach 3 files (I need 4) so will send the 4th as a separate message. Please be kind about my code; it is experimental. Two of the VIs send and receive a simple message while the other two put/check a checksum on the message (this could be removed with no great harm, as UDP adds its own checksum). To run 'full tilt', set 'ms delay' and 'microsecond delay' to 0 in the TX VI. Ignore the microsecond delay code; it was experimental. Set the block length parameters to big values to see high data rates (e.g. 65000, but set the max block length to read to 66000 before running the Read VI). The VIs will work with both running on the same machine (localhost in TX). You will have to work out your own network addresses to run over a real link.
    Attachments:
    UDP Receiver.vi ‏25 KB
    UDP Sender.vi ‏32 KB
    NMEA CHKSUM_SVI.vi ‏9 KB
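
The "retry until error 55 clears" expedient described above can be written as a small helper. The sketch below is in Python rather than LabVIEW, and the errno values it retries on (EAGAIN/EWOULDBLOCK/ENOBUFS) are my assumed BSD-socket equivalents of LabVIEW's error 55 (port busy):

```python
import errno

def send_with_retry(sock, payload, addr, max_tries=1000):
    """Retry a UDP send while the stack reports a transient
    busy/no-buffers condition, mirroring the 'loop until error 55
    goes away' approach. Returns the number of attempts it took."""
    for attempt in range(1, max_tries + 1):
        try:
            sock.sendto(payload, addr)
            return attempt
        except OSError as exc:
            if exc.errno not in (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOBUFS):
                raise  # a real error, not just a busy port
    raise TimeoutError(f"port stayed busy for {max_tries} tries")
```

Capping max_tries guards against the retry loop masking a genuinely wedged transmitter, which may be relevant to the 10-20 minute "crash" described above.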

  • UDP comms on windows 7 compatability

    I have a configuration where I am talking to a unit using UDP messaging. The messaging is a command-response setup whereby the CVI program sends a command and then waits for the response to be returned.
    The program outline that I have is as follows:
    Configure UDP write port
    Configure UDP read port
    Loop
    UDPWrite
    UDPRead
                Do stuff
    End loop
    Close read port
    Close write port
    End program.
    For completeness the code is split into a number of functions that reside in the same DLL that is called from Test Stand.
    The channel handle for the TX and RX UDP end points are stored in static Global variables.
    The code works fine most of the time, but when I have executed the loop a number of times (the number is random, sometimes a few hundred, sometimes a couple of thousand)
    I get an error message on either the UDP write or the UDP read function.
    The error is:
                -6810
    kUDP_ChannelClosed
    The channel has been disposed
    If I step around to the next UDP command,
    I get the following error: -6800
    kUDP_InvalidChannel
    I have tried to trap the error and reconfigure the UDP port, but then I get the error:
    -6811
    kUDP_PortInUse
    So I am unable to trap the error, reopen the port, and continue.
    My application requires the program to operate 24/ 7 so if such an error occurs I need to be able to recover.
    Test setup is as follows.
    Test stand 2012 version 5.0.0.252
    CVI 2012 Version 12.0.0.0 (422)
    Pc is running windows 7 pro 64 bit
    Pc is a Intel core i3 processor.
    Program was complied as a 32 bit application.
    And is running in debug mode
    I have transferred the same application to a Windows XP machine and it has continued to execute for over 12 hours without error, whereas the Windows 7 machine would not last 1 hour of operation without an error.

    First of all, try adding some delay after you open/close the ports. Maybe 500 ms could be enough.
    Since most of the time it is recommended to open the ports on request and close them after use, check that you are not making recurrent calls at the same time.
    It's basic, but you can start from there.
    There is "No C" in spanish.
    If you can think it, you can develop it.
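
For 24/7 recovery after a kUDP_ChannelClosed-style failure, one pattern (sketched here in Python; the CVI UDP Support Library calls differ, so treat this as an analogy) is to dispose of the dead handle and rebind the same local port with address reuse enabled, so the rebind is not refused while the old binding lingers, which is the BSD-socket analogue of the kUDP_PortInUse symptom:

```python
import socket

def reopen_udp_port(old_sock, local_port: int) -> socket.socket:
    """Dispose of a dead UDP socket and bind a fresh one to the same
    local port. SO_REUSEADDR is set before bind so the rebind is not
    rejected while the previous binding is still being torn down."""
    if old_sock is not None:
        try:
            old_sock.close()
        except OSError:
            pass  # the handle may already be dead; closing is best-effort
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", local_port))
    return sock
```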

  • I've got soap in my udp?

    I have a LabVIEW 2011 compiled application running on a private network, talking to a microcontroller.
    While using the LV UDP Write/Read functions (command, response), I occasionally get what looks like a SOAP services discovery message in our data stream. It can't possibly be from the microcontroller, so I can only conclude it's originating in the application or runtime itself. The application has no web services VIs or related items in the project. I do open and close the UDP port on every command/response.
    Example:
    Sent Command: "GET,MSG,1,:34bc9d"
    Received: "</wsa:MessageID></soap:Header><soap:Body><wsdrobe><wsd:Types>wsdpevice</wsd:Types></wsdrobe></soap:Body></soap:Envelope>1€
    76D1
    Ùãsvelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery" xmlns:wsdp=458515949:ata:4B-41-60-00-37-12-00-00
    ParsedMsgObj: 1 MsgId: 1458 Mask: 0 Len: 8 ptrData: 536871184 uTime: 3382654
    Flags->394
    :255406297"
    What should have been received: "Data:4B-41-60-00-37-12-00-00 ParsedMsgObj: 1 MsgId: 1458 Mask: 0 Len: 8 ptrData: 536871184 uTime: 3382654 Flags->394 :255406297"
    What's interesting is that the data I am looking for is there, at the end, but polluted.
    It feels to me like something in the default build profile is getting turned on. Is there something I can turn off at the build stage? This does not occur in the development environment.
    Environment:
    Win7 (only running the LV app and runtime; no 3rd-party software is installed, I have pruned Windows services to the minimum, there is no firewall, and I've accounted for all running threads)
    Private, non-routed network
    LabVIEW 2011 32-bit
    UDP port 824

    Wolfeman, 
    After conferring with my resources, I would like to ask whether it is possible that your buffer size is set to be much larger than the receive message you expect. If so, you may be unexpectedly receiving data from an unrelated web service in your receive message, which we may be able to filter out by using a smaller buffer. You may also use Wireshark, a packet analyzer, to capture a log file of the packets being sent/received by your system, which may help us locate the problem. As a last resort, you may also consider simply adding a filter inside your code to look for the particular SOAP framework messages being added to your receive message, if they appear to be the same every time. Please let me know how changing the buffer size affects the behavior of this VI and we can determine how to proceed from there. Have a great day!
    On a related note, I've also found this blog post that seems to have some of the same characteristics as what you are experiencing. While the details of the issue are slightly different, they both deal with an inconsistency happening when using UDP and a microcontroller. 
    Wes W
    Application Engineering
    National Instruments
    www.ni.com/support
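
While chasing down the source, one defensive measure (my own sketch, not from the thread; the addresses below are made up for illustration) is to filter on the sender address that a raw UDP receive reports, dropping datagrams that did not come from the microcontroller. Stray WS-Discovery chatter arrives from other hosts or ports and can be rejected on that basis:

```python
def from_expected_peer(datagram, sender, expected_ip):
    """Keep a datagram only if it came from the expected peer.

    `sender` is the (ip, port) tuple a recvfrom-style read reports
    alongside the data; anything from another host is discarded."""
    ip, _port = sender
    return datagram if ip == expected_ip else None
```

Usage sketch with a plain socket, where 192.168.0.20 is a hypothetical microcontroller address: `data, addr = sock.recvfrom(2048)` then `keep = from_expected_peer(data, addr, "192.168.0.20")`.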

  • UDP ip address

    I have two Ethernet PCI cards in my PC.  One of them is gigabit (Intel PRO/1000 GT) and the other one is 10BASE-T.  I found a couple of functions, i.e., UDP Write and UDP Read, in my LabVIEW 6.0 which may work for my purposes.  I was able to set a unique IP address for each of the PCI cards.  Each of these cards will be used to communicate with a unique remote device (therefore a unique IP address).  I see that in UDP Write and UDP Read there is a way to specify the IP address of the remote device, but I don't see how I can associate the IP of my PCI cards with the UDP Write and Read functions.  I have fixed the routing table.  Now the missing part is: how may I send a UDP message out via a particular PCI card, or receive a UDP message via a particular PCI card?  Thanks for your help.

    You can specify the network address to listen for UDP datagrams on when you use UDP Open (LabVIEW 8.20 and later). This would require opening a reference for each network card. There was no way to do this before 8.20.
    I believe LabVIEW will perform the UDP Write using the network card specified when opening the connection reference, but I am not positive. Maybe someone from NI can confirm this?
    Now is the right time to use %^<%Y-%m-%dT%H:%M:%S%3uZ>T
    If you don't hate time zones, you're not a real programmer.
    "You are what you don't automate"
    Inplaceness is synonymous with insidiousness
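
In BSD-socket terms, the "net address" input mentioned above corresponds to binding the socket to one card's local IP before reading or writing. A Python sketch of the idea (loopback is used in the test only so the example is runnable anywhere; in practice you would pass the address you assigned to the PCI card):

```python
import socket

def open_udp_on_interface(local_ip: str, local_port: int = 0) -> socket.socket:
    """Bind a UDP socket to the address of one particular network
    card, so reads arrive via that interface and, on most stacks,
    writes leave through it as well (subject to the routing table)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((local_ip, local_port))
    return sock
```

One socket per card, each bound to that card's IP, gives the per-device separation the question asks for.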
