Serial asynchronous architecture

Hi,
I have an architecture question and am not sure which approach would be best.  At present I have an existing program which consists of 4 asynchronous loops running in parallel.  Each loop is a state machine which is started via a global notifier.  I also have 4 Marposs digital acquisition systems, for which I have written the drivers from scratch using VISA.  All of these Marposs devices communicate through one serial port.  In isolation, I have successfully acquired data using the three VIs listed below:
 - Start Acquisition (Start.vi)
 - Read Digital Array Values (Read.vi)
 - Stop / reset Acquisition (Stop.vi)
This gives me data acquired from the device, timed in hardware at, say, 0.25 ms/sample (and I need this speed).
The problem I have is in putting this same process into each of the 4 asynchronous loops mentioned.  I know they will conflict as they stand.  However, I was thinking of putting the acquisition into a daemon and only executing it (gated by a notifier / rendezvous) when all stations are ready for measurement.  I have done this in the past; however, I believe it could have an adverse impact on throughput, which is already close to specification limits.
Another method might be to put the subVIs into some kind of wrapper which provides access control?
Is it possible to put each of the Marposs devices onto a virtual serial port, even though they use one physical port? <-- Long shot, but it would be nice!
I am hoping someone has had a similar problem before and can advise me on the proper way forward.  If not, I think I will go down the daemon route, which I am not too keen on doing.
Thanks in advance
Craig
LabVIEW 2012

Hi, thanks for the questions.
The units are on an RS485 bus going through a USB adapter (not ideal).  The units used to be polled in a fashion similar to the one you suggest; however, this is not enough for our needs, as we need to profile the measurements for better accuracy, post-processing, etc.  What the new method needs to do is tell the probe to start acquisition, wait until the part / movement has been made, then collect that data from the FIFO within the hardware.  At the moment there are 3 low-level VIs that have this functionality, but I am having trouble seeing how these might go into a daemon-type polling loop which can supply all 4 asynchronous stations.  Each station is in effect a state machine which sets a bunch of automation controls in a sequence and then "measures" using the Marposs devices, so I suppose the timing of the acquisition is indeterminate to a degree unless some sort of notifier is used.  (A sketch of the access-control idea is shown after this post.)
LabVIEW 2012
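
For illustration, here is a minimal sketch of the "wrapper with access control" idea, written in C++ since a LabVIEW diagram cannot be shown in text (in LabVIEW the same role is played by a semaphore, or by a single daemon loop fed by a queue). The three functions standing in for Start.vi, Read.vi, and Stop.vi are hypothetical stubs, not a real Marposs API:

    #include <mutex>
    #include <vector>

    std::mutex busMutex;  // guards the one physical RS485 port

    // Hypothetical stand-ins for the three low-level driver VIs.
    void startAcquisition(int station) {}                            // Start.vi
    std::vector<double> readDigitalArray(int station) { return {}; } // Read.vi
    void stopAcquisition(int station) {}                             // Stop.vi

    // Each of the 4 station loops calls acquire(); the lock makes the
    // Start/Read/Stop sequence one atomic bus transaction, so only one
    // station owns the port at a time.
    std::vector<double> acquire(int station) {
        std::lock_guard<std::mutex> lock(busMutex);
        startAcquisition(station);
        std::vector<double> data = readDigitalArray(station);
        stopAcquisition(station);
        return data;
    }  // lock released here; the next station's transaction can begin

Because the sampling itself is timed in the device FIFO, serializing only the bus transactions should not disturb the 0.25 ms/sample hardware timing; throughput is then limited only by how long each Start/Read/Stop exchange holds the bus.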

Similar Messages

  • CNiVisaSession::Read() serial asynchronous, is it possible?

    Hello,
    System: Measurement Studio 6.0
    We want to use serial with VISA in asynchronous mode (with a handler if possible). We have made a small program (based on CDialog) to test it, but every time we call CNiVisaSession::Read(,,jobID) (the jobID argument selects asynchronous mode!), it returns VisaSuccessIoSync. Why? (What condition is missing: an event queue in place of the handler, or can serial VISA not be asynchronous at all?)
    In fact, we want CNiVisaSession::Read() to return immediately and use asynchronous mode (currently, it blocks).
    We use the following code (in class CMyClass):
    Header:
    class CMyClass : public CDialog {
        // ...
        ViByte m_nSerialBuffer[255];
        CNiVisaJobId m_nSerialJobId;
        CNiVisaSession m_nSerialSession;
        void InitNIVisaSerial();
        ViStatus OnNIVisaEvent(CNiVisaEvent& nEvent);
    };
    Body:
    CMyClass::CMyClass() : m_nSerialSession("ASRL1::INSTR", VisaExclusiveLock) {}

    void CMyClass::InitNIVisaSerial() // called from OnInitDialog
    {
        // initialize serial
        m_nSerialSession.SetAttribute(VisaSerialBaudRate, (uInt32)9600);
        m_nSerialSession.SetAttribute(VisaSerialParity, VisaParityNone);
        m_nSerialSession.SetAttribute(VisaSerialDataBits, (uInt16)8);
        m_nSerialSession.SetAttribute(VisaSerialStopBits, VisaStopOne);
        m_nSerialSession.SetAttribute(VisaSerialFlowControl, VisaFlowNone);
        m_nSerialSession.SetAttribute(VisaTimeout, (uInt32)5000); // timeout of 5 secs
        try {
            // we want the 0x0D termination character
            m_nSerialSession.SetAttribute(VisaSerialEndIn, VisaEndTermChar);
            m_nSerialSession.SetAttribute(VisaTerminationChar, (uInt8)0x0D);
            // OnNIVisaEvent is called on completion, OK
            CNiVisaEventHandlerId nID;
            m_nSerialSession.InstallEventHandler(VisaEventIoComplete, *this,
                OnVisaEventHandler(CMyClass, OnNIVisaEvent), nID);
            m_nSerialSession.EnableEvent(VisaEventIoComplete, VisaHandler);
        } catch (CNiVisaException* e) {
            e->ReportError();
            e->Delete();
        }
    }

    // called from the Read button
    void CMyClass::OnReadBtn()
    {
        // here we start an asynchronous read, but every time
        // the command completes in synchronous mode -- for what reason?
        ViStatus nStatus = m_nSerialSession.Read(m_nSerialBuffer, 255, m_nSerialJobId);
        if (nStatus == VisaSuccessIoSync) // always ends up here after the timeout
            AfxMessageBox("Reading completed synchronously");
    }
    Thanks a lot,
    Alain

    Alain:
    NI-VISA performs asynchronous I/O differently based on the bus type and whether you are using the queue or the handler. For Serial, to make the callbacks work efficiently, there is a threshold level that you need to set in MAX. Go to the Tools menu and choose the VISA options. Set the Minimum Async Transfer to a smaller number of bytes (probably 250) and then try your application again. It should now do the I/O in another thread, and that is the thread where your callback should occur. You can verify this in NI Spy.
    Dan Mondrik
    Senior Software Engineer, NI-VISA
    National Instruments
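
    For reference, the same setup expressed against the raw VISA C API (which the CNiVisaSession class wraps) looks roughly like this; a compile-only sketch with error handling omitted, not tested code:

        #include <visa.h>
        #include <cstdio>

        // Fires in a VISA thread once the asynchronous job completes.
        ViStatus _VI_FUNCH onIoComplete(ViSession vi, ViEventType type,
                                        ViEvent event, ViAddr userHandle)
        {
            ViUInt32 retCount = 0;
            viGetAttribute(event, VI_ATTR_RET_COUNT, &retCount); // bytes transferred
            std::printf("async read finished: %lu bytes\n", (unsigned long)retCount);
            return VI_SUCCESS;
        }

        int main()
        {
            ViSession rm, vi;
            viOpenDefaultRM(&rm);
            viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &vi);
            viInstallHandler(vi, VI_EVENT_IO_COMPLETION, onIoComplete, VI_NULL);
            viEnableEvent(vi, VI_EVENT_IO_COMPLETION, VI_HNDLR, VI_NULL);

            static ViByte buffer[255];
            ViJobId job;
            ViStatus s = viReadAsync(vi, buffer, sizeof buffer, &job);
            if (s == VI_SUCCESS_SYNC)  // transfer was below the MAX
                std::printf("completed synchronously\n"); // "Minimum Async Transfer" threshold
            // ... application continues; the handler runs on completion ...
            viClose(vi);
            viClose(rm);
            return 0;
        }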

  • ALSB [2.6] Asynchronous Proxies Coupled

    Hi,
    I'm trying to implement an asynchronous architecture using ALSB. There are 3 proxies: A, B & C.
    Proxy A is invoked synchronously by the Client and sends back an Acknowledgement. Proxy A routes to Proxy B, which is a JMS Proxy. Proxy B routes to Proxy C, which is also a JMS Proxy. Proxy C synchronously invokes the Client in order to send back a Business Response to the Request sent by the Client to Proxy A.
    I have seen that if either Proxy B or C is disabled, then the Client gets back a SOAP fault detailing the same.
    Surely this is not the decoupling of Proxies A -> B and B -> C one would expect when a JMS destination lies between them?
    Best regards,
    Nial Darbey

    It sounds like the message is not actually going to JMS but rather straight to Proxy B. Try publishing to a Business Service to put the message in JMS instead.

  • Configuration management on Solaris 10 (something like Win Registry)

    Hello,
    We are currently in the process of porting a set of Java applications from Windows to Solaris 10. Although the process is relatively straightforward, we have stumbled upon a problem with configuration management.
    On Windows we rely on Registry to keep track of e.g. installed applications, java jdks etc. that is used by our own management software. Basically Registry provides an api to access a set of keys and values. Every installed application adds a key:value that will be picked up by the management software.
    On the surface there appears to be no direct equivalent of the Windows Registry on Solaris. We could write our own (e.g. file-based) database or use something like Electra.
    Am I missing something obvious, or is there something in Solaris that can be used to store global configuration information?
    Thanks,
    Alexey

    For tracking installed software in Solaris, I can think of a few items that may help.
    1. /var/sadm/install/contents lists all the software packages and their contents. This is great for figuring out which file, binary, or command came from which package:
    grep /usr/bin/tip /var/sadm/install/contents
    /usr/bin/tip f none 4511 uucp bin 56172 6659 1168034330 SUNWcsu
    warning: grepping for just "java" here will give tons of output. The more specific the better.
    2. /var/sadm/pkg will list all the packages installed on the system, even 3rd party packages.
    3. /opt is the filesystem reserved for the so-called "unbundled" Sun products. For example, here is where you would find the Sun Ray server software directories ( /opt/SUNWut ......), the Serial Asynchronous Interface software (/opt/SUNWsaip . . . ), the old HP printer software (/opt/HPNPL . . ) etc.
    Wouldn't this management software you mention have a version for Solaris 10 to do this for you?

  • Expose asynchronous service in ESB

    Hi all.
    I'd like to know if it's possible to expose a service in ESB which has two port types, just like we do in BPEL.
    The idea is to expose a WSDL in ESB which has two port types, request and callback. ESB should support WS-Addressing and call the requester port dynamically, based on the information provided in the SOAP header.
    Is that possible??
    Thanks.
    Denis

    Hummmm.... That's too bad for me :-(
    I'm planning to move to an asynchronous architecture, given that with the traditional synchronous architecture I have the following issue:
    For each synchronous communication, ESB gets a thread from the thread pool and this thread gets busy until the service provider returns with the response.
    I could not see a way (I don't even know if it's possible) of allocating specific thread pools for each service or group of service providers.
    The problem of having a single thread pool is that, if one service provider unexpectedly gets massively requested, the entire ESB gets stuck, and all other service providers end up suffering from this single problem.
    Having an asynchronous architecture would prevent the ESB from consuming its entire thread pool in case of massive peak requests, as it only mediates the requests and the thread is freed just after the mediation.
    So, I have the idea of implementing services using request and callback ports.
    The problem is that the ESB should be able to mediate the asynchronous response back to the requester.
    Is anyone working on a similar kind of architecture??
    Thanks.
    Denis

  • Pipelined block with latches driven by different clocks in FPGA

    Hi,
       Thanks for the previous help. I have made some progress using LabVIEW FPGA. I have a CORDIC running in the 5640R FPGA; however, the CORDIC I implemented is an unrolled, bit-parallel pipelined design, which consumes too many slices on the FPGA. I have now turned to implementing a bit-serial pipelined architecture, and I have met the following programming problem.
       The architecture I want to implement is in the attached picture.
       The blue blocks are combinatorial logic. In the previous design, there are several large blocks between the latches shown in red; say, the red latches latch every 24 ms. In order to implement the bit-serial version, I have to divide one large block into four smaller blocks and implement a mux and demux. Now I have 5 new latches, shown in green, each latching every 4 ms.
       Any hints on how to program this structure in LabVIEW?  I would really appreciate the help.
    David
    ps: The second attached picture is the unfinished code for the small block.

    You should use a Single-Cycle Timed Loop to have more control over the timing of your code.  You can use feedback nodes or shift registers as registers and case structures or the select tool to determine what gets written to the registers and when.
    I envision your code to look something more like this:
    Regards,
    Joseph D.
    Attachments:
    PipelinedSCTLArch.JPG 95 KB
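
    The mux/demux time-sharing idea can also be modeled in plain C++ (a behavioral illustration only, not FPGA code; in LabVIEW FPGA the phase counter becomes a case selector inside a single-cycle timed loop and the latches become feedback nodes):

        #include <array>
        #include <cstdint>

        // Stand-in for one of the four small combinatorial blocks; the real
        // logic would be one quarter of the original large block.
        uint8_t smallBlock(int stage, uint8_t x) { return static_cast<uint8_t>(x + stage); }

        std::array<uint8_t, 5> latch{};  // the 5 green latches
        int phase = 0;                   // mux/demux select, advances every 4 ms tick

        // One 4 ms tick: a single shared block computes one stage per tick,
        // so after 4 ticks the data has passed through all four stages,
        // trading throughput for the area saved by reusing one block.
        void tick(uint8_t input) {
            uint8_t in = (phase == 0) ? input : latch[phase]; // mux: pick stage input
            latch[phase + 1] = smallBlock(phase, in);         // demux: store stage output
            phase = (phase + 1) % 4;
        }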

  • How to get server data without reading from the socket stream?

    My socket client checks for server messages through
                while (isRunning) { // listen for server events
                    try {
                        Object o = readObject(socket); // wait for server message
                        tellListeners(socket, o);
                    } catch (Exception e) {
                        System.err.println("ERROR SocketClient: " + e);
                        e.printStackTrace();
                        try { sleep(1000); } catch (InterruptedException ie) { /* ignore */ }
                    }
                } // next client connection
    with readObject() being
        public Object readObject(Socket socket) throws ClassNotFoundException, IOException {
            Object result = null;
            System.out.println("readObject(" + socket + ") ...");
            if (socket != null && socket.isConnected()) {
                ObjectInputStream ois = new ObjectInputStream(new DataInputStream(socket.getInputStream()));
                try {
                    result = ois.readObject();
                } finally {
                    // socket.shutdownInput(); // closing ois would also close the socket!
                }
            }
            return result;
        } // readObject()
    Why does ois.readObject() block? I get problems with this, as the main while loop (above) calls readObject() since it's the only way to get server messages. But if I want to implement a synchronous call in addition to this asynchronous architecture (call listeners), the readObject() call of the synchronous method comes too late, as the readObject() call of the main loop was made first and therefore also receives the result of the (later) synchronous call.
    I tried fiddling around with some state variables, but that's ugly and probably not thread-safe. I'm looking for another solution to check for messages from the server without reading data from the stream. Is this possible?

    A quick fix:
    - Add a response code at the beginning of each message returned from the server, indicating whether the message is a synchronous response or a callback (asynch);
    - Read all messages returned from the server in one thread, copying callback messages into a callback message queue and synch responses into a synch response queue;
    - Modify your synchronous invocation to retrieve the response from the response queue instead of from the socket. Read the callback messages from the corresponding queue instead of from the socket.
    Also take a look at my website. I'm implementing an upgraded version of this idea.
    Catalin Merfu
    High Performance Java Networking
    http://www.accendia.com
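
    The split described above might look like this (a C++ sketch of the same structure, since the idea is language-independent; readMessageFromSocket() is a hypothetical stand-in for the real blocking socket read):

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>

        struct Message { bool isSyncResponse; std::string body; };

        // Minimal thread-safe queue: push from the reader thread, pop elsewhere.
        template <typename T>
        class BlockingQueue {
            std::queue<T> q;
            std::mutex m;
            std::condition_variable cv;
        public:
            void push(T v) {
                { std::lock_guard<std::mutex> l(m); q.push(std::move(v)); }
                cv.notify_one();
            }
            T pop() {
                std::unique_lock<std::mutex> l(m);
                cv.wait(l, [&] { return !q.empty(); });
                T v = std::move(q.front()); q.pop();
                return v;
            }
        };

        BlockingQueue<Message> syncResponses; // popped by the blocking caller
        BlockingQueue<Message> callbacks;     // popped by the listener dispatcher

        Message readMessageFromSocket();      // hypothetical blocking socket read

        // The only thread that ever touches the socket: it classifies each
        // message by its response code and routes it to the right queue.
        void readerLoop() {
            for (;;) {
                Message msg = readMessageFromSocket();
                if (msg.isSyncResponse) syncResponses.push(std::move(msg));
                else                    callbacks.push(std::move(msg));
            }
        }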

  • Best architecture for testing multiple (8) serial devices

    I am not new to LabVIEW but am being tasked with a challenging project that I want to be as prepared for as I can.  I am familiar with Event Structures and State Diagrams but feel these may not be adequate for the following specifications:
    1.  Most likely 8 serial devices need to be tested simultaneously
    2.  Tested every 2 hours for 48 hours
    3.  Records for each one must be created and saved at each interval
    I want to learn Queues etc. and am thinking a combination of producer/consumer and the two architectures mentioned above may be required.
    Any basic architecture recommendations that may work would be appreciated.  FYI, I have never used Producer Consumer.
    Thanks

    As tbob pointed out, the architecture will be fully dependent on the exact details of the problem.  That being said, I do have my two cents to put in.  In my work, I deal a lot with serial communication (too much, in my opinion); in some instances, I am dealing with the same device on different machines and in other instances, different devices talking on different ports on the same machine.  Regardless of what the implementation is, you will find that there is a lot of commonality to be found in serial IO (open port, close port, configure port, send ascii command, receive ascii data, block simultaneous attempts to R/W, etc, etc) that is really independent of implementation.  To me, this just screams for an object-oriented approach.  The ability to reuse or change the details of the code while limiting changes to the interface itself is awesome.  In your case, if you have 8 similar devices, I see 8 instances of a class which inherits properties from a Serial IO superclass; if we are dealing with different types, then you can exploit the inheritance features of objects to basically call 8 instances of different children of the Serial IO superclass. 
    All that being said, I think some of the details are missing as tbob pointed out.  However, instantiation of 8 objects will prevent some of the issues that might be associated with asynchronicity as all of the properties and methods would be associated with an instance. 
    Anyway, that's my thought.  Give me another week or two and I will probably have an example of this up.
    Cheers, Matt 
    Matt Richardson
    Certified LabVIEW Developer
    MSR Consulting, LLC
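
    A rough C++ rendering of the class hierarchy Matt describes (in LabVIEW this would be done with LVOOP classes; all names here are illustrative):

        #include <memory>
        #include <string>
        #include <vector>

        // Superclass holding the plumbing common to all serial IO:
        // open/close/configure port, send command, receive reply, ...
        class SerialIO {
        public:
            explicit SerialIO(std::string port) : port_(std::move(port)) {}
            virtual ~SerialIO() = default;
            virtual std::string query(const std::string& cmd) = 0; // device-specific
        protected:
            std::string port_;
        };

        // One concrete device type; other instruments become sibling classes.
        class MyDevice : public SerialIO {
        public:
            using SerialIO::SerialIO;
            std::string query(const std::string& cmd) override {
                // write cmd on port_, read back the ASCII reply (omitted)
                return "";
            }
        };

        int main() {
            std::vector<std::unique_ptr<SerialIO>> devices; // the 8 units under test
            for (int i = 1; i <= 8; ++i)
                devices.push_back(std::make_unique<MyDevice>("COM" + std::to_string(i)));
        }

    Because each instance carries its own port state, the 8 loops do not contend over shared globals; if the devices differ, each type simply becomes another child of the SerialIO superclass.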

  • Problems with asynchronous visa read with USB to serial port adapter

    I have an application that sends and receives data from a power supply, and on most computers (desktop PCs) the application runs fine. I found that on at least one computer, using a Keyspan Tripp Lite USB to RS232 adapter, in one out of 25 queries I wasn't getting the whole reply back from the instrument. After some debugging I found that if I switch the VISA write and read calls from asynchronous to synchronous, I don't see the error any more. Is there a way to disable asynchronous mode for that computer? I'd rather not have to find all the VISA write calls in my application and update each one just to support that computer.
    CLD (2014)

    Try playing with the Tx ACK Advance setting on that port first (from the Keyspan config utility).  You might also look into the Rx FIFO buffer settings (16 is a short default buffer).
    There is no (easy) way to universally disable async.  VISA async transfers MAY complete synchronously anyhow (a warning is thrown when that happens), so if the Keyspan settings do not help, you might want to swap to an FTDI-chip-based USB-serial device.
    Jeff

  • Asynchronous serial input with an sbRIO FPGA

    Background:
    As part of my capstone project, I'm trying to read data transmitted serially from an IMU. The host is an sbRIO 9602.
    As far as I'm aware, the protocol is not exactly standard: data is sent asynchronously in packets. Each packet consists of 12+ bytes in immediate sequence, each having a start and stop bit, and then the line goes idle [high] until the next packet. Each data byte is preceded by a frame bit, and only contains 7 bits of actual data, so the packet has to be further processed to get actual useful data.
    I've seen the FPGA serial I/O example program floating around. The code I inherited was actually based on it, but it's overly complex for this application, and I'm not convinced it would actually work for this protocol at all. I couldn't get it to work, at any rate. I rewrote the sampling code in its entirety twice trying to get it to work, but haven't made a lot of progress. On the bright side, the current VI is much simpler and more straightforward than my first attempt...
    The problem:
    I can read the first 70 or so bits of a packet fine, then the program skips a bit. That throws off the start/stop bits, and basically renders everything after it meaningless. In the attached screenshot, the data is as read in, in order from top to bottom.
    I'm fairly certain this means my sampling interval isn't perfect [this suggests about 1.4% too long], but I'm totally stumped on how to avoid it. What's worse, this is actually on the lowest possible output setting from the IMU, 2400 baud. Realistically, we're hoping to have it working at either 230.4k or 460.8k baud.
    The prior version of my code had the packet being read in 10-bit [1 byte] chunks, processing, then reading the next chunk. I encountered exactly the same error. I assumed that having too much code in the sampling process was causing the timing to be thrown off, so I changed it to read off the entire packet into a bit array and then process it afterward [while no data is coming in]. I've attached this version of the code for reference. It's cleaner, but no change as far as the error is concerned.
    Am I correct in my evaluation, or is there something else going on? More to the point, is there a way of fixing or working around the problem so that I can get reliable samples [even at 100-200x the bit rate]?
    As an aside, I've only been working with LabVIEW for a couple weeks; please tell me if I'm using poor habits or doing anything incorrectly.
    Any help will be immensely appreciated. Thank you.
    Attachments:
    IMU_serial_in.vi 61 KB

    Hi Ryan,
    I have a suggested methodology, but I don't currently have any example code to share that would get you started.
    The challenge you have is that even if you sample at exactly the right baud rate for your incoming signal, the phase of the FPGA clock will not be exactly the same as that of the source signal.  Now add to that the fact that your sample frequency and baud rate will always be slightly different, and you get the sampling drift effect you described, where data is eventually clocked in wrong.  On short transmissions this may not be a problem, because the sampling can be re-aligned with a start bit; but for long, continuous streaming it eventually fails as the sampling and source signals drift out of phase.
    I would suggest over-sampling the DIO line, using a debounce filter if necessary, and use a measured time between edge detections to constantly adjust your sampling period and phase to keep your sampling aligned with the incoming data.
    The LabVIEW code I imagine would be a state machine based on a single-cycle timed loop.  Essentially, the state machine would detect edges that occur near the baud rate you expect to receive, and then adjust the sampling period to ensure you are sampling the data in between transitions, while the incoming waveform is stable.
    With this method running at 40 MHz, you would have ~43 clock ticks (samples) per bit period at 921.6 kbps, and you should be able to pull out the right samples at the right time in the waveform.
    Hope this helps, and if I find a good example of this, I'll send it your way.
    Cheers,
    Spex
    National Instruments
    To the pessimist, the glass is half empty; to the optimist, the glass is half full; to the engineer, the glass is twice as big as it needs to be...
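
    Spex's resynchronization idea can be modeled off-target in a few lines of C++ (a behavioral illustration only; on the FPGA this becomes a single-cycle timed loop where samples arrive one per tick, and framing, i.e. start/stop bits, still has to be handled afterwards):

        #include <vector>

        // Recover bits from an oversampled line: restart the bit-cell counter
        // on every edge, and take the line value at the middle of each cell.
        // oversample = samples per bit, e.g. ~43 at 40 MHz and 921.6 kbps.
        std::vector<int> recoverBits(const std::vector<int>& samples, int oversample) {
            std::vector<int> bits;
            int last = 1;   // idle-high line
            int phase = 0;  // position within the current bit cell
            for (int s : samples) {
                if (s != last) phase = 0;         // edge seen: re-align the bit clock
                if (phase == oversample / 2)      // mid-cell: line is stable, sample it
                    bits.push_back(s);
                phase = (phase + 1) % oversample; // free-run between edges
                last = s;
            }
            return bits;
        }

    Because the counter restarts at every transition, the accumulated drift never exceeds a fraction of a bit cell between edges, which is what keeps long packets from walking off the sampling point.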

  • Mapping asynchronous to a serial interface

    Hello everybody,
    I have an ISR3845 with a PVDM2 inside. The goal is to map an incoming call, depending on the caller number, to a dedicated serial interface.
    How do I configure a static mapping from an asynchronous interface to a dedicated serial interface?
    Thank you very much for your support
    regards
    stephan

    You have to install NI-VISA.
    It is possible to avoid this separate install by including all of the required support files.
    I believe there is a KnowledgeBase article that lists all of the files that need to be included in your build.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Asynchronous serial com with DAQ6034E

    Hello,
    I am a new CVI programmer, and I would like to submit a problem.
    I use a DAQ6034E card with LabWindows/CVI. I would like to communicate with devices via an asynchronous serial bus. Is it possible to do such a thing with digital I/O, and how can I do it (especially for reception)?
    Thanks for your help!
    Franck...

    Franck,
    You could use the DIO lines on the 6034E, but keep in mind the limitations of the DIO lines on this card. These DIO lines are programmed I/O only. That means that data transfers (in or out) can only be done via software timing. You cannot do any hardware timing on the DIO lines of this card. If you need hardware timing for this operation, you will need to use a DIO card.
    But if you can work with software timing only, this card gives you the advantage of being able to configure line direction on a line-by-line basis, which I think can be very beneficial in serial communication.
    Nick W.
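
    As an illustration of what software-timed serial on a DIO line involves, here is a sketch of the transmit side in C++ (writeDioLine() and delayMicroseconds() are hypothetical stand-ins for the DAQ digital-write call and a calibrated software wait; reception works the same way in reverse, polling the line at the bit period):

        #include <cstdint>

        void writeDioLine(int level);    // hypothetical: drive the DIO line
        void delayMicroseconds(int us);  // hypothetical: software-timed wait

        // Transmit one byte, 8N1 framing, LSB first. Because the delays are
        // software-timed, jitter limits this approach to low baud rates.
        void sendByte(uint8_t b, int baud) {
            const int bitUs = 1000000 / baud;
            writeDioLine(0);                 // start bit
            delayMicroseconds(bitUs);
            for (int i = 0; i < 8; ++i) {    // data bits
                writeDioLine((b >> i) & 1);
                delayMicroseconds(bitUs);
            }
            writeDioLine(1);                 // stop bit (line back to idle)
            delayMicroseconds(bitUs);
        }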

  • Architectural Issues with AIA Asynchronous MEP

    Hi,
    We are implementing an AIA service which should follow an Asynchronous (request/delayed-response) MEP. I need some advice from people who have implemented this pattern.
    The rough requirements are that we have two ABCS Requesters for two different systems, which read from files and call the EBS (eventually there may be an EBF involved as well but don't want to confuse the question). The response for the operation will take up to 10 minutes but we need to deal with the response when it comes back at the requester level.
    It looks something like this:
    ABCSRequester1 <-->
                        EBS <--> ABCSProvider1
    ABCSRequester2 <-->
    My question is: why does AIA only create a single-port service in the ABCS Provider, forcing me to call back via a separate WSDL, when I select the request/delayed-response option in the service constructor?
    This means that I must have two EBS services (one EBS Request and one EBS Response) - unless I am missing something (which is entirely possible) - and dynamically route the response to the appropriate endpoint and then correlate it.
    I understand this is possible, but it seems overkill to me when I could have two ports on the Provider ABCS WSDL (which the service constructor does not create), which I could then call back on, and the EBS would then create a callback to the calling client. This would make the whole process easier to manage, IMO.
    I could change the generated WSDL and add a callback port, but as the service constructor generates the ABCS like this, I do not want to just alter this without understanding maybe why I shouldn't.
    Do people recommend the Fire and Forget Async MEP as seems to be documented in AIA documentation for a scenario like this and if so why?
    Have people implemented the two port single WSDL solution (as is created by SOA Suite if you build a service as Async from scratch) as an AIA provider?
    If anybody has a sample Async project in AIA showing this sort of scenario, I would love to see it.

    Hello,
    The Provider ABCS does not need to have 2 port types for this scenario. You would have 2 port types in the Provider ABCS only if it interacts with the provider application in an asynchronous manner. In that situation, the provider application will send the callback response by invoking the service operation defined in the callback response port type within the Provider ABCS.
    Your question is more about why we need to have 2 EBS - one for processing request and another for processing callback response. For this situation, you do not have to add additional port type in provider ABCS WSDL.
    You can certainly leverage the mediator’s call back facility to eliminate the Response EBS that is normally recommended for processing the callback. 11g mediator does have the support and you can leverage that.
    We did look at this feature during the 11g adoption and, after prototyping, decided not to leverage it, for the following reasons:
    1.     EBS operations need to support fire-and-forget as well as asynchronous delayed-response MEPs. We had to support this by having the Requester ABCS indicate at run time, in the EBM, whether it needs the response or not. EBS.Operation would be invoked by both fire-and-forget and asynchronous delayed-response MEP requesters; and for the delayed responses, ResponseEBS.Operation would be used. Hardwiring the MEP at the EBS operation level would lead us to create 2 operations – one for fire-and-forget and another for asynchronous delayed response.
    2.     This stateless programming model allows us to be OSB-friendly.
    3.     It is backwardly compatible with 2.x services.
    Any custom solution built by you can certainly leverage the 11g Mediator's callback facility, provided your Requester ABCS is always coded to receive a callback from the EBS. You could certainly leverage this pattern if you don't have a need for the Requester ABCS to invoke the EBS using the fire-and-forget pattern.
    Ravi

  • Asynchronous serial message read

    Hi, everybody.
    I have a message-based device. I don't know when the device will send a message, so I don't know how I should build the VISA read function to read a whole string from the device (because it only reads the data currently available in the serial port buffer). Maybe it can be done by using VISA Bytes at Port in a while loop, but I don't know exactly how. Can you explain it to me? Or do you have any other solution? Thank you for your help.

    Hi
    I reworked the basic serial write and read example, just to give you some programming hints. Also remember that when reading from the serial buffer, you do not need to read all of the content; read the number of bytes you need. The serial buffer is a FIFO, so the oldest content will always be first in the queue to read.
    My example is somewhat simple, and I did not test it, but it should work. Check out queues and notifiers (in the Example Finder) for more advanced programming techniques.
    Besides which, my opinion is that Express VIs must be deleted
    (Sorry no Labview "brag list" so far)
    Attachments:
    Basic Serial Write and Read async.vi 33 KB

  • Asynchronous serial

    It appears that when I call viReadAsync, the read does not complete until it times out or it has received at least as many bytes as I requested.  Is there a way to make it complete when it finds any data in the buffer?  Currently I am setting 'count' to 1, but this seems a bit inefficient.
    Once viReadAsync and viWaitOnEvent have completed, I am having problems getting the number of bytes read.  Is this always going to be the same number I requested?  I tried using viGetAttribute to get VI_ATTR_RET_COUNT but couldn't make it work.  When I use the device session as the first argument, it returns VI_ERROR_NSUP_ATTR.  When I use the job ID from viReadAsync or VI_EVENT_IO_COMPLETION, VI_ERROR_INV_OBJECT is returned.  What is the secret to getting the number of bytes read?
    Will viReadAsync ever return more bytes than I requested?  What do I need to do to get all bytes in the input buffer?  Do I need to check the number of bytes in the buffer, then request that many?  How do I do this (I can't find that function/attribute)?

    Are you using CVI? This is the LabVIEW forum. I can tell you, though, that you can use the VI_ATTR_ASRL_AVAIL_NUM attribute to determine how many bytes are actually in the buffer, and only do a read if the byte count is greater than 0. You will never get more bytes than you request, but you can get fewer. You will get fewer if you have the termination character enabled and the data includes that character: a VISA read will terminate when the termination character is detected, no matter how many bytes you specify.
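
    In VISA's C API that polling pattern looks roughly like the sketch below (untested, error handling omitted). Note also that VI_ATTR_RET_COUNT is an attribute of the event context returned by viWaitOnEvent, not of the session or the job ID, which would explain the VI_ERROR_NSUP_ATTR / VI_ERROR_INV_OBJECT errors above:

        #include <visa.h>

        // Read whatever is currently waiting in the serial input buffer.
        ViUInt32 drainPort(ViSession vi, ViByte* buf, ViUInt32 bufSize)
        {
            ViUInt32 avail = 0;
            viGetAttribute(vi, VI_ATTR_ASRL_AVAIL_NUM, &avail); // bytes waiting
            ViUInt32 got = 0;
            if (avail > 0) {
                if (avail > bufSize) avail = bufSize;
                viRead(vi, buf, avail, &got); // returns at once: the data is there
            }
            return got;
        }

        // After viWaitOnEvent() on VI_EVENT_IO_COMPLETION, query the byte
        // count from the event context it returned:
        //     ViEvent ev; ViEventType t;
        //     viWaitOnEvent(vi, VI_EVENT_IO_COMPLETION, 5000, &t, &ev);
        //     ViUInt32 retCount;
        //     viGetAttribute(ev, VI_ATTR_RET_COUNT, &retCount);
        //     viClose(ev);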
