Best efficient general practice for serial communication

hello,
When talking to a serial instrument, do you have to insert millisecond waits in the while loop, or will the reads and writes to the instrument generally control the speed of the loop? In the basic read/write example they feed Bytes at Port into the VISA Read.... is this making VISA wait to read until those bytes are available? I'm wondering whether, if this were in a while loop, the loop would only iterate once the VISA Read has executed... which in turn depends on whether the bytes are available at the port?
I'm trying to figure out a general best-practice way of setting up a serial device in an efficient manner, for something where you send a write and then read.... in the basic serial read/write example there is a delay in between the write and the read... is this an arbitrary number? Does it need to be there if you're not reading until a specified number of bytes are at the port anyway?
much thanks!

When I use the serial VIs for communication I generally don't use loops; I only use a loop when I need to send a bunch of query commands. Bytes at Port just gives a count of how many bytes are currently at the port. Even if you don't use it and wire a fixed byte count into the read VI you will still get a response, but you will not be sure whether you have received the complete data. The time delay between the write command and reading the response also depends on how the protocol is set up.
I would always prefer VISA Configure -> Bytes at Port -> VISA Read (this empties any previous data left in the buffer) -> VISA Write -> Bytes at Port -> VISA Read -> VISA Close.
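To make that sequence concrete outside LabVIEW, here is a minimal sketch of the same configure -> flush -> write -> poll Bytes at Port -> read pattern using Python and pyserial; the port name, baud rate, query string, and timeouts are placeholders, not values from the posts above.

    import time
    import serial  # pyserial

    port = serial.Serial("COM3", baudrate=9600, timeout=1)   # VISA Configure Serial Port

    port.reset_input_buffer()               # Bytes at Port -> Read: drop any stale response
    port.write(b"*IDN?\r\n")                # VISA Write (the query is a placeholder)

    # Rather than a fixed millisecond wait, poll until the reply has arrived,
    # with a deadline so a silent instrument cannot hang the loop forever.
    deadline = time.time() + 2.0
    while port.in_waiting == 0 and time.time() < deadline:
        time.sleep(0.01)                     # tiny sleep keeps the loop from spinning at 100% CPU

    response = port.read(port.in_waiting)    # VISA Read with byte count = Bytes at Port
    print(response)
    port.close()                             # VISA Close

The small sleep plays the role of the millisecond wait in the question: it is there to keep the polling loop from monopolizing the CPU, not because the instrument needs it.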

Similar Messages

  • I'd like to know if LabVIEW 6.0 Application Builder includes the daqdrv (for data acquisition) and serpdrv (for serial communication) support files by default.

    Building an application to communicate with a device over the serial port

    Hi velou
    The LabVIEW 6i Application Builder no longer requires daqdrv. Regardless of your Application Builder and LabVIEW version, you must always install the appropriate driver files on the target machine. For example, if your application communicates with a DAQ board and a GPIB board, then you must install NI-DAQ and NI-488.2 on your target machine. If you are using the VISA VIs for serial communication, then you have to install NI-VISA. If you are using the "old" serial VIs, then you have to include serpdrv separately.
    Regards,
    Luca P.

  • Best General practice for performance and tuning.

    Hi ,
    Can somebody let me know the general best practices for performance and tuning for Oracle Applications 11i and the 10g database that can be implemented and suggested in a new environment?
    Regards,

    Hi,
    Please see the following documents/threads.
    Note: 744143.1 - Tuning performance on eBusiness suite
    Note: 864226.1 - How Can I Diagnose Poor E-Business Suite Performance?
    Note: 362851.1 - Guidelines to setup the JVM in Apps Ebusiness Suite 11i and R12
    Note: 216205.1 - Database Initialization Parameters for Oracle Applications Release 11i
    EBS, performence issue
    Re: EBS, performence issue
    Oracle Apps Tuning
    Re: Oracle Apps Tuning
    Regards,
    Hussein

  • General practice for storing variable in Web Application

    I am working on an enquiry program that spans several JSP pages, and I need to pass the criteria variables across those pages.
    As a general practice, should the variables be stored in the session or as hidden inputs?
    Or is there a better solution?
    Please help
    regards,
    Fannie

    Storing the vars in the session would probably be the way to go. Using hidden fields in your HTML would allow the user to view source and see the vars. Hidden fields can also be easily spoofed and would be a maintenance nightmare. Depending on your app requirements, I'd have a Criteria object (stored in the session) that encapsulates the criteria gathered.
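    As a rough illustration of that idea in a different stack (plain Python standing in for the servlet session; the Criteria fields and the session map below are made up for the example), the point is to keep one server-side object per session instead of echoing criteria through hidden fields:

        from dataclasses import dataclass, field

        @dataclass
        class Criteria:
            keywords: str = ""
            date_from: str = ""
            filters: dict = field(default_factory=dict)

        sessions = {}  # stand-in for the container-managed session store, keyed by session id

        def handle_search(session_id, form):
            crit = sessions.setdefault(session_id, Criteria())   # one Criteria object per session
            crit.keywords = form.get("keywords", crit.keywords)  # update only what the form changed
            return crit

        crit = handle_search("abc123", {"keywords": "enquiry"})
        print(crit)  # later pages read the same object back from the session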

  • How to detect disconnection for serial communication?

    /dev/cua/[a/b] are used for asynchronous communication. We are attempting to detect when one end closes/drops the connection. termio(7I) states that this can be done by turning on HUPCL, turning off CLOCAL, and catching SIGHUP. We have tried different ways and have never caught SIGHUP. Any advice would be appreciated.
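    For reference, the sequence the question describes (HUPCL on, CLOCAL off, catch SIGHUP) looks roughly like this in Python; only the device path comes from the question, the rest is an assumption, and note that SIGHUP is normally delivered only if the port is the process's controlling terminal, which may be why it was never caught:

        import os
        import signal
        import termios

        def on_hangup(signum, frame):
            print("carrier dropped - remote end closed the connection")

        signal.signal(signal.SIGHUP, on_hangup)

        fd = os.open("/dev/cua/a", os.O_RDWR)
        attrs = termios.tcgetattr(fd)        # [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
        attrs[2] |= termios.HUPCL            # hang up (raise SIGHUP) when carrier is lost
        attrs[2] &= ~termios.CLOCAL          # do not ignore modem control lines
        termios.tcsetattr(fd, termios.TCSANOW, attrs)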

    Hello, I think I have implemented what I wanted... however, one last issue: I am observing that LabVIEW is reading the data from the serial port twice. For example,
    the actual data is:
    0x12 0x28 0x04; however, LabVIEW is reading 0x12 0x12 0x28 0x28 0x04 0x04. I think the LabVIEW loop runs much faster than my uC, and before the uC sends new data, LabVIEW reads whatever is in its internal buffer, whether or not the uC has updated it.
    Now on LabVIEW 10.0 on Win7

  • What is the best/efficient DB Design for a device reservation system?

    Hi,
    I have to design the table structure for a device reservation system. A user can reserve any device for a period of time (start time and end time).
    Based on the existing reservations:
    1) I need to show on the UI whether a device is reserved or available
    2) when a logged-in user wants to reserve, how do I validate against all the devices and allow the user to reserve the device
    with minimal logic?
    Each reservation of a device for a particular time is stored as one record; when a user is reserving, we have to consider the entire set of records to determine whether the device is reserved or free for that particular span of time.
    How do we do this with minimal logic, and what is the best DB/table design for this kind of system?

    You may have a Users/Customers table (contains users' personal info)
    You may have a Devices table (contains info about the devices: name, supplier, etc.)
    You may have a Reservations table that contains a userid + deviceid + startdate + enddate + quantity...
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
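    To illustrate the reservation check with minimal logic: a device is free for [start, end) exactly when no existing reservation satisfies existing.start < end AND existing.end > start. A small sketch using SQLite from Python (table and column names are assumptions based on the reply above):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE reservation (
            device_id INTEGER, user_id INTEGER,
            start_time TEXT, end_time TEXT)""")

        def is_free(device_id, start, end):
            row = conn.execute(
                """SELECT COUNT(*) FROM reservation
                   WHERE device_id = ?
                     AND start_time < ?   -- existing reservation begins before we end
                     AND end_time   > ?   -- ...and ends after we begin
                """, (device_id, end, start)).fetchone()
            return row[0] == 0

        conn.execute("INSERT INTO reservation VALUES (1, 42, '2024-01-01 10:00', '2024-01-01 12:00')")
        print(is_free(1, '2024-01-01 11:00', '2024-01-01 13:00'))  # False - overlaps
        print(is_free(1, '2024-01-01 12:00', '2024-01-01 13:00'))  # True  - back-to-back is fine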

  • Best practices for realtime communication between background tasks and main app

    I am developing (in fact, porting to a WinRT Universal App) an application connecting to Bluetooth medical devices. In order to support background connectivity, it seems best to use background tasks triggered by a device connection. However, some of these devices provide a stream of data which has to be passed to the main app in real time when it is active, e.g. to show an ECG on the screen. So my task ideally should receive and store data all the time (both background and foreground) and additionally let the main app receive it live when it is in the foreground.
    My question is: how do I make the background task pass real-time data to the app when it is active? The documentation talks about using storage, but that does not seem optimal for real-time messaging. Looking for best practices and advice. The platform is Windows 8.1 and Windows Phone 8.1.

    Hi Michael,
    Windows Phone apps have resource quotas. To prevent those quotas from interfering with real-time communication functionality, background tasks using the ControlChannelTrigger and PushNotificationTrigger receive guaranteed resource quotas for every running task. You can find more information at
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh977056(v=win.10).aspx (see the "Background task resource guarantees for real-time communication" section). ControlChannelTrigger is not supported on Windows Phone, so have a look at the PushNotificationTrigger class:
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.applicationmodel.background.pushnotificationtrigger.aspx
    Regards,

  • Best practices for securing communication to internet based SCCM clients ?

    What type of SSL certs does the community think should be used to secure traffic from internet-based SCCM clients? Should 3rd-party SSL certs be used? When doing an inventory of the clients' configuration, for example, in order to run reports later, how is the data protected during transit?

    From a technical perspective, it doesn't matter where the certs come from as there is no difference whatsoever. A cert is a cert is a cert. The certs are *not* what provide the protection, they simply enable the use of SSL to protect the data in transit
    and also provide an authentication mechanism.
    From a logistics and cost perspective though, there is a huge difference. You may not be aware, but *every* client in IBCM requires its own unique client authentication certificate. This will get very expensive very quickly and is a recurring cost because certs expire (most commercial cert vendors rarely offer certs valid for more than 3 years). Also, deploying certs from a 3rd party is not a trivial endeavor -- you more or less run into chicken-and-egg issues here. With an internal Microsoft PKI, if designed properly, there is zero recurring cost and deployment to internal systems is trivial. There is still certainly some cost and overhead involved, but it is dwarfed by what comes with using a third-party CA for IBCM certs.
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Best practice for class communication

    Hi all,
    I'm trying to get my head around how to communicate through scope. I hope/think this is a pretty basic question but I'm pretty stuck in the AS2 mentality.
    To make a simple example, say I have 2 classes, ClassA and ClassB. ClassA loads an instance of ClassB:
    package {
      public class ClassA {
         private var _classB:ClassB;
         public function ClassA() {
             _classB = new ClassB();
         }
      }
    }
    class ClassB {}
    Now I want ClassB to communicate with ClassA. It's easy for ClassA to invoke a method on ClassB, _classB.somePublicMethod();, but how does ClassB communicate back up to ClassA?
    I know one method is making a custom event, adding a listener in ClassA that binds to ClassB, and having ClassB dispatch that custom event. Is there any easier way I'm not aware of? Some kind of parent/super/etc. method of talking to the class that instantiated ClassB?
    edit:
    In case it matters or changes the approach someone would recommend, what I'm trying to do is make some touchscreen-style scrolling functionality, the same stuff built into any smartphone. If you touch the screen and drag, the touchscreen in certain contexts knows you want to scroll the information you're looking at. If you just tap something, however, it knows that you want to interact with what you tapped. That's what I'm trying to figure out how to do in AS3.
    I have a touchscreen and I have navigation that doesn't fit all on the same screen. I'm trying to allow the user to scroll the navigation up/down to see all of the options but also let the user tap a nav item to let them select it. So I'm making a reusable class that's pretty abstract on the concept of allowing me to point to an object and say this object can be clicked, but if the user drags they intend to scroll a display.
    The actual end use of this is, as a personal learning exercise, I'm trying to duplicate a doodle/paint program my 3yr old son has in flash. He has a touchscreen laptop and he can scroll through a long list of artwork he can touch and it paints it on the screen. I'm trying to mimic that functionality where I try to determine if someone is trying to drag/scroll a list or select something in the list.
    That said, in that context ClassA is the painting app and ClassB is a reusable class that's applied to a navigation area, whose job is to inform ClassA whether the user intends to drag or select something. I make my nav items in ClassA and then instantiate ClassB. Because I need to 'wait' until ClassB tells ClassA what the user is doing, it's not a return-value type of situation. I have to wait for ClassB to figure out whether the person is trying to click or drag, then communicate that back to ClassA so it knows how to handle it.

    I will definitely use an event. I've never made a custom event, but the top google search (always a blog) has good comments on the approach, so I'm giving that approach a go.
    Anyone think that approach is bad/outdated/refineable?
    edit:
    Man, it's just one of those days. This is all working fine and well but I can honestly say in no project have I ever needed to make a custom event and I've been using flash since the early 90s with nothing but telltarget.....
    I do have one question, because I (admittedly) spent a freaking hour (*sigh*) on trying to figure out why I'd dispatch an event and it wasn't picked up.
    A quick pseudo example:
    package {
      public class ClassA {   // ClassA needs to extend EventDispatcher (or a DisplayObject) for addEventListener to exist
        private var _classB:ClassB = new ClassB();
        public function ClassA() {
            this.addEventListener(CustomEvent.WHATEVER, _doSomething);
        }
        // _doSomething func......
      }
    }
    class ClassB {
        // somewhere in ClassB, once it knows click vs. drag
        // (ClassB must be on ClassA's display list for parent to refer to ClassA):
        // parent.dispatchEvent(new CustomEvent(CustomEvent.WHATEVER, { foo:"bar" }));
    }
    class CustomEvent extends Event {
        public static const WHATEVER:String = "whatever";
        public var params:Object;
        public function CustomEvent(type:String, params:Object, bubbles:Boolean = false, cancelable:Boolean = false)  {
                super(type, bubbles, cancelable);
                this.params = params;
        }
        // clone/toString overrides.....
    }
    Is it better semantics to do it that way with parent.dispatchEvent(), or should I have done _classB.addEventListener(...) and then called this.dispatchEvent() inside ClassB?
    What screwed me up for an hour was that I was just calling this.dispatchEvent() instead of parent.dispatchEvent(), and the event was never seen in the parent. In hindsight it makes obvious sense that I need to dispatch the event on whatever object is listening for it, but somehow that wasn't really explained in the tutorials (like the one I linked). Their examples created the event, the listener, and the dispatcher all in the same place. I'm dispatching the event from a separate class, so it didn't occur to me that I needed to send that event back to the object the listener was registered on... Oy vey.

  • What are the best printer deployment practices for Win Server 2012 R2?

    I have about 40 printers deployed around my school. My users move around my building and log into several computers throughout the day. I need to consistently get the correct group of printers to map to the computer upon startup and set a default printer.
    I have tried to use GP, but the inability to set a default printer within the computer policy is a crippling issue. I have tried using third party software (Kaseya DPM) where I can set printers and default printers, but the real-world, daily deployment is
    inconsistent. I have a logon script that I used to use, but the printers were trying to map before the network was established; the printer mapping was failing because the script was too fast.
    This is not a new idea. What is the best way to consistently deploy printers that are mapped to specific computers?

    On 04.08.2014 at 22:10, VermontTech wrote:
    > but the inability to set a default printer within the computer policy is
    > a crippling issue
    Why not use a user policy with item-level targeting on the computer name?
    Martin
    How about reading a GOOD book about GPOs?
    NO THEY ARE NOT EVIL, if you know what you are doing:
    Good or bad GPOs?
    And if IT bothers me - coke bottle design refreshment :))

  • Best datafile management practices for iMovie

    I work across two different machines (my desktop and my laptop) on iMovie projects. My desktop has nearly all of my video stored on a 6 TB external drive, but when I travel I want to bring along a project with me on a portable 2 TB drive. And then when I return, I want to be able to move the updated project back to my desktop and the 6 TB drive.
    I will want to also be able to bring along all of the related event files so I can add or change video content, if need be.
    One of the problems is that various resources tend to disappear (music from my iTunes library and video clips and/or pictures from Aperture).
    So my question is: what is the best approach to this problem?
    Suggestions?

    You are asking really good questions.
    e2photo wrote:
    Is file management using Final Cut Pro any easier?  How large is the learning curve for Final Cut Pro?
    File Management is quite similar in Final Cut Pro X, but you have additional options. The paradigm of Events, Projects, and Archives is the same. You have more ability to use keywords, and more ways to search through your events. You can also rename clips for your convenience. You can't rename clips in iMovie without messing up the date metadata.
    The learning curve is fairly large for Final Cut Pro. The basic interface is similar to iMovie, but at every stage you have many options and much more control. At minimum, you would want to get a good book on FCP, and you might invest in some of the video training that is available from Ripple Training or Larry Jordan.
    If all you need is to make simple movies of 25 minutes or less, iMovie may be plenty. If you need more, then Final Cut Pro X is a great product.
    It might be analogous to going from iPhoto to Aperture and Photoshop. It is doable, but some training helps.
    Here are some of the key differences...
    1) iMovie uses Apple Intermediate Codec as the editing codec. Final Cut Pro uses ProRes 422 as the editing codec. (There are also more exotic flavors like ProRes 444, for the times when you need alpha channel in your video). 
    Apple Intermediate Codec is OK for iMovie, because in iMovie you will only render a clip once, and the render is done when you Share your project. Final Cut Pro is capable of making much longer movies with complex effects. You might do multiple renders on a clip, so it is important to have a codec that maintains high fidelity to the original even through multiple renders; ProRes 422 is that codec. In addition, Apple Intermediate Codec uses a 4-2-0 colorspace while ProRes uses a 4-2-2 colorspace. I am not an engineer, but this means potentially more accuracy and more colors available.
    2) Final Cut Pro makes it easy to edit with multiple camera angles. Think of a rock video where there is a music track and you cut seamlessly between a wide angle shot, the lead guitar, the singer, and the drummer. This is easy in FCP. It is possible in iMovie, but only with a lot of manual effort.
    3) You have a lot more control over color management in FCP.
    4) You have a lot more control over audio in FCP. In addition to built-in sound editing capabilities, you have all the synthesizers from Logic Pro available to you as well.
    5) In your editing, you have a lot more capability in FCP.
    6) In essence, you can do everything you can do in iMovie, but a lot more, and with a lot more control.
    7) In addition, Final Cut Pro will import your iMovie Projects and Events, if you like, and you can continue editing them in FCP. However, you can continue to edit them in iMovie as well. And if you do it correctly, it does not take up much more space on your hard drive. In other words, you will have a Final Cut event with a hard link to your physical clips, and you will have an iMovie event with a hard link to the same physical clips. It will look like 2 separate events, but you only use up the space of one event. The only downside of doing this is that you will be editing in Apple Intermediate Codec.
    8) You can do simple greenscreen in iMovie. In FCP you have full control over making a compositing key and using it as you like.
    9) You can do simple speed changes in iMovie (fast or slow motion). In FCP, you have infinite control over the speed changes and you can select from 4 different modes of blending the frames so they look smooth.
    I could go on and on, but that should give you an idea.
    Here is the Help manual for FCP, if you want to browse around.
    http://help.apple.com/finalcutpro/mac/10.0.5/#

  • Best iPad/general coverage for travelers?  (including theft, water)

    I'll be traveling in Ecuador for a year and am looking to add some insurance/coverage to my new iPad 3. I say it's new, but I've had it now for 85 days, so I know some insurance/coverage options might be out if they require the device to be purchased within the last 30 days.
    I'm not sure if I would be better off with a general traveler's insurance but I figure I would like to have this insurance here and abroad.
    I need options for international coverage including:
    theft
    drops
    spills, water, humidity
    possibly loss (I'm not sure if this is a must yet)
    can be purchased after 30 days
    Here is what I have seen thus far:
    Worth Ave group
    Safeware
    Gocare (not sure if they are international)
    ProtectMe (not sure if they are international)
    Protect Your Bubble? (not sure if they are international)
    Securranty (not sure if they cover theft and looks like they have a 30 day limit)
    Obviously I have more research to do on these options but couldn't find everything right away.
    Does anyone have experience with these or recommendations? I would love to hear it.
    Thank you.

    I did take a look at the case but since I'll be living in the humidity, I wouldn't be willing to open the case and actually use the iPad. If it wasn't for the humidity, this would be a nice option.
    So instead I'm looking for a waterproof case, but unfortunately most of them are basically high-end ziploc bags.
    I'm hoping that Lifeproof comes out with their case before I leave or Otterbox releases an Armor Series case for the iPad, but I'm not holding my breath.

  • Using TestStand How I pass an array of data into a DLL (IPC3.dll) for serial communication

    I am using a DLL created by another party. I have the list of the C declarations. I have been able to write a sequence that can turn the COM port ON/OFF or select a different port, but I have not been able to send or receive any data. I have created an array of bytes (unsigned 8-bit integers) to send and receive data, but nothing goes out or in.

    Hi Toro,
    There is an example in your \Examples\AccessingArrays\PassingArrayParametersToDLL directory that illustrates exactly how to pass TestStand arrays as arguments to dll functions. The source files for the .dll are located in the same directory.
    For more information on passing arrays as parameters to modules you should read the "DLL Flexible Prototype Adapter" section of Chapter 13 in the TestStand User Manual, and pay special attention to the subsection entitled "Array Parameters". You can access the User Manual from the TestStand Start Menu group, the TestStand Sequence Editor's Help menu, the \Doc directory, or online at the following link:
    http://digital.ni.com/manuals.nsf/websearch/50B69DA356B8D38C86256A0000660E6B?OpenDocument&node=132100_US
    Jason F.
    Applications Engineer
    National Instruments
    www.ni.com/ask
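    Outside TestStand, the same kind of call (handing a byte array and its length to a DLL function) can be sketched with Python's ctypes; the function name and prototype below are invented for illustration and are not the real IPC3.dll interface:

        import ctypes

        ipc = ctypes.CDLL("IPC3.dll")

        data = (ctypes.c_ubyte * 4)(0x01, 0x02, 0x03, 0x04)   # array of unsigned 8-bit integers
        ipc.IPC3_Send.argtypes = [ctypes.POINTER(ctypes.c_ubyte), ctypes.c_int]  # hypothetical prototype
        ipc.IPC3_Send.restype = ctypes.c_int

        status = ipc.IPC3_Send(data, len(data))               # pass the buffer and its length
        print("status:", status)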

  • Best Practice for starting & stopping HA msg nodes?

    Just set up a cluster and was trying start-msg ha and getting an error about the watcher not being started. Does that have to be started separately? I figured start-msg ha would do both.
    For now I put this in the startup script. Will the SMF messaging.xml work with HA? What's the right way to do this?
    /opt/sun/comms/messaging64/bin/start-msg watcher && /opt/sun/comms/messaging64/bin/start-msg ha
    -Ray

    ./imsimta version
    Sun Java(tm) System Messaging Server 7.3-11.01 64bit (built Sep 1 2009)
    libimta.so 7.3-11.01 64bit (built 19:54:45, Sep 1 2009)
    Using /opt/sun/comms/messaging64/config/imta.cnf (not compiled)
    SunOS szuml014aha 5.10 IDR142154-02 sun4v sparc SUNW,T5240
    Sun Cluster 3.2, and we are following the ZFS doc. I haven't actually restarted the box yet; I'm still doing configs and testing, and noted that.
    szuml014aha# ./start-msg
    Warning: a HA configuration is detected on your system,
    use the HA start command to properly start the messaging server.
    szuml014aha# ./start-msg ha
    Connecting to watcher ...
    Warning: Cannot connect to watcher
    Critical: FATAL ERROR: shutting down now
    job_controller server is not running
    dispatcher server is not running
    sched server is not running
    imap server is not running
    purge server is not running
    store server is not running
    szuml014aha# ./start-msg watcher
    Connecting to watcher ...
    Launching watcher ... 11526
    szuml014aha# ./start-msg ha
    Connecting to watcher ...
    Starting store server .... 11536
    Checking store server status ...... ready
    Starting purge server .... 11537
    Starting imap server .... 11538
    Starting sched server ... 11540
    Starting dispatcher server .... 11543
    Starting job_controller server .... 11549
    Also, I read the recommendations in the ZFS / messaging doc:
    http://wikis.sun.com/display/CommSuite/Best+Practices+for+Oracle+Communications+Messaging+Exchange+Server
    If I split the messages and indices, will there be any issues should I need to imsbackup and imsrestore the messages to a different environment without the indices and messages split?
    -Ray
    Edited by: Ray_Cormier on Jul 22, 2010 7:27 PM

  • Handling multichannel serial communication on FPGA

    NI recently released cRIO modules for serial communication on RS232 and RS485/422.  We need multiple RS232 channels for our RIO systems and previously the solution would have been to use a serial port server and then communicate with that using raw TCP/IP. Having the serial communication handled by RIO modules seems a bit more elegant though so I'm looking into using the RS232 module NI9870 instead.
    NI has supplied an example of how to use the 9870 - a serial loopback on port 0. I've successfully modified this to be a general driver for a selectable port (added port selection and initialization); however, I really need to code a solution that handles ALL 4 channels in parallel. I'm not sure what would be the best way to do that though. DMA FIFOs seem to be the best option for host-target communication, however in this case there are 4 channels... and I may have multiple modules, so at most I could need to handle as many as 32 ports. The example uses interrupts for synchronization... which of course is also a limited resource. All in all I'll probably figure out a way to do this, but if anyone else has done something similar already it would be great to hear your views on this.
    Coming from the Windows-programming world (this is the first FPGA work I've done) I was also hoping to make this driver as general as possible, maybe even be able to write a wrapper containing both VISA based serial functions and the FPGA host code. That way it could be transparent how the port is accessed.
    This would require one code base that could handle a variety of module configurations (well, 1 to 8 NI 9870 modules in the chassis), however that does not seem to be feasible(?). When you read or write to a port you cannot just refer to it loosely by number - the reads and writes are tied to a specific port. This means that, unlike in a Windows application where you can let the user configure that he wants to use COM port number XX and the serial functions will accept that number regardless of whether the port exists or not (returning an error if it does not exist or is in use), the FPGA code has to have all the items it will call pre-defined. If the chassis can have 16 ports on 4 9870 modules, it does not seem possible to use the same FPGA code if the chassis currently only has 1 module... Is this the reality, or is it possible to create a more flexible solution?
    If it turns out that the FPGA has to be reprogrammed whenever another serial module is added (or one is removed), it would seem much better to drop the modules and use a port server and TCP/IP instead... that way adding new ports would only be a question of configuration (IP and port), not reprogramming. That may not always be possible (if you need the compact size and ruggedness only a single RIO chassis would offer), but in our case it is - it's just not as elegant hardware-wise.
    MTO

    Well, the multichannel experiment only wrote data, it did not read (see attached VIs); however, the concept can be used in both directions: add a header to the FIFO data, where the header tells the recipient what port the data is going to or comes from... Then use e.g. a state machine to read the FIFO, split the header and data, and route the data to the loop handling that port.
    If you are going to read data you will run into one of the downsides of this, namely that you will need a central communications manager that reads the incoming data and distributes it to the requesting VI. This way you can have more parallel access to both read and write, however the reads will have to be routed through this handler. How big a performance gain this would give is unknown though. You still only have one DMA FIFO for each direction, so there is a limit to how parallel things can get, but logically this might get you closer than the NI example...
    MTO
    Attachments:
    9870 MultichannelIO Experiment.zip ‏1002 KB
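    As a rough, language-neutral illustration of the header-plus-payload framing described above (shown in Python; the one-byte port ID and one-byte length fields are assumptions, not the attached VIs' actual format):

        import struct

        def pack_frame(port: int, payload: bytes) -> bytes:
            # header = port ID byte + payload length byte, followed by the data itself
            return struct.pack("BB", port, len(payload)) + payload

        def route_frames(stream: bytes):
            # state-machine style demux: split the FIFO stream back into (port, data) pairs
            frames, i = [], 0
            while i < len(stream):
                port, length = struct.unpack_from("BB", stream, i)
                frames.append((port, stream[i + 2:i + 2 + length]))
                i += 2 + length
            return frames

        fifo = pack_frame(0, b"\x12\x28\x04") + pack_frame(3, b"OK")
        print(route_frames(fifo))   # [(0, b'\x12(\x04'), (3, b'OK')]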
