LabVIEW coding (8.2) explanation

I was hoping someone could explain the attached block diagram/coding that I have come
across in LabVIEW. Please could you explain it step by step, describing
each block in layman's terms, as I am a new user. The coding was
created for a dedicated CJC-compensated data logger that has 9
individual thermocouples connected to it, and also a thermopile. The
coding also covers a data logger for a pressure transducer. An annotated
diagram would also help. I would very much appreciate any help given.
Attachments:
block diagram.jpg ‏81 KB

It is impossible to really inspect a VI by looking at a picture of the block diagram, because many things (e.g. the configuration of the big blue express VIs) are not visible.
In a nutshell (yes, this code was probably written by a nutcase), you have:
1. The entire code is contained in a big while loop that repeats its contents once everything in it has completed. Most likely, the slowest node is the wait at the bottom, which is either one second or 60 seconds, depending on the value of the green wire carrying the state of the "start recording" button. Since the button is only read every second or every 60 seconds, it might take a very long time until the program reacts to a user change of this button (first big mistake).
2. The wait and the two DAQ assistants all execute in parallel because they don't depend on each other.
3. The two DAQ assistants interact with an analog input device. The upper one gives a single point (thin orange wire, displayed as pressure(bar)) while the lower one gives an array (thick orange wire). An array is a collection of identical datatypes that can only differ in value. Each element is identified by its index; the first element is index 0.
4. The raw array gets two operations: the first 9 elements are taken as a subset and displayed as Temp(c). (We cannot tell what kind of indicator that is; it could be a chart, graph, simple array indicator, etc.) The 10th element is taken separately using "index array" and displayed in the thermopile indicator.
Steps 2, 3 and 4 always execute.
5. If the "start recording" button has not been pressed and there is something in the FALSE case of the case structure, it will also execute, but we cannot tell from the picture.
6. If the "start recording" button is switched on, two things happen. A: the loop slows down to 60 seconds per iteration. B: the code inside the TRUE case of the case structure will also execute. We also cannot tell the mechanical action of the button, so we don't know if it switches only for one iteration or until the user turns it back off.
7. In the TRUE case, the various values (pressure, temp array, thermopile) are built back into a single array with 11 elements. This array is written to a file with the filename entered into the path control. We cannot tell how it is written (e.g. appended or not) because we cannot see the express VI configuration.
8. If the stop button is pressed, the code will stop once everything in the loop has finished. Because of potential race conditions, this could take up to 120 seconds (another big mistake).
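Since only a picture of the diagram is available, here is a rough C sketch of the control flow just described. The helper names are purely illustrative stand-ins for the VI's buttons, DAQ assistants and file-writing case; this is a reading of the picture, not the VI itself:

#include <stdbool.h>

extern bool read_stop_button(void);     /* front panel stop button */
extern bool read_record_button(void);   /* "start recording" button */
extern void acquire_and_display(void);  /* the two DAQ assistants, steps 2-4 */
extern void append_row_to_file(void);   /* contents of the TRUE case, step 7 */
extern void wait_seconds(int s);        /* the wait at the bottom */

void main_loop(void)
{
    bool stop = false;
    while (!stop) {
        /* both buttons are sampled only once per iteration */
        bool recording = read_record_button();
        acquire_and_display();
        if (recording)
            append_row_to_file();
        /* in the real VI the wait runs in parallel with the rest, but as
           the slowest node it still sets the loop period */
        wait_seconds(recording ? 60 : 1);
        stop = read_stop_button();
    }
}

Written this way, both big mistakes are easy to see: a button press is only noticed at the next iteration boundary, so the UI can lag by a full 60-second period, and a stop request can wait out up to two of them.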
All clear?
LabVIEW Champion. Do more with less code and in less time.

Similar Messages

  • Help improve my style and quality of LabVIEW coding

    Hello,
    I am thinking of doing the CLD certification for LabVIEW and have started preparing by reading some literature (code style guidelines, etc.) and also trying to implement the newfound knowledge into my coding habits. I have also read that local variables are bad, and that the best practice is to avoid them.
    However, I am having difficulty implementing all of the material I read about LabVIEW coding into my VIs - which are almost always coded in the same manner as the one I attached. Basically all of the LabVIEW applications I make at my company require reading DAQ inputs, processing the acquired data and doing some control algorithms, which send control signals to DAQ outputs, and writing all of the data to a file.
    I have attached a sample VI (with dummy DAQ subVIs). If you have the time - any ideas, comments, considerations or improvements on any area of the VI are greatly appreciated and welcomed. I think this will be the best way for me to learn new LV tips and tricks.
    Thank you!
    Attachments:
    LabVIEW coding test.zip ‏375 KB

    Jeff Bohrer wrote:
    OK I've seen worse. (actually not too bad but...)
    Use wire labels especially when you have wires that don't fit on 1 screen
    You show a lack of understanding of how timed loops differ from while loops (event structure in a timed loop with dt=0, Elapsed Timer in a timed loop). Someday you'll say WTH was I thinking, spawning unique execution systems for those.
    You could have saved a lot of locals and data duplication by enqueueing data from the DAQ loop to the Write File loop instead of using a notifier (see the sketch after this list).
    Sometimes an Array of Clusters can be a good idea. Clusters of clusters of the same data type can often be harder to maintain - want to add a new element, or maybe test a single point? Just init the array of clusters (from a file, perhaps?). Saves a lot of confusion.
    Saving timestamps to file as strings is a pet peeve of mine. Now how do you graph vs time? Check out My Idea.
    There is no reason to avoid creating subVIs and making the main BD fit on one screen. In fact it can help to show the code's high-level structure.
    Straighten them wires!
    Most of your issues would be solved by re-thinking your data structures- A good place to concentrate on to improve.
    Keep Slinging- you'll get there
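    To make the queue suggestion concrete: a queue buffers every sample on its way from the producer (the DAQ loop) to the consumer (the file loop), whereas a notifier only keeps the latest value, so samples can be lost. A minimal C sketch of that handoff - LabVIEW's Queue functions do all of this for you; the names and sizes here are illustrative:

    #include <pthread.h>

    #define QUEUE_CAP 64

    typedef struct {
        double items[QUEUE_CAP];
        int head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_empty, not_full;
    } Queue;

    void queue_init(Queue *q)
    {
        q->head = q->tail = q->count = 0;
        pthread_mutex_init(&q->lock, NULL);
        pthread_cond_init(&q->not_empty, NULL);
        pthread_cond_init(&q->not_full, NULL);
    }

    void queue_put(Queue *q, double sample)    /* called by the DAQ loop */
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == QUEUE_CAP)          /* block instead of dropping */
            pthread_cond_wait(&q->not_full, &q->lock);
        q->items[q->tail] = sample;
        q->tail = (q->tail + 1) % QUEUE_CAP;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    double queue_get(Queue *q)                 /* called by the file loop */
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)                  /* sleep until data arrives */
            pthread_cond_wait(&q->not_empty, &q->lock);
        double sample = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->lock);
        return sample;
    }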
    Ok, will do.
    Can you explain what the difference is? Or point me to some good literature on this topic? 
    How exactly can you do that? I tried sending data via notifier, but I could not send different data types.
    I do not quite understand what you mean.
    Also, I do not understand what the problem here is. The graph shows data vs time.
    Will try.
    Mark Yedinak wrote:
    OK, I did take a look at the code now. Here are some additional points to consider.
    Document, document, document. None of your controls or indicators are documented. Also, document your code more to help someone looking at it to actually understand it better.
    Definitely avoid the use of all of the local variables. Think of a design pattern that separates the processing tasks from the UI. If you have one task handling the UI you really don't need to use local variables.
    Avoid unnecessary bends in your wires.
    Definitely get your block diagram to fit on a smaller screen. These days it shouldn't be larger than 1600x1200 or close to that.
    Modularize your code. Use more subVIs
    You have a classic off-by-one error in your code. All of your loops use the stop button to exit. However, you always check the value at the beginning of the loop iteration, so you will execute the loop one more time than necessary (see the sketch after this list).
    Avoid unnecessary frame structures. You have a frame structure in the second loop that does nothing for you. Everything downstream of it will execute in the correct order because of the data dependencies. The frame structure serves no purpose here.
    Try to avoid deeply nested case structures. Once I start to see that happening in my code I rethink my logic. At a minimum I would build an array of the various Boolean values and convert them into a number and use that to chose the appropriate case to execute rather than nesting three or more case structures.
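    The off-by-one point, sketched in C (hypothetical helpers; the LabVIEW fix is to wire the button straight to the conditional terminal so the freshest value decides the exit):

    #include <stdbool.h>

    extern bool read_stop_button(void);
    extern void do_work(void);

    void leaky_loop(void)
    {
        while (1) {
            bool stop = read_stop_button();  /* sampled at the top... */
            do_work();
            if (stop)                        /* ...so the test uses a stale value */
                break;                       /* and the body ran one extra time */
        }
    }

    void tight_loop(void)
    {
        do {
            do_work();
        } while (!read_stop_button());       /* freshest value decides the exit */
    }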
    Will do.
    How can I accomplish all the tasks in my application without the use of local variables? I admit, this is the main reason I opened this thread ... because I have tried to imagine a design architecture that would work without local variable, but was unsuccessful. Can someone please explain in detail how to do this on this specific application.
    Will try to.
    I will try, but I make my block diagram to the width of my screen, but vertically I do not limit its size - so I can easily scroll up and down to move around.
    I try to create as many subVIs as possible, but only for code that is reusable in other projects. Is it also better to have a lot of single-use subVIs in every project? Doesn't this add unnecessary overhead and slow the application?
    What would be the correct way to stop the application?
    Ok.
    Ok. I only do your proposed solution on nested cases with a depth of at least 4. 3 nested structures were still acceptable for me, but I will listen to your proposal and try to improve on this.
    Thank you all for taking the time to look at the code and writing your comments.
    I already have the CLAD certification, but this was only a test. I think I will be able to try the CLD exam sometime next year, but I have to learn and implement a different coding style in my everyday applications (at work). With your help I am sure I will be able to accomplish this - reading literature is one thing, but actual projects are another.

  • LabVIEW coding using wnaspi32.dll

    Hi to all, I need your help with LV coding against wnaspi32.dll, as I have no experience doing that at all!
    I would appreciate it if any of you could tell me the equivalent LV code for
    1)
    typedef struct
    {
    BYTE SRB_Cmd; // ASPI command code = SC_EXEC_SCSI_CMD
    BYTE SRB_Status; // ASPI command status byte
    BYTE SRB_HaId; // ASPI host adapter number
    BYTE SRB_Flags; // ASPI request flags
    DWORD SRB_Hdr_Rsvd; // Reserved
    BYTE SRB_Target; // Target's SCSI ID
    BYTE SRB_Lun; // Target's LUN number
    WORD SRB_Rsvd1; // Reserved for Alignment
    DWORD SRB_BufLen; // Data Allocation Length
    BYTE *SRB_BufPointer; // Data Buffer Point
    BYTE SRB_SenseLen; // Sense Allocation Length
    BYTE SRB_CDBLen; // CDB Length
    BYTE SRB_HaStat; // Host Adapter Status
    BYTE SRB_TargStat; // Target Status
    void (*SRB_PostProc)(); // Post routine
    void *SRB_Rsvd2; // Reserved
    BYTE SRB_Rsvd3[16]; // Reserved for expansion
    BYTE CDBByte[16]; // SCSI CDB
    BYTE SenseArea[SENSE_LEN+2]; // Request Sense buffer
    } SRB_ExecSCSICmd;
    2.) In the following example, how do I do the (a) and (b) correctly?
    This example sends a SCSI Inquiry command to host adapter #0, target #0, LUN
    #0.
    SRB_ExecSCSICmd MySRB;
    DWORD ASPIStatus;
    (a) char InquiryBuffer[32];
    MySRB.SRB_Cmd = SC_EXEC_SCSI_CMD;
    MySRB.SRB_HaId = 0;
    MySRB.SRB_Flags = SRB_DIR_IN | SRB_POSTING;
    MySRB.SRB_Hdr_Rsvd = 0;
    MySRB.SRB_Target = 0;
    MySRB.SRB_Lun = 0;
    MySRB.SRB_BufLen = 32;
    MySRB.SRB_SenseLen = SENSE_LEN;
    (b) MySRB.SRB_BufPointer = InquiryBuffer;
    MySRB.SRB_CDBLen = 6;
    MySRB.SRB_PostProc = PostProcedure;
    MySRB.CDBByte[0] = SCSI_INQUIRY;
    MySRB.CDBByte[1] = 0;
    MySRB.CDBByte[2] = 0;
    MySRB.CDBByte[3] = 0;
    MySRB.CDBByte[4] = 32;
    MySRB.CDBByte[5] = 0;
    ASPIStatus = SendASPI32Command((LPSRB)&MySRB); // dispatch the request to wnaspi32.dll
    I am really struggling to get the *SRB_BufPointer (data buffer pointer) to work, so that the wnaspi32.dll function returns data into the pointer's buffer.
    Q: How do I get this buffer pointer correctly, and how do I read the buffer data afterwards?
    I hope I am giving enough information on my problem. I will really appreciate it if anyone out there can help me with this. Really.
    Hear from ya.
    Cheers and warmest regards
    ian
    [email protected]
    Ian F
    Since LabVIEW 5.1... 7.1.1... 2009, 2010
    依恩与LabVIEW
    LVVILIB.blogspot.com

    Well, I don't think NI should spend much time on this. The configuration screen necessary to configure all those options really would be a mess to deal with, and you would still need all the detailed knowledge about C data types, how they are passed between functions, particular byte boundary alignment, etc.
    Without this knowledge you still couldn't use these options, and once you have the knowledge, writing a simple wrapper DLL which sits between LabVIEW and your complicated data structure and possible callback API is actually much simpler than trying to configure and possibly wire your API interface in LabVIEW.
    Of course, knowing the layout of the LabVIEW data structures, you can basically use a trick to get the pointer to a LabVIEW string into a data structure to be passed to a DLL, but it is still far from simple. The callback pointer, however, can't be tricked from a LabVIEW diagram (at least not without so much magic that writing a DLL is actually a hundred times faster than trying to get this to work in LabVIEW).
    You need to know that LabVIEW byte arrays are basically a pointer to a pointer to a buffer with an additional i32 at the beginning indicating the size. So after allocating a LabVIEW array you can place a Call Library Node on the diagram with the following configuration:
    library: LabVIEW
    function name: MoveBlock
    return value: void
    first parameter: LabVIEW array handle of uInt8
    second parameter: uInt32 pointer to value
    third parameter: int32 value
    Then wire the array to the first parameter, a 0 constant to the second and a 4 constant to the third. The resulting value from the second parameter is the pointer to the effective LabVIEW array buffer, and adding 4 to this value gives the pointer to the beginning of the actual array.
    Make sure the array wire stays valid until the Call Library Node returns (by wiring it to a structure boundary which depends on some output of the Call Library Node, and not branching the wire anywhere else until that point), and resize the array after the Call Library Node returns, using the actual size returned somewhere from the Call Library Node. Make sure the initial array is as large as required by the API function, or as you have indicated in one of the parameters to the API function, as otherwise the function will crash.
    That leaves filling in the structure, which will also be a bit of a pain, but as I said, writing a wrapper DLL is almost always faster than trying to do such things in LabVIEW.
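    For readers who think in C, here is a hedged sketch of what this MoveBlock trick amounts to. MoveBlock itself is a real LabVIEW manager function (declared in extcode.h); the array layout below is written out only for illustration, and the 4-byte copy assumes 32-bit LabVIEW, where pointers are 4 bytes:

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed layout of a 1-D LabVIEW u8 array: an i32 element count
       followed immediately by the data. A handle is a pointer to a
       pointer to this block. */
    typedef struct {
        int32_t dimSize;
        uint8_t data[1];
    } LVU8Array, *LVU8ArrayPtr, **LVU8ArrayHdl;

    /* exported by the LabVIEW runtime, declared in extcode.h */
    extern void MoveBlock(const void *src, void *dst, size_t len);

    uint32_t array_data_pointer(LVU8ArrayHdl h)
    {
        uint32_t p = 0;
        MoveBlock(h, &p, 4);  /* copy the pointer stored in the handle */
        return p + 4;         /* skip the i32 size to reach the data   */
    }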
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Urgent LabVIEW coding required

    sir, can you please build a VI for me that-
    1] From Relay Off to On Position-
    can initiate a timer (when 'x' is ON), as well as turn ON a numeric indicator that displays the corresponding input voltage, and when 'y' is ON, stops both the timer and the indicator; they both hold their values and must have a reset button to reset the values...
    2] From Relay On to Off Position-
    Also, now when 'x' and 'y' are both ON: when 'x' goes OFF, the timer should initiate and the indicator should display the corresponding input voltage, and when 'y' goes OFF, both the timer and the indicator should stop, hold their values, and must reset their values..
    Sir, will it be possible to use two timers and two indicators for this purpose.. sir, please help me out, I'm in a complete mess...

    Please keep all related questons in one place:
    http://forums.ni.com/ni/board/message?board.id=170&thread.id=325397&jump=true
    Thanks!
    LabVIEW Champion. Do more with less code and in less time.

  • Increase Speed of acquisition of a LabVIEW coding

    Hi folks,
    I'm using digital USB 6509 boards to do some multiplexing.
    Then I use USB-6255 boards to get some voltage measurements.
    The acquisition is a bit slow and I'm trying to increase the speed.
    I'm attaching the code. If someone has some hints to increase the VI speed, and/or has some experience with
    these boards, I would appreciate it.
    The code is attached.
    Best Regards,
    Rui Silva
    Attachments:
    Inj_adjacente_16_velocidaded.png ‏187 KB

    Rsilva wrote:
    Hi crossrulz,
    thanks for your reply. I'll look into it later today.
    I know the code needs cleaning, but it's kept this way for better observation and task priority; I'll clean it later.
    Another point, if I may ask, concerns the start tasks. The reason for the case structures is so that some tasks don't start without the others.
    I have to say that I don't know another way to make them comply with my needs. Can you offer some advice?
    Best Regards,
    Rui Silva
    In my experience, "later" usually becomes "never," no matter how pure your intentions.
    Bill
    (Mid-Level minion.)
    My support system ensures that I don't look totally incompetent.
    Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.

  • Matlab *.mat / LabView *.mlv

    Comparison:
    test code in Matlab: 
    AA = [1 2 3 4 5];
    save AA-matlab.mat AA; 
    and test code in a LabVIEW MathScript Node
    AA = [1 2 3 4 5];
    save 'AA-LV-matlab.mat' AA;
    Both scripts make a file, but different files.
    The file made using LabVIEW is NOT readable in Matlab! Save in LabVIEW is not the same as save in Matlab. Why this incompatibility? How to make compatible files?
    Check Spelling in this editor gives the suggestion LabView > Labile. True?

    Danigno wrote:
    VesaU,
    actually what I think he tried to say is that you can still use MatLab, but save your data from LabVIEW in txt files (which can be read by MatLab later). Then you won't need to use the MathScript, just regular LabVIEW coding...
    Yup, that was exactly what I was saying. Thanks for the clarification. 
    VesaU wrote:
    Just saving a vector to a .mat file? Yes, this is my target. Why? Let's assume that we have raw data files from two NI 6259s. We may get 2 million samples/second; a short test gives me 100 MB or more of data. Saving data to a text file for export to Matlab analysis is not a very good idea. Why Matlab? Matlab is widely used for mathematical analysis. We use Matlab and our partners use it.
    I tried to say that the TXT file method is not a method for today or tomorrow. I used it in the 1990s; it is almost the same as pen and paper. I want something better for tomorrow. A TXT file is OK if it is used once a year or once a month. For daily work it is not a solution.
    Solutions for tomorrow may be:
    1. Matlab reading LabVIEW data files directly.
    2. Converting LabVIEW data files (TDM/TDX files) to MAT files, with all relevant data from the TDM/TDX file going into a Matlab struct.
    You're starting to be a little defensive here for no reason. You had provided NO information as to what you were saving, or how much data you were saving, and to belittle a suggestion that was based on virtually no facts is like biting the hand that tries to feed you. Not a very good way to make friends.
    As pointed out by Danigno, I wasn't telling you NOT to use Matlab, but rather to change how you were saving the file, since the data was apparently (though I wasn't sure, since you provided no information) being saved in LabVIEW. Obviously, saving that amount of data as a text file is not warranted. If you are still going to collect the data in LabVIEW, then a binary file is something that can be used by both LabVIEW and Matlab, though you have to watch the endianness, since LabVIEW defaults to big-endian and I believe Matlab defaults to little-endian.
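    To make the endianness point concrete: raw doubles written by LabVIEW in its default big-endian order have to be byte-swapped before use on a little-endian reader. A minimal C sketch, assuming IEEE-754 doubles on both sides:

    #include <stdint.h>
    #include <string.h>

    double be_to_host_double(const uint8_t be[8])
    {
        uint8_t swapped[8];
        for (int i = 0; i < 8; i++)
            swapped[i] = be[7 - i];      /* reverse the byte order */
        double d;
        memcpy(&d, swapped, sizeof d);   /* reinterpret as a double */
        return d;
    }

    (On the LabVIEW side you can also set the byte order input of the binary file functions and skip the swap entirely.)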
    2. The change from NI-DAQ to NI-DAQmx was not a success story. Now we have four times bigger data files in Matlab, because NI-DAQmx uses a longer numerical representation than before. We use the old NI-DAQ with PCI and PCMCIA cards, but new USB hardware forces us to use NI-DAQmx.
    I don't understand what you mean by this. What functions were you using, and how were you saving the data? 
    I have asked for help from MathWorks and National Instruments to solve the problem. Both companies say that the fault is on the other side. I think that both companies should face the truth: it should not be the end user's problem. It may happen that the end user finds a third software + hardware combination which works. I think that it is easier to change the DAQ than the analysis package.
    I'm not here to defend either NI or Mathworks, but I think that's debatable.

  • LabVIEW Application Builder Crash : ntdll.dll faulting module

    Hello guys,
    I'm currently experiencing problems while building my project (LV2012 (32-bit), Windows 7). At the very end of the application build, LabVIEW crashes with no explanation (screenshot in French, sorry).
    Windows error logging follows :
    Event 1000 Application Error
    Faulting application name LabVIEW.exe, version : 12.0.0.4024, time stamp : 0x4fea600e
    Faulting module name : ntdll.dll, version : 6.1.7601.17725, time stamp : 0x4ec49b8f
    Exception code : 0xc0000374
    Fault offset : 0x000ce6c3
    Faulting process id : 0x16fc
    Faulting application start time : 0x01cda5350f41f87c
    Faulting application path : C:\Program Files (x86)\National Instruments\LabVIEW 2012\LabVIEW.exe
    Faulting module path: C:\Windows\SysWOW64\ntdll.dll
    Report Id : 8e5b1344-1128-11e2-97d8-c0f8dae81bad
    I had the same problem on LabVIEW 2011. I tried to build my application with another computer (also LV2012 32 bits and Windows 7 64 bits) and I do not have any error.
    What is wrong with my computer?
    For French people, the topic in French
    Thank you for your help,
    Regards,
    Quentin

    Solution found:
    - Uninstall all NI products
    - Clear registry keys related to LabVIEW
    - Delete the National Instruments directory in Application Data
    - Reinstall LabVIEW

  • Fulltime LabVIEW in Oxford UK

    Software Engineer: LabVIEW.
    The company is growing and has a requirement to build on its software development capability.
    The Role:
    The position is for a full-time software engineer to join a small team developing pharmaceutical / laboratory instrument software. There are elements of motion control, hardware driver testing, architecture design, GUI design and algorithm development. Specifically, Cobalt is looking for LabVIEW coding experience for equipment control and automation, but experience with databases, GUI development, and other high level languages in a Windows environment are desirable. In addition the team is developing statistical tools for spectroscopic materials identification with an emphasis on algorithm development and chemometrics within environments including MATLAB.
    Experience:
    Instruments are validated against international regulatory standards and you will ideally have experience developing software within quality environments. Prior use of bug tracking and version control systems will be a significant advantage. Knowledge of data handling and user control for compliance with 21CFR part 11 will be of benefit. The development activities will be both within the company laboratory and in collaboration with customer groups.
    The Candidate:
    It is essential that the candidate can work well as a critical member of a small team. A committed and energetic individual at degree level, you will have a background and practical experience in equipment control in a laboratory and preferably industrial environment.
    The candidate must be eligible to work in the United Kingdom.
    To apply, please e-mail a current CV to [email protected] using ‘Software’ in the subject field.

    Hi, Thanks for your reply.
    Have you any idea how I can submit my question to apple?
    For a company dealing in communications equipment they are very difficult to communicate with

  • Very challenging situation...LabVIEW experts are all invited......

    Greetings,
    the scenario that I'm having now is as follows:
    I'm a 1st-year PhD student, trying to find a new, innovative and impressive project in the wireless communications systems area. While doing my literature review and exploring my favorite knowledge base, the www.ni.com website, I happened to read some titles about the new multicore technology. After watching most of NI's webcasts, I got many ideas on how to exploit this technology together with LabVIEW to boost my yet-undefined PhD wireless communications project; in other words, I want to speed up an existing wireless technology (e.g. beamforming algorithms for smart antennas). However, when I met my supervisor I gave him a quick demo, and he wondered which wireless topic could make use of multicore technology together with LabVIEW; I suggested improving wireless systems using multicore technology and LabVIEW coding. But a class-A student told us that the multithreading problems of multicore technology have already been solved by the software giants, and that there is nothing special about using LabVIEW to design parallel processing, as many other text-based languages can do the same, since their compilers are already designed for multicore systems. I actually got depressed after he told me this, because I intend to use LabVIEW (my most favorite software) with multicore technology to simulate the performance of a wireless system and publish a paper about the topic.
    In short, gentlemen, I wish you would guide me and tell me anything related to multicore technology with LabVIEW: any new problem you would like to see simulated in LabVIEW, so that I can work on it. Please also refer me to the latest updates and webcasts about multicore, as I have seen almost all the webcasts now available on the NI website. Any suggestion, any feedback, any new PhD research idea - all are welcomed and appreciated.
    thanks a lot
    please help me find something related to wireless, anything new for research.....please
    Labview Lover

    Labview Lover wrote:
    again please, any suggestions of new ideas about wireless communications systems??.....please help me guys.....you are my only hope.
    Please refer me to clear links, as I am a beginner using LabVIEW...
    BIG THANKS TO YOU MY PEOPLE
    Not that it will be of much use to you, but I have another Sea_Story I can tell.
    I'm sitting in a project kick-off meeting with my PhD customer, whose first language is not English. He speaks English well, but he has a tendency to talk with his hands.
    So he's explaining to us all of the sensors I will interface with, and as he does, he makes a hand gesture showing where each sensor gets applied to the test subject. So my boss has to ask how we were going to measure the core body temperature. There are two methods they will use. The first uses a wireless transmitter the subject will swallow. It looks like a big purple pill (you don't have to decide between the blue or the red). So of course he makes a hand gesture to simulate taking a pill. By the time he mentioned the second method the second time, along with the hand gesture, I had to stop him and tell him he was freaking me out (man). So I asked him to try and control the "thumbs-up" gesture when mentioning the anal probe.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Labview and multicore technology

    Labview Lover wrote:
    ...use labview (my most favorite software) with multicore tech to simulate the performance of a wireless system, and publish a paper about this topic.
    Why not do a project using LabVIEW to create different models of wireless technologies, and create simulations to evaluate their potential performance? The benefit of multicore programming would be to optimize the speed at which you can run simulations of the models. For instance, you can include spectral analysis using LabVIEW.
    An idea would be to do the above for RAKE receivers or Software Defined Radios. There are many things you can research and investigate using LabVIEW. You can create the hardware and compare the physical results to the model. There are lots of project ideas in this area.
    R

  • Exchanging a 2D array between VBA and LabVIEW

    I have a project: half is written in VBA, half is coded in LabVIEW. Both parts are written to operate manually. The first one prepares data and charts it on chart sheets; the second one smooths data with ready-made LabVIEW curve-fitting VIs. I want the operator to be able to use them seamlessly; that means calling LabVIEW from VBA 2007 code, collecting X,Y data into a 2D array, sending it to a LabVIEW DLL to smooth the curves, and sending back an array with the smoothed data to Excel through VBA.
    My concern is how to declare the 2D array parameters in VBA and in LabVIEW so they work together (array of Variants or Doubles, parameters declared ByRef or ByVal).
    I have seen similar topics in the NI forum, but they dealt with VB.NET, not VBA.
    I run Office 2007 and LabVIEW 2009 Full Development System with the Application Builder.
    Regards
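    For illustration, here is the kind of C prototype a LabVIEW-built DLL ends up exposing when a 2D double array is configured as an array data pointer with its dimension sizes passed alongside. The name and parameter list are hypothetical; the authoritative signature is in the header file the Application Builder generates:

    #include <stdint.h>

    /* hypothetical export: xy is a rows x cols block of doubles,
       smoothed is a caller-allocated output buffer of the same size */
    int32_t SmoothCurve(double *xy, int32_t rows, int32_t cols,
                        double *smoothed, int32_t outRows, int32_t outCols);

    In VBA such a function is brought in with a Declare statement, typically passing the first element of each array ByRef so the DLL receives a plain pointer. Note that VBA stores 2D arrays column-major while LabVIEW is row-major, so one side has to transpose.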

    bassevellois wrote:
    Thank you for the answers.
    The examples provided with LabVIEW for ActiveX exchanges are not very clear. I want to use LabVIEW as the ActiveX server and Excel, through VBA, as the client.
    There's only one example that I know of that ships with LabVIEW, and it shows how to access LabVIEW from VB using LabVIEW's ActiveX server. It opens a VI and runs it. Doing it from VBA is very similar. You want to add the reference to the ActiveX server in your VBA project. If you're controlling the LabVIEW development environment directly, then the library to include in your list of references will be "LabVIEW x.x Type Library". If you are controlling a LabVIEW-built app, then the library name will be whatever you specified when you built the app.
    I have no idea how to create and expose the classes, methods or properties needed in VBA from my LabVIEW VI, which was at the beginning a standalone application.
    I don't quite understand what you are referring to. The ActiveX server in LabVIEW and in a LabVIEW-built app is pre-defined. You do not export anything from your LabVIEW VI. The available properties and methods are a function of the LabVIEW ActiveX server, not of your VI. If you are building an app then you just need to enable the ActiveX server for the application, and that's done in the build specification.
    How to reproduce my example:
    1. Make sure you've enabled the ActiveX server in LabVIEW.
    2. Download the attached 2D_VBA_Test VI to your C:\Temp folder. You can save it anywhere, but you'd have to change the path in the Excel VBA macro.
    3. Download the attached Excel workbook.
    4. Open the workbook and click the button. If LabVIEW is not running it should launch LabVIEW, open the VI, and populate the front panel array with the values from the spreadsheet.
    Attachments:
    2D_VBA_Test.vi ‏6 KB
    VBA to Excel.xls ‏24 KB

  • Why does LabVIEW think the Analyser is not connected?

    Hi All,
        this almost works, but not quite.
        I have a specialised analyser from LeCroy called a CATC Protocol Analyser System Model 10K. It is for observing and supplying traffic on a SATA disc interface. LeCroy supply an automation library of ActiveX components for controlling it, called SataAutomation.tlb. After registering this with a utility I found called ccrpregutil.exe, the automation library became visible in LabVIEW.
        As an exercise to prove that this can work from LabVIEW, I have written a small VI, called Code Interface.vi (attached), which uses two of the (simplest) methods from the library. They should return the serial number and the firmware version. The analyser is connected (via USB) to the computer and is powered up. Running the attached VI results in the following message: "Error -2147220975. Exception occured in CATC.SataAnalyser.1: Analyser device is not connected Code Interface.vi."
    So it thinks the analyser is not connected when physically it is. Does anyone know if I have to do something extra to "connect" it? There is not a method in the .tlb file to do this. All the methods seem to assume it is connected.
    The chances that anyone has seen this exact problem with this exact instrument are small, but perhaps someone has seen something similar.
    thanks
    George
    PS: this is LabVIEW 2009
    Attachments:
    Code Interface.vi ‏12 KB

    Hi George,
    I have been looking into this problem for you. First of all, I must advise you that it is not possible for us to replicate this problem since we do not have the required hardware. As a result, the support I am going to be able to provide on this issue is limited. I had a look into the error that you are receiving and it seems that LabVIEW can offer no explanation as it is undefined, it may be that this is a custom error message/code that was created by LeCroy.
    I found this link to a Knowledge Base (KB) article that may be of use for you, it relates to communicating with 3rd party hardware in LabVIEW. Do you know if LeCroy provide another methodology you can try for controlling the device such as LabVIEW VIs or a DLL? Otherwise, I can only advise that a high success rate is usually achieved when using ActiveX with LabVIEW and that this issue may be due to a problem in the ActiveX library that is provided by LeCroy. For more information on ActiveX in LabVIEW, see this link. I will continue looking into this for you but I must reiterate that there is only so much I can do without having the hardware to recreate the problem.
    Best Regards,
    Christian Hartshorne 
    Application Engineer
    National Instruments
    Andrew McLennan
    Applications Engineer
    National Instruments

  • LabVIEW code for flowmeter sensor

    Hi all
    I am using an NI USB-6229 DAQ and an Omega FTB-1302 flowmeter sensor. I made the proper wire connections. Now it's time for the LabVIEW coding, but I don't know what to do; I am new to LabVIEW. Please help.
    Best regards

    Look at some of the DAQmx examples in Help...Find Examples. Your sensor most likely outputs a 0-5 V or a 4-20 mA signal. You can read those signals into LabVIEW using the examples. You will probably need to apply a scaling factor to get a meaningful value.
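    For instance, a sketch of that linear scaling in C, assuming a 4-20 mA sensor span and a known full-scale flow (check the FTB-1302 datasheet for the real numbers):

    /* map a measured loop current onto the flow range:
       4 mA -> zero flow, 20 mA -> full scale */
    double current_to_flow(double amps, double full_scale_flow)
    {
        const double lo = 0.004;   /* 4 mA  */
        const double hi = 0.020;   /* 20 mA */
        return (amps - lo) / (hi - lo) * full_scale_flow;
    }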

  • How to control TTL device using LabVIEW?

    I have a device which requires TTL (+5 V) to switch it on/off. How can I provide this TTL signal using LabVIEW? I guess I should install a PCI board in my computer to connect to this device, but how do I do my LabVIEW coding to control it?

    Hi,
    First of all, if your board is an NI board, then after correct installation you should see it in the Devices & Interfaces section of the NI program Measurement & Automation Explorer (MAX).
    If the board is an NI board, or has drivers for LabVIEW, you will find the functions in LabVIEW to command your board.
    The other possibility, if your board is not compatible with NI drivers, is that you need to call the board's command functions from its DLL.
    Bye
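    If it is an NI board, the same digital output can also be driven from the NI-DAQmx C API, which mirrors the LabVIEW DAQmx VIs one-to-one. A minimal sketch; "Dev1/port0/line0" is an assumed device/line name taken from MAX, and error checking is omitted:

    #include <NIDAQmx.h>

    int set_ttl(unsigned char level)   /* 1 = +5 V, 0 = 0 V */
    {
        TaskHandle task = 0;
        uInt8 data = level ? 1 : 0;
        int32 written = 0;

        DAQmxCreateTask("", &task);
        DAQmxCreateDOChan(task, "Dev1/port0/line0", "",
                          DAQmx_Val_ChanPerLine);
        DAQmxWriteDigitalLines(task, 1, 1, 10.0,   /* 1 sample, autostart */
                               DAQmx_Val_GroupByChannel,
                               &data, &written, NULL);
        DAQmxClearTask(task);
        return 0;
    }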
