I2C LabVIEW Implementation

Dear All,
I am developing an application that communicates with a µC (Texas Instruments M430F160) using I2C.
The communication must run over a serial port (RS-232); this is a requirement.
I cannot use any additional hardware. Searching the internet, I have found several solutions using FPGAs and so on, but nothing implemented directly in LabVIEW.
Has anybody found a solution for this?
Thanks in advance for your support.
Regards
Andrea

Hi All,
After collecting your questions, I discussed with other colleagues and got some additional info about the system and requirements.
The complete system has 2 µC's: one is the Master and the second one the Slave.
The communication between the µC's is done using I2C.
In normal operation, there is software (loaded in an external EEPROM) that takes care of everything: communication, settings, interrupts, and so on.
In debugging mode, the PC sends commands to the Master over RS-232.
Currently we do this with TeraTerm in VT100 terminal mode, and the command scripts are written in Notepad.
Attached are an example of the code and a picture of the communication system between the PC and the Master µC.
Our target now is to replace this terminal with a GUI developed in LabVIEW.
Before starting on the complete application, I wanted to create some small VIs to check the communication over the serial protocol.
In any case, yesterday I did some additional tests (using a logic analyzer) and found that what LabVIEW sends to the Master is not the same as what TeraTerm sends.
In particular the termination characters were different.
For example, I sent 2 commands:
WI F3
RI 01
These commands read 1 byte from the register at address Fe.
In the attachment you can find the datalogs from LabVIEW and TeraTerm.
I think my VIs are missing something that handles the termination characters, and (second problem) at the moment I have no solution for reading the data back from the µC other than using the Read VI.
Sorry again for my previous posts and the imprecise info. Hopefully some of you have ideas.
Thanks in advance.
BR
Andrea
Attachments:
System.png ‏90 KB
Code Example.txt ‏2 KB
Labview_vs_TeraTerm.xlsx ‏13 KB
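Since the mismatch is in the termination characters, the fix is to make the LabVIEW VI send the same line ending TeraTerm does. A minimal sketch of the framing, in Python for illustration; the CR terminator here is an assumption based on TeraTerm's default transmit setting, so check the logic-analyzer capture for what the Master actually expects:

```python
# Sketch: frame terminal commands the way TeraTerm does before writing
# them to the serial port. TeraTerm appends a line ending (CR by default)
# to each transmitted line, whereas a plain write sends exactly the bytes
# in the string, so the terminator must be appended explicitly.

TERMINATOR = b"\r"  # assumption: TeraTerm's default "CR" transmit setting

def frame_command(cmd: str) -> bytes:
    """Encode a terminal command and append the line terminator."""
    return cmd.encode("ascii") + TERMINATOR

# The two commands from the post:
for c in ("WI F3", "RI 01"):
    print(frame_command(c))
```

In LabVIEW the equivalent is concatenating a carriage-return constant onto the command string before the VISA Write, and configuring the termination character on the VISA serial session so that reads stop at the same terminator.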

Similar Messages

  • LabVIEW Implementation of VXI-11 Discovery Protocol

    Has anyone implemented the VXI-11 discovery protocol using LabVIEW? I know I can use MAX to detect LXI instruments, but I'd like my code to be able to perform inventory functions independently of MAX. Is there a way to call the discovery routine within MAX from LabVIEW?
    Chris
    Practical Physics, LLC
    www.practicalphysicsllc.com
    National Instruments Alliance Partner
    Certified LabVIEW Developer

    Hello Chris,
    If the device is not registered in MAX, then it will not show up using this method. However, if you know the IP address of the instrument, then you can add it manually. There is no way to have MAX scan for instruments programmatically. Here is a tutorial on LXI.
    Thank you,
    Ryan
    National Instruments
    Applications Engineer

  • Is there a function in labVIEW (implemented in Imaq for example) to add a row or more rows to an image ??

    Hi,
    I would like to know if this is possible. I know it's possible to replace a row with another using IMAQ, but I don't want to replace a row in an image; I want to ADD a row to an image. Is this possible?
    Thanks in advance
    Lennaert

    By using a combination of the IMAQ Vision VI's and LabVIEW's array functions, you can insert a row into an image.
    In the IMAQ Vision palette, there are two functions that convert images to and from arrays: IMAQ ImageToArray and IMAQ ArrayToImage. With the ImageToArray function, you can turn your image into a 2D array of pixel intensities. Once you have this array, you can use the Insert Into Array function in the array palette to add your row. (Note: This row would be an array of intensity values) Then you can convert your new 2D array back into an image using the ArrayToImage VI.
    Hope this information helps!
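    The round trip described above can be sketched with plain arrays; Python lists stand in here for the 2D intensity array that IMAQ ImageToArray produces, and insert_row is just an illustrative helper, not an IMAQ VI:

```python
# Sketch of the array step in the ImageToArray -> insert -> ArrayToImage
# round trip: a new row of intensity values is inserted into a 2D array
# with ordinary array operations.

def insert_row(image, index, row):
    """Return a copy of the 2D intensity array with `row` inserted at `index`."""
    if len(row) != len(image[0]):
        raise ValueError("row length must match image width")
    return image[:index] + [list(row)] + image[index:]

img = [[10, 20, 30],
       [40, 50, 60]]
new = insert_row(img, 1, [70, 80, 90])  # insert between the two rows
```

In LabVIEW, the same middle step is the Insert Into Array function applied to the 2D array between the two IMAQ conversion VIs.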

  • Can't step into CVI step that calls LabVIEW DLL?

    Windows 2000 SP1
    TestStand 2.01f
    LabWindows CVI 6.0
    LabVIEW 6.1 Runtime
    We have a framework based on TestStand and CVI. A customer has supplied us
    with a DLL written in LabVIEW 6.1 and packaged with the Application Builder
    that we need to call from a CVI test library DLL. They say they can't
    supply us with a non-LabVIEW implementation. We installed the LabVIEW 6.1
    run-time. We added code to the CVI test library to dynamically load and
    unload the LabVIEW DLL using LoadLibrary and FreeLibrary. LoadLibrary is
    called in a function in the MainSequence Startup step group, FreeLibrary is
    called in a function in the MainSequence Cleanup step group.
    Things run as expected when the CVI DLL is built as Release or Debug and the
    CVI
    adapter is configured to run in-process.
    However, if we try to debug CVI DLL by building it as Debug and configuring
    the CVI adapter to run in an external instance of CVI, things hang the first
    time we try to step into, or run, a CVI test library step that calls one of
    the functions in the LabVIEW DLL. On the Applications tab in Task Manager,
    the item named LabVIEW is marked as 'Not Responding'. The only way to
    recover is to kill the LabVIEW process, which takes down CVI and TestStand
    with it. If none of the LabVIEW DLL functions are called, no problems are
    seen (no hang).
    I assume the problem has something to do with the library getting mapped to
    the wrong process space (TestStand v. the external CVI instance). Is there
    any way to solve this problem? Any ideas or suggestions would be
    appreciated.
    Joe

    > Out of curiousity, what happens if you attempt to debug your DLL from
    > CVI? For example, configure TestStand to run its steps 'inProcess'...
    > but then close the TestStand application and in CVI, configure it so
    > that the Specified External Process dialog points to TestStand's
    > SeqEdit.exe (CVI launches TestStand when debugging the project). Once
    > TestStand is running, run your test and set your break points as
    > usual, you should be able to step into the CVI code if nothing else.
    > If not, I would be interested in hearing what problems you encounter.
    When 'debugging' SeqEdit from CVI, we experienced no lock up.
    Thanks for this suggestion. Debugging from CVI is a workaround for now,
    though not highly desirable, as it is the reverse of the normal debug procedure
    (user can't step into CVI from TestStand). Still would like to know if
    'normal' debugging of this problem is possible.
    > The nice thing about debugging directly from Labwindows/CVI while
    > TestStand runs 'inprocess' is that you can avoid some library linking
    > errors, which may be the source of the troubles you are seeing.
    The problem appears to be general to LabVIEW DLLs called from an external
    instance of CVI under TestStand. We were able to reproduce the problem with
    a simple LabVIEW VI compiled to a DLL, then called from a simple CVI DLL
    under TestStand. We will package up some sample code and submit it to NI
    tonight or tomorrow.
    Thanks for you help.
    Joe

  • Is it possible to establish an I2C communication using PCIe-6363?

    Hi,
    Is it possible to establish an I2C communication using PCIe-6363? If yes any example would be appreciated.
    All the I2C LabVIEW solutions I have found use LabVIEW FPGA modules, but the PCIe-6363 doesn't seem to be compatible with FPGA modules.
    However, the PCIe-6363 can generate digital input/output signals at up to 1 MS/s, which is more than enough for 100 kHz I2C communication.
    Thanks
    Charles

    Thanks for your link,
    it clearly explains that a per-cycle-tristating DAQ device is necessary, so I can't use the PCIe-6363 for I2C communication.

  • Difference between the Matlab and LabVIEW Hann (Hanning) functions

    Hi All,
    I'm hoping someone out there can satisfy my curiosity. I have been porting some code from Matlab to LabVIEW and came across a difference in the two implementations of the Hann function. Essentially:
    LabVIEW has array length = n
    http://zone.ni.com/reference/en-XX/help/371361J-01/lvanls/hanning_window/
    Matlab has array length -1 = N.
    http://www.mathworks.co.uk/help/signal/ref/hann.html
    (where n= N in the two implementations)
    As I'm not very clued up on signal processing I was hoping someone could explain why there is a difference in implementation and why array length was chosen for the LabVIEW implementation. According to wolfram the parameter a is full width at half maximum:
    http://mathworld.wolfram.com/HanningFunction.html
    http://mathworld.wolfram.com/FullWidthatHalfMaximum.html
    Is the difference just due to the interpretation of what this value should be?
    I've attached a modified version of the window comparison example to illustrate the difference.
    Thanks,
    Attachments:
    Window Comparison from example.vi ‏32 KB

    ^N wrote:
    LabVIEW has array length = n
    Matlab has array length -1 = N.
    Could part of the confusion be due to the fact that LabVIEW arrays start with index zero and Matlab arrays start with index one?
    I'll look at your code later...
    LabVIEW Champion . Do more with less code and in less time .
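    The two help-page formulas can be compared directly. Here is a sketch (Python for illustration) of the two definitions as given in the links above: LabVIEW's form divides by the array length n (a periodic window, so the trailing zero is never reached), while Matlab's symmetric form divides by N-1 and is zero at both endpoints:

```python
import math

def hann_labview(n):
    # LabVIEW Hanning Window: w[i] = 0.5*(1 - cos(2*pi*i/n)), i = 0..n-1
    # (denominator is the array length n: the "periodic" form)
    return [0.5 * (1 - math.cos(2 * math.pi * i / n)) for i in range(n)]

def hann_matlab(N):
    # Matlab hann (symmetric): w[k] = 0.5*(1 - cos(2*pi*k/(N-1))), k = 0..N-1
    # (denominator is N-1, so both endpoints are exactly zero)
    return [0.5 * (1 - math.cos(2 * math.pi * k / (N - 1))) for k in range(N)]

lv = hann_labview(8)
ml = hann_matlab(8)
```

    The periodic form is the natural one for FFT-based spectral analysis (the window tiles seamlessly when repeated), while the symmetric form is conventional in filter design, which may explain the differing defaults.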

  • Pulse generation PCI-6220

    Hi there,
    I'm an absolute newbie to LabVIEW and hope to get some advice, as I'm completely stuck at the moment.
    I'd like to generate variable TTL pulses on 3 different lines. Since the PCI-6220 card only has two counters, I have to go with the normal hardware-correlated DIO lines.
    For now I´d be happy to see it working just for one line as follows:
    __|   |________|           |_ ...
    I have to be able to set the pulse width of the high time for the first and the second pulse, as well as the two different low times. This scheme should then be repeated n times. That gives 5 variables: the lengths of the pulses in ms ('Low1', 'High1', 'Low2', 'High2') and the number of repetitions ('n').
    I may be horribly wrong about this, but I think working with the duty cycle doesn't work for this application, does it?
    Assuming I use the frequency generation of a counter as the sample clock for my pulse generation, it seems rather simple: take, for instance, 'High1' pulses of the counter clock to generate the first pulse, then 'Low2' pulses for the subsequent low pulse, and so forth. Could anyone give me a hint on how to do this, or are there better/other ways to achieve it? Are there perhaps VIs available I could start with (I haven't found suitable ones in this forum or in the built-in LabVIEW library)?
    Many, many thanks in advance for any help!!
    Robert

    Rob:  I looked at your example earlier when I was near my LV machine.  From memory:
    1. I think I recall that you specified PFI 2 as the sample clock source for the digital task while using CTR 0 to generate the clock.  According to this doc, the default output pin for CTR 0 is "terminal" 2.  However, that does NOT turn out to be another name for PFI 2.  Rather, terminal 2 is designated as PFI 12 as can be seen here.   (This stuff is also visible in MAX when you select your device, right-click and choose "device pinouts").
    2. I recall you used a U32 array version of DAQmx Write. You may need to use the U8 version on your 6220 board.  Also, the init values you wrote before the loop alternate between 255 (all bits high) and 0 (all bits low).  The values you write inside the loop alternate between 1 (LSB high, all other bits low) and 0 (all bits low).
    3. You defined the digital task for finite generation, filled its buffer before the loop, then attempted to keep overwriting it inside the loop.  These are not mutually consistent.  If you want finite generation, fill once only.  If you want continuous generation, it'll take some care not to overwrite too soon.
    4. Minor nit: it may not matter in your app, but it's often best to start the digital task before starting the counter task that generates its clock.  You can accomplish this by simply routing the error cluster from the digital task's DAQmx Start up to the counter task's Create Virtual Channel.
    I'm not near LV now to look at the recent example from Christian M.  Hope it suits your needs...
    -Kevin P.
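    The clock-per-sample approach described in the thread boils down to filling a digital buffer with one sample per tick of the counter-generated sample clock. A sketch of that buffer construction (Python for illustration; the 1 ms tick is an assumed sample-clock period):

```python
# Build the digital sample buffer for the Low1/High1/Low2/High2 scheme,
# repeated n times. Each array element is one period of the counter-
# generated sample clock that drives the DIO task.

def build_pattern(low1, high1, low2, high2, n, tick_ms=1.0):
    """Return a 0/1 sample list; durations are in ms."""
    ticks = lambda ms: int(round(ms / tick_ms))
    one_cycle = ([0] * ticks(low1) + [1] * ticks(high1) +
                 [0] * ticks(low2) + [1] * ticks(high2))
    return one_cycle * n

# Example: 2 ms low, 1 ms high, 3 ms low, 2 ms high, repeated twice
pattern = build_pattern(low1=2, high1=1, low2=3, high2=2, n=2)
```

    In LabVIEW the resulting array would be written once to a finite digital output task (per point 3 in Kevin's reply: fill the buffer once, don't overwrite it in a loop), with the counter output routed as the task's sample clock.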

  • Problem with Lookout 6.5 tag sharing

    This is a WHOPPER, although a weird one...
    I have one grad student grabbing shared variables from my Lookout server processes for inclusion in his LabVIEW implementation.
    After I upgraded the server's Lookout to 6.5, he reported problems reading values from one of the four processes running on my server.
    1.  Oddly enough, he had no problem displaying the tags in MAX or in an ActiveX Hypertrend within his LabVIEW executable, but he could not map the variables to his shared variables, and thus he could not display them or include them in his Citadel.
    2.  Using Tag Monitor, all the values from the "troubled" process on my Lookout server, come up as "access denied".
    3.  Using Tag Monitor on the Lookout server (while all four processes were running in Lookout), I had the same results...."access denied".
    4.  All Lookout clients (both 6.5 and 6.2) have no problems accessing the troubled process tags both by expression and Lookout Hypertrend.
    The troubled process has the last alphabetical process name and the last alphabetical file name.  All four processes are loaded using the startup settings in Lookout.ini.  Lookout runs as a service on a Windows 2008 x86 Hyper-V virtual machine.
    Because the processes are all production, I have not yet been able to mess with process names and loading order to possibly elaborate on the symptom.
    Is it "maybe" an issue that Lookout 6.5 can't properly share out (via the shared variable engine) variables from more than 3 processes at a time?
    Please advise ASAP.
    Ed

    The access denied error is more likely a security issue. In Lookout Object Explorer, right-click on the process and select Configure Network Security. If I give Everyone "No Access", or just remove all permissions, I get the access denied error in Tag Monitor.
    Actually, Lookout has no limitation on the number of exported processes or tags. All the tags in each process are tags in the Logos Server. LabVIEW accesses these tags via the Logos protocol. If Lookout sets the security and LabVIEW doesn't have a sufficient access level, it will get the access denied error, for example when Lookout gives no access to the process.
    You can configure the Network Security for the process, as well as for any objects. By default, Everyone has Read & Write permission.
    Ryan Shi
    National Instruments

  • Share Data between Pro and Designer Form

    Good evening,
    Old Form:
    I recently inherited an Adobe Pro form created a few years ago. The naming conventions are horrible, and they refuse to replace the form so that it can spit out an XML file. But they want the data to be filled into an Access database.
    New Form:
    I was able to create an Adobe Designer form, with proper naming conventions, that exports an XML file which can be imported into the Access database.
    Help:
    Is there a way to link the old form data to the new form, with the naming conventions being different?
    Ideally, I'd like to create a button that displays a dialog box to select the old form and imports the data without converting to an XML first… is this possible?
    Thanks in advance!

    Hello Pieter,
    Ok, it sounds like you are asking about 5 different things here.  Let me break them up and respond individually.
    There is no overhead inherent to using a MathScript node on RT.  Since the MathScript code is implemented on top of LabVIEW code, there will be varying levels of performance with respect to a native LabVIEW implementation, but simply having the node itself does not cause any overhead.
    On a Windows platform (or other desktop platform), the answer is the same: the node does not introduce any overhead into your algorithm.
    If you use the application builder to create a compiled program that uses the run-time engine, you are no longer able to change the .m file.  Or, to be a little more technically accurate, the LabVIEW run-time engine does not have a compiler.  Thus, you can change the .m file all you want, but the actual .m file that is used by your built application is the version that was present when you built the application.
    The behavior you observe with MathScript executing new code will only happen if you are running a VI with the full development version of LabVIEW.  We need the LabVIEW compiler in order to generate new code when you update a .m file and this is only present with the full development version of LabVIEW.
    The search path is only necessary when you are developing your VI on the host.  You can set a search path by right-clicking on "My Computer" in your project, choosing Properties, and then "MathScript: Search Paths."  Note: your VI will need to be in the "My Computer" target to locate your .m file.  After you have created your MathScript node, you can drag the VI to the target.  You are correct that the .m file is compiled when the program is deployed and you do not need to transfer the .m file to the target.
    Grant M.
    Staff Software Engineer | LabVIEW MathScript | National Instruments

  • LV listbox size limitation

    Is there a limitation to the number of entries a listbox will display? My attached program indicates there is. If you run it, I can only scroll through to 32767 items. Ultimately, I want a program where I can list a directory of files (80,000+) and allow the user to multi-select for further operations. I experienced this limit with that program in development, so I put the attached program together to confirm the same limitation. I know I can develop an alternate method, but I would like to just use the tools (listbox) provided.
    Thanks.
    Attachments:
    Listbox test.vi ‏1663 KB

    Maelstrom wrote:
    > If it runs on Windows it is (ultimately) a standard Windows control.
    > The consistency is via an abstraction layer. Case in point: LabVIEW
    > for Mac looks just the same (or nearly) as LabVIEW for Windows and
    > yet Mac does not have ActiveX support.
    Well, I didn't say it was an ActiveX control, only that IF you are not
    talking about an ActiveX control, it is completely a LabVIEW
    control not relying on any control implementation in the Windows
    standard controls.
    I can assure you that LabVIEW controls are ENTIRELY implemented by
    LabVIEW and do not use any Windows standard controls in any way. LabVIEW
    does go to the extent of trying to imitate the look and feel of the
    native platform controls if you use the so-called dialog controls, but
    that is a complete LabVIEW implementation.
    >Yeah well. I did not really want to dredge into the details but yes,
    >the actual problem is with the Windows ScrollBar control. Its not just
    >the ListBox as you might well imagine. Most Windows controls which use
    >a *standard* scrollbar will have the same issue. Some controls implement
    >thier own custom scrollbar internally and these can access up to 4 gigs
    >but these are far and few between.
    I'm pretty sure even the scroll bar is a complete LabVIEW
    implementation. It would be strange to implement all other controls
    completely in LabVIEW (and believe me, they are) and only import the
    scroll bar from the Windows standard controls. A simple check on
    non-Windows systems would show if the list box can handle more than 32000
    elements there. I really doubt it.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
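    For what it's worth, the 32767 ceiling reported in the question is exactly the maximum value of a signed 16-bit integer, which suggests (this is only an inference, not something the thread confirms) that some index along the way is stored as an I16:

```python
import struct

# 32767 is the largest value a signed 16-bit integer (I16) can hold:
i16_max = 2 ** 15 - 1
print(i16_max)

# One more item than that no longer fits in an I16:
try:
    struct.pack("<h", i16_max + 1)
except struct.error:
    print("32768 overflows a signed 16-bit integer")
```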

  • Error 1097 while deploying

    Hi,
    I am having a problem when deploying a model from Veristand (Windows) to a PXI equipment.
    Let's start from the beginning: I have written a Simulink model and compiled it for the VeriStand 2014 target. This generates a DLL, which I load in VeriStand on my Windows PC. From VeriStand (Windows) I am able to run the model, and the results are the same as in Simulink. Now I want to run the model on the PXI (PXI-8176 with Phar Lap ETS 13.1). When I try to deploy the model to the PXI target, I get the following error:
    • Start Date: 05/05/2015 11:16
    • Loading System Definition file: C:\Users\Public\Documents\National Instruments\NI VeriStand 2014\Projects\DC_Motor\DC_Motor_Ecosim\DC_Motor_Ecosim.nivssdf
    • Initializing TCP subsystem...
    • Starting TCP Loops...
    • Connection established with target Controller.
    • Preparing to synchronize with targets...
    • Querying the active System Definition file from the targets...
    • Stopping TCP loops.
    Waiting for TCP loops to shut down...
    • TCP loops shut down successfully.
    • Unloading System Definition file...
    • Connection with target Controller has been lost.
    • Start Date: 05/05/2015 11:16
    • Loading System Definition file: C:\Users\Public\Documents\National Instruments\NI VeriStand 2014\Projects\DC_Motor\DC_Motor_Ecosim\DC_Motor_Ecosim.nivssdf
    • Preparing to deploy the System Definition to the targets...
    • Compiling the System Definition file...
    • Initializing TCP subsystem...
    • Starting TCP Loops...
    • Connection established with target Controller.
    • Sending reset command to all targets...
    • Preparing to deploy files to the targets...
    • Starting download for target Controller...
    • Opening FTP session to IP 169.254.104.111...
    • Processing Action on Deploy VIs...
    • Gathering target dependency files...
    • Downloading DC_Motor_Ecosim.nivssdf [71 kB] (file 1 of 4)
    • Downloading DC_Motor_Ecosim_Controller.nivsdat [6 kB] (file 2 of 4)
    • Downloading CalibrationData.nivscal [0 kB] (file 3 of 4)
    • Downloading DC_Motor_Ecosim_Controller.nivsparam [0 kB] (file 4 of 4)
    • Closing FTP session...
    • Files successfully deployed to the targets.
    • Starting deployment group 1...
    The VeriStand Gateway encountered an error while deploying the System Definition file.
    Details:
    Error 1097 occurred at Project Window.lvlib:Project Window.vi >> Project Window.lvlib:Command Loop.vi >> NI_VS Workspace ExecutionAPI.lvlib:NI VeriStand - Connect to System.vi
    Possible reason(s):
    LabVIEW: An exception occurred within the external code called by a Call Library Function Node. The exception might have corrupted the LabVIEW memory. Save any work to a new location and restart LabVIEW.
    =========================
    NI VeriStand: Call Library Function Node in SIT Model API.lvlib:Load Model DLL.vi->SIT Model API.lvlib:Initialize Model.vi->Model Execution.lvlib:Initialize Model Loop Data.vi->NI VeriStand Engine.lvlib:VeriStand Engine State Machine.vi->NI VeriStand Engine.lvlib:VeriStand Engine.vi->NI VeriStand Engine.lvlib:VeriStand Engine Wrapper (RT).vi
    • Sending reset command to all targets...
    After this, I cannot connect to the PXI and I have to reset it.
    Any idea?
    Thank you,
    Jesús M. Zamarreño

    Nothing to see in this VI and DLL. For one, a DLL is just compiled code, so there is very little to see from the DLL alone. Also, a DLL does not contain any information about how it is supposed to be called. The implementation of the driver is terrible, with all the calls in one single VI and half a dozen booleans to select which one of them to call. It deserves maybe a 6 out of 10 for functionality, a 1 for style, and a 2 for cleanliness.
    Without the API documentation for that driver DLL, it is impossible to say anything about the correctness of the CLN configurations. As simple as the functions look, there is still a chance that the configuration went wrong somewhere. Another at least as likely cause is that the DLL itself is poorly written. If the LabVIEW implementation is any hint of the quality of the DLL driver implementation, then I guess it would be simpler to throw it all away and reimplement the driver purely in LabVIEW.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • What is this element/feature?

    Hi all,
    What is the “Format 0” box above?  Not asking specifics, but generalities.  When you click on it, it expands like below.  I would like to make CPHA an input to this function (i.e. provided from the front panel).  Haven’t run across something like this before, so wondering if somone could provide some pointers on where to look.  This is NI provided example code that I am modifying, specifically the DAQmx SPI code.  Thanks!
    Gary

    No, you cannot change the polymorphic selector from the front panel.
    A polymorphic VI is a set of closely related VIs called instance VIs (they all have the same connector panes, although the datatypes may differ).  They assist the developer by easily replacing one type of function with another similar function, but the specific instance VI must be determined and in memory at compile time and cannot change at run time.  If there were a way to change the instance VI selection from the FP, the block diagram code would change at run time, and that is not allowed.
    That being said: there are ways to dynamically dispatch a VI (choose the VI to run programmatically), but not by using polymorphism as LabVIEW implements it.  Is there a need for your application to do this?  We can point you in that direction, but there are some more advanced LabVIEW features to master to achieve this behavior.
    Jeff

  • Expected (wrong?) results from Express VI - a bug?

    Using a LabVIEW-supplied example made with Express VIs for signal generation and filtering.
    I modified the Express VI so that instead of using a Butterworth lowpass it would use a Bessel with
    variable order (variable # of poles).
    Unexpected result: when I choose corner freq = 1 kHz and drive with a 1 kHz signal, I should get
    output = 0.71 times input, since at the corner freq the output should be 3 dB down.
    And I do get that result with a Butterworth (all orders tried), or with a Bessel if the Bessel is 1st order.
    But for a Bessel of order 2 or more, the attenuation is greater than expected, greater than it should be.
    Bessel with Fc = 1000 Hz:
    8 pole: Out/In = 0.18 when f=1000; Out/in=0.71 when f=500 Hz.
    4 pole: Out/In = 0.39 when f=1000; Out/In = 0.71 when f=645 Hz.
    2 pole: Out/In = 0.57 when f=1000; Out/In = 0.71 when f=790 Hz.
    1 pole: Out/In = 0.71 when f=1000 as expected.
    Is this a bug or is there another explanation?

    The KB that I posted does show the transfer functions for the LabVIEW implementation of the Bessel filters.  As WCR points out, this changes what the cutoff frequency actually refers to when using higher-order Bessel filters.  The LabVIEW help should state the actual definition of the cutoff frequency for each filter, but it does not.  I also checked the Chebyshev filter VIs, and the help does not refer to the ripple dB when specifying what the cutoff frequency is for that type of filter. 
    So, does the LabVIEW implementation of the higher-order Bessel filters give results that might not be expected?  Yes.  Should the help files be updated to reflect this?  Probably. 
    One thing that I did want to point out is that you can view the transfer functions used for the different types of filters using the Filter Express VI.  If you open the configuration page and select the Transfer function as the View Mode, it will show how the transfer function looks for each filter.  If you select Butterworth, you can increase the order and see it lock on the -3 dB point.  If you select Bessel, you can increase the order and watch the curve shift to the left, decreasing the dB values at the cutoff frequency. 
    Andy F.
    National Instruments
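    The 0.71 figure in the question is 1/sqrt(2), i.e. -3 dB. An ideal normalized n-th order Butterworth low-pass has magnitude |H(f)| = 1/sqrt(1 + (f/fc)^(2n)), which is pinned to exactly 1/sqrt(2) at f = fc for every order; Bessel designs are not normalized that way, which matches the numbers in the question. A quick numerical check (an illustrative sketch, not the Express VI's exact implementation):

```python
import math

def butterworth_mag(f, fc, order):
    """Magnitude response of an ideal normalized n-th order Butterworth low-pass."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** (2 * order))

# At the cutoff frequency the magnitude is 1/sqrt(2) ~ 0.707 regardless of order:
for order in (1, 2, 4, 8):
    print(order, round(butterworth_mag(1000.0, 1000.0, order), 3))
```

    This is the "lock on the -3 dB point" behavior visible in the Express VI's transfer-function view as the Butterworth order is increased.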

  • PID control in FPGA

    Hi,
    New year wishes to all.
    In the PID control block for FPGA (I think even on Windows), LabVIEW differentiates the process variable instead of differentiating the error. Is that the same thing?
    Regard,
    N. Madhan Kumar.
    Solved!
    Go to Solution.
    Attachments:
    pid Conventional.gif ‏1 KB
    PID in Labview.png ‏3 KB

    I don't have the control design / PID toolkits installed at the moment as I rarely use them nowadays but when I last did it, I ended up implementing my own. This should be a good starting point:
    It is about as basic a parallel PID implementation as I can come up with (for Windows, hence the doubles; on the FPGA you'll probably be using floats). You may wish to extend it to allow the controller to reset (as per the LabVIEW implementation, by reinitialising the shift registers).
    I've attached the VI (LV2012).
    Certified LabVIEW Architect, Certified TestStand Developer
    NI Days (and A&DF): 2010, 2011, 2013, 2014
    NI Week: 2012, 2014
    Knowledgeable in all things Giant Tetris and WebSockets
    Attachments:
    PID.vi ‏13 KB
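    The distinction in the question (differentiating the process variable versus differentiating the error) can be shown with a minimal discrete parallel PID sketch. This is illustrative Python, not the LabVIEW PID toolkit code. With a constant setpoint the two forms produce identical output, but on a setpoint step the derivative-on-error form produces a large spike (the "derivative kick"), which is why the LabVIEW block, as the question observes, differentiates the PV:

```python
def pid_step(kp, ki, kd, dt, sp, pv, state, deriv_on_pv=True):
    """One iteration of a discrete parallel PID.
    state = (integral, prev_error, prev_pv).
    deriv_on_pv=True differentiates the process variable; since
    error = sp - pv, the derivative term needs a sign flip to match
    derivative-on-error when the setpoint is constant."""
    integral, prev_error, prev_pv = state
    error = sp - pv
    integral += error * dt
    if deriv_on_pv:
        deriv = -(pv - prev_pv) / dt   # note the sign flip
    else:
        deriv = (error - prev_error) / dt
    out = kp * error + ki * integral + kd * deriv
    return out, (integral, error, pv)

# A setpoint step from 0 to 10 with pv held at 0:
s1 = s2 = (0.0, 0.0, 0.0)
out_pv, s1 = pid_step(1, 0, 1, 0.1, sp=10, pv=0, state=s1, deriv_on_pv=True)
out_err, s2 = pid_step(1, 0, 1, 0.1, sp=10, pv=0, state=s2, deriv_on_pv=False)
# out_pv stays at the proportional response; out_err includes the kick.
```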

  • Error 1097 while controlling a Velmex

    I'm using a two-axis Velmex setup to move a probe quickly in and out of a flame. The Velmex is supposed to move the probe into the flame, let it collect data, move it out of the flame, then repeat this process at a slightly more advanced position inside the flame. After this is finished, the second stepper motor is supposed to move the probe up and the process starts again. Halfway through the routine, however, it stops working and I wind up getting Error 1097. Why is this occurring?
    Here's the VI for reference.
    Solved!
    Go to Solution.
    Attachments:
    Velmex VI test.zip ‏59 KB

    Nothing to see in this VI and DLL. For one, a DLL is just compiled code, so there is very little to see from the DLL alone. Also, a DLL does not contain any information about how it is supposed to be called. The implementation of the driver is terrible, with all the calls in one single VI and half a dozen booleans to select which one of them to call. It deserves maybe a 6 out of 10 for functionality, a 1 for style, and a 2 for cleanliness.
    Without the API documentation for that driver DLL, it is impossible to say anything about the correctness of the CLN configurations. As simple as the functions look, there is still a chance that the configuration went wrong somewhere. Another at least as likely cause is that the DLL itself is poorly written. If the LabVIEW implementation is any hint of the quality of the DLL driver implementation, then I guess it would be simpler to throw it all away and reimplement the driver purely in LabVIEW.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
