The type of HBITMAP in LabVIEW

Hi,
I got a DLL that I found on the internet.
Now I want to use the Call Library Function Node to get data from this DLL.
Unfortunately, I don't know how to configure the "HBITMAP" parameter without knowing the right type in LabVIEW.
The DLL prototype is: extern "C" __declspec(dllimport) bool __stdcall DataMatrixDecodeBitmap(HBITMAP hImage, char *pResult, int *pSize, int nTimeOut);
Does anybody know the right type for HBITMAP in LabVIEW? An array? Or is this a kind of type LabVIEW can't support?
Any suggestions would be very much appreciated.
Thanks
Attachments:
111.png ‏28 KB

Hi,
Try to look in the header to see what this type of data is. It may be a custom structure defined in that file. If so, wire a cluster with the same elements as the structure.
EDIT: found on Google:
Although Windows handles under 32-bit Windows are mostly pointers to memory, it would be a bad idea to try to access that data directly. Windows handles are meant to be opaque data types whose internal structure is known only to Windows, and more specifically to Windows versions the same as or newer than the one that created the handle. The Windows API gives you (almost) all the functionality needed to access the information behind a handle. An HBITMAP would need to be accessed with the API function GetDIBits(). This function works with a device context handle (HDC) that you probably need to get from your grabber interface. This is necessary to convert the device-dependent information in an HBITMAP into the device-independent (the DI in GetDIBits) information in the parameters you want to receive.
So this is an I32 or I64, depending on your system: it's a pointer to a memory space.
And here is a previous discussion : Forum NI
Giuliano Franchetto
Student at the l'Ecole Nationale Supérieure des Mines de Saint-Etienne, cycle ISMIN (FRANCE)

Similar Messages

  • Improving the LabVIEW Help: Working with Data Types

    Have you ever had trouble figuring out how to work with the waveform data type, the dynamic data type, or some of the other more complex data types in LabVIEW? As a tech writer on the LabVIEW team, I'd like to improve our documentation about working with data. What would you like to see? What would be helpful? Have you ever given up on a particular data type because it didn't work? Have you ever created a replacement data type because you preferred not to use a LabVIEW data type?
    Lacy Klosterman Rohre | Marketing Editor | National Instruments | 512.683.6376 | ni.com/newsletter

    I've been programming LabVIEW since around 3.1 / 4.0.  Over the years and versions, I've found it necessary to approach a lot of the new datatypes and algorithms with some healthy skepticism because I don't think we're usually given a "fair and balanced" overview.  Some of the things that are highly promoted that bring "ease of use" for beginners (and I *do* understand the importance of that when you're growing your LabVIEW user base) can carry significant performance penalties that are mentioned in more of a whisper, if at all.  I'd just like a more full disclosure of the tradeoffs -- the good, the bad, and the ugly.
    Waveforms -- very rarely use them, unless required for analysis functions.  I have a vague notion (between a belief and a memory) that they used to carry a significant performance penalty compared to arrays, but that the gap is now much smaller.  Still, I'm most comfortable with my old habit of using data arrays.  Some of this came about because a lot of my work uses counters where I may have a variable "dt" value or where my AI is clocked by a counter and the "dt" isn't internally generated.
    Dynamic signals -- have never used them.  Have no clear idea what good they're supposed to be, and they seem to be tied in primarily with Express VI's, which I have also studiously avoided.
    Variants -- Use them only slightly.  Performance issues are a question mark.  Exception: have learned that the implementation of Variant properties allows their use as an efficient way to create associates for string lookup tables. 
    Digital Waveform -- have typically avoided it except when graphing digital data during debug.  Haven't found any compelling reasons to use it.  I do like the notion of a compress/expand capability for sparse digital data, but haven't exercised it enough to trust the implementation. 
    Recap:  I don't need more simple "how-to's" in the help.  I need a LOT more "why bother's" that include both pros and cons. 
    -Kevin P.

  • How does the LabVIEW adapter get VI type information?

    Hi, all,
    As I've mentioned in another thread, we're trying to programmatically generate TestStand scripts.  Since we're using the LabVIEW adapter, we will need to get the type information of the controls and indicators from some VIs that the scripts will use. (These are all kept in one library.)
    I'd rather collect that type information from the VIs themselves than collect it manually or hardcode it. Then, if we add or modify the VIs, the new types will be picked up.
    The LabVIEW adapter must have the ability to do this.
    Does anyone here know how it does it?
    Thanks much,
    - Steve.

    SPM,
    As you may have already noticed, in the LabVIEWModule class there is a Parameters property that contains the items connected to the VI's connector pane.  This parameters container is read-only, meaning that while you can get each individual parameter and modify it, you cannot add new parameters.
    The parameters property will be created for you when you call Module.LoadPrototype.  This will read the VI's connector pane information from disk, and create the correct parameters property for you.  You can find an example of creating a LabVIEW step programmatically and setting its connector pane in this document:  Programmatically Create TestStand Sequence File with a Step that Calls a LabVIEW Code Module. 
    Please note that this example has not been updated since TestStand 3.0, and as such uses an obsolete method LabVIEWModule.LoadPrototype.  Please use Module.LoadPrototype instead.
    Josh W.
    Certified TestStand Architect
    Formerly blue

  • Assistance with Printing to Zebra QL220 using the LabVIEW report VI

    I am currently trying to use the LabVIEW report VIs to output a formatted set of strings to the above-mentioned printer.
    The UMPC hardware that runs the developed application is a bare-bones XP box and doesn't have MS Office installed.
    I use the LV New Report VI set to a standard report, and I can successfully set the orientation to portrait or landscape, set margins, and set report fonts and headers. However, in sending the text strings to this label printer (labels are 45 mm x 60 mm), two issues arise.
    1. Printing double labels: the pagination fails and prints a blank label after each print when text is output. However, if I disable all headers and body text (i.e. a blank label print), the pagination works fine.
    2. Formatting the information on the page reliably: I currently use inserted blank spaces. Is there a better way?
    I thought perhaps I should try using the ZPL programming language, but then I'm not sure how to send it to a USB printer. Has anyone had experience with this and these label printers?
    Thanks
    Greg Nicholls

    Hi all,
    I am a C# programmer and I have a Zebra QL 220 Plus with a 42 x 20 mm roll. I have the Zebra SDK and am creating a mobile application in C# (smart device), and I am trying to connect to the printer from my application over Bluetooth.
    From the SDK I got the following code and used it:
    using System;
    using ZSDK_API.Comm;
    using System.Text;
    using System.Threading;
    // This example prints "This is a ZPL test." near the top of the label.
    // This example prints "This is a ZPL test." near the top of the label.
    private void SendZplOverBluetooth(String theBtMacAddress) {
        try {
            // Instantiate a connection for the given Bluetooth(R) MAC address.
            ZebraPrinterConnection thePrinterConn = new BluetoothPrinterConnection(theBtMacAddress);
            // Open the connection - the physical connection is established here.
            thePrinterConn.Open();
            // Define the ZPL data to be sent.
            String zplData = "^XA^FO20,20^A0N,25,25^FDThis is a ZPL test.^FS^XZ";
            // Send the data to the printer as a byte array.
            thePrinterConn.Write(Encoding.Default.GetBytes(zplData));
            // Make sure the data got to the printer before closing the connection.
            Thread.Sleep(500);
            // Close the connection to release resources.
            thePrinterConn.Close();
        } catch (Exception e) {
            // Handle communications errors here.
            Console.Write(e.StackTrace);
        }
    }
    // This example prints "This is a CPCL test." near the top of the label.
    private void SendCpclOverBluetooth(String theBtMacAddress) {
        try {
            // Instantiate a connection for the given Bluetooth(R) MAC address.
            ZebraPrinterConnection thePrinterConn = new BluetoothPrinterConnection(theBtMacAddress);
            // Open the connection - the physical connection is established here.
            thePrinterConn.Open();
            // Define the CPCL data to be sent.
            String cpclData = "! 0 200 200 210 1\r\n"
                + "TEXT 4 0 30 40 This is a CPCL test.\r\n"
                + "FORM\r\n"
                + "PRINT\r\n";
            // Send the data to the printer as a byte array.
            thePrinterConn.Write(Encoding.Default.GetBytes(cpclData));
            // Make sure the data got to the printer before closing the connection.
            Thread.Sleep(500);
            // Close the connection to release resources.
            thePrinterConn.Close();
        } catch (Exception e) {
            // Handle communications errors here.
            Console.Write(e.StackTrace);
        }
    }
    When I use the ZPL method it always prints 17 barcodes with 16 blank labels, and when I use the CPCL method it always prints 12 barcodes with 11 blank labels, and I don't know why. It should print 1 label, and I don't think there is anything wrong with my code.
    All I want to know is how to specify the label length and width and how many labels to print, because it always prints 17 or 12. What can I do?

  • Is this roughly how the labVIEW Execution Systems work?

    I've not taken a class in OS design, so I don't know the strategies used to implement multitasking, preemptive or cooperative. The description below is a rough guess.
    LabVIEW compiles Vis to execute within its own multitasking execution environment. This execution environment is composed of 6 execution systems. Each execution system has 5 priority queues (say priorities 0->4). Vis are compiled to one or more tasks which are posted for execution in these queues.
    An execution system may either have multiple threads assigned to execute tasks from each priority queue, or may have a single thread executing all tasks from all priority queues. The thread priorities associated with a multithreaded execution system are assigned according to the queue that they service. There are therefore 5 available thread priority levels, one for each of the 5 priority level queues.
    In addition to the execution queues, there are additional queues that are associated with tasks suspended in various wait states. (I don't know whether there are also threads associated with these queues. It seems there is.)
    According to app. note 114, the default execution environment provides 1 execution system with 1 thread having a priority level of 1, and 5 execution systems with 10 prioritized threads, 2 threads per priority queue. NI has titled the single threaded execution system "user interface" and also given names to the other 5. Here they will be called either "user interface" or "other".
    The "user interface" system is responsible for all GUI actions. It monitors the keyboard and mouse, as well as drawing the controls. It is also used to execute non-thread-safe tasks; tasks whose shared objects are not thread mutex protected.
    Vis are composed of a front panel and diagram. The front panel provides an interface between the vi diagram, the user, and the calling vi. The diagram provides programmatic data flow between various nodes and is compiled into one or more machine-coded tasks. In addition to its own tasks, a diagram may also call other vis. A vi that calls another vi does not actually programmatically branch to that vi. Rather, in most cases the call to another vi posts the tasks associated with the subvi to the back of one of the LabVIEW execution system's queues.
    If a vi is non-reentrant, its tasks cannot run simultaneously on multiple threads. This implies a mutex-like construction around the vi call to ensure only one execution system is executing the vi. It doesn't really matter where or how this happens, but somehow LabVIEW has to protect an asynchronous vi from simultaneous execution, somehow that has to be performed without blocking an execution queue, and somehow a mutex-suspended vi has to be returned to the execution queue when the mutex is freed. I assume this to be a strictly LabVIEW mutex that does not involve the OS. If a vi is reentrant, it can be posted/run multiple times simultaneously. If a vi is a subroutine, its task (I think there is always only one) will be posted to the front of the caller's queue rather than at the back (it actually probably never gets posted but is simply mutex-tested at the call). A reentrant subroutine vi may be directly linked to its caller since it has no restrictions. (Whether in fact LabVIEW does this, I don't know. In any event, it would seem that in general vis that can be identified as reentrant should be specified as such to avoid the overhead of mutexing. This would include vis that wrap reentrant dll calls.)
    The execution queue to which a vi's tasks are posted depends upon the vi execution settings and the caller's execution priority. If the caller's execution priority is less than or equal to the callee's execution settings, then the callee's tasks are posted to the back of the callee's specified execution queue. If the caller's execution priority is greater than the callee's specification, then the callee's tasks are posted to the back of the caller's queue. Under most conditions the vi execution setting is "same as caller", in which case the callee's tasks are always posted to the back of the caller's execution queue. This applies both to cases where two vis are set to run in the "other" execution systems and to cases where two vis are set to run in the user interface execution system. (It's not clear what happens when one vi is in the "user interface" system and the other is not. If the rule follows thread priority, any background tasks in the "other" systems will be moved to the user interface system. Normal tasks in the "other" systems called by a vi in the "user interface" system will execute in their own systems and vice versa. And "user interface" vis will execute in the caller's "other" system if the caller has a priority greater than normal.)
    Additionally, certain nodes must execute in the "user interface" execution system because their operations are not thread-safe. While the above generally specifies where a task will begin and end execution, a non-thread-safe node can move a task to the "user interface" system. The task will continue to execute there until some unspecified event moves it back to its original execution system. Note that other tasks associated with the vi will be unaffected and will continue to execute in the original system.
    Normally, tasks associated with a diagram run in one of the "other" execution systems. The tasks associated with drawing the front panel and monitoring user input always execute in the user interface execution system. Changes made by a diagram to its own front panel are buffered (the diagram has its own copy of the data, the front panel has its own copy of the data, and there seems to be some kind of exchange buffer that is mutexed), and the front panel update is posted as a task to the user interface execution system. Front panel objects also have the advanced option of being updated sequentially; presumably this means the diagram task that modifies the front panel will be moved to the user interface execution system as well. What this does to the data exchange configuration between the front panel and diagram is unclear, as presumably both the front panel and diagram are then executing in the same thread and the mutex and buffer would seem to be redundant. While the above is true for a control value, it is not clear whether this is also true for the control properties. Since a referenced property node can only occur on the local diagram, it is not clear whether it forces the local diagram to execute in the user interface system or whether properties too are buffered and mutexed with the front panel.
    If I were to hazard a guess, I would say that only the control values are buffered and mutexed. The control properties belong exclusively to the front panel, and any changes made to them require execution in the "user interface" system. If the diagram merely reads them, it probably doesn't suffer a context switch.
    Other vis can also modify the data structures defining a control's appearance and values remotely, using property nodes via control references. These nodes are required to run in the user interface system because the operation is not thread-safe, and apparently the diagram-front-panel mutex is specifically between the user interface execution system and the local diagram thread. Relative to the local diagram, remote changes by other vis would appear to be user entries.
    It is not clear how front panels work with reentrant vis. Apparently every instance gets its own copy of the front panel values. If all front panel data structures were unique to an instance, and if I could get a vi reference to an instance of a reentrant vi, I could open multiple front panels, each displaying its own unique data. It might be handy, sort of like opening multiple Word documents, but I don't think that it's available.
    A note: it is said that the front panel data is not loaded unless the front panel is opened. Obviously the attributes required to draw an object are not required, nor is the buffer that interfaces with the user. This rule doesn't hold, though, if property references are made to front panel objects and/or local variables are used. In those cases at least part of the front panel data has to be present. Furthermore, since all data is available via a control reference, if one is used, the control's entire data structure must be present.
    I use the vi server but haven't really explored it yet, nor vi reference nodes, but obviously they too make modifications to unique data structures and hence are not thread-safe. And in general, any node that accesses a shared object is required to run in the user interface thread to protect the data associated with the object. LabVIEW does not generally create OS-level thread mutexes to protect objects, probably because it becomes too cumbersome... Only a guess...
    Considering the extra overhead of dealing with preemptive threading, I'm wondering if my well-tuned single-threaded application in LV4.1 won't outperform my well-tuned multithreaded application in LV6.0, given a single-processor environment...
    Please modify those parts that require it.
    Thanks...
    Kind Regards,
    Eric

    Ben,
    There are two types of memory which would be of concern. There is temporary and persistent. Generally, if a reentrant vi has persistent memory requirements, then it is being used specifically to retain those values at every instance. More generally, reentrant code requires no persistent memory. It is passed all the information it needs to perform its function, and nothing is retained. For this type of reentrant vi, memory concern to which you refer could become important if the vis are using several MBytes of temporary storage for intermediate results. In that case, as you could have several copies executing at once, your temporary storage requirements have multiplied by the number of simultaneous copies executing. Your max memory use is going to rise, and as labview allocates memory rather independently and freely, the memory use of making them reentrant might be a bit of a surprise.
    On the other hand, the whole idea of preemptive threading is to give those tasks which require execution in a timely fashion the ability to do so regardless of what other tasks might be doing. We are, after all, suffering the computational overhead of multithreading to accomplish this. If memory requirements are going to defeat the original objective, then we really are traversing a circle.
    Anyway, as Greg has advised, threads are supposed to be used judiciously. It isn't as though you're going to have all 51 threads up at the same time. In general, I think the overall coding strategy should be to minimize the number of threads while protecting those tasks that absolutely require timely execution.
    In that sense, it would have been nice if NI had retained two single threaded systems, one for the GUI and one for the GUI interface diagrams. I've noticed that control drawing is somewhat slower under LV6.0 than LV4.1. I cannot, for example, make a spreadsheet scroll smoothly anymore, even using buffered graphics. This makes me wonder how many of my open front panel diagrams are actually running on the GUI thread.
    And, I wonder if threads go to sleep when not in use, for example, on a wait, or wait multiple node. My high priority thread doesn't do a lot of work, but the work that it does is critical. I don't know what it's doing the rest of the time. From some of Greg's comments, my impression is it in some kind of idle mode: waking up and sleeping, waking up and sleeping,..., waking up, doing something and sleeping... etc. I suppose I should try to test this.
    Anyway that's all an aside...
    With regard to memory, you're right, there are no free lunches... Thanks for reminding me. If I try this, I might be dismayed by the additional memory use, but I won't be shocked.
    Kind Regards,
    Eric

  • Can the cursors in a property node be changed to the LabView 7.X format instead of 8.X?

    I plotted data and used the property node thing to create cursors so that I can select individual data points.  The LabView 7.0 or 7.1 format for the cursors was useful because I could punch in a number and the cursor would then jump to that number.  But with LabView 8.0, this option is apparently no longer available, and I must manually move the cursor to where I want it to go.  This is very time-consuming, and I would like to know if there's a way to revert to the LabView 7.0 format, or change the existing format so that I can tell the cursor where to go by punching in a number.  Any ideas?
    Attachments:
    question 090504.jpg ‏233 KB

    Sorry but no. You are stuck with the 8.0 style cursor option. I find the labview 8.x cursor display ugly and useless. Even my customers are very quick to specify that they do not want Labview 8.x styled cursors. I really miss the 7.1 type cursor display 
    Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
    (Sorry no Labview "brag list" so far)

  • Calling a char** array_name type of variable in the DLL.

    Hi,
    I have to call an external DLL from LabVIEW, but I have the following issue. In one of the other forum posts this issue had, I think, been addressed by editing the DLL itself (or maybe I did not understand the solution right), but I do not have access to the DLL code. I just have the header, which has a function definition with char **.
    The "Import DLL" functionality of LabVIEW mistakes char** for a string, so I cannot use that either. If I wire a string to the input, the DLL throws an exception giving Error 1097.
    Calling convention is C.
    The header file has the following function definition:
    DWORD Send(DWORD args, char ** strng)
    So I think the DLL expects an array of strings. I tried the type "Array of 8-bit unsigned integers" passed as an "Array Handle" in the Call Library Function Node, which led to the following prototype preview:
    uint32_t Send(uint32_t args, Array1DUint8_t **strng);
    Thus I pass one string and convert the characters to an 8-bit unsigned array (instead of an array of strings); the LabVIEW code looks as shown below. The input "1" is the number of arguments/strings.
    This code runs without Error 1097, but my DLL reads the string as random symbols. One example is shown here:
    I do not know why my DLL is reading random things from memory. I am pretty sure it expects a pointer to a pointer to characters, and maybe LabVIEW is giving it something different.
    I also tried an array of strings with "Adapt to Type" in the CLFN, but then the output changes from random characters to some number (like x5bc087305bc003b5...).
    Any help will be appreciated.
    Also, I have come across posts that say I have to write an external wrapper. I want to stay within the LabVIEW environment as far as possible, and moreover I have almost zero experience writing one. So if a solution exists in LabVIEW, that would be optimal.
    Thanks.

    You need to know what the DLL expects here: a pointer to a single string, an array of pointers to strings, or something else?  Is the "strng" value an input, an output, or both?  Without an understanding of the function call requirements it's not possible to provide specific advice.  However, it is most likely possible to get this working in LabVIEW.  You will probably need to use DSNewPtr, MoveBlock, and DSDisposePtr.  These are all memory manager functions that you can call from a Call Library Function Node by setting the library name to "LabVIEW" and these functions are documented in the LabVIEW help.  They will allow you to get a pointer to a string, or an array of pointers as appropriate.  You can find one example of using these functions to generate an array of strings in this thread at LAVA: "Passing array of string to C dll."

  • How to install an entire directory structure and files with the LabVIEW installer

    I think I may be missing something on how to do this: preserving a file directory structure in the LabVIEW build process.
    Is there an easy way to distribute an entire directory structure (multiple directory levels/text files) and have the LabVIEW build process properly preserve and install it? I included a directory in my build specification, but it completely flattened the file structure, so when the installer builds, the resulting structure in the /data directory is flattened as well. I'd prefer not to go through each file (~100) and set up a specific target location. Is there an easy way to preserve the file structure in the build/installer process that I am missing (maybe a check box that I didn't see, or an option setting somewhere)?
    -DC

    Hi DC
    Sorry for the delay in responding to your question.
    You should be able to do this using a 'Source Distribution' as described below:
     - Right click on 'Build Specifications' and select 'New' - 'Source Distribution'
     - Provide a name
     - Add your files to 'Always included' under 'Source Files'
     - Under 'Destinations' select 'Preserve disk hierarchy' and ensure that 'Directory' is selected as the 'Destination type'
    When building an installer you can then choose to include your source distribution, and it will be installed to a location of your choosing.
    Hope this helps.
    Kind Regards
    Chris | Applications Engineer NIUK

  • How to access Call Back Functions using *.dll in the Labview?

    Hi,
    I am Pavan Ram Kumar Somu.
    I am new to LabVIEW; currently I am working on an MVB interface.
    I need to access the API functions from a *.dll file in LabVIEW. As of now, I am doing this with the Call Library Function Node in LabVIEW, but it does not support the following data types:
        1. Pointer arguments (to which memory do they point in LabVIEW?)
        2. Function-pointer arguments
        3. Pointers in structures, pointers to structures within structures, and many other data types.
    Please answer the queries below as well:
    1. How to pass pointer arguments to API functions in a DLL, and how to collect pointer return types from API functions in a DLL
    2. How to pass structure arguments to API functions in a DLL, and how to collect structure return types from API functions in a DLL
    3. How to use callback functions (i.e. function pointers) in LabVIEW, and how to collect callback function return types from API functions in a DLL
    I need your help passing these data types to API functions in a DLL from LabVIEW.
    Please also suggest any alternative way of implementing this task.
    I am referencing some examples here:
    Examples:
    I)
    Unsigned short int gf_open_device(void *p_device_config, unsigned long int client_life_sign_timeout, unsigned short int *device_error)
    void *p_device_config: how do I access/pass these arguments in LabVIEW, and to which memory location does it point in LabVIEW?
    II) #include <windows.h>
         #include <process.h>
         HANDLE rcvEvent0, rcvEvent1;
    /* Function call */
    CanGetReceiveEvent(handle[0], &rcvEvent0);
    Above is a piece of C code. Now I want to use the HANDLE data type, which is Windows-specific. How do I use this type in LabVIEW?
    With regards
    Pavan Ramu Samu

    "Somu" <[email protected]> wrote in message news:[email protected]...
    Hai,
    I am Pavan Ram Kumar Somu.
    &nbsp;
    I am new to Labview, currently I am working on MVB Interface.
    &nbsp;
    I need to access the API functions from *.dll file in Labview, as of now , I am doing this with Call function Library node in Labview but it does not support the following data types like
    &nbsp;&nbsp;&nbsp; 1. Pointer Arguments(To which memory it points in Labview)
    &nbsp;&nbsp;&nbsp; 2. function pointers Arguments
    &nbsp;&nbsp;&nbsp; 3 .pointers in structures and pointer structures in structures and many other data types.
    &nbsp;
    Please Answer the below queries also:
    &nbsp;
    1. How to pass pointer arguments to API functions in DLL and how to collect pointer&nbsp;&nbsp;
    &nbsp;&nbsp;&nbsp; return types from API functions in DLL
    &nbsp;
    2. How to pass structure arguments to API functions in DLL and how to collect structure
    &nbsp;&nbsp;&nbsp; return types from API functions in DLL
    &nbsp;
    3. How to use callback functions(nothing but function pointers) in Labview and how to
    &nbsp;&nbsp;&nbsp; collect callback fuctions return types from API functions in DLL
    &nbsp;
    I need your help while passing these datatypes to API functions in DLL from labview.
    &nbsp;
    Suggest me if there is any other alternative for implementing this task.
    &nbsp;
    &nbsp;
    I am referencing some examples here:
    Examples:
    I)
    Unsigned short int gf_open_device(void *p_device_config, unsigned long int client_life_sign_timeout, unsigned short int *device_error)
    &nbsp;
    void *p_device_config: How to access/pass these arguments in LabView and to which memory location it points in LabView.
    &nbsp;
    II) #include &lt;windows.h&gt;
    &nbsp;&nbsp;&nbsp;&nbsp; #include &lt;process.h&gt;
    &nbsp;&nbsp;&nbsp;
    &nbsp;&nbsp;&nbsp;&nbsp; HANDLE rcvEvent0, rcvEvent1;
    &nbsp;
    /* Function call*/
    CanGetReceiveEvent(handle[0], &amp;rcvEvent0);
    &nbsp;
    Above is a piece of C code, Now I want to use HANDLE datatype which is windows based, how to use these type in the LABVIEW.
    &nbsp;
    With regardsPavan Ramu Samu
    Search the forum (forums.ni.com) for callback, pointer or handle, and you'll find that it is all possible, but not very easy.
    e.g.: http://forums.ni.com/ni/board/message?board.id=170&message.id=88974&requireLogin=False
    Regards,
    Wiebe.

  • I have some software problems of running matlab script node in the LabVIEW program.

    Hi there,
    I wrote some simple MATLAB code, like x = 1, y = x*2. Then I tried to put it into a MATLAB script node in LabVIEW.
    However, when I ran the program, it wouldn't run and LabVIEW showed some errors.
    Then I tried the method discussed before on this forum; please see the following link:
    http://forums.ni.com/ni/board/message?board.id=MathScript&thread.id=571
    I followed the instructions and opened the command prompt, typed some commands, and then I could run the program; simultaneously, the LabVIEW program opened the MATLAB command line window and ran my LabVIEW code. However, if I didn't use the command prompt, the MATLAB command window would not be opened by running the LabVIEW program; it would just show some errors. And even if I started the MATLAB program in advance, I still couldn't run the LabVIEW program.
    Is there an efficient way to deal with this problem without using the command prompt? I tried another PC with both MATLAB and LabVIEW installed on it, and I "can" run my simple MATLAB script node there; it opens the MATLAB command line window automatically when running my code. Therefore, I think there might be some software linking problem on the previous PC I used. Do you have any ideas?
    Thanks in advance.

    This board is for MathScript, but you seem to have a MATLAB problem. You should probably post in the LabVIEW forum instead.
    What is your LabVIEW version? What is your MATLAB version?
    LabVIEW Champion . Do more with less code and in less time .

  • In what format is data stored in LabVIEW?

    Hello Friends,
    In what format is data stored in LabVIEW — for example, icons, text format, etc.?
    Jayavel

    Data Types : 
     http://zone.ni.com/reference/en-XX/help/371361B-01/lvexcodeconcepts/manager_data_types/
    Data Logging :
    http://zone.ni.com/reference/en-XX/help/371361B-01/lvconcepts/choosing_a_file_i_o_format/ 

  • C library function call - Unavailable Type for one of the parameters in the Function Prototype.

    Hi,
    I'm working on something that has already been done by others: implementing a LabVIEW SQLite wrapper. I know how to do it with .NET, but I would like to do it in C, mostly for performance reasons, and my poor knowledge of pointers has me somewhat stuck.
    A couple of informations are kindly provided here:
    http://www.sqlite.org
    http://www.sqlite.org/cintro.html
    What I would like to do is just open a connection to a SQLite database (if it doesn't exist, the SQLite engine will create the embedded database and the related file to save the data and everything). The function to perform the operation is given on the page below:
    http://www.sqlite.org/c3ref/open.html
    It seems pretty simple:
    int sqlite3_open(
      const char *filename,   /* Database filename (UTF-8) */
      sqlite3 **ppDb          /* OUT: SQLite db handle */
    );
    int sqlite3_open16(
      const void *filename,   /* Database filename (UTF-16) */
      sqlite3 **ppDb          /* OUT: SQLite db handle */
    );
    int sqlite3_open_v2(
      const char *filename,   /* Database filename (UTF-8) */
      sqlite3 **ppDb,         /* OUT: SQLite db handle */
      int flags,              /* Flags */
      const char *zVfs        /* Name of VFS module to use */
    );
    However I'm struggling a bit about the following type:
    sqlite3 **ppDb /* OUT: SQLite db handle */
    And I'm not really sure about which type to use when I'm calling this function from LabVIEW
    Any idea? I guess it's really easy, but I'm not used to having a type which is, I suppose, the data instance; as it's not clearly stated in the LabVIEW-interpreted C library function prototype (InstanceDataType makes sense, but I'm not sure), I don't really know whether what I'm showing in the attached screenshot is valid or not.
    My VI seems to work like a charm, but I don't really know if I'm doing something wrong.
    Another prototype that I have no idea about the proper LabVIEW call is the close function:
    http://www.sqlite.org/c3ref/close.html
    Let me get this straight: usually a parameter has a name, right? But it seems not:
    int sqlite3_close(sqlite3*);
    int sqlite3_close_v2(sqlite3*);
    So I also have no idea about the parameter setting for this one... is it to be treated as the self instance, as if the object calling this function were `this`? But I'm not passing any object?
    Really confusing...
    sqlite3*
    I might sound really silly, but if anybody could point me in the right direction, I would be really grateful.
    Thanks
     

    Ehouarn wrote:
    However I'm struggling a bit about the following type:
    sqlite3 **ppDb /* OUT: SQLite db handle */
    And I'm not really sure about which type to use when I'm calling this function from LabVIEW
    This parameter should be a pointer-sized integer, passed by pointer. Doesn't matter if it's signed or unsigned. The SQLite library will allocate memory for you, then put a pointer to that memory location into the pointer-sized integer that you pass in.
    As for the close function, you should pass that same pointer-sized integer, but this time pass it by value (because it's referenced with a single *, not two). There's nothing wrong with the documentation omitting the parameter name. For the purposes of a function prototype, the parameter name is unimportant, since all you need to know is the type of the data. How the function chooses to refer to that parameter internally is irrelevant.
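    To make the answer above concrete, here is a minimal C sketch of the same opaque-handle pattern (the `db_open`/`db_close` names are invented for illustration, standing in for `sqlite3_open`/`sqlite3_close`): the library allocates the object itself and hands back its address through the `**` out-parameter, which LabVIEW holds in a pointer-sized integer; closing takes that same value back by value, because the close prototype has a single `*`.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Opaque handle type: the caller never looks inside it,
       exactly like sqlite3* in the SQLite API. */
    typedef struct db { int open; } db;

    /* Mirrors sqlite3_open: allocates the object and writes its
       address through the ** out-parameter. In LabVIEW, ppDb is a
       pointer-sized integer passed by pointer. */
    static int db_open(const char *filename, db **ppDb)
    {
        (void)filename;                 /* ignored in this sketch */
        *ppDb = malloc(sizeof(db));
        if (*ppDb == NULL) return 1;    /* non-zero == error, as in SQLite */
        (*ppDb)->open = 1;
        return 0;
    }

    /* Mirrors sqlite3_close: takes the same pointer by value
       (a single *, so LabVIEW passes the integer by value). */
    static int db_close(db *pDb)
    {
        if (pDb == NULL) return 1;
        free(pDb);
        return 0;
    }

    int main(void)
    {
        db *handle = NULL;                     /* LabVIEW: pointer-sized int */
        int rc = db_open("test.db", &handle);  /* library fills in the handle */
        printf("open rc=%d handle_nonnull=%d\n", rc, handle != NULL);
        rc = db_close(handle);                 /* pass the same value back */
        printf("close rc=%d\n", rc);
        return 0;
    }
    ```

    The LabVIEW wire carrying that integer is the analogue of the `sqlite3*` variable in C: you never dereference it yourself, you only hand it back to the library.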

  • I cannot run the MATLAB script node in LabVIEW

    Hi there,
    My LabVIEW version is 8.5, and matlab version is R2008a.
    I wrote a simple MATLAB script, like x = 1, y = x*2, and put it into a MATLAB script node in LabVIEW.
    However, when I ran the program, it failed and LabVIEW showed some errors.
    Then I tried the method discussed before on this forum. Please see the following link:
    http://forums.ni.com/ni/board/message?board.id=MathScript&thread.id=571
    I followed the instructions: I opened a command prompt, typed some commands, and then I could run the program; at the same time, the LabVIEW program opened the "MATLAB command line window" and ran my LabVIEW code. However, if I don't use the command prompt, the "MATLAB command window" is not opened when I run the LabVIEW program; it just shows some errors. Even if I start MATLAB in advance, I still can't run the LabVIEW program.
    Is there any efficient way to deal with this problem without using the command prompt? I tried another PC with both MATLAB and LabVIEW installed, and on that machine I *can* run my simple MATLAB script node; it opens the MATLAB command line window automatically when I run my code. Therefore, I suspect there is some software-linking problem on the first PC I used. Do you have any ideas?
    Thanks in advance.

    Hi, and thanks for the post! I hope you're well today.
    This may help:
    MATLAB® version 2006b (7.3) and version 2008a do work with LabVIEW 8.20. This problem can occur if a new version of the MATLAB® software is installed alongside a previous installation, which is then uninstalled. Uninstalling the older MATLAB® software removes the ActiveX/COM components LabVIEW uses, which can cause the MATLAB® script node in LabVIEW to stop working. There are three solutions to this problem:
    1. Uninstall the MATLAB® software and reinstall it. This should install the necessary ActiveX component.
    2. Restore the MATLAB® automation server. Browse to the \bin directory (e.g. MATLAB\R2006b\bin) and type matlab -regserver, which will reset the registry settings and start the server. The server can then be closed, and the MATLAB® script node will work properly again in LabVIEW. LabVIEW may need to be restarted if VIs containing MATLAB® script nodes are open.
    3. Leave both versions of the MATLAB® software installed.
    MATLAB® is a registered trademark of The MathWorks, Inc. Other product and company names listed are trademarks and trade names of their respective companies.
    Did you have an older version of MATLAB before 2008a?
    Kind Regards,
    James.
    Kind Regards
    James Hillman
    Applications Engineer 2008 to 2009 National Instruments UK & Ireland
    Loughborough University UK - 2006 to 2011
    Remember Kudos those who help!

  • The interface type is valid, but the specified interface number is not configured

    Hi
    I'm all new to using LabVIEW, which I have to use for a project. I'm trying to make a setup with a Keithley 2000 multimeter and an Agilent U2722A SMU, but I can't figure out how to get these instruments to communicate with LabVIEW. With Agilent Connection Expert I can see and send commands to the Keithley 2000, but not the Agilent U2722A. When I use LabVIEW I can't see either of them, and if I use the drivers I found with "NI Instrument Driver Finder - Configure Search", it pops up with an error message saying
    "Error -1073807195 occurred at VISA Open in Keithley 2000.lvlib:Initialize.vi->Keithley 2000 Read Multiple.vi
    Possible reason(s):
    VISA:  (Hex 0xBFFF00A5) The interface type is valid, but the specified interface number is not configured."
    I've read all the threads I could find about this problem, but none of them helped. I've checked that NiVi488.dll is checked in MAX under the VISA options. When I open the VISA Interactive Control I see an ASRL1>>ASRL1::INSTR and ASRL10>>ASRL10::INSTR. I don't know why it says ASRL when I'm using a USB/GPIB interface, but the Keithley 2000 has 10 as its address. (The Agilent U2722A is connected directly by USB.)
    Assume I know nothing.
    Thanks

    I'm using the GPIB-USB-HS. I also used this on the development PC when I exported the hardware configuration.
    This shows up in my MAX config and when I scan instruments, all of them show up. I can query them in MAX no problem.
    My installer includes all the .exe's from my project. As I said, I've done this with my previous 2009 installer without any issue. I upgraded my installer since I upgraded my project for version 2013. The error only happens when I run my code.
     

  • Microsoft Moves into Robotics ["similar to the LabView-based software"]

    Microsoft Moves into Robotics
    The software giant thinks it can make robotic engineering easier with a set of standards: its own of course.
    By Daniel Turner
    Saturday, September 02, 2006
    technologyreview.com
    Microsoft believes the demand for consumer, research, and military robots will grow significantly--and it wants to own the market.
    At the annual RoboBusiness conference this past June, the software giant released the first "community technical preview" of Microsoft Robotics Studio (MSRS). Now, in its second preview version, MSRS is both a product and the lynchpin of a new educational push: the Institute for Personal Robots in Education (IPRE).
    Founded by Microsoft Research, Georgia Tech, and Bryn Mawr College, the computer science and robotics program is aimed at college and graduate students. Together, the product and program are designed to bypass small, cheap robots, such as the Roomba (see "Hacking the Roomba"), in favor of a world of robots that are more complex and PC-like.
    MSRS is a visual programming environment, similar to the LabView-based software provided with LEGO's Mindstorms NXT kit. It allows users to drag and drop box-like symbols for simple, low-level behaviors and services (such as accessing a sensor) and string them together to create complex robotic programs. MSRS also uses the AGEIA PhysX physics engine, which powers many PC games, to provide a visual simulation of the robot and its environment, complete with realistic friction, drag, gravity, and other factors.
    Another feature of MSRS is that it provides a method for controlling robots over a network through a PC's Web browser. In addition to requiring Windows on the PC side, MSRS robots must use a CPU that supports Microsoft's .NET runtime, which could rule out the inexpensive and less power-hungry processors used in many robots today.
    "We're trying to make it easier for people to write applications for robots," says Tandy Trower, general manager of Microsoft's Robotics Group. He says the current robotics community is too diverse, with many different hardware and software variants, to be efficient. "[MSRS is] like what Microsoft did with MS Basic," he says, "in smoothing out the fragmentation of PC hardware." Trower claims that MS Basic became a "de facto standard," which then allowed developers to write to one target and use a set of common tools.
    "Robotics programming is very ad hoc," says Tucker Balch, associate professor of Georgia Tech's College of Computing and director of the IPRE. He notes that many students in robotics often have to spend much of their time recreating solutions that already exist to basic problems (such as how to program a wheeled robot to move in a straight line).
    "Each robot is a one-off new development," says Balch. A large part of the work, he says, is making modules--software components that take input from sensors and deliver output other components can comprehend--work together. This low-level busy work can thwart his pedagogical goal: to teach 3,000 students about computer science at a high level; "so the robotics part has to be easy and robust," he says. Compounding the problem, various sensors and other robot components are made by different companies. "At present," Balch says, "we have to get source code and manually integrate the pieces."
    Programming frameworks such as Pyro and network robot control servers such as Player/Stage are already used by Balch and others. But none of them has become a standard. And Microsoft has struggled to capture the market: the company's WinCE software never took off as an embedded operating system for robots. As a result, integration remains a piecemeal, often onerous task.
    "Integration is the hardest part of the process," says Balch; in fact, for larger robotics projects, he's contracted with companies that specialize in robotics integration, such as Evolution Robotics.
    Paolo Pirjanian, president and CTO of Evolution Robotics, also attended the RoboBusiness conference this June, and was one of the few voices to express concerns about Microsoft's move into robotics. He says it's not just because his company markets its own ERSP robotics platform, which he says is "similar in spirit to MSRS."
    "I think it's a positive signal to the industry," Pirjanian says about Microsoft's entry into robotics and about Trower's statement at RoboBusiness that he sees the industry taking off in 5 to 10 years. However, Pirjanian says he's concerned that adopting Microsoft's product as a platform could marginalize an entire segment of robotics, one he feels is crucial for its future.
    "Our vision [of robotics' future] is embedded solutions onto low-cost hardware," Pirjanian says. "In most robots in the near future, products will have to be cost-optimized," he said. This, he added, would mean lower-cost processors--the type that could not support the overhead required by Windows and MSRS. Pirjanian pointed to the Roomba, which uses only a 16-bit processor, coupled with clever programming, to reach a consumer price point.
    But small, specialized, and relatively unintelligent robots seem to have no place in the thinking of Microsoft's Trower. He waxes enthusiastic about a day when his desktop computer can control household devices, displacing the autonomy of robots to a centralized source.

    Microsoft copies everything, from a graphical desktop OS, over game consoles with G5 processors, to a LabVIEW/Lego-Mindstorms-like graphical programming environment that interacts with the real world.
    The only good news is that their copies never top the originals. This is a universal law and pretty much independent of consumer response too (they all seem to prefer to go for the fast buck).
    Regards and happy platform independent wireworks
    Urs
    Urs Lauterburg
    Physics demonstrator
    Physikalisches Institut
    University of Bern
    Switzerland
