Digitising Analogue Test Rigs

I am looking for some LabVIEW Engineers to work on a contract basis in the
UK to perform updates to a large number of analogue test rigs. The contract
will last about 6 months and is likely to pay between £30 and £35 per hour.
If you have 12+ months of LabVIEW experience and are considering a career move,
please email your CV to [email protected] or call me on 01444 884222.
You must have a valid work permit to work in the UK.

What DAQmx driver do you have installed?
As noted here: http://digital.ni.com/public.nsf/allkb/B0D5630C0A50D5C6862578E800459248 , LabVIEW 2009 is compatible up through DAQmx 9.7, which can be downloaded here: http://joule.ni.com/nidu/cds/view/p/id/3811/lang/en
Craig H. | CLA | Systems Engineer | National Instruments

Similar Messages

  • Program Stalls Briefly causing test rig damage

    I am using LabVIEW and a cDAQ-9178 to control a hydraulic test rig. The cDAQ uses a USB connection to the host computer running Windows 7. The program is a closed-loop setup using a PID subroutine to control each of the hydraulic valves and the pump. The control can be based on either cylinder position or load. I am using a 9219 module to read the load cells, position indicators, and valve spool position. A 9264 module takes care of the voltage outputs to the valves, relays, etc.
    The program will run great for extended periods of time (the machine is a fatigue tester, so it accumulates load cycles on a test sample over time). But every once in a while, it will miss. And by miss, I mean it stalls. The program is writing certain parameters to a text file for review later. What we see is a major delay in the time component of the data file.
    It will look something like this.
    Time:
    604.150
    604.200
    604.250
    604.300
    604.524
    604.554
    604.575
    The problem with this delay is that if the test subject is fairly stiff, the program stalls, causing the valve to stop and hold whatever position its last signal commanded (normally open). This causes the cylinder to keep moving and normally results in unwanted damage to the test subject. The program has checks built in that will shut down the pump, center the valves, release the load, and unload the system pressure if the cylinder moves out of a given position or force range. However, because the program stalls, these checks do not run until the program resumes, and by then it is too late; the damage is done.
    What can I do to eliminate this stalling problem?  Any help is greatly appreciated.
    Thanks
    Adam Kuiken
    Senior Test Engineer
    Arctic Cat Inc.

    First, you need to determine if this is a Windows problem, or if your LV code is responsible.
    If your data files are large, and you are building the text file using inefficient LV methods, you may be making an unexpected call to the memory manager. This is just one possibility -- there may be others. (Restructuring the code, with the file save in an independent loop, may alleviate the issue to some extent.)
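    As an illustration of the "file save in an independent loop" idea, here is a minimal producer/consumer sketch in Python (LabVIEW code is graphical, so this is only an analogy, and the file name and log format are placeholders): the control loop only enqueues log lines, and a separate consumer loop owns the slow file I/O.

```python
import queue
import threading

log_queue = queue.Queue()

def logger(path):
    """Consumer loop: drains the queue and does the (slow) file I/O,
    so the producer / control loop never blocks on the disk."""
    with open(path, "w") as f:
        while True:
            line = log_queue.get()
            if line is None:        # sentinel: time to shut down
                break
            f.write(line + "\n")

consumer = threading.Thread(target=logger, args=("run_log.txt",))
consumer.start()

# Simulated control loop: enqueueing is cheap and non-blocking.
for step in range(5):
    log_queue.put(f"{604.150 + 0.05 * step:.3f}")

log_queue.put(None)                 # tell the consumer to finish
consumer.join()
```

    In LabVIEW terms this corresponds to a queue (or channel wire) feeding a dedicated file-writing loop, so a burst of disk activity cannot stall the PID loop.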
    If you have eliminated the possibility of a LV-based problem, then Windows could very possibly be the culprit (as suggested by others in this thread). Windows optimization *might* get you there, but there is always some uncertainty if Windows is in the equation. In testing, it might appear to be fine, but then Windows decides to scan the DVD drive, and you experience the failure again.
    Side Note: Back in the LV 2007/2008 days, I had a similar intermittent Windows problem. It appeared to be related to the DVD drive. The solution was simply to put a DVD disk into the DVD drive. This prevented Windows from scanning the drive at unexpected times, solving the problem. 
    If Windows system timing turns out to be the culprit, migrating to LV RT might be the easiest path forward. cRIO is a popular choice, but I personally prefer PXI or a desktop PC configured to run RT.
    If your application doesn't require the user to engage other Windows applications (Excel, Word, etc), and additional hardware costs are the primary concern, one option is to configure your existing PC to run RT. This can be a cost-effective solution if there is some reluctance to purchasing additional cRIO or PXI hardware.
    Here is a link to the requirements for running LV RT on a desktop PC:
    http://www.ni.com/white-paper/8239/en/
    We have successfully used LV RT on customized PC hardware when the application requirements exceeded the capabilities of the top-of-the-line PXI controller, or when the cost of a capable PXI system was prohibitive. In both of these situations, a custom LV RT PC makes good sense.
    Anyway, good luck with your issue.
    -- Dave
    www.movimed.com - Custom Imaging Solutions

  • DAQCard 1200 / SC-2043-SG - Other Analogue Sensor Inputs?

    Hello!
    I am relatively new to these things, so sorry for any ignorance
    I have a self designed servo-hydraulic structural test rig with a load cell and displacement sensor on the hydraulic cylinder.
    Signals from both these sensors are input via a CB-50LP connector board and then a DAQCard-1200 to a PC running a LabVIEW 6i program, through pins 1 and 2.
    I also have an in-house made strain gauge signal box that inputs to pins 3, 4, 5 & 6.
    The DAQ card is using NRSE.
    Now, this all worked fine for years, but the in-house strain gauge board is now kaput, so I am trying to connect up an SC-2043-SG that we have, in place of the kaput in-house board and the CB-50LP.
    I have read the DAQ Card and SC-2043-SG manuals, and pretty much understand what I have to do, but....
     Page 3.6 in the manual says, "Pins 1-8: Analog Input Channels 0 through 7—These pins carry the conditioned strain gauge bridge signals (referenced to AISENSE) to the DAQ board. They are not routed to screw terminals."
    So my problem is, how can I connect the first two analogue channels (pins 1 & 2) to my load cell and displacement transducer, whilst using pins 3 to 8 for the strain gauges, with the SC-2043-SG?
    I thought I could just cut wires 1 & 2 of the ribbon connector from the DAQ card to the SC-2043-SG and connect them directly to the sensors (and then not use strain gauge channels 1 & 2 on the SC-2043-SG), but am not sure if this would work or have any unwanted implications.
    Would this be OK, or is there an easier, neater way?
    Thanks
    Bife

    Hi Bife,
    The SC-2043 is designed for strain gauges only, so you can't connect basic voltage signals.
    It is true that you can cut the cable in order to provide your own signals to the DAQ card.
    (not recommended, but it should work)
    You can also contact us so that we can suggest better hardware (maybe the SC-2345, which can do multiple types of signal conditioning in a single enclosure).
    Best regards,
    Thomas B. | CLAD
    National Instruments France

  • Reg: Creating bottlenecks for testing on databases

    Hi all,
    I am doing database benchmarking for one database and I want to compare it with other databases like SQL Server, PostgreSQL, etc.
    For that I want to create test scenarios and benchmark them on multiple databases,
    e.g. scenarios that deliberately stress the database to expose bottlenecks.
    It's urgent.
    Thanks in advance..

    In order to do this, you need to populate your server with some test data (preferably a high volume of data) and then write scripts that access the data again and again, with larger and larger numbers of logged in users.
    Normally, you would use specialist load-testing software, such as Mercury LoadRunner (which I think is owned by HP these days) - they are very expensive but worthwhile if you have specific performance targets that you need to meet, and of course the budget!
    Otherwise, you could write your own test rig, but this is probably a project in itself - having needed to find a low-cost benchmarking tool myself earlier in the year, I know that there is next to nothing on the market which could be considered 'low cost'!
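    For a homemade rig, the bones of it are small. Here is a sketch in Python against a throwaway SQLite file (the table, query, and user counts are made up; substitute your target database's driver): populate test data, then ramp up concurrent simulated users and record elapsed times.

```python
import sqlite3
import threading
import time

DB = "bench.db"

# Populate the server with test data (here: a throwaway SQLite file).
conn = sqlite3.connect(DB)
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(10_000)])
conn.commit()
conn.close()

def worker(queries, results, idx):
    """One simulated logged-in user issuing repeated reads."""
    local = sqlite3.connect(DB)     # one connection per user
    t0 = time.perf_counter()
    for _ in range(queries):
        local.execute("SELECT AVG(total) FROM orders").fetchone()
    results[idx] = time.perf_counter() - t0
    local.close()

# Ramp up the number of concurrent users and record worst-case times.
for users in (1, 4):
    results = [0.0] * users
    threads = [threading.Thread(target=worker, args=(20, results, i))
               for i in range(users)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"{users} users: worst {max(results):.3f}s")
```

    A real benchmark would, of course, need far more data, realistic query mixes, and separate client machines, but the ramp-and-measure skeleton is the same.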
    cheers.
    R

  • PE7 - capturing analog video (VHS/Analogue video camcorder)

    I'm a complete novice, so please excuse my ignorance on this subject.
    Re the subject heading - To quote from the Help System:
    1. To use video from analog sources in your Adobe Premiere Elements project, you must first convert (digitize) the footage to digital data, because Adobe Premiere Elements only accepts direct input from digital sources.
    2. Use an AV DV converter to bridge the connection between your analog source and the computer. Connect the analog source to the converter and connect the converter to your computer. Adobe Premiere Elements then captures the digitized footage.
    OK as far as it goes.
    In my case I intend to use the Canopus ADVC300 as the digital video converter. The connections to this from analog devices (VHS/Analogue camera) and to the PC (via Firewire) appear straightforward.
    In PE7 when starting a new project one has to choose from the following input sources - DV Camcorder; DVD (Camcorder or PCDVD Drive); Digital Still Camera; Webcam or WDM Device; HDV Camcorder; AVCHD or other hard disk/memory camcorder; Mobile phone & players; PC Files & Folders.
    As far as I can tell there appears to be no means of selecting digitally converted analogue video, which is where I came in.
    Perhaps the digitised analogue stream from the ADVC300 is saved to a file, in which case one would select 'PC Files & Folders'. Is this what is meant by 'Adobe Premiere Elements then captures the digitized footage'?
    I would appreciate any advice on this subject. Responses from experienced ADVC300 users would be especially welcome.
    Regards
    Leonard

    You want to load these files from PC Files and Folders, Leonard. Just browse to them.
    You could also simply capture directly into your project using your DV bridge (the Canopus). Here's more information on capturing analogue, from the FAQs at the top of this forum.
    http://www.adobeforums.com/webx/.3bb95e46
    By the way, if you're interested in more detailed information on capturing video, as well as on all of the tools in this program and how to use them, you may want to look into "The Muvipix.com Guide to Premiere Elements 7," written by the esteemed expert Steve Grisetti, who hosts this forum, co-founded http://www.muvipix.com as a site supporting amateur and semi-professional videomakers, and has written over 160 of the FAQs at the top of this forum.
    It's available at Amazon.com or through the Muvipix.com store:
    http://astore.amazon.com/chuckengelsco-20/detail/0615248993/104-3709942-5611121

  • Engine test cell automation using labview

    I want to know the hardware details for my application (engine test cell automation using LabVIEW).
    Amit Amrutkar

    >> I want to know the hardware details of my application(engine test cell automation using labview).
    I have a feeling that one of the following happened:
    A. The "Post" button was clicked by mistake before the compilation was over.
    B. The OP wants to know the hardware details for his application, which looks like an engine test cell automation. (It thus must have been in a hardware forum, I guess.)
    Post more details on the number of analog channels, required resolution, sampling time, etc., and the number of digital channels, and there is a possibility that we can start helping.
    Raghunathan
    LV2012 to Automate Hydraulic Test rigs.

  • Unidrive SP CANOpen PDO

    Hello, 
    I am currently working to control a Control Techniques drive system via LabVIEW and CANopen, such that the drive system acts as a vehicle emulator. So far I've been able to read and write different parameters from LabVIEW to the Control Techniques drive system through the Service Data Object (SDO) protocol. However, I would like to communicate with the test rig using the Process Data Object (PDO) protocol. If I manually configure PDO1 from the drive, I can read values stored in it without a problem; however, when it comes to using PDO Write, I don't get any error, but I also don't get any response. One of the default values for RxPDO1 includes setting the speed of the motor, so if I want to access that parameter to write a desired speed in percentage:
    What is the format of the value that LabVIEW needs to convert to CANopen format?
    What needs to be connected to CANopen data in?
    How can I write to different channels stored in one PDO?
    As I go through the CT CANopen manual, there seems to be a gap in how to do the mapping for the different PDOs, and how to access the specific menu and parameter. If I want to configure different PDO objects besides the first one that can be configured manually, I have to follow a series of steps which begin by writing enable to the address of the PDO object, i.e. TxPDOB 0x1A01, sub-index 0.
    Is enable in this case a zero value that I need to write through SDO to this sub-index?
    When setting up a PDO (master-required), do communication parameters need to be written before enabling that PDO?
    Frames for PDO contain a control bit; is it in this case parameter 6.43?
    I'm attaching what I've been working on so far, plus the CT CANopen datasheet, and I hope you can point me in the right direction. To make things a bit easier, I'm also attaching a slightly modified version of the Analogue Output VI (Untitled2.vi) that's found in the CANopen examples folder. In the drive, there are 4 PDOs, each with 4 TxPDO and 4 RxPDO channels. The first PDO can be configured directly from the drive, and that's how I've been able to read parameters. However, when it comes to writing, that's when the problem appears. With this VI, the first RxPDO channel is configured to set the speed of the motor (directly configured in the drive), but I'm not getting any response.
    Thanks for your help,
    Enrico 
    Attachments:
    CTCanOpen359-555.pdf ‏1905 KB
    PDO_Write.zip ‏162 KB
    PDO_Test.vi ‏54 KB
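    As background to the mapping questions above: CiA 301 defines PDO remapping as a fixed, ordered series of SDO writes (disable the PDO via its COB-ID, zero the map's entry count, write the entries, write the count back, re-enable). A Python sketch of that sequence as data follows; the object indices, COB-ID, and the 0x2643 speed object are hypothetical illustrations, not values from the CT manual.

```python
def pdo_remap_sequence(comm_index, map_index, cob_id, entries):
    """Build the ordered list of SDO writes that CiA 301 requires to
    remap a PDO: disable it, clear the map, write entries, restore.
    Each mapping entry is (object index, sub-index, bit length)."""
    writes = []
    # 1. Invalidate the PDO (set bit 31 of the COB-ID) before touching the map.
    writes.append((comm_index, 1, cob_id | 0x8000_0000))
    # 2. Zero the entry count so the map may be rewritten.
    writes.append((map_index, 0, 0))
    # 3. Write each mapping entry: index << 16 | sub << 8 | bit length.
    for n, (idx, sub, bits) in enumerate(entries, start=1):
        writes.append((map_index, n, (idx << 16) | (sub << 8) | bits))
    # 4. Write the real entry count, then re-enable the COB-ID.
    writes.append((map_index, 0, len(entries)))
    writes.append((comm_index, 1, cob_id))
    return writes

# Hypothetical: map a 16-bit speed object at 0x2643:0 into TxPDO2 (0x1A01).
seq = pdo_remap_sequence(0x1801, 0x1A01, 0x281, [(0x2643, 0x00, 16)])
for index, sub, value in seq:
    print(f"SDO write 0x{index:04X}:{sub} = 0x{value:08X}")
```

    In LabVIEW each tuple would become one SDO write; whether "enable" for the CT drive is exactly this zero-the-count step should be confirmed against the CT manual.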

    Hey George,
    Thanks for your reply. I'm currently using LabVIEW version 8.5.
    To communicate with the drive I'm using CANopen, NI-PCI-CAN software version 1.1.1.
    I have now attached the subVIs that were missing; I apologize for that. As for the control bit, there is a parameter in the drive that controls the motor, 6.43, called 'control word enable', that sets that specific bit to 1. I've done it before the PDO configuration, but I still have the same issue, unless it has to be set at another point.
    To make things a bit easier, I'm attaching a slightly modified version of the Analogue Output VI that's found in the CANopen examples folder. In the drive, there are 4 PDOs, each with 4 TxPDO and 4 RxPDO channels. The first PDO can be configured directly from the drive, and that's how I've been able to read parameters. However, when it comes to writing, that's when the problem appears. With this VI, the first RxPDO channel is configured to set the speed of the motor (directly configured in the drive), but I'm not getting any response.
    Thanks again,
    Enrico 
    Attachments:
    PDO_Write.zip ‏125 KB
    Untitled 2.vi ‏61 KB

  • Error in reading voltage using DAQmx with PXI-6259

    Hi,
    We are trying to acquire analogue voltage from 5 channels using a PXI-6259. We trigger the data sampling with clk 0 (i.e. using PFI 12).
    The problems we had are:
    1) When we tried to get voltage from 5 channels, we got two readings close together after twice the period. For example, if we set the sampling rate to a period of 1 second, we get one sample at 2 seconds and another at around (2+t), where t is a random time less than the execution period of the loop. The next cycle then gives samples at 4 seconds and (4+t).
    2) This timing problem did not occur if we only acquired voltage from 2 or fewer channels.
    3) We tried to run exactly the same program with an NI USB-6210 and everything worked just fine.
    4) We replaced the DAQmx Read with the DAQ Assistant and still encountered the same problem.
    Attached is a simple program that shows the error. Is this possibly a bug in PXI or DAQmx?
    Please help us
    Arul (CLAD)
    Attachments:
    test rig.vi ‏74 KB

    Good afternoon Nitin,
    Thanks for contacting National Instruments with your issue, we'll try our best to resolve it for you as quickly and efficiently as possible.
    As the error message stated:
    "Analog input virtual channels cannot be created out of order with respect to their physical channel numbers for the type of analog device you are using. For example, a virtual channel using physical channel ai0 must be created before a virtual channel with physical channel ai1."
    If you need to switch at random which channels you control IEPE excitation or coupling on, a better approach, rather than having all of these channels in one task as you've done in your attached VI, is to split your program into multiple tasks that can run simultaneously.
    I have attached a screenshot of this approach, which hopefully is not a compromise to your application.
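    On the prevention side, if the channel list comes from configuration or user input, sorting the physical channel names numerically before creating virtual channels avoids the out-of-order error entirely. A small Python sketch (the DAQmx-style names are illustrative):

```python
import re

def in_physical_order(channels):
    """Sort DAQmx-style physical channel names (e.g. 'Dev1/ai10')
    by their trailing number, so virtual channels are created in the
    order the device requires (ai0 before ai1, ai2 before ai10, ...)."""
    def key(name):
        m = re.search(r"(\d+)$", name)
        return (name[: m.start()], int(m.group(1))) if m else (name, -1)
    return sorted(channels, key=key)

chans = ["Dev1/ai10", "Dev1/ai2", "Dev1/ai0"]
print(in_physical_order(chans))
# Note: a plain alphabetical sort would wrongly put ai10 before ai2.
```

    The sorted list can then be fed to the task-creation calls in whatever API is in use.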
    Best of luck with your project.
    Sincerely,
    Minh Tran
    Applications Engineering
    National Instruments
    Attachments:
    ParallelTasks.JPG ‏52 KB

  • Stable display readings

    When displaying analog values from sensors like temperature probes (PT100), flowmeters, and pressure transducers, it is quite irritating to see the second decimal, or at times even the first decimal, rolling rapidly.
    Most times it is due to overriding noise on the signal. But even when we follow strict guidelines with respect to shielding, grounding, and differential-mode inputs, I still cannot completely eliminate this. All my channels have low-pass filters to remove the AC line noise, and I also resort to a moving average of at least 5 samples. One would expect a channel like room temperature to be stable - it simply cannot vary that fast.
    What I want to see on the screen is a display that is as steady as an analogue meter. Is there any proven method in software, apart from a moving average, to do this? Reducing the display precision would be another way - but then it also reduces the resolution of the required display.
    Thanks
    Raghunathan
    LV2012 to Automate Hydraulic Test rigs.

    Raghunathan,
    One point I would like to put forth: set the optimal range for your measuring device, so that the parameter is measured with good resolution.
    For example, if you anticipate temperature variation in the range of 10 - 60 deg C, it would be advisable to set the high and low limits correspondingly.
    This would improve your resolution and give pretty stable readings.
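    Beyond a plain moving average, one common trick is an exponential average combined with a display deadband, so the indicator only updates when the smoothed value really moves. A sketch in Python (the alpha and deadband values are arbitrary and would need tuning to the signal):

```python
def make_display_filter(alpha=0.1, deadband=0.05):
    """Exponential moving average plus a display deadband: the shown
    value only changes when the smoothed reading moves by more than
    `deadband`, which stops the last digit from flickering."""
    state = {"ema": None, "shown": None}
    def update(sample):
        if state["ema"] is None:
            state["ema"] = sample
        else:
            state["ema"] = alpha * sample + (1 - alpha) * state["ema"]
        if state["shown"] is None or abs(state["ema"] - state["shown"]) > deadband:
            state["shown"] = round(state["ema"], 2)
        return state["shown"]
    return update

display = make_display_filter()
readings = [25.01, 25.03, 24.99, 25.02, 25.00, 25.04]
print([display(r) for r in readings])
```

    Unlike lowering the display precision, this keeps full resolution available internally; the deadband only gates what the indicator shows.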

  • Book recommendation - user interface hardware

    Hi all,
    Could you please recommend a book related to user interface design and hardware-related knowledge? That would be very helpful.
    Thanks
    Need recommendation for hardware and software
    Hi all,
    I am doing a project related to current control.
    The purpose is to keep the current constant. So I measure the current and get the value, and send an analog signal to tell our machine to discharge or to charge in order to keep the current constant. I need to write the program that gives an analog signal to the machine based on the current input.
    Also, the machine can communicate with the user. The user can monitor the machine's operation status, current level, etc.
    I want to use LabVIEW to program, since it doesn't cost a lot of time. I saw there is LabVIEW RT, LabVIEW FPGA, etc. I am confused whether all of these are just different functions of LabVIEW or different software. Which one should I use?
    For hardware, I have no idea what is compatible with LabVIEW. What should I use?
    I am totally new to this. I'll appreciate it a lot if anyone can give suggestions.
    Thanks

    Hi,
    In terms of books specific to GUI design there was one written by David Ritter, LabVIEW GUI - Essential Techniques but I'm pretty sure it's out of print. A very good book to get started with LabVIEW is 'LabVIEW for Everyone', by Jeffrey Travis and Jim Kring. This is the one that I picked up when I was plunged into the deep end and had to start building applications in LabVIEW. I still use it now because it covers a bit of everything and is usually enough to point me in the correct direction even if it doesn't have all the answers.
    In terms of your other questions, LabVIEW RT is a real-time version of LabVIEW normally used for things like CompactRIO or PXI chassis hardware. The FPGA add-on is for programming FPGAs, such as the backplane in the CompactRIO device. The big advantage of both of these is that they avoid the need to use Windows, macOS or Linux, which when dealing with critical rig control or acquisition can be a big plus. The bad side is that the hardware is more expensive and there's an additional software cost. For a simple application such as yours, these would be serious overkill unless it is highly critical.
    For the applications I run, I use the LabVIEW Developer Suite coupled with National Instruments hardware and compile executables that run under Windows. This does mean that I have to add in code to cope with situations when Windows decides to crash, or do something odd, but it reduces the cost significantly. In your case it appears you have a single station controlling a simple process, so the world is your oyster. The lowest-cost option would be the LabVIEW Base version running on a PC using a USB DAQ - the cheapest that can output analogue signals is the USB-6008, but these are quite limited in their capabilities. I tried to use the slightly higher-spec version (USB-6009) to run a lashed-up test rig for a short period and had lots of problems because I was using the digital ins, outs, counter, and analogue ins and outs all at the same time - it appears you can use any one quite happily up to the stated performance, but it complains when you try to use them all together.
    If your budget permits then the USB-6211 is a better choice in my opinion. You can look up the specs and prices for your location at www.ni.com
    If you're a student then you can get LabVIEW at a very much reduced cost.
    Hope this helps.
    Regards,
    Paul
    CLD running LabVIEW 2012 32 & 64 bit on Windows 7 64 bit OS.

  • USB-6501 unresponsive causing BSOD (Blue Screen Error)

    There are a lot of posts about the same issue, but mine is a little peculiar, so I decided to post it.
    I am using two USB DAQs in the same PC (USB-6501, USB-6001), connected to the USB ports on the back. The USB-6501 is used to obtain digital inputs from reed switches and sensors through an SSR. The USB-6001 is used to control 2 double-valve solenoids, 1 DC motor, and 2 indicator lamps. The USB-6001 is also used to read analog values of current (using a Hall effect sensor) and voltage (using a potentiometer).
    At first I was facing problems with the USB-6001 (the USB-6501 was working fine at this point) resetting during operation, accompanied by a BSOD. Then I learned it was due to my relay, which requires 30 mA of current to switch, so I used a ULN2003A to interface the USB-6001 with the relays, and after that the application ran perfectly for 4 days.
    Now the USB-6501 is having the same problem, and when I perform "Self Test" from NI MAX it shows "Error Code: 50405". I am able to reset the device from NI MAX only sometimes; other times I have to unplug the USB device and then plug it back in. As the application is used for an automated test rig, the customer is frustrated by this problem. Once the card becomes unresponsive (or after the card is reset), a BSOD occurs.
    I have checked all the device drivers and the OS for any errors, but they are fine. I have even tried changing the RAM to solve the BSOD, with no luck.
    System Details:
    Windows 7 SP1
    NI MAX 14.0
    power saving is disabled 
    I have attached the latest minidump files, as they might help in finding out the reason behind this problem (file extensions changed for the purpose of uploading).
    I need to know: is there any permanent solution for this problem, and what is the reason for it?
    Attachments:
    060115-12480-01.txt ‏315 KB
    053015-12324-01.txt ‏315 KB
    052615-17440-01.txt ‏315 KB

    Additional info: I did an analysis of the dump files and this is the result
    0: kd> !analyze -v
    * Bugcheck Analysis *
    DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)
    An attempt was made to access a pageable (or completely invalid) address at an
    interrupt request level (IRQL) that is too high. This is usually
    caused by drivers using improper addresses.
    If kernel debugger is available get stack backtrace.
    Arguments:
    Arg1: fffff88009fe10b1, memory referenced
    Arg2: 0000000000000002, IRQL
    Arg3: 0000000000000000, value 0 = read operation, 1 = write operation
    Arg4: fffff8800657922e, address which referenced memory
    Debugging Details:
    READ_ADDRESS: GetPointerFromAddress: unable to read from fffff800032c30e8
    fffff88009fe10b1
    CURRENT_IRQL: 2
    FAULTING_IP:
    nifslk+822e
    fffff880`0657922e 0fb650ff movzx edx,byte ptr [rax-1]
    CUSTOMER_CRASH_COUNT: 1
    DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
    BUGCHECK_STR: 0xD1
    PROCESS_NAME: System
    TRAP_FRAME: fffff80000b9c6b0 -- (.trap 0xfffff80000b9c6b0)
    NOTE: The trap frame does not contain all registers.
    Some register values may be zeroed or incorrect.
    rax=fffff88009fe10b2 rbx=0000000000000000 rcx=0000000000000000
    rdx=fffff80000b9c8e0 rsi=0000000000000000 rdi=0000000000000000
    rip=fffff8800657922e rsp=fffff80000b9c840 rbp=fffff88009fe10b1
    r8=0000000000000000 r9=0000000000000000 r10=0000000000000000
    r11=0000000000000000 r12=0000000000000000 r13=0000000000000000
    r14=0000000000000000 r15=0000000000000000
    iopl=0 nv up ei ng nz na po nc
    nifslk+0x822e:
    fffff880`0657922e 0fb650ff movzx edx,byte ptr [rax-1] ds:c8e0:fffff880`09fe10b1=??
    Resetting default scope
    LAST_CONTROL_TRANSFER: from fffff80003091be9 to fffff80003092640
    STACK_TEXT:
    fffff800`00b9c568 fffff800`03091be9 : 00000000`0000000a fffff880`09fe10b1 00000000`00000002 00000000`00000000 : nt!KeBugCheckEx
    fffff800`00b9c570 fffff800`03090860 : fffffa80`06adc250 fffffa80`05c3a060 fffffa80`06adc250 00000000`0000ffff : nt!KiBugCheckDispatch+0x69
    fffff800`00b9c6b0 fffff880`0657922e : fffff880`05d47468 fffff880`01d91f90 00000000`00000000 fffff880`09fe0770 : nt!KiPageFault+0x260
    fffff800`00b9c840 fffff880`05d47468 : fffff880`01d91f90 00000000`00000000 fffff880`09fe0770 fffff880`09fe17a0 : nifslk+0x822e
    fffff800`00b9c848 fffff880`01d91f90 : 00000000`00000000 fffff880`09fe0770 fffff880`09fe17a0 fffff880`09fe10b1 : 0xfffff880`05d47468
    fffff800`00b9c850 00000000`00000000 : fffff880`09fe0770 fffff880`09fe17a0 fffff880`09fe10b1 fffff800`00b9c8e0 : nipalk+0x75f90
    STACK_COMMAND: kb
    FOLLOWUP_IP:
    nifslk+822e
    fffff880`0657922e 0fb650ff movzx edx,byte ptr [rax-1]
    SYMBOL_STACK_INDEX: 3
    SYMBOL_NAME: nifslk+822e
    FOLLOWUP_NAME: MachineOwner
    MODULE_NAME: nifslk
    IMAGE_NAME: nifslk.dll
    DEBUG_FLR_IMAGE_TIMESTAMP: 51f2daeb
    FAILURE_BUCKET_ID: X64_0xD1_nifslk+822e
    BUCKET_ID: X64_0xD1_nifslk+822e
    Can anyone help me with what this means?

  • Timed loop and CPU usage

    Platform is Win XP Pro and the machine is a P4 at 2.5 GHz with 512 MB RAM.
    LV7.1 + PCI-6229
    I am using a 50 ms timed loop for running a state machine inside it, and also a whole lot of other things like reading/writing DAQmx functions, file I/O functions and such. As the project involves a main and sub-panel set-up, local variables could not be eliminated fully, and there should be something like 150 of them. But not all are accessed always - maybe about 15 of them at any given time, depending on the SM status.
    Problem:
    Once started, the "Finished Late" indication is off and the actual timing alternates between 49 and 52 ms. The CPU usage is around 25%.
    But as time goes by, the system gets unstable: after 15 minutes or so, the Finished Late indication is always ON and the CPU usage gradually tends towards or exceeds 100%.
    Obviously the machine control timing now gets affected and things slow down badly. Closing the application and restarting repeats the above cycle.
    I am at a loss to understand what is happening. Will breaking down the single timed loop into multiple ones help? Will that be an efficient way of parallel threading?
    I can post the code, but it's quite large, so I will do that as a last resort.
    thanks
    Raghunathan
    LV2012 to Automate Hydraulic Test rigs.

    Hello,
    It sounds like an interesting problem.  It would be worth some experimentation to figure out what's going wrong - attempting to decouple major "pieces" of the code would be helpful.  For example, you could try breaking your code into multiple loops if that makes sense in your architecture, but perhaps you could even eliminate all but one of the loops to begin with, and see if you can correlate the problem to the code in just one of your loops.
    Another concern is that you mention using many local variables.  Variable read operations cause new buffer allocations, so if you're passing arrays around that way, you could be forcing your machine to perform many allocations and deallocations of memory.  As arrays grow, this becomes a bigger and bigger problem.  You can use other techniques for passing data around your block diagram, such as dataflow where possible (just simple wires), or queues where dataflow can't dictate program flow completely.
    Hopefully looking into your code with the above considerations will lead you in the right direction.  In your case, removing code so that you can identify which elements are causing the problem should help significantly.
    Best Regards,
    JLS
    Sixclear

  • In line or Sub-VI ?

    I have created a VI to do the following :
    1. Get raw data from 4 AI channels once every 50ms.
    2. Accumulate these in a 2D array as long as a cycle lasts.
    3. At the end of the cycle, find the mean of all four arrays, check if they are in-bound, and save the result in a Boolean array.
    4. Repeat steps 1 to 3 for three iterations.
    5. At the end of the third iteration, find out if each of the channels has been within bounds based on the Boolean array, and declare a verdict.
    (The verdict is FAIL if all three iterations of one or more channels are FALSE.)
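    In text form, the verdict logic of the steps above could be sketched like this (Python standing in for the graphical code; the simulated acquisition and limit values are made up):

```python
from statistics import mean

def run_check(acquire, lo, hi, cycles=3):
    """Per the steps above: each cycle, average every channel's samples,
    record whether the mean is in-bound, and declare FAIL only if some
    channel is out-of-bound in every one of the `cycles` iterations."""
    n_channels = len(lo)
    in_bound = [[False] * n_channels for _ in range(cycles)]
    for c in range(cycles):
        samples = acquire()                 # rows = scans, cols = channels
        for ch in range(n_channels):
            avg = mean(row[ch] for row in samples)
            in_bound[c][ch] = lo[ch] <= avg <= hi[ch]
    # A channel fails only if it was out-of-bound in every iteration.
    fails = [all(not in_bound[c][ch] for c in range(cycles))
             for ch in range(n_channels)]
    return "FAIL" if any(fails) else "PASS"

# Simulated acquisition: 4 channels, the last one always out of range.
bad_rig = lambda: [[1.0, 2.0, 3.0, 9.0] for _ in range(10)]
print(run_check(bad_rig, lo=[0, 0, 0, 0], hi=[5, 5, 5, 5]))
```
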
    The VI is functional and attached. The various indicators help to track the performance and are not needed in the final version. I only need the controls for CycleCount, NoOfChannels, the LoLimit and HiLimit arrays, and the final Verdict information.
    Question is :
    1. How do I convert this into a subVI so that I can call it from inside a 50 ms timed loop that has the DAQmx AI Read function?
    2. Or is it easier to just copy this code inside the timed loop which my application has anyway?
    Thanks
    Raghunathan
    LV2012 to Automate Hydraulic Test rigs.
    Attachments:
    ParamOK_ArrayIn.vi ‏113 KB

    Raghunathan wrote:
    2. Or is it easy to just copy this code inside of a TimedLoop which my application anyway has ?
    First of all, your VI is overly complicated. You need two shift registers just to keep track of indices because you use "Insert Into Array" to append a row to an existing array. You don't need those if you use "Build Array".
    (Actually, if performance is an issue, you should allocate a fixed-size array and replace columns as you go. In that case you would need the shift registers again.)
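    The append-versus-preallocate point generalizes beyond LabVIEW; here is a minimal NumPy sketch of the two approaches (sizes are illustrative, and NumPy is assumed only as a stand-in for LabVIEW's contiguous arrays):

```python
import numpy as np

n_cols, n_ch = 1000, 4

# Growing approach: each concatenation may reallocate and copy the whole
# array -- analogous to Build Array (or Insert Into Array) inside a loop.
grown = np.empty((n_ch, 0))
for i in range(n_cols):
    col = np.full((n_ch, 1), float(i))
    grown = np.hstack((grown, col))

# Preallocated approach: one allocation up front, then in-place column
# replacement -- analogous to Replace Array Subset on a fixed-size array.
prealloc = np.empty((n_ch, n_cols))
for i in range(n_cols):
    prealloc[:, i] = float(i)

assert np.array_equal(grown, prealloc)  # same result, very different cost
```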
    The final code is almost nothing, and I don't think it is worth creating a subVI for it. Just put it inline in the main VI.
    You are doing things again way too complicated! Here are some examples:
    Your code skips an acquisition set whenever it does the calculation (your new data array does not enter the case!). You should use all the data!
    You do an incredible song and dance to duplicate what we already have in "And Array Elements" and "Or Array Elements".
    You don't need the "NoOfChannels" control, because the number is given by the array dimension already.
    The attached is a quick attempt to solve some of these issues. Now it seems to operate correctly. See if it makes sense to you. The code works well at a 50ms loop time.
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    ParamOK_ArrayInmod.vi ‏104 KB

  • Data Acquisition Step Type

    Hi, is there a step type to perform data acquisition and basic comm control such as RS232 and GPIB within TestStand?
    This question was raised when I was training one of our technicians today. He suggested that having a step type which performs data acquisition and comm control would reduce their development time dramatically. They have just started to use TestStand on small cycle test rigs as part of an evaluation stage. They have no experience with LabVIEW or CVI; they basically want a quick and easy way to produce a test sequence with data acquisition and comm control.
    If there is anything like this at the moment, can you possibly reply? If there is not, is this something which will be available in the future?
    Message Edited by ds1638 on 12-09-2008 02:44 PM

    Data acquisition usually refers to a DAQ board from NI, so I think what you are looking for is a generic NI-VISA step type and a specific RS-232 step type. I have a NI-VISA step type that was originally written with LabVIEW 6.1 for use with TestStand 2. I've attached that below. I don't have the .ini file handy and don't have time to recreate it, but from the code you should be able to see what needs to be added to the step properties. The basic step is a modified Pass/Fail step.
    It's not that difficult to create your own custom steps. I've done a lot and it only takes a little while longer than doing a code module in LabVIEW or CVI. The extra time is in creating the edit step.
    I've got an RS-232 step type but part of it is product specific so it is proprietary and I can't post that. Look at the VISA example and the other examples of custom step types that are on the developer zone and see how far you get.
    Attachments:
    VISA IO.zip ‏188 KB

  • How to measure the baseline of a noisy, pulsed signal

    Hi
    I am measuring the torque exerted by a large motor on a shaft using a load cell and lever arm. The shaft runs at approx 150 rpm. I have attached a drawing that shows the output I get. This is a test rig.
    I have written some code that measures the maximum peak out of a group of approx 5 peaks and writes this to a shift register. This gives me an idea of the maximum torque "spike".
    I also wish to measure the baseline torque (due to the bearings in the machine). Even when highly filtered (my noise filter is set to 49Hz) the signal exhibits this noise which is probably due to vibration in the system. The signal is zeroed when the motor is not running.
    Does anyone have any ideas on how to measure the "baseline" torque? The large spike in torque prevents me from doing a running average. Can anyone think of a way of averaging just the noisy part of the signal to get an average value? I aim to subtract the average baseline torque from the peak value to get an idea of the torque due to the event which causes the spike.
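    One common approach is to exclude the spikes from the average with a simple amplitude threshold, so only the noisy floor contributes. A hedged Python sketch with fabricated data (the threshold and trace values are illustrative, not from the rig):

```python
def baseline(samples, threshold):
    """Mean of the samples that stay below threshold, i.e. the noisy floor.

    Samples at or above the threshold (the torque spikes) are simply
    excluded, so they cannot drag the average upward.
    """
    quiet = [s for s in samples if s < threshold]
    return sum(quiet) / len(quiet) if quiet else float("nan")

# Fabricated torque trace: noisy floor around 1.0 with spikes at 10.0
signal = [1.0, 1.2, 0.8, 10.0, 1.1, 0.9, 10.0, 1.0]
base = baseline(signal, threshold=5.0)
peak = max(signal)
print(peak - base)  # torque attributable to the spike event alone
```

    A median over the whole window is another robust choice, since the brief spikes barely shift it.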
    Any help would be greatly appreciated.
    Many thanks.
    Attachments:
    drawing of torque signal.gif ‏26 KB

    Thanks for the reply. I understand what you are saying. However, I might have to modify my method for measuring the peaks if I choose to implement your idea. I have taken a screenshot of my "peak finder" code and attached it.
    Basically, the reset terminal is wired to a timer which outputs a pulse every few seconds. This resets the vi (a standard NI one, I think) and sets the peak magnitude back to zero. This way, I am windowing the signal and measuring the maximum peak in every window. This is what I need to do.
    So I could use a logical filter that feeds data to the running average only if:
    the amplitude of the signal is less than a certain threshold,
    and the current value has similarly low values on either side of it.
    How would you construct the code to delay the evaluation so that the values in front of and behind the current data point can be analysed?
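    One way to get that delay is a small FIFO buffer: a sample is only judged once its neighbours on both sides have arrived, and it enters the running average only if the whole neighbourhood sits below the threshold. A Python sketch of the idea (window size, threshold, and data are illustrative; in LabVIEW this would be a shift register holding the last few samples):

```python
from collections import deque

def filtered_average(samples, threshold, half_window=1):
    """Average samples whose whole neighbourhood stays below threshold.

    A sample is evaluated only after half_window samples have arrived on
    each side of it -- that waiting is the 'delay' in the evaluation.
    """
    buf = deque(maxlen=2 * half_window + 1)
    total, count = 0.0, 0
    for s in samples:
        buf.append(s)
        if len(buf) == buf.maxlen and all(v < threshold for v in buf):
            total += buf[half_window]  # centre sample, now fully surrounded
            count += 1
    return total / count if count else float("nan")

# Fabricated trace: quiet floor around 1.0 with one spike at 10.0
signal = [1.0, 1.1, 0.9, 10.0, 1.2, 1.0, 1.1]
print(filtered_average(signal, threshold=5.0))
```

    Note that samples adjacent to a spike are also excluded, which is exactly the behaviour described: the current value must have similarly low neighbours on both sides.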
    thanks again
    Attachments:
    peak_find_screenshot.jpg ‏45 KB
