cRIO FPGA read 1024000 samples and transfer to RT

Hello,
I have a cRIO application in which I have to read 1024000 samples on 3 channels at a sampling rate of 25.6 kHz. My NI 9232 offers the possibility of sampling at such a rate. The problem is that I do not know how to acquire and transfer that amount of data to the RT target. I'd be grateful if someone could help me.
Attachments:
FPGA.vi ‏60 KB
RT.vi ‏359 KB

Valentina,
You are correct to mention that your data will not be continuous with the abort node. I only mentioned that you should use the abort node because your FPGA VI was not sampling continuously and you were restarting the FPGA from the RT target.
If I understand your application correctly, you’d like to continuously analyze a set of 1024000 samples on the RT target. Is this correct?
If so, why are you using a For Loop to read from the AI node only N times (where N = Samples per Channel)? Why not place the AI node and FIFO Write method in a while loop, as done in the Streaming Data (DMA).lvproj example, which you can find in the LabVIEW Example Finder? By placing these nodes in a while loop, you’ll get a continuous stream of data from the FPGA until you tell the while loop to stop.
On the RT side, you can also use a while loop to continuously read the data in the FIFO. If reading 1024000 samples from the FIFO at once does not work for you, you should consider setting the number of elements to read from the FIFO to a smaller value that divides 1024000 evenly. This approach might take a little more work because you have to join N iterations of FIFO data together in order to pass a full set of 1024000 samples to your analysis function.
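The chunked-read approach can be sketched in Python (LabVIEW is graphical, so this is only an illustration of the logic; `read_fifo` is a hypothetical stand-in for the FPGA Interface Read FIFO call, and the chunk size of 102400 is just one value that divides 1024000 evenly):

```python
# Sketch of reading 1,024,000 samples in smaller FIFO chunks and joining
# them before analysis. read_fifo is a hypothetical stand-in for the FPGA
# Interface "Read FIFO" call; here it serves up a simulated stream so the
# chunking logic is clear.

TOTAL_SAMPLES = 1_024_000
CHUNK = 102_400            # 1,024,000 is an exact multiple of this

stream = iter(range(TOTAL_SAMPLES))   # pretend continuous DMA stream

def read_fifo(n):
    """Pretend FIFO read: return the next n samples of the stream."""
    return [next(stream) for _ in range(n)]

def acquire_block():
    """Join CHUNK-sized reads until a full analysis block is assembled."""
    block = []
    for _ in range(TOTAL_SAMPLES // CHUNK):
        block.extend(read_fifo(CHUNK))
    return block

block = acquire_block()
assert len(block) == TOTAL_SAMPLES
```

The same pattern applies per channel; on the real target the joined block would then be handed to the analysis function.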
Regards,
Tunde S.
Applications Engineer
National Instruments

Similar Messages

  • How to read a table and transfer the data into an internal table?

    Hello,
I am trying to read all the data from a table (all attribute values from a node) and write these data into an internal table. Any idea how to do this?
    Thanks for any help.

    Hi,
    Check this code.
Here I created a context node FLIGHTS whose attributes come from the SFLIGHT table.
    DATA: lo_nd_flights TYPE REF TO if_wd_context_node,
          lo_el_flights TYPE REF TO if_wd_context_element,
          ls_flights    TYPE if_main=>element_flights,
          it_flights    TYPE if_main=>elements_flights.
    " Navigate from <CONTEXT> to <FLIGHTS> via lead selection.
    lo_nd_flights = wd_context->get_child_node( 'FLIGHTS' ).
    " Read all static attribute values of the node into the internal table.
    CALL METHOD lo_nd_flights->get_static_attributes_table
      IMPORTING
        table = it_flights.
    The node's data will now be in the internal table it_flights.

  • Incorrect data type when writing to FPGA Read/Write Control

I have run into a problem this morning that is causing me a substantial headache.  I am programming a CompactRIO chassis running in FPGA mode (not using the scan engine) with LabVIEW 2012.  I am using the FPGA Read/Write Control function to pass data from the RT host to the FPGA target.  The data the RT host is sending comes from a Windows host machine (acting as the UI) and is received by the RT host through a network-published variable.
    The network published shared variable (shared between the RT and Windows system) is a Type Def cluster containing several elements, one of which is a Type Def cluster of fixed point numerics.  The RT system reads this shared variable and breaks out the individual elements to pass along to various controls on the FPGA code's front panel.  The FPGA's front panel contains a type def cluster (the same type def cluster, actually) of fixed point numerics.
The problem comes in the RT code.  After I read the shared variable I unbundle the cluster by name, exposing the sub-cluster of fixed point numerics.  I then drop an FPGA Read/Write Control on the RT block diagram and wire up the FPGA reference.  I left click on the FPGA Read/Write Control and select the cluster of fixed point numerics.  I wire these together and get a coercion dot.  Being a coercion dot hater, I hover over the dot and see that the wire data type is correct (type def cluster of fixed point numerics), but the terminal data type is listed as a cluster containing a Boolean, code integer and source string, also known as an error cluster.  I delete the wire and check the terminal data type on the Read/Write Control, which is now correctly listed as a type def cluster of fixed point numerics.  Rewiring it causes the terminal to revert back to the error cluster.  I delete the wire again and right click on the terminal to add a control.  Sure enough, a type def cluster of fixed point numerics appears.  Right clicking and adding an indicator to the unbundle attached to the network shared variable produces the proper result.  So, until they are attached to each other, everything works fine.  When I wire these two nodes together, one spontaneously changes to an error cluster.
    Any thoughts would be appreciated.

My apologies, I never got back to responding on this.  I regret that now, because I got it to work but never posted how.  I ran into the exact same problem today and returned to this post to read the fix.  It wasn't there, so I had to go through it all over again.
    The manifestation of the problem this time was that I was now reading from the Read/Write FPGA front panel control and writing to a network published shared variable.  Both of these (the published shared variable and the front panel control) were based on a strict type defined cluster, just like in the original post.  In this instance, it was a completely different cluster in a completely different project, so it was not a one-off thing.
In addition to getting the coercion dot (one instance becoming an error cluster, recall), LabVIEW would completely explode this time around.  If I saved the VI after changing the type definition (I was adding to the cluster), I would get the following error:
    Compile error.  Report this problem to N.I. Tech Support.  Copy cvt,csrc=0xFF
    LabVIEW would then crash hard and shutdown without completing the save.  FYI, I'm running LabVIEW 12.0f3 32-bit.
    If I would then reopen the RT code, the same crash would occur immediately, ad nauseam.  The only way to get the RT code to open was to change the type defined cluster back to the way it was (prior to adding the new element).
    I didn't realize it last time around (what originally prompted this post), but I believe I was adding to a type def cluster when this occurred the first time.
So, how did I fix it this time around? By this point I had tried many, many different things, so it is possible that something else fixed it.  However, I believe that all I had to do was to build the FPGA code that the RT code was referencing.  I didn't even have to deploy it or run it... I just had to build it.  My guess is that the problem was that the FPGA Reference VI (needed to communicate with the FPGA) is configured (in my case) to reference a bit file.  When the development FPGA Main.vi ceases to match the bit file, I think that bad things happen.  LabVIEW seems to get confused because the FPGA Main.vi development code is up and shows the new changes (and hence has the updated type def), but when you ask the RT code to do something substantial (Open, Save, etc.), it refers to the old bit file that has not yet been updated.  That is probably why the error being thrown was a compile error.
    I'm going to have to do an additional round of changes, so I will test this theory.  Hopefully I will remember to update this post with either a confirmation or a retraction.

  • Which is the best way to edit this program and make it read 1 sample from each channel?

The original program was made with Traditional NI-DAQ. I have edited it to DAQmx as best I could. The program is already applying the voltages that are generated in the code (DAQmx Write.vi), but I'm having problems with acquiring voltages: it's giving me strange readings (DAQmx Read.vi). I don't know if I have to make a DAQmx Start Task.vi for each channel in the program or if I can make it work with a single one. Note that I have not made many significant changes, because this program is already running in another lab and they gave us the program so we wouldn't have as many problems; but instead of getting the BNC-2090 we got the BNC-2090A, which uses DAQmx instead of Traditional. Can anyone help?
    Solved!
    Go to Solution.
    Attachments:
    2 Lock-In, 2 V Amp, Vd Amp - 090702(MTP).vi ‏100 KB
    2 Lock-In, 2 V Amp, Vd Amp - 090702(MTP)new.vi ‏107 KB

A BNC-2090 is just a connector block.  It has no effect on whether you need to use DAQmx or Traditional DAQ.  That is determined by the DAQ card you are connecting the terminal block to.
You might be referring to the document Differences Between the BNC-2090 and BNC-2090A Connector Blocks, but that just describes the change in the labels of the terminal block to accurately reflect the newer DAQ cards.
What problems are you having with the new VI you just posted?  Are you getting an error message?  I don't know what "rare readings" means.
You really should look at some DAQmx examples in the Example Finder.  One of the problems you are having is that your DAQ blocks are all sort of disconnected.  Generally, you should be connecting the purple wire from your create task function through the start, read, or write, and on to the close task.  Many of your DAQ functions are just sitting out there on little islands right now.  You should also be connecting up your error wires.
    With DAQmx, you should be combining all of your analog channels in a single task.  It should look something like Dev0/AI0..AI7.  Then use an N channel 1 sample DAQmx read to get an array of the readings, which you can then use index array to break apart.
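As an illustration of the single-task, N-channel/1-sample idea, here is a hypothetical Python sketch (the nidaqmx call is shown only in a comment, the device/channel names are made up, and placeholder data stands in for the hardware read, since the split step is the part being demonstrated):

```python
# Sketch: one task covering all analog channels, an N-channel/1-sample
# read, then splitting the flat result per channel (the "index array"
# step in the LabVIEW advice above).

def split_channels(flat_reading, channel_names):
    """Pair each sample with the channel it came from."""
    return dict(zip(channel_names, flat_reading))

channels = [f"Dev0/ai{i}" for i in range(8)]   # hypothetical device name

# With hardware and the nidaqmx package installed, the read itself would
# look roughly like this (not executed here):
#   with nidaqmx.Task() as task:
#       task.ai_channels.add_ai_voltage_chan("Dev0/ai0:7")
#       flat = task.read()   # one sample per channel
flat = [0.1 * i for i in range(8)]             # placeholder for task.read()

per_channel = split_channels(flat, channels)
```

The point is that a single task and a single read deliver one array, which is then broken apart per channel, rather than one task per channel.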
You should also replace the stacked sequence structures with flat sequence structures, and turn on AutoGrow for some of your structures such as the loops.  In the end, you might find you can eliminate some sequence structures.

  • Read Attachment from SAP inbox and Transfer to Application Server Folder

    Hi
I have one requirement: I have to develop a background program that reads all the attachments in the SAP inbox and transfers them to an SAP application server folder.
    Thanks and Regards
    Shyam

Hi Rajendra,
Please try this code snippet. Here we call a selection screen that allows us to browse for the file name.
    PARAMETERS: p_file TYPE localfile OBLIGATORY.
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      DATA: user_action TYPE i,
            filename    TYPE filetable,
            result      TYPE i,
            fn          TYPE file_table.
      CALL METHOD cl_gui_frontend_services=>file_open_dialog
        EXPORTING
          window_title            = 'SELECT FILE'
        CHANGING
          file_table              = filename
          rc                      = result
          user_action             = user_action
        EXCEPTIONS
          file_open_dialog_failed = 1
          cntl_error              = 2
          error_no_gui            = 3
          not_supported_by_gui    = 4
          OTHERS                  = 5.
      IF user_action = cl_gui_frontend_services=>action_ok.
        CLEAR p_file.
        LOOP AT filename INTO fn.
          p_file = fn-filename.
        ENDLOOP.
      ENDIF.
    Hopefully it helps

  • HT203433 After reading the sample for Hard Measures and buying and downloading, the downloaded book was missing 9 chapters. Anyone know how to fix this?


    Hi Charles...
Try re-downloading the iBook > Downloading past purchases from the App Store, iBookstore, and iTunes Store

  • Efficient way get FCE4 Log and Transfer to read .mts files stored on drive?

    Hi All
    I've searched the FCE discussion forum and not found an answer verified by more than one user to this question: What is an efficient way to get FCE4 (via the Log and Transfer window) to see .mts files from an AVCHD camera stored on a drive (NOT via the camera -- directly from the drive)?
    I am trying to plan the most space-efficient system possible for storing un-transcoded .mts files from a Panasonic AG-HMC151 on a harddrive so that I can easily ingest them into FCE4. I am shooting a long project and I want to be able to look at .mts files so that I can decide which ones to transcode to AIC for the edit.
Since FCE4 cannot see .mts files unless they have their metadata wrapper, the question is really: how do I most efficiently transfer .mts files from the camera to a storage hard drive with their metadata wrappers so that FCE4 can see them via the Log and Transfer window?
    Nick Holmes, in a reply in this thread
    http://discussions.apple.com/thread.jspa?messageID=10423384&#10423384
    gives 2 options: Use the Disk Utility to make a disk image of the whole SD card, or copy the whole contents of the card to a folder. He says he prefers the first option because it makes sure everything on the card is copied.
    a) Have other FCE users done this successfully and been able to read the .mts files via Log and Transfer?
    In a response to this thread:
    http://discussions.apple.com/thread.jspa?messageID=10257620&#10257620
    wallybarthman gives a method for getting Log and Transfer to see .mts files that have been stored on a harddrive without their metadata wrappers by using Toast 9 or 10.
    b) Have any other FCE4 users used this method? Does it work well?
    c) Why is FCE4 unable to see .mts files without their metadata wrappers in the Log and Transfer window? Is it just a matter of writing a few lines of code?
    d) Is there an archiving / library app. on the market that would allow one to file / name / tag many .mts clips and view them prior to transcoding into space-hungry AIC files in FCE?
    Any/all help would be most gratefully received!

I have saved the complete file structure on DVD as a backup, but have not needed to open them yet. But I will add this: as I understand the options with Toast, you are in fact converting the video to AIC or something like it. I haven't looked into it myself, and I can't imagine the extra files are that large, but maybe they are significant; I don't know. The transcoded files are huge in comparison to the AVCHD files.
A new player on the scene for AVCHD is ClipWrap 2.0. As I understand this product, it rewraps the AVCHD into a wrapper that QuickTime can open and play. This works with the MTS files only; the rest of the file structure is not needed. The rewrap is much faster than the transcode to AIC, so you have the added benefit of being able to play the files as well as not storing the extra files. The 2.0 version (which is for AVCHD) was just recently released. I haven't tried it and don't personally know of anyone who has, but you might want to try it; there is a trial version as I recall.

  • How to read multiple Digital samples and plot a chart with time stamps

    Hi,
     Could anyone send me a code that:
1. Reads multiple samples (let's say 4) from a single digital input using an NI USB 6009 or NI USB 6251
    2. 'plot digital data as a chart with time stamps'
    3. Find frequency
    4. Log data into file with time stamps
    I have attached the code which i tried.
    Thanks,
    LK
    Attachments:
    DigitalNSample.vi ‏27 KB


  • FPGA target to host DMA transfer speed

    Hello,
    ------------Task summary:
    I'm currently working on a data acquisition-heavy project using a PXIe chassis system (plus a host computer), see below for the components.
    PXIe-PCIe8388 x16 Gen 2 MXI-express (controller)*
    PXIe-1082 (chassis)
    PXIe-7966R (FPGA)
    NI 5772 (AC version, IO Module)
    *note: the controller is connected to a PCIe port on the host computer with the full x16 bandwidth.
    For my application, I need to acquire a fixed number of samples (16000) from each channel of the IO module at a fixed sampling rate (800MS/s). Each acquisition will be externally triggered at a fixed frequency, 50kHz. The number of acquisitions will also be fixed. Right now I'm aiming for about 90000 acquisitions per session.
    So in summary, for each acquisition session, I will need (16000 samples per acquisition) * (90000 acquisitions) * (2 AI channels) = 2.88e9 samples per acquisition session.
    Since each sample is transferred as a 16-bit number, this equates to 5.76GB per acquisition session.
    The total time per acquisition session is (90000 acquisitions) / (50kHz per acquisition) = 1.8 seconds.
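The arithmetic above can be sanity-checked with a few lines of Python:

```python
# Checking the acquisition-session arithmetic from the post above.

samples_per_acq = 16_000
acquisitions = 90_000
channels = 2
bytes_per_sample = 2              # samples move as 16-bit numbers
trigger_rate = 50_000             # externally triggered acquisitions per second

total_samples = samples_per_acq * acquisitions * channels   # 2.88e9 samples
total_bytes = total_samples * bytes_per_sample              # 5.76 GB per session
session_seconds = acquisitions / trigger_rate               # 1.8 s per session
throughput_gb_s = total_bytes / session_seconds / 1e9       # required sustained rate

assert total_samples == 2_880_000_000
assert total_bytes == 5_760_000_000
```

The required sustained throughput works out to 5.76 GB / 1.8 s = 3.2 GB/s, the figure questioned later in the thread.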
    --------------Problems:
    I'm having problems transferring the acquired data from the FPGA to host. I think I'm seeing an overflow on the FPGA before the data is transferred to the host. I can go into more detail pending an answer to my questions below.
    --------------Questions:
I want to ask a few general questions before posting any code screenshots. Assuming my math is correct and the host computer is 'good' enough, is it theoretically possible to transfer data at my required throughput, 5.76 GB / 1.8 seconds = 3.2 GB/s, using the hardware that I have?
    If it is possible, I can post the FPGA and host VIs that I'm using. If not, I will have another set of problems!
    Thanks,
    Michael

    thibber wrote:
    Hi Michael,
    I have a few questions / observations for you based on your post:
First, you mention that you are using the PXIe-PCIe8388 x16 Gen 2 MXI-Express.  This is only compatible with the NI RMC-8354, so when you mention the streaming speeds you are looking to achieve, is this streaming back to the RMC, or to something else?  Is the NI RMC-8354 the host computer you are mentioning?
    When it comes to streaming data with the NI 5772 and PXI 7966R, there are a few different important data rates.  First, the NI-5772 can acquire at a maximum rate of 1.6 GS/s with 12 bit resolution = 2.4 GB/s.  This is only if you are using 1 channel, for 2 channels the rate is halved.  Are you planning on using 2 separate 5772 and 7966Rs?
The 7966R can stream data at a maximum rate of 800 MB/s, so we have a data rate coming into the FlexRIO's FPGA (2.4 GB/s) and going out of the FlexRIO's FPGA (0.8 GB/s).  The data that isn't being sent back to the host accumulates in the FPGA's DRAM.  Let's say we have all of the FPGA's DRAM available to store this data (512 MB).  Our effective accumulation rate is 2.4 - 0.8 = 1.6 GB/s, so our FPGA is going to fill up in about 1/3 s, streaming a total of 0.8 + 0.512 = ~1.3 GB back to the host before saturating and losing data.
    There are a few options, therefore, to reach your requirement.  One might be duplicating your setup to have more cards.  1.3 GB x 3 = 4GB, which meets your need.  Also, the 7975R can stream data back to the host twice as fast and has 2GB of DRAM onboard, so you could store more data and stream faster, therefore meeting your requirement.
    I hope that this information helps clarify what concerns come into play for this type of application.  Please let me know if anything above is unclear or if you have further questions.
    Thanks for replying. To answer your first question: I'm transferring to a desktop computer. The controller is able to connect with a PCI express x16 slot in the desktop computer. I'm not sure how to technically describe it, but the controller plugs into the PXIe chassis, then there is another card that plugs into the host computer's PCI express x16 slot, and finally there is a large cable that connects the card in the host computer and the controller. 
For your second paragraph: the reason I used 16-bit numbers in my calculations is that that's how the data is handled in the FPGA after it has been acquired (assuming I keep it as an integer), is that correct? Then it's packed in chunks of 4 (one U64) before being written to the target-to-host FIFO (that's how the NI 5772 examples do it). Right now I'm only using one FPGA and I/O module, and I'm using both AI channels (I need to simultaneously sample two different inputs).
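The 4-into-1 packing step mentioned above can be sketched like this (a hedged illustration, not the actual NI 5772 example code; the lane order, with the first sample in the low 16 bits, is an assumption):

```python
# Sketch of packing four 16-bit samples into one U64 before a
# target-to-host FIFO write, and unpacking them again on the host side.

def pack4(samples):
    """Pack four 16-bit values into one 64-bit word (first sample in the LSBs)."""
    assert len(samples) == 4
    word = 0
    for i, s in enumerate(samples):
        word |= (s & 0xFFFF) << (16 * i)
    return word

def unpack4(word):
    """Recover the four 16-bit values from a 64-bit word."""
    return [(word >> (16 * i)) & 0xFFFF for i in range(4)]

w = pack4([0x1111, 0x2222, 0x3333, 0x4444])
assert unpack4(w) == [0x1111, 0x2222, 0x3333, 0x4444]
```

Packing this way keeps the FIFO element count at one quarter of the sample count, which is why the examples do it.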
    I might be able to live with half of the sampling rate, 400MS/s for both channels, if that means I will be able to acquire a larger amount of data. Getting another FPGA and IO module is also an appealing option. It depends on what my advisors think (I'm a graduate student), and if they want to buy another FPGA and IO module.
    Questions:
    I have a question about the 7966R vs the 7975R that you mentioned. I could probably find the information in the specifications, but I figured I would just ask you here. Is there any advantage to using the 7966R over the 7975R in terms of programmable logic elements? From what I could quickly read, the 7975R has more DSP slices and RAM, but does it have less general purpose logic blocks than the 7966R? The reason I'm asking is because the project that I'm working on will eventually involve implementing as much signal processing on the FPGA as possible. But obviously figuring out the acquisition part of the project is more important right now. 
    The other question I have is related to something nathand said in response to my first post. Is using multiple target to host FIFOs faster than using 1 target to host FIFO (assuming the combined sizes are equivalent)? I noticed that the FPGA has a max of 16 target to host FIFOs. Does each target to host FIFO reserve some amount of bandwidth? Or is the total bandwidth just divided by the amount of target to host FIFOs that I use in a given FPGA VI? Ex: If I only define 2 target to host FIFOs, each would have half of the total bandwidth, if I define 3 target to host FIFOs each would have 1/3, etc.
    Hi Michael,
    A few updates to my previous post:
First, I think I could have explained the sampling rate a bit more clearly.  Using 2 channels instead of 1 means that each channel will have half the sampling rate (800 MS/s), but the total acquisition rate will still be the same (1.6 GS/s).
    There are some other options you might want to look into as well regarding your acquisition.  For instance, is it acceptable to use only the 8 most significant or least significant bits of your measurement?  Or to discard a section of your acquisition that is irrelevant to the measurement?
    Also, if you do end up wanting to look in the direction of a 7975R, you would also want to likely switch to a 1085 chassis to fully utilize the improved streaming speeds.  The 1082 has a limitation of 1 GB/s per slot, while the 1085 can achieve up to 4 GB/s per slot.
    I look forward to hearing what other observations or concerns arise in your testing.
    Andrew T.
    National Instruments
    I'll go ahead and respond to your latest response too. Thanks again for your help.
    I think I understand the streaming rate concept. I'm not using time interleaved sampling. My application requires using the simultaneous sampling mode. I need two channels of input data.
    Unfortunately I don't think I can sacrifice on bit depth. But for right now I can probably sacrifice half of the sampling rate, and reduce my acquisition duty cycle from 100% (constantly streaming) to 50% (acquiring only half of the time). My acquisition rate will still need to be 50kHz though. I'm planning to compromise on sampling rate by summing pairs of data points instead of simply decimating, and then transferring the data to the host.
    Questions:
    We (my advisors and I) think that the summing pairs approach would preserve more information than simply throwing away every other point. Also, we can avoid overflow because each 16-bit number only contains 12-bits of actual information. The 16-bit number will just need to be divided by 16 before summing because the 12-bits of information are placed in the 12 MSBs of the 16-bit number. Does that sound right?
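A quick check of that overflow reasoning (treating samples as signed integers, with the 12 bits of information sitting in the top 12 bits of each 16-bit value, as described above):

```python
# 12-bit data occupies the top 12 bits of each 16-bit sample, so dividing
# by 16 recovers the 12-bit value, and the sum of two such values needs
# at most 13 bits -- comfortably inside a signed 16-bit result.

INT16_MIN, INT16_MAX = -32768, 32767

def sum_pair(a_raw, b_raw):
    """Divide two raw samples by 16, then sum them (the proposed decimation)."""
    return (a_raw // 16) + (b_raw // 16)

# Worst case: both samples at the signed 12-bit extremes (-2048 and 2047).
lo = sum_pair(-2048 * 16, -2048 * 16)   # sums to -4096
hi = sum_pair(2047 * 16, 2047 * 16)     # sums to 4094
assert INT16_MIN <= lo and hi <= INT16_MAX
```

So the claim holds: after the divide-by-16, a pairwise sum cannot overflow a 16-bit integer.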
    As for upgrading the hardware, that would be something I would need to discuss with my advisors (like I said in my above response to your previous post). It would also depend on any exchange programs that NI may have. Is it possible to exchange current hardware for some discount on new hardware?

  • Can I detect if a module is present in CRIO FPGA?

I have a cRIO-9073 controller chassis with 8 NI 9237 modules. I plan to use FPGA reads - not the scan engine. Can I detect whether a module is present or not within my FPGA VI? If not, is there a way to do this within the RT host VI? (This is my first cRIO project, so I'm still a bit green.) I can foresee my client needing to replace a module sometime in the future and still needing to operate the system with the remaining modules - so the system must be robust and adaptable to fault conditions.
    - Thanks in advance
    - Tore
    Solved!
    Go to Solution.

    I switched to the Find Hardware VI and it doesn't error, but what I need is network devices. I have expansion chassis on the network that I need to confirm are connected. The Find Hardware VI only lists off local devices. I asked about the error over at this topic here:
https://forums.ni.com/t5/LabVIEW/LV-RT-System-Config-Find-Systems-vi-Error-quot-2147220620-quot/m-p/...
    Thanks,
    James
    LabVIEW Professional 2014

  • [cRIO] FPGA compile time

Hello, I was wondering what the normal time is for LabVIEW to compile a program for a cRIO FPGA target. It has been taking 15 minutes per compile. I was wondering whether this is normal, or whether there is something I could do to make it faster.
    - Thank you 

    This is the compile report of the code I am working with:
    Status: Compilation successful.
    Compilation Summary
    Logic Utilization:
      Number of Slice Flip Flops:       5,531 out of  40,960   13%
      Number of 4 input LUTs:           5,475 out of  40,960   13%
    Device Utilization Summary:
       Number of BUFGMUXs                        2 out of 8      25%
          Number of LOCed BUFGMUXs               1 out of 2      50%
       Number of External IOBs                 164 out of 333    49%
          Number of LOCed IOBs                 164 out of 164   100%
       Number of MULT18X18s                     14 out of 40     35%
       Number of Slices                       3874 out of 20480  18%
          Number of SLICEMs                      4 out of 10240   1%
    Clock Rates: (Requested rates are adjusted for jitter and accuracy)
      Base clock: 40 MHz Onboard Clock
          Requested Rate:      40.408938MHz
          Theoretical Maximum: 54.191730MHz
      Base clock: MiteClk (Used by non-diagram components)
          Requested Rate:      33.037101MHz
          Theoretical Maximum: 61.406202MHz
    Start Time: 9/18/2008 12:13:18 PM
    End Time: 9/18/2008 12:29:22 PM
    This is the compile report if I delete most of the code.
    It only has a while loop and I/O pin read and write, and an inverter.
    [Screenshot]
    Status: Compilation successful.
    Compilation Summary
    Logic Utilization:
      Number of Slice Flip Flops:         583 out of  40,960    1%
      Number of 4 input LUTs:           1,021 out of  40,960    2%
    Device Utilization Summary:
       Number of BUFGMUXs                        2 out of 8      25%
          Number of LOCed BUFGMUXs               1 out of 2      50%
       Number of External IOBs                 164 out of 333    49%
          Number of LOCed IOBs                 164 out of 164   100%
       Number of Slices                        671 out of 20480   3%
          Number of SLICEMs                      4 out of 10240   1%
    Clock Rates: (Requested rates are adjusted for jitter and accuracy)
      Base clock: 40 MHz Onboard Clock
          Requested Rate:      40.408938MHz
          Theoretical Maximum: 85.113627MHz
      Base clock: MiteClk (Used by non-diagram components)
          Requested Rate:      33.037101MHz
          Theoretical Maximum: 70.566650MHz
    Start Time: 9/18/2008 2:00:03 PM
    End Time: 9/18/2008 2:12:19 PM

  • Crash while creating a new cRIO FPGA Project

When I want to create a new cRIO FPGA project with the wizard, LabVIEW freezes and crashes ("LabVIEW 8.6 Development System has stopped working"). The wizard discovers my cRIO-9073 integrated controller, but when it's trying to discover the CompactRIO chassis, the program freezes and automatically shuts down.
In MAX, there is no problem discovering the FPGA chip in the cRIO chassis (RIO0).
I tried to use the new FPGA project wizard in LabVIEW 8.5. While searching for the cRIO chassis, the program doesn't freeze, but it says that the cRIO chassis cannot be found.
When I manually want to select the chassis, I can only choose the cRIO-9072 and cRIO-9074. There is no cRIO-9073 icon.
What am I doing wrong, or is it a bug?
    Regards,
    Kenneth
    Message Edited by Kennie87 on 02-16-2009 09:23 AM
    Solved!
    Go to Solution.
    Attachments:
    lvrt_err_log.txt ‏23 KB

    Hi JMota,
    Thanks for your quick reply.
    I'm using Vista Home Premium SP1 32-bit. And I have admin privileges. I'm the only user of the computer.
    I've installed NI-RIO 3.1 on the pc. So it's the same version as on the controller.
    When I try to add the controller and chassis directly from the Project Explorer, then I have the same problem.
The program freezes while it's trying to discover the chassis.
I'm able to select the cRIO-9073, but in the second step the program freezes when it's trying to discover the chassis.
    When I try your second method, the same problem occurs.
    I manually selected the specific controller (RT CompactRIO target), with the chassis (cRIO-9073) and the integrated FPGA-chip (RIO0), and at last the NI-9472 DO-module in slot 1.
Then, I changed the IP of the controller from 0.0.0.0 to the IP assigned in MAX, 192.168.1.3.
When I click on connect (right-click RT CompactRIO target - connect), it's the same story: the program freezes and eventually shuts down.

  • FPGA Read/Write Control Issues

    Hello all!  Rather new to using FPGA, but I have an interesting issue that's popping up.
Currently I'm pulling in raw voltage data from a set of sensors (pressure transducers, load cells, etc.) through a cRIO DAQ.  I have the FPGA VI set up to pull in that data already, and the main VI and all the sub-VIs are working just fine.
What I'm trying to do is save the raw voltage data (TDMS files) on the lower level and do the conversion and display on the upper level, so that I don't have to convert and save (to speed up saving data).  That leaves 3 distinct levels/sections:
    On FPGA that pulls in the raw data
    On FPGA that saves the raw data
    Main VI that does all the controls/conversions/displays etc.
Number two is where I'm having an issue.  I want to save all the data in parallel, so I'm creating a save routine on the FPGA for each I/O device (8 relays to command solenoid valves, 3 pressure transducers, 1 load cell, 4 thermocouples).  To do this I want to create a separate VI for each device (not sure if that's a smart thing to do).
    The issue I'm having is when I use the FPGA Read/Write control to read in from the Target and save to the TDMS.  When I only use a single FPGA target reference the lines are broken, but as soon as I switch to two targets, it now works.
    Any reason why it might be doing this?  Any ideas/suggestions at all on how to go about setting this up in general?
    Thanks!

    Hi,
    HySoR wrote:
    The issue I'm having is when I use the FPGA Read/Write Control to read in from the target and save to the TDMS file.  When I use only a single FPGA target reference, the wires are broken, but as soon as I switch to two targets, it works.
    Could you take a screenshot and post this part of your code? I'm having trouble understanding what you are describing.
    Craig H. | CLA | Systems Engineer | National Instruments
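    The per-device "save in parallel" design described in the question can be sketched outside LabVIEW as a producer/consumer pattern: one queue and one writer loop per device, with the acquisition loop fanning samples out. This Python sketch is only an analogy for the LabVIEW structure; the device names and sample values are made up.

```python
# Hypothetical sketch (not LabVIEW): one writer thread per I/O device, each
# draining its own queue -- analogous to one TDMS-writing loop per device.
import queue
import threading

DEVICES = ["relays", "pressure", "load_cell", "thermocouples"]  # assumed names

def writer(name: str, q: "queue.Queue", out: dict) -> None:
    """Drain one device's queue until a None sentinel arrives."""
    rows = []
    while True:
        sample = q.get()
        if sample is None:          # sentinel: producer is done
            break
        rows.append(sample)         # real code would append to a TDMS file here
    out[name] = rows

queues = {d: queue.Queue() for d in DEVICES}
results: dict = {}
threads = [threading.Thread(target=writer, args=(d, queues[d], results))
           for d in DEVICES]
for t in threads:
    t.start()

# Producer: fan each raw sample out to every device's queue.
for i in range(5):
    for d in DEVICES:
        queues[d].put(float(i))
for d in DEVICES:
    queues[d].put(None)             # tell each writer to finish
for t in threads:
    t.join()
```

    Because each device owns its queue, a slow writer only backs up its own buffer, which is the same isolation the per-device save VIs are after.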

  • Log and Transfer with Panasonic HD

    Recently I purchased the Panasonic HDC-HS100, which is a hybrid SD card/HDD (AVCHD) camcorder. I use Mac OS X (10.4.11) with Final Cut Pro 6.0.5. I am having problems importing my footage onto my computer. When I open the Log and Transfer window in FCP, it does not recognize the camera (error messages indicate there is an invalid directory structure). Also, the media has been saved to my computer's hard drive. How do I get this footage onto my computer so I can start editing? Thanks for the help.

    Do you have an external card reader? Or are you playing back from the camera? When you copied the contents of the card to the computer, did you copy the whole card, or just the video clips? You have to copy the ENTIRE card, as is. Create a folder on your external hard drive and name it P2(name of project)#. So if you have 3 cards you used for the project "My Haircut" you would have 3 folders named
    P2myHaircut-1
    P2myHaircut-2
    P2myHaircut-3
    That is just a sample naming scheme. You can use what you want, just be consistent so you can find your footage easily. Then, mount your SD card on the desktop, open it, select all, and drag to the appropriate folder. Make sure you select all. Then, open Log and Transfer, go to the upper right corner of the Log and Transfer browser and click on the drop down button. You will see a choice that says Create Custom Path. Click that, then locate the Folder you just created. Click once on that folder and then "choose". DON'T choose the internal folders, just choose your P2myHaircut# folder. You should see your clips pop up in that browser. If this doesn't work, you may be missing a component for that camera. Honestly, I don't know anything about your camera, but that is the process for Log and Transfer.

  • FPGA Read/Write Control Function Issues

    Hello all!  Rather new to using FPGA, but I have an interesting issue that's popping up.
    I'm currently pulling in raw voltage data from a set of sensors (pressure transducers, load cells, etc.) through a cRIO DAQ.  I have the FPGA VI set up to pull in that data already, and the main VI and all the sub-VIs are working just fine.
    What I'm trying to do is save the raw voltage data (to TDMS files) on the lower level and do the conversion and display on the upper level, so that I don't have to convert and save at the same time (to speed up saving data).  That leaves 3 distinct "levels/sections":
    On FPGA that pulls in the raw data
    On FPGA that saves the raw data
    Main VI that does all the controls/conversions/displays etc.
    Number two is where I'm having an issue.  I want to save all the data in parallel, so I'm creating a save VI for the I/O devices (8 relays to command solenoid valves, 3 pressure transducers, 1 load cell, 4 thermocouples).
    The issue I'm having is when I use the FPGA Read/Write Control to read in from the target and save to the TDMS file.  When I use only a single FPGA target reference, the wires are broken, but as soon as I switch to two targets, it works.
    I've attached a screen cap of the current problem.  The set-up on the bottom (with only one target) doesn't work.  But the second I add more than one target, it works.
    Any reason why it might be doing this?  Any ideas/suggestions at all on how to go about setting this up in general?
    Thanks!
    Attachments:
    FPGAError.jpg ‏35 KB

    HySoR,
    You might check the documentation for the "data" terminal of "TDMS Write" (http://zone.ni.com/reference/en-XX/help/371361H-01/glang/tdms_file_write/). A single DBL element is not accepted, but a 1D DBL array is.
    data is the data to write to the .tdms file. This input accepts the following data types:
    Analog waveform or a 1D array of analog waveforms
    Digital waveform
    Digital table
    Dynamic data
    1D or 2D array of:
        Signed or unsigned integers
        Floating-point numbers
        Timestamps
        Booleans
        Alphanumeric strings that do not contain null characters
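    Since the data terminal rejects a lone scalar, samples read point-by-point have to be buffered into a 1D array before each write call. A language-neutral sketch of that buffering in Python (the ChunkBuffer class and its write callback are illustrative, not part of any NI API):

```python
# Hypothetical sketch: TDMS Write rejects a single scalar DBL, so scalar
# samples are accumulated into fixed-size 1D chunks, and the writer is
# called once per full chunk. chunk_size and the callback are made up.
from typing import Callable, List

class ChunkBuffer:
    """Accumulate scalar samples; flush a 1D array every chunk_size samples."""
    def __init__(self, chunk_size: int, write: Callable[[List[float]], None]):
        self.chunk_size = chunk_size
        self.write = write
        self.buf: List[float] = []

    def add(self, sample: float) -> None:
        self.buf.append(sample)
        if len(self.buf) == self.chunk_size:
            self.write(self.buf)   # hand a 1D array to the file-writing call
            self.buf = []          # start a fresh chunk
```

    In LabVIEW terms this corresponds to building an array (or using Build Array in a shift register) and wiring the array, not the scalar, into TDMS Write.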
