LabVIEW 8.0 FPGA DMA question

I am developing a control system that needs to access a large block of variable data (32 MB) which I can place in the host computer's RAM. The LabVIEW tutorial and other web resources describe well how a DMA FIFO can be used to "write to" host PC memory (RAM), but say nothing about "reading from" the PC RAM. Is it possible? And which control should be used? Do I need some lower-level coding?
Thanx
Davide

Hi Davide,
As of right now, the DMA transfers are one-way only (FPGA to host).
Best regards,
David H.
Systems Engineer
National Instruments
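
For context, the direction that was missing here (host-to-target DMA) was added in LabVIEW 8.20 for supported targets, as the threads below note. On the host side such a transfer looks roughly like the sketch that follows. It assumes the NI FPGA Interface C API (which did not exist for LabVIEW 8.0); the bitfile path, signature, and FIFO constant are placeholders, and in LabVIEW itself the equivalent is the Write method of a host-to-target FIFO on the FPGA reference.

/* Sketch: host queues a block of data into a host-to-target DMA FIFO.
 * NI FPGA Interface C API assumed; the bitfile path, signature and FIFO
 * constant below are placeholders, not taken from the original post. */
#include <stdint.h>
#include "NiFpga.h"

#define HYPOTHETICAL_HOST_TO_TARGET_FIFO 0   /* would come from the generated NiFpga_<bitfile>.h */

int main(void)
{
    NiFpga_Status status = NiFpga_Initialize();
    if (NiFpga_IsError(status)) return status;

    NiFpga_Session session;
    /* Open and run the FPGA VI (placeholder path/signature/resource). */
    status = NiFpga_Open("my_fpga.lvbitx", "SIGNATURE", "RIO0", 0, &session);
    if (NiFpga_IsNotError(status))
    {
        uint32_t block[512];
        for (uint32_t i = 0; i < 512; i++)
            block[i] = i;                    /* data the FPGA will consume */

        size_t emptyElementsRemaining;
        /* One call queues the whole block; the FPGA reads it element by
         * element with its own FIFO Read node inside the diagram. */
        NiFpga_MergeStatus(&status,
            NiFpga_WriteFifoU32(session, HYPOTHETICAL_HOST_TO_TARGET_FIFO,
                                block, 512, 5000 /* ms timeout */,
                                &emptyElementsRemaining));

        NiFpga_MergeStatus(&status, NiFpga_Close(session, 0));
    }
    NiFpga_MergeStatus(&status, NiFpga_Finalize());
    return status;
}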

Similar Messages

  • FPGA DMA Size Allocation

    Hi all,
    My application involves grabbing images from a 3-tap, 16-bit camera using FlexRIO. The PXI controller I am using is Windows-based, and the FlexRIO module I have is a PXI-7954 + NI 1483 adapter. The image I am grabbing is 2560 x 2160, U16, and the clock is 100 MHz. I have been trying for over a week and, up to today, I am still not able to get the image from the camera because I keep getting the DMA Write Timeout error. Right now the DMA size in the FPGA is set to 130k, but whenever I try to increase it further I get a compilation error. I have tried to have the host program grab 100k data points from the FPGA DMA every millisecond, but I seem to be capped at about 10-15 ms. Perhaps Windows has its own limitation...
    Attached is the program I am using, modified from the LabVIEW shipped example. Please advise: how do I move forward from here? Or is it possible to increase the DMA buffer size up to 10x higher than the current limit?
    Attachments:
    1-Tap 10-Bit Camera with Frame Trigger.zip ‏1684 KB

    Hi Shazlan
    Apologies for taking so long to reply to you.
    You are correct in saying that the latest driver is IMAQ 4.6.4 and this can be downloaded from our website if you have not done so already.
    If you have already installed the IMAQ 4.6.4 driver, has this managed to resolve your issue?
    Also, have you tried to run the compilation again and obtained a report outlining the problems?
    As a side note - I have been looking into the possibility of downloading some sort of driver for the Samos camera you are using from Andorra.  While National Instruments have not created a driver for this device, Andorra do have a Software Development Kit (SDK) which they say works with LabVIEW.  You may find it useful to have this so that you no longer have to write the driver yourself.  This may then save resources on the FPGA.
    Keep me updated on your progress and I will continue to look into this issue for you.
    Regards
    Marshall B
    Applications Engineer
    National Instruments UK & Ireland
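
    As a rough illustration of one commonly suggested host-side mitigation for this kind of timeout (configure a deep host buffer and read large blocks per call, so the small FPGA-side FIFO only has to bridge short bursts), here is a sketch assuming the NI FPGA Interface C API. The FIFO constant, buffer depth, and read size are placeholders; in LabVIEW the same settings are the FIFO Configure and Read invoke methods on the FPGA reference.

    /* Sketch: deep host-side DMA buffer plus one whole-frame read per call,
     * so Windows scheduling jitter does not starve the small FPGA FIFO.
     * NI FPGA Interface C API assumed; constants are placeholders. */
    #include <stdint.h>
    #include <stdlib.h>
    #include "NiFpga.h"

    #define HYPOTHETICAL_TARGET_TO_HOST_FIFO 0   /* from the generated NiFpga_*.h in a real project */
    #define FRAME_PIXELS (2560u * 2160u)         /* one U16 image, as in the post */

    NiFpga_Status grab_frames(NiFpga_Session session, unsigned frames_to_grab)
    {
        /* Configure once, before acquisition: a host buffer deep enough for a
         * few frames, so the 130k-element FPGA FIFO only bridges short bursts. */
        NiFpga_Status status = NiFpga_ConfigureFifo(session,
                                                    HYPOTHETICAL_TARGET_TO_HOST_FIFO,
                                                    4u * FRAME_PIXELS);
        if (NiFpga_IsError(status)) return status;

        uint16_t *frame = malloc(FRAME_PIXELS * sizeof *frame);
        if (frame == NULL) return NiFpga_Status_MemoryFull;

        for (unsigned i = 0; i < frames_to_grab && NiFpga_IsNotError(status); i++)
        {
            size_t remaining;
            /* One big read per frame instead of many 100k-element reads every
             * millisecond; fewer calls make the 10-15 ms loop-rate cap matter less. */
            NiFpga_MergeStatus(&status,
                NiFpga_ReadFifoU16(session, HYPOTHETICAL_TARGET_TO_HOST_FIFO,
                                   frame, FRAME_PIXELS, 10000 /* ms */, &remaining));
            /* ... hand the frame off for display or storage here ... */
        }
        free(frame);
        return status;
    }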

  • HOST to FPGA DMA Transfers

    Hi,
    We're having trouble using the Host to FPGA DMA feature available in
    LabVIEW 8.20 with IFRIO. After starting a compile, we get the following
    error message:
    An internal software error has occurred.
    Please contact National Instruments technical support at ni.com/support
    with the following information:
    Error -61048 occurred at This target does not support DMA Output (from the host to the target).
    Possible reason(s):
    LabVIEW FPGA:  This target does not support DMA Output (from the host to the target).
    Any help would be greatly appreciated.
    Thanks.

    Hi Manik:
    We did not support DMA output on the PCI-5640R when we released NI-5640R 1.0. This is why you are getting the error message that you are seeing.
    We plan to add support for DMA output in an upcoming release.
    ----abhay

  • Fpga DMA FIFO compilation error

    Hello,
    I have a cRIO 9074 with FPGA. I am trying a simple piece of code to learn how to acquire data that is generated on the FPGA at a rate of 10 kHz and transfer it to the host VI for offline processing later. However, I encounter a compilation error when compiling this basic FPGA VI containing a FIFO write node (picture of the VI attached below). The Compilation Report says that 256 block RAMs were used (the device only has 40 in total), so an error was produced. The exact compilation error notification from the Xilinx report is reproduced below:
    # Starting program map
    # map -o toplevel_gen_map.ncd -intstyle xflow -timing toplevel_gen.ngd
    toplevel_gen.pcf
    Using target part "3s2000fg456-4".
    Mapping design into LUTs...
    Running directed packing...
    Running delay-based LUT packing...
    ERROR:Pack:2310 - Too many comps of type "RAMB16" found to fit this device.
    ERROR:Map:115 - The design is too large to fit the device.  Please check the Design Summary section to see which resource requirement for your design exceeds the resources available in the device. Note that the number of slices reported may not be reflected accurately as their packing might not have been completed.
    NOTE:  An NCD file will still be generated to allow you to examine the mapped design.  This file is intended for evaluation use only, and will not process successfully through PAR.
    Mapping completed.
    See MAP report file "toplevel_gen_map.mrp" for details.
    Problem encountered during the packing phase.
    Design Summary
    Number of errors   :   2
    Number of warnings : 125
    ERROR:Xflow - Program map returned error code 2. Aborting flow execution...
    Bitstream Not Created
    Timing Analysis Passed
    What does this mean? How can I fix this error?
    Thank you,
    Bogdan
    Solved!
    Go to Solution.
    Attachments:
    FPGA.png ‏16 KB

    Sorry, I forgot to mention that... LabVIEW 2009. And yes, this is the only loop in the FPGA VI. I just made up this code to understand how exactly I would save some data on the host for subsequent processing, but I didn't get to that point because the VI on the FPGA does not compile successfully. Do you know of any example of the most basic code for DMA FIFOs between the FPGA and the host computer? This should be pretty straightforward, but for some reason it's not.
    Thanks,
    Bogdan

  • Flexrio FPGA dma and dram simulation

    I have a pair of FlexRIO 7966R boards where I am trying to perform DRAM-to-DMA transfers. Each FPGA uses both banks of DRAM: one bank for capturing Camera Link frames, the other for capturing sub-frames from within each frame (and performing some processing on the sub-frame data).
    Each DRAM bank is written into from its own target-scoped FIFOs.
    Each DRAM bank is read into its own target-to-host DMA FIFO.
    When only one side or the other is operating (capturing sub-frames by themselves or full frames by themselves), everything flows nicely. But when I capture both at the same time, there appears to be some sort of contention in the DRAM (I suspect from the DMA engine). Since I am simulating all of this, I would like to ask if anyone has detailed descriptions of the DRAM interface signals below? These are generated by LabVIEW, but I have found no explanation of what they mean in any documentation.
    Also, in the simulation build there is obviously a DMA simulation, but from within the simulator I can find no signals related to the FPGA-side DMA FIFOs or the simulated DMA transfers. All I can infer about the DMA transfers is their effect on the DRAM above. The DMA FIFO is populated directly from the DRAM (yes, this is a highly modified variant of the 10-tap Camera Link (with DRAM) example from the NI-1483 examples).
    Does anyone know how I can see the DMA behavior from within a simulation? That would most likely let me see exactly why the contention is taking place.
    Thanks!

    Hey xl600,
    I'm not immediately sure how to have ISim display the DMA Engine behavior, but I'll see if I can dig anything up for you. I've come across a couple of other users encountering issues with FIFO signals appearing in ISim over on the Xilinx forums, so it might be worthwhile to post there as well, in case it happens to be due to ISim itself.
    Regards,
    Ryan

  • No option to simulate cRIO device after installing Labview and RT/FPGA Modules from downloads (missing driver support?)

    Hi guys--
    Basically, I need to simulate a cRIO device without yet having the hardware on hand, like this:
    How to Simulate FPGA Hardware Targets Using the Project Explorer with LabVIEW
    Instead, I only get the options shown in the attachment "Add Target Options.jpg".  I assume this is because of missing drivers, which I assume is due to some mistake I made during a recent fresh install of LabVIEW.  The install went like this:  I downloaded (not disks) and installed the following (in this order, latest versions all around):
    (1) LabVIEW Development System
    (2) Real-Time Module
    (3) FPGA Module
    (4) NI-RIO (install prompted by previous install)
    (5) DAQmx
    At the end of (1) and (2), I got the screen shown in the second attachment ("Drivers Install Message.jpg"), but could not get it to recognize the "NI Device Drivers" folder (which I also downloaded), or any of its sub-folders.
    I feel I'm missing some obvious option for loading up the necessary drivers after the install, but can't seem to find reference to it in the forums.  Could anyone give me a little push in the right direction?
    Thanks a bunch, and have a great day.
    Solved!
    Go to Solution.
    Attachments:
    Add Target Options.jpg ‏57 KB
    Drivers Install Message.jpg ‏47 KB

    Michael--
    Thanks for your reply.  In answer to your questions, the DAQmx install seemed to go smoothly, and I believe this is evidenced by my available MAX Simulated Device options (shown in the attachment to this post).  My best guess (I could be wrong here) is that I cannot simulate a cRIO device because I was unable to install the drivers after the Real-Time Module installation (as described in my first post).
    The source path selection screen for drivers ("Drivers Install Message" attachment in first post) told me "...you can install NI device drivers later.", and since it would not recognize anything in the NI Device Drivers folder I downloaded, I clicked on "Later".
    I could narrow the issue down a bit if I could now install those drivers, but this is my first time installing without disks, and I can't figure out how to properly install the drivers from the folder I have sitting on my hard drive.  Have I overlooked instructions somewhere on this procedure?
    Thanks a bunch for your help.
    Attachments:
    MAX Device Simulation Options.jpg ‏52 KB

  • LabVIEW 2010 - applications and RTE question

    Hi,
    I'm currently using LabVIEW 2009 and my company is upgrading to 2010, and I had a few questions about building applications and running them on other computers after we make the switch.
    1)  Will computers that only have the LabVIEW 2010 Run-Time engine installed be able to run applications built with LabVIEW 2009 (and earlier)?
        1.1) If not, will we have to rebuild all our existing programs to use the 2010 RTE?
    2) Vice versa, will exes built in 2010 be able to run on computers that only have the 2009 RTE installed?
        2.1) If not, is there an easy way to downconvert programs written in 2010 so that they run on the 2009 RTE?
        2.2) Will we be able to use any of the new features in 2010 (like network streaming, export graphs to excel) if we downconvert to 2009?
    3) If we rebuild some applications that were originally written in 2009, will we need to install the 2010 RTE on the target computers so they can run the new version of our application?
    Thanks!
    -Nick
    Solved!
    Go to Solution.

    2.1) Yes, Save for previous... and rebuild the application in LV 2009.
    2.2) No because the new features are not present there.
    Waldemar
    Using 7.1.1, 8.5.1, 8.6.1, 2009 on XP and RT
    Don't forget to give Kudos to good answers and/or questions

  • Rt fpga dma timeout not waiting

    Hello
    I have a Target-to-Host DMA FIFO between the FPGA and the RT host. The read function sits in a while loop with a 2000 ms timeout. Upon timeout it loops round and calls the read function again. On the second and subsequent calls, it does not wait for the timeout time. I assume this is because it had already timed out. The error is wired to a shift register.
    Does the timeout result in an error on the error line?
    Is there a way to clear the timeout?
    If I change it to an indefinite wait, is there a way to force the read to wake up?
    Thanks
    Solved!
    Go to Solution.

    jonnnnnnn wrote:
    Hello
    I have a Target-to-Host DMA FIFO between the FPGA and the RT host. The read function sits in a while loop with a 2000 ms timeout. Upon timeout it loops round and calls the read function again. On the second and subsequent calls, it does not wait for the timeout time. I assume this is because it had already timed out. The error is wired to a shift register.
    Does the timeout result in an error on the error line?
    Is there a way to clear the timeout?
    If I change it to an indefinite wait, is there a way to force the read to wake up?
    Thanks
    Does the timeout result in an error on the error line? Yes, error -54000 if memory serves correctly. Did you try execution highlighting to see this?
    Is there a way to clear the timeout? Don't wire -1; I got burned by this. The DMA read actually polls in the background (I think, but I never verified), so a negative timeout can slam your processor. You could maybe try interrupts, although I didn't use that method; I just put a not-too-long timeout on there and cleared the error if it was -54000.
    If I change it to an indefinite wait, is there a way to force the read to wake up? The read will only return when data is there, unless you have a timeout.
    CLA, LabVIEW Versions 2010-2013
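
    For reference, the pattern described above (finite timeout, treat a timeout as "no data yet", clear the error, call Read again) looks roughly like this on the host side. This is a sketch assuming the NI FPGA Interface C API; in RT LabVIEW the equivalent is clearing the timeout error from the FIFO Read method before it reaches the shift register. The FIFO constant and block size are placeholders.

    /* Sketch: poll a target-to-host FIFO with a finite timeout and treat a
     * timeout as "no data yet" rather than a fatal error.
     * NI FPGA Interface C API assumed; constants are placeholders. */
    #include <stdint.h>
    #include <stdio.h>
    #include "NiFpga.h"

    #define HYPOTHETICAL_FIFO 0
    #define BLOCK 1000

    void read_loop(NiFpga_Session session, volatile int *keep_running)
    {
        uint32_t data[BLOCK];
        while (*keep_running)
        {
            size_t remaining;
            NiFpga_Status status = NiFpga_ReadFifoU32(session, HYPOTHETICAL_FIFO,
                                                      data, BLOCK,
                                                      2000 /* ms */, &remaining);
            if (status == NiFpga_Status_FifoTimeout)
            {
                /* Nothing arrived within 2000 ms: clear/ignore the timeout and
                 * simply call Read again (each call waits its full timeout;
                 * in LabVIEW, wiring the uncleared error into the next Read
                 * makes that Read return immediately, as described above). */
                continue;
            }
            if (NiFpga_IsError(status))
            {
                fprintf(stderr, "FIFO read failed: %d\n", status);
                break;
            }
            /* ... process BLOCK elements from data here ... */
        }
    }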

  • Labview Real-Time and CompactRIO questions

    Hi everyone! We have run into new trouble with CompactRIO and the Real-Time Module:
    - We want to port a VI we wrote in LabVIEW 8.2 to the Real-Time Module. To do that, we found an option in the Project Explorer (Tools->Real-Time Module->Communication Wizard) that seems to do the translation to Real-Time code automatically. Is that true?
    - If it is, we would like a transmitter VI on the CompactRIO (RT target) and a receiver VI on the host. But the Communication Wizard creates a VI on the host and a VI on the RT target for each VI we enter to translate, so we end up with four VIs in the same project: two as transmitters and two as receivers.
    - We have tried to deploy some VIs to the CompactRIO and we always get an error. The CompactRIO configuration is all right and it is recognized correctly by MAX. Why does the error appear, and how can we work it out?
    - We need to configure the serial port on the CompactRIO to adjust some settings, such as the baud rate. MAX recognizes the CompactRIO's serial port, but all the options to configure it are disabled.
    I hope you can answer these questions. Thank you.
    Ander

    Hi again,
    Still didn't find the problem on my side. Here's a picture of my code. If I put the code surrounded in red at place (1), I don't get the error, but if I put the code at place (2), I get the error. Why oh why...?
    I get the error when I close the browser, click the Back button of the browser, stop the application with the red dot, or stop the application with the Quit button on my front panel.
    This morning I found a post on this forum about getting an error when using a Property node with a Remote panel (http://forums.ni.com/ni/board/message?board.id=170&message.id=252705). I did what they suggest: wire the property output to an indicator, and it works (disabling "Enable automatic error handling dialog" didn't work... I don't know why)... until I put extra code in my VI.
    I really need help please!
    Thank you
    Stephanie
    Attachments:
    MemoryManager line 437 error.JPG ‏228 KB

  • Host to Target DMA question

    Hello Guys,
    I have a problem regarding DMA access from host to target. I access the DMA at an interval of 1.7 µs and get 12 U32 values from the DMA every 1.7 µs. The problem is that I can see the DMA access sometimes taking more time than normal. Is there a more efficient way to get 12 U32 values at a time?
    Thanks,

    Hello Jeremy
    I am using an NI 7833R FPGA module. What I did was to place the DMA reads in a sequence structure (one in each frame, which makes 12 frames). Then, inside each frame, I placed an output port which I set to true, then false, then true again while passing through the sequence structure.
    When I run the program, I can see that each DMA access takes around 125 ns, but there are times when an access takes longer than 125 ns. Most of the time the 12 U32 values take 1.5 µs to complete, but for some unknown reason it sometimes takes more time to access the DMA.
    Thanks

  • DMA question

    I've read Writing Device Drivers, but I've still got a few holes in my understanding of the dma routines.
    I'm developing a driver for a PCI device that requires a very high sustained data rate, so I'm looking for every way possible to improve throughput. The device will be on a dedicated machine, so taking resources away from other processes is not an issue. We will have a pool of large (~64 MB) buffers in user space and a user process will notify the driver when the contents of a buffer need to be transferred to the device. The device will perform DMA to transfer the buffer contents to the card.
    I would like to be able to allocate a few DMA handles (using ddi_dma_alloc_handle) in the attach entry point and use these handles to transfer the large buffers. Will I be able to allocate handles for such large DMA objects?
    When initiating the transfer, I would get an unused pre-allocated DMA handle (via ddi_dma_addr_handle ?) and program the DMA engine on the device. Does this copy the contents of the buffer from user space to kernel space? If it does, is there a way to avoid this copy?
    Thanks in advance for any help you can provide.

    Oops, just caught a typo in my previous message. I was planning on using ddi_dma_addr_bind_handle to get an unused pre-allocated DMA handle. Or would I want to use ddi_dma_buf_bind_handle? When is each one appropriate?
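
    For what it's worth, the pre-allocated-handle approach discussed above might look roughly like the outline below. It is only a sketch built from the standard DDI routines: the DMA attribute values are illustrative and would need tuning for the real device, and error handling, unbinding, and handle bookkeeping are omitted. Binding a buf(9S) that was set up by physio(9F) with ddi_dma_buf_bind_handle locks and maps the user pages directly, so there is normally no copy from user space to kernel space.

    /* Sketch: allocate DMA handles once in attach(9E), then bind a user
     * buffer to a free handle for each transfer. Attribute values are
     * illustrative only. */
    #include <sys/types.h>
    #include <sys/buf.h>
    #include <sys/ddi.h>
    #include <sys/sunddi.h>

    #define NHANDLES 4

    static ddi_dma_handle_t dma_handles[NHANDLES];

    static ddi_dma_attr_t dma_attr = {
        DMA_ATTR_V0,            /* dma_attr_version */
        0x0000000000000000ull,  /* dma_attr_addr_lo */
        0xffffffffffffffffull,  /* dma_attr_addr_hi */
        0x00ffffffull,          /* dma_attr_count_max */
        1,                      /* dma_attr_align */
        1,                      /* dma_attr_burstsizes */
        1,                      /* dma_attr_minxfer */
        0xffffffffull,          /* dma_attr_maxxfer */
        0xffffffffull,          /* dma_attr_seg */
        1,                      /* dma_attr_sgllen: 1 asks for one contiguous window */
        1,                      /* dma_attr_granular */
        0                       /* dma_attr_flags */
    };

    /* Called from attach(): pre-allocating handles is cheap, nothing is
     * locked or mapped yet. */
    static int
    my_alloc_dma_handles(dev_info_t *dip)
    {
        int i;

        for (i = 0; i < NHANDLES; i++) {
            if (ddi_dma_alloc_handle(dip, &dma_attr, DDI_DMA_SLEEP, NULL,
                &dma_handles[i]) != DDI_SUCCESS)
                return (DDI_FAILURE);
        }
        return (DDI_SUCCESS);
    }

    /* Called per transfer: bind the caller's buf (e.g. handed in via physio)
     * to a free handle; the returned cookies are what gets programmed into
     * the card's DMA engine. Returns DDI_DMA_MAPPED on success. */
    static int
    my_bind_transfer(ddi_dma_handle_t handle, struct buf *bp,
        ddi_dma_cookie_t *cookiep, uint_t *ccountp)
    {
        return (ddi_dma_buf_bind_handle(handle, bp,
            DDI_DMA_WRITE | DDI_DMA_STREAMING, DDI_DMA_SLEEP, NULL,
            cookiep, ccountp));
    }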

  • FPGA quick questions: High Throughput Division vs. Multiplication Implementation (rounding?)

    Hi all,
    I'm trying to implement a simple routine where I divide an FXP number by 7 on the FPGA. I wanted to use the High Throughput Divide, but it seems to round to the nearest integer even though the output is capable of representing fractions. Alternatively, I can multiply my number by 1/7 using the High Throughput Multiply, and I get what I want. I'm not too familiar with FXP arithmetic. Without fully understanding the problem, I at least have a solution, which is to use the multiplication. I'd just like to know a little more. Can anyone shed some light on why the division rounds even though it can handle fractions?
    Thanks for your help
    Jeffrey Lee
    Solved!
    Go to Solution.
    Attachments:
    highthroughputdivisionormultiply.png ‏31 KB

    Thanks for the suggestions. I recreated this and indeed was able to get the correct results! So what happened?
    This may blow your minds, but there is something inherently wrong with my x/y indicator. I have it set to "adapt to source". I created another supposedly identical indicator ("x/y 2") off the same wire and get the correct result with that indicator. This seems like some kind of bug, but it worries me because I should never have run into it.
    I've attached a screenshot of the code in action as well as the VI (I'm using 2011).
    Thanks
    Jeffrey Lee
    Attachments:
    highthroughputdivisionormultiply_2.png ‏52 KB
    highthroughputdivideIssue.vi ‏21 KB

  • LabView Connection to NXT block Question

        Hi,
    I am trying to connect to an NXT block using Bluetooth. I have paired the device with the Bluetooth software and created a serial connection, so I can just send data from LabVIEW using VISA, but it's proving more difficult than I thought.
    All I have to do is send a string or integer to the NXT block.
    How can I create a simple sending function using LabVIEW VISA for just an integer or string?
    Attachments:
    Spike.vi ‏72 KB

    A couple of comments:
    1. I recommend that you invoke the Get Device Info VI after creating the NXT object to verify that you can communicate with the NXT.
    2. I don't remember what the default of the "require response" input to the Send Direct Command VI is, but you need to specify TRUE if TRUE is not the default.
    3. You should wire the response buffer output to an indicator so you can examine the response from the NXT. I'm not sure how you are determining that you are not getting a response, since you aren't checking for one.
    JacoNI wrote:
    Also I noticed you keep refering to spike.rtx. What is this? My VI running on the NXT block is named "Spike Remote"
    Will this have an affect? The VI jpg has got this name in hex but still no response :/
    4. "spike.rxe" was the name of the program on the NXT in my example. Programs on the NXT have an extension of .rxe. I would recommend using the "String to Byte Array" VI rather than building the array by hand. The bytes you specify appear to be decimal rather than hex numbers and probably don't represent the string you expect. You are also missing the extension. Wire "Spike Remote.rxe" into the "String to Byte Array" VI and append a NULL byte.
    geoff
    Geoffrey Schmit
    Fermi National Accelerator Laboratory

  • DMA RT to FPGA guaranteed order?

    I have a question regarding sending data via FIFO to an FPGA card via DMA.  I would assume that if I have several locations in my RT code sending data via a DMA FIFO, it is still guaranteed that any given DMA transfer (let's say 12 data values) is delivered atomically, that is to say, each DMA node's data is sent as a contiguous block.
    Take the following example.  I have two DMA FIFO write nodes in parallel.  I don't know which is going to execute first, and most of the time they will be vying for bandwidth.  Both nodes send over the SAME DMA FIFO.  Does the data arrive interleaved, or is each sent block guaranteed to be contiguous on the receiving end?  Do I end up with
    Data0 (Faster node)
    Data1 (Faster node)
    Data2 (Faster node)
    Data3 (Faster node)
    Data11 (Faster node)
    Data0 (Slower node)
    Data1 (Slower node)
    Data11 (Slower node)
    or do the individual items get interleaved.
    I'm kind of assuming that they remain contiguous blocks, which I'm also hoping for, because I want to abuse the DMA FIFO as a built-in timing source for a specific functionality I require on my FPGA board.  I could then use the RT-to-FPGA DMA buffer to queue up my commands and still have them execute in perfect determinism (until the end of the data in my single DMA transfer, of course).
    Shane.
    Say hello to my little friend.
    RFC 2323 FHE-Compliant

    Woah, new avatar. Confusing! 
    I am going to preface this by saying that I am making assumptions, and in no way is this a definitive answer. In general, I have always had to do FPGA to RT streaming through a FIFO and not the other way around.
    When writing to the FPGA from the RT, does the FIFO.Write method accept an array of data as the input? I'm assuming it does. If so, I'd then make the assumption that the node is blocking (like most everything else in LabVIEW), in which case the data would all be queued up contiguously. Interleaving would imply that two parallel writes would have to know about each other, and the program would have to wait for both writes to execute so that it could interleave the data. That doesn't seem possible (and if it were, it would be an awful design decision, because what if one of the writes never executed?).
    Of course, this is all assuming that I am understanding what you are asking.
    You're probably safe assuming the blocks are contiguous. Can you test this by simulating the FPGA? If you're really worried about interleaving, could you just wrap the FIFO.Write method up in a subVI? That way you are 100% sure of the blocking.
    Edit: Had a thought after I posted: how can you guarantee the order in which things are written to the FIFO? For example, what if the "slow" write actually executes first? Then your commands, while contiguous, will be "slower node" 1-12 and then "faster node" 1-12. It seems to me you would have to serialize the two to ensure anything.
    Sorry if I'm not fully understanding your question.
    CLA, LabVIEW Versions 2010-2013
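
    If it helps, here is roughly what the "wrap the write so it is serialized" suggestion looks like from a host-side C program (a sketch assuming the NI FPGA Interface C API; in RT LabVIEW the same effect comes from putting the FIFO Write inside a non-reentrant subVI). Each Write call hands the driver one block, and the lock decides which block gets queued first. The FIFO constant is a placeholder.

    /* Sketch: serialize writes to one host-to-target FIFO so each 12-element
     * command block is queued without interleaving and in a known order.
     * NI FPGA Interface C API assumed; the FIFO constant is a placeholder. */
    #include <pthread.h>
    #include <stdint.h>
    #include "NiFpga.h"

    #define HYPOTHETICAL_CMD_FIFO 0
    #define CMD_LEN 12

    static pthread_mutex_t fifo_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Equivalent of wrapping the FIFO Write in a non-reentrant subVI: only
     * one caller at a time can queue a block, so blocks never interleave and
     * the order is simply the order in which callers take the lock. */
    NiFpga_Status write_command_block(NiFpga_Session session,
                                      const uint32_t cmd[CMD_LEN])
    {
        size_t empty;
        NiFpga_Status status;

        pthread_mutex_lock(&fifo_lock);
        status = NiFpga_WriteFifoU32(session, HYPOTHETICAL_CMD_FIFO,
                                     cmd, CMD_LEN, 1000 /* ms */, &empty);
        pthread_mutex_unlock(&fifo_lock);
        return status;
    }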

  • Why should I adopt LABVIEW FPGA as a tool for developing my FPGA projects?

    Dear Friends, 
    Since I started using LabVIEW FPGA, I have had too many questions in my mind looking for answers!
    1-      Can anybody tell me "why should I adopt LabVIEW FPGA as a tool for developing my FPGA projects?"
    I mean, there are many great tools in this field (e.g. Xilinx ISE, ...); what makes LabVIEW FPGA the perfect tool that can save my time and my money?
    I’m looking for a comparison can show the following points:
    ·         The Code size and speed optimization.
    ·         Developing time.
    ·         Compiling time.
    ·         Verifying time.
    ·         Ability to developing in future.
    ·         ...etc.
    2-      I have a Spartan-3E kit, and I'm glad that LabVIEW supports it; I have enjoyed programming the kit using LabVIEW FPGA, but there are too many obstacles!
    The examples that come with the Spartan-3E driver don't cover all the peripherals on the board (e.g. the LAN port is not covered)! There is a statement on the NI website, "LabVIEW FPGA drivers and examples for all on-board resources", located at http://digital.ni.com/express.nsf/bycode/spartan3e. I don't think that is true!
    Anyway, I will try to develop examples for the unsupported peripherals, but if the pins of these peripherals are not defined in the UCF file, the effort is worthless! The only solution in that case is to develop VHDL code in ISE and use it in LabVIEW FPGA through an HDL node!?
    3-      I wonder if NI has any plans to add support for processor setup in LabVIEW FPGA (like we do in EDK)?
    4-      I wonder if NI has any plans to develop a driver for the Virtex-5 OpenSPARC Evaluation Platform? http://www.digilentinc.com/Products/Detail.cfm?NavPath=2,400,599&Prod=XUPV5
    Thanks & regards, Walid
    Solved!
    Go to Solution.

    Thanks for your questions, and I hope I can answer them appropriately.
    1. LabVIEW FPGA utilizes the intuitive graphical dataflow language of LabVIEW to target FPGA technology. LabVIEW is particularly nice for FPGA programming because of its ability to represent the parallelism inherent to FPGAs. It also offers a software-like programming experience with loops and structures, which has become a focus of the industry lately with C-to-gates and other abstraction efforts. Here are some general comparisons along the vectors you mentioned.
    Code size and speed optimization - LabVIEW FPGA is a programming language. As such, one can program badly and create designs that are too big to fit on a chip and too slow to meet timing. However, there are two main programming paradigms you can use. Normal LabVIEW dataflow programming (meaning outside a single-cycle loop) adds registers in order to enforce dataflow and synchronization in parity with the LabVIEW model of computation. As with any abstraction, this use of registers is logic needed to enforce LabVIEW dataflow and might not be what an expert HDL programmer would create; you trade that against the simplicity of LabVIEW dataflow. On the other hand, when you program inside a single-cycle timed loop you can achieve size and speed efficiencies comparable to many VHDL implementations. We have had many users who understand the way LabVIEW is transformed to hardware and program in such a way as to create very efficient and complex systems.
    Development time - Compared to VHDL, many of our users get near-infinite improvements in development time simply because they do not know (nor do they have to know) VHDL or Verilog. Someone who knows LabVIEW can now reach the speeds and parallelism afforded by FPGAs without learning a new language. For hardware engineers (who might actually have an alternative to LabVIEW) there are still extreme time-saving aspects of LabVIEW, including ready-made I/O interfaces, simple FIFO DMA transfers, stitchable IP blocks, and visualizable parallelism. I talk to many hardware engineers who are able to drastically improve development time with LabVIEW, especially since they are more knowledgeable about the target hardware.
    Compilation time - Comparable to slightly longer, due to the extra step of generating intermediate files from the LabVIEW diagram and the increased level of hierarchy in the design needed to handle the abstraction.
    Verification time - One of our key development initiatives moving forward is increased debugging capability. Today we have the ability to functionally simulate anything included in LabVIEW FPGA, and we recently added simulation capabilities for imported IP through the IP Integration Node on NI Labs, along with the ability to excite your design with simulated I/O. This functional simulation is very fast and is great for verification and quick-turn design iteration. However, we still want to provide more debugging from the timing perspective with better cycle-accurate simulation; although significantly slower than functional simulation, cycle accuracy gives us the next level of verification before compilation. The single-cycle timed loop running in emulation mode is cycle-accurate simulation, but we want more system-level simulation moving forward. Finally, we have worked to import things like Xilinx ChipScope (soon to be on NI Labs) for on-chip debugging, which is the final step in the verification process. In terms of verification time, there are aspects (like functional simulation) that are faster than traditional methods, others that are comparable, and still others that we are continuing to refine.
    Ability to develop in the future - I am not sure what you mean here, but we are certainly continuing to actively develop the RIO platform, which includes FPGA as the key differentiating technology. If you take a look at the NI Week keynote videos (ni.com/niweek), there is no doubt from both Day 1 and Day 2 that FPGA will be an important, well-maintained platform for many years to come.
    2. Apologies for the statement in the document. The sentence should read that there are examples for most board resources.
    3. We do have plans to support a processor on the FPGA through LabVIEW FPGA. In fact, you will see technology on NI Labs soon that addresses this with MicroBlaze.
    4. We do not currently have plans to support any other evaluation platforms. This support was created for our counterparts in the academic space, to give them a platform for learning the basics of digital design on a board that many schools already have in house. We are currently focusing on rounding out more of our off-the-shelf platform with new PCI Express R Series boards, FlexRIO with new adapter modules, cRIO with new Virtex-5 backplanes, and more.
    I hope this has answered some of the questions you have.
    Regards 
    Rick Kuhlman | LabVIEW FPGA Product Manager | National Instruments | ni.com/fpga
    Check out the FPGA IPNet for browsing, downloading, and learning about LabVIEW FPGA IP Cores
