HOST to FPGA DMA Transfers

Hi,
We're having trouble using the Host to FPGA DMA feature available in
LabVIEW 8.20 with IFRIO. After starting a compile, we get the following
error message:
An internal software error has occurred.
Please contact National Instruments technical support at ni.com/support
with the following information:
Error -61048 occurred at This target does not support DMA Output (from the host to the target).
Possible reason(s):
LabVIEW FPGA:  This target does not support DMA Output (from the host to the target).
Any help would be greatly appreciated.
Thanks.

Hi Manik:
We did not support DMA output on the PCI-5640R when we released NI-5640R 1.0. This is why you are getting the error message that you are seeing.
We plan to add support for DMA output in an upcoming release.
----abhay

Similar Messages

  • Alternative to DMA transfers from Host to FPGA in cRIO 9004

    Hi,
    We are using a cRIO 9004 + 9104 FPGA chassis + 8x NI 9505 modules to replace an 8-axis Scorbot educational robot controller with a cRIO, but we have run into some problems:
    - We are using FPGA IPs for encoder reading, PWM signal generation and PID control for the 8 axes. This is done using 3 SCTLs for each axis: one SCTL for encoder reading, one for PWM generation and one for NI SoftMotion splines and PID position control.
    The FPGA VI is successfully deployed.
    - We are using NI SoftMotion in the Real-Time controller for trajectory generation and the user interface. We are not using another VI on the Windows PC for the user interface, just the Real-Time processor.
    Here we get the RT error -63001 (NI RIO FPGA Communications) when deploying the Real-Time VI. It seems that our controller doesn't support DMA transfers from the Host to the FPGA.
    Questions:
    - If we cannot use DMA transfers from Host to FPGA, is there any other way to communicate between the Host and the FPGA that avoids the RT error -63001?
    - Is it a good idea to use indexed IPs to reduce the number of SCTLs in the FPGA to just 3 (instead of 24 SCTLs)?
    - Do you have any suggestions?
    Regards,
    Manuel

    Hey there.
    Indeed, the cRIO-900x series does not support DMA transfers from the Host to the FPGA; that is why you are getting error -63001.
    However, it does support data transfer from the FPGA to the Host. You can find that information in this KB.
    To transfer information from the host to the RT and then to the FPGA you can use:
    Host <--> RT:
      Network shared variables
      TCP
      UDP
      DataSocket
    RT --> FPGA:
      Front panel communication
    FPGA --> RT:
      User-defined variables
      Direct Memory Access (DMA) FIFOs
    I added some links with examples of each type of communication
    Hope this info helps
    Good luck
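
    For the Host <--> RT leg, plain TCP is often the simplest of the options listed above. Below is a minimal, hedged sketch in C (POSIX sockets) of a host-side sender that pushes a block of setpoints to the RT controller; the port number, the RT IP address and the framing are made up for illustration, and the RT side would need a matching TCP listener (for example the TCP Listen/Read VIs) that uses the same convention.

        /* Hedged sketch: send an array of setpoints from the host PC to the RT
         * controller over TCP. POSIX sockets; port, IP and framing are examples. */
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            double setpoints[8] = { 0.0, 10.5, 20.0, 5.25, 0.0, -3.0, 7.75, 1.0 };

            int sock = socket(AF_INET, SOCK_STREAM, 0);
            if (sock < 0) { perror("socket"); return 1; }

            struct sockaddr_in rt = { 0 };
            rt.sin_family = AF_INET;
            rt.sin_port   = htons(55000);                  /* hypothetical port */
            inet_pton(AF_INET, "10.0.0.2", &rt.sin_addr);  /* example RT address */

            if (connect(sock, (struct sockaddr *)&rt, sizeof rt) < 0) {
                perror("connect");
                return 1;
            }

            /* Simple framing: a 4-byte element count, then the raw doubles.
             * The RT-side reader must use the same convention. */
            uint32_t count = htonl(8);
            send(sock, &count, sizeof count, 0);
            send(sock, setpoints, sizeof setpoints, 0);

            close(sock);
            return 0;
        }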

  • Transferring signal from Host to FPGA

    Hi:
    I am new to LabVIEW FPGA. I am trying to set
    up a transmitter using the NI 5640R for my project.
    Right now I can only generate the OFDM signal from LabVIEW itself.
    After that I tried to program the FPGA, but when I used an oscilloscope to capture the
    signal, I failed.
    I think I may be having trouble transferring the signal from the Host to the FPGA.
    Can somebody help me with that problem?
    Thanks
    Austin
    Attachments:
    PCP-OFDM.zip ‏1785 KB

    Hello,
    If you want to transfer data from the host to the FPGA, you need to create a FIFO with the type set to ‘Host to Target – DMA’. When you then use it in the FPGA VI, the only operation available is to read from it. When you set the type to ‘Target to Host – DMA’, the FPGA VI can only write to it.
    Regards,
    Jimmie Adolph
    Systems Engineer Manager, National Instruments Northern Region
    Bring Me The Horizon - Sempiternal
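
    If the host side is driven from C rather than a LabVIEW host VI, the same Host to Target – DMA FIFO is simply written with a FIFO write call. Here is a hedged sketch using the NI FPGA Interface C API naming; the bitfile name, signature and FIFO constant are placeholders for values that are normally generated into an NiFpga_*.h header, so check your own generated header for the exact identifiers.

        /* Hedged sketch: push samples into a Host-to-Target DMA FIFO through the
         * NI FPGA Interface C API. The bitfile, signature and FIFO id below are
         * placeholders that would normally come from a generated NiFpga_*.h. */
        #include <stdint.h>
        #include <stdio.h>
        #include "NiFpga.h"

        #define BITFILE   "NiFpga_Transmitter.lvbitx"   /* hypothetical */
        #define SIGNATURE "0123456789ABCDEF"            /* hypothetical */
        #define FIFO_HOST_TO_TARGET 0                   /* hypothetical FIFO id */

        int main(void)
        {
            NiFpga_Status  status = NiFpga_Initialize();
            NiFpga_Session session;

            NiFpga_MergeStatus(&status,
                NiFpga_Open(BITFILE, SIGNATURE, "RIO0", 0, &session));

            int16_t samples[1024];
            for (int i = 0; i < 1024; ++i)
                samples[i] = (int16_t)(i % 256);        /* dummy waveform */

            size_t remaining = 0;
            /* Blocks for up to 5000 ms until every element fits in the host buffer. */
            NiFpga_MergeStatus(&status,
                NiFpga_WriteFifoI16(session, FIFO_HOST_TO_TARGET,
                                    samples, 1024, 5000, &remaining));

            NiFpga_MergeStatus(&status, NiFpga_Close(session, 0));
            NiFpga_MergeStatus(&status, NiFpga_Finalize());

            if (NiFpga_IsError(status))
                fprintf(stderr, "NiFpga error %d\n", (int)status);
            return NiFpga_IsError(status) ? 1 : 0;
        }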

  • DMA from host to FPGA target is not supported for this remote system.

    I am trying to communicate with my FPGA (on the cRIO 9002) from the RTOS. I set up an Open FPGA VI Reference with no error, but as soon as I try to access the FIFO I receive error -63001 and the attached message says:
    Error -63001 occurred at Open FPGA VI Reference in target - multi rate - variables - fileIO_old.vi
    Possible reason(s):
    NI-RIO FPGA Communications Framework:  (Hex 0xFFFF09E7) DMA from host to FPGA target is not supported for this remote system. Use another method for I/O or change the controller associated with the FPGA target.
    What other I/O options do I have to move data asynchronously from the RTOS to the FPGA? I tried creating memory, but it appears that I cannot write to the memory from the RTOS side.
    We also have a 9012 sitting around; will using this cRIO instead solve the problem?
    I am very, very green when it comes to LabVIEW, so I apologize if this is an easy question.
    Solved!
    Go to Solution.

    As stated in the NI-RIO driver readme,
    DMA is not supported from the host to the FPGA on the cRIO-900x series.
    The cRIO-901x controller supports DMA transfers from host to FPGA and
    FPGA to host while the cRIO-900x series controllers only support FPGA
    to host DMA transfers. Therefore, LabVIEW returns an error if you try
    to transfer using DMA from the cRIO-900x controller.
    The 9012 looks like the ideal solution; you are very lucky to have extra hardware lying around.
    Rob K
    Measurements Mechanical Engineer (C-Series, USB X-Series)
    National Instruments
    CompactRIO Developers Guide
    CompactRIO Out of the Box Video

  • Passing data from RT host to FPGA through DMA FIFO

    Hello,
    I am trying to write some data from an RT host to FPGA target using DMA FIFO then process this data and then read them back from the FPGA target to the RT host through another DMA FIFO. I am working on NI PXIe chassis 1062Q, with NI PXIe-8130 embedded RT controller and NI PXIe-7965R FPGA target.
    The problem I am facing is that I want to send three different arrays, two of the same size and the third one of a different size, and I need the smaller one to be sent to the FPGA first. I tried using a flat sequence with two frames in the FPGA VI. In the first frame I read and write the first array in a while loop that is finite (i.e., a finite number of iterations). The second frame contains the process of reading and writing the other two arrays (of the same size) in a while loop that can be finite or infinite (according to a control).
    The problem is that this didn't work. The two arrays are displayed on the front panel of the RT host VI and are working fine; however, the array that should have been read in the first frame doesn't show up on the front panel of the RT host VI. This doesn't make sense, because if it were not passed from the host to the FPGA and back, then the second frame shouldn't have executed. Note that I am wiring -1 to the timeout to block the while loop iterations until the passing of each element is complete, so the first while loop has only 3 iterations. Could someone help me understand why this happens and how to solve this problem?
    I am attaching a picture of both the host and the fpga vi.
    Thank you.
    Solved!
    Go to Solution.
    Attachments:
    RT host vi.png ‏102 KB
    FPGA vi.png ‏28 KB

    No need to initialize the arrays with values that you will immediately overwrite.  Here's what I believe to be equivalent code:
    The array outputs should be wired directly to the FPGA FIFO writes.  Do not use local variables when you can wire directly.
    If you know that you want to transfer the Temp Data Array first, why not make your code do that?  Eliminate the sequence structure, and put the functions in the order in which you want them to execute.  Use the FPGA reference and error wires to enforce that order.  You might consider writing the Temp Data Array, reading it back, then writing the Real and Imag A arrays, to see if that gets you the results you expect.  Run the code in simulation (in the project, right-click on the FPGA target and execute on the host with simulated IO) so that you can use execution highlighting and probes to see what is happening.  Wire the error wires through and see if you get an error anywhere.  Make sure you're not missing something simple like looking at the wrong starting array index.
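
    For readers driving the FPGA from a C host instead of an RT VI, the same ordering discipline can be sketched with the NI FPGA Interface C API: chain every call through one status value (the analogue of wiring the error wire through) so a failed write stops the sequence. The FIFO identifiers and the I32 data type below are hypothetical.

        /* Hedged sketch: enforce "small array first" ordering on the host side.
         * FIFO ids are hypothetical; the session is assumed to be open already. */
        #include <stddef.h>
        #include <stdint.h>
        #include "NiFpga.h"

        #define FIFO_TEMP_H2T 0   /* hypothetical Host-to-Target FIFO ids */
        #define FIFO_REAL_H2T 1
        #define FIFO_IMAG_H2T 2

        NiFpga_Status send_in_order(NiFpga_Session session,
                                    const int32_t *temp, size_t nTemp,
                                    const int32_t *re, const int32_t *im, size_t nIQ)
        {
            NiFpga_Status status = NiFpga_Status_Success;
            size_t remaining;

            /* 1. The Temp Data Array goes first, blocking until it is accepted
             *    (infinite timeout, like wiring -1 in the host VI). */
            NiFpga_MergeStatus(&status, NiFpga_WriteFifoI32(
                session, FIFO_TEMP_H2T, temp, nTemp,
                NiFpga_InfiniteTimeout, &remaining));

            /* 2. Only if that succeeded, send the Real and Imag arrays. */
            if (NiFpga_IsNotError(status)) {
                NiFpga_MergeStatus(&status, NiFpga_WriteFifoI32(
                    session, FIFO_REAL_H2T, re, nIQ,
                    NiFpga_InfiniteTimeout, &remaining));
                NiFpga_MergeStatus(&status, NiFpga_WriteFifoI32(
                    session, FIFO_IMAG_H2T, im, nIQ,
                    NiFpga_InfiniteTimeout, &remaining));
            }
            return status;
        }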

  • Flexrio FPGA dma and dram simulation

    I have a pair of FlexRIO 7966R boards where I am trying to perform DRAM-to-DMA transfers.  Each FPGA uses both banks of DRAM: one bank for capturing Camera Link frames, the other bank for capturing sub-frames from within each frame (and performing some processing on the sub-frame data).
    Each DRAM bank is written into from its own target-scoped FIFOs.
    Each DRAM bank is read into its own target-to-host DMA FIFOs.
    When only one side or the other is operating (capturing sub-frames by themselves or full frames by themselves), everything flows nicely.  But when I capture both at the same time, there appears to be some sort of contention in the DRAM (I suspect from the DMA engine).  Since I am simulating all of this, I would like to ask if anyone has detailed descriptions of the DRAM interface signals below?  These are generated by LabVIEW, but I have found no explanation of what they mean in any documentation.
    Also, in the simulation build, there is obviously a DMA simulation.  But from within the simulator, I can find no signals related to the FPGA-based DMA FIFOs or the simulated DMA transfers.  All I can infer about the DMA transfers is their effect on the DRAM above.  The DMA FIFO is being populated directly from the DRAM (yes, this is a highly modified variant of the 10-tap Camera Link (with DRAM) example from the NI-1483 examples).
    Does anyone know how I can see the DMA behavior from within a simulation?  This would most likely allow me to see exactly why the contention is taking place.
    Thanks!

    Hey xl600,
    I'm not immediately sure how to have ISim display the DMA Engine behavior, but I'll see if I can dig anything up for you. I've come across a couple of other users encountering issues with FIFO signals appearing in ISim over on the Xilinx forums, so it might be worthwhile to post there as well in case it happens to be due to ISim itself.
    Regards,
    Ryan

  • Labview 8.0 FPGA DMA question

    I am developing a control system which needs to access a large amount of
    variable data (32 MB) that I can put into the host computer's RAM. In the
    LabVIEW tutorial and also in other web resources it is well described
    how the DMA FIFO can be used to "write to" host PC memory (RAM), but
    nothing about "reading from" the PC RAM. Is it possible? And which
    control should be used? Do I need some lower-level coding?
    Thanks
    Davide

    Hi Davide,
    As of right now, the DMA transfers are one-way only (FPGA to host).
    Best regards,
    David H.
    Systems Engineer
    National Instruments

  • FPGA DMA Size Allocation

    Hi all,
    My application involves grabbing images from a 3-tap, 16-bit camera using FlexRIO. The PXI controller I am using is Windows-based, while the FlexRIO module that I have is a PXI-7954 + NI 1483 adapter. The size of the image I am grabbing is 2560 x 2160, U16, and the clock cycle is 100 MHz. I have been trying for over a week and, as of today, I am still not able to get the image from the camera because I keep getting the DMA Write Timeout error. Right now, the DMA size in the FPGA is set at 130k, but whenever I try to increase it further I get a compilation error. I have tried to have the host program grab 100k data points from the FPGA DMA every millisecond, but it seems I am capped at about 10-15 ms. Perhaps Windows has its own limitation...
    Attached is the program that I am using, modified from the LabVIEW shipped example. Please advise: how do I move forward from here? Or is it possible to increase the DMA buffer size to 10x higher than the current limit?
    Attachments:
    1-Tap 10-Bit Camera with Frame Trigger.zip ‏1684 KB

    Hi Shazlan
    Apologies for taking so long to reply to you.
    You are correct in saying that the latest driver is IMAQ 4.6.4 and this can be downloaded from our website if you have not done so already.
    If you have already installed the IMAQ 4.6.4 driver, has this managed to resolve your issue?
    Also, have you tried to run the compilation again and obtained a report outlining the problems?
    As a side note - I have been looking into the possibility of downloading some sort of driver for the Samos camera you are using from Andorra.  While National Instruments have not created a driver for this device, Andorra do have a Software Development Kit (SDK) which they say works with LabVIEW.  You may find it useful to have this so that you no longer have to write the driver yourself.  This may then save resources on the FPGA.
    Keep me updated on your progress and I will continue to look into this issue for you.
    Regards
    Marshall B
    Applications Engineer
    National Instruments UK & Ireland
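
    On the DMA-size question itself: the FPGA-side FIFO depth is limited by block RAM, but the host-side buffer can be made much deeper from the host program, and reading frame-sized blocks less often is usually more robust than polling every millisecond from Windows. Below is a hedged sketch using the NI FPGA Interface C API naming (the session is assumed to be open already; the FIFO id and buffer sizes are illustrative, so check your generated header and adjust to your own frame size).

        /* Hedged sketch: enlarge the host-side DMA buffer and read whole frames.
         * The FIFO id and sizes are examples only. */
        #include <stddef.h>
        #include <stdint.h>
        #include "NiFpga.h"

        #define FIFO_IMG_T2H 0                       /* hypothetical Target-to-Host FIFO */
        #define FRAME_PIXELS (2560u * 2160u)         /* one full U16 frame */

        NiFpga_Status grab_frames(NiFpga_Session session, uint16_t *frame, int nFrames)
        {
            NiFpga_Status status = NiFpga_Status_Success;
            size_t remaining;

            /* Done once, before acquisition starts: size the host buffer for several
             * frames so the FPGA side never stalls while Windows schedules the reader. */
            NiFpga_MergeStatus(&status,
                NiFpga_ConfigureFifo(session, FIFO_IMG_T2H, 4 * FRAME_PIXELS));

            for (int i = 0; i < nFrames && NiFpga_IsNotError(status); ++i) {
                /* One blocking read per frame instead of many small 100k-point reads. */
                NiFpga_MergeStatus(&status,
                    NiFpga_ReadFifoU16(session, FIFO_IMG_T2H, frame, FRAME_PIXELS,
                                       5000 /* ms timeout */, &remaining));
                /* ...process or store the frame here... */
            }
            return status;
        }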

  • Using Host and FPGA.vi in Teststand

    Does anyone know how to use the Host and FPGA VIs in TestStand? A National Instruments applications engineer told me I have to call the Project that the VI is in to get all the functionality of the FPGA. How do you call a Project in TestStand?
    Thanks

    Ensure you are using TestStand 2010 or above. Create a new sequence and add a LabVIEW Action step to it. Go to the Module panel and browse for the LabVIEW project, as displayed below.

  • What is block size for dma transfers? Can it be set?

    Can't find Application Note 011, "DMA Fundamentals on Various PC Platforms", on the NI website.
    Can someone please send me the link?
    What I'm trying to figure out is: what is the packet size (chunk size) for DMA transfers on NI M Series boards?
    i.e., how many samples are collected into the board's FIFO buffer before
    a DMA transfer takes place? (How many samples (or bytes) are
    transferred at a time?)
    Is this packet size (chunk size) configurable? If so, what is the minimum value that it can take on?
    Thanks,
    Maurice

    See post at http://forums.ni.com/ni/board/message?board.id=170&message.id=162527.

  • K8N Neo2 strange behavior with SATA enable DMA transfers

    K8N Neo2 with Maxtor 6B250S0 250GB SATA drive on SATA3 and Maxtor 6B200S0 200GB SATA drive on SATA4. No RAID. I use SATA3/4 because I have read that SATA1/2 are not locked with the PCI bus. I do overclock a bit. I have SATA1/2 disabled in the BIOS.
    The 250GB drive is for my applications, and the 200GB drive is for holding backups. I use Drive Image 2002 to create image files of my partitions. To create an image of my main partition, Drive Image boots to DOS. I noticed that the drive performance under DOS was slow. After a bit of testing, I discovered that enabling DMA transfers for SATA fixes the problem. This makes sense, because under Windows the IDE drivers provide DMA access, but under DOS the hardware itself needs to provide the DMA function. But here is what's strange... I have to enable DMA for both SATA channels! So even though I am using only one of the channels, and have the other disabled in the BIOS, I have to enable DMA transfers for both channels in order to have good performance under DOS.
    This occurs under both the 1.4 and 1.5 BIOS versions. This isn't a big problem and appears to be just a BIOS bug of some sort, but I thought I would post my findings here in case someone else is getting poor performance under DOS.
    BTW, does anyone know why these options default to disabled in the bios?

    BTW, does anyone know why these options default to disabled in the bios?
    It makes sense when Windows and the drivers provide it.
    But not, e.g., when installing an OS or doing DOS-based or Windows preinstall environment imaging.
    I have also thought it's strange that they have it disabled by default.
    At least they should be enabled with "Load Optimized Defaults".
    Just something we have to be aware of, I guess.
    This is one of the BIOS entries I always adjust when starting from a default BIOS load.
    Nice tip about having both enabled; I wasn't aware of that.
    I've just been in the habit of enabling both 1/2 and 3/4 each time, whether the ports are used or not.

  • DMA host to FPGA

    I use DMA to transfer array data from the RT system to the FPGA, and display the array data in the FPGA.
    Now, if the array data in the RT is an array containing one element, I can receive it fine in the FPGA VI, because in the FPGA VI the DMA Read function only accepts a single element; you cannot wire up an array indicator. But I want to transfer an array containing many elements. How can I do that?

    Hi,
    You must use only fixed-size arrays in FPGA VIs. If you have a For Loop without a set number of iterations, you must use a Numeric Constant or a control to set the number of iterations. Alternatively, if you are building an array by using the Insert Into Array function, substitute with the Replace Array Subset function.
    I don't know if this solves your problem; if not, attach your VI and I will have a look at what's happening.
    Hope this helps,
    Benjamin R.
    R&D Software Development Manager
    http://www.fluigent.com/

  • Write latency on PCIe from PC (host) to FPGA

    Hello,
    In my application, the target is to write 64 Bytes from the PC to the FPGA with the minimum latency.
    I have implemented the “Virtex-7 FPGA Gen3 Integrated Block for PCI Express (3.0)” IP, which seems the best candidate for this, and I am using the AXI-Stream “m_axi_cq” interface to write data to internal FPGA memory.
    The configuration is this one:
    PCIe GEN 3 / 8 lanes.
    AXI at 250 MHz with data bus = 256 bits.
    On the µP (Xeon E5 v2), the correct core is bound to the FPGA.
    I am using a specific Intel instruction to directly transfer a block of 256 bits (in my example I use this instruction twice in C code). I do not want to use DMA, because I am afraid of losing more time setting up the DMA than making a direct memory write.
    In the FPGA, I put an ILA core to monitor the access (see waveforms).
    - First bus is the output of the Xilinx core (“m_axi_cq” bus)
    - Second bus is the “memory write signal” of 8 memories of 32 bits data width => 16 x writes of 32 bits = 64 Bytes
    Question:
    - Even though I use 2 Intel instructions to transfer 2 x 256 bits, I see 4 transfers of 128 bits => is this normal?
    - It seems that we cannot have a TLP bigger than 256 bits (without DMA access) => can you confirm?
    - Between 2 write accesses, I have 23 clock cycles = 23 * 4 ns = 92 ns, and I cannot manage to decrease this figure => have I reached the minimum possible?
    Many thanks for your attention

     
    Be careful with watching the latency.
    It's HIGHLY variable in our cases.
    We've got our PCIe block (endpoint on the FPGA) issuing block reads to the CPU host and/or an NVIDIA GPU.
    We've measured, on various systems, average latencies in the 180-240 clock range (clock = PCIe user clock, 250 MHz). That's not great, but for us tolerable.
    The problem was the distribution of the latencies. The WORST-case latencies can be terrible: > 1600 clocks in some cases. Plotting a few series of results, we see a bimodal distribution of latencies, i.e. a bunch hovering around 150 and a bunch hovering around 300, the average being the above results. Usually around the point where the latencies tend to move from one "average" to the other, we get the outlier extra-long latency.
    Probably something to do with cache flushes/fills happening over on the CPU.
    In any event, because of this high variability of latencies, our first designs broke real time. We had to re-architect things to handle it.
    Bandwidth's not a problem for PCIe. Predictable latency, however, is troublesome...
    Regards,
    Mark
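
    For readers wondering what the "specific Intel instruction" approach above typically looks like in C, here is a hedged sketch of writing 64 bytes to a write-combining-mapped BAR with two 256-bit stores. The mapping of the BAR and the offset are assumptions (e.g. an mmap() of a UIO/VFIO resource done elsewhere), and whether the two stores are merged into a single TLP depends on the write-combining buffer behavior, which is exactly the uncertainty discussed in this thread.

        /* Hedged sketch: 64-byte host-to-FPGA write using two 256-bit AVX
         * non-temporal stores to a write-combining-mapped BAR. The caller is
         * assumed to pass a pointer to the WC mapping (32-byte aligned, e.g.
         * page-aligned from mmap); offsets 0x00/0x20 are illustrative. */
        #include <immintrin.h>
        #include <stdint.h>
        #include <string.h>

        void write_64_bytes(uint8_t *bar, const uint8_t payload[64])
        {
            __m256i lo, hi;
            memcpy(&lo, payload,      32);
            memcpy(&hi, payload + 32, 32);

            /* Non-temporal stores bypass the cache and go through the
             * write-combining buffer; ideally both land in one 64-byte burst. */
            _mm256_stream_si256((__m256i *)(bar + 0x00), lo);
            _mm256_stream_si256((__m256i *)(bar + 0x20), hi);

            /* Flush the WC buffer so the writes are posted to the link now
             * rather than whenever the buffer happens to drain. */
            _mm_sfence();
        }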
     

  • FPGA DMA FIFO compilation error

    Hello,
    I have a cRIO 9074 with FPGA. I am trying a simple piece of code to learn how to acquire data that is generated on the FPGA at a rate of 10 kHz and transfer it to the Host VI for later offline processing. However, I encounter a compilation error when compiling this basic FPGA VI containing a FIFO
    write node (picture of the VI attached below). In the Compilation Report, it says that 256 Block RAMs were used (the
    total number available is 40), and therefore an error was produced. The exact compilation error
    notification from the Xilinx report is reproduced below:
    # Starting program map
    # map -o toplevel_gen_map.ncd -intstyle xflow -timing toplevel_gen.ngd
    toplevel_gen.pcf
    Using target part "3s2000fg456-4".
    Mapping design into LUTs...
    Running directed packing...
    Running delay-based LUT packing...
    ERROR:Pack:2310 - Too many comps of type "RAMB16" found to fit
    this device.
    ERROR:Map:115 - The design is too large to fit the device.  Please check the Design Summary section to
    see which resource requirement for your design exceeds the resources available
    in the device. Note that the number of slices reported may not be reflected
    accurately as their packing might not have been completed.
    NOTE:  An NCD file will still be
    generated to allow you to examine the mapped design.  This file is intended for evaluation use only,
    and will not process successfully through PAR.
    Mapping completed.
    See MAP report file "toplevel_gen_map.mrp" for details.
    Problem encountered during the packing phase.
    Design Summary
    Number of errors   :   2
    Number of warnings : 125
    ERROR:Xflow - Program map returned error code 2. Aborting flow
    execution...
    Bitstream Not Created
    Timing Analysis Passed
    What does this mean? How can I fix this error?
    Thank you,
    Bogdan
    Solved!
    Go to Solution.
    Attachments:
    FPGA.png ‏16 KB

    Sorry, I forgot to mention that... LabVIEW 2009. And yes, this is the only loop in the FPGA VI. I just made up this code to understand how exactly I would save some data on the host for subsequent processing, but I didn't get to that point because the VI on the FPGA does not compile successfully. Do you know of any example of the most basic code for DMA FIFOs between the FPGA and the host computer? This should be pretty straightforward, but for some reason it's not.
    Thanks,
    Bogdan
