FPGA DMA Size Allocation

Hi all,
My application involves grabbing images from a 3-tap, 16-bit camera using FlexRIO. The PXI controller I am using is Windows-based, while the FlexRIO module I have is a PXI-7954 + NI 1483 adapter. The image I am grabbing is 2560 x 2160, U16, and the pixel clock is 100 MHz. I have been trying for over a week and I am still not able to get an image from the camera, as I keep getting a DMA Write Timeout error. Right now the DMA FIFO size on the FPGA is set to 130k, but whenever I try to increase it further I get a compilation error. I have tried to have the host program grab 100k data points from the FPGA DMA every millisecond, but it seems I am capped at about 10-15 ms per read. Perhaps Windows has its own limitation...
Attached is the program I am using, modified from the LabVIEW shipped example. Please advise: how do I move forward from here? Or is it possible to further increase the DMA buffer size to 10x higher than the current limit?
Attachments:
1-Tap 10-Bit Camera with Frame Trigger.zip (1684 KB)
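For scale, a quick back-of-the-envelope check shows why the host falls behind and the write timeout appears. The frame rate below is an assumption for illustration (the post does not state one); the 2560 x 2160 U16 format is from the post:

```python
# Rough throughput check for the camera described above.
WIDTH, HEIGHT = 2560, 2160
BYTES_PER_PIXEL = 2            # U16
FRAME_RATE_HZ = 100            # assumed frame rate, for illustration only

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
rate_bytes_s = frame_bytes * FRAME_RATE_HZ
print(f"One frame: {frame_bytes / 1e6:.1f} MB, sustained: {rate_bytes_s / 1e6:.0f} MB/s")

# If the host only services the FIFO every ~15 ms (as observed), the host-side
# DMA buffer must absorb at least this many elements between reads:
service_interval_ms = 15
min_host_buffer_elems = rate_bytes_s * service_interval_ms // 1000 // BYTES_PER_PIXEL
print(f"Minimum host buffer: ~{min_host_buffer_elems:,} U16 elements")
```

If those numbers are in the right ballpark, the usual fix is to enlarge the host-side DMA buffer (via the FIFO Configure method on the host FIFO reference) rather than the FPGA-side FIFO: the host buffer lives in PC RAM and is not limited by FPGA block RAM, which is what the compilation error is running into.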

Hi Shazlan
Apologies for taking so long to reply to you.
You are correct in saying that the latest driver is IMAQ 4.6.4 and this can be downloaded from our website if you have not done so already.
If you have already installed the IMAQ 4.6.4 driver, has this managed to resolve your issue?
Also, have you tried to run the compilation again and obtained a report outlining the problems?
As a side note - I have been looking into the possibility of downloading some sort of driver for the Samos camera you are using from Andor.  While National Instruments have not created a driver for this device, Andor do have a Software Development Kit (SDK) which they say works with LabVIEW.  You may find it useful to have this so that you no longer have to write the driver yourself.  This may then save resources on the FPGA.
Keep me updated on your progress and I will continue to look into this issue for you.
Regards
Marshall B
Applications Engineer
National Instruments UK & Ireland

Similar Messages

  • 5640R FPGA Examples: Samples missing at multiples of DMA-Size

    Hi all,
    We have started working with the NI-5640R IF-RIO in LabVIEW 2011.
    In our current setup, a 15 MHz sine wave is generated from a signal generator and connected to the AI of the 5640R.
    Then we ran the built-in examples and recorded the data with a Measurement File (Express VI) in TDMS format. The output TDMS file was then analyzed using SCOUT software. (See Demo_setup.png.)
    After running various built-in examples, we found that the Driver > Stream-to-Disk example ran fine and the input 15 MHz CW sine wave was acquired perfectly by the 5640R card. (See Driver_Stream_to_Disk.png.)
    But there seems to be an issue/bug with the FPGA > 5640R and FPGA > 5640R Async examples. The problem is that when we analyze the acquired sine wave in SCOUT, there are jumps in the signal, which means that after every DMA block some in-between samples are being skipped by the device/host.
    This is shown in the figures FPGA_5640R.png and FPGA_5640R_Async.png: after every DMA block (5000 samples) the sine wave has jumps.
    Afterwards we increased the DMA size to 10000 and analyzed the captured signal again. Now the jumps in the sine wave were visible at multiples of 10000. (See FPGA_5640R_Async_DMA10000.png.)
    Although the driver examples work fine, in our work we want to use the 5640R FPGA.
    Can anybody explain this behaviour, or what should we do to overcome this issue?
    Thanks,
        -Adeel

    Hi Chris,
    >>Did you use the Asynch Analog Input Example as it is out of the box or did you make any modifications?
    Yes, I used it as-is. I just added a Measurement File at the I/Q output to visualize the results offline.
    >>When you are talking about changing the DMA FIFO size, are you referring to the host side or the FPGA?
    I am referring to "DMA-Read Number-of-Elements"; see the image below. In the examples this parameter is the same as "AI-Samples-to-Acquire".
    >>From looking at the example, it looks like the host buffer is set to 1M elements, and the FPGA side to 16,383
    Yes, this is correct, and I did not change these settings.
    >> Is there a chance you are changing the samples to acquire (That starts at a default of 5,000)?
    Yes, by default it is 5000 and the results were incorrect. Then I changed it to 10000 and 15000 and the results were the same, i.e. samples were skipped at multiples of this parameter (please see my first post).
    By trial and error, this issue was resolved: if we disconnect the "AI-Samples-to-Acquire" control from the FPGA parameter input setting, the error disappears and everything works fine.
    Thanks,
        -Adeel
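The symptom is easy to reproduce offline. Here is a sketch (pure Python; the sample rate and test tone are assumptions, the 5000-element DMA size is from the post) that fabricates the dropped-samples bug and flags the resulting jumps:

```python
import math

# Fabricate a sampled sine, drop 3 samples at every DMA-block boundary
# (mimicking the reported bug), then flag unusually large steps.
fs, f0 = 200_000, 15_000     # assumed sample rate / test tone (Hz)
dma_size, n = 5000, 20000

clean = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]

# Mimic the bug: delete 3 samples at each DMA boundary.
glitched = [s for i, s in enumerate(clean)
            if not (dma_size <= i and i % dma_size < 3)]

# The largest legitimate step of the clean tone between adjacent samples:
max_step = 2 * math.sin(math.pi * f0 / fs)

# A dropped-sample gap shows up as a step well beyond that bound.
jumps = [i for i in range(len(glitched) - 1)
         if abs(glitched[i + 1] - glitched[i]) > 1.5 * max_step]
print("discontinuities near indices:", jumps)
```

The detected indices land just before each DMA-block boundary, matching the jumps Adeel saw in SCOUT at multiples of the DMA size.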

  • What's the FPGA step size and how to calculate it?

    Hi there,
    I inherited a VI with a problem in it. It basically reads a binary file and then displays it. The VI reads the binary file using Read From Binary File; the output data from this function is then sent to the FPGA after being multiplied by a number (32767/10). But unfortunately I get a wrong output: for some reason the final output value looks attenuated. People told me it may be related to the FPGA step size, so I want to know what the FPGA step size is and how to calculate it. Can someone answer my questions here?
    Thanks in advance!!!

    Hi Weny,
    It sounds like you are trying to find out the output resolution of your FPGA module.  It would be helpful if you provided what FPGA module you are using in your tests so we know what information to provide.  For instance, the R Series Manual provides information on how to calculate the required DAC output code to generate the desired output voltage.  You should also try to keep the accuracy of your device in mind.  The analog output signal you are generating will be subject to gain, offset, and noise errors.  You can use the specifications sheet (such as the R Series Specifications) of your device to determine what accuracy your board will have.  The specs also provide information on the resolution of the board.  You can search ni.com for the manual and specifications for your particular device if you are not using R Series. 
    Regards,
    Browning G
    FlexRIO R&D
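As a concrete sketch of the scaling involved, assuming a +/-10 V, 16-bit analog output (which is what the 32767/10 factor in the VI implies; check your module's manual for the real numbers):

```python
# DAC code math for an assumed +/-10 V, 16-bit analog output.
V_RANGE = 10.0          # volts, one-sided (assumption)
BITS = 16

# Output resolution: the voltage step of one DAC code.
lsb_volts = 2 * V_RANGE / 2**BITS

def volts_to_code(v):
    """Scale a voltage to a signed 16-bit DAC code, as in the VI's 32767/10 factor."""
    return round(v * 32767 / V_RANGE)

print(f"1 LSB = {lsb_volts * 1e6:.1f} uV")
print(volts_to_code(5.0))
```

If the module's actual range differs from +/-10 V, the same 32767/10 factor lands on the wrong codes, which can look like a gain error or attenuation in the output.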

  • HOST to FPGA DMA Transfers

    Hi,
    We're having trouble using the Host to FPGA DMA feature available in
    LabVIEW 8.20 with IFRIO. After starting a compile, we get the following
    error message:
    An internal software error has occurred.
    Please contact National Instruments technical support at ni.com/support
    with the following information:
    Error -61048 occurred at This target does not support DMA Output (from the host to the target).
    Possible reason(s):
    LabVIEW FPGA:  This target does not support DMA Output (from the host to the target).
    Any help would be greatly appreciated.
    Thanks.

    Hi Manik:
    We did not support DMA output on the PCI-5640R when we released NI-5640R 1.0. This is why you are getting the error message that you are seeing.
    We plan to add support for DMA output in an upcoming release.
    ----abhay

  • Fpga DMA FIFO compilation error

    Hello,
    I have a cRIO 9074 with FPGA. I am trying a simple piece of code to learn how to acquire data that is generated on the FPGA at a rate of 10 kHz and transfer it to the host VI for later offline processing. However, I encounter a compilation error when compiling this basic FPGA VI containing a FIFO write node (picture of the VI attached below). The Compilation Report says that 256 Block RAMs were used (the total number available is 40), therefore an error was produced. The exact compilation error from the Xilinx report is reproduced below:
    # Starting program map
    # map -o toplevel_gen_map.ncd -intstyle xflow -timing toplevel_gen.ngd
    toplevel_gen.pcf
    Using target part "3s2000fg456-4".
    Mapping design into LUTs...
    Running directed packing...
    Running delay-based LUT packing...
    ERROR:Pack:2310 - Too many comps of type "RAMB16" found to fit
    this device.
    ERROR:Map:115 - The design is too large to fit the device.  Please check the Design Summary section to
    see which resource requirement for your design exceeds the resources available
    in the device. Note that the number of slices reported may not be reflected
    accurately as their packing might not have been completed.
    NOTE:  An NCD file will still be
    generated to allow you to examine the mapped design.  This file is intended for evaluation use only,
    and will not process successfully through PAR.
    Mapping completed.
    See MAP report file "toplevel_gen_map.mrp" for details.
    Problem encountered during the packing phase.
    Design Summary
    Number of errors   :   2
    Number of warnings : 125
    ERROR:Xflow - Program map returned error code 2. Aborting flow
    execution...
    Bitstream Not Created
    Timing Analysis Passed
    What does this mean? How can I fix this error?
    Thank you,
    Bogdan
    Attachments:
    FPGA.png ‏16 KB

    Sorry, I forgot to mention that... LabVIEW 2009. And yes, this is the only loop in the FPGA VI. I just made up this code to understand how exactly I would save some data on the host for subsequent processing, but I didn't get to that point because the VI on the FPGA does not compile successfully. Do you know of any example of the most basic code for DMA FIFOs between the FPGA and host computer? This should be pretty straightforward, but for some reason it's not.
    Thanks,
    Bogdan
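The arithmetic behind the error can be sketched. The Xilinx log shows the 9074's Spartan-3 part (3s2000) with 40 RAMB16 blocks, each 18 kbit (about 16 kbit of data), and a deep DMA FIFO alone can blow past that. This is a rough model, not the exact mapper accounting:

```python
import math

# Rough block-RAM budget for the cRIO-9074's Spartan-3 part (3s2000):
# 40 RAMB16 blocks, ~16 kbit of data each.
TOTAL_BRAMS = 40
DATA_BITS_PER_BRAM = 16 * 1024

def brams_for_fifo(depth, bits_per_element=32):
    """Approximate RAMB16 usage for a FIFO -- a sketch, not the exact mapper result."""
    return math.ceil(depth * bits_per_element / DATA_BITS_PER_BRAM)

# A ~130k-element U32 FIFO by itself needs ~254 blocks -- close to the 256
# the failing compile reported, and far beyond the 40 available:
print(brams_for_fifo(130_000))
# A modest depth fits easily; the FPGA FIFO only needs to ride out host jitter,
# with the large buffer living on the host side:
print(brams_for_fifo(1023))
```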

  • SGA+PGA Size allocation

    Hi all
    I have a server with 16 GB RAM and a 1 TB hard disk.
    I installed Oracle 11g R2 on the server.
    I want to allocate dedicated memory for Oracle.
    How should I calculate the SGA+PGA memory size for good performance?

    Please do not post duplicates - Reg:-SGA & PGA Memory Allocation Size

  • Flexrio FPGA dma and dram simulation

    I have a pair of Flex RIO 7966r boards where I am trying to perform DRAM to DMA transfers.  Each FPGA uses both banks of DRAM.  One bank for capturing Camera Link frames, the other bank for capturing sub-frames from within each frame (And performing some processing on the sub-frame data).
    Each DRAM bank is written from its own target-scoped FIFO.
    Each DRAM bank is read out through its own target-to-host DMA FIFO.
    When only one side or the other is operating (capturing sub-frames by themselves, or full frames by themselves), everything flows nicely.  But when I capture both at the same time, there appears to be some sort of contention in the DRAM (I suspect from the DMA engine).  Since I am simulating all of this, I would like to ask if anyone has detailed descriptions of the DRAM interface signals below?  These are generated by LabVIEW, but I have found no explanation of what they mean in any documentation.
    Also, in the simulation build there is obviously a DMA simulation, but from within the simulator I can find no signals related to the FPGA-based DMA FIFOs or the simulated DMA transfers.  All I can infer about the DMA transfers is their effect on the DRAM above.  The DMA FIFO is populated directly from the DRAM (yes, this is a highly modified variant of the 10-tap Camera Link (with DRAM) example from the NI-1483 examples).
    Does anyone know how I can see the DMA behavior from within a simulation?  This would most likely let me see exactly why the contention is taking place.
    Thanks!

    Hey xl600,
    I'm not immediately sure how to have ISim display the DMA Engine behavior, but I'll see if I can dig anything up for you. I've come across a couple of other users encountering issues with FIFO signals appearing in ISim over on the Xilinx forums, so it might be worthwhile to post there as well in case it happens to be due to ISim itself.
    Regards,
    Ryan

  • Labview 8.0 FPGA DMA question

    I am developing a control system which needs to access a big mass of variable data (32 MB) that I can put into the host computer's RAM. The LabVIEW tutorials and other web resources describe well how a DMA FIFO can be used to write to host PC memory (RAM), but say nothing about reading from the PC RAM. Is it possible? And which control should be used? Do I need some lower-level coding?
    Thanx
    Davide

    Hi Davide,
    As of right now, the DMA transfers are one-way only (FPGA to host).
    Best regards,
    David H.
    Systems Engineer
    National Instruments

  • Stripe Breadth and Block size Allocation..

    Hi,
    Could anyone please advise me if there is any formula or utility to calculate or to investigate the stripe Breadth or the Block size to be used while creating the pools, I know it differs with the different kind of data to be stored..
    It should be a document or a utility that helps with that, I'm still fetching for.
    waddah
    MacBook Pro   Mac OS X (10.4.5)   PowerMac G5

    Check out Andre Aulich's site:
    http://www.andre-aulich.de/en/perm/optimized-xsan-settings-for-several-video-fil e-formats
    A lot of good testing went into those results.

  • How to determine FPGA code size ?

    Hi!
    Designing my application, I would like to have an estimate of the memory space it will take on the FPGA... Is that possible?

    Yes, I agree with you. Here are some tips that might help you during coding.
    1. Avoid using series of multipliers.
    2. Avoid using too many nested case structures.
    3. Avoid replacing more than one element of an array at a time.
    4. Try to pipeline your code as much as possible to achieve higher clock rates.
    5. Identify the critical path and see if you can reduce its length by adding registers between sections of the code.
    6. After all this, do not forget to test the functionality of the code on your PC before starting the compilation process.
    All of the above apply when you are writing code inside a single-cycle Timed Loop.
    I would also recommend that you compile your code in parts rather than the complete system. In the meantime, you can disconnect the compile server from LabVIEW and continue working on other sections of the code. After the compilation is done, you can always reconnect to the compile server to link the bitfile to the VI.
    You will learn to estimate the clock rate and slices used on the FPGA.
    --Vinay

  • FPGA Boolean Size in Memory

    LabVIEW stores booleans as a U8 in memory according to the article.
    http://zone.ni.com/reference/en-XX/help/371361J-01/lvconcepts/how_labview_stores_data_in_memory/
    My question is what happens when you develop in FPGA? Specifically if I made 8 lookup tables with the data type being a boolean, does that take the same amount of resources as a single U8 lookup table? Or is it taking as much resources as 8 U8 lookup tables?
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

    Hooovahh wrote:
    LabVIEW stores booleans as a U8 in memory according to the article.
    http://zone.ni.com/reference/en-XX/help/371361J-01/lvconcepts/how_labview_stores_data_in_memory/
    My question is what happens when you develop in FPGA? Specifically if I made 8 lookup tables with the data type being a boolean, does that take the same amount of resources as a single U8 lookup table? Or is it taking as much resources as 8 U8 lookup tables?
    I am not 100% sure, but I believe on FPGA a boolean is actually treated as a bit, not a byte. The u8 representation is due to a byte being the smallest addressable unit in a PC type environment. (At least I think this is correct. Someone smarter than me may say otherwise).
    CLA, LabVIEW Versions 2010-2013
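The bit-versus-byte point can be sketched numerically. This is host-side Python, not FPGA code: it just shows why eight 1-bit tables hold exactly the same information as one U8 table of the same depth, which is consistent with the reply's claim that a boolean is a single bit on the fabric:

```python
# Eight boolean lookup tables of depth N, packed into one U8 table of depth N.
# Table b here is "bit b of the index", purely for illustration.
N = 16
bool_tables = [[(i >> b) & 1 == 1 for i in range(N)] for b in range(8)]

# Pack: bit b of each U8 entry holds boolean table b's value at that index.
u8_table = [
    sum((1 << b) for b in range(8) if bool_tables[b][i])
    for i in range(N)
]

def lookup(table_index, i):
    """Read boolean table `table_index` back out of the packed U8 table."""
    return (u8_table[i] >> table_index) & 1 == 1

print(u8_table[:4])
```

So in terms of raw storage, 8 boolean LUTs cost about the same as one U8 LUT of the same depth; whether the compiler actually merges them that way depends on the target and how the tables are wired.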

  • Rt fpga dma timeout not waiting

    Hello
    I have a Target-to-Host DMA FIFO between the FPGA and RT host. The read function sits in a while loop with a 2000 ms timeout. Upon timeout it loops round and re-calls the read function. On the second and subsequent calls, it does not wait for the timeout time. I assume that this is because it has already timed out. The error is wired to a shift register.
    Does the timeout result in an error on the error line?
    Is there a way to clear the timeout?
    If i change it to an indefinite wait, is there a way to force the read to wake up?
    Thanks

    jonnnnnnn wrote:
    Hello
    I have a Target-to-Host DMA FIFO between the FPGA and RT host. The read function sits in a while loop with a 2000 ms timeout. Upon timeout it loops round and re-calls the read function. On the second and subsequent calls, it does not wait for the timeout time. I assume that this is because it has already timed out. The error is wired to a shift register.
    Does the timeout result in an error on the error line?
    Is there a way to clear the timeout?
    If i change it to an indefinite wait, is there a way to force the read to wake up?
    Thanks
    Does the timeout result in an error on the error line? Yes, error -50400 if memory serves correctly. Did you try highlight execution to see this?
    Is there a way to clear the timeout? Don't wire -1; I got burned by this. The DMA read actually polls in the background (I think, but I never verified), so a negative timeout can slam your processor. You could maybe try interrupts, although I didn't use that method; I just used a not-too-long timeout and cleared the error if it was the timeout error.
    If I change it to an indefinite wait, is there a way to force the read to wake up? The read will only return when data is there, unless you have a timeout.
    CLA, LabVIEW Versions 2010-2013
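The retry pattern being described can be sketched in pure Python, with a queue.Queue standing in for the DMA FIFO read. This illustrates the control flow only, not the NI API: read with a finite timeout, treat a timeout as "no data yet" rather than a sticky error, clear it, and loop:

```python
import queue

# A queue stands in for the Target-to-Host DMA FIFO.
fifo = queue.Queue()

def read_loop(polls=3, timeout_s=0.01):
    """Poll the FIFO a few times; a timeout is cleared and the loop continues."""
    received = []
    for _ in range(polls):
        try:
            received.append(fifo.get(timeout=timeout_s))
        except queue.Empty:
            # Equivalent of clearing the DMA timeout error before the next read.
            continue
    return received

fifo.put(42)
out = read_loop()
print(out)
```

The first poll returns the queued element; the remaining polls time out harmlessly and are cleared, which is the behavior jonnnnnnn's loop should aim for instead of an indefinite wait.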

  • Modifying heap size allocated to glassfish...

    Hello!
    I've noticed that by default glassfish uses 512Mb for the heap. I'd like to modify this value.
    How can I change it?
    Is there any way through the administrative consiole provided (asadmin)?
    Thank you
    Sorin

    You can modify the heap size from the admin console.
    Login to the admin console, Click on Application Server on the Left Pane.
    On the Right Pane, click on JVM Settings -> JVM Options.
    Modify -Xmx to whatever value you desire.
    Cheers,
    Vasanth

  • DMA RT to FPGA guaranteed order?

    I have a question regarding the sending of data via FIFO to an FPGA card via DMA.  I would assume that if I have several locations in my RT code sending data via DMA FIFO that it is still guaranteed that any given DMA Transfer (let's say 12 data values) are delivered atomically.  That is to say that each DMA node data is sent as a contiguous block.
    Take the following example.  I have two DMA FIFO nodes in parallel.  I don't know which is going to be executed first, and they will most of the time be vying for bandwidth.  Both nodes send over the SAME DMA FIFO.  Does the data arrive interleaved, or is each sent block guaranteed to be contiguous on the receiving end?  Do I end up with
    Data0 (Faster node)
    Data1 (Faster node)
    Data2 (Faster node)
    Data3 (Faster node)
    Data11 (Faster node)
    Data0 (Slower node)
    Data1 (Slower node)
    Data11 (Slower node)
    or do the individual items get interleaved?
    I'm kind of assuming that they remain in a contiguous block which I'm also hoping for because I want to abuse the DMA FIFO as a built-in timing source for a specific functionality I require on my FPGA board.  I can then use the RT-FPGA DMA Buffer to queue up my commands and still have them execute in perfect determinism (Until the end of the data in my single DMA transfer of course).
    Shane.
    Say hello to my little friend.
    RFC 2323 FHE-Compliant

    Woah, new avatar. Confusing! 
    I am going to preface this by saying that I am making assumptions, and in no way is this a definitive answer. In general, I have always had to do FPGA to RT streaming through a FIFO and not the other way around.
    When writing to the FPGA from the RT, does the FIFO.write method accept an array of data as the input? I'm assuming it does. If so, I'd then make the assumption that the node is blocking (like most everything else in LabVIEW), in which case the data would all be queued up contiguously. Interleaving would imply that two parallel writes would have to know about each other, and the program would wait for both writes to execute so that it could interleave the data. That doesn't seem possible (or if it were, it would be an awful design decision, because what if one of the writes never executed?).
    Of course, this is all assuming that I am understanding what you are asking.
    You're probably safe assuming the blocks are contiguous. Can you test this by simulating the FPGA? If you're really worried about interleaving, could you just wrap the FIFO.write method in a subVI, so that you are 100% sure of the blocking?
    Edit: Had a thought after I posted, how can you guarantee the order things are written to the FIFO? For example, what if the "slow" write actually executes first? Then your commands, while contiguous, will be "slower node" 1-12 then "faster node" 1-12. It seems to me you would have to serialize the two to ensure anything. 
    Sorry if I'm not fully understanding your question.
    CLA, LabVIEW Versions 2010-2013
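The subVI-wrapper suggestion amounts to serializing the writers. A sketch in Python, with a list plus a lock standing in for the shared RT-to-FPGA DMA FIFO:

```python
import threading

# A shared list stands in for the DMA FIFO; the lock plays the role of the
# subVI wrapper that makes each block-write atomic.
fifo = []
fifo_lock = threading.Lock()

def write_block(tag, n=12):
    """Append one 12-element block under the lock; element-by-element writes
    without such serialization could interleave with the other writer."""
    block = [f"{tag}-{i}" for i in range(n)]
    with fifo_lock:
        fifo.extend(block)

threads = [threading.Thread(target=write_block, args=(t,)) for t in ("fast", "slow")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each block is contiguous; which block arrives first is still nondeterministic,
# which is exactly the ordering caveat raised in the reply's edit.
print([item.split("-")[0] for item in fifo[::12]])
```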

  • Problem with the analog input acquisition in labview fpga 8.5.1 and CRIO

     Hello all,
    I am using an NI cRIO-9014 RT controller with a cRIO-9104 FPGA chassis. I am using an analog input module, the NI 9205. My problem is with the acquisition. I am using a function generator which generates a sine wave of -1 to 1 V, 5 Hz to 125 kHz, and I have connected it to the 9205 in RSE mode. In the FPGA VI I placed the I/O node and an indicator (data type FXP). If I connect the I/O node to a chart or graph, I am not able to see the output. How can this problem be solved?
    Next I placed a FIFO in DMA transfer mode of depth 1023 which accepts the U32 data type, so I did some manipulation as mentioned in a tutorial.
    Now in the RT VI I am reading the same data from the buffer as U32 and, doing the reverse manipulation, I am able to view the data. The problem is: if I increase the frequency of the input signal, the data is completely lost. How can I overcome this problem?
    thanks in advance,
    srikrishna.J
    Analysis Engineer,
    Neurofocus

    Difficult to see where the problem is...
    Make sure you are building a solid RT system by starting from the reference design examples.
    Don't forget to specify the DMA FIFO (host buffer) size in the RT code.
    The DMA FIFO depth set under the FPGA project explorer matters far less than the host-side buffer.
    Upload your code and you will get answers...
    Mathieu
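A quick sketch of the likely arithmetic behind the lost data (the loop rates below are assumptions for illustration; the 1023-element FIFO depth is from the post):

```python
# With a fixed FPGA FIFO depth, the host must drain the FIFO before it fills.
fifo_depth = 1023
sample_rate_hz = 250_000       # assumed FPGA AI loop rate
host_loop_period_ms = 10       # assumed RT read-loop period

samples_per_host_loop = sample_rate_hz * host_loop_period_ms // 1000
print(f"Samples produced per host iteration: {samples_per_host_loop}")
print("FIFO overflows" if samples_per_host_loop > fifo_depth else "FIFO keeps up")

# A host-side buffer a few seconds deep (set in the RT code, as Mathieu says)
# gives the RT loop slack to catch up:
host_buffer_elems = 2 * sample_rate_hz
print(f"Suggested host-side buffer: {host_buffer_elems:,} elements")
```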
