NI 5112 DMA Performance

Hi,
I am currently doing continuous data acquisition with the NI 5112 at a sample rate of 2 x 20 MSamples/s (using fetchBinary8) for software-defined radio. I transfer the data from the 5112 into main memory in real time and it works fine.
However, this continuous data transfer consumes nearly 100% of one 3 GHz P4 (I have a dual-processor system), and I need that CPU to process the data as well. I don't understand this, since DMA transfers should not require CPU cycles. Does anyone have an explanation?
thanks, thomas

What is the "chunk size" when you are continuously fetching data? If
you're using LabVIEW and you leave the number of points to fetch as -1
in the read/fetch VI, the VI will return as soon as it finds a non-zero
number of points to fetch. Depending on how fast your computer is and
your sample rate, you may be getting only a few points per iteration in
the loop. Setting the number of points to a larger value, say 1M for
example, may improve your CPU usage. Also, be sure to set a non-zero
timeout value because if the timeout is zero, the available points are
returned regardless of the number of points requested. Also, is this a PCI or a PXI system?
The CPU usage may still be higher than you expect because we DMA the
data from the board to a temporary buffer before copying it to your
buffer. We do this because we can only transfer a minimum of 256 bytes
at a time with DMA. If an acquisition is less than that size or a
non-multiple of 256 bytes, some of the points at the beginning and end
of the acquisition will be invalid. Since we don't want to make the
user have to think about all that stuff, we return a buffer that is
exactly what was asked for. Unfortunately, that requires an extra copy
of the data. We may add direct DMA to the user buffer in a future
release of NI-SCOPE.
I'd be interested to hear how well your application still runs after you
add your processing algorithms into your program. You may find that the
CPU yields time and it still works ok. If not, check back with us and
we can look at alternative ways to improve the performance.
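For what it's worth, here is roughly what the advice above looks like with the C API the original poster mentions (fetchBinary8). This is only a minimal sketch, assuming a session that has already been configured and initiated for continuous acquisition; the 1 M-sample chunk, the 5 s timeout, and the two-channel list are illustrative values, and the exact prototypes should be checked against niScope.h.

    #include "niScope.h"

    #define CHUNK 1000000                      /* fetch large blocks, not "whatever is there" */

    void fetchLoop(ViSession vi)
    {
        static ViInt8 wfm[2 * CHUNK];          /* two channels of 8-bit samples */
        struct niScope_wfmInfo info[2];
        while (1)
        {
            /* Non-zero timeout: wait until CHUNK samples per channel are
               available instead of returning immediately with a few points. */
            ViStatus status = niScope_FetchBinary8(vi, "0,1", 5.0, CHUNK, wfm, info);
            if (status < VI_SUCCESS)
                break;
            /* ...process 2*CHUNK bytes here... */
        }
    }

Fetching in large chunks with a non-zero timeout is what keeps the per-fetch overhead (and the extra copy described above) from being paid thousands of times per second.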

Similar Messages

  • Analog out DMA performance problems

    I'm working on an open-source driver for m-series and e-series boards (http://www.comedi.org). I've discovered some performance problems doing dma to analog outputs that I can't resolve. In summary, dma transfers to the analog output of a PXI-6281 in a pxi crate being controlled through a mxi-4 connection (pxi-pci8336) are VERY slow. I'm talking 250k samples/sec slow. That's the maximum speed the dma controller can fill the board's analog output fifo from host memory. I've also got an older PXI-6713 in the same crate, and dma transfers to it are about 15 times faster (about 3.5M samples/sec). I did notice that clearing the dma burst enable bit in the mite chip's channel control register caused the 6713 to slow way down to something comparable to the 6281 (about 500k samples/sec). Setting or clearing the burst enable bit had no effect on the speed of the 6289. Is there some special mojo that needs to be done to enable burst transfers on the 6289? Also, even the relatively speedy 6713 does dma transfers much slower than it should, since the pxi-pci8336 advertises 80MB/sec sustained transfer rates over mxi4. Can you provide any insight into this matter? I've already looked through the ddk, a register-level document describing the mite chip, and example code which had chipobjects for the mite and an analog input example.
    By the way, dma transfers for analog input on the 6281 weren't as bad; I didn't measure the transfer time, but I was at least able to do input at 500k samples/sec without fifo overruns.
    I'll post more detailed performance measurements in a subsequent post, and include measurements for a couple of other similar pci boards (a pci-6289 and pci-6711). In case you're wondering, neither of the pci boards gets anywhere close to the bandwidth provided by the pci bus, but they're not as spectacularly bad as the pxi-6281.

    Here are my measurements:
    PCI-6711, tested on 1.4GHz Pentium 4:
    5.2 to 5.3 milliseconds to load fifo to half-full using dma. 0.9 to 1.0 microseconds to write to a 16-bit register. 1.9 to 2.1 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect.
    PXI-6713, tested on 3.2GHz Pentium D:
    2.2 to 2.4 milliseconds to load fifo to half-full using dma. 0.5 to 0.7 microseconds to write to a 16-bit register. 5 to 7 microseconds to read from a 16-bit register. Turning off the mite's burst enable bit causes the dma fifo load time to increase to 16 to 17 milliseconds.
    PCI-6289, tested on 3GHz Pentium 4:
    2.0 to 2.2 milliseconds to load fifo to half-full using dma. 0.4 to 0.6 microseconds to write to a 16-bit register. About 1.2 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect. I could do streaming analog output on 1 channel with an update rate of about 2.1MHz before the board's fifo started to underrun.
    PXI-6281, tested on 3.2GHz Pentium D:
    18 to 19 milliseconds to load fifo to half-full using dma. 0.3 to 0.4 microseconds to write to a 16-bit register. 4 to 6 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect. I could do streaming analog output on 1 channel with an update rate of about 250kHz before the board's fifo started to underrun.
    Notes: the 671x boards have a 16k sample ao fifo, the 628x boards have 8k.
    The 4 to 7 microsecond times to read a register on the PXI boards seem large too; is that normal overhead for going over the mxi-4 connection?
    I wasn't doing anything else intensive on the pci bus during these tests. For what it's worth, according to pci specs the two pci boards should be able to dma their analog output fifos to half full in less than 150 microseconds.
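    As a sanity check on that last figure, here is the back-of-envelope arithmetic (a sketch only, assuming 16-bit samples and the ~132 MB/s theoretical burst rate of 32-bit/33 MHz PCI; the FIFO depths are the ones quoted in the notes above):

        #include <cstdio>

        int main()
        {
            const double pciBytesPerSec = 132e6;      // 32-bit/33 MHz PCI, theoretical burst rate
            const int bytesPerSample    = 2;          // 16-bit AO samples
            const int halfFifo671x      = 16384 / 2;  // 671x: 16k-sample AO FIFO
            const int halfFifo628x      = 8192  / 2;  // 628x: 8k-sample AO FIFO
            std::printf("671x half-FIFO: ~%.0f us\n",
                        1e6 * halfFifo671x * bytesPerSample / pciBytesPerSec);  // ~124 us
            std::printf("628x half-FIFO: ~%.0f us\n",
                        1e6 * halfFifo628x * bytesPerSample / pciBytesPerSec);  // ~62 us
            return 0;
        }

    Both come out well under the 150 us the poster expects, so the measured 2 to 19 ms load times really are orders of magnitude off the bus's capability.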

  • Feb 2009 T500, SSD performance problems

    I just bought an Intel X25-M 80GB drive (with the updated 8820 firmware) for read only database data files.  I installed an eSata ExpressCard 34 and put the drive in a Rosewill enclosure.  All of these items are rated at 3.0 Gb/s SATA speeds.  But, when I benchmarked the drive it flatlines at around 130 MB/s.  Oddly close to the SATA 1 or UDMA 6, so I looked into the device manager under "SCSI/Raid controllers" and found my SATALink controller.  It reads that the "Host Link Speed" is 3.0 Gb/s, but the current transfer mode is "Ultra DMA 6".  Is there a reason my new T500 laptop isn't able to run at the full speed of the ExpressCard?  I was looking forward to those 250 MB/s reads I've been reading about.

    anti00Zero, thank you for your replies.  I have figured out what is going on with my situation.  It turns out that the SI processor chip on the eSATA ExpressCard has a maximum throughput of 130 MB/s.  There is one other option that would provide a throughput of 200 MB/s, but it costs $299.  It's not likely they will sell many of those cards, when one that provides 130 MB/s costs only $45.  In any event, I'll probably end up using the Intel X25 on my desktop because I don't want to have a $400 SSD go to waste at 130 MB/s.  The Lenovo T500 is working great.

  • Poor reflective memory read performance

    I'm having some trouble with the GE 5565 PIORC reflective memory set of VIs for use with our reflective memory setup. I need to copy a pretty sizable chunk of memory out of reflective memory and into a DLL I've written, but the performance on the "GE 5565 PIORC:GE5565 Read (Cluster).vi" is not where I need it. I need to copy somewhere in the realm of 12k out of reflective memory at a high frequency, but the call to read those 12k takes longer than the period I need to gather the data at. I apologize in advance for the image-heavy post, but I think it's worth it to show what I've got.
    Here's a picture of my setup to benchmark the Read call runtime: 
    Here's a graph of runtimes of that Read call, in microseconds:
    I need it to run in way less than 16 ms, which doesn't seem unreasonable to me for only 12k. I did fool around with the DMA version of the Read (which I don't really understand, and the documentation is nonexistent as far as I can tell). Here's my test setup:
    And here's a chart similar to the one above:
    Way better, though I have no idea if it even does what I think it should. So, I have a few questions. First, is there any way to get better performance out of that Read VI? Some other library I should be using, some setting I should be setting, some other way I should be benchmarking its performance, maybe even some way of doing this with another DLL? Second, if the Read can't achieve the performance I need, what's up with the DMA version, and how would I use it properly? Is the performance advantage that it appears to be giving real, or just an artifact of some mistaken way in which I'm using it? Thanks!

    Hi dgoes
    Windows is not a deterministic operating system, so the loop cycle time that you are getting might be the best the system can achieve. To benchmark your code you can also use the input node and the output node of the Timed Loop. Check the following link.
    Timed Loop
    It would also be good to know whether you are working on a real-time operating system or on Windows, because Windows is not deterministic and the Timed Loop might not work as expected. Here are a couple of links with information about this.
    windows 1KHz time loop limitation reason
    What is a Real-Time Operating System (RTOS)?
    Which driver are you using for the GE 5565 PIORC? Is it this one?
    GE 5565 PIORC
    If it is, please notice that this driver is neither supported nor certified by National Instruments. This card is supported with NI VeriStand. Please check the information on the following links.
    Getting Started With the GE cPCI-5565PIORC Reflective Memory Module
    GE cPCI-5565PIORC
    I hope this information answers your questions.
    Regards
    Esteban R.

  • DAQCard-6533 performance (PCMCIA)

    The on-line DAQCard-6533 PCMCIA card ad states "up to 400 kbytes/s (pattern I/O); up to 740 kbytes/s (handshaking I/O)". I'm not even getting one tenth of those speeds.
    If I run the card at even 25 kHz, I get error -10803 indicating an input buffer overrun (too fast). I've tried patterned I/O and handshake I/O with the same results. Increasing the buffer size makes a small difference up to about 4096, then no benefit after that.
    Has anyone ever actually used this card to acquire continuous data faster than 25 kHz? Nobody I've been able to contact at NI has any hands-on experience. Everybody throws answers at me assuming I'm using the PCI version of the card, not the PCMCIA.
    It appears that this card's internal FIFO is 8 samples deep, and it uses PIO rather than DMA, so interrupts are constantly happening. This seriously limits input speed, even on a fast notebook.
    Any help would be appreciated. I'm just about to abandon NI as a supplier for our data acq cards, but I thought I'd give this one more shot. I have sample source code and a program that demonstrates the problem.
    email: [email protected]

    My system is an IBM Thinkpad A21p, 850 MHz with 512MB RAM. I have all unnecessary services disabled, and no other applications running.
    I have attached a ZIP file that contains my example program that shows the problem, together with the relevant source code.
    Thanks,
    Ron Schaaf
    [email protected]
    Attachments:
    DAQCard6533_prob.zip ‏103 KB

  • 1.2 bios flash works fine on P35 Platinum but still having DMA/PIO trouble on XP

    After all the scare stories of the latest bios I thought I should mention that it worked like a charm for me.  I went from default 1.0D to 1.2 using the MSI Forum HQ USB flashing tool: https://forum-en.msi.com/index.php?topic=108079.0 not using any windows or floppy util.
    Once flashed I switched off and unplugged it for a few minutes; then when I turned it on I pressed the little reset button next to the battery on the motherboard. Then I cycled the power a few times till the monitor came to life and asked me to press F1 to reset the clock and so on.  Works great!
    Now the problem I was trying to fix is that I only have IDE drives, and although it boots up saying DMA mode 6, when I do something intensive such as launching Quake4 or installing SP2 it reverts to PIO mode and renders it unplayable. 
    Same problem on both XP64 and 2003 32bit.  Linux and Vista64 work great but I wanted an OS for games.  Anyone else solve this problem?

    Quote from: Del UK on 06-June-08, 01:38:01
    The controller is set to IDE and not raid???
    Go into bios
    Integrated Peripherals
    On Chip ATA Devices.
    PCI IDE Bus Master = Enabled
    On Chip Sata Controller = Enable
    RAID Mode = IDE
    I have not had any issues with drivers under:-
    XP Home SP2 & SP3
    XP Pro SP3
    Suse Linux 10.2 or 10.3 64bit
    Ubuntu 7.10 64-bit
    Also check your IDE cable, if you have issues with 80 pin, drop to 40 pin connection, there is no speed loss on optical drives.
    I am sure the guidance will help you.
    ATB
    Del
    All done.
    The IDE cable is the one that came in the motherboard box.
    All those options in the BIOS are set like yours.

  • Data Transfer between Target and Host using DMA FIFOs

    Dear NI,
    I am facing a problem while writing data into the DMA FIFO in the host environment.
    Steps I did:
    1. Invoked the FIFO in the host environment.
    2. Connected it to the VI reference.
    3. Configured the DMA depth.
    4. Started the FIFO.
    5. Initialised an array and fed data into the array using the file operations.
    6. Read the DMA FIFO in the target VI.
    In the above process I attached an indicator to the DMA FIFO while reading in the target environment, but I could not observe any activity.
    If someone has tried this, please let me know the procedure to do the same.
    I am attaching the host VI for reference.
    Attachments:
    host.vi ‏172 KB

    Hi Kalyansuman,
     Good afternoon and thanks for your post.
    I would again like to stress that you must keep your post in one place on the forums. Now let's discuss your problem!
    I am confused about what you're trying to achieve. You want to read and write data from the FPGA?
    The normal setup is to open an FPGA VI reference, do the read/write, and close the reference outside of the loop.
    If you need to do this twice, I would have two loops (but use the same reference), then merge the error clusters and use a single Close FPGA VI Reference.
    To work out why your DMA may not be working:
    1) Have you tried the FIFOs on their own (just a read, for example)?
    2) Have you taken a look at the examples in the NI Example Finder? There are two which show how to implement FIFOs.
    3) Is this a cRIO or an R-Series board?
    Any more clarification would be great. For example, do you get an error? What do you mean by no activity?
    Kind Regards
    James Hillman
    Applications Engineer 2008 to 2009 National Instruments UK & Ireland
    Loughborough University UK - 2006 to 2011
    Remember Kudos those who help!
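    For readers who end up on the C side rather than LabVIEW: the open, read/write, close pattern described above maps onto the FPGA Interface C API roughly as in the sketch below. This is only an illustration; the bitfile name, signature string, and FIFO index are hypothetical placeholders that the API normally generates per bitfile, so substitute the constants from your own generated header.

        #include "NiFpga.h"
        #include <stdint.h>
        #include <stdio.h>

        void writeHostToTargetFifo(const int16_t *data, size_t count)
        {
            if (NiFpga_IsError(NiFpga_Initialize()))
                return;
            NiFpga_Session session;
            NiFpga_Status status = NiFpga_Open("host.lvbitx", "SIGNATURE_PLACEHOLDER", "RIO0",
                                               NiFpga_OpenAttribute_NoRun, &session);
            if (NiFpga_IsNotError(status))
            {
                size_t emptyRemaining = 0;
                /* Write 'count' elements, waiting up to 5000 ms for FIFO space. */
                NiFpga_MergeStatus(&status,
                    NiFpga_WriteFifoI16(session, 0 /* FIFO index placeholder */,
                                        data, count, 5000, &emptyRemaining));
                printf("status %d, room for %u more elements\n",
                       (int)status, (unsigned)emptyRemaining);
                NiFpga_MergeStatus(&status, NiFpga_Close(session, 0));
            }
            NiFpga_Finalize();
        }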

  • How can I send more than one signal to a DMA FIFO?

    Hello,
    I'm trying to send more than one signal to a DMA FIFO, but I don't know how to do it. When I send one signal I don't have problems. I tried using one DMA FIFO block per signal; for example, if I have 3 signals I use 3 DMA FIFOs, but when I want to watch them in a waveform chart the signals have a delay.
    How can I send more than one signal to a DMA FIFO? And if that's not possible, how can I synchronize the 3 signals?
    The data type of the signal is FXP <16,10>
    Regards.
    Pablo
    Solved!
    Go to Solution.
    Attachments:
    Block Diagram.jpg ‏81 KB

    Not quite.  You need to use the Integer To Fixed Point Cast to change from the integers to your FXP numbers.  You can then build them into a cluster to write to the Waveform Chart.
    Attachments:
    Combine and Decode FXP.png ‏14 KB
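    As a rough illustration of what that decode step does (not LabVIEW, just a plain C++ sketch of the data layout): if the FPGA writes the three signals into a single FIFO interleaved as signal0, signal1, signal2, signal0, ..., the host reads blocks whose size is a multiple of 3 and splits them back out. FXP <16,10> carries 6 fractional bits, so the raw 16-bit integers are rescaled by 2^-6; the element type and function name here are assumptions.

        #include <cstdint>
        #include <vector>

        void decodeInterleaved(const std::vector<int16_t>& raw,
                              std::vector<double> chan[3])
        {
            const double scale = 1.0 / 64.0;              // 2^-6 for FXP <16,10>
            for (std::size_t i = 0; i + 2 < raw.size(); i += 3)
            {
                chan[0].push_back(raw[i + 0] * scale);    // signal 0
                chan[1].push_back(raw[i + 1] * scale);    // signal 1
                chan[2].push_back(raw[i + 2] * scale);    // signal 2
            }
        }

    Because all three values share one FIFO and one write per loop iteration, they stay aligned sample-for-sample, which avoids the delay seen when each signal has its own FIFO.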

  • How to set a DMA transfer type for PXIe-6536 in LabWindows/CVI?

    I have a PXI chassis PXIe-1078 with a controller PXIe-8115 running under Windows 7. The digital output board is PXIe-6536.
    I use the function DAQmxSetChanAttribute to set the property DAQmx_DO_DataXferMech to the value DAQmx_Val_DMA, since I want to use direct memory access data transfer. This worked well with a PCI-6534 board using the same LabWindows/CVI code before migrating it to the PXIe system.
    Unfortunately, running this code on the PXIe system reports a DAQmx error -200452: "Specified property is not supported by the device or is not applicable to the task".
    The task is created in the following simple way (the board name in MAX is 'Dev1'):
       DAQmxCreateTask ("digTask", &digitalTask);
       DAQmxCreateDOChan (digitalTask, "Dev1/port0:3", "DIG_CHANNELS", DAQmx_Val_ChanForAllLines);
       DAQmxSetChanAttribute (digitalTask, "", DAQmx_DO_DataXferMech, DAQmx_Val_DMA, 15);
    How can I solve this problem? How is it possible to choose between different transfer types?
    Thank you in advance for any hint!

    Hi CavityQED,
    The PCI-6534 is a "Digital I/O" board while the PXIe-6536 is a "High Speed Digital I/O" board; that's why they don't have the same properties.
    By the way, you can use DMA transfer with this method:
    http://zone.ni.com/reference/en-XX/help/370520J-01/hsdio/direct_dma/
    Let me know if it helps you.
    Regards.
    Mathieu_T
    Certified LabVIEW Developer
    Certified TestStand Developer
    National Instruments France
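    A minimal LabWindows/CVI-style sketch of how the original code could treat the transfer-mechanism property as optional. The helper name and the fallback behaviour are my assumptions; the premise, which should be verified for your device, is that the PXIe-6536 already streams via DMA for hardware-timed operations, so error -200452 can simply be tolerated.

        #include <NIDAQmx.h>
        #include <stdio.h>

        int32 configureDigitalOutput(TaskHandle *taskOut)
        {
            TaskHandle task = 0;
            int32 err = DAQmxCreateTask("digTask", &task);
            if (err < 0) return err;
            err = DAQmxCreateDOChan(task, "Dev1/port0:3", "DIG_CHANNELS",
                                    DAQmx_Val_ChanForAllLines);
            if (err < 0) { DAQmxClearTask(task); return err; }

            /* Ask for DMA explicitly, but tolerate -200452 ("property not
               supported") on devices that use DMA by default anyway. */
            int32 xferErr = DAQmxSetDODataXferMech(task, "", DAQmx_Val_DMA);
            if (xferErr < 0 && xferErr != -200452)
            {
                char msg[2048];
                DAQmxGetExtendedErrorInfo(msg, sizeof(msg));
                printf("DataXferMech: %s\n", msg);
            }
            *taskOut = task;
            return 0;
        }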

  • Using DMA to update values in an array

    Good afternoon, 
    I've been running into a few problems with my vi, and I'd like to give a bit of background information before I ask my questions. I'm using LabVIEW 8.5 and the NI USB-6009 DAQ. I want to use an encoder to control values that are being written to a file for DMA. I found that I couldn't use the encoder as an external clock since the 6009 DAQ doesn't have this capability. So I've been trying to go a different route by using a case structure with a true/false statement to allow me to input values from a simulated signal into a write vi (each time the encoder pulses, a value from the simulated signal should be input into the write data storage vi). From there, I want to then read those values and put them into an array. So the plan is to have a 10 element array that reads in values from the storage file (just like a FIFO in FPGA). As I continue reading values, the oldest value of the 10 element array will leave the array and be replaced by a new value. 
    Now here come the questions. I'm using the Write/Read data storage vi's and I keep getting errors. First, if I want to use DMA to read these values, am I using the correct vi's, or is there a different route? Also, once I read these values into the array, how would I be able to constantly update the array in descending order from beginning to end of the stored values? 
    I'm posting my most recent vi that I've been editing. Also, in advance, thank you!
    -tjm
    Attachments:
    Using Encoder as an analog input 10_9_13 - Copy - Copy.vi ‏390 KB

    Thank you for your reply.
    First, I'm ultimately trying to use the array as input into a visual display for a meter (to display the mean of the array). I've been successful (in the past) with inputting into an array by not using DMA and using the Sort 1D Array point by point vi. The only problem is the timing mechanism with the encoder, and you are correct with stating that there is uncertainty with the encoder when trying to retrieve values from the input signal (sine wave). I thought about going down the route of using the encoder as a counter (since I am able to see the counter increase by a single digit with each pulse).
    My question would then be how to control the case structure with a counter input? 
    I'm posting both my setup with the Sort 1D Array Point by Point and the simple vi for the encoder as a counter. My idea is to try to merge the two and have the counter control the case structure. 
    Is there a way I can do this? 
    Attachments:
    Sort 1D Array Pt by Pt.vi ‏189 KB
    Using Encoder as a counter input 10_9_13.vi ‏123 KB
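    Outside of LabVIEW terms, the 10-element structure described above is just a circular buffer with a running mean. A minimal sketch (plain C++, names illustrative) of the "oldest value leaves, new value enters, mean goes to the meter" behaviour:

        #include <cstddef>

        const std::size_t kDepth = 10;          // 10-element rolling window
        double ring[kDepth] = {0};
        std::size_t next = 0, count = 0;

        double pushAndMean(double newSample)
        {
            ring[next] = newSample;             // overwrite the oldest element
            next = (next + 1) % kDepth;
            if (count < kDepth) ++count;
            double sum = 0;
            for (std::size_t i = 0; i < count; ++i) sum += ring[i];
            return sum / count;                 // mean of the samples held so far
        }

    Each encoder pulse would call pushAndMean once with the latest reading and send the result to the meter display.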

  • M-Series Buffered Event Counting with DMA -- gating problem

    Hi --
    I am implementing DMA-based buffered event counting on a PCIe-6259 board.  I use G0_Out as the gate for G1, which counts events on a PFI pin.   So by setting the speed of G0, I get an event count (either cumulative or non-cumulative) on a periodic basis, which is directly DMA'd to my buffer, and synchronized with other i/o operations.
    This is working well right now, except for one problem: I only get data if there is at least one source edge between gates, i.e. if there are no edges, nothing gets pumped to the dma buffer.
    I am guessing that a stale data error is somehow choking off the DMA transfer from the counter.   Is that possible?
    Is there some magic that I need to do to avoid this? For this application, especially if I am counting cumulatively, I don't care about a missing edge, but I do care if the dma transfers get out of phase with the rest of my timing.
    Thanks in advance for any help!
    --spg
    Here is a snippet of the code that sets up the event counting on G1, partly based on gpctex6.cpp:
    const int sDMASelect[] = {1,2,4,8,3,5};

    // source:  pfi, or -1 for 20Khz clock
    void eventTimerSetup(tMSeries *board, tTIO *tio, int dmaChannel, bool cumulative, int source)
    {
        int sourceSelect = (source==-1) ? 0 : (source+1);
        //MSeries.CTR.Source
        tio->G1_Input_Select.setG1_Source_Select(sourceSelect); // (pfi+1) or 20Khz=0
        tio->G1_Input_Select.setG1_Source_Polarity(0); //rising=0
        tio->G1_Input_Select.setG1_OR_Gate(0);
        tio->G1_Input_Select.flush();
        //MSeries.CTR.Gate
        tio->G1_Input_Select.setG1_Gate_Select(20); //the G_OUT signal from other clock=20
        tio->G1_Input_Select.setG1_Output_Polarity(0); //active high=0
        tio->G1_Input_Select.flush();
        //MSeries.CTR.IncrementRegisters
        tio->G1_AutoIncrement.writeRegister(0);
        //MSeries.CTR.InitialCountRegisters
        tio->G1_Mode.writeG1_Load_Source_Select(tTIO::tG1_Mode::kG1_Load_Source_SelectLoad_A);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.writeG1_Load(1);
        tio->G1_Load_B.writeRegister(0);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.setG1_Bank_Switch_Enable(tTIO::tG1_Command::kG1_Bank_Switch_EnableBank_X);
        tio->G1_Command.setG1_Bank_Switch_Mode(tTIO::tG1_Command::kG1_Bank_Switch_ModeGate);
        tio->G1_Command.flush();
        //MSeries.CTR.ApplicationRegisters
        tio->G1_Input_Select.setG1_Gate_Select_Load_Source(0);
        tio->G1_Mode.setG1_Reload_Source_Switching(tTIO::tG1_Mode::kG1_Reload_Source_SwitchingAlternate);
        tio->G1_Mode.setG1_Loading_On_Gate(cumulative ? tTIO::tG1_Mode::kG1_Loading_On_GateNo_Reload : tTIO::tG1_Mode::kG1_Loading_On_GateReload_On_Stop_Gate);
        tio->G1_Mode.setG1_Loading_On_TC(tTIO::tG1_Mode::kG1_Loading_On_TCRollover_On_TC);
        tio->G1_Mode.setG1_Gating_Mode(tTIO::tG1_Mode::kG1_Gating_ModeEdge_Gating_Active_High);
        tio->G1_Mode.setG1_Gate_On_Both_Edges(tTIO::tG1_Mode::kG1_Gate_On_Both_EdgesBoth_Edges_Disabled);
        tio->G1_Mode.setG1_Trigger_Mode_For_Edge_Gate(tTIO::tG1_Mode::kG1_Trigger_Mode_For_Edge_GateGate_Does_Not_Stop);
        tio->G1_Mode.setG1_Stop_Mode(tTIO::tG1_Mode::kG1_Stop_ModeStop_On_Gate);
        tio->G1_Mode.setG1_Counting_Once(tTIO::tG1_Mode::kG1_Counting_OnceNo_HW_Disarm);
        tio->G1_Second_Gate.setG1_Second_Gate_Gating_Mode(0);
        tio->G1_Input_Select.flush();
        tio->G1_Mode.flush();
        tio->G1_Second_Gate.flush();
        //MSeries.CTR.UpDown.Registers
        tio->G1_Command.writeG1_Up_Down(tTIO::tG1_Command::kG1_Up_DownSoftware_Up); //kG1_Up_DownSoftware_Down
        //MSeries.CTR.OutputRegisters
        tio->G1_Mode.writeG1_Output_Mode(tTIO::tG1_Mode::kG1_Output_ModePulse);
        tio->G1_Input_Select.writeG1_Output_Polarity(0);
        //MSeries.CTR.BufferEnable
        board->G1_DMA_Config.writeG1_DMA_Reset(1);
        board->G1_DMA_Config.setG1_DMA_Write(0);
        board->G1_DMA_Config.setG1_DMA_Int_Enable(0);
        board->G1_DMA_Config.setG1_DMA_Enable(1);
        board->G1_DMA_Config.flush();
        tio->G1_Counting_Mode.setG1_Encoder_Counting_Mode(0);
        tio->G1_Counting_Mode.setG1_Alternate_Synchronization(0);
        tio->G1_Counting_Mode.flush();
        //MSeries.CTR.EnableOutput
        //board->Analog_Trigger_Etc.setGPFO_1_Output_Enable(tMSeries::tAnalog_Trigger_Etc::kGPFO_1_Output_EnableOutput);
        //board->Analog_Trigger_Etc.setGPFO_1_Output_Select(tMSeries::tAnalog_Trigger_Etc::kGPFO_1_Output_SelectG_OUT);
        //board->Analog_Trigger_Etc.flush();
        //MSeries.CTR.StartTriggerRegisters
        tio->G1_MSeries_Counting_Mode.writeG1_MSeries_HW_Arm_Enable(0);
        board->G0_G1_Select.writeG1_DMA_Select(sDMASelect[dmaChannel]);
        tio->G1_Command.writeG1_Arm(1); // arm it
    }
    Scott Gillespie
    http://www.appliedbrain.com
    Solved!
    Go to Solution.

    Okay, I have it working now.  In addition to your suggested changes, I had to remove the following line:
    tio->G1_MSeries_Counting_Mode.writeG1_MSeries_HW_Arm_Enable(0);
    It appears that writing something to MSeries_Counting_Mode causes that register to supersede the Counting_Mode register.  Is that right?
    So the code that now works for me is listed below.
    thanks Tom!
    -spg
    void eventCounterSetup(tMSeries *board, tTIO *tio, int dmaChannel, bool cumulative, int source) // pfi, or -1 for 20Khz clock
    {
        int sourceSelect = (source==-1) ? 0 : (source+1);
        //MSeries.CTR.Source
        tio->G1_Input_Select.setG1_Source_Select(sourceSelect); // (pfi+1) or 20Khz=0
        tio->G1_Input_Select.setG1_Source_Polarity(0); //rising=0
        tio->G1_Input_Select.setG1_OR_Gate(0);
        tio->G1_Input_Select.flush();
        //MSeries.CTR.Gate
        tio->G1_Input_Select.setG1_Gate_Select(20); //the G_OUT signal from other clock=20
        tio->G1_Input_Select.setG1_Output_Polarity(0); //active high=0
        tio->G1_Input_Select.flush();
        //MSeries.CTR.IncrementRegisters
        tio->G1_AutoIncrement.writeRegister(0);
        //MSeries.CTR.InitialCountRegisters
        tio->G1_Mode.writeG1_Load_Source_Select(tTIO::tG1_Mode::kG1_Load_Source_SelectLoad_A);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.writeG1_Load(1);
        tio->G1_Load_B.writeRegister(0);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.setG1_Bank_Switch_Enable(tTIO::tG1_Command::kG1_Bank_Switch_EnableBank_X);
        tio->G1_Command.setG1_Bank_Switch_Mode(tTIO::tG1_Command::kG1_Bank_Switch_ModeGate);
        tio->G1_Command.flush();
        //MSeries.CTR.ApplicationRegisters
        tio->G1_Input_Select.setG1_Gate_Select_Load_Source(0);
        tio->G1_Mode.setG1_Reload_Source_Switching(tTIO::tG1_Mode::kG1_Reload_Source_SwitchingAlternate);
        tio->G1_Mode.setG1_Loading_On_Gate(cumulative ? tTIO::tG1_Mode::kG1_Loading_On_GateNo_Reload : tTIO::tG1_Mode::kG1_Loading_On_GateReload_On_Stop_Gate);
        tio->G1_Mode.setG1_Loading_On_TC(tTIO::tG1_Mode::kG1_Loading_On_TCRollover_On_TC);
        tio->G1_Mode.setG1_Gating_Mode(tTIO::tG1_Mode::kG1_Gating_ModeEdge_Gating_Active_High);
        tio->G1_Mode.setG1_Gate_On_Both_Edges(tTIO::tG1_Mode::kG1_Gate_On_Both_EdgesBoth_Edges_Disabled);
        tio->G1_Mode.setG1_Trigger_Mode_For_Edge_Gate(tTIO::tG1_Mode::kG1_Trigger_Mode_For_Edge_GateGate_Does_Not_Stop);
        tio->G1_Mode.setG1_Stop_Mode(tTIO::tG1_Mode::kG1_Stop_ModeStop_On_Gate);
        tio->G1_Mode.setG1_Counting_Once(tTIO::tG1_Mode::kG1_Counting_OnceNo_HW_Disarm);
        tio->G1_Second_Gate.setG1_Second_Gate_Gating_Mode(0);
        tio->G1_Input_Select.flush();
        tio->G1_Mode.flush();
        tio->G1_Second_Gate.flush();
        //MSeries.CTR.UpDown.Registers
        tio->G1_Command.writeG1_Up_Down(tTIO::tG1_Command::kG1_Up_DownSoftware_Up); //kG1_Up_DownSoftware_Down
        //MSeries.CTR.OutputRegisters
        tio->G1_Mode.writeG1_Output_Mode(tTIO::tG1_Mode::kG1_Output_ModePulse);
        tio->G1_Input_Select.writeG1_Output_Polarity(0);
        //MSeries.CTR.BufferEnable
        board->G1_DMA_Config.writeG1_DMA_Reset(1);
        board->G1_DMA_Config.setG1_DMA_Write(0);
        board->G1_DMA_Config.setG1_DMA_Int_Enable(0);
        board->G1_DMA_Config.setG1_DMA_Enable(1);
        board->G1_DMA_Config.flush();
        // from Tom:
        // The "magic" you need is referred to as synchronous counting mode (or Duplicate Count Prevention in NI-DAQmx).
        // Try setting G1_Encoder_Counting_Mode to 6 (synchronous source mode) and G1_Alternate_Synchronization to 1 (enabled).
        tio->G1_Counting_Mode.setG1_Encoder_Counting_Mode(6); // 0
        tio->G1_Counting_Mode.setG1_Alternate_Synchronization(1); // 0
        tio->G1_Counting_Mode.flush();
        board->G0_G1_Select.writeG1_DMA_Select(sDMASelect[dmaChannel]);
        tio->G1_Command.writeG1_Arm(1); // arm it
    }
    scott gillespie
    applied brain, inc.

  • MHDDK with Visa: event when DMA transfer complete?

    We are using devices like PXIe-6363, PCIe-6321 depending on requirements and these are dealt with using the MHDDK with Visa backend on Windows (I hope that is the correct terminology).
    All I/O, both analog and digital simultaneously, is done using DMA transfers using the standard MHDDK classes for doing so. This all works without problems.
    One last improvement is I'd like to be notified using a callback/event system when a new block of data is ready: currently I'm just polling the input DMA's available number of bytes in a continuous loop with no sleep() in order to be able to process data as soon as it's ready. Works, but keeps one cpu busy the whole time.
    I didn't see any way to use callbacks using the MHDDK which makes sense since it's purely based on register I/O if I understand it correctly.
    However I noticed VISA has viInstallHandler, viEnableEvent and the like. I went through some documentation on the event types but admit this is over my head and I have no idea where to start.
    Could anyone point me in the right direction?
    The first question is whether what I'm asking for is even possible. Second, I think I just have to call viInstallHandler for the correct event type, then figure out whether the event is for an input DMA transfer from PXI? Using viGetAttribute somehow?
    Thanks in advance!

    You'll have better luck posting to the ddk board - http://forums.ni.com/t5/Driver-Development-Kit-DDK/bd-p/90
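    For reference, the VISA event machinery the poster mentions is normally wired up as in the sketch below (an installed handler plus viEnableEvent on VI_EVENT_PXI_INTR). Whether a DMA-complete condition programmed through the MHDDK actually surfaces as this event is exactly the open question in this thread, so treat this as the shape of the API rather than a verified answer.

        #include <visa.h>
        #include <stdio.h>

        static ViStatus _VI_FUNCH onPxiInterrupt(ViSession vi, ViEventType eventType,
                                                 ViEvent event, ViAddr userHandle)
        {
            /* Signal the processing thread here instead of polling the DMA
               byte count in a tight loop. */
            printf("PXI interrupt event received\n");
            return VI_SUCCESS;
        }

        ViStatus enablePxiInterruptEvent(ViSession dev)
        {
            ViStatus st = viInstallHandler(dev, VI_EVENT_PXI_INTR, onPxiInterrupt, VI_NULL);
            if (st < VI_SUCCESS) return st;
            return viEnableEvent(dev, VI_EVENT_PXI_INTR, VI_HNDLR, VI_NULL);
        }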

  • How to structure the DMA buffer for the PXIe-6341 DAQ card for analog output with different frequencies on each channel

    I'm using the MHDDK for analog out/in with the PXIe 6341 DAQ card.
    The examples, e.g. aoex5, show a single Timer  (outTimerHelper::loadUI method), but the example shows DMA data loaded with the same vector size.
    There is a comment in the outTimerHelper::programUpdateCount call which implies that different buffer sizes per channel can be used.
       (the comment is: Switching between different buffer sizes will not be used)
    Does anyone know what the format of the DMA buffer should be for data for multiple channels with different frequencies ?
    For example, say we want a0 with a 1Khz Sine wave and a1 with a 1.5Khz sine wave.  What does the DMA buffer look like ?
    With the same frequency for each channel, the data is interleaved, e.g.  (ao0#0, ao1#0; ao0#1, ao1#1, ...), but when the frequencies for each channel is different, what does the buffer look like ?

    Hello Kenstern,
    The data is always interleaved because each card only has a single timing engine for each subsystem.
    For AO you must specify the number of samples that AO will output. You also specify the number of channels. Because there is only one timing engine for AO, each AO channel will get updated at the same tick of the update clock. The data will be arranged interleaved exactly as the example shows because each AO channel needs data to output at each tick of the update clock. The data itself can change based on the frequency you want to output.
    kenstern wrote:
    For example, say we want a0 with a 1Khz Sine wave and a1 with a 1.5Khz sine wave.  What does the DMA buffer look like ?
    With the same frequency for each channel, the data is interleaved, e.g.  (ao0#0, ao1#0; ao0#1, ao1#1, ...), but when the frequencies for each channel is different, what does the buffer look like ?
    In your example, you need to come up with an update rate that works for both waveforms (1 KHz and 1.5 KHz sine waves). To get a good representation of a sine wave, you need to update more than 10x as fast as your fastest frequency...I would recommend 100x if possible.
    Update Frequency: 150 KHz
    Channels: 2
    Then you create buffers that include full cycles of each waveform you want to output based on the update frequency. These buffers must also be the same size.
    Buffer 1: Contains data for the 1 KHz sine wave, 300 points, 2 sine wave cycles
    Buffer 2: Contains data for the 1.5 KHz sine wave, 300 points, 3 sine wave cycles
    You then interleave them as before. When the data is run through the DAC, the AO channels output different sine waves even though they are updating at the same rate.
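    A small sketch of what that interleaved buffer looks like in code (plain C++; scaling to DAC codes and the actual MHDDK calls are omitted): 300 points per channel at a 150 kHz update rate gives 2 cycles of the 1 kHz wave and 3 cycles of the 1.5 kHz wave, written alternately ao0, ao1, ao0, ao1, ...

        #include <cmath>
        #include <vector>

        int main()
        {
            const double pi         = 3.14159265358979323846;
            const double updateRate = 150000.0;          // update clock shared by both channels
            const int    points     = 300;               // 2 ms of data per channel
            std::vector<double> interleaved(2 * points); // ao0#0, ao1#0, ao0#1, ao1#1, ...
            for (int n = 0; n < points; ++n)
            {
                const double t = n / updateRate;
                interleaved[2 * n + 0] = std::sin(2.0 * pi * 1000.0 * t);  // ao0: 1 kHz (2 cycles)
                interleaved[2 * n + 1] = std::sin(2.0 * pi * 1500.0 * t);  // ao1: 1.5 kHz (3 cycles)
            }
            return 0;
        }

    Because both waveforms complete whole cycles in the same 300-point buffer, the buffer can be regenerated (looped) without a discontinuity at the wrap point.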

  • Dma channel problems

    Hello,
    I really need help on this one. Using a PCI-6035E I am trying to output an analog signal (actually 2 waveforms interleaved together) on 2 channels while simultaneously using the two GPCTRs to measure two separate signals' pulse widths. The VI runs perfectly on some occasions, while on others I get "no DMA channel available for use". Whenever this message appears and I try to exit LabVIEW, the nipalk.sys driver messes up and I get a blue screen crash that says "process_has_locked_pages" (I traced the locked pages to this driver). I've tested this many times and it appears to me to be totally random. I've used the Set DAQ Device Information VI to set the two counters to use interrupts, but I need to use the one DMA channel for
    the analog output. My best guess is that there is some kind of resource conflict with something else in the PC, but I can't figure out how to change DMA assignments(I'm using Windows XP). The NI-DAQ Measurement and Automation Explorer tells me that the card is using DMA 0, IRQ 17. I've tried uninstalling all the NI-DAQ drivers, reinstalling and updating. Any help anybody could give me would be greatly appreciated.
    Thanks,
    Nick
    Attachments:
    aug7.vi ‏374 KB

    Hello;
    The PCI bus has 3 DMA channels that are shared between all devices that do DMA data transfers. There is nothing you can do to change that, since that is a PCI bus feature.
    The best way to go about it is to remove other devices that use DMA, such as network cards, for instance. That might free up more DMA access time for your DAQ device to execute its data transfers.
    Hope this helps.
    Filipe A.
    Applications Engineer
    National Instruments

  • X121e and Windows XP: only PIO mode (not DMA)

    I installed Windows XP on a ThinkPad X121e. After updating all drivers from Lenovo I found out why the whole system was very slow: the internal hard disk runs only in PIO mode and reaches only 3 MB/s, which is far too slow to work with. I have not had this issue for years; normally a hard disk should run in DMA mode without any problems.
    I tried several things:
    - uninstalling the driver
    - reinstalling the chipset driver
    - changing the hard disk
    - resetting the storage controller with a script (dmareset.vbs) found on the net
    but I did not find any solution. The PIO mode is still on.
    I installed the XP system in SATA legacy (compatibility) mode. I could not try installing Windows XP in AHCI mode because I did not find any AHCI driver for this chipset (HM65) at Intel (an F6 driver exists only for the Intel 5 Series chipset).
    What could be the reason that the DMA mode is not recognized by XP? Is it an issue for Lenovo (e.g. a BIOS update is necessary) or does it have to do with Intel (a problem with the chipset driver)?
    Thanks for any help,
    Thorsten
    Solved!
    Go to Solution.

    Hi thoral,
    I don't know if it will fix the PIO issue, but there is an AHCI driver for your machine on the Lenovo support site:
    Intel SATA Controller AHCI Driver
    Z.
