S series DMA

Hi,
I'm trying to use a PCI-6143 card under QNX. I downloaded the DDK files; the PCI-6143 driver implements a polled read, but I want to use DMA.
For this purpose, I added the following function to the shipping polled-read example. When I debug, every function returns a kNoError status, but when I try to read I see no data.
Please help: what's wrong?
    tDMAError status = kNoError;

    // Specify DMA and user buffer sizes
    const u32 dmaSizeInSamples  = 100;
    const u32 dmaSizeInBytes    = dmaSizeInSamples * sizeof(i32);
    const u32 userSizeInSamples = 10;
    const u32 userSizeInBytes   = userSizeInSamples * sizeof(i32);

    // DMA objects
    tAddressSpace bar0;
    tMITE       *mite;
    tDMAChannel *dma;

    bar0 = bus->createAddressSpace(kPCI_BAR0);
    mite = new tMITE(bar0);
    mite->setAddressOffset(0x600);
    dma = new tDMAChannel(bus, mite);

    // Configure DMA on the device
    board->ScarabDMASelect.set(tSSeries::tScarabDMASelect::kChannel1);
    board->ScarabDMASelect.flush();

    // Configure and start the DMA channel
    status = dma->config(0, tDMAChannel::kRing, tDMAChannel::kIn, dmaSizeInBytes, tDMAChannel::k32bit);
    if (status != kNoError)
        printf("Error: dma configuration (%d)\n", status);

    status = dma->start();
    if (kNoError != status)
    {
        printf("Error: dma start (%d)\n", status);
    }
    else
    {
        // No error - arm and start the AI engine
        aiArm(theSTC);
        aiStart(theSTC); // start acquisition
    }
    // DMA Read
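For context, the part the snippet stops short of is the polling read that drains the ring buffer. The MHDDK's tDMAChannel handles this internally, but the underlying ring bookkeeping looks roughly like this self-contained sketch (Ring, bytesAvailable, and ringRead are illustrative names, not DDK calls):

```cpp
#include <algorithm>
#include <cstring>
#include <vector>

// Hypothetical ring state: the DMA engine advances writeIdx; the host owns readIdx.
struct Ring {
    std::vector<unsigned char> buf;   // the DMA ring buffer
    size_t readIdx  = 0;              // next byte the host will consume
    size_t writeIdx = 0;              // next byte the device will fill
};

// Bytes available to the host, accounting for wraparound.
size_t bytesAvailable(const Ring &r) {
    if (r.writeIdx >= r.readIdx) return r.writeIdx - r.readIdx;
    return r.buf.size() - r.readIdx + r.writeIdx;
}

// Copy up to n bytes out of the ring; returns bytes actually copied.
size_t ringRead(Ring &r, unsigned char *dst, size_t n) {
    size_t avail = bytesAvailable(r);
    if (n > avail) n = avail;
    size_t first = std::min(n, r.buf.size() - r.readIdx); // run before the wrap
    std::memcpy(dst, &r.buf[r.readIdx], first);
    std::memcpy(dst + first, &r.buf[0], n - first);        // wrapped remainder
    r.readIdx = (r.readIdx + n) % r.buf.size();
    return n;
}
```

If no data ever shows up here even though every call returned kNoError, the device-side DMA channel selection is the first thing to suspect, which is what the reply below addresses.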

Hi eesen-
The ScarabDMASelect() bitfield is used to determine which DMA channel to use while downloading the large scarab image on scarab-based boards.  It does not apply to AI operations.  The AIAO_select bitfield is indeed the correct entry to indicate the AI DMA channel in use.  Possible values for AIAO_select are as follows (from the E Series RLP manual):
AI AO Select Register
The AI AO Select Register contains 8 bits that control the logical DMA selection for the
analog input and analog output resources. The contents of this register are cleared upon power
up and after a reset condition.
<...>
3–0 Input <D..A> Analog Input Logical Channel D through A—These four
bits select the logical channels to be used by the analog
input. You can only set one of these bits at a time.
In order to achieve a circular buffered DMA operation you should use Ring mode; it is the easiest to work with and works well with the MHDDK DMA library.  I don't have an example specific to the 6143, but the M Series example should be quite similar in terms of the steps to setup buffers and the actual access into the MHDDK DMA library.  Check out aiex3 from the M Series DDK as an example.
Hopefully this helps
Tom W
National Instruments

Similar Messages

  • M-Series DMA, Scatter-Gather List

    I have DMA working, using an M-series board and a contiguous memory buffer.   However, getting a successful allocation of a very large chunk of contiguous memory is difficult, or impossible depending on the size of the chunk.  
    So, the next task is to implement SGL (scatter-gather list) DMA.  Is there some sample code, or even just a little more information on the MITE DMA registers that might shed light on this?  
    thank you!
    --spg 
    scott gillespie
    applied brain, inc.

    Hey Scott,
    How are you today?  Thanks for posting!  Since you are doing your own driver development, hopefully someone in the DDK group will answer your question.  But I wanted to post to the rest of the community about a bit of info on our M-series devices.
    Our M-series devices have the STC-2 chip.  One of the many features of the STC-2 chip is that it includes 6 DMA channels, each with its own dedicated scatter-gather DMA controller.  You can read more about the STC-2 chip in the following DevZone article.
    M Series Frequently Asked Questions
    Here is a bit of background info about scatter-gather DMA transfer.
    What is scatter-gather DMA (Direct Memory Access) ?
    Rod T.

  • X-series dma: hang in tCHInChDMAChannelController::requestStop

    hi --
    i am getting an occasional hang inside the DDK function tCHInChDMAChannelController::requestStop.  
    basically, the function sets stop in the DMA Channel_Operation_Register, then waits for stop, last link, or error bits to be set in the Channel_Status_Register.
    what could cause this to be stuck in an infinite loop (channel status is an unchanging 0x10004000)?
    in the particular case i was looking at today, i was trying to stop the dma by calling nNISTC3::tCHInChDMAChannel::stop() after detecting an overrun error (from AI_Timer.Status_1_Register.getOverrun_St()).
    and, since this condition is possible, what is a valid way to detect it and bail out of this routine (or avoid it completely) without resorting to evil timeouts?
    thanks,
    --spg
    scott gillespie
    applied brain, inc.
    Solved!
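One way to avoid both the infinite loop and an arbitrary "evil" timeout is to bound the wait by a retry count sized from the worst-case link completion time, and report failure instead of spinning forever. A minimal sketch with a simulated status source; readStatus, waitForStop, and the bit values are stand-ins, not DDK names:

```cpp
#include <functional>

// Stand-in status bits (illustrative values, not the real register layout).
enum : unsigned {
    kStopped  = 0x1,
    kLastLink = 0x2,
    kError    = 0x4,
};

// Poll a status source until stop / last-link / error is seen, or give up
// after maxPolls iterations. Returns true if the channel reached a stop state.
bool waitForStop(const std::function<unsigned()> &readStatus, int maxPolls) {
    for (int i = 0; i < maxPolls; ++i) {
        unsigned s = readStatus();
        if (s == 0xffffffffu) return false;            // device wedged / bus gone
        if (s & (kStopped | kLastLink | kError)) return true;
    }
    return false;                                       // bounded: never hangs
}
```

The all-ones read is worth special-casing: a register that returns 0xffffffff forever will otherwise satisfy every bit test and mask the real failure.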

    steven t --
    i am really baffled now.
    i modified aiex3 to do a single large acquisition (rather than continuous mode).
    any size transfer larger than about 300,000 samples exhibits this behavior.  that is, the last 1 to 10K samples of the transfer do not make it into the buffer, even after the stream transfer count is reported complete.
    it is not consistent at which point the data stops -- it is generally somewhere in the last or second-to-last DMA chunky link.  i do my own sgl construction, and of course that is the first suspect for some type of DMA problem -- but i have been over it with a fine-tooth comb and so far i can't find anything wrong with the sgl setup, and not even a good theory to fit the facts.
    once the problem occurs, the next time i run the modified aiex3, i will usually get nothing (no data will transfer), then eventually both StreamControlStatusRegister and AI InTimer Status_1_Register start returning 0xffffffff (that doesn't look good).  this is when the stop() routine hangs, because Status_1 keeps returning 0xffffffff no matter what.
    at this point, i have to reboot, otherwise i will never get any dma'd data again.
    i have tried this with 2 and 4-byte fifo mode, and a variety of sampling rates and sizes of transfers (up to 10 meg) -- regardless, the problem still occurs.
    below is an example of output during the initial run (modified to display data across the sampled range, rather than every value).  further below, output of the next run, which hung in stop:
    -- while waiting for the whole transfer to complete, the status control register toggles between 237910b0 and 237900b2. 
    -- here, the last 10,528 samples did not come through correctly
    -- i am sampling a square wave
    Testing: Speedy X-series 6363 Slot-4
    Bar0/Bar1/iBus: 0x1b1000/0x0/0x21df50
    X-Series Info -- nameCIe-6363, id=29749, adc:1 ai:32 dac:4
    Memrequest: 0000000001000000
    Starting finite 100.00-second hardware-timed analog measurement.
    Reading 500000-sample chunks from the 500000-sample DMA buffer.
    Status_1: 237910b0, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 337900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00040101
    Status_1: 619010f0, StreamControlStatusReg: 00040101
    --> dma reports all of the data is available
    --> last 10528 samples matched to index (i.e. did not transfer)
    dump
    0) -1.293030 -1.294964 -1.294964 -1.293997 -1.294319 -1.293674 -1.294319 -1.292707 -1.293030 -1.291740
    50000) -0.214794 -1.180854 -1.245968 -1.270466 -1.282070 -1.286260 -1.288517 -1.290773 -1.294319 -1.293352
    100000) 3.791936 3.792581 3.791936 3.794837 3.789035 3.788068 3.790002 3.790969 3.792259 3.790002
    150000) -1.295931 -1.294964 -1.294641 -1.294641 -1.294319 -1.293674 -1.291096 -1.291740 -1.292063 -1.292063
    200000) -1.295931 -1.293352 -1.293030 -1.293352 -1.292385 -1.292063 -1.293352 -1.295608 -1.294319 -1.293352
    250000) 3.790969 3.792259 3.788391 3.791936 3.789358 3.789680 3.788391 3.792581 3.793226 3.791614
    300000) -1.295931 -1.296575 -1.296253 -1.295286 -1.292385 -1.292707 -1.293030 3.649783 3.740684 3.769050
    350000) -1.293030 -1.293352 -1.292385 -1.292707 -1.293674 -1.295286 -1.295931 -1.294319 -1.295931 -1.294641
    400000) 3.791936 3.787101 3.783233 3.785490 3.789358 3.790002 3.788068 3.792259 3.792581 3.796772
    450000) 3.786134 3.788713 3.790969 3.793226 3.791936 3.792581 3.795160 3.793226 3.790002 3.787746
    499999) -7.838192
    Finished finite 100.00-second hardware-timed analog measurement.
    Read 500000 samples (without overwriting data) using a 500000-sample DMA buffer.
    --------- speedy -- Unload Library
    -- program loops waiting for data (0 bytes reported available)
    -- as soon as Status_1 starts returning 0xffffffff it is all over
    Testing: Speedy X-series 6363 Slot-4
    Bar0/Bar1/iBus: 0x1b1000/0x0/0x229f10
    X-Series Info -- nameCIe-6363, id=29749, adc:1 ai:32 dac:4
    Memrequest: 0000000001000000
    Starting finite 100.00-second hardware-timed analog measurement.
    Reading 500000-sample chunks from the 500000-sample DMA buffer.
    Status_1: 237910b0, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 337900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 337900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00040101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 337900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 337900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 337900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: 337900b2, StreamControlStatusReg: 00000101
    Status_1: 237900b2, StreamControlStatusReg: 00000101
    Status_1: ffffffff, StreamControlStatusReg: ffffffff  <-- aiError is set true here
    ---> hang in tCHInChDMAChannel::stop
    scott gillespie
    applied brain, inc.

  • X-series DMA -- single chunky link for finite SGL -- possible?

    ok, i guess i stumped you guys on the previous question regarding aout fifo width.  here is another one.
    can i have a ChInCh SGL program that has a single chunky link, with the done flag set?  
    i compose the SGL myself (not using the DDK chunky link classes).   I have LinkChainRing and ReuseLinkRing SGL working fine.  However, if I want to run a finite SGL with one chunky link, I hang in tCHInChDMAChannelController::start waiting for the link ready bit.
    so there appears to be an issue with starting a finite DMA SGL on a chunky link that has the Done bit set -- is that right?  Is there a way around this?
    thanks,
    --spg
    scott gillespie
    applied brain, inc.
    Solved!

    Steven T --
    Ok, thanks for verifying that.
    >> Is there a reason why you must use one page descriptor in the chunky link?
    Not necessarily, however since I am constructing my own SGL's, I do need to know exactly what I can and can't do.  So when I see behavior like this, I first want to understand if I am doing something wrong, then determine a workaround if it is a hardware limitation.
    As I am writing a driver that supports several different clients, I need to provide a generalized interface that can handle any request.  For example, I need to know that if one of my clients requests a single byte transfer, the driver has to fail gracefully (or implement the request without using DMA), and not hang :-)
    Having you verify this limitation (if it is that) is extremely useful to me, since I can now deploy the workaround (add an extra transfer for any single link chunky, use a direct write or FIFO preload for any single byte transfer) and not continue to wonder if I have missed some other essential register setting or flag.
    Thanks again, and if you do find out anything more, let me know.
    cheers,
    spg
    scott gillespie
    applied brain, inc.

  • X Series DDK: Configure Interrupt on DMA Channel's total transfer count

    Hello,
    In the DAQ-STC3 X Series DDK Reference Manual, Chapter 1: Theory of Operation, Section Interrupts, Subsection Special Considerations: Maximizing Throughput in Low-Latency Situations (p. 41), it says:
    "for X Series devices, the CHInCh can interrupt on the DMA channel’s total transfer count, which occurs once the data has been completely transferred to the host memory. The order of programming for this situation (and output operations) is as follows:
    1. Program the DMA channel’s Total_Transfer_Count_Compare_Register (CHTTCCR) with the number of Bytes in a single input/output sample.
    2. Set the DMA channel’s Notify on Total Count flag in the CHCR.
    3. Set the DMA channel’s Arm Total Count Interrupt flag in the CHOR.
    4. Start data transfer (through the DMA controller and the subsystem’s Stream Circuit).
    5. Receive total transfer count interrupt.
    6. Increase the CHTTCCR by the number of Bytes in a single input/output sample.
    7. Re-arm the total transfer count interrupt in the CHOR.
    Using the X Series DDK, I have not managed to perform this configuration.
    Can you please provide a code sample showing how?
    Thanks in advance for your support.
    Sincerely
    Bertrand
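For what it's worth, the seven steps from the manual reduce to a small piece of bookkeeping: program the compare register one sample ahead, and on each interrupt bump it by one sample and re-arm. A self-contained simulation of that logic, with all names hypothetical (this models the register protocol, it is not DDK code):

```cpp
#include <cstdint>

// Per-channel bookkeeping for "notify on total count" mode.
struct TotalCountIrq {
    uint64_t compare;       // models CHTTCCR: fires when totalBytes reaches this
    uint64_t sampleBytes;   // bytes in a single input/output sample
    bool     armed;         // models the Arm Total Count Interrupt flag in CHOR

    // Steps 1-3: seed the compare register and arm the interrupt.
    void setup(uint64_t bytesPerSample) {
        sampleBytes = bytesPerSample;
        compare     = bytesPerSample;  // first sample completes -> first interrupt
        armed       = true;
    }

    // Steps 5-7: called when hardware reports totalBytes transferred so far.
    // Returns true if this crossing generates an interrupt.
    bool onTransfer(uint64_t totalBytes) {
        if (!armed || totalBytes < compare) return false;
        compare += sampleBytes;        // step 6: advance by one sample
        armed = true;                  // step 7: re-arm in CHOR
        return true;
    }
};
```

Bertrand's "increase the LSW by X and get X samples" observation below matches this model: whatever you add to the compare register is exactly how much data arrives before the next interrupt.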

    Hello Steve,
    A few weeks ago, we developed a Linux application that configures an NI acquisition board (X Series) to send an interrupt when the FIFO count reaches a given number. At that stage we managed to prove that our board configuration was good and that the problem was due to INtime. TenAsys (the INtime developers) fixed this issue a few weeks ago.
    We have just come back from holidays, applied the modifications created by TenAsys, and now manage to get the interrupt inside INtime.
    We still have two problems.
    Reading DMA
    ===========
    From the interrupt handler, when we access the DMA channel to get the samples stored in the FIFO, we get the samples in the first interrupt handler. For the following interrupts, when accessing the DMA through the tCHInChDMAChannel structure, it reports that no bytes are available. But when we read the Channel_Total_Transfer_Count_Status_Register of the DMA channel, we see that the desired number of samples has been transferred.
    In the interrupt handler, during interrupt acknowledgement, if instead of only reading the Volatile_Interrupt_Status_Register to acknowledge the interrupt I also increase the Channel_Total_Transfer_Count_Compare_Register_LSW by a given number (X), then I get X samples to read in the following interrupt. The problem with this workaround is that the delay between two interrupts is not constant.
    It seems we have misconfigured the DMA channel, but we haven't managed to find the error.
    Two interrupts generated
    ====================
    Moreover, we always get 2 FIFO_Count interrupts, even with the conversion, sampling, and interrupt frequencies set to very low values (conversion: 1 kHz, sampling: 1 Hz, interrupt generation: 1 Hz). The delay between the two interrupts is a few nanoseconds.
    Source code
    ============
    I attach to this post the source code we use to exercise and test this configuration. There is a Visual Studio workspace that we used with INtime and a CMake configuration file that we used for our Linux tests. You can find all the information you need to build the binary in the README file.
    Thanks in advance for your help with these issues.
    Sincerely
    Bertrand Cachet
    Attachments:
    IOMonitoring.zip (355 KB)

  • M-series counter interrupts/dma

    I'm trying to get the counters on an m-series board (6281) to generate interrupts, so I can do buffered event counting. As a test, when I turn on terminal count interrupts, then make the counter count down to zero, no interrupt is generated. It is disarming when it hits the TC (I have it configured to stop counting and disarm on TC) but no interrupt. I have the TC interrupt enabled in the interrupt A enable register (I'm using counter 0), and I am able to generate TC interrupts using an e-series board. I've noticed the m-series has some additional bits in the Gi_DMA_Config register and I've played with them but to no effect.
    Is there any extra magic to make counter interrupts work on m-series boards? Is there any example code of buffered counting using m-series boards (there's none in the m-series ddk, although there is for the 660x boards in the 660x ddk)? Ultimately, I'd like to get buffered counting working with DMA, but for now I'd just like to get it to generate an interrupt.

    Hi fmhess-
    I created an example recently for DMA-based buffered period measurement.  It's attached as gpctex6.cpp and should be a good starting point (along with gpctex1.cpp from the M Series MHDDK) to get buffered edge counting with DMA working.  This should give considerably better performance than interrupt-based transfers; will DMA-based transfers (using the MHDDK's DMA library) work for your app?
    Thanks-
    Tom W
    National Instruments
    Attachments:
    gpctex6.cpp (12 KB)

  • How to decide the maximum number of elements for DMA FIFO in R series FPGA

    Greetings!
    I'm working on a project with an NI R Series PCIe-7842R FPGA board. To achieve fast data transfer I'm using a target-to-host DMA FIFO, and to minimize overhead I'd like to make the FIFO size as large as possible. According to the manual, the 7842R has 1,728 kbit (216 KB) of embedded block RAM, i.e., 108,000 I16-type FIFO elements available in theory (1,728,000 / 16). However, the FPGA fails to compile when I request that many elements. I checked the manual and searched online but couldn't find the reason. Can anyone please explain? And in general, what's the maximum FIFO size given the size of the block RAM?
    Thanks! 
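For what it's worth, the raw arithmetic in the question checks out; the compile failure comes from the FIFO not getting every bit of block RAM to itself (other logic, ECC, and FIFO control overhead consume some). A quick sanity check of the numbers, taking the 1,728 kbit figure from the question:

```cpp
// PCIe-7842R embedded block RAM, per the question: 1,728 kbit = 216 KB.
constexpr long kBlockRamBits  = 1728L * 1000;       // 1,728,000 bits
constexpr long kBlockRamBytes = kBlockRamBits / 8;  // 216,000 bytes
// Theoretical ceiling of I16 (16-bit) FIFO elements if the FIFO owned
// all of the block RAM -- which it never does in practice.
constexpr long kMaxI16Elems   = kBlockRamBits / 16;
```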

    Hey iron_curtain,
    You are correct that moving larger blocks of data can lead to more efficient utilization of the bus, but that's almost certainly not the most important factor here. Assuming of course that the FIFO on the FPGA is big enough to avoid overflows, I'd expect the dominant factor to be the size of read performed on the host. In general, larger reads on the host lead to improved throughput, up to the speed of the bus. This is because FIFO.Read is a relatively expensive software operation, so it's advantageous to make fewer calls to get the same amount of data.
    Note that the larger your call to FIFO.Read the larger the host buffer needs to be. Depending on your application, it might need to be several times larger than the read size. You can set the buffer size with the FIFO.Configure node.
    http://zone.ni.com/reference/en-XX/help/371599H-01/lvfpgaconcepts/fpga_dma_how_it_works/ explains the various buffers involved. It's important to note that the DMA Engine moves data asynchronously from the read/write nodes on the host and fpga.
    Let me know if you have any questions about any of this.
    Sebastian

  • DMA acquisition - M series

    Hi all,
    I went to this page: http://digital.ni.com/express.nsf/bycode/exyv4w?opendocument&lang=en&node=seminar_US#cfiles
    I downloaded the examples and register objects for M Series.
    In the AI examples, I saw one example for on-demand acquisition and another for hardware-timed acquisition.
    Is there another one for DMA acquisition?
    Thanks for your help

    Hello Fv1234,
    Currently, there are not any DMA examples, though we are in the process of creating some.  What is your application?  Do you need DMA to accomplish what you are trying to do?  Hopefully, we should have some DMA stuff finished by the end of February this year.  Once complete, we will post with more info.  Thanks for your interest!
    -Alan A.

  • M-Series Buffered Event Counting with DMA -- gating problem

    Hi --
    I am implementing DMA-based buffered event counting on a PCIe-6259 board.  I use G0_Out as the gate for G1, which counts events on a PFI pin.   So by setting the speed of G0, I get an event count (either cumulative or non-cumulative) on a periodic basis, which is directly DMA'd to my buffer, and synchronized with other i/o operations.
    This is working well right now, except for one problem: I only get data if there is at least one source edge between gates.  i.e. if there are no edges, nothing gets pumped to the dma buffer.
    I am guessing that a stale data error is somehow choking off the DMA transfer from the counter.   Is that possible?
    Is there some magic that I need to do to avoid this? For this application, especially if I am counting cumulatively, I don't care about a missing edge, but I do care if the dma transfers get out of phase with the rest of my timing.
    Thanks in advance for any help!
    --spg
    Here is a snippet of the code that sets up the event counting on G1, partly based on gpctex6.cpp:
    const int sDMASelect[] = {1,2,4,8,3,5};
    // source:  pfi, or -1 for 20Khz clock
    void eventTimerSetup(tMSeries *board, tTIO *tio, int dmaChannel, bool cumulative, int source)
    {
        int sourceSelect = (source==-1) ? 0 : (source+1);
        //MSeries.CTR.Source
        tio->G1_Input_Select.setG1_Source_Select(sourceSelect); // (pfi+1) or 20Khz=0
        tio->G1_Input_Select.setG1_Source_Polarity(0); //rising=0
        tio->G1_Input_Select.setG1_OR_Gate(0);
        tio->G1_Input_Select.flush();
        //MSeries.CTR.Gate
        tio->G1_Input_Select.setG1_Gate_Select(20); //the G_OUT signal from other clock=20
        tio->G1_Input_Select.setG1_Output_Polarity(0); //active high=0
        tio->G1_Input_Select.flush();
        //MSeries.CTR.IncrementRegisters
        tio->G1_AutoIncrement.writeRegister(0);
        //MSeries.CTR.InitialCountRegisters
        tio->G1_Mode.writeG1_Load_Source_Select(tTIO::tG1_Mode::kG1_Load_Source_SelectLoad_A);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.writeG1_Load(1);
        tio->G1_Load_B.writeRegister(0);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.setG1_Bank_Switch_Enable(tTIO::tG1_Command::kG1_Bank_Switch_EnableBank_X);
        tio->G1_Command.setG1_Bank_Switch_Mode(tTIO::tG1_Command::kG1_Bank_Switch_ModeGate);
        tio->G1_Command.flush();
        //MSeries.CTR.ApplicationRegisters
        tio->G1_Input_Select.setG1_Gate_Select_Load_Source(0);
        tio->G1_Mode.setG1_Reload_Source_Switching(tTIO::tG1_Mode::kG1_Reload_Source_SwitchingAlternate);
        tio->G1_Mode.setG1_Loading_On_Gate(cumulative ? tTIO::tG1_Mode::kG1_Loading_On_GateNo_Reload : tTIO::tG1_Mode::kG1_Loading_On_GateReload_On_Stop_Gate);
        tio->G1_Mode.setG1_Loading_On_TC(tTIO::tG1_Mode::kG1_Loading_On_TCRollover_On_TC);
        tio->G1_Mode.setG1_Gating_Mode(tTIO::tG1_Mode::kG1_Gating_ModeEdge_Gating_Active_High);
        tio->G1_Mode.setG1_Gate_On_Both_Edges(tTIO::tG1_Mode::kG1_Gate_On_Both_EdgesBoth_Edges_Disabled);
        tio->G1_Mode.setG1_Trigger_Mode_For_Edge_Gate(tTIO::tG1_Mode::kG1_Trigger_Mode_For_Edge_GateGate_Does_Not_Stop);
        tio->G1_Mode.setG1_Stop_Mode(tTIO::tG1_Mode::kG1_Stop_ModeStop_On_Gate);
        tio->G1_Mode.setG1_Counting_Once(tTIO::tG1_Mode::kG1_Counting_OnceNo_HW_Disarm);
        tio->G1_Second_Gate.setG1_Second_Gate_Gating_Mode(0);
        tio->G1_Input_Select.flush();
        tio->G1_Mode.flush();
        tio->G1_Second_Gate.flush();
        //MSeries.CTR.UpDown.Registers
        tio->G1_Command.writeG1_Up_Down(tTIO::tG1_Command::kG1_Up_DownSoftware_Up); //kG1_Up_DownSoftware_Down
        //MSeries.CTR.OutputRegisters
        tio->G1_Mode.writeG1_Output_Mode(tTIO::tG1_Mode::kG1_Output_ModePulse);
        tio->G1_Input_Select.writeG1_Output_Polarity(0);
        //MSeries.CTR.BufferEnable
        board->G1_DMA_Config.writeG1_DMA_Reset(1);
        board->G1_DMA_Config.setG1_DMA_Write(0);
        board->G1_DMA_Config.setG1_DMA_Int_Enable(0);
        board->G1_DMA_Config.setG1_DMA_Enable(1);
        board->G1_DMA_Config.flush();
        tio->G1_Counting_Mode.setG1_Encoder_Counting_Mode(0);
        tio->G1_Counting_Mode.setG1_Alternate_Synchronization(0);
        tio->G1_Counting_Mode.flush();
        //MSeries.CTR.EnableOutput
        //board->Analog_Trigger_Etc.setGPFO_1_Output_Enable(tMSeries::tAnalog_Trigger_Etc::kGPFO_1_Output_EnableOutput);
        //board->Analog_Trigger_Etc.setGPFO_1_Output_Select(tMSeries::tAnalog_Trigger_Etc::kGPFO_1_Output_SelectG_OUT);
        //board->Analog_Trigger_Etc.flush();
        //MSeries.CTR.StartTriggerRegisters
        tio->G1_MSeries_Counting_Mode.writeG1_MSeries_HW_Arm_Enable(0);
        board->G0_G1_Select.writeG1_DMA_Select(sDMASelect[dmaChannel]);
        tio->G1_Command.writeG1_Arm(1); // arm it
    }
    Scott Gillespie
    http://www.appliedbrain.com
    scott gillespie
    applied brain, inc.
    Solved!

    Okay, I have it working now.  In addition to your suggested changes, I had to remove the following line:
    tio->G1_MSeries_Counting_Mode.writeG1_MSeries_HW_Arm_Enable(0);
    It appears that writing something to MSeries_Counting_Mode causes that register to supersede the Counting_Mode register.  Is that right?
    So code that now works for  me is listed below.
    thanks Tom!
    -spg
    void eventCounterSetup(tMSeries *board, tTIO *tio, int dmaChannel, bool cumulative, int source) // pfi, or -1 for 20Khz clock
    {
        int sourceSelect = (source==-1) ? 0 : (source+1);
        //MSeries.CTR.Source
        tio->G1_Input_Select.setG1_Source_Select(sourceSelect); // (pfi+1) or 20Khz=0
        tio->G1_Input_Select.setG1_Source_Polarity(0); //rising=0
        tio->G1_Input_Select.setG1_OR_Gate(0);
        tio->G1_Input_Select.flush();
        //MSeries.CTR.Gate
        tio->G1_Input_Select.setG1_Gate_Select(20); //the G_OUT signal from other clock=20
        tio->G1_Input_Select.setG1_Output_Polarity(0); //active high=0
        tio->G1_Input_Select.flush();
        //MSeries.CTR.IncrementRegisters
        tio->G1_AutoIncrement.writeRegister(0);
        //MSeries.CTR.InitialCountRegisters
        tio->G1_Mode.writeG1_Load_Source_Select(tTIO::tG1_Mode::kG1_Load_Source_SelectLoad_A);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.writeG1_Load(1);
        tio->G1_Load_B.writeRegister(0);
        tio->G1_Load_A.writeRegister(0);
        tio->G1_Command.setG1_Bank_Switch_Enable(tTIO::tG1_Command::kG1_Bank_Switch_EnableBank_X);
        tio->G1_Command.setG1_Bank_Switch_Mode(tTIO::tG1_Command::kG1_Bank_Switch_ModeGate);
        tio->G1_Command.flush();
        //MSeries.CTR.ApplicationRegisters
        tio->G1_Input_Select.setG1_Gate_Select_Load_Source(0);
        tio->G1_Mode.setG1_Reload_Source_Switching(tTIO::tG1_Mode::kG1_Reload_Source_SwitchingAlternate);
        tio->G1_Mode.setG1_Loading_On_Gate(cumulative ? tTIO::tG1_Mode::kG1_Loading_On_GateNo_Reload : tTIO::tG1_Mode::kG1_Loading_On_GateReload_On_Stop_Gate);
        tio->G1_Mode.setG1_Loading_On_TC(tTIO::tG1_Mode::kG1_Loading_On_TCRollover_On_TC);
        tio->G1_Mode.setG1_Gating_Mode(tTIO::tG1_Mode::kG1_Gating_ModeEdge_Gating_Active_High);
        tio->G1_Mode.setG1_Gate_On_Both_Edges(tTIO::tG1_Mode::kG1_Gate_On_Both_EdgesBoth_Edges_Disabled);
        tio->G1_Mode.setG1_Trigger_Mode_For_Edge_Gate(tTIO::tG1_Mode::kG1_Trigger_Mode_For_Edge_GateGate_Does_Not_Stop);
        tio->G1_Mode.setG1_Stop_Mode(tTIO::tG1_Mode::kG1_Stop_ModeStop_On_Gate);
        tio->G1_Mode.setG1_Counting_Once(tTIO::tG1_Mode::kG1_Counting_OnceNo_HW_Disarm);
        tio->G1_Second_Gate.setG1_Second_Gate_Gating_Mode(0);
        tio->G1_Input_Select.flush();
        tio->G1_Mode.flush();
        tio->G1_Second_Gate.flush();
        //MSeries.CTR.UpDown.Registers
        tio->G1_Command.writeG1_Up_Down(tTIO::tG1_Command::kG1_Up_DownSoftware_Up); //kG1_Up_DownSoftware_Down
        //MSeries.CTR.OutputRegisters
        tio->G1_Mode.writeG1_Output_Mode(tTIO::tG1_Mode::kG1_Output_ModePulse);
        tio->G1_Input_Select.writeG1_Output_Polarity(0);
        //MSeries.CTR.BufferEnable
        board->G1_DMA_Config.writeG1_DMA_Reset(1);
        board->G1_DMA_Config.setG1_DMA_Write(0);
        board->G1_DMA_Config.setG1_DMA_Int_Enable(0);
        board->G1_DMA_Config.setG1_DMA_Enable(1);
        board->G1_DMA_Config.flush();
        // from Tom:
        // The "magic" you need is referred to as synchronous counting mode (or Duplicate Count Prevention in NI-DAQmx).
        // Try setting G1_Encoder_Counting_Mode to 6 (synchronous source mode) and G1_Alternate_Synchronization to 1 (enabled).
        tio->G1_Counting_Mode.setG1_Encoder_Counting_Mode(6); // 0
        tio->G1_Counting_Mode.setG1_Alternate_Synchronization(1); // 0
        tio->G1_Counting_Mode.flush();
        board->G0_G1_Select.writeG1_DMA_Select(sDMASelect[dmaChannel]);
        tio->G1_Command.writeG1_Arm(1); // arm it
    }
    scott gillespie
    applied brain, inc.

  • Analog out DMA performance problems

    I'm working on an open-source driver for m-series and e-series boards (http://www.comedi.org). I've discovered some performance problems doing dma to analog outputs that I can't resolve. In summary, dma transfers to the analog output of a PXI-6281 in a pxi crate being controlled through a mxi-4 connection (pxi-pci8336) are VERY slow. I'm talking 250k samples/sec slow. That's the maximum speed the dma controller can fill the board's analog output fifo from host memory. I've also got an older PXI-6713 in the same crate, and dma transfers to it are about 15 times faster (about 3.5M samples/sec). I did notice that clearing the dma burst enable bit in the mite chip's channel control register caused the 6713 to slow way down to something comparable to the 6281 (about 500k samples/sec). Setting or clearing the burst enable bit had no effect on the speed of the 6289. Is there some special mojo that needs to be done to enable burst transfers on the 6289? Also, even the relatively speedy 6713 does dma transfers much slower than it should, since the pxi-pci8336 advertises 80MB/sec sustained transfer rates over mxi4. Can you provide any insight into this matter? I've already looked through the ddk, a register level document describing the mite chip, and example code which had chipobjects for the mite and an analog input example.
    By the way, dma transfers for analog input on the 6281 weren't as bad; I didn't measure the transfer time, but I was at least able to do input at 500k samples/sec without fifo overruns.
    I'll post more detailed performance measurements in a subsequent post, and include measurements for a couple other similar pci boards (a pci-6289 and pci-6711). In case you're wondering, neither of the pci boards get anywhere close to the bandwidth provided by the pci bus, but they're not as spectacularly bad as the pxi-6281.

    Here are my measurements:
    PCI-6711, tested on 1.4GHz Pentium 4:
    5.2 to 5.3 milliseconds to load fifo to half-full using dma. 0.9 to 1.0 microseconds to write to a 16-bit register. 1.9 to 2.1 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect.
    PXI-6713, tested on 3.2GHz Pentium D:
    2.2 to 2.4 milliseconds to load fifo to half-full using dma. 0.5 to 0.7 microseconds to write to a 16-bit register. 5 to 7 microseconds to read from a 16-bit register. Turning off the mite's burst enable bit causes the dma fifo load time to increase to 16 to 17 milliseconds.
    PCI-6289, tested on 3GHz Pentium 4:
    2.0 to 2.2 milliseconds to load fifo to half-full using dma. 0.4 to 0.6 microseconds to write to a 16-bit register. About 1.2 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect. I could do streaming analog output on 1 channel with an update rate of about 2.1MHz before the board's fifo started to underrun.
    PXI-6281, tested on 3.2GHz Pentium D:
    18 to 19 milliseconds to load fifo to half-full using dma. 0.3 to 0.4 microseconds to write to a 16-bit register. 4 to 6 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect. I could do streaming analog output on 1 channel with an update rate of about 250kHz before the board's fifo started to underrun.
    Notes: the 671x boards have a 16k sample ao fifo, the 628x boards have 8k.
    The 4 to 7 microsecond times to read a register on the PXI boards seem large too; is that normal overhead for going over the mxi-4 connection?
    I wasn't doing anything else intensive on the pci bus during these tests. For what it's worth, according to pci specs the two pci boards should be able to dma their analog output fifos to half full in less than 150 microseconds.

  • Data Transfer between Target and Host using DMA FIFOs

    Dear NI,
         I am facing a problem while writing data into the DMA FIFO in the Host environment.
    These are the steps I did:
    1. Invoked the FIFO in the Host environment.
    2. Connected it to the VI reference.
    3. Configured the DMA depth.
    4. Started the FIFO.
    5. Initialised an array and fed data into it using file operations.
    6. Read the DMA FIFO in the Target VI.
    In the above process, I attached an indicator to the DMA FIFO while reading in the Target environment, but I could not observe any activity.
    If someone has tried this, please let me know the procedure to do the same.
    I am attaching the Host VI for reference.
    Attachments:
    host.vi ‏172 KB

    Hi Kalyansuman,
     Good afternoon and thanks for your post.
    I would again like to stress that you must keep your post in one place on the forums. Now let's discuss your problem!
    I am confused about what you're trying to achieve. You want to read and write data from the FPGA?
    The normal setup is to open an FPGA VI reference, then do the read/write inside the loop, and close the reference outside of the loop.
    If you need to do this twice, I would have two loops (but use the same reference), then merge the error clusters, and use a single close FPGA reference.
    Reasons why your DMA may not be working:
    1) Have you tried the FIFOs on their own (just a read, for example)?
    2) Have you taken a look at the examples in NI Example Finder? There are two which show how to implement FIFOs.
    3) Is this a cRIO or an R-series board?
    Any more clarification would be great. For example, do you get an error? What do you mean by no activity?
    Kind Regards
    James Hillman
    Applications Engineer 2008 to 2009 National Instruments UK & Ireland
    Loughborough University UK - 2006 to 2011
    Remember Kudos those who help!

  • X121e and Windows XP: only PIO mode (not DMA)

    I installed Windows XP on a ThinkPad X121e. After updating all the drivers from Lenovo, I found out why the whole system was very slow: the internal hard disk is running only in PIO mode and reaches only 3 MB/s, which is far too slow to work with. I have not had this issue for years; normally a hard disk should run in DMA mode without any problems.
    I tried several things:
    - uninstalling the driver
    - reinstalling the chipset driver
    - changing the hard disk
    - resetting the storage controller with a script (dmareset.vbs) found on the net
    but I did not find any solution. The PIO mode is still on.
    I installed the XP system in SATA legacy (compatibility) mode. I could not try installing Windows XP in AHCI mode because I did not find any AHCI driver for this chipset (HM65) at Intel (an F6 driver exists only for the Intel 5 series chipset).
    What could be the reason that DMA mode is not recognized by XP? Is it an issue for Lenovo (e.g. is a BIOS update necessary) or does it have to do with Intel (a problem with the chipset driver)?
    Thanks for any help,
    Thorsten

    Hi thoral,
    I don't know if it will fix the PIO issue, but there is an AHCI driver for your machine on the Lenovo support site:
    Intel SATA Controller AHCI Driver
    Z.
    The large print: please read the Community Participation Rules before posting. Include as much information as possible: model, machine type, operating system, and a descriptive subject line. Do not include personal information: serial number, telephone number, email address, etc.  The fine print: I do not work for, nor do I speak for Lenovo. Unsolicited private messages will be ignored. ... GeezBlog

  • How to generate an interrupt using DI change detection on m-series card

    Hi,
    I want to generate an interrupt on the positive edge of a digital input signal on the IO connector.
    Does anybody know how to configure an m-series card (PXI-6224) for this use through RLP programming?
    Thanks in advance,
    Richard

    Richard vl wrote:
    I want to generate an interrupt on the positive edge of a digital input signal on the IO connector.
    Does anybody know how to configure an m-series card (PXI-6224) for this use through RLP programming?
    RuthC wrote:
    I also want to generate an external interrupt on an M- series pci-6229, and on a pci-6602.
    1. Is there an exampe how to configure the registers?
    2. which external signals can genarate interrupts on those cards?
    Hi Richard, hi Ruth,
    Let me address your questions together: first digital change detection for 622x (part of M Series) and then for the 6602 (part of 660x).
    622x (M Series)
    Digital change detection has not been released in the DDK for M Series devices. If you must use an M Series device, please ask your field engineer to contact NI support so we can discuss options. On the other hand, digital change detection has been released in the DDK for X Series devices (63xx) [1].
    If you can use one from that family, then your programming will be much easier -- the RLP manual discusses change detection as well as interrupts (Chapter 1: Interrupts, beginning on PDF page 48), and the example distribution demonstrates how to configure change detection on the device (dioex3). The last piece is data transfer: the example's data transfer mechanism is DMA, so you would need to supply your own interrupt handler to move data to the host (or alert the host that a DMA transfer has completed).
    6602 (660x family)
    Moving to the 6602: change detection is not possible. The 660x device family only supports polling for transferring data read on the digital lines [2].
    Please let me know if I overlooked anything in your questions.
    [1] NI Measurement Hardware Driver Development Kit
    http://sine.ni.com/nips/cds/view/p/lang/en/nid/11737
    [2] NI 660x Specifications
    http://digital.ni.com/manuals.nsf/websearch/57893F11B0C0687F862579330064FF6F
    Joe Friedchicken
    NI VirtualBench Application Software
    Get with your fellow hardware users :: [ NI's VirtualBench User Group ]
    Get with your fellow OS users :: [ NI's Linux User Group ] [ NI's OS X User Group ]
    Get with your fellow developers :: [ NI's DAQmx Base User Group ] [ NI's DDK User Group ]
    Senior Software Engineer :: Multifunction Instruments Applications Group
    Software Engineer :: Measurements RLP Group (until Mar 2014)
    Applications Engineer :: High Speed Product Group (until Sep 2008)

  • Time-based interrupt m-series

    I can't find anything directly related to what I'm hoping to do, so I'm back to ask more questions.
    I'm running a hardware timed analog input acquisition (80 kHz currently) using dma transfers, on-demand analog output, and on-demand DIO on an M-series PCI-622x.  I'm exploring the possibility of using a counter to send an interrupt every 200 ms using an internal signal as the counter source.  I found this forum post, which looks like it is at least a good source of information about the counter registers.  I haven't found much else on internal routing, setting up a continuous digital output task to feed back to the counter (if I have to go with a roundabout way to do this), etc., but as you can see I'm really stabbing in the dark.
    I'm hoping I can set the counter up in a trivial configuration that counts like crazy to a specified number, triggers an interrupt, resets the counter, and automatically begins counting again, repeating the process.  Any enlightenment people can provide on the feasibility of this, or even roundabout ways of accomplishing a similar task, would be greatly appreciated.
    Aaron

    Hi Aaron-
    I'm a bit unclear on exactly what you're trying to accomplish.  Is the "interrupt" you're trying to generate just a pulse to be sent out to the I/O connector for interfacing with some external hardware, or are you looking for a way to route an internal event signal back to the host CPU as a software interrupt (to be handled by some ISR associated with your app)?
    If it's the former, you should be able to achieve what you want by setting up the M Series counter for pulse train generation.  We don't have an example of this for the M Series DDK, but the 660x DDK examples do show how to set this up (specifically, ni660x/Examples/gpct_ex3.cpp would be a good reference).
    Hopefully this helps-
    Message Edited by Tom W [DE] on 10-11-2007 09:56 AM
    Tom W
    National Instruments

  • Stopping a currently running DAQ task for m-series

    I'm running a hardware timed analog input data acquisition task on a PCI-6229 m-series DAQ card that takes 200 us.  Every 250 us the program reads the data and restarts the task.  The difficulty is that the program sometimes has a late start and the next time the thread reads the task is still in progress.  I'd like to guarantee the task is stopped every time the program reads the data.  I've tried the following three sets of commands when the thread wakes up:
    Attempt 1:
    if( board->Joint_Status_2.readAI_Scan_In_Progress_St() )
    {
         board->AI_Command_1.writeAI_Disarm(1);
         board->AI_Command_1.flush();
    }
    Attempt 2:
    if( board->Joint_Status_2.readAI_Scan_In_Progress_St() )
    {
         board->AI_Status_1.setAI_STOP_St(kTrue);
         board->AI_Status_1.flush();
    }
    Attempt 3:
    if( board->Joint_Status_2.readAI_Scan_In_Progress_St() )
    {
         board->AI_Mode_1.setAI_Start_Stop(kTrue);
         board->AI_Mode_1.flush();
    }
    They seem to work randomly.  Sometimes the task stops immediately, sometimes it reads a few more times, and sometimes it just keeps reading.  The positive part of these commands is that the task can be restarted by simply issuing the aiStart(board) command again -- most of the time.  Is there something that I can send to the card to reliably stop any currently running AI task and at the same time allow the aiStart(board) command to be used to start the next set of readings?
    You may ask why I'm doing this.  I've had a lot of problems losing track of the inputs after 13 hr to several days at 250 kHz.  By restarting the task every loop and clearing the DMA buffer, I can guarantee the first element in the buffer is the first input read.  I'm using DMA so if the task is still running when I send the aiStart(board) command, it can screw up this balance.  You may argue that I should keep track of things more closely, but this system means that if the inputs somehow become switched the next time the thread runs it will automatically correct the problem.  This self-correction is a critical feature.
    Thanks.
    Aaron

    Hi Aaron-
    The bitfields you attempt to write are problematic for a few reasons.  First, AI_Disarm is only safe to use for idle counters and may not work reliably if the acquisition is currently running (which it sounds like you have observed).  AI_STOP_St is a read-only bit, so writing it will have no effect.  Finally, AI_Start_Stop controls an unrelated functionality (essentially, it decides whether an AI_Start -> AI_Stop cycle constitutes a "scan".  This is actually the only mode of the STC2 that makes much sense to use on M Series).
    There are a couple of bitfields in AI_Command_2 that might help.  AI_End_On_SC_TC is a strobe bit that disarms the AI_SC, AI_SI, AI_SI2, and AI_DIV counters when an SC_TC event occurs.  AI_End_On_End_Of_Scan provides the same functionality for when an AI_Stop occurs.  So basically, you could stop on a regular scan-count boundary (using End_On_SC_TC) or just stop at the end of the "current" scan (using End_On_End_Of_Scan). 
    I haven't tested this, but it should work.  Let me know if you have problems using either of these methods.  Hopefully this helps- 
    Message Edited by Tom W [DE] on 03-14-2008 03:21 PM
    Tom W
    National Instruments
