FPGA target to host DMA transfer speed

Hello,
------------Task summary:
I'm currently working on a data-acquisition-heavy project using a PXIe chassis system (plus a host computer); see below for the components.
PXIe-PCIe8388 x16 Gen 2 MXI-Express (controller)*
PXIe-1082 (chassis)
PXIe-7966R (FPGA)
NI 5772 (AC version, IO Module)
*note: the controller is connected to a PCIe port on the host computer with the full x16 bandwidth.
For my application, I need to acquire a fixed number of samples (16000) from each channel of the IO module at a fixed sampling rate (800MS/s). Each acquisition will be externally triggered at a fixed frequency, 50kHz. The number of acquisitions will also be fixed. Right now I'm aiming for about 90000 acquisitions per session.
So in summary, for each acquisition session, I will need (16000 samples per acquisition) * (90000 acquisitions) * (2 AI channels) = 2.88e9 samples per acquisition session.
Since each sample is transferred as a 16-bit number, this equates to 5.76GB per acquisition session.
The total time per acquisition session is (90000 acquisitions) / (50kHz per acquisition) = 1.8 seconds.
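For reference, the arithmetic above can be double-checked with a short script (plain Python; the numbers are just the figures quoted in this post):

```python
# Back-of-the-envelope check of the acquisition-session numbers above.
samples_per_acq = 16_000      # samples per acquisition, per channel
acquisitions = 90_000         # acquisitions per session
channels = 2                  # AI channels on the NI 5772
bytes_per_sample = 2          # each sample transferred as a 16-bit integer
trigger_rate_hz = 50_000      # external trigger frequency

total_samples = samples_per_acq * acquisitions * channels
total_bytes = total_samples * bytes_per_sample
session_seconds = acquisitions / trigger_rate_hz
required_rate = total_bytes / session_seconds   # bytes per second

print(total_samples)        # 2880000000 samples per session
print(total_bytes / 1e9)    # 5.76 GB per session
print(session_seconds)      # 1.8 s per session
print(required_rate / 1e9)  # sustained throughput needed, ~3.2 GB/s
```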
--------------Problems:
I'm having problems transferring the acquired data from the FPGA to host. I think I'm seeing an overflow on the FPGA before the data is transferred to the host. I can go into more detail pending an answer to my questions below.
--------------Questions:
I want to ask a few general questions before posting any code screenshots. Assuming my math is correct and the host computer is 'good' enough, is it theoretically possible to transfer data at my required throughput, 5.76 GB / 1.8 s = 3.2 GB/s, using the hardware that I have?
If it is possible, I can post the FPGA and host VIs that I'm using. If not, I will have another set of problems!
Thanks,
Michael

thibber wrote:
Hi Michael,
I have a few questions / observations for you based on your post:
First, you mention that you are using the PXIe-PCIe8388 x16 Gen 2 MXI-Express.  This is only compatible with the NI RMC-8354, so when you mention the streaming speeds you are looking to achieve, is this streaming back to the RMC or to something else?  Is the NI RMC-8354 the host computer you are mentioning?
When it comes to streaming data with the NI 5772 and PXIe-7966R, there are a few different important data rates.  First, the NI 5772 can acquire at a maximum rate of 1.6 GS/s with 12-bit resolution = 2.4 GB/s.  That holds only if you are using 1 channel; for 2 channels, the per-channel rate is halved.  Are you planning on using 2 separate 5772s and 7966Rs?
The 7966R can stream data at a maximum rate of 800 MB/s, so we have a data rate coming into the FlexRIO's FPGA (2.4 GB/s) and going out of it (0.8 GB/s).  The data that isn't being sent back to the host accumulates in the FPGA's DRAM.  Let's say we have all of the FPGA's DRAM available to store this data (512 MB).  Our effective accumulation rate is 2.4 - 0.8 = 1.6 GB/s, so the FPGA's DRAM is going to fill up in about 1/3 s, streaming a total of 0.8 + 0.512 = ~1.3 GB back to the host before saturating and losing data.
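As a quick check, the fill-time figure follows directly from the rates quoted above (plain Python, same numbers):

```python
# How long until the FPGA's DRAM fills when input outpaces output.
in_rate = 2.4    # GB/s into the FPGA from the digitizer
out_rate = 0.8   # GB/s out of the FPGA to the host
dram = 0.512     # GB of onboard DRAM available for buffering

accumulation = in_rate - out_rate   # net fill rate, 1.6 GB/s
fill_time = dram / accumulation     # ~0.32 s, i.e. about 1/3 s
print(fill_time)
```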
There are a few options, therefore, to reach your requirement.  One might be duplicating your setup to have more cards: 1.3 GB x 3 = ~4 GB, which meets your need.  Also, the 7975R can stream data back to the host twice as fast and has 2 GB of DRAM onboard, so you could store more data and stream faster, thereby meeting your requirement.
I hope that this information helps clarify what concerns come into play for this type of application.  Please let me know if anything above is unclear or if you have further questions.
Thanks for replying. To answer your first question: I'm transferring to a desktop computer. The controller connects to a PCI Express x16 slot in the desktop computer. I'm not sure how to describe it technically, but the controller plugs into the PXIe chassis, another card plugs into the host computer's PCI Express x16 slot, and a large cable connects the card in the host computer to the controller.
For your second paragraph: the reason I used 16-bit numbers in my calculations is that this is how the data is handled in the FPGA after it has been acquired (assuming I keep it as an integer); is that correct? It is then packed in chunks of 4 (one U64) before being written to the target-to-host FIFO (that's how the NI 5772 examples do it). Right now I'm only using one FPGA and I/O module, and I'm using both AI channels (I need to simultaneously sample two different inputs).
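For what it's worth, that "pack 4 samples into a U64" step can be mimicked on the host in plain Python (a sketch; the function names are mine, and which sample lands in the low bits depends on how the FPGA VI wires the join):

```python
def pack4_u16(s0, s1, s2, s3):
    """Pack four 16-bit samples into one unsigned 64-bit word (s0 in the low bits)."""
    return (s3 << 48) | (s2 << 32) | (s1 << 16) | s0

def unpack4_u16(word):
    """Recover the four 16-bit samples from a packed 64-bit word."""
    return tuple((word >> shift) & 0xFFFF for shift in (0, 16, 32, 48))

packed = pack4_u16(0x1111, 0x2222, 0x3333, 0x4444)
print(hex(packed))   # 0x4444333322221111
assert unpack4_u16(packed) == (0x1111, 0x2222, 0x3333, 0x4444)
```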
I might be able to live with half of the sampling rate, 400MS/s for both channels, if that means I will be able to acquire a larger amount of data. Getting another FPGA and IO module is also an appealing option. It depends on what my advisors think (I'm a graduate student), and if they want to buy another FPGA and IO module.
Questions:
I have a question about the 7966R vs. the 7975R that you mentioned. I could probably find the information in the specifications, but I figured I would just ask you here. Is there any advantage to using the 7966R over the 7975R in terms of programmable logic elements? From what I could quickly read, the 7975R has more DSP slices and RAM, but does it have fewer general-purpose logic blocks than the 7966R? The reason I'm asking is that the project I'm working on will eventually involve implementing as much signal processing on the FPGA as possible. But obviously figuring out the acquisition part of the project is more important right now.
The other question I have is related to something nathand said in response to my first post. Is using multiple target-to-host FIFOs faster than using one target-to-host FIFO (assuming the combined sizes are equivalent)? I noticed that the FPGA has a maximum of 16 target-to-host FIFOs. Does each target-to-host FIFO reserve some amount of bandwidth, or is the total bandwidth just divided by the number of target-to-host FIFOs that I use in a given FPGA VI? E.g., if I define only 2 target-to-host FIFOs, each would have half of the total bandwidth; if I define 3, each would have 1/3; etc.
Hi Michael,
A few updates to my previous post:
First, I think I could have explained the sampling rate a bit more clearly.  Using 2 channels instead of 1 means that each channel will sample at half the rate (800 MS/s per channel), but the total acquisition rate will still be the same (1.6 GS/s).
There are some other options you might want to look into as well regarding your acquisition.  For instance, is it acceptable to use only the 8 most significant or least significant bits of your measurement?  Or to discard a section of your acquisition that is irrelevant to the measurement?
Also, if you do end up wanting to look in the direction of a 7975R, you would likely also want to switch to a 1085 chassis to fully utilize the improved streaming speeds.  The 1082 has a limitation of 1 GB/s per slot, while the 1085 can achieve up to 4 GB/s per slot.
I look forward to hearing what other observations or concerns arise in your testing.
Andrew T.
National Instruments
I'll go ahead and respond to your latest reply too. Thanks again for your help.
I think I understand the streaming rate concept. I'm not using time interleaved sampling. My application requires using the simultaneous sampling mode. I need two channels of input data.
Unfortunately I don't think I can sacrifice on bit depth. But for right now I can probably sacrifice half of the sampling rate, and reduce my acquisition duty cycle from 100% (constantly streaming) to 50% (acquiring only half of the time). My acquisition rate will still need to be 50kHz though. I'm planning to compromise on sampling rate by summing pairs of data points instead of simply decimating, and then transferring the data to the host.
Questions:
We (my advisors and I) think that the summing-pairs approach would preserve more information than simply throwing away every other point. Also, we can avoid overflow because each 16-bit number contains only 12 bits of actual information. The 16-bit number will just need to be divided by 16 before summing, because the 12 bits of information occupy the 12 MSBs of the 16-bit word. Does that sound right?
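That reasoning can be sanity-checked numerically. A sketch in plain Python (assuming, as described above, that the 12 data bits sit in the 12 MSBs of each 16-bit word, so a right shift by 4 is the divide-by-16):

```python
def sum_pairs(raw_u16):
    """Right-shift out the 4 padding LSBs (divide by 16), then sum adjacent pairs.
    Two 12-bit values (max 4095 each) sum to at most 8190, which is only 13 bits,
    so the result still fits comfortably in a 16-bit word: no overflow."""
    vals = [x >> 4 for x in raw_u16]
    return [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]

# Two full-scale samples, then a 1 and a 3 (left-aligned in the 16-bit words).
raw = [0xFFF0, 0xFFF0, 0x0010, 0x0030]
print(sum_pairs(raw))   # [8190, 4]
```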
As for upgrading the hardware, that would be something I would need to discuss with my advisors (like I said in my above response to your previous post). It would also depend on any exchange programs that NI may have. Is it possible to exchange current hardware for some discount on new hardware?

Similar Messages

  • Number of elements in target to host DMA FIFO

    Hi everyone,
    I'd like to transfer a set of datapoints from a FPGA to a RT-host controller using a DMA fifo. If I use the "Get Number of Elements to Write" function on the FPGA target, do I get the total number of elements in both buffers, or just the one on the FPGA-target?
    (see http://zone.ni.com/reference/en-XX/help/371599H-01​/lvfpgaconcepts/fpga_dma_how_it_works/)
    Solved!
    Go to Solution.

    What type of data do you want to transfer over the FIFO? That is, how many bits does each sample contain?
    The reason I ask is because you can take a bit packing approach.
    Let's say, for example, that you want to take two samples of a measurement, both 32-bit, and then send the data as a set to the processor.
    If you just dump the data into a single FIFO, you may lose track of which was the rising-edge or falling-edge data, or whether the two samples you got from the FIFO are even from the same dataset.
    To fix this, use a bit packing technique.
    On the FPGA, merge your two 32-bit values into one 64-bit element.
    Set your FIFO to 64 bits.
    On the processor side of things, all you need to do is read one 64-bit element from the FIFO.
    Use a split operation to break the 64-bit element into two 32-bit data fields.
    Now you have your two data samples, and you are guaranteed that they are from the same dataset.
    If the sum of the data bits exceeds 64 bits (the limit of the FPGA FIFO), then you will need to migrate to a more complex bit packing schema where the data is split among multiple 64-bit fields, with a defined bitfield header and identifier. For example, the first 5 bits of each 64-bit element identify that this block is 1 out of X blocks which, when combined and reassembled per the schema, represent your data.
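    The round trip described above looks like this on the host side (plain Python standing in for the LabVIEW join/split operations; a sketch with names of my choosing):

```python
def join_u32(rising, falling):
    """Merge two 32-bit values into one 64-bit FIFO element (rising edge in the high half)."""
    return ((rising & 0xFFFFFFFF) << 32) | (falling & 0xFFFFFFFF)

def split_u64(element):
    """Break a 64-bit FIFO element back into its two 32-bit fields."""
    return (element >> 32) & 0xFFFFFFFF, element & 0xFFFFFFFF

# Both halves always travel together, so they are guaranteed to be from the same dataset.
elem = join_u32(0xDEADBEEF, 0x12345678)
assert split_u64(elem) == (0xDEADBEEF, 0x12345678)
```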
    I hope this helps.

  • FPGA to HOST DMA

    I am trying to record rising-edge timestamps of a digital input signal to a spreadsheet using a cRIO. I have been getting some bad data elements recorded (although there are periods without faults). This may be the input signal itself, but I would be grateful if someone could check my VIs for any obvious mistakes.
    I would also like to do the same with another input signal. Should I use 2 FIFOs, as below?
    Many Thanks
    Erny

    ern,
    Here are some thoughts. 
    Make sure that your FIFO type is Target to Host - DMA.  If you use more than one, make sure that the DMA Channel is unique for each (0 and 1 for instance).  Check the depth to make sure that you are not running out of space in the FIFO.  Your FPGA code looks ok to me.
    In the GUI, you are specifying to read 1023 elements at a time.  Is your FIFO this same size?  This means that you will most likely always miss edges as the GUI will not read out the data until 1023 data points are present.  You are trying to stream edge timestamp information to the GUI so you have to stay ahead of the FPGA or you will miss data.  You could code your GUI so that you use a shift register to pass the number of Elements Remaining back as an input to the read operation as the Number of Elements to read.  You could seed the shift register with a value of 500 (or something like that) as the initial read.  You would also have to make sure that if the number of elements remaining was too small (or zero) that you set it to some minimum or you may read data out one point at a time depending on your edge rate.
    In your GUI loop, you are writing to a file too.  You might want to use a producer/consumer architecture to read in the data: one thread would read out the FIFO as you currently have, and a second thread would write the data to a file.  You would pass data from the FIFO thread to the file-writer thread with a queue.
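    A host-side sketch of that pattern in plain Python (the FIFO class here is a stand-in for the real DMA FIFO, and the 500-element floor follows the suggestion above; this illustrates the structure, not the actual FPGA Interface API):

```python
import queue
import threading

MIN_READ = 500  # floor for the read size, so we never degrade to tiny reads

class FakeFifo:
    """Stand-in for the target-to-host DMA FIFO (hypothetical, for illustration)."""
    def __init__(self, data):
        self.data = list(data)
    def read(self, n):
        chunk, self.data = self.data[:n], self.data[n:]
        return chunk, len(self.data)   # (elements read, elements remaining)

def fifo_reader(fifo, out_queue):
    """Producer: drain the FIFO, scaling each read to what has accumulated."""
    to_read = MIN_READ
    while True:
        chunk, remaining = fifo.read(to_read)
        if chunk:
            out_queue.put(chunk)
        if remaining == 0 and not chunk:
            out_queue.put(None)        # sentinel: acquisition finished
            return
        to_read = max(remaining, MIN_READ)

def file_writer(out_queue, sink):
    """Consumer: log chunks without ever stalling the reader."""
    while True:
        chunk = out_queue.get()
        if chunk is None:
            return
        sink.extend(chunk)

q = queue.Queue()
sink = []                              # stands in for the file on disk
writer = threading.Thread(target=file_writer, args=(q, sink))
writer.start()
fifo_reader(FakeFifo(range(1400)), q)  # reads 500 elements, then the 900 remaining
writer.join()
assert sink == list(range(1400))       # nothing lost, order preserved
```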
    Hope this helps! 
    -cb

  • DMA from host to FPGA target is not supported for this remote system.

    I am trying to communicate with my FPGA (on the cRIO-9002) from the RTOS.  I set up an Open FPGA VI Reference with no error, but as soon as I try to access the FIFO I receive error -63001, and the attached message says:
    Error -63001 occurred at Open FPGA VI Reference in target - multi rate - variables - fileIO_old.vi
    Possible reason(s):
    NI-RIO FPGA Communications Framework:  (Hex 0xFFFF09E7) DMA from host to FPGA target is not supported for this remote system. Use another method for I/O or change the controller associated with the FPGA target.
    What other I/O options do I have to move data asynchronously from the RTOS to the FPGA? I tried creating memory, but it appears that I cannot write to the memory from the RTOS side.
    We also have a 9012 sitting around; will using that cRIO instead solve this problem?
    I am very, very green when it comes to LabVIEW, so I apologize if this is an easy question.
    Solved!
    Go to Solution.

    As stated in the NI-RIO driver readme, DMA is not supported from the host to the FPGA on the cRIO-900x series. The cRIO-901x controllers support DMA transfers from host to FPGA and FPGA to host, while the cRIO-900x series controllers support only FPGA-to-host DMA transfers. Therefore, LabVIEW returns an error if you try to transfer using DMA from a cRIO-900x controller.
    The 9012 looks like the ideal solution; you are very lucky to have extra hardware lying around.
    Rob K
    Measurements Mechanical Engineer (C-Series, USB X-Series)
    National Instruments
    CompactRIO Developers Guide
    CompactRIO Out of the Box Video

  • How can I decide the depth of the 3 DMA FIFOs used as target-to-host on the RT controller (host) side?

    Hello,
    I am using 3 DMA FIFOs; I want to acquire data from 3 AI modules at up to 50 kHz. The sampling rate can be varied according to the application.
    Please suggest how I should allocate my RT controller's memory for those DMA FIFOs. I am also facing one problem: I am using 3 timed deterministic loops with different priorities. Each timed loop has one FIFO whose data is read with the polling method (first FIFO.Read with 0 elements to get the count, then FIFO.Read again with the remaining elements), and each timed loop communicates with the host VI through global variables, which are updated from a normal-priority loop. Can anybody tell me whether this procedure is right or not? If my problem is not clear enough, I will explain my query again with a snapshot of my application.
    Pratima
    *****************Certified LabVIEW Associate Developer****************************

    You wouldn't need to allocate memory separately on your RT controller for the FIFOs. You need to create a FIFO under your FPGA target and use the FPGA Interface VIs in your RT VI to access the DMA FIFO; specifically, use the FPGA Interface Invoke Method VI to access your DMA FIFO.
    As for the other questions, I would recommend you create a separate post in the RT section of the LabVIEW forums so that you can get a faster response.
    I hope this helps!
    Mehak D.

  • Transfer speed in Target Disk mode

    This may not be a question as I am not even interested in the "why" any more.
    My MBP (2.0GHz, 2GB Ram, 7200rpm HD, Week 15) is in target disk mode, and I am copying my Aperture Library (50GB) over to my PowerMac G4 via firewire. Obviously the firewire ports on both sides are Firewire 400, but the file transfer speed appears to be noticeably slow compared to a similar transfer between my external hard drive and the PowerMac G4.
    When I copy over the exact same library (50GB, about 40,000 files according to Finder) from the external HD to the PowerMac, it takes less than 40 minutes. Now between my MBP and PowerMac G4, Finder estimates more than 2 hours.
    I am curious why this is the case when both the external HD and my MBP have 7200rpm drives. Has anyone noticed this too?

    Disks aren't (usually) rated by read/write speed, at least not by the manufacturers. They sell by capacity.
    You’re kidding, right? Drive manufacturers specify transfer rates; average, minimum, and maximum seek times; rotational latencies; and cache buffer sizes. All these parameters are direct measures of performance, and people who design and build systems pay close attention to them. I know I do. I’ve been putting 10,000 RPM Ultra-160 and now Ultra-320 SCSI drives and dual-channel SCSI adaptors in my Linux desktops for several years now. These parameters have a large effect on performance, obviously.
    It's not a question of 'consider', it's a question of verifiable fact: data on the inner part of the spindle cannot be written or read as quickly as data on the outer part. …
    If you cannot tolerate the minimum speeds, then you cannot use that part of the drive. If you’re going to use the full capacity of the drive, you won’t get the speed. You are forced to de-rate the drive one way or the other.
    That’s what I’m saying.
    In other words, if you’re going to use the whole drive, then you’re going to get the speeds you get. Avoiding filling it is not really an option, is it?
    Randall Schulz

  • Communication problem between FPGA VI and Host-PC VI

    Dear,
    I am trying to set up communication between an FPGA and the host PC using FPGA FIFOs.
    The communication still has some problems, and I don't know what is causing them.
    LabVIEW gives me the following reason: "The transfer did not complete within the timeout period or within the specified number of retries."
    What is wrong with my LabVIEW program? How can I solve this?
    The Project can be found in attachment.
    Best regards,
    Jasper Beurms
    Solved!
    Go to Solution.
    Attachments:
    CEC20_02.zip (150 KB)

    Hello Jasper,
    Are you fully familiar with how DMA FIFOs work on a cRIO?
    Some general questions:
    - Is there a specific reason that you need to use DMA FIFOs?
      You seem to require only a 10 ms acquisition rate.
    - Wouldn't it be easier to just use the Scan Engine if you don't need to go below 10 ms?
    The Scan Engine should allow you to do acquisitions at this rate without even having to implement FPGA code yourself.
    Another benefit is that the Shared Variables created/published by the Scan Engine are by default also visible over the network.
    If you want to use DMA FIFOs, then I would suggest you take a look at the CompactRIO Developer's Guide: http://www.ni.com/compactriodevguide/
    I would advise that you read out the DMA FIFOs in a VI that is running on the RT controller (RT VI) and then send those values from the RT VI over the network to your Windows VI.
    You could, for example, use Shared Variables to send values from the RT VI to the Windows host VI.
    Another solution might be to use Network Streams or more custom TCP/IP communication.
    If these concepts are new to you, then please have a look at the CompactRIO Developer's Guide: http://www.ni.com/compactriodevguide/
    This guide should explain all the basics you need to know.
    If something is unclear or requires further explanation, then please let me know.
    Kind Regards,
    Thierry C - Applications Engineering Specialist Northern European Region - National Instruments
    CLD, CTA
    If someone helped you, let them know. Mark as solved and/or give a kudo.

  • Chart to excel file in fpga target

    I am a new user of LabVIEW. Please help me: I want to export the data from a waveform chart (FPGA target) to an Excel spreadsheet. Please suggest a solution.
    Also, please give me a suggestion for sampling on an FPGA target.

    Hi,
    Which FPGA target are you using, FlexRIO or CompactRIO?
    There are a few ways to transfer data from the FPGA to the computer; you can refer to the following topic for more detailed information:
    Transferring Data between the FPGA and the Host (FPGA Module)
    http://zone.ni.com/reference/en-XX/help/371599F-01​/lvfpgaconcepts/pfi_data_transfer/
    Once you choose the method of communication, you need to save your data into an Excel spreadsheet.
    Do you need to generate a graph automatically in Excel? If yes, you might need the Report Generation Toolkit for this.
    Otherwise, you can use "Write to Spreadsheet File.vi" to log your data.
    I found a great example in the community, you can refer to the link below:
    https://decibel.ni.com/content/docs/DOC-7332
    Regards,
    KwokHow
    Applications Engineer | National Instruments
    Singapore (65) 6226 5886 | Malaysia (60) 3 7948 2000 | Thailand (66) 2 298 4800
    Philippines (63) 2 659 1722 | Vietnam (84) 8 3911 3150 | Indonesia (62) 21 2924 1911

  • How to set a DMA transfer type for PXIe-6536 in LabWindows/CVI?

    I have a PXIe-1078 chassis with a PXIe-8115 controller running under Windows 7. The digital output board is a PXIe-6536.
    I use the function DAQmxSetChanAttribute to set the property DAQmx_DO_DataXferMech to the value DAQmx_Val_DMA, since I want to use direct memory access for the data transfer. This worked well with a PCI-6534 board using the same LabWindows/CVI code before migrating it to the PXIe system.
    Unfortunately, running this code on the PXIe system reports DAQmx error -200452: "Specified property is not supported by the device or is not applicable to the task".
    The task is created in the following simple way (the board name in MAX is 'Dev1'):
       DAQmxCreateTask ("digTask", &digitalTask);                                                   /* create the task */
       DAQmxCreateDOChan (digitalTask, "Dev1/port0:3", "DIG_CHANNELS", DAQmx_Val_ChanForAllLines);  /* one channel for all lines of ports 0-3 */
       DAQmxSetChanAttribute (digitalTask, "", DAQmx_DO_DataXferMech, DAQmx_Val_DMA, 15);           /* request DMA as the transfer mechanism */
    How can I solve this problem? How is it possible to choose between different transfer types?
    Thank you in advance for any hint!

    Hi CavityQED,
    The PCI-6534 is a "Digital I/O" board while the PXIe-6536 is a "High-Speed Digital I/O" board; that's why they don't have the same properties.
    By the way, you can use DMA transfer with this method:
    http://zone.ni.com/reference/en-XX/help/370520J-01/hsdio/direct_dma/
    Let me know if it helps you.
    Regards.
    Mathieu_T
    Certified LabVIEW Developer
    Certified TestStand Developer
    National Instruments France

  • Error code 63003..while running a FPGA prog on Host PC using FPGA reference VI

    Hi,
    When I run my FPGA program on the host PC using an FPGA Reference VI on the Windows platform, I get error code 63003 (FPGA bitfile out of date). When I go back, recompile, and run the FPGA program on the FPGA target, then shut it down, come back to the host PC, and run my application with the Open FPGA VI Reference, it runs without any error! Does anyone know why I get error code 63003?

    This error means that the VI which is downloaded to the FPGA was changed on the host/development system, but has not been recompiled. By recompiling it you resolve this problem.
    When a LabVIEW FPGA VI is compiled for the FPGA, the bit stream, which will be downloaded to the FPGA, is stored in the VI file. When LabVIEW RT or LabVIEW Windows attempts to download the bit stream to the RIO card, it makes sure the bit stream corresponds to the VI source code in the same file. This is necessary so that the host application can properly communicate with the VI on the FPGA. If the two are not the same, this error is generated from the Open FPGA VI Reference function in the host application.
    Christian L
    NI Consulting Services
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Please tip your answer providers with kudos.
    Any attached Code is provided As Is. It has not been tested or validated as a product, for use in a deployed application or system,
    or for use in hazardous environments. You assume all risks for use of the Code and use of the Code is subject
    to the Sample Code License Terms which can be found at: http://ni.com/samplecodelicense

  • Error -63150 when accessing FPGA FIFO in host VI

    I have a problem accessing a target-to-host FIFO in the host VI (32767 elements, U16) using the FIFO invoke node:
    FIFO.stop works without error
    FIFO.start and FIFO.read result in the following error message:
    Error -63150 occurred at Invoke Method: FIFO.Read in host.vi
    Possible reason(s): NI-RIO:  (Hex 0xFFFF0952) An unspecified hardware failure has occurred. The operation could not be completed.
    I already tried the following:
    removed all other PCI/PCIe cards from the PC (except the graphics card in PCIe slot 1): didn't help.
    moved the connector card for the PXIe chassis (which was originally in PCIe slot 2) to slots 3 and 4, respectively (see below for motherboard details): didn't help.
    put the connector card into another (older and much slower) PC (WinXP): there, everything runs fine.
    Hardware/software Details:
    target: PXIe-7962R (with mounted FlexRIO adapter NI 5751; PCI bus 9, device 0, function 0). The device is displayed as working properly both in the Windows Device Manager and in Measurement & Automation Explorer.
    chassis: PXIe-1073 (FPGA card is in slot 2, a PXI-6229 card is in hybrid slot 4)
    LabVIEW 2010 SP1 with FPGA module 10.0.01
    NI-RIO 3.6
    operating system: Windows 7 Enterprise (32bit, Service Pack 1)
    processor: Core i7 960
    motherboard: ASROCK X58 Deluxe3: http://www.asrock.com/mb/overview.de.asp?Model=X58%20Deluxe3&cat=Specifications
    What could I do to fix this problem? Any hint highly appreciated.
    Solved!
    Go to Solution.

    Current status (still not solved):
    reinstalling Windows 7 (32bit Enterprise Edition) and LV2010 SP1 with FPGA module and drivers didn't help. Error is still there.
    @cheggers: the error is produced when FIFO.read() is executed the first time, it's not only after stop/start.
    trying the chassis/cards and a test VI on a similar PC: there it works perfectly. That PC differs from mine only in the following aspects:
    motherboard: ASUS P6T DELUXE
    processor: Core i7 950 instead of 960
    RAM: 12 GB (DDR3 1333 MHz) instead of 6 GB (DDR3 1333 MHz)
    OS: Windows 7 64-bit Ultimate instead of 32-bit Enterprise
    graphics card: NVIDIA GeForce GTX 460 instead of Quadro NVS 295
    FPGA module: 10.0.0 instead of 10.0.1
    It really seems to be related to my PC's hardware. Motherboard, processor, memory, graphics card???

  • Slow file transfer speed on OS X / Windows LAN

    I have a Mac Mini with OS X Server (Yosemite) on a network of 5 Windows 7 PCs, the server hosts FileMaker Server 13 and also some files in a shared folder.
    I notice the file transfer speed when copying a file from one PC up to the server's shared folder is around 350 KB/second. Is this an acceptable speed? How could I improve it?
    Thanks for your help.

    I'm having the same issue. I recorded a bunch of video with the 4S; now I'm copying it to my Windows 7 PC, and it peaks out at 900 KB/sec. My Corsair thumb drive in the same port gets 5-10x this speed. Any luck getting faster speeds?

  • IPhone File Transfer Speeds

    I have two MacBooks and two iPhones and I'm seeing very different file transfer speeds between the two. Both have the backup option manually disabled to improve sync times.
    Setup 1: iPhone 3GS with MacBook Pro (15-inch, Late 2008) model
    Setup 2: iPhone 3G with Black Macbook (13-inch Mid 2007) model
    The time it takes to sync video between the two is drastic with the newer model copying files in a few seconds, and the older setup copying files over 5-10 minute periods.
    I have yet to swap the two iPhones on the two computers but from a technical perspective, I don't understand why one would transfer files faster than the other. Both are using USB 2.0 and I've already tried using the same cable for both. Anyone see similar behavior? Is this a facet of the older computer or older iPhone?

    As an example, a USB host on a PCI bus will send or receive the data via the PCI bus; the stack will prepare the next data in memory and receive an interrupt from the host. That's what I mean by how it's implemented.

  • DMA transfer rate for PCI-6602 counter/timer

    I'm strongly interested in raising the DMA transfer rate between the PCI-6602 counter and the computer. At the moment, I've got a Pentium 4 2.4 GHz running under Win98. I have to move an 80-megaword array at a speed of ~5 MHz. So far, I've only been able to reach 2 MHz. Would that be possible? What is the bottleneck here: the software or the hardware?

    Hello,
    I think the bottleneck you are seeing here is a limitation of the DMA transfer capabilities that depends on the bus of your PC, not on your 6602 card. Here is a link to a KnowledgeBase article you could try, to see if that would improve your transfer rates. I still doubt you will be able to achieve approximately 5 MHz.
    http://ae.natinst.com/operations/ae/public.nsf/fca7838c4500dc10862567a100753500/1b64310fae9007c086256a1d006d9bbf?OpenDocument
    Regards,
    Steven B.

  • 3702i only getting 10MB/s transfer speed

    I am in the process of configuring a 3702i access point, but I am running into an issue where I cannot get over 10 MB/s. I have the access point plugged into a gigabit port, with a power adapter plugged into the 3702i since I do not have a PoE+-capable switch. I only get around 5 MB/s on the 2.4 GHz radio and 10 MB/s on the 5 GHz radio, which is not any faster than our old Aironet 1100 AP. I have tried adjusting the data rates, but it does not make a difference. The properties of my wireless card show that the connection speed is 217 Mbps.
    Below are the switchport settings; I have attached my configuration file in case it helps.
    4506-2#show run interface gi6/48
    Building configuration...
    Current configuration : 213 bytes
    interface GigabitEthernet6/48
     switchport access vlan 95
     switchport trunk encapsulation dot1q
     switchport trunk native vlan 95
     switchport trunk allowed vlan 94,95
     switchport mode trunk
     no cdp enable
    end

    After changing the channel width to 80 MHz I get an average transfer speed of 17 MB/s, and my connection speed shows as 405 Mbps, which should give me around 50 MB/s.
    Wired, I get an average transfer speed of 100-112 MB/s.
    Before Changing to 80MHz
    c3702-1#show dot11 associations all-client
    Address           : 0024.xxxx.xxxx     Name             : c3702-1
    IP Address        : 192.168.95.91      IPv6 Address        : ::
    Gateway Address   : 0.0.0.0
    Netmask Address   : 0.0.0.0            Interface        : Dot11Radio 1
    Bridge-group        : 1
    reap_flags_1        : 0x0     ip_learn_type       : 0x0       transient_static_ip : 0x0
    Device            : ccx-client         Software Version : NONE
    CCX Version       : 4                  Client MFP       : Off
    State             : EAP-Assoc          Parent           : self
    SSID              : vlan95n
    VLAN              : 95
    Hops to Infra     : 1                  Association Id   : 1
    Clients Associated: 0                  Repeaters associated: 0
    Tunnel Address    : 0.0.0.0
    Key Mgmt type     : WPAv2              Encryption       : AES-CCMP
    Current Rate      : m23-               Capability       : WMM 11h
    Supported Rates   : 6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0 m0-2 m1-2 m2-2 m3-2 m4-2 m5-2 m6-2 m7-2 m8-2 m9-2 m10-2 m11-2 m12-2 m13-2 m14-2 m15-2 m16-2 m17-2 m18-2 m19-2 m20-2 m21-2 m22-2 m23-2
    Voice Rates       : disabled           Bandwidth        : 20 MHz
    Signal Strength   : -62  dBm           Connected for    : 302 seconds
    Signal to Noise   : 35  dB            Activity Timeout : 20 seconds
    Power-save        : Off                Last Activity    : 0 seconds ago
    Apsd DE AC(s)     : NONE
    Packets Input     : 34487              Packets Output   : 51847
    Bytes Input       : 2250174            Bytes Output     : 76757696
    Duplicates Rcvd   : 1                  Data Retries     : 36241
    Decrypt Failed    : 0                  RTS Retries      : 0
    MIC Failed        : 0                  MIC Missing      : 0
    Packets Redirected: 0                  Redirect Filtered: 0
    IP source guard failed : 0             PPPoE passthrough failed : 0
    DAI failed : IP mismatch  : 0             src MAC mismatch : 0             target MAC mismatch : 0
    Existing IP failed :  0              New IP failed :  0
    11w Status       : Off
    Session timeout   : 0 seconds
    Reauthenticate in : never
    After Changing to 80MHz
    c3702-1#show dot11 associations all-client
    Address           : 0024.xxxx.xxxx    Name             : c3702-1
    IP Address        : 192.168.95.91      IPv6 Address        : ::
    Gateway Address   : 0.0.0.0
    Netmask Address   : 0.0.0.0            Interface        : Dot11Radio 1
    Bridge-group        : 1
    reap_flags_1        : 0x0     ip_learn_type       : 0x0       transient_static_ip : 0x0
    Device            : ccx-client         Software Version : NONE
    CCX Version       : 4                  Client MFP       : Off
    State             : EAP-Assoc          Parent           : self
    SSID              : vlan95n
    VLAN              : 95
    Hops to Infra     : 1                  Association Id   : 1
    Clients Associated: 0                  Repeaters associated: 0
    Tunnel Address    : 0.0.0.0
    Key Mgmt type     : WPAv2              Encryption       : AES-CCMP
    Current Rate      : m23b               Capability       : WMM 11h
    Supported Rates   : 6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0 m0-2 m1-2 m2-2 m3-2 m4-2 m5-2 m6-2 m7-2 m8-2 m9-2 m10-2 m11-2 m12-2 m13-2 m14-2 m15-2 m16-2 m17-2 m18-2 m19-2 m20-2 m21-2 m22-2 m23-2
    Voice Rates       : disabled           Bandwidth        : 40 MHz
    Signal Strength   : -67  dBm           Connected for    : 259 seconds
    Signal to Noise   : 27  dB            Activity Timeout : 15 seconds
    Power-save        : Off                Last Activity    : 5 seconds ago
    Apsd DE AC(s)     : NONE
    Packets Input     : 77863              Packets Output   : 147266
    Bytes Input       : 4979788            Bytes Output     : 221483209
    Duplicates Rcvd   : 1                  Data Retries     : 46004
    Decrypt Failed    : 0                  RTS Retries      : 0
    MIC Failed        : 0                  MIC Missing      : 0
    Packets Redirected: 0                  Redirect Filtered: 0
    IP source guard failed : 0             PPPoE passthrough failed : 0
    DAI failed : IP mismatch  : 0             src MAC mismatch : 0             target MAC mismatch : 0
    Existing IP failed :  0              New IP failed :  0
    11w Status       : Off
    Session timeout   : 0 seconds
    Reauthenticate in : never
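    One thing worth pulling out of the two captures above is the ratio of "Data Retries" to "Packets Output": a high retry ratio usually points at interference or a marginal RF link rather than a switchport problem. Computing it from the pasted counters (a quick sketch, using only the numbers shown above):

    ```python
    # Retry ratio from the two 'show dot11 associations' captures above.
    # Counters are copied directly from the pasted output.

    def retry_ratio(data_retries, packets_output):
        return data_retries / packets_output

    before = retry_ratio(36241, 51847)    # first capture (20 MHz)
    after = retry_ratio(46004, 147266)    # second capture (wider channel)
    print(f"before: {before:.0%}, after: {after:.0%}")
    ```

    Roughly 70% retries in the first capture and about 31% in the second; both are high, which fits the signal-to-noise figures (35 dB and 27 dB) dropping as the channel was widened.
    
    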
