ADC testing with PCI-6534/GPIB

Is it possible to test the linearity of an ADC with a PCI-6534 or with an Agilent GPIB controller?
Please provide some .vi examples to test an ADC.
Thanks

Well, an ADC is an analog-to-digital converter, and the 6534 is a purely digital board. So on that basis alone the answer is no: you can't test the linearity of an ADC with that board. I'm also not sure what you mean by "with Agilent GPIB"; GPIB is just an instrument-control bus, so on its own that phrase doesn't identify any measurement hardware.
As for VI examples to test an ADC, that is completely dependent on the hardware you have.
I would suggest contacting your local NI sales office; they can help you pick out the right hardware for what you need to do.
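For reference, ADC linearity is usually characterized as DNL/INL computed from a code histogram taken with a full-scale ramp (or sine) input, which requires an analog stimulus and capture path, not a digital-only board. Here is a minimal sketch of the histogram math in plain Python; the function name and the perfect-ADC example are illustrative, not tied to any NI API:

```python
def dnl_inl(histogram):
    """Compute per-code DNL and INL (in LSB) from a ramp-test code
    histogram: with a linear ramp input, every code should be hit
    equally often, so deviation from the mean bin count is the DNL."""
    ideal = sum(histogram) / len(histogram)      # expected hits per code
    dnl = [count / ideal - 1.0 for count in histogram]
    inl, running = [], 0.0
    for d in dnl:                                # INL is the running sum of DNL
        running += d
        inl.append(running)
    return dnl, inl

# A perfect 2-bit ADC: equal bin counts give zero DNL and INL everywhere.
dnl, inl = dnl_inl([250, 250, 250, 250])
```

In a real test you would fill `histogram` from digitized codes captured by analog-input hardware; the math above is the same regardless of which board does the capture.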

Similar Messages

  • Cannot get high transfer rates with PCI-6534

    I am using a PCI-6534 in the Pattern I/O mode for performing data acquisition. The data rate I am getting for reliable data is about 2MS/s. I need 8MS/s, but do not get reliable or correct data at this rate. According to the 653X User Manual the benchmarked rate is 20MS/s, but I seem to be getting nowhere near this.

    How is your memory?
    http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/101ad216bd2b692286256a48006a986c?OpenDocument
    Ben Rayner

  • Output all data with PCI-6534 before it is updated

    In my application I do digital I/O with the NI 6534. I use the onboard loop to output data from my output buffer, but the data are continuously updated. I want to know how I can be sure that my whole output buffer has been transferred before I update the buffer. I have tried to check this with DIG_Block_Check, using [while (ulRemaining != 0) && (iStatus == 0)], but ulRemaining never equals 0.
    I think somebody can help me. Thank you.
    Best regards


  • PCIE 3.0 test with MSI Z68A-GD80 G3 :)

    The Z68A-GD80 G3 is MSI’s first motherboard with PCIE 3.0 connectivity. Currently there are no PCIE 3 devices available yet but later on we can test its performance by using a PCIE SSD. Aside from this, the board has also a new UEFI Bios named “Click Bios II” which for me is way better than the previous version. 
    Before we take a closer look at the board, let's check out the package first.
    The box design is much the same as the earlier GD80 B3 version. Once you open it up, all the main features of the board are enumerated:
    Extreme Power Design, Military Class II Components (SFC, Hi-C Caps, Multi Bios), 3 PCIEx16 Slots, Super Charger and OC Genie II
    For the bundled accessories, you have the same old story: manual, software disc, four SATA cables, two Molex-to-SATA power connectors, and an SLI bridge. Then for some extras: a two-port USB 3.0 PCI bracket, guide headers for faster and easier connectivity, and lastly voltage check points.
    Next, the mainboard. Looking at it physically and comparing it side by side against the Z68A-GD80 B3, you won't find any difference except for the PCIe x16 locks. It has the same black and blue color theme, two VRM heatsinks connected by a flat heatpipe, and V-Check points.
    Moving closer to the board, you have 8 power pins to power up the processor, 2 PCI slots, 2 PCIe x1 slots for devices such as TV tuners or audio cards, plus 2 PCIe Gen3 and 1 PCIe Gen2 x16 slots. With one video card in the topmost slot, it will run at x16. If both the top and middle slots are used, they run at x8/x8. Using the 3rd PCIe x16 Gen2 slot as well gives x8/x8/x4, but it also disables certain onboard devices: the eSATA port, SATA 7, one onboard front USB 3.0 header, two PCI slots, and FireWire. So if possible, avoid using the last video card slot.
    At the topmost PCIe x1 area you will find a 6-pin power connector. This provides extra power for multi-GPU setups.
    Next, at the bottom part of the board, you have the easy buttons (Power and Reset) and the popular one-button OC Genie. The red USB header is for the Super Charger while the blue is a regular USB 3.0 header. For the SATA connectors you have a total of 7: four regular SATA 3 Gb/s, two SATA 6 Gb/s controlled by the Intel chipset, and another SATA 6 Gb/s by Marvell.
    Wrapping things up, we have the I/O panel ports. Starting from the left we have a combo PS/2 port and S/PDIF optical out, a Clear CMOS button, an eSATA port running under Marvell, and 2 USB 2.0 and FireWire ports controlled by VIA. Moving across, there are 2 Gigabit LAN ports by Realtek 8111E, plus 2 USB 2.0 and 2 USB 3.0 ports running under the NEC D720200 controller. Next, we have DVI and HDMI output connections and 6 analog audio ports by Realtek ALC892.
    Once you are in the BIOS, you will find the new and improved Click BIOS: nice and professional looking, with faster navigation, and easier to use.
    To test the performance of the new PCIE Gen3 technology, we will be using this Photofast PCIE SSD device http://www.photofast.tw/comboproducts.asp?pid=1.
    We will be connecting the PCIe SSD device to the GD80's Gen3 and Gen2 slots and comparing the results using the ATTO disk benchmark. I ran the benchmark 9 times for each of the two sets of tests.
    System Configuration as follows:
    Processor:  Intel Core i7 2600K at default stock speed
    Memory: Kingston HyperX Genesis Grey 2x2GB DDR3 2133MHz
    SSD: OCZ Vertex 2
    Motherboard: MSI Z68A-GD80 G3
    Software: CPU-Z 1.58, Windows 7 Ultimate 64-bit with SP1, and the latest ATTO software (1.47, I think).
    First test, Photofast running on Gen2 (third VC slot).  One screenshot from the 9 tests made
    Second, using the Gen 3 connectivity. One screenshot from the 9 tests made
    Below is the summary of all the tests done on the board with the Photofast SSD PCIE.
    That’s probably it. More forum posts to come 

    Quote
    I have an eSATA HDD, a Seagate FreeAgent Xtreme 1.5 TB, which cannot be recognized by Win7
    Did you have your previous board's BIOS set to AHCI or IDE? You need to set this BIOS the same.
    Firstly, insert only one RAM module in the slot closest to the CPU and remove the rest. Then do a full CMOS clear >>Clear CMOS Guide<< and also remove the motherboard battery.
    What BIOS do you currently have? The initial BIOS releases were plagued by CPU throttling, which has been fixed in the current beta BIOS releases.

  • Using a LAN/HP-IB gateway (Agilent) with an NI GPIB PCI card

    Need to be able to use an E2050 LAN/HP-IB gateway (Agilent) together with an NI GPIB PCI card on the same workstation (Windows). I need 2 separate GPIB buses but also need the remote functionality of the LAN/HP-IB box. I would use NI ENET, but it will not work together with an installed GPIB card; it's one or the other. I have read information on using an HP card with an NI card in the same system but cannot get it to work with the LAN box. I do not have an HP card to install in my computer to see if that works or not. Has anyone attempted a similar task?

    Hello-
    It is possible to use a PCI and ENET board in Windows 9X/ME. The solution for Windows NT/2000 is to use the new VISA server technology using two computers.
    Computer A:
    This is a computer that will not use a GPIB plug-in board, but is on the network. Install the GPIB-ENET/100 driver and VISA 2.6. Then, set the appropriate VISA server permissions and run VISAServer.exe (see the VISA Programmer's Reference for more details ni.com/manuals).
    Computer B:
    Install the PCI-GPIB driver and VISA 2.6. Use Remote VISA to access the ENET/100 through the other computer's driver.
    This is also a very useful technology for remotely controlling numerous instruments throughout a distributed laboratory.
    Randy Solomonson
    Application Engineer
    National Instruments

  • Will a PCI-1428 interface with a DAQ PCI-6534?

    I have a Camera Link camera with a PCI 1428 card and was wondering if it will work with a DAQ PCI-6534. Thanks!

    Hello,
    Both the 1428 and the 6534 have Real-Time System Integration (RTSI) connectors on them. Using this bus you can route signals (timing, triggering, etc.) between the boards directly instead of going through the PCI bus. This will simplify your coding and improve the reliability and performance of your application. You'll need an RTSI ribbon cable to connect the two (or more) boards together.
    Hope this helps!
    Best regards,
    Yusuf C.
    Applications Engineering
    National Instruments

  • Does the PCI-6534 work on Win7 32-bit with NI-DAQ (Legacy) 7.5, and if so, via NI-VISA or NI-PXI?

    I have a PCI-6534 card installed in a Windows 7 32-bit machine. I've installed:
    NI-DAQ (Legacy) 7.5.0
    NI-VISA 5.4.1
    NI-488 3.1.2
    The PCI-6534 shows up in Measurement & Automation Explorer. However, it is not showing up under VISA and I can't communicate with it through VISA: the only VISA devices that show up from Python are the serial ports. I'm told by other engineers that this SHOULD work. If that is true, what am I doing wrong? What can I check next?
    I'm beginning to think VISA is not supported on the PCI-6534. If not, how do I communicate with this card from Python 2.7.6? Is there a DLL, or do I go through NI-PXI?

    Why can't you use Python with the supported API, DAQmx? Who told you to use VISA?
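    For what it's worth, the usual route from Python is the DAQmx driver (today via the nidaqmx wrapper package) rather than VISA. The device and channel names below are hypothetical, and the helper is plain Python showing how per-line states become the port-wide words a digital write expects:

```python
def pack_port(lines):
    """Pack a list of booleans (line 0 = bit 0) into one integer
    port word, as consumed by a port-wide digital write."""
    word = 0
    for bit, level in enumerate(lines):
        if level:
            word |= 1 << bit
    return word

# Hypothetical DAQmx usage (needs NI hardware and the nidaqmx package;
# "Dev1/port0" is an assumed device name, not from this thread):
#   import nidaqmx
#   with nidaqmx.Task() as task:
#       task.do_channels.add_do_chan("Dev1/port0")
#       task.write(pack_port([True, False, True] + [False] * 29))
```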

  • Output overshooting and undershooting problem of PCI-6534

    Hi,
    I'm using the digital outputs of a PCI-6534 to control various devices, such as shutters and a CCD camera. But the output is not a clean TTL signal; several snapshots are attached below.
    I also tried to terminate each output channel with a Schottky diode following the scheme in the manual, but somehow it does not help.
    What could be the reason for this problem? Does this indicate the hardware itself is broken?
    Any help would be appreciated
    Cheers,
    Yibo 
    Attachments:
    20150603_TTL_1.gif ‏5 KB
    20150603_TTL_2.gif ‏5 KB

    Hi Matt,
    Thank you for your reply. It turns out that when you terminate the output channel, you have to place the diode termination very close to the load. I attached my latest result; it looks much improved. Regarding your question: yes, it does cause problems sometimes. My LabVIEW program will lock up, and then the card (PCI-6534) cannot pass the self-test in NI MAX and shows Error -200020, so I have to restart the computer. We are still not sure which part triggers the crash. It is quite annoying.
    Cheers,
    Yibo
    Attachments:
    20150612_terminate @ ocsilloscope.gif ‏5 KB
    20150605_PCI-6534 self test failed_labview crash.png ‏1173 KB

  • PCI-6534 Device Driver Availability

    I am thinking of using a PCI-6534 for continuous streaming of 32-bit digital patterns from a host PC, clocked by an external clock at 1-10 MHz. From the benchmarks it seems that the PCI-6534 will meet these needs. My question involves device drivers. Windows is not an option for the system I am constructing, as we have stability requirements that border on the need for a hard real-time system. Is there enough information available about the card to allow a device driver to be written for an operating system such as Linux or RTLinux? Alternatively, are such drivers already available, perhaps for an RTOS such as QNX or VxWorks?

    National Instruments does not provide drivers for the Operating Systems you listed.
    However, there is a lot of information about the PCI-DIO-32HS and register-level programming available on the National Instruments website at:
    www.ni.com/support/daqsupp.htm
    View the Register Level Programming link; under it there is information about programming the card through its registers. Also view the DAQ Register Programming link on that site for examples and header files.
    In addition you may want to visit http://www.ni.com/linux/daq_comedi.htm
    where National Instruments provides information and a link to the Comedi group which has developed a Driver for Linux that has had success with the PCI-DIO-32HS.
    Hope this helps you get started.

  • I want to check all functions of the PCI 6534. I have read the user manual and have some memory-related questions. Please help me with that.

    I want to check all functions of the PCI 6534. I have read the user manual and have some memory-related questions. Please help me with them.
    (1) If I am using continuous output mode and the generated data is less than 32 MB, how can I preload the onboard memory? I want to load all my data into the onboard memory first, and only then start the transfer between the 6534 and the peripheral. Is that possible? It should be, as far as I can tell. As I understand it, in the normal procedure the 6534-to-peripheral transfer and the PC-buffer-to-onboard-memory transfer run in parallel, but I don't want that. Please tell me how to do this.
    (2) Similarly, in a finite input operation (pattern I/O), is it possible to let the acquisition fill the memory first and only then read it? I think the PC buffer is loaded automatically while the 6534 acquires data, and the DIO Read VI then transfers the PC buffer to the application buffer. If that is true, I do not want this behavior. Is it possible to avoid?
    (3) One more question: if I am using normal operation, the onboard memory is used by default, right? Now if I want to use DMA and I have 512 bytes of data to acquire, how will it work and what should I do? Please tell me the sequence of operations. As far as I know, in normal DMA operation there is a 32-byte FIFO, so I can only read after 32 bytes have been acquired. How will I know that a 32-byte acquisition is complete? Next, if I want to acquire each byte separately using DMA interrupts, what should I do? Please give me the name of a source with details about the onboard memory and the DMA process of the 6534 specifically.
    (4) In 6534 pattern input mode, if I want to acquire only 10 bits of data and don't want to waste any data lines, what should I do?

    Hi Vishal,
    I'll try to answer your questions as best I can.
    1) It is definitely possible to preload data to the 32MB memory (per group) and start the acquisition after you have preloaded the memory. There are example programs on ni.com/support under Example Code for pattern generation and the 6534 that demonstrate which functions to use for this. Also, if your PC memory buffer is less than 32MB, it will automatically be loaded to the card. If you are in continuous mode however, you can choose to loop using the on-board memory or you can constantly be reading the PC memory buffer as you update it with your application environment.
    2) Yes, your data will automatically be loaded into the card's onboard memory. It will however be transferred as quickly as possible to the DMA FIFO on the card and then transferred to the PC memory buffer through DMA. It is not going to wait until the whole onboard memory is filled before it transfers. It will transfer throughout the acquisition process.
    3) Vishal, searching the example programs will give you many of the details of programming this type of application. I don't know your application software, so I can't give you the exact functions, but it is easiest to look at the examples on the net (or the shipping examples with your software). Now if you are acquiring 512 bytes of data, you will start to fill your onboard memory and, at the same time, data will be sent to the DMA FIFO. When the FIFO is ready to send data to the PC memory buffer, it will (the exact algorithm depends on many things, such as how large the DMA packet is).
    4) If I understand you correctly, you want to know whether you waste the other 6 bits if you only need to acquire on 10 lines. The answer is yes. Although you are only acquiring 10 bits, the data is acquired as a complete 16-bit word, packed, and sent using DMA. Your application software (the NI-DAQ driver) will filter out the 6 non-data bits.
    Hope that answers your questions. Once again, the example code on the NI site is a great place to start this type of project. Have a good day.
    Ron
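    Ron's point in (4) can be sketched in a couple of lines: the card transfers full 16-bit words, so the application side simply masks off the 6 non-data bits. Python here purely for illustration; in practice this filtering happens inside the NI-DAQ driver:

```python
def extract_10bit(words):
    """Keep only the 10 data bits of each acquired 16-bit word;
    the upper 6 bits are padding, not data."""
    return [w & 0x3FF for w in words]

# A word with all 16 bits set still yields only the 10 data bits.
assert extract_10bit([0xFFFF, 0x0155]) == [0x3FF, 0x155]
```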

  • Is there any successful example of USRP RIO with PCIe adapter?

    Hi All,
       Can I ask who has had successful experience achieving a high IQ rate using USRP RIO with the PCIe adapter? If so, what PC were you using?
       I am working on the USRP RIO. I tried to run the LabVIEW code for the USRP-2920 on the USRP-2950R but could not achieve a high IQ rate. The PC I am using has a mini-ITX motherboard with an H97 chipset, 16 GB of 1600 MHz DDR3 memory, a 256 GB SSD, and an i7-4790K quad-core 4.0 GHz CPU. Even when I connect the USRP-2920 to this PC, I cannot achieve 20 MS/s. When I was using a MacBook Pro with an i7-3840QM, I could achieve a 25 MS/s IQ rate, but there is no way to install a PCIe adapter in a laptop.
           Thank you very much.

    I have a benchmarking utility, attached here, that can run through a number of IQ rates and write sizes (samples per Write call). Here are the results when I ran with a USRP-2940R over a x4 MXIe (PCIe) link. I configured a 2-channel continuous generation for 10 s for each test.
    Device connected over PCIe
    Conducting benchmarks for continuous writes (CDB, 16-bit).
    Samples per Write call:
    IQ Rate (S/s)    363   726  1000  3000  6000  10000  100000
    1,000,000          0     0     0     0     0      0       0
    2,500,000          0     0     0     0     0      0       0
    5,000,000          0     0     0     0     0      0       0
    10,000,000         0     0     0     0     0      0       0
    20,000,000        39     0     0     0     0      0       0
    40,000,000        18    18     0     0     0      0       0
    60,000,000         9    36    15     2     0      0       0
    80,000,000        22    17    14     0     2      0       0
    90,000,000        15    22    15     9     9      9       9
    95,000,000        21    21    22     9     9      9       2
    97,500,000        16    14    31     9     9      9       9
    100,000,000       15    34    25     9     9      9      10
    The numbers in the table are underflow counts, so 0 is what you want. As you can see, I can sustain rates up to 80 MS/s for at least 10 s IF my write size is big enough. That is, if I write at least 10,000 samples with each call to niUSRP Write, I don't see underflows. That's two channels, so we're talking 640 MB/s over the bus.
    Here are some tips to increase your Tx streaming throughput:
    0) Don't use a continuous generation at all.  For many applications you don't need to stream continuously- a finite transmission (and you can loop a finite transmission) will do and you will basically not underflow in that case if you provide all the data up front.
    1) As you can see from the chart, the bigger the data buffer you provide in each Write call, the faster you can stream.
    2) Write sizes in multiples of the maximum packet size seem to work well.  That number is 2044 for the USRP-294x/5x series (although that may change in future driver releases).  Try sending bursts of 10220 samples.
    3) If your application allows it, set a Start Trigger Time a little in the future.  Then start writing data before the device starts transmitting.  For example, set the Start Trigger Time to the (current device time + 1 second).  Then start writing data and you will have a second to pre-fill the on-device buffers.  This will substantially reduce the number of underflows.
    4) Be sure to do your data processing outside of your write loop, to keep the write loop filling the pipeline as quickly as possible.
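    The bus-rate and burst-size arithmetic above is easy to double-check. The 4 bytes/sample figure assumes complex 16-bit I/Q, and 2044 is the maximum packet size quoted above for the USRP-294x/5x series:

```python
BYTES_PER_IQ_SAMPLE = 4       # 16-bit I + 16-bit Q per complex sample
MAX_PACKET_SAMPLES = 2044     # per the tip above; may change with driver releases

def tx_rate_bytes(iq_rate, channels):
    """Host-to-device byte rate for a continuous Tx stream."""
    return iq_rate * channels * BYTES_PER_IQ_SAMPLE

rate = tx_rate_bytes(80_000_000, 2)   # 640,000,000 bytes/s, i.e. 640 MB/s
burst = 5 * MAX_PACKET_SAMPLES        # 10220 samples, a packet-aligned burst
```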

  • Setting parameters for synching pci-6534 cards via RTSI bus

    I have been performing high-speed,
    buffered, looping output with one pci-6534 card.  I am now adding
    a second 6534 card that I need to sync to the first card via the RTSI
    bus.  I have successfully used the RTSI bus to see the master REQ1
    and ACK1 signals on those channels of the slave (seen at a connector
    block), using the "RTSI control" vi.  I simply set the master and
    slave as transmitter and receiver, respectively, over the RTSI bus.
    The question is: Once I have used the RTSI control vi to share the
    necessary signals, do I need to do anything in my "dio config," "dio
    write," or "dio start" vi's in the looping output code for the 2nd 6534
    card to let it know that its REQ, ACK, STPTRG, and CLK signals are
    coming from the bus?  For example, in the buffered pattern looping
    output vi, the "dio start" vi has choices of "internal" or "RTSI
    connection" for its clock.  My master board's code simply uses the
    internal.  Does my slave need to be set to RTSI connection, or,
    once I have shared the clock signal over the RTSI bus, is that
    effectively the internal clock for my slave too?
    I apologize if this question is confusing. Unfortunately, so is the issue.

    Hello bwz,
    When you are performing synchronization across the RTSI bus you need to specify that the slave device should get its clock signals from there.  You would use the digital clock config VI to do this.  If you look in the example finder, you will find synchronization example VIs that do the same kind of thing for analog input.  To find the examples, open the example finder by going to Help >> Find Examples >> Hardware Input and Output >> Traditional DAQ >> Multiple Device. 
    If you are just getting started developing your application, you may want to consider using DAQmx.  There are many more examples available to look at for this type of synchronization.  To find these examples in the example finder go to Hardware Input and Output >> DAQmx >> Synchronization >> Multi-Device.  To use your PCI-6534 with NI-DAQmx, you must have version 7.4 or later.  The newest version is DAQmx 7.5.  You may also want to look at this tutorial about synchronization with DAQmx. 
    I hope this helps!
    Laura

  • How to use PCI-6534 high-speed DIO to count the number of pulses acquired

    Hi all,
    I have a PCI-6534 high-speed DIO card. My requirement is to count the number of pulses coming in from an energy meter, which generates pulses at a frequency of around 8 MHz. I am attaching the VI I am using; I could not really count all the pulses coming in. Right now I am using a single line, but the requirement is to develop this for 7 lines. I do not know where I am going wrong. Can any of you help me in this regard?
    Thanks
    Anil Punnam
    Attachments:
    Read Dig Chan-Change Detection_stop.vi ‏120 KB

    Sorry, not near a LV PC so can't look at your vi now.  Are you limited to using only the 6534?  If all you need to know is the count of pulses from each of the 7 ~8MHz sources, it seems like the amount of data storage required with a 6534 is terribly inefficient.  Since the 7 sources are unlikely to be synchronized in any way and they are each at ~8MHz, you're looking at about 100+ million transitions per second with change detection.  I don't think the hw can keep up with that.  Even using a constant sampling rate of 20 MHz (which just barely satisfies the Nyquist minimum of 2x 8MHz), it's questionable whether you can keep up with that rate for several minutes.  Even supposing the hw and your PCI bus and software can keep up, there's still a TON of processing to do.  20 MB/sec for 20 minutes = 24 GB! 
    On the other hand, consider the 6602 counter timer board.  Here you would simply set up 7 edge counting tasks, probably without any buffering at all.  At any leisurely pace you want, you can software query the counts of the # of pulses on each of the 7 channels and have an instant answer.  The only issue to deal with is that the counts will rollover when you reach 2**32.  At 8 MHz, this will happen about every 9 minutes.  However, DAQmx provides a nice way to handle this.  There's a property you can query that will tell you if a rollover has occurred.  It automatically resets itself to False after you read it so it's ready to detect the next rollover 9 minutes later.  See my first post in this thread for example.  (Last I knew, only DAQmx does the automatic reset, not traditional NI-DAQ).
    If you can possibly buy a 6602, I'd highly recommend it.
    -Kevin P.
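    Kevin's rollover-flag pattern can be sketched in plain Python. The flag query itself is a DAQmx property read on real hardware; the class below only shows the bookkeeping, with illustrative names:

```python
class RolloverCounter:
    """Accumulate a free-running 32-bit hardware count across rollovers,
    given a rolled-over flag read alongside each raw count."""
    MODULUS = 2 ** 32

    def __init__(self):
        self.total = 0
        self._last_raw = 0

    def update(self, raw_count, rolled_over):
        delta = raw_count - self._last_raw
        if rolled_over:              # the hardware count wrapped past 2**32
            delta += self.MODULUS
        self.total += delta
        self._last_raw = raw_count
        return self.total

# At 8 MHz a 32-bit counter wraps roughly every 2**32 / 8e6 ~ 537 s (~9 min),
# matching the figure quoted above.
```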

  • Possible to drive a stepper motor with PCI-6111?

    Is it possible to drive a stepper motor with PCI-6111?

    Hello Tristan,
    If your stepper motor is TTL compatible, you should be able to control it with one of the two counters on the board. Keep in mind that the stepper will draw a certain amount of power, so before attaching it, track down how much power it consumes and check the specifications of the PCI-6111 to be sure that the counters on the board can deliver that amount. If the stepper takes too much power, you have to use some kind of power drive which can be controlled with digital (TTL) signals or analog signals between -10 V and +10 V.
    Hope this helps.
    Best regards
    RikP
    Application Engineering
    National Instruments

  • How can I verify that the PCI-6534 is using all of its onboard memory?

    When using a PCI-6534 under Windows 98 with NI-DAQ 6.9, I need to transfer a small buffer (8126464 bytes) to the adapter and get it into the onboard memory. The board is configured for 32-bit output, and I'm using handshake mode. DAQ_GET_DEVICE_INFO tells me that there are 33554432 bytes for each group. DIG_Block_Check shows that 2031587 words (8126348 bytes) remain to transfer. Is that in the PC system memory or the PCI-6534 onboard memory? How can I tell when it is safe to reuse the (PC-side) buffer? Also, how big is the FIFO on the PCI-6534?

    When you call DIG_Block_Out, NI-DAQ first downloads your data into the 6534 memory. Only after the download is done (which should happen at about 80 MB/s) it starts outputting data and DIG_Block_Out returns. So it is safe for you to reuse your PC memory buffer as soon as DIG_Block_Out returns.
    This is assuming your entire buffer fits in memory. If it doesn't, NI-DAQ will download all the data that fits and start the transfer, and as more 6534 memory becomes available the DMA channel will fill it up.
    The 6534 has a 16-sample FIFO and an additional 32 MB of memory for each handshaking group. So for 32-bit transfers you can fit 8 MSamples, for 16-bit transfers 16 MSamples (per group), and for 8-bit transfers 32 MSamples (per group) in the 6534 memory.
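    The capacity figures above follow directly from the 32 MB of onboard memory per group; a quick sanity check:

```python
GROUP_BYTES = 32 * 2 ** 20      # 33554432 bytes per handshaking group,
                                # matching the DAQ_GET_DEVICE_INFO figure above

def max_samples(bits_per_sample):
    """Samples that fit in one group's onboard memory."""
    return GROUP_BYTES // (bits_per_sample // 8)

# 32-bit -> 8 MSamples, 16-bit -> 16 MSamples, 8-bit -> 32 MSamples
assert max_samples(32) == 8 * 2 ** 20
```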
