Delay using DIG_Block_In with AT-DIO-32HS

I have placed four sequential DIG_Block_In() calls inside a loop to read input data from my
AT-DIO-32HS in handshake mode.
Before each call the program makes sure the previous transfer has completed by polling
DIG_Block_Check(), which automatically calls DIG_Block_Clear() once the transfer finishes.
The first call reads a small header (6 shorts) that tells me how much data the next call
should fetch. The second call reads that amount of data (which can be rather large, say up
to 2 MB, dynamically allocated). The third call reads a second header and the last call
reads another large block of data. These four operations must be repeated 100 times.
What I see is that the first pass through the loop works fine, but before the second pass
(reading the next "first" header) the system stalls for several seconds (!). Note that the
4 calls use 4 distinct data buffers. I would rather not use double buffering here because
my problem is really sequential and can easily be managed in the way described. Any help?
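
In outline, my loop looks like the sketch below (simplified, not my actual code; the device
and group numbers, the use of the first header word as the size field, and the buffer names
are placeholders, and the group is assumed to be already configured for handshaked input
with DIG_Grp_Config()/DIG_Grp_Mode()):

/* Simplified sketch of the sequence described above (Traditional NI-DAQ C API). */
#include <stdlib.h>
#include "nidaq.h"                          /* Traditional NI-DAQ header (i16/u32 typedefs) */

#define DEV 1                               /* device number from the configuration utility */
#define GRP 1                               /* group configured for handshaked input */

static void wait_done(void)
{
    u32 remaining = 1;
    /* DIG_Block_Check reports the points not yet transferred; once the count reaches 0
     * the transfer is cleared automatically, as noted above. */
    while (remaining != 0)
        DIG_Block_Check(DEV, GRP, &remaining);
}

int main(void)
{
    i16 header1[6], header2[6];

    for (int rep = 0; rep < 100; rep++) {
        DIG_Block_In(DEV, GRP, header1, 6);     /* 1st call: small header (6 shorts) */
        wait_done();

        u32 n1 = (u32)header1[0];               /* size of the next block (placeholder field) */
        i16 *data1 = malloc(n1 * sizeof(i16));  /* dynamically allocated, can approach 2 MB */
        DIG_Block_In(DEV, GRP, data1, n1);      /* 2nd call: first data block */
        wait_done();

        DIG_Block_In(DEV, GRP, header2, 6);     /* 3rd call: second header */
        wait_done();

        u32 n2 = (u32)header2[0];
        i16 *data2 = malloc(n2 * sizeof(i16));
        DIG_Block_In(DEV, GRP, data2, n2);      /* 4th call: second data block */
        wait_done();

        /* ...process the buffers... */
        free(data1);
        free(data2);
    }
    return 0;
}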

Hi Andreasz,
Welcome to LabVIEW!
Any time I start working with a new device I start by searching the examples that ship with LabVIEW.
Go to search examples. Take a look at the examples located in
Hardware Input and Output >>> Solutions >>> 653x Logic Analyzer
There you will find four examples that will get you started.
If you find these lacking, let us know what needs to be changed and we may be able to help.
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction

Similar Messages

  • Why is ACK sometimes deasserted during data acquisition with PCI-DIO-32HS burst mode handshaking?

    My peripheral device sends 32-bit data to the DIO board serially with a 6 MHz PCLK, about 300,000 transfers in total. The behaviour I mentioned in the summary above occurs and causes some data to be missed.
    I know ACK is not always asserted, as noted somewhere in the NI knowledge base, but I would like to understand why it happens, if I can, and whether it is simply unavoidable.
    Is my only option to add buffer memory to my device and have it watch the ACK transitions? Or is there another good way to avoid this problem?

    Hi,
    The burst mode handshaking protocol needs two conditions to be met before data can be transferred: the PCI-DIO-32HS must be ready to transfer data and the external device must be ready to transfer data.
    The ACK line tells the external device when the PCI-DIO-32HS is ready, and the REQ line tells the PCI-DIO-32HS when the external device is ready. When both are ready, data is transferred. This is the nature of handshaking: guaranteed data transfer (when both devices are ready), but not at a guaranteed rate. Handshaking means that the two devices communicate with each other to determine when to transfer data.
    The PCI-DIO-32HS drives its ACK line low because it is busy catching up with the current transfer and is not ready to receive more data at that moment. The ACK line is not something you can control; it is controlled by the PCI-DIO-32HS.
    Your application may be better suited to Pattern I/O if you are not using the handshaking lines, ACK and REQ, to control the flow of data. Pattern I/O does not use handshaking lines and clocks data in on every rising edge of the clock. You may receive an error if your system cannot keep up with the transfer rate.
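
    In case it helps, below is a rough sketch of pattern input with the Traditional NI-DAQ C API, clocked by the external REQ line instead of the two-line handshake. The device and group numbers, the 16-bit group size, and the DIG_Block_PG_Config argument values are assumptions for illustration only; please check DIG_Grp_Config and DIG_Block_PG_Config in the Traditional NI-DAQ Function Reference before using them.

    /* Hedged sketch: externally clocked pattern input (no ACK/REQ handshaking). */
    #include "nidaq.h"                        /* Traditional NI-DAQ header */

    #define DEV   1                           /* device number (assumption) */
    #define GRP   1                           /* group 1 = ports 0,1 here */
    #define NPTS  300000u                     /* points to read */

    int main(void)
    {
        static i16 buffer[NPTS];              /* 16-bit group shown for simplicity */
        u32 remaining = 1;

        DIG_Grp_Config(DEV, GRP, 2, 0, 0);            /* 2-port (16-bit) input group at port 0 */
        DIG_Block_PG_Config(DEV, GRP, 1, 1, 0, 0, 0); /* enable pattern I/O, external REQ clock
                                                         (argument values are assumptions) */
        DIG_Block_In(DEV, GRP, buffer, NPTS);         /* start the buffered transfer */

        while (remaining != 0)                        /* poll until the whole block is in */
            DIG_Block_Check(DEV, GRP, &remaining);

        DIG_Block_Clear(DEV, GRP);
        return 0;
    }

    If the REQ edges arrive faster than the board can move data to memory, you should see an overflow error rather than silently missed points, which is usually easier to diagnose.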

  • How to use a PCI-DIO-32HS board with Macintosh C++

    How can I run the PCI-DIO-32HS board with code from CodeWarrior for a Macintosh G3? The current drivers (V4.8) with Mac C++ support don't support this board.

    Here is a list of compilers supported with NI-DAQ 4.9.4 or earlier, found by searching the www.ni.com/support website:
    Apple MPW 3.3.x
    Mainstay VIP-C 2.0.x
    Metrowerks CodeWarrior Pro 1
    Staz Software FutureBASIC 2.x.x
    Symantec THINK Pascal 4.0.x
    Symantec THINK Project Manager 7.0.x
    Symantec Project Manager 8.0.x
    Zedcor FutureBASIC 1.0.x
    If you are programming in a different environment you will have to use Register Level Programming (RLP). For more information on RLP, visit www.ni.com/support/daqsupp.htm and look under the Resources for register level programming link.

  • Can I use the DIO-32HS in pattern I/O and unstrobed I/O at the same time?

    I work with a PCI-DIO-32HS (programming in LabVIEW) and I want to know if it is possible to configure 3 ports in unstrobed output mode (24 static digital lines) and 1 port in pattern I/O mode that can both read AND write data (not "or"), using the internal clock, in the same VI.
    Moreover, in pattern I/O mode am I obliged to use these port combinations (16 bits: (port 0, port 1) or (port 2, port 3); 32 bits: (ports 0,1,2,3)) and not another one? Can I configure one port as output and input at the same time (for example acquiring and generating data on the falling and rising edges of the internal clock)? Please give me all the possibilities for line, port and group directions (input, output, bidirectional) with more details (I'm not sure what I can do from the 653X user manual).
    Also, I cannot observe the internal clock signal (TTL square wave) on the REQ pin with my Tektronix oscilloscope, as described in the user manual. Why?
    Thank you for your answers. Bye.

    Hi,
    No, you cannot configure 3 ports together. On the PCI-DIO-32HS there are two main port groups, each consisting of two ports: group 1 (ports 0,1) and group 2 (ports 2,3). You can configure two 8-bit groups (starting at ports 0 and 2), two 16-bit groups (ports 0,1 and ports 2,3), or one 32-bit group (ports 0,1,2,3), and you can configure each group independently; a short configuration sketch is shown below.
    Regards,
    RamziH.
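
    As an illustration of configuring the groups independently, here is a rough Traditional NI-DAQ C sketch in which ports 2 and 3 are used as unstrobed (static) outputs while group 1 (ports 0,1) performs buffered pattern input on the internal clock. The device number and the timebase/interval values are placeholders, and the assumption that ports outside the pattern group remain available for immediate I/O should be confirmed against the 653X user manual.

    /* Hedged sketch: one 16-bit pattern-input group plus static outputs on ports 2 and 3. */
    #include "nidaq.h"

    #define DEV 1                                    /* device number (assumption) */

    int main(void)
    {
        static i16 patbuf[1000];
        u32 remaining = 1;

        /* Ports 2 and 3: unstrobed (static) digital output. */
        DIG_Prt_Config(DEV, 2, 0, 1);                /* port 2, no handshaking, output */
        DIG_Prt_Config(DEV, 3, 0, 1);                /* port 3, no handshaking, output */
        DIG_Out_Prt(DEV, 2, 0x55);                   /* write a static pattern */
        DIG_Out_Prt(DEV, 3, 0xAA);

        /* Group 1 = ports 0,1 (16 bits): buffered pattern input on the internal clock. */
        DIG_Grp_Config(DEV, 1, 2, 0, 0);             /* 2 ports starting at port 0, input */
        DIG_Block_PG_Config(DEV, 1, 1, 0, 3, 10, 0); /* internal REQ clock; timebase/interval
                                                        values are placeholders */
        DIG_Block_In(DEV, 1, patbuf, 1000);

        while (remaining != 0)
            DIG_Block_Check(DEV, 1, &remaining);

        DIG_Block_Clear(DEV, 1);
        return 0;
    }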

  • Digital Handshaking with two PCI-DIO-32HS Cards

    Hardware: two PCI-DIO-32HS Cards
    Software: LabVIEW 5.1, NI DAQ 6.6
    Problem:
    I'd like to do burst-mode digital handshaking with two PCI-DIO-32HS cards.
    One card will send the bit stream while the other receives it.
    Suppose I want to use burst handshake mode:
    How should I wire the connections?
    Where should I wire the REQ and ACK lines from the sending card?
    Should I wire REQ from one card to REQ of the other card?
    Also, how do I configure a LabVIEW VI to do burst handshaking mode?
    Can anyone send me a VI that can do this?
    Thanks a lot.

    Matt,
    I would recommend using the DIdoubleBufPatternGen.C example that ships with NI-DAQ. You can find it in your \NI-DAQ\Examples\VisualC\Di folder. If you don't have this example on your machine, you can get it by running NI-DAQ Setup and selecting support for C/C++.
    This example uses double buffering to let you continuously acquire data from your card. Data is transferred only when a full half buffer is ready. You can set how long to acquire data by setting the number of half buffers to read, or by modifying the read loop's conditional parameters to fit your needs. See the NI-DAQ help on how to set your REQ pulse rate to 100 kS/s. A compressed sketch of the double-buffered read loop is shown below.
    Nick W.
    www.ni.com/ask
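
    For orientation, here is a compressed sketch of the double-buffered read loop that the example demonstrates, written against the Traditional NI-DAQ C API. It is not the shipped example itself: the device/group numbers, buffer sizes, and the exact DIG_DB_* argument values are assumptions, so check them against the NI-DAQ Function Reference Manual.

    /* Hedged sketch of a double-buffered digital read loop (Traditional NI-DAQ). */
    #include "nidaq.h"

    #define DEV      1                       /* device number (assumption) */
    #define GRP      1                       /* group used for the transfer */
    #define HALF_PTS 5000u                   /* points per half buffer (illustrative) */
    #define N_HALVES 20                      /* how many half buffers to read */

    int main(void)
    {
        static i16 circBuf[2 * HALF_PTS];    /* circular buffer filled by the driver */
        static i16 halfBuf[HALF_PTS];        /* destination for each half transfer */
        i16 halfReady;
        int i;

        DIG_Grp_Config(DEV, GRP, 2, 0, 0);   /* 16-bit input group on ports 0,1 */
        /* ...DIG_Grp_Mode() or DIG_Block_PG_Config() here, depending on the mode... */
        DIG_DB_Config(DEV, GRP, 1, 0, 0);    /* enable double buffering */
        DIG_Block_In(DEV, GRP, circBuf, 2 * HALF_PTS);

        for (i = 0; i < N_HALVES; i++) {
            do {                             /* wait until half of the buffer is full */
                DIG_DB_HalfReady(DEV, GRP, &halfReady);
            } while (!halfReady);
            DIG_DB_Transfer(DEV, GRP, halfBuf, HALF_PTS);
            /* ...process halfBuf... */
        }
        DIG_Block_Clear(DEV, GRP);           /* stop the acquisition */
        return 0;
    }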

  • Using OleDbDataAdapter Update with InsertCommands and getting blocking locks on Oracle table

    The following code snippet shows the use of OleDbDataAdapter with InsertCommands. This code is producing many inserts on the Oracle table and is now suffering from contention, all on the same table. How does the OleDbDataAdapter produce inserts from a DataSet? What characteristics do these inserts inherit in terms of batch behavior, or do they naturally contend for the same resource?
    oc.Open();
    for (int i = 0; i < xImageId.Count; i++)
    // Create the oracle adapter using a SQL which will not return any actual rows just the structure
    OleDbDataAdapter da =
       new OleDbDataAdapter("SELECT BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, " +
       "DIRECT_INVOICING, EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE FROM sysadm.PS_RI_INV_PDF_MERG WHERE 1 = 2", oc);
    // Create a data set
    DataSet ds = new DataSet("documents");
    da.Fill(ds, "documents");
    // Loop through invoices and write to oracle
    string[] sInvoices = invoiceNumber.Split(',');
    foreach (string sInvoice in sInvoices)
    {
        // Create a data set row
        DataRow dr = ds.Tables["documents"].NewRow();
        // ... map the data (elided in the original post)
        // Populate the dataset
        ds.Tables["documents"].Rows.Add(dr);
    }
    // Create the insert command
    string insertCommandText =
        "INSERT /*+ append */ INTO PS_table " +
        "(SEQ_NBR, BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, DIRECT_INVOICING, " +
        "EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE) " +
        "VALUES (INV.nextval, :BUSINESS_UNIT, :INVOICE, :ASSIGNMENT_ID, :END_DT, :RI_TIMECARD_ID, :IMAGE_ID, :FILENAME,  " +
        ":BARCODE_LABEL_ID, :DIRECT_INVOICING, :EXCLUDE_FLG, :DTTM_CREATED, :DTTM_MODIFIED, :IMAGE_DATA, :PROCESS_INSTANCE)";
    // Add the insert command to the data adapter
    da.InsertCommand = new OleDbCommand(insertCommandText);
    da.InsertCommand.Connection = oc;
    // Add the params to the insert
    da.InsertCommand.Parameters.Add(":BUSINESS_UNIT", OleDbType.VarChar, 5, "BUSINESS_UNIT");
    da.InsertCommand.Parameters.Add(":INVOICE", OleDbType.VarChar, 22, "INVOICE");
    da.InsertCommand.Parameters.Add(":ASSIGNMENT_ID", OleDbType.VarChar, 15, "ASSIGNMENT_ID");
    da.InsertCommand.Parameters.Add(":END_DT", OleDbType.Date, 0, "END_DT");
    da.InsertCommand.Parameters.Add(":RI_TIMECARD_ID", OleDbType.VarChar, 10, "RI_TIMECARD_ID");
    da.InsertCommand.Parameters.Add(":IMAGE_ID", OleDbType.VarChar, 8, "IMAGE_ID");
    da.InsertCommand.Parameters.Add(":FILENAME", OleDbType.VarChar, 80, "FILENAME");
    da.InsertCommand.Parameters.Add(":BARCODE_LABEL_ID", OleDbType.VarChar, 18, "BARCODE_LABEL_ID");
    da.InsertCommand.Parameters.Add(":DIRECT_INVOICING", OleDbType.VarChar, 1, "DIRECT_INVOICING");
    da.InsertCommand.Parameters.Add(":EXCLUDE_FLG", OleDbType.VarChar, 1, "EXCLUDE_FLG");
    da.InsertCommand.Parameters.Add(":DTTM_CREATED", OleDbType.Date, 0, "DTTM_CREATED");
    da.InsertCommand.Parameters.Add(":DTTM_MODIFIED", OleDbType.Date, 0, "DTTM_MODIFIED");
    da.InsertCommand.Parameters.Add(":IMAGE_DATA", OleDbType.Binary, System.Convert.ToInt32(filedata.Length), "IMAGE_DATA");
    da.InsertCommand.Parameters.Add(":PROCESS_INSTANCE", OleDbType.VarChar, 10, "PROCESS_INSTANCE");
    // Update the table
    da.Update(ds, "documents");

    Here is what Oracle is showing as blocking locks, and the SQL that has been identified with each of the SIDs. I'm not sure why there is contention; there are no triggers or joined tables in this piece of code.
    Here is the SQL all of the SIDs below are running:
    INSERT INTO sysadm.PS_RI_INV_PDF_MERG (SEQ_NBR, BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, DIRECT_INVOICING, EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE) VALUES (SYSADM.INV_PDF_MERG.nextval,
    :BUSINESS_UNIT, :INVOICE, :ASSIGNMENT_ID, :END_DT, :RI_TIMECARD_ID, :IMAGE_ID, :FILENAME, :BARCODE_LABEL_ID, :DIRECT_INVOICING, :EXCLUDE_FLG, :DTTM_CREATED, :DTTM_MODIFIED, :IMAGE_DATA, :PROCESS_INSTANCE)
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1150 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1156 (BTSUSER,biztprdi,BTSNTSvc64.exe) in instance FSLX3
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 6 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX2
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1726 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX2
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 2016 (BTSUSER,biztprdi,BTSNTSvc64.exe) in instance FSLX2

  • I've deleted Adobe Reader 11 and rebooted and reinstalled Adobe Reader 11 and I still get the error message that 'Adobe Reader is blocked because it is out of date'. Using Windows XP with the latest updates (SP3).

    Screenshots attached to email replies will not make it back to the forum; you need to login to the forum and post it in your topic using the camera icon in the editor.
    Google Chrome is a problem:
    if you use Chrome's own PDF viewer, the results are unpredictable.
    if you use the Adobe Reader plugin with Chrome, it may reject (block) it if it is not the latest version.  Reader 11.0.08 is the latest version for Windows XP, but Chrome may insist on the current version 11.0.10.
    My suggestion: use a different browser!

  • How to use the PCI-DIO-32HS counter

    I have to create an SPI interface and I'd like to use a counter, but the card does not respond, so LabVIEW hangs. Do you know if there is a programmable/configurable counter on this card?

    Hello,
    The PCI-DIO-32HS does have a counter onboard. However, that counter is only used to generate an internal clock for pattern generation; you cannot program it directly in LabVIEW. If you need a counter, I would look at using an M Series board or a 660x board.
    Best regards,
    Justin T.

  • I am unable to use my keyboard without a delay, how do I fix this

    I am unable to use my keyboard without a delay. How do I fix this?

    when just using safari to watch streaming television, my computer gets really slow, and locks up, and the dock appears blank
    Please read this whole message before doing anything.
    This procedure is a diagnostic test. It’s unlikely to solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
    The purpose of the test is to determine whether the problem is caused by third-party software that loads automatically at startup or login, or by a peripheral device. 
    Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards. Boot in safe mode and log in to the account with the problem. Note: If FileVault is enabled, or if a firmware password is set, or if the boot volume is a software RAID, you can’t do this. Ask for further instructions.
    Safe mode is much slower to boot and run than normal, and some things won't work at all, including sound output and Wi-Fi on certain iMacs. The next normal boot may also be somewhat slow.
    The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin. Test while in safe mode. Same problem? After testing, reboot as usual (i.e., not in safe mode) and verify that you still have the problem. Post the results of the test.

  • I can not add a new credit card to my iPad without having to block directly with the bank card I'm using.

    What does this mean..?
    Gaby0526 wrote:
    ...  without having to block directly with the bank card I'm using.
    Changing Account Information  >  http://support.apple.com/kb/HT1918
    Accepted forms of payment  >  http://support.apple.com/kb/HT5552
    If necessary...
    Contact iTunes Customer Service and request assistance
    Use this Link  >  Apple  Support  iTunes Store  Contact

  • How to... snapshot with delay using Webcam Center?

    Hi all,
    I was wondering if I could take snapshots with a certain delay using Webcam Center?
    Meaning, right now I click on the snapshot button and have essentially 0 seconds to move before the picture is taken.
    And if that's not possible using Webcam Center, do you know of any other software that does that?
    Thanks for your time.

    I have done this in the past by using a command-line version of PuTTY called plink.exe
    http://www.chiark.greenend.org.uk/~sgtatham/putty/
    Call plink.exe with the System Exec.vi, passing any ssh commands as command-line parameters.
    - James
    Using LV 2012 on Windows 7 64 bit

  • Issue in File to RFC to File Scenario with BPM using Block Step

    Hi Everybody,
    I am doing a File to RFC to File scenario for multiple records with a BPM using a Block step. The file message is getting posted, and after that the message gets stuck in the qRFC Monitor (Inbound Queue).
    After seeing the message in the Inbound Queue, I try to execute and release the message, but when I execute the LUW it says "Function module doesn't exist or EXCEPTION raised" in the Inbound Queue.
    Could somebody explain what this means and how to release the stuck message in the queue?
    Thanks and Regards,
    N.Jayanth Kumar

    Hi Rajesh,
    After going through the blog, I looked at the trace messages. It says
    "The exception occurred (program: CL_SWF_XI_INBOUND=============CP, include CL_SWF_XI_INBOUND=============CM00F, line: 19)"
    Regards,
    N.Jayanth Kumar

  • PCI DIO 32HS (6533) Suddenly giving "The device is not responding to the first IRQ level" error, and no longer functioning.

    Greetings NI folks,
    I'm an oceanographer, and I have a sidescan sonar data acquisition computer running Windows XP SP2 and NI-DAQ 7.0 (Legacy). For several years this machine has worked flawlessly, but today I booted it up to test the system for an upcoming job and got some strange errors in our sonar program. I traced the problem to our NI 6533 PCI-DIO-32HS card. I launched NI Automation Explorer to check that the card was responsive, and when I click "test panel" I get an error: "The device is not responding to the first IRQ level." Continue (yes/no). If I click yes, I can test the digital I/Os, but nothing happens and all the tests fail (nonresponsive). I tried moving the card to another PCI slot, tried forcing it to a specific IRQ that was unused by anything else, and finally tried moving it to another computer that had never been used with the DIO card. I'm still getting the error and the card is nonresponsive. I'm at the limit of my abilities and would like to know if there's anything else I can do, or whether we should send the card back to NI for repair/diagnosis.
    Thanks.

    Duplicate Post
    Best Regards
    Hani R.
    Applications Engineer
    National Instruments

  • PCI-DIO-32HS (PCI-6533) setup problem

    Hello
    I am in the process of setting up a Windows XP-based LabVIEW 7.1 system and I am encountering a frustrating problem. Just to make sure I provide enough details, I'll describe what I've done so far, step by step (sorry if this gets tedious):
    First, I installed Labview and the NI-DAQ 7.3 drivers. I powered down the system and installed two PCI cards: a PCI-6031E and a PCI-DIO-32HS (PCI-6533) in PCI slots 1 and 2, respectively. I powered the system back up, went into MAX and configured the cards as follows:
    PCI-6031E: Device 1; AI: Polarity/Range=-10.0V - +10.0V, Referenced Single Ended; AO: Polarity=Bipolar; Accessory=SCB-100
    PCI-DIO-32HS: Device 2; Accessory=SCB-68
    I then started up LabVIEW and ran my VI. This VI has been in use for 2 years now on the same NI hardware, so it's been well-tested and works great on other systems. However, when I run it on this system, the PCI-DIO-32HS spits out an error, with "Digital Buffer Write" as the source, and with a code of -10843 (buffer underflow).
    What's interesting is that I had this exact same problem when I was setting this system up in Mac OS 9. That time, I realized that the problem could have been due to the fact that I installed the hardware before I installed the software, so there may have been problems communicating with the device. By uninstalling everything and then re-installing it in the proper order, I solved the problem and was able to run the VI flawlessly. I'm assuming that these two problems are related in their nature, but this time around I was very careful to make sure that I did all of the setting up properly (I did it twice just to make sure. It did not work either time), so I'm not sure what could be the exact source of this one.
    Please let me know if you have any ideas as to what the source of this problem might be. Like I said before, I think there's probably a resource problem that's causing a communication failure which results in no data being sent to the DIO card (hence the buffer underflow error), but I can't figure out where to look for such a problem or how to fix it. Obviously, I'm rather new to Labview and everything about it, so the help is greatly appreciated.
    Thanks!

    Hi,
    Thanks for the reply. I have run the test panels, and I have not generated any errors in them. I've verified that I can definitely do output, because LEDs on my equipment turn on when I turn on output on certain channels.
    So, I agree that the problem lies in the VI. I was not the author of the VI, however, so I'm not sure where to look. The author was also kind enough to have not provided any documentation. What would be a good example VI to run? I've never looked at any of them.
    As for how the program works, I don't believe there's any actual input coming back into the DIO-32HS. The system is used for electrophysiology. The DIO sends a signal to flash LEDs at given intervals. Electrodes then pick up an electrical signal from the retina of a mouse, which is sent to the DAQ card and written to a file. I have run complete tests, and proper data files were generated and contained expected voltage values. The only part that's not working right now is that the LEDs aren't flashing due to this error.
    I did some digging around in the program, but I couldn't come up with much. I verified that the program expects the DIO card to be Device 2, so there's no problem there. Aside from that, I couldn't find anything that seemed like it would apply.
    Thanks for your help! I have no experience with LabVIEW, yet I've found myself placed in the "LabVIEW expert" position over here, so I've kind of been forced into a sink-or-swim type crash course where I learn as I go.

  • Who has had a PCI-DIO-32HS card fail, ever?

    I've looked through the forum somewhat and I've found little concerning the card itself failing. For instance, I found something about the card failing due to driving it with a TTL high before it was powered, but that was about it as far as actual death of the card. Hence, I'm wondering whether others besides myself (and therefore my design using the PCI-DIO-32HS) have killed a PCI-DIO-32HS DAQ card, and what was determined to be the root cause in your case(s).
    Also, by chance, did your card's death also take down the PC such that it wouldn't POST (Power On Self Test) until the card was removed?
    Thank you all.

    First, the issue has not been resolved yet. I still have boards dying. At $1500 a pop, this issue needs to be resolved.
    I was mistaken about two things said earlier: 1) The G card has now failed; therefore, the recovery was a fluke, and 2) I thought 8.x was installed in the design's PC, but as it turns out it has been 9.x. (The version numbers are tough to remember. Sorry. I think it's 9.1.7.) Is there a known issue with that one also?
    What I really want to know is why the original board I returned to NI failed. I want to know specifically what slot contacts are holding up the PC, keeping it from performing its POST? I don't want to hear any more suggestions for installing version x.y.z. without also having a specific reason for why it should work. These cards are too expensive. There's something very wrong with this design such that I can't keep the DAQ card alive using Traditional DAQ software that has run for years. The system works until (it seems) you think the problem's been resolved, and then whammo! An esoteric error number in a window is thrown up. (It's not just one error. The error can change as you try in vain to avoid cycling power as the only way out of the now dead software.) You can't run the software past a button click due to the error window, but yet the PC itself works fine...unless you cycle the power: NO POST.
    What on earth in this design could be killing these cards? The PCI slot was tested via substitution with a video card and there was no issue found. I just can't keep the DAQ card alive. Help! Trying version x.y.z of the driver is not a good enough solution. Running software should not just out of the blue blow up a DAQ card and cripple the PC only at power-up. Rather, there should be an error, some incompatibility indication before the card is dead. Frying a $1.5K card...eventually...is not how you find the source of an incompatibility issue. Like I said, HELP!
    I'm suspecting the software is creating a bus contention somehow which doesn't end in an error but, rather, an internally shorted IC, where I'm guessing it's the large NI ASIC just above the card edge due to the considerably noticeable heat generated while the PC tries to POST, a short that is undetectable except that the software stops working, but nevertheless a short that won't let the PC continue its POST if power is removed and then reapplied to the PC. It's like the software has had a "sacrifice(PCI-DIO-32HS, legacy_sw);" function embedded into it that only gets called under certain circumstances, circumstances that involve the power-up sequence of a PC. Help!
