ErrorMsg[1024]: Getting buffer size programmatically

I am using TestStand 3.5 and LabWindows/CVI 8. The prototypes for my functions called from TestStand look like this:
"void __declspec(dllexport) __stdcall myFunc (short *errorOccurred, long *errorCode, char errorMsg[1024])"
It is nice to be able to pass errorMsg as a fixed-size array, as opposed to just a pointer, so the burden of allocating/deallocating memory is removed from me. The size of the buffer (1024) is defined in the sequence editor and I can generate the prototype and code for my function, but it seems that the two are somewhat separate. By that I mean that I can change the literal number in either place and get them not to agree (accidentally).
It would be nice if I could specify the 1024 buffer size in the TestStand sequence editor (as I am doing now), and then have this buffer and the buffer size passed down to my function. So, I'd like to have my function read "... char errorMsg[], int errorMsgBuflen ...". Or, alternatively, is there some property/function call that I can make to get the size of the errorMsg buffer?
While I have your "ears", is there anything magical about 1024 for the buffer size? That seems to be the number I've seen used in all of the examples.
Thanks
Don Frevele

Hi Don,
I've attached an example of passing in the array length of errorMsg from TestStand into a CVI dll.  There is nothing magical about the size of 1024.  It sets the maximum length of the error message. 
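The attached example is the definitive reference; as a rough sketch of the idea (the errorMsgBuflen parameter, message text, and error values below are illustrative, not taken from the attachment), the DLL side could look something like this:

    /* Hypothetical sketch: TestStand passes the buffer it allocated *and* its size,
     * so the DLL never hard-codes 1024. */
    #include <string.h>

    void __declspec(dllexport) __stdcall myFunc (short *errorOccurred,
                                                 long  *errorCode,
                                                 char   errorMsg[],
                                                 int    errorMsgBuflen)
    {
        /* ... test code ... */
        int failed = 1;                      /* pretend the step failed, for illustration */

        if (failed) {
            *errorOccurred = 1;
            *errorCode = -1001;              /* illustrative error code */
            if (errorMsgBuflen > 0) {
                /* Copy at most errorMsgBuflen-1 characters and terminate, so the
                 * message can never overrun the buffer TestStand allocated. */
                strncpy(errorMsg, "Something went wrong", errorMsgBuflen - 1);
                errorMsg[errorMsgBuflen - 1] = '\0';
            }
        } else {
            *errorOccurred = 0;
            *errorCode = 0;
            if (errorMsgBuflen > 0)
                errorMsg[0] = '\0';
        }
    }

In the step's module call you would add errorMsgBuflen as an extra numeric parameter and set it to the same expression that sizes the array, so the two numbers can no longer drift apart by accident.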
Cheers,
David Goldberg
National Instruments
Software R&D
Attachments:
myFunc.zip (720 KB)

Similar Messages

  • Getting recv buffer size error even after tuning

    I am on AIX 5.3, IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3...), Coherence 3.1.1/341
    I've set the following parameters as root:
    no -o sb_max=4194304
    no -o udp_recvspace=4194304
    no -o udp_sendspace=65536
    I still get the following error:
    UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 44 packets (65536 bytes)....
    The following commands/responses confirm that the settings are in place:
    $ no -o sb_max
    sb_max = 4194304
    $ no -o udp_recvspace
    udp_recvspace = 4194304
    $ no -o udp_sendspace
    udp_sendspace = 65536
    Why am I still getting the error? Do I need to bounce the machine or is there a different tunable I need to touch?
    Thanks
    Ghanshyam

    Can you try running the attached utility and send us the output? It will simply try to allocate a variety of socket buffer sizes and report which succeed and which fail. Based on the Coherence log message I expect this program will also fail to allocate a buffer larger than 65536, but it will allow you to verify the issue externally from Coherence (a rough equivalent is sketched below).
    There was an issue with IBM's 1.4 AIX JVM which would not allow allocation of buffers larger than 1 MB. This program should allow you to identify whether 1.5 has a similar issue. If so, you may wish to contact IBM support about obtaining a patch.
    thanks,
    Mark
    Attachments:
    so.java (to use this attachment you will need to rename 399.bin to so.java after the download is complete)
    so.class (to use this attachment you will need to rename 400.bin to so.class after the download is complete)
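    The attached so.java is not reproduced above; as a rough sketch of the same idea in C (assuming a POSIX system, and that requesting SO_RCVBUF and reading it back is a fair stand-in for what the JVM does underneath), something like this reports how large a UDP receive buffer the OS will actually grant:

    /* sockbuf.c - hypothetical sketch: request several UDP receive buffer
     * sizes and print what the kernel actually grants. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sizes[] = { 65536, 262144, 1048576, 2097152, 4194304 };
        int i;
        for (i = 0; i < (int)(sizeof sizes / sizeof sizes[0]); i++) {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            int requested = sizes[i];
            int granted = 0;
            socklen_t len = sizeof granted;
            if (fd < 0) { perror("socket"); return 1; }
            /* Ask for the buffer, then read back what was really allowed. */
            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof requested) < 0)
                perror("setsockopt(SO_RCVBUF)");
            if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) < 0)
                perror("getsockopt(SO_RCVBUF)");
            printf("requested %d bytes, granted %d bytes\n", requested, granted);
            close(fd);
        }
        return 0;
    }

    If the granted size tops out at 65536 no matter what is requested, the limit is being enforced by the operating system or the JVM rather than by Coherence.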

  • Get I/O buffer size of operating system...

    Hi ,
    Is there any script in Oracle which will display the I/O buffer size of the operating system I use?
    I need it in order to set appropriately the parameter DB_FILE_MULTIBLOCK_READ_COUNT.
    Thanks , a lot
    Simon

    Hi ,
    I found a script to find the I/O buffer size of the operating system. It is as follows:
    create tablespace tester datafile 'C:\oracle\product\10.2.0\oradata\EPESY\test.dbf' size 10M reuse
      default storage (initial 1M next 1m pctincrease 0);
    create table testing tablespace tester
      as select * from all_objects
      where rownum < 50000;
    select relative_fno from dba_data_files
      where tablespace_name = 'TESTER';
    -- note the read statistics before the full scan
    select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
    -- force a full table scan
    select count(*) from testing;
    -- note the read statistics again after the scan
    select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
    In the author's example, the figures were as follows:
    select phyrds,phyblkrd from v$filestat where file#=<#relative_fno#>
    PHYRDS PHYBLKRD
    154 1220
    The test ends by dividing PHYBLKRD by PHYRDS. So the above example yields a result of 7.92, which is close to 8, so the effective multiblock read count is 8.
    NOTE: The underlined words are the author's, not mine.
    I ran the above script and found a figure of about 10.85. So what might the effective multiblock read count be?
    Thanks , a lot
    Simon

  • How to set the BUFFER SIZE in SQL*NET V2

    Product: SQL*NET
    Date written: 1996-04-15
    When SQL*NET V2 is used with an ODBC driver from programs such as Power Builder, Visual Basic,
    SQL Windows, Object View, Excel, or Access, and sessions get dropped or similar problems occur
    while processing large amounts of data, the problem can be resolved by reducing the SQL*NET V2
    buffer size.
    This is done by adding an SDU entry to the tnsnames.ora file on the PC, as shown below.
    If the problem is not resolved even after applying SDU, connect using the DEDICATED method instead.
    Location of the tnsnames.ora file
    16BIT SQL*NET : c:\orawin\network\admin
    32BIT SQL*NET : c:\orawin95\network\admin
    =======================
    SDU (Session Data Unit)
    =======================
    Syntax : SDU=n (bytes)
    Range for 'n': 512 <= n <= 2048
    Default value = 2048 bytes.
    =================
    Before the change
    =================
    TORA =
      (DESCRIPTION=
        (ADDRESS=
          (PROTOCOL=TCP)
          (PORT=1521)
          (HOST=krhp2))
        (CONNECT_DATA=(SID=RC)))
    ================
    After the change
    ================
    TORA =
      (DESCRIPTION=
        (ADDRESS=
          (PROTOCOL=TCP)
          (PORT=1521)
          (HOST=krhp2))
        (CONNECT_DATA=(SID=RC))
        (SDU=1024))
    Note: if the server is ORACLE V7.3, SDU can also be set on the server side,
    in the server's listener.ora file.
    =================
    Before the change
    =================
    SID_LIST_LIST73 =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = ORA73)
          (ORACLE_HOME=/oracle2/ora73/app/oracle/product/7.3.2)))
    ================
    After the change
    ================
    SID_LIST_LIST73 =
      (SID_LIST =
        (SID_DESC =
          (SDU=1024)(SID_NAME = ORA73)
          (ORACLE_HOME=/oracle2/ora73/app/oracle/product/7.3.2)))
    Change the files as shown above. If SDU is set on both the server and the client,
    the smaller of the two values is used.

    I have 4 redo log groups with one member each; the size of each redo log file is 128 MB. (By doing some research on the internet I found the suggestion to increase the redo log file size, which I tried up to 400 MB each, but I am still getting the same error. If there is any other way to check the optimal size of the redo files without changing FAST_START_MTTR_TARGET, please share it with me.) Use the document below to check the redo log optimum size. And also, as per the note mentioned by Justin, you can ignore the alert as it is not going to harm your database.
    274264.1 (10g New Feature - REDO LOGS SIZING ADVISORY)
    Mark your Post as Answered or Helpful if Your question is answered.
    Thanks & Regards,
    SID
    (StepIntoOracleDBA)
    Email : [email protected]
    http://stepintooracledba.blogspot.in/
    http://www.stepintooracledba.com/

  • Linux Serial NI-VISA - Can the buffer size be changed from 4096?

    I am communicating with a serial device on Linux, using LV 7.0 and NI-VISA. About a year and a half ago I asked customer support if it was possible to change the buffer size for serial communication. At that time I was using NI-VISA 3.0. In my program the VISA function for setting the buffer size would send back an error of 1073676424, and the buffer would always remain at 4096, no matter what value was input into the buffer size control. The answer to this problem was that the error code was just a warning, letting you know that you could not change the buffer size on a Linux machine, and 4096 bytes was the pre-set buffer size (unchangeable). According to the person who was helping me: "The reason that it doesn't work on those platforms (Linux, Solaris, Mac OSX) is that it is simply unavailable in the POSIX serial API that VISA uses on these operating systems."
    Now I have upgraded to NI-VISA 3.4 and I am asking the same question. I notice that an error code is no longer sent when I input different values for the buffer size. However, in my program, the bytes returned from the device max out at 4096, no matter what value I input into the buffer size control. So, has VISA changed, and it is now possible to change the buffer size, but I am setting it up wrong? Or, have the error codes changed, but it is still not possible to change the buffer size on a Linux machine with NI-VISA?
    Thanks,
    Sam

    The buffer size still can't be set, but it seems that we are no longer returning the warning. We'll see if we can get the warning back for the next version of VISA.
    Thanks,
    Josh
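    For what it's worth, the same check can be made outside LabVIEW against the VISA C API. This is only a hedged sketch (the resource name "ASRL1::INSTR" and the 16384-byte request are placeholders, and as far as I know there is no attribute to read the effective buffer size back, so all it shows is the status code viSetBuf returns):

    /* Hypothetical check: ask VISA for a larger serial read buffer and
     * report whether the request succeeded, warned, or failed. */
    #include <stdio.h>
    #include <visa.h>

    int main(void)
    {
        ViSession rm, instr;
        ViStatus status;

        if (viOpenDefaultRM(&rm) < VI_SUCCESS) {
            printf("could not open the VISA resource manager\n");
            return 1;
        }
        if (viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &instr) < VI_SUCCESS) {
            printf("could not open the serial resource\n");
            viClose(rm);
            return 1;
        }

        status = viSetBuf(instr, VI_READ_BUF, 16384);
        if (status < VI_SUCCESS)
            printf("viSetBuf returned an error: 0x%lX\n", (unsigned long)status);
        else if (status > VI_SUCCESS)
            printf("viSetBuf returned a warning: 0x%lX (request may not have taken effect)\n",
                   (unsigned long)status);
        else
            printf("viSetBuf reported plain success\n");

        viClose(instr);
        viClose(rm);
        return 0;
    }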

  • What's the optimum buffer size?

    Hi everyone,
    I'm having trouble with my unzipping method. The thing is, when I unzip a smaller file, something like 200 KB, it unzips fine. But when it comes to large files, something like 10,000 KB, it doesn't unzip at all!
    I'm guessing it has something to do with buffer size... or does it? Could someone please explain what is wrong?
    Here's my code:
    import java.io.*;
    import java.util.zip.*;
    /**
      * Utility class with methods to zip/unzip and gzip/gunzip files.
      */
    public class ZipGzipper {
      public static final int BUF_SIZE = 8192;
      public static final int STATUS_OK          = 0;
      public static final int STATUS_OUT_FAIL    = 1; // No output stream.
      public static final int STATUS_ZIP_FAIL    = 2; // No zipped file
      public static final int STATUS_GZIP_FAIL   = 3; // No gzipped file
      public static final int STATUS_IN_FAIL     = 4; // No input stream.
      public static final int STATUS_UNZIP_FAIL  = 5; // No decompressed zip file
      public static final int STATUS_GUNZIP_FAIL = 6; // No decompressed gzip file
      private static String fMessages [] = {
        "Operation succeeded",
        "Failed to create output stream",
        "Failed to create zipped file",
        "Failed to create gzipped file",
        "Failed to open input stream",
        "Failed to decompress zip file",
        "Failed to decompress gzip file"
      };
      /**
        *  Unzip the files from a zip archive into the given output directory.
        *  It is assumed the archive file ends in ".zip".
        */
      public static int unzipFile (File file_input, File dir_output) {
        // Create a buffered zip stream to the archive file input.
        ZipInputStream zip_in_stream;
        try {
          FileInputStream in = new FileInputStream (file_input);
          BufferedInputStream source = new BufferedInputStream (in);
          zip_in_stream = new ZipInputStream (source);
        }
        catch (IOException e) {
          return STATUS_IN_FAIL;
        }
        // Need a buffer for reading from the input file.
        byte[] input_buffer = new byte[BUF_SIZE];
        int len = 0;
        // Loop through the entries in the ZIP archive and read
        // each compressed file.
        do {
          try {
            // Need to read the ZipEntry for each file in the archive
            ZipEntry zip_entry = zip_in_stream.getNextEntry ();
            if (zip_entry == null) break;
            // Use the ZipEntry name as that of the compressed file.
            File output_file = new File (dir_output, zip_entry.getName ());
            // Create a buffered output stream.
            FileOutputStream out = new FileOutputStream (output_file);
            BufferedOutputStream destination =
              new BufferedOutputStream (out, BUF_SIZE);
            // Reading from the zip input stream will decompress the data
            // which is then written to the output file.
            while ((len = zip_in_stream.read (input_buffer, 0, BUF_SIZE)) != -1)
              destination.write (input_buffer, 0, len);
            destination.flush (); // Insure all the data is output
            out.close ();
          }
          catch (IOException e) {
            return STATUS_GUNZIP_FAIL;
          }
        } while (true); // Continue reading files from the archive
        try {
          zip_in_stream.close ();
        }
        catch (IOException e) {}
        return STATUS_OK;
      } // unzipFile
    } // ZipGzipper
    Thanks!!!!

    Any more hints on how to fix it? I've been fiddling around with it for an hour... and throwing more exceptions. But I'm still no closer to debugging it!
    Thanks

    Did you add:
    e.printStackTrace();
    to your catch blocks?
    Didn't you in that case get an exception which says something similar to:
    java.io.FileNotFoundException: C:\TEMP\test\com\blah\icon.gif (The system cannot find the path specified)
         at java.io.FileOutputStream.open(Native Method)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
         at Test.unzipFile(Test.java:68)
         at Test.main(Test.java:10)
    Which says that the error is thrown here:
         // Create a buffered output stream.
         FileOutputStream out = new FileOutputStream(output_file);
    Kaj

  • Synchronize AO/AI buffered data graph and measure data more than buffer size

    I am trying to measure the response time (around 1 ms) of the pressure drop indicated by AI channel 0 when AO channel 0 gives a negative single pulse to the unit under test (valve). DAQ board: Keithley KPCI-3108, LabVIEW version: 6.1, OS: Win2000 Professional.
    My problem is I am getting a different timed graph between the AI and AO channels every time I run my program, except the first time, when I can get a real-time graph. I tried to decrease the buffer size to less than the max buffer size of the DAQ board (2048 samples), but I still get an unreal-time graph from the AI channel; it seems it was still reading old data from the buffer when AO writes the new buffer data, that is my guess. In my program, the AO and AI parts are separated; the AO Write buffer is in a while loop while the AI read is not. Would that be a problem? Or is it something else?
    Also, I am trying to measure data much larger than the board buffer size limit. Is it possible to make the measurement by modifying the program?
    I really appreciate any of your help. Thank you very much!
    Best,
    Jenna

    Jenna,
    You can modify the X-axis of a chart/graph in LabVIEW to display real-time. I have included a link below to an example program that illustrates how to do this.
    If you are doing a finite, buffered acquisition, make sure that you are always reading everything from the buffer for each run. In other words, if you set a buffer size of 5000, then make sure you are reading 5000 scans (set number of scans to read to 5000). This will assure you are reading new data every time you run your program. You could always put the AI Read VI within a loop and read a smaller number from the buffer until the buffer is empty (monitor the scan backlog output of the AI Read VI to see how many scans are left in the buffer).
    You can set a buffer size larger than the FIFO
    buffer of the hardware. The buffer size you set in LabVIEW is actually a software buffer size within your computer's memory. The data is acquired with the hardware, stored temporarily within the on-board FIFO, transferred to the software buffer, and then read in LabVIEW.
    Are you trying to create a TTL square wave with the analog output of the DAQ device? If so, the DAQ device has counters that can generate a highly accurate digital pulse as well. Just a suggestion. LabVIEW has a variety of shipping examples that are geared toward using counters (find examples>>DAQ>>counters). I hope this helps.
    Real-Time Chart Example
    http://venus.ni.com/stage/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3E95556A4E034080020E74861&p_node=DZ52038&p_submitted=N&p_rank=&p_answer=&p_source=Internal
    Regards,
    Todd D.
    National Instruments
    Applications Engineer

  • How do you determine ip and op buffer size on a 3550-12G

    I have a Cisco 3550-12G switch and I want to check to see if the input buffers and the output buffers for port gi0/12 are the same size. Is there a simple way to do this, I tried using the show buffers command but I couldn't seem to find what I was looking for. Help!

    Hi,
    "The 3550 switch uses central buffering. This means that there are no fixed buffer sizes per port. However, there is a fixed number of packets on a Gigabit port that can be queued. This fixed number is 4096. By default, each queue in a Gigabit port can have up to 1024 packets, regardless of the packet size."
    http://www.cisco.com/warp/public/473/187.html#topic7
    HTH,
    Bobby
    *Please rate helpful posts.

  • How to increase the buffer size ?

    Our BI system displays the following message when I execute a query on the web : buffer too small
    How to increase the buffer size ?
    Thanks in advance

    Indeed, we are using Bex Web 7.0.
    The query uses a big structure and when I execute it on the web, it takes a long time (> 5 min) and I receive the following message on top of the result page: "could not buffer query structures. Buffer too small"
    When I execute the query in Bex Analyzer, it takes a long time too, but we don't get the message.
    I have checked the cache size:
          - Local cache : 100 MB
          - Global cache : 200 MB
    Please let me know any solution, as there is no BASIS team. The person who installed the software was a consultant and is no longer reachable.
    Thanks in advance

  • How does buffer size affect double buffered waveform generation?

    I had originally posted the following question:
    "Why does the double buffered waveform generation pause after the first buffer before continuing?"
    "I am using an AT-AO-10 board to generate a multiple channel waveform in double buffered mode. The board's DAC's are updated by an external clock signal. While the waveform generation performs well, I notice that after the first buffer has been generated there is a time delay before the next buffer is output. However the second buffer and thereafter perform well without any time delays. If anyone can provide me an explanation on why this happens I would appreciate it. I am using NI-DAQ API functions to generate the waveforms and my settings for the WFM_DB_Config function are 1 for oldDataStop to disallow regeneration of data and 0 for partialTransferStop to not stop when a half buffer is partially transferred."
    -posted by Vadi on 6/7/2001
    I received a response from Geneva as follows:
    Geneva L. on 6/11/2001 says:
    "Vadi,
    The first thing is to make sure that you have the latest version of NI-DAQ installed, NI-DAQ 6.9.1. If you need to install it, make sure you completely uninstall any prior versions. Then, you will have examples installed in either the NI-DAQ or the CVI directory. In the AO directory, you should find the WFMdoubleBuf example.
    Start with that to make sure the output appears as you expect. Then, you can modify it to apply your external update clock, following the idea presented in the WFMsingleBufExtUpdate example. You might even want to double-check that your external clock acts as you expect using an oscilloscope.
    Finally, modify the example such that you can update on multiple channels, remembering that you interleave each channel's buffer into one buffer for WFM_DB_Transfer. Whatever data is in the buffer will be updated on the output channels.
    Regards,
    Geneva L.
    Applications Engineer
    National Instruments"
    I have checked my version of NI-DAQ and it is 6.9.1. I am generating the double buffered waveform according to the format shown in WFMdoubleBuf and with some modifications from WFMsingleBufExtUpdate to allow me to use my external update clock. However, I continue to notice the same phenomenon again and again. For a buffer size of 7500 or 10000 points there is a time lag: after the first buffer has been output there is a noticeable time delay before the second buffer and the buffers thereafter are output. This time lag doesn't exist for the buffers that are output after the first buffer, but it does exist for the first buffer. When I decrease the buffer size down to 5000 points the time lag disappears (note: this phenomenon also occurs when I use an internal time base as opposed to my external clock). Is there a reason for this? I am using an AT-AO-10 board and I know the on-board FIFO is 1024 points deep. However, the documentation provided doesn't indicate that double buffered mode uses the on-board FIFO at all. In fact, the functions require that the FIFO mode be turned off (in WFM_Load) for double buffered waveform generation. Is there a reason why, when the buffer size is increased, there is a time lag after the first buffer? Is this because of the functions themselves or because of the AT-AO-10 board?

    Vadi,
    Make sure that your buffer size is set to the same number of points that you're actually writing to the buffer initially. For instance, if you run the example as-is, the NIDAQMakeBuffer puts exactly the ulCount amount of data into the buffer. Then, it continuously writes out half buffers. Thus, if you are not writing enough data to fill up the buffer the first time, there will be that lag until the section where half buffers are output.
    Regards,
    Geneva L.
    Applications Engineer
    National Instruments
    http://www.ni.com/ask

  • How to set buffer size mac?

    Video streaming is intermittent, BBC iPlayer.
    Cannot get info about router but have latest Apple Extreme modem in MacBook Air.

    A DAQboard is an IOtech device. I use a DAQbook in one of my programs.
    You set the buffer size by calling Acquisition Scan Configuration.vi, assuming you've downloaded the IOtech enhanced LabVIEW drivers. The buffer size input to this VI is in number of scans. There's no real rule for the size you should use. Just set it to a size that's big enough to hold as many scans as you will need buffered before you read them. For example, if the board is set up to sample at 100 Hz and you read the buffer once a second, make sure the buffer is at least 100 scans.
    As far as your other question, like Dennis said, you don't have to do anything to reserve the memory. The operating system will take care of it for you.

  • Imp-00020 long column too large for column buffer size (22)

    Hi friends,
    I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform),
    then transferred the export file from the Unix server to a laptop (Windows platform),
    and tried to import this file into Oracle 10.2 on Windows XP.
    (Database Configuration of Oracle 10g is
    User tablespace 2 GB
    Temp tablespace 30 Mb
    The rollback segment of 15 mb each
    undo tablespace of 200 MB
    SGA 160MB
    PAGA 16MB)
    All the tables imported successfully except 3 tables, which have around 1 million rows each.
    The error messages that come during import for these 3 tables are as follows:
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size (7)
    The main point here is that in all 3 tables there is no long/timestamp column (only varchar/number columns are there).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. Commit=Y Indexes=N (in this case it does not import the complete tables).
    3. First exported the table structures only and then the data.
    4. Created the tables manually and tried to import the tables.
    But all efforts failed; I am still getting the same errors.
    Can someone help me with this issue?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and import were made on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, and some of them do not contemplate Data Pump. By the way, shouldn't EXP / IMP work anyway?

  • Doing Buffered Event count by using Count Buffered Edges.vi, what is the max buffer size allowed?

    I'm currently using Count Buffered Edges.vi to do a buffered event count with the following settings:
    Source: internal timebase, 100 kHz, 10 µs for each count
    Gate: use the function generator to send in a 50 Hz signal (for testing purposes only), period of 0.02 s
    The max internal buffer size that I can allocate is only about 100~300. Whenever I change both the internal buffer size and counts to read to a higher value, this VI doesn't seem to function well. I need to have a buffer size of at least 2000.
    1. Is it possible to have a buffer size of 2000? What is the problem causing the wrong counter value?
    2. Also note that the size of the max internal buffer varies with the frequency of the signal sent to the gate. Why is this so? E.g., the buffer size gets smaller as the frequency decreases.
    3. I get a funny response and counter value when the internal buffer size and counts to read are not set to the same value. Why is this so? Is it a must to set both values the same?
    Thanks and best regards
    lyn

    Hi,
    I have tried the same example, and used a 100Hz signal on the gate. I increased the buffer size to 2000 and I did not get any errors. The buffer size does not get smaller when increasing the frequency of the gate signal; simply, the number of counts gets smaller when the gate frequency becomes larger. The buffer size must be able to contain the number of counts you want to read, otherwise, the VI might not function correctly.
    Regards,
    RamziH.

  • Error -200609 occurred at DAQmx Write: Selected Buffer Size Too Small

    Hello, I'm writing some simple test VI's that I will eventually build upon to make an externally clocked analog output VI. I started with a very simple program to output finite samples using the onboard clock with the DAQmx Timing.VI. When I run the program, I almost immediately get an error. The error message is below.
    Error -200609 occurred at DAQmx Write (Analog DBL 1Chan 1Samp).vi:1
    Possible reason(s):
    Generation cannot be started, because the selected buffer size is too small.
    Increase the buffer size.
    Conflicting Property
    Property: Output.BufSize
    Corresponding Value: 1
    Minimum Supported Value: 2
    Task Name: _unnamedTask<1C>
    I have used DAQmx VIs before in similar applications and never encountered this error. Additionally, I read at the link below that DAQmx Timing.vi should be generating the buffer automatically. Any ideas as to what could be causing this?
    Specs:
    Windows 7
    Labview 2012
    PCIe-6353 as DAQ board
    Below is a picture of my block diagram and the VI is attached.
    Solved!
    Go to Solution.
    Attachments:
    FiniteSamplesTest.vi ‏18 KB

    Oops. Just realized my very silly mistake: I forgot to add the Start Task VI. I did so and it works as designed.
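    For reference, the working order translated to the DAQmx C API looks like the sketch below (the device name "Dev1/ao0", rate, and sample count are placeholders, and this is not the attached VI): configure the timing, write the entire finite buffer with autostart off, and only then start the task.

    /* Hypothetical sketch of finite, buffered AO with the DAQmx C API.
     * The point is the ordering: timing -> write whole buffer -> start task. */
    #include <stdio.h>
    #include <math.h>
    #include <NIDAQmx.h>

    #define SAMPLES 1000

    int main(void)
    {
        TaskHandle task = 0;
        float64 data[SAMPLES];
        int32 written = 0;
        int32 err = 0;
        int i;

        for (i = 0; i < SAMPLES; i++)              /* one 5 V sine cycle */
            data[i] = 5.0 * sin(2.0 * 3.14159265 * i / SAMPLES);

        if (err == 0) err = DAQmxCreateTask("", &task);
        if (err == 0) err = DAQmxCreateAOVoltageChan(task, "Dev1/ao0", "",
                                                     -10.0, 10.0, DAQmx_Val_Volts, NULL);
        if (err == 0) err = DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                                                  DAQmx_Val_FiniteSamps, SAMPLES);
        /* Write the entire finite buffer before starting (autoStart = 0). */
        if (err == 0) err = DAQmxWriteAnalogF64(task, SAMPLES, 0, 10.0,
                                                DAQmx_Val_GroupByChannel, data,
                                                &written, NULL);
        if (err == 0) err = DAQmxStartTask(task);  /* the Start Task step that was missing */
        if (err == 0) err = DAQmxWaitUntilTaskDone(task, 10.0);

        if (err != 0) {
            char msg[2048];
            DAQmxGetExtendedErrorInfo(msg, sizeof(msg));
            printf("DAQmx error %d: %s\n", (int)err, msg);
        }
        if (task != 0) DAQmxClearTask(task);
        return (err != 0);
    }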

  • Smartcardio ResponseAPDU buffer size issue?

    Greetings All,
    I’ve been using the javax.smartcardio API to interface with smart cards for around a year now, but I’ve recently come across an issue that may be beyond me. My issue is that whenever I try to extract a large data object from a smart card, I get a “javax.smartcardio.CardException: Could not obtain response” error.
    The data object I’m trying to extract from the card is around 12KB. I have noticed that if I send a GETRESPONSE APDU after this error occurs I get the last 5 KB of the object but the first 7 KB are gone. I do know that the GETRESPONSE dialogue is supposed to be sent by Java in the background where the responses are concatenated before being sent as a ResponseAPDU.
    At the same time, I am able to extract this data object from the card whenever I use other APDU tools or APIs, where I have oversight of the GETRESPONSE APDU interactions.
    Is it possible that the ResponseAPDU runs into buffer size issues? Is there a known workaround for this? Or am I doing something wrong?
    Any help would be greatly appreciated! Here is some code that will demonstrate this behavior:
    /* test program */
    import java.io.*;
    import java.util.*;
    import javax.smartcardio.*;
    import java.lang.String;
    public class GetDataTest{
        public GetDataTest(){}
        public static void main(String[] args){
            Card card = null;
            try{
                byte[] aid = {(byte)0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00};
                byte[] biometricDataID1 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x08};
                byte[] biometricDataID2 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x03};
                //get the first terminal
                TerminalFactory factory = TerminalFactory.getDefault();
                List<CardTerminal> terminals = factory.terminals().list();
                CardTerminal terminal = terminals.get(0);
                //establish a connection with the card
                card = terminal.connect("*");
                CardChannel channel = card.getBasicChannel();
                //select the card app
                select(channel, aid);
                //verify pin
                verify(channel);
                /*
                 * trouble occurs here
                 * error occurs only when extracting a large data object (~12KB) from card.
                 * works fine when used on other data objects, e.g. works with biometricDataID2
                 * (data object ~1Kb) and not biometricDataID1 (data object ~12Kb in size)
                 */
                //send a "GetData" command
                System.out.println("GETDATA Command");
                ResponseAPDU response = channel.transmit(new CommandAPDU(0x00, 0xCB, 0x3F, 0xFF, biometricDataID1));
                System.out.println(response);
                card.disconnect(false);
                return;
            }catch(Exception e){
                System.out.println(e);
            }finally{
                // guard against a failed connect and ignore errors during cleanup
                if (card != null) try { card.disconnect(false); } catch (Exception ignore) {}
            }
        }
    }

    Hello Tapatio,
    I was looking for a solution to my problem and found your post; I hope you can answer.
    I am a beginner in card development. I am using javax.smartcardio; I can select the file I want to use,
    but the problem is that I can't read from it, and I don't know exactly how to use the hex codes.
    I'm working with a CCID Smart Card Reader as the card reader and PayFlex as the smart card.
              try {
                          TerminalFactory factory = TerminalFactory.getDefault();
                      List<CardTerminal> terminals = factory.terminals().list();
                      System.out.println("Terminals: " + terminals);
                      CardTerminal terminal = terminals.get(0);
                      if(terminal.isCardPresent())
                           System.out.println("carte presente");
                      else
                           System.out.println("carte absente");
                      Card card = terminal.connect("*");
                     CardChannel channel = card.getBasicChannel();
                     ResponseAPDU resp;
                     // this part select the DF
                     byte[] b = new byte[]{(byte)0x11, (byte)0x00} ;
                     CommandAPDU com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
                     resp = channel.transmit(com);
                     System.out.println("Result: " + getHexString(resp.getBytes()));
                        //this part select the Data File
                     b = new byte[]{(byte)0x11, (byte)0x05} ;
                     com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
                     System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
                     resp = channel.transmit(com);
                     System.out.println("Result: " + getHexString(resp.getBytes()));
                     /* byte[] b1 = new byte[]{(byte)0x11, (byte)0x05} ;
                     com = new CommandAPDU((byte)0x00, (byte)0xB2, (byte)0x00, (byte)0x04, b1, (byte)0x0E); */
                        // the problem is that i don't know how to build a CommandAPDU to read from the file
                     System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
                     resp = channel.transmit(com);
                     System.out.println("Result: " + getHexString(resp.getBytes()));
                      card.disconnect(false);
              } catch (Exception e) {
                   System.out.println("error " + e.getMessage());
              }
    read record : 00 A4 ....
    If you know how to do it, I'm waiting for your answer.
