Smartcardio ResponseAPDU buffer size issue?

Greetings All,
I’ve been using the javax.smartcardio API to interface with smart cards for around a year now, but I’ve recently come across an issue that may be beyond me: whenever I try to extract a large data object from a smart card, I get a “javax.smartcardio.CardException: Could not obtain response” error.
The data object I’m trying to extract from the card is around 12 KB. I have noticed that if I send a GET RESPONSE APDU after this error occurs, I get the last 5 KB of the object, but the first 7 KB are gone. I do know that the GET RESPONSE dialogue is supposed to be handled by Java in the background, where the responses are concatenated before being returned as a single ResponseAPDU.
At the same time, I am able to extract this data object from the card whenever I use other APDU tools or APIs, where I have oversight of the GET RESPONSE interactions.
Is it possible that the ResponseAPDU runs into buffer size issues? Is there a known workaround for this? Or am I doing something wrong?
Any help would be greatly appreciated! Here is some code that will demonstrate this behavior:
// test program
import java.util.*;
import javax.smartcardio.*;

public class GetDataTest{
    public static void main(String[] args){
        Card card = null;
        try{
            byte[] aid = {(byte)0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00};
            byte[] biometricDataID1 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x08};
            byte[] biometricDataID2 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x03};
            //get the first terminal
            TerminalFactory factory = TerminalFactory.getDefault();
            List<CardTerminal> terminals = factory.terminals().list();
            CardTerminal terminal = terminals.get(0);
            //establish a connection with the card
            card = terminal.connect("*");
            CardChannel channel = card.getBasicChannel();
            //select the card app (helper method, not shown here)
            select(channel, aid);
            //verify pin (helper method, not shown here)
            verify(channel);
            //trouble occurs here:
            //the error occurs only when extracting a large data object (~12 KB) from the card.
            //it works fine on other data objects, e.g. with biometricDataID2 (~1 KB),
            //but not with biometricDataID1 (~12 KB)
            //send a "GetData" command
            System.out.println("GETDATA Command");
            ResponseAPDU response = channel.transmit(new CommandAPDU(0x00, 0xCB, 0x3F, 0xFF, biometricDataID1));
            System.out.println(response);
        }catch(Exception e){
            System.out.println(e);
        }finally{
            try{
                if(card != null) card.disconnect(false);
            }catch(CardException ce){
                System.out.println(ce);
            }
        }
    }
}
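If the culprit is the provider's transparent GET RESPONSE handling, one workaround is to take over the 61XX dialogue yourself. The sketch below is an assumption about how this could look, not the confirmed fix; it drives the ISO 7816 GET RESPONSE loop manually and concatenates the pieces:

```java
import java.io.ByteArrayOutputStream;
import javax.smartcardio.*;

public class GetResponseLoop {
    // Manual ISO 7816 GET RESPONSE dialogue: after the initial command,
    // keep issuing GET RESPONSE (INS 0xC0) while the card answers 61 XX,
    // concatenating the data field of every response.
    public static byte[] readAll(CardChannel channel, CommandAPDU first) throws Exception {
        ByteArrayOutputStream data = new ByteArrayOutputStream();
        ResponseAPDU r = channel.transmit(first);
        data.write(r.getData());
        while (r.getSW1() == 0x61) {                     // 61 XX: more data available
            int le = (r.getSW2() == 0) ? 256 : r.getSW2(); // XX bytes waiting (00 means 256)
            r = channel.transmit(new CommandAPDU(0x00, 0xC0, 0x00, 0x00, le));
            data.write(r.getData());
        }
        return data.toByteArray();                       // caller should check final SW is 9000
    }
}
```

With the Sun PC/SC provider you would first set the provider-specific (not part of the javax.smartcardio API) system property `sun.security.smartcardio.t0GetResponse` to `"false"` (and `t1GetResponse` for T=1) before obtaining the TerminalFactory, so the provider stops issuing GET RESPONSE on its own.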

Hello Tapatio,
I was looking for a solution to my own problem and I found your post; first of all, I hope you get your answer.
I am a beginner in card development. I am using javax.smartcardio and I can select the file I want to use, but the problem is that I can't read from it; I don't know exactly how to use the hex codes.
I'm working with a CCID Smart Card Reader as the card reader and a PayFlex smart card.
          try {
               TerminalFactory factory = TerminalFactory.getDefault();
               List<CardTerminal> terminals = factory.terminals().list();
               System.out.println("Terminals: " + terminals);
               CardTerminal terminal = terminals.get(0);
               if(terminal.isCardPresent())
                    System.out.println("card present");
               else
                    System.out.println("card absent");
               Card card = terminal.connect("*");
               CardChannel channel = card.getBasicChannel();
               ResponseAPDU resp;
               // this part selects the DF
               byte[] b = new byte[]{(byte)0x11, (byte)0x00};
               CommandAPDU com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
               resp = channel.transmit(com);
               System.out.println("Result: " + getHexString(resp.getBytes()));
               // this part selects the Data File
               b = new byte[]{(byte)0x11, (byte)0x05};
               com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
               System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
               resp = channel.transmit(com);
               System.out.println("Result: " + getHexString(resp.getBytes()));
               byte[] b1 = new byte[]{(byte)0x11, (byte)0x05};
               com = new CommandAPDU((byte)0x00, (byte)0xB2, (byte)0x00, (byte)0x04, b1, (byte)0x0E);
               // the problem is that I don't know how to build a CommandAPDU to read from the file
               System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
               resp = channel.transmit(com);
               System.out.println("Result: " + getHexString(resp.getBytes()));
               card.disconnect(false);
          } catch (Exception e) {
               System.out.println("error " + e.getMessage());
          }
If you know how to do this, I'm waiting for your answer.
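For reading, ISO 7816-4 defines READ RECORD (INS 0xB2) for record-structured files and READ BINARY (INS 0xB0) for transparent files. Here is a sketch of how the two commands are built; which one applies, and the record numbers and lengths, depend on the card's file layout, so treat the parameters below as illustrative examples:

```java
import javax.smartcardio.CommandAPDU;

public class ReadFileApdus {
    // READ RECORD: CLA=00, INS=B2, P1=record number,
    // P2=04 (P1 is an absolute record number in the currently selected EF),
    // Le = number of bytes expected back.
    public static CommandAPDU readRecord(int recordNumber, int expectedLength) {
        return new CommandAPDU(0x00, 0xB2, recordNumber, 0x04, expectedLength);
    }

    // READ BINARY (transparent EF): CLA=00, INS=B0, P1/P2 = 15-bit offset, Le = length.
    public static CommandAPDU readBinary(int offset, int expectedLength) {
        return new CommandAPDU(0x00, 0xB0, (offset >> 8) & 0x7F, offset & 0xFF, expectedLength);
    }
}
```

Usage would be e.g. `resp = channel.transmit(ReadFileApdus.readRecord(1, 0x0E));` — a status word of 9000 means `resp.getData()` holds the record bytes.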

Similar Messages

  • I'm having severe latency issues on Logic 9 through the built-in input (despite having no plugins active and a 32-sample buffer size selected)

    I'm literally running a guitar straight into my computer (no plugins, no interface, just the "built-in input" on my macbook) and there's a LOT of latency. I'm very experienced with Logic and pro-audio (I have logic on my other computer and it works fine) but for some reason there's a ton of delay on my signal. NO idea where to start with this one. I've examined every option within Logic and nothing seems to be helping. I've changed every parameter, including buffer size, disabling and reenabling core audio, and low-latency mode, but nothing has worked.
    Any advice would be appreciated, thanks.
    Also, I'm using a late 2011 15" Macbook (Quad core, 4 GB ram)

    Never heard of this before... what are you using for output in Logic's Audio Prefs?
    Are you on 10.7.5 as your signature says?  There is a supplemental update for early builds of 10.7.5 that were released.
    Take a look at the Audio portion of the "Audio MIDI Setup" in Utilities.
    Bit depth, sample frequency?  Try it at 16-bit, 44.1 kHz.  Is Logic set to the same?
    What kind of cable are you using to get the guitar into Logic?  The Mac audio input jack is multifunction; could you be partially enabling the digital I/O?
    Just tossing some ideas...

  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but aren't aware if it is new or not.  I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem.  The good news is that we have fixed our problem but I wanted to explain it a little more in case anyone else has a similar problem.
    The specific network stream error -314340 should seemingly occur if you are attempting to connect to a network stream endpoint that is already connected to another endpoint or in which the URL points to a different endpoint than the one trying to connect. 
    We ran into this issue on attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW real-time from an HMI machine, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between these machines.  The class can distinguish between messaging streams that handle clusters of control and status information and also data streams that contain a cluster with a timestamp and 24 I16s.  It was on the data network streams that we ran into the issue. 
    The symptoms of the problem were that if we attempted to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return with an error of -314340, indicating the writer endpoint was pointing to a third location. Leaving the URL on the writer endpoint blank, and running in real-time interactive mode or as a startup VI, made no difference. However, the writer endpoint would return without error and eventually catch a remote-endpoint-destroyed error. To make things more interesting, if you specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimenting with it, we found that the buffer size of the create-writer-endpoint call for the data stream was causing the problem, and that we had fat-fingered the constants for this buffer size. Also, pre-allocating or allocating the buffer on the fly made no difference. We imagine it may be due to the fact that we are using a complex data type (a cluster with an array inside of it), and it can be difficult to allocate a buffer for this data type. We guess that when the reader endpoint establishes the connection to a writer with a large buffer size specified, the writer endpoint ultimately times out somewhere in the handshaking routine that is hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments

  • DBMS_LOB.WRITEAPPEND Max buffer size exceeded

    Hello,
    I'm following this guide to create an index using Oracle Text:
    http://download.oracle.com/docs/cd/B19306_01/text.102/b14218/cdatadic.htm#i1006810
    So I wrote something like this:
    CREATE OR REPLACE PROCEDURE CREATE_INDEX(rid IN ROWID, tlob IN OUT NOCOPY CLOB)
    IS
    BEGIN
         DBMS_LOB.CREATETEMPORARY(tlob, TRUE);
         FOR c1 IN (SELECT ID_DOCUMENT FROM DOCUMENT WHERE rowid = rid)
         LOOP
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT_TITLE>'), '<DOCUMENT_TITLE>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c1.TITLE, ' ')), NVL(c1.TITLE, ' '));
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT_TITLE>'), '</DOCUMENT_TITLE>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
              FOR c2 IN (SELECT TITRE,TEXTE FROM PAGE WHERE ID_DOCUMENT = c1.ID_DOCUMENT)
              LOOP
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE>'), '<PAGE>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c2.TEXTE, ' ')), NVL(c2.TEXTE, ' '));
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE>'), '</PAGE>');
              END LOOP;
         END LOOP;
    END;
    The issue is that some page texts are bigger than 32767 bytes! So I get an INVALID_ARGVAL...
    I can't figure out how to increase this buffer size, or how else to manage this issue.
    Can you please help me? :)
    Thank you,
    Ben
    Edited by: user10900283 on 9 févr. 2009 00:05
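    The limit Ben is hitting is DBMS_LOB.WRITEAPPEND's 32767-per-call maximum; the usual way past it is to write the large value in chunks. Below is a sketch of the chunking loop, written in Java with a StringBuilder standing in for the CLOB purely for illustration; a PL/SQL fix would wrap DBMS_LOB.WRITEAPPEND in the same loop, passing at most 32767 per call:

```java
public class ChunkedAppend {
    static final int MAX_CHUNK = 32767; // DBMS_LOB.WRITEAPPEND's per-call limit

    // Appends text in slices no larger than MAX_CHUNK — the same loop a
    // PL/SQL fix would run around DBMS_LOB.WRITEAPPEND(lob, end - off, slice).
    public static void appendInChunks(StringBuilder lob, String text) {
        for (int off = 0; off < text.length(); off += MAX_CHUNK) {
            int end = Math.min(off + MAX_CHUNK, text.length());
            lob.append(text, off, end);
        }
    }
}
```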

    Hi Ben,
    I'm afraid that doesn't help much, since you have obviously rewritten your procedure based on the advice given here.
    Could you please post your new procedure, as formatted SQL*Plus, embedded in {noformat}{noformat} tags, like this:
    SQL> CREATE OR REPLACE PROCEDURE create_index(rid IN ROWID,
    2 IS
    3 BEGIN
    4 dbms_lob.createtemporary(tlob, TRUE);
    5
    6 FOR c1 IN (SELECT id_document
    7 FROM document
    8 WHERE ROWID = rid)
    9 LOOP
    10 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
    11 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT_TITLE>')
    12 ,'<DOCUMENT_TITLE>');
    13 dbms_lob.writeappend(tlob, LENGTH(nvl(c1.title, ' '))
    14 ,nvl(c1.title, ' '));
    15 dbms_lob.writeappend(tlob
    16 ,LENGTH('</DOCUMENT_TITLE>')
    17 ,'</DOCUMENT_TITLE>');
    18 dbms_lob.writeappend(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
    19
    20 FOR c2 IN (SELECT titre, texte
    21 FROM page
    22 WHERE id_document = c1.id_document)
    23 LOOP
    24 dbms_lob.writeappend(tlob, LENGTH('<PAGE>'), '<PAGE>');
    25 dbms_lob.writeappend(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
    26 dbms_lob.writeappend(tlob
    27 ,LENGTH(nvl(c2.texte, ' '))
    28 ,nvl(c2.texte, ' '));
    29 dbms_lob.writeappend(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>')
    30 dbms_lob.writeappend(tlob, LENGTH('</PAGE>'), '</PAGE>');
    31 END LOOP;
    32 END LOOP;
    33 END;
    34 /
    Warning: Procedure created with compilation errors.
    SQL>
    SQL> DECLARE
    2 rid ROWID;
    3 tlob CLOB;
    4 BEGIN
    5 rid := 'AAAy1wAAbAAANwsABZ';
    6 tlob := NULL;
    7 create_index(rid => rid, tlob => tlob);
    8 dbms_output.put_line('TLOB = ' || tlob); -- Not sure, you can do this?
    9 END;
    10 /
    create_index(rid => rid, tlob => tlob);
    ERROR at line 7:
    ORA-06550: line 7, column 4:
    PLS-00905: object BRUGER.CREATE_INDEX is invalid
    ORA-06550: line 7, column 4:
    PL/SQL: Statement ignored
    SQL>

  • Getting recv buffer size error even after tuning

    I am on AIX 5.3, IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3...), Coherence 3.1.1/341
    I've set the following parameters as root:
    no -o sb_max=4194304
    no -o udp_recvspace=4194304
    no -o udp_sendspace=65536
    I still get the following error:
    UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 44 packets (65536 bytes)....
    The following commands/responses confirm that the settings are in place:
    $ no -o sb_max
    sb_max = 4194304
    $ no -o udp_recvspace
    udp_recvspace = 4194304
    $ no -o udp_sendspace
    udp_sendspace = 65536
    Why am I still getting the error? Do I need to bounce the machine or is there a different tunable I need to touch?
    Thanks
    Ghanshyam

    Can you try running the attached utility and send us the output? It will simply try to allocate a variety of socket buffer sizes and report which succeed and which fail. Based on the Coherence log message I expect this program will also fail to allocate a buffer larger than 65536, but it will allow you to verify the issue externally from Coherence.
    There was an issue with IBM's 1.4 AIX JVM which would not allow allocation of buffers larger than 1MB. This program should allow you to identify if 1.5 has a similar issue. If so, you may wish to contact IBM support regarding obtaining a patch.
    thanks,
    Mark
    Attachment: so.java (to use this attachment you will need to rename 399.bin to so.java after the download is complete.)
    Attachment: so.class (to use this attachment you will need to rename 400.bin to so.class after the download is complete.)
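    The attached utility is not reproduced here, but the idea it implements can be sketched in a few lines: ask the OS for progressively larger UDP receive buffers and print what was actually granted (the kernel may silently cap the request, e.g. at sb_max on AIX). This is an illustrative stand-in, not the actual so.java:

```java
import java.net.DatagramSocket;

public class BufferSizeCheck {
    // Requests a range of UDP receive buffer sizes and reports what the OS
    // actually granted; a granted size smaller than the request indicates
    // a kernel-level cap, independent of Coherence.
    public static void main(String[] args) throws Exception {
        for (int requested = 64 * 1024; requested <= 4 * 1024 * 1024; requested *= 2) {
            try (DatagramSocket s = new DatagramSocket()) {
                s.setReceiveBufferSize(requested);
                System.out.println("requested " + requested + " -> got " + s.getReceiveBufferSize());
            }
        }
    }
}
```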

  • Imp-00020 long column too large for column buffer size (22)

    Hi friends,
    I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform).
    Then I transferred the export file from the Unix server to a laptop (Windows platform),
    and tried to import this file into Oracle 10.2 on Windows XP.
    (Database configuration of Oracle 10g:
    User tablespace 2 GB
    Temp tablespace 30 MB
    Rollback segments of 15 MB each
    Undo tablespace of 200 MB
    SGA 160 MB
    PGA 16 MB)
    All the tables imported successfully except 3 tables which have around 1 million rows each.
    The error messages that come up during import for these 3 tables are the following:
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size(7)
    The main point here is that in all 3 tables there is no LONG/timestamp column (only varchar/number columns).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. Commit=Y Indexes=N (in this case it does not import the complete tables).
    3. First exported the table structures only, and then the data.
    4. Created the tables manually and tried to import into them.
    But all efforts failed;
    I am still getting the same errors.
    Can some one help me on this issue ?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both export and import were made on a 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, some of which do not contemplate Data Pump. By the way, shouldn't EXP / IMP work anyway?

  • Changing Buffer Size - Logic Tweaks Out

    I find that I'm unable to change the buffer size (for instance, after recording, when I want to play back without any glitches) without having to quit Logic and reload the file. I usually get a message that "some plug-ins have been disabled."
    Everything is usually fine after I quit the file and reload. But I'm wondering if anyone else has this issue and if it can be avoided. It's mostly just a time-waster.
    Thanks.

    Nevermind. Thanks.

  • Buffer Size error on Oracle 10.2.0 Database

    Hi Experts,
    I am getting the following error when my VB application executes on Oracle 10.2.0:
    ORA-01044: Size 5000000 of buffer bound to variable exceeds maximum SYSMAN.EMD_NOTIFICATION
    The Oracle documentation suggests reducing the buffer size.
    As far as I know, Oracle 10g has only one such parameter, LOG_BUFFER.
    Which parameter needs to be changed in init.ora?
    Any additional recommendations that could arrest this error?
    Thanks and Regards

    Hi,
    The same application works with the same schema and the same database objects on Oracle 9i, which is pretty surprising. I took a backup of the schema objects from Oracle 10g and loaded it into Oracle 9i; the application works fine without a single modification.
    Next, I sat with my developer and checked the parameters, as suggested. Where the procedure was being called, I found that it is called through the command line (not with a record set), and the parameter was set to 10000, which we modified to 5000. This procedure-calling object is used globally across the application. The application started running fine on Oracle 10g, where it was earlier giving errors.
    Now the point is: should we go ahead with the change of the parameter from 10000 to 5000?
    This value is very sensitive: if the number of rows returned is more than 5000 (even 5001), the application will give an error, and it is obvious that the number of rows returned by most of the queries will definitely exceed 5000.
    I feel this is a temporary solution; can we go for a permanent solution? I am sure that in this forum we have plenty of brains who can troubleshoot this issue.
    Waiting for a real-time solution.
    Edited by: user3299261 on Aug 31, 2009 2:14 PM

  • Network Stream Max Buffer Size

    Hello,
    I recall an AE on here once mentioning that network streams can exhibit problems if you set a buffer size greater than 9MB, but I haven't been able to find any concrete explanation of this.  The reason behind me asking is that, I'm currently running into some memory problems in my application in which I initialize my buffer to 30MB.
    Basically, my application starts out running fine, but after 30-45 minutes, all of a sudden, my network stream buffer starts building up until it reaches a max value, even though I am still reading data out of it at seemingly the same rate.  It's as if routine memory allocation/deallocation elsewhere in my application causes issues with the network stream.
    Anyone have any insight to this?
    thanks in advance,

    The size of the buffer you set for your network streams will just determine how much memory is set aside within your application. If you make your stream too large, LabVIEW will throw an error such as 'Memory is Full' telling you there is not enough space to create that large of a buffer. 
    The important benchmarking data to look at concerning network streams is throughput and latency. The following KB has data in section 6 regarding how quickly data can be passed through a network stream:
    http://www.ni.com/white-paper/12267/en/#toc4
    Since you are able to function properly for the first 30-45 minutes, it does not seem that you are violating network capabilities with your setup. However, there seems to be something that is building up/slowing down after that period of time.
    How did you determine that you need a 30 MB buffer size? If you are originally only storing between 0 and 8000 items in the buffer, it may make sense to use a buffer smaller than 30,000,000 bytes to free up resources for other parts of your application.
    Here is some more information about how to profile performance and memory within LabVIEW: http://zone.ni.com/reference/en-XX/help/371361J-01/lvdialog/profile/
    Applications Engineer
    National Instruments

  • JMS Server Message Buffer Size & Thresholds and Quotas settings

    On WLS10MP1,
    For persistent messages:
    1. Does the "JMS Server Message Buffer" setting serve the same purpose as "Bytes Threshold High" under Thresholds?
    2. If not, can someone explain the difference, please?
    Many thanks,

    Message Buffer Size relates to the number of messages the JMS server keeps in memory. Its value determines when the server should start paging messages out of memory to a persistent store. So this is directly related to the memory/storage issue and the size of messages.
    Bytes Threshold High relates to the performance of the JMS server. When this limit is reached, the JMS server starts logging the messages and may even instruct the producer to slow down the message input.
    So if you get Bytes Threshold High messages, that means you should check on your consumer (the MDB that is picking up messages from the queue) and try to increase its performance.
    However, if your Message Buffer Size is crossing its limit, then you should think of increasing the memory so that more messages can be kept in memory and disk I/O can be reduced.
    Anyone want to add something more to it?

  • I/O buffer size switch by itself

    Hi, whenever I'm playing my guitar in Logic Pro 9, the latency suddenly changes, and I start to hear this delay in the sound.
    Then I have to change the I/O buffer size (e.g. from 128 to 256) to hear the sound in real time again.
    It doesn't let me record, as it changes while I'm playing.
    Why is Logic doing this, and how can I fix it?
    It started doing this after I updated to Mavericks; before that I had no problems at all.
    Thanks
    Using Logic Pro 9
    9.1.8
    Macbook Pro
    OS X 10.9

    You will find several threads on the M-Audio website dedicated to Mavericks issues with M-Audio Fast Tracks...
    Here is the major one, I believe:
    http://community.m-audio.com/m-audio/topics/os_x_mavericks_fast_track_pro_support
    It does not make for good reading, I am afraid to say... and although there are a variety of different issues discussed there, it basically amounts to this:
    The interface doesn't have 10.9-compatible drivers, and it may be several weeks until M-Audio releases updated ones...
    Sorry!

  • I/O Buffer Size Requires Constant Changes

    Every 20 seconds or so, I have to change the I/O Buffer Size to stop a crackling noise and latency.  When I change it, everything sounds clear for 20 seconds, and then goes back to sounding terrible and out of sync.  Any help?

    It sounds like you have an issue that isn't actually directly related to the buffer size. When you change the buffer size, you are reinitializing Core Audio. Something else is happening that causes your sync issues after about 20 seconds, and reinitializing clears it.
    What sort of setup do you have? Do you have an aggregate device? A clock? What sorts of instruments? etc.

  • Sound buffer size

    Can I increase my sound buffer size?
    Right now I am getting 17640 bytes coming in at a time.
    Then I run this code:
    int bytesRecorded = e.BytesRecorded;
    buffer1 = new double[8192];
    // WriteToFile(buffer, bytesRecorded);
    int tempint = 0;
    for (int index = 0; index < 16384; index += 2)
    {
        // combine two bytes into one 16-bit sample, then scale to [-1, 1)
        short sample = (short)((buffer[index + 1] << 8) | buffer[index + 0]);
        float sample32 = sample / 32768f;
        buffer1[tempint] = (double)sample32;
        tempint++;
    }
    and then I send it to my FFT.
    My FFT only works on 2^n samples at a time,
    so I can only do 8192 samples at a time.
    I wanted to do more samples at a time, so I can get a higher accuracy.
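    One way around the fixed 17640-byte callbacks is to accumulate samples across callbacks and release a power-of-two block (e.g. 16384) whenever enough have arrived, so the FFT size is no longer tied to the driver's callback size. A sketch of the idea (in Java rather than the poster's C#; the class and method names are illustrative):

```java
import java.util.ArrayDeque;

public class SampleAccumulator {
    // Buffers incoming samples and hands out fixed power-of-two blocks,
    // decoupling the FFT size from the audio callback size.
    private final ArrayDeque<Double> pending = new ArrayDeque<>();
    private final int fftSize;

    public SampleAccumulator(int fftSize) { this.fftSize = fftSize; }

    // Called from the audio callback with whatever arrived.
    public void add(double[] samples) {
        for (double s : samples) pending.add(s);
    }

    // Returns the next fftSize samples, or null if not enough are buffered yet.
    public double[] nextBlock() {
        if (pending.size() < fftSize) return null;
        double[] block = new double[fftSize];
        for (int i = 0; i < fftSize; i++) block[i] = pending.poll();
        return block;
    }
}
```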

    Hi Btb4198,
    I'm afraid that this is not the correct forum for this issue, since this forum is for discussing Visual C#. I am moving your question to the moderator forum ("Where is the forum for..?"). The owner of that forum will direct you to the right forum. Thanks for your understanding.

  • Does weblogic set socket buffer size ?

    Hi all,
    I want to know if WebLogic sets the socket buffer size when it establishes a connection with a client.
    If so, what is its default value?
    Any help is appreciated.
    Thanks in advance.


  • Performance affected from the socket buffer size

    I have a server program deployed in the Tomcat server.
    I found the server was sometimes blocked in the native method java.net.SocketOutputStream.socketWrite0(Native Method), and the CPU was not fully utilized.
    Intuitively, I thought I might solve this problem by increasing the socket send buffer size. So I tried it.
    By default, the Tomcat socket send buffer size is 9000 bytes or so. I increased this value to 102400 bytes and tested again. CPU usage was around 100%. OK, it worked.
    But what about decreasing this value to a small number? I tried: I set the value to 100 bytes and tested again. CPU usage was still around 100%!!!
    So, the problem here is: the CPU was not able to be fully utilized using the default buffer size (9000 bytes), but if you increase the value to a very large number, or decrease it to a very small one, you can achieve better performance.
    Note: the client was sending requests all the time, just like a stress test, so the server side was always busy.
    Edited by: willpowerforever on Jul 31, 2009 6:37 PM

    Thanks for all your replies.
    I'll give you more detail on my issue:
    Test Environment:
    Server: 4 GB memory, 1.3 GHz, a dual core CPU with hyper-threading enabled on each core (4 logical CPUs), a tomcat with my application deployed
    Client: the same configuration as Server, a http simulator
    Network: 1Gb/s
    Test:
    1> Simulate one user. The client keeps sending requests to the server, and the server has one thread serving the user's requests. (This keeps the server always busy.)
    request
    Server <------------- Client
    response
    Server ---------------> Client
    request
    Server <-------------- Client
    CPU usage was 18%; in the normal case it should be 25% (100% / 4 = 25%; in my other cases, CPU usage was 25%).
    I took a few thread dumps and saw threads sometimes blocked in the native method java.net.SocketOutputStream.socketWrite0(Native Method).
    2> Simulate 4 concurrent users. The client keeps sending requests to the server concurrently, and the server has multiple threads serving the users' requests. (This keeps the server always busy.)
    4 requests
    Server <------------- Client
    responses
    Server ---------------> Client
    requests
    Server <-------------- Client
    CPU usage was 80%; in the normal case it should be 90%+.
    I took a few thread dumps and saw threads sometimes blocked in the native method java.net.SocketOutputStream.socketWrite0(Native Method).
    If the CPU is not fully utilized, for example only 80%, the server must be blocked by something, and this bottleneck would impact the scalability of the server.
    If I can remove this bottleneck without introducing other overheads, I may acquire:
    1> higher throughput
    2> a more scalable server
    In our performance requirements, server CPU usage must be 90%+.
    I just find it strange that the CPU usage was normal (25% for the single-thread test, 90%+ for the 4-thread test) even when I chose a very small buffer size.
