TCP buffer

I have a question about reading data from the Socket's InputStream
If I want to, for example, read an int in, can I do something like the following?
InputStream in = socket.getInputStream();
final byte[] integer = new byte[4];
if(in.read(integer) == -1)
//EOF
int value = integer[0]<<24 | integer[1]<<16 | integer[2]<<8 | integer[3];

The InputStream javadoc states that the read will "try" to read the length of the array into the array. I'm not exactly sure what that means. If I use this code example, am I guaranteed (as long as EOF is not reached) that all 4 bytes will be read in? Or should I do something like the following example?
InputStream in = socket.getInputStream();
final byte[] integer = new byte[4];
int temp1 = in.read(integer);
int temp2 = temp1;
// in.read only tries to read in the given length, nothing is guaranteed.  Read until all 4 bytes are
// read in.
while(temp2 < 4) {
     if(temp1 == -1) {
          //EOF
     }
     temp1 = in.read(integer, temp2, 4-temp2);
     temp2 += temp1;
}
int value = integer[0]<<24 | integer[1]<<16 | integer[2]<<8 | integer[3];

Thanks.

> The InputStream javadoc states that the read will "try" to read the length of the array into the array. I'm not exactly sure what that means.
It means it will try.
> If I do this code example, am I guaranteed (as long as the EOF is not reached) for all 4 bytes to be read in?
No. It will try. It may not succeed.
> Or should I do something like the following example?
No. You should either use DataInputStream.readFully() or write a correct loop. Yours is incorrect. The test for -1 should immediately follow the read. At the moment you're running the risk of adding -1 to the running count.
On the other hand you can throw it all away and use DataInputStream.readInt().
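For reference, a minimal sketch of the two approaches suggested above (the class and method names are illustrative only, not from the thread):

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

class ReadIntSketch {

    // Simplest: DataInputStream.readInt() uses readFully() internally, so it
    // blocks until all 4 bytes arrive, or throws EOFException if the stream ends first.
    static int readIntFrom(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        return in.readInt();
    }

    // Hand-written equivalent: test the return value of read() immediately.
    static int readIntManually(InputStream in) throws IOException {
        byte[] buf = new byte[4];
        int off = 0;
        while (off < 4) {
            int n = in.read(buf, off, 4 - off);
            if (n == -1) {
                throw new EOFException("stream ended before 4 bytes were read");
            }
            off += n;
        }
        // Mask each byte to avoid sign extension when assembling the int.
        return (buf[0] & 0xFF) << 24 | (buf[1] & 0xFF) << 16
             | (buf[2] & 0xFF) << 8  | (buf[3] & 0xFF);
    }
}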

Similar Messages

  • How to manage tcp buffer with array

    I am having trouble parsing data from the TCP Read buffer. I can parse the data from the buffer; I was just wondering if the following is possible... Is there a way to have each TCP packet that is received be put into an array element, instead of having the buffer append the received data onto itself? It would be much easier for me if the buffer were divided into packets; that way I could just parse each packet instead of having to find each packet and then parse it individually. I was just wondering if there was an example or something, or do I need to write a VI myself to handle this? I noticed that the TCPREAD VI block has 4 options for the buffer... this is not one of them... but perhaps someone has done something like this before? Thanks for any help.

    Unfortunately the project I have has a variable number of connections so it could be one or it could be up to 65535.
    I've been thinking of a server-type example for my blog.
    I've written a web server in LabVIEW before (1995 or so).  Basically you separate the listener from the service functions.
    The listener listens for connections and puts CONN IDs into a pool (queue).  The pool contains all the connections that are active at the moment, along with some sort of status flag.
    A service loop continuously extracts a connection cluster from the pool, handles it according to its status, and either returns it to the pool, or terminates the connection and removes it from the pool.
    The idea is that the server is a sort of state machine which runs through all current connections and gives them some attention.
    You could also do it by spawning a number of copies (one for each connection) of a single reentrant VI, but I don't know about the performance of that.  If you truly have a thousand connections at once, then you would have a thousand instances of this VI running in parallel.
    That's a bit daunting to me, but then I've never tried it. Perhaps someone else could comment on that feasibility.  It would seem that the thread-management overhead might be a problem, but I don't know. 
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks
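    The design described above is LabVIEW, but the listener/pool pattern is language-neutral. A rough Java sketch of the same idea (the class name and the handle() stub are illustrative, not from the thread):

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Sketch of the listener/pool pattern: one loop accepts connections and puts
    // them into a pool; a service loop repeatedly takes a connection, gives it
    // some attention, and either returns it to the pool or terminates it.
    public class PoolServerSketch {
        private final BlockingQueue<Socket> pool = new LinkedBlockingQueue<>();

        void listener(int port) throws Exception {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    pool.put(server.accept());      // new connection joins the pool
                }
            }
        }

        void serviceLoop() throws Exception {
            while (true) {
                Socket conn = pool.take();          // next connection needing attention
                boolean keep = handle(conn);        // application-specific servicing
                if (keep) {
                    pool.put(conn);                 // return it to the pool
                } else {
                    conn.close();                   // terminate and remove from the pool
                }
            }
        }

        boolean handle(Socket conn) {
            // placeholder: check the connection's status, read/write a little, etc.
            return true;
        }
    }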

  • Tcp details please

    Well, I know that this is more of a general TCP question than a LabVIEW question but I will ask it anyway since there does not seem to be any good documentation on the Web about TCP under Windows, particularly whatever Windows layer LabVIEW calls to do TCP.
    Can someone out there explain what happens behind the scenes when doing TCP Read/Write with LabVIEW?  Let's assume "immediate" or "standard" mode is used and there is no LabVIEW buffer set up for the data.  What actually triggers the data to travel over the network?  The Read or the Write?  Obviously there is some fixed size buffer allocated somewhere at a low level- is it used by the Read side or the Write side, or both?  What is the default size of the TCP buffer?  4kb?  When that buffer fills, I'm assuming that's when the "Write" timeout comes into play?
    So in summary, I'm looking for a basic "tutorial" about how TCP data transfer is handled below the LabVIEW layer.

    On windows, LabVIEW uses the winsock2 API (WS2_32.dll).
    http://msdn.microsoft.com/en-us/library/ms740673(VS.85).aspx
    It would seem that on WindowsXP the default size of TCP buffers is 17,520 bytes.
    By default LabVIEW leaves the send and recv buffer sizes alone (the OS defaults). If you want to fine tune them you can use the ini tokens
    SocketSendBufferSize and
    SocketRecvBufferSize
    The values of these tokens will be applied to all sockets that LabVIEW uses.
    <Disclaimer>
    The OS defaults are almost always the best choice. It is really easy to destroy LabVIEW's network performance by fiddling with these values.
    </Disclaimer>
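    The ini tokens above are LabVIEW-specific. For comparison only, this is roughly what the same per-socket knobs look like from plain Java (the 64 KB value and the host/port are arbitrary examples; as the disclaimer says, the OS defaults are usually the best choice):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    class SocketBufferSketch {
        public static void main(String[] args) throws Exception {
            Socket s = new Socket();                  // unconnected, so the receive buffer
            s.setReceiveBufferSize(64 * 1024);        // can be sized before the connection
            s.setSendBufferSize(64 * 1024);           // is established
            s.connect(new InetSocketAddress("example.com", 5001));   // hypothetical host/port
            System.out.println("send buffer: " + s.getSendBufferSize()
                    + ", recv buffer: " + s.getReceiveBufferSize());  // what the OS actually granted
            s.close();
        }
    }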

  • What happens to receive buffer when a RST is received

    Hi,
    I have a messaging system with an XML over TCP interface running in jdk 1.4.2_04 on Windows XP Professional. Sometimes when I publish data at a very high rate (about 500 messages/sec) the messaging server starts to get behind due to the parsing of the incoming XML messages. I have several publisher programs running in separate JVMs that open a connection to the messaging server, write data, close the socket and terminate. When a publisher program closes the socket and terminates I am getting a Socket exception (Connection reset by peer) when I call the read method on the socket InputStream. I assumed that I would not get the exception until my server had consumed all of the unread bytes in the receive buffer and then called read again. However, I am getting the exception before I finish reading data out of the buffer and I lose a lot of messages that were sent over the socket. I know all of the data was sent from the publisher process over the socket because I can see all of it in the Ethereal packet sniffer trace.
    What is the behavior of the socket supposed to be when a RST is received and the receive buffer still has unread data? I do not know for sure whether I have read all of the data out of the buffer by the way. It's possible that my read is pulling in the last of the buffer when it gets the exception but I assume I can't expect that the contents of the byte[] I am reading data into is correct since an exception is thrown. I also assume I should not try to read again from the socket after I get the exception.
    Does anyone have any insight on what might be happening? Also, does anyone know of a tool for Windows that will allow me to determine how many bytes are in the receive and send buffers? On Solaris netstat can do this but not on Windows and I have not seen a tool for Windows that can do this.
    Thanks,
    Mark

    Thanks for your help. I'm sure now that the TCP buffer is being discarded when the RST is received. However, I am not setting SO_LINGER and I think I am closing the socket appropriately on the client side. However, I get a RST right after the FIN from the client side of the socket every time. It does not wait around for an ACK to the FIN before sending the RST to the server. The client side is a Windows 2000 Advanced Server. The server side is XP Professional.
    Here is the last two packets from the Ethereal capture on the server (server ip 192.168.203.22 port 5001 client ip 192.168.203.179 port 1605):
    No. Time Source Destination Protocol Info
    3882 78.610929 192.168.203.179 192.168.203.22 TCP 1605 > 5001 [FIN, PSH, ACK] Seq=84750 Ack=83 Win=64430 Len=340
    Frame 3882 (394 bytes on wire, 394 bytes captured)
    Internet Protocol, Src Addr: 192.168.203.179 (192.168.203.179), Dst Addr: 192.168.203.22 (192.168.203.22)
    Transmission Control Protocol, Src Port: 1605 (1605), Dst Port: 5001 (5001), Seq: 84750, Ack: 83, Len: 340
    Data (340 bytes)
    No. Time Source Destination Protocol Info
    3883 78.610971 192.168.203.179 192.168.203.22 TCP 1605 > 5001 [RST] Seq=85091 Ack=3643464782 Win=0 Len=0
    Here is my client socket code (abbreviated)
    m_socket = new Socket(m_host, m_port);
    System.out.println("Send buffer size " + m_socket.getSendBufferSize());
    System.out.println("SO Linger " + m_socket.getSoLinger());
    System.out.println("Get tcpNoDelay " + m_socket.getTcpNoDelay());
    OutputStream outStream = m_socket.getOutputStream();
    InputStream inStream = m_socket.getInputStream();
    InputStreamReader reader = new InputStreamReader(inStream, "UTF-8");
    // send data
    for (int count = 0; count < argLoopCount; count++) {
        try {
            String record;
            String payload = new Integer(count + startIndex).toString();
            record = "<MSG>"+payload+"</MSG>";
            byte[] bytes = record.getBytes();
            outStream.write(bytes, 0, record.length());
            outStream.flush();
            System.out.println(payload);
        } catch (Exception e) {
            System.err.println("File input error");
            e.printStackTrace();
        }
        try {
            Thread.sleep(argFreq);
        } catch (InterruptedException e) {
            System.out.println("interrupted");
        }
    }
    outStream.flush();
    outStream.close();
    m_socket.close();
    // sleep in case the program exit causes the RST
    Thread.sleep(20000);
    -Mark
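    For what it's worth, a common way to avoid an RST overtaking buffered data is to half-close the sending side and wait for the peer to finish before calling close(). A rough Java sketch of that idea (not a claim about what was wrong in this particular setup):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    class GracefulCloseSketch {
        static void finishAndClose(Socket socket, OutputStream out) throws Exception {
            out.flush();
            socket.shutdownOutput();            // send FIN but keep the socket open
            InputStream in = socket.getInputStream();
            byte[] scratch = new byte[1024];
            while (in.read(scratch) != -1) {
                // drain until the peer closes its side (read returns -1)
            }
            socket.close();                     // close only after the peer has finished
        }
    }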

  • WAAS: TCP-Settings with ADSL

    Hello,
    Is it recommended to change the TCP buffer values if the edge WAE is connected via ADSL?
    download/upload: 6000/512 kBit/s.
    Kind Regards

    Volker,
    It depends on the RTT latency. Buffers in 4.0 should only be changed when the bandwidth-delay product exceeds the default buffer allocation.
    Zach
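    As a rough illustration of that rule of thumb (the 100 ms RTT here is an assumed figure, not from this thread): bandwidth-delay product = bandwidth x RTT, so for the 6,000 kbit/s download direction, 6,000,000 bit/s x 0.100 s / 8 = 75,000 bytes, i.e. about 75 KB. Only if that figure exceeded the WAE's default buffer allocation would the TCP buffers be worth adjusting.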

  • FTP/SFTP/FISH (etc) slow file transfer rate over LAN

    Hi everyone,
    I have a problem with transferring files over my home network that has been bothering me for quite some time.
    I have an 802.11n router which should provide transfer rates of up to 150 Mbps (afaik). When I download files from the Internet, a 3 MB/s data transfer rate is no problem.
    However, when receiving or sending data over LAN, the transfer rate is much slower (1.8 MB/s).
    My rough guess is (after reading some papers on this topic) that TCP protocol is causing this (its flow control feature to be exact), since TCP max window size is too small on Linux by default.
    So, setting TCP max window size to a greater number should solve this.
    I tried putting this:
    # increase TCP max buffer size setable using setsockopt()
    # 16 MB with a few parallel streams is recommended for most 10G paths
    # 32 MB might be needed for some very long end-to-end 10G or 40G paths
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    # increase Linux autotuning TCP buffer limits
    # min, default, and max number of bytes to use
    # (only change the 3rd value, and make it 16 MB or more)
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    # recommended to increase this for 10G NICS
    net.core.netdev_max_backlog = 30000
    # these should be the default, but just to be sure
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_sack = 1
    in /etc/sysctl.conf but to no avail.
    So either there is no problem with the max window size setting, or the Linux kernel ignores it (maybe because /proc is no longer supported?).
    Thanks for any neat ideas.

    Bump? No ideas whatsoever?

  • Socket communication very slow on Mac OS X 10.6

    Hi everybody,
    I'm using socket based communication in an application, and in Mac OS 10.5 everything works quickly and flawlessly using XCode 3.1.4.
    In Mac OS 10.6, using XCode 3.2.4, I set the Mac OS X 10.6 SDK and recompiled the app, but the responsiveness of the app is terribly slow. The GUI itself runs OK but any socket related action is painfully slow (sometimes connection timeout).
    The socket communication is implemented using standard CFSocket classes.
    Is this an XCode migration issue from 3.1 to 3.2 or a Mac OS 10.6 related one (I've seen lots of complaints about networking issues)?
    Thank you very much,
    Dan

    Thank you very much for the info, but why did it work (and really fast) on 10.5? I also wonder why I didn't run into the TCP_NODELAY problem there. My TCP buffer size is 1K and I'm using the same testing environment.
    Of course there is much more to it than some init parameters, but still, what could be so different on 10.6? And I already tried the sysctl.conf TCP_NODELAY setting but with no effect.
    This is the code (briefly):
    1. CONNECT
    // Create signature from ip and port
    struct sockaddr_in sin; /* TCP/UDP socket address structure. */
    self->m_RemoteSocketSignature = (CFSocketSignature*)malloc(sizeof(CFSocketSignature));
    self->m_RemoteSocketSignature->protocolFamily = PF_INET; /* IPv4 */
    self->m_RemoteSocketSignature->socketType = SOCK_STREAM; /* Stream oriented. */
    self->m_RemoteSocketSignature->protocol = IPPROTO_TCP; /* TCP */
    memset( &sin, 0, sizeof( sin ) );
    sin.sin_len= sizeof( sin );
    sin.sin_family = AF_INET; /* Address is IPv4. Use PF_xxx for protocol
    description, AF_xxx for address description. */
    /* sin.port and sin.addr are in network( big endian ) byte order. */
    sin.sin_port = CFSwapInt16HostToBig( port );
    inet_aton( [address UTF8String], &sin.sin_addr ); /* inet_aton() gives back
    network order addresses from strings. */
    self->m_RemoteSocketSignature->address = CFDataCreate( NULL, (UInt8 *)&sin, sizeof( sin ) );
    // Connect the Write & the Read streams to the socket
    CFStreamCreatePairWithPeerSocketSignature (
        kCFAllocatorDefault,
        m_RemoteSocketSignature,
        &m_ReadStream,
        &m_WriteStream );
    if((m_ReadStream != NULL) && (m_WriteStream != NULL))
    {
        bOpenWrite = CFWriteStreamOpen (m_WriteStream);
        bOpenRead = CFReadStreamOpen (m_ReadStream);
    }
    2. RECEIVE
    CFIndex iTotalBytesRead = 0;
    while (true == CFReadStreamHasBytesAvailable(m_ReadStream))
    {
        char buffer[1024];
        CFIndex bufferLength = 1024;
        CFIndex iBytesRead = CFReadStreamRead (
            m_ReadStream,
            (UInt8 *)buffer,
            bufferLength );
        // Error ?
        if(iBytesRead < 0)
            return iBytesRead;
        // EOS ?
        if(iBytesRead == 0)
            return iTotalBytesRead;
        iTotalBytesRead += iBytesRead;
        if(string != nil)
            [string appendString:[NSString stringWithCString:buffer length:(unsigned)iBytesRead]];
    }
    3. SEND
    CFIndex iBytesSent = CFWriteStreamWrite (
        m_WriteStream,
        (const UInt8 *)buffer,
        bufferLength );
    Thanks a lot for your help,
    Dan

  • Advice on Communication VI's - Polling for replies

    Advice on Communication VI’s - Polling for replies
    Hi all
    I have been given the task of improving the performance of my work's communication VIs. One of the main areas I have found that needs improving is how we implement a delay between a command being transmitted and when we read back the data from the DUT. Currently we use the following steps:
      • Convert text string into Hex
      • Transmit Hex commands to DUT using appropriate protocol (TCP, USB)
      • Wait Xms
      • Read Reply
      • Check Reply is valid
    The main issue we have is that we adjust the X ms delay to cover the slowest possible command the DUT supports (~2 sec); however, most commands will return in ~20 ms. I am planning to replace the “wait X ms” VI with a VI that continuously reads/polls until it receives a valid reply. This way the communication VI will only delay for the required amount of time. This is the area I could do with some advice. Should I continuously poll/read to detect if a reply has been received, or is there a better method of detecting the arrival of a reply?
    Any advice would be gratefully received
    D.Barr

    Hi 148,
    I have attached an example VI. It does not run; it is just an indication of what I think you need. I wanted to just post a pic of it to give you an idea, but my browser kept on crashing and I'm too intoxicated to figure out why, so you can view the example instead.
    It would be an idea to run a separate loop that constantly scans the TCP Read VI. If you set it to standard mode, it will either return all read bytes once 'bytes to read' is complete, or return the bytes read once the VI times out. This can then update a functional global that acts as a data buffer and message parser. Your main loop can then poll this for any new input messages. I haven't included the functional global, just an example of what the dedicated loop should do.
    Hope this points you in the right direction. I'm off to bed, goodnight.
    Lucither.
    "Everything should be made as simple as possible but no simpler"
    Attachments:
    TCP buffer.vi 8 KB
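    The attachment above is a LabVIEW VI; as a rough language-neutral sketch of the same poll-until-valid-reply idea, something like the following could replace a fixed "wait X ms" (the timeout values and the isValidReply() check are illustrative assumptions, not part of the original attachment):

    import java.io.ByteArrayOutputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    class ReplyPollerSketch {
        // Read from the DUT until a complete, valid reply arrives or ~2 s elapse.
        static byte[] waitForReply(Socket socket) throws Exception {
            socket.setSoTimeout(20);                            // short per-read timeout instead of a fixed wait
            long deadline = System.currentTimeMillis() + 2000;  // worst-case command time
            ByteArrayOutputStream reply = new ByteArrayOutputStream();
            byte[] chunk = new byte[256];
            while (System.currentTimeMillis() < deadline) {
                try {
                    int n = socket.getInputStream().read(chunk);
                    if (n == -1) break;                         // connection closed
                    reply.write(chunk, 0, n);
                    if (isValidReply(reply.toByteArray())) {    // protocol-specific completeness check
                        return reply.toByteArray();
                    }
                } catch (SocketTimeoutException e) {
                    // nothing received yet; keep polling until the deadline
                }
            }
            throw new Exception("no valid reply within the deadline");
        }

        static boolean isValidReply(byte[] data) {
            return false;   // placeholder: depends on the DUT's reply format
        }
    }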

  • NAS nfs (WD Netcenter) permissions and speed problems

    I have a 320GB WD Netcenter - used to use it on an older mac and now on my new one. Permissions seemed to be blocked on one of the other user accounts on this new MBP. When I do "get info" the ownership boxes are greyed out and I cannot change anything. Should I run Old Toad's BatCHMod (http://www.macchampion.com/arbysoft/) on the drive?
    Also I am getting VERY slow transfer speeds over my 54 Mbps wireless network. I've read about this in other places but could not find an answer (apart from a very complicated article on TCP buffer sizes!). I can mount it as smb or nfs and chose nfs. I have also been using Cocktail to optimize the connection speed over the net. Maybe this has something to do with it?!
    Cheers
    Charlie

    Ask here:
    http://lists.apple.com/mailman/listinfo/macos-x-server
    -Ralph

  • Replication with in memory DB: client synchronization

    Hi,
    I'm using the replication framework with two completely in-memory databases. The first one is launched as master without knowledge of its replica db ("dbenv->repmgr_add_remote_site" and "dbenv->repmgr_set_nsites" are not called), some data is inserted into it, and subsequently the replica process is launched as client (in this case "repmgr_add_remote_site" and "repmgr_set_nsites" are called with the master coordinates). I expected the client to be synchronized by the master with the previously inserted records, but this doesn't seem to happen. Furthermore, although the client opens the db successfully, when db->get is called on the client the following error is returned:
    "DB->get: method not permitted before handle's open method".
    These are the first messages printed by master when client process is started:
    MASTER: accepted a new connection
    MASTER: got handshake 10.100.20.106:5066, pri 1
    MASTER: handshake introduces unknown site
    MASTER: EID 0 is assigned for site 10.100.20.106:5066
    MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 0 eid 0, type newclien
    t, LSN [0][0] nogroup
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newsite, L
    SN [0][0] nobuf
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newmaster,
    LSN [1][134829] nobuf
    MASTER: NEWSITE info from site 10.100.20.106:5066 was already known
    MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 0 eid 0, type master_r
    eq, LSN [0][0] nogroup
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newmaster,
    LSN [1][134829] nobuf
    MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type update_r
    eq, LSN [0][0]
    MASTER: Walk_dir: Getting info for dir: ./env
    MASTER: Walk_dir: Dir ./env has 2 files
    MASTER: Walk_dir: File 0 name: __db.rep.gen
    MASTER: Walk_dir: File 1 name: __db.rep.egen
    MASTER: Walk_dir: Getting info for in-memory named files
    MASTER: Walk_dir: Dir INMEM has 1 files
    MASTER: Walk_dir: File 0 name: RgeoDB
    MASTER: Walk_dir: File 0 (of 1) RgeoDB at 0x41ee2018: pgsize 65536, max_pgno 1
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type update, LSN
    [1][134829] nobuf
    MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page_req
    , LSN [0][0]
    MASTER: page_req: file 0 page 0 to 1
    MASTER: page_req: found 0 in dbreg
    MASTER: sendpages: file 0 page 0 to 1
    MASTER: sendpages: 0, page lsn [1][218]
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [
    1][134829] nobuf resend
    MASTER: wrote only 13032 bytes to site 10.100.20.106:5066
    MASTER: sendpages: 0, lsn [1][134829]
    MASTER: sendpages: 1, page lsn [1][134585]
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [
    1][134829] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: sendpages: 1, lsn [1][134829]
    MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log_req,
    LSN [1][28]
    MASTER: [1][28]: LOG_REQ max lsn: [1][134829]
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][28] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][131549] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][131633] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][131797] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][131877] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][131961] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][132125] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][132205] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: queue limit exceeded
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][132289] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: queue limit exceeded
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][132453] nobuf resend
    MASTER: msg to site 10.100.20.106:5066 to be queued
    MASTER: queue limit exceeded
    MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1
    ][132533] nobuf resend
    And these are the corresponding messages printed by client process after startup:
    REP_UNDEF: rep_start: Found old version log 13
    CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient,
    LSN [0][0] nogroup nobuf
    Slave becomes slave
    Replication service started
    CLIENT: starting election thread
    CLIENT: elect thread to do: 0
    CLIENT: repmgr elect: opcode 0, finished 0, master -2
    CLIENT: init connection to site 10.100.20.105:5066 with result 115
    CLIENT: got handshake 10.100.20.105:5066, pri 1
    CLIENT: handshake from connection to 10.100.20.105:5066
    CLIENT: handshake with no known master to wake election thread
    CLIENT: reusing existing elect thread
    CLIENT: repmgr elect: opcode 3, finished 0, master -2
    CLIENT: elect thread to do: 3
    CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient,
    LSN [0][0] nogroup nobuf
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newsite,
    LSN [0][0]
    CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type master_req
    , LSN [0][0] nogroup nobuf
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newmaste
    r, LSN [1][134829]
    CLIENT: repmgr elect: opcode 0, finished 0, master -2
    CLIENT: Election done; egen 6
    CLIENT: Updating gen from 0 to 5 from master 0
    CLIENT: Egen: 6. RepVersion 4
    CLIENT: No commit or ckp found. Truncate log.
    CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type update_req,
    LSN [0][0] nobuf
    New Master elected
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newmaste
    r, LSN [1][134829]
    CLIENT: Election done; egen 6
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type update,
    LSN [1][134829]
    CLIENT: Update setup for 1 files.
    CLIENT: Update setup: First LSN [1][28].
    CLIENT: Update setup: Last LSN [1][134829]
    CLIENT: Walk_dir: Getting info for dir: ./env
    CLIENT: Walk_dir: Dir ./env has 5 files
    CLIENT: Walk_dir: File 0 name: __db.rep.gen
    CLIENT: Walk_dir: File 1 name: __db.rep.egen
    CLIENT: Walk_dir: File 2 name: __db.rep.init
    CLIENT: Walk_dir: File 3 name: __db.rep.db
    CLIENT: Walk_dir: File 4 name: __db.reppg.db
    CLIENT: Walk_dir: Getting info for in-memory named files
    CLIENT: Walk_dir: Dir INMEM has 0 files
    CLIENT: Next file 0: pgsize 65536, maxpg 1
    CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page_req, L
    SN [0][0] any nobuf
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LS
    N [1][134829] resend
    CLIENT: PAGE: Received page 0 from file 0
    CLIENT: PAGE: Write page 0 into mpool
    CLIENT: PAGE_GAP: pgno 0, max_pg 1 ready 0, waiting 0 max_wait 0
    CLIENT: FILEDONE: have 1 pages. Need 2.
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LS
    N [1][134829] resend
    CLIENT: PAGE: Received page 1 from file 0
    CLIENT: PAGE: Write page 1 into mpool
    CLIENT: PAGE_GAP: pgno 1, max_pg 1 ready 1, waiting 0 max_wait 0
    CLIENT: FILEDONE: have 2 pages. Need 2.
    CLIENT: NEXTFILE: have 1 files. RECOVER_LOG now
    CLIENT: NEXTFILE: LOG_REQ from LSN [1][28] to [1][134829]
    CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log_req, LS
    N [1][28] any nobuf
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN
    [1][28] resend
    CLIENT: rep_apply: Set apply_th 1
    CLIENT: rep_apply: Decrement apply_th 0
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN
    [1][64] resend
    CLIENT: rep_apply: Set apply_th 1
    CLIENT: rep_apply: Decrement apply_th 0
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN
    [1][147] resend
    CLIENT: rep_apply: Set apply_th 1
    CLIENT: rep_apply: Decrement apply_th 0
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN
    [1][218] resend
    CLIENT: rep_apply: Set apply_th 1
    CLIENT: rep_apply: Decrement apply_th 0
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN
    [1][65802] resend
    CLIENT: rep_apply: Set apply_th 1
    CLIENT: rep_apply: Decrement apply_th 0
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN
    [1][131386] resend
    CLIENT: rep_apply: Set apply_th 1
    CLIENT: rep_apply: Decrement apply_th 0
    CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN
    [1][131469] resend
    It seems like there are repeated messages from master, but I'm not able to understand what's wrong.
    Thanks for any kind of help
    Marco

    The client requests copies of the database pages from the master by sending the PAGE_REQ message. The master responds by sending a message for each page (i.e., many PAGE messages). The master tries to send PAGE messages as fast as it can, subject only to the throttling configured by rep_set_limit (default 10Meg).
    With 64K page size, the master's local TCP buffer fills up immediately, and repmgr only stores a backlog of 10 additional messages before starting to drop messages. The replication protocol is designed to tolerate missing messages: if you were to let this run, and continue to commit new update transactions at the master at a modest rate, I would expect this to complete eventually.
    However, repmgr could clearly do better at managing the traffic to avoid this situation, at least in cases where the client is accepting input at a reasonable rate. I am currently working on a fix/enhancement to repmgr which should accomplish this. (This same problem was reported by another user a few days ago.)
    In the meantime, you may be able to work around this problem by setting a low throttling limit. With your 64K page size, I would try something in the 320,000 to 640,000 range.
    Alan Bram
    Oracle

  • A question about waits

    Hi,
    We are on Oracle 10.2.0.4 on Solaris 10.
    In my awr report I have the following among top five top wait event:
    Wait event
    SQL*Net more data to client 14,022,144 3,758 0 17.7 Network

    Your average wait on "more data to client" is very small.
    The number of calls is large - but you don't have any real measure of its significance and meaning.
    You need to look at the Instance statistics:
    SQL*Net roundtrips to/from clien         17,407,740        9,876.1          32.6
    bytes sent via SQL*Net to client      4,429,719,421    2,513,152.0       8,303.4
    A round-trip consists of one 'SQL*Net data to client' plus ALL the 'more data to client' waits that took place at the same time, so:
    Compare your count of 'more data to client' with roundtrips - if it is a large multiple you can infer that on average each round trip is a large chunk of data that has to be sent as several packets, and you probably need to make your SDU larger. If you look at 'bytes sent via SQL*Net to client' compared to roundtrips, you get an idea of the size of the "average" packet - hence an indication of the necessary size of the SDU.
    There are also a couple of tcp network configuration parameters that may need setting. I never remember the exact names, but it's something like 'tcp_transmit_buf' and 'tcp_recv_buf'. This is the volume of data that the tcp layer can send before it needs an ACK from the far end. So, for example, if your SDU is 32K and your tcp buffer size was only 4KB, then Oracle would send one packet of 32KB, and tcp would handle it as 8 chunks of 4KB - with 8 tcp ACKs from the far end (and because tcp works in line packets of about 1400 bytes, each 4KB would go as 3 packets with an ACK expected on the third). So you need the tcp buffer sizes to be slightly larger than the SDU for best effect.
    (If your procsses parameter is large, though, the total tcp buffer memory needed is processes * tcp buffer size).
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Low CPU utilization on Solaris

    Hi all.
    We've recently been performance tuning our java application running
    inside of an Application Server with Java 1.3.1 Hotspot -server. We've
    begun to notice some odd trends and were curious if anyone else out
    there has seen similar things.
    Performance numbers show that our server runs twice
    as fast on Intel with Win2K than on an Ultra60 with Solaris 2.8.
    Here's the hardware information:
    Intel -> 2 processors (32bit) at 867 MHz and 2 Gig RAM
    Solaris -> 2 processors (64bit) at 450 MHz and 2 Gig RAM.
    Throughput for most use cases in a low number of threads is twice as
    fast on Intel. The only exception is some of our use-cases that are
    heavily dependent on a stored procedure which runs twice as fast on
    Solaris. The database (oracle 8i) and the app server run on the same
    machine in these tests.
    There should be minor (or no) network traffic. GC does not seem to be an
    issue. We set the max heap at 1024 MB. We tried the various solaris
    threading models as recommended, but they have accomplished little.
    It is possible our Solaris machine is not configured properly in some
    way.
    My question (after all that ...) is whether this seems normal to
    anyone? Should throughput be higher since the processors are faster on
    the wIntel box? Does the fact that the solaris processors are 64bit
    have any benefit?
    We have also run the HeapTest recommended on this site on both
    machines. We found that the memory test performs twice as fast on
    solaris, but the CPU test performs 4 times as slow on solaris. The
    "joint" test performs twice as slow on solaris. Does this imply bad
    things about our solaris configuration? Or is this a normal result?
    Another big difference is between Solaris and Win2K in these runs is
    that CPU Utilization is low on solaris (20-30%) while its much higher
    on Win2K (60-70%)
    [both machines are 2 processor and the tests are "primarily" single
    threaded at
    this stage]. I would expect the solaris CPU utilization to be around
    50% as well. Any ideas why it isn't?

    Hi,
    I recently went down this path and wound up coming to the realization that the
    cpu's are almost neck and neck per cycle when running my Java app. Let me qualify
    this a little more (400mhz Sparc II cpu vs 500mhz Intel cpu) under similar load
    running the same test gave me similar results. It wasn't as huge a difference in
    performance as I was expecting.
    My theory is given the scalability of the SPARC architecture, more chips==more
    performance with less hardware, whereas the Wintel boxes are cheaper, but in order
    to get scaling, the underlying hardware comes into question. (how many wintel
    boxes to cluster, co-locate, manage, etc…)
    From what little I've found out when running tests against our Solaris 8 (E-250's)
    400mhz UltraSparc 2's is that it appears that the CPU performance in a lightly
    threaded environment is almost 1 cycle / 1 cycle (SPARC to Intel). I don't think
    the 64 bit SPARC architecture will buy you anything for java 1.3.1, but if your
    application has some huge memory requirements, then using 1.4.0(when BEA supports
    it) should be beneficial (check out http://java.sun.com/j2se/1.4/performance.guide.html).
    If your application is running only a few threads, tying the threads to the LWP
    kernel processes probably won't gain you much. I noticed that it decreased performance
    for a test with only a few threads.
    I can't give you a good reason as to why your Solaris CPU utilization is so low,
    you may want to try getting a copy of Jprobe and profiling Weblogic and your application
    to see where your bottlenecks are. I was able to do this with our product, and
    found some nasty little performance bugs, but even with that our CPU utilization
    was around 98% on a single and 50% on a dual.
    Also, take a look at iostat / vmstat and see if your system is bottlenecking doing
    io operations. I kept a background process of vmstat to a log and then looked
    at it after my test and saw that my cpu was constantly pegged out (doing a lot
    of context switching), but that it wasn't doing a whole lot of page faults
    (had enough memory).
    If you're doing a lot of serialization, that could explain slow performance as
    well.
    I did follow a suggestion on this board of running my test several times with
    the optimizer (-server) and it boosted performance on each iteration until a plateau
    on or about the 3rd test.
    If you're running Oracle or another RDBMS on your Solaris machine you should see
    a pretty decent performance benchmark against NT as these types of applications
    are more geared toward the SPARC architecture. From what I've seen running Oracle
    on Solaris is pretty darn fast when compared to Intel.
    I know that I tried a lot of different tweaks on my Solaris configuration (tcp
    buffer size, etc/system parameters for file descriptors, etc.) I even got to the
    point where I wanted
    to see how WebLogic was handling the Nagle algorithm as far as its POSIX muxer
    was concerned and ran a little test to see how they were setting the sockets (setTcpNoDelay(Boolean)
    on java.net.Socket). They're disabling the Nagle algorithm so that wasn't an
    issue sigh. My best advice would be to profile your application and see where
    the bottlenecks are, you might be able to increase performance, but I'm not too
    sure. I also checked out www.spec.org and saw some of their benchmarks that
    coincide with our findings.
    Best of luck to you and I hope this helps :)
    Andy

  • Low bandwidth anomalies at a university: 100BASE-TX and 1000BASE-T

    Please note that this question was originally posted in another thread. In the hope of resolving this issue as quickly as possible, I’ve reposted this question in this thread - to cast the largest net, so to speak.
    Over the past several weeks, colleagues and I have been attempting to resolve low bandwidth and throughput problems experienced by users at our university. The entirety of these instances are located in one building on campus. Working with one faculty member extensively, we initially saw problems including low download speeds, and in certain instances, exceedingly high upload speeds. At a first approximation, I concluded that this was the result of a duplex mismatch. Upon further investigation, and the inclusion of university IT personnel, we were informed that the problem was supposedly not related to a duplex mismatch, but rather a platform specific anomaly, specifically related to Macs and their supposed inability (or at the very least limited ability) to employ auto-negotiation protocols with any degree of success.
    My colleagues and I dismissed this almost entirely as ignorance or prejudicial. Was this an error?
    In order to test that hypothesis, we connected a PC to one of the drops in question; no substantial change in bandwidth was detected.
    However, during one meeting, it was communicated to us that by switching from the extant 100BASE-TX connections to a Gigabit Ethernet (1000BASE-T) connection that any low bandwidth issues would be resolved. Having made the transition for two machines, we have been unable to verify that the expected improvement was made respective to the new standard. Under the 100BASE-TX connections, we observed a range of download rates as low as 10-20kb/s and as fast as 4-10 Mb/s. After the switch to Gigabit Ethernet, the maximum download rate we’ve seen has been 80Mb/s; the average rate is substantially lower, usually floating in a range of 10-18 Mb/s. In any event, these represent a staggeringly low fraction of the expected available bandwidth.
    I’ve adjusted TCP buffer sizes (albeit with the Apple Broadband Tuner, not via the terminal), adjusted frame sizes (up to jumbo frames) and confirmed numerous times that the auto-negotiation protocols are indeed working client-side. On all the affected machines the speed (1000BASE-T) is set automatically, as are the duplex mode and MTU. Adjusting these settings manually seemingly has no effect.
    The affected machines include two Mac Pro workstations (quad core 3GHz and an eight-core 2.8 GHz), a MacBook Pro (2.5 GHz, 15 inch), a dual processor G5 tower (liquid cooled 2.5 GHz model), a Dell PC (for testing purposes only - see above), and a 500 MHz G4. The problem persists in various drops in the building, seemingly randomly.
    My suspicion is that the problem exists on the server-end; either a switch or router is configured incorrectly or perhaps a cabling problem exists? Unfortunately, the university’s IT personnel are not providing any answers. I’m wondering if I’ve overlooked something (basic or otherwise) that might be contributing to these problems?
    Thanks in advance!

    DRS 1 wrote:
    My colleagues and I dismissed this almost entirely as ignorance or prejudicial. Was this an error?
    No.
    In order to test that hypothesis, we connected a PC to one of the drops in question; no substantial change in bandwidth was detected.
    What do you mean by "no change"? Does the PC run fast or slow like the Macs? If the PC runs slow, then you should be able to use that fact to get the network fixed.
    My suspicion is that the problem exists on the server-end; either a switch or router is configured incorrectly or perhaps a cabling problem exists? Unfortunately, the university’s IT personnel are not providing any answers. I’m wondering if I’ve overlooked something (basic or otherwise) that might be contributing to these problems?
    First, you need to clarify a couple of things. Does a PC work correctly in this building? Does a Mac from the building work correctly in another building? You can try going to some random office store, buying the absolute cheapest 100BT hub, plugging that into the network, and plugging the Macs into the new hub. How is the speed?
    The problem is almost certainly wiring problems or a flaky router or hub. Unfortunately, no IT department in the world will even give you the time of day if "Macs" are involved. You have to prove that there is a problem that does not involve the Macs. For that, you'll need a PC or a hub. This is the one job that Mac can't do.

  • Audio B-wav issues

    Can anyone shed any light on a very frustrating problem I'm having with Protools B-wavs...
    Workflow is...
    Shot on one of the first Red Ones at 4k 24fps
    Offlined Prores conversion (2k) 24fps
    Media managed and clipfinder xml to link to color 24fps
    graded and rendered out to Digital Cinema 2k Prores 4444
    Audio omf from Offline to Protools
    5.1 audio mix 24fps
    export 5.1 stems and stereo ltrt to Bwavs
    Import to Online project.
    All the B-wavs show incorrect start time and durations in FCP
    Quicktime player shows them to have correct duration
    SoundtrackPro likewise.
    If I create a 25fps project and import them, FCP shows the correct duration, and if I then create a 24fps sequence in this project and paste the audio files in, everything is correct.
    Pix pasted on top – all in sync. Online done.
    24fps picture exported
    Audio files exported (for DCP Creation)
    All audio has reverted to the original duration (wrong length drifting)
    The quicktime output shows the audio is in sync but the audio tracks are all in the wrong order and some are not there at all.
    OSX 10.6 (latest version)
    FCP 7.03
    Fibrechannel Raid with Apple Fibrechannel card.
    I'm assuming the original B-wavs are fine as they seem OK everywhere but in FCP
    Any light greatly appreciated!
    Sean

    with RTMFP, video and audio can go out of sync if the data
    rate of your stream exceeds your network capacity. video is
    currently sent with 100% reliability, so once it's queued for
    transmission, it's going to be sent (eventually). video is lower
    priority than audio, though, so your audio data gets first dibs on
    your network capacity. if the video stream's data rate is higher
    than what fits in your network (after audio), the video data will
    back up until its send buffer is filled, and new camera frames will
    stop getting captured. in the steady state, this can look like a
    multi-second offset between the video and audio.
    try turning your video rate/quality down so that it fits in
    your network capacity. video and audio should stay in sync.
    with RTMP, there's one network transmission buffer (TCP's)
    for all of the parts of your stream (audio & video). when you
    have insufficient network bandwidth, the TCP buffer will eventually
    fill up and video frames will stop being captured to compensate. so
    while audio and video might remain in sync, the total end-to-end
    latency will go up. when using RTMFP, the audio and video have
    independent transmission buffers, so in cases of insufficient
    network resources, the higher-priority audio should remain more
    timely but video may fall behind.
    -mike

  • Audio/Video Publishing Issues

    I've narrated three videos and have included two small flash videos into my project. When I publish the project, the video and the audio don't play. Any advice for resolving these two issues would be much appreciated. Thank you.

