Too many open socket connections causing ColdFusion to crash?

I'm currently working on an e-commerce site which sends and receives information to/from the client's order management system via XML over a TCP/IP socket. It uses a very old Java-based custom tag called CFX_JSOCKET (which appears to have been written in 2002) to open the socket, send the data, and get the response. The code that calls the custom tag and sends/receives data from the OMS pre-dates my working on the site, but it's always worked, so I haven't paid it much attention.
Back in the summer of 2009 we started experiencing issues with ColdFusion (v.7 on Windows 2003 at the time) locking up on a more and more frequent basis, until it ultimately became a daily issue. After extensive research we narrowed the issue down to the communication between the web server and our client's order management server. It seemed the issue with ColdFusion hanging was related either to there being too many connections open, or to these connections hanging and resulting in dead threads. This was an educated guess based on a blog post I'd seen online, not actual monitoring of either CF or the TCP/IP connections. As soon as we dialed back the timeout on the CFX_JSOCKET tag from 20 seconds to 10, the issue disappeared, so we left it at that and moved on.
Fast forward to this January. The site is hosted at a new location, on a 64-bit Windows 2008 box running ColdFusion 9. Over the years traffic on the site has continued to grow. The nature of the client's business means that August and January are their busiest times of the year (back to school for college kids), and in January ColdFusion once again started locking up on an almost-daily basis.
One significant difference is that the address cleansing software that previously ran on the box and was used to verify shipping addresses is not available for 64-bit, so when we moved to the new server last summer, that task was moved to the client's order management software and handled via XML like all other interaction with that system. However, while most XML calls to that server (order input, inventory check, etc.) take under a second to complete, the address cleansing call regularly takes over 5 seconds to return data, and frequently times out.
Once we eliminated the address cleansing call from the checkout process, ColdFusion once again stopped locking up regularly. So it appears that once again it's the communication between the web server and the order management server that's causing problems. We currently have the address cleansing call disabled on the web site in order to keep ColdFusion from crashing, but that's not a long-term solution.
We don't have, nor can I find online, the source code for the CFX_JSOCKET custom tag, so I decided I'd write some CF code utilizing the Java methods to open the socket, send the data, get the response, and close the connection. My test code is working fine (under no load). However, in trying to troubleshoot an issue I had with it, I started monitoring the TCP/IP connections using TCPView, and I noticed that all the connections to the order management server, whether opened via the custom tag or my new code, remain open in either a TIME_WAIT or FIN_WAIT2 status for well over 2 minutes, even though I know for a fact that my new code is definitely closing the connection from the web server side.
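For reference, my replacement code boils down to something like the following Java sketch (the host, port, and timeout values here are placeholders rather than our real ones). The close happens in a finally block, so it runs even when the read times out:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class OmsSocketClient {
        public static String send(String xmlRequest) throws Exception {
            Socket socket = new Socket();
            try {
                // Bounded connect and read timeouts, like the 10-second
                // CFX_JSOCKET timeout that kept CF from hanging.
                socket.connect(new InetSocketAddress("oms.example.com", 9000), 10000);
                socket.setSoTimeout(10000);

                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                out.println(xmlRequest); // send the request document

                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                StringBuilder response = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) { // read until the server closes
                    response.append(line).append('\n');
                }
                return response.toString();
            } finally {
                socket.close(); // always close from the web server side, even on error
            }
        }
    }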
They do all close eventually, but I'm wondering: (1) why they're remaining open that long; (2) whether that's normal; and (3) whether all these connections remaining open could be what's causing ColdFusion to choke.
Does this sound plausible? If so, does anyone have any suggestions/recommendations about how to fix it? My research seems to indicate this might be a matter of the order management system not closing the connection on its end, but I'm in way over my head, and before I go to the client and tell them it's their OMS causing the issue, I need to feel a little more confident that I'm on the right track.
Any help or advice would be very greatly appreciated.  And thanks for taking the time to read through my long-winded explanation of the problem.
Set-up details:
ColdFusion Version: 9,0,0,251028  Standard 
Operating System: Windows Server 2008 
Java Version: 1.6.0_14 
Java VM Name: Java HotSpot(TM) 64-Bit Server VM 
Java VM Version: 14.0-b16 
Thanks,
Laurie

Hi Laurie,
I'm not aware of a custom tag called CFX_JSOCKET. My guess is that the process you described so well is consuming some resource, and then you get a problem; the trick is knowing which parameter to adjust. Perhaps you are running out of one of the threads in CF Admin > Server Settings > Request Tuning.
I expect that if you enable CF Metrics logging, where you can log the threads and other resources, you can find out which parameter needs adjusting. Let me know if you want some details on enabling CF Metrics. Perhaps others will have a much better idea than me and can help without the overhead of logging.
The other interesting thing is that you are on CF 9.0.0. Is there a reason for not being on updater 1, CF 9.0.1?
HTH, Carl.
PS: I posted this before, however it seems to have gone; I just hope it doesn't come back so that I end up posting twice.

Similar Messages

  • Too many open cursors exception caused by LRS Iterator

    Using Kodo 4.1.4 with Oracle 10, and Large Result Set Proxies, I encountered
    the error "maximum number of open cursors exceeded".
    It seems to have been caused by incomplete LRSProxy iterators within
    the context of a single PersistenceManager. These iterators were over
    collections obtained by reachability, not directly from Queries or Extents.
    The Iterator is always closed, but the max-cursors exception still occurs.
    Following is a pseudocode example of the case... Note that if the code is
    refactored to remove the break; statement, then the program works fine, with
    no max-cursors exception.
    Any suggestions?
    // This code pattern is called hundreds of times
    // within the context of a PersistenceManager
    Collection c = persistentObject.getSomeCollection(); // LRS collection
    Iterator i = c.iterator();
    try {
        while (i.hasNext()) {
            Object o = i.next();
            if (someCondition) {
                break; // if this break is removed, everything is fine
            }
        }
    } finally {
        KodoJDOHelper.close(i);
    }

    XSQL Servlet v. 0.9.9.1
    Netscape Enterprise / JRUN 2.3.3 / Windows NT
    I modified the document demo (insert request).
    The XSQL document:
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="newdocinsform.xsl"?>
    <page connection="demo" xmlns:xsql="urn:oracle-xsql">
      <xsql:insert-request table="xmlclob" transform="newdocins.xsl"/>
      <data>
        <xsql:query null-indicator="yes" max-rows="4">
          select id, doc
          from xmlclob
          order by id desc
        </xsql:query>
      </data>
    </page>
    The difference between this and your demo is the table: the table xmlclob has
    ID NUMBER and DOC CLOB. No constraints were enforced, so I was inserting the ID and the DOC. Upon page reload, several rows with the same values were inserted.
    I had a similar problem in the previous release.
    As a general question, how can I configure the XSQLConfig file for optimal performance?
    Although you provided default values, I'm not sure how much is necessary for connection pooling.

  • DirectoryService[67]: socket(PF_ROUTE) failed: Too many open files

    Hello!
    On MacOSX 10.4.10 which is an OD master, my log are filed with this:
    DirectoryService[67]: socket(PF_ROUTE) failed: Too many open files
    It happens exactly every hour at 44 minutes 18 seconds, 161 times. At the same time, it makes a lot of DNS requests for "kerberos-master.udp.XXXXXXXXX.COM IN SRV +".
    The server works fine, but there's probably a cron job that goes crazy, and I would like to know why it's happening.
    Thanks a lot!
    Fred

    Hi,
    The cause might be: the server got an exception while trying to accept client connections, and it will try to back off to aid recovery.
    The OS limit for the number of open file descriptors (the FD limit) needs to be increased. Also tune OS parameters that might help the server accept more client connections (e.g. the TCP accept backlog).
    http://e-docs.bea.com/wls/docs90/messages/Server.html#BEA-002616
    Regards,
    Prasanna Yalam

  • UTL_MAIL --- ORA-30678: too many open connections

    Hello,
    I have a pl/sql package that sends out emails using the UTL_MAIL pkg, pointing to an Exchange server; an APEX app calls this pkg. This package used to work fine for months, but I recently noticed that some emails are not being sent as expected. The package loops through a set of action items satisfying some conditions and sends emails based on that (this number is expected to grow every day). I checked the error log and I found this error:
    ORA-30678: too many open connections
    I think this means that I have to close the connection every time I send an email, but UTL_MAIL does NOT have a function or a proc to close connections, right?
    I don't know what causes this error to happen, but I suspect that this started happening right after we re-pointed the UTL_MAIL pkg from a Lotus Notes server to an Exchange server.
    I am also seeing this error:
    ORA-29279: SMTP permanent error: 501 5.1.3 Invalid address
    I know where this error comes from (usually a null email id in the FROM or TO field), but can this be causing the first error to happen?
    Please advise if you have gotten this error before. Is it a bug in Oracle 10g, as I read in some blog? Or does the second error make the Exchange server refuse SMTP connections?
    Thanks,
    Sam

    Hi Sam,
    seems to be a bug in UTL_MAIL if you ask me, and you are right - there is only /send/, no option to close, so I'd expect this to be done automatically.
    Anyway, though UTL_MAIL is usable for basic mailing, I prefer a custom mail implementation based on UTL_SMTP. The most important reason is that most mail servers don't work without authentication. And once you have done this, you can reuse the function/procedure/package as simply as UTL_MAIL. The good news is that there are several published examples that provide you with the functionality of UTL_MAIL at once - with the difference that you definitely get your connection closed when you expect it to be closed.
    You'll also be able to handle empty addresses. Perhaps this error actually causes UTL_MAIL to "forget" to close the connection, if the exception isn't caught first in order to close the open connection before raising it to the outside.
    One example implementation using UTL_SMTP can be found here: http://www.morganslibrary.com/reference/pkgs/utl_smtp.html
    -Udo
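    For what it's worth, the explicit-close discipline Udo describes looks the same in any language. Below is a rough Java sketch of the same helo/mail/rcpt/data/quit conversation over a plain socket - no authentication or reply-code checking, and the names are made up - just to show the one step UTL_MAIL never exposes: the close in the finally block.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class PlainSmtpSend {
        // Send one command and read one reply line; a real client
        // would check the SMTP reply code instead of discarding it.
        private static void cmd(PrintWriter out, BufferedReader in, String line)
                throws Exception {
            out.print(line + "\r\n"); // SMTP requires CRLF line endings
            out.flush();
            in.readLine();
        }

        public static void send(String host, String from, String to,
                                String subject, String body) throws Exception {
            Socket s = new Socket(host, 25);
            try {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream()));
                PrintWriter out = new PrintWriter(s.getOutputStream());
                in.readLine(); // 220 greeting
                cmd(out, in, "HELO " + host);
                cmd(out, in, "MAIL FROM:<" + from + ">");
                cmd(out, in, "RCPT TO:<" + to + ">");
                cmd(out, in, "DATA"); // server answers 354
                cmd(out, in, "Subject: " + subject + "\r\n\r\n" + body + "\r\n.");
                cmd(out, in, "QUIT");
            } finally {
                s.close(); // the close that UTL_MAIL hides from you
            }
        }
    }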

  • ORA-30678: too many open connections

    Hi,
    I executed this procedure:
    CREATE OR REPLACE procedure
    TRY.e_mail_message
    (
      from_name in varchar2,
      to_name   in varchar2,
      subject   in varchar2,
      message   in varchar2
    )
    is
      l_mailhost  VARCHAR2(64);
      l_from      VARCHAR2(64);
      l_to        VARCHAR2(64);
      crlf        VARCHAR2(2) := CHR(13) || CHR(10);
      l_mail_conn UTL_SMTP.connection;
      mesg        VARCHAR2(4000);
    BEGIN
      select a.SERVER   into l_mailhost from email_setting a where a.SERVER is not null;
      select a.USERNAME into l_from     from email_setting a where a.SERVER is not null;
      l_from := from_name;
      --UTL_SMTP.open_data(l_mail_conn);
      dbms_output.put_line('email test ');
      mesg := 'Date: '    || TO_CHAR(SYSDATE, 'dd Mon yy hh24:mi:ss') || crlf ||
              'From: <'   || l_from  || '>' || crlf ||
              'Subject: ' || subject || crlf ||
              'To: '      || to_name || crlf || '' || crlf;
      mesg := mesg || message;
      dbms_output.put_line(mesg);
      l_mail_conn := UTL_SMTP.open_connection(l_mailhost, 25);
      UTL_SMTP.helo(l_mail_conn, l_mailhost);
      UTL_SMTP.mail(l_mail_conn, l_from);
      UTL_SMTP.rcpt(l_mail_conn, to_name);
      UTL_SMTP.data(l_mail_conn, mesg);
      UTL_SMTP.quit(l_mail_conn);
    exception
      when others then dbms_output.put_line(sqlerrm);
    END;
    However, the email could not be sent to the recipient. Below is the error message.
    Sending email to subject: Reminder to settle outstanding bil for Invoice No: CA/01062010/555
    call email_mssg cmd 3
    email test
    Date: 23 June 10 15:11:50
    From: <>
    Subject: Reminder to settle outstanding bil for Invoice No: CA/01062010/555
    To: [email protected]
    Please Settle Outstanding Bil for your Invoice No: CA/01062010/555. Thank you
    ORA-30678: too many open connections
    Below is the username I keyed into the email_setting table, with reference to the select username statement from the procedure above.
    username in email_setting => [email protected]
    I also tried keying it in as '[email protected]', but it is still not working.
    Could someone suggest any solution? Thanks..

    Mr. Saubhik,
    Yes, no data is passed for the From: field, as shown in the dbms_output error message below:
    Sending email to subject: Reminder to settle outstanding bil for Invoice No: CA/01062010/555
    call email_mssg cmd 3
    email test
    Date: 23 June 10 16:15:04
    From:
    Subject: Reminder to settle outstanding bil for Invoice No: CA/01062010/555
    To: [email protected]
    Please Settle Outstanding Bil for your Invoice No: CA/01062010/555. Thank you
    ORA-29279: SMTP permanent error: 501 5.1.7 Invalid address
    I am clueless about how to settle this. I don't want to hard-code the sender's email address. Please help. Thank you.

  • Too many open files in system causes database to go down

    Hello experts, I am very worried because of the following problem. I really hope you can help me.
    Some server features:
    OS: SUSE Linux Enterprise 10
    RAM: 32 GB
    CPU: Intel quad-core
    DB: there are 3 RAC database instances (version 11.1.0.7) on the same host.
    Problem: the database instances begin to report the error message: Linux-x86_64 Error: 23: Too many open files in system
    and here you are other error messages:
    ORA-27505: IPC error destroying a port
    ORA-27300: OS system dependent operation:close failed with status: 9
    ORA-27301: OS failure message: Bad file descriptor
    ORA-27302: failure occurred at: skgxpdelpt1
    ORA-01115: IO error reading block from file 105 (block # 18845)
    ORA-01110: data file 105: '+DATOS/dac/datafile/auditoria.519.738586803'
    ORA-15081: failed to submit an I/O operation to a disk
    At the same time, I searched /var/log/messages as the root user, and the errors there show the same problem:
    Feb 7 11:03:58 bls3-1-1 syslog-ng[3346]: Cannot open file /var/log/mail.err for
    writing (Too many open files in system)
    Feb 7 11:04:56 bls3-1-1 kernel: VFS: file-max limit 131072 reached
    Feb 7 11:05:05 bls3-1-1 kernel: oracle[12766]: segfault at fffffffffffffff0 rip
    0000000007c76323 rsp 00007fff466dc780 error 4
    I think I am clear about the cause: maybe I need to increase the fs.file-max kernel parameter, but I do not know how to set a good value. Here are my sysctl.conf and limits.conf files:
    sysctl.conf
    kernel.shmall = 2097152
    kernel.shmmax = 17179869184
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 6553600
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 4194304
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 4194304
    limits.conf
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536

    process limit
    bcm@bcm-laptop:~$ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 20
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 16382
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) unlimited
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

  • Report causes too many open cursors

    Hello there!
    I've got the following situation:
    I have a very heavy report used for generating our Users Manual. In Reports 6i this report works fine; generating the Manual works.
    In 10g the report starts, formats about 240 pages (in 6i I can generate over 1000 pages and more with this report), and cancels with the message "too many open cursors".
    So I took a look at the open cursors:
    In 6i there are about 100 open cursors caused by this report; in 10g there are...uhm...in all cases too much for the max_open_cursors parameter of the database (the standard value used by our application is 1000; increasing this to e.g. 5000 resulted in the same behaviour => too many open cursors).
    Checked the open cursors while running the report which showed the following behaviour:
    The report formats about 230 pages, and opens about 20 cursors (~30 sec.). For the next 10 pages the report opens the remaining 980 cursors (~5 sec.), and stops formatting...
    So it seems the report server causes some bad recursion: when restarting the reports server and re-running the report, I sometimes get the following error:
    Terminated with error: REP-536870981: Internal error REP-62204: Internal error while writing the image BandCombine: a row of the matrix does not have the correct number of entries, should be OpImage.getExpandedNumBands(source0.getSampleModel(), source0.getColorModel()) + 1.. REP-0069: Internal error REP-50125: Exception caught: java.lang.NullPointerException REP-0002: Unable to retrieve a string from the Report Builder message file. REP-536870981:
    Or maybe the report server tries to parallelize some queries (as this report consists of about 5 queries)?
    As said - this is a very complex report (my colleague spent about 3 months of his life creating it, and that's not because he is a lamer in Reports ;-)), so it's very hard to give you a repcase, but if anyone knows some advice like "edit the <repservername>.conf; append 'DO NEVER EVER PARALLELYZE QUERYS' to the config" or something, this would be very useful ;-).
    many thanks
    best regards
    Christian

    I've now located the problem:
    The report consists of several queries based on a ref cursor, and these cursors are opened and not closed in 10g...
    I'll open an SR on Metalink....
    best regards
    Christian

  • What could this mean? It keeps appearing on my console. 10/29/11 10:17:47.113 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:1257 (18) socket Too many open files

    10/29/11 10:29:17.285 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:209 (fe09000a) CTunTapMgr::openDevice
    10/29/11 10:29:17.285 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:1257 (18) socket Too many open files
    10/29/11 10:29:17.285 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:340 (fe09000b) CTunTapMgr::disableHostMgr
    10/29/11 10:29:22.286 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:259 (18) open Too many open files
    10/29/11 10:29:22.286 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:209 (fe09000a) CTunTapMgr::openDevice
    10/29/11 10:29:22.286 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:1257 (18) socket Too many open files
    10/29/11 10:29:22.286 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:340 (fe09000b) CTunTapMgr::disableHostMgr
    10/29/11 10:29:27.287 PM vpnagentd: [p:133  pp:1]: error - TunTapMgr.cpp:259 (18) open Too many open files
    What could this mean? It keeps appearing on my console. I'm trying to install a Fitbit Tracker and can't get it to recognize the device. Wondering if this has something to do with it. Thanks!

    It means you've installed some kind of third-party VPN software that isn't working.

  • ORA-30678: too many open connections with UTL_MAIL.send

    I have just recently started getting the ORA-30678: too many open connections error while calling the UTL_MAIL.send procedure to send emails via a pl/sql package. The call is made within a loop, and there can be a significant number of calls made to this procedure; in fact, I received this error over 1400 times within the last three days. This was previously working until I made a change to the processing which now results in a larger number of emails/calls to this procedure. I don't see any documentation on how (or if) I can close the connection with UTL_MAIL.
    I'm using Oracle 11g. Any insight would be much appreciated.
    Thanks,
    Teri

    Looks like a bug possibly, check out this MOS Note:
    Ora-30678: Too Many Open Connections From UTL_MAIL [ID 788442.1]
    The bug note says it was fixed in 11.2. Not sure which version of 11 you are running.
    Edited by: Centinul on Apr 26, 2010 2:07 PM

  • Too Many open connections. UTL_HTTP

    I am submitting the URL multiple times,
    so I am getting the too many open connections error. How do I avoid this in the code?
    For your reference, I have attached my code.
    DECLARE
      req   UTL_HTTP.REQ;
      resp  UTL_HTTP.RESP;
      value VARCHAR2(1024);
      url   VARCHAR2(4000);
      OPT   varchar2(1000);
      CURSOR MOB IS SELECT MOB FROM MOBILE_NUMBER;
    BEGIN
      -- UTL_HTTP.SET_PROXY('proxy.my-company.com', 'corp.my-company.com');
      FOR I IN MOB
      LOOP
        DBMS_OUTPUT.PUT_LINE(I.MOB);
        URL := 'http://www.meru.co.in/wip/sendsms?username=11' || '&' ||
               'password=11' || '&' || 'to=' || I.MOB || '&' ||
               'message=this%20is%20shanmugam';
        DBMS_OUTPUT.PUT_LINE(URL);
        req := UTL_HTTP.BEGIN_REQUEST(url);
        UTL_HTTP.SET_HEADER(req, 'User-Agent', 'Mozilla/4.0');
        resp := UTL_HTTP.GET_RESPONSE(req);
        UTL_HTTP.READ_LINE(resp, value, TRUE);
        DBMS_OUTPUT.PUT_LINE(value);
      END LOOP;
      UTL_HTTP.END_RESPONSE(resp);
    EXCEPTION
      WHEN UTL_HTTP.END_OF_BODY THEN
        UTL_HTTP.END_RESPONSE(resp);
    END;
    How do I close the connection for each record?
    Can anyone help me out with this?
    Thanks in Advance.
    Cheers,
    Shan

    Move UTL_HTTP.END_RESPONSE(resp); inside the loop. I mean:
      UTL_HTTP.END_RESPONSE(resp);
    END LOOP;
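    The same one-response-per-request rule applies outside PL/SQL, too. For comparison, here is a rough Java sketch of the loop using HttpURLConnection, with the gateway URL simplified from the question; each iteration finishes its own response before the next request starts:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SendSmsLoop {
        public static void sendAll(Iterable<String> mobileNumbers) throws Exception {
            for (String mob : mobileNumbers) {
                URL url = new URL("http://www.meru.co.in/wip/sendsms?to=" + mob);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()));
                    System.out.println(in.readLine()); // first line of the reply
                    in.close();                        // finish this response...
                } finally {
                    conn.disconnect();                 // ...before the next request
                }
            }
        }
    }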

  • Operation Could Not Be Completed: Too Many Open Files

    I've had this error showing up a lot in the Safari 3.2 and 4.0 beta activity window when loading multiple tabs of pages with many images (usually gallery-type pages or blogs with lots of thumbnails). The result is that some images or other elements like style sheets don't load, leading to the blue question mark, or messed-up formatting. Unfortunately, the activity window can't be stretched wide enough to see the whole error, but in fact it's "Operation Could Not Be Completed: Too Many Open Files". The error "socket(PF_ROUTE) failed: Too many open files" shows up in the Console as well, at least with Safari 4. I had experienced similar "Operation Timed Out" errors as well.
    I've been pulling my hair out on this one, because it's rather inconsistent, not to mention annoying. You can usually get an individual page to load completely by refreshing it, but that kind of defeats the purpose of loading multiple tabs all at once. It's also less of a problem if more of those pages are already cached, but you never know for sure. Running your connection through a proxy also helps a bit, but not always.
    I found a fix, but it's actually not Safari's "fault" per se. The issue lies in the allowable number of open files per user process, which is set by the system's launchd process at boot time. Note that Safari is not entirely innocent, as Firefox doesn't have this problem. It seems that Safari just tries to load everything all at once, whereas Firefox does a better job of managing its load requests. Anyway, if you run the following command in Terminal:
    sudo launchctl limit
    the following list should show up (with perhaps slightly different values)
    cpu unlimited unlimited
    filesize unlimited unlimited
    data 6291456 unlimited
    stack 8388608 67104768
    core 0 unlimited
    rss unlimited unlimited
    memlock unlimited unlimited
    maxproc 200 532
    maxfiles 256 unlimited
    The second column is a "soft limit" and the third column is a "hard limit", though to be honest I'm not exactly sure what the difference entails. The image loading problem is caused by hitting the maxfiles limit of just 256 files. The solution is to change maxfiles to 4096/unlimited, and also change maxproc to 1000/2000 since it's pretty low as well. That sounds like a pretty big change, but OS X server is supposed to change them to numbers like this when services like Apache are enabled, and Apple even mentions how to change maxproc at http://support.apple.com/kb/TS1659
    To make these changes, run the following two commands in Terminal and restart the computer:
    echo "limit maxproc 1000 2000" | sudo tee -a /etc/launchd.conf
    echo "limit maxfiles 4096 unlimited" | sudo tee -a /etc/launchd.conf
    The commands add the two lines in quotes to the launchd.conf file in /etc/ (if no file exists yet, it creates it). That should clear up the loading issues. I haven't noticed any other problems with these increased numbers, but I'll report back if anything seems to go amiss. Hopefully this will be helpful to someone.

    I faced the same problem with an image gallery using css for image resizing. Thanks for the explanation.

  • "Too many open files" Exception on "tapestry-framework-4.1.1.jar"

    When a browser accesses my webwork, the server opens a certain number of file descriptors on the "tapestry-framework-4.1.1.jar" file and doesn't release them for a while.
    Below is the output from "lsof | grep tapestry":
    java 26735 root mem REG 253,0 62415 2425040 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-portlet-4.1.1.jar
    java 26735 root mem REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    java 26735 root mem REG 253,0 320546 2425036 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-contrib-4.1.1.jar
    java 26735 root mem REG 253,0 49564 2424979 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-annotations-4.1.1.jar
    java 26735 root 28r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    java 26735 root 29r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    java 26735 root 30r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    These unknown references are sometimes released automatically, but sometimes not,
    and I get a "Too many open files" exception after using my application for a few hours.
    The number of unknown references increases as I access my webwork or just hit the F5 key in my browser to reload it.
    I tried different browsers to see if I could see any differences, and in fact the result differed by the browser I used.
    When viewed with Internet Explorer, it increased by 3 for every access.
    On the other hand, it increased by 7 for each attempt when accessed with Firefox.
    I have already tried increasing the maximum number of file descriptors, and it solved the "Too many open files" exception.
    But still I'm wondering what is actually opening "tapestry-framework-4.1.1.jar" this many times.
    Could anyone figure out what is going on?
    Thanks in advance.
    The following is my environmental version info:
    - Red Hat Enterprise Linux ES release 4 (Nahant Update 4)
    - Java: 1.5.0_11
    - Tomcat: 5.5.20
    - Tapestry: 4.1.1

    Hi,
    The cause might be: the server got an exception while trying to accept client connections, and it will try to back off to aid recovery.
    The OS limit for the number of open file descriptors (the FD limit) needs to be increased. Also tune OS parameters that might help the server accept more client connections (e.g. the TCP accept backlog).
    http://e-docs.bea.com/wls/docs90/messages/Server.html#BEA-002616
    Regards,
    Prasanna Yalam

  • 'Error establishing socket' due to too many TCP sockets

    Hello
    I have an app running on Win 2000, SP 3, JRE 1.2, connecting with a MSSQL 2000 DB using MS JDBC drivers and constantly writing updates to this DB. It starts off fine and continues for quite some time before encountering the following exception:
    java.sql.SQLException: [Microsoft][SQLServer JDBC Driver]Error establishing socket.
    Checking the output from netstat while the application is running revealed a huge number of sockets which are apparently used by the application, since closing the application causes all these connections to be closed. I assumed that this may be the cause of the exception.
    The application only connects with the database once and statements and resultsets are closed after use.
    The same application running at other sites with more-or-less the same environment does not cause as many open sockets.
    Can anybody tell me why so many sockets are created and not closed? Any suggestions will be much appreciated.
    Regards
    Dawie

    As an update, I was using connection pooling, and removed it. Now I create a connection per request and explicitly close the connection when done. This accomplished two things, reduced the number of stale connections, and also "throttled" down the process somewhat (not by much as in testing I found the average response times slowed by only a couple milliseconds).
    I did do a netstat on the application server when I started receiving all the errors and found, like dawiemoller did, there was a VERY large number of stale connections to the SQL 2000 server. I wasn't able to telnet into the SQL server and port, as expected. Restarting the Java app on the application server cleared up all the stale connections.
    After I removed the connection pooling, I've yet to receive the "error establishing socket" error, however after checking netstat again, I did find that there are still a lot of stale connections, just not nearly as many.
    If this is a Microsoft bug, why would the stale connections go away after restarting the Java application? Microsoft IIS is also running on the same server with database access to the SQL 2000 server, and it isn't having any trouble. Could it be just a combination of this "bug" and using Microsoft's JDBC driver (SP1) to SQL 2000? Would switching to another vendor's JDBC driver avoid this problem?
    Scott Reynolds
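    For what it's worth, the connection-per-request pattern Scott describes is just a try/finally around each unit of work. A minimal sketch, assuming the Microsoft SQL Server 2000 driver class and URL format of that era (host, database, table, and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class PerRequestUpdate {
        public static void writeUpdate(String value, int id) throws Exception {
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=mydb",
                    "user", "password");
            try {
                PreparedStatement ps =
                        con.prepareStatement("UPDATE t SET col = ? WHERE id = ?");
                try {
                    ps.setString(1, value);
                    ps.setInt(2, id);
                    ps.executeUpdate();
                } finally {
                    ps.close();
                }
            } finally {
                con.close(); // releases the underlying socket right away
            }
        }
    }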

  • Running Out Of File Descriptors "Too many open files"

    We have a 32-bit application (running on Solaris 8) that opens socket connections and also some files in read/write mode. The application works fine in the normal (low load) case.
    But it fails under a stress environment. At some point under stress, when it tries opening a file, fopen gives error code 24, which means "too many files opened".
    From this it seems that the application is running out of file descriptors. I used the truss, pfiles and lsof utilities to see how many descriptors are currently opened by my application, and the number they give is around 900 (which is the expected figure for my application).
    I also set the ulimit (both hard and soft) to a larger number, but that didn't work either. Also, when I set the soft limit to 70000, the truss output looks like:
    25412/1:     5.3264     sysconfig(_CONFIG_OPEN_FILES)               = 70000
    23123/1: 7.2926 close(69999) Err#9 EBADF
    23123/1: 7.2927 close(69998) Err#9 EBADF
    23123/1: 7.2928 close(69997) Err#9 EBADF
    23123/1: 7.2928 close(69996) Err#9 EBADF
    23123/1: 7.2929 close(69995) Err#9 EBADF
    23123/1: 7.2929 close(69994) Err#9 EBADF
    23123/1: 7.2930 close(69993) Err#9 EBADF
    This continues down to close(3), looping almost 70K times.
    I don't know why the output looks like that.
    Note: under a moderate stress environment where only 400 file descriptors are opened, the application works fine.
    Can you please help me with this? Is this a file descriptor problem, or could there be another potential source of the problem?
    Is there any other way to increase the file descriptor limit?
    I also tried using LD_PRELOAD_32=/usr/lib/extendedFILE.so.1, but it gave me the following error while starting the application:
    "ld.so.1: ls: fatal: /usr/lib/extendedFILE.so.1: open failed: No such file or directory"
    Also, I can't use Purify (for various reasons) to find file descriptor leakage (if any), and it is not possible to upgrade the system to Solaris 10.
    Thanks in advance.

    http://developers.sun.com/solaris/articles/stdio_256.html

  • What does "Too many open files" have to do with FIFOs?

    Hi folks.
    I've just finished a middleware service for my company, that receives files via a TCP/IP connection and stores them into some cache-directory. An external program gets called, consumes the files from the cache directory and puts a result-file there, which itself gets sent back to the client over TCP/IP.
    After that's done, the cache file (and everything leftover) gets deleted.
    The middleware-server is multithreaded and creates a new thread for each request connection.
    These threads are supposed to die when the request is done.
    All works fine, cache files get deleted, threads die when they should, the files get consumed by the external program as expected and so on.
    BUT (there's always a butt ;) to migrate from an older solution, the old data gets fed into the new system, creating about 5 to 8 requests a second.
    After about 20-30 minutes, the service drops out with "IOException: Too many open files" on the very line where the external program gets called.
    I swept through my code, seeking to close even the most unlikely stream that gets opened (even the output streams of the external process ;) but the problem stays.
    Things I thought about:
    - It's the external program: unlikely, since the lsof command (which shows the "list of open files" on Linux) says that the open files belong to Java processes. Having a closer look at the list, I see a large number of "FIFO" entries that keeps growing, plus an (almost) constant number of "normal" open file handles.
    So perhaps the handles get opened (and not closed) somewhere else, and the external program is just the drop that makes the cask flood over.
    - Must be a file handle that's not closed: I find only the "FIFO" entries growing. Yet I don't really know what that means. I just think it's something different from a "normal" file handle, but maybe I'm wrong.
    - Must be a socket connection that's not closed: at least the client that sends requests to the middleware service closes the connection properly, and I am, well, quite sure that my code does it as well, but who knows? How can I be sure?
    That was a long description, most of which will be skipped by you. To boil it down to some questions:
    1.) What do the "FIFO" entries of the lsof-command under Linux really mean ?
    2.) How can I make damn sure that every socket, stream, filehandle etc. pp. is closed when the worker thread dies?
    Answers will be thanked a lot.
    Tom

    Thanks for the quick replies.
    @BIJ001:
    "ls -l /proc/<PID>/fd" gives the same information as lsof does, namely a slowly but steadily growing number of pipes.
    "fuser" doesn't output anything at all.
    "Do you make exec calls? Are you really sure stdout and stderr are consumed/closed?" Well, the external program is called by
    Process p = Runtime.getRuntime().exec(commandLine);
    and the stdout and stderr are consumed by two classes that subclass Thread (named showOutput) that do nothing but prepend the corresponding outputs with "OUT:" and "ERR" and put them into a log.
    Are they closed? I hope so: I call the showOutput's halt method, that should eventually close the handles.
    @sjasja:
    "Sounds like a pipe." Thought so, too ;)
    "Do you have the waitFor() in there?" Mentioning the waitFor():
    my code looks more like:
    try {
        p = Runtime.getRuntime().exec(...);
        outShow = new showOutput(p.getInputStream(), "OUT");
        outShow.start();
        errShow = new showOutput(p.getErrorStream(), "ERR");
        errShow.start();
        p.waitFor();
    } catch (InterruptedException e) {
        // can't wait for process?
        // better go to sleep some.
        log.info("Can't wait for process! Going to sleep 10sec.");
        try { Thread.sleep(10000); } catch (InterruptedException ignoreMe) {}
    } finally {
        if (outShow != null) outShow.halt();
        if (errShow != null) errShow.halt();
    }
    /** Within the class showOutput: */
    /** This method gets called by showOutput's halt: */
    public void notifyOfHalt() {
        log.debug("Registered a notification to halt");
        try {
            myReader.close(); // is initialized to read from the given InputStream
        } catch (IOException ignoreMe) {}
    }
    Seems as if both of you are quite sure that the pipes are actually created by the exec command and not closed afterwards.
    Would you deem it unlikely that most of the handles are opened somewhere else and the exec command is just the final one that crashes the prog?
    That's what I thought.
    Thanks for your time
    Tom
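    One detail worth adding (my own note, not from the thread): every Process returned by Runtime.exec() carries three OS pipes - stdin, stdout, and stderr - and each one holds a file descriptor until it is explicitly closed, even if it is never used. So besides consuming the output, close all three in a finally block. A sketch, reusing the thread's commandLine variable:

    Process p = Runtime.getRuntime().exec(commandLine);
    try {
        // ... consume stdout and stderr (e.g. via the showOutput threads) ...
        p.waitFor();
    } finally {
        // Close all three pipes; even an unused stdin pins a FIFO descriptor.
        try { p.getOutputStream().close(); } catch (IOException ignoreMe) {}
        try { p.getInputStream().close();  } catch (IOException ignoreMe) {}
        try { p.getErrorStream().close();  } catch (IOException ignoreMe) {}
    }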
