Sudden increase in open file descriptors

Our system has been live for a year and a half, and for the first time I encountered the following exception:
"java.net.SocketException: Too many open files"
When our Internet application stopped responding, I immediately checked the number of open file descriptors on my Solaris machine using the "lsof" command and found an abnormal value of 600. As I kept monitoring, it reached 1024 in a matter of minutes, at which point WebLogic threw the above exception. The current limit for file descriptors is 1024, but until now the average number of open file descriptors had stayed well below 220.
I also took thread dumps and found that most of the threads were stuck at the following location:
"ExecuteThread: '3' for queue: 'weblogic.kernel.Default'" daemon prio=5 tid=0x00883a90 nid=0x10 runnable [6d080000..6d0819c0]
     at java.net.SocketInputStream.socketRead0(Native Method)
     at java.net.SocketInputStream.read(SocketInputStream.java:129)
     at com.sybase.jdbc2.timedio.RawDbio.reallyRead(RawDbio.java:202)
     at com.sybase.jdbc2.timedio.Dbio.doRead(Dbio.java:243)
     at com.sybase.jdbc2.timedio.InStreamMgr.readIfOwner(InStreamMgr.java:512)
     at com.sybase.jdbc2.timedio.InStreamMgr.doRead(InStreamMgr.java:273)
     at com.sybase.jdbc2.tds.TdsProtocolContext.getChunk(TdsProtocolContext.java:561)
     at com.sybase.jdbc2.tds.PduInputFormatter.readPacket(PduInputFormatter.java:229)
     at com.sybase.jdbc2.tds.PduInputFormatter.read(PduInputFormatter.java:62)
     at com.sybase.jdbc2.tds.TdsInputStream.read(TdsInputStream.java:81)
     at com.sybase.jdbc2.tds.TdsInputStream.readUnsignedByte(TdsInputStream.java:114)
     at com.sybase.jdbc2.tds.Tds.nextResult(Tds.java:1850)
     at com.sybase.jdbc2.jdbc.ResultGetter.nextResult(ResultGetter.java:69)"
There is no file descriptor leak in the application. My question: since all the threads are hung at the JDBC and socket level, does this mean a faulty query could have triggered the problem (perhaps the database was too busy executing a faulty query)?
I suspect this because I received a database exception soon after the problem appeared: one of my database insert transactions had timed out after 300 seconds. This was also the first time I had received this kind of exception:
java.sql.SQLException: The transaction is no longer active - status: 'Marked rollback. [Reason=weblogic.transaction.internal.TimedOutException: Transaction timed out after 299 seconds
Xid=BEA1-11FE69525362E51BFA16(6404670),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=299,seconds left=60,activeThread=Thread[ExecuteThread: '22' for queue: 'weblogic.kernel.Default',5,Thread Group for Queue: 'weblogic.kernel.Default'],XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=started,assigned=none),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@a68c0),SCInfo[Mizuho-RWS+myserver]=(state=active),properties=({weblogic.jdbc=t3://10.104.8.81:7001}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+, XAResources={},NonXAResources={})],CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+)]'. No further JDBC access is allowed within this transaction.
     at weblogic.jdbc.wrapper.JTSConnection.checkIfRolledBack(JTSConnection.java:118)
     at weblogic.jdbc.wrapper.JTSConnection.checkConnection(JTSConnection.java:127)
Any inputs regarding this problem?

Raghu S wrote:
Hi,
I am using WebLogic 8.1 SP2 on a Solaris machine.

Ok, good enough. Once WebLogic times out a transaction, it rolls it back on the
connection. Unfortunately, that Sybase driver's rollback doesn't affect its
own running statements. For 8.1 SP3 we added code to explicitly cancel any ongoing
statement during a rollback. This may be what you need to free up those
threads and the sockets the driver may be keeping open. If you can upgrade
to a newer version of 8.1, this code will free you up. Alternately, you can
try upgrading either to Sybase's latest driver or to our BEA driver for Sybase.
Ask support for the latest BEA driver package for 8.1.
Joe
Stack trace of one of the threads at the time I took the thread dump (all the threads were stuck in a similar fashion):
"ExecuteThread: '3' for queue: 'weblogic.kernel.Default'" daemon prio=5 tid=0x00883a90 nid=0x10 runnable [6d080000..6d0819c0]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at com.sybase.jdbc2.timedio.RawDbio.reallyRead(RawDbio.java:202)
at com.sybase.jdbc2.timedio.Dbio.doRead(Dbio.java:243)
at com.sybase.jdbc2.timedio.InStreamMgr.readIfOwner(InStreamMgr.java:512)
at com.sybase.jdbc2.timedio.InStreamMgr.doRead(InStreamMgr.java:273)
at com.sybase.jdbc2.tds.TdsProtocolContext.getChunk(TdsProtocolContext.java:561)
at com.sybase.jdbc2.tds.PduInputFormatter.readPacket(PduInputFormatter.java:229)
at com.sybase.jdbc2.tds.PduInputFormatter.read(PduInputFormatter.java:62)
at com.sybase.jdbc2.tds.TdsInputStream.read(TdsInputStream.java:81)
at com.sybase.jdbc2.tds.TdsInputStream.readUnsignedByte(TdsInputStream.java:114)
at com.sybase.jdbc2.tds.Tds.nextResult(Tds.java:1850)
at com.sybase.jdbc2.jdbc.ResultGetter.nextResult(ResultGetter.java:69)
at com.sybase.jdbc2.jdbc.SybStatement.nextResult(SybStatement.java:204)
at com.sybase.jdbc2.jdbc.SybStatement.nextResult(SybStatement.java:187)
at com.sybase.jdbc2.jdbc.SybStatement.executeLoop(SybStatement.java:1698)
at com.sybase.jdbc2.jdbc.SybStatement.execute(SybStatement.java:1690)
at com.sybase.jdbc2.jdbc.SybCallableStatement.execute(SybCallableStatement.java:129)
at weblogic.jdbc.wrapper.PreparedStatement.execute(PreparedStatement.java:68)
at com.mizuho.rws.report.business.dao.FIReportsDAO.getReportList(FIReportsDAO.java:3463)
at com.mizuho.rws.report.business.businessObject.FIReports.getReportList(FIReports.java:98)
at com.mizuho.rws.report.business.ejb.FIReportsBean.getReportList(FIReportsBean.java:96)
at com.mizuho.rws.report.business.ejb.FIReports_4f92ds_EOImpl.getReportList(FIReports_4f92ds_EOImpl.java:270)
at com.mizuho.rws.report.client.delegates.FIReportsBusinessDelegates.getReportList(FIReportsBusinessDelegates.java:173)
at com.mizuho.rws.report.client.web.FIReportsAction.handleFIReportsBean(FIReportsAction.java:1759)
at com.mizuho.rws.report.client.web.FIReportsAction.performAction(FIReportsAction.java:349)
at com.mizuho.foundation.presentation.AppBaseAction.perform(AppBaseAction.java:143)
at com.mizuho.foundation.presentation.AppActionServlet.processActionPerform(AppActionServlet.java:518)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1586)
at com.mizuho.foundation.presentation.AppActionServlet.doPost(AppActionServlet.java:562)
at com.mizuho.foundation.presentation.AppActionServlet.doGet(AppActionServlet.java:544)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:971)
at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:402)
at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:305)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:6350)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3635)
at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2585)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
And the stack trace of the exception I received:
java.sql.SQLException: The transaction is no longer active - status: 'Marked rollback. [Reason=weblogic.transaction.internal.TimedOutException: Transaction timed out after 299 seconds
Xid=BEA1-11FE69525362E51BFA16(6404670),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=299,seconds left=60,activeThread=Thread[ExecuteThread: '22' for queue: 'weblogic.kernel.Default',5,Thread Group for Queue: 'weblogic.kernel.Default'],XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=started,assigned=none),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@a68c0),SCInfo[Mizuho-RWS+myserver]=(state=active
),properties=({weblogic.jdbc=t3://10.104.8.81:7001}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+, XAResources={},NonXAResources={})],CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+)]'. No further JDBC access is allowed within this transaction.
at weblogic.jdbc.wrapper.JTSConnection.checkIfRolledBack(JTSConnection.java:118)
at weblogic.jdbc.wrapper.JTSConnection.checkConnection(JTSConnection.java:127)
at weblogic.jdbc.wrapper.Statement.checkStatement(Statement.java:222)
at weblogic.jdbc.wrapper.PreparedStatement.setString(PreparedStatement.java:414)
at com.mizuho.rws.services.mail.business.dao.FIReportMailDAO.insertMailClientDetails(FIReportMailDAO.java:2790)
at com.mizuho.rws.services.mail.business.businessObject.FIReportMail.sendMail(FIReportMail.java:645)
at com.mizuho.rws.services.mail.business.ejb.MailerBean.sendFIReportMail(MailerBean.java:87)
at com.mizuho.rws.services.mail.business.ejb.Mailer_fyyt2g_EOImpl.sendFIReportMail(Mailer_fyyt2g_EOImpl.java:662)
at com.mizuho.rws.services.mail.client.delegates.MailerBeanBusinessDelegates.sendFIReportMail(MailerBeanBusinessDelegates.java:153)
at com.mizuho.rws.services.mail.business.businessObject.SendMailHandler.sendFIReportMail(SendMailHandler.java:181)
at com.mizuho.rws.services.mail.business.businessObject.SendMailHandler.resolveMail(SendMailHandler.java:429)
at com.mizuho.rws.services.mail.business.businessObject.SendMailHandler.notify(SendMailHandler.java:651)
at com.mizuho.foundation.utils.AppNotificationListener.handleNotification(AppNotificationListener.java:66)
at weblogic.time.common.internal.TimerListener$1.run(TimerListener.java:48)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
at weblogic.time.common.internal.TimerListener.deliverNotification(TimerListener.java:44)
at weblogic.management.timer.Timer.deliverNotifications(Timer.java:578)
at weblogic.time.common.internal.TimerNotification$1.run(TimerNotification.java:118)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
Regards
Raghu
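
For anyone hitting the same symptom before an upgrade is possible, a driver-level query timeout can keep a thread from parking in socketRead0 indefinitely, because the driver cancels its own statement when the timer fires. Below is a minimal sketch in modern JDBC, assuming the driver honors setQueryTimeout; the DAO class, query, and timeout value are hypothetical, not from the application above.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ReportListDao {
    private final DataSource dataSource;

    public ReportListDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countReports(String userId) throws SQLException {
        String sql = "SELECT COUNT(*) FROM reports WHERE user_id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            // Cancel the statement at the driver level well before the JTA
            // timeout, so the thread (and its socket) cannot sit in
            // socketRead0 forever waiting on a busy database.
            ps.setQueryTimeout(240);
            ps.setString(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }
}

Note that setQueryTimeout is plain JDBC and acts independently of the JTA transaction timeout that produced the TimedOutException above.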

Similar Messages

  • [SOLVED] Maximum TCP open connections and Open File Descriptors

    Hi,
    Back when I was using XP, I needed to patch a system file to increase the maximum number of open TCP connections - you surely know about this. I'm wondering how to do the same in GNU/Linux, and whether it really matters here.
    Also, open files are limited to 1024 by default; how do I change that? I'm running x86_64 + KDE SC 4.4.2
    Last edited by martin77 (2010-04-12 04:17:27)

    Thanks for replying.
    I mean "open file handler" or in a better GNU/Linux terminology "open file descriptors" aka the maximun number of files that can be accessed at a given time.
    For instance, VMWare will need you increase them to 4096 to work properly - and I presume something in the order of 5120 would be ok.
    Following The Arch Way, already found the solution and want to share it with all of you fellow n00bs:
    1. you need to open for edit /etc/security/limits.conf file with root privileges
    2. at the end of the file add:
    * soft nofile nn
    * hard nofile nn
    where nn is the number of open file descriptors you want. I set them to 8192 (probably too high), so for instance it should read:
    * soft nofile 8192
    * hard nofile 8192
    Read the embedded help for a better understanding of this crucial configuration file.
    As usual, thank you very much to this great community, devs and users, best!
    Last edited by martin77 (2010-04-12 04:18:47)
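
    A quick way to confirm the new limit leaves you headroom is to watch descriptor usage from inside the process itself. A small sketch, assuming a Linux /proc filesystem (each entry in /proc/self/fd is one open descriptor):

    import java.io.File;

    public class FdCount {
        public static void main(String[] args) {
            // Linux-specific: /proc/self/fd holds one symlink per open descriptor.
            File[] fds = new File("/proc/self/fd").listFiles();
            System.out.println("open descriptors: " + (fds == null ? "unknown" : fds.length));
        }
    }

    Remember that limits.conf is applied at login, so log out and back in before re-checking with 'ulimit -n'.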

  • Oracle instance running on a system with low open file descriptor

    Hello.
    We have 10.1.0.4 on SuSE 9 on x86 64bit Sun servers.
    We have databases that if started manually come up without the warning, but if started via a shell script scheduled through a crontab start up with this warning: "Oracle instance running on a system with low open file descriptor".
    My understanding is that it has to do with the OS ulimit; it appears that a non-interactive shell (crontab) does not set nofiles to 65536.
    All our systems are set up exactly the same way. The problem, though, is that some systems do not report the warning even when started non-interactively.
    My question is this: assuming my nofiles ulimit is in fact too low on all systems, why would some systems report the warning and others would not? Is there anything database specific the instance looks for when it starts, such as the number of datafiles in the database, instance memory size, etc..., which would make the instance warn in some cases but not the others?
    Thank You
    Boris

    Thank You Satish.
    This is a good reference and we may end up looking into the patch bundle associated with the bug.
    But does anyone have any idea why systems that are set up exactly the same way would warn in some cases and not in others?
    Also, the Metalink note talks about init.crsd; I have not established an association between this inconsistency and RAC.
    What I do see is that if we start our database non-interactively (where ulimit -n resolves to 1024 instead of 65536), the warning is generated - in some cases.
    Perhaps 1024 is too low. But then my question is: why would Oracle think it is too low only on some servers and not all?
    Boris

  • How to close BDB open File Descriptors ?

    Hi,
    In our current BDB environment, at runtime we switch Berkeley DB directories so that the application reads data from a new directory.
    We ran into a file descriptor issue when we switched to a newer version of the data: the system exceeded the allowed number of file descriptors and we had to restart the machine.
    1) I wanted to confirm: would closing the BDB Environment using env.close() fix this issue? That is, does closing the environment close BDB's file descriptors?
    2) Also, is there any way to test that the BDB file descriptors are closed? We tried the "lsof" command, which lists open file descriptors, but could not see anything related to jdb files or similar. So how do we verify whether the file descriptors are indeed getting closed?
    Please let me know if there is any other way to get around the file descriptor problem.
    Thanks,

    1) I wanted to confirm that closing the BDB Environment using env.close() would fix this issue. Does closing the environment close BDB's file descriptors?
    Yes, Environment.close will close all file descriptors.
    --mark
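
    For reference, a minimal sketch of the directory-switch pattern discussed above, assuming BDB Java Edition (the .jdb files mentioned); the class and method names are made up. Closing the old Environment is what releases its descriptors:

    import java.io.File;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class EnvironmentSwitcher {
        private Environment env;

        public synchronized void switchTo(File newDir) throws DatabaseException {
            if (env != null) {
                env.close(); // closes every file descriptor held by this environment
            }
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setReadOnly(true); // the app only reads from the switched-in directory
            env = new Environment(newDir, cfg);
        }
    }

    To verify, compare "lsof -p <pid>" (or pfiles on Solaris) output before and after the close; the entries for the old directory's files should disappear.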

  • Suddenly can't open file

    I have only ever had one version of InDesign on my computer, version CS5.5. Last month I worked on a file with no problem, then today I just went to revise it and I suddenly get the message "Cannot open file because it was saved with a newer version of Adobe InDesign (CS 6.0)." There is no way this could have happened, because the file has not budged from my computer since I worked on it, and I certainly have no more recent version of ID. Even the file info says the file is in CS 5.5 format. What can I do?

    Thanks for your reply. I am the only person using this computer (it's a personal laptop) and I can assure you that nobody else has looked at this file. However, I'd much appreciate someone opening it and re-saving it as CS 5.5 (even though the file info says it is in version CS 5.5). But how or where can I link the file? It appears I can't attach the file to my post here.

  • All of a sudden - can't open files

    Running Adobe Reader 10.1.2 (current rev) on Win 7 (latest release)
    Had been working fine. Now on some files (but not all) I get this error:
    "there was an error opening this document. The file is damaged and could not be repaired."
    Two comments:
    (1) I uninstalled Adobe Reader, then rebooted, then reinstalled Adobe Reader from a fresh download. I still get the same error.
    (2) The same PDF files that don't open on this computer open fine on another computer, so it must be this installation on this computer.
    Since I've already uninstalled, I'm not sure where to go from here... Is there a mis-set configuration option in Reader causing this for only some files?
    Thanks anyone for ideas...

    I had the same issue with a file that we had on our corporate network drive.
    After an hour of research plus trial and error, I had to download and purchase the full copy of this program: Kernel for PDF Repair
    http://www.nucleustechnologies.com/pdf-repair-tool.html
    It was an emergency for me, hence the purchase of the tool, but it worked: I was able to repair the file and open it with no more error messages.
    The second issue I encountered with the same file after I repaired it: I was unable to print it. The doc gave me an error every time I tried to print it.
    I had to download foxit (pdf reader) and print it using that program.
    http://www.foxitsoftware.com/Secure_PDF_Reader/

  • Suddenly can't open files says need newer version

    I was using Muse fine this morning. Then, about 30 minutes later, I went to open the same file and got a message saying it needs a newer version and to please update to the newest version. I have the latest, and even uninstalled/reinstalled and let it update to the build below:
    v3.2 Build 2 Cl 772925
    I still get the same message, even though I was just working on this exact file this morning. The weird thing is I can create a new site; it just won't let me open the same file I was working on earlier today.
    Ideas?

    Hello,
    Could you please provide us your .muse file so that we can have try to open it at our end and check for the issues?
    Please email it to [email protected] If your file is greater than 20mb you can use something like Adobe SendNow or SendThisFile.
    Please do not forget to mention link to this forum in your e-mail so that we can identify the file.
    Regards,
    Sachin

  • How to increase the open file descriptor value permanently in Solaris 5.8

    I am getting some strange errors in the Oracle alert log advising me to increase the open file descriptor limit. I believe ulimit only holds values until the system is rebooted. How do I make this permanent?


  • How to determine which file descriptor opened my driver?

    Suppose a user process opens my driver twice. How can open() determine which file descriptor opened the device? In Linux, the kernel passes in a pointer to a structure which represents the open file descriptor. However, Solaris only passes the device number to open(), so I can only determine that my device was opened, but not which file. I need this information because my driver needs to keep track of all file descriptors opened for the device.
    Thanks!
    -Darren

    I'm still at a loss as to why you need to know the file descriptor value (unless the app is sufficiently spaghettied that it has to query the driver to figure out what it opened with what). It's like asking what filename was used to open the device (which you can't get either). Since Solaris is based on a STREAMS framework, it would be bad for drivers to even think they have a direct mapping into user space. It would be the same as asking (using /bin/sh):
    prog3 4>&1 3>&1 2>&1 | prog2 | prog1
    and wanting to know from prog1 which descriptor prog3 wrote to. I don't see how Linux even does this properly, since any given file open can have multiple file descriptors (via dup).

  • "Too many open files" Exception on "tapestry-framework-4.1.1.jar"

    When a browser accesses my webwork, the server opens a certain number of file descriptors to the "tapestry-framework-4.1.1.jar" file and doesn't release them for a while.
    Below is the output from "lsof | grep tapestry":
    java 26735 root mem REG 253,0 62415 2425040 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-portlet-4.1.1.jar
    java 26735 root mem REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    java 26735 root mem REG 253,0 320546 2425036 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-contrib-4.1.1.jar
    java 26735 root mem REG 253,0 49564 2424979 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-annotations-4.1.1.jar
    java 26735 root 28r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    java 26735 root 29r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    java 26735 root 30r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
    These unknown references are sometimes released automatically, but sometimes not.
    And I get "Too many open files" exception after using my application for a few hours.
    The number of unknown references increases as I access my webwork or just hit the F5 key in my browser to reload it.
    I tried different browsers to see if I could see any difference, and in fact it differed by the browser I used.
    When viewed with Internet Explorer, it increased by 3 for every access.
    On the other hand, it increased by 7 for each attempt when accessed with Firefox.
    I have already tried raising the maximum number of file descriptors, and that resolved the "Too many open files" exception.
    But still I'm wondering what is actually opening "tapestry-framework-4.1.1.jar" this many times.
    Could anyone figure out what is going on?
    Thanks in advance.
    The following is my environmental version info:
    - Red Hat Enterprise Linux ES release 4 (Nahant Update 4)
    - Java: 1.5.0_11
    - Tomcat: 5.5.20
    - Tapestry: 4.1.1

    Hi,
    The cause might be that the server got an exception while trying to accept client connections; it will try to back off to aid recovery.
    The OS limit for the number of open file descriptors (FD limit) needs to be increased. Tune OS parameters that might help the server accept more client connections (e.g. the TCP accept backlog).
    http://e-docs.bea.com/wls/docs90/messages/Server.html#BEA-002616
    Regards,
    Prasanna Yalam
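
    One plausible mechanism for reader descriptors like 28r/29r/30r above: any URLClassLoader (which servlet-container loaders build on) keeps its jars open until the loader itself is closed. A hypothetical demo (jar path and class name come from the command line) that you can watch with lsof, as in the original post:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class JarFdDemo {
        public static void main(String[] args) throws Exception {
            URL jarUrl = new File(args[0]).toURI().toURL(); // path to any jar
            URLClassLoader loader = new URLClassLoader(new URL[] { jarUrl });
            loader.loadClass(args[1]); // a class inside that jar; forces the jar open
            System.out.println("Jar loaded - run lsof -p <pid>, then press Enter");
            System.in.read();
            loader.close(); // Java 7+: releases the loader's cached jar handles
            System.out.println("Loader closed - run lsof again, then press Enter");
            System.in.read();
        }
    }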

  • Genunix: basic rctl process.max-file-descriptor (value 256) exceeded

    Hi,
    I am getting the following error on my console rapidly.
    I am using a Sun SPARC server running Solaris 10. We started getting this error
    suddenly after a restart of the server, and the error is continuously rolling on the console...
    The Error:
    Rebooting with command: boot
    Boot device: disk0 File and args:
    SunOS Release 5.10 Version Generic_118822-25 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Failed to send email alert for recent event.
    SC Alert: Failed to send email alert for recent event.
    Hostname: nitwebsun01
    NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk
    NOTICE: VxVM vxdmp V-5-3-1700 dmpnode 287/0x0 has migrated from enclosure FAKE_ENCLR_SNO to enclosure DISKS
    checking ufs filesystems
    /dev/rdsk/c1t0d0s4: is logging.
    /dev/rdsk/c1t0d0s7: is logging.
    nitwebsun01 console login: Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 439
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 467
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:44 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:49 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 503
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 510
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 516
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 540
    Nov 20 14:56:53 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 549
    Nov 20 14:56:53 nitwebsun01 last message repeated 4 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 665
    Nov 20 14:56:56 nitwebsun01 last message repeated 6 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 667
    Nov 20 14:56:56 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:57 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 868
    Nov 20 14:56:57 nitwebsun01 /usr/lib/snmp/snmpdx: unable to get my IP address: gethostbyname(nitwebsun01) failed [h_errno: host not found(1)]
    Nov 20 14:56:58 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 887
    Nov 20 14:57:00 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 976
    nitwebsun01 console login: root
    Nov 20 14:57:00 nitwebsun01 last message repeated 2 times
    Here I have attached my /etc/project file also:
    [root@nitwebsun01 /]$ cat /etc/project
    system:0::::
    user.root:1::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    noproject:2::::
    default:3::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    group.staff:10::::
    [root@nitwebsun01 /]$
    Help me to come out of this issue.
    Regards
    Suseendran .A

    This is an old post but I'm going to reply to it for future reference of others.
    Please ignore the first reply to this thread... by default /etc/rctladm.conf doesn't exist, and you should never use it. Just put it out of your mind.
    So, then... by default, a process can have no more than 256 file descriptors open at any given time. The likelihood that you'll have a program using more than 256 files is very low... but each network socket counts as a file descriptor, so many network services will exceed this limit quickly. The 256 limit is stupid, but it is a standard, and as such Solaris adheres to it. To look at the open file descriptors of a given process, use "pfiles <pid>".
    So, to change it you have several options:
    1) You can tune the default threshold on the number of descriptors by specifying a new default threshold in /etc/system:
    set rlim_fd_cur=1024
    2) On the shell you can view your limit using 'ulimit -n' (use 'ulimit' to see all your limit thresholds). You can set it higher for this session by supplying a value, example: 'ulimit -n 1024', then start your program. You might also put this command in a startup script before starting your program.
    3) The "right" way to do this is to use a Solaris RCTL (resource control) defined in /etc/project. Say you want to give the "oracle" user 8152 fd's... you can add the following to /etc/project:
    user.oracle:101::::process.max-file-descriptor=(priv,8152,deny)
    Now log out the Oracle user, then log back in and startup.
    You can view the limit on a process like so:
    prctl -n process.max-file-descriptor -i process <pid>
    In that output, you may see 3 lines: one for "basic", one for "privileged" and one for "system". System is the max possible. Privileged is the limit which you need to have special privs to raise. Basic is the limit that you as any user can increase yourself (such as by using 'ulimit' as we did above). If you define a custom "privileged" RCTL like we did above in /etc/project, it will dump the "basic" priv, which is, by default, 256.
    For reference, if you need to increase the threshold of a daemon that you can not restart, you can do this "hot" by using the 'prctl' program like so:
    prctl -t basic -n process.max-file-descriptor -x -i process <PID>
    The above just dumps the "basic" resource control (limit) from the running process. Do that, then check it a minute later with 'pfiles' to see that it's now using more FDs.
    Enjoy.
    benr.
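
    As a footnote to the above: from inside a JVM you can read the same numbers without pfiles or prctl, since HotSpot on Solaris and Linux exposes them through a com.sun MXBean. A small sketch:

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdLimits {
        public static void main(String[] args) {
            Object os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
                System.out.println("max fds:  " + unix.getMaxFileDescriptorCount());
            }
        }
    }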

  • Maximum simultaneous opened files in a java thread on linux 1.4.20

    Hello,
    It seems like a maximum of 1024 files can be opened by a process on Linux (in Java, a thread is a Linux process).
    Is it possible to change this value, or is it fixed at JVM compilation time?
    I created an NIO server, and one thread is supposed to manage thousands of connections. For the moment I can't get past 1010 connections.
    As a quick test, I created the following source:
    import java.io.File;
    import java.io.FileInputStream;

    public class MaxOpenedFilesTest {
        public static void main(String[] args) {
            File f = new File("/home/greg/a.out");
            FileInputStream[] stream = new FileInputStream[2000];
            try {
                for (int i = 0; i < stream.length; i++) {
                    stream[i] = new FileInputStream(f);
                    System.out.println(i + " files opened");
                }
            } catch (Exception e) {
                System.out.println("ERROR : " + e);
                e.printStackTrace();
            }
        }
    }
    This test throws an IOException at 1019 files opened: Too many open files...
    I have tested a lot of things. I even recompiled the Linux kernel - changed NR_OPEN, OPEN_MAX, __FD_SETSIZE, etc. Still doesn't work.
    I have set the global maximum file descriptor count to 65536. It works. But for one thread, I can't have more than 1019 file descriptors (must be the initial 1024 limit defined by the kernel)...
    Could somebody help.
    It would be great.
    Thank you.
    Grégoire.

    Thank you for your answer.
    I have already done all the described procedures in the link you provided.
    In fact, I used this link :
    file:///home/greg/1_NetNoLedge/LinuxTuning/linux.html
    and this other one :
    file:///home/greg/1_NetNoLedge/LinuxTuning/s1-custom-kernel-modularized.html
    Still, only 1024 files are allowed to be opened by one process...
    In your link, it says:
    You need to give processes the option of increasing their file descriptor limits:
    In /etc/security/limits.conf add two lines:
    * soft nofile 1024
    * hard nofile 4096
    This suggests that it wouldn't even be necessary to recompile the kernel to allow more open file descriptors in a process... This really isn't clear to me.
    Thank you.

  • DirectoryService[67]: socket(PF_ROUTE) failed: Too many open files

    Hello!
    On Mac OS X 10.4.10, which is an OD master, my logs are filled with this:
    DirectoryService[67]: socket(PF_ROUTE) failed: Too many open files
    It happens exactly every 1 hour 44 minutes 18 seconds, 161 times in a row. At the same time, it makes a lot of DNS requests for "kerberos-master.udp.XXXXXXXXX.COM IN SRV +"
    The server works fine, but there's probably a cron job going crazy, and I would like to know why it's happening.
    Thanks a lot!
    Fred


  • Help!  Can't open files in all programs since installing version cue upgrade

    Please help. A few days ago I got a pop-up (I'm on Windows XP) saying updates were available for two Adobe things. I just chose yes and kept working. Now I can't open files in any of my CS3 programs. I have lost 4 days of productivity and fear losing my mind next!! After doing some research, it appears a lot of people are having trouble since installing a Version Cue update for CS3. I followed their suggestions and unchecked VC in Photoshop (my first offender), and suddenly I could open files again. But now I can't in Illustrator, even though VC is not checked in its preferences.
    I've tried uninstalling and reinstalling the whole package. I've tried just uninstalling VC. Nothing's working and I'm having major deadline problems now.
    Please, please help! Thanks.
    Anne

    You can try a system restore to a point before the update. If you're lucky, it might work, but I often find System Restore will fail when you need it the most.
    Peter

  • File descriptor leak in socket programming

    We have a complex socket-programming client package in Java using java.nio (Selectors, SelectableChannel).
    We use the package to connect to a server.
    Whenever the server is down, it tries to reconnect to the server again at regular intervals.
    In that case, the number of open file descriptors builds up with each try. I am able to confirm this using the "pfiles <pid>" command.
    But it looks like we are closing the channels, selectors and sockets properly when it fails to connect to the server.
    So we are unable to find the code that causes the issue.
    We run this program on Solaris.
    Is there a tool to track down the code that leaks the file descriptors?
    Thanks.

    Don't close the selector. There is a selector leak. Just close the socket channel. As this is a client, you should then also call selector.selectNow() to have the close take final effect. Otherwise there is also a socket leak.
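
    A minimal sketch of the pattern this reply describes (host, port, and the 5-second timeout are placeholders): one long-lived Selector, close only the channel on failure, then selectNow() so the close takes final effect and the descriptor is actually released.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class Reconnector {

        // One long-lived selector; per the reply, never close and reopen it per retry.
        public static SocketChannel tryConnect(Selector selector, String host, int port)
                throws IOException {
            SocketChannel channel = SocketChannel.open();
            try {
                channel.configureBlocking(false);
                channel.register(selector, SelectionKey.OP_CONNECT);
                channel.connect(new InetSocketAddress(host, port));
                if (selector.select(5000) > 0 && channel.finishConnect()) {
                    selector.selectedKeys().clear();
                    return channel; // connected; caller re-registers for OP_READ etc.
                }
                throw new IOException("connect timed out");
            } catch (IOException e) {
                channel.close();      // close the channel, not the selector
                selector.selectNow(); // let the close take final effect (frees the fd)
                throw e;
            }
        }
    }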
