Stdio - file descriptor limits - Solaris 10

Hi
We are new to Solaris (coming from HP-UX) and are porting an application.
I've run into a problem whereby we run out of file descriptors - the application is TCP/IP and file I/O intensive. Sometimes it happens, sometimes not.
It manifests itself as an error when calling setsockopt.
Increasing the file limits in /etc/system has not relieved the problem.
A Google search suggests there is a hard limit of 255 on file descriptors for 32-bit applications using stdio - does this still apply? Any workarounds?
Specs:
Solaris 10 01/06
SunOS saturn 5.10 Generic_118822-25 sun4u sparc SUNW,Sun-Fire-v240
Thanks in advance.

What shell do you start the application from?
If you use sh/bash/ksh, type "ulimit -a" to see what limits the process will inherit from the shell. If the value for 'open files' is very low, you can increase it with:
ulimit -n <new value>
for example;
ulimit -n 4096
If you are using tcsh/csh, type "limit -h" to view the limits.
The value you set in /etc/system is the maximum number of file descriptors allowed per process; the process may raise its own open-file limit up to that value, but it doesn't get the highest limit automatically.
See also:
man ulimit
man setrlimit
7/M.
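If the limit must not depend on the invoking shell, the application can also raise its own soft limit at startup with setrlimit(2), mentioned in the man pages above. A minimal C sketch (assuming rlim_fd_max in /etc/system already permits the target value):

/* Raise this process's soft fd limit up to its hard limit.
 * Sketch only - error handling trimmed to the essentials. */
#include <stdio.h>
#include <sys/resource.h>

int raise_fd_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;   /* soft limit -> hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    printf("fd limit is now %ld\n", (long)rl.rlim_cur);
    return 0;
}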

Similar Messages

  • Increase file descriptor limits on managed server

    Hi,
    We have an Admin Server which manages a managed server.
    We need to increase the file descriptor limits of the managed server.
    We modified the script commEnv.sh on the Admin Server and successfully increased the limit to 65,536. Here is the log from the Admin Server boot:
    ####<Sep 25, 2013 11:04:18 AM CEST> <Info> <Socket> <lv01469> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1380099858592> <BEA-000416> <Using effective file descriptor limit of: 65,536 open sockets/files.>
    How can we do the same thing on the managed server? We tried to modify the same script (commEnv.sh) on the managed server, but the file descriptor limit is still 1,024:
    ####<Sep 25, 2013 11:23:30 AM CEST> <Info> <Socket> <lv01470> <119LIVE_01> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1380101010988> <BEA-000415> <System has file descriptor limits of - soft: 1,024, hard: 1,024>
    ####<Sep 25, 2013 11:23:30 AM CEST> <Info> <Socket> <lv01470> <119LIVE_01> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1380101010989> <BEA-000416> <Using effective file descriptor limit of: 1,024 open sockets/files.>
    Thanks in advance

    Solved.
    It was necessary to restart the Node Manager after modifying commEnv.sh.

  • Number of file descriptors in Solaris 10

    Hi,
    I had an open-files issue and updated the number of file descriptors with the following command (using zones on Solaris 10 running on SPARC):
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME
    I wanted to check: is there any way to know whether the new number of file descriptors has come into effect, and is it also possible to check how many files are currently open, just to make sure I am not reaching the limits?
    Thank you
    Jonu Joy

    Thank you Alan.
    Even after setting the max file descriptor to 8192, the output from pfiles shows 4096:
    Current rlimit: 4096 file descriptors
    Would you know if there is something wrong with the command I am using - projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME (I am issuing this command as root)?
    Thank you
    Jonu Joy
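    One thing worth checking: resource controls changed with projmod only apply to processes that start in the project after the change, so a process that was already running (or that was not started under PROJECTNAME) will still report the old 4096. A live process can be inspected with prctl -n process.max-file-descriptor -i process <pid>. To verify from inside the process itself, and to count currently open files, a rough C sketch (it counts /proc/self/fd entries, the same directory the ls /proc/$PID/fd trick elsewhere on this page reads):

    /* Sketch: report the effective fd soft limit and the number of
     * currently open descriptors by counting /proc/self/fd entries. */
    #include <dirent.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        int open_fds = 0;
        DIR *d;
        struct dirent *e;

        getrlimit(RLIMIT_NOFILE, &rl);
        d = opendir("/proc/self/fd");   /* opendir itself uses one fd */
        if (d != NULL) {
            while ((e = readdir(d)) != NULL)
                if (e->d_name[0] != '.')
                    open_fds++;
            closedir(d);
        }
        printf("soft limit: %ld, open fds: %d\n",
               (long)rl.rlim_cur, open_fds);
        return 0;
    }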

  • Overcoming file descriptor limitation?

    Hello,
    I am developing a server which should be able to handle more than 65535 concurrent connections. I have it implemented in Java, but I'm hitting a limit on file descriptors. Since there is no fork() call in Java, I can't figure out what to do.
    The server is basically a kind of HTTP proxy, and a connection often waits for the upstream HTTP server to handle it (which can take some time, during which I need to leave the socket open). I made a simple hack which helped me: I used LD_PRELOAD to catch the bind() library call and set the Linux socket option TCP_DEFER_ACCEPT:
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_DEFER_ACCEPT, (char *)&val, sizeof(int)) < 0) ....
    This tells the kernel to accept() a connection only when some data has arrived on it, which helps a little (sockets waiting for the handshake and the request don't have to consume a file descriptor). Any other hints? Should I somehow convince Java to fork()? Or should I switch to a 64-bit kernel and a 64-bit Java implementation?
    I can quite easily switch to Solaris if that would help.
    Any pointers/solutions appreciated.
    Juraj.
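    For readers wondering what the LD_PRELOAD hack above looks like in full, here is a rough sketch (Linux-specific, since TCP_DEFER_ACCEPT is a Linux option; the 5-second value is an arbitrary choice for illustration):

    /* Interpose bind() and set TCP_DEFER_ACCEPT before binding.
     * Build: gcc -shared -fPIC -o defer.so defer.c -ldl
     * Use:   LD_PRELOAD=./defer.so java ... */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen)
    {
        static int (*real_bind)(int, const struct sockaddr *, socklen_t);
        int val = 5;   /* seconds to wait for data before accept() fires */

        if (real_bind == NULL)
            real_bind = (int (*)(int, const struct sockaddr *, socklen_t))
                        dlsym(RTLD_NEXT, "bind");
        /* Best effort - harmlessly fails on non-TCP sockets. */
        setsockopt(sockfd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &val, sizeof(val));
        return real_bind(sockfd, addr, addrlen);
    }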

    You can use dbms_lob functions to access CLOBs, so changing the datatype may not be as problematic as you think.
    Also, in PL/SQL the VARCHAR2 limit is 32767, not 4000, so if you're accessing the table via a stored procedure you can change the column datatype to CLOB. Provided the data is less than 32767 in length, you can use a PL/SQL variable to manipulate it.
    Incidentally, use CLOB rather than LONG for future compatibility.

  • Solaris file descriptor question

    Hi,
    We have an application on Solaris 2.6, and the shell in which the server runs has a file descriptor limit of 1024. What does this mean? Does every process started from the shell get 1024 fds? What is the maximum number of fds that a Solaris 2.6 system can provide?
    When I run "sysdef", I get:
    ffffffff:fffffffd file descriptors
    How do I interpret this line? Is this 64K minus some value?
    If the system limit is 64K and each shell has 1024, how are the fds allocated to the shells? What I mean is: say I have 3 shells, each with a descriptor limit of 1024; is the distribution then something like 1024-2047 for shell 1, 2048-3071 for shell 2, and 3072-4095 for shell 3?
    Appreciate any explanation anyone can offer.
    thanks,
    mshyam

    Hi There,
    About File Descriptors and Their Limitations:
    All versions of Solaris (including Solaris 7 64-bit) have a default "soft" limit of 64 and a default "hard" limit of 1024.
    Processes may need to open many files or sockets as file descriptors. Standard I/O (stdio) library functions have a built-in limit of 256 file descriptors: fopen() stores the descriptor in a char-sized field, so it fails if it cannot get a descriptor between 0 and 255. The open() system call returns an int, so it has no such limitation. However, if open() has consumed descriptors 0 through 255 without any being closed, fopen() will not be able to open any stream, as all the low-numbered descriptors have been used up. Applications that need many file descriptors for a large number of sockets, or other raw files, should force those descriptors to be numbered 256 or above. This leaves the low numbers free for system functions, such as name services, which depend on stdio routines.
    (See p 368 "Performance and Tuning - Java and the Internet").
    There are limitations on the number of file descriptors
    available to the current shell and its descendents. (See the ulimit man page). The maximum number of file descriptors that can be safely used for the shell and Solaris processes is 1024.
    This limitation has been lifted for Solaris 7 64-bit which can be 64k (65536).
    Therefore the recommended maximum values to be added to /etc/system are:
    set rlim_fd_cur=1024
    set rlim_fd_max=1024
    To use the limit command with csh:
    % limit descriptors 1024
    To use the ulimit command with Bourne or ksh:
    $ ulimit -n 1024
    However, some third-party applications need the max raised. A possible recommendation would be to increase rlim_fd_max, but not the default (rlim_fd_cur). Then rlim_fd_cur can be raised on a per-process basis if needed, but the higher setting
    for rlim_fd_max doesn't affect all processes.
    I hope this helps your understanding about systemwide file descriptor max limit in conjunction with shell and per process file descriptor limits.
    ....jagruti
    Developer Technical Support
    Sun Microsystems Inc.
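    A common way to apply the "keep raw descriptors above the stdio range" advice is to push each socket or data-file descriptor upward with fcntl(F_DUPFD) right after opening it, leaving the low numbers free for fopen(). A small illustrative helper (the name open_high is made up for this sketch):

    /* Open a file but return a descriptor numbered 256 or higher,
     * so stdio's fopen() can still find low-numbered descriptors. */
    #include <fcntl.h>
    #include <unistd.h>

    int open_high(const char *path, int oflag)
    {
        int fd, high;

        fd = open(path, oflag);
        if (fd < 0 || fd >= 256)
            return fd;                   /* error, or already high */
        high = fcntl(fd, F_DUPFD, 256);  /* lowest free fd >= 256 */
        if (high < 0)
            return fd;                   /* no high fd free; keep original */
        close(fd);
        return high;
    }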

  • Posix Performance Pack & file descriptor limitation

    This question doesn't appear to relate to our Enterprise product, so I'm responding to this message in the performance newsgroup as well. Please see answers below.
    Andy Ping wrote:
    What is Posix Performance Pack about?
    This refers to the availability of enhanced I/O handling (enabled by default with the Solaris version of WLS) and is documented at:
    http://www.weblogic.com/docs51/admindocs/tuning.html#performance packs
    And how do I resolve the file descriptor limitation? Is ulimit OK?
    The Solaris file descriptor limits are set using lines similar to:
    set rlim_fd_cur = 1024
    set rlim_fd_max = 8192
    in your /etc/system file. In the above case, user processes are allowed 1K open files by default and can raise this limit to 8K files using the ulimit command.
    You might have noticed that this command is used in the weblogic startup script
    to accomplish this. Judging by your log entries, I'd guess that your tunables
    are both set to 512. You might want to consider raising your maximum to 1K.
    Perhaps other newsgroup readers can suggest/recommend better file descriptor
    limits.
    Environment: Sun Solaris 2.6, WebLogic 5.1, JDK 1.2.2_006, Oracle 8.1.5.
    Phenomena:
    26 10:51:20 GMT+08:00 2000:< I > < ListenThread > Listening on port: 7001
    26 10:51:20 GMT+08:00 2000:< I > < Posix Performance Pack > System has file descriptor limits of - soft: '512', hard: '512'
    26 10:51:20 GMT+08:00 2000:< I > < Posix Performance Pack > Using effective file descriptor limit of: '512' open sockets/files.
    26 10:51:20 GMT+08:00 2000:< I > < Posix Performance Pack > Allocating: '3' POSIX reader threads
    If we use JDK 1.1.7b, the phenomena are:
    26 10:51:20 GMT+08:00 2000:< I > < EJB > 0 deployed, 0 failed to deploy.
    26 10:51:20 GMT+08:00 2000:< I > < HTTP > Log rotation is size based
    26 10:51:20 GMT+08:00 2000:< I > < ZAC > ZAC ACLs initialized
    26 10:51:20 GMT+08:00 2000:< I > < ZAC > ZAC packages stored in local directory exports
    26 10:51:20 GMT+08:00 2000:< I > < ListenThread > Listening on port: 7001
    26 10:51:20 GMT+08:00 2000:< I > < Posix Performance Pack > System has file descriptor limits of - soft: '512', hard: '512'
    26 10:51:20 GMT+08:00 2000:< I > < Posix Performance Pack > Using effective file descriptor limit of: '512' open sockets/files.
    26 10:51:20 GMT+08:00 2000:< I > < Posix Performance Pack > Allocating: '3' POSIX reader threads
    20 00:58:28 GMT-05:00 2000:< E > < Posix Performance Pack > Failure in processSockets()
    java.net.SocketException: Connection reset by peer
    at java.net.SocketInputStream.read(Compiled Code)
    at weblogic.socket.PosixSocketMuxer.processSockets(Compiled Code)
    at weblogic.socket.SocketReaderRequest.execute(Compiled Code)
    at weblogic.kernel.ExecuteThread.run(Compiled Code)
    [the same processSockets() failure and stack trace recurs five more times between 00:58:28 and 01:05:31]
    20 00:59:23 GMT-05:00 2000:< E > < ServletContext-General > Cannot find resource 'high_tech_area/front/file/htprog.css' in document root '/opt/weblogic/weblogic/myserver/public_html' [recurs six more times]
    20 00:59:28 GMT-05:00 2000:< I > < ServletContext-General > Generated java file: /opt/weblogic/weblogic/myserver/classfiles/jsp_servlet/_high_tech_area/_front/_bbs/_bbs_list.java [recurs twice more, plus a similar entry for _back/_hbk_perlaw/_hbk_perlaw_add.java]
    20 01:04:33 GMT-05:00 2000:< E > < ServletContext-General > Cannot find resource '/high_tech_area/back/hbk_perlaw/hbk_perlaw_list.jsp' in document root '/opt/weblogic/weblogic/myserver/public_html' [recurs once more]
    You will probably want to select your WLS/JDK environment using the platform
    support information at:
    http://www.weblogic.com/platforms/index.html#solaris
    Hope this helps.
    -Charlie

  • Max number of file descriptors in 32 vs 64 bit compilation

    Hi,
    I compiled a simple C app (with the Solaris CC compiler) that attempts to open 10000 file descriptors using fopen(). It runs just fine when compiled in 64-bit mode (after first setting 'ulimit -S -n 10000').
    However, when I compile it in 32-bit mode it fails to open more than 253 files. A call to system("ulimit -a") reports "nofiles (descriptors) 10000".
    Did anybody ever see similar problem before?
    Thanks in advance,
    Mikhail
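    For reference, the test described above can be reduced to a few lines; built 32-bit it should stop near 253 opens, while the same source built as a 64-bit binary runs up to the ulimit. A sketch:

    /* Count how many times fopen() succeeds before it fails. */
    #include <stdio.h>

    int main(void)
    {
        int n = 0;

        while (fopen("/dev/null", "r") != NULL)   /* leak on purpose */
            n++;
        printf("fopen succeeded %d times\n", n);
        return 0;
    }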

    On 32-bit Solaris, the stdio FILE struct stores the file descriptor (an integer) in an 8-bit field. With 3 files opened automatically at program start (stdin, stdout, stderr), that leaves 253 available file descriptors.
    This limitation stems from early versions of Unix and Solaris, and must be maintained to allow old binaries to continue to work. That is, the layout of the FILE struct is wired into old programs and thus cannot be changed.
    When 64-bit Solaris was introduced, there was no compatibility issue, since there were no old 64-bit binaries. The 256-descriptor limit in stdio was removed by making the field larger. In addition, the layout of the FILE struct is hidden from user programs, so that future changes are possible should they become necessary.
    To work around the limit, you can play some games with dup() and closing the original descriptor to make it available for use with a new file, or you can arrange to have fewer than the max number of files open at one time.
    A new interface for stdio is being implemented to allow a large number of files to be open at one time. I don't know when it will be available or for which versions of Solaris.

  • accept() needs more file descriptors

    My application server uses multiple threads to deal with highly concurrent socket requests. When accept() takes a request it is assigned an FD and a thread is created to handle it; the thread closes the FD after it finishes processing and then exits.
    My question is: when threads are concurrently handling 56~57 FDs, accept() can't get a new FD (errno 24). I know FDs are limited per process, and I could fork subprocesses to reach higher concurrency.
    But I wonder: isn't there another good way to solve the problem? How does a Web server achieve high concurrency?
    Any suggestion is appreciated!
    Jenny

    Hi Jenny,
    First of all, you did not say which release of Solaris you are using, but I'll assume you are on a version later than 2.4.
    You are correct when you say that the number of file descriptors that can be opened is a per-process limit. Depending on the OS version the default value for this limit changes, but there are simple ways to increase it.
    First of all there are two types of limits: a hard (system-wide) limit and a soft limit. The hard limit can only be changed by root, but the soft limit can be changed by any user. There is one restriction on soft limits: they can never be set higher than the corresponding hard limit.
    1. Use the command ulimit(1) from your shell to increase the soft limit from its default value (64 before Solaris 8) to a specified value less than the hard limit.
    2. Use the setrlimit(2) call to change both the soft and hard limits. You must be root to change the hard limit though.
    3. Modify the /etc/system file and include the following line to raise the default limit to 128:
    set rlim_fd_cur=0x80
    After changing the /etc/system file, the system should be rebooted so that the change takes effect.
    Note that stdio routines are limited to using file descriptors 0 through 255. Even though the limit can be set higher than 256, if the fopen function cannot get a file descriptor lower than 256, it will fail. This can be a problem if other routines use the open function directly. For example, if 256 files are opened with the open function and none of them are closed, no other files can be opened with the fopen function because all of the low-numbered file descriptors have been used.
    Also, note that it is somewhat dangerous to set the fd limit higher than 1024. Some structures, such as fd_set in <sys/select.h>, are defined by the system assuming the maximum fd is 1023. If a program uses an fd larger than 1023 with the macros and routines that access such a structure, it will corrupt its memory space by modifying memory outside the bounds of the structure.
    Caryl
    Sun Developer Technical Support

  • File Descriptor Limit

    1. We can change the limits by setting the values rlim_fd_cur & rlim_fd_max in the /etc/system file.
    2. There is some documentation stating that the max should never exceed 1024.
    3. Question:
    a. For Solaris 8 can we ever set the max to be > 1024?
    b. If we can, is there another ceiling?
    c. Can we redefine FD_SETSIZE in the app that wants to use select() with fds > 1024? Is there any mechanism to do a select() on FDs > 1023?
    4. If the process is running as root, does it still have a limit on FDs? Can it then raise it using setrlimit()?
    Thnx
    Aman

    The hard limit is 1024 for the number of descriptors.
    The man page for limit(1) says that root can change the hard limits, but if you raise the fd limit above 1024 you may encounter kernel performance problems or even failure conditions. The number is a recommendation, empirically based on what a selection of processors and memory models can tolerate. You might get more expert info by cross-posting this question to the solaris-OS/kernel forum. Raising the hard limit might be possible, but I cannot speak to the risks with much direct knowledge.
    You might also want to examine the design of an app that needs more than 1024 files open at once; maybe there is an alternative design that allows you to close more file descriptors.
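    On question 3c specifically: instead of redefining FD_SETSIZE, code that has to wait on descriptors above 1023 can use poll(2), which takes an array of pollfd structures and has no compiled-in fd ceiling. A minimal sketch:

    /* Wait for one descriptor to become readable; fd may exceed 1023. */
    #include <poll.h>

    int wait_readable(int fd, int timeout_ms)
    {
        struct pollfd pfd;

        pfd.fd = fd;
        pfd.events = POLLIN;
        pfd.revents = 0;
        return poll(&pfd, 1, timeout_ms);  /* >0 ready, 0 timeout, -1 error */
    }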

  • File descriptor leak in socket programming

    We have a complex socket-programming client package in Java using java.nio (Selector, SelectableChannel).
    We use the package to connect to a server.
    Whenever the server is down, it tries to reconnect to the server again at regular intervals.
    In that case, the number of open file descriptors builds up with each try. I am able to confirm this using the "pfiles <pid>" command.
    But it looks like we are closing the channels, selectors and sockets properly when the connection fails.
    So we are unable to find the code that causes the issue.
    We run this program on Solaris.
    Is there a tool to track down the code that leaks the file descriptors?
    Thanks.

    Don't close the selector - there is a selector leak. Just close the socket channel. As this is a client, you should then also call selector.selectNow() to make the close take final effect; otherwise there is also a socket leak.

  • How to determine which file descriptor opened my driver?

    Suppose a user process opens my driver twice. How does open() determine which file descriptor opened the device? In Linux, the kernel will pass a pointer to a structure which represents the open file descriptor. However, Solaris only passes the device number to open(), so I can only determine my device was opened, but not which file. I need this information because my driver needs to keep track of all file descriptors opened for the device.
    Thanks!
    -Darren

    I'm still at a loss why you need to know the file descriptor value (unless the app is sufficiently spaghettied that it has to query the driver to figure out what it opened with what). It's like asking what filename was used to open the device (which you can't get either). Since Solaris is based on a STREAMS framework, it would be bad for a driver to even think it has a direct mapping into user space. It would be the same as asking (using /bin/sh):
    prog3 4>&1 3>&1 2>&1 | prog2 | prog1
    and wanting to know from prog1 which descriptor prog3 wrote to. I don't see how Linux even does this properly, since any given open file can have multiple file descriptors (via dup).

  • File descriptor setting

    We are using iDS 5.1 SP2 running on Solaris 8. We have an iDAR with 2 LDAP servers behind it (1 master, 1 slave).
    We didn't set up the max connections for iDAR, which means unlimited connections are allowed. However, the Unix ulimit setting was 256, which is too low. I changed the setting under /etc/system and rebooted the machine. Now the ulimit setting is 4096 for both the hard limit and the soft limit. It looks good.
    However, whenever the total connections to iDAR approach 256, the fwd.log file shows "socket closed". The iDAR is still available, but the sockets are used up.
    I have been wondering why the new setting didn't take effect for iDAR.
    Can anybody help me or give me some clue?
    Thanks!

    Hi,
    Welcome to the Oracle forums :)
    User wrote:
    Hi,
    We are running Solaris 10 and have set up a project for the Oracle user id. When I run prctl for one of the running processes, I get the output below.
    process.max-file-descriptor
    basic 8.19K - deny 351158
    privileged 65.5K - deny -
    system 2.15G max deny -
    My question is: what is the limit for a process running under this project as far as the max-file-descriptor attribute is concerned? Will it be 8.19K, 65.5K or 2.15G? Also, what is the difference among the three? Please advise. Thanks.
    Kernel parameter process.max-file-descriptor: maximum file descriptor index. Oracle recommends *65536*. (Broadly, the basic value is the limit the process starts with, privileged is what the limit can be raised to with privilege, and system is the absolute ceiling the OS supports.)
    For more information on these settings please refer to the MOS tech note:
    *Kernel setup for Solaris 10 using project files. [ID 429191.1]*
    Hope this helps :)
    Regards,
    X A H E E R

  • Running Out Of FIle Descriptors "Too many open files"

    We have a 32-bit application (running on Solaris 8) that opens socket connections and also some files in read/write mode. The application works fine in the normal (low load) case.
    But it is failing under a stress environment. At some point under stress, when it tries opening a file, fopen gives me error code 24, which means "too many open files".
    From this it seems that the application is running out of file descriptors. I used the truss, pfiles and lsof utilities to see how many descriptors are currently opened by my application, and the number is around 900 (which is the expected figure for my application).
    I also set the ulimit (both hard and soft) to a larger number, but that didn't work either. Also, when I set the soft limit to 70000, the truss output shows:
    25412/1:     5.3264     sysconfig(_CONFIG_OPEN_FILES)               = 70000
    23123/1: 7.2926 close(69999) Err#9 EBADF
    23123/1: 7.2927 close(69998) Err#9 EBADF
    23123/1: 7.2928 close(69997) Err#9 EBADF
    23123/1: 7.2928 close(69996) Err#9 EBADF
    23123/1: 7.2929 close(69995) Err#9 EBADF
    23123/1: 7.2929 close(69994) Err#9 EBADF
    23123/1: 7.2930 close(69993) Err#9 EBADF
    This continues down to close(3) - the loop runs almost 70K times.
    I don't know why this output appears.
    Note: under a moderate stress environment, where only 400 file descriptors are opened, the application works fine.
    Can you please help me with this? Is this a file descriptor problem, or could there be another potential source?
    Is there any other way to increase the file descriptor limit?
    I also tried using LD_PRELOAD_32=/usr/lib/extendedFILE.so.1 but it gave me the following error while starting the application:
    "ld.so.1: ls: fatal: /usr/lib/extendedFILE.so.1: open failed: No such file or directory"
    Also I can't use Purify (for various reasons) to find file descriptor leakage (if any), and it is not possible to upgrade the system to Solaris 10.
    Thanks in advance.

    http://developers.sun.com/solaris/articles/stdio_256.html
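    (That article covers the 256-descriptor stdio limit discussed elsewhere on this page. Note also that /usr/lib/extendedFILE.so.1 ships with Solaris 10, not Solaris 8, which would explain the "open failed: No such file or directory" error above.)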

  • Getting file descriptor counts as a non-root user

    I have a number of scripts running on Solaris 8 and Solaris 10 systems that currently run as root in order to read file descriptor counts from various processes. Currently they do something like ls /proc/$PID/fd | wc -l to get a count of file descriptors for a given process $PID.
    These scripts need to be migrated to run as a non-root user. This means that my method for obtaining file descriptors will only work if the script owner and the process owner are the same - and that is not always the case.
    For Solaris 10, I can assign the privilege proc_owner to the script owner - this works fine.
    For Solaris 8 I'm stuck.
    Does anyone have any idea how I can read a file descriptor count from an arbitrary process as a non-root user on Solaris 8?
    Thanks,
    Nick

    For Solaris 8 I'm stuck. Does anyone have any idea how I can read a file descriptor count from an arbitrary process as a non-root user on Solaris 8?
    As I'm sure you suspect, there isn't a way to get around the all-privileges-or-none arrangement in Solaris 8. One workaround option, though, is the
    recently announced Solaris 8 Migration Assistant 1.0 which allows you to
    run a Solaris 8 container on Solaris 10 SPARC systems. A good collection
    of the relevant links are here:
    http://blogs.sun.com/dp/entry/solaris_8_migration_assistant_1
    With this option your Solaris 10 script process with the proc_owner
    privilege could also be run against the processes in the Solaris 8
    container.
    Hope this helps.
    Brent

  • Idle File Descriptors

    Hello:
    We recently upgraded from WLS 7.01 to WLS 8.1. One thing we have encountered is an effective crash of the WebLogic server, given that WebLogic appears not to be releasing idle TCP connections. Specifically, we are seeing file descriptors (using lsof -p) which remain idle for days, far exceeding any timeout settings on the Solaris machine. In addition, when all descriptors are used (the current limit is 1024), we receive a socket exception (java.net.SocketException: Too many open files).
    We have contacted BEA support and received a response of 'network configuration
    is the issue'. We are highly skeptical of the response given that we have changed
    nothing in our network configuration between 7.0 and 8.1. In addition, we recently
    interrogated some production environments running 7.0 and found a small number
    of idle file descriptors. Overall it appears that the idle file descriptor problem
    is amplified in 8.1. Has anyone else encountered this behavior?
    Finally we are not in a position to restart production instances on a periodic
    basis. We have service level agreements to achieve and kicking our customers
    out of the product on a routine basis is not even close to being acceptable.
    Any thoughts or ideas from the group would be sincerely appreciated.
    Brian

    Hi,
    Have you managed to sort this yet? We are getting the same error on HP-UX running BEA 6.1 - it would be good if you could email me a resolution if you found one.
    thanks
