File descriptor setting

We are using iDS 5.1 SP2 running on Solaris 8. We have iDAR with two LDAP servers behind it (one master, one slave).
We didn't set up a max connection limit for iDAR, which means unlimited connections are allowed. However, the Unix ulimit setting was 256, which is too low. I changed the setting in /etc/system and rebooted the machine. Now ulimit reports 4096 for both the hard limit and the soft limit, which looks good.
However, whenever the total number of connections to iDAR approaches 256, the fwd.log file shows "socket closed". iDAR is still available, but its sockets are used up.
I have been wondering why the new setting didn't take effect for iDAR.
Can anybody help me or give me a clue?
Thanks!
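
(A quick check worth doing here: verify what limit the running iDAR process actually inherited; the pid below is an example. If it still reports 256, restart iDAR so it picks up the raised limits.)
# show the per-process resource limits of the running proxy (Solaris proc tools)
plimit 4242 | grep nofiles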

Hi,
Welcome to Oracle forums :)
User wrote:
Hi,
We are running Solaris 10 and have set up a project for the Oracle user id. When I run prctl for one of the running processes, I get the output below.
process.max-file-descriptor
basic        8.19K   -     deny   351158
privileged   65.5K   -     deny   -
system       2.15G   max   deny   -
My question is: what is the limit for a process running under this project as far as the max-file-descriptor attribute is concerned? Will it be 8.19K, 65.5K, or 2.15G? Also, what is the difference among the three? Please advise. Thanks.
Kernel parameter process.max-file-descriptor: maximum file descriptor index. Oracle recommends *65536*. Of the three privilege levels, basic is the limit currently enforced on your process (the owner can raise it, but only up to the privileged value), privileged is the value that cannot be exceeded without privileges, and system is the absolute maximum the OS instance supports - so the effective limit here is 8.19K (8192) until the basic value is raised.
For more information on these settings please refer to MOS tech note:
*Kernel setup for Solaris 10 using project files. [ID 429191.1]*
Hope this helps :)
Regards,
X A H E E R
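
For the common case, that note boils down to raising the privileged value with projmod; a minimal sketch, assuming the project is named user.oracle (adjust to your setup):
# raise the cap for new processes started under the project
projmod -s -K 'process.max-file-descriptor=(privileged,65536,deny)' user.oracle
# verify from a fresh login belonging to that project
prctl -n process.max-file-descriptor -i process $$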

Similar Messages

  • Cannot reset max-file-descriptor?

    My /var/adm/messages is full of:
    Apr 17 12:30:27 srv1 genunix: [ID 883052 kern.notice] basic rctl process.max-file-descriptor (value 256) exceeded by process 6910
    Even though I have set process.max-file-descriptor to 4096 for all projects, which appears correct whenever I query any running process, i.e.:
    srv1 /var/adm # prctl -t basic -n process.max-file-descriptor -i process $$
    process: 2631: -ksh
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    process.max-file-descriptor
    basic 4.10K - deny 2631
    Any ideas...?
    Thanks!!

    Hi,
    Finally found the root cause.
    It was a mistake by the user. In one of his startup scripts (.profile) he runs the command (ulimit -n 1024), which sets both the soft and hard limits for file descriptors.
    This was the reason I was unable to increase the file descriptor limit beyond 1024.
    Thanks & Regards,
    -GnanaShekar-
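
    If a script really does need its own limit, lowering only the soft limit keeps the hard limit raisable later; a small sketch for ksh/bash (values are examples):
    ulimit -Sn 1024    # in .profile: lower only the soft limit
    ulimit -Hn         # the hard limit is untouched
    ulimit -Sn 4096    # so a process can still raise its soft limit back up later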

  • Set file descriptor limit for xinetd initiated process

    I am starting the amanda backup service on clients through xinetd, and we
    are hitting the open file limit, i.e. the file descriptor limit.
    I have set resource controls for the user, and I can see from the shell that
    the file descriptor limit has increased, but I have not figured out how to get
    the resource control change to apply to the daemon started by xinetd.
    The default of 256 file descriptors persists for the daemon; I need to increase
    that number.
    I have tried a wrapper script, clearly doing it incorrectly for Solaris 10/SMF
    services. That route didn't work, or is not as straightforward as it used to be.
    Is there a more direct way?
    Thanks - Brian

    Hi Brian,
    This appears with 32-bit applications. You have to use the enabler of the extended FILE facility, /usr/lib/extendedFILE.so.1:
    % ulimit -n
    256
    % echo 'rlim_fd_max/D' | mdb -k | awk '{ print $2 }'
    65536
    % ulimit -n 65536
    % ulimit -n
    65536
    % export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
    % ./your_32_bits_application
    Marco
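
    For the xinetd part of the question, one commonly suggested approach is a wrapper that raises the limit and then exec's the real daemon, so the daemon inherits it; a hypothetical sketch (the amandad path is an assumption):
    #!/bin/sh
    # raise the soft fd limit, then replace this shell with the real daemon
    ulimit -n 4096
    exec /usr/local/libexec/amandad "$@"
    Alternatively, plimit -n 4096,4096 <pid> changes the limit of an already-running process without a restart.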

  • Maximum number of file/socket descriptors set to 800

    Hi
    When we try to restart the opmn services on the Discoverer server, we get the error below.
    [disc@odhappstest bin]$ ./opmnctl startall
    opmnctl startall: starting opmn and all managed processes...
    ================================================================================
    opmn id=odhappstest:6701
    5 of 6 processes started.
    ias-instance id=asinst_1
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    ias-component/process-type/process-set:
    webcache1/WebCache-admin/WebCache-admin/
    Error
    --> Process (index=1,uid=1314995949,pid=25917)
    failed to start a managed process after the maximum retry limit
    Log:
    /disc/Oracle/Middleware/asinst_1/diagnostics/logs/WebCache/webcache1/console~WebCache-admin~1.log
    opmnctl status shows:
    [disc@odhappstest bin]$ opmnctl status
    Processes in Instance: asinst_1
    --------------------------------------------------------------+---------
    ias-component | process-type | pid | status
    --------------------------------------------------------------+---------
    emagent_asinst_1 | EMAGENT | 25577 | Alive
    Discoverer_asinst_1 | PreferenceServer | 25576 | Alive
    Discoverer_asinst_1 | ServicesStatus | 25579 | Alive
    webcache1 | WebCache-admin | N/A | Down
    webcache1 | WebCache | 25580 | Alive
    ohs1 | OHS | 25574 | Alive
    We checked the log file and found the following entries:
    Oracle Web Cache 11g (11.1.1.2), Build 11.1.1.2.0 091028.1147
    Maximum number of file/socket descriptors set to 800.
    Unable to allocate or access a shared memory segment of size 240 bytes. shmget(): Invalid argument
    The server process could not initialize.
    The server is exiting.
    Oracle Web Cache process of ID 16797 exits with code 1 at line 650 of file main.c [label: Build 11.1.1.2.0 091028.1147]
    We applied patch 9262845 to resolve this issue, according to the MOS document:
    11G WEBCACHEADMIN FAILS TO START WITH "Unable to allocate or access a shared memory segment" [ID 1057444.1]
    But even after applying this patch the issue is not resolved.
    Please help us to resolve this issue.
    Regards
    Shaik

    Hi Shaik,
    I believe it has something to do with the prerequisites. Maybe you need to edit the "hard nofile" entries in /etc/security/limits.conf.
    Please make sure you have completed all the prerequisites.
    Thanks
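
    For reference, such entries typically look like the following; the user name and values here are assumptions, not taken from the MOS note:
    # /etc/security/limits.conf (Linux/PAM) - takes effect at next login
    disc   soft   nofile   4096
    disc   hard   nofile   65536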

  • Problem with file descriptors not released by JMF

    Hi,
    I have a problem with file descriptors not released by JMF. My application opens a video file, creates a DataSource and a DataProcessor, and the generated video frames are transmitted using the RTP protocol. Once video transmission ends, if we stop and close the DataProcessor associated with the DataSource, the file descriptor identifying the video file is not released (checkable through /proc/pid/fd). If we repeat this processing again and again, the process reaches the maximum number of file descriptors allowed by the operating system.
    The same problem has been reproduced with JMF-2.1.1e-Linux in several environments:
    - Red Hat 7.3, Fedora Core 4
    - jdk1.5.0_04, j2re1.4.2, j2sdk1.4.2, Blackdown Java
    This is part of the source code:
    // video.avi with tracks audio(PCMU) and video(H263)
    String url = "video.avi";
    if ((ml = new MediaLocator(url)) == null) {
        Logger.log(ambito, refTrazas + "Cannot build media locator from: " + url);
        return 1;
    }
    try {
        // Create a DataSource given the media locator.
        Logger.log(ambito, refTrazas + "Creating JMF data source");
        try {
            ds = Manager.createDataSource(ml);
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Cannot create DataSource from: " + ml);
            return 1;
        }
        p = Manager.createProcessor(ds);
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Failed to create a processor from the given url: " + e);
        return 1;
    } // end try-catch
    p.addControllerListener(this);
    Logger.log(ambito, refTrazas + "Configure Processor.");
    // Put the Processor into the configured state.
    p.configure();
    if (!waitForState(p.Configured)) {
        Logger.log(ambito, refTrazas + "Failed to configure the processor.");
        p.close();
        p = null;
        return 1;
    }
    Logger.log(ambito, refTrazas + "Configured Processor OK.");
    // So I can use it as a player.
    p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.RAW_RTP));
    // videoTrack: track control for the video track
    DrawFrame draw = new DrawFrame(this);
    // Instantiate and set the frame access codec to the data flow path.
    try {
        Codec codec[] = {
            draw,
            new com.sun.media.codec.video.colorspace.JavaRGBToYUV(),
            new com.ibm.media.codec.video.h263.NativeEncoder()
        };
        videoTrack.setCodecChain(codec);
    } catch (UnsupportedPlugInException e) {
        Logger.log(ambito, refTrazas + "The processor does not support effects.");
    } // end try-catch CodecChain creation
    p.realize();
    if (!waitForState(p.Realized)) {
        Logger.log(ambito, refTrazas + "Failed to realize the processor.");
        return 1;
    }
    Logger.log(ambito, refTrazas + "Realized processor OK.");
    /* After realizing the processor: THESE LINES OF SOURCE CODE DO NOT RELEASE THE FILE DESCRIPTOR !!!!!
    p.stop();
    p.deallocate();
    p.close();
    return 0;
    */
    // It continues up to the end of the transmission, properly drawing each video frame and transmitting them.
    Logger.log(ambito, refTrazas + " Create Transmit.");
    try {
        int result = createTransmitter();
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Error Create Transmitter.");
        return 1;
    } // end try-catch transmitter
    Logger.log(ambito, refTrazas + "Start Processor.");
    // Start the processor.
    p.start();
    return 0;
    } // end of main code

    // stop when the "EndOfMediaEvent" event arrives
    public int stop() {
        try {
            /* THIS PIECE OF CODE AND VARIATIONS HAVE BEEN TESTED
               AND THE FILE DESCRIPTOR IS NEVER RELEASED */
            p.stop();
            p.deallocate();
            p.close();
            p = null;
            for (int i = 0; i < rtpMgrs.length; i++) {
                if (rtpMgrs[i] == null) continue;
                Logger.log(ambito, refTrazas + "removeTargets;");
                rtpMgrs[i].removeTargets("Session ended.");
                rtpMgrs[i].dispose();
                rtpMgrs[i] = null;
            }
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Error Stopping:" + e);
            return 1;
        }
        return 0;
    } // end of stop()

    // Controller Listener.
    public void controllerUpdate(ControllerEvent evt) {
        Logger.log(ambito, refTrazas + "\nControllerEvent." + evt.toString());
        if (evt instanceof ConfigureCompleteEvent ||
            evt instanceof RealizeCompleteEvent ||
            evt instanceof PrefetchCompleteEvent) {
            synchronized (waitSync) {
                stateTransitionOK = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof ResourceUnavailableEvent) {
            synchronized (waitSync) {
                stateTransitionOK = false;
                waitSync.notifyAll();
            }
        } else if (evt instanceof EndOfMediaEvent) {
            Logger.log(ambito, refTrazas + "\nEvent EndOfMediaEvent.");
            this.stop();
        } else if (evt instanceof ControllerClosedEvent) {
            Logger.log(ambito, refTrazas + "\nEvent ControllerClosedEvent");
            synchronized (waitSync) {
                close = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof StopByRequestEvent) {
            Logger.log(ambito, refTrazas + "\nEvent StopByRequestEvent");
            synchronized (waitSync) {
                stop = true;
                waitSync.notifyAll();
            }
        }
    }
    Many thanks.

    It's a bug in H263; if you test it without the h263 track, or with another video codec, the release works fine.
    You can try a non-Sun h263 codec such as the one from the fobs or jffmpeg projects.
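
    To confirm the leak (or a fix), you can watch the process's descriptor count directly; a small sketch (the pgrep pattern is an example and assumes a single matching JVM):
    # print the number of open descriptors for the JVM every 5 seconds
    while true; do ls /proc/$(pgrep -f YourJmfApp)/fd | wc -l; sleep 5; done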

  • No of file descriptors in solaris 10

    hi,
    I had an open-files issue and updated the number of file descriptors with the following command (using zones on Solaris 10 running on SPARC):
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME
    I wanted to check: is there any way to know whether the new number of file descriptors has come into effect? And is it also possible to check how many files are currently open, just to make sure I am not reaching the limits?
    Thank you
    Jonu Joy

    Thank you Alan.
    Even after setting the max file descriptor to 8192, the output from pfiles still shows 4096:
    Current rlimit: 4096 file descriptors
    Would you know if there is something wrong with the command I am using - projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME (I am issuing this command as root)?
    thank you
    Jonu Joy
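
    One thing worth checking: projmod only changes the project database, so processes already running keep their old limit until restarted under the project. A small sketch for verifying (pid 1234 is an example):
    # limit actually in force for a running process
    prctl -n process.max-file-descriptor -i process 1234
    # number of descriptors it currently has open
    ls /proc/1234/fd | wc -l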

  • Help tracking down a file descriptor leak under java 6

    I have a large application I work on that runs fine under Java 5 (apart from possibly the latest update), but running it under Java 6 results in file descriptors used for TCP sockets being leaked.
    I'm testing this under FreeBSD 6 (both i386 and amd64) using the diablo JDK and a port build of jdk-1.6.0.3p3, but I have had reports from other users of exactly the same issue under various Linux distributions. There are some reports that going back as far as 1.6.0b5 will resolve the issue, but no later version works, and a few reports that the latest 1.5 updates have the same issue.
    This application is using standard IO, so Socket/ServerSocket and occasionally SSLSocket; no NIO is involved. Under the problem JDKs it will run for a while before available FDs are exhausted and then fall over with a "too many open files" exception. So far I have been unable to recreate the situation in a simple testcase, and the fact it works fine under earlier JDKs is really causing me issues with deciding where to look for the problem.
    Using lsof to watch the FDs that are leaked I see a steadily increasing number shown in the following state:
    java 23438 djb 54u IPv4 0xffffff0091ad02f8 0t0 TCP *:* (CLOSED)
    java 23438 djb 55u IPv4 0xffffff0105aa45f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 56u IPv4 0xffffff01260c15f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 57u IPv4 0xffffff012a2ae8e8 0t0 TCP *:* (CLOSED)
    If these were showing as say (CLOSE_WAIT) then I would understand where they are coming from but as far as I understand the above means the socket has been fully closed but the FD simply hasn't been released. I'm not an expert on the TCP protocol however so I may be wrong here.
    I did try making the application set SoLinger(0,true) on all sockets which of course made all connecting clients think the connection was aborted rather than gracefully closed but even with this setting the FD leak persisted.
    I've gone as far as looking at what I think are the relevant parts of the src for both JDK versions I am using but there are very few changes and nothing that obviously looks linked.
    I'm fully prepared to spend a lot of time looking into this and I'm sure I'd eventually find the cause but if anyone here already knows what the answer may be or can simply give me a nudge in the best direction to look I would be very grateful.

    After weeks of dancing around the issue, we narrowed it down to garbage collection. If we run System.gc() periodically, file descriptors get collected properly. I've tried playing with the settings by using -XX:+UseConcMarkSweepGC, which seems to help a great deal while the system is under stress. However, when there is light activity the file descriptors grow again and eventually bring everything down.
    Any clues? Is there any way to make the GC perform full collections more often?
    Please help!!!
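
    One knob sometimes used for exactly this situation (descriptors that are only reclaimed when their socket objects are collected) is to make CMS start its cycles at a fixed old-generation occupancy instead of waiting for memory pressure; a hedged sketch, with illustrative values:
    # start CMS once the old generation is 50% full, regardless of load
    java -XX:+UseConcMarkSweepGC \
         -XX:CMSInitiatingOccupancyFraction=50 \
         -XX:+UseCMSInitiatingOccupancyOnly \
         -jar yourapp.jar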

  • Running Out Of FIle Descriptors "Too many open files"

    We have a 32-bit application (running on Solaris 8) that opens socket connections and also some files in read/write mode. The application works fine in the normal (low load) case.
    But it is failing under a stress environment. At some point under stress, when it tries opening a file, fopen() gives error code 24, which means "too many files opened".
    From this it seems that the application is running out of file descriptors. I used the truss, pfiles and lsof utilities to see how many descriptors are currently opened by my application, and the number is around 900 (which is the expected figure for my application).
    I also set the ulimit (both hard and soft) to a larger number, but that didn't work either. Also, when I set the soft limit to 70000, the truss output shows:
    25412/1:     5.3264     sysconfig(_CONFIG_OPEN_FILES)               = 70000
    23123/1: 7.2926 close(69999) Err#9 EBADF
    23123/1: 7.2927 close(69998) Err#9 EBADF
    23123/1: 7.2928 close(69997) Err#9 EBADF
    23123/1: 7.2928 close(69996) Err#9 EBADF
    23123/1: 7.2929 close(69995) Err#9 EBADF
    23123/1: 7.2929 close(69994) Err#9 EBADF
    23123/1: 7.2930 close(69993) Err#9 EBADF
    This goes down to close(3) - the loop runs almost 70K times.
    I don't know why the output looks like that.
    Note: under a moderate stress environment where only 400 file descriptors are opened, the application works fine.
    Can you please help me with this? Is it a file descriptor problem, or could there be another potential source of the problem?
    Is there any other way to increase the file descriptor limit?
    I also tried using LD_PRELOAD_32=/usr/lib/extendedFILE.so.1 but it gave the following error while starting the application:
    "ld.so.1: ls: fatal: /usr/lib/extendedFILE.so.1: open failed: No such file or directory"
    Also I can't use Purify (for certain reasons) to find file descriptor leaks (if any), and it is not possible to upgrade the system to Solaris 10.
    Thanks in advance.

    http://developers.sun.com/solaris/articles/stdio_256.html
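
    The linked article describes the 32-bit stdio limit discussed here. Note that /usr/lib/extendedFILE.so.1 shipped with Solaris 10, which is presumably why the preload fails with "open failed" on this Solaris 8 box; on Solaris 10 the workaround looks like:
    # Solaris 10 only - lift the 256-FILE stdio limit for a 32-bit binary
    ulimit -n 65536
    LD_PRELOAD_32=/usr/lib/extendedFILE.so.1 ./your_app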

  • Max number of file descriptors in 32 vs 64 bit compilation

    Hi,
    I compiled a simple C app (with the Solaris CC compiler) that attempts to open 10000 file descriptors using fopen(). It runs just fine when compiled in 64-bit mode (after previously setting 'ulimit -S -n 10000').
    However, when I compile it in 32-bit mode it fails to open more than 253 files. A call to system("ulimit -a") shows "nofiles (descriptors) 10000".
    Did anybody ever see similar problem before?
    Thanks in advance,
    Mikhail

    On 32-bit Solaris, the stdio FILE struct stores the file descriptor (an integer) in an 8-bit field. With 3 files opened automatically at program start (stdin, stdout, stderr), that leaves 253 available file descriptors.
    This limitation stems from early versions of Unix and Solaris, and must be maintained to allow old binaries to continue to work. That is, the layout of the FILE struct is wired into old programs, and thus cannot be changed.
    When 64-bit Solaris was introduced, there was no compatibility issue, since there were no old 64-bit binaries. The limit of 256 file descriptors in stdio was removed by making the field larger. In addition, the layout of the FILE struct is hidden from user programs, so that future changes are possible, should any become necessary.
    To work around the limit, you can play some games with dup() and closing the original descriptor to make it available for use with a new file, or you can arrange to have fewer than the max number of files open at one time.
    A new interface for stdio is being implemented to allow a large number of files to be open at one time. I don't know when it will be available or for which versions of Solaris.
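
    A quick way to see the difference described above; the -m32/-m64 flags are from newer Sun Studio releases (older ones used -xarch), and the source file name is an example:
    ulimit -S -n 10000
    cc -m32 openmany.c -o openmany32 && ./openmany32   # fopen() stops at ~253 FILEs
    cc -m64 openmany.c -o openmany64 && ./openmany64   # reaches the full ulimit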

  • Overcoming file descriptor limitation?

    Hello,
    I am developing a server which should be able to handle more than 65535 concurrent connections. I have it implemented in Java, but I am hitting a limit on file descriptors. Since there is no fork() call in Java, I can't figure out what to do.
    The server is basically a kind of HTTP proxy, and a connection often waits for an upstream HTTP server to handle it (which can take some time, during which I need to leave the socket open). I made a simple hack which helped: I used LD_PRELOAD to catch the bind() library call and set the Linux socket option TCP_DEFER_ACCEPT:
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_DEFER_ACCEPT, (char *)&val, sizeof(int)) < 0) ...
    This tells the kernel to accept() a connection only when there is some data on it, which helps a little (sockets waiting for the handshake and the request to arrive don't have to consume a file descriptor). Any other hints? Should I somehow convince Java to fork()? Or should I switch to a 64-bit kernel and a 64-bit Java implementation?
    I can quite easily switch to Solaris if that would help.
    Any pointers/solutions appreciated.
    Juraj.
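
    For what it's worth, the per-process descriptor limit is independent of the 16-bit port range, and on Linux it can be raised well past 65535; a sketch with illustrative values (run as root):
    # system-wide ceiling, then the per-process hard and soft limits
    sysctl -w fs.file-max=400000
    ulimit -Hn 200000
    ulimit -Sn 200000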

    You can use dbms_lob functions to access CLOBS, so changing the datatype may not be as problematic as you think.
    Also, in PL/SQL the VARCHAR2 limit is 32767, not 4000, so if you're accessing the table via a stored procedure you can change the column datatype to CLOB. Provided the data is less than 32767 in length you can use a PL/SQL variable to manipulate it.
    Incidentally, use CLOB not LONG for future compatibility.

  • [SOLVED] gpgme error: Bad file descriptor

    I tried to install a new system today. I chrooted into it and wanted to install some packages, but all I got was this error instead:
    error: GPGME Error: bad file descriptor
    error: <package>: missing required signature
    And that for each and every single package...
    I ran pacman-key --init and --populate, set SigLevel to Optional in pacman.conf, did pacman -Syu, and everything else you can think of - still got the same error.
    Last edited by DeatzoSeol (2012-06-17 22:55:10)

    ... which I have. Still the same.
    OMG! I re-checked and found out there was no procfs in my chroot. Silly me - it works now... what did I learn today? Double-check your chroot.
    Thanks
    Last edited by DeatzoSeol (2012-06-17 22:55:46)
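
    The usual way to avoid this; paths are examples for a new system mounted at /mnt:
    # give the chroot the kernel filesystems pacman/gpgme expect
    mount -t proc proc /mnt/proc
    mount --rbind /sys /mnt/sys
    mount --rbind /dev /mnt/dev
    chroot /mnt /bin/bash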

  • File descriptor

    Hi,
    I am a newbie at this. Currently I face a problem where my application server (Sun Application Server 7 running on Solaris 8) has been throwing the following error:
    Pr_proc_desc_table_full_error: file descriptor.
    This has made the application very unstable. The Sun App Server 7 documentation online says there is a need to set the rlim_fd_max value to 4086; the current value is 1024.
    I need advice on whether this value should be changed to something higher (4086 or more), and whether this setting is made at the Sun OS level. Or do you have any other suggestion or concern? Please help.
    Thanks

    Yes, you need to increase the number of file descriptors available to the process. For Solaris 8, in /etc/system say:
    set rlim_fd_max=4096
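
    A sketch of the full round trip; raising rlim_fd_cur alongside is a common addition, not something from the reply above:
    # append to /etc/system, then reboot
    set rlim_fd_max=4096
    set rlim_fd_cur=1024
    # after the reboot, confirm the new hard limit
    ulimit -Hn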

  • Stdio - file descriptor limits - Solaris 10

    Hi
    New to Solaris from HP-UX; we are porting an application.
    I have run into a problem whereby we run out of file descriptors - the application is TCP/IP and file I/O intensive. Sometimes it happens, sometimes not.
    It manifests itself as an error when calling setsockopt.
    Increasing the file limits in /etc/system has not relieved the problem.
    A Google search suggests there is a hard limit of 255 on file descriptors for 32-bit applications using stdio - does this still apply? Any workarounds?
    Specs:
    Solaris 10 01/06
    SunOS saturn 5.10 Generic_118822-25 sun4u sparc SUNW,Sun-Fire-v240
    Thanks in advance.

    What shell do you start the application from?
    If you use sh/bash/ksh, type "ulimit -a" to see what limits the process will inherit from the shell. If the value for 'open files' is very low, you can increase it with:
    ulimit -n <new value>
    for example:
    ulimit -n 4096
    If you are using tcsh/csh, type "limit -h" to view the limits.
    The values you set in /etc/system are the maximum allowed number of file descriptors per process. That means a process is allowed to raise its own limit of open files up to that value, but it does not mean the process gets the highest limit automatically.
    See also:
    man ulimit
    man setrlimit
    7/M.

  • File Descriptor Limit

    1. We can change the limits by setting the values rlim_fd_cur & rlim_fd_max in the /etc/system file.
    2. There is some documentation that states that the max should never exceed 1024.
    3. Questions:
    a. For Solaris 8, can we ever set the max to be > 1024?
    b. If we can, is there another ceiling?
    c. Can we redefine FD_SETSIZE in the app that wants to use select() with fds > 1024? Is there any mechanism to do a select() on FDs > 1023?
    4. If the process is running as root, does it still have a limit on FDs? Can it then raise it using setrlimit()?
    Thanks
    Aman

    The hard limit is 1024 for the number of descriptors. The man page for limit(1) says that root can change the hard limits, but if you raise the limit for fds above 1024 you may encounter kernel performance problems or even failure conditions. The number is a recommendation, empirical and based on what a selection of processors and memory models can tolerate. You might get more expert info by cross-posting this question to the solaris-OS/kernel forum. Raising the hard limit might be possible, but I cannot speak to the risks with much direct knowledge.
    You might want to examine the design of an app that needs to have more than 1024 files open at once; maybe there is an alternative design that allows you to close more file descriptors.
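
    For reference, the /etc/system entries from point 1 look like this (values are illustrative; the 1024 caveat above applies to select()-based apps - poll(2) has no FD_SETSIZE ceiling):
    # /etc/system - per-process file descriptor limits (reboot to apply)
    set rlim_fd_cur=1024
    set rlim_fd_max=8192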
