Change 'Soft Resource Limit file descriptor' value

Hi,
How do I change the 'Soft Resource Limit file descriptor' value in Solaris 10?
Are there any conditions for changing it?
Ashraf.

There are 2 values: the default (soft) and the maximum (hard).
A process starts at the default and can raise the limit up to the maximum.
If you want to change the value persistently and machine-wide, you can put a config option
in /etc/system. Something like
set rlim_fd_cur = 8129
set rlim_fd_max = 8129
If you want to do it for your account only, put a 'ulimit -n' command into your profile instead.
But you'll still only be able to increase it up to the max.
The max can only be increased by root or in /etc/system.
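For the per-account case, a minimal sketch of what the profile lines could look like (the 4096 value is illustrative, not from the original post):
ulimit -Hn          # show the hard limit (the ceiling)
ulimit -n 4096      # raise the soft limit for this session, up to that ceiling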

Similar Messages

  • Genunix: basic rctl process.max-file-descriptor (value 256) exceeded

    Hi,
    I am getting the following error on my console rapidly.
    I am using a Sun SPARC server running Solaris 10. We started getting this error
    suddenly after a restart of the server, and it is continuously rolling on the console...
    The error:
    Rebooting with command: boot
    Boot device: disk0 File and args:
    SunOS Release 5.10 Version Generic_118822-25 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Failed to send email alert for recent event.
    SC Alert: Failed to send email alert for recent event.
    Hostname: nitwebsun01
    NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk
    NOTICE: VxVM vxdmp V-5-3-1700 dmpnode 287/0x0 has migrated from enclosure FAKE_ENCLR_SNO to enclosure DISKS
    checking ufs filesystems
    /dev/rdsk/c1t0d0s4: is logging.
    /dev/rdsk/c1t0d0s7: is logging.
    nitwebsun01 console login: Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 439
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 467
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:44 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:49 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 503
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 510
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 516
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 540
    Nov 20 14:56:53 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 549
    Nov 20 14:56:53 nitwebsun01 last message repeated 4 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 665
    Nov 20 14:56:56 nitwebsun01 last message repeated 6 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 667
    Nov 20 14:56:56 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:57 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 868
    Nov 20 14:56:57 nitwebsun01 /usr/lib/snmp/snmpdx: unable to get my IP address: gethostbyname(nitwebsun01) failed [h_errno: host not found(1)]
    Nov 20 14:56:58 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 887
    Nov 20 14:57:00 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 976
    nitwebsun01 console login: root
    Nov 20 14:57:00 nitwebsun01 last message repeated 2 times
    I have attached my /etc/project file as well:
    [root@nitwebsun01 /]$ cat /etc/project
    system:0::::
    user.root:1::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    noproject:2::::
    default:3::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    group.staff:10::::
    [root@nitwebsun01 /]$
    Please help me get out of this issue.
    Regards
    Suseendran .A

    This is an old post, but I'm going to reply to it for future reference.
    Please ignore the first reply to this thread... by default /etc/rctladm.conf doesn't exist, and you should never use it. Just put it out of your mind.
    So, then... by default, a process can have no more than 256 file descriptors open at any given time. The likelihood that you'll have a program using more than 256 files is very low... but each network socket counts as a file descriptor, so many network services will exceed this limit quickly. The 256 limit is stupid, but it is a standard, and as such Solaris adheres to it. To look at the open file descriptors of a given process, use "pfiles <pid>".
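    For instance, a rough one-liner to count a process's open descriptors (the PID is hypothetical):
    pfiles 1234 | grep -c 'S_IF'    # each open-fd line in pfiles output carries an S_IFxxx type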
    So, to change it you have several options:
    1) You can tune the default threshold on the number of descriptors by specifying a new default threshold in /etc/system:
    set rlim_fd_cur=1024
    2) On the shell you can view your limit using 'ulimit -n' (use 'ulimit -a' to see all your limit thresholds). You can set it higher for this session by supplying a value, for example 'ulimit -n 1024', then start your program. You might also put this command in a startup script before starting your program.
    3) The "right" way to do this is to use a Solaris RCTL (resource control) defined in /etc/project. Say you want to give the "oracle" user 8152 fd's... you can add the following to /etc/project:
    user.oracle:101::::process.max-file-descriptor=(priv,8152,deny)
    Now log out the Oracle user, then log back in and start it up.
    You can view the limit on a process like so:
    prctl -n process.max-file-descriptor -i process <pid>
    In that output you may see 3 lines: one for "basic", one for "privileged" and one for "system". System is the max possible. Privileged is the limit that you need special privileges to raise. Basic is the limit that you, as any user, can increase yourself (such as by using 'ulimit' as we did above). If you define a custom "privileged" rctl like we did above in /etc/project, it replaces the default "basic" limit, which is 256.
    For reference, if you need to increase the threshold of a daemon that you cannot restart, you can do this "hot" by using the 'prctl' program like so:
    prctl -t basic -n process.max-file-descriptor -x -i process <PID>
    The above just removes the "basic" resource control (limit) from the running process. Do that, then check the process a minute later with 'pfiles' to see that it's now using more FDs.
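    As an aside, rather than hand-editing /etc/project you can have the project tools write the entry for you; a sketch under the same assumptions as the example above (project name, id, and value taken from it):
    projadd -p 101 user.oracle                                                 # only if the project does not exist yet
    projmod -s -K 'process.max-file-descriptor=(priv,8152,deny)' user.oracle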
    Enjoy.
    benr.

  • How to change Resource Bundle file data from a Java class

    Hi
    I have used the code below to access a ResourceBundle from a Java class, but I also want to change a particular key and its value.
    Please let me know what code I should use to change a key and its value in the ResourceBundle file and save it back to the file.
    package test;

    import java.util.Enumeration;
    import java.util.ResourceBundle;

    public class ResourceBundleTest {
        public static void main(String[] args) {
            ResourceBundle rb = ResourceBundle.getBundle("test.bundletest.mybundle");
            Enumeration<String> keys = rb.getKeys();
            while (keys.hasMoreElements()) {
                String key = keys.nextElement();
                String value = rb.getString(key);
                System.out.println(key + ": " + value);
            }
        }
    }
    Thanks

    With further debugging, I noticed the following line only works in integrated WLS, but not in standalone WLS:
    resourceBundle = ResourceBundle.getBundle("com.myapp.MyMappings");
    I confirmed the corresponding properties file was included properly in the EAR file, but the standalone WLS failed to find the properties file at runtime.
    Why did the standalone WLS class loader (it must be the same as the integrated WLS one) fail to find the properties file deployed under the WEB-INF/classes path in the EAR file?
    The above line was in a POJO class which has the same classpath as the properties file, i.e. com.myapp.MappingManager.class.
    It is strange that the class loader could load the POJO class but was unable to find com.myapp.MyMappings.properties on the same classpath!
    Is this a bug in standalone WLS?
    Edited by: Pricilla on May 26, 2010 8:52 AM
    Edited by: Pricilla on May 26, 2010 9:01 AM

  • To change the approval limit for Extended attribute PO Value limits

    Hi,
    We are on SRM 4.0. In PPOMA_BBP, for a local purchasing organisation, we want to change the approval limit, let us say from 100 to 50, but the system presents this field as non-editable (in the Standard values subscreen).
    In the Extended attributes tab of the organisation structure, after selecting the PO Value limits in SRM 4.0, we see two subscreens:
    a) Standard values
    b) Local values
    We want to change the approval limit in the Standard values subscreen from 1000 to 500, which the system presents as non-editable.
    Regards
    Vivek

    Hi
    Refer to these links.
    http://help.sap.com/saphelp_srm50/helpdata/en/5a/af5efc85d011d2b42d006094b92d37/frameset.htm
    http://help.sap.com/saphelp_srm50/helpdata/en/8b/4fa9585db211d2b404006094b92d37/content.htm
    http://help.sap.com/saphelp_srm50/helpdata/en/fa/fa7d3cb7f58910e10000000a114084/frameset.htm
    http://help.sap.com/saphelp_srm50/helpdata/en/7f/0fea380bfe6161e10000000a11402f/frameset.htm
    In case it does not help, please let me know.
    Regards
    - Atul

  • Cannot reset max-file-descriptor?

    My /var/adm/messages is full of:
    Apr 17 12:30:27 srv1 genunix: [ID 883052 kern.notice] basic rctl process.max-file-descriptor (value 256) exceeded by process 6910
    Even though I have set process.max-file-descriptor to 4096 for all projects, and it appears correct whenever I query any running process, e.g.:
    srv1 /var/adm # prctl -t basic -n process.max-file-descriptor -i process $$
    process: 2631: -ksh
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    process.max-file-descriptor
    basic 4.10K - deny 2631
    Any ideas...?
    Thanks!!

    Hi,
    Finally found the root cause.
    It was the user's mistake. In one of his startup scripts (.profile) he runs the command 'ulimit -n 1024', which sets both the soft and hard limits for file descriptors.
    This was the reason I was unable to increase the file descriptor limit beyond 1024.
    Thanks & Regards,
    -GnanaShekar-
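    A quick way to confirm this kind of problem, as a hedged sketch (the path is illustrative):
    ulimit -Sn                    # the soft limit the shell actually got
    ulimit -Hn                    # the hard limit it is capped at
    grep -n ulimit ~/.profile     # hunt for stray ulimit calls in login scripts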

  • [SOLVED] Problems changing the max open files attribute (ulimit)

    I'm trying to change the max open files by
    putting this line into /etc/security/limits.conf:
    <username> hard nofile 4096
    but nothing changes ('ulimit -n' still reports 1024).
    I did try to log out and log back in and reboot the
    system, but no go.
    P.S. I also tried adding 'ulimit -n 4096' to ~/.bashrc,
    but then I get:
    bash: ulimit: open files: cannot modify limit: Operation not permitted
    So should I add it to the global profile file
    (/etc/profile, right?) or to /root/.bashrc, or what?
    Last edited by Rehto (2009-04-06 12:08:32)

    Rehto wrote:
    I'm trying to change the max open files by
    putting this line into /etc/security/limits.conf:
    <username> hard nofile 4096
    but nothing changes ('ulimit -n' still reports 1024).
    I did try to log out and log back in and reboot the
    system, but no go.
    Set the soft limit as well, something like:
    <your_name> soft nofile 4096
    <your_name> hard nofile 4096
    'ulimit' reports the soft resource limit; the hard limit is what kills the application if it goes mad.
    vlad
    ps: log out, then log in again...
    Last edited by DonVla (2009-04-06 11:25:56)
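    One extra thing worth checking, as an assumption on my part rather than something from this thread: /etc/security/limits.conf is only honored when pam_limits is enabled for the login path, e.g. a line like this in /etc/pam.d/login:
    session    required    pam_limits.so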

  • How to determine which file descriptor opened my driver?

    Suppose a user process opens my driver twice. How can my open() routine determine which file descriptor opened the device? In Linux, the kernel passes a pointer to a structure representing the open file descriptor. However, Solaris only passes the device number to open(), so I can tell that my device was opened, but not by which file. I need this information because my driver needs to keep track of all file descriptors opened for the device.
    Thanks!
    -Darren

    I'm still at a loss why you need to know the file descriptor value (unless the app is sufficiently spaghettied that it has to query the driver to figure out what it opened with what). It's like asking what filename was used to open the device (which you can't get either). Since Solaris is based on a STREAMS framework, it would be bad for drivers to even think they have a direct mapping into user space. It would be the same as asking (using /bin/sh):
    prog3 4>&1 3>&1 2>&1 | prog2 | prog1
    and wanting to know from prog1 which descriptor prog3 wrote to. I don't see how Linux even does this properly, since any given open file can have multiple file descriptors (via dup).

  • Set file descriptor limit for xinetd-initiated process

    I am starting the amanda backup service on clients through xinetd, and we
    are hitting the open file limit, i.e. the file descriptor limit.
    I have set resource controls for the user, and I can see from the shell that
    the file descriptor limit has increased, but I have not figured out how to get
    the resource control change to apply to the daemon started by xinetd.
    The default of 256 file descriptors persists for the daemon; I need to increase
    that number.
    I have tried a wrapper script, clearly doing it incorrectly for Solaris 10/SMF
    services. That route didn't work, or is not as straightforward as it used to be.
    Is there a more direct way?
    Thanks - Brian

    Hi Brian,
    This appears with 32-bit applications. You have to use the enabler of the extended FILE facility, /usr/lib/extendedFILE.so.1:
    % ulimit -n
    256
    % echo 'rlim_fd_max/D' | mdb -k | awk '{ print $2 }'
    65536
    % ulimit -n 65536
    % ulimit -n
    65536
    % export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
    % ./your_32_bit_application
    Marco
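    If restarting the daemon is not an option, another hedged possibility (echoing benr's reply earlier on this page) is to raise the basic rctl on the already-running process; the PID and value are illustrative:
    prctl -n process.max-file-descriptor -r -v 4096 -t basic -i process 1234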

  • File Descriptor Limit

    1. We can change the limits by setting the rlim_fd_cur and rlim_fd_max values in the /etc/system file.
    2. There is some documentation that states that the max should never exceed 1024.
    3. Questions:
    a. For Solaris 8, can we ever set the max to be > 1024?
    b. If we can, is there another ceiling?
    c. Can we redefine FD_SETSIZE in the app that wants to use select() with fds > 1024? Is there any mechanism to do a select() on FDs > 1023?
    4. If the process is running as root, does it still have a limit on FDs? Can it then raise it using setrlimit()?
    Thanks
    Aman

    The hard limit is 1024 for the number of descriptors.
    The man page for limit(1) says that root can change the hard limits, but if you raise the limit for fds above 1024 you may encounter kernel performance problems or even failure conditions. The number is an empirical recommendation based on what a selection of processors and memory models can tolerate. You might get more expert info by cross-posting this question to the Solaris OS/kernel forum. Raising the hard limit might be possible, but I cannot speak to the risks with much direct knowledge.
    You might also want to examine the design of an app that needs more than 1024 files open at once; maybe there is an alternative design that allows you to close more file descriptors.
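    Regarding question 4: root is still subject to the limits, but can raise the hard limit itself. A hedged sketch from a root ksh shell (values illustrative; option syntax may vary by shell):
    ulimit -Hn         # show the current hard limit
    ulimit -Hn 4096    # as root, raise the hard limit
    ulimit -Sn 4096    # then raise the soft limit up to it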

  • How can we limit the number of concurrent calls to a WCF service without using the Singleton pattern or changing the BizTalk configuration file?

    How can we send only one message to a WCF service at a time? How can we limit the number of concurrent calls to a WCF service without using the Singleton pattern or changing the BizTalk configuration file? Can we do it with host throttling?

    Hi Pawan,
    You need to use the WCF-Custom adapter and add the ServiceThrottlingBehavior service behavior to the WCF-Custom locations.
    ServiceThrottlingBehavior.MaxConcurrentCalls specifies the maximum number of messages actively processing across a ServiceHost object. Each channel can have one pending message that does not count against the value of MaxConcurrentCalls until WCF begins to process it.
    Follow MSDN-
    http://msdn.microsoft.com/en-us/library/ee377035%28BTS.10%29.aspx
    http://msdn.microsoft.com/en-us/library/system.servicemodel.description.servicethrottlingbehavior.maxconcurrentcalls.aspx
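    For reference, a rough sketch of the resulting service behavior XML on the WCF-Custom adapter (attribute values are illustrative; maxConcurrentCalls="1" serializes processing to one message at a time):
    <serviceThrottling maxConcurrentCalls="1"
                       maxConcurrentSessions="1"
                       maxConcurrentInstances="1" />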
    I hope this helps.
    Rachit

  • Changing process.max-file-descriptor in a non-global zone

    Hello folks,
    I have a non-global zone.
    I wanted to change process.max-file-descriptor to 8192, so I issued the command below:
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' default
    I have rebooted the zone; after the reboot the system is not showing the value as 8192.
    Can someone help me find out what I missed?

    # id -p
    uid=0(root) gid=0(root) projid=1(user.root)
    # prctl -P $$ | grep file
    process.max-file-descriptor basic 256 - deny 19452
    process.max-file-descriptor privileged 65536 - deny -
    process.max-file-descriptor system 2147483647 max deny -
    process.max-file-size privileged 9223372036854775807 max deny,signal=XFSZ -
    process.max-file-size system 9223372036854775807 max deny -
    # ulimit -n
    256
    # cat /etc/project | grep file
    default:3::::process.max-file-descriptor=(basic,8192,deny)
    #
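    A likely cause, judging from the id -p output above: the root login runs under the user.root project (projid 1), not under default, so an rctl added to the default project never applies to that shell. A hedged sketch of the fix (value from the original post):
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' user.root
    # log out and back in, then verify:
    prctl -n process.max-file-descriptor -i process $$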

  • How to increase the per-process file descriptor limit for more than 15 JDBC connections

    If I need more than 15 JDBC connections, the only solution is to increase the per-process file descriptor limit. But how do I increase this limit? Do I modify the Oracle server or the JDBC software?
    I'm using the JDBC thin driver to connect to an Oracle 806 server.
    From the JDBC FAQ:
    Is there any limit on the number of connections for JDBC?
    No. JDBC drivers as such don't have any scalability restrictions by themselves.
    It may be restricted by the number of 'processes' (in the init.ora file) on the server. However, nowadays we do get questions that even when the number of processes is 30, we are not able to open more than 16 active JDBC-OCI connections when the JDK is running in the default (green) thread model. This is because the per-process file descriptor limit is exceeded. It is important to note that, depending on whether you are using OCI or thin, and green vs. native threads, a JDBC SQL connection can consume anywhere from 1 to 4 file descriptors. The solution is to increase the per-process file descriptor limit.

    Maybe it is an OS issue, but the suggested solution is from the Oracle documentation. However, it does not provide a clear enough solution; it just states "The solution is to increase the per-process file descriptor limit."
    Now I know the solution, but not how to apply it...
    Please help.
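    On Solaris the usual approach is to raise the soft limit in the shell that launches the JVM; a minimal sketch (the value and class name are hypothetical):
    ulimit -n 1024    # raise the soft fd limit for this shell, up to the hard limit
    java MyJdbcApp    # the JVM inherits the raised limit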

  • Want to increase the file descriptors

    Hi,
    I am trying to increase the maximum number of file descriptors allowed in Solaris.
    I changed the ulimit soft value to the hard limit value (65536) as root. Even ulimit -a shows the changed value for the soft limit.
    When I run my test program to find the value of sysconf(_SC_OPEN_MAX), it shows the changed value. But when I try to open more than 253 files, it fails. How do I increase this? Also, when I change ulimit -n to 65536, why is the maximum number of open files not increased?
    Looking forward to your help.
    Thanks in advance
    -A

    The simplest workaround is to compile as a 64-bit executable:
    -m64 in gcc. I don't remember offhand what the option is for Sun CC.
    The difficulty is that any non-system libraries you're using will also need to be recompiled to be 64-bit.
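    A hedged sketch of the build commands (the Sun compiler flag is an assumption: -m64 exists in Sun Studio 12 and later; older releases used -xarch=v9):
    gcc -m64 -o myapp myapp.c    # 64-bit build avoids the 256-fd stdio ceiling
    cc -m64 -o myapp myapp.c     # Sun Studio 12 and later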

  • Accept() needs more file descriptors

    My application server uses multithreading to deal with highly concurrent socket requests. When it accept()s a request, it gets an FD and creates a thread to handle it; the thread closes the FD after finishing the processing and exits.
    My question is: when there are threads concurrently handling 56~57 FDs, accept() can't get a new FD (errno 24, EMFILE). I know FDs are limited per process, and I can try forking subprocesses to reach higher concurrency.
    But I wonder, isn't there any other good method to solve the problem? How can a web server reach high concurrency?
    Any suggestion is appreciated!
    Jenny

    Hi Jenny,
    First of all, you did not say which release of Solaris you are using, but I'll assume you are on a version later than 2.4.
    You are correct when you say that the number of file descriptors that can be opened is a per-process limit. Depending on the OS version the default value for this limit changes, but there are simple ways to increase it.
    First of all, there are two types of limits: a hard (system-wide) limit and a soft limit. The hard limit can only be changed by root, but the soft limit can be changed by any user. There is one restriction on soft limits: they can never be set higher than the corresponding hard limit.
    1. Use the command ulimit(1) from your shell to increase the soft limit from its default value (64 before Solaris 8) to a specified value less than the hard limit.
    2. Use the setrlimit(2) call to change both the soft and hard limits. You must be root to change the hard limit though.
    3. Modify the /etc/system file and include the following line to increase the hard limit to 128:
    set rlim_fd_max=0x80
    After changing the /etc/system file, the system should be rebooted so that the change takes effect.
    Note that stdio routines are limited to using file descriptors 0 through 255. Even though the limit can be set higher than 256, if the fopen function cannot get a file descriptor lower than 256, it will fail. This can be a problem if other routines use the open function directly. For example, if 256 files are opened with the open function and none of them are closed, no other files can be opened with the fopen function because all of the low-numbered file descriptors have been used.
    Also, note that it is somewhat dangerous to set the fd limits higher than 1024. Some structures, such as fd_set in <sys/select.h>, are defined in the system assuming the maximum fd is 1023. If a program uses an fd larger than 1023 with the macros and routines that access such a structure, the program will corrupt its memory space because it will modify memory outside of the bounds of the structure.
    Caryl
    Sun Developer Technical Support
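    Tying this back to the /etc/system lines at the top of this page, a minimal sketch that raises both the default (soft) and maximum (hard) limits (values illustrative; a reboot is required):
    set rlim_fd_cur=1024
    set rlim_fd_max=4096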
