Cannot reset max-file-descriptor?

My /var/adm/messages is full of:
Apr 17 12:30:27 srv1 genunix: [ID 883052 kern.notice] basic rctl process.max-file-descriptor (value 256) exceeded by process 6910
This happens even though I have set process.max-file-descriptor to 4096 for all projects, which appears correct whenever I query any running process, e.g.:
srv1 /var/adm # prctl -t basic -n process.max-file-descriptor -i process $$
process: 2631: -ksh
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
basic 4.10K - deny 2631
Any ideas...?
Thanks!!

Hi,
Finally found the root cause.
It was a user error: in one of his startup scripts (.profile) he runs the command 'ulimit -n 1024', which sets both the soft and hard limits for file descriptors.
That is why I was unable to increase the file descriptor limit beyond 1024.
Thanks & Regards,
-GnanaShekar-
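For reference, the distinction matters because 'ulimit -n <value>' with no flag lowers both limits at once. A minimal POSIX-shell sketch (the 1024 value is just the one from the .profile above):

```shell
# 'ulimit -n <value>' with no -S/-H flag sets BOTH the soft and hard limit;
# once a non-root session lowers its hard limit, nothing in that session
# (not even a project rctl queried later) can raise it back.
ulimit -Hn          # show the hard limit
ulimit -Sn          # show the soft limit
ulimit -S -n 1024   # lower only the soft limit; the hard limit stays raisable
ulimit -Hn          # hard limit unchanged
```

Changing the .profile line to 'ulimit -S -n 1024' (or removing it) leaves the hard limit available, so the project rctl can take effect.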

Similar Messages

  • Changing process.max-file-descriptor in a non-global zone

    Hello Folks,
    I have non global zone.
    I wanted to change process.max-file-descriptor to 8192, so I issued the command below:
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' default
    I have rebooted the zone, but after the reboot the system is not showing the value as 8192.
    Can someone help me find out what I missed?

    # id -p
    uid=0(root) gid=0(root) projid=1(user.root)
    # prctl -P $$ | grep file
    process.max-file-descriptor basic 256 - deny 19452
    process.max-file-descriptor privileged 65536 - deny -
    process.max-file-descriptor system 2147483647 max deny -
    process.max-file-size privileged 9223372036854775807 max deny,signal=XFSZ -
    process.max-file-size system 9223372036854775807 max deny -
    # ulimit -n
    256
    # cat /etc/project | grep file
    default:3::::process.max-file-descriptor=(basic,8192,deny)
    #
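A likely cause, judging from the 'id -p' output above: the shell runs in project user.root (projid 1), not default, so the modified default entry never applies to it. A guarded sketch (Solaris-only commands; the project name is taken from the 'id -p' output above, and the block is a no-op on systems without the project tools):

```shell
# Guarded so it is a no-op on systems without the Solaris project tools.
if command -v projmod >/dev/null 2>&1; then
  # apply the rctl to the project the process actually runs in (see 'id -p')
  projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' user.root
  # rctls are read when a task is created, so start a fresh task to verify
  newtask -p user.root sh -c 'prctl -P $$ | grep max-file-descriptor'
fi
```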

  • Regarding the process.max-file-descriptor setting

    Hi,
    We are running Solaris 10 and have set up a project for the Oracle user id. When I run prctl for one of the running processes, I get the output below.
    process.max-file-descriptor
    basic 8.19K - deny 351158
    privileged 65.5K - deny -
    system 2.15G max deny -
    My question is: what's the limit for a process running under this project as far as the max-file-descriptor attribute is concerned? Will it be 8.19K, 65.5K, or 2.15G? Also, what is the difference among the three? Please advise. Thanks.

    Hi,
    Welcome to oracle forums :)
    User wrote:
    Hi,
    We are running Solaris 10 and have set up a project for the Oracle user id. When I run prctl for one of the running processes, I get the output below.
    process.max-file-descriptor
    basic 8.19K - deny 351158
    privileged 65.5K - deny -
    system 2.15G max deny -
    My question is: what's the limit for a process running under this project as far as the max-file-descriptor attribute is concerned? Will it be 8.19K, 65.5K, or 2.15G? Also, what is the difference among the three? Please advise. Thanks.
    Kernel parameter process.max-file-descriptor: maximum file descriptor index. Oracle recommends *65536*
    For more information on these settings please refer MOS tech note:
    *Kernel setup for Solaris 10 using project files. [ID 429191.1]*
    Hope helps :)
    Regards,
    X A H E E R

  • Genunix: basic rctl process.max-file-descriptor (value 256) exceeded

    Hi,
    I am getting the following error on my console rapidly.
    I am using a Sun SPARC server running Solaris 10. We started getting this error
    suddenly after a restart of the server, and the error is continuously rolling on the console...
    The Error:
    Rebooting with command: boot
    Boot device: disk0 File and args:
    SunOS Release 5.10 Version Generic_118822-25 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Failed to send email alert for recent event.
    SC Alert: Failed to send email alert for recent event.
    Hostname: nitwebsun01
    NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk
    NOTICE: VxVM vxdmp V-5-3-1700 dmpnode 287/0x0 has migrated from enclosure FAKE_ENCLR_SNO to enclosure DISKS
    checking ufs filesystems
    /dev/rdsk/c1t0d0s4: is logging.
    /dev/rdsk/c1t0d0s7: is logging.
    nitwebsun01 console login: Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 439
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 467
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:44 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:49 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 503
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 510
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 516
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 540
    Nov 20 14:56:53 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 549
    Nov 20 14:56:53 nitwebsun01 last message repeated 4 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 665
    Nov 20 14:56:56 nitwebsun01 last message repeated 6 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 667
    Nov 20 14:56:56 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:57 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 868
    Nov 20 14:56:57 nitwebsun01 /usr/lib/snmp/snmpdx: unable to get my IP address: gethostbyname(nitwebsun01) failed [h_errno: host not found(1)]
    Nov 20 14:56:58 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 887
    Nov 20 14:57:00 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 976
    nitwebsun01 console login: root
    Nov 20 14:57:00 nitwebsun01 last message repeated 2 times
    I have attached my /etc/project file here too:
    [root@nitwebsun01 /]$ cat /etc/project
    system:0::::
    user.root:1::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    noproject:2::::
    default:3::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    group.staff:10::::
    [root@nitwebsun01 /]$
    Please help me get out of this issue.
    Regards
    Suseendran .A

    This is an old post but I'm going to reply to it for future reference of others.
    Please ignore the first reply to this thread... by default /etc/rctladm.conf doesn't exist, and you should never use it. Just put it out of your mind.
    So, then... by default, a process can have no more than 256 file descriptors open at any given time. The likelihood that you'll have a program using more than 256 files is very low... but each network socket counts as a file descriptor, therefore many network services will exceed this limit quickly. The 256 limit is stupid but it is a standard, and as such Solaris adheres to it. To look at the open file descriptors of a given process use "pfiles <pid>".
    So, to change it you have several options:
    1) You can tune the default threshold on the number of descriptors by specifying a new default threshold in /etc/system:
    set rlim_fd_cur=1024
    2) On the shell you can view your limit using 'ulimit -n' (use 'ulimit' to see all your limit thresholds). You can set it higher for this session by supplying a value, example: 'ulimit -n 1024', then start your program. You might also put this command in a startup script before starting your program.
    3) The "right" way to do this is to use a Solaris RCTL (resource control) defined in /etc/project. Say you want to give the "oracle" user 8152 fd's... you can add the following to /etc/project:
    user.oracle:101::::process.max-file-descriptor=(priv,8152,deny)
    Now log out the Oracle user, then log back in and startup.
    You can view the limit on a process like so:
    prctl -n process.max-file-descriptor -i process <pid>
    In that output, you may see 3 lines, one for "basic", one for "privileged" and one for "system". System is the max possible. Privileged is the limit which you need to have special privs to raise. Basic is the limit that you as any user can increase yourself (such as by using 'ulimit' as we did above). If you define a custom "privileged" RCTL like we did above in /etc/project, it will dump the "basic" priv which is, by default, 256.
    For reference, if you need to increase the threshold of a daemon that you can not restart, you can do this "hot" by using the 'prctl' program like so:
    prctl -t basic -n process.max-file-descriptor -x -i process <PID>
    The above just dumps the "basic" resource control (limit) from the running process. Do that, then check it a minute later with 'pfiles' to see that it's now using more FDs.
    Enjoy.
    benr.

  • Number of file descriptors in Solaris 10

    hi,
    I had an open-files issue and updated the number of file descriptors with the following command (using zones on Solaris 10 running on SPARC):
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME
    I wanted to check: is there any way to know if the new number of files has come into effect, and is it also possible to check how many files are currently open, just to make sure I am not reaching the limits?
    Thank you
    Jonu Joy

    Thank you alan
    even after setting the max file descriptor to 8192, the output from pfiles shows 4096:
    Current rlimit: 4096 file descriptors
    Would you know if there is something wrong with the command I am using - projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME (I am issuing this command as root)?
    thank you
    Jonu Joy
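To answer the "how many files are currently open" part: besides pfiles, procfs exposes one entry per open descriptor, which works the same way on Solaris 10 and Linux. A small sketch:

```shell
# One entry per open descriptor lives under /proc/<pid>/fd; counting them
# shows current usage, and 'ulimit -n' shows the soft limit in effect.
pid=$$
open_fds=$(ls /proc/$pid/fd | wc -l)
echo "process $pid has $open_fds fds open (soft limit: $(ulimit -n))"
```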

  • File descriptor setting

    We are using iDS 5.1 SP2 running on Solaris 8. We have iDAR with 2 LDAP servers behind it (1 master, 1 slave).
    We didn't set up the max connections for iDAR, which means unlimited connections are allowed. However, the Unix ulimit setting was 256, which is too low. I changed the setting under /etc/system and rebooted. Now the ulimit setting is 4096 for both the hard limit and the soft limit. It looks good.
    However, whenever the total connections to iDAR approach 256, the fwd.log file shows "socket closed". The iDAR is still available, but the sockets are used up.
    I have been wondering why the new setting didn't take effect for iDAR.
    Can anybody help me or give me some clue?
    Thanks!
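One general point worth checking (an assumption, not a confirmed diagnosis for iDAR): rlimits are snapshotted at process creation, so a process started by an init script that sets its own ulimit keeps that value regardless of the system default; and 32-bit stdio-based programs cap out near 256 regardless of ulimit (see the 32-bit vs 64-bit discussion further down). The inheritance rule can be demonstrated portably:

```shell
# A child process inherits whatever limit was in effect when it was forked;
# changing the limit afterwards does not touch already-running processes.
(
  ulimit -S -n 512          # lower the soft limit in this subshell
  sh -c 'ulimit -n'         # a child started now reports 512
)
ulimit -n                   # the outer shell still has its original limit
```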


  • accept() needs more file descriptors

    My application server uses multithreading to deal with high-concurrency socket requests. When it accept()s a request, it assigns an FD and creates a thread to handle it; the thread closes the FD after it finishes processing and exits.
    My question is: when there are threads concurrently handling 56~57 FDs, accept() can't get a new FD (errno 24). I know FDs are limited per process; I can try to fork sub-processes to reach high concurrency.
    But I wonder, isn't there any other good method to solve the problem? How can a Web server reach high concurrency?
    Any suggest is appreciated!
    Jenny

    Hi Jenny,
    First of all, you did not say which release of Solaris you are using, but I'll assume you are on a version later than 2.4.
    You are correct when you say that the number of file descriptors that can be opened is a per-process limit. Depending on the OS version the default value for this limit changes, but there are simple ways to increase it.
    There are two types of limits: a hard (system-wide) limit and a soft limit. The hard limit can only be changed by root, but the soft limit can be changed by any user. There is one restriction on soft limits: they can never be set higher than the corresponding hard limit.
    1. Use the command ulimit(1) from your shell to increase the soft limit from its default value (64 before Solaris 8) to a specified value less than the hard limit.
    2. Use the setrlimit(2) call to change both the soft and hard limits. You must be root to change the hard limit though.
    3. Modify the /etc/system file and include the following line to increase the hard limit to 128 (rlim_fd_max is the hard limit; rlim_fd_cur sets the default soft limit):
    set rlim_fd_max=0x80
    After changing the /etc/system file, the system should be rebooted so that the change takes effect.
    Note that stdio routines are limited to using file descriptors 0 through 255. Even though the limit can be set higher than 256, if the fopen function cannot get a file descriptor lower than 256, then it will fail. This can be a problem if other routines use the open function directly. For example, if 256 files are opened with the open function and none of them are closed, no other files can be opened with the fopen function because all of the low-numbered file descriptors have been used.
    Also, note that it is somewhat dangerous to set the fd limits higher than 1024. There are some structures, such as fd_set in <sys/select.h>, defined in the system that assume the maximum fd is 1023. If a program uses an fd larger than 1023 with the macros and routines that access such a structure, the program will corrupt its memory space because it will modify memory outside of the bounds of the structure.
    Caryl
    Sun Developer Technical Support
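The errno 24 (EMFILE) behavior described above can be seen from any POSIX shell; this sketch lowers the soft limit so far that opening even one extra file fails:

```shell
# With the soft limit at 3, descriptors 0-2 (stdin/stdout/stderr) already
# fill the table, so the very next open() fails with EMFILE (errno 24).
(
  ulimit -S -n 3
  if echo test > /tmp/fd_demo_file; then
    echo "unexpected: a fourth descriptor was available"
  else
    echo "open failed: too many open files (EMFILE)"
  fi
) 2>/dev/null
rm -f /tmp/fd_demo_file
```

The same exhaustion is what a server hits when accept() has no free slot; raising the soft limit (or closing descriptors promptly) is the fix.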

  • Max number of file descriptors in 32 vs 64 bit compilation

    Hi,
    I compiled a simple C app (with the Solaris CC compiler) that attempts to open 10000 file descriptors using fopen(). It runs just fine when compiled in 64-bit mode (after previously setting 'ulimit -S -n 10000').
    However, when I compile it in 32-bit mode it fails to open more than 253 files. A call to system("ulimit -a") suggests that "nofiles (descriptors) 10000".
    Did anybody ever see similar problem before?
    Thanks in advance,
    Mikhail

    On 32-bit Solaris, the stdio "FILE" struct stores the file descriptor (an integer) in an 8-bit field. With 3 files opened automatically at program start (stdin, stdout, stderr), that leaves 253 available file descriptors.
    This limitation stems from early versions of Unix and Solaris, and must be maintained to allow old binaries to continue to work. That is, the layout of the FILE struct is wired into old programs, and thus cannot be changed.
    When 64-bit Solaris was introduced, there was no compatibility issue, since there were no old 64-bit binaries. The limit of 256 file descriptors in stdio was removed by making the field larger. In addition, the layout of the FILE struct is hidden from user programs, so that future changes are possible, should that become necessary.
    To work around the limit, you can play some games with dup() and closing the original descriptor to make it available for use with a new file, or you can arrange to have fewer than the max number of files open at one time.
    A new interface for stdio is being implemented to allow a large number of files to be open at one time. I don't know when it will be available or for which versions of Solaris.

  • My iPad quit backing up and it cannot find the file. I tried putting it on another computer after a hard reset, but it says it cannot back up there either.

    My iPad quit backing up because it cannot find the file. I have tried a restore and hard reset and adding iTunes to another computer and putting it on there, but it still cannot add the iPad to the new computer and back up. I have an iPad (not iPad 2) on iOS 5.0.1. It appears that my iPad has not been fully functional since I updated to the new OS, which is when my iPad stopped backing up. I lost data too, and some apps don't work anymore.

    Have you previously synced photos to the iPad, as that occasionally causes the 'missing file' error. If so then try deleting the photo cache from your computer and then re-try backing up - the location of the cache, and how to delete it, is on this page http://support.apple.com/kb/TS1314.
    In terms of apps not working, if it's happening on all the apps that you've downloaded from the App Store, but not the Apple built-in ones, then try downloading any free app from the store (as that appears to reset something) and then re-try them - the free app can then be deleted.
    If it's happening on all apps then try closing them all completely and then see if they work when you re-open them : from the home screen (i.e.not with any app 'open' on-screen) double-click the home button to bring up the taskbar, then press and hold any of the apps on the taskbar for a couple of seconds or so until they start shaking, then press the '-' in the top left of each app to close them, and touch any part of the screen above the taskbar so as to stop the shaking and close the taskbar.
    If that doesn't work then you could try a reset : press and hold both the sleep and home buttons for about 10 to 15 seconds (ignore the red slider), after which the Apple logo should appear - you won't lose any content, it's the iPad equivalent of a reboot.

  • Oracle Portal item cannot be deleted using dav (Bad File Descriptor)

    I cannot delete an Oracle Portal item with webdav. I get an error 500 and the item is not deleted.
    When this same user logs in as a portal user with a browser, the item can be deleted.
    So the user permissions are probably not the problem.
    What can be the problem?
    How do I have to solve this?
    Info found in log files:
    C:\OraHome_2\webcache\logs:
    Here I find an access.log file, but this one does not seem to contain anything useful.
    C:\OraHome_2\Apache\Apache\logs\:
    Here I find two recent log files:
    access_log.1340236800:
    HTTP/1.1" 207 3215
    192.168.6.57 - - [21/Jun/2012:09:28:53 +0200] "DELETE /dav_portal/portal/Bibnet/Open_Vlacc_regelgeving/Werkgroepen/vlacc_wgCAT/fgtest.txt HTTP/1.1" 500 431
    error_log.1340236800:
    [Thu Jun 21 09:28:53 2012] [error] [client 192.168.6.57] [ecid: 3781906711623,1] Could not DELETE /dav_portal/portal/Bibnet/Open_Vlacc_regelgeving/Werkgroepen/vlacc_wgCAT/fgtest.txt. [500, #0] [mod_dav.c line 2008]
    [Thu Jun 21 09:28:53 2012] [error] [client 192.168.6.57] [ecid: 3781906711623,1] (9)Bad file descriptor: Delete unsuccessful. [500, #0] [dav_ora_repos.c line 8913]
    In the error log, you also often find this message:
    [Thu Jun 21 10:33:02 2012] [notice] [client 192.168.6.57] [ecid: 3421133404379,1] ORA-20504: User not authorized to perform the requested operation
    This has probably nothing to do with it, you also have this message when the delete is successful.
    Versions I have used:
    Dav client: I have tried with clients "Oracle Drive 10.2.0.0.27 Patch" and Cyberduck 4.2.1
    Oracle Portal 10.1.4
    In the errorX.log file, I also find these lines:
    [Thu Jun 21 09:53:17 2012] [notice] [client 192.168.6.57] [ecid: 4348843884218,1] OraDAV: Initializing OraDAV Portal Driver (1.0.3.2.3-0030) using API version 2.00
    [Thu Jun 21 09:53:17 2012] [notice] [client 192.168.6.57] [ecid: 4348843884218,1] OraDAV: oradav_driver_info Name=interMedia Version=2.3

    You may want to try a rebuild of the DAV tables in Oracle Portal. Before you do so, take a backup of the Portal repository database to ensure that you can revert back in case of disaster.
    Rebuilding the DAV tables is done with the following instructions :
    - Start SQL*Plus and connect to the Portal metadata repository database as the PORTAL user
    - Execute wwdav_loader.create_dav_content:
    SQL> exec wwdav_loader.create_dav_content();
    Thanks,
    EJ

  • Cannot increase file descriptors

    Hello all,
    I'm trying to increase the number of file descriptors of the system.
    Currently with ulimit -n, I get 2048. So I would like to increase the limit to 8192.
    I have added the following lines in the /etc/system file:
    set rlim_fd_max=8192
    set rlim_fd_cur=8192
    These are standard lines I have added in other systems and after rebooting I always get the right value. But in one of the machines of my server room this doesn't seem to work. The machine is exactly the same as all the others: SunFire V210, Solaris 10 with Patch Cluster 31/10/2007
    I have tried to reboot several times: init 6, reboot -- -r ... but I always get 2048 with ulimit -n
    Is there any other parameter somewhere that can limit this value?
    Thanx.

    Doing more tests... now I'm even more confused.
    Rebooting the system, I connected to the console and saw that during boot there are warnings about the /etc/system file:
    Rebooting with command: boot
    Boot device: disk0 File and args:
    WARNING: unknown command 'nfs' on line 85 of etc/system
    WARNING: unknown command 'nfs' on line 86 of etc/system
    SunOS Release 5.10 Version Generic_118833-33 64-bit
    Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Those warnings refer to a problem of the /etc/system file that I had in the past (when I took over the system), but I modified the lines.
    They used to be just:
    nfs:nfs4_bsize=8192
    nfs:nfs4_nra=0
    Later I added the "set" in the front.
    Anyway, I changed the order of some commands, and on lines 85 and 86 I now have the following:
    85 * Begin MDD root info (do not edit)
    86 rootdev:/pseudo/md@0:0,30,blk
    The mirroring lines.
    So for some reason, at boot, Solaris reads the old file. But I don't know which old file, because it's been modified and I don't keep any backup of the original one. So where is Solaris reading that "strange" /etc/system file from? It's definitely not the one I see with: cat /etc/system
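One way to check what the running kernel actually absorbed, independent of whatever /etc/system copy sits on disk, is to read the live kernel variables (a guarded, Solaris-only sketch; a no-op elsewhere):

```shell
# No-op on systems without the Solaris modular debugger.
if command -v mdb >/dev/null 2>&1; then
  echo 'rlim_fd_max/D' | mdb -k   # hard fd limit the kernel booted with
  echo 'rlim_fd_cur/D' | mdb -k   # default soft fd limit
fi
```

If these show 8192, the /etc/system change did take effect and something later in the login path (e.g. a ulimit in a startup script) is lowering it; if they show the old values, the kernel really did read a different file, e.g. from another boot device.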

  • File Descriptor Limit

    1. We can change the limits by setting the values in the /etc/system file rlim_fd_cur & rlim_fd_max.
    2. There is some documentation that states that the max should never exceed 1024.
    3. Question:
    a. For Solaris 8 can we ever set the max to be > 1024?
    b. If we can, is there another ceiling?
    c. Can we redefine FD_SETSIZE in the app that wants to use select() with fds > 1024? Is there any mechanism to do a select() on FDs > 1023?
    4. If the process is running at root, does it still have a limit on FDs? Can it then raise it using setrlimit()?
    Thnx
    Aman

    The hard limit is 1024 for the number of descriptors. The man page for limit(1) says that root can change the hard limits, but if you raise the limit for fds above 1024 you may encounter kernel performance problems or even failure conditions. The number is a recommendation, empirical and based on what a selection of processors and memory models can tolerate. You might get more expert info by cross-posting this question to the Solaris-OS/kernel forum. Raising the hard limit might be possible, but I cannot speak to the risks with much direct knowledge.
    You might want to examine the design of an app that needs to have more than 1024 files open at once. Maybe there is an alternative design that allows you to close more file descriptors.

  • Cannot move any files into trash to delete them. I get a Trash warning box saying "The Finder cannot complete the operation because some data in 'file name' could not be read or written.(Error code - 36)".Macbook has so much data it is almost not working

    Cannot move any files into trash to delete them. I get a Trash warning box saying "The Finder cannot complete the operation because some data in 'file name' could not be read or written.(Error code - 36)".Macbook has so much data it is almost not working.
    I have tried resetting to factory settings by removing the battery and holding down the power switch for more than 5 seconds, but this does nothing.
    Any ideas would be greatly appreciated as I think with so much unwanted data on the desktop and other places the whole os will crash very soon.
    Thanks, Rick

    The hard drive may be dying anyway, or the directory damaged because it is so full.  You should never let the hard drive get over 85% full.
    A -36 error is a read/write error.
    All you did with the power is reset the SMC.  Does nothing to reset everything to factory settings. 
    Do you have your original 10.4 installer discs for the MacBook?
    Is your data backed up?

  • Cannot backup my files / copy my files and folders due to error message " The operation can't be completed because an item with the name ".DS_Store" already exists. "

    Hi Apple community!
    I have a [rather worrying] problem.
    When I try to copy all my files from my documents on my mac [or the entire documents folder] into an external drive, I get this error message
    " The operation can’t be completed because an item with the name “.DS_Store” already exists. "
    I am not given an option to skip this file or anything else.
    But I simply cannot complete the operation!
    I have tried deleting a few of the .DS_Store files [both in the originals and in the destinations], but no success.
    The same thing keeps happening.
    At first, this was just happening when I was trying to backup to my dropbox folder [the one on my mac's harddrive, which gets synced to the cloud],
    but then I tried to back up my documents to my external hard drive, and I realized it is giving me the same error message.
    So effectively, it seems I cannot backup my files anywhere!
    Any help or advice would be greatly appreciated.
    Thank you.

    Please read this whole message before doing anything.
    This procedure is a diagnostic test. It’s unlikely to solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
    The purpose of the test is to determine whether the problem is caused by third-party software that loads automatically at startup or login, by a peripheral device, or by corruption of certain system caches. 
    Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards. Boot in safe mode and log in to the account with the problem. Note: If FileVault is enabled on some models, or if a firmware password is set, or if the boot volume is a software RAID, you can’t do this. Ask for further instructions.
    Safe mode is much slower to boot and run than normal, and some things won’t work at all, including sound output and  Wi-Fi on certain iMacs. The next normal boot may also be somewhat slow.
    The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin. Test while in safe mode. Same problem? After testing, reboot as usual (i.e., not in safe mode) and verify that you still have the problem. Post the results of the test.

  • Receiving error message 'Could not load file or assembly 'System.ServiceModel.Activation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.'

    I started getting this error message for the first time today. When I look in the event log, I see it every time anyone tries to sync a mobile device to the Exchange server. I've also had this problem trying to connect using Outlook. I believe it is an IIS issue, but I'm not absolutely sure, so I'm posting this in the Exchange forum as well.
    The event viewer has the following information...
    3008
    A configuration error has occurred.
    5/1/2014 10:41:08 PM
    5/2/2014 5:41:08 AM
    7539d8a38c8b47869eda3f1749aba08d
    1
    1
    0
    /LM/W3SVC/1/ROOT/Microsoft-Server-ActiveSync-75-130434828686436855
    Full
    /Microsoft-Server-ActiveSync
    C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\sync\
    SERVER
    16284
    w3wp.exe
    NT AUTHORITY\SYSTEM
    ConfigurationErrorsException
    Could not load file or assembly 'System.ServiceModel.Activation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
    https://remote.testserver.com:443/Microsoft-Server-ActiveSync/default.eas?User=user.name&DeviceId=ApplC39GQ5xxxxxx&DeviceType=iPhone&Cmd=Ping
    /Microsoft-Server-ActiveSync/default.eas
    174.224.130.31
    False
    NT AUTHORITY\SYSTEM
    24
    NT AUTHORITY\SYSTEM
    False
    at System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase) at System.Web.Configuration.Common.ModulesEntry..ctor(String name, String typeName, String propertyName, ConfigurationElement configElement) at System.Web.HttpApplication.BuildIntegratedModuleCollection(List`1 moduleList) at System.Web.HttpApplication.GetModuleCollection(IntPtr appContext) at System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) at System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) at System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) at System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext)
    I have tried most, if not all, of the different post's suggestions to no avail.
    The steps I have taken are as follows...
    1. Repaired .Net 4 (Both the client and extended)
    2. Uninstalled and reinstalled .Net Framework 4.0.
    3. Verified that the dll exists.
    4. Checked the applicationHost.config file. It contains the follow statement...
    <add name="ServiceModel-4.0" type="System.ServiceModel.Activation.ServiceHttpModule, System.ServiceModel.Activation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler,runtimeVersionv2.0"
    />
    5. Changed the following line in web.config to include the runtimeVersion...
    <add name="ServiceModel" type="System.ServiceModel.Activation.HttpModule, System.ServiceModel.Activation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler,runtimeVersionv2.0" />
    6. Executed aspnet_regiis.exe -iru from the ...\Framework64\v4.0.30319 directory.
    7. Went to inetpub\history to use an applicationHost.config file from yesterday but it only has history from 9PM tonight. It probably had what I needed before I started changing it tonight.
    I still receive the same error message.
    Like I said everything was working yesterday. In fact I didn't hear or see any issue until after 1PM today.
    Any help would be very appreciated!

    Hi,
    Please confirm whether users can access their mailboxes from Outlook Web Access or not. We can do the following changes to have a try:
    1. In IIS > Application Pools, change the .NET Framework version to v2.0 and restart the IIS service.
    2. If it doesn't work, explore the Default Web Site.
    3. Rename the web.config file to web.config.old.
    4. Reset IIS using the iisreset command and try again.
    Thanks,
    Winnie Liang
    TechNet Community Support
