accept() needs more file descriptors

My application server uses multiple threads to handle highly concurrent socket requests. When accept() returns a request it yields a new FD; the server creates a thread to handle it, and the thread closes the FD and exits after it finishes processing.
My question: when threads are concurrently handling around 56-57 FDs, accept() can't get a new FD (errno 24, EMFILE). I know the number of FDs is limited per process, and I could fork subprocesses to reach higher concurrency.
But isn't there another good way to solve the problem? How does a web server achieve high concurrency?
Any suggestion is appreciated!
Jenny
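For reference, a minimal sketch of the thread-per-connection pattern described above, with the EMFILE (errno 24) case handled explicitly; handle_client and the port number are placeholders, not part of the original post:

    #include <errno.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void *handle_client(void *arg)
    {
        int fd = (int)(intptr_t)arg;
        /* ... read the request and write the response ... */
        close(fd);                /* release the descriptor when done */
        return NULL;
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;

        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        sa.sin_port = htons(8080);            /* placeholder port */
        bind(lfd, (struct sockaddr *)&sa, sizeof sa);
        listen(lfd, 128);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0) {
                if (errno == EMFILE) {        /* errno 24: per-process fd limit hit */
                    sleep(1);                 /* back off instead of spinning */
                    continue;
                }
                perror("accept");
                break;
            }
            pthread_t t;
            pthread_create(&t, NULL, handle_client, (void *)(intptr_t)cfd);
            pthread_detach(t);                /* thread frees itself on exit */
        }
        return 0;
    }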

Hi Jenny,
First of all, you did not say which release of Solaris you are using, but I'll assume you are on a version later than 2.4.
You are correct when you say that the number of file descriptors that can be opened is a per-process limit. The default value for this limit depends on the OS version, but there are simple ways to increase it.
There are two types of limits: a hard (system-wide) limit and a soft limit. The hard limit can only be changed by root, but the soft limit can be changed by any user. There is one restriction on soft limits: they can never be set higher than the corresponding hard limit.
1. Use the ulimit(1) command from your shell to increase the soft limit from its default value (64 before Solaris 8) to a specified value less than the hard limit.
2. Use the setrlimit(2) call to change both the soft and hard limits. You must be root to change the hard limit, though (see the sketch after this list).
3. Modify the /etc/system file, adding the following line to raise the default soft limit to 128 (setting rlim_fd_max similarly raises the hard limit):
set rlim_fd_cur=0x80
After changing the /etc/system file, the system must be rebooted for the change to take effect.
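A minimal sketch of option 2, assuming you only need the soft limit raised up to the existing hard limit (the target value of 1024 is just an example):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft = %ld, hard = %ld\n", (long)rl.rlim_cur, (long)rl.rlim_max);

        /* Raise the soft limit; only root may raise rl.rlim_max. */
        rl.rlim_cur = (rl.rlim_max < 1024) ? rl.rlim_max : 1024;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }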
Note that the stdio routines are limited to using file descriptors 0 through 255. Even if the limit is set higher than 256, fopen() will fail if it cannot get a file descriptor lower than 256. This can be a problem if other routines use open() directly: for example, if 256 files are opened with open() and none of them are closed, no other files can be opened with fopen() because all of the low-numbered file descriptors have been used.
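An illustrative sketch of that failure mode (on 32-bit Solaris, where stdio requires a descriptor below 256; the file names are arbitrary):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Consume every descriptor below 256 by repeatedly dup'ing /dev/null. */
        int fd = open("/dev/null", O_RDONLY);
        while (fd >= 0 && fd < 255)
            fd = dup(fd);

        /* Even with a raised limit, 32-bit stdio needs an fd below 256. */
        FILE *fp = fopen("/etc/hosts", "r");
        if (fp == NULL)
            perror("fopen");    /* fails: no low-numbered descriptor is free */
        return 0;
    }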
Also, note that it is somewhat dangerous to set the fd limit higher than 1024. Some structures defined by the system, such as fd_set in <sys/select.h>, assume the maximum fd is 1023. If a program uses an fd larger than 1023 with the macros and routines that access such a structure, it will corrupt its memory space, because it will modify memory outside the bounds of the structure.
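To make that hazard concrete, a small guard like the following illustrative sketch refuses descriptors that fd_set cannot represent:

    #include <stdio.h>
    #include <sys/select.h>

    /* Add fd to set only if it is representable in an fd_set. */
    static int safe_fd_set(int fd, fd_set *set)
    {
        if (fd < 0 || fd >= FD_SETSIZE) {
            fprintf(stderr, "fd %d exceeds FD_SETSIZE (%d); use poll() instead\n",
                    fd, FD_SETSIZE);
            return -1;
        }
        FD_SET(fd, set);
        return 0;
    }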
Caryl
Sun Developer Technical Support

Similar Messages

  • File Descriptor Limit

    1. We can change the limits by setting the values rlim_fd_cur and rlim_fd_max in the /etc/system file.
    2. There is some documentation stating that the max should never exceed 1024.
    3. Question:
    a. For Solaris 8 can we ever set the max to be > 1024?
    b. If we can, is there another ceiling?
    c. Can we redefine FD_SETSIZE in the app that wants to use select() with fds > 1024? Is there any mechanism to do a select() on FDs > 1023?
    4. If the process is running as root, does it still have a limit on FDs? Can it then raise it using setrlimit()?
    Thnx
    Aman

    The hard limit is 1024 descriptors. The man page for limit(1) says that root can change the hard limits, but if you raise the fd limit above 1024 you may encounter kernel performance problems or even failure conditions. The number is a recommendation, empirical and based on what a selection of processors and memory models can tolerate. You might get more expert info by cross-posting this question to the solaris-OS/kernel forum. Raising the hard limit might be possible, but I cannot speak to the risks with much direct knowledge.
    You might want to examine the design of an app that needs more than 1024 files open at once; maybe there is an alternative design that allows you to close more file descriptors.
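    On question 3c: rather than redefining FD_SETSIZE, code that must watch descriptors above 1023 can use poll(2), which takes an array of pollfd structures instead of a fixed-size bitmap. A minimal sketch (the descriptors and timeout are placeholders):

        #include <poll.h>
        #include <stdio.h>

        /* Wait for input on two descriptors; works for fd values above 1023. */
        int wait_for_input(int fd_a, int fd_b)
        {
            struct pollfd fds[2] = {
                { .fd = fd_a, .events = POLLIN },
                { .fd = fd_b, .events = POLLIN },
            };

            int n = poll(fds, 2, 5000);   /* 5-second timeout */
            if (n < 0) {
                perror("poll");
                return -1;
            }
            return n;                     /* number of descriptors ready */
        }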

  • RFFOCA_T: DME with file descriptor in first line (RBC)

    Hi All,
    I've customized the automatic payment run for a company located in Canada, including the DME file generated by report RFFOCA_T. The DME file looks good, but sadly the house bank (RBC, Royal Bank of Canada) expects two things to be different:
    "Different formats now exist for the Royal Bank and CIBC from the default CPA-005 specification.
    • Type 'A' and 'C' records have been modified to handle RBC and CIBC
    • A parameter was added to job submission to request the bank type
    This process has been revised to include two headers as part of the tape_header code segment.
    • The first header must be the first line in the file and appear in the following format: $$AAPDCPA1464[PROD]NL$$
    • The second header (positions 36 to 1464) must be filled with blanks, not zeros"
    (taken from "SCT Banner, Finance, Release Guide - January 2005, Release 7.0")
    In our DME file the second header (positions 36 to 1464) is correct, but the first header is completely missing.
    RBC wrote me in an email: "The first line of the file needs the file descriptor ($$AAPDCPA1464[PROD]NL$$). The date format and the client number are correct. When the $$ file descriptor has been added, please upload the TEST file."
    I could not find any solution in SAP/OSS; can anybody help, please?
    Thanks a lot!
    Sandra.

    Hi Revi,
    I'm not sure I understand you correctly.
    The problem is not only the $$ at the beginning: the whole expected first line, the file descriptor, is missing. As we saw in the report code, it is simply not generated. I hope there is a simple solution, like an update or similar, but maybe we need a programmer to extend the report itself?
    Thanks,
    Sandra

  • I am getting a warning that I need more space on my disk. I have checked the Console and there have been 4,000 messages in my system log in the last 2 days. I have emptied as many files as possible.

    Hi
    I am getting a warning on opening my Mac saying I need more space, as my startup disk is full. I have emptied many files as well as my Trash. I am planning to get an external drive for my photos, but it has been suggested that I check Console; under the system log the following message appears (of which, they say, 4,000 have been logged in 2 days): '06/06/2014 16:08:23.341 com.apple.dynamic_pager: dynamic_pager: Need more space on the disk to enable swapping'. This is a copy of one of the items from the system log. I really do not think I have enough files to warrant this warning, with the exception of photos. My storage shows 143 MB free out of 120.47 GB. Thanks for any help.

    First off, no MacBook Air can run 10.3 or earlier.
    Secondly, are you sure you only have 143 MB of storage free? That's seriously low. With MacBook Airs with 120 GB hard drives (or even 140 GB, which is often the case when it says "out of 120"), you shouldn't let your free space go below 20 GB. A MB is 1024-fold less than a GB, so if you really have 143 MB free on a MacBook Air, you are long past the minimum space you should keep free and need to start clearing a lot of space now*:
    http://www.macmaps.com/diskfull.html
    I'm asking that this thread be moved to the MacBook Air forum, as we can't say for certain what you have when you post in the 10.3 or earlier forum.

  • ORA-01195:  online backup of file 65 needs more recovery to be consistent

    Hi,
    I was doing a clone by taking a hot backup from prod to dev. The backup was good. Then I created the control file and issued the command
    recover database until cancel using backup controlfile;
    It asked for the archived log files. I supplied them up to the current time, then I cancelled.
    That's when I got this error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01195: online backup of file 65 needs more recovery to be consistent
    ORA-01110: data file 65: '/d10/oradata/dwdev/kt01.dbf'
    ORA-01112: media recovery not started
    What am I doing wrong? I have not yet issued the command "alter database open resetlogs".
    Should I do more log switches in prod and pass those files to dev? Or should I just put the kt tablespace in backup mode and copy the data files?

    Which set of archivelogs did you copy over to apply? All the archivelogs from the first ALTER TABLESPACE ... BEGIN BACKUP through those subsequent to the last ALTER TABLESPACE ... END BACKUP?
    In the cloned database, what messages do you see in the alert.log after issuing the RECOVER DATABASE command? Does it complain about the datafiles being fuzzy? Which archivelogs does it show as having been applied?
    Can you check the log sequence numbers for the duration of the backup, plus the archivelogs subsequent to the backup?

  • Restore with brtools - need more archive redolog files

    Good day,
    I make an online backup of my Oracle database with brtools
    (Oracle 10g, BRBACKUP 7.00 (39)):
    brbackup -c -d util_file_online -t online -m all -u /
    bdztoexw anf  2009-01-23 15.27.40 ; 2009-01-23 16.47.21 ; 1  ...............     57    56     0     17671        226215105    17676        226268039  ALL
    online          util_file_online -
    7.00 (39)
    BR0280I BRBACKUP time stamp: 2009-01-23 16.43.38
    BR0232I 57 of 57 files saved by backup utility
    BR0230I Backup utility called successfully
    BR0280I BRBACKUP time stamp: 2009-01-23 16.43.40
    BR0340I Switching to next online redo log file for database instance VPP ...
    BR0321I Switch to next online redo log file for database instance VPP successful
    BR0117I ARCHIVE LOG LIST after backup for database instance VPP
    Parameter                      Value
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /oracle/VPP/oraarch/VPParch
    Archive format                 %t_%s_%r.dbf
    Oldest online log sequence     17673
    Next log sequence to archive   17676
    Current log sequence           17676            SCN: 226268039
    Database block size            8192             Thread: 1
    Current system change number   226268041        ResetId: 603135330
    After brbackup, I run brarchive in the same script:
    brarchive -c -d util_file -sd -u / > $br_out_file
    #ARCHIVE.. 17670  /oracle/VPP/oraarch/VPParch1_17670_603135330.dbf ; 2009-01-23 15.10.36 ; 43450368         226202112  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17671  /oracle/VPP/oraarch/VPParch1_17671_603135330.dbf ; 2009-01-23 15.36.12 ; 43430912         226215105  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17672  /oracle/VPP/oraarch/VPParch1_17672_603135330.dbf ; 2009-01-23 15.40.27 ; 43515904         226227928  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17673  /oracle/VPP/oraarch/VPParch1_17673_603135330.dbf ; 2009-01-23 15.41.06 ; 43729408         226238784  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17674  /oracle/VPP/oraarch/VPParch1_17674_603135330.dbf ; 2009-01-23 16.06.06 ; 43450368         226250315  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17675  /oracle/VPP/oraarch/VPParch1_17675_603135330.dbf ; 2009-01-23 16.43.40 ; 13243904         226263012  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    VPP  util_file  adztolzu svd  2009-01-23 16.47.22 ; 2009-01-23 16.54.47 ; 1  ...........     17670    17675        0        0  ------- 7.00 (39)  @0603135330
    BR0280I BRARCHIVE time stamp: 2009-01-23 16.51.11
    BR0232I 6 of 6 files saved by backup utility
    BR0230I Backup utility called successfully
    BR0016I 6 offline redo log files processed, total size 220.128 MB
    Then I take the tape with this data and try to restore only from this tape. All datafiles and the relevant archive redo log files (17670-17675) restore without errors, but at the end this ERROR occurs:
    ERROR at line 1:
    ORA-01195: online backup of file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/VPP/sapdata1/system_1/system.data1'
    ORA-00279: change 226268039 generated at 01/23/2009 16:43:40 needed for thread
    1
    ORA-00289: suggestion : /oracle/VPP/oraarch/VPParch1_17676_603135330.dbf
    ORA-00280: change 226268039 for thread 1 is in sequence #17676
    My online backup ended before the switch to redo log 17676.
    Why do I need this file? I think files 17670-17675 should be enough.
    Besides, change 226268039 was generated exactly at the moment of switching to the next online redo log.
    Can I try to open the database regardless of this error?
    Thank you for your prompt response
    Andrey Timofeev
    Edited by: Andrey Timofeev on Jul 21, 2009 3:52 PM


  • Suddenly can't create dirs or files!? "Bad file descriptor"

    Tearing my hair out...
    Suddenly, neither root nor users can create files or directories in directories under /home. Attempting to do so gets: Error -51 in the Finder, "Bad file descriptor" from the command line, and "Invalid file handle" via SMB.
    However, files and dirs can be read, edited, moved, and occasionally copied. Rebooting made no difference.
    Anyone have a clue on where to start on this?
    Mac OS X 10.3.9. Dual G4 Xserve with 2 x 7 x 250 GB XRAID.

    Indeed. This whole episode has exposed a rather woeful lack of robustness on the part of the Xserve and XRAID: various things failing, and the server hanging completely, as a result of a few bad files on disk, with a lack of useful feedback as to what was happening.
    Best I can tell, we had reached the stage where the next available disk location for a directory or file was bad, blocking any further additions.
    I've embarked on the process of copying everything off, removing crash-provoking files, replacing one bad drive (hot swap didn't work), erasing everything, performing a surface-conditioning (bad-block-finding) procedure, and maybe later this century I will be copying all the files back.
    It looks like the bad-block-finding procedure is finding a few bad blocks on the supposedly good drives; presumably it will isolate those, but maybe we need to get more new drives.

  • [SOLVED] Cups refuses to work: "bad file descriptor"

    I can't get CUPS to work on a new installation.
    I need to use CUPS as a client, printing to a printer attached to a separate server. What I did:
    1. I installed the cups package and started/enabled the service in systemd.
    2. The remote server has a working CUPS installation (accessible and working from other computers).
    3. I can see the remote printer listed among the printers in CUPS's local web interface.
    However, every job sent to the printer (from the GUI: KDE-based apps such as Okular, etc.) silently fails.
    No job is ever listed on the CUPS web interface.
    If I try to check CUPS status with lpstat I get the following error:
    $> lpstat
    lpstat: bad file descriptor
    Any suggestion on how to fix the problem?
    Edit: more info.
    It seems that cups is running fine according to systemd:
    [stefano@gorgias ~]$ systemctl status cups
    ● cups.service - CUPS Printing Service
    Loaded: loaded (/usr/lib/systemd/system/cups.service; enabled)
    Active: active (running) since Mon 2014-08-04 17:56:32 CDT; 16h ago
    Main PID: 2832 (cupsd)
    Status: "Scheduler is running..."
    CGroup: /system.slice/cups.service
    └─2832 /usr/bin/cupsd -f
    Aug 04 17:56:32 gorgias systemd[1]: Started CUPS Printing Service.
    but not so according to CUPS itself:
    [stefano@gorgias ~]$ lpstat -t
    scheduler is not running
    no system default destination
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    So systemd tells me the scheduler is running, while CUPS claims it is not.
    Last edited by stefano (2014-08-06 22:15:53)

    Stefano,
    I can't thank you enough for posting your solution! I have spent the last several days trying to get libreoffice and evince to even see my printer. I've been up and down so many CUPS web interface sessions, I lost count long ago. Finally, I came across some commands I'd never seen before, in particular 'lpstat', which also gave me the 'bad file descriptor' message. Googling provided next to no help, but it did produce a link to your post, which was like finding a needle in a haystack! Anyway, I updated /etc/cups/client.conf as you suggested, and now my applications can finally see my Brother HL-2280DW.
    For what it's worth, I'm running a fresh archlinux install (as of a week or two ago), and just installed cups a few days ago:
    root@benito:/etc/cups# uname -a
    Linux benito 3.16.1-1-ARCH #1 SMP PREEMPT Thu Aug 14 07:40:19 CEST 2014 x86_64 GNU/Linux
    root@benito:/etc/cups# pacman -Qs cups
    local/brother-hl2280dw 2.0.4_2-3
        Brother HL-2280DW CUPS Driver
    local/cups 1.7.5-1
        The CUPS Printing System - daemon package
    local/cups-filters 1.0.57-1
        OpenPrinting CUPS Filters
    local/cups-pdf 2.6.1-2
        PDF printer for cups
    local/libcups 1.7.5-1
        The CUPS Printing System - client libraries and headers
    local/python2-pycups 1.9.66-2
        Python CUPS Bindings
    local/system-config-printer 1.4.4-1
        A CUPS printer configuration tool and status applet
    Again, my sincere thanks!!
    (BTW, I'd also like to know why /etc/cups/client.conf doesn't work as advertised...)
    Last edited by archzen (2014-08-24 06:44:58)
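    The fix referenced above is not quoted in this thread; for a client-only setup it is presumably a ServerName directive in /etc/cups/client.conf pointing at the print server, along these lines (the hostname is a placeholder, not from the original posts):

        # /etc/cups/client.conf: send all print jobs to the remote CUPS server
        ServerName printserver.example.com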

  • How to increase the per-process file descriptor limit for JDBC connection 15

    If I need more than 15 JDBC connections, the only solution is to increase the per-process file descriptor limit. But how do I increase this limit? By modifying the Oracle server or the JDBC software?
    I'm using the JDBC thin driver to connect to an Oracle 8.0.6 server.
    From JDBC faq:
    Is there any limit on the number of connections for JDBC?
    No. JDBC drivers don't have any scalability restrictions by themselves. The number may be restricted by the number of 'processes' (in the init.ora file) on the server. However, nowadays we get questions that even when the number of processes is 30, no more than 16 active JDBC-OCI connections can be opened when the JDK is running in the default (green) thread model. This is because the per-process file descriptor limit is exceeded. It is important to note that, depending on whether you are using OCI or THIN, and green vs. native threads, a JDBC SQL connection can consume anywhere from 1 to 4 file descriptors. The solution is to increase the per-process file descriptor limit.

    Maybe it is an OS issue, but the suggested solution comes from the Oracle documentation. However, it does not give a clear enough answer; it just states "The solution is to increase the per-process file descriptor limit."
    So now I know the solution, but not how to apply the increase.
    Please help.

  • Overcoming file descriptor limitation?

    Hello,
    I am developing a server which should be able to handle more than 65535 concurrent connections. I have it implemented in Java, but I am hitting the limit on file descriptors. Since there is no fork() call in Java, I can't figure out what to do.
    The server is basically a kind of HTTP proxy, and a connection often waits for the upstream HTTP server to handle it (which can take some time, during which I need to leave the socket open). I made a simple hack which helped: I used LD_PRELOAD to intercept the bind() library call and set the Linux socket option TCP_DEFER_ACCEPT:
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_DEFER_ACCEPT, (char *)&val, sizeof(int)) < 0) ...
    This tells the kernel to accept() a connection only when some data has arrived on it, which helps a little (sockets waiting for the handshake and the request to come in don't have to consume a file descriptor). Any other hints? Should I somehow convince Java to fork()? Or should I switch to a 64-bit kernel and a 64-bit Java implementation?
    I can quite easily switch to Solaris if that would help.
    Any pointers/solutions appreciated.
    Juraj.
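    For reference, a sketch of the LD_PRELOAD hack described above, assuming Linux and glibc (the 5-second timeout and file names are arbitrary examples):

        /* defer.c: interpose bind() to set TCP_DEFER_ACCEPT on every socket.
         * Build:  gcc -shared -fPIC -o defer.so defer.c -ldl
         * Use:    LD_PRELOAD=./defer.so java ...
         */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen)
        {
            static int (*real_bind)(int, const struct sockaddr *, socklen_t);
            if (real_bind == NULL)
                real_bind = (int (*)(int, const struct sockaddr *, socklen_t))
                            dlsym(RTLD_NEXT, "bind");

            /* Don't let accept() return until data arrives (value in seconds). */
            int val = 5;
            setsockopt(sockfd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &val, sizeof val);

            return real_bind(sockfd, addr, addrlen);
        }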

    You can use dbms_lob functions to access CLOBs, so changing the datatype may not be as problematic as you think.
    Also, in PL/SQL the VARCHAR2 limit is 32767, not 4000, so if you're accessing the table via a stored procedure you can change the column datatype to CLOB; provided the data is less than 32767 bytes in length, you can use a PL/SQL variable to manipulate it.
    Incidentally, use CLOB rather than LONG for future compatibility.

  • Want to increase the file descriptors

    Hi,
    I am trying to increase the maximum number of file descriptors allowed in Solaris.
    As root, I changed the ulimit soft value to the hard-limit value (65536), and ulimit -a shows the changed value for the soft limit.
    When I run my test program to find the value of sysconf(_SC_OPEN_MAX), it shows the changed value. But when I try to open more than 253 files, it fails. How do I increase this? Also, when I change ulimit -n to 65536, why is the maximum number of open files not increased?
    Looking forward to your help.
    Thanks in advance
    -A

    The simplest workaround is to compile as a 64-bit executable: -m64 in gcc. I don't remember offhand what the option is for Sun CC.
    The difficulty is that any non-system libraries you're using will also need to be recompiled as 64-bit.

  • Set file descriptor limit for xinetd initiated process

    I am starting the amanda backup service on clients through xinetd, and we are hitting the open-file limit, i.e. the file descriptor limit.
    I have set resource controls for the user, and I can see from the shell that the file descriptor limit has increased, but I have not figured out how to get the resource-control change to apply to the daemon started by xinetd.
    The default of 256 file channels persists for the daemon; I need to increase that number.
    I have tried a wrapper script, clearly doing it incorrectly for Solaris 10/SMF services; that route didn't work, or is not as straightforward as it used to be.
    Is there a more direct way?
    Thanks - Brian

    Hi Brian,
    This appears with 32-bit applications. You have to use the enabler of the extended FILE facility, /usr/lib/extendedFILE.so.1:
    % ulimit -n
    256
    % echo 'rlim_fd_max/D' | mdb -k | awk '{ print $2 }'
    65536
    % ulimit -n 65536
    % ulimit -n
    65536
    % export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
    % ./your_32_bit_application
    Marco

  • 6358629 SSLSocket.close() / read() deadlock  - need more info

    Java 1.5.0_14 is documented as having this fix:
    "6358629 jsse runtime SSLSocket.close() and SSLSocket.read() deadlock"
    I cannot find this bug in the bug database, and the Sun page says bugs sometimes don't show up for security reasons.
    The complete absence of information on this bug, other than its title, means I cannot tell whether it is the same bug I am seeing randomly in production.
    Given that I cannot replicate the problem in a reproducible test case, I need sufficient information to justify upgrading the production VM and risking the introduction of new bugs into an otherwise stable platform.
    Is there any way to get more information on this bug?
    Has anyone seen a problem like this in their system?
    Is it possible Sun has just hidden the bug for no good reason?
    Any help appreciated.

    Hello, thank you very much for your information.
    I have met the same bug on Linux.
    java version "1.6.0_11"
    Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
    Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)
    My workaround is below: closing the file descriptor directly by calling the private native method (socketClose0 of java.net.PlainSocketImpl) via reflection. It works well in my environment.
    import java.lang.reflect.Field;
    import java.lang.reflect.Method;
    import java.net.Socket;

    public static void closeNativeSocket( Socket socket )
    {
        try
        {
            // Pull the private "impl" field (the PlainSocketImpl) out of the Socket.
            Class clazz = Class.forName( "java.net.Socket" );
            Field field = clazz.getDeclaredField( "impl" );
            field.setAccessible( true );
            Object impl = field.get( socket );

            // Invoke the private native socketClose0(boolean) to close the descriptor.
            clazz = Class.forName( "java.net.PlainSocketImpl" );
            Method method = clazz.getDeclaredMethod( "socketClose0",
                    new Class[] { Boolean.TYPE } );
            method.setAccessible( true );
            method.invoke( impl, new Object[] { Boolean.FALSE } );
        }
        catch( Exception e )
        {
            e.printStackTrace();
        }
    }
    Hope this helps.

  • Genunix: basic rctl process.max-file-descriptor (value 256) exceeded

    Hi,
    I am getting the following error on my console rapidly.
    I am using a Sun SPARC server running Solaris 10. We started getting this error suddenly after a restart of the server, and the error is continuously rolling on the console.
    The error:
    Rebooting with command: boot
    Boot device: disk0 File and args:
    SunOS Release 5.10 Version Generic_118822-25 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Failed to send email alert for recent event.
    SC Alert: Failed to send email alert for recent event.
    Hostname: nitwebsun01
    NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk
    NOTICE: VxVM vxdmp V-5-3-1700 dmpnode 287/0x0 has migrated from enclosure FAKE_ENCLR_SNO to enclosure DISKS
    checking ufs filesystems
    /dev/rdsk/c1t0d0s4: is logging.
    /dev/rdsk/c1t0d0s7: is logging.
    nitwebsun01 console login: Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 439
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
    Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:41 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 467
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
    Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:44 nitwebsun01 last message repeated 1 time
    Nov 20 14:56:49 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 503
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 510
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 516
    Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 540
    Nov 20 14:56:53 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 549
    Nov 20 14:56:53 nitwebsun01 last message repeated 4 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 665
    Nov 20 14:56:56 nitwebsun01 last message repeated 6 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 667
    Nov 20 14:56:56 nitwebsun01 last message repeated 2 times
    Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
    Nov 20 14:56:57 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 868
    Nov 20 14:56:57 nitwebsun01 /usr/lib/snmp/snmpdx: unable to get my IP address: gethostbyname(nitwebsun01) failed [h_errno: host not found(1)]
    Nov 20 14:56:58 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 887
    Nov 20 14:57:00 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 976
    nitwebsun01 console login: root
    Nov 20 14:57:00 nitwebsun01 last message repeated 2 times
    Here I have attached my /etc/project file as well:
    [root@nitwebsun01 /]$ cat /etc/project
    system:0::::
    user.root:1::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    noproject:2::::
    default:3::::
    process.max-file-descriptor=(privileged,1024,deny);
    process.max-sem-ops=(privileged,512,deny);
    process.max-sem-nsems=(privileged,512,deny);
    project.max-sem-ids=(privileged,1024,deny);
    project.max-shm-ids=(privileged,1024,deny);
    project.max-shm-memory=(privileged,4294967296,deny)
    group.staff:10::::
    [root@nitwebsun01 /]$
    Please help me get out of this issue.
    Regards
    Suseendran .A

    This is an old post, but I'm going to reply to it for the future reference of others.
    Please ignore the first reply to this thread: by default /etc/rctladm.conf doesn't exist, and you should never use it. Just put it out of your mind.
    So, then: by default, a process can have no more than 256 file descriptors open at any given time. The likelihood that you'll have a program using more than 256 files is very low... but each network socket counts as a file descriptor, therefore many network services will exceed this limit quickly. The 256 limit is stupid, but it is a standard, and as such Solaris adheres to it. To look at the open file descriptors of a given process, use "pfiles <pid>".
    So, to change it you have several options:
    1) You can tune the default threshold on the number of descriptors by specifying a new default threshold in /etc/system:
    set rlim_fd_cur=1024
    2) On the shell you can view your limit using 'ulimit -n' (use 'ulimit' to see all your limit thresholds). You can set it higher for this session by supplying a value, example: 'ulimit -n 1024', then start your program. You might also put this command in a startup script before starting your program.
    3) The "right" way to do this is to use a Solaris RCTL (resource control) defined in /etc/project. Say you want to give the "oracle" user 8152 fd's... you can add the following to /etc/project:
    user.oracle:101::::process.max-file-descriptor=(priv,8152,deny)
    Now log out the Oracle user, then log back in and startup.
    You can view the limit on a process like so:
    prctl -n process.max-file-descriptor -i process <pid>
    In that output, you may see 3 lines: one for "basic", one for "privileged" and one for "system". System is the max possible. Privileged is the limit you need special privileges to raise. Basic is the limit that any user can increase themselves (such as by using 'ulimit' as we did above). If you define a custom "privileged" RCTL, as we did above in /etc/project, it replaces the "basic" limit, which is, by default, 256.
    For reference, if you need to increase the threshold of a daemon that you can not restart, you can do this "hot" by using the 'prctl' program like so:
    prctl -t basic -n process.max-file-descriptor -x -i process <PID>
    The above just removes the "basic" resource control (limit) from the running process. Do that, then check a minute later with 'pfiles' to see that it is now using more FDs.
    Enjoy.
    benr.

  • How do I find the number of file descriptors in use by the system?

    Hey folks,
    I am trying to figure out how many file descriptors my Leopard system has in use. On FreeBSD this is exposed via sysctl at the OID kern.open_files (or something close to that; I can't recall exactly what the name is). I see that OS X has kern.maxfiles, which gives the maximum number of file descriptors the system can have open, but I don't see a sysctl that tells me how many of those 12288 descriptors the kernel thinks are in use.
    There's lsof, but is that really the only way? And I'm not even sure I can just equate the number of lines of lsof output to the number of in-use descriptors. I don't think it's that easy (perhaps it is and I'm just overcomplicating things).
    So, does anyone know where this information is?
    Thanks for your time.

    glsmith wrote:
    There's lsof, but is that really the only way? And, I'm not even sure if I can just equate the number of lines from lsof to the number of in use descriptors.
    Can't think of anything other than lsof right now. However:
    1. Only root can list all open files; all other users see only their own.
    2. There is significant duplication, and
    3. All types of file descriptor are listed, which you may not want, so you need to consider filtering.
    As an example, the following will count all regular files opened by all users:
    sudo lsof | awk '/REG/ { print $NF }' | sort -u | wc -l
    If you run it without the sudo, you get just your own open files.

    hi we have an issue. our DTP process has failed in our process chain and the reason was that the transformations were inactive. So we have activated the transformations and have executed the DTP manually. Now when i go to log view of my process chain