Max number of file descriptors in 32-bit vs 64-bit compilation

Hi,
I compiled a simple C app (with the Solaris CC compiler) that attempts to open 10000 file descriptors using fopen(). It runs just fine when compiled in 64-bit mode (after first setting 'ulimit -S -n 10000').
However, when I compile it in 32-bit mode it fails to open more than 253 files. A call to system("ulimit -a") reports "nofiles (descriptors) 10000".
Has anybody seen a similar problem before?
Thanks in advance,
Mikhail

On 32-bit Solaris, the stdio "FILE" struct stores the file descriptor (an integer) in an 8-bit field. With 3 files opened automatically at program start (stdin, stdout, stderr), that leaves 253 available file descriptors.
This limitation stems from early versions of Unix and Solaris, and must be maintained to allow old binaries to continue to work. That is, the layout of the FILE struct is wired into old programs, and thus cannot be changed.
When 64-bit Solaris was introduced, there was no compatibility issue, since there were no old 64-bit binaries. The limit of 256 file descriptors in stdio was removed by making the field larger. In addition, the layout of the FILE struct is hidden from user programs, so that future changes remain possible, should they become necessary.
To work around the limit, you can play some games with dup() and closing the original descriptor to make it available for use with a new file, or you can arrange to have fewer than the max number of files open at one time.
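For the dup() approach, the idea is to move long-lived descriptors above 255 so the low-numbered slots stay free for stdio. A minimal sketch (plain POSIX; fcntl() with F_DUPFD does the "duplicate to at least N" step, and /dev/null stands in for a real socket):

/* Minimal sketch (plain POSIX): move a long-lived descriptor above 255
 * so the low-numbered slots stay free for fopen(). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int move_fd_high(int fd)
{
    int high = fcntl(fd, F_DUPFD, 256);  /* lowest free fd >= 256 */
    if (high == -1)
        return fd;                       /* couldn't move it; keep the original */
    close(fd);                           /* frees the low slot for stdio */
    return high;
}

int main(void)
{
    int s = open("/dev/null", O_RDONLY); /* stand-in for a socket */
    s = move_fd_high(s);
    FILE *fp = fopen("/etc/hosts", "r"); /* now gets a low descriptor */
    if (fp != NULL) {
        printf("long-lived fd: %d, stdio fd: %d\n", s, fileno(fp));
        fclose(fp);
    }
    close(s);
    return 0;
}

This only helps, of course, if the process's descriptor limit has already been raised above 256 with ulimit or setrlimit.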
A new interface for stdio is being implemented to allow a large number of files to be open at one time. I don't know when it will be available or for which versions of Solaris.

Similar Messages

  • How do I find the number of file descriptors in use by the system?

    Hey folks,
    I am trying to figure out how many file descriptors my Leopard system has in use. On FreeBSD, this is exposed via sysctl at the OID kern.open_files (or something close to that, can't recall exactly what the name is). I do see that OS X has kern.maxfiles, which gives the maximum number of file descriptors the system can have open, but I don't see a sysctl that tells me how many of the 12288 descriptors the kernel thinks are in use.
    There's lsof, but is that really the only way? And, I'm not even sure if I can just equate the number of lines from lsof to the number of in use descriptors. I don't think it's that easy (perhaps it is and I'm just over complicating things).
    So, anyone know where this information is?
    Thanks for your time.

    glsmith wrote:
    There's lsof, but is that really the only way? And, I'm not even sure if I can just equate the number of lines from lsof to the number of in use descriptors.
    Can't think of anything other than lsof right now. However:
    - Only root can list all open files; all other users see only their own.
    - There is significant duplication.
    - All types of file descriptor are listed, which you may not want, so you need to consider filtering.
    As an example, the following will count all regular files opened by all users:
    sudo lsof | awk '/REG/ { print $NF }' | sort -u | wc -l
    If you run it without the sudo, you get just your own open files.

  • Max Number of Files?

    Is there a maximum number of photos/videos you can have in Photoshop Album?
    (Sorry if has already been answered. I searched but couldn't find anything)
    Thanks!

    Hello,
    I have the following question, which is linked to the "max number of files".
    Since I am using the oldest Photoshop Album version (V1), I am wondering if I can continue with it. I have 6000 pictures, growing by ~100 each month. I would like to migrate to Photoshop Elements 8. I tried the trial version, but when importing the PSA catalog file I get an error at around 35% of the conversion... Does anybody know what I can do?
    I tried to do a repair with Photoshop Album, without success.
    I can import pictures in groups of 1000, but then how could I merge all the catalogs?
    Thanks,
    Kamayana

  • Max Number of files supported by new Zen Touch Firmware

    Hello
    I used to have about 34 GB of music on my Zen Touch with the old firmware, but now it will only accept about 20 GB. Does anyone know the max number of files the Zen Touch supports with the new firmware?
    Thank You

    RBBrittain wrote:
    Has anyone verified that this is true of MTP players as well? I doubt it'll be different (the tag-data library system is common to all Nomad-type Creative players, whether PDE or MTP), but with all the changes needed for MTP, it would be helpful.
    I don't have a 40 GB Touch, which is really the kind of player you'd need to test this.
    BTW, I was referring to the overall capacity limit, which is tied more to space than to number of songs; that was what Creative was talking about when they said MTP reduces the Touch's song capacity. Having a Micro myself, I've never had enough capacity to test the internal song limits.
    Free space is pretty obvious, and the initial question was relating to number of files and any potential limit. I'm not sure what it is about MTP that reduces the space, whether it's the firmware, file system, or some addition to the files.

  • E-edition freezes up with 1A.pdf: "The max. number of files are already open; no other files can be opened or printed until some are closed." It shows four of five, the computer freezes, and I must close it by shutting down manually.

    My online E-Edition newspaper will freeze up after loading the front page and gives me the message that the max. number of files are already open and no other files can be printed or opened until some are closed (1A.pdf). When you click OK it repeats itself, and you have to force a shutdown by holding the power button. I'm totally stumped.

    I have read that "1000" is the default maximum for Acrobat 4 or Acrobat 5 (I don't remember exactly anymore). Concerning my PC and RAM: I have a high-end graphics computer at my office, so I'm sure RAM is not the reason.
    Anyway, thanks for answer.

  • Throttling a file adapter to consume a max number of files per minute

    Is there a way to design a file adapter to throttle its processing bandwidth?
    A simple use case scenario is described as follows:
    A File Adapter can only consume a max of 5 files per minute. The producer's average throughput is 3 files per minute, but during peak times it can send 100 files per minute. The peak times occur during end-of-year or quarterly accounting periods. If the consumer consumes more than 5 files per minute, the integrity of the environment and data is compromised.
    The SLA for the adapter is:
    - Each file will be processed within 2 seconds.
    - Maximum file transactions per minute is 5.
    An example is as follows.
    The producer sends 20 files to its staging directory within a minute. The consumer only processes 5 of these files in the first minute, sleeps then wakes up to consume the next 5 files in the second minute. This process is repeated until all files are processed.
    The producer can send another batch of files whenever it likes. So in the second minute the producer can send another 70 files. The consumer will throttle the files so it only processes 5 of them every minute.

    Hi,
    If you have the polling frequency set to 2 secs, then controlling it to read only five files a minute is difficult. You can change the polling frequency to 12 secs,
    or you can schedule a BPEL process every minute and use a synchronous read operation in a loop of 5 iterations to read 5 files.
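    Outside of BPEL, the throttling logic itself is easy to sketch. The following stand-alone poller is purely illustrative (the staging path and the process_file() step are hypothetical placeholders, not the Oracle File Adapter):

    /* Hypothetical sketch of the throttling algorithm: each minute,
     * consume at most MAX_PER_MIN files from a staging directory and
     * leave the rest for the next cycle. */
    #include <dirent.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_PER_MIN 5

    static void process_file(const char *dir, const char *name)
    {
        /* Real work goes here; a real consumer would also move or
         * delete the file so it is not picked up again next cycle. */
        printf("processing %s/%s\n", dir, name);
    }

    int main(void)
    {
        const char *staging = "/tmp/staging";   /* assumed path */
        for (;;) {
            DIR *d = opendir(staging);
            if (d != NULL) {
                struct dirent *e;
                int handled = 0;
                while (handled < MAX_PER_MIN && (e = readdir(d)) != NULL) {
                    if (e->d_name[0] == '.')
                        continue;               /* skip . and .. */
                    process_file(staging, e->d_name);
                    handled++;
                }
                closedir(d);
            }
            sleep(60);                          /* one throttle window */
        }
    }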

  • Max number of files that can be opened in harddrive through AEBS?

    What is the maximum number of files that can be opened on an external hard drive (LaCie 500 GB, USB 2.0) connected through the AEBS? It looks like there is a limit.
    When I am trying to 'seed' files through a bit-torrent client (Azureus 2.5.0.4) many of them give an error "Error: too many open files". I never had this error before.
    When I tried to delete a file in the hard drive, I got the error "too many open files". Then I stopped 'seeding' in Azureus and I was able to delete the file.


  • Max number of files in folder

    I'm setting up a web server running under OS 8.6. What's the limit on the number of files in a folder? Some of my image folders will have upwards of a thousand files. Do I need to break them up?
    Thanks for any insights.

    Hi, Thomas -
    This Apple KBase article states the limits for the HFS+ drive format; if your hard drive is not formatted as HFS+ (= Mac OS Extended), it should be.
    Article #24601 - HFS+ Format: Volume and File Limits
    Although the format allows for over 30,000 items (files and folders) in a folder, having such a large number 'loose' (not segregated into smaller bunches in their own folders) will slow down the display of Finder windows and Navigation Services windows.

  • Want to increase the file descriptors

    Hi,
    I am trying to increase the max number of file descriptors allowed in Solaris.
    I changed the ulimit soft value to the hard limit value (65536) as root. ulimit -a even shows the changed value for the soft limit.
    When I run my test program to find the value of sysconf(_SC_OPEN_MAX), it shows the changed value. But when I try to open more than 253 files, it fails. How do I increase this? Also, when I change ulimit -n to 65536, why is the max number of open files not increased?
    Looking forward to your help.
    Thanks in advance
    -A

    The simplest workaround is to compile as a 64-bit executable:
    -m64 in gcc. I don't remember offhand what the option is for Sun CC.
    The difficulty is that any non-system libraries you're using will also need to be recompiled as 64-bit.
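    A quick way to see the stdio ceiling directly is to count how many times fopen() succeeds before it fails; a minimal sketch (run it after raising ulimit -n):

    /* Count how many streams fopen() will hand out before failing.
     * On 32-bit Solaris this stops around 253 no matter what ulimit -n says. */
    #include <stdio.h>

    int main(void)
    {
        int n = 0;
        while (fopen("/dev/null", "r") != NULL)
            n++;            /* streams are leaked on purpose; we only count */
        printf("fopen() succeeded %d times\n", n);
        return 0;
    }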

  • Max number of streams

    Hello,
    My application is running out of stdio streams. There is a note in the stdio man pages: no more than 255 files may be opened using fopen(), and only file descriptors 0 through 255 can be used in a stream.
    The application creates a bunch of sockets (>255) and eventually calls fopen(), which fails because of the 255 limit. The application raises the number of file descriptors a process may create; this helps for socket() and open() calls but not for fopen(). Is there any workaround for this problem (other than using open() instead of fopen())?
    Regards,
    --Stas

    Hi
    First of all, a lot depends on which OS you are using. For all 32-bit Solaris versions the stdio(3S) limit is 256, because the fd field in the FILE structure is defined as an unsigned 8-bit value. On SPARC Solaris 7 and beyond, with the 64-bit option turned on, it is 65536 (64K).
    Having said that, there are ways to manipulate it. No guarantees that it will work, but try it anyway.
    All versions of Solaris (including Solaris 7 64-bit) have a default "soft" limit of 64 and a default "hard" limit of 1024.
    Processes may need to open many files or sockets as file descriptors. Standard I/O (stdio) library functions have a defined limit of 256 file descriptors, because fopen() stores the descriptor in a char-sized field and will fail if it cannot get a file descriptor between 0 and 255. The open() system call uses an int, removing this limitation. However, if open() has used descriptors 0 through 255 without closing any, fopen() will not be able to open any streams, as all the low-numbered descriptors have been used up. Applications that need many file descriptors to open a large number of sockets or other raw files should be forced to use descriptors numbered above 256. This leaves the low-numbered range free for system functions, such as name services, which depend upon stdio routines.
    (See p 368 "Performance and Tuning - Java and the Internet").
    There are limitations on the number of file descriptors available to the current shell and its descendants (see the ulimit man page). The maximum number of file descriptors that can safely be used by the shell and Solaris processes is 1024. This limitation has been lifted for Solaris 7 64-bit, which can use 64K (65536), as explained before.
    Therefore the recommended maximum values to be added to /etc/system are:
    set rlim_fd_cur=1024
    set rlim_fd_max=1024
    Then, in the shell:
    Use the limit command with csh:
    % limit descriptors 1024
    Use the ulimit command with Bourne or ksh:
    $ ulimit -n 1024
    However, some third-party applications need the max raised.
    A possible recommendation would be to increase rlim_fd_max,
    but not the default (rlim_fd_cur). Then rlim_fd_cur can be
    raised on a per-process basis if needed, but the higher setting
    for rlim_fd_max doesn't affect all processes.
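    If you control the application source, the soft limit can also be raised per process at startup, up to the hard limit. A minimal sketch (plain POSIX, not Solaris-specific):

    /* Minimal sketch (plain POSIX): raise this process's soft fd limit
     * up to the hard limit at startup. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = rl.rlim_max;  /* soft limit may go up to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("fd limit now %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }

    Note that on 32-bit Solaris this only widens what open() and socket() can use; fopen() is still confined to descriptors 0-255, as explained above.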
    Let me know how it goes
    -Manish
    SUN-DTS

  • Solaris file descriptor question

    Hi,
    We have an application on Solaris 2.6, and the shell in which the server runs has a file descriptor limit of 1024. What does this mean? Does this mean that every process started from the shell will have 1024 fds? What is the maximum number of fds that a Solaris 2.6 system can provide?
    When I run "sysdef", I get:
    ffffffff:fffffffd file descriptors
    How do I interpret this line? Is this 64K minus some value?
    If the system limit is 64K and each shell has 1024, how are the fds allocated to the shells? What I mean is: say I have 3 shells, each with a descriptor limit of 1024; then is the distribution something like 1024-2047 for shell 1, 2048-3071 for shell 2, and 3072-4095 for shell 3?
    Appreciate any explanation anyone can offer.
    thanks,
    mshyam

    Hi There,
    About File Descriptors and Their Limitations:
    All versions of Solaris (including Solaris 7 64-bit) have a default "soft" limit of 64 and a default "hard" limit of 1024.
    Processes may need to open many files or sockets as file descriptors. Standard I/O (stdio) library functions have a defined limit of 256 file descriptors, because fopen() stores the descriptor in a char-sized field and will fail if it cannot get a file descriptor between 0 and 255. The open() system call uses an int, removing this limitation. However, if open() has used descriptors 0 through 255 without closing any, fopen() will not be able to open any streams, as all the low-numbered descriptors have been used up. Applications that need many file descriptors to open a large number of sockets or other raw files should be forced to use descriptors numbered above 256. This leaves the low-numbered range free for system functions, such as name services, which depend upon stdio routines.
    (See p 368 "Performance and Tuning - Java and the Internet").
    There are limitations on the number of file descriptors available to the current shell and its descendants (see the ulimit man page). The maximum number of file descriptors that can safely be used by the shell and Solaris processes is 1024.
    This limitation has been lifted for Solaris 7 64-bit, which can use 64K (65536).
    Therefore the recommended maximum values to be added to /etc/system are:
    set rlim_fd_cur=1024
    set rlim_fd_max=1024
    To use the limit command with csh:
    % limit descriptors 1024
    To use the ulimit command with Bourne or ksh:
    $ ulimit -n 1024
    However, some third-party applications need the max raised. A possible recommendation would be to increase rlim_fd_max, but not the default (rlim_fd_cur). Then rlim_fd_cur can be raised on a per-process basis if needed, while the higher setting for rlim_fd_max doesn't affect all processes.
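    To check what a given process actually inherited, you can compare sysconf(_SC_OPEN_MAX) with getrlimit(); a minimal sketch (plain POSIX):

    /* Minimal sketch (plain POSIX): show the soft/hard fd limits the
     * process inherited from its shell. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("sysconf(_SC_OPEN_MAX) = %ld\n", sysconf(_SC_OPEN_MAX));
        printf("soft limit = %llu, hard limit = %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
    }

    Each process gets its own copy of the shell's limit; descriptor numbers are per-process, so three shells with a limit of 1024 each do not carve up a shared system-wide range.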
    I hope this helps your understanding of the system-wide file descriptor limit in conjunction with the shell and per-process file descriptor limits.
    ....jagruti
    Developer Technical Support
    Sun Microsystems Inc.

  • Problem with file descriptors not released by JMF

    Hi,
    I have a problem with file descriptors not released by JMF. My application opens a video file, creates a DataSource and a DataProcessor, and the generated video frames are transmitted using the RTP protocol. Once video transmission ends, if we stop and close the DataProcessor associated with the DataSource, the file descriptor identifying the video file is not released (checkable through /proc/pid/fd). If we repeat this processing again and again, the process reaches the maximum number of file descriptors allowed by the operating system.
    The same problem has been reproduced with JMF-2.1.1e-Linux in several environments:
    - Red Hat 7.3, Fedora Core 4
    - jdk1.5.0_04, j2re1.4.2, j2sdk1.4.2, Blackdown Java
    This is part of the source code:
    // video.avi with tracks audio(PCMU) and video(H263)
    String url = "video.avi";
    if ((ml = new MediaLocator(url)) == null) {
        Logger.log(ambito, refTrazas + "Cannot build media locator from: " + url);
        return 1;
    }
    try {
        // Create a DataSource given the media locator.
        Logger.log(ambito, refTrazas + "Creating JMF data source");
        try {
            ds = Manager.createDataSource(ml);
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Cannot create DataSource from: " + ml);
            return 1;
        }
        p = Manager.createProcessor(ds);
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Failed to create a processor from the given url: " + e);
        return 1;
    } // end try-catch
    p.addControllerListener(this);
    Logger.log(ambito, refTrazas + "Configure Processor.");
    // Put the Processor into configured state.
    p.configure();
    if (!waitForState(p.Configured)) {
        Logger.log(ambito, refTrazas + "Failed to configure the processor.");
        p.close();
        p = null;
        return 1;
    }
    Logger.log(ambito, refTrazas + "Configured Processor OK.");
    // So I can use it as a player.
    p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.RAW_RTP));
    // videoTrack: track control for the video track
    DrawFrame draw = new DrawFrame(this);
    // Instantiate and set the frame access codec to the data flow path.
    try {
        Codec codec[] = {
            draw,
            new com.sun.media.codec.video.colorspace.JavaRGBToYUV(),
            new com.ibm.media.codec.video.h263.NativeEncoder() };
        videoTrack.setCodecChain(codec);
    } catch (UnsupportedPlugInException e) {
        Logger.log(ambito, refTrazas + "The processor does not support effects.");
    } // end try-catch CodecChain creation
    p.realize();
    if (!waitForState(p.Realized)) {
        Logger.log(ambito, refTrazas + "Failed to realize the processor.");
        return 1;
    }
    Logger.log(ambito, refTrazas + "realized processor OK.");
    /* After realizing the processor, THESE LINES OF SOURCE CODE DO NOT RELEASE ITS FILE DESCRIPTOR !!!!!
    p.stop();
    p.deallocate();
    p.close();
    return 0;
    */
    // It continues up to the end of the transmission, properly drawing each video frame and transmitting them.
    Logger.log(ambito, refTrazas + " Create Transmit.");
    try {
        int result = createTransmitter();
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Error Create Transmitter.");
        return 1;
    } // end try-catch transmitter
    Logger.log(ambito, refTrazas + "Start Processor.");
    // Start the processor.
    p.start();
    return 0;
    } // end of main code

    /** stop when event "EndOfMediaEvent" */
    public int stop() {
        try {
            /* THIS PIECE OF CODE AND VARIATIONS HAVE BEEN TESTED
               AND THE FILE DESCRIPTOR IS NEVER RELEASED */
            p.stop();
            p.deallocate();
            p.close();
            p = null;
            for (int i = 0; i < rtpMgrs.length; i++) {
                if (rtpMgrs[i] == null) continue;
                Logger.log(ambito, refTrazas + "removeTargets;");
                rtpMgrs[i].removeTargets("Session ended.");
                rtpMgrs[i].dispose();
                rtpMgrs[i] = null;
            }
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Error Stopping: " + e);
            return 1;
        }
        return 0;
    } // end of stop()

    /** Controller Listener. */
    public void controllerUpdate(ControllerEvent evt) {
        Logger.log(ambito, refTrazas + "\nControllerEvent." + evt.toString());
        if (evt instanceof ConfigureCompleteEvent ||
            evt instanceof RealizeCompleteEvent ||
            evt instanceof PrefetchCompleteEvent) {
            synchronized (waitSync) {
                stateTransitionOK = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof ResourceUnavailableEvent) {
            synchronized (waitSync) {
                stateTransitionOK = false;
                waitSync.notifyAll();
            }
        } else if (evt instanceof EndOfMediaEvent) {
            Logger.log(ambito, refTrazas + "\nEvent EndOfMediaEvent.");
            this.stop();
        } else if (evt instanceof ControllerClosedEvent) {
            Logger.log(ambito, refTrazas + "\nEvent ControllerClosedEvent");
            synchronized (waitSync) {
                close = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof StopByRequestEvent) {
            Logger.log(ambito, refTrazas + "\nEvent StopByRequestEvent");
            synchronized (waitSync) {
                stop = true;
                waitSync.notifyAll();
            }
        }
    }
    Many thanks.

    It's a bug in H263; if you test it without the H263 track or with another video codec, the release will be OK.
    You can try using a non-Sun H263 codec, like the one from the fobs or jffmpeg projects.
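    To watch for the leak from outside the JMF API, you can count the process's open descriptors via /proc, as mentioned in the question. A minimal sketch (assumes a Linux-style /proc, matching the Red Hat/Fedora environments above):

    /* Count open descriptors of the current process via /proc/self/fd. */
    #include <dirent.h>
    #include <stdio.h>

    int count_open_fds(void)
    {
        DIR *d = opendir("/proc/self/fd");
        if (d == NULL)
            return -1;
        int n = 0;
        struct dirent *e;
        while ((e = readdir(d)) != NULL)
            if (e->d_name[0] != '.')
                n++;
        closedir(d);
        return n - 1;   /* exclude the descriptor opendir() itself holds */
    }

    int main(void)
    {
        printf("open fds: %d\n", count_open_fds());
        return 0;
    }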

  • File descriptor

    Hi,
    I am a newbie in this. Currently I face a problem where my application server (Sun Application Server 7 running on Solaris 8) has been throwing the following error:
    Pr_proc_desc_table_full_error: file descriptor.
    This has caused the application to be very unstable. The Sun App Server 7 documentation online says there is a need to set the rlim_fd_max value to 4086; the current value is 1024.
    I need advice on whether this value should be changed to a higher one (4086 or higher), and whether this setting is made at the Solaris OS level. Or do you have any other suggestion or concern? Please help.
    Thanks

    Yes, you need to increase the number of file descriptors available to the process. For Solaris 8, in /etc/system say:
    set rlim_fd_max=4096

  • Cannot increase file descriptors

    Hello all,
    I'm trying to increase the number of file descriptors of the system.
    Currently with ulimit -n, I get 2048. So I would like to increase the limit to 8192.
    I have added the following lines in the /etc/system file:
    set rlim_fd_max=8192
    set rlim_fd_cur=8192
    These are standard lines I have added on other systems, and after rebooting I always get the right value. But on one of the machines in my server room this doesn't seem to work. The machine is exactly the same as all the others: SunFire V210, Solaris 10 with Patch Cluster 31/10/2007.
    I have tried to reboot several times: init 6, reboot -- -r ... but I always get 2048 with ulimit -n
    Is there any other parameter somewhere than can limit this value?
    Thanx.

    Doing more tests... now I'm even more confused.
    Rebooting the system, I connected to the console and saw that during the boot part, there is a warning about the /etc/system file:
    Rebooting with command: boot
    Boot device: disk0 File and args:
    WARNING: unknown command 'nfs' on line 85 of etc/system
    WARNING: unknown command 'nfs' on line 86 of etc/system
    SunOS Release 5.10 Version Generic_118833-33 64-bit
    Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Those warnings refer to a problem of the /etc/system file that I had in the past (when I took over the system), but I modified the lines.
    They used to be just:
    nfs:nfs4_bsize=8192
    nfs:nfs4_nra=0
    Later I added the "set" in the front.
    Anyway, I changed the order of some commands, and on lines 85 and 86 I now have the following:
    85 * Begin MDD root info (do not edit)
    86 rootdev:/pseudo/md@0:0,30,blk
    The mirroring lines.
    So for some reason, at boot, Solaris reads the old file. But I don't know which old file, because the file has been modified and I don't keep any backup of the original. So where is Solaris reading that "strange" /etc/system file from? It's definitely not the one I see with: cat /etc/system

  • How to close BDB open File Descriptors ?

    Hi,
    In our current BDB environment, at runtime we switch Berkeley DB directories so that the application reads data from a new directory.
    We ran into a file descriptor issue when switching to a newer version of the data: the system was exceeding the allowed number of file descriptors, and we had to restart the machine.
    1) I wanted to confirm: would closing the BDB environment using env.close() fix this issue? Meaning, does closing the environment close BDB's file descriptors?
    2) Also, is there any way to test this closing of BDB file descriptors? We tried the "lsof" command, which lists open file descriptors, but could not see anything related to the jdb files or similar. So how do we make sure the file descriptors are indeed getting closed?
    Please let me know if there is any other way to get around File Descriptor problem.
    Thanks,

    1) I wanted to confirm: would closing the BDB environment using env.close() fix this issue? Meaning, does closing the environment close BDB's file descriptors?
    Yes, Environment.close will close all file descriptors.
    --mark
