LVRefNum in a CIN file descriptor difficulties

I am currently developing an application that uses LabVIEW and IMAQ to acquire images from a 1429 Camera Link capture card.
We aim to analyse the images using some previously developed C code. Our complications come when trying to access the images
from within a Code Interface Node (CIN), where we are having difficulty obtaining the file descriptors for each image.
typedef struct {
 int32 dimSize;
 LVRefNum ImageOut[1];
 } TD1;
typedef TD1 **TD1Hdl;
MgErr CINRun(TD1Hdl ImagesOut, uInt32 *ROIWidth, uInt32 *ROIHeight,
             TD2Hdl arg1, TD3Hdl arg2)
{
  File fd;
  LVRefNum *thisrefnum;
  uInt8 *target;
  int32 numread;
  int32 frames;
  MgErr test;

  frames = (*ImagesOut)->dimSize;
  thisrefnum = (*ImagesOut)->ImageOut;
  test = FRefNumToFD(thisrefnum[0], &fd);
  if (test == mgArgErr)
    DbgPrintf("mgArgErr");
  /* ... */
}
An error is reported by FRefNumToFD, i.e. I am not passing a valid refnum. Does anyone have any ideas about what I've done wrong?
Any suggestions much appreciated.
Ben

Hi Frisson,
  in the UK we held a Vision Day (another is coming on December 1st 2005; contact the UK office on 01635 572410 to register an interest, or to find out more go here: http://sine.ni.com/apps/we/nievn.ni?action=display_offerings_by_event&event_id=14895&country=GB&site...)
Please find attached one of the presentation files detailing how the image is in memory, and how (in this case) we could interface to an image acquired in LabVIEW inside of LabWindows/CVI code.
Hope this helps.
Sacha Emery
National Instruments (UK)
// it takes almost no time to rate an answer
Attachments:
CVI.zip 659 KB

Similar Messages

  • LVRefNum in a CIN file descriptor difficulties

    I am currently developing an application that uses LabVIEW and IMAQ to acquire images from a 1429 Camera Link capture card.
    We aim to analyse the images using some previously developed C code. Our complications come when trying to access the images
    from within a Code Interface Node (CIN), where we are having difficulty obtaining the file descriptors for each image.
    typedef struct {
     int32 dimSize;
     LVRefNum ImageOut[1];
     } TD1;
    typedef TD1 **TD1Hdl;
    MgErr CINRun(TD1Hdl ImagesOut, uInt32 *ROIWidth, uInt32 *ROIHeight,
                 TD2Hdl arg1, TD3Hdl arg2)
    {
      File fd;
      LVRefNum *thisrefnum;
      uInt8 *target;
      int32 numread;
      int32 frames;
      MgErr test;

      frames = (*ImagesOut)->dimSize;
      thisrefnum = (*ImagesOut)->ImageOut;
      test = FRefNumToFD(thisrefnum[0], &fd);
      if (test == mgArgErr)
        DbgPrintf("mgArgErr");
      /* ... */
    }
    An error is reported by FRefNumToFD, i.e. I am not passing a valid refnum. Does anyone have any ideas about what I've done wrong?
    Any suggestions much appreciated.
    Ben

    Hi,
      you posted this on the LabVIEW forum - I'm pointing this thread to that one so we can answer this in one place.
    Thanks
    http://forums.ni.com/ni/board/message?board.id=170&message.id=150718
    Sacha Emery
    National Instruments (UK)
    // it takes almost no time to rate an answer

  • How do I find the number of file descriptors in use by the system?

    Hey folks,
    I am trying to figure out how many file descriptors my Leopard system has in use. On FreeBSD, this is exposed via sysctl at the OID kern.open_files (or something close to that, can't recall exactly what the name is). I do see that OS X has kern.maxfiles, which gives the maximum number of file descriptors the system can have open, but I don't see a sysctl that tells me how many of the 12288 descriptors the kernel thinks are in use.
    There's lsof, but is that really the only way? And I'm not even sure I can just equate the number of lines from lsof to the number of in-use descriptors. I don't think it's that easy (perhaps it is and I'm just overcomplicating things).
    So, anyone know where this information is?
    Thanks for your time.

    glsmith wrote:
    There's lsof, but is that really the only way? And, I'm not even sure if I can just equate the number of lines from lsof to the number of in use descriptors.
    Can't think of anything other than lsof right now. However:
    • Only root can list all open files; all other users see only their own.
    • There is significant duplication.
    • All types of file descriptor are listed, which you may not want, so you need to consider filtering.
    As an example, the following will count all regular files opened by all users:
    sudo lsof | awk '/REG/ { print $NF }' | sort -u | wc -l
    If you run it without the sudo, you get just your own open files.

  • RFFOCA_T: DME with file descriptor in first line (RBC)

    Hi All,
    I've customized the automatic payment run for a company located in Canada, including the DME file generated by the report RFFOCA_T. The DME file looks good, but sadly the house bank (RBC, Royal Bank of Canada) expects two things to be different:
    "Different formats now exist for the Royal Bank and CIBC from the default CPA-005 specification.
    • Type 'A' and 'C' records have been modified to handle RBC and CIBC
    • A parameter was added to job submission to request the bank type
    This process has been revised to include two headers as part of the tape_header code segment.
    • The first header must be the first line in the file and appear in the following format: $$AAPDCPA1464[PROD]NL$$
    • The second header (positions 36 to 1464) must be filled with blanks, not zeros"
    (taken from "SCT Banner, Finance, Release Guide - January 2005, Release 7.0")
    In our DME-file the second header (position 36 to 1464) is correct, but the first header is completely missing.
    RBC wrote me in an email: "The first line of the file needs the file descriptor ($$AAPDCPA1464[PROD]NL$$). The date format and the client number is correct. When the $$ file descriptor has been added please upload the TEST file."
    I could not find any solution at SAP/ OSS - can anybody help, please?
    Thanks a lot!
    Sandra.

    Hi Revi,
    I'm not sure if I understand you in the right way.
    I do not have a problem only with the $$ at the beginning: the whole first line, the expected file descriptor, is missing. As we saw in the report code, it is simply not considered. I hope there is a simple solution, like an update, but maybe we need to have a programmer extend the report itself?
    Thanks,
    Sandra

  • "IOException: Bad file descriptor" thrown during readline()

    I'm working on a system to send data to bluetooth devices. Currently I have a dummy program that "finds" bluetooth devices by listening for input on System.in, and when one is found, the system sends some data to the device over bluetooth. Here is the code for listening for input on System.in
    InputStreamReader isr = new InputStreamReader(System.in);
    BufferedReader br = new BufferedReader(isr);
    boolean streamOpen = true;
    while (streamOpen) {
         String next = "";
         System.out.println("waiting for Input: ");
         try {
              next = br.readLine();
              // other code here
         } catch (IOException ioe) {
              ioe.printStackTrace();
         }
    } // end of while

    This is running in its own thread, constantly listening for input from System.in. There is also another thread that handles pushing the data to the bluetooth device. It works the first time it reads input; then the other thread starts running also, printing output to System.out. When the data has successfully been pushed to the device, the system waits for me to enter more information. As soon as I type something and press return, I get an endless (probably infinite if I don't kill the process) stream of IOException: Bad file descriptor exceptions thrown from the readLine() method.
    Here is what is being printed:
    Waiting for Input: // <-- This is the thread listening for input on System.in
    system started with 1 Bluetooth Chip // From here down is the thread that pushing data to the BT device
    next device used 0
    default device 0000000000
    start SDP for 0000AA112233
    *** obex_push: 00:00:AA:11:22:33@9, path/to/file.txt, file.txt
    I'm not even sure which line it's trying to read when the exception gets thrown, whether it's the first line after "Waiting for Input: " or it's the line where I actually type something and hit return.
    Any ideas why this might be happening? Could it have something to do with reading from System.in from a thread that is not the main thread?
    Also, this is using java 1.6

    Actually, restarting the stream doesn't work either..... here's a sample program that I wrote.
    public class ExitListener extends Thread {
         private BufferedReader br;
         private boolean threadRunning;

         public ExitListener(UbiBoardINRIA ubiBoard) {
              super("Exit Listener");
              threadRunning = true;
              InputStreamReader isr = new InputStreamReader(System.in);
              br = new BufferedReader(isr);
         }

         public void run() {
              while (threadRunning) {
                   try {
                        String read = br.readLine();
                        if (read.equalsIgnoreCase("Exit")) {
                             threadRunning = false;
                        }
                   } catch (IOException ioe) {
                        System.out.println("Can you repeat that?");
                        try {
                             br.close();
                             br = new BufferedReader(new InputStreamReader(System.in));
                        } catch (IOException ioe2) {
                             ioe2.printStackTrace();
                             System.out.println("Killing this thread");
                             threadRunning = false;
                        }
                   } // end of catch
              } // end of while
         } // end of run
    }

    output:
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I know that this is probably not enough code to really see the problem, but my main question is: what could be going on elsewhere in the code that would cause this BufferedReader not to be able to re-open?
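    For what it's worth, one pattern that avoids several of these symptoms is to have exactly one long-lived object wrap System.in for the whole application, and never close or re-wrap it. The sketch below illustrates that pattern; the class name ConsoleLines is purely illustrative and not from the original post, and the demo feeds an in-memory stream instead of the real console so it is self-contained.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Illustrative sketch: a single shared reader, so only one object ever
// wraps the underlying stream; all threads call nextLine() on it.
public class ConsoleLines {
    private final BufferedReader reader;

    public ConsoleLines(InputStream in) {
        this.reader = new BufferedReader(new InputStreamReader(in));
    }

    // Blocks until a line is available; returns null at end of stream.
    // synchronized so concurrent callers from different threads are safe.
    public synchronized String nextLine() throws IOException {
        return reader.readLine();
    }

    public static void main(String[] args) throws IOException {
        // In real use this would be: new ConsoleLines(System.in), created once.
        InputStream fake = new ByteArrayInputStream("hello\nexit\n".getBytes());
        ConsoleLines console = new ConsoleLines(fake);
        System.out.println(console.nextLine()); // hello
        System.out.println(console.nextLine()); // exit
        System.out.println(console.nextLine()); // null (end of stream)
    }
}
```

    The key point is that the reader is never closed while the program still needs input: closing a BufferedReader wrapped around System.in closes System.in itself, and a fresh BufferedReader cannot re-open it.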

  • Suddenly can't create dirs or files!? "Bad file descriptor"

    Tearing my hair out...
    Suddenly, neither root nor users can create files or directories in directories under /home. Attempting to do so gets: Error -51 in Finder, "Bad file descriptor" from command line, and "Invalid file handle" via SMB.
    However, files and dirs can be: read, edited, moved, and occasionally copied. Rebooting made no difference.
    Anyone have a clue on where to start on this?
    Mac OS X 10.3.9. Dual G4 XServe with 2 x 7 x 250 G XRAID.
    Many   Mac OS X (10.3.9)   Many
    Many   Mac OS X (10.3.9)   Many

    Indeed. This whole episode has exposed a rather woeful lack of robustness on the part of the X Server and XRAID... various things failing and the server hanging completely as a result of a few bad files on disk, with a lack of useful feedback as to what was happening.
    Best I can tell, we had reached the stage where the next available disk location for a directory or file was bad, blocking any further additions.
    I've embarked on the process of copying everything off, remove crash-provoking files, replace one bad drive (hot swap didn't work), erase all, perform surface conditioning (bad-block finding) procedure, and maybe later this century will be copying all files back.
    Looks to me like the bad block finding procedure is finding a few bad blocks on the supposedly good drives... presumably will isolate those, but maybe we need to get more new drives.
    Many   Mac OS X (10.3.9)   Many

  • Bad File Descriptor in /dev/fd/3, and 94Gb of disk space missing

    I noticed a few days ago, possibly as the result of a recent kernel panic, that I have a large chunk of hard drive space missing. The Finder reports that I have approximately 89Gb of free space, but using "df" reports that there is approximately 178Gb free. Using "du" doesn't report any unexpected huge files, so I tried running GrandPerspective. In addition to the usual file usage and free space, this shows a single 94Gb block of "miscellaneous used space".
    I then booted into Single User mode to run fsck on the startup drive. This reported several errors, and took 3 passes to repair the directory structure, but didn't recover the missing space. I have subsequently run TechTool Pro and DiskWarrior on the startup drive (both of which found various minor errors), but the 94Gb still refuses to show itself.
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    After searching Google, a "Bad file descriptor" error points to an inode issue, that fsck cannot fix, but I don't know enough (read: anything) about inodes to risk running the clri command to zero the problem inode.
    Short of blanking the startup disk and installing from scratch (not an attractive option), is there anything I can do to fix the broken inode and recover the missing space?
    Any help appreciated.

    Drawing Business wrote:
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    This is not an error and always happens with find unless you exclude the /dev hierarchy from the search. (Interestingly this seems to have gone away with 10.5??)
    To locate your missing space, try WhatSize. Another alternative which I have not used personally is Disk Inventory X.
    As an additional point, with 10.4 it is actually better to use Disk Utility, since it does more than fsck: Resolve startup issues and perform disk maintenance with Disk Utility and fsck, quote:
    Note: If you're using Mac OS X 10.4 or later, you should use Disk Utility instead of fsck, whenever possible.

  • Problem with file descriptors not released by JMF

    Hi,
    I have a problem with file descriptors not released by JMF. My application opens a video file, creates a DataSource and a DataProcessor, and the generated video frames are transmitted using the RTP protocol. Once video transmission ends, if we stop and close the DataProcessor associated with the DataSource, the file descriptor identifying the video file is not released (checkable through /proc/pid/fd). If we repeat this processing again and again, the process reaches the maximum number of file descriptors allowed by the operating system.
    The same problem has been reproduced with JMF-2.1.1e-Linux in several environments:
    - Red Hat 7.3, Fedora Core 4
    - jdk1.5.0_04, j2re1.4.2, j2sdk1.4.2, Blackdown Java
    This is part of the source code:
    // video.avi with tracks audio (PCMU) and video (H263)
    String url = "video.avi";
    if ((ml = new MediaLocator(url)) == null) {
        Logger.log(ambito, refTrazas + "Cannot build media locator from: " + url);
    }
    try {
        // Create a DataSource given the media locator.
        Logger.log(ambito, refTrazas + "Creating JMF data source");
        try {
            ds = Manager.createDataSource(ml);
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Cannot create DataSource from: " + ml);
            return 1;
        }
        p = Manager.createProcessor(ds);
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Failed to create a processor from the given url: " + e);
        return 1;
    } // end try-catch
    p.addControllerListener(this);
    Logger.log(ambito, refTrazas + "Configure Processor.");
    // Put the Processor into the configured state.
    p.configure();
    if (!waitForState(p.Configured)) {
        Logger.log(ambito, refTrazas + "Failed to configure the processor.");
        p.close();
        p = null;
        return 1;
    }
    Logger.log(ambito, refTrazas + "Configured Processor OK.");
    // So I can use it as a player.
    p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.RAW_RTP));
    // videoTrack: track control for the video track
    DrawFrame draw = new DrawFrame(this);
    // Instantiate and set the frame access codec to the data flow path.
    try {
        Codec codec[] = {
            draw,
            new com.sun.media.codec.video.colorspace.JavaRGBToYUV(),
            new com.ibm.media.codec.video.h263.NativeEncoder() };
        videoTrack.setCodecChain(codec);
    } catch (UnsupportedPlugInException e) {
        Logger.log(ambito, refTrazas + "The processor does not support effects.");
    } // end try-catch CodecChain creation
    p.realize();
    if (!waitForState(p.Realized)) {
        Logger.log(ambito, refTrazas + "Failed to realize the processor.");
        return 1;
    }
    Logger.log(ambito, refTrazas + "realized processor OK.");
    /* After realizing the processor, THESE LINES OF SOURCE CODE DO NOT RELEASE ITS FILE DESCRIPTOR !!!!!
    p.stop();
    p.deallocate();
    p.close();
    return 0;
    */
    // It continues up to the end of the transmission, properly drawing each video frame and transmitting them.
    Logger.log(ambito, refTrazas + " Create Transmit.");
    try {
        int result = createTransmitter();
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Error Create Transmitter.");
        return 1;
    } // end try-catch transmitter
    Logger.log(ambito, refTrazas + "Start Processor.");
    // Start the processor.
    p.start();
    return 0;
    } // end of main code

    /* stop when event "EndOfMediaEvent" */
    public int stop() {
        try {
            /* THIS PIECE OF CODE AND VARIATIONS HAVE BEEN TESTED
               AND THE FILE DESCRIPTOR IS NEVER RELEASED */
            p.stop();
            p.deallocate();
            p.close();
            p = null;
            for (int i = 0; i < rtpMgrs.length; i++) {
                if (rtpMgrs[i] == null) continue;
                Logger.log(ambito, refTrazas + "removeTargets;");
                rtpMgrs[i].removeTargets("Session ended.");
                rtpMgrs[i].dispose();
                rtpMgrs[i] = null;
            }
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Error Stoping: " + e);
            return 1;
        }
        return 0;
    } // end of stop()

    /* Controller Listener. */
    public void controllerUpdate(ControllerEvent evt) {
        Logger.log(ambito, refTrazas + "\nControllerEvent." + evt.toString());
        if (evt instanceof ConfigureCompleteEvent ||
            evt instanceof RealizeCompleteEvent ||
            evt instanceof PrefetchCompleteEvent) {
            synchronized (waitSync) {
                stateTransitionOK = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof ResourceUnavailableEvent) {
            synchronized (waitSync) {
                stateTransitionOK = false;
                waitSync.notifyAll();
            }
        } else if (evt instanceof EndOfMediaEvent) {
            Logger.log(ambito, refTrazas + "\nEvento EndOfMediaEvent.");
            this.stop();
        } else if (evt instanceof ControllerClosedEvent) {
            Logger.log(ambito, refTrazas + "\nEvent ControllerClosedEvent");
            synchronized (waitSync) {
                close = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof StopByRequestEvent) {
            Logger.log(ambito, refTrazas + "\nEvent StopByRequestEvent");
            synchronized (waitSync) {
                stop = true;
                waitSync.notifyAll();
            }
        }
    }
    Many thanks.

    It's a bug in H263; if you test it without the H263 track, or with another video codec, the release will be OK.
    You can try to use a non-Sun H263 codec like the one from the fobs or jffmpeg projects.

  • Strange dots in Phantom "packed" .cine-files with Mac OSX 10.9

    Hi there
    I have a problem with my Miro .cine files. After upgrading my operating system to Mac OS X 10.9 (Mavericks) I found strange dots on the right side of the frame in my footage shot with the Phantom Miro 320LC.
    I first thought it might be a bad pixel on the camera, but then I checked older footage from the same camera that was already edited and working perfectly fine. Even those .cine files now have those dots in Premiere CC, SpeedGrade & DaVinci Resolve 10 Lite (free version). I also own the GlueTools plugin for the .cine codec, and if I load the files into After Effects the dots are NOT there.
    Screenshot Premiere CC - top right & bottom right of the frame
    Screenshot AfterEffects w/ GlueTools Plugin
    Another work-around is to load the .cine files into the PCC software and save them as "unpacked" .cine files. The dots are then gone in all programs mentioned above. The only program that interprets the footage correctly is DaVinci; all other programs show the file approx. 2 stops darker and with the same overall green tint.
    Have you encountered the same problems? Did you find a solution beside downgrading OSX to something like 10.6.?
    As a reference I uploaded 3 files into my dropbox:
    https://www.dropbox.com/sh/aukk8hj8xjhkp1r/AACd5LwaanRKj4xhF1SHPrSla
    1. file: c_15153_187_ORG.cine --> original .cine file from the Miro
    2. file: miro_test_unpacked.cine --> original .cine file converted with the PCC software and saved "unpacked"
    3. file: miro_test_packed.cine --> original .cine file converted with the PCC software and saved "packed"
    I'm working as a Highspeed-Operator, the post is normally done somewhere else and the systems vary from MAC to PC with Adobe, Avid or ...
    Any suggestions would be appreciated.
    Kind regards
    torsten

    Is anybody from Adobe monitoring this issue? I'm having the same problems with Premiere and .CINE files. Previewing them looks fine (no longer having the darker issue) but when I render, it becomes green and pixelated.

  • Where is the file descriptor leak in this code?

    The following "appendStringToFile" method is used to append a String to a file. My java app calls this method a few times per minute, and then crashes after running for about 12 hours. The exception is "Too many open files". The code that calls it does so from a synchronized block, so concurrency is not the problem, and it would seem that only one file descriptor should be used at a time.
    Can anyone find the problem?
         public static void createParentDirectoryIfNeeded(String path) {
              String dirPath = path.substring(0, path.lastIndexOf('/'));
              File dir = new File(dirPath);
              if (!dir.exists()) {
                   dir.mkdirs();
              }
         }

         public static void appendStringToFile(String s, String path) {
              FileWriter fileWriter = null;
              try {
                   createParentDirectoryIfNeeded(path);
                   // create file if it doesn't already exist
                   File file = new File(path);
                   file.createNewFile();
                   if (s != null) {
                        fileWriter = new FileWriter(file, true);
                        fileWriter.write(s.toCharArray());
                   }
              } catch (IOException ioe) {
                   ErrorHandler.handleError(ioe, LOG);
              } finally {
                   if (fileWriter != null) {
                        try {
                             fileWriter.close();
                        } catch (IOException ioe) {
                             ErrorHandler.handleError(ioe, LOG);
                        }
                   }
              }
         }

    Edited by: mikewertheim on Sep 26, 2008 11:54 AM

    I don't know what is causing your problem but I can suggest several improvements.
    1) Given a file 'f', one can create the parent directory using f.getParentFile().mkdirs();, so there is no need for String dirPath = path.substring(0, path.lastIndexOf('/'));.
    2) There is no need to test whether the directory exists before creating it. If it already exists, f.getParentFile().mkdirs(); will simply do nothing.
    3) There is no need to call file.createNewFile();, because if the file does not exist then fileWriter = new FileWriter(file, true); will create it.
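    Putting those three suggestions together, the method might look like the sketch below. The class name AppendDemo and the temp-directory path in main are illustrative, not from the original post; the essential property is that the writer is opened only when there is something to write and is always closed in finally, so at most one descriptor is open per call.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

// Illustrative rewrite applying the three suggestions above:
// no exists() check, no createNewFile(), writer closed in finally.
public class AppendDemo {
    public static void appendStringToFile(String s, String path) throws IOException {
        File file = new File(path);
        File parent = file.getParentFile();
        if (parent != null) {
            parent.mkdirs(); // does nothing if the directory already exists
        }
        if (s == null) {
            return; // nothing to write, so no descriptor is ever opened
        }
        FileWriter fileWriter = new FileWriter(file, true); // creates the file if absent
        try {
            fileWriter.write(s);
        } finally {
            fileWriter.close(); // always release the descriptor
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical demo path under the system temp directory.
        String path = System.getProperty("java.io.tmpdir") + "/appenddemo/out.txt";
        new File(path).delete();
        appendStringToFile("one\n", path);
        appendStringToFile("two\n", path);
        BufferedReader r = new BufferedReader(new FileReader(path));
        System.out.println(r.readLine()); // one
        System.out.println(r.readLine()); // two
        r.close();
    }
}
```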

  • No of file descriptors in solaris 10

    hi,
    I had an open-files issue and updated the number of file descriptors with the following command (using zones on Solaris 10 running on SPARC):
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME
    I wanted to check: is there any way to know whether the new number of files has come into effect, and is it also possible to check how many files are currently open, just to make sure I am not reaching the limits?
    Thank you
    Jonu Joy

    Thank you alan
    even after setting the max file descriptor to 8192, the output from pfiles shows 4096:
    Current rlimit: 4096 file descriptors
    Would you know if there is something wrong with the command I am using: projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME (I am issuing this command as root)?
    thank you
    Jonu Joy

  • "The document 'Backup of Backup of Oral Report' could not be saved as 'Oral Report'. Bad file descriptor"

    Hello,
    Whenever I try to save my Keynote file, I receive an error message saying: "The document 'Backup of Backup of Oral Report' could not be saved as 'Oral Report'. Bad file descriptor".
    I've tried using other name to save my file, saving the file as a backup, saving it within my documents, and on a USB, but the file won't save. I'm using Keynote '09, Version 5.1.1 (1034) on my Mac Desktop running Mac OS X Version 10.6.8
    How can I fix this problem and save my file?

    Change iCloud backup to local backup on this computer. This worked for me.

  • .CIN file doesn't open

    Hello,
         I got a .CIN file from a student friend and I was looking for an application that opens this kind of file. I searched the internet and found that Adobe Photoshop CS6 opens them. I installed the trial version and tried to open the file, but it keeps telling me "Could not complete your request because the file-format module cannot parse the file". I don't know what to do, so I will upload the files here; please help me to open them because I really need them for my project.
    http://speedy.sh/csS9J/Imagos.CIN
    http://speedy.sh/8JCA2/Sal-du-cuorse.CIN
    Thanks,
    Sany

    Looking in the file I see "Calculux Indoor Project file". You may want to look at this link:
    http://www.corrupteddatarecovery.com/Repair/Calculux-Indoor-project-File-Repair-CIN-Data-Conversion.asp

  • File descriptor leak in socket programming

    We have a complex socket programming client package in java using java.nio (Selectors, Selectable channel).
    We use the package to connect to a server.
    Whenever the server is down, it tries to reconnect to the server again at regular intervals.
    In that case, the number of open file descriptors builds up with each try. I am able to confirm this using the "pfiles <pid>" command.
    But, it looks like we are closing the channels, selectors and the sockets properly when it fails to connect to the server.
    So we are unable to find the coding that causes the issue.
    We run this program in solaris.
    Is there a tool to track down the code that leaks the file descriptors?
    Thanks.

    Don't close the selector; there is a selector leak. Just close the socket channel. As this is a client, you should then also call selector.selectNow() to have the close take final effect; otherwise there is also a socket leak.
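    A minimal sketch of that advice, under the assumption of a plain non-blocking client (the class name, loopback host, and port below are illustrative, not from the original post): the Selector is opened once and reused across reconnect attempts, only the SocketChannel is closed after a failed attempt, and selectNow() is called after the close so the cancelled key, and with it the socket descriptor, is actually released.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class ReconnectSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open(); // opened once, reused across attempts
        for (int attempt = 0; attempt < 3; attempt++) {
            SocketChannel channel = SocketChannel.open();
            channel.configureBlocking(false);
            try {
                // Port 9 on loopback: the connection is expected to fail,
                // simulating the server being down.
                channel.connect(new InetSocketAddress("127.0.0.1", 9));
                channel.register(selector, SelectionKey.OP_CONNECT);
                selector.select(100); // brief wait for the connect to resolve
            } catch (IOException connectFailed) {
                // server down: fall through to the cleanup below
            } finally {
                channel.close();      // close the channel, NOT the selector
                selector.selectNow(); // flush the cancelled-key set so the
                                      // socket descriptor is really released
            }
        }
        System.out.println("open keys after cleanup: " + selector.keys().size());
        selector.close(); // closed exactly once, at shutdown
    }
}
```

    Closing the channel only cancels its SelectionKey; the key (and the underlying socket) stays registered until the next selection operation, which is why the selectNow() after the close matters.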

  • Help tracking down a file descriptor leak under java 6

    I have a large application I work on that runs fine under java5 (apart from possibly the latest update) but running under java 6 results in file descriptors used for TCP sockets being leaked.
    I'm testing this under FreeBSD 6 (both i386 and amd64) using diablo JDK and a port build jdk-1.6.0.3p3 but I have had reports from other users of exactly the same issue under various linux distributions. There are some reports that going back as far as 1.6.0b5 will resolve the issue but no later version works and a few reports that the latest 1.5 updates have the same issue.
    This application is using standard IO so Socket/ServerSocket and occasionally SSLSocket, no NIO is involved. Under the problem JDKs it will run for a while before available FDs are exhausted and then fall over with a "too many open files" exception. So far I have been unable to recreate the situation in a simple testcase and the fact it works fine under earlier JDKs is really causing me issues with deciding where to look for the issue.
    Using lsof to watch the FDs that are leaked I see a steadily increasing number shown in the following state:
    java 23438 djb 54u IPv4 0xffffff0091ad02f8 0t0 TCP *:* (CLOSED)
    java 23438 djb 55u IPv4 0xffffff0105aa45f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 56u IPv4 0xffffff01260c15f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 57u IPv4 0xffffff012a2ae8e8 0t0 TCP *:* (CLOSED)
    If these were showing as say (CLOSE_WAIT) then I would understand where they are coming from but as far as I understand the above means the socket has been fully closed but the FD simply hasn't been released. I'm not an expert on the TCP protocol however so I may be wrong here.
    I did try making the application set SoLinger(0,true) on all sockets which of course made all connecting clients think the connection was aborted rather than gracefully closed but even with this setting the FD leak persisted.
    I've gone as far as looking at what I think are the relevant parts of the src for both JDK versions I am using but there are very few changes and nothing that obviously looks linked.
    I'm fully prepared to spend a lot of time looking into this and I'm sure I'd eventually find the cause but if anyone here already knows what the answer may be or can simply give me a nudge in the best direction to look I would be very grateful.

    After weeks of dancing around the issue, we narrowed it down to garbage collection. If we make System.gc() run periodically, the file descriptors get garbage-collected properly. I've tried playing with the settings by using -XX:+UseConcMarkSweepGC, which seems to help a great deal while the system is under stress. However, when there is light activity the file descriptors grow again and eventually bring everything down.
    Any clues? Is there any way to make the GC perform full collections more often?
    Please help!
