File descriptor

Hi,
I am a newbie at this. Currently I am facing a problem where my application server (Sun ONE Application Server 7 on Solaris 8) keeps throwing the following error:
PR_PROC_DESC_TABLE_FULL_ERROR: file descriptor.
This has made the application very unstable. The Sun Application Server 7 documentation online says that the rlim_fd_max value needs to be raised to 4096; the current value is 1024.
I need advice: should this value be changed to a higher value (4096 or more)? And is this setting made at the Solaris OS level? Or do you have any other suggestion or concern? Please help.
Thanks

Yes, you need to increase the number of file descriptors available to the process. For Solaris 8, add the following line to /etc/system (rlim_fd_max is the hard limit; the soft limit rlim_fd_cur may also need raising, and a reboot is needed for /etc/system changes to take effect):
set rlim_fd_max=4096

Similar Messages

  • How do I find the number of file descriptors in use by the system?

    Hey folks,
    I am trying to figure out how many file descriptors my Leopard system has in use. On FreeBSD, this is exposed via sysctl at the OID kern.open_files (or something close to that, can't recall exactly what the name is). I do see that OS X has kern.maxfiles, which gives the maximum number of file descriptors the system can have open, but I don't see a sysctl that tells me how many of the 12288 descriptors the kernel thinks are in use.
    There's lsof, but is that really the only way? And, I'm not even sure if I can just equate the number of lines from lsof to the number of in use descriptors. I don't think it's that easy (perhaps it is and I'm just over complicating things).
    So, anyone know where this information is?
    Thanks for your time.

    glsmith wrote:
    There's lsof, but is that really the only way? And, I'm not even sure if I can just equate the number of lines from lsof to the number of in use descriptors.
    Can't think of anything other than lsof right now. However:
    Only root can list all open files; all other users see only their own.
    There is significant duplication, and
    All types of file descriptor are listed, which you may not want, so you need to consider filtering.
    As an example, the following will count all regular files opened by all users:
    sudo lsof | awk '/REG/ { print $NF }' | sort -u | wc -l
    If you run it without the sudo, you get just your own open files.

  • RFFOCA_T: DME with file descriptor in first line (RBC)

    Hi All,
    I've customized the automatic payment run for a company located in Canada, including the DME file generated by report RFFOCA_T. The DME file looks good, but sadly the house bank (RBC, Royal Bank of Canada) expects two things to be different:
    "Different formats now exist for the Royal Bank and CIBC from the default CPA-005 specification.
    • Type 'A' and 'C' records have been modified to handle RBC and CIBC
    • A parameter was added to job submission to request the bank type
    This process has been revised to include two headers as part of the tape_header code segment.
    • The first header must be the first line in the file and appear in the following format: $$AAPDCPA1464[PROD]NL$$
    • The second header (positions 36 to 1464) must be filled with blanks, not zeros"
    (taken from "SCT Banner, Finance, Release Guide - January 2005, Release 7.0")
    In our DME file the second header (positions 36 to 1464) is correct, but the first header is completely missing.
    RBC wrote me in an email: "The first line of the file needs the file descriptor ($$AAPDCPA1464[PROD]NL$$). The date format and the client number are correct. When the $$ file descriptor has been added, please upload the TEST file."
    I could not find any solution on SAP/OSS - can anybody help, please?
    Thanks a lot!
    Sandra.

    Hi Revi,
    I'm not sure I understand you correctly.
    The problem is not only the $$ at the beginning: the whole expected first line, the file descriptor, is missing. As we saw in the report code, it is simply not generated. I hope there is a simple solution, like an update or a note, but maybe a programmer needs to extend the report itself?
    Thanks,
    Sandra

  • "IOException: Bad file descriptor" thrown during readline()

    I'm working on a system to send data to Bluetooth devices. Currently I have a dummy program that "finds" Bluetooth devices by listening for input on System.in; when one is found, the system sends some data to the device over Bluetooth. Here is the code that listens for input on System.in:
    InputStreamReader isr = new InputStreamReader(System.in);
    BufferedReader br = new BufferedReader(isr);
    boolean streamOpen = true;
    while(streamOpen) {
         String next = "";
         System.out.println("waiting for Input: ");
         try {
              next = br.readLine();
              // other code here
         } catch (IOException ioe) {
              ioe.printStackTrace();
         }
    } // end of while
    This is running in its own thread, constantly listening for input from System.in. There is also another thread that handles pushing the data to the bluetooth device. It works the first time it reads input; then the other thread starts running as well, printing output to System.out. When the data has successfully been pushed to the device, the system waits for me to enter more information. As soon as I type something and press return, I get an endless (probably infinite, if I don't kill the process) list of "IOException: Bad file descriptor" exceptions thrown from the readLine() method.
    Here is what is being printed:
    Waiting for Input: // <-- This is the thread listening for input on System.in
    system started with 1 Bluetooth Chip // From here down is the thread that pushes data to the BT device
    next device used 0
    default device 0000000000
    start SDP for 0000AA112233
    *** obex_push: 00:00:AA:11:22:33@9, path/to/file.txt, file.txt
    I'm not even sure which line it's trying to read when the exception gets thrown, whether it's the first line after "Waiting for Input: " or it's the line where I actually type something and hit return.
    Any ideas why this might be happening? Could it have something to do with reading from System.in from a thread that is not the main thread?
    Also, this is using java 1.6

    Actually, restarting the stream doesn't work either... here's a sample program that I wrote.
    public class ExitListener extends Thread {
         private BufferedReader br;
         private boolean threadRunning;
         public ExitListener(UbiBoardINRIA ubiBoard) {
              super("Exit Listener");
              threadRunning = true;
              InputStreamReader isr = new InputStreamReader(System.in);
              br = new BufferedReader(isr);
         }
         public void run() {
              while (threadRunning) {
                   try {
                        String read = br.readLine();
                        if (read.equalsIgnoreCase("Exit")) {
                             threadRunning = false;
                        }
                   } catch (IOException ioe) {
                        System.out.println("Can you repeat that?");
                        try {
                             br.close();
                             br = new BufferedReader(new InputStreamReader(System.in));
                        } catch (IOException ioe2) {
                             ioe2.printStackTrace();
                             System.out.println("Killing this thread");
                             threadRunning = false;
                        }
                   } // end of catch
              }
         } // end of run
    }
    Output:
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I know that this is probably not enough code to really see the problem, but my main question is: what could be going on elsewhere in the code that would cause this BufferedReader not to be able to re-open?
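    For what it's worth, a hedged guess at the cause: closing br in the catch block also closes System.in itself, and a process cannot reopen its own stdin once it has been closed, so every replacement BufferedReader starts out dead and readLine() fails forever. A minimal sketch of the usual pattern - one long-lived thread that owns System.in and never closes it (the class below is illustrative, not from the original application):
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class StdinListener extends Thread {
         private volatile boolean running = true;

         public void run() {
              // Wrap System.in once and keep it for the life of the process.
              BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
              while (running) {
                   try {
                        String line = br.readLine();
                        if (line == null || line.equalsIgnoreCase("exit")) {
                             running = false; // EOF or explicit exit: stop reading, but leave System.in open
                        }
                   } catch (IOException ioe) {
                        ioe.printStackTrace();
                        running = false; // bail out instead of retrying on a dead stream
                   }
              }
         }

         public static void main(String[] args) {
              new StdinListener().start();
         }
    }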

  • Suddenly can't create dirs or files!? "Bad file descriptor"

    Tearing my hair out...
    Suddenly, neither root nor users can create files or directories in directories under /home. Attempting to do so gets: Error -51 in Finder, "Bad file descriptor" from command line, and "Invalid file handle" via SMB.
    However, files and dirs can be: read, edited, moved, and occasionally copied. Rebooting made no difference.
    Anyone have a clue on where to start on this?
    Mac OS X 10.3.9. Dual G4 Xserve with 2 x 7 x 250 GB XRAID.

    Indeed. This whole episode has exposed a rather woeful lack of robustness on the part of the Xserve and XRAID... various things failing, and the server hanging completely, as a result of a few bad files on disk, with a lack of useful feedback as to what was happening.
    Best I can tell, we had reached the stage where the next available disk location for directory or file was bad... blocking any further additions.
    I've embarked on the process of copying everything off, removing the crash-provoking files, replacing one bad drive (hot swap didn't work), erasing everything, performing a surface-conditioning (bad-block-finding) procedure, and maybe later this century copying all the files back.
    Looks to me like the bad-block-finding procedure is finding a few bad blocks on the supposedly good drives... presumably it will isolate those, but maybe we need to get more new drives.

  • Bad File Descriptor in /dev/fd/3, and 94Gb of disk space missing

    I noticed a few days ago, possibly as the result of a recent kernel panic, that I have a large chunk of hard drive space missing. The Finder reports that I have approximately 89Gb of free space, but using "df" reports that there is approximately 178Gb free. Using "du" doesn't report any unexpected huge files, so I tried running GrandPerspective. In addition to the usual file usage and free space, this shows a single 94Gb block of "miscellaneous used space".
    I then booted into Single User mode to run fsck on the startup drive. This reported several errors, and took 3 passes to repair the directory structure, but didn't recover the missing space. I have subsequently run TechTool Pro and DiskWarrior on the startup drive (both of which found various minor errors), but the 94Gb still refuses to show itself.
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    After searching Google, a "Bad file descriptor" error points to an inode issue that fsck cannot fix, but I don't know enough (read: anything) about inodes to risk running the clri command to zero the problem inode.
    Short of blanking the startup disk and installing from scratch (not an attractive option), is there anything I can do to fix the broken inode and recover the missing space?
    Any help appreciated.

    Drawing Business wrote:
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    This is not an error and always happens with find unless you exclude the /dev hierarchy from the search. (Interestingly this seems to have gone away with 10.5??)
    To locate your missing space, try WhatSize. Another alternative which I have not used personally is Disk Inventory X.
    As an additional point, with 10.4 it is actually better to use Disk Utility, since it does more than fsck; see "Resolve startup issues and perform disk maintenance with Disk Utility and fsck", quote:
    Note: If you're using Mac OS X 10.4 or later, you should use Disk Utility instead of fsck, whenever possible.

  • Problem with file descriptors not released by JMF

    Hi,
    I have a problem with file descriptors not being released by JMF. My application opens a video file, creates a DataSource and a DataProcessor, and the video frames generated are transmitted using the RTP protocol. Once video transmission ends, if we stop and close the DataProcessor associated with the DataSource, the file descriptor identifying the video file is not released (checkable through /proc/pid/fd). If we repeat this process again and again, the process reaches the maximum number of file descriptors allowed by the operating system.
    The same problem has been reproduced with JMF-2.1.1e-Linux in several environments:
    - Red Hat 7.3, Fedora Core 4
    - jdk1.5.0_04, j2re1.4.2, j2sdk1.4.2, Blackdown Java
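    Incidentally, since the leak is visible through /proc/pid/fd, the test itself can count its own descriptors before and after closing the processor. A minimal sketch of such a check (Linux-specific, and the helper below is mine, not part of the original code):
    import java.io.File;

    class FdCounter {
         // Counts this JVM's open file descriptors by listing /proc/self/fd (Linux only).
         // Call it before and after p.close() to see whether the video file's FD is released.
         static int openFdCount() {
              File[] entries = new File("/proc/self/fd").listFiles();
              return (entries == null) ? -1 : entries.length;
         }
    }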
    This is part of the source code:
    // video.avi with tracks audio(PCMU) and video(H263)
    String url = "video.avi";
    if ((ml = new MediaLocator(url)) == null) {
        Logger.log(ambito, refTrazas + "Cannot build media locator from: " + url);
        return 1;
    }
    try {
        // Create a DataSource given the media locator.
        Logger.log(ambito, refTrazas + "Creating JMF data source");
        try {
            ds = Manager.createDataSource(ml);
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Cannot create DataSource from: " + ml);
            return 1;
        }
        p = Manager.createProcessor(ds);
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Failed to create a processor from the given url: " + e);
        return 1;
    } // end try-catch
    p.addControllerListener(this);
    Logger.log(ambito, refTrazas + "Configure Processor.");
    // Put the Processor into configured state.
    p.configure();
    if (!waitForState(p.Configured)) {
        Logger.log(ambito, refTrazas + "Failed to configure the processor.");
        p.close();
        p = null;
        return 1;
    }
    Logger.log(ambito, refTrazas + "Configured Processor OK.");
    // So I can use it as a player.
    p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.RAW_RTP));
    // videoTrack: track control for the video track
    DrawFrame draw = new DrawFrame(this);
    // Instantiate and set the frame access codec to the data flow path.
    try {
        Codec codec[] = {
            draw,
            new com.sun.media.codec.video.colorspace.JavaRGBToYUV(),
            new com.ibm.media.codec.video.h263.NativeEncoder()};
        videoTrack.setCodecChain(codec);
    } catch (UnsupportedPlugInException e) {
        Logger.log(ambito, refTrazas + "The processor does not support effects.");
    } // end try-catch CodecChain creation
    p.realize();
    if (!waitForState(p.Realized)) {
        Logger.log(ambito, refTrazas + "Failed to realize the processor.");
        return 1;
    }
    Logger.log(ambito, refTrazas + "Realized processor OK.");
    /* After realizing the processor, THESE LINES OF SOURCE CODE DO NOT RELEASE ITS FILE DESCRIPTOR:
    p.stop();
    p.deallocate();
    p.close();
    return 0;
    */
    // It continues up to the end of the transmission, properly drawing each video frame and transmitting them.
    Logger.log(ambito, refTrazas + "Create Transmit.");
    try {
        int result = createTransmitter();
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Error Create Transmitter.");
        return 1;
    } // end try-catch transmitter
    Logger.log(ambito, refTrazas + "Start Processor.");
    // Start the processor.
    p.start();
    return 0;
    } // end of main code

    // stop when event "EndOfMediaEvent"
    public int stop() {
        try {
            /* THIS PIECE OF CODE AND VARIATIONS HAVE BEEN TESTED
               AND THE FILE DESCRIPTOR IS NEVER RELEASED */
            p.stop();
            p.deallocate();
            p.close();
            p = null;
            for (int i = 0; i < rtpMgrs.length; i++) {
                if (rtpMgrs[i] == null) continue;
                Logger.log(ambito, refTrazas + "removeTargets;");
                rtpMgrs[i].removeTargets("Session ended.");
                rtpMgrs[i].dispose();
                rtpMgrs[i] = null;
            }
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Error Stopping: " + e);
            return 1;
        }
        return 0;
    } // end of stop()

    // Controller Listener.
    public void controllerUpdate(ControllerEvent evt) {
        Logger.log(ambito, refTrazas + "\nControllerEvent." + evt.toString());
        if (evt instanceof ConfigureCompleteEvent ||
            evt instanceof RealizeCompleteEvent ||
            evt instanceof PrefetchCompleteEvent) {
            synchronized (waitSync) {
                stateTransitionOK = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof ResourceUnavailableEvent) {
            synchronized (waitSync) {
                stateTransitionOK = false;
                waitSync.notifyAll();
            }
        } else if (evt instanceof EndOfMediaEvent) {
            Logger.log(ambito, refTrazas + "\nEvent EndOfMediaEvent.");
            this.stop();
        } else if (evt instanceof ControllerClosedEvent) {
            Logger.log(ambito, refTrazas + "\nEvent ControllerClosedEvent");
            synchronized (waitSync) {
                close = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof StopByRequestEvent) {
            Logger.log(ambito, refTrazas + "\nEvent StopByRequestEvent");
            synchronized (waitSync) {
                stop = true;
                waitSync.notifyAll();
            }
        }
    }
    Many thanks.

    It's a bug in the H263 codec; if you test without the H263 track, or with another video codec, the release works fine.
    You can try using a non-Sun H263 codec, such as the one from the fobs or jffmpeg projects.

  • Where is the file descriptor leak in this code?

    The following "appendStringToFile" method is used to append a String to a file. My Java app calls this method a few times per minute, and then crashes after running for about 12 hours with a "Too many open files" exception. The code that calls it does so from a synchronized block, so concurrency is not the problem, and it would seem that only one file descriptor should be in use at a time.
    Can anyone find the problem?
         public static void createParentDirectoryIfNeeded(String path) {
              String dirPath = path.substring(0, path.lastIndexOf('/'));
              File dir = new File(dirPath);
              if (!dir.exists()) {
                   dir.mkdirs();
              }
         }

         public static void appendStringToFile(String s, String path) {
              FileWriter fileWriter = null;
              try {
                   createParentDirectoryIfNeeded(path);
                   // create file if it doesn't already exist
                   File file = new File(path);
                   file.createNewFile();
                   if (s != null) {
                        fileWriter = new FileWriter(file, true);
                        fileWriter.write(s.toCharArray());
                   }
              } catch (IOException ioe) {
                   ErrorHandler.handleError(ioe, LOG);
              } finally {
                   if (fileWriter != null) {
                        try {
                             fileWriter.close();
                        } catch (IOException ioe) {
                             ErrorHandler.handleError(ioe, LOG);
                        }
                   }
              }
         }

    I don't know what is causing your problem, but I can suggest several improvements.
    1) Given a file 'f', one can create the parent directory using f.getParentFile().mkdirs(); so there is no need for String dirPath = path.substring(0, path.lastIndexOf('/'));
    2) There is no need to test whether the directory exists before creating it. If it already exists, f.getParentFile().mkdirs(); will just do nothing.
    3) There is no need to use file.createNewFile(); because if the file does not exist, fileWriter = new FileWriter(file, true); will create it.
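    Putting those suggestions together, a minimal sketch of the simplified method might look like the following. It assumes Java 7+ for try-with-resources (the original thread predates that), and substitutes printStackTrace() where the original called ErrorHandler.handleError(ioe, LOG):
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    class FileAppender {
         public static void appendStringToFile(String s, String path) {
              if (s == null) {
                   return; // nothing to append
              }
              File file = new File(path);
              File parent = file.getParentFile();
              if (parent != null) {
                   parent.mkdirs(); // no-op if the directory already exists
              }
              // try-with-resources closes the writer (and its file descriptor)
              // even when write() throws
              try (FileWriter fileWriter = new FileWriter(file, true)) {
                   fileWriter.write(s);
              } catch (IOException ioe) {
                   ioe.printStackTrace(); // original: ErrorHandler.handleError(ioe, LOG)
              }
         }
    }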

  • Number of file descriptors in Solaris 10

    Hi,
    I had an open-files issue and updated the number of file descriptors with the following command (using zones on Solaris 10 running on SPARC):
    projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME
    I wanted to check: is there a way to verify that the new number of file descriptors has come into effect, and is it also possible to check how many files are currently open, just to make sure I am not reaching the limit?
    Thank you
    Jonu Joy

    Thank you Alan.
    Even after setting the max file descriptor limit to 8192, the output from pfiles still shows 4096:
    Current rlimit: 4096 file descriptors
    Would you know if there is something wrong with the command I am using - projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME (I am issuing this command as root)?
    Thank you
    Jonu Joy

  • "The document 'Backup of Backup of Oral Report' could not be saved as 'Oral Report'. Bad file descriptor"

    Hello,
    Whenever I try to save my Keynote file, I receive an error message saying: "The document 'Backup of Backup of Oral Report' could not be saved as 'Oral Report'. Bad file descriptor".
    I've tried using other names to save the file, saving it as a backup, saving it within my documents, and on a USB drive, but the file won't save. I'm using Keynote '09, version 5.1.1 (1034) on my desktop Mac running Mac OS X 10.6.8.
    How can I fix this problem and save my file?

    Change iCloud backup to local backup on this computer. That worked for me.

  • File descriptor leak in socket programming

    We have a complex socket-programming client package in Java using java.nio (Selectors, SelectableChannel).
    We use the package to connect to a server.
    Whenever the server is down, it tries to reconnect to the server again at regular intervals.
    In that case, the number of open file descriptors builds up with each try. I am able to confirm this using the "pfiles <pid>" command.
    But it looks like we are closing the channels, selectors, and sockets properly when the connection attempt fails.
    So we are unable to find the code that causes the issue.
    We run this program on Solaris.
    Is there a tool to track down the code that leaks the file descriptor?
    Thanks.

    Don't close the selector - there is a selector leak. Just close the socket channel. As this is a client, you should then also call selector.selectNow() to have the close take final effect. Otherwise there is also a socket leak.
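    A minimal sketch of that close sequence on the reconnect path (the names here are illustrative, since the original package isn't shown):
    import java.io.IOException;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    class ReconnectHelper {
         // After a failed connect attempt: close the channel but keep the selector.
         // close() cancels the channel's registered keys; selectNow() flushes the
         // cancelled-key set so the underlying socket FD is actually released.
         static void abandonAttempt(Selector selector, SocketChannel channel) throws IOException {
              channel.close();
              selector.selectNow();
         }
    }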

  • Help tracking down a file descriptor leak under java 6

    I have a large application I work on that runs fine under java5 (apart from possibly the latest update) but running under java 6 results in file descriptors used for TCP sockets being leaked.
    I'm testing this under FreeBSD 6 (both i386 and amd64) using diablo JDK and a port build jdk-1.6.0.3p3 but I have had reports from other users of exactly the same issue under various linux distributions. There are some reports that going back as far as 1.6.0b5 will resolve the issue but no later version works and a few reports that the latest 1.5 updates have the same issue.
    This application is using standard IO so Socket/ServerSocket and occasionally SSLSocket, no NIO is involved. Under the problem JDKs it will run for a while before available FDs are exhausted and then fall over with a "too many open files" exception. So far I have been unable to recreate the situation in a simple testcase and the fact it works fine under earlier JDKs is really causing me issues with deciding where to look for the issue.
    Using lsof to watch the FDs that are leaked I see a steadily increasing number shown in the following state:
    java 23438 djb 54u IPv4 0xffffff0091ad02f8 0t0 TCP *:* (CLOSED)
    java 23438 djb 55u IPv4 0xffffff0105aa45f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 56u IPv4 0xffffff01260c15f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 57u IPv4 0xffffff012a2ae8e8 0t0 TCP *:* (CLOSED)
    If these were showing as say (CLOSE_WAIT) then I would understand where they are coming from but as far as I understand the above means the socket has been fully closed but the FD simply hasn't been released. I'm not an expert on the TCP protocol however so I may be wrong here.
    I did try making the application call setSoLinger(true, 0) on all sockets, which of course made all connecting clients think the connection was aborted rather than gracefully closed, but even with this setting the FD leak persisted.
    I've gone as far as looking at what I think are the relevant parts of the src for both JDK versions I am using but there are very few changes and nothing that obviously looks linked.
    I'm fully prepared to spend a lot of time looking into this and I'm sure I'd eventually find the cause but if anyone here already knows what the answer may be or can simply give me a nudge in the best direction to look I would be very grateful.

    After weeks of dancing around the issue, we narrowed it down to garbage collection. If we make System.gc() run periodically, file descriptors get garbage-collected properly. I've tried playing with the settings by using -XX:+UseConcMarkSweepGC, which seems to help a great deal while the system is under stress. However, when there is light activity, the file descriptors grow again and eventually bring everything down.
    Any clues? Is there any way to make the GC perform full collections more often?
    Please help!
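    That behaviour fits the usual explanation: the descriptor of an unclosed socket is only released when the collector runs the object's finalizer, so periodic System.gc() masks a missing close() somewhere. A hedged illustration of the deterministic alternative (simplified, not the poster's actual code):
    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    class ExplicitClose {
         // Accept one connection and release its descriptor deterministically,
         // rather than whenever a finalizer happens to run.
         static void serveOne(ServerSocket server) throws IOException {
              Socket client = server.accept();
              try {
                   // ... talk to the client ...
              } finally {
                   client.close(); // releases the FD immediately, not at GC time
              }
         }
    }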

  • How to determine which file descriptor opened my driver?

    Suppose a user process opens my driver twice. How can my open() routine determine which file descriptor the open call came through? In Linux, the kernel passes in a pointer to a structure representing the open file descriptor. Solaris, however, only passes the device number to open(), so I can determine that my device was opened, but not through which file. I need this information because my driver needs to keep track of all file descriptors opened for the device.
    Thanks!
    -Darren

    I'm still at a loss why you need to know the file descriptor value (unless the app is sufficiently spaghettied that it has to query the driver to figure out what it opened with what). It's like asking what filename was used to open the device (which you can't get either). Since Solaris I/O is based on a STREAMS framework, it would be bad for a driver to even think it has a direct mapping into user space. It would be the same as asking (using /bin/sh):
    prog3 4>&1 3>&1 2>&1 | prog2 | prog1
    and wanting to know from prog1 which descriptor prog3 wrote to. I don't see how Linux even does this properly, since any given file open can have multiple file descriptors (via dup).

  • Upload scenario - WDRuntimeException: Bad file descriptor

    Hi All,
    I'm using the Adobe Document Services on NW04, ADS SP19, NWDS 2.0.19, with IE 6.0.2900 SP2.
    If I use the Upload UI element to show a PDF, I get the error: Bad file descriptor!
    The PDF to upload is not corrupt, and I can open it with Acrobat Reader or with the tutorial example (download/upload scenario).
    I have a binary value attribute mapped as the data element of the Upload UI element. The context value attribute is read into the controller context element, and the Interactive Form element references the attribute as its pdfSource. The same code as the tutorial.
    What's wrong?
    I didn't find anything about the error!
    Thanks for any help.
    Regards Jürgen
    Exception(com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Bad file descriptor) during processing a Web Dynpro Application, Session with IDs: (J2EE7802600)ID1177271450DB10984010199363513659End,78651230d78a11db9c34000d608e44df,Id78651230d78a11db9c34000d608e44df6
    [EXCEPTION]
    com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Bad file descriptor
         at com.sap.tc.webdynpro.clientimpl.http.client.AbstractHttpClient.updateUpLoad(AbstractHttpClient.java:478)
         at com.sap.tc.webdynpro.progmodel.context.ModifiableBinaryType.parse(ModifiableBinaryType.java:95)
         at com.sap.tc.webdynpro.clientserver.data.DataContainer.doParse(DataContainer.java:1418)
         at com.sap.tc.webdynpro.clientserver.data.DataContainer.validatePendingUserInput(DataContainer.java:1328)
         at com.sap.tc.webdynpro.clientserver.data.DataContainer.validatePendingUserInput(DataContainer.java:672)
         at com.sap.tc.webdynpro.clientserver.cal.ClientComponent.validate(ClientComponent.java:624)
         at com.sap.tc.webdynpro.clientserver.cal.ClientApplication.validate(ClientApplication.java:741)
         at com.sap.tc.webdynpro.clientserver.task.WebDynproMainTask.transportData(WebDynproMainTask.java:712)
         at com.sap.tc.webdynpro.clientserver.task.WebDynproMainTask.execute(WebDynproMainTask.java:649)
         at com.sap.tc.webdynpro.clientserver.cal.AbstractClient.executeTasks(AbstractClient.java:59)
         at com.sap.tc.webdynpro.clientserver.cal.ClientManager.doProcessing(ClientManager.java:251)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doWebDynproProcessing(DispatcherServlet.java:154)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:116)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doPost(DispatcherServlet.java:55)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:401)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:266)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:387)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:365)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:944)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:266)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:95)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:160)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
    Caused by: java.io.IOException: Bad file descriptor
         at java.io.FileInputStream.readBytes(Native Method)
         at java.io.FileInputStream.read(FileInputStream.java:177)
         at com.sap.tc.webdynpro.clientimpl.http.client.AbstractHttpClient.writeIn2Out(AbstractHttpClient.java:493)
         at com.sap.tc.webdynpro.clientimpl.http.client.AbstractHttpClient.updateUpLoad(AbstractHttpClient.java:435)
         ... 29 more

    Two things I think helped to solve this:
    PROPAGATE_EXCEPTIONS = True
    in config.py, and removing threading from my vassal ini file. The resulting uwsgi files looked like this:
    /etc/uwsgi/emperor.ini:
    [uwsgi]
    emperor = /etc/uwsgi/vassals
    master = true
    plugins = python2
    uid = http
    gid = http
    /etc/uwsgi/vassals/test.ini:
    [uwsgi]
    chdir = /srv/http/test_dir/src
    wsgi-file = run.py
    callable = app
    processes = 4
    stats = 127.0.0.1:9191
    max-requests = 5000
    enable-threads = true
    vacuum = true
    thunder-lock = true
    socket = /run/uwsgi/test-sock.sock
    chmod-socket = 664
    harakiri = 60
    logto = /var/log/uwsgi/test.log
    I'm not sure about the
    PROPAGATE_EXCEPTIONS = True
    but removing the threads option in test.ini and making sure there was a master option in emperor.ini seems to have solved the issue of SQL statements being tossed around to different threads - or at least it stopped complaining about that and crashing the site.
    Also, don't use the uwsgi package from the distribution; get it from pip, the distro packages are broken.

  • [SOLVED] Cups refuses to work: "bad file descriptor"

    I am not getting CUPS to work on a new installation.
    I need to use cups as a client printing to a printer attached to a separate server. What I did:
    1. I installed the cups package and started/enabled the service in systemd.
    2. The remote server has a working CUPS installation (accessible and working from other computers).
    3. I can see the remote printer listed among the printers in CUPS's local web interface.
    However, every job sent to the printer (from the GUI: KDE based apps such as Okular, etcetera) silently fails.
    No job is ever listed on the CUPS web interface.
    If I try to check on CUPS status with lpstat I get the following error:
    $> lpstat
    lpstat: bad file descriptor
    Any suggestion on how to fix the problem?
    Edit: more info
    It seems that cups is running fine according to systemd:
    [stefano@gorgias ~]$ systemctl status cups
    ● cups.service - CUPS Printing Service
    Loaded: loaded (/usr/lib/systemd/system/cups.service; enabled)
    Active: active (running) since Mon 2014-08-04 17:56:32 CDT; 16h ago
    Main PID: 2832 (cupsd)
    Status: "Scheduler is running..."
    CGroup: /system.slice/cups.service
    └─2832 /usr/bin/cupsd -f
    Aug 04 17:56:32 gorgias systemd[1]: Started CUPS Printing Service.
    but not so according to CUPS itself:
    [stefano@gorgias ~]$ lpstat -t
    scheduler is not running
    no system default destination
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    lpstat: Bad file descriptor
    So systemd tells me the scheduler is running, while CUPS claims it is not.

    Stefano,
    I can't thank you enough for posting your solution! I have spent the last several days trying to get LibreOffice and Evince to even see my printer. I've been up and down so many CUPS web interface sessions, I lost count long ago. Finally, I came across some commands I'd never seen before, in particular 'lpstat', which also gave me the 'bad file descriptor' message. Googling provided next to nothing in the way of help, but it did produce a link to your post, which was like finding a needle in a haystack! Anyway, I updated /etc/cups/client.conf as you suggested, and now my applications can finally see my Brother HL-2280DW.
    For what it's worth, I'm running a fresh archlinux install (as of a week or two ago), and just installed cups a few days ago:
    root@benito:/etc/cups# uname -a
    Linux benito 3.16.1-1-ARCH #1 SMP PREEMPT Thu Aug 14 07:40:19 CEST 2014 x86_64 GNU/Linux
    root@benito:/etc/cups# pacman -Qs cups
    local/brother-hl2280dw 2.0.4_2-3
        Brother HL-2280DW CUPS Driver
    local/cups 1.7.5-1
        The CUPS Printing System - daemon package
    local/cups-filters 1.0.57-1
        OpenPrinting CUPS Filters
    local/cups-pdf 2.6.1-2
        PDF printer for cups
    local/libcups 1.7.5-1
        The CUPS Printing System - client libraries and headers
    local/python2-pycups 1.9.66-2
        Python CUPS Bindings
    local/system-config-printer 1.4.4-1
        A CUPS printer configuration tool and status applet
    Again, my sincere thanks!!
    (BTW, I'd also like to know why /etc/cups/client.conf doesn't work as advertised...)

  • How to increase the per-process file descriptor limit for more than 15 JDBC connections

    If I need more than 15 JDBC connections, the only solution is to increase the per-process file descriptor limit. But how do I increase this limit - by modifying the Oracle server or the JDBC software?
    I'm using the JDBC thin driver to connect to an Oracle 8.0.6 server.
    From JDBC faq:
    Is there any limit on the number of connections for JDBC?
    No. JDBC drivers as such don't have any scalability restrictions of their own.
    The limit may come from the number of 'processes' (in the init.ora file) on the server. However, nowadays we get questions that even when the number of processes is 30, no more than 16 active JDBC-OCI connections can be opened when the JDK is running in the default (green) thread model. This is because the per-process file descriptor limit is exceeded. It is important to note that, depending on whether you use OCI or THIN, and green vs. native threads, a JDBC SQL connection can consume anywhere from 1 to 4 file descriptors. The solution is to increase the per-process file descriptor limit.

    Maybe it is an OS issue, but the suggested solution comes from the Oracle documentation. However, it does not provide a clear enough solution; it just states "The solution is to increase the per-process file descriptor limit".
    Now I know the solution, but not how to increase it...
    Please help.
