Unix - File Descriptors in startWeblogic.sh

Looking at the script - if the HARD limit is 1024, then the
script leaves the soft limit as is (usually at 64).
This doesn't make sense to me.
What is it that I don't understand here?
maxfiles=`ulimit -H -n`
if [ ! $? -a "$maxfiles" != 1024 ]; then
    if [ "$maxfiles" = "unlimited" ]; then
        maxfiles=1025
    fi
    if [ "$maxfiles" -lt 1024 ]; then
        ulimit -n $maxfiles
    else
        ulimit -n 1024
    fi
fi
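(For reference, assuming a POSIX-style shell: you can inspect the two limits separately with
ulimit -S -n
ulimit -H -n
to confirm what the script actually leaves in place.)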
Mike

Hi,
You're running startWebLogic.sh, so I'm assuming this is a *nix platform. If so, reboot your system (to make sure you have a clean start), then try running:
netstat -an|grep 7001
to see if there is something actually listening on port 7001.
If you do find something that is already listening on port 7001, try doing:
ps -ef | more
and review the output from that, to see if you can try to figure out what might be listening on port 7001.
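If lsof is installed (not a given on every *nix box, so treat this as optional), a more direct check is:
lsof -i :7001
which lists the process bound to that port directly.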
Jim

Similar Messages

  • File descriptor setting

    We are using iDS 5.1 SP2 running on Solaris 8. We have iDAR with 2 LDAP servers behind it (1 master, 1 slave).
    We didn't set up the max connections for iDAR, which means unlimited connections are allowed. However, the Unix ulimit setting was 256, which is too low. I changed the setting under /etc/system and rebooted the machine. The ulimit setting is now 4096 for both the hard limit and the soft limit. It looks good.
    However, whenever the total connections to iDAR approach 256, the fwd.log file shows "socket closed". The iDAR is still available, but the sockets are used up.
    I have been wondering why the new setting didn't take effect for iDAR.
    Can anybody help me or give me some clue?
    Thanks!

    Hi,
    Welcome to oracle forums :)
    User wrote:
    Hi,
    We are running Solaris 10 and have set up a project for the Oracle user id. When I run prctl for one of the running processes, I get the output below.
    process.max-file-descriptor
    basic 8.19K - deny 351158
    privileged 65.5K - deny -
    system 2.15G max deny -
    My question is: what is the limit for a process running under this project as far as the max-file-descriptor attribute is concerned? Will it be 8.19K, 65.5K, or 2.15G? And what is the difference among the three? Please advise. Thanks.
    Kernel parameter process.max-file-descriptor: maximum file descriptor index. Oracle recommends *65536*
    For more information on these settings, please refer to the MOS tech note:
    *Kernel setup for Solaris 10 using project files. [ID 429191.1]*
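    If the privileged value needs raising for the Oracle project, a hedged example (the project name user.oracle is illustrative; verify it against the MOS note first):
    projmod -s -K "process.max-file-descriptor=(privileged,65536,deny)" user.oracle
    Processes started under the project afterwards pick up the new value.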
    Hope this helps :)
    Regards,
    X A H E E R

  • Max number of file descriptors in 32 vs 64 bit compilation

    Hi,
    I compiled a simple C app (with the Solaris cc compiler) that attempts to open 10000 file descriptors using fopen(). It runs just fine when compiled in 64-bit mode (after first setting 'ulimit -S -n 10000').
    However, when I compile it in 32-bit mode it fails to open more than 253 files. A call to system("ulimit -a") reports "nofiles (descriptors) 10000".
    Did anybody ever see similar problem before?
    Thanks in advance,
    Mikhail

    On 32-bit Solaris, the stdio "FILE" struct stores the file descriptor (an integer) in an 8-bit field. With 3 files opened automatically at program start (stdin, stdout, stderr), that leaves 253 available file descriptors.
    This limitation stems from early versions of Unix and Solaris, and must be maintained to allow old binaries to continue to work. That is, the layout of the FILE struct is wired into old programs, and thus cannot be changed.
    When 64-bit Solaris was introduced, there was no compatibility issue, since there were no old 64-bit binaries. The limit of 256 file descriptors in stdio was removed by making the field larger. In addition, the layout of the FILE struct is hidden from user programs, so that future changes remain possible, should they become necessary.
    To work around the limit, you can play some games with dup() and closing the original descriptor to make it available for use with a new file, or you can arrange to have fewer than the max number of files open at one time.
    A new interface for stdio is being implemented to allow a large number of files to be open at one time. I don't know when it will be available or for which versions of Solaris.
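    A minimal sketch of the dup()-style workaround mentioned above, assuming POSIX fcntl() (the helper name is illustrative): descriptors that stdio does not need are moved above the 255 boundary with F_DUPFD, keeping the low slots free for fopen().
    #include <fcntl.h>
    #include <unistd.h>
    /* Move fd to the lowest free descriptor at or above 256; returns the
       new descriptor, or the original fd if duplication fails. */
    int move_fd_high(int fd)
    {
        int high = fcntl(fd, F_DUPFD, 256);
        if (high == -1)
            return fd;
        close(fd);    /* frees a sub-256 slot for 32-bit stdio */
        return high;
    }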

  • Extracting the native file descriptor of a Socket .. HOW?

    Hello all,
    I'm trying to use JNI to make the Unix system call getsockopt(), but I don't know how I can get the C socket file descriptor from a Java Socket object. I have to pass this file descriptor, an integer, to the getsockopt() function.
    Any suggestions?
    Fernando

    Use reflection to access the private field impl of the Socket, and after that the private member fd of the class SocketImpl.
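    For the native side, a hedged sketch (the class and function names are illustrative, not from the poster's code): once the integer descriptor has been extracted via reflection, it can be passed straight to getsockopt(). Here it reads SO_RCVBUF as an example option:
    #include <jni.h>
    #include <sys/socket.h>
    /* Returns the socket's receive-buffer size, or -1 if getsockopt() fails. */
    JNIEXPORT jint JNICALL
    Java_SockOpt_getRcvBuf(JNIEnv *env, jclass cls, jint fd)
    {
        int val = 0;
        socklen_t len = sizeof(val);
        if (getsockopt((int) fd, SOL_SOCKET, SO_RCVBUF, &val, &len) == -1)
            return -1;
        return (jint) val;
    }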

  • How do I find the number of file descriptors in use by the system?

    Hey folks,
    I am trying to figure out how many file descriptors my Leopard system has in use. On FreeBSD, this is exposed via sysctl at the OID kern.open_files (or something close to that, can't recall exactly what the name is). I do see that OS X has kern.maxfiles, which gives the maximum number of file descriptors the system can have open, but I don't see a sysctl that tells me how many of the 12288 descriptors the kernel thinks are in use.
    There's lsof, but is that really the only way? And I'm not even sure I can just equate the number of lines from lsof to the number of in-use descriptors. I don't think it's that easy (perhaps it is and I'm just overcomplicating things).
    So, anyone know where this information is?
    Thanks for your time.

    glsmith wrote:
    There's lsof, but is that really the only way? And, I'm not even sure if I can just equate the number of lines from lsof to the number of in use descriptors.
    Can't think of anything other than lsof right now. However:
    Only root can list all open files; all other users see only their own.
    There is significant duplication.
    All types of file descriptor are listed, which you may not want, so you need to consider filtering.
    As an example, the following will count all regular files opened by all users:
    sudo lsof | awk '/REG/ { print $NF }' | sort -u | wc -l
    If you run it without the sudo, you get just your own open files.
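    (One more avenue, offered as an assumption to verify rather than a certainty: some OS X kernels expose a count of open files via sysctl, e.g.
    sysctl kern.num_files
    If that OID is absent on your release, lsof remains the fallback.)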

  • RFFOCA_T: DME with file descriptor in first line (RBC)

    Hi All,
    I've customized the automatic payment run for a company located in Canada - including the DME file generated by the report RFFOCA_T. The DME file looks good - but sadly the house bank (RBC, Royal Bank of Canada) is expecting two things to be different:
    "Different formats now exist for the Royal Bank and CIBC from the default CPA-005 specification.
    • Type 'A' and 'C' records have been modified to handle RBC and CIBC
    • A parameter was added to job submission to request the bank type
    This process has been revised to include two headers as part of the tape_header code segment.
    • The first header must be the first line in the file and appear in the following format: $$AAPDCPA1464[PROD]NL$$
    • The second header (positions 36 to 1464) must be filled with blanks, not zeros
    (taken from "SCT Banner, Finance, Release Guide - January 2005, Release 7.0")
    In our DME-file the second header (position 36 to 1464) is correct, but the first header is completely missing.
    RBC wrote me in an email: "The first line of the file needs the file descriptor ($$AAPDCPA1464[PROD]NL$$). The date format and the client number are correct. When the $$ file descriptor has been added, please upload the TEST file."
    I could not find any solution at SAP/ OSS - can anybody help, please?
    Thanks a lot!
    Sandra.

    Hi Revi,
    I'm not sure I understand you correctly.
    The problem is not only with the $$ at the beginning: the whole expected first line, the file descriptor, is missing. As we saw in the report code, it is simply not generated. I hope there is a simple solution - an update or the like - but maybe we need to have a programmer extend the report itself?
    Thanks,
    Sandra

  • Deleting the data from logical file/unix file

    Hi all.
    I need to delete all the data from a logical file (application server/Unix file), but I don't want to delete the logical file itself (only the data in the logical file should be deleted, i.e. the file should be made empty).
    Thanks in advance.
    Cheers.
    sami

    Hi Sami,
    Refer to this document: https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/4d7aeb7d-0c01-0010-fa8a-a4a8e8968a93
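    As a hedged aside covering the OS level (the path below is illustrative): an existing Unix file can be emptied in place, without deleting it, with
    cat /dev/null > /path/to/your/file
    which truncates it to zero length.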
    Regards,
    Flavya

  • How to delete string or line from unix file(dataset) of application server

    Hi  All,
    After transferring the work area information (all the records) into the dataset (Unix file), when I look at the file on the application server, the last line automatically shows up as a blank line. I am not passing any blank line.
    I have tried with a single record, and the file still generates a blank last line (2nd line).
    When I read the dataset back, it does not read the last blank line, so why is it showing the last blank line?
    How can I delete a string or line from a Unix file (dataset) on the application server?
    Please give your comments to resolve this.
    Thanks
    Tirumula Rao Chinni

    Hi Rio,
    I faced a similar kind of issue working with files on the UNIX platform.
    The extra line is a line feed; to remove it, use:
    DATA : lv_carr_linefd TYPE abap_cr_lf VALUE cl_abap_char_utilities=>cr_lf.
      DATA : lv_carr_return TYPE char1,
             lv_line_feed   TYPE char1.
      lv_carr_return = lv_carr_linefd(1).   " CR_LF is carriage return first...
      lv_line_feed   = lv_carr_linefd+1(1). " ...then line feed (the original had these swapped)
    Note: IMP: the character in ' ' is not a space but a special character entered by pressing ALT and 255 simultaneously.
      REPLACE ALL OCCURRENCES OF lv_line_feed IN l_string WITH ' '.
      REPLACE ALL OCCURRENCES OF lv_carr_return IN l_string WITH ' '.

  • Unable to open Unix file in Unicode system which was created in a non-Unicode system

    We are unable to open a Unix file in a Unicode system when the file was created in a non-Unicode system.
    We have two SAP systems, both ECC 6.0, but System 1 is non-Unicode and System 2 is a Unicode system.
    There is a common Unix directory/folder for both systems.
    Our requirement is to create a file on the common Unix folder and write data to it from System 1.
    In System 2, the same file is opened in appending mode to write further data.
    The file is created in System 1 with the statement below:
    OPEN DATASET g_unix_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    Now I have to append data from System 2 to the same file.
    I have tried the statements below in System 2 to open the file, but the sy-subrc value comes back as 8.
    1> OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING UTF-8.
    2> OPEN DATASET g_unix_file FOR APPENDING IN LEGACY TEXT MODE CODE PAGE cdp IGNORING CONVERSION ERRORS.
    3> OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING DEFAULT.
    4> OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING NON-UNICODE.
    I have tried all the possibilities given in the F1 help for OPEN DATASET, but there is still a problem opening the file in appending as well as output mode. However, the file opens successfully in input mode (read).
    Please advise how to resolve this issue.
    Thanks.

    The message captured is 'Permission denied'. The program is triggered with the system user ID PPID.
    How can I check the security access of that user ID?
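    A hedged starting point at the OS level (the path and user name are illustrative): compare the file's owner, group, and mode against the executing user's groups with
    ls -l /path/to/common/folder
    id <executing user id>
    Appending requires write permission on the file itself; creating requires write permission on the directory.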

  • Files (fonts) appear as Unix files when viewing on an SMB share

    We have a UNIX server running Linux which our Mac users connect to via SMB. We moved our company fonts to this server from an OS X Server. All of our machines are running 10.6.6 or later. Very intermittently, without any rhyme or reason, for some users the fonts show up as Unix files rather than fonts when they mount the font share. On my machine (10.7) they all appear normally. Now, I know that SMB was revamped in Lion, so this may be the reason I can see them without a problem. But for the 10.6 machines, any suggestions on fixing this?

    Any solution on this issue? I have nearly the same exact issue in my environment, but reversed.
    The problem we have is when saving files to the server from the Snow Leopard machine, the older Tiger machines see the fonts as Unix executable files. The Snow Leopard machine sees the files fine and can open everything with no problem.
    This only happens when saving to an SMB server, as I can save the files from Snow Leopard to a flash drive and open them on Tiger with no problems. Additionally, files saved from Tiger to the server are perfectly fine when accessed from Tiger or Snow Leopard.
    We're running Windows 2008 Server and connecting from the Macs using SMB. We also duplicated the issue on a Windows 2003 Server.

  • "IOException: Bad file descriptor" thrown during readline()

    I'm working on a system to send data to bluetooth devices. Currently I have a dummy program that "finds" bluetooth devices by listening for input on System.in, and when one is found, the system sends some data to the device over bluetooth. Here is the code for listening for input on System.in
    InputStreamReader isr = new InputStreamReader(System.in);
    BufferedReader br = new BufferedReader(isr);
    boolean streamOpen = true;
    while (streamOpen) {
         String next = "";
         System.out.println("waiting for Input: ");
         try {
              next = br.readLine();
              // other code here
         } catch (IOException ioe) {
              ioe.printStackTrace();
         }
    } // end of while
    This is running in its own thread, constantly listening for input from System.in. There is also another thread that handles pushing the data to the bluetooth device. It works the first time it reads input, then the other thread starts running also, printing output to System.out. When the data has successfully been pushed to the device, the system waits for me to enter more information. As soon as I type something and press return, I get an endless (probably infinite if I don't kill the process) stream of "IOException: Bad file descriptor" errors thrown from the readLine() method.
    Here is what is being printed:
    Waiting for Input: // <-- This is the thread listening for input on System.in
    system started with 1 Bluetooth Chip // From here down is the thread that pushing data to the BT device
    next device used 0
    default device 0000000000
    start SDP for 0000AA112233
    *** obex_push: 00:00:AA:11:22:33@9, path/to/file.txt, file.txt
    I'm not even sure which line it's trying to read when the exception gets thrown, whether it's the first line after "Waiting for Input: " or it's the line where I actually type something and hit return.
    Any ideas why this might be happening? Could it have something to do with reading from System.in from a thread that is not the main thread?
    Also, this is using java 1.6

    Actually, restarting the stream doesn't work either... here's a sample program that I wrote.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class ExitListener extends Thread {
         private BufferedReader br;
         private boolean threadRunning;

         public ExitListener(UbiBoardINRIA ubiBoard) {
              super("Exit Listener");
              threadRunning = true;
              InputStreamReader isr = new InputStreamReader(System.in);
              br = new BufferedReader(isr);
         }

         public void run() {
              while (threadRunning) {
                   try {
                        String read = br.readLine();
                        if (read.equalsIgnoreCase("Exit")) {
                             threadRunning = false;
                        }
                   } catch (IOException ioe) {
                        System.out.println("Can you repeat that?");
                        try {
                             br.close();
                             br = new BufferedReader(new InputStreamReader(System.in));
                        } catch (IOException ioe2) {
                             ioe2.printStackTrace();
                             System.out.println("Killing this thread");
                             threadRunning = false;
                        }
                   } // end of catch
              }
         } // end of run
    }
    output:
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I'm sorry, can you repeat that command? - Stream closed
    Closed Stream
    Ready?: false
    I know that this is probably not enough code to really see the problem, but my main question is: what could be going on somewhere else in the code that would cause this BufferedReader to not be able to re-open?

  • Suddenly can't create dirs or files!? "Bad file descriptor"

    Tearing my hair out...
    Suddenly, neither root nor users can create files or directories in directories under /home. Attempting to do so gets: Error -51 in Finder, "Bad file descriptor" from command line, and "Invalid file handle" via SMB.
    However, files and dirs can be: read, edited, moved, and occasionally copied. Rebooting made no difference.
    Anyone have a clue on where to start on this?
    Mac OS X 10.3.9. Dual G4 XServe with 2 x 7 x 250 G XRAID.

    Indeed. This whole episode has exposed a rather woeful lack of robustness on the part of the XServe and XRAID... various things failing and the server hanging completely as a result of a few bad files on disk, with a lack of useful feedback as to what was happening.
    Best I can tell, we had reached the stage where the next available disk location for directory or file was bad... blocking any further additions.
    I've embarked on the process of copying everything off, removing the crash-provoking files, replacing one bad drive (hot swap didn't work), erasing everything, performing a surface-conditioning (bad-block-finding) procedure, and maybe later this century I will be copying all the files back.
    Looks to me like the bad block finding procedure is finding a few bad blocks on the supposedly good drives... presumably will isolate those, but maybe we need to get more new drives.

  • Bad File Descriptor in /dev/fd/3, and 94Gb of disk space missing

    I noticed a few days ago, possibly as the result of a recent kernel panic, that I have a large chunk of hard drive space missing. The Finder reports that I have approximately 89Gb of free space, but using "df" reports that there is approximately 178Gb free. Using "du" doesn't report any unexpected huge files, so I tried running GrandPerspective. In addition to the usual file usage and free space, this shows a single 94Gb block of "miscellaneous used space".
    I then booted into Single User mode to run fsck on the startup drive. This reported several errors, and took 3 passes to repair the directory structure, but didn't recover the missing space. I have subsequently run TechTool Pro and DiskWarrior on the startup drive (both of which found various minor errors), but the 94Gb still refuses to show itself.
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    After searching Google, a "Bad file descriptor" error points to an inode issue that fsck cannot fix, but I don't know enough (read: anything) about inodes to risk running the clri command to zero the problem inode.
    Short of blanking the startup disk and installing from scratch (not an attractive option), is there anything I can do to fix the broken inode and recover the missing space?
    Any help appreciated.

    Drawing Business wrote:
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    This is not an error and always happens with find unless you exclude the /dev hierarchy from the search. (Interestingly this seems to have gone away with 10.5??)
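    If you want to keep searching while skipping that hierarchy, a variant of your command (untested here, but standard find syntax) is:
    sudo find / -path /dev -prune -o -size +94371840 -print
    which prunes /dev and prints everything else over the size threshold.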
    To locate your missing space, try WhatSize. Another alternative which I have not used personally is Disk Inventory X.
    As an additional point, with 10.4 it is actually better to use Disk Utility, since it does more than fsck - see "Resolve startup issues and perform disk maintenance with Disk Utility and fsck", quote:
    Note: If you're using Mac OS X 10.4 or later, you should use Disk Utility instead of fsck, whenever possible.

  • Problem with file descriptors not released by JMF

    Hi,
    I have a problem with file descriptors not released by JMF. My application opens a video file, creates a DataSource and a DataProcessor, and the video frames generated are transmitted using the RTP protocol. Once video transmission ends, if we stop and close the DataProcessor associated with the DataSource, the file descriptor identifying the video file is not released (checkable through /proc/pid/fd). If we repeat this processing again and again, the process reaches the maximum number of file descriptors allowed by the operating system.
    The same problem has been reproduced with JMF-2.1.1e-Linux in several environments:
    - Red Hat 7.3, Fedora Core 4
    - jdk1.5.0_04, j2re1.4.2, j2sdk1.4.2, Blackdown Java
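    (A quick way to watch the leak on these systems, with <pid> standing in for the Java process id:
    ls /proc/<pid>/fd | wc -l
    run before and after each open/close cycle.)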
    This is part of the source code:
    // video.avi with tracks audio(PCMU) and video(H263)
    String url="video.avi";
    if ((ml = new MediaLocator(url)) == null) {
        Logger.log(ambito, refTrazas + "Cannot build media locator from: " + url);
    }
    try {
        // Create a DataSource given the media locator.
        Logger.log(ambito, refTrazas + "Creating JMF data source");
        try {
            ds = Manager.createDataSource(ml);
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Cannot create DataSource from: " + ml);
            return 1;
        }
        p = Manager.createProcessor(ds);
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Failed to create a processor from the given url: " + e);
        return 1;
    } // end try-catch
    p.addControllerListener(this);
    Logger.log(ambito,refTrazas+"Configure Processor.");
    // Put the Processor into configured state.
    p.configure();
    if (!waitForState(p.Configured)) {
        Logger.log(ambito, refTrazas + "Failed to configure the processor.");
        p.close();
        p = null;
        return 1;
    }
    Logger.log(ambito, refTrazas + "Configured Processor OK.");
    // So I can use it as a player.
    p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.RAW_RTP));
    // videoTrack: track control for the video track
    DrawFrame draw= new DrawFrame(this);
    // Instantiate and set the frame access codec to the data flow path.
    try {
    Codec codec[] = {
    draw,
    new com.sun.media.codec.video.colorspace.JavaRGBToYUV(),
    new com.ibm.media.codec.video.h263.NativeEncoder()};
    videoTrack.setCodecChain(codec);
    } catch (UnsupportedPlugInException e) {
    Logger.log(ambito,refTrazas+"The processor does not support effects.");
    } // end try-catch CodecChain creation
    p.realize();
    if (!waitForState(p.Realized)) {
        Logger.log(ambito, refTrazas + "Failed to realize the processor.");
        return 1;
    }
    Logger.log(ambito, refTrazas + "realized processor OK.");
    /* After realizing the processor, THESE LINES OF SOURCE CODE DO NOT RELEASE ITS FILE DESCRIPTOR !!!!!
    p.stop();
    p.deallocate();
    p.close();
    return 0;
    */
    // It continues up to the end of the transmission, properly drawing each video frame and transmitting them
    Logger.log(ambito,refTrazas+" Create Transmit.");
    try {
    int result = createTransmitter();
    } catch (Exception e) {
    Logger.log(ambito,refTrazas+"Error Create Transmitter.");
    return 1;
    } // end try-catch transmitter
    Logger.log(ambito,refTrazas+"Start Procesor.");
    // Start the processor.
    p.start();
    return 0;
    } // end of main code
    // stop when event "EndOfMediaEvent"
    public int stop() {
        try {
            /* THIS PIECE OF CODE AND VARIATIONS HAVE BEEN TESTED
               AND THE FILE DESCRIPTOR IS NEVER RELEASED */
            p.stop();
            p.deallocate();
            p.close();
            p = null;
            for (int i = 0; i < rtpMgrs.length; i++) {
                if (rtpMgrs[i] == null) continue;  // the original tested rtpMgrs==null, which never skips an entry
                Logger.log(ambito, refTrazas + "removeTargets;");
                rtpMgrs[i].removeTargets("Session ended.");
                rtpMgrs[i].dispose();
                rtpMgrs[i] = null;
            }
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Error Stoping:" + e);
            return 1;
        }
        return 0;
    } // end of stop()
    // Controller Listener.
    public void controllerUpdate(ControllerEvent evt) {
        Logger.log(ambito, refTrazas + "\nControllerEvent." + evt.toString());
        if (evt instanceof ConfigureCompleteEvent ||
            evt instanceof RealizeCompleteEvent ||
            evt instanceof PrefetchCompleteEvent) {
            synchronized (waitSync) {
                stateTransitionOK = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof ResourceUnavailableEvent) {
            synchronized (waitSync) {
                stateTransitionOK = false;
                waitSync.notifyAll();
            }
        } else if (evt instanceof EndOfMediaEvent) {
            Logger.log(ambito, refTrazas + "\nEvento EndOfMediaEvent.");
            this.stop();
        } else if (evt instanceof ControllerClosedEvent) {
            Logger.log(ambito, refTrazas + "\nEvent ControllerClosedEvent");
            synchronized (waitSync) {  // notifyAll() requires holding the monitor
                close = true;
                waitSync.notifyAll();
            }
        } else if (evt instanceof StopByRequestEvent) {
            Logger.log(ambito, refTrazas + "\nEvent StopByRequestEvent");
            synchronized (waitSync) {
                stop = true;
                waitSync.notifyAll();
            }
        }
    }
    Many thanks.

    It's a bug in H263; if you test it without the H263 track or with another video codec, the release will be OK.
    You can try using a non-Sun H263 codec, such as the one from the fobs or jffmpeg projects.

  • Problem with special character in Unix file

    Hi All,
    Need some help here. We have recently converted our system to Unicode.
    It's regarding a special character (umlaut) that comes through a 3rd-party system into a Unix file. It gets displayed in the AL11 file as # instead of ö.
    As our program picks the file up from Unix, it also gets the #, but we want ö.
    We could have fixed this in our code if the AL11 file at least had ö and we only got # in our program.
    But it's the other way round. So, how can we get rid of this issue? Please suggest.
    Regards,
    Sanj.

    How is AL11 reading the file? Look for
    OPEN DATASET "yourfilename" IN TEXT MODE ENCODING DEFAULT FOR INPUT
                                  IGNORING CONVERSION ERRORS.
    If the above code is there, you might need to play with the different 'OPEN DATASET' options.
    Also look for Note 1174468 - Non-7bit-ASCII characters used in ABAP Workbench
                       Note 1227961 - Names of text fields with non-7-bit ASCII characters
    Good Luck !
    ^Saquib
