Open In limit

There seems to be a limit of 10 on the number of apps offered by the "Open In" functionality. For example, if you have more than 10 apps on the iPhone that can open PDF files, the menu will show only 10 of them to choose from. I have been trying to find information about this, and about whether there is a way to pick which 10 appear in the list. Apps I would never use to open a PDF are showing up in the list, and I don't want to delete those apps because I use them for other things. Any help appreciated.

Here's hoping someone gets back to this question. The iPad has become nearly unusable across a fleet of enterprise customers because of this.
Another way to word the problem: multiple apps (way over 10) now open PDF files. If we view a PDF in Safari and then want to "Open In" iBooks or some other app, that option may not be available because of the 10-app limit in the "Open In" dialog. Apple should remove the 10-app limit and make the list scrollable. I have hundreds of apps on my iPad, and 10 is a laughably small number of apps for opening PDFs. Here is a list of the types of apps I use that need to be able to open PDFs:
iBooks, e-signature apps, Google Docs apps, PDF annotation apps, PDF signature apps, travel apps
I did a count, and I personally have over 25 apps that open PDFs. Limiting the list to 10 is simply unacceptable for business customers and power users alike. PLEASE FIX THIS, APPLE!

Similar Messages

  • Open file limit in limits.conf not being enforced

    So I am running an Arch installation without a graphical UI, and I'm running elasticsearch on it as its own user (elasticsearch). Since elasticsearch needs to be able to handle more than the 4066 open files that seem to be the default, I edited /etc/security/limits.conf:
    #* soft core 0
    #* hard rss 10000
    #@student hard nproc 20
    #@faculty soft nproc 20
    #@faculty hard nproc 50
    #ftp hard nproc 0
    #@student - maxlogins 4
    elasticsearch soft nofile 65000
    elasticsearch hard nofile 65000
    * - rtprio 0
    * - nice 0
    @audio - rtprio 65
    @audio - nice -10
    @audio - memlock 40000
    I restart the system, but the limit is seemingly still 4066. What gives? I read on the wiki that in order to enforce the values you need a PAM-enabled login. I don't have a graphical login manager, and
    grep pam_limits.so /etc/pam.d/*
    gives me this:
    /etc/pam.d/crond:session required pam_limits.so
    /etc/pam.d/polkit-1:session required pam_limits.so
    /etc/pam.d/su:session required pam_limits.so
    /etc/pam.d/system-auth:session required pam_limits.so
    /etc/pam.d/system-services:session required pam_limits.so
    Any ideas on what I have to do to raise the open file limit here?
    Thanks

    Adding the LimitNOFILE parameter to the systemd service file seems to have done the trick, but that still doesn't explain why limits.conf isn't being enforced.
    Last edited by Zygote (2014-07-17 10:04:43)
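
The pam_limits piece is the key: /etc/security/limits.conf is only applied when a session goes through a PAM login, and a daemon started directly by systemd never does, so it has to get its limit from LimitNOFILE= in the unit file. As a sanity check, a process can always ask the kernel what limit it actually inherited; here is a minimal Python sketch of that lookup:

```python
import resource

# Query the file-descriptor limit this process actually inherited.
# For a systemd service this reflects LimitNOFILE= (or the manager
# default), not /etc/security/limits.conf, because pam_limits is only
# consulted on PAM logins.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft} hard={hard}")
```

For an already-running daemon, `cat /proc/<pid>/limits` shows the same numbers without instrumenting the process.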

  • System Check error : Actual open files limit:7000, needed 8000 nodes:

    Hi,
    The system check is showing the error 'Actual open files limit:7000, needed 8000 nodes:' in our BIA system.
    The SAP BIA admin guide says the open files limit should not be less than 8000. At the OS level, the ulimit for open files (-n) is displayed as '8000', which looks good.
    Please suggest how to fix this error. Would increasing the ulimit value for open files at the OS level help?
    Thanks in advance
    Regards,
    Srinivas.

    Hello Srinivas,
    please see SAP Note 1273695 (http://service.sap.com/sap/support/notes/1273695).
    Regards,
    Marc
    SAP Customer Solution Adoption (CSA)
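
A common reason for a 7000-vs-8000 discrepancy is that the check runs under the SAP service user, whose effective soft limit differs from the hard limit (or from the shell where you ran `ulimit -n`). A process may raise its own soft limit up to the hard limit without any OS-level change; the sketch below illustrates the soft/hard distinction (the target of 8000 mirrors the admin guide's value, and this is only an illustration, not the SAP-recommended procedure from the note above):

```python
import resource

TARGET = 8000  # minimum required by the BIA admin guide

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# The soft limit can be raised up to the hard limit by the process
# itself; raising the hard limit needs root (limits.conf or similar).
if soft != resource.RLIM_INFINITY and soft < TARGET:
    if hard == resource.RLIM_INFINITY or hard >= TARGET:
        resource.setrlimit(resource.RLIMIT_NOFILE, (TARGET, hard))
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
```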

  • Wiring NPN, Sinking, Open collector Limit Sensors

    I have an NI motion controller (NI-PCI-7342) connected to a third-party drive/motor. The end-of-travel sensors are NPN, sinking, open-collector, normally closed, and the home sensor is similar but normally open. I want to wire these into the UMI-7764. The manual for the linear table states:
    Limit Sensor / Home Sensor Electrical Connections
    LIMIT/HOME WIRING COLORS
    +VDC - Brown
    Ground - Blue
    Signal - Black
    I believe the +VDC (Brown) and the Ground (Blue) get connected to a 5V Power source. That leaves me the Signal (Black). Do I wire this into the corresponding Limit Switch locations in the UMI?
    Thanks in Advance.
    Nick Argyros
    Interconnect Devices Inc.
    Electrical Test Engineer
    [email protected]

    Nick,
    If your limit switches are sinking, you need to have the "signal" wire connected to the limit input, which will sink the signal to ground.
    The limit inputs have pullup resistors to pull them high, so if nothing is connected to a limit input, the signal will be high. A switch pulls it low by sinking it to ground.
    If your limit switches are normally closed, you will have to set the polarity in Measurement and Automation Explorer (MAX) to active-high: the signal is always pulled low until the limit is reached, which opens the circuit and lets the pullup resistors pull the line high. In this case, a high signal = active.
    If the home switch is normally open, then polarity in MAX would be active-low.
    See the UMI-7764 Users Guide figure 7 (page 8) for Limit and Home switch connection
    Hopefully that explains the connections better.

  • Report exceeded opened cursors limit

    Hello everyone,
    I'm building this report using ref cursors. I've followed the steps in the "Oracle 9i Building Reports" document.
    My report has 2 ref cursors with a data link between them.
    I'm assuming that a cursor is opened once for the main query, and then another cursor is opened for each record of the main query in order to fetch the joined results. Am I right?
    So if the first query returns 5000 records, the report will open 5001 cursors, which throws ORA-01000 :(
    Is there any way to configure the report to close each cursor after it's no longer needed?
    Thanks in advance.
    Daniel
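
If the data link does work that way, the arithmetic is the classic N+1 pattern: one cursor for the master query plus one per master row. A small sketch of that counting, using Python's built-in sqlite3 purely as a stand-in (the hypothetical master/detail tables stand in for the report's two queries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master (id INTEGER PRIMARY KEY);
    CREATE TABLE detail (master_id INTEGER, val TEXT);
""")
conn.executemany("INSERT INTO master VALUES (?)", [(i,) for i in range(5)])
conn.executemany("INSERT INTO detail VALUES (?, ?)",
                 [(i, "v%d" % i) for i in range(5)])

# Data-link style: one cursor for the master query plus one cursor per
# master row: with 5 master rows that is 1 + 5 = 6 cursor opens.
opened = 1  # the master cursor
for (mid,) in conn.execute("SELECT id FROM master"):
    cur = conn.execute("SELECT val FROM detail WHERE master_id = ?", (mid,))
    cur.fetchall()
    cur.close()  # closing each detail cursor promptly keeps the count bounded
    opened += 1
print(opened)  # 6
```

With 5000 master rows that is 5001 cursor opens, so unless each per-row cursor is released promptly (or the two queries are combined into one join), an open-cursors ceiling like ORA-01000 follows.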

    Hi,
    Thanks again for your interest.
    That syntax is not allowed in a Reports Builder SQL Query resource. Well, at least not with the & characters.
    I think I've worked around it.
    I built a function (e.g. get_data_table) that returns data as a table type, let's say "t_foo".
    Then, in my query, I wrote:
    select a.something, a.something_else from table(get_data_table(:user_p)) a;
    user_p is a user parameter. get_data_table returns data from different tables depending on the "user_p" value.
    It's working so far :)
    Thanks,
    Daniel
    Edited by: dfgrosso on 6/Apr/2011 6:31

  • "Open in" 10 app limit ?

    Hi,
    The Apple "Open In" limit of 10 apps has removed the option to open PDF files in some of my apps.
    A large number of apps now support PDF, and I cannot delete other apps to make this work.
    Is there a workable solution to this problem?
    Sean Udal


  • Maximum open cursors exceeded error - for every transaction

    Hi All,
    I am suddenly getting the maximum open cursors exceeded error for every single DB transaction I try to make in my application. This did not happen previously, during my development and testing phase.
    I have a question here that I tried to google but failed to get a satisfactory answer to:
    When we use a cursor in a stored procedure to fetch data, how do we make Oracle automatically close the cursors once the stored proc finishes executing? Or is there something else I have to do, given my current open cursors limit, to ensure this problem does not happen?
    Thanks,
    Chaitanya

    Hi Justin,
    My Oracle stored procs are called by a Java framework. In each place I was closing the connection object, but there were a few places where I was not closing the ResultSet object, which directly points to my Oracle cursor.
    I have closed the objects in those places and tried again, but I am still getting the same error. Might it be a case where the Oracle DB is not allowing me to connect to it at all? Would something like restarting it help, i.e. restarting the server where the Oracle software is hosted?
    Please excuse my blatant ignorance in this issue.
    Thanks,
    Chaitanya

  • Max no of cursors opened

    Hi all,
    For our project we are using Oracle8i as the back-end. I am using the JDBC API to manipulate the data in the Oracle database.
    I am getting an error saying "Max no of cursors opened exceeds limit". I get this error frequently, and it causes table locking.
    Is this happening because a ResultSet is not being closed?
    Will the ResultSet be closed automatically if I close the Statement object in the finally block of my code, or do I need to close the ResultSet explicitly?
    Is there any solution for this? I get the error when a number of clients concurrently update the same table.
    Any suggestions and solutions appreciated.
    thanks in advance
    ANILA

    It's been asked here before, but I'm embarrassed to confess that I still don't know the answer:
    What if you're using a connection pool? My understanding of pools is that they create a pool of live, open connections that an application can draw from. The cost of opening connections is paid once and amortized over all the requests made.
    That makes me think that I should leave connections open.
    When I do database access with Tomcat, I set up a JNDI connection pool. I have my apps close every ResultSet and Statement, but I don't close the connections. I leave them open to be put back in the pool for the next request to use.
    Is this correct? - MOD
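
Yes, that is the standard pooling contract: close the per-request Statement/ResultSet, but return the Connection to the pool rather than closing it. A toy sketch of the borrow/return flow, using Python's sqlite3 and a Queue purely for illustration (the pool here is made up, not a real pooling library):

```python
import queue
import sqlite3

# A toy connection pool: connections are opened once, then borrowed and
# returned. Callers still close their cursors (the ResultSet/Statement
# analogue) but never close the pooled connection itself.
pool = queue.Queue()
for _ in range(2):
    pool.put(sqlite3.connect(":memory:", check_same_thread=False))

conn = pool.get()                      # borrow from the pool
cur = conn.execute("SELECT 1 + 1")
value = cur.fetchone()[0]
cur.close()                            # always close the cursor...
pool.put(conn)                         # ...and return the connection
print(value, pool.qsize())  # 2 2
```

The open-connection cost is paid once at pool creation; each request only pays for a cursor, which is exactly why leaking cursors (rather than connections) is what eventually hits the open-cursor ceiling.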

  • JDBC & Open Cursors

    Hi,
    Could it be possible that even if all ResultSet and Statement objects are closed (in finally blocks, so that the close() statements are guaranteed to be executed), one can still get ORA-01000, max open cursors exceeded? Is there a bug in JDBC that makes the cursors linger even though they have been closed by the program? I ask because a colleague of mine insists that there is such a JDBC bug and that the only workaround is to close the Connection object. He also claims that the "bug" is fixed in Oracle 9i. We are using Oracle 8.1.7.0.0 on HP-UX 11.0 with Java 1.3.1. Any info greatly appreciated. Thanks.
    Vasu
    Vasu

    The max cursors open problem is encountered because of the database limit on the number of cursors a session may hold open. A new cursor is opened whenever a Statement/PreparedStatement is executed.
    Make sure to close each Statement/PreparedStatement after its usage, as you have done in finally. This will ensure that you don't exceed the maximum open cursor limit.
    I think this is the only cause of the above problem, and it can be avoided.
    Hope this helps
    Ravi
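
Whatever the server-side details, the defensive idiom the reply recommends is the same in any client language: release the cursor in a finally block so that an exception between execute and close cannot leak it. Sketched here with Python's sqlite3 standing in for JDBC (the row_count helper and table t are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def row_count(conn, table):
    # Mirror of the JDBC finally-block pattern: the cursor is released
    # even if the fetch raises, so failed statements cannot leak cursors.
    cur = conn.execute("SELECT COUNT(*) FROM %s" % table)  # toy query only
    try:
        return cur.fetchone()[0]
    finally:
        cur.close()

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
print(row_count(conn, "t"))  # 1
```

If ORA-01000 still appears with this discipline in place, the usual culprit is one code path that bypasses the finally, not a driver bug.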

  • Problem increasing max open files to more than 1024

    Hello,
    I tried to increase the max open files limit to more than 1024 in the following way:
    $ sudo nano /etc/security/limits.conf
    mysql hard nofile 8192
    mysql soft nofile 1200
    However, after reboot I am still not able to start MariaDB:
    $ sudo systemctl start mysqld
    Job for mysqld.service failed. See 'systemctl status mysqld.service' and 'journalctl -xn' for details.
    $ sudo systemctl status mysqld
    ● mysqld.service - MariaDB database server
    Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled)
    Active: activating (start-post) since Tue 2014-09-02 13:08:20 EST; 53s ago
    Main PID: 6504 (mysqld); : 6505 (mysqld-post)
    CGroup: /system.slice/mysqld.service
    ├─6504 /usr/bin/mysqld --pid-file=/run/mysqld/mysqld.pid
    └─control
    ├─6505 /bin/sh /usr/bin/mysqld-post
    └─6953 sleep 1
    Sep 02 13:08:20 acpfg mysqld[6504]: 140902 13:08:20 [Warning] Could not increase number of max_open_files to more than 1024 (request: 4607)
    I am using the following /etc/mysql/my.cnf
    [mysql]
    # CLIENT #
    port = 3306
    socket = /home/u/tmp/mysql/mysql.sock
    [mysqld]
    # GENERAL #
    user = mysql
    default-storage-engine = InnoDB
    socket = /home/u/tmp/mysql/mysql.sock
    pid-file = /home/u/tmp/mysql/mysql.pid
    # MyISAM #
    key-buffer-size = 32M
    myisam-recover = FORCE,BACKUP
    # SAFETY #
    max-allowed-packet = 16M
    max-connect-errors = 1000000
    skip-name-resolve
    sql-mode = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY
    sysdate-is-now = 1
    innodb = FORCE
    innodb-strict-mode = 1
    # DATA STORAGE #
    datadir = /home/u/tmp/mysql/
    # BINARY LOGGING #
    log-bin = /home/u/tmp/mysql/mysql-bin
    expire-logs-days = 14
    sync-binlog = 1
    # CACHES AND LIMITS #
    tmp-table-size = 32M
    max-heap-table-size = 32M
    query-cache-type = 0
    query-cache-size = 0
    max-connections = 500
    thread-cache-size = 50
    open-files-limit = 65535
    table-definition-cache = 1024
    table-open-cache = 2048
    # INNODB #
    innodb-flush-method = O_DIRECT
    innodb-log-files-in-group = 2
    innodb-log-file-size = 128M
    innodb-flush-log-at-trx-commit = 1
    innodb-file-per-table = 1
    innodb-buffer-pool-size = 2G
    # LOGGING #
    log-error = /home/u/tmp/mysql/mysql-error.log
    log-queries-not-using-indexes = 1
    slow-query-log = 1
    slow-query-log-file = /home/u/tmp/mysql/mysql-slow.log
    How can I increase this number?
    Thank you in advance.
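
For what it's worth, on systemd-based systems this MariaDB warning usually means the service's own RLIMIT_NOFILE is the ceiling, and (as in the elasticsearch thread above) /etc/security/limits.conf does not apply to units. A drop-in override for the unit is the usual fix; the path below assumes the unit is named mysqld.service:

```ini
# /etc/systemd/system/mysqld.service.d/limits.conf
# (path assumes the unit is named mysqld.service; run
# `systemctl daemon-reload` and restart the service afterwards)
[Service]
LimitNOFILE=65535
```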

    Change/add in the my.ini file, under [mysqld]:
    max_allowed_packet=2048M
    Reference:
    * http://dev.mysql.com/doc/refman/5.5/en/ … wed_packet

  • Set file descriptor limit for xinetd initiated process

    I am starting the amanda backup service on clients through xinetd, and we are hitting the open file limit, i.e. the file descriptor limit.
    I have set resource controls for the user, and I can see from the shell that the file descriptor limit has increased, but I have not figured out how to get the resource control change to apply to the daemon started by xinetd.
    The default of 256 file channels persists for the daemon; I need to increase that number.
    I have tried a wrapper script, clearly doing it incorrectly for Solaris 10/SMF services. That route didn't work, or is not as straightforward as it used to be.
    Is there a more direct way?
    Thanks - Brian

    Hi Brian,
    This happens with 32-bit applications. You have to use the enabler of the extended FILE facility, /usr/lib/extendedFILE.so.1:
    % ulimit -n
    256
    % echo 'rlim_fd_max/D' | mdb -k | awk '{ print $2 }'
    65536
    % ulimit -n
    65536
    % export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
    % ./your_32_bits_application
    Marco

  • ORA-00210: cannot open the specified controlfile

    Hi,
    We are trying to run a package remotely, and while executing the package we are getting ORA-00210: cannot open the specified controlfile. I cross-checked the controlfile through the v$controlfile view and the control file shows up there. I also went through Metalink; it says you need to check whether some process has locked it, or the OS's max open files limit. I would appreciate it if you could describe both of the above options.
    OS Solaris
    DB 9.2.0.8.0
    hare krishna
    Alok

    Thanks Damorgan. I've verified it by checking the alert.log file, and I also switched logfiles; it's working absolutely fine. I am going to paste the complete error stack.
    Connecting to the database CHRY_STAGE_STG.
    ORA-00210: cannot open the specified controlfile
    : controlfile: '/u700/oradata/pubint/control01.ctl'
    ORA-27041: unable to open file
    SVR4 Error: 24: Too many open files
    Additional information: 3
    ORA-06512: at "CHRY_STAGE_STG.DIFF_UTILS", line 1080
    ORA-06512: at "CHRY_STAGE_STG.PKG_GROUP_IMG_CROSS_REFERENCE", line 199
    ORA-06512: at line 6
    Process exited.
    Disconnecting from the database CHRY_STAGE_STG.
    and when I verified diff_utils at line 1080, I found the following:
    EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO ' || p_HistoryTable ||
    '( SELECT ' || p_column || ' FROM ' || p_MasterTable || ' A INNER JOIN ' ||
    p_CurrentTable || ' B ON A.' || p_Keys || ' = B.' || p_Keys || ')';
    hare krishna
    Alok

  • Credit check for unconfirmed item

    Hello,
    I have configured automatic credit control for sales orders.
    When a confirmed order quantity is entered, I get a warning message plus the credit block, just as I need. If there is no confirmed quantity on the items, I get no message, just a saved order.
    Our item confirmation is done only on what is in stock, not on RLT, so it is common for an item to have 0 confirmed quantity. I need a credit block if the value would otherwise exceed the open credit limit. How can I do this? I've tried both the static and dynamic checks, but they only work with confirmed quantities.
    Thanks

    pricing:
       Step   Description    From    Print    Subtotal
       120    Total                  X        1
       130    Credit value   120     X        A
    I also selected "open orders" in automatic credit control, but the system calculates credit value = confirmed quantity * price.

  • Closing an implicit cursor

    Hi,
    Can anyone let me know how to close the implicit cursor opened for every individual SQL statement? Please note that I am talking about implicit cursors, not explicit cursors. I am reaching the maximum open cursors limit, and I don't want to increase the open_cursors parameter further, as I have already set it to a very big value (10000). Rather, I would like to know how to close the implicit cursors opened by Oracle itself.
    Please help.
    Thanks in advance.

    Thanks for your advice. In fact, we have already started reworking our code to use explicit cursors. But the thing is, even if the implicit cursors are kept open until the session ends, they should be closed by Oracle itself when the maximum open_cursors limit is reached. The workaround is well accepted, but I am very eager to know whether an implicit cursor can somehow be closed or not.

    You have cursor leaks in your application. Are you closing the cursors? In JDBC (I know you said VB.Net, but I don't know much about VB) you call the close method on the Statement objects. Maybe you are closing most of the cursors, but a few are not being closed. In that case, the number of open cursors will slowly creep up to a high number. How often does this error come up? How long before you start seeing it?
    Also, take a look at the v$open_cursor view (using your session SID) when you get the error and see if there are certain statements that are repeated a ton of times. This could help you find those statements that are not closed by the application.
    Also note, there is a dbms_sql.close_cursor method. Check it out:
    http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96590/adg09dyn.htm#26799
    -Raj

  • File too large error unpacking War during app deploy - RHEL &WLS 10.3.5

    I'm stumped and I'm hoping someone can help out here. Does anyone have any insights into the cause of my problem below, or tips on how to diagnose the cause?
    scenario
    We ran into an open files limit issue on our RH Linux servers, and had the SA boost our open files limit from 1024 to 3096. This seems to have solved the open files limit issue once we restarted the node managers and the managed servers (our WLS startup script sets the soft limit to the hard limit).
    But now we've got a new issue, right after this change. The admin server is no longer able to deploy any war/ear; when I click on "Activate" after the install I get
    Message icon - Error An error occurred during activation of changes, please see the log for details.
    Message icon - Error Failed to load webapp: 'TemplateManagerAdmin-1.0-SNAPSHOT.war'
    Message icon - Error File too large
    on the console, and see the stack trace below in the admin server log (nothing in the managed server logs), indicating it's getting the error while exploding the war.
    I've tried both the default deployment mode and the "will make the deployment available in the following location" mode, where the war is manually copied to the same location on each box, available to each server, all with the same result. I've also tried restarting the admin server, but no luck.
    The files are not overly large (<= 34 MB) and we had no trouble with them before today. I'm able to log in as the WebLogic user and copy files, etc., with no problem.
    There is no disk space issue - plenty of space left on all of our filesystems. There is, as far as I can tell, no OS or user file size limit issue:
         -bash-3.2$ ulimit -a
         core file size (blocks, -c) 0
         data seg size (kbytes, -d) unlimited
         scheduling priority (-e) 0
         file size (blocks, -f) unlimited
         pending signals (-i) 73728
         max locked memory (kbytes, -l) 32
         max memory size (kbytes, -m) unlimited
         open files (-n) 3096
         pipe size (512 bytes, -p) 8
         POSIX message queues (bytes, -q) 819200
         real-time priority (-r) 0
         stack size (kbytes, -s) 10240
         cpu time (seconds, -t) unlimited
         max user processes (-u) unlimited
         virtual memory (kbytes, -v) unlimited
         file locks (-x) unlimited
    environment
    WLS 10.3.5 64-bit
    Linux 64-bit RHEL 5.6
    Sun Hotspot 1.6.0_29 (64-bit)
    stack trace
    ####<Mar 6, 2013 4:09:33 PM EST> <Error> <Console> <nj09mhm5111> <prp_appsvcs_admin> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <steven_elkind> <> <> <1362604173724> <BEA-240003> <Console encountered the following error weblogic.application.ModuleException: Failed to load webapp: 'TemplateManagerAdmin-1.0-SNAPSHOT.war'
    at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:393)
    at weblogic.application.internal.flow.ScopedModuleDriver.prepare(ScopedModuleDriver.java:176)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:199)
    at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:517)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:159)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:45)
    at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:613)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
    at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:184)
    at weblogic.application.internal.SingleModuleDeployment.prepare(SingleModuleDeployment.java:43)
    at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:154)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:60)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.createAndPrepareContainer(ActivateOperation.java:207)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:98)
    at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
    at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
    at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: java.io.IOException: File too large
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:282)
    at weblogic.utils.io.StreamUtils.writeTo(StreamUtils.java:19)
    at weblogic.utils.FileUtils.writeToFile(FileUtils.java:117)
    at weblogic.utils.jars.JarFileUtils.extract(JarFileUtils.java:285)
    at weblogic.servlet.internal.ArchivedWar.expandWarFileIntoDirectory(ArchivedWar.java:139)
    at weblogic.servlet.internal.ArchivedWar.extractWarFile(ArchivedWar.java:108)
    at weblogic.servlet.internal.ArchivedWar.<init>(ArchivedWar.java:57)
    at weblogic.servlet.internal.War.makeExplodedJar(War.java:1093)
    at weblogic.servlet.internal.War.<init>(War.java:186)
    at weblogic.servlet.internal.WebAppServletContext.processDocroot(WebAppServletContext.java:2789)
    at weblogic.servlet.internal.WebAppServletContext.setDocroot(WebAppServletContext.java:2666)
    at weblogic.servlet.internal.WebAppServletContext.<init>(WebAppServletContext.java:413)
    at weblogic.servlet.internal.WebAppServletContext.<init>(WebAppServletContext.java:493)
    at weblogic.servlet.internal.HttpServer.loadWebApp(HttpServer.java:418)
    at weblogic.servlet.internal.WebAppModule.registerWebApp(WebAppModule.java:972)
    at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:382)

    In the end, the problem was not in the Admin server where the log entry is, but in one of the managed servers where there was no such log entry.
    Somehow, and we have no idea how, the NodeManager process had the soft limit for max file size set to 2k blocks. Thus, the managed server inherited that. We restarted the Node Manager, then the managed server, and the problem went away.
    The diagnostic that turned the trick:
    cat /proc/<pid>/limits
    for the managed server showed the bad limit setting, then diagnosis proceeded from there. The admin server, of course, had "unlimited" since it was not the source of the problem.
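
The /proc/<pid>/limits check generalises: from inside a process you can query the same value directly, which is handy when a JVM's inherited max-file-size is suspect. A small Python sketch of the equivalent lookup (RLIMIT_FSIZE is the "Max file size" row, the limit that produced "File too large" here):

```python
import resource

def fmt(v):
    # /proc/<pid>/limits prints "unlimited" for RLIM_INFINITY
    return "unlimited" if v == resource.RLIM_INFINITY else str(v)

# RLIMIT_FSIZE caps how large a file this process may write; when a
# node manager starts a server with a small soft value, every child
# inherits it and large writes fail with EFBIG ("File too large").
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
print("max file size:", fmt(soft), "/", fmt(hard))
```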

Maybe you are looking for

  • Created doc in InDesign, made it a form in Acrobat, Now need to edit original

    So I made this document in InDesign and exported it to PDF.  From the PDF, I used acrobat pro to make the document a form with fields.  I spent a long time setting it up perfectly and then realized that there is an error on the piece that was made wh

  • OS X partition becomes corrupt after switching from Bootcamp

    So I've had to reinstall os x 10.9.1 no less than three  times this week and multiple times before in the past after switching between os x and windows 8.1. This usually happens after I've been using windows for a prolonged period then switch back to

  • Zen Touch 40GB: USB connection dies mid-transfer.

    I just received a brand new 40GB Zen Touch. While transferring a group of songs to it (whether via MediaSource or Media Player) it successfully transfers a few before freezing, waiting for a minute, and then claiming that either the Zen Touch is no l

  • IDOC for SAP BP Message "Table BAPIADTEL entry to be changed

    Hi Experts, I have a requirement to update BP Address details using LSMW. I am usign IDOC Message Type "BUPA_C_CHANGEADDRESS" and Basic Type "BUPA_C_CHANGEADDRESS01" for this requirement. Using this i am able to update City, PO BOX, Postal Code, Stre

  • Round Corners on a table Indesign CS3

    Hello, can anyone help me with achieving rounded corners on a table without drawing a box with rounded corners and drawing another box on top that contains the table data is there a way to do this ? Thanking you in advance for any help on this.