Setting system resources with ulimit

Hi,
I need to increase the maximum number of file descriptors available to my application, which I have done using ulimit.
But do I also have to increase other system resources, such as the maximum file size or the maximum data segment size?
If so, is there some method to calculate the ratio in which they should increase as the number of file descriptors increases?
Thanks

Hi,
Just the number of file descriptors should be fine.
Do you have a problem with your application?
Regards,
Ralph
SUN DTS
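A minimal shell sketch of the adjustment being discussed, assuming a Bourne-compatible shell on a Unix-like system (the exact limits and the way to persist a change vary by platform):

```shell
# Show the current soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn

# Raise the soft limit up to the hard limit, for this shell session only.
# Persisting the change is platform-specific (e.g. /etc/security/limits.conf
# on Linux, or /etc/system on older Solaris releases).
ulimit -Sn "$(ulimit -Hn)"
ulimit -Sn
```

Note that ulimit only affects the current shell and the processes it starts, so the application must be launched from a shell (or service definition) where the limit has been raised.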

Similar Messages

  • PR_Accept() failed, error -5974 (Insufficient system resources.)

    How can I go about determining exactly which resource was insufficient? Or has anyone encountered this and has an idea of typical reasons?
    With about 1000 connections this morning, I saw this in the log:
    PR_Accept() failed, error -5974 (Insufficient system resources.)
    Our setup:
    iPlanet Directory Server 5.1
    HPUX platform - 6 gigs ram, dual 100mb nics, plenty of disc space.

    Thanks Gary. I was going to set the ulimit and the hard and soft limits on the OS. I had done this before on a Solaris system, but with HP, I'm not sure where the equivalent of /etc/system is. I may have to check with our HP guys.
    BTW, here's the output of idsktune. I don't see any warnings for only 64 threads being available. We are going to try 5.2 Patch 4, but we cannot apply it in this environment yet -- still doing a POC. Thanks!
    $ ./idsktune
    Sun Java Enterprise System platform tuning analysis version 12-DEC-2003.
    Copyright 2002-2003 Sun Microsystems, Inc.
    NOTICE : System is hppa2.0/644-hp9000/800/rp3440 -hpux_B.11.11.
    NOTICE : The tcp_keepalive_interval is set to 7200000 milliseconds
    (120 minutes). This may cause temporary server congestion from lost
    client connections.
    NOTICE : The NDD tcp_smallest_anon_port is currently 49152. This allows a
    maximum of 16384 simultaneous connections.
    NOTICE : ndd settings can be placed in /etc/rc.config.d/nddconf
    WARNING: Only the superuser can check what patches are installed. You must
    run idsktune as root to ensure necessary patches are present.
    NOTICE : The following patches might not be installed:
    Patch PHCO_24402 "libc cumulative header file patch".
    Patch PHCO_26061 "s700_800 11.11 Kernel configuration commands patch".
    Patch PHCO_27632 "initialised TLS, Psets, Mutex performance", which supercedes PHCO_26466.
    Patch PHCO_27740 "libc cumulative patch".
    Patch PHCO_27958 "mountall cumulative patch", which supercedes PHCO_24777.
    Patch PHKL_24751 "preserve IPSW W-bit and GR31 lower bits".
    Patch PHKL_25233 "select(2) and poll(2) hang".
    Patch PHKL_25468 "eventport (/dev/poll) pseudo driver".
    Patch PHKL_25993 "thread nostop for NFS, rlimit max value fix".
    Patch PHKL_25994 "thread NOSTOP, Psets Enablement", which supercedes PHKL_24253.
    Patch PHKL_27094 "Psets Enablement Patch, slpq1 perf".
    Patch PHKL_27316 "priority inversion and thread hang", which supercedes PHKL_25367.
    Patch PHKL_27686 "MO 4k sector size; FIFO; EventPort, perf".
    Patch PHKL_28122 "signals, threads enhancement, Psets Enablement".
    Patch PHKL_28267 "Required for large heap on SDK 1.3 and 1.4 VM-JFS ddlock, mmap,thread perf, user limits".
    Patch PHNE_28089 "cumulative ARPA Transport patch".
    Patch PHSS_26560 "ld(1) and linker tools cumulative patch".
    Patch PHSS_26971 "Japanese TrueType fonts".
    Patch PHSS_26973 "Korean TrueType fonts".
    Patch PHSS_26975 "Chinese-Simple TrueType fonts".
    Patch PHSS_26977 "Traditional Chinese TrueType fonts".
    Patch PHSS_28370 "X/Motif runtime patch".
    Patch PHSS_28470 "X Font Server SEP2001 Periodic Patch. Common patch for Asian TrueType fonts.".
    NOTICE : Patches are available from http://us-support.external.hp.com/
    WARNING: largefiles option is not present on mount of /opt,
    files may be limited to 2GB in size.
    Any ideas?

  • Error 15 with CLFN in nisyscfg.lvlib:Set System Image

    Recently upgraded to LV 2012, and code that worked for imaging cRIOs in LV 2011 had to be replaced with the new and improved nisyscfg.lvlib VIs: Set System Image, Get System Image, etc.
    I'm receiving the following error message when my installed executable attempts to image a cRIO: 'Error 15 occurred at Call Library Function Node in nisyscfg.lvlib: Set System Image (Folder).vi:1->nisyscfg.lvlib: Set System Image (File).vi:1->GetSetSystemImage.vi'. The possible reason(s) displays: 'LabVIEW: Resource not found.'
    What installation packages/products are required for the system configuration VIs? I'm including NISysCfg_Runtime and NISysAPI_Framework.
    Thanks for your time.  

    I think I've solved part of the problem. Now I'm getting 'Error -2147220364 occurred at nisyscfg.lvlib: Set System Image (Folder).vi:1' with possible reason(s): 'NI System Configuration: (Hex 0x80040474) Command is not supported by given protocol.'

  • Insufficient system resources exist to complete the requested service

    [I did intend to start this post with a screenshot of the above error when I initiate the transfer from Windows Explorer, but apparently 'Body text cannot contain images or links until we are able to verify your account.', so I will just have to do some typing.]
    The error dialog says:
     'An unexpected error is keeping you from copying the file. If you continue to receive this error, you can use the error code to search for help with this problem.
    Error 0x800705AA: Insufficient system resources exist to complete the requested service.'
    I get this error pretty much 100% of the time from one particular PC when trying to copy a folder of 10 2GB files to a server with both mirror and parity storage spaces.
    I recently purchased a Thecus W5000 running Windows Storage Server 2012 R2 Essentials. Absent any guidance either way I decided to set up a storage pool across the three 3TB WD Red drives that I have installed in it and to allocate 1.5TB of that space to a
    mirror storage space and the remainder to a parity storage space. Having read some fairly dire things about storage spaces, but wanting the resilience provided by those two types of storage space, I decided to run some benchmarking tests before finalising anything.
    To that end I only went as far through the Essentials setup as creating a handful of user accounts before setting up the storage spaces and sharing both of them, with all authenticated users permitted full control. My benchmarking consists of a Take Command
    batch file timing three large directory copies - one with 10 2GB files, one with 10240 10K files and another with a multi-level directory with a variety of files of differing sizes. The first two are completely artificial and the latter is a real world example
    but all are roughly 20GB total size.
    To test various aspects of this I copied the three structures to and then from a partition created on the internal disk (the W5000 has a 500GB SSHD) and to the two storage space partitions. I also created a version of the batch file for use internally which
    did something similar between the internal disk and the two storage space partitions, and another as a control that tested the same process between the two Windows PCs. The internal test ran to successful completion, as did the PC to PC copy and the external
    one from my Windows 8.1 64-bit system (i5 3570K, 16GB RAM, 1TB HD) but when I ran it from my Windows 7 Pro 64-bit gaming rig (i7 2600K, 8GB RAM, 1TB HD) I got a number of failures with this error from Take Command:
    TCC: (Sys) C:\Program Files\bat\thecus_test_pass.btm [31] Insufficient system resources exist to complete the requested service.
    (where line 31 of that batch file is a copy command from local D: to the parity space on the Thecus).
    The error occurs only when copying large files (the 2GB ones already mentioned but some of those in the real world structure that are about 750MB in size) from the Win7 system to the Thecus and only when doing so to the storage space volumes - ie. copying to
    the internal disk works fine, copying from all volumes works fine, copying internally within the Thecus works fine, copying between the Win8 and Win7 machines works fine and initiating the copy as a pull from the server between the same two disks also works
    fine. One aspect of this that surprised me somewhat was just how quickly the copy fails when initiated from Windows Explorer - checking out the details section of the copy dialog I see roughly ten seconds of setting up and then within five seconds after the
    first file transfer is shown as starting the error dialog pops up (as per the image no longer at the top of this post).
    There are no entries in the event log on either machine related to this error and I've had the System Information window of the Sysinternals Process Explorer up and running on both machines whilst testing this, and it shows nothing surprising on either side.
    I've also run with an xperf base active and I can't see anything pertinent in the output from either system.
    Frankly, I am at a loss and have no idea what other troubleshooting steps I should try. The vast majority of the existing advice for this error message seems to relate to Windows 2003 and memory pools - which both the fact that this works from one PC but not
    the other and the SysInfo/xperf output seem to suggest is not the issue. The other thing I've seen mentioned is IRPStackSize, but again, if that was the problem, I would expect the failure to occur wherever I initiated the large file transfer from.

    If it works from the Win 8 box, it must be in the Win 7 box?
    I'm going to answer this one first because much of the rest of this is not going to be pertinent to the problem at hand. I've been over and over this aspect whilst trying to think this issue through and you are right, except that it only happens when copying
    files to the Thecus and only then when the target is a ReFS partition on a mirror or parity storage space. So the best I can come up with is that it is most likely an issue on the Win7 box that is triggered by something that is happening on the server side,
    but even that is a bit of a stretch. This is why the lack of information from the error message bugs me so much - in order to debug a problem like this you need to know what resource has been exhausted and in which part of the software stack.
    Now that may not be easy to do in a generic way, and since programmers are inherently lazy it is tempting just to return a simple error value and be done with it. However, I've been in the position of doing just that in a commercial product and ended up
    having to go back and improve the error information when that particular message/code was tripped and I was expected to debug the problem! Obviously there is a significant difference between a Microsoft consumer product and a mainframe product that costs many
    times as much and comes with a built in maintenance fee, but the underlying requirement is the same - somebody needs to be able to solve the problem using the information returned. In this case that simply isn't possible.
    You spend your time testing file copies, where I devote most of my time to backup and restore
    I don't really want to be testing file copies - the initial intention was to benchmark the different storage space and file system combinations that I was intending to use but the error whilst doing so has spiralled into a cycle of testing and tweaking that
    really isn't achieving anything. My primary reason for having a NAS at all has always been backup. My current strategy for the two boxes participating in this testing involves having a local drive/partition to hold backups, running a daily incremental file
    copy to that partition which is then immediately copied to a NAS and backing that up with a regular (needs to be at least once per month to be totally secure) full image copy of the local disks that is also copied to the NAS afterwards (hence my fascination
    with copying large files).
    There is a weakness in that strategy because I've never been very good at performing that full image backup regularly enough, so one of the reasons for buying the W5000 was the possibility of making those backups automatic and driven from the server end.
    However, that takes the local backup drives out of the equation and leaves me with the need to backup the NAS, which I don't do with my existing unit because there are (nearly) always copies held elsewhere.
    The other reasons for going with the Thecus were a desire to backup the other machines in the household - I've always dreaded a hard drive failure on my wife's laptop but getting her to perform any kind of housekeeping is nigh on impossible and also to provide
    a file server capability protected by a single set of userids (the existing NAS data is open to all household members). So my goal is backup and restore too ;)
    I meant a different nic on the beast (win 7)
    I should have realised that but obviously wasn't thinking straight. I don't have a spare gigabit NIC to hand (although perhaps even a megabit one might provide an interesting data point), but there is such a card in one of my other (less used) PCs that
    I could cannibalise for testing purposes. Another project for the coming weekend, methinks.
    put some limits on it to keep the lawyers happy. 2gb ram, OS loaded on a drive, limit the # of Hard Drives
    That statement got me thinking, because I've never been able to find a definition anywhere of what the restrictions are with WSS 2012 R2 Essentials (if I bring up the software license terms on the box itself, they are for 2012 Standard!?) and I wonder whether
    they'd stop me doing things like adding RAM or changing the processor.
    Even my buddies at wegotserved do not seem to have done any hands on reviews and they get "everything."
    The cynic in me wonders whether that is because Thecus know that they've just shovelled this onto a handful of existing boxes that barely meet the spec. and which simply aren't up to snuff as anything other than a box full of disks.  The Thecus boxes
    look like good value because they include the server OS (the unit cost me roughly 50% more than I could buy Windows Server 2012 R2 Essentials for) but if you can't realise that value then they are just an expensive NAS. 
    if perhaps the algorithms in the Seagate SSHD do not know ReFS?
    I haven't put a ReFS partition on the SSHD, only on the three 3TB WD Reds.
    I will ask my contacts at MS to take a look at this thread, but they stay so busy with v.next I don't know if they will spend many cycles on it
    Perhaps you could ask them if the next version of the OS could do a better job of identifying which resources have been exhausted, by what part of the stack and where in the maze of connectivity that makes up a modern computing environment?? {gd&r}
    Cheers, Steve

  • Insufficient System Resources when merging large files

    My client is running on Windows Server 2003, 64 bit. He has 30 gig of RAM and a large amount of file storage. The drives are NTFS.
    I have a program that produces a number of text files from different processes, and then merges them when done. After running the code for many days (we're working with a lot of data), the merge process is failing with "Insufficient System Resources Exist to complete the requested process".
    Insufficient system resources exist to complete the requested service
    java.io.IOException: Insufficient system resources exist to complete the requested service
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(Unknown Source)
    at sun.nio.cs.StreamEncoder.writeBytes(Unknown Source)
    at sun.nio.cs.StreamEncoder.implWrite(Unknown Source)
    at sun.nio.cs.StreamEncoder.write(Unknown Source)
    at java.io.OutputStreamWriter.write(Unknown Source)
    at java.io.BufferedWriter.write(Unknown Source)
    at functionality.ScenarioThreadedLoanProcessor.processLoans(ScenarioThreadedLoanProcessor.java:723)
    at functionality.ScenarioThreadedLoanProcessor.construct(ScenarioThreadedLoanProcessor.java:227)
    at utility.SwingWorker$2.run(SwingWorker.java:131)
    at java.lang.Thread.run(Unknown Source)
    I've investigated this problem in other places, and most of the answers seem to not apply to my case.
    I've looked at this: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4938442
    But I am not using file channels (I don't think...), and I am copying data in chunks of <32MB.
    Here's the relevant code (I realize I don't need to be re-allocating the buffer, that's a legacy from an older version, but I don't think that's the cause.)
    There are usually four threads, 1-4 reports, and 1 scenario, so this loop shouldn't be executing thousands and thousands of times.
    for (int scenario = 0; scenario < scenarios.size(); scenario++) {
        for (int report = 0; report < reportPages.size(); report++) {
            for (LoanThread ln : loanThreads) {
                BufferedReader br = new BufferedReader(new FileReader(new File(
                        ln.reportPtr.get((scenario * reportPages.size()) + report).getFileName())));
                br.readLine(); // header, throw away
                BufferedWriter bw = new BufferedWriter(new FileWriter(
                        formReport.get((scenario * reportPages.size()) + report).getFileName(),
                        true)); // append
                char[] bu = new char[1024 * 1024 * 16];
                int charsRead = 0;
                while ((charsRead = br.read(bu)) != -1) {
                    bw.write(bu, 0, charsRead);
                }
                bw.flush();
                bw.close();
                br.close();
                File f = new File(ln.reportPtr.get((scenario * reportPages.size()) + report).getFileName());
                f.delete();
            }
        }
    }
    Any thoughts?
    Edited by: LizardSF on Jul 29, 2011 8:11 AM
    Edited by: sabre150 on 29-Jul-2011 09:02
    Added [ code] tags to make the code (more) readable

    1) You can allocate the buffer at the start of the outer loop to save the GC working overtime. You might even be able to move it out of the loops altogether, but I would need to see more code to be sure of that.
    +1. Exactly. This is the most important change you must do.
    The other change I would make is to reduce the buffer size. The disk only works in units of 4-16k at a time anyway. You will be surprised how much you can reduce it without affecting performance at all. I would cut it down to no more than a megabyte.
    You could also speed it up probably by a factor of at least two by using InputStreams and OutputStreams and a byte[] buffer, instead of Readers and Writers and char[], as you're only copying the file anyway. Also, those BufferedReaders and Writers are contributing nothing much, in fact nothing after the readLine(), as you are already using a huge buffer. Finally, you should also investigate FileChannel.transferTo() to get even more performance, and no heap memory usage whatsoever. Note that like your copy loop above, you have to check the result it returns and loop until the copy is complete. There are also upper limits on the transferCount that are imposed by the operating system and will cause exceptions, so again don't try to set it too big. A megabyte is again sufficient.
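To illustrate the FileChannel.transferTo() suggestion, here is a hypothetical standalone sketch (the class and method names and the 1 MB chunk size are illustrative, not taken from the original program); it appends one file to another while looping on the count transferTo actually copied:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendCopy {
    // Append the contents of src to dst using channel-to-channel transfers,
    // so no char[]/byte[] heap buffer is needed for the bulk copy.
    static void appendTo(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            out.position(out.size());   // start writing at the current end of dst
            long pos = 0, size = in.size();
            while (pos < size) {
                // transferTo may copy fewer bytes than requested, so loop on
                // the count it returns; 1 MB per call is a modest, safe chunk.
                pos += in.transferTo(pos, Math.min(1024 * 1024, size - pos), out);
            }
        }
    }
}
```

This drops the header-skipping step from the original loop for brevity; the point is only the bounded, checked transfer loop.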

  • Windows Modules Installer - Error 1450 Insufficient System Resources

    I've run into a major problem with the release of Windows 7 Enterprise.  It seems that somewhere along the line after installing my applications, the Windows Modules Installation Service has decided to stop working.
    When trying to start the service through Services Management I get the following:  "Windows could not start the Windows Modules Installer service on Local Computer.  Error 1450: Insufficient system resources exist to complete the requested service."
    This means that Windows Update doesn't find updates, sysprep fails, and I get a blank list for "Turn Windows features on or off".
    Can anyone help?
    (update: After testing with some clean installs of Windows 7 Enterprise, it looks like the problem happens after installing ArcGIS Arcinfo Workstation 9.3.  Uninstalling did not fix it.)
    trelf

    I hope this gets posted to the thread - if not, I apologize; this is my first post on here.
    Let me just say this - I've been a Microsoft guy for over 20 years now, but I have to say that this year this whole platform has really seemed to fall apart for me on so many levels, and when I have the best equipment money can buy, it's a bit of a letdown to say the least.
    Short version is - I tried the above and it didn't work, and I'm getting this Windows Modules Installer constantly trying to run, and it's failing repeatedly for no known reason... I suspect it may be tied to my backups, which are Microsoft Windows based as well.
    The fix above DID NOT work for me, unfortunately - I set the limit, did the reboot, and still can't run SFC /SCANNOW. It fails with the same error, and I'm still getting the same errors in the System Log - 7036 and 7023, alternating one after the other nonstop, every 30 seconds.
    Any ideas?
    Thanks,
    cnorton3
    PS: Just to note, I DO NOT have ArcGIS installed. Funnily enough, I do know what the software is, but I do not use it and never have installed it.

  • "System Resource Exceeded" for simple select query in Access 2013

    Using Access 2013 32-bit on a Windows Server 2008 R2 Enterprise. This computer has
    8 GB of RAM.
    I am getting:
    "System Resource Exceeded"  errors in two different databases
    for simple queries like:
    SELECT FROM .... GROUP BY ...
    UPDATE... SET ... WHERE ...
    I compacted the databases several times, no result. One database size is approx 1 GB, the other one is approx. 600 MB.
    I didn't have any problems in Office 2010, so I had to revert to that version.
    Please advise.
    Regards,
    M.R.

    Hi Greg. I too am running Access on an RDP server. Checking Task Manager, I can see many copies of MSACCESS running in the process list, from all users on the server. We typically have 40-60 users on that server. I am only changing the Processor Affinity
    for MY copy, and only when I run into this problem. Restarting Access daily, I always get back to multi-processor mode soon thereafter.
    As this problem only seems to happen on very large Access table updates, and as there are only three of us performing those kind of updates, we have good control on who might want to change the affinity setting to solve this problem. However, I
    understand that in other environments this might not be a good solution. In my case, we have 16 processors on the server, so I always take #1, my co-worker here in the US always takes #2, etc. This works for us, and I am only describing it here in case it
    works for someone else.
    The big question in my mind is what multi-threading methods are employed by Microsoft for Access that would cause this problem for very large datasets. Processing time for an update query on, say, 2 million records is massively improved by going down
    to 1 processor. The problem is easily reproduced, and so far I have not seen it in Excel even when working with very large worksheets. Also have not seen it in MS SQL. It is just happening in Access.

  • Access System Resources using Java Applet via Java Script

    Hello
    I can access my Applet's public methods (and these methods access system resources) via JavaScript if I do the following: System.setSecurityManager(null). However, I'm making this post because I don't like this solution.
    Supposedly, setting the SM to null is like making the Applet (which is signed and was accepted by the user via a prompt from the browser) behave like a normal Java program that has no restrictions. (http://java.sun.com/docs/books/tutorial/essential/environment/security.html, second paragraph)
    However, this feels like a workaround of something that is supposed to be there (the SM).
    Also, if I make the method invocations from inside the applet (using Swing buttons and text boxes, for example), I can use the standard SM without any problems.
    From my readings, the problem regarding JavaScript invocation comes from the fact that the JavaScript is not a secure (not signed) source (because you can invoke public methods from it however you wish, I guess), on the contrary to the applet methods invoked by the buttons.
    Possible solutions I found on the web range from using the public static Object doPrivileged(PrivilegedAction action) method to imaginative things like creating new threads in the public method that call the private methods that access the system resources (supposedly because the new thread runs under the safe environment of the applet).
    I almost got a glimpse of hope with this post http://forums.sun.com/thread.jspa?threadID=5325271&tstart=0
    However, none of these solutions worked; the only results were with setSecurityManager(null). So, can anyone contribute a solution for this? Should I keep trying to find a solution other than the one I already have?
    Regards
    Cad
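As an aside on the doPrivileged() route the poster mentions, here is a minimal sketch of the pattern (written with a lambda for brevity, where applet-era code would use an anonymous inner class; the property read is just a stand-in for a real resource access):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivilegedRead {
    // Wrapping the sensitive call in doPrivileged() means the permission check
    // stops at this (signed) code's own protection domain instead of walking
    // back to the unsigned JavaScript caller on the stack.
    public static String readJavaVersion() {
        return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty("java.version"));
    }

    public static void main(String[] args) {
        System.out.println(readJavaVersion());
    }
}
```

On current JDKs, AccessController and the SecurityManager mechanism are deprecated, so this pattern only matters for the legacy applet environment discussed here.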

    1. Yes.
    2. Yes.
    Note for 2: the converter will run the applet with the Sun JRE for sure if the user has IE.
    IE will use the ActiveX technology to run the applet (as with Macromedia Flash).
    For Netscape I am not sure, but I would think Netscape will use the plug-in provided by Sun.
    Note on the Sun JRE 1.3: if this applet is to be used within a company that uses a proxy with NTLM authentication, the 1.3 applet cannot connect (to the Internet) through this proxy, since NTLM authentication has only been supported since J2RE 1.4.2_03. There is one version before that, but that one will pop up a window asking for the user's domain account and password, which is both lame and crappy.
    As for the IE settings, IE has a default setting that asks users the "do you trust" question for ActiveX controls within the Internet security zone (Tools -> Internet Options -> Security).
    Since anybody can make ActiveX controls (and also sign them), a user who has trouble finding the "no" button will sooner or later install a malicious ActiveX control (spyware or a virus).
    If this user's desktop is within your company's network, it will cause serious harm.
    This is why most companies disable this by changing the default Internet Explorer settings. Since I assume you are writing this applet to be used by a company, I also assumed that company has someone to maintain the desktops. In that case, I assume that person would want to control the security within the Sun JRE instead of letting the users decide what to trust and what not.

  • Using system resource to redirect class loading

    It seems to be an open secret that the JVM can be manipulated to load classes (or packages?) from alternate locations.
    Take, for example, this excerpt from a SAX user guide:
         In case your Java environment did not arrange for a compiled-in default (or to use
         the META-INF/services/org.xml.sax.driver system resource), you'll probably need to
         set the org.xml.sax.driver Java system property to the full classname of the SAX
         driver, as in
              java -Dorg.xml.sax.driver=com.example.xml.SAXDriver MySAXApp sample.xml
    So what is the "META-INF/services/org.xml.sax.driver" system resource? Clearly this resource is tied (historically) to the Manifest file in the jar archive format. Is the META-INF/services prefix special for remapping class names? What other services are there? What other resources can be placed in the META-INF directory? Does this work only for Manifest files in jar archives, or does it also work for conventional file systems?
    Does the -D option set class locations or package locations? How is the replacement class located (using the same class path)? Is the actual package/class name available for every (potentially) loaded package/class?
    How does this work? Where is this documented? What is this called?
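As a partial answer that can be tried directly: a META-INF/services/<interface-name> file is just a plain text file whose lines name provider classes, and java.util.ServiceLoader reads it from any classpath entry, a directory or a JAR alike. A self-contained sketch (java.lang.String is used as a stand-in "provider" purely because it implements CharSequence and has a public no-arg constructor):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ServicesDemo {
    // Build a throwaway classpath root containing a provider-configuration
    // file, then let ServiceLoader discover and instantiate the provider.
    static List<String> loadProviders() throws Exception {
        Path root = Files.createTempDirectory("svc");
        Path conf = root.resolve("META-INF/services/java.lang.CharSequence");
        Files.createDirectories(conf.getParent());
        // One provider class name per line, keyed by the service interface name.
        Files.write(conf, java.util.Arrays.asList("java.lang.String"));

        List<String> names = new ArrayList<>();
        try (URLClassLoader cl = new URLClassLoader(new URL[]{ root.toUri().toURL() })) {
            for (CharSequence provider : ServiceLoader.load(CharSequence.class, cl)) {
                names.add(provider.getClass().getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadProviders()); // prints [java.lang.String]
    }
}
```

This also answers the directory-versus-JAR question empirically: the lookup goes through ClassLoader.getResources, so a plain META-INF directory on the class path works the same as one inside a JAR.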

    Thanks for the reply, but this doesn't really answer the questions that I intended. Let me try again.
    The original posting shows two ways that the JVM can be coerced to load an implementation from an alternate location. This coercion process seems to be an undocumented, ad hoc extension to the JVM. For the purposes of these questions, I call this process "class substitution".
    No, you are misinterpreting what you are seeing there.
    The example command that you gave does nothing but define a property which is available via System.getProperty().
    That is all it does in terms of java (actually Sun's implementation of java.)
    A) Where in the Java documentation is this type of
    class substitution discussed? I have searched the
    Java language definition, Sun's Java site, and "Java
    In a Nutshell" for references to "system resources",
    "services", and various alternatives to no avail. The
    documentation has some reference to Properties, but
    the connection to class substitution is opaque to me.
    There is no class substitution.
    There is however, class loading. And that can be done in several ways.
    1. For one you can write your own class loader - see java.lang.ClassLoader.
    2. Or you can also load your own classes, which is likely to be what the command line you see is doing (when you run other code associated with that in your own application.) That is called reflection. You could start with the java.lang.Class - and practically any java book will cover it.
    3. And it is possible to load your own class instead of a core class. For example, you could replace java.lang.String. Your command line does not do that. But if you look at the documentation for java (Sun), you will find references to the Xbootclasspath option.
    For 1 and 2 above the Sun JVM will do nothing with the command example you gave unless you write some code or use some code (outside the sun jvm) that uses it. For 3 it doesn't have anything to do with example you gave.
    B) Can I achieve this result (class substitution) if I put a META-INF directory in a normal file-system class directory, or does this only work for JAR files? I'm sure a simple experiment would suffice, but I'd like to know WHY.
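To make the "no class substitution" point above concrete, here is a small sketch of what a library typically does with such a -D property: the JVM merely records the property, and library code reads it back and loads the named class by ordinary reflection (the property name demo.driver is made up for illustration):

```java
public class DriverLookup {
    // Mirrors the org.xml.sax.driver pattern: read a class name from a system
    // property (with a fallback) and resolve it with plain reflection against
    // the class path. Nothing in the JVM remaps or substitutes classes here.
    static Class<?> resolve(String propertyName, String fallbackClassName)
            throws ClassNotFoundException {
        String name = System.getProperty(propertyName, fallbackClassName);
        return Class.forName(name);
    }

    public static void main(String[] args) throws Exception {
        // Equivalent of passing -Ddemo.driver=java.util.ArrayList on the command line.
        System.setProperty("demo.driver", "java.util.ArrayList");
        System.out.println(resolve("demo.driver", "java.util.LinkedList").getName());
        // prints java.util.ArrayList
    }
}
```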

  • After Effects does not recognize correct system resources

    On my computer at work, I have After Effects CS4 and After Effects CS3. The computer is a 2x2.66 GHz Quad Core Intel Xeon with 8 gigs of ram running OSX 10.5.6.
    Both versions of After Effects can only see/use 1.7 gigs of ram. My understanding is that After Effects should see as available 3 gigs of ram. (On the identical computer at home I can in fact see 3 gigs of ram.)
    I need the maximum resources when working since I need to do longer HD ram previews. In addition at this low memory setting the software does not run right, with multiple memory crashes that should not be happening -- the kind of things that happen when you have insufficient system resources to run a program. I'm an advanced user who has been using this software for over 15 years -- it's not operator error.
    I'm hoping someone has some suggestions of things I can do to alleviate this bug.
    [As an additional note, the multiprocessing option is also telling me I have 16 processors, when I have only 8.]

    >  [As an additional note, the multiprocessing option is also telling me
    I have 16 processors, when I have only 8.]
    This is the easy question, so I'll deal with it first.
    This is a borderline bug. After Effects CS4 isn't able to take advantage of hyperthreading (which turns 8 physical processor cores into 16 virtual cores). However, it can _see_ the virtual cores, and it reports them as being there. Yeah, it's misleading. We fixed it in After Effects CS5, and we fixed it in the good way: by making After Effects CS5 capable of using the virtual cores provided by hyperthreading.
    OK. Now for the harder question:
    > Both versions of After Effects can only see/use 1.7 gigs of ram. My
    understanding is that After Effects should see as available 3 gigs of
    ram.
    Please be very specific about what you mean by "see". Exactly what are your settings in the Memory & Multiprocessing preferences? (A screenshot would remove all ambiguity.)
    Have a look at this page: "FAQ: Why doesn't After Effects see and use all of my RAM?"
    Also, see this page, especially the first comment on the bottom.

  • Excel 2010 not enough system resources to display completely

    I have searched the web high and low for an answer to this issue we are having with one user who uses Excel 2010 quite extensively and I have not been able to find an answer that resolves the issue.
    She keeps getting the error message "Not enough system resources to display completely."  The spreadsheet in question is a simple two worksheet file that she does simple calculations on.  Her PC is an HP Elitebook 8560w (only 1 month
    old) with an i7 Intel processor, 8GB RAM, 500GB HDD with over 300GB free, Windows 7 Professional (64-bit) with all the latest patches and updates, and Office 2010 Professional (32-bit).  The video and printer drivers are all updated to the latest versions. 
    The spreadsheet is one she has used before on her older Windows XP Pro SP3 desktop with no problems.   We have tried the "set the Zoom to 100% or less" resolution that has been mentioned all over the web and it does not work. 
    We are still having this issue.  Does anyone know how to resolve this in Excel 2010?  I have seen the "solutions" for Excel XP, 7, 2000, 2003 and 2007 and they don't seem to work, or at least the ones I have tried!  :-)
    Would welcome all serious suggestions.  Mahalo Nui Loa!

    I've had this problem on several computers on our network.  I found various recommendations, but no solid answers.  I have resolved this problem on two computers with just the first steps performed.  The third computer also required the additional step described further below:
    Deleted all temp files by:
    Start > Run > enter %temp% in the search box and press Enter
    Delete everything from this folder
    Cleared Recent Docs List & Lower Number Displaying:
    File > Options > Advanced > Scroll to Display category > change number of recent documents.  To clear the list, set this to 0 and save.  You can adjust up after that if you choose.
    Delete files from the following folders:
    (You may have to show hidden files first: top corner drop-down > Folder Options > View tab > Hidden files and folders > select Show hidden files, folders and drives.)
    Local Service:
    %windir%\ServiceProfiles\LocalService\AppData\LocalLow\Microsoft\CryptnetUrlCache\Content
    %windir%\ServiceProfiles\LocalService\AppData\LocalLow\Microsoft\CryptnetUrlCache\MetaData
    Network Service:
    %windir%\ServiceProfiles\NetworkService\AppData\LocalLow\Microsoft\CryptnetUrlCache\Content
    %windir%\ServiceProfiles\NetworkService\AppData\LocalLow\Microsoft\CryptnetUrlCache\MetaData
    LocalSystem:
    %windir%\System32\config\systemprofile\AppData\LocalLow\Microsoft\CryptnetUrlCache\Content
    %windir%\System32\config\systemprofile\AppData\LocalLow\Microsoft\CryptnetUrlCache\MetaData
    The above process worked to prevent this problem on two machines, but not on another.  On the one where it didn't work, I also did the following:
    Start > Programs > Windows Update > View Update History.  Look for KB# 2597166
    If it exists, click it and uninstall. 
    You can also try the following, though I didn't.  I might do this first next time, but I didn't feel it was my issue.
    Control Panel > Add/Remove Programs > Select Office > Change > Run Repair
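    The temp-file cleanup step above can also be scripted. Below is a hypothetical Java sketch (the class name TempCleaner is my own, not from any of the posts); it deletes only regular files directly under the given folder and silently skips anything locked or in use, which is what happens when you clear %temp% by hand anyway:

```java
import java.io.File;

// Hypothetical helper mirroring the "delete everything from %temp%" step above.
public class TempCleaner {
    // Deletes regular files directly under dir (non-recursive). Files that are
    // locked or in use make File.delete() return false and are simply skipped.
    public static int clean(File dir) {
        int deleted = 0;
        File[] files = dir.listFiles();
        if (files == null) return 0;          // not a directory, or unreadable
        for (File f : files) {
            if (f.isFile() && f.delete()) deleted++;
        }
        return deleted;
    }

    public static void main(String[] args) {
        // Pass the folder explicitly, e.g.:  java TempCleaner %temp%
        if (args.length == 1) {
            System.out.println("Deleted " + clean(new File(args[0])) + " file(s)");
        }
    }
}
```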

  • Not enough system-resources to finish required-service

    Hi,
    I have developed a backup program using the java.util.zip package.
    When I apply this program to zip a file greater than 100 MB, an OutOfMemoryError occurs.
    So I tried calling the JVM with the Xms and Xmx parameters set to 512 MB:
    java -Xms512M -Xmx512M ClassName
    Then my program crashes with the following exception:
    java.io.IOException: Ressources système insuffisantes pour terminer le service demandé
    (Translation: "Not enough system resources to perform the required service")
    My workstation is equipped with 512 MB of RAM on an Athlon 1000 MHz under Win2K Pro 5.00.2195, and I am using Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.0-C) - Java HotSpot(TM) Client VM (build 1.3.0-C, mixed mode).
    The memory usage after the exception is 112 MB,
    the maximum memory usage is 216 MB,
    and the virtual memory usage is 532 MB.
    Is there another way, short of increasing the physical memory of my workstation, to accommodate my JVM?
    Thanks beforehand for your replies.

    You create (write) an archive, I suppose.
    I wrote a little web server in Java which serves requests based on the zip archives listed in its configuration. That is, it lets us browse inside several zip archives with a browser without having to unpack them. I use this thing on my PC against a couple of complete Java SDK documentation packages, etc. The total size is above 150 MB (megabytes, or, if you prefer, megaoctets). The thing runs without memory options and without problems for a week (I usually shut the box down for the weekend).
    The bottom line: when reading archives I encounter no problems with the Java zip package.
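    For the original OutOfMemoryError, a common cause is reading the whole file into memory before zipping it. Here is a sketch of a streaming approach with java.util.zip that copies through a small fixed buffer, so memory use stays constant regardless of file size. (It uses modern try-with-resources syntax; on the 1.3 JVM in the post you would close the streams in a finally block instead.)

```java
import java.io.*;
import java.util.zip.*;

// Sketch: zip one file by streaming it through a fixed-size buffer,
// never holding the whole file in memory.
public class StreamZip {
    public static void zipFile(File src, File dest) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(src));
             ZipOutputStream out = new ZipOutputStream(
                     new BufferedOutputStream(new FileOutputStream(dest)))) {
            out.putNextEntry(new ZipEntry(src.getName())); // one entry named after the source file
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {               // copy 8 KB at a time
                out.write(buf, 0, n);
            }
            out.closeEntry();
        }
    }
}
```

    With this pattern, a 100 MB (or 1 GB) input needs no special -Xmx settings at all.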

  • Your request cannot be processed due to low system resources.

    When we try to open some of the reports we get the following messages all the time. Does anyone know why these errors occur? They are Crystal Reports...
    All other reports work just fine.  Environment: BO XIR2, SP4, Linux, WebLogic 9.2
    Apr 28 12:09:52 Servername boe_pagesd[4769]: A failure occurred while the Page Server was processing report 'rptReportname' (id=1596058) for user xxxx
    Apr 28 12:09:52 Servername boe_cachesd[4500]: Your request cannot be processed due to low system resources.  Please contact your system administrator.
    Apr 28 12:10:02 Servername boe_pagesd[4769]: A failure occurred while the Page Server was processing report 'rptReportname' (id=1596058) for user xxxx
    Apr 28 12:10:02 Servername boe_pagesd[4769]: A failure occurred while the Page Server was processing report 'rptReportname' (id=1596058) for user xxxx
    Thanks,
    Anil

    Use "top" or "ps" on Linux to look at the memory and file-handle consumption of the Crystal processes.  You should also double-check ulimit.
    It could also be that other non-boe processes are running on your machine and competing for resources.  Also, check if there are any clues in the system logs and look for any core files that could be in your bin directories.

  • Access 2010 warn "System Resources Exceeded" when change machine

    My Access file is 1.2 GB and needs to run a lot of simple Update SQLs. The program ran well on my old PC.
    I recently changed the PC from a Core 2 Duo to an i5-4570S. Access
    warns "System Resources Exceeded" when running. If I use the old PC to run it, it is OK. 
    My old PC is an E7500 with 2 GB RAM, Win 7 (64-bit) + Office 2010 (64-bit). The new PC is an i5-4570S with 8 GB RAM, with the same OS and Office. Why would system resources be exceeded on my new machine?
    I searched many websites for this and cannot find a good solution. Does anyone know how to solve it?
    I have posted this question on Access 2010 help forum before, please refer to:
    http://social.msdn.microsoft.com/Forums/office/en-US/60810447-2f18-452e-b752-c23bda4a67aa/access-2010-warn-system-resources-exceeded-when-change-machine?forum=accessdev

    I don't use ODBC. I use ADO with an ACE connection string to connect to the database.
    The Access version on the new PC is already newer than on my old PC.
    I tried different PCs with different settings and found that with 2 GB RAM the program is stable. With 4 GB or above, the error appears. 
    The error does not appear at the same command every time, and sometimes it does not appear at all. If I compact the database and re-run the command, it may pass, but not always. 

  • System Resources Running Low

    I have a Satellite A105-S4064 model, XP version. I keep getting the following message and I do not know what it means. If anyone can help me out I would appreciate it. I have plenty of memory left and battery checks out fine.
    Message reads:
    "System Resources are running low. Close some applications running on this computer."
    May not be these exact words, but close to it.

    Satellite A505-S6980 
    > ...someone reopens the computer later, I get a message that says...
    You have Windows set to do what when the lid is closed? I set mine to Do Nothing.
       Control Panel > Power Options > Choose what closing the lid does
    -Jerry
