Large number of FNDSM and FNDLIBR processes

Hi,
Description of my system:
Oracle EBS 11.5.10 + Oracle 9.2.0.5 + HP-UX 11.11
Problem: there are a large number of FNDSM, FNDLIBR and sh processes during peak load (around 300), but even at no load these processes don't come down. The Oracle processes come down from 250 to 80, but these apps-tier processes just don't get killed automatically.
Can I kill these processes manually?
One more thing: even after stopping the applications with adstpall.sh, these processes don't get killed. Is that normal? For now I just dismount the database so as to kill these processes.
And under what circumstances should I run cmclean?

Hi,
> there are a large number of FNDSM, FNDLIBR and sh processes during peak load, around 300, but even at no load these processes don't come down
This means there are lots of zombie processes running, and all of these need to be killed.
Shut down your application and database and bounce the server, as there are too many zombie processes. I once faced an issue where, due to these zombie processes, CPU utilization went to 100% and stayed there.
Once you restart the server, start the database and listener, run cmclean, and then start the application services.
> even after stopping applications with adstpall.sh, these processes don't get killed, is it normal?
No, it's not normal and should not be neglected. I should also advise you to run the [Oracle Application Object Library Concurrent Manager Setup Test|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=200360.1].
> and under what circumstances should I run cmclean?
See [CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=134007.1].
You can run cmclean if you find that, after starting the applications, the managers are not coming up or the actual processes do not equal the target processes.
Thanks,
Anchorage :)

Similar Messages

  • Video recording has large number of dots and arrows over the recording

    My iPod nano was delivered today.
I've just recorded some video, but the recording has a large number of dots and arrows in a grid pattern all over it. It's unusable. Does anybody have a similar experience?

Can't say that I have. Have you read the iPod nano (5th generation) User Guide or gone through the Knowledge Base for possible solutions?
Every iPod comes with complimentary, single-incident telephone technical support within 90 days of your iPod purchase. If you also purchased AppleCare, then your warranty is extended for technical support and hardware coverage for *two years* from the original purchase date of your iPod.

  • Communicate large number of parameters and variables between Verstand and Labview Model

We have a dyno setup with a PXI-E chassis running Veristand 2014 and Inertia 2014. In order to enhance the capabilities and timing of Veristand, I would like to use Labview models to perform tasks not possible in Veristand and Inertia. An example of this is determining the maximum of a large number of thermocouples. Veristand has a compare function, but it compares only two values at a time. This makes for some lengthy and inflexible programming. Labview, on the other hand, has a function which allows one to get the maximum of the elements in an array in a single step. To use Labview I need to "send" the 50 or so thermocouples to the Labview model. In addition to the variables which need to be communicated between Veristand and Labview, I also need to present Labview with the threshold and configuration parameters. From the forums and user manuals I understand that one has to use the connector pane in Labview and mapping in Veristand System Explorer to expose the inports and outports. The problem is that the Labview connector pane is limited to 27 I/O. How do I overcome that limitation?
    BTW. I am fairly new to Labview and Versitand.
    Thank you.
    Richard

    @Jarrod:
    Thank you for the help. I created a simple test model and now understand how I can use clusters for a large number of variables. Regarding the mapping process: Can one map a folder of user channels to a cluster (one-step mapping)? Alternatively, I understand one can import a mapping (text) file in System Explorer. Is this import partial or does it replace all the mapping? The reason I am asking is that, if it is partial, then I can have separate mapping files for different configurations and my final mapping can be a combination of imported mapping files.
    @SteveK:
Thank you for the hint on using a Custom Device. I understand that the Custom Device will be much more powerful and can be more generic. The problem at this stage is that my limitations in programming in Labview are far greater than Labview models' limitations in Veristand. I'll definitely consider the Custom Device route once I am more proficient with LabView. Hopefully I'll be able to re-use some of the VIs I created for the LabView models.
    Thanks
    Richard

  • Need to blacklist (block) a large number of domains and email addresses

    I tried to import a large number of domains into a transport rule (new/set-transportrule -name xxx -domainis $csv -quarantine $true). 
    I added a few domains manually and tested to make sure the rule was working.  It was.  Then I ran the cmdlet to import a small number of domains (4).  That also worked.  Then I ran the cmdlet on a large number of domains.  The cmdlet
    worked with no error message (once I got the list of domains well under 4096 bytes).  I waited a day to see if the domains would show up in the rule.  But the imported domains did not show up.
    Is there a better solution to blocking large numbers of domains and email addresses?  Or is the transport rule the only option? 

Since you do not want to crop your images to a square 1:1 aspect ratio, changing the canvas to be square will not make your images square; they will retain their aspect ratio, and the image size will be changed to fit within your 1020 px square. There will be a border on one side, or borders on two opposite sides. You do not need a script, because Photoshop ships with a plug-in script that can be used in Actions. What is good about plug-ins is that they support Actions: when you record the action, the plug-in records the settings you use in its dialog into the action's step. When the action is played, the plug-in uses the recorded settings and bypasses displaying its dialog, so the action can be batched. The action you would record has two steps. Step 1: menu File > Automate > Fit Image... and in the Fit Image dialog enter 1020 in the width and height fields. Step 2: Canvas Size; enter 1020 pixels in width and height (not relative), leave the anchor point centered if you want even borders on two sides, and set the canvas color to white in the Canvas Size dialog. You can then batch the action.
The above script will also work. It squares the document and then resizes to 1020x1020, whereas the action resizes the image to fit within a 1020 x 1020 area and then adds any missing canvas. The script, like the action, only processes one image, so it would also need to be batched: record the script into an action and batch the action, as the author wrote. The script's resize-canvas step did not specify an anchor point, so the default center anchor point is used; like the action, canvas will be added to two sides.

  • How do I create new versions of a large number of images, and place them in a new location?

    Hello!
    I have been using Aperture for years, and have just one small problem.  There have been many times where I want to have multiple versions of a large number of images.  I like to do a color album and B&W album for example.
Previously, I would click on all the images at once and select New Version. The problem is this puts all of the new versions in a stack. I then have to open all the stacks and, one by one, move the new versions to a different album. Is there any way to streamline this process? When it's only 10 images, it's no problem. When it's a few hundred (or more), it's rather time consuming.
    What I'm hoping for is a way to either automatically have new versions populate a separate album, or for a way to easily select all the new versions I create at one time, and simply move them with as few steps as possible to a new destination.
    Thanks for any help,
    Ricardo

    Ricardo,
    in addition to Kirby's and phosgraphis's excellent suggestions, you may want to use the filters to further restrict your versions to the ones you want to access.
    For example, you mentioned
      I like to do a color album and B&W album for example.
    You could easily separate the color versions from the black-and-white versions by using the filter rule:
    Adjustment includes Black&white
         or
    Adjustment does not include Black&white
With the above filter setting (Add rule > Adjustment includes Black&White), only the versions with the Black&White adjustment are shown in the Browser. You could do something similar to separate cropped versions from uncropped ones.
    Regards
    Léonie

  • I have lost all my captioning and keywording for a large number of .dng and raw files on transfer to LR6 CC - any help!

    I have imported my old LR5 catalogue into LR6 CC but appear to have lost a lot of my captioning and keywording to a very large number of files which are either .dng or other raw formats. I am working on a Mac. Can somebody please help as I am very worried about losing vital information.
    Thanks!
    Gerry

OK, I may have pinned down the problem, and it may not be LR6-related. Reimporting my LR5 catalogue shows that the information is there on files that are marked as missing from the catalogue (i.e. marked '!'). These are missing because I have renamed them, which would suggest that a renamed file is not linking with its old .xmp file. Do you know of a way I can link a renamed file with its old .xmp file? Would I have to laboriously go through all my .xmp files and rename them, for instance?

  • I am experiencing a large number of bugs, and I am seeking some help with fixing one or more of them; here is the list I have created.

    32GB GSM iPhone5S 7.0.3 Bug List
- Recurring iOS crashes/reboots/restarts, under the following specific circumstances:
    • Sliding down notification center
    • Sliding up control center
    • Tapping the top area of the screen where the status bar would be, within full-screen applications that don't show the status bar.
    • Tapping the bottom area of the screen where one would slide up to activate the control center, within full-screen applications that don't show the status bar.
    • Configuring various different iOS UI/iOS App/3rd party App settings, from within the Settings App.
    • Configuring various different iOS UI/iOS settings, from within the Control Center.
    - Specific iOS App bugs:
    • iWork apps freezing/lagging, even when no other applications are open and/or backgrounded.
• App Store not showing reviews properly (i.e., not showing all reviews, not showing any reviews, and/or showing reviews in an incorrect order). I've confirmed these issues by checking app review details online for an array of different apps, on multiple iPhone models and on desktop computers. The apps show their reviews online on those platforms, but a large portion of them aren't showing any/all reviews properly within the App Store specifically.
• Recurring loss of Siri-created reminders' details: after having a reminder created by Siri, its specified details are deleted when attempting to manually reconfigure one or more of that reminder's options.
    - Specific iOS UI bugs:
    • Notification Center "Today" section, Reminders' scheduled dates showing either black or white, intermittently.
* Please note that these issues are still present, regardless of the fact that I have already tried to fix them by reinstalling iOS 7, hard-rebooting, and restoring my iPhone.

Add a new, temporary column (or one you can hide when not using it). Let's say the current prices are in column C and the new column is D.
You can inflate the prices in column C by 5% by entering this formula in the first cell of column D:
=C2*105%
To fill down, select D2, then copy, then select D3 through the end of the column, then paste. Now select C2 through the end of column C and paste over the existing values using the menu item "Edit > Paste Values".
You can now hide column D until you need to use it again.

  • Aperture to Lightroom move, super large number of photos and videos

Hi everyone. I am moving my photo and video collection from Aperture libraries to Lightroom. Editing is so much better in Lightroom, but with image organization I run into tons of problems. The biggest one is size. I have around 500,000 photos and videos. I have only imported 7,000 and the preview catalogs are getting huge, 16 GB right now. I would love suggestions on large photo/video storage management.

Just one thing for now: preview data size is controlled by automatically expiring 1:1 previews (or manually deleting them), and by making sure standard previews are no larger than necessary. They can be medium or "low" (not that low) quality, and do not need to be larger than your monitor size (and can be substantially smaller, if preview storage is a primary concern). Also note: previews can be relocated to another disk if the one where the catalog has to be isn't big enough; just make a link in the catalog folder to where the previews will actually be. That is not recommended except as a last resort: it is generally better to get a bigger drive (or relocate the entire catalog folder) if possible.
    PS - Don't make 1:1 previews upon import unless you really need them.

  • Creative X-Fi problems - large number of playlists and protected content

    Hi,
    Apologies if I'm posting to the wrong forum - please point me in the right direction!
    I'm a developer, consulting for a client trying to launch a business around pre-loading mp3 players, and later updating the content and playlists on the player.
    We're trying to load a Creative Zen X-Fi with 7000 pieces of protected subscription content (WMDRM Root and Leaf licenses) and about 400 playlists which point to this content. We're using the WMDM SDK (not the WPD/MTP Sdk) to load the content onto the X-Fi just fine. Building the playlists and loading them to the device usually succeeds, though we occasionally see a WMDM-specific out of memory error code. Later, we update (remove, then add) content to the device and replace all of the playlists after new content is added. This is very buggy, and we can't tell if we're hitting device limitations or what. Music and playlists are loaded into the Music folder and My Playlists folder, respectively - matching WMP behavior.
    Some straightforward questions:
1. After writing 430 playlists to the device (some containing over 1000 songs) all is well. Playlists play as expected, therefore rights resolve OK. If I delete the playlists, then re-create them, it appears as though things are successful, but when I unplug the device its screen goes blank. Hitting the reset button works around the problem, but customers won't like that.
2. When loading new content I periodically get C00D2779 - 'The file could not be transferred because the device clock is not set'. This is after a successful update of the root license. Is there something I can do to proactively set the secure clock? I would think not - it's not very secure if I can set it - but I'm puzzled by this error.
    3. Is there a documented limit (or any guidelines/help) on the X-Fi's limit on number of playlists, number of pieces of protected content, number of entries in a playlist?
4. Does Creative provide a developer's forum where my questions might get some traction?
    Thanks - any help/pointers are appreciated!
    Ben McAllister
    http://www.listenfaster.com

Wow. I certainly would want to know this one too, since I have never gotten to put such a huge amount of files and playlists on my player.

  • Large number of JSP performance

    Hi,
a colleague of mine ran tests with a large number of JSPs and identified a performance problem.
    I believe I found a solution to his problem. I tested it with WLS 5.1 SP2
    and SP3 and MS jview SDK 4.
The issue was related to the duration of the initial call of the nth JSP, which is our situation as we are doing site hosting.
The solution is able to perform around 14 initial invocations/s, no matter whether the invocation is the first one or the 3000th one, and the throughput can go up to 108 JSPs/s when the JSPs are already loaded, the JSPs being the snoopservlet example copied 3000 times.
The ratios are more meaningful than the absolute values, as the test machine (client and WLS 5.1) was a 266 MHz laptop.
    I repeat the post of Marc on 2/11/2000 as it is an old one:
    Hi all,
I'm wondering if any of you have experienced performance issues when deploying a lot of JSPs.
I'm running WebLogic 4.51 SP4 with the performance pack on NT4 and JDK 1.2.2.
I deployed over 3000 JSPs (identical but with distinct names) on my server.
I took care to precompile them off-line.
To run my tests I used a servlet selecting one of them randomly and redirecting the request:
getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
The response time slows down dramatically as the number of distinct JSPs invoked grows (up to 100 times the initial response time).
    I made some additional tests.
    When you set the properties:
    weblogic.httpd.servlet.reloadCheckSecs=-1
    weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
Then the response time for a new JSP seems linked to a "capacity increase process" and depends on the number of previously activated JSPs. If you invoke a previously loaded page the server answers really fast with no delay.
If you set the previous properties to any other value (0 for example) the response time remains bad even when you invoke a previously loaded page.
SOLUTION DESCRIPTION
Intent
The package described below is designed to allow:
    * Fast invocation even with a large number of pages (which can be the case
    with Web Hosting)
    * Dynamic update of compiled JSP
    Implementation
    The current implementation has been tested with JDK 1.1 only and works with
    MS SDK 4.0.
    It has been tested with WLS 5.1 with service packs 2 and 3.
    It should work with most application servers, as its requirements are
    limited. It requires
    a JSP to be able to invoke a class loader.
    Principle
    For a fast invocation, it does not support dynamic compilation as described
    in the JSP
    model.
    There is no automatic recognition of modifications. Instead a JSP is made
    available to
    invalidate pages which must be updated.
    We assume pages managed through this package to be declared in
    weblogic.properties as
    weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
    This definition means that, when a servlet or JSP with a .ocg extension is
    requested, it is
    forwarded to the package.
    It implies 2 things:
    * Regular JSP handling and package based handling can coexist in the same
    Application Server
    instance.
    * It is possible to extend the implementation to support many extensions
    with as many
    package instances.
    The package (ocgLoaderPkg) contains 2 classes:
    * ocgServlet, a servlet instantiating JSP objects using a class loader.
    * ocgLoader, the class loader itself.
    A single class loader object is created.
    Both the JSP instances and classes are cached in hashtables.
The invalidation JSP is named jspUpdate.jsp.
To invalidate a JSP, it simply removes the object and class entries from the caches.
    ocgServlet
    * Lazily creates the class loader.
    * Retrieves the target JSP instance from the cache, if possible.
* Otherwise it uses the class loader to retrieve the target JSP class, creates a target JSP instance and stores it in the cache.
    * Forwards the request to the target JSP instance.
    ocgLoader
* If the requested class does not have the extension that ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
    * Otherwise, it retrieves the class from the cache, if possible.
    * Otherwise, it loads the class.
Do you think it is a good solution?
I believe this solution is faster than the standard WLS one, because it is a very small piece of code, but also because:
- My class loader is deterministic: if the file has the right extension, I don't call the class loader hierarchy first.
- I don't try to support jars. That was one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
- I don't try to check if a class has been updated. I have to ask for a refresh using a JSP for now, but it could be an EJB.
- I don't try to check if a source has been updated.
- As I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches fairly accurately and avoid rehashing.
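The caching and invalidation scheme described above can be sketched roughly as follows. This is a minimal illustration of the idea only, not the original ocgLoaderPkg code; CachingLoader and its methods are invented names, and a fixed stand-in class replaces the real class loading:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the described pattern: loaded classes and page
// instances are cached in two maps, and invalidation simply drops the
// entries so the next request reloads them.
class CachingLoader {
    private final Map<String, Class<?>> classCache = new ConcurrentHashMap<>();
    private final Map<String, Object> instanceCache = new ConcurrentHashMap<>();

    // Returns the cached instance for a page name, creating it on first use.
    Object getInstance(String name) {
        return instanceCache.computeIfAbsent(name, n -> {
            Class<?> cls = classCache.computeIfAbsent(n, this::loadClass);
            try {
                return cls.getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(e);
            }
        });
    }

    // Invalidation, as jspUpdate.jsp does in the post: remove both cache
    // entries so the class is loaded again on the next request.
    void invalidate(String name) {
        instanceCache.remove(name);
        classCache.remove(name);
    }

    private Class<?> loadClass(String name) {
        // The real package would read the compiled page's class bytes from
        // disk with a custom class loader; a fixed stand-in class is used
        // here purely for illustration.
        return StringBuilder.class;
    }
}
```

Repeated requests for the same page then hit the cache, while invalidation forces exactly one reload on the next request.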

    Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.

  • Large number of JSP performance [repost for grandemange]


I don't know the upper limit, but I think 80 is too much; I have never used more than 15-20. For navigational attributes, separate tables are created, which causes the performance issue as it results in a new join at query run time. Just ask your business people whether these can be reduced. One way could be to model these attributes as separate characteristics. It will certainly help.
    Thanks...
    Shambhu

  • ABAP trial 7.00 timeouts and background processes

    Does anyone know if it's possible to change the number of dialog and background processes?
    And the transaction to change the timeouts of such processes?
    Thank you

You can change the timeout in RZ11. The parameter name is rdisp/max_wprun_time. Background processes have no timeout, so this value is only valid for dialog processes.
To change the number of processes, you have to adjust your instance profile. In the 'Netweaver Administrator' forum there should be some information about that. Just a short hint: you can adjust the profiles in RZ10, but you have to do a few steps for that. Just search in the NW-Admin forum mentioned above.
    Regards,
    ulf

  • Submit a large number of tasks to a thread pool (more than 10,000)

I want to submit a large number of tasks to a thread pool (more than 10,000).
Since a thread pool takes a Runnable as input, I have to create as many Runnable objects as there are tasks, but since the number of tasks is very large this causes a memory overflow and my application crashes.
Can you suggest some way to overcome this problem?

Ravi_Gupta wrote:
I have to serve them infinitely depending upon the choice of the user.
Take a look at my code (the code of MyCustomRunnable is already posted):

public void start(Vector<String> addresses)
searching = true;
> What is this for? Is it a kind of comment?

Vector<MyCustomRunnable> runnables = new Vector<MyCustomRunnable>(1,1);
for (String address : addresses)
try
runnables.addElement(new MyCustomRunnable(address));
catch (IOException ex)
ex.printStackTrace();
}
> Why does MyCustomRunnable throw an IOException? Why is it using up resources when it hasn't started? Why build this vector at all?

//ThreadPoolExecutor pool = new ThreadPoolExecutor(100,100,50000L,TimeUnit.MILLISECONDS,new LinkedBlockingQueue());
ExecutorService pool = Executors.newFixedThreadPool(100);
> You have 100 CPUs? Wow! I can only assume your operations are blocking on a Socket connection most of the time.

boolean interrupted = false;
Vector<Future<String>> futures = new Vector<Future<String>>(1,1);
> You don't save much by reusing your vector here.

for(int i=1; !interrupted; i++)
> You are looping here until the thread is interrupted. Why are you doing this? Are you trying to generate load on a remote server?

System.out.println("Cycle: " + i);
for(MyCustomRunnable runnable : runnables)
> Change the name of your Runnable, as it clearly does much more than that. Typically a Runnable is executed once and does not create resources in its constructor nor have a cleanup method.

futures.addElement((Future<String>) pool.submit(runnable));
> Again, it is unclear why you would use a Vector rather than a List here.

for(Future<String> future : futures)
try
future.get();
catch (InterruptedException ex)
interrupted = true;
> If you want this to break the loop, put the try/catch outside the loop.

ex.printStackTrace();
catch (ExecutionException ex)
ex.printStackTrace();
> If you are generating a load test, you may want to record this kind of failure, e.g. count them.

futures.clear();
try
Thread.sleep(60000);
> Why do you sleep even if you have been interrupted? For better timing, you should sleep before checking whether your futures have finished.

catch(InterruptedException e)
searching = false;
> Again, this does nothing.

System.out.println("Thread pool terminated..................");
//return;
> Remove this comment; it's dangerous.

break;
> Why do you have two ways of breaking the loop? Why not interrupted = true here?

searching = false;
System.out.println("Shut downing pool");
pool.shutdownNow();
try
for(MyCustomRunnable runnable : runnables)
runnable.close(); //release resources associated with it.
catch(IOException e)
> Put the try/catch inside the loop. You may want to ignore the exception, but if one fails, the rest of the resources won't get cleaned up.

The above code serves the tasks infinitely until it is terminated by the user.
I had created a large number of Runnable and Future objects, and the fact that they remain in memory until the user terminates the operation might be the cause of the memory overflow.
> It could be the size of the resources each runnable holds. Have you tried increasing your maximum memory? e.g. -Xmx512m
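One common way to avoid materializing 10,000+ Runnables at once is to bound the executor's work queue and let the submitting thread throttle itself when the queue is full. A minimal sketch of that idea (BoundedSubmit, runTasks and the pool sizes are illustrative choices, not from the original code):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedSubmit {
    // Runs n trivial tasks through a pool whose queue is bounded, so at
    // most (threads + queue capacity) tasks exist in memory at any moment.
    static int runTasks(int n) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),
                // When the queue is full, the submitting thread runs the
                // task itself, which throttles submission instead of
                // queueing thousands of Runnables.
                new ThreadPoolExecutor.CallerRunsPolicy());
        AtomicInteger done = new AtomicInteger();
        // Create each task lazily inside the loop rather than collecting
        // them all in a Vector first.
        for (int i = 0; i < n; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10_000)); // prints 10000
    }
}
```

With CallerRunsPolicy no task is ever rejected, so all 10,000 run while memory use stays bounded by the pool and queue sizes.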

  • Suggestion for a future release (Large number of playlists)

It would really be great if you could send a playlist to a connected iPod by right-clicking on it, instead of having to drag it. When you have a large number of playlists and you want to drag one from the bottom, it can take 2 or 3 minutes just to get it to your iPod so you can drop it.
And before anyone says it: I know that you can use Sync to transfer playlists to your iPod, but when you're trawling through your playlists, sometimes auditioning songs so you can decide what you want to listen to on your travels next week, it would be a much nicer user experience to be able to send one to your iPod then and there.

Send suggestions directly to Apple via the feedback links at http://www.apple.com/contact/ The Discussions are end-user to end-user assistance, and people at Apple do not regularly read these discussions.

  • Setting up and installing app from App Store on a large number of iPads

I need to set up and install just one application available on the App Store (not a free one) on 140 iPads. I have already read the discussions about setting up a large number of iPads. But as I'm not setting up any enterprise network for them and don't want to install profiles on them, I am wondering how to do that:
- Multiple App Store accounts to download the apps, or just the same one downloading the app each time (as it's for commercial use, I need one app per iPad)?
    Thanks for your thoughts
    Baptiste

This can be a challenge. The solution we have tried relies upon "gifting" the application. You log into iTunes, purchase the app, then gift it by specifying a delivery email address. You can specify multiple recipients by repeating the process over and over again. The recipient then accesses the email containing the gift link on their iPad and installs the app from a link within that email as displayed on the iPad. We've found in some preliminary testing that there is no need for the email address listed in the iTunes purchase to be the same email address as the iTunes account address. This would be useful when the gift is sent to a corporate email address, but the iTunes account on the device is a different, personal email address. Whoever clicks the correct link from an iPad can install the app. It's not pretty by any means, but it avoids having to have each user purchase the app on their own and be reimbursed.
