How to handle a large number of threads?

hello friends,
I wrote a program that digs through all files and folders to find a given filename.
The logic creates one thread per directory: each thread lists the files and subdirectories of its directory and matches each name against the search string, and whenever it finds a subdirectory it spawns a new thread for it, and so on.
The problem is that while running, the program ends up creating a huge number of threads (around 3000) and the system goes down.
What would be a good solution for this? I'd appreciate suggestions for scheduling the threads so the system performs better.
The logic code is given below:
import java.io.File;

class DiggtheFiles implements Runnable {
     private File currentdir = null;
     private String search = "";
     private File files[] = null;

     DiggtheFiles(File file, String search) {
          this.currentdir = file;
          this.search = search;
     }

     public void run() {
          files = currentdir.listFiles();
          if (files != null && files.length > 0) {
               synchronized (this) {
                    for (int i = 0; i < files.length; i++) {
                         if (files[i].isDirectory()) {
                              data.dircount++;
                              if (files[i].getName().contains(search))
                                   data.filearry.add(files[i].getAbsolutePath());
                              // one new thread per subdirectory -- this is what
                              // makes the thread count explode
                              new Thread(new DiggtheFiles(files[i], search)).start();
                         } else {
                              if (files[i].getName().contains(search))
                                   data.filearry.add(files[i].getAbsolutePath());
                              data.filecount++;
                         }
                    }
               }
               Filechooserpanel.consolearea.setText("DIR:" + data.dircount
                         + " FILE:" + data.filecount + " Thread Completed:"
                         + data.threadcount);
               data.threadcount++;
          } else {
               data.threadcount++;
          }
     }
}
Remove the synchronized block, and instead of calling new Thread() for every directory, put each directory on a work list (workList.add()) and have a ThreadPoolExecutor take work items off the list. That caps the number of live threads at the pool size.
Or go non-parallel and just use a recursive call?
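A minimal sketch of the pooled approach, assuming a fixed pool of 8 workers (the class name FileSearch and the pending-counter scheme are illustrative, not from the original post): each directory becomes a task submitted to the pool instead of a new thread, and a counter of outstanding tasks tells the caller when the traversal is done.

```java
import java.io.File;
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class FileSearch {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    private final List<String> matches = new CopyOnWriteArrayList<>();
    private final AtomicInteger pending = new AtomicInteger();

    public List<String> search(File root, String term) throws InterruptedException {
        submit(root, term);
        synchronized (pending) {                 // wait until no tasks are outstanding
            while (pending.get() > 0) pending.wait();
        }
        pool.shutdown();
        return matches;
    }

    private void submit(File dir, String term) {
        pending.incrementAndGet();               // count the task before it runs
        pool.execute(() -> {
            File[] files = dir.listFiles();      // may be null on I/O error
            if (files != null) {
                for (File f : files) {
                    if (f.getName().contains(term)) matches.add(f.getAbsolutePath());
                    if (f.isDirectory()) submit(f, term);  // enqueue, don't spawn
                }
            }
            synchronized (pending) {
                if (pending.decrementAndGet() == 0) pending.notifyAll();
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> hits = new FileSearch().search(new File("."), "java");
        System.out.println(hits.size() + " matches");
    }
}
```

Because the tasks only enqueue subdirectories and never block waiting on each other, a small fixed pool cannot deadlock here, and the thread count stays at 8 no matter how deep the tree is.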

Similar Messages

  • How to handle large number (7200+) identical HorizontalRule's

    I have an application where performance is becoming an issue. I have 30 VBoxes that each contain identical HorizontalRules, the number of
    HorizontalRules is unknown until render time but can easily be up to 240 per VBox. As there are 30 VBoxes this results in 7200 HorizontalRules being added to the stage. This results in large memory consumption and poor rendering time on lower specification machines. To speed this up I have tried using Sprites and graphics.draw to render the line, but I think it is the time taken to create the Object that is the problem.
    Is there any way to create one HorizontalRule and add it to the stage multiple times? I know that Flex will remove the child if it is added again with a different [x,y] coordinate so my attempts to do that failed.
    Thanks for any suggestions.

The VBoxes are typically about 20 pixels wide, and each has a line about 4 or 5 pixels apart all the way down its length. That is why there are so many of the little blighters.
You are probably right, but I had a problem with graphics.draw in that they had to be added as rawChildren. The component is resizable and I found it impossible to move the drawn lines. I could not get a good reference to them, even when I stored each line in a separate Array before adding it as a rawChild, and so I could not even delete them reliably. I know that the rawChildren and children/elements have things at different indexes, but I didn't manage to find a way to use that information.
You have given me the idea that I shouldn't draw them all on the screen at once, though. With the full 7,200 lines, only about 30% are on the screen; I will look into using an item renderer. Is that the correct way to do it?
Does anyone know how to reliably access rawChildren so they can be moved? Maybe my app is just a little weird. I will try building a test case and see if I can add and remove from the rawChild list freely in a simpler application.

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle large result set of a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter. If I don't specify it, it by default returns only 100 lines of records in the query result. If I specify it, then it's limited to a specific number.
    Is there any way to get around of this row count issue? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain. 
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on data of relevance, not forcing them to scroll through thousands or millions of records.
You can also use a time-based paging model so that you only deal with one time "slice" per request (e.g. an hour, a day, etc.) and provide a scrolling window. This is commonly how large datasets are dealt with in applications.
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick
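The time-slice paging described above can be sketched like this (a generic illustration, not the actual Query Template API; fetchSlice() is a hypothetical stand-in for the real per-slice query):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class TimePager {
    // placeholder for the real query limited to one time window
    static List<String> fetchSlice(Instant from, Instant to) {
        return List.of("rows from " + from + " to " + to);
    }

    public static void main(String[] args) {
        Instant start = Instant.parse("2024-01-01T00:00:00Z");
        Instant end   = Instant.parse("2024-01-01T06:00:00Z");
        Duration slice = Duration.ofHours(1);

        List<String> all = new ArrayList<>();
        // one bounded "page" per request instead of one giant result set
        for (Instant t = start; t.isBefore(end); t = t.plus(slice)) {
            all.addAll(fetchSlice(t, t.plus(slice)));
        }
        System.out.println(all.size() + " slices");
    }
}
```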

  • How to reconnect large number of photos?

    how to reconnect large number of photos?

Click File >> Reconnect >> All Missing Files and it should import automatically. If it doesn't, browse for yourself and locate the folder where the media corresponding to these missing files is present.
    Thanks
    Andaleeb

  • How to block large number of scheduling agreements?

    How to block large number of scheduling agreements (Purchasing doc type 'LPA') ?

Hi
You can use T.code MASS, choose scheduling agreements, and you will see the field LOEKZ there (deletion indicator). You can mark them for deletion, and they can be undeleted in the future if you want.
If you want to block them with a source list, you need to do an LSMW or BDC for ME01 and, for your agreements, block them in a validity period for a plant.
    Regards
    Antony

  • How to decide the number of threads?

    Dear all,
Suppose we have a multi-threaded application, and we know that each thread is doing the same kind of work. What is a reasonable number of threads to set at the beginning?
It seems that a thread is expensive, so we should use only 1 thread. (wrong)
It seems that we should use 100000000000 threads. (wrong)
I would like to know your opinion about the above questions.
    Regards.
    Pengyou

    pengyou wrote:
    Thanks for the two answers.
    In fact, I have made a crawler to crawl internet sites. I would like to know in practice should I:
1. Run the application to crawl the Internet sites one by one;
2. Use multiple threads to crawl n sites in parallel (if one thread fails, what to do?);
3. Run the application n times to crawl n sites.
You will likely be spending a bunch of time in blocking I/O, so you should gain a lot by adding multiple threads. However, please be nice to the web hosts:
- Put all requests to the same domain into a single thread. This prevents your crawler from stealing too much bandwidth from the web host and making it harder for real live people to use the site. Some sites will detect such hits, treat them as attacks, and block you. Routing all of a domain's requests through a single thread is one way to make sure you never issue too many simultaneous requests to the same domain.
- To find the right number of threads, you will need to profile. Run the crawler with different thread counts and see which setting makes the best use of CPU time (fills CPU time to nearly 100%) without excessive thread switching.
How to handle failed threads? I would suggest using a thread pool (a ThreadPoolExecutor behind the ExecutorService API) and letting the pool handle thread creation and death. You still have to make sure your code handles exceptions intelligently and synchronizes properly.
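The per-domain rule can be sketched with one single-threaded executor per host, so requests to the same domain are serialized while different domains run in parallel (class and method names are made up for illustration; download() stands in for the real HTTP fetch):

```java
import java.net.URI;
import java.util.Map;
import java.util.concurrent.*;

public class PoliteCrawler {
    // one serial "lane" per domain, created on first use
    private final Map<String, ExecutorService> perDomain = new ConcurrentHashMap<>();

    public Future<String> fetch(String url) {
        String host = URI.create(url).getHost();
        ExecutorService lane = perDomain.computeIfAbsent(
                host, h -> Executors.newSingleThreadExecutor());
        // exceptions thrown inside the task surface later via Future.get()
        return lane.submit(() -> download(url));
    }

    private String download(String url) {
        return "fetched " + url;   // placeholder for the real HTTP request
    }

    public void shutdown() {
        perDomain.values().forEach(ExecutorService::shutdown);
    }
}
```

Submitting two URLs on the same host queues the second behind the first; two URLs on different hosts run concurrently, which is exactly the politeness property described above.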

  • How to handle large heap requirement

    Hi,
    Our Application requires large amount of heap memory to load data in memory for further processing.
    Application is load balanced and we want to share the heap across all servers so one server can use heap of other server.
    Server1 and Server2 have 8GB of RAM and Server3 has 16 GB of RAM.
    If any request comes to server1 and if it requires some more heap memory to load data, in this scenario can server1 use serve3’s heap memory?
    Is there any mechanism/product which allows us to share heap across all the servers? OR Is there any other way to handle large heap requirement issue?
    Thanks,
    Atul

    user13640648 wrote:
    Hi,
    Our Application requires large amount of heap memory to load data in memory for further processing.
    Application is load balanced and we want to share the heap across all servers so one server can use heap of other server.
    Server1 and Server2 have 8GB of RAM and Server3 has 16 GB of RAM.
    If any request comes to server1 and if it requires some more heap memory to load data, in this scenario can server1 use serve3’s heap memory?
Is there any mechanism/product which allows us to share heap across all the servers? OR Is there any other way to handle large heap requirement issue?
That isn't how you design it (based on your brief description.)
    For any transaction A you need a set of data X.
    For another transaction B you need a set of data Y which might or might not overlap with X.
    The set of data (X or Y) is represented by discrete hunks of data (form is irrelevant) which must be loaded.
    One can preload the server with this data or do a load on demand.
    Once in memory it is cached.
    One can refine this further with alternative caching strategies that define when loaded data is unloaded and how it is unloaded.
    JEE servers normally support this in a variety of forms. But one can custom code it as well.
    JEE servers can also replicate cached data across server instances. Custom code can do this but it is more complicated than doing the custom caching.
    A load balanced system exists for performance and failover scenarios.
    Obviously in a failover situation a "shared heap" would fail completely (as asked about) because the other server would be gone.
    One might also need to support very large data sets. In that case something like Memcached (google for it) can be used. There are commercial solutions in this space as well. This allows for distributed caching solutions which can be scaled.
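The load-on-demand caching described above, in its simplest custom-coded form, can be sketched with an access-ordered LinkedHashMap that evicts the least recently used entry (names are illustrative; the loader function stands in for whatever loads a hunk of data):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class DataCache<K, V> {
    private final int maxEntries;
    private final Function<K, V> loader;
    private final Map<K, V> cache;

    public DataCache(int maxEntries, Function<K, V> loader) {
        this.maxEntries = maxEntries;
        this.loader = loader;
        // access-order = true: iteration order is least- to most-recently used
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > DataCache.this.maxEntries;  // evict LRU entry
            }
        };
    }

    public synchronized V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = loader.apply(key);   // load on demand
            cache.put(key, value);       // then cache it
        }
        return value;
    }
}
```

Refining the unload policy (time-based expiry, size in bytes rather than entry count, write-through, etc.) is exactly the "alternative caching strategies" point above; JEE servers and products like Memcached provide these policies ready-made.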

  • How to handle large data while acquisition? BNC 2110

I want to acquire data using a BNC 2110, and I am writing the software in VB 6. We will use 3 channels. We are supposed to scan about 10000 points before AcquiredData is triggered. In all we will need to scan 10000 * 1000 * 1000 points before the data is put into a binary file. Can anybody let me know how to handle this large number of points?

    Hello Vjuno,
    In order to acquire 10,000,000,000 points you are going to have to be streaming this data to your hard drive as you go.  To do this you'll need to write the data you read to a file each loop iteration.  In general it is a good practice to make your "samples to read" at least 10% of your sample rate in seconds to avoid overflowing buffers, however, depending on your computer you may be able to go faster.  I made an example program in LabVIEW and was able to read 10,000 points at a time from each of 3 analog inputs at 333MHz and write the values to file without overflowing a buffer.  However, even opening a web browser while the code was running was enough to delay the VI long enough for the buffer to overflow.
    You can use the DAQmx Configure Input Buffer call to increase the buffer size and account for spikes in CPU usage from other processes, and you should also monitor the "Available Samples Per Channel" property to make sure you aren't steadily gaining samples in your buffer.  Since you want to acquire 10 billion samples at 1MHz this acquisition will take several hours; if you're not able to keep the buffer empty then it will become apparent before the end of your acquisition.  By monitoring the samples in the buffer you can tell if you're pulling the samples out fast enough, if you find that this number is steadily increasing then you should either reduce the sample rate or increase the number of samples to read each time you call the DAQmx Read.
    In my example program I used a write to TDMS (binary) file and a PCI-6251.
    I hope this helps, and have a good night.
    Cheers,
    Brooks
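The streaming pattern Brooks describes is hardware-specific, but the general shape (an acquisition thread filling a bounded buffer while a writer thread appends fixed-size chunks to a binary file, so the buffer never overflows) can be sketched generically. The chunk counts, file name, and the fake "DAQ read" here are all made up for illustration:

```java
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StreamToDisk {
    public static void main(String[] args) throws Exception {
        BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(64);
        final int CHUNK = 10_000;   // "samples to read" per loop iteration
        final int CHUNKS = 100;     // total chunks in this demo

        Thread producer = new Thread(() -> {
            for (int i = 0; i < CHUNKS; i++) {
                double[] chunk = new double[CHUNK];  // stands in for a DAQ read
                try {
                    queue.put(chunk);                // blocks if writer falls behind
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        producer.start();

        try (DataOutputStream out = new DataOutputStream(
                new FileOutputStream("acq.bin"))) {
            for (int i = 0; i < CHUNKS; i++) {
                double[] chunk = queue.take();
                for (double s : chunk) out.writeDouble(s);  // append binary samples
            }
        }
        producer.join();
        System.out.println("wrote " + (CHUNKS * CHUNK) + " samples");
    }
}
```

The bounded queue plays the role of the DAQmx input buffer: if its fill level steadily grows, the writer is too slow and you must reduce the rate or enlarge the chunks, which is the same monitoring advice given above.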

  • How to remove large number of "sent" but still keep them available in case I need to search for something?

    Over several years, I have accumulated some 1400 messages in the SENT folder.
Lately my Thunderbird has been taking a long time for some actions that used to be instant, like selecting another message to read or delete.
    I regularly remove incoming messages to keep the in-box under 300, and I have almost nothing in the other folders.
    I tried changing the "local folders" to a different directory, but when I do that, all the old "sent" messages become un-available for searching and potentially continuing a conversation.
    Is there a way to move the large number of "sent" messages from the "local folders" but still keep them available for access from within Thunderbird, in case I need something from past chains of replies?

    https://wiki.mozilla.org/Thunderbird:Testing:Antivirus_Related_Performance_Issues#McAfee

  • How to handle a number of records in stored procedure?

I want to handle a number of records in a stored procedure one by one.
What should I do?
Can anyone give me a sample for the following question?
Q:
tb_main and tb_attach are two tables.
I want to create a procedure to write off a record in tb_main. Before doing that, I want to write off all records in tb_attach which are related to the record to be written off in tb_main. A procedure named pr_write_off_attach, which writes off a record in tb_attach, has already been created.
What should I do?
help!!

    Dg.dataProvider.length is the number of records in the ArrayCollection
    Dg.rowCount is the number of visible rows.
    Alex Harui
    Flex SDK Developer
    Adobe Systems Inc.
    Blog: http://blogs.adobe.com/aharui

  • How to handle large images?

    Hi,
Does anyone know how to handle big JPG images (1280*960) so that they can be presented in a midlet?
The problem is that the images require so much memory that they can't be decoded to an Image object with the Image.createImage method. One solution would be to extract the thumbnail image from the EXIF headers. Unfortunately, at least images taken with a Nokia 6680 don't contain a thumbnail in the EXIF headers.
So the only solution seems to be to decode the byte presentation of the image and resize it before creating an Image object.
Does anybody know any library for this, or tips on where to start?
Br, Ilpo

    Hi,
I think it is not possible. My application contains a file browser (which uses JSR-75). The user can use the browser to select an image either from phone memory or from the memory card. After the selection I would like to present the selected image so that the user can be sure it is the right image. The selected image will then be sent to the server side with some additional data for further processing (but that is another story).
Now the problem is that, for example, with a Nokia 6680 the user can take images as big as 1280*960 and I can't present them anymore because of the memory restrictions. With a 640*480 image there is no problem, because I can create an Image object and then use a simple algorithm to resize the image for presentation.
    Br, Ilpo
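The "simple algorithm to resize" mentioned above usually means nearest-neighbour downscaling over the decoded RGB pixel array. A plain-Java sketch of that mapping (not MIDP-specific, and note it still assumes the source pixels fit in memory; on the phone the decode-and-scale would have to work on the byte stream, which this sketch does not show):

```java
public class Downscale {
    // Map each destination pixel to its nearest source pixel.
    // src holds sw*sh packed RGB ints; the result holds dw*dh.
    public static int[] resize(int[] src, int sw, int sh, int dw, int dh) {
        int[] dst = new int[dw * dh];
        for (int y = 0; y < dh; y++) {
            int sy = y * sh / dh;          // nearest source row
            for (int x = 0; x < dw; x++) {
                int sx = x * sw / dw;      // nearest source column
                dst[y * dw + x] = src[sy * sw + sx];
            }
        }
        return dst;
    }
}
```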

  • How to handle user exception in thread

hi all
How do I handle a user exception in a thread?
I can't throw any user exception here.
What is the error?
Thread threadConnection = new Thread(new Runnable() {
     public void run() {
          try {
               _connection = DriverManager.getConnection(strURL, strUser,
                         strPassword);
          } catch (SQLException sqle) {
               String strMessage = "Error Connecting To Settlement Service Database";
               String strCause = "Error Occurred attempting to Connect to the Settlement Service Database.<br>";
               strCause = strCause + "The JDBC Configuration is:";
               strCause = strCause + "<ul>";
               strCause = strCause + "<li>JDBC Driver  : " + strDriver + "</li>";
               strCause = strCause + "<li>JDBC URL     : " + strURL + "</li>";
               strCause = strCause + "<li>JDBC User    : " + strUser + "</li>";
               strCause = strCause + "<li>JDBC Password: (Masked) </li>";
               strCause = strCause + "</ul>";
               String strRecovery = "Please make sure the Settlement Service Database is running and can be accessed.";
               VFExceptionInfo vfExceptionInfo = VFExceptionInfoGenerator
                         .generate(strMessage, strCause, strRecovery, sqle);
               throw new DBAccessException(vfExceptionInfo);
               // compile error here: unhandled exception type DBAccessException --
               // run() cannot throw a checked exception
          }
     }
});
threadConnection.start();

    Who should catch the exception if it is thrown?
    You need to signal the error in any other way. What should happen if the connection can't be established?
    Kaj
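One standard way to "signal the error in any other way" is to run the work as a Callable and let the checked exception travel back to whoever calls Future.get(), wrapped in an ExecutionException (the class name and the fake failing task below are illustrative, not the poster's real code):

```java
import java.util.concurrent.*;

public class ConnectTask {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Callable.call() may throw checked exceptions, unlike Runnable.run()
        Future<String> future = pool.submit(() -> {
            // stands in for DriverManager.getConnection(...)
            if (true) throw new Exception("cannot connect");
            return "connection";
        });
        try {
            String conn = future.get();   // rethrows as ExecutionException
            System.out.println("got " + conn);
        } catch (ExecutionException e) {
            System.out.println("caught: " + e.getCause().getMessage());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            pool.shutdown();
        }
    }
}
```

This answers Kaj's question directly: the thread that calls get() is the one that catches the exception, at a point where it can actually decide what to do about the failed connection.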

  • How to "find" large number of lost songs

    I have a large number of songs (100s) that have become "lost". They have not moved, but iTunes has an exclamation point next to them.
There must be some way to rebuild the iTunes db, or otherwise have iTunes rebuild its index, so the songs can be found.
    I am not sure why this happened, EXCEPT that I tried to install iTunes on my VMWare partition.
    Any help would be greatly appreciated.

    Bryan Schmiedeler wrote:
    ... iTunes has an exclamation point next to them.
    try this script:
    _*iTunes Track CPR v1.3*_
This script attempts to locate the files of so-called "dead tracks"--iTunes tracks designated with (!)--that you assume are not actually missing but are still located in the iTunes Music folder in their "iTunes File Order" (Music -> Artist -> Album -> file.xxx).

  • How to handle large data in file adapter

    We have a scenario Proxy -> PI -> File Sever using File adapter.
    File adapter is using FCC for conversion.
Recently we had wave 2 products go live, and suddenly this interface has an increased volume of messages, due to which the File adapter is not performing well: PI slows down or frequently disconnects from the file server. As a result we either get duplicate records in the file or the file format created is wrong.
The file size is somewhere around 4.07 GB, which I also think is quite high for PI to handle.
Can anybody suggest how we can handle such large data?
    Regards,
    Vikrant

    Check this Blog for Huge File Processing:
    Night Mare-Processing huge files in SAP XI
    However, you can take a look also to this Blog, about High Volume Messages:
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    PI Performance Tuning Best Practice:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • How to show large number in BO 6.5 report?

    Hi
    In data base one column value is having very large value - 123456789123456789
    When I am fetching data from database to show it on report it is getting printed as - 1.23456789123457E+17
    How can I display the actual number stored in database not with E.
    Thanks
    Puneet

    Hi Puneet,
    BO supports 15 digits in number format. It is a limitation and by design.
    You can use the number format in the following way to resolve the issue.
At the database level, SQL Server for example supports 7 digits after the decimal point.
But at the report level we can increase this by going to the object format and customizing the number format with ###.000000000 (more than 7 digits).
    Regards,
    Sarbhjeet Kaur
