Best way to schedule lots of small threads

Hello,
I'm working on a large data set and trying to split the job between multiple threads, about 1000 of them in total.
The maximum number of threads I can run simultaneously is defined by MAX_THREADS.
To be most efficient I would need to know when a thread finishes so I can start the next one.
I've tried a solution where I start MAX_THREADS threads, sleep for a certain time, then check which threads are still running and start new ones in place of those that have finished. I found it really hard to pick a good polling interval.
My next solution would be to create MAX_THREADS number of helper threads that kick off a thread from a pool, then join them and start new ones as those finish.
Another idea is to create a listener that would be activated when a certain thread finishes so it can kick off a new thread.
Could anyone suggest a good solution for this problem? Or best practice?
Thanks,
Lajos

This ThreadPool approach is wrong, puckstopper31. The concept of a ThreadPool is to re-use the Threads that have already been allocated. When a job is done, a thread shouldn't be removed from the ThreadPool, unless some configuration specifies that there shouldn't be more than a certain number of idling Threads. Otherwise you're getting rid of one of the most important reasons for using a ThreadPool: reducing the overhead of Thread creation and destruction.

I'm not disagreeing that this is one way to use a thread pool, but it is not the ONLY way to use one. The implementation I outlined works extremely well for what it is designed to do.
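For the original question, the standard answer since Java 5 is java.util.concurrent: submit all 1000 jobs to a fixed-size ExecutorService and let the pool start the next job as soon as a worker frees up, with no polling, helper threads, or listeners needed. A minimal sketch (MAX_THREADS, JOB_COUNT, and the body of each job are placeholders for your own values):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    static final int MAX_THREADS = 4;    // pool size; tune for your hardware
    static final int JOB_COUNT = 1000;   // total number of small jobs

    public static int runAll() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(MAX_THREADS);
        final AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < JOB_COUNT; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    // ... process one slice of the data set here ...
                    done.incrementAndGet();
                }
            });
        }
        pool.shutdown();                           // accept no new jobs
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for queued jobs
        return done.get();
    }
}
```

The pool re-uses its MAX_THREADS worker threads for all 1000 jobs, which is exactly the creation/destruction saving described above.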

Similar Messages

  • What is best way to reconnect lots of tracks in library?

    From time to time I have clicked on a track only to get the message that the program can't find it, and it asks if I would like to locate it. This is usually quite easy to do if you have just one or two tracks.
    What do you do if you have hundreds or thousands to reconnect because some files were moved to a different location? I know that in several other programs, if files go missing I only have to locate one file and I am then asked if I want to look for the other files in the same or a similar location, and all the rest get reconnected. Is there a similar feature in iTunes, and if not, what is the best way to reconnect lots of music tracks?

    All of my music files had been copied to an external hard drive before my PC crashed. I'm not certain which drive letter Windows had assigned that drive, since I have had so many. In any event, I copied all of the music files from that drive into the folder Windows 7 calls desktop > music files. I'm guessing this is the 'C' drive.
    What I think you are suggesting is that I copy the files from the external drive to a different drive or partition on my PC, say the 'D' drive, and see if iTunes picks up all of my files; then, if it doesn't, rename 'D' to 'E' and so on until I find a drive letter that iTunes remembers. If it does recognize the tracks it could not find, won't the tracks it currently finds stop working?
    What would happen if I were to tell iTunes to add an entire extra folder that appears to be missing? I can see that this approach might add more tracks to my library, but would it create lots of duplicates? I already know that iTunes does an abysmal job of spotting duplicates. I have seen many tracks on the duplicate list that should not be there: the artist's name and song title might be the same, but the singer recorded the same song at different times, and the album name and duration differ. I have also looked at a listing by an artist and seen real duplicates that aren't on iTunes' official duplicate list.

  • Best way to schedule MySQL backups.

    Hi All,
    I'm trying to use our Mac mini Server to schedule MySQL backups FROM another server.  The newer MySQL Workbench does not have a way to schedule backups, but I did find AutoMySQLBackup:
    http://sourceforge.net/projects/automysqlbackup/
    Has anyone tried this with success, or is there a better/any other option?  Again, I need to connect to MySQL on another server and backup the database locally - making offsite backups.
    Thanks in advance for your help - Vijay

    RMAN is the only utility that can read RMAN backup sets (which are the default); forget about export/import for this purpose, as they are completely different tools.
    RMAN has some validation functionality built in, but the best way to test is to actually perform a restore. See the above link for details on that.

  • Best way to stop or kill a thread

    Hi, what would you say is the best way to kill a thread in this situation?
    1. I have 200 threads
    2. Each Thread has a reference stored in a hashtable, for example:
        for (int i = 0; i < 200; i++) {
            Thread t = new ExThread(i);
            hashtable.put(Integer.valueOf(i), t);
            t.start();
        }
    Each thread is running in an infinite while loop.
    now what would you say is the best way to kill the thread from this parent class.
    One thought of mine is to get the reference and call stop(). For example:
        Thread tRef = (Thread) hashtable.get(Integer.valueOf(100));
        tRef.stop();
    In the stop method I would clean up whatever it was doing and release resources properly, and when it goes out of scope I'm guessing it would be destroyed.
    Any thoughts or other recommendations ?
    Stev

    Limeybrit is correct... the way Sun recommends (and which I use) is a boolean flag at the top of your run() code. If it is false, you simply return and don't execute any of the other code in the run() method.
    At the end of your run process, you simply set your Thread reference to null and wait for the garbage collector to clean up.
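    The boolean-flag idea above can be sketched like this; the field must be volatile (or its access synchronized) so the writer thread's change is guaranteed visible to the worker. Class and method names here are made up for illustration:

    ```java
    public class StoppableWorker implements Runnable {
        private volatile boolean running = true;  // volatile: visible across threads

        public void stopWorker() {
            running = false;                      // ask the thread to exit cooperatively
        }

        public void run() {
            while (running) {
                // ... do one unit of work, releasing resources as you go ...
            }
            // clean-up code runs here, on the worker's own thread
        }
    }
    ```

    Unlike Thread.stop(), which is deprecated because it can leave shared state half-updated, this lets the loop finish its current unit of work and clean up before exiting.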

  • What is the best way to schedule a job for PI interface?

    Hi,
    I have this PI scenario to implement.
    Every day, something (?) needs to trigger PI to call RFC function, get the data and insert it to database.
    What is the best practice to schedule such a job in PI?
    I want to know best practice to trigger PI interface and how to schedule it.
    Also, do I have to use BPM for this?
    Thanks for your help.
    -Won
    Edited by: WON HWANG on Sep 10, 2010 11:21 PM

    Hi Won,
    it depends a bit on the scenario that needs to be implemented. I will try to go point by point and give you the options
    1. You will need to send a message to PI
         - This can be from PI or from ECC. I suggest you use an ABAP proxy client.
         - You will need to write an ABAP program that sends this message (calling the proxy client).
         - You can schedule this program in a job (SM37).
         - If users want control over the job, better schedule it in ECC; if only you or an administrator will control the job, you can do it in PI
         - I would use a generic format which you can reuse to trigger other interfaces: From- and To-date, Context, Action, User
         - You can have conditions in the Receiver Determination and Interface Determination to route the message to the correct interface, based on the Context and Action for example.
         - From- and To-date can be used to make selections in any RFC or table query.
    2. You will need a BPM process in PI
         - Step 1: Receive the trigger
         - Step 2: Call the RFC
         - Step 3: Send to database
    This is because the RFC call is synchronous, and you cannot route the response to a receiver other than the sender of the RFC call. Another option is writing a wrapper in ECC around the RFC function, and at the end implementing another PI ABAP proxy client whose message will be routed to your database interface.
    Hope this helps.
    kr
    Peter Glas

  • Best way to store lots of tile images

    Hey, in my game I'm gonna need to store probably thousands of tile images. They are all in .png format, and I am wondering the best way to store the image data. I could of course keep them in some folder called images and then have thousands of files... but then people could jack them, and plus that would be an inefficient use of space??? right?
    What is the best way to store them??
    thanks

    Anything can be "jacked". The other possible ways are storing them in some sort of archived file like a JAR or ZIP or GZIP. The only way to even remotely ensure no one will "jack" your images would be to store them in a compressed format which you created. Which would entail creating your own compression algorithms. Have Fun! :)
    Seriously though, I wouldn't worry too much about people "jacking" the images. As for inefficient use of space, I personally wouldn't worry about this either, unless your app will be delivered via Web Start or an applet; then load time could be an issue. You could use lazy downloading, though, and fetch the images only when they are needed.
    HTH
    Gregg
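    If you do pack the PNGs into the application's JAR as suggested, loading stays simple: read them through the classpath instead of the file system. A small sketch (the /images/ resource path is a made-up example):

    ```java
    import java.awt.image.BufferedImage;
    import java.io.InputStream;
    import javax.imageio.ImageIO;

    public class TileLoader {
        // Load a tile bundled inside the JAR, e.g. at /images/grass.png
        public static BufferedImage loadTile(String name) throws Exception {
            InputStream in = TileLoader.class.getResourceAsStream("/images/" + name);
            if (in == null) {
                throw new IllegalArgumentException("no such tile: " + name);
            }
            try {
                return ImageIO.read(in);  // decode the PNG from the JAR entry
            } finally {
                in.close();
            }
        }
    }
    ```

    The JAR also compresses the entries for you, which addresses the space concern without inventing a custom format.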

  • Best way to stream lots of data to file and post process it

    Hello,
    I am trying to do something that seems like it should be quite simple, but I am having some difficulty figuring out how to do it. I am running a test that has over 100 channels of mixed sensor data. The test will run for several days or longer at a time, and I need to log/stream data at about 4 Hz while the test is running. The data I need to log is a mixture of types: a time stamp, several integer values (both 32- and 64-bit), and a lot of floating-point values. I would like to write the data to file in a very compressed format, because the test is scheduled to run for over a year (stopping every few days) and the data files can get quite large. I currently have a solution that simply bundles all the data into a cluster, then writes/streams the cluster to a binary file as the test runs. This approach works fine but involves some post-processing to convert the data into a format, typically a text file, that can be used in programs like Excel or DIAdem. After conversion, the text files are, no surprise, about 3 times larger than the original binary files.
    I am considering several options to improve my current process. The first is writing the data directly to a TDMS file, which would let me quickly import the data into DIAdem (or Excel with a plugin) for processing/visualization. The challenge I am having (note, this is my first experience working with TDMS files and I have a lot to learn) is that I cannot find a simple way to write/stream all the different data types into one TDMS file and keep each scan of data (containing different data types) tied to one time stamp. Each time I write data to file, I would like the write to contain a time stamp in column 1, integer values in columns 2 through 5, and floating-point values in the remaining columns (about 90 of them). Yes, I know there are no columns in binary files, but this is how I would like the data to appear when I import it into DIAdem or Excel.
    The other option I am considering is just writing a custom data plugin for DIAdem that would allow me to import the binary files that I am currently creating directly into DIAdem.  If someone could provide me with some suggestions as to what option would be the best I would appreciate it.  Or, if there is a better option that I have not mentioned feel free to recommend it.  Thanks in advance for your help.

    Hello,
    Here is a simple example; for simplicity I only create one value per iteration of the while loop. You can also set properties of the file, which can be useful, and set up different channels.
    Besides, you can use multiple groups for more flexibility in data storage. You can think of channels as columns and groups as sheets in Excel; that is how your data will appear when you import the TDMS file into Excel.
    I hope it helps. Of course there are much more advanced features with TDMS files; read the help docs!

  • Best way to place lots of images into one

    Hello,
    I have made a Javascript script that loops between 20.000+ images and puts them into a main image. Code is:
        var sampleDocToOpen = File('~/a/' + (l + startF) + '/' + (i + start) + '.png');
        open(sampleDocToOpen);
        app.activeDocument.selection.selectAll();
        app.activeDocument.selection.copy();
        app.activeDocument.close();
        app.activeDocument = app.documents[0];  // switch back to the main image
        app.activeDocument.paste();
    It works, but it is extremely slow (I also flatten the main image every 30-40 images to keep it quicker).
    So i tried to place the images directly into the main image via:
        function placeFile(file) {
            var desc = new ActionDescriptor();
            desc.putPath(charIDToTypeID('null'), file);
            desc.putEnumerated(charIDToTypeID('FTcs'), charIDToTypeID('QCSt'), charIDToTypeID('Qcsa'));
            var offsetDesc = new ActionDescriptor();
            offsetDesc.putUnitDouble(charIDToTypeID('Hrzn'), charIDToTypeID('#Pxl'), 0.0);
            offsetDesc.putUnitDouble(charIDToTypeID('Vrtc'), charIDToTypeID('#Pxl'), 0.0);
            desc.putObject(charIDToTypeID('Ofst'), charIDToTypeID('Ofst'), offsetDesc);
            executeAction(charIDToTypeID('Plc '), desc, DialogModes.NO);
        }
    But after opening a few images it says that "scratch disks are full", and I can verify that this is true, since almost 500GB of the HD are used by the application... The images are 512x512 each, so the resulting main image is 83456x63488.
    Is there a faster way I could try to use??
    Thanks a lot!

    Also realize what is going on. First, you are using a script. Though a script is a program with logic, its source code is not compiled into machine code; it has to be interpreted into executable form as the script processor iterates through your buffered source. That alone takes time. Then consider what Photoshop has to do inside your loop: using Copy and Paste or Place requires Photoshop to open and read 20,000 encoded image files and, one way or another, decode each image, either to render it for copying to the clipboard or to render the embedded Smart Object copy into a Smart Object layer. On top of that there is the code that adds canvas and distributes the image layers across it. Interpreting the script source some 20,000 times is like doing 20,000 compiles, where the source code is read in once but turned into executable code 20,000 times. Processing 20,000 encoded image files through a full-featured image editor via scripting is going to take a long time and a lot of machine resources. If you want a fast solution, you will need a custom compiled program designed for your process. Even then, reading and decoding 20,000 small image files takes time and quite a bit of machine resources; it might help to use an image file format without compression or complex encoding to cut down on the amount of processing required.
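    To illustrate the "custom compiled program" route: in Java, for example, tiles can be drawn straight onto one BufferedImage with no clipboard, layers, or interpreter overhead. This is a hedged sketch, not a drop-in tool; the rows/r/c.png directory layout is an assumption, and note that at the full 83456x63488 size a single image exceeds Java's array limits, so a real version would write the output in horizontal strips:

    ```java
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class Mosaic {
        // Composite a rows x cols grid of equally sized PNG tiles into one image.
        // Missing tiles are simply left black.
        public static BufferedImage compose(File dir, int rows, int cols, int tile)
                throws Exception {
            BufferedImage out = new BufferedImage(cols * tile, rows * tile,
                    BufferedImage.TYPE_INT_RGB);
            Graphics2D g = out.createGraphics();
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    File f = new File(dir, r + "/" + c + ".png"); // hypothetical layout
                    if (f.exists()) {
                        g.drawImage(ImageIO.read(f), c * tile, r * tile, null);
                    }
                }
            }
            g.dispose();
            return out;
        }
    }
    ```

    Each tile is decoded once and freed immediately, so memory use is dominated by the output canvas rather than by Photoshop's scratch files.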

  • Best way to schedule and get notification for plsql job

    hi
    i would like to execute a procedure on weekly basis and want to be notified if the job fails
    Which is the correct way of doing this?
    1. create user defined metric and call the procedure this way
    2. create scheduler job dbms_scheduler
    The other question: since I have exception handling within the PL/SQL procedure, how will the Grid agent or dbms_scheduler distinguish between success and failure?
    Do I have to set a variable or execute a special command as part of the exception handling?

    Have you researched the ADD_JOB_EMAIL_NOTIFICATION procedure of DBMS_SCHEDULER?
    http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_sched.htm#BABBFBGI
    This procedure adds e-mail notifications for a job. E-mails are then sent to the specified list of recipients whenever any of the specified job state events is raised. For more information about monitoring jobs, including a description of job logs and job e-mail notifications, look here:
    http://docs.oracle.com/cd/E14072_01/server.112/e10595/scheduse008.htm
    You can configure a job to send e-mail notifications when it changes state. The job state events for which e-mails can be sent are listed in Table 28-13. E-mail notifications can be sent to multiple recipients, and can be triggered by any event in a list of job state events that you specify. You can also provide a filter condition, and only job state events that match the filter condition generate notifications. You can include variables like job owner, job name, event type, error code, and error message in both the subject and body of the message. The Scheduler automatically sets values for these variables before sending the e-mail notification.

  • What is the best way to get my bank (small bank) to participate in this program (Apple Pay)??

    I also would like to know the fastest way to do this. I tried to call the bank, and according to them they don't know anything, so at this point I'm like, who do I need to talk to about this great program? Nobody knows... please help, thanks!!!

    It is up to them to contact Apple.

  • Best way to split 3D project with constantly moving cameras

    I created a 1-minute project in which the camera travels between 3 sets while an animated 2D background plays. The background consists of overlaid .psd files which move around. The camera sweeps each set and then zooms to the next. Within each set are .mov and .psd files with tons of keyframing (although not too many filters). I think I'm starting to hit the outer limits of my hardware. I have it set to render draft quality for now, and it's still functional. I'm wondering what would be the best way to split it into smaller files yet retain smooth motion of the camera between sets. I still need to fine-tune the duration of each set, so I'm a little hesitant to break it up at this point, but may need to do so anyway. Thanks in advance.

    Ok, I did it. Project complete. It turned out that I did have enough RAM, just not enough to RAM preview it all. I did flatten quite a bit of it prior to the camera moves. And an additional challenge was when I wanted to add a 5 second clip at the beginning. I wound up adding a second camera because all my timing was thrown off.
    Anyway, here's a link to the final video.
    http://www.youtube.com/watch?v=McBqVzrejNw
    Hey Bogie, I was thinking of starting a separate thread in the FCP or Compressor forums with recommended encodes for getting this monster of a file on the web. I tried several iterations and finally found one that both looked good, was big enough to watch, and small enough to load quickly. Does it make sense to start a new thread for which I already have the answer?

  • Best way to delete large number of records but not interfere with tlog backups on a schedule

    I've inherited a system with multiple databases, with DB and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach for deleting the old records?
    I've been digging through old posts and reading best practices, but I'm still not sure of the best way to attack it.
    Approach #1
    A one-time delete that did everything.  Delete all the old records, in batches of say 50,000 at a time.
    After each run through all the tables for that DB, execute a tlog backup.
    Approach #2
    Create a job that does a similar process as above, except don't loop; only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
    Note:
    Some of these (well, most) are going to have relations on them.

    Hi shiftbit,
    According to your description, in my opinion this question is better suited to a discussion thread, where more experts will focus on the issue and assist you. When deleting a large number of records from tables, use batched deletions so that the transaction log does not grow and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
    For more information about deleting a large number of records without affecting the transaction log:
    http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
    Hope it can help.
    Regards,
    Sofiya Li
    TechNet Community Support

  • Aperture 3.0 referenced masters of 16K photos/104GB on external HD. In library, previews are 21GB & thumbnails 13GB. Best way to make those smaller?

    The more I read in the community, the more confused I get. I've got my masters backed up to another HD and the cloud. I was going to back up the library, expecting it to be pretty small, and saw that the Previews and Thumbnails folders are relatively large. I have read that a good way to handle this is to move both folders somewhere else (as a backup) and then restart Aperture to rebuild them; I've also read that they sometimes come back distorted. I know the gurus on here hate it when you tinker inside the library. I've deleted a lot of pictures using File > Delete Original Image and All Versions to get them out of the masters, but I don't think that removes the previews or thumbnails. What is the best way to clean it up and/or reduce the size? Thanks in advance.

    You can't shrink the thumbnails.  But for the previews, you can reduce the size, and you can reduce the quality (both are options in Aperture settings).  Then select all your photos and rebuild the thumbnails.  You can probably knock 2/3 of the size out by reducing quality to 8 or so, and limiting the preview size to your screen size.

  • What is the best way to implement a scheduled task?

    Dear kind sirs...
    let us say I have a JSF application, and it is working perfectly fine...
    I need a method like
        void doProcessing() {
            // processing code here
        }
    to execute every day at 7AM...
    so what is the best way to do it? I need this to be part of the JSF application... not in a different process... and I want the method to execute at that exact time every day.
    and what are the main steps to do that?
    best regards

    Dear Mr. Chris...
    the reason I am asking about this is because we are required to provide reports for a number of customers by email every day... each report requires retrieving values from a DB.
    I made a test a few hours ago: a thread sleeps and checks the time when it wakes up. It works, no problem there...
    I placed the thread in the servlet context and started it, and it kept working for about an hour, writing to the log every 5 seconds... so I guess the idea works.
    But do you think this way of implementing the scheduler is OK? I have not done it before and I don't know the cons of such a method.
    thanks for any advice or comment.
    best regards
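    A hand-rolled sleeping thread works, but java.util.concurrent's ScheduledExecutorService (Java 5+) handles the timing loop for you: compute the delay to the next 7AM once, then repeat every 24 hours. A sketch, assuming the work lives in a method called doProcessing() (its name and contents are from the question, not a real API):

    ```java
    import java.util.Calendar;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class DailyScheduler {
        // Milliseconds from 'now' until the next 7:00 AM.
        static long millisUntilSevenAM(Calendar now) {
            Calendar next = (Calendar) now.clone();
            next.set(Calendar.HOUR_OF_DAY, 7);
            next.set(Calendar.MINUTE, 0);
            next.set(Calendar.SECOND, 0);
            next.set(Calendar.MILLISECOND, 0);
            if (!next.after(now)) {
                next.add(Calendar.DAY_OF_MONTH, 1);  // 7AM already passed today
            }
            return next.getTimeInMillis() - now.getTimeInMillis();
        }

        public static void start() {
            ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
            ses.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    // doProcessing();  // build and e-mail the reports here
                }
            }, millisUntilSevenAM(Calendar.getInstance()),
               TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
        }
    }
    ```

    In a JSF web app, call start() from a ServletContextListener's contextInitialized and shut the executor down in contextDestroyed, so the thread dies with the application. Note that a fixed 24-hour period drifts across daylight-saving changes; a library like Quartz handles calendar-based schedules if that matters.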

  • Threading issues - Best way to debug

    Hey everyone... I have a small server program that I wrote in Java. It's module-based: each thread that is created runs through a list of modules and returns the output. It works just fine, and I am doing a little stress test using Siege. Running 50 concurrent users with no delay, I can take millions of requests with no errors at all.
    However, when I move up to 125 concurrent users, I start getting NullPointerExceptions on a fairly routine basis (inside one of the modules). The server keeps running with no issues; it's just that with that many users it appears to have a bug.
    I was wondering what the best way is to figure out why this is happening. Or is this kind of thing to be expected, and should I just implement some method of throttling for the threads? I plan to do that eventually; I'm just trying to get the bugs out first.
    Thanks in advance!

    I think you are seeing the NPE in the concurrent scenario because of the assumptions your code is making about how the JVM should behave. Your code may be assuming a "happens-before" relationship that seems valid when you look at the code, but in fact does not hold.
    Take a look at this presentation that outlines the Java Memory model and defines the contract between concurrent programs and JVM.
    http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-1630&yr=2006&track=coreplatform
    Throttling is a bad idea (and a cop-out). How would you arrive at the optimal number of threads that can safely access your program? Will your code be portable when it is deployed on a computer with different computing power than what you are testing on?
    Hope this helps.
    -Prashant
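    A concrete instance of the missing happens-before edge described above: one thread publishes a field, another reads it, with no synchronization in between, so the reader can still see null under load. Marking the field volatile (or guarding every access with one lock) restores the visibility guarantee. A contrived sketch, with names invented for illustration:

    ```java
    public class Holder {
        // Without 'volatile' there is no happens-before edge between the write
        // in publish() and the read in read(): the reader may see a stale null
        // even after publish() has returned, which surfaces as intermittent NPEs
        // only under heavy concurrency.
        private volatile String ref;

        public void publish(String value) {
            ref = value;                 // volatile write
        }

        public String read() {
            String local = ref;          // volatile read; copy once, then check
            return local == null ? "" : local.toUpperCase();
        }
    }
    ```

    Copying the field into a local before the null check also avoids the classic check-then-use race, where the field is non-null at the check but cleared by another thread before the use.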
