I am not sure how pool size works, or whether it applies to my situation. I would like a pool of 10 worker processes that all do essentially the same thing (based on the args I send). I want to pass args to the pool and have the next available worker pick them up; if all 10 are already busy, the request should be queued until a worker frees up. What is the best way to handle this?

The docs say do_work only gets called on new_worker, and that I should then call delete. But won't that create a new process and then delete it for every request? And how can I call delete when I don't know when the process will be done?

Thanks for any insight/feedback.

Chris