hemant
2007-Dec-17 12:23 UTC
[Backgroundrb-devel] [ANN] BackgrounDRb release 1.0 available now
Hi Folks,

I am glad to announce the 1.0 release of BackgrounDRb. This is a major release since the 0.2 release of BackgrounDRb. Here is a brief summary of changes:

- BackgrounDRb is no longer DRb-based. It uses the event-driven network programming library packet ( http://packet.googlecode.com ).

- Since we moved to packet, many nasty threading issues and result-hash corruption issues are totally gone. A lot of work has gone into making the scheduler rock solid.

- Each worker still runs in its own process, but each worker has an event loop of its own and all events are triggered by the internal reactor loop. In a nutshell, you are no longer encouraged to use threads in your workers. All workers are already concurrent, but you are encouraged to use co-operative multitasking rather than pre-emptive. A simple example:

  To implement something like a progress bar in the old version of bdrb, you would start your processing in a thread (so that your worker could receive further requests from Rails) and keep an instance variable (protected by a mutex) that is updated on progress and can be sent to Rails.

  With the new BackgrounDRb, a progress bar is: process your request and just use register_status() to register the status of your worker. Just because you are doing some processing doesn't mean that your worker will block. It can still receive requests from Rails.

- Now you can schedule multiple methods with their own triggers:

  | :schedules:
  |   :foo_worker:
  |     :foobar:
  |       :trigger_args: */5 * * * * * *
  |       :data: Hello World
  |     :barbar:
  |       :trigger_args: */10 * * * * * *

- Inside each worker, you can start a TCP server or connect to an external server. Two important methods available in all workers are:

    start_server("localhost", port, ModuleName)
    connect("localhost", port, ModuleName)

  A connected client or an outgoing connection is integrated with the event loop, and you can process requests from these connections asynchronously. This mousetrap lets you build truly distributed workers across your network.

- Each worker comes with a "thread_pool" object, which can be used to run tasks concurrently. For example:

    thread_pool.defer(url) { |url| scrap_wiki_content(url) }

- Each worker has access to the method "register_status", which can be used to update the status of a worker or store results. Results of a worker can be retrieved even after the worker has died. By default, results are saved in the master process's memory, but you can configure BackgrounDRb to store them in a memcache server or cluster using the following option in the configuration file:

  # backgroundrb.yml

  | :backgroundrb:
  |   :port: 11006
  |   :ip: 0.0.0.0
  |   :log: foreground
  |   :result_storage:
  |     :memcache: "10.10.10.2:11211,10.10.10.6:11211"

- Relevant URLs:
  ** Home Page: http://backgroundrb.rubyforge.org
  ** SVN: http://svn.devjavu.com/backgroundrb/trunk
  ** Bug Reports/Tickets: http://backgroundrb.devjavu.com/report

- Credits:
  ** Ezra Zygmuntowicz, skaar for taking BackgrounDRb this far.
  ** Kevin for helping out with OSX issues.
  ** Andy for patches and initial testing.
  ** Paul for patching up the README.
  ** Other initial users.
  ** Matz, Francis for general inspiration.

--
Let them talk of their oriental summer climes of everlasting conservatories; give me the privilege of making my own summer with my own coals.

http://gnufied.org
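The announcement's thread_pool.defer call can be pictured with a plain-Ruby sketch. TinyThreadPool below is a made-up stand-in for illustration, not BackgrounDRb's actual thread_pool: a fixed set of threads pulls deferred jobs off a queue, so the caller never blocks.

```ruby
require "thread"

# Plain-Ruby sketch of the thread_pool.defer idea from the announcement.
# TinyThreadPool is a made-up name; BackgrounDRb's real thread_pool is
# built into each worker. Worker threads block on the queue and run
# deferred jobs as they arrive.
class TinyThreadPool
  def initialize(size = 4)
    @jobs = Queue.new
    @threads = Array.new(size) do
      Thread.new do
        while (job = @jobs.pop)        # a nil job acts as a poison pill
          args, block = job
          block.call(*args)
        end
      end
    end
  end

  # Same call shape as thread_pool.defer(url) { |url| ... }
  def defer(*args, &block)
    @jobs << [args, block]
  end

  def shutdown
    @threads.size.times { @jobs << nil }
    @threads.each(&:join)
  end
end

pool    = TinyThreadPool.new(2)
scraped = Queue.new
%w[page1 page2 page3].each do |url|
  pool.defer(url) { |u| scraped << "scraped #{u}" }  # stand-in for scrap_wiki_content
end
pool.shutdown
```

Calls to defer return immediately; the work happens on the pool threads, which is why a worker can keep answering Rails requests while deferred jobs run.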
Josh Symonds
2007-Dec-17 14:17 UTC
[Backgroundrb-devel] [ANN] BackgrounDRb release 1.0 available now
Awesome, thanks for all the work, Hemant! Does this release also fix the issue with BackgrounDRb writing ActiveRecord messages to the development log?

On Dec 17, 2007 6:23 AM, hemant <gethemant at gmail.com> wrote:
> Hi Folks,
>
> I am glad to announce 1.0 release of BackgrounDRb.
> [...]
Hello!

I remember there was a thread sometime last month about multiple copies of the same worker. Is that still impossible to do?

What I am trying to accomplish is to limit the number of concurrent connections for a lengthy search. I used to put search requests in a queue; then, as slots freed up, they spawned off a new copy of the "search worker", which killed itself upon completion. That way no more than 8 searches were conducted at the same time.

Will it be possible to do this in the new version, or is there an alternative solution to this problem?

Thanks!

Danila
Mickael Faivre-Macon
2007-Dec-17 14:50 UTC
[Backgroundrb-devel] Multiple copies of same worker
Hemant replied to me yesterday that it is still possible.

Mickael.

On Dec 17, 2007 3:40 PM, Danila Ulyanov <du at bestwaytech.com> wrote:
> I remember there was a thread sometime last month about multiple copies
> of the same worker. Is that still impossible to do?
> [...]
hemant
2007-Dec-17 14:59 UTC
[Backgroundrb-devel] [ANN] BackgrounDRb release 1.0 available now
Hi

On Dec 17, 2007 7:47 PM, Josh Symonds <veraticus at gmail.com> wrote:
> Awesome, thanks for all the work, Hemant! Does this release also fix the
> issue with BackgrounDRb writing ActiveRecord messages to the development
> log?

Yes, Josh, the issue you mention has been fixed in the latest release.

--
Let them talk of their oriental summer climes of everlasting conservatories; give me the privilege of making my own summer with my own coals.

http://gnufied.org
Hi

On Dec 17, 2007 3:40 PM, Danila Ulyanov <du at bestwaytech.com> wrote:
> What I am trying to accomplish is to limit number of concurrent
> connections for a lengthy search. [...]
> Will it be possible to do this in the new version or is there an
> alternative solution to this problem?

Yes, you can have multiple copies of the same worker running. But there are two catches:

1. You have to specify a unique job_key for each worker when you make the call to start it.
2. To start a worker, you will have to use:

   MiddleMan.new_worker(:worker => :foo_worker, :job_key => "whoa_man")

To exit a worker once it's done with its search, simply call "exit" on it.

--
Let them talk of their oriental summer climes of everlasting conservatories; give me the privilege of making my own summer with my own coals.

http://gnufied.org
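Danila's eight search slots could be started along these lines. This is only a sketch: :search_worker and the job-key scheme are made up, and the MiddleMan class below is a minimal stand-in so the snippet runs outside Rails; in a real app the BackgrounDRb plugin provides the real MiddleMan proxy.

```ruby
# Sketch of starting multiple copies of one worker with unique job keys.
# MiddleMan here is a trivial stand-in that just records what it was
# asked to start; it is NOT BackgrounDRb's implementation.
class MiddleMan
  @workers = []
  class << self
    attr_reader :workers

    def new_worker(options)
      @workers << options
      options[:job_key]   # assumed: the call hands the job key back
    end
  end
end

# Catch 1: each copy needs a unique job_key.
# Catch 2: copies are started via MiddleMan.new_worker.
keys = (1..8).map do |i|
  MiddleMan.new_worker(:worker => :search_worker, :job_key => "search_#{i}")
end
```

Each copy can later be addressed (and told to exit) by its unique key, which is what makes the "kill itself upon completion" scheme workable.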
Is there any special reason that creating a worker does not return a unique job_key itself upon creation, like it used to in the old BackgrounDRb? :-)

Would there be much overhead creating/destroying up to 8 workers versus having 8 workers running continuously and just reusing them?

Thanks!

hemant wrote:
> Yes, you can have multiple copies of same worker running. But there
> are two catches:
>
> 1. You have to specify a unique job_key for each worker, when you are
> making a call to a start worker.
> 2. To start a worker, you will have to use
> MiddleMan.new_worker(:worker => :foo_worker, :job_key => "whoa_man")
>
> To exit a worker, once its done with search, simply call "exit" on them.
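The throttling itself (never more than 8 searches at once) can be sketched in plain Ruby with a SizedQueue used as a counting semaphore. This only illustrates the pattern from the question above, not BackgrounDRb code, and run_search is a made-up stand-in for the real lengthy search.

```ruby
require "thread"

# Plain-Ruby sketch of the "at most 8 concurrent searches" pattern.
# A SizedQueue of 8 slots acts as a counting semaphore: push blocks
# once all slots are taken, so at most 8 searches run at any moment.
MAX_SEARCHES = 8

slots   = SizedQueue.new(MAX_SEARCHES)
results = Queue.new

def run_search(id)
  "search #{id} done"   # pretend this took a long time
end

threads = (1..20).map do |id|
  Thread.new do
    slots.push(id)            # acquire a slot (blocks while 8 are busy)
    begin
      results << run_search(id)
    ensure
      slots.pop               # release the slot for the next request
    end
  end
end
threads.each(&:join)
```

Within a single worker the same shape could ride on its thread_pool; with one worker per search, the unique-job_key approach Hemant describes plays the same role.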
Ivan S. Manida
2007-Dec-17 18:35 UTC
[Backgroundrb-devel] [ANN] BackgrounDRb release 1.0 available now
Hemant,

"ruby script/backgroundrb stop" does not stop the currently running server. What is a good way to restart the server, besides doing a kill -9 `cat backgroundrb.pid`?

hemant wrote:
> I am glad to announce 1.0 release of BackgrounDRb.
> [...]
hemant kumar
2007-Dec-17 18:52 UTC
[Backgroundrb-devel] [ANN] BackgrounDRb release 1.0 available now
On Mon, 2007-12-17 at 21:35 +0300, Ivan S. Manida wrote:
> "ruby script/backgroundrb stop" does not stop the currently running
> server. What is a good way to restart the server, besides doing a
> kill -9 `cat backgroundrb.pid`?

You've got to be kidding. :)
It does stop the BackgrounDRb server for me. Here are possible fixes:

1. Remove the old "backgroundrb" script lying in your script directory.
2. Run: rake backgroundrb:setup
3. Remove the :log: foreground option if you are using it.
4. Start the bdrb server: ./script/backgroundrb start
5. Stop the bdrb server: ./script/backgroundrb stop
6. File a bug with super-critical priority if it doesn't work. Mention which OS and which version of bdrb. :)

--
Let them talk of their oriental summer climes of everlasting conservatories; give me the privilege of making my own summer with my own coals.

http://gnufied.org
Ivan Manida
2007-Dec-17 21:40 UTC
[Backgroundrb-devel] [ANN] BackgrounDRb release 1.0 available now
hemant kumar wrote:
> You got to be kidding. :)
> It does stops backgroundrb server for me. Here are possible fixes:
> [...]

OS is Solaris, bdrb is up-to-the-minute. I did recreate the control scripts. After further investigation, it *maybe* happens only while there is a worker doing something. But the fact is there - it reports "deleting pid file" to the console and the process is not removed - start fails since the port is taken, and I have to kill it. I'll test more tomorrow and will file a bug with more details.
hemant kumar
2007-Dec-18 02:18 UTC
[Backgroundrb-devel] [ANN] BackgrounDRb release 1.0 available now
Hi,

On Tue, 2007-12-18 at 00:40 +0300, Ivan Manida wrote:
> OS is Solaris, bdrb is up-to-the-minute. I did recreate the control
> scripts. After further investigation, it *maybe* happens only while
> there is a worker that does something. [...]

Could be a Solaris-specific issue. I will check it out.

--
Let them talk of their oriental summer climes of everlasting conservatories; give me the privilege of making my own summer with my own coals.

http://gnufied.org
Hi

On Dec 17, 2007 8:40 PM, Danila Ulyanov <du at bestwaytech.com> wrote:
> Is there any special reason that creating worker does not return a
> unique job_key itself upon creation like it used to in the old
> backgroundrb? :-)
> [...]

It does return that now.

--
Let them talk of their oriental summer climes of everlasting conservatories; give me the privilege of making my own summer with my own coals.

http://gnufied.org