Hello, I think I may have discovered a bug in BackgrounDRb. It seems that
when the data being requested is large (via send_request), say in the range of
64k+, it crashes or at least incapacitates BackgrounDRb.
For example, take these two workers:
class SmallWorker < BackgrounDRb::MetaWorker
  set_worker_name :small_worker
  set_no_auto_load(true)
  attr_reader :payload

  def create(args)
    @payload = 's ' * 512   # 1,024 bytes, about 1k
  end
end
class LargeWorker < BackgrounDRb::MetaWorker
  set_worker_name :large_worker
  set_no_auto_load(true)
  attr_reader :payload

  def create(args)
    @payload = 'L' * 65600   # 65,600 bytes, just over 64k
  end
end
I'll start with SmallWorker. Created by:

@job_key = rand(1000)   # never mind the weak job keys for now
MiddleMan.new_worker(:worker => :small_worker,
                     :data => 'not used',
                     :job_key => @job_key)
then retrieved with:

@ret = MiddleMan.send_request(:worker => :small_worker,
                              :job_key => @job_key,
                              :worker_method => :payload)
SmallWorker returns about 1k of data in the @ret hash, and it works every
time.
Now do the same thing with LargeWorker, whose payload is larger than 64k:
@ret comes back nil and BackgrounDRb stops accepting requests entirely.
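For reference, here are the two payload sizes next to the 16-bit boundary
(plain Ruby arithmetic, nothing BackgrounDRb-specific):

('s ' * 512).size    # => 1024    -- well under 2**16
('L' * 65600).size   # => 65600   -- just over 2**16 = 65536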
This was done in an isolated rails project, with only the bare minimum of
files created for testing purposes.
If this is simply a matter of me not setting something up properly, great,
just let me know and problem solved. Otherwise, I think we have a bug here.
Given that the failure shows up right around the 64k mark, it seems likely
that a 16-bit number is tracking something that needs at least 32 bits.
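Purely to illustrate that hypothesis (this is a guess at the failure mode,
not BackgrounDRb's actual wire code): if packets were framed with a 16-bit
length field, a 65,600-byte payload would wrap around:

payload = 'L' * 65600
[payload.size].pack('n').unpack('n').first   # => 64    (65600 % 65536; the 16-bit field wraps)
[payload.size].pack('N').unpack('N').first   # => 65600 (a 32-bit field holds the true length)

A receiver trusting such a 16-bit header would expect only 64 bytes, leaving
the rest of the stream misframed, which would match the "no longer accepts
requests" symptom.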