Displaying 20 results from an estimated 42 matches for "pool_siz".
2007 Apr 26
0
pool_size
...one else that has contributed.
I'm writing a web app to subscribe to feeds with video enclosures and
then play them in a flash player, or export them to MythTV or
something like it. I'm using BackgrounDRb for the downloading and
video processing.
The docs mention that the pool_size method can be used to limit the
number of threads that are active at once, and the other jobs will
just queue up, but it doesn't seem to be limiting the number of
threads. I see there were a couple other discussion threads about
this in December and March, but neither of them rea...
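The behaviour pool_size is documented to provide can be sketched with plain Ruby stdlib threads (this is an illustration of the intended semantics, not BackgrounDRb's actual internals): a fixed set of worker threads drains a shared queue, so at most pool-size jobs run at once and the rest wait their turn.

```ruby
# Minimal sketch, plain Ruby stdlib only. POOL_SIZE plays the role of
# pool_size: only POOL_SIZE threads exist, so concurrency can never
# exceed it no matter how many jobs are enqueued.
POOL_SIZE = 2
jobs    = Queue.new
running = 0
peak    = 0
lock    = Mutex.new

10.times { |i| jobs << i }         # enqueue 10 jobs
POOL_SIZE.times { jobs << :stop }  # one poison pill per worker thread

workers = POOL_SIZE.times.map do
  Thread.new do
    while (job = jobs.pop) != :stop
      lock.synchronize { running += 1; peak = [peak, running].max }
      sleep 0.01                   # simulate work
      lock.synchronize { running -= 1 }
    end
  end
end
workers.each(&:join)

puts peak   # never exceeds POOL_SIZE
```

If more threads than this are visible in practice, the limit is not being picked up from the config at all, which matches what the earlier discussion threads reported.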
2008 Jan 03
1
Thread_pool bug?
I have a previous question regarding long tasks and the thread_pool (sorry
for the dup, I didn't see the first go through). To try and track things
down, I made a change based on a suggestion found in the archives. I moved
my import contacts worker to its own file and set the pool_size to 1.
class ImportContactsWorker < BackgrounDRb::MetaWorker
  set_worker_name :import_contacts_worker
  pool_size(1)

  def create(args = nil)
    # Restart any import jobs that didn't complete or start
    ImportJob.process_all_ready
  end

  def import_contacts(args = nil)
    thread_...
2007 May 15
6
Behaviour of pool_size setting
...er processes are filling up my process list.
I start my workers like this:
key = MiddleMan.new_worker(:class => :execution_worker, :args => {...some_args...})
Why do I see so many more workers than my declared limit of 30? Am I
wrong somewhere? How should I understand the behaviour of pool_size?
What happens when I have 30 workers working and the 31st, 32nd, ...,
300th request to start a worker comes in?
Thanks & Regards!
Christian
2007 Mar 29
3
thread pool
Hi,
I am trying to limit the number of worker threads running at the same
time. Adding ":poolsize: 2" to backgroundrb.yml or starting
backgroundrb server like this
script/backgroundrb start -- -s 2
does not seem to help. I can still create more than 2 worker threads
from a rails app.
I tried this on
ubuntu dapper
- ruby 1.8.4
- backgroundrb 0.2.1
as well as
2006 Dec 04
4
Question about acls
...it slows
things down a lot. I would like to keep running rails on my main
server (server1) but move backgroundrb to a secondary server (server2).
I looked through the source trying to figure out how to do this and I
pieced together this configuration file:
--
:port: 22222
:timer_sleep: 60
:pool_size: 15
:load_rails: true
:rails_env: development
:environment: development
:host: localhost
:uri: druby://localhost:22222/
:database_yml: config/database.yml
:protocol: druby
:acl:
  :deny: all
  :allow: localhost 127.0.0.1 server1
  :order: deny,allow
autostart:
  1:
    job_key: chat_notifier...
2008 Jan 13
3
right usage of bdrb
Hi,
I'm going to implement a syndication service, which will get lists in
XML with some meta-data and enclosed video files, which will get encoded
at the end. The syndication run will be started five minutes past
every full hour.
So I thought to build four workers: one for checking which feeds to
syndicate (syndication_worker) at a specific time, one for processing
the list
2006 Dec 08
3
Thread Pool Size?
Hi All,
It might be lack of sleep, but I am struggling to accurately limit our pool
size. It seems like I can specify it on the server with the -s command
line option and also on the client via the YAML pool_size. Is that right?
Which one wins?
Our problem is that we are getting about 40 threads on each backgroundrb box
and it's flooring our db and each bgrb box.
We want around 8.
Is there a way to put a hard ceiling on the server side thread pool?
Here is our setup:
5 app boxes, app01 - app05...
2007 Oct 12
0
Trouble on Multi Server setup
...setup, server1 will be the report server, and server2 and
server3 are the application/web servers
The backgroundrb.yml looks like this on servers 2 and 3:
port: 22223
timer-sleep: 60
load_rails: true
rails_env: production
environment: production
host: 172.16.0.1
database_yml: config/database.yml
pool_size: 10
(172.16.0.1 is the local IP for server1)
On the report server (server1), I have this:
port: 22223
timer-sleep: 60
load_rails: true
rails_env: production
environment: production
host: localhost
database_yml: config/database.yml
pool_size: 10
protocal: druby
:deny: all
:allow: localhost 1...
2007 Jun 21
0
config file questions
example:
:host: 192.168.1.101
:protocol: druby
:worker_dir: lib/workers
:rails_env: production
:pool_size: 15
:load_rails: true
:timer_sleep: 60
:acl:
  :deny: all
  :allow: localhost 192.168.1.*
  :order: deny allow
I have this config file on 2 app servers running rails. The 2 servers are
running behind a load balancer. I''m starting the backgroundrb server on the
one server with the ip of...
2008 May 20
1
Couple questions on BDRb and concurrent processing
My Rails site uses BackgroundDRb and I have a couple of questions:
1. Can someone point me to some sample code/examples for how to use
thread_pool? The website doc says to add a line
======
pool_size 10
======
in my Worker class which seems straightforward.
I wasn't able to understand this part though:
=========
thread_pool.defer(wiki_scrap_url) { |wiki_url| scrap_wikipedia(wiki_url) }
=========
Can someone explain this code please? And where does this code go? In my
Rails method th...
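Conceptually, thread_pool.defer(arg) { |a| ... } pushes the block and its argument onto the pool's queue and returns immediately; a pool thread later pops the pair and runs the block. The call belongs inside a worker method, not in the Rails controller. The sketch below mimics that mechanism with plain stdlib threads; TinyPool and its methods are illustrative names, not BackgrounDRb's real internals.

```ruby
# Stdlib sketch of the defer(arg) { |a| ... } pattern: jobs queue up
# and a fixed set of pool threads executes them asynchronously.
class TinyPool
  def initialize(size)
    @queue = Queue.new
    @threads = size.times.map do
      Thread.new do
        while (work = @queue.pop)   # nil is the shutdown signal
          job, arg = work
          job.call(arg)
        end
      end
    end
  end

  def defer(arg, &job)
    @queue << [job, arg]   # returns immediately; work runs later
  end

  def shutdown
    @threads.size.times { @queue << nil }
    @threads.each(&:join)
  end
end

results = Queue.new
pool = TinyPool.new(3)
pool.defer("http://en.wikipedia.org/wiki/Ruby") { |url| results << url.length }
pool.defer("http://example.org") { |url| results << url.length }
pool.shutdown
```

So in the quoted example, wiki_scrap_url is handed to the block as wiki_url, and scrap_wikipedia runs on one of the pool's threads while the calling method returns straight away.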
2008 May 09
1
register_status for excess thread_pool?
Hi,
Newbie here. I've got a worker (for generating PDF reports) that
uses the "thread_pool" to allow processing multiple reports
simultaneously and queue up any requests that exceed the thread
pool (pool_size = 10 currently).
def process_pdf(user)
  thread_pool.defer(user) do |user|
    makepdf(user)
  end
end
My question is: I use a mutex to handle synchronizing the
register_status. This works fine:
def makepdf(user)
  txt = user.to_s + " started"
  save_status(u...
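The reason the mutex matters: several pool threads update one shared status structure, and an unguarded read-modify-write can interleave and lose updates. A stdlib sketch (the status hash here stands in for the worker's real state; it is not BackgrounDRb API):

```ruby
# Ten threads each record 100 completed reports into shared state.
# Mutex#synchronize makes each read-modify-write atomic, so no
# increment is ever lost.
status = { completed: 0 }
lock   = Mutex.new

threads = 10.times.map do
  Thread.new do
    100.times do
      lock.synchronize { status[:completed] += 1 }
    end
  end
end
threads.each(&:join)

puts status[:completed]   # => 1000, every increment survives
```

Wrapping register_status updates the same way, as the poster does, is the right instinct whenever multiple deferred jobs touch the same worker state.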
2008 Jan 22
2
Threadpool and queuing of tasks
...true)
1a. If none, goto step 5
2. Update status to 'processing'
3. Send to search method
4. Repeat 1
5. Done
Search method (many threads):
1. Perform the search
2. Update status to 'complete'
3. Done
The easy answer is to split these into two workers. Set the pool_size of
Dispatch to 1, and Search to 5 or 10. However, eating two processes (master
and worker) for something so simple as Dispatch seems like serious overkill
to me. Since I currently run on one server, the extra processes cut into
the memory the main site wants.
A related question is how to implem...
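The two-worker split can also be collapsed into one process: a single dispatcher thread plays the pool_size-1 Dispatch role and hands jobs to a small pool that plays Search. A stdlib sketch of that shape (job names and queues here are stand-ins, not the poster's real models):

```ruby
# One dispatcher thread feeds a pool of SEARCH_POOL searcher threads,
# mirroring the Dispatch(pool_size 1) -> Search(pool_size 4) design
# inside a single process.
SEARCH_POOL = 4
ready = Queue.new   # jobs waiting to be dispatched
work  = Queue.new   # jobs handed to search threads
done  = Queue.new   # jobs marked complete

20.times { |i| ready << i }
ready << :stop

searchers = SEARCH_POOL.times.map do
  Thread.new do
    while (job = work.pop) != :stop
      done << job              # "perform the search", mark complete
    end
  end
end

dispatcher = Thread.new do     # the single-threaded dispatch loop
  while (job = ready.pop) != :stop
    work << job                # mark processing, hand off to search
  end
  SEARCH_POOL.times { work << :stop }
end

dispatcher.join
searchers.each(&:join)
puts done.size                 # => 20
```

Because the dispatcher is one thread, jobs are claimed in order with no races, while the searchers still overlap; that avoids burning a second master/worker process pair for the trivial dispatch step.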
2007 May 31
0
Slave socket problem again
...1813) protocol: druby
20070531-10:24:33 (31813) uri: druby://localhost:2000
20070531-10:24:33 (31813) config:
/home/kskalski/railsapp/sars/config/backgroundrb.yml
20070531-10:24:33 (31813) rails_env: development
20070531-10:24:33 (31813) socket_dir: /tmp/backgroundrb.31813
20070531-10:24:33 (31813) pool_size: 5
20070531-10:24:33 (31813) host: localhost
20070531-10:24:33 (31813) acl: denyallallowlocalhost 127.0.0.1orderdenyallow
20070531-10:24:33 (31813) temp_dir: /tmp
20070531-10:24:33 (31813) port: 2000
20070531-10:24:33 (31813) Installed DRb ACL
)
the server still uses sockets!!! I have observed,...
2004 Aug 05
1
Windows build
...xx /aer10/or
[359] [360] [361]
make[1]: *** Deleting file `refman.pdf'
! Interruption.
<to be read again>
\@tempskipa
l.21899 \end{Value}
?
! Emergency stop.
<to be read again>
\@tempskipa
docs/README says that it is necessary to set a certain pool_size var to
500000, but I was not able to find any conf file
with that var. By the way, I'm using fptex (default install).
Any help would be great.
2012 Jul 28
1
[PATCH V4 0/3] Improve virtio-blk performance
Hi, Jens & Rusty
This version is rebased against linux-next which resolves the conflict with
Paolo Bonzini's 'virtio-blk: allow toggling host cache between writeback and
writethrough' patch.
Patches 1/3 and 2/3 apply on Linus's master as well. Since Rusty will pick up
patch 3/3 so the changes to block core (adding blk_bio_map_sg()) will have a
user.
Jens, could you please
2012 Jun 13
4
[PATCH RFC 0/2] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio test shows it gives, 28%, 24%, 21%, 16% IOPS boost and 32%, 17%, 21%, 16%
latency improvement for sequential read/write, random read/write respectively.
Asias He (2):
block: Add blk_bio_map_sg() helper
virtio-blk: Add bio-based IO path for virtio-blk
block/blk-merge.c | 63 ++++++++++++++
2012 Sep 04
1
Repeated Asterisk 10.7.0 crashes
...sage () from /lib64/libc.so.6
(gdb) up
#3 0x0000003686e71e7e in _int_malloc () from /lib64/libc.so.6
(gdb) up
#4 0x0000003686e7382d in calloc () from /lib64/libc.so.6
(gdb) up
#5 0x000000000054a2a0 in _ast_calloc (num_structs=1, struct_size=88,
field_mgr_offset=64, field_mgr_pool_offset=16, pool_size=128,
file=0x101010101010101 <Address 0x101010101010101 out of bounds>,
lineno=1235, func=0x58af9e "ast_log")
at /usr/src/asterisk-10.7.1/include/asterisk/utils.h:495
495 AST_INLINE_API(
Once this starts happening, it seems to keep happening, but Asterisk
seems to...
2012 Aug 07
4
pop3 proxying error
Hi Timo,
I've got some errors with pop3 proxying and dovecot 2.1.9
I's occured on the proxy side:
Aug 7 13:16:47 dev1 dovecot: pop3-login: Fatal: master:
service(pop3-login): child 23046 killed with signal 11 (core dumped)
Server side shows no error, and runs the same dovecot version.
Thanks for your help.
Best regards
Tonio Buonaguidi
Core dump:
GNU gdb (GDB) 7.4.1-debian