similar to: re: Btrfs: fix num_workers_starting bug and other bugs in async thread

Displaying results from an estimated 400 matches similar to: "re: Btrfs: fix num_workers_starting bug and other bugs in async thread"

2007 Oct 15
24
Design flaw? - num_processors, accept/close
Rails instances themselves are almost always single-threaded, whereas Mongrel and its acceptor are multithreaded. In a situation with long-running Rails pages this presents a problem for mod_proxy_balancer. If num_processors is greater than 1 (default: 950), then Mongrel will gladly accept incoming requests and queue them if its Rails instance is currently busy. So even
2009 Mar 20
1
chan_ss7 with ringing, but no voice stream.
Hello, all users: sorry, I am resending this to clarify the message. I have implemented chan_ss7 in China. Initially, chan_ss7 could not support the call group, so I modified the code. Now the problem is that both sides can hear the ring, but neither side can hear the other's voice. I think SS7 does not send the voice stream to the destination. In chan_ss7, I added:
2009 Jul 18
0
[PATCH 5/6] fs/btrfs: convert nested spin_lock_irqsave to spin_lock
From: Julia Lawall <julia@diku.dk> If spin_lock_irqsave is called twice in a row with the same second argument, the interrupt state at the point of the second call overwrites the value saved by the first call. Indeed, the second call does not need to save the interrupt state, so it is changed to a simple spin_lock. The semantic match that finds this problem is as follows:
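For readers unfamiliar with the pattern, a minimal kernel-style sketch of what the rule targets (illustrative locks, not the actual btrfs code): the outer _irqsave already disables interrupts, so the inner lock only needs a plain spin_lock; a second _irqsave with the same flags variable would clobber the saved state.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(outer_lock);
    static DEFINE_SPINLOCK(inner_lock);

    static void nested_locking(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&outer_lock, flags);
            /* Buggy form: a second spin_lock_irqsave(&inner_lock, flags)
             * would overwrite 'flags' and lose the state saved above. */
            spin_lock(&inner_lock);             /* IRQs already disabled here */
            /* ... critical section ... */
            spin_unlock(&inner_lock);
            spin_unlock_irqrestore(&outer_lock, flags);
    }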
2006 Nov 25
3
[PATCH] HTTP accept filter support for FreeBSD
This small patch extends configure_socket_options to support FreeBSD's accf_http(9), which defers accept() until there's a full HTTP request to read. Seems to work fine on 6.1-STABLE. DragonflyBSD should work too, provided the /freebsd/ line is modified to match it. accf_http(9): http://www.freebsd.org/cgi/man.cgi?query=accf_http&sektion=9 -- Thomas
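For reference, a rough sketch of what enabling the filter looks like, written against the accf_http(9) man page rather than the actual Mongrel patch; the listening socket installs the "httpready" filter via setsockopt():

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <string.h>

    /* FreeBSD only: defer accept() until a full HTTP request is buffered. */
    static int enable_httpready(int listen_fd)
    {
            struct accept_filter_arg afa;

            memset(&afa, 0, sizeof(afa));
            strcpy(afa.af_name, "httpready");
            return setsockopt(listen_fd, SOL_SOCKET, SO_ACCEPTFILTER,
                              &afa, sizeof(afa));
    }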
2011 Aug 26
0
[PATCH] Btrfs: make some functions return void
The type of some functions that return only 0 is changed to 'void'. In addition, the check on the return value in the caller of these functions becomes unnecessary, so these checks are removed. Signed-off-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com> --- fs/btrfs/async-thread.c | 17 ++++-------- fs/btrfs/async-thread.h | 4 +- fs/btrfs/compression.c | 14 ++++------
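The shape of the change, sketched on an invented helper rather than the btrfs functions the patch actually touches: a function that can only ever return 0 becomes void, and callers drop the dead error check.

    struct worker;                              /* hypothetical type */
    void do_flush(struct worker *w);            /* hypothetical helper */

    /* Before the cleanup: the function can only ever return 0, so the
     * caller-side error check is dead code. */
    static int flush_worker_old(struct worker *w)
    {
            do_flush(w);
            return 0;
    }

    /* After the cleanup: return void and drop the check at the call site. */
    static void flush_worker(struct worker *w)
    {
            do_flush(w);
    }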
2008 Jan 23
14
Again: Workaround found for request queuing vs. num_processors, accept/close
Hello all. I too found out that I sometimes have an action that can take up to 10 seconds in my Rails application. I've read all the arguments Zed made about polling and inbox strategies, and I think I just can't work around my feeling that a "wrong" request that takes up too much time should be able to lock subsequent requests in Mongrel's queue. That's what
2014 Feb 25
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation of both host and guest, but it was too aggressive in some cases, since any delay or blocking of a single packet may delay or block the guest transmission. Consider the following setup: [ASCII diagram of two VMs, VM1 and VM2, truncated in this excerpt]
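The policy described above, sketched in plain C with illustrative names and an assumed limit value (not the vhost-net source): the pending-DMA count selects a transmission strategy instead of stalling the tx path.

    #include <stdbool.h>

    #define VHOST_MAX_PEND 128                  /* illustrative value */

    /* Hypothetical policy helper: when too many zerocopy completions are
     * still outstanding, copy the packet instead of stopping tx, so one
     * slow DMA cannot block the whole guest transmit queue. */
    static bool tx_should_zerocopy(unsigned int pending_dmas)
    {
            return pending_dmas < VHOST_MAX_PEND;
    }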
2013 Sep 02
2
[PATCH V2 6/6] vhost_net: correctly limit the max pending buffers
On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote: > As Michael points out, we used to limit the max pending DMAs to get better cache utilization. But it was not done correctly, since the check was only made when no new buffers were submitted from the guest; the guest can easily exceed the limitation by simply continuing to send packets. So this patch moves the check into main
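A minimal sketch of the placement issue being fixed, using invented names rather than the driver code: if the limit is only tested when the guest stops submitting buffers, a guest that keeps sending never hits it, so the check belongs inside the tx loop itself.

    #include <stddef.h>

    #define MAX_PEND 128                        /* illustrative limit */

    struct tx_ring;                             /* hypothetical types/helpers */
    struct packet;
    struct packet *next_guest_buffer(struct tx_ring *ring);
    unsigned int pending_dmas(struct tx_ring *ring);
    void submit_zerocopy(struct tx_ring *ring, struct packet *pkt);

    /* Hypothetical tx loop: test the limit on every iteration, not only
     * when the guest stops submitting buffers. */
    static void handle_tx(struct tx_ring *ring)
    {
            struct packet *pkt;

            while ((pkt = next_guest_buffer(ring)) != NULL) {
                    if (pending_dmas(ring) >= MAX_PEND)
                            break;              /* stop before pinning more pages */
                    submit_zerocopy(ring, pkt);
            }
    }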
2013 Aug 16
2
[PATCH 6/6] vhost_net: remove the max pending check
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote: > We used to limit the max pending DMAs to prevent the guest from pinning too many pages. But this could be removed since: - We have the sk_wmem_alloc check in both tun/macvtap to do the same work - This max pending check was almost useless, since it was only made when there were no new buffers coming from
2013 Apr 11
1
[PATCH] vhost_net: remove tx polling state
After commit 2b8b328b61c799957a456a5a8dab8cc7dea68575 (vhost_net: handle polling errors when setting backend), we in fact track the polling state through poll->wqh, so there's no need to duplicate the work with an extra vhost_net_polling_state. So this patch removes it and makes the code simpler. This patch also removes all the tx starting/stopping code in the tx path according to
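A small sketch of the simplification described, with generic names rather than the real vhost structures: once the wait-queue head lives in the poll object, "is polling started?" is just a NULL check, so a separate state field is redundant.

    #include <stdbool.h>
    #include <stddef.h>

    struct wait_queue_head;                     /* opaque for this sketch */

    /* Hypothetical poller: the wait-queue head doubles as the state. */
    struct poller {
            struct wait_queue_head *wqh;        /* non-NULL while registered */
    };

    static bool poll_started(const struct poller *p)
    {
            return p->wqh != NULL;
    }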
2013 Aug 16
10
[PATCH 0/6] vhost code cleanup and minor enhancement
Hi all: This series tries to unify and simplify the vhost code, especially for zerocopy. Please review. Thanks. Jason Wang (6): vhost_net: make vhost_zerocopy_signal_used() returns void vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used() vhost: switch to use vhost_add_used_n() vhost_net: determine whether or not to use zerocopy at one time vhost_net: poll vhost
2014 Mar 07
5
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation of both host and guest, but it was too aggressive in some cases, since any delay or blocking of a single packet may delay or block the guest transmission. Consider the following setup: [ASCII diagram of two VMs, VM1 and VM2, truncated in this excerpt]
2007 Dec 14
0
async-observer rails plugin
I'm pleased to announce the existence of async-observer. This is the very first public release. In the future I'll confine announcements to the beanstalk-talk mailing list. WHAT IS ASYNC OBSERVER? ----------------------- Async Observer is a Rails plugin that provides deep integration with Beanstalk. beanstalkd is a fast, distributed, in-memory work-queue service. Its
2013 Aug 29
0
[PATCH] Btrfs: don't use an async starter for most of our workers
We only need an async starter if we can't make a GFP_NOFS allocation in our current path. This is the case for the endio stuff, since it happens in IRQ context, but for things like the caching thread workers and the delalloc flushers we can easily make this allocation and start threads right away. Also change the worker count for the caching thread pool. Traditionally we limited this to 2
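Roughly, as a configuration sketch with an invented structure (not the btrfs worker API): only pools whose work is queued from IRQ context need the deferred async start; pools queued from process context can allocate with GFP_NOFS and start their threads inline.

    #include <stdbool.h>

    /* Hypothetical pool descriptor capturing the rule in the excerpt:
     * only work queued from IRQ context needs a deferred (async) start. */
    struct pool_cfg {
            const char *name;
            bool async_start;
    };

    static const struct pool_cfg pools[] = {
            { "endio",    true  },              /* completions run in IRQ context */
            { "caching",  false },              /* process context: GFP_NOFS is safe */
            { "delalloc", false },              /* likewise: start threads inline */
    };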