Results from an estimated 36 matches for "num_pending".
2013 Sep 02
2
[PATCH V2 6/6] vhost_net: correctly limit the max pending buffers
On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
> As Michael pointed out, we used to limit the max pending DMAs to get better
> cache utilization. But it was not done correctly, since the check was only
> made when there were no new buffers submitted from the guest. The guest can
> easily exceed the limitation by continuously sending packets.
>
> So this patch moves the check into main
2013 Aug 16
2
[PATCH 6/6] vhost_net: remove the max pending check
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
> We used to limit the max pending DMAs to prevent the guest from pinning too
> many pages. But this could be removed since:
>
> - We have the sk_wmem_alloc check in both tun/macvtap to do the same work
> - This max pending check was almost useless since it was only done when
> there are no new buffers coming from
2014 Feb 25
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs
exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation
of both host and guest. But it was too aggressive in some cases, since
any delay or blocking of a single packet may delay or block the guest
transmission. Consider the following setup:
+-----+ +-----+
| VM1 | | VM2 |
+--+--+
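The fix described here is to fall back to copying instead of stalling tx. A
minimal sketch of that per-packet decision in C, with illustrative names
(pending_dmas, MAX_PEND, GOODCOPY_LEN) rather than the actual vhost_net
identifiers:

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_PEND     128   /* illustrative cap on in-flight DMAs */
    #define GOODCOPY_LEN 256   /* below this, copying is cheap anyway */

    static unsigned int pending_dmas;  /* zerocopy sends not yet completed */

    /* Decide per packet: use zerocopy only while the backlog is small, so
     * one stuck completion degrades to extra copies instead of blocking
     * the whole guest transmission path. */
    static bool use_zerocopy(size_t len)
    {
        return len >= GOODCOPY_LEN && pending_dmas < MAX_PEND;
    }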
2013 Apr 11
1
[PATCH] vhost_net: remove tx polling state
After commit 2b8b328b61c799957a456a5a8dab8cc7dea68575 (vhost_net: handle polling
errors when setting backend), we in fact track the polling state through
poll->wqh, so there's no need to duplicate the work with an extra
vhost_net_polling_state. So this patch removes it and makes the code simpler.
This patch also removes all the tx starting/stopping code in the tx path
according to
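A rough illustration of the simplification (not the vhost source itself): once
wqh is non-NULL exactly while the poll is registered on a wait queue, the
pointer can stand in for the removed state field.

    #include <stddef.h>

    /* Illustrative stand-in for the poll structure: wqh is set when the
     * poll is registered on a wait queue and cleared otherwise. */
    struct poll_sketch {
        void *wqh;
    };

    /* The pointer itself answers "are we polling?", so a separate
     * started/stopped state variable would only duplicate it. */
    static int poll_started(const struct poll_sketch *p)
    {
        return p->wqh != NULL;
    }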
2013 Aug 16
0
[PATCH 6/6] vhost_net: remove the max pending check
We used to limit the max pending DMAs to prevent the guest from pinning too
many pages. But this could be removed since:
- We have the sk_wmem_alloc check in both tun/macvtap to do the same work
- This max pending check was almost useless since it was only done when there
are no new buffers coming from the guest. The guest can easily exceed the
limitation.
- We've already checked upend_idx != done_idx
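For reference, the check under discussion can be sketched as a distance
computation on the circular ring of zerocopy slots indexed by upend_idx and
done_idx; the ring size and limit below are illustrative values, not
necessarily the kernel's.

    #define RING_SIZE      1024  /* illustrative size of the zerocopy ring */
    #define VHOST_MAX_PEND 128   /* illustrative pending-DMA limit */

    /* Pending DMAs = distance from the consumer (done_idx, completions
     * already processed) to the producer (upend_idx, buffers handed to
     * DMA), accounting for wraparound on the ring. */
    static unsigned int num_pending(unsigned int upend_idx,
                                    unsigned int done_idx)
    {
        return upend_idx >= done_idx ? upend_idx - done_idx
                                     : upend_idx + RING_SIZE - done_idx;
    }

As the thread notes, a limit like this only works if it is evaluated often
enough; a check that runs only when no new buffers arrive is easy to outrun.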
2013 Aug 20
0
[PATCH 6/6] vhost_net: remove the max pending check
On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
>> We used to limit the max pending DMAs to prevent the guest from pinning too
>> many pages. But this could be removed since:
>>
>> - We have the sk_wmem_alloc check in both tun/macvtap to do the same work
>> - This max pending check was almost useless
2013 Aug 30
0
[PATCH V2 6/6] vhost_net: correctly limit the max pending buffers
As Michael pointed out, we used to limit the max pending DMAs to get better
cache utilization. But it was not done correctly, since the check was only
made when there were no new buffers submitted from the guest. The guest can
easily exceed the limitation by continuously sending packets.
So this patch moves the check into the main loop. Tests show about a 5%-10%
improvement in per-cpu throughput for guest tx. But a 5% drop
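A minimal sketch of the relocated check, reusing the num_pending() helper
sketched earlier; fetch_packet() and submit_dma() are illustrative stand-ins
for the real descriptor handling, not vhost functions:

    extern int  fetch_packet(void);  /* next tx buffer from guest, 0 if none */
    extern void submit_dma(void);    /* hand the buffer to zerocopy DMA */
    extern unsigned int upend_idx, done_idx;
    extern unsigned int num_pending(unsigned int upend, unsigned int done);

    #define VHOST_MAX_PEND 128       /* illustrative limit */

    void handle_tx_sketch(void)
    {
        for (;;) {
            /* Evaluated on every loop iteration, so a guest that keeps
             * queueing packets can no longer slip past the limit. */
            if (num_pending(upend_idx, done_idx) >= VHOST_MAX_PEND)
                break;               /* back off until completions drain */
            if (!fetch_packet())
                break;
            submit_dma();
        }
    }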
2013 Sep 02
0
[PATCH V2 6/6] vhost_net: correctly limit the max pending buffers
On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
>> As Michael pointed out, we used to limit the max pending DMAs to get better
>> cache utilization. But it was not done correctly, since the check was only
>> made when there were no new buffers submitted from the guest. The guest can
>> easily exceed the limitation by keeping
2014 Mar 07
5
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs
exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation
of both host and guest. But it was too aggressive in some cases, since
any delay or blocking of a single packet may delay or block the guest
transmission. Consider the following setup:
+-----+ +-----+
| VM1 | | VM2 |
+--+--+
2011 Dec 23
2
re: Btrfs: fix num_workers_starting bug and other bugs in async thread
...back,
601 struct btrfs_worker_thread, worker_list);
602 found:
603 /*
604 * this makes sure the worker doesn't exit before it is placed
605 * onto a busy/idle list
606 */
607 atomic_inc(&worker->num_pending);
608 spin_unlock_irqrestore(&workers->lock, flags);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
And again here.
Btw, does find_worker() ever get called with IRQs disabled? If so then
__btrfs_start_workers() enables them. Maybe that function should use
spin_lock_irqs...
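The concern is the usual irq vs. irqsave distinction: spin_unlock_irq()
unconditionally re-enables interrupts, while the irqsave/irqrestore pair
preserves whatever IRQ state the caller had. A sketch using the standard
Linux spinlock API (the btrfs_workers details are omitted):

    #include <linux/spinlock.h>

    /* If the caller already runs with IRQs disabled, this variant
     * silently re-enables them on unlock, breaking the caller's
     * IRQ-off section. */
    static void grab_unsafe(spinlock_t *lock)
    {
        spin_lock_irq(lock);
        /* ... touch shared state ... */
        spin_unlock_irq(lock);
    }

    /* The irqsave variant records the caller's IRQ state in 'flags'
     * and restores exactly that state, so it is safe regardless of
     * the calling context. */
    static void grab_safe(spinlock_t *lock)
    {
        unsigned long flags;

        spin_lock_irqsave(lock, flags);
        /* ... touch shared state ... */
        spin_unlock_irqrestore(lock, flags);
    }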
2014 Feb 25
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
> We used to stop the handling of tx when the number of pending DMAs
> exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation
> of both host and guest. But it was too aggressive in some cases, since
> any delay or blocking of a single packet may delay or block the guest
> transmission. Consider the following
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
>> We used to stop the handling of tx when the number of pending DMAs
>> exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation
>> of both host and guest. But it was too aggressive in some cases, since
>> any delay or blocking of a single packet
2013 Aug 16
10
[PATCH 0/6] vhost code cleanup and minor enhancement
Hi all:
This series tries to unify and simplify the vhost code, especially for
zerocopy. Please review.
Thanks
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() returns void
vhost_net: use vhost_add_used_and_signal_n() in
vhost_zerocopy_signal_used()
vhost: switch to use vhost_add_used_n()
vhost_net: determine whether or not to use zerocopy at one time
vhost_net: poll vhost