Displaying 17 results from an estimated 17 matches for "sock_def_read".
2016 Apr 27
2
[PATCH] vhost_net: stop polling socket during rx processing
On Tue, Apr 26, 2016 at 03:35:53AM -0400, Jason Wang wrote:
> We don't stop polling the socket during rx processing; this leads
> to unnecessary wakeups from underlying net devices (e.g.
> sock_def_readable() from tun), and rx is slowed down as a
> result. This patch avoids that by stopping socket polling during
> rx processing. A small drawback is some extra overhead in the
> light-load case from the added start/stop polling, but a single
> netperf TCP_RR does not notice...
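For readers skimming the thread: the change boils down to unregistering vhost from the socket's waitqueue for the duration of handle_rx() and re-registering on the way out. Below is a minimal sketch of that shape, using the vhost_net_disable_vq()/vhost_net_enable_vq() helper names from drivers/vhost/net.c; the receive loop is elided and the real function carries considerably more state:

static void handle_rx(struct vhost_net *net)
{
	struct vhost_virtqueue *vq = &net->vqs[VHOST_NET_VQ_RX].vq;
	struct socket *sock;

	mutex_lock(&vq->mutex);
	sock = vq->private_data;
	if (!sock)
		goto out;

	/* Stop polling: take vhost off the socket's waitqueue so
	 * sock_def_readable() no longer wakes us for every packet. */
	vhost_net_disable_vq(net, vq);

	/* ... drain the socket into the rx virtqueue ... */

	/* Resume polling so fresh data wakes handle_rx() again. */
	vhost_net_enable_vq(net, vq);
out:
	mutex_unlock(&vq->mutex);
}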
2016 May 30
1
[PATCH V2 2/2] vhost_net: conditionally enable tx polling
On Mon, May 30, 2016 at 02:47:54AM -0400, Jason Wang wrote:
> We always poll the socket for tx; this is suboptimal since:
>
> - tx polling is only needed once we exceed the sndbuf of the socket.
> - since we use two independent polls for tx and vq, this slightly
> increases the waitqueue traversal time and, more importantly, vhost
> cannot benefit from commit
>
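The tx side is the mirror image of the rx patch: the write-side wakeup only matters once the socket's sndbuf is full, so polling can stay off until sendmsg() actually fails. A hedged sketch of that control flow (names from drivers/vhost/net.c; locking, descriptor handling, and message construction are elided):

static void handle_tx(struct vhost_net *net)
{
	struct vhost_virtqueue *vq = &net->vqs[VHOST_NET_VQ_TX].vq;
	struct socket *sock = vq->private_data;
	struct msghdr msg = {};	/* filled from the vq descriptors */
	size_t len = 0;
	int err;

	/* Assume we won't block: no tx polling by default. */
	vhost_net_disable_vq(net, vq);

	for (;;) {
		/* ... fetch the next descriptor; break when the vq
		 * is empty; build msg/len from it ... */
		err = sock->ops->sendmsg(sock, &msg, len);
		if (err < 0) {
			/* Typically -EAGAIN on a full sndbuf: only
			 * now is the socket's write-space wakeup
			 * worth subscribing to. */
			vhost_net_enable_vq(net, vq);
			break;
		}
		/* ... signal the used buffer back to the guest ... */
	}
}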
2011 Jul 01
1
[79030.229547] motion: page allocation failure: order:6, mode:0xd4
...store_fl_direct_reloc+0x4/0x4
[79030.229746] [<ffffffff817845f9>] ? _raw_spin_unlock_irqrestore+0x69/0x80
[79030.229757] [<ffffffff810416fe>] ? __wake_up_sync_key+0x5e/0x80
[79030.229766] [<ffffffff815865e0>] ? vidioc_dqbuf+0x80/0x80
[79030.229777] [<ffffffff81663f6e>] ? sock_def_readable+0x3e/0x70
[79030.229787] [<ffffffff8173f9ce>] ? unix_dgram_sendmsg+0x62e/0x6d0
[79030.229797] [<ffffffff816603bd>] ? sock_sendmsg+0xfd/0x120
[79030.229806] [<ffffffff815a4d63>] ? __videobuf_mmap_mapper+0x123/0x200
[79030.229816] [<ffffffff8156182d>] video_usercopy+0x...
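This trace is unrelated to vhost, but it shows where sock_def_readable() sits: it is the default sk->sk_data_ready callback that runs for every queued packet, here on behalf of unix_dgram_sendmsg(). A simplified rendering of what it does, paraphrased from net/core/sock.c (the exact signature and poll flags vary across kernel versions):

static void sock_def_readable(struct sock *sk)
{
	struct socket_wq *wq;

	rcu_read_lock();
	wq = rcu_dereference(sk->sk_wq);
	/* Walk the socket's waitqueue and wake every sleeper; this
	 * per-packet wakeup is the cost the vhost_net patches above
	 * avoid by leaving the waitqueue while busy. */
	if (skwq_has_sleeper(wq))
		wake_up_interruptible_sync_poll(&wq->wait,
			EPOLLIN | EPOLLPRI | EPOLLRDNORM | EPOLLRDBAND);
	sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
	rcu_read_unlock();
}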
2016 Jun 01
7
[PATCH V3 0/2] vhost_net polling optimization
Hi:
This series tries to optimize vhost_net polling at two points:
- Stop rx polling, to reduce the unnecessary wakeups during
handle_rx().
- Conditionally enable tx polling, to reduce the unnecessary
waitqueue traversal and spinlock overhead.
Tests show about a 17% improvement in rx pps.
Please review.
Changes from V2:
- Don't re-enable the rx vq if we hit an error or the rx vq is empty
Changes from V1:
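The V2 to V3 change is about which exit paths re-arm polling: if handle_rx() stops because of an error or because the guest has posted no rx buffers, re-enabling would only produce wakeups that cannot be serviced yet. A sketch of the tail of the receive loop under that rule (fragment only; helper names such as peek_head_len() and get_rx_bufs() follow drivers/vhost/net.c, but the calls are simplified):

	for (;;) {
		sock_len = peek_head_len(sock);
		if (!sock_len)
			break;		/* socket drained: normal exit */
		headcount = get_rx_bufs(vq, /* ... */);
		if (headcount < 0)
			goto out;	/* error: leave polling off */
		if (!headcount)
			goto out;	/* rx vq empty: leave polling off */
		/* ... copy the packet into the guest buffers ... */
	}
	vhost_net_enable_vq(net, vq);	/* re-enable on the normal path only */
out:
	mutex_unlock(&vq->mutex);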
2016 May 30
1
[PATCH V2 1/2] vhost_net: stop polling socket during rx processing
On Mon, May 30, 2016 at 02:47:53AM -0400, Jason Wang wrote:
> We don't stop rx polling of the socket during rx processing; this
> leads to unnecessary wakeups from underlying net devices (e.g.
> sock_def_readable() from tun), and rx is slowed down as a
> result. This patch avoids that by stopping socket polling during
> rx processing. A small drawback is some extra overhead in the
> light-load case from the added start/stop polling, but a single
> netperf TCP_RR does not notice...
2016 May 30
4
[PATCH V2 0/2] vhost_net polling optimization
Hi:
This series tries to optimize vhost_net polling at two points:
- Stop rx polling, to reduce the unnecessary wakeups during
handle_rx().
- Conditionally enable tx polling, to reduce the unnecessary
waitqueue traversal and spinlock overhead.
Tests show about a 17% improvement in rx pps.
Please review.
Changes from V1:
- Use vhost_net_disable_vq()/vhost_net_enable_vq() instead of open
coding.
-
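For reference, the two helpers named in the changelog are thin wrappers that add or remove vhost's wait entry on the socket's waitqueue, which is what makes the wakeups start and stop. Paraphrased from drivers/vhost/net.c of that era (field layout may differ between kernel versions):

static void vhost_net_disable_vq(struct vhost_net *n,
				 struct vhost_virtqueue *vq)
{
	struct vhost_net_virtqueue *nvq =
		container_of(vq, struct vhost_net_virtqueue, vq);
	struct vhost_poll *poll = n->poll + (nvq - n->vqs);

	if (!vq->private_data)
		return;
	vhost_poll_stop(poll);	/* remove_wait_queue() under the hood */
}

static int vhost_net_enable_vq(struct vhost_net *n,
			       struct vhost_virtqueue *vq)
{
	struct vhost_net_virtqueue *nvq =
		container_of(vq, struct vhost_net_virtqueue, vq);
	struct vhost_poll *poll = n->poll + (nvq - n->vqs);
	struct socket *sock = vq->private_data;

	if (!sock)
		return 0;
	return vhost_poll_start(poll, sock->file);	/* add_wait_queue() */
}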
2016 Apr 28
0
[PATCH] vhost_net: stop polling socket during rx processing
On 04/27/2016 07:28 PM, Michael S. Tsirkin wrote:
> On Tue, Apr 26, 2016 at 03:35:53AM -0400, Jason Wang wrote:
>> We don't stop polling the socket during rx processing; this leads
>> to unnecessary wakeups from underlying net devices (e.g.
>> sock_def_readable() from tun), and rx is slowed down as a
>> result. This patch avoids that by stopping socket polling during
>> rx processing. A small drawback is some extra overhead in the
>> light-load case from the added start/stop polling, but a single
>> netperf TCP_RR...
2016 May 30
0
[PATCH V2 1/2] vhost_net: stop polling socket during rx processing
We don't stop rx polling of the socket during rx processing; this
leads to unnecessary wakeups from underlying net devices (e.g.
sock_def_readable() from tun), and rx is slowed down as a
result. This patch avoids that by stopping socket polling during
rx processing. A small drawback is some extra overhead in the
light-load case from the added start/stop polling, but a single
netperf TCP_RR does not notice any change. In a sup...
2016 Jun 01
0
[PATCH V3 1/2] vhost_net: stop polling socket during rx processing
We don't stop rx polling of the socket during rx processing; this
leads to unnecessary wakeups from underlying net devices (e.g.
sock_def_readable() from tun), and rx is slowed down as a
result. This patch avoids that by stopping socket polling during
rx processing. A small drawback is some extra overhead in the
light-load case from the added start/stop polling, but a single
netperf TCP_RR does not notice any change. In a sup...
2005 Jun 04
11
kernel oops/IRQ exception when networking between many domUs
Hi,
I tried to build experimental networks with Xen and stumbled over the same
problem that was described quite well by Mark Doll in his posting
"xen_net: Failed to connect all virtual interfaces: err=-100"
here:
http://lists.xensource.com/archives/html/xen-users/2005-04/msg00447.html
As it was still present in 2.0.6, I tried 3.0-devel and found NR_PIRQS
and NR_DYNIRQS had been
2003 Jun 09
7
Dual T400P, SMP, performance issues
Hi,
We are trying to validate Asterisk as a PRI <-> SIP media gateway with two
T400P cards (8 T1s) per box. The first experience with BOX1 (Compaq,
2.53 GHz, 1 GB RAM) and just one T400P was encouraging: on a load test
with 3 T1s worth of calls we averaged 75% idle CPU.
Not so with BOX2 (Dell, single 2.6 GHz Xeon, 1 GB RAM, 2 T400P) and BOX3
(Dell, dual 2.6 GHz Xeon, 2 GB RAM, 2 T400P,