search for: 100us

Displaying 20 results from an estimated 35 matches for "100us".

2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...tx can starve rx? >> I just want to keep it user-controllable. Unless we memorize it, busypoll >> can run unexpectedly long. > > I think the total amount of time for busy polling is bounded. If I was > wrong, it should be a bug somewhere. Consider this kind of scenario: 0. Set 100us busypoll for example. 1. handle_tx() runs busypoll. 2. Something like zerocopy queues tx_work within 100us. 3. busypoll exits and calls handle_tx() again. 4. Repeat 1-3. In this case handle_tx() does not process packets but busypoll essentially runs beyond 100us without the endtime memorized. This may...
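The scenario quoted above is easier to see in code. The following is a minimal, hypothetical C sketch, not the actual vhost code: now_us(), tx_work_queued() and the loop body are invented stand-ins. It shows how re-deriving the deadline on every handle_tx() call lets busy polling run far beyond the configured 100us:

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

#define BUSYPOLL_TIMEOUT_US 100

/* Monotonic clock in microseconds. */
static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

/* Stand-in for "zerocopy queues tx_work within 100us" (step 2). */
static bool tx_work_queued(void)
{
    return true;
}

/* Problematic pattern: every invocation computes a fresh deadline. */
static void handle_tx(void)
{
    uint64_t endtime = now_us() + BUSYPOLL_TIMEOUT_US;

    while (now_us() < endtime) {
        if (tx_work_queued())
            return; /* step 3: busypoll exits, handle_tx() is requeued */
        /* poll the vring here */
    }
    /* process packets here */
}

int main(void)
{
    /* Steps 1-4: each call gets a brand-new 100us budget, so when work
     * keeps arriving within the window, no packets are processed yet the
     * total busy-poll time grows unbounded -- the original endtime is
     * never memorized across calls. */
    for (int i = 0; i < 4; i++)
        handle_tx();
    return 0;
}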
2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...want to keep it user-controllable. Unless we memorize it, busypoll >>>> can run unexpectedly long. >>> I think the total amount of time for busy polling is bounded. If I was >>> wrong, it should be a bug somewhere. >> Consider this kind of scenario: >> 0. Set 100us busypoll for example. >> 1. handle_tx() runs busypoll. >> 2. Something like zerocopy queues tx_work within 100us. >> 3. busypoll exits and calls handle_tx() again. >> 4. Repeat 1-3. >> >> In this case handle_tx() does not process packets but busypoll >> esse...
2018 Jul 02
1
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...>>>>>> busypoll >>>>>> can run unexpectedly long. >>>>> I think the total amount of time for busy polling is bounded. If I was >>>>> wrong, it should be a bug somewhere. >>>> Consider this kind of scenario: >>>> 0. Set 100us busypoll for example. >>>> 1. handle_tx() runs busypoll. >>>> 2. Something like zerocopy queues tx_work within 100us. >>>> 3. busypoll exits and calls handle_tx() again. >>>> 4. Repeat 1-3. >>>> >>>> In this case handle_tx() doe...
2018 Jun 30
1
[PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
...mail.com wrote: > From: Tonghao Zhang <xiangxia.m.yue at gmail.com> > > This patch improves the guest receive and transmit performance. > On the handle_tx side, we poll the sock receive queue at the > same time. handle_rx does that in the same way. > > We set the poll-us=100us and use iperf3 to test Where/how do you configure poll-us=100us? Are you talking about /proc/sys/net/core/busy_poll? p.s. Nice performance boost! :-) > its bandwidth, use the netperf to test throughput and mean > latency. When running the tests, the vhost-net kthread of > that V...
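On the configuration question raised in this message: as best I can tell, poll-us is a tap netdev property in QEMU, distinct from the /proc/sys/net/core/busy_poll sysctl, and underneath it maps to the per-virtqueue VHOST_SET_VRING_BUSYLOOP_TIMEOUT ioctl. Below is a minimal userspace C sketch of that ioctl; a real setup would also wire up the backend and the guest memory table, which is omitted here:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
    /* Each open() of /dev/vhost-net creates a fresh vhost-net instance. */
    int fd = open("/dev/vhost-net", O_RDWR);
    if (fd < 0) {
        perror("open /dev/vhost-net");
        return 1;
    }
    if (ioctl(fd, VHOST_SET_OWNER) < 0) {
        perror("VHOST_SET_OWNER");
        return 1;
    }
    /* index selects the virtqueue (0 = rx, 1 = tx for vhost-net);
     * num is the busy-poll timeout in microseconds. This mirrors a
     * poll-us=100 setting for the tx queue. */
    struct vhost_vring_state state = { .index = 1, .num = 100 };
    if (ioctl(fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &state) < 0)
        perror("VHOST_SET_VRING_BUSYLOOP_TIMEOUT");
    return 0;
}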
2018 Jul 02
0
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...>>> I just want to keep it user-controllable. Unless we memorize it, busypoll >>> can run unexpectedly long. >> I think the total amount of time for busy polling is bounded. If I was >> wrong, it should be a bug somewhere. > Consider this kind of scenario: > 0. Set 100us busypoll for example. > 1. handle_tx() runs busypoll. > 2. Something like zerocopy queues tx_work within 100us. > 3. busypoll exits and calls handle_tx() again. > 4. Repeat 1-3. > > In this case handle_tx() does not process packets but busypoll > essentially runs beyond 100us wi...
2018 Jul 02
0
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...-controllable. Unless we memorize it, busypoll >>>>> can run unexpectedly long. >>>> I think the total amount of time for busy polling is bounded. If I was >>>> wrong, it should be a bug somewhere. >>> Consider this kind of scenario: >>> 0. Set 100us busypoll for example. >>> 1. handle_tx() runs busypoll. >>> 2. Something like zerocopy queues tx_work within 100us. >>> 3. busypoll exits and calls handle_tx() again. >>> 4. Repeat 1-3. >>> >>> In this case handle_tx() does not process packets bu...
2018 Jun 27
2
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
...; > > > > To avoid deadlock, change the code to lock the vqs one > > by one and use VHOST_NET_VQ_XX as a subclass for > > mutex_lock_nested. With the patch, qemu can set > > the busyloop_timeout differently for the rx and tx queues. > > > > We set poll-us=100us and use iperf3 to test > > its throughput. The iperf3 commands are shown below. > > > > on the guest: > > iperf3 -s -D > > > > on the host: > > iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400 > > > > * With the patch: 23.1 Gbits/s...
2018 Jun 26
3
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
...receive queue at the same time. handle_rx does that in the same way. To avoid deadlock, change the code to lock the vqs one by one and use VHOST_NET_VQ_XX as a subclass for mutex_lock_nested. With the patch, qemu can set the busyloop_timeout differently for the rx and tx queues. We set poll-us=100us and use iperf3 to test its throughput. The iperf3 commands are shown below. on the guest: iperf3 -s -D on the host: iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400 * With the patch: 23.1 Gbits/sec * Without the patch: 12.7 Gbits/sec Signed-off-by: Tonghao Zhang <zhangtonghao at...
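The deadlock-avoidance change described in this commit message uses the kernel's lockdep subclass mechanism. An illustrative kernel-style sketch of the pattern follows, with stand-in types rather than the real vhost_net structures:

/* Kernel-style sketch, not the actual vhost code. Two vq mutexes are
 * taken in a fixed order, and the vq index doubles as the lockdep
 * subclass so that nesting two locks of the same class is not reported
 * as a self-deadlock. */
#include <linux/mutex.h>

enum { VHOST_NET_VQ_RX = 0, VHOST_NET_VQ_TX = 1 };

struct demo_vq {
    struct mutex mutex;
};

static void demo_lock_vqs(struct demo_vq *rx, struct demo_vq *tx)
{
    /* A fixed RX-then-TX order prevents ABBA deadlock between paths
     * that need both queues. */
    mutex_lock_nested(&rx->mutex, VHOST_NET_VQ_RX);
    mutex_lock_nested(&tx->mutex, VHOST_NET_VQ_TX);
}

static void demo_unlock_vqs(struct demo_vq *rx, struct demo_vq *tx)
{
    mutex_unlock(&tx->mutex);
    mutex_unlock(&rx->mutex);
}

Without the subclass argument, lockdep would flag taking two mutexes of the same lock class as a potential recursive deadlock even though the ordering is fixed.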
2018 Jun 30
9
[PATCH net-next v3 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve the guest receive and transmit performance. On the handle_tx side, we poll the sock receive queue at the same time. handle_rx does that in the same way. These patches are split from a previous big patch: http://patchwork.ozlabs.org/patch/934673/ For the full performance report, see patch 4. Tonghao Zhang (4): net: vhost:
2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
Hi Jason, On 2018/06/29 18:30, Jason Wang wrote: > On 2018/06/29 16:09, Toshiaki Makita wrote: ... >> To fix this, poll the work instead of enabling notification when >> busypoll is interrupted by something. IMHO signal_pending() and >> vhost_has_work() are kind of interruptions rather than signals to >> completely cancel the busypoll, so let's run busypoll after
2008 Sep 09
9
[PATCH 2/4] CPUIDLE: Avoid remnant LAPIC timer intr while force hpetbroadcast
CPUIDLE: Avoid remnant LAPIC timer intr while force hpetbroadcast The LAPIC stops during C3 and resumes after exit from C3. Consider the case below: the LAPIC timer was programmed to expire after 1000us, but the CPU enters C3 after 100us and exits C3 at 9xxus. 0us: reprogram_timer(1000us) 100us: enter C3, LAPIC timer stops 9xxus: exit C3 due to unexpected event, LAPIC timer continues running 10xxus: reprogram_timer(1000us), fails because the expiry time is in the past. ......: no timer softirq raised, no change to LAPIC timer. ......: if...
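The failing step in this timeline comes down to refusing to arm a timer whose deadline has already passed. Here is a hedged C sketch of that check; NOW() and the constants mimic the excerpt, and everything else is invented for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Pretend "now" is 1050us, i.e. the 10xxus point just after C3 exit. */
static uint64_t NOW(void)
{
    return 1050;
}

/* Arming a LAPIC-style one-shot timer: a deadline already in the past
 * can never fire, so the arm attempt is rejected. */
static bool reprogram_timer(uint64_t expire_us)
{
    if (expire_us <= NOW())
        return false; /* 1000us <= 1050us: the expiry time has passed */
    /* would write the hardware countdown/deadline register here */
    return true;
}

int main(void)
{
    if (!reprogram_timer(1000))
        printf("reprogram_timer(1000us) failed: deadline in the past\n");
    /* If no timer softirq is raised after this failure, nothing ever
     * rearms the LAPIC timer -- the stall described in the excerpt. */
    return 0;
}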
2018 Jun 27
0
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
...andle_rx does that in the same way. > > To avoid deadlock, change the code to lock the vqs one > by one and use VHOST_NET_VQ_XX as a subclass for > mutex_lock_nested. With the patch, qemu can set > the busyloop_timeout differently for the rx and tx queues. > > We set poll-us=100us and use iperf3 to test > its throughput. The iperf3 commands are shown below. > > on the guest: > iperf3 -s -D > > on the host: > iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400 > > * With the patch: 23.1 Gbits/sec > * Without the patch: 12.7 Gbits/sec ...
2018 Jun 30
0
[PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> This patch improves the guest receive and transmit performance. On the handle_tx side, we poll the sock receive queue at the same time. handle_rx does that in the same way. We set poll-us=100us, use iperf3 to test bandwidth, and use netperf to test throughput and mean latency. When running the tests, the vhost-net kthread of that VM is always at 100% CPU. The commands are shown below. iperf3 -s -D iperf3 -c IP -i 1 -P 1 -t 20 -M 1400 or netserver netperf -H IP -t TCP_RR -l...
2006 Feb 11
1
TE411P Really Bad Echo ORION
The Orion echo canceller is just OK. The Tellabs units work just as well if you don't mind 10 mins of soldering. I have the Orion running with an Adit 600 and a TE110P. Echo cancel is fairly good, but I have loads of problems with DTMF digits. -Darren
1999 Sep 09
2
KINGSTON SOHO Hub
I just ran into a completely unexpected problem. It appears that the Kingston SOHO Hubs don't do TCP. They do NETBEUI very well. After 4 hours of troubleshooting I went home and got my cheap Bay Networks 8 port hub and everything worked. This is to hopefully prevent others from running into the same problem. Nothing on Kingston's site mentions the lack of TCP support, but swapping only the hub worked
2013 Jul 10
3
Performance of Xen VCPU Scheduling
...he listed patches are reasonable or could have side-effects. Interesting results in the link above: - xpin decreases startup time of vms in a bootstorm - xpin has a pathological case when the vms are burning lots of cpu - nopin produces an interesting cluster of high event channel latency between 100us and 1000us when enough vms are using lots of cpu - experimental tweaks on the xen credit1 scheduler code that make the xpin pathological case go away, plus other increases (and sometimes decreases) in bootstorm and vm density performance. cheers, Marcus