Displaying 14 results from an estimated 14 matches for "34674".

2017 Apr 21
3
[PATCH net-next v2 2/5] virtio-net: transmit napi
...ess, rx_irq, tx_irq}:
>>
>> upstream:
>>
>> 1,1,1: 28985 Mbps, 278 Gcyc
>> 1,0,2: 30067 Mbps, 402 Gcyc
>>
>> napi tx:
>>
>> 1,1,1: 34492 Mbps, 269 Gcyc
>> 1,0,2: 36527 Mbps, 537 Gcyc (!)
>> 1,0,1: 36269 Mbps, 394 Gcyc
>> 1,0,0: 34674 Mbps, 402 Gcyc
>>
>> This is a particularly strong example. It is also representative
>> of most RR tests. It is less pronounced in other streaming tests.
>> 10x TCP_RR, for instance:
>>
>> upstream:
>>
>> 1,1,1: 42267 Mbps, 301 Gcyc
>> 1,0,2: 4...
2017 Apr 20
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
...pi-tx, it is more pronounced in that mode than without napi.

1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:

upstream:
1,1,1: 28985 Mbps, 278 Gcyc
1,0,2: 30067 Mbps, 402 Gcyc

napi tx:
1,1,1: 34492 Mbps, 269 Gcyc
1,0,2: 36527 Mbps, 537 Gcyc (!)
1,0,1: 36269 Mbps, 394 Gcyc
1,0,0: 34674 Mbps, 402 Gcyc

This is a particularly strong example. It is also representative
of most RR tests. It is less pronounced in other streaming tests.
10x TCP_RR, for instance:

upstream:
1,1,1: 42267 Mbps, 301 Gcyc
1,0,2: 40663 Mbps, 445 Gcyc

napi tx:
1,1,1: 42420 Mbps, 303 Gcyc
1,0,2: 42267 Mbps,...
2017 Apr 24
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
...1: 28985 Mbps, 278 Gcyc
>> >> 1,0,2: 30067 Mbps, 402 Gcyc
>> >>
>> >> napi tx:
>> >>
>> >> 1,1,1: 34492 Mbps, 269 Gcyc
>> >> 1,0,2: 36527 Mbps, 537 Gcyc (!)
>> >> 1,0,1: 36269 Mbps, 394 Gcyc
>> >> 1,0,0: 34674 Mbps, 402 Gcyc
>> >>
>> >> This is a particularly strong example. It is also representative
>> >> of most RR tests. It is less pronounced in other streaming tests.
>> >> 10x TCP_RR, for instance:
>> >>
>> >> upstream:
>>...
2017 Apr 21
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
>
> upstream:
>
> 1,1,1: 28985 Mbps, 278 Gcyc
> 1,0,2: 30067 Mbps, 402 Gcyc
>
> napi tx:
>
> 1,1,1: 34492 Mbps, 269 Gcyc
> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> 1,0,1: 36269 Mbps, 394 Gcyc
> 1,0,0: 34674 Mbps, 402 Gcyc
>
> This is a particularly strong example. It is also representative
> of most RR tests. It is less pronounced in other streaming tests.
> 10x TCP_RR, for instance:
>
> upstream:
>
> 1,1,1: 42267 Mbps, 301 Gcyc
> 1,0,2: 40663 Mbps, 445 Gcyc
>
> napi t...
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...> >>
> >> 1,1,1: 28985 Mbps, 278 Gcyc
> >> 1,0,2: 30067 Mbps, 402 Gcyc
> >>
> >> napi tx:
> >>
> >> 1,1,1: 34492 Mbps, 269 Gcyc
> >> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> >> 1,0,1: 36269 Mbps, 394 Gcyc
> >> 1,0,0: 34674 Mbps, 402 Gcyc
> >>
> >> This is a particularly strong example. It is also representative
> >> of most RR tests. It is less pronounced in other streaming tests.
> >> 10x TCP_RR, for instance:
> >>
> >> upstream:
> >>
> >> 1,1,1...
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...1,0,2: 30067 Mbps, 402 Gcyc
> >> >>
> >> >> napi tx:
> >> >>
> >> >> 1,1,1: 34492 Mbps, 269 Gcyc
> >> >> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> >> >> 1,0,1: 36269 Mbps, 394 Gcyc
> >> >> 1,0,0: 34674 Mbps, 402 Gcyc
> >> >>
> >> >> This is a particularly strong example. It is also representative
> >> >> of most RR tests. It is less pronounced in other streaming tests.
> >> >> 10x TCP_RR, for instance:
> >> >>
> >>...
2010 Apr 05
17
[Bug 27455] New: dualhead not working, second display always black
https://bugs.freedesktop.org/show_bug.cgi?id=27455
Summary: dualhead not working, second display always black
Product: xorg
Version: unspecified
Platform: x86 (IA32)
OS/Version: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
AssignedTo: nouveau at
2005 Jun 20
4
script to make a CentOS 4.0 respository updates server...
List, I have a CentOS 4.0 server on my intranet which will serve as a CentOS updates repository, so I need a script to do that job; I cannot use RSYNC. I just want to download the CentOS 4.0 updates every night. Can you help me? Regards, Israel
2017 Apr 18
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
From: Willem de Bruijn <willemb at google.com> Convert virtio-net to a standard napi tx completion path. This enables better TCP pacing using TCP small queues and increases single stream throughput. The virtio-net driver currently cleans tx descriptors on transmission of new packets in ndo_start_xmit. Latency depends on new traffic, so is unbounded. To avoid deadlock when a socket reaches
2010 Jul 13
3
Problem mapping Samba shares in Windows
Hi, In our company we are currently running a Samba server and Windows XP clients. At the moment we are having problems with mapping Samba shares in Windows. Shares are mapped through a Windows startup script, which executes the net use command (with the option persistent:no). For most users this works most of the time; nevertheless it often fails, and the exact reason for this isn't clear
2009 Sep 24
3
[LLVMdev] Is line number in DbgStopPointInst in LLVM accurate?
...32 568, i32 0, { }* bitcast (%llvm.dbg.compile_unit.type* @llvm.dbg.compile_unit134216 to { }*)) [34673]
File Name: [14 x i8] c"sql_insert.cc\00", LineNo: 568, Inst: call void @llvm.dbg.stoppoint(i32 568, i32 0, { }* bitcast (%llvm.dbg.compile_unit.type* @llvm.dbg.compile_unit134216 to { }*)) [34674]
File Name: [14 x i8] c"sql_insert.cc\00", LineNo: 567, Inst: call void @llvm.dbg.stoppoint(i32 567, i32 0, { }* bitcast (%llvm.dbg.compile_unit.type* @llvm.dbg.compile_unit134216 to { }*)) [34676]
File Name: [14 x i8] c"sql_insert.cc\00", LineNo: 567, Inst: call void @llvm.dbg.stoppoint(...