Displaying 20 results from an estimated 49 matches for "minf".
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
...0 avg=121085.38 stdev=174416.11 min=0.00
clat (usec) max=3438.30 avg=59863.35 stdev=116607.69 min=0.00
clat (usec) max=3745.65 avg=454501.30 stdev=332699.00 min=0.00
clat (usec) max=4089.75 avg=442374.99 stdev=304874.62 min=0.00
cpu sys=615.12 majf=24080.50 ctx=64253616.50 usr=68.08 minf=17907363.00
cpu sys=1235.95 majf=23389.00 ctx=59788148.00 usr=98.34 minf=20020008.50
cpu sys=764.96 majf=28414.00 ctx=848279274.00 usr=36.39 minf=19737254.00
cpu sys=714.13 majf=21853.50 ctx=854608972.00 usr=33.56 minf=18256760.50
with unlocked kick
read iops=118559.00 bw=59279.66 r...
2008 Mar 14
8
xcalls - mpstat vs dtrace
HI,
T5220, S10U4 + patches
mdb -k
> ::memstat
While the above is running (it takes some time; ideally a ::memstat -n 4 option to use 4 threads would be useful), mpstat 1 shows:
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
48 0 0 1922112 9 0 0 8 0 0 0 15254 6 94 0 0
So about 2 million xcalls per second.
Let's check with dtrace:
dtrace -n sysinfo:::xcalls'{@=count();}' -n tick-1s'{...
2009 Mar 31
9
Hwo to disable the polling function of mac_srs
In crossbow, each mac_srs has a kernel thread called "mac_rx_srs_poll_ring"
to poll the hardware, and crossbow wakes up this thread to poll packets
from the hardware automatically. Does crossbow provide any method to disable
the polling mechanism, for example by disabling this kernel thread?
Thanks
Zhihui
2012 Jul 28
1
[PATCH V4 0/3] Improve virtio-blk performance
Hi, Jens & Rusty
This version is rebased against linux-next which resolves the conflict with
Paolo Bonzini's 'virtio-blk: allow toggling host cache between writeback and
writethrough' patch.
Patches 1/3 and 2/3 apply on Linus's master as well. Since Rusty will pick up
patch 3/3, the changes to the block core (adding blk_bio_map_sg()) will have a
user.
Jens, could you please
2012 Aug 08
2
[PATCH V7 0/2] Improve virtio-blk performance
Hi, all
Changes in v7:
- Using vbr->flags to trace request type
- Dropped unnecessary struct virtio_blk *vblk parameter
- Reuse struct virtblk_req in bio done function
- Added performance data on normal SATA device and the reason why make it optional
Fio test shows bio-based IO path gives the following performance improvement:
1) Ramdisk device
With bio-based IO path, sequential
2009 Jul 09
3
performance troubleshooting
...168 1306 8182 3165 8 39 53
7 0 0 2711160 474176 248 675 22 0 0 0 0 0 0 0 84 1147 8616 2461 2 23 74
Notice the run queue. Is there a DTrace script (from the DTT package) that I can use to figure out what is going on?
mpstat shows:
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 3511 48 97 784 197 1067 32 239 365 0 5814 5 43 0 52
1 1287 28 43 429 0 901 37 215 314 0 2821 3 40 0 57
2 2954 54 155 1442 1079 1176 26 241 339 0 4927 4 42 0 54...
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
...=21,046KB/s, iops=5,261, runt= 6243msec
slat (usec): min=4, max=11,678, avg=164.73, stdev=442.99
clat (usec): min=11, max=19,552, avg=2229.92, stdev=1367.33
bw (KB/s) : min= 0, max=24448, per=2.03%, avg=3402.15, stdev=7878.23
cpu : usr=0.35%, sys=18.66%, ctx=6023, majf=0, minf=25
IO depths : 1=0.0%, 2=0.0%, 4=0.1%, 8=0.1%, 16=0.1%, 32=87.5%, >=64=12.5%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
issued r/w: total=98224/32848, short=0/0...
2012 Jul 13
5
[PATCH V3 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio test shows bio-based IO path gives the following performance improvement:
1) Ramdisk device
With bio-based IO path, sequential read/write, random read/write
IOPS boost : 28%, 24%, 21%, 16%
Latency improvement: 32%, 17%, 21%, 16%
2) Fusion IO device
With bio-based IO path, sequential
2012 Aug 07
4
[PATCH V6 0/2] Improve virtio-blk performance
Hi, all
This version reworked REQ_FLUSH and REQ_FUA support as suggested by
Christoph and dropped the block core bits since Jens has picked them up.
Fio test shows bio-based IO path gives the following performance improvement:
1) Ramdisk device
With bio-based IO path, sequential read/write, random read/write
IOPS boost : 28%, 24%, 21%, 16%
Latency improvement: 32%,
2012 Aug 02
9
[PATCH V5 0/4] Improve virtio-blk performance
Hi folks,
This version added REQ_FLUSH and REQ_FUA support as suggested by Christoph and
was rebased against the latest Linus tree.
Jens, could you please consider picking up the dependencies 1/4 and 2/4 in your
tree. Thanks!
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio test shows bio-based IO path gives the following performance improvement:
1) Ramdisk
2012 Jun 13
4
[PATCH RFC 0/2] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio test shows it gives 28%, 24%, 21%, and 16% IOPS boosts and 32%, 17%, 21%, and 16%
latency improvements for sequential read/write and random read/write respectively.
Asias He (2):
block: Add blk_bio_map_sg() helper
virtio-blk: Add bio-based IO path for virtio-blk
block/blk-merge.c | 63 ++++++++++++++
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...05:23 2018
write: IOPS=4443, BW=17.4MiB/s (18.2MB/s)(256MiB/14748msec)
bw ( KiB/s): min=16384, max=19184, per=99.92%, avg=17760.45, stdev=602.48, samples=29
iops : min= 4096, max= 4796, avg=4440.07, stdev=150.66, samples=29
cpu : usr=4.00%, sys=18.02%, ctx=131097, majf=0, minf=7
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwt: total=0,65536,0, short=0,0,0, dr...
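Every hit above matches on the minf field: fio's per-job count of minor page faults (reported alongside majf, ctx, usr, and sys on its "cpu" status line) or the minf column of mpstat. As a minimal sketch of pulling those counters out of a fio cpu line for comparison across runs (the helper name and regex are illustrative, not from any of the posts above):

```python
import re

def parse_fio_cpu_line(line):
    """Parse a fio 'cpu' status line into a dict of floats.

    Handles values like 'usr=4.00%', 'ctx=131097', 'minf=7';
    strips '%' suffixes and any trailing separators.
    """
    return {key: float(val.replace(",", "").rstrip("%"))
            for key, val in re.findall(r"(\w+)=\s*([\d.,]+%?)", line)}

# The 'cpu' line from the Gluster result above:
stats = parse_fio_cpu_line("cpu : usr=4.00%, sys=18.02%, ctx=131097, majf=0, minf=7")
```

Here stats["minf"] comes out as 7.0 and stats["majf"] as 0.0, matching the fault counts quoted in that result.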