Displaying 20 results from an estimated 40 matches for "randread".
2019 Sep 03
2
[PATCH v3 00/13] virtio-fs: shared file system for virtual machines
...s not cache anything in guest
> > and provides strong coherence. Other modes which provide less strong
> > coherence and hence are faster are yet to be benchmarked.
> >
> > - Three fio ioengines psync, libaio and mmap have been used.
> >
> > - I/O workloads of randread, randwrite, seqread and seqwrite have been run.
> >
> > - Each file size is 2G. Block size 4K. iodepth=16
> >
> > - "multi" means same operation was done with 4 jobs and each job is
> > operating on a file of size 2G.
> >
> > - Some results are &...
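A minimal fio job file matching the setup described above might look like the following; the directory is a hypothetical virtio-fs mount point, and libaio/randread stand in for the other engine/workload combinations listed:
[global]
# /mnt/virtiofs is an assumed mount point, not taken from the thread
directory=/mnt/virtiofs
ioengine=libaio
rw=randread
bs=4k
size=2G
iodepth=16

[multi]
numjobs=4
For the psync and mmap engines only the ioengine= line changes (iodepth has no real effect for synchronous engines), and the single-job runs simply drop the numjobs=4 line.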
2018 Oct 15
0
[Qemu-devel] virtio-console downgrade the virtio-pci-blk performance
...tio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> > > >
> > > > If I add "-device
> > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 ", the virtio
> > > > disk's 4k IOPS (randread/randwrite) drop from 60k to 40k.
> > > >
> > > > In the VM, if I rmmod virtio-console, the performance goes back to normal.
> > > >
> > > > Any idea about this issue?
> > > >
> > > > I don't know if this is a qemu issue...
2014 May 30
4
[PATCH] block: virtio_blk: don't hold spin lock during world switch
...u-kvm), the patch can increase I/O
performance a lot with VIRTIO_RING_F_EVENT_IDX enabled:
- without the patch: 14K IOPS
- with the patch: 34K IOPS
fio script:
[global]
direct=1
bsrange=4k-4k
timeout=10
numjobs=4
ioengine=libaio
iodepth=64
filename=/dev/vdc
group_reporting=1
[f1]
rw=randread
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst at redhat.com>
Cc: virtualization at lists.linux-foundation.org
Signed-off-by: Ming Lei <ming.lei at canonical.com>
---
drivers/block/virtio_blk.c | 9 ++++++---
1 file changed, 6 insertions(...
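Assuming the job file above is saved as randread.fio (the name is arbitrary), it is run inside the guest as:
# randread.fio is just the name the script above was saved under
fio randread.fio
filename=/dev/vdc means the job reads the third virtio disk directly, bypassing the guest page cache (direct=1).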
2018 Nov 02
2
[PATCH 0/1] vhost: add vhost_blk driver
On Fri, Nov 02, 2018 at 06:21:22PM +0000, Vitaly Mayatskikh wrote:
> vhost_blk is a host-side kernel mode accelerator for virtio-blk. The
> driver allows a VM to reach near bare-metal disk performance. See IOPS
> numbers below (fio --rw=randread --bs=4k).
>
> This implementation uses the kiocb interface. It is slightly slower than
> going directly through bio, but is simpler and also works with disk
> images placed on a file system.
>
> # fio num-jobs
> # A: bare metal over block
> # B: bare metal over file
> # C:...
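Only --rw=randread and --bs=4k are quoted above; a fuller command line in the same spirit might be the following, where the device path, ioengine, iodepth and direct flag are assumptions and numjobs is the value being swept in the num-jobs column of the (truncated) table:
# everything except --rw and --bs is assumed, not taken from the thread
fio --name=vhost-blk-test --filename=/dev/vdb --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=64 --numjobs=4 --direct=1 --group_reporting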
2018 Nov 05
2
[PATCH 0/1] vhost: add vhost_blk driver
On 2018/11/3 2:21, Vitaly Mayatskikh wrote:
> vhost_blk is a host-side kernel mode accelerator for virtio-blk. The
> driver allows a VM to reach near bare-metal disk performance. See IOPS
> numbers below (fio --rw=randread --bs=4k).
>
> This implementation uses the kiocb interface. It is slightly slower than
> going directly through bio, but is simpler and also works with disk
> images placed on a file system.
>
> # fio num-jobs
> # A: bare metal over block
> # B: bare metal over file
> # C: vi...
2020 Jun 29
2
[RFC 0/3] virtio: NUMA-aware memory allocation
...node 1) so that memory is in the wrong NUMA node for the virtio-blk-pci device.
> > Applying these patches fixes memory placement so that virtqueues and driver
> > state is allocated in vNUMA node 1 where the virtio-blk-pci device is located.
> >
> > The fio 4KB randread benchmark results do not show a significant improvement:
> >
> > Name IOPS Error
> > virtio-blk 42373.79 ± 0.54%
> > virtio-blk-numa 42517.07 ± 0.79%
>
>
> I remember I did something similar in vhost by using page_to_nid() fo...
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...lti-vq feature, 'num_queues=N' needs to be added to
>> '-device virtio-blk-pci ...' on the qemu command line, and it is suggested to pass
>> 'vectors=N+1' to keep one MSI irq vector per vq, and the feature
>> depends on x-data-plane.
>>
>> Fio(libaio, randread, iodepth=64, bs=4K, jobs=N) is run inside VM to
>> verify the improvement.
>>
>> I just created a small quad-core VM and ran fio inside the VM, and
>> num_queues of the virtio-blk device is set to 2, but it looks like the
>> improvement is still obvious. The host is 2 sockets, 8...
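Putting the quoted options together for N=4, the device definition on the (patched) qemu command line would look roughly like this; the drive id is a placeholder and the property spellings simply follow the excerpt:
# drive0 is a placeholder; num_queues/x-data-plane assume the out-of-tree qemu branch used in the thread
-device virtio-blk-pci,drive=drive0,num_queues=4,vectors=5,x-data-plane=on
fio is then run in the guest with the quoted parameters (ioengine=libaio, rw=randread, bs=4k, iodepth=64, numjobs=N) against the resulting virtio disk.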
2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
...e() code run on vCPU 0 in vNUMA node 0 (host NUMA
node 1) so that memory is in the wrong NUMA node for the virtio-blk-pci device.
Applying these patches fixes memory placement so that virtqueues and driver
state is allocated in vNUMA node 1 where the virtio-blk-pci device is located.
The fio 4KB randread benchmark results do not show a significant improvement:
Name IOPS Error
virtio-blk 42373.79 ± 0.54%
virtio-blk-numa 42517.07 ± 0.79%
Stefan Hajnoczi (3):
virtio-pci: use NUMA-aware memory allocation in probe
virtio_ring: use NUMA-aware memory allocation...
2014 Jun 26
7
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
...it #v2.0.0-virtblk-mq.1
For enabling the multi-vq feature, 'num_queues=N' needs to be added to
'-device virtio-blk-pci ...' on the qemu command line, and it is suggested to pass
'vectors=N+1' to keep one MSI irq vector per vq, and the feature
depends on x-data-plane.
Fio(libaio, randread, iodepth=64, bs=4K, jobs=N) is run inside VM to
verify the improvement.
I just created a small quad-core VM and ran fio inside the VM, and
num_queues of the virtio-blk device is set to 2, but it looks like the
improvement is still obvious.
1), about scalability
- without multi-vq feature
-- jobs=2, though...
2020 Jun 28
0
[RFC 0/3] virtio: NUMA-aware memory allocation
...MA node 0 (host NUMA
> node 1) so that memory is in the wrong NUMA node for the virtio-blk-pci device.
> Applying these patches fixes memory placement so that virtqueues and driver
> state is allocated in vNUMA node 1 where the virtio-blk-pci device is located.
>
> The fio 4KB randread benchmark results do not show a significant improvement:
>
> Name IOPS Error
> virtio-blk 42373.79 ± 0.54%
> virtio-blk-numa 42517.07 ± 0.79%
I remember I did something similar in vhost by using page_to_nid() for
descriptor ring. And I get little...
2014 Mar 23
0
for Chris Mason ( iowatcher graphs)
...en the virtual machine writes to
disk, and run blktrace from dom0 like this: blktrace -w 60 -d
/dev/disk/vbd/21-920 -o - > test.trace
/dev/disk/vbd/21-920 is a software raid that contains 2 LV volumes; each
LV volume is created on a big SRP-attached disk.
Inside the VM I try to do some work via fio:
[global]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=4
invalidate=1
direct=1
[bgwriter]
rw=randwrite
iodepth=32
[queryA]
iodepth=1
ioengine=mmap
direct=0
thinktime=3
[queryB]
iodepth=1
ioengine=mmap
direct=0
thinktime=5
[bgupdater]
rw=randrw
iodepth=16
thinktime=40
size=128m
After that i try to ge...
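The trace collected above is typically turned into graphs with iowatcher; a plausible invocation (the output file name is arbitrary) is:
# trace.svg is an arbitrary output name
iowatcher -t test.trace -o trace.svg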
2014 May 30
0
[PATCH] block: virtio_blk: don't hold spin lock during world switch
...enabled:
> - without the patch: 14K IOPS
> - with the patch: 34K IOPS
>
> fio script:
> [global]
> direct=1
> bsrange=4k-4k
> timeout=10
> numjobs=4
> ioengine=libaio
> iodepth=64
>
> filename=/dev/vdc
> group_reporting=1
>
> [f1]
> rw=randread
>
> Cc: Rusty Russell <rusty at rustcorp.com.au>
> Cc: "Michael S. Tsirkin" <mst at redhat.com>
> Cc: virtualization at lists.linux-foundation.org
> Signed-off-by: Ming Lei <ming.lei at canonical.com>
Acked-by: Rusty Russell <rusty at rustcorp.com.au&g...
2014 May 30
0
[PATCH] block: virtio_blk: don't hold spin lock during world switch
...bled:
> - without the patch: 14K IOPS
> - with the patch: 34K IOPS
>
> fio script:
> [global]
> direct=1
> bsrange=4k-4k
> timeout=10
> numjobs=4
> ioengine=libaio
> iodepth=64
>
> filename=/dev/vdc
> group_reporting=1
>
> [f1]
> rw=randread
>
> Cc: Rusty Russell <rusty at rustcorp.com.au>
> Cc: "Michael S. Tsirkin" <mst at redhat.com>
> Cc: virtualization at lists.linux-foundation.org
> Signed-off-by: Ming Lei <ming.lei at canonical.com>
Acked-by: Michael S. Tsirkin <mst at redhat.com>...