search for: swiderski

Displaying 8 results from an estimated 15 matches for "swiderski".

2012 Jun 26
6
[PATCH] Add a page cache-backed balloon device driver.
...Reclaim in the guest is therefore automatic and implicit (via the regular page reclaim). This means that inflating the balloon is similar to the existing balloon mechanism, but the deflate is different--it re-uses existing Linux kernel functionality to automatically reclaim. Signed-off-by: Frank Swiderski <fes at google.com> --- drivers/virtio/Kconfig | 13 + drivers/virtio/Makefile | 1 + drivers/virtio/virtio_fileballoon.c | 636 +++++++++++++++++++++++++++++++++++ include/linux/virtio_balloon.h | 9 + include/linux/virtio_ids.h | 1 + 5 fi...
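The mechanism is easy to see in miniature: pages backed by a regular file live in the page cache, so the kernel can write them back and reclaim them on its own, and a later access simply faults them back in. A minimal userspace sketch of that idea (an illustration only, not the driver's code; the file path and 64 MiB size are arbitrary choices):

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 64UL << 20;            /* 64 MiB "balloon" */
            char path[] = "/tmp/balloonXXXXXX";
            int fd = mkstemp(path);             /* file-backed: pages sit in the page cache */
            if (fd < 0)
                    return 1;
            unlink(path);
            if (ftruncate(fd, (off_t)len) < 0)
                    return 1;

            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return 1;

            /* "Inflate": touch every page so it becomes resident. */
            for (size_t off = 0; off < len; off += 4096)
                    p[off] = 1;

            /* No explicit deflate step: under memory pressure the kernel may
             * write these pages back and reclaim them, and touching them
             * again faults them back in -- the automatic, implicit reclaim
             * the cover letter describes. */
            pause();
            return 0;
    }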
2014 Mar 14
4
[PATCH] virtio-blk: Initialize blkqueue depth from virtqueue size
virtio-blk set the default queue depth to 64 requests, which was insufficient for high-IOPS devices. Instead set the blk-queue depth to the device's virtqueue depth divided by two (each I/O requires at least two VQ entries). Signed-off-by: Venkatesh Srinivas <venkateshs at google.com> --- drivers/block/virtio_blk.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git
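The arithmetic behind the rule, as a hedged sketch (the helper name is mine, not the patch's): a request occupies at least two virtqueue entries, so a block-layer depth of half the ring size is the most the ring can actually sustain without overcommitting it.

    /* Illustrative only: each I/O needs >= 2 VQ entries, so half the
     * ring size is the deepest queue the ring can service. */
    static unsigned int blk_depth_from_vring(unsigned int vring_entries)
    {
            return vring_entries / 2;   /* e.g. 128-entry ring -> depth 64 */
    }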
2014 Mar 15
0
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
...<tytso at mit.edu> Signed-off-by: Venkatesh Srinivas <venkateshs at google.com> Cc: Rusty Russell <rusty at rustcorp.com.au> Cc: "Michael S. Tsirkin" <mst at redhat.com> Cc: virtio-dev at lists.oasis-open.org Cc: virtualization at lists.linux-foundation.org Cc: Frank Swiderski <fes at google.com> --- This is a combination of my patch and Venkatesh's patch. I agree that setting the default automatically is better than requiring the user to set the value by hand. drivers/block/virtio_blk.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff...
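A hedged sketch of the "automatic default" pattern being endorsed here (identifiers are illustrative; virtqueue_get_vring_size() is the stock virtio helper for the ring size): keep a module parameter for manual tuning, but treat 0 as "derive it from the device".

    /* Sketch, not the patch: 0 (the default) means "size from the virtqueue". */
    static unsigned int virtblk_queue_depth;
    module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);

    /* In probe, once the virtqueue exists: */
    if (!virtblk_queue_depth)
            virtblk_queue_depth = virtqueue_get_vring_size(vq) / 2;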
2014 Mar 15
1
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
...>Signed-off-by: Venkatesh Srinivas <venkateshs at google.com> >Cc: Rusty Russell <rusty at rustcorp.com.au> >Cc: "Michael S. Tsirkin" <mst at redhat.com> >Cc: virtio-dev at lists.oasis-open.org >Cc: virtualization at lists.linux-foundation.org >Cc: Frank Swiderski <fes at google.com> >--- > >This is a combination of my patch and Venkatesh's patch. I agree that >setting the default automatically is better than requiring the user to >set the value by hand. > > drivers/block/virtio_blk.c | 10 ++++++++-- > 1 file changed, 8 inse...
2014 Mar 19
2
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
...d-by: "Theodore Ts'o" <tytso at mit.edu> Based-on-the-true-story-of: Venkatesh Srinivas <venkateshs at google.com> Cc: "Michael S. Tsirkin" <mst at redhat.com> Cc: virtio-dev at lists.oasis-open.org Cc: virtualization at lists.linux-foundation.org Cc: Frank Swiderski <fes at google.com> Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index a2db9ed288f2..c101bbc72095 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -491,10 +491,11 @@ static struct...
2014 Mar 17
2
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
Theodore Ts'o <tytso at mit.edu> writes: > The current virtio block sets a queue depth of 64, which is > insufficient for very fast devices. It has been demonstrated that > with a high IOPS device, using a queue depth of 256 can double the > IOPS which can be sustained. > > As suggested by Venkatesh Srinivas, set the queue depth by default to > be one half the
2012 Jul 25
0
No subject
...ge will be mapped in, allowing automatic (and fast, > >> + * compared to requiring a host notification via a virtio queue to get memory > >> + * back) reclaim. > >> + * > >> + * Copyright 2008 Rusty Russell IBM Corporation > >> + * Copyright 2011 Frank Swiderski Google Inc > >> + * > >> + * This program is free software; you can redistribute it and/or modify > >> + * it under the terms of the GNU General Public License as published by > >> + * the Free Software Foundation; either version 2 of the License, or > >...
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai, I wrote vhost-nvme patches on top of Christoph's NVMe target. vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe driver. But the tests I have done didn't show competitive performance compared to virtio-blk/virtio-scsi. The bottleneck is in MMIO. Your nvme vendor extension patches greatly reduce the number of MMIO writes. So I'd like to push it
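For context on why MMIO is the bottleneck: every doorbell write is a trapped MMIO access, i.e. a VM exit. A hedged sketch of the event-index test a shadow-doorbell scheme can use to skip most of those writes (this mirrors the mechanism later standardized as NVMe 1.3 shadow doorbells; the names are assumptions, not the patches themselves): the guest records the new tail in shared memory and rings the real MMIO doorbell only when the device's advertised event index shows it needs the notification.

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: true when event_idx lies in (old_tail, new_tail] modulo
     * 2^16, i.e. the device asked to be notified for this update. */
    static bool need_mmio_doorbell(uint16_t event_idx, uint16_t new_tail,
                                   uint16_t old_tail)
    {
            return (uint16_t)(new_tail - event_idx - 1) <
                   (uint16_t)(new_tail - old_tail);
    }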