Displaying 20 results from an estimated 10000 matches similar to: "Prevent guest to host CPU overloads"
2015 Dec 14
1
[RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On 18/06/2014 06:04, Ming Lei wrote:
> For virtio-blk, I don't think it is always better to take more queues, and
> we need to leverage the things below on the host side:
>
> - host storage top performance: generally it is reached with more
> than 1 job with libaio (suppose that number is N, so basically we can use N
> iothreads per device in qemu to try to get top performance)
>
> -
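To illustrate the "N iothreads per device" point above, a minimal QEMU invocation might look like the sketch below; the image path and object ids are made up, and the exact options depend on the QEMU version in use.

  # one dedicated iothread driving the virtio-blk device, with the raw
  # image opened with native AIO and without the host page cache
  qemu-system-x86_64 \
      -object iothread,id=iothread0 \
      -drive file=/path/to/disk.img,format=raw,if=none,id=drive0,cache=none,aio=native \
      -device virtio-blk-pci,drive=drive0,iothread=iothread0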
2014 Jun 26
0
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
On 2014-06-25 20:08, Ming Lei wrote:
> Hi,
>
> These patches try to support multiple virtual queues (multi-vq) in one
> virtio-blk device, and map each virtual queue (vq) to a blk-mq
> hardware queue.
>
> With this approach, both scalability and performance of the virtio-blk
> device can be improved.
>
> For verifying the improvement, I implemented virtio-blk multi-vq
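On the libvirt side, a disk can request several queues through the queues attribute of the driver element; a sketch, assuming a libvirt/QEMU combination new enough to expose virtio-blk multi-queue, with an illustrative image path and queue count:

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' queues='4'/>
    <source file='/var/lib/libvirt/images/guest.img'/>
    <target dev='vdb' bus='virtio'/>
  </disk>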
2014 Jun 18
0
[RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On Wed, Jun 18, 2014 at 12:34 AM, Paolo Bonzini <pbonzini at redhat.com> wrote:
> On 17/06/2014 18:00, Ming Lei wrote:
>
>>> > If you want to do queue steering based on the guest VCPU number, the
>>> > number of queues must be equal to the number of VCPUs, shouldn't it?
>>> >
>>> > I tried using a divisor of the
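The queue-steering point above amounts to giving the device as many queues as the guest has VCPUs. As a sketch only (the num-queues property appeared in QEMU releases later than this thread, and the vCPU count is illustrative):

  # 4 vCPUs and a matching number of virtio-blk queues, so each vCPU
  # can submit to its own queue
  qemu-system-x86_64 -smp 4 \
      -drive file=/path/to/disk.img,format=raw,if=none,id=drive0 \
      -device virtio-blk-pci,drive=drive0,num-queues=4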
2014 Jun 13
6
[RFC PATCH 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both the scalability and performance problems of the
virtio-blk device are addressed.
For verifying the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, and both handling host notification
from each vq
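The cover letter does not show the benchmark itself; a typical way to generate the multi-job libaio load discussed in this thread, run from inside the guest against the test disk (device name and job count are illustrative, not the author's exact setup):

  fio --name=randread --filename=/dev/vdb --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
      --runtime=60 --time_based --group_reporting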
2014 Jun 26
7
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both scalability and performance of the virtio-blk
device can be improved.
For verifying the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, and both handling host notification
from each vq and
2020 Jun 28
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> These patches are not ready to be merged because I was unable to measure a
> performance improvement. I'm publishing them so they are archived in case
> someone picks up this work again in the future.
>
> The goal of these patches is to allocate virtqueues and driver state from the
> device's NUMA node for optimal memory
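The patches themselves do the allocation inside the guest driver; purely as a host-side illustration of the NUMA affinity being discussed, the device's node can be read from sysfs and QEMU can be pinned to it (the PCI address and node number are made up):

  # NUMA node the device is attached to; -1 means no affinity
  cat /sys/bus/pci/devices/0000:00:04.0/numa_node

  # keep the guest's CPUs and memory on that node
  numactl --cpunodebind=1 --membind=1 qemu-system-x86_64 ...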
2020 Jun 29
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote:
> On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
> >
> > On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> > > These patches are not ready to be merged because I was unable to measure a
> > > performance improvement. I'm publishing them so they are archived in case
> > > someone
2020 May 13
0
Running libvirtd inside chroot (mock to be precise)
Hi,
I was wondering whether it's possible to run libvirtd inside a chroot
environment.
The assumption is that only one instance of libvirtd would be running on
the machine at a time, but still inside the chroot.
Currently in my chroot env I have:
- /dev/kvm added with mknod
- /dev/vhost-net added with mknod
- mounted:
  - /dev/net
  - /dev/shm
  - /run/dbus
When I run libvirtd in
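For reference, preparing such a chroot might look roughly like the sketch below, assuming the usual misc-device minor numbers (10,232 for kvm and 10,238 for vhost-net); $CHROOT is a placeholder for the chroot path:

  # character devices libvirtd/QEMU need
  mknod -m 660 "$CHROOT/dev/kvm"       c 10 232
  mknod -m 660 "$CHROOT/dev/vhost-net" c 10 238

  # bind-mount the host directories listed above into the chroot
  for d in /dev/net /dev/shm /run/dbus; do
      mkdir -p "$CHROOT$d"
      mount --bind "$d" "$CHROOT$d"
  done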
2015 Aug 30
0
CPU feature 'svm' not reported inside guest without 'host-passthrough'
Hi,
I am confused by the behaviour of the CPU description in my VM. I have a host with an AMD CPU with
the SVM feature, and I want to try nested virtualization in a Fedora 22 guest. The host is Fedora 22 as well.
libvirtd (libvirt) 1.2.13.1
kernel 4.1.6-200.fc22.x86_64
1.
I tried 'custom' mode with model qemu64 and required feature 'SVM', but '/proc/cpuinfo' in the guest
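For reference, the two domain-XML variants being compared probably look roughly like this (note that libvirt's CPU feature name is lowercase 'svm'); this is a sketch, not the poster's exact configuration:

  <!-- custom model with svm explicitly required -->
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>qemu64</model>
    <feature policy='require' name='svm'/>
  </cpu>

  <!-- or simply pass the host CPU through -->
  <cpu mode='host-passthrough'/>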
2017 Oct 25
0
Re: Need to increase the rx and tx buffer size of my interface
Hi Michal,
An update to what I have already said: when I try adding <driver
name='qemu' txmode='iothread' ioeventfd='on' event_idx='off' queues='1'
rx_queue_size='512' tx_queue_size='512'>, although it showed me the error as
mentioned, when I checked the XML again I saw that <driver name='qemu'
txmode='iothread'
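A sketch of what a macvtap (direct) interface with a larger receive ring can look like, assuming the virtio model and a recent enough libvirt/QEMU; the source device name is illustrative:

  <interface type='direct'>
    <source dev='eth0' mode='bridge'/>
    <model type='virtio'/>
    <!-- rx_queue_size requires the virtio model and a power-of-two value;
         tx_queue_size is, as far as I know, only honored for vhost-user
         backends, which may explain the error mentioned above -->
    <driver name='vhost' queues='1' rx_queue_size='512'/>
  </interface>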
2020 Jun 29
2
[RFC 0/3] virtio: NUMA-aware memory allocation
On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
>
> On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> > These patches are not ready to be merged because I was unable to measure a
> > performance improvement. I'm publishing them so they are archived in case
> > someone picks up this work again in the future.
> >
> > The goal of these patches is to
2014 Jun 20
3
[PATCH v1 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both scalability and performance of the virtio-blk
device can be improved.
For verifying the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, and both handling host notification
from each vq and
2020 Nov 04
0
Re: Libvirt driver iothread property for virtio-scsi disks
On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> The docs[1] say:
>
> - The optional iothread attribute assigns the disk to an IOThread as defined by
> the range for the domain iothreads value. Multiple disks may be assigned to
> the same IOThread and are numbered from 1 to the domain iothreads value.
> Available for a disk device target configured to use
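A minimal sketch of how that documentation maps to domain XML, as I read it: for a virtio-blk disk the iothread attribute sits on the disk's driver element, while for virtio-scsi it is assigned on the controller instead (paths and thread ids are illustrative):

  <iothreads>2</iothreads>

  <!-- virtio-blk disk pinned to IOThread 1 -->
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' iothread='1'/>
    <source file='/var/lib/libvirt/images/data.qcow2'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

  <!-- for virtio-scsi, the IOThread goes on the controller -->
  <controller type='scsi' model='virtio-scsi'>
    <driver iothread='2'/>
  </controller>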
2017 Oct 26
0
Re: Need to increase the rx and tx buffer size of my interface
Hi Yalan,
Thank you for your response. I do not have the following packages installed:
- vhost backend driver
- qemu-kvm-rhev package
Are these packages available for free? How can I install them?
In my KVM VM, I must have an IP address on the interfaces whose buffers I am
trying to increase. That is the reason I was using macvtap (a direct
type interface). Is it possible to have my interfaces