Displaying 20 results from an estimated 115 matches for "iothreads".
2020 Nov 04
4
Libvirt driver iothread property for virtio-scsi disks
The docs[1] say:
- The optional iothread attribute assigns the disk to an IOThread as defined by
the range for the domain iothreads value. Multiple disks may be assigned to
the same IOThread and are numbered from 1 to the domain iothreads value.
Available for a disk device target configured to use "virtio" bus and "pci"
or "ccw" address types. Since 1.2.8 (QEMU 2.1)
Does it mean that virtio-...
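For reference, a minimal sketch of the XML that attribute lives in, assuming a
domain that defines two IOThreads (paths and counts are illustrative, not from
the thread):
<domain type='kvm'>
  ...
  <!-- two IOThreads for the domain; disks reference them as 1 and 2 -->
  <iothreads>2</iothreads>
  <devices>
    <disk type='file' device='disk'>
      <!-- pin this virtio-blk disk's I/O to IOThread 1 -->
      <driver name='qemu' type='qcow2' iothread='1'/>
      <source file='/var/lib/libvirt/images/disk0.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>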
2019 Jan 17
1
Re: virt-install and IOThreads
On Thu, Jan 17, 2019 at 4:35 PM Cole Robinson <crobinso@redhat.com> wrote:
>
> On 01/17/2019 05:58 AM, Igor Gnatenko wrote:
> > Hello,
> >
> > is there any way of specifying iothreads via the virt-install command?
> >
> > I've tried appending ",iothread='1'" but that didn't work.
> >
> > Thanks in advance!
> >
>
> No, unfortunately it's not wired up in virt-install. It's fairly easy to
> patch+extend the command...
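As a workaround sketch (not from the thread): since virt-install could not emit
the element at the time, it can be added by hand to the generated domain XML,
e.g. via virsh edit; the count is illustrative:
<domain type='kvm'>
  ...
  <iothreads>1</iothreads>
  ...
</domain>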
2020 Nov 04
0
Re: Libvirt driver iothread property for virtio-scsi disks
On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> The docs[1] say:
>
> - The optional iothread attribute assigns the disk to an IOThread as defined by
> the range for the domain iothreads value. Multiple disks may be assigned to
> the same IOThread and are numbered from 1 to the domain iothreads value.
> Available for a disk device target configured to use "virtio" bus and "pci"
> or "ccw" address types. Since 1.2.8 (QEMU 2.1)
>
>...
2019 Jan 17
2
virt-install and IOThreads
Hello,
is there any way of specifying iothreads via the virt-install command?
I've tried appending ",iothread='1'" but that didn't work.
Thanks in advance!
--
-Igor Gnatenko
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
...lator/mount/fuse.so[0x2b55b18d88ff]
/lib64/libpthread.so.0[0x3a67606367]
/lib64/libc.so.6(clone+0x6d)[0x3a66ad2f7d]
---------
Here's the client configuration:
volume client-stripe-1
type protocol/client
option transport-type ib-verbs
option remote-host gluster1
option remote-subvolume iothreads
end-volume
volume client-stripe-2
type protocol/client
option transport-type ib-verbs
option remote-host gluster2
option remote-subvolume iothreads
end-volume
volume client-stripe-3
type protocol/client
option transport-type ib-verbs
option remote-host gluster3
option remote-subvo...
2015 Nov 18
2
enabling virtio-scsi-data-plane in libvirt
Does somebody know how to enable virtio-scsi-data-plane in libvirt for a
specific domain?
I know that I need to replace "-device virtio-scsi-pci" with "-object
iothread,id=io1 -device virtio-scsi-pci,iothread=io1" in qemu, but how
can I do this in libvirt?
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
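For what it's worth, a hedged sketch of the libvirt XML equivalent of that qemu
command line, assuming a domain with one IOThread (support for the iothread
attribute on the controller depends on the libvirt/qemu versions):
<domain type='kvm'>
  ...
  <iothreads>1</iothreads>
  <devices>
    <!-- virtio-scsi controller serviced by IOThread 1 (data plane) -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
    </controller>
  </devices>
</domain>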
2015 Dec 14
1
[RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
...ke more queues, and
> we need to leverage the following things on the host side:
>
> - host storage top performance; generally it is reached with more
> than one job with libaio (suppose it is N, so basically we can use N
> iothreads per device in qemu to try to get top performance)
>
> - the iothreads' load (if the iothreads are at full load, increasing
> queues doesn't help at all)
>
> In my test, I only use the current per-dev iothread (x-dataplane)
> in qemu to handle 2 vqs' notification and process all I/O from
> the 2 vqs, and it looks like it can improve IOPS by ~30%.
>...
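As context for the multi-queue discussion: libvirt later exposed a per-disk
queue count on the driver element; a sketch, with an illustrative queue count
and a hypothetical disk path:
<disk type='file' device='disk'>
  <!-- N virtqueues for this virtio-blk device, consumed by guest blk-mq -->
  <driver name='qemu' type='raw' queues='4'/>
  <source file='/var/lib/libvirt/images/data.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>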
2019 Jan 17
0
Re: virt-install and IOThreads
On 01/17/2019 05:58 AM, Igor Gnatenko wrote:
> Hello,
>
> is there any way of specifying iothreads via the virt-install command?
>
> I've tried appending ",iothread='1'" but that didn't work.
>
> Thanks in advance!
>
No, unfortunately it's not wired up in virt-install. It's fairly easy to
patch+extend the command line if you're interested, thi...
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
..." nodeset="1"/>
...
</numatune>
This will also ensure the guest memory pinning. But wait, there is more.
In your later e-mails you mention slow disk I/O. This might be caused by
various factors, but the most obvious one in this case is the qemu I/O
loop, I'd say. Without iothreads, qemu has only one I/O loop, and thus if
your guest issues writes from all 32 cores at once, this loop is unable
to keep up (performance-wise), hence the performance drop. You
can try enabling iothreads:
https://libvirt.org/formatdomain.html#elementsIOThreadsAllocation
This is a qemu featur...
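A minimal sketch of that advice, assuming one IOThread pinned to host CPUs
local to the guest's NUMA node (CPU numbers and the disk source are
illustrative):
<domain type='kvm'>
  ...
  <iothreads>1</iothreads>
  <cputune>
    <!-- keep the IOThread on host CPUs next to the memory node -->
    <iothreadpin iothread='1' cpuset='8-9'/>
  </cputune>
  <devices>
    <disk type='block' device='disk'>
      <!-- route this disk's I/O through the dedicated IOThread -->
      <driver name='qemu' type='raw' iothread='1'/>
      <source dev='/dev/nvme0n1'/>
      <target dev='vde' bus='virtio'/>
    </disk>
  </devices>
</domain>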
2011 Feb 24
0
No subject
...r1/stripe
end-volume
volume posix-distribute
type storage/posix
option directory /export/gluster1/distribute
end-volume
volume locks
type features/locks
subvolumes posix-stripe
end-volume
volume locks-dist
type features/locks
subvolumes posix-distribute
end-volume
volume iothreads
type performance/io-threads
option thread-count 16
subvolumes locks
end-volume
volume iothreads-dist
type performance/io-threads
option thread-count 16
subvolumes locks-dist
end-volume
volume server
type protocol/server
option transport-type ib-verbs
option auth.addr.iothreads.a...
2014 Jun 17
2
[RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
Il 17/06/2014 18:00, Ming Lei ha scritto:
>> > If you want to do queue steering based on the guest VCPU number, the number
>> > of queues must be equal to the number of VCPUs, shouldn't it?
>> >
>> > I tried using a divisor of the number of VCPUs, but couldn't get the block
>> > layer to deliver interrupts to the right VCPU.
> For blk-mq's
2015 Nov 30
2
Re: enabling virtio-scsi-data-plane in libvirt
2015-11-19 16:09 GMT+03:00 John Ferlan <jferlan@redhat.com>:
> Check out virsh iothread{info|pin|add|del} and of course the
> corresponding virDomain{Add|Pin|Del}IOThread and virDomainGetIOThreadInfo.
Yes, thanks! Will the libvirt devs integrate this ability into the
domain format in the near future? As I understand it, all stable qemu
features are supported by libvirt. And data plane for virtio-blk is
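For reference, the domain format does have a static counterpart to those virsh
calls; a sketch with illustrative ids:
<domain type='kvm'>
  ...
  <iothreads>2</iothreads>
  <!-- optional explicit ids, matching what virsh iothreadadd/iothreadpin use -->
  <iothreadids>
    <iothread id='1'/>
    <iothread id='2'/>
  </iothreadids>
  ...
</domain>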
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
...lso pauses in writes, but it finishes; speed is reduced, though. On
a 1-NUMA node, with the same test, I can see steady writes from the very
beginning to the very end at roughly the same speed.
Maybe it could be related to the fact that NVMe is a PCI device that is linked
to one NUMA node only?
As for iothreads, I have only 1 disk (the vde) that is exposed to high I/O
load, so I believe more I/O threads are not applicable here. If I understand
correctly, I cannot assign more iothreads to a single device. And it does not
seem to be iothread-related, as the same scenario in a 1-NUMA configuration works
OK (I mean...
2019 Jul 11
2
[PATCH] drm/virtio: kick vq outside of the vq lock
Replace virtqueue_kick by virtqueue_kick_prepare, which requires
serialization, and virtqueue_notify, which does not. Repurpose the
return values to indicate whether the vq should be notified.
This fixes a lock contention with the qemu host. When the guest calls
virtqueue_notify, the qemu vcpu thread exits the guest and waits
for the qemu iothread to perform the MMIO. If the qemu iothread is
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
...beginning to the very end at roughly the same speed.
>
> Maybe it could be related to the fact that NVMe is a PCI device that is linked
> to one NUMA node only?
Could be. I don't know qemu internals well enough to know if it's capable of
doing zero-copy disk writes.
>
>
> As for iothreads, I have only 1 disk (the vde) that is exposed to high I/O
> load, so I believe more I/O threads are not applicable here. If I understand
> correctly, I cannot assign more iothreads to a single device. And it does not
> seem to be iothread-related, as the same scenario in a 1-NUMA configuration w...