The docs[1] say:

- The optional iothread attribute assigns the disk to an IOThread as defined by
  the range for the domain iothreads value. Multiple disks may be assigned to
  the same IOThread and are numbered from 1 to the domain iothreads value.
  Available for a disk device target configured to use "virtio" bus and "pci"
  or "ccw" address types. Since 1.2.8 (QEMU 2.1)

Does it mean that virtio-scsi disks do not use iothreads?

I'm experiencing horrible performance using nested VMs (up to 2 levels of
nesting) when accessing NFS storage running on one of the VMs. The NFS
server is using a scsi disk.

My theory is:
- Writing to the NFS server is very slow (too much nesting, slow disk)
- Not using iothreads (because we don't use virtio?)
- The guest CPU is blocked by slow I/O

Does this make sense?

[1] https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms

Nir
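For illustration, this is what the quoted docs describe for a virtio-blk disk:
the domain defines a pool of IOThreads, and each virtio disk can be pinned to
one of them. A minimal sketch, with made-up image path and thread count, and
with the other required domain elements omitted:

    <domain type='kvm'>
      <name>example</name>
      <!-- define two IOThreads for the whole domain -->
      <iothreads>2</iothreads>
      <devices>
        <disk type='file' device='disk'>
          <!-- iothread='1' pins this disk to the first IOThread;
               valid only for bus='virtio' with a pci or ccw address -->
          <driver name='qemu' type='qcow2' iothread='1'/>
          <source file='/var/lib/libvirt/images/data.qcow2'/>
          <target dev='vdb' bus='virtio'/>
        </disk>
      </devices>
    </domain>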
Sergio Lopez
2020-Nov-04 16:42 UTC
Re: Libvirt driver iothread property for virtio-scsi disks
On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> The docs[1] say:
>
> - The optional iothread attribute assigns the disk to an IOThread as defined by
>   the range for the domain iothreads value. Multiple disks may be assigned to
>   the same IOThread and are numbered from 1 to the domain iothreads value.
>   Available for a disk device target configured to use "virtio" bus and "pci"
>   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
>
> Does it mean that virtio-scsi disks do not use iothreads?

virtio-scsi disks can use iothreads, but they are configured in the
scsi controller, not in the disk itself. All disks attached to the
same controller will share the same iothread, but you can also attach
multiple controllers.

> I'm experiencing horrible performance using nested VMs (up to 2 levels of
> nesting) when accessing NFS storage running on one of the VMs. The NFS
> server is using a scsi disk.
>
> My theory is:
> - Writing to the NFS server is very slow (too much nesting, slow disk)
> - Not using iothreads (because we don't use virtio?)
> - The guest CPU is blocked by slow I/O

I would discard the lack of iothreads as the culprit. They do improve
performance, but without them the performance should be quite decent
anyway. Probably something else is causing the trouble.

I would do a step-by-step analysis, testing the NFS performance from
outside the VM first, and then working upwards from that.

Sergio.

> Does this make sense?
>
> [1] https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms
>
> Nir
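To make the controller-level configuration concrete, a minimal sketch
(hypothetical image paths, assuming the domain defines two IOThreads): two
virtio-scsi controllers, each pinned to its own IOThread, with one disk
attached to each via a drive address.

    <iothreads>2</iothreads>
    ...
    <controller type='scsi' index='0' model='virtio-scsi'>
      <!-- all disks on this controller share IOThread 1 -->
      <driver iothread='1'/>
    </controller>
    <controller type='scsi' index='1' model='virtio-scsi'>
      <!-- a second controller gets its own IOThread -->
      <driver iothread='2'/>
    </controller>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk0.qcow2'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk1.qcow2'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </disk>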
Daniel P. Berrangé
2020-Nov-04 16:54 UTC
Re: Libvirt driver iothread property for virtio-scsi disks
On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> The docs[1] say:
>
> - The optional iothread attribute assigns the disk to an IOThread as defined by
>   the range for the domain iothreads value. Multiple disks may be assigned to
>   the same IOThread and are numbered from 1 to the domain iothreads value.
>   Available for a disk device target configured to use "virtio" bus and "pci"
>   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
>
> Does it mean that virtio-scsi disks do not use iothreads?
>
> I'm experiencing horrible performance using nested VMs (up to 2 levels of
> nesting) when accessing NFS storage running on one of the VMs. The NFS
> server is using a scsi disk.

When you say 2 levels of nesting, do you definitely have KVM enabled at
all levels, or are you ending up using TCG emulation? The latter would
certainly explain terrible performance.

> My theory is:
> - Writing to the NFS server is very slow (too much nesting, slow disk)
> - Not using iothreads (because we don't use virtio?)
> - The guest CPU is blocked by slow I/O

Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
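A rough sketch of how to verify this, assuming an Intel host with the
kvm_intel module (on AMD the module is kvm_amd and the flag is svm):

    # on the bare-metal (L0) host: is nested virt enabled for the kvm module?
    cat /sys/module/kvm_intel/parameters/nested    # expect Y or 1

    # inside each guest level (L1, L2): does the vCPU expose virt extensions?
    egrep -c '(vmx|svm)' /proc/cpuinfo             # 0 means nested KVM is not possible

    # inside each guest level: is the kvm device actually available?
    ls -l /dev/kvm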
Nir Soffer
2020-Nov-04 18:00 UTC
Re: Libvirt driver iothread property for virtio-scsi disks
On Wed, Nov 4, 2020 at 6:42 PM Sergio Lopez <slp@redhat.com> wrote:
>
> On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> > The docs[1] say:
> >
> > - The optional iothread attribute assigns the disk to an IOThread as defined by
> >   the range for the domain iothreads value. Multiple disks may be assigned to
> >   the same IOThread and are numbered from 1 to the domain iothreads value.
> >   Available for a disk device target configured to use "virtio" bus and "pci"
> >   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
> >
> > Does it mean that virtio-scsi disks do not use iothreads?
>
> virtio-scsi disks can use iothreads, but they are configured in the
> scsi controller, not in the disk itself. All disks attached to the
> same controller will share the same iothread, but you can also attach
> multiple controllers.

Thanks, I found that we do use this in oVirt:

    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
      <alias name='ua-6f070142-1dbe-4be3-90c6-1a2274a2f8a0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>

However, the VMs in this setup are not created by oVirt, but manually
using libvirt. I'll make sure we configure the controller in the same
way.

> > I'm experiencing horrible performance using nested VMs (up to 2 levels of
> > nesting) when accessing NFS storage running on one of the VMs. The NFS
> > server is using a scsi disk.
> >
> > My theory is:
> > - Writing to the NFS server is very slow (too much nesting, slow disk)
> > - Not using iothreads (because we don't use virtio?)
> > - The guest CPU is blocked by slow I/O
>
> I would discard the lack of iothreads as the culprit. They do improve
> performance, but without them the performance should be quite decent
> anyway. Probably something else is causing the trouble.
>
> I would do a step-by-step analysis, testing the NFS performance from
> outside the VM first, and then working upwards from that.

Makes sense, thanks.
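As a starting point for that step-by-step test, a rough sketch (server name,
export path and fio parameters are made up): mount the export directly on the
bare-metal host, measure there, then repeat the same command at each nesting
level and compare the numbers.

    # mount the NFS export outside the VMs first
    mount -t nfs nfs-vm.example.com:/export /mnt/nfstest

    # sequential direct writes, bypassing the page cache
    fio --name=nfs-write --filename=/mnt/nfstest/testfile \
        --rw=write --bs=1M --size=1G --direct=1 \
        --ioengine=libaio --iodepth=16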
Nir Soffer
2020-Nov-04 18:00 UTC
Re: Libvirt driver iothread property for virtio-scsi disks
On Wed, Nov 4, 2020 at 6:54 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> > The docs[1] say:
> >
> > - The optional iothread attribute assigns the disk to an IOThread as defined by
> >   the range for the domain iothreads value. Multiple disks may be assigned to
> >   the same IOThread and are numbered from 1 to the domain iothreads value.
> >   Available for a disk device target configured to use "virtio" bus and "pci"
> >   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
> >
> > Does it mean that virtio-scsi disks do not use iothreads?
> >
> > I'm experiencing horrible performance using nested VMs (up to 2 levels of
> > nesting) when accessing NFS storage running on one of the VMs. The NFS
> > server is using a scsi disk.
>
> When you say 2 levels of nesting, do you definitely have KVM enabled at
> all levels, or are you ending up using TCG emulation? The latter would
> certainly explain terrible performance.

Good point, I'll check that out, thanks.

> > My theory is:
> > - Writing to the NFS server is very slow (too much nesting, slow disk)
> > - Not using iothreads (because we don't use virtio?)
> > - The guest CPU is blocked by slow I/O
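For a running domain, two quick checks, sketched with a hypothetical domain
name: the domain type in the XML shows what libvirt asked for, and the QEMU
monitor shows what the guest actually got.

    # 'kvm' vs 'qemu' in the root element shows whether TCG was requested
    virsh dumpxml l1-guest | grep '<domain type'

    # ask the running QEMU whether KVM is really in use
    virsh qemu-monitor-command l1-guest --hmp 'info kvm'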