Fangge Jin <fjin at redhat.com> writes:
> I can share some test results with you:
> 1. If no memtune->hard_limit is set when starting a VM, the default
> memlock hard limit is 64MB
> 2. If memtune->hard_limit is set when starting a VM, the memlock hard
> limit will be set to the value of memtune->hard_limit
> 3. If memtune->hard_limit is updated at run-time, the memlock hard
> limit won't be changed accordingly
>
> And some additional knowledge:
> 1. memlock hard limit can be shown by "prlimit -p <pid-of-qemu> -l"
> 2. The default value of memlock hard limit can be changed by setting
> LimitMEMLOCK in /usr/lib/systemd/system/virtqemud.service
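The two points above can be sketched as shell commands. (The drop-in path and the `infinity` value are conventional systemd practice, not taken from this thread; editing the packaged unit file directly, as the mail suggests, also works but is overwritten on package updates.)

```shell
# Show the soft and hard memlock limits of a process. Substitute the
# QEMU PID for $$ to inspect a running VM, e.g. -p "$(pidof qemu-kvm)".
prlimit --memlock -p $$

# Instead of editing /usr/lib/systemd/system/virtqemud.service in place,
# the default can be raised with a drop-in (requires root):
#   /etc/systemd/system/virtqemud.service.d/memlock.conf
#     [Service]
#     LimitMEMLOCK=infinity
# then: systemctl daemon-reload && systemctl restart virtqemud
```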
Ah, that explains it, thank you. And since in the default case
the systemd limit is not reported in <memtune> of a running VM, I assume
libvirt takes it as "not set" and sets the higher limit when setting up
a zero-copy migration. Good.
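For reference, the hard limit discussed here lives in the domain XML like this (the 16 GiB value is purely illustrative):

```xml
<domain type='kvm'>
  ...
  <memtune>
    <!-- upper bound on all memory the QEMU process may consume,
         including locked memory and emulation overhead -->
    <hard_limit unit='KiB'>16777216</hard_limit>
  </memtune>
  ...
</domain>
```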
Regards,
Milan
> BR,
> Fangge Jin
>
> On Wed, Aug 17, 2022 at 19:25 Milan Zamazal <mzamazal at redhat.com> wrote:
>
>> Peter Krempa <pkrempa at redhat.com> writes:
>>
>> > On Wed, Aug 17, 2022 at 10:56:54 +0200, Milan Zamazal wrote:
>> >> Hi,
>> >>
>> >
>> >> do I read libvirt sources right that when <memtune> is not used in the
>> >> libvirt domain then libvirt takes proper care about setting memory
>> >> locking limits when zero-copy is requested for a migration?
>> >
>> > Well yes, for a definition of "proper". In this instance qemu can lock
>> > up to the guest-visible memory size of memory for the migration, thus we
>> > set the lockable size to the guest memory size. This is a simple upper
>> > bound which is supposed to work in all scenarios. Qemu is also unlikely
>> > to ever use up all the allowed locking.
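For context, the migration discussed here is requested on the virsh side roughly as follows (domain name and destination URI are placeholders; zero-copy send requires parallel migration):

```shell
# Zero-copy requires parallel migration; per the explanation above,
# libvirt raises the memlock limit to the guest memory size for it.
virsh migrate --live --parallel --zerocopy myguest qemu+ssh://dst/system
```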
>>
>> Great, thank you for confirmation.
>>
>> >> I also wonder whether there are any other situations where memory limits
>> >> could be set by libvirt or QEMU automatically rather than having no
>> >> memory limits? We had oVirt bugs in the past where certain VMs with
>> >> VFIO devices couldn't be started due to extra requirements on the amount
>> >> of locked memory and adding <hard_limit> to the domain apparently
>> >> helped.
>> >
>> > <hard_limit> is not only an amount of memory qemu can lock into ram, but
>> > an upper bound of all memory the qemu process can consume. This includes
>> > any qemu overhead e.g. used for the emulation layer.
>> >
>> > Guessing the correct size of overhead still has the same problems it had
>> > and libvirt is not going to be in the business of doing that.
>>
>> To clarify, my point was not whether libvirt should, but whether libvirt
>> or any related component possibly does (or did in the past) impose
>> memory limits. Because as I was looking around it seems there are no
>> real memory limits by default, at least in libvirt, but some limit had
>> apparently been hit in the reported bugs.
>>
>>