Blair Bethwaite
2017-Feb-14 07:13 UTC
[libvirt-users] high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
Hi all,

In IRC last night Dan helpfully confirmed my analysis of an issue we are
seeing attempting to launch high memory KVM guests backed by hugepages...

In this case the guests have 240GB of memory allocated from two host NUMA
nodes to two guest NUMA nodes. The trouble is that allocating the hugepage
backed qemu process seems to take longer than the 30s QEMU_JOB_WAIT_TIME,
and so libvirt then most unhelpfully kills the barely spawned guest. Dan
said there was currently no workaround available, so I'm now looking at
building a custom libvirt which sets QEMU_JOB_WAIT_TIME=60s.

I have two related questions:
1) will this change have any untoward side-effects?
2) if not, then is there any reason not to change it in master until a
better solution comes along (or, possibly better, alter
qemuDomainObjBeginJobInternal to give a domain start job a little longer
compared to other jobs)?

--
Cheers,
~Blairo
Michal Privoznik
2017-Feb-14 13:47 UTC
Re: [libvirt-users] high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On 02/14/2017 08:13 AM, Blair Bethwaite wrote:
> Hi all,
>
> In IRC last night Dan helpfully confirmed my analysis of an issue we are
> seeing attempting to launch high memory KVM guests backed by hugepages...
>
> In this case the guests have 240GB of memory allocated from two host NUMA
> nodes to two guest NUMA nodes. The trouble is that allocating the hugepage
> backed qemu process seems to take longer than the 30s QEMU_JOB_WAIT_TIME
> and so libvirt then most unhelpfully kills the barely spawned guest. Dan
> said there was currently no workaround available so I'm now looking at
> building a custom libvirt which sets QEMU_JOB_WAIT_TIME=60s.

I don't think I understand this. Who is running the other job? I mean, I'd
expect qemu to fail to create the socket and thus hit the 30s timeout in
qemuMonitorOpenUnix().

> I have two related questions:
> 1) will this change have any untoward side-effects?

Since this timeout is shared with other jobs, you might have to wait a bit
longer for an API to return with an error if a domain is stuck and
unresponsive.

> 2) if not, then is there any reason not to change it in master until a
> better solution comes along (or possibly better, alter
> qemuDomainObjBeginJobInternal
> to give a domain start job a little longer compared to other jobs)?

It's a trade-off between the "responsiveness" of a libvirt API and being
able to talk to a qemu which is under heavy load. From libvirt's POV we are
unable to tell whether qemu is doing something or is stuck (e.g. looping
endlessly). So far, we felt like 30 seconds was a good choice. But I don't
mind being proven wrong.

Michal
Daniel P. Berrange
2017-Feb-14 13:57 UTC
Re: [libvirt-users] high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On Tue, Feb 14, 2017 at 06:13:20PM +1100, Blair Bethwaite wrote:
> Hi all,
>
> In IRC last night Dan helpfully confirmed my analysis of an issue we are
> seeing attempting to launch high memory KVM guests backed by hugepages...
>
> In this case the guests have 240GB of memory allocated from two host NUMA
> nodes to two guest NUMA nodes. The trouble is that allocating the hugepage
> backed qemu process seems to take longer than the 30s QEMU_JOB_WAIT_TIME
> and so libvirt then most unhelpfully kills the barely spawned guest. Dan
> said there was currently no workaround available so I'm now looking at
> building a custom libvirt which sets QEMU_JOB_WAIT_TIME=60s.
>
> I have two related questions:
> 1) will this change have any untoward side-effects?
> 2) if not, then is there any reason not to change it in master until a
> better solution comes along (or possibly better, alter
> qemuDomainObjBeginJobInternal
> to give a domain start job a little longer compared to other jobs)?

What is the actual error you're getting during startup? I'm not entirely
sure QEMU_JOB_WAIT_TIME is the thing that's the problem.

IIRC, the job wait time only comes into play when two threads are
contending on the same QEMU process, i.e. one has an existing job running
and a second comes along and tries to run a second job. The second will
time out after QEMU_JOB_WAIT_TIME is reached. The first job, which holds
the lock, will never time out.

During guest startup I didn't believe we had contending jobs in this way -
all the jobs needed to start up QEMU should be serialized, so I'm not sure
why QEMU_JOB_WAIT_TIME would even get hit.

Regards,
Daniel

--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
Blair Bethwaite
2017-Feb-15 02:43 UTC
Re: [libvirt-users] high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On 15 February 2017 at 00:57, Daniel P. Berrange <berrange@redhat.com> wrote:
> What is the actual error you're getting during startup?

# virsh -d0 start instance-0000037c
start: domain(optdata): instance-0000037c
start: found option <domain>: instance-0000037c
start: <domain> trying as domain NAME
error: Failed to start domain instance-0000037c
error: monitor socket did not show up: No such file or directory

Full libvirtd debug log at
https://gist.github.com/bmb/08fbb6b6136c758d027e90ff139d5701

On 15 February 2017 at 00:47, Michal Privoznik <mprivozn@redhat.com> wrote:
> I don't think I understand this. Who is running the other job? I mean,
> I'd expect qemu to fail to create the socket and thus hit the 30s timeout
> in qemuMonitorOpenUnix().

Yes, you're right. I just blindly started looking for 30s constants in the
code and that one seemed the most obvious, but I had not tried to trace it
all the way back to the domain start job or checked the debug logs yet,
sorry. Looking a bit more carefully, I see the real issue is in
src/qemu/qemu_monitor.c:

321 static int
322 qemuMonitorOpenUnix(const char *monitor, pid_t cpid)
323 {
324     struct sockaddr_un addr;
325     int monfd;
326     int timeout = 30; /* In seconds */

Is this safe to increase? Is there any reason to keep it at 30s given
(from what I'm seeing on a fast 2-socket Haswell system) that hugepage
backed guests larger than ~160GB of memory will not be able to start in
that time?

--
Cheers,
~Blairo
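[Editorial note: the failure mode Blair traces here - "monitor socket did not show up: No such file or directory" - is what a connect-and-retry loop reports once its deadline expires before QEMU has created the socket. The sketch below is an illustrative reconstruction of that pattern, not libvirt's actual implementation: the function name, timeout parameter, and 100 ms poll interval are assumptions.]

```c
/* Hypothetical sketch of a "wait for the monitor socket to show up"
 * loop: retry connect() while the failure is ENOENT (socket file not
 * created yet) or ECONNREFUSED (created but not yet listening), and
 * give up once the timeout budget is spent. If the timeout is shorter
 * than QEMU's hugepage allocation time, the caller sees ENOENT -
 * "No such file or directory" - exactly as in Blair's log. */
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Returns a connected fd, or -1 with errno set if the socket never
 * appeared within timeout_secs. */
static int monitor_open_unix(const char *path, int timeout_secs)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    /* Poll roughly 10 times per second until the budget runs out. */
    for (int attempt = 0; attempt < timeout_secs * 10; attempt++) {
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;
        if (errno != ENOENT && errno != ECONNREFUSED)
            break;  /* hard error: retrying won't help */
        usleep(100 * 1000);
    }

    int saved_errno = errno;  /* preserve the last connect() failure */
    close(fd);
    errno = saved_errno;
    return -1;
}
```

Seen this way, raising the constant (or making it proportional to guest memory size, since hugepage allocation time scales with it) only widens the budget for a socket that genuinely takes long to appear; it does not change behaviour for sockets that show up promptly.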