Michael C. Cambria
2017-Jun-01 12:47 UTC
[libvirt-users] libvirtd not accepting connections
Hi,

Ever since I recently upgraded to Fedora 25, I can't get kvm working. It has worked on this system since the initial Fedora 20 install. All upgrades were done via yum, then, once available, dnf.

I do have libvirt-sock in LISTENING state:

  STREAM LISTENING /var/run/libvirt/libvirt-sock

I noticed I also have multiple connections in CONNECTING state:

  STREAM CONNECTING 0 /var/run/libvirt/libvirt-sock
  STREAM CONNECTING 0 /var/run/libvirt/libvirt-sock
  STREAM CONNECTING 0 /var/run/libvirt/libvirt-sock

I rebooted and tried again. With each 'virsh -c' or virt-manager command I issue, I end up with another socket in CONNECTING state. Restarting libvirtd.service closes them, leaving just the one LISTENING socket.

strace for 'virsh -c' shows:

  4301  socket(AF_UNIX, SOCK_STREAM, 0) = 5
  4301  connect(5, {sa_family=AF_UNIX, sun_path="/var/run/libvirt/libvirt-sock"}, 110) = 0
  4301  getsockname(5, {sa_family=AF_UNIX}, [128->2]) = 0
  4301  futex(0x7f3c0a2e4fc8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
  4301  fcntl(5, F_GETFD) = 0
  4301  fcntl(5, F_SETFD, FD_CLOEXEC) = 0
  4301  fcntl(5, F_GETFL) = 0x2 (flags O_RDWR)
  4301  fcntl(5, F_SETFL, O_RDWR|O_NONBLOCK) = 0

systemctl status libvirtd.service and journalctl -f -u libvirtd.service do not show any log entries for libvirtd after it has been started, i.e. it doesn't log anything about the connection attempts.

Anyone have an idea where to look next?

Thanks,
MikeC
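P.S. In case anyone wants to reproduce the checks above, roughly what I ran (sketched here with iproute2's ss; an equivalent netstat invocation works too, and qemu:///system is the URI I connect to):

  # list unix sockets (listening and connected) on the libvirt path,
  # with the processes holding them
  sudo ss -xap | grep libvirt-sock

  # trace a connecting client, following children, writing to a file
  strace -f -o virsh.trace virsh -c qemu:///system list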
Martin Kletzander
2017-Jun-01 13:56 UTC
[libvirt-users] libvirtd not accepting connections
On Thu, Jun 01, 2017 at 08:47:55AM -0400, Michael C. Cambria wrote:
>
> Hi,
>
> Ever since I recently upgraded to Fedora 25, I can't get kvm working.

It's not really about KVM but about libvirt. So the daemon is running...

> It has worked on this system since the initial Fedora 20 install. All
> upgrades were done via yum, then, once available, dnf.
>
> I do have libvirt-sock in LISTENING state:
>
>   STREAM LISTENING /var/run/libvirt/libvirt-sock
>
> I noticed I also have multiple connections in CONNECTING state:
>
>   STREAM CONNECTING 0 /var/run/libvirt/libvirt-sock
>   STREAM CONNECTING 0 /var/run/libvirt/libvirt-sock
>   STREAM CONNECTING 0 /var/run/libvirt/libvirt-sock
>
> I rebooted and tried again. With each 'virsh -c' or virt-manager
> command I issue, I end up with another socket in CONNECTING state.
> Restarting libvirtd.service closes them, leaving just the one
> LISTENING socket.
>
> strace for 'virsh -c' shows:
>
>   4301  socket(AF_UNIX, SOCK_STREAM, 0) = 5
>   4301  connect(5, {sa_family=AF_UNIX, sun_path="/var/run/libvirt/libvirt-sock"}, 110) = 0
>   4301  getsockname(5, {sa_family=AF_UNIX}, [128->2]) = 0
>   4301  futex(0x7f3c0a2e4fc8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
>   4301  fcntl(5, F_GETFD) = 0
>   4301  fcntl(5, F_SETFD, FD_CLOEXEC) = 0
>   4301  fcntl(5, F_GETFL) = 0x2 (flags O_RDWR)
>   4301  fcntl(5, F_SETFL, O_RDWR|O_NONBLOCK) = 0

virsh is just waiting for the daemon; there's no useful info there.

> systemctl status libvirtd.service and journalctl -f -u libvirtd.service
> do not show any log entries for libvirtd after it has been started,
> i.e. it doesn't log anything about the connection attempts.
>
> Anyone have an idea where to look next?

Check whether the daemon is running, check any QEMU processes that might be running, enable debug logs for libvirtd [1] and try grabbing a backtrace of the process (for example `gdb -batch -p $(pidof libvirtd) -ex "t a a bt full"' after you install the debuginfo packages).

> Thanks,
> MikeC
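Enabling the debug logs is done in /etc/libvirt/libvirtd.conf and the daemon restarted afterwards; a minimal sketch (log_level and log_outputs are the standard libvirtd.conf options; the log file path is just an example):

  # /etc/libvirt/libvirtd.conf
  log_level = 1                                         # 1 = debug
  log_outputs = "1:file:/var/log/libvirt/libvirtd.log"  # debug and above to a file

  # restart so the settings take effect
  sudo systemctl restart libvirtd.service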
Martin Kletzander
2017-Jun-02 13:43 UTC
Re: [libvirt-users] libvirtd not accepting connections
[adding back the ML, you probably hit reply instead of reply-all; this way other people might help if they know more]

On Fri, Jun 02, 2017 at 08:10:01AM -0400, Michael C. Cambria wrote:
>
> Hi,
>
> libvirtd never seems to get notified that there is work to do.
> journalctl -f indicated that nothing was logged when connections were
> attempted via virsh.
>
> I also tried 'LIBVIRT_DEBUG=1 libvirtd --verbose' and once startup
> finished, there were no more log entries even though virsh attempts
> were made.
>

That's because it gets overridden by the configuration files. This might be a bug, but it's not related to what's happening here.

> "ps ax" shows about a dozen "qemu-system-alpha" processes. I don't
> know if it matters, but I didn't expect to see this. I didn't
> intentionally configure alpha emulation (assuming that's what it is)
> and certainly don't want to waste resources having it running.
>

Libvirt caches the capabilities of the emulators it can find on your system in order not to waste resources. These processes are expected to go away after they reply to everything libvirt asks of them. However, it seems the initialization cannot complete precisely because these processes don't communicate.

There might be details of qemu-system-alpha that differ from, e.g., qemu-system-x86 and to which libvirt is not (yet) adapted, but I installed that emulator here and the libvirt daemon runs as usual. It looks like a problem in QEMU. Could you, as a workaround, try uninstalling that qemu binary from your system and restarting the service?

Also, what versions of libvirt and qemu do you have installed?

> Here is gdb output:
>
> $ sudo gdb -batch -p $(pidof libvirtd) -ex "t a a bt full" > batch.out
> [mcc@eastie-fid4-com triage]$ cat batch.out
> [New LWP 17587]
> [New LWP 17588]
> [New LWP 17589]
> [New LWP 17590]
> [New LWP 17591]
> [New LWP 17592]
> [New LWP 17593]
> [New LWP 17594]
> [New LWP 17595]
> [New LWP 17596]
> [New LWP 17597]
> [New LWP 17598]
> [New LWP 17599]
> [New LWP 17600]
> [New LWP 17601]
> [New LWP 17602]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x00007fcd6b4a501d in poll () at ../sysdeps/unix/syscall-template.S:84
> 84      T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
>
> Thread 17 (Thread 0x7fcd3bf18700 (LWP 17602)):
> #0  0x00007fcd6b4a501d in poll () at ../sysdeps/unix/syscall-template.S:84
> No locals.
> #1  0x00007fcd6b4c310e in __poll_chk (fds=<optimized out>,
> nfds=<optimized out>, timeout=<optimized out>, fdslen=<optimized out>)
> at poll_chk.c:27
> No locals.
> #2  0x00007fcd6f07bf41 in poll (__timeout=-1, __nfds=<optimized out>,
> __fds=0x7fcd3bf16ec0) at /usr/include/bits/poll2.h:41
> No locals.
> #3  virCommandProcessIO (cmd=cmd@entry=0x7fcd344228f0) at
> util/vircommand.c:2049
>         i = <optimized out>
>         fds = {{fd = 22, events = 1, revents = 0}, {fd = 24, events = 1,
>           revents = 0}, {fd = 1802946632, events = 32717, revents = 0}}
>         nfds = <optimized out>
>         outfd = <optimized out>
>         errfd = 24
>         inlen = 0
>         outlen = 0
>         errlen = 0
>         inoff = 0
>         ret = 0
>         __func__ = "virCommandProcessIO"
>         __FUNCTION__ = "virCommandProcessIO"
> #4  0x00007fcd6f08025a in virCommandRun (cmd=cmd@entry=0x7fcd344228f0,
> exitstatus=exitstatus@entry=0x7fcd3bf1749c) at util/vircommand.c:2274
>         ret = 0
>         outbuf = 0x7fcd341c1850 "`\030\034\064\315\177"
>         errbuf = 0x0
>         st = {st_dev = 140519450702688, st_ino = 5007254661694877440,
>           st_nlink = 140519321774352, st_mode = 1862726288, st_uid = 32717,
>           st_gid = 1865325018, __pad0 = 32717, st_rdev = 140732281970554,
>           st_size = 0, st_blksize = 11, st_blocks = 8,
>           st_atim = {tv_sec = 140519321774368, tv_nsec = 140519450703056},
>           st_mtim = {tv_sec = 140519321774352, tv_nsec = 140519450703024},
>           st_ctim = {tv_sec = 140520244259750, tv_nsec = 140520310044608},
>           __glibc_reserved = {140520244259750, 140520310041179, 140519321774320}}
>         string_io = <optimized out>
>         async_io = <optimized out>
>         str = 0x7fcd3bf17420 "\260t\361;\315\177"
>         tmpfd = <optimized out>
>         __FUNCTION__ = "virCommandRun"
>         __func__ = "virCommandRun"
> #5  0x00007fcd404a27cf in virQEMUCapsInitQMP (qmperr=0x7fcd3bf174a0,
> runGid=107, runUid=107, libDir=<optimized out>, qemuCaps=0x7fcd340fd3e0)
> at qemu/qemu_capabilities.c:3700
>         cmd = 0x7fcd344228f0
>         pid = 0
>         ret = -1
>         mon = 0x0
>         status = 0
>         monarg = 0x7fcd343a2570
>           "unix:/var/lib/libvirt/qemu/capabilities.monitor.sock,server,nowait"
>         vm = 0x0
>         config = {type = 9, data = {file = {path = 0x7fcd34151d90
>           "/var/lib/libvirt/qemu/capabilities.monitor.sock", append = 0},
>           nmdm = {master = 0x7fcd34151d90
>           "/var/lib/libvirt/qemu/capabilities.monitor.sock", slave = 0x0},
>           tcp = {host = 0x7fcd34151d90
>           "/var/lib/libvirt/qemu/capabilities.monitor.sock", service = 0x0,
>           listen = false, protocol = 0}, udp = {bindHost = 0x7fcd34151d90
>           "/var/lib/libvirt/qemu/capabilities.monitor.sock", bindService = 0x0,
>           connectHost = 0x0, connectService = 0x0}, nix = {path = 0x7fcd34151d90
>           "/var/lib/libvirt/qemu/capabilities.monitor.sock", listen = false},
>           spicevmc = 873799056, spiceport = {channel = 0x7fcd34151d90
>           "/var/lib/libvirt/qemu/capabilities.monitor.sock"}}, logfile = 0x0,
>           logappend = 0}
>         monpath = 0x7fcd34151d90
>           "/var/lib/libvirt/qemu/capabilities.monitor.sock"
>         pidfile = 0x7fcd341ad8b0
>           "/var/lib/libvirt/qemu/capabilities.pidfile"
>         xmlopt = 0x0
> #6  virQEMUCapsNewForBinaryInternal (binary=binary@entry=0x7fcd34016cb0
> "/usr/bin/qemu-system-alpha", libDir=<optimized out>,
> cacheDir=0x7fcd343be860 "/var/cache/libvirt/qemu", runUid=107,
> runGid=107, qmpOnly=qmpOnly@entry=false) at qemu/qemu_capabilities.c:3830
>         qemuCaps = 0x7fcd340fd3e0
>         sb = {st_dev = 64768, st_ino = 1838294, st_nlink = 1, st_mode = 33261,
>           st_uid = 0, st_gid = 0, __pad0 = 0, st_rdev = 0, st_size = 8829680,
>           st_blksize = 4096, st_blocks = 17248,
>           st_atim = {tv_sec = 1496358589, tv_nsec = 77994286},
>           st_mtim = {tv_sec = 1492132244, tv_nsec = 0},
>           st_ctim = {tv_sec = 1494196699, tv_nsec = 451929606},
>           __glibc_reserved = {0, 0, 0}}
>         rv = <optimized out>
>         qmperr = 0x7fcd341c1870 ""
>         __FUNCTION__ = "virQEMUCapsNewForBinaryInternal"
> #7  0x00007fcd404a3a73 in virQEMUCapsNewForBinary (runGid=<optimized
> out>, runUid=<optimized out>, cacheDir=<optimized out>,
> libDir=<optimized out>, binary=0x7fcd34016cb0
>"/usr/bin/qemu-system-alpha") at qemu/qemu_capabilities.c:3871 >No locals. >#8 virQEMUCapsCacheLookup (cache=cache@entry=0x7fcd341c9000, >binary=0x7fcd34016cb0 "/usr/bin/qemu-system-alpha") at >qemu/qemu_capabilities.c:3986 > ret = 0x0 > __func__ = "virQEMUCapsCacheLookup" >#9 0x00007fcd404a3d22 in virQEMUCapsInitGuest >(guestarch=VIR_ARCH_ALPHA, hostarch=VIR_ARCH_X86_64, >cache=0x7fcd341c9000, caps=0x7fcd341a9980) at qemu/qemu_capabilities.c:824 > qemubinCaps = 0x0 > x86_32on64_kvm = <optimized out> > ppc64_kvm = <optimized out> > kvmbin = 0x0 > ret = -1 > i = <optimized out> > binary = 0x7fcd34016cb0 "/usr/bin/qemu-system-alpha" > kvmbinCaps = 0x0 > native_kvm = <optimized out> > arm_32on64_kvm = <optimized out> >#10 virQEMUCapsInit (cache=0x7fcd341c9000) at qemu/qemu_capabilities.c:1109 > caps = 0x7fcd341a9980 > i = 1 > hostarch = VIR_ARCH_X86_64 > __func__ = "virQEMUCapsInit" >#11 0x00007fcd404def20 in virQEMUDriverCreateCapabilities >(driver=driver@entry=0x7fcd34342370) at qemu/qemu_conf.c:766 > i = <optimized out> > j = <optimized out> > caps = <optimized out> > sec_managers = 0x0 > doi = <optimized out> > model = <optimized out> > lbl = <optimized out> > type = <optimized out> > cfg = 0x7fcd3448cbb0 > virtTypes = {3, 1} > __FUNCTION__ = "virQEMUDriverCreateCapabilities" > __func__ = "virQEMUDriverCreateCapabilities" >#12 0x00007fcd4051fef3 in qemuStateInitialize (privileged=true, >callback=<optimized out>, opaque=<optimized out>) at qemu/qemu_driver.c:844 > driverConf = 0x0 > conn = 0x0 > cfg = 0x7fcd3448cbb0 > run_uid = <optimized out> > run_gid = <optimized out> > hugepagePath = 0x0 > i = <optimized out> > __FUNCTION__ = "qemuStateInitialize" >#13 0x00007fcd6f1789af in virStateInitialize (privileged=<optimized >out>, callback=0x55f56a9b3180 <daemonInhibitCallback>, >opaque=0x55f56be1cf00) at libvirt.c:770 > i = 9 > __func__ = "virStateInitialize" >#14 0x000055f56a9b31db in daemonRunStateInit (opaque=0x55f56be1cf00) at >libvirtd.c:959 > dmn = 0x55f56be1cf00 > sysident = 0x7fcd34000910 > __func__ = "daemonRunStateInit" >#15 0x00007fcd6f0d98f2 in virThreadHelper (data=<optimized out>) at >util/virthread.c:206 > args = 0x0 > local = {func = 0x55f56a9b31a0 <daemonRunStateInit>, funcName >0x55f56a9f28d3 "daemonRunStateInit", worker = false, opaque >0x55f56be1cf00} >#16 0x00007fcd6b7766ca in start_thread (arg=0x7fcd3bf18700) at >pthread_create.c:333 > __res = <optimized out> > pd = 0x7fcd3bf18700 > now = <optimized out> > unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140519450707712, >-3574063505887647860, 0, 140732281962543, 140519450708416, >140519450707712, 3601779982174594956, 3601954753778231180}, >mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev >0x0, cleanup = 0x0, canceltype = 0}}} > not_first_call = <optimized out> > pagesize_m1 = <optimized out> > sp = <optimized out> > freesize = <optimized out> > __PRETTY_FUNCTION__ = "start_thread" >#17 0x00007fcd6b4b0f7f in clone () at >.../sysdeps/unix/sysv/linux/x86_64/clone.S:105 >No locals. >[...]
Michael C. Cambria
2017-Jun-02 13:53 UTC
Re: [libvirt-users] libvirtd not accepting connections
On 06/02/2017 09:43 AM, Martin Kletzander wrote:
> [adding back the ML, you probably hit reply instead of reply-all; this
> way other people might help if they know more]
>
> On Fri, Jun 02, 2017 at 08:10:01AM -0400, Michael C. Cambria wrote:
>>
>> Hi,
>>
>> libvirtd never seems to get notified that there is work to do.
>> journalctl -f indicated that nothing was logged when connections were
>> attempted via virsh.
>>
>> I also tried 'LIBVIRT_DEBUG=1 libvirtd --verbose' and once startup
>> finished, there were no more log entries even though virsh attempts
>> were made.
>>
>
> That's because it gets overridden by the configuration files. This
> might be a bug, but it's not related to what's happening here.
>
>> "ps ax" shows about a dozen "qemu-system-alpha" processes. I don't
>> know if it matters, but I didn't expect to see this. I didn't
>> intentionally configure alpha emulation (assuming that's what it is)
>> and certainly don't want to waste resources having it running.
>>
>
> Libvirt caches the capabilities of the emulators it can find on your
> system in order not to waste resources. These processes are expected to
> go away after they reply to everything libvirt asks of them. However,
> it seems the initialization cannot complete precisely because these
> processes don't communicate.
>
> There might be details of qemu-system-alpha that differ from, e.g.,
> qemu-system-x86 and to which libvirt is not (yet) adapted, but I
> installed that emulator here and the libvirt daemon runs as usual. It
> looks like a problem in QEMU. Could you, as a workaround, try
> uninstalling that qemu binary from your system and restarting the
> service?
>
> Also, what versions of libvirt and qemu do you have installed?

# LIBVIRT_DEBUG=1 libvirtd --verbose
2017-06-02 00:16:30.317+0000: 18088: info : libvirt version: 2.2.1, package: 1.fc25 (Fedora Project, 2017-05-10-22:06:21, buildvm-29.phx2.fedoraproject.org)

I'll check on qemu as soon as I can get to the machine. The version should be the latest one gets via 'dnf update' on Fedora 25.

>
>> Here is gdb output:
>>
>> $ sudo gdb -batch -p $(pidof libvirtd) -ex "t a a bt full" > batch.out
>> [...]
>
> [...]