similar to: Vm in state "in shutdown"

Displaying 20 results from an estimated 100 matches similar to: "Vm in state 'in shutdown'"

2018 Mar 08
1
Statistics domain memory block when domain shutdown
Hi, my libvirt version is 3.4.0, the host system is CentOS 7.4, and the kernel is 3.10.0-693.el7.x86_64. When I shut down a domain from within the guest system, my program calls virDomainMemoryStats and blocks in this API. The call stack is: #0 0x00007ff242d78a3d in poll () from /lib64/libc.so.6 #1 0x00007ff243755ce8 in virNetClientIOEventLoop () from /lib64/libvirt.so.0 #2 0x00007ff24375654b in
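The same code path can be exercised from the command line with virsh's dommemstat, which also ends up in virDomainMemoryStats; "mydomain" below is a placeholder name:

  # query memory statistics for a running domain (placeholder name)
  virsh dommemstat mydomain

If virsh hangs here as well while the domain is shutting down, the problem is in libvirtd rather than in the calling program.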
2013 Mar 25
0
Bug in DOMINFO command when balloon driver is used on a vm with more than 8 GB of MaxMemory ?
Hi, I sent this to the wrong list (libvirt-devel) on Friday, so I am trying to send it to the correct one this time; apologies for the double posting. I also created a ticket for this on bugzilla.redhat.com, https://bugzilla.redhat.com/show_bug.cgi?id=927336, but I am still posting it here because it is absolutely possible that I am doing something wrong and someone here will see it. Description of the
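For context, the figures in question are the "Max memory" and "Used memory" fields reported by virsh; "mydomain" is a placeholder:

  # show basic domain information, including Max memory and Used memory
  virsh dominfo mydomain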
2013 Jun 09
0
lmtp-service crash after update
Hello, since the last update today, without any change to the config, the lmtp service crashes with the following messages: -------- Jun 9 13:16:43 kobe kernel: lmtp[25881]: segfault at 4 ip b7568e83 sp bfbe01b0 error 4 in libdovecot.so.0.0.0[b750c000+c6000] -------- Jun 9 13:16:43 kobe dovecot: lmtp(25881): Fatal: master: service(lmtp): child 25881 killed with signal 11 (core dumped) -------- GNU gdb
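To turn such a core dump into a readable backtrace, it can be loaded into gdb against the crashing binary; the path below is an assumption and varies by distribution:

  # open the core dump with the binary that produced it (path is distro-dependent)
  gdb /usr/libexec/dovecot/lmtp core
  # then, at the gdb prompt, print the full backtrace
  (gdb) bt full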
2006 Apr 24
1
E1 testing
Console logs from Asterisk A: Executing Dial("SIP/test0-5821", "Zap/6/327557670||Tt") in new stack -- Requested transfer capability: 0x00 - SPEECH -- Called 6/327557670 -- Zap/6-1 is proceeding passing it to SIP/test0-5821 -- Accepting UNAUTHENTICATED call from 195.66.73.122:
2017 Dec 22
2
Re: [BUG] Not exiting media forced a promptly close of libvirt 3.10
Hi Daniel, sorry. Here is the requested stack trace. Best regards, Holger ===================================================================================== Thread 18 (Thread 0x7f0d495e0700 (LWP 10742)): #0  0x00007f0d55e690bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 No symbol table info available. #1  0x00007f0d5892176a in virCondWait (c=c@entry=0x5557f238db28,
2009 Nov 24
2
SLES 10 client keeps removing and re-adding accounts to groups
SLES 10 clients keep removing and re-adding accounts to groups. I can't use this product in production as a result, though I'd like to. Using clients 25.1 with master 25.1. This keeps recurring on every single puppet client run: Nov 24 09:57:09 puppetd[26915]: (//unixuser/User[jdoe]/groups) groups changed 'wheel' to 'unixadm,wheel'
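A way to see the membership list Puppet computes on the client is to query the resource directly; "jdoe" is taken from the log above, and the tool name depends on the Puppet series (ralsh on the 0.25 series shown in the log, 'puppet resource' on later versions):

  # print the user resource as Puppet currently sees it on this client
  ralsh user jdoe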
2017 May 30
0
Asterisk 13.16.0 Now Available
The Asterisk Development Team would like to announce the release of Asterisk 13.16.0. This release is available for immediate download at http://downloads.asterisk.org/pub/telephony/asterisk The release of Asterisk 13.16.0 resolves several issues reported by the community and would not have been possible without your participation. Thank you! The following issues are resolved in this release:
2017 May 30
0
Asterisk 14.5.0 Now Available
The Asterisk Development Team would like to announce the release of Asterisk 14.5.0. This release is available for immediate download at http://downloads.asterisk.org/pub/telephony/asterisk The release of Asterisk 14.5.0 resolves several issues reported by the community and would not have been possible without your participation. Thank you! The following issues are resolved in this release:
2015 Jan 07
2
Block Commit: [100 %]error: failed to pivot job for disk vda
Hello. I'm seeing this error while doing a backup of a VM. + virsh blockcommit kaltura vda --active --verbose --pivot Block Commit: [100 %]error: failed to pivot job for disk vda error: internal error: unable to execute QEMU command 'block-job-complete': The active block job for device 'drive-virtio-disk0' cannot be completed I'm on qemu 2.2.0 and libvirt-1.2.11. Does
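Before retrying, the state of the stuck job can be inspected and, if necessary, cancelled; the domain and disk names match the command above:

  # show whether a block job is still active on the disk
  virsh blockjob kaltura vda --info
  # cancel a stuck job so the commit can be retried
  virsh blockjob kaltura vda --abort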
2018 Jan 18
3
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On 01/18/2018 08:25 AM, Ján Tomko wrote: > On Wed, Jan 17, 2018 at 04:45:38PM +0200, Serhii Kharchenko wrote: >> Hello libvirt-users list, >> >> We've been hitting the same bug since version 3.4.0 (3.3.0 works OK). >> We have a process that is permanently connected to libvirtd via a socket; >> it collects stats, listens for events, and controls the VPSes.
2019 Jun 04
2
blockcommit of domain not successfull
Hi, I have several domains running on a 2-node HA cluster. Each night I create snapshots of the domains; after copying the consistent raw file to a CIFS server, I blockcommit the changes back into the raw files. That has been running quite well, but recently the blockcommit didn't work for one domain. I created a logfile of the whole procedure:
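A minimal sketch of such a nightly cycle, assuming a domain named "mydomain" with disk "vda" and a raw image under /var/lib/libvirt/images (all names and paths are placeholders):

  # take a disk-only external snapshot so the base raw file stays consistent
  virsh snapshot-create-as mydomain backup --disk-only --atomic --no-metadata
  # copy the now-quiescent base image to the CIFS mount (path is an assumption)
  cp /var/lib/libvirt/images/mydomain.raw /mnt/cifs-backup/
  # merge the overlay back into the base image and pivot the domain onto it
  virsh blockcommit mydomain vda --active --verbose --pivot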
2018 Jan 17
4
Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
Hello libvirt-users list, We've been hitting the same bug since version 3.4.0 (3.3.0 works OK). We have a process that is permanently connected to libvirtd via a socket; it collects stats, listens for events, and controls the VPSes. When we try to 'shutdown' a number of VPSes we often hit the bug. One of the VPSes gets stuck in the 'in shutdown' state, with no related 'qemu'
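When a guest is stuck like this, the job libvirtd is blocked on can be inspected from another shell before forcing the guest off; the domain name is a placeholder:

  # list all domains and their states, including "in shutdown"
  virsh list --all
  # show the job libvirtd still considers running on the domain
  virsh domjobinfo mydomain
  # as a last resort, force the domain off
  virsh destroy mydomain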
2019 Jun 11
2
Re: blockcommit of domain not successfull
----- On Jun 5, 2019, at 4:49 PM, Peter Krempa pkrempa@redhat.com wrote: > On Wed, Jun 05, 2019 at 13:33:49 +0200, Lentes, Bernd wrote: >> Hi Peter, >> >> thanks for your help. >> >> ----- On Jun 5, 2019, at 9:27 AM, Peter Krempa pkrempa@redhat.com wrote: > > [...] > >> >> > >> > So that's interesting. Usually assertion
2020 Oct 12
3
unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
On libvirt 6.8.0 and qemu 5.1.0, when trying to live migrate, an "error: internal error: Failed to reserve port" error is received and the migration does not succeed: virsh # migrate cartridge qemu+tls://ratchet.lan/system --live --persistent --undefinesource --copy-storage-all --verbose error: internal error: Failed to reserve port 49153 virsh # On the target host, with debug logs, nothing
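One thing worth checking on the target host is whether another process is already holding the port libvirt tries to reserve (49153 here):

  # show any listener occupying the contested migration port
  ss -tlnp | grep 49153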
2019 Jun 05
3
Re: blockcommit of domain not successfull
Hi Peter, thanks for your help. ----- On Jun 5, 2019, at 9:27 AM, Peter Krempa pkrempa@redhat.com wrote: >> ============================================================= >> ... >> 2019-05-31 20:31:34.481+0000: 4170: error : qemuMonitorIO:719 : internal error: >> End of file from qemu monitor >> 2019-06-01 01:05:32.233+0000: 4170: error : qemuMonitorIO:719 :
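To capture more context around these "End of file from qemu monitor" errors, libvirtd's logging can be widened; the settings below go in /etc/libvirt/libvirtd.conf and are a reasonable-default suggestion, not taken from the thread:

  # log everything from the qemu driver and the core library
  log_filters="1:qemu 1:libvirt"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"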
2017 Feb 14
3
high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
Hi all, On IRC last night Dan helpfully confirmed my analysis of an issue we are seeing when attempting to launch high-memory KVM guests backed by hugepages... In this case the guests have 240GB of memory allocated from two host NUMA nodes to two guest NUMA nodes. The trouble is that allocating the hugepage-backed qemu process seems to take longer than the 30s QEMU_JOB_WAIT_TIME, and so libvirt then
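Pre-allocating the hugepages before the guest starts takes the allocation out of the QEMU startup path; the sketch below assumes 2MiB pages, so 240GB needs 122880 of them (the page size is an assumption):

  # reserve 122880 x 2MiB hugepages (= 240GB); adjust for the real page size
  sysctl -w vm.nr_hugepages=122880
  # with strict NUMA placement, per-node allocation via sysfs may be needed instead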
2012 Nov 25
1
Live migration with non-shared storage leads to corrupted file system
Hi, We have the following environment for live migration with non-shared storage between two nodes: Host OS: RHEL 6.3 Kernel: 2.6.32-279.el6.x86_64 Qemu-kvm: 1.2.0 libvirt: 0.10.1 We use "virsh" to do the job as: virsh -c 'qemu:///system' migrate --live --persistent --copy-storage-all <guest-name> qemu+ssh://<target-node>/system The
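Disk cache modes other than 'none' were a well-known source of corruption around migration in this era, so the guest's setting is worth confirming (guest name is a placeholder, matching the command above):

  # inspect the disk driver cache mode of the guest
  virsh dumpxml <guest-name> | grep cache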
2015 Jan 07
0
Re: Block Commit: [100 %]error: failed to pivot job for disk vda
On 01/07/2015 07:19 AM, Thomas Stein wrote: > Hello. > > I'm seeing this error while doing a backup of a VM. > > + virsh blockcommit kaltura vda --active --verbose --pivot > Block Commit: [100 %]error: failed to pivot job for disk vda > error: internal error: unable to execute QEMU command > 'block-job-complete': The active block job for device >
2018 Jan 17
0
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On 01/17/2018 03:45 PM, Serhii Kharchenko wrote: > Hello libvirt-users list, > > We've been hitting the same bug since version 3.4.0 (3.3.0 works OK). > We have a process that is permanently connected to libvirtd via a socket; > it collects stats, listens for events, and controls the VPSes. > > When we try to 'shutdown' a number of VPSes we often hit the
2018 Jan 18
0
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On Wed, Jan 17, 2018 at 04:45:38PM +0200, Serhii Kharchenko wrote: >Hello libvirt-users list, > >We've been hitting the same bug since version 3.4.0 (3.3.0 works OK). >We have a process that is permanently connected to libvirtd via a socket; >it collects stats, listens for events, and controls the VPSes. > >When we try to 'shutdown' a number of VPSes we often