Displaying 20 results from an estimated 900 matches similar to: "Live migration with non-shared storage leads to corrupted file system"
2013 Nov 06
2
virt-resize problem for Windows 2003
Hi,
I'm using virt-resize to expand the primary partition (C:) in a
Windows 2003 image. The command works fine, but after expanding, when I boot
into Windows 2003, all the other partitions (D:, E:, and F:) are lost.
Using the disk management tool within Windows 2003, I can re-label
those three partitions and all the files are still there. But it is
really annoying because every
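For reference, a typical invocation for this kind of expansion might look roughly like the following. This is a sketch, not the poster's actual command; the image file names and the partition device are placeholders, and the RUN=echo guard keeps it a dry run:

```shell
# Dry-run sketch: create a larger output image, then grow the
# first (C:) partition into the new space with virt-resize.
# Replace RUN=echo with RUN= (empty) to actually execute.
RUN=echo
$RUN truncate -s 40G win2003-out.img
$RUN virt-resize --expand /dev/sda1 win2003.img win2003-out.img
```

virt-resize writes to a separate, pre-created output file, so the original image stays untouched if something goes wrong.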
2019 Aug 05
2
Vm in state "in shutdown"
Description of problem:
libvirt 3.9 on CentOS Linux release 7.4.1708 (kernel 3.10.0-693.21.1.el7.x86_64) on Qemu version 2.10.0
I’m currently facing a strange situation. Sometimes my VM is shown by ‘virsh list’ as being in the state “in shutdown”, but there is no qemu-kvm process linked to it.
The libvirt log when the “in shutdown” state occurs is as follows:
“d470c3b284425b9bacb34d3b5f3845fe” is the VM’s name,
2018 Mar 08
1
Statistics domain memory block when domain shutdown
Hi
My libvirt version is 3.4.0, the host system is CentOS 7.4, the kernel is 3.10.0-693.el7.x86_64. When I shut down a domain in the virtual system, my program calls virDomainMemoryStats and blocks in this API. The call stack is
#0 0x00007ff242d78a3d in poll () from /lib64/libc.so.6
#1 0x00007ff243755ce8 in virNetClientIOEventLoop () from /lib64/libvirt.so.0
#2 0x00007ff24375654b in
2015 Jan 07
2
Block Commit: [100 %]error: failed to pivot job for disk vda
Hello.
I'm seeing this error while doing a backup of a VM.
+ virsh blockcommit kaltura vda --active --verbose --pivot
Block Commit: [100 %]error: failed to pivot job for disk vda
error: internal error: unable to execute QEMU command
'block-job-complete': The active block job for device
'drive-virtio-disk0' cannot be completed
I'm on qemu 2.2.0 and libvirt-1.2.11.
Does
2016 Mar 16
2
overview zlib efficiency? Summary and added note
Hi,
use "doveadm" to get the total real message size:
doveadm -f table fetch -A "size.physical" ALL | awk '{s+=$2}END{printf("%.2fMB\n", s/1024/1024);}'
189247.67MB .. 185G
use "du" to get the size on disk:
In my case
with deduplication:
/srv/stroage/# du -s -h *
53G vmail
75G vmail_sis
without deduplication:
/srv/stroage/# du -s -h -l *
53G
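The awk pipeline above can be exercised locally with sample data. A minimal sketch; the two byte counts below are made up for illustration, not taken from the thread:

```shell
# Sum the byte counts in column 2 and print the total in MB,
# mirroring the "doveadm ... | awk" pipeline above.
printf 'user1 1048576\nuser2 2097152\n' \
  | awk '{s+=$2} END {printf("%.2fMB\n", s/1024/1024)}'
# prints 3.00MB
```

Comparing this logical size against `du` on the maildir is what exposes the combined savings from zlib compression and single-instance storage.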
2015 Jan 07
0
Re: Block Commit: [100 %]error: failed to pivot job for disk vda
On 01/07/2015 07:19 AM, Thomas Stein wrote:
> Hello.
>
> I'm seeing this error while doing a backup of a VM.
>
> + virsh blockcommit kaltura vda --active --verbose --pivot
> Block Commit: [100 %]error: failed to pivot job for disk vda
> error: internal error: unable to execute QEMU command
> 'block-job-complete': The active block job for device
>
2018 Jan 17
0
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On 01/17/2018 03:45 PM, Serhii Kharchenko wrote:
> Hello libvirt-users list,
>
> We're catching the same bug since 3.4.0 version (3.3.0 works OK).
> So, we have process that is permanently connected to libvirtd via socket
> and it is collecting stats, listening to events and control the VPSes.
>
> When we try to 'shutdown' a number of VPSes we often catch the
2018 Jan 18
0
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On Wed, Jan 17, 2018 at 04:45:38PM +0200, Serhii Kharchenko wrote:
>Hello libvirt-users list,
>
>We're catching the same bug since 3.4.0 version (3.3.0 works OK).
>So, we have process that is permanently connected to libvirtd via socket
>and it is collecting stats, listening to events and control the VPSes.
>
>When we try to 'shutdown' a number of VPSes we often
2012 Nov 25
0
image transfer incompletely in live migration
Hi all,
I am using libvirt version 0.10.1 and qemu-kvm version 1.2.0 with RHEL 6.3.
When I use the libvirt API to do a block migration of a VM, it fails
frequently. I checked the migrated VM and found that the filesystem was damaged.
But sometimes it works fine; I don't know why, it seems to be random.
All failed tests have the same condition: the image transfer is incomplete in
the live migration. The
2006 May 19
2
Limitation of storage size.
I want to configure one 50T OST storage; is it okay?
Since I know that the limit of ext3 is 1XT. Thanks
2019 Jun 11
2
Re: blockcommit of domain not successful
----- On Jun 5, 2019, at 4:49 PM, Peter Krempa pkrempa@redhat.com wrote:
> On Wed, Jun 05, 2019 at 13:33:49 +0200, Lentes, Bernd wrote:
>> Hi Peter,
>>
>> thanks for your help.
>>
>> ----- On Jun 5, 2019, at 9:27 AM, Peter Krempa pkrempa@redhat.com wrote:
>
> [...]
>
>>
>> >
>> > So that's interresting. Usually assertion
2019 Jun 05
0
Re: blockcommit of domain not successful
On Tue, Jun 04, 2019 at 14:44:29 +0200, Lentes, Bernd wrote:
> Hi,
Hi,
>
> I have several domains running on a 2-node HA-cluster.
> Each night I create snapshots of the domains; after copying the consistent raw file to a CIFS server I blockcommit the changes into the raw files.
> That's running quite well.
> But recently the blockcommit didn't work for one domain:
>
2020 Oct 12
3
unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
On libvirt 6.8.0 and qemu 5.1.0, when trying to live migrate, an "error:
internal error: Failed to reserve port" error is received and the
migration does not succeed:
virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
--persistent --undefinesource --copy-storage-all --verbose
error: internal error: Failed to reserve port 49153
virsh #
On target host with debug logs, nothing
2019 Jun 04
2
blockcommit of domain not successful
Hi,
I have several domains running on a 2-node HA-cluster.
Each night I create snapshots of the domains; after copying the consistent raw file to a CIFS server I blockcommit the changes into the raw files.
That's running quite well.
But recently the blockcommit didn't work for one domain:
I create a logfile of the whole procedure:
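The nightly cycle described here could be sketched roughly as follows. This is my reconstruction, not the poster's actual script: the domain name, disk target, and paths are placeholders, and the RUN=echo guard keeps it a dry run:

```shell
# Dry-run sketch of a nightly snapshot / copy / blockcommit cycle.
# Replace RUN=echo with RUN= (empty) to actually execute.
DOM=guest01                     # hypothetical domain name
RUN=echo                        # dry-run guard: print instead of run
# 1. Redirect writes into a temporary overlay (external disk snapshot).
$RUN virsh snapshot-create-as "$DOM" nightly --disk-only --atomic --no-metadata
# 2. Copy the now-quiescent raw base image to the CIFS backup share.
$RUN cp "/var/lib/libvirt/images/$DOM.raw" "/mnt/cifs/$DOM.raw"
# 3. Merge the overlay back into the raw file and pivot to it.
$RUN virsh blockcommit "$DOM" vda --active --verbose --pivot
```

The `--pivot` step in stage 3 is where the failure in this thread occurs: the guest keeps running on the overlay until the active commit can be completed.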
2015 Jan 07
2
Re: Block Commit: [100 %]error: failed to pivot job for disk vda
On Wednesday 07 January 2015 09:46:09 Eric Blake wrote:
> On 01/07/2015 07:19 AM, Thomas Stein wrote:
> > Hello.
> >
> > I'm seeing this error while doing a backup of a VM.
> >
> > + virsh blockcommit kaltura vda --active --verbose --pivot
> > Block Commit: [100 %]error: failed to pivot job for disk vda
> > error: internal error: unable to execute
2018 Jan 18
3
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On 01/18/2018 08:25 AM, Ján Tomko wrote:
> On Wed, Jan 17, 2018 at 04:45:38PM +0200, Serhii Kharchenko wrote:
>> Hello libvirt-users list,
>>
>> We're catching the same bug since 3.4.0 version (3.3.0 works OK).
>> So, we have process that is permanently connected to libvirtd via socket
>> and it is collecting stats, listening to events and control the VPSes.
2019 Jun 05
3
Re: blockcommit of domain not successful
Hi Peter,
thanks for your help.
----- On Jun 5, 2019, at 9:27 AM, Peter Krempa pkrempa@redhat.com wrote:
>> =============================================================
>> ...
>> 2019-05-31 20:31:34.481+0000: 4170: error : qemuMonitorIO:719 : internal error:
>> End of file from qemu monitor
>> 2019-06-01 01:05:32.233+0000: 4170: error : qemuMonitorIO:719 :
2018 Jan 17
4
Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
Hello libvirt-users list,
We've been catching the same bug since version 3.4.0 (3.3.0 works OK).
So, we have a process that is permanently connected to libvirtd via a socket;
it collects stats, listens for events, and controls the VPSes.
When we try to 'shutdown' a number of VPSes we often catch the bug. One of
the VPSes sticks in the 'in shutdown' state, with no related 'qemu'
2018 Apr 17
0
Re: can't find how to solve "QEMU guest agent is not connected"
On Tue, Apr 17, 2018 at 07:54:14PM +0900, Matt wrote:
> I am trying to make the QEMU agent work with libvirt thanks to
> https://github.com/NixOS/nixops/pull/922 with libvirt 4.1.0. I've been
> trying to make it work for quite some time, but I still haven't the
> slightest idea of what is wrong; I keep seeing "Guest agent is not
> responding: QEMU guest agent is not
2017 Feb 14
3
high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
Hi all,
In IRC last night Dan helpfully confirmed my analysis of an issue we are
seeing when attempting to launch high-memory KVM guests backed by hugepages...
In this case the guests have 240GB of memory allocated from two host NUMA
nodes to two guest NUMA nodes. The trouble is that allocating the
hugepage-backed qemu process seems to take longer than the 30s QEMU_JOB_WAIT_TIME,
and so libvirt then