Displaying 20 results from an estimated 10000 matches similar to: "how libvirt check qemu domain status?"
2018 Sep 14
2
Re: live migration and config
14.09.2018 15:43, Jiri Denemark wrote:
> On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote:
>>
>> 13.09.2018 18:57, Jiri Denemark wrote:
>>> On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote:
>>>> 13.09.2018 17:47, Jiri Denemark wrote:
>>>>> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
2018 Sep 13
2
Re: live migration and config
13.09.2018 17:47, Jiri Denemark wrote:
> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
>> After some mistakes yesterday we (my colleague and I) think it would
>> be wise for libvirt to check for the config file's existence on the remote side
> Which config file?
>
The VM config file, namely the qemu one.
We forgot to mount the shared storage (namely a gluster volume), on which we
2018 Sep 13
2
Re: live migration and config
13.09.2018 18:57, Jiri Denemark wrote:
> On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote:
>>
>> 13.09.2018 17:47, Jiri Denemark пишет:
>>> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
>>>> After some mistakes yesterday we (my colleague and I) think it would
>>>> be wise for libvirt to check for the config file's existence
2018 Sep 13
3
live migration and config
Hello!
After some mistakes yesterday we (my colleague and I) think it would be
wise for libvirt to check for the config file's existence on the remote side
and throw an error if it is missing,
before migrating; otherwise the migration will fail and the VM's filesystem can be
damaged, because it is a sort of pulling of the power plug...
We slipped up twice yesterday :-(
Could you tell me, is there already such an option, or any plans to
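(A minimal sketch of the pre-flight check being asked for here; the domain name, destination host and SSH user are placeholders, not taken from the thread. The idea is simply to confirm that every disk path of the domain is visible on the destination before migrating.)

  #!/bin/bash
  VM=myguest                    # placeholder domain name
  DST=kvm02.example.com         # placeholder destination host
  # For every disk attached to the domain, check that the same source path
  # exists on the destination, i.e. the shared (gluster) volume is mounted there.
  for path in $(virsh domblklist "$VM" --details | awk '$2 == "disk" {print $4}'); do
      if ! ssh "root@$DST" test -e "$path"; then
          echo "disk $path is missing on $DST - shared storage not mounted?" >&2
          exit 1
      fi
  done
  virsh migrate --live --persistent "$VM" "qemu+ssh://$DST/system"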
2019 Mar 03
3
virsh hangs when backup jobs run on the kvm host
Hi,
Has anyone met the same problem, where virsh hangs when backup jobs are running on the KVM host? I use the dirty bitmap feature supported by qemu to do backups every 6 minutes. A few hours later, I ran 'virsh list' and it hung. Then I stopped the backup jobs, but nothing got better. I checked the libvirt logs but found nothing useful.
Maybe the high I/O caused the
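(A generic first step for this kind of hang, not specific to the report above: check whether libvirtd itself is stuck and, if so, capture its thread backtraces; assumes gdb is installed on the host.)

  # Does a simple list call return at all?
  timeout 30 virsh list --all || echo "virsh timed out - libvirtd is probably stuck"
  # Dump the stack of every libvirtd thread to see what it is blocked on
  gdb -p "$(pidof libvirtd)" -batch -ex "thread apply all bt" > libvirtd-backtrace.txt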
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
First, a heartfelt thanks for writing back.
In another solution (which does not have this issue) we use nfs-ganesha to serve squashfs root FS objects to compute nodes. It is working great. We also have fuse-through-LIO.
The solution here is 3 servers making up the cluster, with an admin node.
The XFS issue is only observed when we try to replace an existing filesystem with another XFS on top, and only with RAW,
2018 Jul 23
2
G729
20.07.2018 23:35, John Kiniston wrote:
>
> On Fri, Jul 20, 2018 at 11:41 AM Saint Michael <venefax at gmail.com
> <mailto:venefax at gmail.com>> wrote:
>
> The community would benefit if a non-licensed version of G729
> were included with Asterisk, since the license has expired.
> The current codec source code posted still requires
2017 Sep 09
2
GlusterFS as virtual machine storage
Yes, this is my observation so far.
On Sep 9, 2017 13:32, "Gionatan Danti" <g.danti at assyoma.it> wrote:
> On 09-09-2017 09:09 Pavel Szalbot wrote:
>
>> Sorry, I did not start glusterfsd on the node I was shutting down
>> yesterday, and now killed another one during the FUSE test, so it had to
>> crash immediately (only one of the three nodes was actually
2017 Sep 09
0
GlusterFS as virtual machine storage
On 09-09-2017 09:09 Pavel Szalbot wrote:
> Sorry, I did not start glusterfsd on the node I was shutting down
> yesterday, and now killed another one during the FUSE test, so it had to
> crash immediately (only one of the three nodes was actually up). This
> definitely happened for the first time (only one node had been killed
> yesterday).
>
> Using FUSE seems to be OK with
2017 Sep 09
2
GlusterFS as virtual machine storage
Sorry, I did not start glusterfsd on the node I was shutting down
yesterday, and now killed another one during the FUSE test, so it had to
crash immediately (only one of the three nodes was actually up). This
definitely happened for the first time (only one node had been killed
yesterday).
Using FUSE seems to be OK with replica 3, so this may be gfapi related
or perhaps rather libvirt related.
I tried
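(For readers following the FUSE vs. libgfapi distinction in this thread: the two attachment methods differ in the domain XML. A rough sketch, with the volume name, image path and gluster host as placeholders:)

  <!-- disk image on a FUSE-mounted gluster volume -->
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw'/>
    <source file='/mnt/gv0/vm1.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>

  <!-- the same image accessed directly through libgfapi -->
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='gv0/vm1.img'>
      <host name='gluster1.example.com' port='24007'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>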
2017 Sep 09
2
GlusterFS as virtual machine storage
Mh, not so sure really; we are using libgfapi and it's been working perfectly
fine. And trust me, there have been A LOT of various crashes, reboots and
kills of nodes.
Maybe it's a version thing? A new bug in the newer gluster releases that
doesn't affect our 3.7.15.
On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote:
> Well, that makes me feel better.
>
> I've seen all these
2018 Oct 12
2
asterisk 16 manager --END COMMAND--
Hello!
Just upgraded asterisk from 13 to 16 and found that the php-agi library is
not compatible.
It waits for --END COMMAND--
after a command is completed,
but, as I see from tcpdump, asterisk now does not send such a string after
a command is completed.
Could you tell me, is it possible to get the previous behaviour?
Or what does the manager now send when a command is completed?
Thank you!
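(One way to see exactly what the manager sends now is to bypass php-agi and talk to the AMI port directly; the username, secret and CLI command below are placeholders. If memory serves, newer Asterisk releases return the CLI output in Output: headers of a normal Response rather than in a "Response: Follows" block terminated by --END COMMAND--, which is what breaks older parsers.)

  printf 'Action: Login\r\nUsername: admin\r\nSecret: secret\r\n\r\nAction: Command\r\nCommand: core show uptime\r\n\r\nAction: Logoff\r\n\r\n' \
    | nc -w 3 localhost 5038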
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
Hey Erik,
I am running a similar setup with no issues, with Ubuntu host systems
on HPE DL380 Gen 10.
I used to run libvirt/qemu via nfs-ganesha on top of gluster
flawlessly, however.
Recently I upgraded to the native GFAPI implementation, which is poorly
documented, with snippets scattered all over the internet.
Although I cannot provide a direct solution for your issue, I would
suggest trying
2017 Sep 09
0
GlusterFS as virtual machine storage
Well, that makes me feel better.
I've seen all these stories here and on Ovirt recently about VMs going
read-only, even on fairly simple layouts.
Each time, I've responded that we just don't see those issues.
I guess the fact that we were lazy about switching to gfapi turns out to
be a potential explanation <grin>
-wk
On 9/9/2017 6:49 AM, Pavel Szalbot wrote:
>
2017 Sep 10
0
GlusterFS as virtual machine storage
I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>)
We are also typically on a somewhat slower GlusterFS LAN network (bonded
2x1G, jumbo frames) so that may be a factor.
I'll try to set up a trusted pool to test libgfapi soon.
I'm curious as to how much faster it is, but the fuse mount is fast
enough, dirt simple to use, and just works on all VM ops such as
2017 Jun 03
2
Re: libvirtd not accepting connections
On Sat, Jun 03, 2017 at 09:22:58AM -0400, Michael C Cambria wrote:
>
>
>On 06/02/2017 09:53 AM, Michael C. Cambria wrote:
>>
>>
>> On 06/02/2017 09:43 AM, Martin Kletzander wrote:
>>> [adding back the ML, you probably hit reply instead of reply-all, this
>>> way other people might help if they know more]
>>>
>>> On Fri, Jun 02, 2017 at
2016 Oct 13
2
Re: Fwd: Problems connecting to the libvirtd server
On 12.10.2016 19:16, Stefano Ricci wrote:
> I checked with QMP and your assumption was correct: the status of
> the qemu process is prelaunch, and running equals false.
> But I do not understand why
[Please don't top post on technical lists]
Well, I don't have any idea either. I mean, the daemon logs you provided
are from the second run of libvirtd. Maybe this is qemu or
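(The two views can be compared directly; the domain name below is a placeholder. This is the usual way to confirm a "prelaunch, running = false" state like the one reported above.)

  # libvirt's view of the domain state, including the reason
  virsh domstate guest01 --reason
  # QEMU's own view, via libvirt's QMP passthrough
  virsh qemu-monitor-command guest01 --pretty '{"execute": "query-status"}'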
2015 May 31
4
Re: [ovirt-users] Bug in Snapshot Removing
Small addition again:
This error shows up in the log while removing snapshots WITHOUT rendering the VMs unresponsive
—
Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1657]: Timed out during operation: cannot acquire state change lock
Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[6839]: vdsm vm.Vm ERROR vmId=`56848f4a-cd73-4eda-bf79-7eb80ae569a9`::Error getting block
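(When "cannot acquire state change lock" turns up during snapshot removal, a common first check is whether another job is still running against the domain; the domain name and disk target below are placeholders.)

  virsh domjobinfo vm01            # is a migration/dump/snapshot job still active?
  virsh blockjob vm01 vda --info   # is a block commit/pull still running on vda?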
2023 Oct 29
1
State of the gluster project
29.10.2023 00:07, Zakhar Kirpichenko wrote:
> I don't think it's worth it for anyone. It's a dead project since
> about 9.0, if not earlier.
Well, really earlier.
The attempt to get a better gluster, as gluster2 in 4.0, failed...
2017 Jun 03
2
Re: libvirtd not accepting connections
On Sat, Jun 03, 2017 at 05:20:47PM -0400, Michael C Cambria wrote:
>I also tried stopping libvirtd, renaming both qemu-system-i386 and
>qemu-system-x86_64, then starting libvirtd. Things get further along; dnsmasq
>log messages show up.
>
>$ sudo systemctl status libvirtd.service
>● libvirtd.service - Virtualization daemon
> Loaded: loaded