Michal Prívozník wrote:
> On 6/12/23 20:17, Jerry Buburuz wrote:
>> Just found my issue.
>> After I removed the cephfs mounts it worked!
>> I will debug ceph.
>> I assumed because I could touch files on mounted cephfs it was working.
>> Now virsh list works!
>
> Out of curiosity. Do you perhaps have a storage pool defined over
> cephfs? I can see two possible sources for the problem:
>
> 1) autostarted storage pool that makes libvirt mount cephfs, or
My storage is hard mounted in fstab. This works and it does mount on boot.
# fstab on hypervisor
user@.mynamefs-01=/ /data ceph noatime,_netdev 0 0
user@.mynamefs-02=/ /data2 ceph noatime,_netdev 0 0
> 2) a storage pool defined over a path where cephfs is mounted.
> The problem with 1) is obvious (in fact it's not specific to ceph, if
> it was NFS/iSCSI and the server wasn't responding then libvirtd would
> just hang).
I agree: with NFS, cephfs, or any other network storage, if it's not
available, pools defined in libvirtd will cause problems.
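(In case it helps anyone else checking the same thing: "virsh pool-list
--all" shows the autostart flag for every pool, and autostart can be
turned off per pool. "POOL" below is just a placeholder name, not one of
my pools:

  virsh pool-list --all          # State and Autostart columns per pool
  virsh pool-autostart POOL --disable
)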
> The problem with 2) is that for some types of storage pools ('dir'
> typically) libvirt assumes they are always 'running'. And proceeds to
> enumerate volumes in that pool (i.e. files under the dir). And if
> there's a stale mount point, this might get libvirtd stuck. But again,
> this is not limited to ceph, any network FS might do this.
>
> Michal
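(For context, this is roughly how such a 'dir' pool over a cephfs-mounted
path gets defined; the pool name and target path here are only examples,
not my actual setup:

  virsh pool-define-as iso dir --target /data/iso
  virsh pool-start iso
  virsh pool-autostart iso
)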
In my case I built this hypervisor using cephfs as my primary storage for
virtual machines over the past year. It has worked until recently.
Recently I had an issue with my ceph cluster, which likely caused a stale
mount. In the past, if the storage went offline, then after fixing the
issue I had no problems and my virtual machines came back to life.
In the current case I found:
* cephfs working as normal, healthy.
* hypervisors mounting via fstab as usual.
* libvirtd starts normally with no errors (even with cephfs mounted).
* virsh fails to connect to libvirtd when the cephfs is mounted.
If I umount /cephfs and systemctl restart libvirtd, virsh works!
For example "virsh list", "virsh version", etc.
I am going to try deleting the iso pools I have.
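(Roughly, assuming a pool named "iso"; mine may be named differently:

  virsh pool-destroy iso         # stop the active pool
  virsh pool-undefine iso        # remove its persistent definition
)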
One interesting thing I found yesterday: after I restarted the hypervisor
and tried to umount /cephfs, libvirtd had one of my pools locked and open.
It's the pool with the ISOs in it. I know someone in a previous response
to me mentioned they had issues with ISOs stored on cephfs and libvirtd.
In order for me to umount /cephfs I have to stop all the libvirtd services
(libvirtd, libvirtd.socket, libvirtd-ro.socket, and libvirtd-admin.socket).
This makes sense, since libvirtd has pools defined on my cephfs.
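(For the record, the sequence that lets the umount succeed; unit names as
on my install, yours may differ:

  fuser -vm /cephfs              # show what still holds the mount
  systemctl stop libvirtd.service libvirtd.socket \
      libvirtd-ro.socket libvirtd-admin.socket
  umount /cephfs
)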
thanks
jerry