Just to add some info on that, I did a fresh install of 3.7.12 here (without setting that option) and I don't have a problem starting the VMs.

I do have a problem with libgfapi though: I can't create VMs with qcow disks (I get a timeout), and I can create VMs with raw disks, but when I try to format them with mkfs.ext4 they shut down without any errors. Maybe it's related? Are you using qcow?

I added the volume as NFS and I'm using that without any problem for now with both qcow and raw; maybe you could try that and see if at least your VMs can boot that way.

On Wed, Jun 29, 2016 at 06:25:44PM +1000, Lindsay Mathieson wrote:
> Was able to shut down my gluster and clean reboot.
>
> set:
>
> cluster.shd-max-threads:4
> cluster.locking-scheme:granular
>
> And started one VM. It got halfway booted and froze. A gluster heal
> info returned "Not able to fetch volfile from glusterd".
>
> I killed the VM, stopped the datastore and all gluster processes, then
> started it back up. Heal info was successful and showed 200+ shards
> being healed.
>
> However, a heal info heal-count shows 0 heals.
>
> I stopped the datastore again and reset the settings:
> cluster.shd-max-threads
> cluster.locking-scheme
>
> Waiting for heal to complete before I try again.
>
> Contemplating undoing the upgrade. Can I set the op-version back to 30710?
>
> --
> Lindsay
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
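For anyone following along, the NFS workaround Kevin describes might look like the sketch below. This is only an illustration: the server name `gluster1`, the volume name `datastore`, and the mount point are hypothetical, and it assumes the Gluster-built-in NFS server (NFSv3) available in the 3.7.x series. It needs a live cluster, so treat it as a command fragment, not something to paste blindly.

```shell
# Hypothetical names: "gluster1" is one of the Gluster peers,
# "datastore" is the replicated volume the VM images live on.

# Make sure Gluster's built-in NFS server is enabled on the volume:
gluster volume set datastore nfs.disable off

# Mount the volume over NFSv3 (the built-in server does not speak v4)
# and point the VM storage at this mount instead of libgfapi:
mkdir -p /mnt/datastore-nfs
mount -t nfs -o vers=3,nolock gluster1:/datastore /mnt/datastore-nfs
```

Note that mounting from a single server this way gives no client-side failover; that is presumably why Lindsay asks about redundancy below.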
On 29/06/2016 10:48 PM, Kevin Lemonnier wrote:
> Just to add some info on that, I did a fresh install of 3.7.12 here (without setting that option)
> and I don't have a problem starting the VMs.
>
> I do have a problem with libgfapi though, I can't create VMs with qcow disks (I get a timeout)
> and I can create VMs with raw disks but when I try to format them with mkfs.ext4 they shut down
> without any errors.
> Maybe it's related? Are you using qcow?

Yes, I am.

> I added the volume as NFS and I'm using that without any problem for now with both qcow and raw, maybe
> you could try that, see if at least your VMs can boot that way.

Which NFS server are you using? The standard one built into proxmox/debian? How do you handle redundancy?

That did suggest to me trying the fuse client, which proxmox automatically sets up. I changed my gfapi storage to shared directory storage pointing to the fuse mount.

That is working better: I have several VMs running now, and heal info isn't locking up or reporting any issues.

However, several other VMs won't start; qemu errors out with "Could not read qcow2 header: Operation not permitted", which freaked me out till I manually checked the image with qemu-img, which reported it as fine. Perhaps I need to reboot the cluster again to reset any locks or randomness.

--
Lindsay Mathieson
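In case it helps others hitting the same "Could not read qcow2 header" error, this is roughly the check Lindsay describes, sketched against a hypothetical image path (the `/mnt/pve/...` path below is an assumption based on Proxmox's usual layout, not taken from the actual setup):

```shell
# Hypothetical path to the VM disk on the fuse mount:
IMG=/mnt/pve/datastore/images/100/vm-100-disk-1.qcow2

# Read the header qemu complained about (format, virtual size, cluster size):
qemu-img info "$IMG"

# Full consistency check of the qcow2 metadata (refcounts, L1/L2 tables):
qemu-img check "$IMG"
```

If `qemu-img check` reports the image as clean while qemu still refuses to open it, that points at the access path (locks, permissions, the gfapi layer) rather than at actual image corruption, which matches what Lindsay is seeing.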
On 29/06/2016 10:48 PM, Kevin Lemonnier wrote:
> I do have a problem with libgfapi though, I can't create VMs with qcow disks (I get a timeout)
> and I can create VMs with raw disks but when I try to format them with mkfs.ext4 they shut down
> without any errors.

Downgraded back to 3.7.11 and got everything working again. To me it looks like a libgfapi problem. VMs were working via fuse.

--
Lindsay Mathieson