Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!

However, I do have to enable write-back or write-through caching in qemu
before the VMs will start; I believe this is to do with aio support. Not a
problem for me.

I see there are settings for storage.linux-aio and storage.bd-aio - not
sure whether they are relevant or which ones to play with.

thanks,

--
Lindsay Mathieson
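P.S. In case anyone wants to reproduce this, below is roughly how I am
enabling the cache mode. The VM ID, storage name and volume path are just
examples - adjust for your own setup:

    # Proxmox: switch an existing virtio disk to write-back caching
    qm set 100 --virtio0 gluster-store:vm-100-disk-1.qcow2,cache=writeback

    # Plain qemu equivalent over libgfapi (gluster://host/volume/path):
    qemu-system-x86_64 \
        -drive file=gluster://node1/datastore/images/test.qcow2,if=virtio,cache=writethrough

With the Proxmox default of "no cache" (cache=none, which opens the image
with O_DIRECT) the VM refuses to start; write-back and write-through both
work.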
On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!

Is that an update on the gluster side or the proxmox side? I would be
interested to try that out too, and I'm using directsync in any case, so
it'd be okay.

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
Have there been any release notes or bug reports indicating that the
removal of aio support was intentional?

In the case of proxmox it seems to be a fairly easy workaround. In the
case of oVirt, however, I can change the cache method per VM with a custom
property key, but the dd process that tests storage backends has the
direct flag hard-coded in the python scripts, from what I have found so
far. I could potentially swap to nfs-ganesha, but again, in oVirt,
exporting and importing a storage domain with a different protocol is not
something you want to be doing if you can avoid it; I'd probably end up
creating a second gluster volume and have to migrate disk by disk.

Just trying to figure out the roadmap for this and what resolution I
should ultimately be heading for.

David Gossage
Carousel Checks Inc. | System Administrator
Office: 708.613.2284

On Sat, Jul 9, 2016 at 7:49 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
>
> However, I do have to enable write-back or write-through caching in qemu
> before the VMs will start; I believe this is to do with aio support. Not a
> problem for me.
>
> I see there are settings for storage.linux-aio and storage.bd-aio - not
> sure whether they are relevant or which ones to play with.
>
> thanks,
>
> --
> Lindsay Mathieson
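For reference, from what I can tell the storage test in question boils
down to a dd along these lines. The mount path below is a placeholder,
and this is only my reading of the scripts, not the exact invocation:

    # Roughly what the hard-coded check does: write a small block with
    # the direct flag (O_DIRECT), bypassing the page cache entirely.
    dd if=/dev/zero \
       of=/rhev/data-center/mnt/glusterSD/node1:_datastore/__DIRECT_IO_TEST__ \
       bs=4096 count=1 oflag=direct

If direct I/O fails on the volume, that check fails and the storage domain
gets flagged as having a problem, regardless of what cache mode the VMs
themselves are using.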
On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
>
> However, I do have to enable write-back or write-through caching in qemu
> before the VMs will start; I believe this is to do with aio support. Not a
> problem for me.
>
> I see there are settings for storage.linux-aio and storage.bd-aio - not
> sure whether they are relevant or which ones to play with.

Both storage.*-aio options are used by the brick processes. Depending on
what type of brick you have (linux = filesystem, bd = LVM volume group)
you would enable one or the other.

We do strongly suggest setting these "gluster volume set ... group virt"
options:

  https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example

Of those options, network.remote-dio seems most related to your aio
theory. It was introduced with http://review.gluster.org/4460, which
contains some more details.

HTH,
Niels
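P.S. applying those looks like this - replace "datastore" with your own
volume name:

    # Apply the whole recommended virt-store group of options in one go
    # (the group file ships with the glusterfs packages):
    gluster volume set datastore group virt

    # Or enable only the option most likely related to the aio behaviour:
    gluster volume set datastore network.remote-dio enable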