Lindsay Mathieson
2016-Jul-10 00:45 UTC
[Gluster-users] NIC died migration timetable moved up
On 10/07/2016 5:17 AM, David Gossage wrote:
> Came in this morning to update to 3.7.12 and noticed that 3.7.13 had
> been released. So I shut down the VM's and gluster volumes and updated.
> The update process itself went smoothly, but on starting up the oVirt
> engine the main gluster storage volume didn't activate. I manually
> activated it and it came up, but oVirt wouldn't report how much space
> was used. The oVirt nodes did mount it and allowed me to start VM's,
> but after a few minutes it would claim to be inactive again, even
> though the nodes themselves still had access to the mounted volumes
> and the VM's were still running. Found these errors flooding the
> gluster logs on the nodes.

Hi David, I did a quick test this morning with Proxmox and 3.7.13 and
was able to get it working with the fuse mount *and* libgfapi.

One caveat - you *have* to enable qemu caching, either write-back or
write-through. 3.7.12 & 13 seem to now disable aio support, and qemu
requires that when caching is turned off.

There are settings for aio in gluster that I haven't played with yet.

-- 
Lindsay Mathieson
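For reference, switching an existing Proxmox disk over to write-back
caching is roughly this (the VM ID 100, the scsi0 slot and the
storage/volume names are just placeholders for whatever your setup uses,
and the gluster option is my best guess at the aio knob, untested here):

    # Proxmox: re-attach the gluster-backed disk with write-back caching
    qm set 100 --scsi0 glusterstore:vm-100-disk-1,cache=writeback

    # gluster-side aio setting (posix translator) - I believe this is the
    # one, but I haven't played with it yet, so treat it as a guess
    gluster volume set datastore1 storage.linux-aio on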
On Sat, Jul 9, 2016 at 7:45 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> On 10/07/2016 5:17 AM, David Gossage wrote:
>
>> Came in this morning to update to 3.7.12 and noticed that 3.7.13 had
>> been released. So I shut down the VM's and gluster volumes and updated.
>> The update process itself went smoothly, but on starting up the oVirt
>> engine the main gluster storage volume didn't activate. I manually
>> activated it and it came up, but oVirt wouldn't report how much space
>> was used. The oVirt nodes did mount it and allowed me to start VM's,
>> but after a few minutes it would claim to be inactive again, even
>> though the nodes themselves still had access to the mounted volumes
>> and the VM's were still running. Found these errors flooding the
>> gluster logs on the nodes.
>
> Hi David, I did a quick test this morning with Proxmox and 3.7.13 and
> was able to get it working with the fuse mount *and* libgfapi.
>
> One caveat - you *have* to enable qemu caching, either write-back or
> write-through. 3.7.12 & 13 seem to now disable aio support, and qemu
> requires that when caching is turned off.

I'll see if I can free up a test setup to play around with it some more.
It seems stable at 3.7.11 for now, so I'll probably be spending my time
on getting the disks sharded and getting the 3rd node back in the
cluster.

> There are settings for aio in gluster that I haven't played with yet.
>
> --
> Lindsay Mathieson
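The shard settings I'm planning on applying, for reference (the volume
name datastore1 is just a placeholder, and as I understand it sharding
only applies to files created after it's turned on, so existing disk
images would have to be copied back onto the volume to pick it up):

    # enable sharding and set the shard size on the VM image volume
    gluster volume set datastore1 features.shard on
    gluster volume set datastore1 features.shard-block-size 64MB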
On Sat, Jul 9, 2016 at 7:45 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> On 10/07/2016 5:17 AM, David Gossage wrote:
>
>> Came in this morning to update to 3.7.12 and noticed that 3.7.13 had
>> been released. So I shut down the VM's and gluster volumes and updated.
>> The update process itself went smoothly, but on starting up the oVirt
>> engine the main gluster storage volume didn't activate. I manually
>> activated it and it came up, but oVirt wouldn't report how much space
>> was used. The oVirt nodes did mount it and allowed me to start VM's,
>> but after a few minutes it would claim to be inactive again, even
>> though the nodes themselves still had access to the mounted volumes
>> and the VM's were still running. Found these errors flooding the
>> gluster logs on the nodes.
>
> Hi David, I did a quick test this morning with Proxmox and 3.7.13 and
> was able to get it working with the fuse mount *and* libgfapi.
>
> One caveat - you *have* to enable qemu caching, either write-back or
> write-through. 3.7.12 & 13 seem to now disable aio support, and qemu
> requires that when caching is turned off.
>
> There are settings for aio in gluster that I haven't played with yet.

Comparing the settings you have posted, I noticed I had one difference:

performance.stat-prefetch: off

What effect does this have?

My current line-up:

Options Reconfigured:
features.shard-block-size: 64MB
features.shard: on
server.allow-insecure: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
performance.readdir-ahead: on
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
performance.strict-write-ordering: off
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off

> --
> Lindsay Mathieson
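If anyone wants to compare against their own volumes, checking the
reconfigured options and flipping stat-prefetch looks like this
(GLUSTER1 is a placeholder volume name):

    # show the volume's settings, including the "Options Reconfigured" list
    gluster volume info GLUSTER1

    # toggle stat-prefetch (caching of stat/metadata lookups) on or off
    gluster volume set GLUSTER1 performance.stat-prefetch on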