md-caching is not a panacea for your case, but it could help to some extent.

The difference between a thin and a usual arbiter is that the thin arbiter only comes into play when it's needed (one of the data bricks is down), so the thin arbiter's latency won't affect you as long as both data bricks are running.

Keep in mind that the thin arbiter is less widely used. For example, I have never deployed one.

Best Regards,
Strahil Nikolov

On Tue, Aug 3, 2021 at 7:40, David Cunningham <dcunningham at voisonics.com> wrote:

Hi Strahil,

I registered and read the article, thank you. It looks like it would help the speed of directory listings and related operations, but I don't see anything to suggest that the other nodes won't be checked on file reads. Am I missing something?

We would probably use something like 2 nodes nearby and 1 remote. I understand that the thin arbiter keeps track of which nodes are online, but I can't see how that will help with file reads and nodes being checked for consistency. Are you able to explain, please?

Thanks.

On Mon, 2 Aug 2021 at 17:45, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

Hi David,

Can you register at developers.redhat.com and check the article about md-cache? I think that for most cases the caching should be sufficient to avoid looking up the remote node.

By the way, are at least 2 of the nodes nearby (lower latency)? If only 1 node is 'remote', then you can give Gluster's thin arbiter a try (for the 'remote' node).

Best Regards,
Strahil Nikolov

On Mon, Aug 2, 2021 at 5:02, David Cunningham <dcunningham at voisonics.com> wrote:

Hi Ravi and Strahil,

Thanks again for your responses. Having one brick be the one to read from (but with failover if that node goes completely offline) would be great if we could do it per client, but it won't work if all clients have to use the same setting.

I'm not sure about gNFS, but normal NFS is something we've thought about as an option. I'm not sure it will help though, because if the client NFS-mounts the server which has the brick, then when it does a read, presumably the brick will be checked for consistency against the other bricks and latency will be a problem again. If my understanding is correct, even with choose-local enabled the other bricks will still be checked, so the problem is not solved.

I confess that AFR vs eventual consistency is beyond my understanding of replication. In the world of SQL there is Galera cluster, which writes to all nodes but for reads only checks the node the client is actually connected to. That's the sort of functionality we'd find really helpful for our use case.

On Fri, 30 Jul 2021 at 19:21, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

Hi David,

md-cache will just save some lookup operations across the bricks, but it won't save you from all cases.

Using gNFS + cluster.choose-local is worth exploring, but as gNFS is deprecated, I never checked whether it is affected by the latency of the last brick.

What Ravi proposed looks promising, but it has some drawbacks - for example, if a brick dies, the FUSE clients have to be adjusted to read from another brick.

Ravi, I think this topic was already discussed once.
Best Regards,
Strahil Nikolov
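[For readers of the archive: the md-cache tuning referred to above is usually enabled with a handful of volume options. A minimal sketch, assuming a volume called gvol0 (a placeholder name) and a reasonably recent Gluster release - the exact option set can differ between versions, so check 'gluster volume set help' on your own build:

    # cache stat/xattr metadata on the client side for longer
    gluster volume set gvol0 performance.md-cache-timeout 600
    gluster volume set gvol0 performance.stat-prefetch on
    # let the bricks send upcall invalidations so cached entries stay consistent
    gluster volume set gvol0 features.cache-invalidation on
    gluster volume set gvol0 features.cache-invalidation-timeout 600
    gluster volume set gvol0 performance.cache-invalidation on
    # keep more inodes cached on the bricks so invalidation can work effectively
    gluster volume set gvol0 network.inode-lru-limit 200000

This only reduces repeated lookups across the bricks; it does not change which bricks AFR consults when a file is actually read, which is the distinction discussed in the thread.]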
Hi Strahil,

Thanks for that. If a thin arbiter was the only node with higher latency it would help, but unfortunately there would be full replica nodes too. It may be that GlusterFS just can't do what we're hoping for.

On Wed, 4 Aug 2021 at 05:51, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> md-caching is not a panacea for your case, but it could help to some
> extent.
>
> The difference between a thin and a usual arbiter is that the thin
> arbiter only comes into play when it's needed (one of the data bricks
> is down), so the thin arbiter's latency won't affect you as long as
> both data bricks are running.
>
> Keep in mind that the thin arbiter is less widely used. For example, I
> have never deployed one.
>
> Best Regards,
> Strahil Nikolov

--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
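[For completeness, since the thin arbiter keeps coming up in this thread: a replica 2 volume with a thin-arbiter tie-breaker is created roughly as below. Hostnames, brick paths and the volume name are placeholders, and the command is a sketch based on the upstream thin-arbiter documentation, so verify the syntax against your Gluster version:

    # two data bricks plus a lightweight tie-breaker that only stores a replica-id file
    gluster volume create gvol0 replica 2 thin-arbiter 1 \
        node1:/data/glusterfs/brick1 \
        node2:/data/glusterfs/brick1 \
        tiebreaker:/data/glusterfs/ta
    gluster volume start gvol0

Because the tie-breaker is only consulted when one of the data bricks is unavailable, its latency should not matter during normal operation, which is the behaviour Strahil describes above.]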
On 2021-08-03 19:51, Strahil Nikolov wrote:
> The difference between a thin and a usual arbiter is that the thin
> arbiter only comes into play when it's needed (one of the data bricks
> is down), so the thin arbiter's latency won't affect you as long as
> both data bricks are running.
>
> Keep in mind that the thin arbiter is less widely used. For example, I
> have never deployed one.

Maybe I am horribly wrong, but local-node reads should *not* involve the other nodes in any manner - i.e. no checksum or voting is done for a read. AFR hashing should spread different files to different nodes when doing striping, but for mirroring any node should have a valid copy of the requested data. So when using choose-local, all reads which can really be local (i.e. the requested file is available locally) should not suffer from remote-party latency.

Is that correct?
Thanks.

--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
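[To make the choose-local question concrete, the knobs being discussed look roughly like this. The volume name is a placeholder, and the read-hash-mode values and their meaning vary between releases, so treat this as a sketch and confirm with 'gluster volume set help':

    # prefer a brick local to (or nearest) the client when AFR picks its read child
    gluster volume set gvol0 cluster.choose-local on
    # alternatively, influence how AFR selects the brick it reads from;
    # accepted values and semantics differ per version
    gluster volume set gvol0 cluster.read-hash-mode 1

As the thread above suggests, the initial lookup may still contact every replica; choose-local affects which brick serves the actual read data, which is essentially the distinction David and Gionatan are asking about.]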