On 2021-08-03 19:51, Strahil Nikolov wrote:
> The difference between a thin and a usual arbiter is that the thin
> arbiter only comes into action when it's needed (one of the data
> bricks is down), so the thin arbiter's latency won't affect you as
> long as both data bricks are running.
>
> Keep in mind that the thin arbiter is less used. For example, I have
> never deployed a thin arbiter.

Maybe I am horribly wrong, but local-node reads should *not* involve
other nodes in any manner - i.e. no checksum or voting is done for a
read. AFR hashing should spread different files to different nodes when
doing striping, but for mirroring any node should have a valid copy of
the requested data.

So when using choose-local, all reads which can really be local (i.e.
the requested file is available locally) should not suffer from
remote-party latency. Is that correct?

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
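[For reference, choose-local is a per-volume AFR option; a minimal
sketch of enabling it, assuming a replicated volume named "gvol0" (the
volume name is hypothetical):

    # Prefer reading from a brick on this node whenever it holds a
    # copy of the requested file ("gvol0" is a hypothetical name)
    gluster volume set gvol0 cluster.choose-local on
]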
I'm not so sure. Imagine that the local copy needs healing (it is
outdated). Then gluster will check whether the other node's copy is
blaming the local one, and only if it is "GREEN" will it read locally.
This check against the other servers is the slowest part, due to the
latency between the nodes.

I guess the only way is to use the FUSE client mount options and
manually change the source brick.

Another option that comes to my mind is pacemaker with an IPaddr2
resource and the option globally-unique=true. If done properly,
pacemaker will bring the IP up on all nodes, but using IPTABLES
(manipulated automatically by the cluster) only one node will be
active at a time, with a preference for the fastest node. Then the
FUSE client can safely be configured to use that VIP, which in case of
failure (of the fast node) will be moved to another node of the
Gluster TSP. Yet, this will be a very complex design.

Best Regards,
Strahil Nikolov

On Wed, Aug 4, 2021 at 22:28, Gionatan Danti <g.danti at assyoma.it>
wrote:
> On 2021-08-03 19:51, Strahil Nikolov wrote:
>> The difference between a thin and a usual arbiter is that the thin
>> arbiter only comes into action when it's needed (one of the data
>> bricks is down), so the thin arbiter's latency won't affect you as
>> long as both data bricks are running.
>>
>> Keep in mind that the thin arbiter is less used. For example, I have
>> never deployed a thin arbiter.
>
> Maybe I am horribly wrong, but local-node reads should *not* involve
> other nodes in any manner - i.e. no checksum or voting is done for a
> read. AFR hashing should spread different files to different nodes
> when doing striping, but for mirroring any node should have a valid
> copy of the requested data.
>
> So when using choose-local, all reads which can really be local (i.e.
> the requested file is available locally) should not suffer from
> remote-party latency. Is that correct?
>
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
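[A rough sketch of the pacemaker design Strahil describes, using the
pcs shell; the resource name, node name, volume name and addresses
below are all hypothetical, and a location constraint expresses the
preference for the fast node:

    # VIP managed by pacemaker; with globally-unique=true the cloned
    # IPaddr2 agent uses the iptables CLUSTERIP target, so the address
    # is brought up on all nodes but only one instance answers at a
    # time (all names and addresses are hypothetical)
    pcs resource create gluster-vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s
    pcs resource clone gluster-vip clone-max=2 clone-node-max=2 \
        globally-unique=true
    # Prefer the fast node while it is up
    pcs constraint location gluster-vip-clone prefers fast-node=100

The FUSE client would then mount through the VIP, e.g.:

    mount -t glusterfs 192.0.2.10:/gvol0 /mnt/gvol0
]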
Hi Gionatan,

Thanks for that reply. Under normal circumstances there would be
nothing that needs to be healed, but how can local-node know this is
really the case without checking the other nodes?

If using local-node tells GlusterFS not to check other nodes for the
health of the file at all, then this sounds exactly like what we're
looking for, although only for a GlusterFS node that is also a client.
My understanding is that local-node isn't applicable to a machine that
only has the client.

Does anyone know definitively what the case is here? If not, I guess
we would need to test it.

Thank you.

On Thu, 5 Aug 2021 at 07:28, Gionatan Danti <g.danti at assyoma.it>
wrote:

> On 2021-08-03 19:51, Strahil Nikolov wrote:
> > The difference between a thin and a usual arbiter is that the thin
> > arbiter only comes into action when it's needed (one of the data
> > bricks is down), so the thin arbiter's latency won't affect you as
> > long as both data bricks are running.
> >
> > Keep in mind that the thin arbiter is less used. For example, I
> > have never deployed a thin arbiter.
>
> Maybe I am horribly wrong, but local-node reads should *not* involve
> other nodes in any manner - i.e. no checksum or voting is done for a
> read. AFR hashing should spread different files to different nodes
> when doing striping, but for mirroring any node should have a valid
> copy of the requested data.
>
> So when using choose-local, all reads which can really be local (i.e.
> the requested file is available locally) should not suffer from
> remote-party latency. Is that correct?
>
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8

--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
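[Whether anything is actually pending heal can at least be checked
before testing; a minimal check, assuming a volume named "gvol0" (the
name is hypothetical):

    # Lists entries each brick believes need healing; empty output
    # means no copy is blaming another ("gvol0" is hypothetical)
    gluster volume heal gvol0 info
]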