Kind of. That just tells the client what other nodes it can use to
retrieve that volume configuration. It's only used during that initial
fetch.
On 8/31/22 8:26 AM, Péter Károly JUHÁSZ wrote:
> You can also add the mount option backupvolfile-server to let the
> client know the other nodes.
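>
> For example, a minimal sketch (gfs2 here stands in for whichever
> other node you choose; newer releases also accept a colon-separated
> list via backup-volfile-servers):
>
> ~~~
> gfs1:gv1  /data/gv1  glusterfs  defaults,backupvolfile-server=gfs2  0 2
> ~~~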
>
> Matthew J Black <duluxoz at gmail.com> wrote on Wed, Aug 31, 2022 at 17:21:
>
> Ah, it all now falls into place: I was unaware that the client
> receives that file upon initial contact with the cluster, and thus
> has that information at hand independently of the cluster nodes.
>
> Thank you for taking the time to educate a poor newbie - it is
> very much appreciated.
>
> Cheers
>
> Dulux-Oz
>
> On 01/09/2022 01:16, Joe Julian wrote:
>>
>> You know when you do a `gluster volume info` and you get the
>> whole volume definition, the client graph is built from the same
>> info. In fact, if you look in /var/lib/glusterd/vols/$volume_name
>> you'll find some ".vol" files. `$volume_name.tcp-fuse.vol` is the
>> configuration that the clients receive from whichever server they
>> initially connect to. You'll notice that file has multiple
>> "type/client" sections, each establishing a tcp connection to a
>> server.
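>>
>> For illustration, one of those "type/client" sections looks roughly
>> like this (the volume and brick names here are made up):
>>
>> ~~~
>> volume gv1-client-0
>>     type protocol/client
>>     option remote-host gfs1
>>     option remote-subvolume /bricks/gv1
>>     option transport-type tcp
>> end-volume
>> ~~~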
>>
>> Sidenote: You can also see in that file how the microkernels are
>> used to build all the logic that forms the volume, which is kinda
>> cool. Back when I first started using gluster, there was no
>> glusterd and you had to build those .vol files by hand.
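>>
>> E.g., the replication logic is just another section stacked on top
>> of the client sections (again, the names are illustrative):
>>
>> ~~~
>> volume gv1-replicate-0
>>     type cluster/replicate
>>     subvolumes gv1-client-0 gv1-client-1 gv1-client-2
>> end-volume
>> ~~~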
>>
>> On 8/31/22 8:04 AM, Matthew J Black wrote:
>>>
>>> Hi Joe,
>>>
>>> Thanks for getting back to me about this; it was helpful and I
>>> really appreciate it.
>>>
>>> I am, however, still (slightly) confused - *how* does the client
>>> "know" the addresses of the other servers in the cluster (for
>>> read or write purposes), when all the client has is the line in
>>> the fstab file: "gfs1:gv1  /data/gv1  glusterfs  defaults  0 2"?
>>> I'm missing something, somewhere, in all of this, and I can't
>>> work out what that "something" is. :-)
>>>
>>> Your help truly is appreciated
>>>
>>> Cheers
>>>
>>> Dulux-Oz
>>>
>>> On 01/09/2022 00:55, Joe Julian wrote:
>>>>
>>>> With a replica volume the client connects and writes to all the
>>>> replicas directly. For reads, when a filename is looked up the
>>>> client checks with all the replicas and, if the file is
>>>> healthy, opens a read connection to the first replica to
>>>> respond (by default).
>>>>
>>>> If a server is shut down, the client receives the tcp messages
>>>> that close the connection. For read operations, it chooses the
>>>> next server. Writes will just continue to the remaining
>>>> replicas (metadata is stored in extended attributes to inform
>>>> future lookups and the self-healer of file health).
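>>>>
>>>> If you're curious, you can inspect those attributes directly on
>>>> any brick (the brick path here is hypothetical; look for the
>>>> trusted.afr.* keys):
>>>>
>>>> ~~~
>>>> getfattr -d -m . -e hex /bricks/gv1/file1
>>>> ~~~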
>>>>
>>>> If a server crashes (no tcp finalization) the volume will pause
>>>> for ping-timeout seconds (42 by default), then continue as
>>>> above. BTW, that 42-second timeout shouldn't be a big deal. The
>>>> MTBF should be sufficiently long that this should still
>>>> easily get you five or six nines.
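>>>>
>>>> The timeout is tunable if you must, roughly like this (but think
>>>> twice before lowering it; premature disconnects are expensive):
>>>>
>>>> ~~~
>>>> gluster volume get gv1 network.ping-timeout
>>>> gluster volume set gv1 network.ping-timeout 42
>>>> ~~~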
>>>>
>>>> On 8/30/22 11:55 PM, duluxoz wrote:
>>>>>
>>>>> Hi Guys & Gals,
>>>>>
>>>>> A Gluster newbie question for sure, but something I just don't
>>>>> "get" (or have missed in the doco, mailing lists, etc):
>>>>>
>>>>> What happens to a Gluster Client when a Gluster Cluster Node
>>>>> goes off-line / fails over?
>>>>>
>>>>> How does the Client "know" to use (connect to) another Gluster
>>>>> Node in the Gluster Cluster?
>>>>>
>>>>> Let me elaborate.
>>>>>
>>>>> I've got four hosts: gfs1, gfs2, gfs3, and client4 sitting on
>>>>> 192.168.1.1/24, .2, .3, and .4 respectively.
>>>>>
>>>>> DNS is set up and working correctly.
>>>>>
>>>>> gfs1, gfs2, and gfs3 form a "Gluster Cluster" with a Gluster
>>>>> Volume (gv1) replicated across all three nodes. This is all
>>>>> working correctly (ie a file (file1) created/modified on
>>>>> gfs1:/gv1 is replicated correctly to gfs2:/gv1 and gfs3:/gv1).
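>>>>>
>>>>> (For reference, the volume was created with something like the
>>>>> following - the brick paths are from memory, not exact:)
>>>>>
>>>>> ~~~
>>>>> gluster volume create gv1 replica 3 gfs1:/bricks/gv1 \
>>>>>     gfs2:/bricks/gv1 gfs3:/bricks/gv1
>>>>> gluster volume start gv1
>>>>> ~~~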
>>>>>
>>>>> client4 has an entry in its /etc/fstab file which reads:
>>>>> "gfs1:gv1  /data/gv1  glusterfs  defaults  0 2". This is also
>>>>> all working correctly (ie client4:/data/gv1/file1 is
>>>>> accessible and replicated).
>>>>>
>>>>> So (and I haven't tested this yet), what happens to
>>>>> client4:/data/gv1/file1 when gfs1 fails (ie is turned off,
>>>>> crashes, etc)?
>>>>>
>>>>> Does client4 "automatically" switch to using one of the other
>>>>> two Gluster Nodes, or do I have something wrong in client4's
>>>>> /etc/fstab file, or an error/mis-configuration somewhere else?
>>>>>
>>>>> I thought about setting some DNS entries along the lines of:
>>>>>
>>>>> ~~~
>>>>> glustercluster  IN  A  192.168.1.1
>>>>> glustercluster  IN  A  192.168.1.2
>>>>> glustercluster  IN  A  192.168.1.3
>>>>> ~~~
>>>>>
>>>>> and having client4's /etc/fstab file read:
>>>>> "glustercluster:gv1  /data/gv1  glusterfs  defaults  0 2", but
>>>>> this is a Round-Robin DNS config and I'm not sure how Gluster
>>>>> treats this situation.
>>>>>
>>>>> So, if people could comment / point me in the correct
>>>>> direction I would really appreciate it - thanks.
>>>>>
>>>>> Dulux-Oz
>>>>>
>>>>>