Displaying 7 results from an estimated 7 matches for "b2d3".
2018 Feb 07
2
Ip based peer probe volume create error
...peer. But node1 is visible to the other nodes by its name.
node 1 peer status is
root at pri:/var/log/glusterfs# gluster peer status
Number of Peers: 2
Hostname: 51.15.90.60
Uuid: eed0d6c6-90ef-4705-b3f9-0b028d769df3
State: Peer in Cluster (Connected)
Hostname: 163.172.151.120
Uuid: 42bc6ec5-5826-4a40-b2d3-8a34e6caabb7
State: Peer in Cluster (Connected)
node 2
root at sec:/var/log/glusterfs# gluster peer status
Number of Peers: 2
Hostname: pri.ostechnix.lan
Uuid: 700fb496-4270-4e2a-878e-1a93ea0a5d0c
State: Peer in Cluster (Connected)
Other names:
51.15.77.14
Hostname: 163.172.151.120
Uuid: 42bc6...
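As a side note, the peer-status output above can be checked mechanically. A minimal sketch, using the sample output from this thread embedded in a function (since a live `gluster peer status` is not available here), which counts the peers reporting "Peer in Cluster (Connected)" against the declared peer count:

```shell
# Sketch: verify every peer reports "Peer in Cluster (Connected)".
# In practice, pipe the real `gluster peer status` output instead of
# this sample, which is copied from the thread for illustration.
sample_status() {
cat <<'EOF'
Number of Peers: 2

Hostname: 51.15.90.60
Uuid: eed0d6c6-90ef-4705-b3f9-0b028d769df3
State: Peer in Cluster (Connected)

Hostname: 163.172.151.120
Uuid: 42bc6ec5-5826-4a40-b2d3-8a34e6caabb7
State: Peer in Cluster (Connected)
EOF
}

# Declared peer count is the 4th field of "Number of Peers: N".
expected=$(sample_status | awk '/^Number of Peers:/ {print $4}')
# Count peers that are actually in the connected state.
connected=$(sample_status | grep -c 'State: Peer in Cluster (Connected)')

if [ "$connected" -eq "$expected" ]; then
  echo "all $connected peers connected"
else
  echo "only $connected of $expected peers connected" >&2
fi
```

Running the same check on every node would surface a peer that is probed on one node but missing or disconnected on another, which is the symptom described in this thread.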
2018 Feb 07
0
Ip based peer probe volume create error
...> node 1 peer status is
>
> root at pri:/var/log/glusterfs# gluster peer status
> Number of Peers: 2
>
> Hostname: 51.15.90.60
> Uuid: eed0d6c6-90ef-4705-b3f9-0b028d769df3
> State: Peer in Cluster (Connected)
>
> Hostname: 163.172.151.120
> Uuid: 42bc6ec5-5826-4a40-b2d3-8a34e6caabb7
> State: Peer in Cluster (Connected)
>
> node 2
>
> root at sec:/var/log/glusterfs# gluster peer status
> Number of Peers: 2
>
> Hostname: pri.ostechnix.lan
> Uuid: 700fb496-4270-4e2a-878e-1a93ea0a5d0c
> State: Peer in Cluster (Connected)
> Other names:...
2018 Feb 07
0
Ip based peer probe volume create error
Hi,
Could you please send me the glusterd logs and cmd-history logs from all
the nodes?
Thanks
Gaurav
On Wed, Feb 7, 2018 at 5:13 PM, Ercan Aydoğan <ercan.aydogan at gmail.com>
wrote:
> Hello,
>
> I have 3 dedicated GlusterFS 3.11.3 nodes. I can create volumes if I
> peer probe with hostnames.
>
> But an IP-based probe is not working.
>
> What's the correct
2018 Feb 07
2
Ip based peer probe volume create error
Hello,
I have 3 dedicated GlusterFS 3.11.3 nodes. I can create volumes if I peer probe with hostnames.
But an IP-based probe is not working.
What are the correct /etc/hosts and hostname values for an IP-based peer probe?
Do I still need the peers' FQDNs in /etc/hosts?
I need advice on how to fix this issue.
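For context, a minimal sketch of what an IP-based setup might look like. The IPs are taken from the peer-status output earlier in the thread; the hostnames beyond pri.ostechnix.lan, the brick paths, and the volume name are hypothetical, so treat this as an illustration of the shape of the configuration, not a verified fix:

```shell
# /etc/hosts on each node -- map every peer's IP to a stable name
# (names other than pri.ostechnix.lan are hypothetical here):
# 51.15.77.14     pri.ostechnix.lan
# 51.15.90.60     sec.ostechnix.lan
# 163.172.151.120 thr.ostechnix.lan

# Probe by IP from one node, then verify from every node:
gluster peer probe 51.15.90.60
gluster peer probe 163.172.151.120
gluster peer status

# Use the same form (IP or name) in volume create that the peers
# were probed with; brick paths and volume name are assumptions:
gluster volume create testvol replica 3 \
    51.15.77.14:/bricks/b1 \
    51.15.90.60:/bricks/b1 \
    163.172.151.120:/bricks/b1
```

The mixed output in the thread (one peer listed by FQDN with the IP under "Other names:", the other by bare IP) suggests the nodes resolve each other inconsistently, which is the usual reason hostname-based probes succeed while IP-based ones fail at volume-create time.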
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
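The getfattr check Karthik asks for is run per brick, directly on the brick's backing filesystem. A sketch, assuming a hypothetical brick path of /bricks/home and a hypothetical file path relative to the brick root:

```shell
# Run on each brick server; the paths below are assumptions:
getfattr -d -e hex -m . /bricks/home/path/to/unhealed-file

# The trusted.afr.* attributes in the output record pending-heal
# counters; comparing them across the bricks shows which copy the
# self-heal daemon considers good. The requested logs live under
# /var/log/glusterfs/ on the same nodes (glustershd.log and the
# glfsheal log for the volume).
```

Collecting the xattr dump from every brick for the same file is what lets the developers tell an ordinary pending heal apart from a split-brain.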
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...935C67BFC096491F8DC217799108 (39e1cb5c-af66-4aeb-b7e9-a01dddd046c9) on home-client-2
[2017-10-25 10:14:20.032169] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/5FA5978869A12BD2A7131418AE4C3F399605C7DE (0b2d3602-c888-42a7-9903-13eab27dc6ef) on home-client-2
[2017-10-25 10:14:20.051033] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/8CED5E0E251EE77FE153F4E5F7DA85C084BDB92D (e323113a-4852-43e9-869d-dbbf845911ad...