Displaying 20 results from an estimated 10000 matches similar to: ""other names" - how to clean/get rid of ?"
2017 Aug 02
0
"other names" - how to clean/get rid of ?
Are you referring to the other names in peer status output? If so, a
peerinfo entry having other names populated means the peer might have
multiple network interfaces, or reverse address resolution is picking up
this name. But why are you worried about this part?
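For what it's worth, those recorded names can be inspected directly in
glusterd's peer store (assuming the default /var/lib/glusterd location):

  # each peer file records every hostname glusterd has learned for that peer
  $ grep -H 'hostname' /var/lib/glusterd/peers/*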
On Tue, 1 Aug 2017 at 23:24, peljasz <peljasz at yahoo.co.uk> wrote:
> hi
>
> how to get rid of entries in "Other
2017 Aug 01
4
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
How critical is the above?
I get plenty of these on all three peers.
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
behavior.
I see: $ gluster vol status $_vol detail; takes a long time and
mostly times out.
I do:
$ gluster vol heal $_vol info
and I see:
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number
2017 Aug 02
0
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
This means the shd (self-heal daemon) client is not able to establish a
connection with the brick on port 49155. This could happen if glusterd
ended up providing a stale port which is not what the brick is listening
on. If you killed any brick process using SIGKILL instead of SIGTERM, this
is expected, as portmap_signout is not received by glusterd in the former
case and the old portmap entry is
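A minimal way to check for such a stale port (using the port from the
error above; the volume name is a placeholder):

  # the port glusterd advertises for the brick
  $ gluster volume status $_vol
  # the ports the brick processes are actually listening on
  $ ss -tlnp | grep glusterfsd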
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin,
I've gotten around to this and was able to get the upgrade done using
3.7.0 before moving to 3.11. For some reason 3.7.9 wasn't working well.
On 3.11, though, I notice that gluster/nfs has been made optional and
nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha
on new clusters but would like to have glusterfs-gnfs on existing clusters
so a seamless upgrade
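For context, keeping gluster/nfs alive on 3.11 should still be possible;
the gnfs bits were just split into a separate package (the package name
below assumes the CentOS/RHEL packaging, while the volume option is
standard):

  # install the split-out built-in NFS server
  $ yum install glusterfs-gnfs
  # re-enable gluster/nfs for a volume
  $ gluster volume set <volname> nfs.disable off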
2017 May 29
1
Failure while upgrading gluster to 3.10.1
Sorry for the big attachment in the previous mail... the last 1000 lines
of those logs are attached now.
On Mon, May 29, 2017 at 4:44 PM, Pawan Alwandi <pawan at platform.sh> wrote:
>
>
> On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>
>>
>> On Thu, 25 May 2017 at 19:11, Pawan Alwandi <pawan at platform.sh> wrote:
>>
2018 Jan 22
2
BUG: After stop and start wrong port is advertised
Ouch! Yes, I see two port-related fixes in the GlusterFS 3.12.3 release
notes [0][1][2]. I've attached a tarball of all of yesterday's logs from
/var/log/glusterd on one of the affected nodes (called "wingu3"). I hope
that's what you need.
[0]
https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.3.md
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1507747
2018 Jan 23
2
BUG: After stop and start wrong port is advertised
Hello,
Will we also suffer from this regression in any of the (previously) fixed 3.10 releases? We kept 3.10 and hope to stay stable :/
Regards
Jo
-----Original message-----
From: Atin Mukherjee <amukherj at redhat.com>
Sent: Tue 23-01-2018 05:15
Subject: Re: [Gluster-users] BUG: After stop and start wrong port is advertised
To: Alan Orth <alan.orth at gmail.com>;
CC: Jo
2018 Jan 21
2
BUG: After stop and start wrong port is advertised
For what it's worth, I just updated some CentOS 7 servers from GlusterFS
3.12.1 to 3.12.4 and hit this bug. Did the patch make it into 3.12.4? I had
to use Mike Hulsman's script to check the daemon port against the port in
the volume's brick info, update the port, and restart glusterd on each
node. Luckily I only have four servers! Hoping I don't have to do this
every time I
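For anyone else hitting this, a rough sketch of the same check (this is
not Mike Hulsman's actual script; volume name, brick file, and port are
placeholders):

  # per-brick TCP ports as glusterd advertises them
  $ gluster volume status <volname> detail | grep -E 'Brick|TCP Port'
  # ports the local brick processes really listen on
  $ ss -tlnp | grep glusterfsd
  # on a mismatch: fix the port in the brick info and restart glusterd
  $ sed -i 's/^listen-port=.*/listen-port=<real-port>/' \
      /var/lib/glusterd/vols/<volname>/bricks/<brick-file>
  $ systemctl restart glusterd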
2018 Jan 23
0
BUG: After stop and start wrong port is advertised
3.10 doesn't have this regression, so you're safe.
On Tue, Jan 23, 2018 at 1:28 PM, Jo Goossens <jo.goossens at hosted-power.com>
wrote:
> Hello,
>
> Will we also suffer from this regression in any of the (previously) fixed
> 3.10 releases? We kept 3.10 and hope to stay stable :/
>
> Regards
>
> Jo
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote:
> I thought you had removed vna as defective and then ADDED in vnh as
> the replacement?
>
> Why is vna still there?
Because I *can't* remove it. It died and was unable to be brought up. The
gluster peer detach command only works with live servers - a severe
problem IMHO.
--
Lindsay Mathieson
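For the archives: peer detach does have a force variant meant for exactly
this dead-peer case, assuming the dead peer's bricks have already been
removed from every volume:

  $ gluster peer detach <dead-host> force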
2018 Jan 23
0
BUG: After stop and start wrong port is advertised
So from the logs it looks to be a regression caused by commit 635c1c3,
and the good news is that this is now fixed in the release-3.12 branch and
should be part of 3.12.5.
Commit which fixes this issue:
COMMIT: https://review.gluster.org/19146 committed in release-3.12 by
"Atin Mukherjee" <amukherj at redhat.com> with a commit message -
glusterd: connect to an existing brick
2018 Jan 22
0
BUG: After stop and start wrong port is advertised
The patch was definitely there in 3.12.3. Do you have the glusterd and
brick logs handy from when this happened?
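As a side note, one way to confirm whether a given fix made it into a
release tag (assuming a local clone of the glusterfs git repo):

  # list commits between two release tags and search for the fix
  $ git log --oneline v3.12.3..v3.12.4 | grep -i portmap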
On Sun, Jan 21, 2018 at 10:21 PM, Alan Orth <alan.orth at gmail.com> wrote:
> For what it's worth, I just updated some CentOS 7 servers from GlusterFS
> 3.12.1 to 3.12.4 and hit this bug. Did the patch make it into 3.12.4? I had
> to use Mike Hulsman's
2017 Oct 30
2
BUG: After stop and start wrong port is advertised
On Sat, 28 Oct 2017 at 02:36, Jo Goossens <jo.goossens at hosted-power.com>
wrote:
> Hello Atin,
>
> I just read it and am very happy you found the issue. We really hope this
> will be fixed in the next 3.10.7 version!
>
3.10.7 - no, I guess not, as the patch is still in review and 3.10.7 is
getting tagged today. You'll get this fix in 3.10.8.
2017 Dec 02
1
BUG: After stop and start wrong port is advertised
On Sat, 2 Dec 2017 at 19:29, Jo Goossens <jo.goossens at hosted-power.com>
wrote:
> Hello Atin,
>
> Could you confirm this should have been fixed in 3.10.8? If so we'll test
> it for sure!
>
The fix should be part of 3.10.8, which is awaiting its release announcement.
>
> Regards
>
> Jo
>
> -----Original
2017 Aug 24
6
Glusterd proccess hangs on reboot
Here you can find 10 stack trace samples from glusterd. I waited 10
seconds between each trace.
https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0
Content of the first stack trace is here:
Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)):
#0 0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0
#1 0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0
#2
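For anyone wanting to reproduce this kind of capture, a loop along these
lines (assuming pstack from gdb is installed) is enough:

  # take 10 samples of glusterd's stacks, 10 seconds apart
  $ for i in $(seq 1 10); do pstack $(pidof glusterd) > glusterd_pstack_$i.txt; sleep 10; done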
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson at gmail.com>
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died and was unable to be brought up. The
> gluster peer detach command
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required.
Please check whether the brick process went down or crashed. Doing a
volume start force should resolve the issue.
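A minimal sketch of that sequence (the volume name is a placeholder):

  # confirm the brick shows as offline
  $ gluster volume status <volname>
  # restart only the dead brick processes; running bricks are untouched
  $ gluster volume start <volname> force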
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that peer (while it was unavailable) got
detached with the "gluster peer detach" command, which succeeded,
so the cluster now comprises three peers
3) the self-heal daemon (for some reason) does not start (despite an
attempt to restart glusterd) on the peer which probed that
fourth peer.
4) fourth
2017 Sep 04
2
Glusterd proccess hangs on reboot
On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >1. On the 80-node cluster, did you reboot only one node or multiple ones?
> Tried both; the result is the same, but the logs/stacks are from stopping
> and starting glusterd on only one server while the others are running.
>
> >2. Are you sure that pstack output was always constantly pointing on
>
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of the command "sysctl
net.ipv4.ip_local_reserved_ports"?
Apart from the command output, please send the logs so we can look into the issue.
Thanks
Gaurav
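For reference, the check (and a temporary reset, which does not persist
across reboots) looks like this on the affected peer; peer probe can fail
when the reserved list overlaps the ports glusterd needs:

  $ sysctl net.ipv4.ip_local_reserved_ports
  # clear the reservation as a workaround
  $ sysctl -w net.ipv4.ip_local_reserved_ports=""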
On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> +Gaurav, he is the author of the patch, can you please comment here?
>
>
> On Thu, Jun 15, 2017 at 3:28