similar to: gluster peer probe failing

Displaying 20 results from an estimated 1000 matches similar to: "gluster peer probe failing"

2017 Jun 15
0
gluster peer probe failing
https://review.gluster.org/#/c/17494/ will fix it, and the next update of 3.10 should have this fix. If sysctl net.ipv4.ip_local_reserved_ports has any value greater than the short int range then this would be a problem with the current version. Would you be able to reset the reserved ports temporarily to get this going? On Wed, Jun 14, 2017 at 8:32 PM, Guy Cukierman <guyc at elminda.com> wrote: >
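The temporary reset suggested above can be done at runtime with sysctl. A minimal sketch, assuming the reserved range is the 30000-32767 that appears later in this thread and that it can safely be released while the probe is retried (the changes are runtime-only and are not persisted across reboots):

  # check the currently reserved range
  sysctl net.ipv4.ip_local_reserved_ports

  # temporarily clear the reservation, then retry the peer probe
  echo "" > /proc/sys/net/ipv4/ip_local_reserved_ports

  # restore the original range afterwards
  sysctl -w net.ipv4.ip_local_reserved_ports="30000-32767"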
2017 Jun 15
2
gluster peer probe failing
Thanks, but my current settings are: net.ipv4.ip_local_reserved_ports = 30000-32767 net.ipv4.ip_local_port_range = 32768 60999 meaning the reserved ports are already in the short int range, so maybe I misunderstood something? Or is it a different issue? From: Atin Mukherjee [mailto:amukherj at redhat.com] Sent: Thursday, June 15, 2017 10:56 AM To: Guy Cukierman <guyc at elminda.com> Cc:
2017 Jun 15
0
gluster peer probe failing
+Gaurav, he is the author of the patch, can you please comment here? On Thu, Jun 15, 2017 at 3:28 PM, Guy Cukierman <guyc at elminda.com> wrote: > Thanks, but my current settings are: > > net.ipv4.ip_local_reserved_ports = 30000-32767 > > net.ipv4.ip_local_port_range = 32768 60999 > > meaning the reserved ports are already in the short int range, so maybe I >
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of the command "sysctl net.ipv4.ip_local_reserved_ports"? Apart from the output of the command, please also send the logs so we can look into the issue. Thanks Gaurav On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > +Gaurav, he is the author of the patch, can you please comment here? > > > On Thu, Jun 15, 2017 at 3:28
2017 Jun 20
2
gluster peer probe failing
Hi, I have tried this on my host by setting the corresponding ports, but I didn't see the issue on my machine locally. However, with the logs you have sent it is pretty much clear that the issue is related to ports only. I will try to reproduce it on some other machine. Will update you as soon as possible. Thanks Gaurav On Sun, Jun 18, 2017 at 12:37 PM, Guy Cukierman <guyc at elminda.com> wrote: >
2017 Jun 20
0
gluster peer probe failing
Hi, I am able to recreate the issue and here is my RCA. The maximum value, i.e. 32767 (the largest value a signed 16-bit integer can hold), overflows while manipulations are done on it, and this was previously not handled properly. Hence glusterd was crashing with SIGSEGV. The issue is being fixed with "https://bugzilla.redhat.com/show_bug.cgi?id=1454418" and is being backported as well. Thanks Gaurav On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
2017 Jun 18
0
gluster peer probe failing
Hi, Below please find the reserved ports and log, thanks. sysctl net.ipv4.ip_local_reserved_ports: net.ipv4.ip_local_reserved_ports = 30000-32767 glusterd.log: [2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007 [2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]
2017 Jun 20
1
gluster peer probe failing
Thanks Gaurav! 1. Any estimate of when this fix will be released? 2. Any recommended workaround? Best, Guy. From: Gaurav Yadav [mailto:gyadav at redhat.com] Sent: Tuesday, June 20, 2017 9:46 AM To: Guy Cukierman <guyc at elminda.com> Cc: Atin Mukherjee <amukherj at redhat.com>; gluster-users at gluster.org Subject: Re: [Gluster-users] gluster peer probe failing
2017 Jun 15
1
peer probe failures
Hi, I'm having a similar issue; were you able to solve it? Thanks. Hey all, I've got a strange problem going on here. I've installed glusterfs-server on Ubuntu 16.04: glusterfs-client/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic] glusterfs-common/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic] glusterfs-server/xenial,now 3.7.6-1ubuntu1 amd64 [installed] I can
2017 Jun 16
2
About the maintenance time
I am currently using 3.10.2 in a replica configuration. The brick process sometimes does not start when the storage server is restarted. Also, when using gnfs, I/O may hang and become unusable. After checking the release notes of 3.11.0, the following ID seems to be applicable, so please include it in the 3.10 series if it can be backported. Also, for items with high urgency: since 3.11.0 has just been
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained. Ah, this was what I suspected. Understood. I'll be happy with "shard". Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of which also acts as a client. I looked into the logs. I paste lengthy logs below with
2017 Aug 20
2
Glusterd not working with systemd in redhat 7
Hi! I am having the same issue but I am running Ubuntu v16.04. It does not mount during boot, but works if I mount it manually. I am running the Gluster server on the same machines (3 machines) Here is the /etc/fstab file /dev/sdb1 /data/gluster ext4 defaults 0 0 web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,log-level=debug,log-file=/var/log/gluster.log 0 0
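One commonly used way around this boot-time race (the glusterfs entry in fstab is processed before glusterd and the network are fully up) is to let systemd mount the volume lazily on first access instead of at boot. A sketch only, reusing the paths from the fstab line above; the extra options are an assumption on my part, not the fix confirmed in this thread:

  web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,noauto,x-systemd.automount,log-level=debug,log-file=/var/log/gluster.log 0 0

With noauto plus x-systemd.automount, systemd creates an automount unit at boot and performs the actual glusterfs mount on first access, by which time glusterd is normally running.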
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote: > Ji-Hyeon, > > You're saying that "stripe=2 transport=rdma" should work. OK, that > was the first thing I wanted to know. I'll put together logs later this week. Note that "stripe" is not tested much and practically unmaintained. We do not advise you to use it. If you have large files that you
2017 Jun 16
0
About the maintenance time
On 06/16/2017 09:07 AM, te-yamauchi at usen.co.jp wrote: > I am currently using 3.10.2 in a replica configuration. > The brick process sometimes does not start when the storage server is restarted. Also, when using gnfs, I/O may hang and become unusable. > After checking the release notes of 3.11.0, the following ID seems to be applicable, so please include it in the 3.10 series if it can be backported
2017 Jun 12
1
URGENT: Update issues from 3.6.6 to 3.10.2 Accessing files via samba come up with permission denied
Did the logs provide any hints as to what the issue may be? Diego On Sat, Jun 3, 2017 at 12:16 PM, Diego Remolina <dijuremo at gmail.com> wrote: > Thanks for taking the time to look into this. Since we needed downtime > due to the gluster update, we also updated the OS, including samba. We > went from 4.2.x to 4.4.4 and many other packages for CentOS were > updated as well. OS
2017 Sep 08
3
GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told. I'm using 30 seconds for the timeout, and indeed when a node goes down the VMs freeze for 30 seconds, but I've never seen them go read-only because of that. I _only_ use virtio though, maybe it's that. What are you using? On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote: > Back to replica 3 w/o arbiter. Two fio jobs
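For reference, a sketch of how that timeout is usually inspected and changed on a GlusterFS volume, assuming the value being discussed is the network.ping-timeout volume option (default 42 seconds) and using a hypothetical volume name vmstore:

  # show the current value
  gluster volume get vmstore network.ping-timeout

  # set it to the 30 seconds mentioned above
  gluster volume set vmstore network.ping-timeout 30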
2017 Aug 21
0
Glusterd not working with systemd in redhat 7
On Mon, Aug 21, 2017 at 2:49 AM, Cesar da Silva <thunderlight1 at gmail.com> wrote: > Hi! > I am having the same issue but I am running Ubuntu v16.04. > It does not mount during boot, but works if I mount it manually. I am > running the Gluster server on the same machines (3 machines) > Here is the /etc/fstab file > > /dev/sdb1 /data/gluster ext4 defaults 0 0 > >
2017 Aug 24
6
Glusterd proccess hangs on reboot
Here you can find 10 stack trace samples from glusterd. I wait 10 seconds between each trace. https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0 Content of the first stack trace is here: Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)): #0 0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0 #1 0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0 #2
2017 Sep 04
2
Glusterd proccess hangs on reboot
On Mon, 4 Sep 2017 at 20:04, Serkan Çoban <cobanserkan at gmail.com> wrote: > I have been using a 60-server, 1560-brick 3.7.11 cluster without > problems for 1 year. I did not see this problem with it. > Note that this problem does not happen when I install packages & start > glusterd & peer probe and create the volumes. It only shows up after a glusterd > restart. > > Also
2017 Sep 01
2
Glusterd proccess hangs on reboot
Hi, You can find pstack samples here: https://www.dropbox.com/s/6gw8b6tng8puiox/pstack_with_debuginfo.zip?dl=0 Here is the first one: Thread 8 (Thread 0x7f92879ae700 (LWP 78909)): #0 0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0 #1 0x000000310fe37d57 in gf_timer_proc () from /usr/lib64/libglusterfs.so.0 #2 0x0000003d99c07aa1 in start_thread () from /lib64/libpthread.so.0 #3