similar to: glusterfs segmentation fault in rdma mode

Displaying 20 results from an estimated 4000 matches similar to: "glusterfs segmentation fault in rdma mode"

2017 Nov 04
0
glusterfs segmentation fault in rdma mode
This looks like there could be some problem requesting / leaking / whatever memory, but without looking at the core it's tough to tell for sure. Note: /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618] Can you open up a bugzilla and get us the core file to review? -b ----- Original Message ----- > From: "???" <21291285 at qq.com> > To:
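For reference, a minimal sketch of pulling a readable backtrace out of a core file with gdb before attaching it to a bugzilla; the binary path, package names, and core location are illustrative and will differ per system:

    # Install debug symbols so the backtrace resolves (CentOS/RHEL, needs yum-utils)
    debuginfo-install glusterfs-fuse glusterfs-libs

    # Load the core against the binary that produced it and dump all thread stacks
    gdb -batch -ex "thread apply all bt full" /usr/sbin/glusterfs /path/to/core > bt.txt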
2017 Nov 06
1
Re: glusterfs segmentation fault in rdma mode
Hi, all. We found a strange problem. Some clients worked normally while some clients couldn't access special files. For example, Client A couldn't create the directory xxx, but Client B could. However, if Client B created the directory, Client A could access it and even delete it. But Client A still couldn't create the same directory later. If I changed the directory name, Client A
2017 Nov 05
0
Re: glusterfs segmentation fault in rdma mode
Hi, If there was only one client, there were no problems even when the traffic was very heavy. But if I used several clients to write to the same volume, then I could see the segmentation fault. I used gdb to debug, but the performance was much lower than in the previous test results, and we couldn't see the errors. We thought that the problem only occurred when multiple clients wrote to the same
2017 Jun 14
2
gluster peer probe failing
Hi, I have a gluster (version 3.10.2) server running on a 3-node (CentOS 7) cluster. Firewalld and SELinux are disabled, and I see I can telnet from each node to the other on port 24007. When I try to create the first peering by running on node1 the command: gluster peer probe <node2 ip address> I get the error: "Connection failed. Please check if gluster daemon is operational."
2017 Jun 15
0
gluster peer probe failing
https://review.gluster.org/#/c/17494/ will fix it, and the next update of 3.10 should have this fix. If sysctl net.ipv4.ip_local_reserved_ports has any value > short int range then this would be a problem with the current version. Would you be able to reset the reserved ports temporarily to get this going? On Wed, Jun 14, 2017 at 8:32 PM, Guy Cukierman <guyc at elminda.com> wrote: >
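A minimal sketch of the temporary reset suggested above; the range shown is the one reported later in this thread, and sysctl -w is runtime-only (it does not survive a reboot):

    # Check the current reserved-port range
    sysctl net.ipv4.ip_local_reserved_ports

    # Temporarily clear it so glusterd can start, then retry the peer probe
    sysctl -w net.ipv4.ip_local_reserved_ports=""

    # Restore the original range once done
    sysctl -w net.ipv4.ip_local_reserved_ports="30000-32767"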
2017 Jun 15
2
gluster peer probe failing
Thanks, but my current settings are: net.ipv4.ip_local_reserved_ports = 30000-32767 net.ipv4.ip_local_port_range = 32768 60999 meaning the reserved ports are already in the short int range, so maybe I misunderstood something? Or is it a different issue? From: Atin Mukherjee [mailto:amukherj at redhat.com] Sent: Thursday, June 15, 2017 10:56 AM To: Guy Cukierman <guyc at elminda.com> Cc:
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of the command "sysctl net.ipv4.ip_local_reserved_ports". Apart from the command output, please send the logs so we can look into the issue. Thanks Gaurav On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > +Gaurav, he is the author of the patch, can you please comment here? > > > On Thu, Jun 15, 2017 at 3:28
2017 Jun 20
2
gluster peer probe failing
Hi, I have tried on my host by setting the corresponding ports, but I didn't see the issue on my machine locally. However, with the logs you have sent it is pretty much clear the issue is related to ports only. I will try to reproduce it on some other machine. Will update you as soon as possible. Thanks Gaurav On Sun, Jun 18, 2017 at 12:37 PM, Guy Cukierman <guyc at elminda.com> wrote: >
2017 Jun 18
0
gluster peer probe failing
Hi, Below please find the reserved ports and log, thanks. sysctl net.ipv4.ip_local_reserved_ports: net.ipv4.ip_local_reserved_ports = 30000-32767 glusterd.log: [2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007 [2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]
2017 Jul 07
2
Rebalance task fails
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help. # gluster volume rebalance gsae_artifactory_cluster_storage start volume rebalance:
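The error output is truncated above; a minimal sketch of where to look next, assuming the default log layout (the rebalance log path can vary by version and distribution):

    # Check per-node rebalance progress and failure counts
    gluster volume rebalance gsae_artifactory_cluster_storage status

    # The actual failure reason is usually in the rebalance log on each node
    less /var/log/glusterfs/gsae_artifactory_cluster_storage-rebalance.log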
2017 Jun 15
0
gluster peer probe failing
+Gaurav, he is the author of the patch, can you please comment here? On Thu, Jun 15, 2017 at 3:28 PM, Guy Cukierman <guyc at elminda.com> wrote: > Thanks, but my current settings are: > > net.ipv4.ip_local_reserved_ports = 30000-32767 > > net.ipv4.ip_local_port_range = 32768 60999 > > meaning the reserved ports are already in the short int range, so maybe I >
2017 Jun 29
1
issue with trash feature and arbiter volumes
Gluster 3.10.2 I have a replica 3 (2+1) volume and I have just seen both data bricks go down (the arbiter stayed up). I had to disable the trash feature to get the bricks to start. I had a quick look on bugzilla but did not see anything that looked similar. I just wanted to check that I was not hitting some known issue and/or doing something stupid, before I open a bug. This is from the brick log:
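A minimal sketch of the workaround described above, using a placeholder volume name "myvol":

    # Disable the trash translator so the bricks can start
    gluster volume set myvol features.trash off

    # Restart any bricks that are still down
    gluster volume start myvol force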
2017 Jun 20
0
gluster peer probe failing
Hi, I am able to recreate the issue and here is my RCA. The maximum value, i.e. 32767, was being overflowed while doing manipulation on it, and this was previously not handled properly. Hence glusterd was crashing with SIGSEGV. The issue is being fixed with "https://bugzilla.redhat.com/show_bug.cgi?id=1454418" and is being backported as well. Thanks Gaurav On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
2017 Jun 20
1
gluster peer probe failing
Thanks Gaurav! 1. Any time estimate on when this fix will be released? 2. Any recommended workaround? Best, Guy. From: Gaurav Yadav [mailto:gyadav at redhat.com] Sent: Tuesday, June 20, 2017 9:46 AM To: Guy Cukierman <guyc at elminda.com> Cc: Atin Mukherjee <amukherj at redhat.com>; gluster-users at gluster.org Subject: Re: [Gluster-users] gluster peer probe failing
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
Hi, I am using glusterfs 3.10.1 with 30 nodes of 36 bricks each and 10 nodes of 16 bricks each in a single cluster. By default I have paused the scrub process so I can run it manually. For the first time, I was trying to run scrub-on-demand and it was running fine, but after some time I decided to pause the scrub process due to high CPU usage and users reporting that folder listing was taking time. But scrub
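For context, a minimal sketch of the scrub commands involved, with a placeholder volume name "myvol":

    # Kick off an on-demand scrub run
    gluster volume bitrot myvol scrub ondemand

    # Check scrubber progress, then pause it if load gets too high
    gluster volume bitrot myvol scrub status
    gluster volume bitrot myvol scrub pause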
2017 Jul 10
2
Rebalance task fails
Hi Nithya, the files were sent to you privately to avoid spamming the list with large attachments. Could someone explain what an index is in Gluster? Unfortunately "index" is a popular word, so googling is not very helpful. Best regards, Szymon Miotk On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at
2017 Aug 07
2
Slow write times to gluster disk
Hi Soumya, We just had the opportunity to try the option of disabling the kernel NFS and restarting glusterd to start gNFS. However, the gluster daemon crashes immediately on startup. What additional information besides what we provide below would help debugging this? Thanks, Pat -------- Forwarded Message -------- Subject: gluster-nfs crashing on start Date: Mon, 7 Aug 2017 16:05:09
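A minimal sketch of the switch described above, assuming a systemd host and a placeholder volume name "myvol"; gNFS cannot register its ports while the kernel NFS server still holds them:

    # Stop and disable the kernel NFS server first
    systemctl stop nfs-server
    systemctl disable nfs-server

    # Enable the gluster NFS server on the volume, then restart glusterd
    gluster volume set myvol nfs.disable off
    systemctl restart glusterd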
2017 Jul 09
0
Rebalance task fails
On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hello everyone, > > > I have a problem rebalancing a Gluster volume. > The Gluster version is 3.7.3. > My 1x3 replicated volume became full, so I've added three more bricks > to make it 2x3 and wanted to rebalance. > But every time I start rebalancing, it fails immediately. > Rebooting the Gluster
2017 Jul 13
2
Rebalance task fails
Hi Nithya, I see index in this context: [2017-07-07 10:07:18.230202] E [MSGID: 106062] [glusterd-utils.c:7997:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index I wonder if there is anything I can do to fix it. I was trying to strace the gluster process but still have no clue what exactly the gluster index is. Best regards, Szymon Miotk On Thu, Jul 13, 2017 at 10:12 AM, Nithya
2017 Jul 13
0
Rebalance task fails
Hi Szymon, I have received the files and will take a look and get back to you. In what context are you seeing index? Thanks, Nithya On 11 July 2017 at 01:15, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hi Nithya, > > the files were sent to priv to avoid spamming the list with large > attachments. > Could someone explain what is index in Gluster? > Unfortunately