search for: client_fill_address_famili

Displaying 10 results from an estimated 10 matches for "client_fill_address_famili".

2013 Dec 06
1
replace-brick failing - transport.address-family not specified
Hello, I have what I think is a fairly basic Gluster setup; however, when I try to carry out a replace-brick operation it consistently fails... Here is the command line output: root at osh1:~# gluster volume info media Volume Name: media Type: Replicate Volume ID: 4c290928-ba1c-4a45-ac05-85365b4ea63a Status: Started Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1:
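The "transport.address-family not specified" message in this thread is commonly tied to the transport.address-family option in glusterd's volfile. A minimal sketch of where that option sits and of the 3.x replace-brick syntax, assuming hypothetical host and brick paths (osh2 and /export/media are placeholders, not taken from the truncated excerpt above):
    # /etc/glusterfs/glusterd.vol (assumed excerpt; "inet" pins the transport to IPv4)
    #   option transport.address-family inet
    # 3.x replace-brick invocation with placeholder bricks
    gluster volume replace-brick media osh1:/export/media osh2:/export/media start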
2013 Sep 16
0
gluster replace
So I messed up when building a brick on a gluster 3.3.1 filesystem. Instead of i=512 on the xfs filesystem I set i=256. I realized my mistake after I had already rebalanced the volume. I wanted to remove and replace that brick in order to rebuild it properly; it hadn't failed yet, but I knew that it wasn't good to have i=256. So I attempted to do: gluster volume replace-brick
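For reference, a minimal sketch of rebuilding such a brick with the larger XFS inode size and swapping it in via the 3.3-era replace-brick flow; the device, volume name, and brick paths here are placeholders, not taken from the post:
    mkfs.xfs -f -i size=512 /dev/sdb1                                        # recreate the brick filesystem with 512-byte inodes
    gluster volume replace-brick VOLNAME oldhost:/bricks/b1 newhost:/bricks/b1 start
    gluster volume replace-brick VOLNAME oldhost:/bricks/b1 newhost:/bricks/b1 status
    gluster volume replace-brick VOLNAME oldhost:/bricks/b1 newhost:/bricks/b1 commit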
2014 Sep 30
1
geo-replication 3.5.2 not working on Ubuntu 12.0.4 - transport.address-family not specified
Hi, I am testing geo-replication 3.5.2 by following the instructions from https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md All commands executed successfully without returning any error, but no replication is done from the master to the slave. Enclosed please find the logs from starting the geo-replication volume. At the end of the log,
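As a point of comparison, a minimal sketch of the 3.5 distributed geo-replication setup sequence that admin guide walks through, assuming placeholder names (mastervol, slavehost, and slavevol are hypothetical):
    gluster system:: execute gsec_create                                     # generate the common pem on the master cluster
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status detail   # check whether the session is actually progressing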
2013 Dec 10
1
Error after crash of Virtual Machine during migration
Greetings, Legend: storage-gfs-3-prd - the first gluster. storage-1-saas - the new gluster to which "the first gluster" had to be migrated. storage-gfs-4-prd - the second gluster (which had to be migrated later). I've started the replace-brick command: 'gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared start' During that Virtual
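A minimal sketch of how an interrupted 3.x replace-brick of this kind can be inspected or backed out, reusing the volume and brick names from the excerpt; whether abort is appropriate depends on the state the crash left the migration in:
    gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared status
    gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared abort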
2017 Jun 20
2
gluster peer probe failing
Hi, I have tried this on my host by setting the corresponding ports, but I didn't see the issue on my machine locally. However, from the logs you have sent it is pretty much clear the issue is related to ports only. I will try to reproduce it on some other machine. Will update you as soon as possible. Thanks Gaurav On Sun, Jun 18, 2017 at 12:37 PM, Guy Cukierman <guyc at elminda.com> wrote: >
2017 Jun 18
0
gluster peer probe failing
Hi, Below please find the reserved ports and log, thanks. sysctl net.ipv4.ip_local_reserved_ports: net.ipv4.ip_local_reserved_ports = 30000-32767 glusterd.log: [2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007 [2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]
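For context, the reserved-ports value being discussed is read from the kernel via sysctl; a minimal sketch of inspecting it and (temporarily) changing it, using the same 30000-32767 range shown above:
    sysctl net.ipv4.ip_local_reserved_ports                          # show the current reserved range
    cat /proc/sys/net/ipv4/ip_local_reserved_ports                   # same value via procfs
    sysctl -w net.ipv4.ip_local_reserved_ports="30000-32767"         # set it on the running kernel (not persistent across reboot)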
2017 Jun 20
0
gluster peer probe failing
Hi, I am able to recreate the issue and here is my RCA. The maximum value, i.e. 32767, is being overflowed while doing manipulation on it, and this was previously not handled properly. Hence glusterd was crashing with SIGSEGV. The issue is being fixed with " https://bugzilla.redhat.com/show_bug.cgi?id=1454418" and is being backported as well. Thanks Gaurav On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
2017 Jun 20
1
gluster peer probe failing
Thanks Gaurav! 1. Any time estimate as to when this fix will be released? 2. Any recommended workaround? Best, Guy. From: Gaurav Yadav [mailto:gyadav at redhat.com] Sent: Tuesday, June 20, 2017 9:46 AM To: Guy Cukierman <guyc at elminda.com> Cc: Atin Mukherjee <amukherj at redhat.com>; gluster-users at gluster.org Subject: Re: [Gluster-users] gluster peer probe failing
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of the command "sysctl net.ipv4.ip_local_reserved_ports"? Apart from the output of that command, please send the logs so we can look into the issue. Thanks Gaurav On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > +Gaurav, he is the author of the patch, can you please comment here? > > > On Thu, Jun 15, 2017 at 3:28
2011 Jun 09
1
NFS problem
Hi, I got the same problem as Juergen. My volume is a simple replicated volume with 2 hosts and GlusterFS 3.2.0 Volume Name: poolsave Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: ylal2950:/soft/gluster-data Brick2: ylal2960:/soft/gluster-data Options Reconfigured: diagnostics.brick-log-level: DEBUG network.ping-timeout: 20 performance.cache-size: 512MB
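For reference, the reconfigured options listed in this excerpt are the kind applied with the gluster volume set command; a minimal sketch using the volume name and values shown above:
    gluster volume set poolsave diagnostics.brick-log-level DEBUG
    gluster volume set poolsave network.ping-timeout 20
    gluster volume set poolsave performance.cache-size 512MB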