similar to: Setting up 3.1

Displaying 20 results from an estimated 3000 matches similar to: "Setting up 3.1"

2012 Feb 20
2
Replacing a node
I have two servers running gluster 3.1.2 hosting a single replica-2 volume (web images) on Ubuntu Lucid 64. I need to replace one of the nodes with a new server. What's the best approach to this? There's not much data, but I'd like to do it with no downtime if possible. Marcus -- Marcus Bointon Synchromedia Limited: Creators of http://www.smartmessages.net/ UK info at hand CRM
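One way to do this with the volume staying online, sketched from memory of the 3.x CLI (volume name and brick paths below are hypothetical; verify the exact replace-brick syntax for 3.1.2 before running):

    # From an existing node, add the new server to the trusted pool
    gluster peer probe newserver

    # Migrate the brick from the old server to the new one
    # (3.1.x used a start/status/commit sequence for replace-brick)
    gluster volume replace-brick webimages oldserver:/export/web newserver:/export/web start
    gluster volume replace-brick webimages oldserver:/export/web newserver:/export/web status
    gluster volume replace-brick webimages oldserver:/export/web newserver:/export/web commit

    # Once the data has migrated, drop the old server from the pool
    gluster peer detach oldserver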
2012 Mar 14
0
CeBIT goodies
I saw a quick demo of HotLava's multiport ethernet cards at CeBIT last week. The guy there had a low-spec server with two multi-port 10GbE cards in it, sitting there sustaining 200Gbit/sec. I was quite impressed and thought it might be of interest to gluster users. They're very pretty when they're all lit up :) http://www.hotlavasystems.com/ I also saw an interesting low-cost 'fat
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of the command "sysctl net.ipv4.ip_local_reserved_ports"? Apart from the output of the command, please send the logs so we can look into the issue. Thanks Gaurav On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > +Gaurav, he is the author of the patch, can you please comment here? > > > On Thu, Jun 15, 2017 at 3:28
2017 Jun 20
2
gluster peer probe failing
Hi, I have tried this on my host by setting the corresponding ports, but I didn't see the issue locally. However, with the logs you have sent it is pretty much clear the issue is related to ports only. I will try to reproduce it on some other machine. Will update you as soon as possible. Thanks Gaurav On Sun, Jun 18, 2017 at 12:37 PM, Guy Cukierman <guyc at elminda.com> wrote: >
2017 Jun 14
2
gluster peer probe failing
Hi, I have a gluster (version 3.10.2) server running on a 3 node (centos7) cluster. Firewalld and SELinux are disabled, and I see I can telnet from each node to the other on port 24007. When I try to create the first peering by running on node1 the command: gluster peer probe <node2 ip address> I get the error: "Connection failed. Please check if gluster daemon is operational."
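A minimal triage sequence for this symptom (a sketch, assuming CentOS 7 with systemd; node2 is a placeholder):

    # Is the management daemon actually running on each node?
    systemctl status glusterd

    # Is the management port reachable from the probing node?
    telnet node2 24007

    # Retry the probe, then read the tail of the management log on failure
    gluster peer probe node2
    tail -n 50 /var/log/glusterfs/glusterd.log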
2017 Aug 21
0
Glusterd not working with systemd in redhat 7
On Mon, Aug 21, 2017 at 2:49 AM, Cesar da Silva <thunderlight1 at gmail.com> wrote: > Hi! > I am having the same issue but I am running Ubuntu v16.04. > It does not mount during boot, but works if I mount it manually. I am > running the Gluster-server on the same machines (3 machines) > Here is the /etc/fstab file > > /dev/sdb1 /data/gluster ext4 defaults 0 0 > >
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello, I installed glusterfs version 3.11.3 on 3 Ubuntu 16.04 machines. All machines have the same /etc/hosts. node1 hostname pri.ostechnix.lan node2 hostname sec.ostechnix.lan node3 hostname third.ostechnix.lan 51.15.77.14 pri.ostechnix.lan pri 51.15.90.60 sec.ostechnix.lan sec 163.172.151.120 third.ostechnix.lan third volume create command is root at
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
Hi! Please see below. Note that web1.dasilva.network is the address of the local machine where one of the bricks is installed and that tries to mount. [2017-08-20 20:30:40.359236] I [MSGID: 100030] [glusterfsd.c:2476:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.11.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2017-08-20 20:30:40.973249] I [MSGID: 106478]
2017 Jun 15
0
gluster peer probe failing
https://review.gluster.org/#/c/17494/ will fix it, and the next update of 3.10 should have this fix. If sysctl net.ipv4.ip_local_reserved_ports has any value beyond the short int range then this would be a problem with the current version. Would you be able to reset the reserved ports temporarily to get this going? On Wed, Jun 14, 2017 at 8:32 PM, Guy Cukierman <guyc at elminda.com> wrote: >
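For reference, temporarily clearing the reservation could look like this (a sketch; the change is not persistent across reboots, and the restore value below assumes the range quoted later in the thread):

    # Inspect the current setting
    sysctl net.ipv4.ip_local_reserved_ports

    # Clear it temporarily
    sysctl -w net.ipv4.ip_local_reserved_ports=""

    # Restore the original range once done
    sysctl -w net.ipv4.ip_local_reserved_ports="30000-32767"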
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi, I have a problem joining four Gluster 3.10 nodes to an existing cluster of Gluster 3.8 nodes. My understanding is that this should work and not be too much of a problem. Peer probe is successful but the node is rejected: gluster> peer detach elkpinfglt07 peer detach: success gluster> peer probe elkpinfglt07 peer probe: success. gluster> peer status Number of Peers: 6 Hostname: elkpinfglt02
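A commonly suggested recovery for a rejected peer is to wipe the rejected node's local glusterd state while keeping its UUID, then re-probe. A hedged sketch (run on the rejected node only, and verify against the documentation for your versions first, since mixing 3.8 and 3.10 may itself be the problem):

    systemctl stop glusterd
    # Keep glusterd.info (the node's UUID); remove the rest of the state
    cd /var/lib/glusterd
    find . -mindepth 1 ! -name 'glusterd.info' -delete
    systemctl start glusterd

    # Then, from a healthy node in the existing cluster
    gluster peer probe elkpinfglt07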
2017 Jun 15
2
gluster peer probe failing
Thanks, but my current settings are: net.ipv4.ip_local_reserved_ports = 30000-32767 net.ipv4.ip_local_port_range = 32768 60999 meaning the reserved ports are already within the short int range, so maybe I misunderstood something? Or is it a different issue? From: Atin Mukherjee [mailto:amukherj at redhat.com] Sent: Thursday, June 15, 2017 10:56 AM To: Guy Cukierman <guyc at elminda.com> Cc:
2017 Jun 29
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Thanks for the reply. What would be the best course of action? The data on the volume isn't important right now, but I'm worried that when our setup goes to production we could hit the same situation and really need to recover our Gluster setup. I'm assuming that to redo it I should delete everything in the /var/lib/glusterd directory on each of the nodes and recreate the volume again. Essentially
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t. address resolution on node 1. From the logs it's quite clear to me that node 1 is unable to resolve the address configured in /etc/hosts whereas the other nodes do. Could you paste the gluster peer status output from all the nodes? Also can you please check if you're able to ping "pri.ostechnix.lan" from node1 only? Does
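A quick way to check what node1 actually resolves (getent consults /etc/hosts according to nsswitch; the hostname is the one from the thread):

    # Which address does the name resolve to on node1?
    getent hosts pri.ostechnix.lan

    # And is it reachable?
    ping -c 3 pri.ostechnix.lan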
2017 Aug 20
2
Glusterd not working with systemd in redhat 7
Hi! I am having the same issue but I am running Ubuntu v16.04. It does not mount during boot, but works if I mount it manually. I am running the Gluster-server on the same machines (3 machines). Here is the /etc/fstab file: /dev/sdb1 /data/gluster ext4 defaults 0 0 web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,log-level=debug,log-file=/var/log/gluster.log 0 0
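Since an fstab gluster mount often races glusterd at boot, one workaround on systemd-based systems is to order or automount the mount via standard systemd mount options, e.g. (a sketch using the paths from this mail; x-systemd.automount defers the mount until first access):

    web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,x-systemd.automount,x-systemd.requires=glusterd.service 0 0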
2017 Jun 30
3
Gluster failure due to "0-management: Lock not released for <volumename>"
On Thu, 29 Jun 2017 at 22:51, Victor Nomura <victor at mezine.com> wrote: > Thanks for the reply. What would be the best course of action? The data > on the volume isn't important right now but I'm worried when our setup goes > to production we don't have the same situation and really need to recover > our Gluster setup. > > > > I'm assuming that to redo is to
2017 Jun 27
2
Gluster failure due to "0-management: Lock not released for <volumename>"
I had looked at the logs shared by Victor privately and it seems there is a network glitch in the cluster which is causing glusterd to lose its connection with other peers; as a side effect, a lot of RPC requests are getting bailed out, resulting in glusterd ending up in a stale lock, and hence you see that some of the commands failed with "another transaction is in progress or
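When glusterd is stuck on a stale lock like this, the usual first step (sketched for a systemd-based distro) is to restart the management daemon on the affected nodes; glusterd is the management plane only, so brick and client processes keep serving data while it restarts:

    # On each node reporting "another transaction is in progress"
    systemctl restart glusterd
    gluster volume status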
2017 Jun 18
0
gluster peer probe failing
Hi, Below please find the reserved ports and log, thanks. sysctl net.ipv4.ip_local_reserved_ports: net.ipv4.ip_local_reserved_ports = 30000-32767 glusterd.log: [2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007 [2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote: > After seeing the command history, I could see that you have 3 nodes, and > firstly you are peer probing 51.15.90.60 and 163.172.151.120 from > 51.15.77.14. So here itself you have a 3-node cluster; after all this you are going > on node 2 and again peer probing 51.15.77.14. > Ideally it should work with the above steps, but due to some
2017 Jun 20
0
gluster peer probe failing
Hi, I am able to recreate the issue and here is my RCA. The maximum value, i.e. 32767, is being overflowed while doing manipulation on it, and this was previously not handled properly. Hence glusterd was crashing with SIGSEGV. The issue is being fixed with "https://bugzilla.redhat.com/show_bug.cgi?id=1454418" and is being backported as well. Thanks Gaurav On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
2011 Sep 07
2
Gluster-users Digest, Vol 41, Issue 16
Hi Phil, we had the same problem; try to compile with debug options. Yes, this sounds strange, but it helps when you are using SLES: glusterd works OK and you can start to work with it. Just put export CFLAGS='-g3 -O0' between %build and %configure in the glusterfs spec file. But be warned: don't use it with important data, especially when you are planning to use the replication feature,
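In spec-file terms the suggestion reads roughly like this (a sketch; the surrounding lines of the real glusterfs spec will differ by version):

    %build
    export CFLAGS='-g3 -O0'
    %configure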