similar to: Problems joining new gluster 3.10 nodes to existing 3.8

Displaying 20 results from an estimated 100 matches similar to: "Problems joining new gluster 3.10 nodes to existing 3.8"

2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a problem joining four Gluster 3.10 nodes to an existing > Gluster 3.8 cluster. My understanding is that this should work and not be > too much of a problem. > > Peer probe is successful but the node is rejected: > > gluster> peer detach elkpinfglt07 > peer
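
A minimal sketch of the commonly documented recovery for a peer stuck in "Peer Rejected" state, assuming default packaging paths; run it on the rejected node only, and treat it as a starting point rather than the thread's confirmed fix:

    systemctl stop glusterd
    cd /var/lib/glusterd
    # keep glusterd.info (this node's UUID); remove everything else
    find . -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterd
    gluster peer probe <healthy-node>   # from this node, to re-sync config
    systemctl restart glusterd
    gluster peer status                 # should now show "Peer in Cluster"
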
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote: > After seeing command history, I could see that you have 3 nodes, and > firstly you are peer probing 51.15.90.60 and 163.172.151.120 from > 51.15.77.14 > So here itself you have 3 node cluster, after all this you are going > on node 2 and again peer probing 51.15.77.14. > Ideally it should work, with above steps, but due to some
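
For reference, a sketch of the probe sequence described above, using the addresses quoted in the thread:

    # From node 1 (51.15.77.14)
    gluster peer probe 51.15.90.60
    gluster peer probe 163.172.151.120
    # From node 2, probe back so node 1 is recorded by IP rather than
    # an auto-detected hostname
    gluster peer probe 51.15.77.14
    # On every node, all peers should report "Peer in Cluster (Connected)"
    gluster peer status
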
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello, I installed glusterfs 3.11.3 on 3 nodes, Ubuntu 16.04 machines. All machines have the same /etc/hosts. node1 hostname pri.ostechnix.lan node2 hostname sec.ostechnix.lan node3 hostname third.ostechnix.lan 51.15.77.14 pri.ostechnix.lan pri 51.15.90.60 sec.ostechnix.lan sec 163.172.151.120 third.ostechnix.lan third volume create command is root at
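
A sketch of how one might verify the setup described above before creating the volume; the volume name and brick paths here are hypothetical:

    # On every node: each name should resolve to its /etc/hosts address
    getent hosts pri.ostechnix.lan sec.ostechnix.lan third.ostechnix.lan
    # Then create a replica 3 volume across the three nodes
    gluster volume create vol0 replica 3 \
        pri.ostechnix.lan:/bricks/brick1 \
        sec.ostechnix.lan:/bricks/brick1 \
        third.ostechnix.lan:/bricks/brick1
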
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t address resolution on node 1. From the logs it's quite clear to me that node 1 is unable to resolve the address configured in /etc/hosts whereas the other nodes do. Could you paste the gluster peer status output from all the nodes? Also can you please check if you're able to ping "pri.ostechnix.lan" from node1 only? Does
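
The checks requested above boil down to something like the following, run on node1 (and gluster peer status on every node):

    gluster peer status               # compare the output across all nodes
    ping -c 3 pri.ostechnix.lan       # from node1 itself
    getent hosts pri.ostechnix.lan    # confirm which address the name resolves to
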
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
Did you do gluster peer probe? Check out the documentation: http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/ On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote: > Hello, > > i installed glusterfs 3.11.3 version 3 nodes ubuntu 16.04 machine. All > machines have same /etc/hosts. > > node1 hostname > pri.ostechnix.lan
2018 Feb 06
1
strange hostname issue on volume create command with famous Peer in Cluster state error message
I changed /etc/hosts 127.0.0.1 pri.ostechnix.lan pri 51.15.90.60 sec.ostechnix.lan sec 163.172.151.120 third.ostechnix.lan third on every node matching hostname to 127.0.0.1 then root at pri:~# apt-get purge glusterfs-server root at pri:~# rm -rf /var/lib/glusterd/ root at pri:~# rm -rf /var/log/glusterfs/ root at pri:~# apt-get install glusterfs-server root at pri:~#
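
Mapping a node's own FQDN to 127.0.0.1, as done above, is a known source of peer-address confusion, since glusterd may then resolve or advertise the loopback address. A hosts file using the routable addresses from the thread would instead look like:

    # /etc/hosts, identical on every node -- no FQDN mapped to loopback
    51.15.77.14      pri.ostechnix.lan    pri
    51.15.90.60      sec.ostechnix.lan    sec
    163.172.151.120  third.ostechnix.lan  third
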
2017 Jun 20
2
trash can feature, crashed???
All, I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last week, I enabled the trashcan feature on one of my volumes: gluster volume set date01 features.trash on I also limited the max file size to 500MB: gluster volume set data01 features.trash-max-filesize 500MB 3 hours after I enabled this, this specific gluster volume went down: [2017-06-16
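
For anyone retracing this, the settings can be checked with volume get (assuming the volume is actually named data01; see the reply below about the spelling):

    gluster volume get data01 features.trash
    gluster volume get data01 features.trash-max-filesize
    gluster volume status data01    # confirm all bricks are still online
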
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote: > All, > > I currently have 2 bricks running Gluster 3.10.1. This is a Centos installation. On Friday last > week, I enabled the trashcan feature on one of my volumes: > gluster volume set date01 features.trash on I think you misspelled the volume name. Is it data01 or date01? > I also limited the max file size to 500MB:
2017 Nov 07
2
Enabling Halo sets volume RO
Hi all, I'm taking a stab at deploying a storage cluster to explore the Halo AFR feature and running into some trouble. In GCE, I have 4 instances, each with one 10gb brick. 2 instances are in the US and the other 2 are in Asia (with the hope that it will drive up latency sufficiently). The bricks make up a Replica-4 volume. Before I enable halo, I can mount the volume and r/w files. The
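
A sketch of the Halo-related options involved (the volume name is hypothetical; option names per the Halo AFR feature available from 3.11 onward):

    gluster volume set halovol cluster.halo-enabled yes
    gluster volume set halovol cluster.halo-max-latency 10   # ms; replicas above this fall outside the halo
    gluster volume set halovol cluster.halo-min-replicas 2
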
2017 Nov 08
0
Enabling Halo sets volume RO
I think the problem here is that by default quorum is kicking in; to get rid of this you can change the quorum type to fixed with a value of 2, or you can disable quorum entirely. Regards Rafi KC On 11/08/2017 04:03 AM, Jon Cope wrote: > Hi all, > > I'm taking a stab at deploying a storage cluster to explore the Halo > AFR feature and running into some trouble. In GCE, I
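
The suggestion above translates to roughly these commands (the volume name is a placeholder):

    # Option A: fixed quorum of 2 out of the 4 replicas
    gluster volume set <volname> cluster.quorum-type fixed
    gluster volume set <volname> cluster.quorum-count 2
    # Option B: disable client-side quorum entirely
    gluster volume set <volname> cluster.quorum-type none
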
2018 Mar 21
2
Brick process not starting after reinstall
Hi all, our systems have suffered a host failure in a replica three setup. The host needed a complete reinstall. I followed the RH guide to 'replace a host with the same hostname' (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts). The machine has the same OS (CentOS 7). The new machine got a minor version number newer
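
The crux of the "same hostname" procedure in that guide is reusing the failed node's UUID; a condensed sketch with placeholders (consult the linked guide for the full steps):

    # On a surviving node, note the UUID shown for the failed host
    gluster peer status
    # On the reinstalled node, before glusterd first starts:
    echo "UUID=<uuid-of-old-node>" > /var/lib/glusterd/glusterd.info
    echo "operating-version=<cluster-op-version>" >> /var/lib/glusterd/glusterd.info
    systemctl start glusterd
    gluster peer probe <surviving-node>   # re-sync cluster configuration
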
2018 Mar 21
0
Brick process not starting after reinstall
Could you share the following information: 1. gluster --version 2. output of gluster volume status 3. glusterd log and all brick log files from the node where bricks didn't come up. On Wed, Mar 21, 2018 at 12:35 PM, Richard Neuboeck <hawk at tbi.univie.ac.at> wrote: > Hi all, > > our systems have suffered a host failure in a replica three setup. > The host needed a
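
For completeness, the requested information maps to the following (log paths per a default install):

    gluster --version
    gluster volume status
    # Logs on the affected node:
    #   /var/log/glusterfs/glusterd.log
    #   /var/log/glusterfs/bricks/*.log
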
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out For some time after that, the volume is locked and we either have to wait or restart Gluster services. In the glusterd.log, it shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
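
When a start request times out and leaves the volume locked, a frequently suggested first step is restarting the management daemon to clear the stale cluster-wide lock; a sketch, on the assumption that brick processes are separate and survive a glusterd restart:

    # On each node in turn
    systemctl restart glusterd
    # Then retry from any node
    gluster volume start <volname>
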
2018 Mar 20
0
brick processes not starting
Hi all, our systems have suffered a node failure in a replica three setup. The node needed a complete reinstall. I followed the RH guide to replace a host with the same hostname (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts). The machine has the same OS (CentOS 7). The new machine got a minor version number newer gluster
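
If the volume itself is started but individual brick processes are missing, a common (hedged) next step looks like:

    gluster volume status                  # which bricks show offline / N/A?
    gluster volume start <volname> force   # respawn missing brick processes
    gluster volume heal <volname> info     # check pending self-heal afterwards
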
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Hi all, > > > > I have an issue where our volume will not start from any node. When > attempting to start the volume it will eventually return: > > Error: Request timed out > > > > For some time after that, the volume is locked and we either have to wait > or restart
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained. Ah, this was what I suspected. Understood. I'll be happy with "shard". Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of those acts also as a client. I looked into logs. I paste lengthy logs below with
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote: > Ji-Hyeon, > > You're saying that "stripe=2 transport=rdma" should work. Ok, that > was firstly I wanted to know. I'll put together logs later this week. Note that "stripe" is not tested much and practically unmaintained. We do not advise you to use it. If you have large files that you
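
The maintained alternative mentioned here is sharding; a minimal sketch, with a hypothetical volume name:

    gluster volume set bigfiles features.shard on
    gluster volume set bigfiles features.shard-block-size 64MB
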
2017 Aug 18
1
Is transport=rdma tested with "stripe"?
On Wed, Aug 16, 2017 at 4:44 PM, Hatazaki, Takao <takao.hatazaki at hpe.com> wrote: >> Note that "stripe" is not tested much and practically unmaintained. > > Ah, this was what I suspected. Understood. I'll be happy with "shard". > > Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
Hi! Please see below. Note that web1.dasilva.network is the address of the local machine where one of the bricks is installed and that tries to mount. [2017-08-20 20:30:40.359236] I [MSGID: 100030] [glusterfsd.c:2476:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.11.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2017-08-20 20:30:40.973249] I [MSGID: 106478]
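
To correlate what systemd sees with glusterd's own log, something along these lines is a reasonable starting point:

    systemctl status glusterd -l
    journalctl -u glusterd --since today
    tail -n 100 /var/log/glusterfs/glusterd.log
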
2017 Aug 06
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
Hi, I have a distributed volume which runs on Fedora 26 systems with glusterfs 3.11.2 from gluster.org repos: ---------- [root at taupo ~]# glusterd --version glusterfs 3.11.2 gluster> volume info gv2 Volume Name: gv2 Type: Distribute Volume ID: 6b468f43-3857-4506-917c-7eaaaef9b6ee Status: Started Snapshot Count: 0 Number of Bricks: 6 Transport-type: tcp Bricks: Brick1: