similar to: trying to add a 3rd peer

Displaying 20 results from an estimated 4000 matches similar to: "trying to add a 3rd peer"

2017 Oct 24
0
trying to add a 3rd peer
Are you sure it is possible to resolve all node names on all other nodes? You need to use the names previously used in Gluster - check them via "gluster peer status" or "gluster pool list". Regards, Bartosz > Message from Ludwig Gamache <ludwig at elementai.com> on 24.10.2017 at 03:13: > > All, > > I am trying to add a third peer to my gluster
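The name check described above can be sketched as follows; "server3" and the commands' placement are illustrative placeholders, and these commands assume a working Gluster cluster:

```shell
# Show the peer names Gluster already knows; a new node must be probed
# with a name that every existing node can resolve.
gluster pool list
gluster peer status

# Confirm the new host's name resolves on *every* node (DNS or /etc/hosts):
getent hosts server3

# Then probe the new peer using that same name:
gluster peer probe server3
```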
2017 Jun 20
2
remounting volumes, is there an easier way
All, Over the week-end, one of my volumes became unavailable. All clients could not access their mount points. On some of the clients, I had user processes that were using these mount points. So, I could not do a umount/mount without killing these processes. I also noticed that when I restarted the volume, the port changed on the server. So, clients that were still using the previous TCP/port could
2017 Jun 22
0
remounting volumes, is there an easier way
On Tue, Jun 20, 2017 at 7:32 PM, Ludwig Gamache <ludwig at elementai.com> wrote: > All, > > Over the week-end, one of my volumes became unavailable. All clients could > not access their mount points. On some of the clients, I had user processes > that were using these mount points. So, I could not do a umount/mount without > killing these processes. > > I also noticed
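One possible way to replace a stale mount without killing its users is a lazy unmount; the mount point and server name below are placeholders, not taken from the thread:

```shell
# List the processes still holding the mount (informational only):
fuser -vm /mnt/data01

# Lazy unmount: detaches the mount point immediately, cleans up
# once the last process stops using it. Does not kill processes.
umount -l /mnt/data01

# Re-mount so new opens go to the restarted volume:
mount -t glusterfs server1:/data01 /mnt/data01
```

Note that processes holding files open on the detached mount keep seeing the old (broken) filesystem until they reopen; the lazy unmount only frees the path for new users.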
2017 Jun 15
2
Interesting split-brain...
I am new to gluster but already like it. I did a maintenance last week where I shut down both nodes (one after the other). I had many files that needed to be healed after that. Everything worked well, except for 1 file. It is in split-brain, with 2 different GFIDs. I read the documentation but it only covers the cases where the GFID is the same on both bricks. BTW, I am running Gluster 3.10. Here
2017 Jun 15
0
Interesting split-brain...
Hi Ludwig, There is no way to resolve gfid split-brains with type mismatch. You have to do it manually by following the steps in [1]. In case of type mismatch it is recommended to resolve it manually. But for a gfid-only mismatch, in 3.11 we have a way to resolve it by using the *favorite-child-policy*. Since the file is not important, you can simply delete it. [1]
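The manual resolution referred to above can be sketched roughly as follows; the brick path, file name, and GFID are all hypothetical placeholders, and this assumes a replica volume where one copy has been chosen as the "bad" one:

```shell
# On the brick holding the unwanted copy, remove both the file and its
# GFID hardlink under .glusterfs, then let self-heal recreate it from
# the good brick.
BRICK=/bricks/brick1
FILE=dir/problem.file
GFID=0403dd8a-1234-5678-9abc-def012345678   # from: getfattr -d -m . -e hex <file>

rm "$BRICK/$FILE"
rm "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"

gluster volume heal data01
```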
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled. Ludwig On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote: > Do you have sharding enabled ? If yes, don't do it. > If no I'll let someone who knows better answer you :) > > On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote: > > All, > > > > We currently have a Gluster installation which is made of 2
2017 Jun 15
1
Interesting split-brain...
Can you please explain how we ended up in this scenario? I think that will help us understand more about these scenarios and why Gluster recommends replica 3 or arbiter volumes. Regards Rafi KC On 06/15/2017 10:46 AM, Karthik Subrahmanya wrote: > Hi Ludwig, > > There is no way to resolve gfid split-brains with type mismatch. You > have to do it manually by following the steps in [1].
2017 Jun 20
2
trash can feature, crashed???
All, I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last week, I enabled the trashcan feature on one of my volumes: gluster volume set date01 features.trash on I also limited the max file size to 500MB: gluster volume set data01 features.trash-max-filesize 500MB 3 hours after I enabled this, this specific gluster volume went down: [2017-06-16
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote: > All, > > I currently have 2 bricks running Gluster 3.10.1. This is a Centos installation. On Friday last > week, I enabled the trashcan feature on one of my volumes: > gluster volume set date01 features.trash on I think you misspelled the volume name. Is it data01 or date01? > I also limited the max file size to 500MB:
2017 Sep 25
2
Adding bricks to an existing installation.
All, We currently have a Gluster installation which is made of 2 servers. Each server has 10 drives on ZFS. And I have a gluster mirror between these 2. The current config looks like: SERVER A-BRICK 1 replicated to SERVER B-BRICK 1 I now need to add more space and a third server. Before I do the changes, I want to know if this is a supported config. By adding a third server, I simply want to
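One supported way to bring a third server into a replica-2 setup, sketched below with placeholder server and brick names (the thread itself does not specify them), is to raise the replica count rather than just adding capacity:

```shell
# Option 1: full third replica (three complete copies of the data):
gluster volume add-brick data01 replica 3 serverC:/bricks/brick1

# Option 2: arbiter brick (metadata-only third copy; avoids split-brain
# without tripling storage):
gluster volume add-brick data01 replica 3 arbiter 1 serverC:/bricks/arbiter1
```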
2017 Jun 16
1
emptying the trash directory
All, I just enabled the trashcan feature on our volumes. It is working as expected. However, I can't seem to find the rules to empty the trashcan. Is there any automated process to do that? If so, what are the configuration features? Regards, Ludwig -- Ludwig Gamache
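Gluster itself does not purge the trashcan automatically; a minimal cron-style sketch using find(1) is shown below. The demo directory is created on the fly for illustration; in practice you would point TRASH_DIR at the volume's .trashcan directory instead:

```shell
# Prune trashed files older than 7 days. Demo setup: a temp dir with one
# "old" file and one fresh file (GNU touch -d sets the mtime back).
TRASH_DIR="$(mktemp -d)"
touch -d '10 days ago' "$TRASH_DIR/old.txt"
touch "$TRASH_DIR/new.txt"

# The actual cleanup rule:
find "$TRASH_DIR" -type f -mtime +7 -delete

ls "$TRASH_DIR"   # -> new.txt
```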
2017 Sep 25
0
Adding bricks to an existing installation.
Do you have sharding enabled ? If yes, don't do it. If no I'll let someone who knows better answer you :) On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote: > All, > > We currently have a Gluster installation which is made of 2 servers. Each > server has 10 drives on ZFS. And I have a gluster mirror between these 2. > > The current config looks like: >
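Checking whether sharding is enabled before expanding a volume can be done as follows (the volume name is a placeholder):

```shell
# Prints the current value of the sharding option for the volume:
gluster volume get data01 features.shard
```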
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote: > Ji-Hyeon, > > You're saying that "stripe=2 transport=rdma" should work. Ok, that > was firstly I wanted to know. I'll put together logs later this week. Note that "stripe" is not tested much and practically unmaintained. We do not advise you to use it. If you have large files that you
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained. Ah, this was what I suspected. Understood. I'll be happy with "shard". Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of those acts also as a client. I looked into logs. I paste lengthy logs below with
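Switching to "shard" as suggested above would look roughly like this for a new volume; the volume name and block size are illustrative, and note that sharding should be enabled before data is written, not toggled on a populated volume:

```shell
gluster volume set data01 features.shard on
gluster volume set data01 features.shard-block-size 64MB
```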
2017 Nov 01
1
Announcing Gluster release 3.10.7 (Long Term Maintenance)
The Gluster community is pleased to announce the release of Gluster 3.10.7 (packages available at [1]). Release notes for the release can be found at [2]. We are still working on a further fix for the corruption issue when sharded volumes are rebalanced, details as below. * Expanding a gluster volume that is sharded may cause file corruption - Sharded volumes are typically used for VM
2017 Oct 04
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 04, 2017 at 09:44:44AM +0000, ismael mondiu wrote: > Hello, > > I'd like to test if 3.10.6 version fixes the problem . I'm wondering which is the correct way to upgrade from 3.10.5 to 3.10.6. > > It's hard to find upgrade guides for a minor release. Can you help me please ? Packages for GlusterFS 3.10.6 are available in the testing repository of the
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello, I'd like to test if the 3.10.6 version fixes the problem. I'm wondering which is the correct way to upgrade from 3.10.5 to 3.10.6. It's hard to find upgrade guides for a minor release. Can you help me please? Thanks in advance Ismael ________________________________ From: Atin Mukherjee <amukherj at redhat.com> Sent: Sunday, September 17, 2017 14:56 To: ismael
2017 Oct 04
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 04, 2017 at 12:17:23PM +0000, ismael mondiu wrote: > > Thanks Niels, > > We want to install it on redhat 7. We work on a secured environment > with no internet access. > > We download the packages here > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/ and > then, we push the package to the server and install them via rpm > command .
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Thanks Niels, We want to install it on redhat 7. We work on a secured environment with no internet access. We download the packages here https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/ and then, we push the package to the server and install them via rpm command . Do you think this is a correct way to upgrade gluster when working without internet access? Thanks in advance
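For an air-gapped upgrade like the one described, one common approach is to copy all the downloaded RPMs over and let yum resolve the install order among the local files, since installing packages one by one with rpm can fail on dependency ordering. A sketch, with the package glob as a placeholder:

```shell
# Stop the daemon, upgrade all local RPMs together, restart:
systemctl stop glusterd
yum localinstall ./glusterfs*3.10.6*.rpm
systemctl start glusterd
```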
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello, it seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3 configuration. I upgraded the first server and then launched a reboot. Gluster is not starting. It seems that gluster starts before the network layer is up. Some logs here: Thanks [2017-10-04 15:33:00.506396] I [MSGID: 106143] [glusterd-pmap.c:277:pmap_registry_bind] 0-pmap: adding brick /opt/glusterfs/advdemo on port
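A common workaround sketch for this class of "starts before the network" problem is a systemd drop-in that orders glusterd after network-online.target; this is not confirmed as the fix in the thread, and it assumes a wait-online service (NetworkManager-wait-online or systemd-networkd-wait-online) is enabled:

```shell
# Create a drop-in so glusterd waits for the network to be fully up:
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/wait-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
```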