similar to: Adding bricks to an existing installation.

Displaying 20 results from an estimated 4000 matches similar to: "Adding bricks to an existing installation."

2017 Sep 25
0
Adding bricks to an existing installation.
Do you have sharding enabled? If yes, don't do it. If no, I'll let someone who knows better answer you :) On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote: > All, > > We currently have a Gluster installation which is made of 2 servers. Each > server has 10 drives on ZFS. And I have a gluster mirror between these 2. > > The current config looks like: >
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled. Ludwig On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote: > Do you have sharding enabled? If yes, don't do it. > If no, I'll let someone who knows better answer you :) > > On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote: > > All, > > > > We currently have a Gluster installation which is made of 2
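For readers finding this thread later: expanding a replica-2 distributed-replicate volume is normally done by adding bricks in pairs and then rebalancing. A minimal sketch, where the volume name gv0 and the new brick paths are hypothetical:

  # Add one new brick per server, keeping the replica count at 2
  gluster volume add-brick gv0 replica 2 server1:/zpool/brick2 server2:/zpool/brick2

  # Spread existing data across the new bricks (safe here since sharding is off)
  gluster volume rebalance gv0 start
  gluster volume rebalance gv0 status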
2017 Nov 01
1
Announcing Gluster release 3.10.7 (Long Term Maintenance)
The Gluster community is pleased to announce the release of Gluster 3.10.7 (packages available at [1]). Release notes for the release can be found at [2]. We are still working on a further fix for the corruption issue when sharded volumes are rebalanced; details below. * Expanding a gluster volume that is sharded may cause file corruption - Sharded volumes are typically used for VM
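Since the corruption described above only affects sharded volumes, one quick check before expanding or rebalancing is to query the shard option (volume name hypothetical):

  gluster volume get myvol features.shard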
2017 Jun 20
2
remounting volumes, is there an easier way
All, Over the weekend, one of my volumes became unavailable. All clients could not access their mount points. On some of the clients, I had user processes that were using these mount points. So, I could not do a umount/mount without killing these processes. I also noticed that when I restarted the volume, the port changed on the server. So, clients that were still using the previous TCP/port could
2017 Jun 22
0
remounting volumes, is there an easier way
On Tue, Jun 20, 2017 at 7:32 PM, Ludwig Gamache <ludwig at elementai.com> wrote: > All, > > Over the weekend, one of my volumes became unavailable. All clients could > not access their mount points. On some of the clients, I had user processes > that were using these mount points. So, I could not do a umount/mount without > killing these processes. > > I also noticed
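One common workaround in this situation, sketched here as an assumption rather than taken from the rest of the thread, is a lazy unmount so the stale mount detaches once the user processes release their handles (mount point and volume name are hypothetical):

  # See which port each brick is currently listening on
  gluster volume status myvol

  # Detach the stale mount without killing processes, then remount
  umount -l /mnt/myvol
  mount -t glusterfs server1:/myvol /mnt/myvol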
2017 Jun 15
2
Interesting split-brain...
I am new to gluster but already like it. I did a maintenance last week where I shut down both nodes (one after the other). I had many files that needed to be healed after that. Everything worked well, except for 1 file. It is in split-brain, with 2 different GFIDs. I read the documentation but it only covers the cases where the GFID is the same on both bricks. BTW, I am running Gluster 3.10. Here
2017 Jun 15
1
Interesting split-brain...
Can you please explain how we ended up in this scenario? I think that will help us understand more about these scenarios and why Gluster recommends replica 3 or arbiter volumes. Regards Rafi KC On 06/15/2017 10:46 AM, Karthik Subrahmanya wrote: > Hi Ludwig, > > There is no way to resolve gfid split-brains with type mismatch. You > have to do it manually by following the steps in [1].
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote: > Just so I know. > > Is it correct to assume that this corruption issue is ONLY involved if you > are doing rebalancing with sharding enabled? > > So if I am not doing rebalancing I should be fine? > That is correct. > -bill > > > > On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > >
2017 Jun 15
0
Interesting split-brain...
Hi Ludwig, There is no way to resolve gfid split-brains with a type mismatch. You have to do it manually by following the steps in [1]. In case of a type mismatch it is recommended to resolve it manually. But for a gfid-only mismatch, in 3.11 we have a way to resolve it by using the *favorite-child-policy*. Since the file is not important, you can go with deleting it. [1]
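For later readers, a sketch of both approaches described above; the volume name, brick path, and gfid are placeholders, and the automated path applies only to gfid mismatches without a type mismatch:

  # 3.11+ only, gfid mismatch with matching file types:
  gluster volume set myvol cluster.favorite-child-policy mtime

  # Manual resolution: on the brick you decide to discard, remove the file
  # and its .glusterfs gfid hard link, then let self-heal copy the good one
  rm /bricks/brick1/path/to/file
  rm /bricks/brick1/.glusterfs/<xx>/<yy>/<full-gfid>
  gluster volume heal myvol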
2017 Jun 16
1
emptying the trash directory
All, I just enabled the trashcan feature on our volumes. It is working as expected. However, I can't seem to find the rules to empty the trashcan. Is there any automated process to do that? If so, what are the configuration features? Regards, Ludwig -- Ludwig Gamache
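As far as I can tell there is no built-in purge policy in 3.10; the trash directory is simply exposed at the root of every client mount, so one hedged approach is a periodic cleanup job (mount point and retention period are assumptions):

  # Inspect what the trash translator has kept
  ls /mnt/myvol/.trashcan

  # Example cron job: drop trashed files older than 30 days
  find /mnt/myvol/.trashcan -type f -mtime +30 -delete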
2017 Jun 20
2
trash can feature, crashed???
All, I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last week, I enabled the trashcan feature on one of my volumes: gluster volume set date01 features.trash on I also limited the max file size to 500MB: gluster volume set data01 features.trash-max-filesize 500MB 3 hours after I enabled this, this specific gluster volume went down: [2017-06-16
2017 Oct 24
2
trying to add a 3rd peer
All, I am trying to add a third peer to my gluster install. The first 2 nodes have been running for many months on gluster 3.10.3-1. I recently installed the 3rd node with gluster 3.10.6-1. I was able to start the gluster daemon on it. After that, I tried to add the peer from one of the 2 existing servers (gluster peer probe IPADDRESS). That first peer started the communication with the 3rd peer. At
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote: > All, > > I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last > week, I enabled the trashcan feature on one of my volumes: > gluster volume set date01 features.trash on I think you misspelled the volume name. Is it data01 or date01? > I also limited the max file size to 500MB:
2017 Oct 24
0
trying to add a 3rd peer
Are you sure that all node names can be resolved on all the other nodes? You need to use the names previously used in Gluster - check them with 'gluster peer status' or 'gluster pool list'. Regards, Bartosz > Message from Ludwig Gamache <ludwig at elementai.com> on 24.10.2017 at 03:13: > > All, > > I am trying to add a third peer to my gluster
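A sketch of the checks being suggested here, with hostnames that are hypothetical:

  # On each existing node, list the peer names Gluster already knows
  gluster pool list
  gluster peer status

  # Verify every node resolves every other node's name
  getent hosts gluster-node3

  # If DNS is not an option, pin the names in /etc/hosts on all three nodes:
  # 192.0.2.13  gluster-node3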
2011 Apr 22
1
rebalancing after remove-brick
Hello, I'm having trouble migrating data from 1 removed replica set to another active one in a dist replicated volume. My test scenario is the following: - create set (A) - create a bunch of files on it - add another set (B) - rebalance (works fine) - remove-brick A - rebalance (doesn't rebalance - ran on one brick in each set) The doc seems to imply that it is possible to remove
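In later Gluster releases the remove-brick start/status/commit sequence performs the data migration itself, so a separate rebalance is not needed. A sketch, assuming set A is server1:/brickA and server2:/brickA on a volume named myvol (all names hypothetical):

  gluster volume remove-brick myvol server1:/brickA server2:/brickA start
  gluster volume remove-brick myvol server1:/brickA server2:/brickA status

  # Commit only once status reports the migration as completed
  gluster volume remove-brick myvol server1:/brickA server2:/brickA commit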
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > On 3 October 2017 at 13:27, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Any update about multiple bugs regarding data corruptions with >> sharding enabled ? >> >> Is 3.12.1 ready to be used in production? >> > >
2017 Jul 07
2
Rebalance task fails
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help. # gluster volume rebalance gsae_artifactory_cluster_storage start volume rebalance:
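When a rebalance fails immediately, the per-node rebalance log usually says why; a sketch of where to look (the log path is an assumption and may vary by distribution):

  gluster volume rebalance gsae_artifactory_cluster_storage status
  less /var/log/glusterfs/gsae_artifactory_cluster_storage-rebalance.log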
2017 May 17
3
Rebalance + VM corruption - current status and request for feedback
Hi, In the past couple of weeks, we've sent the following fixes concerning VM corruption upon doing rebalance - https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051 These fixes are very much part of the latest 3.10.2 release. Satheesaran within Red Hat also verified that they work and he's not seeing corruption issues anymore. I'd like to
2017 Jul 10
2
Rebalance task fails
Hi Nithya, the files were sent privately to avoid spamming the list with large attachments. Could someone explain what the index in Gluster is? Unfortunately, index is a popular word, so googling is not very helpful. Best regards, Szymon Miotk On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at
2017 Jul 13
2
Rebalance task fails
Hi Nithya, I see index in this context: [2017-07-07 10:07:18.230202] E [MSGID: 106062] [glusterd-utils.c:7997:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index I wonder if there is anything I can do to fix it. I was trying to strace the gluster process but still have no clue what exactly the gluster index is. Best regards, Szymon Miotk On Thu, Jul 13, 2017 at 10:12 AM, Nithya