2017 Jun 15
0
Interesting split-brain...
Hi Ludwig,
There is no way to automatically resolve gfid split-brains with a type
mismatch; you have to resolve them manually by following the steps in [1].
In case of a type mismatch, manual resolution is the recommended route. But
for a plain gfid mismatch, 3.11 gives us a way to resolve it using the
*favorite-child-policy*.
Since the file is not important, you can simply delete it.
[1]
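For reference, a minimal sketch of the favorite-child-policy route for plain
gfid mismatches. The volume name data01 is borrowed from another thread on
this page, and mtime is just one of the supported policy values:

# gluster volume set data01 cluster.favorite-child-policy mtime
# gluster volume heal data01
# gluster volume heal data01 info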
2017 Jun 15
1
Interesting split-brain...
Can you please explain how we ended up in this scenario? I think that
will help us understand these scenarios better, and why gluster
recommends replica 3 or arbiter volumes.
Regards
Rafi KC
On 06/15/2017 10:46 AM, Karthik Subrahmanya wrote:
> Hi Ludwig,
>
> There is no way to automatically resolve gfid split-brains with a type
> mismatch; you have to resolve them manually by following the steps in [1].
2017 Nov 16
0
Missing files on one of the bricks
Hello, we are using glusterfs 3.10.3.
We currently have a full gluster volume heal running; the crawl is still
in progress.
Starting time of crawl: Tue Nov 14 15:58:35 2017
Crawl is in progress
Type of crawl: FULL
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
getfattr from both files:
# getfattr -d -m . -e hex
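For completeness, this command is run against the file's path on each brick;
a sketch using the brick path from the related thread below and a
hypothetical file name:

# getfattr -d -m . -e hex /mnt/AIDATA/data/path/to/missing-file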
2017 Nov 16
2
Missing files on one of the bricks
On 11/16/2017 04:12 PM, Nithya Balachandran wrote:
>
>
> On 15 November 2017 at 19:57, Frederic Harmignies
> <frederic.harmignies at elementai.com> wrote:
>
> Hello, we have two files that are missing from one of the bricks.
> We have no idea how to fix this.
>
> Details:
>
> # gluster volume
2017 Nov 15
2
Missing files on one of the bricks
Hello, we have two files that are missing from one of the bricks. We have
no idea how to fix this.
Details:
# gluster volume info
Volume Name: data01
Type: Replicate
Volume ID: 39b4479c-31f0-4696-9435-5454e4f8d310
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.186.11:/mnt/AIDATA/data
Brick2: 192.168.186.12:/mnt/AIDATA/data
Options Reconfigured:
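If the two data bricks have simply drifted apart, a full heal (like the one
mentioned in the Nov 16 follow-up above) can be triggered and watched; a
short sketch using the volume name from this message:

# gluster volume heal data01 full
# gluster volume heal data01 info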
2017 Nov 16
0
Missing files on one of the bricks
On 15 November 2017 at 19:57, Frederic Harmignies <
frederic.harmignies at elementai.com> wrote:
> Hello, we have two files that are missing from one of the bricks. We have
> no idea how to fix this.
>
> Details:
>
> # gluster volume info
>
> Volume Name: data01
> Type: Replicate
> Volume ID: 39b4479c-31f0-4696-9435-5454e4f8d310
> Status: Started
> Snapshot Count:
2017 Jun 20
2
trash can feature, crashed???
All,
I currently have 2 bricks running Gluster 3.10.1. This is a CentOS
installation. On Friday last week, I enabled the trashcan feature on one of
my volumes:
gluster volume set date01 features.trash on
I also limited the max file size to 500MB:
gluster volume set data01 features.trash-max-filesize 500MB
Three hours after I enabled this, this specific gluster volume went down:
[2017-06-16
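As background, when the trash feature is on, deleted and truncated files are
kept in a .trashcan directory at the root of the mounted volume. A sketch
for checking the option and the directory, assuming a hypothetical mount
point:

# gluster volume get data01 features.trash
# ls /mnt/data01/.trashcan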
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote:
> All,
>
> I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last
> week, I enabled the trashcan feature on one of my volumes:
> gluster volume set date01 features.trash on
I think you misspelled the volume name. Is it data01 or date01?
> I also limited the max file size to 500MB:
2017 Jun 20
2
remounting volumes, is there an easier way
All,
Over the weekend, one of my volumes became unavailable. All clients could
not access their mount points. On some of the clients, I had user processes
that were using these mount points, so I could not do a umount/mount without
killing these processes.
I also noticed that when I restarted the volume, the port changed on the
server. So, clients that were still using the previous TCP/port could
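For reference, brick ports can move when a volume is restarted, while the
glusterd management port stays fixed; a sketch for checking the new ports
and remounting one client. The volume name and server address appear in
other threads on this page; the mount point is hypothetical:

# gluster volume status data01
# mount -t glusterfs 192.168.186.11:/data01 /mnt/data01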
2017 Jun 22
0
remounting volumes, is there an easier way
On Tue, Jun 20, 2017 at 7:32 PM, Ludwig Gamache <ludwig at elementai.com>
wrote:
> All,
>
> Over the weekend, one of my volumes became unavailable. All clients could
> not access their mount points. On some of the clients, I had user processes
> that were using these mount points, so I could not do a umount/mount without
> killing these processes.
>
> I also noticed
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled.
Ludwig
On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote:
> Do you have sharding enabled ? If yes, don't do it.
> If no I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2
2017 Oct 24
2
trying to add a 3rd peer
All,
I am trying to add a third peer to my gluster install. The first two nodes
have been running for many months on gluster 3.10.3-1.
I recently installed the 3rd node with gluster 3.10.6-1. I was able to start
the gluster daemon on it. Afterwards, I tried to add the peer from one of
the two existing servers (gluster peer probe IPADDRESS).
That first peer started the communication with the 3rd peer. At
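For reference, the usual sequence and the checks afterwards (IPADDRESS as in
the message; a hostname works too, provided it resolves on every node):

# gluster peer probe IPADDRESS
# gluster peer status
# gluster pool list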
2017 Oct 24
0
trying to add a 3rd peer
Are you sure all node names can be resolved on all of the other nodes?
You need to use the names previously used in Gluster - check them with
"gluster peer status" or "gluster pool list".
Regards,
Bartosz
> Message written by Ludwig Gamache <ludwig at elementai.com> on 24.10.2017 at 03:13:
>
> All,
>
> I am trying to add a third peer to my gluster
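Following up on Bartosz's point, a quick sketch for verifying that the peer
names reported by gluster actually resolve on every node (the peer name
below is hypothetical; getent is a standard glibc tool):

# gluster pool list
# getent hosts gluster-node3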
2017 Sep 25
2
Adding bricks to an existing installation.
All,
We currently have a Gluster installation which is made of 2 servers. Each
server has 10 drives on ZFS. And I have a gluster mirror between these 2.
The current config looks like:
SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
I now need to add more space and a third server. Before I do the changes, I
want to know if this is a supported config. By adding a third server, I
simply want to
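In case it helps, one possible reading is growing the replica-2 pair into
replica 3; a sketch of that conversion, with the volume name borrowed from
other threads on this page and a hypothetical third server. A distributed
expansion would instead need bricks added in multiples of the replica count:

# gluster volume add-brick data01 replica 3 serverC:/mnt/AIDATA/data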
2017 Jun 16
1
emptying the trash directory
All,
I just enabled the trashcan feature on our volumes. It is working as
expected. However, I can't seem to find the rules to empty the trashcan. Is
there any automated process to do that? If so, what are the configuration
features?
Regards,
Ludwig
--
Ludwig Gamache
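I'm not aware of a built-in purge schedule for the trashcan in that release,
so one manual approach is a periodic find over the .trashcan directory; a
sketch assuming a hypothetical mount point and a 30-day retention:

# find /mnt/data01/.trashcan -type f -mtime +30 -delete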
2017 Sep 25
0
Adding bricks to an existing installation.
Do you have sharding enabled ? If yes, don't do it.
If no I'll let someone who knows better answer you :)
On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> All,
>
> We currently have a Gluster installation which is made of 2 servers. Each
> server has 10 drives on ZFS. And I have a gluster mirror between these 2.
>
> The current config looks like:
>
2018 May 21
2
split brain? but where?
Hi,
I seem to have a split-brain issue, but I cannot figure out where it is
or what it is. Can someone help me please? I can't find what to fix here.
==========
root at salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root
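As a first diagnostic step, the affected files can usually be listed per
volume; a sketch with a placeholder volume name, in the same spirit as the
placeholders used later in this thread:

# gluster volume heal <VOLNAME> info split-brain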
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users
01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|
1) Would this command work for that?
glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01
clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01
clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01
clustr-06:/mnt/data01
So the
2018 May 21
0
split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
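Roughly, the brick-side lookup described on that page looks like this, using
the gfid from the question and a hypothetical brick path:

# find /bricks/brick1 -samefile \
    /bricks/brick1/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693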
On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote:
>Hi,
>
>I seem to have a split brain issue, but I cannot figure out where this
>is and what it is, can someone help me pls, I cant find what to fix here.
>
2018 May 22
2
split brain? but where?
Hi,
Which version of gluster are you using?
You can find which file that is by using the following command:
find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of
gfid>/<next two characters of gfid>/<full gfid>
Please provide the getfattr output of the file which is in split-brain.
The steps to recover from split-brain can be found here,
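For reference, once the offending file is identified, recent releases can
also resolve data/metadata split-brain straight from the CLI; a sketch with
placeholder names:

# gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>
# gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKPATH> <FILE>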