Displaying 20 results from an estimated 26 matches for "arbiter1".
2017 Oct 17
3
gfid entries in volume heal info that do not heal
...3ab-4a65-bda1-9d9dd46db007
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 4 x (2 + 1) = 12
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: tpc-cent-glus1-081017:/exp/b1/gv0
>
> Brick2: tpc-cent-glus2-081017:/exp/b1/gv0
>
> Brick3: tpc-arbiter1-100617:/exp/b1/gv0 (arbiter)
>
> Brick4: tpc-cent-glus1-081017:/exp/b2/gv0
>
> Brick5: tpc-cent-glus2-081017:/exp/b2/gv0
>
> Brick6: tpc-arbiter1-100617:/exp/b2/gv0 (arbiter)
>
> Brick7: tpc-cent-glus1-081017:/exp/b3/gv0
>
> Brick8: tpc-cent-glus2-081017:/exp/b3/gv0
>...
2017 Oct 17
0
gfid entries in volume heal info that do not heal
...00000000000000000000000
trusted.afr.gv0-client-2=0x000000000000000100000000
trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d346463622d393630322d3839356136396461363131662f435f564f4c2d623030312d693637342d63642d63772e6d6435
[root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2: No such file or directory
[root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b4/gv0/.glusterfs/e0/c5/e0c56...
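The trusted.gfid2path value quoted above is a hex-encoded UTF-8 string of the form parent-gfid/basename. A minimal Python sketch to decode it (the value is the one from this thread):

```python
# Decode the hex-encoded trusted.gfid2path xattr shown above; the value
# encodes the parent directory's gfid and the file's basename, UTF-8 encoded.
value = ("0x38633262623330322d323466332d346463622d393630322d3839"
         "356136396461363131662f435f564f4c2d623030312d693637342d63642d63772e6d6435")
parent_gfid, basename = bytes.fromhex(value[2:]).decode("utf-8").split("/")
print(parent_gfid)  # 8c2bb302-24f3-4dcb-9602-895a69da611f
print(basename)     # C_VOL-b001-i674-cd-cw.md5
```

Decoding it this way recovers the file's location relative to its parent directory even when only the gfid entry shows up in heal info.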
2017 Oct 19
2
gfid entries in volume heal info that do not heal
...afr.gv0-client-2=0x000000000000000100000000
> trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
> trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d3464
> 63622d393630322d3839356136396461363131662f435f564f4c2d623030312d69363
> 7342d63642d63772e6d6435
>
> [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> ad6a15d811a2: No such file or directory
>
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
>...
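The trusted.afr.gv0-client-2 value quoted in these messages packs three big-endian 32-bit pending-operation counters. A small Python sketch to split it, assuming the usual AFR layout of (data, metadata, entry):

```python
import struct

# Split a trusted.afr.<volume>-client-N xattr (hex string) into its three
# big-endian 32-bit pending counters: data, metadata, entry.
# Assumes the standard AFR changelog layout.
def parse_afr(value):
    raw = bytes.fromhex(value[2:] if value.startswith("0x") else value)
    return struct.unpack(">III", raw[:12])

data, metadata, entry = parse_afr("0x000000000000000100000000")
print(data, metadata, entry)  # 0 1 0
```

Under that assumption, the value from this thread indicates one pending metadata operation against client-2 (the arbiter brick), which matches the entries stuck in heal info.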
2017 Oct 18
1
gfid entries in volume heal info that do not heal
....afr.gv0-client-2=0x000000000000000100000000
> trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
> trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d
> 346463622d393630322d3839356136396461363131662f435f564f4c2d62
> 3030312d693637342d63642d63772e6d6435
>
> [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2:
> No such file or directory
>
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> /...
2017 Oct 23
2
gfid entries in volume heal info that do not heal
...000000000000000000
trusted.afr.gv0-client-2=0x000000000000000100000000
trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d346463622d393630322d3839356136396461363131662f435f564f4c2d623030312d693637342d63642d63772e6d6435
[root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2: No such file or directory
[root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b4/gv0/.glusterfs/e0/c5/e...
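The brick paths queried in these getfattr commands follow GlusterFS's .glusterfs backlink layout: the entry lives under the first two and next two hex digits of its gfid. A minimal Python sketch, using the brick root and gfid from this thread:

```python
import uuid

# Build the .glusterfs backlink path for a gfid, as queried with
# getfattr above. Brick root and gfid are the ones from this thread.
def gfid_path(brick_root, gfid_hex):
    g = str(uuid.UUID(gfid_hex.removeprefix("0x")))
    return f"{brick_root}/.glusterfs/{g[:2]}/{g[2:4]}/{g}"

print(gfid_path("/exp/b1/gv0", "0x108694dbc0394b7cbd3dad6a15d811a2"))
# → /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
```

A "No such file or directory" response on one brick, as seen here, means that brick never created the backlink for the gfid the heal info reports.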
2017 Oct 16
0
gfid entries in volume heal info that do not heal
...ume info gv0
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 8f07894d-e3ab-4a65-bda1-9d9dd46db007
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x (2 + 1) = 12
Transport-type: tcp
Bricks:
Brick1: tpc-cent-glus1-081017:/exp/b1/gv0
Brick2: tpc-cent-glus2-081017:/exp/b1/gv0
Brick3: tpc-arbiter1-100617:/exp/b1/gv0 (arbiter)
Brick4: tpc-cent-glus1-081017:/exp/b2/gv0
Brick5: tpc-cent-glus2-081017:/exp/b2/gv0
Brick6: tpc-arbiter1-100617:/exp/b2/gv0 (arbiter)
Brick7: tpc-cent-glus1-081017:/exp/b3/gv0
Brick8: tpc-cent-glus2-081017:/exp/b3/gv0
Brick9: tpc-arbiter1-100617:/exp/b3/gv0 (arbiter)
Br...
2017 Oct 23
0
gfid entries in volume heal info that do not heal
...> trusted.afr.gv0-client-2=0x000000000000000100000000
> trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
> trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d346463622d393630322d3839356136396461363131662f435f564f4c2d623030312d693637342d63642d63772e6d6435
>
> [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2: No such file or directory
>
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b4/gv0/.glu...
2017 Oct 23
0
gfid entries in volume heal info that do not heal
...usted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
> > > trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d
> > > 346463622d393630322d3839356136396461363131662f435f564f4c2d6230303
> > > 12d693637342d63642d63772e6d6435
> > >
> > > [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m .
> > > /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> > > getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> > > ad6a15d811a2: No such file or directory
> > >
> > >
> > >...
2017 Oct 16
2
gfid entries in volume heal info that do not heal
Hi Matt,
The files might be in split-brain. Could you please send the outputs of
these commands?
gluster volume info <volname>
gluster volume heal <volname> info
Also send the getfattr output, from all the bricks of that replica pair, of
the files listed in the heal info output.
getfattr -d -e hex -m . <file path on brick>
Thanks & Regards
Karthik
On 16-Oct-2017 8:16 PM,
2017 Oct 24
3
gfid entries in volume heal info that do not heal
...-client-2=0x000000000000000100000000
>
> trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
>
> trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d346463622d393630322d3839356136396461363131662f435f564f4c2d623030312d693637342d63642d63772e6d6435
>
>
>
> [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
>
> getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2: No such file or directory
>
>
>
>
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -...
2017 Oct 24
0
gfid entries in volume heal info that do not heal
...> > > > trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466
> > > > > 332d346463622d393630322d3839356136396461363131662f435f564f4c2
> > > > > d623030312d693637342d63642d63772e6d6435
> > > > >
> > > > > [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m .
> > > > > /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> > > > > ad6a15d811a2
> > > > > getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-
> > > > > bd3d-ad6a15d811a2: No such file or dir...
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After force the add-brick
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: glus...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...t kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be Arbiter brick.
>
> gluster volume create VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms
> gluster2:/disco2TB-0/vms arbiter:/arbiter1
>
> To make this Volume as distributed Arbiter, add more bricks (multiple of
> 3, two data bricks and one arbiter brick) similar to above.
>
> --
> Aravinda
>
>
> ---- On Tue, 05 Nov 2024 22:24:38 +0530 *Gilberto Ferreira
> <gilberto.nunes32@gmail.com <gilbert...
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
What's the volume structure right now?
Best Regards,
Strahil Nikolov
On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32@gmail.com> wrote: So I went ahead and used 'force' (the force is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to...
2017 Nov 06
0
gfid entries in volume heal info that do not heal
...> > > > trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466
> > > > > 332d346463622d393630322d3839356136396461363131662f435f564f4c2
> > > > > d623030312d693637342d63642d63772e6d6435
> > > > >
> > > > > [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m .
> > > > > /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> > > > > ad6a15d811a2
> > > > > getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-
> > > > > bd3d-ad6a15d811a2: No such file or dir...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Your add-brick command adds 2 data bricks and 1 arbiter (even though you name
them all arbiter!)
The sequence is important:
gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0
arbiter1:/arb1
adds two data bricks and a corresponding arbiter from 3 different
servers and 3 different disks,
thus you can lose any one server OR any one disk and stay up and
consistent.
To add more bricks to the volume, follow the same pattern.
A.
Am Dienstag, dem 05.11.2024 um 12:51 -0300 schrie...
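The ordering rule described in the reply above (two data bricks, then one arbiter, repeated per subvolume) can be sketched as follows; the brick names are hypothetical examples, not from the thread:

```python
# Group a flat brick list into (data, data, arbiter) subvolumes,
# mirroring gluster's positional ordering for replica 3 arbiter 1.
# Brick names below are hypothetical.
def group_subvolumes(bricks, replica=3):
    assert len(bricks) % replica == 0, "brick count must be a multiple of the replica count"
    return [tuple(bricks[i:i + replica]) for i in range(0, len(bricks), replica)]

bricks = ["gluster1:/gv0", "gluster2:/gv0", "arbiter1:/arb1",
          "gluster1:/gv1", "gluster2:/gv1", "arbiter1:/arb2"]
for data1, data2, arb in group_subvolumes(bricks):
    print(data1, data2, arb)  # the last brick of each triple is the arbiter
```

This is why the failed commands later in the thread matter: gluster assigns roles purely by position in the brick list, so naming a brick "arbiter" does nothing by itself.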
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
So I went ahead and used 'force' (the force is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at the
end of the command if you want to...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...<a.schwibbe@gmx.net> wrote:
> > Your add-brick command adds 2 data bricks and 1 arbiter (even though you
> > name them all arbiter!)
> >
> > The sequence is important:
> >
> > gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0
> > gluster2:/gv0 arbiter1:/arb1
> >
> > adds two data bricks and a corresponding arbiter from 3 different
> > servers and 3 different disks,
> > thus you can lose any one server OR any one disk and stay up and
> > consistent.
> >
> > adding more bricks to the volume you can foll...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...ow 11.1 or something like that.
In this new server, I have a small disk like 480G in size.
And I created 3 partitions formatted with XFS using imaxpct=75, as
suggested in previous emails.
And then, on the gluster nodes, I tried to add the brick:
gluster vol add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1/arbiter1
arbiter:/arbiter2/arbiter2 arbiter:/arbiter3/arbiter3
But to my surprise (or not!) I got this message:
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
.... 2024 at 13:22, Andreas Schwibbe <a.schwibbe@gmx.net>
wrote:
> Your add-brick command adds 2 data bricks and 1 arbiter (even though you name them
> all arbiter!)
>
> The sequence is important:
>
> gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0
> arbiter1:/arb1
>
> adds two data bricks and a corresponding arbiter from 3 different servers
> and 3 different disks,
> thus you can lose any one server OR any one disk and stay up and
> consistent.
>
> adding more bricks to the volume you can follow the pattern.
>
> A.
>
>...