Displaying 20 results from an estimated 700 matches similar to: "How many disks can fail before a catastrophic failure occurs?"
2024 Oct 20
1
How many disks can fail before a catastrophic failure occurs?
If it's replica 2, you can lose up to 1 replica per distribution group. For example, if you have a volume TEST with a setup like this:
server1:/brick1
server2:/brick1
server1:/brick2
server2:/brick2
You can lose any one brick of the "/brick1" replica set and any one brick of the "/brick2" replica set. So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.
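As a minimal sketch (assuming glusterd is already running on both servers and these brick paths exist), a volume with exactly that layout could be created like this; gluster will warn that plain replica 2 is prone to split-brain, and 'force' is only needed if the bricks sit on the root filesystem:
gluster volume create TEST replica 2 server1:/brick1 server2:/brick1 server1:/brick2 server2:/brick2
gluster volume start TEST
gluster volume info TEST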
2024 Oct 21
1
How many disks can fail before a catastrophic failure occurs?
Ok! I got it about how many disks I can lose and so on.
But regarding the arbiter issue, I always set these parameters on the gluster
volume in order to avoid split-brain, and I might add that it works pretty well
for me.
I already have a Proxmox VE cluster with 2 nodes and about 50 VMs, running
different Linux distros - and Windows as well - with cPanel and other stuff,
in production.
Anyway here the
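(The exact parameters were cut off above. Purely as an illustration, and not necessarily the poster's settings, the quorum options usually set to reduce split-brain risk on a volume named VMS look like this:)
gluster volume set VMS cluster.quorum-type auto
gluster volume set VMS cluster.server-quorum-type server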
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Yes, but I want to add.
Is it the same logic?
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Tue, Nov 5, 2024, 14:09, Aravinda <aravinda at kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be the Arbiter brick.
>
> gluster volume
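(A minimal sketch of such a create command, using hypothetical hosts host1/host2/host3 already in the trusted pool and hypothetical brick paths; host1 and host2 carry the data copies, host3 carries only metadata:)
gluster volume create ARBVOL replica 3 arbiter 1 host1:/bricks/data host2:/bricks/data host3:/bricks/arbiter
gluster volume start ARBVOL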
2024 Nov 08
1
Add an arbiter when there are multiple bricks on the same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
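(After an add-brick like this, the new arbiter bricks are filled in by self-heal; a typical way to watch that, sketched with the volume name from the thread, is:)
gluster volume heal VMS info
gluster volume status VMS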
2024 Nov 08
1
Add an arbiter when there are multiple bricks on the same server.
What's the volume structure right now?
Best Regards,
Strahil Nikolov
On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: So I went ahead and did the force (the force is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
If you create a volume with replica 2 arbiter 1,
you create 2 data bricks that are mirrored (making 2 copies of each file)
plus 1 arbiter brick that holds only the metadata of all files on those bricks.
You "can" create all of them on the same server, but this makes no sense,
because when that server goes down, none of the files on those disks are
accessible anymore,
hence why best practice is to spread out over 3
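(A quick way to see the "metadata only" point, sketched with hypothetical brick paths: the same file is full size on a data brick but zero bytes on the arbiter brick.)
ls -l /bricks/data1/vms/disk0.qcow2      # full size on a data brick
ls -l /bricks/arbiter1/vms/disk0.qcow2   # 0 bytes on the arbiter brick; only metadata/xattrs are kept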
2024 Nov 06
1
Add an arbiter when there are multiple bricks on the same server.
But if I change replica 2 arbiter 1 to replica 3 arbiter 1
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
I got this error:
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use
2024 Nov 06
1
Add an arbiter when there are multiple bricks on the same server.
Ok.
I have a 3rd host with Debian 12 installed and Gluster v11. The name of the
host is arbiter!
I already added this host to the pool:
arbiter:~# gluster pool list
UUID Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
2024 Nov 06
1
Add an arbiter when there are multiple bricks on the same server.
So I went ahead and did the force (the force is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at the end of the command
2024 Nov 06
1
Add an arbiter when there are multiple bricks on the same server.
Right now you have 3 "sets" of replica 2 on 2 hosts. In your case you don't need much space for the arbiters (10-15GB with 95 maxpct is enough for each "set"), and you need a 3rd system; otherwise, in the 2-node scenario, when the node that holds both the data brick and the arbiter brick fails, that "set" will be unavailable.
If you do have a 3rd host, I think the command would be: gluster
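(A sketch of that sizing advice, with a hypothetical spare device /dev/sdb on the third host: a small XFS filesystem with a high inode percentage is enough, since the arbiter bricks hold metadata only.)
mkfs.xfs -i maxpct=95 /dev/sdb
mkdir -p /arbiter
mount /dev/sdb /arbiter
mkdir -p /arbiter/arbiter1 /arbiter/arbiter2 /arbiter/arbiter3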
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Hi there.
In previous emails, I discussed with you guys a 2-node gluster server,
where the bricks are laid out on disks of different sizes and in different
folders on the same server, like
gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms
gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms
So I went ahead and installed a Debian 12 and
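(The step implied here, sketched on the assumption that the new Debian host resolves as "arbiter" from the existing nodes: join it to the trusted pool before any add-brick.)
gluster peer probe arbiter
gluster pool list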
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Your add-brick command adds 2 data bricks and 1 arbiter (even though you name
them all arbiter!)
The sequence is important:
gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0
arbiter1:/arb1
adds two data bricks and a corresponding arbiter from 3 different
servers and 3 different disks,
thus you can lose any one server OR any one disk and stay up and
consistent.
adding more bricks
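(Continuing that logic as a sketch with hypothetical brick paths: later growth adds bricks in the same order, two data bricks followed by their arbiter, so every new replica set gets its own arbiter brick.)
gluster v add-brick VMS gluster1:/gv1 gluster2:/gv1 arbiter1:/arb2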
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Ok.
I got confused here!
For each brick, will I need one arbiter brick in a different
partition/folder?
And what if at some point in the future I decide to add a new brick on the
main servers?
Do I need to provide another partition/folder on the arbiter and then
adjust the arbiter brick count?
---
Gilberto Nunes Ferreira
On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Still getting an error:
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
2024 Nov 11
0
Disk size and virtual size drive me crazy!
Hi there.
I can't understand why I am getting these different values:
proxmox01:/vms/images# df
Filesystem Size Used Avail Use% Mounted on
udev 252G 0 252G 0% /dev
tmpfs 51G 9,4M 51G 1% /run
/dev/sda4 433G 20G 413G 5% /
tmpfs 252G 63M 252G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
efivarfs 496K 335K
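(One common source of such differences, sketched with a hypothetical image path: thin-provisioned images report a large virtual size while far fewer bytes are actually allocated on disk.)
qemu-img info /vms/images/100/vm-100-disk-0.qcow2   # compare "virtual size" with "disk size"
du -h --apparent-size /vms/images/100/vm-100-disk-0.qcow2
du -h /vms/images/100/vm-100-disk-0.qcow2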
2018 May 22
2
split brain? but where?
Hi,
Which version of gluster are you using?
You can find which file that is using the following command:
find <brickpath> -samefile <brickpath>/.glusterfs/<first two bits of
gfid>/<next 2 bits of gfid>/<full gfid>
Please provide the getfattr output of the file which is in split-brain.
The steps to recover from split-brain can be found here,
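(For reference, the getfattr invocation usually requested in these threads looks like this, sketched against the brick path from this thread and a hypothetical file path; it dumps the AFR changelog xattrs needed to judge the split-brain:)
getfattr -d -m . -e hex /bricks/brick1/gv0/path/to/file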
2018 May 22
0
split brain? but where?
I tried this already.
8><---
[root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root at glusterp2 fb]#
8><---
gluster 4
Centos 7.4
8><---
df -h
[root at glusterp2 fb]# df -h
Filesystem
2018 May 21
2
split brain? but where?
Hi,
I seem to have a split-brain issue, but I cannot figure out where it is
or what it is. Can someone help me please? I can't find what to fix here.
==========
root at salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
Filesystem Size Used
Avail Use% Mounted on
/dev/mapper/centos-root
2018 May 22
1
split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up:
8><---
[root at glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root at glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root 64 May 22 13:01 .
drwx------. 4 root root 24 May 8 14:27 ..
-rw-------. 1 root root 3294887936 May 4 11:07
eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root
2018 May 21
0
split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote:
>Hi,
>
>I seem to have a split-brain issue, but I cannot figure out where it is
>or what it is. Can someone help me please? I can't find what to fix here.
>
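(Roughly what the gfid-to-path document linked above describes, sketched on the assumption that the volume behind /bricks/brick1/gv0 is named gv0: mount it with the aux-gfid-mount option, then ask for the path of the gfid from this thread.)
mkdir -p /mnt/gv0
mount -t glusterfs -o aux-gfid-mount glusterp2:/gv0 /mnt/gv0
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gv0/.gfid/eafb8799-4e7a-4264-9213-26997c5a4693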