Displaying 20 results from an estimated 2000 matches similar to: "Add an arbiter when have multiple bricks at same server."
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
If you create a volume with replica 2 arbiter 1
you create 2 data bricks that are mirrored (makes 2 file copies)
+
you create 1 arbiter that holds metadata of all files on these bricks.
You "can" create all on the same server, but this makes no sense,
because when the server goes down, no files on these disks are
accessible anymore,
hence why bestpractice is to spread out over 3
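For illustration, a minimal sketch of such a create command, using the
replica 2 arbiter 1 syntax quoted above (host names and brick paths are
placeholders, not from the thread):
gluster volume create VMS replica 2 arbiter 1 server1:/data/brick server2:/data/brick server3:/arbiter/brick
# server1 and server2 hold the mirrored data copies; server3 holds only metadata
This gives one replica pair plus its arbiter, each on a different server and disk.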
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
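(The brick list above is cut off.) A common follow-up at this point, using
standard gluster CLI commands, is to confirm the new arbiter bricks are
online and being populated by self-heal:
gluster volume status VMS     # all 9 bricks should show as online
gluster volume heal VMS info  # entries still pending heal onto the arbiters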
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
What's the volume structure right now?
Best Regards,
Strahil Nikolov
On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira<gilberto.nunes32 at gmail.com> wrote: So I went ahead and did the force (may the force be with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
But if I change replica 2 arbiter 1 to replica 3 arbiter 1
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
I got this error:
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use
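As the 2024 Nov 08 message earlier in this list shows, the warning can be
overridden by appending 'force' to the same command:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force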
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
So I went ahead and did the force (may the force be with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at
the end of the command
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Ok.
I have a 3rd host with Debian 12 installed and Gluster v11. The name of the
host is arbiter!
I have already added this host to the pool:
arbiter:~# gluster pool list
UUID Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
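A host is normally added to the pool with the standard probe command; a
sketch using the host name from this message:
gluster peer probe arbiter    # run from a node already in the pool, e.g. gluster1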
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Your add-brick command adds 2 data bricks and 1 arbiter (even though you name
them all arbiter!)
The sequence is important:
gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0
arbiter1:/arb1
adds two data bricks and a corresponding arbiter from 3 different
servers and 3 different disks,
thus you can lose any one server OR any one disk and stay up and
consistent.
adding more bricks
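(The snippet is cut off at "adding more bricks".) A plausible continuation of
the same pattern, not taken from the original message and with hypothetical
paths: each additional data-brick pair gets its own arbiter brick in a
separate directory, e.g.
gluster v add-brick VMS gluster1:/gv1 gluster2:/gv1 arbiter1:/arb2
# on an arbiter volume, bricks are added in sets of 3 and the last brick of each set becomes the arbiter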
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Ok.
I got confused here!
For each brick I will need one arbiter brick, in a different
partition/folder?
And what if at some point in the future I decide to add a new brick on the
main servers?
Do I need to provide another partition/folder on the arbiter and then
adjust the arbiter brick count?
---
Gilberto Nunes Ferreira
On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Hi there.
In previous emails, I discussed with you guys a 2-node gluster setup,
where bricks of different sizes sit in different folders on the same server,
like
gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms
gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms
So I went ahead and installed a Debian 12 and
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Still getting the error:
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Right now you have 3 "sets" of replica 2 on 2 hosts.In your case you don't need so much space for arbiters (10-15GB with 95 maxpct is enough for each "set") and you need a 3rd system or when the node that holds the data brick + arbiter brick fails (2 node scenario) - that "set" will be unavailable.
If you do have a 3rd host, I think the command would be:gluster
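On the sizing point above, a minimal sketch of preparing such a small arbiter
brick, assuming XFS and a hypothetical spare device /dev/sdX:
mkfs.xfs -i size=512,maxpct=95 /dev/sdX   # allow up to 95% of the space for inodes, since arbiters store only metadata
mkdir -p /arbiter1 && mount /dev/sdX /arbiter1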
2024 Oct 19
2
How much disk can fail after a catastrophic failure occur?
Hi there.
I have 2 servers with this number of disks on each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1%
2024 Oct 20
1
How much disk can fail after a catastrophic failure occur?
If it's replica 2, you can lose up to 1 replica per distribution group. For example, if you have a volume TEST with a setup like this:
server1:/brick1
server2:/brick1
server1:/brick2
server2:/brick2
You can lose any brick of the replica "/brick1" and any brick in the replica "/brick2". So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.
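For reference, a sketch of the create command that yields exactly that
layout; the order of the bricks is what defines the two replica pairs:
gluster volume create TEST replica 2 server1:/brick1 server2:/brick1 server1:/brick2 server2:/brick2
# pair 1: server1:/brick1 + server2:/brick1, pair 2: server1:/brick2 + server2:/brick2
Recent gluster versions will warn that replica 2 volumes are prone to
split-brain and ask for confirmation before creating such a volume.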
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
Ok! I got it about how many disks I can lose and so on.
But regarding the arbiter issue, I always set these parameters on the gluster
volume in order to avoid split-brain, and I might add that this works pretty
well for me.
I already have a Proxmox VE cluster with 2 nodes and about 50 vms, running
different Linux distros - and Windows as well - with cPanel and other stuff,
in production.
Anyway here the
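(The message is cut off before the actual parameters.) For illustration only,
the quorum options most commonly referenced for split-brain avoidance, which
are not necessarily the ones meant here:
gluster volume set <VOLNAME> cluster.quorum-type auto
gluster volume set <VOLNAME> cluster.server-quorum-type server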
2024 Nov 11
0
Disk size and virtual size drive me crazy!
Hi there.
I can't understand why I am getting these different values:
proxmox01:/vms/images# df
Filesystem Size Used Avail Use% Mounted on
udev 252G 0 252G 0% /dev
tmpfs 51G 9,4M 51G 1% /run
/dev/sda4 433G 20G 413G 5% /
tmpfs 252G 63M 252G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
efivarfs 496K 335K
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
At first I changed both settings mentioned below and the first tests look good.
Before changing the settings I was able to crash a newly installed VM every
time after a fresh installation by producing a lot of I/O, e.g. when installing
LibreOffice. This always resulted in corrupt files inside the VM, but
researching the qcow2 file with the
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody
Regarding the mount issue, I usually use this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service
[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
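# (snippet cut off here; an assumed completion, not from the original message)
ExecStart=/bin/mount -a -t glusterfs
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target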
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options:
performance.write-behind
performance.flush-behind
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volume settings are recommended to be applied
> (preferably at
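For reference, the syntax for disabling the two options named above (the
volume name is a placeholder):
gluster volume set <VOLNAME> performance.write-behind off
gluster volume set <VOLNAME> performance.flush-behind off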
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, the brick
sizes are now reporting the correct size of all bricks combined instead of
just one brick.
Not sure if that gives you any clues for this... maybe adding another brick
to the pool would have a similar effect?
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
> Sure!
>
> > 1 -
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small
(<1 MB) files and thousands of files larger than 1 GB.
Attached is the tier log for gluster1 and gluster2. These are full of
"demotion failed" messages, which is also shown in the status:
[root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
Node Promoted files Demoted files