Displaying 12 results from an estimated 12 matches for "pve01".
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: arbiter:/arb...
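As an aside (a sketch, not from the thread): the line "Number of Bricks: 3 x (2 + 1) = 9" means 3 distribute subvolumes, each made of 2 data bricks plus 1 arbiter brick. The arithmetic, plus a hedged way to inspect the layout on a live cluster (assuming the volume name VMS from the output above):

```shell
# Brick count behind "Number of Bricks: 3 x (2 + 1) = 9":
# 3 subvolumes, each with 2 data bricks + 1 arbiter brick.
subvols=3
data_bricks=2
arbiter_bricks=1
echo $(( subvols * (data_bricks + arbiter_bricks) ))   # prints 9

# On a live cluster (requires glusterd; sketch only):
# gluster volume info VMS | grep -iE 'Number of Bricks|arbiter'
```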
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...ata bricks and one arbiter brick) similar to above.
>
> --
> Aravinda
>
>
> ---- On Tue, 05 Nov 2024 22:24:38 +0530 *Gilberto Ferreira
> <gilberto.nunes32 at gmail.com>* wrote ---
>
> Clearly I am doing something wrong
>
> pve01:~# gluster vol info
>
> Volume Name: VMS
> Type: Distributed-Replicate
> Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/disco2TB-0/vms
> Brick2:...
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
.../arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior.
pve01:~# gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
But I don't know if this is the right thing to do.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Wed., Nov 6, 2024 at ...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Still getting error
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/...
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
...stname State
> 0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
> 99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
> 4718ead7-aebd-4b8b-a401-f9e8b0acfeb1 localhost Connected
>
> But when I do this:
> pve01:~# gluster volume add-brick VMS replica 2 arbiter 1
> arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
> I got this error:
>
> For arbiter configuration, replica count must be 3 and arbiter count must
> be 1. The 3rd brick of the replica will be the arbiter
>
> Usage:
>...
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
...r:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at
the end of the command if you want to override
this behavior.
pve01:~# gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
But I don't know if this is the right thing to do.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Wed., Nov 6, 2024 at ...
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
...Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
4718ead7-aebd-4b8b-a401-f9e8b0acfeb1 localhost Connected
But when I do this:
pve01:~# gluster volume add-brick VMS replica 2 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
I got this error:
For arbiter configuration, replica count must be 3 and arbiter count must
be 1. The 3rd brick of the replica will be the arbiter
Usage:
volume add-brick <VOLNAME> [<...
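The error above occurs because with arbiter volumes the replica count includes the arbiter: the arbiter is the 3rd brick of each replica set, so the command must say `replica 3 arbiter 1`, not `replica 2 arbiter 1`. A hedged sketch of the corrected form (brick paths follow the thread; note that every 3rd brick in the list becomes the arbiter of its set, so ordering matters):

```shell
# Sketch, not a verified command: the replica count counts the arbiter too,
# so "replica 3 arbiter 1" is required. Every 3rd brick in the list becomes
# the arbiter of its replica set.
gluster volume add-brick VMS replica 3 arbiter 1 \
    arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
```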
2024 Oct 19
2
How much disk can fail after a catastrophic failure occur?
Hi there.
I have 2 servers with this number of disks in each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2....
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Right now you have 3 "sets" of replica 2 on 2 hosts. In your case you don't need so much space for arbiters (10-15GB with 95 maxpct is enough for each "set"), and you need a 3rd system; otherwise, when the node that holds the data brick + arbiter brick fails (2-node scenario), that "set" will be unavailable.
If you do have a 3rd host, I think the command would be: gluster
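A hedged guess at what that truncated command might look like; the hostname and brick paths here are hypothetical, not from the thread:

```shell
# Hypothetical sketch for a 3rd host named "host3" (name and paths assumed):
# one arbiter brick per replica "set", added as the 3rd brick of each set.
gluster volume add-brick VMS replica 3 arbiter 1 \
    host3:/arbiter1 host3:/arbiter2 host3:/arbiter3
```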
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
If you create a volume with replica 2 arbiter 1
you create 2 data bricks that are mirrored (makes 2 file copies)
+
you create 1 arbiter that holds metadata of all files on these bricks.
You "can" create all on the same server, but this makes no sense,
because when the server goes down, no files on these disks are
accessible anymore,
hence why best practice is to spread out over 3
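A minimal create-time sketch of what the paragraph describes, spreading the bricks over 3 servers (hostnames and paths are hypothetical; the 3rd brick of each set becomes the arbiter and holds only metadata):

```shell
# Hypothetical sketch (hostnames/paths assumed): 2 data bricks mirrored,
# plus a 3rd brick per set that holds only file metadata (the arbiter).
gluster volume create VMS replica 3 arbiter 1 \
    node1:/bricks/vms node2:/bricks/vms node3:/arbiter/vms
```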
2024 Oct 20
1
How much disk can fail after a catastrophic failure occur?
...perienced.
As usual, consider if you can add an arbiter for your volumes.
Best Regards,
Strahil Nikolov
On Saturday, October 19, 2024 at 18:32:40 GMT+3, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
Hi there.
I have 2 servers with this number of disks in each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2....
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
...iter for your volumes.
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, October 19, 2024 at 18:32:40 GMT+3, Gilberto Ferreira <
> gilberto.nunes32 at gmail.com> wrote:
>
>
> Hi there.
> I have 2 servers with this number of disks in each side:
>
> pve01:~# df | grep disco
> /dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
> /dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
> /dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
> /dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
> /dev/sdg 2.0T 19G 2.0T 1% /disco2...