Displaying 8 results from an estimated 8 matches for "schwibbe".
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...o add a new brick
> in the main servers?
> Do I need to provide another partition/folder in the arbiter and then
> adjust the arbiter brick counter?
>
> ---
>
>
> Gilberto Nunes Ferreira
>
> On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe
> <a.schwibbe at gmx.net> wrote:
> > Your add-brick command adds 2 bricks 1 arbiter (even though you
> > name them all arbiter!)
> >
> > The sequence is important:
> >
> > gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0
> > gluster2:...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...1/vms gluster2:
> /disco1TB-1/vms arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
> arbiter:/arbiter4
> volume add-brick: failed: Operation failed
>
> ---
>
>
> Gilberto Nunes Ferreira
>
>
> On Tue, Nov 5, 2024 at 13:39, Andreas Schwibbe <a.schwibbe at gmx.net>
> wrote:
>
>
> If you create a volume with replica 2 arbiter 1
>
> you create 2 data bricks that are mirrored (makes 2 file copies)
> +
> you create 1 arbiter that holds metadata of all files on these bricks.
>
> You "can" cre...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...ferent
partition/folder?
And what if in some point in the future I decide to add a new brick in the
main servers?
Do I need to provide another partition/folder in the arbiter and then
adjust the arbiter brick counter?
---
Gilberto Nunes Ferreira
On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe <a.schwibbe at gmx.net>
wrote:
> Your add-brick command adds 2 bricks 1 arbiter (even though you name them
> all arbiter!)
>
> The sequence is important:
>
> gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0
> arbiter1:/arb1
>
> adds two da...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...er:/arbiter2 arbiter:/arbiter3
>> arbiter:/arbiter4
>> volume add-brick: failed: Operation failed
>>
>> ---
>>
>>
>> Gilberto Nunes Ferreira
>>
>>
>> On Tue, Nov 5, 2024 at 13:39, Andreas Schwibbe <a.schwibbe at gmx.net>
>> wrote:
>>
>>
>> If you create a volume with replica 2 arbiter 1
>>
>> you create 2 data bricks that are mirrored (makes 2 file copies)
>> +
>> you create 1 arbiter that holds metadata of all files on these bricks.
...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Your add-brick command adds 2 data bricks and 1 arbiter (even though
you name them all arbiter!)
The sequence is important:
gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0
arbiter1:/arb1
adds two data bricks and a corresponding arbiter from 3 different
servers and 3 different disks,
thus you can lose any one server OR any one disk and stay up and
consistent.
adding more bricks
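The positional rule explained in this reply (data bricks first, then their arbiter, per replica set) can be sketched with a small helper. This is a hypothetical illustration of how gluster consumes the brick list for a "replica 2 arbiter 1" layout, not part of the gluster CLI:

```python
# Hypothetical sketch: group a flat brick list into replica sets for a
# "replica 2 arbiter 1" volume. Bricks are consumed positionally, so in
# each group of 3 the first two become data bricks and the third becomes
# the arbiter; listing them out of order silently changes their roles.

def group_replica_sets(bricks, data=2, arbiter=1):
    """Split bricks into (data_bricks, arbiter_brick) tuples."""
    step = data + arbiter
    if len(bricks) % step != 0:
        raise ValueError("brick count must be a multiple of %d" % step)
    sets = []
    for i in range(0, len(bricks), step):
        chunk = bricks[i:i + step]
        sets.append((chunk[:data], chunk[data]))
    return sets

# The sequence from the add-brick command quoted above:
sets = group_replica_sets(
    ["gluster1:/gv0", "gluster2:/gv0", "arbiter1:/arb1"]
)
```

This is why supplying four arbiter paths in a row (as in the failed command earlier in the thread) does not yield four arbiters: gluster assigns roles by position, not by hostname.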
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
...' bus='scsi'/>
<alias name='scsi1-0-0-0'/>
<address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of Andreas Schwibbe <a.schwibbe at gmx.net>
Date: Monday, October 14, 2024 at 4:34 AM
To: gluster-users at gluster.org <gluster-users at gluster.org>
Subject: Re: [Gluster-users] XFS corruption reported by QEMU virtual machine with image hosted on gluster
Hey Erik,
I am running a similar setup with no iss...
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
Hey Erik,
I am running a similar setup with no issues having Ubuntu Host Systems
on HPE DL380 Gen 10.
I however used to run libvirt/qemu via nfs-ganesha on top of gluster
flawlessly.
Recently I upgraded to the native GFAPI implementation, which is poorly
documented with snippets all over the internet.
Although I cannot provide a direct solution for your issue, I would
however suggest trying
2024 Sep 29
1
Growing cluster: peering worked, staging failed
Fellow gluster users,
I am trying to extend a 3-node cluster that has been serving me very
reliably for a long time now.
Cluster is serving two volumes:
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 9bafc4d2-d9b6-4b6d-a631-1cf42d1d2559
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (2 + 1) = 18
Transport-type: tcp
Volume Name: gv1
Type: Replicate
Volume ID:
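The "Number of Bricks: 6 x (2 + 1) = 18" line in the gv0 volume info above encodes the layout: 6 distribute subvolumes, each holding 2 data bricks (full file copies) plus 1 arbiter brick (metadata only). A minimal arithmetic sketch of that reading:

```python
# Brick arithmetic for the Distributed-Replicate volume gv0 above:
# "6 x (2 + 1)" means 6 distribute subvolumes, each with 2 data bricks
# (full file copies) and 1 arbiter brick (metadata only, no file data).
subvolumes = 6
data_bricks_per_set = 2
arbiter_bricks_per_set = 1

total_bricks = subvolumes * (data_bricks_per_set + arbiter_bricks_per_set)
full_copies_per_file = data_bricks_per_set  # the arbiter stores no file data

print(total_bricks)  # matches the "= 18" in the volume info
```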