Displaying 17 results from an estimated 17 matches for "disco1tb".
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...> Type: Distributed-Replicate
> Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/disco2TB-0/vms
> Brick2: gluster2:/disco2TB-0/vms
> Brick3: gluster1:/disco1TB-0/vms
> Brick4: gluster2:/disco1TB-0/vms
> Brick5: gluster1:/disco1TB-1/vms
> Brick6: gluster2:/disco1TB-1/vms
> Options Reconfigured:
> cluster.self-heal-daemon: off
> cluster.entry-self-heal: off
> cluster.metadata-self-heal: off
> cluster.data-self-heal: off
> cluster....
2024 Oct 19
2
How much disk can fail after a catastrophic failure occur?
Hi there.
I have 2 servers with this number of disks in each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0
/dev/sdj 1.0T 9.2G 1015G...
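A quick sketch of how this maps to failure tolerance (assuming the volume named VMS and the brick ordering shown in the other messages of this thread, where consecutive bricks form one replica pair with one brick per server):

# List the bricks in creation order; in a distributed-replicate 3 x 2 volume,
# Brick1+Brick2, Brick3+Brick4 and Brick5+Brick6 each form one replica pair.
gluster volume info VMS | grep '^Brick'
# Any combination of disks can fail as long as no pair loses both of its
# members; if both bricks of one pair are lost, the files stored on that
# pair become unavailable even though the other pairs are still up.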
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...error
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
cluster.self-heal-daemon: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.granular-entry-heal: on
storage.fips-mode-rch...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...mment with you guys, about 2 node gluster
> > > server, where the bricks lay down in different size and folders
> > > in the same server, like
> > >
> > > gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
> > > gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms
> > > gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms
> > > gluster2:/disco1TB-1/vms
> > >
> > > So I went ahead and installed a Debian 12 and installed the same
> > > gluster version that the other servers, which is now 11.1 or
> > > s...
2024 Nov 11
1
Disk size and virtual size drive me crazy!
...3G 5% /
tmpfs 252G 63M 252G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
efivarfs 496K 335K 157K 69% /sys/firmware/efi/efivars
/dev/sda2 1,8G 204M 1,5G 12% /boot
/dev/sda1 1,9G 12M 1,9G 1% /boot/efi
/dev/sdb 932G 728G 204G 79% /disco1TB-0
/dev/sdc 932G 718G 214G 78% /disco1TB-1
/dev/sde 932G 720G 212G 78% /disco1TB-2
/dev/sdd 1,9T 1,5T 387G 80% /disco2TB-0
tmpfs 51G 4,0K 51G 1% /run/user/0
*gluster1:VMS 4,6T 3,6T 970G 80% /vms*
/dev/fuse 128M 36K 128M 1% /etc/pve
proxm...
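As a rough cross-check of those numbers (a sketch only; it assumes each replica pair has one brick on gluster1 and one on gluster2, as in the volume info quoted elsewhere in this thread), the usable size of a distributed-replicate (replica 2) volume is roughly the sum of one host's bricks:

# One host contributes three ~932G bricks plus one ~1,9T (~1900G) brick:
echo $((932 + 932 + 932 + 1900))   # ~4696G, i.e. about the 4,6T shown for gluster1:VMS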
2024 Nov 20
1
Disk size and virtual size drive me crazy!
...3G 5% /
tmpfs 252G 63M 252G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
efivarfs 496K 335K 157K 69% /sys/firmware/efi/efivars
/dev/sda2 1,8G 204M 1,5G 12% /boot
/dev/sda1 1,9G 12M 1,9G 1% /boot/efi
/dev/sdb 932G 728G 204G 79% /disco1TB-0
/dev/sdc 932G 718G 214G 78% /disco1TB-1
/dev/sde 932G 720G 212G 78% /disco1TB-2
/dev/sdd 1,9T 1,5T 387G 80% /disco2TB-0
tmpfs 51G 4,0K 51G 1% /run/user/0
gluster1:VMS 4,6T 3,6T 970G 80% /vms
/dev/fuse 128M 36K 128M 1% /etc/pve

proxmox...
2024 Oct 20
1
How much disk can fail after a catastrophic failure occur?
...olumes.
Best Regards,
Strahil Nikolov
On Saturday, 19 October 2024 at 18:32:40 GMT+3, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
Hi there. I have 2 servers with this number of disks in each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0
/dev/sdj 1.0T 9.2G 1015G ...
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
...me: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: arbiter:/arbiter1 (arbiter)
Brick4: gluster1:/disco1TB-0/vms
Brick5: gluster2:/disco1TB-0/vms
Brick6: arbiter:/arbiter2 (arbiter)
Brick7: gluster1:/disco1TB-1/vms
Brick8: gluster2:/disco1TB-1/vms
Brick9: arbiter:/arbiter3 (arbiter)
Options Reconfigured:
cluster.self-heal-daemon: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.d...
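Not part of the quoted output, but worth noting as a follow-up sketch: the same volume still has its self-heal options switched off, and newly added arbiter bricks are only populated by the heal process, so commands along these lines would probably be needed once the arbiters are in place (volume name VMS taken from the thread):

# Re-enable self-heal so the arbiter bricks receive metadata for existing files
gluster volume set VMS cluster.self-heal-daemon on
gluster volume set VMS cluster.entry-self-heal on
gluster volume set VMS cluster.metadata-self-heal on
gluster volume set VMS cluster.data-self-heal on
# Start a full heal and watch its progress
gluster volume heal VMS full
gluster volume heal VMS info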
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
...
> On Saturday, 19 October 2024 at 18:32:40 GMT+3, Gilberto Ferreira <
> gilberto.nunes32 at gmail.com> wrote:
>
>
> Hi there.
> I have 2 servers with this number of disks in each side:
>
> pve01:~# df | grep disco
> /dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
> /dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
> /dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
> /dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
> /dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
> /dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0
> /dev/sd...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Hi there.
In previous emails, I discussed with you guys a 2-node gluster server,
where the bricks are of different sizes and live in different folders on the same server,
like
gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms
gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms
So I went ahead and installed Debian 12 with the same gluster
version as the other servers, which is now 11.1 or something like that.
In this new server, I have a small disk like 480G in size.
And I creat...
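The message is cut off here. Purely as a hypothetical sketch of the arbiter-side preparation (device names are placeholders; only the /arbiter1-3 brick paths and the "10-15GB with 95 maxpct" sizing hint appear in other messages of this thread), the small 480G disk could be split into one small XFS filesystem per replica set:

# Placeholder partitions on the 480G disk; a high imaxpct suits arbiter bricks,
# which store directory entries and metadata but no file data.
mkfs.xfs -i maxpct=95 /dev/sdX1
mkfs.xfs -i maxpct=95 /dev/sdX2
mkfs.xfs -i maxpct=95 /dev/sdX3
mkdir -p /arbiter1 /arbiter2 /arbiter3
mount /dev/sdX1 /arbiter1
mount /dev/sdX2 /arbiter2
mount /dev/sdX3 /arbiter3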
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
...info
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
cluster.data-self-heal: off
cluster.metada...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...> Hi there.
>
> In previous emails, I comment with you guys, about 2 node gluster
> server, where the bricks lay down in different size and folders in
> the same server, like
>
> gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
> gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-
> 0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms
>
> So I went ahead and installed a Debian 12 and installed the same
> gluster version that the other servers, which is now 11.1 or
> something like that.
> In this new server, I have a small disk...
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
...info
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
cluster.data-self-heal: off
cluster.metada...
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
...
> Hi there.
>
> In previous emails, I comment with you guys, about 2 node gluster server,
> where the bricks lay down in different size and folders in the same server,
> like
>
> gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
> gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms
> gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms
>
> So I went ahead and installed a Debian 12 and installed the same gluster
> version that the other servers, which is now 11.1 or something like that.
> In this new server, I have a small disk like 4...
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
...> Type: Distributed-Replicate
> Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/disco2TB-0/vms
> Brick2: gluster2:/disco2TB-0/vms
> Brick3: gluster1:/disco1TB-0/vms
> Brick4: gluster2:/disco1TB-0/vms
> Brick5: gluster1:/disco1TB-1/vms
> Brick6: gluster2:/disco1TB-1/vms
> Options Reconfigured:
> performance.client-io-threads: off
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> cluster.granular-entry-heal: on
> ...
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
...>> Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 3 x 2 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/disco2TB-0/vms
>> Brick2: gluster2:/disco2TB-0/vms
>> Brick3: gluster1:/disco1TB-0/vms
>> Brick4: gluster2:/disco1TB-0/vms
>> Brick5: gluster1:/disco1TB-1/vms
>> Brick6: gluster2:/disco1TB-1/vms
>> Options Reconfigured:
>> performance.client-io-threads: off
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> cl...
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Right now you have 3 "sets" of replica 2 on 2 hosts. In your case you don't need much space for the arbiters (10-15GB with 95 maxpct is enough for each "set"), but you do need a 3rd system; otherwise, when the node that holds both the data brick and the arbiter brick of a "set" fails (the 2-node scenario), that "set" will be unavailable.
If you do have a 3rd host, I think the command would be: gluster
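The excerpt breaks off after "gluster". Judging by the 3 x (2 + 1) = 9 layout with arbiter:/arbiter1-3 that shows up in a later message of the thread, the intended command was presumably along these lines (a reconstruction, not a quote):

gluster volume add-brick VMS replica 3 arbiter 1 \
    arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3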