search for: disco2tb

Displaying 17 results from an estimated 17 matches for "disco2tb".

2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...Tue., Nov 5, 2024, 14:09, Aravinda <aravinda at kadalu.tech> wrote: > Hello Gilberto, > > You can create an Arbiter volume using three bricks. Two of them will be > data bricks and one will be the Arbiter brick. > > gluster volume create VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms > gluster2:/disco2TB-0/vms arbiter:/arbiter1 > > To make this volume a distributed Arbiter, add more bricks (multiples of > 3: two data bricks and one arbiter brick) similar to the above. > > -- > Aravinda > > > ---- On Tue, 05 Nov 2024 22:24:38 +0530 Gilberto Ferre...
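For readers skimming the thread, the layout Aravinda describes expands to something like the sketch below. It is only an illustration: the VMS volume name and the data brick paths come from the messages in these results, and the arbiter:/arbiterN paths are borrowed from the Nov 08 message further down.

    # Replica 3 arbiter 1: in each set of three bricks, two hold data and the
    # third stores only file metadata. Repeating the pattern per pair of data
    # bricks makes the volume a distributed arbiter volume.
    gluster volume create VMS replica 3 arbiter 1 \
        gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms arbiter:/arbiter1 \
        gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms arbiter:/arbiter2 \
        gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter3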
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
Still getting an error: pve01:~# gluster vol info Volume Name: VMS Type: Distributed-Replicate Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf Status: Started Snapshot Count: 0 Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: gluster1:/disco2TB-0/vms Brick2: gluster2:/disco2TB-0/vms Brick3: gluster1:/disco1TB-0/vms Brick4: gluster2:/disco1TB-0/vms Brick5: gluster1:/disco1TB-1/vms Brick6: gluster2:/disco1TB-1/vms Options Reconfigured: cluster.self-heal-daemon: off cluster.entry-self-heal: off cluster.metadata-self-heal: off cluster.data-se...
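Two things stand out in this report: the volume is 3 x 2 = 6 (three replica pairs, so a conversion to arbiter needs exactly three new bricks in a single add-brick call), and every self-heal option is switched off. Gluster can refuse replica-count changes while bricks are out of sync, so a reasonable pre-flight check before retrying, using standard CLI calls, is:

    # Confirm every brick process is online
    gluster volume status VMS
    # List entries still waiting to be healed (self-heal is disabled above)
    gluster volume heal VMS info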
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...Hi there. > > > > > > In previous emails, I discussed with you a 2-node gluster > > > server, where the bricks are laid out in different sizes and folders > > > on the same server, like > > > > > > gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms > > > gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms > > > gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms > > > gluster2:/disco1TB-1/vms > > > > > > So I went ahead and installed Debian 12 with the same > > > gluster vers...
2024 Oct 19
2
How many disks can fail before a catastrophic failure occurs?
...r of disks in each side: pve01:~# df | grep disco /dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0 /dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3 /dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1 /dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2 /dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1 /dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0 /dev/sdj 1.0T 9.2G 1015G 1% /disco1TB-4 I have a Type: Distributed-Replicate gluster. So my question is: how many disks can be in a failed state before data is lost, or something? Thanks in advance --- Gilberto Nunes Ferreira -----------...
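For context on the question: in a distributed-replicate volume each replica pair is an independent failure domain. With replica 2, one brick per pair may fail without data loss, but losing both bricks of any single pair takes that subvolume's files offline. The pairing follows the brick order, which can be checked with the standard CLI:

    # Bricks pair up in listing order: (Brick1,Brick2), (Brick3,Brick4), ...
    # Replica 2 survives one failed brick per pair, not N failures overall.
    gluster volume info VMS | grep -E '^Brick[0-9]+:'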
2024 Nov 08
1
Add an arbiter when you have multiple bricks on the same server.
...er:/arbiter2 arbiter:/arbiter3 force volume add-brick: success pve01:~# gluster volume info Volume Name: VMS Type: Distributed-Replicate Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gluster1:/disco2TB-0/vms Brick2: gluster2:/disco2TB-0/vms Brick3: arbiter:/arbiter1 (arbiter) Brick4: gluster1:/disco1TB-0/vms Brick5: gluster2:/disco1TB-0/vms Brick6: arbiter:/arbiter2 (arbiter) Brick7: gluster1:/disco1TB-1/vms Brick8: gluster2:/disco1TB-1/vms Brick9: arbiter:/arbiter3 (arbiter) Options Reconfigured...
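One follow-up worth noting once the add-brick succeeds: the earlier error report showed self-healing forced off, and the new arbiter bricks stay empty until healing runs. A hedged sketch of the usual next steps:

    # Re-enable the self-heal daemon that was disabled earlier in the thread
    gluster volume set VMS cluster.self-heal-daemon on
    # Trigger a full heal so the empty arbiter bricks receive their metadata
    gluster volume heal VMS full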
2024 Nov 11
1
Disk size and virtual size drive me crazy!
...rmware/efi/efivars /dev/sda2 1,8G 204M 1,5G 12% /boot /dev/sda1 1,9G 12M 1,9G 1% /boot/efi /dev/sdb 932G 728G 204G 79% /disco1TB-0 /dev/sdc 932G 718G 214G 78% /disco1TB-1 /dev/sde 932G 720G 212G 78% /disco1TB-2 /dev/sdd 1,9T 1,5T 387G 80% /disco2TB-0 tmpfs 51G 4,0K 51G 1% /run/user/0 gluster1:VMS 4,6T 3,6T 970G 80% /vms /dev/fuse 128M 36K 128M 1% /etc/pve proxmox01:/vms/images# cd 103 proxmox01:/vms/images/103# ls vm-103-disk-0.qcow2 vm-103-disk-1.qcow2 proxmox01:/vms/images/103# ls -lh total 21T -rw-r---...
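The mismatch in this message (21T of qcow2 images on a 4.6T volume that is only 80% full) is the usual sparse-file effect: ls -lh reports the virtual size of a qcow2 image, not the blocks actually allocated. Both numbers can be compared with standard tools; the file name below is taken from the listing above:

    # "virtual size" vs "disk size" (actual allocation) for one image
    qemu-img info vm-103-disk-0.qcow2
    # Same comparison with coreutils: allocated blocks vs apparent size
    du -h vm-103-disk-0.qcow2
    du -h --apparent-size vm-103-disk-0.qcow2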
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
Hi there. In previous emails, I discussed with you a 2-node gluster server, where the bricks are laid out in different sizes and folders on the same server, like gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms So I went ahead and installed Debian 12 with the same gluster version as the other servers, which is now 11.1 or something like that. On this new server,...
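Before the new Debian 12 machine can contribute arbiter bricks it has to join the trusted pool. A minimal sketch of the preparation, assuming the hostname arbiter and the /arbiterN brick directories seen elsewhere in these results:

    # On an existing node: add the new server to the trusted storage pool
    gluster peer probe arbiter
    gluster peer status    # the new node should show as connected
    # On the new node: create one directory per future arbiter brick
    mkdir -p /arbiter1 /arbiter2 /arbiter3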
2024 Nov 20
1
Disk size and virtual size drive me crazy!
...rmware/efi/efivars /dev/sda2 1,8G 204M 1,5G 12% /boot /dev/sda1 1,9G 12M 1,9G 1% /boot/efi /dev/sdb 932G 728G 204G 79% /disco1TB-0 /dev/sdc 932G 718G 214G 78% /disco1TB-1 /dev/sde 932G 720G 212G 78% /disco1TB-2 /dev/sdd 1,9T 1,5T 387G 80% /disco2TB-0 tmpfs 51G 4,0K 51G 1% /run/user/0 gluster1:VMS 4,6T 3,6T 970G 80% /vms /dev/fuse 128M 36K 128M 1% /etc/pve proxmox01:/vms/images# cd 103 proxmox01:/vms/images/103# ls vm-103-disk-0.qcow2 vm-103-disk-1.qcow2 proxmox01:/vms/images/103# ls -lh total 21T -rw-r----- 1...
2024 Oct 20
1
How many disks can fail before a catastrophic failure occurs?
...er of disks in each side: pve01:~# df | grep disco /dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0 /dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3 /dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1 /dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2 /dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1 /dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0 /dev/sdj 1.0T 9.2G 1015G 1% /disco1TB-4 I have a Type: Distributed-Replicate gluster. So my question is: how many disks can be in a failed state before data is lost, or something? Thanks in advance --- Gilberto Nunes Ferreira ______...
2024 Nov 08
1
Add an arbiter when you have multiple bricks on the same server.
...arbiter <COUNT>]] <NEW-BRICK> ... [force] gluster vol info pve01:~# gluster vol info Volume Name: VMS Type: Distributed-Replicate Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf Status: Started Snapshot Count: 0 Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: gluster1:/disco2TB-0/vms Brick2: gluster2:/disco2TB-0/vms Brick3: gluster1:/disco1TB-0/vms Brick4: gluster2:/disco1TB-0/vms Brick5: gluster1:/disco1TB-1/vms Brick6: gluster2:/disco1TB-1/vms Options Reconfigured: performance.client-io-threads: off transport.address-family: inet storage.fips-mode-rchecksum: on cluster....
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...On 05.11.2024 at 12:51 -0300, Gilberto Ferreira wrote: > Hi there. > > In previous emails, I discussed with you a 2-node gluster > server, where the bricks are laid out in different sizes and folders on > the same server, like > > gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms > gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB- > 0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms > > So I went ahead and installed Debian 12 with the same > gluster version as the other servers, which is now 11.1 or > something...
2024 Oct 21
1
How many disks can fail before a catastrophic failure occurs?
...pve01:~# df | grep disco > /dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0 > /dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3 > /dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1 > /dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2 > /dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1 > /dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0 > /dev/sdj 1.0T 9.2G 1015G 1% /disco1TB-4 > > I have a Type: Distributed-Replicate gluster > So my question is: how many disks can be in a failed state before data is lost, or > something? > > Thanks in advance >...
2024 Nov 06
1
Add an arbiter when you have multiple bricks on the same server.
...arbiter <COUNT>]] <NEW-BRICK> ... [force] gluster vol info pve01:~# gluster vol info Volume Name: VMS Type: Distributed-Replicate Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf Status: Started Snapshot Count: 0 Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: gluster1:/disco2TB-0/vms Brick2: gluster2:/disco2TB-0/vms Brick3: gluster1:/disco1TB-0/vms Brick4: gluster2:/disco1TB-0/vms Brick5: gluster1:/disco1TB-1/vms Brick6: gluster2:/disco1TB-1/vms Options Reconfigured: performance.client-io-threads: off transport.address-family: inet storage.fips-mode-rchecksum: on cluster....
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...On 5.11.2024 at 12:51 -0300, Gilberto Ferreira wrote: > > Hi there. > > In previous emails, I discussed with you a 2-node gluster server, > where the bricks are laid out in different sizes and folders on the same server, > like > > gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms > gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms > gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms > > So I went ahead and installed Debian 12 with the same gluster > version as the other servers, which is now 11.1 or something like th...
2024 Nov 06
1
Add an arbiter when you have multiple bricks on the same server.
...gluster vol info > pve01:~# gluster vol info > > Volume Name: VMS > Type: Distributed-Replicate > Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf > Status: Started > Snapshot Count: 0 > Number of Bricks: 3 x 2 = 6 > Transport-type: tcp > Bricks: > Brick1: gluster1:/disco2TB-0/vms > Brick2: gluster2:/disco2TB-0/vms > Brick3: gluster1:/disco1TB-0/vms > Brick4: gluster2:/disco1TB-0/vms > Brick5: gluster1:/disco1TB-1/vms > Brick6: gluster2:/disco1TB-1/vms > Options Reconfigured: > performance.client-io-threads: off > transport.address-family: inet...
2024 Nov 06
1
Add an arbiter when you have multiple bricks on the same server.
...vol info >> >> Volume Name: VMS >> Type: Distributed-Replicate >> Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf >> Status: Started >> Snapshot Count: 0 >> Number of Bricks: 3 x 2 = 6 >> Transport-type: tcp >> Bricks: >> Brick1: gluster1:/disco2TB-0/vms >> Brick2: gluster2:/disco2TB-0/vms >> Brick3: gluster1:/disco1TB-0/vms >> Brick4: gluster2:/disco1TB-0/vms >> Brick5: gluster1:/disco1TB-1/vms >> Brick6: gluster2:/disco1TB-1/vms >> Options Reconfigured: >> performance.client-io-threads: off >>...
2024 Nov 06
1
Add an arbiter when you have multiple bricks on the same server.
Right now you have 3 "sets" of replica 2 on 2 hosts. In your case you don't need much space for the arbiters (10-15GB with 95 maxpct is enough for each "set"), but you do need a 3rd system; otherwise, when the node that holds both the data brick and the arbiter brick fails (the 2-node scenario), that "set" becomes unavailable. If you do have a 3rd host, I think the command would be: gluster
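The excerpt cuts off mid-command, but judging from the add-brick reported successful in the Nov 08 message above, the completed command was presumably along these lines (volume name and arbiter brick paths from that message; the 3rd host is named arbiter here):

    # One new arbiter brick per existing replica-2 pair, in brick-listing order
    gluster volume add-brick VMS replica 3 arbiter 1 \
        arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force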