Displaying 20 results from an estimated 117 matches for "gluster2".
2017 Dec 21
3
Wrong volume size with df
Sure!
> 1 - output of gluster volume heal <volname> info
Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick3/gv0
Status: Connected
Number of entries: 0...
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...24, 14:09, Aravinda <aravinda at kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be the Arbiter brick.
>
> gluster volume create VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms
> gluster2:/disco2TB-0/vms arbiter:/arbiter1
>
> To make this volume a distributed Arbiter, add more bricks (in multiples of
> 3: two data bricks and one arbiter brick) in the same pattern as above.
>
> --
> Aravinda
>
>
> ---- On Tue, 05 Nov 2024 22:24:38 +0530 *Gilberto Ferreira
> <gilbert...
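For reference, the expansion Aravinda describes (adding bricks in multiples of 3: two data bricks plus one arbiter per set) could look roughly like the sketch below. The hostnames match the thread, but the extra brick paths are only illustrative, and some versions may require repeating "replica 3 arbiter 1" on add-brick:

gluster volume add-brick VMS gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms arbiter:/arbiter2
gluster volume add-brick VMS gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter3
gluster volume info VMS    # should now report Number of Bricks: 3 x (2 + 1) = 9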
2017 Dec 11
2
active/active failover
...erfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
So my question is: can I really use glusterfs to do failover in the way described below, or am I misusing glusterfs? (and potentially corrupting my data?)
My setup is: I have two servers (qlogin and gluster2) that access a shared SAN storage. Both servers connect to the same SAN (SAS multipath) and I implement locking via lvm2 and sanlock, so I can mount the same storage on either server.
The idea is that normally each server serves one brick, but in case one server fails, the other server can serve b...
2018 Jan 02
0
Wrong volume size with df
...lar effect?
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
> Sure!
>
> > 1 - output of gluster volume heal <volname> info
>
> Brick pod-sjc1-gluster1:/data/brick1/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster2:/data/brick1/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster1:/data/brick2/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster2:/data/brick2/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster1...
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small
(<1 MB) files and thousands of files larger than 1 GB.
Attached is the tier log for gluster1 and gluster2. These are full of
"demotion failed" messages, which is also shown in the status:
[root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
Node         Promoted files    Demoted files    Status    run time in h:m:s
---------    ---------    ---...
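If the promotion/demotion cycles are saturating the bricks, the tier daemons can be throttled with the tiering tunables. A hedged sketch (option names as in GlusterFS 3.x tiering; the values are purely illustrative):

gluster volume set gv0 cluster.tier-promote-frequency 1500   # seconds between promotion runs
gluster volume set gv0 cluster.tier-demote-frequency 3600    # seconds between demotion runs
gluster volume set gv0 cluster.tier-max-mb 4000              # cap on MB migrated per cycle
gluster volume set gv0 cluster.tier-max-files 10000          # cap on files migrated per cycle
gluster volume set gv0 cluster.tier-mode cache               # migrate based on watermarks rather than unconditionally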
2017 Dec 11
0
active/active failover
...find detailed documentation about it. (I'm using glusterfs 3.10.8)
>
> So my question is: can I really use glusterfs to do failover in the way
> described below, or am I misusing glusterfs? (and potentially corrupting my
> data?)
>
> My setup is: I have two servers (qlogin and gluster2) that access a shared
> SAN storage. Both servers connect to the same SAN (SAS multipath) and I
> implement locking via lvm2 and sanlock, so I can mount the same storage on
> either server.
> The idea is that normally each server serves one brick, but in case one
> server fails, the...
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
Still getting error
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
cluster.self-heal-daemon: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.gr...
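The error itself is not shown here, but going from the 3 x 2 layout above to a distributed arbiter normally means adding one arbiter brick per existing replica pair in a single add-brick call. A sketch consistent with the successful add-brick reported later in this thread (2024 Nov 08):

pve01:~# gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force
pve01:~# gluster volume info VMS    # should then report Number of Bricks: 3 x (2 + 1) = 9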
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
...s are extremely varied, there are millions of small
>> (<1 MB) files and thousands of files larger than 1 GB.
The tier use case is for larger files; it is not the best fit for files of
smaller size.
That can end up hindering the IOs.
>>
>> Attached is the tier log for gluster1 and gluster2. These are full of
>> "demotion failed" messages, which is also shown in the status:
>>
>> [root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
>> Node    Promoted files    Demoted files    Status    run time in h:m:s
>> -...
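Given the advice above that tiering suits larger files, one option for a workload dominated by small files is to detach the hot tier entirely; the GlusterFS 3.x tier CLI for that would be roughly:

gluster volume tier gv0 detach start     # start draining data from the hot tier back to the cold tier
gluster volume tier gv0 detach status    # watch the drain progress
gluster volume tier gv0 detach commit    # finalize once the drain has completed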
2011 Feb 24
0
No subject
...'s our stripe client setup:
####
volume client-stripe-1
type protocol/client
option transport-type ib-verbs
option remote-host gluster1
option remote-subvolume iothreads
end-volume
volume client-stripe-2
type protocol/client
option transport-type ib-verbs
option remote-host gluster2
option remote-subvolume iothreads
end-volume
volume client-stripe-3
type protocol/client
option transport-type ib-verbs
option remote-host gluster3
option remote-subvolume iothreads
end-volume
volume client-stripe-4
type protocol/client
option transport-type ib-verbs
option remote...
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...Schwibbe
> <a.schwibbe at gmx.net> wrote:
> > Your add-brick command adds 2 data bricks and 1 arbiter (even though you
> > name them all arbiter!)
> >
> > The sequence is important:
> >
> > gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0
> > gluster2:/gv0 arbiter1:/arb1
> >
> > adds two data bricks and a corresponding arbiter from 3 different
> > servers and 3 different disks,
> > thus you can lose any one server OR any one disk and stay up and
> > consistent.
> >
> > adding more bricks to the volum...
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
...nt on files are blocked?
On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite <tomfite at gmail.com> wrote:
> The sizes of the files are extremely varied, there are millions of small
> (<1 MB) files and thousands of files larger than 1 GB.
>
> Attached is the tier log for gluster1 and gluster2. These are full of
> "demotion failed" messages, which is also shown in the status:
>
> [root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
> Node    Promoted files    Demoted files    Status    run time in h:m:s
> ---------...
2017 Dec 12
1
active/active failover
...n't find detailed documentation about it. (I'm using glusterfs 3.10.8)
>
> So my question is: can I really use glusterfs to do failover in the way described below, or am I misusing glusterfs? (and potentially corrupting my data?)
>
> My setup is: I have two servers (qlogin and gluster2) that access a shared SAN storage. Both servers connect to the same SAN (SAS multipath) and I implement locking via lvm2 and sanlock, so I can mount the same storage on either server.
> The idea is that normally each server serves one brick, but in case one server fails, the other server can ser...
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
Hi,
Can you send the volume info, the volume status output, and the tier logs?
And I need to know the size of the files that are being stored.
On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote:
> I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3
> bricks per server distributed replicated volume.
>
> I'm seeing IO get blocked
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
...ive hot tier:
[root at pod-sjc1-gluster1 ~]# gluster volume info
Volume Name: gv0
Type: Tier
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: pod-sjc1-gluster2:/data/hot_tier/gv0
Brick2: pod-sjc1-gluster1:/data/hot_tier/gv0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 3 x 2 = 6
Brick3: pod-sjc1-gluster1:/data/brick1/gv0
Brick4: pod-sjc1-gluster2:/data/brick1/gv0
Brick5: pod-sjc1-gluster1:/data/brick2/gv0
Brick6: pod-sjc1-gluster2:/d...
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
...---
Here's the client configuration:
volume client-stripe-1
type protocol/client
option transport-type ib-verbs
option remote-host gluster1
option remote-subvolume iothreads
end-volume
volume client-stripe-2
type protocol/client
option transport-type ib-verbs
option remote-host gluster2
option remote-subvolume iothreads
end-volume
volume client-stripe-3
type protocol/client
option transport-type ib-verbs
option remote-host gluster3
option remote-subvolume iothreads
end-volume
volume client-stripe-4
type protocol/client
option transport-type ib-verbs
option remote...
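The excerpt cuts off inside the fourth client volume; in the legacy volfile format these protocol/client subvolumes would then typically be tied together by a cluster/stripe translator, roughly as below (a sketch, not the poster's actual volfile; the block-size value is illustrative):

volume stripe-0
  type cluster/stripe
  option block-size 1MB
  subvolumes client-stripe-1 client-stripe-2 client-stripe-3 client-stripe-4
end-volume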
2017 Dec 21
0
Wrong volume size with df
Could you please provide the following -
1 - output of gluster volume heal <volname> info
2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
3 - output of gluster volume info <volname>
4 - output of gluster volume status <volname>
5 - Also, could you try unmounting the volume and mounting it again, then check the size?
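For convenience, the requested information can be gathered with something like the following (volume name gv0 and mount point /mnt/gv0 are placeholders for the actual names):

gluster volume heal gv0 info
gluster volume info gv0
gluster volume status gv0 detail
# the client log lives under /var/log/glusterfs/, named after the mount point, e.g. mnt-gv0.log
umount /mnt/gv0 && mount -t glusterfs server1:/gv0 /mnt/gv0 && df -h /mnt/gv0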
----- Original Message -----
From:
2024 Oct 17
0
Bricks with different sizes.
...1TB with XFS and mounted it in this order:
/dev/sdc -> /disk1 -----> 2TB
/dev/sdd -> /disk2 -----> 2TB
/dev/sde -> /disk3 -----> 1TB
And then created a gluster vol with this command:
gluster vol create VMS replica 2 gluster1:/disco1/vms1
gluster1:/disco2/vms2 gluster1:/disco3/vms3 gluster2:/disco1/vms1
gluster2:/disco2/vms2 gluster2:/disco3/vms3 force
And mounted it on /vms
like this:
gluster1:VMS /vms glusterfs
defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
Later on, I added 3 more HDDs, like this:
mkfs.xfs /dev/sdf
mkfs.xfs /dev/sdg
mkfs.xfs /dev/sdh
mount /dev/sdf...
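With bricks of different sizes in a replica 2 + distribute layout like this, the size the client sees is roughly the sum over replica pairs of each pair's smaller brick, so it is worth comparing what the bricks report with what the mount reports (names taken from the message above):

gluster volume status VMS detail    # per-brick Total Disk Space and Free Disk Space
df -h /vms                          # what the FUSE mount presents to clients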
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
...ume: gfs_vms
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster1.linova.de:/glusterfs/sde1enc
/brick 58448 0 Y 1062218
Brick gluster2.linova.de:/glusterfs/sdc1enc
/brick 50254 0 Y 20596
Brick gluster3.linova.de:/glusterfs/sdc1enc
/brick 52840 0 Y 1627513
Brick gluster1.linova.de:/glusterfs/sdf1enc
/brick...
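For qcow2 VM images on a volume like this, a common starting point (hedged, not something stated in this excerpt) is the predefined "virt" option group, which enables sharding and the other VM-oriented settings; the volume name is taken from the status output above:

gluster volume set gfs_vms group virt        # applies the option group shipped in /var/lib/glusterd/groups/virt
gluster volume get gfs_vms features.shard    # verify sharding is on; it only applies to files created afterwards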
2024 Nov 08
1
Add an arbiter when you have multiple bricks on the same server.
...rbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: arbiter:/arbiter1 (arbiter)
Brick4: gluster1:/disco1TB-0/vms
Brick5: gluster2:/disco1TB-0/vms
Brick6: arbiter:/arbiter2 (arbiter)
Brick7: gluster1:/disco1TB-1/vms
Brick8: gluster2:/disco1TB-1/vms
Brick9: arbiter:/arbiter3 (arbiter)
Options Reconfigured:
cluster.self-heal-dae...
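If the self-heal options (shown disabled in the earlier gluster vol info output) are meant to be re-enabled now that the arbiter bricks exist, a hedged follow-up to populate them would be:

pve01:~# gluster volume set VMS cluster.self-heal-daemon on
pve01:~# gluster volume heal VMS full    # walk the volume and sync entries/metadata to the new arbiter bricks
pve01:~# gluster volume heal VMS info    # entry counts should drop to 0 as the arbiters catch up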
2017 Dec 19
3
Wrong volume size with df
I have a glusterfs setup with distributed disperse volumes 5 * ( 4 + 2 ).
After a server crash, "gluster peer status" reports all peers as connected.
"gluster volume status detail" shows that all bricks are up and running
with the right size, but when I use df from a client mount point, the size
displayed is about 1/6 of the total size.
When browsing the data, they seem to