Displaying 20 results from an estimated 537 matches for "gfs".
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo replication between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2: gfs3:/gfs/brick1/gv0
Brick3: gfs1:/gfs/arbit...
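A minimal sketch of the geo-replication session setup this thread is attempting; only the master volume name gfsvol comes from the excerpt, the slave host and slave volume names are placeholders:
# gluster system:: execute gsec_create
# gluster volume geo-replication gfsvol slavehost::gfsvolslave create push-pem
# gluster volume geo-replication gfsvol slavehost::gfsvolslave start
# gluster volume geo-replication gfsvol slavehost::gfsvolslave status
If the session goes Faulty after starting, the reason is normally logged under /var/log/glusterfs/geo-replication/ on the master nodes.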
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to set up geo replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root at gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gfs2:/gfs/brick1/gv0
> Bri...
2010 Mar 24
3
mounting gfs partition hangs
Hi,
I have configured two machines for testing gfs filesystems. They are
attached to an iscsi device, and the CentOS versions are:
CentOS release 5.4 (Final)
Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009
i686 i686 i386 GNU/Linux
The problem is that if I try to mount a gfs partition, it hangs.
[root at node2 ~]# cman_tool status
Ve...
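A hedged checklist for a gfs mount that hangs on CentOS 5; these are the standard cluster-suite commands the truncated cman_tool output above belongs to, and /dev/sdX1 is a placeholder for the actual iscsi device:
# cman_tool status     (is the cluster quorate?)
# cman_tool nodes      (have both nodes actually joined?)
# group_tool ls        (state of the fence, dlm and gfs groups; a stuck fence group is a common cause of a hanging mount)
# mount -t gfs /dev/sdX1 /mnt/gfs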
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
...s are running.
Which processes should be run on every brick for heal operation?
# gluster volume status
Status of volume: gv0
Gluster process TCP Port RDMA Port Online
Pid
------------------------------------------------------------------------------
Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y
70850
Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y
102951
Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y
57535
Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152 Y
566...
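A small sketch of the checks usually suggested for this message; only the volume name gv0 is taken from the excerpt:
# gluster volume status gv0
# gluster volume heal gv0 info
# gluster volume start gv0 force    (restarts any brick or self-heal daemon process that is not running, without touching bricks that are already online)
Heal needs every brick process plus the self-heal daemon (glustershd) on each node to be online.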
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
...operation?
>>
>> # gluster volume status
>> Status of volume: gv0
>> Gluster process TCP Port RDMA Port Online
>> Pid
>> ------------------------------------------------------------
>> ------------------
>> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y
>> 70850
>> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y
>> 102951
>> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y
>> 57535
>> Brick cn04-ib:/gfs/gv0/brick1/brick...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...uld be run on every brick for heal operation?
>
> # gluster volume status
> Status of volume: gv0
> Gluster process TCP Port RDMA Port Online
> Pid
> ------------------------------------------------------------------------------
> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y
> 70850
> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y
> 102951
> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y
> 57535
> Brick cn04-ib:/gfs/gv0/brick1/brick 0...
2023 Jun 30
1
remove_me files building up
...sage increase and drop throughout the day on the data nodes for brick2 and brick3 as well, but while the arbiter follows the same trend of the disk usage increasing, it doesn't drop at any point.
This is the output of some gluster commands, occasional heal entries come and go:
root at uk3-prod-gfs-arb-01:~# gluster volume info gv1
Volume Name: gv1
Type: Distributed-Replicate
Volume ID: d3d1fdec-7df9-4f71-b9fc-660d12c2a046
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: uk1-prod-gfs-01:/data/glusterfs/gv1/brick1/brick
Brick2: uk2-prod-g...
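A hedged sketch of the space-accounting checks that apply here; the brick path is taken from the volume info above, and the assumption is that the arbiter brick follows the same layout:
# df -h /data/glusterfs/gv1/brick1
# df -i /data/glusterfs/gv1/brick1    (arbiter bricks store no file data, so inode count matters more than bytes)
# du -sh /data/glusterfs/gv1/brick1/brick/.glusterfs
# gluster volume heal gv1 info summary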
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
...> # gluster volume status
>>> Status of volume: gv0
>>> Gluster process TCP Port RDMA Port Online
>>> Pid
>>> ------------------------------------------------------------
>>> ------------------
>>> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>> 70850
>>> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>> 102951
>>> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>> 57535
>>> Brick cn04-ib:/gf...
2018 Mar 06
1
geo replication
...erfs 3.12.6 / Ubuntu 16.04.
I can see a "master volinfo unavailable" in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfstest07:/gfs/testtomcat/mount 49153 0 Y 326
Brick gfstest05:/gfs/testtomcat/mount 49153 0 Y 326
Brick gfstest01:/gfs/testtomcat/mount 49153 0 Y 335
Self-heal Daemon on localhost N/A N/A Y 1134...
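A hedged sketch for the "master volinfo unavailable" case; the master volume name testtomcat is from the excerpt, the slave host and volume are placeholders:
# gluster volume geo-replication testtomcat slavehost::slavevol status detail
# gluster volume geo-replication testtomcat slavehost::slavevol config
The per-session logs under /var/log/glusterfs/geo-replication/ on the master nodes usually show why the volinfo lookup fails.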
2023 Jul 03
1
remove_me files building up
...age increase and drop throughout the day on the data nodes for brick2 and brick3 as well, but while the arbiter follows the same trend of the disk usage increasing, it doesn't drop at any point.
This is the output of some gluster commands, occasional heal entries come and go:
root at uk3-prod-gfs-arb-01:~# gluster volume info gv1
Volume Name: gv1
Type: Distributed-Replicate
Volume ID: d3d1fdec-7df9-4f71-b9fc-660d12c2a046
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: uk1-prod-gfs-01:/data/glusterfs/gv1/brick1/brick
Brick2: uk2-prod-gfs-01:/dat...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...> # gluster volume status
>>> Status of volume: gv0
>>> Gluster process TCP Port RDMA Port Online
>>> Pid
>>> ------------------------------------------------------------
>>> ------------------
>>> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>> 70850
>>> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>> 102951
>>> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>> 57535
>>> Brick cn04-ib:/gf...
2007 May 21
2
CentOS-5 - kmod-gfs dependency issue?
Hi all,
I'm wondering if I'm running into some dependency issues on my CentOS5
test-machines. I've installed with a fairly minimal package set,
updated, removed old kernels and am now experimenting with iscsi and gfs.
I think I need kmod-gfs to get gfs support, but there is only a version
that suits the base kernel, 2.6.18-8.el5.
"
[root at node02 ~]# yum install kmod-gfs
(...)
Installing:
kmod-gfs i686 0.1.16-5.2.6.18_8.el5 base
133 k
Installing for dependencies:
kern...
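A hedged note on the dependency: kmod-gfs is built against one specific kernel, so yum pulling in a kernel (the truncated "kern..." dependency above) is expected. A way to see what matches before installing, with nothing here taken from the thread beyond the package name:
# uname -r                    (kernel currently running)
# yum list kmod-gfs\* kernel
If the kmod only exists for 2.6.18-8.el5, either boot that kernel or wait for a kmod rebuilt against the updated kernel; gfs.ko has to match (or be kABI-compatible with) the running kernel.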
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...es should be run on every brick for heal operation?
>
> # gluster volume status
> Status of volume: gv0
> Gluster process TCP Port RDMA Port Online Pid
> ------------------------------------------------------------------------------
> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y 70850
> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y 102951
> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y 57535
> Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152...
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1 isize=512 agcount=31, agsize=131007 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
=...
2010 Apr 30
5
Mount drbd/gfs logical volume from domU
Hi list,
I set up a drbd/gfs logical volume on 2 Xen Dom0s; this works as primary/primary, so both DomUs will be able to write to it at the same time. But I don't know how to mount it from my domUs; I can see it with fdisk -l. The partition is /dev/xvdb1
Should I install gfs on the domUs and mount it on each as a gfs partition?
[root@p3...
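A hedged sketch of what mounting that partition from the domUs would involve: GFS is a cluster filesystem, so each domU needs the gfs module and a running cluster stack of its own before it can mount /dev/xvdb1; a bare mount without cman will just hang. Package and service names below assume CentOS 5 inside the guests:
# yum install cman kmod-gfs gfs-utils
# service cman start
# mount -t gfs /dev/xvdb1 /mnt/shared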
2007 Jun 11
3
domU on gfs
Hey All,
I have a cluster set up and exporting gfs storage; everything is
working OK (as far as I know, anyway). But instead of mounting the gfs
storage, I want the xen guest to be installed on the shared gfs storage.
With my current setup, when I install the domU on the gfs storage it
changes it to ext3. Is it possible this way or does the domU...
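A hedged sketch of the usual workaround (nothing here is from the thread): keep the GFS filesystem mounted on the dom0s and give each guest a file-backed disk image that lives on that mount, so the guest formats its image as ext3 while the shared storage itself stays GFS. The path /gfs/storage and the image name are placeholders:
# dd if=/dev/zero of=/gfs/storage/vm01.img bs=1M seek=10239 count=1    (sparse ~10 GB image on the GFS mount)
and in the domU config:
disk = [ 'file:/gfs/storage/vm01.img,xvda,w' ]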
2005 Sep 14
3
CentOS + GFS + EtherDrive
I am considering building a pair of storage servers that will be using
CentOS and GFS to share the storage from a Coraid (SATA+Raid) EtherDrive
shelf. Has anyone else tried such a setup?
Is GFS stable enough to use in a production environment?
There is a build of GFS 6.1 at http://rpm.karan.org/el4/csgfs/. Has anyone
used this? Is it stable?
Will I run into any problems if I u...
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...atus
>>>> Status of volume: gv0
>>>> Gluster process TCP Port RDMA Port
>>>> Online Pid
>>>> ------------------------------------------------------------
>>>> ------------------
>>>> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>>> 70850
>>>> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>>> 102951
>>>> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y
>>>> 57535
>>&g...
2023 Jul 04
1
remove_me files building up
...at is your main workload?
Best Regards,
Strahil Nikolov
On Tuesday, July 4, 2023, 2:12 PM, Liam Smith <liam.smith at ek.co> wrote:
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       refl...
2008 Jun 09
1
Slow gfs performance
HI,
Sorry for repeating the same mail; while composing it I mistakenly hit
the send button. I am facing a problem with my gfs, and below is the running
setup.
My setup
Two-node cluster [only to create a shared gfs file system] with manual fencing,
running on CentOS 4 update 5 for Oracle apps.
The shared gfs partition is mounted on both nodes [active-active].
Whenever I type the df -h command, it takes some delay to print...
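A hedged way to quantify the delay (df on GFS has to gather cluster-wide block statistics, so some slowness compared with a local filesystem is expected); the mount path below is a placeholder:
# time df -h /path/to/gfs/mount
# time df -h /
If the difference is large, comparing the two nodes and looking at lock activity with gfs_tool counters /path/to/gfs/mount is a reasonable next step.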