Displaying 16 results from an estimated 16 matches for "glus1".
2017 Oct 17 · 3 · gfid entries in volume heal info that do not heal
...iles, any tips to finding them
> would be appreciated, but I'm definitely just wanting them gone. I forgot
> to mention earlier that the cluster is running 3.12 and was upgraded from
> 3.10; these files were likely stuck like this when it was on 3.10.
>
> [root@tpc-cent-glus1-081017 ~]# gluster volume info gv0
>
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 8f07894d-e3ab-4a65-bda1-9d9dd46db007
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4 x (2 + 1) = 12
> Transport-type: tcp
> ...
2017 Oct 17 · 0 · gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log.
>> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
[root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c...
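The xattr values above are hex-encoded because of `getfattr -e hex`; each pair of hex digits is one byte. A minimal bash sketch for decoding such a value back to text (the hex string here is just the untruncated prefix of the security.selinux value quoted above):

```shell
# Decode a hex-encoded xattr value as printed by `getfattr -e hex`.
# Prefix of the security.selinux value from the excerpt: "system_u:object_r"
hex=73797374656d5f753a6f626a6563745f72
# Turn each hex pair into a \xNN escape, then let printf %b expand it (bash).
decoded=$(printf '%b' "$(printf '%s' "$hex" | sed 's/../\\x&/g')")
printf '%s\n' "$decoded"
```

The same decoding applies to the trusted.afr.* xattrs that actually drive healing, though those are binary change counters rather than readable strings.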
2017 Oct 18 · 1 · gfid entries in volume heal info that do not heal
...On Tue, Oct 17, 2017 at 8:04 PM, Matt Waymack <mwaymack@nsgdv.com> wrote:
> Attached is the heal log for the volume as well as the shd log.
>
> >> Run these commands on all the bricks of the replica pair to get the
> attrs set on the backend.
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> security.selinux=0x73797374656d5f753a6f62...
2017 Oct 19 · 2 · gfid entries in volume heal info that do not heal
...t 2-3 weeks :-)
On Tue, 2017-10-17 at 14:34 +0000, Matt Waymack wrote:
> Attached is the heal log for the volume as well as the shd log.
>
> > > Run these commands on all the bricks of the replica pair to get
> > > the attrs set on the backend.
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> ad6a15d811a2
> security.selinux=0x73797374656d5f75...
2017 Oct 16 · 0 · gfid entries in volume heal info that do not heal
...down physical location of these files, any tips to finding them would be appreciated, but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10.
[root@tpc-cent-glus1-081017 ~]# gluster volume info gv0
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 8f07894d-e3ab-4a65-bda1-9d9dd46db007
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x (2 + 1) = 12
Transport-type: tcp
Bricks:
Brick1: tpc-cent-glus1-081017:/exp/b1/gv0
Brick2: tpc-cent-glus2-081017:/...
2017 Oct 23 · 2 · gfid entries in volume heal info that do not heal
...hat's an improvement over the last 2-3 weeks :-)
On Tue, 2017-10-17 at 14:34 +0000, Matt Waymack wrote:
Attached is the heal log for the volume as well as the shd log.
Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
[root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
security.selinux=0x73797374656d5f753a6f626a6563745f723a756...
2017 Oct 23 · 0 · gfid entries in volume heal info that do not heal
...eeks :-)
>
> On Tue, 2017-10-17 at 14:34 +0000, Matt Waymack wrote:
>
> Attached is the heal log for the volume as well as the shd log.
>
> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> security.selinux=0x73797374656d5f753a6f626a656...
2017 Oct 16 · 2 · gfid entries in volume heal info that do not heal
Hi Matt,
The files might be in split brain. Could you please send the outputs of
these?
gluster volume info <volname>
gluster volume heal <volname> info
And also the getfattr output of the files which are in the heal info output
from all the bricks of that replica pair.
getfattr -d -e hex -m . <file path on brick>
Thanks & Regards
Karthik
On 16-Oct-2017 8:16 PM,
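For gfid-only heal-info entries like the ones in this thread, the backend path on each brick can be derived from the gfid itself: GlusterFS keeps a hard link for every file at .glusterfs/<first two hex digits>/<next two>/<full gfid> under the brick root. A bash sketch, using the gfid and brick path quoted in this thread (substitute your own bricks):

```shell
# Build the backend .glusterfs path for a gfid reported by
# `gluster volume heal <volname> info`.
gfid=108694db-c039-4b7c-bd3d-ad6a15d811a2   # gfid from the heal-info output in this thread
brick=/exp/b1/gv0                           # brick root from this thread
gfid_path="$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
printf '%s\n' "$gfid_path"
# Then, on each brick of the replica pair, inspect the attrs:
#   getfattr -d -e hex -m . "$gfid_path"
```

Running the getfattr step on every brick of the replica pair, as Karthik asks above, is what lets you compare the trusted.afr.* pending counters across copies.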
2017 Oct 23 · 0 · gfid entries in volume heal info that do not heal
...> > > > >
> > > > > Run these commands on all the bricks of the replica pair to
> > > > > get the attrs set on the backend.
> > > >
> > >
> > > [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> > > /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> > > getfattr: Removing leading '/' from absolute path names
> > > # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> > > ad6a15d811a2...
2017 Oct 24 · 3 · gfid entries in volume heal info that do not heal
...14:34 +0000, Matt Waymack wrote:
>
> Attached is the heal log for the volume as well as the shd log.
>
> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> security.selinux=0x73797374656d...
2017 Oct 24 · 0 · gfid entries in volume heal info that do not heal
...nds on all the bricks of the replica pair
> > > > > > > to get the attrs set on the backend.
> > > > > >
> > > > >
> > > > > [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> > > > > /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> > > > > ad6a15d811a2
> > > > > getfattr: Removing leading '/' from absolute path names
> > > > > # file: exp/b1/gv0/.glusterfs/10/...
2011 Mar 03 · 3 · Mac / NFS problems
...o authenticate on the macs, the gluster servers aren't
bound into the LDAP domain.
Any ideas?
Thanks
David
g3:/var/log/glusterfs # gluster volume info
Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.stat-prefetch: 1
performance.cache-size: 1gb
performance.write-behind-window-size: 1mb
network.ping-timeout: 20...
2018 Feb 09 · 1 · Tiering Volumes
...volume uses Nvme and the "ColdTier" volume uses HDD's. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to Distributed-Replicated. Not sure if that makes a
difference in the tiering setup.
[root@Glus1 ~]# gluster volume info
Volume Name: ColdTier
Type: Replicate
Volume ID: 1647487b-c05a-4cf7-81a7-08102ae348b6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Glus1:/data/glusterfs/ColdTier/brick1
Brick2: Glus2:/data/glusterfs/ColdTier/brick2
Brick3...
2017 Nov 06 · 0 · gfid entries in volume heal info that do not heal
...nds on all the bricks of the replica pair
> > > > > > > to get the attrs set on the backend.
> > > > > >
> > > > >
> > > > > [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> > > > > /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-
> > > > > ad6a15d811a2
> > > > > getfattr: Removing leading '/' from absolute path names
> > > > > # file: exp/b1/gv0/.glusterfs/10/...
2018 Feb 10 · 0 · Tier Volumes
...volume uses Nvme and the "ColdTier" volume uses HDD's. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to Distributed-Replicated. Not sure if that makes a
difference in the tiering setup.
[root@Glus1 ~]# gluster volume info
Volume Name: ColdTier
Type: Replicate
Volume ID: 1647487b-c05a-4cf7-81a7-08102ae348b6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Glus1:/data/glusterfs/ColdTier/brick1
Brick2: Glus2:/data/glusterfs/ColdTier/brick2
Brick3...
2011 Feb 16 · 1 · nfs problems
...ustervol1-dht: found anomalies in /production/tempo. holes=2 overlaps=0
Any ideas?
Thanks
David
gluster 3.1.2
g3:/var/log/glusterfs # gluster volume info
Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
diagnostics.dump-fd-stats: on
diagnostics.latency-measurement: off
network.ping-timeout: 20
performance.write-behind-window...