similar to: Tiering Volumns

Displaying 20 results from an estimated 200 matches similar to: "Tiering Volumns"

2018 Feb 10
0
Tier Volumes
Hello everyone. I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier" volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the tiers for each volume? I will be adding 2 more HDDs to each server. I would then like to change from Replicate to Distributed-Replicated; not sure if that makes a difference in the tiering setup. [root at
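Note that Gluster's tiering feature (since deprecated) attached hot bricks to an existing volume rather than pairing two separate volumes. A minimal sketch with the 3.7+ CLI; the volume name and brick paths are illustrative:

    # Attach the NVMe bricks to the existing HDD-backed volume as a hot tier
    gluster volume tier coldvol attach replica 3 \
        server1:/nvme/brick server2:/nvme/brick server3:/nvme/brick
    # Check tier status; detach later with "gluster volume tier coldvol detach start"
    gluster volume tier coldvol status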
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log. >> Run these commands on all the bricks of the replica pair to get the attrs set on the backend. [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 getfattr: Removing leading '/' from absolute path names # file:
2017 Oct 18
1
gfid entries in volume heal info that do not heal
Hey Matt, From the xattr output, it looks like the files are not present on the arbiter brick and need healing, but the parent does not have the pending markers set for those entries. The workaround for this is to do a lookup, from the mount, on the file which needs healing; this will create the entry on the arbiter brick, and then running the volume heal will do the healing. Follow
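Sketching that workaround, assuming the volume is gv0 (as the brick paths in this thread suggest), with an illustrative client mount point and file path:

    # A lookup from the client mount creates the missing entry on the arbiter brick
    stat /mnt/gv0/path/to/affected/file
    # Then trigger the heal and re-check
    gluster volume heal gv0
    gluster volume heal gv0 info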
2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files; any tips for finding them would be appreciated, but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10. [root at tpc-cent-glus1-081017 ~]#
2017 Oct 19
2
gfid entries in volume heal info that do not heal
I've been following this particular thread as I have a similar issue (a RAID6 array failed out with 3 dead drives at once while a 12 TB load was being copied into one mounted space - what a mess). I have >700K GFID entries that have no path data. Example: getfattr -d -e hex -m . .glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421 # file: .glusterfs/00/00/0000a5ef-5af7-401b-84b5-
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt, Can you also check the link count in the stat output of those hardlink entries in the .glusterfs folder on the bricks? If the link count is 1 on all the bricks for those entries, then they are orphaned entries and you can delete those hardlinks. To be on the safe side, take a backup before deleting any of the entries. Regards, Karthik On Fri, Oct 20, 2017 at 3:18 AM, Jim
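A sketch of that check on one brick, reusing the brick path from earlier in the thread (run it on every brick of the pair):

    # GFID entries with link count 1 have no named hardlink left -- the
    # orphan candidates Karthik describes. (Internal files under
    # .glusterfs/indices and similar will also match; filter those out.)
    find /exp/b1/gv0/.glusterfs -type f -links 1 > orphans.txt
    # Or inspect a single entry's link count (first field of the output)
    stat -c '%h %n' /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2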
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks! From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Monday, October 23, 2017 1:52 AM To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com> Cc: gluster-users <Gluster-users at gluster.org> Subject: Re:
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data that supplies the path to the original. I have the inode from stat. Looking now to dig out the path/filename from xfs_db on the specific inodes individually. Is the hash based on the filename or on <path>/filename, and if so, relative to what? /, <path from top of brick>, ? On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
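If the inode number is already in hand from stat, plain find can map it back to a path without going through xfs_db (brick path and inode number illustrative):

    # -xdev keeps the search on the brick's own filesystem
    find /exp/b1/gv0 -xdev -inum 132881 -print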
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only on the brick that was live during the outage and concurrent file copy-in. The brick that was down at that time has no GFIDs that are not also on the up brick. As the bricks are 10TB, the find is going to be a long-running process. I'm running several finds at once with GNU parallel, but it will still take some time.
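One way to split that work, assuming GNU parallel and the standard .glusterfs layout (one job per two-hex-digit subdirectory; paths illustrative):

    # Fan out one find per top-level .glusterfs subdirectory
    ls -d /exp/b1/gv0/.glusterfs/*/ \
        | parallel 'find {} -type f -links 1' > single-link-gfids.txt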
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend. On the bricks of the first replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 On the fourth replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3 Also run the "gluster volume
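Filled in with the brick path from the earlier post (the paths on the other bricks will differ), the idea is to compare the trusted.afr.* changelog values across the pair:

    getfattr -d -e hex -m . \
        /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2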
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim, Can you check whether the same hardlinks are present on both bricks and whether both have link count 2? If the link count is 2, then "find <brick path> -samefile <brick path>/.glusterfs/<first two characters of gfid>/<next two characters of gfid>/<full gfid>" should give you the file path. Regards, Karthik On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
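A concrete instance of that command, using the brick path and gfid seen earlier in the thread:

    # The gfid entry under .glusterfs is a hardlink to the real file,
    # so -samefile resolves it to the named path on that brick
    find /exp/b1/gv0 -samefile \
        /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2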
2011 Mar 03
3
Mac / NFS problems
Hello, we're having issues with Macs writing to our Gluster system. Gluster vol info at end. On a Mac, if I make a file in the shell, I get the following message: smoke:hunter david$ echo hello > test -bash: test: Operation not permitted And the file is made but is zero size. smoke:hunter david$ ls -l test -rw-r--r-- 1 david realise 0 Mar 3 08:44 test glusterfs/nfslog logs thus:
2017 Oct 16
2
gfid entries in volume heal info that do not heal
Hi Matt, The files might be in split brain. Could you please send the outputs of these? gluster volume info <volname> gluster volume heal <volname> info And also the getfattr output of the files which are in the heal info output from all the bricks of that replica pair. getfattr -d -e hex -m . <file path on brick> Thanks & Regards Karthik On 16-Oct-2017 8:16 PM,
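Spelled out, assuming the volume is named gv0 as the brick paths in this thread suggest (the file path is illustrative):

    gluster volume info gv0
    gluster volume heal gv0 info
    # Newer 3.x releases also have an explicit split-brain listing
    gluster volume heal gv0 info split-brain
    # For each reported file, on every brick of the replica pair:
    getfattr -d -e hex -m . /exp/b1/gv0/path/to/file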
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while! I have the following stats: 4,085,169 files are in both bricks; 3,162,940 files only have a single hard link. All of the files exist on both servers. bmidata2 (below) WAS running when bmidata1 died. gluster volume heal clifford statistics heal-count Gathering count of entries to be healed on volume clifford has been successful Brick bmidata1:/data/glusterfs/clifford/brick/brick Number of
2011 Feb 16
1
nfs problems
Getting lots of stale NFS filehandle errors. We have 4 nodes in our cluster; clients NFS-mount the volume from any node in round-robin. It appears that one node has gone bad: the clients mounting that node can't see the files that the others can see, ls -l gives rubbish for the metadata, and we get lots of these lines in the nfs.log: [2011-02-16 15:33:32.538756] I
2017 Sep 06
0
First Gluster Volume deploy: recommended configuration and suggestions?
Dear users, I just started my first Gluster test volume using 3 servers (each server contains 12 HDDs). I would like to create a "distributed disperse volume" but I'm a little bit confused about the right configuration schema to use. Should I use JBOD disks? How many bricks should be defined? Ideal redundancy value? Ideal disperse-data count value? A 6x(4+2) or 3x(8+4) volume
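For reference, the 6x(4+2) shape maps to "disperse 6 redundancy 2" in the create command: 6 bricks per disperse set (4 data + 2 redundancy), so 36 bricks form 6 distributed subvolumes. A sketch with illustrative host and brick names:

    # Build the brick list so each set of 6 consecutive bricks spans all
    # 3 hosts (2 bricks per host per set). With only 3 hosts a single host
    # failure costs exactly the 2 redundant fragments, and gluster may ask
    # for "force" since bricks of a set share servers.
    bricks=""
    for d in $(seq 1 12); do
        for s in 1 2 3; do
            bricks="$bricks srv$s:/bricks/d$d"
        done
    done
    gluster volume create dispvol disperse 6 redundancy 2 $bricks
    gluster volume start dispvol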
2010 May 11
1
create volume
Hi, how can I create a volume in the new installation? I can't find any documents to help. Thank you
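For modern releases the basic flow is below (server and brick names illustrative; 3.0-era releases from around the time of this question instead generated volfiles with glusterfs-volgen):

    # Probe the peer, create a two-brick replica, start and mount it
    gluster peer probe server2
    gluster volume create testvol replica 2 \
        server1:/data/brick1 server2:/data/brick1
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/testvol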
2004 Mar 02
0
Fail to create OCFS volume on the hard disk via a QLA2312 Fibre Channel card.
/dev/rawctl is part of the "raw block device" driver interface. You need to turn on the raw device driver option (CONFIG_RAW_DRIVER) in your kernel build. I have always created a new file system from my 2.4 kernel (that has all possible modules built and properly configured to load when needed), so I don't know for sure if mkfs.ocfs will work correctly after turning on the raw
2012 Apr 01
1
Degrees of Freedom for lme.
Hi, I am trying to run a linear mixed-effects model on my data. I have 17 longitudinal subjects and 36 single subjects, and this is the code I'm using (below). INDEX1 is the column with brain volumes, and the predictors are gort and age, by time ID (the time they were seen). I believe my data is set up the right way, but when I run it, I get a DF for the intercept of 49 and a DF for the slope of 13?
2009 Jun 09
0
Announcing oVirt 0.99
Announcing oVirt 0.99 ===================== We are pleased to announce the release of oVirt 0.99, a significant step forward in stability and feature set for oVirt project users. Some highlights from the change log: * Improved installer, oVirt Server will now install with an existing FreeIPA setup present rather than insisting on installing FreeIPA from scratch * Anyterm console support --