Displaying 20 results from an estimated 900 matches similar to: "Mac / NFS problems"
2011 Feb 16
1
nfs problems
getting lots of stale nfs filehandle errors
we have 4 nodes in our cluster, clients nfs mount the volume from any node
in a round-robin
it appears that one node has gone bad. the clients mounting that node can't
see the files that the others can see. ls -l gives rubbish for the metadata,
and we get lots of these lines in the nfs.log:
[2011-02-16 15:33:32.538756] I
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt,
Run these commands on all the bricks of the replica pair to get the attrs
set on the backend.
On the bricks of first replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
On the fourth replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
Also run the "gluster volume
2017 Oct 16
2
gfid entries in volume heal info that do not heal
Hi Matt,
The files might be in split brain. Could you please send the outputs of
these?
gluster volume info <volname>
gluster volume heal <volname> info
And also the getfattr output of the files which are in the heal info output
from all the bricks of that replica pair.
getfattr -d -e hex -m . <file path on brick>
Thanks & Regards
Karthik
On 16-Oct-2017 8:16 PM,
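A minimal sketch of gathering those outputs in one go; the volume name, brick path, and file path below are placeholders, not taken from the thread:

#!/bin/bash
# Collect the volume layout, the list of entries pending heal, and the backend
# xattrs of one of the reported files on a brick of the affected replica pair.
VOLNAME=gv0                      # placeholder volume name
BRICK=/exp/b1/gv0                # placeholder brick path
gluster volume info "$VOLNAME"
gluster volume heal "$VOLNAME" info
# repeat for every path reported by heal info, on every brick of the pair:
getfattr -d -e hex -m . "$BRICK/path/reported/by/heal-info"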
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log.
>> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
[root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file:
2017 Oct 19
2
gfid entries in volume heal info that do not heal
I've been following this particular thread as I have a similar issue
(RAID6 array failed out with 3 dead drives at once while a 12 TB load
was being copied into one mounted space - what a mess)
I have >700K GFID entries that have no path data. Example:
getfattr -d -e hex -m . .glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421
# file: .glusterfs/00/00/0000a5ef-5af7-401b-84b5-
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks!
From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com]
Sent: Monday, October 23, 2017 1:52 AM
To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com>
Cc: gluster-users <Gluster-users at gluster.org>
Subject: Re:
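For the record, a cautious sketch of that cleanup for a single orphaned GFID; the paths are placeholders, the GFID is one quoted earlier in this listing, and a copy is kept first, as advised elsewhere in the thread:

#!/bin/bash
# Remove the orphaned .glusterfs hardlink for one GFID, but only if nothing
# else links to it, and keep a copy of the file first.
BRICK=/exp/b1/gv0                              # placeholder brick path
GFID=108694db-c039-4b7c-bd3d-ad6a15d811a2      # GFID quoted earlier in this listing
GFILE="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
[ "$(stat -c '%h' "$GFILE")" -eq 1 ] || { echo "link count is not 1, skipping"; exit 1; }
cp -a "$GFILE" "/root/gfid-backup-$GFID"       # backup before deleting
rm "$GFILE"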
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim,
Can you check whether the same hardlinks are present on both the bricks and
whether both of them have link count 2?
If the link count is 2, then "find <brickpath> -samefile
<brickpath>/.glusterfs/<first two characters of gfid>/<next two characters of gfid>/<full gfid>"
should give you the file path.
Regards,
Karthik
On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
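A sketch of that lookup for a single GFID; the brick path is a placeholder and the GFID is the one quoted earlier in this listing:

#!/bin/bash
# Resolve a GFID backend entry to its real path on the brick via its hardlink,
# provided the entry actually has a second link outside .glusterfs.
BRICK=/exp/b1/gv0                                  # placeholder brick path
GFID=108694db-c039-4b7c-bd3d-ad6a15d811a2          # GFID quoted earlier
GFILE="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
stat -c '%h hard links' "$GFILE"                   # useful only if this reports 2
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$GFILE" -print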
2017 Oct 18
1
gfid entries in volume heal info that do not heal
Hey Matt,
From the xattr output, it looks like the files are not present on the
arbiter brick and need healing, but the parent does not have the
pending markers set for those entries.
The workaround is to do a lookup from the mount on the file which needs
heal; that will create the entry on the arbiter brick, and then you can
run the volume heal to do the healing.
Follow
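A sketch of that workaround; the mount point, volume name, and file path are invented for illustration:

#!/bin/bash
# Look the file up through a client mount so the entry gets created on the
# arbiter brick, then let self-heal copy the data.
MOUNT=/mnt/gv0                          # placeholder client mount point
VOLNAME=gv0                             # placeholder volume name
stat "$MOUNT/path/that/needs/heal"      # the lookup that creates the arbiter entry
gluster volume heal "$VOLNAME"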
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data
that supplies the path to the original.
I have the inode from stat. Looking now to dig out the path/filename
from xfs_db on the specific inodes individually.
Is the hash of the filename or of <path>/filename, and if so, relative to
where: /, <path from top of brick>, ?
On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
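If the goal is only to map an inode number back to a path, a plain find by inode against the brick may be enough and avoids xfs_db entirely; the inode number is a placeholder and the brick path is the one quoted later in this listing:

#!/bin/bash
# Print every path on this brick's filesystem (including the .glusterfs
# hardlink) that refers to the given inode number.
BRICK=/data/glusterfs/clifford/brick/brick     # brick path quoted later in this listing
INUM=123456789                                 # placeholder inode number from stat
find "$BRICK" -xdev -inum "$INUM" -print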
2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files; any tips for finding them would be appreciated, but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10.
[root at tpc-cent-glus1-081017 ~]#
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt,
Can you also check the link count in the stat output of those hardlink
entries in the .glusterfs folder on the bricks?
If the link count is 1 on all the bricks for those entries, then they are
orphaned entries and you can delete those hardlinks.
To be on the safer side have a backup before deleting any of the entries.
Regards,
Karthik
On Fri, Oct 20, 2017 at 3:18 AM, Jim
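A sketch of surveying those link counts on one brick before deleting anything; run it on every brick of the replica and compare the results (brick path is a placeholder, GNU find assumed):

#!/bin/bash
# List "link-count path" for every GFID entry on this brick and keep only the
# single-link ones; review the output on all bricks before removing anything.
BRICK=/exp/b1/gv0               # placeholder brick path
find "$BRICK/.glusterfs" -type f -printf '%n %p\n' | awk '$1 == 1'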
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only
on the brick that was live during the outage and concurrent file copy-
in. The brick that was down at that time has no GFIDs that are not also
on the up brick.
As the bricks are 10TB, the find is going to be a long-running process.
I'm running several finds at once with GNU parallel but it will still
take some time.
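One possible shape for those parallel finds; the brick path and job count are placeholders, GNU find and parallel are assumed, and paths are assumed to contain no spaces:

#!/bin/bash
# Resolve many multi-link GFID entries to their real paths, several finds at a time.
BRICK=/data/glusterfs/clifford/brick/brick     # placeholder brick path
find "$BRICK/.glusterfs" -type f -links +1 |
  parallel -j8 "find $BRICK -path $BRICK/.glusterfs -prune -o -samefile {} -print"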
2018 Feb 09
1
Tiering Volumes
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to Distributed-Replicated. Not sure if that makes a
difference in the tiering setup.
[root at
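Assuming this refers to the (since deprecated) tiering feature in GlusterFS 3.7-3.x, the hot tier is not a separate volume but a set of bricks attached to the existing cold volume. A rough sketch with invented volume, server, and brick names, to be checked against the documentation for the installed release:

# Attach the NVMe bricks to the existing volume as a replicated hot tier
# (volume, server, and brick names are invented).
gluster volume tier ColdTier attach replica 3 \
    server1:/nvme/brick server2:/nvme/brick server3:/nvme/brick
gluster volume tier ColdTier status
# Detaching later, if needed:
gluster volume tier ColdTier detach start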
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while!
I have the following stats:
4085169 files in both bricks. 3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running
when bmidata1 died.
gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
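For reference, a minimal sketch of how a single-hard-link count like the one above can be produced on a brick (GNU find assumed; the brick path is the one quoted in the message):

#!/bin/bash
# Count GFID entries that have no second hardlink on this brick,
# i.e. candidates for the "single hard link" total reported above.
BRICK=/data/glusterfs/clifford/brick/brick
find "$BRICK/.glusterfs" -type f -links 1 | wc -l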
2010 Jun 17
9
Monitoring filessytem access
When somebody is hammering on the system, I want to be able to detect who's
doing it, and hopefully even what they're doing.
I can't seem to find any way to do that. Any suggestions?
Everything I can find ... iostat, nfsstat, etc ... AFAIK, just shows me
performance statistics and so forth. I'm looking for something more
granular. Either *who* the
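On a DTrace-capable system, one generic starting point, not taken from the thread, is to count I/O syscalls per user and process:

# Count read/write syscalls by uid and process name until Ctrl-C.
dtrace -n 'syscall::read:entry, syscall::write:entry { @[uid, execname] = count(); }'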
2018 Feb 10
0
Tier Volumes
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to Distributed-Replicated. Not sure if that makes a
difference in the tiering setup.
[root at
2015 Apr 21
2
QemuDomainObjEndJob called when libvirtd is started and libvirt insists qemu is using the wrong disk source.
List,
I was under the impression that I could restart libvirtd without it
destroying my VMs, but am not finding that to be true. When I killall
libvirtd my VMs keep running, but when I then start libvirtd it
calls qemuDomainObjEndJob:1542 : Stopping job: modify (async=none
vm=0x7fb8cc0d8510 name=test) and my domain gets whacked.
Any way to disable this behavior?
Also, while I'm
2013 Oct 31
7
How do I get rid of vfb?
Hi all,
I’m running Xen 4.1 on a couple of NetBSD dom0s and NetBSD’s pkgsrc provides both the xl and xm tools for working with Xen 4.1. I understand that xl is the new way of doing things, but I can’t get it to create my guests the way I want and the main symptom of this is that the text console isn’t available.
When I create a guest with xl it starts up qemu-dm (which xm doesn’t do) and I get
2009 Feb 13
2
Fwd: Manager Interface Originate (ASYNC) - How to get the Originate Status
Dear All,
I am originating the call directly to the SIP Provider using the manager
interface + originate (ASYNC) command. Here is the PHP-AGI Script.
$call = $asm->send_request('Originate',
array('Channel'=>"SIP/416XXXXXXX at ABC/n",
'Context'=>'ORIG',
2008 May 21
9
Slow pkginstalls due to long door_calls to nscd
Hi all,
I am installing a zone onto two different V445s running S10U4 and the
zones are taking hours to install (about 1000 packages), that is, the
problem is identical on both systems. A bit of trussing and dtracing has
shown that the pkginstalls being run by the zoneadm install are making
door_call calls to nscd that are taking very long, so far observed to be
5 to 40 seconds, but always in
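A rough sketch of timing those door_calls with the DTrace pid provider, attached to one of the pkginstall processes; it assumes door_call is a library function visible to the pid provider on this release:

# Quantize door_call latency for a single process (replace <pid> with the
# pkginstall process id observed in truss/prstat).
dtrace -n '
  pid$target::door_call:entry  { self->ts = timestamp; }
  pid$target::door_call:return /self->ts/ {
      @["door_call latency (ns)"] = quantize(timestamp - self->ts);
      self->ts = 0;
  }' -p <pid>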