2012 Jan 26
0
Large number of 'no gfid found' messages in log
I am periodically seeing a high number of these messages in the client
log - nothing in the logs for the bricks. There appears to be a log entry
for every file in that directory, including sub-directories. I checked
getfattr on the bricks; the gfid is set, and the gfids on both replica
bricks match for each file.
[2012-01-26 15:58:16.590368] W
[fuse-resolve.c:273:fuse_resolve_deep_cbk]
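For reference, that gfid comparison can be done per brick with getfattr; the brick and file paths below are placeholders:
  # Run on each server that holds a replica and compare the hex values:
  getfattr -n trusted.gfid -e hex /data/brick1/gv0/path/to/file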
2017 Jul 24
0
Bug 1473150 - features/shard:Lookup on shard 18 failed. Base file gfid = b00f5de2-d811-44fe-80e5-1f382908a55a [No data available], the [No data available]
+gluster-users ML
Hi,
I've responded to your bug report here -
https://bugzilla.redhat.com/show_bug.cgi?id=1473150#c3
Kindly let us know if the patch fixes your bug.
-Krutika
On Thu, Jul 20, 2017 at 3:12 PM, zhangjianwei1216 at 163.com <zhangjianwei1216 at 163.com> wrote:
> Hi Krutika Dhananjay, Pranith Kumar Karampuri,
> Thank for your reply!
>
> I am
2017 Oct 16
0
gfid entries in volume heal info that do not heal
Hi all,
I have a volume where the output of volume heal info shows several gfid entries to be healed, but they've been there for weeks and have not healed. Any normal file that shows up on the heal info does get healed as expected, but these gfid entries do not. Is there any way to remove these orphaned entries from the volume so they are no longer stuck in the heal process?
Thank you!
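One approach sometimes suggested for such stuck entries, sketched here with a placeholder brick path and gfid: a gfid aabbccdd-... maps to .glusterfs/aa/bb/ on each brick, and for a regular file whose real path is gone, that entry is left with a link count of 1 and can be removed.
  # Placeholder gfid and brick path; check the link count first:
  stat -c %h /data/brick1/gv0/.glusterfs/87/77/87774d00-6ddd-42b5-ba08-d382097b6720
  # A link count of 1 on a regular file means nothing else references it;
  # removing it from every brick clears the stale heal-info entry:
  rm /data/brick1/gv0/.glusterfs/87/77/87774d00-6ddd-42b5-ba08-d382097b6720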
2018 Jan 30
0
gfid instead of file name
Hello!
Could you please tell me why gfids are shown here?
There was no such output before 3.12; this is during an upgrade from
3.12.4 to 3.12.5:
gluster volume heal pool info
Brick svarog:/wall/pool/brick
/seikosha.img
/felix.img
<gfid:87774d00-6ddd-42b5-ba08-d382097b6720>
/manzan.img
/ilum.img
<gfid:f094f40d-4872-4d34-96b9-0f655942e473>
/onderon.img
/sheep.img
/iskalon.img
2017 Jul 07
1
gfid and volume-id extended attributes lost
Hi,
We faced an issue in production today. We had to stop the volume and reboot all the servers in the cluster. Once the servers rebooted, starting the volume failed because the following extended attributes were missing from the bricks on 2 of the servers.
1) trusted.gfid
2) trusted.glusterfs.volume-id
We had to manually set these extended attributes to start the volume.
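For reference, a sketch of restoring these by hand; brick paths are placeholders, and the volume-id value must be copied from a brick that still has it:
  # Read the value from a healthy brick:
  getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1/gv0
  # Set it on the brick roots that lost it (substitute the hex value read above):
  setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /data/brick2/gv0
  # The gfid of a brick root is always the fixed root uuid:
  setfattr -n trusted.gfid -v 0x00000000000000000000000000000001 /data/brick2/gv0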
2023 Feb 01
0
Corrupted object's [GFID], despite md5sum matches everywhere
Hi,
To test corruption detection and repair, we modified a file inside the
brick directory on server glusterfs1, and scheduled regular scrubs. The
corruption is detected:
Error count: 1
Corrupted object's [GFID]:
9be5eecf-5ad8-4256-8b08-879aecf65881 ==> BRICK: /data/brick1/gv0
path: /prd/drupal-files-prd/inline-images/small - main building 1_0.jpg
We have self-healing enabled, and
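The scrub runs referred to above use the bitrot CLI; the volume name gv0 is taken from the message:
  gluster volume bitrot gv0 scrub ondemand   # trigger a scrub now
  gluster volume bitrot gv0 scrub status     # shows error count and corrupted gfids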
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
Pranith,
Thanks for looking into the issue. The bricks were mounted after the reboot. One more thing I noticed: when the attributes were set manually while glusterd was up, they were lost again on starting the volume. We had to stop glusterd, set the attributes, and then start glusterd; after that the volume start succeeded.
Thanks and Regards,
Ram
From: Pranith
2017 Jul 27
0
GFID is null after adding large amounts of data
Hi Gluster Community,
we are seeing some problems when adding multiple terabytes of data to a two-node replicated GlusterFS installation.
The version is 3.8.11 on CentOS 7.
The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMWare.
After a restart of node-1 we see that the log files are growing to multiple Gigabytes a day.
Also there seem to be problems
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
3.7.19
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Friday, July 07, 2017 11:54 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
Did anything special happen on these two bricks? It can't happen in the I/O
path:
posix_removexattr() has:
        if (!strcmp (GFID_XATTR_KEY, name)) {
                gf_msg (this->name, GF_LOG_WARNING, 0,
                        P_MSG_XATTR_NOT_REMOVED,
                        "Remove xattr called on gfid for file %s", real_path);
                op_ret = -1;
                goto out;
        }
2017 Jul 07
2
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> Pranith,
>
> Thanks for looking into the issue. The bricks were
> mounted after the reboot. One more thing I noticed: when the
> attributes were set manually while glusterd was up, they were lost
> again on starting the volume. We had to stop glusterd
2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down physical location of these files, any tips to finding them would be appreciated, but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10.
[root at tpc-cent-glus1-081017 ~]#
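To find where such a gfid lives, one sketch is to locate the file sharing an inode with its .glusterfs hard link (brick path is a placeholder; the gfid is borrowed from the earlier heal-info output for illustration):
  find /data/brick1/gv0 -samefile \
      /data/brick1/gv0/.glusterfs/f0/94/f094f40d-4872-4d34-96b9-0f655942e473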
2017 Oct 16
2
gfid entries in volume heal info that do not heal
Hi Matt,
The files might be in split brain. Could you please send the outputs of
these?
gluster volume info <volname>
gluster volume heal <volname> info
And also the getfattr output of the files which are in the heal info output
from all the bricks of that replica pair.
getfattr -d -e hex -m . <file path on brick>
Thanks & Regards
Karthik
On 16-Oct-2017 8:16 PM,
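For context, the part of that getfattr output that indicates split brain is the trusted.afr.* pending counters; the values below are illustrative, made-up examples:
  # Non-zero trusted.afr counters on both bricks, each blaming the other,
  # typically indicate split brain (values invented for illustration):
  trusted.afr.gv0-client-1=0x000000020000000000000000
  trusted.gfid=0xf094f40d48724d3496b90f655942e473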
2017 Aug 29
0
GFID attr is missing after adding large amounts of data
This is strange, a couple of questions:
1. What volume type is this? What tuning have you done? gluster v info output would be helpful here.
2. How big are your bricks?
3. Can you write me a quick reproducer so I can try this in the lab? Is it just a single multi TB file you are untarring or many? If you give me the steps to repro, and I hit it, we can get a bug open.
4. Other than
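A reproducer along the requested lines might look like this sketch (host name, volume, mount point, and dataset are all placeholders):
  mount -t glusterfs node-1:/gv0 /mnt/gv0
  tar -xf multi-tb-dataset.tar -C /mnt/gv0   # write several TB through the mount
  # afterwards, spot-check gfid xattrs directly on a brick:
  getfattr -n trusted.gfid -e hex /data/brick1/gv0/some/extracted/file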
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while!
I have the following stats:
4085169 files in both bricks
3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running
when bmidata1 died.
gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
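For reference, a count like the one above can be gathered with find; the brick path is the one from the message, and .glusterfs is pruned because the gfid hard links themselves live there:
  find /data/glusterfs/clifford/brick/brick -path '*/.glusterfs' -prune \
      -o -type f -links 1 -print | wc -l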
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> recently we had two times a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list I found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
@pranith, yes, we can get the pid on all removexattr calls and also print
the backtrace of the glusterfsd process when triggering the xattr removal.
I will write the script and reply back.
On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com
> wrote:
> Ram,
> As per the code, self-heal was the only candidate which *can* do
> it. Could you check
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
Hello Anoop,
thank you for your reply....
answers inside...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
> On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
>> Hello,
>>
>> recently we had two times a partial gluster outage followed by a total
>> outage of all four nodes. Looking into the gluster mailing list I found
>> a very similar case
2017 Aug 28
2
GFID attr is missing after adding large amounts of data
Hi Gluster Community,
we are seeing some problems when adding multiple terabytes of data to a two-node replicated GlusterFS installation.
The version is 3.8.11 on CentOS 7.
The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMWare.
After a restart of node-1 we see that the log files are growing to multiple Gigabytes a day.
Also there seem to be problems
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
If you see it again, you can use this. I am going to send out a patch
for the code path which can lead to removal of gfid/volume-id tomorrow.
On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com>
wrote:
> Please use the systemtap script (https://paste.fedoraproject.org/paste/
> EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking remove xattr
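For anyone unable to run the systemtap script, an assumed alternative is a Linux audit watch on the brick root; the path and key name below are placeholders:
  auditctl -w /data/brick1/gv0 -p a -k gfid-xattr   # log attribute changes on the brick root
  ausearch -k gfid-xattr -i                         # show which process made them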