Displaying 20 results from an estimated 1000 matches similar to: "gfid and volume-id extended attributes lost"
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
Did anything special happen on these two bricks? It can't happen in the I/O
path:
posix_removexattr() has:

        if (!strcmp (GFID_XATTR_KEY, name)) {
                gf_msg (this->name, GF_LOG_WARNING, 0,
                        P_MSG_XATTR_NOT_REMOVED,
                        "Remove xattr called on gfid for file %s",
                        real_path);
                op_ret = -1;
                goto
2017 Jul 07
2
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> Pranith,
>
> Thanks for looking into the issue. The bricks were
> mounted after the reboot. One more thing I noticed: when the
> attributes were set manually while glusterd was up, they were
> lost again on starting the volume. Had to stop glusterd
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
Pranith,
Thanks for looking into the issue. The bricks were mounted after the reboot. One more thing I noticed: when the attributes were set manually while glusterd was up, they were lost again on starting the volume. I had to stop glusterd, set the attributes, and then start glusterd; after that the volume start succeeded.
Thanks and Regards,
Ram
From: Pranith
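A quick way to confirm whether a brick root still carries the two keys is a direct lgetxattr() call (getfattr works just as well). A minimal C sketch; the brick path is only an example from this thread, and it must run as root since the keys live in the trusted namespace:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/xattr.h>

    /* Check a brick root for the two protected glusterfs keys. */
    int main (int argc, char *argv[])
    {
            const char *keys[] = { "trusted.gfid",
                                   "trusted.glusterfs.volume-id" };
            const char *path = argc > 1 ? argv[1] : "/ws/disk1/ws_brick";
            char        buf[64];
            int         i;

            for (i = 0; i < 2; i++) {
                    ssize_t len = lgetxattr (path, keys[i], buf,
                                             sizeof (buf));
                    if (len < 0)
                            printf ("%s: %s MISSING (%s)\n",
                                    path, keys[i], strerror (errno));
                    else
                            printf ("%s: %s present (%zd bytes)\n",
                                    path, keys[i], len);
            }
            return 0;
    }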
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
3.7.19
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Friday, July 07, 2017 11:54 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> 3.7.19
>
These are the only callers of removexattr, and only _posix_remove_xattr has
the potential to do the removal, since posix_removexattr already makes sure
that the key is not gfid/volume-id. And, surprise surprise, _posix_remove_xattr
happens only from the healing code of afr/ec. And this can only happen
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.
[root@glusterfs2 Log_Files]# gluster volume info
Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs2sds:/ws/disk1/ws_brick
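Restoring trusted.glusterfs.volume-id by hand boils down to one lsetxattr() call with the raw 16-byte UUID. A hedged sketch in C, plugging in the Volume ID and a brick path from the output above; as Ram noted earlier, do this while glusterd is stopped, and link with -luuid:

    #include <stdio.h>
    #include <sys/xattr.h>
    #include <uuid/uuid.h>  /* libuuid */

    /* Write the raw volume UUID back onto a brick root.
     * Values are taken from the `gluster volume info` output above. */
    int main (void)
    {
            const char *brick = "/ws/disk1/ws_brick";
            uuid_t      volid;

            if (uuid_parse ("149e976f-4e21-451c-bf0f-f5691208531f",
                            volid)) {
                    fprintf (stderr, "bad volume id\n");
                    return 1;
            }
            if (lsetxattr (brick, "trusted.glusterfs.volume-id",
                           volid, sizeof (volid), 0)) {
                    perror ("lsetxattr");
                    return 1;
            }
            return 0;
    }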
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
As per the code, self-heal was the only candidate which *can* do it.
Could you check the logs of the self-heal daemon and the mount to see if
there are any metadata heals on root?
+Sanoj
Sanoj,
Is there any systemtap script we can use to detect which process is
removing these xattrs?
On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
>
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script
(https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check
which process is invoking removexattr calls.
It prints the pid, tid, and arguments of all removexattr calls.
I have checked for these fops at the protocol/client and posix translators.
To run the script:
1) Install systemtap and its dependencies.
2) Install glusterfs-debuginfo.
3) Change the path
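The script itself is at the paste link above. As a rough fallback when SystemTap is not an option, an LD_PRELOAD shim in C can log libc-level removexattr calls for a single process; unlike the systemtap script it is not system-wide, and a complete shim would also cover lremovexattr and fremovexattr. A sketch:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <dlfcn.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    typedef int (*removexattr_fn) (const char *, const char *);

    /* Log every removexattr() from the process this is loaded into,
     * then forward to the real libc function.
     * Build: gcc -shared -fPIC shim.c -o shim.so -ldl
     * Use:   LD_PRELOAD=$PWD/shim.so <command>                    */
    int removexattr (const char *path, const char *name)
    {
            static removexattr_fn real;

            if (!real)
                    real = (removexattr_fn) dlsym (RTLD_NEXT,
                                                   "removexattr");
            fprintf (stderr, "removexattr pid=%d tid=%ld path=%s name=%s\n",
                     getpid (), syscall (SYS_gettid), path, name);
            return real (path, name);
    }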
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
@Pranith, yes. We can get the pid of every removexattr call and also print
the backtrace of the glusterfsd process when the xattr removal is triggered.
I will write the script and reply back.
On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com
> wrote:
> Ram,
> As per the code, self-heal was the only candidate which *can* do
> it. Could you check
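On the backtrace part: from an interception point like the shim above, glibc's execinfo interface can dump the caller's stack. Whether the eventual systemtap script does it this way is a separate question; this is only a sketch:

    #include <execinfo.h>

    /* Dump the current call stack to stderr (fd 2). Useful symbol
     * names need -rdynamic at build time or debuginfo installed. */
    static void log_backtrace (void)
    {
            void *frames[32];
            int   depth = backtrace (frames, 32);

            backtrace_symbols_fd (frames, depth, 2);
    }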
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turnaround. Will try this out and let you know.
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Monday, July 10, 2017 8:31 AM
To: Sanoj Unnikrishnan
Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
Ram,
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
If you see it again, you can use this. I am going to send out a patch
tomorrow for the code path which can lead to removal of gfid/volume-id.
On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com>
wrote:
> Please use the systemtap script(https://paste.fedoraproject.org/paste/
> EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking remove xattr
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
I sent https://review.gluster.org/17765 to fix the possibility of this
happening in the bulk removexattr path. But I am not sure if this is indeed
the reason for this issue.
On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> Thanks for the swift turnaround. Will try this out and let you know.
>
>
>
> Thanks and Regards,
>
> Ram
>
> *From:*
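The patch itself is not quoted here, but the shape of such a fix is easy to guess: the bulk-removal path gains the same protected-key check that posix_removexattr() already performs. A hypothetical sketch, not the actual code from https://review.gluster.org/17765:

    #include <string.h>

    /* Skip the two protected keys instead of removing them; mirrors
     * the check shown in posix_removexattr() earlier in the thread. */
    static int
    xattr_is_protected (const char *name)
    {
            return !strcmp (name, "trusted.gfid") ||
                   !strcmp (name, "trusted.glusterfs.volume-id");
    }

    /* ...inside the per-key removal loop:
     *         if (xattr_is_protected (key))
     *                 return 0;        // leave the key in place
     */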
2018 May 08
2
Compiling 3.13.2 under FreeBSD 11.1?
On Mon, May 7, 2018 at 9:19 PM, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote:
>
> See https://review.gluster.org/19974
Many thanks Kaleb.
Your patch did the trick and I did manage to compile; however, I get a
segmentation fault when trying to execute gluster.
I'm using the following options to configure (taken from the glusterfs
3.11.1 port in the FreeBSD ports repository):
2018 May 09
0
Compiling 3.13.2 under FreeBSD 11.1?
On Tue, May 8, 2018 at 11:34 AM, Roman Serbski <mefystofel at gmail.com> wrote:
> # gdb gluster
> GNU gdb 6.1.1 [FreeBSD]
> Copyright 2004 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you are
> welcome to change it and/or distribute copies of it under certain conditions.
> Type "show copying" to see the
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
Raghavendra,
Sorry for the late follow-up. I have some more data on the issue.
The issue tends to happen when the shards are created. The easiest time
to reproduce this is during an initial VM disk format. This is a log
from a test VM that was launched, and then partitioned and formatted
with LVM / XFS:
[2018-04-03 02:05:00.838440] W [MSGID: 109048]
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian,
Do you have a reproducer for this bug? If not a specific one, a general
outline of what operations were done on the file will help.
regards,
Raghavendra
On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com>
wrote:
>
>
> On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
> wrote:
>
>> The gfid mismatch
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :).
This looks to be a genuine issue which requires some effort to fix.
Can you file a bug? I need the following information attached to the bug:
* Client and bricks logs. If you can reproduce the issue, please set
diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If
you cannot reproduce the issue or if you cannot accommodate such big logs,
please set
2011 Jun 09
1
NFS problem
Hi,
I got the same problem as Juergen.
My volume is a simple replicated volume with 2 hosts and GlusterFS 3.2.0:
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
2018 May 07
0
Compiling 3.13.2 under FreeBSD 11.1?
On 05/07/2018 04:29 AM, Roman Serbski wrote:
> Hello,
>
> Has anyone managed to successfully compile the latest 3.13.2 under
> FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
> fails:
See https://review.gluster.org/19974
3.13 reached EOL with 4.0. There will be a fix posted for 4.0 soon. In
the meantime I believe your specific problem with 3.13.2 should be
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs, so I stumbled before the first
hurdle!
I have been using the scripts in extras/geo-rep provided for an earlier
version upgrade. With a bit of tinkering, these have given me a file