Displaying 7 results from an estimated 7 matches for "b46b".
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
...03 02:07:57.979851] W [MSGID: 109009] [dht-common.c:2831:dht_lookup_linkfile_cbk] 0-ovirt-350-zone1-dht: /.shard/927c6620-848b-4064-8c88-68a332b645c2.3: gfid different on data file on ovirt-350-zone1-replicate-3, gfid local = 00000000-0000-0000-0000-000000000000, gfid node = 55f86aa0-e7a0-4075-b46b-a11f8bdbbceb
[2018-04-03 02:07:57.980716] W [MSGID: 109009] [dht-common.c:2570:dht_lookup_everywhere_cbk] 0-ovirt-350-zone1-dht: /.shard/927c6620-848b-4064-8c88-68a332b645c2.3: gfid differs on subvolume ovirt-350-zone1-replicate-3, gfid local = b1e3f299-32ff-497e-918b-090e957090f6, gfid node =...
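The two gfids being compared in these warnings are stored as the trusted.gfid xattr of the shard on each brick. A minimal way to see which replica disagrees is to read that xattr for the same shard path on every brick of the affected subvolume (a sketch only; the brick paths below are placeholders, not taken from the thread, and the commands need root on each server):

for brick in /gluster/brick1/ovirt-350-zone1 /gluster/brick2/ovirt-350-zone1; do
    echo "== $brick =="
    # trusted.gfid holds the gfid that DHT/AFR report in the warnings above
    getfattr -n trusted.gfid -e hex \
        "$brick/.shard/927c6620-848b-4064-8c88-68a332b645c2.3"
done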
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
...851] W [MSGID: 109009] [dht-common.c:2831:dht_lookup_linkfile_cbk] 0-ovirt-350-zone1-dht: /.shard/927c6620-848b-4064-8c88-68a332b645c2.3: gfid different on data file on ovirt-350-zone1-replicate-3, gfid local = 00000000-0000-0000-0000-000000000000, gfid node = 55f86aa0-e7a0-4075-b46b-a11f8bdbbceb
> [2018-04-03 02:07:57.980716] W [MSGID: 109009] [dht-common.c:2570:dht_lookup_everywhere_cbk] 0-ovirt-350-zone1-dht: /.shard/927c6620-848b-4064-8c88-68a332b645c2.3: gfid differs on subvolume ovirt-350-zone1-replicate-3, gfid local = b1e3f299-32ff-497e-918b-090e957090...
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian,
Do you have a reproducer for this bug? If not a specific one, a general
outline of what operations were done on the file will help.
regards,
Raghavendra
On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com>
wrote:
>
>
> On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
> wrote:
>
>> The gfid mismatch
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> The gfid mismatch here is between the shard and its "link-to" file, the
> creation of which happens at a layer below that of shard translator on the
> stack.
>
> Adding DHT devs to take a look.
>
Thanks Krutika. I assume shard doesn't do any dentry operations like
rename,
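For readers unfamiliar with the "link-to" file mentioned above: DHT leaves a zero-byte pointer file (mode ---------T, i.e. only the sticky bit set) on one subvolume, carrying a trusted.glusterfs.dht.linkto xattr that names the subvolume holding the real data. A hedged way to spot one on a brick (the brick path below is a placeholder, not from this thread):

# zero size plus a lone sticky bit is the usual signature of a linkto file
ls -l /gluster/brick1/ovirt-350-zone1/.shard/927c6620-848b-4064-8c88-68a332b645c2.3
# the xattr names the DHT subvolume the file actually hashes to
getfattr -n trusted.glusterfs.dht.linkto -e text \
    /gluster/brick1/ovirt-350-zone1/.shard/927c6620-848b-4064-8c88-68a332b645c2.3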
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
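A rough sequence for collecting the three items Karthik asks for, kept as a sketch with the same placeholders he uses (the log locations are the usual defaults and may differ on your install):

# 1. volume configuration
gluster volume info <volname>
# 2. xattrs of the unhealed file, run on every brick that holds a copy
getfattr -d -e hex -m . <brickpath>/<filepath>
# 3. self-heal daemon and glfsheal logs, typically under /var/log/glusterfs/
ls -l /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-<volname>.log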
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
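Assuming the "health report tool" referred to here is the gluster-health-report project (the package and command names below are my assumption, not stated in the message), trying it on each of the three nodes might look like:

pip install gluster-health-report   # install on every node
gluster-health-report               # run locally and review the reported warnings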
2017 Oct 26
2
not healing one file
.../8E5CE1B5FAD91BCA037564647DA4FBF0A9134B91 (a4140db7-2989-472c-9024-66171c0dbc78) on home-client-2
[2017-10-25 10:14:17.182223] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/E27E50CFAF47AFE9E723635EB58778B46B2E1F13 (be9efeff-b4f8-4c70-96b6-3f66b3f303f6) on home-client-2
[2017-10-25 10:14:17.191006] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/781DA2AB18768FE8A30852A817CE27E0040718B9 (cab34e14-37bd-4cde-9dca...
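These expunge messages come from AFR entry self-heal on home-replicate-0. A hedged way to see what the self-heal daemon still considers pending and to inspect the heal bookkeeping on the bricks (<volname> and the brick path are placeholders; the excerpt suggests the volume is named "home"):

# list entries the self-heal daemon still has queued
gluster volume heal <volname> info
# AFR keeps its pending-heal counters in trusted.afr.* xattrs on each brick
getfattr -d -m trusted.afr -e hex <brickpath>/<filepath>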