search for: selfheal

Displaying 20 results from an estimated 85 matches for "selfheal".

2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
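
A rough sketch of collecting the diagnostics requested above, assuming default log locations under /var/log/glusterfs/ and with the volume, brick, and file names left as placeholders:

    # 1. Volume layout and options
    gluster volume info <volname>
    # 2. Extended attributes of the affected file, run on every brick that holds a copy
    getfattr -d -e hex -m . <brickpath>/<filepath>
    # 3. Self-heal daemon and heal-info logs (default locations on most installs)
    less /var/log/glusterfs/glustershd.log
    less /var/log/glusterfs/glfsheal-<volname>.log
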
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster Summit in Prague; we will be checking this and respond next
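
If the tool referred to is the gluster-health-report utility announced around that time, running it on each node might look like the sketch below; the package name and command are assumptions based on that release, not taken from this thread:

    # install the report tool on each of the three nodes (assumed pip package name)
    pip install gluster-health-report
    # generate the report on each node and compare the results
    gluster-health-report
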
2017 Oct 26
2
not healing one file
...031] [client-rpc-fops.c:2928:client3_3_lookup_cbk] 0-home-client-3: remote operation failed. Path: <gfid:c2c7765a-17d9-49be-b7d7-042047a2186a> (c2c7765a-17d9-49be-b7d7-042047a2186a) [No such file or directory] [2017-10-23 12:07:26.676134] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 12286756-a097-4a6c-bc9d-5b89a88e0fc5. sources=[2] sinks=0 1 [2017-10-23 12:07:29.731815] W [MSGID: 114031] [client-rpc-fops.c:2928:client3_3_lookup_cbk] 0-home-client-4: remote operation failed. Path: <gfid:c2c7765a-17d9-49be-b7d7-042047a2186a>...
2018 Apr 05
2
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
Hi, I noticed when I run gluster volume heal data info, the following message shows up in the log, along with other stuff: [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory / I'm seeing it on Gluster 4.0.1 and 3.13.2. Here's the full log after running heal info: https://gist.github.com/fa19201d064490ce34a512a8f5cb82cc. Any idea why this could be?...
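
For context, a sketch of reproducing the message; the volume name "data" is taken from the 0-data-dht log prefix above, and the glfsheal log location is assumed to be the default:

    # trigger heal info for the volume
    gluster volume heal data info
    # the "Directory selfheal failed" warning typically lands in the glfsheal log
    grep dht_selfheal_directory /var/log/glusterfs/glfsheal-data.log
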
2018 Apr 05
0
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
On Thu, Apr 5, 2018 at 10:48 AM, Artem Russakovskii <archon810 at gmail.com> wrote: > Hi, > > I noticed when I run gluster volume heal data info, the following message > shows up in the log, along with other stuff: > > [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory >> selfheal failed: Unable to form layout for directory / > > > I'm seeing it on Gluster 4.0.1 and 3.13.2. > This msg is harmless. You can ignore it for now. There is a fix in the works at https://review.gluster.org/19727 >...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...da-442f-8180-fa40b6f5327c) [No such file or directory] [2018-04-09 06:58:53.715638] W [MSGID: 114031] [client-rpc-fops.c:670:client3_3_rmdir_cbk] 0-myvol-private-client-2: remote operation failed [Directory not empty] [2018-04-09 06:58:53.750372] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-myvol-private-replicate-0: performing metadata selfheal on 1cc6facf-eca5-481c-a905-7a39faa25156 [2018-04-09 06:58:53.757677] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-myvol-private-replicate-0: Completed metadata selfheal on 1cc6facf-eca5-481c-a905-7a39faa251...
2017 Nov 09
2
Error logged in fuse-mount log file
...<gluster-users at gluster.org> Hi, I am using glusterfs 3.10.1 and I am seeing the below message in the fuse-mount log file. What does this error mean? Should I worry about this and how do I resolve this? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb [2017-11-07 11:59:17.218935] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht:...
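
One way to inspect the directory the warning refers to, sketched with the gfid and path taken from the quoted log and the brick path left as a placeholder:

    # directory gfids appear as symlinks under the brick's .glusterfs tree
    ls -l <brickpath>/.glusterfs/3f/85/3f856ab3-f538-43ee-b408-53dd3da617fb
    # check the DHT layout xattr of that directory on each brick
    getfattr -n trusted.glusterfs.dht -e hex <brickpath>/fol1/fol2/fol3/fol4/fol5
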
2013 Jun 17
0
gluster client timeouts / found conflict
.../369/60702093 [2013-06-14 15:55:54] T [fuse-bridge.c:1964:fuse_write] glusterfs-fuse: 3642624: WRITE (0x18f5d80, size=131072, offset=2962411520) [2013-06-14 15:55:54] T [fuse-bridge.c:1912:fuse_writev_cbk] glusterfs-fuse: 3642624: WRITE => 131072/131072,2962411520/0 [2013-06-14 15:55:54] T [dht-selfheal.c:352:dht_selfheal_layout_new_directory] distribute: gave fix: 0 - 89478484 on dn-086-7 for /369/60702093 [2013-06-14 15:55:54] T [io-cache.c:133:ioc_inode_flush] iocache: locked inode(0x7f6628614170) [2013-06-14 15:55:54] T [dht-selfheal.c:352:dht_selfheal_layout_new_directory] distribute: gave fi...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...27c) [No such file or directory] > > [2018-04-09 06:58:53.715638] W [MSGID: 114031] [client-rpc-fops.c:670:client3_3_rmdir_cbk] 0-myvol-private-client-2: remote operation failed [Directory not empty] > > [2018-04-09 06:58:53.750372] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-myvol-private-replicate-0: performing metadata selfheal on 1cc6facf-eca5-481c-a905-7a39faa25156 > > [2018-04-09 06:58:53.757677] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-myvol-private-replicate-0: Completed metadata selfheal on 1cc6facf-eca5-481c-a905...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...hich is why the numbers are varying each > time. You would need to check why that is the case. > Hope this helps, > Ravi > > >> >> [2017-07-20 09:58:46.573079] I [MSGID: 108026] >> [afr-self-heal-common.c:1254:afr_log_selfheal] >> 0-engine-replicate-0: Completed data selfheal on >> e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1 sinks=2 >> [2017-07-20 09:59:22.995003] I [MSGID: 108026] >> [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do...
2017 Nov 13
2
Error logged in fuse-mount log file
...> Hi, >> >> I am using glusterfs 3.10.1 and I am seeing the below message in the fuse-mount log >> file. >> >> What does this error mean? Should I worry about this and how do I resolve >> this? >> >> [2017-11-07 11:59:17.218973] W [MSGID: 109005] >> [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory >> selfheal failed: 1 subvolumes have unrecoverable errors. path = >> /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb >> [2017-11-07 11:59:17.218935] I [MSGID: 109063] >> [dht-layout....
2018 May 02
3
Healing : No space left on device
...06118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for thedude - on node 2, volumes are up but don't seem to be willing to heal correctly. The logs show a lot of: [2018-05-02 09:23:01.054196] I [MSGID: 108026] [afr-self-heal-entry.c:887:afr_selfheal_entry_do] 0-thedude-replicate-0: performing entry selfheal on 4dc0ae36-c365-4fc7-b44c-d717392c7bd3 [2018-05-02 09:23:01.222596] E [MSGID: 114031] [client-rpc-fops.c:233:client3_3_mknod_cbk] 0-thedude-client-2: remote operation failed. Path: <gfid:74ea4c57-61e5-4674-96e4-51356dd710db>...
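
Given the "No space left on device" subject, a first check on each brick host might look like the following; the brick path is a placeholder, and ENOSPC during entry self-heal can come from exhausted inodes as well as full disks:

    # block usage of the brick filesystem
    df -h <brickpath>
    # inode usage of the brick filesystem
    df -i <brickpath>
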
2017 Nov 16
0
Missing files on one of the bricks
.... Path: <gfid:7e8513f4-d4e2-4e66-b0ba-2dbe4c803c54> (7e8513f4-d4e2-4e66-b0ba-2dbe4c803c54) [No such file or directory]" repeated 4 times between [2017-11-15 19:30:22.726808] and [2017-11-15 19:30:22.827631] [2017-11-16 15:04:34.102010] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-data01-replicate-0: performing metadata selfheal on 9612ecd2-106d-42f2-95eb-fef495c1d8ab [2017-11-16 15:04:34.186781] I [MSGID: 108026] [afr-self-heal-common.c:1255:afr_log_selfheal] 0-data01-replicate-0: Completed metadata selfheal on 9612ecd2-106d-42f2-95eb-fef495c1d8ab. sources=[1...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...r directory] > > > > [2018-04-09 06:58:53.715638] W [MSGID: 114031] [client-rpc-fops.c:670:client3_3_rmdir_cbk] 0-myvol-private-client-2: remote operation failed [Directory not empty] > > > > [2018-04-09 06:58:53.750372] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-myvol-private-replicate-0: performing metadata selfheal on 1cc6facf-eca5-481c-a905-7a39faa25156 > > > > [2018-04-09 06:58:53.757677] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-myvol-private-replicate-0: Completed metadata selfheal on 1cc6facf-eca5...
2017 Nov 10
0
Error logged in fuse-mount log file
...-users at gluster.org> > > > Hi, > > I am using glusterfs 3.10.1 and I am seeing the below message in the fuse-mount log > file. > > What does this error mean? Should I worry about this and how do I resolve > this? > > [2017-11-07 11:59:17.218973] W [MSGID: 109005] > [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory > selfheal failed: 1 subvolumes have unrecoverable errors. path = > /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb > [2017-11-07 11:59:17.218935] I [MSGID: 109063] > [dht-layout.c:713:dht_layout_nor...
2018 Jan 17
1
Gluster endless heal
...op afterward; it's been running for 3 days now, maxing out the bricks' utilization without actually healing anything. The bricks are all SSDs, and the logs of the source node are spamming with the following messages: [2018-01-17 18:37:11.815247] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-ovirt_imgs-replicate-0: Completed data selfheal on 450fb07a-e95d-48ef-a229-48917557c278. sources=[0] sinks=1 [2018-01-17 18:37:12.830887] I [MSGID: 108026] [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do] 0-ovirt_imgs-replicate-0: performing metadata selfheal on ce0f545d-635a-40c0-95eb-...
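
To see whether such a heal is actually making progress, one might watch the pending-heal counts over time; the volume name is a placeholder:

    # number of entries each brick still reports as needing heal
    gluster volume heal <volname> statistics heal-count
    # list the entries themselves to check whether the same gfids keep reappearing
    gluster volume heal <volname> info
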
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...> > > [2018-04-09 06:58:53.715638] W [MSGID: 114031] [client-rpc-fops.c:670:client3_3_rmdir_cbk] 0-myvol-private-client-2: remote operation failed [Directory not empty] > > > > > > > > [2018-04-09 06:58:53.750372] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-myvol-private-replicate-0: performing metadata selfheal on 1cc6facf-eca5-481c-a905-7a39faa25156 > > > > > > > > [2018-04-09 06:58:53.757677] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-myvol-private-replicate-0: Completed metadata selfh...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...y] >>> >>> [2018-04-09 06:58:53.715638] W [MSGID: 114031] [client-rpc-fops.c:670:client3_3_rmdir_cbk] 0-myvol-private-client-2: remote operation failed [Directory not empty] >>> >>> [2018-04-09 06:58:53.750372] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-myvol-private-replicate-0: performing metadata selfheal on 1cc6facf-eca5-481c-a905-7a39faa25156 >>> >>> [2018-04-09 06:58:53.757677] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-myvol-private-replicate-0: Completed metadata selfheal on 1cc6facf...
2017 Nov 16
2
Missing files on one of the bricks
On 11/16/2017 04:12 PM, Nithya Balachandran wrote: > > > On 15 November 2017 at 19:57, Frederic Harmignies > <frederic.harmignies at elementai.com> wrote: > > Hello, we have 2x files that are missing from one of the bricks. > No idea how to fix this. > > Details: > > # gluster volume
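
A commonly suggested first step for missing replicas, sketched under the assumption that the files are simply absent on one brick rather than in split-brain; the volume name and paths are placeholders:

    # ask the self-heal daemon to process pending entries
    gluster volume heal <volname>
    # or stat the affected files from a FUSE mount to trigger lookup-driven self-heal
    stat /mnt/<volname>/path/to/missing/file
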
2017 Nov 14
2
Error logged in fuse-mount log file
...use-mount log file To: Gluster Users <gluster-users at gluster.org> Hi, I am using glusterfs 3.10.1 and I am seeing the below message in the fuse-mount log file. What does this error mean? Should I worry about this and how do I resolve this? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb [2017-11-07 11:59:17.218935] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dh...