Displaying 7 results from an estimated 7 matches similar to: "file changed as we read it"
2017 Oct 26
0
not healing one file
Hi Richard,
Thanks for the information. As you said, there is a gfid mismatch for the
file.
On brick-1 & brick-2 the gfids are the same, and on brick-3 the gfid is different.
This is not considered split-brain because we have two good copies here.
Gluster 3.10 does not have a method to resolve this situation other than
manual intervention [1]. Basically what you need to do is remove the
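For reference, the usual manual fix for a gfid mismatch is to delete the bad copy and its gfid hard-link from the offending brick and let self-heal recreate it from the good copies. A rough sketch, assuming brick-3 holds the bad copy and using placeholder paths rather than the ones from this thread:

    # On the node hosting brick-3 (the copy with the mismatched gfid).
    # Read the gfid of the bad copy (a hex string such as 0xd0b0...).
    getfattr -d -e hex -m trusted.gfid /bricks/home/brick3/path/to/file

    # Remove the file from the brick along with its hard-link under
    # .glusterfs/<aa>/<bb>/<full-gfid>, where aa and bb are the first and
    # second pairs of hex characters of that gfid.
    rm /bricks/home/brick3/path/to/file
    rm /bricks/home/brick3/.glusterfs/aa/bb/<full-gfid>

    # Trigger self-heal so the file is rebuilt from the two good copies.
    gluster volume heal home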
2017 Oct 26
0
not healing one file
Thanks for this report. This week many of the developers are at Gluster
Summit in Prague; we will check this and respond next week. Hope that's
fine.
Thanks,
Amar
On 25-Oct-2017 3:07 PM, "Richard Neuboeck" <hawk at tbi.univie.ac.at> wrote:
> Hi Gluster Gurus,
>
> I'm using a gluster volume as home for our users. The volume is
> replica 3, running on
2017 Oct 25
2
not healing one file
Hi Gluster Gurus,
I'm using a gluster volume as home for our users. The volume is
replica 3, running on CentOS 7, gluster version 3.10
(3.10.6-1.el7.x86_64). Clients are running Fedora 26 and also
gluster 3.10 (3.10.6-3.fc26.x86_64).
During the data backup I got an I/O error on one file. Manually
checking this file on a client confirms it:
ls -l
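A minimal way to run such a check, with placeholder paths rather than the ones from the report, is to compare the client view of the file with each brick's copy:

    # On a client, through the FUSE mount; this is where the I/O error shows up:
    ls -l /home/user/path/to/file

    # On each of the three brick nodes, inspect the local copy directly:
    ls -l /bricks/home/brick/path/to/file
    getfattr -d -e hex -m . /bricks/home/brick/path/to/file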
2017 Nov 19
0
gluster share as home
Hi Gluster Group,
I've been using gluster as the storage back end for oVirt for some years now
without the slightest hitch.
Encouraged by this, I wanted to switch our home share from NFS over to a
replica 3 gluster volume as well. Since small-file performance was not
particularly good, I applied all the performance-enhancing settings I could
find in the gluster blog and on other sites. Those
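The mail does not say which settings were applied; the options below are only commonly cited small-file tuning knobs for Gluster 3.x, shown purely as an illustration (the volume name "home" is a placeholder):

    # Illustrative small-file tuning, not the poster's actual settings:
    gluster volume set home features.cache-invalidation on
    gluster volume set home performance.cache-invalidation on
    gluster volume set home performance.stat-prefetch on
    gluster volume set home performance.md-cache-timeout 600
    gluster volume set home network.inode-lru-limit 50000
    gluster volume set home client.event-threads 4
    gluster volume set home server.event-threads 4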
2017 Oct 26
2
not healing one file
Hi Karthik,
thanks for taking a look at this. I haven't been working with gluster long
enough to make heads or tails of the logs. The logs are attached to
this mail and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
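The tool referred to is presumably gluster-health-report, announced around this time; the exact install and invocation may differ:

    # Assumed to be the gluster-health-report project; run it on each of the
    # three nodes, since there is no cluster-wide mode.
    pip install gluster-health-report
    gluster-health-report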
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks (see the sketch below this list)
    getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
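A sketch of gathering item 2 on every brick node, with the brick path as a placeholder:

    # Run on each of the three brick nodes; the brick path is illustrative.
    getfattr -d -e hex -m . /bricks/home/brickN/path/to/file
    # trusted.gfid should be identical on all bricks; trusted.afr.* entries
    # hold pending-heal counters pointing at the other replicas.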
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health