Displaying 13 results from an estimated 13 matches for "a51a".
2018 Nov 09
0
virsh dominfo not returning any stats
...3.9.0
Using library: libvirt 3.9.0
Using API: QEMU 3.9.0
Running hypervisor: QEMU 2.9.0
Comparing the two domstats outputs, we see that cpu.time is not increasing, but vcpu.0.time is. It's also not reporting a vcpu.0.wait stat on this CentOS hypervisor.
# virsh domstats
Domain: '3a1de3d0-9de3-4876-a51a-1337d2ad0b87'
state.state=1
state.reason=5
cpu.time=18530581887
cpu.user=430000000
cpu.system=2910000000
balloon.current=786432
balloon.maximum=786432
vcpu.current=1
vcpu.maximum=1
vcpu.0.state=1
vcpu.0.time=1145240000000
# virsh domstats
Domain: '3a1de3d0-9de3-4876-a5...
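A quick way to re-check this pattern from the shell (a minimal sketch; the domain UUID is taken from the output above, and the 10-second interval is arbitrary):

# Sample the CPU counters twice; on a healthy host both cpu.time
# and vcpu.0.time should grow while the guest is running.
DOM='3a1de3d0-9de3-4876-a51a-1337d2ad0b87'
virsh domstats --cpu-total --vcpu "$DOM" | grep -E 'cpu\.time|vcpu\.0\.'
sleep 10
virsh domstats --cpu-total --vcpu "$DOM" | grep -E 'cpu\.time|vcpu\.0\.'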
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like "
got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
Anyway I reproduced it by manually setting the afr.dirty bit for a zero
byte file on all 3 bricks. Since there are no afr pending xattrs
indicating good/bad copies and all files are zero bytes, the data
self-heal algorithm just picks the file with th...
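For reference, one way to reproduce that state by hand (a sketch, not necessarily the poster's exact commands; the brick path is hypothetical, and the 12-byte value sets only the data-pending counter in AFR's data/metadata/entry layout):

# Run on each of the 3 bricks against the same zero-byte file.
# Value layout: 4 bytes data | 4 bytes metadata | 4 bytes entry.
setfattr -n trusted.afr.dirty -v 0x000000010000000000000000 /data/brick/myvol-pro/path/to/file
# Confirm the xattr is set (hex dump of the afr attributes):
getfattr -d -m trusted.afr -e hex /data/brick/myvol-pro/path/to/file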
2017 Aug 27
2
self-heal not working
...t redhat.com
> > To: mabi <mabi at protonmail.ch>
> > Ben Turner <bturner at redhat.com>, Gluster Users <gluster-users at gluster.org>
> >
> > Yes, the shds did pick up the file for healing (I saw messages like " got
> > entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
> >
> > Anyway I reproduced it by manually setting the afr.dirty bit for a zero
> > byte file on all 3 bricks. Since there are no afr pending xattrs
> > indicating good/bad copies and all files are zero bytes, the data
> > s...
2017 Aug 27
0
self-heal not working
...45 PM
> From: ravishankar at redhat.com
> To: mabi <mabi at protonmail.ch>
> Ben Turner <bturner at redhat.com>, Gluster Users <gluster-users at gluster.org>
>
> Yes, the shds did pick up the file for healing (I saw messages like " got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
>
> Anyway I reproduced it by manually setting the afr.dirty bit for a zero byte file on all 3 bricks. Since there are no afr pending xattrs indicating good/bad copies and all files are zero bytes, the data self-heal algorithm just picks the file w...
2017 Aug 28
3
self-heal not working
...abi <mabi at protonmail.ch>
>>>> Ben Turner <bturner at redhat.com>, Gluster Users <gluster-users at gluster.org>
>>>>
>>>> Yes, the shds did pick up the file for healing (I saw messages like " got
>>>> entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
>>>>
>>>> Anyway I reproduced it by manually setting the afr.dirty bit for a zero
>>>> byte file on all 3 bricks. Since there are no afr pending xattrs
>>>> indicating good/bad copies and all files are zero...
2017 Aug 28
0
self-heal not working
...>>> To: mabi <mabi at protonmail.ch>
>>> Ben Turner <bturner at redhat.com>, Gluster Users <gluster-users at gluster.org>
>>>
>>> Yes, the shds did pick up the file for healing (I saw messages like " got
>>> entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
>>>
>>> Anyway I reproduced it by manually setting the afr.dirty bit for a zero
>>> byte file on all 3 bricks. Since there are no afr pending xattrs
>>> indicating good/bad copies and all files are zero bytes, the data...
2017 Aug 28
0
self-heal not working
...> >>> Ben Turner <bturner at redhat.com>, Gluster Users
>> <gluster-users at gluster.org>
>> >>>
>> >>> Yes, the shds did pick up the file for healing (I saw messages
>> like " got
>> >>> entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error
>> afterwards.
>> >>>
>> >>> Anyway I reproduced it by manually setting the afr.dirty bit for
>> a zero
>> >>> byte file on all 3 bricks. Since there are no afr pending xattrs
>> >>> indicatin...
2017 Aug 28
2
self-heal not working
...Ben Turner <bturner at redhat.com>, Gluster Users <gluster-users at gluster.org>
>>>>>>
>>>>>> Yes, the shds did pick up the file for healing (I saw messages like " got
>>>>>> entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
>>>>>>
>>>>>> Anyway I reproduced it by manually setting the afr.dirty bit for a zero
>>>>>> byte file on all 3 bricks. Since there are no afr pending xattrs
>>>>>> indicating good/b...
2017 Aug 25
0
self-heal not working
...concerned is called myvol-pro; the other 3 volumes have no problem so far.
>
> Also note that in the meantime it looks like the file has been deleted by the user, and as such the heal info command does not show the file name anymore but just its GFID, which is:
>
> gfid:1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea
>
> Hope that helps for debugging this issue.
>
>> -------- Original Message --------
>> Subject: Re: [Gluster-users] self-heal not working
>> Local Time: August 24, 2017 5:58 AM
>> UTC Time: August 24, 2017 3:58 AM
>> From: ravishankar at redhat....
2017 Aug 28
0
self-heal not working
...t redhat.com>, Gluster Users
>>>> <gluster-users at gluster.org>
>>>> >>>
>>>> >>> Yes, the shds did pick up the file for healing (I saw messages
>>>> like " got
>>>> >>> entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error
>>>> afterwards.
>>>> >>>
>>>> >>> Anyway I reproduced it by manually setting the afr.dirty bit
>>>> for a zero
>>>> >>> byte file on all 3 bricks. Since there are no afr pendin...
2017 Aug 24
2
self-heal not working
...mail.
The volume concerned is called myvol-pro; the other 3 volumes have no problem so far.
Also note that in the meantime it looks like the file has been deleted by the user, and as such the heal info command does not show the file name anymore but just its GFID, which is:
gfid:1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea
Hope that helps for debugging this issue.
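When heal info reports only a GFID like this, the underlying object can still be located on a brick through the .glusterfs index (a sketch; the brick path is hypothetical):

BRICK=/data/brick/myvol-pro
GFID=1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea
# Files live under .glusterfs/<first 2 hex chars>/<next 2 chars>/<full gfid>
ls -l "$BRICK/.glusterfs/19/85/$GFID"
# For regular files this is a hardlink; resolve the real path on the brick:
find "$BRICK" -samefile "$BRICK/.glusterfs/19/85/$GFID" -not -path '*/.glusterfs/*'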
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 24, 2017 5:58 AM
> UTC Time: August 24, 2017 3:58 AM
> From: ravishankar at redhat.com
> To: mabi <mabi at pro...
2017 Aug 24
0
self-heal not working
Unlikely. In your case only the afr.dirty is set, not the
afr.volname-client-xx xattr.
`gluster volume set myvolume diagnostics.client-log-level DEBUG` is right.
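To see which afr xattrs are actually present, the file can be inspected directly on each brick (a sketch; the brick path is hypothetical):

# trusted.afr.dirty and any trusted.afr.myvolume-client-N pending
# counters will show up in this hex dump; run it on every brick:
getfattr -d -m trusted.afr -e hex /data/brick/myvolume/path/to/file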
On 08/23/2017 10:31 PM, mabi wrote:
> I just saw the following bug which was fixed in 3.8.15:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1471613
>
> Is it possible that the problem I described in this post is
2017 Aug 23
2
self-heal not working
I just saw the following bug which was fixed in 3.8.15:
https://bugzilla.redhat.com/show_bug.cgi?id=1471613
Is it possible that the problem I described in this post is related to that bug?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 22, 2017 11:51 AM
> UTC Time: August 22, 2017 9:51 AM
> From: ravishankar at