On Thu, Jul 14, 2016 at 4:53 AM, Manikandan Selvaganesh <mselvaga at
redhat.com> wrote:
> Hi David,
>
> As you have mentioned, the issue is already fixed and
> the patch is here[1]. It has also been backported to 3.8 and 3.7.12.
>
> [1] http://review.gluster.org/#/c/13793/
>
Good news. Sadly, when I tested updating to 3.7.12 & 13 last weekend I ran
into issues where oVirt would not keep storage active. I was flooded with
messages like these:
[2016-07-09 15:27:46.935694] I [fuse-bridge.c:4083:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel
7.22
[2016-07-09 15:27:49.555466] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-1: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:49.556574] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-0: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:49.556659] W [fuse-bridge.c:2227:fuse_readv_cbk]
0-glusterfs-fuse: 80: READ => -1 gfid=deb61291-5176-4b81-8315-3f1cf8e3534d
fd=0x7f5224002f68 (Operation not permitted)
[2016-07-09 15:27:59.612477] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-1: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:59.613700] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-0: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:59.613781] W [fuse-bridge.c:2227:fuse_readv_cbk]
0-glusterfs-fuse: 168: READ => -1 gfid=deb61291-5176-4b81-8315-3f1cf8e3534d
fd=0x7f5224002f68 (Operation not permitted)
I haven't had time to dig into it yet, but Lindsay Mathieson suggested it has
to do with aio support being removed in gluster, and that the disk images'
writeback settings would need to be changed. I can probably do that through
custom settings in oVirt, but the tests oVirt runs against storage nodes via
dd to check whether they are up are not something I can modify easily (a rough
sketch of that check is below). If those tests fail, after a few minutes the
node would be marked inactive, and even if the VMs themselves can still see
storage, the engine would likely keep pausing them thinking there is a storage
issue.
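
For reference, my understanding (going from memory rather than from an actual
host) is that the check VDSM performs against a storage domain is basically a
small direct-I/O read of the domain metadata, which is exactly the kind of
read that is failing with "Operation not permitted" above. Something along
these lines, where the mount path, block size and count are illustrative
placeholders rather than values from my setup:

  # Rough sketch of the VDSM-style liveness probe: a small O_DIRECT read of
  # the storage domain metadata. Path and sizes are assumptions for
  # illustration only.
  dd if=/rhev/data-center/mnt/glusterSD/<server>:_<volume>/<domain-uuid>/dom_md/metadata \
     of=/dev/null bs=4096 count=1 iflag=direct

If the volume rejects O_DIRECT reads, a probe like that keeps failing even
though the VMs may still be able to reach their images, and after a few
minutes the engine would flag the domain and start pausing VMs, which matches
what I saw.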
>
> On Thu, Jul 14, 2016 at 2:40 PM, David Gossage <
> dgossage at carouselchecks.com> wrote:
>
>>
>> On Thu, Jul 14, 2016 at 4:07 AM, David Gossage <
>> dgossage at carouselchecks.com> wrote:
>>
>>>
>>>
>>> On Thu, Jul 14, 2016 at 3:33 AM, Manikandan Selvaganesh <
>>> mselvaga at redhat.com> wrote:
>>>
>>>> Hi David,
>>>>
>>>> Which version are you using? Though the error seems superfluous, do you
>>>> observe any functional failures?
>>>>
>>>
>>> 3.7.11, and no, so far I have noticed no issues over the past week as I
>>> have been enabling sharding on storage. VMs all seem to be running just
>>> fine. I've been migrating disk images off and on to shard a few a night
>>> since Sunday, and all have been behaving as expected.
>>>
>>>>
>>>> Also, there are quite a few EINVAL bugs we fixed in 3.8; could you point
>>>> out the one you find matching.
>>>>
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1325810
>>>
>>> This is one I found while searching for a portion of my error message.
>>>
>>
>> Technically that was a duplicate report; the bug it is covered under is at
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1319581
>>
>> Though I do not have quota enabled (that I am aware of) as is described
>> there.
>>
>>
>>
>>>
>>>> On Thu, Jul 14, 2016 at 1:51 PM, David Gossage <
>>>> dgossage at carouselchecks.com> wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jul 13, 2016 at 11:02 PM, Atin Mukherjee <amukherj at redhat.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Jul 14, 2016 at 8:02 AM, David Gossage <
>>>>>> dgossage at carouselchecks.com> wrote:
>>>>>>
>>>>>>> M, David Gossage <dgossage at carouselchecks.com> wrote:
>>>>>>>
>>>>>>>> Is there a way to reduce logging of informational spam?
>>>>>>>>
>>>>>>>> /var/log/glusterfs/bricks/gluster1-BRICK1-1.log is now 3GB over the
>>>>>>>> past few days
>>>>>>>>
>>>>>>>> [2016-07-14 00:54:35.267018] I [dict.c:473:dict_get]
>>>>>>>> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
>>>>>>>> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
>>>>>>>> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
>>>>>>>> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
>>>>>>>> [2016-07-14 00:54:35.272945] I [dict.c:473:dict_get]
>>>>>>>> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
>>>>>>>> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
>>>>>>>> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
>>>>>>>> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>> @Mani, is this something which gets logged in a normal scenario? I
>>>>>> doubt it.
>>>>>>
>>>>>>
>>>>> I did find a bug report about it presumably being fixed in 3.8.
>>>>>
>>>>> I also currently have a node down which may be triggering them.
>>>>>
>>>>>
>>>>>>>>
>>>>>>> Believe I found it:
>>>>>>>
>>>>>>> gluster volume set testvol diagnostics.brick-log-level WARNING
>>>>>>>
>>>>>>>
>>>>>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Configuring_the_Log_Level.html
>>>>>>>
>>>>>>>
>>>>>>> *David Gossage*
>>>>>>>> *Carousel Checks Inc. | System Administrator*
>>>>>>>> *Office* 708.613.2284
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Gluster-users mailing list
>>>>>>> Gluster-users at gluster.org
>>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> --Atin
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Manikandan Selvaganesh.
>>>>
>>>
>>>
>>
>
>
> --
> Regards,
> Manikandan Selvaganesh.
>