On 07/01/2020 07:08, Ravishankar N wrote:
>
> On 06/01/20 8:12 pm, lejeczek wrote:
>> And when I start this volume, in log on the brick which shows gfids:
> I assume these messages are from the self-heal daemon's log
> (glustershd.log). Correct me if I am mistaken.
>> ...
>>
>> [2020-01-06 14:28:24.119506] E [MSGID: 114031]
>> [client-rpc-fops_v2.c:150:client4_0_mknod_cbk] 0-QEMU_VMs-client-5:
>> remote operation failed. Path:
>> <gfid:3f0239ac-e027-4a0c-b271-431e76ad97b1> [Permission denied]
>
> Can you provide the following information:
>
> 1. gluster version
>
> 2. gluster volume info $volname
>
> 3. `getfattr -d -m . -e hex` and `stat` outputs of any one gfid/file
> for which you see the EACCES from /all the bricks/ of the replica?
>
> 4. Do you see errors in the brick log as well? Does
> server4_mknod_cbk() also complain of EACCES errors?
>
> 5. If yes, can you attach gdb to the brick process, put a break point
> in server4_mknod_cbk() and provide the function backtrace once it is hit?
>
> Thanks,
> Ravi
hi Ravi,
yes, from glustershd.log.
1. glusterfs 6.6 @CentOS 7.6
2. $ gluster volume info QEMU_VMs

Volume Name: QEMU_VMs
Type: Replicate
Volume ID: 595c0536-aef9-4491-b2ff-b97116089236
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: swir-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
Brick2: rider-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
Brick3: whale-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
Options Reconfigured:
dht.force-readdirp: off
performance.force-readdirp: off
performance.nl-cache: on
performance.xattr-cache-list: on
performance.cache-samba-metadata: off
performance.cache-size: 128MB
performance.io-thread-count: 64
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
transport.address-family: inet
nfs.disable: on
cluster.self-heal-daemon: enable
storage.owner-gid: 107
storage.owner-uid: 107
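Aside, not a diagnosis: with storage.owner-uid/gid set to 107 and the shd
reporting "Permission denied", a quick sanity check would be to compare
ownership and mode of the brick root on each node, e.g. (path taken from
the volume info above):
$ stat -c '%u:%g %A %n' /__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs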
3. The files which heal info shows for that brick/replica appear to exist
only on that very brick/replica:
$ gluster volume heal QEMU_VMs info
Brick swir-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
Status: Connected
Number of entries: 0

Brick rider-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
/HA-halfspeed-LXC/rootfs/var/lib/gssproxy/default.sock
...
...
Status: Connected
Number of entries: 11

Brick whale-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
Status: Connected
Number of entries: 0
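A possible aside, in case it helps map the gfids from glustershd.log (e.g.
3f0239ac-e027-4a0c-b271-431e76ad97b1) back to paths: on any brick that has
the entry, non-directory entries are hard-linked under
.glusterfs/<first-2-hex>/<next-2-hex>/<gfid>, so something along these lines
(brick path taken from the volume info above) should reveal the real path:
$ cd /__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
$ ls -l .glusterfs/3f/02/3f0239ac-e027-4a0c-b271-431e76ad97b1
$ find . -samefile .glusterfs/3f/02/3f0239ac-e027-4a0c-b271-431e76ad97b1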
--- And:
$ getfattr -d -m . -e hex
./HA-halfspeed-LXC/rootfs/var/lib/gssproxy/default.sock
# file: HA-halfspeed-LXC/rootfs/var/lib/gssproxy/default.sock
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.QEMU_VMs-client-3=0x00000000000007c600000000
trusted.afr.QEMU_VMs-client-5=0x00000000000007c400000000
trusted.gfid=0xc1c4431d567143839d119ed4b80beeff
trusted.gfid2path.0163b12aa697441e=0x61353365366133372d376636352d346337332d623962342d6261613837383137613162302f64656661756c742e736f636b
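Point 3 of the questions also asks for the stat output of the same file from
all the bricks; just as a sketch, that would be, relative to each brick root
(and, where it exists, the matching gfid link derived from trusted.gfid above):
$ stat HA-halfspeed-LXC/rootfs/var/lib/gssproxy/default.sock
$ stat .glusterfs/c1/c4/c1c4431d-5671-4383-9d11-9ed4b80beeff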
4. On the brick/replica in question, I do not see mknod_cbk() with "server"
anywhere in the logs.
many thanks, L.