Ankireddypalle Reddy
2016-Dec-24 10:31 UTC
[Gluster-users] Need to understand this logging in disperse volume
Hi,

In our application we replace a file named sample with a file named sample.compact. Here's the sequence of steps:

1) Rename sample to sample.temp
2) Rename sample.compact to sample
3) Unlink sample.temp

Please note that this is a 3:1 disperse volume and each node is also a client. It's a FUSE mount. Here's the corresponding logging:

[2016-12-24 09:23:25.407934] I [MSGID: 109066] [dht-rename.c:1413:dht_rename] 0-StoragePool-dht: renaming /Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002 (hash=StoragePool-disperse-6/cache=StoragePool-disperse-6) => /Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002.temp (hash=StoragePool-disperse-4/cache=<nul>)
[2016-12-24 09:23:25.829296] I [MSGID: 109066] [dht-rename.c:1413:dht_rename] 0-StoragePool-dht: renaming /Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002.compact (hash=StoragePool-disperse-0/cache=StoragePool-disperse-0) => /Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002 (hash=StoragePool-disperse-6/cache=<nul>)

At this point, when I check which bricks the file should be present on, I see the trusted.ec.config attribute set:

glusterfs.pathinfo="(<DISTRIBUTE:StoragePool-dht> (<EC:StoragePool-disperse-0> <POSIX(/ws/disk1/ws_brick):glusterfs2.commvault.com:/ws/disk1/ws_brick/Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002> <POSIX(/ws/disk1/ws_brick):glusterfs3.commvault.com:/ws/disk1/ws_brick/Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002> <POSIX(/ws/disk1/ws_brick):glusterfs1:/ws/disk1/ws_brick/Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002>))"

ws/disk1/ws_brick/Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002
trusted.ec.config=0x0000080301000200

But I notice the following warnings in the log files:

[2016-12-24 09:23:26.759164] W [MSGID: 114031] [client-rpc-fops.c:1848:client3_3_xattrop_cbk] 0-StoragePool-client-14: remote operation failed. Path: /Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002 (cf977efe-e263-49b9-8099-06b36122e715) [No such file or directory]
[2016-12-24 09:23:26.759202] W [MSGID: 114031] [client-rpc-fops.c:1848:client3_3_xattrop_cbk] 0-StoragePool-client-12: remote operation failed. Path: /Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002 (cf977efe-e263-49b9-8099-06b36122e715) [No such file or directory]
[2016-12-24 09:23:26.759383] W [MSGID: 114031] [client-rpc-fops.c:1848:client3_3_xattrop_cbk] 0-StoragePool-client-13: remote operation failed. Path: /Folder_07.11.2016_23.02/CV_MAGNETIC/V_8772830/CHUNK_49113632/SFILE_CONTAINER_002 (cf977efe-e263-49b9-8099-06b36122e715) [No such file or directory]

Client 14 is:

volume StoragePool-client-14
    ...
    option remote-subvolume /ws/disk5/ws_brick
    ...
end-volume

Why is the extended attribute being checked on the wrong brick (/ws/disk5) when the correct brick is /ws/disk1? I see a lot of these errors being logged.

Thanks and Regards,
Ram
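P.S. For clarity, here is a minimal, self-contained sketch of the swap in steps 1-3. It uses placeholder file names in a scratch directory rather than the real volume paths; on the cluster the same plain renames are issued through the FUSE mount.

```shell
#!/bin/sh
# Sketch of the compaction swap (placeholder names and a scratch
# directory, not the actual Gluster paths).
set -e
dir=$(mktemp -d)
cd "$dir"

printf 'old' > sample            # the file being replaced
printf 'new' > sample.compact    # the compacted replacement

mv sample sample.temp            # 1) rename sample to sample.temp
mv sample.compact sample         # 2) rename sample.compact to sample
rm sample.temp                   # 3) unlink sample.temp

cat sample                       # prints "new": the compacted contents
```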