search for: dht_revalidate_cbk

Displaying 20 results from an estimated 20 matches for "dht_revalidate_cbk".

2011 Feb 24
1
Experiencing errors after adding new nodes
Hi, I had a 2-node distributed cluster running on 3.1.1 and I added 2 more nodes. I then ran a rebalance on the cluster. Now I am getting permission-denied errors and I see the following in the client logs: [2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument) [2011-02-24 09:59:11.851656] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument) [root@qe-loader1 glusterfs]# tail -100 mnt-qe-filer01.log [2011-02-24 09:32:50.844211] I...
2010 Dec 15
0
Errors with Gluster 3.1.2qa2
...nodes with: Type: Distributed-Replicate Status: Started Number of Bricks: 2 x 2 = 4 It solves my problems with latency and disk errors (like input/output errors or file descriptors in a bad state), but now I have many, many errors like this: [2010-12-15 12:01:12.711136] I [dht-common.c:369:dht_revalidate_cbk] dns-dht: subvolume dns-replicate-0 returned -1 (Invalid argument) [2010-12-15 12:01:21.228062] I [dht-common.c:369:dht_revalidate_cbk] dns-dht: subvolume dns-replicate-1 returned -1 (Invalid argument) [2010-12-15 12:01:28.677286] I [dht-common.c:369:dht_revalidate_cbk] dns-dht: subvolume dns-rep...
2013 May 02
0
GlusterFS mount does not list directory content until parent directory is listed
...4189b3c899 -rw-r--r-- 1 libvirt-qemu kvm 75161927680 Mar 25 13:46 ff8ad6c675c84df6f70f9bd0ac04fb4189b3c899_70 localadmin@ldgpsua00000038:~$ Before the parent directory relisting, the log file for the mount point was full of: -------------------------- [2013-05-02 12:20:01.376593] I [dht-common.c:596:dht_revalidate_cbk] 3-glustervmstore-dht: mismatching layouts for / [2013-05-02 12:20:51.975861] I [dht-layout.c:593:dht_layout_normalize] 3-glustervmstore-dht: found anomalies in /_base. holes=0 overlaps=2 [2013-05-02 12:20:52.077131] I [dht-layout.c:593:dht_layout_normalize] 3-glustervmstore-dht: found anomalies i...
2012 Mar 09
1
dht log entries in fuse client after successful expansion/rebalance
Hi, I'm using Gluster 3.2.5. After expanding a 2x2 Distributed-Replicate volume to 3x2 and performing a full rebalance, fuse clients log the following messages for every directory access: [2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk] 1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench [2012-03-08 10:53:56.953065] I [dht-layout.c:682:dht_layout_dir_mismatch] 1-bfd-dht: subvol: bfd-replicate-2; inode layout - 0 - 0; disk layout - 2863311530 - 4294967295 [2012-03-08 10:53:56.953080] I [dht-common.c:524:dht_...
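For context on the two ranges in these messages: DHT hashes names into the 32-bit space 0 - 4294967295 and assigns each distribute subvolume a contiguous slice; on revalidate the client compares its cached ("inode") layout against the on-disk ("disk") layout, and an all-zero inode layout suggests the client simply had no cached range yet. As a quick illustration only (plain bash arithmetic, not GlusterFS code), the logged disk layout is exactly the last of three equal slices, which is what you would expect after growing to three distribute legs:

    $ echo $(( 0xFFFFFFFF / 3 ))           # width of one of three equal slices
    1431655765
    $ echo $(( 2 * (0xFFFFFFFF / 3) ))     # start of the third slice
    2863311530
    $ echo $(( 0xFFFFFFFF ))               # end of the hash space
    4294967295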
2012 Jun 11
1
"mismatching layouts" flooding in the logs
...at around 100kB of logs per second, on all 10 gluster servers: [2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637 [2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts for /gluster/pub/one/content/2012/2/23 [2012-06-11 15:08:15.733110] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 572662304 - 608453697; disk layout - 536870910 - 572662303 [2012-06-11 15:08:15.733161] I [dht-comm...
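When chasing messages like these, the on-disk half of the comparison can be read straight off a brick. A minimal check, assuming a hypothetical brick path (run on the server hosting the brick):

    # dump the DHT layout xattr for the directory named in the log
    $ getfattr -n trusted.glusterfs.dht -e hex /export/brick41/gluster/pub/one/content/2012/2/23

The hex value encodes the hash range that brick claims for the directory; comparing it across bricks shows the holes and overlaps that the mismatch messages complain about.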
2013 Sep 19
0
dht_layout_dir_mismatch
...m (server package) /etc/fstab ------ /dev/sdb1 /exports/gluster ext4 defaults,noatime,acl,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 0 localhost:/USER-HOME /glusterfs glusterfs defaults,noauto,nobootwait 0 0 glusterfs.log ------ [2013-09-18 21:48:54.686845] I [dht-common.c:623:dht_revalidate_cbk] 0-USER-HOME-dht: mismatching layouts for /users/rpowell1/benchmark [2013-09-18 21:48:54.687492] I [dht-layout.c:630:dht_layout_normalize] 0-USER-HOME-dht: found anomalies in /users/rpowell1/benchmark. holes=1 overlaps=1 [2013-09-18 22:04:32.671426] W [socket.c:514:__socket_rwv] 0-glusterfs: readv...
2017 Dec 21
1
seeding my georeplication
...ontaining the gfid/file pairs needed to sync to the slave before enabling georeplication. Unfortunately, the gsync-sync-gfid program isn't working. It reports failure for all the files, and I see the following in the fuse log: [2017-12-21 16:36:37.171846] D [MSGID: 0] [dht-common.c:997:dht_revalidate_cbk] 0-video-backup-dht: revalidate lookup of /path returned with op_ret 0 [Invalid argument] [2017-12-21 16:36:37.172352] D [fuse-helpers.c:650:fuse_ignore_xattr_set] 0-glusterfs-fuse: allowing setxattr: key [glusterfs.gfid.heal], client pid [0] [2017-12-21 16:36:37.172457] D [logging.c:1953:_gf_msg_...
2011 Oct 18
2
gluster rebalance taking three months
Hi guys, we have had a rebalance running on eight bricks since July, and this is what the status looks like right now: ===Tue Oct 18 13:45:01 CST 2011 ==== rebalance step 1: layout fix in progress: fixed layout 223623 There are roughly 8T of photos in the storage, so how long should this rebalance take? What does the number (in this case 223623) represent? Our gluster information: Repository
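For progress questions like this one, the CLI can report what the rebalance has done so far; a minimal check, with a hypothetical volume name:

    $ gluster volume rebalance photos status

The "fixed layout N" counter appears to grow as the layout-fix crawl proceeds, so sampling it over time gives at least a crude rate estimate.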
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
...of errors in my client and NFS logs following a recent volume expansion. [2012-02-16 22:59:42.504907] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol: atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511 [2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk] 0-atmos-dht: mismatching layouts for /users/rle/TRACKTEMP/TRACKS [2012-02-16 22:59:42.534521] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol: atmos-replicate-1; inode layout - 0 - 0; disk layout - 1227133512 - 1533916889 I have expanded the volume successfully many times in...
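A remedy often suggested on this list for stale directory layouts after adding bricks is to rewrite them explicitly rather than waiting for per-directory self-heal; a sketch using the volume name from the logs:

    # recompute and write fresh layout ranges for every directory (no data moved)
    $ gluster volume rebalance atmos fix-layout start

fix-layout only rewrites the trusted.glusterfs.dht ranges; a plain "gluster volume rebalance atmos start" additionally migrates files to their new hashed locations.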
2013 Feb 18
1
Directory metadata inconsistencies and missing output ("mismatched layout" and "no dentry for inode" error)
...:55:31.641724] W [fuse-bridge.c:561:fuse_getattr] 0-glusterfs-fuse: 2298079: GETATTR 140360215569520 (fuse_loc_fill() failed) ... Sometimes on these events, and sometimes not, there will also be logs (on both normal and abnormal nodes) of the form: [2013-02-18 03:35:28.679681] I [dht-common.c:525:dht_revalidate_cbk] 0-volume1-dht: mismatching layouts for /inSample/pred/20110831 I understand from reading the mailing list that the dentry errors and the mismatched layout errors are both non-fatal warnings and that the metadata will become internally consistent regardless. But these errors only happen on ti...
2017 Jun 23
2
seeding my georeplication
I have a ~600TB distributed gluster volume that I want to start using geo replication on. The current volume is on 6 100TB bricks on 2 servers. My plan is: 1) copy each of the bricks to new arrays on the servers locally 2) move the new arrays to the new servers 3) create the volume on the new servers using the arrays 4) fix the layout on the new volume 5) start georeplication (which should be
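A sketch of steps 4 and 5 with hypothetical volume and host names, using the geo-replication CLI syntax of the 3.x series:

    # step 4: rewrite directory layouts on the rebuilt volume
    $ gluster volume rebalance newvol fix-layout start
    # step 5: create and start the geo-replication session
    $ gluster volume geo-replication newvol slavehost::slavevol create push-pem
    $ gluster volume geo-replication newvol slavehost::slavevol start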
2013 Feb 26
0
Replicated Volume Crashed
...02-25 20:00:32.715869] I [dht-layout.c:593:dht_layout_normalize] 0-adata-dht: found anomalies in /files. holes=2 overlaps=0 [2013-02-25 20:00:32.715886] W [dht-selfheal.c:882:dht_selfheal_directory] 0-adata-dht: 2 subvolumes have unrecoverable errors [2013-02-25 20:00:47.566817] I [dht-common.c:543:dht_revalidate_cbk] 0-adata-dht: subvolume adata-replicate-9 for /files returned -1 (Input/output error) What could be the reason for this? I've seen that the folder which contains the 3M files was locked and I am restructuring the directory layout so there is a directory tree with approximately 100-500 files wi...
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
..._set_split_brain] replicate-1: invalid argument: inode [2011-01-13 08:33:49.501919] I [afr-self-heal-common.c:1526:afr_self_heal_completion_cbk] replicate-1: background data self-heal completed on /streaming/set3/work/reduce.12.1294902171.dplog.temp [2011-01-13 08:33:49.531838] I [dht-common.c:402:dht_revalidate_cbk] distribute-1: linkfile found in revalidate for /streaming/set3/work/mapped/dpabort/multiple_reduce.flash_pl.2.1294901929.1.172.26.98.59.2.map.10 [2011-01-13 08:33:50.396055] W [fuse-bridge.c:2765:fuse_setlk_cbk] glusterfs-fuse: 2230985: ERR => -1 (Invalid argument) pending frames: frame : type(...
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
...ill hit the same issue at clients on the nodes that also run the servers. We've got two clients connected to one of the volumes that has been working fine all the time. These are the debug logs from one of the mounts as the client gets disconnected: The message "D [MSGID: 0] [dht-common.c:979:dht_revalidate_cbk] 0-mule-dht: revalidate lookup of / returned with op_ret 0 [Structure needs cleaning]" repeated 26 times between [2017-05-31 13:48:51.680757] and [2017-05-31 13:50:46.325368] /DAEMON/DEBUG [2017-05-31T15:50:50.589272+02:00] [] [] [logging.c:1830:gf_log_flush_timeout_cbk] 0-logging-infra: Log t...
2023 Oct 16
0
Stale file content
...k] 0-apptivegrid-client-0: remote operation failed. [{path=/62/03/8c/62038cd116e9a6857794aa14/settings}, {gfid=1d38410a-1c14-4346-a7e5-68856ed310e9}, {errno=2}, {error=No such file or directory}] > > and this > > [2023-08-17 21:57:31.902676 +0000] I [MSGID: 109018] [dht-common.c:1838:dht_revalidate_cbk] 0-apptivegrid-dht: Mismatching layouts for /62/03/8c/62038cd116e9a6857794aa14, gfid = f7f8eef0-bc19-4936-8c0c-fd0a497c5e69 > > This morning I found another occurrence of a stale file which I wanted to diagnose but a couple of minutes later it seemed to have healed itself. In order to diagno...
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
...ill hit the same issue at clients on the nodes that also run the servers. We've got two clients connected to one of the volumes that has been working fine all the time. These are the debug logs from one of the mounts as the client gets disconnected: The message "D [MSGID: 0] [dht-common.c:979:dht_revalidate_cbk] 0-mule-dht: revalidate lookup of / returned with op_ret 0 [Structure needs cleaning]" repeated 26 times between [2017-05-31 13:48:51.680757] and [2017-05-31 13:50:46.325368] /DAEMON/DEBUG [2017-05-31T15:50:50.589272+02:00] [] [] [logging.c:1830:gf_log_flush_timeout_cbk] 0-logging-infra: Log t...
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
...e issue at clients on the nodes that also run the servers. We've got two clients connected to one of the volumes that has been working fine all the time. > > These are the debug logs from one of the mounts as the client gets disconnected: > The message "D [MSGID: 0] [dht-common.c:979:dht_revalidate_cbk] 0-mule-dht: revalidate lookup of / returned with op_ret 0 [Structure needs cleaning]" repeated 26 times between [2017-05-31 13:48:51.680757] and [2017-05-31 13:50:46.325368] > /DAEMON/DEBUG [2017-05-31T15:50:50.589272+02:00] [] [] [logging.c:1830:gf_log_flush_timeout_cbk] 0-logging-infra:...
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
...9ebda614] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit-0x29300)[0x3fff9ebd69b0] (--> /usr/lib64/glusterfs/3.10.5/xlator/protocol/client.so(+0x182e0)[0x3fff939182e0] ))))) 0-: 10.50.80.104:49152: ping timer event already removed [2017-09-20 13:34:23.346070] D [MSGID: 0] [dht-common.c:1002:dht_revalidate_cbk] 0-gv0-dht: revalidate lookup of / returned with op_ret 0 [Structure needs cleaning] [2017-09-20 13:34:23.347612] D [MSGID: 0] [dht-common.c:2699:dht_lookup] 0-gv0-dht: Calling fresh lookup for /tempdir3 on gv0-replicate-0 [2017-09-20 13:34:23.348013] D [MSGID: 0] [client-rpc-fops.c:2936:client3...
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDR and how it is used). Just glance at the logs of the client process where you saw the errors, which could give some hints. If you don't understand the logs, share them and we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
2017 Sep 19
3
"Input/output error" on mkdir for PPC64 based client
I recently compiled the 3.10-5 client from source on a few PPC64 systems running RHEL 7.3. They are mounting a Gluster volume which is hosted on more traditional x86 servers. Everything seems to be working properly except for creating new directories from the PPC64 clients. The mkdir command gives an "Input/output error" and for the first few minutes the new directory is