search for: dht_lookup_cbk

Displaying 12 results from an estimated 12 matches for "dht_lookup_cbk".

2017 Dec 21
1
seeding my georeplication
...hanged for /path/file.txt
[2017-12-21 16:36:37.173212] D [MSGID: 0] [client-rpc-fops.c:2941:client3_3_lookup_cbk] 0-stack-trace: stack-address: 0x7fe39ebdc42c, video-backup-client-4 returned -1 error: Stale file handle [Stale file handle]
[2017-12-21 16:36:37.173233] D [MSGID: 0] [dht-common.c:2279:dht_lookup_cbk] 0-video-backup-dht: fresh_lookup returned for /path/file.txt with op_ret -1 [Stale file handle]
[2017-12-21 16:36:37.173250] D [MSGID: 0] [dht-common.c:2359:dht_lookup_cbk] 0-video-backup-dht: Lookup of /path/file.txt for subvolume video-backup-client-4 failed [Stale file handle]
[2017-12-21 16:36...
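In GlusterFS, a "Stale file handle" on a fresh lookup commonly points to a gfid mismatch for the file, for example between replica copies or between a brick and the client's cached inode. As a rough check (the brick path here is hypothetical), the gfid xattr can be compared directly on each brick:

  # Compare the file's gfid xattr across bricks; copies that disagree
  # are a common source of ESTALE on lookup (path is an example only).
  getfattr -n trusted.gfid -e hex /bricks/video-backup-4/path/file.txt

Running the same check on the other bricks of the volume shows whether the gfids agree.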
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
...ch file or directory]
[2017-09-20 13:34:23.348132] D [MSGID: 0] [afr-common.c:2264:afr_lookup_done] 0-stack-trace: stack-address: 0x3fff88001080, gv0-replicate-0 returned -1 error: No such file or directory [No such file or directory]
[2017-09-20 13:34:23.348166] D [MSGID: 0] [dht-common.c:2284:dht_lookup_cbk] 0-gv0-dht: fresh_lookup returned for /tempdir3 with op_ret -1 [No such file or directory]
[2017-09-20 13:34:23.348195] D [MSGID: 0] [dht-common.c:2297:dht_lookup_cbk] 0-gv0-dht: Entry /tempdir3 missing on subvol gv0-replicate-0
[2017-09-20 13:34:23.348220] D [MSGID: 0] [dht-common.c:2068:dht_l...
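The "Entry /tempdir3 missing on subvol" message means the fresh lookup hashed to a subvolume that has no copy of the entry, after which DHT falls back to looking it up on the other subvolumes. A quick way to see which bricks actually hold the directory (brick paths below are made up for illustration):

  # Run on each server; shows which bricks backing gv0 have the directory.
  ls -ld /bricks/gv0-brick*/tempdir3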
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDR and how it is used). Just glance at the logs of the client process where you saw the errors; they could give some hints. If you don't understand the logs, share them and we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
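For context: PPC64 is big-endian and x86_64 is little-endian, and XDR is defined as big-endian on the wire, so a missed byte-order conversion in the RPC path only surfaces in mixed-architecture setups like this one. A simple way to confirm that the two machines disagree on native byte order:

  # Run on both the PPC64 client and an x86 server and compare the output.
  lscpu | grep 'Byte Order'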
2018 Jan 15
1
"linkfile not having link" occurrs sometimes after renaming
...reated by u1, and they are read-only for u2. Of course u2 can read these files. Later these files are renamed by u1. Then I switch to the user u2. I find that u2 can't list or access the renamed files. I see these errors in the log:
[2018-01-15 17:35:05.133711] I [MSGID: 109045] [dht-common.c:2393:dht_lookup_cbk] 25-data-dht: linkfile not having link subvol for /txt/file1.txt.bak
[2018-01-15 17:35:05.139261] W [MSGID: 114031] [client-rpc-fops.c:628:client3_3_unlink_cbk] 25-data-client-70: remote operation failed [Permission denied]
[2018-01-15 17:35:05.139276] W [MSGID: 114031] [client-rpc-fops.c:628:clien...
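Background on the first message: a DHT linkfile is a zero-byte, sticky-bit file on the hashed subvolume whose trusted.glusterfs.dht.linkto xattr names the subvolume holding the real data; "linkfile not having link subvol" means that xattr is missing or unreadable. As a hedged check (the brick path is hypothetical), the xattr can be inspected on the brick where the linkfile sits:

  # On the brick hosting the linkfile, dump its linkto xattr; a missing
  # or empty value matches the "not having link subvol" error above.
  getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/data-brick/txt/file1.txt.bak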
2017 Jun 23
2
seeding my georeplication
I have a ~600TB distributed gluster volume that I want to start using geo-replication on. The current volume is on 6 100TB bricks on 2 servers. My plan is:
1) copy each of the bricks to new arrays on the servers locally
2) move the new arrays to the new servers
3) create the volume on the new servers using the arrays
4) fix the layout on the new volume
5) start geo-replication (which should be
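For steps 4 and 5, the commands would look roughly like this (a sketch only; volume, host, and session names are placeholders, and geo-replication setup has version-specific prerequisites):

  # Step 4: rewrite directory layouts on the rebuilt volume.
  gluster volume rebalance newvol fix-layout start
  # Step 5: create and start the geo-replication session.
  gluster volume geo-replication newvol slavehost::slavevol create push-pem
  gluster volume geo-replication newvol slavehost::slavevol start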
2017 Sep 19
3
"Input/output error" on mkdir for PPC64 based client
I recently compiled the 3.10-5 client from source on a few PPC64 systems running RHEL 7.3. They are mounting a Gluster volume which is hosted on more traditional x86 servers. Everything seems to be working properly except for creating new directories from the PPC64 clients. The mkdir command gives an "Input/output error" and for the first few minutes the new directory is
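The DEBUG-level client logs quoted elsewhere in this thread can be captured by mounting with a raised log level and reproducing the failure (a sketch; server, volume, and mount point are placeholders):

  # Mount with client-side debug logging, then rerun the failing mkdir.
  mount -t glusterfs -o log-level=DEBUG gluster-server:/gv0 /mnt/gv0
  mkdir /mnt/gv0/tempdir3   # returns "Input/output error" in the reported case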
2009 Nov 19
1
Strange server locks isuess with 2.0.7 - updating
...ffe400]
/lib/tls/libc.so.6(calloc+0x8e)[0xb7dcfffe]
/usr/local/lib/glusterfs/2.0.7/xlator/cluster/replicate.so(afr_lookup+0x32)[0xb74dfd12]
/usr/local/lib/glusterfs/2.0.7/xlator/cluster/distribute.so(dht_lookup_directory+0x10a)[0xb74ce21a]
/usr/local/lib/glusterfs/2.0.7/xlator/cluster/distribute.so(dht_lookup_cbk+0x10e)[0xb74d3d9e]
/usr/local/lib/glusterfs/2.0.7/xlator/cluster/replicate.so(afr_lookup_cbk+0x47b)[0xb74e0a7b]
/usr/local/lib/glusterfs/2.0.7/xlator/protocol/client.so(client_lookup_cbk+0x334)[0xb751ca84]
/usr/local/lib/glusterfs/2.0.7/xlator/protocol/client.so(protocol_client_interpret+0x1ef)[0xb...
2018 Jan 25
2
parallel-readdir is not recognized in GlusterFS 3.12.4
...eads.c:358:iot_schedule] 0-homes-io-threads: FSTAT scheduled as fast fop
[2018-01-24 08:55:19.138958] D [MSGID: 0] [afr-read-txn.c:220:afr_read_txn] 0-homes-replicate-1: e6ee0427-b17d-4464-a738-e8ea70d77d95: generation now vs cached: 2, 2
[2018-01-24 08:55:19.139187] D [MSGID: 0] [dht-common.c:2294:dht_lookup_cbk] 0-homes-dht: fresh_lookup returned for /vchebii/revtrans/Hircus-XM_018067032.1.pep.align.fas with op_ret 0
[2018-01-24 08:55:19.139200] D [MSGID: 0] [dht-layout.c:873:dht_layout_preset] 0-homes-dht: file = 00000000-0000-0000-0000-000000000000, subvol = homes-readdir-ahead-1
[2018-01-24 08:55:19.13...
2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
...0-homes-io-threads: FSTAT scheduled as fast fop
[2018-01-24 08:55:19.138958] D [MSGID: 0] [afr-read-txn.c:220:afr_read_txn] 0-homes-replicate-1: e6ee0427-b17d-4464-a738-e8ea70d77d95: generation now vs cached: 2, 2
[2018-01-24 08:55:19.139187] D [MSGID: 0] [dht-common.c:2294:dht_lookup_cbk] 0-homes-dht: fresh_lookup returned for /vchebii/revtrans/Hircus-XM_018067032.1.pep.align.fas with op_ret 0
[2018-01-24 08:55:19.139200] D [MSGID: 0] [dht-layout.c:873:dht_layout_preset] 0-homes-dht: file = 00000000-0000-0000-0000-000000000000, subvol = homes-readdir-ahead-...
2018 Jan 26
1
parallel-readdir is not recognized in GlusterFS 3.12.4
...duled as fast fop
[2018-01-24 08:55:19.138958] D [MSGID: 0] [afr-read-txn.c:220:afr_read_txn] 0-homes-replicate-1: e6ee0427-b17d-4464-a738-e8ea70d77d95: generation now vs cached: 2, 2
[2018-01-24 08:55:19.139187] D [MSGID: 0] [dht-common.c:2294:dht_lookup_cbk] 0-homes-dht: fresh_lookup returned for /vchebii/revtrans/Hircus-XM_018067032.1.pep.align.fas with op_ret 0
[2018-01-24 08:55:19.139200] D [MSGID: 0] [dht-layout.c:873:dht_layout_preset] 0-homes-dht: file = 00000000-0000-0000-0000-000000000000, subv...
2018 Jan 24
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Adding Poornima to take a look at it and comment. On Tue, Jan 23, 2018 at 10:39 PM, Alan Orth <alan.orth at gmail.com> wrote: > Hello, I saw that parallel-readdir was an experimental feature in GlusterFS version 3.10.0, became stable in version 3.11.0, and is now recommended for small file workloads in the Red Hat Gluster Storage Server documentation[2].
2018 Jan 23
6
parallel-readdir is not recognized in GlusterFS 3.12.4
Hello, I saw that parallel-readdir was an experimental feature in GlusterFS version 3.10.0, became stable in version 3.11.0, and is now recommended for small-file workloads in the Red Hat Gluster Storage Server documentation[2]. I've successfully enabled this on one of my volumes, but I notice the following in the client mount log:
[2018-01-23 10:24:24.048055] W [MSGID: 101174]
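For reference, the option is enabled and verified per volume roughly as follows (the volume name is a placeholder):

  # Enable parallel-readdir and confirm the option was accepted.
  gluster volume set homes performance.parallel-readdir on
  gluster volume get homes performance.parallel-readdir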