search for: dht

Displaying 20 results from an estimated 295 matches for "dht".

2017 Oct 19
3
gluster tiering errors
...rrors that I see in the log files. OS: CentOS 7.3.1611 Gluster version: 3.10.5 Samba version: 4.6.2 I see the following (scrubbed): Node 1 /var/log/glusterfs/tier/<vol>/tierd.log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for <file>(gfid:edaf97e1-02e0-4838-9d26-71ea3aab22fb) [2017-10-19 17:52:07.525110] E [MSGID: 109011] [dht-common.c:7188:dht_create] 0-<vol>-hot-dht: no subvolume in layout for path=/path/to/<file> [2017-10-19 17:52:07.526088] E [MSGID: 109023] [dht-rebalance.c:7...
2017 Oct 22
0
gluster tiering errors
...ter volume get <vol> cluster.watermark-hi # gluster volume get <vol> cluster.watermark-low What is the size of the file that failed to migrate as per the following tierd log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for <file>(gfid:edaf97e1-02e0-4838-9d26-71ea3aab22fb) If possible, a *gluster volume info* would also help, instead of going to and fro with questions. -- Milind On Fri, Oct 20, 2017 at 12:42 AM, Herb Burnswell < herbert.burnswell at gmail.com> wrote: > All, &...
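For reference, the information requested above can be gathered in one pass with something like the following (a sketch only, assuming a tiered volume referred to as <vol> and a root shell on one of the gluster servers; the tier status subcommand is the usual one for the 3.10 series and is not taken from this thread):

  # watermark settings that control promotion/demotion
  gluster volume get <vol> cluster.watermark-hi
  gluster volume get <vol> cluster.watermark-low
  # overall layout of the hot and cold tiers
  gluster volume info <vol>
  # promotion/demotion counters and tierd state per node
  gluster volume tier <vol> status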
2017 Oct 22
1
gluster tiering errors
...termark-hi > > # gluster volume get <vol> cluster.watermark-low > > What is the size of the file that failed to migrate as per the following > tierd log: > > [2017-10-19 17:52:07.519614] I [MSGID: 109038] > [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion > failed for <file>(gfid:edaf97e1-02e0-4838-9d26-71ea3aab22fb) > > If possible, a *gluster volume info* would also help, instead of going to > and fro with questions. > > -- > Milind > > > > On Fri, Oct 20, 2017 at 12:42 AM, Herb Burnswell < >...
2017 Oct 24
2
gluster tiering errors
...----- cluster.watermark-low 75 >> What is the size of the file that failed to migrate as per the following tierd log: >> [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for <file>(gfid:edaf97e1-02e0-4838-9d26-71ea3aab22fb) The file was a word doc @ 29K in size. >>If possible, a *gluster volume info* would also help, instead of going to and fro with questions. # gluster vol info Volume Name: ctdb Type: Replicate Volume ID: f679c476...
2017 Oct 27
0
gluster tiering errors
...> cluster.watermark-low 75 > > > > >> What is the size of the file that failed to migrate as per the > following tierd log: > > >> [2017-10-19 17:52:07.519614] I [MSGID: 109038] > [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion > failed for <file>(gfid:edaf97e1-02e0-4838-9d26-71ea3aab22fb) > > The file was a word doc @ 29K in size. > > >>If possible, a *gluster volume info* would also help, instead of going > to and fro with questions. > > # gluster vol info > > Volume...
2011 Feb 24
1
Experiencing errors after adding new nodes
Hi, I had a 2 node distributed cluster running on 3.1.1 and I added 2 more nodes. I then ran a rebalance on the cluster. Now I am getting permission denied errors and I see the following in the client logs: [2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument) [2011-02-24 09:59:11.851656] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument) [root at qe-loader1 glusterfs]# tail -100 mnt-qe-filer01.lo...
2018 May 23
0
Rebalance state stuck or corrupted
...' Staging failed on gfs-vm010. Error: rebalance session is in progress for the volume 'gv0' user at gfs-vm000:~$ sudo gluster volume rebalance gv0 stop volume rebalance: gv0: failed: Rebalance not started. tail log from gv0-rebalance.log [2018-05-23 17:32:55.262168] I [MSGID: 109029] [dht-rebalance.c:4260:gf_defrag_stop] 0-: Received stop command on rebalance [2018-05-23 17:32:55.262221] I [MSGID: 109028] [dht-rebalance.c:4079:gf_defrag_status_get] 0-glusterfs: Rebalance is stopped. Time taken is 749380.00 secs [2018-05-23 17:32:55.262234] I [MSGID: 109028] [dht-rebalance.c:4083:gf_...
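A hedged sketch of how a stale session like this is often tracked down (assuming the volume is gv0 as in the log above, systemd-based servers, and that restarting glusterd on the offending node is acceptable; this is a common workaround on the list, not a fix confirmed in this thread):

  # ask the cluster for its view of the rebalance session
  gluster volume rebalance gv0 status
  # on the node that still claims a session is in progress (gfs-vm010 in the
  # staging error above), restart the management daemon to clear the state
  systemctl restart glusterd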
2017 Oct 17
2
Distribute rebalance issues
...(Connection reset by peer) [2017-10-12 23:00:55.099709] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-video-client-4: disconnected from video-client-4. Client process will keep trying to connect to glusterd until brick's port is available [2017-10-12 23:00:55.099741] W [MSGID: 109073] [dht-common.c:8839:dht_notify] 0-video-dht: Received CHILD_DOWN. Exiting [2017-10-12 23:00:55.099752] I [MSGID: 109029] [dht-rebalance.c:4195:gf_defrag_stop] 0-: Received stop command on rebalance [2017-10-12 23:01:05.478462] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-video-client-4: changing port to 49164...
2013 May 02
0
GlusterFS mount does not list directory content until parent directory is listed
...df6f70f9bd0ac04fb4189b3c899 -rw-r--r-- 1 libvirt-qemu kvm 75161927680 Mar 25 13:46 ff8ad6c675c84df6f70f9bd0ac04fb4189b3c899_70 localadmin at ldgpsua00000038:~$ Before the parent directory relisting, the log file for the mount point was full of: -------------------------- [2013-05-02 12:20:01.376593] I [dht-common.c:596:dht_revalidate_cbk] 3-glustervmstore-dht: mismatching layouts for / [2013-05-02 12:20:51.975861] I [dht-layout.c:593:dht_layout_normalize] 3-glustervmstore-dht: found anomalies in /_base. holes=0 overlaps=2 [2013-05-02 12:20:52.077131] I [dht-layout.c:593:dht_layout_normalize] 3-glust...
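Since listing the parent apparently repairs the view (DHT self-heals a directory's layout on lookup), one hedged workaround is to force lookups across the mount, or to rewrite the layouts explicitly; the mount point below is hypothetical and the fix-layout option assumes a gluster version that supports it:

  # force a lookup (and hence a layout self-heal) on every directory
  find /mnt/glustervmstore -type d -exec stat {} + > /dev/null
  # or rewrite directory layouts across all bricks without moving data
  gluster volume rebalance glustervmstore fix-layout start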
2017 Nov 09
2
Error logged in fuse-mount log file
...sers <gluster-users at gluster.org<mailto:gluster-users at gluster.org>> Hi, I am using glusterfs 3.10.1 and I am seeing the below message in the fuse-mount log file. What does this error mean? Should I worry about this, and how do I resolve it? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb [2017-11-07 11:59:17.218935] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-gluste...
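One way to see which subvolume DHT is unhappy with is to compare the directory's layout xattr across bricks (a sketch only; /data/brick/glustervol is a hypothetical brick path, and the hex ranges need to be read together with the brick list from gluster volume info):

  # run on every server, once per brick backing the volume
  getfattr -n trusted.glusterfs.dht -e hex /data/brick/glustervol/fol1/fol2/fol3/fol4/fol5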
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
...0(+0x1a614)[0x3fff9ebda614] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit-0x29300)[0x3fff9ebd69b0] (--> /usr/lib64/glusterfs/3.10.5/xlator/protocol/client.so(+0x182e0)[0x3fff939182e0] ))))) 0-: 10.50.80.104:49152: ping timer event already removed [2017-09-20 13:34:23.346070] D [MSGID: 0] [dht-common.c:1002:dht_revalidate_cbk] 0-gv0-dht: revalidate lookup of / returned with op_ret 0 [Structure needs cleaning] [2017-09-20 13:34:23.347612] D [MSGID: 0] [dht-common.c:2699:dht_lookup] 0-gv0-dht: Calling fresh lookup for /tempdir3 on gv0-replicate-0 [2017-09-20 13:34:23.348013] D [MSGID: 0]...
2017 Jun 05
0
Rebalance failing on fix-layout
...ssue, and am bringing my bricks back into the volume. I am running gluster 3.7.6 and am running into the following issue: When I add the brick and rebalance, the operation fails after a couple minutes. The errors I find in the rebalance log is this: [2017-06-05 13:38:40.441671] E [MSGID: 109010] [dht-rebalance.c:2259:gf_defrag_get_entry] 0-hpcscratch-dht: /LV_Fitting/code/C gfid not present [2017-06-05 13:38:40.450341] E [MSGID: 109010] [dht-rebalance.c:2259:gf_defrag_get_entry] 0-hpcscratch-dht: /LV_Fitting/code/C/NoCov_NoImm gfid not present [2017-06-05 13:38:40.450380] E [MSGID: 109010] [dht...
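The "gfid not present" messages suggest entries whose trusted.gfid xattr is missing or unreadable on some brick. A small check sketch with hypothetical brick paths (substitute the real export directories) that prints the gfid of one affected directory on each brick:

  for b in /export/brick1 /export/brick2; do
      echo "== $b =="
      getfattr -n trusted.gfid -e hex "$b/LV_Fitting/code/C" 2>&1
  done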
2017 Nov 13
2
Error logged in fuse-mount log file
...>> Hi, >> >> I am using glusterfs 3.10.1 and i am seeing below msg in fuse-mount log >> file. >> >> what does this error mean? should i worry about this and how do i resolve >> this? >> >> [2017-11-07 11:59:17.218973] W [MSGID: 109005] >> [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory >> selfheal failed: 1 subvolumes have unrecoverable errors. path = >> /fol1/fol2/fol3/fol4/fol5, gfid =3f856ab3-f538-43ee-b408-53dd3da617fb >> [2017-11-07 11:59:17.218935] I [MSGID: 109063] >> [dh...
2017 Oct 17
0
Distribute rebalance issues
.../export-md0-brick.log.1 2 > ./export-md1-brick.log.1 2 > ./export-md2-brick.log.1 181 > ./export-md3-brick.log.1 2 > > > Any clues? What could be causing this? There is nothing in the log to indicate the cause. > > The rebalance process requires that all DHT child subvols be up during the operation, as it needs to reapply the directory layouts. As this is a pure distribute volume, even a single brick getting disconnected is enough to cause the process to stop. You would need to figure out why that brick is di...
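Based on that explanation, a minimal recovery sketch once the disconnect is understood (<vol> is a placeholder; "start force" only starts bricks that are down and leaves running ones alone):

  # confirm every brick process is online
  gluster volume status <vol>
  # restart any brick processes that are down
  gluster volume start <vol> force
  # with all bricks up, resume the rebalance
  gluster volume rebalance <vol> start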
2017 Nov 14
2
Error logged in fuse-mount log file
I remember we have fixed 2 issues where this kind of error message was logged and we were also seeing issues on the mount. In one of the cases the problem was in dht. Unfortunately, I don't remember the BZs for those issues. As glusterfs 3.10.1 is an old version, I would request that you please upgrade to the latest one. I am sure it would have the fix. ---- Ashish ----- Original Message ----- From: "Nithya Balachandran" <nbalacha a...
2017 Nov 10
0
Error logged in fuse-mount log file
...ster-users at gluster.org> > > > Hi, > > I am using glusterfs 3.10.1 and i am seeing below msg in fuse-mount log > file. > > what does this error mean? should i worry about this and how do i resolve > this? > > [2017-11-07 11:59:17.218973] W [MSGID: 109005] > [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory > selfheal failed: 1 subvolumes have unrecoverable errors. path = > /fol1/fol2/fol3/fol4/fol5, gfid =3f856ab3-f538-43ee-b408-53dd3da617fb > [2017-11-07 11:59:17.218935] I [MSGID: 109063] > [dht-layout.c:713:dht_l...
2011 Feb 16
1
nfs problems
...ter, clients nfs mount the volume from any node in a round-robin. It appears that one node has gone bad: the clients mounting that node can't see the files that the others can see, ls -l gives rubbish for the metadata, and I get lots of these lines in the nfs.log: [2011-02-16 15:33:32.538756] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/people.1. holes=2 overlaps=0 [2011-02-16 15:33:32.540759] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/people.nano. holes=2 overlaps=0 [2011-02-16 15:33:32.543682] I [dht-...
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
...data on the issue. The issue tends to happen when the shards are created. The easiest time to reproduce this is during an initial VM disk format. This is a log from a test VM that was launched, and then partitioned and formatted with LVM / XFS: [2018-04-03 02:05:00.838440] W [MSGID: 109048] [dht-common.c:9732:dht_rmdir_cached_lookup_cbk] 0-ovirt-350-zone1-dht: /489c6fb7-fe61-4407-8160-35c0aac40c85/images/_remove_me_9a0660e1-bd86-47ea-8e09-865c14f11f26/e2645bd1-a7f3-4cbd-9036-3d3cbc7204cd.meta found on cached subvol ovirt-350-zone1-replicate-5 [2018-04-03 02:07:57.967489] I [MSGID: 109070...
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDR and how it is used). Take a quick look at the logs of the client process where you saw the errors; they could give some hints. If you don't understand the logs, share them and we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
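If the default client log does not give enough hints, one hedged option (assuming the volume is gv0 as in the earlier excerpt) is to raise the client-side log level while reproducing the failing mkdir, then set it back:

  # more verbose FUSE/client logging while reproducing the problem
  gluster volume set gv0 diagnostics.client-log-level DEBUG
  # ... reproduce the mkdir failure from the PPC64 client ...
  gluster volume set gv0 diagnostics.client-log-level INFO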
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
...issue tends to happen when the shards are created. The easiest time to > reproduce this is during an initial VM disk format. This is a log from a > test VM that was launched, and then partitioned and formatted with LVM / > XFS: > > [2018-04-03 02:05:00.838440] W [MSGID: 109048] > [dht-common.c:9732:dht_rmdir_cached_lookup_cbk] 0-ovirt-350-zone1-dht: > /489c6fb7-fe61-4407-8160-35c0aac40c85/images/_remove_ > me_9a0660e1-bd86-47ea-8e09-865c14f11f26/e2645bd1-a7f3-4cbd-9036-3d3cbc7204cd.meta > found on cached subvol ovirt-350-zone1-replicate-5 > [2018-04-03 02:07:57.96748...