Displaying 20 results from an estimated 2000 matches similar to: "dht log entries in fuse client after successful expansion/rebalance"
2011 Feb 24
1
Experiencing errors after adding new nodes
Hi,
I had a 2 node distributed cluster running on 3.1.1 and I added 2 more nodes. I then ran a rebalance on the cluster.
Now I am getting permission denied errors and I see the following in the client logs:
[2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument)
[2011-02-24 09:59:11.851656] I
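A minimal sketch of the usual first checks in this situation: whether the rebalance actually completed everywhere, and whether the new bricks have directory layouts. This assumes the volume is named "loader", as the log prefix above suggests.
    # check whether the rebalance finished on every node
    gluster volume rebalance loader status
    # re-write directory layouts across all bricks without moving data
    gluster volume rebalance loader fix-layout start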
2013 Sep 19
0
dht_layout_dir_mismatch
I'm having an odd problem on a new test environment we are setting up for a
partner, and I'm not sure where to look next to figure out the problem or
to really understand what the dht_layout_dir_mismatch INFO message is
telling me.
I was turning up a 4-node distributed volume; each brick is its own
19TB ext4 partition on a hardware RAID5. Each node has the volume
mounted back to itself at /glusterfs via
2013 May 02
0
GlusterFS mount does not list directory content until parent directory is listed
Hello,
I have spotted strange behaviour of a GlusterFS fuse mount: I am unable to list files in a directory until the parent directory has been listed. However, if I list a file by its full path, it is listed on some client nodes.
Example:
localadmin@ldgpsua00000038:~$ ls -al /var/lib/nova/instances/_base/
ls: cannot access /var/lib/nova/instances/_base/: No such file or directory
localadmin@
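A hedged workaround sketch consistent with the behaviour described above: listing the parent first forces a fresh lookup on it, after which the child directory usually becomes visible. The paths are the ones from the report.
    # force a lookup on the parent, then retry the child
    ls /var/lib/nova/instances/ > /dev/null
    ls -al /var/lib/nova/instances/_base/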
2012 Jun 11
1
"mismatching layouts" flooding in the logs
I have the following appended to the gluster logs at a rate of around 100 kB per second, on all 10 gluster servers:
[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
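The two ranges in the message are the client's cached hash range and the layout stored on disk in the trusted.glusterfs.dht extended attribute. A minimal sketch for inspecting the on-disk side directly on a brick; the /bricks/sites path is a placeholder, not from the original post (the volume name "sites" is taken from the log prefix).
    # dump the on-disk DHT layout for one directory, run on each server
    getfattr -n trusted.glusterfs.dht -e hex /bricks/sites/some/dir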
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All-
There are a lot of the following type of errors in my client and NFS
logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I
[dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol:
atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
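"inode layout - 0 - 0" suggests the client is holding an empty cached layout for the directory. DHT re-heals directory layouts on lookup, so walking the directories from a client mount is one commonly used nudge; a hedged sketch, with /mnt/atmos as a placeholder mount point.
    # trigger a lookup (and layout self-heal) on every directory
    find /mnt/atmos -type d -exec stat {} + > /dev/null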
2010 Dec 15
0
Errors with Gluster 3.1.2qa2
Hi all,
I have just migrated my old gluster partition to a fresh one with 4 nodes
with:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
It solved my problems with latency and disk errors (like input/output errors
or file descriptors in a bad state), but now I get many, many errors like this:
[2010-12-15 12:01:12.711136] I [dht-common.c:369:dht_revalidate_cbk]
2013 Feb 26
0
Replicated Volume Crashed
Hi,
I have a gluster volume that consists of 22 bricks and includes a single
folder with 3.6 million files. Yesterday the volume crashed and turned out
to be completely unresponsive, and I was forced to perform a hard reboot on
all gluster servers: they were so heavily overloaded that they could not
execute a reboot command issued from the shell. Each gluster
server has 12 CPU cores
2013 Feb 18
1
Directory metadata inconsistencies and missing output ("mismatched layout" and "no dentry for inode" error)
Hi, I'm running into a rather strange and frustrating bug and wondering if
anyone on the mailing list might have some insight into what might be
causing it. I'm running a cluster of two dozen nodes, where the processing
nodes are also the gluster bricks (using the SLURM resource manager). Each
node has the gluster volumes mounted natively (not NFS). All nodes are using
v3.2.7. Each job in the
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts provided in extras/geo-rep for an earlier
version upgrade. With a bit of tinkering, these have given me a file
2017 Nov 09
2
Error logged in fuse-mount log file
Resending mail from another ID; I doubt whether this mail reaches the mailing list.
---------- Forwarded message ----------
From: Amudhan P <amudhan83 at gmail.com>
Date: Tue, Nov 7, 2017 at 6:43 PM
Subject: error logged in fuse-mount log file
To: Gluster Users <gluster-users at gluster.org>
Hi,
I am using
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64
client and an x86 client. Weirdly, the client logs were almost identical.
Here's the ppc64 gluster client log of attempting to create a folder...
-------------
[2017-09-20 13:34:23.344321] D
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
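The "debug mode" mentioned above is typically enabled per volume through the diagnostics options; a minimal sketch, with VOLNAME as a placeholder.
    # raise the client (fuse mount) log level to DEBUG
    gluster volume set VOLNAME diagnostics.client-log-level DEBUG
    # revert when the capture is done
    gluster volume set VOLNAME diagnostics.client-log-level INFO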
2017 Nov 13
2
Error logged in fuse-mount log file
Hi Nithya,
I have checked the gfid on all the bricks in the disperse set for the folder;
they are all the same, there is no difference.
regards
Amudhan P
On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi,
>
> Comments inline.
>
> Regards,
> Nithya
>
> On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote:
>
2017 Nov 10
0
Error logged in fuse-mount log file
Hi,
Comments inline.
Regards,
Nithya
On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote:
> Resending mail from another ID; I doubt whether this mail reaches the mailing list.
>
>
> ---------- Forwarded message ----------
> From: *Amudhan P* <amudhan83 at gmail.com>
> Date: Tue, Nov 7, 2017 at 6:43 PM
> Subject: error logged in fuse-mount log
2017 Nov 14
2
Error logged in fuse-mount log file
I remember we fixed 2 issues where this kind of error message was logged and we were also seeing issues on the mount.
In one of the cases the problem was in dht. Unfortunately, I don't remember the BZs for those issues.
As glusterfs 3.10.1 is an old version, I would request that you upgrade to the latest one. I am sure it
would have the fix.
----
Ashish
----- Original
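A minimal sketch for confirming the exact version installed on every client and server before planning the upgrade suggested above:
    # report the installed glusterfs / CLI versions
    glusterfs --version
    gluster --version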
2017 Nov 13
0
Error logged in fuse-mount log file
Adding Ashish .
Hi Amudhan,
Can you check the gfids for every dir in that hierarchy? Maybe one of the
parent dirs has a gfid mismatch.
Regards,
Nithya
On 13 November 2017 at 17:39, Amudhan P <amudhan83 at gmail.com> wrote:
> Hi Nithya,
>
> I have checked the gfid on all the bricks in the disperse set for the folder;
> they are all the same, there is no difference.
>
> regards
>
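A hedged sketch of the per-directory gfid check Nithya suggests above. It must be run against the brick backend paths, not the fuse mount; /bricks/brick1 is a placeholder, not from the original thread.
    # compare the gfid of the same directory across bricks;
    # a healthy directory shows an identical value on every brick
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/dir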
2017 Nov 14
0
Error logged in fuse-mount log file
On 14 November 2017 at 08:36, Ashish Pandey <aspandey at redhat.com> wrote:
>
> I remember we fixed 2 issues where this kind of error message was
> logged and we were also seeing issues on the mount.
> In one of the cases the problem was in dht. Unfortunately, I don't
> remember the BZs for those issues.
>
I think the DHT BZ you are referring to is 1438423
2017 Jun 05
0
Rebalance failing on fix-layout
Hello,
Over the past couple of weeks I had some issues with firmware on the OS hard drives in my gluster cluster. I have recently fixed the issue and am bringing my bricks back into the volume. I am running gluster 3.7.6 and am running into the following issue:
When I add the brick and rebalance, the operation fails after a couple of minutes. The errors I find in the rebalance log are these:
[2017-06-05
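The rebalance log referred to above normally lives under /var/log/glusterfs/ on each server; a minimal sketch for pulling out the error-level lines, with VOLNAME as a placeholder.
    # show recent error-level entries from the rebalance log
    grep ' E \[' /var/log/glusterfs/VOLNAME-rebalance.log | tail -n 20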
2018 Apr 05
0
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
On Thu, Apr 5, 2018 at 10:48 AM, Artem Russakovskii <archon810 at gmail.com>
wrote:
> Hi,
>
> I noticed that when I run gluster volume heal data info, the following message
> shows up in the log, along with other stuff:
>
> [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory
>> selfheal failed: Unable to form layout for directory /
>
>
> I'm
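"Unable to form layout for directory /" is commonly seen when one or more subvolumes are unreachable, so a hedged first check is whether every brick is online; "data" is the volume name taken from the log prefix above.
    # verify all bricks and self-heal daemons are up
    gluster volume status data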
2018 May 23
0
Rebalance state stuck or corrupted
We have had a rebalance operation going on for a few days. After a couple
of days the rebalance status said "failed". We stopped the rebalance operation
by running gluster volume rebalance gv0 stop. The rebalance log indicated gluster
did try to stop the rebalance. However, when we now try to stop the volume
or restart the rebalance, it says there's a rebalance operation going on
and volume
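A hedged sketch of how such stuck state is usually inspected. Some reports indicate that stale rebalance task state clears after restarting glusterd on the affected nodes, but treat that as an assumption rather than an established fix.
    # inspect what glusterd believes the rebalance task is doing
    gluster volume rebalance gv0 status
    gluster volume status gv0
    # assumption: restarting the management daemon may clear stale task state
    systemctl restart glusterd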
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
Hi there,
I'm running glusterfs version 3.1.0.
The client crashed after some time with the stack below.
[2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1:
Subvolume 'distribute-1' came back up; going online.
[2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1:
data self-heal triggered. path:
/streaming/set3/work/reduce.12.1294902171.dplog.temp,