Displaying 20 results from an estimated 3000 matches similar to: "nfs problems"
2011 Mar 03
3
Mac / NFS problems
Hello,
We're having issues with Macs writing to our Gluster system.
Gluster vol info at end.
On a Mac, if I create a file in the shell I get the following message:
smoke:hunter david$ echo hello > test
-bash: test: Operation not permitted
The file is created, but it is zero size.
smoke:hunter david$ ls -l test
-rw-r--r-- 1 david realise 0 Mar 3 08:44 test
glusterfs/nfslog logs thus:
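A sketch of a common first step for this symptom, assuming the volume is exported through Gluster's built-in NFS server: mount from the Mac with NFSv3 over TCP, from a reserved port, with locking disabled. The server name, volume name and mount point below are placeholders.
# On the Mac client (all names are placeholders):
sudo mount -t nfs -o vers=3,tcp,resvport,nolocks gluster-server:/volname /mnt/gluster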
2018 Jan 16
2
Strange messages in mnt-xxx.log
Hi,
I'm testing gluster 3.12.4 and, by inspecting the log file
/var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many
lines saying:
[2018-01-15 09:45:41.066914] I [MSGID: 109063]
[dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in
(null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-15 09:45:45.755021] I [MSGID: 109063]
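For context, "Holes=1" from dht_layout_normalize means one subvolume reported no layout range for that directory. A minimal way to inspect the on-disk layout, assuming direct access to the bricks (the brick path is a placeholder):
# Run on each brick; a missing or divergent trusted.glusterfs.dht xattr marks a layout hole
getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/gv0/path/to/dir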
2018 Jan 17
0
Strange messages in mnt-xxx.log
Hi,
On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at trendservizi.it> wrote:
> Hi,
>
> I'm testing gluster 3.12.4 and, by inspecting the log file
> /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines
> saying:
>
> [2018-01-15 09:45:41.066914] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize]
2018 Jan 17
1
[Possible SPAM] Re: Strange messages in mnt-xxx.log
Here's the volume info:
Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Options Reconfigured:
storage.owner-gid: 107
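For reference, a 1 x (2 + 1) arbiter volume with the brick layout above would typically have been created along these lines (a sketch reconstructed from the info output, not the poster's actual command):
gluster volume create gv2a2 replica 3 arbiter 1 \
    gluster1:/bricks/brick2/gv2a2 \
    gluster3:/bricks/brick3/gv2a2 \
    gluster2:/bricks/arbiter_brick_gv2a2/gv2a2
# matching the reconfigured option shown above
gluster volume set gv2a2 storage.owner-gid 107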
2017 Nov 09
2
Error logged in fuse-mount log file
Resending this mail from another ID; I am not sure whether my earlier mail reached the mailing list.
---------- Forwarded message ----------
From: Amudhan P <amudhan83 at gmail.com<mailto:amudhan83 at gmail.com>>
Date: Tue, Nov 7, 2017 at 6:43 PM
Subject: error logged in fuse-mount log file
To: Gluster Users <gluster-users at gluster.org<mailto:gluster-users at gluster.org>>
Hi,
I am using
2018 Jan 23
1
[Possible SPAM] Re: Strange messages in mnt-xxx.log
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at trendservizi.it> wrote:
> Here's the volume info:
>
>
> Volume Name: gv2a2
> Type: Replicate
> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1:
2017 Nov 10
0
Error logged in fuse-mount log file
Hi,
Comments inline.
Regards,
Nithya
On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote:
> Resending this mail from another ID; I am not sure whether my earlier mail reached the mailing list.
>
>
> ---------- Forwarded message ----------
> From: *Amudhan P* <amudhan83 at gmail.com>
> Date: Tue, Nov 7, 2017 at 6:43 PM
> Subject: error logged in fuse-mount log
2017 Nov 13
2
Error logged in fuse-mount log file
Hi Nithya,
I have checked the gfid on all the bricks in the disperse set for the
folder; they are all the same, with no difference.
regards
Amudhan P
On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi,
>
> Comments inline.
>
> Regards,
> Nithya
>
> On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote:
>
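A minimal sketch of the gfid check being discussed, assuming direct access to the bricks (the brick and folder paths are placeholders): compare the trusted.gfid xattr of the directory on every brick in the disperse set.
# Run on each server; the hex value should be identical on every brick
getfattr -n trusted.gfid -e hex /bricks/brick1/vol/path/to/folder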
2017 Nov 14
2
Error logged in fuse-mount log file
I remember we have fixed 2 issues where this kind of error message was
appearing, and we were also seeing issues on the mount.
In one of the cases the problem was in DHT. Unfortunately, I don't remember the BZs for those issues.
As glusterfs 3.10.1 is an old version, I would request you to please upgrade to the latest one; I am sure that
would have the fix.
----
Ashish
----- Original
2017 Nov 13
0
Error logged in fuse-mount log file
Adding Ashish .
Hi Amudhan,
Can you check the gfids for every dir in that heirarchy? Maybe one of the
parent dirs has a gfid mismatch.
Regards,
Nithya
On 13 November 2017 at 17:39, Amudhan P <amudhan83 at gmail.com> wrote:
> Hi Nithya,
>
> I have checked the gfid on all the bricks in the disperse set for the
> folder; they are all the same, with no difference.
>
> regards
>
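To check the gfids for every directory in the hierarchy as suggested, a small loop over the parent chain on each brick can help; the paths below are placeholders.
# Walk up from the affected folder, printing each parent directory's gfid
dir=/path/to/folder            # path relative to the volume root
while [ "$dir" != "/" ]; do
    getfattr -n trusted.gfid -e hex "/bricks/brick1/vol$dir"
    dir=$(dirname "$dir")
done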
2011 Feb 24
1
Experiencing errors after adding new nodes
Hi,
I had a 2 node distributed cluster running on 3.1.1 and I added 2 more nodes. I then ran a rebalance on the cluster.
Now I am getting permission denied errors and I see the following in the client logs:
[2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument)
[2011-02-24 09:59:11.851656] I
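After adding bricks to a distributed volume, the usual sequence is a layout fix followed by a data rebalance. A sketch, with the volume name "loader" inferred from the log prefix above:
gluster volume rebalance loader fix-layout start   # recompute directory layouts only
gluster volume rebalance loader start              # full rebalance (fixes layout and migrates data)
gluster volume rebalance loader status             # monitor progress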
2017 Nov 14
0
Error logged in fuse-mount log file
On 14 November 2017 at 08:36, Ashish Pandey <aspandey at redhat.com> wrote:
>
> I remember we have fixed 2 issues where this kind of error message was
> appearing, and we were also seeing issues on the mount.
> In one of the cases the problem was in DHT. Unfortunately, I don't
> remember the BZs for those issues.
>
I think the DHT BZ you are referring to is 1438423
2013 May 02
0
GlusterFS mount does not list directory content until parent directory is listed
Hello,
I have spotted strange behaviour of a GlusterFS fuse mount: I am unable to list files in a directory until the parent directory is listed. However, if I list a file by its full path, it is listed on some client nodes.
Example:
localadmin at ldgpsua00000038:~$ ls -al /var/lib/nova/instances/_base/
ls: cannot access /var/lib/nova/instances/_base/: No such file or directory
localadmin at
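A workaround reported for this kind of behaviour is to force a lookup on the parent directory first, then list the child; the paths below are taken from the example above.
ls /var/lib/nova/instances > /dev/null    # lookup on the parent first
ls -al /var/lib/nova/instances/_base/     # the child listing should now succeed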
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while!
I have the following stats:
4085169 files in both bricks
3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running
when bmidata1 died.
gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
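For completeness, the heal commands referenced above (the volume name "clifford" is from the output itself):
gluster volume heal clifford statistics heal-count   # per-brick count of entries pending heal
gluster volume heal clifford info                    # list the gfids/paths still pending heal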
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All-
There are a lot of the following type of errors in my client and NFS
logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I
[dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol:
atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
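Mismatching-layout messages after an expansion usually persist until the layout is fixed. A sketch, with the volume name "atmos" inferred from the log prefix:
gluster volume rebalance atmos fix-layout start
gluster volume rebalance atmos status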
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data
that supplies the path to the original.
I have the inode from stat, and am now looking to dig out the path/filename
from xfs_db on the specific inodes individually.
Is the hash computed over the filename or <path>/filename, and if so, relative to
where? /, <path from top of brick>, ?
On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
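On the brick, each file's gfid maps to a hardlink under .glusterfs (at .glusterfs/<first two hex chars>/<next two>/<full gfid>), so the real path can often be recovered without xfs_db. The brick path and gfid below are placeholders:
find /data/brick -samefile \
    /data/brick/.glusterfs/ab/12/ab12cd34-0000-0000-0000-000000000000 \
    -not -path '*/.glusterfs/*'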
2017 Nov 07
0
error logged in fuse-mount log file
Hi,
I am using glusterfs 3.10.1 and I am seeing the below message in the fuse-mount log
file.
What does this error mean? Should I worry about it, and how do I resolve
it?
[2017-11-07 11:59:17.218973] W [MSGID: 109005]
[dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory
selfheal failed: 1 subvolumes have unrecoverable errors. path =
/fol1/fol2/fol3/fol4/fol5, gfid
2013 Sep 19
0
dht_layout_dir_mismatch
We are having an odd problem in a new test environment we are setting up for a
partner, and I am not sure where to look next to figure out the problem, or to
really understand what the dht_layout_dir_mismatch INFO message is
telling me.
I was bringing up a 4-node distributed volume; each brick is its own
19TB ext4 partition on a hardware RAID5. Each node has the volume
mounted back to itself at /glusterfs via
2017 Oct 18
1
gfid entries in volume heal info that do not heal
Hey Matt,
From the xattr output, it looks like the files are not present on the
arbiter brick and need healing. But the parent does not have the
pending markers set for those entries.
The workaround is to do a lookup, from the mount, on the file which needs
healing; this will create the entry on the arbiter brick, and then you can
run the volume heal to do the healing.
Follow
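Concretely, the workaround described above amounts to the following; the mount point, path and volume name are placeholders:
stat /mnt/glustervol/path/to/file   # lookup from a client mount creates the missing arbiter entry
gluster volume heal volname         # then trigger the heal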
2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files (any tips on finding them would be appreciated), but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10.
[root at tpc-cent-glus1-081017 ~]#