similar to: NFS client problems

Displaying 20 results from an estimated 100 matches similar to: "NFS client problems"

2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume on two servers. The servers are blade6 and blade7 (there is another node, blade1, in the peer list, but it hosts no volumes). The volume seems ok, but I cannot mount it over NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1 [root@blade7 stor1]# df -h /dev/mapper/gluster_fast
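A minimal sketch of the checks one might run for an NFS mount failure like this, assuming the built-in Gluster NFS server of that release; the volume name is a placeholder since it is not shown in the excerpt:

# confirm the gluster NFS server is up for the volume
gluster volume status <volname> nfs

# confirm the export is visible from the client
showmount -e blade6

# gluster NFS only serves NFSv3, so pin the version on the client
mount -t nfs -o vers=3,tcp blade6:/<volname> /mnt/test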
2017 Aug 29
0
error msg in the glustershd.log
Whenever we do some fop on a file on an EC volume, we also check the xattrs to see whether the file is healthy or not. If not, we trigger a heal. Lookup is the fop for which we don't take the inodelk lock, so it is possible that the xattrs we get for a lookup differ across some bricks. This difference is not reliable, but we still trigger a heal, and that is why you are seeing these messages.
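For context, the xattrs the EC translator compares can be read straight off the bricks; a hedged sketch (brick and file paths are placeholders, and the exact trusted.ec.* names may vary by release):

# run on each brick that holds a fragment of the file
getfattr -d -m . -e hex /bricks/brick1/path/to/file

# on a healthy file the trusted.ec.version and trusted.ec.size values
# should match across bricks; a mismatch is what makes lookup suspect a heal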
2017 Aug 31
0
error msg in the glustershd.log
Ashish, in which version has this issue been fixed? On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudhan83 at gmail.com> wrote: > I am using 3.10.1; from which version is this update available? > > > On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> > wrote: > >> >> Whenever we do some fop on EC volume on a file, we check the xattr also
2017 Aug 29
2
error msg in the glustershd.log
Hi, I need some clarification on the error msg below in the glustershd.log file. What is this msg? Why is it showing up? I am currently using glusterfs 3.10.1. Whenever I start a write process to the volume (mounted through fuse) I see this kind of error, and the glustershd process consumes some percentage of CPU until the write process completes. [2017-08-28 10:01:13.030710] W [MSGID: 122006]
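A small hedged sketch for checking whether heals are actually pending while glustershd is busy (the volume name is a placeholder):

# entries the self-heal daemon still considers pending
gluster volume heal <volname> info

# watch glustershd's CPU while the write test runs
top -p $(pgrep -f glustershd)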
2017 Aug 31
1
error msg in the glustershd.log
Based on this BZ https://bugzilla.redhat.com/show_bug.cgi?id=1414287 it has been fixed in glusterfs-3.11.0 --- Ashish ----- Original Message ----- From: "Amudhan P" <amudhan83 at gmail.com> To: "Ashish Pandey" <aspandey at redhat.com> Cc: "Gluster Users" <gluster-users at gluster.org> Sent: Thursday, August 31, 2017 1:07:16 PM Subject:
2017 Aug 29
2
error msg in the glustershd.log
I am using 3.10.1; from which version is this update available? On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> wrote: > > Whenever we do some fop on EC volume on a file, we check the xattr also to > see if the file is healthy or not. If not, we trigger heal. > lookup is the fop for which we don't take inodelk lock so it is possible > that the
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi, I'm running GlusterFS 3.3.1 on CentOS 6.4. Gluster volume status: Status of volume: glustervol Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick KWTOCUATGS001:/mnt/cloudbrick 24009 Y 20031 Brick KWTOCUATGS002:/mnt/cloudbrick
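A hedged sketch of the heal-related commands available on the 3.3.x CLI that would show what is stuck (volume name taken from the status output above):

# entries still pending heal
gluster volume heal glustervol info

# entries that failed to heal or are in split-brain
gluster volume heal glustervol info heal-failed
gluster volume heal glustervol info split-brain

# ask the self-heal daemon to crawl everything again
gluster volume heal glustervol full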
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
Hi, I am using glusterfs 3.10.1 with 30 nodes of 36 bricks each and 10 nodes of 16 bricks each in a single cluster. By default I keep the scrub process paused so that it runs only on demand. The first time I tried a scrub-on-demand run it was going fine, but after some time I decided to pause the scrub process due to high CPU usage and users reporting that folder listings were taking time. But scrub
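For reference, a hedged sketch of the bitrot scrub controls being described (the volume name is a placeholder):

# scrubber state and per-node statistics
gluster volume bitrot <volname> scrub status

# pause / resume instead of letting it run continuously
gluster volume bitrot <volname> scrub pause
gluster volume bitrot <volname> scrub resume

# or throttle it down if CPU usage is the real concern
gluster volume bitrot <volname> scrub-throttle lazy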
2017 Nov 07
0
error logged in fuse-mount log file
Hi, I am using glusterfs 3.10.1 and I am seeing the msg below in the fuse-mount log file. What does this error mean? Should I worry about it, and how do I resolve it? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid
2017 Nov 10
0
Error logged in fuse-mount log file
Hi, Comments inline. Regards, Nithya On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: > resending mail from another id, doubt on whether mail reaches mailing list. > > > ---------- Forwarded message ---------- > From: *Amudhan P* <amudhan83 at gmail.com> > Date: Tue, Nov 7, 2017 at 6:43 PM > Subject: error logged in fuse-mount log
2017 Nov 09
2
Error logged in fuse-mount log file
Resending mail from another id, as I doubt whether my earlier mail reached the mailing list. ---------- Forwarded message ---------- From: Amudhan P <amudhan83 at gmail.com> Date: Tue, Nov 7, 2017 at 6:43 PM Subject: error logged in fuse-mount log file To: Gluster Users <gluster-users at gluster.org> Hi, I am using
2017 Nov 13
0
Error logged in fuse-mount log file
Adding Ashish. Hi Amudhan, Can you check the gfids for every dir in that hierarchy? Maybe one of the parent dirs has a gfid mismatch. Regards, Nithya On 13 November 2017 at 17:39, Amudhan P <amudhan83 at gmail.com> wrote: > Hi Nithya, > > I have checked the gfid on all the bricks in the disperse set for the folder. It is > all the same; there is no difference. > > regards >
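A hedged sketch of the gfid check being requested here, using the path from the log message; brick paths are placeholders:

# run on every brick of the disperse set, for each directory in the path
for d in /fol1 /fol1/fol2 /fol1/fol2/fol3 /fol1/fol2/fol3/fol4 /fol1/fol2/fol3/fol4/fol5; do
    getfattr -n trusted.gfid -e hex /bricks/brick1"$d"
done
# the trusted.gfid value for a given directory must be identical on all
# bricks; a mismatch on any parent is what the dht selfheal warning points at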
2017 Nov 13
2
Error logged in fuse-mount log file
Hi Nithya, I have checked the gfid on all the bricks in the disperse set for the folder. It is all the same; there is no difference. regards Amudhan P On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > Comments inline. > > Regards, > Nithya > > On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: >
2017 Nov 14
0
Error logged in fuse-mount log file
On 14 November 2017 at 08:36, Ashish Pandey <aspandey at redhat.com> wrote: > > I remember we have fixed 2 issues where this kind of error message was > appearing, and we were also seeing issues on the mount. > In one of the cases the problem was in dht. Unfortunately, I don't > remember the BZs for those issues. > I think the DHT BZ you are referring to is 1438423
2017 Nov 14
2
Error logged in fuse-mount log file
I remember we have fixed 2 issues where this kind of error message was appearing, and we were also seeing issues on the mount. In one of the cases the problem was in dht. Unfortunately, I don't remember the BZs for those issues. As glusterfs 3.10.1 is an old version, I would request you to please upgrade to the latest one. I am sure it would have the fix. ---- Ashish ----- Original
2011 Oct 20
1
trying to create a 3 brick CIFS NAS server
Hi all, I am having problems connecting to a 3-brick volume from a Windows client via samba/cifs. Volume Name: gluster-volume Type: Distribute Status: Started Number of Bricks: 3 Transport-type: tcp Bricks: Brick1: 172.22.0.53:/data Brick2: 172.22.0.23:/data Brick3: 172.22.0.35:/data I created a /mnt/glustervol folder and then tried to mount the gluster-volume to it using: mount -t cifs
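For what it's worth, a Gluster volume cannot be mounted with -t cifs directly; a hedged sketch of the usual pattern of that era, exporting a local FUSE mount of the volume through Samba (share name, options and addresses are illustrative):

# on the samba host: mount the volume locally over FUSE first
mount -t glusterfs 172.22.0.53:/gluster-volume /mnt/glustervol

# /etc/samba/smb.conf -- export the FUSE mountpoint, not the bricks
[glustervol]
    path = /mnt/glustervol
    read only = no
    guest ok = yes

# on a Linux client, the samba share (not the volume) is what gets cifs-mounted
mount -t cifs //172.22.0.53/glustervol /mnt/glustervol -o guest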
2017 May 30
1
Gluster client mount fails in mid flight with signum 15
Hello All, we have a problem with gluster client mounts failing in mid run, with this in the log: glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f640df4bdfb] ) 0-: received signum (15), shutting down. We've tried running debug but
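signum (15) is SIGTERM, i.e. something asked the client process to exit; a hedged sketch for gathering more context before the next failure (server, volume and mount point are placeholders, and the log file name follows the usual mountpoint-derived naming):

# remount with verbose client-side logging
mount -t glusterfs -o log-level=DEBUG server:/volname /mnt/volname

# check whether the kernel OOM killer or something else delivered the TERM
journalctl -k | grep -i oom
grep -i 'received signum' /var/log/glusterfs/mnt-volname.log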
2017 Dec 12
0
reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. One more case could be where you just want to change the hostname of that node's bricks to its IP address. In this case also you will follow the same steps but just have to provide the IP
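A hedged sketch of the two-step sequence being described (volume, host and brick path are placeholders); the source and target HOSTNAME:BRICKPATH are given separately precisely so they can differ, e.g. when moving from a hostname to an IP:

# take the brick out of service so the disk can be replaced
gluster volume reset-brick myvol node1:/bricks/b1 start

# ... replace the disk and recreate the empty brick directory ...

# bring the same (or renamed) brick back; heals then repopulate it
gluster volume reset-brick myvol node1:/bricks/b1 node1:/bricks/b1 commit force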
2017 Nov 08
1
BUG: After stop and start wrong port is advertised
Hi, This bug is hitting me hard on two different clients. In RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4; in one case I had 59 differences in a total of 203 bricks. I wrote a quick and dirty script to check all ports against the brick file and the running process. #!/bin/bash Host=`uname -n| awk -F"." '{print $1}'` GlusterVol=`ps -eaf | grep /usr/sbin/glusterfsd|
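The quoted script is cut off by the archive; below is a rough, hedged equivalent of the same idea, assuming the usual /var/lib/glusterd layout where each brick state file carries a listen-port field:

#!/bin/bash
# print, per local brick, the port glusterd has recorded, then the ports
# the running glusterfsd processes are actually listening on
host=$(uname -n | cut -d. -f1)
for f in /var/lib/glusterd/vols/*/bricks/"${host}"*; do
    echo "== $f"
    grep -E '^(path|listen-port)=' "$f"
done
echo "== ports glusterfsd is really listening on"
ss -tlnp | grep glusterfsd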
2017 Nov 08
0
BUG: After stop and start wrong port is advertised
We have a fix in the release-3.10 branch which is merged and should be available in the next 3.10 update. On Wed, Nov 8, 2017 at 4:58 PM, Mike Hulsman <mike.hulsman at proxy.nl> wrote: > Hi, > > This bug is hitting me hard on two different clients. > In RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4; > in one case I had 59 differences in a total of 203 bricks. > > I