Similar to: "error msg in the glustershd.log"

Displaying 20 results from an estimated 300 matches similar to: "error msg in the glustershd.log"

2017 Aug 29 · 2 · error msg in the glustershd.log
I am using 3.10.1; from which version is this update available? On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> wrote: Whenever we do some fop on an EC volume on a file, we also check the xattrs to see if the file is healthy or not. If not, we trigger heal. lookup is the fop for which we don't take the inodelk lock, so it is possible that the ...
2017 Aug 29 · 0 · error msg in the glustershd.log
Whenever we do some fop on a file on an EC volume, we also check the xattrs to see whether the file is healthy or not; if not, we trigger heal. lookup is the fop for which we don't take the inodelk lock, so it is possible that the xattrs we get for the lookup fop differ across some bricks. This difference is not reliable, but we still trigger heal, and that is why you are seeing these messages.
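For reference, the health metadata that the EC translator consults lives in extended attributes on each brick; a minimal sketch of how to inspect it, with placeholder volume and brick paths:

    # On each server hosting a brick of the disperse set: the values of
    # trusted.ec.version, trusted.ec.size and trusted.ec.dirty should match
    # across healthy bricks; a mismatch is what marks the file for heal.
    getfattr -d -m . -e hex /bricks/brick1/path/to/file

    # Ask the self-heal daemon what it currently considers pending.
    gluster volume heal <VOLNAME> info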
2017 Aug 31 · 0 · error msg in the glustershd.log
Ashish, in which version is this issue fixed? On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudhan83 at gmail.com> wrote: I am using 3.10.1; from which version is this update available? On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> wrote: Whenever we do some fop on an EC volume on a file, we also check the xattrs ...
2017 Aug 31 · 1 · error msg in the glustershd.log
Based on this BZ, https://bugzilla.redhat.com/show_bug.cgi?id=1414287, it has been fixed in glusterfs-3.11.0. --- Ashish ----- Original Message ----- From: "Amudhan P" <amudhan83 at gmail.com> To: "Ashish Pandey" <aspandey at redhat.com> Cc: "Gluster Users" <gluster-users at gluster.org> Sent: Thursday, August 31, 2017 1:07:16 PM Subject: ...
2017 Sep 22 · 2 · AFR: Fail lookups when quorum not met
Hello, in AFR we currently allow lookups to pass through without taking into account whether the lookup is served from a good or bad brick. We always serve from the good brick whenever possible, but if there is none, we just serve the lookup from one of the bricks that we got a positive reply from. We found a bug [1] due to this behavior where the iatt values returned in the lookup call ...
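For context, client-side quorum in AFR is governed by volume options; a brief sketch of how to review and set them, with a placeholder volume name:

    # Show the current quorum settings of a replicate volume.
    gluster volume get <VOLNAME> cluster.quorum-type
    gluster volume get <VOLNAME> cluster.quorum-count

    # 'auto' requires a majority of the bricks in each replica set to be up
    # before writes are allowed.
    gluster volume set <VOLNAME> cluster.quorum-type auto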
2013 Aug 21 · 1 · FileSize changing in GlusterNodes
Hi, when I upload files into the gluster volume, it replicates all the files to both gluster nodes, but the file size slightly varies by 4-10 KB, which changes the md5sum of the file. Command to check file size: du -k *. I'm using glusterFS 3.3.1 with CentOS 6.4. This is creating inconsistency between the files on both the bricks. What is the reason for this changed file size and how can ...
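A generic check that may help narrow this down: du -k reports allocated disk blocks, which can legitimately differ between filesystems, while the byte length and content checksum of a replicated file should not. A sketch with a placeholder file path:

    # Apparent size in bytes: should be identical on both bricks.
    stat -c %s /path/on/brick/file
    du -k --apparent-size /path/on/brick/file

    # Allocated blocks: may differ between filesystems, so du -k alone
    # is not a reliable comparison.
    du -k /path/on/brick/file

    # Checksum of the actual content.
    md5sum /path/on/brick/file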
2013 Dec 03 · 3 · Self Heal Issue GlusterFS 3.3.1
Hi, I'm running glusterFS 3.3.1 on CentOS 6.4.
Gluster volume status:
Status of volume: glustervol
Gluster process                         Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick     24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick ...
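To see what the self-heal daemon still considers pending on this volume, the usual starting point looks like this (volume name taken from the status output above):

    # List entries that still need healing, per brick.
    gluster volume heal glustervol info

    # Trigger a heal of the reported entries.
    gluster volume heal glustervol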
2017 Nov 09 · 2 · Error logged in fuse-mount log file
Resending mail from another ID; I doubt whether the mail reaches the mailing list. ---------- Forwarded message ---------- From: Amudhan P <amudhan83 at gmail.com> Date: Tue, Nov 7, 2017 at 6:43 PM Subject: error logged in fuse-mount log file To: Gluster Users <gluster-users at gluster.org> Hi, I am using ...
2017 Nov 13 · 2 · Error logged in fuse-mount log file
Hi Nithya, I have checked the gfid on all the bricks in the disperse set for the folder; it is all the same, there is no difference. Regards, Amudhan P. On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: Hi, comments inline. Regards, Nithya. On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: ...
2017 Sep 08 · 1 · pausing scrub crashed scrub daemon on nodes
Hi, I am using glusterfs 3.10.1 with 30 nodes each with 36 bricks and 10 nodes each with 16 bricks in a single cluster. By default I have paused the scrub process so that it runs only manually. The first time I tried scrub-on-demand it was running fine, but after some time I decided to pause the scrub process due to high CPU usage and users reporting that folder listing was taking time. But scrub ...
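For reference, the bitrot scrubber is driven through the gluster CLI; a minimal sketch of the commands involved, with a placeholder volume name:

    # Current scrubber state and statistics.
    gluster volume bitrot <VOLNAME> scrub status

    # Start a scrub run immediately.
    gluster volume bitrot <VOLNAME> scrub ondemand

    # Pause and later resume the scrubber.
    gluster volume bitrot <VOLNAME> scrub pause
    gluster volume bitrot <VOLNAME> scrub resume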
2017 Nov 14 · 2 · Error logged in fuse-mount log file
I remember we fixed two issues where this kind of error message was appearing and we were also seeing issues on the mount. In one of the cases the problem was in DHT. Unfortunately, I don't remember the BZs for those issues. As glusterfs 3.10.1 is an old version, I would request you to please upgrade to the latest one; I am sure it would have the fix. ---- Ashish ----- Original ...
2017 Nov 10 · 0 · Error logged in fuse-mount log file
Hi, comments inline. Regards, Nithya. On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: Resending mail from another ID; I doubt whether the mail reaches the mailing list. ---------- Forwarded message ---------- From: Amudhan P <amudhan83 at gmail.com> Date: Tue, Nov 7, 2017 at 6:43 PM Subject: error logged in fuse-mount log ...
2017 Nov 13 · 0 · Error logged in fuse-mount log file
Adding Ashish. Hi Amudhan, can you check the gfids for every dir in that hierarchy? Maybe one of the parent dirs has a gfid mismatch. Regards, Nithya. On 13 November 2017 at 17:39, Amudhan P <amudhan83 at gmail.com> wrote: Hi Nithya, I have checked the gfid on all the bricks in the disperse set for the folder; it is all the same, there is no difference. Regards ...
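A sketch of that check, with placeholder brick paths: the gfid of a directory is stored in the trusted.gfid extended attribute on every brick and must be identical everywhere.

    # Run on each brick of the disperse set; the hex values must all match.
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/dir

    # Repeat for each parent directory up the hierarchy.
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to
    getfattr -n trusted.gfid -e hex /bricks/brick1/path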
2017 Nov 14 · 0 · Error logged in fuse-mount log file
On 14 November 2017 at 08:36, Ashish Pandey <aspandey at redhat.com> wrote: I remember we fixed two issues where this kind of error message was appearing and we were also seeing issues on the mount. In one of the cases the problem was in DHT. Unfortunately, I don't remember the BZs for those issues. I think the DHT BZ you are referring to is 1438423 ...
2011 Oct 20 · 1 · trying to create a 3 brick CIFS NAS server
Hi all, I am having problems connecting to a 3-brick volume from a Windows client via Samba/CIFS.
Volume Name: gluster-volume
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 172.22.0.53:/data
Brick2: 172.22.0.23:/data
Brick3: 172.22.0.35:/data
I created a /mnt/glustervol folder and then tried to mount the gluster-volume to it using: mount -t cifs ...
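For comparison, a common pattern is to mount the volume natively on the Samba host and export that mountpoint as a share, rather than pointing CIFS at the volume directly; a rough sketch with illustrative share and mount names:

    # On the Samba server: mount the Gluster volume with the native client.
    mount -t glusterfs 172.22.0.53:/gluster-volume /mnt/glustervol

    # Export /mnt/glustervol as a share (for example a [glustervol] section in
    # smb.conf with "path = /mnt/glustervol"), then from a Linux CIFS client:
    mount -t cifs //samba-server/glustervol /mnt/share -o username=someuser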
2017 Dec 11 · 2 · reset-brick command questions
Hi, I'm trying to use the reset-brick command, but it's not completely clear to me. "Introducing reset-brick command. Notes for users: The reset-brick command provides support to reformat/replace the disk(s) represented by a brick within a volume. This is helpful when a disk goes bad etc." That's what I need; the use case is a disk goes bad on ...
2017 Dec 12 · 0 · reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. One more case could be where you just want to change the hostname of a node's bricks to its IP address. In this case you also follow the same steps, but just have to provide the IP ...
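For reference, the sequence looks roughly like this, with placeholder volume, host and brick path:

    # Take the brick offline before touching the disk.
    gluster volume reset-brick <VOLNAME> <HOSTNAME:BRICKPATH> start

    # Reformat or replace the disk, mount it again at the same path, then
    # bring the same brick back into the volume (hence the path appears twice).
    gluster volume reset-brick <VOLNAME> <HOSTNAME:BRICKPATH> <HOSTNAME:BRICKPATH> commit force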
2017 Oct 09 · 0 · [Gluster-devel] AFR: Fail lookups when quorum not met
On 09/22/2017 07:27 PM, Niels de Vos wrote: On Fri, Sep 22, 2017 at 12:27:46PM +0530, Ravishankar N wrote: Hello, in AFR we currently allow lookups to pass through without taking into account whether the lookup is served from a good or bad brick. We always serve from the good brick whenever possible, but if there is none, we just serve ...
2017 Nov 08 · 1 · BUG: After stop and start wrong port is advertised
Hi, this bug is hitting me hard on two different clients, in RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4; in one case I had 59 differences in a total of 203 bricks. I wrote a quick and dirty script to check all ports against the brick file and the running process.
#!/bin/bash
Host=`uname -n | awk -F"." '{print $1}'`
GlusterVol=`ps -eaf | grep /usr/sbin/glusterfsd | ...
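The script is cut off above; a rough reconstruction of the idea, assuming the brick info files under /var/lib/glusterd/vols/*/bricks/ carry path= and listen-port= entries and that the glusterfsd command line carries --brick-name and --brick-port:

    #!/bin/bash
    # For every local glusterfsd process, compare the --brick-port it was
    # started with against the listen-port glusterd recorded for that brick.
    ps -eo args | grep '[g]lusterfsd' | while read -r cmd; do
        brick=$(echo "$cmd" | sed -n 's/.*--brick-name \([^ ]*\).*/\1/p')
        port=$(echo "$cmd" | sed -n 's/.*--brick-port \([0-9]*\).*/\1/p')
        file=$(grep -l "^path=${brick}$" /var/lib/glusterd/vols/*/bricks/* 2>/dev/null)
        recorded=$(awk -F= '/^listen-port=/ {print $2}' "$file" 2>/dev/null)
        [ "$port" = "$recorded" ] || echo "MISMATCH: $brick process=$port brickfile=$recorded"
    done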
2017 Nov 08 · 0 · BUG: After stop and start wrong port is advertised
We have a fix in the release-3.10 branch which is merged and should be available in the next 3.10 update. On Wed, Nov 8, 2017 at 4:58 PM, Mike Hulsman <mike.hulsman at proxy.nl> wrote: Hi, this bug is hitting me hard on two different clients, in RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4; in one case I had 59 differences in a total of 203 bricks. I ...