similar to: trying to create a 3 brick CIFS NAS server

Displaying 20 results from an estimated 200 matches similar to: "trying to create a 3 brick CIFS NAS server"

2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi, I'm running GlusterFS 3.3.1 on CentOS 6.4.

gluster volume status
Status of volume: glustervol
Gluster process                              Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick          24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
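A minimal sketch of how the pending-heal state of a replicate volume like this is usually inspected; the volume name glustervol is taken from the output above:

    # Files pending heal, per brick
    gluster volume heal glustervol info

    # Heal counts and any files in split-brain
    gluster volume heal glustervol statistics heal-count
    gluster volume heal glustervol info split-brain

    # Trigger a full self-heal across the volume
    gluster volume heal glustervol full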
2017 Aug 29
2
error msg in the glustershd.log
Hi, I need some clarification about the error message below from the glustershd.log file. What does this message mean, and why is it showing up? I am currently using glusterfs 3.10.1. Whenever I start a write process to the volume (mounted through FUSE), I see this kind of error, and the glustershd process consumes some CPU until the write process completes. [2017-08-28 10:01:13.030710] W [MSGID: 122006]
2017 Aug 29
0
error msg in the glustershd.log
Whenever we do some fop on a file on an EC volume, we also check the xattrs to see whether the file is healthy; if not, we trigger heal. Lookup is the one fop for which we don't take the inodelk lock, so it is possible that the xattrs we get back for a lookup differ across bricks. That difference is not reliable, but we still trigger heal, and that is why you are seeing these messages.
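A rough sketch of how those per-brick xattrs can be compared by hand on a disperse volume; the brick path and file name are illustrative:

    # Run on each brick server against the same file on its local brick path.
    # trusted.ec.version / trusted.ec.size / trusted.ec.dirty should match
    # across healthy bricks; a differing or dirty value is what triggers heal.
    getfattr -d -m . -e hex /mnt/brick1/path/to/file

    # Entries the self-heal daemon currently considers pending
    gluster volume heal <VOLNAME> info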
2017 Aug 29
2
error msg in the glustershd.log
I am using 3.10.1; from which version is this fix available? On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> wrote: > > Whenever we do some fop on EC volume on a file, we check the xattr also to > see if the file is healthy or not. If not, we trigger heal. > lookup is the fop for which we don't take inodelk lock so it is possible > that the
2017 Aug 31
0
error msg in the glustershd.log
Ashish, in which version is this issue fixed? On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudhan83 at gmail.com> wrote: > I am using 3.10.1 from which version this update is available. > > > On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> > wrote: > >> >> Whenever we do some fop on EC volume on a file, we check the xattr also
2017 Aug 31
1
error msg in the glustershd.log
Based on this BZ https://bugzilla.redhat.com/show_bug.cgi?id=1414287 it has been fixed in glusterfs-3.11.0 --- Ashish ----- Original Message ----- From: "Amudhan P" <amudhan83 at gmail.com> To: "Ashish Pandey" <aspandey at redhat.com> Cc: "Gluster Users" <gluster-users at gluster.org> Sent: Thursday, August 31, 2017 1:07:16 PM Subject:
2017 Nov 09
2
Error logged in fuse-mount log file
Resending this mail from another ID; I am not sure whether my earlier mail reached the mailing list. ---------- Forwarded message ---------- From: Amudhan P <amudhan83 at gmail.com> Date: Tue, Nov 7, 2017 at 6:43 PM Subject: error logged in fuse-mount log file To: Gluster Users <gluster-users at gluster.org> Hi, I am using
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
Hi, I am using glusterfs 3.10.1 with 30 nodes of 36 bricks each and 10 nodes of 16 bricks each in a single cluster. I keep the scrub process paused by default so that it runs only manually. The first time I ran scrub on demand it was working fine, but after some time I decided to pause the scrub process because of high CPU usage and users reporting that folder listing was taking a long time. But scrub
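For reference, a sketch of the bitrot scrub controls being discussed here; the volume name is a placeholder:

    # Pause / resume the scrubber
    gluster volume bitrot <VOLNAME> scrub pause
    gluster volume bitrot <VOLNAME> scrub resume

    # Kick off an on-demand scrub and check its progress
    gluster volume bitrot <VOLNAME> scrub ondemand
    gluster volume bitrot <VOLNAME> scrub status

    # Throttle the scrubber instead of pausing it
    gluster volume bitrot <VOLNAME> scrub-throttle lazy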
2017 Nov 10
0
Error logged in fuse-mount log file
Hi, Comments inline. Regards, Nithya On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: > resending mail from another id, doubt on whether mail reaches mailing list. > > > ---------- Forwarded message ---------- > From: *Amudhan P* <amudhan83 at gmail.com> > Date: Tue, Nov 7, 2017 at 6:43 PM > Subject: error logged in fuse-mount log
2017 Nov 13
2
Error logged in fuse-mount log file
Hi Nithya, I have checked the gfid on all the bricks in the disperse set for that folder; it is the same everywhere, there is no difference. regards Amudhan P On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > Comments inline. > > Regards, > Nithya > > On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: >
2017 Nov 13
0
Error logged in fuse-mount log file
Adding Ashish. Hi Amudhan, Can you check the gfids for every dir in that hierarchy? Maybe one of the parent dirs has a gfid mismatch. Regards, Nithya On 13 November 2017 at 17:39, Amudhan P <amudhan83 at gmail.com> wrote: > Hi Nithya, > > I have checked gfid in all the bricks in disperse set for the folder. it > all same there is no difference. > > regards >
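A sketch of the per-directory gfid comparison being asked for; the brick path and directory names are illustrative:

    # Run on every brick of the disperse set; the hex gfid printed for the
    # same directory must be identical on all bricks.
    for d in /brick1/fol1 /brick1/fol1/fol2 /brick1/fol1/fol2/fol3; do
        echo "$d"
        getfattr -n trusted.gfid -e hex "$d" 2>/dev/null | grep trusted.gfid
    done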
2017 Nov 14
2
Error logged in fuse-mount log file
I remember we have fixed two issues where this kind of error message was coming and we were also seeing issues on the mount. In one of the cases the problem was in dht. Unfortunately, I don't remember the BZs for those issues. As glusterfs 3.10.1 is an old version, I would request you to please upgrade to the latest one; I am sure it would have the fix. ---- Ashish ----- Original
2017 Nov 14
0
Error logged in fuse-mount log file
On 14 November 2017 at 08:36, Ashish Pandey <aspandey at redhat.com> wrote: > > I remember we have fixed 2 issues where such kind of error messages were > coming and also we were seeing issues on mount. > In one of the case the problem was in dht. Unfortunately, I don't > remember the BZ's for those issues. > I think the DHT BZ you are referring to is 1438423
2017 Dec 12
0
reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. Another case is where you just want to change the brick's hostname to the IP address of that node. In that case you also follow the same steps, but just have to provide the IP
2017 Dec 11
2
reset-brick command questions
Hi, I'm trying to use the reset-brick command, but it's not completely clear to me. > > Introducing reset-brick command > > /Notes for users:/ The reset-brick command provides support to > reformat/replace the disk(s) represented by a brick within a volume. > This is helpful when a disk goes bad etc. > That's what I need; the use case is a disk goes bad on
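A minimal sketch of the reset-brick sequence those notes describe for replacing a bad disk in place; the volume name, hostname, and brick path are placeholders:

    # Take the brick offline before touching the disk
    gluster volume reset-brick <VOLNAME> <HOST>:/bricks/brick1 start

    # ... replace/reformat the disk and recreate the brick directory ...

    # Bring the same brick back; source and target are identical here,
    # which is why HOST:BRICKPATH appears twice in the command
    gluster volume reset-brick <VOLNAME> <HOST>:/bricks/brick1 \
        <HOST>:/bricks/brick1 commit force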
2012 Apr 04
1
issue with Digium TDM410P
The TDM410P doesn't support 'hvac'; only the obsolete TDM400P supports it. That option was for the old phones that have a neon light (or an equivalent LED+zener circuit). Are other phones off the TDM410P (other than the VTech) working, or is the VTech the only model with VMWI available to you? I'm not able to check at the moment; I have copied the asterisk-users list, someone else may
2010 Feb 10
2
Server not found in kerberos database (with net ads join)
Hi All, After running into a few issues while trying to join my Debian (squeeze) box to a Windows 2008 server, I am running into this (hopefully last) problem... When I try to do the net join command, I get the following > nanoelecfs:/home/joel# net join ads -S XX.XX.XX.XX dn > 'DC=FS,DC=UML,DC=EDU' -U USERNAME > Enter EEng_LDAP's password: > [2010/02/10 15:20:10, 0]
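For what it's worth, this error often means the KDC could not find a service principal because the DC was addressed by IP rather than by a resolvable hostname. A sketch of a hostname-based join, where the realm is inferred from the quoted DN and the DC name is a placeholder:

    # Clock skew also breaks Kerberos; sync with the DC first
    ntpdate dc1.fs.uml.edu

    # Get a ticket and join using the DC's DNS name, not its IP
    kinit USERNAME@FS.UML.EDU
    net ads join -S dc1.fs.uml.edu -U USERNAME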
2017 Nov 07
0
error logged in fuse-mount log file
Hi, I am using glusterfs 3.10.1 and I am seeing the message below in the fuse-mount log file. What does this error mean? Should I worry about it, and how do I resolve it? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid
2017 Nov 08
1
BUG: After stop and start wrong port is advertised
Hi, This bug is hitting me hard on two different clients, on RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4. In one case I had 59 differences in a total of 203 bricks. I wrote a quick and dirty script to check all ports against the brick file and the running process:

#!/bin/bash
Host=`uname -n| awk -F"." '{print $1}'`
GlusterVol=`ps -eaf | grep /usr/sbin/glusterfsd|
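The quoted script is cut off by the archive; below is a rough, self-contained sketch of the same idea, comparing the listen-port recorded in each brick's state file under /var/lib/glusterd with the --brick-port of the running glusterfsd. File locations and key names are the usual 3.x defaults and should be treated as assumptions:

    #!/bin/bash
    # Report bricks whose recorded port differs from the running process's port.
    for brickfile in /var/lib/glusterd/vols/*/bricks/*; do
        path=$(awk -F= '/^path=/{print $2}' "$brickfile")
        recorded=$(awk -F= '/^listen-port=/{print $2}' "$brickfile")
        running=$(pgrep -af glusterfsd | grep -F -- "--brick-name $path" | \
                  sed -n 's/.*--brick-port \([0-9]\+\).*/\1/p')
        [ -z "$running" ] && continue   # brick not running on this node
        if [ "$recorded" != "$running" ]; then
            echo "MISMATCH $path recorded=$recorded running=$running"
        fi
    done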
2013 Nov 27
0
NFS client problems
I have created a 2-node replicated cluster with GlusterFS 3.4.1 on CentOS 6.4. Mounting the volume locally on each server using the native client works fine, however I am having issues with a separate client-only server from which I wish to mount the gluster volume over NFS.

Volume Name: glustervol
Type: Replicate
Volume ID: 6a5dde86-...
Status: Started
Number of Bricks: 1 x 2 = 2
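A minimal sketch of mounting such a volume from a separate client over Gluster's built-in NFS server, which serves NFSv3 over TCP only; the server name and mount point are placeholders:

    # On a gluster node: confirm the NFS server process is up for the volume
    gluster volume status glustervol nfs

    # On the client: check the export, then mount with NFSv3 over TCP
    showmount -e <SERVER>
    mount -t nfs -o vers=3,proto=tcp,nolock <SERVER>:/glustervol /mnt/glustervol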