Displaying 20 results from an estimated 27 matches for "glustervol".
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running GlusterFS 3.3.1 on CentOS 6.4.
# gluster volume status
Status of volume: glustervol
Gluster process                               Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick           24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick           24009...
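For a self-heal question like this, the usual first diagnostic is to ask the cluster what is pending heal. A sketch of the standard checks, reusing the volume name from the status output above (run on a server node as root; this only inspects state, it is not a fix for whatever the underlying issue was):

```shell
# List entries pending self-heal, and any in split-brain:
gluster volume heal glustervol info
gluster volume heal glustervol info split-brain

# If entries stay stuck, a full heal can be triggered explicitly:
gluster volume heal glustervol full
```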
2017 Aug 29
2
error msg in the glustershd.log
...ntly using glusterfs 3.10.1
Whenever I start a write process to the volume (mounted through FUSE), I
see this kind of error, and the glustershd process consumes some CPU
until the write process completes.
[2017-08-28 10:01:13.030710] W [MSGID: 122006]
[ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to
combine iatt (inode: 11548094941524765708-11548094941524765708, links: 1-1,
uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode:
100755-100755)
[2017-08-28 10:01:13.030752] N [MSGID: 122029]
[ec-generic.c:684:ec_combine_lookup] 0-glustervol-disperse-109: Mismatc...
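The warning above shows the same inode with two different sizes (1769963520 vs 1769947136) across bricks of the disperse subvolume while a write is in flight. One way to tell a transient in-flight mismatch from a real heal backlog is to compare the file on each brick when writes are idle; a sketch with made-up hostnames and brick paths (not from the thread):

```shell
# Hypothetical hosts/paths, purely to illustrate the check: a size/mtime
# mismatch that persists at idle points at a genuine heal backlog; one
# that only appears mid-write is expected and transient.
for host in node1 node2 node3; do
    ssh "$host" stat -c '%n %s %y' /bricks/brick1/path/to/file
done
```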
2017 Aug 29
2
error msg in the glustershd.log
...> Whenever I start a write process to the volume (mounted through FUSE), I
> see this kind of error, and the glustershd process consumes some CPU
> until the write process completes.
>
> [2017-08-28 10:01:13.030710] W [MSGID: 122006] [ec-combine.c:191:ec_iatt_combine]
> 0-glustervol-disperse-109: Failed to combine iatt (inode:
> 11548094941524765708-11548094941524765708, links: 1-1, uid: 0-0, gid:
> 0-0, rdev: 0-0, size: 1769963520-1769947136, mode: 100755-100755)
> [2017-08-28 10:01:13.030752] N [MSGID: 122029]
> [ec-generic.c:684:ec_combine_lookup] 0-glustervol-d...
2017 Aug 29
0
error msg in the glustershd.log
...ly using glusterfs 3.10.1
Whenever I start a write process to the volume (mounted through FUSE), I see this kind of error, and the glustershd process consumes some CPU until the write process completes.
[2017-08-28 10:01:13.030710] W [MSGID: 122006] [ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to combine iatt (inode: 11548094941524765708-11548094941524765708, links: 1-1, uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode: 100755-100755)
[2017-08-28 10:01:13.030752] N [MSGID: 122029] [ec-generic.c:684:ec_combine_lookup] 0-glustervol-disperse-109: Mismat...
2017 Aug 31
0
error msg in the glustershd.log
...rt write process to volume (volume mounted thru fuse) I am
>> seeing this kind of error, and the glustershd process consumes some CPU
>> until the write process completes.
>>
>> [2017-08-28 10:01:13.030710] W [MSGID: 122006]
>> [ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to
>> combine iatt (inode: 11548094941524765708-11548094941524765708, links:
>> 1-1, uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode:
>> 100755-100755)
>> [2017-08-28 10:01:13.030752] N [MSGID: 122029]
>> [ec-generic.c:684:ec_combi...
2017 Aug 31
1
error msg in the glustershd.log
...ly using glusterfs 3.10.1
Whenever I start a write process to the volume (mounted through FUSE), I see this kind of error, and the glustershd process consumes some CPU until the write process completes.
[2017-08-28 10:01:13.030710] W [MSGID: 122006] [ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to combine iatt (inode: 11548094941524765708-11548094941524765708, links: 1-1, uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode: 100755-100755)
[2017-08-28 10:01:13.030752] N [MSGID: 122029] [ec-generic.c:684:ec_combine_lookup] 0-glustervol-disperse-109: Mismat...
2017 Nov 09
2
Error logged in fuse-mount log file
...o:gluster-users at gluster.org>>
Hi,
I am using glusterfs 3.10.1 and I am seeing the message below in the fuse-mount log file.
What does this error mean? Should I worry about it, and how do I resolve it?
[2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
[2017-11-07 11:59:17.218935] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies in /fol1/fol2/fol3/fol4/fol...
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
...process due to high CPU usage
and users reporting that folder listing was taking time.
But pausing the scrub resulted in the message below on some of the nodes.
Also, I can see that the scrub daemon is not showing in volume status for
some nodes.
Error msg type 1
--
[2017-09-01 10:04:45.840248] I [bit-rot.c:1683:notify]
0-glustervol-bit-rot-0: BitRot scrub ondemand called
[2017-09-01 10:05:05.094948] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt:
Volume file changed
[2017-09-01 10:05:06.401792] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt:
Volume file changed
[2017-09-01 10:05:07.544524] I [MSGID: 118035]
[bit-rot-scrub.c:129...
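For scrub control, and for checking whether the scrub daemon is actually alive on each node, the relevant CLI looks like the following (a sketch, with the volume name taken from the log above):

```shell
# Pause/resume the bitrot scrubber and check its per-node state:
gluster volume bitrot glustervol scrub pause
gluster volume bitrot glustervol scrub status
gluster volume bitrot glustervol scrub resume
```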
2017 Nov 13
2
Error logged in fuse-mount log file
...terfs 3.10.1 and I am seeing the message below in the fuse-mount log
>> file.
>>
>> What does this error mean? Should I worry about it, and how do I
>> resolve it?
>>
>> [2017-11-07 11:59:17.218973] W [MSGID: 109005]
>> [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory
>> selfheal failed: 1 subvolumes have unrecoverable errors. path =
>> /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
>> [2017-11-07 11:59:17.218935] I [MSGID: 109063]
>> [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht...
2017 Nov 10
0
Error logged in fuse-mount log file
...i,
>
> I am using glusterfs 3.10.1 and I am seeing the message below in the
> fuse-mount log file.
>
> What does this error mean? Should I worry about it, and how do I
> resolve it?
>
> [2017-11-07 11:59:17.218973] W [MSGID: 109005]
> [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory
> selfheal failed: 1 subvolumes have unrecoverable errors. path =
> /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
> [2017-11-07 11:59:17.218935] I [MSGID: 109063]
> [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies
&g...
2017 Nov 14
2
Error logged in fuse-mount log file
...gluster-users at gluster.org >
Hi,
I am using glusterfs 3.10.1 and I am seeing the message below in the fuse-mount log file.
What does this error mean? Should I worry about it, and how do I resolve it?
[2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
[2017-11-07 11:59:17.218935] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies in /fol1/fol2/fol3/fol4/f...
2017 Nov 13
0
Error logged in fuse-mount log file
...the message below in the fuse-mount log
>>> file.
>>>
>>> What does this error mean? Should I worry about it, and how do I
>>> resolve it?
>>>
>>> [2017-11-07 11:59:17.218973] W [MSGID: 109005]
>>> [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht:
>>> Directory selfheal failed: 1 subvolumes have unrecoverable errors.
>>> path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
>>> [2017-11-07 11:59:17.218935] I [MSGID: 109063]
>>> [dht-layout.c:713:dht_layout_normali...
2017 Nov 14
0
Error logged in fuse-mount log file
...>>>> file.
>>>>
>>>> What does this error mean? Should I worry about it, and how do I
>>>> resolve it?
>>>>
>>>> [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory]
>>>> 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have
>>>> unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
>>>> [2017-11-07 11:59:17.218935] I [MSGID: 109063]
>>>> [dht-layout.c:713:dht_layout_norm...
2017 Nov 07
0
error logged in fuse-mount log file
Hi,
I am using glusterfs 3.10.1 and I am seeing the message below in the
fuse-mount log file.
What does this error mean? Should I worry about it, and how do I resolve
it?
[2017-11-07 11:59:17.218973] W [MSGID: 109005]
[dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory
selfheal failed: 1 subvolumes have unrecoverable errors. path =
/fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
[2017-11-07 11:59:17.218935] I [MSGID: 109063]
[dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies
in /fol1/fol2/fol3/fol4/fol...
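The dht_selfheal_directory / dht_layout_normalize pair usually means DHT found an inconsistent directory layout across subvolumes. A common follow-up (a sketch, with the volume name taken from the log; not necessarily the whole fix if a subvolume really is unreachable) is to check heal state and then recompute layouts:

```shell
# Inspect pending heals first:
gluster volume heal glustervol info

# Recompute directory layouts across subvolumes without migrating data:
gluster volume rebalance glustervol fix-layout start
gluster volume rebalance glustervol status
```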
2011 Oct 20
1
trying to create a 3 brick CIFS NAS server
...ll
I am having problems connecting to a 3-brick volume from a Windows client via Samba/CIFS.
Volume Name: gluster-volume
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 172.22.0.53:/data
Brick2: 172.22.0.23:/data
Brick3: 172.22.0.35:/data
I created a /mnt/glustervol folder and then tried to mount the gluster-volume to it using:
mount -t cifs 172.22.0.53:/gluster-volume /mnt/glustervol
I get this error....
Retrying with upper case share name
mount error(6): No such device or address
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
What am I doin...
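mount.cifs expects a UNC-style //server/share source, not the NFS-style server:/path used above, and the share itself has to be exported by Samba on the server (a Gluster volume is not a CIFS share by itself). A hedged sketch, assuming the share name in smb.conf matches the volume name:

```shell
# CIFS sources are UNC paths; this assumes smb.conf on 172.22.0.53
# exports a share named "gluster-volume":
mount -t cifs //172.22.0.53/gluster-volume /mnt/glustervol -o username=guest

# Server side, a minimal smb.conf share over a locally mounted volume
# might look like (illustrative, not from the thread):
#   [gluster-volume]
#   path = /mnt/glustervol-local
#   read only = no
#   guest ok = yes
```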
2013 Nov 27
0
NFS client problems
I have created a 2-node replicated cluster with GlusterFS 3.4.1 on CentOS 6.4. Mounting the volume locally on each server using the native client works fine; however, I am having issues with a separate client-only server from which I wish to mount the gluster volume over NFS.
Volume Name: glustervol
Type: Replicate
Volume ID: 6a5dde86-...
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterserver1:/mnt/glusterv0
Brick2: glusterserver2:/mnt/glusterv0
On nfs client:
mount -o vers=3 glusterserver1:/glustervol /mnt/glustervol
mount: glusterserve...
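Gluster's built-in NFS server only speaks NFSv3, so forcing vers=3 as above is right; clients also commonly need tcp and nolock (or a working rpcbind/NLM). A sketch of the usual checks, reusing the names above:

```shell
# Gluster NFS is v3-only; nolock sidesteps NLM problems on some clients.
mount -t nfs -o vers=3,tcp,nolock glusterserver1:/glustervol /mnt/glustervol

# On the server, confirm the gluster NFS service is actually up and exporting:
gluster volume status glustervol nfs
showmount -e glusterserver1
```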
2017 Dec 11
2
reset-brick command questions
...with following command -
>
> gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH
> HOSTNAME:BRICKPATH commit force
This fails. I unmounted the gluster path, formatted a fresh disk,
mounted it on the old mount point, and created the brick subdir on it.
gluster volume reset-brick glustervol
gluster1:/gluster/brick1/glusterbrick1
gluster1:/gluster/brick1/glusterbrick1 commit force
volume reset-brick: failed: Source brick must be stopped. Please use
gluster volume reset-brick <volname> <dst-brick> start.
Why would I even need to specify the "HOSTNAME:...
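The failure message itself names the missing step: the source brick must be stopped with `start` before the `commit force`. The sequence, reusing the names from the command above, would be:

```shell
# 1. Stop the brick process for the source brick:
gluster volume reset-brick glustervol \
        gluster1:/gluster/brick1/glusterbrick1 start

# (replace the disk / remount / recreate the brick directory here)

# 2. Point the volume back at the brick path and restart it:
gluster volume reset-brick glustervol \
        gluster1:/gluster/brick1/glusterbrick1 \
        gluster1:/gluster/brick1/glusterbrick1 commit force
```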
2017 Dec 12
0
reset-brick command questions
...back into the volume.
The reset-brick command can be used in different scenarios. Another case is where you just want to change the hostname of a node's bricks to its IP address.
In that case you follow the same steps, but provide the IP address:
gluster volume reset-brick glustervol gluster1:/gluster/brick1/glusterbrick1 "gluster1 IP address" :/gluster/brick1/glusterbrick1 commit force
Since this one command covers different cases, we chose, to keep the command uniform, to require the brick path twice.
Coming to your case, I think you followed all the steps co...
2017 Nov 08
1
BUG: After stop and start wrong port is advertised
...nt clients.
In RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4,
in one case I had 59 differences in a total of 203 bricks.
I wrote a quick and dirty script to check all ports against the brick file and the running process.
#!/bin/bash
Host=`uname -n| awk -F"." '{print $1}'`
GlusterVol=`ps -eaf | grep /usr/sbin/glusterfsd| grep -v grep | awk '{print $NF}'| awk -F"-server" '{print $1}'|sort | uniq`
Port=`ps -eaf | grep /usr/sbin/glusterfsd| grep -v grep | awk '{print $NF}'| awk -F"." '{print $NF}'`
for Volumes in ${GlusterVol}...
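The snippet cuts off mid-loop, but the two awk splits above are the whole parsing trick: split the glusterfsd argv tail on "-server" for the volume name and on "." for the trailing port. A self-contained sketch of just that parsing (the identifier string below is a made-up example of the assumed format, not real glusterfsd output):

```shell
#!/bin/bash
# Same field splits as the script above, wrapped in functions. The format
# "<volume>-server.<host>.<brick-path>.<port>" is an assumption for
# illustration.
parse_vol()  { awk -F'-server' '{print $1}'  <<<"$1"; }
parse_port() { awk -F'.'       '{print $NF}' <<<"$1"; }

id="glustervol-server.gluster1.mnt-brick1.49152"
vol=$(parse_vol "$id")
port=$(parse_port "$id")
echo "$vol $port"   # prints "glustervol 49152"
```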
2017 Nov 08
0
BUG: After stop and start wrong port is advertised
...lusterfs 3.10.2 on Centos 7.4
> in one case I had 59 differences in a total of 203 bricks.
>
> I wrote a quick and dirty script to check all ports against the brick file
> and the running process.
> #!/bin/bash
>
> Host=`uname -n| awk -F"." '{print $1}'`
> GlusterVol=`ps -eaf | grep /usr/sbin/glusterfsd| grep -v grep | awk
> '{print $NF}'| awk -F"-server" '{print $1}'|sort | uniq`
> Port=`ps -eaf | grep /usr/sbin/glusterfsd| grep -v grep | awk '{print
> $NF}'| awk -F"." '{print $NF}'`
>
> for Vo...