Displaying 9 results from an estimated 9 matches for "iatt".
2017 Aug 29 (2 replies): error msg in the glustershd.log
...owing up? Currently using GlusterFS 3.10.1.
Whenever I start a write process to the volume (volume mounted through FUSE) I am seeing this kind of error, and the glustershd process consumes some percentage of CPU until the write process completes.
[2017-08-28 10:01:13.030710] W [MSGID: 122006]
[ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to
combine iatt (inode: 11548094941524765708-11548094941524765708, links: 1-1,
uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode:
100755-100755)
[2017-08-28 10:01:13.030752] N [MSGID: 122029]
[ec-generic.c:684:ec_combine_lookup] 0-glustervol...
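For anyone hitting this warning: it comes from the disperse translator (ec-combine.c, per the log) cross-checking the iatt (stat) replies returned by each brick before merging them into a single answer. A minimal sketch of that kind of consistency check, using a hypothetical trimmed-down iatt struct rather than the real GlusterFS definition:

/* Sketch only, not the actual GlusterFS source: refuse to combine
 * brick replies whose metadata disagrees. The struct is a hypothetical
 * subset of the fields the warning above prints. */
#include <stdbool.h>
#include <stdint.h>

struct iatt_min {
    uint64_t ia_ino;   /* inode */
    uint32_t ia_nlink; /* links */
    uint32_t ia_uid;   /* uid   */
    uint32_t ia_gid;   /* gid   */
    uint64_t ia_rdev;  /* rdev  */
    uint64_t ia_size;  /* size  */
    uint32_t ia_mode;  /* mode  */
};

/* Return true when two brick replies agree and can be combined. */
static bool iatt_combinable(const struct iatt_min *a,
                            const struct iatt_min *b)
{
    return a->ia_ino   == b->ia_ino   &&
           a->ia_nlink == b->ia_nlink &&
           a->ia_uid   == b->ia_uid   &&
           a->ia_gid   == b->ia_gid   &&
           a->ia_rdev  == b->ia_rdev  &&
           a->ia_size  == b->ia_size  &&  /* the only pair differing above */
           a->ia_mode  == b->ia_mode;
}

In the log above every field pair matches except size (1769963520 vs 1769947136), i.e. two bricks momentarily reported different file sizes, which is consistent with the bricks being queried while the write was still in flight.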
2017 Aug 29 (2 replies): error msg in the glustershd.log
...lusterfs 3.10.1
>
> whenever I start a write process to the volume (volume mounted through FUSE) I am
> seeing this kind of error, and the glustershd process consumes some percentage
> of CPU until the write process completes.
>
> [2017-08-28 10:01:13.030710] W [MSGID: 122006] [ec-combine.c:191:ec_iatt_combine]
> 0-glustervol-disperse-109: Failed to combine iatt (inode:
> 11548094941524765708-11548094941524765708, links: 1-1, uid: 0-0, gid:
> 0-0, rdev: 0-0, size: 1769963520-1769947136, mode: 100755-100755)
> [2017-08-28 10:01:13.030752] N [MSGID: 122029]
> [ec-generic.c:684:ec_com...
2017 Aug 29 (0 replies): error msg in the glustershd.log
...ing up? Currently using GlusterFS 3.10.1.
Whenever I start a write process to the volume (volume mounted through FUSE) I am seeing this kind of error, and the glustershd process consumes some percentage of CPU until the write process completes.
[2017-08-28 10:01:13.030710] W [MSGID: 122006] [ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to combine iatt (inode: 11548094941524765708-11548094941524765708, links: 1-1, uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode: 100755-100755)
[2017-08-28 10:01:13.030752] N [MSGID: 122029] [ec-generic.c:684:ec_combine_lookup] 0-glustervo...
2017 Aug 31 (0 replies): error msg in the glustershd.log
...whenever I start a write process to the volume (volume mounted through FUSE) I am
>> seeing this kind of error, and the glustershd process consumes some percentage
>> of CPU until the write process completes.
>>
>> [2017-08-28 10:01:13.030710] W [MSGID: 122006]
>> [ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to
>> combine iatt (inode: 11548094941524765708-11548094941524765708, links:
>> 1-1, uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode:
>> 100755-100755)
>> [2017-08-28 10:01:13.030752] N [MSGID: 122029]
>> [ec-...
2017 Aug 31 (1 reply): error msg in the glustershd.log
...ing up? Currently using GlusterFS 3.10.1.
Whenever I start a write process to the volume (volume mounted through FUSE) I am seeing this kind of error, and the glustershd process consumes some percentage of CPU until the write process completes.
[2017-08-28 10:01:13.030710] W [MSGID: 122006] [ec-combine.c:191:ec_iatt_combine] 0-glustervol-disperse-109: Failed to combine iatt (inode: 11548094941524765708-11548094941524765708, links: 1-1, uid: 0-0, gid: 0-0, rdev: 0-0, size: 1769963520-1769947136, mode: 100755-100755)
[2017-08-28 10:01:13.030752] N [MSGID: 122029] [ec-generic.c:684:ec_combine_lookup] 0-glustervo...
2013 Aug 21 (1 reply): FileSize changing in GlusterNodes
Hi,
When I upload files into the gluster volume, it replicates all the files to both gluster nodes, but the file size slightly varies by 4-10 KB, which changes the md5sum of the file.
Command to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4.
This is creating inconsistency between the files on both bricks. What is the reason for this changed file size, and how can
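One detail worth separating in a report like this: du -k reports allocated disk blocks, not the logical byte size, and block counts can legitimately differ between replicas (sparse regions, preallocation, different backend filesystems) even when the content is identical. A small hypothetical checker that prints both numbers so they can be compared across bricks:

/* Prints logical size (what md5sum hashes) vs. disk usage (roughly
 * what 'du -k' shows). Only the former must match across replicas. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;

    if (argc < 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("logical size: %lld bytes\n", (long long)st.st_size);
    printf("disk usage  : %lld KiB\n", (long long)st.st_blocks * 512 / 1024);
    return 0;
}

If the logical sizes or md5sums genuinely differ between bricks, that is real divergence; if only the du figures differ, it is usually just block allocation.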
2017 Sep 22 (2 replies): AFR: Fail lookups when quorum not met
...without taking into
account whether the lookup is served from the good or bad brick. We
always serve from the good brick whenever possible, but if there is
none, we just serve the lookup from one of the bricks that we got a
positive reply from.
We found a bug [1] due to this behavior where the iatt values returned
in the lookup call were bad and caused the client to hang. The proposed
fix [2] was to fail lookups when we definitely know the lookup can't be
trusted (by virtue of AFR xattrs indicating the replies we got from the
up bricks are indeed bad).
Note that this fix is *only* fo...
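A rough sketch of the proposed decision, using hypothetical reply flags in place of the real AFR structures and xattr parsing:

/* Sketch only: fail a lookup with EIO instead of serving iatt values
 * from a brick whose copy the AFR changelog xattrs mark as bad. */
#include <errno.h>
#include <stdbool.h>

struct afr_reply {
    bool positive; /* brick answered the lookup successfully     */
    bool accused;  /* AFR xattrs mark this brick's copy as stale */
};

/* Return 0 to serve the lookup, -EIO to fail it. */
static int afr_lookup_decide(const struct afr_reply *replies, int nbricks)
{
    bool any_good = false, any_positive = false;

    for (int i = 0; i < nbricks; i++) {
        if (!replies[i].positive)
            continue;
        any_positive = true;
        if (!replies[i].accused)
            any_good = true; /* a trustworthy copy exists: serve it */
    }

    if (!any_positive || !any_good)
        return -EIO; /* every positive reply is untrustworthy */
    return 0;
}

The trade-off is to prefer a clean EIO over returning iatt values that are known to be stale: a wrong answer (as in the bug above) can hang the client, while an error at least lets it fail predictably.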
2017 Oct 09 (0 replies): [Gluster-devel] AFR: Fail lookups when quorum not met
...ther the lookup is served from the good or bad brick. We always
>> serve from the good brick whenever possible, but if there is none, we just
>> serve the lookup from one of the bricks that we got a positive reply from.
>>
>> We found a bug [1] due to this behavior where the iatt values returned in
>> the lookup call were bad and caused the client to hang. The proposed fix [2]
>> was to fail lookups when we definitely know the lookup can't be trusted (by
>> virtue of AFR xattrs indicating the replies we got from the up bricks are
>> indeed bad)....
2011 Jun 09 (1 reply): NFS problem
Hi,
I got the same problem as Juergen,
My volume is a simple replicated volume with 2 hosts and GlusterFS 3.2.0.
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB