Displaying 20 results from an estimated 30 matches for "entrylks".
2018 Apr 11
2
Unreasonably poor performance of replicated volumes
Hello everybody!
I have 3 gluster servers (*gluster 3.12.6, CentOS 7.2*; those are actually
virtual machines located on 3 separate physical XenServer 7.1 servers).
They are all connected via an InfiniBand network. Iperf3 shows around *23
Gbit/s network bandwidth* between each two of them.
Each server has 3 HDDs put into a *stripe 3* thin pool (LVM2) with a logical
volume created on top of it, formatted
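A quick way to re-check the raw network and disk numbers quoted above, independent of gluster, is with iperf3 and dd (a rough sketch; the hostname gluster2 and the mount point /mnt/test are placeholders):
# network: run a server on one node, then test from another
iperf3 -s                      # on gluster2
iperf3 -c gluster2 -P 4        # from another node, 4 parallel streams
# disk: direct sequential write against the local logical volume mount
dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 oflag=direct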
2017 Dec 21
2
stale file handle on gluster NFS client when trying to remove a directory
...I'm able to delete the directory. So could anyone help me
in knowing what could have happened or when in general I get such errors.
The following is NFS log:
[2017-12-21 13:56:01.592256] I [MSGID: 108019]
[afr-transaction.c:1903:afr_post_blocking_entrylk_cbk]
0-g_sitework2-replicate-5: Blocking entrylks failed.
[2017-12-21 13:56:01.594350] W [MSGID: 108019]
[afr-lk-common.c:1064:afr_log_entry_locks_failure]
0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks
on at least one child while attempting RMDIR on
{pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
[2017-12-2...
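When blocking entry locks fail like this, a statedump of the volume usually shows which client is still holding them (a minimal sketch; g_sitework2 is the volume name from the log above, and the dump location assumes the default statedump path):
gluster volume statedump g_sitework2
# dumps are written on each brick node, by default under /var/run/gluster/
grep -A 3 entrylk /var/run/gluster/*.dump.*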
2018 Apr 12
0
Unreasonably poor performance of replicated volumes
Guess you went through user lists and tried something like this already
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I have the same exact setup, and below is as far as it went after months of
trial and error.
We all have more or less the same setup and the same issue - you can find
posts like yours on a daily basis.
On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva
2018 Apr 13
1
Unreasonably poor performance of replicated volumes
Thanks a lot for your reply!
You guessed it right though - mailing lists, various blogs, documentation,
videos and even source code at this point. Changing some of the options
does make performance slightly better, but nothing particularly
groundbreaking.
So, if I understand you correctly, no one has yet managed to get acceptable
performance (relative to underlying hardware capabilities) with
2018 Jan 03
0
stale file handle on gluster NFS client when trying to remove a directory
...could anyone help me
> in knowing what could have happened or when in general I get such errors.
>
>
> The following is NFS log:
>
>
> [2017-12-21 13:56:01.592256] I [MSGID: 108019] [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk]
> 0-g_sitework2-replicate-5: Blocking entrylks failed.
>
> [2017-12-21 13:56:01.594350] W [MSGID: 108019]
> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
> 0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks
> on at least one child while attempting RMDIR on {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c,...
2018 Jan 03
1
stale file handle on gluster NFS client when trying to remove a directory
...what could have happened or when in general I get such errors.
>>
>>
>> The following is NFS log:
>>
>>
>> [2017-12-21 13:56:01.592256] I [MSGID: 108019]
>> [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk]
>> 0-g_sitework2-replicate-5: Blocking entrylks failed.
>>
>> [2017-12-21 13:56:01.594350] W [MSGID: 108019]
>> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
>> 0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks
>> on at least one child while attempting RMDIR on
>> {pgfid:23558c59...
2013 Feb 26
0
Replicated Volume Crashed
...r-self-heal-common.c:1189:afr_sh_missing_entry_call_impunge_recreate]
0-adata-replicate-8: no missing files - /files/random-file.dat. proceeding
to metadata check
[2013-02-25 19:44:59.816525] I
[afr-self-heal-common.c:1941:afr_sh_post_nb_entrylk_conflicting_sh_cbk]
0-adata-replicate-1: Non blocking entrylks failed.
[2013-02-25 19:44:59.816554] E
[afr-self-heal-common.c:2160:afr_self_heal_completion_cbk]
0-adata-replicate-1: background data missing-entry gfid self-heal failed
on /files/random-file.dat
[2013-02-25 19:47:01.548989] I
[afr-self-heal-common.c:1941:afr_sh_post_nb_entrylk_conflicting_sh_cbk...
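When self-heal keeps failing like this, a first step is to ask the self-heal daemon what it still considers pending (a rough sketch; adata is the volume name taken from the log prefixes above):
gluster volume heal adata info
gluster volume heal adata info split-brain
# optionally trigger a full heal crawl
gluster volume heal adata full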
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
The profile seems to suggest very high latencies on the brick at
ovirt1.nwfiber.com:/gluster/brick1/engine
ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1?
Are the bricks of the engine volume on both these servers identical in terms of
their config?
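One quick way to compare the two bricks side by side is the volume info/status output (a sketch; 'engine' is the volume name referenced above):
gluster volume info engine
gluster volume status engine detail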
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at palousetech.com> wrote:
> Hi:
>
> Thank you. I
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
Hi all again:
I'm now subscribed to gluster-users as well, so I should get any replies
from that side too.
At this point, I am seeing acceptable (although slower than I expect)
performance much of the time, with periodic massive spikes in latency
(occasionally so bad as to cause ovirt to detect a bad engine health
status). Often, if I check the logs just then, I'll see those call traces
2009 Jan 14
4
locks feature not loading ? (2.0.0rc1)
Hi all,
I upgraded from 1.4.0rc3 to 2.0.0rc1 in my test environment, and while
the upgrade itself went smoothly, I appear to be having problems with
the (posix-)locks feature. :( The feature is clearly declared in the
server config file, and according to the DEBUG-level logs, it is loaded
successfully at runtime; however, when Gluster attempts to lock an
object (for the purposes of AFR
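For reference, a server-volfile declaration of the locks feature from that era looks roughly like this (a sketch; 'posix' is a placeholder for the underlying storage/posix volume name):
volume locks
  type features/locks    # older configs spell this features/posix-locks
  subvolumes posix
end-volume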
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
I've been back at it, and still am unable to get more than one of my
physical nodes to come online in ovirt, nor am I able to get more than the
two gluster volumes (storage domains) to show online within ovirt.
In Storage -> Volumes, they all show offline (many with one brick down,
which is correct: I have one server off).
However, in Storage -> domains, they all show down (although
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
On Thu, May 31, 2018 at 3:16 AM, Jim Kusznir <jim at palousetech.com> wrote:
> I've been back at it, and still am unable to get more than one of my
> physical nodes to come online in ovirt, nor am I able to get more than the
> two gluster volumes (storage domains) to show online within ovirt.
>
> In Storage -> Volumes, they all show offline (many with one brick down,
2017 Sep 05
0
Slow performance of gluster volume
OK, my understanding is that with preallocated disks the performance with
and without shard will be the same.
In any case, please attach the volume profile[1], so we can see what else
is slowing things down.
-Krutika
[1] -
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
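The profile referred to in [1] is collected roughly like this (a sketch; VOLNAME is a placeholder):
gluster volume profile VOLNAME start
# run the slow workload, then dump and stop profiling
gluster volume profile VOLNAME info
gluster volume profile VOLNAME stop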
On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi
2013 May 24
0
Problem After adding Bricks
Hello, I have run into some performance issues after adding bricks to
a 3.3.1 volume. Basically I am seeing very high CPU usage and
extremely degraded performance. I started a re-balance but stopped it
after a couple of days. The logs have a lot of entries for split-brain as
well as "Non Blocking entrylks failed for". For some of the
directories on the client, doing an ls will show multiple entries for
the same directory (ls below). I am wondering if it is just
spinning trying to heal itself? I have been able to fix some of these
entries by removing gfid files, stat-ing etc, however I feel I...
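The gfid-file approach mentioned here usually means locating the matching entry under the brick's .glusterfs directory (a rough sketch; /export/brick1 and the gfid value are placeholders):
# read the entry's gfid straight from the brick (needs the attr package)
getfattr -d -m . -e hex /export/brick1/path/to/entry
# the corresponding gfid entry lives under .glusterfs/<first 2 hex chars>/<next 2>/<full gfid>
ls -l /export/brick1/.glusterfs/ab/cd/abcd1234-...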
2011 Aug 24
1
Input/output error
Hi, everyone.
It's nice meeting you.
I am poor at English...
I am writing this because I'd like to update GlusterFS to 3.2.2-1, and I want
to change from a gluster mount to an NFS mount.
I installed GlusterFS 3.2.1 one week ago, with 2 servers in replication.
OS: CentOS 5.5 64bit
RPM: glusterfs-core-3.2.1-1
glusterfs-fuse-3.2.1-1
command
gluster volume create syncdata replica 2 transport tcp
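For the gluster-mount vs NFS-mount part, the two client-side mounts look roughly like this (a sketch; server1 and the mount point are placeholders, and Gluster's built-in NFS server speaks NFSv3 only):
mount -t glusterfs server1:/syncdata /mnt/syncdata
mount -t nfs -o vers=3,mountproto=tcp server1:/syncdata /mnt/syncdata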
2017 Sep 05
3
Slow performance of gluster volume
Hi Krutika,
I already have a preallocated disk on VM.
Now I am checking performance with dd on the hypervisors which have the
gluster volume configured.
I also tried several values of shard-block-size and I keep getting the same
low values on write performance.
Enabling client-io-threads also did not have any effect.
The version of gluster I am using is glusterfs 3.8.12 built on May 11 2017
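For reference, the two settings mentioned above are changed per volume with gluster volume set (a sketch; VOLNAME and the 64MB value are illustrative, not recommendations):
gluster volume set VOLNAME features.shard-block-size 64MB
gluster volume set VOLNAME performance.client-io-threads on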
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it seems a lot like
https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from
qemu had pointed out that this would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini <pbonzini at redhat.com>
Date: Wed Jun 21 16:35:46 2017
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we discovered
that when one of the replicate nodes reboots and starts the glusterd daemon,
gluster crashes because the other
replicate node's CPU usage reaches 100%.
Our gluster info:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Options Reconfigured:
performance.cache-size: 3GB
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
...-diskusage.c:96:dht_du_info_cbk] 0-gv0-dht: subvolume
'gv0-replicate-0': avail_percent is: 99.00 and avail_space is: 21425758208
and avail_inodes is: 99.00
[2017-09-20 13:34:23.353086] D [MSGID: 0]
[afr-transaction.c:1934:afr_post_nonblocking_entrylk_cbk]
0-gv0-replicate-0: Non blocking entrylks done. Proceeding to FOP
[2017-09-20 13:34:23.353722] D [MSGID: 0]
[dht-selfheal.c:1879:dht_selfheal_layout_new_directory] 0-gv0-dht: chunk
size = 0xffffffff / 20466 = 209858.658018
[2017-09-20 13:34:23.353748] D [MSGID: 0]
[dht-selfheal.c:1920:dht_selfheal_layout_new_directory] 0-gv0-dht:
assig...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
[Adding gluster-users back]
Nothing amiss with volume info and status.
Can you check the agent.log and broker.log? They will be under
/var/log/ovirt-hosted-engine-ha/
Also check the gluster client logs - under
/var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log
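A minimal way to watch those logs while reproducing the issue (a sketch; the exact client log file name depends on the mount path, hence the glob):
tail -f /var/log/ovirt-hosted-engine-ha/agent.log /var/log/ovirt-hosted-engine-ha/broker.log
tail -f /var/log/glusterfs/rhev-data-center-mnt-glusterSD*.log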
On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim at palousetech.com> wrote:
> I believe the gluster data store for