Displaying 20 results from an estimated 90000 matches similar to: "No subject"
2009 Jun 24
2
Limit of Glusterfs help
HI:
Is there a limit on the number of servers that can be used as storage in Gluster?
2009-06-24
eagleeyes
From: gluster-users-request
Sent: 2009-06-24 03:00:42
To: gluster-users
Cc:
Subject: Gluster-users Digest, Vol 14, Issue 34
Send Gluster-users mailing list submissions to
gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
2009 Jun 24
0
Gluster-users Digest, Vol 14, Issue 34
HI:
Is there a limit on the number of servers that can be used as storage in Gluster?
2009-06-24
eagleeyes
From: gluster-users-request
Sent: 2009-06-24 03:00:42
To: gluster-users
Cc:
Subject: Gluster-users Digest, Vol 14, Issue 34
Send Gluster-users mailing list submissions to
gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
2018 Mar 05
1
[Gluster-devel] Removal of use-compound-fops option in afr
On Mon, Mar 5, 2018 at 9:19 AM, Amar Tumballi <atumball at redhat.com> wrote:
> Pranith,
>
>
>
>> We found that compound fops is not giving better performance in
>> replicate and I am thinking of removing that code. Sent the patch at
>> https://review.gluster.org/19655
>>
>>
> If I understand it right, as of now AFR is the only component
2018 May 23
0
cluster brick logs filling after upgrade from 3.6 to 3.12
Recently we updated a Gluster replicated setup from 3.6 to 3.12, stepping through 3.8 on the way.
Afterwards I noticed the brick logs were filling at an alarming rate on the server we have the NFS service running from:
$ sudo tail -20 /var/log/glusterfs/bricks/export-gluster-shared.log
[2018-05-23 06:22:12.405240] I [MSGID: 139001] [posix-acl.c:269:posix_acl_log_permit_denied]
2017 Aug 21
0
[Gluster-devel] How commonly applications make use of fadvise?
On Sat, Aug 19, 2017 at 4:27 PM, Csaba Henk <chenk at redhat.com> wrote:
> Hi Niels,
>
> On Fri, Aug 11, 2017 at 2:33 PM, Niels de Vos <ndevos at redhat.com> wrote:
> > On Fri, Aug 11, 2017 at 05:50:47PM +0530, Ravishankar N wrote:
> [...]
> >> To me it looks like fadvise (mm/fadvise.c) affects only the linux page cache
> >> behavior and is
2017 Jun 14
0
Transport Endpoint Not connected while running sysbench on Gluster Volume
Also, this is the profile output of this Volume:
gluster> volume profile mariadb_gluster_volume info cumulative
Brick: laeft-dccdb01p.core.epay.us.loc:/export/mariadb_backup/brick
-------------------------------------------------------------------
Cumulative Stats:
Block Size:          16384b+   32768b+   65536b+
No. of Reads:              0         0         0
2017 Aug 29
0
error msg in the glustershd.log
Whenever we perform a fop on a file on an EC volume, we also check its xattrs to see whether the file is healthy or not. If not, we trigger a heal.
Lookup is a fop for which we don't take an inodelk lock, so it is possible that the xattrs we get back from lookup differ across bricks.
Such a difference is not a reliable signal, but we still trigger a heal, and that is why you are seeing these messages.
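The decision described above can be sketched conceptually as follows. This is an illustrative Python sketch, not GlusterFS's C code: the xattr name `trusted.ec.version` and the per-brick dict representation are simplifying assumptions made for the example.

```python
# Illustrative sketch (NOT GlusterFS source): how an xattr mismatch seen
# during an unlocked lookup can trigger a possibly spurious heal.

def needs_heal(per_brick_xattrs):
    """per_brick_xattrs: one dict per brick, as returned by lookup.

    Lookup takes no inodelk lock, so these snapshots may be taken while a
    write is in flight -- a difference is a hint, not proof of damage.
    """
    versions = {x.get("trusted.ec.version") for x in per_brick_xattrs}
    return len(versions) > 1  # any disagreement triggers a heal attempt

# A write landed on brick 0, but brick 1's xattr update was observed
# mid-flight: a heal is triggered even though the file may be healthy.
bricks = [{"trusted.ec.version": 7}, {"trusted.ec.version": 6}]
print(needs_heal(bricks))  # True -> self-heal daemon logs a heal message
```

This is why the log messages are noisy but mostly harmless: the trigger errs on the side of healing.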
2017 Aug 11
2
[Gluster-devel] How commonly applications make use of fadvise?
On Fri, Aug 11, 2017 at 05:50:47PM +0530, Ravishankar N wrote:
>
>
> On 08/11/2017 04:51 PM, Niels de Vos wrote:
> > On Fri, Aug 11, 2017 at 12:47:47AM -0400, Raghavendra Gowdappa wrote:
> > > Hi all,
> > >
> > > In a conversation between me, Milind and Csaba, Milind pointed out
> > > fadvise(2) [1] and its potential benefits to Glusterfs'
2017 Aug 31
1
error msg in the glustershd.log
Based on this BZ https://bugzilla.redhat.com/show_bug.cgi?id=1414287
it has been fixed in glusterfs-3.11.0
---
Ashish
----- Original Message -----
From: "Amudhan P" <amudhan83 at gmail.com>
To: "Ashish Pandey" <aspandey at redhat.com>
Cc: "Gluster Users" <gluster-users at gluster.org>
Sent: Thursday, August 31, 2017 1:07:16 PM
Subject:
2017 Aug 31
0
error msg in the glustershd.log
Ashish, in which version is this issue fixed?
On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> I am using 3.10.1; from which version is this update available?
>
>
> On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com>
> wrote:
>
>>
>> Whenever we do some fop on EC volume on a file, we check the xattr also
2023 Feb 14
1
failed to close Bad file descriptor on file creation after using setfattr to test latency?
Hi all,
Running into a problem with my gluster here in a LAB env. Production is a
slightly different build (distributed replicated with multiple arbiter
bricks) and I don't see the same errors...yet. I only seem to have this
problem on 2 client vm's that I ran "setfattr -n trusted.io-stats-dump -v
output_file_id mount_point" on while trying to test for latency issues.
Curious
2018 Apr 12
0
Unreasonably poor performance of replicated volumes
I guess you have already gone through the user lists and tried something like this:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I have the exact same setup, and below is as far as it got after months of
trial and error.
We all have roughly the same setup and the same issue - you can find
posts like yours on a daily basis.
On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva
2011 Jul 11
0
Instability when using RDMA transport
I've run into a problem with Gluster stability with the RDMA transport. Below is a description of the environment, a simple script that can replicate the problem, and log files from my test system.
I can work around the problem by using the TCP transport over IPoIB, but would like some input on what may be making the RDMA transport fail in this case.
=====
Symptoms
=====
- Error from test
2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether it is parallel-readdir or readdir-ahead that causes the
disconnects, so we know which one to disable?
parallel-readdir was doing magic in the runs from this PDF from last year:
https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
-v
On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote:
> By the way, on a slightly related note, I'm pretty
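For readers unfamiliar with the option, the core idea behind parallel-readdir can be sketched as follows. This is an illustrative Python sketch, not Gluster's C translator; the subvolume dicts are stand-ins for DHT subvolumes, each `get` modeling one network request.

```python
# Illustrative sketch (NOT Gluster code): parallel-readdir's idea is to
# issue readdir to all DHT subvolumes concurrently instead of walking
# them one at a time, overlapping the per-subvolume latency.
from concurrent.futures import ThreadPoolExecutor

def readdir_serial(subvols, path):
    entries = []
    for sv in subvols:               # one roundtrip at a time
        entries.extend(sv.get(path, []))
    return sorted(entries)

def readdir_parallel(subvols, path):
    with ThreadPoolExecutor(max_workers=len(subvols)) as pool:
        results = pool.map(lambda sv: sv.get(path, []), subvols)
    entries = [e for chunk in results for e in chunk]
    return sorted(entries)           # same listing, overlapped latency

subvols = [{"/": ["a", "c"]}, {"/": ["b"]}]
assert readdir_parallel(subvols, "/") == readdir_serial(subvols, "/")
```

The trade-off discussed in this thread is that the extra concurrency has been reported to destabilize some setups, which is why the advice is to test the two options separately.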
2017 Aug 11
0
[Gluster-devel] How commonly applications make use of fadvise?
On 08/11/2017 04:51 PM, Niels de Vos wrote:
> On Fri, Aug 11, 2017 at 12:47:47AM -0400, Raghavendra Gowdappa wrote:
>> Hi all,
>>
>> In a conversation between me, Milind and Csaba, Milind pointed out
>> fadvise(2) [1] and its potential benefits to Glusterfs' caching
>> translators like read-ahead etc. After discussing about it, we agreed
>> that our
2012 Jun 04
1
Performance translators - an overview.
Hi,
The purpose of performance translators is to decrease system call latency
of applications and increase responsiveness of glusterfs.
The standard approach used within glusterfs to decrease system call latency
is to make sure we avoid network round-trip time as part of fop
processing. Depending on which fop we are dealing with, we have different
translators like read-ahead, io-cache,
2018 Apr 13
1
Unreasonably poor performance of replicated volumes
Thanks a lot for your reply!
You guessed it right though - mailing lists, various blogs, documentation,
videos, and even source code at this point. Changing some of the options
does make performance slightly better, but nothing particularly
groundbreaking.
So, if I understand you correctly, no one has yet managed to get acceptable
performance (relative to underlying hardware capabilities) with
2018 Jan 26
1
parallel-readdir is not recognized in GlusterFS 3.12.4
Dear Vlad,
I'm sorry, I don't want to test this again on my system just yet! It caused
too much instability for my users and I don't have enough resources for a
development environment. The only other variables that changed before the
crashes was the group metadata-cache[0], which I enabled the same day as
the parallel-readdir and readdir-ahead options:
$ gluster volume set homes
2017 Aug 29
2
error msg in the glustershd.log
I am using 3.10.1; from which version is this update available?
On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> wrote:
>
> Whenever we do some fop on EC volume on a file, we check the xattr also to
> see if the file is healthy or not. If not, we trigger heal.
> lookup is the fop for which we don't take inodelk lock so it is possible
> that the
2012 Jan 04
0
FUSE init failed
Hi,
I'm having an issue using the GlusterFS native client.
After doing a mount, the filesystem appears mounted, but any operation
results in a
"Transport endpoint is not connected"
message.
gluster peer status and volume info don't complain.
I've copied the mount log below which mentions an error at fuse_init.
The kernel is based on 2.6.15 and the FUSE API version is 7.3.
I'm using