similar to: gluster for home directories?

Displaying 20 results from an estimated 20000 matches similar to: "gluster for home directories?"

2018 Mar 08
0
gluster for home directories?
Hi Rik, Nice clarity and detail in the description. Thanks! inline... On Wed, Mar 7, 2018 at 8:29 PM, Rik Theys <Rik.Theys at esat.kuleuven.be> wrote: > Hi, > > We are looking into replacing our current storage solution and are > evaluating gluster for this purpose. Our current solution uses a SAN > with two servers attached that serve samba and NFS 4. Clients connect to
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, As I posted in my previous emails - glusterfs can never match the small-file/latency performance of NFS (especially an async one). That's inherent in the design; nothing you can do about it. Ondrej -----Original Message----- From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Rik Theys Sent: Monday, March 19, 2018 10:38 AM To: gluster-users at
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, I've done some similar tests and experience similar performance issues (see my 'gluster for home directories?' thread on the list). If I read your mail correctly, you are comparing an NFS mount of the brick disk against a gluster mount (using the fuse client)? Which options do you have set on the NFS export (sync or async)? From my tests, I concluded that the issue was not
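For context on the sync/async distinction raised above, a minimal sketch of how it appears in a kernel NFS server's /etc/exports; hostnames and paths here are placeholders, not taken from the thread:

  # /etc/exports -- illustrative only
  # async: the server acknowledges writes before they reach stable storage (fast, less durable)
  /export/home   client.example.com(rw,async,no_subtree_check)
  # sync: the server commits to disk before replying (slower, durable)
  # /export/home client.example.com(rw,sync,no_subtree_check)

After editing /etc/exports, exportfs -ra re-exports the filesystems with the new options.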
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, On 03/19/2018 03:42 PM, TomK wrote: > On 3/19/2018 5:42 AM, Ondrej Valousek wrote: > Removing NFS or NFS Ganesha from the equation, not very impressed on my > own setup either. For the writes it's doing, that's a lot of CPU usage > in top. Seems bottle-necked via a single execution core somewhere trying > to facilitate read / writes to the other bricks. > >
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 5:42 AM, Ondrej Valousek wrote: Removing NFS or NFS Ganesha from the equation, not very impressed on my own setup either. For the writes it's doing, that's a lot of CPU usage in top. Seems bottle-necked via a single execution core somewhere trying to facilitate read / writes to the other bricks. Writes to the gluster FS from within one of the gluster participating bricks:
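A sketch of the kind of test described above: a small-file write loop against a FUSE mount of the volume, run from one of the brick hosts, while watching the glusterfs client process in top. Volume name and mount point are placeholders:

  mount -t glusterfs localhost:/myvol /mnt/gluster
  time sh -c 'for i in $(seq 1 1000); do
      dd if=/dev/zero of=/mnt/gluster/smallfile.$i bs=4k count=1 2>/dev/null
  done'
  # in another terminal: top -p $(pgrep -f "glusterfs.*myvol" | head -1)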
2018 Mar 18
4
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Howdy all, We're experiencing terrible small file performance when copying or moving files on gluster clients. In the example below, Gluster is taking ~6 minutes to copy 128MB / 21,000 files sideways on a client; doing the same thing on NFS (which I know is a totally different solution etc. etc.) takes approximately 10-15 seconds(!). Any advice for tuning the volume or XFS settings would be
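Not from this thread, but as a hedged starting point, these are commonly cited volume options for small-file workloads on gluster 3.x; 'myvol' is a placeholder, and each option should be checked against your gluster version before applying:

  gluster volume set myvol cluster.lookup-optimize on
  gluster volume set myvol performance.stat-prefetch on
  gluster volume set myvol performance.cache-size 1GB
  gluster volume set myvol performance.parallel-readdir on
  gluster volume set myvol network.inode-lru-limit 200000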
2018 Mar 07
1
gluster for home directories?
Hi, On 2018-03-07 16:35, Ondrej Valousek wrote: > Why do you need to replace your existing solution? > If you don't need to scale out for capacity reasons, an async > NFS server will always outperform GlusterFS The current solution is 8 years old and is reaching its end of life. The reason we are also looking into gluster is that we like that it uses standard components
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 10:52 AM, Rik Theys wrote: > Hi, > > On 03/19/2018 03:42 PM, TomK wrote: >> On 3/19/2018 5:42 AM, Ondrej Valousek wrote: >> Removing NFS or NFS Ganesha from the equation, not very impressed on my >> own setup either. For the writes it's doing, that's a lot of CPU usage >> in top. Seems bottle-necked via a single execution core somewhere trying
2018 Mar 07
0
gluster for home directories?
Hi, Why do you need to replace your existing solution? If you don't need to scale out for capacity reasons, an async NFS server will always outperform GlusterFS. Ondrej
2017 Sep 29
2
nfs-ganesha locking problems
Hi, I have a problem with nfs-ganesha serving gluster volumes. I can read and write files, but then one of the DBAs tried to dump an Oracle DB onto the NFS share and got the following errors: Export: Release 11.2.0.4.0 - Production on Wed Sep 27 23:27:48 2017 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition
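If the failure turns out to be lock-related, one hedged thing to try (not something established in this thread) is forcing an NFSv4 mount on the database client, so that file locking is handled inside the NFS protocol rather than through the separate NLM/lockd service; server name and paths below are placeholders:

  mount -t nfs -o vers=4.1,hard server.example.com:/volume /mnt/oradump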
2017 Jul 14
2
Bug 1374166 or similar
Hi, yes, I mounted the Gluster volume and deleted the files from the volume, not the brick:
mount -t glusterfs hostname:volname /mnt
cd /mnt/some/directory
rm -rf *
A restart of nfs-ganesha is planned for tomorrow. I'll keep you posted. BTW: nfs-ganesha is running on a separate server in standalone configuration. Best Regards Bernhard 2017-07-14 10:43 GMT+02:00 Jiffin Tony Thottan <jthottan
2017 Jun 30
2
Some bricks are offline after restart, how to bring them online gracefully?
Hi Hari, thank you for your support! Did I try to check offline bricks multiple times? Yes - I gave it enough time (at least 20 minutes) to recover, but it stayed offline. Version? All nodes are 100% equal - I tried a fresh installation several times during my testing. Every time it is a CentOS Minimal install with all updates and without any additional software: uname -r 3.10.0-514.21.2.el7.x86_64
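A sketch of the checks being discussed, assuming a volume named myvol:

  gluster volume status myvol          # which bricks are online, and their PIDs/ports
  gluster volume status myvol detail   # per-brick details
  ps aux | grep glusterfsd             # do the brick processes exist at all?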
2017 Dec 04
2
gluster and nfs-ganesha
Hi Jiffin, I looked at the document, and there are 2 things: 1. In Gluster 3.8 it seems you don't need to do that at all; it creates this automatically, so why not in 3.10? 2. The step-by-step guide, in the last item, doesn't say where exactly I need to create the nfs-ganesha directory. The copy/paste seems irrelevant, as enabling nfs-ganesha automatically creates the ganesha.conf and
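For what it's worth, a hedged outline of the cluster-wide steps that the 3.10 flow expects around that guide (this does not answer the directory-location question above; the release notes referenced later in the thread cover the change in behaviour):

  gluster volume set all cluster.enable-shared-storage enable   # shared storage volume used for the HA config
  gluster nfs-ganesha enable                                    # turns on the ganesha HA cluster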
2017 Dec 02
2
gluster and nfs-ganesha
Hi, I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5. I'm trying to create a very simple 2-node cluster to be used with NFS-ganesha. I've created the bricks and the volume. Here's the output: # gluster volume info Volume Name: cluster-demo Type: Replicate Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2
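For context, a replica-2 volume like the one shown would typically have been created along these lines; hostnames and brick paths are placeholders, not taken from the message:

  gluster peer probe node2.example.com
  gluster volume create cluster-demo replica 2 \
      node1.example.com:/bricks/brick1/data node2.example.com:/bricks/brick1/data
  gluster volume start cluster-demo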
2017 Jun 30
0
Some bricks are offline after restart, how to bring them online gracefully?
Hi Jan, It is not recommended to automate a script for 'volume start force'. Bricks do not go offline just like that; there will be some genuine issue that triggers this. Could you please attach the entire glusterd log and the brick logs from around that time so that someone can take a look? Just to make sure, please check whether you have any network outage (using iperf or some
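The logs being requested live in the standard locations on an RPM-based install (file names vary slightly by version), and the manual counterpart of the script is a single command; 'myvol' and the node name are placeholders:

  ls /var/log/glusterfs/               # glusterd log lives here
  ls /var/log/glusterfs/bricks/        # one log file per local brick
  iperf -c other-node.example.com      # rule out network problems between nodes
  gluster volume start myvol force     # manual last resort; per the advice above, don't automate it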
2017 Dec 04
0
gluster and nfs-ganesha
On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote: > Hi, > > I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5. > > I'm trying to create a very simple 2-node cluster to be used with > NFS-ganesha. I've created the bricks and the volume. Here's the output: > > # gluster volume info > > Volume Name: cluster-demo > Type:
2017 Dec 06
0
gluster and nfs-ganesha
Hi, On Monday 04 December 2017 07:43 PM, Hetz Ben Hamo wrote: > Hi Jiffin, > > I looked at the document, and there are 2 things: > > 1. In Gluster 3.8 it seems you don't need to do that at all; it > creates this automatically, so why not in 3.10? Please refer to the mail [1] and the release notes [2] for glusterfs-3.9. Regards, Jiffin [1]
2017 Jun 30
1
Some bricks are offline after restart, how to bring them online gracefully?
Hi Jan, by multiple times I meant whether you were able to do the whole setup multiple times and hit the same issue, so that we have a consistent reproducer to work on. As grepping shows that the process doesn't exist, the bug I mentioned doesn't apply. It seems like another issue unrelated to the bug I mentioned (I have mentioned it now). When you say too often, this means there is a
2017 Jul 16
0
Bug 1374166 or similar
Hi, both Gluster servers were rebooted, and now the unlink directory is clean. Best Regards Bernhard 2017-07-14 12:43 GMT+02:00 Bernhard Dübi <1linuxengineer at gmail.com>: > Hi, > > yes, I mounted the Gluster volume and deleted the files from the > volume, not the brick > > mount -t glusterfs hostname:volname /mnt > cd /mnt/some/directory > rm -rf * > >
2017 Jul 18
1
Bug 1374166 or similar
On 16/07/17 20:11, Bernhard Dübi wrote: > Hi, > > both Gluster servers were rebooted, and now the unlink directory is clean. The following should have happened: if a delete operation is performed, gluster keeps the file in the .unlink directory if it has an open fd. In this case, since a lazy umount was performed, the ganesha server may still have kept those fds open for that client, so gluster keeps the unlink
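The directory being referred to can be inspected directly on a brick; a hedged example, assuming a brick path of /data/brick1:

  ls -l /data/brick1/.glusterfs/unlink/   # deleted files that still have open fds end up here
  lsof -nP | grep '(deleted)'             # which processes (e.g. ganesha) still hold them open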