similar to: pNFS

Displaying 20 results from an estimated 2000 matches similar to: "pNFS"

2018 Mar 13
1
Expected performance for WORM scenario
On Tue, Mar 13, 2018 at 2:42 PM, Ondrej Valousek <Ondrej.Valousek at s3group.com> wrote: > Yes, I have had this in place already (well, except for the negative cache, > but enabling that did not have much effect). > > To me, this is no surprise - nothing can match NFS performance for small > files, for obvious reasons: > Could you give profile info of the run you did with
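
The profile info being asked for can be gathered with Gluster's built-in profiler; a minimal sketch, assuming a volume named "myvol" (the volume name is illustrative):

    # start collecting per-FOP counts and latencies on the volume
    gluster volume profile myvol start
    # ... re-run the small-file workload on a client ...
    # dump cumulative and interval statistics (LOOKUP, CREATE, WRITE, ...)
    gluster volume profile myvol info
    # stop profiling when done
    gluster volume profile myvol stop
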
2018 Mar 13
0
Expected performance for WORM scenario
Yes, I have had this in place already (well, except for the negative cache, but enabling that did not have much effect). To me, this is no surprise - nothing can match NFS performance for small files, for obvious reasons:
1. Single server, so it does not have to deal with distributed locks.
2. Afaik, gluster does not support read/write delegations the same way NFS does.
3. Glusterfs is
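
The "negative cache" mentioned above is Gluster's negative-lookup cache. A hedged sketch of the md-cache/nl-cache options commonly suggested for small-file workloads (the volume name "myvol" is illustrative, and option availability depends on the Gluster release):

    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600
    gluster volume set myvol performance.nl-cache on            # cache negative (ENOENT) lookups
    gluster volume set myvol performance.nl-cache-timeout 600
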
2018 Mar 14
2
Expected performance for WORM scenario
We can't stick to a single server because of the law. Redundancy is a legal requirement for our business. I'm sort of giving up on gluster though. It would seem a pretty dumb content-addressable store would suit our needs better. On 13 March 2018 at 10:12, Ondrej Valousek <Ondrej.Valousek at s3group.com> wrote: > Yes, I have had this in place already (well, except for the negative
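
For illustration only, the content-addressable idea amounts to storing each object under the hash of its contents, so identical files deduplicate and existing data is never overwritten; a toy shell sketch (paths and layout are made up):

    #!/bin/sh
    # store a file under its SHA-256 digest and print its address
    store() {
        hash=$(sha256sum "$1" | awk '{print $1}')
        dir="/srv/cas/$(printf '%s' "$hash" | cut -c1-2)"   # 2-char fan-out directory
        mkdir -p "$dir"
        cp -n "$1" "$dir/$hash"                             # -n: skip if the object already exists
        echo "$hash"
    }
    store /tmp/example.dat
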
2018 Mar 13
0
Expected performance for WORM scenario
Well, it might be close to _synchronous_ NFS, but it is still well behind asynchronous NFS performance. Simple script (a bit extreme, I know, but it helps to draw the picture):

#!/bin/csh
set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
    echo ahoj > test.$HOSTNAME.$j
    @ j++
end
rm -rf test.$HOSTNAME.*

Takes 9 seconds to execute on the NFS share, but 90 seconds on
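
One way to reproduce the comparison is to run the same script from a directory on each mount and time it; a sketch, assuming the script above is saved as ~/makefiles.csh (the file name and mount points are illustrative):

    ( cd /mnt/nfs     && time csh ~/makefiles.csh )   # async NFS mount
    ( cd /mnt/gluster && time csh ~/makefiles.csh )   # GlusterFS FUSE mount
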
2018 Mar 13
3
Expected performance for WORM scenario
On Tue, Mar 13, 2018 at 1:37 PM, Ondrej Valousek <Ondrej.Valousek at s3group.com> wrote:
> Well, it might be close to _synchronous_ NFS, but it is still well
> behind asynchronous NFS performance.
>
> Simple script (a bit extreme, I know, but it helps to draw the picture):
>
> #!/bin/csh
> set HOSTNAME=`/bin/hostname`
> set j=1
2018 Mar 14
0
Expected performance for WORM scenario
That seems unlikely. I pre-create the directory layout and then write to directories I know exist. I don't quite understand how any settings at all can reduce performance to 1/5000 of what I get when writing straight to ramdisk though, and especially when running on a single node instead of in a cluster. Has anyone else set this up and managed to get better write performance? On 13 March
2012 Jun 01
2
ssh & control groups
Hi List, I am looking for an option for sshd to start the user's shell (when logging in interactively to a remote host) in a control group via cgexec - so, for example: /bin/cgexec -g <username> /bin/bash. This would be extremely handy on Linux terminal servers to control users' access to system resources (protect the system from a malicious user hogging the machine by running cpu/memory
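
One possible approach - a sketch only, not a tested recipe: force every interactive login through a small wrapper via sshd's ForceCommand, and have the wrapper re-exec the shell under cgexec. The cgroup path users/$USER and the wrapper path are assumptions; the group must already exist (e.g. pre-created via cgconfig).

    # /etc/ssh/sshd_config
    Match Group users
        ForceCommand /usr/local/sbin/cgshell

    # /usr/local/sbin/cgshell
    #!/bin/sh
    # run the user's shell inside a per-user cpu/memory cgroup
    if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
        exec /bin/cgexec -g cpu,memory:users/"$USER" "$SHELL" -c "$SSH_ORIGINAL_COMMAND"
    else
        exec /bin/cgexec -g cpu,memory:users/"$USER" "$SHELL" -l
    fi
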
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 5:42 AM, Ondrej Valousek wrote: Removing NFS or NFS-Ganesha from the equation, I'm not very impressed with my own setup either. For the writes it's doing, that's a lot of CPU usage in top. It seems bottlenecked on a single execution core somewhere, trying to facilitate reads/writes to the other bricks. Writes to the gluster FS from within one of the participating gluster bricks:
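
A hedged way to confirm the single-core suspicion is to watch per-thread CPU in the brick and client processes, and to try raising the server-side io-thread count (the volume name and value are illustrative):

    top -H -p $(pgrep -x -d, glusterfsd)    # per-thread CPU of the brick daemons
    top -H -p $(pgrep -x -d, glusterfs)     # per-thread CPU of the FUSE client
    gluster volume set myvol performance.io-thread-count 32
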
2018 Mar 13
5
Expected performance for WORM scenario
On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <Ondrej.Valousek at s3group.com> wrote: > Hi, > > Gluster will never perform well for small files. > > I believe there is nothing you can do about this. > It is bad compared to a disk filesystem, but I believe it is much closer to NFS now. Andreas, looking at your workload, I suspect there are a lot of LOOKUPs
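
If excess LOOKUPs turn out to be the problem, one commonly cited knob is DHT's lookup-optimize, which avoids the broadcast lookup for files that hash to the expected brick; a hedged sketch (behaviour depends on the release and on rebalance state; volume name is illustrative):

    gluster volume set myvol cluster.lookup-optimize on
    gluster volume get myvol cluster.lookup-optimize    # verify the setting
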
2018 Mar 07
0
gluster for home directories?
Hi, Why do you need to replace your existing solution? If you don't need to scale out due to the capacity reasons, the async NFS server will always outperform GlusterFS. Ondrej
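
For reference, the "async NFS server" here is just a kernel NFS export with the async flag, which acknowledges writes before they reach stable storage (fast, but data can be lost on a server crash); a sketch of /etc/exports with an illustrative path and network:

    # /etc/exports
    /export/home  192.168.0.0/24(rw,async,no_subtree_check)
    # reload the export table
    exportfs -ra
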
2018 Mar 12
0
Expected performance for WORM scenario
Hi, Gluster will never perform well for small files. I believe there is nothing you can do about this. Ondrej From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Andreas Ericsson Sent: Monday, March 12, 2018 1:47 PM To: Gluster-users at gluster.org Subject: [Gluster-users] Expected performance for WORM scenario Heya fellas. I've been
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, As I posted in my previous emails - glusterfs can never match NFS (especially an async one) for small-file performance/latency. That's inherent to the design; nothing you can do about it. Ondrej -----Original Message----- From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Rik Theys Sent: Monday, March 19, 2018 10:38 AM To: gluster-users at
2008 Aug 25
1
LOCALSTATEDIR
Hi, I'm doing a network install of Dovecot 1.1.2. All is fine apart from a little annoying mistake I've found. I do this: > ./configure --prefix=/appli/tools_Linux/dovecot/1.1.2 --localstatedir=/var && make && make install However, since --localstatedir is equal to a default value, it is still being constructed as "PREFIX/var" for some reason, and in
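
A hedged way to see which value configure actually recorded, before digging further, is to inspect the generated files in the build tree:

    grep localstatedir config.log | head     # the configure command line and cached values
    grep '^localstatedir' Makefile           # what the generated makefiles will install with
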
2008 Sep 01
2
Read-only FS
Hi all, Imagine an NFS-mounted read-only file system containing directories full of plain mboxes. Any recommendations on how to approach access with Dovecot? I believe all the recommended NFS options apply; what about index/cache files, though? Can I deliberately switch them off for good? Performance degradation is not an issue. Thanks, R.
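
One hedged possibility for the read-only case is to keep Dovecot's index files out of the mail location entirely, either in memory or on local disk; the paths below are illustrative, and the exact syntax should be checked against the MailLocation docs for the Dovecot version in use:

    # dovecot.conf: keep indexes in memory, nothing is written next to the mboxes
    mail_location = mbox:~/mail:INDEX=MEMORY

    # or: write indexes to fast local storage instead of the read-only NFS mount
    #mail_location = mbox:~/mail:INDEX=/var/dovecot/index/%u
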
2018 Mar 07
1
gluster for home directories?
Hi, On 2018-03-07 16:35, Ondrej Valousek wrote: > Why do you need to replace your existing solution? > If you don't need to scale out due to the capacity reasons, the async > NFS server will always outperform GlusterFS The current solution is 8 years old and is reaching its end of life. The reason we are also looking into gluster is that we like that it uses standard components
2018 Mar 07
0
Kernel NFS on GlusterFS
Gluster does the sync part better than corosync. It's not an active/passive failover system; it's more all-active. Gluster handles the recovery once all nodes are back online. That requires the client toolchain to understand that a write goes to all storage devices, not just the active one. 3.10 is a long-term support release. Upgrading to 3.12 or 4 is not a significant issue once a replacement
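
The "Gluster handles the recovery" part can be observed directly once a node rejoins; a minimal sketch (the volume name is illustrative):

    gluster volume heal myvol info                     # files still pending self-heal
    gluster volume heal myvol statistics heal-count    # per-brick pending counts
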
2018 Mar 07
4
Kernel NFS on GlusterFS
Hello, I'm designing a 2-node, HA NAS that must support NFS. I had planned on using GlusterFS native NFS until I saw that it is being deprecated. Then, I was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA support ended after 3.10 and its replacement is still a WIP. So, I landed on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite well. Are
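
For the corosync/pacemaker layer, the usual building block is a floating IP that clients mount, kept together with the NFS export resources; a hedged pcs sketch (addresses, names and the exact resource set are illustrative):

    # export resource, brought up before the address so clients never see an empty server
    pcs resource create nas_export ocf:heartbeat:exportfs \
        clientspec=192.168.0.0/24 directory=/export options=rw,sync fsid=1
    # floating service address that NFS clients mount
    pcs resource create nas_vip ocf:heartbeat:IPaddr2 \
        ip=192.168.0.50 cidr_netmask=24 op monitor interval=10s
    pcs constraint colocation add nas_vip with nas_export
    pcs constraint order nas_export then nas_vip
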
2007 Sep 19
18
sip.conf best practices?
All - I've been wrestling with how best to structure the SIP device accounts on a new Asterisk server I'm deploying. All of the SIP devices (currently only Linksys SPA941s) will reside on the same subnet as the server, and I have already set up a decent automatic provisioning system for the phones. When the rollout is complete, there will be about 100 SIP devices authenticating and
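
One common way to keep ~100 near-identical devices manageable is Asterisk's template syntax in sip.conf; a hedged sketch (extension numbers, context name and secrets are placeholders):

    ; sip.conf -- shared settings defined once in a template
    [spa941-phone](!)
    type=friend
    host=dynamic
    context=internal
    dtmfmode=rfc2833
    qualify=yes

    ; each device then only carries what differs
    [1001](spa941-phone)
    secret=CHANGEME-1001
    mailbox=1001

    [1002](spa941-phone)
    secret=CHANGEME-1002
    mailbox=1002
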
2018 Mar 07
4
gluster for home directories?
Hi, We are looking into replacing our current storage solution and are evaluating gluster for this purpose. Our current solution uses a SAN with two servers attached that serve Samba and NFSv4. Clients connect to those servers using NFS or SMB. All users' home directories live on this server. I would like some insight into who else is using gluster for home directories for about 500
2018 May 09
0
3.12, ganesha and storhaug
All, I am upgrading the storage cluster from 3.8 to 3.10 or 3.12. I have 3.12 on the oVirt cluster. I would like to change the client connection method to NFS/NFS-Ganesha, as the FUSE method causes some issues with heavy Python users (mmap errors on file open for write). I see that nfs-ganesha was dropped after 3.10, yet there is an updated version in the 3.12 repo for CentOS 7 (which I am
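
Independent of the storhaug question, a per-volume NFS-Ganesha export with the Gluster FSAL looks roughly like the sketch below (volume name, hostname and export id are illustrative; the HA tooling normally generates this file):

    # /etc/ganesha/exports/export.myvol.conf
    EXPORT {
        Export_Id = 2;
        Path = "/myvol";
        Pseudo = "/myvol";
        Access_Type = RW;
        Squash = No_root_squash;
        SecType = "sys";
        FSAL {
            Name = GLUSTER;
            Hostname = "gluster-node1";
            Volume = "myvol";
        }
    }
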