Displaying 20 results from an estimated 2000 matches similar to: "Read-only FS"
2008 Aug 25
1
LOCALSTATEDIR
Hi
I'm doing a network install of Dovecot 1.1.2. All is fine apart from a
little annoying mistake I've found:
I do this:
> ./configure --prefix=/appli/tools_Linux/dovecot/1.1.2
--localstatedir=/var && make && make install
However, since --localstatedir is equal to a default value, it is
still being constructed as "PREFIX/var" for some reason, and in
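A quick way to check what configure actually recorded is to grep the generated build files; a sketch, run from the Dovecot build directory (this is generic autoconf behaviour, nothing Dovecot-specific):

grep localstatedir config.log | head -5     # shows the ./configure command line as logged
grep '^localstatedir' Makefile              # shows the value substituted into the Makefile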
2018 Mar 13
1
Expected performance for WORM scenario
On Tue, Mar 13, 2018 at 2:42 PM, Ondrej Valousek <
Ondrej.Valousek at s3group.com> wrote:
> Yes, I have had this in place already (well, except for the negative cache,
> but enabling that did not make much difference).
>
> To me, this is no surprise - nothing can match NFS performance for small
> files, for obvious reasons:
>
Could you give profile info of the run you did with
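For reference, the profile info being asked for here is typically gathered with gluster's built-in profiler; a sketch, with "gv0" standing in for the real volume name:

gluster volume profile gv0 start
# ... run the workload ...
gluster volume profile gv0 info
gluster volume profile gv0 stop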
2018 Mar 13
0
Expected performance for WORM scenario
Yes, I have had this in place already (well, except for the negative cache, but enabling that did not make much difference).
To me, this is no surprise - nothing can match NFS performance for small files, for obvious reasons:
1. A single server does not have to deal with distributed locks.
2. AFAIK, gluster does not support read/write delegations the same way NFS does.
3. Glusterfs is
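The negative (lookup) cache mentioned above is set per volume; a sketch assuming a gluster release that ships the nl-cache translator (3.11 and later), with "gv0" as a placeholder volume name - option names may differ between releases:

gluster volume set gv0 performance.nl-cache on
gluster volume set gv0 performance.nl-cache-timeout 600
gluster volume set gv0 network.inode-lru-limit 200000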
2018 Mar 14
2
Expected performance for WORM scenario
We can't stick to a single server because of the law. Redundancy is a legal
requirement for our business.
I'm sort of giving up on gluster though. It would seem a pretty stupid
content-addressable storage would suit our needs better.
On 13 March 2018 at 10:12, Ondrej Valousek <Ondrej.Valousek at s3group.com>
wrote:
> Yes, I have had this in place already (well, except for the negative
2018 Mar 13
3
Expected performance for WORM scenario
On Tue, Mar 13, 2018 at 1:37 PM, Ondrej Valousek <
Ondrej.Valousek at s3group.com> wrote:
> Well, it might be close to _synchronous_ NFS, but it is still well
> behind asynchronous NFS performance.
>
> A simple script (a bit extreme, I know, but it helps to draw the picture):
>
> #!/bin/csh
>
> set HOSTNAME=`/bin/hostname`
>
> set j=1
2018 Mar 13
0
Expected performance for WORM scenario
Well, it might be close to _synchronous_ NFS, but it is still well behind asynchronous NFS performance.
A simple script (a bit extreme, I know, but it helps to draw the picture):
#!/bin/csh
# Create 7000 tiny files, then remove them again.
set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
    echo ahoj > test.$HOSTNAME.$j
    @ j++
end
rm -rf test.$HOSTNAME.*
Takes 9 seconds to execute on the NFS share, but 90 seconds on
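A hedged way to reproduce that comparison (mount points and the script path are placeholders; the script writes into the current directory, so run it from each mount):

cd /mnt/nfs     && time csh /tmp/smallfile-test.csh
cd /mnt/gluster && time csh /tmp/smallfile-test.csh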
2018 Mar 14
0
Expected performance for WORM scenario
That seems unlikely. I pre-create the directory layout and then write to
directories I know exist.
I don't quite understand how any settings at all can reduce performance to
1/5000 of what I get when writing straight to ramdisk though, and
especially when running on a single node instead of in a cluster. Has
anyone else set this up and managed to get better write performance?
On 13 March
2018 Mar 13
5
Expected performance for WORM scenario
On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <
Ondrej.Valousek at s3group.com> wrote:
> Hi,
>
> Gluster will never perform well for small files.
>
> I believe there is nothing you can do about this.
>
It is bad compared to a disk filesystem but I believe it is much closer to
NFS now.
Andreas,
Looking at your workload, I suspect there are a lot of LOOKUPs
2018 Mar 12
0
Expected performance for WORM scenario
Hi,
Gluster will never perform well for small files.
I believe there is nothing you can do about this.
Ondrej
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Andreas Ericsson
Sent: Monday, March 12, 2018 1:47 PM
To: Gluster-users at gluster.org
Subject: [Gluster-users] Expected performance for WORM scenario
Heya fellas.
I've been
2012 Jun 01
2
ssh & control groups
Hi List,
I am looking for an option for sshd to start the user's shell (when logging in interactively to a remote host) in a control group via cgexec -
so, for example:
/bin/cgexec -g <username> /bin/bash
This would be extremely handy on Linux terminal servers to control users' access to system resources (protecting the system from a malicious
user hogging the machine by running cpu/memory
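One hedged way to get this with stock OpenSSH is a ForceCommand wrapper that re-execs the user's shell under cgexec; a sketch assuming libcgroup is installed and per-user cgroups already exist (the group name, paths and the wrapper script are made up for illustration):

# /etc/ssh/sshd_config
Match Group ts-users
    ForceCommand /usr/local/bin/cgshell

# /usr/local/bin/cgshell (hypothetical wrapper)
#!/bin/sh
# Re-exec the user's login shell inside their cpu+memory cgroup.
if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
    exec /bin/cgexec -g cpu,memory:users/$USER "$SHELL" -c "$SSH_ORIGINAL_COMMAND"
else
    exec /bin/cgexec -g cpu,memory:users/$USER "$SHELL" -l
fi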
2018 Mar 07
1
gluster for home directories?
Hi,
On 2018-03-07 16:35, Ondrej Valousek wrote:
> Why do you need to replace your existing solution?
> If you don't need to scale out due to the capacity reasons, the async
> NFS server will always outperform GlusterFS
The current solution is 8 years old and is reaching its end of life.
The reason we are also looking into gluster is that we like that it uses
standard components
2007 Sep 19
18
sip.conf best practices?
All - I've been wrestling with how to best structure the sip device
accounts on a new asterisk server I'm deploying. All of the sip
devices (currently only Linksys SPA941s) will reside on the same
subnet as the server, and I have already set up a decent automatic
provisioning system for the phones. When the rollout is complete,
there will be about 100 SIP devices authenticating and
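Not a definitive answer, but the usual starting point for a large number of near-identical devices is sip.conf templates, so each phone entry only carries what actually differs; a sketch with made-up extension numbers and secrets:

[spa941-template](!)
type=friend
host=dynamic
context=internal
qualify=yes
disallow=all
allow=ulaw

[1001](spa941-template)
secret=CHANGEME-1001
mailbox=1001

[1002](spa941-template)
secret=CHANGEME-1002
mailbox=1002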
2018 Mar 06
0
pNFS
Hi list,
I am wondering why we need the Ganesha user-land NFS server in order to get pNFS working.
I understand Ganesha is necessary on the MDS, but a standard kernel-based NFS server should be sufficient on the DS bricks (which should bring us additional performance), right?
Could someone clarify?
Thanks,
Ondrej
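For what it's worth, pNFS is an NFSv4.1 feature, so whichever server plays the MDS role has to speak v4.1 and hand out layouts; on the client side the mount looks roughly like this (host, volume and mount point are placeholders):

mount -t nfs -o vers=4.1 mds.example.com:/gv0 /mnt/gv0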
2018 Mar 07
0
gluster for home directories?
Hi,
Why do you need to replace your existing solution?
If you don't need to scale out due to capacity reasons, the async NFS server will always outperform GlusterFS.
Ondrej
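"Async" here is the knfsd export option that lets the server acknowledge writes before they reach stable storage; a sketch of such an export (path and subnet are placeholders):

# /etc/exports
/export/home  192.168.1.0/24(rw,async,no_subtree_check)

exportfs -ra     # re-export after editing /etc/exports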
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
As I posted in my previous emails - glusterfs can never match NFS (especially an async one) for small-file performance and latency. That's by design.
Nothing you can do about it.
Ondrej
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Rik Theys
Sent: Monday, March 19, 2018 10:38 AM
To: gluster-users at
2018 Mar 12
4
Expected performance for WORM scenario
Heya fellas.
I've been struggling quite a lot to get glusterfs to perform even
half-decently with a write-intensive workload. Test numbers are from gluster
3.10.7.
We store a bunch of small files in a doubly-tiered sha1 hash fanout
directory structure. The directories themselves aren't overly full. Most of
the data we write to gluster is "write once, read probably never", so 99%
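For illustration only (not necessarily the poster's exact layout), a doubly-tiered sha1 fanout usually means the first two hex-digit pairs of the digest become two directory levels:

#!/bin/sh
# Map an object key to a two-level sha1 fanout path, e.g. ab/cd/abcd1234...
key="some-object-key"                              # placeholder
h=`printf '%s' "$key" | sha1sum | cut -c1-40`      # 40 hex characters
d1=`echo "$h" | cut -c1-2`
d2=`echo "$h" | cut -c3-4`
mkdir -p "$d1/$d2"
printf 'payload\n' > "$d1/$d2/$h"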
2018 Mar 07
0
Kernel NFS on GlusterFS
Gluster does the sync part better than corosync. It's not an
active/passive failover system; it's more all-active. Gluster handles the
recovery once all nodes are back online.
That requires the client tool chain to understand that a write goes to
all storage devices, not just the active one.
3.10 is a long-term support release. Upgrading to 3.12 or 4 is not a
significant issue once a replacement
2009 Jun 09
1
Read only configuration.
Hi,
I currently have a Samba share configured as follows:
[pub_fileshare]
comment = Public fileshare
path = /u02/pub
guest ok = Yes
writeable = Yes
There is a subfolder under /u02/pub called /u02/pub/expenses/hardware
that I need to make read only. How do I do this? I am new to using
Samba.
I configured the share /u02/pub/expenses/hardware using the
configuration below. This works as it is
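Not the only approach, but since Samba will not grant more access than the underlying filesystem allows (unless the share forces a privileged user), the simplest fix is usually to drop write permission on that subtree at the filesystem level; a sketch:

# Make the subtree read-only for everyone, including the guest account:
chmod -R a-w /u02/pub/expenses/hardware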
2018 Mar 07
4
Kernel NFS on GlusterFS
Hello,
I'm designing a 2-node, HA NAS that must support NFS. I had planned on
using GlusterFS native NFS until I saw that it is being deprecated. Then, I
was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA
support ended after 3.10 and its replacement is still a WIP. So, I landed
on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite
well. Are
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
Removing NFS or NFS-Ganesha from the equation, I'm not very impressed with my
own setup either. For the writes it's doing, that's a lot of CPU usage
in top. It seems bottlenecked on a single execution core somewhere, trying
to facilitate reads/writes to the other bricks.
Writes to the gluster FS from within one of the gluster participating
bricks: