Displaying 20 results from an estimated 700 matches similar to: "Bandwidth and latency requirements"
2017 Sep 28
2
Bandwidth and latency requirements
Interesting table Karan!,
Could you please tell us how you did the benchmark? fio, iozone or
similar?
thanks
Arman.
On Wed, Sep 27, 2017 at 1:20 PM, Karan Sandha <ksandha at redhat.com> wrote:
> Hi Collin,
>
> During our arbiter latency testing for completion of ops we found the
> below results:- an arbiter node in another data centre and both the data
> bricks in the
2017 Sep 27
0
Bandwidth and latency requirements
Hi Collin,
During our arbiter latency testing for completion of ops we found the below
results:- an arbiter node in another data centre and both the data bricks
in the same data centre,
1) File-size 1 KB (10000 files )
2) mkdir
Latency    5ms        10ms       20ms       50ms       100ms      200ms
Create     755 secs   1410 secs  2717 secs  5874 secs  12908 secs 26113 secs
Mkdir      922 secs   1725 secs  3325 secs  8127
2017 Oct 30
0
Poor gluster performance on large files.
Hi Brandon,
Can you please turn OFF client-io-threads, as we have seen degradation of
performance with io-threads ON for sequential and random reads/writes?
Server event threads are 1 and client event threads are 2 by default.
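The change suggested above can be sketched as follows; the volume name is a
placeholder, not taken from the original mail:

```shell
gluster volume set <volname> performance.client-io-threads off
```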
Thanks & Regards
On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com>
wrote:
> Hi gluster users,
> I've spent several
2017 Oct 27
5
Poor gluster performance on large files.
Hi gluster users,
I've spent several months trying to get any kind of high performance out of
gluster. The current XFS/samba array is used for video editing and
300-400 MB/s for at least 4 clients is the minimum (currently a single Windows
client gets at least 700/700 for a single client over samba, peaking to 950
at times using blackmagic speed test). Gluster has been getting me as low
as
2017 Oct 12
0
Bandwidth and latency requirements
Apologies for the late reply.
Further to this, if my Linux clients are connecting using glusterfs-fuse and
I have my volumes defined like this:
dc1srv1:/gv_fileshare dc2srv1:/gv_fileshare dc1srv2:/gv_fileshare
dc2srv2:/gv_fileshare (replica 2)
How do I ensure that clients in dc1 prefer dc1srv1 and dc1srv2 while
clients in dc2 prefer the dc2 servers?
Is it simply a matter of ordering in
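One hedged approach, assuming the client goes by the order of volfile servers:
on a dc1 client, list a dc1 server as the primary volfile server and the rest
as backups in the mount options. The server and volume names below come from
the volume definition above; whether this actually pins I/O to dc1 is not
confirmed here:

```
# On a dc1 client: fetch the volfile from dc1srv1 first, fall back to the others
mount -t glusterfs -o backup-volfile-servers=dc1srv2:dc2srv1:dc2srv2 \
    dc1srv1:/gv_fileshare /mnt/gv_fileshare
```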
2017 Sep 29
0
Bandwidth and latency requirements
It was a simple emulation of network delay on the port of the server node
using the tc tool: tc qdisc add dev <port> root netem delay <time>ms. The
files were created using the dd tool (built into Linux) and mkdir. After the
IOs we verified there were no pending heals.
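A sketch of the setup described above; the interface name, directory and file
count are assumptions, not taken from the original mail (the benchmark used
10000 files of 1 KB):

```shell
# Inject WAN-like latency on the server node's NIC with netem (run as root),
# and remove it again after the run:
#   tc qdisc add dev eth0 root netem delay 100ms
#   tc qdisc del dev eth0 root netem

# Time the creation of many small files with dd, as in the benchmark:
TESTDIR=${TESTDIR:-$(mktemp -d)}
COUNT=${COUNT:-100}   # the original test used 10000
mkdir -p "$TESTDIR"
start=$(date +%s)
for i in $(seq 1 "$COUNT"); do
    dd if=/dev/zero of="$TESTDIR/file_$i" bs=1K count=1 status=none
done
end=$(date +%s)
echo "created $COUNT 1KB files in $((end - start)) secs"
```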
Thanks & Regards
On Thu, Sep 28, 2017 at 2:06 PM, Arman Khalatyan <arm2arm at gmail.com> wrote:
> Interesting
2017 Jul 29
1
Not possible to stop geo-rep after adding arbiter to replica 2
I managed to force stopping geo replication using the "force" parameter after the "stop", but there are still other issues related to the fact that my geo replication setup was created before I added the additional arbiter node to my replica.
For example when I would like to stop my volume I simply can't and I get the following error:
volume stop: myvolume: failed: Staging
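For reference, the force variant described above would look like this (volume
and slave names taken from the earlier mails in this thread; a sketch, and it
requires a running gluster cluster):

```
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop force
```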
2017 Jul 29
2
Not possible to stop geo-rep after adding arbiter to replica 2
Hello
To my two-node replica volume I have added an arbiter node for safety purposes. On that volume I also have geo replication running and would like to stop it, as its status is "Faulty" and it keeps trying over and over to sync without success. I am using GlusterFS 3.8.11.
So in order to stop geo-rep I use:
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop
but it
2017 Jul 29
0
Not possible to stop geo-rep after adding arbiter to replica 2
Adding Rahul and Kothresh, who are SMEs on geo replication.
Thanks & Regards
Karan Sandha
On Sat, Jul 29, 2017 at 3:37 PM, mabi <mabi at protonmail.ch> wrote:
> Hello
>
> To my two node replica volume I have added an arbiter node for safety
> purpose. On that volume I also have geo replication running and would like
> to stop it is status "Faulty" and keeps
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have multiple tunings for small files which decrease the time for
> negative lookups: meta-data caching and parallel readdir. Bumping the server
> and client event threads will also help you increase small-file
> performance.
>
> gluster v set <vol-name> group
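The event-thread bump mentioned above can be sketched as follows; the thread
counts are illustrative, not a recommendation from the original mail:

```
gluster volume set <volname> client.event-threads 4
gluster volume set <volname> server.event-threads 4
```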
2017 Oct 10
0
small files performance
I just tried setting:
performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch
performance.cache-invalidation
performance.md-cache-timeout 600
network.inode-lru-limit 50000
performance.cache-invalidation on
and clients could not see their files with ls when accessing via a fuse
mount. The files and directories were there,
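If the symptom appeared right after enabling these options, one hedged first
step is to toggle parallel-readdir back off and remount the fuse clients; this
is a troubleshooting sketch, not a confirmed fix:

```
gluster volume set <volname> performance.parallel-readdir off
# then remount the fuse clients and re-check with ls
```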
2017 Apr 27
2
CentOS as Guest OS on Red Hat Virtualisation 4.x
Hi all,
I have a banking customer asking if CentOS is compatible with RHV 4.0 as a guest VM. Based on Red Hat's knowledge base (see link below), CentOS is not a supported guest OS.
However, VMware says in their official documentation that CentOS is a compatible guest OS (see link below). So, in my customer's mind, CentOS cannot be used as a guest OS in RHV because it is not compatible, which I find hard to
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hi all,
One more thing: we have 3 app servers with gluster on them, replicated on 3 different gluster nodes. (So the gluster nodes are app servers at the same time.) We could actually almost work locally if we wouldn't need to have the same files on the 3 nodes and redundancy :)

The initial cluster was created like this:

gluster volume create www replica 3 transport tcp
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
+ Ambarish
On 07/11/2017 02:31 PM, Jo Goossens wrote:
> Hello,
>
> We tried tons of settings to get a php app running on a native gluster
> mount:
>
> e.g.: 192.168.140.41:/www /var/www glusterfs
> defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable
> 0 0
>
> I tried some mount variants
2015 Jan 22
2
sieve filter not working
Hi,
OK. I tried your suggestion. I modified the dovecot config file
"10-logging.conf", like so:
log_path = syslog
and
mail_debug = yes
It appears that the logging goes to "/var/log/maillog", not "messages"
as I expected.
Restarting service dovecot produces info in the "maillog" file showing
the restart:
...
Jan 22 15:20:14 coe dovecot: imap: Server
2015 Jan 22
4
sieve filter not working
Hi,
I have a question.
I have dovecot 2.0.9 running on a CentOS 6.6 email server for a small
department, ~15 users.
amavis and postfix are also enabled.
It appears that amavis invokes spamassassin, which tags incoming spam
email. All email is then put into the users' local inbox directories,
regardless of the spam tag X-Spam-Flag value of YES or NO. I want instead
to redirect spam to a special directory.
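A minimal sieve rule for the redirect described above, assuming dovecot's
sieve plugin is enabled and that the target folder is named "Spam" (a sketch,
untested here):

```
require ["fileinto"];
# file messages that amavis tagged as spam into a separate folder
if header :contains "X-Spam-Flag" "YES" {
    fileinto "Spam";
}
```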
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe,

I really appreciate your feedback, but I already tried the opcache stuff (to not validate at all). It improves of course then, but not completely somehow. Still quite slow.

I did not try the mount options yet, but I will now!

With nfs (doesn't matter much, built-in version 3 or ganesha version 4) I can even host the site perfectly fast without these extreme opcache settings.
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello,

Here is the volume info as requested by Soumya:

# gluster volume info www
Volume Name: www
Type: Replicate
Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.140.41:/gluster/www
Brick2: 192.168.140.42:/gluster/www
Brick3: 192.168.140.43:/gluster/www
Options Reconfigured:
2009 Jan 05
1
adding a curve with xaxs="i"
I want the curve to touch the y-axis the same way it touches the upper boundary.
How can I eliminate the margin between the axis and the curve on the left side?
x1 <- c(1, 2, 3, 4, 5)
x2 <- c(2, 4, 6, 8, 10)
mod <- lm(x2 ~ x1)
hm <- function(x) mod$coe[1] + x * mod$coe[2]
plot.new()
# ...
box()
curve(hm, lty = 1, add = TRUE, xaxs = "i", yaxs = "i")
(R 2.8.1)
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello,
I installed GlusterFS one month ago, and replication has many issues.
First of all, our infrastructure: 2 storage arrays of 8 TB in replication
mode. We have our backup files on these arrays, so 6 TB of data.
I want to replicate the data to the second storage array, so I use this
command:
# gluster volume rebalance REP_SVG migrate-data start
And gluster started to replicate; in 2 weeks