John Mark Walker
2012-May-17  04:28 UTC
[Gluster-users] Fwd: [Gluster-devel] Asking about Gluster Performance Factors
See response below from Ben England. Also, note that this question should
probably go in gluster-users.
-JM 
----- Forwarded Message ----- 
From: "Ben England" <bengland at redhat.com> 
To: "John Mark Walker" <johnmark at redhat.com> 
Sent: Wednesday, May 16, 2012 8:23:30 AM 
Subject: Re: [Gluster-devel] Asking about Gluster Performance Factors 
JM, see comments marked with ben>>> below. 
----- Original Message -----
From: "???" <ej1515.park at samsung.com> 
To: gluster-devel at nongnu.org 
Sent: Wednesday, May 16, 2012 5:23:12 AM 
Subject: [Gluster-devel] Asking about Gluster Performance Factors 
May 16, 2012 
Dear Gluster Dev Team : 
I'm Ethan, an assistant engineer at Samsung Electronics. Reviewing your paper,
I have some questions about the performance factors in Gluster.
First, what does the option "performance.cache-*" mean? Does it mean a read
cache? If so, what is the difference between the options
"performance.cache-max-file-size" and "performance.cache-size"?
I read another of your papers ("performance in a gluster system, versions
3.1.x"), which says the following on Page 12:
(Gluster Native protocol does not implement write caching, as we believe that
the modest performance improvements from write caching do not justify the risk
of cache coherency issues.)
ben>>> While Gluster processes do not implement write caching
internally, there are at least three ways to improve write performance in a
Gluster system.
- If you use a RAID controller with a non-volatile writeback cache, the RAID
controller can buffer writes on behalf of the Gluster server and thereby reduce
latency.
- XFS or any other local filesystem used within the server "bricks"
can do "write-thru" caching, meaning that writes can be aggregated and kept in
the Linux buffer cache so that subsequent read requests can be satisfied from
this cache, transparently to Gluster processes.
- There is a "write-behind" translator in the native client that aggregates
small sequential write requests at the FUSE layer into larger network-level
write requests. If the smallest possible application I/O size is a requirement,
sequential writes can also be efficiently aggregated by an NFS client.
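As a rough sketch, the write-behind translator can be enabled and tuned per
volume with the gluster CLI; the volume name "testvol" and the window size
below are placeholders, not recommendations:
    gluster volume set testvol performance.write-behind on
    gluster volume set testvol performance.write-behind-window-size 1MB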
Second, how much is read throughput improved by configuring 2-way
replication? We would like to see statistics or similar measurements.
("performance in a gluster system, versions 3.1.x") and it says as
below on Page 12,
(However, read throughput is generally improved by replication, as reads can be
delivered from either storage node)
ben>>> Yes, reads can be satisfied by either server in a replication
pair. Since the gluster native client only reads one of the two replicas, read
performance should be approximately the same for a 2-replica file system as it
would be for a 1-replica file system. The difference in performance is with
writes, as you would expect.
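For example, a 2-way replicated volume would be created with commands along
these lines; the volume name, host names, and brick paths are placeholders:
    gluster volume create testvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start testvol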
Sincerely yours, 
Ethan Eunjun Park 
Assistant Engineer, 
Solution Development Team, Media Solution Center 
416, Maetan 3-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-742, Korea 
Mobile : 010-8609-9532 
E-mail : ej1515.park at samsung.com 
http://www.samsung.com/sec 
_______________________________________________ 
Gluster-devel mailing list 
Gluster-devel at nongnu.org 
https://lists.nongnu.org/mailman/listinfo/gluster-devel 