doing the benchmark with file sizes ranging from 128KB to 1GB. Up to a file size of 256MB I am getting buffer-cache performance, and at file sizes of 512MB and 1GB I am getting roughly the link speed. But in the case of GlusterFS I am not able to understand what is happening.

Can anyone please help me?
NFS:
iozone -Raceb ./perffinal.wks -y 4K -q 128K -n 128K -g 1G -i 0 -i 1

Reader report (throughput in KB/s; rows = file size in KB, columns = record size in KB)

File size        4        8       16       32       64      128
128         744701   727625   935039   633768   499971   391433
256         920892  1085148  1057519   931149   551834   380335
512         937558  1075517  1100810   904515   558917   368605
1024        974395  1072149  1094105   969724   555319   379390
2048       1026059  1125318  1137073  1005356   568252   375232
4096       1021220  1144780  1169589  1030467   578615   376367
8192        965366  1153315  1071693  1072681   607040   371771
16384      1008989  1133837  1163806  1046171   600500   376056
32768      1022692  1165701  1175739  1065870   630626   363563
65536      1005490  1152909  1168181  1048258   631148   374343
131072     1011405  1161491  1176534  1048509   637910   375741
262144     1011217  1130486  1118877  1075740   636433   375511
524288        9563     9562     9568     9551     9525     9562
1048576       9499     9520     9513     9535     9493     9469
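As a rough sanity check of the link-speed interpretation above, here is a minimal Python sketch. It assumes the 100 Mbps NIC, 52.4 MB/s disk speed and 384MB RAM figures from the hardware list further down, and copies a few values from the 4KB-record column of the table above:

    # Rough ceiling check for the NFS read numbers above (values copied from the table).
    # Assumptions: 100 Mbps full-duplex NIC and 52.4 MB/s raw disk speed, as listed in
    # the hardware configuration below; iozone reports throughput in KB/s.

    LINK_MBPS = 100          # NIC speed in megabits per second
    DISK_MB_S = 52.4         # raw disk transfer rate in MB/s
    RAM_MB = 384             # total RAM on the machine

    link_kb_s = LINK_MBPS * 1000 * 1000 / 8 / 1024   # ~12,200 KB/s theoretical maximum
    disk_kb_s = DISK_MB_S * 1024                     # ~53,600 KB/s

    # file size in KB -> measured KB/s at 4KB record size, from the table above
    observed = {
        262144: 1011217,   # 256MB file: far above both link and disk speed
        524288: 9563,      # 512MB file
        1048576: 9499,     # 1GB file
    }

    for size_kb, kb_s in observed.items():
        note = "page cache" if kb_s > disk_kb_s else "network-bound"
        print(f"{size_kb / 1024:>6.0f} MB file: {kb_s:>9} KB/s "
              f"({kb_s / link_kb_s:.1%} of the ~{link_kb_s:,.0f} KB/s link ceiling, {note})")

At 512MB and 1GB the file no longer fits in the client's 384MB of RAM, so re-reads cannot be served from the page cache and throughput falls back towards wire speed, which seems consistent with the interpretation above; below that size the NFS numbers mostly reflect cached re-reads.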
GlusterFS:
iozone -Raceb /root/glusterfs/perfgfs2.wks -y 4K -q 128K -n 128K -g 1G -i 0 -i 1

Reader report (throughput in KB/s; rows = file size in KB, columns = record size in KB)

File size        4        8       16       32       64      128
128          48834    50395    49785    48593    48450    47959
256          15276    15209    15210    15100    14998    14973
512          12343    12333    12340    12291    12202    12213
1024         11330    11334    11327    11303    11276    11283
2048         10875    10881    10877    10873    10857    10865
4096         10671    10670     9706    10673     9685    10640
8192         10572    10060    10571    10573    10555    10064
16384        10522    10523    10523    10522    10522    10263
32768        10494    10497    10495    10493    10497    10497
65536        10484    10483    10419    10483    10485    10485
131072       10419    10475    10477    10445    10445    10478
262144       10323    10241    10312    10226    10320    10237
524288       10074     9966     9707     8567     8213     9046
1048576       7440     7973     5737     7101     7678     5743
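To put the two result sets side by side, here is a small sketch along the same lines (values copied from the 4KB-record column of the two tables above; the ~12,207 KB/s link ceiling is the same 100 Mbps assumption as before):

    # Side-by-side view of the 4KB-record column from the two Reader reports above.
    # NFS far exceeds the link ceiling up to 256MB (cached reads), while the
    # GlusterFS values sit close to wire speed at every file size.

    LINK_KB_S = 100 * 1000 * 1000 / 8 / 1024   # ~12,207 KB/s for a 100 Mbps link (assumed NIC speed)

    # file size in KB -> (NFS KB/s, GlusterFS KB/s), taken from the tables above
    results = {
        131072:  (1011405, 10419),   # 128MB
        262144:  (1011217, 10323),   # 256MB
        524288:  (9563,    10074),   # 512MB
        1048576: (9499,     7440),   # 1GB
    }

    print(f"{'file':>8} {'NFS':>9} {'GlusterFS':>10} {'NFS/GFS':>8} {'GFS vs link':>12}")
    for size_kb, (nfs, gfs) in results.items():
        print(f"{size_kb // 1024:>6}MB {nfs:>9} {gfs:>10} {nfs / gfs:>8.1f} {gfs / LINK_KB_S:>11.0%}")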
Any idea why the values in the NFS test are so much higher? Something here is clearly different, but I am not able to understand it.

Thanks for your time,
Mohan
We're conducting performance benchmark runs to evaluate Linux performance as an NFS file server. It is observed that an unusually high percentage of benchmark time is spent in the "read" operation: a sampled workload consisting of 18% reads consumes 63% of total benchmark time. Has this problem been analyzed before (or, even better, is there a patch)? We're on a 2.4.19 kernel, NFSv3 over UDP, with ext3 as the local file system. Thanks in advance.

gluster-users at gluster.org
Dear All,

We are currently using NFS to meet our data-sharing requirements. We are now facing some performance and scalability problems, so NFS no longer meets the performance requirements of our network, and we are looking for possible solutions to improve performance and scalability. To find a solid replacement for NFS I have analysed two file systems, GlusterFS and Red Hat GFS, and concluded that GlusterFS should improve performance and scalability; it has all the features we are looking for. For testing purposes I am benchmarking NFS against GlusterFS. My benchmark results show that GlusterFS gives better performance overall, but I am getting some read performance I cannot explain. I am not able to understand how exactly the read operation is performed by NFS and by GlusterFS, and I don't know whether I am doing anything wrong. Below I show the benchmark results to give a better idea of my read-performance issue; I have attached the NFS and GlusterFS read values. Could anyone please go through this and give me some guidance? It would make my benchmarking much more effective.
This is my server and client hardware and software:

HARDWARE CONFIG:

Processor core speed : Intel(R) Celeron(R) CPU 1.70GHz
Number of cores      : Single core (not dual-core)
RAM size             : 384MB (128MB + 256MB)
RAM type             : DDR
RAM speed            : 266 MHz (3.8 ns)
Swap                 : 1027MB
Storage controller   : ATA device
Disk model/size      : SAMSUNG SV4012H / 40 GB, 2 MB cache
Storage speed        : 52.4 MB/sec
Spindle speed        : 5400 rpm (revolutions per minute)
NIC type             : VIA Rhine III chipset, IRQ 18
NIC speed            : 100 Mbps, full-duplex card

SOFTWARE:

Operating system  : Fedora Core 9 GNU/Linux
Linux version     : 2.6.9-42
Local FS          : ext3
NFS version       : 1.1.2
GlusterFS version : glusterfs 1.3.8 built on Feb 3 2008
Iozone            : iozone-3-5.fc9.i386 (file system benchmark tool)
ttcp              : ttcp-1.12-18.fc9.i386 (raw throughput measurement tool)
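Since the 100 Mbps NIC is the most likely ceiling for the large-file numbers, it may be worth verifying the raw TCP throughput between client and server first. The ttcp package listed above does this; the snippet below is only a minimal Python sketch of the same idea, with an arbitrary port number and transfer size:

    # Minimal raw TCP throughput check between two hosts, similar in spirit to ttcp.
    # Run "python3 tput.py recv" on one host, then "python3 tput.py send <host>"
    # on the other. Port 5050 and the 256MB transfer size are arbitrary choices.
    import socket, sys, time

    PORT = 5050
    CHUNK = 64 * 1024            # 64KB per send/recv call
    TOTAL = 256 * 1024 * 1024    # 256MB transferred in total

    def recv():
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            got, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                got += len(data)
            secs = time.time() - start
            print(f"received {got / 2**20:.0f} MB in {secs:.1f}s "
                  f"= {got / 2**20 / secs:.1f} MB/s ({got * 8 / 1e6 / secs:.0f} Mbit/s)")
            conn.close()

    def send(host):
        buf = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            sent, start = 0, time.time()
            while sent < TOTAL:
                conn.sendall(buf)
                sent += len(buf)
        secs = time.time() - start
        print(f"sent {sent / 2**20:.0f} MB in {secs:.1f}s = {sent / 2**20 / secs:.1f} MB/s")

    if __name__ == "__main__":
        recv() if sys.argv[1] == "recv" else send(sys.argv[2])

The receive-side figure is the more meaningful one, since the sender only measures how fast it can hand data to the kernel.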
These are the server and client vol files I am using for the benchmarking:

# GlusterFS Server Volume Specification

volume brick
  type storage/posix            # POSIX FS translator
  option directory /bench       # /bench dir contains 25,000 files of 10KB to 15KB
end-volume

volume iot
  type performance/io-threads
  option thread-count 4
  option cache-size 8MB
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes iot
  option auth.ip.brick.allow *  # Allow access to "brick" volume
end-volume


# GlusterFS Client Volume Specification

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.xxx.x.xxx
  option remote-subvolume brick
end-volume

volume readahead
  type performance/read-ahead
  option page-size 128KB        # 256KB is the default option
  option page-count 4           # cache per file = (page-count x page-size); 2 is the default option
  subvolumes client
end-volume

volume iocache
  type performance/io-cache
  #option page-size 128KB       # 128KB is the default option
  option cache-size 256MB       # 32MB is the default option
  option page-count 4
  subvolumes readahead
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 128KB
  option flush-behind on
  subvolumes iocache
end-volume

I am confused by this result. I don't have an idea of how to trace it and get a good, comparable read-performance result. I think I am misunderstanding the buffer-cache concepts.
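To make the buffer-cache effect visible on its own, a tiny experiment like the one below can help (a Python sketch; the path is only a hypothetical placeholder for a file on the mount being measured):

    # Time two consecutive sequential reads of the same file. When the file fits in
    # RAM, the second pass is served from the page cache and is dramatically faster;
    # when it does not fit (e.g. 512MB or 1GB on a 384MB machine), both passes run
    # at storage/network speed. The path below is only a placeholder.
    import time

    PATH = "/bench/testfile"     # hypothetical test file on the NFS or GlusterFS mount
    CHUNK = 128 * 1024           # read in 128KB chunks

    def read_once(path):
        total, start = 0, time.time()
        with open(path, "rb") as f:
            while True:
                data = f.read(CHUNK)
                if not data:
                    break
                total += len(data)
        secs = time.time() - start
        return total / 2**20 / secs   # MB/s

    print(f"first pass : {read_once(PATH):8.1f} MB/s")
    print(f"second pass: {read_once(PATH):8.1f} MB/s  (page cache, if the file fits in RAM)")

Unmounting and remounting the file system between runs, or using direct I/O, is a common way to keep the page cache out of such a comparison.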