Jean-Francois Chevrette
2011-Aug-11 14:36 UTC
[Gluster-users] Very bad performance /w glusterfs. Am I missing something?
Hello everyone,

I have just begun playing with GlusterFS 3.2 on a Debian Squeeze system. This system is a powerful quad-core Xeon with 12GB of RAM and two 300GB SAS 15k drives configured as a RAID-1 on an Adaptec 5405 controller. Both servers are connected through a crossover cable on gigabit ethernet ports.

I installed the latest GlusterFS 3.2.2 release from the provided Debian package.

As an initial test, I've created a simple brick on my first node:

  gluster volume create brick transport tcp node1.internal:/brick

I started the volume and mounted it locally:

  mount -t glusterfs 127.0.0.1:/brick /mnt/brick

I ran an iozone test on both the underlying partition and the glusterfs mountpoint. Here are my results for the random write test (file size in KB down the left column, record size in KB across the top, all numbers in ops/sec):

"Random write report" w/o glusterfs
            "4"     "8"    "16"    "32"    "64"   "128"   "256"   "512"  "1024"  "2048"  "4096"  "8192"  "16384"
"64"     166603  121220   76676   46395   25605
"128"    171020  126906   83301   49372   27431   14275
"256"    172871  110303   85948   51957   28590   15147   7196
"512"    172029  129816   85336   51949   28881   15158   7517    3859
"1024"   175453  131270   73993   53413   29961   15866   7800    3936    1980
"2048"   176735  132777   87669   48482   28473   15918   7867    3980    1851    1011
"4096"   194828  146079  145045   53511   28624   15157   7490    5340    1989    1007     490

"Random write report" /w glusterfs
            "4"     "8"    "16"    "32"    "64"   "128"   "256"   "512"  "1024"  "2048"  "4096"  "8192"  "16384"
"64"       6872    6390    5797    5103    4630
"128"      6871    6661    5865    4767    4424    4656
"256"      8953    6691    6506    5513    4999    3429   1908
"512"      9222    8727    6650    6003    5290    2386   2057    1061
"1024"    10363   10127   10023    7385    5839    4629   2267    1234     571
"2048"     9200    8778    8280    7394    5852    4221   2234    1262     634     324
"4096"     5739    5549    5441    4810    3952    2824   1931    1075     552     302     148

(sorry if the formatting is messed)

Any ideas why I am getting such bad results? My volume is not even replicated or distributed yet!

Thanks!

--
Jean-Francois Chevrette
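For completeness, the full sequence implied above would look roughly like the following; only the create and mount commands are quoted in the post, so the volume-start and mkdir steps are assumptions:

  gluster volume create brick transport tcp node1.internal:/brick
  gluster volume start brick     # "I started the volume" -- exact command assumed
  gluster volume info brick      # sanity check: Status should read "Started"
  mkdir -p /mnt/brick            # assumed mountpoint creation
  mount -t glusterfs 127.0.0.1:/brick /mnt/brick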
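The exact iozone command line isn't shown in the post; a typical invocation that produces this kind of ops/sec matrix (file sizes 64KB-4MB, record sizes 4KB and up) would be something like the sketch below. The /data/brick path for the underlying partition is hypothetical:

  # -a: auto mode over file/record sizes
  # -i 0 -i 2: write (needed to create the file) plus random read/write
  # -O: report results in operations per second; -g 4m: cap the file size at 4MB
  iozone -a -i 0 -i 2 -O -g 4m -f /mnt/brick/iozone.tmp     # through glusterfs
  iozone -a -i 0 -i 2 -O -g 4m -f /data/brick/iozone.tmp    # underlying partition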
Joe Landman
2011-Aug-16 12:21 UTC
[Gluster-users] Very bad performance /w glusterfs. Am I missing something?
On 08/11/2011 10:36 AM, Jean-Francois Chevrette wrote:
> Hello everyone,
>
> I have just begun playing with GlusterFS 3.2 on a debian squeeze
> system. This system is a powerful quad-core xeon with 12GB of RAM and
> two 300GB SAS 15k drives configured as a RAID-1 on an Adaptec 5405
> controller. Both servers are connected through a crossover cable on
> gigabit ethernet ports.
>
> I installed the latest GlusterFS 3.2.2 release from the provided
> debian package.
>
> As an initial test, I've created a simple brick on my first node:
>
> gluster volume create brick transport tcp node1.internal:/brick
>
> I started the volume and mounted it locally
>
> mount -t glusterfs 127.0.0.1:/brick /mnt/brick
>
> I ran an iozone test on both the underlying partition and the
> glusterfs mountpoint. Here are my results for the random write test
> (results are in ops/sec):
[...]
> (sorry if the formatting is messed)
>
> Any ideas why I am getting such bad results? My volume is not even
> replicated or distributed yet!

You are not getting "bad" results. The results from the local fs w/o gluster are likely completely cached. This is a very small test, and chances are your IOs aren't even making it out to the device before the test completes.

The only test in your results that is likely generating any sort of realistic IO is the very last row and column, with the largest file and record sizes. A 15k RPM disk will do ~300 IOPS, which is about what you should see per unit. For a RAID1 across 2 such disks, you should get from 150-600 IOPS in most cases, depending upon how you built the RAID1 and what the underlying RAID system is.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
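One way to take the page cache out of the comparison, following the caching point above, is to rerun iozone either with O_DIRECT or with a file larger than RAM. The flags below are standard iozone options; the file and record sizes are illustrative:

  # -I: use O_DIRECT, bypassing the page cache entirely
  iozone -i 0 -i 2 -O -I -s 1g -r 16k -f /mnt/brick/iozone.tmp
  # or exceed the 12GB of RAM so the cache cannot absorb the writes
  iozone -i 0 -i 2 -O -s 16g -r 16k -f /mnt/brick/iozone.tmp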
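The per-disk IOPS figure follows from simple latency arithmetic; the 3.5 ms average seek below is a typical 15k SAS datasheet value, not something from the thread:

  rotational latency = (60 s / 15000 rpm) / 2  ~= 2.0 ms
  average seek (typical 15k SAS)               ~= 3.5 ms
  time per random IO ~= 2.0 ms + 3.5 ms        ~= 5.5 ms
  IOPS ~= 1 / 5.5 ms                           ~= 180  (closer to ~300 with command queueing)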