Hi, I'm doing some performance tests with bonnie++ (1.03d) on GlusterFS 2.0.4 (Debian packages). The write tests went OK, but on the rewrite test bonnie seemed to hang: load average dropped to 0.00 on both nodes, and there was nothing in the server or client logs. I will launch the test again tonight because it takes very long (16GB RAM). Any idea what could cause that?
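For reference, the run amounts to something like the following (the mount point and exact flags here are illustrative, not a verbatim copy of my command line):

  # bonnie++ 1.03d; /mnt/glusterfs is an assumed client mount point.
  # -s 32g writes a dataset of twice the RAM so the page cache cannot
  # hide the I/O; -r gives bonnie the machine's RAM size in MB.
  bonnie++ -d /mnt/glusterfs -s 32g -r 16384 -u root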
Julien Cornuwel
2009-Aug-07 10:11 UTC
[Gluster-users] GlusterFS as OpenVZ backend [Was: bonnie hangs with GlusterFS 2.0.4]
On Sunday 02 August 2009 at 23:49 +0200, Julien Cornuwel wrote:
> Hi,
>
> I'm doing some performance tests with bonnie++ (1.03d) on GlusterFS 2.0.4
> (Debian packages). The write tests went OK, but on the rewrite test
> bonnie seemed to hang: load average dropped to 0.00 on both nodes, and
> there was nothing in the server or client logs.
>
> I will launch the test again tonight because it takes very long (16GB
> RAM).
>
> Any idea what could cause that?

Well, I simplified my setup as much as possible (see attached files) and the test passed. Results:

Block write: 57500 KB/s (53769 on local disks)
Rewrite: 4477 KB/s (30742 on local disks)
Block read: 8375 KB/s (79528 on local disks)

Write performance is surprisingly high, better than local disks! I guess the writebehind translator is doing a great job. But reads are so slow! I will do another test with readahead enabled to see the difference, hoping bonnie will survive it (see the sketch after the attached files).

The original setup was more complicated: I had two volumes replicated on both nodes, and a Distribute volume on top of them. The idea was to be able to add new nodes, one by one, when needed. I haven't been able to test this setup, but I guess performance would have been lower (same network/disk speed, more overhead).

The purpose of these tests is to determine whether I can build an OpenVZ cluster on top of GlusterFS instead of DRBD. At first there will be only two nodes, so both solutions can apply. But if the cluster grows as I hope, GlusterFS is the only way to share storage across all nodes. What I want to know is: "Is it possible to start directly with GlusterFS, or do I need to reach a critical mass where the number of nodes will be enough to overpower local storage?"

Hardware nodes are:
- 2 * quad-core Opteron
- 16GB RAM
- 750GB RAID1
- 1 GbE

-------------- next part --------------
#####################################
###  GlusterFS Client Volume File  ##
#####################################

volume node01primary
  type protocol/client
  option transport-type tcp
  option remote-host node01
  option remote-subvolume primary
end-volume

volume node02secondary
  type protocol/client
  option transport-type tcp
  option remote-host node02
  option remote-subvolume secondary
end-volume

volume storage01
  type cluster/replicate
  subvolumes node01primary node02secondary
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes storage01
end-volume

-------------- next part --------------
#####################################
###  GlusterFS Server Volume File  ##
#####################################
# (Apparently node01's file; node02's would export "secondary" instead.)

volume posix
  type storage/posix
  option directory /mnt/primary
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume primary
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.primary.allow *
  option auth.addr.secondary.allow *
  subvolumes primary
end-volume
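For the readahead test mentioned above, the idea is simply to stack the read-ahead translator on top of writebehind in the client volume file, along these lines (the page-count value is only a starting guess, not a tuned setting):

volume readahead
  type performance/read-ahead
  # number of pages to pre-fetch on sequential reads; 4 is a guess
  option page-count 4
  subvolumes writebehind
end-volume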
Julien Cornuwel
2009-Aug-17 16:51 UTC
[Gluster-users] Test results [Was: bonnie hangs with GlusterFS 2.0.4]
On Monday 17 August 2009 at 19:49 +0400, Konstantin A. Lepikhov wrote:
> Hi Julien!
>
> On Monday 17, at 05:04:43 PM you wrote:
>
> > On Tuesday 11 August 2009 at 15:03 +0400, Konstantin A. Lepikhov wrote:
> >
> > > You can try to git clone the kernel source and switch between different tags.
> > > It's also a very good test.
> >
> > Here are the final test results. The setup is:
> > - 2 nodes, GbE, SATA drives, 2 * 4-core Opterons at 2.2GHz, 16GB RAM
> > - Ping between nodes is 0.120ms
> > - GlusterFS 2.0.6
> > - Very simple setup: Replicate with readahead and writebehind.
> > - Tests are done on only one node (no concurrent access)
>
> Did you send these results to the gluster-users list?

Oops, sorry, I just hit 'reply'. Now it's done.

> > The purpose of these tests is to compare GlusterFS versus local disk
> > performance, on a two-node cluster, as I want to host OpenVZ VEs on my
> > servers.
>
> Do you have disk load/network load statistics for this test?

I don't have detailed stats, but from what I saw there were no bottlenecks:
- Load average never reached 1
- There was plenty of CPU power/RAM available during the tests
- Network load was never above 30 percent of the bandwidth.

It really looked as if the system was waiting for something, and my guess goes to the network.

> > Untar a kernel archive:
> > Local: 0:19
> > GlusterFS: 9:12
> >
> > Kernel compilation:
> > Local: 55:06
> > GlusterFS: 3:37:38
> >
> > Git clone of the kernel sources:
> > Local: 5:31
> > GlusterFS: 2:49:09
> >
> > So, clearly, the GlusterFS solution is not viable here. I think this is
> > because of network latency. As I don't think my hosting provider is
> > likely to offer IB in the near future, this is a no-go.
> >
> > Maybe if I had dozens of servers, latency would be compensated by
> > parallelism. I hope I'll be able to test it someday ;-)
>
> Yes, latency highly depends on the configuration - I think a DHT setup would be
> much faster.
>
> > Anyway, thank you for your support and advice folks, I'll keep an eye on
> > this project in the future.
>
> IMHO in your setup pohmelfs/drbd8 would be more suitable.
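P.S. For completeness, the timings above came from plain 'time' runs on the mounted volume, schematically like this (the kernel version, paths and URL here are illustrative, not the exact commands I ran):

  cd /mnt/glusterfs
  # untar a kernel archive
  time tar xjf /tmp/linux-2.6.30.tar.bz2
  # kernel compilation (default config; 8 jobs for the 8 cores)
  cd linux-2.6.30 && make defconfig && time make -j8
  # git clone of the kernel sources
  cd .. && time git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git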