Francois THIEBOLT
2011-Jun-08 12:40 UTC
[Gluster-users] [Gluster3.2@Grid5000] 128 nodes failure and rr scheduler question
Hello,

I'm running some experiments on Grid'5000 with GlusterFS 3.2 and, as a first point, I've been unable to start a volume featuring 128 bricks (64 works).

Then, due to the round-robin scheduler, as the number of nodes increases (every node is also a brick), the performance of an application on an individual node decreases!

So my question is: how do I STOP the round-robin distribution of files over the bricks within a volume?

*** Setup ***
- I'm using GlusterFS 3.2 built from source
- every node is both a client node and a brick (storage)

Commands:
- gluster peer probe <each of the 128 nodes>
- gluster volume create myVolume transport tcp <128 bricks:/storage>
- gluster volume start myVolume (fails with 128 bricks!)
- mount -t glusterfs ...... on all nodes

Feel free to tell me how to improve things.

François

--
-------------------------------------------------------------
THIEBOLT Francois   \  Your computer seems overloaded?
UPS Toulouse III     \  - Check that nobody's asked for tea!
thiebolt at irit.fr    \  "The Hitchhiker's Guide to the Galaxy" D.Adams
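PS: for reference, roughly the same sequence as a script (a sketch only; node001..node128, /storage and /mnt/glusterfs are placeholders for my actual hostnames and paths):

    # Probe every peer from one node of the pool.
    for i in $(seq -f "node%03g" 1 128); do
        gluster peer probe "$i"
    done

    # Build the 128-entry brick list "node001:/storage node002:/storage ...".
    BRICKS=$(seq -f "node%03g:/storage" 1 128 | tr '\n' ' ')

    gluster volume create myVolume transport tcp $BRICKS
    gluster volume start myVolume    # <-- this is the step that fails

    # Then, on every node:
    mount -t glusterfs localhost:/myVolume /mnt/glusterfs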
Amar Tumballi
2011-Jun-10 10:21 UTC
[Gluster-users] [Gluster3.2@Grid5000] 128 nodes failure and rr scheduler question
Hi Francois,

Answers inline.

On Wed, Jun 8, 2011 at 6:10 PM, Francois THIEBOLT <thiebolt at irit.fr> wrote:

> Hello,
>
> I'm running some experiments on Grid'5000 with GlusterFS 3.2 and, as a
> first point, I've been unable to start a volume featuring 128 bricks (64 works).

This looks similar to the bug http://bugs.gluster.com/show_bug.cgi?id=2941 . The fix should be available with the 3.2.1 release, which should be out very soon. We are also working on the scalability of 'glusterd', the GlusterFS management daemon, after which this should work fine.

One workaround for now is to create the volume with 64 bricks and then add the remaining 64 with 'add-brick' (see the sketch at the end of this mail). That should work fine.

> Then, due to the round-robin scheduler, as the number of nodes increases
> (every node is also a brick), the performance of an application on an
> individual node decreases!
> So my question is: how do I STOP the round-robin distribution of files over
> the bricks within a volume?

There is no 'scheduler' in the picture here with GlusterFS 3.2.x (nor in any release since 3.0.x), hence there is no option to stop it.

Regards,
Amar
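PS: a rough sketch of the workaround, assuming bricks named node001:/storage through node128:/storage (placeholders for your actual brick paths):

    # First create and start the volume with the first 64 bricks ...
    FIRST=$(seq -f "node%03g:/storage" 1 64 | tr '\n' ' ')
    gluster volume create myVolume transport tcp $FIRST
    gluster volume start myVolume

    # ... then expand it with the remaining 64 bricks.
    SECOND=$(seq -f "node%03g:/storage" 65 128 | tr '\n' ' ')
    gluster volume add-brick myVolume $SECOND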
Pavan T C
2011-Jun-10 14:38 UTC
[Gluster-users] [Gluster3.2@Grid5000] 128 nodes failure and rr scheduler question
On Wednesday 08 June 2011 06:10 PM, Francois THIEBOLT wrote:

> Hello,
>
> I'm running some experiments on Grid'5000 with GlusterFS 3.2 and, as a
> first point, I've been unable to start a volume featuring 128 bricks (64 works).
>
> Then, due to the round-robin scheduler, as the number of nodes increases
> (every node is also a brick), the performance of an application on an
> individual node decreases!

I would like to understand what you mean by "increase of nodes". You have 64 bricks, and each brick also acts as a client. So where is the increase in the number of nodes? Are you referring to the mounts that you are doing?

What is your Gluster configuration - I mean, is it distribute-only, or is it a distributed-replicate setup? [From your command sequence it should be a pure distribute, but I just want to be sure - see the 'gluster volume info' sketch at the end of this mail.]

What is your application like? Is it mostly I/O intensive? It would help if you provided a brief description of the typical operations done by your application.

How are you measuring the performance? What parameter tells you that you are experiencing a decrease in performance as the number of nodes increases?

Pavan

> So my question is: how do I STOP the round-robin distribution of files
> over the bricks within a volume?
>
> *** Setup ***
> - I'm using GlusterFS 3.2 built from source
> - every node is both a client node and a brick (storage)
>
> Commands:
> - gluster peer probe <each of the 128 nodes>
> - gluster volume create myVolume transport tcp <128 bricks:/storage>
> - gluster volume start myVolume (fails with 128 bricks!)
> - mount -t glusterfs ...... on all nodes
>
> Feel free to tell me how to improve things.
>
> François
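PS: a quick way to confirm the volume type (a sketch; 'myVolume' is the volume name from your command sequence):

    # 'Type: Distribute' in the output indicates a pure distribute
    # volume with no replication.
    gluster volume info myVolume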