Hi,

I just discovered GlusterFS and it looks great! I will definitely give it a try. Soon.

In particular, the NUFA translator seems to meet my needs (use local resources as far as possible). I've read most of the documentation about NUFA but I still have some unanswered questions:

- What happens if a node fills up its entire local storage? Is new data transferred to another node? Or does it crash?
- What about data protection? As I understand it, if a node dies in a NUFA cluster, its files are gone with it?

On http://www.gluster.org/docs/index.php/GlusterFS_Roadmap_Suggestions Jshook says that in order to combine NUFA and AFR functionality, you just have to use AFR with the local volume name, and have the read-subvolume option set to the local volume. That's true in the case of a two-node cluster, but in a 100-node cluster you would still have the capacity of only one node, and 100 copies of each file. Am I right?

What would be great is the ability to create parity bricks: something like having 98 nodes in a NUFA cluster and 2 parity nodes that are just there in case a node (or two) goes down. I saw that you had graid6 on your roadmap, so do you think that's possible? And if so, when (approximately)?

Anyway, thanks for the work you've done so far. I'll certainly be back annoying you when I start testing it ;-)

Regards,
> What would be great is the ability to create parity bricks: something
> like having 98 nodes in a NUFA cluster and 2 parity nodes that are
> just there in case a node (or two) goes down. I saw that you had
> graid6 on your roadmap, so do you think that's possible?

I saw RAID-6 support on the roadmap also, and I agree it would be great to get some type of protection against brick failure.

I got to thinking... instead of doing RAID-6, maybe it would be better to do something like ZFS RAID-Z at the brick level: treat each brick like a vdev and the collection of bricks like a zpool! I'm sure it's far more complicated than that, but do any of the developers out there think it would be possible to merge the two (RAID-Z and GlusterFS)? I guess the hardest part would be figuring out where the parity checking would get done: client side or brick side?

-fc

--
"I have come here to chew bubble gum and kick ass; and I'm all out of bubble gum."
    ~Rowdy Roddy Piper, 'They Live'
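[For what it's worth, here is a small illustrative Python sketch of what two "parity bricks" would actually compute under RAID-6-style dual parity: P is a plain XOR, Q a Reed-Solomon syndrome over GF(2^8). Nothing like this exists in GlusterFS as of this thread; striping, block maps, and reconstruction logic are all omitted, and every name in it is made up.]

# Illustrative only: RAID-6-style dual parity over N data bricks.
# A real translator would also need striping, block maps, and
# reconstruction logic; this just shows what P and Q are.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) using the RAID-6 polynomial 0x11d."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return product

def gf_pow(base, exp):
    """Raise a GF(2^8) element to a non-negative integer power."""
    result = 1
    for _ in range(exp):
        result = gf_mul(result, base)
    return result

def parity(data_blocks):
    """Compute P (XOR) and Q (Reed-Solomon) parity over equal-sized blocks.

    If P and Q live on two dedicated parity bricks, any two of the
    len(data_blocks) + 2 bricks can fail without data loss.
    """
    size = len(data_blocks[0])
    p = bytearray(size)
    q = bytearray(size)
    for index, block in enumerate(data_blocks):
        coeff = gf_pow(2, index)  # generator 2 raised to the brick index
        for offset, byte in enumerate(block):
            p[offset] ^= byte
            q[offset] ^= gf_mul(coeff, byte)
    return bytes(p), bytes(q)

# 98 data bricks holding 4 bytes each, plus P and Q on two parity bricks.
blocks = [bytes([i, i + 1, i + 2, i + 3]) for i in range(98)]
p_block, q_block = parity(blocks)

[The point of Q is that P alone can only rebuild one lost brick; the independent Q syndrome is what lets any two bricks fail, which is exactly the "98 + 2" layout asked about above.]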
Hi,

Please find the inlined comments.

On Wed, Apr 1, 2009 at 6:44 PM, Julien Cornuwel <julien at cornuwel.net> wrote:

> - What happens if a node fills up its entire local storage? Is new
> data transferred to another node? Or does it crash?

New files will be created on another node. Writes to files on nodes that are already full return -1 with errno set to ENOSPC.

> - What about data protection? As I understand it, if a node dies in a
> NUFA cluster, its files are gone with it?

Yes, with just a NUFA setup there can be data loss. You can protect against it by using the replicate (AFR) xlator to replicate each child of NUFA.

[snip]

--
Raghavendra G
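[To make "replicate each child of NUFA" concrete, below is a minimal client-side volume-spec sketch in the 2.x-era volfile syntax for a four-node cluster arranged as two replica pairs. The host names (node1..node4), the exported subvolume name ("brick"), and all volume names here are hypothetical, and option names may differ between releases, so check the volfile reference for your version.]

# Hypothetical client volfile for node1. Each server is assumed to
# export a storage/posix volume named "brick" via protocol/server.
volume node1-brick
  type protocol/client
  option transport-type tcp
  option remote-host node1
  option remote-subvolume brick
end-volume

volume node2-brick
  type protocol/client
  option transport-type tcp
  option remote-host node2
  option remote-subvolume brick
end-volume

volume node3-brick
  type protocol/client
  option transport-type tcp
  option remote-host node3
  option remote-subvolume brick
end-volume

volume node4-brick
  type protocol/client
  option transport-type tcp
  option remote-host node4
  option remote-subvolume brick
end-volume

# Each pair mirrors two nodes, so a file costs 2x capacity, not Nx.
volume pair1
  type cluster/replicate
  option read-subvolume node1-brick   # prefer the local copy for reads
  subvolumes node1-brick node2-brick
end-volume

volume pair2
  type cluster/replicate
  subvolumes node3-brick node4-brick
end-volume

# NUFA places newly created files on the local pair first.
volume nufa0
  type cluster/nufa
  option local-volume-name pair1
  subvolumes pair1 pair2
end-volume

[With this layout, a 100-node cluster would hold two copies of each file and offer roughly 50 nodes' worth of usable capacity, rather than the 100 copies and single node's capacity the original question feared.]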