Tom Lahti
2008-Jun-04 00:41 UTC
[Gluster-users] balancing redundancy with space utilization
Currently it would seem that AFR will simply copy everything to every
brick in the AFR. If I did something like ...

volume afr-example
  type cluster/afr
  subvolumes brick1 brick2 brick3 brick4 brick5 brick6 brick7 brick8
end-volume

I would wind up with 8 copies of every file. Clearly, this is too many.
What I would rather have is maybe 3 copies of each file distributed
across 3 of the servers, so that I could still have 2 servers fail and
have all data available, but without using up unnecessary space on the
other 5. The 3 would need to be round-robined in some manner so as to
distribute the disk utilization: the first file goes on brick1 brick2
brick3, the 2nd file goes on brick2 brick3 brick4, etc.

It seems that AFR used to have this with "option replicate *:3", but that
was removed. The supposed replacement for that, the switch scheduler,
doesn't really have the same functionality.

Unless there is an undocumented form of the "option switch.case" statement
that I have yet to see. Can I do "option switch.case *:3" or some such?

--
-- ===========================
Tom Lahti
BIT Statement LLC
http://www.bitstatement.net/
-- ============================
Anand Babu Periasamy
2008-Jun-04 02:51 UTC
[Gluster-users] balancing redundancy with space utilization
Hi Tom,

You need unify over AFR volumes. 3 copies across 8 servers is slightly
odd to pair, though you have options:

1) 1-2-3, 4-5-6, 7-8-1
2) 1-2-3, 4-5-6, 7-8
3) 1-2-3, 2-3-4, 3-4-5, 4-5-6, 5-6-7, 6-7-8

My recommendation is to go for 9 servers if you are looking for 3 copies
of all files. Upgrading in sets of 3 also becomes easier at a later stage.

Please refer to this documentation:
http://www.gluster.org/docs/index.php/Unify_over_AFR

Happy Hacking,
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]
Z RESEARCH Inc [http://www.zresearch.com]

Tom Lahti wrote:
> Currently it would seem that AFR will simply copy everything to every
> brick in the AFR. If I did something like ...
>
> volume afr-example
>   type cluster/afr
>   subvolumes brick1 brick2 brick3 brick4 brick5 brick6 brick7 brick8
> end-volume
>
> I would wind up with 8 copies of every file. Clearly, this is too many.
> What I would rather have is maybe 3 copies of each file distributed
> across 3 of the servers, so that I could still have 2 servers fail and
> have all data available, but without using up unnecessary space on the
> other 5. The 3 would need to be round-robined in some manner so as to
> distribute the disk utilization: the first file goes on brick1 brick2
> brick3, the 2nd file goes on brick2 brick3 brick4, etc.
>
> It seems that AFR used to have this with "option replicate *:3", but that
> was removed. The supposed replacement for that, the switch scheduler,
> doesn't really have the same functionality.
>
> Unless there is an undocumented form of the "option switch.case"
> statement that I have yet to see. Can I do "option switch.case *:3" or
> some such?
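[Editor's note: a minimal sketch of the unify-over-AFR layout Anand describes,
assuming 9 servers and GlusterFS 1.3-era client volfile syntax. The brick
names, the choice of the rr scheduler, and the afr-ns namespace volume are
illustrative assumptions, not taken from the thread; see the Unify_over_AFR
documentation linked above for the authoritative configuration.]

# Three AFR sets of three bricks each; every file written to a set is
# mirrored onto all three of its bricks.
volume afr1
  type cluster/afr
  subvolumes brick1 brick2 brick3
end-volume

volume afr2
  type cluster/afr
  subvolumes brick4 brick5 brick6
end-volume

volume afr3
  type cluster/afr
  subvolumes brick7 brick8 brick9
end-volume

# Unify spreads files across the AFR sets, so each file lands on exactly
# one set (i.e. exactly 3 bricks). "rr" round-robins new files across the
# sets; unify also requires a separate namespace volume, assumed here to
# be defined elsewhere as afr-ns.
volume unify0
  type cluster/unify
  option scheduler rr
  option namespace afr-ns
  subvolumes afr1 afr2 afr3
end-volume

[With this layout each file exists on the three bricks of one AFR set, so
any two server failures still leave every set with at least one copy,
while the remaining six bricks hold no redundant copies of that file.]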