Craig Flockhart
2009-Feb-06 04:27 UTC
[Gluster-users] cloud config with multiple internal drives on each node
Hi, I'm trying to come up with a working configuration for some machines as follows:
- each is a client and a server and can see all the other nodes' storage and its own
- each has 4 identical hard drives
- replication (one or possibly two replicas)
- high availability (continues to function if one replicated node of a pair goes down)

It's something like the example configuration with NUFA/unify, but with replication across nodes and HA too.

Any suggested configuration? Just a general explanation of the setup is fine.
All my efforts so far have resulted in error messages about dht anomalies with "holes=1, overlaps=1" in the logs. I'm using 2.0.0rc1.

thanks!
Craig
Keith Freedman
2009-Feb-08 07:44 UTC
[Gluster-users] cloud config with multiple internal drives on each node
At 08:27 PM 2/5/2009, Craig Flockhart wrote:
> Hi,
> I'm trying to come up with a working configuration for some machines as follows:
> - each is a client and a server and can see all the other nodes' storage and its own
> - each has 4 identical hard drives
> - replication (one or possibly two replicas)
> - high availability (continues to function if one replicated node of a pair goes down)

You want to use replicate to achieve HA, and nufa/unify to present a larger filesystem than you could have on a single node.

> It's something like the example configuration with NUFA/unify, but with replication across nodes and HA too.
>
> Any suggested configuration? Just a general explanation of the setup is fine.
> All my efforts so far have resulted in error messages about dht anomalies with "holes=1, overlaps=1" in the logs. I'm using 2.0.0rc1.

You could post your config and someone will likely evaluate it, but off the top of my head I'd think something like this would work:

volume replicate1: remotedisk1, remotedisk2
volume replicate2: remotedisk3, remotedisk4
volume dht: replicate1 replicate2

On replicate1 and replicate2, set read-subvolume to whichever remotedisk# is on the local server, just so you get some performance improvement on reads some of the time. So, if the remotedisk2 server fails, replicate1 will be degraded but will still read from remotedisk1.

The other option is the opposite config, which is possibly easier to debug but deficient in my mind:

volume dht1: remotedisk1, remotedisk2 (get this working by itself, then ...)
volume dht2: remotedisk3, remotedisk4 (get this working by itself, then combine dht1 and dht2 into the same config and add...)
volume replicate: dht1 dht2

I do NOT prefer this config because you don't get the advantage of setting the local read-subvolume on replicate. Also, if remotedisk2 fails, I'm not sure what dht1 will do: it may keep working and deliver and store the files it can, or it may fail entirely and leave you working with only 2 devices instead of 3.

Keith
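A rough client-side volfile sketch of the first layout Keith suggests (two replicate pairs aggregated by distribute). The protocol/client volumes, hostnames (server1, server2) and brick names (brick1, brick2) are made up for illustration, and the option spellings should be checked against the 2.0 volfile documentation; this is a sketch under those assumptions, not a tested configuration:

# one protocol/client volume per exported brick
volume remotedisk1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick1      # brick name from server1's server volfile (assumed)
end-volume

volume remotedisk2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume brick1
end-volume

volume remotedisk3
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick2
end-volume

volume remotedisk4
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume brick2
end-volume

# each replicate pairs a brick on server1 with a brick on server2
volume replicate1
  type cluster/replicate
  option read-subvolume remotedisk1   # on each node, point this at the brick local to that node
  subvolumes remotedisk1 remotedisk2
end-volume

volume replicate2
  type cluster/replicate
  option read-subvolume remotedisk3
  subvolumes remotedisk3 remotedisk4
end-volume

# distribute (dht) spreads files across the two replicated pairs
volume dht
  type cluster/distribute
  subvolumes replicate1 replicate2
end-volume

On each node the read-subvolume option would be set to whichever remotedisk volume is backed by that node's own drive, which is the local-read optimisation Keith mentions.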
Krishna Srinivas
2009-Feb-09 07:44 UTC
[Gluster-users] cloud config with multiple internal drives on each node
Craig,

In case you have 4 machines each with 4 drives: you can replicate each drive of server1 with the corresponding drive of server2 using AFR, and replicate each drive of server3 with the corresponding drive of server4 using AFR. So you have 8 AFRs. You can then use distribute (aka DHT) to aggregate the 8 AFRs.

Can you start with fresh backend directories and see if you still get those errors? Previous extended attributes on the directories might be causing them. If you still see the errors, can you post your spec files?

Krishna

On Fri, Feb 6, 2009 at 9:57 AM, Craig Flockhart <craigflockhart at yahoo.com> wrote:
> Hi,
> I'm trying to come up with a working configuration for some machines as follows:
> - each is a client and a server and can see all the other nodes' storage and its own
> - each has 4 identical hard drives
> - replication (one or possibly two replicas)
> - high availability (continues to function if one replicated node of a pair goes down)
>
> It's something like the example configuration with NUFA/unify, but with replication across nodes and HA too.
>
> Any suggested configuration? Just a general explanation of the setup is fine.
> All my efforts so far have resulted in error messages about dht anomalies with "holes=1, overlaps=1" in the logs. I'm using 2.0.0rc1.
>
> thanks!
> Craig
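A rough client-side volfile sketch of the layout Krishna describes above: corresponding drives on server1/server2 and on server3/server4 paired with AFR, with distribute on top. Only the first drive of each pair is spelled out here; the hostnames, brick names and option spellings are illustrative assumptions against the 2.0 volfile format, not a tested configuration:

# client volumes for drive 1 on each of the four servers
# (the full config would repeat this pattern for drives 2-4, i.e. 16 client volumes)
volume server1-drive1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume drive1      # brick name exported by server1's server volfile (assumed)
end-volume

volume server2-drive1
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume drive1
end-volume

volume server3-drive1
  type protocol/client
  option transport-type tcp
  option remote-host server3
  option remote-subvolume drive1
end-volume

volume server4-drive1
  type protocol/client
  option transport-type tcp
  option remote-host server4
  option remote-subvolume drive1
end-volume

# pair corresponding drives across servers with AFR (cluster/replicate)
volume afr1
  type cluster/replicate
  subvolumes server1-drive1 server2-drive1
end-volume

volume afr2
  type cluster/replicate
  subvolumes server3-drive1 server4-drive1
end-volume

# aggregate the AFR pairs with distribute; the full config would list afr1 through afr8 here
volume dht
  type cluster/distribute
  subvolumes afr1 afr2
end-volume

Krishna's point about starting with fresh backend directories matters here because distribute records its layout in extended attributes on the backend directories; stale attributes left over from an earlier layout are a plausible source of the "holes=1, overlaps=1" messages Craig saw.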