Bryan Whitehead
2012-Feb-14 01:17 UTC
[Gluster-users] Distributed-Replicated adding/removing nodes
I have 3 servers, but want replica = 2. To do this I have 2 bricks on each
server.

Example output:

Volume Name: images
Type: Distributed-Replicate
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: rdma
Bricks:
Brick1: lab0:/g0
Brick2: lab1:/g0
Brick3: lab2:/g0
Brick4: lab0:/g1
Brick5: lab1:/g1
Brick6: lab2:/g1

If I want to add lab3:/g0 and lab3:/g1, it will end up looking like this:

Volume Name: images
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: rdma
Bricks:
Brick1: lab0:/g0
Brick2: lab1:/g0
Brick3: lab2:/g0
Brick4: lab0:/g1
Brick5: lab1:/g1
Brick6: lab2:/g1
Brick7: lab3:/g0
Brick8: lab3:/g1

It seems like both replicas of a file could end up stuck on lab3. Do I need
to do some crazy migrations to move bricks around? Or is this somewhat
automated?

NOTE: I don't have a lab3 yet, but I'm imagining what will happen when I do,
so the above output is from my imagination. I think it is correct, though.

-Bryan
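[Editor's note: for reference, the expansion described above would be issued with the gluster CLI roughly as follows. This is a sketch only; the volume and brick names are taken from the post, and exact command syntax varies between GlusterFS releases.]

```shell
# Add the two new bricks. With replica 2, add-brick consumes bricks in
# pairs, so lab3:/g0 and lab3:/g1 would become a replica pair on the
# same server: exactly the single-node exposure the post is worried about.
gluster volume add-brick images lab3:/g0 lab3:/g1

# Existing files are not moved automatically; a rebalance spreads them
# onto the new distribute subvolume.
gluster volume rebalance images start
```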
Arnold Krille
2012-Feb-14 09:40 UTC
[Gluster-users] Distributed-Replicated adding/removing nodes
Hi,

On Monday 13 February 2012 17:17:47 Bryan Whitehead wrote:
> I have 3 servers, but want replica = 2. To do this I have 2 bricks on
> each server:
>
> Example output:
>
> Volume Name: images
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 3 x 2 = 6
> Transport-type: rdma
> Bricks:
> Brick1: lab0:/g0
> Brick2: lab1:/g0
> Brick3: lab2:/g0
> Brick4: lab0:/g1
> Brick5: lab1:/g1
> Brick6: lab2:/g1
>
> If I want to add lab3:/g0 and lab3:/g1, it will end up looking like this:
>
> Volume Name: images
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: rdma
> Bricks:
> Brick1: lab0:/g0
> Brick2: lab1:/g0
> Brick3: lab2:/g0
> Brick4: lab0:/g1
> Brick5: lab1:/g1
> Brick6: lab2:/g1
> Brick7: lab3:/g0
> Brick8: lab3:/g1
>
> This seems like files could potentially both be stuck on lab3. Do I need
> to do some crazy migrations to move bricks around? Is this somewhat
> automated?

You have to replace one of the existing bricks (let's assume lab2:/g0) with
lab3:/g0, wipe the data from the old brick lab2:/g0, and then add lab2:/g0
and lab3:/g1 as another pair of bricks. That gives you:

> Volume Name: images
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: rdma
> Bricks:
> Brick1: lab0:/g0
> Brick2: lab1:/g0
> Brick3: lab3:/g0
> Brick4: lab0:/g1
> Brick5: lab1:/g1
> Brick6: lab2:/g1
> Brick7: lab2:/g0
> Brick8: lab3:/g1

But with this you still only protect yourself against one failing node, the
same as with three nodes.

Have fun,

Arnold
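[Editor's note: the replace-then-add procedure described above would look roughly like this with the gluster CLI. This is a sketch under assumptions: brick paths come from the thread, and the start/status/commit form of replace-brick is the 3.x-era syntax; later releases handle brick replacement differently.]

```shell
# 1. Migrate lab2:/g0 onto the new node (run from any server in the pool).
gluster volume replace-brick images lab2:/g0 lab3:/g0 start
gluster volume replace-brick images lab2:/g0 lab3:/g0 status
gluster volume replace-brick images lab2:/g0 lab3:/g0 commit

# 2. Wipe the old brick directory so it can be reused (run on lab2 itself).
rm -rf /g0 && mkdir /g0

# 3. Add the freed brick and the second new brick as a new replica pair.
gluster volume add-brick images lab2:/g0 lab3:/g1

# 4. Spread existing files across the enlarged distribute layout.
gluster volume rebalance images start
```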