Displaying 3 results from an estimated 3 matches for "vtqanh".

2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
Hi, Yes this is possible. Make sure you have cluster.weighted-rebalance enabled for the volume and run rebalance with the start force option. Which version of gluster are you running (we fixed a bug around this a while ago)? Regards, Nithya

On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote:
> We currently have a 3 node gluster setup, each has a 100TB brick (total 300TB, usable 100TB due to replica factor 3)
> We would like to expand the existing volume by adding another 3 nodes, but each will only have a 50TB brick. I think this is possible, but...
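As a rough sketch of the steps Nithya describes (the volume name gv0 is only an example, not taken from the thread), the CLI commands would look something like:

  # weight rebalance by brick size, so the smaller 50TB bricks receive proportionally less data
  gluster volume set gv0 cluster.weighted-rebalance on

  # after the new bricks have been added, run a full rebalance
  gluster volume rebalance gv0 start force

  # watch per-node progress
  gluster volume rebalance gv0 status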
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3-node gluster setup; each has a 100TB brick (total 300TB, usable 100TB due to replica factor 3). We would like to expand the existing volume by adding another 3 nodes, but each will only have a 50TB brick. I think this is possible, but will it affect gluster performance, and if so, by how much? Assuming we run a rebalance with the force option, will this distribute the existing data
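For illustration only, the expansion step being asked about would be an add-brick of a new replica-3 set; the hostnames and brick paths below are invented:

  # add a second replica set built from the three new 50TB bricks
  gluster volume add-brick gv0 replica 3 node4:/data/brick1 node5:/data/brick1 node6:/data/brick1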
2018 May 23
0
Rebalance state stuck or corrupted
We have had a rebalance operation going on for a few days. After a couple of days the rebalance status said "failed". We stopped the rebalance operation by running gluster volume rebalance gv0 stop. The rebalance log indicated gluster did try to stop the rebalance. However, when we now try to stop the volume or restart the rebalance, it says there's a rebalance operation going on and volume
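For reference, the sequence described in the post roughly corresponds to these commands (gv0 is the volume name given in the post; the outcomes in the comments restate what the poster reports, not guaranteed behaviour):

  gluster volume rebalance gv0 status   # reported "failed" after a couple of days
  gluster volume rebalance gv0 stop     # attempted stop of the failed rebalance
  gluster volume rebalance gv0 start    # now refused: a rebalance is reportedly still in progress
  gluster volume stop gv0               # also refused for the same reason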