mki-glusterfs at mozone.net
2011-Jun-03 19:42 UTC
[Gluster-users] adding new bricks for volume expansion with 3.0.x?
Hi

How does one go about expanding a volume that consists of a distribute-replicate set of machines in 3.0.6? The setup consists of 4 pairs of machines, with 3 bricks per machine. I need to add an additional 5 pairs of machines (15 bricks) to the volume, but I don't understand what's required per se. There are currently 4 client machines mounting the volume using the ip.of.first.backend:/volume syntax in fstab, where the first backend server provides the general volume to the clients.

Looking at some of the past mailing list chatter, it seems scale-n-defrag.sh is what I need, but it's unclear how to go about this without disruption to services on the other clients. If I bring up a new client server, copy the vol file, and mount the volume using that volfile temporarily to run the defrag, do I have to make all the existing clients also mount that exact same vol file while this is running? Or can they keep running on their old volfile for the duration of the defrag? The last time I tried modifying the vol file by even 1 byte on the backend that was serving it up, the clients refused to mount it, so I'm not sure how that's supposed to work in this case.

Can someone please shed some light on the correct process for this?

Thanks much.

Mohan
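For context, a 3.0.x client volfile for a distribute-over-replicate volume is typically structured roughly as in the sketch below; the hostnames, brick names, and subvolume names here are illustrative, not taken from the thread. Expanding at this level means adding new protocol/client and cluster/replicate stanzas for each new pair and appending the new replicate subvolumes to the cluster/distribute translator's subvolumes line:

    volume server1-brick1
      type protocol/client
      option transport-type tcp
      option remote-host server1          # illustrative hostname
      option remote-subvolume brick1      # export defined in the server volfile
    end-volume

    volume server2-brick1
      type protocol/client
      option transport-type tcp
      option remote-host server2
      option remote-subvolume brick1
    end-volume

    volume replicate1
      type cluster/replicate
      subvolumes server1-brick1 server2-brick1
    end-volume

    # ... one pair of protocol/client stanzas plus one cluster/replicate
    # stanza per existing brick pair, and the same pattern (e.g. replicate5
    # over server9-brick1/server10-brick1) for each newly added pair ...

    volume distribute
      type cluster/distribute
      # new replicate subvolumes are appended to this list
      subvolumes replicate1 replicate2 replicate3 replicate4 replicate5
    end-volume

Since this is only a sketch, the actual stanza names and option set should be copied from the existing volfile rather than from here.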
Fabricio Cannini
2011-Jun-03 23:39 UTC
[Gluster-users] adding new bricks for volume expansion with 3.0.x?
On Friday, 03 June 2011, at 16:42:37, mki-glusterfs at mozone.net wrote:
> Hi
>
> How does one go about expanding a volume that consists of a distribute-replicate
> set of machines in 3.0.6? The setup consists of 4 pairs of machines, with 3
> bricks per machine. I need to add an additional 5 pairs of machines (15 bricks)
> to the volume, but I don't understand what's required per se. There are
> currently 4 client machines mounting the volume using the
> ip.of.first.backend:/volume syntax in fstab, where the first backend server
> provides the general volume to the clients.
>
> Looking at some of the past mailing list chatter, it seems scale-n-defrag.sh
> is what I need, but it's unclear how to go about this without disruption to
> services on the other clients. [...]
>
> Can someone please shed some light on the correct process for this?
>
> Thanks much.
>
> Mohan

Hi Mohan.

Is upgrading to a newer version an option for you? If yes, then I would look into it *before* trying scale-n-defrag.sh. Operations like this got much easier from 3.1 onwards.

Good luck.
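For comparison, on 3.1 and later the same expansion is driven from the gluster CLI rather than by editing volfiles. A rough sketch, assuming a replica-2 volume named myvol and illustrative host and brick paths:

    # add the new servers to the trusted storage pool
    gluster peer probe server9
    gluster peer probe server10

    # add new bricks in multiples of the replica count (here, 2)
    gluster volume add-brick myvol server9:/export/brick1 server10:/export/brick1

    # spread existing data onto the new bricks and watch progress
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status

Clients pick up the new graph automatically, which is the main reason the newer releases make this kind of change much less disruptive.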
mki-glusterfs at mozone.net
2011-Jun-04 21:41 UTC
[Gluster-users] adding new bricks for volume expansion with 3.0.x?
> How does one go about expanding a volume that consists of a distribute-replicate
> set of machines in 3.0.6? The setup consists of 4 pairs of machines, with 3
> bricks per machine. I need to add an additional 5 pairs of machines (15 bricks)
> to the volume, but I don't understand what's required per se. There are
> currently 4 client machines mounting the volume using the
> ip.of.first.backend:/volume syntax in fstab, where the first backend server
> provides the general volume to the clients.

Following up on my own message: it seems that's exactly it, one must mount the volumes using separate vol files (not using the ip:/vol syntax). And the new bricks become visible right away, which is great.

However, my new problem is rebalancing the data in the directories. There are literally tens of thousands of directories and millions of files. Using scale-n-defrag.sh seems to do nothing, and it spews "find: setfattr: No such attribute" when it tries to look for the trusted.glusterfs.dht attribute to delete it! Does anyone know what can be done to rebalance in 3.0.6?

Thanks

Mohan
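One hedged way to narrow this down is to check which directories actually carry the layout xattr before trying to remove it, so setfattr only runs where the attribute exists and the "No such attribute" noise disappears. The mount path below is illustrative, and whether trusted.* xattrs are visible through the client mount depends on the version; if getfattr shows nothing there, the same check can be run against the brick directories on the backends:

    # list directories that actually have the DHT layout xattr set
    find /mnt/glusterfs -type d \
        -exec getfattr -n trusted.glusterfs.dht --absolute-names -e hex {} \; 2>/dev/null

    # remove the xattr only where it is present
    find /mnt/glusterfs -type d | while read -r dir; do
        if getfattr -n trusted.glusterfs.dht --absolute-names "$dir" >/dev/null 2>&1; then
            setfattr -x trusted.glusterfs.dht "$dir"
        fi
    done

This is only a diagnostic sketch, not the scale-n-defrag.sh procedure itself; if no directories show the xattr at all through the mount, that would at least explain why the script appears to do nothing.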