Heath Skarlupka
2013-Apr-29 16:19 UTC
[Gluster-users] Replicated and Non Replicated Bricks on Same Partition
Gluster-Users,

We currently have a 30 node Gluster Distributed-Replicate 15 x 2 filesystem. Each node has a ~20TB xfs filesystem mounted to /data, and the bricks live on /data/brick. We have been very happy with this setup, but we are now collecting more data that doesn't need to be replicated because it can be easily regenerated. Most of this data lives on our replicated volume and is starting to waste space. My plan is to create a second directory on the /data partition, /data/non_replicated_brick, on each of the 30 nodes and start up a second Gluster filesystem. This would let me dynamically size the replicated and non-replicated space based on our current needs.

I'm a bit worried about going forward with this because I haven't seen many users talk about putting two gluster bricks on the same underlying filesystem. I've gotten past the technical hurdle and know that it is technically possible, but I'm worried about corner cases and issues that might crop up when we add more bricks and need to rebalance both gluster volumes at once. Does anybody have insight into the caveats of doing this, or are there any users putting multiple bricks on a single filesystem in the 50-100 node size range? Thank you all for your insights and help!

Heath Skarlupka
Systems Administrator
Space Science Engineering Center
University of Wisconsin Madison
Anand Avati
2013-Apr-30 03:28 UTC
[Gluster-users] Replicated and Non Replicated Bricks on Same Partition
On Mon, Apr 29, 2013 at 9:19 AM, Heath Skarlupka <heath.skarlupka at ssec.wisc.edu> wrote:

> I'm a bit worried about going forward with this because I haven't seen many users talk about putting two gluster bricks on the same underlying filesystem. [...] Does anybody have insight into the caveats of doing this, or are there any users putting multiple bricks on a single filesystem in the 50-100 node size range?

This is a very common use case and should work fine. In the future we are exploring better integration with dm-thinp, so that each brick has its own XFS filesystem on a thin-provisioned logical volume. But for now you can create a second volume on the same XFS filesystems.

Avati
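For readers following along, the setup the thread describes might look roughly like this. This is a hedged sketch, not a command from the thread: the volume name "scratch", the hostnames node01..node30, and the transport are all assumptions; only the brick path /data/non_replicated_brick comes from the question. Omitting the "replica" keyword yields a pure Distribute (non-replicated) volume alongside the existing 15 x 2 Distributed-Replicate one.

```shell
# Assumed hostnames node01..node30; build the brick list for all 30 nodes.
BRICKS=$(for i in $(seq -w 1 30); do
    printf 'node%s:/data/non_replicated_brick ' "$i"
done)

# No "replica N" option => a pure Distribute volume (each file on one brick).
# "scratch" is a hypothetical volume name.
gluster volume create scratch transport tcp $BRICKS
gluster volume start scratch
```

The dm-thinp direction Avati mentions could be sketched as giving each brick its own XFS filesystem on a thin logical volume, so per-brick capacity is carved out lazily from a shared pool (volume-group name vg_data is an assumption):

```shell
# Thin pool spanning the free space in the volume group, then one thin
# volume per brick, each with its own XFS filesystem.
lvcreate --type thin-pool -l 100%FREE -n pool vg_data
lvcreate --thin -V 20T -n brick1 vg_data/pool
mkfs.xfs /dev/vg_data/brick1
mount /dev/vg_data/brick1 /data/brick
```

One consequence of the shared-filesystem approach worth noting: both bricks report the same underlying df numbers, so the free space Gluster sees for each volume is the free space of /data as a whole.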
Seemingly Similar Threads
- [Gluster 3.2.1] Replication issues on a two bricks volume
- mkfs.btrfs out of memory failure on lvm2 thinp volume
- incomplete listing of a directory, sometimes getdents loops until out of memory
- Expand distributed replicated volume with new set of smaller bricks
- Possible new bug in 3.1.5 discovered