nux at li.nux.ro
2021-Mar-19 15:32 UTC
[Gluster-users] Evergrowing distributed volume question
Hello,

A while ago I attempted, and failed, to maintain an "evergrowing" storage solution based on GlusterFS. I was relying on a distributed, non-replicated volume to host backups and so on, with the idea that whenever it got close to full I would just add another brick (server) and keep going like that.

In reality, many of the writes kept being distributed to the brick that had (in time) filled up, ending in "out of space" errors despite one or more bricks having plenty of space.

Can anyone advise whether current GlusterFS behaviour has improved in this regard, i.e. does it check whether a brick is full and redirect the write to one that is not?

Regards,
Lucian
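The failure mode can be pictured with a minimal Python sketch. The brick names and free-space figures below are made up, and CRC32 with a simple modulo stands in for Gluster's real DHT hash, but the principle it illustrates is the same: placement is decided by the file name alone, never by free space.

import zlib

# Illustrative brick names and free-space figures (GB) -- assumptions
# for the example, not a real cluster.
bricks = {"server1:/data": 0, "server2:/data": 100}

def pick_brick(filename):
    # Placement depends only on the file name's hash; free space on
    # the target brick is never consulted.
    names = sorted(bricks)
    return names[zlib.crc32(filename.encode()) % len(names)]

for f in ["backup-a.tar", "backup-b.tar", "backup-c.tar"]:
    target = pick_brick(f)
    status = "ENOSPC (out of space)" if bricks[target] == 0 else "ok"
    print(f"{f} -> {target}: {status}")

Any name whose hash lands on the full brick fails with ENOSPC, exactly as described above, even though the other brick is nearly empty.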
Strahil Nikolov
2021-Mar-19 17:12 UTC
[Gluster-users] Evergrowing distributed volume question
As Gluster does not have a metadata server, the clients identify the brick via a special algorithm based on the file/directory name. Each brick corresponds to a 'range' of hashes, so when you add a new brick you always need to rebalance the volume.

Best Regards,
Strahil Nikolov
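To make the hash-range idea concrete, here is a minimal Python sketch of that placement scheme. It is an illustration only: the brick names are invented, and CRC32 over equal-sized ranges stands in for Gluster's actual DHT hash and per-directory layouts. What it shows is why a newly added brick receives no files until the ranges are recomputed, which is what a rebalance does.

import zlib

HASH_SPACE = 2 ** 32  # CRC32 values fall in [0, 2**32)

def layout(bricks):
    # Split the hash space into equal contiguous ranges, one per brick.
    step = HASH_SPACE // len(bricks)
    return [(i * step,
             HASH_SPACE if i == len(bricks) - 1 else (i + 1) * step,
             brick)
            for i, brick in enumerate(bricks)]

def place(filename, ranges):
    # A file lands on the brick whose range covers its name's hash;
    # there is no metadata server to consult.
    h = zlib.crc32(filename.encode())
    for lo, hi, brick in ranges:
        if lo <= h < hi:
            return brick

bricks = ["server1:/data", "server2:/data"]
ranges = layout(bricks)
print(place("backup-2021-03-19.tar", ranges))

# A freshly added brick owns no hash range until the layout is
# recomputed, so new writes keep hashing onto the old (possibly full)
# bricks. Recomputing the ranges, and optionally migrating existing
# data into them, is what the rebalance commands do:
#   gluster volume rebalance <VOLNAME> fix-layout start
#   gluster volume rebalance <VOLNAME> start
bricks.append("server3:/data")
ranges = layout(bricks)  # the post-rebalance layout
print(place("backup-2021-03-19.tar", ranges))

Note that the same file name can map to a different brick once the layout changes, which is why a full rebalance also migrates existing data rather than only reassigning ranges.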