I have a question regarding the total volume size of a mounted GlusterFS volume. At least in a simple replicated volume (2x1), the size of the volume is that of the smallest brick. We can extend it live by extending the corresponding bricks, and the GlusterFS volume will immediately appear bigger, up to the size of the smallest brick.

Now, I had a problem on my setup. Long story short: an LVM bug forcibly unmounted the filesystems on which my bricks were running, while Gluster was in use. Instead of having an 8TB filesystem mounted on /mnt/bricks/vmstore, the server suddenly found an empty /mnt/bricks/vmstore pointing at / of this server (20GB).

After 3 hours during which Gluster complained about missing files on node1 (while continuing to serve files from node2 transparently), it decided to start healing from the correct node to this empty /mnt/bricks/vmstore on the failed node. Except that, in doing so, it suddenly reduced the total size of the (mounted, and in-use) volume from 8TB to 20GB. Needless to say, the VM using it didn't like it.

Now, my question is: is there a way to prevent GlusterFS from automatically reducing the total size of the volume? In this case, I would have liked the failing node to be prevented from healing, as its brick was only 20GB while the volume was 8TB (of which ~5.5TB was used).

Cheers,
Daniel

-- 
Daniel Berteaud
FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
www.firewall-services.com
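P.S. For context, by "extending the bricks live" above I mean something like the following on each node. This is only a sketch: the VG/LV names (vg_bricks/vmstore) are examples, and it assumes LVM-backed bricks formatted with XFS.

    # Grow the logical volume backing the brick (example: +2TB)
    lvextend -L +2T /dev/vg_bricks/vmstore

    # Grow the filesystem to fill the enlarged LV
    # (use resize2fs instead if the brick is ext4)
    xfs_growfs /mnt/bricks/vmstore

Once the bricks on all replicas have been grown, the mounted Gluster volume reports the new size, still bounded by the smallest brick.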