Susant Palai
2019-Mar-13 07:33 UTC
[Gluster-users] Removing Brick in Distributed GlusterFS
On Tue, Mar 12, 2019 at 5:16 PM Taste-Of-IT <kontakt at taste-of-it.de> wrote:

> Hi Susant,
>
> thanks for your fast reply and for pointing me to that log. So I was able
> to find the problem: "dht-rebalance.c:1052:__dht_check_free_space]
> 0-vol4-dht: Could not find any subvol with space accomodating the file"
>
> But volume detail and df -h show x TB of free disk space and also free
> inodes.
>
> Options Reconfigured:
> performance.client-io-threads: on
> storage.reserve: 0
> performance.parallel-readdir: off
> performance.readdir-ahead: off
> auth.allow: 192.168.0.*
> nfs.disable: off
> transport.address-family: inet
>
> OK, since there is enough disk space on the other bricks and I did not
> actually complete the remove-brick, can I rerun remove-brick to rebalance
> the last files and folders?

Ideally, the error should not have been seen with disk space available on
the target nodes. You can start remove-brick again and it should move out
the remaining set of files to the other bricks.

> Thanks
> Taste
>
> On 12.03.2019 10:49:13, Susant Palai wrote:
> > Would it be possible for you to pass the rebalance log file from the
> > node from which you want to remove the brick? (location:
> > /var/log/glusterfs/<volume-name-rebalance.log>)
> >
> > Plus the following information:
> > 1 - gluster volume info
> > 2 - gluster volume status
> > 3 - df -h output on all 3 nodes
> >
> > Susant
> >
> > On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <kontakt at taste-of-it.de> wrote:
> > > Hi,
> > > I have a 3-node distributed Gluster setup with one volume across all
> > > 3 nodes/bricks. I want to remove one brick, so I ran gluster volume
> > > remove-brick <vol> <brickname> start. The job completes but shows
> > > 11960 failures and transfers only 5 TB out of 15 TB of data. There
> > > are still files and folders from this volume on the brick to be
> > > removed. I have not yet run the final command with "commit". The two
> > > other nodes each have over 6 TB of free space, so theoretically they
> > > can hold the remaining data from Brick3.
> > >
> > > Need help.
> > > thx
> > > Taste

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
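The error quoted above comes from __dht_check_free_space in dht-rebalance.c: for each candidate destination subvolume, rebalance checks whether the file would still fit while honoring the configured storage.reserve. A minimal sketch of that decision, assuming a simplified model where a subvolume qualifies only if its free space after the move stays at or above the reserve (the function and variable names here are illustrative, not the GlusterFS source):

```python
# Illustrative sketch of the free-space check behind
# "Could not find any subvol with space accomodating the file".
# Assumption: a target qualifies only if (free - file_size) stays at or
# above the storage.reserve fraction of the brick's total size.

TB = 1024 ** 4
GB = 1024 ** 3

def pick_subvol(file_size, subvols, reserve_pct=0):
    """Return the name of a subvolume that can hold file_size bytes,
    or None (None corresponds to the rebalance error in the log)."""
    for name, total, free in subvols:
        reserved = total * reserve_pct / 100.0
        if free - file_size >= reserved:
            return name
    return None

# Numbers from this thread: two target bricks, 3.3 TB free each,
# files ranging from a few KB to over 40 GB.
subvols = [("brick1", 16.3 * TB, 3.3 * TB),
           ("brick2", 16.3 * TB, 3.3 * TB)]

print(pick_subvol(40 * GB, subvols))  # a 40 GB file fits -> brick1
print(pick_subvol(4 * TB, subvols))   # 4 TB fits nowhere -> None
```

In this model a nonzero storage.reserve can refuse even small files once free space dips below the reserve, which is one reason the `storage.reserve: 0` setting in the volume options above is relevant to the diagnosis.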
Taste-Of-IT
2019-Mar-13
[Gluster-users] Removing Brick in Distributed GlusterFS

Hi,

I stopped the remove-brick operation. Then I upgraded Debian to Stretch,
because the Gluster repository for Jessie with the latest GlusterFS 4.0
threw an HTTP 404 error which I could not fix in time. So I upgraded to
Stretch and then to the latest GlusterFS 4.0.2. Then I ran remove-brick
again, which led to the same error.

Brick1 and Brick2 have a total disk size of 32.6 TB and each now has
3.3 TB free. Brick3, which is to be removed, has a total of 16.3 TB with
7.7 TB free. The files to move range from a few KB to over 40 GB. So
approx. 7 TB has to move, which admittedly cannot all be stored in
2 * 3.3 TB, but as I understand it, rebalance should move files until the
free disk space on Brick1 and Brick2 is nearly zero. Right?

OK, I will add a temp disk and move x TB out of the volume. All in all I
think it is still a bug.

Thx.

On 13.03.2019 08:33:35, Susant Palai wrote:
> On Tue, Mar 12, 2019 at 5:16 PM Taste-Of-IT <kontakt at taste-of-it.de> wrote:
> > [...]
> > OK, since there is enough disk space on the other bricks and I did not
> > actually complete the remove-brick, can I rerun remove-brick to
> > rebalance the last files and folders?
>
> Ideally, the error should not have been seen with disk space available
> on the target nodes. You can start remove-brick again and it should move
> out the remaining set of files to the other bricks.
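The expectation stated above, that rebalance should keep placing files until the target bricks' free space is nearly zero and only then report failures, can be checked with a back-of-the-envelope model using the thread's numbers. This is my own illustration with an assumed 0.5 GB average file size, not GlusterFS code:

```python
# Back-of-the-envelope model of draining ~7 TB off Brick3 into Brick1 and
# Brick2 with 3.3 TB free each (not GlusterFS code): files are placed
# greedily on whichever target brick has the most room, and whatever no
# longer fits anywhere counts as a failure.

def drain(total_gb, free_gb, file_gb=0.5):
    """Move total_gb of data in file_gb-sized chunks into bricks with the
    given free space; return (moved_gb, failed_gb)."""
    free = list(free_gb)
    moved = failed = 0.0
    remaining = total_gb
    while remaining > 0:
        size = min(file_gb, remaining)
        # choose the brick with the most free space, like a simple balancer
        target = max(range(len(free)), key=lambda i: free[i])
        if free[target] >= size:
            free[target] -= size
            moved += size
        else:
            failed += size
        remaining -= size
    return moved, failed

moved, failed = drain(7000, [3300, 3300])  # ~7 TB to move, 3.3 TB free each
print(moved, failed)  # 6600.0 moved, 400.0 left over
```

Under this idealized model roughly 6.6 TB would be absorbed and only about 0.4 TB would remain, so the ideal outcome would be failures on a small remainder only; an error reported while the targets still have free space, as seen in this thread, points to something else going wrong.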