Alessandro Ipe
2018-Feb-01 15:39 UTC
[Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi,

My volume "home" is configured in replicate mode (version 3.12.4) with the bricks
  server1:/data/gluster/brick1
  server2:/data/gluster/brick1

server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that brick on server2, unmounted it, reformatted it, remounted it and did a
> gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit force
I was expecting that the self-heal daemon would start copying data from server1:/data/gluster/brick1 (about 7.4 TB) to the empty server2:/data/gluster/brick1, but it only did so for directories, not for files.

For the moment, I launched on the fuse mount point
> find . | xargs stat
but crawling the whole volume (100 TB) to trigger self-healing of a single brick of 7.4 TB is inefficient.

Is there any trick to self-heal only a single brick, for example by setting some attributes on its top directory?

Many thanks,

Alessandro
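For reference, a minimal sketch of how the pending heal queue can be inspected while such a crawl runs, assuming the volume name "home" used above; these are standard gluster CLI commands and can be run on any server in the trusted pool:

    # list the entries the self-heal daemon still has queued, per brick
    gluster volume heal home info

    # summary counts only, easier to read on large volumes
    gluster volume heal home statistics heal-count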
Serkan Çoban
2018-Feb-01 16:32 UTC
[Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need reset-brick if the brick path does not change. Replace the brick (format and mount it), then run "gluster v start volname force". To start self-heal, just run "gluster v heal volname full".

On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote:
> [original message quoted above]
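A minimal sketch of the sequence described above, assuming the volume name "home" and the brick path from this thread; the device name is only a placeholder and the mkfs/mount details depend on the local setup:

    # on server2: recreate the filesystem and remount the brick
    # (BRICK_DEVICE is a placeholder for the actual block device)
    mkfs.xfs -f /dev/BRICK_DEVICE
    mount /dev/BRICK_DEVICE /data/gluster/brick1

    # restart the brick process without changing the volume layout
    gluster volume start home force

    # queue a full self-heal so data is copied back from the good replica
    gluster volume heal home full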
Alessandro Ipe
2018-Feb-01 17:13 UTC
[Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi,

Thanks. However, "gluster v heal volname full" returned the following error message:
  Commit failed on server4. Please check log file for details.
I have checked the log files in /var/log/glusterfs on server4 (by grepping for "heal"), but did not get any match. What should I be looking for, and in which log file, please?

Note that there is currently a rebalance process running on the volume.

Many thanks,

A.

On Thursday, 1 February 2018 17:32:19 CET Serkan Çoban wrote:
> You do not need to reset brick if brick path does not change. Replace
> the brick format and mount, then gluster v start volname force.
> To start self heal just run gluster v heal volname full.
> [original message quoted above]

--
Dr. Ir. Alessandro Ipe
Department of Observations
Remote Sensing from Space
Royal Meteorological Institute
Avenue Circulaire 3
B-1180 Brussels, Belgium
Tel. +32 2 373 06 31
Email: Alessandro.Ipe at meteo.be
Web: http://gerb.oma.be
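A minimal sketch of where the failure could be investigated, assuming the default /var/log/glusterfs location and the volume name "home" from this thread; the exact glusterd log file name varies by release:

    # on server4: the self-heal daemon log and the heal-CLI helper log
    less /var/log/glusterfs/glustershd.log
    less /var/log/glusterfs/glfsheal-home.log

    # glusterd records why a volume-level command failed to commit;
    # depending on the release the file is glusterd.log or
    # etc-glusterfs-glusterd.vol.log
    grep -i "commit" /var/log/glusterfs/*glusterd*.log

    # state of the rebalance mentioned above
    gluster volume rebalance home status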