I've figured out the problem.
If you mount the GlusterFS volume with the native client on one of the peers
and another peer crashes, the volume doesn't self-heal after that peer reboots.
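For reference, this is roughly how I reproduce it (the hostnames and the
volume name below are just examples for a replica 2 setup):

# mount the volume with the native client directly on one of the peers
mount -t glusterfs node1:/testvol /mnt/testvol
# write some data to /mnt/testvol, power off node2 abruptly, then reboot
# it and restart the daemon on node2:
service glusterd start
# walking the mount from node1 does not trigger healing of node2's brick
find /mnt/testvol -print0 | xargs --null stat > /dev/null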
Should I file this issue in the bug tracker?
Bye
Raf
----- Original Message -----
From: "R.C." <milanraf at gmail.com>
To: <gluster-users at gluster.org>
Sent: Monday, March 14, 2011 11:41 PM
Subject: Best practices after a peer failure?
> Hello to the list.
>
> I'm experimenting with GlusterFS in various topologies using multiple
> VirtualBox VMs.
>
> Like any system administrator, I'm mainly interested in disaster
> recovery scenarios. The first is a replica 2 configuration, with one
> peer crashing (actually, the VM being stopped abruptly) while data is
> being written to the volume.
> After rebooting the stopped VM and relaunching the gluster daemon (service
> glusterd start), the cluster doesn't start healing by itself.
> I've also tried the suggested commands:
> find <gluster-mount> -print0 | xargs --null stat >/dev/null
> and
> find <gluster-mount> -type f -exec dd if='{}' of=/dev/null bs=1M \; > /dev/null 2>&1
> without success.
> A rebalance command recreates the replicas, but when accessing the cluster,
> the always-alive client is the only one committing data to disk.
>
> What am I doing wrong?
>
> Thank you for your support.
>
> Raf
>