No. Executing `statistics heal-count` shouldn't be blocking heals.
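For what it's worth, if you want to keep polling it, a loop along these
lines (a rough sketch, assuming a bash shell on one of the nodes) should
be fine, since it only reads the counters:

    # Print the pending heal-count every 5 seconds; read-only,
    # so it shouldn't interfere with ongoing heals.
    while true; do
        gluster volume heal datastore4 statistics heal-count
        sleep 5
    done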
-Krutika
On Mon, Aug 15, 2016 at 1:16 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:
> In the past half hour its started to heal. Down to 1639 shards now.
>
> Quick question - would running "gluster v heal datastore4 statistics
> heal-count" on a 5-second loop block healing?
>
> To answer my own question - I don't think so, as it appears to be
> healing quite quickly now.
>
>
> On 15 August 2016 at 17:17, Krutika Dhananjay <kdhananj at
redhat.com> wrote:
> > Could you please attach the brick logs and glustershd logs?
>
> Will get it together shortly
>
> > Also share the volume configuration please (`gluster volume info`).
>
>
> Volume Name: datastore4
> Type: Replicate
> Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
> Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
> Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
> Options Reconfigured:
> cluster.locking-scheme: granular
> cluster.granular-entry-heal: on
> cluster.background-self-heal-count: 16
> features.shard-block-size: 64MB
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.stat-prefetch: on
> performance.strict-write-ordering: off
> nfs.enable-ino32: off
> nfs.addr-namelookup: off
> nfs.disable: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> features.shard: on
> cluster.data-self-heal: on
> cluster.self-heal-window-size: 1024
> performance.readdir-ahead: on
>
>
>
>
>
> --
> Lindsay
>