Displaying 4 results from an estimated 4 matches for "8709782a".
2017 Sep 04
2
heal info OK but statistics not working
...connected for current three peers.
This is the third time it has happened to me, in the very same way:
each time the net-disjointed peer was brought back online,
statistics & details worked again.
Can you reproduce it?
$ gluster vol info QEMU-VMs
Volume Name: QEMU-VMs
Type: Replicate
Volume ID: 8709782a-daa5-4434-a816-c4e0aef8fef2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Brick2: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Brick3: 10.5.6.100:/__.aLocalStorages/0/0-...
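For context, the symptom described above boils down to this pair of commands. This is only a sketch: it assumes the QEMU-VMs volume from the info output above, and the error text is the one quoted later in the thread for GROUP-WORK, shown here with the volume name substituted.

# heal info succeeds and exits 0 even while one peer is net-disjointed
$ gluster volume heal QEMU-VMs info
$ echo $?
0
# statistics fails while any brick process is unreachable
$ gluster volume heal QEMU-VMs statistics
Gathering crawl statistics on volume QEMU-VMs has been unsuccessful on
bricks that are down. Please check if all brick processes are running.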
2017 Sep 04
0
heal info OK but statistics not working
...third time it has happened to me, in the very same way: each time
> the net-disjointed peer was brought back online, statistics & details
> worked again.
>
> Can you reproduce it?
>
> $ gluster vol info QEMU-VMs
>
> Volume Name: QEMU-VMs
> Type: Replicate
> Volume ID: 8709782a-daa5-4434-a816-c4e0aef8fef2
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
> Brick2: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
> Brick...
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
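A minimal sketch of collecting those outputs from one of the peers (here $_vol stands for the affected volume, as in the quoted message below):

$ gluster volume info $_vol
$ gluster volume status $_vol
$ gluster peer status
# a peer listed as Disconnected in the last command is the likely reason
# the statistics crawl reports bricks as down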
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ gluster vol heal $_vol info
> outputs OK and the exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics on volume GROUP-WORK
2017 Sep 04
2
heal info OK but statistics not working
hi all
this:
$ gluster vol heal $_vol info
outputs OK and the exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect gluster's inability to cope with a situation where
one peer (which is not even a brick for a single vol on
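Based on the workaround described earlier in the thread (statistics start working again once the net-disjointed peer is back online), recovery would look roughly like this. The grep pattern and the systemd unit name are assumptions, not taken from the thread:

# on a healthy peer: identify the disconnected member
$ gluster peer status | grep -B 2 Disconnected
# on the disconnected peer (assuming a systemd-managed host): restart the management daemon
$ systemctl restart glusterd
# back on a healthy peer: the crawl statistics should succeed again
$ gluster vol heal $_vol statistics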