Lindsay Mathieson
2016-Apr-17 04:12 UTC
[Gluster-users] Continual heals happening on cluster
gluster 3.7.10
Proxmox (debian jessie)
I'm finding the following more than a little concerning. I've created a
datastore with the following settings:
Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
features.shard-block-size: 64MB
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
features.shard: on
cluster.data-self-heal: on
cluster.self-heal-window-size: 1024
transport.address-family: inet
performance.readdir-ahead: on
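For anyone wanting to reproduce the setup: the non-default options above
were applied with the usual "gluster volume set" command, roughly along
these lines (a sketch reconstructed from the listing above, not my exact
command history):

# sharding and quorum on the replica-3 volume
gluster volume set datastore4 features.shard on
gluster volume set datastore4 features.shard-block-size 64MB
gluster volume set datastore4 cluster.quorum-type auto
gluster volume set datastore4 cluster.server-quorum-type server
# VM-friendly tuning
gluster volume set datastore4 network.remote-dio enable
gluster volume set datastore4 cluster.eager-lock enable
gluster volume set datastore4 performance.io-cache off
gluster volume set datastore4 performance.read-ahead off
gluster volume set datastore4 performance.quick-read off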
I've transferred 12 Windows VMs to it (via gfapi) and am running them all,
spread across three nodes.
"gluster volume heal datastore3 statistics heal-count" shows zero
heals
on all nodes.
but "gluster volume heal datastore4 info" shows heals occurring on
mutliple shards on all nodes, different shards each time its called.
gluster volume heal datastore4 info
Brick vnb.proxmox.softlog:/tank/vmdata/datastore4
/.shard/d297f8d6-e263-4af3-9384-6492614dc115.221
/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1362
/.shard/bbdff876-290a-4e5e-93ef-a95276d57220.942
/.shard/eaeb41ec-9c0d-4fed-984f-cf832d8d33e0.1032
/.shard/f8ce4b49-14d0-46ef-9a95-456884f34fd4.623
/.shard/e9a39d2e-a1b7-4ea0-9d8c-b55370048d03.483
/.shard/f8ce4b49-14d0-46ef-9a95-456884f34fd4.47
/.shard/eaeb41ec-9c0d-4fed-984f-cf832d8d33e0.160
Status: Connected
Number of entries: 8
Brick vng.proxmox.softlog:/tank/vmdata/datastore4
/.shard/bd493985-2ee6-43f1-b8d5-5f0d5d3aa6f5.33
/.shard/d297f8d6-e263-4af3-9384-6492614dc115.48
/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1304
/.shard/d297f8d6-e263-4af3-9384-6492614dc115.47
/.shard/719041d0-d755-4bc6-a5fc-6b59071fac17.142
Status: Connected
Number of entries: 5
Brick vna.proxmox.softlog:/tank/vmdata/datastore4
/.shard/d297f8d6-e263-4af3-9384-6492614dc115.357
/.shard/bbdff876-290a-4e5e-93ef-a95276d57220.996
/.shard/d297f8d6-e263-4af3-9384-6492614dc115.679
/.shard/d297f8d6-e263-4af3-9384-6492614dc115.496
/.shard/eaeb41ec-9c0d-4fed-984f-cf832d8d33e0.160
/.shard/719041d0-d755-4bc6-a5fc-6b59071fac17.954
/.shard/d297f8d6-e263-4af3-9384-6492614dc115.678
/.shard/719041d0-d755-4bc6-a5fc-6b59071fac17.852
/.shard/bbdff876-290a-4e5e-93ef-a95276d57220.1544
Status: Connected
Number of entries: 9
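To check that these really are different shards each time rather than the
same entries being stuck, the output of two runs can be compared directly -
something like the following (the /tmp file names are just examples):

# snapshot the heal-info entries, wait a bit, snapshot again and diff
gluster volume heal datastore4 info | grep '\.shard' | sort > /tmp/heal-run1
sleep 30
gluster volume heal datastore4 info | grep '\.shard' | sort > /tmp/heal-run2
diff /tmp/heal-run1 /tmp/heal-run2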
--
Lindsay Mathieson
Lindsay Mathieson
2016-Apr-17 09:09 UTC
[Gluster-users] Continual heals happening on cluster
I shut down all the VMs, waited for the heals to finish, then stopped all
gluster processes.
set the following options to off on datastore4 (commands sketched below):
performance.write-behind
performance.flush-behind
cluster.data-self-heal
and restarted everything. Same issue - assuming it actually is an
"issue".
--
Lindsay Mathieson