Hi,

I have a 2-node replicated volume under oVirt 3.5. My self-heal daemon is not running, and I have a lot of unhealed VMs on my GlusterFS volume:

[root@node1 ~]# gluster volume heal g1sata info
Brick node0.itsmart.cloud:/data/sata/brick/
<gfid:bc0623a6-da58-4a86-8c81-f8ac67dbbe35>
Number of entries: 1

Brick node1.itsmart.cloud:/data/sata/brick/
<gfid:bc0623a6-da58-4a86-8c81-f8ac67dbbe35>
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/6788e53a-750d-4566-8579-37f586a0f306/2f62334e-39dc-4ffa-9102-51289588c42b - Possibly undergoing heal
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/12ff021d-4075-4032-979c-685520dc1895/4051ffec-3dd2-495d-989b-eefb9fe92221 - Possibly undergoing heal
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/c9dbc63e-b9a2-43aa-b433-8c53ce824492/bb0efb35-5164-4b22-9bed-5daeacf97129 - Possibly undergoing heal
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/388c14f5-5690-4eae-a7dc-76d782ad8acc/0059a2c2-f8b1-4979-8321-41422d9a469f - Possibly undergoing heal
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/2cb7ee4b-5c43-45e7-b13e-18aa3df0ef66/c0cd0554-ac37-4feb-803c-d1207219e3a1 - Possibly undergoing heal
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/1bb441b8-84a2-4d5b-bd29-f57b100bbce4/095230c2-0411-44cf-a085-3c929e4ca9b6 - Possibly undergoing heal
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/e3751092-3f6a-4aa6-b569-2a2fb4ae294a/133b2d17-2a2a-4ec3-b26a-4fd685aa2b78 - Possibly undergoing heal
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/1535497b-d6ca-40e3-84b0-85f55217cbc9/144ddc5c-be25-4d5e-91a4-a0864ea2a10e - Possibly undergoing heal
Number of entries: 9

Status of volume: g1sata
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 172.16.0.10:/data/sata/brick              49152   Y       27983
Brick 172.16.0.11:/data/sata/brick              49152   Y       2581
NFS Server on localhost                         2049    Y       14209
Self-heal Daemon on localhost                   N/A     Y       14225
NFS Server on 172.16.0.10                       2049    Y       27996
Self-heal Daemon on 172.16.0.10                 N/A     Y       28004

Task Status of Volume g1sata
------------------------------------------------------------------------------
There are no active volume tasks

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-libs-3.5.2-1.el6.x86_64
glusterfs-cli-3.5.2-1.el6.x86_64
glusterfs-rdma-3.5.2-1.el6.x86_64
glusterfs-server-3.5.2-1.el6.x86_64
glusterfs-3.5.2-1.el6.x86_64
glusterfs-api-3.5.2-1.el6.x86_64
glusterfs-fuse-3.5.2-1.el6.x86_64
vdsm-gluster-4.16.7-1.gitdb83943.el6.noarch

This is CentOS 6.5; the firewall is disabled and SELinux is in permissive mode. I restarted the service on each node, but that didn't help. I also have split-brain entries.

Could someone help me?

Thanks,
Tibor
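For reference, a sketch of the usual next steps on GlusterFS 3.5 when the self-heal daemon is down and entries are stuck pending heal. This assumes the volume name g1sata from the output above; these are standard gluster CLI commands, not a confirmed fix for this particular cluster:

    # Respawn the self-heal daemon (and NFS server) without touching
    # the running brick processes:
    gluster volume start g1sata force

    # Confirm the self-heal daemon now shows Online=Y on both nodes:
    gluster volume status g1sata

    # Trigger a full self-heal crawl of the volume:
    gluster volume heal g1sata full

    # List only the files actually in split-brain, as opposed to
    # those merely pending heal:
    gluster volume heal g1sata info split-brain

Split-brain files themselves are not healed automatically in 3.5; they have to be resolved by picking a good copy on one brick before the daemon can reconcile them.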