paf1 at email.cz
2016-Mar-03 09:23 UTC
[Gluster-users] [ovirt-users] open error -13 = sanlock
This is replica 2, only, with the following settings:

Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed
cluster.server-quorum-type: none
storage.owner-uid: 36
storage.owner-gid: 36
cluster.quorum-count: 1
cluster.self-heal-daemon: enable

If I create the "ids" file manually (e.g. "sanlock direct init -s 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0") on both bricks, vdsm writes to only one of them (the one with 2 links = correct).
The "ids" file has the correct permissions, owner and size on both bricks.
brick 1: -rw-rw---- 1 vdsm kvm 1048576 2. bře 18.56 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - not updated
brick 2: -rw-rw---- 2 vdsm kvm 1048576 3. bře 10.16 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - continually updated

What happens when I restart vdsm? Will the oVirt storage domains go to the "disabled" state = disconnect the VMs' storage?

regs. Pa.

On 3.3.2016 02:02, Ravishankar N wrote:
> On 03/03/2016 12:43 AM, Nir Soffer wrote:
>>
>> PS: # find /STORAGES -samefile
>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>> -print
>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>> = missing "shadow file" in the ".glusterfs" dir.
>> How can I fix it?? - online!
>>
>> Ravi?
> Is this the case in all 3 bricks of the replica?
> BTW, you can just stat the file on the brick and see the link count
> (it must be 2) instead of running the more expensive find command.
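For reference, a minimal way to do the link-count check Ravi mentions, run directly on each brick host (paths copied from this thread), could look like this:

# a healthy brick copy should report "Links: 2" (the file plus its .glusterfs hard link)
stat -c 'Links: %h  Size: %s  Owner: %U:%G  File: %n' \
    /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids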
Ravishankar N
2016-Mar-03 10:50 UTC
[Gluster-users] [ovirt-users] open error -13 = sanlock
On 03/03/2016 02:53 PM, paf1 at email.cz wrote:
> This is replica 2, only, with the following settings:
>
> Options Reconfigured:
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: fixed

Not sure why you have set this option. Ideally, replica 3 or arbiter volumes are recommended for gluster+ovirt use; (client) quorum does not make sense for a 2-node setup. I have a detailed write-up that explains this:
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

> cluster.server-quorum-type: none
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.quorum-count: 1
> cluster.self-heal-daemon: enable
>
> If I create the "ids" file manually (e.g. "sanlock direct init -s
> 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0")
> on both bricks, vdsm writes to only one of them (the one with 2 links = correct).
> The "ids" file has the correct permissions, owner and size on both bricks.
> brick 1: -rw-rw---- 1 vdsm kvm 1048576 2. bře 18.56
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
> not updated

Okay, so this one has link count = 1, which means the .glusterfs hard link is missing. Can you try deleting this file from the brick and performing a stat on the file from the mount? That should heal it (i.e. recreate it) on this brick from the other brick, with the appropriate .glusterfs hard link.

> brick 2: -rw-rw---- 2 vdsm kvm 1048576 3. bře 10.16
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
> continually updated
>
> What happens when I restart vdsm? Will the oVirt storage domains go to the
> "disabled" state = disconnect the VMs' storage?

No idea on this one...
-Ravi

> regs. Pa.
>
> On 3.3.2016 02:02, Ravishankar N wrote:
>> On 03/03/2016 12:43 AM, Nir Soffer wrote:
>>>
>>> PS: # find /STORAGES -samefile
>>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>> -print
>>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>> = missing "shadow file" in the ".glusterfs" dir.
>>> How can I fix it?? - online!
>>>
>>> Ravi?
>> Is this the case in all 3 bricks of the replica?
>> BTW, you can just stat the file on the brick and see the link count
>> (it must be 2) instead of running the more expensive find command.
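In case it helps, the repair Ravi describes would look roughly like the sketch below. The brick path is the one from this thread; the fuse mount point /mnt/gfs and the file's path relative to the volume root are assumptions, so substitute your own:

# on the brick with link count 1 (brick 1), remove the stale copy
rm /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

# from a client, look up the file through the gluster mount to trigger the heal
# (/mnt/gfs/... is a placeholder path on the fuse mount)
stat /mnt/gfs/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

# afterwards the brick copy should show a link count of 2 again
stat -c '%h %n' /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids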
On Thu, Mar 3, 2016 at 11:23 AM, paf1 at email.cz <paf1 at email.cz> wrote:
> This is replica 2, only, with the following settings:

Replica 2 is not supported. Even if you "fix" this now, you will have the same issue again soon.

> Options Reconfigured:
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: fixed
> cluster.server-quorum-type: none
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.quorum-count: 1
> cluster.self-heal-daemon: enable
>
> If I create the "ids" file manually (e.g. "sanlock direct init -s
> 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0")
> on both bricks, vdsm writes to only one of them (the one with 2 links = correct).
> The "ids" file has the correct permissions, owner and size on both bricks.
> brick 1: -rw-rw---- 1 vdsm kvm 1048576 2. bře 18.56
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
> not updated
> brick 2: -rw-rw---- 2 vdsm kvm 1048576 3. bře 10.16
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
> continually updated
>
> What happens when I restart vdsm? Will the oVirt storage domains go to the
> "disabled" state = disconnect the VMs' storage?

Nothing will happen; the VMs will continue to run normally. On block storage, stopping vdsm prevents automatic extension of VM disks when a disk becomes too full, but on file-based storage (like gluster) there is no issue.

> regs. Pa.
>
> On 3.3.2016 02:02, Ravishankar N wrote:
>> On 03/03/2016 12:43 AM, Nir Soffer wrote:
>>>
>>> PS: # find /STORAGES -samefile
>>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>> -print
>>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>> = missing "shadow file" in the ".glusterfs" dir.
>>> How can I fix it?? - online!
>>>
>>> Ravi?
>> Is this the case in all 3 bricks of the replica?
>> BTW, you can just stat the file on the brick and see the link count
>> (it must be 2) instead of running the more expensive find command.
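For completeness, a hedged sketch of converting a replica 2 volume to the recommended arbiter setup is shown below. VOLNAME, the arbiter host and its brick path are placeholders, and the arbiter documentation linked earlier in the thread should be checked for the exact syntax supported by your gluster version:

# add a third, arbiter brick to the existing replica 2 volume
gluster volume add-brick VOLNAME replica 3 arbiter 1 arbiter-host:/bricks/arbiter/VOLNAME

# watch the self-heal that populates the new arbiter brick
gluster volume heal VOLNAME info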