Displaying 4 results from an estimated 4 matches for "nolockinode".
2018 Sep 19  3  CTDB potential locking issue
...". If
> that is needed for Cephfs then the fsname_norootdir option might not be
> appropriate.
>
This was a leftover from a short-lived experiment with OCFS2 where I think
it was required. I think CephFS should be fine with fsname.
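If I understand correctly, that would be roughly the following in the global
section (assuming the fileid module is loaded via vfs objects, per
vfs_fileid(8)):

    [global]
        # load the fileid VFS module so the fileid:* options take effect
        vfs objects = fileid
        # derive file IDs from the filesystem name
        fileid:algorithm = fsname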
>
> You could also consider using the fileid:nolockinode hack if it is
> appropriate.
>
> You should definitely read vfs_fileid(8) before using either of these
> options.
>
I'll have a read. Thanks again for your assistance.
>
> Although clustering has obvious benefits, it doesn't come for
> free. Dealing with contentio...
2018 Sep 19  0  CTDB potential locking issue
...l files = yes
The share is accessed by the Windows machines to install software, read
configs, etc. I would have thought the share being read-only would preclude
this type of locking behaviour?
Do I need to explicitly disable locking in the share definition?
I suppose I could still use fileid:nolockinode for this file. Do I just
add fileid:nolockinode = <inode number> to the global section of my smb.conf?
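Something like this, perhaps, where 12345 is just a placeholder for the real
inode number of the contended file:

    [global]
        # fileid module must be loaded for the fileid:* options to apply
        vfs objects = fileid
        fileid:algorithm = fsid
        # placeholder inode number; replace with the inode reported by stat(1)
        fileid:nolockinode = 12345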
Thanks,
David
On Wed, Sep 19, 2018 at 7:00 PM David C <dcsysengineer at gmail.com> wrote:
> Hi Martin
>
> Many thanks for the detailed response. A few follow-ups inline:
>...
2018 Sep 19  0  CTDB potential locking issue
...rency there, then you could think about using the
fileid:algorithm = fsname_norootdir
option. However, I note you're using "fileid:algorithm = fsid". If
that is needed for Cephfs then the fsname_norootdir option might not be
appropriate.
You could also consider using the fileid:nolockinode hack if it is
appropriate.
You should definitely read vfs_fileid(8) before using either of these
options.
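As a rough sketch only (the man page has the exact semantics), the first
option would look something like:

    [global]
        vfs objects = fileid
        # like fsname, but the share root directory is treated specially so
        # it does not cause cross-node lock contention (see vfs_fileid(8))
        fileid:algorithm = fsname_norootdir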
Although clustering has obvious benefits, it doesn't come for
free. Dealing with contention can be tricky... :-)
peace & happiness,
martin
2018 Sep 18  4  CTDB potential locking issue
Hi All
I have a newly implemented two-node CTDB cluster running on CentOS 7, Samba
4.7.1.
The node network is a direct 1Gb link
Storage is Cephfs
ctdb status is OK
It seems to be running well so far but I'm frequently seeing the following
in my log.smbd:
> [2018/09/18 19:16:15.897742, 0]
> ../source3/lib/dbwrap/dbwrap_ctdb.c:1207(fetch_locked_internal)
> db_ctdb_fetch_locked for