similar to: CTDB starting statd without -n gfs -H /etc/ctdb/statd-callout

Displaying 20 results from an estimated 3000 matches similar to: "CTDB starting statd without -n gfs -H /etc/ctdb/statd-callout"

2014 Dec 12
0
Intermittent Event Script Timeouts on CTDB Cluster Nodes
Hi All, I've got a CTDB cluster, managing NFSv3 and Samba, sitting in front of a GPFS storage cluster. The NFSv3 piece is carrying some pretty heavy traffic at peak load. About once every three to four days, CTDB has been exhibiting behaviors that result in IP-failover between two nodes for reasons that are currently unknown. The exact chain of events has been a little different each time
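A quick way to see which event script is timing out (a sketch; "ctdb scriptstatus" is the command from CTDB of that era, and the log path is an assumption that varies by distro):

    # Show per-script results from the last monitor run; a script in
    # TIMEDOUT state here is usually what triggered the failover.
    ctdb scriptstatus
    # Cross-check the daemon log around the failover window.
    grep -i "timed out" /var/log/log.ctdb | tail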
2019 Oct 01
0
CTDB and nfs-ganesha
Hi Max, On Tue, 1 Oct 2019 18:57:43 +0000, Max DiOrio via samba <samba at lists.samba.org> wrote: > Hi there - I seem to be having trouble wrapping my brain about the > CTDB and ganesha configuration. I thought I had it figured out, but > it doesn't seem to be doing any checking of the nfs-ganesha service. > I put nfs-ganesha-callout as executable in /etc/ctdb > I create
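For reference, the post-4.9 layout the reply points at looks roughly like this (a sketch: the variable names come from CTDB's NFS script options, and the paths assume the callout was installed as described in the question):

    # /etc/ctdb/script.options -- post-4.9 home of the settings that
    # used to live in ctdbd.conf
    CTDB_NFS_CALLOUT="/etc/ctdb/nfs-ganesha-callout"
    CTDB_NFS_CHECKS_DIR="/etc/ctdb/nfs-checks-ganesha.d"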
2016 Mar 18
1
Where are People Storing CTDB's Accounting Files?
Hi All, We're using CTDB to cluster protocols over a large SAN and have had some pain related to a bit of a design flaw: we store CTDB and protocol-specific accounting files (recovery locks, state files, etc) on the same filesystem that we're offering through CTDB itself. This makes our front-end services pretty intolerant of flapping in the back-end filesystem, which is obviously not
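One common mitigation is to keep the recovery lock on a small filesystem that is not served to clients; a sketch, assuming the 2016-era configuration style (the path is a placeholder):

    # /etc/sysconfig/ctdb (or ctdbd.conf, depending on packaging)
    # Recovery lock on an internal-only filesystem, so flapping on the
    # exported filesystem doesn't take CTDB's own state down with it.
    CTDB_RECOVERY_LOCK="/cluster_internal/.ctdb/reclock"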
2019 Oct 02
1
CTDB and nfs-ganesha
Martin - thank you for this. I don't know why I couldn't find any of this information anywhere. How long has this change been in place? Every website I see about configuring nfs-ganesha with ctdb shows the old information, not the new. Do I need to enable the legacy 06.nfs 60.nfs files when using ganesha? I assume no. On 10/1/19, 5:46 PM, "Martin Schwenke" <martin at
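On post-4.9 CTDB the legacy event scripts are disabled by default and are switched on individually; a sketch of the relevant commands (60.nfs is the NFS script, and 06.nfs would be enabled the same way if needed):

    ctdb event script enable legacy 60.nfs   # turn on NFS monitoring
    ctdb event script list legacy            # verify what is enabled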
2019 Oct 02
0
CTDB and nfs-ganesha
As soon as I made the configuration change and restarted CTDB, it crashes. Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB. Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: connect() failed, errno=111 Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: Failed to connect to CTDB daemon
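When ctdbd exits at startup like this, the wrapper's connect() failure (errno 111, connection refused) only means the daemon never came up; the actual reason is in the CTDB log. A sketch of where to look (log paths vary by packaging and are assumptions here):

    journalctl -u ctdb --since "-10 min"   # systemd's view of the failure
    tail -50 /var/log/ctdb/ctdb.log        # one common default log location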
2019 Oct 01
3
CTDB and nfs-ganesha
Hi there - I seem to be having trouble wrapping my brain about the CTDB and ganesha configuration. I thought I had it figured out, but it doesn't seem to be doing any checking of the nfs-ganesha service. I put nfs-ganesha-callout as executable in /etc/ctdb I create nfs-checks-ganesha.d folder in /etc/ctdb and in there I have 20.nfs_ganesha.check In my ctdbd.conf file I have: # Options to
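For comparison, the pre-4.9 style the poster is using put the equivalent knobs in ctdbd.conf; a sketch (variable names from the old configuration scheme):

    # /etc/ctdb/ctdbd.conf (pre-4.9 configuration style)
    CTDB_MANAGES_NFS=yes
    CTDB_NFS_CALLOUT="/etc/ctdb/nfs-ganesha-callout"
    CTDB_NFS_CHECKS_DIR="/etc/ctdb/nfs-checks-ganesha.d"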
2019 Oct 02
3
CTDB and nfs-ganesha
Hi Martin - again, thank you for the help. I can't believe I couldn't find any info about this big configuration change. Even the Samba wiki doesn't really spell this out at all and instructs you to use ctdbd.conf. Do I need to enable the 20.nfs_ganesha.check script file at all, or will the config itself take care of that? Also, are there any recommendations on which nfs-checks.d
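For what it's worth, the .check files use a small declarative format; a sketch modeled on the examples shipped with CTDB (the ganesha-specific stop/start commands below are assumptions, not the stock contents):

    # /etc/ctdb/nfs-checks-ganesha.d/20.nfs_ganesha.check (sketch)
    version="1"
    restart_every=2          # restart the service every 2 failed checks
    unhealthy_after=6        # mark the node unhealthy after 6 failures
    service_stop_cmd="$CTDB_NFS_CALLOUT stop nfs"
    service_start_cmd="$CTDB_NFS_CALLOUT start nfs"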
2013 Jul 12
1
port for rpc.statd occupied rsync port
Hello, at boot /etc/init.d/nfslock is started. Today rpc.statd took port 873; when xinetd started later, it found the port in use and disabled the rsync daemon. So it's more or less a matter of luck whether rsync is running after a CentOS boot? In /etc/rc3.d there are S14nfslock and S56xinetd, so by design xinetd always starts after nfslock! Kind regards, Helmut Drodofsky
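To see after boot which daemon actually grabbed which port (a sketch; both tools are standard on CentOS of that era):

    rpcinfo -p localhost | grep status   # port rpc.statd registered with portmap
    netstat -tlnp | grep :873            # which process holds rsync's port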
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi all, We did some failover/failback tests on two nodes (A and B) with the architecture 'glusterfs + ctdb (public address) + nfs-ganesha'. 1st: During a write, unplug the network cable of serving node A -> the NFS client took a few seconds to recover and continue writing. After some minutes, plug the network cable of serving node A back in -> the NFS client also took a few seconds to recover
2018 Sep 18
0
CTDB potential locking issue
How did you mount your cephfs filesystem? On 18 September 2018 20:34:25 CEST, David C via samba <samba at lists.samba.org> wrote: >Hi All > >I have a newly implemented two node CTDB cluster running on CentOS 7, >Samba >4.7.1 > >The node network is a direct 1Gb link > >Storage is Cephfs > >ctdb status is OK > >It seems to be running well so far but
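For context, a kernel-client cephfs mount typically looks like this (a sketch; the monitor address, client name and secret path are placeholders, not the poster's values):

    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=samba,secretfile=/etc/ceph/samba.secret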
2009 Dec 18
1
mountd and statd at specific ports - nfs firewall
Hi, I am configuring a firewall for NFS. I see that statd and mountd start on a random port. Is there any way to force them to start on a specific port each time? The '-p' option would work, but how do I configure it so they start on a specific port number every time? I mean, where do statd and mountd look for default configuration options? Any clues? - CS.
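On RHEL/CentOS the usual place for this is /etc/sysconfig/nfs, which the initscripts read before starting the daemons; a sketch (the variable names are the stock ones, the port numbers are arbitrary examples):

    # /etc/sysconfig/nfs -- pin the floating RPC daemons to fixed ports
    STATD_PORT=4000
    MOUNTD_PORT=4001
    LOCKD_TCPPORT=4002
    LOCKD_UDPPORT=4002
    # then restart nfslock/nfs and allow these ports plus 111 and 2049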
2013 Nov 27
0
[Announce] CTDB 2.5.1 available for download
Hi, Since the CTDB tree has been merged into the Samba tree, all new CTDB development will be done in the Samba tree. Until a combined Samba+CTDB is released, CTDB fixes will be released as minor releases starting with 2.5.1. Amitay. Changes in CTDB 2.5.1 ===================== Important bug fixes ------------------- * The locking code now correctly implements a per-database active locks limit. Whole
2015 Apr 13
0
[Announce] CTDB release 2.5.5 is ready for download
This is the latest stable release of CTDB. CTDB 2.5.5 can be used with Samba releases prior to Samba 4.2.x (i.e. Samba releases 3.6.x, 4.0.x and 4.1.x). Changes in CTDB 2.5.5 ===================== User-visible changes -------------------- * Dump stack traces for hung RPC processes (mountd, rquotad, statd) * Add vacuuming latency to database statistics * Add -X option to ctdb tool that uses
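The -X option mentioned above produces pipe-separated, machine-parseable output; a usage sketch (the awk field numbers are illustrative):

    ctdb -X status                            # '|'-separated fields
    ctdb -X ip | awk -F'|' '{print $2, $3}'   # e.g. address and node columns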
2014 Jan 30
1
Glusterfs/CTDB/Samba file locking problem
Hi guys, I'm trying to set up two identically installed, up-to-date CentOS 6 machines with Glusterfs/CTDB/Samba. I have set up Glusterfs and it works. I have set up CTDB from CentOS and it seems to work too. Samba is AD-integrated and mostly works. The main problem is that file locking does not seem to work between the machines at all. If two Win7 clients try to open a document from the same Samba server
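For locking to be shared across nodes, Samba has to run in clustered mode on every node so the locking databases go through CTDB; a minimal smb.conf sketch:

    [global]
        clustering = yes
        # With clustering on, locking.tdb/brlock.tdb become clustered
        # CTDB databases instead of independent per-node files.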
2008 Jul 22
0
smbtorture: testing p-cifs over gfs
hello folks. congrats on samba 3.2, it's an awesome release. i'm trying to take advantage of the parallel cifs capabilities of this release. lustre (too complex) and gpfs (not free) are not available to me right now, so i'm using gfs v2 and iscsi as my storage building blocks. i'm running 3.2 on centos 5.2. everything is configured and i compiled samba4 to use smbtorture,
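A typical smbtorture invocation against a share looks like this (a sketch; server, share, credentials and the test name are placeholders, check 'smbtorture --help' for the available tests):

    bin/smbtorture //server/share -U user%password BASE-OPEN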
2013 May 30
0
[Announce] CTDB 2.2 available for download
Changes in CTDB 2.2 =================== User-visible changes -------------------- * The "stopped" event has been removed. The "ipreallocated" event is now run when a node is stopped. Use this instead of "stopped". * New --pidfile option for ctdbd, used by initscript * The 60.nfs eventscript now uses configuration files in /etc/ctdb/nfs-rpc-checks.d/ for
2000 Jul 21
0
[RHSA-2000:043-03] Revised advisory: Updated package for nfs-utils available
--------------------------------------------------------------------- Red Hat, Inc. Security Advisory Synopsis: Revised advisory: Updated package for nfs-utils available Advisory ID: RHSA-2000:043-03 Issue date: 2000-07-17 Updated on: 2000-07-21 Product: Red Hat Linux Keywords: rpc.statd root compromise Cross references: N/A
2015 Aug 31
0
CentOS 7.1 NFS Client Issues - rpc.statd / rpcbind
On 08/31/2015 01:39 PM, Mark Selby wrote: > I have seen some talk about this but have not seen any answers. I know > this is a problem on CentOS 7.1 and I also think it is a problem on > CentOS 7.0. > > Basically if I have an NFS client only config - meaning that the > nfs-server.service is not enabled then I have to wait 60 seconds after > boot for the 1st NFSV3 mount to
2015 Aug 31
1
CentOS 7.1 NFS Client Issues - rpc.statd / rpcbind
That is the thing - rpc.statd does have rpcbind as a pre-req. It looks like systemd is not handling this correctly. Just wondering if anyone knows a good way to fix it. root at ls2 /usr/lib/systemd/system 110# grep Requires rpc-statd.service Requires=nss-lookup.target rpcbind.target On 8/30/15 7:45 PM, Rob Kampen wrote: > On 08/31/2015 01:39 PM, Mark Selby wrote: >> I have seen some talk
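One workaround (a sketch, assuming the ordering really is the culprit) is a drop-in that makes rpc-statd wait for rpcbind's socket explicitly:

    mkdir -p /etc/systemd/system/rpc-statd.service.d
    cat > /etc/systemd/system/rpc-statd.service.d/override.conf <<'EOF'
    [Unit]
    Requires=rpcbind.socket
    After=rpcbind.socket
    EOF
    systemctl daemon-reload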
2007 Nov 22
2
dovecot loading during boot
I have two RHEL4 email servers running postfix/MailScanner which use dovecot. They work great. But during bootup the nfslock script in my init.d loads rpc.statd and calls portmap to get a port number. Portmap keeps giving rpc.statd the imaps port number (993). I then have to stop my mail server services, manually start dovecot, then restart the mail server services and everything goes merrily on