similar to: CTDB and nfs-ganesha

Displaying 20 results from an estimated 2000 matches similar to: "CTDB and nfs-ganesha"

2019 Oct 02
3
CTDB and nfs-ganesha
Hi Martin - again thank you for the help. I can't believe I couldn't find any info about this big configuration change. Even the Samba wiki doesn't really spell this out at all; it instructs you to use ctdbd.conf. Do I need to enable the 20.nfs_ganesha.check script file at all, or will the config itself take care of that? Also, are there any recommendations on which nfs-checks.d
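A minimal sketch of the newer-style configuration being discussed (Samba 4.9+ CTDB splits the old ctdbd.conf into ctdb.conf and script.options); the paths below are illustrative, assuming the callout and Ganesha check files have been copied under /etc/ctdb:

    # /etc/ctdb/script.options (sketch; paths are examples, not mandated)
    CTDB_NFS_CALLOUT="/etc/ctdb/nfs-ganesha-callout"
    CTDB_NFS_CHECKS_DIR="/etc/ctdb/nfs-checks-ganesha.d"

As far as I understand, once CTDB_NFS_CHECKS_DIR points at a Ganesha-specific directory, a 20.nfs_ganesha.check file placed there is picked up by the NFS event script's monitor event, so the check file itself should not need separate enabling.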
2019 Oct 02
1
CTDB and nfs-ganesha
Martin - thank you for this. I don't know why I couldn't find any of this information anywhere. How long has this change been in place? Every website I see about configuring nfs-ganesha with ctdb shows the old information, not the new. Do I need to enable the legacy 06.nfs and 60.nfs files when using ganesha? I assume no. On 10/1/19, 5:46 PM, "Martin Schwenke" <martin at
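The legacy event scripts mentioned here are toggled with the ctdb event tool in recent CTDB releases; a sketch, assuming the stock 06.nfs and 60.nfs scripts shipped with CTDB are the ones in question:

    # see which legacy event scripts are currently enabled (sketch)
    ctdb event script list legacy
    # enable the NFS event scripts if NFS/Ganesha monitoring should run
    ctdb event script enable legacy 06.nfs
    ctdb event script enable legacy 60.nfs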
2019 Oct 02
0
CTDB and nfs-ganesha
As soon as I made the configuration change and restarted CTDB, it crashes. Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB. Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: connect() failed, errno=111 Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: Failed to connect to CTDB daemon
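The ctdbd_wrapper messages above only say that the wrapper could not reach the daemon; the underlying startup error usually lands in CTDB's own log. A sketch of where to look, assuming a default packaged install (log locations vary by distribution and CTDB version):

    # systemd's view of the failed start (sketch)
    journalctl -u ctdb --no-pager -n 50
    # CTDB's own log, depending on version/packaging
    tail -n 50 /var/log/log.ctdb 2>/dev/null || tail -n 50 /var/log/ctdb/ctdb.log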
2019 Oct 01
0
CTDB and nfs-ganesha
Hi Max, On Tue, 1 Oct 2019 18:57:43 +0000, Max DiOrio via samba <samba at lists.samba.org> wrote: > Hi there - I seem to be having trouble wrapping my brain around the > CTDB and ganesha configuration. I thought I had it figured out, but > it doesn't seem to be doing any checking of the nfs-ganesha service. > I put nfs-ganesha-callout as executable in /etc/ctdb > I create
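The callout placement described in the quoted question amounts to a copy plus an executable bit; a sketch, assuming the example callout shipped with the ctdb package is used (the examples path differs between distributions and is only illustrative here):

    # copy the example callout into place and mark it executable (sketch)
    cp /usr/share/doc/ctdb/examples/nfs-ganesha-callout /etc/ctdb/nfs-ganesha-callout
    chmod +x /etc/ctdb/nfs-ganesha-callout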
2019 Oct 03
2
CTDB and nfs-ganesha
Hi Max, On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio <Max.DiOrio at ieeeglobalspec.com> wrote: > As soon as I made the configuration change and restarted CTDB, it crashes. > > Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB. > Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE > Oct 2 11:05:21 hq-6pgluster01
2019 Oct 05
2
CTDB and nfs-ganesha
Hi Max, On Fri, 4 Oct 2019 14:01:22 +0000, Max DiOrio <Max.DiOrio at ieeeglobalspec.com> wrote: > Looks like this is the actual error: > > 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started > 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0 > 2019/10/04 09:51:29.175021
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha, unplug the network cable of serving node, takes ~20 mins for IO to resume
Hi all We did some failover/failback tests on 2 nodes (A and B) with architecture 'glusterfs + ctdb (public address) + nfs-ganesha'. 1st: During write, unplug the network cable of serving node A -> NFS client took a few seconds to recover and continue writing. After some minutes, plug the network cable of serving node A -> NFS client also took a few seconds to recover
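The CTDB public-address failover being tested here is driven by /etc/ctdb/public_addresses; a minimal sketch for a two-node setup, with illustrative addresses and interface names, plus the commands to confirm where each address lands after a cable pull:

    # /etc/ctdb/public_addresses (example values)
    192.168.1.101/24 eth0
    192.168.1.102/24 eth0

    # after the failover, check which node now hosts each address
    ctdb ip
    ctdb status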
2018 Sep 18
4
CTDB potential locking issue
Hi All I have a newly implemented two-node CTDB cluster running on CentOS 7, Samba 4.7.1. The node network is a direct 1Gb link, storage is CephFS, and ctdb status is OK. It seems to be running well so far but I'm frequently seeing the following in my log.smbd: [2018/09/18 19:16:15.897742, 0] > ../source3/lib/dbwrap/dbwrap_ctdb.c:1207(fetch_locked_internal) > db_ctdb_fetch_locked for
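For fetch_locked warnings like the one above, CTDB's own counters can show whether lock waits are concentrated on a particular database; a sketch, assuming a reasonably recent ctdb tool (exact command availability varies by version):

    # overall daemon counters, including lock-related ones (sketch)
    ctdb statistics
    # per-database statistics for the database named in the log
    ctdb dbstatistics locking.tdb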
2017 Jun 05
2
Gluster and NFS-Ganesha - cluster is down after reboot
Hi hvjunk, could you please tell me whether you have had time to check my previous post? Could you please send me the mentioned link to your Gluster Ansible scripts? Thank you, Adam On Sun, May 28, 2017 at 2:47 PM, Adam Ru <ad.ruckel at gmail.com> wrote: > Hi hvjunk (Hi Hendrik), > > "centos-release-gluster" installs "centos-gluster310". I assume it > picks the
2017 Jun 05
0
Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc. https://bitbucket.org/dismyne/gluster-ansibles/src/6df23803df43/ansible/files/?at=master The .service files are the stuff going into systemd, and they call the test-mounts.sh scripts. The playbook that does the installing is higher up in the directory > On 05 Jun 2017,
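The pattern described here is gating the Gluster services on the brick filesystems actually being mounted; a rough sketch of such a unit, with the unit name and script path invented for illustration (not taken from the linked repository):

    # /etc/systemd/system/gluster-wait-mounts.service (illustrative sketch)
    [Unit]
    Description=Check that brick filesystems are mounted before glusterd
    Before=glusterd.service

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/test-mounts.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target

For the ordering to actually hold back glusterd, glusterd.service would also need an After=/Requires= drop-in pointing at this unit.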
2017 Jun 06
1
Gluster and NFS-Ganesha - cluster is down after reboot
----- Original Message ----- From: "hvjunk" <hvjunk at gmail.com> To: "Adam Ru" <ad.ruckel at gmail.com> Cc: gluster-users at gluster.org Sent: Monday, June 5, 2017 9:29:03 PM Subject: Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot Sorry, got sidetracked with invoicing etc.
2017 Nov 13
1
Shared storage showing 100% used
Hello list, I recently enabled shared storage on a working cluster with nfs-ganesha and am just storing my ganesha.conf file there so that all 4 nodes can access it (baby steps). It was all working great for a couple of weeks until I was alerted that /run/gluster/shared_storage was full, see below. There was no warning; it went from fine to critical overnight.
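A quick way to see what is consuming the shared-storage volume when it fills up like this; a sketch, assuming the default gluster_shared_storage volume name and mount point:

    # where the space went on the shared-storage mount (sketch)
    df -h /run/gluster/shared_storage
    du -sh /run/gluster/shared_storage/* 2>/dev/null | sort -h
    gluster volume status gluster_shared_storage detail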
2017 Dec 02
2
gluster and nfs-ganesha
Hi, I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5. I'm trying to create a very simple two-node cluster to be used with NFS-Ganesha. I've created the bricks and the volume. Here's the output: # gluster volume info Volume Name: cluster-demo Type: Replicate Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2
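For context, a two-node replica volume like the one shown is typically created along these lines before Ganesha is layered on top; the hostnames and brick paths below are placeholders, and the nfs-ganesha step assumes the 3.10-era gluster CLI integration:

    # create and start a 2-node replica volume (placeholder names and paths)
    gluster volume create cluster-demo replica 2 node1:/bricks/demo node2:/bricks/demo
    gluster volume start cluster-demo
    # 3.10-era NFS-Ganesha integration (needs ganesha-ha.conf in shared storage first)
    gluster nfs-ganesha enable
    gluster volume set cluster-demo ganesha.enable on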
2017 Dec 04
2
gluster and nfs-ganesha
Hi Jiffin, I looked at the document, and there are 2 things: 1. In Gluster 3.8 it seems you don't need to do that at all, it creates this automatically, so why not in 3.10? 2. The step-by-step guide, in the last item, doesn't say where exactly I need to create the nfs-ganesha directory. The copy/paste seems irrelevant, as enabling nfs-ganesha automatically creates the ganesha.conf and
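The directory being asked about is usually the nfs-ganesha directory inside the gluster shared-storage mount, holding ganesha-ha.conf (and optionally ganesha.conf); a sketch with illustrative node names and VIPs:

    # /run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf (illustrative values)
    HA_NAME="ganesha-ha-demo"
    HA_CLUSTER_NODES="node1,node2"
    VIP_node1="192.168.1.201"
    VIP_node2="192.168.1.202"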
2017 Oct 02
1
nfs-ganesha locking problems
Hi Soumya, what I can say so far: it is working on a standalone system but not on the clustered system. From reading the ganesha wiki I have the impression that it is possible to change the log level without restarting ganesha. I was playing with dbus-send but so far was unsuccessful; if you can help me with that, this would be great. Here are some details about the tested machines. The NFS client
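Whether the log level can be changed live over D-Bus depends on the Ganesha build, so that route is left aside here; what can at least be pinned down is the level in ganesha.conf, applied when the daemon (re)starts. A sketch, with the level chosen only as an example:

    # ganesha.conf fragment (sketch; FULL_DEBUG is extremely verbose)
    LOG {
        Default_Log_Level = FULL_DEBUG;
    }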
2017 May 01
1
Gluster and NFS-Ganesha - cluster is down after reboot
Hi Gluster users, First, I'd like to thank you all for this amazing open-source software! Thank you! I'm working on a home project: three servers with Gluster and NFS-Ganesha. My goal is to create an HA NFS share with three copies of each file on each server. My systems are CentOS 7.3 Minimal install with the latest updates and the most current RPMs from the "centos-gluster310" repository. I
2017 Dec 06
2
gluster and nfs-ganesha
Thanks Jiffin, Btw, the nfs-ganesha part in the release notes has a wrong header, so it's not highlighted. One thing that is still a mystery to me: gluster 3.8.x does everything the 3.9 release notes describe, automatically. Any chance that someone could port it to 3.9? Thanks for the links On Wed, Dec 6, 2017 at 7:28 AM, Jiffin Tony Thottan <jthottan at redhat.com> wrote: >
2017 Dec 06
0
gluster and nfs-ganesha
Hi, On Monday 04 December 2017 07:43 PM, Hetz Ben Hamo wrote: > Hi Jiffin, > > I looked at the document, and there are 2 things: > > 1. In Gluster 3.8 it seems you don't need to do that at all, it > creates this automatically, so why not in 3.10? Kindly refer to the mail [1] and release note [2] for glusterfs-3.9. Regards, Jiffin [1]
2017 Dec 04
0
gluster and nfs-ganesha
On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote: > HI, > > I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5. > > I'm trying to create a very simple 2 nodes cluster to be used with > NFS-ganesha. I've created the bricks and the volume. Here's the output: > > # gluster volume info > > Volume Name: cluster-demo > Type:
2019 Oct 05
0
CTDB and nfs-ganesha
I'll have to check out the script issue on Monday. You said the lock needs to be the same on all nodes. I can do that, but this is now in production and restarting the ctdb service forces a failover of the IP, which actually causes a failure of a few of our Kubernetes SQL database pods - they freak out and don't recover if storage is ripped out from under them. Is there a way to do this
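The "lock" referred to here is the CTDB recovery lock, which in the newer configuration lives in /etc/ctdb/ctdb.conf and must point at the same file on storage visible to every node; a sketch with an illustrative path:

    # /etc/ctdb/ctdb.conf (sketch; the path must be on shared storage seen by all nodes)
    [cluster]
        recovery lock = /run/gluster/shared_storage/ctdb/.ctdb_recovery_lock

Note that changing this setting still requires restarting ctdbd on each node, so the IP-failover concern raised in the message remains.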