similar to: Gluster and NFS-Ganesha - cluster is down after reboot

Displaying 20 results from an estimated 2000 matches similar to: "Gluster and NFS-Ganesha - cluster is down after reboot"

2017 Jun 05
2
Gluster and NFS-Ganesha - cluster is down after reboot
Hi hvjunk, have you had time to check my previous post? Could you please send me the link you mentioned to your Gluster Ansible scripts? Thank you, Adam On Sun, May 28, 2017 at 2:47 PM, Adam Ru <ad.ruckel at gmail.com> wrote: > Hi hvjunk (Hi Hendrik), > > "centos-release-gluster" installs "centos-gluster310". I assume it > picks the
2017 Jun 05
0
Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc. https://bitbucket.org/dismyne/gluster-ansibles/src/6df23803df43/ansible/files/?at=master The .service files are the stuff going into SystemD, and they call the test-mounts.sh scripts. The playbook that does the installing is higher up in the directory tree. > On 05 Jun 2017,
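A minimal sketch of what such a unit might look like; the unit name, script path, and dependencies here are assumptions for illustration, not taken from the linked repository:

  [Unit]
  Description=Verify Gluster mounts before NFS-Ganesha (illustrative sketch)
  After=network-online.target glusterd.service

  [Service]
  Type=oneshot
  # Path is an assumption; the real script lives in the repository above
  ExecStart=/usr/local/bin/test-mounts.sh
  RemainAfterExit=yes

  [Install]
  WantedBy=multi-user.target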
2017 Jun 06
1
Gluster and NFS-Ganesha - cluster is down after reboot
----- Original Message ----- From: "hvjunk" <hvjunk at gmail.com> To: "Adam Ru" <ad.ruckel at gmail.com> Cc: gluster-users at gluster.org Sent: Monday, June 5, 2017 9:29:03 PM Subject: Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot Sorry, got sidetracked with invoicing etc.
2017 Dec 24
1
glusterfs, ganesh, and pcs rules
I checked, and I have it like this: # Name of the HA cluster created. # must be unique within the subnet HA_NAME="ganesha-nfs" # # The gluster server from which to mount the shared data volume. HA_VOL_SERVER="tlxdmz-nfs1" # # N.B. you may use short names or long names; you may not use IP addrs. # Once you select one, stay with it as it will be mildly unpleasant to # clean up
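For context, a fuller ganesha-ha.conf typically looks roughly like the sketch below; HA_CLUSTER_NODES and the VIP lines are standard parameters, with the VIP values borrowed from a later reply in this listing as placeholders:

  # /etc/ganesha/ganesha-ha.conf (sketch; values are placeholders)
  HA_NAME="ganesha-nfs"
  HA_VOL_SERVER="tlxdmz-nfs1"
  HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
  VIP_tlxdmz-nfs1="192.168.22.33"
  VIP_tlxdmz-nfs2="192.168.22.34"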
2017 Jul 06
2
Gluster install using Ganesha for NFS
After 3.10 you'd need to use storhaug... which doesn't work (yet). You need to use 3.10 for now. On 07/06/2017 12:53 PM, Anthony Valentine wrote: > I'm running this on CentOS 7.3 > > [root at glustertest1 ~]# cat /etc/redhat-release > CentOS Linux release 7.3.1611 (Core) > > > Here are the software versions I have installed. > > [root at
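A rough sketch of the CentOS 7 install path being discussed; package names follow the Storage SIG repos of that era and should be verified against your mirror:

  yum install -y centos-release-gluster310   # enables the centos-gluster310 repo
  yum install -y glusterfs-server glusterfs-ganesha nfs-ganesha-gluster
  systemctl enable --now glusterd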
2017 Jun 30
2
Some bricks are offline after restart, how to bring them online gracefully?
Hi Hari, thank you for your support! Did I try to check offline bricks multiple times? Yes, I gave it enough time (at least 20 minutes) to recover, but it stayed offline. Version? All nodes are 100% equal; I tried a fresh installation several times during my testing. Every time it is a CentOS Minimal install with all updates and without any additional software: uname -r 3.10.0-514.21.2.el7.x86_64
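The checks being discussed come down to something like the following standard gluster CLI calls (the volume name is a placeholder):

  gluster volume status myvol         # bricks showing Online = N are the problem
  ps aux | grep glusterfsd            # one brick process should exist per brick
  gluster volume start myvol force    # last-resort way to respawn missing bricks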
2019 Oct 01
3
CTDB and nfs-ganesha
Hi there, I seem to be having trouble wrapping my brain around the CTDB and ganesha configuration. I thought I had it figured out, but it doesn't seem to be doing any checking of the nfs-ganesha service. I put nfs-ganesha-callout as an executable in /etc/ctdb. I created the nfs-checks-ganesha.d folder in /etc/ctdb, and in there I have 20.nfs_ganesha.check. In my ctdbd.conf file I have: # Options to
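With the legacy ctdbd.conf style referenced here, the wiring usually looks something like this sketch; the variable names come from CTDB's NFS event scripts and the paths are those described above:

  # /etc/ctdb/ctdbd.conf (legacy-style sketch)
  CTDB_MANAGES_NFS=yes
  CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
  CTDB_NFS_CHECKS_DIR=/etc/ctdb/nfs-checks-ganesha.d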
2017 Dec 02
2
gluster and nfs-ganesha
Hi, I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5. I'm trying to create a very simple 2-node cluster to be used with NFS-Ganesha. I've created the bricks and the volume. Here's the output: # gluster volume info Volume Name: cluster-demo Type: Replicate Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2
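For reference, a 1 x 2 = 2 replicate volume like this is typically created along these lines (hostnames and brick paths are placeholders):

  gluster peer probe node2
  gluster volume create cluster-demo replica 2 node1:/bricks/demo node2:/bricks/demo
  gluster volume start cluster-demo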
2017 Jun 30
0
Some bricks are offline after restart, how to bring them online gracefully?
Hi Jan, It is not recommended to automate a script around 'volume start force'. Bricks do not go offline just like that; there will be some genuine issue that triggers this. Could you please attach the entire glusterd logs and the brick logs from around that time so that someone can take a look? Just to make sure, please check whether you have any network outage (using iperf or some
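A sketch of the kind of evidence being requested; the log paths are the usual defaults and may differ on your install:

  less /var/log/glusterfs/glusterd.log
  less /var/log/glusterfs/bricks/*.log    # one log per brick
  iperf3 -s                               # on one node
  iperf3 -c <other-node>                  # on the other, to rule out network issues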
2017 Jun 30
1
Some bricks are offline after restart, how to bring them online gracefully?
Hi Jan, by multiple times I meant whether you were able to do the whole setup multiple times and face the same issue, so that we have a consistent reproducer to work on. As grepping shows that the process doesn't exist, the bug I mentioned doesn't hold good. It seems like another issue irrelevant to the bug I mentioned (I have mentioned it now). When you say too often, this means there is a
2017 Dec 04
2
gluster and nfs-ganesha
Hi Jiffin, I looked at the document, and there are 2 things: 1. In Gluster 3.8 it seems you don't need to do that at all; it creates this automatically, so why not in 3.10? 2. The step-by-step guide, in the last item, doesn't say where exactly I need to create the nfs-ganesha directory. The copy/paste seems irrelevant, as enabling nfs-ganesha automatically creates the ganesha.conf and
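For what it's worth, the step in question amounts to something like the sketch below, with the directory created inside the shared storage volume (the path per the glusterfs-3.9 release notes referenced in a reply further down):

  mkdir /var/run/gluster/shared_storage/nfs-ganesha
  cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha-ha.conf \
     /var/run/gluster/shared_storage/nfs-ganesha/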
2017 Dec 20
2
glusterfs, ganesh, and pcs rules
Hi, I've just created the gluster setup again with NFS-Ganesha, Glusterfs version 3.8. When I run the command gluster nfs-ganesha enable, it returns success. However, looking at the pcs status, I see this: [root at tlxdmz-nfs1 ~]# pcs status Cluster name: ganesha-nfs Stack: corosync Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum Last updated: Wed Dec 20
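Standard pcs commands for digging into a state like this, as a sketch; the resource names will be whatever the ganesha-ha scripts created:

  pcs status --full        # per-node, per-resource state
  pcs constraint show      # ordering/location rules set up by the HA scripts
  pcs resource cleanup     # clear failed actions and let pacemaker retry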
2017 Dec 21
0
glusterfs, ganesh, and pcs rules
Hi, In your ganesha-ha.conf do you have your virtual IP addresses set something like this?: VIP_tlxdmz-nfs1="192.168.22.33" VIP_tlxdmz-nfs2="192.168.22.34" Renaud From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On behalf of Hetz Ben Hamo Sent: 20 December 2017 04:35 To: gluster-users at gluster.org Subject: [Gluster-users]
2017 Dec 06
0
gluster and nfs-ganesha
Hi, On Monday 04 December 2017 07:43 PM, Hetz Ben Hamo wrote: > Hi Jiffin, > > I looked at the document, and there are 2 things: > > 1. In Gluster 3.8 it seems you don't need to do that at all; it > creates this automatically, so why not in 3.10? Kindly refer to the mail [1] and the release notes [2] for glusterfs-3.9. Regards, Jiffin [1]
2017 Dec 04
0
gluster and nfs-ganesha
On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote: > Hi, > > I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5. > > I'm trying to create a very simple 2-node cluster to be used with > NFS-Ganesha. I've created the bricks and the volume. Here's the output: > > # gluster volume info > > Volume Name: cluster-demo > Type:
2019 Oct 02
3
CTDB and nfs-ganesha
Hi Marin - again, thank you for the help. I can't believe I couldn't find any info about this big configuration change. Even the Samba wiki doesn't really spell this out at all and instructs you to use ctdbd.conf. Do I need to enable the 20.nfs_ganesha.check script file at all, or will the config itself take care of that? Also, are there any recommendations on which nfs-checks.d
2017 Dec 06
2
gluster and nfs-ganesha
Thanks Jiffin. Btw, the nfs-ganesha part of the release notes has a wrong header, so it's not highlighted. One thing is still a mystery to me: Gluster 3.8.x does everything the 3.9 release notes describe automatically. Any chance that someone could port it to 3.9? Thanks for the links. On Wed, Dec 6, 2017 at 7:28 AM, Jiffin Tony Thottan <jthottan at redhat.com> wrote: >
2017 Jun 30
0
Some bricks are offline after restart, how to bring them online gracefully?
Hi Jan, comments inline. On Fri, Jun 30, 2017 at 1:31 AM, Jan <jan.h.zak at gmail.com> wrote: > Hi all, > > Gluster and Ganesha are amazing. Thank you for this great work! > > I'm struggling with one issue and I think that you might be able to help me. > > I spent some time playing with Gluster and Ganesha, and after I gained some > experience I decided that I
2017 Jun 29
3
Some bricks are offline after restart, how to bring them online gracefully?
Hi all, Gluster and Ganesha are amazing. Thank you for this great work! I'm struggling with one issue and I think that you might be able to help me. I spent some time playing with Gluster and Ganesha, and after I gained some experience I decided that I should go into production, but I'm still struggling with that one issue. I have a 3-node CentOS 7.3 setup with the most current Gluster and Ganesha from
2017 Nov 13
1
Shared storage showing 100% used
Hello list, I recently enabled shared storage on a working cluster with nfs-ganesha, and am just storing my ganesha.conf file there so that all 4 nodes can access it (baby steps). It was all working great for a couple of weeks until I was alerted that /run/gluster/shared_storage was full; see below. There was no warning; it went from fine to critical overnight.
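The usual first checks for a full shared_storage mount, as a sketch:

  df -h /run/gluster/shared_storage      # confirm the mount really is full
  du -sh /run/gluster/shared_storage/*   # see what is actually consuming space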