Displaying 20 results from an estimated 100 matches similar to: "Shared storage showing 100% used"
2017 Dec 02
2
gluster and nfs-ganesha
Hi,
I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5.
I'm trying to create a very simple 2-node cluster to be used with
NFS-ganesha. I've created the bricks and the volume. Here's the output:
# gluster volume info
Volume Name: cluster-demo
Type: Replicate
Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
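For reference, a minimal sketch of how a two-node replica volume like this is typically created; the hostnames node1/node2 and the brick paths are placeholders, not taken from the thread:
# from node1, with glusterd already running on both nodes
gluster peer probe node2
gluster volume create cluster-demo replica 2 \
    node1:/bricks/brick1/cluster-demo node2:/bricks/brick1/cluster-demo
gluster volume start cluster-demo
gluster volume info cluster-demo    # should report "Number of Bricks: 1 x 2 = 2"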
2017 Dec 04
2
gluster and nfs-ganesha
Hi Jiffin,
I looked at the document, and there are 2 things:
1. In Gluster 3.8 it seems you don't need to do that at all, it creates
this automatically, so why not in 3.10?
2. The step-by-step guide, in the last item, doesn't say where exactly I
need to create the nfs-ganesha directory. The copy/paste seems irrelevant,
as enabling nfs-ganesha automatically creates the ganesha.conf and
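For context, a hedged sketch of the manual steps the 3.10 guide appears to expect, with the shared-storage mount path taken from the upstream docs; verify against your version before copying anything:
# run once from any node; mounts the shared volume on every peer
gluster volume set all cluster.enable-shared-storage enable
# the guide's nfs-ganesha directory goes inside that shared mount
mkdir /var/run/gluster/shared_storage/nfs-ganesha
cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha-ha.conf \
   /var/run/gluster/shared_storage/nfs-ganesha/
gluster nfs-ganesha enable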
2017 Dec 04
0
gluster and nfs-ganesha
On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote:
> Hi,
>
> I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5.
>
> I'm trying to create a very simple 2-node cluster to be used with
> NFS-ganesha. I've created the bricks and the volume. Here's the output:
>
> # gluster volume info
>
> Volume Name: cluster-demo
> Type:
2017 Dec 06
0
gluster and nfs-ganesha
Hi,
On Monday 04 December 2017 07:43 PM, Hetz Ben Hamo wrote:
> Hi Jiffin,
>
> I looked at the document, and there are 2 things:
>
> 1. In Gluster 3.8 it seems you don't need to do that at all, it
> creates this automatically, so why not in 3.10?
Kindly refer to the mail [1] and the release notes [2] for glusterfs-3.9.
Regards,
Jiffin
[1]
2017 Dec 06
2
gluster and nfs-ganesha
Thanks Jiffin,
Btw, the nfs-ganesha part of the release notes has a wrong header, so
it's not highlighted.
One thing that is still a mystery to me: gluster 3.8.x does everything the
3.9 release notes describe - automatically. Any chance that someone could
port it to 3.9?
Thanks for the links
On Wed, Dec 6, 2017 at 7:28 AM, Jiffin Tony Thottan <jthottan at redhat.com>
wrote:
>
2010 Nov 10
1
quota broken for large NFS mount
Should I report this as a bug somewhere? Or is it just a problem with the
old Fedora box, probably fixed long ago and not relevant to the CentOS list?
On the server side, theme4, a very old Fedora box, exports t4d5 via NFS.
t4d5 is big, has lots of space, and the user tobiasf has plenty of quota:
[root at theme4 ~]# quota -vls tobiasf|grep sdf
/dev/sdf1 1312G 4578G 4769G
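A hedged way to compare the server-side and NFS-client-side views of the same quota; the user and device are from the post, while the client mount point /mnt/t4d5 is an assumption:
# on the server (theme4): local view of the filesystem quota
repquota -s /dev/sdf1 | grep tobiasf
# on the NFS client: the same quota as reported by rpc.rquotad over the wire
quota -vs tobiasf
df -h /mnt/t4d5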
2017 Oct 03
0
multipath
I have inherited a system set up with multipath, which is not something I
have seen before, so I could use some advice.
The system is a Dell R420 with 2 LSI SAS2008 HBAs, 4 internal disks, and an
MD3200 storage array attached via SAS cables. Oh, and CentOS 6.
lsblk shows the following:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 931.5G 0 disk
└─sdd1
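A few read-only commands that help when inheriting a setup like this; none of them change state:
multipath -ll            # each mpath device plus the SAS paths (sdX members) behind it
lsblk                    # sd* entries sharing one mpath child are the same MD3200 LUN seen twice
cat /etc/multipath.conf  # blacklist and alias settings the previous admin left behind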
2017 Jun 05
2
Gluster and NFS-Ganesha - cluster is down after reboot
Hi hvjunk,
could you please tell me have you had time to check my previous post?
Could you please send me mentioned link to your Gluster Ansible scripts?
Thank you,
Adam
On Sun, May 28, 2017 at 2:47 PM, Adam Ru <ad.ruckel at gmail.com> wrote:
> Hi hvjunk (Hi Hendrik),
>
> "centos-release-gluster" installs "centos-gluster310". I assume it
> picks the
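A hedged sketch of the package chain being described, using the CentOS Storage SIG names for the 3.10 stream:
yum install -y centos-release-gluster      # drops in the centos-gluster310 repo definition
yum install -y glusterfs-server nfs-ganesha-gluster
systemctl enable glusterd
systemctl start glusterd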
2017 Jun 05
0
Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc.
https://bitbucket.org/dismyne/gluster-ansibles/src/6df23803df43/ansible/files/?at=master
The .service files are the units that go into systemd, and they call the test-mounts.sh scripts.
The playbook that does the installing is higher up in the directory.
> On 05 Jun 2017,
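A purely hypothetical sketch of what such a unit might look like; the real .service files and test-mounts.sh live in the repository linked above and may be organised quite differently:
[Unit]
Description=Verify gluster mounts before dependent services start
Requires=glusterd.service
After=glusterd.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/test-mounts.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target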
2017 Jun 06
1
Gluster and NFS-Ganesha - cluster is down after reboot
----- Original Message -----
From: "hvjunk" <hvjunk at gmail.com>
To: "Adam Ru" <ad.ruckel at gmail.com>
Cc: gluster-users at gluster.org
Sent: Monday, June 5, 2017 9:29:03 PM
Subject: Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc.
2019 Oct 01
3
CTDB and nfs-ganesha
Hi there - I seem to be having trouble wrapping my brain around the CTDB and ganesha configuration. I thought I had it figured out, but it doesn't seem to be doing any checking of the nfs-ganesha service.
I put nfs-ganesha-callout, marked executable, in /etc/ctdb.
I created the nfs-checks-ganesha.d folder in /etc/ctdb, and in there I have 20.nfs_ganesha.check.
In my ctdbd.conf file I have:
# Options to
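For comparison, a hedged example of the ganesha-related variables as they looked in the old ctdbd.conf scheme; the exact names depend on the CTDB version, so treat this as a sketch rather than a drop-in config:
CTDB_MANAGES_NFS=yes
CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
CTDB_NFS_CHECKS_DIR=/etc/ctdb/nfs-checks-ganesha.d
CTDB_NFS_SKIP_SHARE_CHECK=yes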
2019 Oct 01
0
CTDB and nfs-ganesha
Hi Max,
On Tue, 1 Oct 2019 18:57:43 +0000, Max DiOrio via samba
<samba at lists.samba.org> wrote:
> Hi there - I seem to be having trouble wrapping my brain around the
> CTDB and ganesha configuration. I thought I had it figured out, but
> it doesn't seem to be doing any checking of the nfs-ganesha service.
> I put nfs-ganesha-callout, marked executable, in /etc/ctdb.
> I created
2019 Oct 02
3
CTDB and nfs-ganesha
Hi Martin - again, thank you for the help. I can't believe I couldn't find any info about this big configuration change. Even the Samba wiki doesn't really spell this out at all; it instructs you to use ctdbd.conf.
Do I need to enable the 20.nfs_ganesha.check script file at all, or will the config itself take care of that? Also, are there any recommendations on which nfs-checks.d
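A hedged sketch of the newer layout (CTDB 4.9 and later), where ctdbd.conf is split into /etc/ctdb/ctdb.conf plus /etc/ctdb/script.options and event scripts are enabled explicitly; check the exact syntax against your CTDB version:
# per-script options move into script.options
cat >> /etc/ctdb/script.options <<'EOF'
CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
CTDB_NFS_CHECKS_DIR=/etc/ctdb/nfs-checks-ganesha.d
CTDB_NFS_SKIP_SHARE_CHECK=yes
EOF
# the legacy NFS event script is then switched on by name
ctdb event script enable legacy 60.nfs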
2019 Oct 02
0
CTDB and nfs-ganesha
As soon as I made the configuration change and restarted CTDB, it crashed.
Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB.
Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE
Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: connect() failed, errno=111
Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: Failed to connect to CTDB daemon
2019 Oct 02
1
CTDB and nfs-ganesha
Martin - thank you for this. I don't know why I couldn't find any of this information anywhere. How long has this change been in place? Every website I see about configuring nfs-ganesha with ctdb shows the old information, not the new.
Do I need to enable the legacy 06.nfs and 60.nfs files when using ganesha? I assume not.
On 10/1/19, 5:46 PM, "Martin Schwenke" <martin at
2017 May 01
1
Gluster and NFS-Ganesha - cluster is down after reboot
Hi Gluster users,
First, I'd like to thank you all for this amazing open-source project! Thank you!
I'm working on a home project - three servers with Gluster and NFS-Ganesha.
My goal is to create an HA NFS share with three copies of each file, one on
each server.
My systems are CentOS 7.3 Minimal installs with the latest updates and the
most current RPMs from the "centos-gluster310" repository.
I
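A hedged sketch of the ganesha-ha.conf for a three-node HA setup of this kind; the cluster name, hostnames, and VIPs are placeholders, and the field names should be checked against the 3.10 docs:
HA_NAME="ganesha-ha-demo"
HA_CLUSTER_NODES="srv1,srv2,srv3"
VIP_srv1="192.168.1.201"
VIP_srv2="192.168.1.202"
VIP_srv3="192.168.1.203"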
2019 Oct 03
2
CTDB and nfs-ganesha
Hi Max,
On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio
<Max.DiOrio at ieeeglobalspec.com> wrote:
> As soon as I made the configuration change and restarted CTDB, it crashes.
>
> Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB.
> Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE
> Oct 2 11:05:21 hq-6pgluster01
2019 Oct 05
2
CTDB and nfs-ganesha
Hi Max,
On Fri, 4 Oct 2019 14:01:22 +0000, Max DiOrio
<Max.DiOrio at ieeeglobalspec.com> wrote:
> Looks like this is the actual error:
>
> 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started
> 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0
> 2019/10/04 09:51:29.175021
2018 Feb 01
2
Upgrading a ctdb cluster: samba not listening on TCP port 445
Hi all,
I'm trying to update two clustered Samba file servers. Right now Samba 4.7.0
with ctdb is running on both of them. To update Samba, I stopped ctdb on
one of the servers and compiled and installed Samba 4.7.1 with:
./configure --with-cluster-support --with-shared-modules=idmap_tdb2,idmap_ad,vfs_glusterfs --with-systemd
Trying to start ctdb on the updated server fails with "
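A few read-only checks that can narrow this kind of failure down, assuming the new build is the one on PATH:
testparm -sv | grep -i clustering   # confirm "clustering = yes" is still in effect
smbd -b | grep -i cluster           # confirm the 4.7.1 build has cluster support compiled in
ss -tlnp | grep ':445'              # see whether any smbd is listening on 445 at all
ctdb status                         # if ctdbd did start, check node and recovery state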
2017 Aug 17
1
shared-storage bricks
Hi,
I enabled shared storage on my four-node cluster, but when I look at the volume info, I only have 3 bricks. Is that supposed to be normal?
Thank you
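A read-only way to see how the auto-created volume was laid out; gluster_shared_storage is the default name gluster gives the shared-storage volume:
gluster volume info gluster_shared_storage     # shows the replica count and which peers hold bricks
gluster volume status gluster_shared_storage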