Surya K Ghatty
2015-Nov-17  16:51 UTC
[Gluster-users] Configuring Ganesha and gluster on separate nodes?
Hi:
I am trying to understand if it is technically feasible to have the gluster
nodes on one machine, and to export a volume from one of these nodes using an
nfs-ganesha server installed on a totally different machine. I tried the steps
below, but showmount -e does not show my volume as exported. Any suggestions
would be appreciated.
1. Here is my configuration:

Gluster nodes: glusterA and glusterB, each on its own bare metal machine, both
in the trusted pool, with volume gvol0 up and running.
Ganesha node: on a separate bare metal machine, ganeshaA.
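For reference, a trusted pool and volume like this are typically built with the
standard gluster CLI (a sketch; the brick path matches the vol info output in
step 3 below):

    # On glusterA: add glusterB to the trusted pool, then create and start
    # the single-brick distribute volume.
    gluster peer probe glusterB
    gluster volume create gvol0 glusterA:/data/brick0/gvol0
    gluster volume start gvol0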
2. My ganesha.conf looks like this, with the IP address of glusterA in the FSAL
block:

        FSAL {
                Name = GLUSTER;
                # IP of one of the nodes in the trusted pool
                # (here, the IP address of glusterA)
                hostname = "WW.ZZ.XX.YY";
                # Volume name. Eg: "test_volume"
                volume = "gvol0";
        }
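For reference, in nfs-ganesha the FSAL block sits inside an EXPORT block, so a
full export definition for this volume might look like the sketch below (the
Export_Id, Path, Pseudo, Access_Type, and Squash values are illustrative
assumptions):

        EXPORT {
                Export_Id = 1;            # unique id for this export (illustrative)
                Path = "/gvol0";          # export path (illustrative)
                Pseudo = "/gvol0";        # NFSv4 pseudo-filesystem path (illustrative)
                Access_Type = RW;
                Squash = No_root_squash;

                FSAL {
                        Name = GLUSTER;
                        # IP of one of the nodes in the trusted pool
                        hostname = "WW.ZZ.XX.YY";
                        volume = "gvol0";
                }
        }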
3. I disabled the built-in gluster NFS server on gvol0. As you can see below,
nfs.disable is set to on.
[root@glusterA ~]# gluster vol info
Volume Name: gvol0
Type: Distribute
Volume ID: 16015bcc-1d17-4ef1-bb8b-01b7fdf6efa0
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: glusterA:/data/brick0/gvol0
Options Reconfigured:
nfs.disable: on
nfs.export-volumes: off
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
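For reference, options like these are set with the standard gluster volume-set
commands (a sketch):

    # Disable the built-in gluster NFS server so it does not conflict
    # with nfs-ganesha serving the same volume.
    gluster volume set gvol0 nfs.disable on
    gluster volume set gvol0 nfs.export-volumes off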
4. I then ran:

    ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_FULL_DEBUG

The Ganesha server was put into grace, with no errors:
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] fridgethr_freeze :RW LOCK :F_DBG :Released mutex 0x7f21a92818d0 (&fr->mtx) at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/support/fridgethr.c:484
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Acquired mutex 0x7f21ad1f18e0 (&grace.g_mutex) at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:129
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] nfs_in_grace :STATE :DEBUG :NFS Server IN GRACE
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Released mutex 0x7f21ad1f18e0 (&grace.g_mutex) at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:141
5. [root@ganeshaA glusterfs]# showmount -e
Export list for ganeshaA:
<empty>
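For reference, here are two generic checks for this symptom (a sketch using
standard tools; the log path matches the -L flag from step 4):

    # Scan the ganesha log for EXPORT/FSAL configuration warnings.
    grep -iE "export|fsal" /var/log/ganesha.log | tail -n 50

    # Confirm that nfs and mountd registered with rpcbind on the ganesha node.
    rpcinfo -p localhost | grep -E "nfs|mount"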
Any suggestions on what I am missing?
Regards,
Surya Ghatty
"This too shall pass"
________________________________________________________________________________________________________
Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services
Development | tel: (507) 316-0559 | ghatty at us.ibm.com
Kaleb KEITHLEY
2015-Nov-17  16:59 UTC
[Gluster-users] Configuring Ganesha and gluster on separate nodes?
On 11/17/2015 11:51 AM, Surya K Ghatty wrote:
> I am trying to understand if it is technically feasible to have the gluster
> nodes on one machine, and to export a volume from one of these nodes using
> an nfs-ganesha server installed on a totally different machine?
> [...]

It should work, but it's definitely outside the envelope of anything we have
tested. You're on your own here.
Soumya Koduri
2015-Nov-18  11:07 UTC
[Gluster-users] Configuring Ganesha and gluster on separate nodes?
On 11/17/2015 10:21 PM, Surya K Ghatty wrote:
> I am trying to understand if it is technically feasible to have the gluster
> nodes on one machine, and to export a volume from one of these nodes using
> an nfs-ganesha server installed on a totally different machine? I tried the
> steps below, but showmount -e does not show my volume as exported.
> [...]

You will still need the gluster client bits on the machine where the
nfs-ganesha server is installed in order to export a gluster volume. Check
whether libgfapi.so is installed on that machine.

Also, the ganesha server logs warnings if it is unable to process the
EXPORT/FSAL block. Please recheck the logs for any such warnings.

Thanks,
Soumya
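A quick way to check (a sketch; the package name is an assumption based on
Fedora/RHEL-family packaging, where the gfapi library ships as glusterfs-api):

    # On the ganesha node: confirm the gfapi client library is present.
    # "glusterfs-api" is an assumed package name (Fedora/RHEL-family).
    rpm -q glusterfs-api
    ldconfig -p | grep libgfapi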
Surya K Ghatty
2015-Dec-02  19:38 UTC
[Gluster-users] Configuring Ganesha and gluster on separate nodes?
Hi Soumya, Kaleb, all:
Thanks for the response!
Quick follow-up: we tried running ganesha and gluster on two separate machines,
and the configuration seems to be working without issues.

My follow-up question is this: what changes do I need to make to put Ganesha
into active-active HA mode, where the backend gluster and the ganesha servers
are on different nodes? I am using the instructions here for putting Ganesha in
HA mode: http://www.slideshare.net/SoumyaKoduri/high-49117846. This
presentation refers to commands like gluster cluster.enable-shared-storage to
enable HA.
1. Here is the config I am hoping to achieve:

Gluster nodes: gluster1 and gluster2 on individual bare metal machines, both in
the trusted pool, with volume gvol0 up and running.
Ganesha node 1: on a VM, ganeshaA.
Ganesha node 2: on another VM, ganeshaB.

I would like to know what it takes to put ganeshaA and ganeshaB in
active-active HA mode. Is it technically possible?

a. How do commands like cluster.enable-shared-storage work in this case? (My
current understanding is sketched after question 2 below.)
b. Where does this command need to be run - on the ganesha nodes, or on the
gluster nodes?
2. Also, is it possible to have multiple ganesha servers point to the same
gluster volume on the back end? Say, in configuration #1 above, I have another
ganesha server, ganeshaC, that is not clustered with ganeshaA or ganeshaB. Can
it export the volume gvol0 that ganeshaA and ganeshaB are also exporting?
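For context on (a) and (b), my current understanding, sketched below, is that
the shared-storage option is a cluster-wide gluster option run on a gluster
node, and that the ganesha-side cluster membership comes from
/etc/ganesha/ganesha-ha.conf (keys per the gluster 3.7-era docs; all hostnames
and VIPs are illustrative - please correct me if this is wrong):

    # Run on any node of the trusted pool (the gluster nodes); this creates
    # and mounts the shared gluster_shared_storage volume across the pool.
    gluster volume set all cluster.enable-shared-storage enable

    # /etc/ganesha/ganesha-ha.conf on the ganesha nodes (illustrative values):
    HA_NAME="ganesha-ha-demo"
    HA_VOL_SERVER="gluster1"
    HA_CLUSTER_NODES="ganeshaA,ganeshaB"
    VIP_ganeshaA="10.0.0.101"
    VIP_ganeshaB="10.0.0.102"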
thank you!
Surya.
Regards,
Surya Ghatty
"This too shall pass"
________________________________________________________________________________________________________
Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services
Development | tel: (507) 316-0559 | ghatty at us.ibm.com