On Sun, Mar 22, 2015 at 06:21:52PM +0000, Jason Hilton wrote:
> So I have been trying to get this going and the volume status command
> shows that the "NFS Server on localhost" is offline, port and pid are
> "N/A". I have double checked everything and NFS is disabled on start
> up and is not currently running. The showmount command returns
> "clnt_create: RPC: Program not registered"
Please make sure that none of the RPC programs for NFS are registered at
rpcbind before starting Gluster/NFS (or rebooting).
You can check by executing 'rpcinfo' on the Gluster/NFS server. Any of
the mountd, nlockmgr, status and nfs programs can block the registration
of the NFS-server (and its helper protocols). Unregistering one of the
protocols can be done like this:
# rpcinfo
...
100005 3 tcp 0.0.0.0.150.65 mountd superuser
...
# rpcinfo -d 100005 3
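To avoid repeating that for every registered entry, a rough loop like the
one below should unregister all four services in one go. This is only a
sketch (not tested here); it assumes the standard 'rpcinfo -p' column
layout (program, version, protocol, port, service) and root privileges:

  for svc in mountd nlockmgr status nfs; do
      # collect every program number + version registered for this service
      rpcinfo -p | awk -v s="$svc" '$5 == s { print $1, $2 }' | sort -u |
      while read prog vers; do
          rpcinfo -d "$prog" "$vers"    # unregister it from rpcbind
      done
  done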
Once all of the protocols (+versions) mentioned above have been
unregistered, restart the Gluster/NFS server process:
# gluster volume start $VOLUME force
The 'start force' will only start any missing processes, like the
Gluster/NFS server (a 'glusterfs' process). Running processes (like the
ones for the bricks) should not be impacted.
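To keep this from happening again after a reboot, the kernel NFS server
should stay disabled and only rpcbind needs to be enabled. On a systemd
based distribution that would look roughly like this (the exact unit name
for the kernel NFS server differs per distribution, for example
nfs-server.service, nfs.service or nfs-kernel-server.service, so take
these commands as an example rather than the exact ones for your system):

  # systemctl stop nfs-server.service
  # systemctl disable nfs-server.service
  # systemctl enable rpcbind.service
  # systemctl start rpcbind.service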
Niels
>
> Any ideas?
> Thanks again!
>
> -----Original Message-----
> From: Niels de Vos [mailto:ndevos at redhat.com]
> Sent: Sunday, March 22, 2015 11:45 AM
> To: Jason Hilton
> Cc: 'gluster-users at gluster.org'
> Subject: Re: [Gluster-users] Volume creation time?
>
> On Sun, Mar 22, 2015 at 02:59:53PM +0000, Jason Hilton wrote:
> > Thank you for the quick reply! I didn't expect to see any response
> > on a Sunday. I did as you suggested and found some messages stating
> > that the address and port were failing to bind because it was already
> > in use. It turned out that the NFS service was running and interfered
> > with glusterd. I was intending to share my gluster volumes via NFS
> > and I thought I had read that as of V3, gluster exported NFS shares by
> > default, so I had started the service. Does gluster provide its own
> > NFS services?
>
> Yes, Gluster indeed comes with its own NFS server. You should not
> start any NFS services yourself; Gluster takes care of starting them.
> The only service that you need to have running (or activated in
> systemd environments) is rpcbind.
>
> Once your volume has been created and started, you should be able to
> see that there is an NFS server running with this command:
>
> # gluster volume status
>
> And, with 'showmount -e', the volume should be listed as an export.
>
> Cheers,
> Niels
>
> >
> > ***************************************************************
> > Jason Hilton
> > Director of Technology Development
> > 601 Madison Street, Suite 400
> > Alexandria, VA 22314
> > jason.hilton at aaae.org
> > Desk: 703.824.0500x167
> > FAX: 703.578.4952
> >
> > AAAE Tech support:
> > IET at aaae.org
> > 703.797.2555, opt. 2
> > ***************************************************************
> >
> >
> > -----Original Message-----
> > From: Niels de Vos [mailto:ndevos at redhat.com]
> > Sent: Sunday, March 22, 2015 10:13 AM
> > To: Jason Hilton
> > Cc: 'gluster-users at gluster.org'
> > Subject: Re: [Gluster-users] Volume creation time?
> >
> > On Sun, Mar 22, 2015 at 01:34:24PM +0000, Jason Hilton wrote:
> > > Hi-
> > > I'm new to GlusterFS and I have been trying to set up a gluster
> > > volume. The volume is 150 TB. I started the create volume command
> > > on Friday morning and it has not yet completed. Since I have no
> > > prior experience with GlusterFS, is this an expected duration? The
> > > server is no powerhouse, a pair of older quad-core Xeon processors
> > > at 2 GHz and only 4 GB of RAM. TOP shows very little processor
> > > usage, but IOTOP shows some disk I/O. I don't mind waiting it out,
> > > I just want to be sure that the process is still proceeding. Is
> > > there a way to monitor Gluster volume creation progress?
> >
> > Volume creation should be very fast; there is not a lot for Gluster
> > to do to create a volume. A couple of seconds should be sufficient.
> >
> > Check the /var/log/glusterfs/etc-*.log to see if there are any
> > errors listed there.
> >
> > HTH,
> > Niels