Displaying 20 results from an estimated 30000 matches similar to: "Issue with nfs export"
2013 Nov 27
0
NFS client problems
I have created a 2-node replicated cluster with GlusterFS 3.4.1 on CentOS 6.4. Mounting the volume locally on each server using the native client works fine; however, I am having issues with a separate client-only server from which I wish to use NFS to mount the gluster server volume.
Volume Name: glustervol
Type: Replicate
Volume ID: 6a5dde86-...
Status: Started
Number of Bricks: 1 x 2 = 2
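Gluster's built-in NFS server speaks NFSv3 over TCP only, so a separate NFS client usually has to request those options explicitly; a minimal sketch of the client-side mount (hostname and mount point are placeholders):
# mount -t nfs -o vers=3,mountproto=tcp,nolock gluster-server:/glustervol /mnt/glustervol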
2012 Oct 05
0
No subject
for all three nodes:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data
Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there
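For completeness, a native-client mount of the full volume from any of the three nodes would look roughly like this (the volume name mseas-data is a guess from the brick paths, and the mount point is arbitrary):
# mount -t glusterfs gluster-data:/mseas-data /mnt/mseas-data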
2012 Oct 23
1
Problems with striped-replicated volumes on 3.3.1
Good afternoon,
I am playing around with GlusterFS 3.3.1 in CentOS 6 virtual machines to see if I can get a proof of concept for a bigger project. In my setup, I have 4 GlusterFS servers with two bricks each of 10GB with XFS (per your quick-start guide). So, I have a total of 8 bricks. When bu
I have no problem with distributed-replicated volumes. However, when I set up a striped-replicated
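For reference, a striped-replicated volume on 3.3.x combines the stripe and replica counts in one create command; a sketch with hypothetical server and brick names, giving 2 x 2 x 2 = 8 bricks:
# gluster volume create stripe-rep-vol stripe 2 replica 2 transport tcp \
    server{1..4}:/export/brick1 server{1..4}:/export/brick2
Brick order matters here: Gluster groups consecutive bricks into replica sets first, so the ordering determines which servers mirror each other.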
2013 Sep 05
1
NFS can't be used by ESXi with a Striped Volume
After some tests, I can confirm that ESXi can't use a Striped-Replicate volume
over GlusterFS's NFS.
But it does succeed with Distributed-Replicate.
Does anyone know how or why?
2013/9/5 higkoohk <higkoohk at gmail.com>
> Thanks, Vijay!
>
> It ran successfully after 'volume set images-stripe nfs.nlm off'.
>
> Now I can use ESXi with GlusterFS's NFS export.
>
> Many
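The fix quoted above disables the Network Lock Manager for the volume, which ESXi's NFSv3 client does not use; written out in full, with the volume name taken from the thread:
# gluster volume set images-stripe nfs.nlm off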
2013 Oct 17
0
Gluster Community Congratulates OpenStack Developers on Havana Release
The Gluster Community would like to congratulate the OpenStack Foundation and developers on the Havana release. With performance-boosting enhancements for OpenStack Block Storage (Cinder), Compute (Nova) and Image Service (Glance), as well as a native template language for OpenStack Orchestration (Heat), the OpenStack Havana release points the way to continued momentum for the OpenStack community.
2012 Jun 12
1
What is glustershd ?
Hi Gluster users !
In the process of upgrading a Gluster 3.0.0 platform to 3.3.0, I'm trying
to understand how GlusterFS 3.3.0 works.
I can't find any documentation that explains the role of glustershd.
The only thing I (think I) understand is that the glustershd-server.vol is
only generated for replicated volumes. It contains a cluster/replicate
volume which replicate
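In short, glustershd is the self-heal daemon added in 3.3: one process per server that crawls replicated bricks in the background and repairs out-of-sync copies. It can be observed with the standard CLI (volume name is a placeholder):
# gluster volume status myvol      # lists a "Self-heal Daemon" per server
# gluster volume heal myvol info   # shows entries still awaiting heal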
2013 Nov 05
1
setting up a dedicated NIC for replication
Hi all,
I am evaluating GlusterFS for storing virtual machines (KVM).
I am looking into how to configure a dedicated network (VLAN) for Gluster's replication.
Because the configuration is based on only one DNS name, I don't know how to configure Gluster's nodes in order to (one possible approach is sketched after this list):
- Use the production network for hypervisor communications
- Use replicated/heartbeat
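One common approach, assuming a separate storage VLAN: give each node a second hostname that resolves to its storage-network address, and use those names for peer probing and brick definitions, so replication and brick traffic stay on that VLAN while hypervisors keep using the production names. All names and addresses below are hypothetical:
/etc/hosts on every node:
10.10.0.1   node1-storage
10.10.0.2   node2-storage
# gluster peer probe node2-storage
# gluster volume create vms replica 2 node1-storage:/bricks/vms node2-storage:/bricks/vms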
2013 Jan 03
0
Resolve brick failed in restore
Hi,
I have a lab with 10 machines acting as storage servers for some compute
machines, using glusterfs to distribute the data as two volumes.
Created using:
gluster volume create vol1 192.168.10.{221..230}:/data/vol1
gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2
and mounted on the client and server machines using:
mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1
mount
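When a restored brick refuses to start, one commonly cited cause (not necessarily the resolution reached in this thread) is the trusted.glusterfs.volume-id extended attribute missing from the brick directory; a sketch of checking it on a healthy brick and copying it to the restored one:
# getfattr -n trusted.glusterfs.volume-id -e hex /data/vol2                      # on a healthy brick
# setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /data/vol2     # on the restored brick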
2013 Mar 20
1
About adding bricks ...
Hi @all,
I've created a Distributed-Replicated Volume consisting of 4 bricks on
2 servers.
# gluster volume create glusterfs replica 2 transport tcp \
gluster0{0..1}:/srv/gluster/exp0 gluster0{0..1}:/srv/gluster/exp1
Now I have the following very nice replication schema:
+-------------+ +-------------+
| gluster00 | | gluster01 |
+-------------+ +-------------+
| exp0 | exp1 |
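To grow this volume while preserving that schema, bricks are added in replica-set-sized groups, with consecutive bricks forming the new mirror pair (brick paths hypothetical):
# gluster volume add-brick glusterfs gluster00:/srv/gluster/exp2 gluster01:/srv/gluster/exp2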
2014 Apr 28
2
volume start causes glusterd to core dump in 3.5.0
I just built a pair of AWS Red Hat 6.5 instances to create a gluster replicated pair file system. I can install everything, peer probe, and create the volume, but as soon as I try to start the volume, glusterd dumps core.
The tail of the log after the crash:
+------------------------------------------------------------------------------+
[2014-04-28 21:49:18.102981] I
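For a crash like this, a backtrace from the core is usually the most useful follow-up; a minimal sketch, assuming debuginfo is installed and the core file's location is known (it varies with kernel.core_pattern):
# gdb /usr/sbin/glusterd /path/to/core
(gdb) thread apply all bt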
2012 Oct 22
1
How to add new bricks to a volume?
Hi, dear glfs experts:
I've been using glusterfs (version 3.2.6) for months; so far it works very
well. Now I'm facing the problem of adding two new bricks to an existing
replicated (rep=2) volume, which consists of only two bricks and is
mounted by multiple clients. Can I just use the following commands to add
the new bricks without stopping the services that are using the volume, as
mentioned?
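On a rep=2 volume, bricks can be added online in pairs and followed by a rebalance so existing data spreads onto them; a sketch with hypothetical hosts:
# gluster volume add-brick myvol server3:/export/brick3 server4:/export/brick4
# gluster volume rebalance myvol start
# gluster volume rebalance myvol status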
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug:
[2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle]
0-gv0-stripe-0: Failed to get stripe-size
[2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk]
0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument)
Is there a fix for this in 3.3.1 or do we need to move to git HEAD to
make this work?
M.
--
2013 Oct 24
1
Geo-replication: queue delete commands and process after a specified time
Hello list,
I'm toying with the idea of using Gluster as a user facing network share,
and geo-replicating the data for backup purposes.
At a bare minimum I'd like geo-replication not to sync file deletions
immediately to the slave, but instead queue those deletions for a
configurable period of time (say 7 days).
As an added bonus, moving a file would actually leave a copy behind with a
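Gluster offers no built-in delay queue for geo-replicated deletions; the nearest existing knob is the ignore-deletes config option, which stops deletions from propagating at all rather than delaying them (volume and slave names hypothetical, and the exact slave syntax varies by version):
# gluster volume geo-replication mastervol slavehost::slavevol config ignore-deletes true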
2012 Dec 20
0
nfs.export-dirs
hi All,
# gluster volume info data
Volume Name: data
Type: Distribute
Volume ID: d74ab958-1599-4e82-9358-1eea282d4025
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: tipper:/mnt/brick1
Options Reconfigured:
nfs.export-dirs: on
nfs.export-volumes: off
nfs.export-dir: /install
nfs.port: 2049
nfs.ports-insecure: off
nfs.disable: off
nfs.mount-udp: on
nfs.addr-namelookup: off
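With nfs.export-volumes off and nfs.export-dir /install, only that subdirectory is mountable, addressed as volume-name/subdir; a client-side sketch (mount point arbitrary):
# mount -t nfs -o vers=3 tipper:/data/install /mnt/install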
2013 Oct 14
0
Glusterfs 3.4.1 not able to mount the exports containing soft links
Hi,
I am running the GlusterFS 3.4.1 NFS server. I have created a distributed
volume on server 1, and I am trying to mount a soft link contained in
the volume over NFS from server 2, but it is failing with the error
"mount.nfs: an incorrect mount option was specified".
Below is the volume on server 1 that I am trying to export
server 1 sh# gluster volume info all
Volume
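This particular mount.nfs error often just means the client attempted NFSv4 against Gluster's v3-only server, which is worth ruling out before blaming the symlink; a hedged example with hypothetical names:
# mount -t nfs -o vers=3,nolock server1:/distvol/linked-dir /mnt/test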
2013 Sep 06
1
Gluster native client very slow
Hello,
I'm testing a two-node glusterfs distributed cluster (version 3.3.1-1)
on Debian 7. The two nodes write to the same iSCSI volume on a SAN.
When I try to write a 1 GB file with dd, I get the following results:
NFS: 107 MB/s
Gluster client: 8 MB/s
My /etc/fstab on the client :
/etc/glusterfs/cms.vol /data/cms glusterfs defaults 0 0
I'd like to use the gluster
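One caveat with this comparison: without an explicit sync, the NFS number can be flattered by client-side caching. A fairer version of the dd test forces the data to disk before reporting (file name hypothetical):
# dd if=/dev/zero of=/data/cms/testfile bs=1M count=1024 conv=fdatasync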
2013 Sep 23
1
Mounting a sub directory of a glusterfs volume
I am not sure whether posting with the subject copied from the mailing-list
web page of an existing thread will group my response under that thread.
Apologies if it doesn't.
I am trying to figure out a way to mount a directory within a gluster volume on
a web server. This directory has a quota enabled to limit a user's usage.
gluster config:
Volume Name: test-volume
features.limit-usage:
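In the 3.x series the native FUSE client cannot mount a subdirectory, so the usual route is NFS plus an export-dir, with the quota applied to the same path; a sketch with a hypothetical directory name:
# gluster volume quota test-volume limit-usage /user1 5GB
# gluster volume set test-volume nfs.export-dir "/user1"
# mount -t nfs -o vers=3 server:/test-volume/user1 /var/www/user1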
2012 Nov 27
1
Performance after failover
Hey, all.
I'm currently trying out GlusterFS 3.3.
I've got two servers and four clients, all on separate boxes.
I've got a Distributed-Replicated volume with 4 bricks, two from each
server,
and I'm using the FUSE client.
I was trying out failover, currently testing for reads.
I was reading a big file, using iftop to see which server was actually
being read from.
I put up an
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi,
I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used
by 4 clients.
Sometimes, from some clients, I can't access some of the files. After I force
a full heal on the brick, I see several files healed. Is this behavior
normal?
Thanks
--
Paulo Silva <paulojjs at gmail.com>
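For reference, the 3.3.x heal commands relevant here; seeing some entries healed after a forced full heal can be normal if clients wrote while a brick was briefly unreachable:
# gluster volume heal myvol full
# gluster volume heal myvol info healed
# gluster volume heal myvol info split-brain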