Displaying 20 results from an estimated 20000 matches similar to: "Gluster native client configuration"
2013 Sep 06
1
Gluster native client very slow
Hello,
I'm testing a two-node GlusterFS distributed cluster (version 3.3.1-1)
on Debian 7. The two nodes write to the same iSCSI volume on a SAN.
When I try to write a 1 GB file with dd, I get the following results:
NFS: 107 MB/s
Gluster client: 8 MB/s
My /etc/fstab on the client:
/etc/glusterfs/cms.vol /data/cms glusterfs defaults 0 0
I'd like to use the gluster
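For comparison, a sketch of one way to benchmark both mounts with the same workload; the volfile path and mount point are taken from the fstab line above, the block size and file name are just illustrative choices:
  # mount via the volfile as in the poster's fstab, then write a 1 GB test file
  mount -t glusterfs /etc/glusterfs/cms.vol /data/cms
  dd if=/dev/zero of=/data/cms/testfile bs=1M count=1024 conv=fdatasync
The conv=fdatasync flag makes dd wait for the data to reach the servers, so the reported rate is closer to real write throughput than a purely cached run.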
2013 Oct 07
1
glusterd service fails to start on one peer
I'm hoping that someone here can point me in the right direction to help me
solve a problem I am having.
I've got 3 gluster peers and for some reason glusterd will not start on one
of them. All are running glusterfs version 3.4.0-8.el6 on CentOS 6.4
(2.6.32-358.el6.x86_64).
In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I see this error
repeated 36 times (alternating between brick-0
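A common first step (a sketch, not something confirmed in this truncated thread) is to run glusterd in the foreground with debug logging so the failing step is printed straight to the terminal, alongside the service log already mentioned:
  glusterd --debug                                             # run in the foreground with debug-level logging
  service glusterd status
  tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log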
2013 Dec 06
2
How reliable is XFS under Gluster?
Hello,
I am at the point of picking a filesystem for new brick nodes. I have liked
and used ext4 until now, but I recently read about an issue introduced by an
ext4 patch that breaks the distributed translator. At the same time, it
looks like the recommended FS for a brick is no longer ext4 but XFS, which
will apparently also be the default FS in the upcoming Red Hat 7. On the
other hand, XFS is being
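For what it's worth, the commonly cited way to format an XFS brick for Gluster is with a 512-byte inode size, so that Gluster's extended attributes fit inside the inode; the device and mount point below are placeholders:
  mkfs.xfs -i size=512 /dev/sdb1      # 512-byte inodes leave room for Gluster's xattrs
  mount -t xfs /dev/sdb1 /export/brick1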
2013 Sep 30
1
Using gluster with ipv6
Hi,
I'm starting to use Gluster version 3.4.0-4 from debian/unstable,
but I need to use IPv6 on my network and just can't get GlusterFS set up for it.
I tried configuring it with an IPv6 address and with DNS (resolving to IPv6 only), and got errors in both cases.
Here is the last test I did, plus the error:
/usr/sbin/glusterfsd -s ipv6.google.com --volfile-id testvol5.XXX....
[2013-09-30 14:09:24.424624] E [common-utils.c:211:gf_resolve_ip6]
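One hedged possibility (not confirmed in this truncated thread, and support in 3.4.0 is uncertain) is to set the address family for the management daemon in /etc/glusterfs/glusterd.vol; the snippet below is a minimal sketch of that stanza:
  volume management
      type mgmt/glusterd
      option working-directory /var/lib/glusterd
      option transport-type socket
      option transport.address-family inet6
  end-volume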
2013 Jul 02
1
files do not show up on gluster volume
I am trying to touch files on a mounted Gluster volume.
gluster1:/gv0 24G 786M 22G 4% /mnt
[root at centos63 ~]# cd /mnt
[root at centos63 mnt]# ll
total 0
[root at centos63 mnt]# touch hi
[root at centos63 mnt]# ll
total 0
The files I touch don't show up in ls, but if I try a mv operation
something very strange happens:
[root at centos63 mnt]# mv /tmp/hi .
mv:
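A first diagnostic step (a sketch; the volume name gv0 comes from the df line above, the brick path is a guess) is to compare what the client sees with what is actually on the bricks:
  gluster volume info gv0          # confirm the brick list and that the volume is started
  gluster volume status gv0        # check that every brick process is online
  ls -la /export/gv0-brick/        # on each server, look at the brick directory itself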
2013 Sep 16
1
Gluster Cluster
Hi all
I have a GlusterFS cluster underpinning a KVM virtual server in a two-host
setup. Today the cluster just froze and stopped working. I have rebooted
both nodes and brought the storage up again. I can see all the VM files there,
but when I try to start the VM the machine just hangs. How can I see if
Gluster is trying to synchronise files between the two servers?
thanks
Shaun
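To see whether Gluster still has files pending synchronisation between the two servers, the heal-info commands are the usual check (VOLNAME is a placeholder for the actual volume name):
  gluster volume heal VOLNAME info                 # files currently queued for self-heal
  gluster volume heal VOLNAME info split-brain     # entries the self-heal daemon cannot reconcile on its own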
2013 Dec 16
1
Gluster Management Console
I see references to the Gluster Management Console in some of the older
(3.0 and 3.1) documentation. Is this feature available in version 3.4 as
well? If so, could someone point me to documentation on how to access it?
Thanks.
2013 Jul 17
0
Gluster 3.4.0 RDMA stops working with more then a small handful of nodes
I was wondering if anyone on this list has run into this problem. With RDMA volumes of about half a dozen nodes or fewer, I am able to successfully create, start, and mount these RDMA-only volumes.
However, if I try to scale this to 20, 50, or even 100 nodes, RDMA-only volumes completely fall over on themselves. Some of the basic symptoms I'm seeing are:
* Volume create always
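For context, the create/mount syntax for an RDMA-only transport looks roughly like the sketch below (hostnames, volume name, and brick paths are placeholders, not taken from the post):
  gluster volume create rdmavol transport rdma server1:/export/brick1 server2:/export/brick1
  gluster volume start rdmavol
  mount -t glusterfs -o transport=rdma server1:/rdmavol /mnt/rdmavol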
2013 Jul 03
1
One Volume Per User - Possible with Gluster?
I've been looking into using Gluster to replace a system that we currently
use for storing data for several thousand users. With our current networked
file system, each user can create volumes and only that user has access to
their volumes with authentication.
I see that Gluster also offers a username/password auth system, which is
great, but there are several issues about it that bother me:
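For reference, the more commonly used restriction in Gluster is per-volume, IP-based access control rather than per-user auth; a rough sketch (volume name and addresses are placeholders, and this is not the username/password mechanism the poster mentions):
  gluster volume set uservol auth.allow 192.168.1.42,192.168.1.43   # only these clients may mount the volume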
2013 Jul 18
1
Gluster & PHP - stat problem?
Has anyone ever run into a problem in which PHP's stat() call on a file on
a Gluster-backed volume randomly fails, yet /usr/bin/stat never fails?
Running strace against both reveals that the underlying system calls
succeed.
I realize this is probably a PHP problem, since I cannot replicate it with a
non-PHP-based script; however, I was hoping someone on this list might have
seen this before.
RHEL
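A simple way to narrow this down (purely a hypothetical repro sketch; the file path is made up) is to hammer the same file from PHP's stat() and from /usr/bin/stat in a loop and see which one fails first:
  for i in $(seq 1 1000); do
      php -r 'exit(stat("/mnt/gluster/testfile") === false ? 1 : 0);' || echo "PHP stat failed on iteration $i"
      stat /mnt/gluster/testfile > /dev/null || echo "/usr/bin/stat failed on iteration $i"
  done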
2013 Oct 01
1
Gluster on ZFS: cannot open empty files
Hi everyone,
I've got glusterfs-server/glusterfs-client
version 3.4.0final-ubuntu1~precise1 (from the semiosis PPA) running on
Ubuntu 13.04. I'm trying to share ZFS (ZFS on Linux 0.6.2-1~precise from
the zfs-stable PPA) using GlusterFS. When creating the ZFS filesystem and
the Gluster volume, I accepted all the defaults and then:
- I enabled deduplication for the ZFS filesystem (zfs set
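For reference, the setup described probably looked something like the following sketch (pool, dataset, volume, and hostnames are placeholders, not taken from the post):
  zfs create tank/gvol
  zfs set dedup=on tank/gvol             # the deduplication setting mentioned above
  gluster volume create zvol server1:/tank/gvol
  gluster volume start zvol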
2013 Sep 05
1
NFS can't be used by ESXi with Striped Volume
After some testing, I can confirm that ESXi cannot use a Striped-Replicate volume
over GlusterFS's NFS,
but it does succeed with Distributed-Replicate.
Does anyone know how or why?
2013/9/5 higkoohk <higkoohk at gmail.com>
> Thanks Vijay!
>
> It ran successfully after 'volume set images-stripe nfs.nlm off'.
>
> Now I can use ESXi with GlusterFS's NFS export.
>
> Many
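Spelled out in full, the workaround quoted above disables the NFS Network Lock Manager on that volume, which appears to be what ESXi trips over:
  gluster volume set images-stripe nfs.nlm off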
2013 Dec 15
2
puppet-gluster from zero: hangout?
Hey James and JMW:
Can/should we schedule a Google Hangout where James spins up a
puppet-gluster based Gluster deployment on Fedora from scratch? I would love
to see it in action (and possibly steal it for our own Vagrant recipes).
To speed this along: assuming James is in England here, correct me if I'm
wrong, but if so, let me propose a date: Tuesday at 12 EST (that's 5 PM in
London - which I
2013 Oct 06
0
Options to turn off/on for reliable virtual machine writes & write performance
In a replicated cluster, the client writes to all replicas at the same time. This is likely why you are only getting half the speed for writes: the data is going to two servers and therefore maxing out your gigabit network. That is, unless I am misunderstanding how you are measuring the 60 MB/s write speed.
I don't have any advice on the other bits...sorry.
Todd
-----Original Message-----
From:
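Rough arithmetic behind that statement, assuming a replica-2 volume, a single gigabit NIC on the client, and roughly 6% protocol overhead (the overhead factor is an assumption, not from the thread):
  # 1 Gbit/s is about 117 MB/s of usable payload; the client sends every write to both replicas,
  # so each copy can receive at most about half of that:
  echo "scale=1; 1000 / 8 * 0.94 / 2" | bc     # ~58 MB/s, in line with the observed ~60 MB/s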
2013 Nov 25
1
Geo Replication Hooks
Hi,
I have created a proposal for the implementation of Geo Replication Hooks.
See here:
http://www.gluster.org/community/documentation/index.php/Features/Geo_Replication_Hooks
Any comments, thoughts, etc would be great.
Fred
2013 Jul 09
2
Gluster Self Heal
Hi,
I have a 2-node Gluster cluster with 3 TB of storage.
1) I believe "glusterfsd" is responsible for the self-healing between the 2 nodes.
2) Due to some network error, the replication stopped for some reason, but the application kept accessing the data from node1. When I manually try to start the "glusterfsd" service, it does not start.
Please advise on how I can maintain
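For what it's worth, self-heal in a replica pair is normally driven through glusterd and the self-heal daemon rather than by starting glusterfsd by hand; a hedged sketch of the usual recovery steps (VOLNAME is a placeholder):
  service glusterd start            # glusterd then spawns the brick and self-heal processes
  gluster volume status VOLNAME     # confirm the bricks and the Self-heal Daemon are online
  gluster volume heal VOLNAME full  # force a full heal across the replica pair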
2013 Sep 28
0
Gluster NFS Replicate bricks different size
I've mounted a gluster 1x2 replica through NFS in oVirt. The NFS share
holds the qcow images of the VMs.
I recently nuked a whole replica brick in a 1x2 array (for numerous other
reasons, including split-brain); the brick self-healed and restored itself to
the same state as its partner.
4 days later, they've become unbalanced. A direct `du` of the bricks is
showing different sizes by
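One way to make the comparison fairer (a sketch; the brick path is a placeholder) is to exclude the .glusterfs metadata directory, whose hard links and index files can account for much of the difference:
  du -sh --exclude=.glusterfs /bricks/brick1     # run the same command on each replica and compare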
2013 Aug 21
1
FileSize changing in GlusterNodes
Hi,
When I upload files into the Gluster volume, it replicates all the files to both Gluster nodes, but the file size varies slightly (by 4-10 KB), which changes the md5sum of the file.
Command to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4.
This is creating inconsistency between the files on the two bricks. What is the reason for this changed file size, and how can
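One thing worth checking (a sketch, with a placeholder file path): du -k reports allocated blocks, which can legitimately differ between bricks, so the content itself is better compared by apparent size and checksum:
  ls -l /brick/path/file                       # apparent file size in bytes
  du -k --apparent-size /brick/path/file       # size in KB, ignoring block allocation
  md5sum /brick/path/file                      # compare the content rather than the disk usage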
2014 Mar 20
1
Optimizing Gluster (gfapi) for high IOPS
Hey folks,
We've been running VMs on QEMU using a replicated Gluster volume connected via gfapi, and things have been going well for the most part. Something we've noticed, though, is that we have problems with many concurrent disk operations and disk latency. The latency gets bad enough that the process eats the CPU and the entire machine stalls. The place where we've seen it
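Two volume options that are often tried for this kind of concurrent-I/O load, purely as a hedged starting point rather than a confirmed fix for the problem described (VOLNAME and the values are placeholders):
  gluster volume set VOLNAME performance.io-thread-count 32
  gluster volume set VOLNAME performance.write-behind-window-size 4MB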
2013 Dec 23
0
How to ensure the new data write to other bricks, if one brick offline of gluster distributed volume
Hi all
How can I ensure that new data is written to the other bricks if one brick of a Gluster distributed volume goes offline?
Can the client write data that would originally land on the offline brick to the other online bricks instead?
The distributed volume breaks even if only one brick goes offline; that seems very unreliable.
When the failed brick comes back online, how does it rejoin the original distributed volume?
I don't want the newly written data to
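A plain distributed volume cannot place files destined for an offline brick elsewhere; surviving a brick failure normally means adding replication. A hedged sketch of a distributed-replicate layout (hostnames, volume name, and brick paths are placeholders):
  gluster volume create safedata replica 2 \
      server1:/export/brick1 server2:/export/brick1 \
      server3:/export/brick1 server4:/export/brick1     # two replica pairs, distributed across them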