similar to: Quota problems with Gluster3.3b2

Displaying 20 results from an estimated 900 matches similar to: "Quota problems with Gluster3.3b2"

2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04). I've created a replicated volume across the 4 machines. Then on the client machine I've executed: mount -t glusterfs gluster01:/volume01 /mnt/gluster and everything works OK. The main problem occurs on every client machine when I do: umount /mnt/gluster and then mount -t glusterfs gluster01:/volume01 /mnt/gluster again. The client
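A minimal sketch of the remount cycle described above, with the client log checked after each step. The hostnames and volume name are the ones from the post; the log path is an assumption based on the usual convention of the mount point with slashes replaced by dashes:

    # unmount the volume on the client
    umount /mnt/gluster

    # remount from the same server
    mount -t glusterfs gluster01:/volume01 /mnt/gluster

    # inspect the client log for errors after the remount
    # (assumed default log location for this mount point)
    tail -n 50 /var/log/glusterfs/mnt-gluster.log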
2011 Aug 24
1
Input/output error
Hi, everyone. It's nice meeting you; I am not very good at English. I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want to change from a gluster mount to an NFS mount. I installed GlusterFS 3.2.1 one week ago, with 2 servers in replication. OS: CentOS 5.5 64bit RPM: glusterfs-core-3.2.1-1 glusterfs-fuse-3.2.1-1 command: gluster volume create syncdata replica 2 transport tcp
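For the gluster-mount-to-NFS switch the poster asks about, a minimal sketch might look like the following. Gluster's built-in NFS server speaks NFSv3 over TCP only, so the mount options reflect that; the server hostname and mount point are hypothetical, and nolock is an assumption for older Gluster NFS setups without NLM support:

    # replace the native glusterfs (FUSE) mount
    umount /mnt/syncdata

    # mount the same volume through Gluster's built-in NFS server
    # (NFSv3 over TCP is required; nolock assumed for pre-NLM releases)
    mount -t nfs -o vers=3,tcp,nolock server1:/syncdata /mnt/syncdata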
2013 Oct 26
1
Crashing (signal received: 11)
I am seeing this crash happen; I am working on the self-heal errors as well, and am not sure if the two are related. I would appreciate any direction on trying to resolve the issue, as I have clients dropping connections daily. [2013-10-26 15:35:46.935903] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-ENTV04EP-replicate-9: background meta-data self-heal failed on / [2013-10-26
2012 Nov 30
2
"layout is NULL", "Failed to get node-uuid for [...] and other errors during rebalancing in 3.3.1
I started rebalancing my volume after updating from 3.2.7 to 3.3.1. After a few hours, I noticed a large number of failures in the rebalance status:
> Node       Rebalanced-files  size    scanned  failures  status
> ---------  ----------------  ------  -------  --------  ------
> localhost  0                 0Bytes  4288805
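For reference, a minimal sketch of the 3.3 rebalance workflow the post refers to (the volume name myvol is hypothetical):

    # start a rebalance after adding bricks or upgrading
    gluster volume rebalance myvol start

    # check per-node progress and failure counts
    gluster volume rebalance myvol status

    # a fix-layout-only pass is also available if just the layout needs repair
    gluster volume rebalance myvol fix-layout start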
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount: ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs One of the processes usually dies pretty quickly like this: [608] open
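A minimal sketch of that load test as a loop, assuming passwordless SSH to both hosts and dbench already installed (the hostnames are the ones from the post):

    # run dbench with 6 clients for 60 seconds against the shared gluster mount
    for host in bal-6.example.com bal-7.example.com; do
        ssh "$host" "dbench 6 -t 60 -D /mnt/gfs" &
    done
    wait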
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi... Started playing with gluster, and the heal function is my "target" for testing.
Short description of my test
----------------------------
* 4 replicas on a single machine
* glusterfs mounted locally
* Create a file in the glusterfs-mounted directory: date >data.txt
* Append to the file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt
=>
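A minimal sketch of that reproduction, with the replica xattrs inspected on a brick afterwards. The mount point and brick path are hypothetical, and getfattr requires the attr package:

    # create the file through the glusterfs mount
    date > /mnt/gluster/data.txt

    # diverge one copy by writing directly to a brick (never do this in production)
    hostname >> /bricks/brick1/data.txt

    # trigger a lookup/self-heal through the mount
    stat /mnt/gluster/data.txt

    # inspect the AFR changelog xattrs on the brick copy
    getfattr -d -m . -e hex /bricks/brick1/data.txt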
2011 Jun 09
1
NFS problem
Hi, I got the same problem as Juergen. My volume is a simple replicated volume with 2 hosts, on GlusterFS 3.2.0:
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
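For reference, the reconfigured options shown above would have been set with commands like these (volume name taken from the post):

    gluster volume set poolsave diagnostics.brick-log-level DEBUG
    gluster volume set poolsave network.ping-timeout 20
    gluster volume set poolsave performance.cache-size 512MB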
2010 Nov 11
1
Possible split-brain
Hi all, I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client: [root at localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
2011 Mar 03
3
Mac / NFS problems
Hello, We're having issues with Macs writing to our gluster system (gluster vol info at end). On a Mac, if I make a file in the shell I get the following message: smoke:hunter david$ echo hello > test -bash: test: Operation not permitted And the file is made but is zero size. smoke:hunter david$ ls -l test -rw-r--r-- 1 david realise 0 Mar 3 08:44 test glusterfs/nfslog logs thus:
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume on two servers, blade6 and blade7 (another blade1 is in the peer list, but with no volumes). The volume seems OK, but I cannot mount it over NFS. Here are some logs:
[root@blade6 stor1]# df -h
/dev/mapper/gluster_stor1  882G  200M  837G  1%  /gluster/stor1
[root@blade7 stor1]# df -h
/dev/mapper/gluster_fast
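When Gluster's built-in NFS server refuses mounts, a common first check is whether it registered with rpcbind/portmap and whether the export is visible. A minimal sketch, using the hostname from the post; the volume name stor1 is inferred from the brick path and may differ:

    # verify the gluster NFS server registered nfs/mountd with rpcbind
    rpcinfo -p blade6

    # list exports advertised by the gluster NFS server
    showmount -e blade6

    # gluster NFS is NFSv3 over TCP only
    mount -t nfs -o vers=3,tcp blade6:/stor1 /mnt/stor1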
2013 Sep 05
1
NFS cann't use by esxi with Striped Volume
After some tests, I have confirmed that ESXi can't use a Striped-Replicate volume over GlusterFS's NFS, but it does succeed with Distributed-Replicate. Does anyone know how or why? 2013/9/5 higkoohk <higkoohk at gmail.com> > Thanks Vijay! > > It runs successfully after 'volume set images-stripe nfs.nlm off'. > > Now I can use ESXi with GlusterFS's NFS export. > > Many
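The fix quoted above disables the Network Lock Manager on the volume's NFS export; spelled out as a full CLI command (images-stripe is the volume name from the post):

    # disable NLM for this volume's NFS export; the thread reports
    # ESXi's NFS client only works against this volume with NLM off
    gluster volume set images-stripe nfs.nlm off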
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we discovered that when one of the replicate nodes reboots and starts the glusterd daemon, gluster crashes because the other replicate node's CPU usage reaches 100%. Our gluster info:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Options Reconfigured:
performance.cache-size: 3GB
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue, but 1-2 of our 120 clients crash every day. Below is the gdb result:
(gdb) where
#0 0x0000003267432885 in raise () from /lib64/libc.so.6
#1 0x0000003267434065 in abort () from /lib64/libc.so.6
#2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6
#3 0x00000032674750c6 in malloc_printerr () from
2012 Jun 19
1
"Too many levels of symbolic links" with glusterfs automounting
I set up a 3.3 gluster volume for another sysadmin and he has added it to his cluster via automount. It seems to work initially, but after some time (days) he now regularly sees this warning when he tries to traverse the mounted filesystems: df: `/share/gl': Too many levels of symbolic links I've been using gluster with static mounts
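A static mount, as the poster uses, avoids the automount expiry/remount cycle entirely; a minimal /etc/fstab sketch, with a hypothetical server name and volume:

    # /etc/fstab entry for a static glusterfs mount
    # _netdev delays mounting until the network is up
    glserver1:/gl  /share/gl  glusterfs  defaults,_netdev  0 0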
2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
Hi, everyone: We have a glusterfs cluster, version 3.2.7. The volume info is as below:
Volume Name: gfs1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 94 x 3 = 282
Transport-type: tcp
We native-mount the volume on all nodes. When we access the file '/XMTEXT/gfs1_000/000/000/095' on one node, the error is split brain, while we can access the same file on
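One way to investigate a split-brain report like this on a replica-3 volume is to compare the file's copies directly on the three bricks that hold it; a minimal sketch, with hypothetical brick hosts and export paths:

    # on each of the three replica bricks, compare content and AFR xattrs
    for h in brick-a brick-b brick-c; do
        ssh "$h" "md5sum /export/gfs1/XMTEXT/gfs1_000/000/000/095; \
                  getfattr -d -m trusted.afr -e hex /export/gfs1/XMTEXT/gfs1_000/000/000/095"
    done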
2012 Jun 16
5
Not real confident in 3.3
I do not mean to be argumentative, but I have to admit a little frustration with Gluster. I know an enormous amount of effort has gone into this product, and I just can't believe that with all the effort behind it and so many people using it, it could be so fragile. So here goes. Perhaps someone here can point out the error of my ways. I really want this to work because it would be ideal
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi, I'm using Gluster 3.3.0-1.el6.x86_64 on two storage nodes in replicated mode (fs1, fs2). Node specs: CentOS 6.2, Intel quad-core 2.8GHz, 4GB RAM, 3ware RAID, 2x500GB SATA 7200rpm (RAID1 for OS), 6x1TB SATA 7200rpm (RAID10 for /data), 1Gbit network. I've mounted the data partition on web1, a dual quad-core 2.8GHz with 8GB RAM, using glusterfs (also tried NFS -> Gluster mount). We have 50Gb of
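For small-file and directory-listing workloads on 3.3, tuning is usually done through volume options. A minimal sketch of commonly adjusted ones; the values are illustrative assumptions, not recommendations, and the volume name data is hypothetical:

    # larger read cache
    gluster volume set data performance.cache-size 256MB

    # more io-threads on the bricks for concurrent small-file access
    gluster volume set data performance.io-thread-count 32

    # stat-prefetch helps directory listings, quick-read helps
    # small-file reads; both are worth confirming as enabled
    gluster volume set data performance.stat-prefetch on
    gluster volume set data performance.quick-read on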
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
Dear all, Right out of the blue glusterfs is not working fine any more; every now and then it stops working, telling me "Endpoint not connected" and writing core files: [root at tuepdc /]# file core.15288 core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), SVR4-style, from 'glusterfs' My version: [root at tuepdc /]# glusterfs --version glusterfs 3.2.0 built on Apr 22 2011
2012 Feb 18
1
Gluster NFS and symlink
Hi list, Is there a configuration for gluster that makes symlinks work with gluster NFS exports? When I try to create a symlink on a glusterfs NFS mount I get: ln: creating symbolic link `test' to `httpdocs': Unknown error 526 From nfs.log: [2012-02-18 01:27:27.541155] E [client3_1-fops.c:173:client3_1_symlink_cbk] 0-dcm-gluster-backup1-client-0: remote operation failed: Operation not
2012 Mar 12
0
Data consistency with Gluster 3.2.5
I have set up a replicated, four-node gluster config for a web farm. The idea is that each web node is its own Gluster server and has its own copy of the entire web root locally; it then serves the cluster to itself via a mount. We're running it over dual bonded GigE NICs. The problem I am having is that when we switch live traffic to nodes in the cluster, they almost immediately get