
Displaying 20 results from an estimated 400 matches similar to: "access a file on one node, split brain, while it's normal on another node"

2012 Mar 12
0
Data consistency with Gluster 3.2.5
I have set up a replicated, four-node gluster config for a web farm. The idea is that each web node is its own Gluster server and has its own copy of the entire web root locally; it then serves the web root to itself via a cluster mount. We're running it over bonded dual GigE NICs. The problem I am having is that when we switch live traffic to nodes in the cluster, they almost immediately get
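A setup like the one described above is usually built by probing the peers, creating a replica volume with one brick per web node, and having each node mount the volume from itself. A minimal sketch, assuming Gluster 3.2.x and hypothetical hostnames web1-web4, brick path /export/webroot and volume name webroot:

  gluster peer probe web2          # run from web1; repeat for web3 and web4
  gluster volume create webroot replica 4 transport tcp \
      web1:/export/webroot web2:/export/webroot \
      web3:/export/webroot web4:/export/webroot
  gluster volume start webroot
  # on every web node, mount the volume from the local daemon
  mount -t glusterfs localhost:/webroot /var/www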
2012 Jan 13
1
Quota problems with Gluster3.3b2
Hi everyone, I'm playing with Gluster 3.3b2, and everything is working fine when uploading stuff through swift. However, when I enable quotas on Gluster, I randomly get permission errors. Sometimes I can upload files, most of the time I can't. I'm mounting the partitions with the acl flag, and I've tried wiping out everything and starting from scratch, same result. As soon as I
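For reference, quota in Gluster 3.3 is enabled per volume and limits are set per directory; a minimal sketch, with a hypothetical volume name 'swiftvol' and directory path:

  gluster volume quota swiftvol enable
  gluster volume quota swiftvol limit-usage /object-store 10GB
  gluster volume quota swiftvol list
  # client mount with POSIX ACL support, as mentioned above
  mount -t glusterfs -o acl gfs-server:/swiftvol /mnt/swiftvol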
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
Hi there, I'm running glusterfs version 3.1.0. The client crashed after some time with the stack below. [2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1: Subvolume 'distribute-1' came back up; going online. [2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1: data self-heal triggered. path: /streaming/set3/work/reduce.12.1294902171.dplog.temp,
2011 Aug 21
2
Fixing split brain
Hi, Consider the typical split-brain situation: reading from the file gets EIO, and the logs say: [2011-08-21 13:38:54.607590] W [afr-open.c:168:afr_open] 0-gfs-replicate-0: failed to open as split brain seen, returning EIO [2011-08-21 13:38:54.607895] W [fuse-bridge.c:585:fuse_fd_cbk] 0-glusterfs-fuse: 1371456: OPEN() /manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub => -1 (Input/output
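For a data split brain like the one above, the usual manual fix in that era was to decide which replica to keep, remove the stale copy directly from the brick of the 'bad' node, and trigger self-heal from a client mount. A rough sketch, with hypothetical brick and mount paths (on 3.3 and later the matching .glusterfs gfid hard link must be removed as well):

  # compare the AFR changelog xattrs on each brick to pick the good copy
  getfattr -d -m . -e hex /export/brick0/path/to/file    # run on both nodes
  # on the node holding the bad copy only
  rm /export/brick0/path/to/file
  # from a client mount, trigger the heal
  stat /mnt/gfs/path/to/file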
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount: ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs One of the processes usually dies pretty quickly like this: [608] open
2011 Jun 28
0
[Gluster-devel] volume rebalance still broken
Replying and adding gluster-users. That seems more appropriate? ________________________________________ From: gluster-devel-bounces+jwalker=gluster.com at nongnu.org [gluster-devel-bounces+jwalker=gluster.com at nongnu.org] on behalf of Emmanuel Dreyfus [manu at netbsd.org] Sent: Tuesday, June 28, 2011 6:51 AM To: gluster-devel at nongnu.org Subject: [Gluster-devel] volume rebalance still broken
2013 Oct 26
1
Crashing (signal received: 11)
I am seeing these crashes happening; I am working on the self-healing errors as well, and am not sure if the two are related. I would appreciate any direction on trying to resolve the issue, as I have clients dropping connections daily. [2013-10-26 15:35:46.935903] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-ENTV04EP-replicate-9: background meta-data self-heal failed on / [2013-10-26
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote: > I am trying to set up geo replication between two gluster volumes > > I have set up two replica 2 arbiter 1 volumes with 9 bricks > > [root@gfs1 ~]# gluster volume info > Volume Name: gfsvol > Type: Distributed-Replicate > Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 > Status: Started > Snapshot Count: 0 > Number
2012 Jun 16
5
Not real confident in 3.3
I do not mean to be argumentative, but I have to admit a little frustration with Gluster. I know an enormous amount of effort has gone into this product, and I just can't believe that with all the effort behind it and so many people using it, it could be so fragile. So here goes. Perhaps someone here can point to the error of my ways. I really want this to work because it would be ideal
2007 Aug 15
0
[git patch] fstype support + minor stuff
hello hpa, rebased my branch, please pull latest git pull git://brane.itp.tuwien.ac.at/~mattems/klibc.git maks for the following shortlog maximilian attems (6): fstype: add squashfs v3 support reiser4_fs.h: add attribute packed to reiser4_master_sb fstype: add ext4 support .gitignore: add subdir specific entries usr/klibc/Kbuild: beautify klibc build fstype:
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo replication between two gluster volumes I have set up two replica 2 arbiter 1 volumes with 9 bricks [root@gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
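For reference, a geo-replication session from a master volume like gfsvol is typically created along these lines (the slave host 'geo1' and slave volume 'gfsvol_slave' below are hypothetical, and passwordless root SSH to the slave is assumed):

  # generate and distribute the pem keys for the session
  gluster system:: execute gsec_create
  gluster volume geo-replication gfsvol geo1::gfsvol_slave create push-pem
  gluster volume geo-replication gfsvol geo1::gfsvol_slave start
  # 'status detail' shows which brick sessions are Active, Passive or Faulty
  gluster volume geo-replication gfsvol geo1::gfsvol_slave status detail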
2017 Sep 22
0
fts_read failed
Hi, I have a simple installation, using mostly defaults, of two mirrored servers. One brick, one volume. GlusterFS version is 3.12.1 (server and client). All hosts involved are Debian 9.1. On another host I have mounted two different directories from the cluster using /etc/fstab: gfs1,gfs2:/vol1/sites-available/ws0 /etc/nginx/sites-available glusterfs defaults,_netdev 0 0 and
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04). I've created a replicated volume across the 4 machines. Then on the client machine I've executed: mount -t glusterfs gluster01:/volume01 /mnt/gluster And everything works ok. The main problem occurs on every client machine where I do: umount /mnt/gluster and then mount -t glusterfs gluster01:/volume01 /mnt/gluster The client
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume on two servers. The servers are blade6 and blade7. (There is another peer, blade1, but with no volumes.) The volume seems ok, but I cannot mount it over NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1 [root@blade7 stor1]# df -h /dev/mapper/gluster_fast
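When '/' itself is reported as a possible split brain, a common first step is to ask gluster what it considers unhealed and to compare the AFR changelog extended attributes on the brick roots of the two replicas. A minimal sketch, with a hypothetical volume name 'stor1' and assuming the df mount point above is the brick root:

  gluster volume heal stor1 info
  gluster volume heal stor1 info split-brain
  # run on blade6 and blade7 and compare the trusted.afr.* values
  getfattr -d -m trusted.afr -e hex /gluster/stor1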
2013 Feb 26
0
Replicated Volume Crashed
Hi, I have a gluster volume that consists of 22 bricks and includes a single folder with 3.6 million files. Yesterday the volume crashed and turned out to be completely unresponsive, and I was forced to perform a hard reboot on all gluster servers because they could not execute a reboot command issued from the shell, being that heavily overloaded. Each gluster server has 12 CPU cores
2010 Nov 11
1
Possible split-brain
Hi all, I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client: [root@localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using dovecot 2.0.12 to find the best shared filesystem for hosting many users; here I share the results with you. Notice the bad performance of all the shared filesystems against the local storage. Is there any specific optimization/tuning on dovecot for using GFS2 on rhel6? We have configured the director to make the user mailbox persistent on a node, we will
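For the director part, pinning each user to one backend in dovecot 2.0 is done on the proxy tier with the director service; a minimal sketch of the relevant settings (IP addresses hypothetical, and a proxying passdb on the director hosts is assumed as well):

  # director hosts that share the user-to-backend mapping
  director_servers = 192.168.10.1 192.168.10.2
  # backend mail servers the director distributes users across
  director_mail_servers = 192.168.20.1 192.168.20.2 192.168.20.3
  service imap-login {
    executable = imap-login director
  }
  service pop3-login {
    executable = pop3-login director
  }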
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi... Started playing with gluster, and the heal function is my "target" for testing. Short description of my test ---------------------------- * 4 replicas on a single machine * glusterfs mounted locally * Create file on glusterfs-mounted directory: date >data.txt * Append to file on one of the bricks: hostname >>data.txt * Trigger a self-heal with: stat data.txt =>
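On releases from 3.3 onwards (the test above is from the 3.2 era, where heals are triggered by lookups such as the stat shown) the pending heal can also be inspected and kicked off from the CLI; the volume name below is hypothetical:

  gluster volume heal testvol info
  gluster volume heal testvol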
1997 Mar 03
0
SECURITY: Important fixes for IMAP
-----BEGIN PGP SIGNED MESSAGE----- The IMAP servers included with all versions of Red Hat Linux have a buffer overrun which allows *remote* users to gain root access on systems which run them. A fix for Red Hat 4.1 is now available (details on it at the end of this note). Users of Red Hat 4.0 should apply the Red Hat 4.1 fix. Users of previous releases of Red Hat Linux are strongly encouraged to
2019 Dec 20
1
GFS performance under heavy traffic
Hi David, Also consider using the mount option to specify a backup server via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases). That way, when the primary is lost, your client can reach a backup one without disruption. P.S.: Client may 'hang' if the primary server got
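As a concrete illustration of the suggestion above, the backup volfile servers can be passed either on the mount command line or in /etc/fstab (hostnames and paths hypothetical; newer clients also accept the spelling backup-volfile-servers):

  mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/vol1 /mnt/vol1
  # or in /etc/fstab
  server1:/vol1  /mnt/vol1  glusterfs  defaults,_netdev,backupvolfile-server=server2:server3  0 0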