similar to: Behaviour of two node degraded cluster

Displaying 20 results from an estimated 4000 matches similar to: "Behaviour of two node degraded cluster"

2013 Jul 02
1
read-subvolume
Hi everyone, I have installed 3.3.1-1 from the Debian repository you provide. I am using a simple 2-node cluster running in replication mode. The connection between the nodes is limited to 100 Mb/sec (that's bits, not bytes!). Usage will be mainly read access, and since there is always a local copy available [ exactly 2 replicas on exactly 2 machines ] I expect very fast read
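For local reads in a setup like this, the AFR translator exposes read-routing tunables. A minimal sketch, assuming a volume named gv0 and that these options are available in the installed 3.3.x release (verify with `gluster volume set help` first):
    # prefer the brick on the local machine for reads, if one is available
    gluster volume set gv0 cluster.choose-local on
    # or pin reads to one replica subvolume explicitly
    gluster volume set gv0 cluster.read-subvolume gv0-client-0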
2019 Dec 28
1
GFS performance under heavy traffic
Hi David, It seems that I have misread your quorum options, so just ignore that part of my previous e-mail. Best Regards, Strahil Nikolov On Dec 27, 2019 15:38, Strahil <hunter86_bg at yahoo.com> wrote: > > Hi David, > > Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
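Before a rolling upgrade it is worth confirming the cluster-wide op-version; a sketch, assuming a release new enough to expose these global options:
    gluster volume get all cluster.op-version
    gluster volume get all cluster.max-op-version
    # after all nodes are upgraded, bump the op-version if desired
    gluster volume set all cluster.op-version <max-op-version>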
2017 Sep 10
4
Corosync on a home network
I've been trying to build a model cluster using three virtual machines on my home server. Each VM boots off its own dedicated partition (CentOS 7.3). One partition is designated to be the common /home partition for the VMs (on the real machine it will mount as /cluster). I'm intending to run GFS2 on the shared partition, so I need to configure DLM and corosync. That's where I'm
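A minimal corosync.conf sketch for such a three-VM lab (the cluster name and hostnames vm1-vm3 are placeholders; on CentOS 7 this file is normally generated by pcs rather than written by hand):
    totem {
        version: 2
        cluster_name: homelab
        transport: udpu
    }
    nodelist {
        node {
            ring0_addr: vm1
            nodeid: 1
        }
        node {
            ring0_addr: vm2
            nodeid: 2
        }
        node {
            ring0_addr: vm3
            nodeid: 3
        }
    }
    quorum {
        provider: corosync_votequorum
    }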
2009 Nov 20
3
o2net patch that avoids socket disconnect/reconnect
This fix modifies o2net layer behavior that seems to trigger some DLM race issues during umount/evictions; those issues need to be fixed as well. I am working on the dlm issues, but meanwhile please review this patch. Thanks, --Srini
2019 Dec 24
1
GFS performance under heavy traffic
Hi David, On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote: > > Hello, > > In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node? It makes sense, as no data is being generated towards
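For reference, a fuse client can be given all nodes for volfile retrieval even though only one is named in the mount; a sketch with placeholder hostnames and volume name gvol0:
    mount -t glusterfs -o backup-volfile-servers=gfs2:gfs3 gfs1:/gvol0 /mnt/gvol0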
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey, Can you give us the volume info output for this volume? Why are you not able to get the xattrs from the arbiter brick? It is done the same way as on the data bricks. The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in the getxattr outputs you have provided. Did you do a remove-brick and add-brick at any time? Otherwise they would usually be trusted.afr.virt_images-client-{0,1,2}.
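The changelog xattrs referred to here can be read directly on each brick; a sketch, with the brick and file paths as placeholders:
    # dump all trusted.* xattrs (including trusted.afr.*) for one file on a brick
    getfattr -d -m . -e hex /bricks/virt_images/path/to/file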
2017 Dec 22
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Henrik, Thanks for providing the required outputs. See my replies inline. On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen <hjp at liab.dk> wrote: > Hi Karthik and Ben, > > I'll try and reply to you inline. > > On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> > wrote: > > Hey, > > > > Can you give us the
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik and Ben, I'll try and reply to you inline. On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hey, > > Can you give us the volume info output for this volume?
# gluster volume info virt_images
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks:
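For a thread like this, the CLI-based split-brain resolution is usually the next step; a sketch, assuming the installed release supports these subcommands and with the file path and source brick as placeholders:
    gluster volume heal virt_images info split-brain
    # pick a resolution policy per file, e.g. keep the copy with the newest mtime
    gluster volume heal virt_images split-brain latest-mtime /path/inside/volume
    # or declare one brick authoritative for that file
    gluster volume heal virt_images split-brain source-brick host1:/bricks/virt_images /path/inside/volume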
2017 Aug 25
2
GlusterFS as virtual machine storage
On 23-08-2017 18:51 Gionatan Danti wrote: > On 23-08-2017 18:14 Pavel Szalbot wrote: >> Hi, after many VM crashes during upgrades of Gluster, losing network >> connectivity on one node etc. I would advise running replica 2 with >> arbiter. > > Hi Pavel, this is bad news :( > So, in your case at least, Gluster was not stable? Something as simple > as an
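"Replica 2 with arbiter" here means a replica 3 arbiter 1 volume; a sketch of creating one (the volume name, hostnames and brick paths are placeholders):
    gluster volume create vmstore replica 3 arbiter 1 \
        node1:/bricks/vmstore/brick node2:/bricks/vmstore/brick arb:/bricks/vmstore/brick
    gluster volume start vmstore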
2012 Nov 14
2
Avoid Split-brain and other stuff
Hi! I just gave GlusterFS a try and experienced two problems. First some background: - I want to set up a file server with synchronous replication between branch offices, similar to Windows DFS-Replication. The goal is _not_ high availability or cluster scale-out, but just having all files locally available at each branch office. - To test GlusterFS, I installed two virtual machines
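To reduce the split-brain risk described above in a two-node replica, client- and server-side quorum can be enabled; a sketch with a placeholder volume name, assuming a release that supports these options (note that with only two nodes, quorum will make the volume read-only or unavailable when one node is down):
    gluster volume set branchvol cluster.quorum-type auto
    gluster volume set branchvol cluster.server-quorum-type server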
2012 Nov 26
1
Heal not working
Hi, I have a volume created from 12 bricks with 3x replication (no stripe). We had to take one server down for maintenance (2 bricks per server, but ordered so that the first brick from every server comes before the second brick from every server, so no server should appear more than once in any replica group). The server was down for 40 minutes, and after it came up I saw that gluster volume heal home0
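The usual follow-up commands for a situation like this, using the volume name home0 from the message:
    gluster volume heal home0 info       # list entries still pending heal
    gluster volume heal home0            # trigger heal of pending entries
    gluster volume heal home0 full       # force a full crawl if index heal misses files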
2012 Jan 04
1
GPFS for mail-storage (Was: Re: Compressing existing maildirs)
Great information, thank you. Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers? ----- Reply message ----- From: "Jan-Frode Myklebust" <janfrode at tanso.net> To: "Stan Hoeppner" <stan at hardwarefreak.com> Cc: "Timo Sirainen" <tss at iki.fi>, <dovecot at dovecot.org> Subject:
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2, expecting that all active bricks would be usable so long as a quorum of at least 4 live bricks is maintained. However, I have just found http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/ which states that "In a replica 2 volume... If we set the client-quorum
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By default client-quorum is
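If the read-only behaviour of client-quorum auto on replica 2 is not acceptable, the options can be inspected and overridden; a sketch with a placeholder volume name (quorum-count 1 keeps each brick writable on its own but reintroduces split-brain risk):
    gluster volume get myvol cluster.quorum-type
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 1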
2018 Jul 05
5
two 2-node clusters or one 4-node cluster?
Hello, I'm planning the migration of two current clusters based on CentOS 6.x with Cman/Rgmanager to CentOS 7.x and Corosync/Pacemaker. As the clusters and their services are on the same subnet, and there are no particular security concerns differentiating them, I'm also evaluating the option to transform the two clusters into a single 4-node one during the upgrade. Currently I'm
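On CentOS 7 with Corosync/Pacemaker, a single 4-node cluster would be bootstrapped roughly as follows (pcs 0.9 syntax; the cluster and node names are placeholders):
    pcs cluster auth node1 node2 node3 node4 -u hacluster
    pcs cluster setup --name fourcluster node1 node2 node3 node4
    pcs cluster start --all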
2009 May 04
2
FW: Oracle 9204 installation on linux x86-64 on ocfs
Hello All, I have installed Oracle Cluster Manager on Linux x86-64. I am using the ocfs file system for the quorum file, but I am getting the following error. Please see the ocfs configuration below. I would appreciate it if someone could help me understand whether I am doing something wrong. Thanks in advance. --------------------------------------------------cm.log file ---------------------------- oracm,
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave, On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote: > I've configured 6 bricks as distributed-replicated with replica 2, > expecting that all active bricks would be usable so long as a quorum of > at least 4 live bricks is maintained. > The client quorum is configured per replica subvolume and not for the entire volume. Since you have a
2017 Jun 10
4
How to remove dead peer, osrry urgent again :(
Since my node died on Friday I have a dead peer (vna) that needs to be removed. I had major issues this morning that I haven't resolved yet, with all VMs going offline when I rebooted a node, which I *hope* was due to quorum issues as I now have four peers in the cluster, one dead, three live. Confidence level is not high. -- Lindsay Mathieson
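The usual way to drop an unreachable peer, using the peer name vna from the message (force is needed because the node can no longer be contacted):
    gluster peer status
    gluster peer detach vna force
    gluster peer status    # confirm only the three live peers remain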
2017 Jul 04
2
I need a sanity check.
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on with our CLVM cluster. Background: 4-node "cluster" -- machines are Dell blades with Dell M6220/M6348 switches. The sole purpose of the Cluster Suite tools is to use CLVM against an iSCSI storage array. The machines are running CentOS 5.8 with the Xen kernels. These blades host various VMs for a project. The iSCSI
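A quick checklist for CLVM on CentOS 5 with Cluster Suite, shown as a sketch with the file and service names as commonly configured:
    grep '^ *locking_type' /etc/lvm/lvm.conf   # should be locking_type = 3 for clvmd
    service cman status
    service clvmd status
    clustat                                    # confirm all 4 nodes are quorate members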