Similar to: remove snapshot when one peer is dead

Displaying 20 results from an estimated 5000 matches similar to: "remove snapshot when one peer is dead"

2017 Jun 11 - 0 replies - How to remove dead peer, sorry urgent again :(
On 11/06/2017 9:38 AM, Lindsay Mathieson wrote: > Since my node died on Friday I have a dead peer (vna) that needs to be removed. > I had major issues this morning that I haven't resolved yet, with all VMs going offline when I rebooted a node, which I *hope* was due to quorum issues, as I now have four peers in the cluster, one dead, three live.
2017 Jun 11 - 1 reply - How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:57 AM, Lindsay Mathieson wrote: > I did a "gluster volume set all cluster.server-quorum-ratio 51%" and that has resolved my issue for now, as it allows two servers to form a quorum. > Edit :) Actually "gluster volume set all cluster.server-quorum-ratio 50%" -- Lindsay Mathieson
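For anyone reproducing the fix from this reply, the commands amount to the following (a sketch only, taken from the quoted text; with one of three servers dead, a 51% ratio means two live servers suffice to hold quorum):

    # check the current cluster-wide setting
    gluster volume get all cluster.server-quorum-ratio
    # require more than half the servers, so 2 of 3 live nodes keep volumes writable
    gluster volume set all cluster.server-quorum-ratio 51%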
2017 Jun 11 - 0 replies - How to remove dead peer, sorry urgent again :(
On 6/10/2017 4:38 PM, Lindsay Mathieson wrote: > Since my node died on Friday I have a dead peer (vna) that needs to be removed. > I had major issues this morning that I haven't resolved yet, with all VMs going offline when I rebooted a node, which I *hope* was due to quorum issues, as I now have four peers in the cluster, one dead, three live.
2017 Jun 10 - 4 replies - How to remove dead peer, sorry urgent again :(
Since my node died on Friday I have a dead peer (vna) that needs to be removed. I had major issues this morning that I haven't resolved yet, with all VMs going offline when I rebooted a node, which I *hope* was due to quorum issues, as I now have four peers in the cluster, one dead, three live. Confidence level is not high. -- Lindsay Mathieson
2017 Jun 11 - 0 replies - How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: > On 11 Jun 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote: >> Yes. And please ensure you do this after bringing down all the glusterd instances, and then, once the peer file is removed from all the nodes, restart glusterd on all the nodes one after another.
2023 May 22 - 1 reply - vfs_shadow_copy2 cannot read/find snapshots
The gluster side looks like this:

    root@B741:~# gluster volume get glvol_samba features.show-snapshot-directory
    features.show-snapshot-directory          on
    root@B741:~# gluster volume get glvol_samba features.uss
    features.uss                              enable

I found an error when the samba client is mounting the gluster volume in the gluster logs
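For reference, those two options are what expose gluster snapshots to SMB clients in the first place; enabling them looks like this (a sketch; glvol_samba is the volume named in the post):

    # User Serviceable Snapshots: expose snapshots via a virtual .snaps tree
    gluster volume set glvol_samba features.uss enable
    # make the snapshot directory visible when clients list the share root
    gluster volume set glvol_samba features.show-snapshot-directory on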
2017 Jun 11 - 2 replies - How to remove dead peer, sorry urgent again :(
On 11 Jun 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote: > Yes. And please ensure you do this after bringing down all the glusterd instances, and then, once the peer file is removed from all the nodes, restart glusterd on all the nodes one after another. If you have to bring down all gluster instances before file removal, you also bring down the whole gluster
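Pieced together from this thread, the procedure looks roughly like the following (a sketch, not an authoritative recipe; the peer store lives under /var/lib/glusterd/peers on stock installs, and the UUID file name here is hypothetical):

    # on every surviving node, stop the management daemon first
    systemctl stop glusterd
    # each peer is a file named after its UUID; find and remove the dead one
    ls /var/lib/glusterd/peers/
    rm /var/lib/glusterd/peers/<uuid-of-dead-peer>
    # restart glusterd on the nodes one after another, then verify
    systemctl start glusterd
    gluster peer status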
2023 May 22 - 1 reply - vfs_shadow_copy2 cannot read/find snapshots
Hi Sebastian, why are you using shadow:snapprefix if this is just "snap"? Does it work using ONLY shadow:format = snap_GMT-%Y.%m.%d-%H.%M.%S ? If you use snapprefix you also need to use shadow:delimiter (in your case this would be "_"). However, I never managed to get it working with snapprefix on my machines. Alexander > On Monday, May 22, 2023 at 2:52 PM, Sebastian Neustein via samba
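As an illustration of Alexander's suggestion, a minimal share stanza using only shadow:format might look like this (an untested sketch; the share name and path are hypothetical, and the commented lines show the snapprefix/delimiter pairing discussed in the thread, whose exact semantics are not confirmed here):

    [projects]
            path = /gluster/projects
            vfs objects = shadow_copy2
            # simplest case: the whole snapshot name is encoded in the format
            shadow:format = snap_GMT-%Y.%m.%d-%H.%M.%S
            # alternative under discussion: split name into prefix + delimiter + timestamp
            # shadow:snapprefix = snap
            # shadow:delimiter = _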
2023 May 22 - 2 replies - vfs_shadow_copy2 cannot read/find snapshots
Hi Alexander # net conf delparm projects shadow:snapprefix does not change a thing. The error persists. (I killed my smb session before trying again.) The log still says: [2023/05/22 15:23:23.324179, 1] ../../source3/modules/vfs_shadow_copy2.c:2222(shadow_copy2_get_shadow_copy_data) shadow_copy2_get_shadow_copy_data: SMB_VFS_NEXT_OPEN failed for
2023 May 22 - 3 replies - vfs_shadow_copy2 cannot read/find snapshots
Hi I am trying to get shadow_copy2 to read gluster snapshots and provide the users with previous versions of their files. Here is my smb.conf:

    [global]
            security = ADS
            workgroup = AD
            realm = AD.XXX.XX
            netbios name = A32X
            log file = /var/log/samba/%m
            log level = 1
            idmap config * : backend = tdb
            idmap config * : range =
2006 Jan 09 - 0 replies - [PATCH 01/11] ocfs2: event-driven quorum
This patch separates o2net and o2quo from knowing about one another as much as possible. This is the first in a series of patches that will allow userspace cluster interaction. Quorum is separated out first, and will ultimately only be associated with the disk heartbeat as a separate module. To do so, this patch performs the following changes: * o2hb_notify() is added to handle injection of
2017 Sep 06 - 0 replies - GlusterFS as virtual machine storage
Hmm, I never had to do that and I never had that problem. Is that an arbiter-specific thing? With replica 3 it just works. On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote: > you need to set cluster.server-quorum-ratio 51% > On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: >> Hi all,
2017 Sep 07 - 0 replies - GlusterFS as virtual machine storage
Hi Neil, the docs mention two live nodes of replica 3 blaming each other and refusing to do IO. https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote: > *shrug* I don't use arbiter for VM workloads, just straight replica 3.
2017 Sep 06 - 2 replies - GlusterFS as virtual machine storage
you need to set cluster.server-quorum-ratio 51% On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi all, > I have promised to do some testing and I finally found some time and infrastructure. > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via
2017 Sep 06 - 0 replies - GlusterFS as virtual machine storage
Hi all, I have promised to do some testing and I finally found some time and infrastructure. So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via Openstack) with the disk accessible through gfapi. The volume group is set to virt (gluster volume set gv_openstack_1 virt). The VM runs current (all packages updated) Ubuntu Xenial. I set up
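Aside: the "virt" group referenced here is a predefined bundle of VM-friendly volume options that gluster ships; it is normally applied with the group keyword (a sketch based on the volume name in the post):

    # apply the predefined virt option group (a bundle of VM-tuned settings)
    gluster volume set gv_openstack_1 group virt
    # review the options that were applied
    gluster volume info gv_openstack_1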
2017 Sep 07 - 3 replies - GlusterFS as virtual machine storage
*shrug* I don't use arbiter for VM workloads, just straight replica 3. There are some gotchas with using an arbiter for VM workloads. If quorum-type is auto and a brick that is not the arbiter drops out, and the remaining up brick is dirty as far as the arbiter is concerned (i.e. the only good copy is on the down brick), you will get ENOTCONN and your VMs will halt on IO. On 6 September 2017 at 16:06,
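A hedged way to see where a volume stands with respect to this gotcha before taking a brick down (the volume name gv0 is hypothetical):

    # client-side quorum type for the replica set; 'auto' is the case described above
    gluster volume get gv0 cluster.quorum-type
    # list files with pending heals: a non-empty list means one copy is dirty
    gluster volume heal gv0 info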
2017 Sep 08 - 0 replies - GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure, what is the reason for IOPS to stop/fail? Rebooting a node is somewhat similar to updating gluster, replacing cabling, etc. IMO this should not always end up with the arbiter blaming the other node, and even though I did not investigate this issue deeply, I do not believe the blame is the reason for IOPS to drop. On Sep 7, 2017
2017 Sep 07 - 2 replies - GlusterFS as virtual machine storage
True, but working your way into that problem with replica 3 is a lot harder than with just replica 2 + arbiter. On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi Neil, docs mention two live nodes of replica 3 blaming each other and refusing to do IO. > https://gluster.readthedocs.io/en/latest/Administrator%
2013 Mar 29 - 0 replies - Getting Unknown Error while configuring Asterisk with Linux HA
Hi, I recently configured Linux HA for the Asterisk service (using the Asterisk resource agent downloaded from: https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/asterisk ). As per the configuration it is working well, but when I include the monitor_sipuri="sip:42@10.3.152.103" parameter in the primitive section it gives me errors like those listed below; root at
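For readers trying to reproduce this, the parameter is passed to the ocf:heartbeat:asterisk agent in the primitive definition, roughly like so (a crm-shell sketch; the resource name and operation values are hypothetical, the SIP URI is the one from the post):

    # define the Asterisk resource with a SIP-URI health check
    crm configure primitive p_asterisk ocf:heartbeat:asterisk \
        params monitor_sipuri="sip:42@10.3.152.103" \
        op monitor interval=30s timeout=60s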
2017 Dec 22 - 0 replies - Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik, Thanks for the info. Maybe the documentation should be updated to explain the different AFR versions; I know I was confused. Also, when looking at the changelogs from my three bricks before fixing:

    Brick 1:
    trusted.afr.virt_images-client-1=0x000002280000000000000000
    trusted.afr.virt_images-client-3=0x000000000000000000000000
    Brick 2:
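Those values are AFR changelog extended attributes stored on each brick's copy of the file; they can be inspected directly on a brick (a sketch; the brick and file paths are hypothetical):

    # dump the AFR pending-change xattrs for one file on one brick
    getfattr -d -m . -e hex /bricks/virt_images/brick/disk01.qcow2
    # each trusted.afr.<volume>-client-N value packs three 8-hex-digit counters:
    # data | metadata | entry pending operations; non-zero leading digits
    # (e.g. 0x00000228...) mean this brick blames brick N for pending data ops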