Displaying 20 results from an estimated 500 matches similar to: "Split brain directory"
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which
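To help decide which copy is good, the usual first step is to dump the changelog xattrs on each brick and see which side still shows non-zero pending counters; a minimal sketch, reusing the brick path from the example above (the brick-a path is an assumed counterpart):
# getfattr -d -m . -e hex /gfs/brick-b/a
# getfattr -d -m . -e hex /gfs/brick-a/a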
2013 Jan 11
2
Best practices on KVM systems
Hello,
I'm trying to put together some best practices for my CentOS 6.3 / libvirt / KVM
hypervisors.
Currently I use NFS as the shared storage backend for every VM and make
reasonable use of KSM (enabled in qemu.conf).
Every VM is configured with VirtIO drivers (when possible) and the disks
use 'none' as the caching method so that live migration is possible.
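As an illustration only (not from the original message), a guest with those settings could be defined from the shell roughly like this; the name, image path and bridge are hypothetical:
# virt-install --name vm01 --ram 4096 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/vm01.img,bus=virtio,cache=none \
    --network bridge=br0,model=virtio --import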
I'll be happy to know if there are some
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi,
I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs:
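For a volume in this state, the usual first checks are the heal counters and the split-brain listing (volume name taken from the output above):
# gluster volume heal virt_images info
# gluster volume heal virt_images info split-brain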
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey,
Can you give us the volume info output for this volume?
Why are you not able to get the xattrs from the arbiter brick? It is done the
same way as on the data bricks.
The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in
the getxattr outputs you have provided.
Did you do a remove-brick and add-brick at any point? Otherwise they would
usually be trusted.afr.virt_images-client-{0,1,2}.
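A sketch of how those changelog xattrs are typically read, run on each brick; the brick path is taken from the volume info above and the file name is a placeholder:
# getfattr -d -m . -e hex /data/virt_images/brick/vm01.img   # vm01.img is a placeholder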
2017 Dec 22
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Henrik,
Thanks for providing the required outputs. See my replies inline.
On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen <hjp at liab.dk> wrote:
> Hi Karthik and Ben,
>
> I'll try and reply to you inline.
>
> On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com>
> wrote:
> > Hey,
> >
> > Can you give us the
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik and Ben,
I'll try and reply to you inline.
On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hey,
>
> Can you give us the volume info output for this volume?
# gluster volume info virt_images
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks:
2014 May 04
3
BAD disk I/O performance
Hello,
I'm trying to convert my physical web servers to virtual guests. What I'm
experiencing is poor disk I/O compared to the physical counterpart
(strace shows each write taking approximately 100 times as long as on the
physical host).
The tested hardware is pretty good (HP ProLiant 360p Gen8 with 2x SAS 15k rpm
disks, 48 GB RAM).
The hypervisor part is a minimal CentOS 6.5 with
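One hedged way to quantify the difference is to run the same direct/synchronous write test inside the guest and on the physical host; the file path and sizes are arbitrary:
# dd if=/dev/zero of=/tmp/ddtest bs=4k count=25000 oflag=direct
# dd if=/dev/zero of=/tmp/ddtest bs=1M count=1000 oflag=dsync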
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik,
Thanks for the info. Maybe the documentation should be updated to
explain the different AFR versions; I know I was confused.
Also, when looking at the changelogs from my three bricks before fixing:
Brick 1:
trusted.afr.virt_images-client-1=0x000002280000000000000000
trusted.afr.virt_images-client-3=0x000000000000000000000000
Brick 2:
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey Henrik,
Good to know that the issue got resolved. I will try to answer some of the
questions you have.
- The time taken to heal the file depends on its size. That's why you were
seeing some delay in getting everything back to normal in the heal info
output.
- You did not hit the split-brain situation. In split-brain all the bricks
will be blaming the other bricks. But in your case the
2009 Mar 01
1
php agi and get_data errors.
Hello,
I'm using a self-made script to read the code a user enters in my application.
Sadly, the code doesn't work: I push the digits, but the result is
always empty
(code=200, result=1, data='').
Here is the code:
set_time_limit(99999999999999);
require('phpagi.php');
$agi = new AGI();
$agi->answer();
function printdebug($a) {
global $agi;
2018 Jan 23
6
parallel-readdir is not recognized in GlusterFS 3.12.4
Hello,
I saw that parallel-readdir was an experimental feature in GlusterFS
version 3.10.0, became stable in version 3.11.0, and is now recommended for
small file workloads in the Red Hat Gluster Storage Server
documentation[2]. I've successfully enabled this on one of my volumes but I
notice the following in the client mount log:
[2018-01-23 10:24:24.048055] W [MSGID: 101174]
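For reference, the option is normally enabled like this (the volume name is hypothetical); parallel-readdir also expects readdir-ahead to be on:
# gluster volume set myvol performance.readdir-ahead on      # prerequisite; myvol is hypothetical
# gluster volume set myvol performance.parallel-readdir on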
2017 Oct 11
2
gluster volume + lvm : recommendation or necessity ?
Thanks Rafi, that's understood now :)
I'm considering deploying Gluster on 4 x 40 TB bricks; do you think
it would be better to make one LVM partition for each volume I need, or to
make one big LVM partition and start multiple volumes on it?
We'll store mostly big files (videos) in this environment.
On 11/10/2017 at 09:34, Mohammed Rafi K C wrote:
>
> On 10/11/2017 12:20
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
Volumes are aggregations of bricks, so I would consider bricks as the
unit here rather than volumes. Taking the constraints from the
blog [1]:
* All bricks should be carved out of independent thinly provisioned
logical volumes (LVs). In other words, no two bricks should share a common
LV. More details about thin provisioning and thin provisioned snapshots
can be found here.
* This thinly
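A minimal sketch of carving one thin LV per brick along those lines; the device, names and sizes are illustrative, and the chunk/inode sizes follow common Gluster guidance:
# pvcreate /dev/sdb
# vgcreate vg_gluster /dev/sdb
# lvcreate -L 39T -c 256K --poolmetadatasize 16G -T vg_gluster/brick1_pool
# lvcreate -V 39T -T vg_gluster/brick1_pool -n brick1_lv
# mkfs.xfs -i size=512 /dev/vg_gluster/brick1_lv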
2010 Nov 11
1
Possible split-brain
Hi all,
I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client:
[root at localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
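For completeness, starting the volume and mounting it with the native client would look roughly like this; the mount point is arbitrary:
# gluster volume start datastore
# mount -t glusterfs 192.168.253.1:/datastore /mnt/datastore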
2017 Sep 20
3
Backup and Restore strategy in Gluster FS 3.8.4
> First, please note that gluster 3.8 is EOL and that 3.8.4 is rather old in
> the 3.8 release, 3.8.15 is the current (and probably final) release of 3.8.
>
> "With the release of GlusterFS-3.12, GlusterFS-3.8 (LTM) and GlusterFS-
> 3.11 (STM) have reached EOL. Except for serious security issues no
> further updates to these versions are forthcoming. If you find a bug please
2017 Jun 15
0
Interesting split-brain...
Hi Ludwig,
There is no way to automatically resolve GFID split-brains with a type
mismatch; you have to do it manually by following the steps in [1].
In the type-mismatch case, manual resolution is the recommended approach. But
for a plain GFID mismatch, in 3.11 we have a way to
resolve it by using the *favorite-child-policy*.
Since the file is not important, you can simply delete it.
[1]
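For the plain GFID-mismatch case, the policy mentioned above is set per volume, e.g. (the volume name is hypothetical, and mtime is just one of the accepted values):
# gluster volume set myvol cluster.favorite-child-policy mtime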
2017 Jun 15
2
Interesting split-brain...
I am new to Gluster but already like it. I did maintenance last week
where I shut down both nodes (one after the other). I had many files that
needed healing after that. Everything worked well, except for one file.
It is in split-brain, with two different GFIDs. I read the documentation but
it only covers the cases where the GFID is the same on both bricks. BTW, I
am running Gluster 3.10.
Here
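For context, the manual procedure referred to in the replies generally amounts to removing the unwanted copy and its GFID hard link from the brick chosen as bad, then triggering a heal; all paths and names below are placeholders:
# rm <brick-path>/<file>                                     # the copy being discarded
# rm <brick-path>/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<gfid>  # its GFID hard link
# gluster volume heal <volname>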
2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
On Thu, Sep 28, 2017 at 12:11 PM, Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.zhou at nokia-sbell.com> wrote:
>
>
> The version I am using is glusterfs 3.6.9
>
This is a very old version which is EOL. If you can upgrade to any of the
supported versions (3.10 or 3.12), that would be great.
They have many new features, bug fixes & performance improvements. If you
can try to reproduce
2017 Jun 15
1
Interesting split-brain...
Can you please explain how we ended up in this scenario? I think that
will help us understand more about these scenarios and why Gluster
recommends replica 3 or arbiter volumes.
Regards
Rafi KC
On 06/15/2017 10:46 AM, Karthik Subrahmanya wrote:
> Hi Ludwig,
>
> There is no way to resolve gfid split-brains with type mismatch. You
> have to do it manually by following the steps in [1].
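For reference, an arbiter volume of the kind recommended here is created roughly like this; the hosts and brick paths are hypothetical:
# gluster volume create myvol replica 3 arbiter 1 \
    host1:/bricks/brick1 host2:/bricks/brick1 host3:/bricks/arbiter1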
2017 Jun 09
3
Best way to shut down RH KVM/QEMU based VMs via NUT
Hey there,
Has anyone tackled the best way to shut down a Win10 VM running in Red Hat's (CentOS 7) KVM/QEMU VM manager?
I'm looking at this link to follow:
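One hedged approach (independent of whatever the link describes) is to have the UPS shutdown path ask libvirt to stop the guest and wait for it to power off; a sketch with a hypothetical domain name, assuming the guest responds to ACPI shutdown:
#!/bin/sh
# Sketch: gracefully stop a guest before the host goes down (domain name is hypothetical).
DOM=win10
virsh shutdown "$DOM"                              # send an ACPI shutdown request
for i in $(seq 1 60); do                           # wait up to ~5 minutes
    virsh domstate "$DOM" | grep -q "shut off" && exit 0
    sleep 5
done
virsh destroy "$DOM"                               # hard stop if it never powered off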