Displaying 20 results from an estimated 300 matches similar to: "Dir split brain resolution"
2018 Feb 05
2
Dir split brain resolution
Hi,
I am wondering why the other brick is not showing any entry in split-brain
in the heal info split-brain output.
Can you give the output of stat & getfattr -d -m . -e hex
<file-path-on-brick> from both bricks?
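For reference, a minimal sketch of the commands being asked for here, assuming the brick path is /gluster/engine/brick (the exact file path under the brick is not given in the thread, so <file-path-on-brick> stays a placeholder):

# run on each of the two brick servers, against the path on the brick itself
stat /gluster/engine/brick/<file-path-on-brick>
getfattr -d -m . -e hex /gluster/engine/brick/<file-path-on-brick>
# the trusted.afr.* xattrs in the output show which brick blames which copy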
Regards,
Karthik
On Mon, Feb 5, 2018 at 5:03 PM, Alex K <rightkicktech at gmail.com> wrote:
> After stopping/starting the volume I have:
>
> gluster volume
2018 Feb 05
0
Dir split brain resolution
After stopping/starting the volume I have:
gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
<gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8>
Status: Connected
Number of entries in split-brain: 1
Brick gluster1:/gluster/engine/brick
Status: Connected
Number of entries in split-brain: 0
gluster volume heal engine split-brain latest-mtime
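For context, a sketch of the CLI-based resolution commands this truncated line is leading into (volume name engine taken from the output above; the <FILE> argument can be a path relative to the volume root or, in recent 3.x releases, a gfid string):

gluster volume heal engine split-brain latest-mtime <FILE>
gluster volume heal engine split-brain bigger-file <FILE>
gluster volume heal engine split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>
# pick exactly one policy; source-brick takes the named brick's copy as the good one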
2018 Feb 05
0
Dir split brain resolution
Hi Karthik,
I tried to delete one file on one node and that is probably the reason.
After several deletes it seems that I deleted some files that I shouldn't
have, and the oVirt engine hosted on this volume was not able to start.
Now I am setting up the engine from scratch...
In case I see this kind of split-brain again I will get back before I start
deleting :)
Alex
On Mon, Feb 5, 2018 at 2:34 PM,
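As an aside, the usual manual alternative to deleting through the mount is to remove the bad copy on one brick only, together with its gfid hard link under .glusterfs, and let self-heal recreate it from the good brick. A rough sketch, assuming the file lives on brick /gluster/engine/brick (this applies to regular files; directories keep a symlink in .glusterfs and need different handling):

# on the brick chosen as the "bad" copy only
rm /gluster/engine/brick/<path-to-file>
# the gfid link path is built from the first two and next two hex chars of the gfid
rm /gluster/engine/brick/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full-gfid>
# then trigger a heal so the good copy is synced back
gluster volume heal engine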
2009 Jul 06
1
lvb length issue [was Re: [ocfs2-tools-devel] question of ocfs2_controld (Jun 27)]
Now that the discussion moves to kernel space, I am moving the email from
ocfs2-tools-devel to ocfs2-devel.
The original discussion can be found from
http://oss.oracle.com/pipermail/ocfs2-tools-devel/2009-June/001891.html
Joel Becker Wrote:
> On Sat, Jun 27, 2009 at 03:46:04AM +0800, Coly Li wrote:
>> Joel Becker Wrote:
>>> On Sat, Jun 27, 2009 at 03:00:05AM +0800, Coly Li wrote:
>>
2021 Jan 07
1
HCI Cluster - CentOS8 to Streams Upgrade Broken
I have a test environment. Three node HCI cluster. CentOS8 build.
Gluster as file system with standard cockpit deploy of HCI.
Converted to CentOS Stream, which seemed to go fine. Did a yum update and
no issues.
Did a reboot... and now the engine will no longer start, so I can no longer
start my virtual machines. I posted it as a bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1911910 I posted to
2011 Oct 17
1
brick out of space, unmounted brick
Hello Gluster users,
Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change this behaviour. My experience is with glusterfs 3.2.4 on CentOS 6 64-bit.
Suppose I have a Gluster volume made up of four 1 MB bricks, like this:
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
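For what it's worth, the knob that usually governs this in the DHT layer is cluster.min-free-disk; a sketch, assuming the volume name test from the listing above (note it only steers where new files are placed, it does not stop writes to files that already exist on a nearly full brick):

gluster volume set test cluster.min-free-disk 10%
# new files are then created on a brick with more free space, leaving a
# small linkto pointer file on the brick the name originally hashed to
gluster volume info test    # confirm the option took effect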
2017 Sep 17
2
Volume Heal issue
Hi all,
I have a replica 3 with 1 arbiter.
Over the last few days I see that one file on a volume is always showing as
needing healing:
gluster volume heal vms info
Brick gluster0:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster1:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster2:/gluster/vms/brick
*<gfid:66d3468e-00cf-44dc-a835-7624da0c5370>*
Status:
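When heal info only shows a gfid like this, one way to map it back to a real path is via the gfid hard link kept under .glusterfs on the brick; a rough sketch, assuming brick path /gluster/vms/brick from the output above (works for regular files, which are hard-linked there):

# the gfid link lives at .glusterfs/<first-2-chars>/<next-2-chars>/<gfid>
ls -li /gluster/vms/brick/.glusterfs/66/d3/66d3468e-00cf-44dc-a835-7624da0c5370
# find the "real" name that shares the same inode
find /gluster/vms/brick -samefile \
    /gluster/vms/brick/.glusterfs/66/d3/66d3468e-00cf-44dc-a835-7624da0c5370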
2013 Sep 06
1
[PATCH 1/6] Add dlm operations placeholders
Signed-off-by: Goldwyn Rodrigues <rgoldwyn at suse.com>
---
fs/ocfs2/stack_user.c | 30 ++++++++++++++++++++++++++++--
1 file changed, 28 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index 286edf1..1b18193 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -799,11 +799,31 @@ static int fs_protocol_compare(struct
2017 Sep 17
0
Volume Heal issue
I am using gluster 3.8.12, the default on CentOS 7.3
(I will update to 3.10 at some point).
On Sun, Sep 17, 2017 at 11:30 AM, Alex K <rightkicktech at gmail.com> wrote:
> Hi all,
>
> I have a replica 3 with 1 arbiter.
>
> Over the last few days I see that one file on a volume is always showing as
> needing healing:
>
> gluster volume heal vms info
> Brick
2017 Oct 24
2
brick is down but gluster volume status says it's fine
gluster version 3.10.6, replica 3 volume, daemon is present but does not
appear to be functioning
Peculiar behaviour: if I kill the glusterfs brick daemon and restart
glusterd then the brick becomes available, but one of my other volumes'
bricks on the same server goes down in the same way. It's like whack-a-mole.
Any ideas?
[root@gluster-2 bricks]# glv status digitalcorpora
> Status
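One commonly suggested way out of this state (a hedged sketch, not a guaranteed fix) is to ask glusterd to respawn only the dead brick processes rather than restarting the whole daemon:

# respawns any brick process that is not running, without touching healthy bricks
gluster volume start digitalcorpora force
# then verify the brick's PID and port are back
gluster volume status digitalcorpora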
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com>
wrote:
> gluster version 3.10.6, replica 3 volume, daemon is present but does not
> appear to be functioning
>
> Peculiar behaviour: if I kill the glusterfs brick daemon and restart
> glusterd then the brick becomes available, but one of my other volumes'
> bricks on the same server goes down in
2017 Sep 04
2
Slow performance of gluster volume
Hi all,
I have a gluster volume used to host several VMs (managed through oVirt).
The volume is a replica 3 with arbiter and the 3 servers use 1 Gbit network
for the storage.
When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1
oflag=direct) outside the volume (e.g. writing to /root/) the performance of
the dd is reported to be ~ 700 MB/s, which is quite decent. When testing the
dd on
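For comparison, the equivalent test on the mounted gluster volume would look roughly like this (the mount point /mnt/vms is an assumption; oflag=direct keeps the page cache out of the measurement):

# same 1 GiB direct-I/O write, but through the FUSE mount instead of the local disk
dd if=/dev/zero of=/mnt/vms/testfile bs=1G count=1 oflag=direct
rm /mnt/vms/testfile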
2017 Sep 05
3
Slow performance of gluster volume
Hi Krutika,
I already have a preallocated disk on the VM.
Now I am checking performance with dd on the hypervisors which have the
gluster volume configured.
I also tried several values of shard-block-size and I keep getting the same
low write performance.
Enabling client-io-threads also did not have any effect.
The version of gluster I am using is glusterfs 3.8.12 built on May 11 2017
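For reference, the tunables mentioned above are set per volume; a sketch, assuming the volume is called vms (note that a new shard-block-size only applies to files created after the change, existing images keep their old shard size):

gluster volume set vms features.shard-block-size 64MB
gluster volume set vms performance.client-io-threads on
gluster volume get vms features.shard-block-size    # verify the current value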
2017 Sep 06
2
Slow performance of gluster volume
Hi Krutika,
Is there anything in the profile indicating what is causing this bottleneck?
In case I can collect any other info, let me know.
Thanx
On Sep 5, 2017 13:27, "Abi Askushi" <rightkicktech at gmail.com> wrote:
Hi Krutika,
Attached the profile stats. I enabled profiling then ran some dd tests.
Also 3 Windows VMs are running on top of this volume but I did not do any stress
2017 Sep 05
0
Slow performance of gluster volume
I'm assuming you are using this volume to store vm images, because I see
shard in the options list.
Speaking from shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This will at least eliminate the need for shard to perform multiple steps
as part of the writes - such as creating the shard and then writing to it
and then updating the
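A minimal sketch of creating a preallocated image (the path and size are made up; in oVirt the same effect comes from choosing "Preallocated" instead of "Thin Provision" when creating the disk):

# raw image with every block written up front, so later guest writes only
# overwrite shards that already exist instead of creating them
qemu-img create -f raw -o preallocation=full /rhev/data/vm-disk.img 50G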
2017 Sep 06
2
Slow performance of gluster volume
I tried to follow the steps from
https://wiki.centos.org/SpecialInterestGroup/Storage to install the latest
gluster on the first node.
It installed 3.10 and not 3.11. I am not sure how to install 3.11 without
compiling it.
Then when I tried to start gluster on the node the bricks were reported
down (the other 2 nodes still have 3.8). Not sure why. The logs were showing
the below (even after rebooting the
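For the record, the usual way to pick a specific series from the CentOS Storage SIG is via its release package; a sketch (package names follow the centos-release-gluster<version> pattern, so whether a 3.11 package exists in the repo is an assumption to verify first):

# enables the Storage SIG repo for the chosen GlusterFS series
yum install centos-release-gluster310
yum install glusterfs-server
# check what actually got installed
glusterfs --version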
2017 Nov 15
2
virtlock - a VM goes read-only
Dear colleagues,
I am facing a problem that has been troubling me for the last week and a
half. Please help if you are able to, or offer some guidance.
I have a non-prod POC environment with 2 fully updated CentOS 7 hypervisors
and an NFS filer that serves as VM image storage. The overall environment
works exceptionally well. However, starting a few weeks ago I have been
trying to implement virtlock
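For anyone following along, the basic libvirt-side configuration for virtlockd is only a couple of lines; a sketch, where the lockspace directory is an assumption and must live on storage both hypervisors can reach:

In /etc/libvirt/qemu.conf on both hypervisors:
    lock_manager = "lockd"
In /etc/libvirt/qemu-lockd.conf (indirect lockspace on the shared storage):
    file_lockspace_dir = "/var/lib/libvirt/lockd/files"
Then restart libvirtd on both hosts:
    systemctl restart libvirtd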
2017 Sep 10
2
Slow performance of gluster volume
Great to hear!
----- Original Message -----
> From: "Abi Askushi" <rightkicktech at gmail.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>
> Cc: "gluster-user" <gluster-users at gluster.org>
> Sent: Friday, September 8, 2017 7:01:00 PM
> Subject: Re: [Gluster-users] Slow performance of gluster volume
>
> Following
2017 Sep 05
0
Slow performance of gluster volume
OK my understanding is that with preallocated disks the performance with
and without shard will be the same.
In any case, please attach the volume profile[1], so we can see what else
is slowing things down.
-Krutika
[1] -
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
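The link above describes the profile workflow; in short it is a three-step sketch (volume name vms assumed):

gluster volume profile vms start
# run the dd / VM workload while profiling is on
gluster volume profile vms info > profile.txt
gluster volume profile vms stop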
On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi
2017 Sep 06
0
Slow performance of gluster volume
Do you see any improvement with 3.11.1? That release has a patch that improves
perf for this kind of workload.
Also, could you disable eager-lock and check if that helps? I see that the
maximum time is being spent in acquiring locks.
-Krutika
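The eager-lock toggle referred to here is a regular volume option; a sketch, again assuming the volume is named vms (re-enable it afterwards if it makes no difference, since it normally helps single-writer VM workloads):

gluster volume set vms cluster.eager-lock off
# re-enable later with:
# gluster volume set vms cluster.eager-lock on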
On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi <rightkicktech at gmail.com> wrote:
> Hi Krutika,
>
> Is there anything in the profile indicating what is