similar to: Different results in setting atime

Displaying 20 results from an estimated 200 matches similar to: "Different results in setting atime"

2011 Oct 17
1
brick out of space, unmounted brick
Hello Gluster users, Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change these behaviors. My experience is with glusterfs 3.2.4 on CentOS 6 64-bit. Suppose I have a Gluster volume made up of four 1 MB bricks, like this:
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
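For context on the free-space behaviour being asked about: the DHT translator exposes a tunable minimum-free-space threshold. A hedged sketch, reusing the volume name test from the message:

  # DHT avoids placing new files on bricks whose free space is below this
  # threshold (a percentage of the brick size or an absolute value)
  gluster volume set test cluster.min-free-disk 10%
  # Confirm the option is applied
  gluster volume info test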
2017 Sep 17
2
Volume Heal issue
Hi all, I have a replica 3 with 1 arbiter. For the last few days, one file on a volume has consistently been showing as needing healing:
gluster volume heal vms info
Brick gluster0:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster1:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster2:/gluster/vms/brick
*<gfid:66d3468e-00cf-44dc-a835-7624da0c5370>*
Status:
2017 Sep 17
0
Volume Heal issue
I am using gluster 3.8.12, the default on CentOS 7.3 (I will update to 3.10 at some point). On Sun, Sep 17, 2017 at 11:30 AM, Alex K <rightkicktech at gmail.com> wrote: > Hi all, > > I have a replica 3 with 1 arbiter. > > I see the last days that one file at a volume is always showing as needing > healing: > > gluster volume heal vms info > Brick
2018 Feb 05
0
Dir split brain resolution
After stopping/starting the volume I have:
gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
<gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8>
Status: Connected
Number of entries in split-brain: 1
Brick gluster1:/gluster/engine/brick
Status: Connected
Number of entries in split-brain: 0
gluster volume heal engine split-brain latest-mtime
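The resolution command at the end of the message is cut off; for reference, the latest-mtime policy takes the entry to resolve as its final argument. A hedged sketch using the gfid shown in the output above:

  # Keep the replica with the newest mtime for the listed entry; the argument may be
  # a path relative to the volume root or the gfid-string form shown by heal info
  gluster volume heal engine split-brain latest-mtime gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8
  # Re-check that nothing remains in split-brain
  gluster volume heal engine info split-brain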
2018 Feb 05
0
Dir split brain resolution
Hi Karthik, I tried to delete one file on one node and that is probably the reason. After several deletes it seems I deleted some files that I shouldn't have, and the oVirt engine hosted on this volume was not able to start. Now I am setting up the engine from scratch... In case I see this kind of split brain again I will get back before I start deleting :) Alex On Mon, Feb 5, 2018 at 2:34 PM,
2018 Feb 05
2
Dir split brain resolution
Hi, I am wondering why the other brick is not showing any entry in the heal info split-brain output. Can you give the output of stat & getfattr -d -m . -e hex <file-path-on-brick> from both the bricks? Regards, Karthik On Mon, Feb 5, 2018 at 5:03 PM, Alex K <rightkicktech at gmail.com> wrote: > After stopping/starting the volume I have: > > gluster volume
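Spelled out, the requested commands would look like the sketch below, run on each server against the path on the brick itself rather than the FUSE mount (the ha_agent path is taken from the related report in this thread):

  # Run on both gluster0 and gluster1, against the brick path
  stat /gluster/engine/brick/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent
  getfattr -d -m . -e hex /gluster/engine/brick/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent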
2013 Mar 20
1
About adding bricks ...
Hi @all, I've created a Distributed-Replicated Volume consisting of 4 bricks on 2 servers.
# gluster volume create glusterfs replica 2 transport tcp \
  gluster0{0..1}:/srv/gluster/exp0 gluster0{0..1}:/srv/gluster/exp1
Now I have the following very nice replication schema:
+-------------+ +-------------+
|  gluster00  | |  gluster01  |
+-------------+ +-------------+
| exp0 | exp1 |
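For clarity, the brace expansion and the resulting replica pairing; the expansion below is what the shell produces, and the pairing follows from replica 2 grouping consecutive bricks:

  # What the shell actually passes to gluster after brace expansion:
  gluster volume create glusterfs replica 2 transport tcp \
      gluster00:/srv/gluster/exp0 gluster01:/srv/gluster/exp0 \
      gluster00:/srv/gluster/exp1 gluster01:/srv/gluster/exp1
  # replica 2 groups consecutive bricks, so the replica pairs are:
  #   gluster00:exp0 <-> gluster01:exp0
  #   gluster00:exp1 <-> gluster01:exp1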
2018 Feb 05
2
Dir split brain resolution
Hi all, I have a split brain issue and have the following situation:
gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent
Status: Connected
Number of entries in split-brain: 1
Brick gluster1:/gluster/engine/brick
Status: Connected
Number of entries in split-brain: 0
cd ha_agent/
[root at v0 ha_agent]# ls -al
ls:
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > gluster version 3.10.6, replica 3 volume, daemon is present but does not > appear to be functioning > > peculiar behaviour. If I kill the glusterfs brick daemon and restart > glusterd then the brick becomes available - but one of my other volumes > bricks on the same server goes down in
2017 Oct 24
2
brick is down but gluster volume status says it's fine
gluster version 3.10.6, replica 3 volume; the brick daemon is present but does not appear to be functioning. Peculiar behaviour: if I kill the glusterfs brick daemon and restart glusterd then the brick becomes available - but one of my other volumes' bricks on the same server goes down in the same way. It's like whack-a-mole. Any ideas?
[root at gluster-2 bricks]# glv status digitalcorpora > Status
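For reference, a brick process that has died can usually be restarted without killing anything by hand; a hedged sketch using the volume name from the message:

  # Show brick status and PIDs as glusterd currently sees them
  gluster volume status digitalcorpora
  # Ask glusterd to start any brick processes that are not running,
  # leaving already-running bricks untouched
  gluster volume start digitalcorpora force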
2017 Sep 06
0
Slow performance of gluster volume
Do you see any improvement with 3.11.1, as that has a patch that improves perf for this kind of workload? Also, could you disable eager-lock and check if that helps? I see that the max time is being spent acquiring locks. -Krutika On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi <rightkicktech at gmail.com> wrote: > Hi Krutika, > > Is it anything in the profile indicating what is
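A minimal sketch of the eager-lock test being suggested, assuming the vms volume name used elsewhere in this thread:

  # Disable eager locking, re-run the same dd/profile test, then compare
  gluster volume set vms cluster.eager-lock off
  # Revert if it makes no difference
  gluster volume set vms cluster.eager-lock on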
2017 Sep 06
2
Slow performance of gluster volume
Hi Krutika, Is there anything in the profile indicating what is causing this bottleneck? In case I can collect any other info, let me know. Thanx On Sep 5, 2017 13:27, "Abi Askushi" <rightkicktech at gmail.com> wrote: Hi Krutika, Attached the profile stats. I enabled profiling then ran some dd tests. Also 3 Windows VMs are running on top of this volume but did not do any stress
2017 Sep 08
0
Slow performance of gluster volume
The following changes resolved the perf issue. I added to /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on
and restarted glusterd. Then I set the volume option:
gluster volume set vms server.allow-insecure on
I am now reaching the max network bandwidth and the performance of the VMs is quite good. I did not upgrade glusterd. As a next step I am thinking of upgrading gluster to 3.12 + test
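Written out as commands, the change described above looks roughly like this (the systemctl call assumes a systemd-managed host; the rest is taken from the message):

  # 1. Allow connections from unprivileged ports to the management daemon:
  #    add the line below inside /etc/glusterfs/glusterd.vol
  #      option rpc-auth-allow-insecure on
  # 2. Restart glusterd so the option takes effect
  systemctl restart glusterd
  # 3. Allow insecure ports on the brick/data path for the vms volume
  gluster volume set vms server.allow-insecure on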
2017 Sep 05
0
Slow performance of gluster volume
OK my understanding is that with preallocated disks the performance with and without shard will be the same. In any case, please attach the volume profile[1], so we can see what else is slowing things down. -Krutika [1] - https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi
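For reference, the basic profiling workflow from the linked guide; a sketch assuming the vms volume used in this thread:

  # Start collecting per-brick FOP latency statistics
  gluster volume profile vms start
  # ...run the workload being measured (e.g. the dd tests)...
  # Dump the accumulated statistics and stop profiling
  gluster volume profile vms info
  gluster volume profile vms stop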
2017 Sep 04
2
Slow performance of gluster volume
Hi all, I have a gluster volume used to host several VMs (managed through oVirt). The volume is a replica 3 with arbiter and the 3 servers use a 1 Gbit network for the storage. When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct) outside of the volume (e.g. writing at /root/) the performance of the dd is reported to be ~ 700MB/s, which is quite decent. When testing the dd on
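A sketch of the comparison being described; the FUSE mount point below is an assumption, while the dd invocation is the one from the message:

  # Baseline: write to local disk, outside the gluster volume
  dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
  # Same write against the gluster FUSE mount (mount path is an assumption)
  dd if=/dev/zero of=/mnt/vms/testfile bs=1G count=1 oflag=direct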
2017 Sep 05
0
Slow performance of gluster volume
I'm assuming you are using this volume to store VM images, because I see shard in the options list. Speaking from the shard translator's POV, one thing you can do to improve performance is to use preallocated images. This will at least eliminate the need for shard to perform multiple steps as part of the writes - such as creating the shard and then writing to it and then updating the
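One way to get the preallocation described above; qemu-img, the path, and the size are illustrative assumptions, not taken from the thread:

  # Fully preallocate a raw image so the shard translator does not have to
  # create and extend shards on the fly during guest writes
  qemu-img create -f raw -o preallocation=full /mnt/vms/images/disk0.img 50G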
2017 Sep 11
0
Slow performance of gluster volume
I did not upgrade gluster yet. I am still using 3.8.12. Only the changes mentioned provided the performance boost. From which version to which version did you see such a performance boost? I will try to upgrade and check the difference as well. On Sep 11, 2017 2:45 AM, "Ben Turner" <bturner at redhat.com> wrote: Great to hear! ----- Original Message ----- > From: "Abi
2017 Sep 06
2
Slow performance of gluster volume
I tried to follow the steps from https://wiki.centos.org/SpecialInterestGroup/Storage to install the latest gluster on the first node. It installed 3.10 and not 3.11. I am not sure how to install 3.11 without compiling it. Then, when I tried to start gluster on the node, the bricks were reported down (the other 2 nodes still have 3.8). Not sure why. The logs were showing the below (even after rebooting the
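For reference, the usual CentOS Storage SIG install flow; which gluster version it pulls in depends on the release package chosen, and the exact versioned package name is an assumption:

  # Enable the Storage SIG repo (the unversioned package tracks the SIG default,
  # which at the time resolved to 3.10; a versioned centos-release-gluster* package
  # would be needed to pin a different release)
  yum install -y centos-release-gluster
  yum install -y glusterfs-server
  systemctl restart glusterd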
2017 Sep 05
3
Slow performance of gluster volume
Hi Krutika, I already have a preallocated disk on the VM. Now I am checking performance with dd on the hypervisors which have the gluster volume configured. I also tried several values of shard-block-size and I keep getting the same low values for write performance. Enabling client-io-threads also did not have any effect. The version of gluster I am using is glusterfs 3.8.12, built on May 11 2017
2017 Sep 11
0
Slow performance of gluster volume
Hi Abi, Can you please share your current transfer speeds after you made the change? Thank you. On Mon, Sep 11, 2017 at 9:55 AM, Ben Turner <bturner at redhat.com> wrote: > ----- Original Message ----- > > From: "Abi Askushi" <rightkicktech at gmail.com> > > To: "Ben Turner" <bturner at redhat.com> > > Cc: "Krutika Dhananjay"