similar to: brick is down but gluster volume status says it's fine

Displaying 20 results from an estimated 1000 matches similar to: "brick is down but gluster volume status says it's fine"

2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > gluster version 3.10.6, replica 3 volume, daemon is present but does not > appear to be functioning > > Peculiar behaviour. If I kill the glusterfs brick daemon and restart > glusterd then the brick becomes available - but one of my other volumes' > bricks on the same server goes down in
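
A quick way to cross-check the situation described here (brick process dead while glusterd still shows it as up) is to compare the volume status with the brick processes actually running and, if a brick really is down, respawn it with a forced start. A minimal sketch, assuming a placeholder volume name "myvol":

    # compare what glusterd reports with the brick processes actually running
    gluster volume status myvol
    ps aux | grep glusterfsd

    # respawn only the brick processes that are offline; running bricks are untouched
    gluster volume start myvol force
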
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained. Ah, this was what I suspected. Understood. I'll be happy with "shard". Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of those acts also as a client. I looked into logs. I paste lengthy logs below with
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote: > Ji-Hyeon, > > You're saying that "stripe=2 transport=rdma" should work. Ok, that > was the first thing I wanted to know. I'll put together logs later this week. Note that "stripe" is not tested much and practically unmaintained. We do not advise you to use it. If you have large files that you
2011 Sep 02
1
[PATCH 0/7] hivex + hivexml: Add byte runs for nodes and values
This changeset adds byte run reporters for node and value metadata in the hivexml program. This location reporting required several new ABI functions, which required new ABI return types. One benefit to the byte run functions is additional sanity checks, which have revealed new data or parsing errors when run on M57 patents images. An example error: Image: Charlie, 2009-12-11, available at
2010 May 04
1
Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with GlusterFS 3.0.2 that I'm testing on 4 servers, each with access to /mnt/gluster (which consists of the directories /mnt/data01 - data24 on each server). I'm using configs I built from volgen, but every time I access a file (via an 'ls -l') for the first time, I get all of these messages in my logs on each server: [2010-05-04 10:50:30] W
2008 Jun 04
1
balancing redundancy with space utilization
Currently it would seem that AFR will simply copy everything to every brick in the AFR. If I did something like ... volume afr-example type cluster/afr subvolumes brick1 brick2 brick3 brick4 brick5 brick6 brick7 brick8 end-volume I would wind up with 8 copies of every file. Clearly, this is too many. What I would rather have is maybe 3 copies of each file distributed randomly across
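
The usual answer to this question in the hand-written volfile era was to group bricks into several cluster/afr (replicate) sets and stack a distribute translator on top, so each file lands on exactly one replica set instead of on every brick. A hypothetical sketch with six bricks and three copies per file (brick names are placeholders; translator names are the ones used in later volfiles):

    volume rep1
      type cluster/afr
      subvolumes brick1 brick2 brick3
    end-volume

    volume rep2
      type cluster/afr
      subvolumes brick4 brick5 brick6
    end-volume

    volume dist
      type cluster/distribute
      subvolumes rep1 rep2
    end-volume
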
2018 Jan 23
6
parallel-readdir is not recognized in GlusterFS 3.12.4
Hello, I saw that parallel-readdir was an experimental feature in GlusterFS version 3.10.0, became stable in version 3.11.0, and is now recommended for small file workloads in the Red Hat Gluster Storage Server documentation[2]. I've successfully enabled this on one of my volumes but I notice the following in the client mount log: [2018-01-23 10:24:24.048055] W [MSGID: 101174]
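
For reference, enabling the option under discussion is a single volume-set call; on the 3.x releases it is generally used together with readdir-ahead. A sketch, with "myvol" as a placeholder volume name:

    gluster volume set myvol performance.readdir-ahead on
    gluster volume set myvol performance.parallel-readdir on

    # confirm the value the clients will pick up
    gluster volume get myvol performance.parallel-readdir
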
2018 Jan 27
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Adding devs who work on it On 23 Jan 2018 10:40 pm, "Alan Orth" <alan.orth at gmail.com> wrote: > Hello, > > I saw that parallel-readdir was an experimental feature in GlusterFS > version 3.10.0, became stable in version 3.11.0, and is now recommended for > small file workloads in the Red Hat Gluster Storage Server > documentation[2]. I've successfully
2017 Aug 18
1
Is transport=rdma tested with "stripe"?
On Wed, Aug 16, 2017 at 4:44 PM, Hatazaki, Takao <takao.hatazaki at hpe.com> wrote: >> Note that "stripe" is not tested much and practically unmaintained. > > Ah, this was what I suspected. Understood. I'll be happy with "shard". > > Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
Hi Krutika, May I kindly ping you and ask whether you have any idea yet or have figured out what the issue may be? I am eagerly awaiting your reply :) Apologies for the ping :) -Gencer. From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com Sent: Thursday, July 6, 2017 11:06 AM To: 'Krutika
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika, I also did one more test. I re-created another volume (a single volume; the old one was destroyed and deleted), then ran 2 dd tests, one for 1GB and the other for 2GB. Both use a 32MB shard size with eager-lock off. Samples: sr:~# gluster volume profile testvol start Starting volume profile on testvol has been successful sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1 1+0 records in 1+0
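
The measurement loop used in this thread can be reproduced roughly as follows; the volume name, mount path, and file size are placeholders, and profiling should be stopped (or its counters cleared) between runs so results do not mix:

    gluster volume profile testvol start
    dd if=/dev/zero of=/mnt/testvol/ddtest bs=1G count=1
    gluster volume profile testvol info
    gluster volume profile testvol stop
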
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
Hi Krutika, Sure, here is volume info: root at sr-09-loc-50-14-18:/# gluster volume info testvol Volume Name: testvol Type: Distributed-Replicate Volume ID: 30426017-59d5-4091-b6bc-279a905b704a Status: Started Snapshot Count: 0 Number of Bricks: 10 x 2 = 20 Transport-type: tcp Bricks: Brick1: sr-09-loc-50-14-18:/bricks/brick1 Brick2: sr-09-loc-50-14-18:/bricks/brick2 Brick3:
2017 Nov 09
2
GlusterFS healing questions
Hi, We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes on 10 bricks (default config, tested with 100GB, 200GB, and 400GB brick sizes, 10Gbit NICs). 1. Tests show that healing takes about double the time for 200GB vs 100GB, and a bit under double for 400GB vs 200GB brick sizes. Is this expected behaviour? In light of this, 6.4TB brick sizes would take ~377 hours to heal. 100gb
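
For context, an 8+2 dispersed (erasure-coded) volume like the one tested here is created with disperse and redundancy counts, and heal progress can be checked per brick; the hostnames and brick paths below are placeholders:

    gluster volume create ecvol disperse 10 redundancy 2 \
        server{1..10}:/bricks/brick1/ecvol
    gluster volume start ecvol

    # watch which entries still need healing
    gluster volume heal ecvol info
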
2018 Jan 29
2
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > To: "Alan Orth" <alan.orth at gmail.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Saturday, January 27, 2018 7:31:30 AM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > > Adding
2017 Jul 04
0
Very slow performance on Sharded GlusterFS
Hi Krutika, Thank you so much for your reply. Let me answer all: 1. I have no idea why it did not get distributed over all bricks. 2. Hm.. this is really weird. As for the others: no, I use only one volume. When I tested sharded and striped volumes, I manually stopped the volume, deleted it, purged the data (inside the bricks/disks) and re-created it by using this command: sudo gluster
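
The stop/delete/recreate cycle described here looks roughly like the sketch below; the volume name, hosts, and brick paths are placeholders, and the brick directories have to be cleaned (including the hidden .glusterfs directory and the volume-id xattr) before the same paths can be reused for a new volume:

    gluster volume stop testvol
    gluster volume delete testvol

    # on every server: purge old brick contents so the path can be reused
    rm -rf /bricks/brick1/.glusterfs /bricks/brick1/*
    setfattr -x trusted.glusterfs.volume-id /bricks/brick1
    setfattr -x trusted.gfid /bricks/brick1

    gluster volume create testvol replica 2 \
        server1:/bricks/brick1 server2:/bricks/brick1
    gluster volume start testvol
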
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
Hi, Sorry for the late response. No, the eager-lock experiment was more to see whether the implementation had any new bugs. It doesn't look like it does. I think having it on would be the right thing to do, as it will reduce the number of fops having to go over the network. Coming to the performance drop, I compared the volume profile output for stripe and 32MB shard again. The only thing that is
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Thanks. I think reusing the same volume was the cause of the lack of IO distribution. The latest profile output looks much more realistic and in line with what I would expect. Let me analyse the numbers a bit and get back. -Krutika On Tue, Jul 4, 2017 at 12:55 PM, <gencer at gencgiyen.com> wrote: > Hi Krutika, > > > > Thank you so much for your reply. Let me answer all: > >
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all, We are having a rather interesting problem with one of our VM storage systems. The GlusterFS client is throwing errors relating to GFID mismatches. We traced this down to multiple shards being present on the gluster nodes, with different gfids. Hypervisor gluster mount log: [2018-03-25 18:54:19.261733] E [MSGID: 133010] [shard.c:1724:shard_common_lookup_shards_cbk]
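
When chasing a mismatch like this, a common first step is to locate the same shard file on every brick that should hold it and compare the trusted.gfid extended attribute directly; the brick path and shard file name below are placeholders:

    # run on each gluster node holding a copy of the shard
    getfattr -d -m . -e hex /bricks/brick1/.shard/SHARD-FILE-NAME

    # the trusted.gfid value must be identical across all replicas
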
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
What if you disabled eager lock and ran your test again on the sharded configuration, along with the profile output? # gluster volume set <VOL> cluster.eager-lock off -Krutika On Tue, Jul 4, 2017 at 9:03 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > Thanks. I think reusing the same volume was the cause of lack of IO > distribution. > The latest profile output
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
Krutika, I'm sorry I forgot to add logs. I attached them now. Thanks, Gencer. From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com Sent: Thursday, July 6, 2017 10:27 AM To: 'Krutika Dhananjay' <kdhananj at redhat.com> Cc: 'gluster-user' <gluster-users at gluster.org> Subject: Re: