similar to: Variable sized bricks & replication

Displaying 20 results from an estimated 30000 matches similar to: "Variable sized bricks & replication"

2018 Apr 26
0
FreeBSD problem adding/removing replicated bricks
Hi Folks, I'm trying to debug an issue that I've found while attempting to qualify GlusterFS for potential distributed storage projects on the FreeBSD-11.1 server platform - using the existing package of GlusterFS v3.11.1_4. The main issue I've encountered is that I cannot add new bricks while setting/increasing the replica count. If I create a replicated volume "poc" on
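The operation that fails in this report is the standard add-brick call that raises the replica count. For reference, a minimal sketch of what that looks like on a working install, assuming "poc" is currently replica 2 and using a hypothetical host "freebsd2" and brick path:

    # add a third replica to an existing replica-2 volume (host and path are placeholders)
    gluster volume add-brick poc replica 3 freebsd2:/gluster/bricks/poc
    gluster volume info poc   # confirm the new replica count and brick list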
2018 Apr 26
0
Problem adding replicated bricks on FreeBSD
On Thu, Apr 26, 2018 at 9:06 PM Mark Staudinger <mark.staudinger at nyi.net> wrote: > Hi Folks, > I'm trying to debug an issue that I've found while attempting to qualify > GlusterFS for potential distributed storage projects on the FreeBSD-11.1 > server platform - using the existing package of GlusterFS v3.11.1_4 > The main issue I've encountered is that I
2017 Jul 02
0
Some bricks are offline after restart, how to bring them online gracefully?
Thank you, I created a bug with all logs: https://bugzilla.redhat.com/show_bug.cgi?id=1467050 During testing I found a second bug: https://bugzilla.redhat.com/show_bug.cgi?id=1467057 There is something wrong with Ganesha when Gluster bricks are named "w0" or "sw0". On Fri, Jun 30, 2017 at 11:36 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi, > > Jan, by
2018 Apr 25
3
Problem adding replicated bricks on FreeBSD
Hi Folks, I'm trying to debug an issue that I've found while attempting to qualify GlusterFS for potential distributed storage projects on the FreeBSD-11.1 server platform - using the existing package of GlusterFS v3.11.1_4. The main issue I've encountered is that I cannot add new bricks while setting/increasing the replica count. If I create a replicated volume "poc"
2018 Jan 08
0
different names for bricks
I just noticed that gluster volume info foo and gluster volume heal foo statistics use different indices for brick numbers. Info uses 1-based but heal statistics uses 0-based.
gluster volume info clifford
Volume Name: clifford
Type: Distributed-Replicate
Volume ID: 0e33ff98-53e8-40cf-bdb0-3e18406a945a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
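To see the mismatch described above, the two commands can be compared side by side; a minimal sketch against the 2 x 2 volume from the snippet:

    gluster volume info clifford              # bricks appear as Brick1 .. Brick4
    gluster volume heal clifford statistics   # the same bricks are reported starting from brick 0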
2017 Jun 30
0
Some bricks are offline after restart, how to bring them online gracefully?
Hi Jan, comments inline. On Fri, Jun 30, 2017 at 1:31 AM, Jan <jan.h.zak at gmail.com> wrote: > Hi all, > > Gluster and Ganesha are amazing. Thank you for this great work! > > I'm struggling with one issue and I think that you might be able to help me. > > I spent some time playing with Gluster and Ganesha and after I gained some > experience I decided that I
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
Hi Artem, Was the volume size correct before the bricks were expanded? This sounds like [1], but that should have been fixed in 4.0.0. Can you let us know the values of shared-brick-count in the files in /var/lib/glusterd/vols/dev_apkmirror_data/? [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880 On 17 April 2018 at 05:17, Artem Russakovskii <archon810 at gmail.com> wrote: > Hi
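The shared-brick-count values asked about here live in the generated brick volfiles; a quick way to inspect them (directory taken from the message above, the grep itself is just a suggestion):

    grep -r shared-brick-count /var/lib/glusterd/vols/dev_apkmirror_data/*.vol
    # in the referenced bug, a shared-brick-count higher than expected on bricks that
    # each have their own filesystem led to the volume size being under-reported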
2018 Apr 12
0
issues with replicating data to a new brick
Hello everybody, I have some kind of a situation here: I want to move some volumes to new hosts. The idea is to add the new bricks to the volume, sync, and then drop the old bricks. The starting point is:
Volume Name: Server_Monthly_02
Type: Replicate
Volume ID: 0ada8e12-15f7-42e9-9da3-2734b04e04e9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1:
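The add-then-drop approach described above usually maps onto add-brick, heal, remove-brick. A minimal sketch, assuming the replica-2 volume from the snippet and hypothetical hosts newhost1/newhost2 and oldhost1/oldhost2:

    # attach the new bricks as additional replicas (2 -> 4)
    gluster volume add-brick Server_Monthly_02 replica 4 newhost1:/bricks/monthly02 newhost2:/bricks/monthly02
    # let self-heal copy the data, then confirm nothing is left pending
    gluster volume heal Server_Monthly_02 info
    # finally detach the old bricks, dropping back to replica 2 (old paths are placeholders)
    gluster volume remove-brick Server_Monthly_02 replica 2 oldhost1:/bricks/monthly02 oldhost2:/bricks/monthly02 force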
2018 Apr 13
0
Is the size of bricks limiting the size of files I can store?
Sorry about the late reply, I missed seeing your mail. To begin with, what is your use-case? Sharding is currently supported only for virtual machine image storage use-case. It *could* work in other single-writer use-cases but it's only tested thoroughly for the vm use-case. If yours is not a vm store use-case, you might want to do some tests first to see if it works fine. If you find any
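For readers who do want to try sharding as suggested above (with the caveat that it is only well tested for the VM image use-case), it is enabled per volume; a minimal sketch with a hypothetical volume name:

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB
    # with sharding on, large files are split into shard-block-size pieces, so a single
    # file is no longer limited to the free space of one brick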
2017 Jun 30
1
Some bricks are offline after restart, how to bring them online gracefully?
Hi, Jan, by multiple times I meant whether you were able to do the whole setup multiple times and face the same issue, so that we have a consistent reproducer to work on. As grepping shows that the process doesn't exist, the bug I mentioned doesn't hold good. It seems like another issue, unrelated to the bug I mentioned (I have mentioned it now). When you say too often, this means there is a
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the bug seems to persist in 4.0.1. Sincerely, Artem -- Founder, Android Police <http://www.androidpolice.com>, APK Mirror <http://www.apkmirror.com/>, Illogical Robot LLC beerpla.net | +ArtemRussakovskii <https://plus.google.com/+ArtemRussakovskii> | @ArtemR <http://twitter.com/ArtemR> On Mon, Apr
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks each for 6 of them and 2 bricks for the remaining 2. Full healing will just take ages... for just a single brick to resync!
> gluster v status home
volume status home
Status of volume: home
Gluster process                             TCP Port  RDMA Port  Online  Pid
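For the "resync just this one replaced brick" question above, the usual tools are an index heal or, since the replacement brick starts out empty, a full heal; a minimal sketch against the home volume from the snippet:

    gluster volume heal home        # index-based heal, picks up pending entries
    gluster volume heal home full   # full crawl, for a brick that starts out empty
    gluster volume heal home info   # watch what is still pending per brick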
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
That might be the reason. Perhaps the volfiles were not regenerated after upgrading to the version with the fix. There is a workaround detailed in [2] for the time being (you will need to copy the shell script into the correct directory for your Gluster release). [2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19 On 17 April 2018 at 09:58, Artem Russakovskii <archon810 at
2019 Jun 12
1
Proper command for replace-brick on distribute-replicate?
On 12/06/19 1:38 PM, Alan Orth wrote: > Dear Ravi, > > Thanks for the confirmation. I replaced a brick in a volume last night > and by the morning I see that Gluster has replicated data there, > though I don't have any indication of its progress. The `gluster v > heal volume info` and `gluster v heal volume info split-brain` are all > looking good so I guess that's
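On the "no indication of its progress" point, heal progress can be watched with the statistics sub-commands; a minimal sketch (the volume name "volume" is taken literally from the quoted commands):

    gluster volume heal volume statistics heal-count   # entries still pending per brick
    gluster volume heal volume statistics              # per-crawl counts of healed and failed entries
    gluster volume heal volume info summary            # compact per-brick summary, on newer releases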
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
I just remembered that I didn't run https://docs.gluster.org/en/v3/Upgrade-Guide/op_version/ for this test volume/box like I did for the main production gluster, and one of these ops (either the heal or the op-version bump) resolved the issue. I'm now seeing:
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
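The op-version step referenced in that guide comes down to a couple of commands; a minimal sketch, with the target number only illustrative:

    gluster volume get all cluster.max-op-version   # highest op-version this cluster can run at
    gluster volume get all cluster.op-version       # what it is currently running at
    gluster volume set all cluster.op-version 40000 # example value for a 4.0 cluster; use your own max-op-version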
2017 Jun 30
2
Some bricks are offline after restart, how to bring them online gracefully?
Hi Hari, thank you for your support! Did I try to check offline bricks multiple times? Yes, I gave it enough time (at least 20 minutes) to recover but it stayed offline. Version? All nodes are 100% equal; I tried fresh installation several times during my testing. Every time it is a CentOS Minimal install with all updates and without any additional software: uname -r 3.10.0-514.21.2.el7.x86_64
2017 Jun 30
0
Some bricks are offline after restart, how to bring them online gracefully?
Hi Jan, It is not recommended that you automate the script for 'volume start force'. Bricks do not go offline just like that; there will be some genuine issue which triggers this. Could you please attach the entire glusterd logs and the brick logs around that time so that someone would be able to look? Just to make sure, please check if you have any network outage (using iperf or some
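The advice above boils down to diagnosing before forcing. A minimal sketch of that sequence (log paths are the usual defaults and may differ on your install):

    gluster volume status                        # which bricks show Online = N
    less /var/log/glusterfs/glusterd.log         # why glusterd did not (re)start them
    less /var/log/glusterfs/bricks/<brick>.log   # the brick process's own view
    gluster volume start <volname> force         # last resort, once the root cause is understood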
2018 Apr 16
2
Getting glusterfs to expand volume size to brick size
Hi Nithya, I'm on Gluster 4.0.1. I don't think the bricks were smaller before; if they were (maybe 20GB, because Linode's minimum is 20GB), then I extended them to 25GB, resized with resize2fs as instructed, and have rebooted many times over since. Yet gluster refuses to see the full disk size. Here's the status detail output:
gluster volume status dev_apkmirror_data detail
Status
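When the brick filesystem and gluster disagree about size like this, it helps to compare the two views directly; a quick, non-authoritative check (brick and mount paths are placeholders):

    df -h /mnt/<brick-path>                            # what the OS reports for the brick filesystem
    gluster volume status dev_apkmirror_data detail    # per-brick disk space as gluster sees it
    df -h /path/to/fuse-mount                          # what clients are shown for the whole volume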
2018 Apr 18
1
Replicated volume read request are served by remote brick
I have created a 2-brick replicated volume.
gluster> volume status
Status of volume: storage
Gluster process                                     TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick master:/glusterfs/bricks/storage/mountpoint   49153     0          Y       5301
Brick worker1:/glusterfs/bricks/storage/mountpoint
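Regarding reads being served by the remote brick: in a replica the client may read from either copy, and the AFR option that biases reads toward a brick on the client's own node is cluster.choose-local (cluster.read-hash-mode also influences the choice). A minimal sketch, hedged because whether it helps depends on where the client mounts from:

    gluster volume get storage cluster.choose-local      # inspect the current setting
    gluster volume set storage cluster.choose-local on   # prefer a local brick when one exists
    gluster volume get storage cluster.read-hash-mode    # alternative read-child selection policy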
2013 Mar 20
1
About adding bricks ...
Hi @all, I've created a Distributed-Replicated volume consisting of 4 bricks on 2 servers.
# gluster volume create glusterfs replica 2 transport tcp \
    gluster0{0..1}:/srv/gluster/exp0 gluster0{0..1}:/srv/gluster/exp1
Now I have the following very nice replication schema:
+-------------+ +-------------+
|  gluster00  | |  gluster01  |
+-------------+ +-------------+
| exp0 | exp1 |
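When growing a layout like this, the brick order given to add-brick decides which bricks become replica partners (consecutive bricks are paired). A minimal sketch of adding one more replicated pair across the same two servers, with exp2 as a hypothetical new export:

    # gluster00:exp2 and gluster01:exp2 form the new replica pair
    gluster volume add-brick glusterfs gluster00:/srv/gluster/exp2 gluster01:/srv/gluster/exp2
    gluster volume rebalance glusterfs start   # spread existing data onto the new pair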