Displaying 20 results from an estimated 11000 matches similar to: "Using volumes during fix-layout after add/remove-brick"
2011 Aug 17
1
cluster.min-free-disk separate for each, brick
On 15/08/11 20:00, gluster-users-request at gluster.org wrote:
> Message: 1
> Date: Sun, 14 Aug 2011 23:24:46 +0300
> From: "Deyan Chepishev - SuperHosting.BG"<dchepishev at superhosting.bg>
> Subject: [Gluster-users] cluster.min-free-disk separate for each
> brick
> To: gluster-users at gluster.org
> Message-ID: <4E482F0E.3030604 at superhosting.bg>
2011 Apr 22
1
rebalancing after remove-brick
Hello,
I'm having trouble migrating data from a removed replica set to
another active one in a distributed replicated volume.
My test scenario is the following:
- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance - ran on one brick in each set)
The doc seems to imply that it is possible to remove
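A minimal command sketch of that scenario (hostnames and brick paths are hypothetical, and it assumes a release where remove-brick supports the start/status/commit data-migration flow):

# create replica set A, then add set B and rebalance
gluster volume create testvol replica 2 serverA1:/bricks/a serverA2:/bricks/a
gluster volume start testvol
gluster volume add-brick testvol serverB1:/bricks/b serverB2:/bricks/b
gluster volume rebalance testvol start
# drain set A with data migration, committing only once status reports completion
gluster volume remove-brick testvol serverA1:/bricks/a serverA2:/bricks/a start
gluster volume remove-brick testvol serverA1:/bricks/a serverA2:/bricks/a status
gluster volume remove-brick testvol serverA1:/bricks/a serverA2:/bricks/a commit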
2013 Jan 26
4
Write failure on distributed volume with free space available
Hello,
Thanks to "partner" on IRC who told me about this (quite big) problem.
Apparently in a distributed setup once a brick fills up you start
getting write failures. Is there a way to work around this?
I would have thought gluster would check for free space before writing
to a brick.
It's very easy to test: I created a distributed volume from 2 uneven
bricks and started to
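A commonly suggested mitigation (not a complete fix) is to have DHT steer new files away from nearly-full bricks; a sketch, assuming the volume is named distvol:

# refuse to schedule new files on bricks with less than 10% free space
gluster volume set distvol cluster.min-free-disk 10%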
2011 Oct 18
2
gluster rebalance taking three months
Hi guys,
we have had a rebalance running on eight bricks since July, and this is
what the status looks like right now:
===Tue Oct 18 13:45:01 CST 2011 ====
rebalance step 1: layout fix in progress: fixed layout 223623
There are roughly 8T of photos in the storage, so how long should this
rebalance take?
What does that number (in this case, 223623) represent?
Our gluster information:
Repository
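For reference, progress can be polled with the status subcommand; a sketch with a hypothetical volume name (the 'fixed layout N' counter is generally understood to be the number of directories whose layout has been fixed so far):

gluster volume rebalance photovol status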
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi,
I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks on each of 6 of them and 2 bricks on the remaining 2. Full healing will just take ages... for just a single brick to resync!
> gluster v status home
volume status home
Status of volume: home
Gluster process TCP Port RDMA Port Online Pid
2011 Dec 08
1
Can't create striped replicated volume
Hi,
I'm trying to create a striped replicated volume but I am getting this error:
gluster volume create cloud stripe 4 replica 2 transport tcp
nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool
wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
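For what it's worth, striped-replicated volumes require GlusterFS 3.3 or later, and the brick count must equal stripe x replica, so 'stripe 4 replica 2' would need 8 bricks. A hedged sketch that fits the four bricks above:

# 4 bricks = stripe 2 x replica 2
gluster volume create cloud stripe 2 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool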
2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar,
Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed by setting shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release.
Thanks,
Eva (865) 574-6894
From: Amar Tumballi <atumball at redhat.com>
Date: Wednesday, January 31, 2018 at 12:15 PM
To: Eva Freer
2018 Feb 01
0
How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need to reset-brick if the brick path does not change. Format
and mount the replaced brick, then run gluster v start volname force.
To start self-heal, just run gluster v heal volname full.
On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote:
> Hi,
>
>
> My volume home is configured in replicate mode (version 3.12.4) with the bricks
>
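A sketch of those steps for the thread's layout (the device name is hypothetical; volume name and mount point follow the thread):

# format and remount the replaced brick, then force-start the volume and heal
mkfs.xfs -f /dev/sdX1        # hypothetical device
mount /dev/sdX1 /data/gluster
gluster v start home force
gluster v heal home full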
2018 Feb 01
2
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi,
My volume home is configured in replicate mode (version 3.12.4) with the bricks
server1:/data/gluster/brick1
server2:/data/gluster/brick1
server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that brick on server2, unmounted it, reformatted it, remounted it, and did a
> gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi,
I think we have a workaround until we have a fix in the code. The
following worked on my system.
Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You
might need to create the filter directory in this path.)
Make sure the file has execute permissions. On my system:
[root at rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/
[root at rhgsserver1 3.12.5]# l
total 4.0K
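Condensed into commands (the attachment's filename is hypothetical here, and the filter directory must match the installed glusterfs version):

mkdir -p /usr/lib/glusterfs/3.12.4/filter
cp shared-brick-count-filter.sh /usr/lib/glusterfs/3.12.4/filter/   # hypothetical name for the attached file
chmod +x /usr/lib/glusterfs/3.12.4/filter/shared-brick-count-filter.sh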
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer,
Our analysis is that this issue is caused by
https://review.gluster.org/17618. Specifically, in
'gd_set_shared_brick_count()' from
https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c.
But even if we fix it today, I don't think we have a release planned
immediately for shipping this. Are you planning to fix the code and
re-compile?
Regards,
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
Nithya,
I will be out of the office for ~10 days starting tomorrow. Is there any way we could possibly resolve it today?
Thanks,
Eva (865) 574-6894
From: Nithya Balachandran <nbalacha at redhat.com>
Date: Wednesday, January 31, 2018 at 11:26 AM
To: Eva Freer <freereb at ornl.gov>
Cc: "Greene, Tami McFarlin" <greenet at ornl.gov>, "gluster-users at
2017 Nov 21
1
Brick and Subvolume Info
Hello
I have a Distributed-Replicate volume and I would like to know if it is
possible to see which sub-volume a brick belongs to, e.g.:
A Distributed-Replicate volume containing:
Number of Bricks: 2 x 2 = 4
Brick1: node1.localdomain:/mnt/data1/brick1
Brick2: node2.localdomain:/mnt/data1/brick1
Brick3: node1.localdomain:/mnt/data2/brick2
Brick4: node2.localdomain:/mnt/data2/brick2
Is it possible
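In a distributed-replicate volume, consecutive bricks in 'gluster volume info' output form the replica subvolumes, so in the 2 x 2 layout above Brick1/Brick2 make up replicate-0 and Brick3/Brick4 make up replicate-1. The mapping can also be read from the generated client volfile; a sketch, assuming the volume is named datavol (hypothetical):

# list replicate subvolume stanzas and their member (client) subvolumes
grep -E 'volume .*replicate|subvolumes' /var/lib/glusterd/vols/datavol/*fuse.vol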
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All-
There are a lot of the following type of errors in my client and NFS
logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I
[dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol:
atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
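These INFO-level messages typically persist until directory layouts are rewritten to cover the new bricks; the usual suggestion after an expansion is a fix-layout pass (volume name taken from the 0-atmos-dht log prefix above):

gluster volume rebalance atmos fix-layout start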
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
Ok, it looks like the same problem.
@Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
the volfiles to fix this?
Regards,
Nithya
On 17 April 2018 at 09:57, Artem Russakovskii <archon810 at gmail.com> wrote:
> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:
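One low-impact way to make glusterd regenerate the volfiles, seen elsewhere in these threads, is to set (or re-set) any volume option; a sketch with a hypothetical option value:

# setting an option rewrites the volfiles under /var/lib/glusterd/vols/<volname>/
gluster volume set dev_apkmirror_data cluster.min-free-inodes 5%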
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
The values for shared-brick-count are still the same. I did not restart the volume after setting cluster.min-free-inodes to 6%. Do I need to restart it?
Thanks,
Eva (865) 574-6894
From: Nithya Balachandran <nbalacha at redhat.com>
Date: Wednesday, January 31, 2018 at 11:14 AM
To: Eva Freer <freereb at ornl.gov>
Cc: "Greene, Tami McFarlin" <greenet at
2011 Aug 24
1
Adding/Removing bricks/changing replica value of a replicated volume (Gluster 3.2.1, OpenSuse 11.3/11.4)
Hi!
Until now, I have used Gluster in a 2-server setup (volumes created with
replica 2).
While upgrading the hardware, it would be helpful to extend the volume to
replica 3 to integrate the new machine by adding the respective brick,
and later to reduce it back to 2 by removing the respective brick when
the old machine is decommissioned and no longer used.
But it seems that this requires deleting and
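On GlusterFS 3.2.x this did require recreating the volume, but later releases (3.3 and up) can change the replica count in place; a hedged sketch with hypothetical names:

# grow replica 2 -> 3 by adding one brick per replica set
gluster volume add-brick myvol replica 3 server3:/export/brick
# later shrink 3 -> 2 by removing that brick again
gluster volume remove-brick myvol replica 2 server3:/export/brick force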
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva,
I'm sorry but I need to get in touch with another developer to check about
the changes here and he will be available only tomorrow. Is there someone
else I could work with while you are away?
Regards,
Nithya
On 31 January 2018 at 22:00, Freer, Eva B. <freereb at ornl.gov> wrote:
> Nithya,
>
>
>
> I will be out of the office for ~10 days starting tomorrow. Is
2012 Oct 22
1
How to add new bricks to a volume?
Hi, dear glfs experts:
I've been using glusterfs (version 3.2.6) for months, and so far it works very
well. Now I'm facing the problem of adding two new bricks to an existing
replicated (rep=2) volume, which consists of only two bricks and is
mounted by multiple clients. Can I just use the following commands to add the
new bricks without stopping the services that are using the volume, as
mentioned?
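For a replica 2 volume the new bricks can indeed be added online, in pairs, turning it into a distributed-replicate volume; a minimal sketch with hypothetical names, followed by a rebalance so existing data spreads onto the new pair:

gluster volume add-brick repvol server3:/export/brick server4:/export/brick
gluster volume rebalance repvol start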
2017 Oct 26
0
not healing one file
Hi Richard,
Thanks for the informations. As you said there is gfid mismatch for the
file.
On brick-1 & brick-2 the gfids are same & on brick-3 the gfid is different.
This is not considered as split-brain because we have two good copies here.
Gluster 3.10 does not have a method to resolve this situation other than the
manual intervention [1]. Basically what you need to do is remove the
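The manual fix referenced as [1] boils down to deleting the bad copy and its gfid hard link directly on the mismatching brick, then letting self-heal recreate the file from a good copy; a sketch with hypothetical paths:

# on the node hosting brick-3 (the copy with the wrong gfid)
rm /bricks/brick3/path/to/file
rm /bricks/brick3/.glusterfs/ab/cd/abcd1234-...   # hard link named after the bad gfid
gluster volume heal VOLNAME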