Displaying 20 results from an estimated 138 matches for "rebalancing".
2010 Mar 19
2
Balance taking a lot longer than before
Hi devs,
I've been using btrfs on my file server since 2.6.32 and after upgrading to .33 I noticed that rebalancing took way longer than before.
I have a 5 disk array used for a btrfs md setup, different hard drive sizes, brands, and interfaces.
The last time I rebalanced I was using about 800GB of space on the 4.6TB array, and I started before I went to sleep and it was done when I woke up.
I'm in...
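A balance can be monitored and paused while it runs. A minimal sketch with current btrfs-progs, assuming the array is mounted at /mnt/array (a hypothetical path, not taken from the post):

  # /mnt/array is a placeholder mount point
  btrfs balance start /mnt/array &
  btrfs balance status /mnt/array   # shows chunks considered vs. relocated
  btrfs balance pause /mnt/array    # resume later with 'btrfs balance resume'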
2017 Jul 07
2
Rebalance task fails
Hello everyone,
I have a problem rebalancing a Gluster volume.
Gluster version is 3.7.3.
My 1x3 replicated volume became full, so I've added three more bricks
to make it 2x3 and wanted to rebalance.
But every time I start rebalancing, it fails immediately.
Rebooting Gluster nodes doesn't help.
# gluster volume rebalance gsae_artifacto...
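For reference, the usual start/inspect sequence is below; VOLNAME is a placeholder, not the truncated volume name from the post:

  # VOLNAME is a placeholder volume name
  gluster volume rebalance VOLNAME start
  gluster volume rebalance VOLNAME status
  # failure details usually land in the per-volume rebalance log on each node
  less /var/log/glusterfs/VOLNAME-rebalance.log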
2011 Apr 22
1
rebalancing after remove-brick
Hello,
I'm having trouble migrating data from a removed replica set to
another active one in a distributed-replicated volume.
My test scenario is the following:
- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance - ran on one brick in each set)
The doc seems to imply that it is possible to remove
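For context, data is normally migrated off departing bricks by remove-brick itself rather than by a later rebalance, which ignores removed bricks. A sketch with hypothetical volume and brick names:

  # migrate data off the bricks being removed
  gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick start
  # watch progress until status shows completed
  gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick status
  # only then finalize the removal
  gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick commit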
2017 Dec 13
2
Online Rebalancing
Hi
I have a five node 300 TB distributed gluster volume with zero
replication. I am planning to add two more servers which will add around
120 TB. After fixing the layout, can I rebalance the volume while clients
are online and accessing the data?
Thanks
Kashif
2017 Dec 13
0
Online Rebalancing
On 13 December 2017 at 17:34, mohammad kashif <kashif.alig at gmail.com> wrote:
> Hi
>
> I have a five node 300 TB distributed gluster volume with zero
> replication. I am planning to add two more servers which will add around
> 120 TB. After fixing the layout, can I rebalance the volume while clients
> are online and accessing the data?
>
>
Hi,
Yes, you can. Are
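The usual online sequence after adding bricks looks like this; VOLNAME is a placeholder:

  # spread the directory layout across the new bricks first
  gluster volume rebalance VOLNAME fix-layout start
  # then migrate existing data; clients can stay mounted throughout
  gluster volume rebalance VOLNAME start
  gluster volume rebalance VOLNAME status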
2014 Aug 05
0
Stack dumps in use_block_rsv while rebalancing ("block rsv returned -28")
I already posted this in the thread "ENOSPC with mkdir and rename",
but now I have a device with 100GB unallocated in the "btrfs fi sh"
output, and when I run a rebalance of the form:
> btrfs filesystem balance start -dusage=50 -musage=10 "$mount"
I get more than 75 such stack traces contaminating the kernel log. I've
put some of them up in a gist here:
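For anyone comparing numbers: the usage filters only rewrite chunks below the given fill level, so it helps to look at allocated versus used space around the balance. A sketch, assuming the filesystem is mounted at "$mount" as in the quoted command:

  # allocated vs. used space per chunk type
  btrfs filesystem df "$mount"
  # per-device totals, including unallocated space
  btrfs filesystem show
  # rewrite only data chunks <=50% full and metadata chunks <=10% full
  btrfs filesystem balance start -dusage=50 -musage=10 "$mount"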
2017 Jul 10
2
Rebalance task fails
...t very helpful.
Best regards,
Szymon Miotk
On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote:
>
> On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote:
>>
>> Hello everyone,
>>
>>
>> I have a problem rebalancing a Gluster volume.
>> Gluster version is 3.7.3.
>> My 1x3 replicated volume became full, so I've added three more bricks
>> to make it 2x3 and wanted to rebalance.
>> But every time I start rebalancing, it fails immediately.
>> Rebooting Gluster nodes doesn't help...
2017 Jul 09
0
Rebalance task fails
On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote:
> Hello everyone,
>
>
> I have a problem rebalancing a Gluster volume.
> Gluster version is 3.7.3.
> My 1x3 replicated volume became full, so I've added three more bricks
> to make it 2x3 and wanted to rebalance.
> But every time I start rebalancing, it fails immediately.
> Rebooting Gluster nodes doesn't help.
>
> # gluste...
2017 Jul 13
2
Rebalance task fails
...PM, Nithya Balachandran <nbalacha at redhat.com>
>> wrote:
>> >
>> > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote:
>> >>
>> >> Hello everyone,
>> >>
>> >>
>> >> I have a problem rebalancing a Gluster volume.
>> >> Gluster version is 3.7.3.
>> >> My 1x3 replicated volume became full, so I've added three more bricks
>> >> to make it 2x3 and wanted to rebalance.
>> >> But every time I start rebalancing, it fails immediately.
>> >...
2017 Jul 05
2
[New Release] GlusterD2 v4.0dev-7
...s, we have another preview release for GlusterD-2.0.
The highlights for this release are,
- GD2 now uses an auto-scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
- An end-to-end functional testing framework is now available
- And RPMs are available for Fedora >= 25 and EL7.
This release still doesn't provide a CLI. The HTTP REST API is the
only access method right now.
Prebuilt binaries are available from [1]. RPMs have been b...
2017 Oct 04
0
data corruption - any update?
Just so I know.
Is it correct to assume that this corruption issue is ONLY involved if
you are doing rebalancing with sharding enabled?
So if I am not doing rebalancing, I should be fine?
-bill
On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran
> <nbalacha at redhat.com <mailto:nbalacha at redhat.com>> wrote:
>
>
>
> ...
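For what it's worth, whether sharding is enabled can be checked per volume; VOLNAME is a placeholder:

  # prints "features.shard: on" among the volume options when enabled
  gluster volume info VOLNAME | grep -i shard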
2017 Jul 13
0
Rebalance task fails
...> On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com>
> wrote:
> >
> > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote:
> >>
> >> Hello everyone,
> >>
> >>
> >> I have a problem rebalancing a Gluster volume.
> >> Gluster version is 3.7.3.
> >> My 1x3 replicated volume became full, so I've added three more bricks
> >> to make it 2x3 and wanted to rebalance.
> >> But every time I start rebalancing, it fails immediately.
> >> Rebooting Glust...
2012 Nov 30
2
"layout is NULL", "Failed to get node-uuid for [...] and other errors during rebalancing in 3.3.1
I started rebalancing my volume after updating from 3.2.7 to 3.3.1.
After a few hours, I noticed a large number of failures in the rebalance
status:
> Node        Rebalanced-files   size   scanned   failures   status
> ---------   ----------------   ----   -------   --------   ------
> -...
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Any update about the multiple bugs regarding data corruption with
>> sharding enabled?
>>
>> Is 3.12.1 ready to be used in production?
>>
>
>
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote:
> Just so I know.
>
> Is it correct to assume that this corruption issue is ONLY involved if you
> are doing rebalancing with sharding enabled?
>
> So if I am not doing rebalancing I should be fine?
>
That is correct.
> -bill
>
>
>
> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
>
> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
> wr...
2017 Jul 05
0
[New Release] GlusterD2 v4.0dev-7
On 5 Jul 2017 11:31 AM, "Kaushal M" <kshlmster at gmail.com> wrote:
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
What do you mean by this?
Any differences in volume expansion from the current architecture?
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
Dear glusterfs experts,
Recently we have encountered a self-heal daemon crash issue after
rebalancing a volume.
Crash stack below:
+------------------------------------------------------------------------------+
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-03-14 16:33:50
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread
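A commonly suggested way to respawn a dead self-heal daemon without touching client mounts is a forced volume start; VOLNAME is a placeholder:

  # "start force" restarts any missing volume daemons, including self-heal
  gluster volume start VOLNAME force
  # confirm the self-heal daemon shows up again
  gluster volume status VOLNAME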
2013 Oct 04
0
Recovering btrfs fs after "failed to read chunk root"
...remove the failing disk, mount with -o degraded and everything works fine.
The day after, I decided to use a USB drive to temporarily serve as the
secondary copy.
The USB drive had a few GB less than the old drive, so I added the
drive plus a small partition of another disk to avoid space problems
during the rebalancing.
I started "btrfs device delete missing" and btrfs began rebalancing
onto the new drive. After a while I got
Sep 30 11:35:31 tambura kernel: [264654.275303] kernel BUG at
fs/btrfs/relocation.c:1055!
The complete error is here http://paste.fedoraproject.org/44139/
I look around and find that...
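Since the subject mentions "failed to read chunk root": btrfs-progs can attempt to rebuild the chunk tree by scanning the devices. A hedged sketch (/dev/sdX is a placeholder; image or back up the devices before running repair tools):

  # read-only consistency check first
  btrfs check /dev/sdX
  # attempt to rebuild the chunk tree from a full device scan (slow)
  btrfs rescue chunk-recover /dev/sdX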
2017 Sep 25
2
Adding bricks to an existing installation.
...A and one on SERVER C) to the
existing volume
Add 2 bricks to the cluster (one on server B and one on SERVER C) to the
existing volume
After that, I need to rebalance all the data between the bricks...
Is this config supported? Is there anything I should be careful about before
I do this? Should I do a rebalance before I add the 3rd set of disks?
Regards,
Ludwig
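The usual order is add-brick first and rebalance after; a rebalance run before the new bricks exist has nowhere to move data. A sketch with hypothetical names (replica options as required by the volume layout):

  # add the new bricks to the volume
  gluster volume add-brick VOLNAME serverB:/data/brick serverC:/data/brick
  # then redistribute existing data across old and new bricks
  gluster volume rebalance VOLNAME start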
2015 Jul 20
3
dovecot proxy/director and high availability design
I'm trying to determine which dovecot components to use and how to order them in the network path from client to mail store.
If I have, say, 1,000 users, all stored in MySQL (or LDAP), and 4 mail stores configured into two 2-node pods.
MS1 and MS2 are pod1 and are configured with replication (dsync) and host users 0-500. MS3 and MS4 are pod2 and are configured with replication between
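In the Dovecot 2.x design of that era, the piece that pins each user to one backend in front of the stores is the director ring. A minimal sketch of conf.d/10-director.conf with hypothetical host names:

  # ring of director processes sitting between proxies and backends
  director_servers = dir1.example.com dir2.example.com
  # backend mail stores users can be assigned to
  director_mail_servers = ms1.example.com ms2.example.com ms3.example.com ms4.example.com
  # how long a user stays pinned to a backend after the last connection
  director_user_expire = 15 min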