Displaying 20 results from an estimated 138 matches for "rebalanced".
2010 Mar 19
2
Balance taking a lot longer than before
Hi devs,
I've been using btrfs on my file server since 2.6.32, and after upgrading to .33 I noticed that rebalancing took way longer than before.
I have a 5-disk array used as a btrfs multi-device setup, with different hard drive sizes, brands, and interfaces.
The last time I rebalanced I was using about 800GB of space on the 4.6TB array, and I started before I went to sleep and it was done when I woke up.
I'm in the middle of a rebalance right now, currently using 950GB, and it has been a full day and the rebalancing is not done yet. Is this an expected change? Is ther...
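A minimal sketch of the kind of full balance being described, assuming a multi-device filesystem mounted at a hypothetical /mnt/array; a full balance rewrites every allocated chunk, so its run time grows with the amount of allocated data rather than staying constant:

# full balance: rewrites and redistributes every allocated chunk
btrfs filesystem balance /mnt/array
# on later kernels/progs, progress can be checked from another shell
btrfs balance status /mnt/array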
2017 Jul 07
2
Rebalance task fails
...Rebalance
on gsae_artifactory_cluster_storage has been started successfully. Use
rebalance status command to check status of the rebalance process.
ID: b22572ff-7575-4557-8317-765f7e52d445
# gluster volume rebalance gsae_artifactory_cluster_storage status
Node       Rebalanced-files    size      scanned    failures    skipped    status    run time in secs
---------  ----------------    ------    -------    --------    -------    ------    ----------------
localhost  0                   0Bytes...
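For context, the usual lifecycle of the rebalance task shown above, using the same volume name as in the output (exact status columns vary between Gluster versions):

# start data migration after adding bricks
gluster volume rebalance gsae_artifactory_cluster_storage start
# poll per-node progress (the table quoted above)
gluster volume rebalance gsae_artifactory_cluster_storage status
# abort a running rebalance if required
gluster volume rebalance gsae_artifactory_cluster_storage stop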
2011 Apr 22
1
rebalancing after remove-brick
Hello,
I'm having trouble migrating data from 1 removed replica set to
another active one in a dist replicated volume.
My test scenario is the following:
- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance - ran on one brick in each set)
The doc seems to imply that it is possible to remove
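In later Gluster releases the documented way to drain a replica set before removing it is the remove-brick start/status/commit sequence rather than a plain rebalance; a hedged sketch with placeholder volume and brick names:

# migrate data off the bricks that are being removed (names are placeholders)
gluster volume remove-brick myvol serverA1:/export/brick serverA2:/export/brick start
# watch migration progress
gluster volume remove-brick myvol serverA1:/export/brick serverA2:/export/brick status
# commit only once status reports the migration as completed
gluster volume remove-brick myvol serverA1:/export/brick serverA2:/export/brick commit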
2017 Dec 13
2
Online Rebalancing
Hi
I have a five node 300 TB distributed gluster volume with zero
replication. I am planning to add two more servers which will add around
120 TB. After fixing the layout, can I rebalance the volume while clients
are online and accessing the data?
Thanks
Kashif
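A sketch of the sequence being asked about, with a placeholder volume name and assuming the new bricks are added first; rebalance is designed to run while clients stay mounted, although it does add I/O load on the servers:

# add the new bricks (placeholder server/brick paths)
gluster volume add-brick myvol server6:/data/brick server7:/data/brick
# fix the layout so new files can be created on the new bricks
gluster volume rebalance myvol fix-layout start
# then migrate existing data; clients can keep accessing the volume
gluster volume rebalance myvol start
gluster volume rebalance myvol status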
2017 Dec 13
0
Online Rebalancing
On 13 December 2017 at 17:34, mohammad kashif <kashif.alig at gmail.com> wrote:
> Hi
>
> I have a five node 300 TB distributed gluster volume with zero
> replication. I am planning to add two more servers which will add around
> 120 TB. After fixing the layout, can I rebalance the volume while clients
> are online and accessing the data?
>
>
Hi,
Yes, you can. Are
2014 Aug 05
0
Stack dumps in use_block_rsv while rebalancing ("block rsv returned -28")
I already posted this in the thread "ENOSPC with mkdir and rename",
but now I have a device with 100GB unallocated on the "btrfs fi sh"
output, and when I run a rebalance of the form:
> btrfs filesystem balance start -dusage=50 -musage=10 "$mount"
I get more than 75 of such stack traces contaminating the klog. I've
put some of them up in a gist here:
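The filtered balance quoted above only rewrites chunks below the given usage thresholds, so it moves far less data than a full balance; a small sketch of checking allocation around such a run, with a hypothetical mount point:

# per-device allocation (the "btrfs fi sh" output mentioned above)
btrfs filesystem show /mnt/data
# how much of the allocated chunks is actually used
btrfs filesystem df /mnt/data
# compact only data chunks <=50% full and metadata chunks <=10% full
btrfs balance start -dusage=50 -musage=10 /mnt/data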
2017 Jul 10
2
Rebalance task fails
...ge has been started successfully. Use
>> rebalance status command to check status of the rebalance process.
>> ID: b22572ff-7575-4557-8317-765f7e52d445
>>
>> # gluster volume rebalance gsae_artifactory_cluster_storage status
>> Node       Rebalanced-files    size    scanned    failures    skipped    status    run time in secs
>> ---------  ----------------    ----    -------    --------    -------    ------    ----------------
>>...
2017 Jul 09
0
Rebalance task fails
...actory_cluster_storage has been started successfully. Use
> rebalance status command to check status of the rebalance process.
> ID: b22572ff-7575-4557-8317-765f7e52d445
>
> # gluster volume rebalance gsae_artifactory_cluster_storage status
> Node       Rebalanced-files    size    scanned    failures    skipped    status    run time in secs
> ---------  ----------------    ----    -------    --------    -------    ------    ----------------
> loc...
2017 Jul 13
2
Rebalance task fails
...; >> rebalance status command to check status of the rebalance process.
>> >> ID: b22572ff-7575-4557-8317-765f7e52d445
>> >>
>> >> # gluster volume rebalance gsae_artifactory_cluster_storage status
>> >> Node       Rebalanced-files    size    scanned    failures    skipped    status    run time in secs
>> >> ---------  ----------------    ----    -------    --------    -------    ------...
2017 Jul 05
2
[New Release] GlusterD2 v4.0dev-7
After nearly 3 months, we have another preview release for GlusterD-2.0.
The highlights for this release are:
- GD2 now uses an auto-scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
- An end to end functional testing framework
2017 Oct 04
0
data corruption - any update?
Just so I know.
Is it correct to assume that this corruption issue ONLY comes into play if
you are rebalancing with sharding enabled?
So if I am not rebalancing, I should be fine?
-bill
On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran
> <nbalacha at redhat.com <mailto:nbalacha at redhat.com>> wrote:
2017 Jul 13
0
Rebalance task fails
...ssfully. Use
> >> rebalance status command to check status of the rebalance process.
> >> ID: b22572ff-7575-4557-8317-765f7e52d445
> >>
> >> # gluster volume rebalance gsae_artifactory_cluster_storage status
> >> Node       Rebalanced-files    size    scanned    failures    skipped    status    run time in secs
> >> ---------  ----------------    ----    -------    --------    -------    ------    ---------...
2012 Nov 30
2
"layout is NULL", "Failed to get node-uuid for [...] and other errors during rebalancing in 3.3.1
I started rebalancing my volume after updating from 3.2.7 to 3.3.1.
After a few hours, I noticed a large number of failures in the rebalance
status:
> Node       Rebalanced-files    size       scanned    failures    status
> ---------  ----------------    -------    -------    --------    ------------
> localhost  0                   0Bytes     4288805    0           stopped
> ml55       26275               206.2MB    42...
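When the status table reports failures like this, the per-node rebalance log is usually the next place to look; a hedged example assuming default log locations and a placeholder volume name:

# each node writes its own rebalance log (path may vary by distribution)
less /var/log/glusterfs/myvol-rebalance.log
# search for the errors mentioned in the subject line
grep -E "layout is NULL|Failed to get node-uuid" /var/log/glusterfs/myvol-rebalance.log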
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Any update about multiple bugs regarding data corruptions with
>> sharding enabled ?
>>
>> Is 3.12.1 ready to be used in production?
>>
>
>
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote:
> Just so I know.
>
> Is it correct to assume that this corruption issue ONLY comes into play if
> you are rebalancing with sharding enabled?
>
> So if I am not rebalancing, I should be fine?
>
That is correct.
> -bill
>
>
>
> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
2017 Jul 05
0
[New Release] GlusterD2 v4.0dev-7
On 5 Jul 2017 at 11:31 AM, "Kaushal M" <kshlmster at gmail.com> wrote:
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
What do you mean by this?
Any differences in volume expansion from the current architecture?
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
Dear glusterfs experts,
Recently we encountered a self-heal daemon crash after rebalancing a
volume.
Crash stack bellow:
+------------------------------------------------------------------------------+
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-03-14 16:33:50
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread...
2013 Oct 04
0
Recovering btrfs fs after "failed to read chunk root"
So I'm writing up my (dis)adventure with btrfs here, hoping to help the
developers or someone with similar problems.
I had a btrfs filesystem at work, using two 1TB disks, raid1 for both
data and metadata.
A week ago one of the two disks started reporting hundreds of reallocated
sectors, so I decided to replace it.
I removed the failing disk, mounted with -o degraded, and everything worked fine.
The day after
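For comparison, the usual single-disk replacement flow on a btrfs raid1 of that era is sketched below, assuming the array is mounted at a hypothetical /mnt/data, the surviving disk is /dev/sdb and the replacement is /dev/sdc; the thread above is about what happens when this goes wrong and the chunk root can no longer be read:

# mount with one device missing
mount -o degraded /dev/sdb /mnt/data
# add the replacement disk, then drop the missing one (this relocates its data)
btrfs device add /dev/sdc /mnt/data
btrfs device delete missing /mnt/data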
2017 Sep 25
2
Adding bricks to an existing installation.
All,
We currently have a Gluster installation made up of 2 servers. Each
server has 10 drives on ZFS, and I have a Gluster mirror between the two.
The current config looks like:
SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
I now need to add more space and a third server. Before I make the changes, I
want to know whether this is a supported config. By adding a third server, I
simply want to
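One way a third server is commonly folded into a 2-node replica, sketched with placeholder volume and brick names: raise the replica count so the new brick mirrors the existing pair (the alternative, adding a whole new replica-2 pair plus a rebalance, requires bricks to be added in multiples of two):

# convert the existing 1x2 replicated volume into 1x3
gluster volume add-brick myvol replica 3 serverC:/data/brick1
# let self-heal copy the existing data onto the new brick
gluster volume heal myvol full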
2015 Jul 20
3
dovecot proxy/director and high availability design
...S3 and MS4 are pod2 and are configured with replication between them and host users 501-1000. Ideally the active connections in pod1 would be split 50/50 between MS1 and MS2. When maintenance is performed obviously all active connections/users would be moved to the other node in the pod and then rebalanced once maintenance is completed.
I'm not sure whether I need to use both the proxy and the director, or just one or the other. If both, then what is the proper path from a network perspective? I like the functionality the director provides, being able to add/remove servers on the fly and adjust connection...
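On the proxy/director question, the director's job is to keep each user pinned to one backend while the proxy passes connections through, and backends can be taken out of rotation for maintenance from the command line; a hedged sketch using doveadm, with placeholder host names:

# list backends and their current assignment counts
doveadm director status
# take MS1 out of rotation before maintenance
doveadm director remove ms1.example.com
# put it back into rotation afterwards
doveadm director add ms1.example.com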