Displaying 20 results from an estimated 2000 matches similar to: "Gluster rebalance taking many years"
2018 Apr 30
0
Gluster rebalance taking many years
Hi,
This value is an ongoing rough estimate based on the amount of data the
rebalance has migrated since it started. The values will change as the
rebalance progresses.
A few questions:
1. How many files/dirs do you have on this volume?
2. What is the average size of the files?
3. What is the total size of the data on the volume?
Can you send us the rebalance log?
Thanks,
Nithya
On 30
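As an aside (not part of the original thread), the information asked for above can usually be gathered with standard commands. A minimal sketch, assuming the volume is named <vol> and each brick is mounted at /path/to/brick (both placeholders):

# rebalance progress, including the running time estimate discussed above
gluster volume rebalance <vol> status
# rough file count on one brick (used inodes); run on each brick host
df -Pi /path/to/brick
# total size of the data on one brick
du -sh /path/to/brick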
2018 Apr 30
1
Gluster rebalance taking many years
I cannot count the number of files by normal means.
From df -i, the approximate number of files is 63694442.
[root@CentOS-73-64-minimal ~]# df -i
Filesystem   Inodes      IUsed      IFree       IUse%  Mounted on
/dev/md2     131981312   30901030   101080282   24%    /
devtmpfs     8192893     435        8192458     1%     /dev
tmpfs
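For reference (an editorial note, not from the thread): df -i reports used inodes per filesystem, which is why it works as a rough proxy for the file count here. A minimal sketch for extracting just that number, assuming the brick sits on the root filesystem as in the output above:

# print only the IUsed column for the brick filesystem (-P avoids line wrapping)
df -Pi / | awk 'NR==2 {print $3}'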
2018 Apr 30
0
Gluster rebalance taking many years
I have run into a big problem: the cluster rebalance takes a long time
after adding a new node.
gluster volume rebalance web status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
----   ----------------   ----   -------   --------   -------   ------   -----------------
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete
information than I did the first time around. The full rebalance log
from the machine where I started the rebalance can be found at the
following link. It is slightly redacted - one search/replace was made
to replace an identifying word with REDACTED.
https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news.
Is this planned to be published in the next release?
On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
wrote:
> Thanks for that update. Very happy to hear it ran fine without any issues.
> :)
>
> Yeah so you can ignore those 'No such file or directory' errors. They
> represent a transient state where DHT in the client process
2013 Mar 05
1
memory leak in 3.3.1 rebalance?
I started rebalancing my 25x2 distributed-replicate volume two days ago.
Since then, the memory usage of the rebalance processes has been
steadily climbing by 1-2 megabytes per minute. Following
http://gluster.org/community/documentation/index.php/High_Memory_Usage,
I tried "echo 2 > /proc/sys/vm/drop_caches". This had no effect on the
processes' memory usage. Some of the
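An editorial aside, not from the thread: one way to confirm a leak like this is to sample the rebalance process's resident memory over time. A minimal sketch, assuming the rebalance daemon appears as a glusterfs process with "rebalance" in its arguments:

# log the RSS (in KB) of the rebalance process once a minute
while true; do
    date
    ps -C glusterfs -o pid,rss,args | grep rebalance
    sleep 60
done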
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi,
Did you get a chance to verify this fix again?
If this fix works for you, is it OK if we move this bug to CLOSED state and
revert the rebalance-cli warning patch?
-Krutika
On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Hello,
>
>
> Yes, I forgot to upgrade the client as well.
>
> I did the upgrade and created a new volume,
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in the next release?
>
> On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
> wrote:
>
>> Thanks for that update.
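An editorial note: before relying on these fixes, it is worth confirming that both servers and clients are actually running one of the listed releases, since an earlier message in this thread mentions a client that had not been upgraded. For example:

# report the installed GlusterFS version on a server or client
glusterfs --version
gluster --version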
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
On 5 Jun 2017 at 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next
2017 Oct 17
2
Distribute rebalance issues
Hi,
I have a rebalance that has failed on one peer twice now. Rebalance
logs are below (directories anonymised and some irrelevant log lines cut).
It looks like it loses connection to the brick, but immediately stops
the rebalance on that peer instead of waiting for reconnection, which
happens a second or so later.
Is this normal behaviour? So far it has been the same server and the
same (remote)
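As a side note, a minimal sketch of the checks that usually follow a failure like this, assuming the volume is named <vol> (placeholder):

# confirm all bricks and their processes are online
gluster volume status <vol>
# confirm the peers are connected
gluster peer status
# per-node result of the last rebalance, including failure counts
gluster volume rebalance <vol> status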
2017 Oct 19
3
gluster tiering errors
All,
I am new to gluster and have some questions/concerns about some tiering
errors that I see in the log files.
OS: CentOs 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
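An editorial note: on releases with tiering, the tier daemon's promotion/demotion activity can also be checked from the CLI. A minimal sketch, assuming the tiered volume is named <vol>:

# promotion/demotion counters and tier daemon status per node
gluster volume tier <vol> status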
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response..
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                   Value
------                   -----
cluster.watermark-hi     90
# gluster volume get <vol> cluster.watermark-low
Option
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk>
wrote:
> Hi,
>
>
> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2017 Oct 27
0
gluster tiering errors
Herb,
I'm trying to weed out issues here.
So, I can see quota turned *on* and would like you to check the quota
settings and test to see system behavior *if quota is turned off*.
Although the file that failed migration was only 29K, I'm being a bit
paranoid while weeding out issues.
Are you still facing tiering errors ?
I can see your response to Alex with the disk space consumption and
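An editorial note: the quota checks being suggested map onto the following CLI calls; a minimal sketch, assuming the volume is named <vol> and treating the disable step as a temporary test:

# show configured quota limits and current usage
gluster volume quota <vol> list
# temporarily disable quota for the test described above
gluster volume quota <vol> disable
# re-enable afterwards (depending on version, limits may need to be set again)
gluster volume quota <vol> enable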
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
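An editorial note: a minimal sketch of the free-space check suggested above, assuming the volume is named <vol> and a brick is mounted at /path/to/brick (placeholders):

# free space and free inodes per brick, as reported by gluster itself
gluster volume status <vol> detail
# the same information straight from the brick filesystem
df -h /path/to/brick
df -i /path/to/brick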
2017 Oct 17
1
Distribute rebalance issues
Nithya,
Is there any way to increase the logging level of the brick? There is
nothing obvious (to me) in the log (see below for the same time period as
the latest rebalance failure). This is the only brick on that server that
has disconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029]
[server-handshake.c:692:server_setvolume] 0-video-server: accepted
client from
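An editorial note: brick log verbosity can be raised through a volume option; a minimal sketch, assuming the volume is named <vol> (placeholder):

# raise brick log verbosity (TRACE is more verbose still)
gluster volume set <vol> diagnostics.brick-log-level DEBUG
# drop it back down once the failure has been reproduced
gluster volume set <vol> diagnostics.brick-log-level INFO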
2012 Jun 28
1
Rebalance failures
I am messing around with gluster management and I've added a couple of
bricks and done a rebalance, first a fix-layout and then a data migration.
When I do this I seem to get a lot of failures:
gluster> volume rebalance MAIL status
Node   Rebalanced-files   size   scanned   failures   status
----   ----------------   ----   -------   --------   ------
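For reference (an editorial aside), the two-step procedure described above corresponds, on current releases, to the following commands for the MAIL volume shown in the status output:

# spread the directory layout over the new bricks without moving data
gluster volume rebalance MAIL fix-layout start
# then migrate the data itself
gluster volume rebalance MAIL start
# watch per-node progress and failure counts
gluster volume rebalance MAIL status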
2017 Jun 06
1
Files Missing on Client Side; Still available on bricks
Hello,
I am still working on recovering from a few failed OS hard drives in my gluster storage and have been removing and re-adding bricks quite a bit. I noticed last night that some of the directories are not visible when I access them through the client, but they are still on the brick. For example:
Client:
# ls /scratch/dw
Ethiopian_imputation HGDP Rolwaling Tibetan_Alignment
Brick:
#
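An editorial note: when a directory exists on a brick but is not visible on the client, a common first step is to force a fresh lookup from the mount and check the heal state. A minimal sketch, assuming the volume is named <vol> (placeholder) and the mount is /scratch as above:

# force the client to look the directory up again
stat /scratch/dw
ls /scratch/dw
# check for entries pending heal (replicated volumes only)
gluster volume heal <vol> info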
2012 Jun 11
1
"mismatching layouts" flooding in the logs
The following is being appended to the gluster logs at around 100 kB per second, on all 10 gluster servers:
[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
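An editorial sketch, not a fix confirmed by this thread: two things often tried for this symptom are a fix-layout rebalance to bring directory layouts back in line, and lowering the client log level so these INFO messages stop flooding the logs. Assuming the volume is named sites, as the 0-sites-dht prefix suggests:

# recompute and write out consistent directory layouts
gluster volume rebalance sites fix-layout start
# reduce client log noise (INFO messages like the above are suppressed at WARNING)
gluster volume set sites diagnostics.client-log-level WARNING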