similar to: Write failure on distributed volume with free space available

Displaying 20 results from an estimated 10000 matches similar to: "Write failure on distributed volume with free space available"

2013 Dec 12
3
Is Gluster the wrong solution for us?
We are about to abandon GlusterFS as a solution for our object storage needs. I'm hoping to get some feedback to tell me whether we have missed something and are making the wrong decision. We're already a year into this project after evaluating a number of solutions. I'd like not to abandon GlusterFS if we just misunderstand how it works. Our use case is fairly straightforward.
2011 Aug 07
1
Using volumes during fix-layout after add/remove-brick
Hello All- I regularly increase the size of volumes using "add-brick" followed by "rebalance VOLNAME fix-layout". I usually allow normal use of an expanded volume (i.e. reading and writing files) to continue while "fix-layout" is taking place, and I have not experienced any apparent problems as a result. The documentation does not say that volumes cannot be used
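The add-brick / fix-layout workflow described above boils down to a couple of CLI calls; a minimal sketch with placeholder volume and brick names:

    # attach a new brick to the existing distributed volume (names are illustrative)
    gluster volume add-brick VOLNAME server3:/export/brick1
    # recompute directory layouts so new files can be placed on the new brick;
    # fix-layout does not move existing files
    gluster volume rebalance VOLNAME fix-layout start
    # check progress per node
    gluster volume rebalance VOLNAME status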
2018 Apr 30
1
Gluster rebalance taking many years
I cannot count the number of files the normal way. Through df -i I got the approximate number of files, which is 63694442:
[root at CentOS-73-64-minimal ~]# df -i
Filesystem      Inodes    IUsed     IFree IUse% Mounted on
/dev/md2     131981312 30901030 101080282   24% /
devtmpfs       8192893      435   8192458    1% /dev
tmpfs
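When df -i on the root filesystem is all that is available, inode usage on the brick filesystem itself gives a similar rough estimate; a sketch assuming the brick sits on its own filesystem under /data/brick1 (path is an assumption):

    # approximate file count via used inodes on the brick filesystem
    df -i /data/brick1
    # slower but exact: count regular files on the brick, skipping the internal
    # .glusterfs directory so gfid hardlinks are not double-counted
    find /data/brick1 -path /data/brick1/.glusterfs -prune -o -type f -print | wc -l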
2018 Apr 30
2
Gluster rebalance taking many years
2018 Apr 30
0
Gluster rebalance taking many years
Hi, This value is an ongoing rough estimate based on the amount of data rebalance has migrated since it started. The values will change as the rebalance progresses. A few questions: 1. How many files/dirs do you have on this volume? 2. What is the average size of the files? 3. What is the total size of the data on the volume? Can you send us the rebalance log? Thanks, Nithya On 30
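The progress figures and the log referred to above can be pulled from the node running the rebalance; a sketch with a placeholder volume name (the log path is the usual default and may differ per distribution):

    # per-node counts of rebalanced files, size, scanned, failures and status
    gluster volume rebalance VOLNAME status
    # rebalance log on the node where the rebalance daemon runs
    less /var/log/glusterfs/VOLNAME-rebalance.log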
2011 Aug 17
1
cluster.min-free-disk separate for each, brick
On 15/08/11 20:00, gluster-users-request at gluster.org wrote:
> Message: 1
> Date: Sun, 14 Aug 2011 23:24:46 +0300
> From: "Deyan Chepishev - SuperHosting.BG" <dchepishev at superhosting.bg>
> Subject: [Gluster-users] cluster.min-free-disk separate for each brick
> To: gluster-users at gluster.org
> Message-ID: <4E482F0E.3030604 at superhosting.bg>
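For reference, cluster.min-free-disk is applied through the volume set interface; a sketch with illustrative values (whether it can be set differently per brick is exactly what the thread above asks):

    # ask DHT to avoid placing new files on bricks with less than 10% free space
    gluster volume set VOLNAME cluster.min-free-disk 10%
    # reconfigured options are listed in the volume info output
    gluster volume info VOLNAME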
2011 Apr 22
1
rebalancing after remove-brick
Hello, I'm having trouble migrating data from 1 removed replica set to another active one in a dist replicated volume. My test scenario is the following:
- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance - ran on one brick in each set)
The doc seems to imply that it is possible to remove
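For comparison, on later releases the data-migrating remove-brick flow is explicit about start/status/commit; a sketch with illustrative brick names (older 3.x behaviour, as in this thread, differed):

    # begin migrating data off the bricks being removed
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick start
    # wait until status reports the migration as completed
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick status
    # detach the bricks only after migration has finished
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick commit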
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote:
> Just so I know.
>
> Is it correct to assume that this corruption issue is ONLY involved if you
> are doing rebalancing with sharding enabled.
>
> So if I am not doing rebalancing I should be fine?
>
That is correct.

> -bill
>
>
> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
> >
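A quick way to check whether sharding is involved at all on a given volume (placeholder name; on most releases the option only shows up once it has been set):

    # sharding appears as features.shard among the reconfigured options
    gluster volume info VOLNAME | grep -i shard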
2018 Apr 30
2
Turn off replication
Hi All We were able to get all 4 bricks distributed, and we can see the right amount of space, but we have been rebalancing for 4 days now on 16 TB and it is still only at 8 TB. Is there a way to speed it up? There is also data we could remove to speed it up, but what is the best procedure for removing data: from the Gluster main export point, or by going onto each brick and removing it there? We would like
2017 Sep 25
2
Adding bricks to an existing installation.
All, We currently have a Gluster installation which is made of 2 servers. Each server has 10 drives on ZFS. And I have a gluster mirror between these 2. The current config looks like: SERVER A-BRICK 1 replicated to SERVER B-BRICK 1 I now need to add more space and a third server. Before I do the changes, I want to know if this is a supported config. By adding a third server, I simply want to
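For a replica 2 volume like the one above, capacity is normally grown by adding another replica pair, which turns it into a 2x2 distributed-replicate volume; a sketch with illustrative host and path names (a lone third server cannot form a pair by itself without holding both copies):

    # add one more mirrored pair of bricks
    gluster volume add-brick VOLNAME serverA:/tank/brick2 serverC:/tank/brick2
    # spread existing data across the old and new pairs
    gluster volume rebalance VOLNAME start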
2018 May 02
0
Turn off replication
Hi, Removing data to speed up the rebalance is not something that is recommended. Rebalance can be stopped, but if started again it will start from the beginning (it will have to check and skip the files already moved). Rebalance will take a while; better to let it run. It doesn't have any downside. Unless you touch the backend, the data on the gluster volume will be available for usage in spite of
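The stop/restart behaviour described here corresponds to the following commands (placeholder volume name); a restarted rebalance re-scans from the beginning and skips files that were already migrated:

    gluster volume rebalance VOLNAME stop
    # later: starts the scan over, skipping already-migrated files
    gluster volume rebalance VOLNAME start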
2013 Mar 05
1
memory leak in 3.3.1 rebalance?
I started rebalancing my 25x2 distributed-replicate volume two days ago. Since then, the memory usage of the rebalance processes has been steadily climbing by 1-2 megabytes per minute. Following http://gluster.org/community/documentation/index.php/High_Memory_Usage, I tried "echo 2 > /proc/sys/vm/drop_caches". This had no effect on the processes' memory usage. Some of the
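One way to track the growth described above is to sample the rebalance process's resident memory over time; a sketch (the grep pattern assumes the rebalance daemon shows "rebalance" in its command line, which is typical but not guaranteed):

    # resident set size of the rebalance glusterfs process, sampled every minute
    watch -n 60 "ps -C glusterfs -o pid,rss,etime,args | grep rebalance"
    # the cache-dropping step from the wiki page mentioned above (run as root)
    echo 2 > /proc/sys/vm/drop_caches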
2010 Oct 25
2
GlusterFS 3.1 on Amazon EC2 Challenge
Another GlusterFS 3.1 question on my blog (http://cloudarchitect.posterous.com). Any help/ideas will be appreciated. Thanks Joshua ---- Here's my challenge: I have several 1 TB EBS volumes now that are un-replicated and reaching capacity. I'm trying to suss out the most efficient way to get each one of these into its own replicated 4 TB gluster fs. My hope was that I could snapshot
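A minimal sketch of the target layout being described, replicating bricks across two EC2 instances, with illustrative hostnames and paths (the snapshot/migration strategy itself is the open question in the post):

    # mirrored volume over two EBS-backed bricks on separate instances
    gluster volume create VOLNAME replica 2 ec2-a:/ebs/brick1 ec2-b:/ebs/brick1
    gluster volume start VOLNAME
    # native client mount
    mount -t glusterfs ec2-a:/VOLNAME /mnt/VOLNAME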
2017 Jul 07
2
Rebalance task fails
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help.
# gluster volume rebalance gsae_artifactory_cluster_storage start
volume rebalance:
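For reference, the expansion from 1x3 to 2x3 plus rebalance looks roughly like this (host and brick names are illustrative); since the failure is in the rebalance step itself, the rebalance log on each node is the first place to look:

    # add a second replica-3 set
    gluster volume add-brick gsae_artifactory_cluster_storage srv4:/brick srv5:/brick srv6:/brick
    gluster volume rebalance gsae_artifactory_cluster_storage start
    # on failure, check the default log location (may differ per install):
    # /var/log/glusterfs/gsae_artifactory_cluster_storage-rebalance.log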
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled. Ludwig
On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote:
> Do you have sharding enabled ? If yes, don't do it.
> If no I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2
2012 Mar 09
1
dht log entries in fuse client after successful expansion/rebalance
Hi I'm using Gluster 3.2.5. After expanding a 2x2 Distributed-Replicate volume to 3x2 and performing a full rebalance fuse clients log the following messages for every directory access:
[2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk] 1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench
[2012-03-08 10:53:56.953065] I
2017 Jul 10
2
Rebalance task fails
Hi Nithya, the files were sent to priv to avoid spamming the list with large attachments. Could someone explain what the index in Gluster is? Unfortunately "index" is a popular word, so googling is not very helpful. Best regards, Szymon Miotk On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below is the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
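When df on the mount disagrees with the expected capacity, comparing the client view with the per-brick figures gluster reports is a reasonable first check; a sketch (the mount path is an assumption):

    # capacity as seen by clients through the fuse mount
    df -h /mnt/volumedisk1
    # per-brick total and free disk space as reported by gluster
    gluster volume status volumedisk1 detail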
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete information than I did the first time around. The full rebalance log from the machine where I started the rebalance can be found at the following link. It is slightly redacted - one search/replace was made to replace an identifying word with REDACTED. https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2017 Jul 13
2
Rebalance task fails
Hi Nithya, I see index in this context:
[2017-07-07 10:07:18.230202] E [MSGID: 106062] [glusterd-utils.c:7997:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
I wonder if there is anything I can do to fix it. I was trying to strace the gluster process but still have no clue what exactly the gluster index is. Best regards, Szymon Miotk On Thu, Jul 13, 2017 at 10:12 AM, Nithya