search for: bretherton

Displaying 5 results from an estimated 5 matches for "bretherton".

2011 Aug 07
1
Using volumes during fix-layout after add/remove-brick
...my shrunk volume. According to the master_list.txt file I created recently during the GFID error fixing process, the volume in question has ~1.2 million paths, but "fix-layout VOLNAME status" shows that twice this number of layouts have been fixed already. Regards Dan. -- Mr. D.A. Bretherton Computer System Manager Environmental Systems Science Centre Harry Pitt Building 3 Earley Gate University of Reading Reading, RG6 6AL UK Tel. +44 118 378 5205 Fax: +44 118 378 6413
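
For context: layout fixing is driven through the rebalance family of commands, though the exact syntax has varied between GlusterFS releases. A minimal sketch, with VOLNAME as a placeholder volume name:

    # Recompute directory layouts without migrating file data
    gluster volume rebalance VOLNAME fix-layout start

    # Check progress; older releases also accepted a fix-layout status form
    gluster volume rebalance VOLNAME status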
2011 Aug 17
1
cluster.min-free-disk separate for each, brick
...es in the Administration Guide for example. That would stop other users from going down the path that I did initially, which has given me a real headache because I am now having to move tens of terabytes of data off bricks that are larger than the new standard size. Regards Dan.
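
For reference, cluster.min-free-disk is set through the volume options interface, and it applies to the volume as a whole rather than per brick, which is the limitation behind this thread. A minimal sketch, with VOLNAME as a placeholder:

    # Stop DHT from placing new files on bricks with under 10% free space
    gluster volume set VOLNAME cluster.min-free-disk 10%

    # Confirm the option is recorded
    gluster volume info VOLNAME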
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All- There are a lot of errors of the following type in my client and NFS logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol: atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk] 0-atmos-dht: mismatching
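
The usual advice for layout-mismatch messages after an expansion is to run a rebalance so the directory layouts cover the new brick set; whether that was the resolution in this thread is not shown in the snippet. A sketch, using the atmos volume name from the log lines:

    # Recompute layouts only
    gluster volume rebalance atmos fix-layout start

    # Or migrate data as well, then watch progress
    gluster volume rebalance atmos start
    gluster volume rebalance atmos status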
2013 Jan 26
4
Write failure on distributed volume with free space available
Hello, Thanks to "partner" on IRC for telling me about this (quite big) problem. Apparently, in a distributed setup, once a brick fills up you start getting write failures. Is there a way to work around this? I would have thought Gluster would check for free space before writing to a brick. It's very easy to test: I created a distributed volume from 2 uneven bricks and started to
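
One commonly suggested mitigation, not necessarily the answer given in this thread, is to reserve headroom with cluster.min-free-disk and rebalance existing data; note that this steers placement of new files, so writes that grow a file already living on a full brick can still fail. A sketch, with VOLNAME as a placeholder:

    # Keep new files off bricks with less than 5% free space
    gluster volume set VOLNAME cluster.min-free-disk 5%

    # Spread existing files across the bricks
    gluster volume rebalance VOLNAME start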
2011 Oct 07
0
Manual rsync before self-heal to prevent repaired server hanging
...dor's view is that rsync is safe, but a large amount of continuous GlusterFS file synchronisation is not. I would be happy to use the rsync approach if it keeps the servers running, as long as it doesn't ruin my xattrs. Any comments or suggestions would be much appreciated. Regards Dan Bretherton.
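
On the xattr concern: rsync only carries extended attributes when asked, and GlusterFS keeps its metadata in trusted.* xattrs on the brick files, which are readable only by root. A minimal sketch of a brick-to-brick copy, with placeholder paths, run as root on both ends:

    # -a preserves permissions, ownership and times; -X copies xattrs
    # (including the trusted.* attributes); -H preserves hard links
    rsync -aXH --numeric-ids /export/brick1/ server2:/export/brick1/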