On Tue, Mar 04, 2014 at 10:09:24AM +0100, Dragon wrote:
> Hello,
> Some time has gone by since I reported that problem. I watched this
> and found out that if I run a rebalance, which evens out the free
> space across all 3 bricks, I can copy new files onto the volume.
> Now I again have this free disk space situation:
> b1: 180GB, b2: 178GB, b3: 41GB.
> If I now copy one file of 46GB, gluster will copy it to b3, which
> doesn't have enough free space. OK, I run a rebalance, which takes
> many hours. Can I speed that up? And can I copy files to the volume
> while it is running? After that I have b1: 130GB, b2: 133GB and
> b3: 133GB. OK, I copy the file again and it fills up b2. After that I
> copy more files and get the same situation, now for b2.
>
> My question now: is this normal behaviour for a distributed
> gluster volume, and must I run the rebalance myself? Where is the
> problem? Need serious help. Thanks.
Why not configure striping (with a *huge* stripe size)
and min-free-disk at the same time?
Set cluster.min-free-disk to a few GB,
preferably larger than your expected maximum single file size.
Set cluster.stripe-block-size to a few GB,
again preferably larger than your expected maximum single file size.
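For example, something like this (just a sketch; "myvol" and the 50GB
figure are placeholders, pick a size above your largest expected file):

```shell
# Sketch only -- substitute your real volume name and sizes.
# Stop placing new files on any brick with less than 50GB free:
gluster volume set myvol cluster.min-free-disk 50GB

# Use a huge stripe block, so a typical file fits in a single block:
gluster volume set myvol cluster.stripe-block-size 50GB
```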
Now, *usually* you will get the whole file in one "stripe" block on
the one expected volume, *unless* that volume is already low on space,
due to "min-free-disk".
And even if you write tons of data, you will be able to store
single files larger than even your largest volume, because of the
"striping".
But if you really fill it up to the last few percent,
then yes, it will still correctly tell you "ENOSPC".
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.