Hi,
Since then I have set up geo-replication to a slave volume composed of a single
brick (no replication, no distribution), and it appears to be complete and up to
date: the LAST_SYNCED column in 'gluster volume geo-replication VOLUME SLAVE
status detail' is only a few minutes old.
Interestingly, the master bricks (remember the volume is 1 x 2 = 2) still use
89GB each according to 'df', whereas the slave brick uses 61GB.
61GB is what I would expect, and it matches a 'du -h' on the volume mounted over
NFS.
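For reference, here is roughly the comparison I am describing (the volume name,
slave host and brick paths below are placeholders, not my real names):

    gluster volume geo-replication myvol slavehost::myvol-slave status detail
    df -h /data/brick1         # on each master server: ~89GB used
    df -h /data/slave-brick1   # on the slave server:   ~61GB used
    du -sh /mnt/myvol          # on the NFS-mounted volume: ~61GB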
As you can see, my Gluster volume is small for now, but I want to be sure all is
well before ramping it up.
Why do the master bricks use roughly 50% more space than I would expect them to?
Is there a way to find out where and why that extra space is used, and to
reclaim it?
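One thing I was thinking of trying (just a sketch, with a placeholder brick
path, and I haven't verified it is the right approach) is to compare the brick's
usage with and without .glusterfs, and to look for .glusterfs entries that are
not hard-linked to anything else on the brick:

    # on a master brick (path is a placeholder)
    du -sh /data/brick1
    du -sh --exclude=.glusterfs /data/brick1
    # regular files under .glusterfs should normally be hard links to the
    # actual files, so non-empty entries with a link count of 1 look suspicious
    find /data/brick1/.glusterfs -type f -links 1 -size +0 -print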
Thibault.
On 12 Aug 2015 3:07 pm, "Thibault Godouet" <tibo92 at godouet.net> wrote:
> I have a replicated Gluster 3.7.3 volume composed of two bricks, each on a
> different server.
>
> If I mount the volume over NFS (which is a lot faster than FUSE for 'du')
> and run 'du -h' on it, it returns 56GB.
>
> Yet the disk usage on each brick is quite a lot higher:
>
> - a 'du -h' gives me 104GB, with 99% (or more) of it under .glusterfs
>
> - a 'df -h' gives me 85GB (and there is nothing else on the partition)
>
> I'd be interested to hear if someone knows why du and df give me different
> values, but I'm even more interested to know why df reports 85GB rather than
> something close to 56GB (the actual size of the data in the volume).
>
> Is this expected?
>
> If not, is there a way to clear the extra space and get back down to 56GB of
> disk usage on each brick?
>
> Thanks,
>
> Thibault.
>