I tried setting the shard size to 512MB. It slightly improved the space
utilization during creation - no longer quite double. And I didn't run out
of space creating a file that occupied 6GB of the 8GB volume (I even tried
7168MB just fine). See the attached command line log.
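
For anyone digging into where the transient extra space lives: as far as I
understand the shard layout, everything past the first block is stored as
<gfid>.N fragment files under the hidden .shard directory at the root of
each brick, so per-brick usage can be checked directly on the servers. A
rough sketch, assuming passwordless SSH to the five test nodes and the
brick paths from the attached log:

    # tally shard fragment usage on every brick of gv30
    for host in tg1 tg2 tg3 tg4 tg5; do
        ssh "$host" 'du -sh /data/brick1/gv30/.shard /data/brick2/gv30/.shard'
    done
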
On Fri, Feb 4, 2022 at 6:59 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
> It sounds like a bug to me.
> In virtualization, sharding is quite common (albeit on replica volumes),
> and I have never observed such behavior.
> Can you increase the shard size to 512M and check if the situation is
> better?
> Also, share the volume info.
>
> Best Regards,
> Strahil Nikolov
>
> On Fri, Feb 4, 2022 at 22:32, Fox <foxxz.net@gmail.com> wrote:
> Using gluster v10.1 and creating a Distributed-Dispersed volume with
> sharding enabled.
>
> I create a 2GB file on the volume using the 'dd' tool. The file size
> shows 2GB with 'ls'. However, 'df' shows 4GB of space utilized on the
> volume. After several minutes the volume utilization drops to the 2GB I
> would expect.
>
> This is repeatable for different large file sizes and different
> disperse/redundancy brick configurations.
>
> I've also encountered a situation, as configured above, where after
> filling close to full disk capacity I am momentarily unable to delete
> the file.
>
> I have attached a command line log of an example of the above, using a
> set of test VMs set up in a GlusterFS cluster.
>
> Is this initial 2x space utilization anticipated behavior for sharding?
>
> It would mean that I can never create a file bigger than half my volume
> size, as I get an I/O error (no space left on device).
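
The several-minute settling described above is easier to capture by polling
df than by re-running it by hand. A minimal sketch, assuming the volume is
mounted at /mnt as in the attached log:

    # re-run df every 30 seconds and watch the utilization settle
    watch -n 30 df -h /mnt
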
-------------- next part --------------
root@tg1:~# gluster volume create gv30 disperse 5
tg{1,2,3,4,5}:/data/brick1/gv30 tg{1,2,3,4,5}:/data/brick2/gv30
volume create: gv30: success: please start the volume to access data
root@tg1:~# gluster volume set gv30 features.shard on
volume set: success
root@tg1:~# gluster volume set gv30 features.shard-block-size 512MB
volume set: success
root@tg1:~# gluster volume start gv30
volume start: gv30: success
root@tg1:~# gluster volume info
Volume Name: gv30
Type: Distributed-Disperse
Volume ID: e14cf92b-6f2d-420d-97ac-f725959d0398
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 1) = 10
Transport-type: tcp
Bricks:
Brick1: tg1:/data/brick1/gv30
Brick2: tg2:/data/brick1/gv30
Brick3: tg3:/data/brick1/gv30
Brick4: tg4:/data/brick1/gv30
Brick5: tg5:/data/brick1/gv30
Brick6: tg1:/data/brick2/gv30
Brick7: tg2:/data/brick2/gv30
Brick8: tg3:/data/brick2/gv30
Brick9: tg4:/data/brick2/gv30
Brick10: tg5:/data/brick2/gv30
Options Reconfigured:
features.shard-block-size: 512MB
features.shard: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
root@tg1:~# mount -t glusterfs tg1:/gv30 /mnt
root@tg1:~# cd /mnt
root@tg1:/mnt# df -h
Filesystem Size Used Avail Use% Mounted on
tg1:/gv30 8.0G 399M 7.6G 5% /mnt
root@tg1:/mnt# dd if=/dev/zero of=file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 36.3422 s, 59.1 MB/s
root@tg1:/mnt# df -h
Filesystem Size Used Avail Use% Mounted on
tg1:/gv30 8.0G 3.9G 4.1G 49% /mnt
(about 5 minutes later)
root@tg1:/mnt# df -h
Filesystem Size Used Avail Use% Mounted on
tg1:/gv30 8.0G 2.4G 5.6G 31% /mnt
root@tg1:/mnt# rm file
root@tg1:/mnt# df -h
Filesystem Size Used Avail Use% Mounted on
tg1:/gv30 8.0G 399M 7.6G 5% /mnt
root@tg1:/mnt# dd if=/dev/zero of=file bs=1M count=6144
6144+0 records in
6144+0 records out
6442450944 bytes (6.4 GB, 6.0 GiB) copied, 96.3252 s, 66.9 MB/s
root@tg1:/mnt# df -h
Filesystem Size Used Avail Use% Mounted on
tg1:/gv30 8.0G 7.0G 1.1G 88% /mnt
(about 5 minutes later)
root@tg1:/mnt# df -h
Filesystem Size Used Avail Use% Mounted on
tg1:/gv30 8.0G 6.7G 1.3G 85% /mnt
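
The 7168MB run mentioned at the top of this message followed the same
pattern (its output is omitted from this log). A sketch, assuming the same
mount:

    # 7GiB of the 8GiB volume; with 512MB shards this completed without
    # a "no space left on device" error
    dd if=/dev/zero of=file bs=1M count=7168
    df -h /mnt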