Claus Jeppesen
2020-Apr-01 15:20 UTC
[Gluster-users] Sharding on 7.4 - filesizes may be wrong
We're using GlusterFS in a replicated brick setup with 2 bricks and sharding turned on (shard size 128MB). There is something funny going on: if we copy large VM files to the volume we can end up with files that are a bit larger than the source files, DEPENDING on the speed with which we copied the files, e.g.:

    dd if=SOURCE bs=1M | pv -L NNm | ssh gluster_server "dd of=/gluster/VOL_NAME/TARGET bs=1M"

It seems that if NN is <= 25 (i.e. 25 MB/s) the sizes of SOURCE and TARGET will be the same. If we crank NN up to, say, 50 we sometimes risk that a 25G file ends up having a slightly larger size, e.g. 26844413952 or 26844233728 bytes - larger than the expected 26843545600.

Unfortunately this is not an illusion! If we dd the files out of Gluster we will receive the amount of data that 'ls' showed us.

In the brick directory (incl. the .shard directory) we have the expected number of shards for a 25G file (200), each with size precisely equal to 128MB - but there is an additional 0-size shard file created.

Has anyone else seen a phenomenon like this?

Thanx,
Claus.

--
*Claus Jeppesen*
Manager, Network Services
Datto, Inc.
p +45 6170 5901 | Copenhagen Office
www.datto.com
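[Editorial note: the sizes quoted above are internally consistent, which a quick shell sanity check confirms. All numbers below come straight from the post; nothing here is Gluster-specific.]

    # 25 GiB at a 128 MiB shard size: expected byte count, expected
    # number of 128 MiB pieces, and the excess in the first bad example.
    expected=$((25 * 1024 * 1024 * 1024))   # 26843545600 bytes
    shard=$((128 * 1024 * 1024))            # 134217728 bytes
    echo "expected bytes: $expected"
    echo "full shards:    $((expected / shard))"
    echo "excess bytes:   $((26844413952 - expected))"

So a 25G file divides into exactly 200 full shards with no remainder, which is why the extra 0-size shard file and the ~850 KB of excess stand out.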
Dmitry Antipov
2020-Apr-02 09:48 UTC
[Gluster-users] Sharding on 7.4 - filesizes may be wrong
On 4/1/20 6:20 PM, Claus Jeppesen wrote:
> Has anyone else seen a phenomenon like this?

Well, this one (observed on 8dev git) may be related:

# qemu-img convert gluster://192.168.111.2/TEST/qcow-32G.qcow2 gluster://192.168.111.3/TEST/out-32G.raw

[2020-04-02 09:38:08.968750] E [ec-inode-write.c:2004:ec_writev_start] (-->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x12d92) [0x7f3d02a61d92] -->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x365fd) [0x7f3d02a855fd] -->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x35ccd) [0x7f3d02a84ccd] ) 0-: Assertion failed: ec_get_inode_size(fop, fop->fd->inode, &current)
[2020-04-02 09:38:08.971827] E [ec-inode-write.c:2004:ec_writev_start] (-->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x12d92) [0x7f3d02a61d92] -->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x365fd) [0x7f3d02a855fd] -->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x35ccd) [0x7f3d02a84ccd] ) 0-: Assertion failed: ec_get_inode_size(fop, fop->fd->inode, &current)
[2020-04-02 09:38:08.975386] E [ec-inode-write.c:2201:ec_manager_writev] (-->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x12f9f) [0x7f3d02a61f9f] -->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x12d92) [0x7f3d02a61d92] -->/usr/lib64/glusterfs/8dev/xlator/cluster/disperse.so(+0x3666d) [0x7f3d02a8566d] ) 0-: Assertion failed: __ec_get_inode_size(fop, fop->fd->inode, &cbk->iatt[0].ia_size)

# gluster volume info

Volume Name: TEST
Type: Distributed-Disperse
Volume ID: 1b6c4980-dad5-4daa-b662-53995470f891
Status: Started
Snapshot Count: 0
Number of Bricks: 5 x (2 + 1) = 15
Transport-type: tcp
Bricks:
Brick1: 192.168.111.1:/vair/SSD-0000
Brick2: 192.168.111.2:/vair/SSD-0000
Brick3: 192.168.111.3:/vair/SSD-0000
Brick4: 192.168.111.4:/vair/SSD-0000
Brick5: 192.168.111.1:/vair/SSD-0001
Brick6: 192.168.111.2:/vair/SSD-0001
Brick7: 192.168.111.3:/vair/SSD-0001
Brick8: 192.168.111.4:/vair/SSD-0001
Brick9: 192.168.111.1:/vair/SSD-0002
Brick10: 192.168.111.2:/vair/SSD-0002
Brick11: 192.168.111.3:/vair/SSD-0002
Brick12: 192.168.111.4:/vair/SSD-0002
Brick13: 192.168.111.1:/vair/SSD-0003
Brick14: 192.168.111.2:/vair/SSD-0003
Brick15: 192.168.111.3:/vair/SSD-0003
Options Reconfigured:
features.shard-block-size: 128MB
features.shard: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

Dmitry
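[Editorial note: for readers trying to reproduce the original report, a minimal sketch for spotting odd-sized shards directly on a brick. BRICK_PATH is a placeholder, not a path from the thread; 134217728 is the 128 MiB shard size both posters use. Note that for files whose size is not an exact multiple of the shard size, a smaller final shard is normal; for an exact multiple like 25 GiB, any hit is suspect.]

    # List files under a brick's .shard directory whose size is NOT
    # exactly 128 MiB (134217728 bytes), printing "size path" per file.
    # Requires GNU find for -printf.
    BRICK_PATH=${BRICK_PATH:-/vair/SSD-0000}   # placeholder brick root
    find "$BRICK_PATH/.shard" -type f ! -size 134217728c -printf '%s %p\n'

Run on each brick in turn; a 0-byte entry for a file that should be an exact multiple of the shard size matches the stray shard Claus describes.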