Could you share
(1) the output of 'getfattr -d -m . -e hex <path>', where <path> is the path to the original file on the brick where it resides,
(2) the size of the file as seen from the mount point around the time (1) is taken, and
(3) the output of 'gluster volume info'?

-Krutika

----- Original Message -----
> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> To: "gluster-users" <gluster-users at gluster.org>
> Sent: Sunday, November 1, 2015 6:29:44 AM
> Subject: [Gluster-users] Shard file size (gluster 3.7.5)
>
> I have upgraded my cluster to Debian Jessie, so I am able to natively test 3.7.5.
>
> I've noticed some peculiarities with reported file sizes on the gluster mount,
> but I seem to recall this is a known issue with shards?
>
> The source file is sparse: nominal size 64GB, real size 25GB. However, the
> underlying storage is ZFS with lz4 compression, which reduces it to 16GB.
>
> No shard:
>   ls -lh : 64 GB
>   du -h  : 25 GB
>
> 4MB shard:
>   ls -lh : 144 GB
>   du -h  : 21 MB
>
> 512MB shard:
>   ls -lh : 72 GB
>   du -h  : 765 MB
>
> A 'du -sh' of the .shard directory shows 16GB for all datastores.
>
> Is this a known bug for sharding? Will it be repaired eventually?
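As a side note on the ls/du numbers quoted above: 'ls -l' reports the apparent (logical) size, while 'du' reports blocks actually allocated, so the two legitimately differ for sparse or ZFS-compressed files even without sharding. A quick way to see both numbers for one file, assuming GNU coreutils (the path below is just an illustration, substitute your own):

  stat --format='apparent: %s bytes, allocated: %b blocks of %B bytes' vm-301-disk-2.qcow2
  du -h --apparent-size vm-301-disk-2.qcow2   # same number ls -l shows
  du -h vm-301-disk-2.qcow2                   # blocks actually allocated on disk

The oddity being reported in this thread is in the apparent size itself, which should match the original file regardless of sparseness or compression.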
On 2 November 2015 at 18:49, Krutika Dhananjay <kdhananj at redhat.com> wrote:
> Could you share
> (1) the output of 'getfattr -d -m . -e hex <path>' where <path> represents
> the path to the original file from the brick where it resides
> (2) the size of the file as seen from the mount point around the time
> when (1) is taken
> (3) output of 'gluster volume info'

Hope this helps.

(1) getfattr output from the brick:

  getfattr -d -m . -e hex /zfs_vm/datastore3/images/301/vm-301-disk-2.qcow2
  getfattr: Removing leading '/' from absolute path names
  # file: zfs_vm/datastore3/images/301/vm-301-disk-2.qcow2
  trusted.afr.dirty=0x000000000000000000000000
  trusted.bit-rot.version=0x0200000000000000563732fc000ee087
  trusted.gfid=0x25621772b50340ab87e22c7e5e36bf00
  trusted.glusterfs.shard.block-size=0x0000000020000000
  trusted.glusterfs.shard.file-size=0x0000001711c6000000000000000000000000000000f60d570000000000000000

(2) the size of the file as seen from the mount point around the time (1) was taken:

  # Bytes
  cd /mnt/pve/gluster3/images/301
  ls -l
  total 8062636
  -rw-r--r-- 1 root root 99082436608 Nov  2 21:48 vm-301-disk-2.qcow2

  # Human readable :)
  ls -lh
  total 7.7G
  -rw-r--r-- 1 root root 93G Nov  2 21:48 vm-301-disk-2.qcow2

  # Original file it was rsync copied from
  ls -l /mnt/pve/pxsphere/images/301/vm-301-disk-1.qcow2
  -rw-r--r-- 1 root root 27746172928 Oct 29 18:09 /mnt/pve/pxsphere/images/301/vm-301-disk-1.qcow2

(3) output of 'gluster volume info':

  gluster volume info datastore3

  Volume Name: datastore3
  Type: Replicate
  Volume ID: def21ef7-37b5-4f44-a2cd-8e722fc40b24
  Status: Started
  Number of Bricks: 1 x 3 = 3
  Transport-type: tcp
  Bricks:
  Brick1: vna.proxmox.softlog:/zfs_vm/datastore3
  Brick2: vnb.proxmox.softlog:/glusterdata/datastore3
  Brick3: vng.proxmox.softlog:/glusterdata/datastore3
  Options Reconfigured:
  performance.io-thread-count: 32
  performance.write-behind-window-size: 128MB
  performance.cache-size: 1GB
  performance.cache-refresh-timeout: 4
  nfs.disable: on
  nfs.addr-namelookup: off
  nfs.enable-ino32: on
  performance.write-behind: on
  cluster.self-heal-window-size: 256
  server.event-threads: 4
  client.event-threads: 4
  cluster.quorum-type: auto
  features.shard-block-size: 512MB
  features.shard: on
  performance.readdir-ahead: on
  cluster.server-quorum-ratio: 51%

--
Lindsay
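A note on reading that trusted.glusterfs.shard.file-size value: as far as I can tell the 32-byte blob is four big-endian 64-bit fields, with the aggregated file size in the first field and a 512-byte block count in the third. That layout is an assumption inferred from matching the numbers in this mail, not taken from the shard code, but the arithmetic does line up (a bash sketch):

  # value from the getfattr output above, 0x prefix stripped
  V=0000001711c6000000000000000000000000000000f60d570000000000000000
  SIZE=$((16#${V:0:16}))      # first 8 bytes: recorded file size
  BLOCKS=$((16#${V:32:16}))   # third 8-byte field: 512-byte block count
  echo "size:   $SIZE bytes"                        # 99082436608 -- matches the 93G ls -l reports
  echo "blocks: $BLOCKS ($((BLOCKS * 512)) bytes)"  # ~7.7G -- matches du

If that reading is right, the inflated 93G apparent size is already recorded in the xattr on the brick, while the block count still reflects the roughly 7.7G actually on disk; the mount is just reporting what the xattr says.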
I can reproduce this 100% reliably, just by copying files onto a gluster volume. The reported file size is always larger than the original, sometimes radically so, and if I copy the same file again, the reported size is different each time.

Using cmp I found that the file contents match up to the size of the original file. The md5sums probably differ only because of the different file sizes.

On 2 November 2015 at 18:49, Krutika Dhananjay <kdhananj at redhat.com> wrote:
> Could you share
> (1) the output of 'getfattr -d -m . -e hex <path>' where <path> represents
> the path to the original file from the brick where it resides
> (2) the size of the file as seen from the mount point around the time
> when (1) is taken
> (3) output of 'gluster volume info'
>
> -Krutika

--
Lindsay
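For reference, a check along the lines described above, comparing only up to the length of the original file, can be done with GNU cmp's byte limit (file names here are placeholders, not the actual paths from this thread):

  # compare only the first N bytes, where N is the size of the original file
  N=$(stat -c %s original.qcow2)
  cmp -n "$N" original.qcow2 copy-on-gluster.qcow2 && echo "first $N bytes identical"

If that passes while the apparent sizes differ, the data itself is intact and only the size metadata of the sharded copy is wrong.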