Correction: the option needs to be enabled, not disabled.
# gluster volume set <VOL> performance.strict-write-ordering on
-Krutika
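For completeness, a quick way to check that the option took effect (the volume name "myvol" is illustrative; `gluster volume info` should list any options that have been reconfigured on the volume):

```shell
# Illustrative only -- requires a running gluster cluster.
gluster volume set myvol performance.strict-write-ordering on
gluster volume info myvol | grep strict-write-ordering
# expected if the set succeeded:
#   performance.strict-write-ordering: on
```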
----- Original Message -----
> From: "Krutika Dhananjay" <kdhananj at redhat.com>
> To: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> Cc: "gluster-users" <gluster-users at gluster.org>
> Sent: Tuesday, November 3, 2015 9:52:59 AM
> Subject: Re: [Gluster-users] Shard file size (gluster 3.7.5)
> Could you try this again with performance.strict-write-ordering set to 'off'?
> # gluster volume set <VOL> performance.strict-write-ordering off
> -Krutika
> ----- Original Message -----
> > From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> > To: "Krutika Dhananjay" <kdhananj at redhat.com>, "gluster-users" <gluster-users at gluster.org>
> > Sent: Tuesday, November 3, 2015 7:26:41 AM
> > Subject: Re: [Gluster-users] Shard file size (gluster 3.7.5)
> > I can reproduce this 100% reliably, just by copying files onto a gluster
> > volume. The reported file size is always larger, sometimes radically so. If I
> > copy the file again, the reported size is different each time.
> > Using cmp I found that the file contents match, up to the size of the
> > original file.
> > MD5 sums probably differ because of the different file sizes.
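The check described above can be sketched locally (file names are made up; this only illustrates that `cmp -n` compares the first N bytes while `md5sum` hashes the whole file, so padded copies can match byte-for-byte up to the original length and still checksum differently):

```shell
# Make an 'original' and a zero-padded 'copy' -- standing in for the
# inflated file size seen on the gluster mount (names are illustrative).
printf 'hello world' > original.bin
cp original.bin copy.bin
dd if=/dev/zero bs=1 count=5 >> copy.bin 2>/dev/null   # copy is now 5 bytes longer

orig_size=$(wc -c < original.bin)
# Contents agree up to the original length...
cmp -n "$orig_size" original.bin copy.bin && echo "match up to $orig_size bytes"
# ...but whole-file checksums differ, as observed in the thread.
md5sum original.bin copy.bin
```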
> > On 2 November 2015 at 18:49, Krutika Dhananjay <kdhananj at redhat.com> wrote:
> > > Could you share
> > > (1) the output of 'getfattr -d -m . -e hex <path>', where <path> is the
> > > path to the original file on the brick where it resides;
> > > (2) the size of the file as seen from the mount point around the time
> > > when (1) is taken;
> > > (3) the output of 'gluster volume info'.
> > > -Krutika
> > > > From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> > > > To: "gluster-users" <gluster-users at gluster.org>
> > > > Sent: Sunday, November 1, 2015 6:29:44 AM
> > > > Subject: [Gluster-users] Shard file size (gluster 3.7.5)
> > > > Have upgraded my cluster to Debian Jessie, so I am able to natively
> > > > test 3.7.5.
> > > > I've noticed some peculiarities with reported file sizes on the gluster
> > > > mount, but I seem to recall this is a known issue with shards?
> > > > Source file is sparse, nominal size 64GB, real size 25GB. However, the
> > > > underlying storage is ZFS with lz4 compression, which reduces it to 16GB.
> > > > No shard:
> > > > ls -lh : 64 GB
> > > > du -h : 25 GB
> > > > 4MB shard:
> > > > ls -lh : 144 GB
> > > > du -h : 21 MB
> > > > 512MB shard:
> > > > ls -lh : 72 GB
> > > > du -h : 765 MB
> > > > A 'du -sh' of the .shard directory shows 16GB for all datastores.
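As an aside on the numbers above: for a sparse file, `ls -lh` reports the apparent size while `du -h` reports allocated blocks, so the two legitimately disagree even without gluster involved. A quick local illustration (file name is made up):

```shell
# Create a 64M sparse file: the apparent size is 64M, but on filesystems
# that support holes almost no blocks are allocated.
truncate -s 64M sparse.img
ls -lh sparse.img     # apparent size: 64M
du -h sparse.img      # allocated size: close to 0
```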
> > > > Is this a known bug for sharding? Will it be repaired eventually?
> > > > Sent from Mail for Windows 10
> > > > _______________________________________________
> > > > Gluster-users mailing list
> > > > Gluster-users at gluster.org
> > > > http://www.gluster.org/mailman/listinfo/gluster-users
> > --
> > Lindsay
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users