OK. I am not sure what it is that we're doing differently. I tried the steps
you shared and here's what I got:
[root@dhcp35-215 bricks]# gluster volume info
Volume Name: rep
Type: Replicate
Volume ID: 3fd45a4b-0d02-4a44-b74a-41592d48e102
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: kdhananjay:/bricks/1
Brick2: kdhananjay:/bricks/2
Brick3: kdhananjay:/bricks/3
Options Reconfigured:
performance.strict-write-ordering: on
features.shard: on
features.shard-block-size: 512MB
cluster.quorum-type: auto
client.event-threads: 4
server.event-threads: 4
cluster.self-heal-window-size: 256
performance.write-behind: on
nfs.enable-ino32: on
nfs.addr-namelookup: off
nfs.disable: on
performance.cache-refresh-timeout: 4
performance.cache-size: 1GB
performance.write-behind-window-size: 128MB
performance.io-thread-count: 32
performance.readdir-ahead: on
[root@dhcp35-215 mnt]# gluster volume set rep strict-write-ordering on
volume set: success
[root@dhcp35-215 mnt]# dd if=/dev/sda of=test.bin bs=1MB count=8192
8192+0 records in
8192+0 records out
8192000000 bytes (8.2 GB) copied, 133.754 s, 61.2 MB/s
[root@dhcp35-215 mnt]# ls -l
total 8000000
-rw-r--r--. 1 root root 8192000000 Nov 5 16:40 test.bin
[root@dhcp35-215 mnt]# ls -lh
total 7.7G
-rw-r--r--. 1 root root 7.7G Nov 5 16:40 test.bin
[root@dhcp35-215 mnt]# du test.bin
8000000 test.bin
[root@dhcp35-215 bricks]# du /bricks/1/.shard/
7475780 /bricks/1/.shard/
[root@dhcp35-215 bricks]# du /bricks/1/
.glusterfs/ .shard/ test.bin .trashcan/
[root@dhcp35-215 bricks]# du /bricks/1/test.bin
524292 /bricks/1/test.bin
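The two numbers being compared throughout this thread are the apparent size (what ls -l and stat report) and the allocated blocks (what du reports). As a minimal illustration of the difference, independent of gluster, a sparse file shows the gap clearly (the path here is a temp file, not from the setup above):

```shell
# Sketch: apparent size vs allocated blocks on a sparse file.
tmp=$(mktemp)
truncate -s 1G "$tmp"                         # apparent size: 1 GiB, no data written
apparent=$(stat -c %s "$tmp")                 # bytes, same figure ls -l shows
allocated=$(du -k "$tmp" | awk '{print $1}')  # KiB actually allocated on disk
echo "apparent=$apparent allocated_kb=$allocated"
rm -f "$tmp"
```

On a sharded volume the same comparison applies per brick: du on the base file plus du on its entries under .shard/ should add up to roughly the allocated size seen on the mount.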
Just to be sure, did you rerun the test on the already-broken file (test.bin),
which was written to while strict-write-ordering was off?
Or did you run the new test, with strict-write-ordering on, against a brand-new file?
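For clarity, the fresh-file re-test being asked about would look something like the sequence below. This is a sketch, not output from the thread; the volume name (rep) and mount point (/mnt) are taken from the transcript above, and the file name is hypothetical:

```shell
# Enable the option first, then write a NEW file (not the old test.bin):
gluster volume set rep performance.strict-write-ordering on
dd if=/dev/urandom of=/mnt/test-new.bin bs=1M count=1024

# Compare apparent size against allocated blocks on the mount:
stat -c %s /mnt/test-new.bin   # apparent size in bytes
du -k /mnt/test-new.bin        # allocated KiB; should roughly match
```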
-Krutika
----- Original Message -----
> From: "Lindsay Mathieson" <lindsay.mathieson@gmail.com>
> To: "Krutika Dhananjay" <kdhananj@redhat.com>
> Cc: "gluster-users" <gluster-users@gluster.org>
> Sent: Thursday, November 5, 2015 3:04:51 AM
> Subject: Re: [Gluster-users] Shard file size (gluster 3.7.5)
> On 5 November 2015 at 01:09, Krutika Dhananjay <kdhananj@redhat.com>
> wrote:
> > Ah! It's the same issue. Just saw your volume info output. Enabling
> > strict-write-ordering should ensure both size and disk usage are
> > accurate.
>
> Tested it - nope :( Size is accurate (27746172928 bytes), but disk usage is
> wildly inaccurate (698787).
> I have compression disabled on the underlying storage now.
> --
> Lindsay