Hi,

I am extending my ZFS+rsync backup to handle large files (think virtual machine disk images) efficiently. However, during testing I have found some very strange behavior of the --inplace flag (which seems to be what I am looking for).

What I did: create a 100MB file, rsync, snapshot, change 1K at a random location, rsync, snapshot, change 1K at another random location, repeat a couple of times, then `zfs list` to see how large my volume actually is (a sketch of the full sequence is at the end of this message).

The strange thing is that the resulting size was wildly different depending on how I created the initial file. All modifications were done by the same command, namely

  dd if=/dev/urandom of=testfile count=1 bs=1024 seek=some_num conv=notrunc

Situation A: the file was created by running

  dd if=/dev/zero of=testfile bs=1024 count=102400

The resulting size of the volume is approximately 100MB times the number of snapshots.

Situation B: the file was created by running

  dd if=/dev/urandom of=testfile count=102400 bs=1024

The resulting size of the volume is just a bit over 100MB.

The rsync command used was

  rsync -aHAv --delete --inplace root@remote:/test/ .

rsync on the backup machine (the destination) is 3.1.0, the remote has 3.0.9. There is no compression or dedup enabled on the ZFS volume.

Has anyone seen this behavior before? Is it a bug? Can I avoid it? Can I make rsync give me disk I/O statistics to confirm?

Regards,
Pavel Herrmann
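
Roughly, the test sequence looked like the sketch below; the dataset name tank/test, the /test path on the remote, and the exact offsets are placeholders, not necessarily what I ran verbatim:

  # on the remote: create the initial file (situation A; situation B uses /dev/urandom instead of /dev/zero)
  ssh root@remote "dd if=/dev/zero of=/test/testfile bs=1024 count=102400"

  # on the backup machine, with the destination dataset mounted at the current directory:
  for i in 1 2 3 4 5; do
      rsync -aHAv --delete --inplace root@remote:/test/ .
      zfs snapshot tank/test@snap$i
      # change 1K at a different offset each iteration (offset expanded locally, within the 100MB file)
      ssh root@remote "dd if=/dev/urandom of=/test/testfile count=1 bs=1024 seek=$((RANDOM * 3)) conv=notrunc"
  done

  # compare space usage of the dataset and its snapshots
  zfs list -t all -o name,used,refer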