I think the high-level situation is as follows:

- write()s to the loop (block) device hit the page cache as buffers. This data is subject to caching/writeback behavior similar to a local filesystem (e.g., write() returns once the data is cached; if the write is sync, it waits on a flush before returning).

- Flushing eventually kicks in; it is page based and results in a bunch of writepage requests. The block/buffer handling code converts these writepage requests into 4k I/O (bio) requests.

- These 4k I/O requests hit loop. In the file-backed case, loop issues write() requests to the underlying file.

- Over a local filesystem, I believe this would result in further caching in that filesystem's mapping. In the case of fuse, requests are submitted to userspace immediately, so gluster is now receiving 4k write requests rather than the 128k requests it sees when writing to the file directly via dd with a 1MB buffer size.

Given that you can reproduce the sync write variance without Xen, I would rule that out for the time being and suggest the following:

- Give the profile tool a try to compare the local loop case when throughput is higher vs. lower. It would be interesting to see whether anything jumps out that could help explain what is happening differently between the runs.

- Check which caching/performance translators are enabled in your gluster client graph (the volname-fuse.vol volfile) and try disabling some of them one at a time, e.g.:

    gluster volume set myvol io-cache disable

  (repeat for write-behind, read-ahead, quick-read, etc.) ... and see if you get any more consistent results (good or bad).

- Out of curiosity (and if you're running a recent enough gluster), try the fopen-keep-cache mount option on your gluster mount and see if it changes any behavior, particularly with a cleanly mapped loop dev.

Brian
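To put rough numbers on the request-size difference described above, here is a small sketch (plain Python, not gluster code; the temp file and sizes are illustrative only) counting how many write calls it takes to move 1MB in 4k chunks, as loop's bios would produce, versus 128k chunks, as seen when writing the file directly:

```python
# Illustrative only: count write calls needed to move 1MB of data at the
# two request sizes discussed in the thread.
import tempfile

MB = 1024 * 1024
BIO_SIZE = 4 * 1024        # request size loop generates per bio
DIRECT_SIZE = 128 * 1024   # request size seen when writing the file directly

def count_writes(total, chunk):
    """Write `total` bytes in `chunk`-sized pieces; return the call count."""
    calls = 0
    data = b"\0" * chunk
    with tempfile.NamedTemporaryFile() as f:
        written = 0
        while written < total:
            f.write(data)
            written += chunk
            calls += 1
    return calls

via_loop = count_writes(MB, BIO_SIZE)     # 256 requests per MB
direct = count_writes(MB, DIRECT_SIZE)    # 8 requests per MB
print(via_loop, direct, via_loop // direct)  # prints: 256 8 32
```

So the fuse client ends up forwarding 32x as many (much smaller) requests for the same amount of data, which is a plausible source of the throughput difference.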
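For reference, the profile and translator suggestions above might look like the following sketch. The volume name "myvol" is an assumed placeholder; each command is echoed rather than executed, so drop the leading "echo" to apply it for real:

```shell
# Hypothetical sketch, assuming a volume named "myvol".
# Echo prints each command; remove "echo" to actually run them.

# Collect per-fop stats to compare a higher- vs. lower-throughput run:
echo gluster volume profile myvol start
echo gluster volume profile myvol info

# Disable the caching/performance translators one at a time,
# re-testing after each change:
for xl in io-cache write-behind read-ahead quick-read; do
  echo gluster volume set myvol "$xl" disable
done
```

Re-enabling is the same `volume set` call with `enable`, so each translator can be ruled in or out independently.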
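And a sketch of trying the fopen-keep-cache mount option; "server:/myvol" and "/mnt/gluster" are assumed placeholders, and the commands are echoed rather than run:

```shell
# Hypothetical sketch: remount the gluster volume with fopen-keep-cache.
# Remove "echo" (and adjust server/volume/mountpoint) to run for real.
echo umount /mnt/gluster
echo mount -t glusterfs -o fopen-keep-cache server:/myvol /mnt/gluster
```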