On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur <vbellur at redhat.com> wrote:
> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <samppah at neutraali.net> wrote:
> > Here is a quick way to test this:
> > GlusterFS 3.7.13 volume with default settings, with a brick on a ZFS dataset. gluster-test1 is the server and gluster-test2 is the client mounting with FUSE.
> >
> > Writing a file with oflag=direct is not ok:
> > [root at gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 bs=1024000
> > dd: failed to open 'file': Invalid argument
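The EINVAL above is consistent with the brick filesystem rejecting O_DIRECT: ZFS on Linux did not support O_DIRECT opens at this point, and by default the posix translator passes the flag through to the brick. A quick way to confirm the error originates on the brick (a sketch; /gluster1/brick is a hypothetical dataset mount point) is to run the same dd directly on the server:

[root at gluster-test1 ~]# dd if=/dev/zero of=/gluster1/brick/directtest oflag=direct count=1 bs=1024000
# expected on a ZFS dataset without O_DIRECT support:
# dd: failed to open '/gluster1/brick/directtest': Invalid argument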
> >
> > Enable network.remote-dio on the Gluster volume:
> > [root at gluster-test1 gluster]# gluster volume set gluster network.remote-dio enable
> > volume set: success
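network.remote-dio makes the client protocol translator filter O_DIRECT out of the open flags, so the unsupported flag never reaches ZFS on the brick. To double-check the current value (gluster volume get is available in 3.7):

[root at gluster-test1 gluster]# gluster volume get gluster network.remote-dio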
> >
> > Writing a small file with oflag=direct is ok:
> > [root at gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 bs=1024000
> > 1+0 records in
> > 1+0 records out
> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
> >
> > Writing a bigger file with oflag=direct is ok:
> > [root at gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
> >
> > Enable sharding on the Gluster volume:
> > [root at gluster-test1 gluster]# gluster volume set gluster features.shard enable
> > volume set: success
> >
> > Writing a small file with oflag=direct is ok:
> > [root at gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=1 bs=1M
> > 1+0 records in
> > 1+0 records out
> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
> >
> > Writing a bigger file with oflag=direct is not ok:
> > [root at gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M
> > dd: error writing 'file3': Operation not permitted
> > dd: closing output file 'file3': Operation not permitted
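Since the open succeeds (the small write works) but the larger write fails, the EPERM presumably surfaces when the shard translator touches the second shard. One way to narrow down which syscall fails on the client (a debugging sketch, not part of the original report):

[root at gluster-test2 gluster]# strace -f -tt -e trace=open,openat,write,close \
    dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M 2>&1 | tail -n 20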
> >
>
>
> Thank you for these tests! Would it be possible to share the brick and client logs?
>
Not sure if his tests are the same as my setup, but here is what I end up with:
Volume Name: glustershard
Type: Replicate
Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.71.10:/gluster1/shard1/1
Brick2: 192.168.71.11:/gluster1/shard2/1
Brick3: 192.168.71.12:/gluster1/shard3/1
Options Reconfigured:
features.shard-block-size: 64MB
features.shard: on
server.allow-insecure: on
storage.owner-uid: 36
storage.owner-gid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.quick-read: off
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
performance.read-ahead: off
performance.readdir-ahead: on
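For anyone who wants to reproduce this setup from scratch, a sketch using the bricks above (the create/start steps are assumptions; only the options are taken from the volume info):

gluster volume create glustershard replica 3 \
    192.168.71.10:/gluster1/shard1/1 \
    192.168.71.11:/gluster1/shard2/1 \
    192.168.71.12:/gluster1/shard3/1
gluster volume set glustershard features.shard on
gluster volume set glustershard features.shard-block-size 64MB
gluster volume set glustershard network.remote-dio enable
gluster volume start glustershard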
Tab completion on /rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/ lists the expected top-level entries:
81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/  __DIRECT_IO_TEST__  .trashcan/
[root at ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test oflag=direct count=100 bs=1M
dd: error writing '/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test': Operation not permitted
This creates the 64 MB base file in the expected location, but the shard under .shard is left at 0 bytes.
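The xattr dumps below look like getfattr output taken on the brick; assuming standard tooling, the equivalent commands would be something like:

getfattr -d -m . -e hex /gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
getfattr -d -m . -e hex /gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1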
# file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x0200000000000000579231f3000e16e7
trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x0000000004000000000000000000000000000000000000010000000000000000
# file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
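Decoding the shard xattrs above (big-endian hex; reading only the first 8 bytes of shard.file-size, which is where the shard translator stores the logical file size):

$ printf '%d\n' 0x0000000004000000    # trusted.glusterfs.shard.block-size
67108864                              # = 64 MiB, matching features.shard-block-size: 64MB
$ printf '%d\n' 0x0000000004000000    # first 8 bytes of trusted.glusterfs.shard.file-size
67108864                              # = one full shard written before the EPERM

Only the base file carries the shard size xattrs; the .shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1 file having its own gfid and no file-size xattr is expected.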
> Regards,
> Vijay
Attachments:
- client.log (text/x-log, 37550 bytes): <http://www.gluster.org/pipermail/gluster-users/attachments/20160722/b6d29f9b/attachment.bin>
- brick.log (text/x-log, 25421 bytes): <http://www.gluster.org/pipermail/gluster-users/attachments/20160722/b6d29f9b/attachment-0001.bin>