satish kondapalli
2015-Nov-10 03:01 UTC
[Gluster-users] [Gluster-devel] iostat not showing data transfer while doing read operation with libgfapi
Hi,

I am running a performance test comparing FUSE vs. libgfapi. I have a single node; the client and the server run on the same node. The storage device is an NVMe SSD.

My volume info:

[root@sys04 ~]# gluster vol info
Volume Name: vol1
Type: Distribute
Volume ID: 9f60ceaf-3643-4325-855a-455974e36cc7
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 172.16.71.19:/mnt_nvme/brick1
Options Reconfigured:
performance.cache-size: 0
performance.write-behind: off
performance.read-ahead: off
performance.io-cache: off
performance.strict-o-direct: on

fio job file:

[global]
direct=1
runtime=20
time_based
ioengine=gfapi
iodepth=1
volume=vol1
brick=172.16.71.19
rw=read
size=128g
bs=32k
group_reporting
numjobs=1
filename=128g.bar

While running the sequential read test, iostat shows no data transfer on the device. It looks as though the gfapi engine is reading from a cache, since I am reading the same file repeatedly with different block sizes. But I have disabled io-cache for my volume. Can someone help me figure out where fio is reading the data from?

Sateesh
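One thing worth ruling out before blaming Gluster's own translators: since client and brick share the same machine, the server-side kernel page cache can serve the reads even with performance.io-cache off, because direct=1 at the fio/gfapi layer does not necessarily become O_DIRECT on the brick's backing file. Flushing the page cache (sync; echo 3 > /proc/sys/vm/drop_caches) before the run and re-checking iostat would confirm this. As a local illustration of what O_DIRECT actually guarantees (a plain-POSIX sketch, not gfapi; the file name is hypothetical):

```python
import os
import mmap

path = "direct_demo.bin"  # hypothetical scratch file on a local filesystem
blk = 4096                # O_DIRECT needs block-aligned length/offset/buffer

# Create a small test file.
with open(path, "wb") as f:
    f.write(os.urandom(blk * 4))

# O_DIRECT asks the kernel to bypass the page cache for this descriptor.
# Some filesystems (e.g. tmpfs) reject it with EINVAL, so fall back so
# the sketch still runs anywhere.
try:
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
except OSError:
    fd = os.open(path, os.O_RDONLY)

# An anonymous mmap gives a page-aligned buffer, as O_DIRECT requires.
buf = mmap.mmap(-1, blk)
got = os.readv(fd, [buf])
os.close(fd)
os.unlink(path)
assert got == blk
```

If the brick process opens 128g.bar without O_DIRECT, its reads fill and then hit the page cache, and iostat on the NVMe device will show nothing after the first pass over the file.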
Piotr Rybicki
2015-Nov-10 09:47 UTC
[Gluster-users] [Gluster-devel] iostat not showing data transfer while doing read operation with libgfapi
On 2015-11-10 04:01, satish kondapalli wrote:
> While doing the sequential read test, I am not seeing any data transfer
> on the device with the iostat tool. [...]

Hi.

This is normal: you will not see traffic on the Ethernet interface when the native RDMA transport is in use (as opposed to TCP via IPoIB). Try perfquery -x to watch the traffic counters increase on the RDMA interface.

Regards
Piotr Rybicki
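When reading the counters that perfquery -x reports, note that the extended data counters (PortXmitData, PortRcvData) are specified in 4-octet (32-bit word) units, so a delta between two samples must be multiplied by 4 to get bytes. A minimal helper to do that conversion (the sampling itself, i.e. running perfquery, is assumed):

```python
def ib_data_delta_bytes(before: int, after: int) -> int:
    """Bytes transferred between two PortXmitData/PortRcvData samples.

    The InfiniBand extended port counters report data in 4-octet units,
    so the raw counter delta is multiplied by 4.
    """
    return (after - before) * 4

# A delta of 268435456 four-octet units corresponds to 1 GiB.
assert ib_data_delta_bytes(0, 268435456) == 1 << 30
```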