Paul Guo
2015-May-22 10:50 UTC
[Gluster-users] seq read performance comparison between libgfapi and fuse
Hello,
I wrote two simple single-process sequential read test cases to compare
libgfapi and fuse. The logic looks like this:
char buf[32768];
ssize_t cnt;
long long total = 0;

while (1) {
    cnt = read(fd, buf, sizeof(buf));
    if (cnt == 0)
        break;
    else if (cnt > 0)
        total += cnt;
    // No "cnt < 0" was found during testing.
}
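Roughly, the libgfapi version looks like the sketch below; the volume
name "testvol", server "server1" and path "/bigfile" are placeholders,
not the actual test setup. Build with something like
gcc test.c $(pkg-config --cflags --libs glusterfs-api).

#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    char buf[32768];
    ssize_t cnt;
    long long total = 0;

    glfs_t *fs = glfs_new("testvol");                     /* placeholder volume */
    glfs_set_volfile_server(fs, "tcp", "server1", 24007); /* placeholder host */
    if (glfs_init(fs) != 0)
        return 1;

    glfs_fd_t *fd = glfs_open(fs, "/bigfile", O_RDONLY);  /* placeholder path */
    while ((cnt = glfs_read(fd, buf, sizeof(buf), 0)) > 0)
        total += cnt;

    glfs_close(fd);
    glfs_fini(fs);
    printf("read %lld bytes\n", total);
    return 0;
}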
Below are the times needed to read a large file to completion:
                  fuse    libgfapi
direct io:         40s       51s
non-direct io:     40s       47s
The version is 3.6.3 on CentOS 6.5. The results show that libgfapi is
clearly slower than the fuse interface, even though libgfapi consumed
noticeably fewer CPU cycles during the tests. Before each run I dropped
all kernel page/inode/dentry caches and stopped and then restarted
glusterd & gluster (to clear the gluster-side caches).
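The cache-drop step, for reference, is the usual
"sync; echo 3 > /proc/sys/vm/drop_caches" sequence; in C it is
equivalent to this sketch (requires root):

#include <unistd.h>
#include <fcntl.h>

static void drop_kernel_caches(void)
{
    int fd;

    sync();                            /* flush dirty pages first */
    fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
    if (fd >= 0) {
        write(fd, "3", 1);             /* 3 = page cache + dentries/inodes */
        close(fd);
    }
}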
I tested direct I/O because I suspected that fuse kernel readahead was
helping more than gluster's own read optimizations. I have searched a
lot but did not find much comparing fuse and libgfapi. Has anyone seen
this before, and does anyone know why?
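For reference, the direct-I/O variant of the fuse-side test looks
roughly like the sketch below; the mount path is a placeholder. Note
that O_DIRECT needs a suitably aligned buffer, hence posix_memalign()
instead of a stack array.

#define _GNU_SOURCE                      /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

void *buf;
ssize_t cnt;
long long total = 0;

posix_memalign(&buf, 4096, 32768);       /* alignment required by O_DIRECT */
int fd = open("/mnt/glustervol/bigfile", O_RDONLY | O_DIRECT);

while ((cnt = read(fd, buf, 32768)) > 0)
    total += cnt;

close(fd);
free(buf);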
Thanks,
Paul
Niels de Vos
2015-May-22 11:58 UTC
[Gluster-users] seq read performance comparison between libgfapi and fuse
On Fri, May 22, 2015 at 06:50:40PM +0800, Paul Guo wrote:
> [...]
> I tested direct I/O because I suspected that fuse kernel readahead was
> helping more than gluster's own read optimizations. I have searched a
> lot but did not find much comparing fuse and libgfapi. Has anyone seen
> this before, and does anyone know why?

Does your testing include the mount/unmount and/or libgfapi glfs_init()
parts?

Maybe you can share your test programs so that others can try and check
it too?

https://github.com/axboe/fio supports Gluster natively too. That tool
has been developed to test and compare I/O performance results. Does it
give you similar differences?

Niels