<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> </head> <body bgcolor="#ffffff" text="#000000"> Hi all,<br> <br> I installed glusterFS on 2 computers under Mandriva 2008.<br> Connection type: Ethernet 100 Mbit/s (mii-tool result: negotiated 100baseTx-FD flow-control)<br> <br> Hereafter is my configuration (same on both PCs):<br> <br> <b><big>glusterfs-server.vol:<br> <br> </big></b><tt>volume dir_main<br> type storage/posix # POSIX FS translator<br> option directory /main # Export this directory<br> end-volume<br> <br> volume locks_main<br> type features/posix-locks<br> subvolumes dir_main<br> end-volume<br> <br> volume main<br> type protocol/server<br> option transport-type tcp/server # For TCP/IP transport<br> subvolumes locks_main<br> option auth.ip.main.allow 127.0.0.1,172.16.1.* # Allow access to "brick" volume<br> end-volume</tt><br> <br> <br> <b><big>client.vol:<br> <br> </big></b><tt>volume main_loc<br> type protocol/client<br> option transport-type tcp/client<br> option remote-host localhost<br> option remote-subvolume main<br> end-volume<br> <br> volume main_dist<br> type protocol/client<br> option transport-type tcp/client<br> option remote-host other<br> option remote-subvolume main<br> end-volume<br> <br> volume raid_main_afr<br> type cluster/afr<br> subvolumes main_loc main_dist<br> option read-subvolume main_loc<br> end-volume<br> <br> volume raid_main_ra<br> type performance/read-ahead<br> option page-size 128kB<br> option page-count 4<br> option force-atime-update off<br> subvolumes raid_main_afr<br> end-volume<br> <br> volume raid_main_wb<br> type performance/write-behind<br> option aggregate-size 1MB<br> option flush-behind on<br> subvolumes raid_main_ra<br> end-volume<br> <br> volume raid_main<br> type performance/io-cache<br> option cache-size 512MB<br> option page-size 1MB<br> option priority *:0 # *.html:2,*:1<br> option force-revalidate-timeout 2 # default is 1<br> subvolumes raid_main_wb<br> end-volume<br> 
</tt><br> <br> It works fine, but slowly!<br> I'm a newbie in glusterFS, so maybe some option isn't adequate. Please advise.<br> <br> Due to the option "<tt>read-subvolume main_loc</tt>" I didn't expect network traffic when I just list or read files, but even a simple ls generates a lot of network traffic.<br> An "ls -R" takes 7 to 8 seconds for fewer than 8000 files. If I do it locally, I get the result instantly.<br> Question 1: Is this traffic normal for read-only operations?<br> <br> Question 2: in the documentation, I read that there are two protocols: an ASCII protocol and a binary protocol. Currently, according to what I see with tcpdump, my glusterFS uses the ASCII protocol. I guess that's not the best for performance! How can I force it to use the binary protocol?<br> <br> Thanks for any help.<br> <br> Best regards,<br> <pre class="moz-signature" cols="72">-- Francis GASCHET / NUMLOG <a class="moz-txt-link-freetext" href="http://www.numlog.fr">http://www.numlog.fr</a> Tel.: +33 (0) 130 791 616 Fax.: +33 (0) 130 819 286 NUMLOG recrute sur LOLIX : <a class="moz-txt-link-freetext" href="http://fr.lolix.org/">http://fr.lolix.org/</a> </pre> </body> </html>
Hi Francis, what is the version of glusterfs you are using? On Thu, Dec 11, 2008 at 5:38 PM, Francis GASCHET <fg at numlog.fr> wrote:> Hi all, > > I installed glusterFS on 2 computers under Mandriva 2008. > Connection type : Ethernet 100 mbits/S (mii-tool result: negotiated > 100baseTx-FD flow-control) > > Hereafter is my configuration (same on both PCs): > > *glusterfs-server.vol: > > *volume dir_main > type storage/posix # POSIX FS > translator > option directory /main # Export this > directory > end-volume > > volume locks_main > type features/posix-locks > subvolumes dir_main > end-volume > > volume main > type protocol/server > option transport-type tcp/server # For TCP/IP > transport > subvolumes locks_main > option auth.ip.main.allow 127.0.0.1,172.16.1.* # Allow > access to "brick" volume > end-volume > > > *client.vol: > > *volume main_loc > type protocol/client > option transport-type tcp/client > option remote-host localhost > option remote-subvolume main > end-volume > > volume main_dist > type protocol/client > option transport-type tcp/client > option remote-host other > option remote-subvolume main > end-volume > > volume raid_main_afr > type cluster/afr > subvolumes main_loc main_dist > option read-subvolume main_loc > end-volume > > volume raid_main_ra > type performance/read-ahead > option page-size 128kB > option page-count 4 > option force-atime-update off > subvolumes raid_main_afr > end-volume > > volume raid_main_wb > type performance/write-behind > option aggregate-size 1MB > option flush-behind on > subvolumes raid_main_ra > end-volume > > volume raid_main > type performance/io-cache > option cache-size 512MB > option page-size 1MB > option priority *:0 # *.html:2,*:1 > option force-revalidate-timeout 2 # default is 1 > subvolumes raid_main_wb > end-volume > > > It works fine, but slowly ! > I'm a newbie in glusterFS, so may be some option isn't adequate. Please > advise. 
> > Due to the option "read-subvolume main_loc" I didn't expect network > traffic when I just list files or read them, but actually, even with a > simple ls, I see a lot of network traffic. > A "ls -R" takes 7 to 8 seconds for less than 8000 files. If I do it > locally, I get the result instantly. > Question 1 : Is this traffic normal on read only operation ? > > Question 2 : in the documentation, I read that there is 2 protocols : ASCII > protocol and binary protocol. Currently, according to what I see with > tcpdump, my glusterFS uses the ASCII protocol. I guess it's not the best for > performance ! What is the way to enforce it using the binary protocol ? > > Thank's for any help. > > Best regards, > > -- > Francis GASCHET / NUMLOG http://www.numlog.fr > Tel.: +33 (0) 130 791 616 > Fax.: +33 (0) 130 819 286 > > NUMLOG recrute sur LOLIX : http://fr.lolix.org/ > > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users > > -- Raghavendra G
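On question 1, the traffic is expected: `read-subvolume` only steers read() calls, while AFR still sends each lookup to every subvolume so it can detect files that need self-heal, which is why even a recursive ls crosses the network. One mitigation worth trying is caching directory and stat results on the client. A minimal sketch, assuming the performance/stat-prefetch translator is available in this GlusterFS release (the volume name raid_main_sp is chosen for illustration, and option names may differ between versions):

```
# Hedged sketch: cache directory entries and stat results on the
# client so a recursive ls does not pay one network round-trip per
# file. Assumes performance/stat-prefetch exists in this release.
volume raid_main_sp
  type performance/stat-prefetch
  subvolumes raid_main      # stack on top of the existing io-cache volume
end-volume
```

If this translator is loaded, the mount should use raid_main_sp as the top-most volume instead of raid_main; since the benefit depends on the release, timing the same "ls -R" before and after is the quickest way to check.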