Ate Poorthuis
2009-May-01 14:47 UTC
[Gluster-users] samba export: transport endpoint not connected
I have a fully working gluster setup (2 distributed nodes that are
replicated on the server side to 2 other nodes). When exporting the
mountpoint as samba share, the gigE link is fully saturated on writing over
samba. However, when reading, performance is seriously reduced.
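For context, the share sits on top of the fuse mountpoint with a stock Samba configuration along these lines (share name and mount path are assumptions, not taken from the post):

```ini
[gfs]
   ; hypothetical share pointing at the glusterfs fuse mountpoint
   path = /mnt/glusterfs
   read only = no
   ; Samba translates CIFS byte-range locks into POSIX fcntl locks on
   ; the underlying filesystem, which is what drives the getlk calls
   ; seen in the fuse log below; "posix locking = no" is one way to
   ; test whether the lock errors are related to the slow reads
   ; posix locking = no
```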
Client bandwidth:

 iface            Rx              Tx              Total
========================================================
 lo:         0.00 KB/s       0.00 KB/s       0.00 KB/s
 eth0:     255.29 KB/s    5546.58 KB/s    5801.86 KB/s
 eth1:   16890.96 KB/s     389.40 KB/s   17280.36 KB/s
--------------------------------------------------------
 total:  17146.25 KB/s    5935.98 KB/s   23082.23 KB/s
And the following errors show up in the logs:
2009-05-01 16:31:25 E [fuse-bridge.c:2332:fuse_getlk_cbk] glusterfs-fuse:
73272: ERR => -1 (Transport endpoint is not connected)
2009-05-01 16:31:25 E [fuse-bridge.c:2332:fuse_getlk_cbk] glusterfs-fuse:
73289: ERR => -1 (Transport endpoint is not connected)
2009-05-01 16:31:25 E [fuse-bridge.c:2332:fuse_getlk_cbk] glusterfs-fuse:
73306: ERR => -1 (Transport endpoint is not connected)
2009-05-01 16:31:25 E [fuse-bridge.c:2332:fuse_getlk_cbk] glusterfs-fuse:
73323: ERR => -1 (Transport endpoint is not connected)
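The errors come from fuse_getlk_cbk, i.e. the fcntl F_GETLK lock queries that Samba issues through the fuse mount. Those queries can be reproduced outside Samba with a snippet like the following (the struct layout follows the Python fcntl docs; the temporary file is a stand-in, so point it at a file on the glusterfs mount, e.g. a hypothetical /mnt/glusterfs/lock-test, to see whether getlk itself fails):

```python
import fcntl
import os
import struct
import tempfile

def getlk_conflict(path):
    """Issue F_GETLK on path and report whether a conflicting lock exists."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        # struct flock packing from the Python fcntl module docs;
        # platform-dependent, but 'hhllhh' works on Linux
        lockdata = struct.pack('hhllhh', fcntl.F_WRLCK, 0, 0, 0, 0, 0)
        result = fcntl.fcntl(fd, fcntl.F_GETLK, lockdata)
        l_type = struct.unpack('hhllhh', result)[0]
        # the kernel rewrites l_type to F_UNLCK when nothing conflicts
        return l_type != fcntl.F_UNLCK
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile() as f:
    print("conflicting lock held:", getlk_conflict(f.name))
```

If this call returns the same ENOTCONN error on the mount, the problem is in the locks path of the volfiles rather than in Samba.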
I hope someone can provide pointers on how to solve this. The vol files can be found below.
Thanks a lot,
Ate
Server .vol
--------
volume posix1
  type storage/posix                    # POSIX FS translator
  option directory /srv/export/gfs1/    # Export this directory
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume read-ahead1
  type performance/read-ahead
  option page-count 4
  subvolumes locks1
end-volume

volume write-behind1
  type performance/write-behind
  subvolumes read-ahead1
end-volume

volume afr_52
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.5.52
  option remote-subvolume write-behind1
end-volume

volume afr
  type cluster/replicate
  subvolumes write-behind1 afr_52
end-volume
### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  subvolumes write-behind1 afr
  option auth.addr.write-behind1.allow *    # Allow access to "brick" volume
  option auth.addr.afr.allow *              # Allow access to "brick" volume
end-volume
Client .vol
-----------
volume brick1_51
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.5.51     # IP address of the remote brick
  option remote-subvolume afr         # name of the remote volume
end-volume

volume brick1_101
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.5.101    # IP address of the remote brick
  option remote-subvolume afr         # name of the remote volume
end-volume

volume ha1
  type /testing/cluster/ha
  subvolumes brick1_51 brick1_101
end-volume

volume brick1_52
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.5.52     # IP address of the remote brick
  option remote-subvolume afr         # name of the remote volume
end-volume

volume brick1_102
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.5.102    # IP address of the remote brick
  option remote-subvolume afr         # name of the remote volume
end-volume

volume ha2
  type /testing/cluster/ha
  subvolumes brick1_52 brick1_102
end-volume

volume bricks
  type cluster/distribute
  option min-free-disk 5%
  subvolumes ha1 ha2
end-volume

### Add readahead feature
volume readahead
  type performance/read-ahead
  option page-size 1MB    # unit in bytes
  option page-count 2     # cache per file = (page-count x page-size)
  subvolumes bricks
end-volume
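As the volfile comment states, the client-side read-ahead cache per open file is page-count × page-size, i.e. 2 MB with the settings above:

```python
# values from the readahead volume above
page_size = 1 * 1024 * 1024   # 1MB, in bytes
page_count = 2

# cache per file = (page-count x page-size)
cache_per_file = page_count * page_size
print(cache_per_file)  # bytes of read-ahead cache per open file
```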
### Add writeback feature
volume writeback
  type performance/write-behind
  option flush-behind off
  subvolumes readahead
end-volume
Ate Poorthuis
2009-May-01 15:01 UTC
[Gluster-users] samba export: transport endpoint not connected
Forgot to add that I am using Debian 5.0.1 and glusterfs 2.0.0 final, and that I am not using the gluster-patched fuse.