Sabuj Pattanayek
2009-Oct-05 17:50 UTC
[Gluster-users] bonnie++-1.03a crashes during "Stat files in sequential order"
Hi,
Has anyone successfully run bonnie++-1.03a with these parameters, or
something similar?
./bonnie++ -f -d /glfsdist/bonnie -s 24G:32k > bonnie.out
I'm able to run it to completion with gluster in striped mode, and
directly against the XFS layer, but in distributed mode I get the
following error a few minutes into the test:
Stat files in sequential order...Expected 16384 files but only got 0
Cleaning up test directory after error.
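For what it's worth, the 16384 in that error doesn't look arbitrary: if I'm reading the bonnie++ 1.03 man page right, the small-file phases default to -n 16, counted in multiples of 1024 files, so:

```shell
#!/bin/sh
# bonnie++'s small-file tests default to -n 16, in units of 1024 files
# (per the 1.03 man page, as I read it), which matches the error message:
echo $((16 * 1024))   # prints 16384
```

So the stat phase expected the 16384 files the create phase should have left behind, and found none of them through the distributed mount.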
I'm running:
glusterfs-server-2.0.4-1.el5
glusterfs-common-2.0.4-1.el5
glusterfs-client-2.0.4-1.el5
with (patched) fuse:
kernel-module-fuse-2.6.18-128.2.1.el5-2.7.4glfs11-1 (this is the RPM I
built from the fuse.spec file after patching the sources).
fuse-libs-2.7.4glfs11-1
fuse-2.7.4glfs11-1
fuse-debuginfo-2.7.4glfs11-1
fuse-devel-2.7.4glfs11-1
on a 2.6.18-128.2.1.el5 x86_64 kernel. There are 5 gluster servers
connected via a Mellanox QDR InfiniBand switch; here's one of the
server volfiles:
####
volume posix-stripe
  type storage/posix
  option directory /export/gluster1/stripe
end-volume

volume posix-distribute
  type storage/posix
  option directory /export/gluster1/distribute
end-volume

volume locks
  type features/locks
  subvolumes posix-stripe
end-volume

volume locks-dist
  type features/locks
  subvolumes posix-distribute
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 16
  subvolumes locks
end-volume

volume iothreads-dist
  type performance/io-threads
  option thread-count 16
  subvolumes locks-dist
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs
  option auth.addr.iothreads.allow 10.2.178.*
  option auth.addr.iothreads-dist.allow 10.2.178.*
  subvolumes iothreads iothreads-dist
end-volume
####
The server configuration is the same on the other servers except for
the storage/posix directory, i.e. /export/gluster2,
/export/gluster3, etc. The client volfile for mounting the distributed
volume looks like this:
####
volume client-distribute-1
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster1
  option remote-subvolume iothreads-dist
end-volume

volume client-distribute-2
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster2
  option remote-subvolume iothreads-dist
end-volume

volume client-distribute-3
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster3
  option remote-subvolume iothreads-dist
end-volume

volume client-distribute-4
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster4
  option remote-subvolume iothreads-dist
end-volume

volume client-distribute-5
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster5
  option remote-subvolume iothreads-dist
end-volume

volume readahead-gluster1
  type performance/read-ahead
  option page-count 4            # 2 is default
  option force-atime-update off  # default is off
  subvolumes client-distribute-1
end-volume

volume readahead-gluster2
  type performance/read-ahead
  option page-count 4            # 2 is default
  option force-atime-update off  # default is off
  subvolumes client-distribute-2
end-volume

volume readahead-gluster3
  type performance/read-ahead
  option page-count 4            # 2 is default
  option force-atime-update off  # default is off
  subvolumes client-distribute-3
end-volume

volume readahead-gluster4
  type performance/read-ahead
  option page-count 4            # 2 is default
  option force-atime-update off  # default is off
  subvolumes client-distribute-4
end-volume

volume readahead-gluster5
  type performance/read-ahead
  option page-count 4            # 2 is default
  option force-atime-update off  # default is off
  subvolumes client-distribute-5
end-volume

volume writebehind-gluster1
  type performance/write-behind
  option flush-behind on
  subvolumes readahead-gluster1
end-volume

volume writebehind-gluster2
  type performance/write-behind
  option flush-behind on
  subvolumes readahead-gluster2
end-volume

volume writebehind-gluster3
  type performance/write-behind
  option flush-behind on
  subvolumes readahead-gluster3
end-volume

volume writebehind-gluster4
  type performance/write-behind
  option flush-behind on
  subvolumes readahead-gluster4
end-volume

volume writebehind-gluster5
  type performance/write-behind
  option flush-behind on
  subvolumes readahead-gluster5
end-volume

volume distribute
  type cluster/distribute
  #option block-size 2MB
  #subvolumes client-distribute-1 client-distribute-2 client-distribute-3 client-distribute-4 client-distribute-5
  subvolumes writebehind-gluster1 writebehind-gluster2 writebehind-gluster3 writebehind-gluster4 writebehind-gluster5
end-volume
####
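Incidentally, the five near-identical protocol/client stanzas above are the kind of thing that's easy to typo when copy-pasting. A sketch of generating them instead (loop bounds and host names taken from the config above; output goes to stdout, redirect wherever you keep your volfiles):

```shell
#!/bin/sh
# Sketch: emit the five protocol/client stanzas used in the client
# volfile above, one per server gluster1..gluster5.
for i in 1 2 3 4 5; do
  cat <<EOF
volume client-distribute-$i
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster$i
  option remote-subvolume iothreads-dist
end-volume

EOF
done
```

The readahead and writebehind stanzas could be generated the same way.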
Thanks,
Sabuj Pattanayek
Sabuj Pattanayek
2009-Oct-05 18:26 UTC
[Gluster-users] bonnie++-1.03a crashes during "Stat files in sequential order"
I re-compiled bonnie++ on the client; the previous binary had been compiled on a 32-bit RHEL5 box. The test ran all the way through with the new x86_64 binary. Still wondering why it crashed where it did in distributed mode, since the gluster stripe and direct XFS tests used the 32-bit binary and had no problems (at least on the initial run).

On Mon, Oct 5, 2009 at 12:50 PM, Sabuj Pattanayek <sabujp at gmail.com> wrote:
> Hi,
>
> Has anyone successfully run bonnie++-1.03a with these parameters or
> something similar?:
>
> ./bonnie++ -f -d /glfsdist/bonnie -s 24G:32k > bonnie.out
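In case it saves someone else the same head-scratching: a quick way to check what a binary was actually built for, before benchmarking with it. This is just a sketch; /bin/sh stands in for ./bonnie++ here.

```shell
#!/bin/sh
# Sketch: read the ELF class byte (offset 4 of the ELF header, EI_CLASS):
# 1 = 32-bit, 2 = 64-bit. /bin/sh is a stand-in for ./bonnie++.
cls=$(head -c5 /bin/sh | od -An -tu1 | awk '{print $5}')
case "$cls" in
  1) echo "32-bit ELF" ;;
  2) echo "64-bit ELF" ;;
  *) echo "not an ELF binary?" ;;
esac
```

On systems that have file(1), `file ./bonnie++` reports the same information.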