I'm getting stability issues with the 2.0.x series. The log entries are:

[2009-06-07 21:31:09] E [client-protocol.c:292:call_bail] Node2: bailing out frame LOOKUP(32) frame sent = 2009-06-07 21:01:02. frame-timeout = 1800

Currently this is a test cluster, so there's not much traffic going on. However, the GlusterFS mount isn't stable for more than 48 hours; then the mount disappears. Any ideas?

My configuration:

volume posix
  type storage/posix
  option directory /home/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option autoscaling yes
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs
  option transport.ib-verbs.listen-port 6990
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

volume Node1
  type protocol/client
  option transport-type ib-verbs
  option remote-port 6990
  option remote-host Node1
  option remote-subvolume brick
end-volume

volume Node2
  type protocol/client
  option transport-type ib-verbs
  option remote-port 6990
  option remote-host Node2
  option remote-subvolume brick
end-volume

volume nufa
  type cluster/replicate
  option scheduler nufa
  option read-subvolume brick
  subvolumes Node1 Node2
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 8MB
  subvolumes nufa
end-volume

volume cache
  type performance/io-cache
  option cache-size 8MB
  subvolumes writebehind
end-volume
Scandic wrote:
> I'm getting stability issues with the 2.0.x series. The log entries are:
>
> [2009-06-07 21:31:09] E [client-protocol.c:292:call_bail] Node2: bailing out frame LOOKUP(32) frame sent = 2009-06-07 21:01:02. frame-timeout = 1800
>
> [...]
>
> volume brick
>   type performance/io-threads
>   option autoscaling yes
>   subvolumes locks
> end-volume
>
> [...]

First, try without autoscaling. Then, if you do need autoscaling, set a non-default maximum thread count with:

option max-threads 32

Currently the default is 256; it has already been reduced by a commit in the repository.

-Shehjar
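Putting Shehjar's two suggestions together, the io-threads volume from the original configuration would look something like the sketch below. This is untested, and the 32-thread cap is only his suggested starting value, not a tuned setting; a real volfile should contain just one of the two "volume brick" definitions, not both.

# Variant 1: autoscaling removed entirely -- the first thing to try.
volume brick
  type performance/io-threads
  subvolumes locks
end-volume

# Variant 2: keep autoscaling, but cap it well below the 256-thread default.
volume brick
  type performance/io-threads
  option autoscaling yes
  option max-threads 32
  subvolumes locks
end-volume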
I had the same issue before. Try to avoid using autoscaling; that helped me.

Scandic wrote:
> I'm getting stability issues with the 2.0.x series. [...]
>
> volume brick
>   type performance/io-threads
>   option autoscaling yes
>   subvolumes locks
> end-volume
>
> [...]