Hi, I have noticed that when I use the ib-verbs transport and try to umount the filesystem, I get "device is busy", even when no process has a hold on it (e.g., my shell's CWD is not inside the filesystem tree). To get rid of the mount, I need to kill the client-side process, which is not ideal. I do not have any difficulties with the tcp transport. Thanks, John
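For a "device is busy" unmount, the usual first steps are to look for hidden holders of the mount and, failing that, do a lazy unmount. A sketch (the mount point /mnt/glusterfs is a hypothetical placeholder, not from the message above):

```shell
# Hypothetical mount point; substitute your actual GlusterFS mount.
MNT=/mnt/glusterfs

# Show which processes (if any) hold the mount open.
# -v: verbose listing, -m: treat the argument as a mounted filesystem.
fuser -vm "$MNT" 2>/dev/null || echo "no visible holders of $MNT"

# Lazy unmount: detach the filesystem from the namespace immediately
# and clean up once the last reference is released.
umount -l "$MNT" 2>/dev/null || echo "lazy unmount failed (not mounted, or need root)"
```

`umount -l` detaches the mount even while references remain, which avoids killing the client process outright; `lsof +D "$MNT"` is another way to find holders if fuser shows nothing.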
Hi, All, I need some configuration advice. I'm running 1.3.10.

I *think* I'm doing things right, but when I was running 1.3.8 there were occasional times when it hung and I had to restart gluster, kill things that were on the mount, and remount it. So it's possible it was just 1.3.8 bugs, but I'm not sure I've got things in the right order to address posix locks, etc.

There are 2 servers AFR'ing each other, so both configs are the same:

volume home1
  type storage/posix                  # POSIX FS translator
  option directory /gluster/home      # Export this directory
end-volume

volume posix-locks-home1
  type features/posix-locks
  # option mandatory on
  subvolumes home1
end-volume

volume home2
  type protocol/client
  option transport-type tcp/client
  option remote-host REMOTE_IP                # IP address of remote host
  option remote-subvolume posix-locks-home1   # use home1 on remote host (with locking)
end-volume

volume home
  type cluster/afr
  option read-subvolume posix-locks-home1
  subvolumes posix-locks-home1 home2
end-volume

volume threads1
  type performance/io-threads
  option thread-count 8
  option cache-size 32MB
  subvolumes home
end-volume

volume server
  type protocol/server
  option transport-type tcp/server    # For TCP/IP transport
  subvolumes home home1 posix-locks-home1
  option auth.ip.posix-locks-home1.allow REMOTE_IP,127.0.0.1  # Allow access to "posix-locks-home1" volume
  option auth.ip.home1.allow REMOTE_IP,127.0.0.1              # Allow access to "home1" volume
  option auth.ip.home.allow REMOTE_IP,127.0.0.1               # Allow access to "home" (afr) volume
end-volume
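For reference, a client that mounts the AFR'd "home" volume from one of these servers could use a volume file along these lines (a sketch only; SERVER_IP and the file name client.vol are placeholders, not from the thread):

```
volume remote-home
  type protocol/client
  option transport-type tcp/client
  option remote-host SERVER_IP        # IP of either AFR server
  option remote-subvolume home        # the cluster/afr volume exported above
end-volume
```

With 1.3.x this would be mounted with something like `glusterfs -f client.vol /mnt/home`.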
Hi,

The configuration looks fine with proper IP addresses substituted for REMOTE_IP. If you face any problems, kindly report them to us.

regards,

On Tue, Jul 22, 2008 at 2:01 AM, Keith Freedman <freedman at freeformit.com> wrote:
> Hi, All,
>
> I need some configuration advice. I'm running 1.3.10.
>
> I *think* I'm doing things right but when I was running 1.3.8 there
> were occasional times when it hangs and I have to restart gluster,
> kill things that are on the mount, and remount it.
>
> so it's possible it was just 1.3.8 bugs, but I'm not sure I've got
> things in the right order to address posix locks, etc...
>
> There are 2 servers AFR'ing each other. so both configs are the same:
>
> volume home1
>   type storage/posix                 # POSIX FS translator
>   option directory /gluster/home     # Export this directory
> end-volume
>
> volume posix-locks-home1
>   type features/posix-locks
>   # option mandatory on
>   subvolumes home1
> end-volume
>
> volume home2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host REMOTE_IP       # IP address of remote host
>   option remote-subvolume posix-locks-home1  # use home1 on remote host (with locking)
> end-volume
>
> volume home
>   type cluster/afr
>   option read-subvolume posix-locks-home1
>   subvolumes posix-locks-home1 home2
> end-volume
>
> volume threads1
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 32MB
>   subvolumes home
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server   # For TCP/IP transport
>   subvolumes home home1 posix-locks-home1
>   option auth.ip.posix-locks-home1.allow REMOTE_IP,127.0.0.1
>   option auth.ip.home1.allow REMOTE_IP,127.0.0.1
>   option auth.ip.home.allow REMOTE_IP,127.0.0.1
> end-volume
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users

--
Raghavendra G

A centipede was happy quite, until a toad in fun,
Said, "Pray, which leg comes after which?"
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous
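One way to sanity-check locking on a mounted volume is a quick smoke test with flock(1) from util-linux. Note this takes flock(2)-style locks rather than fcntl/POSIX locks, so it only partially exercises the features/posix-locks translator; the lock file path below is a hypothetical placeholder, which should be replaced with a path on the Gluster mount:

```shell
# Hypothetical file on the Gluster mount; substitute a real path.
LOCKFILE=/tmp/gluster-lock-test

# Take an exclusive lock non-blockingly (-n) and run a command while
# holding it. flock(1) creates LOCKFILE if it does not exist.
flock -n "$LOCKFILE" -c 'echo "lock acquired"' || echo "could not acquire lock"
```

Running this simultaneously from both AFR servers against the same file, with one side holding the lock (e.g. `-c 'sleep 30'`), shows whether the second attempt is correctly refused.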