Just posted on #gluster and didn't get a response, so I thought I'd post here as well:

Is anyone familiar with nfs.trusted-sync behaviour? Specifically, in a multi-node replica cluster, does NFS send the ack back to the client when the data has been received in memory on whichever host is providing NFS services, or does it send the ack only once that data has been replicated into the memory of all the replica member nodes? I suppose the question could also be: does the data have to be on disk on one of the nodes before it is replicated to the other nodes?

Thanks,

Steve Dainard
IT Infrastructure Manager
Miovision <http://miovision.com/>
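For context, nfs.trusted-sync is documented as a per-volume option on the built-in Gluster NFS server under which writes and COMMIT requests are treated as asynchronous, so the reply can be sent before the data is guaranteed to be on disk. A minimal sketch of toggling and checking the option, assuming an illustrative volume named "raid5":

    # enable trusted-sync (async) behaviour on the Gluster NFS server for one volume
    gluster volume set raid5 nfs.trusted-sync on

    # revert to the default (synchronous) behaviour
    gluster volume reset raid5 nfs.trusted-sync

    # confirm the current value
    gluster volume info raid5 | grep nfs.trusted-sync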
Hi,

I've set up a 2-brick replicated system using bonded GigE:

eth0 - management
eth1 & eth2 - bonded, 192.168.20.x
eth3 & eth4 - bonded, 192.168.10.x

I created the replicated volume over the 192.168.10 interfaces.

# gluster volume info

Volume Name: raid5
Type: Replicate
Volume ID: 02b24ff0-e55c-4f92-afa5-731fd52d0e1a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: filer-1:/gluster-exported/raid5/data
Brick2: filer-2:/gluster-exported/raid5/data
Options Reconfigured:
performance.nfs.stat-prefetch: on
performance.nfs.io-cache: on
performance.nfs.read-ahead: on
performance.nfs.io-threads: on
nfs.trusted-sync: on
performance.cache-size: 13417728
performance.io-thread-count: 64
performance.write-behind-window-size: 4MB
performance.io-cache: on
performance.read-ahead: on

I attached an NFS client across the 192.168.20 interface. NFS works fine, but under load I get 100% CPU usage from the NFS process and lose connectivity.

My plan was to replicate across the 192.168.10 bond and also do gluster mounts there; the NFS mount on 192.168.20 was meant to keep NFS traffic off the gluster link. Is this a supported configuration? Does anyone else do this?

Gerald

--
Gerald Brandt
Majentis Technologies
gbr at majentis.com
204-229-6595
www.majentis.com
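To make the intended layout concrete, here is a rough sketch of how such a setup would be created and mounted. It is only an illustration: the 192.168.20.1 address and the /mnt/raid5 mount point are assumptions, and it presumes the filer-1/filer-2 hostnames resolve to the 192.168.10.x bond on the servers.

    # on filer-1: peer and volume are created over the 192.168.10.x bond
    gluster peer probe filer-2
    gluster volume create raid5 replica 2 \
        filer-1:/gluster-exported/raid5/data \
        filer-2:/gluster-exported/raid5/data
    gluster volume start raid5

    # on the client: mount over the 192.168.20.x bond
    # (the built-in Gluster NFS server speaks NFSv3 over TCP)
    mount -t nfs -o vers=3,proto=tcp 192.168.20.1:/raid5 /mnt/raid5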