During a recent review of our technology that's under development, a couple of questions came up:

1. Is there any reason why lustre (using tcp) wouldn't work through a NAT
   box? I believe the core issue is whether servers initiate connections
   back to clients, or whether the client is responsible for initiating
   all connections.

2. Is there a way to mount a (lustre) filesystem not only read-only, but
   read-only-and-I-know-everybody-else-has-it-read-only-too? The goal
   would be to allow lustre to make some assumptions about when and how
   it needs to lock stuff. I didn't think there was anything in there
   like that, but said I'd ask.

Thanks in advance!
Hi Jrd (you probably have a name like me :):

> -----Original Message-----
> From: lustre-discuss-bounces@clusterfs.com
> [mailto:lustre-discuss-bounces@clusterfs.com] On Behalf Of jrd@jrd.org
> Sent: Monday, September 18, 2006 6:12 AM
> To: lustre-discuss@clusterfs.com
> Subject: [Lustre-discuss] More questions for the experts
>
> During a recent review of our technology that's under development, a
> couple of questions came up:
>
> 1. Is there any reason why lustre (using tcp) wouldn't work through a
>    NAT box? I believe the core issue is whether servers initiate
>    connections back to clients, or whether the client is responsible
>    for initiating all connections.

Lustre across NAT firewalls should work and was used in Boston by former
employees to share music files over cable modems etc. There is one
situation where not having a NAT firewall would allow recovery to proceed
while with a NAT firewall it will time out: this is when servers re-send
lock cancellation callbacks.

> 2. Is there a way to mount a (lustre) filesystem not only read-only,
>    but read-only-and-I-know-everybody-else-has-it-read-only-too? The
>    goal would be to allow lustre to make some assumptions about when
>    and how it needs to lock stuff. I didn't think there was anything
>    in there like that, but said I'd ask.

Interesting, so this would be for an archive that is never modified and
is read by many clients. Locking is not currently believed to be a
serious overhead, because the locking is wrapped with the operation for
which locks are acquired (we called this "lock intents"). I think
presently all clients would take read locks, but I like your thought. It
is not so common though; most archives see an occasional update, like a
farm of web servers. We currently have no plans to implement this.

- Peter

> Thanks in advance!
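[Editor's note: since the discussion above turns on clients initiating all connections, the firewall side can be sketched as below. This is an illustrative fragment, not from the thread: the addresses are hypothetical, and it assumes the default LNET TCP acceptor port 988 used by the socklnd driver.]

```shell
# On the NAT box: allow outbound TCP from NAT'd Lustre clients
# (hypothetical 10.0.0.0/24) to a hypothetical server 192.168.1.10 on
# LNET's default acceptor port 988. Because the client initiates every
# connection, no inbound port-forwarding rule is needed; only the
# recovery-time callback resend Peter mentions may still time out.
iptables -A FORWARD -p tcp -s 10.0.0.0/24 -d 192.168.1.10 --dport 988 \
         -m state --state NEW,ESTABLISHED -j ACCEPT
# Allow the return half of those client-initiated connections.
iptables -A FORWARD -p tcp -s 192.168.1.10 --sport 988 \
         -m state --state ESTABLISHED -j ACCEPT
```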
From: "Peter J. Braam" <braam@clusterfs.com>
Date: Tue, 19 Sep 2006 18:42:42 -0600

> Hi Jrd (you probably have a name like me :):

Yes, but jrd is easier to type :-}

> Lustre across NAT firewalls should work and was used in Boston by
> former employees to share music files over cable modems etc. There is
> one situation where not having a NAT firewall would allow recovery to
> proceed while with a NAT firewall it will time out: this is when
> servers re-send lock cancellation callbacks.

I see. So in fact the protocol does allow for servers to connect back to
clients. Presumably that doesn't usually happen, just in oddball cases
like the recovery one you cite. I'll have to think about whether that's
likely to cause a problem for us.

> Interesting, so this would be for an archive that is never modified
> and is read by many clients.

Right. In fact, the scenario that we were kicking around is one where
there's a not-real-large part of the filesystem (something like a couple
hundred MB) which will in general be used very heavily as a shared
read-only section by many clients, quite distinct from other areas of
the fs which will be larger, read/write, etc. It could be updated very
infrequently (i.e. weeks or months), but those events would coincide
with either the entire system being restarted or at least some kind of
out-of-band mechanism for triggering the clients to resync.

> Locking is not currently believed to be a serious overhead, because
> the locking is wrapped with the operation for which locks are
> acquired (we called this "lock intents").

Yes, understood.

> I think presently all clients would take read locks, but I like your
> thought.

Yes. The real savings, if any, would probably come from the server
having to do less work trying to decide whether to invalidate anybody.
It may be that that's already slick enough that it doesn't matter; as
long as clients are allowed to cache some stuff, and nobody's
invalidating it, it probably doesn't make any difference.
> It is not so common though; most archives see an occasional update,
> like a farm of web servers. We currently have no plans to implement
> this.

Sure. I said I'd ask. Thanks!
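[Editor's note: although the cluster-wide "everybody is read-only" mode discussed above does not exist, each client can still mount read-only on its own with the standard mount option. A minimal sketch, using a hypothetical MGS node name and filesystem name:]

```shell
# Read-only mount on one client (server NID and fsname are
# hypothetical). Note this constrains only this client: other clients
# may still mount read-write, so the servers cannot use it to relax
# their locking, which is exactly the gap discussed in the thread.
mount -t lustre -o ro mds.example.com@tcp:/testfs /mnt/testfs
```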