Richard Hedges
2007-Jan-07  20:28 UTC
[Fwd: Re: [Lustre-discuss] 1.4.8, shared root clusters and flock...]
Hi Oleg,

There are 45 tests in the fcntl section of the POSIX test suite. I think
around 10 of them require that the filesystem be mounted with flock, so I
don't know about "comprehensive", but there's a start.

- Richard

>-------- Original Message --------
>Subject: Re: [Lustre-discuss] 1.4.8, shared root clusters and flock...
>Date: Sat, 6 Jan 2007 02:11:56 +0200
>From: Oleg Drokin <green@clusterfs.com>
>To: David Golden <dgolden@cp.dias.ie>
>CC: lustre-discuss@clusterfs.com
>References: <200701051802.02805.dgolden@cp.dias.ie>
>
>Hello!
>
>On Fri, Jan 05, 2007 at 06:02:02PM +0000, David Golden wrote:
>
>>Just wondering, is anyone else hitting this:
>>1.4.8 (correctly, I guess!) started rejecting locks unless the
>>"flock" mount option was used, as per the changelog note for bug
>>#10743. I guess in the past it faked success but the file wasn't
>>really locked?
>
>The file was locked, but only on that node; other nodes won't see it.
>
>>=> So, what's the latest on file lock support in general in lustre?
>
>We are trying to fix problems as we learn about them.
>(See bug 11415 if you plan to use fcntl locks with anything above a
>2.6.5 kernel.)
>
>>Actually using the "flock" mount option doesn't seem to be a
>>solution, as AFAIK it's known incomplete in 1.4.x (?): unless flock
>>is supposed to be working fully in 1.4.8, in which case, well,
>>maybe there is a bug to be found...
>
>Well, we found one already.
>I wonder if there are any comprehensive flock/fcntl locking test suites
>we can try to use?
>
>Bye,
>    Oleg

--
Richard Hedges
Scalable I/O Project
Development Environment Group - Livermore Computing
Lawrence Livermore National Laboratory
7000 East Avenue, MS L-557
Livermore, CA 94551
v: (925) 423-2699
f: (925) 423-6961
E: richard-hedges@llnl.gov
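For context, here is a minimal sketch (not taken from the POSIX suite; the
path below is just an illustration) of the kind of fcntl byte-range lock
those tests exercise. On a Lustre 1.4.8 client such a lock is only coherent
across nodes if the filesystem was mounted with the flock option:

/* Minimal sketch, not from the POSIX suite: take an exclusive fcntl
 * record lock on a (made-up) Lustre file, do some work, release it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/lustre/lockfile", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl = {
        .l_type   = F_WRLCK,    /* exclusive write lock     */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,          /* 0 = lock the whole file  */
    };

    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* block until the lock is granted */
        perror("fcntl(F_SETLKW)");
        return 1;
    }

    /* ... critical section: read/modify/write the locked range ... */

    fl.l_type = F_UNLCK;                  /* release the lock */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}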
David Golden
2007-Jan-08  06:09 UTC
[Fwd: Re: [Lustre-discuss] 1.4.8, shared root clusters and flock...]
On Saturday 06 January 2007 00:34, Richard Hedges wrote:
> Hi Oleg,
>
> There are 45 tests in the fcntl section of the POSIX test suite. I
> think around 10 of them require that the filesystem be mounted with
> flock, so I don't know about "comprehensive", but there's a start.

I haven't used or even looked at the Open Group's test suites, but I'm
guessing they only test intranode locking? (Internode is probably outside
their scope?)

A while back a colleague forwarded me a tiny (definitely not comprehensive,
but if the test fails, fs locking certainly can't be used internode...) perl
stress test for internode parallel locking: basically just do ~10K
locks+writes per node on N nodes at once to a single file, and inspect
whether the file comes out gibberish or neat at the end. A Lustre fs mounted
without flock used to fail it (unsurprisingly), but I was going to give it
another go when/if I get our cluster's fs mounted with flock.
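For reference, here is a rough C analogue of the perl test described above
(it is not the actual script; the shared path, the node tag, and the record
format are assumptions). Each node appends fixed-format records to a single
shared file under an exclusive flock(), writing each record in small pieces
while the lock is held, so that if cross-node locking is not honoured the
records from different nodes interleave and the file comes out garbled:

/* Rough C analogue (assumptions: shared path, node tag, record format)
 * of the perl stress test described above.  Run one copy per node, all
 * pointing at the same file, then inspect the records afterwards. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

#define ITERS 10000

int main(int argc, char **argv)
{
    const char *path = "/mnt/lustre/locktest.out";   /* shared file (assumed mount point) */
    const char *node = argc > 1 ? argv[1] : "nodeX"; /* per-node tag                      */

    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < ITERS; i++) {
        char rec[64];
        int len = snprintf(rec, sizeof(rec), "BEGIN %s %05d END\n", node, i);

        if (flock(fd, LOCK_EX) < 0) { perror("flock"); return 1; }
        lseek(fd, 0, SEEK_END);                  /* append under the lock        */
        for (int off = 0; off < len; off++)      /* write byte by byte so that a */
            (void)write(fd, rec + off, 1);       /* lost lock garbles the record */
        flock(fd, LOCK_UN);
    }
    close(fd);
    return 0;
}

Afterwards, something like grep -cv '^BEGIN .* END$' on the shared file
should report 0 if the locks really were honoured across all the nodes.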
Aaron Knister
2007-Jan-08  20:04 UTC
[Lustre-discuss] exporting lustre subdirectory via nfs
Hi,
I have found that when I mount a subdirectory of a Lustre file system via
NFS, I get the message posted below (plus it doesn't work).

Example:
/disk/data

/disk is the Lustre file system and /disk/data is the directory I want to
export. I can export /disk just fine, but I really only want to export
/disk/data, for security reasons.
Message found in /var/log/messages--
LustreError: 4539:0:(ldlm_lock.c:82:ldlm_it2str()) Unknown intent -2145573515
LustreError: 4539:0:(dcache.c:290:ll_frob_intent()) ASSERTION(it->it_magic == INTENT_MAGIC) failed:bad intent magic: 802db80
LustreError: 4539:0:(dcache.c:290:ll_frob_intent()) LBUG
Lustre: 4539:0:(linux-debug.c:156:libcfs_debug_dumpstack()) showing stack for process 4539
rpc.mountd    R  running task       0  4539      1          4612  4535 (NOTLB)
000001000117a030 ffffffff801d037b 0000000045a25af1 000000000b6bd680
        000000d272cf7110 0000000000000246 0000000000000256 0000010000013780
        0000000000000000 00000000000000d2
Call Trace:<ffffffff801d037b>{inode_has_perm+89} <ffffffff801cf5b4>{avc_has_perm+70}
        <ffffffff801d2575>{selinux_file_permission+298} <ffffffffa0149a5d>{:sunrpc:cache_write+186}
        <ffffffff80177424>{vfs_write+207} <ffffffff8017750c>{sys_write+69}
        <ffffffff8011026a>{system_call+126}
LustreError: dumping log to /tmp/lustre-log-crew01.iges.org.1168268017.4539
Lustre: 4539:0:(linux-debug.c:96:libcfs_run_upcall()) Invoked LNET upcall /usr/lib/lustre/lnet_upcall LBUG,/testsuite/tmp/lbuild-boulder/lbuild-v1_4_7_3-2.6-rhel4-x86_64/lbuild/BUILD/lustre-1.4.7.3/lustre/llite/dcache.c,ll_frob_intent,290