Hi--

Keehyoung Joo wrote:
> Lustre file system performance is worse than our NFS file system.
>
> I tested by compiling a program (339 source files).
> The build produces 324 object files, 1 library archive, and 51 executable files.

For Lustre 1.0.x and 1.2.x, this is unfortunately not a surprising result. We have made no secret of the fact that we have optimized initially for large-file, HPC-style workloads, and not things like compilation. Lustre 1.4, which will be released in the next few months, will include many improvements for precisely these situations:

 - locking refinements to substantially reduce metadata lock traffic
 - a client-side cache for file handles
 - batching I/O for many small files into a single RPC
 - batching I/O for many small files into a single disk transaction

With these improvements we expect to be competitive with NFS on small-file workloads. With the metadata writeback cache and clusters of metadata servers (both in Lustre 2.0) we expect to raise the bar.

> 0. Under the NFS file system (a different cluster with a 100Mb network)
>   - % time make
>        0m50.50 real
>        0m39.45 user
>        0m5.23 sys
>
> 1. Under the Lustre file system (Gigabit network cluster;
> /proc/sys/portals/debug = 0 on all servers)
> -  stripe_size = 666360 , stripe_count = 0   ( striped across all 8 OSTs )
>    % time make
>       13m40.05 real
>        0m26.10 user
>        6m6.50  sys
> -  stripe_size = 655360 , stripe_count = 1   ( 1 stripe )
>    % time make
>        3m33.04 real
>        0m26.15 user
>        0m31.05  sys

This is also not so surprising. Striping increases the peak theoretical aggregate bandwidth for that file, but it also increases the metadata overhead (as your test demonstrates). We recommend that all files use 1 stripe, except those files which need I/O rates larger than a single OST can provide (for example, a data file which is read by the entire cluster).

Thanks--
-Phil
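For reference, the one-stripe recommendation above can be applied per directory with the lfs utility. A minimal sketch, assuming the Lustre 1.x positional setstripe arguments (stripe size, starting OST index, stripe count, with 0 and -1 selecting the defaults) and a hypothetical /mnt/lustre mount point:

  # one stripe for the build tree; new files inherit the directory's layout
  lfs setstripe /mnt/lustre/src 0 -1 1
  # wide striping only for large files read by the whole cluster
  lfs setstripe /mnt/lustre/shared-data 0 -1 8

Check lfs help on your release; later versions replace the positional form with -s/-c/-i options.
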
Hi,
 
Lustre file system performance is worse than our NFS file system.
I don't know how to configure it properly.
 
We have 4 OST servers, 1 MDS, and 67 clients.
Configuration (Gigabit network, SuSE Linux Enterprise Server 8 on
Opteron CPUs):
-------------------------------------------------------------------
ost1: ost1a (/dev/sdb1), ost1b (/dev/sdc1)     (each a 1.2T RAID5 array)
ost2: ost2a (/dev/sdb1), ost2b (/dev/sdc1)     (each a 1.2T RAID5 array)
ost3: ost3a (/dev/sdb1), ost3b (/dev/sdc1)     (each a 1.2T RAID5 array)
ost4: ost4a (/dev/sdb1), ost4b (/dev/sdc1)     (each a 1.2T RAID5 array)
 
-> 8 OST devices on 4 OST servers
 
MDS: mds1 (/dev/sdb1)   (350G RAID5 array)
-------------------------------------------------------------------
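A rough sketch of how a layout like this is typically described for Lustre 1.x with the lmc configuration tool; the node and device names mirror the table above, but the service names are made up and the option spellings are from the 1.x manuals as best recalled, so treat this as illustrative only:

  # Illustrative only: Lustre 1.x lmc commands for the layout above.
  lmc -o config.xml --add node --node mds1
  lmc -m config.xml --add net  --node mds1 --nid mds1 --nettype tcp
  lmc -m config.xml --add mds  --node mds1 --mds mds-kias --fstype ext3 --dev /dev/sdb1
  # the LOV definition is where the default stripe count and size live
  lmc -m config.xml --add lov  --lov lov-kias --mds mds-kias \
      --stripe_sz 655360 --stripe_cnt 1 --stripe_pattern 0
  lmc -m config.xml --add node --node ost1
  lmc -m config.xml --add net  --node ost1 --nid ost1 --nettype tcp
  lmc -m config.xml --add ost  --node ost1 --lov lov-kias --ost ost1a --fstype ext3 --dev /dev/sdb1
  lmc -m config.xml --add ost  --node ost1 --lov lov-kias --ost ost1b --fstype ext3 --dev /dev/sdc1
  # ...repeat the node/net/ost lines for ost2 through ost4, then add
  # a generic client mount point:
  lmc -m config.xml --add net  --node client --nid '*' --nettype tcp
  lmc -m config.xml --add mtpt --node client --path /mnt/lustre --mds mds-kias --lov lov-kias

With --stripe_cnt 1 in the LOV definition, new files default to a single stripe, which (per the reply above) is usually the right default for compile-style workloads.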
 
I tested by compiling a program (339 source files).
The build produces 324 object files, 1 library archive, and 51 executable files.
 
0. Under the NFS file system (a different cluster with a 100Mb network)
  - % time make
       0m50.50 real
       0m39.45 user
       0m5.23 sys
 
1. Under the Lustre file system (Gigabit network cluster;
/proc/sys/portals/debug = 0 on all servers)
-  stripe_size = 666360 , stripe_count = 0   ( striped across all 8 OSTs )
   % time make
      13m40.05 real
       0m26.10 user
       6m6.50  sys
-  stripe_size = 655360 , stripe_count = 1   ( 1 stripe )
   % time make
       3m33.04 real
       0m26.15 user
       0m31.05  sys
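
For anyone reproducing the timings: the debug mask mentioned above was cleared on every server before running make, along these lines:

  # portals-era Lustre: disable debug logging (run on each node)
  echo 0 > /proc/sys/portals/debug

Leaving the default debug mask enabled can itself cost noticeable time on metadata-heavy workloads such as this one.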
 
 
The NFS file system is much faster than my Lustre file system.
I watched the compile as it ran: Lustre slows down when the 51
executables are linked from the object files and the library archive.
 
What is the problem?
Please give me some comments and help.
Thanks.
 
                   Keehyoung Joo
------------------------------------------------------------------
In him was life, and the life was the light of men. [John 1:4]
I love Jesus Christ, who is my savior. He gives my life meaning.
In Christ, I have become a shepherd and Bible teacher.
http://gene.kias.re.kr/~newton
Newton@kias.re.kr
------------------------------------------------------------------