Is there a way to see how a file system was formatted, i.e. the block size and cluster size? I currently have a 2TB file system, of which about 840GB is in use by around 9 million image files; the average image is 60-100KB. Our production servers still have separate file systems on ext3, and we do a nightly rsync from there to this ocfs2 volume. This currently takes ~6 hours, which seems a tad slow. The system spends most of its time writing files which have changed on the production servers, with high I/O wait.

The SAN this ocfs2 volume is on is pretty much idle (I only see up to about 20MB/sec of traffic), and the two nodes which have this volume mounted have a private GigE interconnect set up for cluster.conf.

Any tips on how to debug where this slowness comes from? Or even a suggestion for another cluster file system for a scenario like this.

Regards, Ulf.
# debugfs.ocfs2 -R "stats -h" /dev/sdy2 | grep "Cluster Size"
        Block Size Bits: 12   Cluster Size Bits: 17

12 = 4K
17 = 128K

Have you tried stracing the process?
# strace -tt -T -o /tmp/strace.out ...

Ulf Zimmermann wrote:
> Is there a way to see how a file system was formatted, i.e. the block
> size and cluster size? I currently have a 2TB file system, of which
> about 840GB are in use by around 9 million image files. Average size of
> these images is 60-100KB. Currently our production servers still have
> separate file systems on ext3 and we are doing nightly rsync from there
> to this ocfs2 volume. This currently takes ~6 hours, which seems a tad
> slow. The system spends most time during writing files which have
> changed on the production servers, with high I/O wait.
>
> The SAN this ocfs2 volume is on is pretty much idle, I only see up to
> about 20MB/sec traffic and the two nodes which have this volume mounted
> have a private GigE interconnect setup for cluster.conf.
>
> Any tips on how to debug where this slowness comes from? Or even
> suggestion to use another cluster file system for a scenario like this.
>
> Regards, Ulf.
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users@oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users
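[The "Bits" values debugfs.ocfs2 prints are base-2 logarithms of the sizes in bytes, so the 12 = 4K / 17 = 128K conversion above can be verified with plain shell arithmetic; this is a hedged illustration, not part of the original mail:]

```shell
# ocfs2 reports sizes as log2 values; size in bytes = 1 << bits.
block_bits=12
cluster_bits=17
echo "block size:   $(( 1 << block_bits )) bytes"    # 4096  = 4K
echo "cluster size: $(( 1 << cluster_bits )) bytes"  # 131072 = 128K
```

Note the cluster size matters here because every 60-100KB image still occupies at least one 128K cluster on disk.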
> -----Original Message-----
> From: Sunil Mushran [mailto:Sunil.Mushran@oracle.com]
> Sent: 04/25/2007 10:31
> To: Ulf Zimmermann
> Cc: ocfs2-users@oss.oracle.com
> Subject: Re: [Ocfs2-users] Some questions about ocfs2
>
> # debugfs.ocfs2 -R "stats -h" /dev/sdy2 | grep "Cluster Size"
> Block Size Bits: 12   Cluster Size Bits: 17
> 12 = 4K
> 17 = 128K
>
> Have you tried stracing the process?
> # strace -tt -T -o /tmp/strace.out ...

Yes, strace shows most time is spent in lstat64 (> 99%): the average execution time on ext3 is < 60 usecs/call, while on the ocfs2 volume it is > 500 usecs/call.

Regards, Ulf.

---------------------------------------------------------------------
ATC-Onlane Inc., T: 650-532-6382, F: 650-532-6441
4600 Bohannon Drive, Suite 100, Menlo Park, CA 94025
---------------------------------------------------------------------
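[A back-of-the-envelope check, using only the numbers quoted in this thread (9 million files, < 60 vs > 500 usecs per lstat64, and assuming rsync issues at least one lstat64 per file), shows the metadata pass alone accounts for a large part of the runtime gap:]

```shell
# Rough lstat64 cost of one full rsync tree walk, per file system.
files=9000000
ext3_us=60     # ~usecs per lstat64 call on ext3
ocfs2_us=500   # ~usecs per lstat64 call on ocfs2
echo "ext3:  $(( files * ext3_us  / 1000000 / 60 )) minutes"  # ~9 minutes
echo "ocfs2: $(( files * ocfs2_us / 1000000 / 60 )) minutes"  # ~75 minutes
```

So stat latency alone adds over an hour per run before any data is copied, which is consistent with the cluster file system paying cross-node lock overhead on every inode lookup.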