Our Invoice posting routine (intensive hard drive I/O) freezes every few
seconds to flush the cache. After reading this:

https://listman.redhat.com/archives/ext3-users/2002-November/msg00070.html

I decided to try:

# elvtune -r 2048 -w 131072 /dev/sda
# echo "90 500 0 0 600000 600000 95 20 0" > /proc/sys/vm/bdflush
# run_post_routine
# elvtune -r 128 -w 512 /dev/sda
# echo "30 500 0 0 500 3000 60 20 0" > /proc/sys/vm/bdflush
# sync

I like it, but I think that's way too lax and risky - the whole post routine
never wrote anything to disk until I ran sync! Is there a setting that would
ensure steady, reliable I/O so that my post process is pretty much all flushed
in real time? And is it a bad idea to keep changing the bdflush parameters to
suit whatever job I'm about to run? I also noticed that switching back to the
default "30 500 0 0 500 3000 60 20 0" doesn't flush the queue on its own; I
still had to run sync.

-Eric Wood
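For reference, the nine numbers being echoed above map onto the following
fields on late 2.4 kernels. This is a sketch based on Documentation/sysctl/vm.txt;
the exact field names differ slightly between 2.4 releases, so check the doc
for the kernel actually in use:

# Field layout of /proc/sys/vm/bdflush (late 2.4 kernels, per Documentation/sysctl/vm.txt);
# the values shown are the stock defaults quoted in the thread.
#
#   1  nfract        30    % of buffer cache dirty at which bdflush wakes up
#   2  ndirty        500   max dirty buffers bdflush writes out per pass
#   3  (unused)      0     padding in later 2.4 kernels
#   4  (unused)      0     padding in later 2.4 kernels
#   5  interval      500   jiffies between kupdated wakeups (5 s at HZ=100)
#   6  age_buffer    3000  jiffies a dirty buffer may age before forced writeout (30 s)
#   7  nfract_sync   60    % dirty at which the writing process must flush synchronously
#   8  nfract_stop   20    % dirty at which bdflush stops writing
#   9  (unused)      0
cat /proc/sys/vm/bdflush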
Eric,

I've also noticed that tweaking bdflush can make a big difference to
performance. I ran a 1, 2, 4, 8, 16, and 32 thread Postmark benchmark on a
variety of encrypting file systems, and the four-thread case was significantly
slower than the 2- and 8-thread cases (and 3 and 5 as well), where I was
expecting a nice linear progression. The amount of I/O generated by those four
threads was "just right" to make this oddity show up.

After quite a bit of thinking, I started to play with the bdflush parameters.
It turned out that the default setting of 60% for nfract_sync was causing my
problem, so I changed it to 90% and the anomalous behavior went away.

I would try tweaking nfract_sync (the seventh number) without bumping up
nfract (the first number) so much. That way bdflush still gets kicked in and
keeps writing until the fraction of dirty buffers drops to nfract_stop (the
eighth number), but hopefully your process won't get hung up synchronously
flushing so many buffers.

Charles

BTW, my performance comparison of crypto file systems, including this bdflush
behavior, is described in this paper:
http://www.fsl.cs.sunysb.edu/docs/nc-perf/perf.pdf

On Thu, 2004-02-05 at 12:00, Eric Wood wrote:
> Our Invoice posting routine (intensive hard drive I/O) freezes every few
> seconds to flush the cache.
> [...]
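A minimal sketch of that suggestion, wrapped so the defaults are restored even
if the job fails; it assumes the late-2.4 field layout noted earlier, and
run_post_routine stands in for the actual posting job:

#!/bin/sh
# Raise nfract_sync from 60% to 90% for the duration of an I/O-heavy job,
# leaving nfract and the other fields at their defaults, then restore and sync.
# Field positions assume a late 2.4 kernel; check Documentation/sysctl/vm.txt first.

DEFAULTS="30 500 0 0 500 3000 60 20 0"
trap 'echo "$DEFAULTS" > /proc/sys/vm/bdflush; sync' EXIT

echo "30 500 0 0 500 3000 90 20 0" > /proc/sys/vm/bdflush
run_post_routine    # placeholder for the actual invoice posting job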
On Feb 05, 2004 12:00 -0500, Eric Wood wrote:
> Our Invoice posting routine (intensive hard drive I/O) freezes every few
> seconds to flush the cache.
> [...]

You could try deleting the journal and creating a larger one (unmounted!):

tune2fs -O ^has_journal <dev>
tune2fs -J size=128 <dev>

Max journal size is 400MB, but this can also consume that much RAM, so use
with caution.

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
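For concreteness, here is one way that might look on a hypothetical /dev/sda1.
The device name, mount point, 128MB size, and the extra e2fsck pass are
illustrative additions, not part of Andreas's instructions:

#!/bin/sh
# Sketch: recreate the ext3 journal at a larger size on an unmounted filesystem.
umount /dev/sda1                     # the filesystem must not be mounted
tune2fs -O ^has_journal /dev/sda1    # remove the existing journal
e2fsck -f /dev/sda1                  # optional full check before re-adding it
tune2fs -J size=128 /dev/sda1        # create a new 128MB journal
mount /dev/sda1 /mnt/data            # remount (mount point is illustrative)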
Hi all,

I think the hdparm "-u" option is really helpful in this case. According to
the hdparm manual, this option will greatly increase Linux's responsiveness,
but it also brings a risk of disk/file-system corruption. In our experience
the corruption risk has been acceptable, and the responsiveness gain is well
worth it.

hdparm -d1 -c3 -m16 -a16 -A1 -u1 -W1 -k1 -K1 /dev/hd{a,b,c,d,...}

works great for us.

--Guolin
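For reference, here is what each flag in that hdparm line does, per the
hdparm(8) man page; the values and the device are Guolin's examples rather
than recommendations:

# Flag meanings for: hdparm -d1 -c3 -m16 -a16 -A1 -u1 -W1 -k1 -K1 /dev/hda
#
#   -d1   enable DMA transfers
#   -c3   enable 32-bit I/O support, using the sync sequence some chipsets need
#   -m16  transfer up to 16 sectors per interrupt (multiple sector mode)
#   -a16  set filesystem read-ahead to 16 sectors
#   -A1   enable the drive's own read-lookahead
#   -u1   unmask other interrupts while servicing a disk interrupt
#         (the responsiveness win, and the source of the corruption risk)
#   -W1   enable the drive's write cache (cached data can be lost on power failure)
#   -k1   keep the -dmu settings over a soft reset
#   -K1   keep the drive's feature settings over a soft reset
hdparm -d1 -c3 -m16 -a16 -A1 -u1 -W1 -k1 -K1 /dev/hda

These settings do not survive a reboot on their own, so they are typically
re-applied from a boot script such as rc.local or the distribution's own
hdparm configuration.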