Hi, guys,
 I would like to collect your responses before submitting a bug report.
 Any comments would be greatly appreciated.
 The bug is in the following piece of code in vm/vm_object.c:
               
/*
 * Try to optimize the next page.  If we can't we pick up
 * our (random) scan where we left off.
 */
if (msync_flush_flags & MSYNC_FLUSH_SOFTSEQ) {
        if ((p = vm_page_lookup(object, pi + n)) != NULL)
                goto again;
}
 The code above tries to optimize locating the next vm_page. When it
 does so, the kernel always flushes the vm_page p in ascending order (in
 terms of p->pindex). The problem arises when one round finishes (when
 vm_page_lookup returns NULL): the kernel then begins again with vm_page
 np and redoes most of the work!
 If, beforehand, all vm_pages are in ascending order, the total cost
 would be n^2/2 instead of n. We could fix the code by breaking out of
 the loop when vm_page_lookup returns NULL. However, since the vm_pages
 are not always sorted, we may only be able to solve this problem by
 disabling the MSYNC_FLUSH_SOFTSEQ flag:
 99c99
< static int msync_flush_flags = MSYNC_FLUSH_HARDSEQ | MSYNC_FLUSH_SOFTSEQ;
---
> static int msync_flush_flags = MSYNC_FLUSH_HARDSEQ;
 Note that this bug doesn't violate the correctness of the kernel,
 though it causes a lot of unnecessary disk traffic.
Best,
Jinyuan
-- 
http://www.fastmail.fm - Send your email first class