I just upped kern.maxdsiz to 1G, but noticed that a test program that mallocs and frees in a loop (increasing chunk sizes, 1M -> 1024M) takes about 6 times longer for 1024 iterations than it does for only 512 of them. Is this non-linearity expected? A profile shows that ifree is taking most of the time.

I guess this use case is probably unusual; it's only a test program I used to check that I really can use the whole 1G. I'm running 6.1-STABLE from about 09 May (prog and gprof attached).

Cheers
Mark
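
P.S. In case the attachment doesn't make it through, here's a rough sketch of what the test loop does, reconstructed from the description above rather than copied from the attached prog; the exact sizes, the page-touching loop, and the command-line handling are my assumptions:

    /*
     * Rough reconstruction of the test program: malloc and free chunks
     * of increasing size (1M, 2M, ... up to <iterations> MB), touching
     * each page so the memory is really used.  Not the attached prog.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define MB (1024UL * 1024UL)

    int
    main(int argc, char *argv[])
    {
            unsigned long iters, i;
            size_t sz, off;
            char *p;

            /* default 1024 iterations, i.e. chunks up to 1024M */
            iters = (argc > 1) ? strtoul(argv[1], NULL, 10) : 1024;

            for (i = 0; i < iters; i++) {
                    sz = (i + 1) * MB;              /* 1M, 2M, ... */
                    p = malloc(sz);
                    if (p == NULL) {
                            fprintf(stderr, "malloc(%luM) failed at iteration %lu\n",
                                (unsigned long)(sz / MB), i);
                            return (1);
                    }
                    for (off = 0; off < sz; off += 4096)    /* touch each page */
                            p[off] = 1;
                    free(p);
            }
            printf("completed %lu iterations\n", iters);
            return (0);
    }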