hi all,

suddenly ran into a very odd issue with a 151a server used primarily
for cifs... out of (seemingly) nowhere, writes are incredibly slow,
often <10kb/s. this is what zpool iostat 1 looks like when i copy a
big file:

storepool   13.4T  1.07T     57      0  6.13M      0
storepool   13.4T  1.07T    216     91   740K  5.58M
storepool   13.4T  1.07T    127    182   232K  1004K
storepool   13.4T  1.07T    189     99   361K  5.47M
storepool   13.4T  1.07T    357    172   910K   949K
storepool   13.4T  1.07T    454    222  1.42M  2.14M
storepool   13.4T  1.07T     55    209   711K  1.05M

basically, instead of the usual 5-second txg write pattern, zfs is
writing to the zpool every second. this is certainly not an issue with
the disks... iostat -En shows no errors, and iostat -xn shows the disks
are barely busy (<20%). the only situation in which i've seen this
before was a multi-terabyte pool with dedup=on and constant writes (it
goes away once you turn off dedup), but there is no dedup anywhere on
this zpool. arc usage is normal (total ram is 12gb, arc max is set to
11gb, current arc usage is 8gb). the pool is an 8-disk raidz2.

any ideas? pretty stumped.

milosz
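For context, the capacity columns above (13.4T allocated vs 1.07T free)
put the pool at roughly 93% full, which turns out to matter for the
diagnosis below. A quick way to check fullness and to see where the
space went (the pool name here is just this poster's; substitute your
own):

   # pool-level view; the CAP column shows percent of raw space allocated
   zpool list storepool

   # per-dataset breakdown, including space held by snapshots
   zfs list -o space -r storepool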
thanks, bill. i killed an old filesystem. also forgot about
arc_meta_limit; kicked it up to 4gb from 2gb. things are back to
normal.

On Thu, Dec 15, 2011 at 1:06 PM, Bill Sommerfeld
<sommerfeld at alum.mit.edu> wrote:
> On 12/15/11 09:35, milosz wrote:
>>
>> hi all,
>>
>> suddenly ran into a very odd issue with a 151a server used primarily
>> for cifs... out of (seemingly) nowhere, writes are incredibly slow,
>> often <10kb/s. this is what zpool iostat 1 looks like when i copy a
>> big file:
>>
>> storepool   13.4T  1.07T     57      0  6.13M      0
>> storepool   13.4T  1.07T    216     91   740K  5.58M
>
> ...
>
>> any ideas? pretty stumped.
>
> Behavior I've observed with multiple pools is that you will sometimes
> hit a performance wall when the pool gets too full; the system spends
> lots of time reading in metaslab metadata looking for a place to put
> newly-allocated blocks. If you're in this mode, kernel profiling will
> show a lot of time spent in metaslab-related code.
>
> Exactly where you hit the wall seems to depend on the history of what
> went into the pool; I've seen the problem kick in with only 69%-70%
> usage in one pool that was used primarily for solaris development.
>
> The workaround turned out to be simple: delete stuff you don't need
> to keep. Once there was enough free space, write performance returned
> to normal.
>
> There are a few metaslab-related tunables that can be tweaked as well.
>
>                                         - Bill
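For anyone else following along, here is a rough sketch of the two
things touched in this exchange. The 4gb value only mirrors what milosz
chose and is an example, not a recommendation, and the lockstat
invocation is one common way to do the kernel profiling Bill mentions:

   # how much of the arc is metadata, and what the current cap is
   kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit

   # raise the metadata cap to 4gb via /etc/system (needs a reboot):
   #   set zfs:zfs_arc_meta_limit = 0x100000000

   # confirm the diagnosis: profile the kernel for ~30 seconds and look
   # for metaslab_* / space_map_* functions near the top of the output
   lockstat -kIW -D 20 sleep 30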
2011-12-15 22:44, milosz wrote:
>> There are a few metaslab-related tunables that can be tweaked as well.
>>                                         - Bill

For the sake of completeness, here are the relevant lines I have
in /etc/system:

******
* fix up metaslab min size (recent default ~10Mb seems bad,
* recommended return to 4Kb, we'll do 4*8K)
* greatly increases write speed in filled-up pools
set zfs:metaslab_min_alloc_size = 0x8000
set zfs:metaslab_smo_bonus_pct = 0xc8
******

These values were described in greater detail on the list
this summer, I think.

HTH,
//Jim
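A note for anyone applying these: /etc/system only takes effect at the
next boot. The same variables can be inspected, and cautiously changed,
on a live system with mdb. The /Z vs /W formats below assume
metaslab_min_alloc_size is a 64-bit variable and metaslab_smo_bonus_pct
a 32-bit int, which is how I believe they are declared; verify against
your build before writing anything with mdb -kw:

   # read the current values (min_alloc_size should default to the
   # ~10MB Jim mentions, i.e. 0xa00000)
   echo "metaslab_min_alloc_size/J" | mdb -k
   echo "metaslab_smo_bonus_pct/D"  | mdb -k

   # apply at runtime; unlike /etc/system this does not survive a reboot
   echo "metaslab_min_alloc_size/Z 0x8000" | mdb -kw
   echo "metaslab_smo_bonus_pct/W 0xc8"    | mdb -kw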
hi guys,

does anyone know if a fix for this (space map thrashing) is in the
works? i've been running into this on and off on a number of systems
i manage. sometimes i can delete snapshots and things go back to
normal; sometimes the only thing that works is enabling
metaslab_debug. obviously the latter is only really an option for
systems with a huge amount of ram.

or: am i doing something wrong?

milosz

On Mon, Dec 19, 2011 at 8:02 AM, Jim Klimov <jimklimov at cos.ru> wrote:
> 2011-12-15 22:44, milosz wrote:
>
>>> There are a few metaslab-related tunables that can be tweaked as well.
>>>                                         - Bill
>
> For the sake of completeness, here are the relevant lines
> I have in /etc/system:
>
> ******
> * fix up metaslab min size (recent default ~10Mb seems bad,
> * recommended return to 4Kb, we'll do 4*8K)
> * greatly increases write speed in filled-up pools
> set zfs:metaslab_min_alloc_size = 0x8000
> set zfs:metaslab_smo_bonus_pct = 0xc8
> ******
>
> These values were described in greater detail on the list
> this summer, I think.
>
> HTH,
> //Jim
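For completeness, the metaslab_debug workaround mentioned above is a
simple on/off tunable; as I understand it, it keeps space maps loaded
in memory instead of reloading them from disk on every allocation,
which is why it is only practical on systems with plenty of ram. A
sketch of both ways to enable it (assuming it is still a 32-bit
boolean in your build):

   # persistent, via /etc/system (takes effect at next boot):
   #   set zfs:metaslab_debug = 1

   # or flip it on the running system (reverts at reboot):
   echo "metaslab_debug/W 1" | mdb -kw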
Hi Milosz,

As far as I know, a fix for the space map thrashing is in the works.
For now, you still have to make do with the above-mentioned tunables.

Thanks,
Deepak.

On 02/ 1/12 11:20 PM, milosz wrote:
> hi guys,
>
> does anyone know if a fix for this (space map thrashing) is in the
> works? i've been running into this on and off on a number of systems
> i manage. sometimes i can delete snapshots and things go back to
> normal; sometimes the only thing that works is enabling
> metaslab_debug. obviously the latter is only really an option for
> systems with a huge amount of ram.
>
> or: am i doing something wrong?
>
> milosz
>
> On Mon, Dec 19, 2011 at 8:02 AM, Jim Klimov <jimklimov at cos.ru> wrote:
>> 2011-12-15 22:44, milosz wrote:
>>
>>>> There are a few metaslab-related tunables that can be tweaked as well.
>>>>                                         - Bill
>>
>> For the sake of completeness, here are the relevant lines
>> I have in /etc/system:
>>
>> ******
>> * fix up metaslab min size (recent default ~10Mb seems bad,
>> * recommended return to 4Kb, we'll do 4*8K)
>> * greatly increases write speed in filled-up pools
>> set zfs:metaslab_min_alloc_size = 0x8000
>> set zfs:metaslab_smo_bonus_pct = 0xc8
>> ******
>>
>> These values were described in greater detail on the list
>> this summer, I think.
>>
>> HTH,
>> //Jim
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss