Hi.

I have a server running 9.0/amd64 with 4 GB of RAM. Today's questions are about the amount of memory in the 'wired' state and the ARC size.

If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says:

===Cut==
ARC Size:                 12.50%  363.14 MiB
Target Size: (Adaptive)   12.50%  363.18 MiB
Min Size (Hard Limit):    12.50%  363.18 MiB
Max Size (High Water):    8:1     2.84 GiB
===Cut==

At the same time I have 3500 MB in the wired state:

===Cut==
Mem: 237M Active, 36M Inact, 3502M Wired, 78M Cache, 432K Buf, 37M Free
===Cut==

First question: what is the actual size of the ARC, and how can it be determined? The Solaris version of the script is more comprehensive (run on a Solaris box):

===Cut==
ARC Size:
        Current Size:             6457 MB (arcsize)
        Target Size (Adaptive):   6457 MB (c)
        Min Size (Hard Limit):    2941 MB (zfs_arc_min)
        Max Size (Hard Limit):    23534 MB (zfs_arc_max)
===Cut==

The arcstat script also suggests that the ARC size is indeed about 380 MB:

===Cut==
    Time  read   miss  miss%   dmis  dm%  pmis  pm%  mmis  mm%  size  tsize
14:33:35  170M  7466K      4  7466K    4   192   78  793K    3  380M   380M
===Cut==

Second question: if the ARC is 363 MB, why do I have 3500 MB in the wired state? In my experience this is directly related to ZFS, but 380 MB is about ten times smaller than 3500 MB. At the same time I have about 700 MB in swap, so my guess is that ZFS isn't freeing memory for current needs all that easily. Yes, I could tune it down, but I would like to understand what is happening on an untuned machine.

Thanks.
Eugene.
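For reference, the numbers those summary scripts print come straight from the arcstats sysctls, so the live ARC size can be cross-checked without any script at all. A minimal check (assuming a stock 9.0 kernel with ZFS loaded; sysctl names can vary slightly between releases):

  # current ARC size and adaptive target, in bytes
  sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c
  # configured hard limits
  sysctl vfs.zfs.arc_min vfs.zfs.arc_max
  # wired memory, in pages (multiply by hw.pagesize to get bytes)
  sysctl vm.stats.vm.v_wire_count hw.pagesize

Comparing arcstats.size against the wired total is the quickest way to see how much of the wired memory is something other than the ARC itself.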
on 07/02/2012 10:36 Eugene M. Zheganin said the following:
> Hi.
>
> I have a server with 9.0/amd64 and 4 Gigs of RAM.
> Today's questions are about the amount of memory in 'wired' state and the ARC size.
>
> If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says:
>
> ===Cut==
> ARC Size: 12.50% 363.14 MiB
> Target Size: (Adaptive) 12.50% 363.18 MiB
> Min Size (Hard Limit): 12.50% 363.18 MiB
> Max Size (High Water): 8:1 2.84 GiB
> ===Cut==
>
> At the same time I have 3500 megs in wired state:
>
> ===Cut==
> Mem: 237M Active, 36M Inact, 3502M Wired, 78M Cache, 432K Buf, 37M Free
> ===Cut==
>
> First question - what is the actual size of ARC, and how can it be determined?
> Solaris version of the script is more comprehensive (ran on Solaris):
>
> ===Cut==
> ARC Size:
> Current Size: 6457 MB (arcsize)
> Target Size (Adaptive): 6457 MB (c)
> Min Size (Hard Limit): 2941 MB (zfs_arc_min)
> Max Size (Hard Limit): 23534 MB (zfs_arc_max)
> ===Cut==

Please try sysutils/zfs-stats; the zfs-stats -a output should provide a good overview of the state and configuration of the system.

> The arcstat script makes me think that the ARC size is about 380 megs indeed:
>
> ===Cut==
> Time read miss miss% dmis dm% pmis pm% mmis mm% size tsize
> 14:33:35 170M 7466K 4 7466K 4 192 78 793K 3 380M 380M
> ===Cut==
>
> Second question: if the size is 363 Megs, why do I have 3500 Megs in wired
> state? From my experience this is directly related to the zfs, but 380 megs is
> about ten times smaller. At the same time I have like 700 Megs in swap, so my
> guess - zfs isn't freeing memory for current needs that easily.
>
> Yeah, I can tune it down, but I just would like to know what is happening on an
> untuned machine.

--
Andriy Gapon
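A quick way to get going with that port, in case it helps (the -a flag is the one from the message above; availability of a pre-built package for 9.0 is an assumption):

  # build and install from ports (a binary package may also exist)
  cd /usr/ports/sysutils/zfs-stats && make install clean
  # full report: ARC size/target, cache efficiency, L2ARC and the relevant vfs.zfs tunables
  zfs-stats -a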
Hi,

if you are not using USB3 and a fast memory stick, it will be slower than swapping to disk.

Bye,
Alexander.

--
Send via an Android device, please forgive brevity and typographic and spelling errors.

Freddie Cash <fjwcash@gmail.com> hat geschrieben:

On Wed, Feb 8, 2012 at 10:25 AM, Eugene M. Zheganin <emz@norma.perm.ru> wrote:
> On 08.02.2012 18:15, Alexander Leidinger wrote:
>> I can't remember having seen any mention of swap on ZFS being safe
>> now. So if nobody can provide a reference to a place which says that
>> the problems with swap on ZFS are fixed:
>>  1. do not use swap on ZFS
>>  2. see 1.
>>  3. check if you see the same problem without swap on ZFS (btw. see 1.)
>>
> So, if swap has to be used, and it has to be backed by something like
> gmirror so it won't go down with one of the disks, there's no need to
> use ZFS for the system.
>
> This makes ZFS only useful in cases where you need to store something on
> a couple+ of terabytes, still having the OS on UFS. Occam's razor and so on.

Or, you plug a USB stick into the back (or even inside the case, as a lot of
mobos have internal USB connectors now) and use that for swap.

--
Freddie Cash
fjwcash@gmail.com
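For completeness, the usual way to get redundant swap without putting it on ZFS is a gmirror across two dedicated swap partitions. A rough sketch (the partitions ada0p3/ada1p3 and the label "swapmir" are placeholders for whatever the system actually has):

  # load the mirror class and mirror the two swap partitions
  gmirror load
  gmirror label -b prefer swapmir ada0p3 ada1p3
  swapon /dev/mirror/swapmir
  # to make it permanent:
  #   geom_mirror_load="YES"  in /boot/loader.conf
  #   /dev/mirror/swapmir  none  swap  sw  0  0   in /etc/fstab

With the "prefer" balance algorithm reads come from one preferred component, which is fine for swap; losing either disk leaves the swap device intact.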
Hi,

this only applies to old systems (slooow disks, no NCQ support) or to very fast USB3 memory sticks. Current hardware (anything from, say, the last 2-3 years) is slowed down by USB2.

Bye,
Alexander.

--
Send via an Android device, please forgive brevity and typographic and spelling errors.

Freddie Cash <fjwcash@gmail.com> hat geschrieben:

On Wed, Feb 8, 2012 at 10:40 AM, Freddie Cash <fjwcash@gmail.com> wrote:
> On Wed, Feb 8, 2012 at 10:25 AM, Eugene M. Zheganin <emz@norma.perm.ru> wrote:
>> On 08.02.2012 18:15, Alexander Leidinger wrote:
>>> I can't remember having seen any mention of swap on ZFS being safe
>>> now. So if nobody can provide a reference to a place which says that
>>> the problems with swap on ZFS are fixed:
>>>  1. do not use swap on ZFS
>>>  2. see 1.
>>>  3. check if you see the same problem without swap on ZFS (btw. see 1.)
>>>
>> So, if swap has to be used, and it has to be backed by something like
>> gmirror so it won't go down with one of the disks, there's no need to
>> use ZFS for the system.
>>
>> This makes ZFS only useful in cases where you need to store something on
>> a couple+ of terabytes, still having the OS on UFS. Occam's razor and so on.
>
> Or, you plug a USB stick into the back (or even inside the case, as a
> lot of mobos have internal USB connectors now) and use that for swap.

That also works well for adding an L2ARC (cache) device to the ZFS pool.

--
Freddie Cash
fjwcash@gmail.com
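Adding a cache device to an existing pool is a one-liner. A minimal sketch (the pool name "tank" and device "da0" are placeholders; L2ARC contents are disposable, so a failing stick does not endanger the pool):

  # attach the USB stick (or SSD) as L2ARC
  zpool add tank cache da0
  # confirm it is attached and watch it warm up
  zpool iostat -v tank 5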
Hi,

a possible solution would be to start a wiki page with what you know, e.g. a page which explains that "solaris" and the zio_* entries belong to ZFS. Over time people can extend it with additional info.

Bye,
Alexander.

--
Send via an Android device, please forgive brevity and typographic and spelling errors.

Jeremy Chadwick <freebsd@jdc.parodius.com> hat geschrieben:

On Wed, Feb 08, 2012 at 10:29:36PM +0200, Andriy Gapon wrote:
> on 08/02/2012 12:31 Eugene M. Zheganin said the following:
>> Hi.
>>
>> On 08.02.2012 02:17, Andriy Gapon wrote:
>>> [output snipped]
>>>
>>> Thank you. I don't see anything suspicious/unusual there.
>>> Just in case, do you have ZFS dedup enabled by any chance?
>>>
>>> I think that examination of the vmstat -m and vmstat -z outputs may
>>> provide some clues as to what got all that memory wired.
>>>
>> Nope, I don't have the deduplication feature enabled.
>
> OK. So, did you have a chance to inspect vmstat -m and vmstat -z?

Andriy,

Politely -- recommending this to a user is a good choice of action, but the
problem is that no user, even an experienced one, is going to know what all
of the Types (vmstat -m) or ITEMs (vmstat -z) correlate with on the system.

For example, for vmstat -m the relevant Type is named "solaris". For
vmstat -z the items are named zio_*, but I have a feeling there are more
than just those which pertain to ZFS. I'm having to make *assumptions*.

The FreeBSD VM is highly complex and is not "easy to understand" even
remotely. It becomes more complex when you consider that we use terms like
"wired", "active", "inactive", "cache", and "free" -- and none of them, in
simple English terms, actually represent the words chosen for what they do.

Furthermore, the only definition I've been able to find over the years for
how any of these work, what they do/mean, etc. is here:

http://www.freebsd.org/doc/en/books/arch-handbook/vm.html

And this piece of documentation is only useful for people who understand
VMs (note: it was written by Matt Dillon, for example). It is not useful
for end users trying to track down what within the kernel is actually
eating up memory. "vmstat -m" is as good as it's going to get, and like I
said, with the item names being borderline ambiguous (depending on what
you're looking for -- with VFS and so on it's spread all over the place),
this becomes a very tedious task, where the user or admin has to
continually ask developers on the mailing lists what it is they're looking
at.

--
| Jeremy Chadwick                                  jdc@parodius.com |
| Parodius Networking                     http://www.parodius.com/ |
| UNIX Systems Administrator                Mountain View, CA, US  |
| Making life hard for others since 1977.            PGP 4BD6C0CB  |
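As a concrete illustration of the digging described above, the ZFS-related allocations can be pulled out of vmstat along these lines. The awk field positions assume the 8.x/9.x "ITEM: SIZE, LIMIT, USED, FREE, ..." layout of vmstat -z, so treat this as a sketch rather than gospel:

  # kernel memory charged to the opensolaris compat malloc type
  vmstat -m | grep -i solaris
  # UMA zones backing ZFS I/O buffers; USED * SIZE gives a rough byte count
  vmstat -z | egrep 'zio_(buf|data_buf)' | \
      awk -F'[:,]' '{ total += $2 * $4 } END { printf "zio buffers: %.0f MB\n", total / 1048576 }'

Together with kstat.zfs.misc.arcstats.size this usually accounts for most of the ZFS share of wired memory.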
Hi,

feel free to register with FirstnameLastname in the wiki and tell us about it. We provide write access to people who seriously want to help improve the wiki content.

Bye,
Alexander.

--
Send via an Android device, please forgive brevity and typographic and spelling errors.

Charles Sprickman <spork@bway.net> hat geschrieben:

On Feb 8, 2012, at 7:43 PM, Artem Belevich wrote:

> On Wed, Feb 8, 2012 at 4:28 PM, Jeremy Chadwick
> <freebsd@jdc.parodius.com> wrote:
>> On Thu, Feb 09, 2012 at 01:11:36AM +0100, Miroslav Lachman wrote:
> ...
>>> ARC Size:
>>>          Current Size:             1769 MB (arcsize)
>>>          Target Size (Adaptive):   512 MB (c)
>>>          Min Size (Hard Limit):    512 MB (zfs_arc_min)
>>>          Max Size (Hard Limit):    3584 MB (zfs_arc_max)
>>>
>>> The target size is going down to the min size, and after a few more
>>> days the system is so slow that I must reboot the machine. Then it
>>> runs fine for about 107 days, and then it all repeats again.
>>>
>>> You can see more on the MRTG graphs:
>>> http://freebsd.quip.cz/ext/2012/2012-02-08-kiwi-mrtg-12-15/
>>> There are links to other useful information at the top of the page
>>> (arc_summary, top, dmesg, fs usage, loader.conf).
>>>
>>> There you can see the nightly backups (higher CPU load starting at
>>> 01:13); otherwise the machine is idle.
>>>
>>> It corresponds with the ARC target size dropping over the last 5 days:
>>> http://freebsd.quip.cz/ext/2012/2012-02-08-kiwi-mrtg-12-15/local_zfs_arcstats_size.html
>>>
>>> And with the ARC metadata cache overflowing its limit in the last 5 days:
>>> http://freebsd.quip.cz/ext/2012/2012-02-08-kiwi-mrtg-12-15/local_zfs_vfs_meta.html
>>>
>>> I don't know what's going on, and I don't know if it is something
>>> known / fixed in newer releases. We are running a few more ZFS
>>> systems on 8.2 without this issue, but those systems are in
>>> different roles.
>>
>> This sounds like the... damn, what is it called... some kind of internal
>> "counter" or "ticks" thing within the ZFS code that was discovered to
>> only begin happening after a certain period of time (which correlated to
>> some number of days, possibly 107). I'm sorry that I can't be more
>> specific, but it has been discussed heavily on the lists in the past, and
>> fixes for all of that were committed to RELENG_8. I wish I could
>> remember the name of the function or macro or variable it pertained to,
>> something like LTHAW or TLOCK or something like that. I would say
>> "I don't know why I can't remember", but I do know why I can't remember:
>> because I gave up trying to track all of these problems.
>>
>> Does someone else remember this issue? CC'ing Martin, who might remember
>> for certain.
>
> It's LBOLT. :-)
>
> And there was more than one related integer overflow. One of them
> manifested itself as the L2ARC feeding thread hogging CPU time after about
> a month of uptime. Another one caused an issue with ARC reclaim after 107
> days. See more details in this thread:
>
> http://lists.freebsd.org/pipermail/freebsd-fs/2011-May/011584.html

This would be an excellent piece of information to have on one of the ZFS
wiki pages. The 107-day issue exists post-8.2, correct? Does anyone on this
cc: list have permission to edit those pages?
Thanks,

Charles
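The 107-day figure in the quoted thread is not magic; it falls out of the arithmetic if the compat LBOLT value was computed along the lines of gethrtime() * hz / NANOSEC with hz=1000, as discussed in the freebsd-fs thread Artem linked (the exact macro should be checked there; this is only a back-of-the-envelope reconstruction):

  # a signed 64-bit nanosecond counter multiplied by hz=1000 overflows after
  # 2^63 / 1000 ns, i.e. just short of 107 days of uptime
  echo 'scale=2; 2^63 / 1000 / 10^9 / 86400' | bc
  # prints 106.75

That would line up with the roughly 107-day cycle visible in Miroslav's graphs on a pre-fix RELENG_8 kernel.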