Hi all, I've 'heard' that ZFS likes to have a good chunk of RAM available to it. What is the recommended RAM size for a machine running ZFS? Thanks.

-Moazam
Hello Moazam,

Thursday, April 27, 2006, 11:26:09 PM, you wrote:

MR> Hi all, I've 'heard' that ZFS likes to have a good chunk of RAM
MR> available to it. What is the recommended RAM size for a machine
MR> running ZFS?

Generally the same as for UFS or any other filesystem. However, there's still an unfixed problem on 32-bit platforms with memory starvation caused by ZFS (or has that been fixed already?).

--
Best regards,
Robert                 mailto:rmilkowski at task.gda.pl
                       http://milek.blogspot.com
And more importantly, how much memory you require will *always* depend on the workload...

If it's only a laptop, with 30 GB of disk, that averages only 1 MB every 5 minutes, then not much...

If it's a 25K with 576 GB of memory, multiple databases, a throughput to disk of 200-300 MB/s, and a fleet of storage connected to it, I'm tipping that it might like quite a bit more...

:)

Nathan.

On Fri, 2006-04-28 at 08:57, Robert Milkowski wrote:
> Generally the same as for UFS or any other filesystem. However, there's
> still an unfixed problem on 32-bit platforms with memory starvation
> caused by ZFS (or has that been fixed already?).
I'm running 512 MB of RAM on a Sun Blade 1500. I created some zpools and then proceeded to create zones on top of the ZFS file systems. This ran for *hours* and was still in the beginning phases of bringing up the zone.

I figured it's either a ZFS issue, or maybe a Solaris Express (SX) problem. I have not debugged it much yet.

-Moazam

On Apr 27, 2006, at 6:31 PM, Nathan Kroenert wrote:
> And more importantly, how much memory you require will *always* depend
> on the workload...
On Thu, Apr 27, 2006 at 06:34:48PM -0700, Moazam Raja wrote:
> I'm running 512 MB of RAM on a Sun Blade 1500. I created some zpools
> and then proceeded to create zones on top of the ZFS file systems.
> This ran for *hours* and was still in the beginning phases of
> bringing up the zone.

What backing store are you using for your pools? When you say "some zpools", how many are you talking about?

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
On Apr 27, 2006, at 6:54 PM, Eric Schrock wrote:
> What backing store are you using for your pools? When you say "some
> zpools", how many are you talking about?

Just as one data point: when booting up, it takes us about 6-7 minutes to mount almost 10,000 ZFS file systems, in the tens of terabytes, on a single system with 8 GB of RAM. After that I don't really see much persistent RAM use at all.

-J
Hello Moazam,

Friday, April 28, 2006, 3:34:48 AM, you wrote:

MR> I'm running 512 MB of RAM on a Sun Blade 1500. I created some zpools
MR> and then proceeded to create zones on top of the ZFS file systems.
MR> This ran for *hours* and was still in the beginning phases of
MR> bringing up the zone.

What exactly was happening? Did you see any I/Os? Was it during zone creation, or rather during zone startup? Which SX build?

--
Robert Milkowski
Nathan Kroenert wrote:
> And more importantly, how much memory you require will *always* depend
> on the workload...
>
> If it's only a laptop, with 30 GB of disk, that averages only 1 MB every
> 5 minutes, then not much...

Averages can be deceiving. How much disk I/O is required to start up StarOffice, including paging in all of those libraries, etc.? Or to start up Mozilla or Thunderbird or... Depending on how it functions, reading in 1 MB every minute for an hour may be just as "bad" as 60 MB in 1 minute. What matters is: what buffering strategy does ZFS use?

Darren
Well... actually, this is quite embarrassing now that I think about what I did (about a week ago). I have one physical disk in this machine. I created 2-3 partitions and added them to one zpool. I then created a zone on this zpool. If I remember correctly, during zone bootup it took forever and a half. I eventually cancelled it.

Looking back on this exercise, I can see how and why it was amazingly slow (I think; I didn't do anything fancy, no striping, etc.).

Sooo, anyway, this was one of the reasons I asked the original question about RAM, but the other reason is more general, regarding the RAM requirements of ZFS.

-Moazam

On Apr 27, 2006, at 6:54 PM, Eric Schrock wrote:
> What backing store are you using for your pools? When you say "some
> zpools", how many are you talking about?
> However, there's still an unfixed problem on 32-bit platforms with memory
> starvation caused by ZFS (or has that been fixed already?).

If that is:

Bug 6397610: ARC cache performance problems, on 32-bit x86 kernel
and 6398177: zfs: poor nightly build performance in 32-bit mode (high disk activity)

then I've been using the following patch for a few weeks now, and the problem is gone:

diff -ru ../opensolaris-20060417/usr/src/uts/common/fs/zfs/arc.c usr/src/uts/common/fs/zfs/arc.c
--- ../opensolaris-20060417/usr/src/uts/common/fs/zfs/arc.c    2006-04-18 15:58:12.000000000 +0200
+++ usr/src/uts/common/fs/zfs/arc.c     2006-04-19 12:28:40.475685026 +0200
@@ -1212,6 +1212,11 @@
 	 * up too much memory.
 	 */
 	dnlc_reduce_cache((void *)(uintptr_t)arc_reduce_dnlc_percent);
+
+	/*
+	 * Reclaim unused memory from all kmem caches.
+	 */
+	kmem_reap();
 #endif

 	/*
Roch Bourbonnais - Performance Engineering, 2006-Apr-28 10:32 UTC:
Moazam Raja writes:
> Hi all, I've 'heard' that ZFS likes to have a good chunk of RAM
> available to it. What is the recommended RAM size for a machine
> running ZFS?

As already noted, this need not be different from other filesystems, but it is still an interesting question. I'll touch on three aspects here:

 - reported freemem
 - syscall writes to mmap'd pages
 - application write throttling

Reported freemem will be lower when running with ZFS than with, say, UFS. The UFS page cache is counted as freemem. ZFS will return its 'cache' only when memory is needed. So you will operate with lower freemem, but you won't actually suffer from this.

It's been wrongly feared that this mode of operation puts us back in the days of Solaris 2.6 and 7, where we saw a roller-coaster effect on freemem leading to sub-par application performance. We actually DO NOT have this problem with ZFS. The old problem arose because the memory reaper could not distinguish between a useful application page and a UFS cached page. That was bad. ZFS frees up its cache in a way that does not cause this problem.

There is one peculiar workload that does lead ZFS to consume more memory: writing (using syscalls) to pages that are also mmap'd. As you may know, ZFS never overwrites live data on disk, but an mmap'd image must be kept up to date. So a syscall write to an mmap'd page means we keep two copies of the associated data, at least until we manage to get the data to disk. We don't expect that load to commonly use a large amount of RAM, and this is the cost of having an always-consistent on-disk layout.

Finally, one area where ZFS behaves quite differently from UFS is in throttling writers. With UFS, until not long ago, we throttled a process as soon as it had 0.5 MB of I/O pending; this has recently been upped to 16 MB. The gain of such throttling is that we preserve system memory for the system; the downside is that we throttle an application, possibly unnecessarily, when memory is plentiful. ZFS does not throttle individual apps like this. The scheme is more like: when the global load of application data overflows the I/O subsystem for 5 to 10 seconds, then we throttle the applications. Individual apps thus have _a lot_ more RAM to play with before being throttled. This is probably what's behind the notion that ZFS likes more RAM. But now applications run a lot more decoupled from the I/O subsystem, and ZFS can drive the I/O a lot closer to its top speed.

Roch Bourbonnais, Senior Performance Analyst, Sun Microsystems (ICNC Grenoble)
http://blogs.sun.com/roller/page/roch | Roch.Bourbonnais at Sun.Com
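(As a rough, purely illustrative way to see the freemem effect described above: the system_pages kstats below exist today, and mdb's ::memstat gives a per-category page breakdown, though on current builds ZFS file data is counted in its "Kernel" bucket.)

# kstat -p unix:0:system_pages:freemem
# kstat -p unix:0:system_pages:pp_kernel
# echo ::memstat | mdb -k

On a machine doing a lot of ZFS I/O, freemem will look low and pp_kernel (kernel-held pages, which include the ZFS ARC) will look high, even though much of that is evictable cache that ZFS hands back under memory pressure.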
Hello Jürgen,

Friday, April 28, 2006, 11:26:31 AM, you wrote:

>> However, there's still an unfixed problem on 32-bit platforms with memory
>> starvation caused by ZFS (or has that been fixed already?).

JK> If that is:
JK> Bug 6397610: ARC cache performance problems, on 32-bit x86 kernel
JK> and 6398177: zfs: poor nightly build performance in 32-bit mode (high disk activity)

That's it.

JK> then I've been using the following patch for a few weeks now, and the problem is gone:
JK>
JK> +	/*
JK> +	 * Reclaim unused memory from all kmem caches.
JK> +	 */
JK> +	kmem_reap();

I vaguely remember you proposing that solution, but there must be a good reason it wasn't accepted. I don't know, but it looks like U2 on x86/32-bit will not have this fixed, at least not at the beginning. Or maybe it is the solution and you just need to ping the ZFS team. It would be good if this were fixed before U2, so people on x86/32-bit won't loudly complain about ZFS.

--
Robert Milkowski
I'll add to that concern. One of our to-be-deployed ZFS NAS boxes has to be a quad-Xeon based system of the 32-bit variety. Sadly, I can't update that to new hardware because of specific project funding sources.

On 4/28/06, Robert Milkowski <rmilkowski at task.gda.pl> wrote:
> I don't know, but it looks like U2 on x86/32-bit will not have this
> fixed, at least not at the beginning. Or maybe it is the solution and
> you just need to ping the ZFS team. It would be good if this were fixed
> before U2, so people on x86/32-bit won't loudly complain about ZFS.
Jason Hoffman wrote:
> Just as one data point: when booting up, it takes us about 6-7 minutes to
> mount almost 10,000 ZFS file systems, in the tens of terabytes, on a
> single system with 8 GB of RAM. After that I don't really see much
> persistent RAM use at all.

Not here. I'm testing on an ancient Enterprise 3500 with 2 GB of RAM. Attached are A5000 disks, from which I formed two pools: one pool consisting of mirrors (2x5x9GB), another configured as raidz (6x9GB).

Kernel memory usage at one point reached ~1 GB (I check kernel memory via "kstat -n system_pages -s pp_kernel") during installation of two Oracle database instances on top of one pool (compression enabled), each with a 600 MB SGA. During catalog creation I noticed memory shortage.

Daniel
Roch Bourbonnais - Performance Engineering wrote:
> Reported freemem will be lower when running with ZFS than with, say,
> UFS. The UFS page cache is counted as freemem. ZFS will return its
> 'cache' only when memory is needed. So you will operate with lower
> freemem, but you won't actually suffer from this.

Thanks for the very informative write-up. This clears up a few issues for me, at least.

However, I'm still a bit worried that we'll be running with a lower freemem value. The issue here is one of provisioning and capacity planning; or, put another way, how do I know when I've got enough memory if freemem is always low? Having a freemem value we could believe in, as well as the corresponding performance improvements, was a huge win for us when Solaris 8 came along, and it makes it very easy to see when we're out of memory. For example, in our production environments at work we have automated monitoring which alerts us when freemem drops below a particular percentage of the total physical memory on the machine; it sounds like ZFS is going to break this.

Is there any way (preferably a simple one) to get the same easy-to-understand figure when ZFS is in use, or am I missing something?

Thanks,

Phil.
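(To make the monitoring concern concrete, a sketch only: the 10% threshold and the script are made up for illustration, though the system_pages kstats it reads are real.)

#!/bin/sh
# Hypothetical free-memory alert of the kind described above.
# Under ZFS this can fire even on a healthy system, because memory
# held by the ARC is not counted as free, although it would be
# released under memory pressure.
PHYS=`kstat -p unix:0:system_pages:physmem | awk '{print $2}'`
FREE=`kstat -p unix:0:system_pages:freemem | awk '{print $2}'`
PCT=`expr $FREE \* 100 / $PHYS`
if [ $PCT -lt 10 ]; then
        echo "WARNING: only ${PCT}% of physical memory reported free"
fi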
Sorry guys, I have to take the blame for letting this slip. I have been working with the VM folks on some comprehensive changes to the way ZFS works with the VM system (still a ways out, I'm afraid), and let this bug slip into the background. I'm afraid it's probably too late to get this into the Update 2 release, but I will try to get it into a patch.

-Mark

Joe Little wrote:
> I'll add to that concern. One of our to-be-deployed ZFS NAS boxes has
> to be a quad-Xeon based system of the 32-bit variety. Sadly, I can't
> update that to new hardware because of specific project funding
> sources.
Roch Bourbonnais - Performance Engineering wrote:
> Reported freemem will be lower when running with ZFS than with, say,
> UFS. The UFS page cache is counted as freemem. ZFS will return its
> 'cache' only when memory is needed. So you will operate with lower
> freemem, but you won't actually suffer from this.

The RAM usage should be made more transparent to the administrator. Just today, after installing snv_37 on another machine, I couldn't disable swap because ZFS had grabbed all the free memory it could get and didn't release it (even after a "zpool export"):

# swap -l
swapfile             dev  swaplo blocks   free
/dev/md/dsk/d2      85,4       8 4193272 4193272

# swap -s
total: 275372k bytes allocated + 93876k reserved = 369248k used, 1899492k available

# swap -d /dev/md/dsk/d2
/dev/md/dsk/d2: Not enough space

# kstat | grep pp_kernel
        pp_kernel                       872514

# prtconf | head -2
System Configuration:  Sun Microsystems  i86pc
Memory size: 4095 Megabytes

# zpool export pool
# zpool list
no pools available
# swap -d /dev/md/dsk/d2

This was shortly after installation, with not much running on the machine. To speed up 'mirroring' of swap I usually do the following (but couldn't in this case):

swap -d /dev/md/dsk/d2
metaclear d2
metainit d2 -m d21 d22 0
swap -a /dev/md/dsk/d2

Daniel
Roch Bourbonnais - Performance Engineering, 2006-May-11 12:48 UTC:
Certainly something we'll have to tackle. How about a zpool memstat (or zpool iostat -m) variation that would report at least freemem and the amount of evictable cached data?

Would that work for you?

-r

Philip Beevers writes:
> However, I'm still a bit worried that we'll be running with a lower
> freemem value. The issue here is one of provisioning and capacity
> planning; or, put another way, how do I know when I've got enough
> memory if freemem is always low?
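(Purely a hypothetical mock-up of the proposal above; neither the subcommand nor the output format exists, but a report covering freemem plus evictable cache might look something like this:)

$ zpool memstat
freemem               412M
cached (evictable)   1790M
cached (pinned)        53M

An administrator could then add freemem and the evictable figure to get a number comparable to the old UFS-era notion of free memory.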
Roch Bourbonnais - Performance Engineering, 2006-May-11 13:20 UTC:
I think there are two potential issues here.

The ZFS cache, or ARC, manages memory for all pools on a system, but the data is not really organized per pool. So on a pool export we don't free up the buffers associated with that pool. The memory is actually returned to the system either when pressure arises or on a modunload of the ZFS module; yep, that's a bit extreme. So how about an RFE, say:

6424665: "ZFS/ARC should cleanup more after itself"

That would have helped your scenario. But I see another point here: the "swap -d" failed to exert the required memory pressure on ZFS. That sounds like another bug we'd need to track.

-r

Daniel Rock writes:
> The RAM usage should be made more transparent to the administrator.
> Just today, after installing snv_37 on another machine, I couldn't
> disable swap because ZFS had grabbed all the free memory it could get
> and didn't release it (even after a "zpool export").
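(For what it's worth, a sketch of the 'extreme' modunload route mentioned above; it only succeeds when no pools are imported and nothing else holds the module busy, and the module id placeholder below is something you would read off the modinfo output.)

# modinfo | grep -w zfs
# modunload -i <id-from-modinfo>

Under normal operation you would instead rely on memory pressure to shrink the ARC.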
On 5/11/06, Roch Bourbonnais - Performance Engineering <Roch.Bourbonnais at sun.com> wrote:
> Certainly something we'll have to tackle. How about a zpool memstat
> (or zpool iostat -m) variation that would report at least freemem and
> the amount of evictable cached data?
>
> Would that work for you?
>
> -r

Suppose I have a system with 32 GB of RAM and 50 GB of ZFS file systems running a fairly light load: a single JVM that uses about one GB of RAM. The first time that backups run, the free memory as reported by vmstat will drop to nearly zero. For all practical purposes, it will not return to having a significant amount of free RAM until the system reboots.

This breaks the capacity planning tools that I use across hundreds of machines. The retraining of sysadmins, DBAs and others just seems like my punishment for finally getting %iowait == 0.

A stable interface is needed to indicate how much memory is in active use. Yes, "active" can be fuzzy, but "I backed that 500 MB log file up two days ago and it is sitting in the ZFS buffer cache" is certainly not active.

While being able to say that the amount of RAM available is the sum of the amount indicated by vmstat and the amount reported by the "zfs memstat" would be workable, it makes me feel like I am in for another change with the next update to ZFS, the next overhaul of NFS, etc.

Perhaps what would be really useful (does this coincide with the memory sets work?) is to have a vmstat option that shows how much memory is being used by each class of memory usage. It may be useful to see something like:

$ vmstat -b 5
memtype    resident    hot    warm    cold
kernel         2345    120     540    1685
usr            7823   1034     970    5819
nfsbuf         3434      0       0    3434
zfsbuf         9804    540    1078    8186
free            ...

memtype    resident    hot    warm    cold
...

Some columns for paging activity of the various types may be useful too. The key thing I wanted to suggest here is that with such an interface you can see how much is in a very active working set, how much is used sometimes, and how much is there just because nothing else has needed the space yet. It would certainly help in understanding how memory is used, particularly when you have workloads that see memory mappings to filesystem buffers lost due to application heap demands.

Perhaps in the world of memory sets, nfsbuf and zfsbuf become their own memory sets, and usr may be split across several memory sets.

And why "vmstat -b"? According to the S9 man page I had handy, that option wasn't taken. Perhaps "vmstat -t" for temperature? But I bet others have better words to use than hot, warm, and cold. And of course, I am not sure that my notion of hot, warm, and cold has any practical way to be counted in the current VM subsystem or under the improvements in the works.

Is this worth pursuing further?

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/