On 2020-Jun-25 11:30:31 -0700, Donald Wilde <dwilde1 at gmail.com> wrote:
>Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>
>Device          1K-blocks     Used    Avail Capacity
>/dev/ada0s1b     33554432        0 33554432     0%
>/dev/ada0s1d     33554432        0 33554432     0%
>Total            67108864        0 67108864     0%

I strongly suggest you don't have more than one swap device on spinning
rust - the VM system will stripe I/O across the available devices and
that will give particularly poor results when it has to seek between the
partitions.

Also, you can't actually use 64GB swap with 4GB RAM.  If you look back
through your boot messages, I expect you'll find messages like:
  warning: total configured swap (524288 pages) exceeds maximum recommended amount (498848 pages).
  warning: increase kern.maxswzone or reduce amount of swap.
or maybe:
  WARNING: reducing swap size to maximum of xxxxMB per unit

The absolute limit on swap space is vm.swap_maxpages pages, but the realistic
limit is about half that.  By default the realistic limit is about 4×RAM (on
64-bit architectures), but this can be adjusted via kern.maxswzone (which
defines the number of bytes of RAM to allocate to swzone structures - the
actual space allocated is vm.swzone).

As a further piece of arcana, vm.pageout_oom_seq is a count that controls
the number of passes before the pageout daemon gives up and starts killing
processes when it can't free up enough RAM.  "Out of swap space" messages
generally mean that this number is too low rather than there being a
shortage of swap - particularly if your swap device is rather slow.

-- 
Peter Jeremy
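As a minimal, illustrative sketch of acting on the advice above - using the
device names from the 'pstat -s' output, and with no particular values
recommended - checking the limits and retiring the second swap partition on
the same spindle might look like:

  # inspect the swap limit and the OOM back-off count mentioned above
  sysctl vm.swap_maxpages vm.pageout_oom_seq

  # stop using the second swap partition on the same disk, then verify
  swapoff /dev/ada0s1d
  swapinfo -h          # or pstat -s, as above

  # keep it disabled across reboots by commenting out its /etc/fstab entry:
  #/dev/ada0s1d   none   swap   sw   0   0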
On 6/26/20, Peter Jeremy <peter at rulingia.com> wrote:
> On 2020-Jun-25 11:30:31 -0700, Donald Wilde <dwilde1 at gmail.com> wrote:
>>Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>>
>>Device          1K-blocks     Used    Avail Capacity
>>/dev/ada0s1b     33554432        0 33554432     0%
>>/dev/ada0s1d     33554432        0 33554432     0%
>>Total            67108864        0 67108864     0%
>
> I strongly suggest you don't have more than one swap device on spinning
> rust - the VM system will stripe I/O across the available devices and
> that will give particularly poor results when it has to seek between the
> partitions.

My intent is to make this machine function -- getting the bear dancing.
How deftly she dances is less important than that she dances at all. My
for-real boxen will have real HP and real cores and RAM.

> Also, you can't actually use 64GB swap with 4GB RAM.  If you look back
> through your boot messages, I expect you'll find messages like:
>   warning: total configured swap (524288 pages) exceeds maximum recommended
>   amount (498848 pages).
>   warning: increase kern.maxswzone or reduce amount of swap.

Yes, as I posted, those were part of the failure stream from the synth
program. When I had kern.maxswzone increased, it got through boot without
complaining.

> or maybe:
>   WARNING: reducing swap size to maximum of xxxxMB per unit

The warnings were there, in the as-it-failed complaints.

> The absolute limit on swap space is vm.swap_maxpages pages, but the
> realistic limit is about half that.  By default the realistic limit is
> about 4×RAM (on 64-bit architectures), but this can be adjusted via
> kern.maxswzone (which defines the number of bytes of RAM to allocate to
> swzone structures - the actual space allocated is vm.swzone).
>
> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
> the number of passes before the pageout daemon gives up and starts killing
> processes when it can't free up enough RAM.  "Out of swap space" messages
> generally mean that this number is too low rather than there being a
> shortage of swap - particularly if your swap device is rather slow.

Thanks, Peter!

-- 
Don Wilde
****************************************************
* What is the Internet of Things but a system      *
* of systems including humans?                     *
****************************************************
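For reference, kern.maxswzone is a boot-time tunable rather than a runtime
knob, so the increase Don describes would normally be set in
/boot/loader.conf; the figure below is only a placeholder, not a
recommendation:

  # /boot/loader.conf
  kern.maxswzone="67108864"   # bytes of RAM for swap metadata (placeholder value)

  # after the next boot, check what the kernel actually settled on
  sysctl kern.maxswzone vm.swap_maxpages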
On 26.06.20 at 12:23, Peter Jeremy wrote:
> On 2020-Jun-25 11:30:31 -0700, Donald Wilde <dwilde1 at gmail.com> wrote:
>> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>>
>> Device          1K-blocks     Used    Avail Capacity
>> /dev/ada0s1b     33554432        0 33554432     0%
>> /dev/ada0s1d     33554432        0 33554432     0%
>> Total            67108864        0 67108864     0%
>
> I strongly suggest you don't have more than one swap device on spinning
> rust - the VM system will stripe I/O across the available devices and
> that will give particularly poor results when it has to seek between the
> partitions.

This used to be beneficial, when disk read and write bandwidth was limited
and whole processes had to be swapped in or out due to RAM pressure. (This
changed due to more RAM and a different ratio of seek to transfer times for
a given amount of data.)

An idea for a better strategy: it might be better to use an allocation
algorithm that assigns a swap device to each running process that needs
pages written out, and only assigns another swap device (to be used from
then on for that process) when there is no free space left on the one used
until then.

Such a strategy would at least reduce the number of processes that need all
configured swap devices at the same time, compared with a striped
configuration. If all processes start with the first configured swap device
assigned to them, only that device will be used until it fills up, after
which allocation moves on to the next one.

Whether the initial swap device assigned to a process should always be the
first one configured in the system, or whether new assignments should move
on to the next device once the first one could not be used by some process
(typically the device already assigned to that process for further
page-outs), is not obvious to me. The behavior could be controlled by a
sysctl, to allow the strategy to be adapted to the hardware (e.g. rotating
vs. flash disks used for swap).

> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
> the number of passes before the pageout daemon gives up and starts killing
> processes when it can't free up enough RAM.  "Out of swap space" messages
> generally mean that this number is too low rather than there being a
> shortage of swap - particularly if your swap device is rather slow.

I'm not sure that this specific sysctl is documented in such a way that it
is easy to find by people suffering from out-of-memory kills. Perhaps it
could be mentioned in the OOM message as a parameter that may need tuning?

And while it does not come up that often on the mailing list, it might be
better for many kinds of application if the default was increased (a longer
wait for resources might be more acceptable than the loss of all results of
a long-running computation).

Regards, STefan
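For anyone hitting the OOM kills Stefan mentions who wants to experiment
before any change of defaults, the knob can be raised at runtime; 120 is
only an example figure, not an endorsed value:

  # give the pageout daemon more passes before it starts killing processes
  sysctl vm.pageout_oom_seq=120

  # make the change persistent across reboots
  echo 'vm.pageout_oom_seq=120' >> /etc/sysctl.conf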
> On 26 Jun 2020, at 11:23, Peter Jeremy <peter at rulingia.com> wrote:
>
> On 2020-Jun-25 11:30:31 -0700, Donald Wilde <dwilde1 at gmail.com> wrote:
>> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>>
>> Device          1K-blocks     Used    Avail Capacity
>> /dev/ada0s1b     33554432        0 33554432     0%
>> /dev/ada0s1d     33554432        0 33554432     0%
>> Total            67108864        0 67108864     0%
>
> I strongly suggest you don't have more than one swap device on spinning
> rust - the VM system will stripe I/O across the available devices and
> that will give particularly poor results when it has to seek between the
> partitions.

If you configure a ZFS mirror in bsdinstall you get a swap partition per
drive by default.

> Also, you can't actually use 64GB swap with 4GB RAM.  If you look back
> through your boot messages, I expect you'll find messages like:
>   warning: total configured swap (524288 pages) exceeds maximum recommended amount (498848 pages).
>   warning: increase kern.maxswzone or reduce amount of swap.
> or maybe:
>   WARNING: reducing swap size to maximum of xxxxMB per unit
>
> The absolute limit on swap space is vm.swap_maxpages pages, but the
> realistic limit is about half that.  By default the realistic limit is
> about 4×RAM (on 64-bit architectures), but this can be adjusted via
> kern.maxswzone (which defines the number of bytes of RAM to allocate to
> swzone structures - the actual space allocated is vm.swzone).
>
> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
> the number of passes before the pageout daemon gives up and starts killing
> processes when it can't free up enough RAM.  "Out of swap space" messages
> generally mean that this number is too low rather than there being a
> shortage of swap - particularly if your swap device is rather slow.
>
> --
> Peter Jeremy

--
Bob Bishop
t: +44 (0)118 940 1243
rb at gid.co.uk
m: +44 (0)783 626 4518
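To illustrate Bob's point: a two-disk root-on-ZFS install typically leaves
one swap line per drive in /etc/fstab. The partition names below are only
an assumed GPT layout, not what any particular installer run produces, and
swapinfo will list whatever is actually active:

  # /etc/fstab (hypothetical two-disk layout)
  /dev/ada0p2   none   swap   sw   0   0
  /dev/ada1p2   none   swap   sw   0   0

  swapinfo -h   # lists every swap device currently in use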