> On 17 February 2016, at 16:50, Lowell Gilbert <freebsd-stable-local at
> be-well.ilk.org> wrote:
>
> hiren panchasara <hiren at strugglingcoder.info> writes:
>
>> On 02/17/16 at 04:44P, Efraín Dóctor wrote:
>>> On 17/02/2016 at 01:15 p.m., dweimer wrote:
>>>>
>>>> They may not show as swapped unless the entire process is actually
>>>> swapped, which would be unlikely to occur. Personally I wouldn't worry
>>>> about it; the only thing I can think of is to restart processes one at
>>>> a time to see which one clears up the swap usage. Granted, you may see
>>>> a little clear up after each process.
>>>>
>>>> The more important task would be to determine what caused the memory
>>>> to run out in the first place, and decide if it's going to be a
>>>> frequent enough occurrence to justify adding physical memory to the
>>>> system.
>>>>
>>>> There is likely some way to find out what is using it, but that is
>>>> beyond my knowledge.
>>>>
>>>> --
>>>> Thanks,
>>>> Dean E. Weimer
>>>> http://www.dweimer.net/
>>>
>>> The server has 64 GB of RAM, and 40-45 GB are always inactive; that's
>>> why I'm wondering why the processes are being swapped out.
>
> There are almost certainly no processes being swapped out. Some of their
> *pages* are being stored in swap, but that is a very different thing.
>
>> Yes, I've seen this too. Inact ends up accumulating a very large chunk
>> of memory, leaving Free very low.
>
> That's yet another, different thing.
>
>> What VM/pagedaemon seems to care about is Free+Cache and not just Free.
>
> Which makes sense, even without a deep understanding of the concepts,
> because those are guaranteed to be able to be re-allocated immediately.
> It is literally true that the pageout scan does nothing when free+cache
> is less than the target.
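
For reference, those counters can be watched directly; a quick check
(sysctl names as on the 10.x kernels here, where the cache queue still
exists):

    # Pages currently free and cached, plus the pageout target (all in
    # pages).  The pageout scan should stay idle while
    # free + cache >= target, as described above.
    sysctl vm.stats.vm.v_free_count vm.stats.vm.v_cache_count \
        vm.stats.vm.v_free_target
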
>
>> I kind of get that Free mem is wasted mem, but putting everything in
>> Inact to the point that the machine has to go into swap when a sudden
>> need arises also doesn't seem right.
>
> "Go into swap" is too vague to mean much. I suspect that you
mean the
> system will have to start swapping out rapidly, but that isn't actually
> the case. First of all, pages in "inact" aren't necessarily
dirty, so
> re-using them wouldn't be as expensive as having to write them to
> backing store. Also, when a page is copied to swap, the surrounding
> pages are eligible to be copied to swap also, to take advantage of the
> bandwidth advantages of larger transfer sizes. This does not move them
> to the cache queue, although it does make that easier to do later if and
> when their "turn" comes up.
>
>> I guess it all boils down to adjusting the defaults to the system's
>> needs, i.e. if you know you have a proc that may need a large chunk of
>> mem, you'd need to tweak the free+cache target accordingly. What I find
>> lacking is a correct/easy way to do it. If I look at the available
>> sysctls:
>> vm.v_free_min: Minimum low-free-pages threshold
>> vm.v_cache_min: Min pages on cache queue
>> vm.v_free_target: Desired free pages
>> And I cannot get them to do the right thing to have more Free around so
>> swapping doesn't happen when a sudden need arises. And are these all
>> runtime sysctls, or do they require a reboot to take effect?
>
> These take effect immediately, from what I can see.
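
For example, something along these lines; the number is only a
placeholder, and this assumes your kernel accepts writes to these at
runtime (as seems to be the case here):

    # Units are pages, not bytes; with 4 KB pages, 262144 =~ 1 GB.
    sysctl vm.v_free_target              # read the current target
    sysctl vm.v_free_target=262144       # placeholder value: raise target
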
>
> Have you measured that paging (not swapping; that's a more extreme
> measure, where the whole process gets removed from memory) is a
> significant load on your system in a specific case? If not, I doubt that
> it's actually the case, and you're mitigating a non-existent problem.
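
One quick way to measure it is to watch vmstat while the workload runs;
a rough sketch (column names as printed by vmstat here):

    # Sample paging activity every 5 seconds.  Sustained non-zero values
    # in the "po" (pages paged out) column indicate real paging pressure;
    # occasional blips do not.
    vmstat 5
    # Cumulative pager statistics since boot:
    vmstat -s | grep -i pag
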
I believe the question here is what is using up the swap space. From what I
have been able to find in a similar situation, malloc will allocate swap
space to back memory, and mmap will also allocate swap if there is no
backing file. procstat -v can be helpful in chasing down some of those
issues. However, I ended up guessing which process it was by sequentially
restarting processes and watching swapinfo. I still have not been able to
chase down what in that process is using the space. There are no mmaps that
are not file-backed, so it must be a malloc. Finding the right one has been
elusive.
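
In case it helps anyone doing the same hunt, this is roughly what I have
been running (the awk field number matches the procstat -v column layout
here, so double-check it on your version; $PID is whichever process you
suspect):

    # Total swap currently in use, human-readable:
    swapinfo -h
    # List a process's anonymous mappings (type "df", i.e. swap-backed);
    # file-backed ("vn") mappings can be paged back in from their files
    # rather than from swap.
    procstat -v $PID | awk '$10 == "df"'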