Displaying 6 results from an estimated 6 matches for "swappines".
2015 Jun 05
1
Effectiveness of CentOS vm.swappiness
...t my first time I've seen this behaviour to this extent / on
so many servers. So you can say I'm kind of a newbie at swapping ;)
> If you don't explicitly lock things into memory, file I/O can and will
> cause idle pages to get pushed out. It happens less often if you
> manipulate swappiness.
So, is a swappiness value of 60 not recommended for servers? I worked
with hundreds of servers (swappiness 60) on a social platform and
swapping very rarely happened and then only on databases (which had
swappiness set to 0). The only two differences (that I can see) to my
current servers are...
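The values discussed above (the distro default of 60, and 0 on the databases) are controlled by the vm.swappiness sysctl. A minimal sketch of inspecting and changing it; the value 10 and the sysctl.d filename below are illustrative choices, not from the thread:

```shell
# Print the current value; the usual distro default is 60
cat /proc/sys/vm/swappiness

# Lower it at runtime (requires root); takes effect immediately:
#   sysctl -w vm.swappiness=10
# Persist it across reboots (filename is an arbitrary choice):
#   echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
```

Lower values make the kernel prefer dropping page cache over swapping out anonymous pages; they do not prevent swapping entirely.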
2015 Jun 05
6
Effectiveness of CentOS vm.swappiness
...it's very low. AFAIK it's, as you already suggested,
just that some (probably unused) parts are swapped out. But, some of
those parts are the salt-minion, php-fpm or mysqld. All services which
are important for us and which suffer badly from being swapped out. I
already made some tests with swappiness 10 which mildly made it better.
But there still was swap usage. So I tend to set swappiness to 1. Which
I don't like to do, since those default values aren't there for nothing.
Is it possible that this happens because the servers are VMs on an
ESX server? How could that affect this? How c...
2015 Jun 05
0
Effectiveness of CentOS vm.swappiness
.../questions/196725/how-to-mlock-all-pages-of-a-process-tree-from-console
(I've always only done this with my own code explicitly calling mlock)
If you don't explicitly lock things into memory, file I/O can and will
cause idle pages to get pushed out. It happens less often if you
manipulate swappiness.
-- greg
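Whether a process may lock pages, and how much it currently has locked, can be checked without writing any code; a small sketch, assuming a Linux /proc filesystem:

```shell
# Max locked-memory limit for the current shell, in KiB
# (mlock/mlockall fail with ENOMEM beyond this unless the process is privileged)
ulimit -l

# How much memory this process currently has locked; "0 kB" if nothing is mlocked
grep VmLck /proc/self/status
```

Raising the limit for unprivileged services is typically done via limits.conf (`memlock`) or the systemd `LimitMEMLOCK=` directive.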
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
...> Bad news is that iozone is still the same. There might be some
> misunderstanding.
>
> I have two cases:
>
> 1) cache=unsafe. In this case, I can see that the hypervisor is prone to swap.
> Swap a lot. It usually eats the whole swap partition and kswapd runs at 100%
> CPU. swappiness, dirty_ratio and company do not improve things at all.
> However, I believe this is just the wrong option for scratch disks, where one
> can expect huge I/O load. Moreover, the hypervisor is a poor machine with only
> low memory left (ok, in my case about 10GB available), so it does not make sen...
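The swap pressure described above (swap partition filling up, kswapd pegged at 100%) can be confirmed from /proc; a minimal read-only sketch:

```shell
# Total vs. free swap on the host
grep -E '^Swap(Total|Free)' /proc/meminfo

# Cumulative pages swapped in/out since boot; sample twice to see the rate
# (vmstat's si/so columns report the same counters per interval)
grep -E '^pswp(in|out)' /proc/vmstat
```

A large gap between SwapTotal and SwapFree together with rapidly growing pswpin/pswpout counters is the signature of the thrashing described in the mail.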
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote:
> Hello,
>
> ok, I found that cpu pinning was wrong, so I corrected it to be 1:1. The issue
> with iozone remains the same.
>
> The spec is running, however, it runs slower than 1-NUMA case.
>
> The corrected XML looks like follows:
[Reformatted XML for better reading]
<cpu mode="host-passthrough">
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
...previous wrong case.
So far so good.
Bad news is that iozone is still the same. There might be some
misunderstanding.
I have two cases:
1) cache=unsafe. In this case, I can see that the hypervisor is prone to swap.
Swap a lot. It usually eats the whole swap partition and kswapd runs at 100%
CPU. swappiness, dirty_ratio and company do not improve things at all.
However, I believe this is just the wrong option for scratch disks, where one
can expect huge I/O load. Moreover, the hypervisor is a poor machine with only
low memory left (ok, in my case about 10GB available), so it does not make sense
to use that...