Displaying 20 results from an estimated 1285 matches for "reclaim".
2019 Jan 03
0
[PATCH nbdkit v4 2/2] cache: Implement cache-max-size and method of reclaiming space from the cache.
The original plan was to have a background thread doing the reclaim.
However that cannot work given the design of filters, because a
background thread cannot access the next_ops struct which is only
available during requests.
Therefore we spread the work over the request threads. Each blk_*
function checks whether there is work to do, and if there is will
reclaim...
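The approach the excerpt describes (inline, incremental reclaim on the request path, instead of a background thread) can be sketched roughly as below. All names here, such as maybe_reclaim, reclaim_one_block and cache_size, are illustrative stand-ins, not the actual nbdkit filters/cache code.

/* Minimal sketch: each blk_* entry point does a bounded slice of
 * reclaim work inline, on the request thread, because only request
 * threads can reach next_ops. */
#include <stdint.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t cache_size;   /* bytes currently used by the cache */
static uint64_t max_size;     /* the cache-max-size limit */

/* Evict one block; returns the number of bytes freed (hypothetical). */
extern uint64_t reclaim_one_block (void);

/* Called at the top of every blk_* function. */
static void
maybe_reclaim (void)
{
  pthread_mutex_lock (&lock);
  /* A bounded slice per request, so no single request pays the
   * whole reclaim cost. */
  for (int i = 0; i < 2 && cache_size > max_size; ++i)
    cache_size -= reclaim_one_block ();
  pthread_mutex_unlock (&lock);
}

static int
blk_read (uint64_t blknum, void *buf)
{
  maybe_reclaim ();  /* do a slice of reclaim work, then serve the I/O */
  /* ... read the block from the cache or from the underlying plugin ... */
  return 0;
}

Spreading the work this way keeps per-request latency bounded and never needs next_ops outside a request context.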
2018 Dec 28
0
[PATCH nbdkit 9/9] cache: Implement cache-max-size and method of reclaiming space from the cache.
The original plan was to have a background thread doing the reclaim.
However that cannot work given the design of filters, because a
background thread cannot access the next_ops struct which is only
available during requests.
Therefore we spread the work over the request threads. Each blk_*
function checks whether there is work to do, and if there is will
reclaim...
2019 Jan 03
2
Re: [PATCH nbdkit v2 4/4] cache: Implement cache-max-size and method of reclaiming space from the cache.
On 1/1/19 8:33 AM, Richard W.M. Jones wrote:
> The original plan was to have a background thread doing the reclaim.
> However that cannot work given the design of filters, because a
> background thread cannot access the next_ops struct which is only
> available during requests.
>
> Therefore we spread the work over the request threads. Each blk_*
> function checks whether there is work to do...
2019 Jan 01
0
[PATCH nbdkit v2 4/4] cache: Implement cache-max-size and method of reclaiming space from the cache.
The original plan was to have a background thread doing the reclaim.
However that cannot work given the design of filters, because a
background thread cannot access the next_ops struct which is only
available during requests.
Therefore we spread the work over the request threads. Each blk_*
function checks whether there is work to do, and if there is will
reclaim...
2019 Jan 03
0
[PATCH nbdkit v3 2/2] cache: Implement cache-max-size and method of reclaiming space from the cache.
The original plan was to have a background thread doing the reclaim.
However that cannot work given the design of filters, because a
background thread cannot access the next_ops struct which is only
available during requests.
Therefore we spread the work over the request threads. Each blk_*
function checks whether there is work to do, and if there is will
reclaim...
2019 Jan 04
0
[PATCH nbdkit v5 3/3] cache: Implement cache-max-size and cache space reclaim.
The original plan was to have a background thread doing the reclaim.
However that cannot work given the design of filters, because a
background thread cannot access the next_ops struct which is only
available during requests.
Therefore we spread the work over the request threads. Each blk_*
function checks whether there is work to do, and if there is will
reclaim...
2019 Jan 03
4
[PATCH nbdkit v4 0/2] cache: Implement cache-max-size and method of reclaiming space from the cache.
v3 was broken by a bad rebase, so let's forget about that one.
Compared to v2:
- Patch 1 is the same except for a minor comment change.
- Patch 2 splits the reclaim code into a separate file
(filters/cache/reclaim.c)
- Addressed Eric's comments from his review of v2.
- Retested on Linux and FreeBSD.
2008 Feb 21
3
Reclaiming transmit descriptors by NIC drivers with Crossbow new scheduling
...ching a NIC (or individual Rx
rings on the NIC) to polling mode.
The receive interrupt will become not only rarer, but more importantly
outside the control of the NIC drivers.
Some drivers, on the other hand, were designed and written before that
change. They used to piggy-back the tx descriptor reclaiming at the end
of the Rx interrupt, for example. At the same time, they disable the
transmit interrupt altogether, in an effort to minimize the number of
interrupts to the host (and the entailed context switches). As expected,
that sort of approach will (and it was actually observed to) lead
to qui...
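The legacy pattern being criticized, reclaiming completed tx descriptors from the Rx interrupt handler while leaving the tx interrupt disabled, might look something like the sketch below; every name in it is hypothetical rather than taken from a real driver.

/* Hypothetical sketch of the legacy pattern: the driver keeps the tx
 * completion interrupt disabled and reclaims finished tx descriptors
 * opportunistically from its Rx interrupt handler. */
#include <stdbool.h>

struct nic {
    unsigned tx_head, tx_tail, ring_size;
    bool (*desc_done)(struct nic *, unsigned idx);  /* HW done bit */
};

/* Free tx descriptors the hardware has finished with. */
static void tx_reclaim(struct nic *nic)
{
    while (nic->tx_head != nic->tx_tail &&
           nic->desc_done(nic, nic->tx_head)) {
        /* ... unmap the buffer and free it for this descriptor ... */
        nic->tx_head = (nic->tx_head + 1) % nic->ring_size;
    }
}

static void rx_interrupt(struct nic *nic)
{
    /* ... process received frames ... */
    tx_reclaim(nic);   /* piggy-back: no separate tx interrupt fires */
}

Once the stack switches the Rx rings to polling mode, that Rx interrupt fires rarely and outside the driver's control, so completed tx descriptors can sit unreclaimed for a long time, which is the failure mode described above.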
2019 Jan 03
1
Re: [PATCH nbdkit v4 2/2] cache: Implement cache-max-size and method of reclaiming space from the cache.
On 1/3/19 6:37 AM, Richard W.M. Jones wrote:
> The original plan was to have a background thread doing the reclaim.
> However that cannot work given the design of filters, because a
> background thread cannot access the next_ops struct which is only
> available during requests.
>
> Therefore we spread the work over the request threads. Each blk_*
> function checks whether there is work to do...
2019 Jan 03
3
[PATCH nbdkit v3 0/2] cache: Implement cache-max-size and method of reclaiming space from the cache.
Patch 1 is the same as last time, except for a minor comment fix.
Patch 2 should address everything that Eric mentioned in his review,
and has been retested.
Rich.
2010 Nov 08
4
2.0, hourly performance stats
I'm getting constantly high numbers of page reclaims & involuntary
context switches for dovecot/auth.
Page reclaims = minor faults = the CPU switching back to system mode. But
why is the auth process doing that so excessively? Same for the large
number of involuntary context switches...
Attached is my "dovecot -n" output.
Date: Sun, 07...
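"Page reclaims" in this output are the per-process minor-fault counter, and involuntary context switches are likewise tracked per process; both can be read through the standard getrusage(2) interface. A minimal self-check, nothing dovecot-specific:

/* Print the counters under discussion for the current process:
 * ru_minflt ("page reclaims"), ru_majflt, and ru_nivcsw. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == -1) {
        perror("getrusage");
        return 1;
    }
    printf("page reclaims (minor faults): %ld\n", ru.ru_minflt);
    printf("page faults (major faults):   %ld\n", ru.ru_majflt);
    printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    return 0;
}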
2019 Jan 01
7
[PATCH nbdkit v2 0/4] cache: Implement cache-max-size etc.
These are essentially identical to what was previously posted as
patches 6/9 through 9/9 here:
https://www.redhat.com/archives/libguestfs/2018-December/msg00145.html
except that it has been rebased onto the current git master and
retested thoroughly.
Rich.
2016 Dec 15
0
How to actively reclaim stack memory
On 15 Dec 2016, at 07:26, haifeng.qin at wellintech.com via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> For help:
>
> A for loop leads to stack overflow; I want to actively reclaim the stack memory of an alloca instruction.
>
> How can I actively reclaim stack memory?
This sounds as if you’re putting the alloca inside the loop, not in the entry basic block and reusing it. If you need the same amount of storage for each loop iteration, then you should put the alloca in the entr...
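The thread is about LLVM IR, but the C analogue of that advice is easy to show: an alloca executed inside the loop body grows the stack frame on every iteration, while one hoisted to the top of the function (the "entry block") is allocated once and reused. A hypothetical sketch:

/* alloca() storage is only released on function return, so repeated
 * calls inside a loop accumulate until the stack overflows. */
#include <alloca.h>
#include <string.h>

void bad(int n, size_t sz)
{
    for (int i = 0; i < n; i++) {
        char *buf = alloca(sz);   /* new stack space every iteration:
                                     the frame grows until overflow */
        memset(buf, 0, sz);
    }
}

void good(int n, size_t sz)
{
    char *buf = alloca(sz);       /* allocated once, at the top of the
                                     function; reused each iteration */
    for (int i = 0; i < n; i++)
        memset(buf, 0, sz);
}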
2010 Feb 15
3
zfs questions wrt unused blocks
Gents,
We want to understand the mechanism of zfs a bit better.
Q: what is the design/algorithm of zfs in terms of reclaiming unused
blocks?
Q: what criteria are there for zfs to start reclaiming blocks?
Issue at hand is an LDOM or zone running in a virtual
(thin-provisioned) disk on a NFS server and a zpool inside that vdisk.
This vdisk tends to grow in size even if the user writes and then
deletes a file. Question...
2019 Jan 04
5
[PATCH nbdkit v5 3/3] cache: Implement cache-max-size and cache space reclaim.
v4:
https://www.redhat.com/archives/libguestfs/2019-January/msg00032.html
v5:
- Now we set the block size at run time.
I'd like to say that I was able to test this change, but
unfortunately I couldn't find any easy way to create a filesystem
on x86-64 with a block size > 4K. Ext4 doesn't support it at all,
and XFS doesn't support block size > page size (and I
2018 Jul 12
1
[PATCH v35 1/5] mm: support to get hints of free page blocks
...sooner or later (I would much rather like the former) so
> > do not build a new logic on top of it. I would appreciate if you
> > actually remove the notifier much more.
> >
> > You can give memory back from the standard shrinker interface. If we are
> > reaching low reclaim priorities then we are struggling to reclaim memory
> > and then you can start returning pages back.
>
> OK. Just curious why oom notifier is thought to be hideous, and has it been
> a consensus?
Because it is a completely non-transparent callout from the OOM context
which is reall...
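The "standard shrinker interface" mentioned here is the count_objects/scan_objects pair. A rough sketch of a balloon driver handing pages back through it could look like the following; the balloon_* helpers are made up, and the registration call differs across kernel versions.

/* Sketch of giving memory back via the classic Linux shrinker API. */
#include <linux/shrinker.h>

extern unsigned long balloon_pages_inflated(void);            /* hypothetical */
extern unsigned long balloon_release_pages(unsigned long nr); /* hypothetical */

static unsigned long
balloon_shrink_count(struct shrinker *s, struct shrink_control *sc)
{
    /* How many pages could we hand back if asked? */
    return balloon_pages_inflated();
}

static unsigned long
balloon_shrink_scan(struct shrinker *s, struct shrink_control *sc)
{
    /* Deflate: return up to sc->nr_to_scan pages to the system. */
    return balloon_release_pages(sc->nr_to_scan);
}

static struct shrinker balloon_shrinker = {
    .count_objects = balloon_shrink_count,
    .scan_objects  = balloon_shrink_scan,
    .seeks         = DEFAULT_SEEKS,
};

/* register_shrinker(&balloon_shrinker) during driver init (pre-6.0 API). */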
2020 Feb 14
2
[PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM
...shrink the page cache. [1]
> "When inflating the balloon against page cache (i.e. no free memory
> remains) vmscan.c will both shrink page cache, but also invoke the
> shrinkers -- including the balloon's shrinker. So the balloon
> driver allocates memory which requires reclaim, vmscan gets this
> memory by shrinking the balloon, and then the driver adds the
> memory back to the balloon. Basically a busy no-op."
>
> The name "deflate on OOM" makes it pretty clear when deflation should
> happen - after other approaches to reclaim memory f...
2016 Dec 15
2
How to actively reclaim stack memory
For help:
A for loop leads to stack overflow; I want to actively reclaim the stack memory of an alloca instruction.
How can I actively reclaim stack memory?
haifeng.qin at wellintech.com
2016 Oct 18
0
Lockd: failed to reclaim lock for pid ...
...get a date stamp for the dmesg?
At least on CentOS7: dmesg -T
----- Original Message -----
From: "Dan Hyatt" <dhyatt at dsgmail.wustl.edu>
To: "CentOS mailing list" <centos at centos.org>
Sent: Tuesday, October 18, 2016 1:36:46 PM
Subject: [CentOS] Lockd: failed to reclaim lock for pid ...
My environment is "heterogeneous": my authentication and home servers are
currently stuck on a 1G shared network, the production servers and
storage servers are on a bonded 40G network, and all are in the same
VLAN. I have about 100 servers on the 40G bonded network, each w...