Displaying 20 results from an estimated 529 matches for "eviction".
2018 Dec 05
3
Strange regalloc behaviour: one more available register causes much worse allocation
enableAdvancedRASplitCost() does the same thing as ConsiderLocalIntervalCost, but as a
subtarget option instead of a command-line option, and as I’ve said, it doesn’t help because
it’s a non-local interval causing the eviction chain (RAGreedy::splitCanCauseEvictionChain
only considers the local interval for a single block, and it’s unclear to me how to make it
handle a non-local interval).
John
From: Nirav Davé [mailto:niravd at google.com]
Sent: 05 December 2018 17:14
To: John Brawn
Cc: llvm-dev; nd
Subject: Re: [llvm...
2017 Aug 27
7
[Bug 102430] New: nv4x - memory problems when starting graphical application - logs included
...0208 data 02000248
[ 121.769931] nouveau 0000:08:00.0: gr: intr 00100000 [ERROR] nsource 00000002 [DATA_ERROR] nstatus 02000000 [BAD_ARGUMENT] ch 3 [00118000 supertuxkart[1074]] subc 7 class 4097 mthd 0208 data 020
[ 124.918095] [TTM] Failed to find memory space for buffer 0xffff97a0c738dc00 eviction
[ 124.925842] [TTM] No space for ffff97a0c738dc00 (1366 pages, 5464K, 5M)
[ 124.932458] [TTM] placement[0]=0x00070002 (1)
[ 124.936978] [TTM] has_type: 1
[ 124.940283] [TTM] use_type: 1
[ 124.943590] [TTM] flags: 0x0000000A
[ 124.947423] [TTM] gpu_offset: 0x00000000
[ 124.9...
2018 Dec 05
2
Strange regalloc behaviour: one more available register causes much worse allocation
...immediately instead of being
requeued, and then makes %391 have a higher score than %253, causing it to
be allocated before it. This works, but ends up causing an extra spill.
* In RAGreedy::splitAroundRegion put global intervals into stage RS_Split
immediately. This makes the chain of evictions after %396 not happen, but
that gives us one extra spill and we still get one pair of copies in
bb.17.switchdest13.
* In RAGreedy::evictInterference put evicted registers into a new RS_Evicted
stage, which is like RS_Assign but can't evict anything. This seemed to give
OK res...
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi!
We've started to poke and prod at Lustre 1.6.4.1, and it seems to
mostly work (we haven't had it OOPS on us yet like the earlier
1.6-versions did).
However, we had this weird incident where an active client (it was
copying 4GB files and running ls at the time) got evicted by the MDS
and all OSTs. After a while logs indicate that it did recover the
connection
2007 Sep 18
3
newbie question about a dbuf-arc eviction race
Hi,
Can a dbuf be in DB_CACHED state, db_holds == 0,
b_efunc != NULL while its db_buf is put on the
eviction list? From an ASSERT in dbuf_do_evict(),
it appears that it can. If it can, I am wondering what
is preventing the following race
dbuf_hold_impl()
db = dbuf_find(dn, level, blkid);
...
if (db->db_buf && refcount_is_zero(&db->db_holds)) {
arc_buf_...
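The race being asked about is the classic check-then-act pattern: the eviction path tests db_holds == 0 and starts tearing the buffer down while the lookup path finds the same dbuf and takes a hold. Below is a generic C sketch of that hazard and of the usual cure (doing the find+hold and the check+teardown under one lock); the names are invented for illustration and this is not ZFS code.

/* Generic sketch of the check-then-act hazard; cache_entry, cache_lock and
 * friends are invented names, not ZFS code. */
#include <pthread.h>
#include <stdlib.h>

struct cache_entry {
        int refcount;            /* plays the role of db_holds */
        int cached;              /* plays the role of DB_CACHED */
        void *payload;           /* plays the role of db_buf */
};

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

/* Lookup path: the find and the hold must happen under the same lock the
 * evictor takes, otherwise the entry can be torn down in between. */
struct cache_entry *cache_hold(struct cache_entry *e)
{
        pthread_mutex_lock(&cache_lock);
        if (e->cached && e->payload != NULL)
                e->refcount++;   /* take the hold before dropping the lock */
        else
                e = NULL;
        pthread_mutex_unlock(&cache_lock);
        return e;
}

/* Eviction path: re-check the hold count under the same lock before the
 * teardown; doing this check without the lock is the race window. */
void cache_try_evict(struct cache_entry *e)
{
        pthread_mutex_lock(&cache_lock);
        if (e->cached && e->refcount == 0) {
                e->cached = 0;
                free(e->payload);
                e->payload = NULL;
        }
        pthread_mutex_unlock(&cache_lock);
}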
2023 Sep 12
1
Feature Concept: enable iCloud Drive with rsync
Hi,
I have also posted this on GitHub but it isn't clear that was the right place: https://github.com/WayneD/rsync/issues/522
iCloud Drive will evict files that are unused or when additional space is needed on the local drive. The evicted files are replaced by "bookmark" files that allow macOS to continue to report the files in the file system as though they were actually present. The
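For context, a rough sketch of how a tool might detect such an evicted placeholder before trying to read it. It assumes the SF_DATALESS st_flags bit exposed in <sys/stat.h> on recent macOS marks these files; it is only an illustration, not part of rsync.

/* Sketch: detect an evicted ("dataless") placeholder on macOS.
 * Assumption: SF_DATALESS is available in <sys/stat.h> on recent macOS. */
#include <stdio.h>
#include <sys/stat.h>

#ifndef SF_DATALESS
#define SF_DATALESS 0x40000000   /* assumed value: "file is a dataless object" */
#endif

static int is_evicted_placeholder(const char *path)
{
        struct stat st;

        if (lstat(path, &st) != 0)
                return -1;
        /* st_size still reports the full size, but the data is not local. */
        return (st.st_flags & SF_DATALESS) ? 1 : 0;
}

int main(int argc, char **argv)
{
        if (argc > 1)
                printf("%s dataless: %d\n", argv[1], is_evicted_placeholder(argv[1]));
        return 0;
}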
2017 Feb 08
3
[Bug 1119] New: Hash code evicting other entries upon entry deletion (v6.25.1-v6.30)
...ity: normal
Priority: P5
Component: default
Assignee: netfilter-buglog at lists.netfilter.org
Reporter: eje.netfilter at ewanco.com
Created attachment 492
--> https://bugzilla.netfilter.org/attachment.cgi?id=492&action=edit
Bash script to demonstrate eviction of entries
ipset (v6.30, v6.29, v6.25.1, but not v6.21.1) hash code sometimes evicts
(or bumps) other entries in the set as a side effect of entry deletion
(ipset del). The symptom of this is that you get the error "ipset v6.30:
Element cannot be deleted from the set: it's not a...
2023 Aug 20
3
[PATCH drm-misc-next 0/3] [RFC] DRM GPUVA Manager GPU-VM features
So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.
However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM
2009 Dec 26
1
NV50: the tiled buffer object eviction problem
In short, we move out the low level content of a buffer object. In the
case of textures and such this is utterly useless. Still it is
accessed, because ttm sees no problem in using PL_SYSTEM or PL_TT
memory. What is the best way to let ttm know we don't really
appreciate that? Maybe a custom memory type to evict to that cannot be
mapped, or an extension of PL_SYSTEM or PL_TT. Share your ideas.
2010 Jan 16
0
[PATCH] drm/nouveau: Evict buffers in VRAM before freeing sgdma
Currently, we take down the sgdma engine without evicting all buffers
from VRAM.
The TTM device release will try to evict anything in VRAM to GART
memory, but this will fail since sgdma has already been taken down.
This causes an infinite loop in kernel mode on module unload.
It usually doesn't happen because there aren't any buffers on close.
However, if the GPU is locked up, this
2007 Dec 14
1
evicting clients when shutdown cleanly?
Should I be seeing messages like:
Dec 14 12:06:59 nyx170 kernel: Lustre: MGS: haven't heard from client
dadccfac-8610-06e7-9c02-90e552694947 (at 141.212.30.185 at tcp) in 234
seconds. I think it's dead, and I am evicting it.
when the client was shut down cleanly, and the Lustre file system is
mounted via /etc/fstab? The file system (I would hope) would be
unmounted
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running lustre for about 1 month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster uses all GigE and has about 608 nodes, 1854 cores.
We have a lot of jobs that die, and/or go into high IO wait; strace
shows processes stuck in fstat().
The big problem (I think, and I would like some feedback on it) is that of
these 608 nodes, 209 of them have in dmesg
2010 Mar 05
17
why L2ARC device is used to store files ?
Greetings all,
I have created a pool that consists of a hard disk and an SSD as a cache:
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0 - cache device
I ran an OLTP benchmark to emulate a DBMS.
Once I ran the benchmark, the pool started creating the database file on the
SSD cache device?
Can anyone explain why this is happening?
Isn't the L2ARC supposed to absorb the evicted data
2015 Jul 14
4
[LLVMdev] Poor register allocation (constants causing spilling)
...D).
4) The virtual register for interval C is assigned to XMM9.
5) The virtual register for interval D is assigned to XMM0.
At the boundary between A and C we have a copy from XMM8 to XMM9, and
at the boundary between C and D we have a copy from XMM9 to XMM0.
The spill is the result of several evictions. To assign interval D to
XMM0 another virtual register was evicted. This virtual register had
previously caused an eviction, which was subsequently spilled.
*** Spill Weights
The main question is why does the register allocator decide to split
the constant live range and evict other non-remate...
2022 Aug 19
4
[PATCH] nouveau: explicitly wait on the fence in nouveau_bo_move_m2mf
It is a bit unclear to us why that's helping, but it does and unbreaks
suspend/resume on a lot of GPUs without any known drawbacks.
Cc: stable at vger.kernel.org # v5.15+
Closes: https://gitlab.freedesktop.org/drm/nouveau/-/issues/156
Signed-off-by: Karol Herbst <kherbst at redhat.com>
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
The inode eviction can be very slow, because during eviction we
tell the VFS to truncate all of the inode's pages. This results
in calls to btrfs_invalidatepage(), which in turn calls
lock_extent_bits() and clear_extent_bit(). These calls result in
too many merges and splits of extent_state structures...
2020 Aug 07
2
[PATCH nbdkit] plugins: file: More standard cache mode names
The new cache=none mode is misleading since it does not avoid usage of
the page cache. When using shared storage, we may get stale data from
the page cache. When writing, we flush after every write, which is
inefficient and unneeded.
Rename the cache modes to:
- writeback - write completes when the system call returns, and the data
has been copied to the page cache.
- writethrough - write completes
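To make the distinction concrete, here is a minimal POSIX sketch of the two completion semantics; it is illustrative only, not nbdkit's implementation: writeback returns as soon as the data is in the page cache, writethrough forces it to stable storage first.

/* Minimal sketch of writeback vs. writethrough completion semantics. */
#include <sys/types.h>
#include <unistd.h>

/* writeback: write() returns once the data sits in the page cache;
 * nothing forces it to stable storage yet. */
ssize_t write_writeback(int fd, const void *buf, size_t len)
{
        return write(fd, buf, len);
}

/* writethrough: force the data to stable storage before reporting
 * completion, e.g. with fdatasync(). Opening the file with O_DSYNC
 * gives the same per-write guarantee. */
ssize_t write_writethrough(int fd, const void *buf, size_t len)
{
        ssize_t n = write(fd, buf, len);

        if (n >= 0 && fdatasync(fd) != 0)
                return -1;
        return n;
}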
2020 Aug 07
3
[PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
...16135 16 16119
pages in cache: 14533/8388608 (0.2%) [filesize=33554432.0K, pagesize=4K]
In this case the file largely avoids being pulled into the page cache,
and we do not evict useful stuff.
Notice that the test takes slightly longer to run. This is expected
because page cache eviction happens synchronously. I expect the cost
when doing sequential writes to be higher. Linus outlined a technique
to do this without the overhead, but unfortunately it is considerably
more complex and dangerous than I am comfortable adding to the file
plugin:
http://lkml.iu.edu/hypermail/linux/kern...
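A minimal sketch of the synchronous approach described above (not the actual nbdkit file plugin): write, flush, then drop the just-written range from the page cache with posix_fadvise(). Flushing before each drop is the synchronous cost the message refers to.

/* Sketch: keep just-written data out of the page cache. Illustrative only. */
#include <fcntl.h>
#include <unistd.h>

int write_uncached(int fd, const void *buf, size_t len, off_t offset)
{
        ssize_t n = pwrite(fd, buf, len, offset);

        if (n < 0)
                return -1;

        /* Flush so the pages are clean; this is the synchronous step. */
        if (fdatasync(fd) != 0)
                return -1;

        /* Ask the kernel to drop the (now clean) range from the page cache. */
        return posix_fadvise(fd, offset, (off_t)n, POSIX_FADV_DONTNEED);
}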
2012 Dec 16
0
[Bug 56788] [nv96] Dota2 (wine) consistently crashes with "WARNING: out of code space, evicting all shaders"
Status: NEW --> RESOLVED
Resolution: --- --> FIXED
--- Comment #6 from Emil Velikov <emil.l.velikov at gmail.com> ---
nv50/c0 was reworked to reallocate the last shader after the eviction, thus
resolving the crash
Messages about "not uniquely defined" are still present although do not cause
any noticeable issues
Closing bug
2011 Mar 04
1
node eviction
Hello... I wonder if someone has had a similar problem to this... a node gets evicted almost on a weekly basis and I have not found the root cause yet....
Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Node 1 joins domain 129859624F7042EAB9829B18CA65FC88
Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Nodes in domain ("129859624F7042EAB9829B18CA65FC88"): 1 2 3 4
Mar 3 16:18:02 xirisoas3 kernel: