search for: evicts

Displaying 20 results from an estimated 529 matches for "evicts".

2018 Dec 05
3
Strange regalloc behaviour: one more available register causes much worse allocation
...2B %427:gpr32 = COPY %429:gpr32
7800B %432:gpr64 = COPY %434:gpr64
7808B %373:gpr64sp = IMPLICIT_DEF
7816B %374:gpr64sp = IMPLICIT_DEF
8048B B %bb.30
Looking at the debug output of the register allocator, the sequence of events which kicks things off is:
%223 assigned to w0
%283 evicts %381 from w15
%381 requeued for second round
%253 assigned to w15
%381 split for w15 in 4 bundles into %391-%395
%391, %392, %395 are not local intervals
%393 is the local interval for bb.11.switchdest09
%394 is the local interval for bb.17.switchdest13
%392 assigned to w15
%391 evicts %...
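
For readers who have not watched a greedy allocator work, below is a minimal, self-contained C sketch of the assign-or-evict loop that produces traces like the one above. It is only a sketch: all names (ivs, weight, the tiny two-register file) are illustrative, and LLVM's actual RAGreedy allocator is far more involved.

#include <stdio.h>

#define NREGS 2                 /* tiny register file to force evictions */

struct interval {
    int   id;                   /* virtual register number, e.g. 381 */
    float weight;               /* spill weight: cheaper intervals get evicted */
};

/* Hypothetical intervals, queued in priority order. */
static struct interval ivs[] = {
    { 223, 4.0f }, { 381, 1.0f }, { 283, 3.0f }, { 253, 2.0f },
};
#define NIVS ((int)(sizeof ivs / sizeof ivs[0]))

static int assigned[NREGS];     /* index into ivs[], or -1 if the reg is free */
static int queue[64], qhead, qtail;

static void enqueue(int i) { queue[qtail++] = i; }

int main(void)
{
    for (int r = 0; r < NREGS; r++) assigned[r] = -1;
    for (int i = 0; i < NIVS; i++) enqueue(i);

    while (qhead < qtail) {
        int i = queue[qhead++];

        /* First choice: a free register. */
        int reg = -1;
        for (int r = 0; r < NREGS && reg < 0; r++)
            if (assigned[r] < 0) reg = r;

        /* Otherwise: evict the cheapest interfering interval, but only
         * if we are strictly more expensive than it, and requeue the
         * victim for a second round -- "%283 evicts %381 from w15". */
        if (reg < 0) {
            int victim = 0;
            for (int r = 1; r < NREGS; r++)
                if (ivs[assigned[r]].weight < ivs[assigned[victim]].weight)
                    victim = r;
            if (ivs[assigned[victim]].weight < ivs[i].weight) {
                printf("%%%d evicts %%%d from w%d\n",
                       ivs[i].id, ivs[assigned[victim]].id, victim);
                enqueue(assigned[victim]);
                reg = victim;
            }
        }

        if (reg >= 0) {
            assigned[reg] = i;
            printf("%%%d assigned to w%d\n", ivs[i].id, reg);
        } else {
            /* No register and no evictable victim: split or spill. */
            printf("%%%d spilled\n", ivs[i].id);
        }
    }
    return 0;
}
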
2017 Aug 27
7
[Bug 102430] New: nv4x - memory problems when starting graphical application - logs included
https://bugs.freedesktop.org/show_bug.cgi?id=102430
Bug ID: 102430
Summary: nv4x - memory problems when starting graphical application - logs included
Product: xorg
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
2018 Dec 05
2
Strange regalloc behaviour: one more available register causes much worse allocation
...2B %427:gpr32 = COPY %429:gpr32
7800B %432:gpr64 = COPY %434:gpr64
7808B %373:gpr64sp = IMPLICIT_DEF
7816B %374:gpr64sp = IMPLICIT_DEF
8048B B %bb.30
Looking at the debug output of the register allocator, the sequence of events which kicks things off is:
%223 assigned to w0
%283 evicts %381 from w15
%381 requeued for second round
%253 assigned to w15
%381 split for w15 in 4 bundles into %391-%395
%391, %392, %395 are not local intervals
%393 is the local interval for bb.11.switchdest09
%394 is the local interval for bb.17.switchdest13
%392 assigned to w15
%391 evicts %...
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi! We've started to poke and prod at Lustre 1.6.4.1, and it seems to mostly work (we haven't had it OOPS on us yet like the earlier 1.6 versions did). However, we had this weird incident where an active client (it was copying 4GB files and running ls at the time) got evicted by the MDS and all OSTs. After a while the logs indicate that it did recover the connection
2007 Sep 18
3
newbie question about a dbuf-arc eviction race
...n, level, blkid);
...
if (db->db_buf && refcount_is_zero(&db->db_holds)) {
        arc_buf_add_ref(db->db_buf, db);
        /*
         * The above just returns as db_buf is on the
         * eviction list. Now, suppose arc_do_user_evicts()
         * selects this buf and calls dbuf_do_evict().
         * Nothing really stops this function.
         *
         * Now there is a race between dbuf_do_evict() and
         * the following code
         */
        if (db->d...
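
To make the window concrete without ZFS internals, here is a small, self-contained pthreads sketch of the same check-then-act pattern; all names (accessor, evictor, holds) are hypothetical stand-ins for db_holds, arc_buf_add_ref() and dbuf_do_evict(). The lock makes the refcount test and the ref acquisition one atomic step; remove it and the evictor can free the buffer in between, which is exactly the race described above.

#include <pthread.h>
#include <stdlib.h>

struct buf {
    int  holds;     /* reference count, like db->db_holds */
    int *data;      /* backing storage, freed on eviction */
};

static struct buf *global_buf;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *accessor(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    /* The test and the "add_ref" below must be one atomic step.
     * Without the lock, the evictor can free data in between --
     * that gap is the race the post describes. */
    if (global_buf->data != NULL && global_buf->holds == 0) {
        global_buf->holds++;            /* pin the buffer */
        *global_buf->data = 42;         /* safe only while holding the lock */
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *evictor(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    if (global_buf->holds == 0) {       /* nobody pinned it: evict */
        free(global_buf->data);
        global_buf->data = NULL;
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    struct buf b = { 0, malloc(sizeof(int)) };
    global_buf = &b;

    pthread_t t1, t2;
    pthread_create(&t1, NULL, accessor, NULL);
    pthread_create(&t2, NULL, evictor, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    free(b.data);   /* non-NULL iff the accessor pinned it first */
    return 0;
}
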
2023 Sep 12
1
Feature Concept: enable iCloud Drive with rsync
Hi, I have also posted this on GitHub but it isn't clear that was the right place: https://github.com/WayneD/rsync/issues/522 iCloud Drive will evict files that are unused or when additional space is needed on the local drive. The evicted files are replaced by "bookmark" files that allow macOS to continue to report the files in the file system as though they were actually present. The
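
As a sketch of how a tool like rsync could recognize such evicted files, the C program below scans a directory for placeholders. It assumes the common convention that an evicted foo.txt shows up as a hidden .foo.txt.icloud file; treat the naming as an assumption, not a documented API, and note that a real fix would more likely use Apple's file-provider interfaces.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Hidden, and ends in ".icloud": assumed placeholder naming. */
static int is_icloud_placeholder(const char *name)
{
    const char *suffix = ".icloud";
    size_t n = strlen(name), s = strlen(suffix);
    return name[0] == '.' && n > s && strcmp(name + n - s, suffix) == 0;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    DIR *d = opendir(path);
    if (!d) { perror(path); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        /* rsync could skip these, or trigger a download first. */
        if (is_icloud_placeholder(e->d_name))
            printf("evicted: %s/%s\n", path, e->d_name);
    }
    closedir(d);
    return 0;
}
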
2017 Feb 08
3
[Bug 1119] New: Hash code evicting other entries upon entry deletion (v6.25.1-v6.30)
https://bugzilla.netfilter.org/show_bug.cgi?id=1119
Bug ID: 1119
Summary: Hash code evicting other entries upon entry deletion (v6.25.1-v6.30)
Product: ipset
Version: unspecified
Hardware: x86_64
OS: other
Status: NEW
Severity: normal
Priority: P5
Component: default
2023 Aug 20
3
[PATCH drm-misc-next 0/3] [RFC] DRM GPUVA Manager GPU-VM features
So far the DRM GPUVA manager offers common infrastructure to track GPU VA allocations and mappings, generically connect GPU VA mappings to their backing buffers and perform more complex mapping operations on the GPU VA space. However, there are more design patterns commonly used by drivers, which can potentially be generalized in order to make the DRM GPUVA manager represent a basic GPU-VM
2009 Dec 26
1
NV50: the tiled buffer object eviction problem
In short, we move out the low-level contents of a buffer object. In the case of textures and such this is utterly useless. Still it is accessed, because ttm sees no problem in using PL_SYSTEM or PL_TT memory. What is the best way to let ttm know we don't really appreciate that? Maybe a custom memory type to evict to that cannot be mapped, or an extension of PL_SYSTEM or PL_TT. Share your ideas.
2010 Jan 16
0
[PATCH] drm/nouveau: Evict buffers in VRAM before freeing sgdma
Currently, we take down the sgdma engine without evicting all buffers from VRAM. The TTM device release will try to evict anything in VRAM to GART memory, but this will fail since sgdma has already been taken down. This causes an infinite loop in kernel mode on module unload. It usually doesn't happen because there aren't any buffers on close. However, if the GPU is locked up, this
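
To illustrate the ordering the patch argues for, here is a toy, self-contained C model; all names (struct drv, sgdma_takedown, evict_all_vram_buffers, device_release) are hypothetical stand-ins, not the real nouveau/TTM API. Eviction only works while the sgdma backend is alive, so teardown must evict first.

#include <stdio.h>

struct drv { int vram_buffers; int sgdma_alive; };

static int evict_all_vram_buffers(struct drv *d)
{
    if (!d->sgdma_alive)
        return -1;              /* nowhere to copy VRAM contents to */
    d->vram_buffers = 0;        /* pretend we migrated them to GART */
    return 0;
}

static void sgdma_takedown(struct drv *d) { d->sgdma_alive = 0; }

static int device_release(struct drv *d)
{
    /* TTM-style release: anything still in VRAM must be evicted,
     * which can never succeed once sgdma is gone -- the infinite
     * loop on module unload described above. */
    while (d->vram_buffers > 0) {
        if (evict_all_vram_buffers(d) != 0)
            return -1;          /* in the real bug: loops forever */
    }
    return 0;
}

int main(void)
{
    struct drv d = { .vram_buffers = 3, .sgdma_alive = 1 };

    /* Fixed ordering: evict first, then take sgdma down. */
    if (evict_all_vram_buffers(&d) != 0)
        return 1;
    sgdma_takedown(&d);
    return device_release(&d) ? 1 : 0;
}
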
2007 Dec 14
1
evicting clients when shutdown cleanly?
Should I be seeing messages like: Dec 14 12:06:59 nyx170 kernel: Lustre: MGS: haven't heard from client dadccfac-8610-06e7-9c02-90e552694947 (at 141.212.30.185 at tcp) in 234 seconds. I think it's dead, and I am evicting it. when the client was shut down cleanly, and the lustre file system is mounted via /etc/fstab? The file system (I would hope) would be unmounted
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes, 1854 cores. We have a lot of jobs that die and/or go into high IO wait; strace shows processes stuck in fstat(). The big problem (I think, and I would like some feedback on it) is that of these 608 nodes, 209 of them have in dmesg
2010 Mar 05
17
why L2ARC device is used to store files ?
Greetings All. I have created a pool that consists of a hard disk and an SSD as a cache:
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0 - cache device
I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the SSD cache device. Can anyone explain why this is happening? Isn't the L2ARC used to absorb evicted data
2015 Jul 14
4
[LLVMdev] Poor register allocation (constants causing spilling)
Hi, While investigating a performance issue with an internal codebase I came across what looks to be poor register allocation. I have constructed a small(ish) reproducible which demonstrates the issue (see test.ll attached). I have spent some time going through the register allocator to understand what is happening. I have also experimented with some small changes to try and improve the
2022 Aug 19
4
[PATCH] nouveau: explicitly wait on the fence in nouveau_bo_move_m2mf
It is a bit unclear to us why that's helping, but it does, and it unbreaks suspend/resume on a lot of GPUs without any known drawbacks.
Cc: stable at vger.kernel.org # v5.15+
Closes: https://gitlab.freedesktop.org/drm/nouveau/-/issues/156
Signed-off-by: Karol Herbst <kherbst at redhat.com>
---
 drivers/gpu/drm/nouveau/nouveau_bo.c | 9 +++++++++
 1 file changed, 9 insertions(+)
diff --git
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
The inode eviction can be very slow, because during eviction we tell the VFS to truncate all of the inode's pages. This results in calls to btrfs_invalidatepage() which in turn does calls to lock_extent_bits() and clear_extent_bit(). These calls result in too many merges and splits of extent_state structures, which consume a lot of time and cpu when the inode has many pages. In some
2020 Aug 07
2
[PATCH nbdkit] plugins: file: More standard cache mode names
The new cache=none mode is misleading since it does not avoid use of the page cache. When using shared storage, we may get stale data from the page cache. When writing, we flush after every write, which is inefficient and unneeded. Rename the cache modes to:
- writeback - write completes when the system call returns and the data has been copied to the page cache.
- writethrough - write completes
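
In plain POSIX terms the two modes differ only in what "write completed" promises. The sketch below uses hypothetical helper names (nbdkit's actual implementation lives in its file plugin): writeback returns once the data is in the page cache, while writethrough opens with O_DSYNC so each write() is durable before it returns.

#include <fcntl.h>
#include <unistd.h>

/* writeback: completion means the data reached the page cache; it may
 * still be lost on power failure until the kernel writes it back. */
static ssize_t write_writeback(int fd, const void *buf, size_t len)
{
    return write(fd, buf, len);
}

/* writethrough: O_DSYNC makes every write() durable before returning. */
static int open_writethrough(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_DSYNC, 0644);
}

int main(void)
{
    const char msg[] = "hello\n";

    int wb = open("wb.out", O_WRONLY | O_CREAT, 0644);
    int wt = open_writethrough("wt.out");
    if (wb < 0 || wt < 0)
        return 1;

    write_writeback(wb, msg, sizeof msg - 1);
    write(wt, msg, sizeof msg - 1);     /* durable when this returns */

    fsync(wb);   /* writeback still needs an explicit flush barrier */

    close(wb);
    close(wt);
    return 0;
}
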
2020 Aug 07
3
[PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
You can use these flags as described in the manual page to optimize access patterns, and to get better behaviour with the page cache in some scenarios. For my testing I used the cachedel and cachestats utilities written by Julius Plenz (https://github.com/Feh/nocache). I started with a 32 GB file of random data on a machine with about 32 GB of RAM. At the beginning of the test I evicted the
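
For readers who want to reproduce this kind of test, here is a minimal stand-in for the cachedel utility mentioned above: it asks the kernel to drop a file's pages from the page cache via posix_fadvise(). The fsync() comes first because dirty pages are not dropped by POSIX_FADV_DONTNEED.

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Flush dirty pages first, then advise the kernel to evict the
     * whole file (len 0 means from offset to end of file). */
    fsync(fd);
    if (posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED) != 0)
        perror("posix_fadvise");

    close(fd);
    return 0;
}
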
2012 Dec 16
0
[Bug 56788] [nv96] Dota2 (wine) consistently crashes with "WARNING: out of code space, evicting all shaders"
https://bugs.freedesktop.org/show_bug.cgi?id=56788
Emil Velikov <emil.l.velikov at gmail.com> changed:

           What        |Removed    |Added
----------------------------------------------------------------------------
           Status      |NEW        |RESOLVED
           Resolution  |---        |FIXED

--- Comment #6 from Emil Velikov
2011 Mar 04
1
node eviction
Hello... I wonder if someone has had a similar problem to this... a node gets evicted almost on a weekly basis and I have not found the root cause yet....
Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Node 1 joins domain 129859624F7042EAB9829B18CA65FC88
Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Nodes in domain ("129859624F7042EAB9829B18CA65FC88"): 1 2 3 4
Mar 3 16:18:02 x...