search for: livepatching

Displaying 14 results from an estimated 14 matches for "livepatching".

2023 Jan 26
1
[PATCH 2/2] vhost: check for pending livepatches from vhost worker kthreads
...ask immediately. > > > > > > > > It should be safe because the kthread never leaves vhost_worker(). > > > > It means that the same kthread could never re-enter this function > > > > and use the new code. > > > > > > My knowledge of livepatching internals is fairly limited, so I'll accept > > > it if you say that it's safe to do it this way. But let me ask about one > > > scenario. > > > > > > Let's say that a livepatch is loaded which replaces vhost_worker(). New > > > vhost worker...
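The scenario quoted above concerns a livepatch module that replaces vhost_worker() itself. A minimal sketch of such a patch, modeled on the upstream samples/livepatch/livepatch-sample.c, is shown below; the replacement body and the assumption that vhost_worker() is patched through the "vhost" object are illustrative and not taken from the thread.

/*
 * Sketch of a livepatch replacing vhost_worker(), modeled on the
 * upstream livepatch sample. Illustrative only: the replacement body
 * and the "vhost" object name are assumptions.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>

/* Hypothetical replacement for the original vhost_worker() loop. */
static int livepatched_vhost_worker(void *data)
{
	/* patched worker loop would go here */
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "vhost_worker",
		.new_func = livepatched_vhost_worker,
	},
	{ }
};

static struct klp_object objs[] = {
	{
		.name = "vhost",	/* the function lives in vhost.ko */
		.funcs = funcs,
	},
	{ }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int patch_init(void)
{
	return klp_enable_patch(&patch);
}

static void patch_exit(void)
{
}

module_init(patch_init);
module_exit(patch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");

The question raised in the quoted exchange is what happens to worker kthreads already parked inside the old vhost_worker() when such a patch transitions: they never return from that function, so per-task consistency checking can never observe them at a patching-safe point unless something explicitly switches their state.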
2023 Jan 27
1
[PATCH 0/2] vhost: improve livepatch switching for heavily loaded vhost worker kthreads
On Thu, Jan 26, 2023 at 08:43:55PM -0800, Josh Poimboeuf wrote: > On Thu, Jan 26, 2023 at 03:12:35PM -0600, Seth Forshee (DigitalOcean) wrote: > > On Thu, Jan 26, 2023 at 06:03:16PM +0100, Petr Mladek wrote: > > > On Fri 2023-01-20 16:12:20, Seth Forshee (DigitalOcean) wrote: > > > > We've fairly regularly seen livepatches which cannot transition within
2023 Jan 22
0
[PATCH 0/2] vhost: improve livepatch switching for heavily loaded vhost worker kthreads
On Fri, Jan 20, 2023 at 04:12:20PM -0600, Seth Forshee (DigitalOcean) wrote: > We've fairly regularly seen livepatches which cannot transition within kpatch's > timeout period due to busy vhost worker kthreads. In looking for a solution, the > only answer I found was to call klp_update_patch_state() from a safe location. > I tried adding this call to vhost_worker(), and it
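The approach this cover letter describes, calling klp_update_patch_state() from a safe location in vhost_worker(), would look roughly like the sketch below. The demo_* names are hypothetical stand-ins for the real vhost code, and whether klp_update_patch_state() is usable from module context at this point is an assumption rather than something the thread settles.

#include <linux/kthread.h>
#include <linux/livepatch.h>
#include <linux/sched.h>

/* Illustrative work item; the real one is struct vhost_work. */
struct demo_work {
	void (*fn)(struct demo_work *work);
};

/* Hypothetical helper: next queued work item, or NULL if idle. */
static struct demo_work *demo_pick_next_work(void *dev)
{
	return NULL;	/* the real code would dequeue from the device's work list */
}

static int demo_worker(void *dev)
{
	struct demo_work *work;

	for (;;) {
		if (kthread_should_stop())
			break;

		work = demo_pick_next_work(dev);
		if (work)
			work->fn(work);
		else
			schedule();

		/*
		 * No work is half-finished here, so this is a safe point
		 * to let a pending livepatch transition switch this
		 * kthread to the new patch state instead of stalling.
		 */
		if (unlikely(klp_patch_pending(current)))
			klp_update_patch_state(current);
	}
	return 0;
}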
2023 Jan 27
0
[PATCH 0/2] vhost: improve livepatch switching for heavily loaded vhost worker kthreads
...I hadn't > considered. Could you please provide some more details about the test system? Is there anything important to make it reproducible? The following aspects come to my mind. It might require: + more workers running on the same system + a dedicated CPU for the worker + livepatching the function called by work->fn() + running the same work again and again + a huge and overloaded system > > Honestly, kpatch's timeout of 1 minute looks incredibly low to me. Note > > that the transition is tried only once per minute. It means that there > > are "o...
2023 Jan 26
0
[PATCH 0/2] vhost: improve livepatch switching for heavily loaded vhost worker kthreads
On Fri 2023-01-20 16:12:20, Seth Forshee (DigitalOcean) wrote: > We've fairly regularly seen livepatches which cannot transition within kpatch's > timeout period due to busy vhost worker kthreads. I have missed this detail. Miroslav told me that we solved something similar some time ago, see https://lore.kernel.org/all/20220507174628.2086373-1-song at kernel.org/ Honestly,
2019 Oct 09
2
Livepatch: Linux kernel updates without rebooting
Hi, I am running CentOS Linux release 7.7.1908 (Core). Does CentOS Linux kernel 3.10.0-1062.1.1.el7.x86_64 support kernel updates without rebooting (Live Patching)? I look forward to hearing from you and thanks in advance. Best Regards, Kaushal
2019 Oct 09
0
Livepatch: Linux kernel updates without rebooting
On Wed, Oct 9, 2019 at 11:27 AM Kaushal Shriyan <kaushalshriyan at gmail.com> wrote: > Hi, > > I am running CentOS Linux release 7.7.1908 (Core). Does CentOS Linux kernel > 3.10.0-1062.1.1.el7.x86_64 support kernel updates without rebooting (Live > Patching)? > > I look forward to hearing from you and thanks in advance. > > > this was just discussed at length
2020 Nov 03
0
[patch V3 24/37] sched: highmem: Store local kmaps in task struct
Instead of storing the map per CPU, provide and use per-task storage. That prepares for local kmaps which are preemptible. The context switch code is preparatory and not yet in use because kmap_atomic() runs with preemption disabled. It will be made usable in the next step. The context switch logic is safe even when an interrupt happens after clearing or before restoring the kmaps. The kmap index in
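The idea summarized above, per-task storage of active local kmaps that the scheduler tears down on switch-out and re-establishes on switch-in, can be sketched roughly as follows. All names and the KM_MAX_IDX bound are illustrative; the real series differs in detail.

#include <linux/mm.h>
#include <linux/sched.h>

#define KM_MAX_IDX	16	/* illustrative bound on nested local kmaps */

/* Per-task record of active local kmaps (conceptually part of task_struct). */
struct kmap_ctrl {
	int	idx;			/* number of active local kmaps */
	pte_t	pteval[KM_MAX_IDX];	/* saved PTEs for this task */
};

/* Hypothetical arch helpers: install/clear the fixmap PTE for slot @i. */
static void kmap_set_slot(int i, pte_t pte) { /* arch-specific in reality */ }
static void kmap_clear_slot(int i) { /* arch-specific in reality */ }

/* Outgoing task: remove its local kmaps from this CPU. */
static void kmap_local_sched_out(struct kmap_ctrl *kmap)
{
	int i;

	for (i = 0; i < kmap->idx; i++)
		kmap_clear_slot(i);
}

/* Incoming task: re-establish its local kmaps on this CPU. */
static void kmap_local_sched_in(struct kmap_ctrl *kmap)
{
	int i;

	for (i = 0; i < kmap->idx; i++)
		kmap_set_slot(i, kmap->pteval[i]);
}

Because the saved PTE values travel with the task, a local kmap taken before a preemption is valid again once the task is scheduled back in, which is what allows kmap sections to become preemptible at all.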
2019 May 05
3
CentOS 7 Xen 4.12 libvirt/virt-manager wrong path for qemu-system-i386
Hello, While testing Virt-SIG Xen 4.12 rpms on CentOS 7 I noticed the following problem with libvirt/virt-manager when manually installing a new HVM guest from the virt-manager GUI: basically, the VM installation won't start, because libvirt/virt-manager is not able to start the VM due to a "missing" qemu-system-i386 binary: Unable to complete install: 'unsupported configuration:
2018 Oct 11
2
xen_4.11.1~pre.20180911.5acdd26fdc+dfsg-2_multi.changes ACCEPTED into unstable, unstable
Accepted: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Format: 1.8 Date: Fri, 05 Oct 2018 19:38:52 +0100 Source: xen Binary: xenstore-utils xen-utils-common xen-hypervisor-common xen-doc xen-utils-4.11 xen-hypervisor-4.11-amd64 xen-system-amd64 xen-hypervisor-4.11-arm64 xen-system-arm64 xen-hypervisor-4.11-armhf xen-system-armhf libxen-dev libxenmisc4.11 libxencall1 libxendevicemodel1
2020 Nov 03
45
[patch V3 00/37] mm/highmem: Preemptible variant of kmap_atomic & friends
Following up to the discussion in: https://lore.kernel.org/r/20200914204209.256266093 at linutronix.de and the second version of this: https://lore.kernel.org/r/20201029221806.189523375 at linutronix.de this series provides a preemptible variant of kmap_atomic & related interfaces. This is achieved by: - Removing the RT dependency from migrate_disable/enable() - Consolidating all
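For readers who have not followed the series, the practical difference shows up in how a page is mapped for short-term access. The sketch below contrasts the existing kmap_atomic() pattern with the preemptible local variant (shown with the kmap_local_page()/kunmap_local() names that ended up upstream; treat the exact names as an assumption of this sketch rather than a quote from the series).

#include <linux/highmem.h>
#include <linux/types.h>

/* Existing pattern: kmap_atomic() disables preemption (and pagefaults)
 * for the whole mapped section, so nothing inside it may sleep. */
static u8 read_first_byte_atomic(struct page *page)
{
	u8 *vaddr = kmap_atomic(page);
	u8 val = vaddr[0];

	kunmap_atomic(vaddr);
	return val;
}

/* Preemptible variant: the mapping is stored per task, so the section
 * may be preempted and is restored across context switches; migration
 * to another CPU is prevented while it is held, and the pointer must
 * not be handed to another context. */
static u8 read_first_byte_local(struct page *page)
{
	u8 *vaddr = kmap_local_page(page);
	u8 val = vaddr[0];

	kunmap_local(vaddr);
	return val;
}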
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to allow device drivers with MMUs to dynamically mirror a process's page tables based on device faults and invalidation callbacks. The Nouveau driver is updated to use the extended API, and a set of stand-alone self-tests is added to help validate and maintain correctness. The patches are based on linux-5.5.0-rc6 and are for
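As a rough illustration of the interval notifier usage the cover letter describes, a driver embeds a notifier in its per-range state, registers it against the process address space, and tears down device mappings from the invalidate callback. The demo_* names below are hypothetical; the mmu_interval_* calls are the base interval-notifier API, and the new functions added by this series are not reproduced here.

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

/* Hypothetical per-range driver state embedding the interval notifier. */
struct demo_range {
	struct mmu_interval_notifier notifier;
	/* device page-table bookkeeping would live here */
};

static bool demo_invalidate(struct mmu_interval_notifier *mni,
			    const struct mmu_notifier_range *range,
			    unsigned long cur_seq)
{
	struct demo_range *drange = container_of(mni, struct demo_range, notifier);

	/* If we cannot block, ask the core to retry from a blockable context. */
	if (!mmu_notifier_range_blockable(range))
		return false;

	/*
	 * Bump the sequence number first so concurrent device faults see
	 * the invalidation, then tear down the device mappings.
	 */
	mmu_interval_set_seq(mni, cur_seq);
	pr_debug("invalidate %p [%lx, %lx)\n", drange, range->start, range->end);
	/* demo_device_unmap(drange, range->start, range->end); -- hypothetical */
	return true;
}

static const struct mmu_interval_notifier_ops demo_ops = {
	.invalidate = demo_invalidate,
};

/* Register: mirror [start, start + length) of @mm for this range. */
static int demo_register(struct demo_range *drange, struct mm_struct *mm,
			 unsigned long start, unsigned long length)
{
	return mmu_interval_notifier_insert(&drange->notifier, mm,
					    start, length, &demo_ops);
}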