search for: text_poke_ear

2017 Oct 06
4
[PATCH 11/13] x86/paravirt: Add paravirt alternatives infrastructure
...Xen HVM guests. From what I can tell, HVM guests still use pv_time_ops and pv_mmu_ops.exit_mmap, right? > > + apply_alternatives(__pv_alt_instructions, __pv_alt_instructions_end); > > +} > > > This is a problem (at least for Xen PV guests): > apply_alternatives()->text_poke_early()->local_irq_save()->...'cli'->death. Ah, right. > It might be possible not to turn off/on the interrupts in this > particular case since the guest probably won't be able to handle an > interrupt at this point anyway. Yeah, that should work. For Xen and for the o...
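
For context on the crash chain quoted above, this is roughly what text_poke_early() looked like in kernels of that era (a simplified sketch of arch/x86/kernel/alternative.c, not the exact source): the whole patch is bracketed by local_irq_save()/local_irq_restore(), and on a Xen PV guest the native irq-save path reaches a raw 'cli' that the unprivileged guest cannot survive, as the report describes.

/*
 * Simplified sketch of text_poke_early() as it looked around these threads
 * (arch/x86/kernel/alternative.c); exact details varied between kernel
 * versions. It is only safe before SMP bring-up: interrupts are disabled
 * around the memcpy() and no other CPU can run the code being patched.
 */
void *__init_or_module text_poke_early(void *addr, const void *opcode,
				       size_t len)
{
	unsigned long flags;

	local_irq_save(flags);	/* on Xen PV, reaching a raw 'cli' here is
				 * what kills the guest, per the report */
	memcpy(addr, opcode, len);
	local_irq_restore(flags);
	sync_core();
	return addr;
}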
2015 Mar 19
2
[PATCH 9/9] qspinlock, x86, kvm: Implement KVM support for paravirt qspinlock
On 03/16/2015 09:16 AM, Peter Zijlstra wrote: > Implement the paravirt qspinlock for x86-kvm. > > We use the regular paravirt call patching to switch between: > > native_queue_spin_lock_slowpath() __pv_queue_spin_lock_slowpath() > native_queue_spin_unlock() __pv_queue_spin_unlock() > > We use a callee saved call for the unlock function which reduces the > i-cache
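
As an illustration of the switch described in this excerpt, here is a minimal sketch of the pv-ops indirection involved. The struct shape, field names, and the feature check are simplified for illustration and do not match the kernel sources exactly; the slowpath/unlock function names are taken from the thread.

/*
 * Illustrative sketch only: a pv ops table whose entries default to the
 * native qspinlock paths and are redirected to the paravirt variants when
 * the guest (here KVM) can use PV spinlocks. The regular paravirt call
 * patching then rewrites the indirect calls at the call sites during boot.
 */
struct pv_lock_ops {
	void (*queue_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
	void (*queue_spin_unlock)(struct qspinlock *lock);
};

struct pv_lock_ops pv_lock_ops = {
	.queue_spin_lock_slowpath = native_queue_spin_lock_slowpath,
	.queue_spin_unlock        = native_queue_spin_unlock,
};

void __init kvm_spinlock_init(void)
{
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return;	/* host offers no PV spinlock support */

	pv_lock_ops.queue_spin_lock_slowpath = __pv_queue_spin_lock_slowpath;
	pv_lock_ops.queue_spin_unlock        = __pv_queue_spin_unlock;
}

The posted series additionally wraps the unlock function as a callee-saved call, which is the i-cache/register-pressure point raised in the quoted message.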
2017 Oct 12
2
[Xen-devel] [PATCH 11/13] x86/paravirt: Add paravirt alternatives infrastructure
...n tell, HVM guests still use pv_time_ops and >> pv_mmu_ops.exit_mmap, right? >> >>>> + apply_alternatives(__pv_alt_instructions, __pv_alt_instructions_end); >>>> +} >>> This is a problem (at least for Xen PV guests): >>> apply_alternatives()->text_poke_early()->local_irq_save()->...'cli'->death. >> Ah, right. >> >>> It might be possible not to turn off/on the interrupts in this >>> particular case since the guest probably won't be able to handle an >>> interrupt at this point anyway. >>...
2017 Oct 05
0
[PATCH 11/13] x86/paravirt: Add paravirt alternatives infrastructure
.... > + */ > +void __init apply_pv_alternatives(void) > +{ > + setup_force_cpu_cap(X86_FEATURE_PV_OPS); Not for Xen HVM guests. > + apply_alternatives(__pv_alt_instructions, __pv_alt_instructions_end); > +} This is a problem (at least for Xen PV guests): apply_alternatives()->text_poke_early()->local_irq_save()->...'cli'->death. It might be possible not to turn off/on the interrupts in this particular case since the guest probably won't be able to handle an interrupt at this point anyway. > + > void __init_or_module apply_paravirt(struct paravirt_patch_...
2015 Mar 19
0
[PATCH 9/9] qspinlock,x86,kvm: Implement KVM support for paravirt qspinlock
...f risky to use it here unless we can guarantee that > call site patching is atomic wrt other CPUs. Just look at where the patching is done: init/main.c:start_kernel() check_bugs() alternative_instructions() apply_paravirt() We're UP and not holding any locks, disable IRQs (see text_poke_early()) and have NMIs 'disabled'.
2015 Mar 19
1
[PATCH 9/9] qspinlock, x86, kvm: Implement KVM support for paravirt qspinlock
...ee that >> call site patching is atomic wrt other CPUs. > Just look at where the patching is done: > > init/main.c:start_kernel() > check_bugs() > alternative_instructions() > apply_paravirt() > > We're UP and not holding any locks, disable IRQs (see text_poke_early()) > and have NMIs 'disabled'. You are probably right. The initial apply_paravirt() was done before the SMP boot. Subsequent ones were at kernel module load time. I put a counter in the __native_queue_spin_unlock() and it registered 26949 unlock calls in a 16-cpu guest before it go...
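
The counter experiment mentioned here would look something like the following purely hypothetical instrumentation (not part of the posted series): a counter bumped in the native unlock path shows how many unlock calls happen before the call sites are patched over to the paravirt version.

/*
 * Hypothetical debug counter of the kind described above: it counts how
 * often the native unlock runs before paravirt patching replaces the call
 * sites. Not part of the posted series.
 */
static atomic_t native_unlock_count = ATOMIC_INIT(0);

static void __native_queue_spin_unlock(struct qspinlock *lock)
{
	atomic_inc(&native_unlock_count);	/* debug only */
	smp_store_release((u8 *)lock, 0);	/* release the lock byte */
}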
2017 Oct 12
0
[PATCH 11/13] x86/paravirt: Add paravirt alternatives infrastructure
...From what I can tell, HVM guests still use pv_time_ops and > pv_mmu_ops.exit_mmap, right? > >>> + apply_alternatives(__pv_alt_instructions, __pv_alt_instructions_end); >>> +} >> >> This is a problem (at least for Xen PV guests): >> apply_alternatives()->text_poke_early()->local_irq_save()->...'cli'->death. > Ah, right. > >> It might be possible not to turn off/on the interrupts in this >> particular case since the guest probably won't be able to handle an >> interrupt at this point anyway. > Yeah, that should work...
2017 Oct 04
1
[PATCH 11/13] x86/paravirt: Add paravirt alternatives infrastructure
...@@ -269,6 +270,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len) } extern struct alt_instr __alt_instructions[], __alt_instructions_end[]; +extern struct alt_instr __pv_alt_instructions[], __pv_alt_instructions_end[]; extern s32 __smp_locks[], __smp_locks_end[]; void *text_poke_early(void *addr, const void *opcode, size_t len); @@ -598,6 +600,17 @@ int alternatives_text_reserved(void *start, void *end) #endif /* CONFIG_SMP */ #ifdef CONFIG_PARAVIRT +/* + * Paravirt alternatives are applied much earlier than normal alternatives. + * They are only applied when running on...
2017 Oct 12
0
[Xen-devel] [PATCH 11/13] x86/paravirt: Add paravirt alternatives infrastructure
...use pv_time_ops and >>> pv_mmu_ops.exit_mmap, right? >>> >>>>> + apply_alternatives(__pv_alt_instructions, __pv_alt_instructions_end); >>>>> +} >>>> This is a problem (at least for Xen PV guests): >>>> apply_alternatives()->text_poke_early()->local_irq_save()->...'cli'->death. >>> Ah, right. >>> >>>> It might be possible not to turn off/on the interrupts in this >>>> particular case since the guest probably won't be able to handle an >>>> interrupt at this p...
2023 Jun 08
3
[RFC PATCH 0/3] x86/paravirt: Get rid of paravirt patching
This is a small series getting rid of paravirt patching by switching completely to alternative patching for the same functionality. The basic idea is to add the capability to switch from indirect to direct calls via a special alternative patching option. This removes _some_ of the paravirt macro maze, but most of it needs to stay due to the need of hiding the call instructions from the compiler
2018 Oct 29
2
guestfs launch failed in CentOS 7.5
...ve_set_fixmap+0x40/0x40 [ 110.721000] [<ffffffffad86b14c>] ? end_pv_cpu_ops_usergs_sysret32+0x3/0x3 [ 110.721000] [<ffffffffadf28c83>] ? simd_coprocessor_error+0x3/0x30 [ 110.721000] [<ffffffffad86a4b6>] ? native_restore_fl+0x6/0x10 [ 110.721000] [<ffffffffad833126>] text_poke_early+0x36/0x40 [ 110.721000] [<ffffffffad833383>] apply_paravirt+0xb3/0xe0 [ 110.721000] [<ffffffffad8973f4>] ? vprintk_emit+0x3c4/0x510 [ 110.721000] [<ffffffffad9a00e6>] ? free_hot_cold_page+0x106/0x160 [ 110.721000] [<ffffffffad99bc6d>] ? adjust_managed_page_count+0x...
2017 Oct 04
31
[PATCH 00/13] x86/paravirt: Make pv ops code generation more closely match reality
This changes the pv ops code generation to more closely match reality. For example, instead of: callq *0xffffffff81e3a400 (pv_irq_ops.save_fl) vmlinux will now show: pushfq pop %rax nop nop nop nop nop which is what the runtime version of the code will show in most cases. This idea was suggested by Andy Lutomirski. The benefits are: - For the most common runtime cases
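
The "pushfq; pop %rax" sequence in the disassembly above is just the native flags-save primitive emitted inline in place of the old indirect pv_irq_ops.save_fl call; roughly (simplified from arch/x86/include/asm/irqflags.h), and the trailing nops fill the site out to the length of the original indirect call:

/*
 * Simplified sketch of native_save_fl(), whose inline asm is what shows up
 * as "pushfq; pop %rax" in the vmlinux disassembly quoted above.
 */
static inline unsigned long native_save_fl(void)
{
	unsigned long flags;

	asm volatile("pushf ; pop %0"
		     : "=rm" (flags)
		     : /* no input */
		     : "memory");
	return flags;
}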
2013 Nov 15
23
[PATCH -tip RFC v2 00/22] kprobes: introduce NOKPROBE_SYMBOL() and general cleaning of kprobe blacklist
Currently the blacklist is maintained by hand in kprobes.c which is separated from the function definition and is hard to catch up the kernel update. To solve this issue, I've tried to implement new NOKPROBE_SYMBOL() macro for making kprobe blacklist at build time. Since the NOKPROBE_SYMBOL() macros can be placed right after the function is defined, it is easy to maintain. This series
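
As a concrete illustration of the pattern described, NOKPROBE_SYMBOL() is placed right after the function it protects; the handler below is made up for the example.

#include <linux/kprobes.h>

/*
 * Minimal usage sketch of the NOKPROBE_SYMBOL() annotation the series adds:
 * tagging the symbol right after its definition puts it on the kprobe
 * blacklist at build time, instead of maintaining a hand-written list in
 * kprobes.c. The handler itself is a made-up example.
 */
static int early_trap_helper(struct pt_regs *regs)
{
	/* code that must never be hit by a kprobe (e.g. it runs in the
	 * int3/kprobe handling path itself) */
	return 0;
}
NOKPROBE_SYMBOL(early_trap_helper);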