search for: send_ipi_self

Displaying 7 results from an estimated 7 matches for "send_ipi_self".

2011 Sep 07
10
[PATCH] IRQ: Group IRQ_MOVE_CLEANUP_VECTOR with other hypervisor IPIs
...c.c --- a/xen/arch/x86/io_apic.c Mon Sep 05 15:10:28 2011 +0100 +++ b/xen/arch/x86/io_apic.c Wed Sep 07 16:00:55 2011 +0100 @@ -476,7 +476,7 @@ fastcall void smp_irq_move_cleanup_inter * to myself. */ if (irr & (1 << (vector % 32))) { - genapic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR); + genapic->send_IPI_self(MOVE_CLEANUP_VECTOR); TRACE_3D(TRC_HW_IRQ_MOVE_CLEANUP_DELAY, irq, vector, smp_processor_id()); goto unlock; @@ -513,7 +513,7 @@ static void send_cleanup_vector(struct i cpus_and(...
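The IRR test visible in this hunk checks whether the vector is still pending on the local APIC before re-raising the cleanup IPI to the current CPU. A minimal sketch of that pattern, assuming the standard x86 local-APIC layout (the 256-bit IRR is exposed as eight 32-bit registers spaced 0x10 apart); this is illustrative, not the exact Xen source:

    unsigned int irr;

    /* Read the 32-bit IRR word that holds this vector's pending bit. */
    irr = apic_read(APIC_IRR + (vector / 32) * 0x10);

    /* If the vector is still pending, defer the cleanup by sending
     * the (renamed) cleanup vector back to the current CPU. */
    if (irr & (1u << (vector % 32)))
        genapic->send_IPI_self(MOVE_CLEANUP_VECTOR);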
2012 Apr 20
1
[PATCH v2 0/2] fix "perf top" soft lockups under Xen
...ter irq_work_run These 2 patches fixed the "perf top" soft lockups under Xen reported by Steven at: https://lkml.org/lkml/2012/2/9/506 Both Steven and I tested it and "perf top" works well now. The soft lockup code path is: __irq_work_queue arch_irq_work_raise apic->send_IPI_self(IRQ_WORK_VECTOR); apic_send_IPI_self __default_send_IPI_shortcut __xapic_wait_icr_idle static inline void __xapic_wait_icr_idle(void) { while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY) cpu_relax(); } The lockup happens in the above while loop....
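The quoted __xapic_wait_icr_idle() spins unboundedly on the ICR busy bit; under Xen's emulated APIC that bit may never clear, hence the livelock. For comparison, mainline Linux also carries a bounded variant along the lines of safe_apic_wait_icr_idle(). A rough sketch of such a bounded wait (the function name here is hypothetical, and this is only to show why an unbounded spin can lock up, not the fix these patches actually apply):

    static u32 bounded_wait_icr_idle(void)  /* hypothetical name */
    {
        u32 busy;
        int tries = 0;

        /* Poll the ICR busy bit, but give up after a bounded number
         * of retries instead of spinning forever the way the quoted
         * __xapic_wait_icr_idle() does. */
        do {
            busy = native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY;
            if (!busy)
                break;
            udelay(100);
        } while (tries++ < 1000);

        return busy;  /* non-zero means the ICR never went idle */
    }

The patches in this thread take a different route (avoiding the hang under Xen); the sketch above only illustrates the failure mode of the unbounded loop.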
2012 Apr 15
0
(no subject)
..."perf top" soft lockups under Xen reported by Steven at: https://lkml.org/lkml/2012/2/9/506 I tested it with 3.4-rc2 and "perf top" works well now. Steven, Could you please help to test it too? The soft lockup code path is: __irq_work_queue arch_irq_work_raise apic->send_IPI_self(IRQ_WORK_VECTOR); apic_send_IPI_self __default_send_IPI_shortcut __xapic_wait_icr_idle static inline void __xapic_wait_icr_idle(void) { while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY) cpu_relax(); } The lockup happens at above while looop...
2012 Apr 15
0
Re: [PATCH 0/2] fix "perf top" soft lockups under Xen
...ted by Steven at: https://lkml.org/lkml/2012/2/9/506 > > I tested it with 3.4-rc2 and "perf top" works well now. > > Steven, > Could you please help to test it too? > > The soft lockup code path is: > > __irq_work_queue >  arch_irq_work_raise >    apic->send_IPI_self(IRQ_WORK_VECTOR); >      apic_send_IPI_self >        __default_send_IPI_shortcut >          __xapic_wait_icr_idle > > static inline void __xapic_wait_icr_idle(void) > { >        while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY) >                cpu_relax(); > } ...
2007 Apr 18
2
refactoring io_apic.c
...efined(CONFIG_IRQBALANCE) # include <asm/processor.h> /* kernel_thread() */ # include <linux/kernel_stat.h> /* kstat */ @@ -661,24 +524,6 @@ late_initcall(balanced_irq_init); static inline void move_irq(int irq) { } #endif /* CONFIG_IRQBALANCE */ -#ifndef CONFIG_SMP -void fastcall send_IPI_self(int vector) -{ - unsigned int cfg; - - /* - * Wait for idle. - */ - apic_wait_icr_idle(); - cfg = APIC_DM_FIXED | APIC_DEST_SELF | vector | APIC_DEST_LOGICAL; - /* - * Send the IPI. The write to APIC_ICR fires this off. - */ - apic_write_around(APIC_ICR, cfg); -} -#endif /* !CONFIG_SMP */ - -...
2007 Apr 18
2
refactoring io_apic.c
...efined(CONFIG_IRQBALANCE) # include <asm/processor.h> /* kernel_thread() */ # include <linux/kernel_stat.h> /* kstat */ @@ -661,24 +524,6 @@ late_initcall(balanced_irq_init); static inline void move_irq(int irq) { } #endif /* CONFIG_IRQBALANCE */ -#ifndef CONFIG_SMP -void fastcall send_IPI_self(int vector) -{ - unsigned int cfg; - - /* - * Wait for idle. - */ - apic_wait_icr_idle(); - cfg = APIC_DM_FIXED | APIC_DEST_SELF | vector | APIC_DEST_LOGICAL; - /* - * Send the IPI. The write to APIC_ICR fires this off. - */ - apic_write_around(APIC_ICR, cfg); -} -#endif /* !CONFIG_SMP */ - -...
2012 Nov 22
41
[PATCH V3] vmx/nmi: Do not use self_nmi() in VMEXIT handler
The self_nmi() code causes an NMI to be triggered by sending an APIC message to the local processor. However, NMIs are blocked by the VMEXIT until the next iret or VMENTER. Volume 3 Chapter 27 Section 1 of the Intel SDM states: An NMI causes subsequent NMIs to be blocked, but only after the VM exit completes. As a result, as soon as the VMENTER happens, an immediate VMEXIT happens...
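In other words, an NMI raised from inside the VMEXIT handler stays blocked until the next VMENTER completes, at which point it is delivered and immediately forces another VMEXIT. A minimal pseudocode sketch of the problematic pattern this patch removes (the helper names are hypothetical, and this is not the actual Xen VMX handler):

    void vmexit_handler(void)
    {
        if (exit_reason_is_nmi()) {  /* hypothetical helper */
            /*
             * NMIs are blocked by the VM exit itself, so this
             * self-IPI is only delivered after the next VMENTER...
             */
            self_nmi();
        }

        /* ...where it fires at once and causes another VMEXIT,
         * re-entering this handler in a loop. */
        vmenter();
    }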