search for: unmask_irq

Displaying 8 results from an estimated 8 matches for "unmask_irq".

2015 Dec 30
0
[PATCH 03/34] ia64: rename nop->iosapic_nop
...d int irq, unsigned int dest, int mask)
 }

 static void
-nop (struct irq_data *data)
+iosapic_nop (struct irq_data *data)
 {
 	/* do nothing... */
 }
@@ -415,7 +415,7 @@ iosapic_unmask_level_irq (struct irq_data *data)
 #define iosapic_shutdown_level_irq mask_irq
 #define iosapic_enable_level_irq unmask_irq
 #define iosapic_disable_level_irq mask_irq
-#define iosapic_ack_level_irq nop
+#define iosapic_ack_level_irq iosapic_nop

 static struct irq_chip irq_type_iosapic_level = {
 	.name = "IO-SAPIC-level",
@@ -453,7 +453,7 @@ iosapic_ack_edge_irq (struct irq_data *data)
 }

 #define ios...
2011 Feb 16
8
[PATCH] irq: Exclude percpu IRQs from being fixed up
irq: Exclude percpu IRQs from being fixed up

Xen spin unlock uses a spurious IPI "lock_kicker_irq" to wake up blocked vCPUs waiting on that lock. This IRQ should always be disabled. However, when Dom0 is shutting down, the function fixup_irqs is called, which unmasks all IRQs. The function unmask_irq effectively re-enables lock_kicker_irq, and its IRQ handler is invoked, which reports a bug and crashes Dom0. This patch sets the IRQ_PER_CPU flag in the irq desc and excludes percpu IRQs from being fixed up when taking CPUs down.

Signed-off-by: Fengzhe Zhang <fengzhe.zhang@intel.com>

diff --git a/arc...
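A minimal sketch of the approach described in that message, not the patch itself: the genirq helpers irq_set_status_flags() and irqd_is_per_cpu() exist in mainline, but the function names mark_irq_percpu() and irq_needs_fixup() and the call sites shown here are hypothetical.

	#include <linux/irq.h>
	#include <linux/irqdesc.h>

	/*
	 * Sketch only: mark an IRQ (e.g. Xen's lock_kicker_irq) as per-CPU so
	 * that a fixup_irqs()-style pass skips it instead of unmasking it when
	 * a CPU is taken down.
	 */
	static void mark_irq_percpu(unsigned int irq)
	{
		/* IRQ_PER_CPU tells genirq this IRQ is bound to one CPU and
		 * should not have its affinity "fixed up". */
		irq_set_status_flags(irq, IRQ_PER_CPU);
	}

	/* In a fixup loop, leave per-CPU IRQs alone rather than re-enabling them. */
	static bool irq_needs_fixup(struct irq_desc *desc)
	{
		return !irqd_is_per_cpu(&desc->irq_data);
	}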
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
This is really trying to clean up some virt code, as suggested by Peter, who said

> You could of course go fix that instead of mutilating things into
> sort-of functional state.

This work is needed for virtio, so it's probably easiest to merge it through my tree - is this fine by everyone? Arnd, if you agree, could you ack this please?

Note to arch maintainers: please don't
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
This is really trying to clean up some virt code, as suggested by Peter, who said

> You could of course go fix that instead of mutilating things into
> sort-of functional state.

This work is needed for virtio, so it's probably easiest to merge it through my tree - is this fine by everyone? Arnd, if you agree, could you ack this please?

Note to arch maintainers: please don't
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches teaching it to warn about incorrect usage of barriers (__smp_xxx barriers are for use by asm-generic code only), should help prevent misuse by arch code, to address comments by Russell King
- patched more instances of xen to use virt_ barriers, as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh
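As an aside, a minimal sketch of what the virt_ wrappers mentioned in that changelog are for: a guest sharing a ring with the hypervisor needs SMP-strength ordering even in a !CONFIG_SMP build, because the other side runs on another physical CPU. virt_wmb() is the mainline asm-generic/barrier.h name introduced by this series; the demo_ring structure and demo_publish() function below are made up for illustration.

	#include <asm/barrier.h>

	/* Hypothetical guest-visible ring shared with the host/hypervisor. */
	struct demo_ring {
		unsigned int head;		/* written by guest, read by host */
		unsigned int entries[64];
	};

	static void demo_publish(struct demo_ring *ring, unsigned int val)
	{
		unsigned int idx = ring->head;

		ring->entries[idx % 64] = val;
		/*
		 * Order the entry store before the index store. smp_wmb()
		 * would degrade to a compiler barrier on !CONFIG_SMP;
		 * virt_wmb() keeps the hardware barrier because the host may
		 * be running on another physical CPU.
		 */
		virt_wmb();
		ring->head = idx + 1;
	}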
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches teaching it to warn about incorrect usage of barriers (__smp_xxx barriers are for use by asm-generic code only), should help prevent misuse by arch code, to address comments by Russell King
- patched more instances of xen to use virt_ barriers, as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation, as suggested by David Miller
- dropped XXX in patch names as this makes vger choke, Cc all relevant mailing lists on all patches (not personal email, as the list becomes too long then)

I parked this in vhost tree for now, but the
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation, as suggested by David Miller
- dropped XXX in patch names as this makes vger choke, Cc all relevant mailing lists on all patches (not personal email, as the list becomes too long then)

I parked this in vhost tree for now, but the