This is an RFC of the Linux guest-side implementation of the FIFO-based event channel ABI described in this design document: http://xenbits.xen.org/people/dvrabel/event-channels-C.pdf Refer also to the corresponding Xen series.

Patch 1 fixes a regression introduced in 3.7 and is unrelated to this series.

Patch 2 is an obvious refactoring of common code.

Patches 3-7 prepare for supporting multiple ABIs.

Patch 8 adds the low-level evtchn_ops hooks.

Patches 9-10 add an additional hook for ABI-specific per-port setup (used for expanding the event array as more events are bound).

Patches 11-12 add the ABI and the implementation. The main known limitations are listed in patch 12.

David
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 01/12] xen/events: avoid race with raising an event in unmask_evtchn()
From: David Vrabel <david.vrabel@citrix.com>

In unmask_evtchn(), when the mask bit is cleared after testing for
pending and the event becomes pending between the test and clear, then
the upcall will not become pending and the event may be lost or
delayed.

Avoid this by always clearing the mask bit before checking for
pending.

This fixes a regression introduced in 3.7 by
b5e579232d635b79a3da052964cb357ccda8d9ea (xen/events: fix
unmask_evtchn for PV on HVM guests) which reordered the clear mask and
check pending operations.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: stable@vger.kernel.org
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/xen/events.c | 10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index d17aa41..4bdd0a5 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -403,11 +403,13 @@ static void unmask_evtchn(int port)
 
 	if (unlikely((cpu != cpu_from_evtchn(port))))
 		do_hypercall = 1;
-	else
+	else {
+		sync_clear_bit(port, BM(&s->evtchn_mask[0]));
 		evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0]));
 
-	if (unlikely(evtchn_pending && xen_hvm_domain()))
-		do_hypercall = 1;
+		if (unlikely(evtchn_pending && xen_hvm_domain()))
+			do_hypercall = 1;
+	}
 
 	/* Slow path (hypercall) if this is a non-local port or if this is
 	 * an hvm domain and an event is pending (hvm domains don't have
@@ -418,8 +420,6 @@ static void unmask_evtchn(int port)
 	} else {
 		struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
 
-		sync_clear_bit(port, BM(&s->evtchn_mask[0]));
-
 		/*
 		 * The following is basically the equivalent of
 		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
--
1.7.2.5
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 02/12] xen/events: refactor retrigger_dynirq() and resend_irq_on_evtchn()
From: David Vrabel <david.vrabel@citrix.com>

These two functions did the same thing with different parameters; put
the common bits in retrigger_evtchn().

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c | 27 +++++++++------------------
 1 files changed, 9 insertions(+), 18 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 4bdd0a5..c12e973 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1498,13 +1498,13 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 	return rebind_irq_to_cpu(data->irq, tcpu);
 }
 
-int resend_irq_on_evtchn(unsigned int irq)
+static int retrigger_evtchn(int evtchn)
 {
-	int masked, evtchn = evtchn_from_irq(irq);
+	int masked;
 	struct shared_info *s = HYPERVISOR_shared_info;
 
 	if (!VALID_EVTCHN(evtchn))
-		return 1;
+		return 0;
 
 	masked = sync_test_and_set_bit(evtchn, BM(s->evtchn_mask));
 	sync_set_bit(evtchn, BM(s->evtchn_pending));
@@ -1514,6 +1514,11 @@ int resend_irq_on_evtchn(unsigned int irq)
 	return 1;
 }
 
+int resend_irq_on_evtchn(unsigned int irq)
+{
+	return retrigger_evtchn(evtchn_from_irq(irq));
+}
+
 static void enable_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
@@ -1548,21 +1553,7 @@ static void mask_ack_dynirq(struct irq_data *data)
 
 static int retrigger_dynirq(struct irq_data *data)
 {
-	int evtchn = evtchn_from_irq(data->irq);
-	struct shared_info *sh = HYPERVISOR_shared_info;
-	int ret = 0;
-
-	if (VALID_EVTCHN(evtchn)) {
-		int masked;
-
-		masked = sync_test_and_set_bit(evtchn, BM(sh->evtchn_mask));
-		sync_set_bit(evtchn, BM(sh->evtchn_pending));
-		if (!masked)
-			unmask_evtchn(evtchn);
-		ret = 1;
-	}
-
-	return ret;
+	return retrigger_evtchn(evtchn_from_irq(data->irq));
}
 
 static void restore_pirqs(void)
--
1.7.2.5
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 03/12] xen/events: remove unnecessary init_evtchn_cpu_bindings()
From: David Vrabel <david.vrabel@citrix.com>

Event channels are always explicitly bound to a specific VCPU before
they are first enabled. There is no need to initialize all possible
events as bound to VCPU 0 at start of day or after a resume.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c | 22 ----------------------
 1 files changed, 0 insertions(+), 22 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index c12e973..0d5c210 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -333,24 +333,6 @@ static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
 	info_for_irq(irq)->cpu = cpu;
 }
 
-static void init_evtchn_cpu_bindings(void)
-{
-	int i;
-#ifdef CONFIG_SMP
-	struct irq_info *info;
-
-	/* By default all event channels notify CPU#0. */
-	list_for_each_entry(info, &xen_irq_list_head, list) {
-		struct irq_desc *desc = irq_to_desc(info->irq);
-		cpumask_copy(desc->irq_data.affinity, cpumask_of(0));
-	}
-#endif
-
-	for_each_possible_cpu(i)
-		memset(per_cpu(cpu_evtchn_mask, i),
-		       (i == 0) ? ~0 : 0, sizeof(*per_cpu(cpu_evtchn_mask, i)));
-}
-
 static inline void clear_evtchn(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -1713,8 +1695,6 @@ void xen_irq_resume(void)
 	unsigned int cpu, evtchn;
 	struct irq_info *info;
 
-	init_evtchn_cpu_bindings();
-
 	/* New event-channel space is not 'live' yet. */
 	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
 		mask_evtchn(evtchn);
@@ -1827,8 +1807,6 @@ void __init xen_init_IRQ(void)
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
 		evtchn_to_irq[i] = -1;
 
-	init_evtchn_cpu_bindings();
-
 	/* No event channels are 'live' right now. */
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
 		mask_evtchn(i);
--
1.7.2.5
From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c | 8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 0d5c210..88d91d7 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -351,6 +351,12 @@ static inline int test_evtchn(int port)
 	return sync_test_bit(port, BM(&s->evtchn_pending[0]));
 }
 
+static inline int test_and_set_mask(int port)
+{
+	struct shared_info *s = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
+}
+
 
 /**
  * notify_remote_via_irq - send event to remote end of event channel via irq
@@ -1488,7 +1494,7 @@ static int retrigger_evtchn(int evtchn)
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
-	masked = sync_test_and_set_bit(evtchn, BM(s->evtchn_mask));
+	masked = test_and_set_mask(evtchn);
 	sync_set_bit(evtchn, BM(s->evtchn_pending));
 	if (!masked)
 		unmask_evtchn(evtchn);
--
1.7.2.5
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 05/12] xen/events: replace raw bit ops with functions
From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c | 3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 88d91d7..db0e97c 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1489,13 +1489,12 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 static int retrigger_evtchn(int evtchn)
 {
 	int masked;
-	struct shared_info *s = HYPERVISOR_shared_info;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
 	masked = test_and_set_mask(evtchn);
-	sync_set_bit(evtchn, BM(s->evtchn_pending));
+	set_evtchn(evtchn);
 	if (!masked)
 		unmask_evtchn(evtchn);
--
1.7.2.5
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 06/12] xen/events: move drivers/xen/events.c into drivers/xen/events/
From: David Vrabel <david.vrabel@citrix.com>

events.c will be split into multiple files so move it into its own
directory.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/Makefile              | 3 ++-
 drivers/xen/events/Kbuild         | 2 ++
 drivers/xen/{ => events}/events.c | 0
 3 files changed, 4 insertions(+), 1 deletions(-)
 create mode 100644 drivers/xen/events/Kbuild
 rename drivers/xen/{ => events}/events.c (100%)

diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index eabd0ee..8176691 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -3,7 +3,8 @@ obj-y += manage.o
 obj-$(CONFIG_HOTPLUG_CPU) += cpu_hotplug.o
 endif
 obj-$(CONFIG_X86) += fallback.o
-obj-y += grant-table.o features.o events.o balloon.o
+obj-y += grant-table.o features.o balloon.o
+obj-y += events/
 obj-y += xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
diff --git a/drivers/xen/events/Kbuild b/drivers/xen/events/Kbuild
new file mode 100644
index 0000000..aea331e
--- /dev/null
+++ b/drivers/xen/events/Kbuild
@@ -0,0 +1,2 @@
+obj-y += events.o
+obj-y += n-level.o
diff --git a/drivers/xen/events.c b/drivers/xen/events/events.c
similarity index 100%
rename from drivers/xen/events.c
rename to drivers/xen/events/events.c
--
1.7.2.5
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 07/12] xen/events: move 2-level specific code into its own file
From: David Vrabel <david.vrabel@citrix.com> In preparation for alternative event channel ABIs, move all the functions accessing the shared data structures into their own file. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 346 +-------------------------------- drivers/xen/events/events_internal.h | 74 +++++++ drivers/xen/events/n-level.c | 315 +++++++++++++++++++++++++++++++ 3 files changed, 400 insertions(+), 335 deletions(-) create mode 100644 drivers/xen/events/events_internal.h create mode 100644 drivers/xen/events/n-level.c diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index db0e97c..e85c00a 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -56,6 +56,8 @@ #include <xen/interface/sched.h> #include <asm/hw_irq.h> +#include "events_internal.h" + /* * This lock protects updates to the following mapping and reference-count * arrays. The lock does not need to be acquired to read the mapping tables. @@ -70,74 +72,12 @@ static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1}; /* IRQ <-> IPI mapping */ static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1}; -/* Interrupt types. */ -enum xen_irq_type { - IRQT_UNBOUND = 0, - IRQT_PIRQ, - IRQT_VIRQ, - IRQT_IPI, - IRQT_EVTCHN -}; - -/* - * Packed IRQ information: - * type - enum xen_irq_type - * event channel - irq->event channel mapping - * cpu - cpu this event channel is bound to - * index - type-specific information: - * PIRQ - vector, with MSB being "needs EIO", or physical IRQ of the HVM - * guest, or GSI (real passthrough IRQ) of the device. 
- * VIRQ - virq number - * IPI - IPI vector - * EVTCHN - - */ -struct irq_info { - struct list_head list; - int refcnt; - enum xen_irq_type type; /* type */ - unsigned irq; - unsigned short evtchn; /* event channel */ - unsigned short cpu; /* cpu bound */ - - union { - unsigned short virq; - enum ipi_vector ipi; - struct { - unsigned short pirq; - unsigned short gsi; - unsigned char vector; - unsigned char flags; - uint16_t domid; - } pirq; - } u; -}; -#define PIRQ_NEEDS_EOI (1 << 0) -#define PIRQ_SHAREABLE (1 << 1) - -static int *evtchn_to_irq; +int *evtchn_to_irq; #ifdef CONFIG_X86 static unsigned long *pirq_eoi_map; #endif static bool (*pirq_needs_eoi)(unsigned irq); -/* - * Note sizeof(xen_ulong_t) can be more than sizeof(unsigned long). Be - * careful to only use bitops which allow for this (e.g - * test_bit/find_first_bit and friends but not __ffs) and to pass - * BITS_PER_EVTCHN_WORD as the bitmask length. - */ -#define BITS_PER_EVTCHN_WORD (sizeof(xen_ulong_t)*8) -/* - * Make a bitmask (i.e. unsigned long *) of a xen_ulong_t - * array. Primarily to avoid long lines (hence the terse name). - */ -#define BM(x) (unsigned long *)(x) -/* Find the first set bit in a evtchn mask */ -#define EVTCHN_FIRST_BIT(w) find_first_bit(BM(&(w)), BITS_PER_EVTCHN_WORD) - -static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], - cpu_evtchn_mask); - /* Xen will never allocate port zero for any purpose. 
*/ #define VALID_EVTCHN(chn) ((chn) != 0) @@ -148,7 +88,7 @@ static void enable_dynirq(struct irq_data *data); static void disable_dynirq(struct irq_data *data); /* Get info for IRQ */ -static struct irq_info *info_for_irq(unsigned irq) +struct irq_info *info_for_irq(unsigned irq) { return irq_get_handler_data(irq); } @@ -278,12 +218,12 @@ static enum xen_irq_type type_from_irq(unsigned irq) return info_for_irq(irq)->type; } -static unsigned cpu_from_irq(unsigned irq) +unsigned cpu_from_irq(unsigned irq) { return info_for_irq(irq)->cpu; } -static unsigned int cpu_from_evtchn(unsigned int evtchn) +unsigned int cpu_from_evtchn(unsigned int evtchn) { int irq = evtchn_to_irq[evtchn]; unsigned ret = 0; @@ -309,55 +249,21 @@ static bool pirq_needs_eoi_flag(unsigned irq) return info->u.pirq.flags & PIRQ_NEEDS_EOI; } -static inline xen_ulong_t active_evtchns(unsigned int cpu, - struct shared_info *sh, - unsigned int idx) -{ - return sh->evtchn_pending[idx] & - per_cpu(cpu_evtchn_mask, cpu)[idx] & - ~sh->evtchn_mask[idx]; -} - static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu) { int irq = evtchn_to_irq[chn]; + struct irq_info *info = info_for_irq(irq); BUG_ON(irq == -1); #ifdef CONFIG_SMP cpumask_copy(irq_to_desc(irq)->irq_data.affinity, cpumask_of(cpu)); #endif - clear_bit(chn, BM(per_cpu(cpu_evtchn_mask, cpu_from_irq(irq)))); - set_bit(chn, BM(per_cpu(cpu_evtchn_mask, cpu))); - - info_for_irq(irq)->cpu = cpu; -} - -static inline void clear_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - sync_clear_bit(port, BM(&s->evtchn_pending[0])); -} - -static inline void set_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - sync_set_bit(port, BM(&s->evtchn_pending[0])); -} - -static inline int test_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - return sync_test_bit(port, BM(&s->evtchn_pending[0])); -} + xen_evtchn_port_bind_to_cpu(info, cpu); -static inline int test_and_set_mask(int port) -{ - 
struct shared_info *s = HYPERVISOR_shared_info; - return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0])); + info->cpu = cpu; } - /** * notify_remote_via_irq - send event to remote end of event channel via irq * @irq: irq of event channel to send event to @@ -375,53 +281,6 @@ void notify_remote_via_irq(int irq) } EXPORT_SYMBOL_GPL(notify_remote_via_irq); -static void mask_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - sync_set_bit(port, BM(&s->evtchn_mask[0])); -} - -static void unmask_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - unsigned int cpu = get_cpu(); - int do_hypercall = 0, evtchn_pending = 0; - - BUG_ON(!irqs_disabled()); - - if (unlikely((cpu != cpu_from_evtchn(port)))) - do_hypercall = 1; - else { - sync_clear_bit(port, BM(&s->evtchn_mask[0])); - evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0])); - - if (unlikely(evtchn_pending && xen_hvm_domain())) - do_hypercall = 1; - } - - /* Slow path (hypercall) if this is a non-local port or if this is - * an hvm domain and an event is pending (hvm domains don''t have - * their own implementation of irq_enable). */ - if (do_hypercall) { - struct evtchn_unmask unmask = { .port = port }; - (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); - } else { - struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); - - /* - * The following is basically the equivalent of - * ''hw_resend_irq''. Just like a real IO-APIC we ''lose - * the interrupt edge'' if the channel is masked. 
- */ - if (evtchn_pending && - !sync_test_and_set_bit(port / BITS_PER_EVTCHN_WORD, - BM(&vcpu_info->evtchn_pending_sel))) - vcpu_info->evtchn_upcall_pending = 1; - } - - put_cpu(); -} - static void xen_irq_init(unsigned irq) { struct irq_info *info; @@ -1188,204 +1047,21 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector) notify_remote_via_irq(irq); } -irqreturn_t xen_debug_interrupt(int irq, void *dev_id) -{ - struct shared_info *sh = HYPERVISOR_shared_info; - int cpu = smp_processor_id(); - xen_ulong_t *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu); - int i; - unsigned long flags; - static DEFINE_SPINLOCK(debug_lock); - struct vcpu_info *v; - - spin_lock_irqsave(&debug_lock, flags); - - printk("\nvcpu %d\n ", cpu); - - for_each_online_cpu(i) { - int pending; - v = per_cpu(xen_vcpu, i); - pending = (get_irq_regs() && i == cpu) - ? xen_irqs_disabled(get_irq_regs()) - : v->evtchn_upcall_mask; - printk("%d: masked=%d pending=%d event_sel %0*"PRI_xen_ulong"\n ", i, - pending, v->evtchn_upcall_pending, - (int)(sizeof(v->evtchn_pending_sel)*2), - v->evtchn_pending_sel); - } - v = per_cpu(xen_vcpu, cpu); - - printk("\npending:\n "); - for (i = ARRAY_SIZE(sh->evtchn_pending)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", - (int)sizeof(sh->evtchn_pending[0])*2, - sh->evtchn_pending[i], - i % 8 == 0 ? "\n " : " "); - printk("\nglobal mask:\n "); - for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", - (int)(sizeof(sh->evtchn_mask[0])*2), - sh->evtchn_mask[i], - i % 8 == 0 ? "\n " : " "); - - printk("\nglobally unmasked:\n "); - for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", - (int)(sizeof(sh->evtchn_mask[0])*2), - sh->evtchn_pending[i] & ~sh->evtchn_mask[i], - i % 8 == 0 ? 
"\n " : " "); - - printk("\nlocal cpu%d mask:\n ", cpu); - for (i = (NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", (int)(sizeof(cpu_evtchn[0])*2), - cpu_evtchn[i], - i % 8 == 0 ? "\n " : " "); - - printk("\nlocally unmasked:\n "); - for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) { - xen_ulong_t pending = sh->evtchn_pending[i] - & ~sh->evtchn_mask[i] - & cpu_evtchn[i]; - printk("%0*"PRI_xen_ulong"%s", - (int)(sizeof(sh->evtchn_mask[0])*2), - pending, i % 8 == 0 ? "\n " : " "); - } - - printk("\npending list:\n"); - for (i = 0; i < NR_EVENT_CHANNELS; i++) { - if (sync_test_bit(i, BM(sh->evtchn_pending))) { - int word_idx = i / BITS_PER_EVTCHN_WORD; - printk(" %d: event %d -> irq %d%s%s%s\n", - cpu_from_evtchn(i), i, - evtchn_to_irq[i], - sync_test_bit(word_idx, BM(&v->evtchn_pending_sel)) - ? "" : " l2-clear", - !sync_test_bit(i, BM(sh->evtchn_mask)) - ? "" : " globally-masked", - sync_test_bit(i, BM(cpu_evtchn)) - ? "" : " locally-masked"); - } - } - - spin_unlock_irqrestore(&debug_lock, flags); - - return IRQ_HANDLED; -} - static DEFINE_PER_CPU(unsigned, xed_nesting_count); -static DEFINE_PER_CPU(unsigned int, current_word_idx); -static DEFINE_PER_CPU(unsigned int, current_bit_idx); -/* - * Mask out the i least significant bits of w - */ -#define MASK_LSBS(w, i) (w & ((~((xen_ulong_t)0UL)) << i)) - -/* - * Search the CPUs pending events bitmasks. For each one found, map - * the event number to an irq, and feed it into do_IRQ() for - * handling. - * - * Xen uses a two-level bitmap to speed searching. The first level is - * a bitset of words which contain pending event bits. The second - * level is a bitset of pending events themselves. 
- */ static void __xen_evtchn_do_upcall(void) { - int start_word_idx, start_bit_idx; - int word_idx, bit_idx; - int i; - int cpu = get_cpu(); - struct shared_info *s = HYPERVISOR_shared_info; struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); + int cpu = get_cpu(); unsigned count; do { - xen_ulong_t pending_words; - vcpu_info->evtchn_upcall_pending = 0; if (__this_cpu_inc_return(xed_nesting_count) - 1) goto out; - /* - * Master flag must be cleared /before/ clearing - * selector flag. xchg_xen_ulong must contain an - * appropriate barrier. - */ - pending_words = xchg_xen_ulong(&vcpu_info->evtchn_pending_sel, 0); - - start_word_idx = __this_cpu_read(current_word_idx); - start_bit_idx = __this_cpu_read(current_bit_idx); - - word_idx = start_word_idx; - - for (i = 0; pending_words != 0; i++) { - xen_ulong_t pending_bits; - xen_ulong_t words; - - words = MASK_LSBS(pending_words, word_idx); - - /* - * If we masked out all events, wrap to beginning. - */ - if (words == 0) { - word_idx = 0; - bit_idx = 0; - continue; - } - word_idx = EVTCHN_FIRST_BIT(words); - - pending_bits = active_evtchns(cpu, s, word_idx); - bit_idx = 0; /* usually scan entire word from start */ - if (word_idx == start_word_idx) { - /* We scan the starting word in two parts */ - if (i == 0) - /* 1st time: start in the middle */ - bit_idx = start_bit_idx; - else - /* 2nd time: mask bits done already */ - bit_idx &= (1UL << start_bit_idx) - 1; - } - - do { - xen_ulong_t bits; - int port, irq; - struct irq_desc *desc; - - bits = MASK_LSBS(pending_bits, bit_idx); - - /* If we masked out all events, move on. */ - if (bits == 0) - break; - - bit_idx = EVTCHN_FIRST_BIT(bits); - - /* Process port. 
*/ - port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx; - irq = evtchn_to_irq[port]; - - if (irq != -1) { - desc = irq_to_desc(irq); - if (desc) - generic_handle_irq_desc(irq, desc); - } - - bit_idx = (bit_idx + 1) % BITS_PER_EVTCHN_WORD; - - /* Next caller starts at last processed + 1 */ - __this_cpu_write(current_word_idx, - bit_idx ? word_idx : - (word_idx+1) % BITS_PER_EVTCHN_WORD); - __this_cpu_write(current_bit_idx, bit_idx); - } while (bit_idx != 0); - - /* Scan start_l1i twice; all others once. */ - if ((word_idx != start_word_idx) || (i != 0)) - pending_words &= ~(1UL << word_idx); - - word_idx = (word_idx + 1) % BITS_PER_EVTCHN_WORD; - } + xen_evtchn_handle_events(cpu); BUG_ON(!irqs_disabled()); diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h new file mode 100644 index 0000000..79ac70b --- /dev/null +++ b/drivers/xen/events/events_internal.h @@ -0,0 +1,74 @@ +/* + * Xen Event Channels (internal header) + * + * Copyright (C) 2013 Citrix Systems R&D Ltd. + * + * This source code is licensed under the GNU General Public License, + * Version 2 or later. See the file COPYING for more details. + */ +#ifndef __EVENTS_INTERNAL_H__ +#define __EVENTS_INTERNAL_H__ + +/* Interrupt types. */ +enum xen_irq_type { + IRQT_UNBOUND = 0, + IRQT_PIRQ, + IRQT_VIRQ, + IRQT_IPI, + IRQT_EVTCHN +}; + +/* + * Packed IRQ information: + * type - enum xen_irq_type + * event channel - irq->event channel mapping + * cpu - cpu this event channel is bound to + * index - type-specific information: + * PIRQ - vector, with MSB being "needs EIO", or physical IRQ of the HVM + * guest, or GSI (real passthrough IRQ) of the device. 
+ * VIRQ - virq number + * IPI - IPI vector + * EVTCHN - + */ +struct irq_info { + struct list_head list; + int refcnt; + enum xen_irq_type type; /* type */ + unsigned irq; + unsigned short evtchn; /* event channel */ + unsigned short cpu; /* cpu bound */ + + union { + unsigned short virq; + enum ipi_vector ipi; + struct { + unsigned short pirq; + unsigned short gsi; + unsigned char vector; + unsigned char flags; + uint16_t domid; + } pirq; + } u; +}; + +#define PIRQ_NEEDS_EOI (1 << 0) +#define PIRQ_SHAREABLE (1 << 1) + +extern int *evtchn_to_irq; + +struct irq_info *info_for_irq(unsigned irq); +unsigned cpu_from_irq(unsigned irq); +unsigned cpu_from_evtchn(unsigned int evtchn); + +void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu); + +void clear_evtchn(int port); +void set_evtchn(int port); +int test_evtchn(int port); +int test_and_set_mask(int port); +void mask_evtchn(int port); +void unmask_evtchn(int port); + +void xen_evtchn_handle_events(int cpu); + +#endif /* #ifndef __EVENTS_INTERNAL_H__ */ diff --git a/drivers/xen/events/n-level.c b/drivers/xen/events/n-level.c new file mode 100644 index 0000000..05762d5 --- /dev/null +++ b/drivers/xen/events/n-level.c @@ -0,0 +1,315 @@ +/* + * Xen event channels (N-level ABI) + * + * Jeremy Fitzhardinge <jeremy@xensource.com>, XenSource Inc, 2007 + */ + +#include <linux/linkage.h> +#include <linux/interrupt.h> +#include <linux/irq.h> +#include <linux/module.h> + +#include <asm/sync_bitops.h> +#include <asm/xen/hypercall.h> +#include <asm/xen/hypervisor.h> + +#include <xen/xen.h> +#include <xen/xen-ops.h> +#include <xen/events.h> +#include <xen/interface/xen.h> +#include <xen/interface/event_channel.h> + +#include "events_internal.h" + +/* + * Note sizeof(xen_ulong_t) can be more than sizeof(unsigned long). Be + * careful to only use bitops which allow for this (e.g + * test_bit/find_first_bit and friends but not __ffs) and to pass + * BITS_PER_EVTCHN_WORD as the bitmask length. 
+ */ +#define BITS_PER_EVTCHN_WORD (sizeof(xen_ulong_t)*8) +/* + * Make a bitmask (i.e. unsigned long *) of a xen_ulong_t + * array. Primarily to avoid long lines (hence the terse name). + */ +#define BM(x) (unsigned long *)(x) +/* Find the first set bit in a evtchn mask */ +#define EVTCHN_FIRST_BIT(w) find_first_bit(BM(&(w)), BITS_PER_EVTCHN_WORD) + +static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], + cpu_evtchn_mask); + +void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) +{ + clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu))); + set_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, cpu))); +} + +void clear_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + sync_clear_bit(port, BM(&s->evtchn_pending[0])); +} + +void set_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + sync_set_bit(port, BM(&s->evtchn_pending[0])); +} + +int test_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + return sync_test_bit(port, BM(&s->evtchn_pending[0])); +} + +int test_and_set_mask(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0])); +} + +void mask_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + sync_set_bit(port, BM(&s->evtchn_mask[0])); +} + +void unmask_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + unsigned int cpu = get_cpu(); + int do_hypercall = 0, evtchn_pending = 0; + + BUG_ON(!irqs_disabled()); + + if (unlikely((cpu != cpu_from_evtchn(port)))) + do_hypercall = 1; + else { + sync_clear_bit(port, BM(&s->evtchn_mask[0])); + evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0])); + + if (unlikely(evtchn_pending && xen_hvm_domain())) + do_hypercall = 1; + } + + /* Slow path (hypercall) if this is a non-local port or if this is + * an hvm domain and an event is pending (hvm domains don''t have + * their own implementation of irq_enable). 
*/ + if (do_hypercall) { + struct evtchn_unmask unmask = { .port = port }; + (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); + } else { + struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); + + /* + * The following is basically the equivalent of + * ''hw_resend_irq''. Just like a real IO-APIC we ''lose + * the interrupt edge'' if the channel is masked. + */ + if (evtchn_pending && + !sync_test_and_set_bit(port / BITS_PER_EVTCHN_WORD, + BM(&vcpu_info->evtchn_pending_sel))) + vcpu_info->evtchn_upcall_pending = 1; + } + + put_cpu(); +} + +static DEFINE_PER_CPU(unsigned int, current_word_idx); +static DEFINE_PER_CPU(unsigned int, current_bit_idx); + +/* + * Mask out the i least significant bits of w + */ +#define MASK_LSBS(w, i) (w & ((~((xen_ulong_t)0UL)) << i)) + +static inline xen_ulong_t active_evtchns(unsigned int cpu, + struct shared_info *sh, + unsigned int idx) +{ + return sh->evtchn_pending[idx] & + per_cpu(cpu_evtchn_mask, cpu)[idx] & + ~sh->evtchn_mask[idx]; +} + +/* + * Search the CPU''s pending events bitmasks. For each one found, map + * the event number to an irq, and feed it into do_IRQ() for handling. + * + * Xen uses a two-level bitmap to speed searching. The first level is + * a bitset of words which contain pending event bits. The second + * level is a bitset of pending events themselves. + */ +void xen_evtchn_handle_events(int cpu) +{ + xen_ulong_t pending_words; + int start_word_idx, start_bit_idx; + int word_idx, bit_idx; + int i; + struct shared_info *s = HYPERVISOR_shared_info; + struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); + + /* + * Master flag must be cleared /before/ clearing + * selector flag. xchg_xen_ulong must contain an + * appropriate barrier. 
+ */ + pending_words = xchg_xen_ulong(&vcpu_info->evtchn_pending_sel, 0); + + start_word_idx = __this_cpu_read(current_word_idx); + start_bit_idx = __this_cpu_read(current_bit_idx); + + word_idx = start_word_idx; + + for (i = 0; pending_words != 0; i++) { + xen_ulong_t pending_bits; + xen_ulong_t words; + + words = MASK_LSBS(pending_words, word_idx); + + /* + * If we masked out all events, wrap to beginning. + */ + if (words == 0) { + word_idx = 0; + bit_idx = 0; + continue; + } + word_idx = EVTCHN_FIRST_BIT(words); + + pending_bits = active_evtchns(cpu, s, word_idx); + bit_idx = 0; /* usually scan entire word from start */ + if (word_idx == start_word_idx) { + /* We scan the starting word in two parts */ + if (i == 0) + /* 1st time: start in the middle */ + bit_idx = start_bit_idx; + else + /* 2nd time: mask bits done already */ + bit_idx &= (1UL << start_bit_idx) - 1; + } + + do { + xen_ulong_t bits; + int port, irq; + struct irq_desc *desc; + + bits = MASK_LSBS(pending_bits, bit_idx); + + /* If we masked out all events, move on. */ + if (bits == 0) + break; + + bit_idx = EVTCHN_FIRST_BIT(bits); + + /* Process port. */ + port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx; + irq = evtchn_to_irq[port]; + + if (irq != -1) { + desc = irq_to_desc(irq); + if (desc) + generic_handle_irq_desc(irq, desc); + } + + bit_idx = (bit_idx + 1) % BITS_PER_EVTCHN_WORD; + + /* Next caller starts at last processed + 1 */ + __this_cpu_write(current_word_idx, + bit_idx ? word_idx : + (word_idx+1) % BITS_PER_EVTCHN_WORD); + __this_cpu_write(current_bit_idx, bit_idx); + } while (bit_idx != 0); + + /* Scan start_l1i twice; all others once. 
*/ + if ((word_idx != start_word_idx) || (i != 0)) + pending_words &= ~(1UL << word_idx); + + word_idx = (word_idx + 1) % BITS_PER_EVTCHN_WORD; + } +} + +irqreturn_t xen_debug_interrupt(int irq, void *dev_id) +{ + struct shared_info *sh = HYPERVISOR_shared_info; + int cpu = smp_processor_id(); + xen_ulong_t *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu); + int i; + unsigned long flags; + static DEFINE_SPINLOCK(debug_lock); + struct vcpu_info *v; + + spin_lock_irqsave(&debug_lock, flags); + + printk("\nvcpu %d\n ", cpu); + + for_each_online_cpu(i) { + int pending; + v = per_cpu(xen_vcpu, i); + pending = (get_irq_regs() && i == cpu) + ? xen_irqs_disabled(get_irq_regs()) + : v->evtchn_upcall_mask; + printk("%d: masked=%d pending=%d event_sel %0*"PRI_xen_ulong"\n ", i, + pending, v->evtchn_upcall_pending, + (int)(sizeof(v->evtchn_pending_sel)*2), + v->evtchn_pending_sel); + } + v = per_cpu(xen_vcpu, cpu); + + printk("\npending:\n "); + for (i = ARRAY_SIZE(sh->evtchn_pending)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", + (int)sizeof(sh->evtchn_pending[0])*2, + sh->evtchn_pending[i], + i % 8 == 0 ? "\n " : " "); + printk("\nglobal mask:\n "); + for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", + (int)(sizeof(sh->evtchn_mask[0])*2), + sh->evtchn_mask[i], + i % 8 == 0 ? "\n " : " "); + + printk("\nglobally unmasked:\n "); + for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", + (int)(sizeof(sh->evtchn_mask[0])*2), + sh->evtchn_pending[i] & ~sh->evtchn_mask[i], + i % 8 == 0 ? "\n " : " "); + + printk("\nlocal cpu%d mask:\n ", cpu); + for (i = (NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", (int)(sizeof(cpu_evtchn[0])*2), + cpu_evtchn[i], + i % 8 == 0 ? 
"\n " : " "); + + printk("\nlocally unmasked:\n "); + for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) { + xen_ulong_t pending = sh->evtchn_pending[i] + & ~sh->evtchn_mask[i] + & cpu_evtchn[i]; + printk("%0*"PRI_xen_ulong"%s", + (int)(sizeof(sh->evtchn_mask[0])*2), + pending, i % 8 == 0 ? "\n " : " "); + } + + printk("\npending list:\n"); + for (i = 0; i < NR_EVENT_CHANNELS; i++) { + if (sync_test_bit(i, BM(sh->evtchn_pending))) { + int word_idx = i / BITS_PER_EVTCHN_WORD; + printk(" %d: event %d -> irq %d%s%s%s\n", + cpu_from_evtchn(i), i, + evtchn_to_irq[i], + sync_test_bit(word_idx, BM(&v->evtchn_pending_sel)) + ? "" : " l2-clear", + !sync_test_bit(i, BM(sh->evtchn_mask)) + ? "" : " globally-masked", + sync_test_bit(i, BM(cpu_evtchn)) + ? "" : " locally-masked"); + } + } + + spin_unlock_irqrestore(&debug_lock, flags); + + return IRQ_HANDLED; +} -- 1.7.2.5
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 08/12] xen/events: add struct evtchn_ops for the low-level port operations
From: David Vrabel <david.vrabel@citrix.com> evtchn_ops contains the low-level operations that access the shared data structures. This allows alternate ABIs to be supported. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 4 ++ drivers/xen/events/events_internal.h | 61 +++++++++++++++++++++++++++++---- drivers/xen/events/n-level.c | 27 ++++++++++---- 3 files changed, 76 insertions(+), 16 deletions(-) diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index e85c00a..1017d9f 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -58,6 +58,8 @@ #include "events_internal.h" +struct evtchn_ops evtchn_ops; + /* * This lock protects updates to the following mapping and reference-count * arrays. The lock does not need to be acquired to read the mapping tables. @@ -1482,6 +1484,8 @@ void __init xen_init_IRQ(void) { int i; + evtchn_ops = evtchn_ops_nlevel; + evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq), GFP_KERNEL); BUG_ON(!evtchn_to_irq); diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h index 79ac70b..6badb05 100644 --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -54,21 +54,66 @@ struct irq_info { #define PIRQ_NEEDS_EOI (1 << 0) #define PIRQ_SHAREABLE (1 << 1) +struct evtchn_ops { + void (*bind_to_cpu)(struct irq_info *info, int cpu); + + void (*clear_pending)(int port); + void (*set_pending)(int port); + bool (*is_pending)(int port); + bool (*test_and_set_mask)(int port); + void (*mask)(int port); + void (*unmask)(int port); + + void (*handle_events)(int cpu); +}; + +extern struct evtchn_ops evtchn_ops; +extern struct evtchn_ops evtchn_ops_nlevel; + extern int *evtchn_to_irq; struct irq_info *info_for_irq(unsigned irq); unsigned cpu_from_irq(unsigned irq); unsigned cpu_from_evtchn(unsigned int evtchn); -void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu); +static inline void 
xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) +{ + evtchn_ops.bind_to_cpu(info, cpu); +} + +static inline void clear_evtchn(int port) +{ + evtchn_ops.clear_pending(port); +} + +static inline void set_evtchn(int port) +{ + evtchn_ops.set_pending(port); +} + +static inline bool test_evtchn(int port) +{ + return evtchn_ops.is_pending(port); +} + +static inline bool test_and_set_mask(int port) +{ + return evtchn_ops.test_and_set_mask(port); +} + +static inline void mask_evtchn(int port) +{ + return evtchn_ops.mask(port); +} -void clear_evtchn(int port); -void set_evtchn(int port); -int test_evtchn(int port); -int test_and_set_mask(int port); -void mask_evtchn(int port); -void unmask_evtchn(int port); +static inline void unmask_evtchn(int port) +{ + return evtchn_ops.unmask(port); +} -void xen_evtchn_handle_events(int cpu); +static inline void xen_evtchn_handle_events(int cpu) +{ + return evtchn_ops.handle_events(cpu); +} #endif /* #ifndef __EVENTS_INTERNAL_H__ */ diff --git a/drivers/xen/events/n-level.c b/drivers/xen/events/n-level.c index 05762d5..74f8e94 100644 --- a/drivers/xen/events/n-level.c +++ b/drivers/xen/events/n-level.c @@ -39,43 +39,43 @@ static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], cpu_evtchn_mask); -void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) +static void nlevel_bind_to_cpu(struct irq_info *info, int cpu) { clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu))); set_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, cpu))); } -void clear_evtchn(int port) +static void nlevel_clear_pending(int port) { struct shared_info *s = HYPERVISOR_shared_info; sync_clear_bit(port, BM(&s->evtchn_pending[0])); } -void set_evtchn(int port) +static void nlevel_set_pending(int port) { struct shared_info *s = HYPERVISOR_shared_info; sync_set_bit(port, BM(&s->evtchn_pending[0])); } -int test_evtchn(int port) +static bool nlevel_is_pending(int port) { struct shared_info *s = HYPERVISOR_shared_info; return 
sync_test_bit(port, BM(&s->evtchn_pending[0])); } -int test_and_set_mask(int port) +static bool nlevel_test_and_set_mask(int port) { struct shared_info *s = HYPERVISOR_shared_info; return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0])); } -void mask_evtchn(int port) +static void nlevel_mask(int port) { struct shared_info *s = HYPERVISOR_shared_info; sync_set_bit(port, BM(&s->evtchn_mask[0])); } -void unmask_evtchn(int port) +static void nlevel_unmask(int port) { struct shared_info *s = HYPERVISOR_shared_info; unsigned int cpu = get_cpu(); @@ -141,7 +141,7 @@ static inline xen_ulong_t active_evtchns(unsigned int cpu, * a bitset of words which contain pending event bits. The second * level is a bitset of pending events themselves. */ -void xen_evtchn_handle_events(int cpu) +static void nlevel_handle_events(int cpu) { xen_ulong_t pending_words; int start_word_idx, start_bit_idx; @@ -313,3 +313,14 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id) return IRQ_HANDLED; } + +struct evtchn_ops evtchn_ops_nlevel = { + .bind_to_cpu = nlevel_bind_to_cpu, + .clear_pending = nlevel_clear_pending, + .set_pending = nlevel_set_pending, + .is_pending = nlevel_is_pending, + .test_and_set_mask = nlevel_test_and_set_mask, + .mask = nlevel_mask, + .unmask = nlevel_unmask, + .handle_events = nlevel_handle_events, +}; -- 1.7.2.5
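The core idea of this patch — a struct of function pointers selected once at init, hidden behind `static inline` wrappers so callers never see the indirection — can be sketched in a few lines of standalone C. The names here are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>

/* A cut-down analogue of struct evtchn_ops: per-ABI low-level port
 * operations behind one dispatch table. */
struct port_ops {
    void (*mask)(int port);
    bool (*is_pending)(int port);
};

static unsigned long demo_mask_bits;    /* stand-in for shared state */

static void demo_mask(int port)
{
    demo_mask_bits |= 1UL << port;
}

static bool demo_is_pending(int port)
{
    (void)port;                         /* this toy ABI has no pending state */
    return false;
}

static const struct port_ops demo_ops = {
    .mask = demo_mask,
    .is_pending = demo_is_pending,
};

/* The active ABI, chosen once at initialization (cf. xen_init_IRQ()). */
static const struct port_ops *ops = &demo_ops;

/* Wrappers analogous to the new static inlines in events_internal.h. */
static void mask_port(int port)        { ops->mask(port); }
static bool port_pending(int port)     { return ops->is_pending(port); }
```

Swapping `ops` to point at a second `struct port_ops` is all that a different ABI implementation needs, which is exactly how the later FIFO patch slots in without touching the callers.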
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 09/12] xen/events: allow setup of irq_info to fail
From: David Vrabel <david.vrabel@citrix.com> The FIFO-based event ABI requires additional setup of newly bound events (it may need to expand the event array) and this setup may fail. xen_irq_info_common_init() is a useful place to put this setup so allow this call to fail. This call and the other similar calls are renamed to be *_setup() to reflect that they may now fail. This failure can only occur with new event channels not on rebind. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 155 +++++++++++++++++++++++++----------------- 1 files changed, 92 insertions(+), 63 deletions(-) diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index 1017d9f..50f8ba6 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -96,7 +96,7 @@ struct irq_info *info_for_irq(unsigned irq) } /* Constructors for packed IRQ information. */ -static void xen_irq_info_common_init(struct irq_info *info, +static int xen_irq_info_common_setup(struct irq_info *info, unsigned irq, enum xen_irq_type type, unsigned short evtchn, @@ -111,45 +111,47 @@ static void xen_irq_info_common_init(struct irq_info *info, info->cpu = cpu; evtchn_to_irq[evtchn] = irq; + + return 0; } -static void xen_irq_info_evtchn_init(unsigned irq, +static int xen_irq_info_evtchn_setup(unsigned irq, unsigned short evtchn) { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_EVTCHN, evtchn, 0); + return xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0); } -static void xen_irq_info_ipi_init(unsigned cpu, +static int xen_irq_info_ipi_setup(unsigned cpu, unsigned irq, unsigned short evtchn, enum ipi_vector ipi) { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_IPI, evtchn, 0); - info->u.ipi = ipi; per_cpu(ipi_to_irq, cpu)[ipi] = irq; + + return xen_irq_info_common_setup(info, irq, IRQT_IPI, evtchn, 0); } -static void xen_irq_info_virq_init(unsigned cpu, +static int 
xen_irq_info_virq_setup(unsigned cpu, unsigned irq, unsigned short evtchn, unsigned short virq) { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_VIRQ, evtchn, 0); - info->u.virq = virq; per_cpu(virq_to_irq, cpu)[virq] = irq; + + return xen_irq_info_common_setup(info, irq, IRQT_VIRQ, evtchn, 0); } -static void xen_irq_info_pirq_init(unsigned irq, +static int xen_irq_info_pirq_setup(unsigned irq, unsigned short evtchn, unsigned short pirq, unsigned short gsi, @@ -159,13 +161,13 @@ static void xen_irq_info_pirq_init(unsigned irq, { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_PIRQ, evtchn, 0); - info->u.pirq.pirq = pirq; info->u.pirq.gsi = gsi; info->u.pirq.vector = vector; info->u.pirq.domid = domid; info->u.pirq.flags = flags; + + return xen_irq_info_common_setup(info, irq, IRQT_PIRQ, evtchn, 0); } /* @@ -511,6 +513,47 @@ int xen_irq_from_gsi(unsigned gsi) } EXPORT_SYMBOL_GPL(xen_irq_from_gsi); +static void __unbind_from_irq(unsigned int irq) +{ + struct evtchn_close close; + int evtchn = evtchn_from_irq(irq); + struct irq_info *info = irq_get_handler_data(irq); + + if (info->refcnt > 0) { + info->refcnt--; + if (info->refcnt != 0) + return; + } + + if (VALID_EVTCHN(evtchn)) { + close.port = evtchn; + if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0) + BUG(); + + switch (type_from_irq(irq)) { + case IRQT_VIRQ: + per_cpu(virq_to_irq, cpu_from_evtchn(evtchn)) + [virq_from_irq(irq)] = -1; + break; + case IRQT_IPI: + per_cpu(ipi_to_irq, cpu_from_evtchn(evtchn)) + [ipi_from_irq(irq)] = -1; + break; + default: + break; + } + + /* Closed ports are implicitly re-bound to VCPU0. */ + bind_evtchn_to_cpu(evtchn, 0); + + evtchn_to_irq[evtchn] = -1; + } + + BUG_ON(info_for_irq(irq)->type == IRQT_UNBOUND); + + xen_free_irq(irq); +} + /* * Do not make any assumptions regarding the relationship between the * IRQ number returned here and the Xen pirq argument. 
@@ -526,6 +569,7 @@ int xen_bind_pirq_gsi_to_irq(unsigned gsi, { int irq = -1; struct physdev_irq irq_op; + int ret; mutex_lock(&irq_mapping_update_lock); @@ -553,8 +597,13 @@ int xen_bind_pirq_gsi_to_irq(unsigned gsi, goto out; } - xen_irq_info_pirq_init(irq, 0, pirq, gsi, irq_op.vector, DOMID_SELF, - shareable ? PIRQ_SHAREABLE : 0); + ret = xen_irq_info_pirq_setup(irq, 0, pirq, gsi, irq_op.vector, DOMID_SELF, + shareable ? PIRQ_SHAREABLE : 0); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } pirq_query_unmask(irq); /* We try to use the handler with the appropriate semantic for the @@ -615,7 +664,9 @@ int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc *msidesc, irq_set_chip_and_handler_name(irq, &xen_pirq_chip, handle_edge_irq, name); - xen_irq_info_pirq_init(irq, 0, pirq, 0, vector, domid, 0); + ret = xen_irq_info_pirq_setup(irq, 0, pirq, 0, vector, domid, 0); + if (ret < 0) + goto error_irq; ret = irq_set_msi_desc(irq, msidesc); if (ret < 0) goto error_irq; @@ -623,8 +674,8 @@ out: mutex_unlock(&irq_mapping_update_lock); return irq; error_irq: + __unbind_from_irq(irq); mutex_unlock(&irq_mapping_update_lock); - xen_free_irq(irq); return ret; } #endif @@ -694,9 +745,11 @@ int xen_pirq_from_irq(unsigned irq) return pirq_from_irq(irq); } EXPORT_SYMBOL_GPL(xen_pirq_from_irq); + int bind_evtchn_to_irq(unsigned int evtchn) { int irq; + int ret; mutex_lock(&irq_mapping_update_lock); @@ -710,7 +763,12 @@ int bind_evtchn_to_irq(unsigned int evtchn) irq_set_chip_and_handler_name(irq, &xen_dynamic_chip, handle_edge_irq, "event"); - xen_irq_info_evtchn_init(irq, evtchn); + ret = xen_irq_info_evtchn_setup(irq, evtchn); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } } else { struct irq_info *info = info_for_irq(irq); WARN_ON(info == NULL || info->type != IRQT_EVTCHN); @@ -728,6 +786,7 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu) { struct evtchn_bind_ipi bind_ipi; int evtchn, irq; + int ret; 
mutex_lock(&irq_mapping_update_lock); @@ -747,8 +806,12 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu) BUG(); evtchn = bind_ipi.port; - xen_irq_info_ipi_init(cpu, irq, evtchn, ipi); - + ret = xen_irq_info_ipi_setup(cpu, irq, evtchn, ipi); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } bind_evtchn_to_cpu(evtchn, cpu); } else { struct irq_info *info = info_for_irq(irq); @@ -827,7 +890,12 @@ int bind_virq_to_irq(unsigned int virq, unsigned int cpu) evtchn = ret; } - xen_irq_info_virq_init(cpu, irq, evtchn, virq); + ret = xen_irq_info_virq_setup(cpu, irq, evtchn, virq); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } bind_evtchn_to_cpu(evtchn, cpu); } else { @@ -843,47 +911,8 @@ out: static void unbind_from_irq(unsigned int irq) { - struct evtchn_close close; - int evtchn = evtchn_from_irq(irq); - struct irq_info *info = irq_get_handler_data(irq); - mutex_lock(&irq_mapping_update_lock); - - if (info->refcnt > 0) { - info->refcnt--; - if (info->refcnt != 0) - goto done; - } - - if (VALID_EVTCHN(evtchn)) { - close.port = evtchn; - if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0) - BUG(); - - switch (type_from_irq(irq)) { - case IRQT_VIRQ: - per_cpu(virq_to_irq, cpu_from_evtchn(evtchn)) - [virq_from_irq(irq)] = -1; - break; - case IRQT_IPI: - per_cpu(ipi_to_irq, cpu_from_evtchn(evtchn)) - [ipi_from_irq(irq)] = -1; - break; - default: - break; - } - - /* Closed ports are implicitly re-bound to VCPU0. 
*/ - bind_evtchn_to_cpu(evtchn, 0); - - evtchn_to_irq[evtchn] = -1; - } - - BUG_ON(info_for_irq(irq)->type == IRQT_UNBOUND); - - xen_free_irq(irq); - - done: + __unbind_from_irq(irq); mutex_unlock(&irq_mapping_update_lock); } @@ -1114,7 +1143,7 @@ void rebind_evtchn_irq(int evtchn, int irq) so there should be a proper type */ BUG_ON(info->type == IRQT_UNBOUND); - xen_irq_info_evtchn_init(irq, evtchn); + xen_irq_info_evtchn_setup(irq, evtchn); mutex_unlock(&irq_mapping_update_lock); @@ -1279,7 +1308,7 @@ static void restore_cpu_virqs(unsigned int cpu) evtchn = bind_virq.port; /* Record the new mapping. */ - xen_irq_info_virq_init(cpu, irq, evtchn, virq); + xen_irq_info_virq_setup(cpu, irq, evtchn, virq); bind_evtchn_to_cpu(evtchn, cpu); } } @@ -1303,7 +1332,7 @@ static void restore_cpu_ipis(unsigned int cpu) evtchn = bind_ipi.port; /* Record the new mapping. */ - xen_irq_info_ipi_init(cpu, irq, evtchn, ipi); + xen_irq_info_ipi_setup(cpu, irq, evtchn, ipi); bind_evtchn_to_cpu(evtchn, cpu); } } -- 1.7.2.5
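The `*_init()` to `*_setup()` rename above changes the contract: setup may now fail, and each bind path must unwind the partially constructed binding and return the error in place of the irq. A minimal sketch of that propagate-and-unwind shape, with entirely hypothetical names:

```c
#include <assert.h>

static int freed;                     /* counts unwind calls */

/* Pretend setup step that may fail, e.g. expanding an event array.
 * Ports >= 4 "need" expansion and fail with -ENOMEM (-12). */
static int port_setup(int port)
{
    return (port < 4) ? 0 : -12;
}

/* Stand-in for __unbind_from_irq(): release the partial binding. */
static void unbind(int irq)
{
    (void)irq;
    freed++;
}

static int bind_port(int port)
{
    int irq = 100 + port;             /* pretend irq allocation */
    int ret = port_setup(port);

    if (ret < 0) {
        unbind(irq);                  /* unwind on failure... */
        return ret;                   /* ...and return the error code */
    }
    return irq;
}
```

This mirrors the patch's pattern of `ret = xen_irq_info_*_setup(...); if (ret < 0) { __unbind_from_irq(irq); irq = ret; goto out; }`, with `__unbind_from_irq()` factored out precisely so the error paths can share it with `unbind_from_irq()`.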
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 10/12] xen/events: add an evtchn_op for port setup

From: David Vrabel <david.vrabel@citrix.com> Add a hook for port-specific setup and call it from xen_irq_info_common_setup(). Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 2 +- drivers/xen/events/events_internal.h | 8 ++++++++ 2 files changed, 9 insertions(+), 1 deletions(-) diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index 50f8ba6..e6895b9 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -112,7 +112,7 @@ static int xen_irq_info_common_setup(struct irq_info *info, evtchn_to_irq[evtchn] = irq; - return 0; + return xen_evtchn_port_setup(info); } static int xen_irq_info_evtchn_setup(unsigned irq, diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h index 6badb05..1c71a5d 100644 --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -55,6 +55,7 @@ struct irq_info { #define PIRQ_SHAREABLE (1 << 1) struct evtchn_ops { + int (*setup)(struct irq_info *info); void (*bind_to_cpu)(struct irq_info *info, int cpu); void (*clear_pending)(int port); @@ -76,6 +77,13 @@ struct irq_info *info_for_irq(unsigned irq); unsigned cpu_from_irq(unsigned irq); unsigned cpu_from_evtchn(unsigned int evtchn); +static inline int xen_evtchn_port_setup(struct irq_info *info) +{ + if (evtchn_ops.setup) + return evtchn_ops.setup(info); + return 0; +} + static inline void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) { evtchn_ops.bind_to_cpu(info, cpu); -- 1.7.2.5
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 11/12] xen/events: Add the hypervisor interface for the FIFO-based event channels
From: David Vrabel <david.vrabel@citrix.com> Add the hypercall sub-ops and the structures for the shared data used in the FIFO-based event channel ABI. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- include/xen/interface/event_channel.h | 70 +++++++++++++++++++++++++++++++++ 1 files changed, 70 insertions(+), 0 deletions(-) diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h index f494292..10472f5 100644 --- a/include/xen/interface/event_channel.h +++ b/include/xen/interface/event_channel.h @@ -190,6 +190,50 @@ struct evtchn_reset { }; typedef struct evtchn_reset evtchn_reset_t; +/* + * EVTCHNOP_init_control: initialize the control block for the FIFO ABI. + */ +#define EVTCHNOP_init_control 11 +struct evtchn_init_control { + /* IN parameters. */ + uint64_t control_mfn; + uint32_t offset; + uint32_t vcpu; +}; +typedef struct evtchn_init_control evtchn_init_control_t; + +/* + * EVTCHNOP_expand_array: add an additional page to the event array. + */ +#define EVTCHNOP_expand_array 12 +struct evtchn_expand_array { + /* IN parameters. */ + uint64_t array_mfn; +}; +typedef struct evtchn_expand_array evtchn_expand_array_t; + +/* + * EVTCHNOP_set_priority: set the priority for an event channel. + */ +#define EVTCHNOP_set_priority 13 +struct evtchn_set_priority { + /* IN parameters. */ + uint32_t port; + uint32_t priority; +}; +typedef struct evtchn_set_priority evtchn_set_priority_t; + +/* + * EVTCHNOP_set_limit: set the maximum event channel port that may be bound. + */ +#define EVTCHNOP_set_limit 14 +struct evtchn_set_limit { + /* IN parameters. */ + uint32_t domid; + uint32_t max_port; +}; +typedef struct evtchn_set_limit evtchn_set_limit_t; + struct evtchn_op { uint32_t cmd; /* EVTCHNOP_* */ union { @@ -207,4 +251,30 @@ struct evtchn_op { }; DEFINE_GUEST_HANDLE_STRUCT(evtchn_op); +/* + * FIFO ABI + */ + +/* Events may have priorities from 0 (highest) to 15 (lowest). 
*/ +#define EVTCHN_FIFO_PRIORITY_MIN 15 +#define EVTCHN_FIFO_PRIORITY_DEFAULT 7 + +#define EVTCHN_FIFO_MAX_QUEUES (EVTCHN_FIFO_PRIORITY_MIN + 1) + +typedef uint32_t event_word_t; + +#define EVTCHN_FIFO_PENDING 31 +#define EVTCHN_FIFO_MASKED 30 +#define EVTCHN_FIFO_LINKED 29 + +#define EVTCHN_FIFO_LINK_BITS 17 +#define EVTCHN_FIFO_LINK_MASK ((1 << EVTCHN_FIFO_LINK_BITS) - 1) + +struct evtchn_fifo_control_block { + uint32_t ready; + uint32_t _rsvd1; + event_word_t head[EVTCHN_FIFO_MAX_QUEUES]; +}; +typedef struct evtchn_fifo_control_block evtchn_fifo_control_block_t; + #endif /* __XEN_PUBLIC_EVENT_CHANNEL_H__ */ -- 1.7.2.5
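Each event in the FIFO ABI is a 32-bit `event_word_t` packing three flag bits (PENDING at bit 31, MASKED at 30, LINKED at 29) with a 17-bit link field in the low bits naming the next event in the queue. A sketch of how those fields pack, reusing the constants from the header above (the helper functions are illustrative, not part of the ABI):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t event_word_t;

#define EVTCHN_FIFO_PENDING    31
#define EVTCHN_FIFO_MASKED     30
#define EVTCHN_FIFO_LINKED     29
#define EVTCHN_FIFO_LINK_BITS  17
#define EVTCHN_FIFO_LINK_MASK  ((1 << EVTCHN_FIFO_LINK_BITS) - 1)

/* Build an event word from its fields. */
static event_word_t make_word(int pending, int masked, int linked,
                              uint32_t link)
{
    return ((uint32_t)pending << EVTCHN_FIFO_PENDING) |
           ((uint32_t)masked  << EVTCHN_FIFO_MASKED)  |
           ((uint32_t)linked  << EVTCHN_FIFO_LINKED)  |
           (link & EVTCHN_FIFO_LINK_MASK);
}

/* Extract the link (index of the next queued event). */
static uint32_t word_link(event_word_t w)
{
    return w & EVTCHN_FIFO_LINK_MASK;
}
```

The 17-bit link is what bounds the ABI at 2^17 event channels, and it is also what sizes `MAX_EVENT_ARRAY_PAGES` in patch 12.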
David Vrabel
2013-Mar-19 21:04 UTC
[PATCH 12/12] xen/events: use the FIFO-based ABI if available
From: David Vrabel <david.vrabel@citrix.com> If the hypervisor supports the FIFO-based ABI, enable it by initializing the control block for the boot VCPU and subsequent VCPUs as they are brought up. The event array is expanded as required when event ports are set up. This implementation has some known limitations: - The number of event channels is not raised above 4096 as this requires changing the way some internal structures were allocated. - Migration will not work as the control blocks or event arrays are not remapped by Xen at the destination. - The timer VIRQ which previously was treated as the highest priority event has the default priority. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/Kbuild | 1 + drivers/xen/events/events.c | 7 +- drivers/xen/events/events_internal.h | 2 + drivers/xen/events/fifo.c | 312 ++++++++++++++++++++++++++++++++++ 4 files changed, 321 insertions(+), 1 deletions(-) create mode 100644 drivers/xen/events/fifo.c diff --git a/drivers/xen/events/Kbuild b/drivers/xen/events/Kbuild index aea331e..74644d0 100644 --- a/drivers/xen/events/Kbuild +++ b/drivers/xen/events/Kbuild @@ -1,2 +1,3 @@ obj-y += events.o +obj-y += fifo.o obj-y += n-level.o diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index e6895b9..a7124f8 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -1512,8 +1512,13 @@ void xen_callback_vector(void) {} void __init xen_init_IRQ(void) { int i; + int ret; - evtchn_ops = evtchn_ops_nlevel; + ret = xen_evtchn_init_fifo_based(); + if (ret < 0) { + printk(KERN_INFO "xen: falling back to n-level event channels"); + evtchn_ops = evtchn_ops_nlevel; + } evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq), GFP_KERNEL); diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h index 1c71a5d..d6bedb6 100644 --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -124,4 +124,6 @@ static
inline void xen_evtchn_handle_events(int cpu) return evtchn_ops.handle_events(cpu); } +int xen_evtchn_init_fifo_based(void); + #endif /* #ifndef __EVENTS_INTERNAL_H__ */ diff --git a/drivers/xen/events/fifo.c b/drivers/xen/events/fifo.c new file mode 100644 index 0000000..8f8e390 --- /dev/null +++ b/drivers/xen/events/fifo.c @@ -0,0 +1,312 @@ +/* + * Xen event channels (FIFO-based ABI) + * + * Copyright (C) 2013 Citrix Systems R&D ltd. + * + * This source code is licensed under the GNU General Public License, + * Version 2 or later. See the file COPYING for more details. + */ + +#include <linux/linkage.h> +#include <linux/interrupt.h> +#include <linux/irq.h> +#include <linux/module.h> +#include <linux/smp.h> +#include <linux/percpu.h> +#include <linux/cpu.h> + +#include <asm/sync_bitops.h> +#include <asm/xen/hypercall.h> +#include <asm/xen/hypervisor.h> +#include <asm/xen/page.h> + +#include <xen/xen.h> +#include <xen/xen-ops.h> +#include <xen/events.h> +#include <xen/interface/xen.h> +#include <xen/interface/event_channel.h> + +#include "events_internal.h" + +#define EVENT_WORDS_PER_PAGE (PAGE_SIZE / sizeof(event_word_t)) +#define MAX_EVENT_ARRAY_PAGES ((1 << EVTCHN_FIFO_LINK_BITS) \ + / EVENT_WORDS_PER_PAGE) + +static DEFINE_PER_CPU(struct evtchn_fifo_control_block *, cpu_control_block); +static event_word_t *event_array[MAX_EVENT_ARRAY_PAGES]; +static unsigned event_array_pages; + +#define BM(w) ((unsigned long *)(w)) + +static inline event_word_t *event_word_from_port(int port) +{ + int i = port / EVENT_WORDS_PER_PAGE; + + if (i >= event_array_pages) + return NULL; + return event_array[i] + port; +} + +static int fifo_setup(struct irq_info *info) +{ + int port = info->evtchn; + int i; + int ret = -ENOMEM; + + i = port / EVENT_WORDS_PER_PAGE; + + if (i >= MAX_EVENT_ARRAY_PAGES) + return -EINVAL; + + while (i >= event_array_pages) { + struct page *array_page = NULL; + struct evtchn_expand_array expand_array; + + array_page = alloc_page(GFP_KERNEL | __GFP_ZERO); + 
if (array_page == NULL) + goto error; + + expand_array.array_mfn = virt_to_mfn(page_address(array_page)); + + ret = HYPERVISOR_event_channel_op(EVTCHNOP_expand_array, &expand_array); + if (ret < 0) { + __free_page(array_page); + goto error; + } + + event_array[event_array_pages++] = page_address(array_page); + } + return 0; + + error: + if (event_array_pages == 0) + panic("xen: unable to expand event array with initial page (%d)\n", ret); + else + printk(KERN_ERR "xen: unable to expand event array (%d)\n", ret); + return ret; +} + +static void fifo_bind_to_cpu(struct irq_info *info, int cpu) +{ + /* no-op */ +} + +static void fifo_clear_pending(int port) +{ + event_word_t *word = event_word_from_port(port); + sync_clear_bit(EVTCHN_FIFO_PENDING, BM(word)); +} + +static void fifo_set_pending(int port) +{ + event_word_t *word = event_word_from_port(port); + sync_set_bit(EVTCHN_FIFO_PENDING, BM(word)); +} + +static bool fifo_is_pending(int port) +{ + event_word_t *word = event_word_from_port(port); + return sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)); +} + +static bool fifo_test_and_set_mask(int port) +{ + event_word_t *word = event_word_from_port(port); + return sync_test_and_set_bit(EVTCHN_FIFO_MASKED, BM(word)); +} + +static void fifo_mask(int port) +{ + event_word_t *word = event_word_from_port(port); + if (word) + sync_set_bit(EVTCHN_FIFO_MASKED, BM(word)); +} + +static void fifo_unmask(int port) +{ + unsigned int cpu = get_cpu(); + bool do_hypercall = false; + bool evtchn_pending = false; + + BUG_ON(!irqs_disabled()); + + if (unlikely((cpu != cpu_from_evtchn(port)))) + do_hypercall = true; + else { + event_word_t *word = event_word_from_port(port); + + sync_clear_bit(EVTCHN_FIFO_MASKED, BM(word)); + evtchn_pending = sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)); + if (evtchn_pending) + do_hypercall = true; + } + + if (do_hypercall) { + struct evtchn_unmask unmask = { .port = port }; + (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); + } + + put_cpu(); 
+} + +static uint32_t clear_linked(volatile event_word_t *word) +{ + event_word_t n, o, w; + + w = *word; + + do { + o = w; + n = (w & ~((1 << EVTCHN_FIFO_LINKED) | EVTCHN_FIFO_LINK_MASK)); + } while ( (w = sync_cmpxchg(word, o, n)) != o ); + + return w & EVTCHN_FIFO_LINK_MASK; +} + +static void handle_irq_for_port(int port) +{ + int irq; + struct irq_desc *desc; + + irq = evtchn_to_irq[port]; + if (irq != -1) { + desc = irq_to_desc(irq); + if (desc) + generic_handle_irq_desc(irq, desc); + } +} + +static void consume_one_event(struct evtchn_fifo_control_block *control_block, + int priority, uint32_t *ready) +{ + volatile uint32_t *head; + int port; + event_word_t *word; + uint32_t link; + + head = &control_block->head[priority]; + + rmb(); /* Ensure word is up-to-date before reading head. */ + port = *head; + word = event_word_from_port(port); + + link = clear_linked(word); + + /* + * If the link is non-zero, there are more events in the + * queue, otherwise the queue is empty. + * + * We don't set HEAD if the queue is empty as this may race + * with Xen adding a new event to the now empty list and + * setting HEAD.
+ */ + if (link != 0) + *head = link; + else + clear_bit(priority, BM(ready)); + + if (sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)) + && !sync_test_bit(EVTCHN_FIFO_MASKED, BM(word))) + handle_irq_for_port(port); +} + +#define EVTCHN_FIFO_READY_MASK ((1 << EVTCHN_FIFO_MAX_QUEUES) - 1) + +static void fifo_handle_events(int cpu) +{ + struct evtchn_fifo_control_block *control_block; + uint32_t ready; + int q; + + control_block = per_cpu(cpu_control_block, cpu); + + ready = xchg(&control_block->ready, 0); + + while (ready & EVTCHN_FIFO_READY_MASK) { + q = find_first_bit(BM(&ready), EVTCHN_FIFO_MAX_QUEUES); + consume_one_event(control_block, q, &ready); + } +} + +struct evtchn_ops evtchn_ops_fifo = { + .setup = fifo_setup, + .bind_to_cpu = fifo_bind_to_cpu, + .clear_pending = fifo_clear_pending, + .set_pending = fifo_set_pending, + .is_pending = fifo_is_pending, + .test_and_set_mask = fifo_test_and_set_mask, + .mask = fifo_mask, + .unmask = fifo_unmask, + .handle_events = fifo_handle_events, +}; + +static int __cpuinit fifo_init_control_block(int cpu) +{ + struct page *control_block = NULL; + struct evtchn_init_control init_control; + int ret = -ENOMEM; + + control_block = alloc_page(GFP_KERNEL|__GFP_ZERO); + if (control_block == NULL) + goto error; + + init_control.control_mfn = virt_to_mfn(page_address(control_block)); + init_control.offset = 0; + init_control.vcpu = cpu; + + ret = HYPERVISOR_event_channel_op(EVTCHNOP_init_control, &init_control); + if (ret < 0) + goto error; + + per_cpu(cpu_control_block, cpu) = page_address(control_block); + + return 0; + + error: + __free_page(control_block); + return ret; +} + +static int __cpuinit fifo_cpu_notification(struct notifier_block *self, + unsigned long action, void *hcpu) +{ + int cpu = (long)hcpu; + int ret = 0; + + switch (action) { + case CPU_UP_PREPARE: + ret = fifo_init_control_block(cpu); + break; + default: + break; + } + return ret < 0 ? 
NOTIFY_BAD : NOTIFY_OK; +} + +static struct notifier_block fifo_cpu_notifier __cpuinitdata = { + .notifier_call = fifo_cpu_notification, +}; + + +int __init xen_evtchn_init_fifo_based(void) +{ + int cpu = get_cpu(); + int ret; + + ret = fifo_init_control_block(cpu); + if (ret < 0) + goto error; + + printk(KERN_INFO "xen: switching to FIFO-based event channels\n"); + + evtchn_ops = evtchn_ops_fifo; + + register_cpu_notifier(&fifo_cpu_notifier); + + put_cpu(); + return 0; + + error: + put_cpu(); + return ret; +} -- 1.7.2.5
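The `clear_linked()` helper in the patch above atomically clears the LINKED flag together with the link field while returning the previous link, using a compare-and-swap retry loop. A single-threaded sketch of the same bit manipulation on a plain `uint32_t`, substituting GCC's `__sync_val_compare_and_swap` builtin for the kernel's `sync_cmpxchg()`:

```c
#include <assert.h>
#include <stdint.h>

#define FIFO_LINKED     29
#define FIFO_LINK_BITS  17
#define FIFO_LINK_MASK  ((1u << FIFO_LINK_BITS) - 1)

/* Atomically clear the LINKED flag and the link field of an event
 * word, returning the link it held.  Other bits (PENDING, MASKED)
 * are left untouched. */
static uint32_t clear_linked(volatile uint32_t *word)
{
    uint32_t n, o, w;

    w = *word;
    do {
        o = w;
        n = w & ~((1u << FIFO_LINKED) | FIFO_LINK_MASK);
    } while ((w = __sync_val_compare_and_swap(word, o, n)) != o);

    return w & FIFO_LINK_MASK;
}
```

The retry loop matters because Xen may concurrently update the word (e.g. set PENDING) between the read and the swap; on a CAS failure the loop re-reads the fresh value, so those concurrent flag updates are preserved while the link is still consumed exactly once.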
Roger Pau Monné
2013-Mar-20 09:38 UTC
Re: [PATCH 12/12] xen/events: use the FIFO-based ABI if available
On 19/03/13 22:04, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> If the hypervisor supports the FIFO-based ABI, enable it by
> initializing the control block for the boot VCPU and subsequent VCPUs
> as they are brought up.  The event array is expanded as required when
> event ports are setup.
>
> This implementation has some known limitations:
>
> - The number of event channels is not raised above 4096 as this
>   requires changing the way some internal structures were allocated.
>
> - Migration will not work as the control blocks or event arrays are
>   not remapped by Xen at the destination.
>
> - The timer VIRQ which previously was treated as the highest priority
>   event has the default priority.
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  drivers/xen/events/Kbuild            |    1 +
>  drivers/xen/events/events.c          |    7 +-
>  drivers/xen/events/events_internal.h |    2 +
>  drivers/xen/events/fifo.c            |  312 ++++++++++++++++++++++++++++++++++
>  4 files changed, 321 insertions(+), 1 deletions(-)
>  create mode 100644 drivers/xen/events/fifo.c
>
> diff --git a/drivers/xen/events/Kbuild b/drivers/xen/events/Kbuild
> index aea331e..74644d0 100644
> --- a/drivers/xen/events/Kbuild
> +++ b/drivers/xen/events/Kbuild
> @@ -1,2 +1,3 @@
>  obj-y += events.o
> +obj-y += fifo.o
>  obj-y += n-level.o
> diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c
> index e6895b9..a7124f8 100644
> --- a/drivers/xen/events/events.c
> +++ b/drivers/xen/events/events.c
> @@ -1512,8 +1512,13 @@ void xen_callback_vector(void) {}
>  void __init xen_init_IRQ(void)
>  {
>  	int i;
> +	int ret;
>
> -	evtchn_ops = evtchn_ops_nlevel;
> +	ret = xen_evtchn_init_fifo_based();
> +	if (ret < 0) {
> +		printk(KERN_INFO "xen: falling back to n-level event channels");
> +		evtchn_ops = evtchn_ops_nlevel;
> +	}
>
>  	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
>  				GFP_KERNEL);
> diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
> index 1c71a5d..d6bedb6 100644
> --- a/drivers/xen/events/events_internal.h
> +++ b/drivers/xen/events/events_internal.h
> @@ -124,4 +124,6 @@ static inline void xen_evtchn_handle_events(int cpu)
>  	return evtchn_ops.handle_events(cpu);
>  }
>
> +int xen_evtchn_init_fifo_based(void);
> +
>  #endif /* #ifndef __EVENTS_INTERNAL_H__ */
> diff --git a/drivers/xen/events/fifo.c b/drivers/xen/events/fifo.c
> new file mode 100644
> index 0000000..8f8e390
> --- /dev/null
> +++ b/drivers/xen/events/fifo.c
> @@ -0,0 +1,312 @@
> +/*
> + * Xen event channels (FIFO-based ABI)
> + *
> + * Copyright (C) 2013 Citrix Systems R&D ltd.
> + *
> + * This source code is licensed under the GNU General Public License,
> + * Version 2 or later.  See the file COPYING for more details.
> + */

I know this is still only an RFC, but could this code be licensed under
something similar to what other parts of the Xen-related Linux kernel
code use (dual license):

/*
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License version 2
 * as published by the Free Software Foundation; or, when distributed
 * separately from the Linux kernel or incorporated into other
 * software packages, subject to the following license:
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this source file (the "Software"), to deal in the Software without
 * restriction, including without limitation the rights to use, copy, modify,
 * merge, publish, distribute, sublicense, and/or sell copies of the Software,
 * and to permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

So it can be imported into other OSes that are not under the GPL.
Stefano Stabellini
2013-Mar-20 11:00 UTC
Re: [PATCH 01/12] xen/events: avoid race with raising an event in unmask_evtchn()
On Tue, 19 Mar 2013, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> In unmask_evtchn(), when the mask bit is cleared after testing for
> pending and the event becomes pending between the test and clear, then
> the upcall will not become pending and the event may be lost or
> delayed.
>
> Avoid this by always clearing the mask bit before checking for
> pending.
>
> This fixes a regression introduced in 3.7 by
> b5e579232d635b79a3da052964cb357ccda8d9ea (xen/events: fix
> unmask_evtchn for PV on HVM guests) which reordered the clear mask and
> check pending operations.

The race you are trying to fix is real, but the fix you are proposing
breaks PV on HVM and ARM guests again.

From the description of b5e579232d635b79a3da052964cb357ccda8d9ea, it's
clear that the reason to call EVTCHNOP_unmask is to trigger an event
notification injection again.
But if you clear the evtchn_mask bit *before* calling EVTCHNOP_unmask,
EVTCHNOP_unmask won't reinject the event.
From evtchn_unmask:

    if ( test_and_clear_bit(port, &shared_info(d, evtchn_mask)) &&
         test_bit          (port, &shared_info(d, evtchn_pending)) &&
         !test_and_set_bit (port / BITS_PER_EVTCHN_WORD(d),
                            &vcpu_info(v, evtchn_pending_sel)) )
    {
        vcpu_mark_events_pending(v);
    }

The first condition for reinjection would fail.

> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: stable@vger.kernel.org
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/xen/events.c |   10 +++++-----
>  1 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index d17aa41..4bdd0a5 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -403,11 +403,13 @@ static void unmask_evtchn(int port)
>
>  	if (unlikely((cpu != cpu_from_evtchn(port))))
>  		do_hypercall = 1;
> -	else
> +	else {
> +		sync_clear_bit(port, BM(&s->evtchn_mask[0]));
>  		evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0]));
>
> -		if (unlikely(evtchn_pending && xen_hvm_domain()))
> -			do_hypercall = 1;
> +		if (unlikely(evtchn_pending && xen_hvm_domain()))
> +			do_hypercall = 1;
> +	}
>
>  	/* Slow path (hypercall) if this is a non-local port or if this is
>  	 * an hvm domain and an event is pending (hvm domains don't have
> @@ -418,8 +420,6 @@ static void unmask_evtchn(int port)
>  	} else {
>  		struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
>
> -		sync_clear_bit(port, BM(&s->evtchn_mask[0]));
> -
>  		/*
>  		 * The following is basically the equivalent of
>  		 * 'hw_resend_irq'.  Just like a real IO-APIC we 'lose
> --
> 1.7.2.5
>
Jan Beulich
2013-Mar-20 11:06 UTC
Re: [PATCH 02/12] xen/events: refactor retrigger_dynirq() and resend_irq_on_evtchn()
>>> On 19.03.13 at 22:04, David Vrabel <david.vrabel@citrix.com> wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> These two functions did the same thing with different parameters; put
> the common bits in retrigger_evtchn().

A smaller patch with - afaict - the same net effect would be to keep
resend_irq_on_evtchn() as is and simply have retrigger_dynirq()
call it.

Jan
Jan Beulich
2013-Mar-20 11:09 UTC
Re: [PATCH 03/12] xen/events: remove unnecessary init_evtchn_cpu_bindings()
>>> On 19.03.13 at 22:04, David Vrabel <david.vrabel@citrix.com> wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> Event channels are always explicitly bound to a specific VCPU before
> they are first enabled.  There is no need to initialize all possible
> events as bound to VCPU 0 at start of day or after a resume.

That part may indeed be safe to do, but ...

> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -333,24 +333,6 @@ static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
>  	info_for_irq(irq)->cpu = cpu;
>  }
>
> -static void init_evtchn_cpu_bindings(void)
> -{
> -	int i;
> -#ifdef CONFIG_SMP
> -	struct irq_info *info;
> -
> -	/* By default all event channels notify CPU#0. */
> -	list_for_each_entry(info, &xen_irq_list_head, list) {
> -		struct irq_desc *desc = irq_to_desc(info->irq);
> -		cpumask_copy(desc->irq_data.affinity, cpumask_of(0));
> -	}
> -#endif
> -
> -	for_each_possible_cpu(i)
> -		memset(per_cpu(cpu_evtchn_mask, i),
> -		       (i == 0) ? ~0 : 0, sizeof(*per_cpu(cpu_evtchn_mask, i)));

... you also remove the initialization of the mask bits here.  If
that was intended, a sentence about the safety of this would
certainly be good to add to the description.

Jan
Jan Beulich
2013-Mar-20 11:12 UTC
Re: [PATCH 08/12] xen/events: add struct evtchn_ops for the low-level port operations
>>> On 19.03.13 at 22:04, David Vrabel <david.vrabel@citrix.com> wrote:
> --- a/drivers/xen/events/events.c
> +++ b/drivers/xen/events/events.c
> @@ -58,6 +58,8 @@
>
>  #include "events_internal.h"
>
> +struct evtchn_ops evtchn_ops;

Either make this a pointer (to const struct evtchn_ops), ...

> +struct evtchn_ops evtchn_ops_nlevel = {
> +	.bind_to_cpu = nlevel_bind_to_cpu,
> +	.clear_pending = nlevel_clear_pending,
> +	.set_pending = nlevel_set_pending,
> +	.is_pending = nlevel_is_pending,
> +	.test_and_set_mask = nlevel_test_and_set_mask,
> +	.mask = nlevel_mask,
> +	.unmask = nlevel_unmask,
> +	.handle_events = nlevel_handle_events,
> +};

... or make this __initdata.

Jan
David Vrabel
2013-Mar-20 12:20 UTC
Re: [PATCH 01/12] xen/events: avoid race with raising an event in unmask_evtchn()
On 20/03/13 11:00, Stefano Stabellini wrote:
> On Tue, 19 Mar 2013, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> In unmask_evtchn(), when the mask bit is cleared after testing for
>> pending and the event becomes pending between the test and clear, then
>> the upcall will not become pending and the event may be lost or
>> delayed.
>>
>> Avoid this by always clearing the mask bit before checking for
>> pending.
>>
>> This fixes a regression introduced in 3.7 by
>> b5e579232d635b79a3da052964cb357ccda8d9ea (xen/events: fix
>> unmask_evtchn for PV on HVM guests) which reordered the clear mask and
>> check pending operations.
>
> The race you are trying to fix is real, but the fix you are proposing
> breaks PV on HVM and ARM guests again.
>
> From the description of b5e579232d635b79a3da052964cb357ccda8d9ea, it's
> clear that the reason to call EVTCHNOP_unmask is to trigger an event
> notification injection again.
> But if you clear the evtchn_mask bit *before* calling EVTCHNOP_unmask,
> EVTCHNOP_unmask won't reinject the event.
> From evtchn_unmask:
>
>     if ( test_and_clear_bit(port, &shared_info(d, evtchn_mask)) &&
>          test_bit          (port, &shared_info(d, evtchn_pending)) &&
>          !test_and_set_bit (port / BITS_PER_EVTCHN_WORD(d),
>                             &vcpu_info(v, evtchn_pending_sel)) )
>     {
>         vcpu_mark_events_pending(v);
>     }
>
> The first condition for reinjection would fail.

I missed this.  The only way I can think of fixing this is to set the
mask bit before calling the unmask hypercall.

The FIFO-based ABI doesn't have this problem as it always tries to
relink the event whatever the previous state of the mask bit was.

David

>> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>> Cc: stable@vger.kernel.org
>> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> ---
>>  drivers/xen/events.c |   10 +++++-----
>>  1 files changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
>> index d17aa41..4bdd0a5 100644
>> --- a/drivers/xen/events.c
>> +++ b/drivers/xen/events.c
>> @@ -403,11 +403,13 @@ static void unmask_evtchn(int port)
>>
>>  	if (unlikely((cpu != cpu_from_evtchn(port))))
>>  		do_hypercall = 1;
>> -	else
>> +	else {
>> +		sync_clear_bit(port, BM(&s->evtchn_mask[0]));
>>  		evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0]));
>>
>> -		if (unlikely(evtchn_pending && xen_hvm_domain()))
>> -			do_hypercall = 1;
>> +		if (unlikely(evtchn_pending && xen_hvm_domain()))
>> +			do_hypercall = 1;
>> +	}
>>
>>  	/* Slow path (hypercall) if this is a non-local port or if this is
>>  	 * an hvm domain and an event is pending (hvm domains don't have
>> @@ -418,8 +420,6 @@ static void unmask_evtchn(int port)
>>  	} else {
>>  		struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
>>
>> -		sync_clear_bit(port, BM(&s->evtchn_mask[0]));
>> -
>>  		/*
>>  		 * The following is basically the equivalent of
>>  		 * 'hw_resend_irq'.  Just like a real IO-APIC we 'lose
>> --
>> 1.7.2.5
>>
Stefano Stabellini
2013-Mar-20 12:21 UTC
Re: [PATCH 01/12] xen/events: avoid race with raising an event in unmask_evtchn()
On Wed, 20 Mar 2013, David Vrabel wrote:
> On 20/03/13 11:00, Stefano Stabellini wrote:
> > On Tue, 19 Mar 2013, David Vrabel wrote:
> >> From: David Vrabel <david.vrabel@citrix.com>
> >>
> >> In unmask_evtchn(), when the mask bit is cleared after testing for
> >> pending and the event becomes pending between the test and clear, then
> >> the upcall will not become pending and the event may be lost or
> >> delayed.
> >>
> >> Avoid this by always clearing the mask bit before checking for
> >> pending.
> >>
> >> This fixes a regression introduced in 3.7 by
> >> b5e579232d635b79a3da052964cb357ccda8d9ea (xen/events: fix
> >> unmask_evtchn for PV on HVM guests) which reordered the clear mask and
> >> check pending operations.
> >
> > The race you are trying to fix is real, but the fix you are proposing
> > breaks PV on HVM and ARM guests again.
> >
> > From the description of b5e579232d635b79a3da052964cb357ccda8d9ea, it's
> > clear that the reason to call EVTCHNOP_unmask is to trigger an event
> > notification injection again.
> > But if you clear the evtchn_mask bit *before* calling EVTCHNOP_unmask,
> > EVTCHNOP_unmask won't reinject the event.
> > From evtchn_unmask:
> >
> >     if ( test_and_clear_bit(port, &shared_info(d, evtchn_mask)) &&
> >          test_bit          (port, &shared_info(d, evtchn_pending)) &&
> >          !test_and_set_bit (port / BITS_PER_EVTCHN_WORD(d),
> >                             &vcpu_info(v, evtchn_pending_sel)) )
> >     {
> >         vcpu_mark_events_pending(v);
> >     }
> >
> > The first condition for reinjection would fail.
>
> I missed this.  The only way I can think of fixing this is to set the
> mask bit before calling the unmask hypercall.

that might work
David Vrabel
2013-Mar-20 13:20 UTC
Re: [PATCH 03/12] xen/events: remove unnecessary init_evtchn_cpu_bindings()
On 20/03/13 11:09, Jan Beulich wrote:
>>>> On 19.03.13 at 22:04, David Vrabel <david.vrabel@citrix.com> wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> Event channels are always explicitly bound to a specific VCPU before
>> they are first enabled.  There is no need to initialize all possible
>> events as bound to VCPU 0 at start of day or after a resume.
>
> That part may indeed be safe to do, but ...
>
>> --- a/drivers/xen/events.c
>> +++ b/drivers/xen/events.c
>> @@ -333,24 +333,6 @@ static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
>>  	info_for_irq(irq)->cpu = cpu;
>>  }
>>
>> -static void init_evtchn_cpu_bindings(void)
>> -{
>> -	int i;
>> -#ifdef CONFIG_SMP
>> -	struct irq_info *info;
>> -
>> -	/* By default all event channels notify CPU#0. */
>> -	list_for_each_entry(info, &xen_irq_list_head, list) {
>> -		struct irq_desc *desc = irq_to_desc(info->irq);
>> -		cpumask_copy(desc->irq_data.affinity, cpumask_of(0));
>> -	}
>> -#endif
>> -
>> -	for_each_possible_cpu(i)
>> -		memset(per_cpu(cpu_evtchn_mask, i),
>> -		       (i == 0) ? ~0 : 0, sizeof(*per_cpu(cpu_evtchn_mask, i)));
>
> ... you also remove the initialization of the mask bits here.  If
> that was intended, a sentence about the safety of this would
> certainly be good to add to the description.

These are similarly initialized when an event is bound to a VCPU.  Note
that cpu_evtchn_mask is poorly named as it's really the inverse of a
mask.

I'll extend the commit message to mention this.

David
Jan Beulich
2013-Mar-20 13:40 UTC
Re: [PATCH 03/12] xen/events: remove unnecessary init_evtchn_cpu_bindings()
>>> On 20.03.13 at 14:20, David Vrabel <david.vrabel@citrix.com> wrote:
> These are similarly initialized when an event is bound to a VCPU.  Note
> that cpu_evtchn_mask is poorly named as it's really the inverse of a
> mask.

Oh, right, I forgot that strange naming, and took it to be the set of
mask bits, not the mask of bound event channels for a CPU.  No need
really to mention that separately in the commit message then.

Sorry for the noise,
Jan
Wei Liu
2013-Mar-20 14:03 UTC
Re: [PATCH 11/12] xen/events: Add the hypervisor interface for the FIFO-based event channels
On Tue, 2013-03-19 at 21:04 +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> Add the hypercall sub-ops and the structures for the shared data used
> in the FIFO-based event channel ABI.
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  include/xen/interface/event_channel.h |   70 +++++++++++++++++++++++++++++++++
>  1 files changed, 70 insertions(+), 0 deletions(-)
>
> diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h
> index f494292..10472f5 100644
> --- a/include/xen/interface/event_channel.h
> +++ b/include/xen/interface/event_channel.h
> @@ -190,6 +190,50 @@ struct evtchn_reset {
>  };
>  typedef struct evtchn_reset evtchn_reset_t;
>

This typedef slipped into the header and should be removed.  No
typedefs in Linux headers.

Wei.
David Vrabel
2013-Mar-20 14:18 UTC
Re: [PATCH 11/12] xen/events: Add the hypervisor interface for the FIFO-based event channels
On 20/03/13 14:03, Wei Liu wrote:
> On Tue, 2013-03-19 at 21:04 +0000, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> Add the hypercall sub-ops and the structures for the shared data used
>> in the FIFO-based event channel ABI.
>>
>> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>> ---
>>  include/xen/interface/event_channel.h |   70 +++++++++++++++++++++++++++++++++
>>  1 files changed, 70 insertions(+), 0 deletions(-)
>>
>> diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h
>> index f494292..10472f5 100644
>> --- a/include/xen/interface/event_channel.h
>> +++ b/include/xen/interface/event_channel.h
>> @@ -190,6 +190,50 @@ struct evtchn_reset {
>>  };
>>  typedef struct evtchn_reset evtchn_reset_t;
>>
>
> This typedef slipped into the header and should be removed.  No
> typedefs in Linux headers.

This has nothing to do with this series.

However, I will make sure I don't add any more typedefs for structures.

David
Wei Liu
2013-Mar-20 14:36 UTC
Re: [PATCH 11/12] xen/events: Add the hypervisor interface for the FIFO-based event channels
On Wed, 2013-03-20 at 14:18 +0000, David Vrabel wrote:
> On 20/03/13 14:03, Wei Liu wrote:
> > On Tue, 2013-03-19 at 21:04 +0000, David Vrabel wrote:
> >> From: David Vrabel <david.vrabel@citrix.com>
> >>
> >> Add the hypercall sub-ops and the structures for the shared data used
> >> in the FIFO-based event channel ABI.
> >>
> >> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> >> ---
> >>  include/xen/interface/event_channel.h |   70 +++++++++++++++++++++++++++++++++
> >>  1 files changed, 70 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h
> >> index f494292..10472f5 100644
> >> --- a/include/xen/interface/event_channel.h
> >> +++ b/include/xen/interface/event_channel.h
> >> @@ -190,6 +190,50 @@ struct evtchn_reset {
> >>  };
> >>  typedef struct evtchn_reset evtchn_reset_t;
> >>
> >
> > This typedef slipped into the header and should be removed.  No
> > typedefs in Linux headers.
>
> This has nothing to do with this series.

IIRC this one was my fault when I upstreamed evtchn_reset.  I have a
trivial patch to remove it.

> However, I will make sure I don't add any more typedefs for structures.

Yes please.

Wei.
Konrad Rzeszutek Wilk
2013-May-06 19:51 UTC
Re: [PATCH RFC 0/12] Linux: FIFO-based event channel ABI
On Tue, Mar 19, 2013 at 09:04:47PM +0000, David Vrabel wrote:
> This is an RFC of Linux guest-side implementation of the FIFO-based
> event channel ABI described in this design document:
>
> http://xenbits.xen.org/people/dvrabel/event-channels-C.pdf
>
> Refer also to the Xen series.
>
> Patch 1 fixes a regression introduced in 3.7 and is unrelated to this
> series.
>
> Patch 2 is an obvious refactoring of common code.
>
> Patches 3-7 prepare for supporting multiple ABIs.
>
> Patch 8 adds the low-level evtchn_ops hooks.
>
> Patches 9-10 add an additional hook for ABI-specific per-port setup
> (used for expanding the event array as more events are bound).
>
> Patches 11-12 add the ABI and the implementation.  Main known
> limitations are listed in patch 12.

So what is the status of these patches?  Are they going to be
reposted at some point?

> David
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
David Vrabel
2013-May-07 12:26 UTC
Re: [PATCH RFC 0/12] Linux: FIFO-based event channel ABI
On 06/05/13 20:51, Konrad Rzeszutek Wilk wrote:
> On Tue, Mar 19, 2013 at 09:04:47PM +0000, David Vrabel wrote:
>> This is an RFC of Linux guest-side implementation of the FIFO-based
>> event channel ABI described in this design document:
>>
>> http://xenbits.xen.org/people/dvrabel/event-channels-C.pdf
>>
>> Refer also to the Xen series.
>>
>> Patch 1 fixes a regression introduced in 3.7 and is unrelated to this
>> series.
>>
>> Patch 2 is an obvious refactoring of common code.
>>
>> Patches 3-7 prepare for supporting multiple ABIs.
>>
>> Patch 8 adds the low-level evtchn_ops hooks.
>>
>> Patches 9-10 add an additional hook for ABI-specific per-port setup
>> (used for expanding the event array as more events are bound).
>>
>> Patches 11-12 add the ABI and the implementation.  Main known
>> limitations are listed in patch 12.
>
> So what is the status of these patches?  Are they going to be
> reposted at some point?

Yes.  It's not the highest priority right now so it might be a while
before I can work on this.

David