David Vrabel
2013-Aug-09 18:15 UTC
[RFC PATCHv2 00/12] Linux: FIFO-based event channel ABI
This is an RFC of the Linux guest-side implementation of the FIFO-based
event channel ABI described in this design document:

http://xenbits.xen.org/people/dvrabel/event-channels-D.pdf

Refer also to the corresponding Xen series.

Patch 1 is an obvious refactoring of common code.

Patches 2-6 prepare for supporting multiple ABIs.

Patch 7 adds the low-level evtchn_ops hooks.

Patches 8-9 add an additional hook for ABI-specific per-port setup
(used for expanding the event array as more events are bound).

Patch 10 allows many more event channels to be supported by altering
how the event channel to irq map is allocated.  Note that other
factors still limit the number of supported IRQs (the IRQ limit is
8192 + 64 * NR_CPUS).

Patches 11-12 add the ABI and the implementation.  The main known
limitations are listed in patch 12.

David
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 01/12] xen/events: refactor retrigger_dynirq() and resend_irq_on_evtchn()
From: David Vrabel <david.vrabel@citrix.com>

These two functions did the same thing with different parameters; put
the common bits in retrigger_evtchn().

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c |   27 +++++++++------------------
 1 files changed, 9 insertions(+), 18 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index a58ac43..1697cf2 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1529,13 +1529,13 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 	return rebind_irq_to_cpu(data->irq, tcpu);
 }
 
-int resend_irq_on_evtchn(unsigned int irq)
+static int retrigger_evtchn(int evtchn)
 {
-	int masked, evtchn = evtchn_from_irq(irq);
+	int masked;
 	struct shared_info *s = HYPERVISOR_shared_info;
 
 	if (!VALID_EVTCHN(evtchn))
-		return 1;
+		return 0;
 
 	masked = sync_test_and_set_bit(evtchn, BM(s->evtchn_mask));
 	sync_set_bit(evtchn, BM(s->evtchn_pending));
@@ -1545,6 +1545,11 @@ int resend_irq_on_evtchn(unsigned int irq)
 	return 1;
 }
 
+int resend_irq_on_evtchn(unsigned int irq)
+{
+	return retrigger_evtchn(evtchn_from_irq(irq));
+}
+
 static void enable_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
@@ -1579,21 +1584,7 @@ static void mask_ack_dynirq(struct irq_data *data)
 
 static int retrigger_dynirq(struct irq_data *data)
 {
-	int evtchn = evtchn_from_irq(data->irq);
-	struct shared_info *sh = HYPERVISOR_shared_info;
-	int ret = 0;
-
-	if (VALID_EVTCHN(evtchn)) {
-		int masked;
-
-		masked = sync_test_and_set_bit(evtchn, BM(sh->evtchn_mask));
-		sync_set_bit(evtchn, BM(sh->evtchn_pending));
-		if (!masked)
-			unmask_evtchn(evtchn);
-		ret = 1;
-	}
-
-	return ret;
+	return retrigger_evtchn(evtchn_from_irq(data->irq));
 }
 
 static void restore_pirqs(void)
-- 
1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 02/12] xen/events: remove unnecessary init_evtchn_cpu_bindings()
From: David Vrabel <david.vrabel@citrix.com>

Event channels are always explicitly bound to a specific VCPU before
they are first enabled.  There is no need to initialize all possible
events as bound to VCPU 0 at start of day or after a resume.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c |   22 ----------------------
 1 files changed, 0 insertions(+), 22 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 1697cf2..af61813 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -333,24 +333,6 @@ static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
 	info_for_irq(irq)->cpu = cpu;
 }
 
-static void init_evtchn_cpu_bindings(void)
-{
-	int i;
-#ifdef CONFIG_SMP
-	struct irq_info *info;
-
-	/* By default all event channels notify CPU#0. */
-	list_for_each_entry(info, &xen_irq_list_head, list) {
-		struct irq_desc *desc = irq_to_desc(info->irq);
-		cpumask_copy(desc->irq_data.affinity, cpumask_of(0));
-	}
-#endif
-
-	for_each_possible_cpu(i)
-		memset(per_cpu(cpu_evtchn_mask, i),
-		       (i == 0) ? ~0 : 0, sizeof(*per_cpu(cpu_evtchn_mask, i)));
-}
-
 static inline void clear_evtchn(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -1749,8 +1731,6 @@ void xen_irq_resume(void)
 	unsigned int cpu, evtchn;
 	struct irq_info *info;
 
-	init_evtchn_cpu_bindings();
-
 	/* New event-channel space is not 'live' yet. */
 	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
 		mask_evtchn(evtchn);
@@ -1861,8 +1841,6 @@ void __init xen_init_IRQ(void)
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
 		evtchn_to_irq[i] = -1;
 
-	init_evtchn_cpu_bindings();
-
 	/* No event channels are 'live' right now. */
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
 		mask_evtchn(i);
-- 
1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 03/12] xen/events: introduce test_and_set_mask()
From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index af61813..057a033 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -351,6 +351,12 @@ static inline int test_evtchn(int port)
 	return sync_test_bit(port, BM(&s->evtchn_pending[0]));
 }
 
+static inline int test_and_set_mask(int port)
+{
+	struct shared_info *s = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
+}
+
 
 /**
  * notify_remote_via_irq - send event to remote end of event channel via irq
@@ -1519,7 +1525,7 @@ static int retrigger_evtchn(int evtchn)
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
-	masked = sync_test_and_set_bit(evtchn, BM(s->evtchn_mask));
+	masked = test_and_set_mask(evtchn);
 	sync_set_bit(evtchn, BM(s->evtchn_pending));
 	if (!masked)
 		unmask_evtchn(evtchn);
-- 
1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 04/12] xen/events: replace raw bit ops with functions
From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 057a033..13df85c 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1520,13 +1520,12 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 static int retrigger_evtchn(int evtchn)
 {
 	int masked;
-	struct shared_info *s = HYPERVISOR_shared_info;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
 	masked = test_and_set_mask(evtchn);
-	sync_set_bit(evtchn, BM(s->evtchn_pending));
+	set_evtchn(evtchn);
 	if (!masked)
 		unmask_evtchn(evtchn);
 
-- 
1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 05/12] xen/events: move drivers/xen/events.c into drivers/xen/events/
From: David Vrabel <david.vrabel@citrix.com>

events.c will be split into multiple files so move it into its own
directory.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/Makefile              |    3 ++-
 drivers/xen/events/Makefile       |    1 +
 drivers/xen/{ => events}/events.c |    0
 3 files changed, 3 insertions(+), 1 deletions(-)
 create mode 100644 drivers/xen/events/Makefile
 rename drivers/xen/{ => events}/events.c (100%)

diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 14fe79d..d75c811 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,8 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)	+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)		+= fallback.o
-obj-y	+= grant-table.o features.o events.o balloon.o manage.o
+obj-y	+= grant-table.o features.o balloon.o manage.o
+obj-y	+= events/
 obj-y	+= xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
diff --git a/drivers/xen/events/Makefile b/drivers/xen/events/Makefile
new file mode 100644
index 0000000..86fd7c1
--- /dev/null
+++ b/drivers/xen/events/Makefile
@@ -0,0 +1 @@
+obj-y += events.o
diff --git a/drivers/xen/events.c b/drivers/xen/events/events.c
similarity index 100%
rename from drivers/xen/events.c
rename to drivers/xen/events/events.c
-- 
1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 06/12] xen/events: move 2-level specific code into its own file
From: David Vrabel <david.vrabel@citrix.com> In preparation for alternative event channel ABIs, move all the functions accessing the shared data structures into their own file. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/Makefile | 1 + drivers/xen/events/events.c | 365 +--------------------------------- drivers/xen/events/events_internal.h | 74 +++++++ drivers/xen/events/n-level.c | 315 +++++++++++++++++++++++++++++ 4 files changed, 401 insertions(+), 354 deletions(-) create mode 100644 drivers/xen/events/events_internal.h create mode 100644 drivers/xen/events/n-level.c diff --git a/drivers/xen/events/Makefile b/drivers/xen/events/Makefile index 86fd7c1..aea331e 100644 --- a/drivers/xen/events/Makefile +++ b/drivers/xen/events/Makefile @@ -1 +1,2 @@ obj-y += events.o +obj-y += n-level.o diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index 13df85c..8d5a5cf 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -58,6 +58,8 @@ #include <xen/interface/sched.h> #include <asm/hw_irq.h> +#include "events_internal.h" + /* * This lock protects updates to the following mapping and reference-count * arrays. The lock does not need to be acquired to read the mapping tables. @@ -72,72 +74,12 @@ static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1}; /* IRQ <-> IPI mapping */ static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1}; -/* Interrupt types. 
*/ -enum xen_irq_type { - IRQT_UNBOUND = 0, - IRQT_PIRQ, - IRQT_VIRQ, - IRQT_IPI, - IRQT_EVTCHN -}; - -/* - * Packed IRQ information: - * type - enum xen_irq_type - * event channel - irq->event channel mapping - * cpu - cpu this event channel is bound to - * index - type-specific information: - * PIRQ - physical IRQ, GSI, flags, and owner domain - * VIRQ - virq number - * IPI - IPI vector - * EVTCHN - - */ -struct irq_info { - struct list_head list; - int refcnt; - enum xen_irq_type type; /* type */ - unsigned irq; - unsigned short evtchn; /* event channel */ - unsigned short cpu; /* cpu bound */ - - union { - unsigned short virq; - enum ipi_vector ipi; - struct { - unsigned short pirq; - unsigned short gsi; - unsigned char flags; - uint16_t domid; - } pirq; - } u; -}; -#define PIRQ_NEEDS_EOI (1 << 0) -#define PIRQ_SHAREABLE (1 << 1) - -static int *evtchn_to_irq; +int *evtchn_to_irq; #ifdef CONFIG_X86 static unsigned long *pirq_eoi_map; #endif static bool (*pirq_needs_eoi)(unsigned irq); -/* - * Note sizeof(xen_ulong_t) can be more than sizeof(unsigned long). Be - * careful to only use bitops which allow for this (e.g - * test_bit/find_first_bit and friends but not __ffs) and to pass - * BITS_PER_EVTCHN_WORD as the bitmask length. - */ -#define BITS_PER_EVTCHN_WORD (sizeof(xen_ulong_t)*8) -/* - * Make a bitmask (i.e. unsigned long *) of a xen_ulong_t - * array. Primarily to avoid long lines (hence the terse name). - */ -#define BM(x) (unsigned long *)(x) -/* Find the first set bit in a evtchn mask */ -#define EVTCHN_FIRST_BIT(w) find_first_bit(BM(&(w)), BITS_PER_EVTCHN_WORD) - -static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], - cpu_evtchn_mask); - /* Xen will never allocate port zero for any purpose. 
*/ #define VALID_EVTCHN(chn) ((chn) != 0) @@ -148,7 +90,7 @@ static void enable_dynirq(struct irq_data *data); static void disable_dynirq(struct irq_data *data); /* Get info for IRQ */ -static struct irq_info *info_for_irq(unsigned irq) +struct irq_info *info_for_irq(unsigned irq) { return irq_get_handler_data(irq); } @@ -278,12 +220,12 @@ static enum xen_irq_type type_from_irq(unsigned irq) return info_for_irq(irq)->type; } -static unsigned cpu_from_irq(unsigned irq) +unsigned cpu_from_irq(unsigned irq) { return info_for_irq(irq)->cpu; } -static unsigned int cpu_from_evtchn(unsigned int evtchn) +unsigned int cpu_from_evtchn(unsigned int evtchn) { int irq = evtchn_to_irq[evtchn]; unsigned ret = 0; @@ -309,55 +251,21 @@ static bool pirq_needs_eoi_flag(unsigned irq) return info->u.pirq.flags & PIRQ_NEEDS_EOI; } -static inline xen_ulong_t active_evtchns(unsigned int cpu, - struct shared_info *sh, - unsigned int idx) -{ - return sh->evtchn_pending[idx] & - per_cpu(cpu_evtchn_mask, cpu)[idx] & - ~sh->evtchn_mask[idx]; -} - static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu) { int irq = evtchn_to_irq[chn]; + struct irq_info *info = info_for_irq(irq); BUG_ON(irq == -1); #ifdef CONFIG_SMP cpumask_copy(irq_to_desc(irq)->irq_data.affinity, cpumask_of(cpu)); #endif - clear_bit(chn, BM(per_cpu(cpu_evtchn_mask, cpu_from_irq(irq)))); - set_bit(chn, BM(per_cpu(cpu_evtchn_mask, cpu))); + xen_evtchn_port_bind_to_cpu(info, cpu); - info_for_irq(irq)->cpu = cpu; -} - -static inline void clear_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - sync_clear_bit(port, BM(&s->evtchn_pending[0])); -} - -static inline void set_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - sync_set_bit(port, BM(&s->evtchn_pending[0])); -} - -static inline int test_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - return sync_test_bit(port, BM(&s->evtchn_pending[0])); -} - -static inline int test_and_set_mask(int port) -{ - 
struct shared_info *s = HYPERVISOR_shared_info; - return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0])); + info->cpu = cpu; } - /** * notify_remote_via_irq - send event to remote end of event channel via irq * @irq: irq of event channel to send event to @@ -375,63 +283,6 @@ void notify_remote_via_irq(int irq) } EXPORT_SYMBOL_GPL(notify_remote_via_irq); -static void mask_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - sync_set_bit(port, BM(&s->evtchn_mask[0])); -} - -static void unmask_evtchn(int port) -{ - struct shared_info *s = HYPERVISOR_shared_info; - unsigned int cpu = get_cpu(); - int do_hypercall = 0, evtchn_pending = 0; - - BUG_ON(!irqs_disabled()); - - if (unlikely((cpu != cpu_from_evtchn(port)))) - do_hypercall = 1; - else { - /* - * Need to clear the mask before checking pending to - * avoid a race with an event becoming pending. - * - * EVTCHNOP_unmask will only trigger an upcall if the - * mask bit was set, so if a hypercall is needed - * remask the event. - */ - sync_clear_bit(port, BM(&s->evtchn_mask[0])); - evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0])); - - if (unlikely(evtchn_pending && xen_hvm_domain())) { - sync_set_bit(port, BM(&s->evtchn_mask[0])); - do_hypercall = 1; - } - } - - /* Slow path (hypercall) if this is a non-local port or if this is - * an hvm domain and an event is pending (hvm domains don't have - * their own implementation of irq_enable). */ - if (do_hypercall) { - struct evtchn_unmask unmask = { .port = port }; - (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); - } else { - struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); - - /* - * The following is basically the equivalent of - * 'hw_resend_irq'. Just like a real IO-APIC we 'lose - * the interrupt edge' if the channel is masked. 
- */ - if (evtchn_pending && - !sync_test_and_set_bit(port / BITS_PER_EVTCHN_WORD, - BM(&vcpu_info->evtchn_pending_sel))) - vcpu_info->evtchn_upcall_pending = 1; - } - - put_cpu(); -} - static void xen_irq_init(unsigned irq) { struct irq_info *info; @@ -1205,215 +1056,21 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector) notify_remote_via_irq(irq); } -irqreturn_t xen_debug_interrupt(int irq, void *dev_id) -{ - struct shared_info *sh = HYPERVISOR_shared_info; - int cpu = smp_processor_id(); - xen_ulong_t *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu); - int i; - unsigned long flags; - static DEFINE_SPINLOCK(debug_lock); - struct vcpu_info *v; - - spin_lock_irqsave(&debug_lock, flags); - - printk("\nvcpu %d\n ", cpu); - - for_each_online_cpu(i) { - int pending; - v = per_cpu(xen_vcpu, i); - pending = (get_irq_regs() && i == cpu) - ? xen_irqs_disabled(get_irq_regs()) - : v->evtchn_upcall_mask; - printk("%d: masked=%d pending=%d event_sel %0*"PRI_xen_ulong"\n ", i, - pending, v->evtchn_upcall_pending, - (int)(sizeof(v->evtchn_pending_sel)*2), - v->evtchn_pending_sel); - } - v = per_cpu(xen_vcpu, cpu); - - printk("\npending:\n "); - for (i = ARRAY_SIZE(sh->evtchn_pending)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", - (int)sizeof(sh->evtchn_pending[0])*2, - sh->evtchn_pending[i], - i % 8 == 0 ? "\n " : " "); - printk("\nglobal mask:\n "); - for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", - (int)(sizeof(sh->evtchn_mask[0])*2), - sh->evtchn_mask[i], - i % 8 == 0 ? "\n " : " "); - - printk("\nglobally unmasked:\n "); - for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", - (int)(sizeof(sh->evtchn_mask[0])*2), - sh->evtchn_pending[i] & ~sh->evtchn_mask[i], - i % 8 == 0 ? 
"\n " : " "); - - printk("\nlocal cpu%d mask:\n ", cpu); - for (i = (NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD)-1; i >= 0; i--) - printk("%0*"PRI_xen_ulong"%s", (int)(sizeof(cpu_evtchn[0])*2), - cpu_evtchn[i], - i % 8 == 0 ? "\n " : " "); - - printk("\nlocally unmasked:\n "); - for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) { - xen_ulong_t pending = sh->evtchn_pending[i] - & ~sh->evtchn_mask[i] - & cpu_evtchn[i]; - printk("%0*"PRI_xen_ulong"%s", - (int)(sizeof(sh->evtchn_mask[0])*2), - pending, i % 8 == 0 ? "\n " : " "); - } - - printk("\npending list:\n"); - for (i = 0; i < NR_EVENT_CHANNELS; i++) { - if (sync_test_bit(i, BM(sh->evtchn_pending))) { - int word_idx = i / BITS_PER_EVTCHN_WORD; - printk(" %d: event %d -> irq %d%s%s%s\n", - cpu_from_evtchn(i), i, - evtchn_to_irq[i], - sync_test_bit(word_idx, BM(&v->evtchn_pending_sel)) - ? "" : " l2-clear", - !sync_test_bit(i, BM(sh->evtchn_mask)) - ? "" : " globally-masked", - sync_test_bit(i, BM(cpu_evtchn)) - ? "" : " locally-masked"); - } - } - - spin_unlock_irqrestore(&debug_lock, flags); - - return IRQ_HANDLED; -} - static DEFINE_PER_CPU(unsigned, xed_nesting_count); -static DEFINE_PER_CPU(unsigned int, current_word_idx); -static DEFINE_PER_CPU(unsigned int, current_bit_idx); -/* - * Mask out the i least significant bits of w - */ -#define MASK_LSBS(w, i) (w & ((~((xen_ulong_t)0UL)) << i)) - -/* - * Search the CPUs pending events bitmasks. For each one found, map - * the event number to an irq, and feed it into do_IRQ() for - * handling. - * - * Xen uses a two-level bitmap to speed searching. The first level is - * a bitset of words which contain pending event bits. The second - * level is a bitset of pending events themselves. 
- */ static void __xen_evtchn_do_upcall(void) { - int start_word_idx, start_bit_idx; - int word_idx, bit_idx; - int i, irq; - int cpu = get_cpu(); - struct shared_info *s = HYPERVISOR_shared_info; struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); + int cpu = get_cpu(); unsigned count; do { - xen_ulong_t pending_words; - xen_ulong_t pending_bits; - struct irq_desc *desc; - vcpu_info->evtchn_upcall_pending = 0; if (__this_cpu_inc_return(xed_nesting_count) - 1) goto out; - /* - * Master flag must be cleared /before/ clearing - * selector flag. xchg_xen_ulong must contain an - * appropriate barrier. - */ - if ((irq = per_cpu(virq_to_irq, cpu)[VIRQ_TIMER]) != -1) { - int evtchn = evtchn_from_irq(irq); - word_idx = evtchn / BITS_PER_LONG; - pending_bits = evtchn % BITS_PER_LONG; - if (active_evtchns(cpu, s, word_idx) & (1ULL << pending_bits)) { - desc = irq_to_desc(irq); - if (desc) - generic_handle_irq_desc(irq, desc); - } - } - - pending_words = xchg_xen_ulong(&vcpu_info->evtchn_pending_sel, 0); - - start_word_idx = __this_cpu_read(current_word_idx); - start_bit_idx = __this_cpu_read(current_bit_idx); - - word_idx = start_word_idx; - - for (i = 0; pending_words != 0; i++) { - xen_ulong_t words; - - words = MASK_LSBS(pending_words, word_idx); - - /* - * If we masked out all events, wrap to beginning. - */ - if (words == 0) { - word_idx = 0; - bit_idx = 0; - continue; - } - word_idx = EVTCHN_FIRST_BIT(words); - - pending_bits = active_evtchns(cpu, s, word_idx); - bit_idx = 0; /* usually scan entire word from start */ - if (word_idx == start_word_idx) { - /* We scan the starting word in two parts */ - if (i == 0) - /* 1st time: start in the middle */ - bit_idx = start_bit_idx; - else - /* 2nd time: mask bits done already */ - bit_idx &= (1UL << start_bit_idx) - 1; - } - - do { - xen_ulong_t bits; - int port; - - bits = MASK_LSBS(pending_bits, bit_idx); - - /* If we masked out all events, move on. 
*/ - if (bits == 0) - break; - - bit_idx = EVTCHN_FIRST_BIT(bits); - - /* Process port. */ - port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx; - irq = evtchn_to_irq[port]; - - if (irq != -1) { - desc = irq_to_desc(irq); - if (desc) - generic_handle_irq_desc(irq, desc); - } - - bit_idx = (bit_idx + 1) % BITS_PER_EVTCHN_WORD; - - /* Next caller starts at last processed + 1 */ - __this_cpu_write(current_word_idx, - bit_idx ? word_idx : - (word_idx+1) % BITS_PER_EVTCHN_WORD); - __this_cpu_write(current_bit_idx, bit_idx); - } while (bit_idx != 0); - - /* Scan start_l1i twice; all others once. */ - if ((word_idx != start_word_idx) || (i != 0)) - pending_words &= ~(1UL << word_idx); - - word_idx = (word_idx + 1) % BITS_PER_EVTCHN_WORD; - } + xen_evtchn_handle_events(cpu); BUG_ON(!irqs_disabled()); diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h new file mode 100644 index 0000000..79ac70b --- /dev/null +++ b/drivers/xen/events/events_internal.h @@ -0,0 +1,74 @@ +/* + * Xen Event Channels (internal header) + * + * Copyright (C) 2013 Citrix Systems R&D Ltd. + * + * This source code is licensed under the GNU General Public License, + * Version 2 or later. See the file COPYING for more details. + */ +#ifndef __EVENTS_INTERNAL_H__ +#define __EVENTS_INTERNAL_H__ + +/* Interrupt types. */ +enum xen_irq_type { + IRQT_UNBOUND = 0, + IRQT_PIRQ, + IRQT_VIRQ, + IRQT_IPI, + IRQT_EVTCHN +}; + +/* + * Packed IRQ information: + * type - enum xen_irq_type + * event channel - irq->event channel mapping + * cpu - cpu this event channel is bound to + * index - type-specific information: + * PIRQ - vector, with MSB being "needs EIO", or physical IRQ of the HVM + * guest, or GSI (real passthrough IRQ) of the device. 
+ * VIRQ - virq number + * IPI - IPI vector + * EVTCHN - + */ +struct irq_info { + struct list_head list; + int refcnt; + enum xen_irq_type type; /* type */ + unsigned irq; + unsigned short evtchn; /* event channel */ + unsigned short cpu; /* cpu bound */ + + union { + unsigned short virq; + enum ipi_vector ipi; + struct { + unsigned short pirq; + unsigned short gsi; + unsigned char vector; + unsigned char flags; + uint16_t domid; + } pirq; + } u; +}; + +#define PIRQ_NEEDS_EOI (1 << 0) +#define PIRQ_SHAREABLE (1 << 1) + +extern int *evtchn_to_irq; + +struct irq_info *info_for_irq(unsigned irq); +unsigned cpu_from_irq(unsigned irq); +unsigned cpu_from_evtchn(unsigned int evtchn); + +void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu); + +void clear_evtchn(int port); +void set_evtchn(int port); +int test_evtchn(int port); +int test_and_set_mask(int port); +void mask_evtchn(int port); +void unmask_evtchn(int port); + +void xen_evtchn_handle_events(int cpu); + +#endif /* #ifndef __EVENTS_INTERNAL_H__ */ diff --git a/drivers/xen/events/n-level.c b/drivers/xen/events/n-level.c new file mode 100644 index 0000000..05762d5 --- /dev/null +++ b/drivers/xen/events/n-level.c @@ -0,0 +1,315 @@ +/* + * Xen event channels (N-level ABI) + * + * Jeremy Fitzhardinge <jeremy@xensource.com>, XenSource Inc, 2007 + */ + +#include <linux/linkage.h> +#include <linux/interrupt.h> +#include <linux/irq.h> +#include <linux/module.h> + +#include <asm/sync_bitops.h> +#include <asm/xen/hypercall.h> +#include <asm/xen/hypervisor.h> + +#include <xen/xen.h> +#include <xen/xen-ops.h> +#include <xen/events.h> +#include <xen/interface/xen.h> +#include <xen/interface/event_channel.h> + +#include "events_internal.h" + +/* + * Note sizeof(xen_ulong_t) can be more than sizeof(unsigned long). Be + * careful to only use bitops which allow for this (e.g + * test_bit/find_first_bit and friends but not __ffs) and to pass + * BITS_PER_EVTCHN_WORD as the bitmask length. 
+ */ +#define BITS_PER_EVTCHN_WORD (sizeof(xen_ulong_t)*8) +/* + * Make a bitmask (i.e. unsigned long *) of a xen_ulong_t + * array. Primarily to avoid long lines (hence the terse name). + */ +#define BM(x) (unsigned long *)(x) +/* Find the first set bit in a evtchn mask */ +#define EVTCHN_FIRST_BIT(w) find_first_bit(BM(&(w)), BITS_PER_EVTCHN_WORD) + +static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], + cpu_evtchn_mask); + +void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) +{ + clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu))); + set_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, cpu))); +} + +void clear_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + sync_clear_bit(port, BM(&s->evtchn_pending[0])); +} + +void set_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + sync_set_bit(port, BM(&s->evtchn_pending[0])); +} + +int test_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + return sync_test_bit(port, BM(&s->evtchn_pending[0])); +} + +int test_and_set_mask(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0])); +} + +void mask_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + sync_set_bit(port, BM(&s->evtchn_mask[0])); +} + +void unmask_evtchn(int port) +{ + struct shared_info *s = HYPERVISOR_shared_info; + unsigned int cpu = get_cpu(); + int do_hypercall = 0, evtchn_pending = 0; + + BUG_ON(!irqs_disabled()); + + if (unlikely((cpu != cpu_from_evtchn(port)))) + do_hypercall = 1; + else { + sync_clear_bit(port, BM(&s->evtchn_mask[0])); + evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0])); + + if (unlikely(evtchn_pending && xen_hvm_domain())) + do_hypercall = 1; + } + + /* Slow path (hypercall) if this is a non-local port or if this is + * an hvm domain and an event is pending (hvm domains don't have + * their own implementation of irq_enable). 
*/ + if (do_hypercall) { + struct evtchn_unmask unmask = { .port = port }; + (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); + } else { + struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); + + /* + * The following is basically the equivalent of + * 'hw_resend_irq'. Just like a real IO-APIC we 'lose + * the interrupt edge' if the channel is masked. + */ + if (evtchn_pending && + !sync_test_and_set_bit(port / BITS_PER_EVTCHN_WORD, + BM(&vcpu_info->evtchn_pending_sel))) + vcpu_info->evtchn_upcall_pending = 1; + } + + put_cpu(); +} + +static DEFINE_PER_CPU(unsigned int, current_word_idx); +static DEFINE_PER_CPU(unsigned int, current_bit_idx); + +/* + * Mask out the i least significant bits of w + */ +#define MASK_LSBS(w, i) (w & ((~((xen_ulong_t)0UL)) << i)) + +static inline xen_ulong_t active_evtchns(unsigned int cpu, + struct shared_info *sh, + unsigned int idx) +{ + return sh->evtchn_pending[idx] & + per_cpu(cpu_evtchn_mask, cpu)[idx] & + ~sh->evtchn_mask[idx]; +} + +/* + * Search the CPU's pending events bitmasks. For each one found, map + * the event number to an irq, and feed it into do_IRQ() for handling. + * + * Xen uses a two-level bitmap to speed searching. The first level is + * a bitset of words which contain pending event bits. The second + * level is a bitset of pending events themselves. + */ +void xen_evtchn_handle_events(int cpu) +{ + xen_ulong_t pending_words; + int start_word_idx, start_bit_idx; + int word_idx, bit_idx; + int i; + struct shared_info *s = HYPERVISOR_shared_info; + struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); + + /* + * Master flag must be cleared /before/ clearing + * selector flag. xchg_xen_ulong must contain an + * appropriate barrier. 
+ */ + pending_words = xchg_xen_ulong(&vcpu_info->evtchn_pending_sel, 0); + + start_word_idx = __this_cpu_read(current_word_idx); + start_bit_idx = __this_cpu_read(current_bit_idx); + + word_idx = start_word_idx; + + for (i = 0; pending_words != 0; i++) { + xen_ulong_t pending_bits; + xen_ulong_t words; + + words = MASK_LSBS(pending_words, word_idx); + + /* + * If we masked out all events, wrap to beginning. + */ + if (words == 0) { + word_idx = 0; + bit_idx = 0; + continue; + } + word_idx = EVTCHN_FIRST_BIT(words); + + pending_bits = active_evtchns(cpu, s, word_idx); + bit_idx = 0; /* usually scan entire word from start */ + if (word_idx == start_word_idx) { + /* We scan the starting word in two parts */ + if (i == 0) + /* 1st time: start in the middle */ + bit_idx = start_bit_idx; + else + /* 2nd time: mask bits done already */ + bit_idx &= (1UL << start_bit_idx) - 1; + } + + do { + xen_ulong_t bits; + int port, irq; + struct irq_desc *desc; + + bits = MASK_LSBS(pending_bits, bit_idx); + + /* If we masked out all events, move on. */ + if (bits == 0) + break; + + bit_idx = EVTCHN_FIRST_BIT(bits); + + /* Process port. */ + port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx; + irq = evtchn_to_irq[port]; + + if (irq != -1) { + desc = irq_to_desc(irq); + if (desc) + generic_handle_irq_desc(irq, desc); + } + + bit_idx = (bit_idx + 1) % BITS_PER_EVTCHN_WORD; + + /* Next caller starts at last processed + 1 */ + __this_cpu_write(current_word_idx, + bit_idx ? word_idx : + (word_idx+1) % BITS_PER_EVTCHN_WORD); + __this_cpu_write(current_bit_idx, bit_idx); + } while (bit_idx != 0); + + /* Scan start_l1i twice; all others once. 
*/ + if ((word_idx != start_word_idx) || (i != 0)) + pending_words &= ~(1UL << word_idx); + + word_idx = (word_idx + 1) % BITS_PER_EVTCHN_WORD; + } +} + +irqreturn_t xen_debug_interrupt(int irq, void *dev_id) +{ + struct shared_info *sh = HYPERVISOR_shared_info; + int cpu = smp_processor_id(); + xen_ulong_t *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu); + int i; + unsigned long flags; + static DEFINE_SPINLOCK(debug_lock); + struct vcpu_info *v; + + spin_lock_irqsave(&debug_lock, flags); + + printk("\nvcpu %d\n ", cpu); + + for_each_online_cpu(i) { + int pending; + v = per_cpu(xen_vcpu, i); + pending = (get_irq_regs() && i == cpu) + ? xen_irqs_disabled(get_irq_regs()) + : v->evtchn_upcall_mask; + printk("%d: masked=%d pending=%d event_sel %0*"PRI_xen_ulong"\n ", i, + pending, v->evtchn_upcall_pending, + (int)(sizeof(v->evtchn_pending_sel)*2), + v->evtchn_pending_sel); + } + v = per_cpu(xen_vcpu, cpu); + + printk("\npending:\n "); + for (i = ARRAY_SIZE(sh->evtchn_pending)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", + (int)sizeof(sh->evtchn_pending[0])*2, + sh->evtchn_pending[i], + i % 8 == 0 ? "\n " : " "); + printk("\nglobal mask:\n "); + for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", + (int)(sizeof(sh->evtchn_mask[0])*2), + sh->evtchn_mask[i], + i % 8 == 0 ? "\n " : " "); + + printk("\nglobally unmasked:\n "); + for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", + (int)(sizeof(sh->evtchn_mask[0])*2), + sh->evtchn_pending[i] & ~sh->evtchn_mask[i], + i % 8 == 0 ? "\n " : " "); + + printk("\nlocal cpu%d mask:\n ", cpu); + for (i = (NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD)-1; i >= 0; i--) + printk("%0*"PRI_xen_ulong"%s", (int)(sizeof(cpu_evtchn[0])*2), + cpu_evtchn[i], + i % 8 == 0 ? 
"\n " : " "); + + printk("\nlocally unmasked:\n "); + for (i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--) { + xen_ulong_t pending = sh->evtchn_pending[i] + & ~sh->evtchn_mask[i] + & cpu_evtchn[i]; + printk("%0*"PRI_xen_ulong"%s", + (int)(sizeof(sh->evtchn_mask[0])*2), + pending, i % 8 == 0 ? "\n " : " "); + } + + printk("\npending list:\n"); + for (i = 0; i < NR_EVENT_CHANNELS; i++) { + if (sync_test_bit(i, BM(sh->evtchn_pending))) { + int word_idx = i / BITS_PER_EVTCHN_WORD; + printk(" %d: event %d -> irq %d%s%s%s\n", + cpu_from_evtchn(i), i, + evtchn_to_irq[i], + sync_test_bit(word_idx, BM(&v->evtchn_pending_sel)) + ? "" : " l2-clear", + !sync_test_bit(i, BM(sh->evtchn_mask)) + ? "" : " globally-masked", + sync_test_bit(i, BM(cpu_evtchn)) + ? "" : " locally-masked"); + } + } + + spin_unlock_irqrestore(&debug_lock, flags); + + return IRQ_HANDLED; +} -- 1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 07/12] xen/events: add struct evtchn_ops for the low-level port operations
From: David Vrabel <david.vrabel@citrix.com> evtchn_ops contains the low-level operations that access the shared data structures. This allows alternate ABIs to be supported. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 4 ++ drivers/xen/events/events_internal.h | 62 +++++++++++++++++++++++++++++---- drivers/xen/events/n-level.c | 32 +++++++++++++---- 3 files changed, 82 insertions(+), 16 deletions(-) diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index 8d5a5cf..dff8134 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -60,6 +60,8 @@ #include "events_internal.h" +const struct evtchn_ops *evtchn_ops; + /* * This lock protects updates to the following mapping and reference-count * arrays. The lock does not need to be acquired to read the mapping tables. @@ -1497,6 +1499,8 @@ void __init xen_init_IRQ(void) { int i; + xen_evtchn_init_nlevel(); + evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq), GFP_KERNEL); BUG_ON(!evtchn_to_irq); diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h index 79ac70b..513c34d 100644 --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -54,21 +54,67 @@ struct irq_info { #define PIRQ_NEEDS_EOI (1 << 0) #define PIRQ_SHAREABLE (1 << 1) +struct evtchn_ops { + void (*bind_to_cpu)(struct irq_info *info, int cpu); + + void (*clear_pending)(int port); + void (*set_pending)(int port); + bool (*is_pending)(int port); + bool (*test_and_set_mask)(int port); + void (*mask)(int port); + void (*unmask)(int port); + + void (*handle_events)(int cpu); +}; + +extern const struct evtchn_ops *evtchn_ops; + extern int *evtchn_to_irq; struct irq_info *info_for_irq(unsigned irq); unsigned cpu_from_irq(unsigned irq); unsigned cpu_from_evtchn(unsigned int evtchn); -void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu); +static inline void xen_evtchn_port_bind_to_cpu(struct 
irq_info *info, int cpu) +{ + evtchn_ops->bind_to_cpu(info, cpu); +} + +static inline void clear_evtchn(int port) +{ + evtchn_ops->clear_pending(port); +} + +static inline void set_evtchn(int port) +{ + evtchn_ops->set_pending(port); +} + +static inline bool test_evtchn(int port) +{ + return evtchn_ops->is_pending(port); +} + +static inline bool test_and_set_mask(int port) +{ + return evtchn_ops->test_and_set_mask(port); +} + +static inline void mask_evtchn(int port) +{ + return evtchn_ops->mask(port); +} + +static inline void unmask_evtchn(int port) +{ + return evtchn_ops->unmask(port); +} -void clear_evtchn(int port); -void set_evtchn(int port); -int test_evtchn(int port); -int test_and_set_mask(int port); -void mask_evtchn(int port); -void unmask_evtchn(int port); +static inline void xen_evtchn_handle_events(int cpu) +{ + return evtchn_ops->handle_events(cpu); +} -void xen_evtchn_handle_events(int cpu); +void xen_evtchn_init_nlevel(void); #endif /* #ifndef __EVENTS_INTERNAL_H__ */ diff --git a/drivers/xen/events/n-level.c b/drivers/xen/events/n-level.c index 05762d5..a449b69 100644 --- a/drivers/xen/events/n-level.c +++ b/drivers/xen/events/n-level.c @@ -39,43 +39,43 @@ static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], cpu_evtchn_mask); -void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) +static void nlevel_bind_to_cpu(struct irq_info *info, int cpu) { clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu))); set_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, cpu))); } -void clear_evtchn(int port) +static void nlevel_clear_pending(int port) { struct shared_info *s = HYPERVISOR_shared_info; sync_clear_bit(port, BM(&s->evtchn_pending[0])); } -void set_evtchn(int port) +static void nlevel_set_pending(int port) { struct shared_info *s = HYPERVISOR_shared_info; sync_set_bit(port, BM(&s->evtchn_pending[0])); } -int test_evtchn(int port) +static bool nlevel_is_pending(int port) { struct shared_info *s = 
HYPERVISOR_shared_info; return sync_test_bit(port, BM(&s->evtchn_pending[0])); } -int test_and_set_mask(int port) +static bool nlevel_test_and_set_mask(int port) { struct shared_info *s = HYPERVISOR_shared_info; return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0])); } -void mask_evtchn(int port) +static void nlevel_mask(int port) { struct shared_info *s = HYPERVISOR_shared_info; sync_set_bit(port, BM(&s->evtchn_mask[0])); } -void unmask_evtchn(int port) +static void nlevel_unmask(int port) { struct shared_info *s = HYPERVISOR_shared_info; unsigned int cpu = get_cpu(); @@ -141,7 +141,7 @@ static inline xen_ulong_t active_evtchns(unsigned int cpu, * a bitset of words which contain pending event bits. The second * level is a bitset of pending events themselves. */ -void xen_evtchn_handle_events(int cpu) +static void nlevel_handle_events(int cpu) { xen_ulong_t pending_words; int start_word_idx, start_bit_idx; @@ -313,3 +313,19 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id) return IRQ_HANDLED; } + +static const struct evtchn_ops evtchn_ops_nlevel = { + .bind_to_cpu = nlevel_bind_to_cpu, + .clear_pending = nlevel_clear_pending, + .set_pending = nlevel_set_pending, + .is_pending = nlevel_is_pending, + .test_and_set_mask = nlevel_test_and_set_mask, + .mask = nlevel_mask, + .unmask = nlevel_unmask, + .handle_events = nlevel_handle_events, +}; + +void __init xen_evtchn_init_nlevel(void) +{ + evtchn_ops = &evtchn_ops_nlevel; +} -- 1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 08/12] xen/events: allow setup of irq_info to fail
From: David Vrabel <david.vrabel@citrix.com> The FIFO-based event ABI requires additional setup of newly bound events (it may need to expand the event array) and this setup may fail. xen_irq_info_common_init() is a useful place to put this setup so allow this call to fail. This call and the other similar calls are renamed to be *_setup() to reflect that they may now fail. This failure can only occur with new event channels not on rebind. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 156 +++++++++++++++++++++++++------------------ 1 files changed, 91 insertions(+), 65 deletions(-) diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index dff8134..759acb4 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -98,7 +98,7 @@ struct irq_info *info_for_irq(unsigned irq) } /* Constructors for packed IRQ information. */ -static void xen_irq_info_common_init(struct irq_info *info, +static int xen_irq_info_common_setup(struct irq_info *info, unsigned irq, enum xen_irq_type type, unsigned short evtchn, @@ -115,45 +115,47 @@ static void xen_irq_info_common_init(struct irq_info *info, evtchn_to_irq[evtchn] = irq; irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN); + + return 0; } -static void xen_irq_info_evtchn_init(unsigned irq, +static int xen_irq_info_evtchn_setup(unsigned irq, unsigned short evtchn) { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_EVTCHN, evtchn, 0); + return xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0); } -static void xen_irq_info_ipi_init(unsigned cpu, +static int xen_irq_info_ipi_setup(unsigned cpu, unsigned irq, unsigned short evtchn, enum ipi_vector ipi) { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_IPI, evtchn, 0); - info->u.ipi = ipi; per_cpu(ipi_to_irq, cpu)[ipi] = irq; + + return xen_irq_info_common_setup(info, irq, IRQT_IPI, evtchn, 0); } -static void 
xen_irq_info_virq_init(unsigned cpu, +static int xen_irq_info_virq_setup(unsigned cpu, unsigned irq, unsigned short evtchn, unsigned short virq) { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_VIRQ, evtchn, 0); - info->u.virq = virq; per_cpu(virq_to_irq, cpu)[virq] = irq; + + return xen_irq_info_common_setup(info, irq, IRQT_VIRQ, evtchn, 0); } -static void xen_irq_info_pirq_init(unsigned irq, +static int xen_irq_info_pirq_setup(unsigned irq, unsigned short evtchn, unsigned short pirq, unsigned short gsi, @@ -162,12 +164,12 @@ static void xen_irq_info_pirq_init(unsigned irq, { struct irq_info *info = info_for_irq(irq); - xen_irq_info_common_init(info, irq, IRQT_PIRQ, evtchn, 0); - info->u.pirq.pirq = pirq; info->u.pirq.gsi = gsi; info->u.pirq.domid = domid; info->u.pirq.flags = flags; + + return xen_irq_info_common_setup(info, irq, IRQT_PIRQ, evtchn, 0); } /* @@ -515,6 +517,47 @@ int xen_irq_from_gsi(unsigned gsi) } EXPORT_SYMBOL_GPL(xen_irq_from_gsi); +static void __unbind_from_irq(unsigned int irq) +{ + struct evtchn_close close; + int evtchn = evtchn_from_irq(irq); + struct irq_info *info = irq_get_handler_data(irq); + + if (info->refcnt > 0) { + info->refcnt--; + if (info->refcnt != 0) + return; + } + + if (VALID_EVTCHN(evtchn)) { + close.port = evtchn; + if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0) + BUG(); + + switch (type_from_irq(irq)) { + case IRQT_VIRQ: + per_cpu(virq_to_irq, cpu_from_evtchn(evtchn)) + [virq_from_irq(irq)] = -1; + break; + case IRQT_IPI: + per_cpu(ipi_to_irq, cpu_from_evtchn(evtchn)) + [ipi_from_irq(irq)] = -1; + break; + default: + break; + } + + /* Closed ports are implicitly re-bound to VCPU0. */ + bind_evtchn_to_cpu(evtchn, 0); + + evtchn_to_irq[evtchn] = -1; + } + + BUG_ON(info_for_irq(irq)->type == IRQT_UNBOUND); + + xen_free_irq(irq); +} + /* * Do not make any assumptions regarding the relationship between the * IRQ number returned here and the Xen pirq argument. 
@@ -530,6 +573,7 @@ int xen_bind_pirq_gsi_to_irq(unsigned gsi, { int irq = -1; struct physdev_irq irq_op; + int ret; mutex_lock(&irq_mapping_update_lock); @@ -557,8 +601,13 @@ int xen_bind_pirq_gsi_to_irq(unsigned gsi, goto out; } - xen_irq_info_pirq_init(irq, 0, pirq, gsi, DOMID_SELF, + ret = xen_irq_info_pirq_setup(irq, 0, pirq, gsi, DOMID_SELF, shareable ? PIRQ_SHAREABLE : 0); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } pirq_query_unmask(irq); /* We try to use the handler with the appropriate semantic for the @@ -618,7 +667,9 @@ int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc *msidesc, irq_set_chip_and_handler_name(irq, &xen_pirq_chip, handle_edge_irq, name); - xen_irq_info_pirq_init(irq, 0, pirq, 0, domid, 0); + ret = xen_irq_info_pirq_setup(irq, 0, pirq, 0, domid, 0); + if (ret < 0) + goto error_irq; ret = irq_set_msi_desc(irq, msidesc); if (ret < 0) goto error_irq; @@ -626,8 +677,8 @@ out: mutex_unlock(&irq_mapping_update_lock); return irq; error_irq: + __unbind_from_irq(irq); mutex_unlock(&irq_mapping_update_lock); - xen_free_irq(irq); return ret; } #endif @@ -697,9 +748,11 @@ int xen_pirq_from_irq(unsigned irq) return pirq_from_irq(irq); } EXPORT_SYMBOL_GPL(xen_pirq_from_irq); + int bind_evtchn_to_irq(unsigned int evtchn) { int irq; + int ret; mutex_lock(&irq_mapping_update_lock); @@ -713,7 +766,12 @@ int bind_evtchn_to_irq(unsigned int evtchn) irq_set_chip_and_handler_name(irq, &xen_dynamic_chip, handle_edge_irq, "event"); - xen_irq_info_evtchn_init(irq, evtchn); + ret = xen_irq_info_evtchn_setup(irq, evtchn); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } } else { struct irq_info *info = info_for_irq(irq); WARN_ON(info == NULL || info->type != IRQT_EVTCHN); @@ -730,6 +788,7 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu) { struct evtchn_bind_ipi bind_ipi; int evtchn, irq; + int ret; mutex_lock(&irq_mapping_update_lock); @@ -749,8 +808,12 @@ static int 
bind_ipi_to_irq(unsigned int ipi, unsigned int cpu) BUG(); evtchn = bind_ipi.port; - xen_irq_info_ipi_init(cpu, irq, evtchn, ipi); - + ret = xen_irq_info_ipi_setup(cpu, irq, evtchn, ipi); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } bind_evtchn_to_cpu(evtchn, cpu); } else { struct irq_info *info = info_for_irq(irq); @@ -829,7 +892,12 @@ int bind_virq_to_irq(unsigned int virq, unsigned int cpu) evtchn = ret; } - xen_irq_info_virq_init(cpu, irq, evtchn, virq); + ret = xen_irq_info_virq_setup(cpu, irq, evtchn, virq); + if (ret < 0) { + __unbind_from_irq(irq); + irq = ret; + goto out; + } bind_evtchn_to_cpu(evtchn, cpu); } else { @@ -845,50 +913,8 @@ out: static void unbind_from_irq(unsigned int irq) { - struct evtchn_close close; - int evtchn = evtchn_from_irq(irq); - struct irq_info *info = irq_get_handler_data(irq); - - if (WARN_ON(!info)) - return; - mutex_lock(&irq_mapping_update_lock); - - if (info->refcnt > 0) { - info->refcnt--; - if (info->refcnt != 0) - goto done; - } - - if (VALID_EVTCHN(evtchn)) { - close.port = evtchn; - if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0) - BUG(); - - switch (type_from_irq(irq)) { - case IRQT_VIRQ: - per_cpu(virq_to_irq, cpu_from_evtchn(evtchn)) - [virq_from_irq(irq)] = -1; - break; - case IRQT_IPI: - per_cpu(ipi_to_irq, cpu_from_evtchn(evtchn)) - [ipi_from_irq(irq)] = -1; - break; - default: - break; - } - - /* Closed ports are implicitly re-bound to VCPU0. 
*/ - bind_evtchn_to_cpu(evtchn, 0); - - evtchn_to_irq[evtchn] = -1; - } - - BUG_ON(info_for_irq(irq)->type == IRQT_UNBOUND); - - xen_free_irq(irq); - - done: + __unbind_from_irq(irq); mutex_unlock(&irq_mapping_update_lock); } @@ -1126,7 +1152,7 @@ void rebind_evtchn_irq(int evtchn, int irq) so there should be a proper type */ BUG_ON(info->type == IRQT_UNBOUND); - xen_irq_info_evtchn_init(irq, evtchn); + xen_irq_info_evtchn_setup(irq, evtchn); mutex_unlock(&irq_mapping_update_lock); @@ -1291,7 +1317,7 @@ static void restore_cpu_virqs(unsigned int cpu) evtchn = bind_virq.port; /* Record the new mapping. */ - xen_irq_info_virq_init(cpu, irq, evtchn, virq); + xen_irq_info_virq_setup(cpu, irq, evtchn, virq); bind_evtchn_to_cpu(evtchn, cpu); } } @@ -1315,7 +1341,7 @@ static void restore_cpu_ipis(unsigned int cpu) evtchn = bind_ipi.port; /* Record the new mapping. */ - xen_irq_info_ipi_init(cpu, irq, evtchn, ipi); + xen_irq_info_ipi_setup(cpu, irq, evtchn, ipi); bind_evtchn_to_cpu(evtchn, cpu); } } -- 1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 09/12] xen/events: add an evtchn_op for port setup

From: David Vrabel <david.vrabel@citrix.com> Add a hook for port-specific setup and call it from xen_irq_info_common_setup(). Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 2 +- drivers/xen/events/events_internal.h | 8 ++++++++ 2 files changed, 9 insertions(+), 1 deletions(-) diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index 759acb4..b2746f1 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -116,7 +116,7 @@ static int xen_irq_info_common_setup(struct irq_info *info, irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN); - return 0; + return xen_evtchn_port_setup(info); } static int xen_irq_info_evtchn_setup(unsigned irq, diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h index 513c34d..32cb928 100644 --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -55,6 +55,7 @@ struct irq_info { #define PIRQ_SHAREABLE (1 << 1) struct evtchn_ops { + int (*setup)(struct irq_info *info); void (*bind_to_cpu)(struct irq_info *info, int cpu); void (*clear_pending)(int port); @@ -75,6 +76,13 @@ struct irq_info *info_for_irq(unsigned irq); unsigned cpu_from_irq(unsigned irq); unsigned cpu_from_evtchn(unsigned int evtchn); +static inline int xen_evtchn_port_setup(struct irq_info *info) +{ + if (evtchn_ops->setup) + return evtchn_ops->setup(info); + return 0; +} + static inline void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) { evtchn_ops->bind_to_cpu(info, cpu); -- 1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 10/12] xen/events: Refactor evtchn_to_irq array to be dynamically allocated
From: Malcolm Crossley <malcolm.crossley@citrix.com> Refactor static array evtchn_to_irq array to be dynamically allocated by implementing get and set functions for accesses to the array. Two new port ops are added: max_channels (maximum supported number of event channels) and nr_channels (number of currently usable event channels). For the N-level ABI, these numbers are both the same as the shared data structure is a fixed size. For the FIFO ABI, these will be different as the event array is expanded dynamically. This allows more than 65000 event channels so an unsigned short is no longer sufficient for an event channel port number and unsigned int is used instead. Signed-off-by: Malcolm Crossley <malcolm.crossley@citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/events.c | 129 +++++++++++++++++++++++++--------- drivers/xen/events/events_internal.h | 18 ++++- drivers/xen/events/n-level.c | 11 +++- 3 files changed, 121 insertions(+), 37 deletions(-) diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index b2746f1..55d33d1 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -76,12 +76,16 @@ static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1}; /* IRQ <-> IPI mapping */ static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1}; -int *evtchn_to_irq; +int **evtchn_to_irq; #ifdef CONFIG_X86 static unsigned long *pirq_eoi_map; #endif static bool (*pirq_needs_eoi)(unsigned irq); +#define EVTCHN_ROW(e) (e / (PAGE_SIZE/sizeof(**evtchn_to_irq))) +#define EVTCHN_COL(e) (e % (PAGE_SIZE/sizeof(**evtchn_to_irq))) +#define EVTCHN_PER_ROW (PAGE_SIZE / sizeof(**evtchn_to_irq)) + /* Xen will never allocate port zero for any purpose. 
*/ #define VALID_EVTCHN(chn) ((chn) != 0) @@ -91,6 +95,61 @@ static struct irq_chip xen_pirq_chip; static void enable_dynirq(struct irq_data *data); static void disable_dynirq(struct irq_data *data); +static void clear_evtchn_to_irq_row(unsigned row) +{ + unsigned col; + + for (col = 0; col < EVTCHN_PER_ROW; col++) + evtchn_to_irq[row][col] = -1; +} + +static void clear_evtchn_to_irq_all(void) +{ + unsigned row; + + for (row = 0; row < EVTCHN_ROW(xen_evtchn_max_channels()); row++) { + if (evtchn_to_irq[row] == NULL) + continue; + clear_evtchn_to_irq_row(row); + } +} + +static int set_evtchn_to_irq(unsigned evtchn, unsigned irq) +{ + unsigned row; + unsigned col; + + if (evtchn >= xen_evtchn_max_channels()) + return -EINVAL; + + row = EVTCHN_ROW(evtchn); + col = EVTCHN_COL(evtchn); + + if (evtchn_to_irq[row] == NULL) { + /* Unallocated irq entries return -1 anyway */ + if (irq == -1) + return 0; + + evtchn_to_irq[row] = (int *)get_zeroed_page(GFP_KERNEL); + if (evtchn_to_irq[row] == NULL) + return -ENOMEM; + + clear_evtchn_to_irq_row(row); + } + + evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)] = irq; + return 0; +} + +int get_evtchn_to_irq(unsigned evtchn) +{ + if (evtchn >= xen_evtchn_max_channels()) + return -1; + if (evtchn_to_irq[EVTCHN_ROW(evtchn)] == NULL) + return -1; + return evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)]; +} + /* Get info for IRQ */ struct irq_info *info_for_irq(unsigned irq) { @@ -101,7 +160,7 @@ struct irq_info *info_for_irq(unsigned irq) static int xen_irq_info_common_setup(struct irq_info *info, unsigned irq, enum xen_irq_type type, - unsigned short evtchn, + unsigned evtchn, unsigned short cpu) { @@ -112,7 +171,8 @@ static int xen_irq_info_common_setup(struct irq_info *info, info->evtchn = evtchn; info->cpu = cpu; - evtchn_to_irq[evtchn] = irq; + if (set_evtchn_to_irq(evtchn, irq)) + return -ENOMEM; irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN); @@ -120,7 +180,7 @@ static int xen_irq_info_common_setup(struct 
irq_info *info, } static int xen_irq_info_evtchn_setup(unsigned irq, - unsigned short evtchn) + unsigned evtchn) { struct irq_info *info = info_for_irq(irq); @@ -129,7 +189,7 @@ static int xen_irq_info_evtchn_setup(unsigned irq, static int xen_irq_info_ipi_setup(unsigned cpu, unsigned irq, - unsigned short evtchn, + unsigned evtchn, enum ipi_vector ipi) { struct irq_info *info = info_for_irq(irq); @@ -143,8 +203,8 @@ static int xen_irq_info_ipi_setup(unsigned cpu, static int xen_irq_info_virq_setup(unsigned cpu, unsigned irq, - unsigned short evtchn, - unsigned short virq) + unsigned evtchn, + unsigned virq) { struct irq_info *info = info_for_irq(irq); @@ -156,9 +216,9 @@ static int xen_irq_info_virq_setup(unsigned cpu, } static int xen_irq_info_pirq_setup(unsigned irq, - unsigned short evtchn, - unsigned short pirq, - unsigned short gsi, + unsigned evtchn, + unsigned pirq, + unsigned gsi, uint16_t domid, unsigned char flags) { @@ -185,7 +245,7 @@ static unsigned int evtchn_from_irq(unsigned irq) unsigned irq_from_evtchn(unsigned int evtchn) { - return evtchn_to_irq[evtchn]; + return get_evtchn_to_irq(evtchn); } EXPORT_SYMBOL_GPL(irq_from_evtchn); @@ -231,7 +291,7 @@ unsigned cpu_from_irq(unsigned irq) unsigned int cpu_from_evtchn(unsigned int evtchn) { - int irq = evtchn_to_irq[evtchn]; + int irq = get_evtchn_to_irq(evtchn); unsigned ret = 0; if (irq != -1) @@ -257,7 +317,7 @@ static bool pirq_needs_eoi_flag(unsigned irq) static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu) { - int irq = evtchn_to_irq[chn]; + int irq = get_evtchn_to_irq(chn); struct irq_info *info = info_for_irq(irq); BUG_ON(irq == -1); @@ -452,7 +512,9 @@ static unsigned int __startup_pirq(unsigned int irq) pirq_query_unmask(irq); - evtchn_to_irq[evtchn] = irq; + rc = set_evtchn_to_irq(evtchn, irq); + if (rc != 0) + return 0; bind_evtchn_to_cpu(evtchn, 0); info->evtchn = evtchn; @@ -473,7 +535,7 @@ static void shutdown_pirq(struct irq_data *data) struct evtchn_close close; unsigned 
int irq = data->irq; struct irq_info *info = info_for_irq(irq); - int evtchn = evtchn_from_irq(irq); + int rc, evtchn = evtchn_from_irq(irq); BUG_ON(info->type != IRQT_PIRQ); @@ -487,7 +549,7 @@ static void shutdown_pirq(struct irq_data *data) BUG(); bind_evtchn_to_cpu(evtchn, 0); - evtchn_to_irq[evtchn] = -1; + rc = set_evtchn_to_irq(evtchn, -1); info->evtchn = 0; } @@ -550,7 +612,7 @@ static void __unbind_from_irq(unsigned int irq) /* Closed ports are implicitly re-bound to VCPU0. */ bind_evtchn_to_cpu(evtchn, 0); - evtchn_to_irq[evtchn] = -1; + set_evtchn_to_irq(evtchn, -1); } BUG_ON(info_for_irq(irq)->type == IRQT_UNBOUND); @@ -754,9 +816,12 @@ int bind_evtchn_to_irq(unsigned int evtchn) int irq; int ret; + if (evtchn >= xen_evtchn_max_channels()) + return -ENOMEM; + mutex_lock(&irq_mapping_update_lock); - irq = evtchn_to_irq[evtchn]; + irq = get_evtchn_to_irq(evtchn); if (irq == -1) { irq = xen_allocate_irq_dynamic(); @@ -846,7 +911,7 @@ static int find_virq(unsigned int virq, unsigned int cpu) int port, rc = -ENOENT; memset(&status, 0, sizeof(status)); - for (port = 0; port <= NR_EVENT_CHANNELS; port++) { + for (port = 0; port < xen_evtchn_max_channels(); port++) { status.dom = DOMID_SELF; status.port = port; rc = HYPERVISOR_event_channel_op(EVTCHNOP_status, &status); @@ -926,8 +991,9 @@ int bind_evtchn_to_irqhandler(unsigned int evtchn, int irq, retval; irq = bind_evtchn_to_irq(evtchn); - if (irq < 0) + if (irq < 0){ return irq; + } retval = request_irq(irq, handler, irqflags, devname, dev_id); if (retval != 0) { unbind_from_irq(irq); @@ -1016,7 +1082,7 @@ EXPORT_SYMBOL_GPL(unbind_from_irqhandler); int evtchn_make_refcounted(unsigned int evtchn) { - int irq = evtchn_to_irq[evtchn]; + int irq = get_evtchn_to_irq(evtchn); struct irq_info *info; if (irq == -1) @@ -1041,12 +1107,12 @@ int evtchn_get(unsigned int evtchn) struct irq_info *info; int err = -ENOENT; - if (evtchn >= NR_EVENT_CHANNELS) + if (evtchn >= xen_evtchn_max_channels()) return -EINVAL; 
mutex_lock(&irq_mapping_update_lock); - irq = evtchn_to_irq[evtchn]; + irq = get_evtchn_to_irq(evtchn); if (irq == -1) goto done; @@ -1070,7 +1136,7 @@ EXPORT_SYMBOL_GPL(evtchn_get); void evtchn_put(unsigned int evtchn) { - int irq = evtchn_to_irq[evtchn]; + int irq = get_evtchn_to_irq(evtchn); if (WARN_ON(irq == -1)) return; unbind_from_irq(irq); @@ -1147,7 +1213,7 @@ void rebind_evtchn_irq(int evtchn, int irq) mutex_lock(&irq_mapping_update_lock); /* After resume the irq<->evtchn mappings are all cleared out */ - BUG_ON(evtchn_to_irq[evtchn] != -1); + BUG_ON(get_evtchn_to_irq(evtchn) != -1); /* Expect irq to have been bound before, so there should be a proper type */ BUG_ON(info->type == IRQT_UNBOUND); @@ -1422,15 +1488,14 @@ void xen_irq_resume(void) struct irq_info *info; /* New event-channel space is not ''live'' yet. */ - for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++) + for (evtchn = 0; evtchn < xen_evtchn_nr_channels(); evtchn++) mask_evtchn(evtchn); /* No IRQ <-> event-channel mappings. */ list_for_each_entry(info, &xen_irq_list_head, list) info->evtchn = 0; /* zap event-channel binding */ - for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++) - evtchn_to_irq[evtchn] = -1; + clear_evtchn_to_irq_all(); for_each_possible_cpu(cpu) { restore_cpu_virqs(cpu); @@ -1527,14 +1592,12 @@ void __init xen_init_IRQ(void) xen_evtchn_init_nlevel(); - evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq), - GFP_KERNEL); + evtchn_to_irq = kcalloc(EVTCHN_ROW(xen_evtchn_max_channels()), + sizeof(*evtchn_to_irq), GFP_KERNEL); BUG_ON(!evtchn_to_irq); - for (i = 0; i < NR_EVENT_CHANNELS; i++) - evtchn_to_irq[i] = -1; /* No event channels are ''live'' right now. 
*/ - for (i = 0; i < NR_EVENT_CHANNELS; i++) + for (i = 0; i < xen_evtchn_nr_channels(); i++) mask_evtchn(i); pirq_needs_eoi = pirq_needs_eoi_flag; diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h index 32cb928..9d8b70c 100644 --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -35,7 +35,7 @@ struct irq_info { int refcnt; enum xen_irq_type type; /* type */ unsigned irq; - unsigned short evtchn; /* event channel */ + unsigned int evtchn; /* event channel */ unsigned short cpu; /* cpu bound */ union { @@ -55,6 +55,9 @@ struct irq_info { #define PIRQ_SHAREABLE (1 << 1) struct evtchn_ops { + unsigned (*max_channels)(void); + unsigned (*nr_channels)(void); + int (*setup)(struct irq_info *info); void (*bind_to_cpu)(struct irq_info *info, int cpu); @@ -70,12 +73,23 @@ struct evtchn_ops { extern const struct evtchn_ops *evtchn_ops; -extern int *evtchn_to_irq; +extern int **evtchn_to_irq; +int get_evtchn_to_irq(unsigned int evtchn); struct irq_info *info_for_irq(unsigned irq); unsigned cpu_from_irq(unsigned irq); unsigned cpu_from_evtchn(unsigned int evtchn); +static inline unsigned xen_evtchn_max_channels(void) +{ + return evtchn_ops->max_channels(); +} + +static inline unsigned xen_evtchn_nr_channels(void) +{ + return evtchn_ops->nr_channels(); +} + static inline int xen_evtchn_port_setup(struct irq_info *info) { if (evtchn_ops->setup) diff --git a/drivers/xen/events/n-level.c b/drivers/xen/events/n-level.c index a449b69..850a0ed 100644 --- a/drivers/xen/events/n-level.c +++ b/drivers/xen/events/n-level.c @@ -39,6 +39,11 @@ static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], cpu_evtchn_mask); +static unsigned nlevel_max_channels(void) +{ + return NR_EVENT_CHANNELS; +} + static void nlevel_bind_to_cpu(struct irq_info *info, int cpu) { clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu))); @@ -205,7 +210,7 @@ static void nlevel_handle_events(int cpu) /* Process 
port. */ port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx; - irq = evtchn_to_irq[port]; + irq = get_evtchn_to_irq(port); if (irq != -1) { desc = irq_to_desc(irq); @@ -299,7 +304,7 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id) int word_idx = i / BITS_PER_EVTCHN_WORD; printk(" %d: event %d -> irq %d%s%s%s\n", cpu_from_evtchn(i), i, - evtchn_to_irq[i], + get_evtchn_to_irq(i), sync_test_bit(word_idx, BM(&v->evtchn_pending_sel)) ? "" : " l2-clear", !sync_test_bit(i, BM(sh->evtchn_mask)) @@ -315,6 +320,8 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id) } static const struct evtchn_ops evtchn_ops_nlevel = { + .max_channels = nlevel_max_channels, + .nr_channels = nlevel_max_channels, .bind_to_cpu = nlevel_bind_to_cpu, .clear_pending = nlevel_clear_pending, .set_pending = nlevel_set_pending, -- 1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 11/12] xen/events: Add the hypervisor interface for the FIFO-based event channels
From: David Vrabel <david.vrabel@citrix.com> Add the hypercall sub-ops and the structures for the shared data used in the FIFO-based event channel ABI. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- include/xen/interface/event_channel.h | 67 +++++++++++++++++++++++++++++++++ 1 files changed, 67 insertions(+), 0 deletions(-) diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h index f494292..f54ad0a 100644 --- a/include/xen/interface/event_channel.h +++ b/include/xen/interface/event_channel.h @@ -190,6 +190,48 @@ struct evtchn_reset { }; typedef struct evtchn_reset evtchn_reset_t; +/* + * EVTCHNOP_init_control: initialize the control block for the FIFO ABI. + */ +#define EVTCHNOP_init_control 11 +struct evtchn_init_control { + /* IN parameters. */ + uint64_t control_mfn; + uint32_t offset; + uint32_t vcpu; + /* OUT parameters. */ + uint8_t link_bits; +}; + +/* + * EVTCHNOP_expand_array: add an additional page to the event array. + */ +#define EVTCHNOP_expand_array 12 +struct evtchn_expand_array { + /* IN parameters. */ + uint64_t array_mfn; +}; + +/* + * EVTCHNOP_set_priority: set the priority for an event channel. + */ +#define EVTCHNOP_set_priority 13 +struct evtchn_set_priority { + /* IN parameters. */ + uint32_t port; + uint32_t priority; +}; + +/* + * EVTCHNOP_set_limit: set the maximum event channel port that may be bound. + */ +#define EVTCHNOP_set_limit 14 +struct evtchn_set_limit { + /* IN parameters. */ + uint32_t domid; + uint32_t max_port; +}; + struct evtchn_op { uint32_t cmd; /* EVTCHNOP_* */ union { @@ -207,4 +249,29 @@ struct evtchn_op { }; DEFINE_GUEST_HANDLE_STRUCT(evtchn_op); +/* + * FIFO ABI + */ + +/* Events may have priorities from 0 (highest) to 15 (lowest). 
*/ +#define EVTCHN_FIFO_PRIORITY_MIN 15 +#define EVTCHN_FIFO_PRIORITY_DEFAULT 7 + +#define EVTCHN_FIFO_MAX_QUEUES (EVTCHN_FIFO_PRIORITY_MIN + 1) + +typedef uint32_t event_word_t; + +#define EVTCHN_FIFO_PENDING 31 +#define EVTCHN_FIFO_MASKED 30 +#define EVTCHN_FIFO_LINKED 29 + +#define EVTCHN_FIFO_LINK_BITS 17 +#define EVTCHN_FIFO_LINK_MASK ((1 << EVTCHN_FIFO_LINK_BITS) - 1) + +struct evtchn_fifo_control_block { + uint32_t ready; + uint32_t _rsvd1; + event_word_t head[EVTCHN_FIFO_MAX_QUEUES]; +}; + #endif /* __XEN_PUBLIC_EVENT_CHANNEL_H__ */ -- 1.7.2.5
David Vrabel
2013-Aug-09 18:15 UTC
[PATCH 12/12] xen/events: use the FIFO-based ABI if available
From: David Vrabel <david.vrabel@citrix.com> If the hypervisor supports the FIFO-based ABI, enable it by initializing the control block for the boot VCPU and subsequent VCPUs as they are brought up. The event array is expanded as required when event ports are setup. This implementation has some known limitations: - Migration will not work as the control blocks or event arrays are not remapped by Xen at the destination. - The timer VIRQ which previously was treated as the highest priority event has the default priority. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/events/Makefile | 1 + drivers/xen/events/events.c | 7 +- drivers/xen/events/events_internal.h | 1 + drivers/xen/events/fifo.c | 335 ++++++++++++++++++++++++++++++++++ 4 files changed, 343 insertions(+), 1 deletions(-) create mode 100644 drivers/xen/events/fifo.c diff --git a/drivers/xen/events/Makefile b/drivers/xen/events/Makefile index aea331e..74644d0 100644 --- a/drivers/xen/events/Makefile +++ b/drivers/xen/events/Makefile @@ -1,2 +1,3 @@ obj-y += events.o +obj-y += fifo.o obj-y += n-level.o diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c index 55d33d1..2e7416e 100644 --- a/drivers/xen/events/events.c +++ b/drivers/xen/events/events.c @@ -1589,8 +1589,13 @@ void xen_callback_vector(void) {} void __init xen_init_IRQ(void) { int i; + int ret; - xen_evtchn_init_nlevel(); + ret = xen_evtchn_init_fifo(); + if (ret < 0) { + printk(KERN_INFO "xen: falling back to n-level event channels"); + xen_evtchn_init_nlevel(); + } evtchn_to_irq = kcalloc(EVTCHN_ROW(xen_evtchn_max_channels()), sizeof(*evtchn_to_irq), GFP_KERNEL); diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h index 9d8b70c..e14f616 100644 --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -138,5 +138,6 @@ static inline void xen_evtchn_handle_events(int cpu) } void xen_evtchn_init_nlevel(void); +int xen_evtchn_init_fifo(void); 
#endif /* #ifndef __EVENTS_INTERNAL_H__ */ diff --git a/drivers/xen/events/fifo.c b/drivers/xen/events/fifo.c new file mode 100644 index 0000000..b07a603 --- /dev/null +++ b/drivers/xen/events/fifo.c @@ -0,0 +1,335 @@ +/* + * Xen event channels (FIFO-based ABI) + * + * Copyright (C) 2013 Citrix Systems R&D ltd. + * + * This source code is licensed under the GNU General Public License, + * Version 2 or later. See the file COPYING for more details. + */ + +#include <linux/linkage.h> +#include <linux/interrupt.h> +#include <linux/irq.h> +#include <linux/module.h> +#include <linux/smp.h> +#include <linux/percpu.h> +#include <linux/cpu.h> + +#include <asm/sync_bitops.h> +#include <asm/xen/hypercall.h> +#include <asm/xen/hypervisor.h> +#include <asm/xen/page.h> + +#include <xen/xen.h> +#include <xen/xen-ops.h> +#include <xen/events.h> +#include <xen/interface/xen.h> +#include <xen/interface/event_channel.h> + +#include "events_internal.h" + +#define EVENT_WORDS_PER_PAGE (PAGE_SIZE / sizeof(event_word_t)) +#define MAX_EVENT_ARRAY_PAGES ((1 << EVTCHN_FIFO_LINK_BITS) \ + / EVENT_WORDS_PER_PAGE) + +struct evtchn_fifo_queue { + uint32_t head[EVTCHN_FIFO_MAX_QUEUES]; +}; + +static DEFINE_PER_CPU(struct evtchn_fifo_control_block *, cpu_control_block); +static DEFINE_PER_CPU(struct evtchn_fifo_queue, cpu_queue); +static event_word_t *event_array[MAX_EVENT_ARRAY_PAGES]; +static unsigned event_array_pages; + +#define BM(w) ((unsigned long *)(w)) + +static inline event_word_t *event_word_from_port(int port) +{ + int i = port / EVENT_WORDS_PER_PAGE; + + if (i >= event_array_pages) + return NULL; + return event_array[i] + port % EVENT_WORDS_PER_PAGE; +} + +static unsigned fifo_max_channels(void) +{ + return 1 << EVTCHN_FIFO_LINK_BITS; +} + +static unsigned fifo_nr_channels(void) +{ + return event_array_pages * EVENT_WORDS_PER_PAGE; +} + +static int fifo_setup(struct irq_info *info) +{ + int port = info->evtchn; + int i; + int ret = -ENOMEM; + + i = port / EVENT_WORDS_PER_PAGE; + + if 
(i >= MAX_EVENT_ARRAY_PAGES) + return -EINVAL; + + while (i >= event_array_pages) { + struct page *array_page = NULL; + struct evtchn_expand_array expand_array; + + array_page = alloc_page(GFP_KERNEL | __GFP_ZERO); + if (array_page == NULL) + goto error; + + expand_array.array_mfn = virt_to_mfn(page_address(array_page)); + + ret = HYPERVISOR_event_channel_op(EVTCHNOP_expand_array, &expand_array); + if (ret < 0) { + __free_page(array_page); + goto error; + } + + event_array[event_array_pages++] = page_address(array_page); + } + return 0; + + error: + if (event_array_pages == 0) + panic("xen: unable to expand event array with initial page (%d)\n", ret); + else + printk(KERN_ERR "xen: unable to expand event array (%d)\n", ret); + return ret; +} + +static void fifo_bind_to_cpu(struct irq_info *info, int cpu) +{ + /* no-op */ +} + +static void fifo_clear_pending(int port) +{ + event_word_t *word = event_word_from_port(port); + sync_clear_bit(EVTCHN_FIFO_PENDING, BM(word)); +} + +static void fifo_set_pending(int port) +{ + event_word_t *word = event_word_from_port(port); + sync_set_bit(EVTCHN_FIFO_PENDING, BM(word)); +} + +static bool fifo_is_pending(int port) +{ + event_word_t *word = event_word_from_port(port); + return sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)); +} + +static bool fifo_test_and_set_mask(int port) +{ + event_word_t *word = event_word_from_port(port); + return sync_test_and_set_bit(EVTCHN_FIFO_MASKED, BM(word)); +} + +static void fifo_mask(int port) +{ + event_word_t *word = event_word_from_port(port); + if (word) + sync_set_bit(EVTCHN_FIFO_MASKED, BM(word)); +} + +static void fifo_unmask(int port) +{ + unsigned int cpu = get_cpu(); + bool do_hypercall = false; + bool evtchn_pending = false; + + BUG_ON(!irqs_disabled()); + + if (unlikely((cpu != cpu_from_evtchn(port)))) + do_hypercall = true; + else { + event_word_t *word = event_word_from_port(port); + + sync_clear_bit(EVTCHN_FIFO_MASKED, BM(word)); + evtchn_pending = 
sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)); + if (evtchn_pending) + do_hypercall = true; + } + + if (do_hypercall) { + struct evtchn_unmask unmask = { .port = port }; + (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); + } + + put_cpu(); +} + +static uint32_t clear_linked(volatile event_word_t *word) +{ + event_word_t n, o, w; + + w = *word; + + do { + o = w; + n = (w & ~((1 << EVTCHN_FIFO_LINKED) | EVTCHN_FIFO_LINK_MASK)); + } while ( (w = sync_cmpxchg(word, o, n)) != o ); + + return w & EVTCHN_FIFO_LINK_MASK; +} + +static void handle_irq_for_port(int port) +{ + int irq; + struct irq_desc *desc; + + irq = get_evtchn_to_irq(port); + if (irq != -1) { + desc = irq_to_desc(irq); + if (desc) + generic_handle_irq_desc(irq, desc); + } +} + +static void consume_one_event(int cpu, + struct evtchn_fifo_control_block *control_block, + int priority, uint32_t *ready) +{ + struct evtchn_fifo_queue *q = &per_cpu(cpu_queue, cpu); + uint32_t head; + int port; + event_word_t *word; + + head = q->head[priority]; + + /* Reached the tail last time? Read the new HEAD from the + control block. */ + if (head == 0) { + rmb(); /* Ensure word is up-to-date before reading head. */ + head = control_block->head[priority]; + } + + port = head; + word = event_word_from_port(port); + head = clear_linked(word); + + /* + * If the link is non-zero, there are more events in the + * queue, otherwise the queue is empty. + * + * If the queue is empty, clear this priority from our local + * copy of the ready word. 
+ */ + if (head == 0) + clear_bit(priority, BM(ready)); + + if (sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)) + && !sync_test_bit(EVTCHN_FIFO_MASKED, BM(word))) + handle_irq_for_port(port); + + q->head[priority] = head; +} + +#define EVTCHN_FIFO_READY_MASK ((1 << EVTCHN_FIFO_MAX_QUEUES) - 1) + +static void fifo_handle_events(int cpu) +{ + struct evtchn_fifo_control_block *control_block; + uint32_t ready; + int q; + + control_block = per_cpu(cpu_control_block, cpu); + + ready = xchg(&control_block->ready, 0); + + while (ready & EVTCHN_FIFO_READY_MASK) { + q = find_first_bit(BM(&ready), EVTCHN_FIFO_MAX_QUEUES); + consume_one_event(cpu, control_block, q, &ready); + ready |= xchg(&control_block->ready, 0); + } +} + +static const struct evtchn_ops evtchn_ops_fifo = { + .max_channels = fifo_max_channels, + .nr_channels = fifo_nr_channels, + .setup = fifo_setup, + .bind_to_cpu = fifo_bind_to_cpu, + .clear_pending = fifo_clear_pending, + .set_pending = fifo_set_pending, + .is_pending = fifo_is_pending, + .test_and_set_mask = fifo_test_and_set_mask, + .mask = fifo_mask, + .unmask = fifo_unmask, + .handle_events = fifo_handle_events, +}; + +static int __cpuinit fifo_init_control_block(int cpu) +{ + struct page *control_block = NULL; + struct evtchn_init_control init_control; + int ret = -ENOMEM; + + control_block = alloc_page(GFP_KERNEL|__GFP_ZERO); + if (control_block == NULL) + goto error; + + init_control.control_mfn = virt_to_mfn(page_address(control_block)); + init_control.offset = 0; + init_control.vcpu = cpu; + + ret = HYPERVISOR_event_channel_op(EVTCHNOP_init_control, &init_control); + if (ret < 0) + goto error; + + per_cpu(cpu_control_block, cpu) = page_address(control_block); + + return 0; + + error: + __free_page(control_block); + return ret; +} + +static int __cpuinit fifo_cpu_notification(struct notifier_block *self, + unsigned long action, void *hcpu) +{ + int cpu = (long)hcpu; + int ret = 0; + + switch (action) { + case CPU_UP_PREPARE: + ret = 
fifo_init_control_block(cpu); + break; + default: + break; + } + return ret < 0 ? NOTIFY_BAD : NOTIFY_OK; +} + +static struct notifier_block fifo_cpu_notifier __cpuinitdata = { + .notifier_call = fifo_cpu_notification, +}; + + +int __init xen_evtchn_init_fifo(void) +{ + int cpu = get_cpu(); + int ret; + + ret = fifo_init_control_block(cpu); + if (ret < 0) + goto error; + + printk(KERN_INFO "xen: switching to FIFO-based event channels\n"); + + evtchn_ops = &evtchn_ops_fifo; + + register_cpu_notifier(&fifo_cpu_notifier); + + put_cpu(); + return 0; + + error: + put_cpu(); + return ret; +} -- 1.7.2.5
Wei Liu
2013-Aug-16 17:46 UTC
Re: [PATCH 01/12] xen/events: refactor retrigger_dynirq() and resend_irq_on_evtchn()
On Fri, Aug 09, 2013 at 07:15:10PM +0100, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> These two function did the same thing with different parameters, put
> the common bits in retrigger_evtchn().
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/events.c | 27 +++++++++------------------
> 1 files changed, 9 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index a58ac43..1697cf2 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -1529,13 +1529,13 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
> return rebind_irq_to_cpu(data->irq, tcpu);
> }
>
> -int resend_irq_on_evtchn(unsigned int irq)
> +static int retrigger_evtchn(int evtchn)
> {
> - int masked, evtchn = evtchn_from_irq(irq);
> + int masked;
> struct shared_info *s = HYPERVISOR_shared_info;
>
> if (!VALID_EVTCHN(evtchn))
> - return 1;
> + return 0;
>

This looks weird. Either resend_irq_on_evtchn was bogus in the first place, or resend_irq_on_evtchn is genuinely different from retrigger_dynirq. Looking at Linus's tree I tend to believe it is the first case. If so, I think it deserves a separate patch and a backport to stable.

Wei.
Wei Liu
2013-Aug-16 17:47 UTC
Re: [PATCH 07/12] xen/events: add struct evtchn_ops for the low-level port operations
On Fri, Aug 09, 2013 at 07:15:16PM +0100, David Vrabel wrote:> From: David Vrabel <david.vrabel@citrix.com> > > evtchn_ops contains the low-level operations that access the shared > data structures. This allows alternate ABIs to be supported. > > Signed-off-by: David Vrabel <david.vrabel@citrix.com> > --- > drivers/xen/events/events.c | 4 ++ > drivers/xen/events/events_internal.h | 62 +++++++++++++++++++++++++++++---- > drivers/xen/events/n-level.c | 32 +++++++++++++---- > 3 files changed, 82 insertions(+), 16 deletions(-) > > diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c > index 8d5a5cf..dff8134 100644 > --- a/drivers/xen/events/events.c > +++ b/drivers/xen/events/events.c > @@ -60,6 +60,8 @@ > > #include "events_internal.h" > > +const struct evtchn_ops *evtchn_ops; > + > /* > * This lock protects updates to the following mapping and reference-count > * arrays. The lock does not need to be acquired to read the mapping tables. > @@ -1497,6 +1499,8 @@ void __init xen_init_IRQ(void) > { > int i; > > + xen_evtchn_init_nlevel(); > +There is only 2-level ABI, more importantly, it is the default ABI, so use xen_evtchn_init_default instead?> evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq), > GFP_KERNEL); > BUG_ON(!evtchn_to_irq); > diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h > index 79ac70b..513c34d 100644 > --- a/drivers/xen/events/events_internal.h > +++ b/drivers/xen/events/events_internal.h > @@ -54,21 +54,67 @@ struct irq_info { > #define PIRQ_NEEDS_EOI (1 << 0) > #define PIRQ_SHAREABLE (1 << 1) > > +struct evtchn_ops { > + void (*bind_to_cpu)(struct irq_info *info, int cpu); > + > + void (*clear_pending)(int port); > + void (*set_pending)(int port); > + bool (*is_pending)(int port); > + bool (*test_and_set_mask)(int port); > + void (*mask)(int port); > + void (*unmask)(int port); > + > + void (*handle_events)(int cpu); > +}; > + > +extern const struct evtchn_ops *evtchn_ops; > + 
> extern int *evtchn_to_irq; > > struct irq_info *info_for_irq(unsigned irq); > unsigned cpu_from_irq(unsigned irq); > unsigned cpu_from_evtchn(unsigned int evtchn); > > -void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu); > +static inline void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) > +{ > + evtchn_ops->bind_to_cpu(info, cpu); > +} > + > +static inline void clear_evtchn(int port) > +{ > + evtchn_ops->clear_pending(port); > +} > + > +static inline void set_evtchn(int port) > +{ > + evtchn_ops->set_pending(port); > +} > + > +static inline bool test_evtchn(int port) > +{ > + return evtchn_ops->is_pending(port); > +} > + > +static inline bool test_and_set_mask(int port) > +{ > + return evtchn_ops->test_and_set_mask(port); > +} > + > +static inline void mask_evtchn(int port) > +{ > + return evtchn_ops->mask(port); > +} > + > +static inline void unmask_evtchn(int port) > +{ > + return evtchn_ops->unmask(port); > +} > > -void clear_evtchn(int port); > -void set_evtchn(int port); > -int test_evtchn(int port); > -int test_and_set_mask(int port); > -void mask_evtchn(int port); > -void unmask_evtchn(int port); > +static inline void xen_evtchn_handle_events(int cpu) > +{ > + return evtchn_ops->handle_events(cpu); > +} > > -void xen_evtchn_handle_events(int cpu); > +void xen_evtchn_init_nlevel(void); > > #endif /* #ifndef __EVENTS_INTERNAL_H__ */ > diff --git a/drivers/xen/events/n-level.c b/drivers/xen/events/n-level.c > index 05762d5..a449b69 100644 > --- a/drivers/xen/events/n-level.c > +++ b/drivers/xen/events/n-level.c > @@ -39,43 +39,43 @@ > static DEFINE_PER_CPU(xen_ulong_t [NR_EVENT_CHANNELS/BITS_PER_EVTCHN_WORD], > cpu_evtchn_mask); > > -void xen_evtchn_port_bind_to_cpu(struct irq_info *info, int cpu) > +static void nlevel_bind_to_cpu(struct irq_info *info, int cpu) > { > clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu))); > set_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, cpu))); > } > > -void clear_evtchn(int port) > 
+static void nlevel_clear_pending(int port) > { > struct shared_info *s = HYPERVISOR_shared_info; > sync_clear_bit(port, BM(&s->evtchn_pending[0])); > } > > -void set_evtchn(int port) > +static void nlevel_set_pending(int port) > { > struct shared_info *s = HYPERVISOR_shared_info; > sync_set_bit(port, BM(&s->evtchn_pending[0])); > } > > -int test_evtchn(int port) > +static bool nlevel_is_pending(int port) > { > struct shared_info *s = HYPERVISOR_shared_info; > return sync_test_bit(port, BM(&s->evtchn_pending[0])); > } > > -int test_and_set_mask(int port) > +static bool nlevel_test_and_set_mask(int port) > { > struct shared_info *s = HYPERVISOR_shared_info; > return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0])); > } > > -void mask_evtchn(int port) > +static void nlevel_mask(int port) > { > struct shared_info *s = HYPERVISOR_shared_info; > sync_set_bit(port, BM(&s->evtchn_mask[0])); > } > > -void unmask_evtchn(int port) > +static void nlevel_unmask(int port) > { > struct shared_info *s = HYPERVISOR_shared_info; > unsigned int cpu = get_cpu(); > @@ -141,7 +141,7 @@ static inline xen_ulong_t active_evtchns(unsigned int cpu, > * a bitset of words which contain pending event bits. The second > * level is a bitset of pending events themselves. > */ > -void xen_evtchn_handle_events(int cpu) > +static void nlevel_handle_events(int cpu) > { > xen_ulong_t pending_words; > int start_word_idx, start_bit_idx; > @@ -313,3 +313,19 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id) > > return IRQ_HANDLED; > } > + > +static const struct evtchn_ops evtchn_ops_nlevel = { > + .bind_to_cpu = nlevel_bind_to_cpu, > + .clear_pending = nlevel_clear_pending, > + .set_pending = nlevel_set_pending, > + .is_pending = nlevel_is_pending, > + .test_and_set_mask = nlevel_test_and_set_mask, > + .mask = nlevel_mask, > + .unmask = nlevel_unmask, > + .handle_events = nlevel_handle_events, > +}; > +How about use 2level_* or default_*? Just my two cents. 
:-)

Wei.

> +void __init xen_evtchn_init_nlevel(void)
> +{
> + evtchn_ops = &evtchn_ops_nlevel;
> +}
> --
> 1.7.2.5
Wei Liu
2013-Aug-16 17:47 UTC
Re: [PATCH 12/12] xen/events: use the FIFO-based ABI if available
On Fri, Aug 09, 2013 at 07:15:21PM +0100, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> If the hypervisor supports the FIFO-based ABI, enable it by
> initializing the control block for the boot VCPU and subsequent VCPUs
> as they are brought up. The event array is expanded as required when
> event ports are setup.
>
> This implementation has some known limitations:
>
> - Migration will not work as the control blocks or event arrays are
> not remapped by Xen at the destination.
>

Is it possible to hook into xen_vcpu_restore to make it work?

> - The timer VIRQ which previously was treated as the highest priority
> event has the default priority.
>

So what's missing from the series is a patch that allows the kernel to actually make use of the priority queues? IMHO it's not just about the timer VIRQ. How do you plan to expose this interface to drivers?

> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/events/Makefile | 1 +
> drivers/xen/events/events.c | 7 +-
> drivers/xen/events/events_internal.h | 1 +
> drivers/xen/events/fifo.c | 335 ++++++++++++++++++++++++++++++++++
> 4 files changed, 343 insertions(+), 1 deletions(-)
> create mode 100644 drivers/xen/events/fifo.c
>
> diff --git a/drivers/xen/events/Makefile b/drivers/xen/events/Makefile
> index aea331e..74644d0 100644
> --- a/drivers/xen/events/Makefile
> +++ b/drivers/xen/events/Makefile
> @@ -1,2 +1,3 @@
> obj-y += events.o
> +obj-y += fifo.o
> obj-y += n-level.o
> diff --git a/drivers/xen/events/events.c b/drivers/xen/events/events.c
> index 55d33d1..2e7416e 100644
> --- a/drivers/xen/events/events.c
> +++ b/drivers/xen/events/events.c
> @@ -1589,8 +1589,13 @@ void xen_callback_vector(void) {}
> void __init xen_init_IRQ(void)
> {
> int i;
> + int ret;
>
> - xen_evtchn_init_nlevel();
> + ret = xen_evtchn_init_fifo();
> + if (ret < 0) {
> + printk(KERN_INFO "xen: falling back to n-level event channels");
> + xen_evtchn_init_nlevel();
> + }
>
> evtchn_to_irq = 
kcalloc(EVTCHN_ROW(xen_evtchn_max_channels()), > sizeof(*evtchn_to_irq), GFP_KERNEL); > diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h > index 9d8b70c..e14f616 100644 > --- a/drivers/xen/events/events_internal.h > +++ b/drivers/xen/events/events_internal.h > @@ -138,5 +138,6 @@ static inline void xen_evtchn_handle_events(int cpu) > } > > void xen_evtchn_init_nlevel(void); > +int xen_evtchn_init_fifo(void); > > #endif /* #ifndef __EVENTS_INTERNAL_H__ */ > diff --git a/drivers/xen/events/fifo.c b/drivers/xen/events/fifo.c > new file mode 100644 > index 0000000..b07a603 > --- /dev/null > +++ b/drivers/xen/events/fifo.c > @@ -0,0 +1,335 @@ > +/* > + * Xen event channels (FIFO-based ABI) > + * > + * Copyright (C) 2013 Citrix Systems R&D ltd. > + * > + * This source code is licensed under the GNU General Public License, > + * Version 2 or later. See the file COPYING for more details. > + */ > + > +#include <linux/linkage.h> > +#include <linux/interrupt.h> > +#include <linux/irq.h> > +#include <linux/module.h> > +#include <linux/smp.h> > +#include <linux/percpu.h> > +#include <linux/cpu.h> > + > +#include <asm/sync_bitops.h> > +#include <asm/xen/hypercall.h> > +#include <asm/xen/hypervisor.h> > +#include <asm/xen/page.h> > + > +#include <xen/xen.h> > +#include <xen/xen-ops.h> > +#include <xen/events.h> > +#include <xen/interface/xen.h> > +#include <xen/interface/event_channel.h> > + > +#include "events_internal.h" > + > +#define EVENT_WORDS_PER_PAGE (PAGE_SIZE / sizeof(event_word_t)) > +#define MAX_EVENT_ARRAY_PAGES ((1 << EVTCHN_FIFO_LINK_BITS) \ > + / EVENT_WORDS_PER_PAGE) > + > +struct evtchn_fifo_queue { > + uint32_t head[EVTCHN_FIFO_MAX_QUEUES]; > +}; > + > +static DEFINE_PER_CPU(struct evtchn_fifo_control_block *, cpu_control_block); > +static DEFINE_PER_CPU(struct evtchn_fifo_queue, cpu_queue); > +static event_word_t *event_array[MAX_EVENT_ARRAY_PAGES]; > +static unsigned event_array_pages; > + > +#define BM(w) ((unsigned 
long *)(w)) > + > +static inline event_word_t *event_word_from_port(int port) > +{ > + int i = port / EVENT_WORDS_PER_PAGE; > + > + if (i >= event_array_pages) > + return NULL; > + return event_array[i] + port % EVENT_WORDS_PER_PAGE; > +} > + > +static unsigned fifo_max_channels(void) > +{ > + return 1 << EVTCHN_FIFO_LINK_BITS; > +} > + > +static unsigned fifo_nr_channels(void) > +{ > + return event_array_pages * EVENT_WORDS_PER_PAGE; > +} > + > +static int fifo_setup(struct irq_info *info) > +{ > + int port = info->evtchn; > + int i; > + int ret = -ENOMEM; > + > + i = port / EVENT_WORDS_PER_PAGE; > + > + if (i >= MAX_EVENT_ARRAY_PAGES) > + return -EINVAL; > + > + while (i >= event_array_pages) { > + struct page *array_page = NULL; > + struct evtchn_expand_array expand_array; > + > + array_page = alloc_page(GFP_KERNEL | __GFP_ZERO); > + if (array_page == NULL) > + goto error; > + > + expand_array.array_mfn = virt_to_mfn(page_address(array_page)); > + > + ret = HYPERVISOR_event_channel_op(EVTCHNOP_expand_array, &expand_array); > + if (ret < 0) { > + __free_page(array_page); > + goto error; > + } > + > + event_array[event_array_pages++] = page_address(array_page); > + } > + return 0; > + > + error: > + if (event_array_pages == 0) > + panic("xen: unable to expand event array with initial page (%d)\n", ret); > + else > + printk(KERN_ERR "xen: unable to expand event array (%d)\n", ret); > + return ret; > +} > + > +static void fifo_bind_to_cpu(struct irq_info *info, int cpu) > +{ > + /* no-op */ > +} > + > +static void fifo_clear_pending(int port) > +{ > + event_word_t *word = event_word_from_port(port); > + sync_clear_bit(EVTCHN_FIFO_PENDING, BM(word)); > +} > + > +static void fifo_set_pending(int port) > +{ > + event_word_t *word = event_word_from_port(port); > + sync_set_bit(EVTCHN_FIFO_PENDING, BM(word)); > +} > + > +static bool fifo_is_pending(int port) > +{ > + event_word_t *word = event_word_from_port(port); > + return sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)); 
> +} > + > +static bool fifo_test_and_set_mask(int port) > +{ > + event_word_t *word = event_word_from_port(port); > + return sync_test_and_set_bit(EVTCHN_FIFO_MASKED, BM(word)); > +} > + > +static void fifo_mask(int port) > +{ > + event_word_t *word = event_word_from_port(port); > + if (word) > + sync_set_bit(EVTCHN_FIFO_MASKED, BM(word)); > +} > + > +static void fifo_unmask(int port) > +{ > + unsigned int cpu = get_cpu(); > + bool do_hypercall = false; > + bool evtchn_pending = false; > + > + BUG_ON(!irqs_disabled()); > + > + if (unlikely((cpu != cpu_from_evtchn(port)))) > + do_hypercall = true; > + else { > + event_word_t *word = event_word_from_port(port); > + > + sync_clear_bit(EVTCHN_FIFO_MASKED, BM(word)); > + evtchn_pending = sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)); > + if (evtchn_pending) > + do_hypercall = true; > + } > + > + if (do_hypercall) { > + struct evtchn_unmask unmask = { .port = port }; > + (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); > + } > + > + put_cpu(); > +} > + > +static uint32_t clear_linked(volatile event_word_t *word) > +{ > + event_word_t n, o, w; > + > + w = *word; > + > + do { > + o = w; > + n = (w & ~((1 << EVTCHN_FIFO_LINKED) | EVTCHN_FIFO_LINK_MASK)); > + } while ( (w = sync_cmpxchg(word, o, n)) != o ); > + > + return w & EVTCHN_FIFO_LINK_MASK; > +} > + > +static void handle_irq_for_port(int port) > +{ > + int irq; > + struct irq_desc *desc; > + > + irq = get_evtchn_to_irq(port); > + if (irq != -1) { > + desc = irq_to_desc(irq); > + if (desc) > + generic_handle_irq_desc(irq, desc); > + } > +} > + > +static void consume_one_event(int cpu, > + struct evtchn_fifo_control_block *control_block, > + int priority, uint32_t *ready) > +{ > + struct evtchn_fifo_queue *q = &per_cpu(cpu_queue, cpu); > + uint32_t head; > + int port; > + event_word_t *word; > + > + head = q->head[priority]; > + > + /* Reached the tail last time? Read the new HEAD from the > + control block. 
*/ > + if (head == 0) { > + rmb(); /* Ensure word is up-to-date before reading head. */ > + head = control_block->head[priority]; > + } > + > + port = head; > + word = event_word_from_port(port); > + head = clear_linked(word); > + > + /* > + * If the link is non-zero, there are more events in the > + * queue, otherwise the queue is empty. > + * > + * If the queue is empty, clear this priority from our local > + * copy of the ready word. > + */ > + if (head == 0) > + clear_bit(priority, BM(ready)); > + > + if (sync_test_bit(EVTCHN_FIFO_PENDING, BM(word)) > + && !sync_test_bit(EVTCHN_FIFO_MASKED, BM(word))) > + handle_irq_for_port(port); > + > + q->head[priority] = head; > +} > + > +#define EVTCHN_FIFO_READY_MASK ((1 << EVTCHN_FIFO_MAX_QUEUES) - 1) > + > +static void fifo_handle_events(int cpu) > +{ > + struct evtchn_fifo_control_block *control_block; > + uint32_t ready; > + int q; > + > + control_block = per_cpu(cpu_control_block, cpu); > + > + ready = xchg(&control_block->ready, 0); > + > + while (ready & EVTCHN_FIFO_READY_MASK) { > + q = find_first_bit(BM(&ready), EVTCHN_FIFO_MAX_QUEUES); > + consume_one_event(cpu, control_block, q, &ready); > + ready |= xchg(&control_block->ready, 0); > + } > +} > + > +static const struct evtchn_ops evtchn_ops_fifo = { > + .max_channels = fifo_max_channels, > + .nr_channels = fifo_nr_channels, > + .setup = fifo_setup, > + .bind_to_cpu = fifo_bind_to_cpu, > + .clear_pending = fifo_clear_pending, > + .set_pending = fifo_set_pending, > + .is_pending = fifo_is_pending, > + .test_and_set_mask = fifo_test_and_set_mask, > + .mask = fifo_mask, > + .unmask = fifo_unmask, > + .handle_events = fifo_handle_events, > +}; > + > +static int __cpuinit fifo_init_control_block(int cpu) > +{ > + struct page *control_block = NULL; > + struct evtchn_init_control init_control; > + int ret = -ENOMEM; > + > + control_block = alloc_page(GFP_KERNEL|__GFP_ZERO); > + if (control_block == NULL) > + goto error; > + > + init_control.control_mfn = 
virt_to_mfn(page_address(control_block));
> + init_control.offset = 0;
> + init_control.vcpu = cpu;
> +
> + ret = HYPERVISOR_event_channel_op(EVTCHNOP_init_control, &init_control);
> + if (ret < 0)
> + goto error;
> +
> + per_cpu(cpu_control_block, cpu) = page_address(control_block);
> +
> + return 0;
> +
> + error:
> + __free_page(control_block);
> + return ret;
> +}
> +
> +static int __cpuinit fifo_cpu_notification(struct notifier_block *self,
> + unsigned long action, void *hcpu)
> +{
> + int cpu = (long)hcpu;
> + int ret = 0;
> +
> + switch (action) {
> + case CPU_UP_PREPARE:
> + ret = fifo_init_control_block(cpu);
> + break;

On the hypervisor side fifo_init_control_block would return -EINVAL if this CPU has a previously mapped control block. Do you need to tear down the control block when you offline a CPU? (That would mean another sub-op for the interface.)

> + default:
> + break;
> + }
> + return ret < 0 ? NOTIFY_BAD : NOTIFY_OK;
> +}
> +
> +static struct notifier_block fifo_cpu_notifier __cpuinitdata = {
> + .notifier_call = fifo_cpu_notification,
> +};
> +
> +
> +int __init xen_evtchn_init_fifo(void)
> +{
> + int cpu = get_cpu();
> + int ret;
> +
> + ret = fifo_init_control_block(cpu);
> + if (ret < 0)
> + goto error;
> +
> + printk(KERN_INFO "xen: switching to FIFO-based event channels\n");
> +
> + evtchn_ops = &evtchn_ops_fifo;
> +
> + register_cpu_notifier(&fifo_cpu_notifier);
> +
> + put_cpu();
> + return 0;
> +
> + error:
> + put_cpu();
> + return ret;

Could be written as:

	if (ret < 0)
		goto out;
	...
	register_cpu_notifier();
	ret = 0;
out:
	put_cpu();
	return ret;

It is only a matter of personal taste though.

Wei.

> +}
> --
> 1.7.2.5
David Vrabel
2013-Aug-19 10:39 UTC
Re: [PATCH 07/12] xen/events: add struct evtchn_ops for the low-level port operations
On 16/08/13 18:47, Wei Liu wrote:
> On Fri, Aug 09, 2013 at 07:15:16PM +0100, David Vrabel wrote:
>>
>> + xen_evtchn_init_nlevel();
>> +
>
> There is only 2-level ABI, more importantly, it is the default ABI, so
> use xen_evtchn_init_default instead?

It's not the default: FIFO is the default. N-level is the fallback for when the hypervisor doesn't support FIFO.

>> +
>> +static const struct evtchn_ops evtchn_ops_nlevel = {
>> + .bind_to_cpu = nlevel_bind_to_cpu,
>> + .clear_pending = nlevel_clear_pending,
>> + .set_pending = nlevel_set_pending,
>> + .is_pending = nlevel_is_pending,
>> + .test_and_set_mask = nlevel_test_and_set_mask,
>> + .mask = nlevel_mask,
>> + .unmask = nlevel_unmask,
>> + .handle_events = nlevel_handle_events,
>> +};
>> +
>
> How about use 2level_* or default_*?

Identifiers can't start with numbers.

David
Wei Liu
2013-Aug-19 10:52 UTC
Re: [PATCH 07/12] xen/events: add struct evtchn_ops for the low-level port operations
On Mon, Aug 19, 2013 at 11:39:00AM +0100, David Vrabel wrote:
> On 16/08/13 18:47, Wei Liu wrote:
> > On Fri, Aug 09, 2013 at 07:15:16PM +0100, David Vrabel wrote:
> >>
> >> + xen_evtchn_init_nlevel();
> >> +
> >
> > There is only 2-level ABI, more importantly, it is the default ABI, so
> > use xen_evtchn_init_default instead?
>
> It's not the default, FIFO is the default. N-level is the fallback for
> when the hypervisor doesn't support FIFO.
>

Makes sense.

> >> +
> >> +static const struct evtchn_ops evtchn_ops_nlevel = {
> >> + .bind_to_cpu = nlevel_bind_to_cpu,
> >> + .clear_pending = nlevel_clear_pending,
> >> + .set_pending = nlevel_set_pending,
> >> + .is_pending = nlevel_is_pending,
> >> + .test_and_set_mask = nlevel_test_and_set_mask,
> >> + .mask = nlevel_mask,
> >> + .unmask = nlevel_unmask,
> >> + .handle_events = nlevel_handle_events,
> >> +};
> >> +
> >
> > How about use 2level_* or default_*?
>
> Identifiers can't start with numbers.
>

Of course. :-)

Wei.

> David
David Vrabel
2013-Aug-19 11:55 UTC
Re: [PATCH 12/12] xen/events: use the FIFO-based ABI if available
On 16/08/13 18:47, Wei Liu wrote:
> On Fri, Aug 09, 2013 at 07:15:21PM +0100, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> If the hypervisor supports the FIFO-based ABI, enable it by
>> initializing the control block for the boot VCPU and subsequent VCPUs
>> as they are brought up. The event array is expanded as required when
>> event ports are setup.
>>
>> This implementation has some known limitations:

... because I haven't implemented them yet.

>> - Migration will not work as the control blocks or event arrays are
>> not remapped by Xen at the destination.
>>
>
> Is it possible to hook into xen_vcpu_restore to make it work?

Yes, I think so.

>> - The timer VIRQ which previously was treated as the highest priority
>> event has the default priority.
>>
>
> So what's missing from the series is a patch that allows the kernel to
> actually make use of the priority queues? IMHO it's not just about the
> timer VIRQ. How do you plan to expose this interface to drivers?

There will be a Xen-specific xen_evtchn_set_priority() call. I don't think it can be usefully exposed to non-Xen-specific hardware drivers using PIRQs. I expect to use this for boosting the priority of the timer interrupt only.

>> +static int __cpuinit fifo_cpu_notification(struct notifier_block *self,
>> + unsigned long action, void *hcpu)
>> +{
>> + int cpu = (long)hcpu;
>> + int ret = 0;
>> +
>> + switch (action) {
>> + case CPU_UP_PREPARE:
>> + ret = fifo_init_control_block(cpu);
>> + break;
>
> On the hypervisor side fifo_init_control_block would return -EINVAL if
> this CPU has a previously mapped control block.

I need to look into this -- CPU hotplug is not something I have really looked at or tried yet. We probably need to check whether the control block has already been set up.

> Do you need to tear down the control block when you offline a CPU? (That
> would mean another sub-op for the interface)

None of the other per-VCPU state appears to be cleaned up when a guest offlines a VCPU.
e.g., there is a VCPUOP_register_vcpu_info but no VCPUOP_unregister_vcpu_info. David
David Vrabel
2013-Sep-05 17:28 UTC
Re: [PATCH 01/12] xen/events: refactor retrigger_dynirq() and resend_irq_on_evtchn()
On 16/08/13 18:46, Wei Liu wrote:
> On Fri, Aug 09, 2013 at 07:15:10PM +0100, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> These two function did the same thing with different parameters, put
>> the common bits in retrigger_evtchn().
>>
>> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>> ---
>> drivers/xen/events.c | 27 +++++++++------------------
>> 1 files changed, 9 insertions(+), 18 deletions(-)
>>
>> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
>> index a58ac43..1697cf2 100644
>> --- a/drivers/xen/events.c
>> +++ b/drivers/xen/events.c
>> @@ -1529,13 +1529,13 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
>> return rebind_irq_to_cpu(data->irq, tcpu);
>> }
>>
>> -int resend_irq_on_evtchn(unsigned int irq)
>> +static int retrigger_evtchn(int evtchn)
>> {
>> - int masked, evtchn = evtchn_from_irq(irq);
>> + int masked;
>> struct shared_info *s = HYPERVISOR_shared_info;
>>
>> if (!VALID_EVTCHN(evtchn))
>> - return 1;
>> + return 0;
>>
>
> This looks weird. Either resend_irq_on_evtchn was bogus in the first place or
> resend_irq_on_evtchn is genuinely different from retrigger_dynirq.

resend_irq_on_evtchn() is only called from one place in the ia64 code, and its return value is ignored. I shall note this in the commit message.

David